By MARK SCOTT
IT’S A NEW MONTH. But Digital Bridge remains a constant. I’m Mark Scott, POLITICO’s Chief Technology Correspondent, and I bring terrifying news. Somewhere, out there, is an unleashed Roomba automated vacuum robot — and it has no known predators. Save yourselves!
It’s a doozy this week. Buckle up:
— Meet the shit-posting, meme-using internet culture warriors taking on Russian disinformation and raising funds for Ukraine.
— Julie Inman Grant, Australia’s eSafety commissioner, doesn’t think social media platforms are doing enough to stop vile content.
— California just passed a new regulatory code to keep kids safe online. It’s a mirror of what the United Kingdom has already done.
NAFO: NORTH ATLANTIC FELLAS ORGANIZATION
A BUNCH OF INTERNET CULTURE WARRIORS bombarding Russian diplomats and sympathizers on Twitter with dog-style memes is not how I thought things would shake out after Russia invaded Ukraine six months ago. But since May, the so-called North Atlantic Fellas Organization, or NAFO — a rag-tag band of average social media users — has turned into a global movement that has taken the disinformation war to the Kremlin online.
Confused? So was I. What’s happened is that a Twitter movement to raise funds for Ukrainian front-line troops has morphed into a decentralized mob of well-intentioned (if that’s a thing) internet trolls willing to pick fights with those peddling Russian falsehoods about the war. Whenever someone donates money, directly, to Kyiv’s war effort — and proves it via a screenshot — they can ask NAFO for a bespoke avatar of a Shiba Inu, a Japanese dog breed that became an internet sensation a decade ago and is referred to as a “doge” in internet culture.
“Most of this was accidental,” Kamil (whose last name POLITICO is withholding for security reasons) told me. The 27-year-old Pole, who goes by @Kama_Kamilia on Twitter, posted the first NAFO tweet in late May, mostly for fun. He then kept doing it to laugh at Russia’s war effort. Others quickly followed suit, similarly using humor and a heavy dose of internet culture to undermine the Kremlin’s official narrative. Twitter users then asked Kamil to also make them “dog” avatars, and the movement began.
If you scroll through the hashtags #NAFO or #NAFOFellas, you’ll get a crash course in how internet culture can be weaponized for a common cause. Just as the Islamic State uses TikTok-style videos to engage would-be followers or the American far-right boogaloo movement has adopted “Pepe the Frog” to ridicule mainstream culture, so, too, have the NAFO fellas, as they prefer to be called, co-opted the “doge” meme as their weapon of choice to go after Russia.
What started out as a niche response to the war in Eastern Europe has quickly turned into something of a big deal. This week, both Ukraine’s defense ministry and its minister, Oleksiy Reznikov, gave NAFO their backing. Reznikov even changed his Twitter avatar into a Shiba Inu, wearing a suit, sporting a shield and standing in front of a burnt-out bridge. Other high-profile fellas include US Congressman Adam Kinzinger and former Estonian President Toomas Hendrik Ilves.
This all sounds like a lot of fun, if that’s what you’re into. But there’s a serious political idea brewing under the surface. Until now, the West’s response to Russian disinformation has mostly consisted of dry reports and bland public statements. The Kremlin has undercut those efforts by relying on internet trolls and underhanded tactics to peddle its messages. NAFO has fought back, successfully, with humor, inclusiveness and the power of the crowd.
“One of the funniest things about the fella character is that if you’re tweeting at one of these Russian government accounts or sycophants and they respond, now they’re engaging with a cartoon dog,” Matthew, a former US Marine and one of the leaders of the movement, told me. He, too, declined to give his last name. But for all the shit-posting, NAFO’s goal remains to send money to front-line troops.
So far, it has raised more than $400,000 via direct contributions and the sale of merchandise, including “doge-inspired” T-shirts and mugs. “The dog was very funny. That’s what caught my attention,” Matthew, the former American serviceman, added. “But what really kept my attention was the idea of raising money for people who are actually fighting.” Here’s my full story on NAFO.
HOW TO POLICE ONLINE CHILD SEXUAL EXPLOITATION CONTENT?
JULIE INMAN GRANT HAS HAD ENOUGH. The former Microsoft executive, who took over as Australia’s eSafety commissioner in 2017, sent legal demands this week to five social networks (Meta, Microsoft, Apple, Snap and Omegle, a niche anonymous chat service) ordering them to cough up more details about how they tackle online child sexual exploitation material. She’s using new powers that came into force last year that give the country’s online content regulator more sway in holding companies’ feet to the fire.
“Every company should have a zero-tolerance policy around having their platforms weaponized in that way, for either the proliferation, the hosting, or the livestreaming of this material,” she told me from Sydney. “If they’re not doing enough to proactively detect, prevent and remove this (content), then what else are they letting happen on their platforms?” The companies now have 28 days to respond to her demands or else face potential daily fines of $380,000.
FWIW, almost all of these firms provide some form of public transparency about how they handle this vile content. Meta confirmed it had received the legal notice, while representatives for the other companies did not respond to requests for comment. No one wants such material to appear on platforms like Instagram or Microsoft’s Skype (which Inman Grant told me is often used to livestream child pornography from the Philippines). This should be a no-brainer.
But what the Australians and other countries are finding out is that it’s not that simple. Sure, demanding the end of online child sexual exploitation is a slam dunk. But that also requires policing content that’s shared on encrypted services like WhatsApp or Apple’s iMessage. That has pitted governments and online safety campaigners against privacy advocates who, legitimately, claim you can’t create so-called backdoors for these popular digital services.
If you want to see how bad that can go, just remember the heat Apple faced when it proposed scanning its services like iCloud for such heinous material. Personally, I thought its approach was pretty balanced: it upheld people’s privacy rights while using so-called hashing tools to anonymously scan photos and videos for harmful content. But I’m in the minority, and the iPhone maker soon scrapped the idea after a global backlash.
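For readers unfamiliar with how hash-matching works, here is a minimal sketch. It uses a plain SHA-256 digest as a stand-in for the perceptual hashes (such as Microsoft’s PhotoDNA) that platforms actually deploy, and the “database” of known-bad fingerprints is entirely hypothetical:

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded file.

    Real systems use perceptual hashes (e.g. PhotoDNA) that survive
    resizing and re-encoding; an exact cryptographic hash is used here
    only to keep the sketch short.
    """
    return hashlib.sha256(image_bytes).hexdigest()


# Hypothetical database of fingerprints of known illegal images, of the
# kind maintained by clearinghouses. The entry here is fabricated from
# placeholder bytes purely for demonstration.
KNOWN_BAD_HASHES = {fingerprint(b"placeholder-known-bad-image")}


def should_flag(image_bytes: bytes) -> bool:
    """Flag an upload for human review if its fingerprint matches the
    database -- the image content itself is never read or stored by
    the matching step, only its hash."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

The privacy argument rests on that last point: the scanner only learns whether a fingerprint matches a pre-vetted list, not what any non-matching photo contains.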
This debate is not going away. The European Union is working on proposals to require companies to scan for child sexual exploitation material. The UK and Canada, as part of their separate content proposals, are doing something similar. Most Western governments (excluding Germany) want some technical ability to track encrypted services. “I do see it as the responsibility of the platforms that are using this technology to also develop some of the tools that can help uncover illegal activity when it’s happening while preserving privacy and safety,” Inman Grant said when I asked where she stood on the debate.
Expect greater global coordination on this and other content moderation matters as more countries follow Australia’s lead to create (or rebrand) agencies to be in charge of what happens on social media. Inman Grant said plans were underway later this year to create a “global regulators network” of like-minded officials to swap notes similar to what already exists for data protection enforcement. “We’re talking to the Irish. We’re talking to the Canadians. We’re talking to the White House,” she said.
BY THE NUMBERS
CALIFORNIA COPIES THE UK ON ONLINE SAFETY
DIGITAL BRIDGE IS GOING DEEP ON children’s safety this week. Next up is new legislation just passed, known as the California Age-Appropriate Design Code Act, that forces tech companies to take greater precautions on how under-18-year-olds use their services. That includes baking in design choices to limit how addictive the likes of TikTok or YouTube can be; setting the highest privacy settings as default for minors; and outlawing the collection of these users’ exact locations for advertising. The Golden State’s governor, Gavin Newsom, still needs to approve the proposals, but that should be a rubber-stamp exercise.
To see how this will shake out, you only have to look at the UK, whose separate age-appropriate design code was the basis for what just got passed in California. For more on that, check out Beeban Kidron, the British lawmaker behind both the London and Sacramento legislation, writing in Digital Bridge in July. The UK’s code came into force last September, and tech companies have similarly upgraded their services to meet almost identical proposals soon to be in place on the West Coast. Also, one to watch: Britain’s privacy regulator, which is in charge of enforcing the rules, is expected to announce the first fines linked to the country’s code by the end of the week.
WONK OF THE WEEK
I’M NOT SURE BENEDICT EVANS, whose newsletter has more than 100,000 subscribers, is obscure enough to make this section. But since returning to the UK from his role at Andreessen Horowitz, the California venture capital giant, the technology pundit has increasingly delved into the world of policymaking — often poking holes in the impracticalities of governments’ digital proposals.
It’s fair to say he’s more on the corporate side of this debate — although he’s not averse to throwing shade at the ridiculous nature of the tech industry. But he’s given evidence to several countries’ parliaments on the role of digital, is well-read by techies and politicians alike, and his skeptical view on the need for more rules has caught some officials’ attention.
“We will regulate text just as we regulate cars,” Evans posted on Twitter after California passed its new online rulebook for kids (see above section). “But way too many politicians still push for laws that are the equivalent of demanding gasoline that doesn’t burn, or telling GM to stop people from driving drunk.”
THEY SAID WHAT, NOW?
“We must build the national capacity to defend against, and recover from, cyberattacks,” Jen Easterly, head of the United States’ Cybersecurity and Infrastructure Security Agency, wrote in the body’s strategic plan for 2023-2025. “We must work with federal partners to bolster their cybersecurity and incident response postures and safeguard the federal civilian executive branch networks that support our nation’s essential operations.”
WHAT I’M READING
— I’m not a fan of the term “killer robots.” But Mariarosaria Taddeo and Alexander Blanchard have put together a comprehensive definition of what actually constitutes an autonomous weapons system for Science and Engineering Ethics.
— US legislation to force the likes of Google and Facebook to pay publishers whenever their content appears on these platforms is winding its way through Congress, according to Rick Edmonds for the Poynter Institute. Read the bill here.
— Researchers at Bertelsmann Stiftung have crunched the numbers on which countries have the highest number of high-tech patents. It’s a long report, but worth the read.
— The EU’s AI Act is stalled. But that does not mean it won’t have a halo effect on how governments and companies approach the policing of this emerging technology, claim Charlotte Siegmann and Markus Anderljung for the Center for the Governance of AI.
— A growing number of media outlets in the Czech Republic and Slovakia are spouting claims favoring the Chinese Communist Party, echoing how Russian propaganda has spread within those Eastern European countries, according to an analysis by MapInfluence.
— The UK’s online content proposals need to be updated so that they do not overreach to harm people’s legitimate freedom of speech and ability to remain anonymous online, argue Kir Nuthi and Mella Tesfazgi for the Center for Data Innovation.