Happy Monday! Send news tips and summer podcast recommendations to: cristiano.lima@washpost.com.
Below: The Microsoft-Activision deal is back in the hands of the U.K.'s antitrust watchdog, and a key OpenAI employee departs. First:

Twitter rival Mastodon is rife with child abuse material, study finds

Mastodon has emerged as a Twitter competitor popular in Silicon Valley. (Barbara Ortutay/AP)

A new report has found rampant child sexual abuse material on Mastodon, a social media site that has gained popularity in recent months as an alternative to platforms like Twitter and Instagram. Researchers say the findings raise major questions about the effectiveness of safety efforts on so-called "decentralized" platforms, which let users join independently run communities that set their own moderation rules, particularly when it comes to the internet's most vile content.

During a two-day test, researchers at the Stanford Internet Observatory found over 600 pieces of known or suspected child abuse material across some of Mastodon's most popular networks, according to a report shared exclusively with The Technology 202. The researchers found their first piece of content containing child exploitation within about five minutes, and went on to uncover roughly 2,000 uses of hashtags associated with such material. David Thiel, one of the report's authors, called it an unprecedented sum.

"We got more PhotoDNA hits in a two-day period than we've probably had in the entire history of our organization of doing any kind of social media analysis, and it's not even close," said Thiel, referring to a technique that identifies known abuse imagery by matching images' unique digital signatures.

Policymakers for years have called on prominent social media platforms like Instagram, TikTok and Twitter to take greater steps to stem the tide of child abuse material online, criticizing tech companies for not devoting enough resources to enforcing rules against such content.
But as millions of users seek out substitutes to those sites, the findings released Monday underscore the significant structural challenges facing platforms that don't rely on a single company to set and enforce policies against illegal or harmful content.

Mastodon is what's known as a "federated" social media platform, where users can join servers, or "instances," that are separate but interconnected, allowing them to view content from other communities while adhering to their own network's rules. That model has been billed as more open and democratizing than the centralized approach of giants like Twitter and Facebook.

But the report's results highlight the hurdles moderators on those platforms face in tackling harmful material, Thiel said, including "fairly primitive" tools to detect and escalate reports of child abuse material, as well as limited volunteer moderation teams. "A lot of it is just a result of what seems to be a lack of tooling that centralized social media platforms use to address child safety concerns," he said.

To tackle some of those obstacles, the researchers wrote, decentralized platforms like Mastodon may need to borrow some of the strategies and tools used by their larger peers. "Investment in one or more centralized clearinghouses for performing content scanning (as well as investment in moderation tooling) would be beneficial to the Fediverse as a whole," Thiel and co-author Renée DiResta wrote, referring to the so-called federated universe of platforms.

Another challenge for decentralized platforms, Thiel said, is that pockets of dangerous content can emerge in communities with more lax guidelines.
A significant portion of the child abuse material researchers uncovered came from networks based in Japan, which has "significantly more lax laws" that "exclude computer-generated content as well as manga and anime," according to the report. "We found that on one of the largest Mastodon instances in the Fediverse (based in Japan), 11 of the top 20 most commonly used hashtags were related to pedophilia," the researchers wrote.

With recent rapid advances in artificial intelligence technology, the researchers also reported finding a spike in computer-generated child abuse material. Thiel called it "a picture of things to come" and said companies and officials have yet to fully tackle the problem. "It's an issue that has never really been fully addressed by either regulators or tech platforms as a whole. … It's not really so cleanly delineated," he said.

The findings could also pose fresh questions for policymakers globally, especially as Europe imposes sweeping new online safety rules with greater obligations for major platforms. "The policies that people are trying to come up with to pressure large platforms … [will] have to take that vastly different nature of that network into account," Thiel said.

Our top tabs

Fate of Microsoft-Activision deal is back in hands of U.K.'s antitrust regulator

The fate of Microsoft's planned $69 billion purchase of video game company Activision Blizzard is back in the hands of the U.K.'s antitrust watchdog, Sarah Young, Paul Sandle and Sam Tobin report for Reuters. Britain's Competition and Markets Authority (CMA) said it could reach a provisional decision on whether to greenlight the transaction by the week of Aug. 7.
"Having initially blocked the $69 billion deal in April over concerns about its impact on competition in the cloud gaming market, the CMA has since reopened the file, after it was left increasingly isolated among world regulators in its opposition," according to the report. "Explaining why the deal should now be given the green light, Microsoft argued that the binding commitments accepted by the European Union shortly after Britain had blocked the deal changed matters," the report adds, citing published court documents. Those binding commitments in the E.U. include allowing Activision games to be streamed for 10 years after the deal closes. Microsoft also made agreements with hardware maker Nvidia, as well as cloud gaming providers Boosteroid and Ubitus.

School districts join lawsuits alleging social media harms to kids

"Nearly 200 school districts so far have joined the litigation against the parent companies of Facebook, TikTok, Snapchat and YouTube," Randazzo and Tracy write. "The suits have been consolidated in the U.S. District Court in Oakland, Calif., along with hundreds of suits by families alleging harms to their children from social media." The school districts could face challenges when a judge later this year is expected to consider a request from the social media platforms to dismiss the cases on the grounds that they are protected by Section 230, the legal liability shield that generally prevents internet companies from being sued over third-party content on their platforms, according to the report.
However, the school districts and families "contend that the social-media companies have created an addictive product that pushes destructive content to youth — and that a product, unlike content, doesn't enjoy Section 230 protections," Randazzo and Tracy write.

OpenAI head of trust and safety is leaving

OpenAI's head of trust and safety, Dave Willner, is stepping down from his role, Clare Duffy reports for CNN, citing a Thursday LinkedIn post. Willner is moving into an advisory role to spend more time with family. The departure comes as "OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society" amid the growing success of its ChatGPT product, Duffy writes. It also comes as the company, alongside other major tech companies, agreed to voluntary artificial intelligence safety commitments with the White House. The CNN report adds: "OpenAI's Chief Technology Officer Mira Murati will become the trust and safety team's interim manager and Willner will advise the team through the end of this year, according to the company."

Rant and rave

Twitter reacts to Elon Musk bidding farewell to the platform's bird logo, with commentary from tech journalist Kara Swisher, New York Times tech reporter Ryan Mac and Platformer's Zoë Schiffer.

That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.