Happy Tuesday! We're filling in for Cristiano today. You can reach us at will.oremus@washpost.com and naomi.nix@washpost.com.
Below: The Justice Department asks a court to delay a social media order, and senators attend their first classified AI briefing. First:

Meta has its own ideas about AI regulation

Nick Clegg, Meta's president of global affairs, argued that keeping AI models "under lock and key" is misguided. (Kirsty Wigglesworth/AP)

Back in May, when the White House summoned the CEOs of four top artificial intelligence companies to talk AI policy with Vice President Harris, Meta's Mark Zuckerberg wasn't invited. Later that month, the Senate Judiciary Committee held a high-profile hearing on AI oversight with leaders from OpenAI and IBM — but not Meta.

What Zuckerberg did get from Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) in June was a letter expressing concern over Meta's open-source LLaMA language model and its potential for misuse. (Will and our colleague Pranshu Verma wrote last month about how people are using LLaMA for applications ranging from drug discovery to sexually explicit chatbots.) The senators wanted to know how details of the model's workings, initially intended for use by registered academic researchers, had leaked to the public, and what steps the company would take to keep future models under wraps.

But Meta has its own vision for how AI should be regulated — one in which openness is viewed more as a virtue than a threat. And it's increasingly pushing that view in Washington and beyond.

In an interview with The Technology 202 on Tuesday, Meta President of Global Affairs Nick Clegg argued that keeping AI models "under lock and key" is misguided — and that the industry doesn't need a special licensing regime. He said the "existential threats" supposedly posed by supersmart AI systems are far-off and hypothetical.

"No one thinks the kind of models that we're looking at [with] LLaMA one or LLaMA version two are even remotely knocking on the door of these kind of high-capability [AI models] that might require some specialized regulatory licensing treatment," Clegg said.

The interview came as Meta seeks to establish itself as a champion of open-source approaches to AI with regulators in key markets. This morning, the Financial Times published a Clegg op-ed calling for AI policy that prioritizes transparency, self-regulation and independent testing of AI models rather than government controls. And Clegg told us he has spoken with people in the administration and White House, European commissioners and Senate Majority Leader Charles E. Schumer (D-N.Y.) about AI policy issues.

While Clegg's comments were light on specific policy prescriptions, they arrive at a moment when Meta appears to be playing catch-up to rivals such as Google and Microsoft in the generative AI boom. In recent weeks, Zuckerberg and other executives have been touting the company's latest innovations, such as an internal productivity assistant, a generative AI-based advertising product and a new photo-generation tool. But its most significant contribution to the generative AI world so far is almost certainly LLaMA, which Clegg said has demonstrated the value of "open innovation" as developers experiment freely with it.

While Clegg said Meta "obviously" didn't intend for the whole LLaMA model to become public, he said the proliferation of experimental AI tools it has sparked "should give people some confidence that open innovation does lead to insights that aren't necessarily going to be arrived at by Big Tech companies … working on these things on their own."
Clegg also addressed Instagram chief Adam Mosseri's vision for Threads, Meta's fast-growing new Twitter rival. Mosseri stirred debate on Friday by saying that, unlike Twitter, Meta won't "do anything to encourage" discussions of hard news and politics on Threads.

Asked if that means Meta plans to use its algorithms to filter out or reduce the distribution of political content, Clegg said that wasn't his understanding.

"Are we going to suppress and censor anyone who wants to talk about politics and current affairs? Of course not," Clegg said. "That would be absurd."

But he said the company probably wouldn't go out of its way to "massively boost" news on Threads or create a special tab for it in the app, which he said would be "not in line with the kind of friendly, respectful, fun ethos" that has attracted users to Threads so far.

Rather than "algorithmic editorializing," Clegg said Threads plans to rely on "greater use of individual controls" for content moderation across its various apps. "I hope over time we'll have less of a discussion about what our big crude algorithmic choices are and more about whether you guys feel that the individual controls we're giving you on Threads feel meaningful to you."

Speaking of content moderation, Clegg downplayed the impact of a Louisiana federal judge's injunction restricting communications between the Biden administration and social media companies. "I should stress I don't think it will remotely affect our long-term, industry-leading capability to allow elections to be conducted safely and openly on our platforms," he said.

Clegg noted that the vast majority of Meta's users are outside the United States and that the company is constantly updating its election playbook. "I don't want to belittle the discussions we obviously have, like any private sector company has, with whoever's in charge at any one time in D.C.," he said. "But at the same time, I also want to provide context. This is ongoing work we've done over years and do so on a global basis."

Our top tabs

Justice Department asks appeals court to delay judge's social media order

The Justice Department argues the preliminary injunction could chill law enforcement activity to protect national security interests. (Andrew Harnik/AP)

The Justice Department on Monday asked the U.S. Court of Appeals for the 5th Circuit to delay a preliminary injunction that places extraordinary limits on how the United States can communicate with social media platforms, our colleagues Cat Zakrzewski and Tim Starks report.

The request came shortly after U.S. District Judge Terry A. Doughty denied the agency's request for a delay. The Justice Department argues the preliminary injunction, first imposed on July 4, "could chill law enforcement activity to protect national security interests," Cat and Tim write. "In its filing, the Justice Department warned that the injunction could bar a wide swath of communications between the government and the tech industry, stopping the president, for instance, from denouncing misinformation about a natural disaster circulating online," their report adds.

In rejecting the Justice Department's request, Doughty argued that the order includes exceptions for communications concerning criminal activities, cyberattacks, national security threats and foreign election interference attempts.
But the Justice Department argues that those exceptions don't answer the questions that may arise and that the order limits the government's ability to speak freely to make certain that misinformation is corrected.

Senators to participate in first classified AI briefing

Majority Leader Chuck Schumer (D-N.Y.) is among the senators hosting the meeting. (Minh Connors/The Washington Post)

The Senate will participate in its first classified artificial intelligence briefing Tuesday at 3 p.m. in a sensitive compartmented information facility (SCIF) in the U.S. Capitol.

Attendees will include Director of National Intelligence Avril Haines, Deputy Secretary of Defense Kathleen Hicks, White House Office of Science and Technology Policy Director Arati Prabhakar, National Geospatial-Intelligence Agency Director Trey Whitworth and the Defense Department's chief digital and AI officer, Craig Martell, according to Senate Majority Leader Chuck Schumer (D-N.Y.) spokesperson Allison Biasotti. Senators are also expected to hear from other Defense Department and intelligence community representatives.

The briefing is the second of three planned meetings hosted by Sens. Schumer, Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.) and Todd C. Young (R-Ind.). The Senate held its first all-member briefing on AI last month. Schumer a week later outlined a congressional AI regulation plan as President Biden met with AI experts in San Francisco to discuss the technology's opportunities and risks. Several lawmakers have said it would take months before legislation regulating the emerging technology is introduced.

2024 presidential candidates under pressure to decline money from Big Tech donors

A bipartisan group of tech critics Tuesday launched a pledge aimed at curbing Silicon Valley's influence over 2024 presidential candidates. (Carlos Barria/Reuters)

A bipartisan group of tech critics Tuesday launched a "No Big Tech Money" pledge aimed at curbing Silicon Valley's influence over 2024 presidential candidates, our colleague Cat Zakrzewski reports.

Cat writes: "Candidates who take the pledge agree not to take donations of more than $200 from the Political Action Committees, executives or lobbyists of Apple, Amazon, Microsoft, Google parent company Alphabet or Facebook parent company Meta. The initiative, which received initial funding from the liberal organization Way to Win, wants to attract signatures from candidates running for the White House and Congress from both political parties." (Amazon founder Jeff Bezos owns The Washington Post. Interim CEO Patty Stonesifer sits on Amazon's board.)

"The group's board is composed of several prominent tech critics from both political parties. Dan Geldon, who served as chief of staff for Elizabeth Warren's presidential bid, and Jeff Long, an attorney who has previously advised Republican senators and the FTC, serve alongside Sacha Haworth, the executive director of the anti-monopoly Tech Oversight Project," Cat's report adds.

Daybook

- Former OSTP deputy director Alondra Nelson speaks at a Washington Post Live event on AI and its impact on the workforce at 2 p.m.
- The Bipartisan Policy Center convenes a discussion on facial recognition technologies at 2 p.m.
- The Senate Commerce Committee votes on FCC nominee Anna Gomez and others tomorrow at 10 a.m.
- The Center for Strategic and International Studies holds a discussion on the strategic implications of cloud computing tomorrow at 1:30 p.m.
- The Senate Judiciary Committee convenes a hearing on AI, intellectual property and copyright tomorrow at 3 p.m.
Before you log off

That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.