Welcome to The Cybersecurity 202! I accidentally got sucked into a rabbit hole of The Oatmeal cartoons yesterday. I never regret it. Was this forwarded to you? Sign up here.

Below: T-Mobile discloses another hack, and an appeals court rules on whether NotPetya was an act of war. First:

U.S. officials say AI will be a big cyberthreat. How it'll materialize is less clear.

Explorations of the limits of AI projects like ChatGPT are at the beginning stage. (Lionel Bonaventure/AFP/Getty Images)

Officials across the federal government are sounding alarms about the cybersecurity threats that artificial intelligence will pose, even as they acknowledge they're not sure how, precisely, that threat will manifest.

Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, called AI one of two era-defining cyber challenges, alongside China. Rob Joyce, director of cybersecurity for the National Security Agency, called AI a "game-changing technology that's emerging." And that was just last week.

Still, Joyce said he doesn't expect to have many examples of how adversaries are exploiting AI until next year. And Easterly, cautioning that she's not an expert on AI, said she's more concerned about how the technology is developing without safeguards than she is certain about what specific kinds of threats will arise from it.

Congress, too, is starting to take a closer look at how to get ahead of the potential cyberthreat of AI. But all of the efforts to tackle the AI threat are at the beginning stage, just as explorations of the limits of AI projects like ChatGPT are.

It's not that Joyce and Easterly lack examples of what worries them. Easterly, for example, told me during a recent MeriTalk/Axonius event interview that she worries about the "weaponization" of AI.

"Cyberweapons, biotech weapons, kinetic weapons," she said. "I worry about foreign influence operations, particularly with our upcoming 2024 election."

Testifying last week before a House Homeland Security Committee panel, Easterly called AI one of two "epoch-defining threats and challenges," the other being China. She also said she worries about what U.S. adversaries who lack the United States' values will do with AI.

"AI will also be the most powerful weapons of this century; the most powerful weapons of the last century, nuclear weapons, were built and maintained by governments who were disincentivized to use them," she said.

She drew parallels to the Biden administration's current battle to get software manufacturers to make their products secure during the design process, rather than focus on speedily getting those products to market. "This technology is built by companies whose job it is to maximize profits for their shareholders," she said. "It's a different conversation, and I applaud the efforts to try and get ahead of it."

Lawmakers on both sides of the aisle have concerns about AI. "People are trying to say, 'Hey, we need to slow down AI,'" Rep. Carlos Giménez (R-Fla.) said to Easterly. "Frankly, we cannot slow down AI because our adversaries are not going to slow down AI, and they understand the potential of artificial intelligence and all sorts of things. … The only defense that we're going to have against AI is AI."

Joyce, speaking at the RSA Conference last week, predicted that AI would help hackers overcome language barriers to lure victims into clicking on malicious links.
"It's going to do things like enable much more effective phishing," he said. "That Russian-native hacker who doesn't speak English well is no longer going to craft a crappy email to your employees," he added. The phishing email "is going to be native-language English, it's going to make sense, it's going to pass the sniff test of whatever topic it's trying to convey." But for now, there are limitations to what it can accomplish, he said. | - "It's not a buzzword," Joyce said of AI. But, he said, "I won't say it's delivered yet. … In the near term, I don't expect some magical technical capability that is AI generated that will exploit all the things."
- Around this time next year, he expects to see "a bunch of examples of where it's been weaponized, where it's been used and where it's succeeded."
Congress and the administration have begun to take steps toward addressing AI.

Today, Rep. Yvette D. Clarke (D-N.Y.) is introducing legislation that would require disclosure of the use of AI-generated content in political ads, my colleague Isaac Stanley-Becker reports. It comes as AI has "quickly become an instrument of political messaging, mischief and misinformation," as Isaac writes. Clarke, the former chair of the House Homeland Security Committee's cybersecurity subcommittee, said the bill is part of an effort to "get the Congress going on addressing many of the challenges that we're facing with AI." Other lawmakers have also introduced AI-focused bills.

- "Our current laws don't begin to scratch the surface with respect to protecting the American people from what the rapid deployment of AI can mean in disrupting society," she said in an interview.
The chairman of the Senate Intelligence Committee, Mark R. Warner (D-Va.), said last week in a series of letters to AI company CEOs that it is "clear that some level of regulation is necessary in this field," specifically citing security concerns. Letter recipients included leaders of Apple, Google, Meta, Microsoft, Midjourney and OpenAI.

"As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way," Warner wrote. "AI presents a new set of security concerns that are distinct from traditional software vulnerabilities," he said, such as tampering with data.

The White House on Monday said it was seeking information on how workplaces monitor employees with AI.

- "When paired with employer decisions about pay, discipline, and promotion, automated surveillance can lead to workers being treated differently or discriminated against," wrote Deirdre Mulligan, deputy U.S. chief technology officer for policy in the White House Office of Science and Technology Policy, and Jenny Yang, deputy assistant to the president for racial justice and equity on the Domestic Policy Council.
The Biden administration's national cybersecurity strategy tackles AI on several fronts, including by putting the onus of protecting consumers' personal data on the organizations that collect and store it, said Kemba Walden, the acting national cyber director. "We're beginning to develop some guardrails on AI," she told reporters last week at the RSA Conference. But the potential future threats loom as policymakers scramble.

"I'm very hopeful that in the near term we'll be able to lay down some guardrails that will be enforceable to ensure that there are significant safety elements built into this technology," Easterly told me.

The keys

T-Mobile's second breach this year compromises customer PINs, other sensitive info

The breach compromised 836 T-Mobile accounts, T-Mobile told state regulators. (Michael Dwyer/AP)

- "The information obtained for each customer varied but may have included full name, contact information, account number and associated phone numbers, T-Mobile account PIN, social security number, government ID, date of birth, balance due, internal codes that T-Mobile uses to service customer accounts (for example, rate plan and feature codes), and the number of lines."
The hack is at least the ninth to affect the company since 2018, but it appears to have impacted significantly fewer accounts than past breaches, including a 2021 hack that exposed data on some 49 million customer accounts.

NotPetya cyberattack wasn't a warlike action under N.J. insurance law, appeals judges say

The NotPetya ransomware attack, which was linked to Russia's GRU spy agency, cost Merck some $1.4 billion. (Christopher Occhicone/Bloomberg News)

A New Jersey appellate court on Monday rejected several insurance groups' arguments that pharma giant Merck suffered a cyberattack under warlike conditions, paving the way for the company to receive insurance payouts in connection with the 2017 NotPetya malware attack, Ufonobong Umanah reports for Bloomberg Law. The ransomware attack, which was linked to Russia's GRU agency, cost the company around $1.4 billion, according to the report.

"Although the pharmaceutical company had 'all risks' policies with several insurers, they excluded losses or damages from a 'hostile or warlike action' by a government, its agents, or its military forces," Umanah writes.

- But an opinion from Judge Heidi Willis Currier said that exclusion would require the involvement of military action and does not cover all damages that arise "out of a government action motivated by ill will."
Insurance representatives told the Wall Street Journal's Richard Vanderford earlier this year that NotPetya was linked to broader Kremlin military actions.

- "Russia did this," James E. Rocap, a lawyer arguing on behalf of Merck's insurers, told the Wall Street Journal at the time. "This was a destructive act. It was all part of the ongoing conflict between Russia and Ukraine over Ukrainian sovereignty."
AI chatbots used to create dozens of news content farms, planting seeds for fraud

None of the sites disclose that they were populated with generated text from tools like ChatGPT. (Gabby Jones/Bloomberg News)

Research published Monday found that dozens of news sites created with AI chatbots are proliferating online, raising the risk of new fraud scams, Bloomberg News's Davey Alba reports. The report was published by news rating group NewsGuard, and the websites were independently reviewed by Bloomberg News.

- "Some are dressed up as breaking news sites with generic-sounding names like News Live 79 and Daily Business Post, while others share lifestyle tips, celebrity news or publish sponsored content," Alba writes.
- None of the sites disclose that they were populated with generated text from tools like ChatGPT, the story adds.
One such site ran a headline falsely declaring that President Biden had died peacefully in his sleep. "Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism," said NewsGuard co-CEO Gordon Crovitz, a former publisher of the Wall Street Journal.

- Additionally, this type of content-farm fraud will continue to be pushed by bad actors who will "keep experimenting to find what's effective," said Bentley University data science and mathematics professor Noah Giansiracusa.
Global cyberspace

National security watch

Securing the ballot

Government scan

Hill happenings

Industry report

Daybook

- The Intelligence and National Security Alliance holds a discussion about how the Biden administration's national cybersecurity strategy ties to critical infrastructure protections at 2 p.m.
- Stanford University's Center for International Security and Cooperation convenes an event on AI and military decision-making at 4 p.m.
Secure log off

Thanks for reading. See you tomorrow.