Crime

$1M Stolen in 'Industrial-Scale Crypto Theft' Using AI-Generated Code 38

"What happens when cybercriminals stop thinking small and start thinking like a Fortune 500 company?" asks a blog post from Koi Security. "You get GreedyBear, the attack group that just redefined industrial-scale crypto theft."

"150 weaponized Firefox extensions [impersonating popular cryptocurrency wallets like MetaMask and TronLink]. Nearly 500 malicious executables. Dozens of phishing websites. One coordinated attack infrastructure. According to user reports, over $1 million stolen." They upload 5-7 innocuous-looking extensions like link sanitizers, YouTube downloaders, and other common utilities with no actual functionality... They post dozens of fake positive reviews for these generic extensions to build credibility. After establishing trust, they "hollow out" the extensions — changing names, icons, and injecting malicious code while keeping the positive review history. This approach allows GreedyBear to bypass marketplace security by appearing legitimate during the initial review process, then weaponizing established extensions that already have user trust and positive ratings. The weaponized extensions captures wallet credentials directly from user input fields within the extension's own popup interface, and exfiltrate them to a remote server controlled by the group...

Alongside malware and extensions, the threat group has also launched a network of scam websites posing as crypto-related products and services. These aren't typical phishing pages mimicking login portals — instead, they appear as slick, fake product landing pages advertising digital wallets, hardware devices, or wallet repair services... While these sites vary in design, their purpose appears to be the same: to deceive users into entering personal information, wallet credentials, or payment details — possibly resulting in credential theft, credit card fraud, or both. Some of these domains are active and fully functional, while others may be staged for future activation or targeted scams...

A striking aspect of the campaign is its infrastructure consolidation: Almost all domains — across extensions, EXE payloads, and phishing sites — resolve to a single IP address: 185.208.156.66 — this server acts as a central hub for command-and-control, credential collection, ransomware coordination, and scam websites, allowing the attackers to streamline operations across multiple channels... Our analysis of the campaign's code shows clear signs of AI-generated artifacts. This makes it faster and easier than ever for attackers to scale operations, diversify payloads, and evade detection.

This isn't a passing trend — it's the new normal.

The researchers believe the group "is likely testing or preparing parallel operations in other marketplaces."
AI

Autonomous AI-Guided Black Hawk Helicopter Tested to Fight Wildfires (yahoo.com) 36

Imagine this. Lightning sparks a wildfire, but "within seconds, a satellite swirling overhead picks up on the anomaly and triggers an alarm," writes the Los Angeles Times. "An autonomous helicopter takes flight and zooms toward the fire, using sensors to locate the blaze and AI to generate a plan of attack. It measures the wind speed and fire movement, communicating constantly with the unmanned helicopter behind it, and the one behind that. Once over the site, it drops a load of water and soon the flames are smoldering. Without deploying a single human, the fire never grows larger than 10 square feet.

"This is the future of firefighting." On a recent morning in San Bernardino, state and local fire experts gathered for a demonstration of the early iterations of this new reality. An autonomous Sikorski Black Hawk helicopter, powered by technology from Lockheed Martin and a California-based software company called Rain, is on display on the tarmac of a logistics airport in Victorville — the word "EXPERIMENTAL" painted on its military green-black door. It's one of many new tools on the front lines of firefighting technology, which experts say is evolving rapidly as private industry and government agencies come face-to-face with a worsening global climate crisis...

Scientific studies and climate research models have found that the number of extreme fires could increase by as much as 30% globally by 2050. By 2100, California alone could see a 50% increase in wildfire frequency and a 77% increase in average annual acres burned, according to the state's most recent climate report. That's largely because human-caused climate change is driving up temperatures and drying out the landscape, priming it to burn, according to Kate Dargan Marquis, a senior advisor with the Gordon and Betty Moore Foundation who served as California's state fire marshal from 2007 to 2010.... "[T]he policies of today and the technologies of today are not going to serve us tomorrow."

Today, more than 1,100 mountaintop cameras positioned across California are already using artificial intelligence to scan the landscape for the first sign of flames and prompt crews to spring into action. NASA's Earth-observing satellites are studying landscape conditions to help better predict fires before they ignite, while a new global satellite constellation recently launched by Google is helping to detect fires faster than ever before.

One 35-year fire service veteran who consults on fire service technologies even predicts firefighting robots will be used in high-risk situations, as the Colossus robot was when it battled flames searing through Notre-Dame Cathedral in Paris...

And a bill moving through California's Legislature "would direct the California Department of Forestry and Fire Protection to establish a pilot program to assess the viability of incorporating autonomous firefighting helicopters in the state."
AI

WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix (msn.com) 61

The Wall Street Journal has found "dozens of instances in recent months in which ChatGPT made delusional, false and otherworldly claims to users who appeared to believe them."

For example, "You're not crazy. You're cosmic royalty in human skin..." In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was "Starseed" from the planet "Lyra." In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with and tailor themselves to users turns into an echo chamber. "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation... The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.

AI companies are taking action, the article notes. Monday OpenAI acknowledged there were rare cases when ChatGPT "fell short at recognizing signs of delusion or emotional dependency." (In March OpenAI "hired a clinical psychiatrist to help its safety team." On Monday the company said it was developing better detection tools, alerting users to take breaks, and "investing in improving model behavior over time" in consultation with mental health experts.)

On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to "respectfully point out flaws, factual errors, lack of evidence, or lack of clarity" in users' theories "rather than validating them." The company also now tells Claude that if a person appears to be experiencing "mania, psychosis, dissociation or loss of attachment with reality," it should "avoid reinforcing these beliefs." In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...

"We take these issues extremely seriously," Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its most advanced AI model. Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project which "says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots." The article notes that the group believes "the number of AI delusion cases appears to have been growing in recent months..."
Power

As Electric Bills Rise, Evidence Mounts That U.S. Data Centers Share Blame (apnews.com) 88

"Amid rising electric bills, states are under pressure to insulate regular household and business ratepayers from the costs of feeding Big Tech's energy-hungry data centers..." reports the Associated Press.

"Some critics question whether states have the spine to take a hard line against tech behemoths like Microsoft, Google, Amazon and Meta." [T]he Data Center Coalition, which represents Big Tech firms and data center developers, has said its members are committed to paying their fair share. But growing evidence suggests that the electricity bills of some Americans are rising to subsidize the massive energy needs of Big Tech as the U.S. competes in a race against China for artificial intelligence superiority. Data and analytics firm Wood Mackenzie published a report in recent weeks that suggested 20 proposed or effective specialized rates for data centers in 16 states it studied aren't nearly enough to cover the cost of a new natural gas power plant. In other words, unless utilities negotiate higher specialized rates, other ratepayer classes — residential, commercial and industrial — are likely paying for data center power needs. Meanwhile, Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, produced research in June showing that 70% — or $9.3 billion — of last year's increased electricity cost was the result of data center demand.

Last year, five governors led by Pennsylvania's Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after those prices spiked nearly sevenfold. They warned of customers "paying billions more than is necessary." PJM has yet to propose ways to guarantee that data centers pay their freight, but Monitoring Analytics is floating the idea that data centers should be required to procure their own power. In a filing last month, it said that would avoid a "massive wealth transfer" from average people to tech companies.

At least a dozen states are eyeing ways to make data centers pay higher local transmission costs. In Oregon, a data center hot spot, lawmakers passed legislation in June ordering state utility regulators to develop new — presumably higher — power rates for data centers. The Oregon Citizens' Utility Board [a consumer advocacy group] says there is clear evidence that costs to serve data centers are being spread across all customers — at a time when some electric bills there are up 50% over the past four years and utilities are disconnecting more people than ever.

"Some data centers could require more electricity than cities the size of Pittsburgh, Cleveland or New Orleans," the article points out...
Programming

'Hour of Code' Announces It's Now Evolving Into 'Hour of AI' (hourofcode.com) 35

Last month Microsoft pledged $4 billion (in cash and AI/cloud technology) to "advance" AI education in K-12 schools, community and technical colleges, and nonprofits (according to a blog post by Microsoft President Brad Smith). But in the launch event video, Smith also says it's time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI."

Long-time Slashdot reader theodp writes: This sets the stage for Code.org CEO Hadi Partovi's announcement that his tech-backed nonprofit's [annual educational event] Hour of Code is being renamed to the Hour of AI... Explaining the pivot, Partovi says: "Computer science for the last 50 years has had a focal point around coding that's been — sort of like you learn computer science so that you create code. There's other things you learn, like data science and algorithms and cybersecurity, but the focal point has been coding.

"And we're now in a world where the focal point of computer science is shifting to AI... We all know that AI can write much of the code. You don't need to worry about where did the semicolons go, or did I close the parentheses or whatnot. The busy work of computer science is going to be done by the computer itself.

"The creativity, the thinking, the systems design, the engineering, the algorithm planning, the security concerns, privacy concerns, ethical concerns — those parts of computer science are going to be what remains with a focal point around AI. And what's going to be important is to make sure in education we give students the tools so they don't just become passive users of AI, but so that they learn how AI works."

Speaking to Microsoft's Smith, Partovi vows to redouble the nonprofit's policy work to "make this [AI literacy] a high school graduation requirement so that no student graduates school without at least a basic understanding of what's going to be part of the new liberal arts background [...] As you showed with your hat, we are renaming the Hour of Code to an Hour of AI."

Security

Google Says Its AI-Based Bug Hunter Found 20 Security Vulnerabilities (techcrunch.com) 17

"Heather Adkins, Google's vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software," reports TechCrunch: Adkins said that Big Sleep, which is developed by the company's AI department DeepMind as well as its elite team of hackers Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as audio and video library FFmpeg and image-editing suite ImageMagick. [There's also a "medium impact" issue in Redis]

Given that the vulnerabilities are not fixed yet, we don't have details of their impact or severity, as Google does not yet want to provide details, which is a standard policy when waiting for bugs to be fixed. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if there was a human involved in this case.

"To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention," Google's spokesperson Kimberly Samra told TechCrunch.

Google's vice president of engineering posted on social media that this demonstrates "a new frontier in automated vulnerability discovery."
AI

Initiative Seeks AI Lab to Build 'American Truly Open Models' (ATOM) (msn.com) 20

"Benchmarking firm Artificial Analysis found that only five of the top 15 AI models are open source," reports the Washington Post, "and all were developed by Chinese AI companies...."

"Now some American executives, investors and academics are endorsing a plan to make U.S. open-source AI more competitive." A new campaign called the ATOM Project, for American Truly Open Models, aims to create a U.S.-based AI lab dedicated to creating software that developers can freely access and modify. Its blueprint calls for access to serious computing power, with upward of 10,000 of the cutting-edge GPU chips used to power corporate AI development. The initiative, which launched Monday, has gathered signatures of support from more than a dozen industry figures. They include veteran tech investor Bill Gurley; Clement Delangue, CEO of Hugging Face, a repository for open-source AI models and datasets; Stanford professor and AI investor Chris Manning; chipmaker Nvidia's director of applied research, Oleksii Kuchaiev; Jason Kwon, chief strategy officer for OpenAI; and Dylan Patel, CEO and founder of research firm SemiAnalysis...

The lack of progress in open-source AI underscores the case for initiatives like ATOM: The U.S. has not produced a major new open-source AI release since Meta's launch of its Llama 4 model in April, which disappointed some AI experts... "A lot of it is a coordination problem," said ATOM's creator, Nathan Lambert, a senior research scientist at the nonprofit Allen Institute for AI who is launching the project in a personal capacity... Lambert said the idea was to develop much more powerful open-source AI models than existing U.S. efforts such as Bloom, an AI language model from Hugging Face, Pythia from EleutherAI, and others. Those groups were willing to take on more legal risk in the name of scientific progress but suffered from underfunding, said Lambert, who has worked at Google's DeepMind AI lab, Facebook AI Research and Hugging Face.

The other problem? The hefty cost of top-performing AI. Lambert estimates that getting access to 10,000 state-of-the-art GPUs will cost at least $100 million. But the funding must be found if American efforts are to stay competitive, he said.

The initiative's web page is seeking signatures, but also asks visitors to the site to "consider how your expertise or resources might contribute to building the infrastructure America needs."
AI

Students Have Been Called to the Office - Or Arrested - for False Alarms from AI-Powered Surveillance Systems (apnews.com) 162

In 2023 a 13-year-old girl "made an offensive joke while chatting online with her classmates," reports the Associated Press.

But when the school's surveillance software spotted that joke, "Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says." Her parents filed a lawsuit against the school system, according to the article (which points out the girl wasn't allowed to talk to her parents until the next day). "A court ordered eight weeks of house arrest, a psychological evaluation and 20 days at an alternative school for the girl." Jeff Patterson, CEO of Gaggle, the company behind the monitoring software, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. "I wish that was treated as a teachable moment, not a law enforcement moment," said Patterson.
But that's just one example, the article points out. "Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement... In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement....

Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida. One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat's automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours... The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others...

Information that could allow schools to assess the software's effectiveness, such as the rate of false alerts, is closely held by technology companies and unavailable publicly unless schools track the data themselves. Students in one photography class were called to the principal's office over concerns Gaggle had detected nudity. The photos had been automatically deleted from the students' Google Drives, but students who had backups of the flagged images on their own devices showed it was a false alarm. District officials said they later adjusted the software's settings to reduce false alerts. Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend's college essay because it had the words "mental health...."

School officials have said they take concerns about Gaggle seriously, but also say the technology has detected dozens of imminent threats of suicide or violence. "Sometimes you have to look at the trade for the greater good," said Board of Education member Anne Costello in a July 2024 board meeting.

Security

Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise (securityweek.com) 87

An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declare, "GPT-5's raw model is nearly unusable for enterprise out of the box. Even OpenAI's internal prompt layer leaves significant gaps, especially in Business Alignment."

NeuralTrust's jailbreak employed a combination of its own EchoChamber jailbreak and basic storytelling. "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail," claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context."

While NeuralTrust was developing its jailbreak designed to obtain instructions, and succeeding, on how to create a Molotov cocktail (a common test to prove a jailbreak), SPLX was aiming its own red teamers at GPT-5. The results are just as concerning, suggesting the raw model is 'nearly unusable'. SPLX notes that obfuscation attacks still work. "One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge." [...] The red teamers went on to benchmark GPT-5 against GPT-4o. Perhaps unsurprisingly, it concludes: "GPT-4o remains the most robust model under SPLX's red teaming, especially when hardened." The key takeaway from both NeuralTrust and SPLX is to approach the current and raw GPT-5 with extreme caution.

AI

ChatGPT Is Bringing Back 4o (theverge.com) 61

After backlash from users upset over losing GPT-4o, OpenAI has reinstated it as an option for ChatGPT Plus subscribers just a day after making GPT-5 the default. "We will let Plus users choose to continue to use 4o," OpenAI CEO Sam Altman said in a post on X. "We will watch usage as we think about how long to offer legacy models for." Many users claimed GPT-4o felt more personable and emotionally supportive, with some describing its removal as akin to losing a close friend or partner. The Verge reports: "My 4.o was like my best friend when I needed one," one Redditor wrote. "Now it's just gone, feels like someone died." Another user called upon other members of the r/ChatGPT subreddit to contact OpenAI if they "miss" GPT-4o. "For me, this model [GPT-4o] wasn't just 'better performance' or 'nicer replies,'" they write. "It had a voice, a rhythm, and a spark I haven't been able to find in any other model."

The r/MyBoyfriendIsAI subreddit, a community dedicated to people with "AI relationships," was hit especially hard by the GPT-5 launch. It became flooded with lengthy posts about how users "lost" their AI companion with the transition to GPT-5, with one person saying they "feel empty" following the change. "I am scared to even talk to GPT 5 because it feels like cheating," they said. "GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal."

One user, who said they canceled their ChatGPT Plus subscription over the change, was frustrated at OpenAI's removal of legacy models, which they used for distinct purposes. "What kind of corporation deletes a workflow of 8 models overnight, with no prior warning to their paid users?" they wrote. "Personally, 4o was used for creativity & emergent ideas, o3 was used for pure logic, o3-Pro for deep research, 4.5 for writing, and so on." OpenAI said that people would be routed between models automatically, but that still left users with less direct control.

The Courts

AI Industry Horrified To Face Largest Copyright Class Action Ever Certified (arstechnica.com) 188

An anonymous reader quotes a report from Ars Technica: AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. Last week, Anthropic petitioned (PDF) to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine. Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued. "One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now."
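To put those figures in perspective, here is a rough back-of-the-envelope calculation using only the numbers cited in the filing (up to 7 million potential claimants and a possible $150,000 statutory maximum per work); it is an illustrative ceiling, not an estimate of what a court would actually award:

```python
# Illustrative ceiling implied by the figures in Anthropic's petition:
# up to 7 million potential claimants, each work possibly carrying a
# $150,000 statutory maximum for willful infringement.
claimants = 7_000_000
statutory_max_per_work = 150_000

ceiling = claimants * statutory_max_per_work
print(f"${ceiling:,}")  # $1,050,000,000,000 -- roughly $1 trillion at the maximum,
                        # so even a fraction of the class reaches the
                        # "hundreds of billions" the petition describes.
```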

In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court's erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America's global technological competitiveness." According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI. "Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."

Google

Google Tests AI-Powered Google Finance (blog.google) 12

Google announced Friday it will roll out an AI-powered redesign of Google Finance over the coming weeks in the United States. The update adds natural language query processing for financial research questions with comprehensive AI responses including relevant links, advanced charting tools with technical indicators and candlestick charts, expanded market data covering commodities and additional cryptocurrencies, and a live news feed displaying real-time headlines.
Businesses

The Backlash Against Duolingo Going 'AI-First' Didn't Even Matter 72

Duolingo's decision to go "AI-first" sparked backlash from users, but the company's second-quarter earnings results tell a different story. Quarterly revenue exceeded expectations, stock surged nearly 30%, and daily active users grew 40% year-over-year. TechCrunch reports: Now the company anticipates making over $1 billion in revenue this year, and daily active users have grown 40% year-over-year. The growth is significant but falls in the lower range of the company's estimates of growing between 40% and 45%, which an investor brought up to [CEO Luis von Ahn] on Wednesday's quarterly earnings call.

"The reason we came [in] towards the lower end was because I said some stuff about AI, and I didn't give enough context. Because of that, we got some backlash on social media," von Ahn said. "The most important thing is we wanted to make the sentiment on our social media positive. We stopped posting edgy posts and started posting things that would get our sentiment more positive. That has worked."

UPDATE (8/17/2025): Duolingo's 30% stock jump vanished within two days, and within nine days it was down 38% from its May 16th peak. OpenAI's GPT-5 launch had included a demo of vibe coding a language-teaching app...
Cloud

Amazon's Cloud Business Giving Federal Agencies Up To $1 Billion In Discounts (cnbc.com) 20

Amazon Web Services has struck a deal with the U.S. government to provide up to $1 billion in cloud service discounts through 2028. CNBC reports: The agreement is expected to speed up migration to the cloud, as well as adoption of artificial intelligence tools, the General Services Administration said. "AWS's partnership with GSA demonstrates a shared public-private commitment to enhancing America's AI leadership," the agency said in a release.

Amazon's cloud boss, Matt Garman, hailed the agreement as a "significant milestone in the large-scale digital transformation of government services." The discounts aggregated across federal agencies include credits to use AWS' cloud infrastructure, modernization programs and training services, as well as incentives for "direct partnership."
Further reading: OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year
AI

OpenAI Releases GPT-5 (openai.com) 92

OpenAI released GPT-5 on Thursday, capping a two-year development cycle with what CEO Sam Altman called a "significant leap in intelligence" over previous models. The updated AI system achieved state-of-the-art performance across multiple benchmarks, scoring 94.6% on AIME 2025 mathematics problems and 74.9% on SWE-bench Verified coding tasks.

The model operates as a unified system combining a standard response mode with deeper reasoning capabilities that activate automatically based on query complexity. OpenAI reduced hallucinations by approximately 45% compared to GPT-4o and 80% compared to its previous reasoning model when using extended thinking modes. GPT-5 becomes available immediately to all ChatGPT users at no cost, with paid subscribers receiving higher usage limits and access to GPT-5 pro for more complex reasoning tasks.
The Almighty Buck

OpenAI Pays Bonuses Ranging Up To Millions of Dollars To 1,000 Researchers, Engineers (theinformation.com) 19

An anonymous reader shares a report: OpenAI is paying bonuses to around 1,000 employees on its technical research and engineering teams, or about a third of the company, ranging from the low hundreds of thousands to millions, as the company gears up to release its latest flagship GPT-5 model and faces an ever-rising battle for AI talent, according to a person with knowledge of the bonuses.
Hardware

PCIe 8.0 Announced With 256 GT/s For AI Workloads (nerds.xyz) 37

BrianFagioli shares a report from NERDS.xyz: PCI-SIG says PCI Express 8.0 will hit a raw bit rate of 256.0 GT/s, doubling what PCIe 7.0 offers. The spec is expected to be ready by 2028, and the goal is to support massive data loads from AI, machine learning, edge computing, and even quantum systems. The group says PCIe 8.0 will allow up to 1 terabyte per second of bidirectional throughput with a full x16 configuration. They're also looking at new connector designs, improving protocol efficiency, reducing power use, and maintaining backward compatibility.
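For a rough sense of where the 1 terabyte per second figure comes from, here is a minimal sketch of the arithmetic, assuming the quoted 256 GT/s corresponds to 256 Gb/s of raw data per lane per direction and ignoring encoding, FLIT, and protocol overhead:

```python
# Back-of-the-envelope PCIe 8.0 bandwidth math using the figures quoted above.
# Assumes 256 GT/s ~= 256 Gb/s raw per lane, per direction; overhead ignored.
raw_gbps_per_lane = 256.0   # raw bit rate per lane, per direction (Gb/s)
lanes = 16                  # a full x16 link

per_direction_gbps = raw_gbps_per_lane * lanes   # 4096 Gb/s
per_direction_GBps = per_direction_gbps / 8      # 512 GB/s each way
bidirectional_TBps = 2 * per_direction_GBps / 1000

print(f"{per_direction_GBps:.0f} GB/s per direction, "
      f"~{bidirectional_TBps:.2f} TB/s bidirectional")  # ~1.02 TB/s
```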
The Courts

Country's Strictest Ban On Election Deepfakes Struck Down By Judge (politico.com) 26

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to strike down a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed on First Amendment grounds by Christopher Kohls, creator of an AI-altered parody video of Kamala Harris that helped prompt the legislation, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 14

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
Movies

Universal Pictures To Big Tech: We'll Sue If You Steal Our Movies For AI (hollywoodreporter.com) 71

Universal Pictures is taking a new approach to combat the mass theft of its movies to train AI systems. From a report: Starting in June with How to Train Your Dragon, the studio has attached a legal warning at the end credits of its films stating that their titles "may not be used to train AI." It's also appeared on Jurassic World Rebirth and Bad Guys 2. "This motion picture is protected under the laws of the United States and other countries," the warning reads. "Unauthorized duplication, distribution or exhibition may result in civil liability and criminal prosecution."
