The Courts

Country's Strictest Ban On Election Deepfakes Struck By Judge (politico.com) 26

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to overrule a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed by the creator of the video, Christopher Kohls, on First Amendment grounds, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 14

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
Movies

Universal Pictures To Big Tech: We'll Sue If You Steal Our Movies For AI (hollywoodreporter.com) 71

Universal Pictures is taking a new approach to combat the mass theft of its movies for training AI systems. From a report: Starting in June with How to Train Your Dragon, the studio has attached a legal warning to the end credits of its films stating that their titles "may not be used to train AI." It has also appeared on Jurassic World Rebirth and Bad Guys 2. "This motion picture is protected under the laws of the United States and other countries," the warning reads. "Unauthorized duplication, distribution or exhibition may result in civil liability and criminal prosecution."
AI

OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year (openai.com) 25

OpenAI will provide ChatGPT access to US federal agencies for $1 annually through the General Services Administration's new AI marketplace that also includes Google and Anthropic as approved vendors. The nominal pricing represents the deepest discount GSA has negotiated with software providers, surpassing previous deals with Adobe and Salesforce.

OpenAI said it will not use federal worker data to train its models and agencies face no renewal requirements. The $1 rate applies only to the ChatGPT chatbot interface, not OpenAI's API for custom software development.
China

Nvidia Rejects US Demand For Backdoors in AI Chips 78

Nvidia's chief security officer has published a blog post insisting that its GPUs "do not and should not have kill switches and backdoors." From a report: It comes amid pressure from both sides of the Pacific, with some US lawmakers pushing Nvidia to grant the government backdoors to AI chips, while Chinese officials have alleged that they already exist.

David Reber Jr.'s post seems pointedly directed at US lawmakers. In May a bipartisan group introduced the Chip Security Act, a bill that would require Nvidia and other manufacturers to include tracking technology to identify when chips are illegally transported internationally, and would leave the door open for further security measures, including remote kill switches. While Nvidia is expecting to be granted permits to once again sell certain AI chips in China, its most powerful hardware is still under strict US export controls there and elsewhere.
AI

Jim Acosta Interviews AI Version of Teenager Killed in Parkland Shooting (variety.com) 127

Jim Acosta, the former CNN chief White House correspondent who now hosts an independent show on YouTube, has interviewed an AI-generated avatar of Parkland shooting victim Joaquin Oliver. The late teen's parents created the avatar to preserve his voice and advocate for gun reform. Oliver's parents "granted Acosta the first 'interview' with the recreated version of their son on what would have been his 25th birthday," notes Variety. "Oliver was one of 17 people killed in the mass shooting at Marjory Stoneman Douglas High School." From the report: Acosta asked AI Oliver about his solution for gun violence, to which the avatar responded: "I believe in a mix of stronger gun control laws, mental health support and community engagement. We need to create safe spaces for conversations and connections, making sure everyone feels seen and heard. It's about building a culture of kindness and understanding." The avatar added, "Though my life was cut short, I want to keep inspiring others to connect and advocate for change." Acosta then asked AI Oliver about his personal life, such as his favorite sport and favorite basketball team. The two discussed the movie "Remember the Titans" and their favorite "Star Wars" moments.

After a five-minute chat with the AI, Acosta then connected with Oliver's father, Manuel Oliver. "I'm kind of speechless as to the technology there," Acosta said. "It was so insightful. I really felt like I was speaking with Joaquin. It's just a beautiful thing." Manuel, who has been an outspoken voice in the push for gun control, said he believed bringing "AI Joaquin to life" would "create more impact." According to Manuel, the avatar is trained on information on the internet as well as things Oliver wrote, said and posted online. He said he wanted to make it clear to viewers that he is under no illusions about reviving his son. "I understand that this is AI. I don't want anyone to think that I am, in some way, trying to bring my son back," he said. "Sadly, I can't, right? I wish I could. However, the technology is out there." [...]

Manuel said he is excited about the future of the project and what it means for his son's legacy. "What's amazing about this is that we've heard from the parents, we've heard from the politicians. Now we're hearing from one of the kids," Acosta said. "That's important. That hasn't happened." Manuel said he plans to have AI Oliver "on stage in the middle of a debate," and that "his knowledge is unlimited."
You can watch the full interview on YouTube.
AI

Perplexity Says Cloudflare's Accusations of 'Stealth' AI Scraping Are Based On Embarrassing Errors (zdnet.com) 96

In a report published Monday, Cloudflare accused Perplexity of deploying undeclared web crawlers that masquerade as regular Chrome browsers to access content from websites that have explicitly blocked its official bots. Since then, Perplexity has publicly and loudly announced that Cloudflare's claims are baseless and technically flawed. "This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats," says Perplexity in a blog post. "If you can't tell a helpful digital assistant from a malicious scraper, then you probably shouldn't be making decisions about what constitutes legitimate web traffic."

Perplexity continues: "Technical errors in Cloudflare's analysis aren't just embarrassing -- they're disqualifying. When you misattribute millions of requests, publish completely inaccurate technical diagrams, and demonstrate a fundamental misunderstanding of how modern AI assistants work, you've forfeited any claim to expertise in this space."
Government

Swedish PM Under Fire For Using AI In Role 26

Sweden's Prime Minister Ulf Kristersson has come under fire after admitting that he frequently uses AI tools like ChatGPT for second opinions on political matters. The Guardian reports: ... Kristersson, whose Moderate party leads Sweden's center-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions."

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis." Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there. It is used more as a ballpark," he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umea University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. "The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope," she told the Dagens Nyheter newspaper. "We must demand that reliability can be guaranteed. We didn't vote for ChatGPT."
The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.

Google

Google's New Genie 3 AI Model Creates Video Game Worlds In Real Time (theverge.com) 15

An anonymous reader quotes a report from The Verge: Google DeepMind is releasing a new version of its AI "world" model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time. The company is also promising that users will be able to interact with the worlds for much longer than before and that the model will actually remember where things are when you look away from them. [...] Genie 3 seems like it could be a notable step forward. Users will be able to generate worlds with a prompt that supports a "few" minutes of continuous interaction, which is up from the 10-20 seconds of interaction possible with Genie 2, according to a blog post.

Google says that Genie 3 can keep spaces in visual memory for about a minute, meaning that if you turn away from something in a world and then turn back to it, things like paint on a wall or writing on a chalkboard will be in the same place. The worlds will also have a 720p resolution and run at 24fps. DeepMind is adding what it calls "promptable world events" into Genie 3, too. Using a prompt, you'll be able to do things like change weather conditions in a world or add new characters.
The model is launching as "a limited research preview" available to "a small cohort of academics and creators," according to Google. It's "exploring" how to bring Genie 3 to "additional testers."
Data Storage

DRAM Prices Soar as China Eyes Self-Reliance For High-End Chips (nikkei.com) 30

Standard DDR4 DRAM prices doubled between May and June 2025, with 8-gigabit units reaching $4.12 and 4-gigabit units hitting $3.14 -- the latter's highest level since July 2021, according to electronics trading companies cited by Nikkei Asia. The unprecedented single-month doubling follows speculation that Chinese manufacturer ChangXin Memory Technologies has halted DDR4 production to shift factories toward DDR5 memory for AI applications.

DDR4 currently comprises 60% of desktop PC memory while DDR5 accounts for 40%, per Tokyo-based BCN research. Samsung Electronics, SK Hynix, and Micron Technology controlled 90% of the global DRAM market in Q2 2025.
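As a quick back-of-the-envelope check on the figures above, here is a short Python sketch. The unit prices and capacities come from the report; the per-gigabit comparison and the implied May prices are our own arithmetic, not figures from Nikkei:

```python
# Spot prices cited for June 2025 (USD per unit), keyed by capacity in gigabits.
ddr4_prices = {8: 4.12, 4: 3.14}

# Price per gigabit: the smaller 4 Gb chip is markedly more expensive per bit.
per_gigabit = {cap: round(price / cap, 3) for cap, price in ddr4_prices.items()}
print(per_gigabit)  # {8: 0.515, 4: 0.785}

# "Doubled between May and June" implies May prices of roughly half:
may_estimates = {cap: round(price / 2, 2) for cap, price in ddr4_prices.items()}
print(may_estimates)  # {8: 2.06, 4: 1.57}
```

The per-gigabit gap illustrates why a supply squeeze on legacy DDR4 hits low-capacity parts hardest.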
AI

OpenAI Releases First Open-Weight Models Since GPT-2 (openai.com) 30

OpenAI has released two open-weight language models, marking the startup's first such release since GPT-2 in 2019. The models, gpt-oss-120b and gpt-oss-20b, can run locally on consumer devices and be fine-tuned for specific purposes. Both models use chain-of-thought reasoning approaches first deployed in OpenAI's o1 model and can browse the web, execute code, and function as AI agents.

The smaller 20-billion-parameter model runs on consumer devices with 16 GB of memory, while the gpt-oss-120b model requires about 80 GB. OpenAI said the 120-billion-parameter model performs similarly to the company's proprietary o3 and o4-mini models. The models are available free on Hugging Face under the Apache 2.0 license, following safety testing that delayed their release after a March announcement.
Wikipedia

Wikipedia Editors Adopt 'Speedy Deletion' Policy for AI Slop Articles (404media.co) 31

Wikipedia editors have adopted a policy enabling administrators to delete AI-generated articles without the standard week-long discussion period. Articles containing telltale LLM responses like "Here is your Wikipedia article on" or "Up to my last training update" now qualify for immediate removal.

Articles with fabricated citations -- nonexistent papers or unrelated sources such as beetle research cited in computer science articles -- also meet deletion criteria.
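The phrase-based half of that criterion is simple enough to sketch. A minimal, hypothetical filter is shown below; the first two phrases come from the report, while the third phrase, the function name, and the matching strategy (case-insensitive substring search) are our own illustrative assumptions, not Wikipedia's actual tooling:

```python
# Telltale LLM boilerplate that, per the report, qualifies an article
# for speedy deletion. Case-insensitive substring matching is a
# deliberate simplification of what a human reviewer actually does.
TELLTALE_PHRASES = [
    "here is your wikipedia article on",
    "up to my last training update",
    "as a large language model",  # hypothetical addition for illustration
]

def flag_llm_boilerplate(text: str) -> list[str]:
    """Return the telltale phrases found in `text`, if any."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# Usage: a non-empty result would flag the article for review.
hits = flag_llm_boilerplate("Here is your Wikipedia article on beetles: ...")
print(hits)
```

Detecting fabricated citations, by contrast, requires checking sources against the real literature and cannot be reduced to string matching.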
Windows

Microsoft Teases the Future of Windows as an Agentic OS 127

An anonymous reader shares a report: Microsoft has published a new video that appears to be the first in an upcoming series dubbed "Windows 2030 Vision," in which the company outlines its vision for the future of Windows over the next five years. It curiously references some potentially major changes on the horizon in the wake of AI.

This first episode features David Weston, Microsoft's Corporate Vice President of Enterprise & Security, who opens the video by saying "the world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS."

Right out of the gate, it sounds like he's teasing the potential for a radical new desktop UX made possible by agentic AI. Weston later continues, "I truly believe the future version of Windows and other Microsoft operating systems will interact in a multimodal way. The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things."
Privacy

AI Is Listening to Your Meetings. Watch What You Say. (msn.com) 33

AI meeting transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. The WSJ found a series of mishaps that the people involved confirmed on the record.

Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found her personal frustrations about a neighborhood Whole Foods and a kitchen mishap while making sweet potato recipes included in official meeting recaps distributed to colleagues.
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The dataset of around 100,000 conversations provides a better sense of the scale of the problem and highlights some of the potential privacy risks of using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.

AI

An Illinois Bill Banning AI Therapy Has Been Signed Into Law (mashable.com) 51

An anonymous reader shares a report: In a landmark move, Illinois state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.

The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being. [...] Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or create treatment plans -- unless a licensed professional has reviewed and approved it. The law also closes a loophole that allowed unlicensed persons to advertise themselves as "therapists."

Google

Google Agrees To Pause AI Workloads To Protect the Grid When Power Demand Spikes (theregister.com) 50

Google will pause non-essential AI workloads to protect power grids, the advertising giant announced on Monday. From a report: The web giant already does this sort of thing for non-essential workloads like processing YouTube vids, which it moves to datacenters where power is available rather than continuing to run them in places where demand for energy strains the grid. Under an agreement with Indiana Michigan Power (I&M) and the Tennessee Valley Authority (TVA), Google will use the same techniques for AI workloads.

The announcement comes as states served by the power companies brace for a heat wave that will likely strain the grid as residents use air conditioners and increase demand for energy. Amid debate about datacenters' consumption of power and water, the last thing that the Chocolate Factory needs is folks blaming its AI Mode search function for a power outage when temperatures top 100F (37.8C). Under the agreement, if energy demand surges or there's a disruption in the grid due to extreme weather, I&M and TVA can now request that Google reduce its power use by rescheduling workloads or limiting non-urgent tasks until the issue is resolved.
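The arrangement described above is essentially demand-response scheduling: when the utility signals grid stress, deferrable work waits out the peak. A toy sketch of the idea follows; the job names, the `essential` flag, and the whole priority scheme are our own assumptions for illustration, not anything Google has disclosed about its scheduler:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    essential: bool  # e.g. serving search traffic vs. batch AI training

def schedule(jobs: list[Job], grid_stressed: bool) -> tuple[list[Job], list[Job]]:
    """Split jobs into (run_now, deferred).

    When the utility signals grid stress, only essential jobs keep
    running; non-essential work is deferred until the signal clears
    (or, in Google's case, potentially moved to another datacenter).
    """
    if not grid_stressed:
        return jobs, []
    run_now = [job for job in jobs if job.essential]
    deferred = [job for job in jobs if not job.essential]
    return run_now, deferred

# Usage: during a heat-wave alert, batch work is pushed back.
jobs = [
    Job("search-serving", essential=True),
    Job("video-transcode", essential=False),
    Job("model-training", essential=False),
]
running, deferred = schedule(jobs, grid_stressed=True)
print([job.name for job in deferred])
```

The real systems are far more elaborate (workloads can also migrate between regions), but the core contract with the utility is this simple: a stress signal in, reduced draw out.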

Movies

Disney Scrapped Deepfakes For Moana and Tron To Avoid 'Bad Publicity' 23

Disney scrapped plans to use a deepfake of Dwayne Johnson in Moana and an AI-generated character in Tron: Ares due to concerns over bad publicity and legal ownership. Ultimately, the studio decided the potential PR and copyright risks weren't worth the convenience. Deadline reports: Disney is working on a live-action remake of Moana, in which Dwayne "The Rock" Johnson will reprise his role as Maui. According to a recent report from The Wall Street Journal, the studio came up with the idea of digitally cloning Johnson and using a body double for some shots. "Disney would work with AI company Metaphysic to create deepfakes of Johnson's face that could be layered on top of Reed's performance in the footage -- a 'digital double' that effectively allowed Johnson to be in two places at once," WSJ said (paywalled). Although the Black Adam star approved the idea, the studio "worried" that it "ultimately couldn't claim ownership over every element of the film if AI generated parts of it." The film studio and the AI company were seemingly unable to come to terms, and the footage was scrapped.

Disney's upcoming Tron: Ares revolves around AI and the real-world implications of it. According to WSJ sources, "executives pitched the idea of actually incorporating AI into one of the characters in the sequel to the 1980s hit movie Tron as a buzzy marketing strategy." The AI-generated character would be a sidekick to Jeff Bridges' character, but the idea was ultimately scrapped because "executives internally were told that the company couldn't risk the bad publicity."
Power

Hyundai To Help Build Nuclear-Powered Datacenter In Texas (theregister.com) 44

Fermi America is planning to build a colossal AI datacenter complex in Amarillo, Texas, powered by up to six gigawatts of nuclear energy. According to The Register, the company has selected Hyundai to support the deployment of the "HyperGrid," describing it as the "world's largest advanced energy campus." From the report: The project is backed by Rick Perry, who served as Texas governor and US Energy Secretary, and investor Toby Neugebauer, and aims to establish in Texas the US's largest energy and intelligence campus. Construction of the first of four Westinghouse AP1000 reactors is set to begin next year in Amarillo, with the plant funneling behind-the-meter power to GPU bit barns by 2032, at least according to a memorandum of understanding (MoU). In other words, there is no guarantee the 23 million square meter project (1.1 MilliWales) will actually be built in its entirety, but if it is, Hyundai will oversee it.

"This agreement is significant in that it allows us to participate from the early stages of this project and contribute to the creation of the world's largest integrated energy and artificial intelligence campus, which leverages a diverse range of energy infrastructure," Hyundai said in a canned statement. At the very least, Hyundai knows what it's doing when it comes to nuclear developments. The industrial giant has led the deployment of some 22 reactors. Ambitious as the project may be, it won't be cheap. A single AP1000 reactor was estimated to cost $6.8 billion two years ago. That's a lot of money, but nothing compared to what the hyperscalers and neo-clouds are pumping into datacenters these days. Meta, for reference, expects to spend $66-72 billion on bit barns this year. [...] How exactly Fermi America or its founders Perry and Neugebauer expect to pay for one AP1000 reactor, let alone four, isn't clear. [...]
