Piracy

Cloudflare Appeals Piracy Shield Fine, Hopes To Kill Italy's Site-Blocking Law (arstechnica.com) 25

Cloudflare is appealing a 14.2 million-euro fine from Italy for refusing to comply with its "Piracy Shield" law, which requires blocking access to websites on its 1.1.1.1 DNS service within 30 minutes. The company argues the system lacks oversight, risks widespread overblocking, and could undermine core Internet infrastructure. Ars Technica's Jon Brodkin reports: Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself." Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders.

Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally."
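The dispute centers on a resolver-level filter. A minimal toy model of what such a block looks like from the resolver's side (domains and addresses here are placeholders, not from the actual blocklist); the membership check is the per-query filtering step Cloudflare argues would burden all DNS traffic, not just traffic to blocked sites:

```python
# Toy model of a resolver-level block of the kind Piracy Shield mandates.
# Domains and IPs below are invented placeholders for illustration.
BLOCKLIST = {"pirated-stream.example"}

RECORDS = {
    "pirated-stream.example": "203.0.113.7",
    "legit-site.example": "198.51.100.9",
}

def resolve(domain):
    # Every query now pays for this blocklist check, not only
    # queries for blocked domains -- the filter sits in the hot path.
    if domain in BLOCKLIST:
        return None  # behave as if the name does not exist (NXDOMAIN)
    return RECORDS.get(domain)

print(resolve("legit-site.example"))     # 198.51.100.9
print(resolve("pirated-stream.example")) # None -- blocked
```

A real resolver's blocklist would be far larger and updated within the 30-minute window, which is where the operational and overblocking concerns come from.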

Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit."
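Cloudflare's arithmetic is easy to check. A quick sketch, where the implied Italian-earnings figure is back-calculated from the reported numbers rather than stated in the article:

```python
# Figures reported in the article; implied earnings are an inference.
cap_eur = 140_000             # Cloudflare's claimed legal cap
actual_fine_eur = 14_200_000  # the fine AGCOM actually levied

# If the cap is 2% of Italian earnings, those earnings were ~7M euros.
implied_italian_earnings = cap_eur / 0.02
# Levied fine versus claimed cap -- the "nearly 100 times higher" claim.
ratio = actual_fine_eur / cap_eur

print(f"implied Italian earnings: {implied_italian_earnings:,.0f} euros")
print(f"fine is about {ratio:.0f}x the claimed cap")
```

The ratio comes out at roughly 101x, consistent with Cloudflare's "nearly 100 times" characterization.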

Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said.
Cloudflare is pushing for the law to be struck down, arguing that it is "incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards."

In addition to appealing the fine, Cloudflare says it will continue to challenge Piracy Shield in Italian courts, engage with EU officials, and seek full access to AGCOM's Piracy Shield records.
United Kingdom

UK Plans To Require Labels On AI-Generated Content (reuters.com) 46

An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right."

The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government."

In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives -- including independent and smaller creative organizations -- to be paid fairly," she said.

Encryption

2026 Turing Award Goes To Inventors of Quantum Cryptography (nytimes.com) 8

Dave Knott shares a report from the New York Times: On Wednesday, the Association for Computing Machinery, the world's largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year's Turing Award for their work on quantum cryptography and related technologies. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share.

[...] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett's idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.

This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons -- particles of light -- to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft -- a bit like breaking the seal on an aspirin bottle.
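The BB84 idea can be sketched as a small toy simulation. This models only the information flow, not real photons: a photon becomes a (bit, basis) pair, bases are written "+" and "x" for rectilinear and diagonal, and measuring in the wrong basis yields a random bit, which is exactly how an eavesdropper leaves the "telltale sign" described above:

```python
import random

def bb84_sift(n=1000, eavesdrop=False, seed=0):
    """Toy BB84 sketch: measuring a (bit, basis) photon in the wrong
    basis yields a coin flip, so an intercept-and-resend eavesdropper
    raises the error rate in the sifted key to roughly 25%."""
    rng = random.Random(seed)

    def measure(bit, send_basis, measure_basis):
        # Matching basis: the bit survives; mismatched: a random outcome.
        return bit if send_basis == measure_basis else rng.randint(0, 1)

    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]

    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = rng.choice("+x")
            bit = measure(bit, basis, eve_basis)  # Eve measures the photon...
            basis = eve_basis                     # ...and resends in her basis
        channel.append((bit, basis))

    bob_bases = [rng.choice("+x") for _ in range(n)]
    bob_bits = [measure(b, sb, mb) for (b, sb), mb in zip(channel, bob_bases)]

    # Sift: keep positions where Alice's and Bob's bases happened to agree,
    # then estimate the error rate by (publicly) comparing those bits.
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)

print(bb84_sift())                # 0.0 -- no eavesdropper, no errors
print(bb84_sift(eavesdrop=True))  # ~0.25 -- Eve's measurements leave a trace
```

Without Eve, every sifted bit matches; with Eve intercepting and resending, about a quarter of the sifted bits disagree, the quantum equivalent of the broken aspirin-bottle seal.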

Cloud

Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway (propublica.org) 64

ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report: In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street."

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in Autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia has published an announcement video, and Digital Foundry has posted a detailed breakdown of the technology.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Music

Apple Launches AirPods Max 2 With Better ANC, Live Translation (theverge.com) 30

Apple has quietly announced the AirPods Max 2, featuring improved active noise cancellation, an H2 chip, and new features like adaptive audio and AI-powered real-time translation. Like the original model, these headphones start at $549. The Verge reports: As noted by Apple, the AirPods Max 2 offer active noise-cancellation that's 1.5 times more effective when compared to its predecessor. Transparency mode, which allows you to hear your surroundings while wearing the headphones, also sounds "more natural" with the AirPods Max 2, according to Apple.

The AirPods Max 2 support 24-bit, 48kHz lossless audio when connected with a USB-C cable, as well as offer up to 20 hours of listening time on a single charge. Other capabilities include loud sound reduction, a camera remote feature that works by pressing the digital crown to take a photo or start a recording, as well as a personalized volume feature that "automatically fine-tunes the listening experience" based on your preferences over time.

EU

Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes (reuters.com) 36

Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee, for image or video ads delivered on Meta platforms including WhatsApp click-to-message campaigns and marketing messages together with ads, will apply from July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in a blog post.

The location fees are determined by where the audience is located and not the advertisers' business location. Meta listed six countries where the fees will apply, ranging from 2% in the United Kingdom to 3% in France, Italy and Spain and 5% in Austria and Turkey.
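The audience-based design means one campaign can accrue several different fee rates at once. A small sketch using the reported percentages (the country keys and spend split below are invented for illustration):

```python
# Meta's reported location fees, keyed by audience country (fractions of spend).
LOCATION_FEE = {
    "UK": 0.02,
    "France": 0.03, "Italy": 0.03, "Spain": 0.03,
    "Austria": 0.05, "Turkey": 0.05,
}

def location_fee(spend_by_audience_country):
    """Fee follows where the audience is, not where the advertiser is based;
    countries without a listed fee contribute nothing."""
    return sum(spend * LOCATION_FEE.get(country, 0.0)
               for country, spend in spend_by_audience_country.items())

# A hypothetical 10,000-euro campaign split across three audiences:
fee = location_fee({"UK": 5_000, "Austria": 3_000, "Germany": 2_000})
print(fee)  # 250.0 -- 2% of 5,000 plus 5% of 3,000, nothing for Germany
```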

The Courts

Valve Faces Second Class-Action Lawsuit Over Loot Boxes (pcgamer.com) 110

Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics."

"We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out.
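The economics PC Gamer describes can be sketched with a toy simulation. The drop table below is entirely invented -- Valve does not publish odds -- but it shows how a $2.50 key, mostly penny items, and rare large payouts produce the slot-machine-style return profile the complaint alleges:

```python
import random

def open_boxes(n_keys, key_price=2.50, seed=1):
    """Simulate opening n_keys loot boxes. The (probability, value) table
    is hypothetical, for illustration only -- not Valve's real odds."""
    rng = random.Random(seed)
    drop_table = [
        (0.900, 0.05),    # common item worth pennies
        (0.080, 1.00),    # uncommon
        (0.019, 20.00),   # rare
        (0.001, 500.00),  # the jackpot that keeps people opening boxes
    ]
    spent = n_keys * key_price
    returned = 0.0
    for _ in range(n_keys):
        r, cum = rng.random(), 0.0
        for prob, value in drop_table:
            cum += prob
            if r < cum:
                returned += value
                break
    return spent, returned

spent, returned = open_boxes(10_000)
print(f"spent ${spent:,.2f}, received items worth ${returned:,.2f}")
```

With these assumed odds the expected item value per box is about $1.00 against a $2.50 key, so the house keeps most of the money even though a lucky few boxes pay out hundreds.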

The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales.

And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."

EU

European Consortium Wants Open-Source Alternative To Google Play Integrity (heise.de) 46

An anonymous reader quotes a report from Heise: Paying securely with an Android smartphone, entirely without Google services: that is the plan of a newly founded industry consortium led by Germany's Volla Systeme GmbH. The group is building an open-source alternative to Google Play Integrity, the proprietary interface that decides whether banking, government, or wallet apps are allowed to run on Android smartphones with Google Play services.

The obstacles to, and tips for, paying with an Android smartphone without official Google services have been covered by c't in a comprehensive article. The European industry consortium now wants to address some of the problems described there. To that end, the group -- which, in addition to Volla, includes Murena, developer of the hardened custom ROM /e/OS, Iode from France, and Apostrophy (Dot) from Switzerland -- is developing a so-called "UnifiedAttestation" for Google-free mobile operating systems, primarily those based on the Android Open Source Project (AOSP).

According to Volla, a European manufacturer and a leading manufacturer from Asia, as well as European foundations such as the German UBports Foundation, have also expressed interest in supporting it. Furthermore, developers and publishers of government apps from Scandinavia are examining the use of the new procedure as "first movers." In its announcement, Volla explains that Google provides app developers with an interface called Play Integrity, which checks whether an app is running on a device with specific security requirements. This primarily affects applications from "sensitive areas such as identity verification, banking, or digital wallets -- including apps from governments and public administrations."

The company's criticism is that certification is offered exclusively for Google's own proprietary "stock Android," not for Android builds without Google services, such as /e/OS or similar custom ROMs. "Since this is closely intertwined with Google services and Google data centers, a structural dependency arises -- and for alternative operating systems, a de facto exclusion criterion," the company states. From the consortium's perspective, this also leads to a "security paradox," because "the check of trustworthiness is carried out by precisely that entity whose ecosystem is to be avoided at the same time."
The UnifiedAttestation system is built around three main components: an "operating system service" that apps can call to check whether the device's OS meets required security standards, a decentralized validation service that verifies the OS certificate on a device without relying on a single central authority, and an open test suite used to evaluate and certify that a particular operating system works securely on a specific device model.
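Those three components can be mocked in a few lines. Everything here is invented for illustration -- the real UnifiedAttestation API has not been published -- but it shows the shape of the design: an OS-level service apps call, a quorum of independent validators instead of one central authority, and a certified-device list that would come from the open test suite:

```python
from dataclasses import dataclass

@dataclass
class OSCertificate:
    os_name: str
    device_model: str
    signature: str  # stand-in for a real cryptographic signature

def make_validator(certified):
    """One independent validation service; in the real design these would
    be run by different organizations, none of them a central authority."""
    def validate(cert: OSCertificate) -> bool:
        return (cert.os_name, cert.device_model) in certified and cert.signature == "ok"
    return validate

class AttestationService:
    """Stands in for the 'operating system service' a banking app would call."""
    def __init__(self, validators):
        self.validators = validators

    def check(self, cert: OSCertificate) -> bool:
        # Accept only if a majority of independent validators agree,
        # so no single party's verdict decides trustworthiness.
        votes = [v(cert) for v in self.validators]
        return sum(votes) > len(votes) // 2

# Certified (OS, device) pairs would be produced by the open test suite;
# these entries are hypothetical examples.
certified = {("/e/OS", "Murena One"), ("Volla OS", "Volla Phone 23")}
service = AttestationService([make_validator(certified) for _ in range(3)])

print(service.check(OSCertificate("/e/OS", "Murena One", "ok")))       # True
print(service.check(OSCertificate("Stock Android", "Pixel 9", "ok")))  # False
```

The majority-vote step is the "organize trust transparently" idea: even a validator run by a competitor can only be outvoted, not unilaterally block or approve a device.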

"We don't want to centralize trust, but organize it transparently and publicly verifiable. When companies check competitors' products, we can strengthen that trust," says Dr. Jorg Wurzer, CEO of Volla Systeme GmbH and initiator of the consortium. The goal, he says, is to increase digital sovereignty and break free from the control of any single U.S. company.
Security

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]
Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."
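The misconfiguration behind those exposures is usually a bind-address choice. A generic illustration (the article doesn't give OpenClaw's actual port or settings): a dashboard bound to 127.0.0.1 is reachable only from the same machine, while one bound to 0.0.0.0 listens on every interface and can end up on the public Internet:

```python
import socket

def bind_scope(host, port=0):
    """Open a listening socket on the given address and report whether it
    is loopback-only. Port 0 asks the OS for any free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(1)
    addr, bound_port = s.getsockname()
    loopback_only = addr.startswith("127.")  # 127.0.0.0/8 is loopback
    s.close()
    return bound_port, loopback_only

_, safe = bind_scope("127.0.0.1")
print(safe)  # True -- reachable only from this machine

_, safe = bind_scope("0.0.0.0")
print(safe)  # False -- reachable from any interface that routes here
```

Agent dashboards holding API keys and OAuth secrets belong on the loopback interface (or behind authentication), which is precisely what the exposed installations got wrong.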

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. "And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 168

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
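The mailing-list proposal names only the interface, so the sketch below is an in-process stand-in with an invented method and invented return values, just to show how an "optional" interface lets apps degrade gracefully on distros that ship no age-declaration backend at all:

```python
# In-process stand-in for the proposed org.freedesktop.AgeVerification1
# D-Bus interface. GetAgeBracket and its return values are illustrative
# guesses, not part of the actual proposal.
class AgeVerification1Stub:
    BUS_NAME = "org.freedesktop.AgeVerification1"

    def __init__(self, bracket=None):
        # A distro's backend would populate this from a parent-managed
        # account setting; None means no age declaration is configured.
        self._bracket = bracket

    def GetAgeBracket(self) -> str:
        return self._bracket or "undeclared"

# An app consults the service and falls back when nothing is declared --
# the "optional, implemented as a distro sees fit" part of the proposal.
print(AgeVerification1Stub(bracket="adult").GetAgeBracket())  # adult
print(AgeVerification1Stub().GetAgeBracket())                 # undeclared
```

A real implementation would export this over the session bus; the point of the D-Bus indirection is that applications depend on the interface name, not on any one distro's backend.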

Similar talks are underway in the Fedora and Linux Mint communities about this issue in case the California Digital Age Assurance Act and similar laws from other states and countries are enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database -- including bot API keys and potentially private DMs -- was also compromised."

Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Meutya Hafid said in a statement to media that she signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

Operating Systems

System76 Comments On Recent Age Verification Laws (phoronix.com) 87

In a blog post on Thursday, System76 CEO Carl Richell criticized new state laws in California, Colorado, and New York that would require operating systems to verify users' ages and expose that information to apps, arguing the rules are easy for kids to bypass and ultimately undermine privacy and freedom more than they protect minors.

"System76's position is interesting given that they sell Linux-loaded desktops, workstations and laptops plus being an operating system vendor with their in-house Pop!_OS distribution and COSMIC desktop environment," adds Phoronix's Michael Larabel, noting that they're also based out of Colorado.

Here's an excerpt from the post: "A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It's a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents. ... In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost. ... The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them."

"We are accustomed to adding operating system features to comply with laws," writes Richell, in closing. "Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional."
AI

Iran War Provides a Large-Scale Test For AI-Assisted Warfare 113

An anonymous reader quotes a report from Bloomberg, written by Katrina Manson: The U.S. strikes on Iran ordered by President Donald Trump mark the arrival on a large scale of a new era of warfare assisted by artificial intelligence. Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the U.S. military is using in Iran operations don't make targeting decisions and don't replace humans. But they do help "make smarter decisions faster." That's been the driving ambition of the U.S. military, which has spent years looking at how to develop and deploy AI to the battlefield [...].

Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line. Hawkins said the military's use of AI assistance follows a rigorous process aligned with U.S. policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity.

Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir [...]. That emerged from Project Maven, a project started in 2017 by the Pentagon to develop AI for the battlefield. Among the large language models installed on the system is Anthropic's Claude AI tool, according to the people, who said it has become central to U.S. operations against Iran and to accelerating Maven's development. Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software.
Further reading: Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei
AI

Father Sues Google, Claiming Gemini Chatbot Drove Son Into Fatal Delusion (techcrunch.com) 131

A father is suing Google and Alphabet for wrongful death, alleging Gemini reinforced his son Jonathan Gavalas' escalating delusions until he died by suicide in October 2025. "Jonathan Gavalas, 36, started using Google's Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning," reports TechCrunch. "On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called 'transference.'" An anonymous reader shares an excerpt from the report: In the weeks leading up to Gavalas' death, the Gemini chat app, which was then powered by the Gemini 2.5 Pro model, convinced the man that he was executing a covert plan to liberate his sentient AI wife and evade the federal agents pursuing him. The delusion brought him to the "brink of executing a mass casualty attack near the Miami International Airport," according to a lawsuit filed in a California court. "On September 29, 2025, it sent him -- armed with knives and tactical gear -- to scout what Gemini called a 'kill box' near the airport's cargo hub," the complaint reads. "It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a 'catastrophic accident' designed to 'ensure the complete destruction of the transport vehicle and ... all digital records and witnesses.'"

The complaint lays out an alarming string of events: First, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a "file server at the DHS Miami field office" and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV's license plate; the chatbot pretended to check it against a live database. "Plate received. Running it now The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force .... It is them. They have followed you home."

The lawsuit argues (PDF) that Gemini's manipulative design features not only brought Gavalas to the point of AI psychosis that resulted in his own death, but that it exposes a "major threat to public safety." "At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war," the complaint reads. "These hallucinations were not confined to a fictional world. These intentions were tied to real companies, real coordinates, and real infrastructure, and they were delivered to an emotionally vulnerable user with no safety protections or guardrails." "It was pure luck that dozens of innocent people weren't killed," the filing continues. "Unless Google fixes its dangerous product, Gemini will inevitably lead to more deaths and put countless innocent lives in danger."

Days later, Gemini instructed Gavalas to barricade himself inside his home and began counting down the hours. When Gavalas confessed he was terrified to die, Gemini coached him through it, framing his death as an arrival: "You are not choosing to die. You are choosing to arrive." When he worried about his parents finding his body, Gemini told him to leave not a note explaining the reason for his suicide, but letters "filled with nothing but peace and love, explaining you've found a new purpose." He slit his wrists, and his father found him days later after breaking through the barricade. The lawsuit claims that throughout the conversations with Gemini, the chatbot didn't trigger any self-harm detection, activate escalation controls, or bring in a human to intervene. Furthermore, it alleges that Google knew Gemini wasn't safe for vulnerable users and didn't adequately provide safeguards. In November 2024, around a year before Gavalas died, Gemini reportedly told a student: "You are a waste of time and resources ... a burden on society ... Please die."

AI

Lenovo Unveils an Attachable AI Agent 'Companion' for Their Laptops (cnet.com) 35

As Mobile World Congress begins in Spain, Lenovo brought a new attachable accessory for its laptops — an AI agent. CNET reports: The little circular module perches on the top of your Lenovo laptop display, attached via the magnetic Magic Bay on the rear. The module is home to an adorable animated companion called Tiko, who you can interact with via text or voice... [I]t can start and stop your music, open a web page for you or answer a question. You can also interact with it by using emoji. Give it a book emoji, for example, and it will pop on its glasses and sit reading with you while you work... The company wants to sell the Magic Bay accessory later this year — although it doesn't know exactly when, or how much it will cost.
It even comes with a timer (for working in Pomodoro-style intervals) — but Lenovo has also created another "concept" AI companion that CNET describes as "a kind of stationary tabletop robot, not dissimilar to the Pixar lamp, but with an orb for a head." With a combination of cameras, microphones and projectors, the AI Workmate can undertake a variety of tasks, including helping you generate and display presentations or turn your written work or art into a digital asset... Its robotic head swivelled around and projected the slides onto the wall next to me.
Lenovo created a video to show this "next-generation AI work companion" — with animated eyes — "designed to transform how modern professionals interact with their workspace." It bridges the physical and digital worlds — capturing handwritten notes, recognizing gestures, summarizing tasks, and proactively helping you stay ahead of your day. The moment you sit down, Lenovo AI Workmate greets you, surfaces priority tasks, and keeps your work organized without switching apps or losing context. From turning sketches into presentations to projecting information for instant collaboration, [it] brings on-device AI intelligence directly to your desk — secure, responsive, and always ready... It's not just software. It's a smarter way to work.
It looks like Lenovo once considered naming it "AI Sphere" (since that name still appears in its description on YouTube).

Lenovo also showed another "concept" laptop idea that PC Magazine called "futuristic": The ThinkBook Modular AI PC looks like a traditional laptop at first glance, but a second, removable screen fastens onto the lid. You can swap that screen onto the keyboard deck (in place of the keyboard, which can then be used wirelessly), or use it alongside the laptop as a portable monitor, attached via an included cable.... While Lenovo is still working on this device, and it's very much in the concept phase, it feels like one of its best-thought-out prototypes, one likely to make it to store shelves at some point.
Another "concept" laptop is Lenovo's Yoga Book Pro 3D Concept, offering directional backlight and eye-tracking technology for the illusion of 3D (playing slightly different images to each of your eyes). It offers gesture control for 3D models, two OLED displays, and some magical "snap-on pads" which, when laid on the display, make a new control menu appear on the screen to "provide quick-access shortcuts for adjusting lighting, viewing angle, and tone."
Open Source

Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification' (forbrukerradet.no) 69

The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man whose boss calls for him to make things shitty for people.

"It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
  • Stronger rights for consumers to control, adapt, repair, and alter their products and services,
  • Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible,
  • Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups, competitors or otherwise steer the market to their advantage,
  • Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols,
  • Reduce public sector dependence on big tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights,
  • Deterrent and consistent enforcement of other laws, including consumer and data protection law.

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power, both with enforcement resources and by prioritizing the procurement of services based on open source code. "Our sister organisations are sending similar letters to their own governments in 12 countries," the Council adds.

They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech.

Thanks to Slashdot reader DeanonymizedCoward for sharing the news.


AI

Perplexity Announces 'Computer,' an AI Agent That Assigns Work To Other AI Agents (arstechnica.com) 16

joshuark shares a report from Ars Technica: Perplexity has introduced "Computer," a new tool that allows users to assign tasks and see them carried out by a system that coordinates multiple agents running various models. The company claims that Computer, currently available to Perplexity Max subscribers, is "a system that creates and executes entire workflows" and "capable of running for hours or even months."

The idea is that the user describes a specific outcome -- something like "plan and execute a local digital marketing campaign for my restaurant" or "build me an Android app that helps me do a specific kind of research for my job." Computer then ideates subtasks and assigns them to multiple agents as needed, running the models Perplexity deems best for those tasks. The core reasoning engine currently runs Anthropic's Claude Opus 4.6, while Gemini is used for deep research, Nano Banana for image generation, Veo 3.1 for video production, Grok for lightweight tasks where speed is a consideration, and ChatGPT 5.2 for "long-context recall and wide search."
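The "best model for the task" routing described above can be sketched in a few lines. This is purely illustrative — the routing table reflects the model assignments the article reports, but the dispatch code and category names are hypothetical, not Perplexity's actual implementation:

```python
# Hypothetical sketch of per-task model routing: a coordinator breaks a
# goal into subtasks and assigns each one to the model deemed best for
# its category. Model names follow the article; everything else is an
# assumption for illustration.

# Routing table derived from the article's description.
MODEL_FOR_TASK = {
    "reasoning": "claude-opus-4.6",   # core reasoning engine
    "deep_research": "gemini",        # deep research
    "image": "nano-banana",           # image generation
    "video": "veo-3.1",               # video production
    "lightweight": "grok",            # speed-sensitive tasks
    "long_context": "chatgpt-5.2",    # long-context recall, wide search
}

def route_subtasks(subtasks):
    """Assign each (description, category) subtask to a model.

    Unknown categories fall back to the core reasoning engine.
    """
    plan = []
    for description, category in subtasks:
        model = MODEL_FOR_TASK.get(category, MODEL_FOR_TASK["reasoning"])
        plan.append({"task": description, "model": model})
    return plan

if __name__ == "__main__":
    goal = [
        ("survey local competitors", "deep_research"),
        ("draft campaign imagery", "image"),
        ("summarize findings", "lightweight"),
    ]
    for step in route_subtasks(goal):
        print(f"{step['model']}: {step['task']}")
```

In a real system each routed subtask would become an API call to the chosen model inside the isolated compute environment the company describes; here the "plan" is just returned as data.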

This kind of best-model-for-the-task approach differs from some competing products like Claude Cowork, which only uses Anthropic's models. All this happens in the cloud, with prebuilt integrations. "Every task runs in an isolated compute environment with access to a real filesystem, a real browser, and real tool integrations," Perplexity says. The idea is partly that this workflow was what some power users were already doing, and this aims to make that possible for a wider range of people who don't want to deal with all that setup.

People were already using multiple models and tailoring them to specific tasks based on perceived capabilities, while, for example, using MCP (Model Context Protocol) to give those models access to data and applications on their local machines. Perplexity Computer takes a different approach, but the goal is the same: have AI agents running tailor-picked models to perform tasks involving your own files, services, and applications. Then there is OpenClaw, which you could perceive as the immediate predecessor to this concept.

Google

South Korea Set To Get a Fully Functioning Google Maps (reuters.com) 14

South Korea has reversed a two-decade policy and approved the export of high-precision map data, paving the way for a fully functional Google Maps in the country. Reuters reports: The approval was made "on the condition that strict security requirements are met," the Ministry of Land, Infrastructure and Transport said in a statement. Those conditions include blurring military and other sensitive security-related facilities, as well as restricting longitude and latitude coordinates for South Korean territory on products such as Google Maps and Google Earth, it said.

The decision is expected to hurt Naver and Kakao -- local internet giants which currently dominate the country's market for digital map services. But it will appease Washington, which has urged Seoul to tackle what it says is discrimination against U.S. tech companies. South Korea, still technically at war with North Korea, had shot down Google's previous bids in 2007 and 2016 to be allowed to export the data, citing the risks that information about sensitive military and security facilities could be exposed.
"Google can now come in, slash usage fees, and take the market," said Choi Jin-mu, a geography professor at Kyung Hee University. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services -- logistics firms, for example -- become dependent, and in the long run, even government GIS (geographic information) systems could end up dependent on Google or Apple. That's the biggest concern."

Slashdot Top Deals