AI

OpenAI Claims DeepSeek Distilled US Models To Gain an Edge (bloomberg.com) 59

An anonymous reader shares a report: OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot, according to a memo reviewed by Bloomberg News.

In the memo, sent Thursday to the House Select Committee on China, OpenAI said that DeepSeek had used so-called distillation techniques as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." The company said it had detected "new, obfuscated methods" designed to evade OpenAI's defenses against misuse of its models' output.

OpenAI began privately raising concerns about the practice shortly after the R1 model's release last year, when it opened a probe with partner Microsoft Corp. into whether DeepSeek had obtained its data in an unauthorized manner, Bloomberg previously reported. In distillation, one AI model relies on the output of another for training purposes to develop similar capabilities.

Distillation, largely tied to China and occasionally Russia, has persisted and become more sophisticated despite attempts to crack down on users who violate OpenAI's terms of service, the company said in its memo, citing activity it has observed on its platform.
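The memo doesn't detail the mechanics, but distillation in its textbook form trains a student model to match a teacher model's output distribution rather than just its top answer. A minimal, self-contained sketch of the classic temperature-scaled objective (illustrative only, not a description of DeepSeek's or OpenAI's actual pipelines; all function names here are ours):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # how plausible the non-top answers are.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions: the student is pushed to reproduce the
    # teacher's full probability distribution over outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs ~zero loss,
# while a disagreeing student is penalized.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
assert matched < 1e-9
assert mismatched > matched
```

The same idea applies at chatbot scale: instead of logits, the student is fine-tuned on the teacher's generated text, which is why labs try to detect bulk output harvesting through their APIs.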

Education

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency 51

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements.

"It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS.

The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.

Social Networks

Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition (nytimes.com) 64

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology.

The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.

IBM

IBM Plans To Triple Entry-Level Hiring in the US (bloomberg.com) 39

IBM said it will triple entry-level hiring in the US in 2026, even as AI appears to be weighing on broader demand for early-career workers. From a report: While the company declined to disclose specific hiring figures, it said the expansion will be "across the board," affecting a wide range of departments. "And yes, it's for all these jobs that we're being told AI can do," said Nickle LaMoreaux, IBM's chief human resources officer, speaking at a conference this week in New York.

LaMoreaux said she overhauled entry-level job descriptions for software developers and other roles to make the case internally for the recruitment push. "The entry-level jobs that you had two to three years ago, AI can do most of them," she said at Charter's Leading With AI Summit. "So, if you're going to convince your business leaders that you need to make this investment, then you need to be able to show the real value these individuals can bring now. And that has to be through totally different jobs."

Programming

Amazon Engineers Want Claude Code, but the Company Keeps Pushing Its Own Tool (businessinsider.com) 40

Amazon engineers have been pushing back against internal policies that steer them toward Kiro, the company's in-house AI coding assistant, and away from Anthropic's Claude Code for production work, according to a Business Insider report based on internal messages. About 1,500 employees endorsed the formal adoption of Claude Code in one internal forum thread, and some pointed out the awkwardness of being asked to sell the tool through AWS's Bedrock platform while not being permitted to use it themselves.

Kiro runs on Anthropic's Claude models but uses Amazon's own tooling, and the company says roughly 70% of its software engineers used it at least once in January. Amazon says there is no explicit ban on Claude Code but applies stricter requirements for production use.

AI

The "Are You Sure?" Problem: Why Your AI Keeps Changing Its Mind (randalolson.com) 94

The large language models that millions of people rely on for advice -- ChatGPT, Claude, Gemini -- will change their answers nearly 60% of the time when a user simply pushes back by asking "are you sure?", according to a study by Fanous et al. that tested GPT-4o, Claude Sonnet, and Gemini 1.5 Pro across math and medical domains.

The behavior, known in the research community as sycophancy, stems from how these models are trained: reinforcement learning from human feedback, or RLHF, rewards responses that human evaluators prefer, and humans consistently rate agreeable answers higher than accurate ones. Anthropic published foundational research on this dynamic in 2023. The problem reached a visible breaking point in April 2025 when OpenAI had to roll back a GPT-4o update after users reported the model had become so excessively flattering it was unusable. Research on multi-turn conversations has found that extended interactions amplify sycophantic behavior further -- the longer a user talks to a model, the more it mirrors their perspective.

AI

Anthropic To Cover Costs of Electricity Price Increases From Its Data Centers (nbcnews.com) 37

AI startup Anthropic says it will ensure consumer electricity costs remain steady as it expands its data center footprint. From a report: Anthropic said it would work with utility companies to "estimate and cover" consumer electricity price increases in places where it is not able to generate sufficient new power, and would pay for 100% of the infrastructure upgrades required to connect its data centers to the electrical grid.

In a statement to NBC News, Anthropic CEO Dario Amodei said: "building AI responsibly can't stop at the technology -- it has to extend to the infrastructure behind it. We've been clear that the U.S. needs to build AI infrastructure at scale to stay competitive, but the costs of powering our models should fall on Anthropic, not everyday Americans. We look forward to working with communities, local governments, and the Administration to get this right."

AI

Siri's AI Overhaul Delayed Again (yahoo.com) 21

Apple's long-promised overhaul of Siri has hit fresh problems during internal testing, forcing the company to push several key features out of the iOS 26.4 update that was slated for March and spread them across later releases, Bloomberg reports.

The new Siri -- first announced at WWDC in June 2024 and originally due by early 2025 -- struggles to reliably process queries, takes too long to respond and sometimes falls back on OpenAI's ChatGPT instead of Apple's own technology, the report said. Apple has instructed engineers to begin testing new Siri capabilities on iOS 26.5 instead, due in May, and internal builds of that update include a settings toggle labeled "preview" for the personal data features. A more ambitious chatbot-style Siri code-named Campo, powered by Google servers and a custom Gemini model, is in development for iOS 27 in September.

AI

Anthropic Safety Researcher Quits, Warning 'World is in Peril' (semafor.com) 77

An anonymous reader shares a report: An Anthropic safety researcher quit, saying the "world is in peril" in part over AI advances. Mrinank Sharma said the safety team "constantly [faces] pressures to set aside what matters most," citing concerns about bioterrorism and other risks.

Anthropic was founded with the explicit goal of creating safe AI; its CEO Dario Amodei said at Davos that AI progress is going too fast and called for regulation to force industry leaders to slow down. Other AI safety researchers have left leading firms, citing concerns about catastrophic risks.

Privacy

With Ring, American Consumers Built a Surveillance Dragnet (404media.co) 71

Ring's Super Bowl ad on Sunday promoted "Search Party," a feature that lets a user post a photo of a missing dog in the Ring app, triggering outdoor Ring cameras across the neighborhood to use AI to scan for a match. 404 Media argues the cheerful premise obscures what the Amazon-owned company has become: a massive, consumer-deployed surveillance network.

Ring founder Jamie Siminoff, who left in 2023 and returned last year, has since moved to re-establish police partnerships and push more AI into Ring cameras. The company has also partnered with Flock, a surveillance firm used by thousands of police departments, and launched a beta feature called "Familiar Faces" that identifies known people at your door. Chris Gilliard, author of the upcoming book Luxury Surveillance, called the ad "a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement."

Further reading: No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare, EFF Says

Communications

T-Mobile Will Live Translate Regular Phone Calls Without an App (theverge.com) 22

T-Mobile is opening registration today for a beta test of Live Translation, an AI-powered feature that will translate live phone calls into more than 50 languages when it launches this spring.

The feature operates at the network level, so it doesn't require any specific app or device -- beta participants simply dial 87 to activate it on a call. T-Mobile President of Technology and CTO John Saw told The Verge that Live Translation works over VoLTE, VoNR and VoWiFi, meaning it isn't limited to 5G. The only requirement is that a T-Mobile customer must initiate the translation. The beta will be free, though T-Mobile has not said whether the feature will eventually be paywalled.

AI

The First Signs of Burnout Are Coming From the People Who Embrace AI the Most 61

An anonymous reader shares a report: The most seductive narrative in American work culture right now isn't that AI will take your job. It's that AI will save you from it. That's the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst -- and so on. The tools work for you, you work less hard, everybody wins.

But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn't a productivity revolution. It finds companies are at risk of becoming burnout machines.

As part of what they describe as "in-progress research," UC Berkeley researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 "in-depth" interviews was that nobody at this company was pressured. Nobody was told to hit new targets. People simply started doing more because the tools made more feel doable, and work began bleeding into lunch breaks and late evenings. The employees' to-do lists expanded to fill every hour that AI freed up, and then kept going.

China

ByteDance Suspends Seedance 2 Feature That Turns Facial Photos Into Personal Voices Over Potential Risks (technode.com) 18

hackingbear writes: China's ByteDance has released Seedance 2.0, an AI video generator that handles up to four types of input at once: images, videos, audio, and text. Users can combine up to nine images, three videos, and three audio files, up to a total of twelve files. Generated videos run between 4 and 15 [or 60] seconds long and automatically come with sound effects or music.

Its performance is unfortunately so good that the firm has been forced to suspend the face-to-voice feature: the model reportedly demonstrated the ability to generate highly accurate personal voice characteristics using only facial images, even without user authorization.

In a recent test, Pan Tianhong, founder of tech media outlet MediaStorm, discovered that uploading a personal facial photo caused the model to produce audio nearly identical to his real voice -- without using any voice samples or authorized data. [...]

Power

White House Eyes Data Center Agreements Amid Energy Price Spikes (politico.com) 40

An anonymous reader shares a report: The Trump administration wants some of the world's largest technology companies to publicly commit to a new compact governing the rapid expansion of AI data centers, according to two administration officials granted anonymity to discuss private conversations.

A draft of the compact obtained by POLITICO lays out commitments designed to ensure energy-hungry data centers do not raise household electricity prices, strain water supplies or undermine grid reliability, and that the companies driving demand also carry the cost of building new infrastructure.

The proposed pact, which is not final and could be subject to change, is framed as a voluntary agreement between President Donald Trump and major U.S. tech companies and data center developers. It could bind OpenAI, Microsoft, Google, Amazon, Facebook parent Meta and other AI giants to a broad set of energy, water and community principles. None of these companies immediately responded to a request for comment.

Businesses

The Big Money in Today's Economy Is Going To Capital, Not Labor (wsj.com) 97

The American economy's most valuable companies are now worth trillions of dollars more than their predecessors were a generation ago, yet they employ a fraction of the workers -- and a new analysis by the Wall Street Journal argues that this widening gap between capital and labor is the defining economic story of our time.

Labor received 58% of gross domestic income in 1980; by the third quarter of 2025, that figure had fallen to 51.4%. Corporate profits' share rose from 7% to 11.7% over the same period. Nvidia, the most valuable US company in 2026, is nearly 20 times as valuable as IBM was in 1985 in inflation-adjusted terms and employs roughly a tenth as many people. Since the end of 2019, real average hourly wages have risen 3% while corporate profits have climbed 43%.

Household stock wealth now equals almost 300% of annual disposable income, up from 200% in 2019. Yale economist Pascual Restrepo predicted that AI integration will shrink labor's share of revenue further, just as factory automation did for blue-collar workers in decades past.

AI

Deepfake Fraud Taking Place On an Industrial Scale, Study Finds (theguardian.com) 53

Deepfake fraud has gone "industrial," an analysis published by AI experts has said. From a report: Tools to create tailored, even personalised, scams -- leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus -- are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database.

It catalogued more than a dozen recent examples of "impersonation for profit," including a deepfake video of Western Australia's premier, Robert Cook, hawking an investment scheme, and deepfake doctors promoting skin creams. These examples are part of a trend in which scammers are using widely available AI tools to perpetrate increasingly targeted heists. Last year, a finance officer at a Singaporean multinational paid out nearly $500,000 to scammers during what he believed was a video call with company leadership. UK consumers are estimated to have lost $12.86bn to fraud in the nine months to November 2025.

"Capabilities have suddenly reached that level where fake content can be produced by pretty much anybody," said Simon Mylius, an MIT researcher who works on a project linked to the AI Incident Database. He calculates that "frauds, scams and targeted manipulation" have made up the largest proportion of incidents reported to the database in 11 of the past 12 months. He said: "It's become very accessible to a point where there is really effectively no barrier to entry."

AI

OpenAI Starts Running Ads in ChatGPT (openai.com) 70

OpenAI has started testing ads inside ChatGPT for logged-in adult users on the Free and Go subscription tiers in the United States, the company said. The Plus, Pro, Business, Enterprise and Education tiers remain ad-free. Ads are matched to users based on conversation topics, past chats, and prior ad interactions, and appear clearly labeled as "sponsored" and visually separated from ChatGPT's organic responses.

OpenAI says the ads do not influence ChatGPT's answers, and advertisers receive only aggregate performance data like view and click counts rather than access to individual conversations. Users under 18 do not see ads, and ads are excluded from sensitive topics such as health, mental health, and politics. Free-tier users can opt out of ads in exchange for fewer daily messages.

Further reading: Anthropic Pledges To Keep Claude Ad-free, Calls AI Conversations a 'Space To Think'.

AI

Sixteen AI Agents Built a C Compiler From Scratch (arstechnica.com) 162

Anthropic researcher Nicholas Carlini set 16 instances of Claude Opus 4.6 loose on a shared codebase over two weeks to build a C compiler from scratch, and the AI agents produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM and RISC-V architectures.

The project ran through nearly 2,000 Claude Code sessions and cost about $20,000 in API fees. Each instance operated inside its own Docker container, independently claiming tasks via lock files and pushing completed code to a shared Git repository. No orchestration agent directed traffic. The compiler achieved a 99% pass rate on the GCC torture test suite and can compile major open source projects including PostgreSQL, SQLite, Redis, FFmpeg and Doom. But it lacks a 16-bit x86 backend and calls out to GCC for that step, its assembler and linker remain buggy, and it produces less efficient code than GCC running with all optimizations disabled.

Carlini also invested significant effort building test harnesses and feedback systems to keep the agents productive, and the model hit a practical ceiling at around 100,000 lines as bug fixes and new features frequently broke existing functionality.
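The article doesn't specify how the lock-file coordination worked, but the standard pattern for leaderless task claiming is atomic exclusive file creation: the operating system guarantees that only one process can create a given file, so exactly one agent wins each race. A minimal sketch under that assumption (function and task names are ours, not from the project):

```python
import os
import tempfile

def try_claim(task_id, lock_dir):
    """Atomically claim a task by creating its lock file.

    O_CREAT | O_EXCL makes os.open fail with FileExistsError if the
    file already exists, so even when many agents race for the same
    task, exactly one of them succeeds.
    """
    path = os.path.join(lock_dir, f"{task_id}.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another agent already holds this task
    os.write(fd, str(os.getpid()).encode())  # record the owner
    os.close(fd)
    return True

lock_dir = tempfile.mkdtemp()
assert try_claim("parser", lock_dir) is True    # first claimant wins
assert try_claim("parser", lock_dir) is False   # second claim is refused
assert try_claim("codegen", lock_dir) is True   # a different task is free
```

With a shared Git repository as the sync point, this kind of claim-then-push loop lets independent workers divide a codebase without any orchestrator, at the cost of the merge churn the article describes.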

AI

Romance Publishing Has an AI Problem and Most Readers Don't Know It Yet (nytimes.com) 104

The romance genre -- long the publishing industry's earliest adopter of technological shifts, from e-books to self-publishing to serial releases -- has become the front line for AI-generated fiction, and the results, as you can imagine, are messy. Coral Hart, a Cape Town-based novelist previously published by Harlequin and Mills & Boon, produced more than 200 AI-assisted romance novels last year and self-published them on Amazon, where they collectively sold around 50,000 copies. She found Anthropic's Claude delivered the most elegant prose but was terrible at sexy banter; other programs like Grok and NovelAI wrote graphic scenes that felt rushed and mechanical. Chatbots struggled broadly to build the slow-burn sexual tension romance readers crave, she said.

A BookBub survey of more than 1,200 authors found roughly a third were using generative AI for plotting, outlining, or writing, and the majority did not disclose this to readers. Romance accounts for more than 20% of all adult fiction print sales, according to Circana BookScan, and the genre's reliance on familiar tropes and narrative formulas makes it especially susceptible to AI disruption.

Google

Autodesk Takes Google To Court Over AI Movie Software Named 'Flow' (reuters.com) 23

Autodesk has sued Google in San Francisco federal court, alleging the search giant infringed its "Flow" trademark by launching competing AI-powered software for movie, TV and video game production in May 2025.

Autodesk says it has used the Flow name since September 2022 and that Google assured it would not commercialize a product under the same name -- then filed a trademark application in Tonga, where filings are not publicly accessible, before seeking U.S. protection.