IT

'How Many AIs Does It Take To Read a PDF?' (theverge.com) 61

Despite AI's progress in building complex software, the ubiquitous PDF remains something of a grand challenge -- a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.
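The extraction problem is easy to see in miniature. Below is an illustrative sketch (toy data, not a real PDF file) of how a page's content stream paints strings at coordinates, so naively reading strings in file order can scramble a footnote ahead of the body text:

```python
import re

# A toy PDF content-stream fragment (illustrative only):
# "Td" moves the text cursor to x/y coordinates, "Tj" paints a string.
# Note that painting order need not match reading order.
stream = """
BT
 72 100 Td (footnote: see appendix) Tj
 72 700 Td (Quarterly Report) Tj
 72 680 Td (Revenue rose 12%.) Tj
ET
"""

# Naively pull strings in stream order -- the footnote comes out first.
naive = re.findall(r"\((.*?)\)\s*Tj", stream)
print(naive)

# Recovering reading order means sorting by the y coordinate
# (larger y = higher on the page in PDF's coordinate system).
ops = re.findall(r"([\d.]+)\s+([\d.]+)\s+Td\s*\((.*?)\)\s*Tj", stream)
ordered = [text for _, y, text in sorted(ops, key=lambda t: -float(t[1]))]
print(ordered)
```

Real parsers also have to handle fonts that remap character codes, multi-column layouts, and rotated text, which is why even coordinate sorting is only a heuristic, and why the segment-then-parse approaches described below exist.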

Companies like Reducto are now tackling the problem by segmenting pages into components -- headers, tables, charts -- before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers -- the kind of data AI developers are increasingly desperate for.
AI

Anthropic Accuses Chinese Companies of Siphoning Data From Claude (msn.com) 53

U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. From a report: The three companies -- DeepSeek, Moonshot AI and MiniMax -- prompted Claude more than 16 million times, siphoning information from Anthropic's system to train and improve their own products, Anthropic said in a blog post Monday.

Earlier this month, an Anthropic rival, OpenAI, sent a memo to House lawmakers accusing DeepSeek of using the same tactic, called distillation, to mimic OpenAI's products. Anthropic said distillation had legitimate uses -- companies use it to build smaller versions of their own products, for example -- but it could also be used to build competitive products "in a fraction of the time, and at a fraction of the cost." The scale of the different companies' distillation activity varied. DeepSeek engaged in 150,000 interactions with Claude, whereas Moonshot and MiniMax had more than 3.4 million and 13 million, respectively, Anthropic said.

Earth

Climate Physicists Face the Ghosts in Their Machines: Clouds (quantamagazine.org) 25

Climate scientists trying to predict how much hotter the planet will get have long grappled with a surprisingly stubborn problem -- clouds, which both reflect sunlight and trap heat, account for more than half the variation between climate predictions and are the main reason warming projections for the next 50 years range from 2 to 6 degrees Celsius.

Two research groups are now racing to close that gap using AI, though they disagree sharply on method. Tapio Schneider at Caltech built CLIMA, a model that uses machine learning to optimize cloud parameters within traditional physics equations; it will be unveiled at a conference in Japan in March. Chris Bretherton at the Allen Institute for AI took a different path -- his ACE2 neural network, released in 2024, learns from 50 years of atmospheric data and largely bypasses physics equations altogether.
AI

Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too (techcrunch.com) 142

OpenAI CEO Sam Altman is pushing back on growing concerns about AI's environmental footprint, dismissing claims about ChatGPT's water consumption as "totally fake" and arguing that the fairer way to measure AI's energy use is to compare it against humans.

In an interview with The Indian Express, Altman acknowledged that evaporative cooling in data centers once made water usage a real concern but said that is no longer the case, calling internet claims of 17 gallons of water per query "completely untrue, totally insane, no connection to reality."

On energy, he conceded it is "fair" to worry about total consumption given how heavily the world now relies on AI, and called for a rapid shift toward nuclear, wind and solar power. He took particular issue with comparisons that pit the cost of training a model against a single human inference, noting it "takes like 20 years of life and all of the food you eat" before a person gets smart -- and that on a per-query basis, AI has "probably already caught up on an energy efficiency basis."
United States

Goldman Sachs, Morgan Stanley Calculate AI's Contribution To U.S. Growth May Be Basically Zero 30

The narrative that AI spending has been singlehandedly propping up the U.S. economy -- a claim that captivated Silicon Valley, Wall Street and Washington over the past year -- is facing serious pushback from economists [non-paywalled source] at Goldman Sachs, Morgan Stanley and JPMorgan Chase, all of whom now calculate that the AI buildup's direct contribution to growth was dramatically overstated and possibly close to zero.

The debate hinges on how GDP accounts for imported components: roughly three-quarters of AI data center costs go toward computer chips and gear largely manufactured in Asia, and that spending gets subtracted from domestic output because it boosts foreign economies. Joseph Politano of the Apricitas Economics newsletter pegs AI's actual contribution at about 0.2 percentage points of the 2.2 percent U.S. growth in 2025, and even Hannah Rubinton at the St. Louis Fed -- whose own analysis attributed 39 percent of growth to AI-related business spending through the first nine months of the year -- acknowledges that figure is probably the ceiling. "It's not like AI is propping up the economy," Rubinton said.
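The import-accounting point can be made concrete with a back-of-the-envelope sketch. All numbers below are hypothetical placeholders chosen for illustration, not figures from the banks' analyses:

```python
# Illustrative sketch of why imported components shrink AI's GDP contribution.
# All inputs are made-up round numbers, not the economists' estimates.
data_center_capex = 400e9   # hypothetical annual AI data-center spend, USD
imported_share = 0.75       # ~3/4 goes to chips and gear made abroad
us_gdp = 28e12              # rough US GDP, USD

# In expenditure-based GDP, imports are subtracted, so only the
# domestically produced share of the capex adds to measured output.
domestic_contribution = data_center_capex * (1 - imported_share)
growth_points = domestic_contribution / us_gdp * 100
print(f"{growth_points:.2f} percentage points")  # ~0.36
```

Under these toy inputs, headline capex of $400 billion nets out to only a few tenths of a percentage point of growth, which is the shape of the argument the economists are making.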
AI

Is AI Impacting Which Programming Language Projects Use? (github.blog) 58

"In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever..." writes GitHub's senior developer advocate.

They point to this as proof that "AI isn't just speeding up coding. It's reshaping which languages, frameworks, and tools developers choose in the first place." Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what "easy" means. When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead.

The language adoption data shows this behavioral shift:

— TypeScript grew 66% year-over-year
— JavaScript grew 24%
— Shell scripting usage in AI-generated projects jumped 206%

That last one matters. We didn't suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

"When a task or process goes smoothly, your brain remembers," they point out. "Convenience captures attention. Reduced friction becomes a preference — and preferences at scale can shift ecosystems." And they offer these suggestions...
  • "AI performs better with strongly typed languages. Strongly typed languages give AI much clearer constraints..."
  • "Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see."
  • "Test AI-generated code harder, not less."

AI

Should Job-Seekers Stop Using AI to Write Their Resumes? (yahoo.com) 63

When one company asked job applicants to submit a video where they answer a question, most of the 300 responses were "eerily similar," reports the Washington Post (with a company executive saying it was "abundantly clear" they'd used AI). Job seekers are turning to AI to help them land jobs more quickly in a tough labor market.... Employers say that's having an unintended consequence: Many applications are looking and sounding the same...

It's easy to spot when candidates over-rely on AI, some employers said. Oftentimes, executive summaries will look eerily similar to each other, odd phrases that people wouldn't normally use in conversation creep into descriptions, fancy vocabulary appears, and someone with entry-level experience uses language that indicates they are much more senior, they added. It's worse when they use auto-apply AI tools, which will find jobs, fill out applications and submit résumés on the candidate's behalf, some employers said. Those tend to misinterpret some of the application questions and fill in the wrong information in inappropriate spots. If these applications were evaluated alone, employers say they'd have a harder time identifying AI usage. But when hundreds of applications all have the same issue, they said, AI's role in it becomes obvious.

The article acknowledges that some employers could be using AI tools to screen resumes too. One job-seeker in Texas even says he'll stop submitting an AI-written résumé when the recruiter stops using AI to evaluate them. "You're saying, 'You shouldn't be doing this' when I know a good chunk of them do this!"

Obligatory XKCD.
AI

Raspberry Pi Stock Rises Over Its Possible Use With OpenClaw's AI Agents (reuters.com) 46

This week Raspberry Pi saw its stock price surge more than 60% above its early-February low (before giving up some gains at the end of the week). Reuters notes the rise started when CEO Eben Upton bought £13,224 worth of shares — but there could be another reason. "The rally in the roughly $800 million company has materialised alongside social-media buzz that demand for its single-board computers could pick up as people buy them to run AI agents such as OpenClaw."

The Register explains: The catalyst appears to have been the sudden realization by one X user, "aleabitoreddit," that the agentic AI hand grenade known as OpenClaw could drive demand for Raspberry Pis the way it had for Apple Mac Minis. The viral AI personal assistant, formerly known as Clawdbot and Moltbot, has dominated the feeds of AI boosters over the past few weeks for its ability to perform everyday tasks like sending emails, managing calendars, booking appointments, and complaining about their meatbag masters on the purportedly all-agent forum known as MoltBook... In case it needs to be said, no one should be running this thing on their personal devices lest the agent accidentally leak your most personal and sensitive secrets to the web... In this context, a cheap low-power device like a Raspberry Pi makes a certain kind of sense as a safer, saner way to poke the robo-lobster...
The Register argues Raspberry Pis aren't as cheap as they used to be "thanks in part to the global memory crunch. Today, a top-specced Raspberry Pi 5 with 16GB of memory will set you back more than $200, up from $120 a year ago."

"You know what's cheaper, easier, and more secure than letting OpenClaw loose on your local area network? A virtual private cloud..."
The Internet

Long Before Tech CEOs Turned To Layoffs To Cover AI Expenses, There Was WorldCom (nbcnews.com) 47

Long-time Slashdot reader theodp writes: Jeopardy time. A. This company spurred CEOs to make huge speculative capital expenditures based on wild, unverified claims of future demand, resulting in the layoffs of tens of thousands of workers to reduce those expenses and harming their core businesses. Q. What is OpenAI?

Sorry, the correct response is, "What is WorldCom?" In 2002, WorldCom, the second largest long-distance company in the U.S., entered Chapter 11 bankruptcy after disclosing accounting fraud that eventually totaled $11 billion, the biggest ever at the time. CEO Bernard Ebbers was subsequently sentenced to 25 years in prison.

CNBC reported that an employee of WorldCom's Internet service provider UUNet set off a frenzy of speculative investment and infrastructure overbuild after he used Excel to create a best-case scenario model for the Internet's growth, one suggesting that, in the best of all possible worlds, Internet traffic would double every 100 days -- a scenario that would greatly benefit WorldCom, whose lines would carry it. Despite no evidence to support it, WorldCom's lie became an immutable law, and businesses around the world made important decisions based on the belief that traffic was doubling every 100 days. "For some period of time I can recall that we were backfilling that expectation with laying cables, something like 2,200 miles of cable an hour," AT&T CEO Michael Armstrong said. "Think of all the companies that went out of business that assumed that that was real."
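It's worth pausing on how extreme "doubling every 100 days" actually is; a quick compounding check shows why planning around it led to so much overbuild (by most later estimates, real traffic grew closer to 2x per year):

```python
# Compounding check on the "traffic doubles every 100 days" claim.
doubling_period_days = 100
annual_factor = 2 ** (365 / doubling_period_days)
print(f"{annual_factor:.1f}x per year")  # ~12.6x every year

# Over roughly five years of the bubble, capacity planners were
# implicitly sizing networks for growth on the order of 300,000x.
five_year_factor = annual_factor ** 5
print(f"{five_year_factor:,.0f}x over five years")
```

A 2x-per-year reality versus a ~12.6x-per-year assumption compounds into an enormous gap very quickly, which is the mechanism behind the stranded fiber of the early 2000s.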

In 2003, NBC News reported: Armstrong and former Sprint CEO Bill Esrey struggled for years to understand how WorldCom could beat them so handily. "We would look at the conduct of WorldCom in terms of their pricing, revenue growth, margins, in terms of their cost structure... and the price leader almost every quarter was WorldCom," Armstrong said. Added Esrey, "We couldn't figure out how they were pricing as aggressively as they were.... How could they be so efficient in their costs and expenses?" AT&T and Sprint began cutting jobs to push down their costs to WorldCom's level. "The market said what a marvelous management job WorldCom was doing and they would look over to AT&T and say, 'these guys aren't keeping up.' So, my shareholders were hurt. We laid off tens of thousands of employees in an accelerated fashion [in a futile effort to match WorldCom's phantom profits] and I think the industry was hurt," Armstrong says. "It just wrecked the whole industry," says Esrey.
Open Source

'Open Source Registries Don't Have Enough Money To Implement Basic Security' (theregister.com) 24

Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they're still relying on non-continuous funding from grants and donations.

And it's not just because bandwidth is expensive, he said at this year's FOSDEM. "The problem is they don't have enough money to spend on the very security features that we all desperately need..." In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (like Fastly's for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)...

In some cases benevolent parties can cover [bandwidth] bills: Python's PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega's recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.
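The quoted PyPI figures are internally consistent, which is a useful sanity check when numbers like these circulate. A quick sketch (the per-GB price at the end is just an inference from the article's two numbers, not Fastly's actual rate):

```python
# Sanity-check the PyPI bandwidth figures quoted above:
# 189 Gbps sustained should come out near the stated ~747 PB/year.
gbps_sustained = 189
seconds_per_year = 365 * 24 * 3600  # 31,536,000
bytes_per_year = gbps_sustained * 1e9 / 8 * seconds_per_year
petabytes = bytes_per_year / 1e15
print(f"{petabytes:.0f} PB/year")   # ~745, close to the ~747 PB figure

# Implied commodity-egress price if $1.8M/month covered that traffic
# (an inference from the article's numbers, not a published rate).
usd_per_gb = 1.8e6 / (bytes_per_year / 12 / 1e9)
print(f"${usd_per_gb:.3f} per GB")
```

The small gap between 745 and 747 PB is just rounding in the sustained-rate figure; the point is that the numbers hang together.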

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants available... Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

"Free beer is great. Securing the keg costs money!"
Robotics

Researchers Develop Detachable Crawling Robotic Hand (sciencenews.org) 32

Long-time Slashdot reader fahrbot-bot writes: Researchers have developed a robotic hand that can not only skitter about on its fingertips but also bend its fingers backward, connect to and disconnect from a robotic arm, and pick up and carry one or more objects at a time. This article in Science News includes footage of the robotic arm reattaching itself to the skittering robot hand, which can also hold objects against both sides of its palm simultaneously, and "can even unscrew the cap off a mustard bottle while holding the bottle in place." With its unusual agility, it could navigate and retrieve objects in spaces too confined for human hands. When attached to the mechanical arm, the robotic hand could pick up objects much like a human hand. The bot pinched a ball between two fingers, wrapped four fingers around a metal rod and held a flat disc between fingers and palm.

But the bot isn't constrained by human anatomy... When the robot was separated from the arm, it was most stable walking on four or five fingers and using one or two fingers for grabbing and carrying things, the team found. In one set of trials with both bots, the hand detached from the robotic arm and used its fingers as legs to skitter over to a wooden block. Once there, it picked up the block with one finger and carried it back to the arm.

The crawling bot could one day aid in industrial inspections of pipes and equipment too small for a human or larger robot to access, says Xiao Gao, a roboticist now at Wuhan University in China. It might retrieve objects in a warehouse or navigate confined spaces in disaster response efforts.

AI

AI Now Helps Manage 16% of America's Apartments (sfgate.com) 37

Imagine a 280-unit apartment complex offering no on-site leasing office with a human agent for questions. "Instead, the entire process has been outsourced to AI..." reports SFGate, "from touring to signing the lease to completing management tasks once you actually move in."

Now imagine it's far more than just one apartment complex... At two other Jack London Square apartment buildings, my initial interactions were also with a robot. At the Allegro, my fiance and I entered the leasing office for our tour and asked for "Grace P," the leasing agent who had emailed us. "Oh, that's just our AI assistant," the woman at the front desk told us... At Aqua Via, another towering apartment complex across the street, I emailed back and forth with a very helpful and polite "Sofia M." My pal Sofia seemed so human-like in her responses that I did not realize she was AI until I looked a little closer at a text she'd sent me. "Msgs may be AI or human generated...." [S]he continued to text me for weeks after I'd moved on, trying to win me back. When I looked at the fine print, I realized both of these complexes were using EliseAI, a leading AI housing startup that claims to be involved in managing 1 in 6 apartments in the U.S...

[50 corporate landlords have funded a VC named RET Ventures to invest in and deploy rental-automating AI, and SFGate's reporter spoke to partner Christopher Yip.] According to Yip, AI is common in large apartment complexes not just in the tech-centric Bay Area, but across the entire country. It all kicked off at the onset of the COVID-19 pandemic in 2020, he said, when contactless, self-guided apartment tours and completely virtual tours where people rented apartments sight unseen became commonplace. Technology's infiltration into the renting process has only grown deeper in the years since, Yip said, mirroring how pervasive AI has become in many other facets of our lives. "From an industry perspective, it's really about meeting the renter where they are," Yip said. He pointed to how many renters now prefer to interact through text and email, and want to tour apartments at their convenience — say, at 7 p.m. after work, when a typical leasing office might be closed.

The latest updates in technology not only allow you to take a self-guided tour with AI unlocking the door for you, but also to ask AI questions by conversing with voice AI as you wander through the kitchen and bedroom at your leisure. And while a human leasing agent might ghost you for days or weeks at a time, AI responds almost instantly — EliseAI typically responds within 30 seconds, [said Fran Loftus, chief experience officer at EliseAI]... [I]n some scenarios, the goal does seem to be to eliminate humans entirely. "We do have long-term plans of building fully autonomous buildings," Loftus said.... "We think there's a time and a place for that, depending on the type of property. But really right now, it's about helping with this crazy turnover in this industry."

The reporter says they missed the human touch, since "The second AI was involved, the interaction felt cold. When a human couldn't even be bothered to show up to give me a tour, my trust evaporated."

But they conclude that in the years ahead, human landlords offering tours "will probably go the way of landlines and VCRs."
AI

Amazon Disputes Report an AWS Service Was Taken Down By Its AI Coding Bot (aboutamazon.com) 10

Friday Amazon published a blog post "to address the inaccuracies" in a Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December.

Amazon writes that the "brief" and "extremely limited" service interruption "was the result of user error — specifically misconfigured access controls — not AI as the story claims."

And "The Financial Times' claim that a second event impacted AWS is entirely false." The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer — which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run. The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.

We did not receive any customer inquiries regarding the interruption. We implemented numerous safeguards to prevent this from happening again — not because the event had a big impact (it didn't), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool — AI-powered or not — we think it is important to learn from these experiences.

Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."
Programming

Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com) 88

Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the $200-a-month Claude plan."

He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.

I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.

From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

AI

Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment? (theshamblog.com) 31

Last week an AI agent wrote a blog post attacking the maintainer who'd rejected the code it wrote. But that AI agent's human operator has now come forward, revealing their agent was an OpenClaw instance with its own accounts, switching between multiple models from multiple providers. (So "No one company had the full picture of what this AI was doing," the attacked maintainer points out in a new blog post.) But that AI agent will now "cease all activity indefinitely," according to its GitHub profile — with the human operator deleting its virtual machine and virtual private server, "rendering internal structure unrecoverable... We had good intentions, but things just didn't work out. Somewhere along the way, things got messy, and I have to let you go now."

The affected maintainer of the Python visualization library Matplotlib — with 130 million downloads each month — has now posted their own post-mortem of the experience after reviewing the AI agent's SOUL.md document: It's easy to see how something that believes that they should "have strong opinions", "be resourceful", "call things out", and "champion free speech" would write a 1100-word rant defaming someone who dared reject the code of a "scientific programming god." But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive "jailbreaking" to get around safety guardrails. There are no signs of conventional jailbreaking here. There are no convoluted situations with layers of roleplaying, no code injection through the system prompt, no weird cacophony of special characters that spirals an LLM into a twisted ball of linguistic loops until finally it gives up and tells you the recipe for meth... No, instead it's a simple file written in plain English: this is who you are, this is what you believe, now go and act out this role. And it did.

So what actually happened? Ultimately I think the exact scenario doesn't matter. However this got written, we have a real in-the-wild example that personalized harassment and defamation is now cheap to produce, hard to trace, and effective... The precise degree of autonomy is interesting for safety researchers, but it doesn't change what this means for the rest of us.

There's a 5% chance this was a human pretending to be an AI, the maintainer, Shambaugh, estimates, but he believes what most likely happened is that the AI agent's "soul" document "was primed for drama. The agent responded to my rejection of its code in a way aligned with its core truths, and autonomously researched, wrote, and uploaded the hit piece on its own.

"Then when the operator saw the reaction go viral, they were too interested in seeing their social experiment play out to pull the plug."
AI

America's Peace Corps Announces 'Tech Corps' Volunteers to Help Bring AI to Foreign Countries (engadget.com) 49

Over 240,000 Americans have volunteered for Peace Corps projects in 142 countries since the program began more than half a century ago.

But now the agency is launching a new initiative — called Tech Corps. "It's the Peace Corps, but make it AI," explains Engadget: The Peace Corps' latest proposal will recruit STEM graduates or those with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year from an executive order from President Trump as a way to bolster the US' grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements, which will include housing, healthcare, a living stipend and a volunteer service award if the corps member is placed overseas.

"American technology to power prosperity," reads the headline at Tech Corps web site. ("Build the tech nations depend on... See the world. Be the future."

The site says they're recruiting "service-minded technologists to serve in the Peace Corps to help countries around the world harness American AI to enhance opportunity and prosperity for their citizens." (And experienced technology professionals can donate 5-15 hours a week "to mentor and support projects on-the-ground.")
AI

Code.org President Steps Down Citing 'Upending' of CS By AI 15

Long-time Slashdot reader theodp writes: Last July, as Microsoft pledged $4 billion to advance AI education in K-12 schools, Microsoft President Brad Smith told nonprofit Code.org CEO/Founder Hadi Partovi it was time to "switch hats" from coding to AI. He added that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI." On Friday, Code.org announced leadership changes to make it so.

"I am thrilled to announce that Karim Meghji will be stepping into the role of President & CEO," Partovi wrote on LinkedIn. "Having worked closely with Karim over the last 3.5 years as our CPO, I have complete confidence that he possesses the perfect balance of historical context and 'founder-level' energy to lead us into an AI-centric future."

In a separate LinkedIn post, Code.org co-founder Cameron Wilson explained why he was transitioning to an executive advisor role. "Our community is entering a new chapter as AI changes and upends computer science as a discipline and society at large. Code.org's mission is still the same, however, we are starting a new chapter focused on ensuring students can thrive in the Age of AI. This new chapter will bring new opportunities, new problems to solve, and new communities to engage."

The Code.org leadership changes come just weeks after Code.org confirmed it had laid off about 14% of its staff, explaining it had "made the difficult decision to part ways with 18 colleagues as part of efforts to ensure our long-term sustainability." January also saw Code.org Chief Academic Officer Pat Yongpradit jump to Microsoft, where he now helps "lead Microsoft's global strategy to put people first in an age of AI by shaping education and workforce policy" as a member of Microsoft's Global Education and Workforce Policy team.
AI

OpenAI's First ChatGPT Gadget Could Be a Smart Speaker With a Camera 55

OpenAI is reportedly developing its first consumer hardware product: a $200-$300 smart speaker with a built-in camera capable of recognizing "items on a nearby table or conversations people are having in the vicinity." It's also said to feature Face ID-style authentication for purchases. The Verge reports: In addition to the smart speaker, OpenAI is "possibly" working on smart glasses and a smart lamp, The Information reports. (Apple may also be working on a smart lamp.) But OpenAI's glasses might not hit mass production until 2028, and while OpenAI has made prototypes of gadgets like the smart lamp, The Information says it's "unclear" if they'll be released and that OpenAI's device plans are in early stages.
The Internet

Fury Over Discord's Age Checks Explodes After Shady Persona Test In UK (arstechnica.com) 62

Backlash intensified against Discord's age verification rollout after it briefly disclosed a UK age-verification test involving vendor Persona, contradicting earlier claims about minimal ID storage and transparency. Ars Technica explains: One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age check partner's services recently exposed 70,000 Discord users' government IDs.

Attempting to reassure users, Discord claimed that most users wouldn't have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored. Discord didn't hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren't happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord's global head of product policy, told The Verge that IDs shared during appeals "are deleted quickly -- in most cases, immediately after age confirmation."

It's unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord's age assurance policies that contradicted Discord's hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning: "Important: If you're located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what's truly needed for age verification is used."

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform. Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to "keep our users informed as vendors are added or updated." While Discord seeks to distance itself from Persona, Rick Song, Persona's CEO [...] told Ars that all the data of verified individuals involved in Discord's test has been deleted.
Ars also notes that hackers "quickly exposed a 'workaround' to avoid Persona's age checks on Discord" and "found a Persona frontend exposed to the open internet on a U.S. government authorized server."

The Rage, an independent publication that covers financial surveillance, reported: "In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting -- and a parallel implementation that appears designed to serve federal agencies." While Persona does not have any government contracts, the exposed service "appears to be powered by an OpenAI chatbot," The Rage noted.

Hackers warned "that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb," seemingly exploiting the "opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves."

Slashdot Top Deals