Power

Open-Source Tool Designed To Throttle PC and Server Performance Based On Electricity Pricing (tomshardware.com) 56

Robotics and machine learning engineer Naveen Kul developed WattWise, a lightweight open-source CLI tool that monitors power usage via smart plugs and throttles system performance based on electricity pricing and peak hours. Tom's Hardware reports: The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later. Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance. Naveen's power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop -- even one designed and used for gaming. WattWise optimizes the system's clock speed based on the current system load, power consumption as reported by the smart plug, and the time -- with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
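
The summary doesn't include any of WattWise's actual controller code, so here is a minimal, hypothetical Python sketch of the idea: a PI loop that compares smart-plug wattage against a power budget that shrinks during peak Time-of-Use hours and maps the result onto a CPU frequency cap. The gains, schedule, and helper names are illustrative assumptions, not WattWise's implementation; only the 1.5-3.7 GHz range is taken from the figures above.

```python
# Hypothetical PI controller for capping CPU frequency from smart-plug readings.
# Not WattWise's actual code; gains, schedule, and helpers are assumptions.
from datetime import datetime

KP, KI = 2.0, 0.5               # proportional and integral gains (tuning guesses)
FREQ_MIN, FREQ_MAX = 1.5, 3.7   # GHz, matching the range cited in the article

def target_watts(now: datetime) -> float:
    """Shrink the power budget during (example) peak Time-of-Use hours."""
    return 900.0 if 16 <= now.hour < 21 else 1400.0

def pi_step(measured_watts: float, now: datetime, integral: float, dt: float = 5.0):
    """One control step: returns (cpu_frequency_cap_ghz, updated_integral)."""
    error = target_watts(now) - measured_watts                  # + headroom, - over budget
    integral = max(-500.0, min(500.0, integral + error * dt))   # clamp to limit wind-up
    correction = KP * error + KI * integral
    # Crude linear mapping of the correction onto the allowed frequency range.
    scale = max(0.0, min(1.0, 0.5 + correction / 1000.0))
    return round(FREQ_MIN + (FREQ_MAX - FREQ_MIN) * scale, 2), integral
```

On Linux, the resulting cap could then be applied with something like cpupower frequency-set, with the wattage readings polled from the smart plug through Home Assistant's API.
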
A blog post with more information is available here.

WattWise is also available on GitHub.
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
AI

95% of Code Will Be AI-Generated Within Five Years, Microsoft CTO Says 130

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.
The Almighty Buck

Zelle Is Shutting Down Its App (techcrunch.com) 18

An anonymous reader quotes a report from TechCrunch: Zelle is shutting down its stand-alone app on Tuesday, according to a company blog post. This news might be alarming if you're one of the over 150 million customers in the U.S. who use Zelle for person-to-person payments. But only about 2% of transactions take place via Zelle's app, which is why the company is discontinuing its stand-alone app.

Most consumers access Zelle via their bank, which then allows them to send money to their phone contacts. Zelle users who relied on the stand-alone app will have to re-enroll in the service through another financial institution. Given the small user base of the Zelle app, it makes sense that the company would decide to retire it -- maintaining an app takes time and money, especially one where people's financial information is involved.

Mozilla

Mozilla To Launch 'Thunderbird Pro' Paid Services (techspot.com) 36

Mozilla plans to introduce a suite of paid professional services for its open-source Thunderbird email client, transforming the application into a comprehensive communication platform. Dubbed "Thunderbird Pro," the package aims to compete with established ecosystems like Gmail and Office 365 while maintaining Mozilla's commitment to open-source software.

The Pro tier will include four core services: Thunderbird Appointment for streamlined scheduling, Thunderbird Send for file sharing (reviving the discontinued Firefox Send), Thunderbird Assist offering AI capabilities powered by Flower AI, and Thundermail, a new email service built on Stalwart's open-source mail server stack. Initially, Thunderbird Pro will be available free to "consistent community contributors," with paid access for other users.

Mozilla Managing Director Ryan Sipes indicated the company may consider limited free tiers once the service establishes a sustainable user base. This initiative follows Mozilla's 2023 announcement about "remaking" Thunderbird's architecture to modernize its aging codebase, addressing user losses to more feature-rich competitors.
The Courts

Donkey Kong Champion Wins Defamation Case Against Australian YouTuber Karl Jobst (theguardian.com) 58

An anonymous reader quotes a report from The Guardian: A professional YouTuber in Queensland has been ordered to pay $350,000 plus interest and costs to the former world record score holder for Donkey Kong, after the Brisbane district court found the YouTuber had defamed him "recklessly" with false claims of a link between a lawsuit and another YouTuber's suicide. William "Billy" Mitchell, an American gamer who had held world records in Donkey Kong and Pac-Man going back to 1982, as recognized by the Guinness World Records and the video game database Twin Galaxies, brought the case against Karl Jobst, seeking $400,000 in general damages and $50,000 in aggravated damages.

Jobst, who makes videos about "speed running" (finishing games as fast as possible), as well as gaming records and cheating in games, made a number of allegations against Mitchell in a 2021 YouTube video. He accused Mitchell of cheating, and "pursuing unmeritorious litigation" against others who had also accused him of cheating, the court judgment stated. The court heard Mitchell was accused in 2017 of cheating in his Donkey Kong world records by using emulation software instead of original arcade hardware. Twin Galaxies investigated the allegation, and subsequently removed Mitchell's scores and banned him from participating in its competitions. The Guinness World Records disqualified Mitchell as a holder of all his records -- in both Donkey Kong and Pac-Man -- after the Twin Galaxies decision. The judgment stated that Jobst's 2021 video also linked the December 2020 suicide of another YouTuber, Apollo Legend, to "stress arising from [his] settlement" with Mitchell, and wrongly asserted that Apollo Legend had to pay Mitchell "a large sum of money."

AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.
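
For a sense of how small an MCP integration can be, here's a hedged sketch of a tool server written with the official Python SDK's FastMCP helper. The weather lookup is a made-up stub for illustration; only the FastMCP import, the @mcp.tool() decorator, and mcp.run() reflect the SDK's documented usage.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The "weather" tool is a stub; a real server would query an actual data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def current_temperature(city: str) -> str:
    """Return the current temperature for a city (hard-coded for illustration)."""
    fake_data = {"Berlin": "12 C", "Austin": "27 C"}
    return fake_data.get(city, "unknown")

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-aware client (Claude Desktop,
    # an Agents API integration, etc.) can launch the server and call the tool
    # without a bespoke, per-service integration.
    mcp.run()
```

Because the protocol, rather than the client, defines how tools are described and invoked, the same stub should work unchanged whether the model on the other end is Claude, an OpenAI model, or something hosted on Azure.
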

Encryption

Gmail is Making It Easier For Businesses To Send Encrypted Emails To Anyone (theverge.com) 39

Google is rolling out a new encryption model for Gmail that allows enterprise users to send encrypted messages without requiring recipients to use custom software or exchange encryption certificates. The feature, launching in beta today, initially supports encrypted emails within the same organization, with plans to expand to all Gmail inboxes "in the coming weeks" and third-party email providers "later this year."

Unlike Gmail's current S/MIME-based encryption, the new system lets users simply toggle "additional encryption" in the email draft window. Non-Gmail recipients will receive a link to access messages through a guest Google Workspace account, while Gmail users will see automatically decrypted emails in their inbox.
Programming

'There is No Vibe Engineering' 121

Software engineer Sergey Tselovalnikov weighs in on the new hype: The term caught on and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.
Programming

'No Longer Think You Should Learn To Code,' Says CEO of AI Coding Startup (x.com) 108

Learning to code has sort of become pointless as AI increasingly dominates programming tasks, said Replit founder and chief executive Amjad Masad. "I no longer think you should learn to code," Masad wrote on X.

The statement comes as major tech executives report significant AI inroads into software development. Google CEO Sundar Pichai recently revealed that 25% of new code at the tech giant is AI-generated, though still reviewed by engineers. Furthermore, Anthropic CEO Dario Amodei predicted AI could generate up to 90% of all code within six months.

Masad called this shift a "bittersweet realization" after spending years popularizing coding through open-source work, Codecademy, and Replit -- a platform that now uses AI to help users build apps and websites. Instead of syntax-focused programming skills, Masad recommends learning "how to think, how to break down problems... how to communicate clearly, with humans and with machines."
Linux

Linus Torvalds Gently Criticizes Build-Slowing Testing Code Left in Linux 6.15-rc1 (phoronix.com) 25

"The big set of open-source graphics driver updates for Linux 6.15 have been merged," writes Phoronix, "but Linux creator Linus Torvalds isn't particularly happy with the pull request." The new "hdrtest" code is for the Intel Xe kernel driver and is around trying to help ensure the Direct Rendering Manager header files are self-contained and pass kernel-doc tests — basic maintenance checks on the included DRM header files to ensure they are all in good shape.
But Torvalds accused the code of not only slowing down the full-kernel builds, but also leaving behind "random" files for dependencies "that then make the source tree nasty," reports Tom's Hardware: While Torvalds was disturbed by the code that was impacting the latest Linux kernel, beginning his post with a "Grr," he remained precise in his objections to it. "I did the pull, resolved the (trivial) conflicts, but I notice that this ended up containing the disgusting 'hdrtest' crap that (a) slows down the build because it's done for a regular allmodconfig build rather than be some simple thing that you guys can run as needed (b) also leaves random 'hdrtest' turds around in the include directories," he wrote.

Torvalds went on to state that he had previously complained about this issue, and inquired why the hdr testing is being done as a regular part of the build. Moreover, he highlighted that the resulting 'turds' were breaking filename completion. Torvalds underlined this point — and his disgust — by stating, "this thing needs to *die*." In a shot of advice to fellow Linux developers, Torvalds said, "If you want to do that hdrtest thing, do it as part of your *own* checks. Don't make everybody else see that disgusting thing...."

He then noted that he had decided to mark hdrtest as broken for now, to prevent its inclusion in regular builds.

As of Saturday, all of the DRM-Next code had made it into Linux 6.15 Git, notes Phoronix. "But Linus Torvalds is expecting all this 'hdrtest' mess to be cleaned up."
Robotics

China is Already Testing AI-Powered Humanoid Robots in Factories (msn.com) 71

The U.S. and China "are racing to build a truly useful humanoid worker," the Wall Street Journal wrote Saturday, adding that "Whoever wins could gain a huge edge in countless industries."

"The time has come for robots," Nvidia's chief executive said at a conference in March, adding "This could very well be the largest industry of all." China's government has said it wants the country to be a world leader in humanoid robots by 2027. "Embodied" AI is listed as a priority of a new $138 billion state venture investment fund, encouraging private-sector investors and companies to pile into the business. It looks like the beginning of a familiar tale. Chinese companies make most of the world's EVs, ships and solar panels — in each case, propelled by government subsidies and friendly regulations. "They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge," said Jeff Burnstein [president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan]....

Humanoid robots need three-dimensional data to understand physics, and much of it has to be created from scratch. That is where China has a distinct edge: The country is home to an immense number of factories where humanoid robots can absorb data about the world while performing tasks. "The reason why China is making rapid progress today is because we are combining it with actual applications and iterating and improving rapidly in real scenarios," said Cheng Yuhang, a sales director with Deep Robotics, one of China's robot startups. "This is something the U.S. can't match." UBTech, the startup that is training humanoid robots to sort and carry auto parts, has partnerships with top Chinese automakers including Geely... "A problem can be solved in a month in the lab, but it may only take days in a real environment," said a manager at UBTech...

With China's manufacturing prowess, a locally built robot could eventually cost less than half as much as one built elsewhere, said Ming Hsun Lee, a Bank of America analyst. He said he based his estimates on China's electric-vehicle industry, which has grown rapidly to account for roughly 70% of global EV production. "I think humanoid robots will be another EV industry for China," he said. The UBTech robot system, called Walker S, currently costs hundreds of thousands of dollars including software, according to people close to the company. UBTech plans to deliver 500 to 1,000 of its Walker S robots to clients this year, including the Apple supplier Foxconn. It hopes to increase deliveries to more than 10,000 in 2027.

Few companies outside China have started selling AI-powered humanoid robots. Industry insiders expect the competition to play out over decades, as the robots tackle more-complicated environments, such as private homes.

The article notes "several" U.S. humanoid robot producers, including the startup Figure. And robots from Amazon-backed Agility Robotics have been tested in Amazon warehouses since 2023. "The U.S. still has advantages in semiconductors, software and some precision components," the article points out.

But "Some lawmakers have urged the White House to ban Chinese humanoids from the U.S. and further restrict Chinese robot makers' access to American technology, citing national-security concerns..."
Windows

Microsoft Attempts To Close Local Account Windows 11 Setup Loophole (theverge.com) 196

Slashdot reader jrnvk writes: The Verge is reporting that Microsoft will soon make it harder to run the well-publicized bypassnro command in Windows 11 setup. This command allows skipping the Microsoft account and online connection requirements on install. While the command will be removed, it can still be enabled by a regedit change — for now.
"However, there's no guarantee Microsoft will allow this additional workaround for long," writes the Verge. (Though they add "There are other workarounds as well" involving the unattended.xml automation.) In its latest Windows 11 Insider Preview, the company says it will take out a well-known bypass script... Microsoft cites security as one reason it's making this change. ["This change ensures that all users exit setup with internet connectivity and a Microsoft Account."] Since the bypassnro command is disabled in the latest beta build, it will likely be pushed to production versions within weeks.
Programming

How Rust Finally Got a Specification - Thanks to a Consultancy's Open-Source Donation (rustfoundation.org) 16

As Rust approaches its 10th anniversary, "there is an important piece of documentation missing that many other languages provide," notes the Rust Foundation.

While documentation and tutorials exist, there's no official language specification: In December 2022, an RFC was submitted to encourage the Rust Project to begin working on a specification. After much discussion, the RFC was approved in July 2023, and work began.

Initially, the Rust Project specification team (t-spec) was interested in creating the document from scratch using the Rust Reference as a guiding marker. However, the team knew there was already an external Rust specification that was being used successfully for compiler qualification purposes — the FLS (Ferrocene Language Specification).

Thank Berlin-based Ferrous Systems, a Rust consultancy that assembled that specification "some years ago," according to a post on the Rust blog: They've since been faithfully maintaining and updating this document for new versions of Rust, and they've successfully used it to qualify toolchains based on Rust for use in safety-critical industries. [The Rust Foundation notes it is part of the consultancy's "Ferrocene" Rust compiler/toolchain.] Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust.
The Rust Foundation explains: The FLS provides a structured and detailed reference for Rust's syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts. Since Rust did not have an official language specification back then, nor a plan to write one, the FLS represented a major step toward describing Rust in a way that aligns with industry requirements, particularly in high-assurance domains.
And the Rust Project is "passionate about shipping high quality tools that enable people to build reliable software at scale," adds the Rust blog. So... It's in that light that we're pleased to announce that we'll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by the gracious donation of the FLS by Ferrous Systems. We're grateful to them for the work they've done in assembling the FLS, in making it fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring the FLS into the Project.

With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts.

More from the Rust Foundation: The t-spec team wanted to avoid potential confusion from having two highly visible Rust specifications in the industry and so decided it would be worthwhile to try to integrate the FLS with the Rust Reference to create the official Rust Project specification. They approached Ferrous Systems, which agreed to contribute its FLS to the Rust Project and allow the Rust Project to take over its development and management... This generous donation will provide a clearer path to delivering an official Rust specification. It will also empower the Rust Project to oversee its ongoing evolution, providing confidence to companies and individuals already relying on the FLS, and marking a major milestone for the Rust ecosystem.

"I really appreciate Ferrous taking this step to provide their specification to the Rust Project," said Joel Marcey, Director of Technology at the Rust Foundation and member of the t-spec team. "They have already done a massive amount of legwork...." This effort will provide others who require a Rust specification with an official, authoritative reference for their work with the Rust programming language... This is an exciting outcome. A heartfelt thank you to the Ferrous Systems team for their invaluable contribution!

Marcey said the move allows the team "to supercharge our progress in the delivery of an official Rust specification."

And the co-founder of Ferrous Systems, Felix Gilcher, also sounded excited. "We originally created the Ferrocene Language Specification to provide a structured and reliable description of Rust for the certification of the Ferrocene compiler. As an open source-first company, contributing the FLS to the Rust Project is a logical step toward fostering the development of a unified, community-driven specification that benefits all Rust users."
AI

Has the Decline of Knowledge Worker Jobs Begun? (boston.com) 101

The New York Times notes that white-collar workers have faced higher unemployment than other groups in the U.S. over the past few years — along with slower wage growth.

Some economists wonder if this trend might be irreversible... and partly attributable to AI: After sitting below 4% for more than two years, the overall unemployment rate has topped that threshold since May... "We're seeing a meaningful transition in the way work is done in the white-collar world," said Carl Tannenbaum, the chief economist of Northern Trust. "I tell people a wave is coming...." Thousands of video game workers lost jobs last year and the year before... Unemployment in finance and related industries, while still low, increased by about a quarter from 2022 to 2024, as rising interest rates slowed demand for mortgages and companies sought to become leaner....

Overall, the latest data from the Federal Reserve Bank of New York show that the unemployment rate for college grads has risen 30% since bottoming out in September 2022 (to 2.6% from 2%), versus about 18% for all workers (to 4% from 3.4%). An analysis by Julia Pollak, chief economist of ZipRecruiter, shows that unemployment has been most elevated among those with bachelor's degrees or some college but no degree, while unemployment has been steady or falling at the very top and bottom of the education ladder — for those with advanced degrees or without a high school diploma. Hiring rates have slowed more for jobs requiring a college degree than for other jobs, according to ADP Research, which studies the labor market....

And artificial intelligence could reduce that need further by increasing the automation of white-collar jobs. A recent academic paper found that software developers who used an AI coding assistant improved a key measure of productivity by more than 25% and that the productivity gains appeared to be largest among the least experienced developers. The result suggested that adopting AI could reduce the wage premium enjoyed by more experienced coders, since it would erode their productivity advantages over novices... [A]t least in the near term, many tech executives and their investors appear to see AI as a way to trim their staffing. A software engineer at a large tech company who declined to be named for fear of harming his job prospects said that his team was about half the size it was last year and that he and his co-workers were expected to do roughly the same amount of work by relying on an AI assistant. Overall, the unemployment rate in tech and related industries jumped by more than half from 2022 to 2024, to 4.4% from 2.9%.

"Some economists say these trends may be short term in nature and little cause for concern on their own," the article points out (with one economist noting the unemployment rate is still low compared to historical averages).

Harvard labor economist Lawrence Katz even suggested the slower wage growth could reflect the discount that these workers accepted in return for being able to work from home.

Thanks to Slashdot reader databasecowgirl for sharing the article.
Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run Javascript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there are effectively two layers of security, since Wasmtime's software-defined runtime sandbox runs within Hyperlight's larger VM-based isolation...
AI

First Trial of Generative AI Therapy Shows It Might Help With Depression 42

An anonymous reader quotes a report from MIT Technology Review: The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies are building AI therapy bots to address the mental health care gap, offering more frequent and affordable access than traditional therapy. However, challenges persist: poorly worded bot responses can cause harm, and forming meaningful therapeutic relationships is hard to replicate in software. While many bots rely on general internet data, researchers at Dartmouth developed "Therabot" using custom, evidence-based datasets. Here's what they found: To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.

These results ... are about what one finds in randomized control trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," says [Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study].
Science

A New Image File Format Efficiently Stores Invisible Light Data (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Imagine working with special cameras that capture light your eyes can't even see -- ultraviolet rays that cause sunburn, infrared heat signatures that reveal hidden writing, or specific wavelengths that plants use for photosynthesis. Or perhaps using a special camera designed to distinguish the subtle visible differences that make paint colors appear just right under specific lighting. Scientists and engineers do this every day, and they're drowning in the resulting data. A new compression format called Spectral JPEG XL might finally solve this growing problem in scientific visualization and computer graphics. Researchers Alban Fichet and Christoph Peters of Intel Corporation detailed the format in a recent paper published in the Journal of Computer Graphics Techniques (JCGT). It tackles a serious bottleneck for industries working with these specialized images. These spectral files can contain 30, 100, or more data points per pixel, causing file sizes to balloon into multi-gigabyte territory -- making them unwieldy to store and analyze.

[...] The current standard format for storing this kind of data, OpenEXR, wasn't designed with these massive spectral requirements in mind. Even with built-in lossless compression methods like ZIP, the files remain unwieldy for practical work as these methods struggle with the large number of spectral channels. Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller. Instead of storing the exact light intensity at every single wavelength (which creates huge files), it transforms this information into a different form. [...]
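
To make the DCT step concrete, here is a toy Python sketch of the underlying idea, not the actual Spectral JPEG XL codec: transform each pixel's spectrum with a discrete cosine transform and keep only the first few coefficients, which carry most of the energy of a smooth spectrum. The image size, synthetic spectra, and number of retained coefficients are illustrative assumptions, and the real format adds quantization and entropy coding on top.

```python
# Toy illustration of DCT-based spectral compression (not the actual codec).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
height, width, bands = 4, 4, 64               # e.g. 64 wavelength samples per pixel
wavelengths = np.linspace(0.0, 1.0, bands)

# Smooth synthetic spectra: each pixel is a random blend of two broad lobes.
weights = rng.random((height, width, 2))
image = (weights[..., :1] * np.exp(-((wavelengths - 0.3) ** 2) / 0.02)
         + weights[..., 1:] * np.exp(-((wavelengths - 0.7) ** 2) / 0.05))

kept = 8                                       # coefficients retained per pixel
coeffs = dct(image, axis=-1, norm="ortho")     # transform along the spectral axis
coeffs[..., kept:] = 0.0                       # drop high-frequency spectral detail

reconstructed = idct(coeffs, axis=-1, norm="ortho")
error = np.abs(reconstructed - image).max()
print(f"Kept {kept}/{bands} coefficients per pixel; max abs error {error:.4f}")
```

Dropping 56 of 64 coefficients is already an 8x reduction in stored spectral samples; the 10-60x savings the researchers report presumably come from additionally quantizing and entropy-coding what remains.
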

According to the researchers, the massive file sizes of spectral images have reportedly been a real barrier to adoption in industries that would benefit from their accuracy. Smaller files mean faster transfer times, reduced storage costs, and the ability to work with these images more interactively without specialized hardware. The results reported by the researchers seem impressive -- with their technique, spectral image files shrink by 10 to 60 times compared to standard OpenEXR lossless compression, bringing them down to sizes comparable to regular high-quality photos. They also preserve key OpenEXR features like metadata and high dynamic range support.
The report notes that broader adoption "hinges on the continued development and refinement of the software tools that handle JPEG XL encoding and decoding."

Some scientific applications may also see JPEG XL's lossy approach as a drawback. "Some researchers working with spectral data might readily accept the trade-off for the practical benefits of smaller files and faster processing," reports Ars. "Others handling particularly sensitive measurements might need to seek alternative methods of storage."
Oracle

Oracle Health Breach Compromises Patient Data At US Hospitals 5

A breach of legacy Cerner servers at Oracle Health exposed patient data from multiple U.S. hospitals and healthcare organizations, with threat actors using compromised customer credentials to steal the data before it had been migrated to Oracle Cloud. Despite confirming the breach privately, Oracle Health has yet to publicly acknowledge the incident. BleepingComputer reports: Oracle Health, formerly known as Cerner, is a healthcare software-as-a-service (SaaS) company offering Electronic Health Records (EHR) and business operations systems to hospitals and healthcare organizations. After being acquired by Oracle in 2022, Cerner was merged into Oracle Health, with its systems migrated to Oracle Cloud. In a notice sent to impacted customers and seen by BleepingComputer, Oracle Health said it became aware of a breach of legacy Cerner data migration servers on February 20, 2025.

"We are writing to inform you that, on or around February 20, 2025, we became aware of a cybersecurity event involving unauthorized access to some amount of your Cerner data that was on an old legacy server not yet migrated to the Oracle Cloud," reads a notification sent to impacted Oracle Health customers. Oracle says that the threat actor used compromised customer credentials to breach the servers sometime after January 22, 2025, and copied data to a remote server. This stolen data "may" have included patient information from electronic health records. However, multiple sources told BleepingComputer that it was confirmed that patient data was stolen during the attack.

Oracle Health is also telling hospitals that it will not notify patients directly and that it is the hospitals' responsibility to determine whether the stolen data is subject to HIPAA and whether they are required to send notifications. However, the company says it will help identify impacted individuals and provide templates to help with notifications.
