AMD

Huawei's New CPU Matches Zen 3 In Single-Core Performance (tomshardware.com) 77

Long-time Slashdot reader AmiMoJo quotes Tom's Hardware: A Geekbench 6 result features what is likely the first-ever look at the single-core performance of the Taishan V120, developed by Huawei's HiSilicon subsidiary (via @Olrak29_ on X). The single-core score indicates that Taishan V120 cores are roughly on par with AMD's Zen 3 cores from late 2020, which could mean Huawei's technology isn't that far behind cutting-edge Western chip designers.

The Taishan V120 core was first spotted in Huawei's Kirin 9000s smartphone chip, which uses four of the cores alongside two efficiency-focused Arm Cortex-A510 cores. Since Kirin 9000s chips are produced using SMIC's second-generation 7nm node (which, according to U.S. lawmakers, may make them illegal to sell internationally), it seems likely that the Taishan V120 core tested in Geekbench 6 was also made on that node.

The benchmark result doesn't really say much about what the actual CPU is, with the only hint being 'Huawei Cloud OpenStack Nova.' This implies it's a Kunpeng server CPU, which may be the Kunpeng 916, 920, or 930. While we can only guess which one it is, it's almost certain to be the 930 given the high single-core performance shown in the result. By contrast, the few Geekbench 5 results for the Kunpeng 920 show it performing well behind AMD's first-generation Epyc Naples from 2017.

Programming

Rust Survey Finds Linux and VS Code Users, More WebAssembly Targeting (rust-lang.org) 40

Rust's official survey team released results from their 8th annual survey "focused on gathering insights and feedback from Rust users". In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users [69.7%], followed by macOS [33.5%] and Windows [31.9%], which have a very similar share of usage. Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine [85.4%]. We can see a slight uptick in users targeting WebAssembly [27.1%], embedded and mobile platforms, which speaks to the versatility of Rust.

We cannot of course forget the favourite topic of many programmers: which IDE (developer environment) do they use. Visual Studio Code still seems to be the most popular option [61.7%], with RustRover (which was released last year) also gaining some traction [16.4%].

The site ITPro spoke to James Governor, co-founder of the developer-focused analyst firm RedMonk, who said Rust's usage is "steadily increasing", pointing to its adoption among hyperscalers and cloud companies and in new infrastructure projects. "Rust is not crossing over yet as a general-purpose programming language, as Python did when it overtook Java, but it's seeing steady growth in adoption, which we expect to continue. It seems like a sustainable success story at this point."

But InfoWorld writes that "while the use of Rust language by professional programmers continues to grow, Rust users expressed concerns about the language becoming too complex and the low level of Rust usage in the tech industry." Among the 9,374 respondents who shared their main worries for the future of Rust, 43% were most concerned about Rust becoming too complex, a five percentage point increase from 2022; 42% were most concerned about low usage of Rust in the tech industry; and 32% were most concerned about Rust developers and maintainers not being properly supported, a six percentage point increase from 2022. Further, the percentage of respondents who were not at all concerned about the future of Rust fell, from 30% in 2022 to 18% in 2023.

Programming

Stack Overflow To Charge LLM Developers For Access To Its Coding Content (theregister.com) 32

Stack Overflow has launched an API that will require all AI models trained on its coding question-and-answer content to attribute sources linking back to its posts. And it will cost money to use the site's content. From a report: "All products based on models that consume public Stack Overflow data are required to provide attribution back to the highest relevance posts that influenced the summary given by the model," it confirmed in a statement. The Overflow API is designed to act as a knowledge database to help developers build more accurate and helpful code-generation models. Google announced it is using the service to access relevant information from Stack Overflow via the API, integrating the data with its latest Gemini models and with its cloud storage console.
Government

Government Watchdog Hacked US Federal Agency To Stress-Test Its Cloud Security (techcrunch.com) 21

In a series of tests using fake data, a U.S. government watchdog was able to steal more than 1GB of seemingly sensitive personal data from the cloud systems of the U.S. Department of the Interior. The experiment is detailed in a new report by the Department of the Interior's Office of the Inspector General (OIG), published last week. TechCrunch reports: The goal of the report was to test the security of the Department of the Interior's cloud infrastructure, as well as its "data loss prevention solution," software that is supposed to protect the department's most sensitive data from malicious hackers. The tests were conducted between March 2022 and June 2023, the OIG wrote in the report. The Department of the Interior manages the country's federal land, national parks and a budget of billions of dollars, and hosts a significant amount of data in the cloud. According to the report, in order to test whether the Department of the Interior's cloud infrastructure was secure, the OIG used an online tool called Mockaroo to create fake personal data that "would appear valid to the Department's security tools."

The OIG team then used a virtual machine inside the Department's cloud environment to imitate "a sophisticated threat actor" inside its network, and subsequently used "well-known and widely documented techniques to exfiltrate data." "We used the virtual machine as-is and did not install any tools, software, or malware that would make it easier to exfiltrate data from the subject system," the report read. The OIG said it conducted more than 100 tests in a week, monitoring the government department's "computer logs and incident tracking systems in real time," and none of its tests were detected or prevented by the department's cybersecurity defenses.

"Our tests succeeded because the Department failed to implement security measures capable of either preventing or detecting well-known and widely used techniques employed by malicious actors to steal sensitive data," said the OIG's report. "In the years that the system has been hosted in a cloud, the Department has never conducted regular required tests of the system's controls for protecting sensitive data from unauthorized access." That's the bad news: The weaknesses in the Department's systems and practices "put sensitive [personal information] for tens of thousands of Federal employees at risk of unauthorized access," read the report. The OIG also admitted that it may be impossible to stop "a well-resourced adversary" from breaking in, but with some improvements, it may be possible to stop that adversary from exfiltrating the sensitive data.
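The report's method of seeding the environment with fake-but-plausible records can be sketched in a few lines of Python. This is purely illustrative and not the OIG's actual Mockaroo setup; the field names and the 900-series SSN convention (area numbers 900-999 are never issued as real Social Security numbers) are assumptions chosen for the sketch.

```python
import csv
import io
import random

# Illustrative stand-in for the kind of fake-but-plausible personal
# records the OIG generated with Mockaroo for its exfiltration tests.
FIRST = ["Alice", "Bob", "Carol", "David"]
LAST = ["Nguyen", "Smith", "Garcia", "Okafor"]

def fake_records(n, seed=0):
    """Generate n synthetic personnel records that look valid to security tools."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        first = rng.choice(FIRST)
        last = rng.choice(LAST)
        rows.append({
            "id": i + 1,
            "name": f"{first} {last}",
            "email": f"{first.lower()}.{last.lower()}@example.gov",
            # SSN-shaped string; the 900-999 area range is never issued.
            "ssn": f"9{rng.randint(10, 99)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}",
        })
    return rows

def to_csv(rows):
    """Serialize the fake records, as a dataset to plant in the test environment."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "email", "ssn"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The point of such data is that it exercises the data loss prevention tooling exactly as real PII would, without putting any real person at risk if the test "exfiltration" succeeds.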

AI

StarCoder 2 Is a Code-Generating AI That Runs On Most GPUs (techcrunch.com) 44

An anonymous reader quotes a report from TechCrunch: Perceiving the demand for alternatives, AI startup Hugging Face several years ago teamed up with ServiceNow, the workflow automation platform, to create StarCoder, an open source code generator with a less restrictive license than some of the others out there. The original came online early last year, and work has been underway on a follow-up, StarCoder 2, ever since. StarCoder 2 isn't a single code-generating model, but rather a family. Released today, it comes in three variants, the first two of which can run on most modern consumer GPUs: a 3-billion-parameter (3B) model trained by ServiceNow; a 7-billion-parameter (7B) model trained by Hugging Face; and a 15-billion-parameter (15B) model trained by Nvidia, the newest supporter of the StarCoder project. (Note that "parameters" are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating code.)

Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language. Trained with 4x more data than the original StarCoder (67.5 terabytes versus 6.4 terabytes), StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as "significantly" improved performance at lower costs to operate. StarCoder 2 can be fine-tuned "in a few hours" using a GPU like the Nvidia A100 on first- or third-party data to create apps such as chatbots and personal coding assistants. And, because it was trained on a larger and more diverse data set than the original StarCoder (~619 programming languages), StarCoder 2 can make more accurate, context-aware predictions -- at least hypothetically.

[I]s StarCoder 2 really superior to the other code generators out there -- free or paid? Depending on the benchmark, it appears to be more efficient than one of the versions of Code Llama, Code Llama 33B. Hugging Face says that StarCoder 2 15B matches Code Llama 33B on a subset of code completion tasks at twice the speed. It's not clear which tasks; Hugging Face didn't specify. StarCoder 2, as an open source collection of models, also has the advantage of being able to deploy locally and "learn" a developer's source code or codebase -- an attractive prospect to devs and companies wary of exposing code to a cloud-hosted AI. Hugging Face, ServiceNow and Nvidia also make the case that StarCoder 2 is more ethical -- and less legally fraught -- than its rivals. [...] As opposed to code generators trained using copyrighted code (GitHub Copilot, among others), StarCoder 2 was trained only on data under license from the Software Heritage, the nonprofit organization providing archival services for code. Ahead of StarCoder 2's training, BigCode, the cross-organizational team behind much of StarCoder 2's roadmap, gave code owners a chance to opt out of the training set if they wanted. As with the original StarCoder, StarCoder 2's training data is available for developers to fork, reproduce or audit as they please.
StarCoder 2's license may still be a roadblock for some. "StarCoder 2 is licensed under the BigCode Open RAIL-M 1.0, which aims to promote responsible use by imposing 'light touch' restrictions on both model licensees and downstream users," writes TechCrunch's Kyle Wiggers. "While less constraining than many other licenses, RAIL-M isn't truly 'open' in the sense that it doesn't permit developers to use StarCoder 2 for every conceivable application (medical advice-giving apps are strictly off limits, for example). Some commentators say RAIL-M's requirements may be too vague to comply with in any case -- and that RAIL-M could conflict with AI-related regulations like the EU AI Act."
Businesses

Nvidia's Free-tier GeForce Now Will Soon Show Ads While You're Waiting To Play (theverge.com) 34

Nvidia's completely free, no-strings-attached tier of its cloud gaming service GeForce Now is about to be very slightly less of a deal. Nvidia says users will now start seeing ads. From a report: They're only for the free tier -- not Priority or Ultimate -- and even then, it sounds like they won't interrupt your gameplay. "Free users will start to see up to two minutes of ads while waiting in queue to start a gaming session," writes Nvidia spokesperson Stephanie Ngo. Currently, the free tier does often involve waiting in line for a remote computer to free up before every hour of free gameplay -- now, I guess there'll be a few ads too. Nvidia says the ads should help pay for the free tier of service, and that it expects the change "will reduce average wait times for free users over time."
Cloud

Google Steps Up Microsoft Criticism, Warns of Rival's Monopoly in Cloud (reuters.com) 110

Alphabet's Google Cloud on Monday ramped up its criticism of Microsoft's cloud computing practices, saying its rival is seeking a monopoly that would harm the development of emerging technologies such as generative AI. From a report: "We worry about Microsoft wanting to flex their decade-long practices where they had a lot of monopoly on the on-premise software before and now they are trying to push that into cloud now," Google Cloud Vice President Amit Zavery said in an interview. "So they are creating this whole walled garden, which is completely controlled and owned by Microsoft, and customers who want to do any of this stuff, you have to go to Microsoft only," he said.

"If Microsoft cloud doesn't remain open, we will have issues and long-term problems, even in next generation technologies like AI as well, because Microsoft is forcing customers to go to Azure in many ways," Zavery said, referring to Microsoft's cloud computing platform. He urged antitrust regulators to act. "I think regulators need to provide some kind of guidance as well as maybe regulations which prevent the way Microsoft is building the Azure cloud business, not allow your on-premise monopoly to bring it into the cloud monopoly," Zavery said.

Microsoft

Microsoft Strikes Deal With Mistral in Push Beyond OpenAI (ft.com) 13

Microsoft has struck a deal with French AI startup Mistral as it seeks to broaden its involvement in the fast-growing industry beyond OpenAI. From a report: The US tech giant will provide the 10-month-old Paris-based company with help in bringing its AI models to market. Microsoft will also take a minority stake in Mistral, although the financial details have not been disclosed. The partnership makes Mistral the second company to provide commercial language models available on Microsoft's Azure cloud computing platform. Microsoft has already invested about $13 billion in San Francisco-based OpenAI, an alliance that is being reviewed by competition watchdogs in the US, EU and UK. Other Big Tech rivals, such as Google and Amazon, are also investing heavily in building generative AI -- software that can produce text, images and code in seconds -- which analysts believe has the capacity to shake up industries across the world. WSJ adds: On Monday, Mistral plans to announce a new AI model, called Mistral Large, that Mistral CEO Arthur Mensch said can perform some reasoning tasks comparably with GPT-4, OpenAI's most advanced language model to date, and Gemini Ultra, Google's new model. Mensch said his new model cost less than 20 million euros, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company's biggest models cost "much more than" $50 million to $100 million.
Moon

Moon Landing's Payloads Include Archive of Human Knowledge, Lunar Data Center Test, NFTs (medium.com) 75

In 2019 a SpaceX Falcon 9 rocket launched an Israeli spacecraft carrying a 30-million page archive of human civilization to the moon. Unfortunately, that spacecraft crashed. But thanks to this week's moon landing by the Odysseus, there's now a 30-million page "Lunar Library" on the moon — according to a Medium post by the Arch Mission Foundation.

"This historic moment secures humanity's cultural heritage and knowledge in an indestructible archive built to last for up to billions of years." Etched onto thin sheets of nickel, called NanoFiche, the Lunar Library is practically indestructible and can withstand the harsh conditions of space... Some of the notable content includes:

The Wikipedia. The entire English Wikipedia containing over 6 million articles on every branch of knowledge.
Project Gutenberg. Portions of Project Gutenberg's library of over 70,000 free eBooks containing some of our most treasured literature.
The Long Now Foundation's Rosetta Project archive of over 7,000 human languages and The Panlex datasets.
Selections from the Internet Archive's collections of books and important documents and data sets.
The SETI Institute's Earthling Project, featuring a musical compilation of 10,000 vocal submissions representing humanity united.
The Arch Lunar Art Archive containing a collection of works from global contemporary and digital artists in 2022, recorded as NFTs.
David Copperfield's Magic Secrets — the secrets to all his greatest illusions — including how he will make the Moon disappear in the near future.
The Arch Mission Primer — which teaches a million concepts with images and words in 5 languages.
The Arch Mission Private Library — containing millions of pages as well as books, documents and articles on every subject, including a broad range of fiction and non-fiction, textbooks, periodicals, audio recordings, videos, historical documents, software sourcecode, data sets, and more.
The Arch Mission Vaults — private collections, including collections from our advisors and partners, and a collection of important texts and images from all the world's religions including the great religions and indigenous religions from around the world, collections of books, photos, and a collection of music by leading recording artists, and much more content that may be revealed in the future...

We also want to recognize our esteemed advisors, and our many content partners and collections including the Wikimedia Foundation, the Long Now Foundation, The SETI Institute Earthling Project, the Arch Lunar Art Archive project, Project Gutenberg, the Internet Archive, and the many donors who helped make the Lunar Library possible through their generous contributions. This accomplishment would not have happened without the collaborative support of so many...

We will continue to send backups of our important knowledge and cultural heritage — placing them on the surface of the Earth, in caves and deep underground bunkers and mines, and around the solar system as well. This is a mission that continues as long as humanity endures, and perhaps even long after we are gone, as a gift for whoever comes next.

Space.com has a nice rundown of the other new payloads that just landed on the moon. Some highlights:
  • "Cloud computing startup Lonestar's Independence payload is a lunar data center test mission for data storage and transmission from the lunar surface."
  • LRA is a small hemisphere of light-reflectors built to serve as a precision landmark to "allow spacecraft to ping it with lasers to help them determine their precise distance..."
  • ROLSES is a radio spectrometer for measuring the electron density near the lunar surface, "and how it may affect radio observatories, as well as observing solar and planetary radio waves and other phenomena."
  • "Artist Jeff Koons is sending 125 miniature stainless steel Moon Phase sculptures, each honoring significant human achievements across cultures and history, to be displayed on the moon in a cube."
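The laser-ranging landmark idea boils down to a time-of-flight calculation: a pulse's round-trip travel time at the speed of light gives the one-way distance. A minimal sketch of the arithmetic (the 100 km example below is a hypothetical altitude, not a figure from the mission):

```python
# Time-of-flight ranging, as a laser retroreflector landmark enables:
# distance = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(seconds):
    """One-way distance implied by a laser pulse's round-trip time."""
    return C * seconds / 2.0

def round_trip_time(distance_m):
    """Round-trip time a pulse needs to reach a reflector and return."""
    return 2.0 * distance_m / C
```

A spacecraft 100 km from the reflector, for example, would see a round trip of roughly two thirds of a millisecond, so the timing electronics are what set the ranging precision.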

Cloud

Service Mesh Linkerd Moves Its Stable Releases Behind a Paywall (techtarget.com) 13

TechTarget notes it was Linkerd's original developers who coined the term "service mesh" — describing their infrastructure layer for communication between microservices.

But "There has to be some way of connecting the businesses that are being built on top of Linkerd back to funding the project," argues Buoyant CEO William Morgan. "If we don't do that, then there's no way for us to evolve this project and to grow it in the way that I think we all want."

And so, TechTarget reports... Beginning May 21, 2024, any company with more than 50 employees running Linkerd in production must pay Buoyant $2,000 per Kubernetes cluster per month to access stable releases of the project...
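The new pricing is easy to put in concrete terms using only the per-cluster figure reported above; the five-cluster deployment below is a hypothetical example, not a quoted customer.

```python
PRICE_PER_CLUSTER_PER_MONTH = 2_000  # USD, companies with more than 50 employees

def annual_cost(clusters):
    """Yearly cost of Buoyant's stable Linkerd releases for a given cluster count."""
    return clusters * PRICE_PER_CLUSTER_PER_MONTH * 12

# A hypothetical five-cluster shop: 5 * 2000 * 12 = $120,000 per year.
```

Note the charge is per Kubernetes cluster, so the bill scales with infrastructure footprint rather than with headcount once the 50-employee threshold is crossed.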

The project's overall source code will remain available in GitHub, and edge, or experimental early releases of code, will continue to be committed to open source. But the additional work done by Buoyant developers to backport minimal changes so that they're compatible with existing versions of Linkerd and to fix bugs, with reliability guarantees, to create stable releases will only be available behind a paywall, Morgan said... Morgan said he is prepared for backlash from the community about this change. In the last section of a company blog post FAQ about the update, Morgan included a question that reads, in part, "Who can I yell at...?"

But industry watchers flatly pronounced the change a departure from open source. "By saying, 'Sorry but we can no longer afford to hand out a production-ready product as free open source code,' Buoyant has removed the open source character of this project," said Torsten Volk, an analyst at Enterprise Management Associates. "This goes far beyond the popular approach of offering a managed version of a product that may include some additional premium features for a fee while still providing customers with the option to use the more basic open source version in production." Open source developers outside Buoyant won't want to contribute to the project — and Buoyant's bottom line — without receiving production-ready code in return, Volk predicted.

Morgan conceded that these are potentially valid concerns and said he's open to finding a way to resolve them with contributors... "I don't think there's a legal argument there, but there's an unresolved tension there, similar to testing edge releases — that's labor just as much as contributing is. I don't have a great answer to that, but it's not unique to Buoyant or Linkerd."

And so, "Starting in May, if you want the latest stable version of the open source Linkerd to download and run, you will have to go with Buoyant's commercial distribution," according to another report (though "there are discounts for non-profits, high-volume use cases, and other unique needs.") The Cloud Native Computing Foundation manages the open source project. The copyright is held by the Linkerd authors themselves. Linkerd is licensed under the Apache 2.0 license.

Buoyant CEO William Morgan explained in an interview with TNS that the changes in licensing are necessary to ensure that Linkerd continues to run smoothly for enterprise users. Packaging the releases has also been demanding a lot of resources, perhaps even more than maintaining and advancing the core software itself, he said. He likened the approach to how Red Hat operates with Linux, which offers Fedora as an early release and maintains its core Linux offering, Red Hat Enterprise Linux (RHEL), for commercial clients.

"If you want the work that we put into the stable releases, which is predominantly around, not just testing, but also minimizing the changes in subsequent releases, that's hard work" requiring input from "world-leading experts in distributed systems," Morgan said.

"Well, that's kind of the dark, proprietary side of things."

Businesses

Nvidia Posts Record Revenue Up 265% On Booming AI Business (cnbc.com) 27

In its fourth quarter earnings report today, Nvidia beat Wall Street's forecast for earnings and sales, causing shares to rise about 10% in extended trading. CNBC reports: Here's what the company reported compared with what Wall Street was expecting for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

Earnings per share: $5.16 adjusted vs. $4.64 expected
Revenue: $22.10 billion vs. $20.62 billion expected

Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales. On a call with analysts, Nvidia CEO Jensen Huang addressed investor fears that the company may not be able to keep up this growth or level of sales for the whole year. "Fundamentally, the conditions are excellent for continued growth" in 2025 and beyond, Huang told analysts. He said demand for the company's GPUs will remain high due to generative AI and an industry-wide shift away from central processors to the accelerators that Nvidia makes.

Nvidia reported $12.29 billion in net income during the quarter, or $4.93 per share, up 769% versus last year's $1.41 billion or 57 cents per share. Nvidia's total revenue rose 265% from a year ago, based on strong sales for AI chips for servers, particularly the company's "Hopper" chips such as the H100, it said. "Strong demand was driven by enterprise software and consumer internet applications, and multiple industry verticals including automotive, financial services and health care," the company said in commentary provided to investors. Those sales are reported in the company's Data Center business, which now comprises the majority of Nvidia's revenue. Data center sales were up 409% to $18.40 billion. Over half the company's data center sales went to large cloud providers. [...]
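The reported growth figures can be sanity-checked from the dollar amounts in the story; small differences from the reported percentages come from the rounding of the year-ago base figures.

```python
def pct_change(current, prior):
    """Year-over-year percentage change."""
    return (current - prior) / prior * 100.0

# Net income, billions of dollars, from the quarter as reported.
net_income_growth = pct_change(12.29, 1.41)  # ~772%, vs. the reported 769%

# Year-ago revenue implied by "$22.10 billion ... rose 265%":
implied_prior_revenue = 22.10 / (1 + 2.65)   # ~$6.05 billion
```

The same arithmetic on the data center line (up 409% to $18.40 billion) implies a year-ago base of roughly $3.6 billion, underscoring how quickly that segment came to dominate total revenue.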

The company's gaming business, which includes graphics cards for laptops and PCs, was merely up 56% year over year to $2.87 billion. Graphics cards for gaming used to be Nvidia's primary business before its AI chips started taking off, and some of Nvidia's graphics cards can be used for AI. Nvidia's smaller businesses did not show the same meteoric growth. Its automotive business declined 4% to $281 million in sales, and its OEM and other business, which includes crypto chips, rose 7% to $90 million. Nvidia's business making graphics hardware for professional applications rose 105% to $463 million.

Businesses

International Nest Aware Subscriptions Jump in Price, as Much As 100% (arstechnica.com) 43

Google's "Nest Aware" camera subscription is going through another round of price increases. From a report: This time it's for international users. There's no big announcement or anything, just a smattering of email screenshots from various countries on the Nest subreddit. 9to5Google was nice enough to hunt down a pile of the announcements. Nest Aware is a monthly subscription fee for Google's Nest cameras. Nest cameras exclusively store all their video in the cloud, and without the subscription, you aren't allowed to record video 24/7.

There are two sets of subscriptions to keep track of: the current generation subscription for modern cameras and the "first generation Nest Aware" subscription for older cameras. To give you an idea of what we're dealing with, in the US, the current free tier only gets you three hours of "event" video -- meaning video triggered by motion detection. Even the basic $8-a-month subscription doesn't get you 24/7 recording -- that's still only 30 days of event video. The "first-generation" Nest Aware subscription, which is tied to earlier cameras and isn't available for new customers anymore, is at least doubling in price in Canada. The basic tier of five days of 24/7 video is going from a yearly fee of CA$50 to CA$110 (the first-generation sub has 24/7 video on every tier). Ten days of video is jumping from CA$80 to CA$160, and 30 days is going from CA$110 to CA$220. These are the prices for a single camera; the first-generation subscription will have additional charges for additional cameras. The current Nest Aware subscription for modern cameras is getting jumps that look similar to the US, with Nest Aware Plus, the mid-tier, going from CA$16 to CA$20 per month, and presumably similar raises across the board.
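The Canadian first-generation increases reduce to simple percentage math; the tier names below are shorthand for the 24/7 retention periods described above.

```python
def pct_increase(old, new):
    """Percentage increase from an old price to a new one."""
    return (new - old) / old * 100.0

# First-generation Nest Aware, Canada: yearly price for a single camera (CA$).
tiers = {
    "5-day 24/7":  (50, 110),
    "10-day 24/7": (80, 160),
    "30-day 24/7": (110, 220),
}

increases = {name: pct_increase(old, new) for name, (old, new) in tiers.items()}
# The 5-day tier rises 120%; the 10- and 30-day tiers exactly double (100%).
```

So the cheapest tier actually rises by more than the "as much as 100%" headline figure, since CA$50 to CA$110 is a 120% jump.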

Sony

Sony's PlayStation Portal Hacked To Run Emulated PSP Games (theverge.com) 12

An anonymous reader shares a report: Sony's new PlayStation Portal has been hacked by Google engineers to run emulated games locally. The $199.99 handheld debuted in November but was limited to just streaming games from a PS5 console and not even titles from Sony's cloud gaming service. Now, two Google engineers have managed to get the PPSSPP emulator running natively on the PlayStation Portal, allowing a Grand Theft Auto PSP version to run on the Portal without Wi-Fi streaming required. "After more than a month of hard work, PPSSPP is running natively on PlayStation Portal. Yes, we hacked it," says Andy Nguyen in a post on X. Nguyen also confirms that the exploit is "all software based," so it doesn't require any hardware modifications like additional chips or soldering. Only a photo of Grand Theft Auto: Liberty City Stories running on the PlayStation Portal has been released so far, but Nguyen may release some videos to demonstrate the exploit at the weekend.
Security

MIT Researchers Build Tiny Tamper-Proof ID Tag Utilizing Terahertz Waves (mit.edu) 42

A few years ago, MIT researchers invented a cryptographic ID tag — but like traditional RFID tags, "a counterfeiter could peel the tag off a genuine item and reattach it to a fake," writes MIT News.

"The researchers have now surmounted this security vulnerability by leveraging terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure." They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item's surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item, explains Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the antitampering tag. "These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern," adds Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics.

The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy. Because the terahertz tag is so cheap to produce, it could be implemented throughout a massive supply chain. And its tiny size enables the tag to attach to items too small for traditional RFIDs, such as certain medical devices...
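The matching step can be illustrated with a toy model: enroll the tag's reflection pattern once, then authenticate later readings by similarity to the stored pattern. This is a deliberately simplified sketch; random vectors and a cosine-similarity threshold stand in for real terahertz measurements and MIT's actual machine-learning model.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def read_pattern(seed, size=64):
    """Stand-in for reading a tag's glue-particle reflection pattern."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(size)]

def verify(enrolled, reading, threshold=0.9):
    """Authenticate a new reading against the pattern stored at enrollment."""
    return cosine_similarity(enrolled, reading) >= threshold
```

Re-reading the same tag yields the enrolled pattern plus small measurement noise and passes; a tag peeled off and reglued produces a fresh, uncorrelated particle pattern and fails, which is the antitampering property the glue provides.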

"These responses are impossible to duplicate, as long as the glue interface is destroyed by a counterfeiter," Han says. A vendor would take an initial reading of the antitampering tag once it was stuck onto an item, and then store those data in the cloud, using them later for verification.

Seems like the only way to thwart that would be carving out the part of the surface where the tag was affixed — and then pasting the tag, glue, and what it adheres to all together onto some other surface. But more importantly, Han says they'd wanted to demonstrate "that the application of the terahertz spectrum can go well beyond broadband wireless."

"In this case, you can use terahertz for ID, security, and authentication. There are a lot of possibilities out there."
Earth

Ocean Temperatures Are Skyrocketing (arstechnica.com) 110

"For nearly a year now, a bizarre heating event has been unfolding across the world's oceans," reports Wired.

"In March 2023, global sea surface temperatures started shattering record daily highs and have stayed that way since..." says Brian McNoldy, a hurricane researcher at the University of Miami. "It's really getting to be strange that we're just seeing the records break by this much, and for this long...." Unlike land, which rapidly heats and cools as day turns to night and back again, it takes a lot to warm up an ocean that may be thousands of feet deep. So even an anomaly of mere fractions of a degree is significant. "To get into the two or three or four degrees, like it is in a few places, it's pretty exceptional," says McNoldy.

So what's going on here? For one, the oceans have been steadily warming over the decades, absorbing something like 90 percent of the extra heat that humans have added to the atmosphere...

A major concern with such warm surface temperatures is the health of the ecosystems floating there: phytoplankton that bloom by soaking up the sun's energy and the tiny zooplankton that feed on them. If temperatures get too high, certain species might suffer, shaking the foundations of the ocean food web. But more subtly, when the surface warms, it creates a cap of hot water, blocking the nutrients in colder waters below from mixing upwards. Phytoplankton need those nutrients to properly grow and sequester carbon, thus mitigating climate change...

Making matters worse, the warmer water gets, the less oxygen it can hold. "We have seen the growth of these oxygen minimum zones," says Dennis Hansell, an oceanographer and biogeochemist at the University of Miami. "Organisms that need a lot of oxygen, they're not too happy when the concentrations go down in any way — think of a tuna that is expending a lot of energy to race through the water."

But why is this happening? The article suggests less dust blowing from the Sahara desert to shade the oceans, but also 2020 regulations that reduced sulfur aerosols in shipping fuels. (This reduced toxic air pollution — but also some cloud cover.)

There was also an El Niño in the Pacific Ocean last summer — now waning — which complicates things, according to biological oceanographer Francisco Chavez of the Monterey Bay Aquarium Research Institute in California. "One of our challenges is trying to tease out what these natural variations are doing in relation to the steady warming due to increasing CO2 in the atmosphere."

But the article points out that even the Atlantic Ocean is heating up — and "sea surface temperatures started soaring last year well before El Niño formed." And last week the U.S. Climate Prediction Center predicted there's now a 55% chance of a La Niña developing between June and August, according to the article, which could increase the likelihood of Atlantic hurricanes.

Thanks to long-time Slashdot reader mrflash818 for sharing the article.
AI

Will 'Precision Agriculture' Be Harmful to Farmers? (substack.com) 61

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website, the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse engineered a John Deere display to run the Doom video game, for insights into the company's internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer's control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on "information gathering" in the latest farm bill.

Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment managed and controlled by, and accountable to, distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers.

"While we can all agree that it's coming... the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' farm data security and privacy are being compromised. This directly affects national food security, because if thousands, or tens of thousands, of tractors are hacked and disabled, or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... I think our universities have also been delinquent in grasping and warning farmers about the data theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment.

Thanks to long-time Slashdot reader chicksdaddy for sharing the article.
AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
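The paper describes the registry only at the policy level. As a minimal, hypothetical data-structure sketch (all identifiers invented), each chip's unique ID could anchor an append-only chain of custody, with transfers of never-registered IDs flagged as possible smuggling:

```python
class ChipRegistry:
    """Toy global registry: one append-only custody chain per chip ID."""

    def __init__(self):
        self.chain = {}  # chip_id -> list of (event, holder) tuples

    def register(self, chip_id, manufacturer):
        # Called once at fabrication; the unique identifier would be
        # incorporated into the chip itself.
        self.chain[chip_id] = [("manufactured", manufacturer)]

    def transfer(self, chip_id, new_holder):
        # Every resale or export appends to the chain; an unknown ID
        # is a red flag for smuggled or counterfeit hardware.
        if chip_id not in self.chain:
            raise KeyError(f"unregistered chip {chip_id!r}")
        self.chain[chip_id].append(("transferred", new_holder))

    def current_holder(self, chip_id):
        return self.chain[chip_id][-1][1]
```

A real registry would face the hard parts this sketch ignores: tamper-resistant IDs, cross-border reporting, and who gets to run the database.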

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
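The quoted scheme, a signed on-chip certificate with a renewable expiry, can be mocked up in a few lines. This sketch uses an HMAC as a simplified stand-in for the asymmetric signature a real co-processor would verify; every name and field here is invented for illustration:

```python
import hashlib
import hmac
import json

REGULATOR_KEY = b"demo-signing-key"  # stand-in for the regulator's signing key

def issue_license(chip_id, expires_at, key=REGULATOR_KEY):
    # The regulator periodically re-issues a small certificate
    # binding a chip ID to an expiry timestamp.
    payload = json.dumps({"chip": chip_id, "exp": expires_at}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def chip_allows_full_speed(cert, chip_id, now, key=REGULATOR_KEY):
    # On-chip check: a forged, mismatched, or expired certificate
    # means the chip refuses to run (or throttles itself).
    expected = hmac.new(key, cert["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return False
    claims = json.loads(cert["payload"])
    return claims["chip"] == chip_id and now < claims["exp"]
```

Note the "remote" part needs no inbound channel at all: an expired certificate simply stops validating, so the regulator disables a chip by withholding the next renewal.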

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
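The multi-party sign-off amounts to a k-of-n authorization check. A minimal sketch of that logic (the party names and quorum are invented; the paper doesn't prescribe who the signers would be):

```python
def training_authorized(approvals, designated_signers, quorum):
    # k-of-n sign-off, loosely analogous to a permissive action link:
    # a large training run proceeds only when at least `quorum` of the
    # designated parties have approved it. Approvals from anyone
    # outside the designated set are ignored.
    valid = set(approvals) & set(designated_signers)
    return len(valid) >= quorum

# Hypothetical parties for illustration only.
signers = {"regulator", "cloud_provider", "independent_auditor"}
```

In practice each approval would itself be a cryptographic signature over the specific training job, not a bare name, so no single party could forge the quorum.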
Microsoft

Microsoft 'Retires' Azure IoT Central In Platform Rethink (theregister.com) 4

Lindsay Clark reports via The Register: In a statement on the Azure console, Microsoft confirmed the Azure IoT Central service is being retired on March 31, 2027. "Starting on April 1, 2024, you won't be able to create new application resources; however, all existing IoT Central applications will continue to function and be managed. Subscription {{subscriptionld} is not allowed to create new applications. Please create a support ticket to request an exception," the statement to customers, seen by The Register, said. According to a Microsoft "Learn" post from February 8, 2024, IoT Central is an IoT application platform as a service (aPaaS) designed to reduce work and costs while building, managing, and maintaining IoT solutions.

Microsoft's Azure IoT offering includes three pillars: IoT Hub, IoT Edge and IoT Central. IoT Hub is a cloud-based service that provides a "secure and scalable way to connect, monitor, and manage IoT devices and sensors," according to Microsoft. Azure IoT Edge is designed to allow devices to run cloud-based workloads locally. And Azure IoT Central is a fully managed, cloud-based IoT solution for connecting and managing devices at scale. Central is a layer above Hub in the architecture, and Hub itself may well continue. One developer told The Register there was no warning about Hub on the Azure console. As for IoT Edge, it is "a device-focused runtime that enables you to deploy, run, and monitor containerized Linux workloads." Microsoft has not said whether this would continue.

Cloud

Nginx Core Developer Quits Project, Says He No Longer Sees Nginx as 'Free and Open Source Project For the Public Good' (arstechnica.com) 53

A core developer of Nginx, currently the world's most popular web server, has quit the project, stating that he no longer sees it as "a free and open source project... for the public good." From a report: His fork, freenginx, is "going to be run by developers, and not corporate entities," writes Maxim Dounin, and will be "free from arbitrary corporate actions." Dounin is one of the earliest and still most active coders on the open source Nginx project and one of the first employees of Nginx, Inc., a company created in 2011 to commercially support the steadily growing web server. Nginx is now used on roughly one-third of the world's web servers, ahead of Apache.

Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019. Later that year, two of Nginx's leaders, Maxim Konovalov and Igor Sysoev, were detained and interrogated in their homes by armed Russian state agents. Sysoev's former employer, Internet firm Rambler, claimed that it owned the rights to Nginx's source code, as it was developed during Sysoev's tenure at Rambler (where Dounin also worked). While neither the criminal charges nor Rambler's ownership claims appear to have gone anywhere, the implications of a Russian company's intrusion into a popular open source piece of the web's infrastructure caused some alarm. Sysoev left F5 and the Nginx project in early 2022. Later that year, due to the Russian invasion of Ukraine, F5 discontinued all operations in Russia. Some Nginx developers still in Russia formed Angie, developed in large part to support Nginx users in Russia. Dounin technically stopped working for F5 at that point, too, but maintained his role in Nginx "as a volunteer," according to Dounin's mailing list post.

Dounin writes in his announcement that "new non-technical management" at F5 "recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers' position." While it was "quite understandable," given their ownership, Dounin wrote that it means he was "no longer able to control which changes are made in nginx," hence his departure and fork.

Software

VMware Admits Sweeping Broadcom Changes Are Worrying Customers (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: Broadcom has made a lot of changes to VMware since closing its acquisition of the company in November. On Wednesday, VMware admitted that these changes are worrying customers. With customers mulling alternatives and partners complaining, VMware is trying to do damage control and convince people that change is good. Not surprisingly, the plea comes from a VMware marketing executive: Prashanth Shenoy, VP of product and technical marketing for the Cloud, Infrastructure, Platforms, and Solutions group at VMware. In Wednesday's announcement, Shenoy admitted that VMware "has been all about change" since being swooped up for $61 billion. This has resulted in "many questions and concerns" as customers "evaluate how to maximize value from" VMware products.

Among these changes is VMware ending perpetual license sales in favor of a subscription-based business model. VMware had a history of relying on perpetual licensing; VMware called the model its "most renowned" a year ago. Shenoy's blog sought to provide reasoning for the change, with the executive writing that "all major enterprise software providers are on [subscription models] today." However, the idea that "everyone's doing it" has done little to appease impacted customers who prefer paying for something once and owning it indefinitely (while paying for associated support costs). Customers are also dealing with budget concerns, with already paid-for licenses set to lose support and the only alternative being a monthly fee.

Shenoy's blog, though, focused on license portability. "This means you will be able to deploy on-premises and then take your subscription at any time to a supported Hyperscaler or VMware Cloud Services Provider environment as desired. You retain your license subscription as you move," Shenoy wrote, noting new Google Cloud VMware Engine license portability support for VMware Cloud Foundation. Further, Shenoy claimed the discontinuation of VMware products so that Broadcom could focus on VMware Cloud Foundation and vSphere Foundation would be beneficial, because "offering a few offerings that are lower in price on the high end and are packed with more value for the same or less cost on the lower end makes business sense for customers, partners, and VMware."

VMware's Wednesday post also addressed Broadcom taking VMware's biggest customers direct, removing channel partners from the equation: "It makes business sense for Broadcom to have close relationships with its most strategic VMware customers to make sure VMware Cloud Foundation is being adopted, used, and providing customer value. However, we expect there will be a role change in accounts that will have to be worked through so that both Broadcom and our partners are providing the most value and greatest impact to strategic customers. And, partners will play a critical role in adding value beyond what Broadcom may be able."

"Broadcom identified things that needed to change and, as a responsible company, made the changes quickly and decisively," added Shenoy. "The changes that have taken place over the past 60+ days were absolutely necessary."

Slashdot Top Deals