Intel

Intel Will Lay Off 15% To 20% of Its Factory Workers, Memo Says

Intel will lay off 15% to 20% of its factory workforce starting in July, potentially cutting over 10,000 jobs as part of a broader effort to streamline operations amid declining sales and mounting competitive pressure. "These are difficult actions but essential to meet our affordability challenges and current financial position of the company. It drives pain to every individual," Intel manufacturing Vice President Naga Chandrasekaran wrote to employees Saturday. "Removing organizational complexity and empowering our engineers will enable us to better serve the needs of our customers and strengthen our execution. We are making these decisions based on careful consideration of what's needed to position our business for the future." The company reiterated that "we will treat people with care and respect as we complete this important work." Oregon Live reports: Intel announced the pending layoffs in April and notified factory workers last week that the cuts would begin in July. It hadn't previously said just how deep the layoffs will go. The company had 109,000 employees at the end of 2024, but it's not clear how many of those worked in its factory division -- called Intel Foundry. The Foundry business includes a broad array of jobs, from technicians on the factory floor to specialized researchers who work years in advance to develop future generations of microprocessors.

Intel is planning major cuts in other parts of its business, too, but employees say the company hasn't specified how many jobs it will eliminate in each business unit. Workers say they believe the impacts will vary within departments. Overall, though, the layoffs will surely eliminate several thousand jobs -- and quite possibly more than 10,000.
Red Hat Software

Rocky and Alma Linux Still Going Strong. RHEL Adds an AI Assistant (theregister.com)

Rocky Linux 10 "Red Quartz" has reached general availability, notes a new article in The Register — surveying the differences between the "RHELatives" — the major alternatives to Red Hat Enterprise Linux: The Rocky 10 release notes describe what's new, such as support for RISC-V computers. Balancing that, this version only supports the Raspberry Pi 4 and 5 series; it drops Rocky 9.x's support for the older Pi 3 and Pi Zero models...

RHEL 10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" generation kit from about 2013 onward. Uniquely among the RHELatives, AlmaLinux offers a separate build of version 10 for x86-64-v2 as well, meaning Intel "Nehalem" and later — chips from roughly 2008 onward. AlmaLinux has a history of still supporting hardware that's been dropped from RHEL and Rocky, which it's been doing since AlmaLinux 9.4. Now that includes CPUs. In comparison, the system requirements for Rocky Linux 10 are the same as for RHEL 10. The release notes say.... "The most significant change in Rocky Linux 10 is the removal of support for x86-64-v2 architectures. AMD and Intel 64-bit architectures for x86-64-v3 are now required."
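The x86-64-v2 and x86-64-v3 levels correspond to concrete sets of CPU feature flags. As an illustration only — the flag subsets below are abridged to the headline features of each level, with names as they appear in Linux's /proc/cpuinfo — a minimal sketch that classifies a machine:

```python
# Sketch: infer the x86-64 microarchitecture level from CPU feature flags.
# Flag sets are abridged; names follow /proc/cpuinfo conventions on Linux.
V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}
V3_FLAGS = V2_FLAGS | {"abm", "avx", "avx2", "bmi1", "bmi2",
                       "f16c", "fma", "movbe", "xsave"}

def x86_64_level(flags: set[str]) -> str:
    """Return the highest x86-64 level implied by a set of CPU flags."""
    if V3_FLAGS <= flags:
        return "x86-64-v3"   # Haswell (~2013) and later: OK for RHEL/Rocky 10
    if V2_FLAGS <= flags:
        return "x86-64-v2"   # Nehalem (~2008) and later: AlmaLinux 10 build only
    return "x86-64-v1"

def level_from_proc_cpuinfo(path: str = "/proc/cpuinfo") -> str:
    """Classify the local CPU by parsing its first 'flags' line."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return x86_64_level(set(line.split(":", 1)[1].split()))
    return "unknown"
```

On recent glibc systems (2.33 and later), running `/lib64/ld-linux-x86-64.so.2 --help` also prints which x86-64 levels the running CPU supports.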

A significant element of the advertising around RHEL 10 involves how it has an AI assistant. This is called Red Hat Enterprise Linux Lightspeed, and you can use it right from a shell prompt, as the documentation describes... It's much easier than searching man pages, especially if you don't know what to look for... [N]either AlmaLinux 10 nor Rocky Linux 10 includes the option of a helper bot. No big surprise there... [Rocky Linux] is sticking closest to upstream, thanks to a clever loophole to obtain source RPMs. Its hardware requirements also closely parallel RHEL 10, and CIQ is working on certifications, compliance, and special editions. Meanwhile, AlmaLinux is maintaining support for older hardware and CPUs, which will widen its appeal, and working with partners to ensure reboot-free updates and patching, rather than CIQ's keep-it-in-house approach. All are valid, and all three still look and work almost identically... except for the LLM bot assistant.

Hardware

PCI Express 7.0 Specs Released (tomshardware.com)

The PCI-SIG, which oversees the development of the PCIe specification, has officially released the final spec for PCI Express 7.0. "The PCIe 7.0 specification increases the per-lane data transfer rate to 128 GT/s in each direction, which is twice as fast as PCIe 6.0 supports and four times faster than PCIe 5.0," reports Tom's Hardware. "Such a significant performance increase enables devices with 16 PCIe 7.0 lanes to transfer up to 256 GB/s in each direction, not accounting for protocol overhead. The new version of the interface continues to use PAM4 signaling while maintaining the 1b/1b FLIT encoding method first introduced in PCIe 6.0." From the report: To achieve PCIe 7.0's record 128 GT/s data transfer rate, developers of PCIe 7.0 had to increase the physical signaling rate to 32 GHz or beyond. Keep in mind that both PCIe 5.0 and 6.0 use a physical signaling rate of 16 GHz to enable 32 GT/s using NRZ signaling and 64 GT/s using PAM4 signaling (which allows transfers of two bits per symbol). With PCIe 7.0, developers had to boost the physical frequency for the first time since 2017, which required tremendous work at various levels, as maintaining signal integrity at 32 GHz over long distances using copper wires is extremely challenging.
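The headline figures are easy to sanity-check. A back-of-envelope sketch of the raw link rate (no protocol overhead accounted for):

```python
# Raw PCIe 7.0 link-rate arithmetic (pre-overhead).
transfers_per_s = 128e9   # 128 GT/s per lane; 1b/1b FLIT = 1 data bit per transfer
lanes = 16

# x16 aggregate bandwidth, one direction
gbytes_per_s_x16 = transfers_per_s * lanes / 8 / 1e9
print(gbytes_per_s_x16)   # 256.0 GB/s per direction

# PAM4 carries 2 bits per symbol, so the symbol rate is half the bit rate
symbol_rate = transfers_per_s / 2   # 64 GBaud -> ~32 GHz Nyquist fundamental
```

The last line is why the spec pushed the physical signaling frequency to roughly 32 GHz: a 64 GBaud PAM4 stream has its Nyquist frequency at half the symbol rate.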

Beyond raw throughput, the update also offers improved power efficiency and stronger support for longer or more complex electrical channels, particularly when using a cabling solution, to cater to the needs of next-generation data center-grade bandwidth-hungry applications, such as 800G Ethernet, Ultra Ethernet, and quantum computing, among others. [...] With the PCIe 7.0 standard officially released, members of the PCI-SIG, including AMD, Intel, and Nvidia, can begin finalizing the development of their platforms that support the PCIe specifications. PCI-SIG plans to start preliminary compliance tests in 2027, with official interoperability tests scheduled for 2028. Therefore, expect actual PCIe 7.0 devices and platforms on the market sometime in 2028 - 2029, if everything goes as planned.
PCI-SIG also announced that pathfinding for PCIe 8.0 is underway, and members of the organization are actively exploring possibilities and defining capabilities of a standard that they are going to use in 2030 and beyond.

"Interestingly, when asked whether PCIe 8.0 would double data transfer rate to 256 GT/s in each direction (and therefore enable bandwidth of 1 TB/s in both directions using 16 lanes), Al Yanes, president of PCI-SIG, said that while this is an intention, he would not like to make any definitive claims," reports Tom's Hardware. "Additionally, he stated that PCI-SIG is looking forward to enabling PCIe 8.0, which will offer higher performance over copper wires in addition to optical interconnects."
Operating Systems

FreeBSD 14.3 Released (phoronix.com)

Michael Larabel of Phoronix highlights the key updates in today's stable release of FreeBSD 14.3: FreeBSD 14.3 back-ports a number of improvements from FreeBSD 15 to the FreeBSD 14 series, plus a number of routine package updates and other fixes. Some of the FreeBSD 14.3-RELEASE highlights include:

- Updating the ZFS support against OpenZFS 2.2.7.
- Merging of the Realtek RTW88 and RTW89 WiFi drivers based on the Linux 6.14 kernel code.
- The LinuxKPI code has been improved to support crypto offload as well as the 802.11n and 802.11ac standards.
- The Intel IX Ethernet driver has added support for the X550 1000BASE-BX SFP modules.
- Thor2 PCI IDs added to the Broadcom NetXtreme "BNXT" driver along with support for 400G speed modules.
- XZ 5.8.1, OpenSSH 9.9p2, OpenSSL 3.0.16, and many other package updates.
- Syscons as the legacy system console driver is now considered deprecated. Syscons is not compatible with UEFI, lacks UTF-8 support, and is Giant-locked.
You can download and learn more about FreeBSD 14.3 via FreeBSD.org.
AI

Gabbard Says AI is Speeding Up Intel Work, Including the Release of the JFK Assassination Files (apnews.com)

AI is speeding up the work of America's intelligence services, Director of National Intelligence Tulsi Gabbard said Tuesday. From a report: Speaking to a technology conference, Gabbard said AI programs, when used responsibly, can save money and free up intelligence officers to focus on gathering and analyzing information. The sometimes slow pace of intelligence work frustrated her as a member of Congress, Gabbard said, and continues to be a challenge. AI can run human resource programs, for instance, or scan sensitive documents ahead of potential declassification, Gabbard said. Her office has released tens of thousands of pages of material related to the assassinations of President John F. Kennedy and his brother, New York Sen. Robert F. Kennedy, on the orders of President Donald Trump.

Experts had predicted the process could take many months or even years, but AI accelerated the work by scanning the documents to see if they contained any material that should remain classified, Gabbard said during her remarks at the Amazon Web Services Summit in Washington. "We have been able to do that through the use of AI tools far more quickly than what was done previously -- which was to have humans go through and look at every single one of these pages," Gabbard said.

Desktops (Apple)

Apple Will End Support For Intel Macs Next Year (9to5mac.com)

Apple announced that macOS 26 "Tahoe" will be the final version to support Intel-based Macs, with future macOS releases running exclusively on Apple Silicon devices (that is, 2020 M1 models and newer). They will, however, continue to receive security updates for a few more years. 9to5Mac reports: In some ways, Apple has already stopped supporting some non-Apple Silicon models of its lineup. macOS Tahoe does not work with any Intel MacBook Air or Mac mini for instance. But Tahoe does still support some Intel Macs. That includes compatibility with the 2019 16-inch MacBook Pro, the 2020 Intel 13-inch MacBook Pro, 2020 iMac, and the 2019 Mac Pro.

Based on Apple's warning, you can expect that macOS 27 will drop support for all of these legacy machines, making macOS 26 the last compatible version. These devices will continue to receive security updates for another three years, however. Going forward, the minimum supported hardware will be the 2020 generation onwards, as that is when Apple began the Apple Silicon transition with the M1. M1 Pro and M1 Max MacBook Pros followed in 2021.

IOS

What To Expect From Apple's WWDC (arstechnica.com)

Apple's Worldwide Developers Conference (WWDC) 2025 kicks off next week, on June 9th, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, given that Siri's overhaul is not yet ready, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like Google's Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath: the event is expected to focus primarily on software developments. We may even see support dropped for several older Intel-based Macs in macOS 26, including models like the 2018 MacBook Pro and the 2019 iMac, as Apple continues its transition toward exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)
Intel

Top Researchers Leave Intel To Build Startup With 'The Biggest, Baddest CPU' (oregonlive.com)

An anonymous reader quotes a report from OregonLive: Together, the four founders of Beaverton startup AheadComputing spent nearly a century at Intel. They were among Intel's top chip architects, working years in advance to develop new generations of microprocessors to power the computers of the future. Now they're on their own, flying without a net, building a new class of microprocessor on an entirely different architecture from Intel's. Founded a year ago, AheadComputing is trying to prove there's a better way to design computer chips.

"AheadComputing is doing the biggest, baddest CPU in the world," said Debbie Marr, the company's CEO. [...] AheadComputing is betting on an open architecture called RISC-V -- RISC stands for "reduced instruction set computer." The idea is to craft a streamlined microprocessor that works more efficiently by doing fewer things, and doing them better than conventional processors. For AheadComputing's founders and 80 employees, many of them also Intel alumni, it's a major break from the kind of work they've been doing all their careers. They've left a company with more than 100,000 workers to start a business with fewer than 100.

"Every person in this room," Marr said, looking across a conference table at her colleagues, "we could have stayed at Intel. We could have continued to do very exciting things at Intel." They decided they had a better chance at leading a revolution in semiconductor technology at a startup than at a big, established company like Intel. And AheadComputing could be at the forefront of renewal in Oregon's semiconductor ecosystem. "We see this opportunity, this light," Marr said. "We took our chances."
It'll be years before AheadComputing's designs are on the market, but the company "envisions its chips will someday power PCs, laptops and data centers," reports OregonLive. "Possible clients could include Google, Amazon, Samsung or other large computing companies."
Intel

Intel: New Products Must Deliver 50% Gross Profit To Get the Green Light (tomshardware.com)

Intel has implemented a strict new policy requiring all new projects to demonstrate at least a 50% gross margin to move forward. CEO Lip-Bu Tan explained Intel's new risk-averse policy as "something that we probably should have had before," later clarifying that the number is a figure the company is aspiring toward internally. Tom's Hardware reports: Tan is reportedly "laser focused on the fact that we need to get our gross margins back up above 50%." To accomplish this, Tan is also said to be investigating and potentially cancelling or changing unprofitable deals with other companies. Intel's margins have slipped to new lows for the company in recent months. MacroTrends reports Intel's trailing 12 months gross margin for Q1 2025 was as low as 31.67%. Intel's gross margins had hovered around the 60% mark for the ten years leading up to the COVID-19 pandemic, falling beneath 50% in Q2 2022 and continuing to steadily fall ever since.
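For reference, gross margin is simply revenue minus cost of goods sold, expressed as a fraction of revenue. A quick sketch, using hypothetical revenue/cost figures chosen only to reproduce the percentages cited above:

```python
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin = (revenue - cost of goods sold) / revenue."""
    return (revenue - cogs) / revenue

# Illustrative figures only (normalized to revenue of 100)
print(f"{gross_margin(100.0, 68.33):.2%}")  # 31.67% -- Intel's reported Q1 2025 TTM level
print(f"{gross_margin(100.0, 50.00):.2%}")  # 50.00% -- Tan's threshold for new projects
```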

Michelle Johnston Holthaus, CEO of Intel Products, predicts a "tug-of-war" to ensue within Intel in the coming months as engineers and executives reckon with being forced between a rock and a hard place. "We need to be building products that... fit the right competitive landscape and requirements of our customers, but also have the right cost structure in place. It really requires us to do both." [...] Tan is also quoted as wanting to turn Intel into an "engineering-focused company" again under his leadership. To reach this, Tan has committed to investing in recruiting and retaining top talent; "I believe Intel has lost some of this talent over the years; I want to create a culture of innovation empowerment." Maintaining a culture of empowering innovation and top talent seems, on its face, at odds with layoffs and a lock on projects not projected to gross 50% margins, but Tan seemingly has Intel investors on his side in these pursuits.

Patents

Intel Wins Jury Trial Over Patent Licenses In $3 Billion VLSI Fight (reuters.com)

A Texas jury ruled that Intel may hold a license to patents owned by VLSI Technology through its agreement with Finjan Inc., both controlled by Fortress Investment Group -- potentially nullifying over $3 billion in previous patent infringement verdicts against Intel. Reuters reports: VLSI has sued Intel in multiple U.S. courts for allegedly infringing several patents covering semiconductor technology. A jury in Waco, Texas awarded VLSI $2.18 billion in their first trial in 2021, which a U.S. appeals court has since overturned and sent back for new proceedings.

An Austin, Texas jury determined that VLSI was entitled to nearly $949 million from Intel in a separate patent infringement trial in 2022. Intel has argued in that case that the verdicts should be thrown out based on a 2012 agreement that gave it a license to patents owned by Finjan and other companies "under common control" with it. U.S. District Judge Alan Albright held the latest jury trial in Austin to determine whether Finjan and VLSI were under the "common control" of Fortress. VLSI said it was not subject to the Finjan agreement, and that the company did not even exist until four years after it was signed.

Desktops (Apple)

macOS 26 May Not Support 2018 MacBook Pros, 2019 iMacs, or the iMac Pro (appleinsider.com)

Apple's upcoming macOS 26 operating system may abandon support for several older Mac models, according to AppleInsider. The casualties will include 2018 MacBook Pro models, the 2020 Intel MacBook Air, the 2017 iMac Pro, and the 2018 Mac mini -- all currently the oldest machines compatible with macOS Sequoia, the report said, citing a source familiar with the matter. The 2019 MacBook Pro models and 2020 5K iMac models will retain compatibility with the new system, codenamed "Cheer," said AppleInsider.
Build

Linux 6.16 Adds 'X86_NATIVE_CPU' Option To Optimize Your Kernel Build (phoronix.com)

unixbhaskar shares a report from Phoronix: The X86_NATIVE_CPU Kconfig build time option has been merged for the Linux 6.16 merge window as an easy means of enforcing "-march=native" compiler behavior on AMD and Intel processors to optimize your kernel build for the local CPU architecture/family of your system. For those wanting to "-march=native" your Linux kernel build on AMD/Intel x86_64 processors, the new CONFIG_X86_NATIVE_CPU option can be easily enabled for setting that compiler option on your local kernel builds.

The CONFIG_X86_NATIVE_CPU option is honored when compiling the Linux x86_64 kernel with GCC, or with LLVM Clang version 19 or newer; older Clang releases are excluded due to a compiler bug affecting the Linux kernel. In addition to setting the "-march=native" compiler option for the kernel's C code, enabling this new Kconfig build option also sets "-Ctarget-cpu=native" for the kernel's Rust code.
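For reference, enabling the option amounts to one line in the kernel's `.config` (this fragment assumes a Linux 6.16 or newer source tree; the symbol does not exist on older kernels):

```
# Optimize the build for the local CPU
# (implies -march=native for C, -Ctarget-cpu=native for Rust)
CONFIG_X86_NATIVE_CPU=y
```

Note that a kernel built this way is tailored to the build machine's CPU family and may not boot on older processors.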
"It seems interesting though," comments unixbhaskar. "If the detailed benchmark shows some improvement with the option selected, then distros might start to adopt it for their flavor."
Security

DanaBot Malware Devs Infected Their Own PCs (krebsonsecurity.com)

The U.S. unsealed charges against 16 individuals behind DanaBot, a malware-as-a-service platform responsible for over $50 million in global losses. "The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware," reports KrebsOnSecurity. From the report: Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud. Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform. The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. "JimmBee," and Artem Aleksandrovich Kalinkin, 34, a.k.a. "Onix," both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is "Maffiozi."

According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot -- emerging in January 2021 -- was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia. The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

"In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware," the criminal complaint reads. "In other cases, the infections seemed to be inadvertent -- one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake." A statement from the DOJ says that as part of today's operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team Cymru, and ZScaler.

Graphics

Nvidia's RTX 5060 Review Debacle Should Be a Wake-Up Call (theverge.com)

Nvidia is facing backlash for allegedly manipulating the review process of its GeForce RTX 5060 GPU by withholding drivers, selectively granting early access to favorable reviewers, and pressuring media to present the card in a positive light. As The Verge's Sean Hollister writes, the debacle "should be a wake-up call for gamers and reviewers." Here's an excerpt from the report: Nvidia has gone too far. This week, the company reportedly attempted to delay, derail, and manipulate reviews of its $299 GeForce RTX 5060 graphics card, which would normally be its bestselling GPU of the generation. Nvidia has repeatedly and publicly said the budget 60-series cards are its most popular, and this year it reportedly tried to ensure it by withholding access and pressuring reviewers to paint them in the best light possible.

Nvidia might have wanted to prevent a repeat of 2023, when it launched this card's predecessor. Those reviews were harsh. The 4060 was called a "slap in the face to gamers" and a "wet fart of a GPU." I had guessed the 5060 was headed for the same fate after seeing how reviewers handled the 5080, which similarly showcased how little Nvidia's hardware has improved year over year and relies on software to make up the gaps. But Nvidia had other plans. Here are the tactics that Nvidia reportedly just used to throw us off the 5060's true scent, as individually described by GamersNexus, VideoCardz, Hardware Unboxed, GameStar.de, Digital Foundry, and more:

- Nvidia decided to launch its RTX 5060 on May 19th, when most reviewers would be at Computex in Taipei, Taiwan, rather than at their test beds at home.
- Even if reviewers already had a GPU in hand before then, Nvidia cut off most reviewers' ability to test the RTX 5060 before May 19th by refusing to provide drivers until the card went on sale. (Gaming GPUs don't really work without them.)
- And yet Nvidia allowed specific, cherry-picked reviewers to have early drivers anyhow if they agreed to a borderline unethical deal: they could only test five specific games, at 1080p resolution, with fixed graphics settings, against two weaker GPUs (the 3060 and 2060 Super) where the new card would be sure to win.
- In some cases, Nvidia threatened to withhold future access unless reviewers published apples-to-oranges benchmark charts showing how the RTX 5060's "fake frames" MFG tech can produce more frames than earlier GPUs without it.

Some reviewers apparently took Nvidia up on that proposition, leading to day-one "previews" where the charts looked positively stacked in the 5060's favor [...]. But the reality, according to reviews that have since hit the web, is that the RTX 5060 often fails to beat a four-year-old RTX 3060 Ti, frequently fails to beat a four-year-old 3070, and can sometimes get upstaged by Intel's cheaper $250 B580. And yet, the 5060's lackluster improvements are overshadowed by a juicier story: inexplicably, Nvidia decided to threaten GamersNexus' future access over its GPU coverage. Yes, the same GamersNexus that's developed a staunch reputation for defending consumers from predatory behavior, and just last month published a report on "GPU shrinkflation" that accused Nvidia of misleading marketing. Bad move! [...]

Nvidia is within its rights to withhold access, of course. Nvidia doesn't have to send out graphics cards or grant interviews. It'll only do it if it's good for business. But the unspoken covenant of product reviews is that the press, as a whole, gets a chance to warn the public if a movie, video game, or GPU is not worth their money. It works both ways: the media also gets the chance to warn that a product is so good you might want to line up in advance. That unspoken rule is what Nvidia is trampling here.

Intel

Intel Explores Sale of Networking and Edge Unit

An anonymous reader shares a report: Intel has considered divesting its network and edge businesses as the chipmaker looks to shave off parts of the company its new chief executive does not see as crucial, three sources familiar with the matter said.

Talks about the potential sale of the group, once called NEX in Intel's financial results, are a part of CEO Lip-Bu Tan's strategy to focus its tens of thousands of employees on areas in which it has historically thrived: PC and data center chips.
AI

Qualcomm To Launch Data Center Processors That Link To Nvidia Chips

Qualcomm announced plans to re-enter the data center market with custom CPUs designed to integrate with Nvidia GPUs and software. As CNBC reports, the move supports Qualcomm's broader strategy to diversify beyond smartphones and into high-growth areas like data centers, PCs, and automotive chips. From the report: "I think we see a lot of growth happening in this space for decades to come, and we have some technology that can add real value added," Cristiano Amon, CEO of Qualcomm, told CNBC in an interview on Monday. "So I think we have a very disruptive CPU." Amon said the company will make an announcement about the CPU roadmap and the timing of its release "very soon," without offering specifics. The data center CPU market remains highly competitive. Big cloud computing players like Amazon and Microsoft already design and deploy their own custom CPUs. AMD and Intel also have a strong presence.

Addressing the competition, Amon said that there will be a place for Qualcomm in the data center CPU space. "As long as ... we can build a great product, we can bring innovation, and we can add value with some disruptive technology, there's going to be room for Qualcomm, especially in the data center," Amon said. "[It] is a very large addressable market that will see a lot of investment for decades to come." Last week, Qualcomm signed a memorandum of understanding with Saudi-based AI firm Humain to develop data centers, joining a slew of U.S. tech companies making deals in the region. Humain will operate under Saudi Arabia's Public Investment Fund.
Open Source

OSU's Open Source Lab Eyes Infrastructure Upgrades and Sustainability After Recent Funding Success (osuosl.org)

It's a nonprofit that provides hosting for the Linux Foundation, the Apache Software Foundation, Drupal, Firefox, and 160 other projects — delivering nearly 430 terabytes of information every month. (It's currently hosting Debian, Fedora, and Gentoo Linux.) But hosting only provides about 20% of its income, with the rest coming from individual and corporate donors (including Google and IBM). "Over the past several years, we have been operating at a deficit due to a decline in corporate donations," the Open Source Lab's director announced in late April.
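That monthly volume implies a substantial sustained data rate. A rough back-of-envelope conversion (assuming a 30-day month and even traffic, which real mirror traffic certainly isn't):

```python
# Convert ~430 TB/month into an average sustained line rate.
terabytes_per_month = 430
seconds_per_month = 30 * 24 * 3600          # 30-day month assumed

bits_per_s = terabytes_per_month * 1e12 * 8 / seconds_per_month
print(f"{bits_per_s / 1e9:.2f} Gbps sustained average")  # 1.33 Gbps sustained average
```

Peak demand around release days would of course be far higher than this average.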

It's part of the CS/electrical engineering department at Oregon State University, and while the department "has generously filled this gap, recent changes in university funding makes our current funding model no longer sustainable. Unless we secure $250,000 in committed funds, the OSL will shut down later this year."

But "Thankfully, the call for support worked, paving the way for the OSU Open Source Lab to look ahead, into what the future holds for them," reports the blog It's FOSS.

"Following our OSL Future post, the community response has been incredible!" posted director Lance Albertson. "Thanks to your amazing support, our team is funded for the next year. This is a huge relief and lets us focus on building a truly self-sustaining OSL." To get there, we're tackling two big interconnected goals:

1. Finding a new, cost-effective physical home for our core infrastructure, ideally with more modern hardware.
2. Securing multi-year funding commitments to cover all our operations, including potential new infrastructure costs and hardware refreshes.


Our current data center is over 20 years old and needs to be replaced soon. With Oregon State University evaluating the future of this facility, it's very likely we'll need to relocate in the near future. While migrating to the State of Oregon's data center is one option, it comes with significant new costs. This makes finding free or very low-cost hosting (ideally between Eugene and Portland for ~13-20 racks) a huge opportunity for our long-term sustainability. More power-efficient hardware would also help us shrink our footprint.

Speaking of hardware, refreshing some of our older gear during a move would be a game-changer. We don't need brand new, but even a few-generations-old refurbished systems would boost performance and efficiency. (Huge thanks to the Yocto Project and Intel for a recent hardware donation that showed just how impactful this is!) The dream? A data center partner donating space and cycled-out hardware. Our overall infrastructure strategy is flexible. We're enhancing our OpenStack/Ceph platforms and exploring public cloud credits and other donated compute capacity. But whatever the resource, it needs to fit our goals and come with multi-year commitments for stability. And, a physical space still offers unique value, especially the invaluable hands-on data center experience for our students....

[O]ur big focus this next year is locking in ongoing support — think annualized pledges, different kinds of regular income, and other recurring help. This is vital, especially with potential new data center costs and hardware needs. Getting this right means we can stop worrying about short-term funding and plan for the future: investing in our tech and people, growing our awesome student programs, and serving the FOSS community. We're looking for partners, big and small, who get why foundational open source infrastructure matters and want to help us build this sustainable future together.

The It's FOSS blog adds that "With these prerequisites in place, the OSUOSL intends to expand their student program, strengthen their managed services portfolio for open source projects, introduce modern tooling like Kubernetes and Terraform, and encourage more community volunteers to actively contribute."

Thanks to long-time Slashdot reader I'm just joshin for suggesting the story.
AMD

Intel Struggles To Reverse AMD's Share Gains In x86 CPU Market (crn.com) 91

An anonymous reader shared this report from CRN: CPU-tracking firm Mercury Research reported on Thursday that Intel's x86 CPU market share grew 0.3 points sequentially to 75.6 percent against AMD's 24.4 percent in the first quarter. However, AMD managed to increase its market share by 3.6 points year over year. These figures only captured the server, laptop and desktop CPU segments. When including IoT and semicustom products, AMD grew its x86 market share sequentially by 1.5 points and year over year by 0.9 points to 27.1 percent against Intel's 72.9 percent... AMD managed to gain ground on Intel in the desktop and server segments sequentially and year over year. But it was in the laptop segment where Intel eked out a sequential share gain, even though rival AMD finished the first quarter with a higher share of shipments than it had a year ago...

While AMD mostly came out on top in the first quarter, [Mercury Research President Dean] McCarron said ARM's estimated CPU share against x86 products crossed into the double digits for the first time, growing 2.3 points sequentially to 11.9 percent. This was mainly due to a "surge" of Nvidia's Grace CPUs for servers and a large increase of Arm CPU shipments for Chromebooks.

Meanwhile, PC Gamer reports that ARM's share of the PC processor market "grew to 13.6% in the first quarter of 2025 from 10.8% in the fourth quarter of 2024." And they note the still-only-rumors that an Arm-based chip from AMD will be available as soon as next year. [I]f one of the two big players in x86 does release a mainstream Arm chip for the PC, that will be very significant. If it comes at about the same time as Nvidia's rumoured Arm chip for the PC, well, momentum really will be building and questioning x86's dominance will be wholly justified.
Google

Google Dominates AI Patent Applications (axios.com) 12

Google has overtaken IBM to become the leader in generative AI-related patents and also leads in the emerging area of agentic AI, according to data from IFI Claims. Axios: In the patents-for-agents U.S. rankings, Google and Nvidia top the list, followed by IBM, Intel and Microsoft, according to an analysis released Thursday.

Globally, Google and Nvidia also led the agentic patents list, but three Chinese universities also make the top 10, highlighting China's place as the chief U.S. rival in the field. In global rankings for generative AI, Google was also the leader -- but six of the top 10 global spots were held by Chinese companies or universities. Microsoft was No. 3, with Nvidia and IBM also in the top 10.

Intel

Intel Certifies Shell Lubricant for Cooling AI Data Centers (bloomberg.com) 44

Intel has certified Shell's lubricant-based method for cooling servers more efficiently within data centers used for AI. From a report: The announcement on Tuesday, which follows the chipmaker's two-year trial of the technology, offers a way to use less energy at AI facilities, which are booming and are expected to double their electricity demand globally by 2030, consuming as much power then as all of Japan today, according to the International Energy Agency.

So far, companies have largely used giant fans to reduce temperatures inside AI data centers, which generate more heat as they run at higher power. Increasingly, these fans consume electricity at a rate that rivals the computers themselves, something the facilities' operators would prefer to avoid, Intel Principal Engineer Samantha Yates said in an interview.
