HP

HP Chooses Ubuntu-Based Pop!_OS Linux For Its Upcoming Dev One Laptop (betanews.com)

System76's CEO Carl Richell announced that HP has chosen the Ubuntu-based Pop!_OS operating system to run on its 14-inch developer-focused notebook called "Dev One." Brian Fagioli from BetaNews speculates that an HP acquisition of System76 "could be a possibility in the future -- if this new relationship pans out at least." He continues: HP could be testing the waters with the upcoming Dev One. Keep in mind, System76 does not even build its own laptops, so we could see the company leave the notebook business and focus on desktops only -- let HP handle the Pop!_OS laptops. "We've got you covered. Experience exceptional multi-core performance from the AMD Ryzen 7 PRO processor and multitask with ease. Compile code, run a build, and keep all your apps running with more speed from the 16GB memory. Plus, load and save files in a flash, thanks to 1TB fast PCIe NVMe M.2 storage. We've even added a Linux Super key so shortcuts are a click away. Simply put, HP Dev One is built to help you code better," explains HP.

The company adds, "Pop!_OS is at your service. Create your ideal work experience with multiple tools to help you perform with peak efficiency. Use Stacking to organize and access multiple applications, browsers, and terminal windows. Move, resize, and arrange windows with ease or, let Pop!_OS keep you organized and efficient with Auto-tiling. And use Workspaces to reduce clutter by organizing windows across multiple desktops." Apparently, there will only be one configuration priced at $1,099. So far, no details about a release date have been announced other than "coming soon."

Open Source

Nvidia Transitioning To Official, Open-Source Linux GPU Kernel Driver (phoronix.com)

Nvidia is publishing its Linux GPU kernel modules as open source and will maintain them moving forward. Phoronix's Michael Larabel reports: To much excitement and a sign of the times, the embargo has just expired on this super-exciting milestone that many of us have been hoping to see for many years. Over the past two decades NVIDIA has offered great Linux driver support with their proprietary driver stack, but with the success of AMD's open-source driver effort going on for more than a decade, many have been calling for NVIDIA to open up their drivers. Their user-space software is remaining closed-source but as of today they have formally opened up their Linux GPU kernel modules and will be maintaining them moving forward. [...] This isn't limited to just Tegra: it spans their desktop graphics and is already production-ready for data center GPU usage.
AMD

AMD Doubles the Number of CPU Cores It Offers In Chromebooks (arstechnica.com)

AMD announced the Ryzen 5000 C-series for Chromebooks today. "The top chip in the series has eight of AMD's Zen 3 cores, giving systems that use it more x86 CPU cores than any other Chromebook," reports Ars Technica. From the report: The 7nm Ryzen 5000 C-series ranges from the Ryzen 3 5125C with two Zen 3 cores and a base and boost clock speed of 3 GHz, up to the Ryzen 7 5825C with eight cores and a base clock speed of 2 GHz that can boost to 4.5 GHz. For comparison, Intel's Core i7-1185G7, found in some higher end Chromebooks, has four cores and a base clock speed of 3 GHz that can boost to 4.8 GHz.

On their own, the chips aren't that exciting. They seemingly offer similar performance to the already-released Ryzen 5000 U-series chips. The Ryzen 5000 C-series also uses years-old Vega integrated graphics rather than the upgraded RDNA 2 found in Ryzen 6000 mobile chips, which AMD said at launch are "up to 2.1 times faster." But for someone who's constantly pushing their Chromebook to do more than just open a Chrome tab or two, the chips bring potentially better performance than what's currently available.

AMD

AMD Promises 'Extreme Gaming Laptops' in 2023 With New Dragon Range CPU (theverge.com)

An anonymous reader shares a report: A funny thing happened in 2020: AMD won the gaming laptop for the first time ever. Until the Asus Zephyrus G14, we'd never seen a laptop with an AMD CPU and AMD GPU run circles around the competition. Since then, we've repeatedly seen that "AMD laptop" no longer means cheap. But now, AMD is setting its sights higher than mid-range gaming machines -- it just revealed it's building a new CPU aimed at the "pinnacle of gaming performance" with the "highest core, thread and cache ever." The new CPU line is codenamed "Dragon Range," and they'll live exclusively at 55W TDP and up -- enough power that they'll "largely exist in the space where gaming laptops are plugged in the majority of the time," says AMD director of technical marketing Robert Hallock.
Hardware

Qualcomm's M1-Class Laptop Chips Will Be Ready For PCs In 'Late 2023' (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Qualcomm bought a chipmaking startup called Nuvia back in March of 2021, and later that year, the company said it would be using Nuvia's talent and technology to create high-performance custom-designed ARM chips to compete with Apple's processor designs. But if you're waiting for a truly high-performance Windows PC with anything other than an Intel or AMD chip in it, you'll still be waiting for a bit. Qualcomm CEO Cristiano Amon mentioned during the company's most recent earnings call that its high-performance chips were on track to land in consumer devices "in late 2023."

Qualcomm still plans to sample chips to its partners later in 2022, a timeframe it has mentioned previously and has managed to stick to. A gap between sampling and mass production is typical, giving Qualcomm time to work out bugs and improve chip yields and PC manufacturers more time to design and build finished products that incorporate the chips. [...] Like Apple's processors, Nuvia's support the ARM instruction set but don't use off-the-shelf ARM Cortex CPU designs. Those off-the-shelf Cortex cores have been phenomenally successful in commodity SoCs that power everything from Android phones to smart TVs, and they helped popularize the practice of combining large, high-performance CPU cores and small, high-efficiency CPU cores together in the same design. But they rarely manage to top the performance charts, something that's especially noticeable when they're running x86 code on Windows with a performance penalty.

Businesses

Nvidia and AMD GPUs Are Returning To Shelves and Prices Are Finally Falling (theverge.com)

For nearly two years, netting a PS5, Xbox Series X, or AMD Radeon and Nvidia RTX graphics cards without paying a fortune has been a matter of luck (or a lot of skill). At the peak of the shortage, scalpers were successfully charging double or even triple MSRP for a modern GPU. But it's looking like the great GPU shortage is nearly over. From a report: In January, sites including Tom's Hardware reported that prices were finally beginning to drop, and drop they did; they've now dropped an average of 30 percent in the three months since. On eBay, the most popular graphics cards are only commanding a street price of $200-$300 over MSRP. And while that might still seem like a lot, some have fallen further: used Nvidia RTX 3080 Ti and AMD RX 6900 XT cards are currently fetching less than their original asking price, a sure sign that sanity is returning to the marketplace.

Just as importantly, some graphics cards are actually staying in stock at retailers when their prices are too high -- again, something that sounds perfectly normal but that we haven't seen in a while. For many months, boutiques like my local retailer Central Computers could only afford to sell you a GPU as part of a big PC bundle, but now it's making every card available on its own. GameStop is selling a Radeon RX 6600 for just $10 over MSRP, and it hasn't yet sold out. Newegg has also continually been offering an RTX 3080 Ti for just $10 over MSRP (after rebate, too) -- even if $1,200 still seems high for that card's level of performance.

GNU is Not Unix

Richard Stallman Speaks on the State of Free Software, and Answers Questions (libreplanet.org)

Richard Stallman celebrated his 69th birthday last month. And Wednesday, he gave a 92-minute presentation called "The State of the Free Software Movement."

Stallman began by thanking everyone who's contributed to free software, and encouraged others who want to help to visit gnu.org/help. "The Free Software movement is universal, and morally should not exclude anyone. Because even though there are crimes that should be punished, cutting off someone from contributing to free software punishes the world. Not that person."

He then noted some things that have gotten better in the free software movement, including big improvements in projects like GNU Emacs when displaying external packages. (And in addition, "GNU Health now has a hospital management facility, which should make it applicable to a lot more medical organizations so they can switch to free software. And [Skype alternative] GNU Jami got a big upgrade.")

What's getting worse? Well, the libre-booted machines that we have are getting older and scarcer. Finding a way to support something new is difficult, because Intel and AMD are both designing their hardware to subjugate people. If they were basically haters of the public, it would be hard for them to do it much worse than they're doing.

And Macintoshes are moving towards being jails, like the iMonsters. It's getting harder for users to install even their own programs on them. And this of course should be illegal. It should be illegal to sell a computer that doesn't let users install software of their own from source code. And the law probably shouldn't allow the computer to stop you from installing binaries that you get from others either, even though it's true in cases like that, you're doing it at your own risk. But tying people down, strapping them into their chairs so that they can't do anything that hurts themselves -- makes things worse, not better. There are other systems where you can find ways to trust people, that don't depend on being under the power of a giant company.

We've seen problems sometimes where supported old hardware gets de-supported because somebody doesn't think it's important any more — it's so old, how could that matter? But there are reasons...why old hardware sometimes remains very important, and people who aren't thinking about this issue might not realize that...


Stallman also had some advice for students required by their schools to use non-free software like Zoom for their remote learning. "If you have to use a non-free program, there's one last thing... which is to say in each class session, 'I am bitterly ashamed of the fact that I'm using Zoom for this class.' Just that. It's a few seconds. But say it each time.... And over time, the fact that this is really important to you will sink in."

And then halfway through, Stallman began taking questions from the audience...

Read on for Slashdot's report on Stallman's remarks.
Apple

How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive

By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processors on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.

Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com)

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers will port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Intel

Intel Suspends All Operations in Russia 'Effective Immediately' (arstechnica.com)

Intel, one of the world's largest semiconductor companies, is suspending business operations in Russia "effective immediately," the company announced late Tuesday. From a report: "Intel continues to join the global community in condemning Russia's war against Ukraine," the company said in a statement. Intel stopped shipping chips to customers in Russia and Belarus in early March. Intel said that it is "working to support all of our employees through this difficult situation, including our 1,200 employees in Russia."

Ordinarily, it would be a drastic step for a multinational company like Intel to exit a market the size of Russia. But Western sanctions have made it increasingly difficult for global companies to operate in Russia. Earlier this week, the Biden administration announced broad sanctions on the Russian electronics industry, which presumably includes many of Intel's partners and customers in Russia. Two of Intel's major competitors, AMD and Nvidia, halted sales of their products in Russia early last month. Taiwanese chipmaker TSMC has also restricted sales in Russia.

AMD

AMD Confirms Its GPU Drivers Are Overclocking CPUs Without Asking (tomshardware.com)

AMD has confirmed to Tom's Hardware that a bug in its GPU driver is, in fact, changing Ryzen CPU settings in the BIOS without permission. This condition has been shown to auto-overclock Ryzen CPUs without the user's knowledge. From the report: Reports of this issue began cropping up on various social media outlets recently, with users reporting that their CPUs had mysteriously been overclocked without their consent. The issue was subsequently investigated and tracked back to AMD's GPU drivers. AMD originally added support for automatic CPU overclocking through its GPU drivers last year, with the idea that adding a Ryzen Master module to the Radeon Adrenalin GPU drivers would simplify the overclocking experience. Users with a Ryzen CPU and Radeon GPU could use one interface to overclock both. Previously, it required both the GPU driver and AMD's Ryzen Master software.

Overclocking a Ryzen CPU requires the software to manipulate the BIOS settings, just as we see with other software overclocking utilities. For AMD, this can mean simply engaging the auto-overclocking Precision Boost Overdrive (PBO) feature. This feature does all the dirty work, like adjusting voltages and frequency on the fly, to give you a one-click automatic overclock. However, applying a GPU profile in the AMD driver can now inexplicably alter the BIOS settings to enable automatic overclocking. This is problematic because of the potential ill effects of overclocking -- in fact, overclocking a Ryzen CPU automatically voids the warranty. AMD's software typically requires you to click a warning to acknowledge that you understand the risks associated with overclocking, and that it voids your warranty, before it allows you to overclock the system. Unfortunately, that isn't happening here.
Until AMD issues a fix, "users have taken to using the Radeon Software Slimmer to delete the Ryzen Master SDK from the GPU driver, thus preventing any untoward changes to the BIOS settings," adds Tom's Hardware.
AMD

AMD To Acquire Pensando in a $1.9 Billion Bid for Networking Tech (protocol.com)

AMD said early Monday that it plans to acquire networking chip maker Pensando for $1.9 billion in cash, in a bid to arm itself with tech that competes directly with Nvidia and Intel's data-center chip packages. From a report: Pensando was founded by several former Cisco engineers, and makes edge computing technology that competes with AWS Nitro, Intel's DPU launched last year, and Nvidia's data processing units called BlueField. In a release distributed in advance of the announcement, AMD said that buying the closely held Pensando will give it a networking platform that will bolster its existing server chip lineup. Pensando's chips are an increasingly important part of data center design, as it becomes impossible to simply throw larger numbers of processors at demanding computing tasks. As regular chips scale up, the networking connections become a bottleneck, and the DPU's goal (Intel calls it an IPU) is to free up the central processor to perform other functions.
Intel

Intel Beats AMD and Nvidia with Arc GPU's Full AV1 Support (neowin.net)

Neowin notes growing support for the "very efficient, potent, royalty-free video codec" AV1, including Microsoft adding support for hardware acceleration of AV1 on Windows.

But AV1 even turned up in Intel's announcement this week of the Arc A-series, a new line of discrete GPUs, Neowin reports: Intel has been quick to respond and the company has become the first GPU hardware vendor to have full AV1 support on its newly launched Arc GPUs. While AMD and Nvidia both offer AV1 decoding with their newest GPUs, neither has support for AV1 encoding.

Intel says that hardware encoding of AV1 on its new Arc GPUs is 50 times faster than software-only solutions. It also adds that the efficiency of AV1 encode with Arc is 20% better compared to HEVC. With this feature, Intel hopes to capture at least some of the streaming and video editing market among users who are looking for a more robust AV1 encoding solution than CPU-based software approaches.

From Intel's announcement: Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
Graphics

More Apple M1 Ultra Benchmarks Show It Doesn't Beat the Best GPUs from Nvidia and AMD (tomsguide.com)

Tom's Guide tested a Mac Studio workstation equipped with an M1 Ultra with the Geekbench 5.4 CPU benchmarks "to get a sense of how effectively it handles single-core and multi-core workflows."

"Since our M1 Ultra is the best you can buy (at a rough price of $6,199) it sports a 20-core CPU and a 64-core GPU, as well as 128GB of unified memory (RAM) and a 2TB SSD."

Slashdot reader exomondo shares their results: We ran the M1 Ultra through the Geekbench 5.4 CPU benchmarking test multiple times and after averaging the results, we found that the M1 Ultra does indeed outperform top-of-the-line Windows gaming PCs when it comes to multi-core CPU performance. Specifically, the M1 Ultra outperformed a recent Alienware Aurora R13 desktop we tested (w/ Intel Core i7-12700KF, GeForce RTX 3080, 32GB RAM), an Origin Millennium (2022) we just reviewed (Core i9-12900K CPU, RTX 3080 Ti GPU, 32GB RAM), and an even more powerful RTX 3090-equipped HP Omen 45L we tested recently (Core i9-12900K, GeForce RTX 3090, 64GB RAM) in the Geekbench 5.4 multi-core CPU benchmark.

However, as you can see from the chart of results below, the M1 Ultra couldn't match its Intel-powered competition in terms of CPU single-core performance. The Ultra-powered Studio also proved slower to transcode video than the aforementioned gaming PCs, taking nearly 4 minutes to transcode a 4K video down to 1080p using Handbrake. All of the gaming PCs I just mentioned completed the same task faster, over 30 seconds faster in the case of the Origin Millennium. Before we even get into the GPU performance tests it's clear that while the M1 Ultra excels at multi-core workflows, it doesn't trounce the competition across the board. When we ran our Mac Studio review unit through the Geekbench 5.4 OpenCL test (which benchmarks GPU performance by simulating common tasks like image processing), the Ultra earned an average score of 83,868. That's quite good, but again it fails to outperform Nvidia GPUs in similarly-priced systems.

They also share some results from the OpenCL Benchmarks browser, which publicly displays scores from different GPUs that users have uploaded: Apple's various M1 chips are on the list as well, and while the M1 Ultra leads that pack it's still quite a ways down the list, with an average score of 83,940. Incidentally, that means it ranks below much older GPUs like Nvidia's GeForce RTX 2070 (85,639) and AMD's Radeon VII (86,509). So here again we see that while the Ultra is fast, it can't match the graphical performance of GPUs that are 2-3 years old at this point — at least, not in these synthetic benchmarks. These tests don't always accurately reflect real-world CPU and GPU performance, which can be dramatically influenced by what programs you're running and how they're optimized to make use of your PC's components.
Their conclusion? When it comes to tasks like photo editing or video and music production, the M1 Ultra w/ 128GB of RAM blazes through workloads, and it does so while remaining whisper-quiet. It also makes the Mac Studio a decent gaming machine, as I was able to play less demanding games like Crusader Kings III, Pathfinder: Wrath of the Righteous and Total War: Warhammer II at reasonable (30+ fps) framerates. But that's just not on par with the performance we expect from high-end GPUs like the Nvidia GeForce RTX 3090....

Of course, if you don't care about games and are in the market for a new Mac with more power than just about anything Apple's ever made, you want the Studio with M1 Ultra.

AMD

Radeon Super Resolution Arrives To Speed Up Your Games in AMD Adrenalin (anandtech.com)

Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. From a report: Dubbed FSR 2.0, the next generation of AMD's upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today's early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA's temporal-based DLSS 2.0 upscaling technology, as well as Intel's forthcoming XeSS upscaling tech.

AMD's current version of FSR, which is now being referred to as FSR 1.0, was released last summer by the company. Implemented as a compute shader, FSR 1.0 was a (relatively) simple spatial upscaler, which could only use data from the current frame for generating a higher resolution frame. Spatial upscaling's simplicity is great for compatibility, but it's limited by the data it has access to, which allows more advanced multi-frame techniques to generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA's class-leading DLSS 2.0.
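The distinction between spatial and temporal upscaling can be sketched in a few lines of Python. This is a toy illustration of the general idea only, not AMD's actual algorithm: both function names and the blending scheme are invented for the example. The point is simply that a spatial upscaler can only interpolate from the current frame, while a temporal one also has a history buffer to draw on.

```python
# Toy sketch: spatial vs. temporal upscaling on 1-D "frames" of
# brightness values. Not AMD's algorithm -- just the data-access idea.

def spatial_upscale(frame):
    """2x upscale using only the current frame: interpolate neighbors."""
    out = []
    for i, v in enumerate(frame):
        out.append(v)
        nxt = frame[i + 1] if i + 1 < len(frame) else v
        out.append((v + nxt) / 2)  # "detail" is just an average of neighbors
    return out

def temporal_upscale(frame, history, alpha=0.5):
    """2x upscale that also blends in the previous upscaled frame.
    With jittered sampling over time, the history holds real detail
    the current frame alone doesn't have."""
    current = spatial_upscale(frame)
    if history is None:          # first frame: nothing to accumulate yet
        return current
    return [alpha * c + (1 - alpha) * h for c, h in zip(current, history)]

frame_a = [0.0, 1.0, 0.0, 1.0]
frame_b = [0.0, 1.0, 0.0, 1.0]

up_a = spatial_upscale(frame_a)          # 8 samples from 4
up_b = temporal_upscale(frame_b, up_a)   # 8 samples from 4 + history
print(len(up_a), len(up_b))              # 8 8
```

In a real temporal upscaler the history is motion-compensated and the blend weights vary per pixel, which is where most of the engineering effort (and the quality advantage over FSR 1.0-style spatial scaling) lies.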

AMD

Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com)

"News of a fresh Spectre BHB vulnerability that only impacts Intel and Arm processors emerged this week," reports Tom's Hardware, "but Intel's research around these new attack vectors unearthed another issue.

"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....

Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."

As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....

In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....

AMD's security bulletin thanks Intel's STORM team by name and notes that Intel engaged in coordinated vulnerability disclosure, thus allowing AMD enough time to address the issue before making it known to the public.

Thanks to Slashdot reader Hmmmmmm for submitting the story...
China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com)

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), with the capability to scale to 174 trillion parameters, approaching what is called "brain-scale," where the number of parameters starts approaching the number of synapses in the human brain....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets, OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AMD

New UCIe Chiplet Standard Supported by Intel, AMD and Arm (anandtech.com) 20

A number of industry stalwarts including Intel, AMD, Arm, TSMC, and Samsung on Wednesday introduced a new Universal Chiplet Interconnect Express (UCIe) consortium. AnandTech: Taking significant inspiration from the very successful PCI-Express playbook, with UCIe the involved firms are creating a standard for connecting chiplets, with the goal of having a single set of standards that not only simplify the process for all involved, but lead the way towards full interoperability between chiplets from different manufacturers, allowing chip makers to mix and match chiplets as they see fit. In other words, to make a complete and compatible ecosystem out of chiplets, much like today's ecosystem for PCIe-based expansion cards.

The comparisons to PCIe are apt on multiple levels, and this is perhaps the best way to quickly understand the UCIe group's goals. Not only is the new standard being made available in an open fashion, but the companies involved will be establishing a formal consortium group later this year to administer UCIe and further develop it. Meanwhile, from a general technology perspective, the use of chiplets is the latest step in the continual consolidation of integrated circuits, as smaller and smaller transistors have allowed more and more functionality to be brought on-chip. In essence, features that until now have lived on an expansion card or a separate chip are starting to make their way onto the chip/SoC itself. Just as PCIe governs how these parts work together as expansion cards, a new standard is needed to govern how they work together as chiplets.

AMD

AMD Is Now Worth More Than Rival Intel (yahoo.com) 25

Hmmmmmm shares a report from Yahoo Finance: AMD's market cap currently stands at $188 billion after shares rose nearly 2% in Tuesday's session. Intel's market cap is $182 billion. That marks the second time in a week AMD's market value has climbed above Intel's -- the first time it happened was a week ago. Followers of this battle may not be surprised to see it happen (and continue from here) for several reasons. First, AMD has been winning the battle on Wall Street with the sexier investment thesis. AMD last week closed its $35 billion acquisition of Xilinx. Second, AMD has flat out posted better financials than Intel (for some time) as it has gained market share in key areas (notably in servers). AMD's sales and profits rose 68% and 117%, respectively, in 2021. The company outlined 31% revenue growth for 2022 and gross profit margins of 51%. Intel's 2021 sales and earnings increased 2% and 7%, respectively. The company sees sales in 2022 rising about 2%. Profits are expected to drop 36% as Intel further builds out its chip-making capacity.

Intel

Intel's 12th Gen Alder Lake Chips for Thinner and Lighter Laptops Have Arrived (theverge.com) 28

Intel launched the first wave of its 12th Gen Alder Lake chips at CES 2022 -- but only for its H-series lineup of chips, destined for the most powerful and power-hungry laptops. And now, it's rolling out the rest of its Alder Lake laptop lineup: the P-series and U-series models it briefly showed off in January, which are set to power the thinner, lighter, and cheaper laptops of 2022. From a report: In total, there are a whopping 20 chips fit for a wide range of hardware across the P-series, U-series (15W), and U-series (9W) categories, with the first laptops powered by the new processors set to arrive in March. Like their more powerful H-series cousins (and the Alder Lake desktop chips that Intel launched in late 2021 and at CES 2022), the new P-series and U-series chips have a lot more cores than 2020's 11th Gen models, with a hybrid architecture approach that combines performance and efficiency cores to maximize both power and battery life. And Intel is promising some big improvements focused on those boosted core counts, touting up to 70 percent better multi-thread performance than previous 11th Gen (and AMD) hardware. The company also says that it wins out in benchmarks against chips like Apple's M1 and M1 Pro (although not the M1 Max), and AMD's Ryzen 7 5800U in tasks like web browsing and photo editing.
