Businesses

Anthropic Reveals $30 Billion Run Rate, Plans To Use 3.5GW of New Google AI Chips (theregister.com) 47

Anthropic says its annualized revenue run rate has surpassed $30 billion and disclosed plans to secure roughly 3.5 gigawatts of next-generation Google TPU compute starting in 2027. Broadcom will supply the key chips and networking gear for the effort, the company announced. The Register reports: News of the two deals emerged today in a Broadcom regulatory filing that opens with two items of news. One is a "Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units ("TPUs") for Google's future generations of TPUs." Google and Broadcom have collaborated to produce custom TPUs. Broadcom CEO Hock Tan recently shared his opinion that hyperscalers don't have the skill to create custom accelerators and predicted Broadcom's chip business will therefore win over $100 billion of revenue from AI chips in 2027 alone.

Working on next-gen TPUs for Google will presumably help to make that prediction a reality. So will the second part of Broadcom's announcement: a "Supply Assurance Agreement for Broadcom to supply networking and other components to be used in Google's next-generation AI racks through up to 2031." Broadcom's filing also revealed that one user of Google's next-gen TPU will be Anthropic, which, starting in 2027, "will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next generation TPU-based AI compute capacity committed by Anthropic."

Privacy

New Freenet Network Launches, Along With 'River' Group Chat (freenet.org) 26

Wikipedia describes Freenet as "a peer-to-peer platform for censorship-resistant, anonymous communication," released in the year 2000. "Both Freenet and some of its associated tools were originally designed by Ian Clarke," Wikipedia adds. (And in 2000 Clarke answered questions from Slashdot's readers...)

And now Ian Clarke (aka Sanity — Slashdot reader #1,431) returns to share this announcement: Freenet's new generation peer-to-peer network is now operational, along with the first application built on the network: a decentralized group chat system called River.

The new version is a complete redesign of the original project, focusing on real-time decentralized applications rather than static content distribution. Applications run as WebAssembly-based contracts across a small-world peer network, allowing software to operate directly on the network without centralized infrastructure.

An introductory video demonstrating the system is available on YouTube.

"While the original Freenet was like a decentralized hard drive, the new Freenet is like a full decentralized computer," Clarke wrote in 2023, "allowing the creation of entirely decentralized services like messaging, group chat, search, social networking, among others... designed for efficiency, flexibility, and transparency to the end user."

"Freenet 2023 can be used seamlessly through your web browser, providing an experience that feels just like using the traditional web."

Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.

The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. The classical elements in today's X.509 certificate chains are about 64 bytes apiece; a typical chain comprises six elliptic curve signatures and two EC public keys, all of which could be cracked by Shor's algorithm running on a sufficiently capable quantum computer. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All of this data must be transmitted each time a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes to verify large amounts of information with a small fraction of the data required by more traditional public key infrastructure verification. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
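The inclusion-proof idea is easy to see in miniature. The sketch below is illustrative Python, not Chrome's actual MTC wire format: it builds a SHA-256 Merkle tree over a thousand dummy "certificates" and verifies one of them against the root using only about ten sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves, then pair-and-hash upward; returns all levels."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node at odd widths
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Collect the sibling hash at each level; ~log2(n) hashes in total."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf plus its sibling hashes."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# A CA would sign only the root ("tree head"); each site then presents
# its leaf plus a short proof instead of a heavy signature chain.
certs = [b"certificate-%d" % i for i in range(1000)]
levels = build_tree(certs)
root = levels[-1][0]
proof = inclusion_proof(levels, 42)
print(verify(certs[42], proof, root), len(proof))  # → True 10
```

The point of the exercise: the verifier handles ten hashes rather than a thousand certificates, which is why a signed tree head scales to "potentially millions of certificates."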

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The Merkle Tree Certificates (MTCs) use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.

United States

Texas Sues TP-Link Over China Links and Security Vulnerabilities (theregister.com) 46

TP-Link is facing legal action from the state of Texas for allegedly misleading consumers with "Made in Vietnam" claims despite China-dominated manufacturing and supply chains, and for marketing its devices as secure despite reported firmware vulnerabilities exploited by Chinese state-sponsored actors. The Register: The Lone Star State's Attorney General, Ken Paxton, is filing the lawsuit against California-based TP-Link Systems Inc., which was originally founded in China, accusing it of deceptively marketing its networking devices and alleging that its security practices and China-based affiliations allowed Chinese state-sponsored actors to access devices in the homes of American consumers.

It is understood that this is just the first of several lawsuits that the Office of the Attorney General intends to file this week against "China-aligned companies," as part of a coordinated effort to hold China accountable under Texas law. The lawsuit claims that TP-Link is the dominant player in the US networking and smart home market, controlling 65 percent of the American market for network devices.

It also alleges that TP-Link represents to American consumers that the devices it markets and sells within the US are manufactured in Vietnam, and that consistent with this, the devices it sells in the American market carry a "Made in Vietnam" sticker.

Space

Musk Predicts SpaceX Will Launch More AI Compute Per Year Than the Cumulative Total on Earth (substack.com) 245

Elon Musk told podcast host Dwarkesh Patel and Stripe co-founder John Collison that space will become the most economically compelling location for AI data centers in less than 36 months, a prediction rooted not in some exotic technical breakthrough but in the basic math of electricity supply: chip output is growing exponentially, and electrical output outside China is essentially flat.

Solar panels in orbit generate roughly five times the power they do on the ground because there is no day-night cycle, no cloud cover, and no atmospheric attenuation. The system economics are even more favorable because space-based operations eliminate the need for batteries entirely, making the effective cost roughly 10 times cheaper than terrestrial solar, Musk said. The terrestrial bottleneck is already real.
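The rough 5x factor falls out of simple capacity-factor arithmetic. (The ~20% terrestrial figure below is an assumed typical value for a good ground site, not a number from the article.)

```python
# Back-of-envelope check on the ~5x orbital solar claim.
panel_peak_kw = 1.0

ground_avg_kw = panel_peak_kw * 0.20   # night, clouds, atmosphere, sun angle
orbit_avg_kw = panel_peak_kw * 1.00    # continuous, unattenuated sunlight

print(orbit_avg_kw / ground_avg_kw)    # → 5.0
```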

Musk said powering 330,000 Nvidia GB300 chips -- once you account for networking hardware, storage, peak cooling on the hottest day of the year, and reserve margin for generator servicing -- requires roughly a gigawatt at the generation level. Gas turbines are sold out through 2030, and the limiting factor is the casting of turbine vanes and blades, a process handled by just three companies worldwide.
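Musk's figures imply an all-in power budget per accelerator that is easy to back out; this is straightforward arithmetic on the two numbers quoted above, nothing more.

```python
chips = 330_000        # Nvidia GB300s in Musk's example
site_power_w = 1e9     # ~1 GW at the generation level

# Folding in networking, storage, worst-case cooling, and generator
# reserve margin, the budget works out to roughly 3 kW per chip:
per_chip_w = site_power_w / chips
print(round(per_chip_w))  # → 3030
```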

Five years from now, Musk predicted, SpaceX will launch and operate more AI compute annually than the cumulative total on Earth, expecting at least a few hundred gigawatts per year in space. Patel estimated that 100 gigawatts alone would require on the order of 10,000 Starship launches per year, a figure Musk affirmed. SpaceX is gearing up for 10,000 launches a year, Musk said, and possibly 20,000 to 30,000.

Patents

Acer Sues Verizon, AT&T, and T-Mobile, Alleging Infringement on Acer's Cellular Networking Patents (nerds.xyz) 32

Slashdot reader BrianFagioli writes: Acer has filed three separate patent infringement lawsuits against AT&T, Verizon, and T-Mobile, taking the unusual step of hauling the nation's largest wireless carriers into federal court. The suits, filed in the Eastern District of Texas, claim the companies are using Acer-developed cellular networking technology without paying for the privilege. Acer says it tried to negotiate licenses for years but reached a dead end, arguing it was left with no option except litigation. The case centers on six U.S. patents Acer asserts are core to modern wireless networks, rather than anything tied to PCs or laptops.

The company describes itself as reluctant to pursue courtroom battles, but it has been quietly building a large global patent portfolio after pouring hundreds of millions of dollars into R&D. Acer also notes that some of its patents count as standard-essential, hinting the carriers may be required to license them. All three companies are expected to push back, and the dispute could become another long-running telecom patent saga. Consumers will not notice any immediate changes, but if Acer wins or settles, it may find a new revenue stream far beyond its traditional hardware business.

Further coverage from Hot Hardware

Security

To Pressure Security Professionals, Mandiant Releases Database That Cracks Weak NTLM Passwords in 12 Hours (arstechnica.com) 34

Ars Technica reports: Security firm Mandiant [part of Google Cloud] has released a database that allows any administrative password protected by Microsoft's NTLMv1 hash algorithm to be cracked, in an attempt to nudge users who continue using the deprecated function despite its known weaknesses. The database is a rainbow table: a precomputed table of hash values linked to their corresponding plaintexts. These generic tables, which work against multiple hashing schemes, allow hackers to take over accounts by quickly mapping a stolen hash to its password counterpart... Mandiant said its NTLMv1 rainbow table will allow defenders and researchers (and, of course, malicious hackers, too) to recover passwords in under 12 hours using consumer hardware costing less than $600. The table is hosted in Google Cloud. The database works against Net-NTLMv1 hashes, which are used in network authentication for accessing resources such as SMB network shares.
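The core weakness is that the hash is unsalted, so one precomputation serves against every victim. Here is a toy Python sketch of the lookup idea, using SHA-256 as a stand-in hash (real Net-NTLMv1 derives DES-based responses from an unsalted MD4 hash of the UTF-16LE password, and real rainbow tables store hash chains rather than a full dictionary):

```python
import hashlib

def toy_hash(password: str) -> str:
    # Stand-in for the real scheme. The key property being illustrated:
    # no salt, so identical passwords always produce identical hashes,
    # which is what makes precomputation pay off.
    return hashlib.sha256(password.encode()).hexdigest()

# Precompute once. (Mandiant's table covers the full DES keyspace;
# a dictionary like this is the degenerate, chain-free version.)
wordlist = ["hunter2", "Passw0rd!", "letmein"]
table = {toy_hash(pw): pw for pw in wordlist}

# "Cracking" a captured hash is then just a constant-time lookup:
captured = toy_hash("hunter2")
print(table.get(captured))  # → hunter2
```

A salted scheme would defeat exactly this: the same password would hash differently per account, so no single table could be precomputed.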

Despite its long- and well-known susceptibility to easy cracking, NTLMv1 remains in use in some of the world's more sensitive networks. One reason for the lack of action is that utilities and organizations in industries including health care and industrial control often rely on legacy apps that are incompatible with more recently released hashing algorithms. Another is that organizations running mission-critical systems can't afford the downtime required to migrate. Of course, inertia and penny-pinching are also causes.

"By releasing these tables, Mandiant aims to lower the barrier for security professionals to demonstrate the insecurity of Net-NTLMv1," Mandiant said. "While tools to exploit this protocol have existed for years, they often required uploading sensitive data to third-party services or expensive hardware to brute-force keys."

"Organizations that rely on Windows networking aren't the only laggards," the article points out. "Microsoft only announced plans to deprecate NTLMv1 last August."

Thanks to Slashdot reader joshuark for sharing the news.

Bug

How Long Does It Take to Fix Linux Kernel Bugs? (itsfoss.com) 36

An anonymous reader shared this report from It's FOSS: Jenny Guanni Qu, a researcher at [VC fund] Pebblebed, analyzed 125,183 bugs from 20 years of Linux kernel development history (on Git). The findings show that the average bug takes 2.1 years to find. [Though the median is 0.7 years, with the average possibly skewed by "outliers" discovered after years of hiding.] The longest-lived bug, a buffer overflow in networking code, went unnoticed for 20.7 years! [But 86.5% of bugs are found within five years.]

The research was carried out by relying on the Fixes: tag that is used in kernel development. Basically, when a commit fixes a bug, it includes a tag pointing to the commit that introduced the bug. Jenny wrote a tool that extracted these tags from the kernel's git history going back to 2005. The tool finds all fixing commits, extracts the referenced commit hash, pulls dates from both commits, and calculates the time frame. As for the dataset, it includes over 125k records from Linux 6.19-rc3, covering bugs from April 2005 to January 2026. Out of these, 119,449 were unique fixing commits from 9,159 different authors, and only 158 bugs had CVE IDs assigned.
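The extraction described above is mechanical enough to sketch. Below, a hypothetical fragment of kernel git log output is scanned for Fixes: tags, and a bug lifetime is computed from two commit dates; the hashes, dates, and subject lines are invented for illustration.

```python
import re
from datetime import date

# Hypothetical `git log` excerpt; the real tool walks the full kernel
# history back to 2005 and resolves each referenced hash to its date.
log = """\
commit 9fc2a11
Date: 2024-03-01
    net: fix buffer overflow in frame parser
    Fixes: 1e4b7d0 ("net: add frame parser")
"""

# A Fixes: tag names the commit that introduced the bug being fixed.
FIXES_RE = re.compile(r"^\s*Fixes:\s+([0-9a-f]{7,40})\b", re.MULTILINE)
introducers = FIXES_RE.findall(log)
print(introducers)  # → ['1e4b7d0']

# Once both commits' dates are resolved, lifetime is a simple delta:
introduced, fixed = date(2003, 7, 1), date(2024, 3, 1)
print(round((fixed - introduced).days / 365.25, 1))  # → 20.7
```

Dividing the day count by 365.25 is how a bug hidden since mid-2003 comes out as the article's "20.7 years."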

It took six hours to assemble the dataset, according to the blog post, which concludes that the percentage of bugs found within one year has improved dramatically, from 0% in 2010 to 69% by 2022. The blog post says this can likely be attributed to:
  • The Syzkaller fuzzer (released in 2015)
  • Dynamic memory error detectors like KASAN, KMSAN, KCSAN sanitizers
  • Better static analysis
  • More contributors reviewing code

But "We're simultaneously catching new bugs faster AND slowly working through ~5,400 ancient bugs that have been hiding for over 5 years."

They've also developed an AI model called VulnBERT that predicts whether a commit introduces a vulnerability, claiming that of all actual bug-introducing commits, it catches 92.2%. "The goal isn't to replace human reviewers but to point them at the 10% of commits most likely to be problematic, so they can focus attention where it matters..."
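As a reminder of what those two numbers mean together, here is the standard recall/precision arithmetic on entirely hypothetical confusion counts, chosen only to match the quoted 92.2% catch rate and ~10% flag rate (the real per-commit counts are not given):

```python
# Hypothetical counts for illustration; only the two quoted rates are real.
total_commits = 100_000
actual_buggy = 5_000
flagged = 10_000            # the ~10% of commits reviewers are pointed at
buggy_and_flagged = 4_610   # bug-introducing commits the model caught

recall = buggy_and_flagged / actual_buggy   # share of real bugs caught
flag_rate = flagged / total_commits         # share of commits a human reviews
precision = buggy_and_flagged / flagged     # share of flags that are real

print(f"{recall:.1%} {flag_rate:.0%} {precision:.1%}")  # → 92.2% 10% 46.1%
```

High recall at a modest flag rate is exactly the triage trade-off described: most flags may still be false alarms, but almost nothing dangerous slips past unreviewed.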


Businesses

Tough Job Market Has People Using Dating Apps To Get Interviews 42

An anonymous reader quotes a report from Bloomberg: Most people use dating apps to find love. Tiffany Chau used one to hunt for a summer internship. This fall, the 20-year-old junior at California College of the Arts tailored her Hinge profile to connect with people who could offer job referrals or interviews. One match brought her to a Halloween party, where she networked in hopes of landing a product-design internship for the summer. While there, she got some tips from someone who had recently interviewed at Accenture. As for the connection with her date? Not so much. "I feel like my approach to the dating apps is it being another networking platform like everything else, like Instagram or LinkedIn," Chau said.

Chau is among a cadre of workers who are using dating apps to boost their job searches. They're recognizing that the online job hunt is broken as unemployed workers flood the system, AI screens out resumes and many job matching programs are overwhelmed. Automation has squeezed human contact out of hiring, which has pushed applicants to seek any path to a live hiring manager, no matter the means.

The overall US unemployment rate continued to climb throughout 2025, reaching 4.6%, according to the Bureau of Labor Statistics. And while the unemployment rate for high school graduates held steady at about 4.4% in November, the rate for workers with a bachelor's degree rose to 2.9% from 2.5% a year ago. About a third of dating app users said they had sought matches for job hook-ups, according to a ResumeBuilder.com survey of about 2,200 US dating site customers in October. Two-thirds targeted potential paramours who worked at a desirable employer. Three-quarters said they matched with people working in roles they wanted.

"People are doing it to expand their networks, make connections, because the best way to get a job today is who you know," said Stacie Haller, ResumeBuilder.com's chief career advisor. "Networking is the only way people are rising above the horror show that the job search is today."

Wireless Networking

Mesh Networks Are About To Escape Apple, Amazon and Google Silos (ieee.org) 31

After more than two decades of promises and false starts in the mesh networking space, the smart home standards that Apple, Amazon and Google have each championed are finally set to escape their respective brand silos and work together in a single unified network.

Starting January 1, 2026, Thread 1.4 becomes the Thread Group's only certified standard, bringing a crucial new capability called credential sharing. Devices from different manufacturers can now securely join the same mesh network -- an Amazon Echo Show and an Apple HomePod mini in the same house will both be able to control the same Nanoleaf lightbulb. This marks a significant departure from Thread 1.3, released in 2022, where each brand's mesh network connected only to devices from that same brand.

The Thread Group launched in 2014 as a coalition led by Arm, Google's Nest Labs, and Samsung, later welcoming Apple and Amazon into the fold. Thread 1.4 handles low-power smart home devices and sensors, but homes also need high-bandwidth connections for laptops and phones. Wi-Fi 7 mesh serves that purpose and the Matter protocol acts as a translation layer between the two different mesh networks. Both Wi-Fi 7 and Matter arrived in products on store shelves in 2025.

Virtualization

VMware Kills vSphere Foundation In Parts of EMEA (theregister.com) 19

Broadcom has quietly pulled VMware vSphere Foundation from parts of EMEA, pushing smaller customers toward far more expensive bundles and prompting some to consider jumping to Hyper-V or Nutanix. The Register reports: VVF is a bundle that offers compute, storage, and networking virtualization, and a platform to run containers. It's most useful in hyperconverged infrastructure and hybrid clouds, but is less capable than the Cloud Foundation (VCF) private cloud suite. Virtzilla said EMEA customers would need to check with their local dealer to see if VVF was still on sale in their country. "VVF is no longer available in some EMEA countries, but for the majority it is still available," a Broadcom spokesperson said. "Customers will have to reach out to sales reps or partners to determine availability of a given product in their region. These changes were recent."

Our initial tipster said their reseller clued them into the impending change when VMware's new fiscal year started in November. This anonymous customer told us that their hardware fleet boasts thousands of compute cores, and that without more affordable options their organization was looking at its annual VMware spend leaping tenfold, from around $130,000 to $1.3 million. "We're currently looking to jump ship to either Microsoft's Hyper-V or Nutanix, as we can't eat (that) increase," they told The Register. [...]

For the moment, a Broadcom spokesperson told us it has no plans to ditch VMware vSphere Standard, the basic server virtualization bundle which we're told makes up about 60 percent of the company's licenses and is a lower-cost way to access VMware's hypervisor than buying its full suite of VMware Cloud Foundation products. "We have not announced any changes to the availability of vSphere Standard in EMEA nor end of support for vSphere Standard," the spokesperson said via email. "The product remains fully available across EMEA today. However, Broadcom product availability can vary by region to align with local market requirements, customer demand, and other considerations."

Businesses

Cisco Stock Hits New All-Time High, 25 Years After the Dotcom Bubble Burst (ft.com) 29

Cisco's stock price touched $80.25 on Wednesday, finally eclipsing its dotcom-era peak of $80.06 set on March 27, 2000 -- when the networking giant briefly surpassed Microsoft to become the world's most valuable company. The journey back took 25 years, eight months and 13 days. The company's fundamentals improved dramatically over that period, of course. Revenues have nearly quintupled since 1999, profits have quadrupled, earnings per share have grown eightfold, and margins have remained healthy throughout. Investors who bought at the peak still lost money to inflation for a generation.
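The "lost money to inflation" claim checks out with simple arithmetic on the article's two prices. The ~85% cumulative US CPI inflation over 2000-2025 used below is an assumed round figure, not from the article, and dividends are ignored.

```python
peak_2000 = 80.06   # March 27, 2000 peak
high_2025 = 80.25   # this week's new record

nominal = high_2025 / peak_2000 - 1
real = high_2025 / (peak_2000 * 1.85) - 1   # 1.85 = assumed CPI factor

print(f"{nominal:.2%}")  # → 0.24%
print(f"{real:.0%}")     # → -46%
```

A quarter-percent nominal gain over a generation is roughly a 46% loss in purchasing power under that inflation assumption.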

Cisco's trajectory draws obvious comparisons to Nvidia, today's dominant "picks and shovels" supplier for the AI boom. Nvidia trades at a price-to-earnings ratio above 45 and an enterprise value-to-sales ratio near 24. At its 2000 peak, Cisco traded at a P/E above 200 and EV/sales of 31.

Businesses

The Accounting Uproar Over How Fast an AI Chip Depreciates (msn.com) 61

Tech giants including Meta, Alphabet, Microsoft and Amazon have all extended the estimated useful lives of their servers and AI equipment over the past five years, sparking a debate among investors about whether these accounting changes are artificially inflating profits. Meta this year increased its depreciation timeline for most servers and network assets to 5.5 years, up from four to five years previously and as little as three years in 2020. The company said the change reduced its depreciation expense by $2.3 billion for the first nine months of 2025. Alphabet and Microsoft now use six-year periods, up from three in 2020. Amazon extended to six years by 2024 but cut back to five years this year for some servers and networking equipment.
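The mechanics behind the debate are plain straight-line depreciation; the numbers below are hypothetical, chosen only to show the direction of the effect, not Meta's actual asset base.

```python
# Straight-line depreciation of a hypothetical $12B server fleet.
cost_bn = 12.0

expense_4yr = cost_bn / 4   # old useful-life assumption: $3B/year
expense_6yr = cost_bn / 6   # extended assumption: $2B/year

# Same cash already spent; reported profit rises by the difference
# in each of the early years (and falls in the later ones):
print(expense_4yr - expense_6yr)  # → 1.0
```

Nothing about the hardware changes; the accounting assumption alone shifts when the cost hits the income statement, which is why investors care whether the longer lives are realistic.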

Michael Burry, the investor portrayed in "The Big Short," called extending useful lives "one of the more common frauds of the modern era" in an article last month. Meta's total depreciation expense for the nine-month period was almost $13 billion against pretax profit exceeding $60 billion.

Unix

New FreeBSD 15 Retires 32-Bit Ports and Modernizes Builds (theregister.com) 32

FreeBSD 15.0-RELEASE arrived this week, notes this report from The Register, which calls it the latest release "of the Unix world's leading alternative to Linux." As well as numerous bug fixes and upgrades to many of its components, the major changes in this version are reductions in the number of platforms the OS supports, and in how it's built and how its component software is packaged.

FreeBSD 15 has significantly reduced support for 32-bit platforms. Compared to FreeBSD 14 in 2023, there are no longer builds for x86-32, 32-bit PowerPC, or Armv6. As the release notes put it:

"The venerable 32-bit hardware platforms i386, armv6, and 32-bit powerpc have been retired. 32-bit application support lives on via the 32-bit compatibility mode in their respective 64-bit platforms. The armv7 platform remains as the last supported 32-bit platform. We thank them for their service."

Now FreeBSD supports five CPU architectures — two Tier-1 platforms, x86-64 and AArch64, and three Tier-2 platforms: armv7, powerpc64le, and riscv64.

Arguably, it's time. AMD's first 64-bit chips started shipping 22 years ago. Intel launched the original x86 chip, the 8086, in 1978. These days, 64-bit x86 is nearly as old as the entire Intel 80x86 platform was when the 64-bit versions first appeared. In comparison, Debian 13 also dropped its x86-32 edition a few months ago — six years after Canonical launched its first x86-64-only distro, Ubuntu 19.10.

Another significant change is that this is the first version built under the new pkgbase system, although it's still experimental and optional for now. If you opt for a pkgbase installation, then the core OS itself is installed from multiple separate software packages, meaning that the whole system can be updated using the package manager. Over in the Linux world, this is the norm, but Linux is a very different beast... The plan is that by FreeBSD 16, scheduled for December 2027, the restructure will be complete, the old distribution sets will be removed, and the current freebsd-update command and its associated infrastructure can be turned off.

Another significant change is reproducible builds, a milestone the project reached in late October. This change is part of a multi-project initiative toward ensuring deterministic compilation: to be able to demonstrate that a certain set of source files and compilation directives is guaranteed to produce identical binaries, as a countermeasure against compromised code. A handy side-effect is that building the whole OS, including installation media images, no longer needs root access.
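The property the project is after can be shown in miniature. This is a Python stand-in for a build step, not FreeBSD's tooling: pin everything that can vary between runs (file order, timestamps) and the same inputs always yield bit-identical output.

```python
import hashlib
import io
import tarfile

def build(files: dict) -> bytes:
    """Toy deterministic 'build': pack sources into a tar archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):          # fixed ordering, not dict order
            data = files[name].encode()
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0                  # fixed timestamp, cf. SOURCE_DATE_EPOCH
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

src = {"main.c": "int main(void){return 0;}\n", "Makefile": "all:\n\tcc main.c\n"}
a, b = build(src), build(src)
print(hashlib.sha256(a).digest() == hashlib.sha256(b).digest())  # → True
```

A real OS build must also pin compiler versions, embedded paths, parallelism-dependent ordering, and more, but the verification step is the same: hash two independent builds and demand equality.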

There are of course other new features. Lots of drivers and subsystems have been updated, and this release has better power management, including suspend and resume. There's improved wireless networking, with support for more Wi-Fi chipsets and faster wireless standards, plus updated graphics drivers... The release announcement calls out the inclusion of OpenZFS 2.4.0-rc4, OpenSSL 3.5.4, and OpenSSH 10.0p2, and notes the inclusion of some new quantum-resistant encryption systems...

In general, we found FreeBSD 15 easier and less complicated to work with than either of the previous major releases. It should be easier on servers too. The new OCI container support in FreeBSD 14.2, which we wrote about a year ago, is more mature now. FreeBSD has its own version of Podman, and you can run Linux containers on FreeBSD. This means you can use Docker commands and tools, which are familiar to many more developers than FreeBSD's native Jail system.


"FreeBSD has its own place in servers and the public cloud, but it's getting easier to run it as a desktop OS as well," the article concludes. "It can run all the main Linux desktops, including GNOME on Wayland."

"There's no systemd here, and never will be — and no Flatpak or Snap either, for that matter."

Cloud

Amazon and Google Announce Resilient 'Multicloud' Networking Service Plus an Open API for Interoperability (reuters.com) 21

Their announcement calls it "more than a multicloud solution," saying it's "a step toward a more open cloud environment. The API specifications developed for this product are open for other providers and partners to adopt, as we aim to simplify global connectivity for everyone."

Amazon and Google are introducing "a jointly developed multicloud networking service," reports Reuters. "The initiative will enable customers to establish private, high-speed links between the two companies' computing platforms in minutes instead of weeks." The new service is being unveiled a little over a month after an Amazon Web Services outage on October 20 disrupted thousands of websites worldwide, knocking offline some of the internet's most popular apps, including Snapchat and Reddit. That outage cost U.S. companies an estimated $500 million to $650 million, according to analytics firm Parametrix.

Google and Amazon are promising "high resiliency" through "quad-redundancy across physically redundant interconnect facilities and routers," with both Amazon and Google continuously watching for issues. And they're using MACsec encryption between the Google Cloud and AWS edge routers, according to Sunday's announcement: As organizations increasingly adopt multicloud architectures, the need for interoperability between cloud service providers has never been greater. Historically, however, connecting these environments has been a challenge, forcing customers to take a complex "do-it-yourself" approach to managing global multi-layered networks at scale.... Previously, to connect cloud service providers, customers had to manually set up complex networking components including physical connections and equipment; this approach required lengthy lead times and coordinating with multiple internal and external teams. This could take weeks or even months. AWS had a vision for developing this capability as a unified specification that could be adopted by any cloud service provider, and collaborated with Google Cloud to bring it to market.

Now, this new solution reimagines multicloud connectivity by moving away from physical infrastructure management toward a managed, cloud-native experience.

Reuters points out that Salesforce "is among the early users of the new approach, Google Cloud said in a statement."

Linux

Linux Kernel 6.18 Officially Released (9to5linux.com) 12

From the blog 9to5Linux: Linux kernel 6.18 is now available for download, as announced today by Linus Torvalds himself, featuring enhanced hardware support through new and updated drivers, improvements to file systems and networking, and more. Highlights of Linux 6.18 include the removal of the Bcachefs file system, support for the Rust Binder driver, a new dm-pcache device-mapper target to enable persistent memory as a cache for slower block devices, and a new microcode= command-line option to control the microcode loader's behavior on x86 platforms. Linux kernel 6.18 also extends the support for file handles to kernel namespaces, implements initial 'block size > page size' support for the Btrfs file system, adds PTW feature detection on new hardware for LoongArch KVM, and adds support for running the kernel as a guest on FreeBSD's Bhyve hypervisor.

AI

Amazon Pledges Up To $50 Billion To Expand AI, Supercomputing For US Government 15

Amazon is committing up to $50 billion to massively expand AI and supercomputing capacity for U.S. government cloud regions, adding 1.3 gigawatts of high-performance compute and giving federal agencies access to its full suite of AI tools. Reuters reports: The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of artificial intelligence and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions by building data centers equipped with advanced compute and networking technologies.

Under the latest initiative, federal agencies will gain access to AWS' comprehensive suite of AI services, including Amazon SageMaker for model training and customization, Amazon Bedrock for deploying models and agents, as well as foundation models such as Amazon Nova and Anthropic Claude. The federal government seeks to develop tailored AI solutions and drive cost-savings by leveraging AWS' dedicated and expanded capacity.

Google

Google Must Double AI Serving Capacity Every 6 Months To Meet Demand 57

Google's AI infrastructure chief told employees the company must double its AI serving capacity every six months in order to meet demand. Earlier this month, Amin Vahdat, a vice president at Google Cloud, gave a presentation titled "AI Infrastructure." It included a slide on "AI compute demand" that said: "Now we must double every 6 months.... the next 1000x in 4-5 years." CNBC reports: The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a "significant increase" in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.

Google's "job is of course to build this infrastructure but it's not to outspend the competition, necessarily," Vahdat said. "We're going to spend a lot," he said, adding that the real goal is to provide infrastructure that is far "more reliable, more performant and more scalable than what's available anywhere else." In addition to infrastructure build-outs, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018.

Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years. Google needs to "be able to deliver 1,000 times more capability, compute, storage, networking for essentially the same cost and increasingly, the same power, the same energy level," Vahdat said. "It won't be easy but through collaboration and co-design, we're going to get there."

Businesses

Netgear Accused by Rival of China Smear To Fan Security Fear (msn.com) 34

An anonymous reader shares a report: California-based TP-Link says it may take a sales hit of more than $1 billion because of erroneous reports that the networking company's technology has been "infiltrated" by Beijing. In a lawsuit, TP-Link claims its competitor, Netgear, orchestrated a smear by planting false claims with journalists and internet influencers with the goal of scaring off customers.

Closely held TP-Link, which makes wireless routers, alleges in a complaint filed Monday that Netgear's campaign "threatens injury to well over a billion dollars in sales" and violates a 2024 settlement of a patent fight. That accord, in which TP-Link agreed to pay Netgear $135 million, includes a provision that the public company promises not to disparage its rival, according to the suit in Delaware federal court.

The suit comes as TP-Link faces growing scrutiny in Washington over national-security issues. US lawmakers from both parties have expressed concern that TP-Link's wireless equipment could be exploited by Chinese hackers following a series of attacks on its routers.
