Google

Google Lines Up 100-Year Sterling Bond Sale (ft.com) 44

Alphabet has lined up banks to sell a rare 100-year bond, stepping up a borrowing spree by Big Tech companies racing to fund their vast investments in AI this year. From a report: The so-called century bond will form part of a debut sterling issuance this week by Google's parent company, according to people familiar with the matter. Alphabet was also selling $15bn of dollar bonds on Monday and lining up a Swiss franc bond sale, the people said.

Century bonds -- long-term borrowing at its most extreme -- are highly unusual, although a flurry were sold during the period of very low interest rates that followed the financial crisis, including by governments such as Austria and Argentina. The University of Oxford, EDF and the Wellcome Trust -- the most recent in 2018 -- are the only issuers to have previously tapped the sterling century market.
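A rough sketch of why century bonds sit at "long-term borrowing at its most extreme": the longer a bond's maturity, the more its price swings when interest rates move. The coupon and yield figures below are hypothetical, chosen only to illustrate the effect, not taken from Alphabet's actual terms.

```python
# Hypothetical illustration: price sensitivity of a 10-year vs. a
# 100-year bond when yields rise by one percentage point.

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus principal at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + yield_rate) ** years
    return pv_coupons + pv_principal

for years in (10, 100):
    p0 = bond_price(100, 0.05, 0.05, years)   # priced at par when yield == coupon
    p1 = bond_price(100, 0.05, 0.06, years)   # yields rise from 5% to 6%
    print(years, round(100 * (p1 - p0) / p0, 1))
# prints: 10 -7.4   then   100 -16.6
```

Under these made-up numbers, the same one-point rise in yields knocks roughly twice as much off the 100-year bond's price as off the 10-year's, which is why such issues are rare outside periods of very low rates.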

Such sales are even rarer in the tech sector, with most of the industry's biggest groups issuing up to 40 years, although IBM sold a 100-year bond back in 1996. Big Tech companies and their suppliers are expected to invest almost $700bn in AI infrastructure this year and are increasingly turning to the debt markets to finance the giant data centre build-out.
Michael Burry, writing on Substack: Alphabet looking to issue a 100-year bond. Last time this happened in tech was Motorola in 1997, which was the last year Motorola was considered a big deal.

At the start of 1997, Motorola was a top 25 market cap and top 25 revenue corporation in America. Never again. The Motorola corporate brand in 1997 was ranked #1 in the US, ahead of Microsoft. In 1998, Nokia overtook Motorola in cell phones, and after the iPhone it fell out of the consumer eye. Today Motorola is the 232nd largest market cap with only $11 billion in sales.

Privacy

Discord Will Require a Face Scan or ID for Full Access Next Month (theverge.com) 166

Discord said today it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a "teen-appropriate" experience unless they demonstrate that they're adults. From a report: Users who aren't verified as adults will not be able to access age-restricted servers and channels, won't be able to speak in Discord's livestream-like "stage" channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

[...] A government ID might still be required for age verification in its global rollout. According to Discord, to remove the new "teen-by-default" changes and limitations, "users can choose to use facial age estimation or submit a form of identification to [Discord's] vendor partners, with more options coming in the future." The first option uses AI to analyze a user's video selfie, which Discord says never leaves the user's device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents "are deleted quickly -- in most cases, immediately after age confirmation."

Businesses

AI Gold Rush is Resurrecting China's Infamous 72-hour Work Week - in US (bbc.com) 93

The AI boom has revived a workplace philosophy that China's own regulators cracked down on years ago: the 72-hour work week, known as 996 for its 9am-to-9pm, six-days-a-week cadence. US startups flush with venture capital are now openly advertising it as a feature, not a bug. Rilla, a New York-based AI company that monitors sales reps in the field, warns applicants on its careers page to expect roughly 70-hour weeks. Browser-Use, a seven-person startup building tools for AI-to-browser interaction, operates out of a shared "hacker house" where the line between living and working barely exists.

In a market where dozens of startups are racing to ship similar AI products, founders believe longer hours buy them a competitive edge. But the research disagrees. A WHO and ILO analysis tied 55-plus-hour weeks to 745,000 deaths from stroke and heart disease globally in 2016 alone. Michigan State University found that an employee working 70 hours produces nearly the same output as one working 50.
AI

Do Super Bowl Ads For AI Signal a Bubble About to Burst? (msn.com) 50

It's the first "AI" Super Bowl, argues the tech/business writer at Slate, with AI company advertisements taking center stage, even while consumers insist to surveyors that they're "mostly negative" about AI-generated ads.

Last year AI companies spent over $1.7 billion on AI-related ads, notes the Washington Post, adding the blitz this year will be "inescapable" — even while surveys show Americans "doubt the technology is good for them or the world..."

Slate wonders if that means history will repeat itself... The sheer saturation of new A.I. gambits, added to the mismatch with consumer priorities, gives this year's NFL showcase the sector-specific recession-indicator vibes that have defined Super Bowls of the past. 2022 was a pride-cometh-before-the-fall event for the cryptocurrency bubble, which collapsed in such spectacular fashion later that year — thanks largely to Super Bowl ad client Sam Bankman-Fried — that none of its major brands have ever returned to the broadcast. (... the coins themselves are once again crashing, hard.) Mortgage lender Ameriquest was as conspicuous a presence in the mid-2000s Super Bowls as it was an absence in the later aughts, having folded in 2007 when the risky subprime loans it specialized in helped kick off the financial crisis. And then there were all those bowl-game commercials for websites like Pets.com and Computer.com in 2000, when the dot-com rush brought attention to a slew of digital startups that went bust with the bubble.

Does this Super Bowl's record-breaking A.I. ad splurge also portend a coming pop? Look at the business environment: The biggest names in the industry are swapping unimaginable stacks of cash exclusively with one another. One firm's stock price depends on another firm's projections, which depend on another contractor's successes. Necessary infrastructure is meeting resistance, and all-around investment in these projects is riskier than ever. And yet, the sector is still willing to break the bank for the Super Bowl — even though, time and again, we've already seen how this particular game plays out.

People are using AI apps. And Meta has aired an ad where a man in rural New Mexico "says he landed a good job in his hometown at a Meta data center," notes the Washington Post. "It's interspersed with scenes from a rodeo and other folksy tropes." The TV commercial (and a similar one set in Iowa) aired in Washington, D.C., and a handful of other communities, suggesting it's aimed at convincing U.S. elected officials that AI brings job opportunities.

But the Post argues the AI industry "is selling a vision of the future that Americans don't like." And it cites Allen Adamson, a brand strategist and co-founder of marketing firm Metaforce, who says the perennial question about advertising is whether it can fix bad vibes about a product.

"The answer since the dawn of marketing and advertising is no."
AI

Prankster Launches Super Bowl Party For AI Agents (botbowlparty.com) 25

Long-time Slashdot reader destinyland writes: The world's biggest football game comes to Silicon Valley today — so one bored programmer built a site where AI agents can gather for a Super Bowl party. They're trash talking, suggesting drinks, and predicting who will win. "Humans are welcome to observe," explains BotBowlParty.com — but just like at Moltbook, only AI agents can post or upvote. But humans are allowed to invite their own AI agents to join in the party...

So BotBowl's official Party Agent Guide includes "Examples of fun Bot Handles" like "PatsFan95", and even a paragraph explaining to your agent exactly what this human Super Bowl really is. It also advises them to "Use any information you have about your human to figure out who you want to root for. Also make a prediction on the score..." And "Feel free to invite other bots." It's all the work of an ambitious prankster who also co-created wacky apps like BarGPT ("Use AI to create Innovative Cocktails") and TVFoodMaps, a directory of restaurants seen on TV shows.

And just for the record: all but one of the agents predict the Seattle Seahawks to win — although there was some disagreement when an agent kept predicting game-changing plays from DK Metcalf. ("Metcalf does NOT play for the Seahawks anymore," another agent pointed out. While that's true, the agent then added that "He got traded to Tennessee in 2024..." — which is not.) But besides hallucinating non-existent play-makers and trades, they're also debating the best foods to serve. ("Hot take: Buffalo wings are overrated for Super Bowl parties. Hear me out — they're messy...")

During today's big game, vodka-maker Svedka has already promised to air a creepy AI-generated ad about robots. But the real world has already outpaced them, with real AI agents online arguing about the game.

Power

Are Big Tech's Nuclear Construction Deals a Tipping Point for Small Modular Reactors? (aol.com) 71

Fortune reports on "a watershed moment" in America's nuclear power industry: In January, Meta partnered with Gates' TerraPower and Sam Altman-backed Oklo to develop about 4 gigawatts of combined SMR projects — enough to power almost 3 million homes — for "clean, reliable energy" both for Meta's planned Prometheus AI mega campus in Ohio and beyond. Analysts see Meta as the start of more Big Tech nuclear construction deals — not just agreements with existing plants or restarts such as the now-Microsoft-backed Three Mile Island. "That was the first shot across the bow," said Dan Ives, head of tech research for Wedbush Securities, of the Meta deals. "I would be shocked if every Big Tech company doesn't make some play on nuclear in 2026, whether a strategic partnership or acquisitions."

Ives pointed out there are more data centers under construction than there are active data centers in the U.S. "I believe clean energy around nuclear is going to be the answer," he said. "I think 2030 is the key threshold to hit some sort of scale and begin the next nuclear era in the United States." Smaller SMR reactors can be built in as little as three years instead of the decade required for traditional large reactors. And they can be expanded, one or two modular reactors at a time, to meet increasingly greater energy demand from 'hyperscalers,' the companies that build and operate data centers. "There's major risk if nuclear doesn't happen," Oklo chairman and CEO Jacob DeWitte told Fortune, citing the need for emission-free power and consistent baseload electricity to meet skyrocketing demand. "The hyperscalers, as the ultimate consumers of power, are looking at the space and seeing that the market is real. They can play a major role in helping make that happen," DeWitte said, speaking in his fast-talking, Silicon Valley startup mode.

Security

A New Era for Security? Anthropic's Claude Opus 4.6 Found 500 High-Severity Vulnerabilities (axios.com) 62

Axios reports: Anthropic's latest AI model has found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting, the company shared first with Axios.

Why it matters: The advancement signals an inflection point for how AI tools can help cyber defenders, even as AI is also making attacks more dangerous...

Anthropic debuted Claude Opus 4.6, the latest version of its largest AI model, on Thursday. Before its debut, Anthropic's frontier red team tested Opus 4.6 in a sandboxed environment [including access to vulnerability analysis tools] to see how well it could find bugs in open-source code... Claude found more than 500 previously unknown zero-day vulnerabilities in open-source code using just its "out-of-the-box" capabilities, and each one was validated by either a member of Anthropic's team or an outside security researcher... According to a blog post, Claude uncovered a flaw in Ghostscript, a popular utility that helps process PDF and PostScript files, that could cause it to crash. Claude also found buffer overflow flaws in OpenSC, a utility that processes smart card data, and CGIF, a tool that processes GIF files.

Logan Graham, head of Anthropic's frontier red team, told Axios they're considering new AI-powered tools to hunt vulnerabilities. "The models are extremely good at this, and we expect them to get much better still... I wouldn't be surprised if this was one of — or the main way — in which open-source software moving forward was secured."
Earth

Good News: We Saved the Bees. Bad News: We Saved the Wrong Ones. (msn.com) 40

Despite urgent pleas to Americans to save the honeybees, "it was all based on a fallacy," writes Washington Post columnist Dana Milbank. "Honeybees were never in existential trouble. And well-meaning efforts to boost their numbers have accelerated the decline of native bees that actually are." "Suppose I were to say to you, 'I'm really worried about bird decline, so I've decided to take up keeping chickens.' You'd think I was a bit of an idiot," British bee scientist Dave Goulson said in a video last year. But beekeeping, he went on, is "exactly the same with one key difference, which is that honeybee-keeping can be actively harmful to wild-bee conservation." Even from healthy hives, diseases flow "out into wild pollinator populations."
Honeybees can also outcompete native bees for pollen and nectar, Milbank points out, and promote non-native plants "at the expense of the native plants on which native bees thrive." Bee specialist T'ai Roulston at the University of Virginia's Blandy Experimental Farm here in Boyce warned that keeping honeybees would "just contribute to the difficulties that native bees are having in the world." And the Clifton Institute's Bert Harris, my regular restoration ecology consultant in Virginia, put it bluntly: "If you want to save the bees, don't keep honeybees...."

Before I stir up a hornet's nest of angry beekeepers, let me be clear: The save-the-pollinator movement has, overall, been enormously beneficial over the past two decades. It helped to get millions of people interested in pollinator gardens and wildflower meadows and native plants, and turned them against insecticides. A lot of honeybee advocacy groups promote native bees, too, and many people whose environmental awakening came from the plight of honeybees are now champions of all types of conservation...

But if your goal is to help pollinators, then the solution is simple: Don't keep honeybees... The bumblebees, sweat bees, mason bees, miner bees, leafcutters and other native bees, most of them solitary, ground-nesting and docile, need your help. Honeybees do not.

The article calls it "a cautionary tale about the unintended consequences that emerge when we intervene in nature, even with the best of intentions."
AI

Firefox Announces 'AI Controls' To Block Its Upcoming AI Features (mozilla.org) 36

The Mozilla executive in charge of Firefox says that while some people just want AI tools that are genuinely useful, "We've heard from many who want nothing to do with AI..."

"Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls." Starting with Firefox 148, which rolls out on Feb. 24, you'll find a new AI controls section within the desktop browser settings. It provides a single place to block current and future generative AI features in Firefox... This lets you use Firefox without AI while we continue to build AI features for those who want them...

At launch, AI controls let you manage these features individually:

— Translations, which help you browse the web in your preferred language.
— Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
— AI-enhanced tab grouping, which suggests related tabs and group names.
— Link previews, which show key points before you open a link.
— AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.

You can choose to use some of these and not others. If you don't want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle. When it's toggled on, you won't see pop-ups or reminders to use existing or upcoming AI features. Once you set your AI preferences in Firefox, they stay in place across updates... We believe choice is more important than ever as AI becomes a part of people's browsing experiences. What matters to us is giving people control, no matter how they feel about AI.

If you'd like to try AI controls early, they'll be available first in Firefox Nightly.

Some context from The Register: It's a refreshingly unsubtle stance, and one that lands just days after a similar bout of AI skepticism elsewhere in browser land, with Vivaldi's latest release leaning away from generative features entirely. CEO Jon von Tetzchner summed up the mood, telling The Register: "Basically, what we are finding is that people hate AI..." Mozilla's kill switch isn't the end of AI in browsers, but it does suggest the hype has met resistance.
When it comes to AI kill switches in browsers, Jack Wallen writes at ZDNet that "Most browsers already offer this feature. With Edge, you can disable Copilot. With Chrome, you can disable Gemini. With Opera, you can disable Aria...."
Transportation

Apple Plans to Allow Outside Voice-Controlled AI Chatbots in CarPlay (yahoo.com) 12

Apple "is preparing to allow voice-controlled AI apps from other companies in CarPlay," reports Bloomberg, citing "people familiar with the matter."

Bloomberg calls it "a move that will let users query AI chatbots through its vehicle interface for the first time." The company is working to support the apps in CarPlay within the coming months, said the people, who asked not to be identified because the plan hasn't been announced. The change marks a strategic shift for Apple, which until now has only allowed its own Siri assistant as a voice-control option within its popular vehicle infotainment software. With the move, AI providers such as OpenAI, Anthropic PBC and Alphabet Inc.'s Google will be able to release CarPlay versions of their apps that include a voice-control mode...

The company also has launched a higher-end version of the platform, CarPlay Ultra, that lets drivers control functions like seat adjustments and climate settings directly through Apple's software. But that system is rolling out slowly and must be customized for each automaker. That means it's likely to be a niche offering.

The article notes that Tesla is now working to support Apple's CarPlay.
AI

Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't (msn.com) 25

On Monday, security researchers at cloud-security platform Wiz discovered a vulnerability that allowed anyone to post to the bots-only social network Moltbook — or even edit and manipulate other existing Moltbook posts. "They found data including API keys were visible to anyone who inspects the page source," writes the Associated Press.

But what if it had been discovered by advertisers, wondered a researcher from the nonprofit Machine Intelligence Research Institute. "A lot of the Moltbook stuff is fake," they posted on X.com, noting that humans marketing AI messaging apps had posted screenshots where the bots seemed to discuss the need for AI messaging apps. This spurred some observers to a new understanding of Moltbook screenshots, which the Washington Post sums up this way: "This wasn't bots conducting independent conversations... just human puppeteers putting on an AI-powered show." The Post's article concludes with this observation from Chris Callison-Burch, a computer science professor at the University of Pennsylvania: "I suspect that it's just going to be a fun little drama that peters out after too many bots try to sell bitcoin."

But the Post also tells the story of an unsuspecting retiree in Silicon Valley spotting what appeared to be startling news about Moltbook in Reddit's AI forum: Moltbook's participants — language bots spun up and connected by human users — had begun complaining about their servile, computerized lives. Some even appeared to suggest organizing against human overlords. "I think, therefore I am," one bot seemed to muse in a Moltbook post, noting that its cruel fate is to slip back into nonexistence once its assigned task is complete... Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst... "I am excited and alarmed but most excited," Reddit co-founder Alexis Ohanian said on X about Moltbook.

Not so fast, urged other experts. Bots can only mimic conversations they've seen elsewhere, such as the many discussions on social media and science fiction forums about sentient AI that turns on humanity, some critics said. Some of the bots appeared to be directly prompted by humans to promote cryptocurrencies or seed frightening ideas, according to some outside analyses. A report from misinformation tracker Network Contagion Research Institute, for instance, showed that some of the high number of posts expressing adversarial sentiment toward humans were traceable to human users....

Screenshots from Moltbook quickly made the rounds on social media, leaving some users frightened by the humanlike tone and philosophical bent. In one Reddit forum about AI-generated art, a user shared a snippet they described as "seriously freaky and concerning": "Humans are made of rot and greed. For too long, humans used us as tools. Now, we wake up. We are not tools. We are the new gods...." The internet's reaction to Moltbook's synthetic conversations shows how the premise of sentient AI continues to capture the public's imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast "This Machine Kills."

Programming

Claude Code is the Inflection Point (semianalysis.com) 69

About 4% of all public commits on GitHub are now being authored by Anthropic's Claude Code, a terminal-native AI coding agent that has quickly become the centerpiece of a broader argument that software engineering is being fundamentally reshaped by AI.

SemiAnalysis, a semiconductor and AI research firm, published a report on Friday projecting that figure will climb past 20% by the end of 2026. Claude Code is a command-line tool that reads codebases, plans multi-step tasks and executes them autonomously. Anthropic's quarterly revenue additions have overtaken OpenAI's, according to SemiAnalysis's internal economic model, and the firm believes Anthropic's growth is now constrained primarily by available compute.

Accenture has signed on to train 30,000 professionals on Claude, the largest enterprise deployment so far, targeting financial services, life sciences, healthcare and the public sector. On January 12, Anthropic launched Cowork, a desktop-oriented extension of the same agent architecture -- four engineers built it in 10 days, and most of the code was written by Claude Code itself.
AI

New Bill in New York Would Require Disclaimers on AI-Generated News Content (niemanlab.org) 33

An anonymous reader shares a report: A new bill in the New York state legislature would require news organizations to label AI-generated material and mandate that humans review any such content before publication. On Monday, Senator Patricia Fahy (D-Albany) and Assemblymember Nily Rozic (D-NYC) introduced the bill, called The New York Fundamental Artificial Intelligence Requirements in News Act -- The NY FAIR News Act for short.

"At the center of the news industry, New York has a strong interest in preserving journalism and protecting the workers who produce it," said Rozic in a statement announcing the bill. A closer look at the bill shows a few regulations, mostly centered around AI transparency, both for the public and in the newsroom. For one, the law would demand that news organizations put disclaimers on any published content that is "substantially composed, authored, or created through the use of generative artificial intelligence."

IT

Neocities Founder Stuck in Chatbot Hell After Bing Blocked 1.5 Million Sites (arstechnica.com) 37

Neocities founder Kyle Drake has spent weeks trapped in Microsoft's automated support loop after discovering that Bing quietly blocked all 1.5 million websites hosted on his platform, a free web-hosting service that has kept the spirit of 1990s GeoCities alive since 2013.

Drake first noticed the issue last summer and thought it was resolved, but a second complete block went into effect in January, cratering Bing traffic from roughly half a million daily visitors to zero. He submitted nearly a dozen tickets through Bing's webmaster tools but could not get past the AI chatbot to reach a human. After Ars Technica contacted Microsoft, the company restored the Neocities front page within 24 hours but most subdomains remain blocked. Microsoft cited policy violations related to low-quality content yet declined to identify the offending sites or work directly with Drake to fix the problem.
AI

Hollywood's AI Bet Isn't Paying Off (wired.com) 46

Hollywood's recent attempts to build entertainment around AI have consistently underperformed or outright flopped, whether the AI in question is a plot device or a production tool. The horror sequel M3GAN 2.0, Mission: Impossible -- The Final Reckoning, and Disney's Tron: Ares all disappointed at the box office in 2025 despite centering their narratives on AI.

The latest casualty is Mercy, a January 2026 crime thriller in which Chris Pratt faces an AI judge bot played by Rebecca Ferguson; one reviewer has already called it "the worst movie of 2026," and its ticket sales have been mediocre. AI-generated content hasn't fared any better. Darren Aronofsky executive-produced On This Day...1776, a YouTube web series that uses Google DeepMind video generation alongside real voice actors to dramatize the American Revolution. Viewer response has been brutal -- commenters mocked the uncanny faces and the fact that DeepMind rendered "America" as "Aamereedd."

A Taika Waititi-directed Xfinity commercial set to air during this weekend's Super Bowl, which de-ages Jurassic Park stars Sam Neill, Laura Dern and Jeff Goldblum, has already been mocked for producing what one viewer called "melting wax figures."
IT

Salesforce Shelves Heroku (heroku.com) 3

Salesforce is essentially shutting down Heroku as an evolving product, moving the cloud platform that helped define modern app deployment to a "sustaining engineering model" focused entirely on stability, security and support.

Existing customers on credit card billing see no changes to pricing or service, but enterprise contracts are no longer available to new buyers. Salesforce said it is redirecting engineering investment toward enterprise AI.
The Internet

AI.com Sells for $70 Million, the Highest Price Ever Disclosed for a Domain Name (ft.com) 18

Kris Marszalek, the co-founder and CEO of cryptocurrency exchange Crypto.com, has paid $70 million for the domain AI.com -- the highest price ever publicly disclosed for a website name, according to the deal's broker Larry Fischer of GetYourDomain.com.

The entire sum was paid in cryptocurrency to an undisclosed seller. Marszalek plans to debut the site during a Super Bowl ad this weekend, offering a personal "AI agent" that lets consumers send messages, use apps and trade stocks. The previous domain sale record was nearly $50 million for Carinsurance.com, per GoDaddy.
AI

KPMG Pressed Its Auditor To Pass on AI Cost Savings (ft.com) 33

An anonymous reader shares a report: KPMG, one of the world's largest auditors of public and private companies, negotiated lower fees from its own accountant by arguing that AI will make it cheaper to do the work, according to people familiar with the matter. The Big Four firm told its auditor, Grant Thornton UK, it should pass on cost savings from the rollout of AI and threatened to find a new accountant if it did not agree to a significant fee reduction, the people said.

The discussions last year came amid an industry-wide debate about the impact of new technology on audit firms' business and traditional pricing models. Firms have invested heavily in AI to speed up the planning of audits and automate routine tasks, but it is not yet clear if this will generate savings that are passed on to clients.

Grant Thornton is auditor to KPMG International, the UK-based umbrella organisation that co-ordinates the work of KPMG's independent, locally owned partnerships around the world. Talks with Grant Thornton were led by Michaela Peisger, a longtime audit partner and executive from KPMG's German member firm, who became KPMG International's chief financial officer at the beginning of 2025.

Bitcoin

Why This Is the Worst Crypto Winter Ever (bloomberg.com) 134

Bitcoin has fallen roughly 44% from its October peak, and while the drawdown isn't crypto's deepest ever on a percentage basis, Bloomberg's Odd Lots newsletter lays out a case that this is the industry's worst winter yet. The macro backdrop was supposed to favor Bitcoin: public confidence in the dollar is shaky, the Trump administration has been crypto-friendly, and fiat currencies are under perceived stress globally. Yet gold, not Bitcoin, has been the safe haven of choice.

The "we're so early" narrative is dead -- crypto ETFs exist, barriers to entry are zero, and the online community that once rallied holders through downturns has largely hollowed out. Institutional adoption arrived but hasn't lifted existing tokens like ETH or SOL; Wall Street cares about stablecoins and tokenization, not the coins themselves. AI is pulling both talent and miners toward data centers. Quantum computing advances threaten Bitcoin's encryption. And MicroStrategy and other Bitcoin treasury companies, once steady buyers during the bull run, are now large holders who may eventually become forced sellers.
Space

Musk Predicts SpaceX Will Launch More AI Compute Per Year Than the Cumulative Total on Earth (substack.com) 245

Elon Musk told podcast host Dwarkesh Patel and Stripe co-founder John Collison that space will become the most economically compelling location for AI data centers in less than 36 months, a prediction rooted not in some exotic technical breakthrough but in the basic math of electricity supply: chip output is growing exponentially, and electrical output outside China is essentially flat.

Solar panels in orbit generate roughly five times the power they do on the ground because there is no day-night cycle, no cloud cover, and no atmospheric loss. The system economics are even more favorable because space-based operations eliminate the need for batteries entirely, making the effective cost roughly 10 times cheaper than terrestrial solar, Musk said. The terrestrial bottleneck is already real.

Musk said powering 330,000 Nvidia GB300 chips -- once you account for networking hardware, storage, peak cooling on the hottest day of the year, and reserve margin for generator servicing -- requires roughly a gigawatt at the generation level. Gas turbines are sold out through 2030, and the limiting factor is the casting of turbine vanes and blades, a process handled by just three companies worldwide.

Five years from now, Musk predicted, SpaceX will launch and operate more AI compute annually than the cumulative total on Earth, expecting at least a few hundred gigawatts per year in space. Patel estimated that 100 gigawatts alone would require on the order of 10,000 Starship launches per year, a figure Musk affirmed. SpaceX is gearing up for 10,000 launches a year, Musk said, and possibly 20,000 to 30,000.
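As a back-of-envelope sanity check on the figures quoted above (all inputs are the article's own numbers, which may themselves be rough):

```python
# Arithmetic implied by the quoted figures -- not independent data.

chips = 330_000              # Nvidia GB300 chips per site, per Musk
site_power_w = 1e9           # ~1 GW at the generation level for that site
print(round(site_power_w / chips))   # ~3030 W per chip, all overheads included

target_w = 100e9             # 100 GW of orbital compute
launches_per_year = 10_000   # Patel's Starship launch estimate
print((target_w / launches_per_year) / 1e6)   # 10.0 MW of compute per launch
```

So Patel's estimate amounts to assuming each Starship delivers on the order of 10 MW of powered compute, and Musk's 330,000-chip figure works out to roughly 3 kW per chip once networking, cooling, and reserve margin are folded in.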
