Google

NATO Taps Google For Air-Gapped Sovereign Cloud (theregister.com) 14

NATO has hired Google to provide "air-gapped" sovereign cloud services and AI in "completely disconnected, highly secure environments." From a report: The Chocolate Factory will support the military alliance's Joint Analysis, Training, and Education Centre (JATEC) in a move designed to improve its digital infrastructure and strengthen its data governance. NATO was formed in 1949 after Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States signed the North Atlantic Treaty. Since then, 20 more European countries have joined, most recently Finland and Sweden. US President Donald Trump has criticized fellow members' financial contributions to the alliance and at times cast doubt on whether the US would defend its NATO allies.

In an announcement this week, Google Cloud said the "significant, multimillion-dollar contract" with the NATO Communication and Information Agency (NCIA) would offer highly secure, sovereign cloud capabilities. The agreement promises NATO "uncompromised data residency and operational controls, providing the highest degree of security and autonomy, regardless of scale or complexity," the statement said.

Google

How Google Finally Leapfrogged Rivals With New Gemini Rollout (msn.com) 38

An anonymous reader shares a report: With the release of its third version last week, Google's Gemini large language model surged past ChatGPT and other competitors to become the most capable AI chatbot, as determined by consensus industry-benchmark tests. [...] Aaron Levie, chief executive of the cloud content management company Box, got early access to Gemini 3 several days ahead of the launch. The company ran its own evaluations of the model over the weekend to see how well it could analyze large sets of complex documents. "At first we kind of had to squint and be like, 'OK, did we do something wrong in our eval?' because the jump was so big," he said. "But every time we tested it, it came out double-digit points ahead."

[...] Google has been scrambling to get an edge in the AI race since the launch of ChatGPT three years ago, which stoked fears among investors that the company's iconic search engine would lose significant traffic to chatbots. The company struggled for months to get traction. Chief Executive Sundar Pichai and other executives have since worked to overhaul the company's AI development strategy by breaking down internal silos, streamlining leadership and consolidating work on its models, employees say. Sergey Brin, one of Google's co-founders, resumed a day-to-day role at the company helping to oversee its AI-development efforts.

Programming

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI (thenewstack.io) 18

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing.

Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up into the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said.

The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said...

In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue.
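
The prioritization idea is simple to picture even without the product specifics. As an illustration only (the class, field names, and weights below are hypothetical, not Defender's or GitHub's actual schema or API), a sketch of ranking findings by runtime exposure might look like this:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A static-analysis finding enriched with runtime context."""
    identifier: str
    severity: str           # "critical", "high", "medium", "low"
    internet_facing: bool   # reachable from the public internet at runtime
    sensitive_data: bool    # the workload handles sensitive data
    running_in_prod: bool   # the vulnerable code path is actually deployed

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def runtime_risk_score(f: Finding) -> int:
    """Combine severity with runtime exposure so exposed-in-prod findings float to the top."""
    score = SEVERITY_WEIGHT.get(f.severity, 1)
    score += 3 if f.internet_facing else 0
    score += 2 if f.sensitive_data else 0
    score += 2 if f.running_in_prod else 0
    return score

def campaign_candidates(findings: list[Finding], threshold: int = 6) -> list[Finding]:
    """Findings worth pulling into a remediation campaign, highest runtime risk first."""
    return sorted(
        (f for f in findings if runtime_risk_score(f) >= threshold),
        key=runtime_risk_score,
        reverse=True,
    )

if __name__ == "__main__":
    backlog = [
        Finding("CVE-2025-0001", "high", internet_facing=True, sensitive_data=True, running_in_prod=True),
        Finding("CVE-2025-0002", "critical", internet_facing=False, sensitive_data=False, running_in_prod=False),
        Finding("CVE-2025-0003", "medium", internet_facing=False, sensitive_data=False, running_in_prod=False),
    ]
    for f in campaign_candidates(backlog):
        print(f.identifier, runtime_risk_score(f))
```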

GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further.

"We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."

The Internet

How the Internet Rewired Work - and What That Tells Us About AI's Likely Impact (msn.com) 105

"The internet did transform work — but not the way 1998 thought..." argues the Wall Street Journal. "The internet slipped inside almost every job and rewired how work got done."

So while the number of single-task jobs like travel agent dropped, most jobs "are bundles of judgment, coordination and hands-on work," and instead the internet brought "the quiet transformation of nearly every job in the economy... Today, just 10% of workers make minimal use of the internet on the job — roles like butcher and carpet installer." [T]he bigger story has been additive. In 1998, few could conceive of social media — let alone 65,000 social-media managers — and 200,000 information-security analysts would have sounded absurd when data still lived on floppy disks... Marketing shifted from campaign bursts to always-on funnels and A/B testing. Clinics embedded e-prescribing and patient portals, reshaping front-office and clinical handoffs. The steps, owners and metrics shifted. Only then did the backbone scale: We went from server closets wedged next to the mop sink to data centers and cloud regions, from lone system administrators to fulfillment networks, cybersecurity and compliance.

That is where many unexpected jobs appeared. Networked machines and web-enabled software quietly transformed back offices as much as our on-screen lives. Similarly, as e-commerce took off, internet-enabled logistics rewired planning roles — logisticians, transportation and distribution managers — and unlocked a surge in last-mile work. The build-out didn't just hire coders; it hired coordinators, pickers, packers and drivers. It spawned hundreds of thousands of warehouse and delivery jobs — the largest pockets of internet-driven job growth, and yet few had them on their 1998 bingo card... Today, the share of workers in professional and managerial occupations has more than doubled since the dawn of the digital era.

So what does that tell us about AI? Our mental model often defaults to an industrial image — John Henry versus the steam drill — where jobs are one dominant task, and automation maps one-to-one: Automate the task, eliminate the job. The internet revealed a different reality: Modern roles are bundles. Technologies typically hit routine tasks first, then workflows, and only later reshape jobs, with second-order hiring around the backbone. That complexity is what made disruption slower and more subtle than anyone predicted. AI fits that pattern more than it breaks it... [LLMs] can draft briefs, summarize medical notes and answer queries. Those are tasks — important ones — but still parts of larger roles. They don't manage risk, hold accountability, reassure anxious clients or integrate messy context across teams. Expect a rebalanced division of labor: The technical layer gets faster and cheaper; the human layer shifts toward supervision, coordination, complex judgment, relationship work and exception handling.

What to expect from AI, then, is messy, uneven reshuffling in stages. Some roles will contract sharply — and those contractions will affect real people. But many occupations will be rewired in quieter ways. Productivity gains will unlock new demand and create work that didn't exist, alongside a build-out around data, safety, compliance and infrastructure.

AI is unprecedented; so was the internet. The real risk is timing: overestimating job losses, underestimating the long, quiet rewiring already under way, and overlooking the jobs created in the backbone. That was the internet's lesson. It's likely to be AI's as well.

Earth

'The Strange and Totally Real Plan to Blot Out the Sun and Reverse Global Warming' (politico.com) 117

In a 2023 pitch to investors, a "well-financed, highly credentialed" startup named Stardust aimed for a "gradual temperature reduction demonstration" in 2027, according to a massive new 9,600-word article from Politico. ("Annually dispersing ~1 million tons of sun-reflecting particles," says one slide. "Equivalent to ~1% extra cloud coverage.")

"Another page told potential investors Stardust had already run low-altitude experiments using 'test particles'," the article notes: [P]ublic records and interviews with more than three dozen scientists, investors, legal experts and others familiar with the company reveal an organization advancing rapidly to the brink of being able to press "go" on its planet-cooling plans. Meanwhile, Stardust is seeking U.S. government contracts and quietly building an influence machine in Washington to lobby lawmakers and officials in the Trump administration on the need for a regulatory framework that it says is necessary to gain public approval for full-scale deployment....

The presentation also included revenue projections and a series of opportunities for venture capitalists to recoup their investments. Stardust planned to sign "government contracts," said a slide with the company's logo next to an American flag, and consider a "potential acquisition" by 2028. By 2030, the deck foresaw a "large-scale demonstration" of Stardust's system. At that point, the company claimed it would already be bringing in $200 million per year from its government contracts and eyeing an initial public offering, if it hadn't been sold already.

The article notes that for "a widening circle of researchers and government officials, Stardust's perceived failures to be transparent about its work and technology have triggered a larger conversation about what kind of international governance framework will be needed to regulate a new generation of climate technologies." (Since currently Stardust and its backers "have no legal obligations to adhere to strenuous safety principles or to submit themselves to the public view.")

In October Politico spoke to Stardust CEO Yanai Yedvab, a former nuclear physicist who was once deputy chief scientist at the Israeli Atomic Energy Commission. Stardust "was ready to announce the $60 million it had raised from 13 new investors," the article points out, "far larger than any previous investment in solar geoengineering." [Yedvab] was delighted, he said, not by the money, but by what it meant for the project. "We are, like, few years away from having the technology ready to a level that decisions can be taken" — meaning that deployment was still on track to potentially begin on the timeline laid out in the 2023 pitch deck. The money raised was enough to start "outdoor contained experiments" as soon as April, Yedvab said. These would test how their particles performed inside a plane flying at stratospheric heights, some 11 miles above the Earth's surface... The key thing, he insisted, was the particle was "safe." It would not damage the ozone layer and, when the particles fall back to Earth, they could be absorbed back into the biosphere, he said. Though it's impossible to know whether this is true until the company releases its formula. Yedvab said this round of testing would make Stardust's technology ready to begin a staged process of full-scale, global deployment before the decade is over — as long as the company can secure a government client. To start, they would only try to stabilize global temperatures — in other words, fly enough particles into the sky to counteract the steady rise in greenhouse gas levels — which would initially take a fleet of 100 planes.
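
If the pitch deck's figure of roughly 1 million tons of particles a year and the 100-plane fleet Yedvab describes refer to the same scale of operation, a back-of-the-envelope check gives a rough sense of the logistics (a sketch using only the numbers quoted above, not company data):

```python
# Rough scale check from the figures quoted above (illustrative only;
# Stardust has published no payload or sortie details).
annual_tons = 1_000_000   # "~1 million tons of sun-reflecting particles" per year
fleet_size = 100          # the initial fleet Yedvab says stabilization would take

tons_per_plane_per_year = annual_tons / fleet_size
tons_per_plane_per_day = tons_per_plane_per_year / 365

print(f"{tons_per_plane_per_year:,.0f} tons per plane per year")  # 10,000
print(f"{tons_per_plane_per_day:,.1f} tons per plane per day")    # ~27.4
```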

This raises the question: should the world attempt solar geoengineering? That the global temperature would drop is not in question. Britain's Royal Society... said in a report issued in early November that there was little doubt it would be effective. They did not endorse its use, but said that, given the growing interest in this field, there was good reason to be better informed about the side effects... [T]hat doesn't mean it can't have broad benefits when weighed against deleterious climate change, according to Ben Kravitz, a professor of earth and atmospheric sciences at Indiana University who has closely studied the potential effects of solar geoengineering. "There would be some winners and some losers. But in general, some amount of ... stratospheric aerosol injection would likely benefit a whole lot of people, probably most people," he said. Other scientists are far more cautious. The Royal Society report listed a range of potential negative side effects that climate models had displayed, including drought in sub-Saharan Africa. In accompanying documents, it also warned of more intense hurricanes in the North Atlantic and winter droughts in the Mediterranean. But the picture remains partial, meaning there is no way yet to have an informed debate over how useful or not solar geoengineering could be...

And then there's the problem of trying to stop. Because an abrupt end to geoengineering, with all the carbon still in the atmosphere, would cause the temperature to soar suddenly upward with unknown, but likely disastrous, effects... Once the technology is deployed, the entire world would be dependent on it for however long it takes to reduce the trillion or more tons of excess carbon dioxide in the atmosphere to a safe level...

Stardust claims to have solved many technical and safety challenges, especially related to the environmental impacts of the particle, which they say would not harm nature or people. But researchers say the company's current lack of transparency makes it impossible to trust.

Thanks to long-time Slashdot reader fjo3 for sharing the article.
Google

Google Must Double AI Serving Capacity Every 6 Months To Meet Demand (cnbc.com) 57

Google's AI infrastructure chief told employees the company must double its AI serving capacity every six months to meet demand. Earlier this month, Amin Vahdat, a vice president at Google Cloud, gave a presentation titled "AI Infrastructure." It included a slide on "AI compute demand" that said: "Now we must double every 6 months.... the next 1000x in 4-5 years." CNBC reports: The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a "significant increase" in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
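
The two figures on the slide line up: doubling every six months compounds to roughly a thousandfold increase at the five-year mark. A quick check (plain compounding; nothing here comes from Google beyond the quoted slide):

```python
# Capacity multiple after `years` if serving capacity doubles every six months.
def growth_after(years: float, doubling_period_years: float = 0.5) -> float:
    return 2 ** (years / doubling_period_years)

for years in (4, 4.5, 5):
    print(f"{years} years -> ~{growth_after(years):,.0f}x")
# 4 years -> ~256x, 4.5 years -> ~512x, 5 years -> ~1,024x
```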

Google's "job is of course to build this infrastructure but it's not to outspend the competition, necessarily," Vahdat said. "We're going to spend a lot," he said, adding that the real goal is to provide infrastructure that is far "more reliable, more performant and more scalable than what's available anywhere else." In addition to infrastructure build-outs, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018.

Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years. Google needs to "be able to deliver 1,000 times more capability, compute, storage networking for essentially the same cost and increasingly, the same power, the same energy level," Vahdat said. "It won't be easy but through collaboration and co-design, we're going to get there."
China

Tech Company CTO and Others Indicted For Exporting Nvidia Chips To China (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: The US crackdown on chip exports to China has continued with the arrests of four people accused of a conspiracy to illegally export Nvidia chips. Two US citizens and two nationals of the People's Republic of China (PRC), all of whom live in the US, were charged in an indictment (PDF) unsealed on Wednesday in US District Court for the Middle District of Florida. The indictment alleges a scheme to send Nvidia "GPUs to China by falsifying paperwork, creating fake contracts, and misleading US authorities," John Eisenberg, assistant attorney general for the Justice Department's National Security Division, said in a press release yesterday.

The four arrestees are Hon Ning Ho (aka Mathew Ho), a US citizen who was born in Hong Kong and lives in Tampa, Florida; Brian Curtis Raymond, a US citizen who lives in Huntsville, Alabama; Cham Li (aka Tony Li), a PRC national who lives in San Leandro, California; and Jing Chen (aka Harry Chen), a PRC national who lives in Tampa on an F-1 non-immigrant student visa. The suspects face a raft of charges for conspiracy to violate the Export Control Reform Act of 2018, smuggling, and money laundering. If convicted and given the maximum sentences, they could serve decades in prison and forfeit their financial gains. The indictment says that Chinese companies paid the conspirators nearly $3.9 million.

One of the suspects was briefly the CTO of Corvex, a Virginia-based AI cloud computing company that is planning to go public. Corvex told CNBC yesterday that it "had no part in the activities cited in the Department of Justice's indictment," and that "the person in question is not an employee of Corvex. Previously a consultant to the company, he was transitioning into an employee role but that offer has been rescinded."

Businesses

Amazon Cut Thousands of Engineers in Its Record Layoffs, Despite Saying It Needs To Innovate Faster (cnbc.com) 64

Amazon's 14,000-plus layoffs announced last month touched almost every piece of the company's sprawling business, from cloud computing and devices to advertising, retail and grocery stores. But one job category bore the brunt of cuts more than others: engineers. CNBC: Documents filed in New York, California, New Jersey and Amazon's home state of Washington showed that nearly 40% of the more than 4,700 job cuts in those states were engineering roles. The data was reported by Amazon in Worker Adjustment and Retraining Notification, or WARN, filings to state agencies. The figures represent a segment of the total layoffs announced in October. Not all data was immediately available because of differences in state WARN reporting requirements.

Microsoft

Microsoft's AI-Powered Copy and Paste Can Now Use On-Device AI (theverge.com) 45

An anonymous reader shares a report: Microsoft is upgrading its Advanced Paste tool in PowerToys for Windows 11, allowing you to use an on-device AI model to power some of its features. With the 0.96 update, you can route requests through Microsoft's Foundry Local tool or the open-source Ollama, both of which run AI models on your device's neural processing unit (NPU) instead of connecting to the cloud.

That means you won't need to purchase API credits to perform certain actions, like having AI translate or summarize the text copied to your clipboard. Plus, you can keep your data on your device.
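
For a sense of what the on-device path looks like, here is a minimal sketch of the kind of request a clipboard tool could route to a local model, assuming a default Ollama install listening on localhost:11434 and an already-pulled model (the model name and prompt are placeholders; this is not PowerToys' actual code):

```python
import json
import urllib.request

def summarize_locally(clipboard_text: str, model: str = "llama3.2") -> str:
    """Send clipboard text to a locally running Ollama server and return its summary."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Summarize the following text in two sentences:\n\n{clipboard_text}",
        "stream": False,  # ask for a single JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(summarize_locally("Paste a long article here and get a short summary back, all offline."))
```

Because the request never leaves the machine, no API credits are involved and the clipboard contents stay local, which is the point of the feature.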

Cloud

Nvidia Brings Ad-free Cloud Gaming To New Chromebooks (theverge.com) 13

Nvidia and Google today announced a new cloud gaming plan called GeForce Now Fast Pass that is exclusive to Chromebooks. Anyone who purchases a new Chromebook will receive a year of the service included with their device at no additional charge. Fast Pass allows Chromebook owners to stream more than 2,000 games from their existing Steam, Epic or Xbox libraries.

The service removes ads and lets users skip the queue that typically adds two minutes or more of wait time on GeForce Now's free tier. Users get 10 hours of cloud gaming each month. Up to five unused hours can roll over to the following month. Nvidia offers other paid plans starting at $9.99 per month that support higher resolutions, faster frame rates, RTX ray-tracing, and access to a larger game library that includes thousands of additional titles. The companies did not announce pricing for Fast Pass after the first year ends.

Businesses

Adobe Bolsters AI Marketing Tools With $1.9 Billion Semrush Buy (reuters.com) 4

Adobe is buying Semrush for $1.9 billion in a move to supercharge its AI-driven marketing stack. Reuters reports: Semrush designs and develops AI software that helps companies with search engine optimization, social media and digital advertising. The acquisition, expected to close in the first half of next year, would allow Adobe to help marketers better understand how their brands are viewed by online consumers through searches on websites and generative AI bots such as ChatGPT and Gemini. "The price is steep as Semrush isn't a massive revenue engine on its own, so Adobe is likely paying for strategic value. The payoff could be high too if Adobe can quickly turn Semrush's data into monetizable AI products," said Emarketer analyst Grace Harmon.

"While we are positive on Adobe restarting its M&A engine given the success that it has seen with this motion over the years... this deal likely does little to answer the questions revolving around the company's creative cloud business," added William Blair analysts.

Movies

Saudi Makes Big Bet On AI Films As Hollywood Moves From Studios To Datacenters (reuters.com) 38

pbahra writes: Saudi Arabia is betting that the future of Hollywood won't be built in physical studios but in datacenters. In a push to anchor itself in next-generation film production, Riyadh-based Humain has led Luma AI's latest Series C round, backing the shift towards cloud-based, AI-generated video rather than traditional studio infrastructure. Humain's announcement says the new investment will accelerate Luma's development of world models capable of learning from video, audio and language to generate photorealistic scenes, environments and characters on demand. Supporters argue this could upend film-making by pushing much of Hollywood's production pipeline into high-performance datacenters rather than physical sets.

Businesses

Nvidia Beats Earnings Expectations, Even As Bubble Concerns Mount 22

Nvidia blew past earnings expectations with soaring revenue and profit, easing fears of an AI bubble and reinforcing its position as the engine of the global AI boom. From a report: Nvidia's sales grew 62% year-over-year to $57 billion in the October quarter, ahead of the $54.9 billion Wall Street had projected, signaling that demand for AI chips remains strong even as more questions emerge about whether returns from the technology will keep up with the pace of AI infrastructure investments. It posted profits of $31.9 billion, up 65% from the year-ago quarter and also slightly above expectations.

"Blackwell sales are off the charts, and cloud GPUs are sold out," Nvidia CEO Jensen Huang said in a statement, a message that echoes his earlier arguments that fears of an AI bubble are overblown. The company also posted stronger-than-expected sales guidance of around $65 billion for the fourth quarter, another indicator that the AI spending spree isn't slowing anytime soon.

Cloud

Cloud-Native Computing Is Poised To Explode (zdnet.com) 32

An anonymous reader quotes a report from ZDNet: At KubeCon North America 2025 in Atlanta, the Cloud Native Computing Foundation (CNCF)'s leaders predicted an enormous surge in cloud-native computing, driven by the explosive growth of AI inference workloads. How much growth? They're predicting hundreds of billions of dollars in spending over the next 18 months. [...] Cloud-native computing and AI inference converge when AI is no longer treated as a separate track: AI workloads, particularly inference tasks, are fueling a new era in which intelligent applications require scalable and reliable infrastructure. That era is unfolding because, said [CNCF Executive Director Jonathan Bryce], "AI is moving from a few 'Training supercomputers' to widespread 'Enterprise Inference.' This is fundamentally a cloud-native problem. You, the platform engineers, are the ones who will build the open-source platforms that unlock enterprise AI."

"Cloud native and AI-native development are merging, and it's really an incredible place we're in right now," said CNCF CTO Chris Aniszczyk. The data backs up this opinion. For example, Google has reported that its internal inference jobs have processed 1.33 quadrillion tokens per month recently, up from 980 trillion just months before. [...] Aniszczyk added that cloud-native projects, especially Kubernetes, are adapting to serve inference workloads at scale: "Kubernetes is obviously one of the leading examples as of the last release the dynamic resource allocation feature enables GPU and TPU hardware abstraction in a Kubernetes context." To better meet the demand, the CNCF announced the Certified Kubernetes AI Conformance Program, which aims to make AI workloads as portable and reliable as traditional cloud-native applications.

"As AI moves into production, teams need a consistent infrastructure they can rely on," Aniszczyk stated during his keynote. "This initiative will create shared guardrails to ensure AI workloads behave predictably across environments. It builds on the same community-driven standards process we've used with Kubernetes to help bring consistency as AI adoption scales." What all this effort means for business is that AI inference spending on cloud-native infrastructure and services will reach into the hundreds of billions within the next 18 months. That investment is because CNCF leaders predict that enterprises will race to stand up reliable, cost-effective AI services.

Oracle

Oracle is Already Underwater On Its 'Astonishing' $300B OpenAI Deal (ft.com) 29

An anonymous reader shares a report: It's too soon to be talking about the Curse of OpenAI, but we're going to anyway. Since September 10, when Oracle announced a $300 billion deal with the chatbot maker, its stock has shed $315 billion in market value.

OK, yes, it's a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $15 billion loss figure [figure updated with stock price] is not entirely wrong. Oracle's "astonishing quarter" really has cost it nearly as much as one General Motors, or two Kraft Heinz.

Botnet

Microsoft Mitigated the Largest Cloud DDoS Ever Recorded, 15.7 Tbps (securityaffairs.com) 11

An anonymous reader quotes a report from Security Affairs: On October 24, 2025, Azure DDoS Protection detected and mitigated a massive multi-vector attack peaking at 15.72 Tbps and 3.64 billion pps, the largest cloud DDoS ever recorded, aimed at a single Australian endpoint. Azure's global protection network filtered the traffic, keeping services online. The attack came from the Aisuru botnet, a Turbo Mirai-class IoT botnet using compromised home routers and cameras.

The attack used massive UDP floods from more than 500,000 IPs hitting a single public address, with little spoofing and random source ports that made traceback easier. It highlights how attackers are scaling with the internet: faster home fiber and increasingly powerful IoT devices keep pushing DDoS attack sizes higher.
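
Dividing the two peak figures gives the implied average packet size, which fits the bandwidth-heavy UDP flood described above (a simple division of the reported numbers, not additional data from Microsoft's report):

```python
peak_bits_per_second = 15.72e12    # 15.72 Tbps
peak_packets_per_second = 3.64e9   # ~3.64 billion packets per second

avg_packet_bytes = peak_bits_per_second / peak_packets_per_second / 8
print(f"~{avg_packet_bytes:.0f} bytes per packet on average")  # roughly 540 bytes
```
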
"On October 24, 2025, Azure DDOS Protection automatically detected and mitigated a multi-vector DDoS attack measuring 15.72 Tbps and nearly 3.64 billion packets per second (pps). This was the largest DDoS attack ever observed in the cloud and it targeted a single endpoint in Australia," reads a report published by Microsoft. "The attack originated from Aisuru botnet."

"Attackers are scaling with the internet itself. As fiber-to-the-home speeds rise and IoT devices get more powerful, the baseline for attack size keeps climbing," concludes the post. "As we approach the upcoming holiday season, it is essential to confirm that all internet-facing applications and workloads are adequately protected against DDOS attacks."

Cloud

Tech Giants' Cloud Power Probed As EU Weighs Inclusion In DMA (bloomberg.com) 13

An anonymous reader quotes a report from Bloomberg: Amazon Web Services, Microsoft's Azure, and Alphabet's Google Cloud risk being dragged into the scope of the European Union's crackdown on Big Tech as antitrust watchdogs prepare to study the platforms' market power. The European Commission wants to decide if any of the trio should face a raft of new restrictions under the bloc's Digital Markets Act (source paywalled; alternative source), according to people familiar with the matter who spoke on condition of anonymity. The plan for a market probe follows several major outages in the cloud industry that wrought havoc across global services, highlighting the risks of relying on a mere handful of players.

To date, the world's largest cloud providers have avoided the DMA because a large part of their business comes via enterprise contracts, making it difficult to count the number of individual users, one of the EU's main benchmarks for earmarking Silicon Valley services for extra oversight. Under the investigation's remit, regulators will assess whether the top cloud operators — regardless of the challenge of counting user numbers — should be forced to contend with a raft of fresh obligations including increased interoperability with rival software and better data portability for users, as well as restrictions on tying and bundling.

Earth

Iran Begins Cloud Seeding To Induce Rain Amid Historic Drought (bbc.com) 36

Authorities in Iran have sprayed clouds with chemicals to induce rain, in an attempt to combat the country's worst drought in decades. From a report: Known as cloud-seeding, the process was conducted over the Urmia lake basin on Saturday, Iran's official news agency Irna reported. Urmia is Iran's largest lake, but has largely dried out, leaving a vast salt bed. Further operations will be carried out in East and West Azerbaijan, the agency said.

Rainfall is at record lows and reservoirs are nearly empty. Last week President Masoud Pezeshkian warned that if there is not enough rainfall soon, Tehran's water supply could be rationed and people may be evacuated from the capital. Cloud seeding involves injecting chemical salts including silver or potassium iodide into clouds via aircraft or through generators on the ground. Water vapour can then condense more easily and turn into rain. The technique has been around for decades, and the UAE has used it in recent years to help address water shortages. Iran's meteorological organisation said rainfall had decreased by about 89% this year compared with the long-term average, Irna reported.

AI

Microsoft Executives Discuss How AI Will Change Windows, Programming -- and Society (windowscentral.com) 69

"Windows is evolving into an agentic OS," Microsoft's president of Windows Pavan Davuluri posted on X.com, "connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere."

But former Uber software engineer and engineering manager Gergely Orosz was unimpressed. "Can't see any reason for software engineers to choose Windows with this weird direction they are doubling down on. So odd because Microsoft has building dev tools in their DNA... their OS doesn't look like anything a builder who wants OS control could choose. Mac or Linux it is for devs."

Davuluri "has since disabled replies on his original post..." notes the blog Windows Central, "which some people viewed as an attempt to shut out negative feedback." But he also replied to that comment... Davuluri says "we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows..." The good news is Davuluri has confirmed that Microsoft is listening, and is aware of the backlash it's receiving over the company's obsession with AI in Windows 11. That doesn't mean the company is going to stop with adding AI to Windows, but it does mean we can also expect Microsoft to focus on the other things that matter too, such as stability and power user enhancements.

Elsewhere on X.com, Microsoft CEO Satya Nadella shared his own thoughts on "the net benefit of the AI platform wave." The Times of India reports: Nadella said tech companies should focus on building AI systems that create more value for the people and businesses using them, not just for the companies that make the technology. He cited Bill Gates to make the same point: "A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it." Tesla CEO Elon Musk responded to Nadella's post with a facepalm emoji.

Nadella said this idea matters even more during the current AI boom, where many firms risk giving away too much of their own value to big tech platforms. "The real question is how to empower every company out there to build their own AI-native capabilities," he wrote. Nadella pointed to Microsoft's partnership with OpenAI as a counterexample to the industry's zero-sum mindset... [He also cited Microsoft's "work to bring AMD into the fleet."]

More from Satya Nadella's post: Thanks to AI, the [coding] category itself has expanded and may ultimately become one of the largest software categories. I don't ever recall any analyst ever asking me about how much revenue Visual Studio makes! But now everyone is excited about AI coding tools. This is another aspect of positive sum, when the category itself is redefined and the pie becomes 10x what it was! With GitHub Copilot we compete for our share and with GitHub and Agent HQ we also provide a platform for others.

Of course, the real test of this era won't be when another tech company breaks a valuation record. It will be when the overall economy and society themselves reach new heights. When a pharma company uses AI in silico to bring a new therapy to market in one year instead of twelve. When a manufacturer uses AI to redesign a supply chain overnight. When a teacher personalizes lessons for every student. When a farmer predicts and prevents crop failure. That's when we'll know the system is working.

Let us move beyond zero-sum thinking and the winner-take-all hype and focus instead on building broad capabilities that harness the power of this technology to achieve local success in each firm, which then leads to broad economic growth and societal benefits. And every firm needs to make sure they have control of their own destiny and sovereignty vs just a press release with a Tech/AI company or worse leak all their value through what may seem like a partnership, except it's extractive in terms of value exchange in the long run.

AI

Fear Drives the AI 'Cold War' Between America and China (msn.com) 28

A new "cold war" between America and China is "pushing leaders to sideline concerns about the dangers of powerful AI models," reports the Wall Street Journal, "including the spread of disinformation and other harmful content, and the development of superintelligent AI systems misaligned with human values..."

"Both countries are driven as much by fear as by hope of progress. " In Washington and Silicon Valley, warnings abound that China's "authoritarian AI," left unchecked, will erode American tech supremacy. Beijing is gripped by the conviction that a failure to keep pace in AI will make it easier for the U.S. to cut short China's resurgence as a global power. Both countries believe market share for their companies across the world is up for grabs — and with it, the potential to influence large swaths of the global population.

The U.S. still has a clear lead, producing the most powerful AI models. China can't match it in advanced chips and has no answer for the financial firepower of private American investors, who funded AI startups to the tune of $104 billion in the first half of 2025, and are gearing up for more. But it has a massive population of capable engineers, lower costs and a state-led development model that often moves faster than the U.S., all of which Beijing is working to harness to tip the contest in its direction. A new "whole of society" campaign looks to accelerate the construction of computing clusters in areas like Inner Mongolia, where vast solar and wind farms provide plentiful cheap energy, and connect hundreds of data centers to create a shared compute pool — some describe it as a "national cloud" — by 2028. China is also funneling hundreds of billions of dollars into its power grid to support AI training and adoption...

"Our lead is probably in the 'months but not years' realm," said Chris McGuire, who helped design U.S. export controls on AI chips while serving on the National Security Council under the Biden administration. Chinese AI models currently rank at or near the top in every task from coding to video generation, with the exception of search, according to Chatbot Arena, a popular crowdsourced ranking platform. China's manufacturing sector, meanwhile, is rocketing past the U.S. in bringing AI into the physical world through robotaxis, autonomous drones and humanoid robots. Given China's progress, McGuire said, the U.S. is "very lucky" to have its advantage in chips...

If AI surpasses human intelligence and acquires the ability to improve itself, it could confer unshakable scientific, economic and military superiority on the country that controls it. Short of that, AI's ability to automate tedious tasks and process vast amounts of data quickly promises to supercharge everything from cancer diagnoses to missile defense. With so much at stake, hacking and cyber espionage are likely to get worse, as AI gives hackers more powerful tools, while increasing incentives for state-backed groups to try to steal AI-related intellectual property. As distrust grows, Washington and Beijing will also find it hard, if not impossible, to cooperate in areas like preventing extremist groups from using AI in destructive ways, such as building bioweapons. "The costs of the AI Cold War are already high and will go much higher," said Paul Triolo, a former U.S. government analyst and current technology policy lead at business consulting firm DGA-Albright Stonebridge Group. "A U.S.-China AI arms race becomes a self-fulfilling prophecy, with neither side able to trust that the other would observe any restrictions on advanced AI capability development...."

The article includes an interesting observation from Helen Toner, director of strategy for Georgetown's Center for Security and Emerging Technology and a former OpenAI board member. Toner points out "We don't actually know" if boosting computing power with better chips will continue producing more-powerful AI models.

So "If performance plateaus," the Journal writes, "despite all the spending by OpenAI and others — a growing concern in Silicon Valley — China has a chance to compete."
