XBox (Games)

Microsoft Expands Xbox Cloud Gaming to Cheaper Game Pass Tiers

Microsoft is testing new Xbox Game Pass features with Insiders, letting Core and Standard subscribers stream cloud-enabled titles they own or access via subscription across more devices, including supported TVs and browsers. These tiers will also gain access to select PC game versions for the first time. From an Xbox blog post: "We're always exploring more ways to make your Xbox experience centered around you -- your content, benefits, and playstyle. That's why we're making it easier to enjoy the games you love, wherever you are, and on any device. Starting today, Xbox Insiders are invited to try out new updates in Xbox Game Pass that make it easier to stream and play across more devices.

Xbox Insiders subscribed to Xbox Game Pass Core or Standard now have even more freedom to play wherever they are with Xbox Cloud Gaming (Beta). As part of this Insider experience, Xbox Game Pass Core and Standard subscribers will be able to stream cloud playable games included with their subscription or select cloud playable games they own, making it easier to jump in from any supported device. [...] We're expanding the ways players can experience PC gaming through Xbox Game Pass. As part of testing, Xbox Insiders subscribed to Game Pass Core or Standard will for the first time gain access to PC versions of select titles, giving you even more flexibility and the choice to play on a PC or Windows handheld."
Japan

Japan Launches its First Homegrown Quantum Computer (livescience.com)

Japan has launched its first entirely homegrown quantum computer, built with domestic superconducting qubits and components, and running on the country's own open-source software toolchain, OQTOPUS. "The system is now ready to take on workloads from its base at the University of Osaka's Center for Quantum Information and Quantum Biology (QIQB)," reports LiveScience. From the report: The system uses a quantum chip with superconducting qubits -- quantum bits derived from metals that exhibit zero electrical resistance when cooled to temperatures close to absolute zero (minus 459.67 degrees Fahrenheit, or minus 273.15 degrees Celsius). The quantum processing unit (QPU) was developed at the Japanese research institute RIKEN. Other components that make up the "chandelier" -- the main body of the quantum computer -- include the chip package, delivered by Seiken, the magnetic shield, infrared filters, bandpass filters, a low-noise amplifier and various cables.

These are all housed in a dilution refrigerator (a specialized cryogenic device that cools the quantum computing components) to allow for those extremely low temperatures. It also comes alongside a pulse tube refrigerator (which again cools various components in use), controllers and a low-noise power source. OQTOPUS, meanwhile, is a collection of open-source tools that include everything required to run quantum programs. It includes the core engine and cloud module, as well as graphical user interface (GUI) elements, and is designed to be built on top of a QPU and quantum control hardware.

Cloud

Word Documents Will Now Be Saved To the Cloud Automatically On Windows (ghacks.net)

Starting with Word for Windows version 2509, Microsoft is making cloud saving the default behavior. New documents will automatically save to OneDrive (or another cloud destination), with dated filenames, unless users manually revert to local saving in the settings. From the report: "Anything new you create will be saved automatically to OneDrive or your preferred cloud destination," writes Raul Munoz, product manager at Microsoft on the Office Shared Services and Experiences team. Munoz backs up the decision with half a dozen advantages of saving documents to the cloud, from never losing progress and being able to access files anywhere to easier collaboration and increased security and compliance. While cloud saving is without doubt beneficial in some cases, Munoz fails to address the elephant in the room: some users may not want their documents stored in the cloud, and there are good reasons for that, including privacy.

Summed up:
- If you do not mind that Word documents are stored in the cloud, you do not need to do anything.
- If you do mind, you need to change the default setting.

Open Source

LibreOffice Stakes Claim as Strategic Sovereignty Tool For Governments (documentfoundation.org)

The Document Foundation, which operates the popular open source productivity suite LibreOffice, is positioning the suite's newest release, v25.8, as a strategic asset for digital sovereignty, targeting governments and enterprises seeking independence from foreign software vendors and cloud infrastructure.

The Document Foundation released the update last week with a zero-telemetry architecture, full offline capability, and OpenPGP encryption for documents, directly addressing national security concerns about extraterritorial surveillance and software backdoors. The suite requires no internet access for any features and maintains complete transparency through open source code that governments can audit. Government bodies in Germany, Denmark, and France, alongside national ministries in Italy and Brazil, have deployed LibreOffice to meet GDPR compliance, national procurement laws, and IT localization mandates while eliminating unpredictable licensing costs from proprietary vendors.

"It's time to own your documents, own your infrastructure, and own your future," the foundation wrote in a blog post.
Python

Survey Finds More Python Developers Like PostgreSQL, AI Coding Agents - and Rust for Packages (jetbrains.com)

More than 30,000 Python developers from around the world answered questions for the Python Software Foundation's annual survey — and PSF Fellow Michael Kennedy tells the Python community what they've learned in a new blog post. Some highlights: Most still use older Python versions despite benefits of newer releases... Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). [Although less than 1% are using "Python 3.5 or lower".] The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising... You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading... [He calculates some cloud users are paying between $420,000 and $5.6 million more in compute costs.] If your company realizes you are burning an extra $0.4M-$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation...
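The "just choose a newer runtime" argument is easy to enforce mechanically. A minimal sketch of a startup guard that fails fast when a service is still on an old interpreter; the version floor here is a hypothetical example, not a PSF recommendation:

```python
import sys

# Hypothetical minimum: recent CPython releases ship measurable
# interpreter speedups, so this floor is worth revisiting yearly.
MIN_VERSION = (3, 12)

def check_runtime(min_version=MIN_VERSION):
    """Exit with a clear message if the interpreter is older than min_version."""
    if sys.version_info[:2] < min_version:
        raise SystemExit(
            f"Python {'.'.join(map(str, min_version))}+ required, "
            f"found {sys.version.split()[0]}"
        )
    return sys.version_info[:2]

if __name__ == "__main__":
    print("running on", check_runtime())
```

A guard like this turns "we never got around to upgrading" into a visible failure at deploy time rather than a silent compute bill.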

Rust is how we speed up Python now... The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages... [The blog post later advises Python developers to learn to read basic Rust, "not to replace Python, but to complement it," since Rust "is becoming increasingly important in the most significant portions of the Python ecosystem."]

PostgreSQL is the king of Python databases, and it's only growing, going from 43% to 49% usage. That's +14% year over year in relative terms, which is remarkable for a 28-year-old open-source project... [E]very single database in the top six grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above...
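The two growth figures reconcile as follows: six percentage points on a 43% base is roughly a 14% relative increase. A quick check:

```python
old_share, new_share = 0.43, 0.49

point_gain = new_share - old_share         # gain in percentage points
relative_gain = point_gain / old_share     # growth relative to last year, ~0.14

print(f"+{point_gain * 100:.0f} points, {relative_gain:.0%} relative growth")
```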

[N]early half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

It's their eighth annual survey (conducted in collaboration with JetBrains last October and November). But even though Python is 34 years old, it's still evolving. "In just the past few months, we have seen two new high-performance typing tools released," notes the blog post. (The ty and Pyrefly typecheckers — both written in Rust.) And Python 3.14 will be the first version of Python to completely support free-threaded Python... Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime... Developers and data scientists will have to think more carefully about threaded code with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not enter race conditions and deadlocks.

There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.
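The single-process ceiling he describes can be sketched with ordinary threads. On a GIL build the workers below take turns on one core; on a free-threaded build (PEP 703) the same code can occupy all of them. The prime-counting function is just a stand-in for any CPU-bound workload:

```python
import concurrent.futures
import sys
import time

def count_primes(limit):
    """CPU-bound stand-in workload: naive prime count below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_threaded(chunks):
    # On a GIL build these threads take turns on one core; on a
    # free-threaded build they can run on separate cores simultaneously.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(count_primes, chunks))

if __name__ == "__main__":
    # sys._is_gil_enabled() exists on CPython 3.13+; assume True elsewhere.
    print("GIL enabled:", getattr(sys, "_is_gil_enabled", lambda: True)())
    start = time.perf_counter()
    print(run_threaded([50_000] * 4), f"{time.perf_counter() - start:.2f}s")
```

The code is identical either way; only the build of CPython determines whether those four workers saturate one core or four.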

Some other notable findings from the survey:
  • Data science is now over half of all Python. This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.
  • Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings)...
  • "The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions."

Firefox

Firefox 142's Link Previews Have a New Option: AI-Generated Summaries (theregister.com)

"Good news, everyone! The new version of Mozilla's browser now makes even more extensive use of AI," writes the Register, "providing summaries of linked content and offering developers the ability to add LLM support to extensions." Firefox 142 brings some visible shininess, but due to the combination of regional restrictions and Mozilla's progressive rollout system, not everybody can see all the features just yet... Not geofenced but subject to phased rollout are link previews, for various native-English-speaking regions. Hover over, long-press, or right-click a link and pick Preview Link, and a summary should appear. Mozilla's summary says: "Previews can optionally include AI-generated key points, which are processed on your device to protect your privacy."
"Link Previews is gradually rolling out to ensure performance and quality," the Firefox release notes say, "and is now available in en-US, en-CA, en-GB, en-AU for users with more than 3 GB of available RAM." (The notes also add a welcome for "the developers who contributed their first code change to Firefox in this release, 20 of whom were brand new volunteers!")

The Register notes that Firefox 142 also gives developers the ability to add LLM support to extensions using wllama, a Wasm binding interfacing with llama.cpp, which lets you run Meta's Llama LLM and other models, locally or in the cloud.
Cloud

Meta Signs $10 Billion Cloud Deal With Google (reuters.com)

Google has signed a six-year cloud computing deal with Meta worth over $10 billion, making Meta the second major AI partner for Google Cloud after a recent agreement with OpenAI. The deal will see Meta rely on Google Cloud's infrastructure to support its massive AI data center buildout, as the company ramps up capital spending into the tens of billions. The Information (paywalled) first reported the deal.
AI

Amazon Cloud Chief Says Replacing Junior Staff With AI is 'Dumbest' Idea (yahoo.com)

Matt Garman, Amazon's cloud boss, has a warning for business leaders rushing to swap workers for AI: Don't ditch your junior employees. From a report: The Amazon Web Services CEO said on an episode of the "Matthew Berman" podcast published Tuesday that replacing entry-level staff with AI tools is "one of the dumbest things I've ever heard."

"They're probably the least expensive employees you have. They're the most leaned into your AI tools," he said. "How's that going to work when you go like 10 years in the future and you have no one that has built up or learned anything?" Garman said companies should keep hiring graduates and teaching them how to build software, break down problems, and adopt best practices.

He also said the most valuable skills in an AI-driven economy aren't tied to any one college degree. "If you spend all of your time learning one specific thing and you're like, 'That's the thing I'm going to be expert at for the next 30 years,' I can promise you that's not going to be valuable 30 years from now," he said.

AI

Gates Funds $1 Million AI Alzheimer's Prize (ft.com)

Bill Gates is funding a $1 million competition to spur the use of AI to find innovative treatments for Alzheimer's disease, the latest effort to deploy the promising technology to find cures for humanity's toughest illnesses. From a report: The Alzheimer's Insights AI prize will be awarded to the team that comes up with the most original way to program AI-powered agents that are "capable of independent planning, reasoning, and action to accelerate breakthrough discoveries from existing Alzheimer's data."

The winning tool will be released for free on the Alzheimer's Disease Data Initiative's cloud "workbench" to be used by scientists globally, the organisation said on Tuesday. The prize is being financed by Gates Ventures, the family office of the billionaire philanthropist and Microsoft co-founder.

Security

Male-Oriented App 'TeaOnHer' Also Had Security Flaws That Could Leak Men's Driver's License Photos (techcrunch.com)

The women-only dating-advice app Tea "has been hit with 10 potential class action lawsuits in federal and state court," NBC News reported last week, "after a data breach led to the leak of thousands of selfies, ID photos and private conversations online." The suits could result in Tea having to pay tens of millions of dollars in damages to the plaintiffs, which could be catastrophic for the company, an expert told NBC News... One of the suits lists the right-wing online discussion board 4chan and the social platform X as defendants, alleging that they allowed bad actors to spread users' personal information.
Meanwhile, a new competing app for men called "TeaOnHer" has already launched, and TechCrunch reports it was also found to have enormous security flaws that "exposed its users' personal information, including photos of their driver's licenses and other government-issued identity documents..." [W]hen we looked at TeaOnHer's public internet records, they contained no meaningful information other than a single subdomain, appserver.teaonher.com. When we opened this page in our browser, what loaded was the landing page for TeaOnHer's API (for the curious, we uploaded a copy here)... It was on this landing page that we found the exposed email address and plaintext password (which wasn't that far off from "password") for [TeaOnHer developer Xavier] Lampkin's account to access the TeaOnHer "admin panel"... This API landing page included an endpoint called /docs, which contained the API's auto-generated documentation (powered by a product called Swagger UI) that contained the full list of commands that can be performed on the API [including administrator commands to return user data]...

While it's not uncommon for developers to publish their API documentation, the problem here was that some API requests could be made without any authentication — no passwords or credentials were needed...
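The class of flaw TechCrunch describes — endpoints that return records to callers who present no credentials — comes down to the absence of a check like the one sketched below. All names and data here are hypothetical illustrations, not TeaOnHer's actual code:

```python
import hmac
import secrets

# Hypothetical in-memory stand-ins for the app's real user store.
USERS = {"u1": {"screen_name": "anon42", "id_photo": "private/u1.jpg"}}
TOKENS = {"u1": secrets.token_hex(16)}  # per-user token issued at login

def get_user_record(user_id: str, token: str) -> dict:
    """Return a profile only to a caller holding that user's token.

    The reported bug was the absence of any such gate: the API handed
    back records, including links to ID photos, without authentication.
    """
    expected = TOKENS.get(user_id)
    if expected is None or not hmac.compare_digest(token, expected):
        raise PermissionError("authentication required")
    return USERS[user_id]
```

`hmac.compare_digest` avoids leaking token contents through timing differences, but the larger point is simpler: every record-returning endpoint needs some authentication gate before it touches user data.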

The records returned from TeaOnHer's server contained users' unique identifiers within the app (essentially a string of random letters and numbers), their public profile screen name, and self-reported age and location, along with their private email address. The records also included web address links containing photos of the users' driver's licenses and corresponding selfies. Worse, these photos of driver's licenses, government-issued IDs, and selfies were stored in an Amazon-hosted S3 cloud server set as publicly accessible to anyone with their web addresses. This public setting lets anyone with a link to someone's identity documents open the files from anywhere with no restrictions...

The bugs were so easy to find that it would be sheer luck if nobody malicious found them before we did. We asked, but Lampkin would not say if he has the technical ability, such as logs, to determine if anyone had used (or misused) the API at any time to gain access to users' verification documents, such as by scraping web addresses from the API. In the days since our report to Lampkin, the API landing page has been taken down, along with its documentation page, and it now displays only the state of the server that the TeaOnHer API is running on as "healthy."

The flaws were discovered while TeaOnHer was the #2 free app in the Apple App Store, the article points out. And while these flaws "appear to be resolved," the article notes a larger issue: "Shoddy coding and security flaws highlight the ongoing privacy risks inherent in requiring users to submit sensitive information to use apps and websites."

And TeaOnHer also had another authentication issue. A female reporter at Cosmopolitan also noted Friday that TeaOnHer "lets you browse through profiles before your verifications are complete. So literally anyone (like myself) can read reviews..."
AI

AI Is Reshaping Hacking. No One Agrees How Fast (axios.com)

"Several cybersecurity companies debuted advancements in AI agents at the Black Hat conference last week," reports Axios, "signaling that cyber defenders could soon have the tools to catch up to adversarial hackers."

- Microsoft shared details about a prototype for a new agent that can automatically detect malware — although it's able to detect only 24% of malicious files as of now.

- Trend Micro released new AI-driven "digital twin" capabilities that let companies simulate real-world cyber threats in a safe environment walled off from their actual systems.

- Several companies and research teams also publicly released open-source tools that can automatically identify and patch vulnerabilities as part of the government-backed AI Cyber Challenge.

Yes, but: Threat actors are now using those AI-enabled tools to speed up reconnaissance and dream up brand-new attack vectors for targeting each company, John Watters, CEO of iCounter and a former Mandiant executive, told Axios.

The article notes "two competing narratives about how AI is transforming the threat landscape." One says defenders still have the upper hand. Cybercriminals lack the money and computing resources to build out AI-powered tools, and large language models have clear limitations in their ability to carry out offensive strikes. This leaves defenders with time to tap AI's potential for themselves. [In a DEF CON presentation a member of Anthropic's red team said its Claude AI model will "soon" be able to perform at the level of a senior security researcher, the article notes later]

Then there's the darker view. Cybercriminals are already leaning on open-source LLMs to build tools that can scan internet-connected devices to see if they have vulnerabilities, discover zero-day bugs, and write malware. They're only going to get better, and quickly...

Right now, models aren't the best at making human-like judgments, such as recognizing when legitimate tools are being abused for malicious purposes. And running a series of AI agents will require cybercriminals and nation-states to have enough resources to pay the cloud bills they rack up, Michael Sikorski, CTO of Palo Alto Networks' Unit 42 threat research team, told Axios. But LLMs are improving rapidly. Sikorski predicts that malicious hackers will use a victim organization's own AI agents to launch an attack after breaking into their infrastructure.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com)

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.
Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
AI

Foxconn Now Making More From Servers than iPhones (theregister.com)

An anonymous reader shares a report: Manufacturer to the stars Foxconn is building so many AI servers that they're now bringing in more cash than consumer electronics -- even counting the colossal quantity of iPhones it creates for Apple.

The Taiwanese company revealed the shift in its Thursday announcement of Q2 results, which saw revenue grow 16% to NT$1.79 trillion ($59.73 billion) and operating profit rise 27% to NT$56.6 billion ($1.9 billion). CEO Kathy Yang told investors the company's Cloud and Networking Products division delivered 41% of total revenue, up nine percent compared to Q2 2024, and surpassing the company's Smart Consumer Electronics unit for the first time. The latter business includes Foxconn's work for Apple.

Windows

Microsoft Says Voice Will Emerge as Primary Input for Next Windows (youtube.com)

The next version of Windows will become "more ambient, pervasive, and multi-modal" as AI transforms how users interact with computers, Microsoft's Windows chief Pavan Davuluri said in a company video. Davuluri, a Corporate Vice President, said that voice will emerge as a primary input method alongside keyboard and mouse, with the operating system gaining context awareness to understand screen content and user intent through natural language.

Windows interfaces, he said, will appear fundamentally different within five years as the platform becomes increasingly agentic. The transformation will rely on both local processing power and cloud computing capabilities to deliver seamless experiences where users can speak to their computers while simultaneously typing or inking.
Science

Physicists Create Quantum Radar That Could Image Buried Objects (technologyreview.com)

An anonymous reader quotes a report from MIT Technology Review: Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices. It's still a prototype, but its intended use is to image buried objects in situations such as constructing underground utilities, drilling wells for natural gas, and excavating archaeological sites. [...] The glass cell that serves as the radar's quantum component is full of cesium atoms kept at room temperature. The researchers use lasers to get each individual cesium atom to swell to nearly the size of a bacterium, about 10,000 times bigger than the usual size. Atoms in this bloated condition are called Rydberg atoms.

When incoming radio waves hit Rydberg atoms, they disturb the distribution of electrons around their nuclei. Researchers can detect the disturbance by shining lasers on the atoms, causing them to emit light; when the atoms are interacting with a radio wave, the color of their emitted light changes. Monitoring the color of this light thus makes it possible to use the atoms as a radio receiver. Rydberg atoms are sensitive to a wide range of radio frequencies without needing to change the physical setup... This means a single compact radar device could potentially work at the multiple frequency bands required for different applications.

[Matthew Simons, a physicist at the National Institute of Standards and Technology (NIST), who was a member of the research team] tested the radar by placing it in a specially designed room with foam spikes on the floor, ceiling, and walls like stalactites and stalagmites. The spikes absorb, rather than reflect, nearly all the radio waves that hit them. This simulates the effect of a large open space, allowing the group to test the radar's imaging capability without unwanted reflections off walls. The researchers placed a radio wave transmitter in the room, along with their Rydberg atom receiver, which was hooked up to an optical table outside the room. They aimed radio waves at a copper plate about the size of a sheet of paper, some pipes, and a steel rod in the room, each placed up to five meters away. The radar allowed them to locate the objects to within 4.7 centimeters. The team posted a paper on the research to the arXiv preprint server in late June.

AI

The Dead Need Right To Delete Their Data So They Can't Be AI-ified, Lawyer Says

Legal scholar Victoria Haneman argues that U.S. law should grant estates a time-limited right to delete a deceased person's data so they can't be recreated by AI without their consent. "Digital resurrection by or through AI requires the personal data of the deceased, and the amount of data that we are storing online is increasing exponentially with each passing year," writes Haneman in an article published earlier this year in the Boston College Law Review. "It has been said that data is the new uranium, extraordinarily valuable and potentially dangerous. A right to delete will provide the decedent with a time-limited right for deletion of personal data." The Register reports: A living person may have some say on the matter through the control of personal digital documents and correspondence. But a dead person can't object, and US law doesn't offer the dead much data protection in terms of privacy law, property law, intellectual property law, or criminal law. The Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA), a law developed to help fiduciaries deal with digital files of the dead or incapacitated, can come into play. But Haneman points out that most people die intestate (without a will), leaving matters up to tech platforms. Facebook's response to dead users is to allow anyone to request the memorialization of an account, which keeps posts online. As for RUFADAA, it does little to address digital resurrection, says Haneman.

The right to publicity, which provides a private right of action against unauthorized commercial use of a person's name, image, or likeness, covers the dead in about 25 states, according to Haneman. But the monetization of publicity rights has proven to be problematic. Haneman says that there are some states where it's theoretically possible to be prosecuted for libeling or defaming the deceased, such as Idaho, Nevada, and Oklahoma, but adds that such prosecutions have declined because they tread upon the constitutional right to free expression. [...] A recent California law, the Delete Act, which took effect last year, is the first to offer a way for the living to demand the deletion of personal data from data brokers in one step. But according to Haneman, it's unclear whether the text of the law will be extended to cover the dead -- a possibility think tank Aspen Tech Policy Hub supports [PDF].

Haneman argues that a data deletion law for the dead would be grounded in laws governing human remains, where corpses receive protection against abuse despite being neither a person nor property. "The personal representative of the decedent has the right to destroy all physical letters and photographs saved by the decedent; merely storing personal information in the cloud should not grant societal archival rights," she argues. "A limited right of deletion within a twelve-month window balances the interests of society against the rights of the deceased."
The Military

How 12 'Enola Gay' Crew Members Remember Dropping the Atomic Bomb (mentalfloss.com)

Last week saw the 80th anniversary of a turning point in World War II: the day America dropped an atomic bomb on Hiroshima.

"Twelve men were on that flight..." remembers the online magazine Mental Floss, adding "Almost all had something to say after the war." The group was segregated from the rest of the military and trained in secret. Even those in the group only knew as much as they needed to know in order to perform their duties. The group deployed to Tinian in 1945 with 15 B-29 bombers, flight crews, ground crews, and other personnel, a total of about 1770 men. The mission to drop the atomic bomb on Hiroshima, Japan (special mission 13) involved seven planes, but the one we remember was the Enola Gay.

Air Force captain Theodore "Dutch" Van Kirk did not know the destructive force of the nuclear bomb before Hiroshima. He was 24 years old at the time, a veteran of 58 missions in North Africa. Paul Tibbets told him this mission would shorten or end the war, but Van Kirk had heard that line before. Hiroshima made him a believer. Van Kirk felt the bombing of Hiroshima was worth the price in that it ended the war before the invasion of Japan, which promised to be devastating to both sides. "I honestly believe the use of the atomic bomb saved lives in the long run. There were a lot of lives saved. Most of the lives saved were Japanese."

In 2005, Van Kirk came as close as he ever got to regret. "I pray no man will have to witness that sight again. Such a terrible waste, such a loss of life..."

Many of the other crewmembers also felt the bomb ultimately saved lives.

The Washington Post has also published a new oral history of the flight after it took off from Tinian Island. The oral history was assembled for a new book published this week titled The Devil Reached Toward the Sky: An Oral History of the Making and Unleashing of the Atomic Bomb. Col. Paul W. Tibbets, lead pilot of the Enola Gay: We were only eight minutes off the ground when Capt. William S. "Deak" Parsons and Lt. Morris R. Jeppson lowered themselves into the bomb bay to insert a slug of uranium and the conventional explosive charge into the core of the strange-looking weapon. I wondered why we were calling it "Little Boy." Little Boy was 28 inches in diameter and 12 feet long. Its weight was a little more than 9,000 pounds. With its coat of dull gunmetal paint, it was an ugly monster...

Lt. Morris R. Jeppson, crew member of the Enola Gay: Parsons was second-in-command of the military in the Manhattan Project. The Little Boy weapon was Parsons's design. He was greatly concerned that B-29s loaded with conventional bombs were crashing at the ends of runways on Tinian during takeoff and that such an event could cause the U-235 projectile in the gun of Little Boy to fly down the barrel and into the U-235 target. This could have caused a low-level nuclear explosion on Tinian...

Jeppson: On his own, Parsons decided that he would go on the Hiroshima mission and that he would load the gun after the Enola Gay was well away from Tinian.

Tibbets: That way, if we crashed, we would lose only the airplane and crew, himself included... Jeppson held the flashlight while Parsons struggled with the mechanism of the bomb, inserting the explosive charge that would send one block of uranium flying into the other to set off the instant chain reaction that would create the atomic explosion.

The navigator on one of the other six planes on the mission recalled that, watching the mushroom cloud, "There was almost complete silence on the flight deck. It was evident the city of Hiroshima was destroyed."

And the Enola Gay's copilot later remembered thinking: "My God, what have we done?"
Programming

'Hour of Code' Announces It's Now Evolving Into 'Hour of AI' (hourofcode.com) 35

Last month Microsoft pledged $4 billion (in cash and AI/cloud technology) to "advance" AI education in K-12 schools, community and technical colleges, and nonprofits (according to a blog post by Microsoft President Brad Smith). But in the launch event video, Smith also says it's time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI."

Long-time Slashdot reader theodp writes: This sets the stage for Code.org CEO Hadi Partovi's announcement that his tech-backed nonprofit's [annual educational event] Hour of Code is being renamed to the Hour of AI... Explaining the pivot, Partovi says: "Computer science for the last 50 years has had a focal point around coding that's been — sort of like you learn computer science so that you create code. There's other things you learn, like data science and algorithms and cybersecurity, but the focal point has been coding.

"And we're now in a world where the focal point of computer science is shifting to AI... We all know that AI can write much of the code. You don't need to worry about where did the semicolons go, or did I close the parentheses or whatnot. The busy work of computer science is going to be done by the computer itself.

"The creativity, the thinking, the systems design, the engineering, the algorithm planning, the security concerns, privacy concerns, ethical concerns — those parts of computer science are going to be what remains with a focal point around AI. And what's going to be important is to make sure in education we give students the tools so they don't just become passive users of AI, but so that they learn how AI works."

Speaking to Microsoft's Smith, Partovi vows to redouble the nonprofit's policy work to "make this [AI literacy] a high school graduation requirement so that no student graduates school without at least a basic understanding of what's going to be part of the new liberal arts background [...] As you showed with your hat, we are renaming the Hour of Code to an Hour of AI."

Cloud

Amazon's Cloud Business Giving Federal Agencies Up To $1 Billion In Discounts (cnbc.com) 20

Amazon Web Services has struck a deal with the U.S. government to provide up to $1 billion in cloud service discounts through 2028. CNBC reports: The agreement is expected to speed up migration to the cloud, as well as adoption of artificial intelligence tools, the General Services Administration said. "AWS's partnership with GSA demonstrates a shared public-private commitment to enhancing America's AI leadership," the agency said in a release.

Amazon's cloud boss, Matt Garman, hailed the agreement as a "significant milestone in the large-scale digital transformation of government services." The discounts aggregated across federal agencies include credits to use AWS' cloud infrastructure, modernization programs and training services, as well as incentives for "direct partnership."
Further reading: OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year
Businesses

The Great Indian IT Squeeze 25

An anonymous reader shares a report: The Indian IT sector has operated for decades under the dominance of major firms TCS, Infosys, Wipro, and HCLT. The historical growth of these companies was tightly coupled with the U.S. economy through a strong "multiplier effect," where Indian IT export growth significantly outpaced US GDP growth. This reliable growth model is now under pressure.

The multiplier has weakened considerably, falling from a peak of 4.1x to a projected 1.6x. This is contributing to a prolonged slowdown in Indian IT exports. A primary factor in this slowdown is a clear shift in client spending priorities. While overall enterprise technology spending remains strong, clients are now allocating a larger portion of their budgets to core digital infrastructure, such as cloud and SaaS platforms, over traditional IT services.
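The "multiplier" here is simply the ratio of Indian IT export growth to US GDP growth, so the report's figures imply how much export growth a given US growth rate would translate into. A minimal sketch (the 2.5% US GDP growth figure is a hypothetical input, not from the report):

```python
def implied_export_growth(us_gdp_growth_pct: float, multiplier: float) -> float:
    """Indian IT export growth implied by the multiplier model:
    export growth = multiplier x US GDP growth."""
    return multiplier * us_gdp_growth_pct

# With a hypothetical 2.5% US GDP growth rate:
at_peak = implied_export_growth(2.5, 4.1)     # ~10.25% at the 4.1x peak
projected = implied_export_growth(2.5, 1.6)   # ~4.0% at the projected 1.6x
```

The same US growth rate thus implies roughly 2.5 times less export growth under the projected multiplier than at its peak, which is the squeeze the report describes.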

The firms are facing challenges on multiple fronts. Global corporations are increasingly establishing their own global capability centers in India, with projections indicating an accelerated pace of 120 new centers being added annually in fiscal years 2024 and 2025, up from some 40 six years ago. This insourcing trend diverts revenue from traditional IT vendors and creates direct competition for skilled technology talent.
