XBox (Games)

Microsoft Is No Longer Making New Games For the Xbox One (engadget.com) 10

Microsoft says it is no longer making games for the Xbox One but will continue to support ongoing previous-generation titles like Minecraft and Halo Infinite. Engadget reports: "We've moved on to gen 9," Xbox Game Studios head Matt Booty told Axios, referring to the Xbox Series X/S consoles. The company also makes its games for PC. This move had to happen at some point to avoid newer and more complex games being hamstrung by the hardware limitations of the decade-old Xbox One. Still, it'll be possible for those clinging onto an Xbox One to play Series X/S titles such as Starfield and Forza Motorsport through Xbox Cloud Gaming. "That's how we're going to maintain support," Booty said.

The move away from Xbox One will free Microsoft's teams from the shackles of the previous generation. However, some third-party developers have raised concerns that the Xbox Series S, which is less powerful than the Series X, is holding them back too. Booty conceded that making sure games run well on the Series S requires "more work." Still, he noted Microsoft's studios (particularly those working on their second games for this generation of consoles) are now able to better optimize their projects for the Series S.

Supercomputing

Intel To Start Shipping a Quantum Processor (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: Intel does a lot of things, but it's mostly noted for making and shipping a lot of processors, many of which have been named after bodies of water. So, saying that the company is set to start sending out a processor called Tunnel Falls would seem unsurprising if it weren't for some key details. Among them: The processor's functional units are qubits, and you shouldn't expect to be able to pick one up on Newegg. Ever. Tunnel Falls appears to be named after a waterfall near Intel's Oregon facility, where the company's quantum research team does much of its work. It's a 12-qubit chip, which places it well behind the qubit count of many of Intel's competitors -- all of which are making processors available via cloud services. But Jim Clarke, who heads Intel's quantum efforts, said these differences were due to the company's distinct approach to developing quantum computers.

Intel, in contrast, is attempting to build silicon-based qubits that can benefit from the developments that most of the rest of the company is working on. The company hopes to "ride the coattails of what the CMOS industry has been doing for years," Clarke said in a call with the press and analysts. The goal, according to Clarke, is to make sure the answer to "what do we have to change from our silicon chip in order to make it?" is "as little as possible." The qubits are based on quantum dots, structures that are smaller than the wavelength of an electron in the material. Quantum dots can be used to trap individual electrons, and the properties of the electron can then be addressed to store quantum information. Intel uses its fabrication expertise to craft the quantum dot and create all the neighboring features needed to set and read its state and perform manipulations.

However, Clarke said there are different ways of encoding a qubit in a quantum dot (Loss-DiVincenzo, singlet-triplet, and exchange-only, for those curious). This gets at another key difference with Intel's efforts: While most of its competitors are focused solely on fostering a software developer community, Intel is simultaneously trying to develop a community that will help it improve its hardware. (For software developers, the company also released a software developer kit.) To help get this community going, Intel will send Tunnel Falls processors out to a few research institutions: the Universities of Maryland, Rochester, and Wisconsin, along with Sandia National Laboratories, will be the first to receive the new chip, and the company is interested in signing up others. The hope is that researchers at these sites will help Intel characterize sources of error and determine which forms of qubits provide the best performance.
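For the curious, here is a rough, textbook-level sketch of what two of those encodings mean physically; this reflects generic spin-qubit conventions, not details of Intel's specific implementation. Loss-DiVincenzo stores a qubit in the spin of a single trapped electron, while the singlet-triplet scheme stores it in the joint state of two electrons in a double quantum dot (exchange-only extends the idea to three spins).

    % Generic spin-qubit encodings (illustrative; one common convention)
    % Loss-DiVincenzo: one electron spin in one quantum dot
    |0\rangle = |{\uparrow}\rangle, \qquad |1\rangle = |{\downarrow}\rangle
    % Singlet-triplet: two electrons shared between two dots
    |0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr), \qquad
    |1\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\bigr)
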
"Overall, Intel has made a daring choice for its quantum strategy," concludes Ars' John Timmer. "Electron-based qubits have been more difficult to work with than many other technologies because they tend to have shorter life spans before they decohere and lose the information they should be holding. Intel is counting on rapid iteration, a large manufacturing capacity, and a large community to help it figure out how to overcome this. But testing quantum computing chips and understanding why their qubits sometimes go wrong is not an easy process; it requires highly specialized refrigeration hardware that takes roughly a day to get the chips down to a temperature where they can be used."

"The company seems to be doing what it needs to overcome that bottleneck, but it's likely to need more than three universities to sign up if the strategy is going to work."
Transportation

Mercedes Is Adding ChatGPT To Its Infotainment System (techcrunch.com) 71

Mercedes is adding OpenAI's ChatGPT to its MBUX infotainment system. "U.S. owners of models that use MBUX will be able to opt into a beta program starting tomorrow, June 16, activating ChatGPT functionality," reports TechCrunch. "This will enable the highly versatile large language model to augment the car's conversation skills. You can join up simply by telling your car 'Hey Mercedes, I want to join the beta program.'" From the report: Mercedes describes the capabilities thusly: "Users will experience a voice assistant that not only accepts natural voice commands but can also conduct conversations. Soon, participants who ask the Voice Assistant for details about their destination, to suggest a new dinner recipe, or to answer a complex question, will receive a more comprehensive answer -- while keeping their hands on the wheel and eyes on the road."

If you're worried about privacy, you should be. Although Mercedes loudly expresses its concern over user data, it's clear that it retains and uses your conversations: "The voice command data collected is stored in the Mercedes-Benz Intelligent Cloud, where it is anonymized and analyzed. Mercedes-Benz developers will gain helpful insights into specific requests, enabling them to set precise priorities in the further development of voice control. Findings from the beta program will be used to further improve the intuitive voice assistant and to define the rollout strategy for large language models in more markets and languages."

Sony

Sony Starts Testing Cloud Streaming PS5 Games (theverge.com) 23

Sony says it has started testing the ability to stream PS5 games from the cloud. The PlayStation maker says it's testing cloud streaming for PS5 games and is planning to add this as a feature to its PlayStation Plus Premium subscription. From a report: "We're currently testing cloud streaming for supported PS5 games -- this includes PS5 titles from the PlayStation Plus Game Catalog and Game Trials, as well as supported digital PS5 titles that players own," says Nick Maguire, VP of global services, global sales, and business operations at Sony Interactive Entertainment. "When this feature launches, cloud game streaming for supported PS5 titles will be available for use directly on your PS5 console." A cloud feature for PS5 games would mean you'll no longer have to download games to your console to stream them to other devices. Sony currently supports streaming PS5 games to PCs, Macs, and iOS and Android devices, but you have to use your PS5 as the host to download and stream titles to your other devices.

AI

AWS is Considering AMD's New AI Chips (reuters.com) 12

Amazon Web Services, the world's largest cloud computing provider, is considering using new artificial intelligence chips from AMD, though it has not made a final decision, an AWS executive told Reuters. From the report: The remarks came during an AMD event where the chip company outlined its strategy for the AI market, which is dominated by rival Nvidia. In interviews with Reuters, AMD Chief Executive Lisa Su outlined an approach to winning over major cloud computing customers by offering a menu of all the pieces needed to build the kinds of systems to power services similar to ChatGPT, but letting customers pick and choose which they want, using industry standard connections.

While AWS has not made any public commitments to use AMD's new MI300 chips in its cloud services, Dave Brown, vice president of elastic compute cloud at Amazon, said AWS is considering them. "We're still working together on where exactly that will land between AWS and AMD, but it's something that our teams are working together on," Brown said. "That's where we've benefited from some of the work that they've done around the design that plugs into existing systems."

Government

Microsoft Is Bringing OpenAI's GPT-4 AI Model To US Government Agencies (bloomberg.com) 8

Microsoft will make it possible for users of its Azure Government cloud computing service, which include a variety of US agencies, to access artificial intelligence models from ChatGPT creator OpenAI. From a report: Microsoft, which is the largest investor in OpenAI and uses its technology to power its Bing chatbot, plans to announce Wednesday that Azure Government customers can now use two of OpenAI's large language models: The startup's latest and most powerful model, GPT-4, and an earlier one, GPT-3, via Microsoft's Azure OpenAI service.

The Redmond, Washington-based company plans Wednesday to release a blog post, viewed by Bloomberg, about the program, although it doesn't name specific US agencies expected to use the large language models at launch. The Defense Department, the Energy Department and NASA are among the federal government customers of Azure Government. The Defense Technical Information Center -- a part of the Defense Department that focuses on gathering and sharing military research -- will be experimenting with the OpenAI models through Microsoft's new offering, a DTIC official confirmed.

AMD

AMD Likely To Offer Details on AI Chip in Challenge To Nvidia (reuters.com) 18

Advanced Micro Devices on Tuesday is expected to reveal new details about an AI "superchip" that analysts believe will be a strong challenger to Nvidia, whose chips dominate the fast-growing artificial intelligence market. From a report: AMD Chief Executive Lisa Su will give a keynote address at an event in San Francisco on the company's strategy in the data center and AI markets. Analysts expect fresh details about a chip called the MI300, AMD's most advanced graphics processing unit, the category of chips that companies like OpenAI use to develop products such as ChatGPT. Nvidia dominates the AI computing market with 80% to 95% of market share, according to analysts.

Last month, Nvidia's market capitalization briefly touched $1 trillion after the company said it expected a jump in revenue after it secured new chip supplies to meet surging demand. Nvidia has few competitors working at a large scale. While Intel and several startups such as Cerebras Systems and SambaNova Systems have competing products, Nvidia's biggest sales threat so far is the internal chip efforts at Alphabet's Google and Amazon's cloud unit, both of which rent their custom chips to outside developers.

AI

Will Productivity Gains from AI-Generated Code Be Offset by the Need to Maintain and Review It? (zdnet.com) 95

ZDNet asks the million-dollar question: "Despite the potential for vast productivity gains from generative AI tools such as ChatGPT or GitHub Copilot, will technology professionals' jobs actually grow more complicated?" People can now pump out code on demand in an abundance of languages, from Java to Python, along with helpful recommendations. Already, 95% of developers in a recent survey from Sourcegraph report they use Copilot, ChatGPT, and other gen AI tools this way.

But auto-generating new code only addresses part of the problem in enterprises that already maintain unwieldy codebases, and require high levels of cohesion, accountability, and security.

For starters, security and quality assurance tasks associated with software jobs aren't going to go away anytime soon. "For programmers and software engineers, ChatGPT and other large language models help create code in almost any language," says Andy Thurai, analyst with Constellation Research, before talking about security concerns. "However, most of the code that is generated is security-vulnerable and might not pass enterprise-grade code. So, while AI can help accelerate coding, care should be taken to analyze the code, find vulnerabilities, and fix it, which would take away some of the productivity increase that AI vendors tout about."

Then there's code sprawl. An analogy to the rollout of generative AI in coding is the introduction of cloud computing, which seemed to simplify application acquisition when first rolled out, and now means a tangle of services to be managed. The relative ease of generating code via AI will contribute to an ever-expanding codebase — what the Sourcegraph survey authors refer to as "Big Code". A majority of the 500 developers in the survey are concerned about managing all this new code, along with code sprawl, and its contribution to technical debt. Even before generative AI, close to eight in 10 say their codebase grew five times over the last three years, and a similar number struggle with understanding existing code generated by others.

So, the productivity prospects for generative AI in programming are a mixed bag.

Power

Smoke Sends US Northeast Solar Power Plunging By 50% As Wildfires Rage In Canada (reuters.com) 90

Longtime Slashdot reader WindBourne writes: "A shroud of smoke has sent solar power generation in parts of the eastern US plummeting by more than 50% as wildfires rage in Canada," reports Bloomberg. "Solar farms powering New England were producing 56% less energy at times of peak demand compared with the week before, according to the region's grid operator. Electricity generated by solar across the territory serviced by PJM Interconnection LLC, which spans Illinois to North Carolina, was down about 25% from the previous week."

Not mentioned in the article is that wind-generation output has also dropped. ["Wind power also dropped to 5% of total generation so far this week versus a recent high of 12% during the windy week ended May 12," reports Reuters. "That forced power generators to boost the amount of electricity generated by gas to 45% this week, up from around 40% in recent weeks."]

If forest fires can cut PV output by 50%, what would happen in a real disaster, when a nation most needs its electricity -- especially as we convert from fossil fuels (stored energy) to electricity? This will hopefully have politicians thinking in terms of national security, as well as anthropogenic global warming, when it comes to western grids.

Cloud

AWS Teases Mysterious Mil-Spec 'Snowblade' Server (theregister.com) 27

Amazon Web Services has announced a new member of its "Snow" family of on-prem hardware -- but the specs of the machine appear not to be available to eyes outside the US military. From a report: AWS announced the "Snowblade" on Tuesday, revealing it's a "portable, compact 5U, half-rack width form-factor" that can offer up to 209 vCPUs running "AWS compute, storage, and other hybrid services in remote locations, including Denied, Disrupted, Intermittent, and Limited (DDIL) environments."

The boxes can run Amazon EC2, AWS IAM, AWS CloudTrail, AWS IoT Greengrass, AWS Deep Learning AMIs, Amazon SageMaker Neo, and AWS DataSync. The device meets the US military's MIL-STD-810H Ruggedization Standards, meaning it can handle extreme temperatures, vibrations, and shocks. The cloud colossus's brief description also lauds the Snowblade as "the densest compute device of the AWS Snow Family allowing Joint Warfighting Cloud Capability (JWCC) customers to run demanding workloads in space, weight, and power (SWaP) constrained edge locations." The AWS announcement links to more information on its Joint Warfighting Cloud Capability (JWCC) -- and there be dragons. Your correspondent's civilian-grade AWS account was unable to access JWCC resources.

Google

Google Cloud is Partnering With Mayo Clinic (cnbc.com) 11

Google's cloud business is expanding its use of new artificial intelligence technologies in health care, giving medical professionals at Mayo Clinic the ability to quickly find patient information using the types of tools powering the latest chatbots. From a report: On Wednesday, Google Cloud said Mayo Clinic is testing a new service called Enterprise Search on Generative AI App Builder, which was introduced Tuesday. The tool effectively lets clients create their own chatbots using Google's technology to scour mounds of disparate internal data. In health care, that means workers can interpret data such as a patient's medical history, imaging records, genomics or labs more quickly and with a simple query, even if the information is stored across different formats and locations. Mayo Clinic, one of the top hospital systems in the U.S. with dozens of locations, is an early adopter of the technology for Google, which is trying to bolster the use of generative AI in the medical system.

Mayo Clinic will test out different use cases for the search tool in the coming months, and Vish Anantraman, chief technology officer at Mayo Clinic, said it has already been "very fulfilling" for helping clinicians with administrative tasks that often contribute to burnout. For instance, if a physician needs to see information about a cohort of female patients aged 45 through 55, including their mammograms and medical charts, they can enter that query into the search tool instead of seeking out each element separately. Similarly, if a physician needs to know which clinical trials a patient may match, they can search for that, too.

AI

Healthcare Org With Over 100 Clinics Uses OpenAI's GPT-4 To Write Medical Records (theregister.com) 111

US healthcare chain Carbon Health has implemented an AI tool named Carby, powered by OpenAI's GPT-4 language model, to automatically generate medical records from conversations between physicians and patients. The Register reports: If a patient consents to having their meeting recorded and transcribed, the audio recording is passed to Amazon's AWS Transcribe Medical cloud service, which converts the speech to text. The transcript -- along with data from the patient's medical records, including recent test results -- is passed to an ML model that produces notes summarizing important information gathered in the consultation. A screenshot of an example medical chart in The Register's report shows what type of text the software, nicknamed Carby, generates. The hypothetical patient's information and vital measurements are included, as well as summaries of medical records and diagnoses.
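For a rough sense of how such a transcribe-then-summarize pipeline fits together, here is a minimal Python sketch built on Amazon Transcribe Medical (via boto3) and the 2023-era OpenAI chat API. The bucket names, job name, and prompt are illustrative assumptions; Carbon Health has not published its actual integration code.

    # Minimal sketch of a transcribe-then-summarize pipeline. Illustrative only:
    # bucket names, job name, and prompt are hypothetical, not Carbon Health's.
    import json
    import time

    import boto3
    import openai  # pre-1.0 OpenAI SDK; assumes OPENAI_API_KEY is set in the environment

    transcribe = boto3.client("transcribe", region_name="us-east-1")
    s3 = boto3.client("s3", region_name="us-east-1")

    # 1. Start a medical transcription job on a consented, recorded visit.
    transcribe.start_medical_transcription_job(
        MedicalTranscriptionJobName="visit-12345",
        LanguageCode="en-US",
        MediaFormat="wav",
        Media={"MediaFileUri": "s3://example-recordings/visit-12345.wav"},
        OutputBucketName="example-transcripts",
        OutputKey="visit-12345.json",
        Specialty="PRIMARYCARE",
        Type="CONVERSATION",
    )

    # 2. Poll until the job finishes, then read the transcript JSON from S3.
    while True:
        job = transcribe.get_medical_transcription_job(
            MedicalTranscriptionJobName="visit-12345"
        )["MedicalTranscriptionJob"]
        if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(10)
    if job["TranscriptionJobStatus"] == "FAILED":
        raise RuntimeError(job.get("FailureReason", "transcription failed"))

    obj = s3.get_object(Bucket="example-transcripts", Key="visit-12345.json")
    transcript = json.loads(obj["Body"].read())["results"]["transcripts"][0]["transcript"]

    # 3. Ask GPT-4 to draft a structured note for a clinician to review and edit.
    note = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this doctor-patient conversation as a SOAP note."},
            {"role": "user", "content": transcript},
        ],
    )["choices"][0]["message"]["content"]
    print(note)

In the workflow described by The Register, a draft like this is never auto-filed: it goes back into the EHR for the physician to accept or edit, which is where the 88 percent unedited-acceptance figure cited below applies.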

Carbon Health CEO Eren Bali said the software is directly integrated into the firm's electronic health records (EHR) system, and is powered by OpenAI's latest language model, GPT-4. Carbon Health said the tool produces consultation summaries in four minutes, compared to the 16 minutes consumed by a flesh-and-blood doctor working alone. Clinics can therefore see more patients [...] Generative AI models aren't perfect, and often produce errors. Physicians therefore need to verify the AI-generated text. Carbon Health claims 88 percent of the verbiage can be accepted without edits. Carbon Health said the model is already supporting over 130 clinics, where over 600 staff have access to the tool. A clinic testing the tool in San Francisco reportedly saw a 30 percent increase in the number of patients it could treat.

Data Storage

Why Millions of Usable Hard Drives Are Being Destroyed (bbc.com) 168

Millions of storage devices are being shredded each year, even though they could be reused. "You don't need an engineering degree to understand that's a bad thing," says Jonmichael Hands. From a report: He is the secretary and treasurer of the Circular Drive Initiative (CDI), a partnership of technology companies promoting the secure reuse of storage hardware. He also works at Chia Network, which provides blockchain technology. Chia Network could easily reuse storage devices that large data centres have decided they no longer need. In 2021, the company approached IT Asset Disposition (ITAD) firms, which dispose of old technology for businesses that no longer need it. The answer came back: "Sorry, we have to shred old drives."

"What do you mean, you destroy them?" says Mr Hands, relating the story. "Just erase the data, and then sell them! They said the customers wouldn't let them do that. One ITAD provider said they were shredding five million drives for a single customer." Storage devices are typically sold with a five-year warranty, and large data centres retire them when the warranty expires. Drives that store less sensitive data are spared, but the CDI estimates that 90% of hard drives are destroyed when they are removed. The reason? "The cloud service providers we spoke to said security, but what they actually meant was risk management," says Mr Hands. "They have a zero-risk policy. It can't be one in a million drives, one in 10 million drives, one in 100 million drives that leaks. It has to be zero."

Cloud

Sony Chief Warns Technical Problems Persist for Cloud Gaming (arstechnica.com) 29

Sony's chief executive has warned that cloud gaming is still technically "very tricky," playing down the risk to the console maker of the industry quickly converting to a technology on which its rival Microsoft has bet heavily. From a report: In an interview with the Financial Times, Kenichiro Yoshida said the PlayStation creator would still study "various options" in the future for streaming games over the Internet itself, adding it could utilize GT Sophy, its artificial intelligence agent, to enhance cloud gaming. "I think cloud itself is an amazing business model, but when it comes to games, the technical difficulties are high," said Yoshida, citing latency -- the lag that undermines the fast response times demanded by gamers -- as the biggest issue.

"So there will be challenges to cloud gaming, but we want to take on those challenges." Despite various attempts to remake the gaming industry around the cloud, many users have yet to switch from a console or high-end gaming PC to streaming games entirely over the Internet, fearing the lags that can be caused by slowing Internet connectivity and server speeds. Publishers have also not been fully supportive.

Google

Google Trials Passwordless Login Across Workspace and Cloud Accounts (theverge.com) 48

Google has taken a significant step toward a passwordless future with the start of an open beta for passkeys on Workspace accounts. From a report: Starting today, June 5th, over 9 million organizations can allow their users to sign in to a Google Workspace or Google Cloud account using a passkey instead of their usual passwords.

Passkeys are a new form of passwordless sign-in tech developed by the FIDO Alliance, whose members include industry giants like Google, Apple, and Microsoft. Passkeys allow users to log in to websites and apps using their device's own authentication, such as a laptop with Windows Hello, an Android phone with a fingerprint sensor, or an iPhone with Face ID, instead of traditional passwords and other sign-in systems like 2FA or SMS verification. Because passkeys are based on public key cryptographic protocols, there's no fixed "sequence" that can be stolen or leaked in phishing attacks.
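To make the phishing-resistance point concrete, here is a toy Python sketch of the underlying public-key idea, using the cryptography package. It is not the actual WebAuthn/FIDO2 protocol, which layers challenge formats, origin binding, and attestation on top; the variable names are purely illustrative.

    # Toy illustration of the public-key idea behind passkeys (not real WebAuthn):
    # the server stores only a public key, so there is no reusable secret to phish,
    # and each sign-in signs a fresh random challenge.
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the device generates a key pair and shares only the public half.
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    server_stored_public_key = device_private_key.public_key()

    # Sign-in: the server issues a one-time challenge, the device signs it locally
    # (after a Face ID / fingerprint / PIN unlock), and the server verifies.
    challenge = os.urandom(32)
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("signature valid: user authenticated")
    except InvalidSignature:
        print("signature invalid: reject sign-in")

Because the server holds only the public key and every sign-in covers a fresh challenge, a phished credential or breached server database gives an attacker nothing to replay, which is the property the passkey standard builds on.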

Cloud

Amazon's AWS is 'Retiring' Its Open-Source-and-on-GitHub Documentation 21

Long-time Slashdot reader theodp writes: On the AWS News Blog, AWS Chief Evangelist Jeff Barr has published a kind of obituary for AWS Documentation on GitHub (RIP, 2018-2023). From the blog post:

"About five years ago I announced that AWS Documentation is Now Open Source and on GitHub. After a prolonged period of experimentation we will archive most of the repos starting the week of June 5th, and will devote all of our resources to directly improving the AWS documentation and website."

"The primary source for most of the AWS documentation is on internal systems that we had to manually sync with the GitHub repos. Despite the best efforts of our documentation team, keeping the public repos in sync with our internal ones has proven to be very difficult and time consuming, with several manual steps and some parallel editing. With 262 separate repos and thousands of feature launches every year, the overhead was very high and actually consumed precious time that could have been put to use in ways that more directly improved the quality of the documentation."

"Our intent was to increase value to our customers through openness and collaboration, but we learned through customer feedback that this wasn't necessarily the case. After carefully considering many options we decided to retire the repos and to invest all of our resources in making the content better."
Data Storage

Dropbox-like Cloud Storage Service Shadow Drive Lowers Its Price (techcrunch.com) 22

Shadow has decided to cut the price of its cloud storage service Shadow Drive. Users can now get 2TB of storage for $5.30 per month instead of $9.60 per month. From a report: As for the free tier, things aren't changing. Users who sign up get 20GB of online storage for free. Shadow is also the company behind Shadow PC, a cloud computing service that lets you rent a virtual instance of a Windows PC in a data center near you. It works particularly well to play demanding PC games on any device, such as a cheap laptop, a connected TV or a smartphone. Coming back to Shadow Drive, as the name suggests, Shadow Drive works a lot like Google Drive, OneDrive, iCloud Drive or Dropbox. Users can upload and download files from a web browser. They are stored in a data center based in France so that you can access them later.

Science

New Device Generates Electricity From Thin Air (smithsonianmag.com) 54

An anonymous reader quotes a report from Smithsonian: With a new technique, scientists have essentially figured out how to create power from thin air. Their tiny device generates electricity from the air's humidity, and it can be made from nearly any substance, scientists reported this month in the journal Advanced Materials. The invention involves two electrodes and a thin layer of material, which must be covered with tiny holes less than 100 nanometers in diameter -- thinner than one-thousandth the width of a human hair, according to a statement from the University of Massachusetts, Amherst, where the researchers work.

As water molecules pass through the device, from an upper chamber to a lower chamber, they knock against the tiny holes' edges, creating an electric charge imbalance between the layered chambers. In effect, it makes the device run like a battery. The whole process resembles the way clouds make electricity, which we see in the form of lightning bolts, according to Inverse's Molly Glick. [...] Currently, the fingernail-sized device can only create continuous electricity equivalent to a fraction of a volt, writes Vice's Becky Ferreira. But the researchers hope it can someday become a practical, sustainable source of power.

Scientists have previously tried harnessing humidity to generate electricity, but their attempts have often only worked for a short amount of time or relied on expensive materials, per Vice. In 2020, Jun Yao, the new study's senior author at UMass Amherst, and other researchers found a way to continuously collect electricity from humidity using a material grown from bacteria. But now, the new paper shows that such a specific material isn't necessary -- just about any material works, such as wood or silicon, as long as it can be punctured with the ultra-small holes. This finding makes the device much more practical; it "turns an initially narrow window to a wide-open door for broad potential," Yao tells Vice.

AI

Asus Will Offer Local ChatGPT-Style AI Servers For Office Use (arstechnica.com) 9

An anonymous reader quotes a report from Ars Technica: Taiwan's Asustek Computer (known popularly as "Asus") plans to introduce a rentable AI server for businesses that will operate on-site to address security concerns and data-control issues associated with cloud-based AI systems, Bloomberg reports. The service, called AFS Appliance, will feature Nvidia chips and run an AI language model called "Formosa" that Asus claims is equivalent to OpenAI's GPT-3.5.

Asus hopes to offer the service at about $6,000 per month, according to Bloomberg's interview with Asus Cloud and TWS President Peter Wu. The highest-powered server, based on an Nvidia DGX AI platform, will cost about $10,000 a month. The servers will be powered by Nvidia's A100 GPUs and will be owned and operated by Asus. The company hopes to provide the service to 30 to 50 enterprise customers in Taiwan at first, then expand internationally later in 2023. "Nvidia are a partner with us to accelerate the enterprise adoption of this technology," Wu told Bloomberg. "Before ChatGPT, the enterprises were not aware of why they need so much computing power."

According to Asus, the "Formosa Foundation Model" that will run on the AFS Appliance is a large language model that generates text with traditional Chinese semantics. It was developed by TWS, a subsidiary of Asustek. Like ChatGPT, it will offer AI-powered text generation and coding capabilities. Despite the growing demand for AI-training chips, Bloomberg reports that companies like Asus hope to secure a share of the market by offering "holistic AI systems" that offer a complete AI solution in a service package. Asus claims that its existing partnership with Nvidia will ensure that there's no supply shortage of Nvidia's chips as the AFS Appliance service rolls out.

Microsoft

Microsoft Signs Deal for AI Computing Power With Nvidia-backed CoreWeave That Could Be Worth Billions (cnbc.com) 3

Microsoft's massive investment in OpenAI has put the company at the center of the artificial intelligence boom. But it's not the only place where the software giant is opening its wallet to meet the surging demand for AI-powered services. From a report: CNBC has learned from people with knowledge of the matter that Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud-computing infrastructure from startup CoreWeave, which announced on Wednesday that it raised $200 million. That financing comes just over a month after the company attained a valuation of $2 billion. CoreWeave sells simplified access to Nvidia's graphics processing units, or GPUs, which are considered the best available on the market for running AI models.

Microsoft signed the CoreWeave deal earlier this year in order to ensure that OpenAI, which operates the viral ChatGPT chatbot, will have adequate computing power going forward, said one of the people, who asked not to be named due to confidentiality. OpenAI relies on Microsoft's Azure cloud infrastructure for its hefty compute needs.
