Microsoft

Microsoft Engineers' Pay Data Leaked, Reveals Compensation Details (businessinsider.com) 73

Software engineers at Microsoft earn an average total compensation ranging from $148,436 to $1,230,000 annually, depending on their level, according to a leaked spreadsheet viewed by Business Insider. The data, voluntarily shared by hundreds of U.S.-based Microsoft employees, includes information on salaries, performance-based raises, promotions, and bonuses. The highest-paid engineers work in Microsoft's newly formed AI organization, with average total compensation of $377,611. Engineers in Cloud and AI, Azure, and Experiences and Devices units earn between $242,723 and $255,126 on average.
IT

110K Domains Targeted in 'Sophisticated' AWS Cloud Extortion Campaign (theregister.com) 33

A sophisticated extortion campaign has targeted 110,000 domains by exploiting misconfigured AWS environment files, security firm Cyble reports. The attackers scanned for exposed .env files containing cloud access keys and other sensitive data. Organizations that failed to secure their AWS environments found their S3-stored data replaced with ransom notes.

The attackers used a series of API calls to verify data, enumerate IAM users, and locate S3 buckets. Though initial access lacked admin privileges, they created new IAM roles to escalate permissions. Cyble researchers noted the attackers' use of AWS Lambda functions for automated scanning operations.
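The calls Cyble describes map onto standard AWS SDK operations. Below is a minimal, hedged sketch of the same reconnaissance steps, useful for auditing what a leaked key pair would actually permit against your own account; the profile name is a placeholder, and this illustrates the API surface involved rather than reconstructing the attackers' tooling.

```python
# Illustration only: the read-oriented AWS API calls described above, run
# against your own credentials to see what a leaked .env key would expose.
# Assumes boto3 is installed and a named credentials profile is configured.
import boto3

session = boto3.Session(profile_name="audit")  # hypothetical profile name

# 1. Verify the key works and identify the account it belongs to.
identity = session.client("sts").get_caller_identity()
print("Account:", identity["Account"], "ARN:", identity["Arn"])

# 2. Enumerate IAM users (fails harmlessly if the key lacks iam:ListUsers).
try:
    for user in session.client("iam").list_users()["Users"]:
        print("IAM user:", user["UserName"])
except Exception as exc:
    print("IAM enumeration denied:", exc)

# 3. Locate S3 buckets reachable with this key.
for bucket in session.client("s3").list_buckets()["Buckets"]:
    print("Bucket:", bucket["Name"])
```

If calls like these succeed using credentials found in a committed .env file, those keys should be treated as compromised and rotated.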
Music

Sonos CEO Says the Old App Can't Be Rereleased (theverge.com) 106

The old Sonos app won't be making a return to replace the buggy new version. According to Sonos CEO Patrick Spence, rereleasing the old app would make things worse now that updated software has already been sent out to the company's speakers and cloud infrastructure. The Verge reports: In a Reddit AMA response posted Tuesday, Spence says that he was hopeful "until very recently" that the company could rerelease the app, confirming a report from The Verge that the company was considering doing so. [...] Since the new app was released on May 7th, Spence has issued a formal apology and announced in August that the company would be delaying the launch of two products "until our app experience meets the level of quality that we, our customers, and our partners expect from Sonos." "The trick of course is that Sonos is not just the mobile app, but software that runs on your speakers and in the cloud too," writes Spence in the Reddit AMA. "In the months since the new mobile app launched we've been updating the software that runs on our speakers and in the cloud to the point where today S2 is less reliable & less stable than what you remember. After doing extensive testing we've reluctantly concluded that re-releasing S2 would make the problems worse, not better. I'm sure this is disappointing. It was disappointing to me."
Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com) 8

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Intel

Intel Discontinues High-Speed, Open-Source H.265/HEVC Encoder Project (phoronix.com) 37

Phoronix's Michael Larabel reports: As part of Intel's Scalable Video Technology (SVT) initiative, the company had been developing SVT-HEVC as a BSD-licensed, high-performance H.265/HEVC video encoder optimized for Xeon Scalable and Xeon D processors. But Intel has changed course and the project has been officially discontinued. [...] The SVT-AV1 project was handed off to the Alliance for Open Media (AOMedia) a while ago, with one of its lead maintainers leaving Intel for Meta two years ago. SVT-AV1 continues to thrive outside Intel, but SVT-HEVC and SVT-VP9 remained Intel open-source projects; now, at least officially, SVT-HEVC has ended.

SVT-HEVC hadn't seen a new release since 2021 and there are already several great open-source H.265 encoders out there like x265 and Kvazaar. But as of a few weeks ago, SVT-HEVC upstream is now discontinued. The GitHub repository was put into a read-only state [with a discontinuation notice]. Meanwhile SVT-VP9 doesn't have any discontinuation notice at this time. The SVT-VP9 GitHub repository remains under Intel's Open Visual Cloud account although it hasn't seen any new commits in four months and the last tagged release was back in 2020.

Businesses

North America Added a Whole Silicon Valley's Worth of Data Center Inventory This Year (sherwood.news) 34

North America's eight primary data center markets added 515 megawatts (MW) of new supply in the first half of 2024 -- the equivalent of Silicon Valley's entire existing inventory -- according to a new report from real-estate services firm CBRE. From a report: All of Silicon Valley has 459 MW of data center supply, while those main markets have a total of 5,689 MW. That's up 10% from a year ago and about double what it was five years ago. Data center space under construction is up nearly 70% from a year ago and is currently at a record high. But the vast majority of that is already leased, and vacancy rates have shrunk to a record low of 2.8%. In other words, developers are building an insane amount of data center capacity, but it's still not enough to meet the growing demands of cloud computing and artificial intelligence providers.
Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

SecurityWeek brings news about CI/CD workflows that use GitHub Actions in their build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise," Palo Alto Networks warns. [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.
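The download-and-extract step Avital automated maps onto the public GitHub REST API. As a defensive counterpart, a hedged sketch like the one below can audit your own repository's recent artifacts for GitHub-token-shaped strings; the owner/repo values and the token-prefix pattern are assumptions for illustration, not part of Avital's tooling.

```python
# Defensive sketch: list a repository's recent workflow artifacts and scan
# their contents for GitHub-token-shaped strings, roughly mirroring the
# download-and-extract step described above. Owner, repo and the token
# pattern are illustrative assumptions.
import io
import os
import re
import zipfile

import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical repository
API = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/artifacts"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
TOKEN_RE = re.compile(rb"gh[pousr]_[A-Za-z0-9]{36,}")  # common GitHub token prefixes

artifacts = requests.get(API, headers=HEADERS).json().get("artifacts", [])
for artifact in artifacts[:20]:
    # The zip endpoint redirects to a short-lived download URL; requests follows it.
    archive = requests.get(f"{API}/{artifact['id']}/zip", headers=HEADERS)
    with zipfile.ZipFile(io.BytesIO(archive.content)) as zf:
        for name in zf.namelist():
            if TOKEN_RE.search(zf.read(name)):
                print(f"Possible token in artifact '{artifact['name']}' file '{name}'")
```

Run against your own repositories, a scan like this catches the same class of leak before someone else does.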

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.

The blog post also notes protection and mitigation features from Palo Alto Networks....
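The mitigation Avital describes, scanning the source directory and blocking the upload when a secret turns up, does not depend on any particular scanner. Here is a minimal sketch of that gate as a plain pre-upload check, rather than the @actions/artifact-based action from the post; the secret patterns are illustrative.

```python
# Minimal sketch of a pre-upload gate: scan the directory that is about to be
# published as a workflow artifact and fail the job if anything secret-shaped
# is found. The PoC described above wraps @actions/artifact; this shows only
# the scanning idea, with illustrative patterns.
import pathlib
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),                # GitHub tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key IDs
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private keys
]


def scan(artifact_dir: str) -> int:
    findings = 0
    for path in pathlib.Path(artifact_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                print(f"Possible secret in {path} (pattern: {pattern.pattern})")
                findings += 1
    return findings


if __name__ == "__main__":
    if scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        sys.exit(1)  # a non-zero exit fails the step, blocking the upload
```

Running a check like this as the step immediately before upload-artifact gives a workflow the same block-on-risk behavior the custom action aims for.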
Data Storage

Ask Slashdot: What Network-Attached Storage Setup Do You Use? 135

"I've been somewhat okay about backing up our home data," writes long-time Slashdot reader 93 Escort Wagon.

But they could use some good advice: We've got a couple separate disks available as local backup storage, and my own data also gets occasionally copied to encrypted storage at BackBlaze. My daughter has her own "cloud" backups, which seem to be a manual push every once in a while of random files/folders she thinks are important. Including our media library, between my stuff, my daughter's, and my wife's... we're probably talking in the neighborhood of 10 TB for everything at present. The whole setup is obviously cobbled together, and the process is very manual. Plus it's annoying since I'm handling Mac, Linux, and Windows backups completely differently (and sub-optimally). Also, unsurprisingly, the amount of data we possess does seem to be increasing with time.

I've been considering biting the bullet and buying an NAS [network-attached storage device], and redesigning the entire process — both local and remote. I'm familiar with Synology and DSM from work, and the DS1522+ looks appealing. I've also come across a lot of recommendations for QNAP's devices, though. I'm comfortable tackling this on my own, but I'd like to throw this out to the Slashdot community.

What NAS do you like for home use? And what disks did you put in it? What have your experiences been?

Long-time Slashdot reader AmiMoJo asks "Have you considered just building one?" while suggesting the cheapest option is a low-powered Chinese motherboard with a soldered-in CPU. And in the comments on the original submission, other Slashdot readers shared their examples:
  • destined2fail1990 used an AMD Threadripper to build their own NAS with 10Gbps network connectivity.
  • DesertNomad is using "an ancient D-Link" to connect two Synology DS220 DiskStations.
  • Darth Technoid attached six Seagate drives to two MacBooks. "Basically, I found a way to make my older Mac useful by simply leaving it on all the time, with the external drives attached."

But what's your suggestion? Share your own thoughts and experiences: which NAS would you recommend for home use, what disks would you put in it, and how has it worked out?

AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.
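For a concrete sense of what the SQL injection fixes mentioned above look like, here is the classic before/after shape: string concatenation replaced by a parameterized query. This is a generic example in Python's sqlite3, not actual Copilot Autofix output.

```python
# Generic before/after for the SQL injection class mentioned above,
# using Python's sqlite3. Not actual Copilot Autofix output.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")


def find_user_vulnerable(name: str):
    # Vulnerable: user input is concatenated into the SQL text, so a value
    # like "' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT email FROM users WHERE name = '{name}'").fetchall()


def find_user_fixed(name: str):
    # Fixed: a parameterized query keeps the input as data, never as SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()


print(find_user_vulnerable("' OR '1'='1"))  # returns every row
print(find_user_fixed("' OR '1'='1"))       # returns nothing
```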

AI

AI PCs Made Up 14% of Quarterly PC Shipments (reuters.com) 73

AI PCs accounted for 14% of all PCs shipped in the second quarter, with Apple leading the way, research firm Canalys said on Tuesday, as added AI capabilities help reinvigorate demand. From a report: PC providers and chipmakers have pinned high hopes on devices that can perform AI tasks directly on the system, bypassing the cloud, as the industry slowly emerges from its worst slump in years. These devices typically feature neural processing units dedicated to performing AI tasks.

Apple commands about 60% of the AI PC market, the research firm said in the report, pointing to its Mac portfolio incorporating M-series chips with a neural engine. Among Windows machines, AI PC shipments grew 127% sequentially in the quarter. Microsoft debuted its "Copilot+" AI PCs in May, built around Qualcomm's Snapdragon PC chips based on Arm Holdings' architecture.

Space

Milky Way May Escape Fated Collision With Andromeda Galaxy (science.org) 33

sciencehabit shares a report from Science.org: For years, astronomers thought it was the Milky Way's destiny to collide with its near neighbor the Andromeda galaxy a few billion years from now. But a new simulation finds a 50% chance the impending crunch will end up a near-miss, at least for the next 10 billion years. Astronomers have known since 1912 that Andromeda is heading toward our home Galaxy, moving pretty much straight at the Milky Way at a speed of 110 kilometers per second. Such galaxy mergers, which can be seen in progress elsewhere in the universe, are spectacularly messy affairs. Although most stars survive unscathed, the galaxies' spiral structures are obliterated, sending streams of stars spinning off into space. After billions of years, the merged galaxies typically settle into a single elliptical galaxy: a giant featureless blob of stars. A study from 2008 suggested a Milky Way-Andromeda merger was inevitable within the next 5 billion years, and that in the process the Sun and Earth would get gravitationally grabbed by Andromeda for a time before ending up in the distant outer suburbs of the resulting elliptical, which the researchers dub "Milkomeda."

In the new simulation, researchers made use of the most recent and best estimates of the motion and mass of the four largest galaxies in the Local Group. They then plugged those into simulations developed by the Institute for Computational Cosmology at Durham University. First, they ran the simulation including just the Milky Way and Andromeda and found that they merged in slightly less than half of the cases -- lower odds than other recent estimates. When they included the effect of the Triangulum galaxy, the Local Group's third largest, the merger probability increased to about two-thirds. But with the inclusion of the Large Magellanic Cloud, a satellite galaxy of the Milky Way that is the fourth largest in the Local Group, those chances dropped back down to a coin flip. And if the cosmic smashup does happen, it won't be for about 8 billion years. "As it stands, proclamations of the impending demise of our Galaxy appear greatly exaggerated," the researchers write. Meanwhile, if the accelerating expansion of the universe continues unabated, all other galaxies will disappear beyond our cosmic event horizon, leaving Milkomeda as the sole occupant of the visible universe.
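For scale, a back-of-envelope straight-line estimate (assuming the commonly cited Andromeda distance of about 2.5 million light-years, a figure not given in the article) puts a head-on arrival somewhat sooner than the roughly 8 billion years the simulations yield; the difference reflects the sideways motion and gravitational dynamics the new models account for.

```python
# Back-of-envelope: straight-line travel time for Andromeda at the article's
# 110 km/s closing speed. The ~2.5 million light-year distance is a commonly
# cited figure assumed here, not taken from the article.
LIGHT_YEAR_KM = 9.461e12
distance_km = 2.5e6 * LIGHT_YEAR_KM   # ~2.5 million light-years
speed_km_s = 110.0                    # closing speed quoted in the article
seconds_per_year = 3.156e7

years = distance_km / speed_km_s / seconds_per_year
print(f"Naive straight-line arrival: {years / 1e9:.1f} billion years")
# ~6.8 billion years; the simulated merger, when it happens at all, comes
# later (~8 billion years) because the approach is not purely head-on.
```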
The study is available as a preprint on arXiv.
Earth

Excess Memes and 'Reply All' Emails Are Bad For Climate, Researcher Warns (theguardian.com) 120

An anonymous reader quotes a report from The Guardian: When "I can has cheezburger?" became one of the first internet memes to blow our minds, it's unlikely that anyone worried about how much energy it would use up. But research has now found that the vast majority of data stored in the cloud is "dark data", meaning it is used once then never visited again. That means that all the memes and jokes and films that we love to share with friends and family -- from "All your base are belong to us", through Ryan Gosling saying "Hey Girl", to Tim Walz with a piglet -- are out there somewhere, sitting in a datacenter, using up energy. By 2030, the National Grid anticipates that datacenters will account for just under 6% of the UK's total electricity consumption, so tackling junk data is an important part of tackling the climate crisis.

Ian Hodgkinson, a professor of strategy at Loughborough University, has been studying the climate impact of dark data and how it can be reduced. "I really started a couple of years ago, it was about trying to understand the negative environmental impact that digital data might have," he said. "And at the top of it might be quite an easy question to answer, but it turns out actually, it's a whole lot more complex. But absolutely, data does have a negative environmental impact." He discovered that 68% of data used by companies is never used again, and estimates that personal data tells the same story. [...] One funny meme isn't going to destroy the planet, of course, but the millions stored, unused, in people's camera rolls does have an impact, he explained: "The one picture isn't going to make a drastic impact. But of course, if you maybe go into your own phone and you look at all the legacy pictures that you have, cumulatively, that creates quite a big impression in terms of energy consumption."
Since we're paying to store data in the cloud, cloud operators and tech companies have a financial incentive to keep people from deleting junk data, says Hodgkinson. He recommends people send fewer pointless emails and avoid the "dreaded 'reply all' button."

"One [figure] that often does the rounds is that for every standard email, that equates to about 4g of carbon. If we then think about the amount of what we mainly call 'legacy data' that we hold, so if we think about all the digital photos that we have, for instance, there will be a cumulative impact."
United States

US Colleges Slash Majors in Effort To Cut Costs (cbsnews.com) 110

St. Cloud State University announced plans to eliminate its music department and cut 42 degree programs and 50 minors, as part of a broader trend of U.S. colleges slashing offerings amid financial pressures. The Minnesota school's decision, driven by a $32 million budget shortfall over two years, reflects challenges facing higher education institutions nationwide. Similar program cuts have been announced at universities across the country, including in North Carolina, Arkansas, and New York. Some smaller institutions have closed entirely, unable to weather the financial storm, reports CBS News.

Federal COVID relief funds have dried up, operational costs are rising, and fewer high school graduates are pursuing college degrees. St. Cloud State's enrollment plummeted from 18,300 students in fall 2020 to about 10,000 in fall 2023, mirroring national trends. The National Student Clearinghouse Research Center reports a decline in four-year college enrollment, despite a slight rebound in community college numbers.
Cloud

Cloud Growth Puts Hyperscalers On Track To 60% of Data Capacity By 2029 (theregister.com) 6

Dan Robinson writes via The Register: Hyperscalers are forecast to account for more than 60 percent of datacenter space by 2029, a stark reversal on just seven years ago when the majority of capacity was made up of on-premises facilities. This trend is the result of demand for cloud services and consumer-oriented digital services such as social networking, e-commerce and online gaming pushing growth in hyperscale bit barns, those operated by megacorps including Amazon, Microsoft and Meta. The figures were published by Synergy Research Group, which says they are drawn from several detailed quarterly tracking research services to build an analysis of datacenter volume and trends.

As of last year's data, those hyperscale companies accounted for 41 percent of the entire global data dormitory capacity, but their share is growing fast. Just over half of the hyperscaler capacity is comprised of own-build facilities, with the rest made up of leased server farms, operated by providers such as Digital Realty or Equinix. On-premises datacenters run by enterprises themselves now account for 37 percent of the total, a drop from when they made up 60 percent a few years ago. The remainder (22 percent) is accounted for by non-hyperscale colocation datacenters.

What the figures appear to show is that hyperscale volume is growing faster than colocation or on-prem capacity -- by an average of 22 percent each year. Hence Synergy believes that while colocation's share of the total will slowly decrease over time, actual colo capacity will continue to rise steadily. Likewise, the proportion of overall bit barn space represented by on-premise facilities is forecast by Synergy to decline by almost three percentage points each year, although the analyst thinks the actual total capacity represented by on-premises datacenters is set to remain relatively stable. It's a case of on-prem essentially standing still in an expanding market.
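Synergy's 60 percent projection is essentially compound growth doing its work. A toy projection under stated assumptions (hyperscale growing 22 percent a year as reported, on-premises capacity held flat, colocation growing at an assumed 5 percent a year) reproduces the shape of the forecast.

```python
# Toy projection of datacenter capacity shares. Starting shares and the 22%
# hyperscale growth rate come from the article; flat on-prem capacity and 5%
# annual colocation growth are assumptions made here for illustration.
shares = {"hyperscale": 41.0, "on_prem": 37.0, "colo": 22.0}   # % of capacity
growth = {"hyperscale": 0.22, "on_prem": 0.00, "colo": 0.05}   # annual growth

capacity = dict(shares)  # treat current total capacity as 100 units
for year in range(2024, 2030):
    for k in capacity:
        capacity[k] *= 1 + growth[k]
    total = sum(capacity.values())
    print(year, {k: f"{100 * v / total:.0f}%" for k, v in capacity.items()})
# Under these assumptions hyperscale reaches roughly two-thirds of capacity
# by 2029, consistent with Synergy's "more than 60 percent" forecast.
```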

United Kingdom

UK Regulator To Examine $4 Billion Amazon Investment In AI Startup Anthropic (theguardian.com) 2

An anonymous reader quotes a report from The Guardian: Amazon's $4 billion investment into US artificial intelligence startup Anthropic is to be examined in the latest investigation into technology tie-ups by the UK's competition watchdog. The Competition and Markets Authority (CMA) said on Thursday that it was launching a preliminary investigation into the deal, before deciding whether to refer it for an in-depth review. The deal, announced in March, included a $4 billion investment in Anthropic from Amazon, and a commitment from Anthropic to use Amazon Web Services "as its primary cloud provider for mission critical workloads, including safety research and future foundation model development." The regulator said it was "considering whether it is or may be the case that Amazon's partnership with Anthropic has resulted in the creation of a relevant merger situation." "We are an independent company. Our strategic partnerships and investor relationships do not diminish our corporate governance independence or our freedom to partner with others," an Anthropic spokesperson said in a statement. "Amazon does not have a seat on Anthropic's board, nor does it have any board observer rights. We intend to cooperate with the CMA and provide them with a comprehensive understanding of Amazon's investment and our commercial collaboration."
Hardware

NVMe 2.1 Specifications Published With New Capabilities (phoronix.com) 22

At the Flash Memory Summit 2024 this week, NVM Express published the NVMe 2.1 specifications, which aim to enhance storage unification across AI, cloud, client, and enterprise. Phoronix's Michael Larabel writes: New NVMe capabilities with the revised specifications include:

- Enabling live migration of PCIe NVMe controllers between NVM subsystems.
- New host-directed data placement for SSDs that simplifies ecosystem integration and is backwards compatible with previous NVMe specifications.
- Support for offloading some host processing to NVMe storage devices.
- A network boot mechanism for NVMe over Fabrics (NVMe-oF).
- Support for NVMe over Fabrics zoning.
- Ability to provide host management of encryption keys and highly granular encryption with Key Per I/O.
- Security enhancements such as support for TLS 1.3, a centralized authentication verification entity for DH-HMAC-CHAP, and post sanitization media verification.
- Management enhancements including support for high availability out-of-band management, management over I3C, out-of-band management asynchronous events and dynamic creation of exported NVM subsystems from underlying NVM subsystem physical resources.
You can learn more about these updates at NVMExpress.org.
AI

Mainframes Find New Life in AI Era (msn.com) 56

Mainframe computers, stalwarts of high-speed data processing, are finding new relevance in the age of AI. Banks, insurers, and airlines continue to rely on these industrial-strength machines for mission-critical operations, with some now exploring AI applications directly on the hardware, WSJ reported in a feature story. IBM, commanding over 96% of the mainframe market, reported 6% growth in its mainframe business last quarter. The company's latest zSystem can process up to 30,000 transactions per second and hold 40 terabytes of data. WSJ adds: Globally, the mainframe market was valued at $3.05 billion in 2023, but new mainframe sales are expected to decline through 2028, IDC said. Of existing mainframes, however, 54% of enterprise leaders in a 2023 Forrester survey said they would increase their usage over the next two years.

Mainframes do have limitations. They are constrained by the computing power within their boxes, unlike the cloud, which can scale up by drawing on computing power distributed across many locations and servers. They are also unwieldy -- with years of old code tacked on -- and don't integrate well with new applications. That makes them costly to manage and difficult to use as a platform for developing new applications.

Space

Are There Diamonds on Mercury? (cnn.com) 29

The planet Mercury could have "a layer of diamonds," reports CNN, citing new research suggesting that about 310 miles (500 kilometers) below the surface there could be a layer of diamonds 11 miles (18 kilometers) thick.

And the study's co-author believes lava might carry some of those diamonds up to the surface: The diamonds might have formed soon after Mercury itself coalesced into a planet about 4.5 billion years ago from a swirling cloud of dust and gas, in the crucible of a high-pressure, high-temperature environment. At this time, the fledgling planet is believed to have had a crust of graphite, floating over a deep magma ocean.

A team of researchers recreated that searing environment in an experiment, with a machine called an anvil press that's normally used to study how materials behave under extreme pressure but also for the production of synthetic diamonds. "It's a huge press, which enables us to subject tiny samples at the same high pressure and high temperature that we would expect deep inside the mantle of Mercury, at the boundary between the mantle and the core," said Bernard Charlier, head of the department of geology at the University of Liège in Belgium and a coauthor of a study reporting the findings.

The team inserted a synthetic mixture of elements — including silicon, titanium, magnesium and aluminum — inside a graphite capsule, mimicking the theorized composition of Mercury's interior in its early days. The researchers then subjected the capsule to pressures almost 70,000 times greater than those found on Earth's surface and temperatures up to 2,000 degrees Celsius (3,630 degrees Fahrenheit), replicating the conditions likely found near Mercury's core billions of years ago.

After the sample melted, the scientists looked at changes in the chemistry and minerals under an electron microscope and noted that the graphite had turned into diamond crystals.

The researchers believe this mechanism "can not only give us more insight into the secrets hidden below Mercury's surface, but on planetary evolution and the internal structure of exoplanets with similar characteristics."
Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not available for sale themselves but typically rented through each platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Microsoft

Microsoft Now Lists OpenAI as a Competitor in AI and Search (techcrunch.com) 11

An anonymous reader shares a report: Microsoft has a long and tangled history with OpenAI, having invested a reported $13 billion in the ChatGPT maker as part of a long-term partnership. As part of the deal, Microsoft runs OpenAI's models across its enterprise and consumer products, and is OpenAI's exclusive cloud provider. However, the tech giant called the startup a "competitor" for the first time in an SEC filing on Tuesday.

In Microsoft's annual 10-K, OpenAI joined a long list of competitors in AI, alongside Anthropic, Amazon, and Meta. OpenAI was also listed alongside Google as a competitor to Microsoft in search, thanks to OpenAI's new SearchGPT feature announced last week. It's possible Microsoft is trying to change the narrative on its relationship with OpenAI in light of antitrust concerns -- the FTC is currently looking into the relationship, alongside similar cloud provider investments into AI startups.
