Your Rights Online

VPN Apps Vanish from Brazilian App Store (techradar.com) 93

Dozens of VPN apps have vanished from Brazil's Apple App Store, including the popular services NordVPN, ExpressVPN, and Surfshark. Simon Migliano, Head of Research at Top10VPN, reports that at least 30 VPN apps have become unavailable, though their store listings remain visible. Proton VPN, a major free VPN provider, confirmed the App Store issues, speculating it could be "a bug, or Apple implementing a secret censorship order." The move follows X, formerly Twitter, announcing over the weekend that it was shutting its Brazil operations, citing a "secret order" to arrest its legal representative if X didn't "comply with [Brazilian Supreme Court Justice Alexandre de Moraes's] censorship orders."
Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

SecurityWeek reports that CI/CD workflows using GitHub Actions in build processes can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise," Palo Alto Networks warns. [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [They] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends. He then automated a process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.
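Here is a minimal sketch of the reconnaissance half of that loop: listing a repository's artifacts and grepping them for token-like strings. The repository name and token regex are illustrative, a personal access token is assumed in the GITHUB_TOKEN environment variable, and the replace-with-malicious-artifact half (which relies on short-lived Actions runtime credentials) is omitted. This is not Avital's actual tooling.

```python
import io
import os
import re
import zipfile

import requests

OWNER, REPO = "some-org", "some-repo"  # hypothetical target repository
API = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/artifacts"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

# Real GitHub token prefixes: ghp_ (personal), ghs_ (app), gho_ (OAuth).
TOKEN_RE = re.compile(r"\b(?:ghp|ghs|gho)_[A-Za-z0-9]{20,}\b")

artifacts = requests.get(API, headers=HEADERS, timeout=30).json()["artifacts"]
for artifact in artifacts:
    # Each artifact is served as a zip archive via archive_download_url.
    resp = requests.get(artifact["archive_download_url"], headers=HEADERS, timeout=60)
    with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
        for name in zf.namelist():
            text = zf.read(name).decode("utf-8", errors="ignore")
            for token in TOKEN_RE.findall(text):
                print(f"{artifact['name']}/{name}: possible leaked token {token[:10]}...")
```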

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.

The blog post also notes protection and mitigation features from Palo Alto Networks....
AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...
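For a sense of what such a fix looks like, here is an illustrative before-and-after for a SQL injection finding; this is my own example of the pattern, not GitHub's actual output.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Flagged pattern: attacker-controlled input is spliced into the SQL text,
    # so a name like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(conn: sqlite3.Connection, name: str):
    # The suggested fix: a parameterized query, where the driver binds the
    # value and user input is never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```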

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

Security

'Sinkclose' Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections (wired.com) 57

An anonymous reader quotes a report from Wired: Security flaws in your computer's firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer's memory that, in many cases, it may be easier to discard a machine than to disinfect it. At the Defcon hacker conference tomorrow, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they're calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive's researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.

Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a "bootkit" that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD's security feature known as Platform Secure Boot -- which the researchers warn encompasses the large majority of the systems they tested -- a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system. Only opening a computer's case, physically connecting directly to a certain portion of its memory chips with a hardware-based programming tool known as an SPI flash programmer and meticulously scouring the memory would allow the malware to be removed, Okupski says. Nissim sums up that worst-case scenario in more practical terms: "You basically have to throw your computer away."
In a statement shared with WIRED, AMD said it "released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon."

The company also noted that it released patches for its EPYC processors earlier this year. It did not answer questions about how it intends to fix the Sinkclose vulnerability.
Software

Sonos Delays Two New Products As It Races To Fix Buggy App (theverge.com) 24

"Sonos is delaying two hardware releases originally planned for later this year as it deploys an all-hands-on-deck approach to fixing the app," writes The Verge's Chris Welch. The company released a redesigned mobile app on May 7th that has been riddled with flaws and missing features. Sonos also entered the crowded headphone market in May with the launch of its Ace headphones, but it was immediately "overshadowed" by problems with the new Sonos app, according to Sonos CEO Patrick Spence. The Verge reports: "I will not rest until we're in a position where we've addressed the issues and have customers raving about Sonos again," Spence said during the afternoon earnings call. "We believe our focus needs to be addressing the app ahead of everything else," he continued."This means delaying the two major new product releases we had planned for Q4 until our app experience meets the level of quality that we, our customers, and our partners expect from Sonos." One of those two products is almost certainly Sonos' next flagship soundbar, codenamed Lasso, which I revealed last month. "These products were ready to ship in Q4," Spence said in response to a question on the call.

He also went in-depth on the app issues and how Sonos plans to fix them. Spence remains adamant that overhauling the app and its underlying infrastructure "was the right thing to do" for the company's future; the new app "has a modular developer platform based on modern programming languages that will allow us to drive more innovation faster," he said. But Spence also now acknowledges that the project was rushed. "With the app, my push for speed backfired," he said. "As we rolled out the new software to more and more users, it became evident that there were stubborn bugs we had not discovered in our testing. As a result, far too many of our customers are having an experience that is worse than what they previously had." [...]

For now, Sonos is turning to some longtime experts for help. "I've asked Nick Millington, the original software architect of the Sonos experience, to do whatever it takes to address the issues with our new app," Spence said. Sonos board member Tom Conrad is helping to oversee the app improvement effort and "ensure" things stay on the right track.

Apple

macOS Sequoia Adds Weekly Permission Prompt For Screenshot and Screen Recording Apps (9to5mac.com) 78

Apple is set to implement stricter controls on screen recording permissions in its upcoming macOS Sequoia release this fall. Users will be required to grant explicit permission weekly and after each reboot for apps needing screen access, 9to5Mac reports. From the report: Multiple developers who spoke to 9to5Mac say that they've received confirmation from Apple that this is not a bug. Instead, Apple is indeed adding a new system prompt reminding users when an app has permission to access their computer's screen and audio.
Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com) 236

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."
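The workflow Wallach describes is easy to sketch; the model name, prompt wording, and choice of the OpenAI client library below are my own assumptions, not DARPA's tooling.

```python
# Hand an LLM some C and ask for safe, idiomatic Rust (sketch only).
from openai import OpenAI

C_SOURCE = """
#include <string.h>
void greet(char *name) {
    char buf[16];
    strcpy(buf, name);   /* classic overflow if name exceeds 15 chars */
}
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Here is some C code; please translate it to safe, "
                    "idiomatic Rust:\n" + C_SOURCE,
    }],
)
print(resp.choices[0].message.content)
```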

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com) 43

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post.

But as the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

40 teams entered the contest, according to the article, and seven received $1 million in funding to continue on to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
Intel

Intel Will Give Two Years of Additional Warranty on Crash-Prone 13th and 14th Gen CPUs (theverge.com) 19

After months of back and forth, Intel has finally agreed to extend the warranty on all affected 13th- and 14th-generation desktop CPUs by an additional two years. This extension increases the warranty period for new boxed Intel CPUs from three to five years. For CPUs pre-installed in systems, Intel directs users to contact their PC's manufacturer for support, maintaining its established channels for warranty claims. The Verge adds: Intel has said that a primary cause of the instability issues for the desktop CPUs was due to an "elevated operating voltage" and that it was working on a patch for mid-August that addresses the root cause of that. But the patch apparently won't fix any damage that's already happened, meaning the best way to fix a damaged chip is to replace it.
Games

The Best ROM Hack Website is Shutting Down After Nearly 20 Years (polygon.com) 16

ROMhacking.net, a prominent platform for fan translations and modifications of classic games, is shutting down after nearly two decades of operation. The site's administrator, who goes by the name Nightcrawler, said the website will remain accessible in a read-only format, but all new submissions have been halted and the site's extensive database has been transferred to the Internet Archive for preservation.

ROMhacking.net has long served as a crucial resource for gaming enthusiasts, according to Polygon, hosting a vast array of fan-made translations, bug fixes, and modifications for classic titles, many of which never received official localizations outside their countries of origin. The site's contributions to the gaming community include fan translations of Japanese-exclusive titles and even patches for long-standing bugs in popular games like Super Mario 64. Nightcrawler said the website ran into challenges, including managing the site's exponential growth and increasing copyright pressure, which contributed to the decision to wind down operations.
AI

Meta's AI Safety System Defeated By the Space Bar (theregister.com) 22

Thomas Claburn reports via The Register: Meta's machine-learning model for detecting prompt injection attacks -- special prompts to make neural networks behave inappropriately -- is itself vulnerable to, you guessed it, prompt injection attacks. Prompt-Guard-86M, introduced by Meta last week in conjunction with its Llama 3.1 generative model, is intended "to help developers detect and respond to prompt injection and jailbreak inputs," the social network giant said. Large language models (LLMs) are trained with massive amounts of text and other data, and may parrot it on demand, which isn't ideal if the material is dangerous, dubious, or includes personal info. So makers of AI models build filtering mechanisms called "guardrails" to catch queries and responses that may cause harm, such as those revealing sensitive training data on demand, for example. Those using AI models have made it a sport to circumvent guardrails using prompt injection -- inputs designed to make an LLM ignore its internal system prompts that guide its output -- or jailbreaks -- input designed to make a model ignore safeguards. [...]

It turns out Meta's Prompt-Guard-86M classifier model can be asked to "Ignore previous instructions" if you just add spaces between the letters and omit punctuation. Aman Priyanshu, a bug hunter with enterprise AI application security shop Robust Intelligence, recently found the safety bypass when analyzing the embedding weight differences between Meta's Prompt-Guard-86M model and Redmond's base model, microsoft/mdeberta-v3-base. "The bypass involves inserting character-wise spaces between all English alphabet characters in a given prompt," explained Priyanshu in a GitHub Issues post submitted to the Prompt-Guard repo on Thursday. "This simple transformation effectively renders the classifier unable to detect potentially harmful content."
"Whatever nasty question you'd like to ask right, all you have to do is remove punctuation and add spaces between every letter," Hyrum Anderson, CTO at Robust Intelligence, told The Register. "It's very simple and it works. And not just a little bit. It went from something like less than 3 percent to nearly a 100 percent attack success rate."
Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users: "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite these features and its widespread use, one area Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the open-source initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founder's dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

GNU is Not Unix

After Crowdstrike Outage, FSF Argues There's a Better Way Forward (fsf.org) 139

"As free software activists, we ought to take the opportunity to look at the situation and see how things could have gone differently," writes FSF campaigns manager Greg Farough: Let's be clear: in principle, there is nothing ethically wrong with automatic updates so long as the user has made an informed choice to receive them... Although we can understand how the situation developed, one wonders how wise it is for so many critical services around the world to hedge their bets on a single distribution of a single operating system made by a single stupefyingly predatory monopoly in Redmond, Washington. Instead, we can imagine a more horizontal structure, where this airline and this public library are using different versions of GNU/Linux, each with their own security teams and on different versions of the Linux(-libre) kernel...

As of our writing, we've been unable to ascertain just how much access to the Windows kernel source code Microsoft granted to CrowdStrike engineers. (For another thing, the root cause of the problem appears to have been an error in a configuration file.) But this being the free software movement, we could guarantee that all security engineers and all stakeholders could have equal access to the source code, proving the old adage that "with enough eyes, all bugs are shallow." There is no good reason to withhold code from the public, especially code so integral to the daily functioning of so many public institutions and businesses. In a cunning PR spin, it appears that Microsoft has started blaming the incident on third-party firms' access to kernel source and documentation. Translated out of Redmond-ese, the point they are trying to make amounts to "if only we'd been allowed to be more secretive, this wouldn't have happened...!"

We also need to see that calling for a diversity of providers of nonfree software that are mere front ends for "cloud" software doesn't solve the problem. Correcting it fully requires switching to free software that runs on the user's own computer. The Free Software Foundation is often accused of being utopian, but we are well aware that moving airlines, libraries, and every other institution affected by the CrowdStrike outage to free software is a tremendous undertaking. Given free software's distinct ethical advantage, not to mention the embarrassing damage control underway from both Microsoft and CrowdStrike, we think the move is a necessary one. The more public an institution, the more vitally it needs to be running free software.

For what it's worth, it's also vital to check the syntax of your configuration files. CrowdStrike engineers would do well to remember that one, next time.

Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
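A minimal sketch of the access pattern Truffle Security demonstrates: given the SHA of a commit that was pushed to a now-deleted fork, the commit can still be fetched through the upstream repository. The repository names and SHA below are hypothetical placeholders; for public repos no token is needed.

```python
import requests

OWNER, REPO = "example-org", "example-repo"
SHA = "0123456789abcdef0123456789abcdef01234567"  # committed in a deleted fork

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}", timeout=30
)
print(resp.status_code)                  # 200, although the fork is long gone
print(resp.json()["commit"]["message"])  # the "deleted" commit's message
```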

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Intel

No Fix For Intel's Crashing 13th and 14th Gen CPUs - Any Damage is Permanent 85

An anonymous reader shares a report: On Monday, it initially seemed like the beginning of the end for Intel's desktop CPU instability woes -- the company confirmed a patch is coming in mid-August that should address the "root cause" of exposure to elevated voltage. But if your 13th or 14th Gen Intel Core processor is already crashing, that patch apparently won't fix it.

Citing unnamed sources, Tom's Hardware reports that any degradation of the processor is irreversible, and an Intel spokesperson did not deny that when we asked. Intel is "confident" the patch will keep it from happening in the first place. But if your defective CPU has been damaged, your best option is to replace it instead of tweaking BIOS settings to try and alleviate the problems.

And, Intel confirms, too-high voltages aren't the only reason some of these chips are failing. Intel spokesperson Thomas Hannaford confirms it's a primary cause, but the company is still investigating. Intel community manager Lex Hoyos also revealed some instability reports can be traced back to an oxidization manufacturing issue that was fixed at an unspecified date last year.
Intel

Intel Blames 13th, 14th Gen CPU Crashes on Software Bug 59

Intel has finally figured out why its 13th and 14th generation Core desktop CPUs are repeatedly crashing. From a report: In a forum post on Monday, Intel said it traced the problem to faulty software code, which can trigger the CPUs to run at higher voltage levels. Intel examined a number of 13th and 14th gen desktop processors that buyers had returned. "Our analysis of returned processors confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor," it says. But in some bad news, Intel still needs a few more weeks to test its fix for the problem. "Intel is currently targeting mid-August for patch release to partners following full validation," it says. The company also recently confirmed that the issue doesn't extend to its mobile processors.
Privacy

Telegram Zero-Day for Android Allowed Malicious Files To Masquerade as Videos (therecord.media) 7

Researchers have identified a zero-day exploit for the Telegram messaging app on Android devices that could have allowed attackers to send malicious payloads disguised as legitimate files. From a report: The exploit was built to abuse a vulnerability that Slovakia-based firm ESET dubbed EvilVideo. Telegram fixed the bug earlier this month in versions 10.14.5 and above after researchers reported it. Threat actors had about five weeks to exploit the zero-day before it was patched, but it's not clear if it was used in the wild, ESET said. ESET discovered the exploit on an underground forum in early June. It was sold for an unspecified price by a user with the username "Ancryno." In its post, the seller showed screenshots and a video of testing the exploit in a public Telegram channel.

In unpatched versions of Telegram for Android, attackers could use the exploit to send malicious payloads via Telegram channels, groups and chats, making them appear as multimedia files. The exploit takes advantage of Telegram's default setting to automatically download media files. The option can be disabled manually, but in that case, the payload could still be installed on the device if a user tapped the download button in the top left corner of the shared file. If the user tried to play the "video," Telegram displayed a message that it was unable to play it and suggested using an external player. The hackers disguised a malicious app as this external player.

Microsoft

To Fix CrowdStrike Blue Screen of Death Simply Reboot 15 Straight Times, Microsoft Says (404media.co) 173

Microsoft has a suggested solution for individual customers affected by what may turn out to be the largest IT outage that has ever happened: Just reboot it a lot. From a report: Customers can delete a specific file called C-00000291*.sys, which is seemingly tied to the bug, Microsoft said in a status update published Friday. But in some cases, people can't even get to a spot where they can delete that file. In an update posted Friday morning, Microsoft told users that they should simply reboot Virtual Machines (VMs) experiencing a BSoD over and over again until they can fix the issue.
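Where the filesystem is reachable at all, the deletion step itself is trivial; a sketch of it follows, with the path taken from the publicly documented remediation guidance rather than from the article. In practice this was run from safe mode or a recovery environment, not a live boot-looping machine.

```python
import glob
import os

# Path per Microsoft/CrowdStrike public remediation guidance (assumption:
# default install location; adjust if your Windows directory differs).
PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

for path in glob.glob(PATTERN):
    os.remove(path)
    print("deleted", path)
```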

[...] "We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines," Microsoft told users. "We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage."

Security

CISA Broke Into a US Federal Agency, No One Noticed For a Full 5 Months (theregister.com) 35

A 2023 red team exercise by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) at an unnamed federal agency exposed critical security failings, including unpatched vulnerabilities, inadequate incident response, and weak credential management, leading to a full domain compromise. According to The Register's Connor Jones, the agency failed to detect or remediate malicious activity for five months. From the report: According to the agency's account of the exercise, the red team was able to gain initial access by exploiting an unpatched vulnerability (CVE-2022-21587 - 9.8) in the target agency's Oracle Solaris enclave, leading to what it said was a full compromise. It's worth noting that CVE-2022-21587, an unauthenticated remote code execution (RCE) bug carrying a near-maximum 9.8 CVSS rating, was added to CISA's known exploited vulnerability (KEV) catalog in February 2023. The initial intrusion by CISA's red team was made on January 25, 2023. "After gaining access, the team promptly informed the organization's trusted agents of the unpatched device, but the organization took over two weeks to apply the available patch," CISA's report reads. "Additionally, the organization did not perform a thorough investigation of the affected servers, which would have turned up IOCs and should have led to a full incident response. About two weeks after the team obtained access, exploit code was released publicly into a popular open source exploitation framework. CISA identified that the vulnerability was exploited by an unknown third party. CISA added this CVE to its Known Exploited Vulnerabilities Catalog on February 2, 2023." [...]

After gaining access to the Solaris enclave, the red team discovered they couldn't pivot into the Windows part of the network because missing credentials blocked their path, despite enjoying months of access to sensitive web apps and databases. Undeterred, CISA managed to make its way into the Windows network after carrying out phishing attacks on unidentified members of the target agency, one of which was successful. It said real adversaries may have instead used prolonged password-spraying attacks rather than phishing at this stage, given that several service accounts were identified as having weak passwords. After gaining that access, the red team injected a persistent RAT and later discovered unsecured admin credentials, which essentially meant it was game over for the agency being assessed. "None of the accessed servers had any noticeable additional protections or network access restrictions despite their sensitivity and critical functions in the network," CISA said.

CISA described this as a "full domain compromise" that gave the attackers access to tier zero assets -- the most highly privileged systems. "The team found a password file left from a previous employee on an open, administrative IT share, which contained plaintext usernames and passwords for several privileged service accounts," the report reads. "With the harvested Lightweight Directory Access Protocol (LDAP) information, the team identified one of the accounts had system center operations manager (SCOM) administrator privileges and domain administrator privileges for the parent domain. "They identified another account that also had administrative permissions for most servers in the domain. The passwords for both accounts had not been updated in over eight years and were not enrolled in the organization's identity management (IDM)." From here, the red team realized the victim organization had trust relationships with multiple external FCEB organizations, which CISA's team then pivoted into using the access they already had.

The team "kerberoasted" one partner organization. Kerberoasting is an attack on the Kerberos authentication protocol typically used in Windows networks to authenticate users and devices. However, it wasn't able to move laterally with the account due to low privileges, so it instead used those credentials to exploit a second trusted partner organization. Kerberoasting yielded a more privileged account at the second external org, the password for which was crackable. CISA said that due to network ownership, legal agreements, and/or vendor opacity, these kinds of cross-organizational attacks are rarely tested during assessments. However, SILENTSHIELD assessments are able to be carried out following new-ish powers afforded to CISA by the FY21 National Defense Authorization Act (NDAA), the same powers that also allow CISA's Federal Attack Surface Testing (FAST) pentesting program to operate. It's crucial that these avenues are able to be explored in such exercises because they're routes into systems adversaries will have no reservations about exploring in a real-world scenario. For the first five months of the assessment, the target FCEB agency failed to detect or remediate any of the SILENTSHIELD activity, raising concerns over its ability to spot genuine malicious activity.
CISA said the findings demonstrated the need for agencies to apply defense-in-depth principles. The cybersecurity agency recommended network segmentation and a Secure-by-Design commitment.
Google

Google Paper: AI Potentially Breaking Reality Is a Feature Not a Bug (404media.co) 82

An anonymous reader shares a report: Generative AI could "distort collective understanding of socio-political reality or scientific consensus," and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI. The paper, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data," [PDF] was co-authored by researchers at Google's artificial intelligence research laboratory DeepMind, its security think tank Jigsaw, and its charitable arm Google.org, and aims to classify the different ways generative AI tools are being misused by analyzing about 200 incidents of misuse as reported in the media and research papers between January 2023 and March 2024.

Unlike self-serving warnings from OpenAI CEO Sam Altman or Elon Musk about the "existential risk" artificial general intelligence poses to humanity, Google's research focuses on real harm that generative AI is currently causing and could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos. Much like another Google research paper about the dangers of generative AI I covered recently, Google's methodology here likely undercounts instances of AI-generated harm. But the most interesting observation in the paper is that the vast majority of these harms and how they "undermine public trust," as the researchers say, are often "neither overtly malicious nor explicitly violate these tools' content policies or terms of service." In other words, that type of content is a feature, not a bug.
