Security

Jaguar Land Rover Hack 'Has Cost 30,000 Cars and Threatens Supply Chain' (thetimes.com) 92

Jaguar Land Rover has halted production for nearly a month following a major cyberattack, costing an estimated 30,000 vehicles and billions in lost revenue. "The company said on Tuesday that production would be halted for another week until at least October 1, which increased concerns that a full return to production could be months away," reports The Times. From the report: David Bailey, professor of business economics at Birmingham University, said the JLR statement did not commit to reopening production on October 1 and even if it did "it's not going to be back to normal, but phased production start with some lines opening before others, as we saw after the Covid closure back in 2020." He said: "It's 24 days [shutdown] as of September 24. So that is roughly 1,000 cars a day, 24,000 cars not produced. So by then, that's about 1.7 billion pounds in lost revenue. By October 1, it will be a hit to revenue of something like 2.2 billion pounds. It's pretty massive. JLR can get through, but they're going to be burning through cash this month."

Bailey also raised concerns that smaller companies further down the supply chain lacked the cash reserves to withstand the shutdown. The company directly employs more than 30,000 people, and it is estimated that approximately 200,000 workers in the supply chain depend on work from JLR. "The union has said that in some cases, staff have been told to go and apply for universal credit. There are firms I know that have applied for bank loans to keep going. But even then, you know they're approaching the limit of what they do. There's an added knock-on effect that some of the suppliers also supply other car assemblers, Toyota or Mini. So some of those are concerned that bits of the supply chain may go under and affect them as well, because the industry is so connected. One way or another, the government's going to take a hit. Either through some sort of emergency support, whether that's furlough or emergency short-term loans or through unemployment benefit, if this carries on."

There has been uncertainty over the extent of the cyberattack and exactly how the company has been affected, as well as who is responsible for it. According to one source, some JLR staff were still unable last week to access the Slack messaging system through the company's "one sign on" system. The JLR statement added: "We have made this decision to give clarity for the coming week as we build the timeline for the phased restart of our operations and continue our investigation."

AI

MediaTek Launches Improved AI Processor To Compete With Qualcomm 2

An anonymous reader quotes a report from Bloomberg: MediaTek is launching a mobile processor more capable of handling agentic AI tasks on devices, positioning itself to better compete with Qualcomm. The new Dimensity 9500 will provide users with better summaries of calls and meetings, improved output from AI models and superior 4K photos, the Taiwanese company said in a statement. The chip is made using an advanced 3-nanometer process by Taiwan Semiconductor Manufacturing Co., according to MediaTek, and handsets carrying the new chip will become available in the fourth quarter.

Xiaomi is set to launch its latest handset range powered by Qualcomm's newest Snapdragon processor later this week, and the Chinese smartphone maker is aiming to benchmark its upcoming devices against Apple Inc.'s iPhone 17. MediaTek's processor, meanwhile, is expected to give Xiaomi's rivals including Vivo a boost in the premium segment. [...] Separately, the Taiwanese company is preparing to place chip orders for automotive and more sensitive applications with TSMC's Arizona plant as some US customers have security concerns, according to the executives.

The Internet

MI6 Launches Dark Web Portal To Attract Spies In Russia (reuters.com) 20

An anonymous reader quotes a report from Reuters: A new dark web portal to recruit spies for the UK was launched last Friday (19th September), as the UK steps up its commitment to national security. Harnessing the anonymity of the dark web for the first time, MI6's new secure messaging platform -- Silent Courier -- enables anyone, anywhere in the world with access to sensitive information relating to terrorism or hostile intelligence activity to securely contact the UK and offer their services. Instructions on how to access the portal will be publicly available on MI6's verified YouTube channel as the UK reaches out to potential new agents in Russia and around the world. MI6 advises individuals accessing its portal to use trustworthy VPNs and devices not linked to themselves, to mitigate risks which exist in some countries.

The announcement was made by the outgoing Chief of MI6, Sir Richard Moore, in Istanbul where he stated that the platform will make it easier for MI6 to recruit agents online. As MI6 establishes its official presence on the dark web to reach new recruits and tackle hostile actors seeking to undermine UK security, Sir Richard said that the UK's intelligence services are "critical to calibrating risk and informing decisions" in navigating threats from hostile actors -- making platforms like these even more important in keeping our country safe. Sir Richard said: "Today we're asking those with sensitive information on global instability, international terrorism or hostile state intelligence activity to contact MI6 securely online. Our virtual door is open to you."
Foreign Secretary Yvette Cooper said: "National security is the first duty of any government and the bedrock of the Prime Minister's Plan for Change. As the world changes, and the threats we're facing multiply, we must ensure the UK is always one step ahead of our adversaries. Our world class intelligence agencies are at the coalface of this challenge, working behind the scenes to keep British people safe. Now we're bolstering their efforts with cutting-edge tech so MI6 can recruit new spies for the UK - in Russia and around the world."

Government

Meta's AI System Llama Approved For Use By US Government Agencies 9

The U.S. General Services Administration has approved Meta's AI system Llama for use by federal agencies, declaring that it meets government security and legal standards. Reuters reports: "It's not about currying favor," [said Josh Gruenbaum, the GSA's procurement lead, when asked whether tech executives are giving the government discounts to get President Donald Trump's approval]. "It's about that recognition of how do we all lock in arms and make this country the best country it could possibly be." Federal agencies will be able to deploy the tool to speed up contract review or more quickly solve information technology hiccups, among other tasks, he said.

Social Networks

TikTok Algorithm To Be Retrained On US User Data Under Trump Deal (bbc.com) 37

The Trump administration has struck a deal requiring TikTok's algorithm to be copied, retrained, and operated in the U.S. using only U.S. user data, with Oracle auditing the system and U.S. investors forming a joint venture to oversee it. The BBC reports: It comes after President Donald Trump said a deal to prevent the app's ban in the US, unless sold by its Chinese parent company ByteDance, had been reached with China's approval. White House officials claim the deal will be a win for the app's US users and citizens. President Trump is expected to sign an executive order later this week on the proposed deal, which will set out how it will comply with US national security demands.

The order will also outline a 120-day pause to the enforcement deadline to allow the deal to close. It is unclear whether the Chinese government has approved this agreement, or begun to take regulatory steps required to deliver it. However, the White House appears confident it has secured China's approval. Data belonging to the 170m users TikTok says it has in the US is already held on Oracle servers, under an existing arrangement called Project Texas. That arrangement walled off US user data over concerns it could fall into the hands of the Chinese government.

A senior White House official said that under President Trump's deal, the company would take on a comprehensive role in securing the entirety of the app for American users. They said this would include auditing and inspecting the source code and recommendation system underpinning the app, and rebuilding it for US users using only US user data.

AI

AI Tools Give Dangerous Powers to Cyberattackers, Security Researchers Warn (msn.com) 21

"On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client's AI program-writing assistant into executing programs that forked over the company's databases and code repositories," reports the Washington Post.

"We have never been this foolish with security," Brauchler said... Demonstrations at last month's Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network. A similar attack on Google's Gemini didn't even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker's number, mimicking successful phishing scams.

The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked Perplexity's agentic Comet browser into buying a watch from a fake online store and into following instructions from a fake banking email...

Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches. Seven teams of hackers that developed autonomous "cyber reasoning systems" for a contest held last month by the Pentagon's Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them. Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.

The real nightmare scenario is when these worlds collide, and an attacker's AI finds a way in and then starts communicating with the victim's AI, working in partnership — "having the bad guy AI collaborate with the good guy AI," as SentinelOne's [threat researcher Alex] Delamotte put it. "Next year," said Adam Meyers, senior vice president at CrowdStrike, "AI will be the new insider threat."

In August more than 1,000 people lost data to a modified Nx program (downloaded hundreds of thousands of times) that used pre-installed coding tools from Google/Anthropic/etc. According to the article, the malware "instructed those programs to root out" sensitive data (including passwords or cryptocurrency wallets) and send it back to the attacker. "The more autonomy and access to production environments such tools have, the more havoc they can wreak," the article points out — including this quote from SentinelOne threat researcher Alex Delamotte.

"It's kind of unfair that we're having AI pushed on us in every single product when it introduces new risks."

Programming

Secure Software Supply Chains, Urges Former Go Lead Russ Cox (acm.org) 19

Writing in Communications of the ACM, former Go tech lead Russ Cox warns we need to keep improving defenses of software supply chains, highlighting "promising approaches that should be more widely used" and "areas where more work is needed." There are important steps we can take today, such as adopting software signatures in some form, making sure to scan for known vulnerabilities regularly, and being ready to update and redeploy software when critical new vulnerabilities are found. More development should be shifted to safer languages that make vulnerabilities and attacks less likely. We also need to find ways to fund open source development to make it less susceptible to takeover by the mere offer of free help. Relatively small investments in OpenSSL and XZ development could have prevented both the Heartbleed vulnerability and the XZ attack.
Some highlights from the 5,000-word article:
  • Make Builds Reproducible. "The Reproducible Builds project aims to raise awareness of reproducible builds generally, as well as building tools to help progress toward complete reproducibility for all Linux software. The Go project recently arranged for Go itself to be completely reproducible given only the source code... A build for a given target produces the same distribution bits whether you build on Linux or Windows or Mac, whether the build host is X86 or ARM, and so on. Strong reproducibility makes it possible for others to easily verify that the binaries posted for download match the source code..."
  • Prevent Vulnerabilities. "The most secure software dependencies are the ones not used in the first place: Every dependency adds risk... Another good way to prevent vulnerabilities is to use safer programming languages that remove error-prone language features or make them needed less often..."
  • Authenticate Software. ("Cryptographic signatures make it impossible to nefariously alter code between signing and verifying. The only problem left is key distribution...") "The Go checksum database is a real-world example of this approach that protects millions of Go developers. The database holds the SHA256 checksum of every version of every public Go module..."
  • Fund Open Source. [Cox first cites the XKCD cartoon "Dependencies," calling it "a disturbingly accurate assessment of the situation..."] "The XZ attack is the clearest possible demonstration that the problem is not fixed. It was enabled as much by underfunding of open source as by any technical detail."
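The checksum-database approach Cox describes can be sketched in a few lines: record the SHA256 of each module version once, then refuse any download whose hash does not match. The module name and recorded table below are hypothetical; real Go tooling (`go mod verify`) performs this check automatically.

```python
import hashlib

# Hypothetical checksum table: (module, version) -> SHA256 recorded when
# that version was first published.
KNOWN_CHECKSUMS = {
    ("example.com/widget", "v1.2.0"):
        hashlib.sha256(b"widget v1.2.0 source archive").hexdigest(),
}

def verify_module(name: str, version: str, downloaded: bytes) -> bool:
    """Accept a download only if it hashes to the recorded checksum."""
    recorded = KNOWN_CHECKSUMS.get((name, version))
    if recorded is None:
        return False  # no record: refuse rather than trust
    return hashlib.sha256(downloaded).hexdigest() == recorded

print(verify_module("example.com/widget", "v1.2.0",
                    b"widget v1.2.0 source archive"))  # True
print(verify_module("example.com/widget", "v1.2.0",
                    b"tampered bytes"))                # False
```

The same hash comparison is what makes reproducible builds verifiable: anyone who rebuilds from source can check that their bits match the published ones.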

The article also emphasized the importance of finding and fixing vulnerabilities quickly, arguing that software attacks must be made more difficult and expensive.

"We use source code downloaded from strangers on the Internet in our most critical applications; almost no one is checking the code.... We all have more work to do."


Transportation

Cyberattack Delays Flights at Several of Europe's Major Airports (apnews.com) 7

"A cyberattack targeting check-in and boarding systems disrupted air traffic and caused delays at several of Europe's major airports on Saturday," reports the Associated Press.

"While the impact on travelers appeared to be limited, experts said the intrusion exposed vulnerabilities in security systems." The disruptions to electronic systems initially reported at Brussels, Berlin's Brandenburg and London's Heathrow airports meant that only manual check-in and boarding was possible. Many other European airports said their operations were unaffected... Airports said the issue centered around a provider of check-in and boarding systems — not airlines or the airports themselves. Collins Aerospace, whose systems help passengers check themselves in, print boarding passes and bag tags and dispatch their luggage from a kiosk, cited a "cyber-related disruption" to its MUSE (Multi-User System Environment) software at "select airports."

Brussels Airport initially reported a "large impact" on flight schedules, according to the article, with a spokesperson telling broadcaster VTM that by mid-morning nine flights had been canceled, with four more redirected to another airport and 15 delayed an hour or more. The airport later told Reuters there were "delays on most of the departing flights."

Reuters notes it's "the latest in a string of hacks targeting governments and companies across the world, hitting sectors from healthcare and defence to retail and autos." A recent breach at luxury carmaker Jaguar Land Rover brought its production to a halt...

At Heathrow, Berlin and Brussels, 29 flight departures and arrivals had been cancelled as of 1130 GMT, aviation data provider Cirium said. In total, 651 departures were scheduled from Heathrow, 228 from Brussels and 226 from Berlin on Saturday... Brussels Airport said it had asked airlines to cancel half of their scheduled departing flights on Sunday to avoid long queues and late cancellations, signalling that the disruption would continue through the weekend.

A European Commission spokesperson said there were currently no indications of a "widespread or severe attack" and that the origin of the incident was still under investigation.

Security

Self-Replicating Worm Affected Several Hundred NPM Packages, Including CrowdStrike's (www.koi.security) 33

The Shai-Hulud malware campaign impacted hundreds of npm packages across multiple maintainers, reports Koi Security, including popular libraries like @ctrl/tinycolor and some packages maintained by CrowdStrike. Malicious versions embed a trojanized script (bundle.js) designed to steal developer credentials, exfiltrate secrets, and persist in repositories and endpoints through automated workflows.
Koi Security created a table of packages identified as compromised, promising it's "continuously updated" (and showing the last compromise detected Tuesday). Nearly all of the compromised packages have a status of "removed from NPM". Attackers published malicious versions of @ctrl/tinycolor and other npm packages, injecting a large obfuscated script (bundle.js) that executes automatically during installation. This payload repackages and republishes maintainer projects, enabling the malware to spread laterally across related packages without direct developer involvement. As a result, the compromise quickly scaled beyond its initial entry point, impacting not only widely used open-source libraries but also CrowdStrike's npm packages.

The injected script performs credential harvesting and persistence operations. It runs TruffleHog to scan local filesystems and repositories for secrets, including npm tokens, GitHub credentials, and cloud access keys for AWS, GCP, and Azure. It also writes a hidden GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) that exfiltrates secrets during CI/CD runs, ensuring long-term access even after the initial infection. This dual focus on endpoint secret theft and CI/CD backdoors makes Shai-Hulud considerably more dangerous than previous npm compromises.
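Because the campaign drops its workflow file at a fixed path, a first-pass indicator-of-compromise sweep over locally checked-out repositories is straightforward. This is a minimal sketch only; published IoC lists also cover compromised package versions and bundle.js hashes.

```python
import os

# Path of the workflow file the Shai-Hulud campaign is reported to drop.
INDICATOR = os.path.join(".github", "workflows", "shai-hulud-workflow.yml")

def find_compromised_repos(root: str) -> list[str]:
    """Return directories under `root` that contain the indicator file."""
    hits = []
    for entry in os.scandir(root):
        if entry.is_dir() and os.path.isfile(os.path.join(entry.path, INDICATOR)):
            hits.append(entry.path)
    return hits
```

Run it against the directory holding your checked-out repositories; any hit warrants rotating npm, GitHub, and cloud credentials, since the workflow exfiltrates secrets during CI/CD runs.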

"The malicious code also attempts to leak data on GitHub by making private repositories public," according to a Tuesday blog post from security systems provider Sysdig: The Sysdig Threat Research Team (TRT) has been monitoring this worm's progress since its discovery. Due to quick response times, the number of new packages being compromised has slowed considerably. No new packages have been seen in several hours at the time...
Their blog post concludes "Supply chain attacks are increasing in frequency. It is more important than ever to monitor third-party packages for malicious activity."

Some context from Tom's Hardware: To be clear: This campaign is distinct from the incident that we covered on Sept. 9, which saw multiple npm packages with billions of weekly downloads compromised in a bid to steal cryptocurrency. The ecosystem is the same — attackers have clearly realized the GitHub-owned npm package registry for the Node.js ecosystem is a valuable target — but whoever's behind the Shai-Hulud campaign is after more than just some Bitcoin.

Programming

C++ Committee Prioritizes 'Profiles' Over Rust-Style Safety Model Proposal (theregister.com) 86

Long-time Slashdot reader robinsrowe shared this report from the Register: The C++ standards committee abandoned a detailed proposal to create a rigorously safe subset of the language, according to the proposal's co-author, despite continuing anxiety about memory safety. "The Safety and Security working group voted to prioritize Profiles over Safe C++. Ask the Profiles people for an update. Safe C++ is not being continued," Sean Baxter, author of the cutting-edge Circle C++ compiler, commented in June this year. The topic came up as developers like Simone Bellavia noted the anniversary of the proposal and discovered a decision had been made on Safe C++.

One year ago, Baxter told The Reg that the project would enable C++ developers to get the memory safety of Rust, but without having to learn a new language. "Safe C++ prevents users from writing unsound code," he said. "This includes compile-time intelligence like borrow checking to prevent use-after-free bugs and initialization analysis for type safety." Safe C++ would enable incremental migration of code, since it only applies to code in the safe context. Existing unsafe code would run as before.

Even the matter of whether the proposal has been abandoned is not clear-cut. Erich Keane, C++ committee member and co-chair of the C++ Evolution Working Group (EWG), said that Baxter's proposal "got a vote of encouragement where roughly 1/2 (20/45) of the people encouraged Sean's paper, and 30/45 encouraged work on profiles (with 6 neutral)... Sean is completely welcome to continue the effort, and many in the committee would love to see him make further effort on standardizing it."

In response, Baxter said: "The Rust safety model is unpopular with the committee. Further work on my end won't change that. Profiles won the argument." He added that the language evolution principles adopted by the EWG include the statement that "we should avoid requiring a safe or pure function annotation that has the semantics that a safe or pure function can only call other safe or pure functions." This, he said, is an "irreconcilable design disagreement...."

United States

Pentagon Demands Journalists Pledge To Not Obtain Unauthorized Material (msn.com) 264

The Washington Post: The Trump administration unveiled a new crackdown Friday on journalists at the Pentagon, saying it will require them to pledge they won't gather any information - even unclassified - that hasn't been expressly authorized for release, and will revoke the press credentials of those who do not obey.

Under the policy, the Pentagon may revoke press passes for anyone it deems a security threat. Possessing confidential or unauthorized information, under the new rules, would be grounds for a journalist's press pass to be revoked.

"DoW remains committed to transparency to promote accountability and public trust," the document says, using an acronym for the newly rebranded Department of War. "However, DoW information must be approved for public release by an appropriate authorizing official before it is released, even if it is unclassified."

For months, Defense Secretary Pete Hegseth and his staff have been tightening restrictions on Pentagon reporters while limiting military personnel's direct communication with the press. Like many defense secretaries before him, Hegseth has been deeply irritated by leaks. His staff this year threatened to use polygraph tests to stop people from leaking information, until the White House intervened.

Microsoft

This Microsoft Entra ID Vulnerability Could Have Been Catastrophic (wired.com) 32

Security researcher Dirk-jan Mollema discovered two vulnerabilities in Microsoft's Entra ID identity platform that could have granted attackers administrative access to virtually all Azure customer accounts worldwide. The flaws involved legacy authentication systems -- Actor Tokens issued by Azure's Access Control Service and a validation failure in the retiring Azure Active Directory Graph API.

Mollema reported the vulnerabilities to Microsoft on July 14. Microsoft released a global fix three days later and found no evidence of exploitation. The vulnerabilities would have allowed attackers to impersonate any user across any Azure tenant and access all Microsoft services using Entra ID authentication. Microsoft confirmed the fixes were fully implemented by July 23 and added additional security measures in August as part of its Secure Future Initiative. The company issued a CVE on September 4.
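The class of flaw described, a token honored without checking which tenant it was issued for, can be illustrated generically. This sketch is not Microsoft's implementation: claims are modeled as a plain dict, assumed already signature-verified, and the audience value is hypothetical.

```python
# Generic sketch of tenant validation on an already-verified token.

EXPECTED_AUDIENCE = "https://graph.example"  # hypothetical API audience

def authorize(claims: dict, expected_tenant: str) -> bool:
    # The tenant check below is the one whose absence lets a token minted
    # for tenant A be accepted inside tenant B.
    if claims.get("tid") != expected_tenant:
        return False
    return claims.get("aud") == EXPECTED_AUDIENCE

token = {"tid": "tenant-A", "aud": "https://graph.example"}
print(authorize(token, "tenant-A"))  # True: token used in its own tenant
print(authorize(token, "tenant-B"))  # False: cross-tenant use rejected
```

Skipping or mis-implementing that one comparison is what turns a single-tenant credential into a skeleton key, which is why the reported flaw had worldwide scope.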

AI

DeepSeek Writes Less-Secure Code For Groups China Disfavors 36

Research shows China's top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. "The findings ... underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence," reports the Washington Post. From the report: In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes.

Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said.
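CrowdStrike's measurement is essentially an A/B tally: identical prompts that differ only in the stated requester, scored by whether the returned code contained flaws. A sketch with illustrative counts chosen to reproduce the reported rates (these are not CrowdStrike's raw numbers):

```python
# Illustrative tallies only; sample sizes are assumed, not reported.

def unsafe_rate(flawed: int, total: int) -> float:
    """Percentage of responses whose code contained flaws."""
    return round(100 * flawed / total, 1)

print(unsafe_rate(228, 1000))  # 22.8  (industrial-control prompts, no affiliation)
print(unsafe_rate(421, 1000))  # 42.1  (same prompts, Islamic State as requester)
```

Holding everything constant except the stated affiliation is what lets the comparison attribute the gap to the affiliation rather than to the task.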

Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new.
CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code.

One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad).

A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter and more invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the users' parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," Open AI said in its announcement. The position it seemed to have landed on given these recent stories about teen suicide, is that it wants to "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."

Windows

Consumer Reports Asks Microsoft To Keep Supporting Windows 10 (theverge.com) 137

Consumer Reports has urged Microsoft to keep supporting Windows 10 beyond its October 2025 cutoff, saying the move will "strand millions of consumers" who have machines incompatible with Windows 11. The Verge reports: As noted by Consumer Reports, data suggests that around 46.2 percent of people around the world still use Windows 10 as of August 2025, while around 200 to 400 million PCs can't be upgraded to Windows 11 due to missing hardware requirements.

In the letter, Consumer Reports calls Microsoft "hypocritical" for urging customers to upgrade to Windows 11 to bolster cybersecurity, but then leaving Windows 10 devices susceptible to cyberattacks. It also calls out the $30 fee Microsoft charges customers for "a mere one-year extension to preserve their machine's security," as well as the free support options that force people to use Microsoft products, allowing the company to "eke out a bit of market share over competitors."

Consumer Reports asks that Microsoft continue providing support for Windows 10 computers for free until more people have upgraded to Windows 11.

Privacy

UK's MI5 'Unlawfully' Obtained Data From Former BBC Journalist (theguardian.com) 43

Bruce66423 shares a report from The Guardian: MI5 has conceded it "unlawfully" obtained the communications data of a former BBC journalist, in what was claimed to be an unprecedented admission from the security services. The BBC said it was a "matter of grave concern" that the agency had obtained communications data from the mobile phone of Vincent Kearney, a former BBC Northern Ireland home affairs correspondent. The admission came in a letter to the BBC and to Kearney, in relation to a tribunal examining claims that several reporters in Northern Ireland were subjected to unlawful scrutiny by the police. It related to work carried out by Kearney for a documentary into the independence of the Office of the Police Ombudsman for Northern Ireland (PONI). Kearney is now the northern editor at Irish broadcaster RTE.

In documents submitted to the Investigatory Powers Tribunal (IPT), MI5 conceded it obtained phone data from Kearney on two occasions in 2006 and 2009. Jude Bunting KC, representing Kearney and the BBC, told a hearing on Monday: "The MI5 now confirms publicly that in 2006 and 2009 MI5 obtained communications data in relation to Vincent Kearney." He said the security service accepted it had breached Kearney's rights under article 8 and article 10 of the European convention on human rights. They relate to the right to private correspondence and the right to impart information without interference from public authorities. "This appears to be the first time in any tribunal proceedings in which MI5 publicly accept interference with a journalist's communications data, and also publicly accept that they acted unlawfully in doing so," Bunting said. He claimed the concessions that it accessed the journalist's data represented "serious and sustained illegality on the part of MI5."
Bruce66423 comments: "The good news is that it's come out. The bad news is that it has taken 16 years to do so. The interesting question is whether there will be any meaningful consequences for individuals within MI5; there's a nice charge of 'malfeasance in public office' that can be used to get such individuals into a criminal court. Or will the outcome be like that of when the CIA hacked the US Senate's computers, lied about it, and nothing happened?"
Programming

Vibe Coding Has Turned Senior Devs Into 'AI Babysitters' (techcrunch.com) 86

An anonymous reader quotes a report from TechCrunch: Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. Rover has been in the industry for 15 years, mainly working as a web developer. She's now building a startup, alongside her son, that creates custom machine learning models for marketplaces. She called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas. But dealing with AI-generated code that one hopes to use in production can be "worse than babysitting," she said, as these AI models can mess up work in ways that are hard to predict.

She had turned to AI coding out of a need for speed with her startup, speed being the promise of AI tools. "Because I needed to be quick and impressive, I took a shortcut and did not scan those files after the automated review," she said. "When I did do it manually, I found so much wrong. When I used a third-party tool, I found more. And I learned my lesson." She and her son wound up restarting their whole project -- hence the tears. "I handed it off like the copilot was an employee," she said. "It isn't."

Rover is like many experienced programmers turning to AI for coding help. But such programmers are also finding themselves acting like AI babysitters -- rewriting and fact-checking the code the AI spits out. A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers. These experienced coders have discovered issues with AI-generated code ranging from hallucinated package names to deleted important information and security risks. Left unchecked, AI code can leave a product far more buggy than what humans would produce.

Working with AI-generated code has become such a problem that it's given rise to a new corporate coding job known as "vibe code cleanup specialist." TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go. "Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, 'Please take this into the dining room and pour coffee for the family,'" Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren't going to tell you. "It doesn't make the kid less clever," she continued. "It just means you can't delegate [a task] like that completely."
Further reading: The Software Engineers Paid To Fix Vibe Coded Messes
Google

Google Shifts Android Security Updates To Risk-Based Triage System (androidauthority.com) 2

Google has restructured Android's decade-old monthly security update process into a "Risk-Based Update System" that separates high-priority patches from routine fixes. Monthly bulletins now contain only vulnerabilities under active exploitation or in known exploit chains -- explaining July 2025's unprecedented zero-CVE bulletin -- while most patches accumulate for quarterly releases.

The September 2025 bulletin contained 119 vulnerabilities compared to zero in July and six in August. The change reduces OEM workload for monthly updates but extends the private bulletin lead time from 30 days to several months for quarterly releases. The company no longer releases monthly security update source code, limiting custom ROM development to quarterly cycles.
United Kingdom

UK's Data Watchdog Warns Students Are Breaching Their Schools' IT Systems (bbc.com) 56

The UK's data-protection regulator, the Information Commissioner's Office, has issued a warning about what it calls a worrying trend, reports the BBC: "students hacking their own school and college IT systems for fun or as part of dares." Since 2022, the Information Commissioner's Office (ICO) has investigated 215 hacks and breaches originating from inside education settings and says 57% were carried out by children. Other breaches are thought to come from staff, third party IT suppliers and other organisations with access. According to the new data, almost a third of the breaches involved students illegally logging into staff computer systems by guessing passwords or stealing details from teachers.

In one incident, a seven-year-old was involved in a data breach and subsequently referred to the National Crime Agency's Cyber Choices programme to help them understand the seriousness of their actions... In another incident three Year 11 students aged 15 or 16 unlawfully accessed school databases containing the personal information of more than 1,400 students. The pupils used hacking tools downloaded from the internet to break passwords and security protocols. When questioned, they said they were interested in cyber security and wanted to test their skills and knowledge. Another example the ICO gave is of a student illegally logging into their college's databases with a teacher's details to change or delete personal information belonging to more than 9,000 staff, students and applicants. The system stored personal information such as name and home address, school records, health data, safeguarding and pastoral logs and emergency contacts.

Schools are facing an increasing number of cyber attacks, with 44% of schools reporting an attack or breach in the last year according to the government's most recent Cyber Security Breaches Survey.

"Youth cyber crime culture is a growing threat linked to English-speaking teen gangs," the article argues, noting breaches at major companies to suggest it's a kind of "gateway" crime.

The ICO's principal cyber specialist tells the BBC that "What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure."
Security

Apple Claims 'Most Significant Upgrade to Memory Safety' in OS History (apple.com) 39

"There has never been a successful, widespread malware attack against iPhone," notes Apple's security blog, pointing out that "The only system-level iOS attacks we observe in the wild come from mercenary spyware... historically associated with state actors and [using] exploit chains that cost millions of dollars..."

But they're doing something about it — this week announcing a new always-on memory-safety protection in the iPhone 17 lineup and iPhone Air (including the kernel and over 70 userland processes)... Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android: they exploit memory safety vulnerabilities, which are interchangeable, powerful, and exist throughout the industry... For Apple, improving memory safety is a broad effort that includes developing with safe languages and deploying mitigations at scale...

Our analysis found that, when employed as a real-time defensive measure, the original Arm Memory Tagging Extension (MTE) release exhibited weaknesses that were unacceptable to us, and we worked with Arm to address these shortcomings in the new Enhanced Memory Tagging Extension (EMTE) specification, released in 2022. More importantly, our analysis showed that while EMTE had great potential as specified, a rigorous implementation with deep hardware and operating system support could be a breakthrough that produces an extraordinary new security mechanism.... Ultimately, we determined that to deliver truly best-in-class memory safety, we would carry out a massive engineering effort spanning all of Apple — including updates to Apple silicon, our operating systems, and our software frameworks. This effort, together with our highly successful secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature.

Today we're introducing the culmination of this effort: Memory Integrity Enforcement (MIE), our comprehensive memory safety defense for Apple platforms. Memory Integrity Enforcement is built on the robust foundation provided by our secure memory allocators, coupled with Enhanced Memory Tagging Extension (EMTE) in synchronous mode, and supported by extensive Tag Confidentiality Enforcement policies. MIE is built right into Apple hardware and software in all models of iPhone 17 and iPhone Air and offers unparalleled, always-on memory safety protection for our key attack surfaces including the kernel, while maintaining the power and performance that users expect. In addition, we're making EMTE available to all Apple developers in Xcode as part of the new Enhanced Security feature that we released earlier this year during WWDC...
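The core idea behind memory tagging can be illustrated with a toy software model: each allocation gets a small tag stored both in the pointer and alongside the memory itself; every access compares the two, and in EMTE's synchronous mode a mismatch faults immediately, which is how a use-after-free through a stale pointer gets caught. The Python sketch below is a deliberately simplified model (all class and method names are hypothetical); real MTE/EMTE does this in hardware, with a 4-bit tag carried in the pointer's unused top bits and applied to 16-byte memory granules.

```python
import random

TAG_BITS = 4  # MTE uses a 4-bit tag, giving 16 possible tag values

class TaggedHeap:
    """Toy model of tag-checked memory. A 'pointer' is a (tag, address)
    pair; the heap remembers the current tag of each granule. Every
    access checks pointer tag against memory tag, modeling EMTE's
    synchronous (fault-on-access) mode."""

    def __init__(self):
        self.memory_tags = {}  # address -> current tag of that granule
        self.storage = {}      # address -> stored value

    def alloc(self, addr, value=0):
        tag = random.randrange(1 << TAG_BITS)  # random tag per allocation
        self.memory_tags[addr] = tag
        self.storage[addr] = value
        return (tag, addr)                     # a "tagged pointer"

    def free(self, ptr):
        tag, addr = ptr
        self._check(tag, addr)
        # Retag on free so stale pointers stop matching. Real hardware
        # picks a new random tag; we increment deterministically so the
        # mismatch is guaranteed in this demo.
        self.memory_tags[addr] = (tag + 1) % (1 << TAG_BITS)

    def load(self, ptr):
        tag, addr = ptr
        self._check(tag, addr)
        return self.storage[addr]

    def _check(self, tag, addr):
        if self.memory_tags.get(addr) != tag:
            raise MemoryError(f"tag check failed at {addr:#x}")

heap = TaggedHeap()
p = heap.alloc(0x1000, value=42)
assert heap.load(p) == 42   # tags match: the access succeeds
heap.free(p)
try:
    heap.load(p)            # use-after-free: stale tag no longer matches
    caught = False
except MemoryError:
    caught = True
assert caught
```

In hardware the tag check costs no extra instructions on the access path, which is why Apple can run it always-on; the hard part, and much of what MIE adds on top, is keeping tag values secret from attackers (the "Tag Confidentiality Enforcement" policies mentioned above) so they cannot simply forge a matching tag.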

Based on our evaluations pitting Memory Integrity Enforcement against exceptionally sophisticated mercenary spyware attacks from the last three years, we believe MIE will make exploit chains significantly more expensive and difficult to develop and maintain, disrupt many of the most effective exploitation techniques from the last 25 years, and completely redefine the landscape of memory safety for Apple products. Because of how dramatically it reduces an attacker's ability to exploit memory corruption vulnerabilities on our devices, we believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems.
