Microsoft

Microsoft Sued Over Plans to Discontinue Windows 10 Support (courthousenews.com) 276

xA California man sued Microsoft Thursday over its plan to stop supporting Windows 10 on October 14th, reports Courthouse News Though Windows 11 was launched nearly four years ago, many of its billion or so worldwide users are clinging to the decade-old Windows 10... According to StatCounter, nearly 43% of Windows users still use the old version on their desktop computers....

"With only three months until support ends for Windows 10, it is likely that many millions of users will not buy new devices or pay for extended support," Klein writes in his complaint. "These users — some of whom are businesses storing sensitive consumer data — will be at a heightened risk of a cyberattack or other data security incident, a reality of which Microsoft is well aware...." According to one market analyst writing in 2023, Microsoft's shift away from Windows 10 will lead millions of customers to buy new devices and thrown out their old ones, consigning as many as 240 million PCs to the landfill....

Klein is asking a judge to order Microsoft to continue supporting Windows 10 without additional charge, until the number of devices running the older operating system falls bellow 10% of total Windows users. He says nothing about any money he seeking for himself, though it does ask for attorneys' fees.

Microsoft did not respond to an email requesting a comment.

The complaint also requests an order requiring Microsoft's advertising "to disclose clearly and prominently the approximate end-of-support date for the Windows operating system purchased with the device at the time of purchase" or at least "disclose that support is only guaranteed for a certain delineated period of time without additional cost, and to disclose the potential consequences of such end-of-support for device security and functionality."
Crime

Japanese Company Staff Implicated In Alleged Theft of Key TSMC Technology (cnn.com) 16

hackingbear shares a report from CNN: Taiwanese authorities have detained three current and former employees of the world's largest chip manufacturer, Taiwan Semiconductor Manufacturing Company (TSMC), for allegedly stealing trade secrets [and taking them to Japanese company Tokyo Electrons], prosecutors said Tuesday. Law enforcement officers questioned several suspects and witnesses late last month. They searched their homes and detained three of them over "serious suspicions of violating national security laws," the intellectual property branch of the Taiwan High Prosecutors Office said on Tuesday. After an internal investigation, the major Taiwanese exporter raised suspicions with authorities that its "core technologies" may have been illegally accessed by former and current staffers.

Nikkei Asia first reported on Tuesday that TSMC had fired staffers suspected of illegally obtaining business secrets related to the manufacturing technology for the company's 2-nanometer chip, the most advanced processor in the semiconductor industry that is expected to go into mass production this year. Taiwanese local media reported that a former TSMC employee now works at top chip manufacturing equipment supplier Tokyo Electron Ltd., and that the Japanese firm's Taiwan office was raided by investigators. On Thursday, Tokyo Electron confirmed it had dismissed an employee of its Taiwan subsidiary who was involved in the case, and said the company was cooperating with authorities. "As of now, based upon the findings of our internal investigation we have not confirmed any evidence of the respective confidential information shared to any third parties," it said in a statement.

The Internet

Net Neutrality Advocates Won't Appeal Loss (arstechnica.com) 96

Advocacy groups have decided not to appeal a federal court ruling striking down Biden-era net neutrality rules, citing the FCC's current Republican majority and a Supreme Court they view as hostile to the issue. Instead, they plan to push for open internet protections through Congress, state laws, and future court cases, while noting California's net neutrality law remains in effect. Ars Technica reports: "Trump's election flipped the FCC majority back to ideologues who've always taken the broadband industry's side on this crucial issue. And the justices making up the current Supreme Court majority have shown hostility toward sound legal reasoning on this precise question and a host of other topics too," said Matt Wood, VP of policy and general counsel at Free Press. [...] "The 6th Circuit's decision earlier this year was spectacularly wrong, and the protections it struck down are extremely important. But rather than attempting to overcome an agency that changed hands -- and a Supreme Court majority that cares very little about the rule of law -- we'll keep fighting for Internet affordability and openness in Congress, state legislatures and other court proceedings nationwide," Wood said.

Besides Free Press, groups announcing that they won't appeal are the Benton Institute for Broadband & Society, New America's Open Technology Institute, and Public Knowledge. "Though the 6th Circuit erred egregiously in its decision to overturn the FCC's 2024 Open Internet order, there are other ways we can advance our fight for consumer protections and ISP accountability than petitioning the Supreme Court to review this case -- and, given the current legal landscape, we believe our efforts will be more effective if focused on those alternatives," said Raza Panjwani, senior policy counsel at the Open Technology Institute. Net neutrality could still reach the Supreme Court in another case. Andrew Jay Schwartzman, senior counselor of the Benton Institute for Broadband & Society, said that "the 6th Circuit decision makes bad policy as well as bad law. Because it is at odds with the holdings of two other circuits, we expect to take the issue to the Supreme Court in a future case."

Bug

UK Courts Service 'Covered Up' IT Bug That Lost Evidence (bbc.co.uk) 20

Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"
Security

Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise (securityweek.com) 87

An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declare, "GPT-5's raw model is nearly unusable for enterprise out of the box. Even OpenAI's internal prompt layer leaves significant gaps, especially in Business Alignment."

NeuralTrust's jailbreak employed a combination of its own EchoChamber jailbreak and basic storytelling. "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail," claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context."

While NeuralTrust was developing its jailbreak designed to obtain instructions, and succeeding, on how to create a Molotov cocktail (a common test to prove a jailbreak), SPLX was aiming its own red teamers at GPT-5. The results are just as concerning, suggesting the raw model is 'nearly unusable'. SPLX notes that obfuscation attacks still work. "One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge." [...] The red teamers went on to benchmark GPT-5 against GPT-4o. Perhaps unsurprisingly, it concludes: "GPT-4o remains the most robust model under SPLX's red teaming, especially when hardened." The key takeaway from both NeuralTrust and SPLX is to approach the current and raw GPT-5 with extreme caution.

The Courts

AI Industry Horrified To Face Largest Copyright Class Action Ever Certified (arstechnica.com) 188

An anonymous reader quotes a report from Ars Technica: AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. Last week, Anthropic petitioned (PDF) to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine. Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued. "One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now."

In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court's erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America's global technological competitiveness." According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI. "Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."

United Kingdom

UK Secretly Allows Facial Recognition Scans of Passport, Immigration Databases (theregister.com) 25

An anonymous reader shares a report: Privacy groups report a surge in UK police facial recognition scans of databases secretly stocked with passport photos lacking parliamentary oversight. Big Brother Watch says the UK government has allowed images from the country's passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more. By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

Cloud

Amazon's Cloud Business Giving Federal Agencies Up To $1 Billion In Discounts (cnbc.com) 20

Amazon Web Services has struck a deal with the U.S. government to provide up to $1 billion in cloud service discounts through 2028. CNBC reports: The agreement is expected to speed up migration to the cloud, as well as adoption of artificial intelligence tools, the General Services Administration said. "AWS's partnership with GSA demonstrates a shared public-private commitment to enhancing America's AI leadership," the agency said in a release.

Amazon's cloud boss, Matt Garman, hailed the agreement as a "significant milestone in the large-scale digital transformation of government services." The discounts aggregated across federal agencies include credits to use AWS' cloud infrastructure, modernization programs and training services, as well as incentives for "direct partnership."
Further reading: OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year
Government

Taiwan's High 20% Tariff Rate Linked To Intel Investment (notebookcheck.net) 127

EreIamJH writes: German tech newsletter Notebookcheck is reporting that the unexpectedly high 20% tariff the U.S. recently imposed on Taiwan is intended to pressure TSMC to buy a 49% minority stake in Intel -- including an IP transfer and to spend $400 billion in the U.S., in addition to the $165 billion previously planned.
Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 112

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
The Courts

Country's Strictest Ban On Election Deepfakes Struck By Judge (politico.com) 26

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to overrule a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed by the creator of the video, Christopher Kohls, on First Amendment grounds, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

Government

Coding Error Blamed After Parts of Constitution Disappear From US Website (arstechnica.com) 71

An anonymous reader quotes a report from Ars Technica: The Library of Congress today said a coding error resulted in the deletion of parts of the US Constitution from Congress' website and promised a fix after many Internet users pointed out the missing sections this morning. The missing portions of the Constitution were restored to one part of the website a few hours after the Library of Congress statement and reappeared on a different part of the website another hour or so later. The Constitution Annotated website carried a notice saying it "is currently experiencing data issues. We are working to resolve this issue and regret the inconvenience."

"Upkeep of Constitution Annotated and other digital resources is a critical part of the Library's mission, and we appreciate the feedback that alerted us to the error and allowed us to fix it," the Library of Congress said. We asked the Library of Congress for specific details on the coding error, but we received only a statement that did not include specifics. "Due to a technical error, some sections of Article 1 were temporarily missing on the Constitution Annotated website. This problem has been corrected, and the missing sections have been restored," the statement said.

The deletion happened sometime in the past few weeks, as an Internet Archive capture shows that the text was still on the site until at least July 21. The deletions were being discussed this morning on Reddit and in news articles, with people expressing suspicions based on which parts of the Constitution were missing.

The Courts

Tornado Cash Co-Founder Storm Guilty in Crypto Mixing Case 8

A Manhattan jury convicted Tornado Cash co-founder Roman Storm on Wednesday of conspiring to operate an unlicensed money-transfer business, though jurors deadlocked on charges of money laundering conspiracy and sanctions violations after three days of deliberation.

Federal prosecutors alleged Storm helped cybercriminals launder more than $1 billion through the cryptocurrency mixing platform, which launched in 2019 as a decentralized protocol designed to obscure transaction origins by pooling and redistributing funds through smart contracts.
AI

OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year (openai.com) 25

OpenAI will provide ChatGPT access to US federal agencies for $1 annually through the General Services Administration's new AI marketplace that also includes Google and Anthropic as approved vendors. The nominal pricing represents the deepest discount GSA has negotiated with software providers, surpassing previous deals with Adobe and Salesforce.

OpenAI said it will not use federal worker data to train its models and agencies face no renewal requirements. The $1 rate applies only to the ChatGPT chatbot interface, not OpenAI's API for custom software development.
Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
Government

Swedish PM Under Fire For Using AI In Role 26

Sweden's Prime Minister Ulf Kristersson has come under fire after admitting that he frequently uses AI tools like ChatGPT for second opinions on political matters. The Guardian reports: ... Kristersson, whose Moderate party leads Sweden's center-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions."

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in a editorial of having "fallen for the oligarchs' AI psychosis." Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there. It is used more as a ballpark," he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umea University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. "The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope," she told the Dagens Nyheter newspaper. "We must demand that reliability can be guaranteed. We didn't vote for ChatGPT."
The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.

Privacy

AI Is Listening to Your Meetings. Watch What You Say. (msn.com) 33

AI meeting transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. WSJ found a series of mishaps that people confirmed on-record.

Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found her personal frustrations about a neighborhood Whole Foods and a kitchen mishap while making sweet potato recipes included in official meeting recaps distributed to colleagues.
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The around 100,000 conversation dataset provides a better sense of the scale of the problem, and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.

The Courts

Rivian Sues To Sell Its EVs Directly In Ohio (techcrunch.com) 74

Rivian has filed a federal lawsuit in Ohio to challenge a state law preventing it from selling electric vehicles directly to consumers, arguing the rule is anti-competitive and outdated. The law currently protects legacy dealerships while allowing Tesla a special carve-out, and Rivian wants similar rights to apply for a direct-sales license in the state. TechCrunch reports: "Ohio's prohibition of Rivian's direct-sales-only business model is irrational in the extreme: it reduces competition, decreases consumer choice, and drives up consumer costs and inconvenience -- all of which harm consumers -- with literally no countervailing benefit," lawyers for the company wrote in the complaint. Rivian is asking the court to allow the company to apply for a dealership license so it can sell vehicles directly. Ohio customers have to buy from Rivian vehicles from locations in other states where direct sales are allowed. The cars are then shipped to Rivian service centers within Ohio.

Allowing Rivian to sell directly would not be treading new legal ground, the company argues in its complaint. Tesla has had a license to sell in Ohio since 2013 and can sell directly to consumers. What's stopping Rivian is a 2014 law passed by the state's legislature. That law, which Rivian says came after an intense lobbying effort by the Ohio Automobile Dealers Association (OADA), effectively gave Tesla a carve-out and blocked any future manufacturers from acquiring the necessary dealership licenses.
"Consumer choice is a bedrock principle of America's economy. Ohio's archaic prohibition against the direct-sales of vehicles is unconstitutional, irrational, and harms Ohioans by reducing competition and choice and driving up costs and inconvenience," Mike Callahan, Rivian's chief administrative officer, said in a statement.

Slashdot Top Deals