Government

53 LA County Public Health Workers Fall for Phishing Email. 200,000 People May Be Affected (yahoo.com) 37

The Los Angeles Times reports that "The personal information of more than 200,000 people in Los Angeles County was potentially exposed after a hacker used a phishing email to steal the login credentials of 53 public health employees, the county announced Friday." Details that were possibly accessed in the February data breach include the first and last names, dates of birth, diagnoses, prescription information, medical record numbers, health insurance information, Social Security numbers and other financial information of Department of Public Health clients, employees and other individuals. "Affected individuals may have been impacted differently and not all of the elements listed were present for each individual," the agency said in a news release...

The data breach happened between Feb. 19 and 20 when employees received a phishing email, which tries to trick recipients into providing important information such as passwords and login credentials. The employees clicked on a link in the body of the email, thinking they were accessing a legitimate message, according to the agency...

The county is offering free identity monitoring through Kroll, a financial and risk advisory firm, to those affected by the breach. Individuals whose medical records were potentially accessed by the hacker should review them with their doctor to ensure the content is accurate and hasn't been changed. Officials say people should also review the Explanation of Benefits statement they receive from their insurance company to make sure they recognize all the services that have been billed. Individuals can also request credit reports and review them for any inaccuracies.

From the official statement by the county's Public Health department: Upon discovery of the phishing attack, Public Health disabled the impacted e-mail accounts, reset and re-imaged the user's device(s), blocked websites that were identified as part of the phishing campaign and quarantined all suspicious incoming e-mails. Additionally, awareness notifications were distributed to all workforce members to remind them to be vigilant when reviewing e-mails, especially those including links or attachments. Law enforcement was notified upon discovery of the phishing attack, and they investigated the incident.
Medicine

Security Lessons from the Change Healthcare Ransomware Catastrophe (csoonline.com) 45

The $22 million paid by Change Healthcare's parent company to unlock its systems "may have emboldened bad actors to further target the vulnerable industry," writes Axios: There were 44 attacks against the health care sector in April, the most that [cybersecurity firm] Recorded Future has seen in the four years it's been collecting data. It was also the second-largest month-over-month jump, after 30 ransomware attacks were recorded in March. There were 32 attacks in February and May.
But an analysis by the security-focused magazine CSO says the "disastrous" incident also "starkly illustrated the fragility of the healthcare sector, prompting calls for regulatory action." In response to the attack, US politicians have called for mandated baseline cybersecurity standards in the health sector, as well as better information sharing. They have also raised concerns that industry consolidation is increasing cyber risk.
So what went wrong? The attackers used a set of stolen credentials to remotely access the company's systems. But the article also notes that Change Healthcare's systems "suffered from a lack of segmentation," which enabled easy lateral movement once the attackers were inside — and that the company's acquisition may have played a role: Mergers and acquisitions create new cyber threats because they involve the integration of systems, data, and processes from different organizations, each with its own security protocols and potential vulnerabilities. "During this transition, cybercriminals can exploit discrepancies in security measures, gaps in IT governance, and the increased complexity of managing merged IT environments," Aron Brand, CTO of CTERA, told CSOonline. "Additionally, the heightened sharing of sensitive information between parties provides more opportunities for data breaches."
And "In the end, paying the ransom failed to protect UHG from secondary attempts at extortion." In April, cybercriminals from the RansomHub group threatened to leak portions of 6TB of sensitive data stolen in the breach of Change Healthcare and obtained through the affiliate Notchy, according to an analysis by security vendor Forescout. An estimated one in three Americans had their sensitive data exposed as a result of the attack. Such secondary scams are becoming increasingly commonplace, and healthcare providers are particularly at risk, according to compliance experts... The US Department of Health and Human Services (HHS) is investigating whether a breach of protected health information occurred and whether UHG or Change Healthcare violated strict healthcare sector privacy regulations.
Thanks to Slashdot reader snydeq for sharing the article.
Facebook

Meta Pauses Plans To Train AI Using European Users' Data, Bowing To Regulatory Pressure 22

Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union and U.K. From a report: The move follows pushback from the Irish Data Protection Commission (DPC), Meta's lead regulator in the EU, which is acting on behalf of several data protection authorities across the bloc. The U.K.'s Information Commissioner's Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised. "The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA," the DPC said in a statement Friday. "This decision followed intensive engagement between the DPC and Meta. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue."

While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe's stringent GDPR regulations have created obstacles for Meta -- and other companies -- looking to train their AI systems, including large language models, on user-generated material. However, Meta last month began notifying users of an upcoming change to its privacy policy, one that it said will give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect "the diverse languages, geography and cultural references of the people in Europe."
Privacy

Sonos Draws More Customer Anger - This Time For Its Privacy Policy (theverge.com) 15

An anonymous reader shares a report: It's been a rocky couple of months for Sonos -- so much so that CEO Patrick Spence now has a canned autoreply for customers emailing him to vent about the redesigned app. But as the company works to right the ship, restore trust, and get the new Sonos Ace headphones off to a strong start, it finds itself in the middle of yet another controversy.

As highlighted by repair technician and consumer privacy advocate Louis Rossmann, Sonos has made a significant change to its privacy policy, at least in the United States, with the removal of one key line. The updated policy no longer contains a sentence that previously said, "Sonos does not and will not sell personal information about our customers." That pledge is still present in other countries, but it's nowhere to be found in the updated US policy, which went into effect earlier this month.

AI

Clearview AI Used Your Face. Now You May Get a Stake in the Company. (nytimes.com) 40

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.

Microsoft

Microsoft Postpones Windows Recall After Major Backlash (windowscentral.com) 96

In an unprecedented move, Microsoft has announced that its big Copilot+ PC initiative that was unveiled last month will launch without its headlining "Windows Recall" AI feature next week on June 18. From a report: The feature, which captures snapshots of your screen every few seconds, was revealed to store sensitive user data in an unencrypted state, raising serious concerns among security researchers and experts.

Last week, Microsoft addressed these concerns by announcing that it would make changes to Windows Recall to ensure the feature handles data securely on device. At that time, the company insisted that Windows Recall would launch alongside Copilot+ PCs on June 18, with an update being made available at launch to address the concerns with Windows Recall. Now, Microsoft is saying Windows Recall will launch at a later date, beyond the general availability of Copilot+ PCs. This means these new devices will be missing their headlining AI feature at launch, as Windows Recall is now delayed indefinitely. The company says Windows Recall will be added in a future Windows update, but has not given a timeframe.
Further reading:
'Microsoft Has Lost Trust With Its Users and Windows Recall is the Straw That Broke the Camel's Back'
Windows 11's New Recall Feature Has Been Cracked To Run On Unsupported Hardware
Is the New 'Recall' Feature in Windows a Security and Privacy Nightmare?
Mozilla Says It's Concerned About Windows Recall
Google

Google's Privacy Sandbox Accused of Misleading Chrome Browser Users (theregister.com) 41

Richard Speed reports via The Register: Privacy campaigner noyb has filed a GDPR complaint regarding Google's Privacy Sandbox, alleging that turning on a "Privacy Feature" in the Chrome browser resulted in unwanted tracking by the US megacorp. The Privacy Sandbox API was introduced in 2023 as part of Google's grand plan to eliminate third-party tracking cookies. Rather than relying on those cookies, website developers can call the API to display ads matched to a user's interests. In the announcement, Google's VP of the Privacy Sandbox initiative called it "a significant step on the path towards a fundamentally more private web."

However, according to noyb, the problem is that although Privacy Sandbox is advertised as an improvement over third-party tracking, that tracking doesn't go away. Instead, it is done within the browser by Google itself. To comply with the rules, Google needs informed consent from users, which is where issues start. Noyb wrote today: "Google's internal browser tracking was introduced to users via a pop-up that said 'turn on ad privacy feature' after opening the Chrome browser. In the European Union, users are given the choice to either 'Turn it on' or to say 'No thanks,' so to refuse consent." Users would be forgiven for thinking that 'turn on ad privacy feature' would protect them from tracking. However, what it actually does is turn on first-party tracking.
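The mechanism Google proposes in place of third-party cookies can be sketched in miniature: the browser derives a handful of coarse interest categories locally from browsing history and exposes only those labels to sites. The toy Python below illustrates that idea; the category mapping, function shape, and names are invented for illustration and are not Chrome's actual Topics API:

```python
from collections import Counter

# Hypothetical site-to-category mapping; Chrome's real implementation
# classifies sites against a fixed taxonomy of coarse topics.
SITE_TOPICS = {
    "cycling-news.example": "Sports/Cycling",
    "bike-shop.example": "Sports/Cycling",
    "recipes.example": "Food & Drink",
}

def top_topics(history: list[str], k: int = 3) -> list[str]:
    """Derive the user's top-k interest categories locally, from browsing
    history that never leaves the device. A site calling the API would
    see only these coarse labels, not the history itself."""
    counts = Counter(SITE_TOPICS[site] for site in history if site in SITE_TOPICS)
    return [topic for topic, _ in counts.most_common(k)]

history = ["cycling-news.example", "bike-shop.example", "recipes.example"]
print(top_topics(history))  # → ['Sports/Cycling', 'Food & Drink']
```

noyb's objection is precisely that this profiling still happens: it has merely moved from third-party cookies into the browser itself, which is why the regulator's consent rules still apply.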

Max Schrems, honorary chairman of noyb, claimed: "Google has simply lied to its users. People thought they were agreeing to a privacy feature, but were tricked into accepting Google's first-party ad tracking. Consent has to be informed, transparent, and fair to be legal. Google has done the exact opposite." Noyb noted that Google had argued "choosing to click on 'Turn it on' would indeed be considered consent to tracking under Article 6(1)(a) of the GDPR."

AI

How Amazon Blew Alexa's Shot To Dominate AI 43

Amazon unveiled a new generative AI-powered version of its Alexa voice assistant at a packed event in September 2023, demonstrating how the digital assistant could engage in more natural conversation. However, nearly a year later, the updated Alexa has yet to be widely released, with former employees citing technical challenges and organizational dysfunction as key hurdles, Fortune reported Thursday. The magazine reports that the Alexa large language model lacks the necessary data and computing power to compete with rivals like OpenAI. Additionally, Amazon has prioritized AI development for its cloud computing unit, AWS, over Alexa, the report said. Despite a $4 billion investment in AI startup Anthropic, privacy concerns and internal politics have prevented Alexa's teams from fully leveraging Anthropic's technology.
Transportation

One of Two Major Data Brokers Shuts Down Product Related To Driver Behavior Patterns (therecord.media) 35

An anonymous reader quotes a report from The Record: The revelation earlier this year that General Motors had been selling driver behavior patterns to data brokers -- who in turn packaged and resold it to insurers -- has led at least one of two major data brokers to shut down its related product. That data broker, Verisk, disclosed last month that it has stopped accepting data from car makers and no longer sells the information to insurers, according to the organization Privacy4Cars, which received the response after sending the data broker an inquiry.

"Verisk received driving data from vehicles manufactured by General Motors, Honda, and Hyundai and may have provided a Driving Behavior Data History Report ("Report") to insurers upon request, as a service provider to such insurers, that included certain data provided by these manufacturers," the Verisk response to Privacy4Cars said. "Please note that Verisk no longer receives this data from these automakers to generate Reports and also no longer provides Reports to insurers," the statement added.

While Verisk has stopped selling car company-provided driver behavior patterns to insurers, LexisNexis Risk Solutions continues to prominently promote its driver behavior data product for insurers despite the mounting backlash from state governments, federal officials and consumer groups. LexisNexis Risk Solutions' Telematics OnDemand page remains online, boasting that it is "bringing automakers and insurance carriers together." "By partnering directly with automotive OEMs, LexisNexis is able to turn connected car data into tangible driving behavior insights that can be leveraged within insurance carriers' existing workflows," the page says. Much of LexisNexis Risk Solutions' work remains shrouded in secrecy.

Security

The Mystery of an Alleged Data Broker's Data Breach (techcrunch.com) 4

An anonymous reader shares a report: Since April, a hacker with a history of selling stolen data has claimed a data breach of billions of records -- impacting at least 300 million people -- from a U.S. data broker, which would make it one of the largest alleged data breaches of the year. The data, seen by TechCrunch, on its own appears partly legitimate -- if imperfect.

The stolen data, which was advertised on a known cybercrime forum, allegedly dates back years and includes U.S. citizens' full names, their home address history and Social Security numbers -- data that is widely available for sale by data brokers. But confirming the source of the alleged data theft has proven inconclusive; such is the nature of the data broker industry, which gobbles up individuals' personal data from disparate sources with little to no quality control. The alleged data broker in question, according to the hacker, is National Public Data, which bills itself as "one of the biggest providers of public records on the Internet."

On its official website, National Public Data claimed to sell access to several databases: a "People Finder" one where customers can search by Social Security number, name and date of birth, address or telephone number; a database of U.S. consumer data "covering over 250 million individuals;" a database containing voter registration data that contains information on 100 million U.S. citizens; a criminal records one; and several more. Malware research group vx-underground said on X (formerly Twitter) that they reviewed the whole stolen database and could "confirm the data present in it is real and accurate."

Privacy

New York Times Source Code Stolen Using Exposed GitHub Token (bleepingcomputer.com) 52

The New York Times has confirmed that its internal source code was leaked on 4chan after being stolen from the company's GitHub repositories in January 2024. BleepingComputer reports: As first seen by VX-Underground, the internal data was leaked on Thursday by an anonymous user who posted a torrent of a 273GB archive containing the stolen data. "Basically all source code belonging to The New York Times Company, 270GB," reads the 4chan forum post. "There are around 5 thousand repos (out of them less than 30 are additionally encrypted I think), 3.6 million files total, uncompressed tar."

While BleepingComputer did not download the archive, the threat actor shared a text file containing a complete list of the 6,223 folders stolen from the company's GitHub repository. The folder names indicate that a wide variety of information was stolen, including IT documentation, infrastructure tools, and source code, allegedly including the viral Wordle game. A 'readme' file in the archive states that the threat actor used an exposed GitHub token to access the company's repositories and steal the data. The company said that the breach of its GitHub account did not affect its internal corporate systems and had no impact on its operations.
The Times said in a statement to BleepingComputer: "The underlying event related to yesterday's posting occurred in January 2024 when a credential to a cloud-based third-party code platform was inadvertently made available. The issue was quickly identified and we took appropriate measures in response at the time. There is no indication of unauthorized access to Times-owned systems nor impact to our operations related to this event. Our security measures include continuous monitoring for anomalous activity."
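The failure mode here — a credential "inadvertently made available" — is exactly what secret scanners hunt for. Below is a minimal illustrative sketch that assumes nothing about the Times' actual setup: the `ghp_` and `github_pat_` prefixes are GitHub's published token formats, but the scanner itself is a toy (real tools such as GitHub's own secret scanning or gitleaks cover many more credential types and scan full git history):

```python
import re

# Patterns for common GitHub token formats: ghp_ = classic personal
# access token, github_pat_ = fine-grained token. Illustrative only.
TOKEN_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    re.compile(r"github_pat_[A-Za-z0-9_]{22,}"),
]

def find_exposed_tokens(text: str) -> list[str]:
    """Return any substrings of `text` that look like GitHub credentials."""
    hits: list[str] = []
    for pattern in TOKEN_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# A config file with a token embedded in a clone URL -- the kind of
# mistake that exposes a repository to anyone who sees the file.
config = 'remote = "https://ghp_' + "a" * 36 + '@github.com/example/repo.git"'
print(find_exposed_tokens(config))  # prints the matched token
```

Running a check like this in CI before every push is a cheap way to catch the class of leak described above, though it only finds patterns it knows about.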
Encryption

Apple Introduces Standalone 'Passwords' App (macrumors.com) 39

An anonymous reader quotes a report from MacRumors: iOS 18, iPadOS 18, and macOS Sequoia feature a new, dedicated Passwords app for faster access to important credentials. The Passwords app replaces iCloud Keychain, which is currently only accessible via a menu in Settings. Now, passwords are available directly via a standalone app for markedly quicker access, bringing it more in line with rival services. The Passwords app consolidates various credentials, including passwords, passkeys, and Wi-Fi passwords, into a single, easily accessible location. Users can filter and sort their accounts based on various criteria, such as recently created accounts, credential type, or membership in shared groups.

Passwords is also compatible with Windows via the iCloud for Windows app, extending its utility to users who operate across different platforms. The developer beta versions of iOS 18, iPadOS 18, and macOS Sequoia are available today with official release to the public scheduled for the fall, providing an early look at the Passwords app.

AI

Apple Unveils Apple Intelligence 29

As rumored, Apple today unveiled Apple Intelligence, its long-awaited push into generative artificial intelligence (AI), promising highly personalized experiences built with safety and privacy at its core. The feature, referred to as "A.I.", will be integrated into Apple's various operating systems, including iOS, macOS, and the latest, visionOS. CEO Tim Cook said that Apple Intelligence goes beyond artificial intelligence, calling it "personal intelligence" and "the next big step for Apple."

Apple Intelligence is built on large language and intelligence models, with much of the processing done locally on the latest Apple silicon. Private Cloud Compute is being added to handle more intensive tasks while maintaining user privacy. The update also includes significant changes to Siri, Apple's virtual assistant, which will now support typed queries and deeper integration into various apps, including third-party applications. This integration will enable users to perform complex tasks without switching between multiple apps.

Apple Intelligence will roll out to the latest versions of Apple's operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2.
Desktops (Apple)

Apple Unveils macOS 15 'Sequoia' at WWDC, Introduces Window Tiling and iPhone Mirroring (arstechnica.com) 35

At its Worldwide Developers Conference, Apple formally introduced macOS 15, codenamed "Sequoia." The new release combines features from iOS 18 with Mac-specific improvements. One notable addition is automated window tiling, allowing users to arrange windows on their screen without manual resizing or switching to full-screen mode. Another feature, iPhone Mirroring, streams the iPhone's screen to the Mac, enabling app use with the Mac's keyboard and trackpad while keeping the phone locked for privacy.

Gamers will appreciate the second version of Apple's Game Porting Toolkit, simplifying the process of bringing Windows games to macOS and vice versa. Sequoia also incorporates changes from iOS and iPadOS, such as RCS support and expanded Tapback reactions in Messages, a redesigned Calculator app, and the Math Notes feature for typed equations in Notes. Additionally, all Apple platforms and Windows will receive a new Passwords app, potentially replacing standalone password managers. A developer beta of macOS Sequoia is available today, with refined public betas coming in July and a full release planned for the fall.
Crime

Should Police Departments Use Drones? (wired.com) 195

Wired visits Chula Vista, California (population: 275,487) — where since 2018 drones have been dispatched by police "teleoperators" monitoring 911 calls. ("Noise complaints, car accidents, overdoses, domestic disputes...") After nearly 20,000 drone flights, it's become the envy of other police departments, according to Wired's article, as other police departments "look to expand their use of unmanned aerial aircraft." The [Chula Vista] department says that its drones provide officers with critical intelligence about incidents they are responding to ahead of initiating in-person contact — which the CVPD says has reduced unnecessary police contacts, decreased response times, and saved lives. But a WIRED investigation paints a complicated picture of the trade-offs between public safety and privacy. In Chula Vista, drone flight paths trace a map of the city's inequality, with poorer residents experiencing far more exposure to the drones' cameras and rotors than their wealthier counterparts, a WIRED analysis of nearly 10,000 drone flight records from July 2021 to September 2023 found. The drones, often dispatched for serious incidents like reports of armed individuals, are also routinely deployed for minor issues such as shoplifting, vandalism, and loud music. [Drones are sent in response to about 1 in every 14 calls.] Early in the Covid-19 pandemic, the city even used drones to broadcast public service announcements to homeless encampments.

Despite the police promoting the benefits of the "Drone as First Responder" program, residents who encounter the technology day-to-day report feeling constantly watched. Some say they are afraid to spend time in their backyards; they fear that the machines are following them down the street, spying on them while they use the public pool or change their clothes. One resident says that he was so worried that the drones were harassing him that he went to the emergency room for severe depression and exhaustion. [A 60-year-old professor told Wired that the sound of drones kept them awake at night.]

The police drones, equipped with cameras and zoom lenses powerful enough to capture faces clearly and constantly recording while in flight, have amassed hundreds of hours of video footage of the city's residents. Their flight paths routinely take them over backyards and above public pools, high schools, hospitals, churches, mosques, immigration law firms, and even the city's Planned Parenthood facility. Privacy advocates argue that the extensive footage captured by the drones makes it difficult to distinguish between flights responding to specific incidents and mass surveillance from the sky. Department secrecy around the recordings remains the subject of ongoing litigation... At the time of our analysis, approximately one in 10 drone flights listed on the department's transparency portal lacked a stated purpose and could not be connected to any relevant 911 call.

Bitcoin

US Justice Department Indicts Creators of Bitcoin-Anonymizing 'Samourai' Wallet (reason.com) 92

America's Justice Department "indicted the creators of an application that helps people spend their bitcoins anonymously," writes Reason.com: They're accused of "conspiracy to commit money laundering." Why "conspiracy to commit" as opposed to just "money laundering"?

Because they didn't hold anyone else's money or do anything illegal with it. They provided a privacy tool that may have enabled other people to do illegal things with their bitcoin... What this tool does is offer what's known as a "coinjoin," a method for anonymizing bitcoin transactions by mixing them with other transactions, as the project's founder, Keonne Rodriguez, explained to Reason in 2022: "I think the best analogy for it is like smelting gold," he said. "You take your Bitcoin, you add it into [the coinjoin protocol] Whirlpool, and Whirlpool smelts it into new pieces that are not associated to the original piece."
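The "smelting" analogy maps onto a simple mechanism: several users contribute equal-denomination inputs to one combined transaction, which pays uniform amounts to fresh addresses, so an observer cannot match any output to its input by amount. The toy Python below sketches only that idea; the address names and structure are invented, and Samourai's real Whirlpool protocol additionally involves a coordinator, fees, and actual Bitcoin transaction construction:

```python
import random

def coinjoin(inputs: dict[str, float], denomination: float) -> list[tuple[str, float]]:
    """Toy coinjoin: pool equal-denomination inputs from several users
    into one combined transaction paying uniform amounts to fresh
    addresses. Because every output looks identical, an observer can no
    longer link outputs back to inputs by amount."""
    assert all(amount == denomination for amount in inputs.values()), \
        "mixing only works if every participant contributes the same amount"
    fresh_addresses = [f"new_addr_{i}" for i in range(len(inputs))]
    random.shuffle(fresh_addresses)  # break positional linkage too
    return [(addr, denomination) for addr in fresh_addresses]

mixed = coinjoin({"alice_utxo": 0.01, "bob_utxo": 0.01, "carol_utxo": 0.01}, 0.01)
# Three indistinguishable 0.01 BTC outputs, in random order.
```

Note the uniform denomination is what does the anonymizing work: mixing unequal amounts would let an observer re-link inputs to outputs by value.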

Reason argues that providing the tool isn't a crime, just like selling someone a kitchen knife isn't a crime: The government's decision to indict Rodriguez and his partner William Lonergan Hill is also an attack on free speech because all they did was write open-source code and make it widely available. "It is an issue of a chilling effect on free speech," attorney Jerry Brito, who heads up the cryptocurrency nonprofit Coin Center, told Reason after the U.S. Treasury went after the creators of another piece of anonymizing software...

The most important thing about bitcoin, and money like it, isn't its price. It's the check it places on the government's ability to devalue, censor, and surveil our money. Creators of open-source tools like Samourai Wallet should be celebrated, not threatened with a quarter-century in a federal prison.

Long-time Slashdot reader SonicSpike shared the article...
Microsoft

Windows Won't Take Screenshots of Everything You Do After All (theverge.com) 81

Microsoft says it's making its new Recall feature in Windows 11 that screenshots everything you do on your PC an opt-in feature and addressing various security concerns. From a report: The software giant first unveiled the Recall feature as part of its upcoming Copilot Plus PCs last month, but since then, privacy advocates and security experts have been warning that Recall could be a "disaster" for cybersecurity without changes. Thankfully, Microsoft has listened to the complaints and is making a number of changes before Copilot Plus PCs launch on June 18th. Microsoft had originally planned to turn Recall on by default, but the company now says it will offer the ability to disable the controversial AI-powered feature during the setup process of new Copilot Plus PCs. "If you don't proactively choose to turn it on, it will be off by default," says Windows chief Pavan Davuluri.
Privacy

Bangladeshi Police Agents Accused of Selling Citizens' Personal Information on Telegram (techcrunch.com) 5

An anonymous reader shares a report: Two senior officials working for anti-terror police in Bangladesh allegedly collected and sold classified and personal information of citizens to criminals on Telegram, TechCrunch has learned. The data allegedly sold included national identity details of citizens, cell phone call records and other "classified secret information," according to a letter signed by a senior Bangladeshi intelligence official, seen by TechCrunch.

The letter, dated April 28, was written by Brigadier General Mohammad Baker, who serves as a director of Bangladesh's National Telecommunications Monitoring Center, or NTMC, the country's electronic eavesdropping agency. Baker confirmed the legitimacy of the letter and its contents in an interview with TechCrunch. "Departmental investigation is ongoing for both the cases," Baker said in an online chat, adding that the Bangladeshi Ministry of Home Affairs ordered the affected police organizations to take "necessary action against those officers." The letter, which was originally written in Bengali and addressed to the senior secretary of the Ministry of Home Affairs Public Security Division, alleges the two police agents accessed and passed "extremely sensitive information" of private citizens on Telegram in exchange for money.

AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and only allowing AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
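To make the advisory nature of such tags concrete, here is a minimal sketch of how a well-behaved scraper could check for them. The article doesn't specify the exact tag names Cara emits; the `noai`/`noimageai` robots-meta directives used below are an assumption based on a common convention, and `SAMPLE_PAGE` is invented for illustration.

```python
from html.parser import HTMLParser

# Hypothetical page carrying an advisory "NoAI" robots-meta directive.
SAMPLE_PAGE = """
<html><head>
  <meta name="robots" content="noai, noimageai">
</head><body><img src="artwork.png"></body></html>
"""

class NoAIMetaFinder(HTMLParser):
    """Collects robots-meta directives so a cooperative scraper can honor them."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            for token in a.get("content", "").split(","):
                self.directives.add(token.strip().lower())

def scraping_discouraged(html_text: str) -> bool:
    """Return True if the page asks AI scrapers to stay away."""
    finder = NoAIMetaFinder()
    finder.feed(html_text)
    return bool({"noai", "noimageai"} & finder.directives)

print(scraping_discouraged(SAMPLE_PAGE))  # True
```

Note that nothing forces a scraper to run a check like this, which is exactly why the tags carry no real enforcement power.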

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
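The "minimal changes invisible to the human eye" idea can be sketched in a few lines. This is only a toy illustration of the bounded-perturbation concept: real Glaze computes an adversarial perturbation targeted at a specific feature extractor, whereas this sketch just applies random noise capped at a few intensity levels (the `EPSILON` bound and the random image are assumptions for the demo).

```python
import numpy as np

rng = np.random.default_rng(0)
EPSILON = 3  # max per-pixel change out of 255; small enough to be imperceptible

# Stand-in for an artwork: a 64x64 RGB image with values in [0, 255].
artwork = rng.integers(0, 256, size=(64, 64, 3), dtype=np.int16)

# A bounded perturbation. Glaze would optimize this against an AI model's
# perception; here it is random, purely to show the magnitude involved.
delta = rng.integers(-EPSILON, EPSILON + 1, size=artwork.shape, dtype=np.int16)
cloaked = np.clip(artwork + delta, 0, 255)

# Clipping only pulls values back toward the valid range, so the change
# per pixel never exceeds EPSILON.
max_change = int(np.abs(cloaked - artwork).max())
print(max_change <= EPSILON)  # True
```

A human viewer cannot distinguish a 3-level shift out of 255, but a model whose features were targeted by such a perturbation can be pushed toward a very different interpretation of the image.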

Chrome

Google Is Working On a Recall-Like Feature For Chromebooks, Too (pcworld.com) 47

In an interview with PCWorld's Mark Hachman, Google's ChromeOS chief said the company is cautiously exploring a Recall-like feature for Chromebooks, dubbed "memory." Microsoft's AI-powered Recall feature for Windows 11 was unveiled at the company's Build 2024 conference last month. The feature aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides. Given the obvious privacy and security concerns, many users have denounced the feature, describing it as "literal spyware or malware." PCWorld reports: I sat down with John Solomon, the vice president at Google responsible for ChromeOS, for a lengthy interview around what it means for Google's low-cost platform as the PC industry moved to AI PCs. Microsoft, of course, is launching Copilot+ PCs alongside Qualcomm's Snapdragon X Elite -- an Arm chip. And Chromebooks, of course, have a long history with Arm. But it's Recall that we eventually landed upon -- or, more precisely, how Google sidles into the same space. (Recall is great in theory, but in practice may be more problematic.) Recall the Project Astra demo that Google showed off at its Google I/O conference. One of the key though understated aspects of it was how Astra "remembered" where the user's glasses were.

Astra didn't appear to be an experience that could be replicated on the Chromebook. Most users aren't going to carry a Chromebook around (a device which typically lacks a rear camera) visually identifying things. Solomon respectfully disagreed. "I think there's a piece of it which is very relevant, which is this notion of having some kind of context and memory of what's been happening on the device," Solomon said. "So think of something that's like, maybe viewing your screen and then you walk away, you get distracted, you chat to someone at the watercooler and you come back. You could have some kind of rewind function, you could have some kind of recorder function that would kind of bring you back to that. So I think that there is a crossover there."

"We're actually talking to that team about where the use case could be," Solomon added of the "memory" concept. "But I think there's something there in terms of screen capture in a way that obviously doesn't feel creepy and feels like the user's in control." That sounds a lot like Recall! But Solomon was quick to point out that one of the things that has turned off users to Recall was the lack of user control: deciding when, where, and if to turn it on. "I'm not going to talk about Recall, but I think the reason that some people feel it's creepy is when it doesn't feel useful, and it doesn't feel like something they initiated or that they get a clear benefit from it," Solomon said. "If the user says like -- let's say we're having a meeting, and discussing complex topics. There's a benefit of running a recorded function if at the end of it it can be useful for creating notes and the action items. But you as a user need to put that on and decide where you want to have that."

Slashdot Top Deals