Privacy

License Plate-Scanning Company Violates Privacy of Millions of California Drivers, Argues Class Action (sfgate.com) 49

"If you drive a car in California, you may be in for a payday thanks to a lawsuit alleging privacy violations by a Texas company," reports SFGate: The 2021 lawsuit, given class-action status in September, alleges that Digital Recognition Network is breaking a California law meant to regulate the use of automatic license plate readers. DRN, a Fort Worth-based company, uses plate-scanning cameras to create location data for people's vehicles, then sells that data to marketers, car repossessors and insurers.

What's particularly notable about the case is the size of the class. The court has established that if you're a California resident whose license plate data was collected by DRN at least 15 times since June 2017, you're a class member. The plaintiff's legal team estimates that the class includes about 23 million people, whose plates were scanned by DRN cameras mounted on cars driving public roads. The case website lets Californians check whether their plates were scanned.

Barring a settlement or delay, the trial to decide whether DRN must pay a penalty to those class members will begin on May 17 in San Diego County Superior Court... The company's cameras scan 220 million plates a month, its website says, and customers can use plate data to "create comprehensive vehicle stories."

A lawyer for the firm representing class members told SFGATE Friday that his team will try to show DRN's business is a "mass surveillance program."
AI

Tinder Owner Inks Deal With OpenAI (techcrunch.com) 27

An anonymous reader quotes a report from TechCrunch: In a press release written with help from ChatGPT, Match Group announced an enterprise agreement with the AI chatbot's maker, OpenAI. The new agreement includes over 1,000 enterprise licenses for the dating app giant and home to Tinder, Match, OkCupid, Hinge and others. The AI tech will be used to help Match Group employees with work-related tasks, the company says, and comes as part of Match's $20 million-plus bet on AI in 2024. [...] As for the news itself, Match Group says it will begin using the AI tech, and specifically ChatGPT-4, to aid with coding, design, analysis, template building, and other daily tasks, including, as you can tell, communications. To keep its corporate data protected, only trained and licensed Match Group employees will have access to OpenAI's tools, it noted.

Before being able to use these tools, Match Group employees will also have to undergo mandatory training that focuses on responsible use, the technology's capabilities, as well as its limitations. The use will be guided by the company's existing privacy practices and AI principles, too. The company declined to share the cost of the agreement or how it will impact its bottom line, but Match believes that the AI tools will make teams more productive. Match execs recently spoke of the company's plans for AI during the company's fourth-quarter earnings call, noting that, this year, the app maker will use AI technology to both evolve its existing products and build new ones. The company's shareholder letter explained how AI could help to improve various aspects of the dating app journey. For instance, it could help with profile creation, where Match is testing features like an AI-powered photo picker, and generative AI for help making bios. The company said that AI will also improve its matching abilities and post-match guidance, in areas like conversation starters, nudges, and offering date ideas.

Privacy

Vending Machine Error Reveals Secret Face Image Database of College Students (arstechnica.com) 100

Ashley Belanger reports via Ars Technica: Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent. The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, "Invenda.Vending.FacialRecognitionApp.exe," displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine. "Hey, so why do the stupid M&M machines have facial recognition?" SquidKid47 pondered. The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS. [...]

MathNEWS' investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo's campus. Adaria Vending Services told MathNEWS that "what's most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface -- never taking or storing images of customers." According to Adaria and Invenda, students shouldn't worry about data privacy because the vending machines are "fully compliant" with the world's toughest data privacy law, the European Union's General Data Protection Regulation (GDPR). "These machines are fully GDPR compliant and are in use in many facilities across North America," Adaria's statement said. "At the University of Waterloo, Adaria manages last mile fulfillment services -- we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines." [...]

But University of Waterloo students like Stanley now question Invenda's "commitment to transparency" in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News. On Reddit, while some students joked that SquidKid47's face "crashed" the machine, others asked if "any pre-law students wanna start up a class-action lawsuit?" One commenter summed up students' frustration by typing in all caps, "I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!"

United Kingdom

Leisure Firm in UK Told Scanning Staff Faces is Illegal (bbc.co.uk) 17

Bruce66423 writes: The data watchdog has ordered a leisure centre group to stop using facial recognition tech to monitor its staff. The Information Commissioner's Office (ICO) says Serco Leisure has been unlawfully processing the biometric data of more than 2,000 employees at 38 UK leisure facilities. It did so to check staff attendance - a practice the ICO said was "neither fair nor proportionate."

Serco Leisure says it will comply with the enforcement notice. But it added it had taken legal advice prior to installing the cameras, and said staff had not complained about them during the five years they had been in place. The firm said the technology was introduced to "make clocking-in and out easier and simpler" for workers. "We engaged with our team members in advance of its roll-out and its introduction was well-received by colleagues," the company said in a statement.

United States

FTC To Ban Avast From Selling Browsing Data For Advertising Purposes (bleepingcomputer.com) 28

The U.S. FTC will order Avast to pay $16.5 million and ban the company from selling users' web browsing data or licensing it for advertising purposes. From a report: The complaint says Avast violated millions of consumers' rights by collecting, storing, and selling their browsing data without their knowledge and consent while misleading them that the products used to harvest their data would block online tracking. "While the FTC's privacy lawsuits routinely take on firms that misrepresent their data practices, Avast's decision to expressly market its products as safeguarding people's browsing records and protecting data from tracking only to then sell those records is especially galling," said FTC Chair Lina M. Khan.

"Moreover, the volume of data Avast released is staggering: the complaint alleges that by 2020 Jumpshot had amassed 'more than eight petabytes of browsing information dating back to 2014.'" More specifically, the FTC says UK-based company Avast Limited has harvested consumers' web browsing information without their knowledge or consent using Avast browser extensions and antivirus software since at least 2014.

China

Leaked Hacking Files Show Chinese Spying On Citizens and Foreigners Alike (pbs.org) 18

An anonymous reader quotes a report from PBS: Chinese police are investigating an unauthorized and highly unusual online dump of documents from a private security contractor linked to the nation's top policing agency and other parts of its government -- a trove that catalogs apparent hacking activity and tools to spy on both Chinese and foreigners. Among the apparent targets of tools provided by the impacted company, I-Soon: ethnicities and dissidents in parts of China that have seen significant anti-government protests, such as Hong Kong or the heavily Muslim region of Xinjiang in China's far west. The dump of scores of documents late last week and subsequent investigation were confirmed by two employees of I-Soon, known as Anxun in Mandarin, which has ties to the powerful Ministry of Public Security. The dump, which analysts consider highly significant even if it does not reveal any especially novel or potent tools, includes hundreds of pages of contracts, marketing presentations, product manuals, and client and employee lists. They reveal, in detail, the methods Chinese authorities used to surveil dissidents overseas, hack other nations and promote pro-Beijing narratives on social media.

The documents show apparent I-Soon hacking of networks across Central and Southeast Asia, as well as Hong Kong and the self-ruled island of Taiwan, which Beijing claims as its territory. The hacking tools are used by Chinese state agents to unmask users of social media platforms outside China such as X, formerly known as Twitter, break into email and hide the online activity of overseas agents. Also described are devices disguised as power strips and batteries that can be used to compromise Wi-Fi networks. I-Soon and Chinese police are investigating how the files were leaked, the two I-Soon employees told the AP. One of the employees said I-Soon held a meeting Wednesday about the leak, at which employees were told it wouldn't affect business too much and to "continue working as normal." The AP is not naming the employees -- who did provide their surnames, per common Chinese practice -- out of concern about possible retribution. The source of the leak is not known.
Jon Condra, an analyst with Recorded Future, a cybersecurity company, called it the most significant leak ever linked to a company "suspected of providing cyber espionage and targeted intrusion services for the Chinese security services." According to Condra, citing the leaked material, I-Soon's targets include governments, telecommunications firms abroad and online gambling companies within China.
Privacy

Vietnam To Collect Biometrics For New ID Cards (theregister.com) 33

Starting in July, the Vietnamese government will begin collecting biometric information from its citizens when issuing new identification cards. The Register reports: Prime minister Pham Minh Chinh instructed the nation's Ministry of Public Security to collect the data in the form of iris scans, voice samples and actual DNA, in accordance with amendments to Vietnam's Law on Citizen Identification. The ID cards are issued to anyone over the age of 14 in Vietnam, and are optional for citizens between the ages of 6 and 14, according to a government news report. Amendments to the Law on Citizen Identification that allow collection of biometrics passed on November 27 of last year.

The law allows recording of blood type among the DNA-related information that will be contained in a national database to be shared across agencies "to perform their functions and tasks." The ministry will work with other parts of the government to integrate the identification system into the national database. [...] Vietnam's future identity cards will incorporate the functions of health insurance cards, social insurance books, driver's licenses, birth certificates, and marriage certificates, as defined by the amendment.

As for how the information will be collected, the amendments state: "Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people's voices are shared with identity management agencies for updating and adjusting to the identity database."

EU

EU Opens Formal Investigation Into TikTok Over Possible Online Content Breaches (reuters.com) 18

An anonymous reader quotes a report from Reuters: The European Union will investigate whether ByteDance's TikTok breached online content rules aimed at protecting children and ensuring transparent advertising, an official said on Monday, putting the social media platform at risk of a hefty fine. EU industry chief Thierry Breton said he took the decision after analyzing the short video app's risk assessment report and its replies to requests for information, confirming a Reuters story. "Today we open an investigation into TikTok over suspected breach of transparency & obligations to protect minors: addictive design & screen time limits, rabbit hole effect, age verification, default privacy settings," Breton said on X.

The European Union's Digital Services Act (DSA), which applies to all online platforms since Feb. 17, requires in particular very large online platforms and search engines to do more to tackle illegal online content and risks to public security. TikTok's owner, China-based ByteDance, could face fines of up to 6% of its global turnover if TikTok is found guilty of breaching DSA rules. TikTok said it would continue to work with experts and the industry to keep young people on its platform safe and that it looked forward to explaining this work in detail to the European Commission.

The European Commission said the investigation will focus on the design of TikTok's system, including algorithmic systems which may stimulate behavioral addictions and/or create so-called 'rabbit hole effects'. It will also probe whether TikTok has put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors. As well as the issue of protecting minors, the Commission is looking at whether TikTok provides a reliable database on advertisements on its platform so that researchers can scrutinize potential online risks.

Encryption

Signal Finally Rolls Out Usernames, So You Can Keep Your Phone Number Private (wired.com) 38

Encrypted messaging app Signal has launched a new feature allowing users to conceal their phone numbers and instead use usernames, in a move aimed at boosting privacy protections long sought by cybersecurity experts and privacy advocates. From a report: Rather than give your phone number to other Signal contacts as the identifier they use to begin a conversation with you, in other words, you can now choose to be discoverable via a chosen handle -- or even to prevent anyone who does have your phone number from finding you on Signal.

The use of phone numbers has long been perhaps the most persistent criticism of Signal's design. These new privacy protections finally offer a fix, says Meredith Whittaker, Signal's executive director. "We want to build a communications app that everyone in the world can easily use to connect with anyone else privately. That 'privately' is really in bold, underlined, in italics," Whittaker tells WIRED. "So we're extremely sympathetic to people who might be using Signal in high-risk environments who say, 'The phone number is really sensitive information, and I don't feel comfortable having that disseminated broadly.'"

Google

This Tiny Website Is Google's First Line of Defense in the Patent Wars (wired.com) 45

A trio of Google engineers recently came up with a futuristic way to help anyone who stumbles through presentations on video calls. They propose that when algorithms detect a speaker's pulse racing or "umms" lengthening, a generative AI bot that mimics their voice could simply take over. That cutting-edge idea wasn't revealed at a big company event or in an academic journal. Instead, it appeared in a 1,500-word post on a little-known, free website called TDCommons.org that Google has quietly owned and funded for nine years. WIRED: Until WIRED received a link to an idea on TDCommons last year and got curious, Google had never spoken with the media about its website. Scrolling through TDCommons, you can read Google's latest ideas for coordinating smart home gadgets for better sleep, preserving privacy in mobile search results, and using AI to summarize a person's activities from their photo archives. And the submissions aren't exclusive to Google; about 150 organizations, including HP, Cisco, and Visa, also have posted inventions to the website.

The website is a home for ideas that seem potentially valuable but not worth spending tens of thousands of dollars seeking a patent for. By publishing the technical details and establishing "prior art," Google and other companies can head off future disputes by blocking others from filing patents for similar concepts. Google gives employees a $1,000 bonus for each invention they post to TDCommons -- a tenth of what it awards its patent seekers -- but they also get an immediately shareable link to gloat about otherwise secretive work.

Privacy

Wyze Says Camera Breach Let 13,000 Customers Briefly See Into Other People's Homes 26

An anonymous reader shares a report: Wyze's problems with letting its security camera customers briefly see into other customers' homes are a lot worse than we thought. Last week, co-founder David Crosby said that "so far" the company had identified 14 people who were able to briefly see into a stranger's property because they were shown an image from someone else's Wyze camera. Now we're being told the number of affected customers has ballooned to 13,000.

The revelation came from an email sent to customers entitled "An Important Security Message from Wyze," in which the company copped to the breach and apologized, while also attempting to lay some of the blame on its web hosting provider AWS. [...] The breach, however, occurred as Wyze was attempting to bring its cameras back online. Customers were reporting seeing mysterious images and video footage in their own Events tab. Wyze disabled access to the tab and launched its own investigation.
AI

Will 'Precision Agriculture' Be Harmful to Farmers? (substack.com) 61

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural workforce. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse-engineered a John Deere display to run the Doom video game, for insights into the company's internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer's control and shares it with equipment manufacturers and service providers, often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on "information gathering" in the latest farm bill.

Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment managed and controlled by, and accountable to, distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers.

"While we can all agree that it's coming...the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' farm data security and privacy are being compromised. This directly affects national food security because if thousands, or tens of thousands, of tractors are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... I think our universities have also been delinquent in grasping and warning farmers about the data-theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment.
Thanks to long-time Slashdot reader chicksdaddy for sharing the article.
AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
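
The licensing scheme the paper describes can be sketched in a few lines. This is purely an illustration of the idea, not the researchers' design: it uses a symmetric HMAC where a real deployment would use asymmetric signatures in a hardware co-processor, and all names, keys, and throttle values here are invented for the example.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the regulator's signing key (a real scheme would use
# public-key signatures verified inside the on-chip co-processor).
REGULATOR_KEY = b"demo-regulator-secret"

def issue_license(chip_id: str, valid_until: float) -> dict:
    """Regulator side: sign a (chip_id, expiry) pair as the 'certificate'."""
    payload = json.dumps({"chip_id": chip_id, "valid_until": valid_until},
                         sort_keys=True)
    sig = hmac.new(REGULATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def allowed_performance(chip_id: str, license: dict, now: float) -> float:
    """Chip side: full speed on a valid license, throttled or off otherwise."""
    expected = hmac.new(REGULATOR_KEY, license["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, license["sig"]):
        return 0.0  # illegitimate license: chip refuses to work at all
    claims = json.loads(license["payload"])
    if claims["chip_id"] != chip_id or now > claims["valid_until"]:
        return 0.1  # expired or mismatched: degrade rather than brick
    return 1.0

lic = issue_license("chip-42", valid_until=time.time() + 3600)
print(allowed_performance("chip-42", lic, now=time.time()))
```

The sketch also makes the paper's warning concrete: whoever holds `REGULATOR_KEY` (or can spoof the renewal channel) controls every chip, which is exactly why the authors flag the kill switch itself as an attack target.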

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, this could backfire, the researchers observe, by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
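The multiparty sign-off idea boils down to a threshold check: small training runs proceed freely, while runs above some compute cap need approval from k of n designated parties. A minimal sketch, with the party names, threshold, and FLOP cap all hypothetical:

```python
# Hypothetical parties whose sign-off counts (not from the paper).
APPROVERS = {"regulator", "cloud_provider", "auditor"}
THRESHOLD = 2          # k-of-n: any two approvers suffice
COMPUTE_CAP = 1e25     # illustrative FLOP threshold for a "large" run

def may_train(requested_flops: float, approvals: set) -> bool:
    """Gate a training job: free below the cap, k-of-n sign-off above it."""
    if requested_flops < COMPUTE_CAP:
        return True
    valid = approvals & APPROVERS  # ignore sign-offs from unknown parties
    return len(valid) >= THRESHOLD

assert may_train(1e22, set())                            # small run: no approval needed
assert not may_train(1e26, {"regulator"})                # one sign-off is not enough
assert may_train(1e26, {"regulator", "auditor"})         # quorum reached
```

In a real system each approval would be a verified cryptographic signature rather than a string in a set, but the control-flow shape, and the researchers' worry that the quorum could block benign work, is the same.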
Privacy

New 'Gold Pickaxe' Android, iOS Malware Steals Your Face For Fraud (bleepingcomputer.com) 13

An anonymous reader quotes a report from BleepingComputer: A new iOS and Android trojan named 'GoldPickaxe' employs a social engineering scheme to trick victims into scanning their faces and ID documents, which are believed to be used to generate deepfakes for unauthorized banking access. The new malware, spotted by Group-IB, is part of a malware suite developed by the Chinese threat group known as 'GoldFactory,' which is responsible for other malware strains such as 'GoldDigger', 'GoldDiggerPlus,' and 'GoldKefu.' Group-IB says its analysts observed attacks primarily targeting the Asia-Pacific region, mainly Thailand and Vietnam. However, the techniques employed could be effective globally, and there's a danger of them getting adopted by other malware strains. [...]

For iOS (iPhone) users, the threat actors initially directed targets to a TestFlight URL to install the malicious app, allowing them to bypass the normal security review process. When Apple removed the TestFlight app, the attackers switched to luring targets into downloading a malicious Mobile Device Management (MDM) profile that allows the threat actors to take control over devices. Once the trojan has been installed onto a mobile device in the form of a fake government app, it operates semi-autonomously, manipulating functions in the background, capturing the victim's face, intercepting incoming SMS, requesting ID documents, and proxying network traffic through the infected device using 'MicroSocks.'

Group-IB says the Android version of the trojan performs more malicious activities than the iOS version does, due to Apple's tighter security restrictions. Also, on Android, the trojan uses over 20 different bogus apps as cover. For example, GoldPickaxe can also run commands on Android to access SMS, navigate the filesystem, perform clicks on the screen, upload the 100 most recent photos from the victim's album, download and install additional packages, and serve fake notifications. The use of the victims' faces for bank fraud is an assumption by Group-IB, also corroborated by the Thai police, based on the fact that many financial institutes added biometric checks last year for transactions above a certain amount.

EU

Apple Confirms iOS 17.4 Removes Home Screen Web Apps In the EU (9to5mac.com) 136

Apple has now offered an explanation for why iOS 17.4 removes support for Home Screen web apps in the European Union. Spoiler: it's because of the Digital Markets Act that went into effect last August. 9to5Mac reports: Last week, iPhone users in the European Union noticed that they were no longer able to install and run web apps on their iPhone's Home Screen in iOS 17.4. Apple has added a number of features over the years to improve support for progressive web apps on iPhone. For example, iOS 16.4 allowed PWAs to deliver push notifications with icon badges. One change in iOS 17.4 is that the iPhone now supports alternative browser engines in the EU. This allows companies to build browsers that don't use Apple's WebKit engine for the first time. Apple says that this change, required by the Digital Markets Act, is why it has been forced to remove Home Screen web apps support in the European Union.

Apple explains that it would have to build an "entirely new integration architecture that does not currently exist in iOS" to address the "complex security and privacy concerns associated with web apps using alternative browser engines." This work "was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps," Apple explains. "And so, to comply with the DMA's requirements, we had to remove the Home Screen web apps feature in the EU." "EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality," Apple continues.

It's understandable that Apple wouldn't offer support for Home Screen web apps for third-party browsers. But why did it also remove support for Home Screen web apps for Safari? Unfortunately, that's another side effect of the Digital Markets Act. The DMA requires that all browsers have equality, meaning that Apple can't favor Safari and WebKit over third-party browser engines. Therefore, because it can't offer Home Screen web apps support for third-party browsers, it also can't offer support via Safari. [...] iOS 17.4 is currently available to developers and public beta testers, and is slated for release in early March.
The full explanation was published on Apple's developer website today.
Encryption

Indian Government Moves To Ban ProtonMail After Bomb Threat 25

Following a hoax bomb threat sent via ProtonMail to schools in Chennai, India, police in the state of Tamil Nadu put in a request to block the encrypted email service in the region since they have been unable to identify the sender. According to Hindustan Times, that request was granted today. From the report: The decision to block Proton Mail was taken at a meeting of the 69A blocking committee on Wednesday afternoon. Under Section 69A of the IT Act, the designated officer, on approval by the IT Secretary and at the recommendation of the 69A blocking committee, can issue orders to any intermediary or a government agency to block any content for national security, public order and allied reasons. HT could not ascertain if a blocking order will be issued to Apple and Google to block the Proton Mail app. The final order to block the website has not yet been sent to the Department of Telecommunications but the MeitY has flagged the issue with the DoT.

During the meeting, the nodal officer representing the Tamil Nadu government submitted that a bomb threat was sent to multiple schools using ProtonMail, HT has learnt. The police attempted to trace the IP address of the sender but to no avail. They also sought help from Interpol, but that did not materialise either, the nodal officer said. During the meeting, HT has learnt, MeitY representatives noted that getting information from Proton Mail, on other criminal matters, not necessarily linked to Section 69A related issues, is a recurrent problem.

Although Proton Mail is end-to-end encrypted, which means the content of the emails cannot be intercepted and can only be seen by the sender and recipient if both are using Proton Mail, its privacy policy states that due to the nature of the SMTP protocol, certain email metadata -- including sender and recipient email addresses, the IP address incoming messages originated from, attachment name, message subject, and message sent and received times -- is available to the company.
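The metadata categories described above correspond to ordinary message headers, which remain readable even when the body is end-to-end encrypted. As a rough illustration using Python's standard `email` module (the addresses, subject, and IP address below are invented for the example), this is the kind of information a provider can see:

```python
# Sketch: even with an encrypted body, standard RFC 5322 headers and
# SMTP trace data remain visible to the provider. All values here are
# made up for illustration.
from email import message_from_string

raw = """\
From: sender@example.com
To: recipient@example.com
Subject: quarterly report
Date: Mon, 05 Feb 2024 10:15:00 +0000
Received: from [203.0.113.7] by mx.example.org; Mon, 05 Feb 2024 10:15:01 +0000

<ciphertext body - unreadable to the provider>
"""

msg = message_from_string(raw)
metadata = {
    "sender": msg["From"],
    "recipient": msg["To"],
    "subject": msg["Subject"],
    "sent_time": msg["Date"],
    "received_trace": msg["Received"],  # trace header carrying the originating IP
}
for field, value in metadata.items():
    print(f"{field}: {value}")
```

Nothing in this sketch decrypts anything; it only reads the envelope, which is exactly the data Proton's privacy policy says it retains.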
"We condemn a potential block as a misguided measure that only serves to harm ordinary people. Blocking access to Proton is an ineffective and inappropriate response to the reported threats. It will not prevent cybercriminals from sending threats with another email service and will not be effective if the perpetrators are located outside of India," said ProtonMail in a statement.

"We are currently working to resolve this situation and are investigating how we can best work together with the Indian authorities to do so. We understand the urgency of the situation and are completely clear that our services are not to be used for illegal purposes. We routinely remove users who are found to be doing so and are willing to cooperate wherever possible within international cooperation agreements."
The Courts

AMC To Pay $8 Million For Allegedly Sharing Subscribers' Viewing History With Tech Companies (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE. The settlement comes in response to allegations that AMC illegally shared subscribers' viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA). Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing "information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider." It was originally passed to protect individuals' right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called "Bork Tapes" revealed little -- other than that the judge frequently rented spy thrillers and British costume dramas -- but lawmakers recognized that speech could be chilled by monitoring anyone's viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint (PDF) that "the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before." According to subscribers suing, AMC allegedly installed tracking technologies -- including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology -- on its website, allowing their personally identifying information to be connected with their viewing history. [...]
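As a rough sketch of how such tracking pixels can tie identity to viewing history (the hostname and parameter names here are hypothetical, not the actual Meta or X APIs): a pixel is simply a request, fired automatically by the subscriber's browser, whose URL encodes who is watching what:

```python
# Toy model of a tracking pixel: the embedded "image" request carries
# a user identifier plus page context to a third party. All hosts and
# parameter names are invented for illustration.
from urllib.parse import urlencode, parse_qs, urlparse

def pixel_url(tracker_host, user_id, page_url, video_title):
    # The browser fetches this URL when the watch page loads; the
    # third party thereby learns which user requested which video.
    params = {"uid": user_id, "page": page_url, "title": video_title}
    return f"https://{tracker_host}/pixel.gif?{urlencode(params)}"

url = pixel_url("tracker.example.com", "user-123",
                "https://streamer.example/watch/456", "Spy Thriller S1E3")

# What the tracker can recover on its end of the request:
seen = parse_qs(urlparse(url).query)
print(seen)
```

The VPPA claim turns on exactly this pairing: an identifier (`uid`) disclosed alongside the specific video title requested.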

If it's approved, AMC has agreed to "suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC's disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual." All registered users of AMC services who "requested or obtained video content on at least one of the six AMC services" between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9. In addition to distributing the $8.3 million settlement fund among class members, subscribers will also receive a free one-week digital subscription.

Privacy

Your AI Girlfriend Is a Data-Harvesting Horror Show (gizmodo.com) 135

"A lot of that AI chatbots that you spend days talking to push hard on getting more and more private information from you," writes longtime Slashdot reader michelcultivo, sharing a report from Gizmodo.

"To be perfectly blunt, AI girlfriends and boyfriends are not your friends," says Misha Rykov, a Mozilla Researcher from the company's *Privacy Not Included project. "Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you." Gizmodo reports: Mozilla dug into 11 different AI romance chatbots, including popular apps such as Replika, Chai, Romantic AI, EVA AI Chat Bot & Soulmate, and CrushOn.AI. Every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed. You've heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in "disturbing new ways." For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won't let you delete the data they collect. Security was also a problem. Only one app, Genesia AI Friend & Partner, met Mozilla's minimum security standards.

One of the more striking findings came when Mozilla counted the trackers in these apps, little bits of code that collect data and share them with other companies for advertising and other purposes. Mozilla found the AI girlfriend apps used an average of 2,663 trackers per minute, though that number was driven up by Romantic AI, which called a whopping 24,354 trackers in just one minute of using the app. The privacy mess is even more troubling because the apps actively encourage you to share details that are far more personal than the kind of thing you might enter into a typical app. EVA AI Chat Bot & Soulmate pushes users to "share all your secrets and desires," and specifically asks for photos and voice recordings. It's worth noting that EVA was the only chatbot that didn't get dinged for how it uses that data, though the app did have security issues. [...]

Privacy

US Military Notifies 20,000 of Data Breach After Cloud Email Leak (techcrunch.com) 11

An anonymous reader quotes a report from TechCrunch: The U.S. Department of Defense is notifying tens of thousands of individuals that their personal information was exposed in an email data spill last year. According to the breach notification letter sent out to affected individuals on February 1, the Defense Intelligence Agency -- the DOD's military intelligence agency -- said, "numerous email messages were inadvertently exposed to the Internet by a service provider," between February 3 and February 20, 2023. TechCrunch has learned that the breach disclosure letters relate to an unsecured U.S. government cloud email server that was spilling sensitive emails to the open internet. The cloud email server, hosted on Microsoft's cloud for government customers, was accessible from the internet without a password, likely due to a misconfiguration.

The DOD is sending breach notification letters to around 20,600 individuals whose information was affected. "As a matter of practice and operations security, we do not comment on the status of our networks and systems. The affected server was identified and removed from public access on February 20, 2023, and the vendor has resolved the issues that resulted in the exposure. DOD continues to engage with the service provider on improving cyber event prevention and detection. Notification to affected individuals is ongoing," said DOD spokesperson Cdr. Tim Gorman in an email to TechCrunch.

Encryption

Backdoors That Let Cops Decrypt Messages Violate Human Rights, EU Court Says (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court's decision could potentially disrupt the European Commission's proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users' messages. This ruling came after Russia's intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users' encrypted messages to deter "terrorism-related activities" in 2017, ECHR's ruling said. [...] In the end, the ECHR concluded that the Telegram user's rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram's position that complying with the FSB's disclosure order would force changes impacting all its users.

The "confidentiality of communications is an essential element of the right to respect for private life and correspondence," the ECHR's ruling said. Thus, requiring messages to be decrypted by law enforcement "cannot be regarded as necessary in a democratic society." [...] "Weakening encryption by creating backdoors would apparently make it technically possible to perform routine, general, and indiscriminate surveillance of personal electronic communications," the ECHR's ruling said. "Backdoors may also be exploited by criminal networks and would seriously compromise the security of all users' electronic communications. The Court takes note of the dangers of restricting encryption described by many experts in the field."

Martin Husovec, a law professor who helped to draft EISI's testimony, told Ars that EISI is "obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone's privacy." [...] EISI's Husovec told Ars that ECHR's ruling is "indeed very important," because "it clearly signals to the EU legislature that weakening encryption is a huge problem and that the states must explore alternatives." If the Court of Justice of the European Union endorses this ruling, which Husovec said is likely, the consequences for the EU's legislation proposing scanning messages to stop illegal content like CSAM from spreading "could be significant," Husovec told Ars. During negotiations this spring, lawmakers may have to make "major concessions" to ensure the proposed rule isn't invalidated in light of the ECHR ruling, Husovec told Ars.
Europol and the European Union Agency for Cybersecurity (ENISA) said in a statement: "Solutions that intentionally weaken technical protection mechanisms to support law enforcement will intrinsically weaken the protection against criminals as well, which makes an easy solution impossible."
