AI

Can The AI Industry Continue To Avoid Paying for the Content They're Using? (yahoo.com) 196

Last year Marc Andreessen's firm "argued that AI companies would go broke if they had to pay copyright royalties or licensing fees," notes a Los Angeles Times technology columnist.

But are these powerful companies doing even more to ensure they're not billed for their training data? Just this week, British media outlets reported that OpenAI has made the same case, seeking an exemption from copyright rules in England, claiming that the company simply couldn't operate without ingesting copyrighted materials.... The AI companies also argue what they're doing falls under the legal doctrine of fair use — probably the strongest argument they've got — because it's transformative. This argument helped Google win in court against the big book publishers when it was copying books into its massive Google Books database, and defeat claims that YouTube was profiting by allowing users to host and promulgate unlicensed material. Next, the AI companies argue that copyright-violating outputs like those uncovered by AI expert Gary Marcus, film industry veteran Reid Southern and the New York Times are rare or are bugs that are going to be patched.
But finally, William Fitzgerald, a partner at the Worker Agency and a former member of the public policy team at Google, predicts Google will try to line up supportive groups to tell lawmakers that artists support AI. Fitzgerald also sees Google's fingerprints on Creative Commons' embrace of the argument that AI art is fair use, as Google is a major funder of the organization. "It's worrisome to see Google deploy the same lobbying tactics they've developed over the years to ensure workers don't get paid fairly for their labor," Fitzgerald said. And OpenAI is close behind: it is not only taking a similar approach to Google's in heading off copyright complaints, but it is also hiring the same people. It hired Fred Von Lohmann, Google's former director of copyright policy, as its top copyright lawyer....

[Marcus says] "There's an obvious alternative here — OpenAI's saying that we need all this or we can't build AI — but they could pay for it!" We want a world with artists and with writers, after all, he adds, one that rewards artistic work — not one where all the money goes to the top because a handful of tech companies won a digital land grab. "It's up to workers everywhere to see this for what it is, get organized, educate lawmakers and fight to get paid fairly for their labor," Fitzgerald says.

"Because if they don't, Google and OpenAI will continue to profit from other people's labor and content for a long time to come."

Security

Ivanti Warns of Critical Vulnerability In Its Popular Line of Endpoint Protection Software (arstechnica.com) 19

Dan Goodin reports via Ars Technica: Software maker Ivanti is urging users of its endpoint security product to patch a critical vulnerability that makes it possible for unauthenticated attackers to execute malicious code inside affected networks. The vulnerability, in a class known as SQL injection, resides in all supported versions of the Ivanti Endpoint Manager. Also known as the Ivanti EPM, the software is used to manage devices running a variety of platforms, including Windows, macOS, Linux, and Chrome OS, as well as Internet of Things devices such as routers. SQL injection vulnerabilities stem from faulty code that interprets user input as database commands or, in more technical terms, from concatenating data with SQL code without quoting the data in accordance with the SQL syntax. CVE-2023-39336, as the Ivanti vulnerability is tracked, carries a severity rating of 9.6 out of a possible 10.
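
Ivanti hasn't published the vulnerable code, but the bug class is well understood. As a minimal sketch (Python with the standard-library sqlite3 module and an invented `devices` table), the difference between concatenating user input into a query and binding it as a parameter looks like this:

```python
import sqlite3  # stand-in for any SQL backend; the principle is the same


def find_device_unsafe(conn: sqlite3.Connection, hostname: str):
    # Vulnerable: user-supplied input is concatenated into the SQL text,
    # so a value like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, owner FROM devices WHERE hostname = '" + hostname + "'"
    return conn.execute(query).fetchall()


def find_device_safe(conn: sqlite3.Connection, hostname: str):
    # Safer: the driver sends the value as a bound parameter, never as SQL,
    # so it cannot alter the structure of the statement.
    query = "SELECT id, owner FROM devices WHERE hostname = ?"
    return conn.execute(query, (hostname,)).fetchall()
```

Parameterized queries (or an ORM that issues them) close off the concatenation path described above; escaping values by hand is far more error-prone.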

"If exploited, an attacker with access to the internal network can leverage an unspecified SQL injection to execute arbitrary SQL queries and retrieve output without the need for authentication," Ivanti officials wrote Friday in a post announcing the patch availability. "This can then allow the attacker control over machines running the EPM agent. When the core server is configured to use SQL express, this might lead to RCE on the core server." RCE is short for remote code execution, or the ability for off-premises attackers to run code of their choice. Currently, there's no known evidence the vulnerability is under active exploitation. Ivanti has also published a disclosure that is restricted only to registered users. A copy obtained by Ars said Ivanti learned of the vulnerability in October. [...]

Putting devices running Ivanti EPM behind a firewall is a best practice and will go a long way toward mitigating the severity of CVE-2023-39336, but it would likely do nothing to prevent an attacker who has gained limited access to an employee workstation from exploiting the critical vulnerability. It's unclear whether the vulnerability will come under active exploitation, but the best course of action is for all Ivanti EPM users to install the patch as soon as possible.

IT

A 'Ridiculously Weak' Password Causes Disaster for Spain's No. 2 Mobile Carrier (arstechnica.com) 26

Orange Espana, Spain's second-biggest mobile operator, suffered a major outage on Wednesday after an unknown party obtained a "ridiculously weak" password and used it to access an account for managing the global routing table that controls which networks deliver the company's Internet traffic, researchers said. From a report: The hijacking began around 9:28 Coordinated Universal Time (about 2:28 Pacific time) when the party logged into Orange's RIPE NCC account using the password "ripeadmin" (minus the quotation marks). The RIPE Network Coordination Center is one of five Regional Internet Registries, which are responsible for managing and allocating IP addresses to Internet service providers, telecommunication organizations, and companies that manage their own network infrastructure. RIPE serves 75 countries in Europe, the Middle East, and Central Asia.

The password came to light after the party, using the moniker Snow, posted an image to social media that showed the orange.es email address associated with the RIPE account. RIPE said it's working on ways to beef up account security. Security firm Hudson Rock plugged the email address into a database it maintains to track credentials for sale in online bazaars. In a post, the security firm said the username and "ridiculously weak" password were harvested by information-stealing malware that had been installed on an Orange computer since September. The password was then made available for sale on an infostealer marketplace.

Security

LastPass Now Requires 12-Character Master Passwords (bleepingcomputer.com) 31

LastPass notified customers today that they are now required to use complex master passwords with a minimum of 12 characters to increase their accounts' security. From a report: Even though a 12-character master password has been LastPass' default since 2018, users have still been able to choose a weaker one. "Historically, while a 12-character master password has been LastPass' default setting since 2018, customers still had the ability to forego the recommended default settings and choose to create a master password with fewer characters, if they wished to do so," LastPass said in a new announcement today.

LastPass began enforcing a 12-character master password requirement in April 2023 for new accounts and password resets, but older accounts could still use passwords with fewer than 12 characters. Starting this month, LastPass is enforcing the 12-character minimum for all accounts. Furthermore, LastPass added that it will also start checking new or updated master passwords against a database of credentials previously leaked on the dark web to ensure that they don't match already compromised accounts.
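
LastPass hasn't described how its dark-web check works, but one widely used, privacy-preserving approach is a k-anonymity range lookup against the public Have I Been Pwned "Pwned Passwords" API. The sketch below illustrates that technique, not LastPass's implementation:

```python
import hashlib
import urllib.request


def appears_in_breach_corpus(password: str) -> bool:
    """Check a candidate password against Have I Been Pwned's range API.

    Only the first five hex characters of the SHA-1 hash leave the machine;
    the password itself and its full hash never do (k-anonymity).
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash-suffix>:<count>"; a match means the
    # password has appeared in known breach data and should be rejected.
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```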

Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
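
The bill doesn't specify the matching technology, but facial recognition search is generally built on embeddings: each face image is reduced to a numeric vector by a trained model, and a probe image is compared against every enrolled photo by vector similarity. A minimal, model-agnostic sketch, in which the embedding model, gallery contents, and threshold are all placeholders:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search_gallery(probe_embedding: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank enrolled images (e.g. licence photos) against a probe image.

    `gallery` maps a record ID to an embedding vector produced by some
    face-recognition model (not shown); `threshold` is illustrative, since
    real systems tune it to balance false matches against misses.
    """
    scores = [(record_id, cosine_similarity(probe_embedding, emb))
              for record_id, emb in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```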

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations is that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer who supposedly matched an existing image in its database entered a store, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
Television

Your Smart TV Knows What You're Watching (themarkup.org) 164

An anonymous reader shares a report: If you bought a new smart TV during any of the holiday sales, there's likely to be an uninvited guest watching along with you. The most popular smart TVs sold today use automatic content recognition (ACR), a kind of ad surveillance technology that collects data on everything you view and sends it to a proprietary database to identify what you're watching and serve you highly targeted ads. The software is largely hidden from view, and it's complicated to opt out. Many consumers aren't aware of ACR, let alone that it's active on their shiny new TVs. If that's you, and you'd like to turn it off, we're going to show you how.

First, a quick primer on the tech: ACR identifies what's displayed on your television, including content served through a cable TV box, streaming service, or game console, by continuously grabbing screenshots and comparing them to a massive database of media and advertisements. Think of it as a Shazam-like service constantly running in the background while your TV is on.
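
The Markup doesn't detail any vendor's internals, but recognition of this kind typically relies on compact frame fingerprints compared against a reference index rather than on raw screenshots. A toy sketch using an average-hash fingerprint (Pillow for image handling; the reference index stands in for a vendor's content database):

```python
from PIL import Image


def average_hash(frame: Image.Image, hash_size: int = 8) -> int:
    """Toy perceptual hash: shrink, grayscale, threshold against the mean."""
    small = frame.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def closest_match(frame_hash: int, reference_index: dict[str, int],
                  max_distance: int = 10) -> str | None:
    """Return the catalogued title whose stored hash is nearest in Hamming
    distance, or None if nothing is close enough."""
    best_title, best_distance = None, max_distance + 1
    for title, ref_hash in reference_index.items():
        distance = bin(frame_hash ^ ref_hash).count("1")
        if distance < best_distance:
            best_title, best_distance = title, distance
    return best_title
```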

These TVs can capture and identify 7,200 images per hour, or approximately two every second. The data is then used for content recommendations and ad targeting, which is a huge business; advertisers spent an estimated $18.6 billion on smart TV ads in 2022, according to market research firm eMarketer. For anyone who'd rather not have ACR looking over their shoulder while they watch, we've put together a guide to turning it off on three of the most popular smart TV software platforms in use last year. Depending on the platform, turning off ACR took us between 10 and 37 clicks.

Security

ownCloud Vulnerability With Maximum 10 Severity Score Comes Under 'Mass' Exploitation (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Security researchers are tracking what they say is the "mass exploitation" of a security vulnerability that makes it possible to take full control of servers running ownCloud, a widely used open source file-sharing server app. The vulnerability, which carries the maximum severity rating of 10, makes it possible to obtain passwords and cryptographic keys allowing administrative control of a vulnerable server by sending a simple Web request to a static URL, ownCloud officials warned last week. Within four days of the November 21 disclosure, researchers at security firm Greynoise said, they began observing "mass exploitation" in their honeypot servers, which masqueraded as vulnerable ownCloud servers to track attempts to exploit the vulnerability. The number of IP addresses sending the web requests has slowly risen since then. At the time this post went live on Ars, it had reached 13.

CVE-2023-49103 resides in versions 0.2.0 and 0.3.0 of graphapi, an app that runs in some ownCloud deployments, depending on the way they're configured. A third-party code library used by the app provides a URL that, when accessed, reveals configuration details from the PHP-based environment. In last week's disclosure, ownCloud officials said that in containerized configurations -- such as those using the Docker virtualization tool -- the URL can reveal data used to log in to the vulnerable server. The officials went on to warn that simply disabling the app in such cases wasn't sufficient to lock down a vulnerable server. [...]
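
Because the exposure sits at a predictable URL inside the bundled graphapi app, administrators can do a rough self-check before applying the mitigation. The sketch below simply requests the test file named in ownCloud's advisory and looks for phpinfo-style markers; attackers reportedly probe path variations as well, so a clean result here is not proof a server is safe:

```python
import urllib.request

# Path of the test file named in ownCloud's advisory; treat this as a rough
# check only, since deployments differ and other path variants may respond.
GRAPHAPI_TEST_PATH = "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php"


def looks_exposed(base_url: str) -> bool:
    """Return True if the server appears to serve phpinfo output from the
    graphapi test file, a strong indicator the mitigation is still needed."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + GRAPHAPI_TEST_PATH,
                                    timeout=10) as resp:
            body = resp.read(200_000).decode("utf-8", errors="replace")
    except Exception:
        return False  # unreachable, blocked, or already removed
    return "phpinfo()" in body or "PHP Version" in body


# Example against a hypothetical host you administer:
# print(looks_exposed("https://files.example.com"))
```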

To fix the ownCloud vulnerability under exploitation, ownCloud advised users to: "Delete the file owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php. Additionally, we disabled the phpinfo function in our docker-containers. We will apply various hardenings in future core releases to mitigate similar vulnerabilities.

We also advise to change the following secrets:
- ownCloud admin password
- Mail server credentials
- Database credentials
- Object-Store/S3 access-key"

United Kingdom

Genetic Data On 500,000 Volunteers In UK To Be Released For Scientific Study (theguardian.com) 18

An anonymous reader quotes a report from The Guardian: A new era of medical discoveries, treatments and cures is on the horizon, researchers say, following the announcement that an unprecedented trove of genetic information is to be made available to scientists. Health researchers from around the world can now apply to study the whole genomes of half a million people enrolled in UK Biobank, a biomedical research project that has compiled detailed health and lifestyle records on individuals since it began 20 years ago. The move on Thursday amounts to the largest number of whole-genome sequences ever released for medical research. The sequences will be used with UK Biobank's records and other data to delve deeply into the genetics of everything -- from people's risk of obesity, diabetes, heart disease, cancer and other conditions, to individuals' sleep and exercise patterns.

Researchers believe the new data will allow them to calculate people's individual risk scores for a raft of cancers and other diseases, and so work out who could benefit most from early screening. They should also gain a deeper understanding of serious genetic conditions such as Huntington's and motor neurone disease, which have often been studied in small numbers of severely affected patients. Health experts from academia, the government, industry and charities can apply for access, though they have to be approved, and they study the genomes through a protected database stripped of identifying details such as names, addresses, birth dates, and GP information.
"Until 2021 scientists could study only about 1% of the DNA of UK Biobank volunteers -- the fraction that encodes proteins," notes the report. "Since then, whole genomes have been released for 200,000 participants, but work continued to sequence all of the 500,000 volunteers."

"With that number of whole genomes in hand, researchers will be able to find much rarer genes which drive diseases, including those that behave like switches and turn other genes on and off."
Facebook

Russia Puts Spokesman For Facebook-owner Meta on a Wanted List (yahoo.com) 100

Russia has added the spokesman of U.S. technology company Meta, which owns Facebook and Instagram, to a wanted list, according to an online database maintained by the country's interior ministry. From a report: Russian state agency Tass and independent news outlet Mediazona first reported that Meta communications director Andy Stone was included on the list Sunday, weeks after Russian authorities in October classified Meta as a "terrorist and extremist" organization, opening the way for possible criminal proceedings against Russian residents using its platforms.

The interior ministry's database doesn't give details of the case against Stone, stating only that he is wanted on criminal charges. According to Mediazona, an independent news website that covers Russia's opposition and prison system, Stone was put on the wanted list in February 2022, but authorities made no related statements at the time and no news media reported on the matter until this week. In March this year, Russia's federal Investigative Committee opened a criminal investigation into Meta.

Security

Why Do So Many Sites Have Bad Password Policies? (gatech.edu) 242

"Three out of four of the world's most popular websites are failing to meet minimum requirement standards" for password security, reports Georgia Tech's College of Computing. Which means three out of four of the world's most popular web sites are "allowing tens of millions of users to create weak passwords."

Using a first-of-its-kind automated tool that can assess a website's password creation policies, researchers also discovered that 12% of websites completely lacked password length requirements. Assistant Professor Frank Li and Ph.D. student Suood Al Roomi in Georgia Tech's School of Cybersecurity and Privacy created the automated assessment tool to explore all sites in the Google Chrome User Experience Report (CrUX), a database of one million websites and pages.

Li and Al Roomi's method of inferring password policies succeeded on over 20,000 sites in the database and showed that many sites:

- Permit very short passwords
- Do not block common passwords
- Use outdated requirements like complex characters

The researchers also discovered that only a few sites fully follow standard guidelines, while most stick to outdated guidelines from 2004... More than half of the websites in the study accepted passwords with six characters or fewer, with 75% failing to require the recommended eight-character minimum. Around 12% had no length requirements, and 30% did not support spaces or special characters. Only 28% of the websites studied enforced a password block list, which means thousands of sites are vulnerable to cyber criminals who might try to use common passwords to break into a user's account, also known as a password spraying attack.

Georgia Tech describes the new research as "the largest study of its kind." ("The project was 135 times larger than previous works that relied on manual methods and smaller sample sizes.")

"As a security community, we've identified and developed various solutions and best practices for improving internet and web security," said assistant professor Li. "It's crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand whether security is improving in reality."

The Slashdot community has already noticed the problem, judging by a recent post from eggegick. "Every site I visit has its own idea of the minimum and maximum number of characters, the number of digits, the number of upper/lowercase characters, the number of punctuation characters allowed and even what punctuation characters are allowed and which are not." The limit of password size really torques me, as that suggests they are storing the password (they need to limit storage size), rather than its hash value (fixed size), which is a real security blunder. Also, the stupid dots drive me bonkers, especially when there is no "unhide" button. For crying out loud, nobody is looking over my shoulder! Make the "unhide" default.
"The 'dots' are bad security," agrees long-time Slashdot reader Spazmania. "If you're going to obscure the password you should also obscure the length of the password." But in their comment on the original submission, they also point out that there is a standard for passwords, from the National Institute of Standards and Technology: Briefly:

* Minimum 8 characters
* Must allow at least 64 characters.
* No constraints on what printing characters can be used (including high unicode)
* No requirements on what characters must be used or in what order or proportion

This is expected to be paired with a system which does some additional and critical things:

* Maintain a database of known compromised passwords (e.g. from public password dictionaries) and reject any passwords found in the database.
* Pair the password with a second authentication factor such as a security token or cell phone sms. Require both to log in.
* Limit the number of passwords which can be attempted per time period. At one attempt per second, even the smallest password dictionaries would take hundreds of years to try...

Someone attempting to brute force a password from outside on a rate-limited system is limited to the rate, regardless of how computing power advances. If the system enforces a rate limit of 1 try per second, the time to crack an 8-character password containing only lower case letters is still more than 6,000 years.
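
The arithmetic behind that last figure checks out, as a quick worked example shows (a sketch, rounding the year to 365 days):

```python
# Worked check of the brute-force figure above: an 8-character password drawn
# only from the 26 lowercase letters, guessed at 1 attempt per second.
keyspace = 26 ** 8                          # 208,827,064,576 possible passwords
seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = keyspace / seconds_per_year
print(f"{keyspace:,} candidates -> about {years_to_exhaust:,.0f} years at 1 guess/second")
# About 6,622 years to try every candidate (roughly half that on average to
# hit the right one), which is where the "more than 6,000 years" figure comes from.
```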

The Courts

Robocar Tech Biz Sues Nvidia, Claims Stolen Code Shared In Teams Meeting Blunder (theregister.com) 25

Dan Robinson reports via The Register: Nvidia is facing legal action in the U.S. for theft of trade secrets from a German automotive company, which alleges its ex-employee made an epic blunder of showing something he shouldn't have when minimizing a PowerPoint slide at a joint Microsoft Teams meeting both companies were attending. The automotive firm, Valeo Schalter und Sensoren, claims the flashing of its source code for the assisted parking app on the call is evidence to support its accusations that the ex-staffer stole the IP before leaving to join Nvidia. The two tech companies were both on the call as they were each suppliers on contract for a parking and driving assistance project with a major automotive OEM that was not named in the suit. Under the terms of the contract with the OEM, the suit states, engineers from both Valeo and Nvidia had to schedule collaboration meetings so that "Nvidia employees could ask Valeo employees questions about Valeo's ultrasonic hardware and data associated with the hardware."

The complaint [PDF], filed by Valeo in the US District Court for Northern California, goes on to allege misappropriation of trade secrets by Nvidia, through which the company claims the GPU-maker attempted to take a shortcut into the automotive marketplace by using its stolen software. Nvidia is a relative newcomer to the automotive market, introducing its Nvidia Drive platform at the CES trade show in 2015. Valeo says that it only discovered the theft during a conference call on March 8, 2022 between its engineers and those of Nvidia to collaborate on work for an automotive OEM, a customer of both companies. Valeo develops automotive hardware such as cameras and sensors, in addition to software that processes the data from the hardware. The court filing states that Valeo previously provided the OEM in question with both hardware and software for its autonomous vehicle technology, but in this instance, it asked Valeo to provide ultrasonic hardware only. For the software side, the OEM instead chose Nvidia. One of the Nvidia engineers on the call, named as Mohammad Moniruzzaman, was a former employee of Valeo, and during the call, made using Microsoft's Teams software, he shared his screen in order to give a presentation containing questions for the Valeo participants.

Yet also visible on his screen after the presentation finished -- or so the complaint alleges -- was a window of source code, which the Valeo participants recognized as belonging to their company. According to the filing, one of the Valeo engineers succeeded in capturing a screenshot as evidence. According to Valeo, the source code file names that were allegedly visible in the screenshot were identical to those used in its source code, and it also claims the source code appeared to be identical to proprietary code maintained in Valeo's repositories. The company says in the suit that it then conducted a comprehensive internal forensic IT audit, and alleges it discovered that Moniruzzaman had copied four repositories containing the code for Valeo's parking and driving assistance software, prior to leaving the company in May 2021. [...] The claim is that Valeo's source code and documentation have been used in the development of Nvidia's software, and this provided the GPU giant and its engineers with a shortcut in the development of its parking assistance code, saving Nvidia perhaps hundreds of millions of dollars in development costs.

According to the court filing, Nvidia said it removed Moniruzzaman's additions to its code. However, those additions underwent "a peer review process of 10-30 iterations of feedback loops" before the code was fully merged into Nvidia's database. Valeo contends that this process of extensive edits by others means it is not realistic that Nvidia could have fully removed Moniruzzaman's contributions. Valeo claims it has suffered competitive harm as a result of Nvidia's actions and is seeking damages, to be determined at trial, as well as an injunction prohibiting Nvidia or its employees from using or disclosing Valeo's trade secrets. A date for jury trial has yet to be announced.

Databases

Online Atrocity Database Exposed Thousands of Vulnerable People In Congo (theintercept.com) 6

An anonymous reader quotes a report from The Intercept: A joint project of Human Rights Watch and New York University to document human rights abuses in the Democratic Republic of the Congo has been taken offline after exposing the identities of thousands of vulnerable people, including survivors of mass killings and sexual assaults. The Kivu Security Tracker is a "data-centric crisis map" of atrocities in eastern Congo that has been used by policymakers, academics, journalists, and activists to "better understand trends, causes of insecurity and serious violations of international human rights and humanitarian law," according to the deactivated site. This includes massacres, murders, rapes, and violence against activists and medical personnel by state security forces and armed groups, the site said. But the KST's lax security protocols appear to have accidentally doxxed up to 8,000 people, including activists, sexual assault survivors, United Nations staff, Congolese government officials, local journalists, and victims of attacks, an Intercept analysis found. Hundreds of documents -- including 165 spreadsheets -- that were on a public server contained the names, locations, phone numbers, and organizational affiliations of those sources, as well as sensitive information about some 17,000 "security incidents," such as mass killings, torture, and attacks on peaceful protesters.

The data was available via KST's main website, and anyone with an internet connection could access it. The information appears to have been publicly available on the internet for more than four years. [...] The spreadsheets, along with the main KST website, were taken offline on October 28, after investigative journalist Robert Flummerfelt, one of the authors of this story, discovered the leak and informed Human Rights Watch and New York University's Center on International Cooperation. HRW subsequently assembled what one source close to the project described as a "crisis team." Last week, HRW and NYU's Congo Research Group, the entity within the Center on International Cooperation that maintains the KST website, issued a statement that announced the takedown and referred in vague terms to "a security vulnerability in its database," adding, "Our organizations are reviewing the security and privacy of our data and website, including how we gather and store information and our research methodology." The statement made no mention of publicly exposing the identities of sources who provided information on a confidential basis. [...] The Intercept has not found any instances of individuals affected by the security failures, but it's currently unknown if any of the thousands of people involved were harmed.
"We deeply regret the security vulnerability in the KST database and share concerns about the wider security implications," Human Rights Watch's chief communications officer, Mei Fong, told The Intercept. Fong said in an email that the organization is "treating the data vulnerability in the KST database, and concerns around research methodology on the KST project, with the utmost seriousness." Fong added, "Human Rights Watch did not set up or manage the KST website. We are working with our partners to support an investigation to establish how many people -- other than the limited number we are so far aware of -- may have accessed the KST data, what risks this may pose to others, and next steps. The security and confidentiality of those affected is our primary concern."
Android

Nothing is Bringing iMessage To Its Android Phone (theverge.com) 146

Nothing Phone 2 owners get blue bubbles now. The company shared it has added iMessage to its newest phone through a new "Nothing Chats" app powered by the messaging platform Sunbird. From a report: The feature will be available to users in North America, the EU, and other European countries starting this Friday, November 17th. Nothing writes on its page that it's doing this because "messaging services are dividing phone users," and it wants "to break those barriers down." But doing so here requires you to trust Sunbird. Nothing's FAQ says Sunbird's "architecture provides a system to deliver a message from one user to another without ever storing it at any point in its journey," and that messages aren't stored on its servers.

Marques Brownlee has also had a preview of Nothing Chats. He confirmed with Nothing that, similar to how other iMessage-to-Android bridge services have worked before, "...it's literally signing in on some Mac Mini in a server farm somewhere, and that Mac Mini will then do all of the routing for you to make this happen." Nothing's US head of PR, Jane Nho, told The Verge in an email that Sunbird stores user iCloud credentials as a token "in an encrypted database," associated with one of its Mac Minis in the US or Europe, depending on the user's location, which then acts as a relay for iMessages sent via the app. She added that, after two weeks of inactivity, Sunbird deletes the account information.

Privacy

Data Broker's 'Staggering' Sale of Sensitive Info Exposed in Unsealed FTC Filing (arstechnica.com) 30

One of the world's largest mobile data brokers, Kochava, has lost its battle to stop the Federal Trade Commission from revealing what the FTC has alleged is a disturbing, widespread pattern of unfair use and sale of sensitive data without consent from hundreds of millions of people. ArsTechnica: US District Judge B. Lynn Winmill recently unsealed a court filing, an amended complaint that perhaps contains the most evidence yet gathered by the FTC in its long-standing mission to crack down on data brokers allegedly "substantially" harming consumers by invading their privacy. The FTC has accused Kochava of violating the FTC Act by amassing and disclosing "a staggering amount of sensitive and identifying information about consumers," alleging that Kochava's database includes products seemingly capable of identifying nearly every person in the United States.

According to the FTC, Kochava's customers, ostensibly advertisers, can access this data to trace individuals' movements -- including to sensitive locations like hospitals, temporary shelters, and places of worship, with a promised accuracy within "a few meters" -- over a day, a week, a month, or a year. Kochava's products can also provide a "360-degree perspective" on individuals, unveiling personally identifying information like their names, home addresses, phone numbers, as well as sensitive information like their race, gender, ethnicity, annual income, political affiliations, or religion, the FTC alleged.

Beyond that, the FTC alleged that Kochava also makes it easy for advertisers to target customers by categories that are "often based on specific sensitive and personal characteristics or attributes identified from its massive collection of data about individual consumers." These "audience segments" allegedly allow advertisers to conduct invasive targeting by grouping people not just by common data points like age or gender, but by "places they have visited," political associations, or even their current circumstances, like whether they're expectant parents. Or advertisers can allegedly combine data points to target highly specific audience segments like "all the pregnant Muslim women in Kochava's database," the FTC alleged, or "parents with different ages of children."

Desktops (Apple)

First Benchmark Results Surface For M3 Chips In New Macs (macrumors.com) 44

Joe Rossignol reports via MacRumors: The first benchmark results for the standard M3 chip surfaced in the Geekbench 6 database today, providing a closer look at the chip's CPU performance improvements. Based on the results so far, the M3 chip has single-core and multi-core scores of around 3,000 and 11,700, respectively. The standard M2 chip has single-core and multi-core scores of around 2,600 and 9,700, respectively, so the M3 chip is up to 20% faster than the M2 chip, as Apple claimed during its "Scary Fast" event on Monday.
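
As a quick check on the arithmetic behind that claim, using the approximate scores quoted above:

```python
# Approximate Geekbench 6 scores quoted in the report.
m2_single, m2_multi = 2600, 9700
m3_single, m3_multi = 3000, 11700

print(f"single-core gain: {(m3_single / m2_single - 1):.0%}")  # ~15%
print(f"multi-core gain:  {(m3_multi / m2_multi - 1):.0%}")    # ~21%
# The multi-core delta is what lines up with Apple's "up to 20% faster" claim.
```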

It's unclear if the results are for the new 14-inch MacBook Pro or iMac, both of which are available with the standard M3 chip, but performance should be similar for both machines. The results have a "Mac15,3" identifier, which Bloomberg's Mark Gurman previously reported was for a laptop with the same display resolution as a 14-inch MacBook Pro. We have yet to see any Geekbench results for the higher-end M3 Pro and M3 Max chips available in most new 14-inch and 16-inch MacBook Pro models.

Biotech

Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA (bloomberg.com) 106

An anonymous reader quotes a report from Bloomberg: GSK will pay 23andMe $20 million for access to the genetic-testing company's vast trove of consumer DNA data, extending a five-year collaboration that's allowed the drugmaker to mine genetic data as it researches new medications. Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body's immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK's new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Facebook

Meta Told To Stop Using Threads Name By Company That Owns UK Trademark (businessinsider.com) 60

Pete Syme reports via Insider: A British software company is giving Meta 30 days to stop using the name Threads in the UK because it owns the trademark. Threads Software Limited says its lawyers wrote to the Facebook and Instagram parent company on Monday. If Meta doesn't stop using the name Threads, Threads Software Limited says it will seek an injunction from the courts.

The British company trademarked Threads in 2012 for its intelligent messaging hub, which can store a company's emails, tweets, and voice over internet protocol phone calls in a cloud database. In a press release, it said it had declined the four offers that Meta's lawyers made to purchase its domain name "threads.app." Then when Meta launched Threads, its social media app designed to compete with Elon Musk's X, the British company says its account was removed from Facebook.
John Yardley, the managing director of Threads Software Limited, said the business "faces a serious threat from one of the largest technology companies in the world."

"We recognize that this is a classic 'David and Goliath' battle with Meta," said Yardley. "And whilst they may think they can use whatever name they want, that does not give them the right to use the Threads brand name."
AI

Leica Camera Has Built-In Defense Against Misleading AI, Costs $9,125 45

Scharon Harding reports via Ars Technica: On Thursday, Leica Camera released the first camera that can take pictures with automatically encrypted metadata and provide features such as an editing history. The company believes this system, called Content Credentials, will help photojournalists protect their work and prove authenticity in a world riddled with AI-manipulated content.

Leica's M11-P can store each captured image with Content Credentials, which is based on the Coalition for Content Provenance and Authenticity's (C2PA's) open standard and is being pushed by the Content Authenticity Initiative (CAI). Content Credentials, announced in October, includes encrypted metadata detailing where and when the photo was taken and with what camera and model. It also keeps track of edits and tools used for edits. When a photographer opts to use the feature, they'll see a Content Credentials logo in the camera's display, and images will be signed through the use of an algorithm.

The feature requires the camera to use a specialized chipset for storing digital certificates. Credentials can be verified via Leica's FOTOS app or on the Content Credentials website. Leica's announcement said: "Whenever someone subsequently edits that photo, the changes are recorded to an updated manifest, rebundled with the image, and updated in the Content Credentials database whenever it is reshared on social media. Users who find these images online can click on the CR icon in the [pictures'] corner to pull up all of this historical manifest information as well, providing a clear chain of provenance, presumably, all the way back to the original photographer." The M11-P's Content Credentials is an opt-in feature and can also be erased. As Ars has previously noted, an image edited with tools that don't support Content Credentials can also result in a gap in the image's provenance data.
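
Leica hasn't published implementation details beyond its adoption of the C2PA standard, but the core idea of binding an image hash to signed capture metadata can be sketched with an ordinary digital signature. The illustration below is not the C2PA manifest format; it uses the third-party cryptography package, with a freshly generated Ed25519 key standing in for the camera's hardware-protected certificate:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A throwaway key stands in for the camera's hardware-protected signing key.
camera_key = Ed25519PrivateKey.generate()


def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bundle a hash of the image with capture metadata and sign the bundle.

    Conceptual sketch only: the point is that any later change to the pixels
    or the metadata breaks the signature.
    """
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    return {"manifest": manifest, "signature": camera_key.sign(payload).hex()}


def verify_capture(bundle: dict, image_bytes: bytes) -> bool:
    """Recompute the image hash and check the signature with the public key."""
    manifest = dict(bundle["manifest"],
                    image_sha256=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    try:
        camera_key.public_key().verify(bytes.fromhex(bundle["signature"]), payload)
        return True
    except Exception:
        return False
```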
Security

Ukrainian Hackers and Intel Officers Partner Up In Apparent Hack of a Top Russian Bank (npr.org) 41

An anonymous reader quotes a report from NPR: Two Ukrainian hacktivist groups are claiming to have broken into Russia's largest private bank, Alfa-Bank. In a blog post last week, the hackers from groups called KibOrg and NLB shared screenshots of what appears to be an internal database belonging to Alfa-Bank, as well as personal details of several Russian individuals as "confirmation" of the breach. Within the database, the hackers say there are over 30 million records including names, birthdates, account numbers and phone numbers of Russian customers.

Adding some legitimacy to those claims, a Ukrainian intelligence official who requested anonymity to discuss the sensitive operation confirmed to NPR that Ukraine's top counterintelligence agency, the SBU, helped the hacktivists breach Alfa-Bank. The official did not share additional details about how the SBU participated or any further plans for sharing the stolen data. Ukrainian journalists including from cybersecurity website The Record previously reported on the connection to the SBU. While the hacktivists did not immediately respond to a request to discuss the breach, they wrote in the blog post -- posted on their own site -- that they would be sharing the data obtained from Alfa-Bank with investigative journalists. Alfa-Bank has not publicly responded to the news of the hack.
