Security

Anker Finally Comes Clean About Its Eufy Security Cameras (theverge.com) 30

An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here].
As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but this practice has been discontinued. No recognition data was included with the images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."

Privacy

GoodRx Leaked User Health Data To Facebook and Google, FTC Says (nytimes.com) 31

An anonymous reader quotes a report from The New York Times: Millions of Americans have used GoodRx, a drug discount app, to search for lower prices on prescriptions like antidepressants, H.I.V. medications and treatments for sexually transmitted diseases at their local drugstores. But U.S. regulators say the app's coupons and convenience came at a high cost for users: wrongful disclosure of their intimate health information. On Wednesday, the Federal Trade Commission accused the app's developer, GoodRx Holdings, of sharing sensitive personal data on millions of users' prescription medications and illnesses with companies like Facebook and Google without authorization. [...]

From 2017 to 2020, GoodRx uploaded the contact information of users who had bought certain medications, like birth control or erectile dysfunction pills, to Facebook so that the drug discount app could identify its users' social media profiles, the F.T.C. said in a legal complaint. GoodRx then used the personal information to target users with ads for medications on Facebook and Instagram, the complaint said, "all of which was visible to Facebook." GoodRx also targeted users who had looked up information on sexually transmitted diseases on HeyDoctor, the company's telemedicine service, with ads for HeyDoctor's S.T.D. testing services, the complaint said. Those data disclosures, regulators said, flouted public promises the company had made to "never provide advertisers any information that reveals a personal health condition."

The company's information-sharing practices, the agency said, violated a federal rule requiring health apps and fitness trackers that collect personal health details to notify consumers of data breaches. While GoodRx agreed to settle the case, it said it disagreed with the agency's allegations and admitted no wrongdoing. The F.T.C.'s case against GoodRx could upend widespread user-profiling and ad-targeting practices in the multibillion-dollar digital health industry, and it puts companies on notice that regulators intend to curb the nearly unfettered trade in consumers' health details. [...] If a judge approves the proposed federal settlement order, GoodRx will be permanently barred from sharing users' health information for advertising purposes. To settle the case, the company also agreed to pay a $1.5 million civil penalty for violating the health breach notification rule.

Privacy

Stable Diffusion 'Memorizes' Some Images, Sparking Privacy Concerns (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed. Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of trained images. Researchers extracted only 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario. Also, the researchers note that the "memorization" they've discovered is approximate, since the AI model cannot produce identical byte-for-byte copies of the training images. By definition, Stable Diffusion cannot memorize large amounts of data because the 160 million-image training dataset is many orders of magnitude larger than the 2GB Stable Diffusion AI model. That means any memorization that exists in the model is small, rare, and very difficult to accidentally extract.
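The arithmetic behind those figures is easy to check. A quick back-of-the-envelope script, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the figures quoted above (all inputs
# come from the article; the script just redoes the arithmetic).

direct_matches = 94
images_tested = 350_000          # high-probability-of-memorization candidates

extraction_rate = direct_matches / images_tested * 100
print(f"direct-match rate: {extraction_rate:.3f}%")   # ~0.027%, i.e. "roughly 0.03 percent"

# Capacity argument: the ~2GB model vs. the 160 million-image training set.
model_bytes = 2 * 1024**3
training_images = 160_000_000
print(f"model bytes per training image: {model_bytes / training_images:.1f}")
```

At around 13 bytes of model weights per training image, wholesale memorization is physically impossible, which is the point the researchers make.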

Still, even when present in very small quantities, the paper appears to show that approximate memorization in latent diffusion models does exist, and that could have implications for data privacy and copyright. The results may one day affect potential image synthesis regulation if the AI models become considered "lossy databases" that can reproduce training data, as one AI pundit speculated. Although considering the 0.03 percent hit rate, they would have to be considered very, very lossy databases -- perhaps to a statistically insignificant degree. [...] Eric Wallace, one of the paper's authors, shared some personal thoughts on the research in a Twitter thread. As stated in the paper, he suggested that AI model-makers should de-duplicate their data to reduce memorization. He also noted that Stable Diffusion's model is small relative to its training set, so larger diffusion models are likely to memorize more. And he advised against applying today's diffusion models to privacy-sensitive domains like medical imagery.

Crime

'Pig-Butchering' Scam Apps Sneak Into Apple's App Store and Google Play (arstechnica.com) 44

In the past year, a new term has arisen to describe an online scam raking in millions, if not billions, of dollars per year. It's called "pig butchering," and now even Apple is getting fooled into participating. From a report: Researchers from security firm Sophos said on Wednesday that they uncovered two apps available in the App Store that were part of an elaborate network of tools used to dupe people into putting large sums of money into fake investment scams. At least one of those apps also made it into Google Play, but that market is notorious for the number of malicious apps that bypass Google vetting. Sophos said this was the first time it had seen such apps in the App Store and that a previous app identified in these types of scams was a legitimate one that was later exploited by bad actors.

Pig butchering relies on a rich combination of apps, websites, web hosts, and humans -- in some cases human trafficking victims -- to build trust with a mark over a period of weeks or months, often under the guise of a romantic interest, financial adviser, or successful investor. Eventually, the online discussion will turn to investments, usually involving cryptocurrency, that the scammer claims to have earned huge sums of money from. The scammer then invites the victim to participate. Once a mark deposits money, the scammers will initially allow them to make withdrawals. The scammers eventually lock the account and claim they need a deposit of as much as 20 percent of their balance to get it back. Even when the deposit is paid, the money isn't returned, and the scammers invent new reasons the victim should send more money. The pig-butchering term derives from a farmer fattening up a hog months before it's butchered.

Facebook

Hacker Finds Bug That Allowed Anyone To Bypass Facebook 2FA (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: A bug in a new centralized system that Meta created for users to manage their logins for Facebook and Instagram could have allowed malicious hackers to switch off an account's two-factor protections just by knowing their phone number. Gtm Manoz, a security researcher from Nepal, realized that Meta did not set up a limit of attempts when a user entered the two-factor code used to log into their accounts on the new Meta Accounts Center, which helps users link all their Meta accounts, such as Facebook and Instagram.

With a victim's phone number, an attacker would go to the centralized accounts center, enter the phone number of the victim, link that number to their own Facebook account, and then brute force the two-factor SMS code. This was the key step, because there was no upper limit on the number of attempts someone could make. Once the attacker got the code right, the victim's phone number became linked to the attacker's Facebook account. A successful attack would still result in Meta sending a message to the victim, saying their two-factor protection was disabled because their phone number had been linked to someone else's account.
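The fix presumably amounts to the standard control the flow was missing: a server-side cap on verification attempts. A minimal sketch of that control (the class, constant, and limit below are illustrative, not Meta's actual implementation):

```python
# Minimal sketch of the missing control: cap the number of 2FA code
# attempts before locking the flow. Names and limits are illustrative.

MAX_ATTEMPTS = 5

class TwoFactorVerifier:
    def __init__(self, correct_code: str):
        self.correct_code = correct_code
        self.attempts = 0
        self.locked = False

    def verify(self, submitted_code: str) -> bool:
        if self.locked:
            return False                # refuse all further guesses
        self.attempts += 1
        if self.attempts > MAX_ATTEMPTS:
            self.locked = True          # lock out brute-force attempts
            return False
        return submitted_code == self.correct_code
```

Without any attempts counter, a 6-digit SMS code falls to at most a million guesses, which is exactly what the unlimited endpoint allowed.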

Manoz found the bug in the Meta Accounts Center last year and reported it to the company in mid-September. Meta fixed the bug a few days later and paid Manoz $27,200 for the report. Meta spokesperson Gabby Curtis told TechCrunch that at the time of the bug the login system was still at the stage of a small public test. Curtis also said that Meta's investigation after the bug was reported found no evidence of exploitation in the wild, and that Meta saw no spike in usage of that particular feature, which suggests no one was abusing it.

The Internet

Massive Yandex Code Leak Reveals Russian Search Engine's Ranking Factors (arstechnica.com) 24

An anonymous reader quotes a report from Ars Technica: Nearly 45GB of source code files, allegedly stolen by a former employee, have revealed the underpinnings of Russian tech giant Yandex's many apps and services. The leak also exposed key ranking factors for Yandex's search engine, the kind almost never revealed in public. [...] While it's not clear whether there are security or structural implications of Yandex's source code revelation, the leak of 1,922 ranking factors in Yandex's search algorithm is certainly making waves. SEO consultant Martin MacDonald described the hack on Twitter as "probably the most interesting thing to have happened in SEO in years" (as noted by Search Engine Land). In a thread detailing some of the more notable factors, researcher Alex Buraks suggests that "there is a lot of useful information for Google SEO as well."

Yandex, the fourth-ranked search engine by volume, purportedly employs several ex-Google employees. Yandex tracks many of Google's ranking factors, identifiable in its code, and competes heavily with Google. Google's Russian division recently filed for bankruptcy after losing its bank accounts and payment services. Buraks notes that the first factor in Yandex's list of ranking factors is "PAGE_RANK," which is seemingly tied to the foundational algorithm created by Google's co-founders.

As detailed by Buraks (in two threads), Yandex's engine favors pages that:
- Aren't too old
- Have a lot of organic traffic (unique visitors) and less search-driven traffic
- Have fewer numbers and slashes in their URL
- Have optimized code rather than "hard pessimization," with a "PR=0"
- Are hosted on reliable servers
- Happen to be Wikipedia pages or are linked from Wikipedia
- Are hosted or linked from higher-level pages on a domain
- Have keywords in their URL (up to three)
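Factors like these are typically combined into a single relevance score. As a purely hypothetical illustration of that idea (the weights, field names, and formula below are invented for the sketch and are not from the leaked code):

```python
# Toy illustration of combining ranking signals like those listed above.
# Every weight and field name here is invented; this is NOT Yandex's formula.

def toy_rank(page: dict) -> float:
    score = page.get("page_rank", 0.0)                      # PAGE_RANK-style base signal
    score += 0.5 * page.get("organic_visit_share", 0)       # reward organic over search-driven traffic
    score -= 0.1 * page["url"].count("/")                   # fewer slashes in the URL
    score -= 0.1 * sum(ch.isdigit() for ch in page["url"])  # fewer numbers in the URL
    if page.get("linked_from_wikipedia"):
        score += 1.0                                        # Wikipedia link bonus
    if page.get("age_days", 0) > 3650:
        score -= 0.5                                        # penalize very old pages
    return score
```

In a real engine each signal would itself be a learned feature, but the sketch shows why SEO practitioners care about the leaked list: it reveals which page properties move the score at all.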

Security

JD Sports Admits Intruder Accessed 10 Million Customers' Data (theregister.com) 6

Sports fashion retailer JD Sports has confirmed miscreants broke into a system that contained data on a whopping 10 million customers, but no payment information was in the mix. The Register reports: In a post to investors this morning, the London Stock Exchange-listed business said the intrusion related to infrastructure that housed data for online orders from sub-brands including JD, Size?, Millets, Blacks, Scotts and MilletSport between November 2018 and October 2020. The data accessed consisted of customer name, billing address, delivery address, phone number, order details and the final four digits of payment cards "of approximately 10 million unique customers." The company does "not hold full payment card details" and said that it has "no reason to believe that account passwords were accessed."

As is customary in such incidents, JD Sports has contacted the relevant authorities, including the Information Commissioner's Office, and says it has enlisted the help of "leading cyber security experts." The chain has stores across Europe, with some operating in the United States and Canada. It also operates other brands including Go Outdoors and Shoe Palace.
"We want to apologize to those customers who may have been affected by this incident," said Neil Greenhalgh, chief financial officer at JD Sports. "We are advising them to be vigilant about potential scam emails, calls and texts and providing details on how to report these."

He added: "We are continuing with a full review of our cyber security in partnership with external specialists following this incident. Protecting the data of our customers is an absolute priority for JD."
Biotech

A Drug Company Made $114 Billion Gaming America's Patent System (msn.com) 92

The New York Times looks at AbbVie's anti-inflammatory drug Humira and the company's "savvy but legal exploitation of the U.S. patent system." Though AbbVie's patent was supposed to expire in 2016, the company has since maintained a monopoly that generated $114 billion in revenue by using "a formidable wall of intellectual property protection and suing would-be competitors before settling with them to delay their product launches until this year." AbbVie did not invent these patent-prolonging strategies; companies like Bristol Myers Squibb and AstraZeneca have deployed similar tactics to maximize profits on drugs for the treatment of cancer, anxiety and heartburn. But AbbVie's success with Humira stands out even in an industry adept at manipulating the U.S. intellectual-property regime.... AbbVie and its affiliates have applied for 311 patents related to Humira, of which 165 have been granted, according to the Initiative for Medicines, Access and Knowledge, which tracks drug patents. A vast majority were filed after Humira was on the market.

Some of Humira's patents covered innovations that benefited patients, like a formulation of the drug that reduced the pain from injections. But many of them simply elaborated on previous patents. For example, an early Humira patent, which expired in 2016, claimed that the drug could treat a condition known as ankylosing spondylitis, a type of arthritis that causes inflammation in the joints, among other diseases. In 2014, AbbVie applied for another patent for a method of treating ankylosing spondylitis with a specific dosing of 40 milligrams of Humira. The application was approved, adding 11 years of patent protection beyond 2016.

AbbVie has been aggressive about suing rivals that have tried to introduce biosimilar versions of Humira. In 2016, with Amgen's copycat product on the verge of winning regulatory approval, AbbVie sued Amgen, alleging that it was violating 10 of its patents. Amgen argued that most of AbbVie's patents were invalid, but the two sides reached a settlement in which Amgen agreed not to begin selling its drug until 2023.

Over the next five years, AbbVie reached similar settlements with nine other manufacturers seeking to launch their own versions of Humira. All of them agreed to delay their market entry until 2023.

A drug pricing expert at Washington University in St. Louis tells the New York Times that AbbVie and its strategy with Humira "showed other companies what it was possible to do."

But the article concludes that last year such tactics "became a rallying cry" for U.S. lawmakers "as they successfully pushed for Medicare to have greater control over the price of widely used drugs that, like Humira, have been on the market for many years but still lack competition."
Advertising

How to Handle Web Sites Asking for Your Email Address (seattletimes.com) 117

When you share your email, "you're sharing a lot more," warns the New York Times' lead consumer technology writer: [I]t can be linked to other data, including where you went to school, the make and model of the car you drive, and your ethnicity....

For many years, the digital ad industry has compiled a profile on you based on the sites you visit on the web.... An email could contain your first and last name, and assuming you've used it for some time, data brokers have already compiled a comprehensive profile on your interests based on your browsing activity. A website or an app can upload your email address into an ad broker's database to match your identity with a profile containing enough insights to serve you targeted ads.

The article recommends creating several email addresses to "make it hard for ad tech companies to compile a profile based on your email handle... Apple and Mozilla offer tools that automatically create email aliases for logging in to an app or a site; emails sent to the aliases are forwarded to your real email address." Apple's Hide My Email tool, which is part of its iCloud+ subscription service that costs 99 cents a month, will create aliases, but using it will make it more difficult to log in to the accounts from a non-Apple device. Mozilla's Firefox Relay will generate five email aliases at no cost; beyond that, the program charges 99 cents a month for additional aliases.
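Conceptually, services like Firefox Relay and Hide My Email keep a table of random aliases that all forward to one real inbox, so each site sees a different, uncorrelatable address. A toy sketch of that idea (the domain, class, and method names are made up for illustration):

```python
# Conceptual sketch of forwarding aliases, the idea behind Firefox Relay
# and Hide My Email. Domain and naming are invented for the example.

import secrets

class AliasBook:
    def __init__(self, real_address: str, domain: str = "relay.example"):
        self.real_address = real_address
        self.domain = domain
        self.aliases: dict[str, str] = {}   # alias -> site it was issued for

    def alias_for(self, site: str) -> str:
        # Each site gets a fresh random handle, so two sites cannot
        # join their records on a shared email address.
        alias = f"{secrets.token_hex(4)}@{self.domain}"
        self.aliases[alias] = site
        return alias

    def route(self, to_alias: str) -> str:
        # Mail to any issued alias is forwarded to the one real inbox.
        if to_alias not in self.aliases:
            raise ValueError("unknown alias")
        return self.real_address
```

The privacy gain is exactly the property the article describes: ad brokers can no longer use one stable email handle as a join key across sites.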

For sites using the UID 2.0 framework for ad targeting, you can opt out by entering your email address [or phone number] at https://transparentadvertising.org.

Security

Security Researchers Breached Server of Russia's 'Black Basta' Ransomware Gang (quadrantsec.com) 9

Long-time Slashdot reader Beave writes: Security researchers and practitioners at Quadrant Information Security recently found themselves in a battle with the Russian ransomware gang known as "Black Basta"... Quadrant discovered the Russian gang attempting to exfiltrate data from a network. Once a victim's data is fully exfiltrated, the gang encrypts workstations and servers and demands ransom payments from the victim in order to decrypt their data and to prevent Black Basta from releasing the exfiltrated data to the public.

Fortunately, in this case, Black Basta didn't make it that far. Instead, the security researchers used the opportunity to better understand Black Basta's "backend servers", tools, and methods. Black Basta will sometimes use a victim's network to log into their own servers, which leads to interesting opportunities to observe the gang's operations...

The first write-up goes into technical details about the malware and tactics Black Basta used. The second write-up focuses on Black Basta's "backend" servers and how they manage them.

TLDR? You can also listen to two of the security researchers discuss their findings on the latest episode of the "Breaking Badness" podcast.

The articles go into great detail - even asking whether deleting their own exfiltrated data from the gang's server "would technically constitute a federal offense per the 'The Computer Fraud and Abuse Act' of 1986."
AI

Lawsuit Accusing Copilot of Abusing Open-Source Code Challenged by GitHub, Microsoft, OpenAI (reuters.com) 60

GitHub, Microsoft, and OpenAI "told a San Francisco federal court that a proposed class-action lawsuit for improperly monetizing open-source code to train their AI systems cannot be sustained," reports Reuters: The companies said in Thursday court filings that the complaint, filed by a group of anonymous copyright owners, did not outline their allegations specifically enough and that GitHub's Copilot system, which suggests lines of code for programmers, made fair use of the source code. A spokesperson for GitHub, an online platform for housing code, said Friday that the company has "been committed to innovating responsibly with Copilot from the start" and that its motion is "a testament to our belief in the work we've done to achieve that...."

Microsoft and OpenAI said Thursday that the plaintiffs lacked standing to bring the case because they failed to argue they suffered specific injuries from the companies' actions. The companies also said the lawsuit did not identify particular copyrighted works they misused or contracts that they breached.

Microsoft also said in its filing that the copyright allegations would "run headlong into the doctrine of fair use," which allows the unlicensed use of copyrighted works in some situations. The companies both cited a 2021 U.S. Supreme Court decision that Google's use of Oracle source code to build its Android operating system was transformative fair use.

Slashdot reader guest reader shares this excerpt from the plaintiffs' complaint: GitHub and OpenAI have offered shifting accounts of the source and amount of the code or other data used to train and operate Copilot. They have also offered shifting justifications for why a commercial AI product like Copilot should be exempt from these license requirements, often citing "fair use."

It is not fair, permitted, or justified. On the contrary, Copilot's goal is to replace a huge swath of open source by taking it and keeping it inside a GitHub-controlled paywall. It violates the licenses that open-source programmers chose and monetizes their code despite GitHub's pledge never to do so.

The Almighty Buck

Wyoming Crypto Bank Denied for Federal Reserve System Membership (apnews.com) 23

The Associated Press reports that America's Federal Reserve Board "has denied a Wyoming cryptocurrency bank's application for Federal Reserve System membership, officials announced Friday, dealing a setback to the crypto industry's attempts to build acceptance in mainstream U.S. banking." Many in crypto have been looking to Cheyenne-based Custodia Bank's more than 2-year-old application as a bellwether for crypto banking. Approval would have meant access to Federal Reserve services including its electronic payments system.

The rejection adds to doubts about crypto banking's viability, particularly in Wyoming, a state that has sought to become a hub of crypto banking, exchanges and mining....

Custodia sued the Federal Reserve Board and Federal Reserve Bank of Kansas City in Wyoming federal court last year, accusing them of taking an unreasonably long time on its application. In a statement Friday, the company said it was "surprised and disappointed" by the rejection and pledged to continue to litigate the issue.

In a statement, America's Federal Reserve Board argued that Custodia's "novel business model" and focus on crypto-assets "presented significant safety and soundness risks." "The Board has previously made clear that such crypto activities are highly likely to be inconsistent with safe and sound banking practices.

"The Board also found that Custodia's risk management framework was insufficient to address concerns regarding the heightened risks associated with its proposed crypto activities, including its ability to mitigate money laundering and terrorism financing risks."

AI

US and EU To Launch First-Of-Its-Kind AI Agreement 13

The United States and European Union on Friday announced an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, healthcare, emergency response, climate forecasting and the electric grid. Reuters reports: A senior U.S. administration official, discussing the initiative shortly before the official announcement, called it the first sweeping AI agreement between the United States and Europe. Previously, agreements on the issue had been limited to specific areas such as enhancing privacy, the official said. AI modeling, which refers to machine-learning algorithms that use data to make logical decisions, could be used to improve the speed and efficiency of government operations and services.

"The magic here is in building joint models (while) leaving data where it is," the senior administration official said. "The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data because the more data and the more diverse data, the better the model." The initiative will give governments greater access to more detailed and data-rich AI models, leading to more efficient emergency responses and electric grid management, and other benefits, the administration official said. The partnership is currently between just the White House and the European Commission, the executive arm of the 27-member European Union. The senior administration official said other countries will be invited to join in the coming months.
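The "build a joint model, leave the data where it is" description resembles federated learning, where each party trains on its own data and only model parameters, never raw records, cross the boundary. A minimal, purely illustrative sketch of federated averaging (nothing here reflects the actual US/EU systems):

```python
# Federated-averaging sketch: two parties fit y = w*x on local data;
# only the parameter w is shared and averaged. Purely illustrative.

def local_update(weights: list[float], local_data: list[tuple[float, float]],
                 lr: float = 0.01) -> list[float]:
    """One gradient-descent pass over data that never leaves the site."""
    w = weights[0]
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x
    return [w]

def federated_average(updates: list[list[float]]) -> list[float]:
    """Central server averages the submitted parameter vectors only."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

us_data = [(1.0, 2.0), (2.0, 4.0)]   # stays in the US
eu_data = [(3.0, 6.0), (4.0, 8.0)]   # stays in the EU

global_w = [0.0]
for _ in range(200):
    global_w = federated_average([
        local_update(global_w, us_data),
        local_update(global_w, eu_data),
    ])
# global_w converges toward the shared relationship y = 2x
```

Both datasets encode the same relationship, so the averaged model recovers it without either side ever seeing the other's records, which is the property the official's quote emphasizes.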
Crime

Boeing Pleads Not Guilty To Fraud In Criminal Case Over Deadly 737 Max Crashes (npr.org) 42

An anonymous reader quotes a report from NPR: Aerospace giant Boeing entered a plea of not guilty to a criminal charge at an arraignment in federal court in Texas Thursday. The company is charged with felony fraud related to the crashes of two of its 737 Max airplanes that killed a total of 346 people. About a dozen relatives of some of those who were killed in the crashes gave emotional testimony during the three-hour arraignment hearing about how they've been affected by what they call "the deadliest corporate crime in U.S. history." They testified after Boeing's chief aerospace safety officer Mike Delaney entered a plea of not guilty on behalf of the airplane manufacturer to the charge of conspiracy to commit fraud. The company is accused of deceiving and misleading federal regulators about the safety of a critical automated flight control system that investigators found played a major role in causing the crashes in Indonesia in 2018 and in Ethiopia in 2019.

Boeing and the Justice Department had entered into a deferred prosecution agreement to settle the charge two years ago but many of the families of the crash victims objected to the agreement, saying that they were not consulted about what they called a "secret, sweetheart deal." Under the terms of the agreement, Boeing admitted to defrauding the FAA by concealing safety problems with the 737 Max, but pinned much of the blame on two technical pilots who they say misled regulators while working on the certification of the aircraft. Only one of those pilots was prosecuted and a jury acquitted him at trial last year. Boeing also agreed to pay $2.5 billion, including $1.7 billion in compensation to airlines that had purchased 737 Max planes but could not use them while the plane was grounded for 20 months after the second plane crashed. The company also agreed to pay $500 million in compensation to the families of those killed in the two Max plane crashes, and to pay a $243 million fine. The agreement also required Boeing to make significant changes to its safety policies and procedures, as well as to the corporate culture, which many insiders have said had shifted in recent years from a safety first focus to one that critics say put profits first.

After three years, if the aerospace giant and defense contractor lived up to the terms of the deferred prosecution agreement, the criminal charge against Boeing would be dismissed and the company would be immune from further prosecution. But last fall, U.S. District Court Judge Reed O'Connor agreed that under the Crime Victims' Rights Act, the relatives' rights had been violated and they should have been consulted before the DOJ and Boeing reached the agreement. Last week, he ordered Boeing to appear Thursday to be arraigned. On Thursday, the families asked Judge O'Connor to impose certain conditions on Boeing as a condition of release, including appointing an independent monitor to oversee Boeing's compliance with the terms of the previous deferred prosecution agreement, and that the company's compliance efforts "be made public to the fullest extent possible." O'Connor did not rule on whether to impose those conditions yet, as Boeing and the Justice Department opposed the request. But he did impose a standard condition that Boeing commit no new crimes.

Technology

Apple Brings Mainland Chinese Web Censorship To Hong Kong (theintercept.com) 35

An anonymous reader shares a report: When Safari users in Hong Kong recently tried to load the popular code-sharing website GitLab, they received a strange warning instead: Apple's browser was blocking the site for their own safety. The access was temporarily cut off thanks to Apple's use of a Chinese corporate website blacklist, which resulted in the innocuous site being flagged as a purveyor of misinformation. Neither Tencent, the massive Chinese firm behind the web filter, nor Apple will say how or why the site was censored. The outage was publicized just ahead of the new year. On December 30, 2022, Hong Kong-based software engineer and former Apple employee Chu Ka-cheong tweeted that his web browser had blocked access to GitLab, a popular repository for open-source code. Safari's "safe browsing" feature greeted him with a full-page "deceptive website warning," advising that because GitLab contained dangerous "unverified information," it was inaccessible. Access to GitLab was restored several days later, after the situation was brought to the company's attention.

The warning screen itself came courtesy of Tencent, the mammoth Chinese internet conglomerate behind WeChat and League of Legends. The company operates the safe browsing filter for Safari users in China on Apple's behalf -- and now, as the Chinese government increasingly asserts control of the territory, in Hong Kong as well. Apple spokesperson Nadine Haija would not answer questions about the GitLab incident, suggesting they be directed at Tencent, which also declined to offer responses. The episode raises thorny questions about privatized censorship done in the name of "safety" -- questions that neither company seems interested in answering: How does Tencent decide what's blocked? Does Apple have any role? Does Apple condone Tencent's blacklist practices?

Canada

Home Depot Canada Found Sharing Customer Personal Data With Meta (reuters.com) 38

Home Depot's Canadian arm was found to be sharing details from e-receipts for in-store purchases with Facebook owner Meta Platforms without the knowledge or consent of its customers, according to Canada's privacy regulator. From a report: An investigation by the Office of the Privacy Commissioner of Canada (OPC) found that by participating in Meta's offline conversions program, Home Depot shared e-receipts that included encoded email addresses and purchase information. The regulator added that, in line with one of the OPC's recommendations, the home goods chain stopped sharing customer information with Meta in October 2022 and will not resume until it can implement measures to ensure valid consent.

Security

US Says It 'Hacked the Hackers' To Bring Down Hive Ransomware Gang (reuters.com) 34

The FBI revealed today that it had shut down the prolific ransomware gang called Hive, "a maneuver that allowed the bureau to thwart the group from collecting more than $130 million in ransomware demands from more than 300 victims," reports Reuters. Slashdot readers wiredmikey and unimind shared the news. From the report: At a news conference, U.S. Attorney General Merrick Garland, FBI Director Christopher Wray, and Deputy U.S. Attorney General Lisa Monaco said government hackers broke into Hive's network and put the gang under surveillance, surreptitiously stealing the digital keys the group used to unlock victim organizations' data. They were then able to alert victims in advance so they could take steps to protect their systems before Hive demanded the payments. "Using lawful means, we hacked the hackers," Monaco told reporters. "We turned the tables on Hive."

News of the takedown first leaked on Thursday morning when Hive's website was replaced with a flashing message that said: "The Federal Bureau of Investigation seized this site as part of coordinated law enforcement action taken against Hive Ransomware." Hive's servers were also seized by the German Federal Criminal Police and the Dutch National High Tech Crime Unit. The undercover infiltration, which started in July 2022, went undetected by the gang until now.

The Justice Department said that over the years, Hive has targeted more than 1,500 victims in 80 different countries, and has collected more than $100 million in ransomware payments. Although there were no arrests announced on Wednesday, Garland said the investigation was ongoing and one department official told reporters to "stay tuned."

Government

Member of Congress Reads AI-Generated Speech On House Floor (apnews.com) 48

U.S. Rep. Jake Auchincloss read a speech on the floor of the U.S. House that was generated by AI chatbot ChatGPT. "Auchincloss said he prompted the system in part to 'write 100 words to deliver on the floor of the House of Representatives' about the legislation," reports the Associated Press. "Auchincloss said he had to refine the prompt several times to produce the text he ultimately read. His staff said they believe it's the first time an AI-written speech was read in Congress." From the report: The bill, which Auchincloss is refiling, would establish a joint U.S.-Israel AI Center in the United States to serve as a hub for AI research and development in the public, private and education sectors. Auchincloss said part of the decision to read a ChatGPT-generated text was to help spur debate on AI and the challenges and opportunities created by it. He said he doesn't want to see a repeat of the advent of social media, which started small and ballooned faster than Congress could react. "I'm the youngest parent in the Democratic caucus, AI is going to be part of my life and it could be a general purpose technology for my children," said Auchincloss, 34.

The text generated from Auchincloss's prompt includes sentences like: "We must collaborate with international partners like the Israeli government to ensure that the United States maintains a leadership role in AI research and development and responsibly explores the many possibilities evolving technologies provide." "There were probably about a dozen of my colleagues on the floor. I bet none of them knew it was written by a computer," he said. Lawmakers and others shouldn't be reflexively hostile to the new technology, but also shouldn't wait too long before drafting policies or new laws to help regulate it, Auchincloss said. In particular, he argued that the country needs a "public counterweight" to the big tech firms that would help guarantee that smaller developers and universities have access to the same cloud computing, cutting edge algorithms and raw data as larger companies.

AI

A Robot Was Scheduled To Argue In Court, Then Came the Jail Threats (npr.org) 115

schwit1 shares a report from NPR: A British man who planned to have a "robot lawyer" help a defendant fight a traffic ticket has dropped the effort after receiving threats of possible prosecution and jail time. [...] The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore. As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in. "Multiple state bar associations have threatened us," Browder said. "One even said a referral to the district attorney's office and prosecution and prison time would be possible." In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states, punishable by up to six months in county jail.

"Even if it wouldn't happen, the threat of criminal charges was enough to give it up," [said Joshua Browder, the CEO of the New York-based startup DoNotPay]. "The letters have become so frequent that we thought it was just a distraction and that we should move on." State bar associations license and regulate attorneys as a way to ensure people hire lawyers who understand the law. Browder declined to say which state bar associations sent the letters or which official made the threat of possible prosecution, noting that his startup, DoNotPay, is under investigation by multiple state bar associations, including California's.

"The truth is, most people can't afford lawyers," he said. "This could've shifted the balance and allowed people to use tools like ChatGPT in the courtroom that maybe could've helped them win cases."

"I think calling the tool a 'robot lawyer' really riled a lot of lawyers up," Browder said. "But I think they're missing the forest for the trees. Technology is advancing and courtroom rules are very outdated."

AI

MSG Probed Over Use of Facial Recognition To Eject Lawyers From Show Venues (arstechnica.com) 40

An anonymous reader quotes a report from ArsTechnica: The operator of Madison Square Garden and Radio City Music Hall is being probed by New York's attorney general over the company's use of facial recognition technology to identify and exclude lawyers from events. AG Letitia James' office said the policy may violate civil rights laws. Because of the policy, lawyers who work for firms involved in litigation against MSG Entertainment Corp. can be denied entry to shows or sporting events, even when they have no direct involvement in any lawsuits against MSG. A lawyer who is subject to MSG's policy may buy a ticket to an event but be unable to get in because the MSG venues use facial recognition to identify them.

In December, attorney Kelly Conlon was denied entry into Radio City Music Hall in New York when she accompanied her daughter's Girl Scout troop to a Rockettes show. Conlon wasn't personally involved in any lawsuits against MSG but is a lawyer for a firm that "has been involved in personal injury litigation against a restaurant venue now under the umbrella of MSG Entertainment," NBC New York reported. James' office sent a letter (PDF) Tuesday to MSG Entertainment, noting reports that it "used facial recognition software to forbid all lawyers in all law firms representing clients engaged in any litigation against the Company from entering the Company's venues in New York, including the use of any season tickets."

"We write to raise concerns that the Policy may violate the New York Civil Rights Law and other city, state, and federal laws prohibiting discrimination and retaliation for engaging in protected activity," Assistant AG Kyle Rapinan of the Civil Rights Bureau wrote in the letter. "Such practices certainly run counter to the spirit and purpose of such laws, and laws promoting equal access to the courts: forbidding entry to lawyers representing clients who have engaged in litigation against the Company may dissuade such lawyers from taking on legitimate cases, including sexual harassment or employment discrimination claims." The AG's office also said it is concerned that "facial recognition software may be plagued with biases and false positives against people of color and women." The letter asked MSG Entertainment to respond by February 13 "to state the justifications for the Company's Policy and identify all efforts you are undertaking to ensure compliance with all applicable laws and that the Company's use of facial recognition technology will not lead to discrimination."

"To be clear, our policy does not unlawfully prohibit anyone from entering our venues and it is not our intent to dissuade attorneys from representing plaintiffs in litigation against us," said an MSG spokesperson in a statement. "We are merely excluding a small percentage of lawyers only during active litigation. Most importantly, to even suggest anyone is being excluded based on the protected classes identified in state and federal civil rights laws is ludicrous. Our policy has never applied to attorneys representing plaintiffs who allege sexual harassment or employment discrimination."

Slashdot Top Deals