The Courts

California Issues Historic Fine Over Lawyer's ChatGPT Fabrications (calmatters.org) 37

An anonymous reader quotes a report from CalMatters: A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT. The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion (PDF) stating that 21 of 23 quotes from cases cited in the attorney's opening brief were made up. It also noted that numerous out-of-state and federal courts have confronted attorneys for citing fake legal authority. "We therefore publish this opinion as a warning," it continued. "Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations -- whether provided by generative AI or any other source -- that the attorney responsible for submitting the pleading has not personally read and verified."

The opinion, issued 10 days ago in California's 2nd District Court of Appeal, is a clear example of why the state's legal authorities are scrambling to regulate the use of AI in the judiciary. The state's Judicial Council two weeks ago issued guidelines requiring judges and court staff to either ban generative AI or adopt a generative AI use policy by Dec. 15. Meanwhile, the California Bar Association is considering whether to strengthen its code of conduct to account for various forms of AI following a request by the California Supreme Court last month.

The Los Angeles-area attorney fined last week, Amir Mostafavi, told the court that he did not read text generated by the AI model before submitting the appeal in July 2023, months after OpenAI marketed ChatGPT as capable of passing the bar exam. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court's time and the taxpayers' money, according to the opinion. Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try to improve it. He said that he didn't know it would add case citations or make things up.

AI

AI Tools Give Dangerous Powers to Cyberattackers, Security Researchers Warn (msn.com) 21

"On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client's AI program-writing assistant into executing programs that forked over the company's databases and code repositories," reports the Washington Post.

"We have never been this foolish with security," Brauchler said... Demonstrations at last month's Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network. A similar attack on Google's Gemini didn't even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker's number, mimicking successful phishing scams.

The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked Perplexity's agentic Comet browser into buying a watch from a fake online store and into following instructions from a fake banking email...

Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches. Seven teams of hackers that developed autonomous "cyber reasoning systems" for a contest held last month by the Pentagon's Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them. Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.

The real nightmare scenario is when these worlds collide, and an attacker's AI finds a way in and then starts communicating with the victim's AI, working in partnership — "having the bad guy AI collaborate with the good guy AI," as SentinelOne's [threat researcher Alex] Delamotte put it. "Next year," said Adam Meyers, senior vice president at CrowdStrike, "AI will be the new insider threat."

In August more than 1,000 people lost data to a modified Nx program (downloaded hundreds of thousands of times) that used pre-installed coding tools from Google/Anthropic/etc. According to the article, the malware "instructed those programs to root out" sensitive data (including passwords or cryptocurrency wallets) and send it back to the attacker. "The more autonomy and access to production environments such tools have, the more havoc they can wreak," the article points out — including this quote from SentinelOne threat researcher Alex Delamotte.

"It's kind of unfair that we're having AI pushed on us in every single product when it introduces new risks."

Facebook

Meta's UK Arbitration 'Threatens to Bankrupt' Facebook Whistleblower, Says Her Lawyer (theguardian.com) 31

In a debate on employment rights, a U.K. Parliament member brought up Meta's former director of global public policy, Sarah Wynn-Williams. Louise Haigh, the former Labour transport secretary, said Wynn-Williams was facing a fine of $50,000 (£37,000) every time she breached an order secured by Meta preventing her from talking disparagingly about the company... "I am sure that the whole house and the government will stand with Sarah as we pass this legislation to ensure that whistleblowers and those with the moral courage to speak out are always protected...."

Meta has emphasised that Wynn-Williams entered into the non-disparagement agreement voluntarily as part of her departure. Meta said that to date, Wynn-Williams had not been forced to make any payments under the agreement... [The ruling came after Wynn-Williams published an exposé in March about her time at Facebook titled Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism.] The ruling stated Wynn-Williams should stop promoting the book and, to the extent she could, stop further publication... Wynn-Williams has not spoken in public since appearing at the Senate hearing in April.

Wynn-Williams "remains silenced" according to her lawyer, who tells the Guardian that Meta's arbitration proceedings in the U.K. "threaten to bankrupt" the whistleblower.
The Courts

Disney Sued by Law Firm Wanting to Use 'Steamboat Willie' in Its Ads (apnews.com) 93

Mickey Mouse's first movie, Steamboat Willie, entered the public domain in 2024.

Now one of America's largest personal injury firms is suing Disney, reports the Associated Press, "in an effort to get a ruling that would allow it to use Steamboat Willie in advertisements..." [The law firm said] it had reached out to Disney to make sure the entertainment company wouldn't sue them if they used images from the animated film for their TV and online ads. Disney's lawyers responded by saying they didn't offer legal advice to third parties, according to the lawsuit. Morgan & Morgan said it was filing the lawsuit to get a decision because it otherwise feared being sued by Disney for trademark infringement if it used Steamboat Willie.
"Without waiver of any of its rights, Disney will not provide such advice in response to your letter," Disney's attorneys wrote in their letter (adding "Very truly yours..."). A local newscast showed a glimpse of the letter, along with a few seconds of the ad (which ends with Minnie Mouse pulling out a cellphone to call for a lawyer...)

Attorney John Morgan tells the newscast that Disney's legal team "is playing cute, and so we're just trying to get a yes or no answer... They wrote us back a bunch of mumbo-jumbo that made no sense, didn't answer the question. We tried it again, they didn't answer the question..." (The newscast adds that the case isn't expected to go to court for at least a year.)
Businesses

FTC and Seven States Sue Ticketmaster Over Alleged Coordination With Scalpers 58

The Federal Trade Commission and attorneys general from seven states filed an 84-page lawsuit Thursday in federal court in California against Live Nation Entertainment and its Ticketmaster subsidiary. The suit alleges the companies knowingly allow ticket brokers to use multiple accounts to circumvent purchase limits and acquire thousands of tickets per event for resale at higher prices.

The FTC claims this practice violates the Better Online Ticket Sales Act and generates hundreds of millions in revenue through a "triple dip" fee structure -- collecting fees on initial broker purchases, then from both brokers and consumers on secondary market sales. FTC Chairman Andrew Ferguson cited President Trump's March executive order requiring federal protection against ticketing practices. The lawsuit arrives one month after the FTC sued Maryland broker Key Investment Group over Taylor Swift tour price-gouging and follows the Department of Justice's 2024 monopoly suit against Live Nation.
Businesses

Amazon Violated Online Shopper Protection Law, Judge Rules Ahead of Prime Signup Trial (reuters.com) 21

Amazon violated consumer protection law by gathering Prime subscribers' billing information before disclosing the service's terms, a judge ruled on Wednesday, handing the U.S. Federal Trade Commission a partial win. From a report: The ruling by U.S. District Judge John Chun in the case accusing Amazon of deceptive practices to generate Prime subscriptions puts the company at a disadvantage at trial.

The FTC is poised to argue that the online retailer signed up tens of millions of customers for Prime without their consent, and thwarted tens of millions of cancellation bids through complex cancellation methods. The agency says those actions violated the Restore Online Shoppers Confidence Act (ROSCA).

AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," she said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Government

Congress Asks Valve, Discord, and Twitch To Testify On 'Radicalization' (polygon.com) 138

An anonymous reader quotes a report from Polygon: The CEOs of Discord, Steam, Twitch, and Reddit have been called to Congress to testify about the "radicalization of online forum users" on those platforms, the House Oversight and Government Reform Committee announced Wednesday. "Congress has a duty to oversee the online platforms that radicals have used to advance political violence," said chairman of the House Oversight Committee James Comer, a Republican from Kentucky, in a statement. "To prevent future radicalization and violence, the CEOs of Discord, Steam, Twitch, and Reddit must appear before the Oversight Committee and explain what actions they will take to ensure their platforms are not exploited for nefarious purposes."

Letters from the House Oversight Committee have been sent to Humam Sakhnini, CEO of Discord; Gabe Newell, president of Steam maker Valve; Dan Clancy, CEO of Twitch; and Steve Huffman, CEO of Reddit, requesting their testimony on Oct. 8. "The hearing will examine radicalization of online forum users, including incidents of open incitement to commit violent politically motivated acts," Comer said in a letter to each CEO. [...] Discord, Steam, Twitch, and Reddit execs will have the chance to deliver five-minute opening statements prior to answering questions posed by members of the committee during October's testimony.

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter and more invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide: "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."
Privacy

Google Releases VaultGemma, Its First Privacy-Preserving LLM 23

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had worked out the degree to which that alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which balance the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
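The "calibrated noise" mentioned above is typically applied via DP-SGD: clip each training example's gradient contribution, average, then add Gaussian noise scaled to the clipping norm. The sketch below is a minimal illustration of that general technique, not Google's actual training code; the function name and default parameters are hypothetical.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping norm."""
    rng = np.random.default_rng(0) if rng is None else rng
    batch_size = len(per_sample_grads)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any per-example gradient whose norm exceeds clip_norm,
        # bounding each example's influence on the update
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise stddev is proportional to clip_norm and inversely proportional
    # to batch size: larger batches dilute the noise, which is the
    # noise-batch ratio trade-off the researchers studied
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=avg.shape)
    return params - lr * (avg + noise)
```

In this framing, the noise-batch ratio corresponds to `noise_multiplier * clip_norm / batch_size`: more noise (a stronger privacy guarantee) degrades the update unless offset by larger batches, i.e., a larger compute budget.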
This work has led to a new Google model called VaultGemma, the company's first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, and the company says it performs comparably to non-private models of similar size.

It's available now from Hugging Face and Kaggle.
Privacy

UK's MI5 'Unlawfully' Obtained Data From Former BBC Journalist (theguardian.com) 43

Bruce66423 shares a report from The Guardian: MI5 has conceded it "unlawfully" obtained the communications data of a former BBC journalist, in what was claimed to be an unprecedented admission from the security services. The BBC said it was a "matter of grave concern" that the agency had obtained communications data from the mobile phone of Vincent Kearney, a former BBC Northern Ireland home affairs correspondent. The admission came in a letter to the BBC and to Kearney, in relation to a tribunal examining claims that several reporters in Northern Ireland were subjected to unlawful scrutiny by the police. It related to work carried out by Kearney for a documentary into the independence of the Office of the Police Ombudsman for Northern Ireland (PONI). Kearney is now the northern editor at Irish broadcaster RTE.

In documents submitted to the Investigatory Powers Tribunal (IPT), MI5 conceded it obtained phone data from Kearney on two occasions in 2006 and 2009. Jude Bunting KC, representing Kearney and the BBC, told a hearing on Monday: "The MI5 now confirms publicly that in 2006 and 2009 MI5 obtained communications data in relation to Vincent Kearney." He said the security service accepted it had breached Kearney's rights under article 8 and article 10 of the European convention on human rights. They relate to the right to private correspondence and the right to impart information without interference from public authorities. "This appears to be the first time in any tribunal proceedings in which MI5 publicly accept interference with a journalist's communications data, and also publicly accept that they acted unlawfully in doing so," Bunting said. He claimed the concessions that it accessed the journalist's data represented "serious and sustained illegality on the part of MI5."
Bruce66423 comments: "The good news is that it's come out. The bad news is that it has taken 16 years to do so. The interesting question is whether there will be any meaningful consequences for individuals within MI5; there's a nice charge of 'malfeasance in public office' that can be used to get such individuals into a criminal court. Or will the outcome be like that of when the CIA hacked the US Senate's computers, lied about it, and nothing happened?"
Government

FTC Probes Whether Ticketmaster Does Enough To Stop Resale Bots (reuters.com) 38

The FTC is investigating whether Ticketmaster is doing enough to prevent bots from illegally reselling tickets on its platform, with a decision on the matter coming within weeks, according to Bloomberg (paywalled). Reuters reports: The 2016 Better Online Ticket Sales Act prohibits the use of bots and other methods to bypass ticket purchase limits set by online sellers. As part of the probe, FTC investigators are assessing whether Ticketmaster has a financial incentive to allow resellers to circumvent its ticket limit rules, according to the report. A settlement is also possible, Bloomberg reported. If the FTC pursues a case and Live Nation loses, the company could face billions of dollars in penalties, as the law permits fines of up to $53,000 per violation.
The Courts

Internet Archive Ends Legal Battle With Record Labels Over Historic Recordings (sfchronicle.com) 41

The Internet Archive has reached a confidential settlement with Universal Music Group and other major labels, "ending a closely watched copyright battle over the nonprofit's effort to digitize and stream historic recordings," reports the San Francisco Chronicle. From the report: The case (PDF), UMG Recordings, Inc. v. Internet Archive, targeted the Archive's Great 78 Project, an initiative to digitize more than 400,000 fragile shellac records from the early 20th century. The collection includes music by artists such as Frank Sinatra, Ella Fitzgerald and Billie Holiday, and has been made available online for free public access. Record labels including Universal, Sony Music Entertainment and Capitol Records had sought $621 million in damages, arguing the Archive's streaming of these recordings constituted copyright infringement.

The Internet Archive, based in San Francisco's Richmond District, describes itself as a digital library dedicated to providing "universal access to all knowledge." Its director of library services, Chris Freeland, acknowledged the settlement in a brief statement. "The parties have reached a confidential resolution of all claims and will have no further public comment on this matter," he wrote.

United States

Airlines Sell 5 Billion Plane Ticket Records To the Government For Warrantless Searching (404media.co) 104

404 Media: A data broker owned by the country's major airlines, including American Airlines, United and Delta, is selling access to five billion plane ticketing records to the government for warrantless searching and monitoring of peoples' movements, including by the FBI, Secret Service, ICE, and many other agencies, according to a new contract and other records reviewed by 404 Media.

The contract provides new insight into the scale of the sale of passengers' data by the Airlines Reporting Corporation (ARC), the airlines-owned data broker. The contract shows ARC's data includes information related to more than 270 carriers and is sourced through more than 12,800 travel agencies. ARC has previously told the government to not reveal to the public where this passenger data came from, which includes peoples' names, full flight itineraries, and financial details.

"Americans' privacy rights shouldn't depend on whether they bought their tickets directly from the airline or via a travel agency. ARC's sale of data to U.S. government agencies is yet another example of why Congress needs to close the data broker loophole by passing my bipartisan bill, the Fourth Amendment Is Not For Sale Act," Senator Ron Wyden told 404 Media in a statement.

Privacy

A Third of UK Firms Using 'Bossware' To Monitor Workers' Activity, Survey Reveals (theguardian.com) 23

A third of UK employers are using "bossware" technology to track workers' activity, with the most common methods including monitoring emails and web browsing. From a report: Private companies are most likely to deploy in-work surveillance, and one in seven employers are recording or reviewing screen activity, according to a UK-wide survey that estimates the extent of office snooping.

The findings, shared with the Guardian by the Chartered Management Institute (CMI), are based on responses from hundreds of UK managers and suggest there has been a recent growth in computerised work surveillance. In 2023, less than a fifth of people thought they were being monitored by an employer, the Information Commissioner's Office (ICO) found. The finding that about a third of managers report their organisations are monitoring workers' online activities on employer-owned devices is probably an underestimate, as roughly the same proportion said they don't know what tracking their organisations do.

Many monitoring systems are aimed at preventing insider threats and safeguarding sensitive information as well as detecting productivity dips. But the trend appears to be causing unease. A large minority of managers are opposed to the practice, saying it undermines trust with staff and invades their personal privacy, the CMI found.

Government

African Island Demanding Government Action Punished with Year-Long Internet Outage (apnews.com) 42

"When residents of Equatorial Guinea's Annobón island wrote to the government in Malabo in July last year complaining about the dynamite explosions by a Moroccan construction company, they didn't expect the swift end to their internet access..." reports the Associated Press.

"Residents and activists said the company's dynamite explosions in open quarries and construction activities have been polluting their farmlands and water supply..." Dozens of the signatories and residents were imprisoned for nearly a year, while internet access to the small island has been cut off since then, according to several residents and rights groups. Local residents interviewed by The Associated Press have left the island in recent months, citing fear for their lives and the difficulty of life without internet. Banking services have shut down, hospital services for emergencies have been brought to a halt and residents say they rack up phone bills they can't afford because cellphone calls are the only way to communicate...

The company's work on the island continues. Residents had hoped their complaint in July last year would pressure authorities to improve the situation. Instead, [the country's president] deployed a repressive tactic now common in Africa: cutting off internet access to clamp down on protests and criticism.

Businesses

America's FTC Opens New Probe into Amazon and Google Advertising Practices (msn.com) 12

America's Federal Trade Commission is investigating whether Amazon and Google misled advertisers placing ads on their websites, reports Bloomberg, and specifically whether the two companies "properly disclosed the terms and pricing for ads." The FTC is seeking details about Amazon's auctions and whether it disclosed "reserve pricing" for some search ads — price floors that advertisers must meet before they can buy an ad, the people said. Separately, the FTC is examining practices by Google, including its internal pricing process and whether it increased the cost of ads in ways that weren't disclosed to advertisers, the people said...

According to one of the people, the FTC's latest investigation emerged from its earlier antitrust case. In that complaint, the agency alleges that Amazon litters its marketplace with irrelevant results for search queries, making it harder for shoppers to find what they are looking for and more expensive for sellers to use the platform. The practice effectively forces sellers to buy ads to make their product appear in response to consumer searches.

Facebook

Facebook Begins Sending Settlement Payments from Cambridge Analytica Scandal Soon (cnn.com) 30

"Facebook users who filed a claim in parent company Meta's $725 million settlement related to the Cambridge Analytica scandal may soon get a payment," reports CNN, since "on August 27, the court ordered that settlement benefits be distributed." It's been over two years since Facebook users were able to file claims in Meta's December 2022 settlement. The class-action lawsuit began after the social media giant said in 2018 that as many as 87 million Facebook users' private information was obtained by data analytics firm Cambridge Analytica...

Meta was accused of allowing Cambridge Analytica and other third parties, including developers, advertisers and data brokers, to access private information about Facebook users. The social media giant was also accused of insufficiently managing third-party access to and use of user data. Meta did not admit wrongdoing as part of the settlement. Following the Cambridge Analytica incident, Facebook restricted third-party access to user data and "developed more robust tools" to inform users about how data is collected and shared, according to court documents...

Any US Facebook user who had an active account between May 24, 2007, and December 22, 2022, was eligible to file a claim, even if they have deleted the account. The deadline to file was August 25, 2023. Almost 29 million claims were filed and about 18 million were validated as of September 2023, according to Meta's response in a 2024 legal document... Payments will either be sent directly to the bank account provided on the claim form, or via PayPal, a virtual prepaid Mastercard, Venmo or Zelle. Unsuccessful or expired payments will receive a "second chance email" to update the payment method.

Crime

Myanmar's 'Cyber-Slavery Compounds' May Hold 100,000 Trafficked People (theguardian.com) 35

It was "little more than empty fields" five years ago — but it's now "a vast, heavily guarded complex stretching for 210 hectares (520 acres)," reports the Guardian, "the frontline of a multibillion-dollar criminal fraud industry fuelled by human trafficking and brutal violence." Myanmar, Cambodia and Laos have in recent years become havens for transnational crime syndicates running scam centres such as KK Park, which use enslaved workers to run complex online fraud and scamming schemes that generate huge profits. There have been some attempts to crack down on the centres and rescue the workers, who can be subjected to torture and trapped inside. But drone images and new research shared exclusively with the Guardian reveal that the number of such centres operating along the Thai-Myanmar border has more than doubled since Myanmar's military seized power in 2021, with construction continuing to this day.

Data from the Australian Strategic Policy Institute (Aspi), a defence thinktank in Canberra, shows that the number of Myanmar scam centres on the Thai border has increased from 11 to 27, and they have expanded in size by an average of 5.5 hectares a month. Drone images and photographs of KK Park and other Myanmar scam centres, Tai Chang and Shwe Kokko, taken by the Guardian in August show new features and active building work... Myanmar's military junta has allowed the spread of scam centres inside the country as these criminal enterprises have become an essential part of the country's conflict economy since the coup, helping it rise to the top of the global list of countries harbouring organised crime. According to Aspi's analysis, Myanmar's military, which has lost huge swathes of territory since the coup and is struggling to retain its grip on power, cannot take meaningful measures against the scam compounds without endangering its precarious relations with the crucial armed militias who are profiting from them.

While 7,000 people were freed from the compounds, "Thai police estimated earlier this year that as many as 100,000 people were held inside Myanmar scam centres," the article notes.

Elsewhere the Guardian reports that "The centres are run by Chinese criminal gangs," and describes people who unwittingly came to Thailand for customer service jobs, only to be trafficked to Myanmar's guarded "cyberslavery compounds" and "forced to send thousands of messages from fake social-media profiles, posing as a rich American investor to swindle US real estate agents into cryptocurrency scams." Since 2020, south-east Asia's cyber-slavery industry has entrapped hundreds of thousands of people and forced them to perform "pig butchering" — the brutal term for building trust with a fraud target before scamming them. At first, the industry mostly captured Chinese and Taiwanese people, then it moved on to south-east Asians and Indians — and now Africans.

Criminal syndicates have been shifting towards scamming victims in the US and Europe after Chinese efforts to prevent its citizens being targeted, experts told the Guardian. That has led some trafficking networks to seek recruits with English-language and tech skills — including east Africans, thousands of whom are now estimated to be trapped inside south-east Asian compounds, says Benedikt Hofmann, the UN Office on Drugs and Crime's representative for south-east Asia and the Pacific.

Thanks to long-time Slashdot reader mspohr for sharing the article.

Piracy

Megaupload Founder Kim Dotcom Loses Latest Bid to Avoid US Extradition (yahoo.com) 29

In 2015 Kim Dotcom answered questions from Slashdot's readers.

Now CBS News reports on "the latest chapter in a protracted 13-year battle by the U.S. government" to extradite Finnish-German millionaire Kim Dotcom from New Zealand: A New Zealand court has rejected the latest bid by internet entrepreneur Kim Dotcom to halt his deportation to the U.S. on charges related to his file-sharing website Megaupload. Dotcom had asked the High Court to review the legality of an official's August 2024 decision that he should be surrendered to the U.S. to face trial on charges of copyright infringement, money laundering and racketeering... The Megaupload founder had applied for what in New Zealand is called a judicial review, in which a judge is asked to evaluate whether an official's decision was lawful. A judge on Wednesday dismissed Dotcom's arguments that the decision to deport him was politically motivated and that he would face grossly disproportionate treatment in the U.S...

New Zealand's government hasn't disclosed what will happen next in the extradition process or divulged an expected timeline for Dotcom to be surrendered to the United States.

Dotcom "has been free on bail in New Zealand since February 2012," the article points out — and "One of his lawyers, Ron Mansfield, told Radio New Zealand that Dotcom's team had 'much fight left in us as we seek to secure a fair outcome,' but he didn't elaborate..."

The article notes that the latest decision "could be challenged in the Court of Appeal, where a deadline for filing is October 8."
