Technology

Maze, a Notorious Ransomware Group, Says It's Shutting Down (techcrunch.com) 22

One of the most active and notorious data-stealing ransomware groups, Maze, says it is "officially closed." From a report: The announcement came in a waffling statement, riddled with spelling mistakes, published on the group's dark web site, which for the past year has hosted vast troves of stolen internal documents and files from the companies it targeted, including Cognizant, cybersecurity insurance firm Chubb, pharmaceutical giant ExecuPharm, Tesla and SpaceX parts supplier Visser, and defense contractor Kimchuk. Where typical ransomware groups would infect a victim with file-encrypting malware and hold the files for ransom, Maze gained its notoriety by first exfiltrating a victim's data and threatening to publish the stolen files unless the ransom was paid. The tactic quickly became the preferred approach of other ransomware groups, which set up websites -- often on the dark web -- to leak the files they stole if victims refused to pay up. Maze initially used exploit kits and spam campaigns to infect its victims, but later began using known security vulnerabilities to specifically target big-name companies. Maze was known to exploit vulnerable virtual private network (VPN) and remote desktop protocol (RDP) servers to launch targeted attacks against victims' networks. Some of the demanded ransoms reached into the millions of dollars.
Spam

A Massive Spam Attack Is Ruining Public 'Among Us' Games (engadget.com) 60

Just days after US Representative Alexandria Ocasio-Cortez played Among Us to an audience of more than 435,000 viewers, InnerSloth, the developer of the popular multiplayer title, is struggling to contain a spam attack that is affecting most of the game's community. Engadget reports: The hack started to spread through the game's userbase on Thursday evening. It causes players to spam their match's text chat with messages that direct people to the YouTube and Discord channels of a person who goes by the pseudonym "Eris Loris," threatening them if they don't subscribe. For good measure, some of the messages also promote President Donald Trump's 2020 campaign.

InnerSloth said it's "super duper aware of the current hacking issue" and that it had planned to roll out an emergency server-side update to address the spam. Forest Willard, one of three developers who make up the InnerSloth team, said they had begun rolling out the update at some point in the middle of the night, but it doesn't seem to have addressed the issue; new reports of spam-filled matches continue to flood Twitter. The studio is advising people to play private games with friends while it works to solve the problem.

As for the hacker, it appears their primary motive in all of this was to troll people. "I was curious to see what would happen, and personally I found it funny," they told Kotaku. "The anger and hatred is the part that makes it funny. If you care about a game and are willing to go and spam dislike some random dude on the internet because you cant [sic] play it for three minutes, it's stupid."

Android

240+ Android Apps Caught Showing Out-of-Context Ads (zdnet.com) 8

This summer, Google removed more than 240 Android apps from the official Play Store for showing out-of-context ads and breaking a newly introduced Google policy against this type of intrusive advertising. From a report: Out-of-context ads (also known as out-of-app ads) are mobile ads that are shown outside an app's normal container. They can appear as popups or as fullscreen ads. Out-of-context ads have been banned on the Play Store since February this year, when Google banned more than 600 apps that were abusing the practice to spam their users with annoying ads. But despite the public crackdown and ban, other apps showing out-of-context ads have continued to be discovered -- such as in June this year. The latest of these discoveries comes from ad fraud detection firm White Ops. In a blog post today, the company said it discovered a new cluster of more than 240 Android apps bombarding their users with out-of-context ads -- ads made to look like they originated from other, more legitimate applications.
Youtube

YouTube Celebrates Deaf Awareness Week By Killing Crowd-Sourced Captions (arstechnica.com) 41

Two days after the International Week of the Deaf, which is the last full week in September, YouTube is killing its "Community Contributions" feature for videos, which let content creators crowdsource captions and subtitles for their videos. Ars Technica reports: Once enabled by a channel owner, the Community Contributions feature would let viewers caption or translate a video and submit it to the channel for approval. YouTube currently offers machine-transcribed subtitles that are often full of errors, and if you also need YouTube to take a second pass at the subtitles for machine translation, they've probably lost all meaning by the time they hit your screen. The Community Caption feature would load up those machine-written subtitles as a starting point and allow the user to make corrections and add text that the machine transcription doesn't handle well, like transcribed sound cues for the deaf and hard of hearing.

YouTube says it's killing crowd-sourced subtitles due to spam and low usage. "While we hoped Community Contributions would be a wide-scale, community-driven source of quality translations for Creators," the company wrote, "it's rarely used and people continue to report spam and abuse." The community does not seem to agree with this assessment, since a petition immediately popped up asking YouTube to reconsider, and so far a half-million people have signed. "Removing community captions locks so many viewers out of the experience," the petition reads. "Community captions ensured that many videos were accessible that otherwise would not be."

Instead of the free, in-house solution YouTube already built and doesn't want to keep running, the company's shutdown post pushes users to paid, third-party alternatives like Amara.org. YouTube says that because "many of you rely on community captions," (what happened to the low usage?) "YouTube will be covering the cost of a 6 month subscription of Amara.org for all creators who have used the Community Contribution feature for at least 3 videos in the last 60 days."

Facebook

US Teens Are Being Paid to Spread Disinformation on Social Media (adn.com) 204

The Washington Post covered "a sprawling yet secretive campaign that experts say evades the guardrails put in place by social media companies to limit online disinformation of the sort used by Russia" during America's last presidential campaign in 2016.

According to four people with knowledge of the effort, "Teenagers, some of them minors, are being paid to pump out the messages..." The campaign draws on the spam-like behavior of bots and trolls, with the same or similar language posted repeatedly across social media. But it is carried out, at least in part, by humans paid to use their own accounts, though nowhere disclosing their relationship with Turning Point Action or the digital firm brought in to oversee the day-to-day activity. One user included a link to Turning Point USA's website in his Twitter profile until The Washington Post began asking questions about the activity. In response to questions from The Post, Twitter on Tuesday suspended at least 20 accounts involved in the activity for "platform manipulation and spam." Facebook also removed a number of accounts as part of what the company said is an ongoing investigation...

The months-long effort by the tax-exempt nonprofit is among the most ambitious domestic influence campaigns uncovered this election cycle, said experts tracking the evolution of deceptive online tactics. "In 2016, there were Macedonian teenagers interfering in the election by running a troll farm and writing salacious articles for money," said Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab. "In this election, the troll farm is in Phoenix...."

The messages — some of them false and some simply partisan — were parceled out in precise increments as directed by the effort's leaders, according to the people with knowledge of the highly coordinated activity, most of whom spoke on the condition of anonymity to protect the privacy of minors carrying out the work... The messages have appeared mainly as replies to news articles about politics and public health posted on social media. They seek to cast doubt on the integrity of the electoral process, asserting that Democrats are using mail balloting to steal the election — "thwarting the will of the American people," they alleged. The posts also play down the threat from covid-19, which claimed the life of Turning Point's co-founder Bill Montgomery in July...

By seeking to rebut mainstream news articles, the operation illustrates the extent to which some online political activism is designed to discredit the media. While Facebook and Twitter have pledged to crack down on what they have labeled coordinated inauthentic behavior, in Facebook's case, and platform manipulation and spam, as Twitter defines its rules, their efforts falter in the face of organizations willing to pay users to post on their own accounts, maintaining the appearance of independence and authenticity.

One parent even said their two teenagers had been posting the messages since June as "independent contractors" — while being paid less than minimum wage.
Google

Google's New 'Verified Calls' Feature Will Tell You Why a Business is Calling You (techcrunch.com) 69

Google today is introducing a new feature for Android phones that helps legitimate businesses reach their customers by phone, with their brand name and reason for calling properly identified. From a report: The feature, known as "Verified Calls," will display the caller's name, their logo, a reason why they're calling, and a verification symbol indicating the call has been verified by Google. The feature arrives at a time when spam calls are on the rise. U.S. consumers received 61.4 billion spam calls in 2019, according to a recent report from RoboKiller, representing a 28% increase from the prior year. The U.S. Federal Communications Commission also says that unwanted calls are its top consumer complaint. Google's new system gives legitimate businesses a way to share their information with consumers, along with their reason for calling, on the incoming call screen. This, however, only works for participating businesses that have chosen to sign up with one of Google's partners in order to have their calls verified. According to Google's website for the service, businesses can get started with Verified Calls by working with a partner such as Neustar, JustCall, Telecall, Zenvia, Prestus, Aspect, Five9, Vonage, Bandwidth, IMImobile, Kaleyra, Quiubas Mobile, or Datora.
Security

'Unusually Large Number' of Breached SendGrid Accounts Are Sending Spams and Scams (krebsonsecurity.com) 13

Krebs on Security reports: Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid's parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime...

[A] large number of organizations allow email from Sendgrid's systems to sail through their spam-filtering systems. To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click...

Rob McEwen is CEO of Invaluement.com, an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that's been emanating from Sendgrid accounts lately. "As far as the nasty criminal phishes and viruses, I think there's not even a close second in terms of how bad it's been with Sendgrid over the past few months," he said...

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid's two-factor authentication plans are long overdue, noting that the company bought Authy back in 2015. "Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we're seeing," Schwartzman said... Schwartzman said if Twilio doesn't act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

Krebs found an online cybercriminal selling access to more than 400 compromised Sendgrid accounts. "Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400."
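The link obfuscation Krebs describes is ordinary click-tracking: the sender's links are rewritten to point at the provider's tracking domain, which logs the click and then redirects to the real destination, so recipients never see the target URL in the message itself. A rough sketch of how such wrapping typically works is below; the domain, parameters, and handler are hypothetical, not SendGrid's actual scheme.

```typescript
// Hypothetical click-tracking link rewriting, of the kind described above.
// The ESP replaces each destination URL with a link to its own tracking
// domain, then redirects the reader after logging the click.
const TRACKING_HOST = "https://links.example-esp.com/track"; // made-up domain

// Rewrite an outbound link before the email is sent.
function wrapLink(destination: string, messageId: string): string {
  const params = new URLSearchParams({
    msg: messageId,
    url: Buffer.from(destination).toString("base64url"),
  });
  return `${TRACKING_HOST}?${params.toString()}`;
}

// On the tracking host: log the click, then 302-redirect to the real target.
// (Framework-agnostic return value, purely for illustration.)
function handleClick(query: Record<string, string>): { status: number; location: string } {
  const destination = Buffer.from(query.url, "base64url").toString("utf8");
  console.log(`click recorded for message ${query.msg} -> ${destination}`);
  return { status: 302, location: destination };
}
```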
Spam

Elon Musk Shows Neuralink Brain Link Working In a Pig (cnet.com) 87

With a pig named Gertrude, Elon Musk demonstrated his startup Neuralink's technology to build a digital link between brains and computers. A wireless link from the Neuralink device showed the pig's brain activity as it snuffled around a pen on stage Friday night. CNET reports: The demonstration shows the technology to be significantly closer to delivering on Musk's radical ambitions than during a 2019 product debut, when Neuralink only showed photos of a rat with a Neuralink connected via a USB-C port. It's still far from reality, but Musk said the US Food and Drug Administration in July granted approval for "breakthrough device" testing. Musk also showed a second-generation device that's more compact and that fits into a small cavity hollowed out of the skull. "It's like a Fitbit in your skull with tiny wires," Musk said of the device. It communicates with brain cells via 1,024 thin electrodes that penetrate into the brain.
Privacy

235 Million Instagram, TikTok and YouTube User Profiles Exposed In Massive Data Leak (forbes.com) 19

An anonymous reader quotes a report from Forbes: The security research team at Comparitech today disclosed how an unsecured database left almost 235 million Instagram, TikTok and YouTube user profiles exposed online in what can only be described as a massive data leak. The data was spread across several datasets; the most significant being two coming in at just under 100 million each and containing profile records apparently scraped from Instagram. The third-largest was a dataset of some 42 million TikTok users, followed by just under 4 million YouTube user profiles.

Comparitech says that, based on the samples it collected, one in five records contained either a telephone number or email address. Every record also included at least some, and sometimes all, of the following information: profile name, full real name, profile photo, and account description, along with statistics about follower engagement, including number of followers, engagement rate, follower growth rate, audience gender, audience age, audience location, likes, and last post timestamp, as well as the account holder's age and gender. "The information would probably be most valuable to spammers and cybercriminals running phishing campaigns," Paul Bischoff, Comparitech editor, says. "Even though the data is publicly accessible, the fact that it was leaked in aggregate as a well-structured database makes it much more valuable than each profile would be in isolation," Bischoff adds. Indeed, Bischoff told me that it would be easy for a bot to use the database to post targeted spam comments on any Instagram profile matching criteria such as gender, age or number of followers.
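For illustration, here is a hypothetical sketch of what one such aggregated record might look like, based on the field categories Comparitech lists; the names and types are illustrative, not the leaked database's actual schema.

```typescript
// Hypothetical shape of one aggregated profile record, based on the field
// categories described above (not the leaked database's real schema).
interface ScrapedProfileRecord {
  profileName: string;
  fullRealName?: string;
  profilePhotoUrl?: string;
  accountDescription?: string;
  email?: string;        // roughly one in five records held an email or phone number
  phoneNumber?: string;
  followers?: number;
  engagementRate?: number;
  followerGrowthRate?: number;
  audienceGender?: string;
  audienceAge?: string;
  audienceLocation?: string;
  likes?: number;
  lastPostTimestamp?: string;
  age?: number;
  gender?: string;
}

// Bischoff's point about aggregation: structured records can be filtered in
// bulk, which is what makes the compiled database more valuable than any
// single public profile viewed in isolation.
const highReach = (records: ScrapedProfileRecord[]) =>
  records.filter(r => (r.followers ?? 0) > 100_000);
```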
The data appeared to have originated from a company called Deep Social, which was banned by both Facebook and Instagram in 2018 after scraping user profile data. The company was wound down sometime after this.

The researchers reached out to Deep Social, which then forwarded the disclosure to a Hong Kong-registered social media influencer data-marketing company called Social Data. Social Data shut down the database about three hours after the researchers' initial email. "Social Data has denied any connection between itself and Deep Social," reports Forbes, citing Comparitech.
Cloud

Countering Google, Microsoft Promises Its Own Open Source Service Mesh for the CNCF (infoworld.com) 13

"As controversy rages over the governance of Google's Istio service mesh, Microsoft has seen an opportunity to offer a simple and truly open alternative," reports InfoWorld: Microsoft has announced that it will release its own open source service mesh — called Open Service Mesh (OSM) — and transfer it to the Cloud Native Computing Foundation (CNCF) as soon as possible. This sets the Redmond-based company apart from its cloud rival Google, which recently announced that its own Istio service mesh will no longer be part of the vendor-neutral CNCF and will instead sit under Google's own Open Usage Commons foundation.

The service mesh has quickly become a vital part of the modern cloud native computing stack, as it essentially enables communication, monitoring, and load balancing between the disparate parts of today's microservices-based architectures. This differs from the popular container orchestration service Kubernetes in its level of granularity. When run in tandem with Kubernetes, a service mesh enables deeper security policy and encryption enforcement, as well as automated load balancing and circuit breaking...

With this launch Microsoft is not only aligning itself with the open governance side of the debate which has been raging through the open source software community for the past few months, but is also looking to solve a customer pain point.
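For readers unfamiliar with the term, the "circuit breaking" mentioned in the InfoWorld excerpt is a traffic-management pattern the mesh's sidecar proxies apply between services: after repeated failures, calls to an unhealthy upstream fail fast for a cooling-off period rather than piling more load onto it. A generic sketch of the pattern, not OSM's actual API, is below.

```typescript
// Generic circuit-breaker sketch (illustrative only; a service mesh applies
// this behavior in its proxies rather than in application code).
class CircuitBreaker {
  private consecutiveFailures = 0;
  private openUntil = 0; // epoch ms until which the circuit stays open

  constructor(
    private readonly maxFailures = 5,       // trip after this many consecutive errors
    private readonly resetAfterMs = 30_000, // fail fast for this long once tripped
  ) {}

  async call<T>(request: () => Promise<T>): Promise<T> {
    if (Date.now() < this.openUntil) {
      // Circuit is open: reject immediately instead of hitting the sick service.
      throw new Error("circuit open: upstream temporarily unavailable");
    }
    try {
      const result = await request();
      this.consecutiveFailures = 0; // a success closes the circuit
      return result;
    } catch (err) {
      if (++this.consecutiveFailures >= this.maxFailures) {
        this.openUntil = Date.now() + this.resetAfterMs;
      }
      throw err;
    }
  }
}

// Example: wrap calls from one microservice to another (hypothetical URL).
const breaker = new CircuitBreaker();
const getInventory = () =>
  breaker.call(() => fetch("http://inventory.internal/items").then(r => r.json()));
```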

Youtube

YouTube Bans Thousands of Chinese Accounts To Combat 'Coordinated Influence Operations' (techcrunch.com) 187

An anonymous reader quotes a report from TechCrunch: YouTube has banned a large number of Chinese accounts it said were engaging in "coordinated influence operations" on political issues, the company announced today; 2,596 accounts from China alone were taken down from April to June, compared with 277 in the first three months of 2020. "These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report (PDF), including content related to the U.S. response to COVID-19," Google posted in its Threat Analysis Group bulletin for Q2.

The Graphika report, entitled "Return of the (Spamouflage) Dragon: Pro Chinese Spam Network Tries Again," [...] details a large set of accounts on YouTube, Facebook, Twitter and other social media that began to be activated early this year that appeared to be part of a global propaganda push: "The network made heavy use of video footage taken from pro-Chinese government channels, together with memes and lengthy texts in both Chinese and English. It interspersed its political content with spam posts, typically of scenery, basketball, models, and TikTok videos. These appeared designed to camouflage the operation's political content, hence the name." It's the "return" of this particular spam dragon because it showed up last fall in a similar form, and whoever is pulling the strings appears undeterred by detection. New, sleeper and stolen accounts were amassed again and deployed for similar purposes, though now -- as Google notes -- with a COVID-19 twist. When June rolled around, content was also being pushed related to the ongoing protests regarding the killings of George Floyd and Breonna Taylor and other racial justice matters.

Businesses

It's Official: EU Launches Antitrust Probe Into Google's Fitbit Takeover (cnn.com) 15

It was rumored last week and now it's official: the European Commission announced it is launching an in-depth antitrust investigation into Google's $2.1 billion bid for Fitbit. CNN reports: The European Union's top antitrust regulator said it is concerned that the takeover would further strengthen Google's market position in online advertising by "increasing the already vast amount of data that Google could use for personalization of the ads it serves and displays." Google announced it was buying Fitbit, the world's leading maker of wearable fitness activity trackers, in November. The deal, worth about $2.1 billion, is one of Google's largest acquisitions and represents an important step for the company into smartwatches and other wearable devices.

The Commission had already launched a preliminary investigation into the transaction. It said a commitment by Google not to use Fitbit data for advertising purposes was insufficient to address the concerns identified in the initial probe. The Commission's top antitrust official, Margrethe Vestager, said in a statement that the use of wearable devices by European consumers, as well as the data generated by them, is expected to grow significantly. "Our investigation aims to ensure that control by Google over data collected through wearable devices as a result of the transaction does not distort competition," Vestager said. In a blog post, Google Senior Vice President for Devices and Services Rick Osterloh said the deal "is about devices, not data," a market he said is full of competition. "We've been clear from the beginning that we will not use Fitbit health and wellness data for Google ads," Osterloh said. "We recently offered to make a legally binding commitment to the European Commission regarding our use of Fitbit data. As we do with all our products, we will give Fitbit users the choice to review, move or delete their data."

Youtube

YouTube Criticized For Ending Its Community Captions Feature (theverge.com) 36

Long-time Slashdot reader xonen quotes the Verge: YouTube plans to discontinue its community captions feature, which allowed viewers to add subtitles to videos, because it was "rarely used and had problems with spam/abuse," the company announced. It says it's removing the captions and will "focus on other creator tools." The feature will be removed as of September 28th.

"You can still use your own captions, automatic captions and third-party tools and services," YouTube said in an update on its help page. But deaf and hard-of-hearing creators say removing the community captions feature will stifle accessibility, and they want to see the company try to fix the issues with volunteer-created captions, rather than doing away with them entirely. Deaf YouTuber Rikki Poynter said on her channel in May that community captions were an "accessibility tool that not only allowed deaf and hard of hearing people to watch videos with captions, but allowed creators that could not afford to financially invest in captions." She tweeted Thursday that she was disappointed with YouTube's decision.

YouTuber JT, whose channel has more than 550,000 subscribers, highlighted the downside of the community captions feature last year, showing how viewers were adding abusive captions to videos by popular creators. But many creators say they relied on the captions not only to better reach deaf and hard-of-hearing viewers, but to help translate their videos into other languages, giving them a larger audience.

YouTube is offering a free six-month subscription to a subtitling service for regular users of the community contribution feature — but not everyone is satisfied, according to the Verge. A petition calling on Google to reverse the decision has now garnered more than 155,000 signatures.
Facebook

Facebook Criticized For Temporarily Blocking Entire Domain 'Dreamwidth.org' (dreamwidth.org) 41

Dreamwidth is an online journal service based on the LiveJournal codebase, according to Wikipedia — "a code fork of the original service, set up by ex-LiveJournal staff Denise Paolucci and Mark Smith, born out of a desire for a new community based on open access, transparency, freedom and respect."

"I discovered, about an hour ago, that all of my posts on Facebook which were links to Dreamwidth had vanished. Suddenly gone as if they'd never existed," complained Dreamwidth user Andrew Ducker on Sunday morning.

Though that afternoon he posted "All working fine now," thousands had already seen his original post (quoted below): I checked with Denise (one of the owners of Dreamwidth) to find out if she knew about it, and discovered that Facebook have stuck Dreamwidth on a block list...

This is unbelievably frustrating. And the kind of centralised, autocratic, opaque decision making which I loathe. Tens of thousands of active users, unable to share blog posts with Facebook (which, let's face it, is where most of my friends go for their socialising)...

"This may be an overzealous spam filter at work," Slashdot reader JoshuaZ had argued. But even before Facebook adjusted their filtering, Dreamwidth co-owner Mark Smith was calling it "definitely a bit of a /shrug moment... 'Facebook gonna Facebook' I think is approximately how we feel about this...

"We do not have any goals around growth, we don't advertise, and we ultimately don't care that much what the other platforms do. Our goal is to give people a stable home where they don't have to worry about their data being sold, their writing being monetized..."
Republicans

Trump Campaign Angry That Cell Carriers Blocked Company Texts To Voters (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: President Trump's re-election campaign has accused Verizon, AT&T, and T-Mobile of "suppression of political speech" over the carriers' blocking of spam texts sent by the campaign. The fight was described Wednesday in an in-depth article by Business Insider and other reports. "The Trump campaign has been battling this month with the biggest US cellphone carriers over an effort to blast millions of cell users with texts meant to coax them to vote or donate," Business Insider wrote. "President Donald Trump's adviser and son-in-law, Jared Kushner, didn't appreciate it when AT&T, Verizon, and T-Mobile blocked mass campaign texts to voters. He called the companies to complain, setting off the legal wrangling."

When contacted by Ars, a Trump campaign spokesperson said that "any effort by the carriers to restrict the campaign from contacting its supporters is suppression of political speech. Plain and simple." The Trump campaign statement also said it "stands by the compliance of its texting programs" with the US Telephone Consumer Protection Act (TCPA) and Federal Communications Commission guidelines. Business Insider wrote that "the showdown got serious at the start of July when Trump's team sent a blast of texts to people who hadn't signed up for them," and "a third-party firm hired to screen such messages for the major cellphone companies blocked the texts." The article said that campaign lawyers and the carriers "are still fighting over what kinds of messages the campaign is allowed to send and what the companies have the power to stop." Politico wrote about the dispute on Monday. "People familiar with the chain of events said Verizon, T-Mobile and AT&T flagged potential regulatory problems with the peer-to-peer messaging operation, which differs from robo-texting in that texts are sent individually, as opposed to a mass blast," Politico wrote. "But within Trump's orbit, the episode has further fueled suspicions that big tech companies are looking to influence the election."
The Trump campaign has not explained why the texts are legal and shouldn't have been blocked. They also didn't say how many people they tried to send the texts to, or whether the texts were unsolicited or sent to people who had signed up for campaign communications.

Carriers "viewed the texts as a possible violation of federal anti-robocall laws and Federal Communications Commission rules that come with hefty fines," Business Insider reported, citing information provided by "two Republicans familiar with the effort." Trump "campaign operatives" contend that its texting "exists in a legal gray area that allows campaigns to blast cellphone users if the messages are sent manually," Business Insider also wrote.
Communications

FCC: Phone Carriers That Profit From Robocalls Could Have All Calls Blocked (arstechnica.com) 55

"Bad-actor" phone companies that profit from robocalls could be blocked by more legitimate carriers under rules approved unanimously yesterday by the Federal Communications Commission. From a report: Under the change, the FCC said carriers can block calls "from bad-actor upstream voice service providers that pass illegal or unwanted calls along to other providers, when those upstream providers have been notified but fail to take action to stop these calls." Carriers that impose this type of blocking will get a safe harbor from liability "for the unintended or inadvertent blocking of wanted calls, thus eliminating a concern that kept some companies from implementing robust robocall blocking efforts."

This expanded level of blocking -- spurred by a new law in which Congress directed the FCC to expand safe harbors -- could be implemented by companies that sell phone service directly to consumers. That includes mobile carriers Verizon, AT&T, and T-Mobile, traditional landline companies, and VoIP providers. Carriers won't be able to block calls from just any provider. As Chairman Ajit Pai explained, the safe harbor will be available in cases when the "bad-actor" telecom has been notified by the FCC that it is carrying illegal traffic and "fails either to effectively mitigate such traffic or to implement effective measures to prevent customers from using its network to originate illegal calls."

China

Apple 'Suddenly Catches TikTok Secretly Spying On Millions Of iPhone Users', Claims Forbes (forbes.com) 61

In February, Reddit's CEO called TikTok "fundamentally parasitic," according to a report on TechCrunch, adding "it's always listening, the fingerprinting technology they use is truly terrifying, and I could not bring myself to install an app like that on my phone... I actively tell people, 'Don't install that spyware on your phone.'"

TikTok called his remarks "baseless accusations made without a shred of evidence."

But now Apple "has fixed a serious problem in iOS 14, due in the fall, where apps can secretly access the clipboard on users' devices..." reports Forbes cybersecurity contributor Zak Doffman, noting that one of the biggest offenders it revealed still turns out to be TikTok: Worryingly, one of the apps caught snooping [in March] by security researchers Talal Haj Bakry and Tommy Mysk was China's TikTok. Given other security concerns raised about the app, as well as broader worries given its Chinese origins, this became a headline issue. At the time, TikTok owner Bytedance told me the problem related to the use of an outdated Google advertising SDK that was being replaced.

Well, maybe not. With the release of the new clipboard warning in the beta version of iOS 14, now with developers, TikTok seems to have been caught abusing the clipboard in a quite extraordinary way. So it seems that TikTok didn't stop this invasive practice back in April as promised after all. Worse, the excuse has now changed. According to TikTok, the issue is now "triggered by a feature designed to identify repetitive, spammy behavior," and has told me that it has "already submitted an updated version of the app to the App Store removing the anti-spam feature to eliminate any potential confusion." In other words: We've been caught doing something we shouldn't, we've rushed out a fix...

iOS users can relax, knowing that Apple's latest safeguard will force TikTok to make the change, which in itself shows how critical a fix this has been. For Android users, though, there is no word yet as to whether this is an issue for them as well.

Long-time Slashdot reader schwit1 also shares an online rumor from an anonymous Redditor (with a 7-year-old account) who claims to be a software engineer who's reverse engineered TikTok's software and learned more scary things, concluding that TikTok is a "data collection service that is thinly-veiled as a social network."

So far the most reputable news outlets that have repeated his allegations are Bored Panda, Stuff, Hot Hardware, and Illinois radio station WBNQ.
United States

Julian Assange Charged in Superseding Indictment (justice.gov) 229

A federal grand jury returned a second superseding indictment today charging Julian P. Assange, the founder of WikiLeaks, with offenses that relate to Assange's alleged role in one of the largest compromises of classified information in the history of the United States. DOJ, in a press release: The new indictment does not add additional counts to the prior 18-count superseding indictment returned against Assange in May 2019. It does, however, broaden the scope of the conspiracy surrounding alleged computer intrusions with which Assange was previously charged. According to the charging document, Assange and others at WikiLeaks recruited and agreed with hackers to commit computer intrusions to benefit WikiLeaks. Since the early days of WikiLeaks, Assange has spoken at hacking conferences to tout his own history as a "famous teenage hacker in Australia" and to encourage others to hack to obtain information for WikiLeaks. In 2009, for instance, Assange told the Hacking At Random conference that WikiLeaks had obtained nonpublic documents from the Congressional Research Service by exploiting "a small vulnerability" inside the document distribution system of the United States Congress, and then asserted that "[t]his is what any one of you would find if you were actually looking." In 2010, Assange gained unauthorized access to a government computer system of a NATO country. In 2012, Assange communicated directly with a leader of the hacking group LulzSec (who by then was cooperating with the FBI), and provided a list of targets for LulzSec to hack. With respect to one target, Assange asked the LulzSec leader to look for (and provide to WikiLeaks) mail and documents, databases and pdfs. In another communication, Assange told the LulzSec leader that the most impactful release of hacked materials would be from the CIA, NSA, or the New York Times. WikiLeaks obtained and published emails from a data breach committed against an American intelligence consulting company by an "Anonymous" and LulzSec-affiliated hacker. According to that hacker, Assange indirectly asked him to spam that victim company again.
The Media

The Good, The Bad & The Ugly (of Technology) (om.co) 47

If our social platforms are going to be gatekeepers, then they need to acknowledge their role in the information ecosystem. It is knowing what to boost and what to ignore that makes a good platform, writes veteran technology journalist turned venture capitalist Om Malik. From his essay: The battle of good email versus spam email has taken a long time, but it has been worth fighting. The struggle between real information and fake information is no different. Unfortunately, what we have is ambivalent algorithms on our social platforms that blindly amplify both hope and hate. This gets complicated pretty quickly. Without access to the same platforms currently being used to gaslight our country, we won't see the awful videos of police in conflict with the people they should protect. Without the same platforms, it would be harder to tell that the media just glorifies the titillating stuff, whether it is the opinion page of the old Gray Lady or the fake looting of a non-existent Rolex store.

I am the first to admit that this is one hard and messy problem. The challenge we face today is that technology's supreme commanders fail to fight the real monkey on their back -- how the modern internet works. Whether it is Facebook, Twitter, Amazon, or Google, the core principle of these companies is engagement and growth. More engagement means more growth, and that means more attention and thus more money. If Facebook removed news from your feed and just restricted it to social items, like baby pictures, ravings of a crazy uncle, and event announcements, there is a good chance that engagement on the platform would decrease. Twitter would be a lot less engaging if it reverted back to its original premise of showing the latest, not the loudest. And what if Google stopped rewarding frequent visits as one of the measurements for showing the results on its search engine? I think you know.

Chrome

Google To Enable the Chrome Anti-Notification Spam System in July 2020 (zdnet.com) 17

Google announced this week that it plans to enable its new anti-notification spam system in Chrome over the summer, with the release of Chrome 84 on July 12, 2020. From a report: Known internally as the "quieter notification permission UI," this Chrome component works by blocking sites from showing notification requests, which are hidden under an icon in the Chrome URL bar (on desktop) or under a toolbar (on mobile). Google first announced the "quieter notification permission UI" in January, and shipped it in February, in Chrome 80, in a limited, user opt-in fashion. But in a blog post, Google said the new UI and its ability to detect spammy notification popups have been improved and will roll out enabled by default for all users in July, with the release of Chrome 84.
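The notification requests being quieted originate from the web Notifications API: a site calls Notification.requestPermission(), which normally triggers a permission prompt; under the quieter UI, Chrome collapses that prompt into a small address-bar indicator for sites it deems spammy. A minimal sketch of the request flow is below, assuming a standard browser environment.

```typescript
// Minimal sketch of the web Notifications API call that triggers the
// permission prompt Chrome 84's quieter UI suppresses for spammy sites.
async function askForNotifications(): Promise<void> {
  if (!("Notification" in window)) return; // feature-detect first

  // This call is what normally pops the prompt -- or, with the quieter UI,
  // only the collapsed icon in the address bar.
  const permission = await Notification.requestPermission();

  if (permission === "granted") {
    new Notification("You're subscribed to updates.");
  }
}
```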
