Privacy

110,000 Affected by Epik Breach - Including Those Who Trusted Epik to Hide Their Identity (washingtonpost.com) 112

Epik's massive data breach is already affecting lives. Today the Washington Post describes a real estate agent in Pompano Beach who urged buyers on Facebook to move to "the most beautiful State." His name and personal details "were found on invoices suggesting he had once paid for websites with names such as racisminc.com, whitesencyclopedia.com, christiansagainstisrael.com and theholocaustisfake.com". The real estate brokerage where he worked then dropped him as an agent. The brokerage's owner told the Post they didn't "want to be involved with anyone with thoughts or motives like that."

"Some users appear to have relied on Epik to lead a double life," the Post reports, "with several revelations so far involving people with innocuous day jobs who were purportedly purveyors of hate online." (Alternate URL here.) Epik, based outside Seattle, said in a data-breach notice filed with Maine's attorney general this week that 110,000 people had been affected nationwide by having their financial account and credit card numbers, passwords and security codes exposed.... Heidi Beirich, a veteran researcher of hate and extremism, said she is used to spending weeks or months doing "the detective work" trying to decipher who is behind a single extremist domain. The Epik data set, she said, "is like somebody has just handed you all the detective work — the names, the people behind the accounts..."

Many website owners who trusted Epik to keep their identities hidden were exposed, but some who took additional precautions, such as paying in bitcoin and using fake names, remain anonymous....

Aubrey "Kirtaner" Cottle, a security researcher and co-founder of Anonymous, declined to share information about the hack's origins but said it was fueled by hackers' frustrations over Epik serving as a refuge for far-right extremists. "Everyone is tired of hate," Cottle said. "There hasn't been enough pushback, and these far-right players, they play dirty. Nothing is out of bounds for them. And now ... the tide is turning, and there's a swell moving back in their direction."

Earlier in the week, the Post reported: Since the hack, Epik's security protocols have been the target of ridicule among researchers, who've marveled at the site's apparent failure to take basic security precautions, such as routine encryption that could have protected data about its customers from becoming public... The hack even exposed the personal records from Anonymize, a privacy service Epik offered to customers wanting to conceal their identity.
Electronic Frontier Foundation

Why EFF Flew a Plane Over Apple's Headquarters (eff.org) 29

EFF.org has the story: For the last month, civil liberties and human rights organizations, researchers, and customers have demanded that Apple cancel its plan to install photo-scanning software onto devices. This software poses an enormous danger to privacy and security. Apple has heard the message, and announced that it would delay the system while consulting with various groups about its impact. But in order to trust Apple again, we need the company to commit to canceling this mass surveillance system.

The delay may well be a diversionary tactic. Every September, Apple holds one of its big product announcement events, where Apple executives detail the new devices and features coming out. Apple likely didn't want concerns about the phone-scanning features to steal the spotlight.

But we can't let Apple's disastrous phone-scanning idea fade into the background, only to be announced with minimal changes down the road. To make sure Apple is listening to our concerns, EFF turned to an old-school messaging system: aerial advertising.

During Apple's event, a plane circled the company's headquarters carrying an impossible-to-miss message: "Apple, don't scan our phones!" The evening before Apple's event, protestors also rallied nationwide in front of Apple stores. The company needs to hear us, and not just dismiss the serious problems with its scanning plan. A delay is not a cancellation, and the company has also been dismissive of some concerns, referring to them as "confusion" about the new features.

Apple's iMessage is one of the preeminent end-to-end encrypted chat clients. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. We don't support encryption for its own sake: we fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world.

Now that Apple's September event is over, Apple must reach out to groups that have criticized it and seek a wider range of suggestions on how to deal with difficult problems, like protecting children online...

The world, thankfully, has moved towards encrypted communications over the last two decades, not away from them, and that's a good thing. If Apple wants to maintain its reputation as a pro-privacy company, it must continue to choose real end-to-end encryption over government demands to read users' communications.

Privacy matters now more than ever. It will continue to be a selling point and a distinguishing feature of some products and companies. For now, it's an open question whether Apple will continue to be one of them.

Crime

'Every Message Was Copied to the Police': the Daring Sting Behind the An0m Phone (theguardian.com) 105

The Guardian tells the story of "a viral sensation in the global underworld," the high-security An0m phones, which launched with "a grassroots marketing campaign, identifying so-called influencers — 'well-known crime figures who wield significant power and influence over other criminal associates', according to a US indictment — within criminal subcultures." An0m could not be bought in a shop or on a website. You had to first know a guy. Then you had to be prepared to pay the astronomical cost: $1,700 for the handset, with a $1,250 annual subscription, an astonishing price for a phone that was unable to make phone calls or browse the internet.

Almost 10,000 users around the world had agreed to pay, not for the phone so much as for a specific application installed on it. Opening the phone's calculator allowed users to enter a sum that functioned as a kind of numeric open sesame to launch a secret messaging application. The people selling the phone claimed that An0m was the most secure messaging service in the world. Not only was every message encrypted so that it could not be read by a digital eavesdropper, it could be received only by another An0m phone user, forming a closed loop system entirely separate from the information speedways along which most text messages travel. Moreover, An0m could not be downloaded from any of the usual app stores. The only way to access it was to buy a phone with the software preinstalled...
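The "calculator as front door" trick described above is simple to illustrate. The sketch below is purely hypothetical — An0m's internals and unlock sequence are not public, and the secret input and launch string here are invented:

```python
# Hypothetical sketch of a calculator front-end that doubles as a gate
# for a hidden app. The secret input and launch mechanism are invented
# for illustration; An0m's actual internals are not public.
SECRET_INPUT = "1984*1"  # invented unlock sequence

def handle_input(expr: str) -> str:
    if expr == SECRET_INPUT:
        return "LAUNCH_HIDDEN_MESSENGER"  # open the covert messaging app
    try:
        # behave like an ordinary calculator for everything else
        return str(eval(expr, {"__builtins__": {}}))
    except Exception:
        return "ERROR"
```

To anyone who doesn't know the secret input, the app computes sums like any calculator; only the exact unlock sequence diverts to the hidden messenger.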

[U]sers could set an option to wipe the phone's data if the device went offline for a specified amount of time. Users could also set especially sensitive messages to self-erase after opening, and could record and send voice memos in which the phone would automatically disguise the speaker's voice. An0m was marketed and sold not so much to the security conscious as the security paranoid...

An0m was not, however, a secure phone app at all. Every single message sent on the app since its launch in 2018 — 19.37m of them — had been collected, and many of them read by the Australian federal police (AFP) who, together with the FBI, had conceived, built, marketed and sold the devices.

On 7 June 2021, more than 800 arrests were made around the world....

Law enforcement agencies ultimately saw An0m as a creative workaround for unbreakable encryption, according to the Guardian. "Why debate tech companies on privacy issues through costly legal battles if you can simply trick criminals into using your own monitored network?"

The Guardian's story was shared by jd (Slashdot user #1,658), who sees an ethical question. "As the article notes, what's to stop a tyrant doing the same against rivals or innocent protestors?"
Encryption

WhatsApp Will Finally Let Users Encrypt Their Chat Backups in the Cloud (techcrunch.com) 12

WhatsApp said on Friday it will give its two billion users the option to encrypt their chat backups to the cloud, taking a significant step to put a lid on one of the tricky ways private communication between individuals on the app can be compromised. From a report: The Facebook-owned service has end-to-end encrypted chats between users for years. But users have had no option but to store their chat backup to their cloud -- iCloud on iPhones and Google Drive on Android -- in an unencrypted format. [...] Now WhatsApp says it is patching this weak link in the system.

The company said it has devised a system to enable WhatsApp users on Android and iOS to lock their chat backups with encryption keys. WhatsApp says it will offer users two ways to encrypt their cloud backups, and the feature is optional. In the "coming weeks," users on WhatsApp will see an option to generate a 64-digit encryption key to lock their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based "backup key vault" that WhatsApp has developed.
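The two options can be sketched roughly as follows. This is an illustration of the general approach only, not WhatsApp's implementation (whose backup key vault is reportedly more involved); the iteration count and function names are arbitrary choices:

```python
import hashlib
import secrets

# Illustration only, not WhatsApp's actual scheme.
def generate_backup_key() -> str:
    # Option 1: a random 64-digit key the user stores offline
    return "".join(secrets.choice("0123456789") for _ in range(64))

def password_derived_key(password: str, salt: bytes) -> bytes:
    # Option 2: derive a key-protecting secret from a user password;
    # the iteration count here is an arbitrary illustrative choice
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
```

In the password variant, the derived secret would gate access to the real encryption key held in the cloud vault, so the user need only remember a password rather than 64 digits.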

Encryption

UK.gov Is Launching An Anti-Facebook Encryption Push (theregister.com) 33

The British government is preparing to launch a full-scale policy assault against Facebook as the company gears up to introduce end-to-end encryption across all of its services. The Register reports: Prominent in details briefed to the news media this week (including The Register) were accusations that Facebook harbours paedophiles, terrorists, and mobsters and that British police forces would effectively be blinded to the scale of criminality on the social networking platform, save for cases where crimes are reported. It's a difficult and nuanced topic made no simpler or easier by the fact that government officials seem hellbent on painting it in black and white.

Government and law enforcement officials who briefed the press on condition of anonymity earlier this week* sought to paint a picture of the internet going dark if Facebook's plans for end-to-end encryption (E2EE) went forward, in terms familiar to anyone who remembers how Western nation states defended themselves from public upset after former NSA sysadmin Edward Snowden's 2013 revelations of illegal mass surveillance. The US National Center for Missing and Exploited Children (NCMEC) generates around 20 million reports of child sexual abuse material (CSAM) every year, of which 70 per cent would be "lost" if E2E encryption were put in place, claimed British officials.

The government's long-signaled push to deter Facebook from implementing E2EE comes, inevitably, at a significant cost to taxpayers: London ad agency M&C Saatchi has been hired at an undisclosed cost by the Home Office to tell the public that Facebook (and WhatsApp) harbours criminals. The ad campaign will run online, in newspapers and on radio stations with the aim of turning public opinion against E2EE -- and, presumably, driving home the message that encryption itself is something inherently bad. Other announcements due this week, from notoriously anti-encryption Home Secretary Priti Patel and intergovernmental meetings, will explicitly condemn Facebook's contemplated rollout of E2EE.

Facebook

WhatsApp Moderators Can Read Your Messages (gizmodo.com) 87

Gizmodo highlights the findings of a new ProPublica report on WhatsApp's content moderation system. It found that there are at least 1,000 WhatsApp content moderators employed by Facebook's moderator contract firm Accenture to review user-reported content that's been flagged by its machine learning system. "They monitor for, among other things, spam, disinformation, hate speech, potential terrorist threats, child sexual abuse material (CSAM), blackmail, and 'sexually oriented businesses,'" reports Gizmodo. "Based on the content, moderators can ban the account, put the user 'on watch,' or leave it alone." From the report: Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app's artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.

WhatsApp discloses, in its terms of service, that when an account is reported, it "receives the most recent messages" from the reported group or user as well as "information on your recent interactions with the reported user." This does not specify that such information, viewable by moderators, could include phone numbers, profile photos, linked Facebook and Instagram accounts, their IP address, and mobile phone ID. And, the report notes, WhatsApp does not disclose the fact that it amasses all users' metadata no matter their privacy settings.

WhatsApp didn't offer much clarity on what mechanism it uses to receive decrypted messages, only that the person tapping the "report" button is automatically generating a new message between themselves and WhatsApp. That seems to indicate that WhatsApp is deploying a sort of copy-paste function, but the details are still unclear. Facebook told Gizmodo that WhatsApp can read messages because they're considered a version of direct messaging between the company and the reporter. They added that users who report content make the conscious choice to share information with Facebook; by their logic, Facebook's collection of that material doesn't conflict with end-to-end encryption. So, yes, WhatsApp can see your messages without your consent.
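The mechanism, as described, is consistent with a simple client-side flow: the reporter's device already holds the plaintext, so reporting just forwards recent messages as a new message to WhatsApp. A speculative sketch — the five-message window is from the report; the class and method names are invented:

```python
from collections import deque

# Speculative sketch of the reporting flow as described: E2EE is never
# broken in transit; the reporting client simply forwards the plaintext
# it already has. Class and method names are invented.
class ChatThread:
    def __init__(self) -> None:
        self.history = deque(maxlen=5)  # last five messages, per the report

    def receive(self, plaintext: str) -> None:
        self.history.append(plaintext)

    def report(self) -> list[str]:
        # "reporting" generates a new message from reporter to WhatsApp
        # containing the recent plaintext history
        return list(self.history)
```

This is why the company can argue E2EE is intact: no third party decrypts anything; one endpoint of the conversation chooses to share what it can already read.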

Government

Will Gaming Change Humanity As We Know It? (bloombergquint.com) 77

"The advent of gaming, especially computer gaming, marks a fundamental break in human affairs," argues American economist Tyler Cowen (in a Bloomberg opinion column).

"Gaming is profoundly transforming two central aspects of the modern world: culture and regulation. There will be no turning back... Plenty of trading already takes place in games — involving currencies, markets, prices and contracts. Game creators and players set and enforce the rules, and it is harder for government regulators to play a central role. The lesson is clear: If you wish to create a new economic institution, put it inside a game. Or how about an app that gamifies share trading? Do you wish to experiment with a new kind of stock exchange or security outside the purview of traditional government regulation? Try the world of gaming, perhaps combined with crypto, and eventually your 'game' just might influence events in the real world...

[R]egulators are already falling behind. Just as gaming has outraced the world of culture, so will gaming outrace U.S. regulatory capabilities, for a variety of reasons: encryption, the use of cryptocurrency, the difficulties of policing virtual realities, varying rules in foreign jurisdictions and, not incidentally, a lack of expertise among U.S. regulators. (At least the Chinese government's attempt to restrict youth gaming to three hours a week, while foolhardy, reflects a perceptive cultural conservatism.)

Both the culture-weakening and the regulation-weakening features of games follow from their one basic characteristic: They are self-contained worlds. Until now, human institutions and structures have depended on relatively open and overlapping networks of ideas. Gaming is carving up and privatizing those spaces. This shift is the big trend that hardly anyone — outside of gaming and crypto — is noticing.

If the much-heralded "metaverse" ever arrives, gaming will swallow many more institutions, or create countervailing versions of them. Whether or not you belong to the world of gaming, it is coming for your worlds. I hope you are ready.

Encryption

America's NSA Isn't Sure Quantum Computers Will Ever Break Public Key Encryption (msn.com) 92

America's National Security Agency "isn't really sure when or even if quantum computers will be able to crack public key cryptography," writes TechRadar.

They report that the NSA "has expressed its reservations about the potential of quantum computing" in a new FAQ titled Quantum Computing and Post-Quantum Cryptography. "NSA does not know when or even if a quantum computer of sufficient size and power to exploit public key cryptography (a CRQC) will exist," said the security agency in response to whether it is worried about the potential of adversarial use of quantum computing. In the FAQ, the NSA describes a Cryptographically Relevant Quantum Computer (CRQC) as a quantum computer that's capable of actually attacking real world cryptographic systems, something that's currently infeasible.

While it agrees that such a computer would be "devastating" to the digital security infrastructure, it seems to suggest that it doesn't believe such a CRQC would ever materialize.

However, the growing research in quantum computing has moved the agency to also support the development of post-quantum cryptographic standards, along with plans for eventual transition to such standards.

Apple

We Built a CSAM System Like Apple's - the Tech Is Dangerous (washingtonpost.com) 186

An anonymous reader writes: Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple's own employees have been expressing alarm. The company insists reservations about the system are rooted in "misunderstandings." We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple's -- and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we're also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.
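At its simplest, the matching idea is a hash lookup; the toy sketch below conveys only that core. The authors' actual design uses perceptual hashing (so near-duplicates still match) and private-set-intersection cryptography so that, as described above, the client never sees the database and the server learns nothing about non-matching content:

```python
import hashlib

# Toy version of database matching, for intuition only. Real designs
# (including the authors') use perceptual hashes plus cryptographic
# private set intersection; this plain lookup has neither property.
KNOWN_BAD = {hashlib.sha256(b"known-harmful-sample").hexdigest()}

def matches_database(content: bytes) -> bool:
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD
```

Note that nothing in this structure constrains what KNOWN_BAD contains — which is exactly the repurposing problem the authors go on to describe.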

But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

About the authors of this report: Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton University. He previously served as technology counsel to then-Sen. Kamala D. Harris and as chief technologist of the Federal Communications Commission Enforcement Bureau. Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.
Privacy

Policy Groups Ask Apple To Drop Plans To Inspect iMessages, Scan for Abuse Images (reuters.com) 89

More than 90 policy and rights groups around the world published an open letter on Thursday urging Apple to abandon plans for scanning children's messages for nudity and the phones of adults for images of child sex abuse. From a report: "Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material, we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children," the groups wrote in the letter, which was first reported by Reuters. The largest campaign to date over an encryption issue at a single company was organized by the U.S.-based nonprofit Center for Democracy & Technology (CDT). Some overseas signatories in particular are worried about the impact of the changes in nations with different legal systems, including some already hosting heated fights over encryption and privacy.
Encryption

Apple's Child Protection Features Spark Concern Within Its Own Ranks (reuters.com) 99

According to an exclusive report from Reuters, Apple's move to scan U.S. customer phones and computers for child sex abuse images has resulted in employees speaking out internally, "a notable turn in a company famed for its secretive culture." From the report: Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters. Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread. Past security changes at Apple have also prompted concern among employees, but the volume and duration of the new debate is surprising, the workers said. Some posters worried that Apple is damaging its leading reputation for protecting privacy.

In the Slack thread devoted to the photo-scanning feature, some employees have pushed back against criticism, while others said Slack wasn't the proper forum for such discussions. Core security employees did not appear to be major complainants in the posts, and some of them said that they thought Apple's solution was a reasonable response to pressure to crack down on illegal material. Other employees said they hoped that the scanning is a step toward fully encrypting iCloud for customers who want it, which would reverse Apple's direction on the issue a second time.

Apple has said it will refuse requests from governments to use the system to check phones for anything other than illegal child sexual abuse material.
Security

How a Security Researcher Took Over a Hotel's IoT Devices (zdnet.com) 36

"The moment you network IoT and hand over control to third parties, you may also give individuals the keys to a digital kingdom — and the ability to cause mischief, or worse," writes ZDNet.

For example, at a hotel where guests control the devices in their room with an iPod Touch... Speaking at Black Hat USA, Las Vegas, security consultant Kya Supa from LEXFO explained how a chain of security weaknesses was combined and exploited to gain control of rooms at a capsule hotel, a budget-friendly type of hotel offering extremely small — and, therefore, cozy — spaces to guests, who are stacked side-by-side... A neighbor, "Bob," kept waking Supa up by making loud phone calls in the early hours of the morning. While Bob had agreed to keep it down, he did not keep his promise — and the researcher set to work since he needed his sleep, especially during his vacation. The first thing Supa did was to explore his room, finding an emergency light installed for safety reasons; a Nasnos automation center for use in controlling products in case the iPod Touch was lost; an electric motor used to manage the incline of the capsule's bed; and a Nasnos router, hidden in the wall.

If you connected to the router via a smartphone, it was then possible to control other devices on the network, and this was the setup the hotel chose to use... Supa found that two networks were connected — the hotel Wi-Fi and the router. To retrieve the router key, Supa targeted WEP, a protocol that has been known to be weak for years. Access points, one for each bedroom, were found. Supa inspected the traffic and found weak credentials in place — "123" — and you can guess the rest...

By using an Android smartphone, the iPod Touch, and a laptop, the researcher created a Man-in-The-Middle (MiTM) architecture and inspected the network traffic. No encryption was found and he created a simple program to tamper with these connections, allowing the researcher to seize control of his bedroom through his laptop... Now that he could "control every bedroom," and Bob was still there, Supa then tampered with the lights of different bedrooms until he found the right one. He created a script that, every two hours, would change the bed into a sofa and turn the lights on and off. The script was launched at midnight. We can probably assume Bob did not enjoy his stay.

"I hope he will be more respectful in the future," Supa commented.

Electronic Frontier Foundation

Edward Snowden and EFF Slam Apple's Plans To Scan Messages and iCloud Images (macrumors.com) 55

Apple's plans to scan users' iCloud Photos library against a database of child sexual abuse material (CSAM) to look for matches and children's messages for explicit content has come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF). MacRumors reports: In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future. Snowden also noted that Apple has historically been an industry-leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."

The EFF, an eminent international non-profit digital rights group, has issued an extensive condemnation of Apple's move to scan users' iCloud libraries and messages, saying that it is extremely "disappointed" that a "champion of end-to-end encryption" is undertaking a "shocking about-face for users who have relied on the company's leadership in privacy and security." The EFF highlighted how various governments around the world have passed laws that demand surveillance and censorship of content on various platforms, including messaging apps, and that Apple's move to scan messages and "iCloud Photos" could be legally required to encompass additional materials or easily be widened. "Make no mistake: this is a decrease in privacy for all 'iCloud Photos' users, not an improvement," the EFF cautioned.

Chrome

Google Chrome To No Longer Show Secure Website Indicators (bleepingcomputer.com) 68

Google Chrome will no longer show whether a site you are visiting is secure, and will instead only indicate when you visit an insecure website. Bleeping Computer reports: To further push web developers into only using HTTPS on their sites, Google introduced the protocol as a ranking factor. Those not hosting a secure site got a potentially minor hit in their Google search results rankings. It appears to have worked: according to the 'HTTPS encryption on the web' section of Google's Transparency Report, over 90% of all browser connections in Google Chrome currently use an HTTPS connection.

Currently, when you visit a secure site, Google Chrome will display a little locked icon indicating that your communication with the site is encrypted, as shown below. As most website communication is now secure, Google is testing a new feature that removes the lock icon for secure sites. This feature is available to test in Chrome 93 Beta, and Chrome 94 Canary builds by enabling the 'Omnibox Updated connection security indicators' flag. With this feature enabled, Google Chrome will only display security indicators when the site is not secure. For businesses who wish to have continued HTTPS security indicators, Google has added an enterprise policy for Chrome 93 named 'LockIconInAddressBarEnabled' that can be used to enable the lock icon again on the address bar.
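On a managed Linux machine, for example, that enterprise policy can be expressed as a small JSON file (the managed-policy file mechanism and path are per Chromium's Linux documentation; deployment differs on Windows and macOS, which use the registry and configuration profiles):

```json
{
  "LockIconInAddressBarEnabled": true
}
```

Dropped into /etc/opt/chrome/policies/managed/, Chrome reads it at launch and restores the lock icon for secure sites.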

Facebook

Facebook Researchers Hope To Bring Together Two Foes: Encryption and Ads 53

Facebook is bulking up a team of artificial intelligence researchers, including a key hire from Microsoft, to study ways of analyzing encrypted data without decrypting it, the company confirmed to the Information. From the report: The research could allow Facebook to target ads based on encrypted messages on its WhatsApp messenger, or to encrypt the data it collects on billions of users without hurting its ad-targeting capabilities, outside experts say. Facebook is one of several technology giants, including cloud computing providers Microsoft, Amazon and Google, now researching an emerging field known as homomorphic encryption. Researchers hope the technology will allow companies to analyze personal information, including medical records and financial data, while keeping the information encrypted and protected from cybersecurity threats or, in Facebook's case, leaks to advertisers or other parties.
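The core idea — computing on ciphertexts without ever decrypting them — can be seen in miniature in textbook RSA, which happens to be multiplicatively homomorphic. The toy below uses tiny, insecure parameters purely for illustration; the schemes under research (fully or partially homomorphic encryption) are far more elaborate:

```python
# Toy demo of a homomorphic property: with textbook RSA, multiplying
# two ciphertexts yields the ciphertext of the product of the
# plaintexts. Parameters are tiny and insecure, for illustration only.
p, q, e = 61, 53, 17
n = p * q                          # modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent via modular inverse

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n  # computed without decrypting
assert dec(product_cipher) == a * b     # the server never saw 7 or 6
```

The research Facebook is funding aims for much richer operations than a single multiplication, but the privacy pitch is the same: the party doing the computation never sees the plaintext.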
The Internet

Russia's 'Nonsensical, Impossible Quest' to Create Its Own Domestic Internet (slate.com) 61

"It was pretty strange when Russia decided to announce last week that it had successfully run tests between June 15 and July 15 to show it could disconnect itself from the internet," writes an associate professor of cybersecurity policy at Tufts Fletcher School of Law and Diplomacy. The tests seem to have gone largely unnoticed both in and outside of Russia, indicating that whatever the tests entailed did not involve Russia actually disconnecting from the global internet... since that would be impossible to hide. Instead, the tests — and, most of all, the announcement about their success — seem to be intended as some kind of signal that Russia is no longer dependent on the rest of the world for its internet access. But it's not at all clear what that would even mean since Russia is clearly still dependent on people and companies in other countries for access to the online content and services they create and host — just as we all are...

For the past two years, ever since implementing its "sovereign internet law" in 2019, Russia has been talking about establishing its own domestic internet that does not rely on any infrastructure or resources located outside the country. Presumably, the tests completed this summer are related to that goal of being able to operate a local internet within Russia that does not rely on the global Domain Name System to map websites to specific IP addresses. This is not actually a particularly ambitious goal — any country could operate its own domestic internet with its own local addressing system if it wanted to do so instead of connecting to the larger global internet... The Center for Applied Internet Data Analysis at the University of California San Diego maintains an Internet Outage Detection and Analysis tool that combines three data sets to identify internet outages around the world... The data sets for Russia from June 15 through July 15, the period of the supposed disconnection tests, show few indications of any actual disconnection other than a period around July 5 when unsolicited traffic from Russia appears to have dropped off.
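A "local addressing system" of the kind described is, conceptually, just name resolution that consults a domestic table before (or instead of) the global DNS. A minimal sketch — the hostnames and addresses are invented:

```python
import socket

# Minimal sketch: resolve names from a domestic table first, falling
# back to the global DNS. Table entries are invented for illustration.
LOCAL_TABLE = {
    "portal.gov.example": "10.20.0.5",
    "mail.gov.example": "10.20.0.6",
}

def resolve(name: str) -> str:
    if name in LOCAL_TABLE:
        return LOCAL_TABLE[name]
    # anything outside the domestic table still depends on the global system
    return socket.gethostbyname(name)
```

The sketch makes the author's point concrete: the local table only covers domestic names, so reaching anything hosted abroad still requires the global infrastructure.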

Whatever Russia did this summer, it did not physically disconnect from the global internet. It doesn't even appear to have virtually disconnected from the global internet in any meaningful sense. Perhaps it shifted some of its critical infrastructure systems to rely more on domestic service providers and resources. Perhaps it created more local copies of the addressing system used to navigate the internet and tested its ability to rely on those. Perhaps it tested its ability to route online traffic within the country through certain chokepoints for purposes of better surveillance and monitoring. None of those are activities that would be immediately visible from outside the country and all of them would be in line with Russia's stated goals of relying less on internet infrastructure outside its borders and strengthening its ability to monitor online activity.

But the goal of being completely independent of the rest of the world's internet infrastructure while still being able to access the global internet is a nonsensical and impossible one. Russia cannot both disconnect from the internet and still be able to use all of the online services and access all of the websites hosted and maintained by people in other parts of the world, as appears to have been the case during the monthlong period of testing... Being able to disconnect your country from the internet is not all that difficult — and certainly nothing to brag about. But announcing that you've successfully disconnected from the internet when it's patently clear that you haven't suggests both profound technical incompetence and a deep-seated uncertainty about what a domestic Russian internet would actually mean.

Communications

Baseball's Newest Anti-Cheating Technology: Encrypted Transmitters for Catchers' Signals (theverge.com) 75

First Major League Baseball experimented with automated umpiring of balls and strikes in the minor leagues.

Now the Verge reports they're trying a time-saving tactic that might also make it harder to cheat: Baseball has a sign stealing problem — or at least, a technological one, seeing how reading another team's pitches is technically legal, but using Apple Watches or telephoto cameras and then suspiciously banging on trash cans is very much not. But soon the MLB may try fighting fire with fire: on August 3rd, it plans to begin testing an encrypted wireless communication device that replaces the traditional flash of fingers with button taps, according to ESPN.

The device, from a startup called PitchCom, will be tested in the Low-A West minor league first. As you'd expect from something that's relaying extremely basic signals, it's not a particularly complicated piece of kit: one wristband transmitter for the catcher with nine buttons to signal "desired pitch and location," which sends an encrypted audio signal to receivers that can squeeze into a pitcher's cap and a catcher's helmet.

The receivers use bone-conduction technology, so they don't necessarily need to be up against an ear, and might theoretically be harder to eavesdrop on. (Bone conduction stimulates bones in your head instead of emitting audible sound.)

"MLB hopes the devices will cut down on time spent by pitchers stepping off the rubber and changing signals," reports the Associated Press, noting another interesting new rule. "A team may continue to use the system if the opposing club's device malfunctions."

But don't worry about that, reports ESPN: Hacking the system, the company says, is virtually impossible. PitchCom uses an industrial-grade encryption algorithm and transmits minimal data digitally, making it mathematically impossible for someone to decrypt intercepted transmissions, according to the company.
United Kingdom

Hole Blasted In Guntrader: UK Firearms Sales Website's CRM Database Breached, 111K Users' Info Spilled Online (theregister.com) 63

Criminals have hacked into a Gumtree-style website used for buying and selling firearms, making off with a 111,000-entry database containing partial information from a CRM product used by gun shops across the UK. The Register reports: The Guntrader breach earlier this week saw the theft of a SQL database powering both the Guntrader.uk buy-and-sell website and its electronic gun shop register product, comprising about 111,000 users and dating between 2016 and 17 July this year. The database contains names, mobile phone numbers, email addresses, user geolocation data, and more including bcrypt-hashed passwords. It is a severe breach of privacy not only for Guntrader but for its users: members of the UK's licensed firearms community. Guntrader spokesman Simon Baseley told The Register that Guntrader.uk had emailed all the users affected by the breach on July 21 and issued a further update yesterday.

Guntrader is roughly similar to Gumtree: users post ads along with their contact details on the website so potential purchasers can get in touch. Gun shops (known in the UK as "registered firearms dealers" or RFDs) can also use Guntrader's integrated gun register product, which is advertised as offering "end-to-end encryption" and "daily backups", making it (so Guntrader claims) "the most safe and secure gun register system on today's market." [British firearms laws say every transfer of a firearm (sale, drop-off for repair, gift, loan, and so on) must be recorded, with the vast majority of these also being mandatory to report to the police when they happen...]

The categories of data in the stolen database are: latitude and longitude data; first name and last name; the police force that issued an RFD's certificate; phone numbers; fax numbers; bcrypt-hashed passwords; postcodes; postal addresses; and users' IP addresses. Logs of payments were also included, with Coalfire's Barratt explaining that while no credit card numbers were included, something that looks like a SHA-256 hashed string was included in the payment data tables. Other payment information was limited to prices for rifles and shotguns advertised through the site.
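Researchers like Barratt can often identify a hash algorithm from the shape of the string alone: bcrypt uses a distinctive modular-crypt prefix, while SHA-256 and SHA-1 produce fixed-length hex digests. A minimal illustrative sketch (the `guess_hash_type` heuristic is our own, not anything from the breach analysis):

```python
import hashlib
import re

def guess_hash_type(s: str) -> str:
    """Rough heuristic: identify common password-hash formats by shape alone."""
    if re.fullmatch(r"\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}", s):
        return "bcrypt"            # modular-crypt format: $2b$<cost>$<salt+digest>
    if re.fullmatch(r"[0-9a-fA-F]{64}", s):
        return "possibly SHA-256"  # 256 bits = 64 hex characters
    if re.fullmatch(r"[0-9a-fA-F]{40}", s):
        return "possibly SHA-1"    # 160 bits = 40 hex characters
    return "unknown"

# A SHA-256 digest is always 64 hex characters, whatever the input length:
digest = hashlib.sha256(b"example payment record").hexdigest()
print(len(digest), guess_hash_type(digest))  # 64 possibly SHA-256
```

Shape alone cannot prove which algorithm produced a 64-character hex string, which is why the report hedges with "looks like a SHA-256 hashed string."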
The Register recommends you check if your data is included in the hack by visiting Have I Been Pwned. If you are affected and you used the same password on Guntrader that you used on other websites, you should change it as soon as possible.
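Have I Been Pwned's password-checking endpoint (a different lookup from the per-breach account search the Register points to) uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash are ever sent to the service. A minimal sketch of how that query is constructed, without performing the network call:

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Build the k-anonymity pieces used by the Pwned Passwords range API.

    Only the 5-character hash prefix goes into the request URL; the full
    hash never leaves your machine.
    """
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    # Fetch `url`, then search the returned lines for `suffix` to learn
    # whether (and how many times) this password appears in known breaches.
    return url, suffix

url, suffix = hibp_range_query("password")
print(url)  # https://api.pwnedpasswords.com/range/5BAA6
```

Because the service only ever sees the prefix, it cannot tell which of the thousands of matching hashes you were actually checking.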
Security

Malware Increasingly Targets Discord for Abuse (sophos.com) 20

Threat actors who spread and manage malware have long abused legitimate online services. As we found during our investigation into the use of TLS by malware, more than half of network traffic generated by malware uses TLS encryption, and 20 percent of that involved the malware communicating with legitimate online services. During the timeframe of that research, we found that four percent of the overall TLS-protected malware downloads came from one service in particular: Discord. From a report: The growing popularity of the game-centric text and voice chat platform has not failed to draw the attention of malware operators. Discord operates its own content delivery network, or CDN, where users can upload files to share with others. The service also publishes an API, enabling developers to create new ways to interact with Discord other than through its client application. We observed significant volumes of malware hosted in Discord's own CDN, as well as malware interacting with Discord APIs to send and receive data.

Several password-hijacking malware families specifically target Discord accounts. SophosLabs also found malware that leveraged Discord chat bot APIs for command and control, or to exfiltrate stolen information into private Discord servers or channels. As the origins of the service were tied to online gaming, Discord's audience includes large numbers of gamers -- including players of youth-oriented titles such as Fortnite, Minecraft, or Roblox. Among the malicious files we discovered in Discord's network, we found game-cheating tools targeting games that integrate with Discord. The tools allegedly exploit weaknesses in Discord's protocols to let one player crash another player's game. We also found applications that serve as nothing more than harmless, though disruptive, pranks.

Cellphones

Investigation Reveals Widespread Cellphone Surveillance of the Innocent (theguardian.com) 184

Cellphones "can be transformed into surveillance devices," writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper's critique of a "privatised government surveillance industry" that's made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus: [NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of "legitimate criminal or terror group targets". Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data... The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO's technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO's spyware. But when our technical partner, Amnesty International's Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence the Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denies this to the Washington Post. But they also insist that they're simply licensing their software to clients, and their company "has no insight" into those clients' specific intelligence activities.

The Washington Post reports that Amnesty's Security Lab found evidence of Pegasus attacks on 37 of 67 smartphones from the list which they tested. But beyond that "for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty's detective work."

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.
