Encryption

WhatsApp Will Finally Let Users Encrypt Their Chat Backups in the Cloud (techcrunch.com) 12

WhatsApp said on Friday it will give its two billion users the option to encrypt their chat backups in the cloud, taking a significant step to put a lid on one of the tricky ways private communication between individuals on the app can be compromised. From a report: The Facebook-owned service has offered end-to-end encrypted chats between users for years. But users have had no option but to store their chat backups in the cloud -- iCloud on iPhones and Google Drive on Android -- in an unencrypted format. [...] Now WhatsApp says it is patching this weak link in the system.

The company said it has devised a system to enable WhatsApp users on Android and iOS to lock their chat backups with encryption keys. WhatsApp says it will offer users two ways to encrypt their cloud backups, and the feature is optional. In the "coming weeks," users on WhatsApp will see an option to generate a 64-digit encryption key to lock their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based "backup key vault" that WhatsApp has developed.
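WhatsApp has not published the protocol details, but the pattern it describes -- a random key that encrypts the backup, optionally wrapped under a key derived from a user password and escrowed in a vault -- is standard envelope encryption. Below is a minimal sketch in Python; the function names, KDF choice, and parameters are illustrative assumptions, not WhatsApp's actual design.

```python
# Hypothetical sketch of password-protected backup-key escrow.
# Not WhatsApp's actual protocol; all parameter choices are illustrative.
import os
import secrets
from hashlib import pbkdf2_hmac

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def make_backup_key() -> bytes:
    """Random 256-bit key that encrypts the chat backup itself."""
    return secrets.token_bytes(32)


def wrap_backup_key(backup_key: bytes, password: str) -> dict:
    """Encrypt (wrap) the backup key under a key derived from the password."""
    salt = os.urandom(16)
    kek = pbkdf2_hmac("sha256", password.encode(), salt, 310_000, dklen=32)
    nonce = os.urandom(12)
    wrapped = AESGCM(kek).encrypt(nonce, backup_key, None)
    # Only salt, nonce, and ciphertext would need to live in the "key vault".
    return {"salt": salt, "nonce": nonce, "wrapped_key": wrapped}


def unwrap_backup_key(entry: dict, password: str) -> bytes:
    kek = pbkdf2_hmac("sha256", password.encode(), entry["salt"], 310_000, dklen=32)
    return AESGCM(kek).decrypt(entry["nonce"], entry["wrapped_key"], None)


backup_key = make_backup_key()
entry = wrap_backup_key(backup_key, "correct horse battery staple")
assert unwrap_backup_key(entry, "correct horse battery staple") == backup_key
```

The 64-digit option in WhatsApp's announcement corresponds to handing the user the backup key directly; the password option corresponds to the wrapping step above, with the wrapped key stored server-side.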

Encryption

UK.gov Is Launching An Anti-Facebook Encryption Push (theregister.com) 33

The British government is preparing to launch a full-scale policy assault against Facebook as the company gears up to introduce end-to-end encryption across all of its services. The Register reports: Prominent in details briefed to the news media this week (including The Register) were accusations that Facebook harbours paedophiles, terrorists, and mobsters and that British police forces would effectively be blinded to the scale of criminality on the social networking platform, save for cases where crimes are reported. It's a difficult and nuanced topic made no simpler or easier by the fact that government officials seem hellbent on painting it in black and white.

Government and law enforcement officials who briefed the press on condition of anonymity earlier this week* sought to paint a picture of the internet going dark if Facebook's plans for end-to-end encryption (E2EE) went forward, in terms familiar to anyone who remembers how Western nation states defended themselves from public upset after former NSA sysadmin Edward Snowden's 2013 revelations of illegal mass surveillance. The US National Center for Missing & Exploited Children (NCMEC) generates around 20 million reports of child sexual abuse material (CSAM) every year, of which 70 per cent would be "lost" if E2E encryption were put in place, claimed British officials.

The government's long-signaled push to deter Facebook from implementing E2EE comes, inevitably, at a significant cost to taxpayers: London ad agency M&C Saatchi has been hired at an undisclosed cost by the Home Office to tell the public that Facebook (and WhatsApp) harbours criminals. The ad campaign will run online, in newspapers and on radio stations with the aim of turning public opinion against E2EE -- and, presumably, driving home the message that encryption itself is something inherently bad. Other announcements due this week, from notoriously anti-encryption Home Secretary Priti Patel and intergovernmental meetings, will explicitly condemn Facebook's contemplated rollout of E2EE.

Facebook

WhatsApp Moderators Can Read Your Messages (gizmodo.com) 87

Gizmodo highlights the findings of a new ProPublica report on WhatsApp's content moderation system: at least 1,000 WhatsApp content moderators, employed by Facebook's moderation contractor Accenture, review user-reported content that has been flagged by its machine learning system. "They monitor for, among other things, spam, disinformation, hate speech, potential terrorist threats, child sexual abuse material (CSAM), blackmail, and 'sexually oriented businesses,'" reports Gizmodo. "Based on the content, moderators can ban the account, put the user 'on watch,' or leave it alone." From the report: Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app's artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.

WhatsApp discloses, in its terms of service, that when an account is reported, it "receives the most recent messages" from the reported group or user as well as "information on your recent interactions with the reported user." This does not specify that such information, viewable by moderators, could include phone numbers, profile photos, linked Facebook and Instagram accounts, their IP address, and mobile phone ID. And, the report notes, WhatsApp does not disclose the fact that it amasses all users' metadata no matter their privacy settings.

WhatsApp didn't offer much clarity on what mechanism it uses to receive decrypted messages, only that the person tapping the "report" button is automatically generating a new message between themselves and WhatsApp. That seems to indicate that WhatsApp is deploying a sort of copy-paste function, but the details are still unclear. Facebook told Gizmodo that WhatsApp can read messages because they're considered a version of direct messaging between the company and the reporter. They added that users who report content make the conscious choice to share information with Facebook; by their logic, Facebook's collection of that material doesn't conflict with end-to-end encryption. So, yes, WhatsApp can see your messages without your consent.
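Gizmodo's description -- tapping "report" generates a new message from the reporter to WhatsApp -- suggests the forwarding happens entirely on the reporter's device, which already holds the plaintext. Here is a hypothetical sketch of that reading; none of these names or structures are WhatsApp's actual code.

```python
# Hypothetical illustration of client-side reporting under E2EE (not
# WhatsApp's real implementation). The reporter's own device already holds
# the plaintext, so forwarding it never requires breaking anyone else's
# encryption -- which is the company's argument.
from dataclasses import asdict, dataclass
from typing import List


@dataclass
class Message:
    sender: str
    text: str


def build_report(thread: List[Message], reported_user: str) -> dict:
    """Bundle the last five messages, as the ProPublica report describes."""
    return {
        "reported_user": reported_user,
        "recent_messages": [asdict(m) for m in thread[-5:]],
    }


thread = [Message("alice", f"message {i}") for i in range(8)]
report = build_report(thread, reported_user="alice")
# The bundle would then be sent as a new outgoing message addressed to the
# moderation service, exactly like any other chat message the user composes.
print(len(report["recent_messages"]))  # 5
```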

Government

Will Gaming Change Humanity As We Know It? (bloombergquint.com) 77

"The advent of gaming, especially computer gaming, marks a fundamental break in human affairs," argues American economist Tyler Cowen (in a Bloomberg opinion column).

"Gaming is profoundly transforming two central aspects of the modern world: culture and regulation. There will be no turning back... Plenty of trading already takes place in games — involving currencies, markets, prices and contracts. Game creators and players set and enforce the rules, and it is harder for government regulators to play a central role. The lesson is clear: If you wish to create a new economic institution, put it inside a game. Or how about an app that gamifies share trading? Do you wish to experiment with a new kind of stock exchange or security outside the purview of traditional government regulation? Try the world of gaming, perhaps combined with crypto, and eventually your "game" just might influence events in the real world...

[R]egulators are already falling behind. Just as gaming has outraced the world of culture, so will gaming outrace U.S. regulatory capabilities, for a variety of reasons: encryption, the use of cryptocurrency, the difficulties of policing virtual realities, varying rules in foreign jurisdictions and, not incidentally, a lack of expertise among U.S. regulators. (At least the Chinese government's attempt to restrict youth gaming to three hours a week, while foolhardy, reflects a perceptive cultural conservatism.)

Both the culture-weakening and the regulation-weakening features of games follow from their one basic characteristic: They are self-contained worlds. Until now, human institutions and structures have depended on relatively open and overlapping networks of ideas. Gaming is carving up and privatizing those spaces. This shift is the big trend that hardly anyone — outside of gaming and crypto — is noticing.

If the much-heralded "metaverse" ever arrives, gaming will swallow many more institutions, or create countervailing versions of them. Whether or not you belong to the world of gaming, it is coming for your worlds. I hope you are ready.

Encryption

America's NSA Isn't Sure Quantum Computers Will Ever Break Public Key Encryption (msn.com) 92

America's National Security Agency "isn't really sure when or even if quantum computers will be able to crack public key cryptography," writes TechRadar.

They report that the NSA "has expressed its reservations about the potential of quantum computing" in a new FAQ titled Quantum Computing and Post-Quantum Cryptography. "NSA does not know when or even if a quantum computer of sufficient size and power to exploit public key cryptography (a CRQC) will exist," said the security agency in response to whether it is worried about the potential of adversarial use of quantum computing. In the FAQ, the NSA describes a Cryptographically Relevant Quantum Computer (CRQC) as a quantum computer that's capable of actually attacking real world cryptographic systems, something that's currently infeasible.

While it agrees that such a computer would be "devastating" to the digital security infrastructure, it seems to suggest that it doesn't believe such a CRQC would ever materialize.

However, the growing research in quantum computing has moved the agency to also support the development of post-quantum cryptographic standards, along with plans for eventual transition to such standards.
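For context on why a CRQC would be "devastating," the gap between the best classical attack on RSA and Shor's quantum algorithm is the whole story. The asymptotics below are standard textbook figures, not numbers from the NSA FAQ:

```latex
% Rough asymptotics for factoring an n-bit RSA modulus N (n = log2 N):
% best known classical attack (general number field sieve) vs. Shor's
% quantum algorithm. Constants and low-order terms omitted.
T_{\mathrm{GNFS}}(N) \approx \exp\!\left(\left(\tfrac{64}{9}\right)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\right)
\qquad\text{vs.}\qquad
T_{\mathrm{Shor}}(n) = O\!\left(n^{3}\right)\ \text{quantum gate operations}
```

The catch, and the source of the NSA's caution, is the physical cost: commonly cited engineering estimates for factoring a 2048-bit modulus call for thousands of error-corrected logical qubits, i.e. millions of today's noisy physical qubits.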

Apple

We Built a CSAM System Like Apple's - the Tech Is Dangerous (washingtonpost.com) 186

An anonymous reader writes: Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple's own employees have been expressing alarm. The company insists reservations about the system are rooted in "misunderstandings." We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple's -- and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we're also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.

But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
About the authors of this report: Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton University. He previously served as technology counsel to then-Sen. Kamala D. Harris and as chief technologist of the Federal Communications Commission Enforcement Bureau. Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.
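The matching approach the researchers describe -- alert when shared content matches a known-bad database, learn nothing otherwise -- is easiest to reason about in code. A deliberately simplified sketch follows: exact hashing stands in for the perceptual hashing and private set intersection a real system would use, and nothing here is the authors' or Apple's actual implementation.

```python
# Toy content-matching service. Deliberately simplified: a real deployment
# would use perceptual hashes and cryptographic private set intersection,
# but the structural point is the same -- the matcher is agnostic about
# what the database contains.
import hashlib
from typing import Iterable, Set


def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def build_database(known_items: Iterable[bytes]) -> Set[str]:
    return {digest(item) for item in known_items}


def should_alert(shared_content: bytes, database: Set[str]) -> bool:
    return digest(shared_content) in database


csam_db = build_database([b"known-harmful-sample"])        # intended use
dissident_db = build_database([b"banned-protest-poster"])  # same code, new purpose

print(should_alert(b"banned-protest-poster", dissident_db))  # True
```

The repurposing risk the authors describe is visible in the last two lines: swapping the database changes what the system hunts for, and the code -- and the user -- is none the wiser.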
Privacy

Policy Groups Ask Apple To Drop Plans To Inspect iMessages, Scan for Abuse Images (reuters.com) 89

More than 90 policy and rights groups around the world published an open letter on Thursday urging Apple to abandon plans for scanning children's messages for nudity and the phones of adults for images of child sex abuse. From a report: "Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material, we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children," the groups wrote in the letter, which was first reported by Reuters. The largest campaign to date over an encryption issue at a single company was organized by the U.S.-based nonprofit Center for Democracy & Technology (CDT). Some overseas signatories in particular are worried about the impact of the changes in nations with different legal systems, including some already hosting heated fights over encryption and privacy.
Encryption

Apple's Child Protection Features Spark Concern Within Its Own Ranks (reuters.com) 99

According to an exclusive report from Reuters, Apple's move to scan U.S. customer phones and computers for child sex abuse images has resulted in employees speaking out internally, "a notable turn in a company famed for its secretive culture." From the report: Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters. Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread. Past security changes at Apple have also prompted concern among employees, but the volume and duration of the new debate is surprising, the workers said. Some posters worried that Apple is damaging its leading reputation for protecting privacy.

In the Slack thread devoted to the photo-scanning feature, some employees have pushed back against criticism, while others said Slack wasn't the proper forum for such discussions. Core security employees did not appear to be major complainants in the posts, and some of them said that they thought Apple's solution was a reasonable response to pressure to crack down on illegal material. Other employees said they hoped that the scanning is a step toward fully encrypting iCloud for customers who want it, which would reverse Apple's direction on the issue a second time.
Apple has said it will refuse requests from governments to use the system to check phones for anything other than illegal child sexual abuse material.
Security

How a Security Researcher Took Over a Hotel's IoT Devices (zdnet.com) 36

"The moment you network IoT and hand over control to third parties, you may also give individuals the keys to a digital kingdom — and the ability to cause mischief, or worse," writes ZDNet.

For example, at a hotel where guests control the devices in their room with an iPod Touch... Speaking at Black Hat USA, Las Vegas, security consultant Kya Supa from LEXFO explained how a chain of security weaknesses was combined and exploited to gain control of rooms at a capsule hotel, a budget-friendly type of hotel offering extremely small — and, therefore, cozy — spaces to guests, who are stacked side-by-side... A neighbor, "Bob," kept waking Supa up by making loud phone calls in the early hours of the morning. While Bob had agreed to keep it down, he did not keep his promise — and the researcher set to work since he needed his sleep, especially during his vacation. The first thing Supa did was to explore his room, finding an emergency light installed for safety reasons; a Nasnos automation center for controlling in-room devices in case the iPod Touch was lost; an electric motor used to manage the incline of the capsule's bed; and a Nasnos router, hidden in the wall.

If you connected to the router via a smartphone, it was then possible to control other devices on the network, and this was the setup the hotel chose to use... Supa found that two networks were connected — the hotel Wi-Fi and the router. To retrieve the router key, Supa targeted WEP, a protocol that has been known to be weak for years. He found access points, one for each bedroom, inspected the traffic, and discovered weak credentials in place — "123" — and you can guess the rest...

By using an Android smartphone, the iPod Touch, and a laptop, the researcher created a man-in-the-middle (MITM) architecture and inspected the network traffic. The traffic was unencrypted, so he created a simple program to tamper with these connections, allowing him to seize control of his bedroom through his laptop... Now that he could "control every bedroom," and Bob was still there, Supa tampered with the lights of different bedrooms until he found the right one. He created a script that, every two hours, would change the bed into a sofa and turn the lights on and off. The script was launched at midnight. We can probably assume Bob did not enjoy his stay.
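The talk did not publish the Nasnos protocol, but the prank script amounts to a scheduler replaying the unencrypted control packets Supa had already captured. A hypothetical reconstruction follows; the router address and payload bytes are placeholders, not real Nasnos commands.

```python
# Hypothetical reconstruction of the prank script described in the talk.
# ROUTER_ADDR and the command payloads are made-up placeholders; the real
# values came from Supa's packet captures of the unencrypted traffic.
import socket
import time

ROUTER_ADDR = ("192.168.0.1", 8000)           # placeholder address
CMD_BED_TO_SOFA = bytes.fromhex("00ff01")     # placeholder payload
CMD_LIGHTS_TOGGLE = bytes.fromhex("00ff02")   # placeholder payload


def send_command(payload: bytes) -> None:
    """Replay a previously captured control packet to the in-room router."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, ROUTER_ADDR)


while True:
    send_command(CMD_BED_TO_SOFA)
    send_command(CMD_LIGHTS_TOGGLE)
    time.sleep(2 * 60 * 60)  # every two hours, as described in the talk
```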

"I hope he will be more respectful in the future," Supa commented.

Electronic Frontier Foundation

Edward Snowden and EFF Slam Apple's Plans To Scan Messages and iCloud Images (macrumors.com) 55

Apple's plans to scan users' iCloud Photos library against a database of child sexual abuse material (CSAM) to look for matches and childrens' messages for explicit content has come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF). MacRumors reports: In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future. Snowden also noted that Apple has historically been an industry-leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."

The EFF, an eminent international non-profit digital rights group, has issued an extensive condemnation of Apple's move to scan users' iCloud libraries and messages, saying that it is extremely "disappointed" that a "champion of end-to-end encryption" is undertaking a "shocking about-face for users who have relied on the company's leadership in privacy and security." The EFF highlighted how various governments around the world have passed laws that demand surveillance and censorship of content on various platforms, including messaging apps, and that Apple's move to scan messages and iCloud Photos could be legally required to encompass additional materials or easily be widened. "Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement," the EFF cautioned.

Chrome

Google Chrome To No Longer Show Secure Website Indicators (bleepingcomputer.com) 68

Google Chrome will no longer indicate whether a site you are visiting is secure; it will only warn you when you visit an insecure website. Bleeping Computer reports: To further push web developers into only using HTTPS on their sites, Google made the protocol a search ranking factor: sites not served securely took a potentially minor hit in their Google search results rankings. It appears to have worked: according to the 'HTTPS encryption on the web' section of Google's Transparency Report, over 90% of all browser connections in Google Chrome currently use HTTPS.

Currently, when you visit a secure site, Google Chrome displays a little lock icon indicating that your communication with the site is encrypted. As most website communication is now secure, Google is testing a new feature that removes the lock icon for secure sites. This feature is available to test in Chrome 93 Beta and Chrome 94 Canary builds by enabling the 'Omnibox Updated connection security indicators' flag. With this feature enabled, Google Chrome will only display security indicators when the site is not secure. For businesses that wish to keep the HTTPS security indicator, Google has added an enterprise policy for Chrome 93 named 'LockIconInAddressBarEnabled' that can be used to re-enable the lock icon in the address bar.
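For administrators who want the lock icon back once the change ships, Chrome reads managed policies from platform-specific locations; on Linux that is a JSON file under /etc/opt/chrome/policies/managed/ (Windows and macOS use Group Policy or configuration profiles instead). A minimal sketch of writing that policy file, assuming the Linux layout:

```python
# Write a managed Chrome policy on Linux that restores the lock icon.
# The policy name comes from the article; the file path is Chrome's
# documented managed-policy location on Linux (other platforms differ).
import json
import pathlib

policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)  # requires root
(policy_dir / "lock_icon.json").write_text(
    json.dumps({"LockIconInAddressBarEnabled": True}, indent=2)
)
```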

Facebook

Facebook Researchers Hope To Bring Together Two Foes: Encryption and Ads 53

Facebook is bulking up a team of artificial intelligence researchers, including a key hire from Microsoft, to study ways of analyzing encrypted data without decrypting it, the company confirmed to the Information. From the report: The research could allow Facebook to target ads based on encrypted messages on its WhatsApp messenger, or to encrypt the data it collects on billions of users without hurting its ad-targeting capabilities, outside experts say. Facebook is one of several technology giants, including cloud computing providers Microsoft, Amazon and Google, now researching an emerging field known as homomorphic encryption. Researchers hope the technology will allow companies to analyze personal information, including medical records and financial data, while keeping the information encrypted and protected from cybersecurity threats or, in Facebook's case, leaks to advertisers or other parties.
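The core idea of homomorphic encryption is computing on data while it stays encrypted. Textbook RSA (without padding) already has a limited, multiplicative version of this property, which makes for a compact illustration -- though it is insecure and far simpler than the fully homomorphic schemes the companies are actually researching:

```python
# Textbook-RSA demo of the homomorphic property: multiplying ciphertexts
# multiplies the underlying plaintexts, without ever decrypting the inputs.
# Tiny primes and no padding -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent


def encrypt(m: int) -> int:
    return pow(m, e, n)


def decrypt(c: int) -> int:
    return pow(c, d, n)


c1, c2 = encrypt(6), encrypt(7)
product_ciphertext = (c1 * c2) % n        # computed entirely on encrypted data
assert decrypt(product_ciphertext) == 42  # equals 6 * 7
```

Fully homomorphic schemes generalize this to arbitrary additions and multiplications, which is what analyzing messages, medical records, or ad-targeting features under encryption would require -- at a steep performance cost that remains the field's main obstacle.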
The Internet

Russia's 'Nonsensical, Impossible Quest' to Create Its Own Domestic Internet (slate.com) 61

"It was pretty strange when Russia decided to announce last week that it had successfully run tests between June 15 and July 15 to show it could disconnect itself from the internet," writes an associate professor of cybersecurity policy at Tufts Fletcher School of Law and Diplomacy. The tests seem to have gone largely unnoticed both in and outside of Russia, indicating that whatever entailed did not involve Russia actually disconnecting from the global internet... since that would be impossible to hide. Instead, the tests — and, most of all, the announcement about their success — seem to be intended as some kind of signal that Russia is no longer dependent on the rest of the world for its internet access. But it's not at all clear what that would even mean since Russia is clearly still dependent on people and companies in other countries for access to the online content and services they create and host — just as we all are...

For the past two years, ever since implementing its "sovereign internet law" in 2019, Russia has been talking about establishing its own domestic internet that does not rely on any infrastructure or resources located outside the country. Presumably, the tests completed this summer are related to that goal of being able to operate a local internet within Russia that does not rely on the global Domain Name System to map websites to specific IP addresses. This is not actually a particularly ambitious goal — any country could operate its own domestic internet with its own local addressing system if it wanted to do so instead of connecting to the larger global internet... The Center for Applied Internet Data Analysis at the University of California San Diego maintains an Internet Outage Detection and Analysis tool that combines three data sets to identify internet outages around the world... The data sets for Russia from June 15 through July 15, the period of the supposed disconnection tests, shows few indications of any actual disconnection other than a period around July 5 when unsolicited traffic from Russia appears to have dropped off.

Whatever Russia did this summer, it did not physically disconnect from the global internet. It doesn't even appear to have virtually disconnected from the global internet in any meaningful sense. Perhaps it shifted some of its critical infrastructure systems to rely more on domestic service providers and resources. Perhaps it created more local copies of the addressing system used to navigate the internet and tested its ability to rely on those. Perhaps it tested its ability to route online traffic within the country through certain chokepoints for purposes of better surveillance and monitoring. None of those are activities that would be immediately visible from outside the country and all of them would be in line with Russia's stated goals of relying less on internet infrastructure outside its borders and strengthening its ability to monitor online activity.

But the goal of being completely independent of the rest of the world's internet infrastructure while still being able to access the global internet is a nonsensical and impossible one. Russia cannot both disconnect from the internet and still be able to use all of the online services and access all of the websites hosted and maintained by people in other parts of the world, as appears to have been the case during the monthlong period of testing... Being able to disconnect your country from the internet is not all that difficult — and certainly nothing to brag about. But announcing that you've successfully disconnected from the internet when it's patently clear that you haven't suggests both profound technical incompetence and a deep-seated uncertainty about what a domestic Russian internet would actually mean.

Communications

Baseball's Newest Anti-Cheating Technology: Encrypted Transmitters for Catchers' Signals (theverge.com) 75

First, Major League Baseball experimented with automated umpiring of balls and strikes in the minor leagues.

Now the Verge reports they're trying a time-saving tactic that might also make it harder to cheat: Baseball has a sign stealing problem — or at least, a technological one, seeing how reading another team's pitches is technically legal, but using Apple Watches or telephoto cameras and then suspiciously banging on trash cans is very much not. But soon the MLB may try fighting fire with fire: on August 3rd, it plans to begin testing an encrypted wireless communication device that replaces the traditional flash of fingers with button taps, according to ESPN.

The device, from a startup called PitchCom, will be tested in the Low-A West minor league first. As you'd expect from something that's relaying extremely basic signals, it's not a particularly complicated piece of kit: one wristband transmitter for the catcher with nine buttons to signal "desired pitch and location," which sends an encrypted audio signal to receivers that can squeeze into a pitcher's cap and a catcher's helmet.

The receivers use bone-conduction technology, so they don't necessarily need to be up against an ear, and might theoretically be harder to eavesdrop on. (Bone conduction stimulates bones in your head instead of emitting audible sound.)

"MLB hopes the devices will cut down on time spent by pitchers stepping off the rubber and changing signals," reports the Associated Press, noting another interesting new rule. "A team may continue to use the system if the opposing club's device malfunctions."

But don't worry about that, reports ESPN: Hacking the system, the company says, is virtually impossible. PitchCom uses an industrial grade encryption algorithm and transmits minimal data digitally, making it mathematically impossible for someone to decrypt intercepted transmissions, according to the company.
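PitchCom has not published its design, so as a purely illustrative stand-in, here is how little data such a signal carries and how a standard authenticated cipher would protect it (AES-GCM via Python's cryptography package; the pitch codes and framing are invented for the example and are not PitchCom's actual scheme):

```python
# Illustrative only: encrypt a one-byte pitch call with AES-GCM.
# PitchCom's actual algorithm, key handling, and framing are not public.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared by catcher and pitcher
PITCH_CODES = {1: "fastball, low and away", 5: "curveball, inside"}


def send_signal(code: int):
    nonce = os.urandom(12)                  # fresh nonce per transmission
    ciphertext = AESGCM(key).encrypt(nonce, bytes([code]), None)
    return nonce, ciphertext


def receive_signal(nonce: bytes, ciphertext: bytes) -> str:
    code = AESGCM(key).decrypt(nonce, ciphertext, None)[0]
    return PITCH_CODES[code]


print(receive_signal(*send_signal(5)))      # curveball, inside
```

With so little plaintext per message and authenticated encryption, an eavesdropper in the stands gets nothing useful from intercepted radio traffic, which is the substance of the company's claim.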
United Kingdom

Hole Blasted In Guntrader: UK Firearms Sales Website's CRM Database Breached, 111K Users' Info Spilled Online (theregister.com) 63

Criminals have hacked into a Gumtree-style website used for buying and selling firearms, making off with a 111,000-entry database containing partial information from a CRM product used by gun shops across the UK. The Register reports: The Guntrader breach earlier this week saw the theft of a SQL database powering both the Guntrader.uk buy-and-sell website and its electronic gun shop register product, comprising about 111,000 users, with records dating from 2016 to 17 July this year. The database contains names, mobile phone numbers, email addresses, user geolocation data, and more, including bcrypt-hashed passwords. It is a severe breach of privacy not only for Guntrader but for its users: members of the UK's licensed firearms community. Guntrader spokesman Simon Baseley told The Register that Guntrader.uk had emailed all the users affected by the breach on July 21 and issued a further update yesterday.

Guntrader is roughly similar to Gumtree: users post ads along with their contact details on the website so potential purchasers can get in touch. Gun shops (known in the UK as "registered firearms dealers" or RFDs) can also use Guntrader's integrated gun register product, which is advertised as offering "end-to-end encryption" and "daily backups", making it (so Guntrader claims) "the most safe and secure gun register system on today's market." [British firearms laws say every transfer of a firearm (sale, drop-off for repair, gift, loan, and so on) must be recorded, with the vast majority of these also being mandatory to report to the police when they happen...]

The categories of data in the stolen database are: Latitude and longitude data; First name and last name; Police force that issued an RFD's certificate; Phone numbers; Fax numbers; bcrypt-hashed passwords; Postcode; Postal addresses; and User's IP addresses. Logs of payments were also included, with Coalfire's Barratt explaining that while no credit card numbers were included, something that looks like a SHA-256 hashed string was included in the payment data tables. Other payment information was limited to prices for rifles and shotguns advertised through the site.
The Register recommends you check if your data is included in the hack by visiting Have I Been Pwned. If you are affected and you used the same password on Guntrader that you used on other websites, you should change it as soon as possible.
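The bcrypt hashing is the one mitigating detail in the stolen password column: each hash carries its own random salt and a tunable work factor, so offline cracking has to grind through every guess separately, per user. A quick illustration with the Python bcrypt package (the cost factor here is illustrative; Guntrader's actual setting isn't known):

```python
# Why bcrypt-hashed passwords are the least-bad part of this breach:
# each hash embeds a random salt and a tunable work factor.
import bcrypt

password = b"correct horse battery staple"
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))  # ~hundreds of ms per hash

print(hashed)                              # e.g. b"$2b$12$..."
print(bcrypt.checkpw(password, hashed))    # True
print(bcrypt.checkpw(b"guess123", hashed)) # False
# An attacker must repeat this deliberately slow computation for every guess
# against every user, and precomputed tables are useless because of the salts.
```

None of that helps users who reused a weak password elsewhere, which is why the advice above about changing reused passwords still stands.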
Security

Malware Increasingly Targets Discord for Abuse (sophos.com) 20

Threat actors who spread and manage malware have long abused legitimate online services. As we found during our investigation into the use of TLS by malware, more than half of network traffic generated by malware uses TLS encryption, and 20 percent of that involved the malware communicating with legitimate online services. During the timeframe of that research, we found that four percent of the overall TLS-protected malware downloads came from one service in particular: Discord. From a report: The growing popularity of the game-centric text and voice chat platform has not failed to draw the attention of malware operators. Discord operates its own content delivery network, or CDN, where users can upload files to share with others. The service also publishes an API, enabling developers to create new ways to interact with Discord other than through its client application. We observed significant volumes of malware hosted in Discord's own CDN, as well as malware interacting with Discord APIs to send and receive data.

Several password-hijacking malware families specifically target Discord accounts. SophosLabs also found malware that leveraged Discord chat bot APIs for command and control, or to exfiltrate stolen information into private Discord servers or channels. As the origins of the service were tied to online gaming, Discord's audience includes large numbers of gamers -- including players of youth-oriented titles such as Fortnite, Minecraft, or Roblox. Among the malicious files we discovered in Discord's network, we found game cheating tools that target games that integrate with Discord, in-game. The tools allegedly make it possible, exploiting weaknesses in Discord's protocols, for one player to crash the game of another player. We also found applications that serve as nothing more than harmless, though disruptive, pranks.
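For defenders, the practical takeaway is that Discord's CDN and API hosts are now worth flagging in proxy or DNS logs rather than implicitly trusting. A small sketch of such a check follows; the host patterns are drawn from the report's description and are not a complete indicator list, and plenty of matching traffic will be legitimate chat usage.

```python
# Minimal log-scanning sketch: flag requests to Discord file-hosting and
# API endpoints. Not a detection rule on its own -- a starting point for
# triage of hosts that malware is known to abuse.
import re
import sys

SUSPECT_HOSTS = re.compile(
    r"https?://(cdn\.discordapp\.com/attachments/|discord(app)?\.com/api/)",
    re.IGNORECASE,
)


def scan(lines):
    for lineno, line in enumerate(lines, 1):
        if SUSPECT_HOSTS.search(line):
            yield lineno, line.strip()


if __name__ == "__main__":
    with open(sys.argv[1]) as log:
        for lineno, hit in scan(log):
            print(f"line {lineno}: {hit}")
```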

Cellphones

Investigation Reveals Widespread Cellphone Surveillance of the Innocent (theguardian.com) 184

Cellphones "can be transformed into surveillance devices," writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper's critique of a "privatised government surveillance industry" that's made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus: [NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of "legitimate criminal or terror group targets". Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data... The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO's technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO's spyware. But when our technical partner, Amnesty International's Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence the Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denies this to the Washington Post. But they also insist that they're simply licensing their software to clients, and their company "has no insight" into those clients' specific intelligence activities.

The Washington Post reports that Amnesty's Security Lab found evidence of Pegasus attacks on 37 of 67 smartphones from the list which they tested. But beyond that "for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty's detective work."

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.
China

Will a New Law Help the Chinese Government Stockpile Zero-Days? (securityweek.com) 27

"Starting September 1, 2021, the Chinese government will require that any Chinese citizen who finds a zero-day vulnerability must pass the details to the Chinese government," reports SecurityWeek, "and must not sell or give the knowledge to any third-party outside of China (apart from the vulnerable product's manufacturer)." Brief details are provided in a report by the Associated Press (AP) published Tuesday, July 13, 2021. No source is provided beyond the statement, "No one may 'collect, sell or publish information on network product security vulnerabilities,' say the rules issued by the Cyberspace Administration of China and the police and industry ministries...."

AP describes this action as "further tightening the Communist Party's control over information". This is unlikely to be the primary motivation for the new rule since the government already has a vice-like grip on data. Companies may not store data on Chinese customers outside of China. Foreign companies selling routers and some other network devices in China must disclose to regulators how any encryption features work.

"I would expect the Chinese Government to weaponize any discovered security vulnerabilities to enhance China's cybersecurity capabilities," Joseph Carson, chief security scientist and advisory CISO at ThycoticCentrify, tells SecurityWeek. And Jake Williams, co-founder and CTO at BreachQuest adds that "the defensive advantages of Chinese government organizations being able to mitigate vulnerabilities discovered may well outweigh any offensive gains...."

But he also believes this could rebound against China. "One of the biggest likely issues is brain drain. If Chinese researchers can profit handsomely from their work anywhere else, but can't do so in China, why would they stay? This probably helps China in the short term but harms them in the long term."

The new law does encourage network operators and product vendors to set up a reward mechanism for reported vulnerabilities, according to the Record. But Katie Moussouris, founder and CEO of Luta Security, also raises the issue of western-based bug bounty platforms that have been working with Chinese security researchers for the past years. "If Western-based bug bounty platforms comply with this requirement in order to continue to legally receive bug reports from Chinese researchers, we must assume they will be required to hand over vulnerability data to the Ministry within two days of receiving the reports," Moussouris said. "That requirement will effectively introduce a backdoor straight to the Chinese government in any VDP [vulnerability disclosure program] or bug bounty program where Chinese researchers submit bugs via platforms, even to non-Chinese companies."
China

China Is Pulling Ahead In Global Quantum Race, New Studies Suggest (scientificamerican.com) 49

An anonymous reader writes: When a team of Chinese scientists beamed entangled photons from the nation's Micius satellite to conduct the world's first quantum-secured video call in 2017, experts declared that China had taken the lead in quantum communications. New research suggests that lead has extended to quantum computing as well. In three preprint papers posted on arXiv.org last month, physicists at the University of Science and Technology of China (USTC) reported critical advances in both quantum communication and quantum computing. In one of the studies, researchers used nanometer-scale semiconductors called quantum dots to reliably transmit single photons -- an essential resource for any quantum network -- over 300 kilometers of fiber, well over 100 times farther than previous attempts. In another, scientists improved their photonic quantum computer from 76 detected photons to 113, a dramatic upgrade to its "quantum advantage," or how much faster it is than classical computers at one specific task. The third paper introduced Zuchongzhi, a quantum computer made of 66 superconducting qubits, which ran a problem using 56 of them -- a figure similar to the 53 qubits used in Google's quantum computer Sycamore, which set a performance record in 2019.

All three achievements are world-leading, but Zuchongzhi in particular has scientists talking because it is the first corroboration of Google's landmark 2019 result. "I'm very pleased that someone has reproduced the experiment and shown that it works properly," says John Martinis, a former Google researcher who led the effort to build Sycamore. "That's really good for the field, that superconducting qubits are a stable platform where you can really build these machines." Quantum computers and quantum communication are nascent technologies. None of this research is likely to be of practical use for many years to come. But the geopolitical stakes of quantum technology are high: full-fledged quantum networks could provide unhackable channels of communication, and a powerful quantum computer could theoretically break much of the encryption currently used to secure e-mails and Internet transactions.

Encryption

Amazon Rolls Out Encryption For Ring Doorbells (zdnet.com) 53

Starting today in the U.S. (and other countries in the not too distant future), you'll be able to encrypt the video footage captured via your Ring devices. ZDNet reports: This is done with Amazon's Video End-to-End Encryption (E2EE). If you decide to install this optional privacy feature, you'll need to install a new version of the Ring application on your smartphone. Once installed, it uses a Public Key Infrastructure (PKI) security system based on an RSA 2048-bit asymmetric account signing key pair. In English, the foundation is pretty darn secure.

Ring already encrypts videos while they are uploaded to the cloud (in transit) and stored on Ring's servers (at rest). Law enforcement doesn't have automatic access to customer devices or videos. You choose whether or not to share footage with law enforcement. With E2EE, customer videos are further secured with an additional lock, which can only be unlocked by a key that is stored on the customer's enrolled mobile device, designed so that only the customer can decrypt and view recordings on their enrolled device. In addition, you'll need to opt into using E2EE. It doesn't turn on automatically with the software update. You'll also need to set a passphrase, which you must remember. AWS doesn't keep a copy. If you lose it, you're out of luck. [Just know that if you use E2EE, various features will be missing, such as sharing your videos, being able to view encrypted videos on Ring.com, the Windows desktop app, the Mac desktop app, or the Rapid Ring app, and the Event Timeline. E2EE also won't work with many Ring devices.]
ZDNet notes that while police can still ask for or demand your video and audio content, they won't be able to decrypt your E2EE end-to-end encrypted video "because the private keys required to decrypt the videos are only stored on customer's enrolled mobile devices."
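Ring describes the scheme only at a high level -- an RSA 2048-bit account key pair plus keys held on the enrolled phone -- so the following is a generic envelope-encryption sketch of how designs like this typically work, not Ring's actual implementation.

```python
# Generic envelope-encryption sketch (not Ring's actual protocol):
# each clip gets a fresh AES key, which is wrapped with a public key whose
# private half never leaves the customer's enrolled phone.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generated during enrollment; the private key stays on the phone.
device_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public = device_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)


def encrypt_clip(clip: bytes):
    clip_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(clip_key).encrypt(nonce, clip, None)
    wrapped_key = device_public.encrypt(clip_key, oaep)  # only the phone can unwrap
    return nonce, ciphertext, wrapped_key


def decrypt_clip(nonce, ciphertext, wrapped_key):
    clip_key = device_private.decrypt(wrapped_key, oaep)
    return AESGCM(clip_key).decrypt(nonce, ciphertext, None)


assert decrypt_clip(*encrypt_clip(b"doorbell footage")) == b"doorbell footage"
```

The design choice this illustrates is why handing the ciphertext to police (or to Amazon) is useless on its own: without the unwrapping key on the enrolled device, the per-clip keys, and therefore the footage, stay locked.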
