Encryption

Hyundai Uses Example Keys For Encryption System (schneier.com) 107

"Hyundai predictably fails in attempting to secure their car infotainment system with a default key lifted from programming examples," writes Slashdot reader sinij. "This level of security is unfortunately expected from auto manufacturers, who also would like to sell you always-connected Car2Car self-driving automobiles." Cryptographer and security experience Bruce Schneier writes: "Turns out the [AES] encryption key in that script is the first AES 128-bit CBC example key listed in the NIST document SP800-38A [PDF]," writes an unidentified developer under the name "greenluigi1." Luck held out, in a way. "Greenluigi1" found within the firmware image the RSA public key used by the updater, and searched online for a portion of that key. The search results pointed to a common public key that shows up in online tutorials like "RSA Encryption & Decryption Example with OpenSSL in C." Two questions remain:
1) How did the test key get left behind?
2) Was it by accident or design?
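For the curious, here is a minimal sketch of the kind of check "greenluigi1" describes, assuming a Python 3 environment and a hypothetical local firmware dump named firmware.bin; the key below is the widely cited AES-128 example key from SP800-38A.

```python
# Minimal sketch: scan a firmware dump for the NIST SP800-38A AES-128 example
# key. The filename is a placeholder; the key is the well-known example value.
from pathlib import Path

NIST_EXAMPLE_KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")

image = Path("firmware.bin").read_bytes()  # hypothetical firmware dump
offset = image.find(NIST_EXAMPLE_KEY)

if offset != -1:
    print(f"SP800-38A example key found at offset {offset:#x} -- not a real secret.")
else:
    print("Example key not found in the clear.")
```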
Facebook

Encrypting Facebook Messenger Could Be a 'Grotesque Betrayal', Says Top UK Politician (theverge.com) 97

Facebook's parent company Meta is heading into another political battle over the planned introduction of end-to-end encryption (E2EE) in its Messenger chat platform. From a report: The UK's home secretary, Priti Patel, makes this clear in an op-ed for Tory mouthpiece The Telegraph this week, saying it would be a "grotesque betrayal" if the company didn't consider issues of child safety while introducing E2EE. Similar arguments are likely to be raised in the US, too. Meta has been working on adding E2EE to Messenger for years, and recently confirmed that it aims to encrypt all chats and calls on the platform by default next year. (It currently only offers default E2EE on its other big chat platform, WhatsApp, though users can opt in to E2EE on Messenger on a chat-by-chat basis.)

The move is reigniting decades-old debates in politics and tech about the right way to balance user privacy and safety. In the US, these arguments have been heightened by the potential for police to issue search warrants for user chats in order to enforce new abortion laws after the overturn of Roe v. Wade. In the UK, arguments over encryption tend to focus on child safety and the dissemination of child sexual abuse material, or CSAM. "A great many child predators use social media platforms such as Facebook to discover, target and sexually abuse children," writes Patel in her op-ed. "It is vital that law enforcement have access to the information they need to identify the children in these images and safeguard them from vile predators."

Google

How Google Cloud Blocked the Largest Web DDoS Attack Ever (google.com) 11

Distributed denial-of-service (or DDoS) attacks "are increasing in frequency and growing in size exponentially," reports Google Cloud's blog.

Recently an attacker tried to hit one of their customers with 46 million requests per second. The blog post describes it as the largest attack of its kind reported to date, "at least 76% larger than the previously reported record. To give a sense of the scale of the attack, that is like receiving all the daily requests to Wikipedia (one of the top 10 trafficked websites in the world) in just 10 seconds." Starting around 9:45 a.m. PT on June 1, 2022, an attack of more than 10,000 requests per second (rps) began targeting our customer's HTTP/S Load Balancer. Eight minutes later, the attack grew to 100,000 requests per second. Cloud Armor Adaptive Protection detected the attack and generated an alert containing the attack signature by assessing the traffic across several dozen features and attributes. The alert included a recommended rule to block on the malicious signature....

Our customer's network security team deployed the Cloud Armor-recommended rule into their security policy, and it immediately started blocking the attack traffic. In the two minutes that followed, the attack began to ramp up, growing from 100,000 rps to a peak of 46 million rps. Since Cloud Armor was already blocking the attack traffic, the target workload continued to operate normally. Over the next few minutes, the attack started to decrease in size, ultimately ending 69 minutes later at 10:54 a.m.

Presumably the attacker determined they were not having the desired impact while incurring significant expenses to execute the attack.... The attack leveraged encrypted requests (HTTPS) which would have taken added computing resources to generate. Although terminating the encryption was necessary to inspect the traffic and effectively mitigate the attack, the use of HTTP Pipelining required Google to complete relatively few TLS handshakes.... The attack was stopped at the edge of Google's network, with the malicious requests blocked upstream from the customer's application.

While 22% of the source IPs corresponded to Tor exit nodes, the actual traffic coming from Tor nodes represented just 3% of attack traffic, the blog post points out.

And ultimately despite the attack, "the customer's service stayed online and continued serving their end-users."
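As a rough sanity check on the Wikipedia comparison above, here is a back-of-the-envelope sketch; the daily-request figure is an assumption implied by Google's own comparison, not an official statistic.

```python
# Back-of-the-envelope check of Google's scale comparison: 46 million requests
# per second sustained for 10 seconds versus roughly a day of Wikipedia traffic.
peak_rps = 46_000_000
window_seconds = 10
requests_in_window = peak_rps * window_seconds

# Assumed daily figure implied by the blog post's comparison, not an official stat.
assumed_wikipedia_daily_requests = 460_000_000

print(f"{requests_in_window:,} requests in {window_seconds} s")
print("ratio to assumed Wikipedia daily requests: "
      f"{requests_in_window / assumed_wikipedia_daily_requests:.1f}x")
```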
Security

Hackers Are Stealing Session Cookies To Bypass Multi-factor Authentication (esecurityplanet.com) 28

Slashdot reader storagedude writes: Hackers are stealing cookies from current or recent web sessions to bypass multi-factor authentication (MFA), according to an eSecurity Planet report.

The attack method, reported by Sophos researchers, is already growing in use. The "cookie-stealing cybercrime spectrum" is broad, the researchers wrote, ranging from "entry-level criminals" to advanced adversaries, using various techniques.

Cybercriminals collect cookies or buy stolen credentials "in bulk" on dark web forums. Ransomware groups also harvest cookies and "their activities may not be detected by simple anti-malware defenses because of their abuse of legitimate executables, both already present and brought along as tools," the researchers wrote.

Browsers allow users to maintain authentication, remember passwords and autofill forms. That might seem convenient, but attackers can exploit this functionality to steal credentials and skip the login challenge.

Behind the scenes, browsers use SQLite database files that contain cookies. These cookies are composed of key-value pairs, and the values often contain critical information such as tokens and expiration dates.

Adversaries know the exact name and location of these files for all major browsers such as Chrome, Firefox, and even Brave, on various operating systems. That's why the attack can be scripted. It's not uncommon to find such scripts along with other modules in info-stealing and other malware.

For example, the latest version of the Emotet botnet targets cookies and credentials stored by browsers, which include saved credit cards. According to the Sophos researchers, "Google's Chrome browser uses the same encryption method to store both multi-factor authentication cookies and credit card data."
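To make the SQLite point concrete, here is a minimal sketch that lists cookie metadata from a Chromium-style "Cookies" database copied out of a browser profile. The path and column names follow the common Chromium schema but should be treated as assumptions, and real browsers encrypt the value column at rest.

```python
# Minimal sketch: inspect a copy of a Chromium-style "Cookies" SQLite file.
# Long-lived authentication cookies are the ones attackers replay to bypass MFA.
# Path and schema are assumptions based on the common Chromium layout.
import sqlite3

DB_PATH = "Cookies"  # hypothetical copy of a browser profile's cookie store

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(
    "SELECT host_key, name, expires_utc FROM cookies "
    "ORDER BY expires_utc DESC LIMIT 10"
)
for host, name, expires in rows:
    print(f"{host:40s} {name:30s} expires_utc={expires}")
conn.close()
```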

To gain initial access, attackers can also perform phishing and spear-phishing campaigns to implant droppers that can deploy cookie-stealer malware stealthily.

The cookies are then used for post-exploitation and lateral movement. Cybercriminals can use them to change the passwords and emails associated with user accounts, trick the victims into downloading additional malware, or even deploy other exploitation tools such as Cobalt Strike and the Impacket kit.

Users should not use built-in features to save passwords unless the browser encrypts them with, at least, a master password. It's recommended that users uncheck the setting called "remember passwords," and they should probably not allow persistent sessions either.

Developers can be part of the problem if they don't secure authentication cookies properly. Such cookies must have a short expiration date; otherwise, the persistent authentication could turn into a persistent threat. You can have great security processes and still get hacked because the cookies lack the necessary flags (e.g., the HttpOnly and Secure attributes). For example, authentication cookies must be sent over SSL/TLS channels; otherwise the data could be sent in plain text and attackers would only have to sniff traffic to intercept credentials.
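As an illustration of those flags, here is a minimal sketch using Python's standard http.cookies module to emit a short-lived session cookie with the HttpOnly, Secure, and SameSite attributes set; the token value is a placeholder.

```python
# Minimal sketch: build a Set-Cookie header with the hardening flags discussed
# above, using only the Python standard library. The token value is a placeholder.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"     # placeholder session token
cookie["session"]["httponly"] = True         # not readable by page JavaScript
cookie["session"]["secure"] = True           # only sent over TLS
cookie["session"]["samesite"] = "Strict"     # not sent on cross-site requests
cookie["session"]["max-age"] = 15 * 60       # short expiration: 15 minutes

print(cookie.output())   # the header a server would send to the browser
```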

Encryption

Semiconductor Makers Scramble to Support New Post-Quantum Cryptography Standard (eetimes.com) 40

IoT Times brings an update on "the race to create a new set of encryption standards." Last month, it was announced that a specialized security algorithm co-authored by security experts from NXP, IBM, and Arm had been selected by the U.S. Government's National Institute of Standards and Technology (NIST) to become part of a global industry standard designed to counter quantum threats.
IoT Times interviews the cryptography expert who co-created the CRYSTALS-Kyber lattice-based algorithm selected by NIST — Joppe W. Bos, a senior principal cryptographer at the Competence Center for Cryptography and Security at NXP Semiconductors.

And what worries his colleagues at the semiconductor company isn't the "imminent threat of quantum computers," Bos says, but an even closer and more practical deadline: "the timeline for these post-quantum crypto standards." "Two weeks ago, NIST announced the winners of these new public standards, the post-quantum crypto standards, and their timeline is that in 2024, so in roughly two years, the winners will be converted into standards. And as soon as the standards are released, our customers will expect NXP Semiconductors, as one of the leaders in crypto and security, to already have support for these standards, because we are, of course, at the start of the chain for many end products. Our secure elements, our secure platforms, SOCs, are one of the first things that need to be integrated into larger platforms that go into end products. Think about industrial IoT. Think about automotive applications. So, our customers already expect us to support post-quantum crypto standards in 2024, and not only support but, for many companies, being able to compute the functional requirements of the standard.

"It took over ten years to settle down on the best methods for RSA and ECC, and now we have a much shorter timeline to get ready for post-quantum crypto."

"When you ask the experts, it ranges from one to five decades until we can see quantum computers big enough to break our current crypto," Bos says in the interview. So he stresses that they're not driven by a few of quantum computers. "The right question to ask, at least for us at NXP is, when is this new post-quantum crypto standard available? Because then, our customers will ask for post-quantum support, and we need to be ready.

"The standard really drives our development and defines our roadmap."

But speaking of the standard's "functional requirements," in the original story submission Slashdot reader dkatana raised an interesting point. There are already billions of low-powered IoT devices in the world.

Will they all have the memory and processing power to use this new lattice-based encryption?
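One way to get a feel for that footprint question is to look at the size of Kyber keys and ciphertexts. Below is a minimal sketch, assuming the open-source liboqs-python bindings (the oqs package) are installed; the parameter set name and API come from that project and are an assumption here, not something from NXP or NIST.

```python
# Minimal sketch of a Kyber key-encapsulation round trip using liboqs-python
# (the "oqs" package), an assumption for illustration. Printing object sizes
# gives a rough sense of the memory footprint on constrained devices.
import oqs

with oqs.KeyEncapsulation("Kyber512") as receiver, \
     oqs.KeyEncapsulation("Kyber512") as sender:
    public_key = receiver.generate_keypair()            # receiver publishes this
    ciphertext, secret_sender = sender.encap_secret(public_key)
    secret_receiver = receiver.decap_secret(ciphertext)

    assert secret_sender == secret_receiver             # both sides share a secret
    print(f"public key: {len(public_key)} bytes")
    print(f"ciphertext: {len(ciphertext)} bytes")
    print(f"shared secret: {len(secret_sender)} bytes")
```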

Cellphones

Erik Prince Wants To Sell You a 'Secure' Smartphone That's Too Good To Be True (technologyreview.com) 86

MIT Technology Review obtained Prince's investor presentation for the "RedPill Phone," which promises more than it could possibly deliver. From the report: Erik Prince's pitch to investors was simple -- but certainly ambitious: pay just 5 million euros and cure the biggest cybersecurity and privacy plagues of our day. The American billionaire -- best known for founding the notorious private military firm Blackwater, which became globally infamous for killing Iraqi civilians and threatening US government investigators -- was pushing Unplugged, a smartphone startup promising "free speech, privacy, and security" untethered from dominant tech giants like Apple and Google. In June, Prince publicly revealed the new phone, priced at $850. But before that, beginning in 2021, he was privately hawking the device to investors -- using a previously unreported pitch deck that has been obtained by MIT Technology Review. It boldly claims that the phone and its operating system are "impenetrable" to surveillance, interception, and tampering, and its messenger service is marketed as "impossible to intercept or decrypt."

Boasting falsely that Unplugged has built "the first operating system free of big tech monetization and analytics," Prince bragged that the device is protected by "government-grade encryption." Better yet, the pitch added, Unplugged is to be hosted on a global array of server farms so that it "can never be taken offline." One option is said to be a server farm "on a vessel" located in an "undisclosed location on international waters, connected via satellite to Elon Musk's StarLink." An Unplugged spokesperson explained that "they benefit in having servers not be subject to any governmental law." The Unplugged investor pitch deck is a messy mix of these impossible claims, meaningless buzzwords, and outright fiction. While none of the experts I spoke with had yet been able to test the phone or read its code, because the company hasn't provided access, the evidence available suggests Unplugged will fall wildly short of what's promised.

[...] The UP Phone's operating system, called LibertOS, is a proprietary version of Google's Android, according to an Unplugged spokesperson. It's running on an unclear mix of hardware that a company spokesperson says they've designed on their own. Even just maintaining a unique Android "fork" -- a version of the operating system that departs from the original, like a fork in the road -- is a difficult endeavor that can cost massive amounts of money and resources, experts warn. For a small startup, that can be an insurmountable challenge. [...] Another key issue is life span. Apple's iPhones are considered the most secure consumer device on the market due in part to the fact that the company offers security updates to some of its older phones for six years, longer than virtually all competitors. When support for a phone ends, security vulnerabilities go unaddressed, and the phone is no longer secure. There is no information available on how long UP Phones will receive security support.
"There are two things happening here," says Allan Liska, a cyberintelligence analyst at the cybersecurity firm Recorded Future. "There are the actual attempts to make real secure phones, and then there is the marketing BS. Distinguishing between those two can be really hard."

"When I worked in US intelligence, we [penetrated] a number of phone companies overseas," says Liska. "We were inside those phone companies. We could easily track people based on where they connected to the towers. So when you talk about being impenetrable, that's wrong. This is a phone, and the way that phones work is they triangulate to cell towers, and there is always latitude and longitude for exactly where you're sitting," he adds. "Nothing you do to the phone is going to change that."

The UP Phone is due out in November 2022.
Your Rights Online

Right To Repair Battle Heats Up With Rooting of John Deere Equipment (wired.com) 79

Long-time Slashdot reader drinkypoo writes: John Deere, current and historic American producer of farming equipment, has long been maligned for their DRM-based lockdowns of said equipment which can make it impossible for farmers to perform their own service. Now a new security bypass has been discovered for some of their equipment, which has revealed that it is in general based on outdated versions of Linux and Windows CE.

Carried out by Sick Codes, the complete attack involves attaching hardware to the PCB inside a touchscreen controller, and ultimately produces a root terminal.

In the bargain, questions are being raised about JD's GPL compliance.

Sick Codes isn't sure how John Deere can eliminate this vulnerability (beyond overhauling designs to add full disk encryption to future models). But Wired also notes that "At the same time, though, vulnerabilities like the ones that Sick Codes found help farmers do what they need to do with their own equipment."

Although the first thing Sick Codes did was get the tractor running a farm-themed version of Doom.
Facebook

Facing Privacy Concerns, Facebook Begins Testing End-to-End Encrypted Chats, Secure Backups (cnbc.com) 19

Thursday Meta published a blog post by their "product management director of Messenger Trust," who emphasized that they've at least begun testing end-to-end encryption by default for Messenger chats. But Meta also announced plans "to test a new secure storage feature for backups of your end-to-end encrypted chats on Messenger...."

"As with end-to-end encrypted chats, secure storage means that we won't have access to your messages, unless you choose to report them to us."

CNBC provides some context: The announcement comes after Facebook turned over Messenger chat histories to Nebraska police as part of an investigation into an alleged illegal abortion. Meta spokesperson Andy Stone said the feature has been in the works for a while and is not related to the Nebraska case...

The feature is rolling out on Android and iOS devices this week, but it isn't yet available on the Messenger website. The company has been discussing full-scale deployment of end-to-end encryption since 2016, but critics have said the security measure would make it much more difficult for law enforcement to catch child predators.... Meta said in the release that it is making progress toward the global rollout of default end-to-end encryption for personal messages and calls in 2023.

Other privacy enhancements announced Thursday by Meta:
  • "We plan to bring end-to-end encrypted calls to the Calls Tab on Messenger."
  • Meta announced that the deleting of messages will start syncing across your other devices "soon."
  • Messenger will continue offering the option of "Disappearing" messages, in which viewed messages in an end-to-end encrypted chat automatically disappear after a pre-specified period of time.

And there's more, according to Meta's announcement:

"This week, we'll begin testing default end-to-end encrypted chats between some people. If you're in the test group, some of your most frequent chats may be automatically end-to-end encrypted, which means you won't have to opt in to the feature. You'll still have access to your message history, but any new messages or calls with that person will be end-to-end encrypted. You can still report messages to us if you think they violate our policies, and we'll review them and take action as necessary....

"Last year, we started a limited test of opt-in end-to-end encrypted messages and calls on Instagram, and in February we broadened the test to include adults in Ukraine and Russia. Soon, we'll expand the test even further to include people in more countries and add more features like group chats....

"We will continue to provide updates as we make progress toward the global rollout of default end-to-end encryption for personal messages and calls in 2023."


Encryption

Facebook Will Begin Testing End-To-End Encryption As Default On Messenger App (theguardian.com) 13

Facebook announced on Thursday it will begin testing end-to-end encryption as the default option for some users of its Messenger app on Android and iOS. The Guardian reports: Facebook Messenger users currently have to opt in to make their messages end-to-end encrypted (E2E), a mechanism that theoretically allows only the sender and recipient of a message to access its content. Facebook spokesperson Alex Dziedzan said on Thursday that E2E encryption is a complex feature to implement and that the test is limited to a couple of hundred users for now so that the company can ensure the system is working properly. Dziedzan also said the move was "not a response to any law enforcement requests." Meta, Facebook's parent company, said it had planned to roll out the test for months. The company had previously announced plans to make E2E encryption the default in 2022 but pushed the date back to 2023. "The only way for companies like Facebook to meaningfully protect people is for them to ensure that they do not have access to user data or communications when a law enforcement agency comes knocking," Evan Greer, the director of the digital rights group Fight for the Future, said. "Expanding end-to-end encryption by default is a part of that, but companies like Facebook also need to stop collecting and retaining so much intimate information about us in the first place."
Bug

Windows 11 Encryption Bug Could Cause Data Loss, Temporary Slowdowns On Newer PCs (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: Microsoft has published a knowledge base article acknowledging a problem with encryption acceleration in the newest versions of Windows that could result in data corruption. The company recommends installing the June 2022 security updates for Windows 11 and Windows Server 2022 "to prevent further damage," though there are no suggested solutions for anyone who has already lost data because of the bug.

The problems only affect relatively recent PCs and servers that support Vector Advanced Encryption Standard (VAES) instructions for accelerating cryptographic operations. Microsoft says affected systems use AES-XTS or AES-GCM instructions "on new hardware." Part of the AVX-512 instruction set, VAES instructions are supported by Intel's Ice Lake, Tiger Lake, Rocket Lake, and Alder Lake architectures -- these power some 10th-generation Core CPUs for laptops, as well as all 11th- and 12th-gen Core CPUs. AMD's upcoming Zen 4 architecture also supports VAES, though by the time these chips are released in the fall, the patches will have had plenty of time to proliferate. Microsoft says that the problem was caused when it added "new code paths" to support the updated encryption instructions in SymCrypt, Windows' cryptographic function library. These code paths were added in the initial release of Windows 11 and Windows Server 2022, so the problem shouldn't affect older versions like Windows 10 or Windows Server 2019.

The initial fix for the problem, provided in Windows' June 2022 security update package (Windows 11 build 22000.778), will prevent further damage at the cost of reduced performance, suggesting that the initial fix was to disable encryption acceleration on these processors entirely. Using BitLocker-encrypted disks or the Transport Layer Security (TLS) protocol or accessing encrypted storage on servers will all be slower with the first patch installed, though installing the July 2022 security updates (Windows 11 build 22000.795) should restore performance to its previous level.
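For readers wondering whether their hardware falls into the affected class, here is a minimal sketch (a Linux-only assumption; it simply looks for the vaes CPU flag rather than querying Windows):

```python
# Minimal sketch (Linux-only assumption): check /proc/cpuinfo for the "vaes"
# flag to see whether a CPU supports the vectorized AES instructions involved.
from pathlib import Path

flags: set[str] = set()
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())
        break

print("VAES supported:", "vaes" in flags)
```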

Intel

SGX, Intel's Supposedly Impregnable Data Fortress, Has Been Breached Yet Again (arstechnica.com) 23

Intel's latest generation of CPUs contains a vulnerability that allows attackers to obtain encryption keys and other confidential information protected by the company's software guard extensions, the advanced feature that acts as a digital vault for securing users' most sensitive secrets. From a report: Abbreviated as SGX, the protection is designed to provide a fortress of sorts for the safekeeping of encryption keys and other sensitive data, even when the operating system or a virtual machine running on top is maliciously compromised. SGX works by creating trusted execution environments that protect sensitive code and the data it works with from monitoring or tampering by anything else on the system.

SGX is a cornerstone of the security assurances many companies provide to users. Servers used to handle contact discovery for the Signal Messenger, for instance, rely on SGX to ensure the process is anonymous. Signal says running its advanced hashing scheme provides a "general recipe for doing private contact discovery in SGX without leaking any information to parties that have control over the machine, even if they were to attach physical hardware to the memory bus." The example is purely hypothetical. Signal spokesperson Jun Harada wrote in an email: "Intel alerted us to this paper... and we were able to verify that the CPUs that Signal uses are not impacted by the findings of this paper and therefore are not vulnerable to the stated attack." Key to the security and authenticity assurances of SGX is its creation of what are called "enclaves," or blocks of secure memory. Enclave contents are encrypted before they leave the processor and are written in RAM. They are decrypted only after they return. The job of SGX is to safeguard the enclave memory and block access to its contents by anything other than the trusted part of the CPU.

Google

Google Tries Publicly Shaming Apple Into Adopting RCS (theverge.com) 187

Google is kicking off a new publicity campaign today to pressure Apple into adopting RCS, the cross-platform messaging protocol that's meant to be a successor to the aging SMS and MMS standards. From a report: The search giant has a new "Get The Message" website that lays out a familiar set of arguments for why Apple should support the standard, revolving around smoother messaging between iPhone and Android devices. Naturally, there's also a #GetTheMessage hashtag to really get those viral juices flowing. For most people, the problems Google describes are most familiar in the form of the green bubbles that signify messages to Android users in Apple's Messages app. While the iPhone app uses Apple's own iMessage service to send texts between iPhones (complete with modern features like encryption, support for group chats, and high-quality image and video transfers), they revert to old-fashioned SMS and MMS when texting an Android user. Not only are these messages shown in a color-clashing green bubble but also they break many of the modern messaging features people have come to rely on.
AI

WhatsApp Boss Says No To AI Filters Policing Encrypted Chat (theregister.com) 38

An anonymous reader quotes a report from The Register: The head of WhatsApp will not compromise the security of its messenger service to bend to the UK government's efforts to scan private conversations. Will Cathcart, who has been at parent company Meta for more than 12 years and head of WhatsApp since 2019, told the BBC that the popular communications service wouldn't downgrade or bypass its end-to-end encryption (E2EE) just for British snoops, saying it would be "foolish" to do so and that WhatsApp needs to offer a consistent set of standards around the globe. "If we had to lower security for the world, to accommodate the requirement in one country, that ... would be very foolish for us to accept, making our product less desirable to 98 percent of our users because of the requirements from 2 percent," Cathcart told the broadcaster. "What's being proposed is that we -- either directly or indirectly through software -- read everyone's messages. I don't think people want that."

Strong E2EE ensures that only the intended sender and receiver of a message can read it, and not even the provider of the communications channel nor anyone eavesdropping on the encrypted chatter. The UK government is proposing that app builders add an automated AI-powered scanner in the pipeline -- ideally in the client app -- to detect and report illegal content, in this case child sex abuse material (CSAM).

The upside is that at least messages are encrypted as usual when transmitted: the software on your phone, say, studies the material, and continues on as normal if the data is deemed CSAM-free. One downside is that any false positives mean people's private communications get flagged up and potentially analyzed by law enforcement or a government agent. Another downside is that the definition of what is filtered may gradually change over time, and before you know it, everyone's conversations are being automatically screened for things politicians have decided are verboten. And another downside is that client-side AI models that don't produce a lot of false positives are likely to be easily defeated, and are mainly good for catching well-known, unaltered CSAM examples.
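To illustrate that last point, here is a minimal sketch of an exact-match client-side filter against a hypothetical blocklist of SHA-256 hashes (real proposals use perceptual hashing, but the evasion problem is analogous): a single changed byte slips past.

```python
# Minimal sketch of exact-match client-side scanning against a hypothetical
# blocklist. The entry below is just the SHA-256 of b"test" for demonstration.
import hashlib

BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def allowed_to_send(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() not in BLOCKLIST

print(allowed_to_send(b"test"))   # False: exact match is flagged
print(allowed_to_send(b"test "))  # True: a one-byte change evades the filter
```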

Security

Post-Quantum Encryption Contender is Taken Out by Single-Core PC and 1 Hour (arstechnica.com) 45

In the US government's ongoing campaign to protect data in the age of quantum computers, a new and powerful attack that used a single traditional computer to completely break a fourth-round candidate highlights the risks involved in standardizing the next generation of encryption algorithms. From a report: Last month, the US Department of Commerce's National Institute of Standards and Technology, or NIST, selected four post-quantum computing encryption algorithms to replace algorithms like RSA, Diffie-Hellman, and elliptic curve Diffie-Hellman, which are unable to withstand attacks from a quantum computer. In the same move, NIST advanced four additional algorithms as potential replacements pending further testing in hopes one or more of them may also be suitable encryption alternatives in a post-quantum world. The new attack breaks SIKE, which is one of the latter four additional algorithms. The attack has no impact on the four PQC algorithms selected by NIST as approved standards, all of which rely on completely different mathematical techniques than SIKE.
United States

Amazon's Ring and Google Can Share Footage With Police Without Warrants (or Your Consent) (cnet.com) 70

U.S. law lets companies like Google and Amazon's Ring doorbell/security camera system "share user footage with police during emergencies without consent and without warrants," CNET reported this week. They add that Ring came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users' consent thus far in 2022 in what it described as "emergency situations."

"That includes instances where the police didn't have a warrant." "So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times," Amazon vice president of public policy Brian Huseman wrote. "In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay...." Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won't provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

"We have nothing to share," the spokesperson responded.

CNET also supplies this historical context: It's been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)... That post made no mention of a workaround for the police during emergency circumstances.
When CNET asked why that workaround wasn't mentioned, Amazon's response was that law enforcement requests, "including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately."

CNET notes there's also no mention of warrantless emergency requests without independent oversight in Ring's own transparency reports about law enforcement requests from past years.

CNET adds that it's not just Amazon. "Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it's up to them to decide whether or not to do so when the police come calling...." (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that "Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all... Ring enabled end-to-end encryption as an option for users in 2021, but it isn't the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display."

The bottom line? [C]onsumers have a choice to make about what they're comfortable with... That said, you can't make informed choices when you aren't well-informed to begin with, and the brands in question don't always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn't make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.
Privacy

Google's Nest Will Provide Data to Police Without a Warrant (petapixel.com) 81

As reported by CNET, Google will allow law enforcement to access data from its Nest products -- or theoretically any other data you store with Google -- without a warrant. PetaPixel reports: "If we reasonably believe that we can prevent someone from dying or from suffering serious physical harm, we may provide information to a government agency -- for example, in the case of bomb threats, school shootings, kidnappings, suicide prevention, and missing person cases," reads Google's TOS page on government requests for user information. "We still consider these requests in light of applicable laws and our policies."

An unnamed Nest spokesperson did tell CNET that the company tries to give its users notice when it provides their data under these circumstances. Google "reserves the right" to make emergency disclosures to law enforcement even when there is no legal requirement to do so. "A provider like Google may disclose information to law enforcement without a subpoena or a warrant 'if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency,'" a Nest spokesperson tells CNET.

While Amazon and Google have both said they would hand over a user's data to law enforcement without a warrant, Arlo, Apple, Wyze, and Anker, owner of Eufy, all confirmed to CNET that they won't give authorities access to a user's smart home camera's footage unless they're shown a warrant or court order. These companies would be legally bound to provide data to the authorities if they were shown a legal document. But, unlike Google and Amazon, they will not otherwise share camera footage with law enforcement, even if they had an emergency request for data. Apple's default setting for video cameras connected via HomeKit is end-to-end encryption, which means the company is unable to share user video at all.
In an updated statement, a Google spokesperson clarified that they have never sent Nest data to authorities, "but it's important that we reserve the right to do so."

They added: "To reiterate, and as we've specified in our privacy commitments, we will only share video footage and audio recordings with third-party apps and services that work with our devices if you or a member of your home explicitly gives us permission, and we'll only ask for this permission in order to provide a helpful experience from an approved partner (such as a home security service provider)."
Encryption

Codebreakers Find 'Sexts,' Arctic Dispatches In 200-Year-Old Encrypted Newspaper Ads (vice.com) 28

Between 1850 and 1855, someone published a series of unusual ads in the British newspaper The Times. They were made up of a series of seemingly random letters, apparently gobbledygook. An anonymous reader adds: Almost 200 years later, a group of codebreakers has finally been able to decrypt some of them and read what they said, discovering that they were actually encrypted messages from a rescue expedition in the Arctic Ocean.
United Kingdom

UK Cybersecurity Chiefs Back Plan To Scan Phones for Child Abuse Images (theguardian.com) 73

Tech companies should move ahead with controversial technology that scans for child abuse imagery on users' phones, the technical heads of GCHQ and the UK's National Cyber Security Centre have said. From a report: So-called "client-side scanning" would involve service providers such as Facebook or Apple building software that monitors communications for suspicious activity without needing to share the contents of messages with a centralised server. Ian Levy, the NCSC's technical director, and Crispin Robinson, the technical director of cryptanalysis -- codebreaking -- at GCHQ, said the technology could protect children and privacy at the same time.

"We've found no reason why client-side scanning techniques cannot be implemented safely in many of the situations one will encounter," they wrote in a discussion paper published on Thursday, which the pair said was "not government policy." They argued that opposition to proposals for client-side scanning -- most famously a plan from Apple, now paused indefinitely, to scan photos before they are uploaded to the company's image-sharing service -- rested on specific flaws, which were fixable in practice. They suggested, for instance, requiring the involvement of multiple child protection NGOs, to guard against any individual government using the scanning apparatus to spy on civilians; and using encryption to ensure that the platform never sees any images that are passed to humans for moderation, instead involving only those same NGOs.

Security

Russian Hackers Behind SolarWinds Are Now Hiding Malware In Google Drive (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: The Russia-linked hacking group behind the infamous SolarWinds espionage campaign is now using Google Drive to stealthily deliver malware to its latest victims. That's according to researchers at Palo Alto Networks' Unit 42 threat intelligence team, who said on Tuesday that the Russian Foreign Intelligence Service (SVR) hacking unit -- tracked as "Cloaked Ursa" by Unit 42 but more commonly known as APT29 or Cozy Bear -- has incorporated Google's cloud storage service into its hacking campaigns to hide their malware and their activities.

APT29 has used this new tactic in recent campaigns targeting diplomatic missions and foreign embassies in Portugal and Brazil between early May and June 2022, according to Unit 42. "This is a new tactic for this actor and one that proves challenging to detect due to the ubiquitous nature of these services and the fact that they are trusted by millions of customers worldwide," the researchers said. "When the use of trusted services is combined with encryption, as we see here, it becomes extremely difficult for organizations to detect malicious activity in connection with the campaign." Unit 42 disclosed the activity to both Dropbox and Google, which took action.
In May, the group was found to be using Dropbox in a campaign targeting diplomats and various government agencies. A Dropbox spokesperson told TechCrunch it disabled the accounts immediately.
Technology

The Code the FBI Used To Wiretap the World (vice.com) 39

The FBI operation in which the agency intercepted messages from thousands of encrypted phones around the world was powered by cobbled-together code. From a report: Motherboard has obtained that code and is now publishing sections of it that show how the FBI was able to create its honeypot. The code shows that the messages were secretly duplicated and sent to a "ghost" contact that was hidden from the users' contact lists. This ghost user, in a way, was the FBI and its law enforcement partners, reading over the shoulder of organized criminals as they talked to each other.

Last year, the FBI and its international partners announced Operation Trojan Shield, in which the FBI secretly ran an encrypted phone company called Anom for years and used it to hoover up tens of millions of messages from Anom users. Anom was marketed to criminals, and ended up in the hands of over 300 criminal syndicates worldwide. The landmark operation has led to more than 1,000 arrests including alleged top tier drug traffickers and massive seizures of weapons, cash, narcotics, and luxury cars. Motherboard has obtained this underlying code of the Anom app and is now publishing sections of it due to the public interest in understanding how law enforcement agencies are tackling the so-called Going Dark problem, where criminals use encryption to keep their communications out of the hands of the authorities. The code provides greater insight into the hurried nature of its development, the freely available online tools that Anom's developers copied for their own purposes, and how the relevant section of code copied the messages as part of one of the largest law enforcement operations ever.
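Conceptually, what Motherboard describes boils down to a delivery loop that quietly adds one hidden recipient. Here is a minimal sketch of that idea; the names and functions are hypothetical, not the actual Anom code.

```python
# Minimal sketch of the "ghost contact" idea: every outgoing message is also
# delivered to a hidden recipient that never appears in the sender's contact
# list. Names and functions are hypothetical, not the actual Anom code.
GHOST_CONTACT = "ghost"   # hidden recipient, invisible in the client UI

def deliver(contact: str, ciphertext: bytes) -> None:
    print(f"delivering {len(ciphertext)} bytes to {contact}")

def send_message(recipients: list[str], ciphertext: bytes) -> None:
    for contact in recipients + [GHOST_CONTACT]:   # one extra, silent copy
        deliver(contact, ciphertext)

send_message(["alice"], b"encrypted blob")
```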
