The Courts

The ACLU Is Suing For More Information About the FBI's Phone-Hacking Lab (theverge.com) 31

On Tuesday, the American Civil Liberties Union filed a new lawsuit demanding information about the FBI's Electronic Device Analysis Unit (EDAU) -- a forensic unit that the ACLU believes has been quietly breaking the iPhone's local encryption systems. The Verge reports: "The FBI is secretly breaking the encryption that secures our cell phones and laptops from identity thieves, hackers, and abusive governments," the ACLU said in a statement announcing the lawsuit, "and it refuses to even acknowledge that it has information about these efforts." The FBI has made few public statements about the EDAU, but the lawsuit cites a handful of cases in which prosecutors have submitted a "Mobile Device Unlock Request" and received data from a previously locked phone. The EDAU also put in public requests for the GrayKey devices that found success unlocking a previous version of iOS.

In June 2018, the ACLU filed a FOIA request for records relating to the EDAU, but the FBI has refused to confirm any records even exist. After a string of appeals within the FOIA process, the group is taking the issue to federal court, calling on the attorney general and FBI inspector general to directly intervene and make the records available. "We're demanding the government release records concerning any policies applicable to the EDAU, its technological capabilities to unlock or access electronic devices, and its requests for, purchases of, or uses of software that could enable it to bypass encryption," the ACLU said in a statement.

Encryption

Signal Says Cellebrite Cannot Break Its Encryption 14

Signal, in a blog post: Yesterday, the BBC ran a story with the factually untrue headline, "Cellebrite claimed to have cracked chat app's encryption." This is false. Not only can Cellebrite not break Signal encryption, but Cellebrite never even claimed to be able to. Since we weren't actually given the opportunity to comment in that story, we're posting this to help to clarify things for anyone who may have seen the headline. Last week, Cellebrite posted a pretty embarrassing (for them) technical article to their blog documenting the "advanced techniques" they use to parse Signal on an Android device they physically have with the screen unlocked. This is a situation where someone is holding an unlocked phone in their hands and could simply open the app to look at the messages in it. Their post was about doing the same thing programmatically (which is equally simple), but they wrote an entire article about the "challenges" they overcame, and concluded that "...it required extensive research on many different fronts to create new capabilities from scratch."

[...] What really happened: If you have your device, Cellebrite is not your concern. It is important to understand that any story about Cellebrite Physical Analyzer starts with someone other than you physically holding your device, with the screen unlocked, in their hands. Cellebrite does not even try to intercept messages, voice/video, or live communication, much less "break the encryption" of that communication. They don't do live surveillance of any kind.

Cellebrite is not magic. Imagine that someone is physically holding your device, with the screen unlocked, in their hands. If they wanted to create a record of what's on your device right then, they could simply open each app on your device and take screenshots of what's there. This is what Cellebrite Physical Analyzer does. It automates the process of creating that record. However, because it's automated, it has to know how each app is structured, so it's actually less reliable than if someone were to simply open the apps and manually take the screenshots. It is not magic, it is mediocre enterprise software.

Cellebrite did not "accidentally reveal" their secrets. This article, and others, were written based on a poor interpretation of a Cellebrite blog post about adding Signal support to Cellebrite Physical Analyzer. Cellebrite posted something with a lot of detail, then quickly took it down and replaced it with something that has no detail. This is not because they "revealed" anything about some super advanced technique they have developed (remember, this is a situation where someone could just open the app and look at the messages). They took it down for the exact opposite reason: it made them look bad.
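The practical takeaway from Signal's post is that, once a device is unlocked, "parsing an app" is ordinary file access rather than cryptanalysis. As a purely hypothetical sketch in Python (the path and schema are invented for illustration; real apps, including Signal, structure and additionally protect their local storage differently), reading a chat app's local database from an extracted filesystem looks roughly like this:

```python
import sqlite3

# Hypothetical path to an app's local message store, pulled from a device
# whose screen (and therefore filesystem and keystore) is already unlocked.
# Signal's real database is further protected with a local encryption layer;
# this sketch only shows that "parsing an app" on an unlocked device is
# ordinary file I/O, not code-breaking.
DB_PATH = "extracted/com.example.chatapp/databases/messages.db"

def dump_messages(db_path):
    """Read every row from a (hypothetical) messages table and print it."""
    conn = sqlite3.connect(db_path)
    try:
        for sender, body, timestamp in conn.execute(
            "SELECT sender, body, timestamp FROM messages ORDER BY timestamp"
        ):
            print(f"{timestamp} {sender}: {body}")
    finally:
        conn.close()

if __name__ == "__main__":
    dump_messages(DB_PATH)
```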
Encryption

Authorities Don't Need To Break Phone Encryption in Most Cases, Because Modern Phone Encryption Sort of Sucks (twitter.com) 61

Matthew Green, a cryptographer and professor at Johns Hopkins University, shares in a series of tweets: My students Max and Tushar Jois spent most of the summer going through every piece of public documentation, forensics report, and legal document we could find to figure out how police were "breaking phone encryption." This was prompted by a claim from someone knowledgeable that forensics companies no longer had the ability to break the Apple Secure Enclave Processor, which would make it very hard to crack the password of a locked, recent iPhone. We wrote an enormous report -- a draft of which you can read here (PDF) about what we found, which we'll release after the holidays. The TL;DR is kind of depressing: Authorities don't need to break phone encryption in most cases, because modern phone encryption sort of sucks.

I'll focus on Apple here but Android is very similar. The top level is that, to break encryption on an Apple phone, you need to get the encryption keys. Since these are derived from the user's passcode, you either need to guess that -- or you need the user to have entered it. Guessing the passcode is hard on recent iPhones because there's (at most) a 10-guess limit enforced by the Secure Enclave Processor (SEP). There's good evidence that at one point in 2018 a company called Grayshift had a SEP exploit that did this for the iPhone X. There is really no solid evidence that this exploit still works on recent-model iPhones, after 2018. If anything, the evidence is against it. So if they can't crack the passcode, how is law enforcement still breaking into iPhones (because they definitely are)? The boring answer very likely is that police aren't guessing suspects' passcodes. They're relying on the fact that the owner probably typed it in. Not after the phone is seized, in most cases. Beforehand.
The full thread on Twitter here.
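The key technical point in the thread is that the phone's data-encryption keys are derived from the user's passcode, entangled with a device-bound secret inside the SEP, which also enforces the guess limit. A minimal, illustrative sketch of passcode-based key derivation (using PBKDF2 as a stand-in, not Apple's actual construction) shows why the passcode, and the hardware guess limit protecting it, carry all the weight:

```python
import hashlib
import os

# Illustrative only: Apple's real scheme entangles the passcode with a
# device-unique key inside the Secure Enclave and enforces the guess limit
# in hardware. This sketch just shows why a short passcode is the weak link
# if an attacker can ever try derivations without that limit.
def derive_key(passcode, salt, iterations=200_000):
    """Derive a 256-bit encryption key from a passcode (PBKDF2 as a stand-in)."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)
key = derive_key("123456", salt)
print(key.hex())

# A 6-digit numeric passcode has only 1,000,000 possibilities; without the
# SEP's 10-guess limit, brute force is trivial at any iteration count.
```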
Encryption

Israeli Spy Tech Firm Says It Can Break Into Signal App (haaretz.com) 87

Last Thursday, Israeli phone-hacking firm Cellebrite said in a blog post that it can now break into Signal, an encrypted app considered safe from external snooping. Haaretz reports: Cellebrite's flagship product is the UFED (Universal Forensic Extraction Device), a system that allows authorities to unlock and access the data of any phone in their possession. Another product it offers is the Physical Analyzer, which helps organize and process data lifted from the phone. Last Thursday, the company announced that the analyzer has now been updated with a new capability, developed by the firm, that allows clients to decode information and data from Signal. Signal, owned by the Signal Technology Foundation, uses a special open source encryption system called Signal Protocol, which was thought to make it nigh-on impossible for a third party to break into a conversation or access data being shared on the platform. It does so by employing what's called "end-to-end encryption."

According to Cellebrite's announcement last week, "Law enforcement agencies are seeing a rapid rise in the adoption of highly encrypted apps like Signal, which incorporate capabilities like image blurring to stop police from reviewing data." "Criminals are using this application to communicate, send attachments, and making [sic] illegal deals that they want to keep discrete [sic] and out of sight from law enforcement," the blog post added. Despite support for the app's encryption capabilities, Cellebrite noted that "Signal is an encrypted communication application designed to keep sent messages and attachments as safe as possible from 3rd-party programs.

"Cellebrite Physical Analyzer now allows lawful access to Signal app data. At Cellebrite, we work tirelessly to empower investigators in the public and private sector to find new ways to accelerate justice, protect communities, and save lives." In an earlier, now deleted, version of the blog post, the company went as far as to say: "Decrypting Signal messages and attachments was not an easy task. It required extensive research on many different fronts to create new capabilities from scratch. At Cellebrite, however, finding new ways to help those who make our world a safer place is what we're dedicated to doing every day." The initial post, which was stored on the Internet Archive, also included a detailed explanation of how Cellebrite "cracked the code" by reviewing Signal's own open source protocol and using it against it. The company noted in the deleted blog post that "because [Signal] encrypts virtually all its metadata to protect its users, efforts have been put forward by legal authorities to require developers of encrypted software to enable a 'backdoor' that makes it possible for them to access people's data. Until such agreements are reached, Cellebrite continues to work diligently with law enforcement to enable agencies to decrypt and decode data from the Signal app."

The Internet

Why Apple, Cloudflare, and Fastly Proposed a New Privacy-Focused DNS Standard Called 'Oblivious DoH' (zdnet.com) 64

"Cloudflare, Apple, and Fastly have co-designed and proposed a new DNS standard to tackle ongoing privacy issues associated with DNS," reports ZDNet.

Cloudflare calls it "a practical approach for improving privacy" that "aims to improve the overall adoption of encrypted DNS protocols without compromising performance and user experience..." Third parties, such as ISPs, find it more difficult to trace website visits when DNS over HTTPS (DoH) is enabled. DoH deployment is on the cards for many major browser providers, although rollout plans are ongoing. Now, Oblivious DNS over HTTPS (ODoH) has been proposed by Cloudflare — together with partners PCCW Global, Surf, and Equinix — to improve on these models by adding an additional layer of public key encryption and a network proxy...

The overall aim of ODoH is to decouple clients from resolvers. A network proxy is inserted between clients and DoH servers — such as Cloudflare's 1.1.1.1 public DNS resolver — and the combination of both this and public key encryption "guarantees that only the user has access to both the DNS messages and their own IP address at the same time," according to Cloudflare... "The client behaves as it does in DNS and DoH, but differs by encrypting queries for the target, and decrypting the target's responses..."
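To make that division of knowledge concrete, here is a conceptual sketch of the three ODoH roles. It is not the real ODoH wire format (which uses HPKE and standard DoH framing) and it assumes the third-party PyNaCl library; it only illustrates that the proxy handles opaque bytes while the target never learns the client's address:

```python
# Conceptual sketch of the ODoH roles, assuming the third-party PyNaCl
# library (pip install pynacl). Real ODoH uses HPKE and DoH message
# framing; this only illustrates the privacy split: the proxy sees the
# client's address but not the query, and the target sees the query but
# not the client's address.
import base64
import json

from nacl.public import PrivateKey, PublicKey, SealedBox

# Target resolver's long-term key pair (its public key is published).
target_key = PrivateKey.generate()

def client_build_query(qname):
    """Client: encrypt the question for the target and include a fresh
    response key, so the proxy can read neither query nor answer."""
    response_key = PrivateKey.generate()
    payload = json.dumps({
        "qname": qname,
        "response_pub": base64.b64encode(bytes(response_key.public_key)).decode(),
    }).encode()
    return SealedBox(target_key.public_key).encrypt(payload), response_key

def proxy_forward(opaque_query):
    """Proxy: relays opaque bytes; it knows the client's IP, not the query."""
    return target_resolve(opaque_query)

def target_resolve(opaque_query):
    """Target: decrypts the query and seals the answer back to the client's
    one-time response key; it sees the question, not the client's address."""
    payload = json.loads(SealedBox(target_key).decrypt(opaque_query))
    answer = f"{payload['qname']} -> 192.0.2.1"  # placeholder resolution
    response_pub = PublicKey(base64.b64decode(payload["response_pub"]))
    return SealedBox(response_pub).encrypt(answer.encode())

# Client round trip through the proxy.
ciphertext, response_key = client_build_query("example.com")
sealed_answer = proxy_forward(ciphertext)
print(SealedBox(response_key).decrypt(sealed_answer).decode())
```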

Test clients for the code have been provided to the open source community to encourage experimentation with the proposed standard. It can take years before support is enabled by vendors for new DNS standards, but Eric Rescorla, Firefox's CTO, has already indicated that Firefox will "experiment" with ODoH.

The Internet

Cloudflare and Apple Design a New Privacy-Friendly Internet Protocol (techcrunch.com) 90

Engineers at Cloudflare and Apple say they've developed a new internet protocol that will shore up one of the biggest holes in internet privacy that many don't even know exists. Dubbed Oblivious DNS-over-HTTPS, or ODoH for short, the new protocol makes it far more difficult for internet providers to know which websites you visit. From a report: [...] Recent developments like DNS-over-HTTPS (or DoH) have added encryption to DNS queries, making it harder for attackers to hijack DNS queries and point victims to malicious websites instead of the real website you wanted to visit. But that still doesn't stop the DNS resolvers from seeing which website you're trying to visit. Enter ODoH, which decouples DNS queries from the internet user, preventing the DNS resolver from knowing which sites you visit. Here's how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can't see what's inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with. "What ODoH is meant to do is separate the information about who is making the query and what the query is," said Nick Sullivan, Cloudflare's head of research.
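For comparison, a plain DoH lookup (the step ODoH builds on) can be made against Cloudflare's public resolver using its documented JSON endpoint; the sketch below does so with only the standard library. Note that even here the resolver still learns both the query and the client's IP address, which is exactly what ODoH's added proxy is meant to prevent:

```python
# A plain DoH lookup against Cloudflare's public resolver, using its
# JSON API (application/dns-json). The query is hidden from the network,
# but the resolver still sees both the question and the client's IP.
import json
import urllib.request

def doh_lookup(name, rrtype="A"):
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rrtype}"
    req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    return [record["data"] for record in answer.get("Answer", [])]

print(doh_lookup("example.com"))
```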
Businesses

Google Launches Android Enterprise Essentials Aimed at SMBs (zdnet.com) 14

Google said it is launching Android Enterprise Essentials, a mobile device management service for small enterprises. From a report: Based on the Android Enterprise Recommended program, Google's Android Enterprise Essentials is a pared-down version with default security features, aimed at small businesses with smaller budgets. Google is trying to address the reality that smaller organizations are often targeted by cybercriminals. Features include:

Requiring a lock screen and encryption on devices to prevent unauthorized access to company data.
Enforcing mandatory malware protection with an always-on Google Play Protect.
The ability to wipe all company data from a device.
The core security features are applied automatically without the need to configure devices.

Government

Report Claims America's CIA Also Controlled a Second Swiss Encryption Firm (courthousenews.com) 100

Long-time Slashdot reader SonicSpike brings this report from AFP: Swiss politicians have voiced outrage and demanded an investigation after revelations that a second Swiss encryption company was allegedly used by the CIA and its German counterpart to spy on governments worldwide. "How can such a thing happen in a country that claims to be neutral like Switzerland?" co-head of Switzerland's Socialist Party, Cedric Wermuth, asked in an interview with Swiss public broadcaster SRF late Thursday. He called for a parliamentary inquiry after an SRF investigation broadcast on Wednesday found that a second Swiss encryption firm had been part of a spectacular espionage scheme orchestrated by U.S. and German intelligence services.

A first investigation had revealed back in February an elaborate, decades-long set-up, in which the CIA and its German counterpart creamed off the top-secret communications of governments through their hidden control of a Swiss encryption company called Crypto.

SRF's report this week found that a second but smaller Swiss encryption firm, Omnisec, had been used in the same way.

That company, which was split off from Swiss cryptographic equipment maker Gretag in 1987, sold voice, fax and data encryption equipment to governments around the world until it halted operations two years ago. SRF's investigative program Rundschau concluded that, like Crypto, Omnisec had sold manipulated equipment to foreign governments and armies. Omnisec meanwhile also sold its faulty OC-500 series devices to several federal agencies in Switzerland, including its own intelligence agencies, as well as to Switzerland's largest bank, UBS, and other private companies in the country, the SRF investigation showed.

The findings unleashed fresh outrage in Switzerland, which is still reeling from the Crypto revelations.

The first compromised cryptography company "served for decades as a Trojan horse to spy on governments worldwide," according to the article, citing news reports from SRF, the Washington Post and German broadcaster ZDF. "The company supplied devices for encoded communications to some 120 countries from after World War II to the beginning of this century, including to Iran, South American governments, India and Pakistan.

"Unknown to those governments, Crypto was secretly acquired in 1970 by the U.S. Central Intelligence Agency together with the then West Germanyâ(TM)s BND Federal Intelligence Service."
Privacy

Ask Slashdot: Why Haven't We Implemented Public Key Infrastructure Voting? 433

Long-time Slashdot reader t0qer has a question: why haven't we gone to an open source, Public Key Infrastructure-based voting system? "I'm fairly well versed in PKI technology, and quoting this site, it would take traditional computers 300 trillion years to break RSA-2048 for a single vote." SSL.com has a pretty interesting piece on using Public Key Infrastructure in voting. There's also a GitHub project that leverages PKI and IBM blockchain technology...

It just seems like paper at this point has outlived its usefulness as a secure medium. A closed-source voting system doesn't really seem like the kind of thing Slashdot would really get behind.

SSL.com's article points out that the technology seems to exist already. Nearly half the population of Estonia already votes online, and four U.S. states (Arizona, Colorado, Missouri and North Dakota) already have web portals that allow for absentee voting. (And West Virginia has a mobile voting app that uses blockchain technology.) [L]uckily, the groundwork for securing the practice of remote, online voting is already there. We have been conducting many delicate transactions online for some time — the secure transfer of information has been a cornerstone for many industries that have successfully shifted online such as personal banking and investing, and those methods of securing and authenticating information can be employed in voting as well. For years, people have suggested that blockchain technology could be used to secure elections and increase voter turnout.
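For readers unfamiliar with the building block being discussed, the sketch below shows the bare PKI primitive: signing a ballot with an RSA-2048 private key and verifying it with the corresponding public key, using the third-party Python cryptography package. It is only the signature step; a real voting system also has to solve ballot secrecy, coercion resistance, voter authentication, and verifiable tallying, which a signature alone does not provide:

```python
# Minimal sketch of the PKI primitive behind the submitter's question:
# sign a ballot with an RSA-2048 private key, verify with the public key.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In practice the key pair would live on a smart card or national eID,
# with the public key bound to the voter by a certificate authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

ballot = b"election=2020-11-03;race=governor;choice=candidate-42"

signature = private_key.sign(
    ballot,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification raises InvalidSignature if the ballot was altered in transit.
public_key.verify(
    signature,
    ballot,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("ballot signature verified")
```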
Share your own thoughts in the comments. Why haven't we implemented Public Key Infrastructure voting?
Encryption

Google is Rolling Out End-To-End Encryption for RCS in Android Messages Beta (theverge.com) 77

After two long, complicated years, every Android user worldwide (outside China) now has access to the next-gen texting standard that is replacing SMS. Google is directly offering RCS chat services through its Android Messages app to anybody who installs it and uses it as their default texting app, which partly bypasses a carrier rollout that, at times, has ranged from sluggish to incoherent to broken. From a report: Just as importantly, Google has announced that it's finally beginning to enable a key privacy feature: end-to-end encryption. For Android users who use Android Messages, one-on-one chats will eventually be end-to-end encrypted by default, meaning neither carriers nor Google will be able to read the content of those messages. Even though encryption is only beginning to roll out to people who sign up for the public beta for Android Messages, turning on encryption for RCS is a very big deal. It's a massive privacy win, as it could mean that the de facto replacement for SMS will, by default, be private on the smartphone platform used by the vast majority of people worldwide.

As for the people who use that other smartphone platform -- the iPhone -- we have no word on whether Apple intends to adopt the RCS standard. But as every carrier worldwide gets on board, and now that there is a clearer path to ensuring private communication with RCS, the pressure on Apple to participate is likely to build. Unfortunately, SMS becoming fully deprecated and replaced by RCS will only happen if all goes to plan for Google. Since initially announcing plans to transition to RCS as the primary texting platform for Android, the standard's rollout has been mired in confusion. In attempting to be neutral and make Android's texting a standard shared by carriers worldwide, Google set itself up with the job of herding multibillion-dollar cats -- with sadly predictable results.

Privacy

Apple Responds To Gatekeeper Issue With Upcoming Fixes (techcrunch.com) 54

Apple has updated a documentation page detailing the company's next steps to prevent last week's Gatekeeper bug from happening again. The company plans to implement the fixes over the next year. From a report: Apple had a difficult launch day last week. The company released macOS Big Sur, a major update for macOS. Apple then suffered from server-side issues. Third-party apps failed to launch as your Mac couldn't check the developer certificate of the app. That feature, called Gatekeeper, makes sure that you didn't download a malware app that disguises itself as a legit app. If the certificate doesn't match, macOS prevents the app launch. Many have been concerned about the privacy implications of the security feature. Does Apple log every app you launch on your Mac to gain competitive insights on app usage? It turns out it's easy to answer that question as the server doesn't mandate encryption. Jacopo Jannone intercepted an unencrypted network request and found out that Apple is not secretly spying on you. Gatekeeper really does what it says it does. "We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices," the company wrote.
Security

Report: Swiss Government Long in Dark Over CIA Front Company (axios.com) 25

The Swiss intelligence service has known since at least 1993 that Switzerland-based encryption device maker Crypto AG was actually a front for the CIA and its German counterpart, according to a new report released by the Swiss Parliament, but Swiss leaders were in the dark until last year. From a report: Switzerland's intra-governmental information gap is unlikely to be welcome news in Europe, which already looks warily upon the U.S.' expansive surveillance practices. Still, Crypto AG provided information of incalculable value to U.S. policymakers over many decades. Crypto AG was controlled from 1970 on by the CIA and the West German BND intelligence agency. It sold encryption devices -- often employed in diplomatic communications -- that were used by over 120 countries through the 2000s.
Government

Swiss Report Reveals New Details On CIA Spying Operation (washingtonpost.com) 36

An anonymous reader quotes a report from The Washington Post: The CIA and German intelligence jeopardized Switzerland's historic reputation for neutrality by using a Swiss company as a platform for a global espionage operation for decades, according to a report released Tuesday by members of the Swiss parliament. Investigators concluded that Swiss authorities were aware of, and at times complicit in, an elaborate espionage operation in which the CIA covertly owned and controlled a Swiss company, Crypto AG, that secretly sold rigged encryption systems to foreign governments.

The report marks the culmination of a Swiss investigation launched after the history of the Crypto operation was revealed earlier this year by The Washington Post in collaboration with ZDF, German public television, and Swiss broadcaster SRF. The Crypto operation exploited "Switzerland's image abroad as a neutral state," according to the report, which also said that Swiss authorities had effectively allowed the CIA and its German counterpart, the BND, to carry out "intelligence operations to the detriment of other states by hiding behind a Swiss company." The probe marks the first public accounting by a foreign government of an espionage operation so successful and extensive that a classified CIA history referred to it as "the intelligence coup of the century." The CIA did not respond to a request for comment, and the BND previously declined to comment.

Math

Computer Scientists Achieve 'Crown Jewel' of Cryptography (quantamagazine.org) 69

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work. Erica Klarreich, reporting for Quanta Magazine: In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team's approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment. "But I thought iO doesn't exist?" he said. At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is "one cryptographic primitive to rule them all," said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true. Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, "you could see a lot of negative vibes," said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, "Who will win: the makers or the breakers?" "There were the people who were the zealots, and they believed in [iO] and kept working on it," said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, "there was less and less of those people."

Now, Jain -- together with Huijia Lin of the University of Washington and Amit Sahai, Jain's adviser at UCLA -- has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only "standard" security assumptions. All cryptographic protocols rest on assumptions -- some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past. "Barring a really surprising development, these assumptions will stand," Ishai said. While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of "deniable" encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and "functional" encryption, in which you can give chosen users different levels of access to perform computations using your data. The new result should definitively silence the iO skeptics, Ishai said. "Now there will no longer be any doubts about the existence of indistinguishability obfuscation," he said. "It seems like a happy end."
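For context, the standard definition of indistinguishability obfuscation (paraphrased here, not quoted from the new paper) says an obfuscator must preserve functionality while making obfuscations of functionally identical circuits computationally indistinguishable:

```latex
% Indistinguishability obfuscation (iO), informally.
% An efficient algorithm iO is an indistinguishability obfuscator for a
% circuit class $\{\mathcal{C}_\lambda\}$ if two conditions hold:

\textbf{Functionality:}\quad
  \forall C \in \mathcal{C}_\lambda,\ \forall x:\quad \mathsf{iO}(C)(x) = C(x).

\textbf{Indistinguishability:}\quad
  \text{for all equal-size } C_0, C_1 \text{ with } C_0(x) = C_1(x) \text{ for all } x,
  \text{ and every efficient distinguisher } D:
  \bigl|\Pr[D(\mathsf{iO}(C_0)) = 1] - \Pr[D(\mathsf{iO}(C_1)) = 1]\bigr|
  \le \mathrm{negl}(\lambda).
```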

Apple

Apple Introduces M1 Chip To Power Its New Arm-Based Macs (theverge.com) 155

Apple has introduced the new M1 chip that will power its new generation of Arm-based Macs. It's a 5nm processor, just like the A14 Bionic powering its latest iPhones. From a report: Apple says the new processor will focus on combining power efficiency with performance. It has an eight-core CPU, which Apple says offers the world's best performance per watt of any CPU. Apple says it delivers the same peak performance as a typical laptop CPU at a quarter of the power draw. It says this has four of the world's fastest CPU cores, paired with four high-efficiency cores. It pairs this with up to an eight-core GPU, which Apple claims offers the world's fastest integrated graphics, and a 16-core Neural Engine. In addition, the M1 processor has a unified memory architecture, a USB 4 controller, media encode and decode engines, and a host of security features. These include hardware-verified secure boot, encryption, and run-time protections.
Encryption

NSA Ducks Questions About Backdoors In Tech Products (reuters.com) 84

The U.S. National Security Agency is rebuffing efforts by a leading Congressional critic to determine whether it is continuing to place so-called back doors into commercial technology products, in a controversial practice that critics say damages both U.S. industry and national security. Reuters reports: The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others. These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications. The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws. "At NSA, it's common practice to constantly assess processes to identify and determine best practices," said Anne Neuberger, who heads NSA's year-old Cybersecurity Directorate. "We don't share specific processes and procedures." Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

Intel

Hackers Can Now Reverse Engineer Intel Updates Or Write Their Own Custom Firmware (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they're secured. The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it's patching. The key may also allow parties other than Intel -- say a malicious hacker or a hobbyist -- to update chips with their own microcode, although that customized version wouldn't survive a reboot.

"At the moment, it is quite difficult to assess the security impact," independent researcher Maxim Goryachy said in a direct message. "But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates." Goryachy and two other researchers -- Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies -- worked jointly on the project. The key can be extracted for any chip -- be it a Celeron, Pentium, or Atom -- that's based on Intel's Goldmont architecture.
In a statement, Intel officials wrote: "The issue described does not represent security exposure to customers, and we do not rely on obfuscation of information behind red unlock as a security measure. In addition to the INTEL-SA-00086 mitigation, OEMs following Intel's manufacturing guidance have mitigated the OEM specific unlock capabilities required for this research. The private key used to authenticate microcode does not reside in the silicon, and an attacker cannot load an unauthenticated patch on a remote system."
The Internet

Study Shows Which Messengers Leak Your Data, Drain Your Battery, and More (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that's being linked. Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly.

The researchers behind Monday's report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart in their report shows, both apps download and copy a linked file in its entirety -- even if it's gigabytes in size. Again, this may be a concern if the file is something the users want to keep private. It's also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That's a problem because users have no way of vetting the security of JavaScript and can't expect messengers to have the same exploit protections modern browsers have.

LinkedIn performed only slightly better. Its only difference was that, rather than copying files of any size, it copied only the first 50 megabytes. Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. "We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who's sharing which links to whom," Haj Bakry and Mysk wrote. Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. [This chart] provides a comparison of each app in the study.
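As a rough illustration of the mitigation the better-behaved apps use, the sketch below fetches only a bounded prefix of a linked resource when building a preview instead of downloading the whole file (the 1 MB cap is illustrative; the study found real apps capping at roughly 15 MB to 50 MB), and it never executes any JavaScript it retrieves:

```python
# Minimal sketch of a size-capped link-preview fetch using only the
# standard library. The fetched bytes are treated as inert data; no
# JavaScript in the page is ever executed.
import urllib.request

MAX_PREVIEW_BYTES = 1 * 1024 * 1024  # stop after 1 MB

def fetch_preview_data(url):
    req = urllib.request.Request(url, headers={"User-Agent": "preview-bot/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        # Bail out early if the server declares an oversized body.
        length = resp.headers.get("Content-Length")
        if length and int(length) > MAX_PREVIEW_BYTES:
            return b""
        # Read at most MAX_PREVIEW_BYTES even if no length was declared.
        return resp.read(MAX_PREVIEW_BYTES)

html_head = fetch_preview_data("https://example.com/")
print(len(html_head), "bytes fetched for preview")
```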

Encryption

The Police Can Probably Break Into Your Phone (nytimes.com) 96

At least 2,000 law enforcement agencies have tools to get into encrypted smartphones, according to new research, and they are using them far more than previously known. From a report: In a new Apple ad, a man on a city bus announces he has just shopped for divorce lawyers. Then a woman recites her credit card number through a megaphone in a park. "Some things shouldn't be shared," the ad says, "iPhone helps keep it that way." Apple has built complex encryption into iPhones and made the devices' security central to its marketing pitch. That, in turn, has angered law enforcement. Officials from the F.B.I. director to rural sheriffs have argued that encrypted phones stifle their work to catch and convict dangerous criminals. They have tried to force Apple and Google to unlock suspects' phones, but the companies say they can't. In response, the authorities have put their own marketing spin on the problem. Law enforcement, they say, is "going dark." Yet new data reveals a twist to the encryption debate that undercuts both sides: Law enforcement officials across the nation regularly break into encrypted smartphones.

That is because at least 2,000 law enforcement agencies in all 50 states now have tools to get into locked, encrypted phones and extract their data, according to years of public records collected in a report by Upturn, a Washington nonprofit that investigates how the police use technology. At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don't have such tools can often send a locked phone to a state or federal crime lab that does. With more tools in their arsenal, the authorities have used them in an increasing range of cases, from homicides and rapes to drugs and shoplifting, according to the records, which were reviewed by The New York Times. Upturn researchers said the records suggested that U.S. authorities had searched hundreds of thousands of phones over the past five years. While the existence of such tools has been known for some time, the records show that the authorities break into phones far more than previously understood -- and that smartphones, with their vast troves of personal data, are not as impenetrable as Apple and Google have advertised. While many in law enforcement have argued that smartphones are often a roadblock to investigations, the findings indicate that they are instead one of the most important tools for prosecutions.

Privacy

3 TB of Private Webcam/Home Security Video Leaked on Porn Sites (inputmag.com) 44

schwit1 quotes Input: A hacking group that has yet to identify itself found and stole more than 3 TB of private video from around the world — mainly collected from Singapore — and shared it on porn sites, according to reports from local media like The New Paper. While some of the footage was indeed pornographic in nature, other videos are more mundane.

More than 50,000 private IP-based cameras were accessed by hackers to amass the collection. Some were explicitly tagged with locations in Singapore, The New Paper reports, while others revealed their location as Singapore based on context clues such as book titles and home layout. Many show people (sometimes with their faces censored) in "various stages of undress or compromising positions...."

It's looking like poor security is the culprit. Clement Lee, a solutions architect for multinational software company Check Point Software Technologies, told The New Paper that the hacking of IP cameras is often due to "poor password management." IP cameras make it easy to access your video feeds from anywhere — which means it's also easy for hackers to access them from anywhere, once they've figured out your password...

The unfortunate fact of the matter is that internet-connected devices are inherently susceptible to hacking. Add lax encryption and lazy users to the mix and you have a recipe for disaster.
