Debian Security

Preparing To Migrate Off of SHA-1 In OpenPGP

jamie found a note on debian-administration.org, the first in a promised series on migrating off of SHA-1 in OpenPGP. "Last week at Eurocrypt, a small group of researchers announced a fairly serious attack against the SHA-1 digest algorithm, which is used in many cryptosystems, including OpenPGP. The general consensus is that we should be 'moving in an orderly fashion toward the theater exits,' deprecating SHA-1 where possible with an eye toward abandoning it soon (one point of reference: US federal agencies have been directed to cease all reliance on SHA-1 by the end of 2010, and this directive was issued before the latest results). ... So what can you do to help facilitate the move away from SHA-1? I'll outline three steps that current gpg users can take today, and then I'll walk through how to do each one..."
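
For readers who just want the flavor of the fix: the changes mostly amount to telling GnuPG to prefer the SHA-2 family. A rough sketch of the kind of ~/.gnupg/gpg.conf lines involved (option names as documented in gpg(1); see the linked article for the full walkthrough, and adjust the preference order to taste):

    # Prefer SHA-2 digests when making signatures with your own keys
    personal-digest-preferences SHA256 SHA384 SHA512 SHA224
    # Use SHA-256 rather than SHA-1 when certifying other people's keys
    cert-digest-algo SHA256
    # Digest/cipher preferences baked into newly generated keys
    default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
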
  • by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Friday May 08, 2009 @10:15AM (#27876801) Journal

    'moving in an orderly fashion toward the theater exits'

    An elderly application was trampled to death today as everyone struggled to exit the Sha One theater after someone screamed that an unknown assailant had a knife. After the panic, there was no evidence of injuries from the alleged attack and police are still investigating the presence of an actual weapon.

    • I could not really understand the paper itself, but it looked to me like they were talking about a reduction of the brute-force effort to break the hash from 2^57 trials to 2^52.

      2^52 is about 4.5x10^15. I don't know how many cycles each trial takes, but 2^52 cycles at 1 GHz is about 1.7 months of time.

      So if each trial takes 10,000 cycles on a 1 GHz core, you need to parallelize roughly 10,000 ways to bring it back down to 1.7 months.

      Brute-force parallelism at that kind of scale is now very possible.

      As I said I did not perf
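
      For reference, a quick Python sketch of that arithmetic (the 10,000-cycles-per-trial figure is just the assumption above, not a measured cost of a SHA-1 trial):

          trials = 2 ** 52                  # roughly 4.5e15
          cycles_per_trial = 10000          # assumed cost of one trial, see above
          clock_hz = 10 ** 9                # one 1 GHz core
          seconds = trials * cycles_per_trial / clock_hz
          months = seconds / (30 * 24 * 3600)
          print("%.0f months on one core" % months)                 # ~17,000 months
          print("%.1f months on 10,000 cores" % (months / 10000))   # ~1.7 months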

      • Re: (Score:3, Interesting)

        by Joce640k ( 829181 )

        It's a collision attack, which means you can make two files with the same SHA-1 hash in about 2^52 operations (via the birthday paradox).

        In the best possible application of this attack you can make two files, one good and one incriminating, and get somebody to sign the good one.

        The two changeling files are generated via a randomizing process, so generating meaningful text files really isn't possible; the files have to contain binary data with a 'random' appearance. Examples of this would be crypto keys, SSL certificates, stu

        • Actually, the birthday attack IS useful in the real world. The birthday attack has been successfully used to dupe Verisign into issuing an SSL certificate to an address of the attacker's choosing! The attackers simply created two CSRs with the same MD5 hash--one for their website (which they had signed) and another for their victim's site.

          Had they been blackhats instead of security researchers, they could have performed flawless MITM against SSL sites.
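
          To make the "sign the good one" scenario concrete, here is a toy Python sketch. It birthday-searches a 32-bit truncation of SHA-1 (nothing like the real 2^52 attack, but the same square-root scaling), then shows that a "signature" computed over the digest of the harmless file also verifies against its twin. The sign/verify functions here are stand-ins, not a real signature scheme:

              import hashlib, hmac, itertools

              def toy_digest(m):                  # first 4 bytes (32 bits) of SHA-1
                  return hashlib.sha1(m).digest()[:4]

              # Birthday search: two different messages with the same truncated
              # digest, found after roughly 2**16 tries rather than 2**32.
              seen = {}
              for i in itertools.count():
                  m = ("contract variant %d" % i).encode()
                  d = toy_digest(m)
                  if d in seen:
                      good, evil = seen[d], m
                      break
                  seen[d] = m

              KEY = b"signer's private key"       # stand-in for a real signing key
              def sign(m):        return hmac.new(KEY, toy_digest(m), hashlib.sha256).digest()
              def verify(m, sig): return hmac.compare_digest(sign(m), sig)

              sig = sign(good)          # the victim signs the harmless file...
              print(verify(evil, sig))  # ...and the signature covers the twin too: True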

  • by marcosdumay ( 620877 ) <(marcosdumay) (at) (gmail.com)> on Friday May 08, 2009 @10:28AM (#27876957) Homepage Journal
    From TFA, so others don't have to read it: GPG will stay with SHA-224, SHA-256, SHA-384, and SHA-512.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      If I understand the attack correctly, I think most real-world SHA-1 usage should be secure for the time being. From the looks of it, the researchers were able to reduce the time necessary to find two inputs that hash to the same digest. This is very different from finding a second input that hashes to a known digest. If that were possible, common hash applications like storing password digests or making digital signatures would be vulnerable.

      But until researchers can take a known digest value and find an
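
      That distinction is easy to see with a toy experiment: truncate SHA-1 to 16 bits, then compare how long it takes to hit one fixed digest (a second preimage, about 2^16 tries) versus any repeated digest (a collision, about 2^8 tries thanks to the birthday bound). A Python sketch:

          import hashlib, itertools

          def h16(m):                   # toy 16-bit hash: first 2 bytes of SHA-1
              return hashlib.sha1(m).digest()[:2]

          # Second preimage: match one FIXED digest -- about 2**16 tries on average.
          target = h16(b"the digest everyone already knows")
          preimage_tries = next(i for i in itertools.count(1)
                                if h16(("guess %d" % i).encode()) == target)

          # Collision: match ANY previously seen digest -- about 2**8 tries.
          seen = set()
          for collision_tries in itertools.count(1):
              d = h16(("doc %d" % collision_tries).encode())
              if d in seen:
                  break
              seen.add(d)

          print(preimage_tries, "tries for a second preimage,",
                collision_tries, "for a collision")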

  • by Anonymous Coward on Friday May 08, 2009 @10:36AM (#27877075)

    I guess I'll just go back to good old MD5.

    • I hope you were joking... MD5 isn't encryption, it's a hash. Encryption has to be reversible with a key or passphrase; hashes aren't reversible at all.
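
      A small illustration of the difference in Python: with a hash you never get the input back, you can only re-hash a candidate and compare (real password storage would also add a salt and a slow KDF, but that's beside the point here):

          import hashlib

          stored = hashlib.sha256(b"hunter2").hexdigest()   # digest only; no way back

          def check(attempt):
              return hashlib.sha256(attempt).hexdigest() == stored

          print(check(b"hunter2"), check(b"password"))      # True False
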
  • by Vexler ( 127353 )

    ...I am moving "off of" this grammar-school newsletter piece.

    This is news for nerds, not news for dropouts.

  • by tlhIngan ( 30335 ) <slashdot&worf,net> on Friday May 08, 2009 @10:55AM (#27877269)

    Really stupid question (not a cryptographer), but is there anything wrong with using multiple hash algorithms (hopefully none derived from one another)? Surely breaking two or more hashes simultaneously would be far harder?

    E.g., MD5 is broken. But what if we use both MD5 and SHA-1?
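
    For what it's worth, the "compute both and check both" reading of that question is trivial to do, e.g. in Python; whether it buys as much as it seems to is exactly what the replies below dig into:

        import hashlib

        def both_digests(data):
            # Publish/store BOTH; a forgery would have to collide in MD5 and
            # SHA-1 for the very same pair of files.
            return hashlib.md5(data).hexdigest(), hashlib.sha1(data).hexdigest()

        md5_ref, sha1_ref = both_digests(b"release-1.0.tar.gz contents")

        def verify(data):
            return both_digests(data) == (md5_ref, sha1_ref)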

    • Re: (Score:3, Informative)

      by russotto ( 537200 )

      Really stupid question (not a cryptographer), but is there anything wrong with using multiple hash algorithms (hopefully none derived from one another)? Surely breaking two or more hashes simultaneously would be far harder?

      E.g., MD5 is broken. But what if we use both MD5 and SHA-1?

      Unfortunately not; once you've broken the strongest hash, the rest can be broken in polynomial time.

      http://article.gmane.org/gmane.comp.encryption.general/5154 [gmane.org]

      • by blueg3 ( 192743 ) on Friday May 08, 2009 @11:35AM (#27877869)

        This has nothing to do with multiple hash algorithms. What you're referring to is that finding an n-way collision from a 2-way collision is polynomial time. That is, a 2-way collision is two documents with the same hash, and an n-way collision is n documents with the same hash.

        Finding a pair of documents that have the same SHA1 hash doesn't help you find a pair of documents with the same MD5 hash. Indeed, none of the efficient-collision algorithms allow you to find collisions in both SHA1 and MD5 simultaneously. (Note that, as far as I know, there aren't even any efficient preimage attacks on MD5 or SHA1, only collision attacks.)

        Using multiple hash algorithms is helpful, yes.

        • This has nothing to do with multiple hash algorithms.

          You should really read the rest of that post.

          Finding a pair of documents that have the same SHA1 hash doesn't help you find a pair of documents with the same MD5 hash.

          No, but if you can find one you can also find many, and among the many you can find an MD5 collision. From the link:

          Joux then extended this argument to point out that attempts to increase the security of hash functions by concatenating the outputs of two independent functions don't actually increase their theoretical security. For example, defining H(x) = SHA1(x) || RIPEMD160(x) still gives you only about 160 bits of strength

          "||" is concatenation. The theoretical argument, in brief: with the Merkle-Damgaard construction (used in MDx, SHAx, RIPEMDx), it takes a factor k to find 2^k collisions, over finding just a single one. By finding many collisions in one hash function, we have enough messages that they're likely to collide in the second as well.

          But, he goes on to say
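
          A sketch of the concatenation construction being discussed, using MD5 in place of RIPEMD-160 since not every Python/OpenSSL build exposes the latter; per Joux's multicollision argument, the 288 bits of output still give only roughly the strength of the stronger component:

              import hashlib

              def concat_hash(data):
                  # H(x) = SHA1(x) || MD5(x): long output, not correspondingly stronger
                  return hashlib.sha1(data).digest() + hashlib.md5(data).digest()

              print(len(concat_hash(b"example")) * 8, "bits of output")   # 288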

    • by SparkleMotion88 ( 1013083 ) on Friday May 08, 2009 @11:28AM (#27877739)

      Surely breaking two or more hashes simultaneously would be far harder

      You can't be "sure" about much in cryptography. If m is a message, and f and g are hash functions, you are assuming that you get more security from f(g(m)) than from f(m). You have to define what kind of security you are hoping for. With hash algorithms, you usually want: given a digest m', it is difficult to find some message m where h(m) = m'.

      So to answer your question, using f(g(m)) would only be more secure than f(m) if f contains some vulnerability and no cryptanalytic vulnerability is introduced by combining f and g. If I don't have any information about either type of vulnerability, I would have no reason to assume that f(g(m)) is more secure than f(m). In this situation, I would stick with f(m) because it has been studied more and people have probably spent time trying to break it.

      To put it another way, breaking f(g(m)) does not necessarily require breaking both f and g. The resulting algorithm that you get by composing f with g is still only one algorithm (just like SHA-1), and that is the only algorithm that needs to be broken.
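
      A sketch of that composition reading, with f = SHA-1 and g = MD5 as stand-ins: note that any collision in the inner g is automatically a collision for the composed function, which is one concrete way the combination can end up no stronger than its parts:

          import hashlib

          def f_of_g(m):
              # f(g(m)) with f = SHA-1, g = MD5: still a single 160-bit-output
              # function, and g(m1) == g(m2) implies f(g(m1)) == f(g(m2)).
              return hashlib.sha1(hashlib.md5(m).digest()).hexdigest()

          print(f_of_g(b"hello"))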

      • by YesIAmAScript ( 886271 ) on Friday May 08, 2009 @11:34AM (#27877837)

        I'm not so sure he's talking about applying one hash to the other's output, as much as performing both hashes on the same material and storing both results, also checking both results. Then you'd have to create a collision for both hashes in order to beat the system.

        • by tlhIngan ( 30335 )

          I'm not so sure he's talking about applying one hash to the other's output, as much as performing both hashes on the same material and storing both results, also checking both results. Then you'd have to create a collision for both hashes in order to beat the system.

          Yes, sorry. I meant to say you have message M, and you hash M with multiple algorithms, a(M), b(M), ..., coming up with hashes A, B, ... Now you have message N, and a(M) == a(N). It shouldn't follow (always, that is) that b(M) == b(N), especiall

      • Such a long post for the content:

        f(g(m)) may be stronger or weaker than g(m) or f(m), but I have no evidence either way; however, g alone has been tested more than f(g(m)).

        OR

        I dunno, but SHA-1 alone has been tested more than MD5 applied to SHA-1.

    • An ideal cryptographic hash should require exponential (in the hash length) time to break. A "broken" hash, like MD5 or SHA-1, is one for which an algorithm exists which is capable of finding collisions in less than exponential time.

      Combining hashes is the cryptographic equivalent of making a single, longer hash. If you combine broken hashes, you have a long broken hash, which would actually be far easier to break than an ideal cryptographic hash of the same length.

  • by Morphine007 ( 207082 ) on Friday May 08, 2009 @10:57AM (#27877289)
    Guess the Aussies overpaid, since their $560k "unbreakable" cryptosystem relies on SHA-1 [slashdot.org]. Shock of shocks, I know...
    • by jd ( 1658 )

      If they pay me another $560k, I'll upgrade them to SHA-2 or Midnight Blue. (Either that, or I'll figure out how to Seuss the names of hashes.)

    • Meh, only for key diversification and PIN hashing. Both are pretty safe. Of course, that won't matter to people who are not too deep into cryptography. Once they hear it is unsafe they will try to migrate away.

  • by YesIAmAScript ( 886271 ) on Friday May 08, 2009 @10:57AM (#27877291)

    So many major systems are secured with PK systems that depend on SHA-1 hashes now. If this can be broken, someone please put this to good use by making a collision that makes it possible for people to write homebrew code for the PS3 or 360.

    I keep hearing about all these hash collisions and how I should be scared, but I wish I could at least get the good with the bad.

    • by gmuslera ( 3436 )
      Reading about collisions and gaming consoles risks putting the context far away from the crypto topic, especially if you think that PK and SHA-1 could be great car model names.
    • AFAICT collisions aren't much use for such things, unless you could make two apps, one malicious and one non-malicious, that collided, and somehow get the non-malicious one signed with high privileges. But if you are in a position to do that, you can just get code with a hidden malicious feature signed.

  • by bcrowell ( 177657 ) on Friday May 08, 2009 @11:04AM (#27877373) Homepage

    So what can you do to help facilitate the move away from SHA-1?

    One specific thing that would really help would be if debian would make it a priority to do a complete job of packaging the relevant hash functions, along with bindings for popular languages. For instance, I have an open-source perl app [lightandmatter.com] that uses digital watermarks. The user can choose between SHA1 and Whirlpool. However, I want to keep my app simple for users to install, and the perl binding for Whirlpool hasn't been packaged for debian yet, so I've made SHA1 the default. A debian user who wants to use Whirlpool with my app has to jump through hoops, e.g., installing the perl module via CPAN. That's actually a real pain for a debian or ubuntu user, because CPAN and apt don't play nicely; you can get in all kinds of screwed-up states if you try to install half your perl modules using apt and half using CPAN.

    TFA is talking about gpg. Well, realistically, the choice of hash function is not the bottleneck in terms of security when it comes to sending encrypted or signed email. The bottleneck is mainly just that it's too hard to use (isn't built in to most GUI email clients), and in the case of encryption it also suffers from negative network effects -- there's no big benefit to me from using gpg encryption in my email unless the people I'm communicating with also use the technology. The world's best crypto doesn't do you any good if you don't use it because it's too much of a pain.

    I think gpg is clearly a case where the perfect has been the enemy of the good. They've been so hung up on protecting the user against obscure vulnerabilities that they've ended up making the darn thing too hard for the vast majority of users. The docs, last time I checked, were basically written in Martian. I have a bachelor's degree in math, I program computers as a hobby, and I've read Schneier's Applied Cryptography. I'm not claiming that makes me a big expert on crypto, but it does put me out in front of the vast majority of the population. Well, I simply cannot figure out all the ins and outs of gpg. Okay, I could, but it would take more time than I'm willing to invest.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      One specific thing that would really help would be if debian would make it a priority to do a complete job of packaging the relevant hash functions, along with bindings for popular languages.

      However, as this is Debian they are more likely to "disable" SHA-1 by making it emit the plaintext.

    • by Eythian ( 552130 )

      installing the perl module via CPAN. That's actually a real pain for a debian or ubuntu user, because CPAN and apt don't play nicely; you can get in all kinds of screwed-up states if you try to install half your perl modules using apt and half using CPAN.

      IME, 'dh-make-perl' does a pretty good job of making .debs from CPAN packages.

      Of course, this doesn't invalidate your actual point that the bindings should be in there in the first place.

  • by 200_success ( 623160 ) on Friday May 08, 2009 @12:33PM (#27878759)

    According to x509(1) and ca(1), OpenSSL supports md2, md5, sha1, and mdc2 as options for message digests for certificates. Since MD2 and MD5 are already broken, and SHA1 is now suspect, that leaves just the relatively obscure MDC-2 [wikipedia.org].

    • Re: (Score:3, Informative)

      by Cruicky ( 1122359 )

      The OpenSSL package in Ubuntu supports SHA224, SHA256, SHA384, and SHA512, even though they're not actually shown in the help; the command-line options are -sha224, -sha256, -sha384, and -sha512.

      I've been happily using SHA512 with a personal CA for the last year.

      It's possible that other distributions have also compiled in support for these hash functions in their OpenSSL packages.
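
      On most distributions Python's hashlib sits on top of the same OpenSSL library, so a quick way to see which digests your local build exposes (not the same thing as the x509/ca command-line switches, but a decent sanity check) is:

          import hashlib

          print(sorted(hashlib.algorithms_available))   # should include 'sha512'
          print(hashlib.sha512(b"test").hexdigest())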

    • by jd ( 1658 )

      A good point, especially given that sites upgraded to SHA-1 from MD5 after the debacle over forged certificates. They're not going to want to upgrade again, especially to something obscure.

    • And not blowfish, sha256, sha512?

  • by swordgeek ( 112599 ) on Friday May 08, 2009 @02:08PM (#27880317) Journal

    It's been a while since I had to deal with PGP keys and the like, and things have multiplied since then. Is there a simple explanation for the status/compatibility/equivalency of...

    pgp
    openpgp
    gpg
    gnupg

    And any others I'm missing?

    • by molo ( 94384 ) on Friday May 08, 2009 @03:43PM (#27881781) Journal

      Did no one really reply to this?

      PGP is the original: Phil Zimmermann, export control, all the history.

      OpenPGP is a specification for all input and output of a PGP system. RFC 4880. Diverges from PGP5.

      GPG == GNUPG. A Free Software implementation of OpenPGP. Has now become the most commonly used OpenPGP implementation. Werner Koch is the project lead.

      -molo

    • Is there a simple explanation for the status/compatibility/equivalency

      PGP 2.x primarily used IDEA, which is patented, so other implementations don't support it.

      Pretty much everything else (including later versions of PGP) interoperates just fine.
