
GnuTLS Flaw Leaves Many Linux Users Open To Attacks

A new flaw has been discovered in the GnuTLS cryptographic library that ships with several popular Linux distributions and hundreds of software packages. According to the bug report, "A malicious server could use this flaw to send an excessively long session id value and trigger a buffer overflow in a connecting TLS/SSL client using GnuTLS, causing it to crash or, possibly, execute arbitrary code." A patch is currently available, but it will take time for all of the software maintainers to implement it. A lengthy technical analysis is available. "There don't appear to be any obvious signs that an attack is under way, making it possible to exploit the vulnerability in surreptitious 'drive-by' attacks. There are no reports that the vulnerability is actively being exploited in the wild."
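For a concrete sense of the bug class, here is a minimal C sketch of the unchecked-length pattern the report describes. This is illustrative only, not the actual GnuTLS code; the struct and function names are invented for the example.

    #include <stdint.h>
    #include <string.h>

    #define MAX_SESSION_ID 32   /* TLS session ids are at most 32 bytes */

    struct session {
        uint8_t id[MAX_SESSION_ID];
        size_t  id_len;
    };

    /* Broken pattern: trust the length byte the server sent. */
    void parse_sid_broken(struct session *s, const uint8_t *p, size_t n)
    {
        size_t sid_len = p[0];          /* attacker-controlled */
        (void)n;                        /* no validation: that's the bug */
        memcpy(s->id, p + 1, sid_len);  /* overflows s->id if sid_len > 32 */
        s->id_len = sid_len;
    }

    /* Fixed pattern: validate before copying. */
    int parse_sid_fixed(struct session *s, const uint8_t *p, size_t n)
    {
        if (n < 1)
            return -1;
        size_t sid_len = p[0];
        if (sid_len > MAX_SESSION_ID || sid_len > n - 1)
            return -1;                  /* reject the malformed hello */
        memcpy(s->id, p + 1, sid_len);
        s->id_len = sid_len;
        return 0;
    }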
  • Who uses GnuTLS?

    by Anonymous Coward

    Everything I know of uses OpenSSL.

    • Re:Who uses GnuTLS? (Score:4, Informative)

      by Anonymous Coward on Tuesday June 03, 2014 @03:17PM (#47158627)

      "apt-cache showpkg libgnutls26" says that mutt, claws-mail, empathy, emacs, telepathy, wine, and some qemu stuff uses it.

      So it is not completely unused.

      • But no major server software that another party can connect to remotely and exploit.

      • $ apt-cache rdepends libgnutls26 | tail -n +3 | wc -l
        497

        Oh crap...

      • Don't know about the others, but mutt has an option to compile with OpenSSL instead.

      • Ho, that's what you think! Have a look at this: chromium-browser is in there as well!

        $ apt-cache showpkg libgnutls26
        Package: libgnutls26
        Versions:
        2.12.23-12ubuntu2.1 (/var/lib/apt/lists/za.archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages) (/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_trusty-security_main_binary-amd64_Packages) (/var/lib/dpkg/status)
        Description Language:
        File: /var/lib/apt/lists/za.archive.ubuntu.com_ubuntu_dists_trusty_main_binary-amd
    • by r1348 ( 2567295 )

      If that makes you feel safe...

    • FileZilla uses GnuTLS because the maintainer decided OpenSSL had an API that was too unwieldy.

  • "malicious server" (Score:4, Informative)

    by bill_mcgonigle ( 4333 ) * on Tuesday June 03, 2014 @02:41PM (#47158287) Homepage Journal

    malicious server

    Sorta important - there's not much popular software that uses GNUTLS, but wget is one of them. Since it's almost always used as a client, it's probably wise to use curl -O against unknown servers, until they get this straightened out.

    • by Mr. Gus ( 58458 )

      Exim uses gnutls on debian (and ubuntu, and probably other derivatives).

    • Sorta important - there's not much popular software that uses GNUTLS, but wget is one of them. Since it's almost always used as a client, it's probably wise to use curl -O against unknown servers, until they get this straightened out.

      wget can be built against OpenSSL, and curl can be used with GnuTLS.

  • by koan ( 80826 )
    I saw the patch first thing, then the warning telling me it wasn't trusted.
  • by maugle ( 1369813 ) on Tuesday June 03, 2014 @02:45PM (#47158333)
    I don't understand what the programmers of all these crypto libraries were thinking here. Even for the most basic and unimportant program, the rule is "if the data comes from outside, verify!" This is vastly more important when cryptography is involved, so why is it that all these crypto libraries seem to blindly trust whatever the Internet is sending them?!
    • It seems like taint tracking and sanitization should be pervasive and explicit. This can be partially enforced by the type system, no?

      • It seems like taint tracking and sanitization should be pervasive and explicit. This can be partially enforced by the type system, no?

        This is possible in almost any modern language, although in some languages the code will be so horrible you have to wonder whether the cure is worse than the disease. For example, in C you could wrap tainted data in a struct that is only touched by a few select sanitization functions. (You would still have to make sure no lazy or malicious code pokes around in the struct, or casts away this protection, but you could write a tool to check that.) The same goes for languages like Python, although again it is easy to get a
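        To make that concrete, here is a minimal sketch of the struct-wrapping idea in C (all names invented; the convention is that only sanitize() may turn tainted bytes into a plain string):

        #include <stdio.h>

        /* Raw network bytes live in this struct; treat it as opaque by convention. */
        typedef struct {
            char   bytes[256];
            size_t len;
        } tainted;

        /* The only sanctioned exit: reject anything that isn't printable ASCII. */
        static int sanitize(const tainted *in, char *out, size_t outlen)
        {
            if (in->len >= outlen)
                return -1;              /* too long: refuse, don't truncate */
            for (size_t i = 0; i < in->len; i++) {
                unsigned char c = (unsigned char)in->bytes[i];
                if (c < 0x20 || c > 0x7e)
                    return -1;          /* unprintable: refuse */
                out[i] = (char)c;
            }
            out[in->len] = '\0';
            return 0;
        }

        int main(void)
        {
            tainted t = { "hello", 5 };
            char clean[64];
            if (sanitize(&t, clean, sizeof clean) == 0)
                printf("%s\n", clean);
            return 0;
        }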

    • by QuietLagoon ( 813062 ) on Tuesday June 03, 2014 @02:57PM (#47158453)

      I don't understand what the programmers of all these crypto libraries were thinking here. Even for the most basic and unimportant program, the rule is "if the data comes from outside, verify!" This is vastly more important when cryptography is involved, so why is it that all these crypto libraries seem to blindly trust whatever the Internet is sending them?!

      From what I read of the OpenSSL source code, it would be an insult to programmers everywhere to call the people who barfed up the OpenSSL code "programmers".

      • Re: (Score:2, Offtopic)

        by rsclient ( 112577 )

        Actually, most of the comments I've seen about the OpenSSL code are immature, and show a lack of appreciation for the changes in the industry.

        Like, remember that if-isupper-then-tolower code? Well, back in the day, tolower on most platforms would just unconditionally set one bit (the 0x20 bit). That converts 'A' to 'a', but it also converts an at-sign to a back-tick. In "modern" toolchains this doesn't happen any more; tolower is expected to handle all chars and work correctly.

        But -- as a developer, can you prove that every system

        • I've done code that works on multiple platforms. It used to be really, really gnarly: every platform was always just a little bit different. And you get code that looks just like what I've seen in the snarky comments.

          No, you don't. If you have a broken printf on a platform, you write code like:

          #ifdef BROKEN_PRINTF
          #include <stdarg.h>
          #include <stdio.h>
          int GOOD_printf(const char *fmt, ...) {
              /* Work around the platform's breakage here;
                 deferring to vprintf shown as a placeholder */
              va_list ap;
              va_start(ap, fmt);
              int ret = vprintf(fmt, ap);
              va_end(ap);
              return ret;
          }
          #else
          #define GOOD_printf printf
          #endif

          GOOD_printf("Hello, world!\n");

          so that you've encapsulated the damage to one place in your codebase. You don't sprinkle #ifdef BROKEN_PRINTF a thousand different places in 20 modules if you don't want to go insane trying to keep track of it.

          The OpenSSL devs aren't getting grief for writing complex code. They're getting grief for writing unnecessarily complex code by an order of magnitude, and they've earned every bit of it.

          • Almost right, except that you stick #define printf GOOD_printf at the end of the #ifdef block and then always use printf(); don't force everyone reading the code to work out that GOOD_printf() means printf().
            • That works, too. OpenSSL took the route of using the macro names everywhere (calling them BIO_*), which kind of makes sense because printf wouldn't necessarily have the behavior documented in a contributor's printf(3) man page. That could be a whole 'nother world of hurt.
              • OpenSSL took the worst possible route. They had FOO_{standard library function} and BAR_{standard library function}, and also just used the unadorned library function. The FOO_ variant had some special behaviour, the BAR_ version was sometimes the standard library version and sometimes their own (depending on both the platform and the function - in some cases they always wrote their own even when there's an adequate - or even better - version shipped with the platform, in some cases they made a per-platfo
    • by Anonymous Coward

      It all comes down to money and time in the end. Input-verification errors like this are the most common bug in all software, but most of them don't involve exploits; they just screw up something you are doing. The weakness is really the fact that C/C++ are extremely long in the tooth in the way they handle bounds checking, and really -- we have CPU cycles to burn across the board at this point. If this were implemented in Java, C#, or any other modern language it would simply be impossible to

      • You're not up to date, are you? About 20 years out of date, TBH.

        C++ has had bounds checking of arrays for decades, either through guard pages at both ends, or with decent containers.

        Sure, when working with C constructs you're falling back to C, but even then you could enable memory guard pages in most compilers. It's just that these are generally only enabled in debug builds (and if you don't spot the bug when testing the debug build, you're not likely to find it in the release build either, I guess).

    • by rahvin112 ( 446269 ) on Tuesday June 03, 2014 @03:08PM (#47158553)

      The actual rule is you always verify data, regardless of source. You might trust internal data not to be intentionally malicious, but you can't design something idiot proof because idiots are incredibly ingenious.

      • ...but you can't design something idiot proof because idiots are incredibly ingenious.

        And incomprehensibly lucky...

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          not exactly lucky ... it's just that there are so damn many of them ....

    • by Anonymous Coward

      The rule should be to always verify, regardless of where the data comes from. You can catch a lot of bugs by not being too presumptuous about complex data structures or formats you receive from "trusted" callers.

      But I'm not advocating belt & suspenders programming. That's usually bad engineering. Bridge builders don't decide to just toss in a few more girders "just to be on the safe side". Writing algorithms that are robust and resilient without being redundant takes a very thoughtful approach.

      • ... But I'm not advocating belt & suspenders programming. That's usually bad engineering. Bridge builders don't decide to just toss in a few more girders "just to be on the safe side". Writing algorithms that are robust and resilient without being redundant takes a very thoughtful approach.

        They used a dangerous speed hack in a message that was used only occasionally and was non-critical.
        The speed hack is why the out-of-range read was not detected. That is not good programming, or good judgement.

    • by cant_get_a_good_nick ( 172131 ) on Tuesday June 03, 2014 @05:57PM (#47160199)

      I think there's a basic issue here, and that's of "what do I want to work on". This is a problem in any project - it's not limited to coding.

      I'm sure GnuTLS is coded the way many things are coded. Let's start with a framework, and hang dummy code on it. Say "hey, we got here!" when we get a packet. Then you flesh that out, and do what you really should do when you get that. Hey, it works! Beers all around. Then later, you start thinking "hmm, how can this get abused" and you add checks.

      But wait, before you think about how you can get broken, you're like "this code needs real functionality, let me work on this next". And the bounds checks never get coded.

      I'm sure you've been on a project where you thought "I really should cross all the T's, dot all the I's here", then your boss says "it works good enough" and you never get around to making it bulletproof. Or you do the fun drywall project at home, and you already sanded with 150 grit, so you just don't bother with the 300 grit... it's good enough.

      Open source doesn't mean it's not written by people, with people's quirks and issues.

      • That's exactly the problem. If your approach is features first, security later, you have already made a fundamental mistake. Writing secure code is not a matter of adding extra checking later. It means writing good, proper code right from the start. One of the most obvious consequences of that is not to use functions like sprintf at all, but to use substitutes that allow, and in fact demand, proper length checking (a sketch follows below).

        My $0.05: Of course managers never see a business case for adding security checking later. There is
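        A minimal sketch of that substitution (a hedged example, not from any particular codebase):

        #include <stdio.h>

        void greet(const char *name)
        {
            char buf[32];
            /* sprintf(buf, "Hello, %s!", name) would overflow for long names */
            int n = snprintf(buf, sizeof buf, "Hello, %s!", name);
            if (n < 0 || (size_t)n >= sizeof buf)
                return;                 /* truncated or failed: refuse to continue */
            puts(buf);
        }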

      • ... OpenSource doesn't mean it's not written by people, with peoples' quirks and issues.

        That's true.
        But it is not an excuse.

  • by larry bagina ( 561269 ) on Tuesday June 03, 2014 @02:48PM (#47158369) Journal
    There have been too many problems with existing crypto code so I've developed something better: goatsecret. Instead of relying on math, it relies on a frenchman's gaping asshole. Basically, the software breaks your message/file/whatever into small chunks and superimposes the data in the goatsecret image. Sure, it's not encrypted, but who is going to stare into the void just to get your data? No hacker/cracker/big business/three-letter-agency is that desperate.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      There have been too many problems with existing crypto code so I've developed something better: goatsecret. Instead of relying on math, it relies on a frenchman's gaping asshole. Basically, the software breaks your message/file/whatever into small chunks and superimposes the data in the goatsecret image. Sure, it's not encrypted, but who is going to stare into the void just to get your data? No hacker/cracker/big business/three-letter-agency is that desperate.

      ... neither is the intended recipient of the data.
      That's the only flaw with your scheme I can think of.

      • by Noryungi ( 70322 )

        There have been too many problems with existing crypto code so I've developed something better: goatsecret. Instead of relying on math, it relies on a frenchman's gaping asshole. Basically, the software breaks your message/file/whatever into small chunks and superimposes the data in the goatsecret image. Sure, it's not encrypted, but who is going to stare into the void just to get your data? No hacker/cracker/big business/three-letter-agency is that desperate.

        ... neither is the intended recipient of the data.
        That's the only flaw with your scheme I can think of.

        ... Except if the recipient is French, of course!

        (By the way, wasn't the goatse.cx guy American?)

    • I have developed a program that cracks goatsecret output: goatsextract. It is able to automatically extract the data from a frenchman's gaping asshole, without the user ever viewing the goatsecret image. Requires imagemagick, tesseract and leptonica.


  • ...at least since C was initially created (and perhaps even before that).

    When do we accept that this is a failure intrinsic to the programming languages themselves and move on to correct it?

    http://en.wikipedia.org/wiki/B... [wikipedia.org]
    • As soon as Ruby can walk the walk that they so brazenly talk while maintaining an acceptable performance level. People who *don't* design languages for a living still have to care about how the execution times for simple tasks affect their clients' operating costs. A true carpenter doesn't blame his tools for his own mistakes.

    • I'd even go so far as to say the problem creeps into larger issues. All the libraries you require are based in C/C++. Qt, etc. These code bases are completely massive, and even if you run some small "shows a box on screen" app you are calling 3000 lines of possibly broken and insecure code. The solution is to move the core libraries away from C to C#, Java, or some other viable candidate that prevents software from "doing bad things". Essentially what the open source community has been saying is "trust us", but who exactly do you trust to carry your wallet? I only trust myself... How about you? Community-developed software is great provided it is implemented on a framework that is invulnerable to input errors. I'd rather have my app crash than get hacked.
      • I'd even go so far as to say the problem creeps into larger issues. All the libraries you require are based in C/C++. Qt, etc. These code bases are completely massive, and even if you run some small "shows a box on screen" app you are calling 3000 lines of possibly broken and insecure code. The solution is to move the core libraries away from C to C#, Java, or some other viable candidate that prevents software from "doing bad things". Essentially what the open source community has been saying is "trust us", but who exactly do you trust to carry your wallet? I only trust myself... How about you? Community-developed software is great provided it is implemented on a framework that is invulnerable to input errors. I'd rather have my app crash than get hacked.

        Guess what language the Java VM or the CLR is written in... C and C++, sometimes assembly.
        So what you are saying is we should not be coding against libraries written in C or C++; instead we should be coding against VMs that are written in C and C++. And how exactly is that any better?

        As for trusting these VMs not to let code do bad things: you mean like we should trust the VMs whose security was bad enough that Homeland Security issued a warning to disable or remove them?

          • Yes, but it's easier to worry about one library (the VM) than 5000. Also, the need for "compiling" is caused by the limitation that your OS cannot directly digest the bytecode. If it could, there would be nearly no need for a C++ glue layer other than a very minimal one that can easily be secured. If it can read that code, then the C++ dependency goes away outside of the small bit riding right on the hardware. Put C where it belongs -- touching the hardware -- and move the standard libraries into the virtual sandbox. No
    • by ledow ( 319597 ) on Tuesday June 03, 2014 @03:08PM (#47158549) Homepage

      The alternative - runtime performance hits, and arrays allowed to grow to unreasonable and uncontrollable sizes unless you insert checks similar to those that combat buffer overflows - just seems to be something that nobody wants.

      Fact is, moan all you like, system libraries can be written in any language you like, and they can interface with C code and C-style functions quite easily. There's nothing stopping system libraries being written in a managed language - as Windows is moving towards - and interfacing with old-style C APIs.

      But nobody's doing that. Not because buffer overflow in C isn't a problem, not because they naively think their code is bulletproof. But simply because of reasons of performance, memory use and knock-on library sizes and dependencies.

      Nobody is stopping you, or anyone else, from rewriting something as performance-critical as GnuTLS in any language you like. But nobody has. And if they have, nobody that develops code requiring GnuTLS uses it.

      For kernels and drivers, I'd fight the corner of "it has to be C, or a similar, dangerous, low-level language". Once you get to the application layer of things like OpenSSL, GnuTLS, or pretty much any library, there's no excuse. Nobody's writing them, and if they are, they are losing out to the C-based libraries. And not BECAUSE they are written in C and we all have this nostalgia for crappy C code, not BECAUSE these things must be written in C to work properly, not BECAUSE the API is C-based and no other language interfaces with it - but obviously because of other reasons.

      What those exact reasons are, I'll leave others to discuss. But I greatly suspect it's to do with the huge size and impact of such managed languages.

      • by Animats ( 122034 ) on Tuesday June 03, 2014 @03:31PM (#47158777) Homepage

        No, it's a backwards-compatibility issue. There's all that C code out there, sort of working. It's not an overhead issue. Most subscript checks in inner loops can be hoisted to the top of the loop and optimized out. For counted for loops, this is almost a no-brainer inside a compiler. The Go compiler, which checks subscripts, does that.

        There are three big memory issues in C: "How big is it?", "Who deletes it?", and "Who locks it?". The language helps with none of them. "How big is it?" problems lead to buffer overflows and security holes. "Who deletes it?" leads to memory leaks, and occasionally to use after deletion. "Who locks it?" leads to race conditions. Of these, "How big is it?" causes the most security trouble.

        C is especially bad because the language doesn't even have a way to talk about the size of an array. When you pass an array to a function, all size info is lost. This sucks.

        C is this way because the compiler had to be crammed in a machine with an address space of 128KB, the PDP-11. We have more memory now. I first wrote C in 1978. We should be past this mess by now.
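        To see that size-information loss concretely, a minimal sketch (the array bound in the parameter list is ignored by the compiler):

        #include <stdio.h>

        void takes_array(int a[16])    /* the '16' is decoration: 'a' is just an int* */
        {
            printf("inside:  %zu\n", sizeof a);   /* pointer size, e.g. 8 */
        }

        int main(void)
        {
            int a[16];
            printf("outside: %zu\n", sizeof a);   /* 64: the size is known only here */
            takes_array(a);
            return 0;
        }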

        • wrong. (Score:2, Informative)

          by Anonymous Coward

          C is that way because it had to interface to HARDWARE that doesn't know anything about size limitations.

          And EVERY language you use has to interface at some point where "size" limitations are unknown.

          And if the hardware (ie, the CPU) has such instructions (like the VAX did) you then bump into the problem of lack of portability... And the CPU is then so constrained that it can't evolve either... (also a VAX problem - it had so many things bound to a 32 bit address that it couldn't go to 64 bits, and remain a

          • C is that way because it had to interface to HARDWARE that doesn't know anything about size limitations.

            That's what programming languages are for - to allow better abstractions than the hardware provides. That's what higher-level languages are for. At the assembly level, few assemblers know about array size, but C is not an assembly language. C knows about loops and scope, which the hardware does not. This allows optimization of checks, which can often be hoisted out of a loop. (For non-compiler people, "hoisting" means moving a computation upwards in code, so that it's done earlier, preferably once per loop.)
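            A sketch of what that hoisting looks like in C (illustrative only; a bounds-checked language's compiler performs the equivalent transformation automatically):

            #include <stdlib.h>

            long sum_first_n(const long *a, size_t len, size_t n)
            {
                if (n > len)
                    abort();            /* one hoisted range check, done once */
                long sum = 0;
                for (size_t i = 0; i < n; i++)
                    sum += a[i];        /* provably in bounds: no per-access check */
                return sum;
            }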

        • C is especially bad because the language doesn't even have a way to talk about the size of an array. When you pass an array to a function, all size info is lost. This sucks.

          How is that a problem? Pass the size in a separate variable. Put the array in a struct and add a member for the size. Or add a function to your struct that returns the size. Whatever. The possibilities are there. If you don't use them because programming in C is less cushy than in other languages, the fault is entirely yours. There is nothing in C preventing you from writing proper code. You just have to do it, with the understanding that it will be more work. But it's hardly impossible.
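          A minimal sketch of the struct approach (names invented, just to illustrate the point):

          #include <stddef.h>

          /* Carry the length alongside the pointer so callees can check it. */
          struct buf {
              unsigned char *data;
              size_t         len;
          };

          /* Bounds-checked element read: fails instead of walking off the end. */
          int buf_get(const struct buf *b, size_t i, unsigned char *out)
          {
              if (i >= b->len)
                  return -1;
              *out = b->data[i];
              return 0;
          }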

          • How is that a problem? Pass the size in a separate variable.

            You've just answered your own question. It's a problem because it requires programmers to concern themselves with low-level tedious details that the compiler could handle for them - details that they are in fact likely to get wrong. (E.g., you have to pass the correct size value, you have to remember to check it everywhere, and so on.)

            Decades of buffer overflows should be sufficient evidence that this is not a good approach. Unfortunately, many programmers stubbornly refuse to see the obvious.

            • It's a problem because it requires programmers to concern themselves with low-level tedious details that the compiler could handle for them

              So basically your statement can be reduced to "If you're lazy and stupid, don't use C". I'm fine with that. But I'd like to add that if you're lazy and stupid, don't program at all; become a manager.

      • by Anonymous Coward

        Managed languages have a few problems.

        First of all, they're slow due to JIT compiling. "Wait, what?" I hear you ask. Isn't JITting supposed to be faster? That's what we've all been told, right? No. JITting is faster than interpretation, including bytecode interpretation. JITting is not faster than raw machine code. Period. The fact that it has to be JITted is overhead enough to make it slow when compared to something that needs no immediate compilation. Now, that's not to say it doesn't help in certain case

        • by lgw ( 121541 )

          The performance hit and the need for the runtime are a big problem in many ways. Compiling to native code is the right path to jumping those hurdles. For C#, .NET Native [microsoft.com] is quite interesting. I can think of so many ways MS could ruin this, but the prospect of a stand-alone exe compiled from C# is exciting.

      • by Anonymous Coward

        "system libraries can be written in any language you like"

        Let me know when your language of choice can make a system call without using assembly...

    • by Anonymous Coward

      I think it's a bit weak to blame the programming language for sloppy programming. Since when do we blame the tools for shoddy workmanship?

  • by Anonymous Coward

    Na-na-na-na-na.

    • by Anonymous Coward

      Rest assured, the Windows bugs are there. It's just that only malicious individuals and GOs know about them, and they ain't saying shit because they use them [or sell them].

    • by Anonymous Coward

      Yup. Good thing that no open-source libraries are available for Windows, right? And, I mean, Microsoft is perfect. Not like they release thousands of bug fixes for each iteration of their operating systems.

      Thank you for your infinite wisdom, AC.

  • by Animats ( 122034 )

    This is C code, the only major language that doesn't know how big its arrays are. After 30 years of buffer overflow bugs, you'd think we'd have something better by now for low-level programming. The higher-level stuff, where you can use garbage collection, is in good shape, with Java, C#, Python, etc. resistant to buffer overflow problems. C and C++, though, are still broken. C++ tries to paper over the problem with templates, but the mold (raw pointers) keeps seeping through the wallpaper.

    I've tried. Here'

    • by Greyfox ( 87712 )
      The programmers always know how big those arrays are. They're just lazy or bad in a variety of ways. It's easy enough to bound a read or copy to a specific size; they just never actually do. I've been on a couple of big C projects and a few smaller ones, and the programmers always know what they're working with in a specific function. The problem is never that they don't know how big that specific thing is. The problem is they make no effort to validate the size, or that the pointer's not null, or that someon
      • I haven't done any programming in years, but when I did, I worked in C. On every project that I worked on, we were expected to use strncpy() instead of just strcpy() to make sure that we didn't copy more bytes than we had room for. AFAIK, not one of those projects ever had a buffer overflow issue. Why this isn't the standard now is beyond me.
        • To be pedantic, strncpy() isn't just a limited version of strcpy(), and was designed for a different use (copying into a fixed-length area of memory). Always put the terminating '\0' at the end of the buffer to be safe.

          • Yes indeed. And, of course, make sure that if the maximum number of bytes are copied the \0 will still be there. Avoiding buffer overruns in C isn't rocket surgery, it's just a matter of common sense and good programming habits.
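            Concretely, the safe pattern the thread is circling around looks like this (a minimal sketch; assumes dstlen >= 1):

            #include <string.h>

            /* Copy src into dst (dstlen bytes total), always NUL-terminating.
               strncpy alone does not terminate dst when src fills the buffer. */
            void copy_str(char *dst, size_t dstlen, const char *src)
            {
                strncpy(dst, src, dstlen - 1);
                dst[dstlen - 1] = '\0';
            }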
    • by Anonymous Coward

      That "stricter version of C ... " is also slow.

      Runtime checks are already possible... but are slow to use. And the runtime checks STILL have to be implemented by something that doesn't have runtime checks...

      C is what it is. Good for implementing whatever you want. And that means it has to allow you to do things that shoot yourself in the foot.

      C is used to implement programs for things from an 8 bit processor used for controllers (see PIC for an example - some only have 256 bytes of ram, and 2K of rom; Think

    • Or... if you're using C++, use a Standard Library container? They all keep track of their sizes, and can perform runtime bounds checks (by using, say, at() rather than the bracket operator). Or write your own class with an array member and an accessor that does bounds checking. It's not difficult to do. At all. And templates aren't meant for addressing buffer overflow problems, or as a replacement for raw pointers (use STL containers). Or garbage collection (use RAII). I get the feeling you aren't too familia
  • by Anonymous Coward

    Should have used LibreSSL.
