Ask Slashdot: Linux Security, In Light of NSA Crypto-Subverting Attacks? 472

New submitter deepdive writes "I have a basic question: What is the privacy/security health of the Linux kernel (and indeed other FOSS OSes) given all the recent stories about the NSA going in and deliberately subverting various parts of the privacy/security sub-systems? Basically, can one still sleep soundly thinking that the most recent latest/greatest Ubuntu/OpenSUSE/what-have-you distro she/he downloaded is still pretty safe?"
  • No. (Score:4, Funny)

    by Anonymous Coward on Sunday September 08, 2013 @02:49PM (#44791833)

    I think there's even a law for this kind of reply...

    • by Jeremiah Cornelius ( 137 ) on Sunday September 08, 2013 @03:00PM (#44791915) Homepage Journal

      You cannot add security later.

      In Unix systems, there’s a program named “login”. login is the code that takes your username and password, verifies that the password you gave is the correct one for the username you gave, and if so, logs you in to the system.

      For debugging purposes, Thompson put a back-door into “login”. The way he did it was by modifying the C compiler. He took the code pattern for password verification, and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code that accepted either the correct password for the username, or Thompson’s special debugging password. In pseudo-Python:

          def compile(code):
              # pseudo-Python: if this looks like the login program, emit a
              # binary with the back door; otherwise compile normally
              if looksLikeLoginCode(code):
                  generateLoginWithBackDoor(code)
              else:
                  compileNormally(code)

      With that in the C compiler, any time anyone compiles login, the code generated by the compiler will include Thompson’s back door.

      Now comes the really clever part. Obviously, if anyone saw code like what’s in that example, they’d throw a fit. That’s insanely insecure, and any manager who saw it would immediately demand that it be removed. So, how can you keep the back door but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:

          def compile(code):
              # pseudo-Python: recognize login and emit the back door; also
              # recognize the compiler itself and emit the back-door inserter
              if looksLikeLoginCode(code):
                  generateLoginWithBackDoor(code)
              elif looksLikeCompilerCode(code):
                  generateCompilerWithBackDoorDetection(code)
              else:
                  compileNormally(code)

      What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back door code; and when it compiles the C compiler, it will insert the code that inserts the code into both login and the C compiler.

      Now, you compile the C compiler with itself – getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it’s in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into the new version.

      So you’ve now got a C compiler that inserts back-door code when it compiles itself – and that code appears nowhere in the source code of the compiler. It did exist in the code at one point – but then it got deleted. But because the C compiler is written in C, and always compiled with itself, each successive new version of the C compiler will pass along the back-door – and it will continue to appear in both login and in the C compiler, without any trace in the source code of either.

      http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ [scienceblogs.com]
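
      That bootstrap step can be restated in the same pseudo-Python. A minimal sketch of the idea described above; the names (current_cc_binary, pristine_source, and so on) are invented for illustration:

          # Step 1: build a compiler from source that visibly contains
          # the back-door logic, using any existing compiler binary.
          evil_binary = current_cc_binary.compile(source_with_backdoor_logic)

          # Step 2: revert to the pristine source; the back door is now
          # invisible in the text. Step 3: rebuild from pristine source
          # with the tainted binary...
          next_binary = evil_binary.compile(pristine_source)

          # ...and next_binary still carries the back door, because
          # evil_binary recognized it was compiling a compiler and
          # re-inserted the logic on its own.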

      • by Jeremiah Cornelius ( 137 ) on Sunday September 08, 2013 @03:01PM (#44791923) Homepage Journal

        Moral

        The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

        http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

        • by Anachragnome ( 1008495 ) on Sunday September 08, 2013 @04:21PM (#44792345)

          "The moral is obvious. You can't trust code that you did not totally create yourself...."

          I agree, but that doesn't really help us in the real world--writing our own code doesn't reasonably work out for most people. So, what's the solution to your dismal conclusion? Ferret out those that cannot be trusted--doing so is the closest we will ever come to being able to "trust the code".

          So, how does one go about ferreting out those that cannot be trusted? The Occupy Movement had almost figured it out, but wandered around aimlessly with nobody to point a finger at when they should have been naming names.

          The NSA has made it clear that making connections--following the metadata--is often enough to get an investigation started. So why not do the same thing? Turn the whole thing around? Start focusing on their networks. I can suggest a good starting point--the entities that train the "Future Rulers of the World" club. The "Consulting Firms" that are really training and placing their own agents throughout the global community. These firms are the world's real leaders--they have vast funding and no real limits on whom and where they exert influence. In my opinion, they literally decide who runs the world.

          Pay close attention to the people associated with these firms, the inter-relatedness of the firms and the other organizations "Alumni" end up leading. Pay very close attention to the technologies involved and the governments involved.

          Look through the lists of people involved, start researching them and their connections...follow the connections and you start to see the underlying implications of such associations. I'm not just talking about the CEO of Redhat (no, Linux is no more secure than Windows), but leaders of countries, including the US and Israel.

          http://en.wikipedia.org/wiki/Boston_Consulting_Group [wikipedia.org]

          http://en.wikipedia.org/wiki/McKinsey_and_Company [wikipedia.org]

          http://en.wikipedia.org/wiki/Bain_%26_Company [wikipedia.org]

          THIS is the 1%. These are the perpetrators of NSA surveillance, which furthers their needs...NOT yours. People with connections to these firms need to be removed from any position of power, especially in government. Their future actions need to be monitored by the rest of society, if for no other reason than to limit their power.

          As George Carlin once put it so well..."It's all just one big Club, and you are not in the fucking club."

        • by Trax3001BBS ( 2368736 ) on Sunday September 08, 2013 @06:15PM (#44793041) Homepage Journal

          Moral

          The moral is obvious. You can't trust code that you did not totally create yourself..... A well installed microcode bug will be almost impossible to detect.

          http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

          Are you and the submitter in on this one? Because the answer is a resounding NO.

          A well installed microcode bug will be almost impossible to detect.

          For people like me who didn't know: microcode is also known as firmware, a BIOS update, or "code in a device" http://en.wikipedia.org/wiki/Microcode [wikipedia.org]

          Ken Thompson's Acknowledgment

          I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

          (4.) Karger, P.A., and Schell, R.R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974, p 52.

          So in theory you can't even trust the code you write as your video card could change it.

          --
          If you aren't paranoid yet, just wait

        • http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

          quoting Ken Thompson
          I would like to criticize the press in its handling of the "hackers," the 414 gang

          God I guess...

          The 414s gained notoriety in the early 1980s as a group of friends and computer hackers who broke into dozens of high-profile computer systems, including ones at Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank.

          They were eventually identified as six teenagers, taking their name after the area code of their hometown of Milwaukee, Wisconsin. Ranging in age from 16 to 22, they met as members of a local Explorer Scout troop. The 414s were investigated and identified by the FBI in 1983. There was widespread media coverage of them at the time, and 17-year-old Neal Patrick, a student at Rufus King High School, emerged as spokesman and "instant celebrity" during the brief frenzy of interest, which included Patrick appearing on the September 5, 1983 cover of Newsweek.

          September 5, 1983 cover of Newsweek
          http://mimg.ugo.com/201102/0/6/5/175560/cuts/4c6de9daa1c16-23680n_480x480.jpg [ugo.com]

          Text from http://en.wikipedia.org/wiki/The_414s [wikipedia.org]

        • by Burz ( 138833 )

          A person would have to be absolutely arrogant to trust themselves alone to effect a secure environment. No one is that good, unless we are talking about "secure" systems that are essentially non-functional.

          That's why we have communities of open source developers. Many minds and eyeballs enable a more comprehensive view of security, especially when they are watching changes incrementally accumulate. I think it is much harder to get even subtly surreptitious malware past developers this way.

          The other way, you

          • by smash ( 1351 )
            I think you misunderstand the premise. You can trust code you write yourself not to be concealing deliberately malicious intent. It may still be INSECURE, but you can at least be sure of the INTENT of code you write yourself. This isn't the case with third-party software.
      • by dalias ( 1978986 ) on Sunday September 08, 2013 @03:14PM (#44792017)
        Fortunately there is an effective counter-measure: http://www.dwheeler.com/trusting-trust/ [dwheeler.com]
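
        For the curious, Wheeler's countermeasure ("diverse double-compiling") can be sketched in the same pseudo-Python used above. A minimal sketch of the idea, with invented names, assuming deterministic builds:

            # suspect_cc and trusted_cc are callables that turn compiler
            # source into an executable (callable) compiler.
            def ddc_check(compiler_source, suspect_cc, trusted_cc):
                stage2_suspect = suspect_cc(compiler_source)(compiler_source)
                stage2_trusted = trusted_cc(compiler_source)(compiler_source)
                # Both stage-2 binaries were built from the same source by
                # compilers that should be functionally equivalent, so if
                # neither chain is subverted they must match bit-for-bit;
                # a mismatch exposes a trojaned compiler.
                return stage2_suspect == stage2_trusted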
      • by gl4ss ( 559668 ) on Sunday September 08, 2013 @03:29PM (#44792111) Homepage Journal

        Write your own login, and use a differently-sourced compiler to compile the compiler..

        Anyway, can we stop posting this same thing in every fucking security story already? Put it in a sig or something.

      • Re: (Score:3, Informative)

        Very pretty example, but badly flawed. Thanks to login being open source, and the abundance of de-compilers available from independent sources, shenanigans such as this can be readily detected by comparing the de-compiled code against the freely available source code and noting significant variations, specifically blocks of additional logic not included in the source. While behaviour like that illustrated would go unnoticed in the closed-source (Windows) world, and very likely does, it doesn't wash in the FOSS community.
        • it doesn't wash in the FOSS community.

          Nice try.

          A laughable comment at best. The FOSS community does not have an army of people running around decompiling binaries just to check whether they match the published source. This is significantly less useful than the argument that FOSS doesn't contain back doors because you can look at the source. Just a tip: the vast majority of users don't.

          The vast majority of developers do look at source, but, as I said, they don't routinely get up in the morning and decompile published binaries.

      • Isn't the compiler software?
        And doesn't the compiler target an architecture?
        And isn't that architecture rife with microcode you never see?
      • Re: (Score:3, Insightful)

        by Mathinker ( 909784 )

        Ken Thompson's theoretical attack against the Unix ecosystem was only practical because, at the time, he controlled a major portion of binary distribution and simultaneously a major portion of the information which could be used to defeat the attack, that being compiler technology. Nowadays, there are tons of different, competing compilers and systems for code rewriting, any of which can be used to "return trust" to a particular OS's binary ecosystem (if someone would take the time and effort to actually do it).

  • by msobkow ( 48369 ) on Sunday September 08, 2013 @02:50PM (#44791835) Homepage Journal

    The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et al.

    The big concern is back doors built into distributed binaries.

    • by msobkow ( 48369 ) on Sunday September 08, 2013 @02:53PM (#44791859) Homepage Journal

      Another one that concerns me is Chrome, which on Ubuntu insists on unlocking my keystore to access stored passwords. I'd much rather have a browser store its passwords in its own keystore, not my user account keystore. After all, once you've granted access to the keystore, any key can be retrieved.

      And, in the case of a browser, you'd never notice that your keys are being uploaded.

      • In the Apple Keychain Access app, access to each key is restricted to a list of applications set by the user. You are allowed to grant all applications access to a particular key, however.

      • by Zumbs ( 1241138 ) on Sunday September 08, 2013 @02:57PM (#44791889) Homepage
        Then why do you use Chrome? Pulling stunts like that would make me uninstall a program in a heartbeat ...
        • by msobkow ( 48369 )

          I don't. I use Firefox because it doesn't ask for access to my default keystore.

          Not that I keep any keys in the default keystore anyhow. I just don't like the behaviour of Chrome in this regard.

          Why would any sane person want to unlock their whole wallet just for a freaking browser?

          • by pthisis ( 27352 )

            It only unlocks the wallet for the user it's running as, it doesn't have crazy admin privileges.

            If you care about security, you're already running the browser as a restricted user anyway--even if you did stupidly share passphrases between wallets (or accidentally mistype the wrong passphrase into the browser unlock window) it still shouldn't have FS permission to your primary wallet.

            Plus you can run Chromium if you want to be able to audit the source, presuming you don't think someone's Ken Thompson'd Chromium.

      • Re: (Score:2, Informative)

        Much better to use LastPass or what-have-you instead of the Chrome keystore, IMHO. For one thing, you're right about separating it from your user account keystore, but the Chrome keystore is also pretty insecure. LastPass makes a point of this during installation: once you've OK'd the install, it's able to silently access all your passwords.
        • by Anonymous Coward on Sunday September 08, 2013 @06:35PM (#44793127)

          why do people keep suggesting to use lastpass?

          Seriously!

          You don't want Chrome to have access to all your keys, but you're quite happy to fucking upload them to some server run by some random fucking mouth breather in some fucking country you don't know.

      • by gweihir ( 88907 )

        Chrome is spyware, what do you expect? Its very purpose is to get all your data.

    • by AlphaWolf_HK ( 692722 ) on Sunday September 08, 2013 @02:58PM (#44791897)

      Eventually you have to draw the line somewhere with regard to where you stop trusting. If the Linux kernel sources themselves contained a backdoor, I would be none the wiser, and neither would most of the world. Some of us have very little interest in coding, let alone picking through millions of lines of it to look for that kind of thing. And then of course there's syntactic ways of hiding backdoors that even somebody looking for one might miss.

      • You do, but if you're that worried, there's always TrueCrypt and KeePassX. If you keep the database in a TrueCrypt-encrypted partition, the NSA can't get at it within any reasonable period of time. You can also ditch KeePassX and just store it as plain text in the encrypted partition, but that's not very convenient.

      • by ImdatS ( 958642 )

        Mod up the parent!

        Yes, that's actually my concern all the time. Of course, with open source, you could technically check the source of the system you are using. But then, you'd need to check every line of code, thinking exactly like the NSA (or what-not), in every piece of software you use, including the compiler you use and the compiler that compiled that compiler, etc., etc.

        Additionally, you'd need to check the source of all the HW components that come with their own BIOS, including the system's BIOS, networking chips...

      • by Smallpond ( 221300 ) on Sunday September 08, 2013 @04:03PM (#44792267) Homepage Journal

        There was an attempt to backdoor the kernel [lwn.net] a few years back. I don't believe the perpetrators were ever revealed.

      • by budgenator ( 254554 ) on Sunday September 08, 2013 @05:09PM (#44792645) Journal

        One of our advantages is that I'm sure the Russians don't want NSA backdoors in Linux, the NSA doesn't want Russian backdoors in Linux, and neither wants Chinese backdoors; similarly, the Chinese want neither NSA nor Russian backdoors. After all of this "Spy vs. Spy", Linux is unlikely to have backdoors. If your requirements are so great that "unlikely" isn't good enough, you're probably shit outa luck, because nothing will be good enough for you.

    • The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et. al.

      The big concern is back doors built into distributed binaries.

      So what is the practical difference between a "back door" and a security vulnerability anyway? Both remain hidden until found, and both can easily result in total ownage of the (sub)system.

      History demonstrates the "open source" community is not immune to the injection of "innocent" security vulnerabilities into open source projects by way of human error. I find it illogical to assume intentional vulnerabilities would be detectable in source code when we have failed to detect innocent ones.

      And as for y

    • by BitterOak ( 537666 ) on Sunday September 08, 2013 @03:16PM (#44792035)

      The big concern is back doors built into distributed binaries.

      And what about the hardware? And how can you be sure the compilers aren't putting a little something extra into the binaries? There are so many places for NSA malware to hide that it's scary. It could be in the BIOS, in the keyboard or graphics firmware, or in the kernel, placed there by a malicious compiler. It could be added to the kernel if some other trojan horse is allowed to run. And just because the kernel, etc. are open source doesn't mean they have perfect security. The operating system is incredibly complex, and all it takes is one flaw in one piece of code with root privileges (or without, if a local privilege escalation vulnerability exists anywhere on the system, which it surely does), and that can be exploited to deliver a payload into the kernel (or BIOS, or something else). Really, if the NSA wants to see what you're doing on your Linux system, rest assured, they can.

  • I never understood why people go for AES. Clearly, if the NSA recommends it, in my view it is something to be avoided (I personally go for Twofish instead). In Ubuntu, ecryptfs uses AES by default, so I would not trust that.

    • Re:AES (Score:5, Informative)

      by Digana ( 1018720 ) on Sunday September 08, 2013 @03:06PM (#44791953)
      The last time the NSA weakened an algorithm they recommended was by shortening the key for DES. Snowden confirms that properly implemented crypto still works, and Rijndael (AES) still seems strong. The problem isn't the algorithms, because the mathematics still check out. The thing to fear is the implementations. Any implementation whose source we are not free to inspect is suspect.
      • Re:AES (Score:4, Informative)

        by cold fjord ( 826450 ) on Sunday September 08, 2013 @03:17PM (#44792045)

        The last time that the NSA weakened an algorithm they recommended was by shortening the key for DES.

        Minor correction: They strengthened the DES algorithm by substituting a new set of S-boxes, which protected against an attack that wasn't publicly known at the time. They also shortened the key space, which made it more susceptible to brute-forcing the key. Full-strength DES held up very well against attacks overall, until its key length became a problem. It lasted much longer in use than intended.

        I seem to recall that DES was never approved for protecting classified data, but that AES does have that approval.
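
        To put the key-length point in numbers, a back-of-the-envelope sketch (the keys-per-second rate is an assumption, noted in the comment):

            # Back-of-the-envelope: exhaustive search of the DES key space.
            keys = 2 ** 56              # 56-bit DES key space
            rate = 1e9                  # assumed: one machine testing 1e9 keys/s
            print(round(keys / rate / 86400), "days")  # ~834 days for a full sweep

        For comparison, the EFF's purpose-built Deep Crack machine recovered a DES key in about 56 hours in 1998, which is what finally made the short key untenable.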

    • AES consists of well-studied algorithms. Whether or not the NSA recommends it, it's still known to be secure by independent researchers. From what I understand, the only breaks are marginally better than brute force, and not likely to result in the data becoming available within a useful period of time.
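
      As a concrete illustration of using AES through a vetted, inspectable implementation, here is a minimal sketch. It assumes the third-party pyca/cryptography package (an assumption on my part, not something the parent mentions):

          import os
          from cryptography.hazmat.primitives.ciphers.aead import AESGCM

          key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
          aesgcm = AESGCM(key)
          nonce = os.urandom(12)  # 96-bit nonce; must never repeat per key
          ct = aesgcm.encrypt(nonce, b"attack at dawn", None)
          assert aesgcm.decrypt(nonce, ct, None) == b"attack at dawn"

      The algorithm is public and well studied; what you actually have to trust is the implementation, which here is at least open to inspection.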

      • Re:AES (Score:5, Funny)

        by greenfruitsalad ( 2008354 ) on Sunday September 08, 2013 @03:20PM (#44792057)

        if the whole world goes for one cipher, then nsa can concentrate on creating and improving a single ASIC design for breaking it. we should be using hundreds of different algorithms. then they'd have to design hundreds of types of ASICs, build 100x more datacentres, increase taxation in USofA to 10x what it is now, yanks would rebel and overthrow that government and then there would be no more evil NSA. simples

  • If it is off (Score:5, Insightful)

    by morcego ( 260031 ) on Sunday September 08, 2013 @02:55PM (#44791869)

    You can sleep soundly if your computer is off and/or unplugged. Otherwise, you should always be on your guard.

    Keep your confidential data behind multiple levels of protection, and preferably disconnected when you are not using it. Never trust anything that is marketed as 100% safe. There will always be bugs to be exploited, if nothing else.

    A healthy level of paranoia is the best security tool...

    • by Chemisor ( 97276 ) on Sunday September 08, 2013 @04:37PM (#44792429)

      10,000 laptops are stolen at airports every year. Presumably, they are off when that happens.

      The NSA is not your problem; you are not important enough to be a target. When thinking about security, thieves are your problem. Theft happens, and happens often. Your computer is far more likely to get stolen than to be infiltrated by the NSA. And the solution is to encrypt your hard drive. Without encryption, the thief will have access to everything you normally access from the computer - like your bank account. You wouldn't want that, would you? Today's CPUs all have AES-NI support, so there is no excuse for not encrypting your laptop's hard drive. Do it today and get some financial peace of mind.
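
      A minimal sketch of the AES-NI check on Linux, in Python (it just reads the CPU flags from /proc/cpuinfo; "aes" is the standard feature-flag name):

          # Report whether the CPU advertises the AES-NI instructions.
          with open("/proc/cpuinfo") as f:
              flags = next((line for line in f if line.startswith("flags")), "")
          print("AES-NI supported:", "aes" in flags.split())

      If the flag is there, full-disk encryption typically costs very little throughput on a modern machine.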

      • by Dunbal ( 464142 ) * on Sunday September 08, 2013 @05:27PM (#44792747)

        you are not important enough to be a target.

        Wrong. You may become important in the future. So you are important enough to target. They are collecting data on everyone, and holding on to it. They just might not be actively going through all the data from everyone (or they might be, if they have enough computing power). But if it's recorded it doesn't really matter if they do it today or in 20 years. They've got you. "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." --Richelieu

  • Yes. (Score:2, Insightful)

    by Anonymous Coward

    You have to trust the integrity of Linus and the core developers.

    If any of them let in such major flaws they would be found out fairly quickly... and that would destroy the reputation of the subsystem leader, and he would be removed.

    Having the entire subsystem subverted would cause bigger problems, but more likely the entire subsystem would be reverted. This has happened in the past; most recently, the entire set of changes made for Android was rejected en masse. Only small, internally compatible changes were accepted.

  • by dryriver ( 1010635 ) on Sunday September 08, 2013 @03:03PM (#44791931)
    or "Privacy" anymore. Perhaps there hasn't been for the last decade or so. We just didn't know at the time. ---- Enjoy your 21st Century. As long as people fail to defend their basic rights, there will not be such a thing as "security" or "privacy" again. My 2 Cents...
  • Linux and RdRand (Score:5, Informative)

    by Digana ( 1018720 ) on Sunday September 08, 2013 @03:04PM (#44791943)

    There was recently a bit of a kerfuffle over RdRand [cryptome.org].

    Matt Mackall, kernel hacker and Mercurial lead dev, quit Linux development two years ago because Linus insulted him repeatedly. Linus called Matt a paranoid idiot because Matt would not allow RdRand into the kernel, since it is an Intel CPU instruction for random numbers that cannot be audited. Linus thought Matt's paranoia was unwarranted and wanted RdRand for its improved performance. Recently Theodore Ts'o has undone most of the damage, but calls to RdRand still exist in Linux. I do not understand exactly whether there are lingering issues or not.
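
    The fix Ts'o applied is, roughly, to stop treating RdRand as the sole source and instead mix it into the pool. A minimal sketch of the XOR-mixing principle (an illustration only, not the actual kernel code):

        import os

        def mixed_output(nbytes, rdrand):
            # rdrand is a callable standing in for the untrusted hardware
            # source; os.urandom stands in for the rest of the pool.
            pool = os.urandom(nbytes)
            hw = rdrand(nbytes)
            # XORing in a possibly-backdoored source cannot make the result
            # more predictable than the independent pool output alone.
            return bytes(p ^ h for p, h in zip(pool, hw))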

    • by Greyfox ( 87712 ) on Sunday September 08, 2013 @03:07PM (#44791961) Homepage Journal
      Yeah yeah and I'm having to go through the last couple years of E-mails and tell the various paranoid whackos, slightly demented old relatives and that one guy with the tinfoil that they were right and I was wrong. How do you think that makes ME feel?
  • It's sad, but you can't trust any mainstream Linux distro created by a US company, and you likely can't trust any created in other countries either. I'm not saying that as a pro-Windows troll, because you can trust MS's efforts even less.

    I believe you can trust OpenBSD totally but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure though, and the man pages are actually better than any Linux distro I've ever seen.

    The possibly bigger problem is that no matter what OS you use, you can't trust SSL's broken certificate system either, because the public certificate authorities are corruptible. And before someone says create your own CA: sure, for internal sites, but you can't do that for someone else's website.

    • by Noryungi ( 70322 ) on Sunday September 08, 2013 @03:31PM (#44792117) Homepage Journal

      I believe you can trust OpenBSD totally but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure though, and the man pages are actually better than any Linux distro I've ever seen.

      Three points:

      1) See the above discussion: you cannot trust anything that you did not create and compile yourself. With a compiler you wrote yourself. On a machine you built yourself from the ground up, that is not connected to any network in any way. OpenBSD makes no difference if your compiler or toolchain is compromised.

      2) Speaking of which, I cannot help but note that OpenBSD had a little kerfuffle a while back about a backdoor allegedly planted by the FBI in the OS (Source 1 [schneier.com]) (Source 2 [cryptome.org]). I am willing to bet (a) that it's perfectly possible (though not likely), (b) that if it was done, it was not by the FBI, and (c) that the devs @openbsd.org are, right now, taking another long and hard look at the incriminated code.

      3) Finally OpenBSD lacking features and convenience? Care to support that statement? I have a couple of computers running OpenBSD here, and they are just as nice - or even nicer - to use than any Linux. Besides, you don't choose OpenBSD for convenience - you use it for its security. Period.

      The possibly bigger problem is that no matter what OS you use you can't trust SSL's broken certificate system either because the public certificate authorities are corruptible. And before someone says create your own CA, sure, for internal sites, but you can't do that for someone else's website.

      This goes way beyond a simple question of OpenSSL certificates - think OpenSSH and VPN security being compromised, and you will have a small idea of the sh*tstorm brewing right now.

  • by Todd Knarr ( 15451 ) on Sunday September 08, 2013 @03:12PM (#44791999) Homepage

    It's possible the NSA did something bad to the code, but it's not likely and it won't last.

    For the "not likely" part, code accepted into Linux projects tends to be reviewed. The NSA can't be too obvious about any backdoors or holes they try to put in, or at least one of the reviewers is going to go "Hey, WTF is this? That's not right. Fix it.". and the change will be rejected. That's even more true with the kernel itself where changes go through multiple levels of review before being accepted and the people doing the reviewing pretty much know their stuff. My bet would be that the only thing that might get through would be subtle and exotic modifications to the crypto algorithms themselves to render them less secure than they ought to be.

    And that brings us to the "not going to last" part. Now that the NSA's trickery is known, the crypto experts are going to be looking at crypto implementations. And all the source code for Linux projects is right there to look at. If a weakness were introduced, it's going to be visible to the experts and it'll get fixed.

    That leaves only the standard external points of attack: the NSA getting CAs to issue it valid certificates with false subjects so it can impersonate sites and servers; encryption standards that permit "null" (no encryption) as a valid option, allowing the NSA to tweak servers to disable encryption entirely; that sort of thing. There's no technical solution to those, but they're easier to monitor for.
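
    One way to monitor for the CA attack is certificate pinning: record a known-good fingerprint out-of-band and compare it on every connection. A minimal sketch using Python's standard ssl module (the pinned value is a placeholder, and this check alone is not a substitute for full TLS validation):

        import hashlib, ssl

        PINNED_SHA256 = "..."  # hex fingerprint recorded out-of-band

        def server_matches_pin(host, port=443):
            pem = ssl.get_server_certificate((host, port))
            der = ssl.PEM_cert_to_DER_cert(pem)
            return hashlib.sha256(der).hexdigest() == PINNED_SHA256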

  • Pointless Worrying (Score:4, Insightful)

    by Luthair ( 847766 ) on Sunday September 08, 2013 @03:13PM (#44792007)
    The NSA doesn't really need to have backdoors written into our systems; they have a lot of exploits in their bag of tricks that they've bought or found. Unfortunately, the NSA only needs to find one exploit, but for truly secure systems we need to find and fix them all :/
  • by hedrick ( 701605 ) on Sunday September 08, 2013 @03:26PM (#44792093)

    No, but there's no reason to think that Linux is worse than anything else, and it's probably easier to fix.

    If I were Linus, I'd be putting together a small team of people who have been with Linux for years to begin assessing things. From Gilmore's posting it seems clear that IPsec and VPN functionality will need major change. Other things to audit include the crypto libraries, both in Linux and in the browsers, and the random number generators.

    But certainly some examination of SELinux and other portions are also needed.

    I don't see how anyone can answer the original question without doing some serious assessment. However, I'm a bit skeptical whether this problem can actually be fixed at all. We don't know what has been subverted, or what level of access the NSA and its equivalents in other countries have had to the code and algorithm design. They probably have access to more resources than the Linux community does.

    • The good news is that for linux, this can, in theory, be audited.

      For Windows...no. Not a hope. None. At all. Likewise OSX.

      Which means that any and every government that might possibly have a future dispute with the US is, right now, going over all the Windows servers and desktops in their military, diplomatic, and intelligence services to see how much they can replace.

      It'll take months just to write up the reports, and months more to run them through the political committees, and even then it'll be very undiplomatic...

  • by rueger ( 210566 ) on Sunday September 08, 2013 @03:37PM (#44792153) Homepage
    We are being told - and some of us suspected as much for a very long time - that the NSA & Co. track everything we do, and have the ability to decrypt much of what we think is secure, whether through brute force, exploits, backdoors, or corporate collusion.

    Surely we should also assume that there are other criminal and/or hacker groups with the resources or skills to gain similar access? Another case of "once they know it can be done, you can't turn back."

    I honestly believe that we're finally at the point where the reasonable assumption is that nothing is secure, and that you should act accordingly.
  • by cold fjord ( 826450 ) on Sunday September 08, 2013 @03:42PM (#44792181)

    I think that depends on what keeps you up at night.

    In one of the earlier stories today there was a post making all sorts of claims about compromised software and bad actors, pointing to this paper: A Cryptographic Evaluation of IPsec [schneier.com]. I wonder if anyone bothered to read it?

    IPsec was a great disappointment to us. Given the quality of the people that worked on it and the time that was spent on it, we expected a much better result. We are not alone in this opinion; from various discussions with the people involved, we learned that virtually nobody is satisfied with the process or the result. The development of IPsec seems to have been burdened by the committee process that it was forced to use, and it shows in the results. Even with all the serious criticisms that we have on IPsec, it is probably the best IP security protocol available at the moment. We have looked at other, functionally similar, protocols in the past (including PPTP [SM98, SM99]) in much the same manner as we have looked at IPsec. None of these protocols come anywhere near their target, but the others manage to miss the mark by a wider margin than IPsec.

    I even saw calls for the equivalent of mole hunts in the open source software world. What could possibly go wrong?

    Criminals, vandals, and spies have been targeting computers for a very long time. Various types of security problems have been known for 40 years or more, yet they either persist or are reimplemented in interesting new ways with new systems. People make a lot of mistakes in writing software, and managing their systems and sites, and yet the internet overall works reasonably well. Of course it still has boatloads of problems, including both security and privacy issues.

    Frankly, I think you have much more to worry about from unpatched buggy software, poor configuration, unmonitored logs, lack of firewalls, crackers or vandals, and the usual problems sites have than from a US national intelligence agency. That is assuming you and 10 of your closest friends from Afghanistan aren't planning to plant bombs in shopping malls, or trying to steal the blueprints for the new antitank missiles. Something to keep in mind is that their resources are limited, and they have more important things to do unless you make yourself important enough for them to look at. If you do make yourself that important, a "secure" computer won't stop them. You should probably worry more about ordinary criminal hackers, vandals, and automated probe/hack attacks.

  • "pretty safe?" (Score:5, Insightful)

    by bill_mcgonigle ( 4333 ) * on Sunday September 08, 2013 @03:52PM (#44792217) Homepage Journal

    Yes, it's "pretty safe". It's not absolutely safe or guaranteed to be safe. But if your other alternative is a hidden-source OS, especially one in US jurisdiction, then OSS is "pretty safe."

  • by FudRucker ( 866063 ) on Sunday September 08, 2013 @04:46PM (#44792503)
    They destroyed my trust in anything. I don't trust any operating system or software anymore, and I don't trust the internet or any encryption method. The US Govt and all its elements have been proven to be a criminal gang of fascist kleptocratic totalitarian warmongering pigs.
  • Remember this? (Score:5, Interesting)

    by Voline ( 207517 ) on Sunday September 08, 2013 @06:41PM (#44793163)
    Remember this [slashdot.org]? In December 2010 there was a scandal when a developer who had previously worked on OpenBSD wrote to Theo de Raadt and claimed that the FBI had paid the company he had been working with at the time, NETSEC Inc (since absorbed by Verizon), to insert a backdoor [linuxjournal.com] into the OpenBSD IPSEC stack. They particularly pointed to two employees of NETSEC who had worked on OpenBSD's cryptographic code, Jason Wright and Angelos Keromytis. In typical open-source fashion, de Raadt published [marc.info] the letter on an OpenBSD mailing list. After the team began a code audit, de Raadt wrote [marc.info],

    "After Jason left, Angelos (who had been working on the ipsec stack alreadyfor 4 years or so, for he was the ARCHITECT and primary developer of the IPSEC stack) accepted a contract at NETSEC and (while travelling around the world) wrote the crypto layer that permits our ipsec stack to hand-off requests to the drivers that Jason worked on. That crypto layer contained the half-assed insecure idea of half-IV that the US govt was pushing at that time. Soon after his contract was over this was ripped out. ...

    "I believe that NETSEC was probably contracted to write backdoors as alleged."

    I'd like to find a more recent report of what they found.

  • by caseih ( 160668 ) on Monday September 09, 2013 @01:17AM (#44794799)

    Over the years the NSA has contributed what seemed like positive things to computer security in general, and Linux specifically. They have helped correct some algorithms to make them more secure, and implemented things like SELinux.

    However, now that their other actions and intentions have been starkly revealed, any and all things the NSA does (and has done) are now cast into deep doubt. This is unfortunate, because the NSA has a lot of really smart cryptographers and mathematicians who could greatly contribute to information security.

    Now, however, their ability to contribute in any positive way to the open source community, or even to the industry at large, is gone forever. No one will trust them again. A sad loss for them, but also a potential loss for everyone. Nothing will quite be the same from here on out. And in the long run, without the help of smart, honest mathematicians and cryptographers, our security across the board will suffer. It's not that the revelations caused the damage, but that the NSA sabotaged things. Shame on them. Kudos to Snowden for helping us learn the extent of the damage.
