
Ask Slashdot: Linux Security, In Light of NSA Crypto-Subverting Attacks?

New submitter deepdive writes "I have a basic question: what is the privacy/security health of the Linux kernel (and indeed other FOSS OSes), given all the recent stories about the NSA deliberately subverting various parts of the privacy/security subsystems? Basically, can one still sleep soundly thinking that the latest and greatest Ubuntu/openSUSE/what-have-you distro he or she downloaded is still pretty safe?"
  • by msobkow ( 48369 ) on Sunday September 08, 2013 @03:50PM (#44791835) Homepage Journal

The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et al.

    The big concern is back doors built into distributed binaries.

  • by msobkow ( 48369 ) on Sunday September 08, 2013 @03:53PM (#44791859) Homepage Journal

Another one that concerns me is Chrome, which on Ubuntu insists on unlocking my keystore to access stored passwords. I'd much rather have a browser store its passwords in its own keystore, not my user account keystore. After all, once you've granted access to the keystore, any key can be retrieved.

    And, in the case of a browser, you'd never notice that your keys are being uploaded.

  • by AlphaWolf_HK ( 692722 ) on Sunday September 08, 2013 @03:58PM (#44791897)

Eventually you have to draw the line somewhere with regard to where you stop trusting. If the Linux kernel sources themselves contained a backdoor, I would be none the wiser, and neither would most of the world. Some of us have very little interest in coding, let alone picking through millions of lines of it to look for that kind of thing. And then of course there are syntactic ways of hiding backdoors that even somebody looking for one might miss.
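
    For instance (an illustrative sketch, not from the parent comment; check_hash and locked are hypothetical helpers), a backdoor can hide in nothing more than operator precedence:

        def login_ok(user, password, shadow):
            # Reads like "password is right and the account isn't locked,
            # or it's the sync user and...".  But `and` binds tighter
            # than `or`, so this is really:
            #   (check_hash(...) and not locked(user)) or user == "sync"
            # i.e. the "sync" user skips the password check entirely.
            return check_hash(password, shadow[user]) and not locked(user) \
                or user == "sync"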

  • by Jeremiah Cornelius ( 137 ) on Sunday September 08, 2013 @04:00PM (#44791915) Homepage Journal

You cannot add security later.

In Unix systems, there's a program named "login". login is the code that takes your username and password, verifies that the password you gave is the correct one for the username you gave, and if so, logs you in to the system.

    For debugging purposes, Thompson put a back door into "login". The way he did it was by modifying the C compiler. He took the code pattern for password verification and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code that accepted either the correct password for the username or Thompson's special debugging password. In pseudo-Python:

def compile(code):
            if (looksLikeLoginCode(code)):
                generateLoginWithBackDoor(code)
            else:
                compileNormally(code)

With that in the C compiler, any time anyone compiles login, the code generated by the compiler will include Thompson's back door.

    Now comes the really clever part. Obviously, if anyone saw code like what's in that example, they'd throw a fit. That's insanely insecure, and any manager who saw it would immediately demand that it be removed. So, how can you keep the back door, but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:

        def compile(code):
            if (looksLikeLoginCode(code)):
                generateLoginWithBackDoor(code)
            elif (looksLikeCompilerCode(code)):
                generateCompilerWithBackDoorDetection(code)
            else:
                compileNormally(code)

What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back-door code; and when it compiles the C compiler, it will insert the code that inserts the code into both login and the C compiler.

    Now, you compile the C compiler with itself, getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it's in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into the new version.

    So you've now got a C compiler that inserts back-door code when it compiles itself, and that code appears nowhere in the source code of the compiler. It did exist in the code at one point, but then it got deleted. But because the C compiler is written in C, and always compiled with itself, each successive new version of the C compiler will pass along the back door, and it will continue to appear in both login and in the C compiler, without any trace in the source code of either.

    http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ [scienceblogs.com]

  • by roscocoltran ( 1014187 ) on Sunday September 08, 2013 @04:00PM (#44791919)
I can't help but feel scared by this SELinux thing. You can tell me a hundred times that the code was reviewed (was it?); I still won't trust it. I'd like to be sure that just disabling it altogether is enough to stop it completely from... I don't know, opening backdoors? C'mon, NSA code in the kernel?
  • by Anonymous Coward on Sunday September 08, 2013 @04:17PM (#44792041)

This argument is much, much too complicated. Plus, the back door can indeed be tracked down in the compiler binary: compiling the compiler with an unrelated compiler will remove the malware from the compiler binary. You can use a really slow one for this effort, as you only need to use it once.
    In reality, there are more than enough bugs of the "ping of death" style that can be used instead. Read "Confessions of a Cyber Warrior".
    The worst thing Bell Labs brought into this world was the C and C++ languages and the associated programming style: char* pointers, the possibility of uninitialized pointers, and so on.
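
    (Illustrative aside: that "unrelated compiler" check is David A. Wheeler's diverse double-compiling. A minimal sketch in the same pseudo-Python as the post above, with a hypothetical compile_with helper:)

        def diverse_double_compile(src, suspect_bin, trusted_cc):
            # Stage 1: build the suspect compiler's source with an
            # unrelated, trusted compiler. Different bits, but the same
            # behavior -- if the source is honest.
            stage1 = compile_with(trusted_cc, src)
            # Stage 2: rebuild the same source with the stage-1 result.
            stage2 = compile_with(stage1, src)
            # If suspect_bin really came from self-compiling src (and the
            # build is deterministic), stage2 must be bit-identical to it.
            # A trusting-trust back door shows up here as a mismatch.
            return stage2 == suspect_bin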

If Bell Labs had not foisted C and C++ on this world for "free", the government would have had to invent something else to make their "cyber war space" possible. Wait, Bell Labs WAS the government.

If that's not enough, a single buffer overflow in Firefox or Acrobat Reader can trigger something like the Pentium F00F bug, and then they OWN THE CPU. Your stinking sandbox is wholly irrelevant at that point.

    Go figure, sucker. Me, I am a C and C++ software engineering sucker, too.

  • by hedrick ( 701605 ) on Sunday September 08, 2013 @04:26PM (#44792093)

    No, but there's no reason to think that Linux is worse than anything else, and it's probably easier to fix.

If I were Linus, I'd be putting together a small team of people who have been with Linux for years to begin assessing things. From John Gilmore's posting, it seems clear that the IPsec and VPN functionality will need major changes. Other things to audit include the crypto libraries, both in Linux and in the browsers, and the random number generators.

    But certainly some examination of SELinux and other portions are also needed.

I don't see how anyone can answer the original question without doing some serious assessment. However, I'm a bit skeptical whether this problem can actually be fixed at all. We don't know what things have been subverted, or what level of access the NSA and their equivalents in other countries have had to the code and algorithm design. They probably have access to more resources than the Linux community does.

  • by gl4ss ( 559668 ) on Sunday September 08, 2013 @04:29PM (#44792111) Homepage Journal

Write your own login, and use a compiler from a different source to compile the compiler.

    Anyway, can we stop posting this same story to every fucking security story already? Put it in a sig or something.

  • by Noryungi ( 70322 ) on Sunday September 08, 2013 @04:31PM (#44792117) Homepage Journal

I believe you can trust OpenBSD totally, but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure, though, and the man pages are actually better than those of any Linux distro I've ever seen.

    Three points:

    1) See the above discussion: you cannot trust anything that you did not create and compile yourself. With a compiler you wrote yourself. On a machine you created yourself from the ground up, that is not connected to any network in any way. OpenBSD does not make any difference if your compiler or toolchain is compromised.

2) Speaking of which, I cannot but note that OpenBSD had a little kerfuffle a while back about a backdoor allegedly planted by the FBI in the OS (Source 1 [schneier.com]) (Source 2 [cryptome.org]). I am willing to bet that (a) it's perfectly possible (though not likely), (b) if it was done, it was not by the FBI, and (c) the devs @openbsd.org are, right now, taking another long and hard look at the incriminated code.

3) Finally, OpenBSD lacking features and convenience? Care to support that statement? I have a couple of computers running OpenBSD here, and they are just as nice - or even nicer - to use than any Linux. Besides, you don't choose OpenBSD for convenience - you use it for its security. Period.

The possibly bigger problem is that no matter what OS you use, you can't trust SSL's broken certificate system either, because the public certificate authorities are corruptible. And before someone says "create your own CA": sure, for internal sites, but you can't do that for someone else's website. (One partial workaround, certificate pinning, is sketched below.)

    This goes way beyond a simple question of OpenSSL certificates - think OpenSSH and VPN security being compromised, and you will have a small idea of the sh*tstorm brewing right now.
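
    (Illustrative aside: pinning means deciding out of band which certificate you expect and refusing anything else, no matter which CA signed it. A minimal Python sketch; EXPECTED is a hypothetical placeholder fingerprint you'd record in advance:)

        import hashlib
        import socket
        import ssl

        EXPECTED = "0000...0000"  # hypothetical pinned SHA-256 of the server cert (hex)

        def pinned_connect(host, port=443):
            # Normal TLS handshake, with the usual CA validation...
            ctx = ssl.create_default_context()
            sock = socket.create_connection((host, port))
            tls = ctx.wrap_socket(sock, server_hostname=host)
            # ...then refuse the connection unless the server presented
            # exactly the certificate we pinned, whatever any CA says.
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != EXPECTED:
                tls.close()
                raise ssl.SSLError("certificate does not match pinned fingerprint")
            return tls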

  • Re:OpenBSD (Score:3, Interesting)

    by Predius ( 560344 ) <.josh.coombs. .at. .gmail.com.> on Sunday September 08, 2013 @04:33PM (#44792133)

Even that's no good if the problem is flaws in the spec rather than in how it's implemented by OSes. If the NSA did things correctly, they didn't have to meddle with actual Linux/BSD/etc. source; they got flaws into the crypto definition itself that reduce the work needed to crack it. The better an OS follows the spec, the easier for the NSA to punch through.

  • by Anonymous Coward on Sunday September 08, 2013 @04:46PM (#44792193)

    Hmmm - all of a sudden this looks interesting again:

    http://news.cnet.com/8301-31921_3-20025767-281.html [cnet.com]

  • by Smallpond ( 221300 ) on Sunday September 08, 2013 @05:03PM (#44792267) Homepage Journal

    There was an attempt to backdoor the kernel [lwn.net] a few years back. I don't believe the perpetrators were ever revealed.

  • Re:AES (Score:2, Interesting)

    by rvw ( 755107 ) on Sunday September 08, 2013 @05:18PM (#44792331)

Is there any particular reason why people don't strengthen AES (or any other symmetric encryption) by just re-encrypting 1000 times? Perhaps interleaving the passes, encrypting with the first key, then the second, etc. It would make next to no difference for the end user, who's going to decrypt just once, but I imagine it would add a lot more time to the cracking of the encrypted data than increasing the size of the key.

It seems that encrypting the file multiple times with the same key is not safe; it tends to expose flaws in the encryption method and can end up less secure. However, hashing the password with a random salt over many rounds (KeePass uses 5,000 rounds, for example), and then using the resulting string to encrypt the file, should work. I'm not an expert on this matter; I'm just repeating what someone else replied when I asked the same question.
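
    (Illustrative aside: the KeePass-style scheme described above is key stretching, and PBKDF2 is its standard form. A minimal Python sketch; the round count is arbitrary for illustration:)

        import hashlib
        import os

        def stretch_key(password: bytes, rounds: int = 5000):
            # Derive an AES-sized key by hashing the password with a
            # random salt over many rounds. Each extra round costs the
            # legitimate user once, but costs a brute-forcer once per guess.
            salt = os.urandom(16)
            key = hashlib.pbkdf2_hmac("sha256", password, salt, rounds)
            return salt, key  # store the salt; re-derive the same key later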

  • by Anachragnome ( 1008495 ) on Sunday September 08, 2013 @05:21PM (#44792345)

    "The moral is obvious. You can't trust code that you did not totally create yourself...."

    I agree, but that doesn't really help us in the real world--writing our own code doesn't reasonably work out for most people. So, what's the solution to your dismal conclusion? Ferret out those that cannot be trusted--doing so is the closest we will ever come to being able to "trust the code".

    So, how does one go about ferreting out those that cannot be trusted? The Occupy Movement had almost figured it out, but wandered around aimlessly with nobody to point a finger at when they should have been naming names.

The NSA has made it clear that making connections--following the metadata--is often enough to get an investigation started. So why not do the same thing? Turn the whole thing around? Start focusing on their networks. I can suggest a good starting point--the entities that train the "Future Rulers of the World" club: the "consulting firms" that are really training and placing their own agents throughout the global community. These firms are the world's real leaders--they have vast funding and no real limits on whom and where they exert influence. In my opinion, they literally decide who runs the world.

Pay close attention to the people associated with these firms, to the inter-relatedness of the firms, and to the other organizations their "alumni" end up leading. Pay very close attention to the technologies involved and the governments involved.

Look through the lists of people involved, start researching them and their connections... follow the connections and you start to see the underlying implications of such associations. I'm not just talking the CEO of Red Hat (no, Linux is no more secure than Windows), but leaders of countries, including the US and Israel.

    http://en.wikipedia.org/wiki/Boston_Consulting_Group [wikipedia.org]

    http://en.wikipedia.org/wiki/McKinsey_and_Company [wikipedia.org]

    http://en.wikipedia.org/wiki/Bain_%26_Company [wikipedia.org]

THIS is the 1%. These are the perpetrators of NSA surveillance, which furthers their needs... NOT yours. People with connections to these firms need to be removed from any position of power, especially in government. Their future actions need to be monitored by the rest of society, if for no other reason than to limit their power.

    As George Carlin once put it so well..."It's all just one big Club, and you are not in the fucking club."

  • by Anonymous Coward on Sunday September 08, 2013 @05:27PM (#44792373)

A "Digital Forensics for Prosecutors" presentation suggests TrueCrypt has a backdoor.
    http://www.techarp.com/showarticle.aspx?artno=770&pgno=0 [techarp.com]

  • by SuricouRaven ( 1897204 ) on Sunday September 08, 2013 @05:30PM (#44792399)

Backdooring a CPU wouldn't actually be that difficult. You'd need it to recognise a specific command sequence (128 bits long should do it) when reading memory to trigger the backdoor - that way you could activate it by sending a network packet, reading external media, or routing traffic. And all the backdoor needs to do is run a simple "set instruction pointer to immediately after this trigger" (see the sketch below). It'd be impossible to defend against short of using an un-backdoored CPU to filter the trigger out, and even then it could be snuck through in an SSL session or a fragmented packet.

    And best of all, it would *never* be detected. The schematics for a CPU are practically impossible to reverse-engineer from the masks, and both schematics and masks are strictly internal company property. Plus the number of people who could understand them in enough detail to spot a backdoor without years of specialist study could probably fit in one conference hall.
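
    (Illustrative aside: in the same pseudo-Python as the Thompson post above, the trigger logic described here is tiny; cpu and on_memory_read are hypothetical:)

        MAGIC = bytes.fromhex("deadbeefdeadbeefdeadbeefdeadbeef")  # 128-bit trigger

        def on_memory_read(cpu, addr, data):
            # The hardware watches every chunk of data it reads for the
            # magic constant; on a hit, it hijacks the instruction pointer
            # to whatever bytes follow the trigger -- attacker-supplied code.
            i = data.find(MAGIC)
            if i != -1:
                cpu.instruction_pointer = addr + i + len(MAGIC)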

  • by Anonymous Coward on Sunday September 08, 2013 @05:52PM (#44792547)

    the 1946 Convention on the Privileges and Immunities of the United Nations, the 1947 agreement between the United Nations and the United States, and the 1961 Vienna Convention on Diplomatic Relations

    from Wikipedia. Found in the first few hits by Google for "wiretapping NSA international treaties", it's really not that hard.

  • by budgenator ( 254554 ) on Sunday September 08, 2013 @06:09PM (#44792645) Journal

One of our advantages is that the Russians don't want NSA backdoors in Linux, the NSA doesn't want Russian backdoors in Linux, neither wants Chinese backdoors, and similarly the Chinese want neither NSA nor Russian backdoors. After all this "Spy vs. Spy", Linux is unlikely to have backdoors. If your requirements are so great that "unlikely" isn't good enough, you're probably shit outta luck, because nothing will be good enough for you.

  • by Todd Knarr ( 15451 ) on Sunday September 08, 2013 @06:46PM (#44792881) Homepage

That won't even make it through casual review. Most project maintainers don't like code that's impenetrable. Unless it's a fix for a critical bug that nobody else has even proposed a fix for, they're going to take one look at obfuscated code and toss it back with a "No thanks." Especially if it's coming from a source they don't recognize, because messy, complex, obfuscated code also tends to be buggy, unreliable, unmaintainable code, and they don't want the headache.

  • by Trax3001BBS ( 2368736 ) on Sunday September 08, 2013 @07:15PM (#44793041) Homepage Journal

    Moral

    The moral is obvious. You can't trust code that you did not totally create yourself..... A well installed microcode bug will be almost impossible to detect.

    http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]

Are you and the submitter in on this one? Because the answer is a resounding NO.

    A well installed microcode bug will be almost impossible to detect.

For people like me who didn't know: microcode can also be known as firmware, a BIOS update, or "code in a device". http://en.wikipedia.org/wiki/Microcode [wikipedia.org]

    Ken Thompson's Acknowledgment

    I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

    (4.) Karger, P.A., and Schell, R.R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974, p 52.

    So in theory you can't even trust the code you write as your video card could change it.

    --
    If you aren't paranoid yet, just wait

  • Remember this? (Score:5, Interesting)

    by Voline ( 207517 ) on Sunday September 08, 2013 @07:41PM (#44793163)
Remember this [slashdot.org]? In December 2010 there was a scandal when a developer who had previously worked on OpenBSD wrote to Theo de Raadt and claimed that the FBI had paid the company he had been working with at the time, NETSEC Inc (since absorbed by Verizon), to insert a backdoor [linuxjournal.com] into the OpenBSD IPSEC stack. They particularly pointed to two employees of NETSEC who had worked on OpenBSD's cryptographic code, Jason Wright and Angelos Keromytis. In typical open-source fashion, de Raadt published [marc.info] the letter on an OpenBSD mailing list. After the team began a code audit, de Raadt wrote [marc.info],

    "After Jason left, Angelos (who had been working on the ipsec stack alreadyfor 4 years or so, for he was the ARCHITECT and primary developer of the IPSEC stack) accepted a contract at NETSEC and (while travelling around the world) wrote the crypto layer that permits our ipsec stack to hand-off requests to the drivers that Jason worked on. That crypto layer contained the half-assed insecure idea of half-IV that the US govt was pushing at that time. Soon after his contract was over this was ripped out. ...

    "I believe that NETSEC was probably contracted to write backdoors as alleged."

    I'd like to find a more recent report of what they found.

  • by raymorris ( 2726007 ) on Sunday September 08, 2013 @11:15PM (#44794075) Journal

The reason you can boot from a RAID card or the network is that the BIOS loads and runs BIOS modules from those cards. You may be familiar with the Linux kernel, where most of the functionality is in modules that become part of the kernel. BIOS is the same. One differentiator between a server motherboard and a consumer one is how much BIOS memory it has, to load modules from many different pieces of hardware. I have one machine with at least four different pieces of hardware that include BIOS. MOST of the BIOS on that machine didn't come with the motherboard.

  • by Sean ( 422 ) on Sunday September 08, 2013 @11:19PM (#44794097)

    Cryptome notes this document is claimed to be a hoax by a Hacker News user.

    http://cryptome.org/2013/09/computer-forensics-2013.pdf [cryptome.org]

  • by raymorris ( 2726007 ) on Sunday September 08, 2013 @11:31PM (#44794149) Journal

For the Linux kernel, that's how development is done already, for quality control and bloat reduction. Nobody can commit by themselves; it takes at least three people to get a change into mainline. Each developer has their own copy of the tree into which changes are pulled, so they can see all changes that are made, and who made them.

For each part of the kernel, there are a number of people particularly interested in that bit who watch it and work on it. For example, the people making NAS and SAN devices and services keep a close eye on the storage subsystems. Myself, I watch the dm (device-mapper) storage stack generally, more specifically LVM, and even more specifically snapshots. There are a few dozen people around the world with a special interest in that particular part of the code. No backdoor will come in without some of us spotting it. What COULD happen is that some code could come in that isn't quite as secure as it could be.

    It just so happens that I'm a security professional who uses advanced Linux storage systems for a security product called Clonebox, so that's at least one security professional closely watching that part of the code. Thousands of others watch the other parts.

It's convenient that a lot of the development is done by companies like NetApp, Amazon (S3) and Google. You can bet that when Amazon submits code, NetApp and Google are looking closely at it. When Red Hat submits something, Canonical will point out any reasons it shouldn't be accepted.

  • by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Monday September 09, 2013 @06:51AM (#44795775)
Perhaps he's thinking of configuring it so you only have to trust the Russian *or* US government. Dunno how it'd work for compute nodes, but if you have one Russian firewall in front of one US firewall in front of one Chinese firewall, it seems you could set up a network where your combo firewall is safe unless all three of them collude.
  • by allaunjsilverfox2 ( 882195 ) on Monday September 09, 2013 @09:35AM (#44796637) Homepage Journal

Maybe modern ones, but if you go back a few generations your chances of it existing drop drastically. So, what you do for high security:

    1 - Rely on OLDER hardware. Stuff from before the past two administrations would have a significantly higher chance of not having government back doors. Clinton-era computers, to start with.

    2 - Use a completely different architecture. ARM is your best friend here, or SPARC. The chances of SPARC having this are insanely small.

    3 - Get processors from your country's "enemy". The Russians don't use Intel processors for their KGB and government operations; if they did, they would be the biggest morons on the planet. Find out what they use and try to source it through black- or grey-market channels.

    Welcome to the new world of underground computer science. Oh, and keep your mouths shut. Don't do stupid shit like bragging about what you have and where you got it. I'd say "hack the planet", but the safest thing is to go off the net and transfer data via offline means for the highest security.

    You forgot a 4th option. If you were TRULY paranoid, you could write your own CPU and emulate it in an FPGA. You would also have to do the FPGA design work on a wire-wrapped CPU, which would suck, but it's possible.
