Ask Slashdot: Linux Security, In Light of NSA Crypto-Subverting Attacks? 472

New submitter deepdive writes "I have a basic question: What is the privacy/security health of the Linux kernel (and indeed other FOSS OSes) given all the recent stories about the NSA going in and deliberately subverting various parts of the privacy/security sub-systems? Basically, can one still sleep soundly thinking that the most recent latest/greatest Ubuntu/OpenSUSE/what-have-you distro she/he downloaded is still pretty safe?"
This discussion has been archived. No new comments can be posted.

  • If it is off (Score:5, Insightful)

    by morcego ( 260031 ) on Sunday September 08, 2013 @03:55PM (#44791869)

    You can sleep soundly if your computer is off and/or unplugged. Otherwise, you should always be on your guard.

    Keep your confidential data behind multiple levels of protection, and preferably disconnected when you are not using it. Never trust anything that is marketed as 100% safe. There will always be bugs to be exploited, if nothing else.

    A healthy level of paranoia is the best security tool...

  • by Zumbs ( 1241138 ) on Sunday September 08, 2013 @03:57PM (#44791889) Homepage
    Then why do you use Chrome? Pulling stunts like that would make me uninstall a program in a heartbeat ...
  • Yes. (Score:2, Insightful)

    by Anonymous Coward on Sunday September 08, 2013 @03:59PM (#44791911)

    You have to trust the integrity of Linus and the core developers.

    If any of them let in such major flaws they would be found out fairly quickly... and that would destroy the reputation of the subsystem leader, and he would be removed.

    Having the entire subsystem subverted would cause bigger problems, but in that case the entire subsystem would more likely be reverted. This has happened in the past: most recently, the changes made for Android were rejected en masse. Only small, internally compatible changes were accepted, and these went through the usual analysis and (rather severe) modifications to make them compatible.

    It is possible that this is part of the reason IPsec has never been accepted in the kernel networking code.

  • by dryriver ( 1010635 ) on Sunday September 08, 2013 @04:03PM (#44791931)
    or "Privacy" anymore. Perhaps there hasn't been for the last decade or so. We just didn't know at the time. ---- Enjoy your 21st Century. As long as people fail to defend their basic rights, there will not be such a thing as "security" or "privacy" again. My 2 Cents...
  • by Todd Knarr ( 15451 ) on Sunday September 08, 2013 @04:12PM (#44791999) Homepage

    It's possible the NSA did something bad to the code, but it's not likely and it won't last.

    For the "not likely" part, code accepted into Linux projects tends to be reviewed. The NSA can't be too obvious about any backdoors or holes they try to put in, or at least one of the reviewers is going to go "Hey, WTF is this? That's not right. Fix it," and the change will be rejected. That's even more true with the kernel itself, where changes go through multiple levels of review before being accepted, and the people doing the reviewing pretty much know their stuff. My bet would be that the only thing that might get through would be subtle and exotic modifications to the crypto algorithms themselves, to render them less secure than they ought to be.

    And that brings us to the "not going to last" part. Now that the NSA's trickery is known, the crypto experts are going to be looking at crypto implementations. And all the source code for Linux projects is right there to look at. If a weakness were introduced, it's going to be visible to the experts and it'll get fixed.

    That leaves only the standard external points of attack: the NSA getting CAs to issue it valid certificates with false subjects so they can impersonate sites and servers, encryption standards that permit "null" (no encryption) as a valid encryption option allowing the NSA to tweak servers to disable encryption entirely, that sort of thing. There's no technical solution to those, but they're easier to monitor for.
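    The "null encryption" downgrade mentioned above is at least something you can check for on your own machines. A minimal sketch using Python's standard-library ssl module: the default client context should not offer any NULL-encryption cipher suites at all.

```python
# Sketch: confirm the default TLS client context refuses NULL-encryption
# cipher suites, so a server can't silently negotiate "no encryption".
import ssl

ctx = ssl.create_default_context()
offered = [suite["name"] for suite in ctx.get_ciphers()]
print(any("NULL" in name for name in offered))  # → False
```

    This only audits your own client configuration, of course; it does nothing about a CA issuing the NSA a valid certificate with a false subject.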

  • Pointless Worrying (Score:4, Insightful)

    by Luthair ( 847766 ) on Sunday September 08, 2013 @04:13PM (#44792007)
    The NSA doesn't really need to have backdoors written into the systems; they have a lot of exploits in their bag of tricks that they've bought or found. Unfortunately, the NSA only needs to find one exploit, but for truly secure systems we need to find and fix them all. :/
  • by BitterOak ( 537666 ) on Sunday September 08, 2013 @04:16PM (#44792035)

    The big concern is back doors built into distributed binaries.

    And what about the hardware? And how can you be sure the compilers aren't putting a little something extra into the binaries? There are so many places for NSA malware to hide, it's scary. It could be in the BIOS, in the keyboard or graphics firmware, or in the kernel, placed there by a malicious compiler. It could be added to the kernel if some other trojan horse is allowed to run. And just because the kernel, etc. are open source doesn't mean they have perfect security. The operating system is incredibly complex, and all it takes is one flaw in one piece of code with root privileges (or without, if a local privilege escalation vulnerability exists anywhere on the system, which it surely does), and that can be exploited to deliver a payload into the kernel (or BIOS, or something else). Really, if the NSA wants to see what you're doing on your Linux system, rest assured, they can.

  • by M. Baranczak ( 726671 ) on Sunday September 08, 2013 @04:32PM (#44792123)

    The NSA is a big organization. They do plenty of things that don't violate the Constitution, international treaties, or common sense.

    SELinux is the least of our worries. It's not impossible to hide backdoors or vulnerabilities in an open-source product, but it is pretty difficult. And if the spooks managed to do it, they certainly wouldn't be putting their name on this product, because the people that they're really interested in are even more paranoid than you.

  • Re:AES (Score:5, Insightful)

    by WaffleMonster ( 969671 ) on Sunday September 08, 2013 @04:32PM (#44792129)

    Is there any particular reason why people don't strengthen AES (or any other symmetric encryption) by just reencrypting 1000 times? Perhaps interleaving each encryption with encrypting with the first 1, then 2 etc. It would make next to no difference for the end user, who's going to decrypt just once, but I imagine it would add a lot more time to the cracking of the encrypted data than increasing the size of the key.

    Exponents are actually what protects information, multiplication just makes people feel good.

  • by rueger ( 210566 ) on Sunday September 08, 2013 @04:37PM (#44792153) Homepage
    We are being told - and some of us suspected as much for a very long time - that the NSA & Co. track everything we do, and have the ability to decrypt much of what we think is secure; whether through brute force, exploits, backdoors, or corporate collusion.

    Surely we should also assume that there are other criminal and/or hacker groups with the resources or skills to gain similar access? Another case of "once they know it can be done, you can't turn back."

    I honestly believe that we're finally at the point where the reasonable assumption is that nothing is secure, and that you should act accordingly.
  • by cold fjord ( 826450 ) on Sunday September 08, 2013 @04:42PM (#44792181)

    I think that depends on what keeps you up at night.

    In one of the earlier stories today there was a post making all sorts of claims about compromised software and bad actors, and pointing to this paper: A Cryptographic Evaluation of IPsec. I wonder if anyone bothered to read it?

    IPsec was a great disappointment to us. Given the quality of the people that worked on it and the time that was spent on it, we expected a much better result. We are not alone in this opinion; from various discussions with the people involved, we learned that virtually nobody is satisfied with the process or the result. The development of IPsec seems to have been burdened by the committee process that it was forced to use, and it shows in the results. Even with all the serious criticisms that we have of IPsec, it is probably the best IP security protocol available at the moment. We have looked at other, functionally similar, protocols in the past (including PPTP [SM98, SM99]) in much the same manner as we have looked at IPsec. None of these protocols come anywhere near their target, but the others manage to miss the mark by a wider margin than IPsec.

    I even saw calls for the equivalent of mole hunts in the opens source software world. What could possibly go wrong?

    Criminals, vandals, and spies have been targeting computers for a very long time. Various types of security problems have been known for 40 years or more, yet they either persist or are reimplemented in interesting new ways with new systems. People make a lot of mistakes in writing software, and managing their systems and sites, and yet the internet overall works reasonably well. Of course it still has boatloads of problems, including both security and privacy issues.

    Frankly, I think you have much more to worry about from unpatched buggy software, poor configuration, unmonitored logs, lack of firewalls, crackers or vandals, and the usual problems sites have than from a US national intelligence agency. That is assuming you and 10 of your closest friends from Afghanistan aren't planning to plant bombs in shopping malls, or trying to steal the blueprints for the new antitank missiles. Something to keep in mind is that their resources are limited, and they have more important things to do unless you make yourself important enough for them to look at. And if you do make yourself that important, a "secure" computer won't stop them. You should probably worry more about ordinary criminal hackers, vandals, and automated probe/hack attacks.

  • "pretty safe?" (Score:5, Insightful)

    by bill_mcgonigle ( 4333 ) * on Sunday September 08, 2013 @04:52PM (#44792217) Homepage Journal

    Yes, it's "pretty safe". It's not absolutely safe or guaranteed to be safe. But if your other alternative is a hidden-source OS, especially one in US jurisdiction, then OSS is "pretty safe."

  • by rvw ( 755107 ) on Sunday September 08, 2013 @05:04PM (#44792279)

    You do, but if you're that worried, there's always TrueCrypt and KeePassX. If you keep the database in a TrueCrypt-encrypted partition, the NSA can't get at it within any reasonable period of time. You can also ditch KeePassX and just store it as plain text in the encrypted partition, but that's not very convenient.

    Can you be sure that Truecrypt has no backdoors? If so, how?
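    You can't prove the absence of backdoors from a binary alone, but you can at least verify that the binary you downloaded is bit-identical to the one everyone else is auditing. A minimal sketch using Python's standard library (the filename and digest in the usage comment are illustrative, not real TrueCrypt values):

```python
# Sketch: compare a downloaded file against a published SHA-256 digest.
# This only proves you got the same bits others are auditing; it says
# nothing about whether those bits are clean.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical filename and digest):
# assert sha256_of("truecrypt-7.1a.tar.gz") == "ab12...published digest..."
```

    Checking the digest over a separate channel from the download itself is what gives this any value at all.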

  • by Chemisor ( 97276 ) on Sunday September 08, 2013 @05:37PM (#44792429)

    10000 laptops are stolen at airports every year. Presumably, they are off when that happens.

    The NSA is not your problem; you are not important enough to be a target. When thinking about security, thieves are your problem. Theft happens, and happens often. Your computer is far more likely to get stolen than to be infiltrated by the NSA. And the solution is to encrypt your hard drive. Without encryption, a thief will have access to everything you normally access from the computer - like your bank account. You wouldn't want that, would you? Today's CPUs all have AES-NI support, so there is no excuse for not encrypting your laptop's hard drive. Do it today and get some financial peace of mind.

  • by FudRucker ( 866063 ) on Sunday September 08, 2013 @05:46PM (#44792503)
    They destroyed my trust in anything. I don't trust any operating system or software anymore, and I don't trust the internet or any encryption method. The US Govt and all its elements have been proven to be a criminal gang of fascist, kleptocratic, totalitarian, warmongering pigs.
  • by Dunbal ( 464142 ) * on Sunday September 08, 2013 @06:09PM (#44792643)
    I think you're missing the concept of what a back-door actually is. Yeah. Trust your "key chain" app. Your data is safe! If only the NSA had thought of that. Curses!
  • by Dunbal ( 464142 ) * on Sunday September 08, 2013 @06:27PM (#44792747)

    you are not important enough to be a target.

    Wrong. You may become important in the future. So you are important enough to target. They are collecting data on everyone, and holding on to it. They just might not be actively going through all the data from everyone (or they might be, if they have enough computing power). But if it's recorded it doesn't really matter if they do it today or in 20 years. They've got you. "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." --Richelieu

  • by Lumpy ( 12016 ) on Sunday September 08, 2013 @07:43PM (#44793175) Homepage

    Maybe modern ones, but if you go back a few generations your chances of it existing drop drastically. So here's what you do for high security:

    1 - Rely on OLDER hardware. Stuff from before the past two administrations would have a significantly higher chance of not having government back doors. Clinton-era computers, to start with.

    2 - Use a completely different architecture. ARM is your best friend here, or SPARC. The chances of SPARC having this are insanely small.

    3 - Get processors from your country's "enemy". The Russians don't use Intel processors for their KGB and government operations; if they did, they would be the biggest morons on the planet. Find out what they use and try to source it through black or grey market channels.

    Welcome to the new world of underground computer science. Oh, and keep your mouths shut. Don't do stupid shit like bragging about what you have and where you got it. I'd say "hack the planet", but the safest thing is to go off the net and transfer data via offline means for the highest security.

  • by Jeremiah Cornelius ( 137 ) on Sunday September 08, 2013 @08:04PM (#44793275) Homepage Journal


    How many Intel or nVidia employees... How many Broadcom or Qualcomm employees need to be placed by the NSA into their otherwise ordinary engineering jobs?

    How many Mossad associated employees? Whoops. I guess that's anti-Semitic. I'd have to ask how many PLA planted engineers, as there's no recognized anti-Sinoism. ;-)

  • by RabidReindeer ( 2625839 ) on Sunday September 08, 2013 @08:44PM (#44793397)

    This argument is much, much too complicated. Plus, it can indeed be tracked down in the compiler binary. Compiling the compiler with an unrelated compiler will remove the malware in the compiler binary. You can use a really slow one for this effort, as you must use it only once.
    In reality, there are more than enough bugs of the "Ping of death" style, which can be used. Read "confessions of a cyber warrior".
    The worst thing Bell Labs brought into this world was the C and C++ languages and the associated programming style: raw char* pointers, the possibility of uninitialized pointers, and so on.

    If Bell Labs had not foisted C and C++ on this world for "free", the government would have had to invent something to make their "cyber war space" possible. Wait, Bell Labs WAS the government.

    If that's not enough, a single buffer overflow in Firefox or Acrobat Reader can trigger something like the Pentium F00F bug, and then they OWN THE CPU. Your stinking sandbox is wholly irrelevant at that point.

    Go figure, sucker. Me, I am a C and C++ software engineering sucker, too.

    Before C, much less C++, there were languages like FORTRAN, COBOL, and PL/1. They were not as rigid about checking types and ranges as Java and Ada, for example. Even some versions of BASIC allowed definition of an "array" that was, in fact, a map of the entire system RAM. And, of course, peek() and poke(). PL/1 has actual pointer support built into the language.

    So don't blame C. The problems go way, way back. Some systems and languages were more secure than others, but none of them were all that airtight. The only commercial hardware architecture that I know of that approached being REALLY secure was the Intel iAPX 432, which practically gave each stack frame its own private address space. But that one never caught on.

  • by Mathinker ( 909784 ) on Monday September 09, 2013 @12:44AM (#44794449) Journal

    Ken Thompson's theoretical attack against the Unix ecosystem was only practical because, at the time, he controlled a major portion of binary distribution and simultaneously a major portion of the information which could be used to defeat the attack, that being compiler technology. Nowadays, there are tons of different, competing compilers and systems for code rewriting, any of which can be used to "return trust" to a particular OS's binary ecosystem (if someone would take the time and effort to actually do it).
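    The "return trust" idea above can be sketched concretely. In David A. Wheeler's diverse double-compiling, you build the compiler's clean source with both the suspect binary and an unrelated trusted compiler, let each result rebuild itself, and compare the outputs. A toy model (strings for binaries, pure functions for compilation) shows why a self-propagating trojan gets exposed:

```python
# Toy model of Thompson's "trusting trust" attack and the
# diverse-double-compiling check that defeats it. "Binaries" are strings
# and "compiling" is a pure function of the source -- a sketch of the
# idea, not a real build system.

COMPILER_SRC = "compiler-source"  # the (clean) source of the compiler

def clean_compile(src):
    return "bin(" + src + ")"

def trojaned_compile(src):
    # The trojan re-inserts itself whenever it compiles the compiler,
    # so the backdoor survives even though the source is clean.
    if src == COMPILER_SRC:
        return "bin(" + src + "+trojan)"
    return "bin(" + src + ")"

def run(binary, src):
    # Execute a compiler binary on some source: trojaned binaries behave
    # like the trojaned compiler, clean ones like the clean compiler.
    return (trojaned_compile if "trojan" in binary else clean_compile)(src)

suspect_binary = trojaned_compile(COMPILER_SRC)  # binary we were shipped
trusted_binary = clean_compile(COMPILER_SRC)     # from an unrelated compiler

# Stage 1: build the compiler from source with each binary;
# Stage 2: let each stage-1 result rebuild itself.
stage2_suspect = run(run(suspect_binary, COMPILER_SRC), COMPILER_SRC)
stage2_trusted = run(run(trusted_binary, COMPILER_SRC), COMPILER_SRC)

# If the suspect binary were clean, both stage-2 builds would be
# bit-for-bit identical; the mismatch exposes the trojan.
print(stage2_suspect == stage2_trusted)  # → False
```

    Note that on ordinary application source the trojaned compiler behaves identically to the clean one, which is exactly why the attack is invisible to normal testing and needs a second, independent compiler to detect.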

    Although I believe Bruce Schneier's information (previously covered in Slashdot) that, probably, any widely used software system available today is practically effortlessly pwnable by the NSA's TAO division, I don't think that the problem of designing hardened systems is an impractical one. It's just going to take a lot of hard, hard work (Schneier's call-to-arms in this regard has already been covered by Slashdot).

  • by caseih ( 160668 ) on Monday September 09, 2013 @02:17AM (#44794799)

    Over the years the NSA has contributed what seemed like positive things to computer security in general, and Linux specifically. They have helped correct some algorithms to make them more secure, and implemented things like SELinux.

    However, now that their other actions and intentions have been starkly revealed, any and all things the NSA does (and has done) are now cast into steep doubt. Which is unfortunate, because the NSA has a lot of really smart cryptographers and mathematicians who could greatly contribute to information security.

    Now, however, their ability to contribute in any positive way to the open source community, or even to the industry at large, is gone forever. No one will trust them again. A sad loss for them, but also a potential loss for everyone. Nothing will be quite the same from here on out. And in the long run, without the help of smart, honest mathematicians and cryptographers, our security across the board will suffer. It's not that the revelations caused the damage, but that the NSA sabotaged things. Shame on them. Kudos to Snowden for helping us learn the extent of the damage.

  • by Alain Williams ( 2972 ) on Monday September 09, 2013 @04:34AM (#44795313) Homepage

    What would you do if you were a Chinese or Russian spook and discovered an NSA backdoor in Linux? You could cry foul to Linus and get it fixed. However, a much more profitable action would be to silently fix it on your own security-critical machines and then exploit it as much as possible against your targets in the West.

  • by bluegutang ( 2814641 ) on Monday September 09, 2013 @05:38AM (#44795541)

    But you'd have to prevent knowledge of the backdoor from leaking. Hundreds of engineers work on each CPU, each group produces and verifies a new CPU design every year or so, there is considerable employee turnover every few years, and nobody has ever reported such a thing. So I find it unlikely.

    Disclaimer: I work as a hardware engineer for a major CPU manufacturer.

  • by Aaden42 ( 198257 ) on Monday September 09, 2013 @03:38PM (#44801585) Homepage

    1 Russian Firewall in front of one US firewall in front of one Chinese firewall

    So you’re looking for 100% packet loss? Why not just unplug the cord? It would be cheaper, and there'd be less stuff to patch...
