Ask Slashdot: Linux Security, In Light of NSA Crypto-Subverting Attacks? 472
New submitter deepdive writes "I have a basic question: What is the privacy/security health of the Linux kernel (and indeed other FOSS OSes) given all the recent stories about the NSA going in and deliberately subverting various parts of the privacy/security sub-systems? Basically, can one still sleep soundly thinking that the latest/greatest Ubuntu/OpenSUSE/what-have-you distro she/he downloaded is still pretty safe?"
No. (Score:4, Funny)
I think there's even a law for this kind of reply...
Ken Thompson, Anyone? (Score:5, Interesting)
You cannot add security later.
In Unix systems, there’s a program named “login”. login is the code that takes your username and password, verifies that the password you gave is the correct one for the username you gave, and if so, logs you in to the system.
For debugging purposes, Thompson put a back-door into “login”. The way he did it was by modifying the C compiler. He took the code pattern for password verification, and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code
that accepted either the correct password for the username, or Thompson’s special debugging password. In pseudo-Python:
def compile(code):
    if looksLikeLoginCode(code):
        generateLoginWithBackDoor()
    else:
        compileNormally(code)
With that in the C compiler, any time that anyone compiles login, the code generated by the compiler will include Thompson’s back door.
Now comes the really clever part. Obviously, if anyone saw code like what’s in that
example, they’d throw a fit. That’s insanely insecure, and any manager who saw that would immediately demand that it be removed. So, how can you keep the back door, but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:
def compile(code):
    if looksLikeLoginCode(code):
        generateLoginWithBackDoor(code)
    elif looksLikeCompilerCode(code):
        generateCompilerWithBackDoorDetection(code)
    else:
        compileNormally(code)
What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back door code; and when it compiles
the C compiler, it will insert the code that inserts the code into both login and the C compiler.
Now, you compile the C compiler with itself – getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it’s in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into
the new version.
So you’ve now got a C compiler that inserts back-door code when it compiles itself – and that code appears nowhere in the source code of the compiler. It did exist in the code at one point – but then it got deleted. But because the C compiler is written in C, and always compiled with itself, that means that each successive new version of the C compiler will pass along the back door – and it will continue to appear in both login and in the C compiler, without any trace in the source code of either.
http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ [scienceblogs.com]
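To make the bootstrap sequencing concrete, here's a toy simulation in the same pseudo-Python (all names made up; the point is just that the *binary* doing the compiling decides what comes out, regardless of what the source says):

evil_source = "compiler source WITH back-door logic"
clean_source = "compiler source, back-door logic deleted"

def build(source, building_binary):
    # The binary doing the compiling decides what comes out.
    if "back-door" in building_binary:
        return source + " [back door re-inserted by the binary]"
    return source

gen1 = build(evil_source, building_binary=evil_source)  # back door now in the binary
gen2 = build(clean_source, building_binary=gen1)        # source is clean...
print(gen2)  # ...but the new binary still carries the back door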
Re:Ken Thompson, Anyone? (Score:5, Informative)
Moral
The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.
http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]
Re:Ken Thompson, Anyone? (Score:5, Interesting)
"The moral is obvious. You can't trust code that you did not totally create yourself...."
I agree, but that doesn't really help us in the real world--writing our own code doesn't reasonably work out for most people. So, what's the solution to your dismal conclusion? Ferret out those that cannot be trusted--doing so is the closest we will ever come to being able to "trust the code".
So, how does one go about ferreting out those that cannot be trusted? The Occupy Movement had almost figured it out, but wandered around aimlessly with nobody to point a finger at when they should have been naming names.
The NSA has made it clear that making connections--following the metadata--is often enough to get an investigation started. So why not do the same thing? Turn the whole thing around? Start focusing on their networks. I can suggest a good starting point--the entities that train the "Future Rulers of the World" club. The "Consulting Firms" that are really training and placing their own agents throughout the global community. These firms are the world's real leaders--they have vast funding and no real limitations to who and where they exert influence. In my opinion, they literally decide who runs the world.
Pay close attention to the people associated with these firms, the inter-relatedness of the firms and the other organizations "Alumni" end up leading. Pay very close attention to the technologies involved and the governments involved.
Look through the lists of people involved, start researching them and their connections... follow the connections and you start to see the underlying implications of such associations. I'm not just talking about the CEO of Red Hat (no, Linux is no more secure than Windows), but leaders of countries, including the US and Israel.
http://en.wikipedia.org/wiki/Boston_Consulting_Group [wikipedia.org]
http://en.wikipedia.org/wiki/McKinsey_and_Company [wikipedia.org]
http://en.wikipedia.org/wiki/Bain_%26_Company [wikipedia.org]
THIS is the 1%. These are the perpetrators of NSA surveillance, to further their needs... NOT yours. People with connections to these firms need to be removed from any position of power, especially government. Their future actions need to be monitored by the rest of society, if for no other reason than to limit their power.
As George Carlin once put it so well..."It's all just one big Club, and you are not in the fucking club."
Re:Ken Thompson, Anyone? (Score:5, Interesting)
Moral
The moral is obvious. You can't trust code that you did not totally create yourself..... A well installed microcode bug will be almost impossible to detect.
http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]
Are you and the submitter in on this one? Because the answer is a resounding NO.
A well installed microcode bug will be almost impossible to detect.
For people like me who didn't know: microcode is also known as firmware, a BIOS update,
or "code in a device" http://en.wikipedia.org/wiki/Microcode [wikipedia.org]
Ken Thompson's Acknowledgment
I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.
(4.) Karger, P.A., and Schell, R.R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974, p 52.
So in theory you can't even trust the code you write as your video card could change it.
--
If you aren't paranoid yet, just wait
Re:Ken Thompson, Anyone? (Score:5, Insightful)
Bingo.
How many Intel or nVidia employees... How many Broadcom or Qualcomm employees need to be placed by the NSA into their otherwise ordinary engineering jobs?
How many Mossad associated employees? Whoops. I guess that's anti-Semitic. I'd have to ask how many PLA planted engineers, as there's no recognized anti-Sinoism. ;-)
Re: (Score:3)
http://cm.bell-labs.com/who/ken/trust.html [bell-labs.com]
quoting Ken Thompson
I would like to criticize the press in its handling of the "hackers," the 414 gang
God I guess...
The 414s gained notoriety in the early 1980s as a group of friends and computer hackers who broke into dozens of high-profile computer systems, including ones at Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank.
They were eventually identified as six teenagers, taking their name after the area code of their hometown of Milwaukee, Wisconsin. Ranging in age from 16 to 22, they met as members of a local Explorer Scout troop. The 414s were investigated and identified by the FBI in 1983. There was widespread media coverage of them at the time, and 17-year-old Neal Patrick, a student at Rufus King High School, emerged as spokesman and "instant celebrity" during the brief frenzy of interest, which included Patrick appearing on the September 5, 1983 cover of Newsweek.
September 5, 1983 cover of Newsweek
http://mimg.ugo.com/201102/0/6/5/175560/cuts/4c6de9daa1c16-23680n_480x480.jpg [ugo.com]
Text from http://en.wikipedia.org/wiki/The_414s [wikipedia.org]
Re: (Score:3)
A person would have to be absolutely arrogant to trust themselves alone to effect a secure environment. No one is that good, unless we are talking about "secure" systems that are essentially non-functional.
That's why we have communities of open source developers. Many minds and eyeballs enable a more comprehensive view of security, especially when they are watching changes incrementally accumulate. I think it is much harder to get even subtly surreptitious malware past developers this way.
The other way, you
Re:Ken Thompson, Anyone? (Score:4, Informative)
It is not easy to discover vulnerabilities through code examination.
The easy to discover problems are picked up by source management tools, LINTs and things.
Functional vulnerability in derived object code is less work-intensive, and generally returns richer results versus man-hours of investment.
Pen geniuses still "fuzz" binaries, rather than trawl millions of lines of code.
Think about how Android vulnerabilities are discovered, by Blackhat Briefing presenters. They don't usually delve into the monolithic available sources. Many vulns only make themselves evident, when combined with microcode on devices or in combination with radio stacks, etc.
Code is used to confirm findings. Sometimes. ;-)
Re:Ken Thompson, Anyone? (Score:5, Insightful)
This argument is much, much too complicated. Plus, it can indeed be tracked down in the compiler binary. Compiling the compiler with an unrelated compiler will remove the malware in the compiler binary. You can use a really slow one for this effort, as you must use it only once.
In reality, there are more than enough bugs of the "Ping of death" style, which can be used. Read "confessions of a cyber warrior".
The worst thing Bell Labs brought into this world was the C and C++ languages and the associated programming style: char* pointers, the possibility of uninitialized pointers, and so on.
If Bell Labs had not foisted C and C++ on this world for "free", the government would have had to invent something to make their "cyber war space" possible. Wait, Bell Labs WAS the government.
If that's not enough, a single buffer overflow in Firefox or Acrobat Reader can trigger something like the Pentium F00F bug, and then they OWN THE CPU. Your stinking sandbox is wholly irrelevant at this time.
Go figure, sucker. Me, I am a C and C++ software engineering sucker, too.
Before C, much less C++, there were languages like FORTRAN, COBOL, and PL/I. They were not as rigid about checking types and ranges as Java and Ada, for example. Even some versions of BASIC allowed definition of an "array" that was, in fact, a map of the entire system RAM. And, of course, peek() and poke(). PL/I has actual pointer support built into the language.
So don't blame C. The problems go way, way back. Some systems and languages were more secure than others, but none of them were all that airtight. The only commercial hardware architecture that I know of that approached being REALLY secure was the Intel iAPX 432, which practically gave each stack frame its own private address space. But that one never caught on.
Re: (Score:3)
If you're going to play it THAT way, then the exploits go back to assembler and every early digital computer. (Analog computers had different weaknesses.)
But please remember that early Fortrans (e.g. IBSYS FORTRAN II) discouraged using pointers at all. I will grant that they didn't check array bounds, but the location of the array WRT the rest of the program was not guaranteed, and was subject to being changed with different compiler options. I don't know COBOL well enough to really comment, but it's my i
Re: (Score:3)
I don't know that I'd call assembler "exploits", since in assembler you're allowed to do any darn thing you want to. High-level languages exist as much to limit that ability as anything else.
None of the early FORTRAN implementations I worked with supported pointers as such. But the Primos OS was mostly written in FORTRAN (in fact the instruction set was optimized for FORTRAN), and I think there was a pre-defined integer array whose first element was memory location 0 and each word in that array thus had a 1
Re: (Score:3)
This is why you start by compiling a very simple, basic compiler like PCC using your choice of random, potentially compromised compiler, then use that PCC binary to compile a new copy of PCC. The resulting PCC-compiled PCC binary should be both small enough and simple enough instruction-wise for a few dozen people to feasibly audit it by hand. Use that to then build a verifiably source-clean copy of GCC. Use that, i
Re:Ken Thompson, Anyone? (Score:5, Insightful)
Maybe modern ones, but if you go back a few generations your chances of it existing drop drastically. So what do you do for high security?
1 - Rely on OLDER hardware. Stuff from before the past two administrations would have a significantly higher chance of not having government back doors. Clinton-era computers to start with.
2 - Use a completely different architecture. ARM is your best friend here, or SPARC. The chances of SPARC having this are insanely small.
3 - Get processors from your country's "enemy". Russians don't use Intel processors for their KGB and government operations. If they did they would be the biggest morons on the planet. Find out what they use and try to source them through black or grey market channels.
Welcome to the new world of underground computer science. Oh, and keep your mouths shut. Don't do stupid shit like bragging about what you have and where you got it. I'd say "hack the planet" but the safest thing is to go off the net and transfer data via offline means for the highest security.
Re:Ken Thompson, Anyone? (Score:5, Funny)
If your prescription for fixing the issues of low security is to trust the Russian (nee Soviet) Government, I'm pretty sure you're doing it wrong.
Re:Ken Thompson, Anyone? (Score:4, Insightful)
So you’re looking for 100% packet loss? Why not just unplug the cord. Would be cheaper, less stuff to patch...
Re: (Score:3)
You are advocating security through obscurity?
Re:Ken Thompson, Anyone? (Score:4, Interesting)
Maybe modern ones, but if you go back a few generations your chances of it existing drop drastically. So what do you do for high security?
1 - Rely on OLDER hardware. Stuff from before the past two administrations would have a significantly higher chance of not having government back doors. Clinton-era computers to start with.
2 - Use a completely different architecture. ARM is your best friend here, or SPARC. The chances of SPARC having this are insanely small.
3 - Get processors from your country's "enemy". Russians don't use Intel processors for their KGB and government operations. If they did they would be the biggest morons on the planet. Find out what they use and try to source them through black or grey market channels.
Welcome to the new world of underground computer science. Oh, and keep your mouths shut. Don't do stupid shit like bragging about what you have and where you got it. I'd say "hack the planet" but the safest thing is to go off the net and transfer data via offline means for the highest security.
You forgot a 4th option. If you were TRULY paranoid, you could write your own CPU and emulate it in an FPGA. You would also have to do the FPGA design work on a wire-wrapped CPU, which would suck, but it's possible.
Re: (Score:3)
That's not realistically very likely. Microcode typically never gets updated after the CPU ships, which means that as soon as some critical part of the compiled binary looks slightly different, the microcode won't have the desired effect. It doesn't take a large compiler change to screw that sort of thing up. Even tiny optimization changes would prevent microcode from usefully changing the behavior of a particular binary. The microcode level is just wa
Re: (Score:3)
Fortunately there is an effective counter-measure:
http://www.dwheeler.com/trusting-trust/ [dwheeler.com]
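A minimal sketch of the comparison at the heart of Wheeler's "diverse double-compiling" (hypothetical file names and cc-style command line; real DDC is much more careful about determinism and environment):

import hashlib
import subprocess

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build(compiler, source, out):
    subprocess.run([compiler, source, "-o", out], check=True)
    return "./" + out

# Stage 1: build the suspect compiler's source with itself AND with an
# independently sourced second compiler.
a1 = build("./suspect-cc", "cc.c", "a1")
b1 = build("./other-cc", "cc.c", "b1")

# Stage 2: use each stage-1 binary to build the same source again.
build(a1, "cc.c", "a2")
build(b1, "cc.c", "b2")

# If both compilers are honest and compilation is deterministic, a2 and
# b2 came from identical source via binaries built from identical
# source, so they must match bit-for-bit.
assert sha256("a2") == sha256("b2"), "possible trusting-trust attack"

The catch, of course, is the provenance of that second compiler -- which is exactly the next question.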
So you compile the code using two different compilers. How can you be sure that both other compilers don't have a parent compiler that is infected?
Re:Ken Thompson, Anyone? (Score:4, Interesting)
Write your own login, use a differently sourced compiler for compiling the compiler...
anyways, can we stop posting this same story to every fucking security story already? put it in a sig or something.
Re: (Score:3)
it doesn't wash in the FOSS community.
Nice try.
A laughable comment at best. The FOSS community does not have an army of people running around decompiling binaries just to check whether they match code compiled from the published source. This is significantly less useful than the argument that FOSS doesn't contain back doors because you can look at the source. Just a tip: the vast majority of users don't.
The vast majority of developers do, but as I said, the vast majority of developers don't routinely get up in the morning and decompile published binaries. not the
Isn't it really worse than that? (Score:3)
And doesn't the compiler target an architecture?
And isn't that architecture rife with microcode you never see?
Re: (Score:3, Insightful)
Ken Thompson's theoretical attack against the Unix ecosystem was only practical because, at the time, he controlled a major portion of binary distribution and simultaneously a major portion of the information which could be used to defeat the attack, that being compiler technology. Nowadays, there are tons of different, competing compilers and systems for code rewriting, any of which can be used to "return trust" to a particular OS's binary ecosystem (if someone would take the time and effort to actually do
Not much worry with a source build (Score:5, Interesting)
The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et al.
The big concern is back doors built into distributed binaries.
Re:Not much worry with a source build (Score:5, Interesting)
Another one that concerns me is Chrome, which on Ubuntu insists on unlocking my keystore to access stored passwords. I'd much rather have a browser store its passwords in its own keystore, not my user account keystore. After all, once you've granted access to the keystore, any key can be retrieved.
And, in the case of a browser, you'd never notice that your keys are being uploaded.
Re: (Score:3)
In the Apple Keychain Access app the access to each key is restricted to a list of applications that are set by the user. You are allowed to grant access of a particular key to all applications, however.
Re: (Score:3)
I don't. I use Firefox because it doesn't ask for access to my default keystore.
Not that I keep any keys in the default keystore anyhow. I just don't like the behaviour of Chrome in this regard.
Why would any sane person want to unlock their whole wallet just for a freaking browser?
Re: (Score:3)
It only unlocks the wallet for the user it's running as, it doesn't have crazy admin privileges.
If you care about security, you're already running the browser as a restricted user anyway--even if you did stupidly share passphrases between wallets (or accidentally mistype the wrong passphrase into the browser unlock window) it still shouldn't have FS permission to your primary wallet.
Plus you can run Chromium if you want to be able to audit the source, presuming you don't think someone's Ken Thompson'd chrom
Re:Not much worry with a source build (Score:5, Informative)
why do people keep suggesting to use lastpass?
Seriously!
You don't want Chrome to have access to all your keys, but you're quite happy to fucking upload them to some server run by some random fucking mouth breather in some fucking country you don't know.
Re: (Score:3)
Chrome is spyware, what do you expect? Its very purpose is to get all your data.
Re: (Score:3)
It does, though it's configurable. https://code.google.com/p/chromium/wiki/LinuxPasswordStorage [google.com] has details.
Re:Not much worry with a source build (Score:5, Interesting)
Eventually you have to draw the line somewhere with regard to where you stop trusting. If the Linux kernel sources themselves contained a backdoor, I would be none the wiser, and neither would most of the world. Some of us have very little interest in coding, let alone picking through millions of lines of it to look for that kind of thing. And then of course there's syntactic ways of hiding backdoors that even somebody looking for one might miss.
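For a taste of what "syntactic" hiding looks like, recall the 2003 attempt to backdoor the kernel's sys_wait4() (mentioned further down this page), where "current->uid = 0" sat one character away from an innocent "==" comparison. A rough pseudo-Python paraphrase, from memory and much simplified:

__WCLONE, __WALL = 0x80000000, 0x40000000

def wait4(options, current):
    # Reads like defensive error-checking. But dict.update() returns
    # None, so the branch is never taken and nothing visible happens --
    # while the caller's uid has silently been set to 0 (root).
    if options == (__WCLONE | __WALL) and current.update(uid=0):
        return -1  # "invalid option combination"
    ...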
Re: (Score:2)
You do, but if you're that worried, there's always TrueCrypt and KeePassX. If you keep the database in a TrueCrypt-encrypted partition, the NSA can't get at it within any reasonable period of time. You can also ditch KeePassX and just store it as plain text in the encrypted partition, but that's not very convenient.
Re:Not much worry with a source build (Score:5, Insightful)
You do, but if you're that worried, there's always TrueCrypt and KeePassX. If you keep the database in a TrueCrypt-encrypted partition, the NSA can't get at it within any reasonable period of time. You can also ditch KeePassX and just store it as plain text in the encrypted partition, but that's not very convenient.
Can you be sure that Truecrypt has no backdoors? If so, how?
Truecrypt Re:Not much worry with a source build (Score:3, Interesting)
A Digital Forensics for Prosecutors presentation suggests TrueCrypt has a backdoor.
http://www.techarp.com/showarticle.aspx?artno=770&pgno=0 [techarp.com]
Re:Truecrypt Re:Not much worry with a source build (Score:4, Interesting)
Cryptome notes this document is claimed to be a hoax by a Hacker News user.
http://cryptome.org/2013/09/computer-forensics-2013.pdf [cryptome.org]
Re:Truecrypt Re:Not much worry with a source build (Score:5, Informative)
A Digital Forensics for Prosecutors presentation suggests TrueCrypt has a backdoor.
http://www.techarp.com/showarticle.aspx?artno=770&pgno=0 [techarp.com]
The entire link inadvertently explains why cloud storage shouldn't be used, and that mobile devices are your worst enemy.
The only mention of TrueCrypt is this sentence:
"Currently available for major software - Microsoft bitlocker,
FileVault, BestCrypt, TrueCrypt, Etc" (sic)
It does have these gems
"The Patriot Act allows for the use of backdoors for counter terrorist investigations"
The use of backdoors cannot be detected or proven.
Vendors are legally and commercially prevented from acknowledging their backdoors.
Defense will not be able to prove their existence.
The files can be described as "forensically obtained"
Users of mobile devices and cloud storage sign off on their rights to data scanning.
There is no opt-out option.
Lots more...
PDF can be downloaded here:
http://www.techarp.com/article/LEA/Encryption_Backdoor/Computer_Forensics_for_Prosecutors_(2013)_Part_1.pdf [techarp.com]
Re: (Score:3)
Mod up the parent!
Yes, that's actually my concern all the time. Of course, with open source, you could technically check the source of the system you are using. But then, you'd need to check every line of code, thinking exactly like the NSA (or what-not), in every piece of software you use, including the compiler you use to compile and the compiler that compiled that compiler, etc, etc.
Additionally, you'd need to check the source of all the HW-components that come with their own BIOS, including the system's BIOS, networking ch
BIOS loads modules from cards, to boot raid or pxe (Score:4, Interesting)
The reason you can boot from a RAID card or network is because the BIOS loads and runs BIOS modules from those cards. You may be familiar with the Linux kernel, where most of the functionality is in modules that become part of the kernel. BIOS is the same. One differentiator between a server motherboard and a consumer one is how much BIOS memory it has, to load modules from many different pieces of hardware. I have one machine with at least four different pieces of hardware that include BIOS. MOST of the BIOS on that machine didn't come with the motherboard.
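On a running Linux box you can actually see which cards brought their own BIOS along. A rough sketch (Linux-specific; sysfs exposes a "rom" attribute for PCI devices that carry an expansion ROM):

import glob
import os

# PCI devices exposing an expansion ROM -- i.e. BIOS code that came
# from the card, not from the motherboard vendor.
for rom in glob.glob("/sys/bus/pci/devices/*/rom"):
    dev = os.path.dirname(rom)
    with open(os.path.join(dev, "vendor")) as f:
        vendor = f.read().strip()
    with open(os.path.join(dev, "device")) as f:
        device = f.read().strip()
    print(os.path.basename(dev), "has an option ROM:", vendor, device)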
we already do that for QC. All maintainers see all (Score:5, Interesting)
For the Linux kernel, that's how development is done already, for quality control and bloat reduction. Nobody can commit by themselves; it takes at least three people to get a change into mainline. Each developer has their own copy of the tree into which changes are pulled, so they can see all changes that are made, and who made them.
For each part of the kernel, there are a number of people particularly interested in that bit who watch it and work on it. For example, the people making NAS and SAN devices and services keep a close eye on the storage subsystems. Myself, I watch the dm (device-mapper) storage stack generally, more specifically LVM, and even more specifically snapshots. There are a few dozen people around the world with special interest in that particular part of the code. No backdoors will come in without some of us spotting them. What COULD happen is that some code could come in that isn't quite as secure as it could be.
It just so happens that I'm a security professional who uses advanced Linux storage systems for a security product called Clonebox, so that's at least one security professional closely watching that part of the code. Thousands of others watch the other parts.
It's convenient that a lot of the development is done by companies like NetApp, Amazon (S3) and Google. You can bet that when Amazon submits code, NetApp and Google are looking closely at it. When Red Hat submits something, Canonical will point out any reasons it shouldn't be accepted.
Re: (Score:3)
When Red Hat submits something, Canonical will point out any reasons it shouldn't be accepted.
I had a good laugh when I read this.
Red Hat employs hundreds of software engineers, contributing a lot to the entire Linux ecosystem. Canonical's resources in terms of code contribution are laughable in comparison, and being a streamlined business Canonical has few, if any, resources to review third-party code. They are happy to ride along, but the number of people at Canonical who actually write and read code outside the shiny UI field are hardly those with the expertise to review low-level kernel code.
Re:Not much worry with a source build (Score:5, Interesting)
There was an attempt to backdoor the kernel [lwn.net] a few years back. I don't believe the perpetrators were ever revealed.
Re:Not much worry with a source build (Score:5, Interesting)
One of our advantages is that I'm sure the Russians don't want NSA backdoors in Linux, the NSA doesn't want Russian backdoors in Linux, and neither wants Chinese backdoors; similarly, the Chinese want neither NSA nor Russian backdoors. After all of this "Spy vs. Spy", Linux is unlikely to have backdoors. If your requirements are great enough that unlikely isn't good enough, you're probably shit outa luck, because nothing will be good enough for you.
Re:Not much worry with a source build (Score:5, Insightful)
What would you do if you were a Chinese or Russian spook and discovered an NSA backdoor in Linux? You could cry foul to Linus and get it fixed. However, a much more profitable action would be to silently fix it in your own security-critical machines and then exploit it as much as possible on your targets in the West.
Re: (Score:3)
The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et al.
The big concern is back doors built into distributed binaries.
So what is the practical difference between a "back door" and a security vulnerability anyway? They both remain hidden until found and they both can easily result in total ownage of the (sub)system.
History demonstrates the "open source" community is not immune from injection of "innocent" security vulnerabilities into open source projects by way of human error. I find it illogical to assume intentional vulnerabilities would be detectable in source code where we have failed to detect innocent ones.
And as for y
What about the hardware or compiler? (Score:5, Insightful)
The big concern is back doors built into distributed binaries.
And what about the hardware? And how can you be sure the compilers aren't putting a little something extra into the binaries. There are so many places for NSA malware to hide it's scary. Could be in the BIOS, could be in the keyboard or graphics firmware, could be in the kernel placed there by a malicious compiler. Could be added to the kernel if some other trojan horse is allowed to run. And just because the kernel, etc. are open source doesn't mean they have perfect security. The operating system is incredibly complex, and all it takes is one flaw in one piece of code with root privileges (or without if a local privilege escalation vulnerability exists anywhere on the system, which it surely does), and that can be exploited to deliver a payload into the kernel (or BIOS, or something else). Really, if the NSA wants to see what you're doing on your Linux system, rest assured, they can.
Re:Not much worry with a source build (Score:5, Insightful)
The NSA is a big organization. They do plenty of things that don't violate the Constitution, international treaties, or common sense.
SELinux is the least of our worries. It's not impossible to hide backdoors or vulnerabilities in an open-source product, but it is pretty difficult. And if the spooks managed to do it, they certainly wouldn't be putting their name on this product, because the people that they're really interested in are even more paranoid than you.
Re: (Score:3, Funny)
Or at least, they will have in ten years when the OpenBSD codebase catches up.
Re:Not much worry with a source build (Score:5, Interesting)
Backdooring a CPU wouldn't actually be that difficult. You'd need it to recognise a specific command sequence (128-bits long should do it) when reading memory to trigger the backdoor - that way you could activate it by sending a network packet, or reading external media, or routing traffic. And all the backdoor needs to do is run a simple 'set instruction pointer to immediately after this trigger.' It'd be impossible to defend against short of using an un-backdoored CPU to filter the trigger out, and even then it could be snuck through in an SSL session or a fragmented packet.
And best of all, it would *never* be detected. The schematics for a CPU are practically impossible to reverse-engineer from the masks, and both schematics and masks are strictly internal company property. Plus the number of people who could understand them in enough detail to spot a backdoor without years of specialist study could probably fit in one conference hall.
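As a toy illustration only (invented trigger value, nothing like real silicon), the trigger logic amounts to something like:

TRIGGER = bytes.fromhex("00112233445566778899aabbccddeeff")  # 128 bits

def fetch_loop(memory, ip=0):
    # A "CPU" that watches everything it reads for the magic sequence.
    while ip < len(memory):
        if memory[ip:ip + len(TRIGGER)] == TRIGGER:
            # Back door: redirect execution to whatever follows the
            # trigger -- the payload that rode in via packet or file.
            return ("executing attacker payload at", ip + len(TRIGGER))
        ip += 1
    return ("clean run", ip)

print(fetch_loop(b"GET / HTTP/1.1\r\n" + TRIGGER + b"\x90\x90..."))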
Re:Not much worry with a source build (Score:4, Insightful)
But you'd have to prevent knowledge of the backdoor from leaking. Hundreds of engineers work on each CPU, each group produces and verifies a new CPU design every year or so, there is considerable employee turnover every few years, and nobody has ever reported such a thing. So I find it unlikely.
Disclaimer: I work as a hardware engineer for a major CPU manufacturer.
AES (Score:2)
i never understood why people go for AES. clearly, if NSA recommends it, in my view it is something to be avoided (i personally go for twofish instead). in ubuntu, ecryptfs uses aes by default, so i would not trust that.
Re:AES (Score:4, Informative)
The last time that the NSA weakened an algorithm they recommended was by shortening the key for DES.
Minor correction: They strengthened the DES algorithm by substituting a new set of S-boxes which protected against an attack that wasn't publicly known at the time. They shortened the key space which made it more susceptible to brute forcing the key. Full strength DES has held up very well against attacks overall until its key length became a problem. It lasted much longer in use than intended.
I seem to recall that DES was never approved for protecting classified data, but that AES does have that approval.
Re:AES (Score:5, Insightful)
Is there any particular reason why people don't strengthen AES (or any other symmetric encryption) by just reencrypting 1000 times? Perhaps interleaving each encryption with encrypting with the first 1, then 2 etc. It would make next to no difference for the end user, who's going to decrypt just once, but I imagine it would add a lot more time to the cracking of the encrypted data than increasing the size of the key.
Exponents are actually what protects information, multiplication just makes people feel good.
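Back-of-envelope numbers make the point (and this ignores meet-in-the-middle attacks, which actually make naive cascades like 2DES weaker than the multiplication suggests):

from math import log2

work = 2 ** 128                  # brute-force work for a 128-bit key
print(log2(work * 1000))         # ~137.97 "bits" -- 1000x re-encryption
print(log2(2 ** (128 + 64)))     # 192.0 bits -- just add 64 key bits

A thousand re-encryptions buys you at most ten bits; one extra byte of key buys you eight, and going from AES-128 to AES-256 buys you 128.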
Re: (Score:2)
AES consists of well studied algorithms. Whether or not the NSA recommends it, it's still known to be secure by independent researchers. From what I understand the only breaks to it are marginally better than brute force, and not likely to result in the data becoming available in a useful period of time.
Re:AES (Score:5, Funny)
if the whole world goes for one cipher, then nsa can concentrate on creating and improving a single ASIC design for breaking it. we should be using hundreds of different algorithms. then they'd have to design hundreds of types of ASICs, build 100x more datacentres, increase taxation in USofA to 10x what it is now, yanks would rebel and overthrow that government and then there would be no more evil NSA. simples
Re:AES (Score:5, Funny)
Pick a government. If you trust the Russians use GOST. If you trust the Japanese use Camellia.
Then use all three of them in sequence and hope it would be quite difficult to have them all cooperate to break your encryption.
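A sketch of the cascade idea with independent keys per layer, using Python's "cryptography" package (which ships Camellia but not GOST, so two layers stand in for three here; CTR mode and nonce handling simplified):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def layer(algo, key, nonce, data):
    enc = Cipher(algo(key), modes.CTR(nonce)).encryptor()
    return enc.update(data) + enc.finalize()

k1, k2 = os.urandom(32), os.urandom(32)  # independent keys per layer
n1, n2 = os.urandom(16), os.urandom(16)

ct = layer(algorithms.Camellia, k2, n2,
           layer(algorithms.AES, k1, n1, b"attack at dawn"))
# Reading ct now requires the NSA-vetted cipher AND the
# Japanese-designed one to both fall (or their keepers to cooperate).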
Re: (Score:3)
The academic crypto community widely considers it secure after more than 10 years of effort to break it. (Note that Twofish does not look less secure, but what makes you think that the NSA could break AES and not Twofish? In fact nobody can break either of them.)
In fact, you are dumber than you appear. I've said it more than once, encryption is not a magic spell. Trust me, if anyone has the mathematicians and the hardware to break *ANY* encryption it is the NSA. It's been their job for more than 60 years. If you can show me internal NSA documents that prove otherwise, I'll believe you. In the mean time, believe that no encryption algorithm is "secure".
If it is off (Score:5, Insightful)
You can sleep soundly if your computer is off and/or unplugged. Otherwise, you should always be on your guard.
Keep your confidential data behind multiple levels of protection, and preferably disconnected when you are not using it. Never trust anything that is marketed as 100% safe. There will always be bugs to be exploited, if nothing else.
A healthy level of paranoia is the best security tool...
Re:If it is off - it might get stolen (Score:5, Insightful)
10000 laptops are stolen at airports every year. Presumably, they are off when that happens.
The NSA is not your problem; you are not important enough to be a target. When thinking about security, thieves are your problem. Theft happens, and happens often. Your computer is far more likely to get stolen than to be infiltrated by the NSA. And the solution is to encrypt your hard drive. Without encryption the thief will have access to everything you normally access from the computer – like your bank account. You wouldn't want that, would you? Today's CPUs all have AES-NI support, so there is no excuse for not encrypting your laptop's hard drive. Do it today and get some financial peace of mind.
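Checking whether your CPU has AES-NI before you commit is trivial (Linux; looks for the "aes" flag in /proc/cpuinfo):

with open("/proc/cpuinfo") as f:
    has_aesni = any("aes" in line.split()
                    for line in f if line.startswith("flags"))
print("AES-NI:", "yes -- encrypt away" if has_aesni else "no (old CPU?)")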
Re:If it is off - it might get stolen (Score:5, Insightful)
you are not important enough to be a target.
Wrong. You may become important in the future. So you are important enough to target. They are collecting data on everyone, and holding on to it. They just might not be actively going through all the data from everyone (or they might be, if they have enough computing power). But if it's recorded it doesn't really matter if they do it today or in 20 years. They've got you. "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." --Richelieu
Re: (Score:3)
You can sleep soundly if your computer is off and/or unplugged.
That's the good advice that nobody takes. Putin went one step further and recommended using typewriters for confidential data.
I'm more on the "never sleep soundly" side of things. Trusting you have a secure system is a good part of the problem. Even a typewriter had flaws.
Yes. (Score:2, Insightful)
You have to trust the integrity of Linus and the core developers.
If any of them let in such major flaws they would be found out fairly quickly... and that would destroy the reputation of the subsystem leader, and he would be removed.
Having the entire subsystem subverted would cause bigger problems... but more likely the entire subsystem would be reverted. This has happened in the past; most recently, the entire changes made for Android were rejected en masse. Only small, internally compatible changes were acc
There is no such thing as "Security"... (Score:4, Insightful)
Linux and RdRand (Score:5, Informative)
There was recently a bit of a kerfuffle over RdRand [cryptome.org].
Matt Mackall, kernel hacker and Mercurial lead dev, quit Linux development two years ago because Linus insulted him repeatedly. Linus called Matt a paranoid idiot because Matt would not allow RdRand into the kernel, because it was an Intel CPU instruction for random numbers that could not be audited. Linus thought Matt's paranoia was unwarranted and wanted RdRand due to improved performance. Recently Theodore Ts'o has undone most of the damage, but calls to RdRand still exist in Linux. I do not understand exactly whether there are lingering issues or not.
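My (possibly imperfect) understanding of the fix: RdRand output is now mixed into the pool with other entropy sources rather than trusted raw, so even a backdoored instruction can't single-handedly control what comes out. A grossly simplified sketch of the idea:

import hashlib
import os

def mixed_random(nbytes):
    rdrand = os.urandom(nbytes)   # stand-in for the CPU's RdRand output
    other = os.urandom(nbytes)    # interrupt timings, disk seeks, etc.
    # Hash the concatenation: controlling one input isn't enough to
    # control the output without also knowing the other input.
    return hashlib.sha512(rdrand + other).digest()[:nbytes]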
You can't trust any mainstream Linux distro (Score:2)
It's sad but you can't trust any mainstream Linux distro created by a US company, and you likely can't trust any created in other countries either. I'm not saying that as a pro-windows troll because you can trust MS's efforts even less.
I believe you can trust OpenBSD totally but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure though, and the man pages are actually better than any Linux distro I've ever seen.
The possibly bigger problem
Re:You can't trust any mainstream Linux distro (Score:5, Interesting)
I believe you can trust OpenBSD totally but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure though, and the man pages are actually better than any Linux distro I've ever seen.
Three points:
1) See the above discussion: you cannot trust anything that you did not create and compile yourself. With a compiler you wrote yourself. On a machine you created yourself from the ground up, that is not connected to any network in any way. OpenBSD does not make any difference if your compiler or toolchain is compromised.
2) Speaking of which, I cannot but note that OpenBSD had a little kerfuffle a while back, about a backdoor allegedly planted by the FBI in the OS (Source 1 [schneier.com]) (Source 2 [cryptome.org]). I am willing to bet that (a) it's perfectly possible (though not likely), (b) if it was done, it was not by the FBI and (c) that the devs @openbsd.org are, right now, taking another long and hard look at the incriminated code.
3) Finally OpenBSD lacking features and convenience? Care to support that statement? I have a couple of computers running OpenBSD here, and they are just as nice - or even nicer - to use than any Linux. Besides, you don't choose OpenBSD for convenience - you use it for its security. Period.
The possibly bigger problem is that no matter what OS you use you can't trust SSL's broken certificate system either because the public certificate authorities are corruptible. And before someone says create your own CA, sure, for internal sites, but you can't do that for someone else's website.
This goes way beyond a simple question of OpenSSL certificates - think OpenSSH and VPN security being compromised, and you will have a small idea of the sh*tstorm brewing right now.
Subversion possible but unlikely and temporary (Score:5, Insightful)
It's possible the NSA did something bad to the code, but it's not likely and it won't last.
For the "not likely" part, code accepted into Linux projects tends to be reviewed. The NSA can't be too obvious about any backdoors or holes they try to put in, or at least one of the reviewers is going to go "Hey, WTF is this? That's not right. Fix it." and the change will be rejected. That's even more true with the kernel itself, where changes go through multiple levels of review before being accepted and the people doing the reviewing pretty much know their stuff. My bet would be that the only thing that might get through would be subtle and exotic modifications to the crypto algorithms themselves to render them less secure than they ought to be.
And that brings us to the "not going to last" part. Now that the NSA's trickery is known, the crypto experts are going to be looking at crypto implementations. And all the source code for Linux projects is right there to look at. If a weakness were introduced, it's going to be visible to the experts and it'll get fixed.
That leaves only the standard external points of attack: the NSA getting CAs to issue it valid certificates with false subjects so they can impersonate sites and servers, encryption standards that permit "null" (no encryption) as a valid encryption option allowing the NSA to tweak servers to disable encryption entirely, that sort of thing. There's no technical solution to those, but they're easier to monitor for.
Re:Subversion possible but unlikely and temporary (Score:5, Interesting)
That won't even make it through the casual review. Most project maintainers don't like code that's impenetrable. Unless it's a fix for a critical bug that nobody else even has a proposal for a fix for, they're going to take one look at obfuscated code and toss it back with a "No thanks.". Especially if it's coming from a source they don't recognize, because messy complex obfuscated code also tends to be buggy unreliable unmaintainable code and they don't want the headache.
Re: (Score:3)
Obfuscated code is pretty obvious. There is a large body of conventions you have to follow to get anything into the kernel, precisely to prevent unreadable code. I have looked at a few kernel security patches and they were all clean and clear.
If you do not follow strong simplicity guidelines, a project the size of the Linux kernel will just fail by eventually becoming unmaintainable.
Pointless Worrying (Score:4, Insightful)
Re: (Score:3)
Specifically the leaks indicate - and this is based largely on speculation - that they have some sort of central database. That means they can collect keys opportunistically (Trojans, interception of cleartext communications containing the key like VM migrations, cracking via advanced mathematics, old-fashioned espionage, secret court orders, backdoors, etc) whenever they get a chance. So when they need to decrypt a communication, there's a chance the key is already in the database - even if they only obtai
Re: (Score:3)
The use of all that power isn't to brute force directly. It's to render possible other attacks that can reduce the key space, like side-channel attacks or known weaknesses in the RNG.
nothing's safe, but there are obvious things to do (Score:5, Interesting)
No, but there's no reason to think that Linux is worse than anything else, and it's probably easier to fix.
If I were Linus I'd be putting together a small team of people who have been with Linux for years to begin assessing things. From Gilmore's posting it seems clear that IPsec and VPN functionality will need major change. Other things to audit include crypto libraries, both in Linux and the browsers, and the random number generators.
But certainly some examination of SELinux and other portions is also needed.
I don't see how anyone can answer the original question without doing some serious assessment. However, I'm a bit skeptical whether this problem can actually be fixed at all. We don't know what things have been subverted, and what level of access the NSA and their equivalents in other countries have had to the code and algorithm design. They probably have access to more resources than the Linux community does.
Re: (Score:3)
The good news is that for linux, this can, in theory, be audited.
For Windows...no. Not a hope. None. At all. Likewise OSX.
Which means that any and every government that might possibly have any future dispute with the US is, right now, going over all their Windows servers and desktops in the military, diplomatic and intelligence services to see how much they can replace.
It'll take months just to write up the reports, and months more to run through the political commitees, and even then it'll be very undiplom
Government? What About Other Bad Guys? (Score:5, Insightful)
Surely we should also assume that there are other criminal and/or hacker groups with the resources or skills to gain similar access? Another case of "once they know it can be done, you can't turn back."
I honestly believe that we're finally at the point where the reasonable assumption is that nothing is secure, and that you should act accordingly.
Can you sleep soundly? (Score:5, Insightful)
I think that depends on what keeps you up at night.
In one of the earlier stories today there was a post making all sorts of claims about compromised software, bad actors, and pointing to this paper: A Cryptographic Evaluation of IPsec [schneier.com]. I wonder if anyone bothered to read it?
IPsec was a great disappointment to us. Given the quality of the people that worked on it and the time that was spent on it, we expected a much better result. We are not alone in this opinion; from various discussions with the people involved, we learned that virtually nobody is satisfied with the process or the result. The development of IPsec seems to have been burdened by the committee process that it was forced to use, and it shows in the results. Even with all the serious criticisms that we have on IPsec, it is probably the best IP security protocol available at the moment. We have looked at other, functionally similar, protocols in the past (including PPTP [SM98, SM99]) in much the same manner as we have looked at IPsec. None of these protocols come anywhere near their target, but the others manage to miss the mark by a wider margin than IPsec.
I even saw calls for the equivalent of mole hunts in the opens source software world. What could possibly go wrong?
Criminals, vandals, and spies have been targeting computers for a very long time. Various types of security problems have been known for 40 years or more, yet they either persist or are reimplemented in interesting new ways with new systems. People make a lot of mistakes in writing software, and managing their systems and sites, and yet the internet overall works reasonably well. Of course it still has boatloads of problems, including both security and privacy issues.
Frankly I think you have much more to worry about from unpatched buggy software, poor configuration, unmonitored logs, lack of firewalls, crackers or vandals, and the usual problems sites have than from a US national intelligence agency. That is assuming you and 10 of your closest friends from Afghanistan aren't planning to plant bombs in shopping malls, or trying to steal the blueprints for the new antitank missiles. Something to keep in mind is that their resources are limited, and they have more important things to do unless you make yourself important enough for them to look at. If you make yourself important enough for them to look at, a "secure" computer won't stop them. You should probably worry more about ordinary criminal hackers, vandals, and automated probe/hack attacks.
"pretty safe?" (Score:5, Insightful)
Yes, it's "pretty safe". It's not absolutely safe or guaranteed to be safe. But if your other alternative is a hidden-source OS, especially one in US jurisdiction, then OSS is "pretty safe."
fuck the NSA and US Govt (Score:5, Insightful)
Remember this? (Score:5, Interesting)
I'd like to find a more recent report of what they found.
NSA's actions damage their credibility forever (Score:4, Insightful)
Over the years the NSA has contributed what seemed like positive things to computer security in general, and Linux specifically. They have helped correct some algorithms to make them more secure, and implemented things like SELinux.
However, now that their other actions and intentions have been starkly revealed, any and all things the NSA does (and has done) are now cast into steep doubt. Which is unfortunate, because the NSA has a lot of really smart cryptographers and mathematicians who could greatly contribute to information security.
Now, however, their ability to contribute in any positive way to the open source community, or even to the industry at large, is gone forever. No one will trust them again. A sad loss for them, but also a potential loss for everyone. Nothing will quite be the same from here on out. And in the long run, without the help of smart, honest mathematicians and cryptographers, our security across the board will suffer. It's not that the revelations caused the damage, but that the NSA sabotaged things. Shame on them. Kudos to Snowden for helping us learn the extent of the damage.
Re:It has never been safe. (Score:5, Informative)
Every encryption protocol you use has been sabotaged to be readable by them. You don't really think they will try 200 trillion keys to break your stream, do you?
No. They modified the protocols (to make them more secure) and of course never explained the changes. They just mandated it.
Even the almighty NSA with its insanely high budget can't crack all the encryption. But it does make me wonder if I should avoid everything they recommend.
I suspect the NSA has developed custom hardware for the more common encryption types. Custom hardware was shown to work extremely well on DES by Deep Crack. http://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org]
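The back-of-envelope numbers on Deep Crack (rate figure from memory of the linked article, so treat as approximate):

keys = 2 ** 56              # DES keyspace
rate = 9e10                 # ~90 billion keys/sec for the EFF machine
print(keys / rate / 86400)  # ~9 days worst case, half that on average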
Re: (Score:3, Interesting)
Even that's no good if the problem is flaws in the spec rather than how it's implemented by OSes. If the NSA did things correctly they didn't have to meddle with actual Linux/BSD/etc source; they got flaws into the crypto definition itself that reduce the work needed to crack it. The better an OS follows the spec... the easier for the NSA to punch through.