New Linux Kernel Vulnerability
Stop Or I'll Noop writes "Paul Starzetz writes, "A critical security vulnerability has been found in the Linux kernel memory management code inside the mremap(2) system call, due to a missing function return value check. This bug is completely unrelated to the mremap bug disclosed on 05-01-2003, except that it concerns the same internal kernel function code." Full scoop here."
Update: 03/07 20:53 GMT by T : This vulnerability (and fixes) were mentioned briefly in an update to this earlier posting.
Many eyes, but wide open or tight shut ? (Score:5, Insightful)
So this is code that was looked at in great depth just recently, after a critical vulnerability was found. A few weeks go by and another hugely important hole is found...
Now I know the consequences of a problem bear little relation to its root cause, but I am a little surprised at how this managed to find its way through all these eyes looking at the offending code a week or so ago. Actually making it work as a security hole looks to be reasonably complex, (which may be why it wasn't found, I guess), but if one piece of code can have 2 major vulnerabilities in as many weeks, maybe it's time to start worrying about when Linux *does* take over the desktop...
I thought the automated 'Stanford Checker' (sp?) was ideal for this sort of problem? (Where the returned value from a function is ignored...) Perhaps it was flagged up but took some in-depth analysis for the kernel developers to realise it really was a problem...
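For anyone who hasn't seen this class of bug, here is a minimal userspace sketch in C (the function names are made up for illustration; this is not the actual kernel code). It shows exactly the pattern such a checker flags: a fallible call whose return value is silently discarded, and the one-line fix:

#include <stdio.h>

/* pretend_unmap() stands in for an internal call like do_munmap():
   it can fail, and the caller must handle that. */
static int pretend_unmap(void *addr)
{
    return addr ? 0 : -1;   /* fail on a NULL address */
}

static int remap_buggy(void *addr)
{
    pretend_unmap(addr);    /* BUG: return value ignored -- on failure
                               we carry on as if the unmap succeeded */
    return 0;
}

static int remap_fixed(void *addr)
{
    if (pretend_unmap(addr) != 0)
        return -1;          /* FIX: check and propagate the error */
    return 0;
}

int main(void)
{
    printf("buggy: %d (wrongly reports success)\n", remap_buggy(NULL));
    printf("fixed: %d (reports the failure)\n", remap_fixed(NULL));
    return 0;
}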
So, is this a master-stroke of the development model, with various people around the world all individually checking code and - hey! - someone found something? Or is it a "failure" where all those people missed it the first time around, and it's pure fluke it was found now? I'm still not sure, but I'll give the benefit of the doubt to the model - hey, it's been fixed!
Simon
A lot of problems in mremap... (Score:5, Insightful)
Damn (Score:4, Insightful)
dupe (Score:5, Insightful)
Re:Many eyes, but wide open or tight shut ? (Score:5, Insightful)
Amazing what a one line oversight can do (Score:5, Insightful)
Re:Many eyes, but wide open or tight shut ? (Score:5, Insightful)
My thinking is that Linux on the desktop is going to need a contingency plan for a widespread vulnerability, similar to what Microsoft does with Automatic Updates. I know it's not perfect, but I'll be damned if I can think of anything better. It's nice to think you can make a bullet-proof kernel, but also naive.
i beg your pardon? (Score:5, Insightful)
Um, the source code for the *fix* is listed *in* the article (you didn't read it, did you?)
I don't call posting fixed code and owning up to an exploitable coding error "covering up".
Re:Many eyes, but wide open or tight shut ? (Score:5, Insightful)
That's a long time. Maybe some crackers have been using this exploit during that time (or, of course, maybe they haven't).
Date format (Score:5, Insightful)
OK, time for me to tilt at a few windmills. Aside from the date being off by a year (the link quotes the date as 05-01-2004), is this supposed to be the 1st of May or the 5th of January?
In an international forum, and for clarity, use ISO 8601 dates [cam.ac.uk]. Therefore: 2004-01-05.
Sorry for the rant, but I work for an international company, and have spent sizable parts of meetings trying to figure out which version of a document is "most recent", 2/3/04 or 3/2/04.
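For what it's worth, emitting the unambiguous form takes one line in most languages; here is a sketch in standard C (nothing beyond POSIX assumed). A handy side effect is that YYYY-MM-DD strings sort chronologically under a plain string compare:

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[11];                 /* "YYYY-MM-DD" plus the terminator */
    time_t now = time(NULL);
    struct tm tm_now;

    localtime_r(&now, &tm_now);
    /* %Y-%m-%d is the ISO 8601 calendar date */
    strftime(buf, sizeof buf, "%Y-%m-%d", &tm_now);
    printf("%s\n", buf);
    return 0;
}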
Re:Old news (Score:1, Insightful)
Slashdot in general needs to get a grip. Far too much of this kind of thing is going on. It's getting close to the point where it's not worth spending any time at all on Slashdot.
MS vs Linux debugging. (Score:5, Insightful)
What winds up happening is I pay MS to produce a product that I have very little input on. I buy the off-the-shelf solution and then develop 50% of the solution anyway. And then it crashes, the documentation is incorrect (updates might be available on their web sites), and I have no way of figuring out what the issues are without paying more $s for something I already paid for. If I tried to pull the same trick, I would lose my client.
The Linux side is: someone spots the issue and, in most cases, makes us aware of it. People who have something more important than a paycheck at stake get to work on a fix for the problem. One or more potential fixes are put up. Sometimes a fix goes straight in with minimal review (it works, most liked it); sometimes the fix gets kicked around to hash out any potential problems, in the full light of day. Normally my apps do not break when the fix is rolled out.
I like the public-knowledge aspect of OSS. Yep, hackers have access to it too, but closed source never seemed to stop them; it just stopped me from protecting myself.
Maybe we need to look at the next step for OSS? Maybe there is a better model for building OSS? Maybe companies might start providing more donations (like cheap license fees) to a foundation that rewards freelance OSS programmers with cash for tackling certain problems (and does not pay until the code is peer reviewed and bug-checked to a reasonable extent). Maybe that would work better... Aren't certain organizations starting to do that?
Given how much OSS has accomplished in the past decade with its relative lack of fees and "structure", imagine what might happen if more companies started using their proprietary-software budget to put bounties out on features they needed in OSS. True, they would not want to make every feature public, but they would want to make enough of them public to dramatically cut everyone's costs (the GNU license is important because of this). Most companies actually have very nearly the same needs. But their money goes to legal and marketing fees more than it seems to go to actual development fees with off-the-shelf software. What an economic waste! Check out John Nash [nobel.se] for a rather different, rather OSS view of the world.
In the end, you are left with a decision. The programmers at MS are very bright. The programmers in OSS are very bright. The real difference is the perceived safety of being able to blame MS (whom you cannot hold responsible yet - name one successful lawsuit against MS for the failure of their software to function as advertised) versus the cost effectiveness of not paying for huge legal and marketing fees (as well as other corporate overhead having very little to do with getting better or more code). I am not against programmers getting paid. I am against sloth and leeches in a corporate setting destroying the market in which programmers get paid.
InnerWeb
eyes wide stupid? (Score:5, Insightful)
You may trust your authorized users, but do you trust their passwords, habits in storing passwords ("You don't expect me to remember that, do you? Where are my post-it notes..."), and wisdom to not extend trust to ANYONE?
Do you also trust users not to run a piece of malicious code that shows up purporting to be some groovy new Linux app that will do some groovy new thing? After all, it would only require a vanilla user account... and Linux never gets viruses, so why worry? ;)
I think you see where I'm going with this. Local exploits need to be patched too, and sysadmins all too frequently think they don't because they are "only local".
Can't agree more (Score:5, Insightful)
This, of course, is why nobody uses them.
*sigh*
As the evil dictator-like sysadmin at work, I have all my in-house intranet tools report ISO dates. I had a few people confused at first, but now it's the accepted format at work for things like archive directories (hundreds of directories named NN-NN-NN, NN.NN.NN or NNNNNN can get rather confusing - YYYY-MM-DD is so much easier).
Now, if only the rest of the world would follow suit.
While we're at it, can we have the ISO paper sizes adopted by the few holdouts, too? (I only wish...)
Re:Many eyes, but wide open or tight shut ? (Score:5, Insightful)
Now, I am not a hacker, but I think after I got local access via another exploit, I would use this current vulnerability to get root, install my back door/zombie code, etc. and leave quietly.
Every exploit is serious.
Re:"Windows users: want Security, install linux"?? (Score:5, Insightful)
"On a Windows box, there would have been no peer review."
I doubt that even Microsoft lets security fixes be released without having other Microsoft programmers review all the relevant code. A more accurate comment might be:
"On a Windows box, there would have been no public peer review."
Re:Many eyes, but wide open or tight shut ? (Score:2, Insightful)
In open source you get acknowledgment from the authors and the community; in closed-source land there aren't many places to look for an ego boost other than hacker boards.
What I mean by the above is that I think the chances that the "good guys" spot an error before the "bad guys" find and exploit it are greater in OSS than in closed-source software.
Re:Amazing what a one line oversight can do (Score:5, Insightful)
That figure depends largely upon how many customers you have and how sophisticated your patch-distribution system is. In pre-internet days, a critical problem might have meant shipping a floppy disk to each of your customers (of course, this reduced the chance of problems being classified as "critical"). Now, most security problems in FreeBSD can be fixed in two minutes using 50kB of bandwidth and binary patches [daemonology.net]. Most operating systems fall somewhere in the middle, distributing entire [apple.com] files [microsoft.com], or even complete [redhat.com] packages [debian.org], every time a one-line security fix is necessary, with the effect of requiring a 50-fold (or more, in the case of packages) increase in bandwidth (and, over slow connections, time).
Someone from Microsoft explained this to me as "we've got huge amounts of bandwidth, so we really don't need to save bandwidth by using patches"... it doesn't surprise me that Microsoft ignores the fact that delta compression would benefit their customers, but I expected better from Apple or the Linux community.
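To make the bandwidth argument concrete, here is a toy sketch of the idea behind a binary delta (purely illustrative -- real tools like bsdiff are far smarter, and this naive version assumes the old and new binaries are the same length): emit only the offsets and bytes that changed, instead of shipping the whole file.

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s oldfile newfile\n", argv[0]);
        return 1;
    }
    FILE *oldf = fopen(argv[1], "rb");
    FILE *newf = fopen(argv[2], "rb");
    if (!oldf || !newf) {
        perror("fopen");
        return 1;
    }

    long off = 0, changed = 0;
    int a, b;
    while ((a = fgetc(oldf)) != EOF && (b = fgetc(newf)) != EOF) {
        if (a != b) {
            printf("%ld\t%02x\n", off, b);  /* one delta record: offset, new byte */
            changed++;
        }
        off++;
    }
    fprintf(stderr, "%ld of %ld bytes differ\n", changed, off);
    fclose(oldf);
    fclose(newf);
    return 0;
}

A one-line source fix usually perturbs only a small region of the compiled binary, which is why a 50kB patch can stand in for a multi-megabyte download.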
Re:Important to Remember (Score:5, Insightful)
You know, there are -- among the many, many, many open vulnerabilities out there -- two which are particularly problematic for Windows users. (There are many more out there, but I figure I'll focus on these two for now.)
The first one [slashdot.org] allows an attacker to mask the real address of the site you're viewing in IE. So, go and open up a spam claiming that PayPal needs you to update your credit card number, and you'll actually see PayPal.com as the URL. The second one [slashdot.org] allows an attacker to crash IE and execute arbitrary code when a user views a picture on a web page under IE.
As a computer programmer, I understand how hard it is to create 100% bug-free code. Any system as complex as Windows or Linux is bound to contain some bugs and/or vulnerabilities. However, when an exploit is found in Windows (to the best of my knowledge those two exploits have yet to be patched), it takes forever to get a fix to the public.
On the other hand, as soon as I heard of the vulnerability in the Linux kernel, I had the following options:
- wait for my distribution (Slackware, in my case) to ship an updated kernel package;
- disable the affected functionality, where practical;
- patch and rebuild the kernel myself.
Now, whereas I am pretty certain Slackware will have a package available for me to update my kernel in another 48-72 hours, and if it's absolutely urgent for me to fix it I can either disable it or fix it myself (something Windoze won't let you do -- although the nature of the vulnerability in the kernel may make disabling it impractical; but still, at least you have the option), Microsoft has not, to the best of my knowledge, fixed these vulnerabilities, even though it's been months.
This is why Open Source Software is so great. Technically sophisticated users hold the destiny of the software in their own hands. And I haven't even gotten started on how great it is not to have to submit annoying feature requests, but to make software do what you want it to do.
Re:Story is a troll!!!!! (Score:3, Insightful)
Re:MS vs Linux debugging. (Score:4, Insightful)
Re:Not a big deal really (Score:1, Insightful)
Re:MS vs Linux debugging. (Score:3, Insightful)
Nobody claims that peer review results in code which is free of bugs or security problems. The claim is that the peer review model results in fewer bugs and security problems than the closed-source model, given equivalent manpower.
Cryptographers tend to be the most paranoid, security-conscious types, and any (respected) cryptographer will tell you that peer review is an absolute necessity. Peer review doesn't guarantee unbreakable algorithms, but if a dozen pairs of brilliant, objective eyeballs review an algorithm and don't find any attacks, it's a hell of a lot more likely to be secure than some closed, proprietary algorithm.
It sucks that I have to update the kernel of all my Linux servers. But this is reality when you use complex software. I still feel much safer using OSS with a peer review model, because this way I don't have to trust that a company with an agenda (i.e. profit) has my best interests in mind.
Jason.
Comment removed (Score:2, Insightful)
Re:eyes wide stupid? (Score:5, Insightful)
Sysadmins who trivialize these 'moot' issues should realize that at some point, if not today then maybe next year, they are going to have to defend their judgement to an angry CEO who has just lost big money. I don't believe 'total security', even at the software level, can be attained. All we can do is keep on patching, and disclose these vulnerabilities in a responsible and efficient manner.
Re:Many eyes, but wide open or tight shut ? (Score:2, Insightful)
And when something breaks, you'll have no idea what caused it, and you'll have to jump through some hoops to make it work again.
No thanks.
Security through obscurity? (Score:3, Insightful)
Well, I think this proves that the "security through obscurity" model is, at best, ineffective. If it sat there so long, in the open for anyone to see, and the "good" guys didn't see it, what makes you believe that the "bad" guys would spot it?
I don't have hard data to prove this, but I believe that the following two points are true: (1) there are more good guys than bad guys, or otherwise society as we know it wouldn't exist; and (2) good guys are smarter than bad guys, because our current social organization tends to favor being honest. Good guys get good salaries, bad guys are sent to jail.
So, if it took many smart good guys five years to find this vulnerability, how many years would it take a few stupid bad guys to find it?
Re:Important to Remember (Score:3, Insightful)
Two things:
1. Why aren't "days" enough to do proper testing? I agree that minutes aren't enough, and neither are hours usually, but there are cases where I would argue they are, depending on the kind of change, the test suites, and the QA requirements.
2. In OSS, most times a patch isn't released in the conventional meaning of the word ("released", I mean). The patch is made available (often when it is checked into CVS). What will be released is a new tarball or announcement, after the QA process. Me, I consider it released when my favored distribution has an updated package for it on its security site. And contrary to some Microsoft fixes, I have never had a Mandrake or Debian security update break my installation - so the QA process seems to work.
It's easy to mix those up, because in closed source you don't see the step where a patch/fix is internally distributed from devs to QA. In OSS you do, but that doesn't imply an official release.
But on the other side, the difference in availability matters. Having access to the patch before a security update is officially released gives me a choice. If it's critical enough for my infrastructure, I can dig into it, do my own QA, and deploy it even before an official announcement is available.
Heck, if it's important enough, I can start even before a patch is available, because I have the source. And if you follow security lists, you will notice that it is not unusual for a patch created this way to find its way into the official release (in other words, the time to release is cut short, because a suggested fix is already available the moment the core team starts looking at the problem).
Re:Many eyes, but wide open or tight shut ? (Score:3, Insightful)
Re:Are we sure? (Score:4, Insightful)
Oh oh, I found a bug in Win 3.11... oh wait... that's an old release? Dang... Nobody will want to hear about that...
Tom
Re:Double standard? (Score:3, Insightful)
Are you being deliberately naive? To run a fixed kernel, you have to boot the fixed kernel -- you do understand that, correct? However, for anything other than loading a new kernel, Linux does not need to be rebooted, and that includes system library updates, distribution upgrades, etc.
Re:Many eyes, but wide open or tight shut ? (Score:5, Insightful)
judicious use of 'noexec' (Score:4, Insightful)
Enough already ... Obscurity has its place (Score:5, Insightful)
Don't forget
So
Re:Patched in 2.6.3 apparently (Score:2, Insightful)
That doesn't mean someone won't release an exploit that doesn't trigger PaX, though. It's just the way this particular example of the exploit is coded that PaX (part of grsecurity) is catching. It doesn't mean someone can't code up a version of the exploit that won't trip PaX and hack you to bits!
Upgrade to the latest grsecurity; there are patches for the latest 2.4.25 kernel.
Oh, yes, send me a binary... (Score:5, Insightful)
How does this compare to sending me a fscking HTML-with-VBScript message that gets executed while in the preview pane of Outlook Express and downloads another executable that has the power to install itself as a device driver and run in kernel mode?
Even if I have to click on the attachment, it will execute right away!
Re:judicious use of 'noexec' (Score:4, Insightful)
Nope. Sorry, won't work. As long as users have execute access to ld-linux.so.2 (which lives in /lib) or the equivalent on non-Linux boxes, they can run any ELF executable, noexec or not.
And AFAIK, ld-linux.so.2 has to be executable by all in order for the system to function normally, but I am not quite sure there.
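For reference, the trick being described amounts to something like the following sketch (the target path is made up for illustration, and behaviour varies by distribution and kernel version; as the follow-up below notes, on at least some systems this now fails with 'Operation not permitted'):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the dynamic loader to run a binary directly. If this worked
       on a noexec mount, the mount flag would be toothless for ELF
       binaries. "/mnt/noexec/somebinary" is a hypothetical path. */
    char *args[] = { "/lib/ld-linux.so.2", "/mnt/noexec/somebinary", NULL };
    execv(args[0], args);
    perror("execv");    /* reached only if the exec itself failed */
    return 1;
}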
Mart
Re:judicious use of 'noexec' (Score:3, Insightful)
Scratch what I just said. It seems that this hole is closed. I get 'Operation not permitted' when I try to run an executable. It appears that noexec really means noexec.
Mart
Re:Enough already ... Obscurity has its place (Score:2, Insightful)
Whether or not you use a password has little to do with obscurity. The question is whether or not the algorithm used to make the key is publicly known.
I do agree with you that "STO" is not necessarily bad. If a company is obtaining security through obscurity, then not only do you not have the key to the lock, you don't even know how the lock works.
The premise behind open source, as I understand it, is using algorithms that are very difficult to crack even when you know exactly how they work. With closed source, the danger is that the designers might choose to assume that their algorithm will remain secret, and base their security on that assumption. If closed-source developers assumed that their software would eventually be operating in a worst-case scenario (open source), and used algorithms that maintained security even after the source was compromised, closed source would actually be considerably more secure. Deadlines, profit motives, and the push for faster execution give incentive to trim the edges, though, so security is skimped in favor of other considerations, because they figure they can rely on obscurity to make up for good code.
That is where the problem lies. Not in the fact that they don't show us the code, but in the fact that the code wasn't good to begin with. Truth is, the perfect coder WOULD be better off developing closed source. I wouldn't waste your time searching for a perfect coder, though... God's the only one who might fit the bill, and my guess is he's not employed by Microsoft.