Torvalds on the Linux Security Process

darthcamaro writes "Linus Torvalds thinks that Linux kernel security disclosure should be completely open and he really doesn't like the vendor-security model of having a time embargo on security disclosure. 'I think kernel bugs should be fixed as soon as humanly possible, and any delay is basically just about making excuses,' Torvalds wrote. 'And that means that as many people as possible should know about the problem as early as possible, because any closed list (or even just anybody sending a message to me personally) just increases the risk of the thing getting lost and delayed for the wrong reasons.'"
  • by Anonymous Coward on Friday January 14, 2005 @11:21AM (#11361739)
    ...he probably knows what he's talking about.
    • by sphealey ( 2855 ) on Friday January 14, 2005 @11:26AM (#11361819)
      Sorry to have to disagree, particularly about someone who is clearly far above my level in most respects, but .... Linus doesn't administer any significant number of Linux systems, nor is he responsible for any significant-sized networks. While I agree that full disclosure in a reasonable period of time (say 90 days) is best, immediate disclosure can leave thousands of systems vulnerable with no patches and no reasonable way to get them patched immediately even if a fix is available.

      sPh

      • by ichimunki ( 194887 ) on Friday January 14, 2005 @11:37AM (#11361979)
        Disclosure or not, if there is an exploit possible your systems are vulnerable. Would you not prefer knowing right away that your system is vulnerable? The exploit may have been discovered some time ago by a black-hat--he won't wait 90 days for you to have a chance to patch it before exploiting it. What you're saying makes it sound like the bug doesn't exist until somebody talks about it.
        • Disclosure or not, if there is an exploit possible your systems are vulnerable.

          I agree. Not only are your systems vulnerable, but someone knows they are, too. Maybe only the good guys know, but there's enough chance that there's a black hat who found out too.

          If you get a message as simple as "there's a security hole in Tux that could potentially lead to a remote root exploit," even if there is no fix, at least you can disable Tux and move your pages over to Apache--just for now until the patch is released...

        • I think you're damned if you do and damned if you don't.

          You're damned if you disclose: black hats can read the kernel newsgroups, Bugtraq, and other popular outlets just like the rest of us do.

          You're damned if you don't disclose and a breach occurs. The public will cry, "Security through Obscurity!"

        • I agree with you partially, but here's where I think you and Linus are missing the bigger picture: Just like Microsoft Windows, the majority of installed Linux systems are not being hovered over by people who read every security advisory. I, for example, watch for mail from my vendor telling me of a patch for a security issue. That's how I find out.

          Now, I understand the need to open up the process in order to fuel the power of open source development, but I'm worried about script kiddies having a constant...
          • Just like Microsoft Windows, the majority of installed Linux systems are not being hovered over by people who read every security advisory.

            I don't usually read security advisories--I just run "apt-get update && apt-get dist-upgrade" once a day. Takes a few seconds. Windows update can do the same, right?

            --Bruce Fields
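
            For anyone wanting the same, a crontab entry like this is all it takes -- a minimal sketch, assuming a Debian-style system and that you're comfortable applying updates unattended:

                # /etc/crontab: fetch new package lists and apply upgrades nightly
                0 4 * * * root apt-get update && apt-get -y dist-upgrade

            The -y flag answers "yes" for you; drop it if you'd rather review each upgrade by hand.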

            • by ajs ( 35943 ) <ajsNO@SPAMajs.com> on Friday January 14, 2005 @06:10PM (#11367995) Homepage Journal
              "I just run "apt-get update && apt-get dist-upgrade" once a day"

              Ah, what a nice world you live in.

              I do that at home. At work, I would be in a world of hurt if I did that. I have thousands of machines running a mix of in-house and external software which customers rely on for mission-critical stuff. I can't install every little patch just because it might make my frobnitzer go faster, and even when I WANT a fix, it's got to be tested in various production configurations first to see if it breaks something (you'd be surprised how often a security fix breaks something).

              So I read security updates from the vendor, and install what needs to be installed as soon as I can. If those security updates are coming to me days, weeks or even months after the script kiddies started playing with the exploit code... ugh.
        • by m50d ( 797211 ) on Friday January 14, 2005 @02:12PM (#11364453) Homepage Journal
          If there is an undisclosed exploit, your systems are vulnerable to whoever has done a deep kernel audit and found it. If there is a disclosed exploit and no patch, your systems are vulnerable to every script kiddie out there. In the case of services which can be turned off you might be better with disclosure, but how the hell do you plan to turn off your kernel? I know which situation I prefer.
        • What you're saying makes it sound like the bug doesn't exist until somebody talks about it.

          Bugs, and problems in general, cannot be fixed until someone starts talking about them. Trying to limit knowledge of the problem also limits the ability to find a solution.

      • by MBAFK ( 769131 ) on Friday January 14, 2005 @11:39AM (#11362006)
        The systems would still be vulnerable with no patch available. The administrators might not know there was a vulnerability but an attacker may know about it.

        Keeping it a secret might put you at a greater risk - you don't know you might be in trouble but the bad people know about the problem.

        So reducing the number of people who know about the problem could make it worse rather than better.

        • I understand your point, and I am not saying you are wrong. But the cracker dystopia (I won't say "community") is neither infinitely fast nor capable of reading security researchers' minds. Flaws which are discovered in the lab or via code review may well not be known to the crackers until they are published. Giving the kernel wizards and distribution integrators some time to develop patches _will_ improve security in this case.

          If the researcher has evidence, or strongly suspects, that the exploit is in...
        • Not for a kernel flaw. With an unpatched kernel flaw there is *nothing* you can do even if you know about it. And the number of attackers which will know of an unpublished kernel flaw is far smaller than the number of attackers which will know of a published one.
      • by A beautiful mind ( 821714 ) on Friday January 14, 2005 @11:41AM (#11362028)
        If someone had linked to the full discussion, it would have turned out that he suggested a five-working-day embargo on disclosure, MAX. They say, and I think I have to agree, that it's enough time for vendors to catch up; anything more just makes the problem worse. They will disclose everything after that embargo, of course. There are a lot of good ideas and views, and Linus refined his opinion more than once, so it would be good to read the original discussion and not react based on the submitter's pick.

        Just to note, I've been reading LKML for over a year now and I read most of the mail in this thread as well.
        • At one point he said he wants complete openness, and everything else is just bad practice. Then he is fine with a short closed period.

          He says he never wants to wait to apply a fix, because he wants to give users the best kernel possible. Then he says that it probably doesn't matter that the kernel.org kernel gets fixes last, because most users run vendor kernels anyway.

          But I think what annoys me most is that he is constantly claiming he is not forcing his (by his own admission extreme) views on anybody. But...

          • I couldn't quote the letter word for word when he explained what you feel is an inconsistency, but if you accept my interpretation of it: he said that his view about total openness may be a bit extreme, SO he suggested this compromise between full embargo (vendor-sec) and total openness, to strike a healthy balance.

            I just want to point out as well that the reason Linus doesn't want to have anything to do with vendor-sec is politics.

            I have to point out on a side note, though it seems to fit here, that Andres Salomon...
        • If someone had linked to the full discussion, it would have turned out that he suggested a five-working-day embargo on disclosure, MAX.

          I would have thought the embargo should be n-1 days, where n is the number of days it would take a company to serve the individual courteous enough to notify the vendor before the public with a gagging order on some frivolous pretext (DMCA etc) :-/

          Phillip.
      • While I agree that full disclosure in a reasonable period of time (say 90 days) is best, immediate disclosure can leave thousands of systems vulnerable with no patches and no reasonable way to get them patched immediately even if a fix is available.

        Those systems are *already* vulnerable, it's just that not everyone knows it yet.

        And I especially don't believe that a vulnerability is still a secret after it's been disclosed to a mailing list (even a "closed" one). That just means all a black hat has to do...

        • And why on earth would it be reasonable to take 90 days to produce what is usually something like an obvious 5-line kernel patch?

          Again keeping in mind that I am not disagreeing with you, it is still necessary to consider that (1) the patch is probably more than "5 lines", because otherwise the bug would have been found earlier; (2) security patches almost always IME have substantial side-effects and must be tested carefully; (3) it takes longer than 5 days to QC, prepare, and release a patch for an enterprise-class...

          • by Anonymous Coward
            Hey, it's great that you've got your MCSE and all, but that's not how things work or should work in open source development.

            A developer creates a fix and submits it to the maintainer. Note that the more people who know, the sooner on average somebody who can fix the bug will fix it and submit it.

            A maintainer checks the patch, and tries it out. If it seems to fix the problem, it's checked into the main code base, or possibly just to the development branch depending on severity and complexity...
          • And why on earth would it be reasonable to take 90 days to produce what is usually something like an obvious 5-line kernel patch?

            (1) the patch is probably more than "5 lines", because otherwise it would have been found earlier.

            Not true. To quote Linus, "In the case of "uselib()", it was literally four lines of obvious code - all the rest was just to make sure that there weren't any other cases like that lurking around." From other patches I've seen, this is typical. Often it's just a small oversight...

      • The CIA, Clinton, and Bush administrations were aware of Osama Bin Laden and the risk to the country that he represented. They did not disclose this information, nor prepare for the inevitable. It didn't stop Osama from doing what he was going to do.

        Had we acted pre-emptively, we would have avoided the years of war that ensued.

      • First off, let me remind you of something: if a good guy knows about a problem, you can bet that at least two bad guys know about it.

        Secondly, the number of "hackers" (I know, wrong term, but security people refuse to use the proper term) is actually quite low. Yes, this is true; go to HOPE, CCC camps, and other events--9 out of 10 people you meet are wannabes, and the number can only have increased since the last ones I attended a few years ago. Most of them out there are wannabes and script kiddies simply standing on the...
      • Huh? 90 days? What for?

        Most security exploits are just another buffer overrun, or just another race. That stuff can be fixed in a matter of 5-60 minutes by a competent developer. Lots of things can happen in 90 days, and the black hats definitely won't give you that long.

        I'd give 3 or 5 days maximum. There's no reason why a patch can't be made in that time.
      • As soon as more than one person knows them, secrets don't exist.

        If there is one person out there who knows about a vulnerability and/or exploit, there is more than one, and that means your systems are at risk instantly, embargo or not.

        I'd rather know right away my systems are at risk...worst case, I walk over and yank the ethernet cable until the issue can be resolved. Waiting even 5 days means I'm vulnerable for those 5 days, which is unacceptable. I want the vulnerability fixed immediately. My company's...
        • Sorry if I got everyone hung up on the "90 days" part of my argument. Make it 45 or 30 or 25 if you want. But I am not going to buy that a fully-tested patch for a RedHat system running production Oracle in a high-volume mission-critical environment is going to get out the door in 3 days, short of an impending alien invasion.

          sPh
          • The article is discussing the kernel. How can Red Hat ever test every possible configuration anyway? They have no way of knowing whether you're running Oracle in a mission-critical environment or running it on your desktop to play MP3s.

            In your example it would be up to Oracle to certify and support the kernel released by Red Hat, not up to Red Hat to make sure their patch plays nice in every possible way with every possible application.

        • I'd rather know right away my systems are at risk...worst case, I walk over and yank the ethernet cable until the issue can be resolved. Waiting even 5 days means I'm vulnerable for those 5 days, which is unacceptable. [...] And yes, I do administer a significant number of Linux systems (and Solaris) so I know what I am talking about.

          I doubt it. You're a liar, an idiot, or a cracker. Sometimes you can't take systems down even if you know there is a root level exploit. Tell Citibank and NASDAQ to take...

  • What !? (Score:5, Funny)

    by Squatchman ( 844798 ) on Friday January 14, 2005 @11:24AM (#11361780)
    kernel bugs

    Thou shalt not speak ill of the linux kernel!

    Oh wait, it's Linus.
  • by teiresias ( 101481 ) on Friday January 14, 2005 @11:24AM (#11361791)
    any closed list (or even just anybody sending a message to me personally) just increases the risk of the thing getting lost and delayed for the wrong reasons.'"

    I think he really hit the nail on the head with that comment. I can't tell you the number of times CRs or issues have been sent to me through e-mail which have either been lost or forgotten about on my part (sorry). However, using tracking programs which the entire group has access to (we use Mantis [mantisbt.org]), not only are the problems kept fresh, but people will remind me of them or, if they are feeling particularly bold, fix them themselves.
  • by Nosf3ratu ( 702029 ) <Nosf3ratu AT sbcglobal DOT net> on Friday January 14, 2005 @11:25AM (#11361802)
    "Quite frankly, nobody should ever depend on the kernel having zero holes," Torvalds wrote. "We do our best, but if you want real security, you should have other shields in place."

    Bingo.

    • by Stevyn ( 691306 ) on Friday January 14, 2005 @11:29AM (#11361856)
      Yeah, like Service Pack 2. That's got a firewall and everything!
      • The XP firewall is much maligned, but I have to say it's been a boon to me. In my day-to-day work I install naked XP Virtual machines over and over. If I do a fresh install with the network disconnected, then enable the firewall, THEN connect and patch the machine up with WU, no problem at all.

        If, however, I just connect it directly to the network (behind a firewall, large corporate), the VM is as good as dead.

        So the OEM installs with XP SP2 and the slipstreamed install media which is apparently being distributed...
    • by Erik Hensema ( 12898 ) on Friday January 14, 2005 @11:45AM (#11362099) Homepage

      You should never depend on a single point of failure. If kernel security is your single point of failure, then you're at risk.

      However, you also shouldn't depend solely on "other shields in place" for security. Those shields might fail too.

      A simple example is Apache. It is almost never run as root, as a security measure (easy enough to verify; see the sketch below). However, when an attacker succeeds in gaining web-user privileges, you'll have to depend on kernel security.

      In short: try to make software as bugfree as possible and use multiple barriers that will have to fail before an attacker can 0wn your machine.
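
      The check is a one-liner on a running box -- a sketch; the process name varies by distro (httpd, apache, apache2):

          # list the user each server process runs as: typically one root
          # parent (it had to bind port 80) and unprivileged workers
          ps -C httpd -o user=,comm=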

      • You should never depend on a single point of failure.

        Only pirates depend on points of failure ;-)
        I think you meant "You should never depend on a single layer of protection." Multiple (and redundant, if possible) protections are less likely to fail at the same time.

        OTOH, multiple points of failure are no better than a single point of failure...
        • I think you meant You should never depend on a single layer of protection. Multiple (and redundant, if possible) protections are less likely to fail at the same time.

          That's why I always wear condoms and my gf is on the pill.

          Oh, sorry, didn't realize we were still dealing with kernel security...

  • But... (Score:5, Insightful)

    by nuclear305 ( 674185 ) * on Friday January 14, 2005 @11:26AM (#11361813)
    "And that means that as many people as possible should know about the problem as early as possible, because any closed list (or even just anybody sending a message to me personally) just increases the risk of the thing getting lost and delayed for the wrong reasons.'"

    I don't disagree with what Linus is saying, but what difference does it make if 10 people are informed rather than 10 million when it still doesn't change the fact that only a select few can change the official kernel source? People in production environments aren't going to apply a patch created by Joe in his basement, they're going to want an official kernel patch.

    If the ones responsible for the affected part of the kernel are slow to handle a security issue, full disclosure IMHO is a bad thing.

    One could argue that full disclosure would motivate those responsible to fix the problem faster, but this is not always the case.

    If Linus is the only person that can change a specific part of the kernel, what good does notifying the world instead of just him do?
    • Re:But... (Score:2, Interesting)

      But...

      Linus isn't the only one who can change any part of the kernel. You may be correct in saying that an enterprise-level operation isn't going to accept a patch from you or me.

      I'd expect that most enterprise level IT operations have developers on staff (or available through some sort of outsourcing or support contract) that may add a temporary fix until an official patch is available.

      At the bare minimum, even if they don't want to craft a patch themselves, they can evaluate for themselves the security...

    • Re:But... (Score:3, Interesting)

      by duffbeer703 ( 177751 ) *
      Folks at Red Hat, Suse/Novell or wherever can produce quick patches to their distros, provided that they know of the problem.

      Few people are using the "official" kernel these days anyway.
    • The issue is, with Linux, you never know exactly who the right ten will be to solve the problem. There's no infrastructure to mandate that a specific group of ten people is responsible for the code. And if only ten select people know about the problem, and they also know that they're the only ones, they can put it off for a while without being as worried.

      It's a cool system, and 2.6 shows it works. I'd look up the /. article from a few weeks ago comparing Linux kernel bug counts to most commercial products'...
    • Re:But... (Score:5, Insightful)

      by DrSkwid ( 118965 ) on Friday January 14, 2005 @11:44AM (#11362085) Journal
      If Linus is the only person that can change a specific part of the kernel, what good does notifying the world instead of just him do?

      Because some of us can change our own kernels while we wait for the official patch.

    • If a closed vendor is "slow" on fixing an issue guess what happens? You wait. Hopefully there is disclosure so you know what to protect "from the outside" while you patiently wait for the vendor to release a fix.

      On the other hand, if the maintainer responsible for an errant kernel module is "slow" then guess what happens? Someone else fixes it. Most importantly, you can fix it if you choose to do so. If you know there is a problem, you have the source, and ultimately you can fix it. You don't have to...
    • Even if the average user/admin can't *fix* the problem themselves, being aware of the problem is still a step in the right direction. It gives them a heads-up so they can make sure they're not compromised, take other steps to prevent being compromised, and actively seek a patch so they can fix it the moment the patch is released.

      For example, if I knew of a brand new critical security flaw in Apache, I personally do not have the expertise to fix it. But I do have the expertise to close port 80 on my router...
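
      For a Linux box doing its own filtering, the stopgap is similarly small -- a sketch, assuming iptables and root access; remove the rule once a fixed package is installed:

          # temporarily refuse all inbound web traffic until a patch is out
          iptables -I INPUT -p tcp --dport 80 -j DROP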
  • by thegnu ( 557446 ) <thegnu.gmail@com> on Friday January 14, 2005 @11:27AM (#11361829) Journal
    I've never really gotten the mechanism whereby software giants keep their software secure by not telling anyone about the security hole until it's fixed. First, we know about information leaks. Secondly, it's terribly profitable for some people to sit around and figure out security holes so they can steal from people.

    Especially in the position that Microsoft is in, with the lion's share of the market, and a supposed interest in keeping my data secure, I would assume that the first move would be to notify their customers of any security hole that might be potentially harmful to me. Given the number of them, I guess it would keep my mailbox full, but I wouldn't mind.

    Oh, I don't use Windows. Nevermind. Yay for Linux (and Linus)!
  • by Anonymous Coward on Friday January 14, 2005 @11:28AM (#11361838)
    and this is it. I totally agree with his ideas and would prefer his solution -- total openness.

    "Otherwise it just becomes politics..." -- Linus Torvalds
  • by krgallagher ( 743575 ) on Friday January 14, 2005 @11:30AM (#11361864) Homepage
    "Quite frankly, nobody should ever depend on the kernel having zero holes," Torvalds wrote. "We do our best, but if you want real security, you should have other shields in place."

    Why would you have all your ports exposed with nothing running on them? I have a hardware firewall. I only run HTTP and FTP when I need them and then turn them off when I am through. It is really just simple security. Be smart. Oh yeah, I subscribe to security lists and patch when security patches are released.
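
    Turning those off is one command each on a typical sysvinit-era distro -- a sketch; init-script names vary by distribution:

        /etc/init.d/httpd stop    # 'apache2' on Debian-based systems
        /etc/init.d/vsftpd stop   # or whichever FTP daemon you run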

    • This doesn't work in all situations--for instance, at a university where you have a system that a large number of students can log into.

      Kernel local root escalations don't affect MOST systems, but those few systems where a large number of arbitrary users can log in to work on projects and such are highly vulnerable to them. Especially in an academic environment, where a student might be tempted to try to crack root to peek at someone else's work.

      -Z
        • There are further security measures effective in those situations as well: chroot jails and restricted shells come to mind. Not having a compiler installed and having a monolithic kernel with no module-loading support also help. Big university systems should have admins a lot more knowledgeable than me, so I expect them to have stronger measures in place.
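
          A bare-bones illustration of the chroot idea (assumes root; a real jail also needs whatever binaries and libraries your users actually require):

              mkdir -p /jail/bin /jail/lib
              cp /bin/sh /jail/bin/
              # copy in the libraries sh links against -- 'ldd /bin/sh' lists them
              chroot /jail /bin/sh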
          • Big university systems should have admins a lot more knowledgeable than me, so I expect them to have stronger measures in place.

            Hardly.... at the school I went to, and I suspect many others, the systems administrators (on the systems students had access to) were just work-study students, many of whom barely knew what they were doing.

            Also, not having a compiler installed isn't always an option--a lot of the higher-level CS classes that I took required us to turn in IRIX or Solaris binaries along with the...
          • (which always made things interesting when the operating systems classes started learning about fork(2))

            Always happens, all around the world. That's the time of year when the sysadmins storm out of their office in rage, walk into the students' lab, and yell "Who the fuck is [username]?". After a cowering [username] identifies himself, the sysadmin yells in his face "Next time run your homework on the workstation where you are sitting, not the central server you @%#$^@&#!@#$!^#^$&##%#%!@$!!!" and...


    • there's more to life than websites

    • Why would you have all your ports exposed with nothing running on them?

      A port "exposed" with nothing listening on it (ie, SYN packets to this port get an RST answer) is not any more exposed than if it was denied (SYN gets no answer). The only attack possible on a closed port is one based on an IP stack security bug, in which case ports with something listening are just as vulnerable.
  • On Linus (Score:2, Insightful)

    by stratjakt ( 596332 )
    Linus is the only figure in the entire OSS movement who doesn't have his head shoved up his ass.

    Everything he says is practical. He's all about technology, not new-age computer "philosophy" or rhetoric. He doesn't go around promoting linux because he thinks "M$ is Gayer Thean AIdz!@!!", he does so only when he truly believes it's a good solution.

    Here he is showing it again. He believes in full disclosure of bugs, not for any philosophical bullshit or imaginary right-to-know, but because it gets bugs fixed...
    • Re:On Linus (Score:3, Informative)

      by GoofyBoy ( 44399 )
      >He believes in full disclosure of bugs, not for any philosophical bullshit or imaginary right-to-know,

      No he doesn't.

      From the article:
      "I'd be very happy with a 'private' list in the sense that people wouldn't feel pressured to fix it that day," Torvalds wrote. "And I think it makes sense to have some policy where we don't necessarily make them public immediately in order to give people the time to discuss them.

      • "I'd be very happy with a 'private' list in the sense that people wouldn't feel pressured to fix it that day," Torvalds wrote.

        I read that as he'd accept a private list solely for internal discussion in the very short-term; I'm not convinced it suggests Linus opposes full disclosure at the earliest convenient opportunity. For example, he continues this quote with:

        But it should be very clear that no entity (neither the reporter nor any particular vendor/developer) can require silence, or ask for anything more than 'let's find the right solution.'

        • >I read that as he'd accept a private list solely for internal discussion in the very short-term;

          So for vendor-sec it's bad if they keep things secret for a time period defined by them, but it's OK to keep it secret for a short term as defined by Linus? This makes things different because we trust one person?

      • That doesn't mean no full disclosure. That means delayed full disclosure.

        Reading his quote from LKML:
        But it should be very clear that no entity (neither the reporter nor any particular vendor/developer) can require silence, or ask for anything more than "let's find the right solution". A purely _technical_ delay, in other words, with no politics or other issues involved.
    • Re:On Linus (Score:5, Insightful)

      by ajs ( 35943 ) <ajsNO@SPAMajs.com> on Friday January 14, 2005 @12:16PM (#11362475) Homepage Journal
      This is a massive distortion. There are dozens of folks who are just as level-headed as Linus. Linus happens to get the lion's share of attention from the community, which is a bit of a paradox given his personality, but he's not alone by a long shot.

      Now, if you're just thinking of the handful of interview-bait folks like RMS, ESR, etc., then yes, Linus does tend to stand out as a non-politico.
  • Politics (Score:5, Insightful)

    by RangerRick98 ( 817838 ) on Friday January 14, 2005 @11:33AM (#11361911) Journal
    I've seen quite a few comments about systems being left vulnerable with no solution if immediate disclosure is the policy. But from TFA:

    "I'd be very happy with a 'private' list in the sense that people wouldn't feel pressured to fix it that day," Torvalds wrote. "And I think it makes sense to have some policy where we don't necessarily make them public immediately in order to give people the time to discuss them. But it should be very clear that no entity (neither the reporter nor any particular vendor/developer) can require silence, or ask for anything more than 'let's find the right solution.'"


    Linus is just trying to keep the politics out of it, is all. He's not saying that every bug should be made public knowledge immediately, only that things shouldn't be kept secret for reasons other than the security of the users' systems.
  • Mailing list thread (Score:5, Informative)

    by OblongPlatypus ( 233746 ) on Friday January 14, 2005 @11:36AM (#11361963)
    Since the article is pretty much a copy/paste job from the lkml, why not link directly to the thread in question [iu.edu]?
  • by Anonymous Coward on Friday January 14, 2005 @11:38AM (#11361990)
    It is very very rare that there is a security problem in the kernel that leaves you vulnerable with no work-around. Almost always, it's just a question of blocking external access to some port, external access which should usually be blocked anyway. Once that is blocked, solving the problem isn't critical. It's still important, but the net won't melt down or anything like that. Also, these kinds of things tend to get patched very quickly.


    The other reason why this is the right way to go is because we should be moving towards a model of damage containment with all forms of electronic security. Faults should be isolated. A security problem in one part of the code should not result in a total compromise of the system, even if that fault is in the kernel. That's where Linux should be heading. Part of that would be moving more stuff out of the kernel and also having less stuff running as root. The end goal would be to get rid of root entirely.


    • So we should remove remote access to ssh from our users every time a bug is found?

      get real

        As for root, you are totally correct; Plan 9 did that over 15 years ago. You can't escalate privileges because there's nothing to escalate to!
      • So we should remove remote access to ssh from our users every time a bug is found?

        No, the information that a bug has been found should be made available to you so you can take whatever action you deem appropriate.

  • by Anonymous Coward
    There's a good writeup [kerneltrap.org] on this thread at KernelTrap [kerneltrap.org], too. Includes links to the full thread, which is quite fascinating.
  • Computer security these days is becoming more and more related to national security.

    We need a simple bill (preferably a UN decision that countries agree to abide by and enforce among their citizens).

    1. All vendors must provide an email contact for security issues.
    2. There will be one official list for security disclosures, maintained internationally.
    3. The vendor gets 48-hour advance notification, to allow them to patch/research if it's high-profile.
    4. If the vendor feels the security issue is extremely critical...
    • Computer security these days is becoming more and more related to national security. We need a simple bill (preferably a UN decision that countries agree to abide by and enforce among their citizens).

      Wow, if you're a troll, you're good. My hat's off.

      If you're serious, you're a moron. Getting governments and lawyers involved is a recipe for disaster. It's bad enough that they already legislate their incompetence and corruption into every other aspect of our lives, but it's a necessary evil. The...
  • No! (Score:4, Interesting)

    by 91degrees ( 207121 ) on Friday January 14, 2005 @11:58AM (#11362254) Journal
    Scenario 1: Bug is detected. Full disclosure including exploit.

    Result: Mallory uses the exploit. Alice releases a bugfix; Bob applies the fix. If it takes Alice and Bob longer than Mallory, the server is compromised.

    Scenario 2: Bug is detected. Kept quiet.

    Result: Eventually Mallory detects the same bug. Exploits it. Server compromised.

    Scenario 3: Bug is detected. Released only to trusted developers.

    Result: Alice releases a bugfix. Announces that it fixes a security hole. Gives general details of what the bug is. Mallory has to work out the details and exploit it. This gives Bob a lot more time to apply the patch than scenario 1.

    So what's so great about full immediate disclosure?
    • A cracker finds an exploit but tells his buddy Joe. Joe reports the vulnerability to some closed list. The cracker in the meantime exploits a thousand machines, because no patch is available and there is no disclosure, while the originator takes his own sweet time putting together a patch because supposedly nobody knows about it.

      I also want immediate disclosure so I can defend myself. I don't need to wait for a patch; I can patch my own software if I know about it.

      Take, for instance, a while back there was a Linux worm...
    • Scenarios 1 and 3 are quite different. Look at it from inside the organization.

      Scenario 1: Damn! We've got to fix that one before our customers start getting hacked!

      Scenario 3: No big deal, the policy is to keep the exploit secret for 30 days. We can add that to the queue of things to do.

      The problem is that the blackhats aren't going to wait. Given enough time, which can easily be measured in days or even hours, Scenario 3 can become Scenario 2. And then you're in big trouble.

      As an admin, some organizations...
    • Who says Alice is a trusted person?

      Trusted in this case usually means people who pony up money to be in the know; it doesn't mean people who actually know how to fix the bug.
    • And while I'm at it:

      Scenario 4:

      Bug is detected by *untrusted* source and exploited quietly for months.

      Result: when bug is detected by whitehat person and kept quiet, little do they realize that the bug is being exploited as they wait with their thumbs up their ...
    • What if Alice and/or Bob are blackhat hackers themselves?
  • For what it is worth...
    (Posted Jan 10, 2005 11:26 UTC (Mon) by guest PaXTeam)

    lots of speculation so let's see the actual timeline a bit. spender emailed Linus sometime early december about the few issues he had found. he also mentioned some of the fixes that were in PaX, the result of one of them was this commit: http://linux.bkbits.net:8080/linux-2.6/cset@41bc900azV2y9... . understand please that we (well, spender at least) already had had a working two-way email connection with Linus. duri...
  • 'I think kernel bugs should be fixed as soon as humanly possible, and any delay is basically just about making excuses,' Torvalds wrote.

    My problem with this is that some people will interpret it as how quickly you can patch over the symptom, rather than how quickly you can correct the underlying problem. This will lead to "workarounds" being called "patches".

    Personal experience shows that fixing most bugs involves conflicts that might take a few days to resolve: "If I insert this fix here, does anything else...

  • by Alejo ( 69447 ) <alejos1 AT hotmail DOT com> on Friday January 14, 2005 @03:29PM (#11365703)
    What if someone discovers a security bug, and they are really responsible professional researchers, and they want to give all affected vendors some time to come up with an official solution? (Researchers, that is, not people into 0day exploits or cracking or whatever.)
    The way to do this is to have a multiple vendor coordinated release, where all agree on a date to release all together the alert and fix. This usually takes a few days, as most of them need to go through QA and other processes, as they are responsible to their customers.
    SecurityFocus [securityfocus.com] offers such a service for FREE to any researcher/vendor.

    Blowing the whistle too early:
    Even with that, there is always some a**hole or some idiot vendor breaking this blanket period. See how RH fscked this up [securityfocus.com], many times, to the point of being notified late themselves. Some other Linux groups also did this, by "mentioning" the bug to uncontrolled developers who went fixing on their own, thus blowing the whistle.

    IF LINUS & CO LEAVE THIS COORDINATED SCHEME, THEY'LL LOCK THEMSELVES OUT OF NOTIFICATIONS FROM RESPECTED RESEARCHERS.

    NOTE1: I have nothing against the 0day or cracking communities; I'm only stating what happens IF a researcher wants to give a blanket period to vendors (a very common case).
    NOTE2: I'm not affiliated with SF, and I even HATE the split Bugtraq times for special vendors (I think this really killed it; a VERY BAD move).
    NOTE3: You might not agree with this scheme, but consider that most top-name security firms follow it, and it is there to protect the users.
    NOTE4: There is a defined period, so vendors are urged to come up with a patch/alert.
    NOTE5: Think also of the poor devs working for those vendors; rushing them to work overnight is not polite. They are devs like all of us.
    (I'm sure I missed some note and I'll get flamed anyway... flame on, grrrrr)
  • I disagree with immediate, full disclosure. That is the perspective of a computer scientist.

    I do believe in partial disclosure for a limited time, and then full disclosure. This time limit should be variable and discussed by the private community that has access to the report. The private community should respect this disclosure time limit.

    The article does suggest something like this.
  • If the issues are 100% open, then 0% of the security comes from obscurity.

    If 0% of the security comes from the ability of others to keep secrets (obscurity), then 100% of the security comes from my configuration, my password, my private key, etc.

    Me controlling 100% of my security is a Good Thing.
  • by ozborn ( 161426 ) on Friday January 14, 2005 @05:08PM (#11367010)
    Never mind arguments about fixing bugs (although I think Linus is right here too). The important thing is to NOT let kernel security issues be driven by vendors' perceived needs. Give vendors that, and they will run wild. Security is a favorite bugaboo under which any change can be justified--kudos to Linus for not letting this happen. If they want to form a secret cabal where they can keep secret any kernel security issues they know about--fine (but stupid)--but don't try to tell everybody else they can't talk about these issues.
