Behind the Scenes in Kernel Development

An anonymous reader writes "Some interesting changes have taken place in the way the Linux kernel is developed and tested. In many ways, the methods used to develop the Linux kernel are much the same today as they were 3 years ago. However, several key changes have improved overall stability as well as quality. This article takes a look behind the scenes at the tools, tests, and techniques -- from revision control and regression testing to bugtracking and list keeping -- that helped make 2.6 a better kernel than any that have come before it." We might as well mention here (again) that a couple of new kernels are out: leif.singer writes "2.6.3 and 2.4.25 are out, fixing another vulnerability in do_mremap()."
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Thursday February 19, 2004 @09:53AM (#8326204) Homepage Journal
    I wish I could wrap my head around even the smallest part of the kernel. There is so much code in there and aside from main(), it is hard to find a good place to start studying.

    Would these tests be a good starting place?
    • by deadlinegrunt ( 520160 ) on Thursday February 19, 2004 @09:59AM (#8326268) Homepage Journal
      Find a particular functionality of the kernel that really interests you; read any documentation you can find about it; then grep the source till you see the relevant sections of code and start perusing with your $(EDITOR).

      Much time is spent teaching people how to write code, but hardly any teaching them how to read it. This is a perfect example of how to do it and why you would.
      • by millahtime ( 710421 ) on Thursday February 19, 2004 @10:03AM (#8326327) Homepage Journal
        "Much time is spent teaching people how to write code but never really reading it. This is a perfect example of how to do it and why you would."

        Reading code can be a huge help in becoming a better coder. You see how other coders do things, learning from their mistakes what not to do, and picking up good techniques you might not have come up with on your own.
        • Code Reading [spinellis.gr] by Diomidis Spinellis contains a bunch of ideas on ways to comprehend large codebases more easily.

          He talks about browsing code, package structures, adding features or fixing bugs in a large codebase, and so on. It's a good read - well worth the money.
          • by po8 ( 187055 ) on Thursday February 19, 2004 @11:37AM (#8327559)

            Does the fact that Diomidis Spinellis has repeatedly won [ioccc.org] the International Obfuscated C Code Contest (IOCCC [ioccc.org]) make him more or less qualified to write such a book :-)? Check out his "best abuse of the rules" entry from 1988; it's my all-time favorite. BTW, the contest is currently open.

            • > Does the fact that Diomidis Spinellis has
              > repeatedly won the International Obfuscated
              > C Code Contest (IOCCC)

              Heh :-) Yup, he talks about that a bit in the book, and makes the point that IOCCC winners usually employ the preprocessor heavily in their entries.

              Then he goes on to suggest that preprocessor usage be minimized in 'normal' programming activities :-)
        • This is exactly why universities should have the junior or senior CS design project be to add some significant functionality to an existing large project. There simply isn't enough time to build something that large from scratch in the time available, and the things you learn from working with a large codebase are invaluable. Furthermore, in the real world you will spend much more time improving other people's code than you will writing from scratch anyway.
        • I agree.

          I'm a firm believer that every CS curriculum should have two sophomore-year classes entitled "Real World Programming Parts 1 & 2". Part 1 consists of an entire semester writing a user application to meet a loose set of design specifications.

          In part 2, the students drop their own code and inherit another student's codebase from part 1. Part 2 will consist of dropping 30% of the functionality, altering another 40% of it and then adding in another set of features about half the size of the
          • I could not agree more (or less). Though, I would have to point out that, while schools could do more about this subject, in the end it comes down to the person behind the keyboard.

            I manage a department, which includes about 20 programmers and some do it right from the start (and right out of a school), and some don't.

            What is interesting, while not maybe surprising, is that women tend to document their code better and also write more maintainable code. This observation is prone to errors (most of the prog
      • I found a hacker who has contributed some code to kernel development. When I asked him to explain his contribution, he smiled and started talking. The words went in one ear and off a duck's back, but I was impressed that a mere mortal could be in front of me and talking about this esoteric topic. It's definitely worth pursuing. Good luck. phnork
    • by Rosco P. Coltrane ( 209368 ) on Thursday February 19, 2004 @10:08AM (#8326386)
      I wish I could wrap my head around even the smallest part of the kernel. There is so much code in there and aside from main(), it is hard to find a good place to start studying.

      You could contribute some work to SCO: I hear they're very interested in having someone sprinkle several 'printk("(c) SCO\n");' lines here and there in init/main.c, since they can't do it themselves, having no technical department, being a law firm and all...
    • "Linux Kernel Development" [amazon.com] is a nice introduction book by kernel hacker Robert Love, and it already covers the 2.6 Kernel.

      It doesn't go into too much detail, but it gives a very good overview and basic understanding of the issues you have to deal with in the kernel! I'm currently reading it and getting enlightened :-)

    • I highly suggest picking up a copy of the Linux Core Kernel Commentary [amazon.com]. The first part of the book contains the code of the majority of the "core" kernel components, and the second part explains the code. It's slightly out of date, but still a good read.
    • This isn't Linux, but it is definitely an interesting read if you want to learn how a kernel works. The Design and Implementation of the 4.4 BSD Operating System [amazon.com].

      It's a bit expensive, since it's also apparently a textbook, but I think it is worth it. Reads easy, too.
    • by slamb ( 119285 ) on Thursday February 19, 2004 @12:52PM (#8328463) Homepage
      I wish I could wrap my head around even the smallest part of the kernel. There is so much code in there and aside from main(), it is hard to find a good place to start studying.

      Very recently, I've been writing some low-level code. For a long while I'd thought this was out of my league. Then I realized several things:

      • I was not happy with several characteristics of the low-level code other people had written and I was depending on.
      • I had done some more low-level stuff long ago - like a couple simple but legitimately useful assembly programs in DOS, and even a patch that added a sort of capability system to the OpenBSD kernel. (I never polished up the patch enough to send it in to them or anything, but the point is that it essentially worked, and I wasn't afraid to take it on.)
      • When I'd done those things back in the day, I wasn't anywhere near as good a coder as I am now.
      • The only reason I'd been unable to do these things more recently is an attitude that I'm not good enough, not a reality. (It's an attitude a lot of people in low-level code promote, I think. They're so determined not to waste their time with people who really are bad that they probably don't mind scaring off a few people who are in fact good but don't realize it. Also, I think there's ego involved - it's an exclusive club, why not let it stay that way.)

      So I think the moral of the story is to just be fearless/persistent. If you're not confident, there are plenty of ways you can improve without even involving anyone else:

      • Read the code. It sounds obvious, but there's a lot of code I'd stayed away from even looking at because of intimidation.
      • Try experiments. Make a change, set a hypothesis about what it will do, and run it. Then see why you were wrong, if you were. Then try it again. Even just getting in the habit of running the build system will help, and setting up experiments like this will help your debugging [bell-labs.com].
      • Find something lacking and try to fix it.

      And then, if you're still not comfortable talking on the linux-kernel list, I think you have at least another couple choices:

      • If you're lucky, you're friendly with someone more skilled and can use him/her to screen questions.
      • There's a couple lists like kernel-janitors and kernel-newbies to dip your feet in the water.
      • Sometimes in the process of writing an eloquent question through email you'll figure out the answer yourself. (Did you see the teddy bear anecdote in the debugging link above?)

      As for myself, I'm taking my own advice to make sigsafe [slamb.org] - an alternate set of system call wrappers (libc level) that eliminate a couple race conditions involving signals, without a performance penalty. It's going well - the code works, and I have a race condition checker and microbenchmark to prove it. I just released my first version. Now I'm working on the documentation; it still needs a lot of work. (I could use plenty of help with this project! If you want to try low-level programming, it's a great way. It requires writing assembly for each combination of operating system and architecture. I've only written it for two systems. There are plenty left, and public systems to do it on if you don't have access to exotic machines of your own. Plus, you can hopefully gain some low-level understanding by proof-reading and helping me write the documentation.)

      Once I have that polished, I've got a couple projects I might try in the Linux kernel (and/or other kernels):

      • implementing a couple of system calls - the nonblocking_read(2) and nonblocking_write(2) that djb mentions [cr.yp.to].
      • implementing SO_RCVTIMEO and SO_SNDTIMEO under Linux. Assuming no one has yet; I haven't checked, so the manpage could just be out of date. Which brings m
      • by slamb ( 119285 ) on Thursday February 19, 2004 @03:53PM (#8331565) Homepage
        Try experiments. Make a change, set a hypothesis about what it will do, and run it. Then see why you were wrong, if you were. Then try it again. Even just getting in the habit of running the build system will help, and setting up experiments like this will help your debugging.

        An addendum: your experiments don't have to be anything spectacular. Here are a few:

        • Try adding a few printk statements throughout the code, decide when you expect they will print and what they will say, and confirm it. (A minimal sketch follows this list.)
        • Try tuning a sysctl or hardcoded value. See how it affects performance. Do microbenchmarks or benchmark real systems. Make pretty graphs. (Graphs are fun!)
        • Add a likely() or unlikely() annotation to help the compiler's branch prediction. Again, test the performance. Or take one away - it's possible someone was wrong about the performance.
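
        (An aside, not from the original comment: about the smallest concrete form the printk experiment can take is a trivial module of your own rather than a change to a core code path. The module name and messages here are made up.)

        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        /* Prediction before loading: exactly one line in dmesg at insmod
         * time and one at rmmod time. Check the prediction afterwards. */
        static int __init experiment_init(void)
        {
                printk(KERN_DEBUG "experiment: module loaded\n");
                return 0;
        }

        static void __exit experiment_exit(void)
        {
                printk(KERN_DEBUG "experiment: module unloaded\n");
        }

        module_init(experiment_init);
        module_exit(experiment_exit);
        MODULE_LICENSE("GPL");

        Once that works, the same technique - printk, predict, confirm with dmesg - carries over to whatever subsystem you're actually studying.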
        And in user-level code, there are even more things you can do:
        • compile with -ftest-coverage to see what code is run, and design unit tests [junit.org] that exercise every line. This will help your understanding, could find existing bugs or dead code, and will help you be confident in your changes if you later make any. (A tiny C sketch follows this list.)
        • compile with -ftest-coverage -fprofile-arcs to see what code branches are very likely or unlikely to be taken, then use likely() and unlikely() to change the branch prediction. Benchmark the results.
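
        (Another aside, not from the original comment: a tiny self-contained C illustration of the coverage idea. The clamp() function is hypothetical; -ftest-coverage and -fprofile-arcs are the standard GCC flags, and gcov reads the results.)

        /* Build and run:
         *   gcc -ftest-coverage -fprofile-arcs clamp_test.c -o clamp_test
         *   ./clamp_test && gcov clamp_test.c
         */
        #include <assert.h>

        static int clamp(int v, int lo, int hi)
        {
                if (v < lo)
                        return lo;
                if (v > hi)
                        return hi;
                return v;
        }

        int main(void)
        {
                assert(clamp(-5, 0, 10) == 0);   /* exercises the v < lo branch */
                assert(clamp(50, 0, 10) == 10);  /* exercises the v > hi branch */
                assert(clamp(5, 0, 10) == 5);    /* exercises the fall-through */
                return 0;
        }

        If gcov reports a line that never ran, you've either found dead code or a missing test.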

        Another project I've considered is adding likely() and unlikely() equivalents to boost [boost.org]. A lot of people use this code, so increasing its performance a little would be pretty beneficial.

        • Cool suggestions. I have two questions, though:

          Is there any nifty way to speed up the compile->execute cycle? The way I see me coding is:
          a) code
          b) compile
          c) reboot test machine and wait 1-3 minutes for it to come up
          d) see if it worked
          e) goto (a)

          Step (c) can get frustrating; is there a quicker way to go about it?

          Also, what are the likely() and unlikely() functions you speak of? Google shows a lot of unrelated info.

          Thanks
          • by slamb ( 119285 ) on Thursday February 19, 2004 @05:45PM (#8333130) Homepage
            Is there any nifty way to speed up the compile->execute cycle? The way I see me coding is:
            [...]
            c) reboot test machine and wait 1-3 minutes for it to come up
            [...]

            Step (c) can get frustrating; is there a quicker way to go about it?

            You could make your changes to a User-mode Linux [sourceforge.net] kernel to avoid the reboot, or run it inside a virtual machine [vmware.com]. That way you only pay for the kernel's boot time, not the main system BIOS, ATA-100 BIOS, SCSI BIOS, etc.

            Also, what are the likely() and unlikely() functions you speak of? Google shows a lot of unrelated info.

            They're macros that tell the compiler whether the expression contained within is likely to be true or false. There's an article [lwn.net] about them here. If you've ever seen code that mentions __builtin_expect, it's the same thing with better names:

            #if COMPILER_SUPPORTS_BUILTIN_EXPECT
            #define likely(condition) __builtin_expect(!!(condition), 1)
            #define unlikely(condition) __builtin_expect(!!(condition), 0)
            #else
            #define likely(condition) (condition)
            #define unlikely(condition) (condition)
            #endif
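
            To see them in action outside the kernel, here's a small user-space sketch of my own (not from the kernel sources - it assumes GCC, since __builtin_expect is a GCC extension) that marks a malloc failure path as unlikely:

            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>

            #define likely(x)   __builtin_expect(!!(x), 1)
            #define unlikely(x) __builtin_expect(!!(x), 0)

            int main(void)
            {
                    char *buf = malloc(64);

                    /* Rare error path: tell the compiler to lay out the
                     * common case as the straight-line fall-through. */
                    if (unlikely(buf == NULL)) {
                            fprintf(stderr, "out of memory\n");
                            return 1;
                    }
                    strcpy(buf, "common case falls straight through");
                    puts(buf);
                    free(buf);
                    return 0;
            }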
          • by Anonymous Coward
            ccache - http://ccache.samba.org/
            Safe compiler cache if you do "make clean" a lot.

            distcc - http://distcc.samba.org/
            Simple, fast compilation cluster software.

            Both are great.
    • I will assume that you are serious.

      While testing is one way, you might try doing janitorial work or some of the simple stuff. [kernelnewbies.org]

      If you have good coding skills, try newbies; otherwise, you may find testing is up your alley.
    • For German geeks I can only recommend "Linux Kernelarchitektur" by Wolfgang Mauerer - it is a good introduction and well written. The only annoying thing is that it has a few typos and sometimes skips details (like ReiserFS) - though that's probably unavoidable; the book already runs to 770 pages.
  • Automatic Testing (Score:5, Interesting)

    by 4of12 ( 97621 ) on Thursday February 19, 2004 @10:01AM (#8326295) Homepage Journal

    I can't say how much I appreciate the automatic tests. This is applying computers to a thankless task that they're suited for.

    Now if only they had a web dashboard portal showing the latest results in an easily assimilated, color-coded HTML table....

  • by bc90021 ( 43730 ) * <bc90021.bc90021@net> on Thursday February 19, 2004 @10:04AM (#8326332) Homepage
    I don't know what they've done in terms of changing things, and I didn't RTFA. What I do know is that I've been using the 2.6 kernel on my new ThinkPad T40, and the machine is FAST, and stable. Hats off to all responsible.
  • Kernel quality (Score:5, Interesting)

    by Rosco P. Coltrane ( 209368 ) on Thursday February 19, 2004 @10:04AM (#8326341)
    However, several key changes have improved overall stability as well as quality.

    I have a suggestion: how about not calling development kernels by an even version number?

    - 2.6.0-beta-something kernels were bad (okay, fair enough, it was beta, and Linus admitted having called a 2.5.x kernel 2.6 in order to lure early adopters and get them to test it).

    - 2.6.0, 2.6.1 and 2.6.2 were unstable for me, with doozies such as oopses while rmmoding and random crashes using ide-scsi (yes I know it's deprecated, but some of us need it).

    I now run 2.6.3-rc3 and it's the first time a kernel has seemed stable enough to be called a 2.6 kernel. There are some problems left, but overall it's getting decent. But then why were the other "2.6" kernels called 2.6 at all? They were really 2.5 kernels, IMHO.

    This has happened before, at the beginning of the 2.4 series. I only felt it was getting good enough at version 2.4.6 and above (and I'm not counting the failed 2.4.11 release). When 2.4.0 came out, I thought it meant it was ready for prime time, like 2.2.0 was, or at least close to it, but no, it was crap. I was slightly annoyed with Linus then, but I thought he had been pressured by commercial Linux shops and that he wouldn't do it again. But no, he did it again with 2.6.

    It's really quite annoying, because those who follow Linux know the first "stable" kernels aren't stable at all and therefore avoid them, defeating the point of testing for Linus, while beginners think "cool, a new stable kernel", try it and are disappointed, giving a bad name to an otherwise great kernel. Too bad...
    • Re:Kernel quality (Score:5, Interesting)

      by Psiren ( 6145 ) on Thursday February 19, 2004 @10:10AM (#8326406)
      This is a chicken-and-egg situation. Unless there is widespread testing of a kernel, some bugs won't be found. But not everyone wants to risk running a development kernel, so the only way to get them to test is to bend the truth slightly and call a beta version the new stable kernel. At the end of the day, the number just reflects the developers' opinion on the stability of the thing as a whole. They could make no changes to 2.6.3 and release it as 2.7.0, but that wouldn't make it any less unstable.
      • Re:Kernel quality (Score:5, Interesting)

        by Rosco P. Coltrane ( 209368 ) on Thursday February 19, 2004 @10:20AM (#8326479)
        the only way to get them to test is to bend the truth slightly, and call a beta version the new stable kernel

        I realize that, but what I'm saying is that those in the know get burnt a couple of times, then see through the bullcrap and silently renumber the kernel versions. In the end, early adopters are pissed off because they've been lied to a little, and swear never to try newer stable versions again, newcomers get disgusted by the quality of early stable releases, and Linus doesn't get the testing he wanted that made him bend the truth in the first place, therefore everybody loses.

        I'd much rather see Linus say: "Here, there's this 2.5 kernel, we call it 2.5.xx-RC-something. It's close to 2.6, but not quite. *PLEASE* test it for us, *PLEASE*! That'll allow 2.6.0 to be good." He could even have a "best testers" or "most devoted QA volunteers" list prominently displayed on the main page at kernel.org, to appeal to people's sense of ego.

        At least that would be a more honest approach to testing new kernels than lying to people.
        • Re:Kernel quality (Score:1, Insightful)

          by Anonymous Coward
          Linus never claimed everything was working 100%. In fact, if you followed the list, quite the opposite. That is not what 2.6.0 meant at all. He's not lying; you're just reading way too much into what 2.6.0 means. And he did do the "2.6 but not quite" thing a good bit with the 2.6 RCs.

          Name me one piece of software that worked as expected with a .0 release.
        • Re:Kernel quality (Score:5, Insightful)

          by adrianbaugh ( 696007 ) on Thursday February 19, 2004 @10:44AM (#8326769) Homepage Journal
          That's exactly what Linus does: there is such a series of release candidates (first introduced prior to 2.4). You can argue that it isn't long enough, but there's an obvious counterargument: if you wait forever, nothing will ever get released.
          I can't think of an x.y.0 release of any software project that's been properly stable. It's not just Linux; it's the way the world is. You could argue that software never becomes perfectly stable: marking a series as "stable" is really just shorthand for "good enough that further development is largely maintenance, therefore we expect the structure and codebase to remain stable", not some guarantee that it will never go wrong. It's more a development term than a performance or reliability term, though the stability of development generally arises from the performance and reliability being sufficient to obviate the need for large changes to the code.

          Even quite late in stable kernel release cycles there's occasionally a shocker - anyone remember 2.0.33?

          If you don't like those kernels, just stick with 2.4 until a distribution ships with 2.6.8 or so. For what it's worth 2.6.(1+) has been fine for me.

          Nobody's lying to you - there has to be some cut-off where a kernel series is declared stable, and by and large I think Linus judges it pretty well.
          • I can't think of an x.y.0 release of any software project that's been properly stable.

            OpenBSD comes to mind. I faithfully rely on their .0 releases, and have never been burned. YMMV, of course, and they are an exception, not the rule, I suspect.
        • I don't think that production people use the early stable kernels anyway (especially since most distros aren't even using it yet).

          That leaves people like me who use it on the desktop (and I haven't had a single problem from 2.6.0 onward) and geeks who test it on non-vital server boxen.

          I don't think it's much of a problem, frankly.
        • by b17bmbr ( 608864 ) on Thursday February 19, 2004 @11:00AM (#8327018)
          because waiting for a final release to be stable, secure, and thoroughly tested has worked for microsoft, and we wouldn't want to do things their way.
        • Re:Kernel quality (Score:3, Informative)

          by Gollum ( 35049 )
          You mean something like the 2.6.0-test series? Which started with the following post, 5 months before the final 2.6.0 was released:
          Linus Torvalds: Linux 2.6.0-test1
          Jul 14 2003, 09:16 (UTC+0)
          Ok, the naming should be familiar - it's the same deal as with 2.4.0.

          One difference is that while 2.4.0 took about 7 months from the pre1 to the final release, I hope (and believe) that we have fewer issues facing us in the current 2.6.0. But very obviously there are going to be a few test-releases before the real thing
    • - 2.6.0, 2.6.1 and 2.6.2 were unstable for me, with doozies such as oopses while rmmoding and random crashes using ide-scsi (yes I know it's deprecated, but some of us need it).

      Out of curiosity, why do you need ide-scsi? Most people in the past used it for CD burning under Linux, but ide-scsi is no longer a requirement for using ATAPI CD burners.

      There are other uses as well (tape drives for example), but they're generally a lot less common.
      • Out of curiosity, why do you need ide-scsi? Most people in the past used it for CD burning under Linux, but ide-scsi is no longer a requirement for using ATAPI CD burners.

        I need(ed) it because I could not, for the life of me, get a POS Mitsumi CD burner to work any other way. The other reason I want(ed) it was that I tend to dislike breaking things that work, such as init scripts, fstab, symlinks and various script thingies I made that expect the burner to be a scsi device, and other scsi devices to appe
    • Re:Kernel quality (Score:5, Informative)

      by Anonymous Coward on Thursday February 19, 2004 @10:31AM (#8326578)
      and random crashes using ide-scsi (yes I know it's deprecated, but some of us need it).

      disable ide-scsi and use latest cdrecord with

      dev=ATAPI:x,x,x

      instead of

      dev=x,x,x

      and everything is cool. Don't forget to check lilo.conf. Stop using cdrdao. Ignore xcdroast's "This would be faster if you had ide-scsi enabled" dialogs.

      This should apply to almost every kind of ide-scsi use.
      • Re:Kernel quality (Score:2, Informative)

        by A.T. Hun ( 192737 )
        Actually, there's a new version of cdrdao out that supports ATAPI burners without the need for ide-scsi.

        Also, I'm not sure what hardware the original poster has, but I had no problem with 2.6.1 or 2.6.2. As always, YMMV.
      • Re:Kernel quality (Score:2, Informative)

        by thue ( 121682 )
        dev=ATAPI:x,x,x

        dev=/dev/hdc works for me. Seems simpler.
    • 2.2.0? (Score:5, Interesting)

      by autechre ( 121980 ) on Thursday February 19, 2004 @10:42AM (#8326732) Homepage
      2.2.0 had a bug where the system would instantly reboot when any user ran "ldd". I wouldn't call that "ready for prime time" :)

      (I remember this because we were waiting for 2.2.x to come out, having just gotten a dual P-II 350 server [2.0.x didn't have SMP support]. Fortunately, we managed to hold off for the first few revisions.)

      It's not as if this problem is unique to the Linux kernel. "Never use a Red Hat .0 release" is pretty sage advice, and of course we know Microsoft's track record. You're not going to be able to catch all of the bugs before something gets truly widespread testing, no matter what you call it or how long you work on it.

    • In a previous story re: 2.6.x, someone mentioned that PS/2 (the interface, not the game system) mouse support is broken with 2.6 if you're using a KVM switch. I have experienced this problem with two different KVM setups (one at work and one at home) and am wondering if this has been resolved.

      In a nutshell: the problem occurs when I switch back and forth between machines; my mouse stops working or goes erratic. (The work-around is to restart gpm and then X.) I dropped back to 2.4.24 because it doesn't have this proble
      • Are you sure the problem's not with gpm? Whenever my mouse accidentally came unplugged, I restarted gpm and everything worked fine.

        Of course, I use a USB mouse now, so I don't know.

        Versions of Windows up through ME had the same problem. I'm not sure, but I'd guess that PS/2 mice need to be bootstrapped.
        • Pretty sure. Like I said, I dropped back to 2.4 and this behavior is non-existent. Oh, and I forgot a step: I have to rmmod ps2mouse and then modprobe ps2mouse, then restart gpm, and then everything works again. The part about restarting X only fixes a problem with my pointer turning into a big square when I switch to a VC (but I know that's X's fault.)
    • Re:Kernel quality (Score:3, Insightful)

      by swillden ( 191260 ) *

      It's really quite annoying, because those who follow Linux know the first "stable" kernels aren't stable at all and therefore avoid them, defeating the point of testing for Linus, while beginners think "cool, a new stable kernel", try it and are disappointed, giving a bad name to an otherwise great kernel.

      The problem here is not with the kernel numbering, it's with your understanding of the kernel numbering. An even-numbered kernel is "stable" in the sense that the developers promise they will not mu

    • Re:Kernel quality (Score:2, Informative)

      by ncr53c8xx ( 262643 )

      oopses while rmmoding

      I thought rmmod support was removed or at least deprecated in the latest kernel? Any kernel hackers care to clear this up?

      but beginners think "cool, a new stable kernel", try it and are disappointed, giving a bad name to an otherwise great kernel.

      Why would beginners be installing a new kernel? The distros would have the latest features patched into their supported kernels. In any case, if you are installing kernels, you should at least read kernel traffic. [kerneltraffic.org]

    • Part of the confusion is that 'stable' refers to the interfaces, as well as to the code maturity and crash-proofness.

      There's no guarantee a module from 2.5.x will work on 2.5.x+1, whereas a module (ignoring bugs) from 2.6.0 will work in 2.6.60.

      If you want crash-proof, wait for RedHat or Suse or someone to package the kernel for you. If you want stable interfaces so you can begin to develop your kernel module, or whatever, then grab 2.6.0 straight off the presses, but maybe don't put your valuable data on it
    • My experience has been the opposite. I've found the 2.6.x releases to be very stable on x86 and ppc. I think I hit one oops in the -beta-foo days but haven't seen another one since. This is of course a YMMV type of situation. I just want to add a counter anecdote to the discussion because my impression has been that pains were taken to give 2.6 a better birthing than 2.4 had and (to me at least) it shows.
  • Post: -1, Redundant (Score:5, Informative)

    by HoldmyCauls ( 239328 ) on Thursday February 19, 2004 @10:06AM (#8326358) Journal
    "2.6.3 and 2.4.25 are out, fixing another vulnerability in do_munmap()."

    The announcement for 2.6.3 and 2.4.25 was yesterday, and the vulnerability to which the link in the text above refers was with mremap, not munmap; there's also another vulnerability with mremap mentioned yesterday as an *update* to the kernel announcement.
  • SCO (Score:5, Funny)

    by rauhest ( 560597 ) on Thursday February 19, 2004 @10:08AM (#8326382)
    Without RTFA (of course), I tried to find any reference to "sco".

    The only match was "a misconfigured system". :)
  • by Quietti ( 257725 ) on Thursday February 19, 2004 @10:11AM (#8326417) Journal
    Was here [slashdot.org] in yesterday's thread about 2.4.25 and 2.6.3 releases.
  • do_mremap (Score:4, Funny)

    by suckamc_0x90 ( 716974 ) on Thursday February 19, 2004 @10:36AM (#8326638)
    "fixing another vulnerability in do_mremap()" - ah, good old Mr. Emap.
  • by bruthasj ( 175228 ) <bruthasj@yaho o . c om> on Thursday February 19, 2004 @10:44AM (#8326761) Homepage Journal
    I think they forgot to test the framebuffer in 2.6.x kernels. If I can't see Tux, then I ain't booting it! (radeon)
  • Bitkeeper (Score:5, Interesting)

    by jurgen ( 14843 ) on Thursday February 19, 2004 @10:45AM (#8326784)
    I sent this to the author of the article...

    [the author] wrote:

    The lack of formal revision control and source code management led many to suggest the use of a product called BitKeeper.
    I grant that sometimes you have to simplify history to avoid digressing in an article, but this is a bit too inaccurate to let stand.

    BitKeeper wasn't suggested by anyone; it didn't have to be. It was developed from the ground up to Linus's requirements. Larry McVoy had a discussion about source control with Linus years ago, in which Linus said "none of the products are good enough" and Larry said, "OK, I'm going to write one that is". Apparently he had this on his mind anyway, and so he started BitMover. As BitKeeper became a usable product, Larry continued to take Linus's feedback and improve it until it was good enough for Linus to use... at which point Linus started using it.

    This is still a simplification of course, but it's closer... and as you can see, there were no third party suggestions involved.

  • Interesting read (Score:5, Insightful)

    by Derkec ( 463377 ) on Thursday February 19, 2004 @10:55AM (#8326937)
    It's still amazing to me that a project as large as Linux was able to be so successful BEFORE the changes that were made to the development process. It lacked a centralized CVS, coherent bug tracking, automated testing... These are all things I use in the smallest of professional projects. Many eyes go a long way towards compensating for having many hands in a big project, but some structure seems to have helped.
    • Re:Interesting read (Score:3, Interesting)

      by FePe ( 720693 )
      It's still amazing to me that a project as large as Linux was able to be so successful BEFORE the changes that were made to the development process. It lacked a centralized CVS, coherent bug tracking, automated testing...

      Without being too sure, I believe that Linux developers in the beginning focused more on fixing bugs than on keeping things clean and structured. That's mainly because the base of the system needed to be developed, and little attention was paid to factors like speed and optimization.

    • Re:Interesting read (Score:4, Informative)

      by swillden ( 191260 ) * <shawn-ds@willden.org> on Thursday February 19, 2004 @12:03PM (#8327868) Journal

      It's still amazing to me that a project as large as Linux was able to be so successful BEFORE the changes that were made to the development process. It lacked a centralized CVS, coherent bug tracking, automated testing... These are all things I use in the smallest of professional projects.

      CVS is an interesting one. I don't really understand his rationale, but Linus says that for Linux development, CVS is not only inadequate, it is bad. Linus has said that if he ever has to quit using Bitkeeper, he'll go back to tarballs, patches and manual version control, because every other available system is worse.

      I can certainly see weaknesses in CVS, and for Linux development I can see how the lack of changesets would be problematic, but the notion that manual version control could be better is... startling.

      • Re:Interesting read (Score:3, Interesting)

        by raxx7 ( 205260 )
        I'm just speculating here, but I think the issue would be speed. CVS isn't very efficient in terms of speed or disk space. Handling something as large as the kernel might be a problem, and duplicating trees with cp -rl is an interesting alternative.
        SVN is much more efficient, though. I'm not sure whether that comment applied only to CVS or to available versioning systems in general.
      • My mistake :). I wrote CVS when I should have said VCS - meaning some sort of version control system.
        • Yet another mistake. Both times I should have said SCM (Source Control Management) if I wanted to be proper.
            • Whatever abstract name you want to use, or whatever specific system you want to point at, what's interesting to me is that Linus says manual management is better than everything except BitKeeper. CVS may have deficiencies, but I can't imagine that a directory full of tarballs is better...
              • I can't imagine it either. My favorite source control approach that I've seen is having each developer's machine backed up to tape nightly. You want your source as it was last week? Let's go get that tape.

    • It lacked a centralized CVS


      It still lacks a centralized revision control system, since one of the prime virtues of BitKeeper is that it is decentralized. ;-)
  • I thought it was just:

    cp -R /usr/src/sco /usr/src/linux

    *ducks*
  • by Bob Bitchen ( 147646 ) on Thursday February 19, 2004 @10:57AM (#8326987) Homepage
    That's what bothers me. How long will the distros wait until they use the 2.6 kernel? I hear the scheduler is improved amongst many other things. So what's the hold up? Is it just that there's no one willing to be the guinea pig?
  • Just wondering if Java has reared its ugly head anywhere in kernel development - maybe someone uses jEdit, or even a JSP or servlet web site.

    Maybe real kernel hackers don't use something that's half the speed of C, but maybe the odd exception likes java.lang.Exceptions.

    Offtopic, but I'd just like to know where a five-year Java programmer might fit in, besides maybe Jakarta from Apache.
  • Beautiful (Score:5, Insightful)

    by maximilln ( 654768 ) on Thursday February 19, 2004 @11:30AM (#8327447) Homepage Journal
    I don't like to start new threads, but I didn't see this: a general "Thank you for your time, effort, and a job well done" to all of the kernel hackers out there. They're fixing kernel-level bugs that are almost at the hardware level while M$ is still patching their web browser. I don't think there's any doubt which system is ultimately more secure.

    Can anyone take a guess how many low-level memory exploits are in Windows XP, 2k, or others? Perhaps it's irrelevant. Who needs to crack the low mem when there are so many ways into the system at the document level?
  • BitKeeper? (Score:2, Insightful)

    by Anonymous Coward
    Is it true that Linux is developed using the proprietary BitKeeper? Why not Subversion or CVS or RCS? I just tried to download BitKeeper and there's a strange question in the download form [bitmover.com]: "Do you anticipate implementing a source management system in the next: [Please Select]" What's the deal? Don't tell me that if I want to contribute to Linux, then I cannot contribute to Subversion... If that is indeed the case, then what are they, nuts? No wonder Subversion is maturing so slowly, when all of
    • Re:BitKeeper? (Score:2, Insightful)

      by typobox43 ( 677545 )
      I read the word "implementing" here as asking you when you plan on utilizing a source management system, not when you plan on coding one.
    • Troll Alert! (Score:1, Informative)

      by Anonymous Coward
      Listen, it's very simple: Subversion wasn't (and still isn't) up to scratch. CVS and RCS are so fundamentally broken as to be irrelevant to this discussion.

      Wrt. the "must-not-develop-a-competitor" angle: This does not prevent people contributing to Linux. It simply means that they cannot use BK to do it! (If they plan on being able to work on any RCS). You may argue the fairness of such a stipulation in the BK license, but the fact remains: Linus chose BK.
    • Re:BitKeeper? (Score:4, Informative)

      by raxx7 ( 205260 ) on Thursday February 19, 2004 @01:47PM (#8329366) Homepage
      First, you can contribute to both Linux and Subversion. You just have to:
      a) not use BitKeeper, or
      b) buy a BitKeeper licence.

      Second, RCS doesn't support concurrent development. That's why we have CVS.

      Third, why BitKeeper?
      Though CVS has lots of shortcomings (and that's why Subversion exists) and Subversion (SVN) is still labeled "alpha" by its developers (though in practice it's stable enough to be self-hosting and widely used), the real reason has to do with the basic model of CVS and SVN. Two main issues, in my opinion:
      a) In CVS/SVN you need write access to the central repository or you can't make proper use of version control. Giving write access is a problem for Linux's contribution-based development model. BitKeeper doesn't need it.
      b) CVS/SVN know about branches, but they don't know about merges from one branch into another. Their view of the repository is a pure spanning tree. Subversion has a "merge" command, but a merge is committed like any other change into the repository. BitKeeper knows about previous merges and where they were merged from, and uses that information to be smarter at resolving conflicts when you do a merge.
      In contribution-based development, every change to the project has to go through one of the few maintainers who can write to the main repository (in Linux's case, there's only one), so proper merging support becomes very important. At some point before BitKeeper, Linus was having trouble keeping up with all the patches people were sending him, and people were getting angry about that.

      If you don't believe me, you can check the GNU Arch website: http://wiki.gnuarch.org/
      They're developing a Free versioning control system very similar to BitKeeper.
    • Re:BitKeeper? (Score:1, Interesting)

      by Anonymous Coward
      BitKeeper is proprietary. You can contribute (all you like) by posting patches to the Linux kernel mailing list (LKML). Linus didn't like CVS or RCS. There was a lot of (angry) debate on LKML about using BitKeeper. I know (at least) one kernel developer using Subversion. You can call BitKeeper proprietary all you like (and be correct, since it is). Anticompetitive? Maybe. It's what Linus wants to use because it works (there is only one Linus, and he would pull his hair out applying patches all day w
  • by Spoing ( 152917 ) on Thursday February 19, 2004 @12:26PM (#8328178) Homepage
    This isn't the right story to mention this on, though it's somewhat related.

    I've encountered many problems with external hard drives using USB 1 and 2 interfaces. Locking up the entire system on large file copies was the main issue. (Copying small numbers of files was never a problem. Lockups occurred on different drives, different external chipsets, and different 2.4.x kernels, though this is supposedly fixed in the latest 2.4.x releases.)

    I've finally gotten the nerve to run a few days of tests on 2.6.1 to see if this has been really resolved, and I'm happy to report that this now works like a charm.

    If you've encountered similar problems with 2.4.x, give 2.6.x a try.

  • Because there are several posts asking how to get into Linux kernel development, I'll also ask here (a bit OT): how does one get into GCC development, especially backends?
    GCC is also an amazing feat of software, and creating custom backends would make many experiments possible (porting to old architectures, porting to virtual machines, C++ on virtual machines, etc.).
    • by Anonymous Coward
      If you are interested in porting GCC to a new target, then read the "porting GCC for dummies" document. There are several other accounts floating around that document this process. If you want to get involved with development of GCC internals, then read through the GCC internals document. Then download the source code and start looking through it. Start on some small project just to get familiar with how GCC is laid out, how it's built, etc. http://gcc.gnu.org/projects/ has several projects that are rele
