New Linux Kernel Development Process

An anonymous reader writes "Releasing the 2.6.13-rc4 Linux Kernel, Linus Torvalds announced an improved development process to try and minimize the number of bugs in the kernel. The general idea is simple: changes will only be allowed for two weeks after the release of a stable kernel. All the rest of the time between releases will be spent on fixing bugs. This should improve upon last year's development model, which allowed for active development in the stable 2.6 kernel."
  • by enoraM ( 749327 ) * on Friday July 29, 2005 @03:26PM (#13198062)
    >as many of you are aware, we were talking (not enough) about the release process at LKS this year.
    As you may be aware, we're going to talk more than enough about the release process at LKS, since your post is on slashdot ;-)
  • Who'da thunk it (Score:2, Interesting)

    by winkydink ( 650484 ) *
    It almost sounds like a normal sw dev process.
    • Hahahaha. You've never worked in a real software development company, have you? Let me break down the process for you:

      1. Code furiously, creating bugs and features.
      2. Tar up the development tree, drop it into place for release.
      3. Perform steps 1 and 4 in parallel.
      4. Fix bug in main development tree.
      5. Perform steps 1, 2, 3, and 4 in parallel.

      I think that about covers it.

  • Interesting (Score:5, Insightful)

    by youknowmewell ( 754551 ) on Friday July 29, 2005 @03:30PM (#13198101)
    This seems to put more pressure on individual distro vendors to add features and test them, then discuss their inclusion in the upstream kernel. Seems pretty reasonable to me. This should definitely stabilize the kernel a lot.
  • Troll this (Score:2, Informative)

    by null etc. ( 524767 )
    to try and minimize

    Proper English is:

    try to minimize

    not:

    try and minimize

    I'm just going for my daily troll mod, since I seem to be getting many troll and overrated mods for posts that don't deserve them.

    • How is a grammarcop post a "Troll"? Trolls are poser posts designed to create predictable negative responses. Which add nothing to the discussion but distraction and acrimony. Usually containing a lie or false hidden premise. The term is primarily from fishing, where "trolling" is cruising around with a fixed pole/hook, trying to catch anything stupid enough to bite. The monster reference is secondary, only supplying resonance with its image of a creature trying to catch an unsuspecting human. Your post mig
      • That's his whole point there. Not that he just trolled, but that he will get modded that way anyway.

        " I seem to be getting many troll and overrated mods for posts that don't deserve them."
        • no, no, no (Score:3, Informative)

          by hawk ( 1151 )
          when someone trolls and notes that they'll get "unfairly" moderated for it, the usual response is "informative" . . . whether it's trollish, informative, both, or neither . . .

          hawk
      • The term is primarily from fishing, where "trolling" is cruising around with a fixed pole/hook, trying to catch anything stupid enough to bite.

        I thought it was "trawling"?

        Ah, I've learned something. "Trawling" can refer to this, but also to dragging a wide net (which is what I had in mind). "Trolling" can mean however to fish by dragging a baited line.

        -b
    • Read your "Eats, Shoots and Leaves". He may very well have meant to say "try and minimize" if he meant to suggest that two things be done:
      1) 'try' the bugs (test them, put them under trial)
      AND
      2) 'minimize' the bugs (attempt to reduce the number of the bugs which you have tested).
      So, while I agree that it's statistically likely that he intended to say what you think he meant - his sentence is still grammatically correct if you interpret it the way I've suggested.
      Cheers!
    • I, for one, welcome our English-correcting overlords.

      As long as the correction is done in a kind manner, this kind of stuff does nothing but help. I've learned a few things, at least.
  • I was planning on submitting a patch to make a certain tablet pass pressure data to X. (By re-mapping Tablet-Pressure to Mouse-Z).

    Now I'll have to rush to get it in without a huge wait before it gets in the main tree.
    • I'll also have to test it against the new kernel within a few days of its release, which may be difficult for changes that aren't so minor (filesystems, etc. not this simple USB event driver).
      • by iabervon ( 1971 ) on Friday July 29, 2005 @05:29PM (#13198906) Homepage Journal
        There's no reason to test the patch with 2.6.13 (as opposed to 2.6.13-rc4) if you're trying to get it into 2.6.14; there's a much higher chance that something will break it between 2.6.13 and 2.6.14-rc1 than between 2.6.13-rc4 and 2.6.13, since the former is when all of the new features are getting put in, and the latter is only the last set of bug fixes during a code freeze. The point of the change is so that you're not the only person testing it in the cycle leading up to the release that includes it, because bugs will probably show up in configurations you don't have.

        Of course, the right way to get a change made is to get it into -mm now (or whenever), and have it working there; then it'll get tested and accounted for in other development, and will get put in by default at the start of the cycle if there's been good feedback. Then you just have to test it in -rcs and report if it gets broken.
  • Good because... (Score:4, Interesting)

    by samjam ( 256347 ) on Friday July 29, 2005 @03:35PM (#13198143) Homepage Journal
    It means the longer you wait, the more stable the kernel will be.

    No more lucky dips, and less need to depend on the vendors' tracked patchsets.

    Sam
  • When we're about to release a new version of our software [codeblocks.org], we only focus on fixing bugs and adding important requested features. And of course, there are the all-famous CVS branches. In any case I'm glad the Linux development process has taken this approach.
  • Linux has made amazing progress.

    But as I browse the submitters of actual code, it seems that it's no longer the every-man's operating system.

    More and more often we're seeing Red Hat and IBM employees tinkering with the code.

    Does this mean a lack of quality? No, certainly not. A professional developer is usually very well versed in what he or she's working on.

    But I propose that we watch what is being worked on and that our priorities are appropriate.

    Perhaps an IBM or similar company has a new feature that
    • by Anonymous Coward on Friday July 29, 2005 @03:47PM (#13198256)
      that's a remarkably verbose post saying... what again? nothing? right.

    • But as I browse the submitters of actual code, it seems that it's no longer the every-man's operating system.

      More and more often we're seeing Red Hat and IBM employees tinkering with the code.


      All this means is that these companies are now paying people to work on Linux instead of hacking on it in their spare time. There are even people with @microsoft.com addresses in the contributors file.

      Linus verifies most submissions in the dev kernel, but he often does it now as merges from someone else's branch, instead o
    • Feel free to fork it at any time.
    • by LnxAddct ( 679316 ) <sgk25@drexel.edu> on Friday July 29, 2005 @04:04PM (#13198371)
      With all due respect, Red Hat has been the largest kernel contributor for just under a decade now. They've been taking it in a good direction so far, I wouldn't worry about it too much. Despite all the nonsense slashdotters will say about Red Hat, Red Hat is one of the few distros that actually develops the software it ships rather than just repackaging other people's software. Look at Apache, the Kernel, Gnome, libc, the GNU compiler collection, openoffice, evince, totem, dbus, and most of the drivers you use in your system. The list could literally go on for ages. They also are a major reason why Linux has an amazing record for getting patches out quickly for security fixes; a lot of those fixes are coded by Red Hat developers. Not to mention they gave Linus 10 million dollars in stock as a showing of gratitude to him. They are a good company, and some of the best kernel hackers are paid by them (including Alan Cox).
      Regards,
      Steve
    • by Linus Torvaalds ( 876626 ) on Friday July 29, 2005 @04:25PM (#13198508)

      Perhaps an IBM or similar company has a new feature that they want, or worse, need, in the Linux kernel, and as such they spend all their time working on that.

      The reality might be however that an improved VM is needed but all the Red Hat guys are busy working on some scheduling code that really isn't as crucial.

      If an improved VM is needed, then who needs it? Why would there not be anybody to "scratch that itch" if people need it? The presence of large companies contributing scheduling code does not mean that other people can't contribute VM code.

      But I propose that we watch what is being worked on and that our priorities are appropriate.

      From your post, I suspect what you mean is that you want Redhat's priorities to be appropriate for your needs. Free Software isn't about getting other people to do what you want.

      If there's nobody contributing code that satisfies your needs, then perhaps the rest of the world doesn't consider it as necessary as you do, in which case, you have an unusual problem and you know exactly what you can do to fix it - code it yourself or pay somebody else to.

      As far as I know, Linus himself still verifies all submissions and deems which baselines they appear in, but I hope that since he's also a professional and getting paid by Corporate if our priorities are straight.

      That sentence doesn't make sense.

    • The reality might be however that an improved VM is needed but all the Red Hat guys are busy working on some scheduling code that really isn't as crucial.

      That's fine, because an improved VM can only really be done by developers who have specialized in the VM, not the people working on scheduling code. Practically all of Linux is to the point now that you can only improve it if you have some special qualification (which may be just that you have the hardware you want a driver for, or might be a lot of experi
    • by diegocgteleline.es ( 653730 ) on Friday July 29, 2005 @05:11PM (#13198817)
      That has always been true, and all^Wmost of Linux 2.6 features are there because vendors needed them - NPTL, SMP scalability...

      It's no different than any other open source project. If I want something in slashcode, I code it according to my interests. In linux, big features are coded according with the interest of vendors. There's not a really big difference. Look at the poor state of graphic drivers in linux for example - if that was important for vendors we'd have great graphic drivers
    • >More and more often we're seeing Red Hat and IBM
      >employees tinkering with the code.

      And it never occurred to you that RH/IBM are *hiring* kernel developers?

      >The reality might be however that an improved VM is
      >needed but all the Red Hat guys are busy working on
      >some scheduling code

      Well, if a new VM is needed, I'm sure someone will work on it (at least the people who need it, right?)
      If you also get better scheduling code, I don't know, seems like a good deal to me...

      >since he's also a profess
  • by Sheetrock ( 152993 ) on Friday July 29, 2005 @03:39PM (#13198174) Homepage Journal
    One idea is splitting kernel releases into two branches -- one stable, one development. This gives the benefit of allowing bugfixes to be applied to an otherwise stable tree while creating an experimental environment for new ideas, mirroring the "parallel deployment" methodology system analysts prefer for stability during upgrades (where you continue running an old system for a while after the new system is deployed so you have something to fall back on.)

    This seems to work successfully for a number of open source projects, which use a version numbering system that allows users to tell at a glance whether they're using a development version.

    • They used to do this. Odd kernel releases were development/experimental, even releases were stable/bug fix. That's why the kernel went from 2.0 to 2.2 to 2.4 to 2.6. Recently in the 2.6 branch though, they moved away from that model. Don't remember why.
      • Recently in the 2.6 branch though, they moved away from that model. Don't remember why.

        It was supposed to decrease the amount of backporting, and increase the available testing.

        First, as new features (and drivers) were added to the development tree, people backported them to the stable branch, this supposedly drew efforts away from the development tree (ie. people were spending time backporting the new features/drivers when they could be debugging/testing the development tree instead.)

        Second, as the develop
    • Unless I'm misreading your intent that's what they've been doing. Even numbers are stable (2.2, 2.4, 2.6) and odd numbers are development (2.3, 2.5, 2.7).
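      The even/odd convention described above can be sketched as a small check (an illustrative helper, not any official kernel tooling):

      ```python
      def kernel_series(version: str) -> str:
          """Classify a 2.x kernel version by the historical convention:
          even minor numbers (2.2, 2.4, 2.6) were stable series,
          odd minor numbers (2.3, 2.5, 2.7) were development series."""
          minor = int(version.split(".")[1])
          return "stable" if minor % 2 == 0 else "development"

      print(kernel_series("2.6.13"))  # stable
      print(kernel_series("2.5.75"))  # development
      ```

      Note this convention only held up through the 2.x era; the change discussed in the story keeps development inside the "stable" 2.6 series instead.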
    • They've got two branches (mainline and -mm). They do parallel deployment. The problem they're having is that people stick with the old system until it goes away, and then complain about the new system not working. The change is designed to get all of the completed development into the stabilizing system at the start of the cycle, so that it will only get bugfixes during the cycle, and will actually work when it is blessed as "stable", rather than only after being "stable" for a few point releases.
  • by jesuscyborg ( 903402 ) on Friday July 29, 2005 @03:44PM (#13198230)
    Didn't Torvalds once say something along the line that 'perfect is the enemy of good' when criticizing BSD? Is he moving away from 'good-enough' with lots of features constantly coming out, towards a more BSD-esque, move along slowly with stable-code philosophy?
    • by shmlco ( 594907 ) on Friday July 29, 2005 @03:59PM (#13198346) Homepage
      As Linux is used more and more frequently in corporate mission-critical applications and servers, the priority seems to be shifting towards stability. Code that works is preferred over less-stable code that contains the latest and greatest features.

      Which is also, btw, what people say they want from MS and Windows.

      • Generally when you need absolute stability, you'll pick an old kernel version (like 2.4.x) that has been closed on development for years and just patched for bugs. This development model will make kernel releases less buggy, but someone making a mission critical server probably isn't going to be persuaded to get the latest release. Due to the nature of open source and the volume of Linux users, bugs get hammered out over time no matter what development model Torvalds uses.
        • While that's true, the 2.6 series in general has vast performance and feature upgrades over 2.4. Only if I wanted EXTREME stability at the cost of uptime would I bother running something from the 2.4 series.
          • I would be a bit more conservative than that.

            But then, my CD drive is on a Promise card because Linux can't use it when it's connected to the motherboard anymore.

            It's not just a few pesky little bugs, it's huge crippling regressions, and I don't even want to upgrade because who knows what will bite me next time. At least I have a Promise card so I can work around the bugs in my current version.

            2.6 is very close to being unusably bad. To me, the only cases where it's acceptable are where stability doesn't ma
      • by po8 ( 187055 ) on Friday July 29, 2005 @05:06PM (#13198790)

        If only we could have it both ways. If only there was some way that folks that needed stability could have some kind of stable Linux kernel, while folks who wanted to experiment with the latest and greatest features could have some kind of experimental kernel.

        Perhaps we could use some kind of numbering scheme that separated the two; for example using "odd" version numbers like 2.7 for the experimental kernel series and even numbers like 2.6 or 2.8 for the stable series. One might imagine that periodically the matured changes in the experimental series could be merged back into the stable series, starting new series for both stable and experimental.

        Maybe Linux should have instituted a process like this years ago. Then they'd have some experience with it by now, and could have it running smoothly instead of messing around with new development processes like they currently are.

        Oh well. Just a crazy dream I had.

    • No, this is more of the snow globe shake-and-settle method: rampant development for 2 weeks, then let it settle until stable.
    • by pilgrim23 ( 716938 ) on Friday July 29, 2005 @04:13PM (#13198441)
      Even living gods get older....and some might say....wiser.....
    • by joto ( 134244 ) on Friday July 29, 2005 @05:09PM (#13198805)
      Didn't Torvalds once say something along the line that 'perfect is the enemy of good' when criticizing BSD? Is he moving away from 'good-enough' with lots of features constantly coming out, towards a more BSD-esque, move along slowly with stable-code philosophy?

      Linus says a lot of things. It seems to me that he is just using the scientific approach, and trying new ways of doing stuff to see what works best. Some ideas are good, others are bad. But if you never change your process, you'll never find out.

      These changes in the process make a lot of people scream whenever they happen. That's because people don't like change. Even now, people are screaming about breaking the odd/even process (which didn't work too well), even though the 2.6 process has worked much better. If the 2.6.13 process isn't even better, Linus will scrap it and try something else (such as going back to the old 2.6 process, or the 2.6.x.y process, or something else new, or whatever).

      Stay calm! The world isn't going to end! All these changes mostly affect kernel developers, and even then, mostly those in the "inner circle". Your redhat/ubuntu/suse/whatever will still work just fine.

    • Didn't Torvalds once say something along the line that 'perfect is the enemy of good' when criticizing BSD?

      What linus has really said in the past:

      "I retain the right to change my mind, as always. Le Linus e mobile."

      "And don't get me wrong - I don't mind getting proven wrong. I change my opinions the way some people change underwear. And I think that's ok"

    • Is he moving away from 'good-enough' with lots of features constantly coming out, towards a more BSD-esque, move along slowly with stable-code philosophy?

      I wouldn't say that. It used to be that active kernel development took place on odd kernel numbers (2.1,2.3,2.5), and bugfixes only happened in even kernel numbers. This has changed recently due to odd branches not being tested enough by distributions. Of course this leads to instability in the kernel because new untested stuff is "dangerous".

      This move
    • This is simply another step in the chain of Linux evolution. The old way of doing things was simply cumbersome and probably cost Linux some stability and features by inefficiently wasting developers' time. Backporting a feature like a VM to an older kernel like 2.4 wasn't a simple task.

      It may be decided in the future that the new system can't handle radical new changes the way the old system can. This is the most likely scenario for going back.

  • let's start hacking again ...
  • by Work Account ( 900793 ) on Friday July 29, 2005 @03:50PM (#13198276) Journal
    As I think more about this decision, I wonder why not simply split the development between bug fixes and feature providers.

    For example, Linux kernel 2.7 is released.

    We run regression test on it for a week or two.

    At that point, we document all known bugs and hand them and the entire 2.7 codebase off to our bug fixing team.

    Then we identify improvements to current capabilities as well as new features we want to add, document all of it clearly, and hand that off to the feature team along with their own copy of the 2.7 baseline.

    Then we have our bug guys working the 2.7_bugfix baseline and our features guys adding valuable new code to the 2.7_features baseline.

    Prior to the next release, we merge all the changes together, spend a week sorting out any dependency problems and interface problems, then we ship.

    And repeat.

    Sounds feasible to me. I just don't like the feeling I get when thinking that there's such a short development window.

    The Linux kernel is already pretty darn stable, especially when compared to other operating systems. Let's keep the new features coming!
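    The split proposed above can be sketched as a toy model -- purely illustrative, with made-up change names, and skipping the conflict resolution a real merge would need:

    ```python
    # Toy model of the proposed process: both teams start from the same
    # baseline, work independently, and their changes are merged together
    # before the next release.
    baseline = ["2.7 core"]  # hypothetical contents of the 2.7 release

    bugfix_branch = baseline + ["fix: documented bug #1"]   # bug-fixing team's copy
    feature_branch = baseline + ["feat: new capability"]    # feature team's copy

    def merge(base, *branches):
        """Naive merge: keep the base, then append each branch's new work.
        A real merge must also sort out dependency and interface problems,
        which this sketch skips."""
        merged = list(base)
        for branch in branches:
            merged += [change for change in branch if change not in base]
        return merged

    next_release = merge(baseline, bugfix_branch, feature_branch)
    print(next_release)
    # ['2.7 core', 'fix: documented bug #1', 'feat: new capability']
    ```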
  • Two details to note (Score:5, Informative)

    by rewt66 ( 738525 ) on Friday July 29, 2005 @04:12PM (#13198434)
    Note that the idea came out of the Linux Developer's Summit as a way to improve the Linux development process; it's not just one person's idea.

    Also note that they are going to try this approach. If it doesn't work out, I expect that Linus (ever the pragmatist) will drop it rather quickly...

  • by Fritzy ( 564827 ) on Friday July 29, 2005 @04:12PM (#13198437)
    This simply makes it so that bug patches don't get stepped on all of the time. Developers that are submitting feature updates will simply have to time their submissions. I don't think it'll slow development at all, it'll just polish releases more easily.
  • Here's an idea! (Score:3, Insightful)

    by Anonymous Coward on Friday July 29, 2005 @04:25PM (#13198509)
    How about fixing the bugs that have been outstanding for well over a year?

    It really is disappointing to spend hours testing and finding how to 100% reproduce bugs, even ones that freeze the system as a user, and report them to the various mailing lists, only for the reports to be ignored.

    Yes, I've tried fixing some myself.
  • about time (Score:4, Insightful)

    by ArbitraryConstant ( 763964 ) on Friday July 29, 2005 @04:47PM (#13198680) Homepage
    2.6 has been one big regression fest, and despite its advantages I've always had to use something else for anything but desktop duty because the risk was too high.

    There has to be a tradeoff between new features and sufficient stability to contemplate using the new features -- they aren't an advantage if they are inseparable from the bugs.

    Glad Linus came around.
    • Re:about time (Score:3, Insightful)

      by Kent Recal ( 714863 )
      2.6 has been one big regression fest

      Dude, how often do you need to repeat yourself?
      You made no less than 4 posts to this thread, all saying the same thing.

      Yes, 2.6 is not quite stable but at the same time it's not as bad as you make it sound.
      I wouldn't use it on a server but for most workstation- and home-use
      it's pretty fine already!

      It seems you have missed that 2.6 is the head-branch.
      That means it's a work-in-progress and nobody considers it finished!

      So just quit the whining and stick with 2.4 until 2.6 i
      • Open Source,
        open to criticism.

        I feel the 2.6 philosophy has been damaging to Linux, and I have as much of a right to criticise Linux as the yes-men have to praise it.
  • My first reaction is "then they better start releasing major kernel versions more frequently. I'm not trying to wait two years for each significant update." But then I started thinking. I have a couple of comments.

    1) Is the driver system in linux mature enough that it is ABI-stable? Do you need to continue to have that many features added if drivers can be added easily by distros?

    2) What features are still being added to the kernel that aren't drivers? Are they so exciting that you can't wait?

    3) Will it re
  • I suppose it's obligatory for me to note that LWN's kernel summit coverage [lwn.net] talks about the development model changes - and many other things.
  • This is GREAT news (Score:5, Interesting)

    by rc.loco ( 172893 ) on Friday July 29, 2005 @06:04PM (#13199096)

    I remember when the Linux kernel was rock solid, stable and reliable. I remember when there were no huge code changes in the "stable" even-numbered kernel series. Remember those days? I'm talking late 2.2.x before the whole VM debacle in the first part of 2.4.x.

    In the last few years, it seems the push to carve out marketshare on the desktop has been fuelling kernel development more so than server-oriented work. I've been frustrated to the point of recommending Linux-kernel-based systems only with caution and caveats, preferring instead Solaris for serious enterprise-level server-side work.

    If this works out, it'd be a boon for enterprise adoption of the Linux kernel. Hats off to Linus et al. for this change in their practices.
    • I remember when the Linux kernel was rock solid, stable and reliable. I remember when there were no huge code changes in the "stable" even-numbered kernel series. Remember those days?

      Yeah, I remember those days as if it was yesterday. In fact it actually WAS yesterday that I logged in to a rock solid Red Hat AS 3.0 box.

      What are you blabbering about, man? It's the task of the big distros to take a stable kernel and keep it stable.

      You don't do kernel downloads/compiles yourself and put them in productio

  • Having the 1st 2 weeks in the cycle be the only time for development need not slow us down.

    We can just release a new kernel every 2 weeks and spend all our time adding features! :0

  • Why not have an unstable branch (2.7) where all the development work is done, and a stable branch (2.6) which only gets bugfixes? It seems that with 2.6 both branches have been merged, leaving half-assed stability with half-assed feature rushing...

    While I'm wishing for ponies, could the bugfix-only release also have a stable ABI, pretty please?
