Linux Software

2.6 and 2.7 Release Management

An anonymous reader writes: "A recent discussion on the Linux kernel mailing list debated whether the upcoming 2.6 and 2.7 kernels should be released at the same time instead of first stabilizing the 2.6 'stable tree' then branching the 2.7 'development tree.' The theory behind the proposition is to keep "new" things from going into 2.6 once it is released, focusing instead only on making it stable. On the flip side of this argument is the possibility that with a 2.7 kernel in development, there will be too little focus on stabilizing the 2.6 kernel. The resulting debate makes for an interesting read."
This discussion has been archived. No new comments can be posted.

  • this is silly (Score:3, Insightful)

    by edrugtrader ( 442064 ) on Friday July 12, 2002 @08:05PM (#3874458) Homepage
    there will always be a kernel in development and one being stabilized... it's a wash either way.

    I would recommend stabilizing 2.6 before branching 2.7 (the initial argument), and I think the flip side is incorrect... just because 2.7 is 'in the works' doesn't mean the 2.6 hackers are going to take a nap on their work
  • by on by ( 572414 ) on Friday July 12, 2002 @08:07PM (#3874468) Homepage Journal
    Stupid fucks. Which version is actually going to be useful? 8.3? 10.2?
  • by Hrunting ( 2191 ) on Friday July 12, 2002 @08:11PM (#3874493) Homepage
    See, to me, when someone calls it a "stable release", that means it's already been stabilized. Sure, you're going to have the occasional bug fix here and there, but actual "stabilization" should've been done in the 2.5.99 range, i.e. the previous development branch. Once the stable tree is released, there shouldn't be a need to stabilize it, and branching the new development tree right then makes sense. There should not be any "development" per se in the stable release after that, only occasional maintenance.

    If the kernel maintainers would just grasp this one simple point, maybe this issue wouldn't be one, and maybe people wouldn't laugh at the .0 release of the kernel.
  • Re:A Good Thing... (Score:1, Insightful)

    by Felt Tip Pen ( 592394 ) on Friday July 12, 2002 @08:36PM (#3874595)
    The Linux kernel needs a severe restructuring of its development model to stay afloat. Bad Things Will Happen (TM) if we keep on like we are now. Kernel differences between distributions are making it even worse. Never mind becoming a viable business/corporate solution, let's fix what's wrong before it's too late!
  • Just my thoughts (Score:4, Insightful)

    by young jedi ( 311623 ) on Friday July 12, 2002 @08:47PM (#3874627)
    If 2.7 begins before 2.6 is stable, aren't we in danger of a win9x syndrome, where bugs live forever and, instead of being fixed, are coded around? I fear very much the long-term effects on the kernel, and in turn on Linux, if the trees are split prior to a stabilization period. I am a developer, not at this level, but I have seen the effects of splitting a code base simply to continue developing while at the same time trying to patch existing "production code" and then port things back and forth. It is a very bad idea!! Usually what happens is that things don't get back-ported; they are only provided during a major upgrade. Again, the Microsoft way of bug fixing.

    Granted, you will always have some cross patching, however I think the idea of building off of a clean base is very important. For example, you would not put new tires on your car if the engine is not running, right?

    Essentially, I think the issue here is one of knowing the base is clean versus drudging on in the dark despite the fact that you have been offered a lantern.

    To put this most bluntly, I would call this Microsoft syndrome. As I said before, win9x is the perfect example of a system that was never stabilized; rather, it was constantly released to the unsuspecting public as upgrades which were really bug fixes. The monkeys went back to the keyboards, never addressing the issues raised by numerous consumer requests against the so-called production release, because the devel team would rather work on that new feature than maintain the existing code base.

    I am being harsh here, I know, but I am trying to view this in the long term. I feel that this would weaken the kernel and, as I said, weaken Linux, which would in the end at least decrease corporate trust in the stability of Linux, or at worst give M$ what it wants: Linux's death.

    Maybe I am extreme; feel free to beat me. But I know you have to have a clean starting point before you can move forward, otherwise you will constantly be taking steps backwards, which eventually leads to stagnation and death.

    Just my thoughts
  • by Felt Tip Pen ( 592394 ) on Friday July 12, 2002 @08:55PM (#3874655)
    Marcelo got into kernel hacking when he was like 16. So what if our current Gods decide to leave? They would be missed, for sure, but there'd surely be someone new, hungry, and just as skilled ready to take their place.

    That's why Linux can be counted on. NT4? MS just up and decides to drop support. Interactive Unix? Sun just up and decides to drop support. Thank god IBM has kept up with OS/2... for now...
  • by tlambert ( 566799 ) on Friday July 12, 2002 @10:04PM (#3874875)
    [ ...Putting on my "politically incorrect" hat... ]

    It's a common Open Source Software problem: there is the last release, and there is the development branch.

    Developers would all prefer that you use the development branch, report bugs against *that*, provide patches for the bugs against *that*, and do all new work in the context of *that*.

    But that's not how things work, outside of an Ivory Tower.

    In the real world, people who are using the system are using it as a platform to do real work *unrelated to development of the system itself*.

    I know! Unbelievable! Heretics! Sacrilegious!

    FreeBSD has this disease, and has it bad. It very seldom accepts patches against its last release, even in the development branch of the last release, if those patches attempt to solve problems in a way that makes the submitted work look suspiciously like "development". The cut-off appears to be: "it fixes it in -stable, but would be hard to port to -current; do it in -current, as your price of admission, and back-port it instead, even if you end up with identical code".

    The only real answer is to keep the releases fairly close together -- and *end-of-life* the previous release *as soon as possible*.

    The FreeBSD 4.x series has lived on well past FreeBSD 4.4 -- supposedly the last release on the 4.x line before 5.0. FreeBSD 4.6 is out, and 4.7 is in the planning stages.

    It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD, since nearly all commercially paid developers are running something based on -stable. FreeBSD -current -- the 5.x development work -- is *nearly two years* off the branch point from the 4.x -stable from which it is derived.

    Linux *MUST* strive to keep the differences between "this release" and "the next release" *as small as possible*. They *MUST* not "back-port" new features from their -current branch to their -stable branch, simply because their -current branch is -*UN*stable.

    Delaying the 2.6 release until the 2.7 release so that you can "stabilize" and "jam as many 2.7 features into 2.6 as possible" is a mistake.

    Make the cut-off on 2.6. And then leave it alone. People who are driven by features will either have to run the development version of 2.7, or they will simply have to wait.

    Bowing to the people who want to "have their cake and eat it, too" is the biggest mistake any Open Source Software project can make.

    Don't drag out 2.7 afterward, either... and that's inevitable if everything that makes 2.7 desirable is pushed back into 2.6. Learn from the mistakes of others.

    -- Terry
  • by WNight ( 23683 ) on Friday July 12, 2002 @11:10PM (#3875027) Homepage
    Stable refers to the interfaces, more than the product stability.

    It's good when a "stable" kernel doesn't crash, but that's not actually what the word means. Look at Mozilla: it was stable, in that I often had 20 windows open for weeks at a time in Win2k (yeah, Win2k having an uptime of weeks, but really, it happened...) and Mozilla wouldn't crash once. But they didn't call it 1.0 until they stabilized the interfaces, so you could use plugins and add-ons without having to upgrade them for every minor update.

    It just happens that when you're adding functionality you often break backwards compatibility (hence, unstable interfaces) and make things crash (unstable in the other sense.)

    It's like 'Free': it's got multiple meanings. Linux 2.(even) releases are stable in the sense of unchanging. Releases that don't crash are stable in the meaning we normally use.
  • by Anonymous Coward on Saturday July 13, 2002 @01:08AM (#3875396)
    The code should just have been shelved to be used in 2.5 -- with the possibility of backporting to 2.4 if it was deemed necessary and/or useful. All that forking the dev branch early does is move a whole bunch of developers away from fixing old code to breaking new code. That's not exactly the effect we're looking for here.

    I could be wrong, but I believe the important thing to keep in mind is that some people would just rather break new code than fix old code.

    And, since Linux is not some kind of paid corporation with top-down control, there's no way to make these people concentrate on fixing old code. So they won't. They'll just go off and write new stuff anyway. Then when we get to the next dev kernel, we'll have to go gather up all the random uncoordinated new-stuff patches scattered all over the internet..

    We might as well recognize that Linux kernel development is no longer an "ok, here's a beta... now here's a final... now here's a beta... now here's a final" cycle. It's a large river of projects being done by a large, disparate bunch of people as they need their kernels able to do something. Linux has become too big for it to be practical anymore to pretend the entire development is following some cohesive, planned procedure. You can ensure that everyone who's actually being PAID to write the Linux kernel is following a cohesive development process, but you should acknowledge the chaos outside the core group's door once in a while.
  • by tlambert ( 566799 ) on Saturday July 13, 2002 @08:15AM (#3876458)
    [ Dammit, I hate people who use cookies instead of hidden fields for forms ]

    "I think you're being terribly naive about this, particularly the It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD comment. Development work on FreeBSD succeeds on both fronts."

    I have built or contributed to 5 embedded systems products based on FreeBSD. If you count licensing the source code to third parties, that number goes up to 12. This list includes IBM, Ricoh, Deutsche Telekom, Seagate, ClickArray, and NTT, among other lessers.

    There has been no case where any of these projects have involved use of FreeBSD-current. It just does not happen: the intent of the commercial project is to work on the product itself, not on the platform on which the product is intended to run. Toward that end, every one of these projects has used a stabilized snapshot of FreeBSD, usually a release, and, on only two occasions, a -security (release plus security bug fixes) or -stable (release plus any bug fixes) branch. Under no circumstances has an employer *paid* me to work on -current on their time.

    There are notable exceptions to this practice: there have been specific DARPA grants, and Yahoo has a number of highly placed people who get to pick what they work on. But these opportunities are few and far between.

    "Plainly, it is conceivable that were a commercial team to submit changes to -CURRENT, they would be timed for integration into -STABLE in the same manner as current changes are. And make no mistake, a *lot* of things are folded back from -CURRENT on a regular basis. They may have a lengthy test period, but hey, shouldn't every new feature?"

    Your argument is that FreeBSD -current and FreeBSD -stable bear a strong relationship to each other, besides each having the prefix "FreeBSD", and that integration into -current means that testing will be done, and that after testing, integration into -stable will happen.

    I disagree strongly. The two source bases run different tool chains, and they are significantly different under the hood as well. It is nearly impossible to write kernel code that operates identically, without substantial modification, on both -stable and -current. The differences in process vs. thread context, and in locking, *alone* mean that there is not one subsystem in the kernel that has not been touched. This ignores the semantics and API changes, etc., on top of that.

    Despite back-porting, there is nearly two years difference between -stable and -current. I have a system that I updated to -current in October of 2000. It claims to be 5.0. FreeBSD has not made a code cut off the HEAD branch in that entire time -- or FreeBSD 5.0 would have been its name.

    It would be a serious mistake for Linux to follow FreeBSD down this path. Linux should continue to make release code cuts off its HEAD branch, stabilize, *and then deprecate* the releases.

    FreeBSD has failed to deprecate its branches following releases. This means that if a commercial developer wants to contribute code for inclusion in a future version of FreeBSD in order to avoid local maintenance (FreeBSD, unlike Linux, does not *require* such contributions; it relies on this and other emergent properties), they must first take their code from where it's running and port it to an entirely *alien* environment. Then they must wait for approval, and then back-port it -- since no one is going to do the work for them -- to the minor version after the one that they stabilized on for their product.

    "It keeps the stable kernels stable, and it folds in new changes in an orderly and well tested fashion."

    `Orderly' and `tested' are one thing. Two *years* of API and interface evolution are something else entirely.

    The first time Linux cuts a distribution that isn't a pure maintenance point release of a minor number (e.g. NOT 2.5.1 off 2.5, and NO 2.6 off of 2.5 if a 3.0 is in the works), it will have effectively forked itself.

    -- Terry
  • by Error27 ( 100234 ) <error27.gmail@com> on Saturday July 13, 2002 @02:07PM (#3877990) Homepage Journal
    The difference really is not in the patches that they add but in the testing that they do. Some of the stock kernels are very bad. They might not compile, for example.

    You're probably right that there is not always a lot of difference between stock kernels and vendor kernels. But I always tell people to only use vendor kernels, because if those break then people can blame Red Hat or Suse rather than hassling the developers.

    The post I was replying to was bellyaching about .0 releases and thus falls under the "hassling developers" category. I'd be willing to bet that Red Hat and Suse didn't ship with the .0 version because they knew it wasn't trusted.

    Mandrake may have shipped with it... They like to live on the edge.

    But yes. You're right. There is nothing wrong with using stock kernels in production. I believe that Debian only uses stock kernels.
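Several posters above lean on the historical Linux numbering convention WNight describes: an even minor version (2.4, 2.6) marks a stable tree, an odd one (2.5, 2.7) a development tree. A minimal sketch of that rule (the function name is my own invention):

```python
def kernel_tree(version: str) -> str:
    """Classify a 2.x-era Linux kernel version string by the historical
    even/odd convention: even minor number = stable tree,
    odd minor number = development tree."""
    _major, minor, *_rest = (int(part) for part in version.split("."))
    return "stable" if minor % 2 == 0 else "development"

print(kernel_tree("2.4.18"))  # -> stable
print(kernel_tree("2.5.25"))  # -> development
print(kernel_tree("2.7.0"))   # -> development
```

Note this parity scheme applies only to the kernels under discussion here; it says nothing about whether a given build actually crashes, which is WNight's whole point.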
