2.6 and 2.7 Release Management
An anonymous reader writes: "A recent discussion on the Linux kernel mailing list debated whether the upcoming 2.6 and 2.7 kernels should be released at the same time instead of first stabilizing the 2.6 'stable tree' then branching the 2.7 'development tree.' The theory behind the proposition is to keep "new" things from going into 2.6 once it is released, focusing instead only on making it stable. On the flip side of this argument is the possibility that with a 2.7 kernel in development, there will be too little focus on stabilizing the 2.6 kernel.
The resulting debate makes for an interesting read."
this is silly (Score:3, Insightful)
I would recommend stabilizing 2.6 before branching 2.7 (the initial argument), and I think the flip side is incorrect... just because 2.7 is 'in the works' doesn't mean the 2.6 hackers are going to take a nap on their work
Don't forget the past (Score:3, Interesting)
A large part of the awful VM mess that 2.4 was in around 2.4.8 - 2.4.11 or so was due to the fact that a totally new VM was just kind of "thrown in" to the "stable" branch, probably mainly because there wasn't a 2.5 branch yet at that point (as I recall). This is the sort of thing that branching earlier would hopefully prevent. While the stable branch may not get some of the "bells and whistles" it could have gained from keeping the branches together, at least a mess like that can hopefully be avoided.
Then again, that's just my opinion :)
Re:Don't forget the past (Score:4, Informative)
The old VM had 4 different major bugs:
1. Concurrent processes could deadlock on sibling processes in SMP mode (see the illustrative sketch after this list).
2. The behavior of the VM code under thread-heavy processes on UP and SMP systems was so bad under load (especially when a 4 GB memory system was used) that you would have gotten more performance by running the 2.0 series than 2.4's VM.
3. No checks were placed on the thread corruption and bucket-fill code, which caused other nice things under load.
4. The enter beast was so unyielding that the only person who ever understood it had to keep a journal just to keep track of the beast (seriously). This was one of the major reasons that ticked off Linus, and I believe the reason why he pushed the new VM.
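To make point 1 concrete, here is a minimal user-space sketch of that kind of lock-ordering deadlock (purely illustrative, not the actual VM code): two threads take the same pair of locks in opposite order, and on an SMP box each can end up waiting on the other forever.

    /* Illustrative only -- not kernel code. Classic AB/BA lock-ordering
     * deadlock: each thread holds one lock and waits for the other. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_one(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);   /* holds A ... */
        sleep(1);                      /* widen the race window */
        pthread_mutex_lock(&lock_b);   /* ... and waits for B: deadlock */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread_two(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);   /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&lock_a);   /* ... and waits for A: deadlock */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread_one, NULL);
        pthread_create(&t2, NULL, thread_two, NULL);
        pthread_join(t1, NULL);        /* never returns */
        pthread_join(t2, NULL);
        puts("done");                  /* never reached */
        return 0;
    }

Compile with gcc -o deadlock deadlock.c -lpthread and the process hangs. The fix, then as now, is a consistent global lock ordering.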
But the 2.4 series was a major testbed. The moment we released it without having a 2.5 out, people started testing 2.4 in much greater numbers than they did the previous development series. This gave us feedback in a week that we could not have gotten in six months on a development kernel.
The new VM has been stabilized and it's working wonderfully; if you use any kernel above 2.4.16, you should be fine.
jr
what is the "enter beast"? (Score:2)
4. The enter beast was so unyielding that the only person who ever understood it had to keep a journal just to keep track of the beast (seriously). This was one of the major reasons that ticked off Linus, and I believe the reason why he pushed the new VM.
what is the "enter beast"?
Re:Don't forget the past (Score:1)
I respect how hard it is to work out the kinks without a lot of eyes, but the goal should be to minimize the number of potential bugs in the stable version, while giving people the option to try the latest-and-greatest (at their own risk) if they so choose.
Re:Don't forget the past (Score:1, Insightful)
I could be wrong, but I believe the important thing to keep in mind is that some people would just rather break new code than fix old code.
And, since Linux is not some kind of paid corporation with top-down control, there's no way to make these people concentrate on fixing old code. So they won't. They'll just go off and write new stuff anyway. Then when we get to the next dev kernel, we'll have to go gather up all the random uncoordinated new-stuff patches scattered all over the internet..
We might as well recognize that linux kernel development is no longer "okay, here's a beta... now here's a final... now here's a beta... now here's a final". It's a large river of projects being done by a large, disparate bunch of people as they need their kernels to be able to do something. Linux has become too big for it to be practical to pretend the entire development is following some cohesive, planned procedure. You can ensure that everyone who's actually being PAID to write the linux kernel follows a cohesive development process, but you should acknowledge the chaos outside the core group's door once in a while.
Re:this is silly (Score:1)
But that makes no sense. Why would we have a 2.6 release if it is not yet stable? What happened in 2.5 then? Why was it changed to 2.6 if it was not stable?
2.6.0 should be a STABLE release, right?
Re:this is silly (Score:1)
Yes, you are right, 2.6.0 should be a stable release. And hopefully it will at least be a stable release for all the people that developed and tested 2.5.x. But it is practically impossible to make a piece of software without bugs, so when 2.6.0 is released there will still be bugs to fix. When the version number changes from 2.5.x to 2.6.0 more people will start using it and consequently remaining bugs will be found faster.
Of course I hope 2.6.0 will not be released with known bugs, and will be intensively tested before release. But that is the best we can hope for.
keep it stable... (Score:1)
If I don't have to reboot in 6 months or beyond, then I'm happy with that.
Re:keep it stable... (Score:1)
FBSD has both concurrent stable and experimental development. The stable configuration includes kernel and userland source and is very suitable for production environments where non-security-related change is almost never desired.
A Good Thing... (Score:4, Interesting)
I see this as having two benefits. First, it will help with the ``Most things work pretty well---let's go ahead and release it.'' attitude. The 2.4 series has only recently gotten stable enough to use reliably in a production environment, and even that is not universally agreed.
Second, it will allow people to focus on what they are good at. The 2.6 series will mature much faster without new features being added in every release. Sure, there are bound to be a few gotchas, but if the focus is on stabilizing the code, they will be worked out by the 2.6.3 or 2.6.4 release. At the same time, people will be adding to 2.7, which should mean much less time between stable kernel series releases.
I'm all for it!
--Wyatt
Re:A Good Thing... (Score:1)
Also it will probably mean that it will not be such a long time between major releases...
Re:A Good Thing... (Score:2)
The difference between kernel distributions is not really a big deal. Maybe in a closed-source system it would be
Re:A Good Thing... (Score:1)
UnitedLinux is a step in the right direction, but something more granular is needed.
Re:A Good Thing... (Score:2)
It would probably be OK if, say, 2.6 were released no more than a week before 2.7, but I'm a bit dubious. OTOH, I don't know just how stable 2.5.99999 is.
It seems to me that the plan used last time worked out well. Not perfect, but well enough that I wouldn't want to tinker with it. Release the new version as current, and then wait a bit for more problems to show up. "We won't find any more problems unless we get some new testers in, so release it now." (LT being paraphrased.)
That was the right time to release it. Then the next round of debugging happened. Then the 2.5 branch was forked off.
Perhaps, as an intermediate step, submissions for the 2.7 branch could be accepted, and placed in a queue for evaluation after 2.6 was released.
Standard resource allocation problem (Score:1)
I would recommend dividing the team up.
Stabilizing the stable branch? (Score:4, Insightful)
If the kernel maintainers would just grasp this one simple point, maybe this issue wouldn't be one, and maybe people wouldn't laugh at the
Re:Stabilizing the stable branch? (Score:3, Funny)
Linux doesn't crash. It can't; it's simply not possible... Slashdot told me so. They said that Linux crashing would defy the laws of physics, or something.
Re:Stabilizing the stable branch? (Score:2)
Obviously they've never tried to change VTs when X was busy starting. (2.4.17, stock Linus kernel)
Re:Stabilizing the stable branch? (Score:1)
Windows XP doesn't crash. It can't; it's simply not possible. My friends/classmates/people on my contact list/the local MCSE/MSN.com told me so. They said that Windows XP crashing would defy the laws of physics, or something.
And as a professor at a university once said: "Windows XP's stability rivals that of Unix".
Whether you believe that or not is up to you (I don't). But most people actually believe it.
Re:Stabilizing the stable branch? (Score:2)
If you need stable kernels you should get them from Linux vendors like Red Hat, SuSE, etc. That's the way it has been for a long time.
Re:Stabilizing the stable branch? (Score:1)
A while back, a patched Red Hat kernel had a pretty bad bug in it that caused hard locks in X with ATI chipsets (2.4.9-something, I think). This particular bug was specific to the Red Hat kernel.
So, maybe the distro vendors fix some bugs, but they also introduce bugs of their own. I've used both stock kernels and distro kernels in production environments and haven't noticed particularly more or fewer bugs in either.
Re:Stabilizing the stable branch? (Score:3, Insightful)
You're probably right that there is not always a lot of difference between stock kernels and vendor kernels. But I always tell people to use only vendor kernels, because if those break, people can blame Red Hat or SuSE and not hassle the kernel developers.
The post I was replying to was bellyaching about
Mandrake may have shipped with it... They like to live on the edge.
But yes. You're right. There is nothing wrong with using stock kernels in production. I believe that Debian only uses stock kernels.
Re:Stabilizing the stable branch? (Score:2)
When tons of people start using 2.6.0, they will find new bugs. These bugs would also be in 2.7.0. Both kernel branches would need to be fixed BY HAND, increasing development and testing time. I think it makes more sense for 2.7.0 to start from a "stable" stable 2.6.x.
Re:Stabilizing the stable branch? (Score:1)
Whether it counts as commercial might depend on your definition of the word. The tool in question is BitKeeper [bitkeeper.com]. BitKeeper is not free software, but you don't have to pay to use it for Linux development. The use of BitKeeper by a number of kernel hackers does not affect the license of Linux itself.
On the kernel mailing list a few people have been flamed for requesting that the kernel documentation not mention BitKeeper.
It's a catch-22 (Score:3, Interesting)
Release candidate kernels help alleviate this somewhat, but you can never really duplicate what happens when the bulk of normal users start using it on an everyday basis.
Re:Stabilizing the stable branch? (Score:3, Insightful)
It's good when a "stable" kernel doesn't crash, but that's not actually what the word means. Look at Mozilla: it was stable, in that I often had 20 windows open for weeks at a time in Win2k (yeah, Win2k with an uptime of weeks, but really, it happened...) and Mozilla wouldn't crash once. But they didn't call it 1.0 until they stabilized the interfaces, so you could use plugins and addons without having to upgrade them for every minor update.
It just happens that when you're adding functionality you often break backwards compatibility (hence, unstable interfaces) and make things crash (unstable in the other sense).
It's like 'Free': it's got multiple meanings. Linux 2.(even) releases are stable in the sense of unchanging. Releases that don't crash are stable in the meaning we normally use.
Re:Stabilizing the stable branch? (Score:2)
"Stable" refers to the halfway-frozen API in even-numbered releases; it doesn't mean 2.4.x is not expected to crash. Stability is empirical.
The backport concept (Score:2, Interesting)
Great for bug fixes, and other things in the middle ground.
Certainly if there is interest, a set of patches to a stable kernel, or even another '-someone' kernel series (like the -ac tree), can be developed. If these turn out to be in demand, and stable enough, they can be officially included.
Re:The backport concept (Score:2, Interesting)
First: the kernel is pretty damn stable. The instability people talk about is usually extreme cases or performance that is less than optimal. I have yet to see a 2.4.8 or later kernel lock up or panic on my typical hardware, and I've got 3 machines running 24x7. I don't count the first 7 cuts, because I was doing development on them and going to great pains to track them, and collectively they should have been maybe two of the last 2.3.99 releases.
Linux is large. There are hundreds of regular contributors as well as a number of companies doing stuff. 2.3 lasted way too long, and so everybody wanted to get their stuff into 2.4, because 2.6 could be 2 years away after 2.4 came out. That was a mistake. The kernel underwent a lot of change during 2.3.99 and people were still adding tons of stuff. There were bitter fights about what should go in and when. You hate to be SGI, spend hundreds of thousands of dollars (I'm guessing that's the man-hour cost) porting XFS, miss the cut, and then wait 2 more years.
At the same time, the releases need to be tempered. People say 2.0.40 is a bad sign because it needed 40 patches. We could be on kernel 4.2 now if we made the releases closer together, and that just creates more confusion in a lot of ways too. 2.4 took way too long and too much happened. 2.6 will be much better, and the community needs to see that and get used to 6, 9, even 18 months for a major release. Companies need to understand that and make their investments accordingly. It's difficult, though, because there are so many independent people developing stuff they are planning to get in, and it can't all go to Linus when he says he's getting ready to lock it down.
I think, if anything, maybe 2.7 should branch off before 2.6 is cut. Linus and the team can go through the big items, determine where and when, and then set some timelines accordingly. You give the people working on the bigger things a place to put their stuff. Then the final 2.5 releases wouldn't be as rushed.
Re:The backport concept (Score:2)
For the kernel itself, it would have to be fairly minor changes, or possibly stubs for future-compatibility reasons.
I certainly agree that the average administrator needs to be able to rely on the stable series actually being stable.
Again, the most common need would be a bug that affected both the stable and development versions.
What are the proposed new features in 2.5 and 2.6 (Score:3, Interesting)
Better networking? Better I/O performance?
What about multiple CPU support?
The most important thing for me would be resource management features.
Does anyone have any info on what's happening in the area of adding resource management features to the linux kernel?
Actually, any info on what cool features they are working on for future releases would be appreciated.
Re:What are the proposed new features in 2.5 and 2 (Score:3, Informative)
There is a list of the new features in 2.5 here [kernelnewbies.org].
In summary: performance.
Suspiciously missing are any memory management patches (although Rik has his reverse-mapping (rmap) patch in the pipe). Perhaps the topic is still a little too hot...
The most important thing for me would be resource management features
I think that with the current kernel you can already do much of this. But some of the new features of the 2.5 kernel allow for much more fine-grained control -- like binding a process to a particular CPU (sketched below), better quota accounting, etc. Perhaps that's what you're looking for?
The direction of the 2.5 kernel seems to me to be mainly (but not exclusively) targeting enterprise systems.
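Regarding binding a process to a CPU: this is exposed through the sched_setaffinity() system call added during 2.5 development. A minimal sketch, assuming a glibc that provides the cpu_set_t wrapper (the raw syscall takes a plain bitmask):

    /* Minimal sketch: pin the calling process to CPU 0 via
     * sched_setaffinity(). Requires _GNU_SOURCE for cpu_set_t. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);               /* start with an empty CPU set */
        CPU_SET(0, &mask);             /* allow CPU 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now bound to CPU 0\n");
        return 0;
    }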
Simultaneous Release... (Score:3, Interesting)
Look at what happened with 2.4: we had the change to the VM; 2.4.11, which needed immediate patching and is tagged as dontuse; 2.4.13, with similar problems; and 2.4.15-greased-turkey, released by Linus for Thanksgiving with a nice syncing problem.
When it comes to deciding what is and is not allowed into the kernels the buck stops with Linus. This is why I think Linus should stick with the development kernels where a major change can have all its kinks worked out in relative safety. The stable branches should be maintained by someone who only has authority to accept and apply bug fixes.
Re:Problems with linux development (Score:2, Insightful)
That's why Linux can be counted on. NT4? MS just up and decides to drop support. Interactive Unix? Sun just up and decides to drop support. Thank God IBM has kept up with OS/2... for now...
Re:Slashdot (Score:2)
I want and need my kernel to just magically run. Yes, I'm a Debian maintainer running unstable on my own machines, but kernels are neither my forte nor my interest.
Re:Slashdot (Score:1)
Isn't the kernel made by the people, for the people? I think we should all voice our opinions, and for those that get heard, great; maybe they will make a difference.
I, for one, would like to see the development kernel forked as soon as possible and let Linus continue his work on that, and have the new stable kernel maintained until it is stable and purged of bugs.
Re:Slashdot (Score:2)
Even if Cox or Linus isn't reading regularly, it helps ideas get mindshare with lots of smaller players, who can spin the debate threads on LKML toward one side or the other.
That doesn't mean Linus et al. will agree, but his judgement in the past has usually been pretty good, which is a good thing; democracy is not a particularly good method of software engineering.
Re:Slashdot (Score:2)
The thing is, most of us run the kernel that our distribution provides. Most of us wouldn't really gain much by doing otherwise (though USB 2 sounds quite interesting). And we choose our distribution based partially on how stable we want our system to be. There seems to be a kind of order that runs roughly...
Debian-stable, Red Hat, SuSE, Mandrake, Debian-unstable, other
This is grossly oversimplified, as stability isn't the only variable here, but the people for whom stability is more important cluster toward the left, and those with other priorities cluster toward the right.
And what we are talking about is stability of the distro, not of the kernel. The kernel is a small part. (For the more experimental people, the kernel may not even be the one supplied by the distro.)
What most people are really doing is dreaming of the fabulous "next release" when all of the unnamed marvels will be given to them. It never happens, though incremental improvements happen all the time, and there's always lots of new eye-candy.
I know this is happening, and even so it still happens to me. Watching it happen makes me feel silly, but it's fun! So I just don't take it too seriously. I'm just glad that I lust after new software more than after Ice Cream! (I already have enough troubles with that).
Just my thoughts (Score:4, Insightful)
Granted, you will always have some cross-patching; however, I think the idea of building off of a clean base is very important. For example, you would not put new tires on your car if the engine is not running, right?
Essentially, I think the issue here is one of knowing the base is clean versus drudging on in the dark despite the fact that you have been offered a lantern.
To put this most bluntly, I would call this Microsoft syndrome. As I said before, win9x is the perfect example of a system that was never stabilized; rather, it was constantly released to the unsuspecting public as upgrades which were really bug fixes, and the monkeys went back to the keyboards, never addressing issues raised by numerous consumer requests against the so-called production release, because the devel team would rather work on that new feature, since it is more interesting than maintaining the existing code base.
I am being harsh here, I know, but I am trying to view this in the long term. I feel that this would weaken the kernel and, as I said, weaken Linux, which would in the end at least decrease corporate trust in the stability of Linux, or at worst give M$ what it wants: Linux's death.
Maybe I am extreme, feel free to beat me, but I know you have to have a clean starting point before you can move forward; otherwise you will constantly be taking steps backwards, which eventually leads to stagnation and death.
Just my thoughts
Re:Just my thoughts (Score:1)
Heh! 2.6 should be a 2.5 polished to extreme stability! Nothing else! So it should be as stable as possible; otherwise it's not worth changing from 2.5 to 2.6.
Hmm. Maybe it would be a good idea to name it something like 2.5pre? People would know that it's a kernel that is going to be stable and would test it as nearly stable.
Re:Just my thoughts (Score:2)
I have heard that:
A stable version release is one that has specified interfaces that don't change during the minor version releases.
This isn't the same as a bug free release, though it does imply that a certain kind of bug has been fixed.
Bug free releases don't happen. They aren't going to happen. And nobody expects them to happen. During the development process of changing the interfaces, attempts are made to avoid introducing new bugs, and to fix the ones that are introduced. These aren't totally successful. Don't expect it.
Actually, until the interfaces have been frozen you can't really fix all the bugs. It really isn't possible, even for perfect programmers. So only the really daring would even think of using 2.6.0 for anything that couldn't be reconstructed instantly. When inherent problems are found with an interface in a major version, the only solution is to work around them until the next major version is ready. And you can't really know ahead of time. All you can do is try to get a wide variety of people to test it. And you can never get a wide enough variety of testers to give good reports. Guaranteed.
Re:Just my thoughts (Score:1)
2.5.xx should be the last development version;
2.5.xxPre-1 should be the frozen version;
2.5.xxPre-2 should be the frozen version with some bug fixes;
2.5.xxPre-N should be the version where almost everybody agrees that it's stable, with 99.99% stability;
2.6.0 will be the first production stable kernel;
2.6.1 will fix any unexpected bugs, etc.;
2.7.0 will be the next development kernel, introduced when 2.5.xxPre-1 is issued.
Re:Just my thoughts (Score:1)
Secondly, it's important to know that when you get down to the distribution level, there are still backports to 2.2 (possibly even 2.0, but I don't know; I use a 2.2 kernel). This is why you'll see a version like 2.2.18-23. It means it's the 2.2.18 kernel from the linux kernel guys, with 23 revisions of backported bug fixes (the same thing applies to other packages). This is really one of the strengths of open source and shouldn't be taken lightly. It is one thing that sets linux apart from the MS line.
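If you want to check which kernel build you are actually running, the full release string (distro suffix and all) comes from the uname(2) call. A minimal sketch:

    /* Minimal sketch: print the running kernel's release string via
     * uname(2). On a distro kernel this is where a suffix like the
     * "-23" in "2.2.18-23" shows up. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;

        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        printf("kernel release: %s\n", u.release);
        return 0;
    }

(Equivalent to running uname -r from the shell.)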
+5 interesting (Score:1)
joe.
ps. not flamebait, but genuinely funny.
To release or not to release (Score:3, Interesting)
So, the best time to let 2.6 "escape" is when you're fairly confident it's "ready" and won't need patching.
Of course, you'll be wrong -- it will need patching, or backports of useful features that just didn't make it in time.
But, the idea is that these patches or backports should be trivial "oopses" where the change does not require massive code review, or the backport is clearly something that was "99% done" already.
So, my suggestion is: release 2.7, and hold off on releasing 2.6 until the obvious release-related "oops"es are found, say 1-2 weeks, then try your best to release a 2.6 that won't need patching. It will anyway, but don't lose sleep over it.
Re:To release or not to release (Score:2)
That's why Linus tried to release 2.4 when it wasn't quite ready... it wasn't improving fast enough to ever be ready...
Doug
Re:To release or not to release (Score:2)
Either issue a release candidate, with the intent of catching release-related oopses;
or branch the next version, release it, and backport release-related oopses;
or try your best, release, and patch release-related oopses to the stable branch either at the same time as the unstable branch, or delay forking the unstable branch for a while.
As you note, the first two approaches don't give enough feedback. The third results in less than perfect releases. Without a release test plan in place, I don't really know how this problem can be avoided, short of getting people to test "release candidate" releases. In the old days, that's what gamma tests were for -- beta tests from an end-user perspective.
Even numbered releases HAVE to be stable! (Score:3, Interesting)
There's no serious linux admin out there who wants to have to test a new supposedly "stable" kernel for a week before deploying it on a bunch of mission-critical boxes. Say I want/need a feature in the new release of the "stable" kernel; should I expect anything less than a kernel that is rock solid? There are people still running 2.2 series kernels because of the whole 2.4 feature-creep fiasco.
All the stability issues should be worked out before a kernel is considered "stable." Seems to make sense to me...
Re:Even numbered releases HAVE to be stable! (Score:1)
That's right! If it isn't stable, you'll take your OS dollars somewhere else.
Oh, wait...
Common Open Source Software Problem (Score:5, Insightful)
It's a common Open Source Software problem: there is the last release, and there is the development branch.
Developers would all prefer that you use the development branch, report bugs against *that*, provide patches for the bugs against *that*, do all new work in the context of *that*.
But that's not how things work outside of an Ivory Tower.
In the real world, people who are using the system are using it as a platform to do real work *unrelated to development of the system itself*.
I know! Unbelievable! Heretics! Sacrilegious!
FreeBSD has this disease, and has it bad. It very seldom accepts patches against its last release, even in the development branch of the last release, if those patches attempt to solve problems that make the submitted work look suspiciously like "development". The cut-off appears to be "it fixes it in -stable, but would be hard to port to -current; do it in -current, as your price of admission, and back-port it instead, even if you end up with identical code".
The only real answer is to keep the releases fairly close together -- and *end-of-life* the previous release *as soon as possible*.
The FreeBSD 4.x series has lived on well past FreeBSD 4.4 -- supposedly the last release on the 4.x line before 5.0. FreeBSD 4.6 is out, and 4.7 is in the planning stages.
It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD, since nearly all commercially paid developers are running something based on -stable. FreeBSD -current -- the 5.x development work -- is *nearly two years* off the branch point from the 4.x -stable from which it is derived.
Linux *MUST* strive to keep the differences between "this release" and "the next release" *as small as possible*. They *MUST* not "back-port" new features from their -current branch to their -stable branch, simply because their -current branch is -*UN*stable.
Delaying the 2.6 release until the 2.7 release so that you can "stabilize" and "jam as many 2.7 features into 2.6 as possible" is a mistake.
Make the cut-off on 2.6. And then leave it alone. People who are driven by features will either have to run the 2.7 development version, or they will simply have to wait.
Bowing to the people who want to "have their cake and eat it, too" is the biggest mistake any Open Source Software project can make.
Don't drag out 2.7 afterward, either... and that's inevitable if everything that makes 2.7 desirable is pushed back into 2.6. Learn from the mistakes of others.
-- Terry
Re:Common Open Source Software Problem (Score:1)
And here I was about to mention the positives to mirroring FreeBSD's development model.
I think you're being terribly naive about this, particularly the It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD comment. Development work on FreeBSD succeeds on both fronts.
Where commercial development would be done on FreeBSD, there would be as much of a shift in development process as there has been in the shift from 'regular development' to that of entire teams at IBM submitting patches to Linus. At first this seems unviable as well.
Plainly, it is conceivable that were a commercial team to submit changes to -CURRENT, they would be timed for integration into -STABLE in the same manner as current changes are. And make no mistake, a *lot* of things are folded back from -CURRENT on a regular basis. They may have a lengthy test period, but hey, shouldn't every new feature?
I think the Linux development community will only benefit from a move such as this. It keeps the stable kernels stable, and it folds in new changes in an orderly and well tested fashion. I'm looking forward to seeing it happen.
Re:Common Open Source Software Problem (Score:3, Insightful)
"I think you're being terribly naive about this, particularly the It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD comment. Development work on FreeBSD succeeds on both fronts."
I have built or contributed to 5 embedded systems products based on FreeBSD. If you count licensing the source code to third parties, that number goes up to 12. The list includes IBM, Ricoh, Deutsche Telekom, Seagate, ClickArray, and NTT, among others.
There has been no case where any of these projects has involved the use of FreeBSD-current. It just does not happen: the intent of the commercial work is the product itself, not the platform on which the product is intended to run. Toward that end, every one of these projects has used a stabilized snapshot of FreeBSD, usually a release, and, on only two occasions, a -security (release plus security bug fixes) or -stable (release plus any bug fixes) branch. Under no circumstances has an employer *paid* me to work on -current on their time.
There are notable exceptions to this practice -- specific DARPA grants, and Yahoo has a number of highly placed people who get to pick what they work on -- but these opportunities are few and far between.
"Plainly, it is concievable that were a commercial team to submit changes to -CURRENT, they would be timed for integration into -STABLE in the same manner as current changes are. And try not to make a mistake, a *lot* of things are folded back from -CURRENT on a regular basis. They may have a lengthy test period, but hey, shouldn't every new feature?"
Your argument is that FreeBSD -current and FreeBSD -stable bear a strong relationship to each other, besides each having the prefix "FreeBSD", and that integration into -current means that testing will be done, and that after testing, integration into -stable will happen.
I disagree strongly. The two source bases run different tool chains, and they are significantly different under the hood as well. It is nearly impossible to write kernel code that operates identically, without substantial modification, on both -stable and -current. The differences in process vs. thread context and locking *alone* mean that there is not one subsystem in the kernel that has not been touched. This ignores the semantics and API changes, etc., on top of that.
Despite back-porting, there is nearly two years difference between -stable and -current. I have a system that I updated to -current in October of 2000. It claims to be 5.0. FreeBSD has not made a code cut off the HEAD branch in that entire time -- or FreeBSD 5.0 would have been its name.
It would be a serious mistake for Linux to follow FreeBSD down this path. Linux should continue to make release code cuts off its HEAD branch, stabilize them, *and then deprecate* the old releases.
FreeBSD has failed to deprecate its branches following releases. This means that if a commercial developer wants to contribute code for inclusion in future versions of FreeBSD in order to avoid local maintenance (FreeBSD, unlike Linux, does not *require* such contributions; it relies on this and other emergent properties), they must first take their code from where it's running and port it to an entirely *alien* environment. Then they must wait for approval, and then back-port it, since no one is going to do the work for them, to the minor version after the one that they stabilized on for their product.
"It keeps the stable kernels stable, and it folds in new changes in an orderly and well tested fashion."
`Orderly' and `tested' are one thing. Two *years* of API and interface evolution are something else entirely.
The first time Linux cuts a distribution that isn't a pure maintenance point release of a minor number (e.g. NOT 2.5.1 off 2.5, and NO 2.6 off of 2.5 if a 3.0 is in the works), it will have effectively forked itself.
-- Terry
Re:Common Open Source Software Problem (Score:1)
on only two occasions, a -security (release plus security bug fixes) or -stable (release plus any bug fixes) branch.
-STABLE is release plus feature additions.
FreeBSD would certainly have to change its development model to appeal to the current tides facing Linux. It has perhaps been protected to date by the fact that the project hasn't tried to appeal to outside influence, and has become unattractive in some development situations because of this.
I think we come to an agreement on the fact that due to the nature of development done on the Linux kernel, stable and testing would have to remain very compatible and very close to each other in the pipeline.
Whether this is an appropriate move for FreeBSD is a topic for another slashdot article. :P
Branched development - more testers (Score:2)
Would starting the new development branch immediately after the stable release help? Hardly. It's the time when a lot of work has to be done on the stable branch.
But what if we make sure that the stable kernel is indeed stable when it's released, not after "stabilization"? The only way to make the kernel stable is to test it a lot before it's released.
I don't think we should be afraid of "Debian syndrome". The kernel is much more monolithic than a distribution, and if e.g. IDE doesn't work well, it takes much more effort to downgrade it safely than to downgrade e.g. Mozilla.
The fundamental problem with the development branch is that issues with one part of the kernel affect all developers and testers. If I e.g. want to test ACPI and know how to fix it, but I don't know how to fix IDE, I won't test the latest 2.5 kernel.
I believe that the best solution would be to have branches for different subsystems. IDE changes would be merged to the trunk only when they are stable enough for other developers. It's important that the development on the branches is done openly, step by step, so that an interested developer could find the exact place where a bug was introduced. But this style of development doesn't require doing everything in the trunk. In fact, to keep the kernel relatively stable the development should be done on specialized branches.
A more stable development kernel would mean more testers. More testers would mean a stable release that is truly stable, at least compared to 2.2.0 and 2.4.0. And that would eliminate the need to force developers to spend time stabilizing a branch that is supposed to be stable from the beginning.
My $0.02... (Score:4, Interesting)
For example, let's say that we're happy with the feature set in the 2.5 unstable series. Instead of waiting for all of the bugs to get shaken out before calling it 2.6, just switch from 2.5 to 2.7 on the unstable development side. Linus can pass the reins off to someone he trusts, we can have a GROF (Get Rid Of the Finn) party, and his trusted lieutenant can finish stabilizing 2.5 into 2.6 without him.
This solves the problem of wanting to keep back-porting features from 2.7 into 2.6, it allows time to make sure the 2.5 code is stable before its public release as 2.6, and it provides a clear feature-freeze mechanism: once Linus is gone, it's bugfixes only. If you want the new features, run the unstable kernel or wait for 2.8 (released sometime after 2.9 is branched).
Not that my opinion matters at all, it's just an idea.
Re:My $0.02... (Score:2)
The thing is, what needs to be done on the future 2.6 branch to stabilize it would also benefit 2.7. So the point of the current development model is to keep only one branch until it's really stable, then create the next development kernel branch from something which is sane. As a bonus you don't do the same work twice (stabilizing the two branches for the same issues).
Now, if Linus is not the one you want in charge of this, he could always bow out of the last stabilizing efforts (IIRC, he doesn't particularly appreciate that part of development).
Branching before having a stable release would only cause both branches to diverge too much (especially in terms of bug fixes and drivers). And if you never have a "stable" development branch, it's kinda difficult to develop effectively on it. For example, see the current IDE situation in 2.5. To really develop on 2.5 atm, you need a SCSI machine because IDE seems too broken. Of course it will stabilize for 2.6, but if 2.7 is branched now and IDE is then stabilized in 2.6, 2.7 won't necessarily get all the fixes (or some fixes won't make it to 2.6), if only because of communication problems.
So I think they'd better work on 2.5 for now (obviously), then get into feature freeze, stabilize, release 2.6, wait a couple of minor releases for it to be really stable (as in, you'd be comfortable using it on low-criticality production machines after enough testing), and then start 2.7. Then 2.7 will start from a "known good" state.
Either way is wrong (Score:2, Interesting)
To me, 2.6.0 means "okay, this is what we can possibly get when only developers are running the code. We have tested our kernel and we have high confidence that it will work for you, but, you know, there are surprises. So do try it out, if you can. We promise that if you find problems and tell us, we will give you the highest priority, so that you don't have to fall back to 2.4.XX."
What is 2.7.0? People say it means "okay, now we have 2.6.Y stable, we can pretty much ignore it. Let's put it in the hands of Xyz Abc, the new maintainer of the 2.6 series, and new work will go into 2.7.ZZ". But I don't like this view. It ignores the possibility that new things can land directly in 2.6.XX. This happened quite frequently in 2.4.XX, actually, and it does work.
I believe the real reason for 2.7.XX is this: "after some use, we find that 2.6.XX has the following stupid problems. It could also be improved if we don't do things this way, but instead do things that way. But these things are so fundamental to 2.6.XX that if we ever change them, we can no longer make the claim we made when we rolled out 2.6.0. They really need to be done, though, but we'd prefer people not use the result yet; we developers will try to make things work again after they break, and once every developer can reasonably make the claim we made when we delivered 2.6.0, we will roll out 2.8.0, and then all of you can try this new, neat way of doing things. Until then, please stick with what we have in 2.6.XX."
If that reasoning stands, then what 2.7 is really for is new APIs -- new ones that can cause everything else to break. I'd say, once we know what new API we want to create, we should create 2.7.0, *regardless* of whether 2.6.XX is stable enough or not. It is absurd to fear that stabilization of 2.6.XX will slow down because of the existence of 2.7.YY: preference is always given to 2.6.XX if things go wrong there. The real problem with releasing 2.7.0 too early is that many things get implemented too quickly, while most of the API changes are still up in the air, forcing most things to be rewritten, perhaps many times. When that "up-in-the-air" problem goes away (or has settled to the point where we want to write code and see what happens if we really do things the new way), there is no excuse not to release 2.7.0. Further delay only ensures that the next kernel will arrive late again.
How about they use some bug tracking system first (Score:5, Interesting)
It's really frustrating not to have a Linux kernel bug tracking system available. Searching through the huge lkml mailing list just doesn't cut it, and some questions pop back up every month with no sign of ever being addressed.
Example: the Athlon/VIA chipset freeze bug. Is it a chipset bug? A bios bug? A kernel bug? Is it fixed? Is it AMD's fault? Is it VIA's? Is it Linus's? Is it a PCI latency problem? An IDE problem?
Who the fuck knows!
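For what it's worth, one of the suspects above -- the PCI latency timer -- is at least easy to inspect yourself on a 2.4-era kernel, since /proc/bus/pci exposes each device's config space (the latency timer lives at offset 0x0D). A minimal sketch; the bus/slot path is a made-up example, substitute your own device:

    /* Illustrative sketch: read a device's PCI latency timer from the
     * /proc/bus/pci config-space interface. Offset 0x0D is the
     * standard latency timer register; the path below is an example. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char latency;
        int fd = open("/proc/bus/pci/00/07.0", O_RDONLY); /* example device */

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (pread(fd, &latency, 1, 0x0D) != 1) {
            perror("pread");
            close(fd);
            return 1;
        }
        printf("PCI latency timer: %u\n", latency);
        close(fd);
        return 0;
    }

That doesn't answer whose fault the freeze is, of course -- which is exactly the point: without a tracker, every user has to re-derive this kind of thing from scratch.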
Re:Idea (Score:1)
Anyway, the next comment is correct. In the Linux world -- and might I add the correct world, not because it is Linux but because there should be no real connection between the kernel and a GUI system -- the GUI has absofuckinglutely nothing to do with the kernel!!!
This guy should get a clue prior to spouting off at the mouth, especially at a place like
Re:Idea (Score:2, Offtopic)
Nice math there, buddy; that is called once an hour....
Granted, I had Windows 2000 core dump on me ~10-12 times yesterday (w00t) and another good 3 or 4 unexplained crashes, but that is all because I have managed to fuck up the DirectX sub-layer to hell and, well, err, heh;
when it IS running properly (which it will be as soon as I manage to find the install disks ^_^) it has very long uptimes, though that is awfully dependent on what you are doing. After the first 3 or 4 hundred program installs/uninstalls things do tend to, err, get a bit cluttered, heh.
Then again I would like to see you do that many program installs in as short a time on a *nix box.
Bah, what am I saying, dependencies suck period, no matter what OS you are on.
Oh, and the *nixes DO have a shitty-ass GUI system. Seriously, there needs to be a BIG effort to ditch X and get a real system underneath; technology has advanced a lot in the intervening years since X came out. Hell, just the pure CS theory stuff has advanced a good deal. For crying out loud, come up with a better system, or at least a more coherent one. It has taken MS how long and how many revisions/API changes to get even a
Re:Idea (Score:1)
See, there is your problem;
The misc software crud part of my registry?
Over 50 thousand entries. (!!!!)
158 fonts installed (ouch); 3 language packs; 3 versions of Photoshop; 2 versions of Java (err, well, 3, but only 2 that the system will admit to -- the uninstaller failed miserably); and if I even try to install Java into Moz, bad things happen.
Well heck now if I even run moz.exe bad things happen, heh.
DirectX is like one layer BELOW the drivers; if the DirectX layer is shot, ANY driver will fuck up on you. That is what is happening to me.
My WinNT directory is still a 'mere' 1.43 Gigs though, heh.
(not
I appear to be gathering ~50-60 cookies a day just browsing the internet (ouch),
and none of this is counting that NT5 apparently works
I have seen NT5 literally kill itself just by being rebooted: boot, shutdown, boot, shutdown; eventually it just refuses to boot and you have to go into safe mode and start correcting crap.
Oh yeah, Microsoft also decided to install the French version of the Matrox G400 DualHead drivers some time back, hehe. I installed a more, err, appropriate version of them, but even in Safe Mode I cannot remove the old version.
I also have a bunch of things called WAN Miniport drivers, which I have like NO idea what they are. There are four of them installed on the system (they just kinda showed up one day... better than 9x, where things would just disappear one day, I guess), and when I tried to remove them (any mode) it said "cannot remove: this device is necessary to start up your computer."
Disabled them, though, yaah. Didn't have too much of an impact on stability one way or the other.
Re:Idea (Score:2)
Then again I would like to see you do that many program installs in as short a time on a *nix box.
So, let's see if I got this right. You install a lot of stuff, the box goes bad. And you can install a lot of stuff a lot faster in Windows than on a *nix box.
Therefore, the primary attraction in Windows is that you can muck it up one HELL of a lot faster.
Efficiency, in other words.
Yep, I'm with you
Re:Idea (Score:2, Interesting)
Therefore, the primary attraction in Windows is that you can muck it up one HELL of a lot faster.
Efficiency, in other words.
Yep, I'm with you
Well, thing is, I would like a setup (out of the box; I know I could just keep sequential images of my HD on a RAID array someplace on an older machine or such, but, err, that is NOT exactly elegant. Kick-ass, yes; elegant, no) where I can install all the crap I want and not have to worry about some bloated, huge central repository of crap getting too big, or the alternative: everything becoming so cross-referenced and dependent that uninstalling anything becomes like a giant Jenga game.
Either pathway sucks.
DOS rocked: you copied apps, ran apps, deleted apps. None of this installing or dependency bullshit.
ALL HAIL THE GLORIOUS PIRATE JOKE (Score:2)
Burning karma for Jesus.