The Future Of The 2.0 Linux Kernel
An Anonymous Reader writes: "The first 2.0 stable kernel was released over six years ago, in June of 1996. It was followed by the 2.2 stable kernel two and a half years later, in January of 1999. The more recent 2.4 stable kernel followed two years after that, in January of 2001. And the upcoming 2.6 kernel is at least a year off.
Through all these years, 2.0 has continued to be maintained, currently up to revision 2.0.39, also released in January of 2001. David Weinehall maintains this kernel, and says, "there _are_ people that still use 2.0 and wouldn't consider an upgrade the next few years, simply because they know that their software/hardware works with 2.0 and have documented all quirks. Upgrading to a newer kernel-series means going through this work again."
Read the full story here."
old systems (Score:3, Informative)
Re:old systems (Score:2, Interesting)
then again, we have the "it doesn't work, let's make it better" attitude that gave us Windows, so it's your choice of the lesser of two evils
Re:old systems (Score:1)
Re:old systems (Score:2, Informative)
I experienced one fiasco: my brother has a computer from 1995. The BIOS developers were "smart"; they must have been thinking: "Nobody is going to need any year before 94 in this RTC, so let's check for that and change the year to 94 to avoid some problems."
Guess what the clock displayed the first time it was switched on in the year 2000.
Re:old systems (Score:1)
This means that, potentially, if they used signed 8-bit ints (counting years from 1900), the world could blow up on January 1, 2028, when the computers add one to "December 31, 127" (1900 + 127 = 2027). With unsigned 8-bit ints it's January 1, 2156, one past "December 31, 255" (1900 + 255 = 2155)... does this make sense?
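If you want to see the arithmetic, here's a quick back-of-the-envelope check in shell (a purely hypothetical 8-bit "years since 1900" counter, just to illustrate the wrap-around):

$ echo $((1900 + 127))      # last year a signed 8-bit counter can represent
2027
$ echo $((1900 + 128 - 256))      # one more year and the signed counter wraps to -128
1772
$ echo $((1900 + 255))      # last year an unsigned 8-bit counter can represent
2155
$ echo $((1900 + ((255 + 1) & 0xFF)))      # the unsigned counter wraps back to 0
1900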
Re:old systems (Score:1)
Re:old systems (Score:1, Flamebait)
No, you're thinking of the "640K of memory should be more than enough" attitude (as in "2 digits should be more than enough").
Re:old systems (Score:2)
Re:old systems (Score:2)
Get them to loosen the purse strings and fund spurious projects
Use y2k as a foil to clean up a few old messes
The refreshing thing about the 2.0 kernel attitude is the unwillingness to twist someone's arm to make them upgrade.
There is a legitimate need to talk about upgrading, though.
The IIWDFWI (If it works...) attitude can have all the benefits of clinging to a bad habit,
and said habit can put you in extremis if you ignore it.
My company (and project) is as deep in the habit of !planning as anyone else. Our recent firewall implementation is a running disaster.
Thus, keeping an old 2.0 box doing its thing is great. I'd be considering what a cheap drop-in 2.4 box might look like, even get it tested (in that spare time) so that we don't have an 'Ostrich moment'...
Not just old systems (Score:1)
Versions below "Realone" (in fact,9) had a real easily accessible option to check what is the server OS and realserver version.
As I remember myself, always interested in those huge servers which can handle thousands of clients on media platform, checked them...
Guess what? I don't know if its changed or not, speaking about year 2001, all of them were linux 2.0 kernel!
I guess, its not just "old" machines, people trust to that "old" kernel in fact.
It could be kinda preventing downtime too,as serving video/audio 24/7 (media) doesn'T like downtime because some guy found a simple glitch on latest 2.4 kernel and its propangated worldwide.
Re:old systems (Score:1)
I can't say I stay all that on top of things (Score:1)
killer feature (Score:5, Insightful)
The long-term maintenance of an "old" kernel is a very important argument in favour of Linux for serious industrial applications.
In our area we have the saying "you earn money with depreciated machines" - and to use them, you simply need an "old", maintained operating system.
So the work of the "historic kernel" maintainers is helping Linux earn a good reputation.
Re:killer feature (Score:3, Interesting)
Now we know who we must thank. Thank you very much David Weinehall.
Only they'd have a problem browsing pages that require the mplayer plugin. Would any expert out there give me some hints?
Secure browsing for third world kids. (Score:1)
Why in the world would secure browsing be a requirement for third world kids on old PCs?
Re:Secure browsing for third world kids. (Score:2)
What, poor kids don't deserve to have Hotmail? Or Yahoo mail?
There are a lot of people in the "Third World". They want services too. My ex-gf was Brazilian. She just got a Pentium 4, and needs secure browsing to do her online banking. You can do things with ATMs there that they're just designing here. Check out www.lavrasnovas.com.br. This is a small town, maybe 100 people (but at least 10 bars, woohoo!), but it's got a web site, with Shockwave.
"The Third World" is a pretty complex, diverse place. I personally hate the term, it has too many connotations of arrogance. But if you do use it, don't lump people all together. Middle class there is a much better life than middle class here.
Re:Secure browsing for third world kids. (Score:2)
Talking about online banking: I might have to use Windows if I had to let them communicate with commercial outfits, as most companies there are Windows-centric. I'm glad that we work for kids; that gives us greater flexibility in choosing a platform.
Re:killer feature (Score:1)
This statement seems to be making the assumption that when talking about software, "newer" is synonymous with "bigger/bloated".
It is still more than possible to set up a small install, using a modern distribution, with the minimum number of functions compiled into the kernel for old machines.
A typical use of older machines, as a firewall/router, springs to mind. Here I doubt that the 2.4 firewall code is much more resource hungry than the 2.0 code, but the changes to the kernel make iptables much more flexible than ipfwadm.
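For comparison, here's a rough sketch of what a minimal "only forward traffic from the inside network" policy looks like in both generations (syntax from memory, with a made-up 192.168.1.0/24 LAN, so double-check the man pages before copying it):

# Linux 2.0, ipfwadm: default-deny forwarding, then accept the LAN
ipfwadm -F -p deny
ipfwadm -F -a accept -S 192.168.1.0/24 -D 0.0.0.0/0

# Linux 2.4, iptables: same policy, plus stateful matching ipfwadm never had
iptables -P FORWARD DROP
iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT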
Julian
Re:killer feature (Score:2, Insightful)
My network router/web server/email server is all mounted off of a single floppy that is both the root filesystem and the boot disk. Can't do that with a 2.2 or 2.4, and still have all the drivers necessary to make all the hardware work, and have the software necessary to make all the rest of it work.
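For anyone who's never done it, the kernel half of that trick went roughly like this with the 2.0-era build targets (from memory - check the README in your own kernel tree); squeezing the root filesystem onto the same floppy is left as an exercise:

cd /usr/src/linux
make config          # answer "n" to every driver and feature you don't need
make dep; make clean
make zImage          # keep the compressed kernel small enough for the floppy
make zdisk           # writes the kernel image straight to the floppy in /dev/fd0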
Re:killer feature (Score:2, Insightful)
Re:killer feature (Score:2, Insightful)
Re:killer feature (Score:2)
What about machines that started life with ipfwadm and have been firewall/routers for about 5 years now? Updating to the newest kernels pretty much means you have to rewrite all of the rules in ipchains/iptables.
Well, even the 2.4 series has support for ipfwadm- or ipchains-style syntax if desired. The options are available under Networking options -> IP: Netfilter Configuration.
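If I remember right, the corresponding lines in a 2.4 .config look something like this (worth verifying against your own kernel source before relying on them):

CONFIG_IP_NF_COMPAT_IPCHAINS=m
CONFIG_IP_NF_COMPAT_IPFWADM=m

Build those as modules and the old ipchains/ipfwadm userspace tools keep working, although you can't mix them with iptables rules at the same time.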
Re:killer feature (Score:1)
Re:killer feature (Score:1)
Consider yourself warned (Score:3, Funny)
Oh wait, this is open source.
Re:Consider yourself warned (Score:5, Insightful)
Which reduces the problem but doesn't negate it. Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.
Sure there are a lot of people out there who can program, and even a decent number of people out there who can program well. But in this case, you'd need someone with at least some Linux kernel hacking skills and enough programming know-how to be able to close a bug (possibly even a security bug) that made it past all those people who've hacked on 2.0 so far. Now factor in that you'd want a programmer good enough to be trusted with mucking around with the kernel for Very Important Systems -- systems important enough, at least, that you aren't willing to even take the next big jump in kernel versions.
It all boils down to a dicey situation. Even certain Open Source projects/versions get end-of-lifed by the official maintainers. You aren't always guaranteed that someone else will pick it up.
Re:Consider yourself warned (Score:2, Troll)
The point is not that everyone should maintain their own source code; the point is that if there are enough people interested in keeping it around, it will stay around. You're not at the mercy of your monopolistic vendor's business plans.
Re:Consider yourself warned (Score:1)
The point is not that everyone should maintain their own source code;
And as an extension to that, you can always hire a consultant to fix up the software for you. That's not as expensive as it sounds, since once the software does what you want it to, it really doesn't need to be maintained much anymore.
At work we are running RPG code on a System/36 emulator from the 80s, and it rarely needs much maintenance. The main concern is having the data in an accessible format, so that you eventually have a migration path off the old software. Flat EBCDIC text files aren't quite the most portable, but it has output filters that let us synchronize the postgres database to it nightly. They will also eventually let us migrate off of it.
Re:Consider yourself warned (Score:1)
This is true, but it is also true that people who still need old kernels tend to have higher-than-average computer skills, so among them it is easier to find somebody who could fix bugs, etc.
Anyway, when an open source piece of software is abandoned by its official maintainers and is not picked up by anybody, chances are that almost nobody is using it anymore, as the few who still did have decided that an upgrade would cause fewer problems than acquiring the skills needed to keep using it.
Yes, even open source software dies, but this happens when it really has no more reasons to be alive, not when some commercial department decides that they want to sell some new version.
Re:Consider yourself warned (Score:2)
So what? If your business depends on a feature in the 2.0 series kernel, then it doesn't matter if you have the requisite kernel programming skills. You can buy those. I don't work for redhat, but I'll bet $.50 that they'd take on that support contract. If not them, maybe IBM. If not them, how about contracting with the guy who's doing it right now?
The fact that it's open source means that anyone who's willing to do the work of maintaining the code can. And if you're depending on it, you will always have options.
Re:Consider yourself warned (Score:2)
If I had to name one major downside to open source software, it would be that it has taught people to expect, nay demand, something for nothing. In the olden days you used to shut up, put up, and pay up.
Now factor in that you'd want a programmer good enough to be trusted with mucking around with the kernel for Very Important Systems -- systems important enough, at least, that you aren't willing to even take the next big jump in kernel versions.
Here's me still running Linux 0.13 on my beer cooler. I don't let anybody near it, *especially* Finnish kernel hackers.
You miss the point. (Score:2)
No one expects non-technical users to teach themselves to be kernel hackers. That's just a silly straw man.
The point is that you can hire a kernel hacker to do the work. Linus and the rest of the gang doing the volunteer work don't want to support the stuff that's running your business anymore? Hire someone else to do it. It's an option, and in some cases it can be a very good one.
Whereas with unfree software, whether from MS or Sun or whoever, that option just doesn't exist.
Re:You miss the point. (Score:2)
Re:Consider yourself warned (Score:1)
Because we are talking about what would be referred to as a major version, I think in this case there is safety in numbers. Anyhow, does it really matter? Because if it ain't broke, don't fix it. You know, I know some people that are still installing 1.0 kernels on certain systems...
Yes, at some point practically no one will be using any of the kernels that are out now. But that is going to be a long time...
The main reason for all of this is that there are really two big groups of people that use Linux: geeks (those most likely to spend time and effort to keep things working) and companies (those most likely to spend money to keep legacy systems working).
Re:Consider yourself warned (Score:3, Interesting)
Now, of course there are lots of programs out there that are useful, but broken in some way or not actively maintained. I'm sure everybody has found a nice project that just needs one little thing to be perfect but nobody touched it for a year.
I think what we need is a "Volunteer Hackers" site where users could post their requests for help, and programmers willing to help could see what is needed. I'm wondering if this could succeed. It would be very nice if it did, and probably would be yet another good reason to switch.
Re:Consider yourself warned (Score:2)
I may be wrong, but wasn't this the exact idea behind SourceForge (or perhaps Mozilla's bug tracking system)?
Multiple suppliers (Score:2)
The real advantage (from the non-programmer's point of view) is that free software gives you a much larger choice of suppliers. As long as the market exists, someone will be there to support it. With non-free software, you are depending on a single supplier, who may at any time refocus their interest away from you.
Of course, even with free software the market can become so small that the cost of finding a supplier becomes too large. But at least it is your wallet, and not the strategic geniuses in some board room, that decides when that point has been reached.
Driven by users, though! (Score:2)
Re:Consider yourself warned (Score:2)
The good news is, even though it's no longer supported, you STILL have all the source available. If you're desperate enough, you can either fix the code yourself or hire someone to do so. Certainly, it would probably be easier to just upgrade, but if for some reason that choice is not feasible, there's no huge company in Redmond telling you to go fuck yourself.
-Restil
Re:Consider yourself warned (Score:2, Interesting)
This is just what happened with the plans for the Saturn V rocket -- there were three copies, each of which was destroyed "because it's just a copy."
Where do you go, then?
Re:Consider yourself warned (Score:2, Informative)
If it ain't broke (Score:3, Insightful)
A good example of this is that NASA still uses 8086 processors: You know exactly how they work.
New things mean new problems. If you have a system that does its job, why upgrade to a newer kernel that can support hardware and protocols you don't need, but brings in bugs you don't want?
Re:If it ain't broke (Score:2, Interesting)
A good example of this is that NASA still uses 8086 processors: You know exactly how they work.
I thought this was more due to radiation robustness than plain conservatism (which I agree is an asset in critical-system engineering).
Am I wrong?
Re:If it ain't broke (Score:2)
That last part looks to be a clincher in a completely and utterly isolated and self sustaining environment... obviously us earth-bound electricity suckers are considerably spoiled.. barring more SoCal brown-outs this summer;-p
Re:If it ain't broke (Score:2)
Yup, it's radiation hardening that's the issue.
Making a chip radiation-hardened is a big engineering undertaking, for a lot of reasons. The individual chips are very expensive, and thus the testing cycles are expensive. The testing process is long, and the skills to make it work are uncommon. Radiation-hardening a "simple" microprocessor like a 386 or a SPARC might cost in the hundreds of millions, while a processor like a P4 would probably not even be considered.
NASA may move to the original Pentium as a control-center chip in the near future, as Intel so graciously donated their Pentium design for this purpose (a small fraction of the cost of the actual radiation-hardening design work!). Last I checked it was still RS/6000 processors for system control, with 8086s for simpler tasks.
Re:If it ain't broke (Score:1)
Re:If it ain't broke (Score:2)
You're implying a false dilemma, it's not one or the other. There are in fact still 286s being used, that doesn't stop you from buying a P4 now does it?
Wise choice... (Score:2)
Re:Wise choice... (Score:1)
OK, there is always the update thing, but there also aren't that many updates for older versions of open source software. The kernel might be a notable exception, but try getting upgrades to KDE1 or an old XFree86 or the like.
Re:Wise choice... (Score:1)
In general the pressure is never direct. It's just that if you run older versions of MS software you accept that you will remain unprotected against known vulnerabilities and you will get no support from Microsoft.
Same for Linux distros (Score:1)
The point I am trying to make is that the soft pressure to update is inherent to software, be it open or closed source. On one hand, a software vendor, even a monster like MS, is only able to properly support a subset of the products it ever made; on the other hand, everyone who lives from selling stuff, be it MS or your favorite Linux packager, lives from you buying more from them, so they certainly try to create incentives to buy their latest toys. If you won't fall for the shiny new stuff, well, maybe the lack of easily applicable fixes will convince you. The only way around this is 100% open source distros like Debian, but they are not everyone's cup of tea either, for various reasons.
Also, try to get a bug fix applied to an older release of some major open source product. It doesn't have to be something really outdated like KDE1. In a recent thread (don't remember which, but I think it was the "10 things wrong with Linux" one) a lot of people complained that it is often difficult to get bugs that aren't extremely critical security bugs fixed even in current stable releases. You will often be told to upgrade to the most recent version or even a CVS version. No monetary costs involved, but still the same principle, and the upgrade to the latest version can still mean upgrading whole toolchains, especially on Linux.
Used since 1996! (Score:2)
How about in a server environment? [ducks]
Re:Used since 1996! (Score:3, Funny)
If I were the head of a company that owned a few servers and I discovered that one of them was running Win95...
Well, I'd make an exception to the saying "Nobody ever got fired for buying Microsoft."
Re:Used since 1996! (Score:1)
Of course, there might also be very good reasons to upgrade to Linux or something else right now (security, easier to administer, etc.).
BTW, the company I work at still has quite a few Win95 desktops in use for customer check-in. There are many problems with our existing setup. One of the big looming ones is that MS no longer supports Win95; I suppose it is expensive for them to do so, and continued support would be a disincentive for people to keep upgrading their OS. Contrast this to the situation on Linux, where old kernel versions will be supported as long as there is demand.
Re:Used since 1996! (Score:1)
Nowadays his most-used piece of software on this machine is an X11 emulator connecting to his Linux machine.
Re:Used since 1996! (Score:1)
Re:Used since 1996! (Score:2)
40 are running Win98/ME, 5 are XP, 4 are Win95/etc, and 1 is MacOS.
that is an average.
Re:Used since 1996! (Score:1)
Re:Used since 1996! (Score:1)
Of course (Score:2)
Absolutely. The machine I'm typing on now is running 98SE, customised with 98lite using the explorer.exe from 95. Runs every win32 program I need on a desktop, and does it noticeably faster than machines with significantly more powerful hardware running later versions. Reasonably stable, considering it is windows after all - it gets uptime close to the Win2k boxes they have at work actually.
If you had any doubt that the answer to your question would be yes, this will really blow your mind - I've also got DOS 6.22 and WfW on a CD on a shelf across the room, I haven't actually used it in months (haven't used WfW in years, but DOS 6 really does come in very handy at times.)
Re:Of course (Score:1)
Re:Used since 1996! (Score:2, Interesting)
What about this network:
SERVER:
1 x Netware 4.2 small business
CLIENTS:
1 x Windows 98 SE / Win2K SP2 (dual boot)
19 x Windows 98 SE
1 x Windows 98
2 x Windows 95B
1 x Windows 95A
5 x Windows 3.11 for Workgroups (& MS Word 6)
Ciao, Roman
Re:Used since 1996! (Score:1)
Re:Used since 1996! (Score:1)
A ton of Win95 is out there. You seem to forget that >90% of businesses are small businesses and thus don't upgrade their machines very often. I even still come across plenty of Win 3.1 and DOS machines in law offices, doctor's offices, accounting firms, and factories.
There's nothing wrong with patting the Linux kernel on the back, but let's not forget there are also a bunch of NetWare servers, not to mention older Unix boxes, that have been running since before Linux even existed.
Re:Used since 1996! (Score:1)
My Windows machine at work still runs 95, as do the machines of a few co-workers. I have a machine at home that also runs 95. Someone gave it to me and the OS is on 13-14 floppies.
Re:Used since 1996! (Score:1)
The Point of Sale system at work runs Windows '95. It's used primarily as (a) a mostly dumb terminal [it runs screensavers on its own, and displays graphics locally, but all information/screen placement is determined by the server in the back room, which runs Xenix] and (b) an internet device [it has IE 5.5 installed].
Re:Used since 1996! (Score:2)
The company I'm currently working for mainly uses Windows 95 and NT 4.0 on ~1000 desktops.
Re:Used since 1996! (Score:2)
My everyday-workhorse machine runs Win95 OSR2.0b, and will probably do so for its entire lifespan. W95 is suitable for a lowly P233, and once beaten into submission, it's nearly 100% stable. It has all my critical can't-live-without apps already trained to play nice together. Changing the OS (or anything else) would be counterproductive. This machine is expected to do daily work without making me hunt down and fix today's complication. Why rock the boat?
I have an old P75 that I use as a test rig, that runs Win95 first edition (cuz that's what came with it, whaddya want for free). It's stable (it has NEVER crashed since I've had it despite serious abuse) and already has all the weird obscure drivers it needs. While there's probably no compelling reason (other than said drivers) to keep W95 on it, there's no pressing reason to switch or upgrade the OS, either.
I have several clients still running Win95 too, because "if it ain't broke, don't fix it". And in some cases because that's all their hardware will support, and they can't justify replacing it.
I agree with a post somewhere upstream -- if what you're using works for you, and if an upgrade doesn't address a need YOU have (be that a feature or a bugfix) you're probably better off NOT upgrading. We all know how often patches break more than they fix -- well, upgrades are much the same.
Personally, I only mess with upgrades and such on machines whose mission in life is to test whatever so I can get familiar with it. Never on a production machine unless the upgrade is needed, and then only after it's proven sound.
So in my view, there is much to be said for older versions, and for the people who maintain them.
[BTW the oldest utility I still use is dated 1983.]
This is a good thing... (Score:3, Informative)
The truth is, I think a lot of people put too much emphasis on the newest version of software just because it's newer.
I have found that maintaining the same version is much easier and leads to a much more stable & secure system.
When a new version comes out I ask myself 3 questions.
1) Do I need an upgrade, or does it fix any issues I was having (crashes, incompatibilities, etc.)?
2) Would I have chosen an alternative if not for a feature in the new revision (my new CD-ROM won't work with the old version)?
3) Am I willing to suffer the consequences of any problems that might arise as a result of the upgrade?
The M$ mentality says... "It's version v1.2 which is higher than my current version, v1.1...I need it!!!"
The reason is that they have been trained (primarily by M$) to crave the newest version of whatever bug-ridden crap is thrown at them and the corporations try to do whatever they can to force you to upgrade.
The reason for this is simple: the Linux kernel is not maintained by a corporation that is driven by sales and therefore driven by your purchase of its product.
0.99.13 (Score:5, Interesting)
Re:0.99.13 (Score:2)
I know a factory that use(d) 25-year-old computers. (Actually they must be over 30 years old by now, and as far as I know they still use them.)
They still worked, so they saw no need to spend loads of bucks on upgrading.
It was much more cost-effective to keep the old computers. Sadly they could no longer get spare parts, so they had to dismantle one of them and exchange it for modern computers - they kept the working parts as spares for the remaining computer so it could continue to serve them.
If cars can be 50 years old, so should computers.
Re:0.99.13 (Score:1)
Re:0.99.13 (Score:2)
Broken windows fallacy. Buying a car to replace a car that still works does not help the economy.
Re:0.99.13 (Score:4, Funny)
Re:0.99.13 (Score:2)
I'm just curious: if he were to upgrade, would the 2.4 kernel make his machine run faster or slower?
Re:0.99.13 (Score:1)
Re:0.99.13 (Score:2, Informative)
There is also the different memory management, which has a tendency to swap out pages that aren't in use *now* (thus giving more priority to the page cache), whereas earlier versions tried to keep as many process pages in memory as possible and swap out as little as possible.
It isn't as simple as presented here, though, and there is still much debate about this.
Not everyone needs cutting edge! (Score:3, Insightful)
The basic advantage is the understanding someone comes to have by working a number of years with something specific. Most bugs, and for certain all the serious ones, are known and documented. Design limitations are known also. There are field proven designs and in many cases known tweaks to extend functionality, even beyond the original capabilities.
This stands true for pretty much everything; another poster pointed out that NASA still uses 8086 hardware!
The need for maintenance is also something relative; if you have something that constantly works reliably, the maintenance required to keep it that way is minimal.
I believe that even if 2.0.39 were the last kernel of the 2.0.x series, people who use 2.0.x wouldn't really care. I know, since I have a 2.0.36-based home router that has been running for the past year and a half with zero maintenance. I don't even plan to upgrade to another 2.0.x kernel, let alone 2.2 or 2.4, as long as it just works (tm).
Open source is a more perfect "marketplace" (Score:5, Insightful)
IMHO one of the big problems with proprietary software--which I once saw personally from within a then-Fortune-500 company--is that career advancement depends on working on big projects and thinking big. On one occasion I was told that something wasn't worth pursuing because "on your own showing it can't bring in more than $2,000,000." I said, "yes, but the costs are trivial, so it will be very good business." It was explained to me that projects of that size were just too insignificant to be considered. I believe that just the cost of translating the manuals into the fifteen languages supported by this global company was enough to sink the project (and of course ALL the company's products HAD to be translated into ALL languages, because that was their procedure). On another occasion, when wondering whether we should be developing projects for a certain market sector, I was told, "Naaaah, we already had a consultant look into that, it's not worth it, it's just another $100 million market."
And of course with proprietary commercial software you usually have the vendor "pushing" newer versions, because selling new versions provides more profit to the vendor than maintaining old ones. The commercial software marketplace is a very imperfect, high-friction "market." And one place where the vendor has a lot of asymmetrical power is with respect to versions and releases. It is usually easy to keep customers on the "version treadmill." What if you don't like Microsoft discontinuing Windows NT 4.0? Where's the customer leverage? "If you do that I'll just buy Windows NT 4.0 from one of your competitors?"
linux 2.0 stability (Score:2, Interesting)
The 2.0 series had real stability. In 6 years I had just one or two 2.0 kernel crashes, mainly when using X or the sparse superblock patch. The 2.4 series has more features, but I've had much less stability. I've lost count of how many crashes I've had, even without using X, beta-quality optional kernel code, or devfsd. The most annoying ones are the module load/unload lockups still present in 2.4.19 and up:
# lsmod
Module                  Size  Used by
isa-pnp                21381   0  (unused)
# insmod eepro100
# lsmod
Module                  Size  Used by
eepro100               13413   0
# rmmod eepro100
Jun 27 11:32:03 koyuki kernel: unregister_netdevice: waiting for eth0 to become free. Usage count = 4
At this point, the kernel module code is unsalvageable. A reboot is required.
where is the future? (Score:1)
Correct me if I am wrong, but I expected "the future" to be about the design and features of versions beyond 2.5 and, maybe, of 3.0. What is planned for future releases?
As for 2.0 itself - who cares about the dead meat? We must use 2.4 or 2.5. Period.
old hardware != old Linux (Score:2)
Re:old hardware != old Linux (Score:1)
Memory usage is up compared to the 2.0 kernels. That might not make a difference on newer machines, but it does if you only have 4M. It's better than the 2.2 kernels though. I have a 486 machine that I've tried 2.0, 2.2, and 2.4 kernels on. The 2.0 kernel gives me about 3M for userland stuff, the 2.4 about 2M and the 2.2 will boot, but fails to run any of the standard initialization scripts.
Re:old hardware != old Linux (Score:2)
'Sides, unless it's a router box and you need the latest, greatest QoS tools and security fixes on your 486, there's very little reason to upgrade.
Most people seem to get new kernels when they need support for new hardware, and that's just not much of an issue for a 486.
I did upgrade the kernel on my 386SL/25 laptop recently, from 2.0.37 to 2.2.18, but only because I wanted to play with the swsusp patches and didn't feel like learning how to backport them to 2.0.
The temptation to move to a 2.4-AC kernel with swsusp built-in was very easy to resist.
good problems (Score:1)
Why do we have to have so many kernels maintained at the same time? Even just the current "stable" and "unstable" release system is a little strange to me. I mean, why spread the work among two kernels when we could be doing twice the work on just one?
I would propose things differently. A single kernel, the latest release, is the only one maintained (officially; anyone can maintain old kernels if they wish). The patches would, however, be marked stable and unstable. Test patches and work on them until they are stable enough for what would be a stable release, then merge them permanently into the main source. Until a patch is stable, it remains just a patch, being tested and worked upon.
I admit, I'm no kernel hacker (yet) but I do think this would be a much better solution. Linux would advance much faster with all the effort focusing on one kernel, no more.
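In fairness, this is more or less how the -pre and -ac patches already work; the mechanics of testing a patch against the one true tree are simple enough (a rough sketch, with a made-up patch file name):

cd /usr/src/linux
patch -p1 --dry-run < ../new-feature.diff    # check that it applies cleanly
patch -p1 < ../new-feature.diff
make dep && make zImage && make modules      # build and beat on it before calling it stable
patch -p1 -R < ../new-feature.diff           # back it out if it misbehaves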
Re:good problems (Score:4, Informative)
2.0: Legacy systems & embedded. It's tiny!
2.2: Middle-aged systems or wherever stability is a must. RH6.x and other 2.2-based distros are still in widespread use.
2.4: New systems with new hardware that requires new drivers.
2.5: Development. Don't use in a production environment, lest you fall down and go boom.
Besides, each line has a different head maintainer.
Re:good problems (Score:3, Interesting)
Re:good problems (Score:2)
The merging of Patch A into the stable kernel would not affect Patch B or Patch C
They're not dependent on it, they're entangled with it. They touch the same subsystems (same source files), although they can be orthogonal (in theory) to each other (applied in whatever combination you want). But still, you'll need to adapt each of those to the current kernel (and to each new kernel before your patch is adopted). That takes some effort.
I agree that a very very very good revision system could maybe do the trick, although it'll need help from a (more likely, more than one) human.
Another thing is applications and libraries. Yes, normally they're independent from the kernel (to a certain point). But if your development and stable kernel are the same, you'll need to update some of those a lot more often when you upgrade your kernel. By contrast, a stable kernel series should be compatible with the same libs and apps from beginning to end. If there are changes in the API or new features, get the newer kernel series, along with updated apps and libs. It's called modularity. A new kernel series is effectively another module, even if it replaces a previous one and fulfils the same task.
Last thing (somewhat linked to the previous point): up until now, I've mostly talked from the POV of a user (either server or desktop). Now I'll take the POV of a developer (a distributor, or a company designing a new product using this technology). Field upgrades are a PITA. There are ways to do them, but they're difficult (witness the number of unpatched IIS installations trying to propagate Code Red). When you need to do one, you (normally) prefer to change the minimum of things. So no jump from 2.0.32 to 2.4.19, but ideally 2.0.32 to 2.0.32+patches, or 2.0.40 if really needed. Now, if that 2.0.40 had diverged on a number of fronts from your 2.0.32, because features and major changes crept in through minor releases, your PITA has just grown again. It would be like basing your new product development on 2.5.0 and then trying to keep on top of the changes with every new release. Good luck.
please read this linux app developers (Score:1, Insightful)
If there is not an absolute requirement for the latest and greatest, then please do not require them for the build. Additions are great, but they should be optional and 'extra', not the bare minimum. Otherwise this is like Linux binaries only being released for the latest instruction set of AMD (or Intel) chips, but not for any other chips. (This is a loose and probably poor example, not to be overanalyzed to the point where it loses its underlying meaning and reason for being said.)
If anybody is making money off 2.0.x (Score:1)
1.0 kernel series (Score:2)
upgrade ? (Score:1)
People who figure 'if it ain't broke, don't fix it' are probably not concerned about 2.0 patches and releases, since they won't install them anyway.
My Unv... (Score:2)
The 2.0 kernel is more popular than 2.5 (Score:2)
That's more than the number of people using 2.5.
(Don't like the numbers? Get counted! [li.org])
Servers running 2.0 (Score:2)
Why should I mess with them? I have software on another machine that requires an old version of gcc (due to changes in the string library) and I don't want to rewrite it. Everything works. Everything is stable.
I also run old distros, even with the 2.2 kernel. I upgraded one machine to Slackware 4.0 when that was the New Thing, and it took me a while to get it stable. Now I don't want to mess with it; I just upgrade the kernel for security issues. It just runs Apache and WordPerfect; it is a PPro 200 with 128MB RAM and is as solid as can be. If I upgrade it, my old copy of WordPerfect won't work anymore, and I don't like the new one.
Many friends who came from the Windoze world always have the need to be upgrading. As long as the old software still works, why change it?
Re:Um, HUH? (Score:4, Informative)
Er. Not quite correct:
ftp://ftp.kernel.org/pub/linux/kernel/v2.0/testing/
So the latest release candidate for 2.0.40 was only released back in June. Doesn't look dead to me.
Re:Um, HUH? (Score:2)
2.0 is pretty much dead.
Do you bother to read the article before trolling?
"The 2.0.40 kernel is due to be released soon."
Re:Um, HUH? (Score:1)
Dead in my book means "no more support". However, some people are still using 2.0 and will continue to do so for many years to come. Thus, there is still demand for bug fixes on 2.0. This being free software, no one could enforce a halt to development even if they wanted to.
Thus, it's still supported. Thus, it's not dead.
Driver updates still support 2.0 kernels (Score:1)
Re:Um, HUH? (Score:5, Informative)
I've done 9 pre-releases since January 2001, and I'm probably going to release 2.0.40 any day now (I have one thing to do some research on first.) While the flow of releases isn't quite the same as that of the 2.4-series, it is maintained. Something would be really wrong if I had to release a new kernel every month, 6 years after the release of the first 2.0-kernel...
I open a new revision whenever I get a serious enough bug-report and/or fix, and release pre-patches/release-candidates until everything seems to have slowed down again. Wash, rinse, repeat.
Releases every year and a half or so, with interim releases every month or two, seems to be a pretty decent pace for a really stable kernel series. Most of my users aren't the kind that do regular kernel upgrades anyway; they usually inspect a new 2.0 kernel very carefully before installing it on their hardware.
Regards: David Weinehall, maintainer of the 2.0-series