The Future Of The 2.0 Linux Kernel

An Anonymous Reader writes: "The first 2.0 stable kernel was released over six years ago, in June of 1996. It was followed by the 2.2 stable kernel two and a half years later, in January of 1999. The more recent 2.4 stable kernel followed two years after that, in January of 2001. And the upcoming 2.6 kernel is at least a year off. Through all these years, 2.0 has continued to be maintained, currently up to revision 2.0.39, also released in January of 2001. David Weinehall maintains this kernel, and says, "there _are_ people that still use 2.0 and wouldn't consider an upgrade the next few years, simply because they know that their software/hardware works with 2.0 and have documented all quirks. Upgrading to a newer kernel-series means going through this work again." Read the full story here."
  • old systems (Score:3, Informative)

    by vstanescu ( 522393 ) on Sunday July 14, 2002 @05:52AM (#3880912) Homepage
    I have a very old system, running Red Hat 4.2, that does the billing for the X.25 part of my network. It runs a lot of scripts and binary programs that read accounting files generated by the X.25 switches, transform them into text files, and generate monthly reports for the billing department. It is so complex that I would think more than twice even about upgrading the kernel from its current 2.0.32 to the new 2.0.39, and upgrading the operating system to a newer distribution will never be done, because it is not worth the effort. It's great to see that somebody still takes care of old software, and if a bug bothers me someday, I will have the option to upgrade, or at least to talk with somebody who still maintains the software.
    • Re:old systems (Score:2, Interesting)

      by skydude_20 ( 307538 )
      Isn't this the kind of "it works, leave it alone" attitude that gave us the Y2K fiasco?

      Then again, we have the "it doesn't work, let's make it better" attitude that gave us Windows, so it's your choice of the lesser of two evils.
      • What Y2K fiasco? Nothing bad happened in 2000, at least to me...
        • Re:old systems (Score:2, Informative)

          by kasperd ( 592156 )
          What Y2K fiasco?

          I experienced one fiasco: my brother has a computer from 1995. The BIOS developers were "smart"; they must have been thinking, "Nobody is going to need any year before '94 in this RTC, so let's check for that and change any earlier year to '94 to avoid some problems."

          Guess what the clock displayed the first time it was switched on in the year 2000.
          • Y2K was such a non-issue that it didn't really matter. Everyone thought, "Hey, maybe all these programmers from long ago used two digits to hold the year..." Well, programmers from LONG AGO would have used a SINGLE 8-bit byte (or even a 4-bit nibble), instead of TWO 8-bit bytes, to hold the year.

            This means that, potentially, if they used signed 8-bit ints, the world could blow up on January 1, 2028 (when the computers add one to "December 31, 127"), or, with unsigned 8-bit ints, on January 1, 2156 (add one to "December 31, 255")... does this make sense?
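
            A quick way to see the wraparound being described here (a shell sketch; the 8-bit math is simulated, since shell arithmetic is much wider than 8 bits):

            y=127                                  # "years since 1900", i.e. 2027
            y=$(( ((y + 1 + 128) % 256) - 128 ))   # add one, with signed 8-bit wraparound
            echo $(( 1900 + y ))                   # prints 1772 -- the "January 1, 2028" bug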
        • The fiasco was the few years before Y2K, when we found the problem and had to spend a ton of money to make sure we could make it through.
      • Re:old systems (Score:1, Flamebait)

        by Quixote ( 154172 )
        isn't this kind of attitude of "it works, leave it alone" that gave us that Y2K fiasco?

        No, you're thinking of the "640K of memory should be more than enough" attitude (as in "2 digits should be more than enough").

      • No... that was the "oh, c'mon, no one will still be using this code 20 years from now" attitude that gave us the Y2K paranoia.
        • Cynic that I am, IMO the Y2K paranoia was all about preying on the fears of the ignorant, either to get them to loosen the purse strings and fund spurious projects, or to use Y2K as a foil to clean up a few old messes.

          The refreshing thing about the 2.0 kernel attitude is the unwillingness to twist someone's arm to make them upgrade.
          There is a legitimate need to talk about upgrading, though.
          The IIWDFWI (If it works...) attitude can have all the benefits of clinging to a bad habit,
          and said habit can put you in extremis if you ignore it.
          My company (and project) is as deep in the habit of !planning as anyone else. Our recent firewall implementation is a running disaster.
          Thus, keeping an old 2.0 box doing its thing is great. I'd still be considering what a cheap drop-in 2.4 box might look like, and even get it tested (in that spare time), so that we don't have an 'Ostrich moment'...

    • As I prefer RealPlayer (when there's no choice except Windows Media), I'm kinda used to it.

      Versions below RealOne (in fact, 9) had an easily accessible option to check the server OS and RealServer version.

      Always interested in those huge servers that can handle thousands of clients on a media platform, I checked them...

      Guess what? I don't know if it's changed or not, but speaking about the year 2001, all of them were running a Linux 2.0 kernel!

      I guess it's not just "old" machines; people really do trust that "old" kernel.

      It's also about preventing downtime: a 24/7 audio/video service can't afford to go down because some guy found a simple glitch in the latest 2.4 kernel and it propagated worldwide.
  • But was there any real reason to think support for 2.0 would suddenly dry up?
  • killer feature (Score:5, Insightful)

    by ghum ( 109642 ) <(moc.temruogmaps ... g.91.3xxmodeerf)> on Sunday July 14, 2002 @05:53AM (#3880915)

    The long-time maintenance of an "old" kernel is a very important argument in favour of Linux for serious industrial applications.

    In our area we have the saying "you earn money with depreciated machines" - and to use them, you simply need an "old", maintained operating system.

    So the work of the "historic kernel" maintainers is helping Linux earn a good reputation.
    • Re:killer feature (Score:3, Interesting)

      by jsse ( 254124 )
      Right you are. I know there aren't many such cases here, but my friend is working on refurbishing old 486s for kids in the Third World. Since the requirement is to be able to surf the web securely, I recommended stable Debian with a 2.0.x kernel, which seems to work well with this old hardware, has good security, and above all incurs no license fee.

      Now we know who we must thank. Thank you very much, David Weinehall. :)

      Only they'd have problems browsing pages which require the mplayer plugin. Would any expert out there give me some hints? :)
      • my friend is working on refurbishing old 486s for kids in the Third World. Since the requirement is to be able to surf the web securely

        Why in the world would secure browsing be a requirement for third world kids on old PCs?

        • Why in the world would secure browsing be a requirement for third world kids on old PCs?

          What, poor kids don't deserve to have Hotmail? Or Yahoo mail?

          There are a lot of people in the "Third World". They want services too. My ex-gf was Brazilian. She just got a Pentium 4, and needs secure browsing to do her online banking. You can do things with ATMs there that they're just designing here. Check out www.lavrasnovas.com.br [lavrasnovas.com.br]. This is a small town, maybe 100 people (but at least 10 bars, woohoo!), but it's got a web site, with Shockwave.

          "The Third World" is a pretty complex, diverse place. I personally hate the term, it has too many connotations of arrogance. But if you do use it, don't lump people all together. Middle class there is a much better life than middle class here.
          • Right you are.

            Talking about online banking: I might have to use Windows if I had to let them communicate with commercial entities, as most companies there are Windows-centric. I'm glad that we work for kids; that gives us greater flexibility in choosing a platform.
    • In our area we have the saying "you earn money with depreciated machines" - and to use them, you simply need an "old", maintained operating system.

      This statement seems to be making the assumption that when talking about software, "newer" is synonymous with "bigger/bloated".

      It is still more than possible to set up a small install on old machines, using a modern distribution, with the minimum number of features compiled into the kernel.

      A typical use of older machines, as a firewall/router, springs to mind. Here I doubt that the 2.4 firewall code is much more resource hungry than the 2.0 code, but the changes to the kernel make iptables much more flexible than ipfwadm.
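
      To illustrate, a rough sketch of the classic stripped-down build for the 2.0/2.2 series (exact targets vary slightly between kernel versions):

      # answer N to every driver and feature the old box lacks
      make config            # or menuconfig
      make dep && make clean
      make zImage            # small image; bzImage if it outgrows the zImage limit
      make modules && make modules_install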

      Julian

      • Re:killer feature (Score:2, Insightful)

        by XO ( 250276 )
        processor/network/RAM hungry, probably not -- but the pure physical SIZE of the 2.2 and 2.4 kernels prevents them from being used adequately on single-floppy-based systems.

        My network router/web server/email server is all mounted off of a single floppy that is both the root filesystem and the boot disk. Can't do that with 2.2 or 2.4 and still have all the drivers necessary to make all the hardware work, plus the software necessary to make all the rest of it work.
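
        For the curious, the classic single-floppy recipe looked roughly like this (offsets and file names are illustrative; the Bootdisk-HOWTO has the real details):

        dd if=zImage of=/dev/fd0 bs=1k               # kernel at the front of the floppy
        dd if=rootfs.gz of=/dev/fd0 bs=1k seek=300   # compressed root fs behind it
        rdev /dev/fd0 /dev/fd0                       # root device is the floppy itself
        rdev -r /dev/fd0 16684                       # ramdisk word: load flag (16384) + offset 300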
        • Re:killer feature (Score:2, Insightful)

          by Alan ( 347 )
          Not only floppy but other embedded devices. My old company was using 2.0.39 simply because otherwise we couldn't fit it onto the system, or get it to use a reasonable amount of RAM. When you're trying to produce hundreds of thousands of units, the move from, say, an 8 meg DOC or DIMM to a 16 meg one is a big expense. The 2.0 series was stable, time-tested, and fit in a very small amount of space. We simply couldn't get the same results from 2.4.
      • A typical use of older machines, as a firewall/router, springs to mind. Here I doubt that the 2.4 firewall code is much more resource hungry than the 2.0 code, but the changes to the kernel make iptables much more flexible than ipfwadm.
        What about machines that started life with ipfwadm and have been firewall/routers for about 5 years now? Updating to the newest kernels pretty much means you have to rewrite all of the rules in ipchains/iptables, which takes an employee's time, which costs money and decreases productivity. I'd rather just install an old 2.0 series kernel with the latest security patches than go through the pain of rewriting lots of firewall rules.
        • What about machines that started life with ipfwadm and have been firewall/routers for about 5 years now? Updating to the newest kernels pretty much means you have to rewrite all of the rules in ipchains/iptables

          Well, even the 2.4 series has support for ipfwadm- or ipchains-style syntax if desired. The options are available under Networking options -> IP: Netfilter Configuration.
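
          So a 2.4 box can keep its old rules while they get rewritten at leisure. Roughly, the same "accept inbound web traffic" rule across the three generations looks like this (a sketch; addresses are illustrative, and the 2.4 compat modules can't be loaded alongside ip_tables):

          modprobe ipchains                                      # 2.4 compatibility module
          ipfwadm -I -a accept -P tcp -D 0.0.0.0/0 80            # 2.0-era ipfwadm syntax
          ipchains -A input -p tcp -d 0.0.0.0/0 80 -j ACCEPT     # 2.2-era ipchains syntax
          iptables -A INPUT -p tcp --dport 80 -j ACCEPT          # 2.4-native iptables syntax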

    • And because it's GPL no company can ever end-of-life it.
  • by flacco ( 324089 ) on Sunday July 14, 2002 @05:54AM (#3880916)
    The 2.0 kernel is rapidly reaching end-of-life status. You are all warned that operating system updates (including security updates) will soon be discontinued. You are urged to contact your local software vendor, upgrade to the latest version of the Linux kernel, and sign up for Software Assurance ASAP.

    Oh wait, this is open source.

    • by Erasmus Darwin ( 183180 ) on Sunday July 14, 2002 @06:09AM (#3880941)
      "Oh wait, this is open source."

      Which reduces the problem but doesn't negate it. Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.

      Sure, there are a lot of people out there who can program, and even a decent number who can program well. But in this case, you'd need someone with at least some Linux kernel hacking skills and enough programming know-how to close a bug (possibly even a security bug) that made it past all those people who've hacked on 2.0 so far. Now factor in that you'd want a programmer good enough to be trusted with mucking around in the kernel for Very Important Systems -- systems important enough, at least, that you aren't willing to even take the next big jump in kernel versions.

      It all boils down to a dicey situation. Even certain Open Source projects/versions get end-of-lifed by the official maintainers. You aren't always guaranteed that someone else will pick it up.

      • Which reduces the problem but doesn't negate it. Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.

        The point is not that everyone should maintain their own source code; the point is that if there are enough people interested in keeping it around, it will stay around. You're not at the mercy of your monopolistic vendor's business plans.

        • not everyone has the programming skills necessary to do so.
          The point is not that everyone should maintain their own source code;


          And as an extension to that, you can always hire a consultant to fix up the software for you. That's not as expensive as it sounds, since once the software does what you want it to, it really doesn't need to be maintained much anymore.

          At work we are running RPG code on a System/36 emulator from the '80s, and it rarely needs much maintenance. The main concern is having the data in an accessible format, so that you eventually have a migration path off the old software. Flat EBCDIC text files aren't quite the most portable, but the emulator has output filters that let us synchronize the postgres database to it nightly. They will also eventually let us migrate off of it.
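
          In the same spirit, the core of such a nightly sync can be as simple as this sketch (file and table names hypothetical; the real filters would also handle the fixed-width record layout):

          dd if=/s36/export.ebc of=/tmp/export.txt conv=ascii       # EBCDIC -> ASCII
          psql billing -c "\copy invoices from '/tmp/export.txt'"   # refresh the postgres copy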
      • not everyone has the programming skills necessary to do so

        This is true, but it is also true that people who still need old kernels tend to have higher-than-average computer skills, so among them it is easier to find somebody who could fix bugs etc.

        Anyway, when an open source piece of software is abandoned by its official maintainers and is not picked up by anybody, chances are that almost nobody is using it anymore: all of the few who still did decided that an upgrade would cause fewer problems than acquiring the skills needed to continue using it.

        Yes, even open source software dies, but this happens when it really has no more reasons to be alive, not when some commercial department decides that they want to sell some new version.

      • Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.

        So what? If your business depends on a feature in the 2.0 series kernel, then it doesn't matter if you have the requisite kernel programming skills. You can buy those. I don't work for redhat, but I'll bet $.50 that they'd take on that support contract. If not them, maybe IBM. If not them, how about contracting with the guy who's doing it right now?

        The fact that it's open source means that anyone who's willing to do the work of maintaining the code can. And if you're depending on it, you will always have options.

      • Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.

        If I had to name one major downside to open source software, it would be that it has taught people to expect, nay demand, something for nothing. In the olden days you used to shut up, put up, and pay up.

        Now factor in that you'd want a programmer good enough to be trusted with mucking around with the kernel for Very Important Systems -- systems important enough, at least, that you aren't willing to even take the next big jump in kernel versions.

        Here's me still running Linux 0.13 on my beer cooler. I don't let anybody near it, *especially* Finnish kernel hackers.
      • No one expects non-technical users to teach themselves to be kernel hackers. That's just a silly straw man.

        The point is that you can hire a kernel hacker to do the work. Linus and the rest of the gang doing the volunteer work don't want to support the stuff that's running your business anymore? Hire someone else to do it. It's an option, and in some cases it can be a very good one.

        Whereas with unfree software, whether from MS or Sun or whoever, that option just doesn't exist.

      • You aren't always guaranteed that someone else will pick it up.

        Because we are talking about what would be referred to as a major version, I think in this case there is safety in numbers. Anyhow, does it really matter? If it ain't broke, don't fix it. You know, I know some ppl that are still installing 1.0 kernels on certain systems...

        Yes, at some point practically no one will be using any of the kernels that are out now. But that is going to be a long time...

        The main reason for all of this is that there are really 2 big groups of ppl that use Linux... geeks (those most likely to spend time and effort to keep things working) and companies (those most likely to spend money to keep legacy systems working).
      • Well, you can always pay somebody to do it. That's how Namesys (the people who make ReiserFS) earn money.

        Now, of course there are lots of programs out there that are useful, but broken in some way or not actively maintained. I'm sure everybody has found a nice project that just needs one little thing to be perfect but that nobody has touched for a year.

        I think what we need is a "Volunteer Hackers" site where users could post their requests for help, and programmers willing to help could see what is needed. I'm wondering if this could succeed. It would be very nice if it did, and probably would be yet another good reason to switch.

        • I think what we need is a "Volunteer Hackers" site where users could post their requests for help, and programmers willing to help could see what is needed.

          I may be wrong, but wasn't this the exact idea behind SourceForge (or perhaps Mozilla's bug tracking system)?
      • Everyone loves pointing out that anyone can get their hands on the tools necessary to modify open-source software, but they tend to conveniently ignore the fact that not everyone has the programming skills necessary to do so.
        True, which is why I believe that is poor advocacy. Most users will not find the argument impressive, as they know they can't change the code themselves.

        The real advantage (from the non-programmer's point of view) is that free software gives you a much larger choice of suppliers. As long as the market exists, someone will be there to support it. With non-free software, you are depending on a single supplier, who may at any time refocus their interest away from you.

        Of course, even with free software the market can become so small that the cost of finding a supplier becomes too large. But at least it is your wallet, and not the strategic geniuses in some board room, that decides when that point has been reached.

      • With open source, the existence of a user community for a particular version is far more likely to produce people who are willing to maintain that version than in the commercial case. Companies will drop products even when there's a thriving user community, if the sales of the product in question are no longer commercially viable. (I've done this myself, with a software package I used to sell.) All products eventually become the responsibility of their user communities, but with open source, at least you have some options, up to and including paying someone to make enhancements and fix bugs for you. If all you have are binaries, you're SOL.
      • Let's say the worst-case scenario is realized: they discontinue maintaining the 2.0 kernel, someone using it in a production environment is not in a position to upgrade, and a dangerous flaw is discovered, or a driver desperately needs to be backported.

        The good news is, even though it's no longer supported, you STILL have all the source available. If you're desperate enough, you can either fix the code yourself or hire someone to do so. Certainly, it would probably be easier to just upgrade, but if for some reason that choice is not feasible, there's no huge company in Redmond telling you to go fuck yourself.

        -Restil
    • But what happens when everyone deletes the old source versions off their mirrors "because it's just a copy of old stuff" ...?

      This is just what happened with the plans for the Saturn V rocket -- there were three copies, each of which was destroyed "because it's just a copy."

      Where do you go, then?
  • If it ain't broke (Score:3, Insightful)

    by Ubi_UK ( 451829 ) on Sunday July 14, 2002 @06:06AM (#3880938)
    ...don't fix it

    A good example of this is that NASA still uses 8086 processors: you know exactly how they work.
    New things mean new problems. If you have a system which does its job, why upgrade to a newer kernel that can support hardware and protocols you don't need, but brings in bugs you don't want?
    • Re:If it ain't broke (Score:2, Interesting)

      by guybarr ( 447727 )

      A good example of this is that NASA still uses 8086 processors: You know exactly how they work.

      I thought this was more due to radiation robustness than to plain conservatism (which I agree is an asset in critical-system engineering).

      Am I wrong?
      • I'd think mostly conservatism... custom programs with a very finite purpose, running their tasks as efficiently as necessary without (and this seems most important) unnecessary POWER consumption.

        That last part looks to be the clincher in a completely and utterly isolated, self-sustaining environment... obviously us earth-bound electricity suckers are considerably spoiled... barring more SoCal brown-outs this summer ;-p

      • Yup, it's radiation hardening that's the issue.

        Making a chip radiation-hardened is a big engineering undertaking, for a lot of reasons. The individual chips are very expensive, and thus the testing cycles are expensive. The testing process is long, and the skills to make it work are uncommon. Radiation-hardening a "simple" microprocessor like a 386 or a SPARC might cost in the hundreds of millions, while a processor like a P4 would probably not even be considered.

        NASA may move to the original Pentium as a control-center chip in the near future, as Intel so graciously donated their Pentium design for this purpose (a small fraction of the cost of the actual radiation-hardening design work!). Last I checked it was still RS/6000 processors for system control, with 8086s for simpler tasks.

    • Yes, NASA uses 8086s and runs linux86 on them! Flight Linux [nasa.gov]
  • If something isn't broken and does what you want, why upgrade it? That's the beauty of free software, *you* decide what version you're running, not your vendor...
    • When was the last time Bill Gates stood behind you, pressing a gun to your neck and forcing you to upgrade something? I know a lot of businesses that still run Office 97 and even Windows 95.

      OK, there is always the update thing, but there are also not that many updates for older versions of open source software. The kernel might be a notable exception, but try getting upgrades to KDE1 or an old XFree or the like.
      • Well, I know for a fact that MS hasn't released patches for Office 97 in the last year. This leaves an Office installation open to a whole slew of vulnerabilities that could easily allow system compromise. They also don't provide support anymore for 97 or below. I can't say anything particular about Win 95 because from a security perspective I just assume it's compromised.
        In general the pressure is never direct. It's just that if you run older versions of MS software you accept that you will remain unprotected against known vulnerabilities and you will get no support from Microsoft.
        • SuSE announced recently that they have stopped supporting SuSE 6.4, which also means no more security patches etc. Sure, SuSE 6.4 is semi-antique, and you can still try to patch everything manually from source (which admittedly is a strong point for open source), but the same principle applies.

          The point I am trying to make is that the soft pressure to update is inherent to software, be it open or closed source. On one hand, a software vendor, even a monster like MS, is only able to properly support a subset of the products it ever made; on the other hand, everyone living from selling stuff, be it MS or your favorite Linux packager, lives from you buying more from them, so they certainly try to create incentives to buy their latest toys. If you won't fall for the shiny new stuff, well, maybe the lack of easily applicable fixes will convince you. The only way around this is 100% open source distros like Debian, but they are not everyone's cup of tea either, for various reasons.

          Also, try to get a bug fix applied to an older release of some major open source product. It doesn't have to be something really outdated like KDE1. In one thread of recent days (don't remember which, but I think it was the "10 things wrong with Linux" one) a lot of people complained that it is often difficult to get bugs that are not extremely critical security bugs fixed, even in current stable releases. You will often be told to upgrade to the most recent version, or even a CVS version. No monetary costs involved, but still the same principle, and still the upgrade to the latest version can mean upgrading whole toolchains, especially on Linux.
  • The first 2.0 stable kernel was released over six years ago, in June of 1996.
    I wonder how many Windows 95 machines are still running and in actual use. Anyone here still running a variant of Win95?
    ...
    How about in a server environment? [ducks]
    • I wonder how many Windows 95 machines are still running and in actual use. Anyone here still running a variant of Win95?

      If I were the head of a company that owned a few servers and I discovered that one of them was running Win95...

      Well, I'd make an exception to the saying "Nobody ever got fired for buying Microsoft."

      • Why? There may have been perfectly valid reasons for running some server software on Win95 at that time. Perhaps the software was not available for any other OS, management was most comfortable using it, etc.

        Of course, there might also be very good reasons to upgrade to Linux or something else right now (security, easier to administer, etc).

        BTW, the company I work at still has quite a few Win95 desktops in use for customer check-in. There are many problems with our existing setup. One of the big looming ones is that MS no longer supports Win95; I suppose it is expensive for them to do so, and continued support would be a disincentive for people to keep upgrading their OS. Contrast this with the situation on Linux, where old kernel versions will be supported as long as there is demand.
      • I have a friend who still uses Windows 3.11! He has so much (old) software that he can't afford to upgrade it all.

        Nowadays his most-used software on this machine is an X11 emulator connecting to his Linux machine :)
    • Me! Well, if by running it you mean running it for a coupla hours till it crashes (the joys of beta, probably-never-gonna-be-updated sound drivers), then Vulcan nerve pinch into Linux, read a story like this, and decide that my 2.4.17 kernel isn't quite antiquated yet :-)
    • I deal w/ fifty or more individuals daily for tech support.

      40 are running Win98/ME, 5 are XP, 4 are Win95/etc, and 1 is MacOS.

      That is an average.
    • I run Win95 on a backup router/spare net terminal for looking up FAQs online when I fubar my main rig. I've gotten more PCs since that computer, but I don't ever change anything on it, and it's actually quite stable for what it does. As for a server, lol, it's my backup for when my Mac LC II (68020 processor! circa 1991!) goes down for repairs/upgrades; that's primarily a web/ftp/mail server. I have a Sony Vaio w/ TV tuner card (ATI All-in-Wonder Rage 2-ish card) that serves as my roommate's and my TV/media center; it runs Win95 SE w/ USB support (what came preinstalled on it 6 years or so ago). All the drivers for it were custom-tailored for that hardware setup, and as a result it's almost as stable as my OS X PowerBook. All my other legacy Windows machines, of course, run like ass. I know my school in Plano (rich suburb of Dallas) ran Win95 until this summer; they're switching to Win2K this fall.
    • I switched from Win95 to Win2K 8-9 months ago.

    • I wonder how many Windows 95 machines are still running and in actual use. Anyone here still running a variant of Win95?

      Absolutely. The machine I'm typing on now is running 98SE, customised with 98lite using the explorer.exe from 95. It runs every Win32 program I need on a desktop, and does it noticeably faster than machines with significantly more powerful hardware running later versions. Reasonably stable, considering it is Windows after all - it actually gets uptime close to the Win2K boxes they have at work.

      If you had any doubt that the answer to your question would be yes, this will really blow your mind - I've also got DOS 6.22 and WfW on a CD on a shelf across the room. I haven't actually used it in months (haven't used WfW in years, but DOS 6 really does come in very handy at times).

    • Re:Used since 1996! (Score:2, Interesting)

      by modicr ( 320487 )
      Hi!

      What about this network:

      SERVER:
      1 x Netware 4.2 small business

      CLIENTS:
      1 x Windows 98 SE / Win2K SP2 (dual boot)
      19 x Windows 98 SE
      1 x Windows 98
      2 x Windows 95B
      1 x Windows 95A
      5 x Windows 3.11 for Workgroups (& MS Word 6)

      Ciao, Roman
    • Until recently I worked for a very large (global) company and win95 was very much alive and kicking on the corporate desktop.

      L
    • "I wonder how many Windows 95 machines are still running and in actual use"

      A ton of Win95 is out there. You seem to forget that >90% of businesses are small businesses and thus don't upgrade their machines very often. I still come across plenty of Win 3.1 and DOS machines in law offices, doctor's offices, accounting firms, and factories.

      There's nothing wrong with patting the Linux kernel on the back, but let's not forget there are also a bunch of Netware servers, not to mention older Unix boxes, that have been running since before Linux even existed.
    • My Windows machine at work still runs 95, as do the machines of a few co-workers. I have a machine at home that also runs 95. Someone gave it to me, and the OS is on 13-14 floppies.

    • I just acquired a laptop from '97 that was still running Windows 95. I did upgrade it to Windows 98, but it took me quite a while to find all the components necessary to get all of its neat stuff working ('98 was supposed to have made PCMCIA and battery/APM/ACPI issues a lot easier... but it just made things a lot worse for me, since '95 had the software already installed, and the '98 install blew it all away).

      The point-of-sale system at work runs Windows 95. It's used primarily as (a) a mostly dumb terminal [it runs screensavers on its own, and displays graphics locally, but all information/screen placement is determined by the server in the back room, which runs Xenix] and (b) an internet device [it has IE 5.5 installed].

    • I wonder how many Windows 95 machines are still running and in actual use. Anyone here still running a variant of Win95?

      The company I'm currently working for mainly uses Windows 95 and NT 4.0 on ~1000 desktops.

    • Since you ask, yes (tho not on a server :) and they will do well to illustrate the value of older versions:

      My everyday-workhorse machine runs Win95 OSR2.0b, and will probably do so for its entire lifespan. W95 is suitable for a lowly P233, and once beaten into submission, it's nearly 100% stable. It has all my critical can't-live-without apps already trained to play nice together. Changing the OS (or anything else) would be counterproductive. This machine is expected to do daily work without making me hunt down and fix today's complication. Why rock the boat?

      I have an old P75 that I use as a test rig, that runs Win95 first edition (cuz that's what came with it, whaddya want for free). It's stable (it has NEVER crashed since I've had it despite serious abuse) and already has all the weird obscure drivers it needs. While there's probably no compelling reason (other than said drivers) to keep W95 on it, there's no pressing reason to switch or upgrade the OS, either.

      I have several clients still running Win95 too, because "if it ain't broke, don't fix it". And in some cases because that's all their hardware will support, and they can't justify replacing it.

      I agree with a post somewhere upstream -- if what you're using works for you, and if an upgrade doesn't address a need YOU have (be that a feature or a bugfix) you're probably better off NOT upgrading. We all know how often patches break more than they fix -- well, upgrades are much the same.

      Personally, I only mess with upgrades and such on machines whose mission in life is to test whatever so I can get familiar with it. Never on a production machine unless the upgrade is needed, and then only after it's proven sound.

      So in my view, there is much to be said for older versions, and for the people who maintain them.

      [BTW the oldest utility I still use is dated 1983.]

  • by OneFix ( 18661 ) on Sunday July 14, 2002 @06:44AM (#3881001)
    Because it shows that a lot of these users don't care about what's "in", and are choosing Linux for its stability, power, ease of use, etc...

    The truth is, I think a lot of ppl put too much emphasis on the newest version of software just because it's newer.

    I have found that maintaining the same version is much easier and leads to a much more stable & secure system.

    When a new version comes out I ask myself 3 questions.

    1) Do I need the upgrade: does it fix any issues I was having (crashes, incompatibilities, etc.)?

    2) Was I going to choose an alternative if not for a feature in the new revision (e.g. my new CD-ROM won't work with the old version)?

    3) Am I willing to suffer the consequences of any problems that might arise as a result of the upgrade?

    The M$ mentality says... "It's version 1.2, which is higher than my current version, 1.1... I need it!!!"

    People have been trained (primarily by M$) to crave the newest version of whatever bug-ridden crap is thrown at them, and the corporations do whatever they can to force you to upgrade.

    The reason Linux is different is simple: the kernel is not maintained by a corporation that is driven by sales and therefore driven by your purchase of their product.
  • 0.99.13 (Score:5, Interesting)

    by shoppa ( 464619 ) on Sunday July 14, 2002 @06:45AM (#3881003)
    A former customer of mine is still running a Linux 0.99.x kernel that I set up from a Slackware (on floppies) kit for him. It's a 486DX66, which was pretty hot stuff in 1994 or so. No connection to the net, so little need to apply security updates (not that I haven't tried to get him to upgrade!)
    • Why is that marked funny???

      I know a factory who use(d) 25-year-old computers. (Actually they must be over 30 years old by now, and as far as I know they still use them.)

      They still worked, so they saw no need to spend loads of bucks to upgrade.

      It was much more cost-effective to keep the old computers. Sadly they could no longer get spare parts, so they had to dismantle one of them and exchange it for a modern computer - they kept the working parts as spares for the remaining computer so it could continue to serve them.

      If cars can be 50 years old, so can computers.

      • The place I used to work at, before they went strictly to home health care, sold to local businesses. This was in the '70s and '80s. One business was still using the box and software (Cadol) up until 1999; Y2K was the only thing that was going to bring it down. And that was due to hardware, not software.
    • Re:0.99.13 (Score:4, Funny)

      by nathanm ( 12287 ) <nathanm.engineer@com> on Sunday July 14, 2002 @09:43AM (#3881360)
      I'd bet he just doesn't want to ruin his uptime.
    • It's a 486DX66, was pretty hot stuff in 1994 or so. No connection to the net, so little need to apply security updates (not that I haven't tried to get him to upgrade!)

      I'm just curious: if he were to upgrade, would the 2.4 kernel make his machine run faster or slower?
      • Well... running on my P75, 2.2 was much faster for a lot of things than 2.4 is. (I just upgraded last week, so the experience is still fresh.)

  • by xt ( 225814 ) on Sunday July 14, 2002 @06:50AM (#3881007)
    There are a lot of specialized applications running on legacy systems, such as the mechanical corridors that connect to aircraft (Win 3.11), handheld barcode scanners (DOS), or even a lot of ATMs (OS/2 1.x).

    The basic advantage is the understanding someone comes to have by working a number of years with something specific. Most bugs, and for certain all the serious ones, are known and documented. Design limitations are known also. There are field proven designs and in many cases known tweaks to extend functionality, even beyond the original capabilities.

    This stands true for pretty much everything; another poster pointed out that NASA still uses 8086 hardware!

    The need for maintenance is also something relative; if you have something that constantly works reliably, the maintenance required to keep it that way is minimal.

    I believe that even if 2.0.39 were the last kernel of the 2.0.x series, people who use 2.0.x wouldn't really care. I know, since I have a 2.0.36-based home router that has run for the past year and a half with zero maintenance. I don't even plan to upgrade to another 2.0.x kernel, let alone 2.2 or 2.4, as long as it just works (tm). :)
  • by dpbsmith ( 263124 ) on Sunday July 14, 2002 @07:17AM (#3881050) Homepage
    One advantage of open source is that the continuation of older versions is _truly_ market-based. That is, an old version that is genuinely valuable to a small coterie of users can remain in existence. In particular, low-benefit-low-cost products--products that appeal to a small base but cost little to maintain--can thrive as long as the benefit/cost ratio is good (even if numerator and denominator are both small).

    IMHO one of the big problems with proprietary software--which I once saw personally from within a then-Fortune-500 company--is that career advancement depends on working on big projects and thinking big. On one occasion I was told that something wasn't worth pursuing because "on your own showing it can't bring in more than $2,000,000." I said, "Yes, but the costs are trivial, so it will be very good business." It was explained to me that projects of that size were just too insignificant to be considered. I believe that just the cost of translating the manuals into the fifteen languages supported by this global company was enough to sink the project (and of course ALL the company's products HAD to be translated into ALL languages, because that was their procedure). On another occasion, when wondering whether we should be developing products for a certain market sector, I was told, "Naaaah, we already had a consultant look into that, it's not worth it, it's just another $100 million market."

    And of course with proprietary commercial software, you usually have the vendor "pushing" newer versions, because selling new versions provides more profit to the vendor than maintaining old ones. The commercial software marketplace is a very imperfect, high-friction "market," and one place where the vendor has a lot of asymmetrical power is with respect to versions and releases. It is usually easy to keep customers on the "version treadmill." What if you don't like Microsoft discontinuing Windows NT 4.0? Where's the customer leverage? "If you do that, I'll just buy Windows NT 4.0 from one of your competitors"?
  • linux 2.0 stability (Score:2, Interesting)

    by FeatureBug ( 158235 )

    The 2.0 series had real stability. In 6 years I had just one or two 2.0 kernel crashes, mainly when using X or the sparse-superblock patch. The 2.4 series has more features, but I've had much less stability. I've lost count of how many crashes I've had, even without using X, beta-quality optional kernel code, or devfsd. The most annoying ones are the module load/unload lockups still present in 2.4.19 and up:

    # lsmod
    Module      Size   Used by
    isa-pnp     21381  0 (unused)

    # insmod epic100
    # lsmod
    Module      Size   Used by
    epic100     13413  0
    isa-pnp     21381  0 (unused)

    # rmmod epic100
    Jun 27 11:32:03 koyuki kernel: unregister_netdevice: waiting for eth0 to become free. Usage count = 4

    At this point the module is stuck: it can neither be unloaded nor reused, and a reboot is required.

  • I see only history. The whole article is about the past. Where is the future?

    Correct me if I am wrong, but I expected "the future" to mean the design and features of versions beyond 2.5 and, maybe, 3.0. What is planned for future releases?

    As for 2.0 itself - who cares about the dead meat? We must use 2.4 or 2.5. Period.

  • What's bad about using Linux 2.4 on a 486?
    • Memory usage is up compared to the 2.0 kernels. That might not make a difference on newer machines, but it does if you only have 4M. It's better than the 2.2 kernels, though. I have a 486 machine that I've tried 2.0, 2.2, and 2.4 kernels on. The 2.0 kernel gives me about 3M for userland stuff, the 2.4 about 2M, and the 2.2 will boot but fails to run any of the standard initialization scripts.
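
      (If you want to see where a box like that stands, the quickest check after boot is something like the following; the numbers will obviously vary:)

      free -k                 # how much RAM userland actually has left
      head -2 /proc/meminfo   # the kernel's own totals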

    • I have no numbers to back this up, but in my experience, 2.4 -feels- slower on older hardware than 2.0.

      'Sides, unless it's a router box and you need the latest, greatest QoS tools and security fixes on your 486, there's very little reason to upgrade.

      Most people seem to get new kernels when they need support for new hardware, and that's just not much of an issue for a 486. ;)

      I did upgrade the kernel on my 386SL/25 laptop recently, from 2.0.37 to 2.2.18, but only because I wanted to play with the swsusp patches and didn't feel like learning how to backport them to 2.0.

      The temptation to move to a 2.4-AC kernel with swsusp built-in was very easy to resist.
  • Although I do see the need for keeping old kernels maintained (I even have some old systems lying around I want to install 2.0.x on), I see this as one of the problems with Linux.

    Why do we have to have so many kernels maintained at the same time? Even just the current "stable" and "unstable" release system is a little strange to me. I mean, why spread the work among two kernels when we could be doing twice the work on just one?

    I would propose things differently. A single kernel, the latest release, is the only one maintained (officially; anyone can maintain old kernels if they wish). The patches would, however, be marked stable and unstable. Test patches and work on them until they are stable enough for what would be a stable release, then merge them permanently into the main source. Until a patch is stable, it remains just a patch, being tested and worked upon.

    I admit, I'm no kernel hacker (yet) but I do think this would be a much better solution. Linux would advance much faster with all the effort focusing on one kernel, no more.

    • Re:good problems (Score:4, Informative)

      by DrQu+xum ( 218745 ) on Sunday July 14, 2002 @11:11AM (#3881635) Homepage Journal
      Why have 4 active kernel lines?

      2.0: Legacy systems & embedded. It's tiny!
      2.2: Middle-aged systems or wherever stability is a must. RH6.x and other 2.2-based distros are still in widespread use.
      2.4: New systems with new hardware that requires new drivers.
      2.5: Development. Don't use in a production environment, lest you fall down and go boom.

      Besides, each line has a different head maintainer.
    • Re:good problems (Score:3, Interesting)

      by Papineau ( 527159 )
      A few points to consider:
      • More difficult to change big parts of the kernel, or entire subsystems, without a development kernel for which it is "normal" to be broken at times. For something in maintenance mode, the system you propose is quite fine (witness what's happening with the 2.0, 2.2, and even 2.4 kernels). But for the bleeding edge, it's just not possible to do it that way, because patch A (which improves the VM) affects patch B (VFS) and patch C (scheduler). So if you merge patch A (because it's deemed "stable") into the next official kernel release, then patches B and C must be reworked, not because of themselves, but because what they build upon has changed. Next, when patch C goes in, it's patch B's turn to be adapted (again). It's more efficient to have all 3 developed at the same time in an unstable kernel, and have all the quirks sorted out. Of course, don't run those kernels on production machines...
      • The goals of the two kernel branches are different. One strives to be usable right now (bugfixes); the other strives to be easier to work with in the future (more features, cleanup, performance improvements, etc.). If you merge those two together, you'll more than likely end up with something absolutely unstable, or a nightmare to manage (and to merge different patches into).
      • The goal of kernel development is not only to develop new features (aka advance). A big part of it has the goal of keeping running systems, well, running. It's for them that 2.0.40 is being prepared, as well as 2.2.22. Even 2.4.19 falls into that category, which is quite different from the goal of 2.5.
      • As for the officialness of updates to older releases, it's only so that development isn't split between a few groups with the same goals. I don't think many of the people currently working on the different subsystems of 2.5 also work much on 2.0.40, especially since the differences between the latest RCs are one or two fixes each time. OTOH, driver maintainers are more likely to follow its development (although for bugfixes only).
      So in the end, it's not double the work to maintain older kernels (maintenance implies "no new development", hence "not a huge workload"). And if nobody needed it, nobody would do it.
  • by Anonymous Coward
    This is EXACTLY why you should not depend on the latest and greatest simply for your app to work. Try building with multiple libs (different versions) and for older versions of the kernel and environment. Many of us do NOT want to upgrade the kernel just in order to have our video cards, sound cards, and such work. I have often noticed a rather alarming trend: a vital addition to the Linux suite of apps, one that people have been eagerly waiting for and contributing to, ends up being released in a form that requires libs and binaries that are only weeks to a month or two old... this is odd, considering that the app was worked on (and the features/support promised) long before those binaries and libs were put together in even an unstable form.

    If there is not an absolute requirement for the latest and greatest, then please do not require them for the build. Additions are great, but they should be optional and 'extra', not the bare minimum. Otherwise it's like Linux binaries only being released for the latest instruction set of AMD (or Intel) chips, but not for any other chips. (This is a loose and probably poor example; don't overanalyze it to the point it loses its underlying meaning and reason for being said.)

  • Then they should hire programmers to fix it for them. Just because people gave you a mug of free beer doesn't mean you don't have to buy another one later or brew your own.
  • Weren't there patches against the 1.0 patchlevel 9 kernel to make it compile with gcc 2.7.2? Who continues to maintain this one? :-)
  • I noticed most of the people chiming in with 'if it ain't broke, don't fix it' are probably not concerned about 2.0 patches and releases, since they won't install them anyway.
  • ...is using RH 5.x with 2.0.36 on their DNS servers. Of course, we're also using BIND 4.9.7. Apparently they have (well, had) no ambition to upgrade. My network project requires a new DHCP setup and dynamic DNS, so now they have to upgrade. If it weren't for that, though, I wouldn't see them upgrading either system until a security problem cropped up and bit them on the ass. I keep all of my systems current to within a couple of RH releases, and my kernels are always within a couple of versions on a stable major release.
  • According to the Linux Counter [li.org], about 1.6% of the Linux users use the 2.0 kernel.
    That's more than the number of people using 2.5.
    (Don't like the numbers? Get counted! [li.org])
  • I still run some servers on 2.0. They have been up for years, only failing when hardware fries or after a power outage.

    Why should I mess with them? I have software on another machine that requires an old version of gcc (due to the changes in the String library) and I don't want to rewrite it. Everything works. Everything is stable.

    I also run old distros, even with the 2.2 kernel. I upgraded one machine to Slackware 4.0 when that was the New Thing, and it took me a while to get it stable. Now I don't want to mess with it; I just upgrade the kernel for security issues. It just runs Apache and WordPerfect; it is a PPro 200 with 128MB RAM and is as solid as can be. If I upgrade it, my old copy of WordPerfect won't work anymore, and I don't like the new one.

    Many friends who came from the Windoze world always feel the need to be upgrading. As long as the old software still works, why change it?
