Linux 2.0.37 Released

After many months of hacking, Alan Cox has released what is likely to be the last 2.0 kernel. He writes in his diary that we will only see 2.0.38 if there is some sort of security hole. For those who don't know the drill by now, you can download the kernel from any kernel.org mirror.
  • Yeah, you're running an older distro (e.g. Slack 3.6) and really don't want to bother upgrading all the necessary stuff for full 2.2 kernel compatibility...
  • Right, but Slack 4.0 is out now. :-) What all is involved in upgrading a Linux OS anyway? Just install over it, or what? And what if you're switching distributions? (Hypothetical questions, but I'd like to know.)
  • Well, I know of at least one, actually. 2.2 was designed with the notion that everyone has at least 16 megs of RAM (or at least everyone who will want to upgrade), so there are all sorts of optimizations and such which make 2.2 slower on machines with around 16 megs or less of RAM (certain cases being exceptions, of course). Not that this is bad; we shouldn't hold Linux back so that we can always run it on a two-meg 386 without a math coprocessor, we just have to be careful not to assume too high a hardware configuration. I'm still running 2.0.36 on a 486 with 16 megs of RAM (100MHz, FWIW) and it runs a bit faster than with 2.2, so it's staying with 2.0 unless something terrible is found which can't really be fixed.

    So, yes, there are technical reasons out there to stay with 2.0.

    Hope that helped,
    Chris Frost
  • Linux 2.0.x is stable. Linux 2.2.x, despite the fact that it's an even-numbered series, is not stable, and should really be considered a development kernel (note the numerous bugs - filesystem corruption and DoS attack vulnerability). If someone were planning to use Linux in a mission-critical situation, I would most certainly caution against using the 2.2.x series.
  • Some distros have upgrade scripts. AFAIK, Slackware does not, so you either have to reformat and reinstall, or upgrade each package/library separately (or write an upgrade script yourself).
  • For me, all that I had to upgrade was pppd, to 2.3.5.
  • If you want to use FreeS/WAN [xs4all.nl] (an IPSEC implementation on top of Linux for building secure VPN's), you'll
    have to grab 2.0.3x because the package hasn't been ported to Kernel 2.2.x yet.

    Paulão

  • I do some hardware testing on a 486DX33 with 8M of RAM, Slackware 3.6, 2.0.36. I wouldn't put a 2.2.x on that.
  • I've got a similar spec machine here, being used as a masquerading firewall. Compaq 486DX33 with 8MB and Debian potato.

    I've been running the 2.2.x series on it since I set the machine up, so I don't have any comparison to go with. What are your reasons for not running 2.2.x on this spec machine?

    Incidentally, I've got a spare 8MB SIMM sitting here, but it seems that the machine only takes 'Compaq' RAM or something, since I can't get it to accept the SIMM. :-/
  • Many people have forgotten (or weren't there for) the problems that 2.0 had while it was the latest stable kernel. There were at least 3 DoS attacks that were fixed, and probably file corruption too.

    You have to remember that it takes widespread testing by many people to find all the bugs in a software product. The kernel team can only test the software on their own computers and configurations, and need outside testing to detect the remaining bugs. They don't get this widespread testing until they declare a stable tree. We get a rush of bug fixes after that as widespread testing occurs.

    The corruption issue in 2.2.8 was due to the correction of a long standing bug that exposed another one.

    Don't just dismiss the 2.2 kernel without trying it. The best way to improve the kernel is to try it and file bug reports.

    Beau Kuiper
  • I can tell you one right away. I used to be in charge of administering a firewall. It had tons of individual ipfwadm firewall rules. You generally don't touch a machine like that unless there is a known problem or security hole. Moving to ipchains would have required a few days of rule writing, some testing, bugs found from me mistyping one character for weeks on end, etc. Thus I had no intention (and I doubt my successor did either) of upgrading the machine to 2.2. If I had to build a similar configuration today, I would certainly use ipchains, but ipfwadm did its job, and that was all that was important in that case.

    Second, Linux has moved towards optimizing for newer hardware (i.e., adding new features to make life faster and easier, but requiring more RAM). Thus on 386s, and some 486s, 2.0 may be better. Of course, nowadays FreeBSD is so much better on a 386 if you ask me, but I prefer Linux on my newer machines.
  • I'm running a 486/100 w/16mb as a masquerading and port-forwarding firewall, perfectly happy on 2.0.36, thank you. This machine has only a 1.2 gig or so HD, so I'm not inclined to download and compile a 2.2 kernel on it unless some real problem comes up that I can't solve with a 2.0 kernel. But for now, it just happily sits in my kitchen closet with my cable modem and hub and trades packets back and forth all day with no complaints.

  • RH6 has a fairly painless install/upgrade thingy that brings you up to a 2.2.5 kernel without any suffering at all (at least in my case).

  • Most ipchains packages come with an "ipfwadm" wrapper script which works very well. You could at least give the setup a try on a test machine, route a few test hosts through it, and see if it works. Then just dump the ipchains rules out into your boot scripts.
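    For the curious, here is a rough sketch of what the same masquerading policy looks like in both tools. The 192.168.1.0/24 LAN address and the default-deny policy are made-up examples, not from any poster's actual setup:

    ```shell
    # 2.0.x style: ipfwadm forwarding rules (example addresses, run as root)
    ipfwadm -F -p deny                              # default policy: deny forwarding
    ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0  # masquerade the internal LAN

    # 2.2.x style: the rough ipchains equivalent
    ipchains -P forward DENY                        # default forward policy: deny
    ipchains -A forward -s 192.168.1.0/24 -j MASQ   # masquerade the internal LAN
    ```

    A rule set much bigger than this is exactly where the transcription typos the poster above worries about creep in, which is why the wrapper script is handy for a first pass.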
  • Then perhaps it should be called a beta series of kernels, as opposed to 2.3.x, which is the alpha or pre-alpha version. When all the bugs are ironed out, then call it stable. I personally consider 2.0.37 to be the latest stable kernel out, and I wouldn't recommend 2.1.x, 2.2.x, or 2.3.x to anybody who wanted to do serious work and needed a reliable system.
  • 2.0.36 works perfectly fine, I see no reason to go beyond, at least not at this point. There's nothing in 2.2.x that would make it more fit for its purpose.
  • > If it was called a beta series of kernels, then who would use it.

    It would be honest. Right now lots of innocent newbies are told about Linux's stability. Then they go and buy the latest 2.2-based SuSE/Redhat/whatever. And it's indeed more stable than Windows on average, but I hadn't needed to turn the power off because the system was completely frozen for years (the only time was long ago, when I tried an early dosemu version) until I started using 2.2. It has happened 3 times so far.

    The problem is we can't really compare Linux to Windows; we have to compare it to other Unixes, and most of them are a lot more stable than our beloved (irony) Linux.
    --
    Michael Hasenstein
    http://www.csn.tu-chemnitz.de/~mha/ [tu-chemnitz.de]

  • I found that when I transferred a large file from one Linux box to another, with Realtek network cards (that I scored cheaply), it consistently stalled. However, changing the MTU of one of them to 576, while leaving the MTU unchanged on the other, fixed the problem. Quite strange, but it worked...
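    In case it helps anyone with similar cards, the workaround amounts to something like this. The eth0 name is an assumption - substitute whichever interface the Realtek card is on, and run as root:

    ```shell
    # Drop the MTU on the problem interface to 576 bytes
    ifconfig eth0 mtu 576
    # Add the same line to /etc/rc.d/rc.local (or your distro's
    # equivalent boot script) if you want it to stick across reboots
    ```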
  • S/WAN [xs4all.nl] is another reason too, it's designed for the 2.0 series kernel, and is unlikely to work with the 2.2 series for a while yet.

    In short though, the point with linux kernels is usually "do I *need* to upgrade?" rather than "why *shouldn't* I upgrade?"
    --
  • To be real here, it is only worth the effort of upgrading the kernel when it gives you something you need. I am going to upgrade my workstation because I need the improved sound card support, but my firewall running 2.0.37 I will leave as it is, because it's working and I have no problems with it. Also, the 2.0.x kernels work better with machines with less than 16Mb, I am told, and my firewall is a 486DX2/66 with 16Mb RAM.
    From what I read, 2.2.x is perfectly OK for most users. There are some issues with it, but that has always been true (that's why the 2.0 series went up to release 36!!). Generally, though, don't waste time upgrading kernels unless you need something it has got in it...

  • So you consider NT stable? I have two diskettes with a small defect: track 0 is broken on them. Whenever I insert one of these into a machine running NT, you see a nice blue screen containing lots of hexadecimal information. This has to be one of the worst bugs ever. Why not just report that my disk is defective instead of crashing the entire OS without any warning?

  • Um... to upgrade Slackware 3.6 for 2.2.x, I compiled maybe two packages (and I think they may not have even been necessary). Everything else is from the original install.

  • What's the diff between the pre10 release and this full release? I have pre10 going here, but uname reports it as just "2.0.37". Should I DL this, or is there a patch to bring it to current?
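    If the tree was built from the 2.0.36 sources plus the prerelease patch, one way is to reverse the prepatch and apply the final one. This is only a sketch - the exact patch file names here are assumptions, so check the file listing on your kernel.org mirror first:

    ```shell
    cd /usr/src/linux
    # Back out the prerelease patch (which applied against 2.0.36)...
    patch -p1 -R < ../pre-patch-2.0.37-10
    # ...then apply the final 2.0.36 -> 2.0.37 patch
    patch -p1 < ../patch-2.0.37
    # Rebuild as usual
    make menuconfig && make dep && make zImage
    ```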
  • We all use NT at work - some of us workstation, some of us server. I don't know of a single person who doesn't suffer the occasional, spontaneous crash.

    (My most memorable one was the time I minimised a Netscape window and it Blue Screened on me...)

    Worse still, though, I've had *two* mission-critical servers hang on shutdown *on the same day*. They both just sat there for roughly half an hour each, "writing unsaved data to the drive", until I hit the power switch...

    Needless to say, I now think long and hard before rebooting them.

    I'm not saying that I've never taken down a linux box (giving X the three finger salute while gdm is running on RH6 is a good way to do that - try it a few times and see!), but it seems to happen a lot less than with NT.

    Tim
  • Ouch, the misfortune has spread even here.
    I guess the M$ mslogo.gif syndrome is spreading.
  • Posted by King_Arthur:

    I think it all depends on the hardware you're using and the stuff you need to run. Not only the kernel can be buggy.
  • Anyone else out there having trouble getting 2.0.37 to boot? Especially on a machine with a Cyrix processor and/or VIA motherboard? The machine I tried it out on gets as far as decompressing the kernel... when it finishes that, it spontaneously reboots. It's getting as far as printing the first kernel message, but it doesn't stay on screen long enough to be seen.

    The machine in question is known to be defective, so I'm not terribly concerned about it, but it's consistently rebooted 2.0.37 at the same spot every time, where 2.0.36 mostly works and only spontaneously reboots occasionally and erratically.
  • Why bother? The original poster obviously has everything set up right - why fix what isn't broken? I am just as guilty as the next guy for wanting the newest and flashiest things, but if you have a machine that works, just leave it. There is no reason why this machine shouldn't run 2.0.X indefinitely.
  • We installed 2.2.5 on our test system here. We started having random system hangs. There are also compatibility issues with the 2.2.x series and the 1.1.7 JDK which we use to run our transaction server. So we're still using the 2.0.x series on the production systems and most of our workstations. Not that we won't continue to beat on new 2.2.x versions as they come out, but they aren't stable enough for us to use on critical systems yet.
  • Time for the bullshit filter.

    There are very few people who know enough about the internals of NT and linux simultaneously to make sweeping, or detailed statements about their relative stability. None of them post on slashdot. Everything above is pure conjecture and/or horseshit.
  • I seem to remember reading that 2.2.X has better handling of TCP/IP, and better memory management. But don't quote me on that.

    On a side note, I have been having all sorts of trouble getting mpg123 to run properly after I upgraded to RH 6.0. It always worked great under 5.2. Now it will only play for a few minutes before cutting out. I don't know if this is a KDE thing or a 2.2.X thing... or what. I compiled both under 6.0, downloaded the latest versions, etc... no luck. Anybody have any suggestions or similar experiences?
  • mpg123 comes with RedHat 6.0. Install the RPM, it worked fine for me.

    I don't love RedHat packages, but if you use them on RedHat, it makes life easier. If you don't use them, use Slackware, or just compile everything yourself.

    Foolish consistency is the hobgoblin of well-designed computers...
  • AC,
    You probably haven't tried Linux, which means that you don't know about Linux's stability. 2.2 is rock-solid stable compared to NT, it's just not quite as stable as the late 2.0.x series yet, and there are people who are real sticklers for stability.

    2.2.x runs GREAT on my machine, with the exception of stock RedHat kernels not liking my APM BIOS - They kernel panic on system halt. (Not serious, since the machine is down anyway, but weird.) RedHat's tech support says it's buggy BIOS - I'm inclined to believe them, because APM is generally screwy on my machine, Linux or Windows.

    I used to run NT4 Workstation, it crashed all the time. Now I use Linux for reliability and Win95 for games. (No Win98, because it sucks and doesn't even boot on my machine. That's right, MS boy, your precious Windows 98 doesn't even BOOT on some machines that run Linux like a charm.) Given the release of CivCTP and Nvidia GL drivers, Win9x's days on my machine are very numbered. Now Cornell just has to convert Just The Facts from VB to Java. (They intend to.)
  • Hmmm... I've been using NT 4.0 at work for over a year and I've never had it "lock up", but I've had to reboot it many times because sometimes when things go wrong it just comes to a screeching halt. Sure, it's not locked up and I can save my data, but it moves along at a snail's pace like it's choking in a major way and cannot recover.

    Our NT guru tried to help me once and ended up walking away shaking his head after 20 minutes of poking at it to no avail.

    Also we've had several BSOD's at my company in the past week, all on NT 4.0. BSOD = game..set match..you lose.

    Yeah...NT is better than any of the 9X platforms, but it's a far cry from Un*x or Linux.
  • My sole experience with trying to mount a completely bad floppy under Linux was that the drive busy indicator light went on momentarily, followed by a kernel panic. However, the system didn't hang for any noticeable length of time. And, of course, the panic didn't affect anything else on the rest of the system.

    (Incidentally, this is the only time that I have ever seen the Linux kernel panic in five years of using Linux; this was a 1.2-based kernel IIRC. Maybe I'm just lucky? :-) )
  • I've got an NT workstation at work that sits alongside an NCD X terminal. The damned thing locks up approximately twice a day when I'm using Netscape Communicator 4.5. (It doesn't slow down, BSOD, or require me to kill off Netscape with the Task Manager .. it simply ceases to function and requires a cold boot to even get control back!)

    The tech guys have been most helpful: "We think Communicator is buggy, so please try using IE instead." Now, if I had the time to adequately explain to them that a user application (buggy or not) should never be able to completely take down a "mission-critical" operating system, I suspect we wouldn't be using NT. However, since there are better things to do, I'll put up with a couple of reboots each day; a lot of times I'll just use the IRIX version of Communicator on my X terminal. So much for the environment that Dell's latest commercials go so far as to call "unstoppable!"

    Yes, I know that it's all my fault; that NT is likely "poorly configured" and that complete lockups are the price that I pay for my ignorance. It really doesn't matter, though. This box is going to be running Linux within a month or so. :-)
  • Okay. I'm running a 6x86-90+ on an FYI VIA board that I know to be broken - it works well enough that I haven't thrown it away, but not so well that I haven't been tempted to on occasion. The problem is probably just the motherboard, but it's been consistent enough about it that I thought it worth at least asking... consistency from that motherboard is rare.
  • I upgraded a dozen or more, a couple of which I had to do in stages, but then, I was coming from Slack 3.0... Still, downloading all the necessary tarballs over a 33.6 was the worst part.
  • Yes everyone who disagrees with you must be a paid employee of MS. Makes the world a much less threatening place to think you are always right, doesn't it? Or are you just not bright enough to think that there are people out there with different opinions?

    If NT is so unstable (someone else in this thread said they could crash it just by manipulating the UI), tell me a guaranteed series of steps to crash it...

  • When I'm at home from school, I've been running a 2.2 kernel on a 486/50 with 20 megs of RAM, doing IP masquerading etc. via a cable modem. I really haven't noticed much of a speed difference from when the box had a 2.0.35 kernel, but maybe that's because I have more than 16MB of RAM (though only slightly more).

    Another reason that I could imagine someone wanting to use 2.0 kernels is that they are tried and tested, and while the 2.2 series is earmarked as a stable series, it is still very new. For people who are using their box as a server, it might be preferable to have something tried and true, that has been in use for a significant period of time.

    In the lab I work in, there is a mixture of DEC Alphas, RH 5.1, and RH 6. These boxes are all managed by a central admin group; only one person in the lab has root on any of these. In this situation it is just as easy not to upgrade the older RH 5.1 box(es) (not sure how many we have), since there really is no critical need to upgrade, and the one that I use at least is a critical file server, so downtime on it would have something of a negative impact on the fragile web of NFS mounts in the lab.
