Linux Software

Kernel 2.2.12

DrunkenBastard writes "I just noticed that a 2.2.12 kernel was starting to appear on the kernel.org mirror I usually frequent." When I saw this submission, the first thing I thought was "And me with my 38 day uptime". That confirms it. I gotta go out ;)
This discussion has been archived. No new comments can be posted.

Kernel 2.2.12

  • (damnit, I forgot that this editing window corrupts text if defaults aren't overridden. How about changing the defaults so it's at least as polite to whitespace as LaTeX?)


    If you can keep your computer running for long periods of time without rebooting, it proves that you can keep your computer running for long periods of time without rebooting. Any assumptions beyond that are, well, assumptions. I can leave a machine online for years running MS-DOS, if I choose to do so.

    Many operating systems have impressive 'uptime' statistics when used in applications (i.e. 24/7 server tasks with many users) where it matters. They'd totally fall down in certain other applications.

    An example of this is my laptop. I ran Linux, and then NetBSD, on it for quite some time. It's just a little 486 laptop with 28 megs of RAM. But having to power it up and shut it down every time it was necessary to disconnect it for longer than the battery would sustain it grew to be a pain. And using power-down/resume was problematic, as the OS's clock 'went to sleep' for the whole time it was off; the CMOS real-time clock was only read to initialize system time in the bootup sequence.

    I figured out how to create a cron job that would call a system function to bring the system time into sync with the hardware clock at regular intervals. This resulted in a little 'flash' sequence anywhere from one to several minutes after a power-up, when the machine discovered it was far later than it had presumed. This sometimes kicks in other cron jobs, invokes the screen blanker rather suddenly, etc.

    My conclusion after a while was that I was running a system with basic infrastructure designed to be left running all the time. It didn't handle being used intermittently well at all. I don't know (I could have looked, certainly) what cron jobs might have been scheduled to occur at midnight or one AM that it never ran (most distributions have some cron tasks set up for off-peak hours that many users never even query to discover).

    The lesson isn't that Unix type systems are bad. It's that they are suited for some purposes and ill suited for others. People can contort machines like the Palm Pilot to get a Linux system to run on them, but it's basically a dog that can play cards. The dog isn't good at playing cards, it's just impressive that it can play at all.
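    The clock-resync trick described above can be sketched as a crontab entry. This is illustrative, not the poster's actual setup: the interval is arbitrary, and hwclock --hctosys is the usual Linux way to copy the battery-backed CMOS clock into system time.

```
# Hypothetical crontab entry: every 10 minutes, pull system time back
# into sync with the battery-backed hardware clock after a resume.
# m    h  dom mon dow  command
*/10   *  *   *   *    /sbin/hwclock --hctosys
```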
  • I have had problems with previous kernels, so I am upgrading in hopes that they are fixed. There are also some tcp/ip fixes in 2.2.12 that are slightly different from the 2.2.11 patch, and that hopefully will be better. 2.2.11 was up for 1.5 days when the machine froze; then I got the patch and was up for 2 days, then started testing various kernels for other things. I have now been up 10:48 hours:minutes
  • If you have 2.2.11 and have applied the tcp/ip patch to it, then you may not want to upgrade to 2.2.12 by applying the patch on top of that tcp-patched 2.2.11. I did, and it did not apply correctly: 2 of the hunks failed. The tcp/ip fixes to ipv4 and ipv6 in 2.2.12 are slightly different from those of the 2.2.11-tcp-patch. My suggestion is to either use a fresh 2.2.12, or a 2.2.11 that was not patched. Yes, you can just remove those files from the patch, but do you really want to do that?

    2.2.12

    uptime 10:49 hours:minutes

  • Sincerely, I'm an upgrade freak. The moment our mirror gets 2.2.12 I'll upgrade my machines. But not because I need to; it's just in my nature, I love to experiment. Frankly, my job is to know what goes on at the edge so I'll be ready for the future.

    2.2.x kernels have been mostly stable for Internet work since 2.2.9. Frankly, even 2.2.5 is stable in 80% of tasks. 2.2.10 is pretty good for regular and intensive desktop work. At least in a "multimedia horse", "Windows-like" working environment, I have noted good robustness from it: 2 crashes in almost 2 months.

    The only point where I would see the latest kernels as useful would be in a server environment. Even that depends on the tasks, however. A 2.2.3 kernel worked with a few occasional bugs for nearly 4 months in a small web server. The main problem was some memory leaks from time to time.

    Note that Linus has announced a code freeze for the 2.3 development kernel in two weeks. So in late fall we will probably start seeing the new 2.4 coming up. No matter the coolness, one should be aware that each kernel jump is usually traumatic. Sometimes app code needs to be rewritten. People need to hunt for new features between new and old code. Today there is a strange upgrade fever, and recently many newbies seem to have been caught in it. They don't understand why, but they go with the fashion. When 2.3 bug-fixing starts up, many may make the mistake of following the vets and face some serious consequences.



  • by Anonymous Coward
    A very long uptime (~100 days) on a production system where there have been *any* system changes at all (sw install, fw mods, new routes, whatever) is the sign of a poor administrator.

    You should test all of your procedures regularly. This includes reading backup tapes, restoring from backup, and yes, gaaaaasp, rebooting the system. You really don't want to find out that you've got the order of things wrong in /etc/rc.d/rc3.d at 3 am, or at peak load on Friday. Murphy happens. Things bit-rot. Un-accessed sectors go bad. Firewall changes do things that surprise you (happened to me).

    Ok, I know this is blasphemy, but you really should reboot your systems at some regular interval unless there have been *no* system changes.

    I guess on a *really* static system you could get away with bringing it down to level 1, fscking, and bringing it back up, but why not just be safe and verify that everything comes up even after all the electrons get drained out.

    Fire away...

    -- cary


  • I'm an administrator running a Samba network (a Linux server running as PDC) serving over 100 NT clients. We tell them not to reboot. However, after about a week or a week and a half of uptime, most of them have to be rebooted anyway because of strange errors and BSODs. I'd say at least one or two of them BSOD each day.

    These are Dell machines with all the "right" hardware in them.
  • Well, dialup services don't take all that much cpu out of the system.

    We've got a squid proxy server that's been up for 80+ days, that's rebooted because of power problems. That server gets a lot more traffic, but I liked the higher number on this one, "for effect" or something like that.
  • Here it is. Our squid proxy server:

    Allie:/home/edgy# uptime
    9:56am up 91 days, 23 min, 8 users, load average: 2.09, 1.78, 1.72

  • ...Back in the 80's I ran a slew of DOS machines, and after a while of uptime, usually a week or so, they locked up. All of them, no matter what version or what processor. I used to tell the folks around me that they just wouldn't run that long without a reboot...
    Actually, this one is probably a 32-bit calendar overflow, the MS version of the 2038-bug -- except that:
    • since the system calendar counts milliseconds instead of seconds like the UNIX system calendar does, you get a crash 1000 times faster -- 14.7 days, iirc. And this one affects all MS systems up through NT4. (One of the NT service packs fixed this, I believe, and I'm not sure about Win98.)

    • MS systems are so flaky anyway that no one was able to keep them up long enough often enough to discover/diagnose this 32-bit overflow as a cause of system crashes until some time in 1996.
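    The rollover arithmetic is easy to check. A 32-bit millisecond counter wraps after about 49.7 days, not 14.7 as remembered above; the "49.7 days" figure given in another comment matches the math:

```shell
# Rollover period of a 32-bit millisecond tick counter
# (the Win9x tick-count overflow discussed above), in days:
awk 'BEGIN { printf "%.1f\n", 2^32 / 1000 / 60 / 60 / 24 }'
# prints 49.7
```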

  • Alright, most of your arguments do make some sense. But the last one, "video drivers?" When I used to have to use Win 95 at work, applications that routinely crashed the system included Photoshop, Netscape Navigator, Homesite, and Eudora. Now, these are basically all I used, but I'll be charitable and assume that just because every app I used managed to crash the system eventually doesn't necessarily mean that any app could do it. I mean, Notepad never crashed the system, so that's something, right? But my question is, what are Homesite or Navigator doing that makes it possible for them to bring down my OS?

    Oh yeah, another app that crashed the system all the time was something called "explorer.exe..."

  • I don't know who this is, but I have been checking the kernel list archives, and there's no mention of anything like this.

    Linux isn't about vaporware, like some software companies.
  • If I had to choose, I would use Windows 2000 (which simply kicks ass - Beta 3 uptime: 123 days , no crashes or restarts since installation, running IIS 5, serving a live web site)

    Let's see. According to MS, W2K Beta 3 was released to manufacturing on April 29, 1999 (see this [microsoft.com]). Thirty days hath September, April, June, and November. So, provided you got the FIRST disk out of manufacturing, and they got that CD made the FIRST day they had the gold master, you got it on the 29th of April. And let's say you installed that day. So your first possible up day would be April 30th, the last day of April. Let's do the math..

    April, 1 day. May, 31 days, total of 32. June, 30 days, total of 62. July, 31 days, total of 93. August: currently it's the 28th, so 27 full days gone, 93+27=120.

    You lie like the microsoft dog you are. Even if you got that disk on the day it was released, you CANNOT run a machine for 123 days in 120 days' time. And considering manufacturing, shipping and install time, I'll bet you're barely over 100 -- assuming, of course, your definition of uptime includes reboots 'cause the log is full.



    Back under the Bridge, Troll!
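    The same day-count can be redone with date(1) (GNU date syntax; dates from the post). The raw calendar span from the April 29 RTM date to August 28 is 121 days, so a claimed 123-day uptime still doesn't fit:

```shell
# Days between the Beta 3 release-to-manufacturing date and the date
# of this thread, via epoch seconds (TZ=UTC keeps the diff exact):
a=$(TZ=UTC date -d 1999-04-29 +%s)
b=$(TZ=UTC date -d 1999-08-28 +%s)
echo $(( (b - a) / 86400 ))
# prints 121
```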
  • I think it's 49.7 days.
  • I'll ignore that, other than to say I think you are too much in touch with your j key. Given the rest of your post, I can only wonder what you're doing with it.
  • Don't work so hard, just boot the new kernel under VMware and try it out. Your uptime will be fine and you will be able to enjoy the new code without a reboot.

    :)

    Hedley
  • Believe it or not, 2.2.12 seems to be one of the most stable kernels out of the 2.2.x series I've seen. Just about every single other one has had various problems, from TCP/IP memory leaks, to data corruption, to other problems.

    Even the high load server we're running here that had some network flakiness was fixed when we upgraded to 2.2.12 (in hopes that this would fix these problems). It did. We're lucky. :-)

  • Linux rulez, d00d! My system uptime is now almost 1000 days!

    *whispering* Ummm, installing a new kernel every two weeks and rebooting nightly into my Windows partition to play games, read email, and surf the web don't count against my uptime, right? It's not like it crashed or sumpthin', right? Right?

    Cheers,
    ZicoKnows@hotmail.com

  • by edgy ( 5399 ) on Saturday August 28, 1999 @05:16AM (#1720898)
    As far as NT running and running and Linux crashing, you are not providing any evidence of your situation or any real information. You can make up anything to suit your argument.

    On to the second point. Let me put this so you can understand it, AN APPLICATION SHOULD NOT BE ABLE TO CRASH THE OPERATING SYSTEM. If Windows were any good, there would be no way to crash the operating system from an application.

    Funny, you have a Beta 3 machine? What applications can you run on it, anyway? I thought they threw code portability away anyway. It's probably just sitting there looking all pretty.

    There are lots of security patches for Red Hat, but none that allow someone from a web page to modify your startup files using ActiveX.

    Additionally, with Red Hat, you get the fixes much more often, and earlier. No need to wait 6 months for the next service pack. Also, these patches only affect certain services. If you run a tight ship, and only enable important services on the system, you don't even have to apply most of them.

    And the last one is the kicker. Now you're blaming it on development environments. Well, we're getting them for Linux. So we'll be able to invalidate that argument pretty quickly.

    The fact is, there are a lot of buggy Linux applications that I've seen, but none of them have brought down the system. You obviously don't know what you're talking about.
  • Too bad that windrone troll will probably never see your excellent reply.
    Open source forever.

    Go edgy! Go TuX! Go Linux!
  • People should not be trumpeting uptime as a measure of stability; what matters is how stable the machine actually is. If I run the exact same thing over and over again, it is not too demanding on a machine's OS. But doing wildly different tasks all at the same time is. For example, instead of trumpeting that your email server has been up for 490490 days, see if that computer can handle sending 100 email messages to 1000 people on the network all at the same time. Or say that your workstation handled a C++ compile, a 3D render, and 10-20 instances of Netscape running Java apps all at the same time. I know Win95 will without a doubt lock up if I go anywhere near the compile button while rendering in Truespace.
  • Ever hear of TESTING a kernel before putting it into full production use??
  • We would do that, except if they reboot every night, sometimes certain services don't start up properly. As I recall, the netlogon service wouldn't always start up reliably, so we just gave up on it, and have the people reboot as rarely as possible.

  • At least, for me. There aren't any international patches for it, yet, and I doubt everything in the AC series got included. It makes more sense for me to stick with 2.2.11ac3 + 2.2.11-int-2, for now.

    That won't be true of everyone, though. There might very well be stuff in the patch-2.2.12 file that Alan Cox missed out of his patch series, or fixes which work slightly better than the ones AC collected. For such people, I can see some value in moving to 2.2.12, especially if they don't need any of the supplementary kernel patches that I gratuitously throw in for good measure.

    But I would caution, as others have, against upgrade fever. I used to suffer from this, and still have over 2 gigabytes of software upgrades I've never gotten round to installing. Many of those upgrades are probably now out of date themselves. It's an exercise in futility, and its only reward for me has been frustration. I've gotten into the habit of only upgrading what I need, plus supplementary packages, libraries, interpreters and compilers. It's made my life a lot easier.

  • You cannot connect linux uptime with slashdot uptime. There are many things other than the base operating system that could be causing problems.

    Heck, we've got Rob's perl scripts, mysql, apache, and a multitude of other programs I'm sure.

    What the NT bigots don't mention is that it's impossible to get Windows NT to do what slashdot does on the same scale out of the box, at least.

    Windows makes the easy things easy. Unix makes the hard things possible.
  • The vaunted Linux reliability is probably due, to some extent, to the army of intelligent testers who bang away at the latest kernels the instant they become available.

    If it's beneficial we usually call it something like 'a good habit' or 'insisting on excellence', rather than the negative 'addiction'.

  • The reason for the near hypocrisy when NT-run sites are down vs. UNIX-run sites that are down is that OS users are bigots (most of them, anyway) who maintain that their OS of choice is the best in the world, half the time without ever really using anything else or having any real-world experience to know what else is out there. I must agree that there are complete idiots out there who fancy themselves administrators of computing systems just because they have that level of privilege at the login screen of WinNT.

    I administer both systems, and while my slant is towards UNIX, we have NT servers in 6 buildings that do not crash and have, in some cases, over a year of uptime. Now granted, all they are doing is file serving and DHCP, so it's not exactly a sweeping stability victory for NT to say it doesn't crash on these boxes. But the secret is hardware, at least in my experience. If you buy decent hardware that is certified to work with NT, it greatly improves the stability of the OS.

    Now the email server that runs on NT (Netscape Messaging Server) -- that's a different story. It was done before my time, but it's definitely entropy in action.

    My motto has always been to use what does the job best for you. For some, that may mean using NT. They will sacrifice some stability and scalability for the (sic) "ease" of use and stupid wizards (my own bias slips through..). For others, a UNIX system does the job best. Though graphics may never see the light of day on my DNS server, I wouldn't trade BIND and the command line on it for anything NT has to offer.

    KmArT
  • LOL
    Nah, I can beat that. My NT system has a 3000 day uptime.
    It has BSODed a few times, frozen another few... and hell, the UPS don't work no more, but I just don't count them, coz they simply show the maturity of the OS.

    My Windows 98 box also has a 2000-day uptime. It's running 98 now; it was running MS-DOS 6.0 2000 days ago... in fact it has NONE of the original parts in it, but all reboots were 'scheduled', .05 seconds in advance. You see, "OOOOHHHH FUCK!" actually means "We have scheduled a small downtime about ummm NOW!", therefore the downtime doesn't count.

    I hope I'm not cheating or anything... Installing 98 is just MS-DOS with a new kernel, isn't it :)


    PS.
    No, this wasn't serious, this was a JOKE. I don't *NEED* to be flamed. thanks.
  • Oh well. I'm sure as heck always going to download the whole tree, even if it does take an hour to get it using my 33.6 modem.

  • well, we all know linux is a good operating system... do we really need thousands of people proving it with their uptimes? i'd agree... for a workstation, uptime really doesn't matter.
  • Depends what sort of services you're running: if you're just doing file & print for a 20-person office then it's fine having scheduled downtime. If you require 5 9's availability then it ain't.
    --
    Cheers

    Jon
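    To put a number on "5 9's": 99.999% availability allows only about five minutes of downtime per year, scheduled or otherwise. A quick sketch of the arithmetic:

```shell
# Seconds of downtime allowed per 365-day year at 99.999% availability:
awk 'BEGIN { printf "%.0f\n", 365 * 24 * 60 * 60 * (1 - 0.99999) }'
# prints 315  (i.e. roughly 5.3 minutes per year)
```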
  • Yup, and it was already posted on Slashdot:
    http://slashdot.org/article.pl?sid=99/08/25/2314203&mode=thread [slashdot.org]
  • An act of nature in the form of a multi-hour power outage cured me of that problem. The UPS can only hold out so long before it runs outta juice :-) I had to check one last time before shutting down by candle light. Just over 112 days, not even notable compared to some of the year plus uptimes out there but it freed me to do upgrades and experiment with new, buggy video drivers.
  • Better yet, use downtime [freshmeat.net] and figure it in percentages rather than time since last reboot. It'll keep you honest and not desperately grasping for a 100-, 1000-, or 10000-day ;-> uptime.
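    The percentage idea can be sketched with a one-liner; the uptime and downtime figures here are made up purely for illustration:

```shell
# Availability as a percentage rather than a raw uptime figure
# (hypothetical numbers: 91 days up against 2 hours of total downtime):
awk 'BEGIN { up = 91 * 24; down = 2; printf "%.3f%%\n", 100 * up / (up + down) }'
# prints 99.909%
```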
  • I understand why you don't post links to the kernel itself -- you don't want to /. any of the kernel.org mirrors -- but is there a safe place to link to the release notes or changelog? You could just add them to the announcement; it would still be shorter than many of Katz's stories.
    ;-)
    Those who like Katz's postings, please don't flame me. I like 80% of them myself.


    ---
    Stephen L. Palmer
    http://midearth.org
    Just another BOFH.

  • I don't understand why you couldn't do that. I mean, I have my theories and I don't see why they wouldn't work, but I'm no kernel hacker.
    I don't see why, if the kernel controls everything, it couldn't just stop scheduling processes while it updates itself.. A kernel-helper could move in and take over to assist in rewriting the heart of the system.

    Or perhaps a totally modularized and redundant kernel. While the memory manager is updating, it could evilly use swap, or perhaps simply not allow any processes that would want to use memory to be scheduled..

    Also, hooks could be put in to show where you could overwrite, and also tell how much you can overwrite before going out of that part's space, and you could supply a goto kludge until you can lock and rewrite the kernel?
    geeze, i don't know.. Why can't they do it? This looks like something that should be worked on for 2.6 or 3.0 :)

  • "And me with my 38 day uptime"
    So fix it. Make the kernel able to load a new version and continue all processes...
  • "When I saw this submission, the first thing I thought was "And me with my 38 day uptime". That confirms it. I gotta go out ;)" Sure, 2.2.12 might be out, but do you *really* need to upgrade to it? This revision might not even fix/update anything that you use. Recompiling your kernel just because a new version came out is like warez'n'hackz kiddies making sure they always have the latest 0-day. I mean, I see alot of people who just recompile simply so they get a new spiffy version number. Well, if you ask me, it's a sad addiction to recompile if the changelog doesn't effect you. It's sad to see /. promoting such behaviour with comments as quoted above.
  • In an "Emergency Situation" it's good to have the entire kernel source. Also how about "Well, I forgot to download TCP/IP, damn, i'm screwed"

  • "So, given that(uncompressed) the source tree takes up 73Megs"
    I think you must have done your 'du' after a build (mine came out to be about the same). Out of the box, the kernel source is closer to 60 megs.
    AdamT
  • As an AC said, the kernel is already split. It's called patches. I haven't downloaded the full source since 2.1.x something.
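    For anyone who hasn't used the patch workflow, here is the mechanism in miniature, with throwaway files in /tmp (the filenames are made up; the real deltas are the patch-2.2.x files on the kernel mirrors):

```shell
# How incremental kernel patches work, in miniature: build a unified
# diff between two versions of a file, then apply it with patch(1).
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'VERSION = 2.2.11\n' > linux-2.2.11.mk
printf 'VERSION = 2.2.12\n' > linux-2.2.12.mk
diff -u linux-2.2.11.mk linux-2.2.12.mk > patch-2.2.12 || true
cp linux-2.2.11.mk Makefile       # start from the "old" tree
patch Makefile patch-2.2.12       # apply the delta
cat Makefile                      # now reads: VERSION = 2.2.12
```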
  • Well, there is a significant difference between scheduled downtime and downtime.

    With scheduled downtime, users are notified of the impending change, and can prepare for it, and stop using the system for the few minutes that it's down.

    Unscheduled downtime, on the other hand, as with crashes of the server operating system, makes people lose work, and can last for a long time, since the administrators may be nowhere near the system.

    So, I think even from a user standpoint, scheduled downtime is much more acceptable.
  • Heh. Heh.

    Uptime on a dialup SERVER being used quite extensively here:

    $ uptime
    6:12am up 145 days, 9:48, 3 users, load average: 0.00, 0.00, 0.00

    {grin}
  • If all kernel structures were linked up into a well-defined tree and all hardware state changes were recorded there then it wouldn't be all that hard to change the kernel while running live, at least in concept. The trouble is, this wouldn't express the way certain changes are interrelated at a strong or even absolute atomic level -- you'd need checkpointing to guarantee that, plus possibly transaction logging if you want to slip the new kernel in between checkpoints.

    Hmmm, an interesting problem though.
  • You are so full of it.

    If Windows NT requires an administrator that REALLY knows what they're doing in order to get a stable system, then why bother using NT, when you can get a stable Linux box without all of that? With Linux, you at least have a lot more flexibility.

    I don't get it anymore. First, the NT apologists say that NT is easy enough for anyone to administer. Then they excuse NT's instability by saying you need an expert administrator to have a stable system. So, which is it?

    Linux is probably the easiest way to get a stable system.
  • Well, in theory, someone could be dumber than that.

    I mean, I'm allowed to compile a kernel sans ext2.. Anyone that silly deserves a lesson. There would be defaults on.. Just like every kernel tree has defaults. They're just modifiable. I like this idea really.. Even though it only takes me 1 minute to download a whole kernel tree.. Phat Pipe =)
  • I would consider that cheating. Uptime is only uptime.
  • Besides.. Everyone should have a kernel on a disk.. You don't need the whole tree for a backup, just an image...
  • by Zico ( 14255 )

    I wasn't exactly baiting (hell, for one I apologized for my phrasing beforehand, and if I really wanted to, I woulda said something like "dump linux and apache for NT and IIS"), but there have been some big problems with Slashdot's website being down the past few days. I just find the average "luser" fascination with things like "uptime", "beowulf", and "squid" annoying. My mindset is more along the lines of, "Hey, could you iron out the problems and get it working and then play with your buzzwords?"

    Not that I really blame whoever moderated me down -- it wasn't exactly the nicest thing I've ever said.

    Cheers,
    ZicoKnows@hotmail.com

  • Seems like a security threat to me.
  • Hey there - the other architectures take up very little space (at least uncompressed). All told, the non-i386 architectures take up 8.172 megs (v. 2.2.12). The i386 arch tree takes up 3.9 (which is pretty big in comparison to the 8.172 for the other 7 architectures combined). So, given that (uncompressed) the source tree takes up 73 megs, I think it's reasonable to have just over 10% of that for different architectures. It's just so much easier that way :)
  • Not more so than having modules loadable after bootup...
  • The changelog takes a few days to be milked out of http://edge.kernelnotes.org [kernelnotes.org] The kernel programmers are too lazy to do one themselves apparently. Sometimes you get release notes from Linus though, usually about it being blessed with something penguinish.

    Until then you can just use the 2.2.x patch browser at http://www.kernelnotes.org/v22patch/ [kernelnotes.org] (click on the breakdown)

    Whatever is shown got tweaked a little (or a lot). You can figure out quickly where the changes were made. Finding out what the changes were requires more effort, however.

    ~Kevin
    :)
  • Actually, I'd like to see a really cool way to cut down on kernel downloads.. Lord knows this is cpu heavy, but think about it anyway:

    User downloads a small chunk of code and runs it. Basically, just the 'menuconfig'. Selects what's good, what's bad. The program goes out and gets the code it needs from a repository (maybe cvs..?), compresses it, and downloads.

    User ends up not only not needing to download other arch's, but no unneeded sections. User doesn't use IDE? Doesn't download it. Ditto for SCSI, Strange CD-ROM's, V4L, etc.

    That initial bit of code wouldn't even need updating, even when the kernel version changes.. Or that selection program could be completely online, a big perl/php/whatever..

    Too bad I'm more of an idea rat or I'd do something like this..
  • Uhm, isn't it good practice to always wait a while before upgrading kernels?

    Features and optimizations are not just thrown in there. I think it has something to do with the extensive testing you get when you release a kernel, versus when you just get to test it.

    You should always wait before upgrading a server box.
  • Actually, there are now release notes written by Alan Cox for every stable kernel release. You can go see the Release notes [linux.org.uk] over where Alan's written them. I rather like this, saves waiting for Myrdraal.
  • by gas ( 2801 )
    There is a nice thing called anacron that is not assuming the system to be on 24h.
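    anacron's config expresses jobs as a minimum period in days rather than a fixed wall-clock time, so a machine that was off overnight still runs its daily jobs shortly after boot. A hypothetical /etc/anacrontab fragment (job names, delays and paths are illustrative):

```
# period(days)  delay(min)  job-id       command
1               5           cron.daily   run-parts /etc/cron.daily
7               10          cron.weekly  run-parts /etc/cron.weekly
```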
  • If we didn't know the guy using winders as a server that crashes every 3 or 4 days, or have our mom and pop using windows asking us to fix AOL because it crashed windows, again!, we wouldn't be stuck up on uptimes. Blame it on windows that you must brag about uptimes!!! And when you see a winders user, yell at them and tell them:

    "It is because of people like you that use a crappy os that I feel the need to take pride in how long my computer is up without a reboot!! DAMN YOU!!! DAMN YOU TO HELL!!"

    And start flinging burnt copies of linux at them. Aim for the eyes or head, it hurts more. Then sit down, chill, run "top", watch your uptime go up, and be happy that you don't have to reboot to change the ip on your nic!!!
  • and I've never seen that happen with any other OS. You can't blame it on power glitches either, because everything is UPS'd, and anyway, the Linux and Solaris equipment is powered from the same sources and it stays up just fine.

    And BSODs and running out of virtual memory happen almost as frequently. I can't imagine why your NT machines are stable, but it certainly isn't representative of our (very large and multiple) sites -- I can very easily relate to the slagging off that NT gets for lack of stability, through personal experience.
  • Sounds like too much work to me. I'm speculating that 90% of the users would be just as happy if someone wrote a utility to set the uptime and produced a website carefully detailing why "setting your uptime after installing a new kernel should not be considered cheating".
  • You were not paying attention last time a kernel notice was posted! This is the same phobic argument somebody made last time. Disregarding any newly supported hardware, the changes made to the kernel in minor releases are basically bugfixes. I am sure that bugfixes can create new bugs, but I am damn sure that many more bugs are fixed than are caused with new kernel releases.

    The problem here is you look at kernel upgrades as a potential creation of problems in a stable kernel. I look at kernel upgrades as a fix to problems in a relatively stable kernel. If there are any kernel exploits found, I want them patched. Sure, there are idiots that download new kernels for the spiffy new number, but I am not one of them.

    My kernel size has increased by 10K without the addition of any new hardware. My logical guess is that's 10K of fixes and improvements. So it fixed/updated things for me, THANK YOU VERY MUCH.
  • The line should be:

    98% of the rants about NT's purported instability and lack of security are the direct results of complete idiots that MADE it.

    Funny, I can put a linux box in front of my friend who has never had a computer and doesn't know anything about computers, and every day when he comes home to use it, it's on and working fine, while the windows box next to it crashes every day. I'm not some idiot. I understand the registry and how to make things work. The thing is, I fix one thing and then something else that has no ties to what I just fixed breaks.

    You want to do a real world test?

    Try this one time. Get two machines that are the same in every way. Install NT on one, Linux on the other. Do everything stock with no changes whatsoever. Just install it with nothing special. Simple, right? Now let both machines sit, doing nothing whatsoever. Since you are an admin, or make yourself out to be one: which one would crash first?

    Whatever machine failed, it can't be blamed on the admin. It can't be blamed on the hardware. It can only be the people who made the OS.

    Don't tell me it was a user error. You can get 100's of 1000's of people at random to install linux and then NT, and I'm sure that NT will fail first 99% of the time.

    Oh, btw, 100% of my linux crashes have been because of something I did, not the software. Of course, I have only crashed my 11 home systems 3 times total in the one year I have been using Linux, and all of them were because I was trying something completely new to me.

    Windows 98 crashed 20 times in 24 hours.
  • by crow ( 16139 ) on Friday August 27, 1999 @06:55PM (#1720970) Homepage Journal
    So you want online kernel upgrades? Sounds like something we do with our systems at work: At EMC, we will upgrade the OS that runs inside our storage systems while the system is online and processing I/Os. (I haven't worked on the code that does the online upgrades, however.)

    It's not easy to do, but it can be done.

    The tricky part is not the replacing of the kernel code with new code, but migrating between changed data structures at the same time. In theory, you could do it with the facilities in place now:

    1) Build the new kernel.

    2) Build a program that understands all the differences in kernel data structures.

    3) Load the new kernel into memory, but at a different address from the running kernel.

    4) Load the translation program as a loadable module--it will need to do several steps:
    a) suspend interrupts
    b) translate the data structures
    c) relocate the new kernel to the proper place in memory (possibly using VM tricks)
    d) enable interrupts
    e) clean up any junk from the old kernel that is no longer in use
    f) transfer control to the new kernel

    5) Unload the upgrade program

    That would be a pain to code, but in theory is possible. For most applications, though, rebooting is acceptable. I doubt that anyone will code online kernel upgrades anytime soon.
  • I've been running 2.2.12 for 1 day, 6 hrs, 50 mins. What we have here is a lack of COMMUNICATION!

    -Will the Chill
  • No, it's not cheating. Usually I don't count scheduled downtime. After all, is it really a bad thing when I install more memory and disk?
    I believe the uptime figure shows how well computers can do when left on for long periods of time. When you reboot, you reinitialize the state of the hardware as well as the software, so any problems like memory leakage, fs corruption, etc. stop when you reboot.

    Keeping your computer on for long periods without rebooting not only means you have a good OS, it may also show the skill of the administrator. That's not to say you are a bad administrator when you reboot.

    The problem with using uptime as a bragging right is that it is not much to brag about when we talk about workstations. It is important to upgrade the hardware and kernels of workstations, so reboots may be frequent depending on the type of user. For "static" mission-critical servers, uptime is a very important figure that is worth bragging about. If your server does a very limited set of functions day in and day out, you want to keep your computers up as long as possible.
  • I would think that it should depend on your purpose.

    If you're looking at this from a user's point of view, then count all downtime. The user, after all, doesn't really care why the system is down; they're just ticked off that they can't access the site.

    If you're looking at this from an OS stability point of view, then only count the downtime caused by OS crashes/problems. Hardware upgrades, power outages, or someone tripping over the power cord simply aren't the OS's fault.

    If you want to go for bragging rights, ignore all downtime. This will give you the highest uptime, after all, won't it? :>

    (Although, in my case, I've never had any downtime on my server due to an OS crash, so it would be the same as the previous case.)
    Oh come on, what are you smoking?

    At work my Win95 boxes BSOD every 1-2 days. My Linux PC at home has NEVER crashed in the 2 years I've been using it (Red Hat 5.0, 5.2, COL2.2, Mandrake 6.0) including custom kernels. The NT servers require a reboot every month to 6 weeks for one reason or another. It's usually out of hours to minimize impact. Admittedly my NT Workstation has never BSOD'ed, but that may be because it only works for 8-12 hour stretches.

    These NT servers that I have personal knowledge of include a stock exchange, a university, and BIG companies — the likes of a certain cola beverage Co. — with big budgets feeding big, quality servers.

    It doesn't even take an idiot to know that M$ OSes SUCK when it comes to reliability. You can say all you like about me being another of these "complete idiots pretending they're an admin", but I'm not, and don't pretend to be; I just know what I see, and I can't be blamed for judging MS based on that.

    Wake up.

  • As soon as Linus / Alan figgers out how to upgrade an entire kernel from source without rebooting, I'll be happy! ; )

    Solaris can do it... they "recommend" going singleuser first, but it amounts basically to a hot-swap kernel. I have no clue how it's done, but I'm jealous too.

    I suppose in a HA cluster, you would generally count cluster uptime rather than machine uptime -- maybe then individual kernel upgrades could be discounted so long as the cluster output was still flowing. Hmm.
