

2.4, The Kernel of Pain
Joshua Drake has written an article for LinuxWorld.com called
The Kernel of Pain.
He seems to think 2.4 is fine for desktop systems but is only now, a year after release, approaching stability for high-end use. Slashdot has had its own issues with 2.4, so I know where he's coming from. What have your experiences been? Is it still too soon for 2.4?
Well, from my point of view... (Score:2, Interesting)
--Josh
Re:Well, from my point of view... (Score:3, Interesting)
Re:Well, from my point of view... (Score:3, Funny)
Reset button? WE DON'T NEED NO STEENKING RESET BU (Score:3, Funny)
IBM must be pretty confident, the reviewers figured, to leave the reset button out (Apple subsequently did the same on the Mac) (did the Apple ][ have a reset button?). Bill Gates and DOS proved them (IBM) merely arrogant, and the 286-based PC/AT a couple of years later (5170? I know not the model number) had a reset button.
Now, twenty years later, I've removed the reset switch I eventually added to the 5160 cabinet for the sake of DOS. I'll need it no more.
Au contraire (Score:3, Interesting)
I'll tell ya, I tried the preemptive patches, and all the -ac stuff naturally, and well, the desktop just isn't snappy ... I mean, Windows (follow me here) just feels better. I don't need a force-feedback mouse or anything; Windows just doesn't show me that it is rendering a window... and that's something Gnome was doing even on a 450 MHz machine.
Also, even with the preemptive patches, I could hold down a key in, say, StarOffice or AbiWord, and it would stutter! Hold down the arrow key, and it stutters.
These are basic interface issues that could use some due attention before Linux is ever ready for the desktop.
Re:Au contraire (Score:3, Insightful)
Re:Au contraire (Score:3, Insightful)
The only reasons I bought a new machine are that I needed the K6 to act as a FreeBSD box and that I wanted to play DivX and games, both of which demand more than a K6-266 regardless of the OS used.
Re:Au contraire (Score:2, Insightful)
What you're saying is equivalent to one of the many posts about using vi when the discussion's topic is IDEs. Fine if it does the job for you, but 90% of people out there want fully integrated DEs like GNOME and KDE.
Re:Au contraire (Score:3, Interesting)
Also, the problem wasn't that the system was slow, but that when you had many active processes, the system would respond very poorly or lock up.
Cryptnotic
Re:Au contraire (Score:5, Interesting)
The preemptive patches have made my system a lot more responsive under use. Most notably the mouse cursor doesn't slow down during heavy compiles and audio latency is good enough to play with some of the more interesting sound software projects out for linux.
But it really sounds like your problem isn't with Linux but with XFree86. X has its share of problems, but if you have a good video card that's supported well under it, you should get more than acceptable 2D drawing performance. I use a 3dfx Voodoo3 here and it's about as good as Win2k running KDE (sometimes you can see it rendering when resizing or moving windows quickly, but I like to think of it as a cool effect ;) and it's way faster with lighter WMs like Blackbox.
Re:Au contraire (Score:4, Informative)
Also
nice -n -10
helps quite a lot on an average desktop Linux box.
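For anyone who wants to try it, a minimal sketch (negative values need root; the X process name and xmms are just examples, adjust for your setup):

renice -10 -p $(pidof X)   # bump the X server so redraws preempt background jobs
nice -n -10 xmms           # or launch a latency-sensitive app at higher priority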
Re:Au contraire (Score:5, Informative)
There are serious VM stability issues with these systems. Ever wonder why Red Hat hasn't released a >2.4.9 kernel? It's because 2.4.10 is where the new VM system went in. Red Hat is busily porting Rik van Riel's 2.4.9 VM up to the later kernels so that they can use it.
Re:Au contraire (Score:2)
Re:Au contraire (Score:5, Insightful)
There are many links in the GUI chain, any of which can cause a problem. Roughly from top to bottom:
1. Widget toolkit (GTK, QT, etc)
2. Client painting library (GDK, QT, etc)
3. Window manager
4. X protocol
5. Context switches
6. X server
7. 2D video card driver
The folklore seems to be that 4, 5, and 7 are the major problems - "the X protocol is badly designed, switching between client, server, and window manager processes is too expensive, and XFree86's video drivers are no good."
In reality though, the problems aren't where most people expect. The X protocol is not generally a bottleneck, especially if the client programmer knows what he's doing (wait until the input queue empties before repainting anything, avoid synchronous behavior, double-buffer windows using server-side pixmaps, etc). The copy-and-context-switch overhead isn't too bad either (keep in mind that context switches are much more expensive on Windows, and Windows is the platform to beat for 2D smoothness!). And finally, many of the 2D drivers really do take advantage of all the hardware offers.
The real culprits are turning out to be 1 and 3 - the toolkits and window managers. Many of the Linux toolkits (especially GTK) have very advanced widget alignment/constraint systems that bog down when windows are resized. Some toolkits are doing naughty things with the event loop (painting while events are still in the input queue, or trying to "optimize" by pausing for new events), and most of them aren't fully double-buffered yet (though GTK 2.0 and recent KDE/QT are most of the way there). Window managers are some of the most horrific perpetrators of 2D crappiness. Some of them try too hard to snap or quantize window sizes and positions, resulting in jerky motion. Kwin seems to prolong expose/repaint cycles much more than necessary. And finally, I will make one criticism of X's overall architecture - I don't think separating the window manager from the X server was a good choice. The asynchronous relationship between X and the wm can cause nasty delays in window moving and resizing. (Plus, all widely-used wm's have basically the same features these days; what's the use of having a choice?)
I used to think that the only way to get perfectly smooth 2D would be to embed the widget toolkit in the X server, so that it could handle repainting all on its own. Now I don't think one needs to go that far; it may just take a well-written window manager, and a similarly carefully-designed widget toolkit. (though it may be helpful for the server to mandatorily double-buffer every window - hey, video RAM is plentiful these days =)
There are lots of issues I haven't investigated yet - for instance, I think Windows may be doing something interesting with vblank; dragging windows around seems to show a lot less tearing compared to X... Also, 3D OpenGL windows seem to cause much worse artifacting on both X and MS Windows. It's almost possible to bring an animating OpenGL program to a complete halt just by resizing the window, or dragging another window in front of it.
It's an interesting problem, and I'm glad to see I'm not the only one who cares about it. I find it appalling that (to my knowledge) not one major 2D GUI system has been able to produce 100% correct results - i.e. every window correctly drawn on every single monitor retrace, even while dragging or resizing. Why should we settle for less than 100%?
Re:Au contraire (Score:5, Informative)
As the author of a window manager and big hunks of GTK, I don't think your analysis is quite right.
The primary problem is synchronization, not delay. GTK 1.2 is very fast, its geometry code is not causing any slowness. You are confusing slow with flicker. Flicker looks slow but slow is not the problem; no matter how fast code is, if it flickers, you will see it, and it will look slow.
Similarly with opaque resizing of a window: it has nothing to do with quantization or speed. The problem is that the window manager frame and the client are not resized/drawn at the same time, resulting in a "tearing" effect. This would be visible no matter how fast you make things.
As you say, putting the toolkit in the server or putting the WM in the toolkit are overly radical ways to fix this. It's not even necessary to give every X window backing store. It could be done with an extension allowing us to push a backing store onto a single X window during a resize, for example. However, fixing it 100% pretty clearly requires server changes, and that's why you haven't seen a fix yet.
Re:Au contraire (Score:3, Insightful)
Other causes of flicker: multiple visuals (not a problem on most Linux XFree86 systems), and toolkits (fixable with double buffering and can be reduced though not eliminated by the programmer of the toolkit).
I think the window manager should be put into the toolkit. The window borders are no different from any other widget. In fact, I believe far more code is expended trying to talk to a window manager than would be needed to do this in a toolkit (which already contains code to draw the buttons and borders). This would allow new ideas in window management to be experimented with, such as getting rid of the borders entirely.
The system might provide a "Task Manager" (using the term taken from Windows) that any program creating a window would talk to. The program would indicate the task that window belonged to and the name of the window itself. The task manager would send commands like "raise this window" or "map this window" or "hide this window" to the program, and by watching the visibility and positions of windows it could provide pagers, icons, and taskbar-type interfaces.
I strongly believe that putting widgets into the server is BAD. If X had done this, we would be using Athena widgets right now, and X would look laughably bad. The fact that X can emulate Windows and Mac interface designs invented 10 years after X itself is definite proof that keeping UI elements out of it was the best possible design.
No ... I like 2.4 ... (Score:4, Insightful)
I really like using USB, and I like not having to use ALSA for my sound card (not that I have anything against ALSA).
After playing around with Debian the other day and seeing all of my hardware that WON'T WORK with the 2.2 series, it has basically come to my attention that I am all for the 2.4 series.
Linux is a continuously developing system, whether it be the kernel, distributions, or software. Linux will always be "in development", which is perfect for Linux.
So yeah ... if you don't like 2.4 ... go back to 2.2 ... yeah ... thought so :-P
You get what you deserve? (Score:2)
Why Linux? (Score:3, Insightful)
The big argument FOR Linux up until recently was stability of the OS. With Windows 2000 (and XP, I assume) it seems that Linux users are now the ones willing to put up with the more problematic OS, and for some of the same reasons Windows users clung to theirs not long ago.
Now my question... Why use Linux? It's that needlessly complex and clunky operating system in between Windows/OS X.1 & the *BSDs. Windows & the *BSDs are far easier to configure than Linux, and the *BSDs are faster, more secure, more stable, and simply smoother (less clunky) all around.
The *BSDs are PnP (no need for Kudzu), no kernel modules have to be manually configured, and the configuration files are far simpler than ANY Linux distro's (although you CAN use Sys V scripts instead if you are so inclined; anyone who uses the BSD-style scripts for a while will not want to use anything else, though).
So I ask politely, hoping to avoid flames and rants... Why choose Linux? It's not the most stable, the most secure, the fastest, the most free, or the easiest.
Re:Why Linux? (Score:4, Insightful)
Linux is a buzzword, and being one, gets the benefits. When people talk about "non-Windows support," what jumps to mind is "Linux" even before "Mac" for many developers. Thus, there are many precompiled binaries, precompiled kernel modules, etc., that run under Linux. (I know BSD can run many Linux binaries, but what about kernel modules?) Additionally, many people are actively developing hardware drivers for Linux, but not so many for BSD.
Plus, it's very easy to find support for various Linux-related problems, because so many people use it.
Re:Why Linux? (Score:5, Interesting)
At least for nVidia, it is being worked on: FreeBSD NVIDIA Driver Initiative [netexplorer.org]
Re:Why Linux? (Score:4, Insightful)
> rants... Why choose Linux? It's not the most
> stable, the most secure, the fastest, the most
> free, the easiest.
"I like Linux" would be the first answer that comes to mind. Linux is very stable, very secure, quite fast, very free, and, once you get to know it, very easy! Linux is all these things and more.
Linux is stable - OpenBSD may be more stable.
Linux is secure - NetBSD may be more secure.
Linux is fast - BeOS may be faster.
Linux is free - FreeBSD may be freer.
Linux is easy - OS-X may be easier.
Linux gives me all these benefits in one package AND the GPL'd codebase keeps getting richer.
We've been using it... (Score:4, Interesting)
We ran into some trouble with a number of Athlon systems, but that was due to the 'Athlon bug' and was soon fixed. More worrisome was the performance of pre-2.4.9 kernels on the desktop: sometimes they slowed down to a crawl (and I'm talking about lightly loaded ~750MHz machines here).
We got over that with the -ac kernels however, and it's been a breeze ever since. We currently use 2.4.14 with XFS patched in (although we're ditching it in favor of ext3 now that it's been integrated and the RH installer supports it) and we're looking at 2.4.17 now.
Why use 2.4 on servers (as some have asked)? Well, iptables is a good reason, for one. Other security-related things count heavily too. And XFS seemed a good reason to do it at the time too. It can deliver very good performance.
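As a sketch of what iptables buys you over 2.2's ipchains (a hedged example, not our actual ruleset; the port is a placeholder):

iptables -P INPUT DROP                                               # default deny
iptables -A INPUT -i lo -j ACCEPT                                    # trust loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT     # stateful return traffic
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT   # new SSH connections only

That connection-tracking bit is exactly what 2.2 can't do.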
Some stats:
zuse [1] > uname -a
Linux zuse 2.4.14-xfs_MI10 #1 Tue Nov 6 17:34:04 MET 2001 i686 unknown
zuse [2] > uptime
2:25pm up 61 days, 21:21, 1 user, load average: 1.07, 1.02, 0.93
Nothing but anecdote, but... (Score:2)
I imagine I could suss it out, but it isn't a big issue for me. I'm told later 2.4.x kernels fix this (I'm running 2.4.9).
Anecdotal, I know. For myself, I'd run 2.2.x still on production systems. But I don't run any big production systems...
2.4 running just fine here (Score:2, Interesting)
Alphas (Score:5, Informative)
-Paul Komarek
Re:Alphas (Score:4, Informative)
2.4 woes (Score:2)
Erik
My experience (Score:5, Informative)
Well:
8:33pm up 45 days, 5:49,
Shameful, I know, but I had to move cities; before that I had 6 months. Should have had a UPS.
This is pretty much a desktop/development box running Postgres, JBoss, Tomcat, Apache, JBuilder and (occasionally) Kylix. No problems so far, touch wood.
I also used to work at the comp-sci department of a university where we had 40 boxes in the Linux lab; no real problems, except that they were running ext2, so only the occasional manual fsck. Now the Mac lab, that is another story (OS 9, not OS X).
Re:My experience (Score:2)
*sigh* (Score:2)
But this is the way of open source: it's obvious this stuff wasn't tested to destruction while still in the 2.3 phase, or we wouldn't be seeing these problems. However, distribution developers should be doing this testing before releasing a new OS, and they're obviously not doing so.
Re:My experience (Score:2)
To be honest, I can't really be proud of the 2.4 series, but I hate to see it get this bad press now that Marcelo Tosatti is doing such good work. If you missed it, he has started doing prereleases to prevent Linus's 0-day blunders like 2.4.11 and 2.4.15. Linus might argue that that's slowing down development; I think it will give better timing and better version names.
Yes, the emperor has no clothes! (Score:3, Interesting)
As a sysadmin, I have to state that the 2.4 kernels have ruined whatever reputation the 2.2.x series had built for stability. At least in the 2.0 and 2.2 series, you had islands of stability where really careful distributions could pick a kernel version as their default kernel. One of the main reasons Debian hasn't finalized a 2.4 kernel is that there hasn't been an island of stability so far in the 2.4 series.
And I've been waiting a long time now. The early 2.4 series didn't really work out on my SMP servers. The kernels from 2.4.6 onwards broke Tulip support for me. Then came the VM switch. Then, just when I decided that 2.4.16 seemed stable enough, we got the OOM problem. And I keep hearing statements about the new VM being more friendly to desktop systems than to servers.....
Now if only 2.2 offered iptables.....
While Linux remains superior to Windows (Score:5, Informative)
And now for some armchair quarterbacking. All that having been said, I really think Linus needs to exercise some self-discipline and stay away from maintaining even-numbered kernel releases (x.0.x, x.2.x, x.4.x, etc.). By his own admission he isn't good at being a stable-kernel maintainer and prefers the more interesting work done in development kernels; his track record in 2.2 wasn't fantastic (particularly in comparison to 2.0, where he did a fantastic job) and was pretty abysmal in 2.4. As someone who's been using GNU/Linux since the early pre-1.0 days, I hope he'll put his efforts where his talents are (managing changes in odd-numbered development releases) and leave stable maintenance to Cox and Marcelo (who are very good at maintaining and improving stable releases). But enough commentary from the peanut gallery...
Mandrake8.1 ships with both 2.4 and 2.2 (Score:5, Insightful)
But Mandrake 8.1 ships with both kernel 2.4 and 2.2.
The idea behind it is: if you need all the fancy stuff use 2.4 but if you want stability use 2.2.
So using 2.4 on a server and then complaining that it isn't stable enough is silly IMHO.
That said, I agree that 2.4 has been slow to stabilize (the VM mess apparently caused by communication problems between Linus and Rik van Riel).
Re:Mandrake8.1 ships with both 2.4 and 2.2 (Score:3, Insightful)
Yeah, cause Linus was joking when he said that even numbers were "stable".
2.4 is a supposedly stable tree.
It's supposed to be: odd versions have fancy (i.e. experimental) stuff, use at your own risk; even versions are stable and suitable for real usage.
So using 2.4 on a server and then complaining that it isn't stable enough is silly IMHO
Then Linus should stop saying that the even versions are stable.
Insert obligatory *BSD advert here
Re:Mandrake8.1 ships with both 2.4 and 2.2 (Score:2, Insightful)
And as for the VM mess, it wasn't really an issue of communication. It was an issue of Linus arbitrarily accepting some patches from Rik and ignoring others. Alan Cox at least made a real attempt to incorporate all of Rik's VM patches in the -ac branch, and the -ac branch had a much improved VM as a result. But Linus didn't make the effort, for some reason.
The reason 2.4 has been unstable is that the maintainership has been poor. Usually Linus turns over maintainership to someone else (previously Alan) very early in the series; I think that happened at 2.2.7 for the 2.2 series. Alan puts out lots of prepatches and gives people enough time to test prerelease kernel patches. Linus is random about it. He'll release a kernel that has changes that weren't in the prepatches, and a bunch of times those changes broke something badly. It probably doesn't help that he has a day job; Alan gets paid to work on Linux full time. The 2.2 series only started getting stable when Alan took over. 2.4 only just recently got handed off.
Re:Mandrake8.1 ships with both 2.4 and 2.2 (Score:2)
he is right to put the responsibility on the maintainer, otherwise IT WOULD NOT BE A DISTRIBUTED FRIGGIN SYSTEM. and a project this large can only work if everyone makes sure their little bit works. now i tell you this much: y'all are falling on top of linus, but i reckon that if rik had followed linus's model and made sure the VM patches were all applied in order, at this time we would probably be celebrating the reasonable quality of the 2.4 series. thats how fickle we are, mate.
nuff said.
soup
Similar problem here... (Score:3, Informative)
Oh yeah, and the machine would crash randomly and lose data. We were using ext3, so the filesystem was (supposedly) still consistent, but whatever was being worked on would be lost.
Ultimately, we upgraded the kernel to 2.4.17, and the problems have been fixed. But the "even number == stable and reliable" rule failed us that time.
Since then, I've read that "the entire VM system in 2.4 was replaced around 2.4.10". This really scares me. I hope that Linus and Alan Cox have learned to manage things better now. If not, someone else will have to pick up the slack (maybe Red Hat) and manage a stable kernel.
Cryptnotic
Re:Similar problem here... (Score:3, Informative)
Neither Linus nor Alan Cox maintains 2.4 at the moment. Marcelo Tosatti does, and from what I read on LKML some people thought that a bad move at the beginning, but I think it's working out just great (the first release he made was 2.4.17, IIRC).
Why didn't he downgrade immediately? (Score:4, Insightful)
First, he replaces a known-working server with something new. Then he keeps adding bleeding-edge kernel upon kernel to this box (following his narrative, it sounds as if he installed new kernels upon release).
Second, nowhere does he mention why he needed a 2.4 kernel in the first place. In fact, he mentions how he finally decided to downgrade to 2.2.
So, in conclusion: He upgrades to the bleeding edge without proper need, and when trouble ensues, instead of rolling back, he continues upgrading. Tell me why this guy is not a hopelessly incompetent sysadmin who's trying to blame Linux for his shortcomings?
Hell, even I as a home user waited until 2.4.17 before upgrading my main box from 2.2.19. If I can perceive the weaknesses of the 2.4 kernel, why can't a professional do so?
Mart
Re:Why didn't he downgrade immediately? (Score:5, Insightful)
Far be it from me to criticize Linus et al., as I could never do what they do, but the shortcomings are not this guy's; they are in the buggy kernels. These are release kernels, not beta kernels. I think, considering the reputation of Linux, that a release kernel should be stable. Yes, bugs happen, and when they do, you would expect a patch to fix them.
If everyone did as you suggested and rolled back to 2.2.x at the first whiff of trouble, who would be out using these "bleeding edge kernels"??
I think you should cut the boy some slack.....
Re:Why didn't he downgrade immediately? (Score:5, Insightful)
Re:Why didn't he downgrade immediately? (Score:3, Insightful)
At my work, just yesterday, we were discussing how frustrating it is that users downgrade when they have a problem instead of reporting it or checking for a newer version! The argument was that since they kept doing that, we could never determine whether a new version needed bug fixes, and the bug reports we did get were meaningless because they were always against dated versions. I find this to be a common mentality.
Now I hear the exact opposite. This guy did exactly the right thing. Don't use beta versions, but if you have a problem, upgrade to the NEWEST, don't downgrade to an old version.
Re:Why didn't he downgrade immediately? (Score:3, Insightful)
The reason he didn't downgrade immediately is that it's a big job. Compared to a downgrade, which presumably involves a backup, rebuild and restore (sounds like several hours of downtime), an upgrade to the next kernel is basically a reboot.
The fact is, they took a significant decision when they decided to go to 2.4 in the first place. Having made that decision, rightly or wrongly, he then has to decide what to do when he hits bugs. The business may prefer (initially at least) to live with the problems rather than face prolonged downtime for the downgrade, or to live with them until they can schedule such downtime.
There may have been things he could have done better, but "hopelessly incompetent" is a bit harsh.
2.4 is hit and miss. (Score:5, Interesting)
On my desktop machine, I've taken more risks (installed pretty much every official 2.4.x-linus release as they have come out) and some have been good, while others have been total dogs.
I'm running 2.4.17 right now. It seems okay; I've only had a freeze-up once over the last couple of weeks, though it was a total hard freeze (i.e. no ping, no magic SysRq, no nothing), which I haven't had in Linux for several years.
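Side note for anyone else chasing hard freezes: magic SysRq is only useful if it's enabled before the crash. A minimal sketch, assuming the kernel was built with CONFIG_MAGIC_SYSRQ:

echo 1 > /proc/sys/kernel/sysrq   # enable the magic SysRq key at runtime
# then, at the console during a freeze:
#   Alt+SysRq+s   emergency sync
#   Alt+SysRq+u   remount filesystems read-only
#   Alt+SysRq+b   immediate reboot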
The obvious issue is VM; if you keep lots of memory (768M, or preferably 1.0G+) in your system, things go much more smoothly, though MP3 playback still skips a little.
Right now, I'd prefer some work on the RAID and IDE performance issues. One or two of the 2.4 series have had disk performance 100%+ better than the current 2.4 kernels. Why? I'd like to get the disk I/O back to reasonable levels.
Re:2.4 is hit and miss. (Score:3, Interesting)
The original article author went off and yelled about this problem and that problem in the Linus kernels, but totally left Red Hat's stuff out in the cold until the very end.... yes, I admit, right now is not a good time to be following 100% pristine Linus code. But the beauty of Linux now is what everybody feared would get really ugly: We have SEVERAL forks in the code, and at least one of them is working quite well....
I'd still rather run Alan's beta code than the best Bill can possibly offer.
Cluestick (Score:4, Insightful)
1. Mandrake 8.0, *the* desktop distribution _and_ a dot zero release.
2. A kernel <= 2.4.6 with known problems, and definitely
3. A large-scale *production* server.
Somebody hit this guy with a cluestick! Please?
Thimo
Re:Oh, stop it! (Score:2, Informative)
No, that's wrong. Red Hat, for instance, which is generally designed to be an industrial-strength server distribution, applies something like 200 patches [lwn.net] to Linus's kernel. Red Hat knows that its customers expect a solid, stable server operating system, so they will do what it takes to build one. Mandrake, on the other hand, knows that its customers are mostly desktop users, so it has other priorities (providing games, etc.) than testing and patching the kernel.
Re:Oh, stop it! (Score:3, Interesting)
However, you are absolutely right that Mandrake focuses on ease of use and new desktop-like features while Red Hat focuses on stability at the expense of "coolness."
Both can be frustrating if they're used in the "wrong" places.
No probs here. (Score:2, Troll)
Worked for me. (Score:5, Interesting)
Of course, whenever I'm playing around with this stuff I don't delete my "last known good" kernel, so if after a couple hours or a couple days I noticed a problem, I just booted back to what worked. The default (albeit heavily patched) Red Hat kernels were good, so "last known good" always existed for me.
To summarize: this hasn't been a source of inconvenience for me, but it has been one of vicarious embarrassment. I've only been using Linux since 2.0.somehighnumber, but this is the worst mess I've seen the "stable" kernel tree go through in that time. Don't get me wrong, I've experienced system-crashing bugs (a tulip driver that freaked at some tulip chipset clones, some really bad OOM behavior a couple years ago) before, and pragmatically I guess that's worse... but those problems were always fixed fast enough that the patches predated my bug reports. Watching even the top kernel developers seem to flounder for months over bugs in a core part of the OS like the virtual memory system just sucked.
2.4.16 + preempt (Score:3, Interesting)
It is the most stable config I ever had using this kernel generation.
Let me explain:
Before, with the 2.2.1x kernels, I only had "some" performance issues (mostly disk-access related) and what I thought were APM problems (this is a laptop).
Since I have been using kernel 2.4, I have had good times but mostly bad surprises.
pcmcia (I use the pcmcia-cs package [sourceforge.net]) is not quite plug'n'play (the system even hung once), and symptoms vary from version to version.
So, the big PRO is that, yes, the system boots much quicker.
The CON is that since 2.4.6/7 I have bitterly regretted upgrading this kernel, since the functionality I gained was offset by the new bugs.
Note that I don't mention APM because, besides the Window Maker apm applet, I can't even imagine using suspend/resume on this laptop.
BTW, when I see the difference with and without the preempt kernel, I wonder why this is not implemented in the official tree (radio button: "server or desktop"?).
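For anyone curious, trying the preempt patch on a 2.4 tree goes roughly like this (a sketch; the patch file name is hypothetical, grab the one matching your kernel version):

cd /usr/src/linux-2.4.16
patch -p1 < ../preempt-kernel-2.4.16.patch   # apply the preemption patch
make menuconfig                              # enable "Preemptible Kernel"
make dep bzImage modules modules_install     # 2.4 still needs the explicit "dep" step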
We are worse off with 2.2 (Score:5, Insightful)
1) A dual PIII-800/Intel 440GX/512MB ECC RAM based server, with a Mylex AcceleRAID 170 adapter, an Adaptec AIC-7896 SCSI adapter, Intel EtherExpress Pro 10/100, and an external 450GB SCSI RAID-5. This box is used for NFS/Samba file serving and an e-mail server for around 100 users.
It runs kernel 2.2.17
2) A dual PIII-800/VIA 133 server/1GB PC-133 RAM server, with an Initio A100U2W SCSI adapter, Intel EtherExpress 10/100 and 70GB of external SCSI RAID 1/0. It runs MySQL, Apache, and a collection of internally developed Perl, C and Java server apps, on kernel 2.4.3
3) A dual PIII-450/Intel 440BX/512MB PC-100 RAM server, with an Adaptec 2940UW adapter, Intel EtherExpress 10/100 and 170GB of external SCSI RAID-5. It is used as a development system, and runs MySQL, Apache, and assorted Perl, C and Java apps, on kernel 2.4.1.
Systems 2 and 3 have both been up for 197 days as I type this, and would have been up for over 250 days had we not needed to power them down to move them to a new server room.
System 1 (with the 2.2.17 kernel) has never stayed up for more than 55 days. It hard-crashes without anything informative being written to the logs, and obviously requires the reset button to be pressed.
Has anyone got any ideas, given the hardware configs and software running on these machines, why 2.2 is so horrendous yet 2.4 so stable?
Re:We are worse off with 2.2 (Score:5, Insightful)
Or something like that.
Anyway, I'd look at the changelogs for the network driver between 2.2.17 and 2.4.1, you may learn something.
Dave
NFS and 2.2 (Score:3, Informative)
There were some lingering problems with NFS (even v2 using UDP) in the 2.2.x kernel series until 2.2.19.
I recommend that you upgrade the machine that's running 2.2.17, or else apply the NFS patches. If you're using NFS v3 or TCP, you definitely want to upgrade to the latest version, and get the latest NFS utils.
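If the upgrade has to wait, a hedged stopgap is to pin the clients to NFSv2 over UDP, which the older 2.2.x servers handle best (host and paths here are made up; nfsvers/udp are standard mount options):

mount -t nfs -o nfsvers=2,udp fileserver:/export/home /mnt/home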
Kernel Panic (Score:4, Insightful)
Is Linux 2.4 unstable? It depends on your perspective and luck. I'm running 2.4.8 and 2.2.19 (Debian potato) on my systems successfully; 2.4.9 thru .12 have been glitchy for me, especially when it comes to running big jobs that stress the VM. Haven't tried anything above .12 yet; I'm waiting for .18. My old cluster runs 2.2 simply because I have no reason to change.
Your mileage, of course, may vary.
I do think that 2.4 has been managed poorly. People complain that Microsoft beta-tests software on their customers -- yet that is precisely what the kernel team does to Linux users when they release a "stable" kernel with an entirely new VM. A couple of months' (weeks'?) testing on developer workstations is not sufficient for an "enterprise"-class operating system. Anyone who understands the least bit about complex systems knows that you don't replace critical architecture (the VM) without jeopardizing stability.
It's all water under the bridge now; I hope Linus and company have learned from the 2.4 battles. If 2.6 has the same kinds of problems and controversies... well, I prefer not to think about it. For my part, I plan to beat 2.5 beta kernels to death, to help the testing along. Testing is as important as kernel hacking -- even if it isn't as sexy.
life ain't easy, kernel-programming too (Score:2, Insightful)
As an electronics student, I wouldn't dare criticize the kernel programmers: if you ever tried to write a kernel from scratch, you'd know what a damn job that is...
For all of you interested, there's a great book over at O'Reilly, Understanding the Linux Kernel [oreilly.com]. It covers the changes from 2.2 to 2.4 and explains in great detail the structures behind all the features you enjoy in your everyday Linux life ;-)
cu, mephinet
Unfortunately I have to agree (Score:5, Interesting)
Having said that, there are some serious issues with 2.4 on some 8-way 8GB machines that I manage. They have been running 2.4.13-ac7 since November, because that is the last kernel that is usable for me (-ac11 would probably be OK). Newer kernels have terrible behavior under the intense IO load these machines go through. They get 14-30 days of uptime, and then hang or get resource-starved or something and have to be rebooted.
I think part of the issue is that there simply aren't that many people running 8-way boxes, so bugs aren't found as easily; this is on top of 8-way SMP being much more complex than a de facto single-user, single-processor desktop machine. To make it even worse, the machines are pushed hard. They move around GBs of data every day and often run for extended periods with loads over 25.
Of course, it is still mostly OK. While the machines are working, they mostly work fine. But 20 days of uptime is totally unacceptable. I have an Alpha running Tru64 pushing 300 days of uptime, and the last time it was down was due to a drive failure, not an OS problem.
My only remaining issue with Linux on "small" machines is an oscillation problem in IO. Data will fill up all available memory before being written to disk; then everything in memory is written out, and then memory fills up again before anything new is written. This is a bit inefficient, and the machine's responsiveness at the memory-full part of the cycle is poor.
What are my options though? I guess I could try FreeBSD, but a bit of lurking on their lists and forums reveals plenty of problems there, too. Do I switch and hope things get better, or wait out 2.4 and hope it comes around soon? Aside from a few nasty bugs in some releases, pretty much each successive 2.4 kernel has been better than the previous one, at least on small systems.
Several years ago I was having a hard-lockup problem with Tru64 (Digital Unix, at the time), and that was very scary. It took time to get the problem escalated to the OS engineers, instead of just sending an e-mail to lkml. Even then I could only hope the issue was being addressed; I had no way to know whether anybody was doing anything about it. (It turned out to be a bug in the NFS server that would cause the machine to lock up when serving to AIX.) For all of its problems, though, it is extremely reassuring to be able to monitor the development process of Linux through the linux-kernel mailing list and other specialized lists. If I feel that people aren't aware of some problem I am experiencing, I can raise the issue. I am not in the dark about what is happening and what fixes are being made. I know what changes have gone into each kernel update, so I know whether there is a chance of it fixing my problems.
Re:Unfortunately I have to agree (Score:4, Informative)
At about the time the 2.4 kernel was first released, we were building a server for serving out large media files for encoding. We were on a limited budget, so we put together a PC with about 256 MB RAM running on a K6-2/500, set up with a combination of RAID 1 and RAID 5 across 2x40GB and 2x80GB IDE drives. While running the stock RH 6.2 kernel we had no problems. But we needed the 2.4 kernel for large files, so we waited until we couldn't wait any longer.
This turned out to be problematic, to say the least. While we had 7 servers running RH 6.2 that never had a crash, the machine serving up the media files would lock up whenever copying large files, or whenever many files were being copied. It kept me working through a few weekends, trying the latest kernel and then stress-testing the server with large file copies. We wound up reverting to a 2.2 kernel because the crashes were too frequent.
I haven't tried the RH kernels for 2.4 on anything other than desktop systems. I can say that, on RH 7.1 at least, the 2.4 kernel in use is rock solid and has never crashed for me at home or on desktop systems at work. I never got the chance to try those kernels on our servers, but I suspect Red Hat's kernels would probably be more stable there. They've got the resources to stress-test and modify kernels for specific needs.
I liked the article. He's not a kernel hacker, and he writes from his experience of the 2.4 kernel with clients. The only problem I see is: WTH was he thinking, using Mandrake 8.0 for a server? That version of Mandrake, more than any other I've used, I've found to be very unstable on 2.4.
Re:Unfortunately I have to agree (Score:2, Interesting)
That's unfortunately the point. 2.4 is stable for desktop use, but obviously it's the 8-processor+RAID heavy use that is problematic. But honestly, I must say I'm not surprised... Having done a bit of kernel development, I always wondered how developers managed to get SMP right (with spin locks etc...) when most of them probably don't have SMP machines, AND considering that fine-grained SMP locking was added as an afterthought. Actually, when I wrote my little personal kernel module, I was amazed at how many things could go wrong in SMP, so much so that my envy of dual Athlon motherboards is now close to zero.
I think Linux is now meeting the exact same problem as the Windows NT kernel: fine-grained SMP is hell.
Desktop Myth (Score:5, Insightful)
I don't get it. I use Linux on the desktop. I have to admit that I don't run Linux on my main machine, but only because I've taken my second hard drive out and put it back into an older machine. [sorry, Wine doesn't like Red Alert 2]
Before I did this, though, I ran 2.4 kernels on my desktop. None of the problems I had were with the kernel. The problems were mainly with certain applications when I pushed them to their limits. Pan, for instance, crashed a lot on me, but that was because I was downloading gigs per day. A simple Pan upgrade fixed that.
In my humble opinion, 2.4 is prime for the desktop. Linux is more than ready for the desktop. I know he says it's ready for the desktop but not for high-end systems; to me, 'high-end' is what you ask of a computer. I've got a 333MHz box running Red Hat 7.2. The computer is running webmin, proftpd, Apache, and many mail daemons. I must also mention that SETI runs 24/7, and it only has 64 MB of RAM. It never goes down, it never 'crashes', and it is up as long as there is power running to it.
So... is it ready for the desktop? Sure, 2.4.x is prime. All the drivers I've needed are there. Even my >$50 webcam.
The question of 'desktop' use isn't about the kernel, though. Desktop users don't patch or compile the kernel... how many times do they do that in *indows or MacOS X? They install complete distributions. IMHO, again, the only thing that keeps Linux off the desktop is easy program installation. RPM has killed itself with dependencies, and apt-get is console-based. Apt-get is waaay better, and it has worked wonders on my Red Hat machine [apt-rpm]. The problem is not being able to download an app and install it like on *indows.
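For the unfamiliar, this is roughly what apt-rpm gives you on a Red Hat box (the package name is just an example):

apt-get update            # refresh the package lists
apt-get install abiword   # fetches and installs all dependencies automatically
apt-get dist-upgrade      # upgrade everything, resolving new dependencies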
Solve this and I will sit outside my local computer store and hand out CDs. I don't know about high-end systems, but dammit, desktop users are ready... format that *indows crap and get a real OS!
Gimme a good apt-get GUI... or have the system run apt-get in the background, solving dependencies when needed... my g'ma will have it.
BTW, I just saw a guy on TV and his name is... get this: Joe Householder
Press Release (Score:5, Funny)
Let's call it a curiosity (Score:4, Interesting)
9:21am up 181 days, 13:25, 3 users, load average: 3.57, 3.33, 2.79
jakob@unthought ~> uname -a
Linux unthought.net 2.4.0-test4 #1 SMP Fri Jul 14 01:56:30 CEST 2000 i686 unknown
I suppose that ain't too bad. Other than that, with real 2.4 kernels, on UP and SMP systems, I've been fairly satisfied.
There was a RAID bug (RAID-1) in 2.4.9 or thereabouts, which the article forgot to mention. Except for the fs/RAID corruption problems (which are horrible when they happen), I think the 2.4 kernel has been a nice experience.
Think back for a moment: how would you like *not* to have iptables, ReiserFS, proper software RAID, etc. etc. etc.?
I think I would miss 2.4 if I went back, although the fs/raid corruption bugs made me "almost" do that.
Problem more serious in Business Computing (Score:2, Informative)
For home use, I really don't find a lot of problems with 2.4, except minor driver issues. But at work things are very different. I run a few high-load critical servers that are still on 2.2; the lab attempt to upgrade to 2.4 (at an early stage) failed because of lockups and performance issues (yes, some due to the VM).
It was only recently, trying again with 2.4.16, that I started getting reasonable results with the 2.4 series. For your information, performance is about the same on 2.4 with my application. I can't confirm high-load stability yet, as I need more time to test, but initial results tell me 2.4.17 is reasonably stable: only one lockup so far (in two weeks).
Needs "unstable", "testing", "stable" or something (Score:2, Interesting)
Maybe hold on to "beta" status a little longer, or have "unstable", "testing" and "stable" branches like Debian, so that when someone wants the latest stable kernel, they don't end up with something the kernel guys *think* is stable... till they release the next "stable" version a day later...
Observations & Experiences (Score:2, Insightful)
The other problem I've noticed, starting with the 2.4.x tree, is the 'ac' or Alan Cox branch. Don't get me wrong, I think Alan has contributed many good things to the kernel; however, I do think Alan has gotten to feeling a little too self-important to Linux's success. Also keep in mind that he works for Red Hat now, and everything I've seen of Red Hat's politics says they want to "control" Linux. So I say stay away from any '-ac' kernel. But that's just my opinion, and I could be wrong.
As far as the 'new' VM being put into the 2.4.x kernel, I don't completely agree or disagree with it being done when it was, but there were various reasons to do it then. It was holding up some important things, and the kernel wasn't ready for a 2.5.x tree yet. It was a hard decision on Linus's part, but not one I'm going to second-guess.
Don't get me wrong, you can get a stable machine with just about any distribution; however, I have found from experience that Slackware has a track record of being the most stable 'out of the box'.
Also keep in mind that with Winblows you'd be rebooting every 14 to 30 days. Even with 2000 and XP.
On the other hand, I still have one box at home running 1.3.72 with an uptime of over 3 years. It's running as my router and is my experiment in just how long a Linux box will run without crashing.
Re:Observations & Experiences (Score:4, Informative)
Um... 1.3.x is, indeed, the stable version. From the website:
2.0.x is the unstable tree at the moment.
Re:Observations & Experiences (Score:2)
Kernel is ok, biggest problem is the applications (Score:3, Interesting)
I've found that the kernel is pretty stable for me. I use my system mostly for code development and as a server for files and web pages.
I find that the kernel itself is pretty stable, although, as the article says, it does seem less stable than the 2.2 series did. But even so, it's not bad for the use I've made of it.
The old-style applications are also very good. The command-line tools and the development tools (gcc etc.) are all totally solid, and they are why Linux gained its early reputation for reliability.
*BUT* I find much _new_ software (GNOME, KDE, and other GUI software) to be terribly unreliable. Say what you like about Microsoft Outlook, but it rarely just crashes. On the other hand, every "modern" mail program I've used on Linux tends to end with a crash eventually. And it's not just mail programs; many of the programs I use tend to crash quite a lot. Not all the time, but just once is too much.
It's rather sad, in my opinion, that such a solid base of reliable code is being let down by the instability of some of the more modern software. Frankly, it doesn't matter how stable the kernel is if the programs that run on it crash.
This isn't intended to be a complaint, and I realise that before applications can be considered reliable the kernel needs to be, but it does concern me that the overall reliability of Linux systems does seem to be going downwards.
Re:Kernel is ok, biggest problem is the applicatio (Score:3, Interesting)
Re:Kernel is ok, biggest problem is the applicatio (Score:2)
There are some great, reliable programs out there too, but I just get the feeling that the quality of many of the *applications* has been going down recently. And if the applications are not reliable, then it probably doesn't matter too much if the kernel crashes every few months...
The real irony... (Score:2, Offtopic)
What do we get? Stability problems, kernels with DO NOT USE warnings, massive changes to the core of the OS, the list goes on. All on what was supposed to be the flawless kernel that proved the worth of Linux to the masses.
Works ok for me (Score:2)
I am running the entire system in RAM (loading a 96 MB, almost empty ramdisk from floppy, populating the filesystem via rcS from
Conclusion:
If you run old hardware, put little load on it, and patch the kernel before you compile it, you will be quite satisfied with 2.4.*
Oh, btw. I run 2.4.* on a laptop (with more than enough RAM). 2.4.* has been stable and satisfactory since the first release, for me.
large system problems (Score:4, Insightful)
Now, what problems am I talking about? The latest 2.4 kernels still have compilation problems in some drivers (2.4.17 has problems in USB, 2.4.18pre4 has problems in one of the sound drivers). Important and mature packages like MOSIX require patching the kernel and aren't integrated into the kernel. Many hardware setups require recompiling the kernel and experimenting endlessly. Every time you recompile the kernel, you need to recompile some kernel modules. Dependencies and recompilation aren't working correctly--some things don't recompile when they should, and lots of things recompile over and over and over again. The kernel itself is a 30Mbyte download. And the list of problems goes on and on.
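For readers who haven't lived it, the rebuild cycle being complained about looks roughly like this (a sketch; the version and paths are examples, and every config change repeats most of it):

cd /usr/src/linux-2.4.17
make menuconfig      # pick the drivers for this particular box
make dep             # 2.4 still requires the explicit dependency pass
make bzImage modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.17   # then rerun lilo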
People seem to have gotten used to it and think there is nothing wrong. The kernel hackers keep telling us that C and make are just great tools for building kernels. But as a user and sometime driver hacker, I think the kernel is falling apart under its own weight. This is not a system I can recommend to non-technical users--commercial distributions can't cover all the possible kernel configurations (even with fully modularized kernels), and recompilation is out of the question for many users. What is needed?
I think, ultimately, if the kernel wants to survive and be able to keep up with the world, it needs some kind of more flexible dynamic binding of functions at runtime. It also must allow people to start writing kernel components in languages other than C, foremost C++. No, C++ isn't the epitome of good language design, and, yes, people can write even more horrible code in C++ than in C, but C++ can really help with safety, security, resource management, and modularity.
If those things don't happen, I think the Linux kernel will simply fall so far behind that it will get replaced by something else. And that would be a shame because the Linux kernel actually does have a lot of useful functionality, and once compiled and configured, works very well.
Re: (Score:3, Informative)
Re:large system problems (Score:3, Interesting)
Now remember that the whole point of microkernels is to enable flexibility -- not only for development's sake but also to be able to adapt to different loads and usage characteristics.
So the HURD seems to be the answer. That or Linus (or someone else, or a group of kernel hackers) over a reasonable amount of time manages to get better at (1) understanding modifications to Linux and their consequences and (2) based on this understanding only release "stable" kernels once they're done.
Given the complexity of the task, I doubt it's doable. The free flavours of BSD never scaled as far as GNU/Linux; the proprietary Unices have basically chosen to scale up and up, forgetting the small-systems situations and accepting bloat in order to cater for stability, resilience and other big-iron stuff. Even single-server microkernels like Windows NT and Apple Darwin haven't much to offer, being bloated, slow and not flexible at all.
We still have a free-software and open-systems advantage, because POSIX OSes can cover the whole gamut of computing systems simply by having different kernels with the same APIs; standards bodies and GNU libc are the real heroes here, not Linux or BSD. Proprietary software will usually be even more fragmented, with all those slightly incompatible, underperforming, unstable versions of Microsoft Windows, Mac OS Classic and so on. But still, it would be nice to have a free, copylefted common kernel. Unless the Linux situation improves dramatically soon, the only answer on the horizon is the Hurd, and it still needs to be finished.
the above message may be a troll (Score:3, Informative)
I saved this one for last:
You see, that's what we call not in the Linus kernel. Your impressions of the importance and maturity of the patch are really something you should take up with Linus himself. I, for one, hope Ingo's TUX subsystem makes it into the Linus tree sometime soon. But you have no basis to say that just because a kernel patch is out and Linus hasn't integrated it into his stable tree, the Linux process is flawed. Get a clue! Independent patches come out much faster than anyone can pull them into the core; they usually conflict and compete with other patches to solve the same problem. So it takes a while. If you want it in the Linus tree sooner, help out. Welcome to open source.
Improvements (Score:2)
Also, the VM system is much improved compared to 2.2.
The only thing I think was a little too risky was replacing the entire VM (originally built by Rik van Riel) with a new one by Andrea Arcangeli. I believe such drastic changes should be reserved for development kernels. But the important thing is that now it's working wonderfully, and is much improved.
I don't think 2.4 should be called the Kernel of Pain. We're at what, 2.4.17? Remember 2.2.17 or 2.0.17? Heck, 2.0 had DoS bugs until release 2.0.35.
I am running 2.4 on some production boxes. They're behaving fine and are very stable, thank you, and I think 2.4 is ready for production.
It's what people have been saying for years! (Score:3, Funny)
Server?
Wait a minute...
2.4 RAID-0 (Score:2)
PS: notice that's negative 18%. I say negative 18% "faster" since RAID-0 is supposed to give a speed increase above all else.
Innovation vs stability (Score:2)
I'm happily running Debian stable (potato) with a 2.2.18pre21 kernel. I'm only vaguely aware that some 2.4 kernel features could be nice to have. The main attraction is iptables, which of course wouldn't be any use on my desktop and laptop machines.
I'm about to play around with 2.4.17 on my current firewall server using Adrian Bunk's 2.4 kernel package, for that very reason. Would it be wiser to hold off? ipchains does an adequate job under 2.2, after all, and I'm perfectly happy with 2.2 on the desktop machines until Woody (the Debian candidate in testing) is moved to stable, by which time all the major gotchas will have been ironed out.
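For comparison, the rule-level difference I'd be gaining looks like this (a sketch; eth0 is a placeholder). Stateless ipchains has to accept all non-SYN TCP to let return traffic through, while iptables tracks connections properly:

# ipchains under 2.2 (the classic "! -y" hack):
#   ipchains -A input -i eth0 -p tcp ! -y -j ACCEPT
# iptables under 2.4:
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT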
Dude.. (Score:3, Funny)
Where's your stability dude?
Dude! Where's my stability?
Where's your stability dude?
Oh! There it is! (points to linux-2.2.20.tar.gz)
;)
The real problem is... (Score:4, Insightful)
It is common to see that the stock kernel is missing lots of patches that would increase stability; as pointed out by Rik van Riel [surriel.com] in a piece that was posted [slashdot.org] here on Slashdot, Linus rejects patches seemingly at random, which causes some areas of the kernel to not be "as good as they should be".
The VM is one part where Linus took some of Riel's patches and rejected others at random, which made the VM suck so hard in the earlier stock 2.4 kernels.
OTOH, kernels shipped by distributions include (or at least should include) the missing parts, and should be better than the stock kernel from kernel.org.
I don't use Mandrake, so I can't tell how good their kernel is or is not. But I use Conectiva Linux and I know how good their kernel package is. Their kernel includes missing fixes that never made it into the stock kernel. Best of all, their kernel maintainer is Marcelo Tosatti [marcelothe...enguin.com], who maintains the stable kernel tree now.
I think we will see an improvement in new 2.4 releases.
The latest 2.4.17 kernels from Conectiva can be found here [linux.org].
Linux 2.4 on our router (Score:4, Informative)
Well, I put 2.4.14 on the box and I haven't rebooted since. I have 61 days of uptime, and that's the most I've ever seen on that box. It is finally stable. The only thing I can conclude is that it's AA's VM that is doing the trick. And in hindsight it makes sense: the behaviour of the box was that it was thrashing, but at the time it didn't seem that way, because I hadn't noticed the HDD light was disconnected from the box, and I couldn't hear the disk in the noisy server room.
So, Linux 2.4 is (knock on wood) stable for my servers, now.
Jason.
STABLE vs STABLE (Score:5, Interesting)
"Stable", in the context of a kernel release refers to the interfaces. When Linus releases 2.<leven>.0, he is saying that this kernel is one that has reached some arbitrary plateu of development stability, and it's now ready for others to begin actuall release engineering on.
You have to understand that the Linux Kernel is released by Linus in a state that is very reasonable for a development team, but that will never be "production quality". Debian puts a lot of realease engineering work into a Kernel. As does Red Hat. As does SuSe, etc, etc.
If you just grab 2.4.x and install it, you're acting as Linux Q/A, and I applaud your effort, but when it breaks in your environment, you should not be stunned.
Once again, production release != stable release. A stable release is just one the developers are happy with (and I've yet to see a 2.4 kernel that I can say developers should not be happy with).
So, maybe next time, 2.6.0 should be called the "post-development" release so that people don't go off half-cocked installing it on production systems.
FreeBSD VM, sync(3) OK for 10 YEARS. (Score:4, Interesting)
What you have here is a LARGE body of (l)users and a small cadre of kernel hackers who are separated and out of communication. The three times I've found a problem in FreeBSD-STABLE (to be honest, I think one was in 2.2.6 and two were in the 3.x branches), I sent my bug report in with instructions on how to repeat the problem, and a patch for the ugly hack I made to make the problem go away. I never beat the committers to the bug. Now, before I do all that work, I watch the WebCVS for commits for a couple of days and email the committer who touched the affected files last with a quick question: "hey, I see this problem... have you?"
I tried to read some Linux (the kernel) code a long while ago. There were some funny comments, error messages, and variable identifiers, but otherwise it gave me a headache. I just felt that being a Linux participant was beyond my tolerance. I browsed the FreeBSD source code, though, and even in the midst of reorganization both organising schemes were apparent and well documented, and there is a clean style to all the FreeBSD code I've seen that makes for (relatively) easy reading.
I guess the difference is culture, but in this case it seems to be a serious problem. I have to wonder why the Linux kernel people haven't broken the Linux VM code out for modularity and borrowed the FreeBSD VM as an option. I mean: FreeBSD is free-as-in-beer and also free-as-in-software. I'm not volunteering, but after all these years, wouldn't it make sense to hack out some Linux wrappers for the FreeBSD VM system? I think I remember reading a Matthew Dillon interview where he talks about all the good work he's done on the FreeBSD VM recently...
Why 2.4 was released (Score:3, Interesting)
There is a relatively small number of people who use the odd-numbered, experimental kernels. At the end of 2000, it was becoming clear that having the same people running the development kernels on their hardware wasn't fixing many more bugs; I remember a post from Linus to LKML to that effect.
2.4 was stable enough for mass consumption, and so it was released. However, it is important to remember that this is free software, and frequent incremental updates are the rule. Free software can't work if it is not constantly evaluated by users, bugs reported and fixed, and new versions shipped out.
Software is an evolutionary process; it is important to remember that free software (especially the Linux kernel) fully embraces this notion.
He Almost Had Me (Score:5, Interesting)
The kernel seemed to show more stability. Then we hit kernel 2.4.15.
Linux version 2.4.15 contained a bug that was arguably worse than the VM bug. Essentially, if you unmounted a filesystem, via reboot or any other common method, you would get filesystem corruption. A fix, called kernel 2.4.16, was released 24 hours later.
Look, anybody who is deploying a kernel on the day it is released on a production server deserves what they get. One day turnaround on a bug fix is phenomenal. Even if these are marked as "stable" kernels, trying to track the new versions in real time is a dumb thing to do.
This guy has written a moan-and-groan article based on a small set of bugs, some of which he could only have experienced by experimenting on his production system. He obviously requires extreme stability and says he needs that more than the new 2.4 features (SMP, 2G memory, 2G files), which makes me ask: why was he putting new kernels on his production system before there was empirical evidence of high stability?
Open source will fix bugs faster than proprietary software will. That doesn't change reality and make bugs impossible. This is true even in "stable" releases, especially if you are talking about highly stressful production environments.
hmm.. (Score:3, Insightful)
"...when I upgraded from 2.2 to 2.4 (Mandrake 7.2 to 8.1), I had (still have) many stability problems..."
"...I don't know what compelled Joshua to choose Mandrake, whose "bleeding-edgeness" usually keeps them a bit unstable and unpolished..."
"...He's using MANDRAKE on a SERVER. For crying out loud, you don't use Mandrake on a server. Get something realistic like Slackware or Debian, and if you want to be a idiot use redhat, not Mandrake..."
"...First of all, who would used Mandrake for a server. We are talking about an installation that is meant for a laptop environment..."
"...i was running a 2.4 on mandrake 8.0 and had nothing but problems...."
"...I've noticed that most of the comments both in the article and others complaining about the 2.4.x kernels and various stability problems are running RedHat [redhat.com], Mandrake [mandrake.com], and even Debian [debian.org] Distros..."
huh..
Perhaps we have a Mandrake issue, here, and not really a 2.4.x kernel issue, and certainly not, as the few M$ trolls have tried to suggest, a Linux issue...
dunno..
Food for thought.
t_t_b
I can't resist (Score:4, Funny)
Cue martial music
Re:Been running fine for me (Score:2, Funny)
Throwdini the Great thinks:
Cliffom could have saved 78 characters by simply writing: "Me, too."
*rimshot*
Re:Been running fine for me (Score:2)
not for a productive use server...
Freudian slip? I have to agree, though, the biggest productivity killer I have is SameGnome...
Re:The Old Question. (Score:2, Funny)
Heh (horribly OT) (Score:4, Funny)
Well about two months ago
I found Garage Days Re-revisited
on tape in a used record shop
for about ten dollars
I came back two weeks later and
found Kill 'Em All with the two extra
songs-on tape for 3.50
I came back last week and found a rare
Soundgarden CD (Badmotorfinger w/the
Somm EP) for around 15.00
SO, hope is alive, those albums are still
floating around in some form
Re:Heh (horribly OT) (Score:2)
Is that what we're supposed to do with the marble? I guess so.
Debian Testing w/ 2.4.17-K6 (Score:2)
as they've been made available.
You're absolutely right, though: the Woody base install still uses 2.2.19(?); the 2.4.x kernels are available options. I still keep 2.2.19 in lilo as an alternative in case I run into any problems, but once I got all the module configs fixed for 2.4, there's been no need to use it.
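That fallback setup is cheap to keep around; a sketch of the lilo side (paths and labels are examples), rerunning /sbin/lilo so the new stanza is bootable:

cat >> /etc/lilo.conf <<'EOF'
image=/boot/vmlinuz-2.2.19
    label=old
    read-only
EOF
/sbin/lilo   # rewrite the boot map; type "old" at the LILO prompt to fall back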
Bob-
Re:Kernel too big? (Score:2, Informative)
Uh... You can compile USB and many other parts as a module.
Re:Kernel too big? (Score:2, Interesting)
This is my opinion:
optional hardware, devices, peripherals -> modules
hardware found on most x86 machines -> built in
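In practice the module route costs almost nothing at runtime; a sketch on a 2.4 kernel (usb-uhci is an example, your chipset may want uhci or usb-ohci instead):

modprobe usb-uhci   # load the USB host controller driver on demand
lsmod               # confirm it's loaded
rmmod usb-uhci      # unload it again when unneeded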
Re: (Score:2)