Check out Documentation/Changes. You'll probably need to upgrade binutils, modutils, e2fsprogs, and PPP (if you're running PPP). The file has pointers to applicable versions.
If you're comfortable compiling a kernel, it shouldn't be any trouble.
You could always try running the upgrade from a RedHat 7.1 disk. It has worked well for me in upgrading 6.2 boxes. It is also a hell of a lot easier than upgrading all the individual packages. IIRC RH 7.1 ships with the 2.4.2 kernel; an upgrade to 2.4.x from that is a snap. Of the boxes I've upgraded, some have new, custom kernels and some are still running the stock RH kernel, which seems pretty solid. I did do an upgrade on one of the systems manually (not quite manually, lotta RPMs involved, some compiling) and it took at least 5 times as long as simply running the upgrade from a current CD.
Not THAT easily.. you'd most likely need to upgrade critical system tools and utilities like binutils, util-linux, modutils, maybe even gcc. For kernel 2.4.10, these are the needed versions of those and other packages:
The Changes file is more complete though... read it to know the other changes you might need to make. Oh, and the recommended version of glibc for kernels in the 2.4 series is 2.2.x, so you might want to upgrade that as well, though it isn't required.
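If you want to sanity-check your installed tools against Documentation/Changes before upgrading, a little script can do the comparison. This is just a sketch: the version numbers in the table below are illustrative placeholders, not the authoritative minimums from the Changes file, so check your kernel's own copy.

```python
import re
import subprocess

# Illustrative minimums -- take the real ones from Documentation/Changes.
REQUIRED = {
    "ld -v": "2.9.1",       # binutils
    "insmod -V": "2.4.0",   # modutils
    "tune2fs": "1.19",      # e2fsprogs
}

def version_tuple(text):
    """Pull the first dotted version number out of a command's output."""
    match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", text)
    if not match:
        return None
    return tuple(int(part) for part in match.groups(default="0"))

def check(command, minimum):
    """Run `command` and report whether its reported version meets `minimum`."""
    try:
        result = subprocess.run(command.split(), capture_output=True, text=True)
    except FileNotFoundError:
        return False
    # Some tools (e.g. tune2fs) print their version banner to stderr.
    out = (result.stdout or "") + (result.stderr or "")
    found = version_tuple(out)
    return found is not None and found >= version_tuple(minimum)

if __name__ == "__main__":
    for cmd, minimum in REQUIRED.items():
        status = "ok" if check(cmd, minimum) else "UPGRADE (or missing)"
        print(f"{cmd:12s} needs >= {minimum}: {status}")
```

The kernel tree itself ships a similar shell script (scripts/ver_linux) that prints most of these versions in one go.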
On the machine where I used to work I tried the manual route to update RedHat... I remember breaking a lot of stuff.
Humm... a slight quandary over what to do, as I have 3 *ell PowerEdges, all dualies, varying amounts of memory. (New job, don't want to screw anything up.) One runs the webserver, one the Samba and PowerVault, and one I just re-did is soon to be a Win2k box for GIS stuff and SQL Server (hope it can take the strain; think it might be just a single proc... damn).
Sigh, worst part is that RedHat is standing up against the SSSCA, but *IF* I redo the webserver and Samba box I was leaning toward Slackware... Ack, technical and "moral" dilemma. (Error, Error....)
Anywho, after years of using and installing both, IMO mind you, Slackware if you want the flat out speed (shared libs) and RedHat (Mandrake too...nice installer, btw) for compatibility and ease of install and a few nifty utilities. {my own observations...some or no basis in reality...you decide}
(Side note-- tooting my own horn time: Had Slack, RH, BeOS, 98se and 2000 on one box, if that isn't computer abuse, dunno what is!)
Safest thing to do is back it up or do a disk dump/re-mirror the disk and see what happens. High "pucker factor" either way because as far as I can tell everything is running perfectly.
Moose.
Two worst things you can do in a position of authority:
1) change too much
2) change too little.
(how true, how very, very true)
here to whore and to reduce stress on the servers!
final:
- Jeff Garzik: net driver updates
- me: symlink attach fix
- Greg KH: USB update
- Rui Sousa: emu10k driver update
pre6:
- various: fix some module exports uncovered by stricter error checking
- Urban Widmark: make smbfs use same error define names as samba and win32
- Greg KH: USB update
- Tom Rini: MPC8xx ppc update
- Matthew Wilcox: rd.c page cache flushing fix
- Richard Gooch: devfs race fix: rwsem for symlinks
- Björn Wesen: Cris arch update
- Nikita Danilov: reiserfs cleanup
- Tim Waugh: parport update
- Peter Rival: update alpha SMP bootup to match wait_init_idle fixes
- Trond Myklebust: lockd/grace period fix
pre5:
- Keith Owens: module exporting error checking
- Greg KH: USB update
- Paul Mackerras: clean up wait_init_idle(), ppc prefetch macros
- Jan Kara: quota fixes
- Abraham vd Merwe: agpgart support for Intel 830M
- Jakub Jelinek: ELF loader cleanups
- Al Viro: more cleanups
- David Miller: sparc64 fix, netfilter fixes
- me: tweak resurrected oom handling
pre4:
- Al Viro: separate out superblocks and FS namespaces: fs/super.c fathers
fs/namespace.c
- David Woodhouse: large MTD and JFFS[2] update
- Marcelo Tosatti: resurrect oom handling
- Hugh Dickins: add_to_swap_cache racefix cleanup
- Jean Tourrilhes: IrDA update
- Martin Bligh: support clustered logical APIC for >8 CPU x86 boxes
- Richard Henderson: alpha update
pre3:
- Al Viro: superblock cleanups, partition handling fixes and cleanups
- Ben Collins: firewire update
- Jeff Garzik: network driver updates
- Urban Widmark: smbfs updates
- Kai Mäkisara: SCSI tape driver update
- various: embarrassing lack of error checking in ELF loader
- Neil Brown: md formatting cleanup.
pre2:
- me/Al Viro: fix bdget() oops with block device modules that don't
clean up after they exit
- Alan Cox: continued merging (drivers, license tags)
- David Miller: sparc update, network fixes
- Christoph Hellwig: work around broken drivers that add a gendisk more
than once
- Jakub Jelinek: handle more ELF loading special cases
- Trond Myklebust: NFS client and lockd reclaimer cleanups/fixes
- Greg KH: USB updates
- Mikael Pettersson: separate out local APIC / IO-APIC config options
pre1:
- Chris Mason: fix ppp race conditions
- me: buffers-in-pagecache coherency, buffer.c cleanups
- Al Viro: block device cleanups/fixes
- Anton Altaparmakov: NTFS 1.1.20 update
- Andrea Arcangeli: VM tweaks
The Preemptible Kernel patches [tech9.net] can result in a desktop that reacts/feels faster... I'm running it here with 2.4.10 on an Inspiron 4000 laptop and I'd have to say I'm impressed - everything feels a bit zippier. The only problem I've had is that there seems to be some loop that it has optimized that blasts bits around the memory bus at high speeds with a rhythmic pattern - in short, if I'm in a really quiet room the high-pitched buses are a bit noisy... Maybe my hearing is too good!
Anyway - doesn't look like much changed since pre-6 so the pre-6 patches should work but if you want to be sure you can wait until rml releases the 2.4.11 final patch. I'd recommend checking it out if you have the time...
These sound real good. Is there a reason that these patches are not the default behavior? Is there a downside to having a preemptible kernel? Everyone who runs these patches says they are awesome.
I'm assuming that it's not in 2.4 because it probably changes a lot of things and needs to be done in 2.5.
These sound real good. Is there a reason that these patches are not the default behavior? Is there a downside to having a preemptible kernel?
AFAIK, there are two reasons why these patches aren't in the default kernel. First, I understand that it decreases latency at the price of slightly decreasing throughput. The second is that though the patch is small, its effects can be complex and nobody's too sure it doesn't have any bad side effects (crash, oops, ...), especially on SMP systems.
In practice, though, the only real disadvantage seems to be the "newness" of it. Performance, for the tests that have been conducted, seems to be the same or even better.
Having a preemptible kernel makes things feel faster because what you're doing right now is getting serviced the most, but the overall system performance is actually decreased a bit.
Which is bad why? The important thing is not (always) some arbitrary absolute measurement of "speed", but rather the apparent (to the user) speed of the system. If you're reading mail, you probably won't care, or even notice, that your compile takes 49 seconds instead of 47.
That ftp site owner will be a bit perturbed, however.
So he can just use a regular kernel. I assume something like this would be made a kernel configuration option, something like SMP (though I admit I haven't actually looked at the patches themselves, so I don't know how feasible this is); then you'd just turn it on for a workstation and off for a server, or something like that.
Not really. The tests that have been done so far show no perceptible performance decrease. In theory, it should be slower, but I think any performance decreases from the patch are getting lost in the noise.
The PE kernel work looks pretty good, but it's still got some kinks to work out in order to guarantee sub-5ms latencies. In a recent email to alsa-devel, Takashi Iwai posted the following tests with alsa and low-latency [alsa-project.org] versus preemptible kernel [alsa-project.org] patches. In summary, getting better, but not quite there yet.
I definitely agree with you though, the PE people's work is exciting, and much less of a hack than the low-latency patches. Way to go hackers!
For things like playing buffered video and sound, where you just need to get the CPU every few milliseconds, I would think that the system call code paths are not so long that you really need a preemptible kernel. I would expect that it would be enough to just change the time quantum from 1/100th of a second to, say, 1/5000th, by changing "#define HZ 100" in include/asm/param.h to "#define HZ 5000". I have not tried this, but this sort of thing has been discussed on the linux-kernel mailing list. One person there reported that doing this caused his Palm cradle to no longer be able to sync, so be warned that this seems to trip at least one bug.
As someone who has only looked through the preemptible system call patch and never tried it, my impression is that while it may be great, I expect its design to change a bit. Right now, under this patch, you build the kernel with basically a fixed number of fake CPUs that make your computer look like it has more CPUs than it does. The kernel being preempted causes the old kernel state to become associated with one of these fake CPUs, and then the preempting context takes over a real CPU. [I'm really not doing justice to the code in this oversimplified and possibly misinformed description.]
In the future, I would hope that the need for a fixed number of fake CPUs would disappear and the "old fashioned" way of doing context switching would also disappear when the preemptible kernel option is selected. In other words, that would be the only way that context switching would normally occur, rather than having two ways of doing the same thing.
I have always regarded the potential for a preemptible kernel as the biggest side benefit of the move to SMP in Linux 2.0, and I'm glad to see people turning it into a reality.
However, maintaining the option of building a non-preemptible kernel may be worthwhile, at least for a uniprocessor kernel, because the preemptible kernel code relies on running a multiprocessing kernel (even on a uniprocessor), which has a slight performance cost in setting and releasing all those locks that never once experience contention.
I would expect that it would be enough to just change the time quantum from 1/100th of a second to, say, 1/5000th, by replacing the "#define HZ 100" in include/asm/param.h to "#define HZ 5000".
What are you talking about? The reason you get skips in sound and such is because the kernel hogs the CPU for a long time, using spinlocks (kernel 2.4) or by disabling IRQs and then doing a bunch of processing (older kernels). It's particularly bad during I/O storms, and thus the bad VM lately has caused people to complain about audio dropouts. Changing HZ is not going to do anything but make the kernel less efficient. Note that the current default is 1024 for some archs, which corresponds to about 1ms. Everyone sees latencies longer than 1ms on a regular basis, even with the low-latency/preempt patches.
If you're talking about spinlocks and you're running on a single-CPU machine (even with an SMP kernel), the kernel never blocks on a spinlock, because there is never spinlock contention (except for a kernel locking bug, where the kernel will lock up hard at that point). The overhead of checking the spinlocks is also very small (nanoseconds on that single-CPU system, especially since there is no cache snooping). So, the delays that are long enough to deplete sound buffers are going to occur because the granularity of time slices between processes is too long, not because of lock contention.
With HZ=100, the timer tick is 1/100th of a second (ten milliseconds), and any process running at a CPU priority of nice 0 (the standard), nice -1, nice -2 or nice -3 will get five ticks (see the definition of TICK_SCALE in kernel/sched.c), so each time slice will be 50ms, which begins to approach the buffer size of sound cards when you have a few runnable processes, and is already much longer than video frame rates.
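The arithmetic above is easy to check. Here is a sketch of the 2.4-era time-slice calculation; the "5 ticks per slice" figure is paraphrased from the TICK_SCALE logic in kernel/sched.c, so treat the exact constants as approximate:

```python
def tick_ms(hz):
    """Length of one timer tick in milliseconds for a given HZ."""
    return 1000.0 / hz

def timeslice_ms(hz, ticks_per_slice=5):
    """Approximate time slice: nice 0..-3 processes got ~5 ticks in 2.4."""
    return ticks_per_slice * tick_ms(hz)

# With the x86 default HZ=100, a tick is 10 ms and a slice is 50 ms --
# already longer than the 40 ms frame period of 25 fps video.
assert tick_ms(100) == 10.0
assert timeslice_ms(100) == 50.0

# With HZ=1024 (the Alpha default) a tick is roughly 1 ms.
assert round(tick_ms(1024), 2) == 0.98
```

This is why raising HZ shortens the worst-case wait between scheduler decisions, but it does nothing about a kernel code path that simply refuses to yield, which is the case the preempt patches address.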
I think you've got some of the details wrong. First, changing the hertz timer wouldn't help anything. When people say that Linux is not preemptible, it means that when a process is running in kernel mode (as the result of a system call), the scheduler will not preempt it. It runs until it voluntarily blocks. Even if the scheduler is called more often, all that would happen is that it would allow the process to continue running more often. The result of this is that the maximum scheduling latency is dependent on the length of the system call paths. Long paths (such as disk access calls) cause spikes in latency. What the preempt patches do is change SMP spinlocks into preemption locks. Each time a spinlock is taken, a preemption count is incremented. When it is released, the preemption count is decremented. Whenever the preemption count is zero, a context switch is allowed to happen. There is a good article here. [linuxdevices.com]
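The preemption-count mechanism described above can be modeled in a few lines. This is a toy sketch, not the actual patch code (the real patch hooks the spinlock macros in C), but it shows the invariant: preemption is only permitted while the count is zero.

```python
class PreemptModel:
    """Toy model of the preempt patch: spinlocks bump a per-CPU count,
    and a context switch is only allowed while the count is zero."""

    def __init__(self):
        self.preempt_count = 0

    def spin_lock(self):
        self.preempt_count += 1   # entering a critical section

    def spin_unlock(self):
        assert self.preempt_count > 0
        self.preempt_count -= 1   # leaving it

    def preemptible(self):
        return self.preempt_count == 0

cpu = PreemptModel()
assert cpu.preemptible()        # nothing held: may context-switch
cpu.spin_lock()
cpu.spin_lock()                 # nested locks just raise the count
assert not cpu.preemptible()
cpu.spin_unlock()
assert not cpu.preemptible()    # still inside the outer lock
cpu.spin_unlock()
assert cpu.preemptible()        # count back to zero: switch allowed
```

The nice property is that the existing SMP locking discipline already marks exactly the regions where preemption would be unsafe, so the patch reuses it rather than inventing new annotations.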
Actually, 1024 is about 10 times larger than 100. On BeOS, the timer interrupt is significantly faster (either 333 Hz or 4096 Hz, I forget) and it doesn't impact performance. Processors are so fast these days that the timer interrupt isn't really an issue anymore. The 1024 HZ was put in for the Alpha processor, and current x86 chips are a lot faster than the old Alphas. In fact, changing the HZ really doesn't affect anything in a normal kernel. Also, 10ms is a really long latency for audio purposes. If you want to do real-time audio, you'd better be down in the few-milliseconds range.
One thing to note, and I find myself saying this again and again, is that one of the simplest performance tweaks you can do is to negative-renice the X server. It's even mentioned somewhere in the X manual, and makes a hell of a difference.
This means that the GUI then pre-empts background tasks, like on Windoze, and other systems intended for desktop use. Of course you don't want to do that on a server machine, but only Microsoft are stupid enough to do it by default even on their "server" OSes.
I'd like to see "workstation" installs do it automatically, but there are a few small notes:
(a) if you renice it too low, it also ends up pre-empting audio tasks too much, and audio could conceivably skip when you move windows about. Shouldn't happen on today's reasonably fast computers. Easily fixed by careful tuning, perhaps including renicing important audio tasks too if your computer's really slow.
(b) If you're using the xfs font server, it needs tuning too - if it's starved of CPU time, then you might actually make text-heavy parts of the GUI slower, not faster. I really wish distros would stop using xfs, since TrueType support is now built into the X server, and server-side font support is being phased out thanks to XRender and Xft anyway.
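For reference, the renice itself is a one-liner as root (`renice -10 $(pidof X)`). The same thing can be done from Python via setpriority; since lowering a nice value below zero needs root, this sketch demonstrates with an unprivileged positive value, and the X-server pid and -10 value in the comment are illustrative:

```python
import os

def renice(pid, value):
    """Set the nice value of `pid` and return the value now in effect.
    pid=0 means the calling process; negative values require root."""
    os.setpriority(os.PRIO_PROCESS, pid, value)
    return os.getpriority(os.PRIO_PROCESS, pid)

# Unprivileged demo: make *this* process nicer (lower priority).
# To favor the X server you would instead run, as root, something like
#   renice(x_server_pid, -10)
new_value = renice(0, 5)
assert new_value == 5
```

Note that nice values only bias the scheduler; they don't give hard real-time guarantees, which is why the audio-skipping caveat in note (a) applies if you push the X server too far below your audio tasks.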
This one is pretty sparse. WHAT changes were made to the emu10k driver? Did they fix the bug that kills init on boot when you try to detect the game port? Did they update the way it reports, so that xmixer can control more things again? (What's with that, anyway? 2.4.2, I could control all sorts of stuff with xmixer. 2.4.10, I pretty much only have control over the volume.)
There are a few changes to the emu10k1 driver that may affect you:
- Mixer improvements (should add support for treble, bass, volume, and others).
- Fixed a deadlock in emu10k1_volxxx_irqhandler.
- Small code cleanup.
If you want the emu10k1 to work properly you'd better just go try ALSA [alsa-project.org]. In my ideal world ALSA will merge with the 2.5 kernel, but I wouldn't put money on it.
Back in 2.4.10, Linus made a fairly radical change in the virtual memory system - a rather unusual one for a stable kernel. While a lot of people are rather unhappy about it (notably Alan Cox and Rik van Riel, the maintainer of the existing VM system), from all public accounts so far it seems that the new VM system works considerably better than the old one.
So - - - Is that the case? Has there been any stability problems? Is the performance better (not that it really matters as a workstation user, but ... )
Performance under my normal working set (KDE 2.2 w/default theme + Mozilla nightly version + the CRiSP text editor + KMail + XMMS + GAIM + several xterms, with occasional compiles and runs of very large apps like Wine and XMame) is substantially better (faster, smoother, way less swapping) on 2.4.10 vs. 2.4.9. I should note I'm running 512 MB RAM and 640 MB of swap on 2 partitions, and the system barely ever goes to swap now (with the previous VM, just starting up that environment got me into swap and it quickly maxxed out the swap from there).
So while I do appreciate Alan Cox's caution, the new VM works substantially better for me and I say "Go Andrea and Al!"
I ran all the 2.4.x's, both at home and at work. I am a software developer (not kernel, though) and so I beat on my systems pretty heavily. both systems run dualhead X and my work system additionally runs hardware (dac960) raid. cpu is a k7 tbird, in the ghz range.
anyway, 2.4.9 was ok for me. I tried 2.4.10 and both my systems (home and work) locked up within days. hard tight lockup.
I brought both back to 2.4.9, and so far, so good (less than a week running, though; it was only a week ago I went to .10 and had those problems).
I, too, worry about 3k-line commits to so-called 'stable' trees to radically change an algorithm or model. Can't say for sure if .10 was really a dog for me, but my systems usually run for months and months before being rebooted (usually due to my swapping of PCI cards and such, necessitating a shutdown to do the board swap). So it does seem unusual for me to have a modern Linux kernel freeze on _both_ of my hard-working Linux boxes. Hmm..
I noticed exactly the opposite. w/2.4.9 I was experiencing almost daily lockups (hard ones, fsck became my friend). Today was my first lockup w/2.4.10 since I installed it. I was running a bunch of crap in X, compiling a kernel and upgrading to the latest and greatest Debian.
Machine went down hard as hell when I tried to logout of X.
I am currently compiling 2.4.11 so we shall see how that goes.
If I did something that would really load them down ie a load average above about 20 the systems would essentially freeze for 20 minutes or so and sometimes just never come back.
Same problems here. Maybe I am just superstitious, but I found banging on the keyboard and waving the mouse around a bit gradually brought it back to life. Perhaps the keyboard/serial interrupts got the kernel out of an infinite loop. :-)
It shouldn't surprise anyone that 2.4.10 VM performs better than 2.4.9. Even in terms of the "traditional" 2.4 VM from Rik, the Linus and Alan trees deviated starting around kernel 2.4.7. There were numerous complaints about the Linus tree missing important patches, and having contradicting patches applied. It ended up quite a mess, and VM performance reflects this. Alan's tree was much more conservative in this regard.
If you compare 2.4.11 to anything, please compare it to the latest -ac kernels from Alan, where the traditional 2.4 VM is actually working very well. There's NO sense in comparing 2.4.11 to 2.4.9; the VM in 2.4.9 and its kin was just plain broken.
Side note: In Rik's VM, please remember to not just look at swap used as a gauge of whether you're swapping or not. All anonymous pages are mapped to swap, so the space is simply allocated. You can create a huge image in GIMP [gimp.org] and lots of swap will be allocated, but without a drop of disk I/O! Use vmstat and look at the 'si' and 'so' columns to see if you're actually writing pages to swap. Or look in /proc/meminfo and subtract "SwapCached" from the amount of swap you think you're using. That's the amount of *written* swap you're using (a better comparison to 2.4.10).
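The subtraction described above is easy to script. A sketch that parses the relevant meminfo fields; the sample text below is made up for illustration, and the exact field layout of /proc/meminfo varies between kernel versions:

```python
def parse_meminfo(text):
    """Parse 'Field:   1234 kB' lines into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        if ":" in line:
            field, _, rest = line.partition(":")
            parts = rest.split()
            if parts and parts[0].isdigit():
                values[field.strip()] = int(parts[0])
    return values

def written_swap_kb(meminfo):
    """Swap actually written to disk: allocated swap minus SwapCached."""
    used = meminfo["SwapTotal"] - meminfo["SwapFree"]
    return used - meminfo.get("SwapCached", 0)

# Illustrative sample, not real output.
sample = """\
SwapCached:      2048 kB
SwapTotal:     655360 kB
SwapFree:      641024 kB
"""
info = parse_meminfo(sample)
# 655360 - 641024 = 14336 kB allocated; minus 2048 kB cached = 12288 kB written.
assert written_swap_kb(info) == 12288
```

On a live box you would feed it open("/proc/meminfo").read() instead of the sample string, and cross-check against the 'si'/'so' columns of vmstat.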
This needs to be made sensible in 2.5, if this VM is to be resurrected.
Andrea's work has cleaned up the handling of inactive pages (which could have been done under the old system), but the new "classzone" approach and VM balancing aren't documented anywhere outside the code itself. In addition, there are very normal loads where it performs badly compared to the -ac tree. Here is a test suite [arizona.edu] that tests different aspects of aging and swapping, and the results as provided to linux-kernel [helsinki.fi]. 2.4.10 (patched with Andrea's VM tweaks) swapped more pages, took longer, and had to swap more pages back in when the tests completed (i.e. it could have chosen better pages to swap out). It also caused XMMS to skip mp3 playback throughout the tests, whereas -ac didn't.
Nothing's perfect of course; a process that randomly walks through pages performs better in 2.4.10 [helsinki.fi] since it's more streamlined and not trying to be as "intelligent" about page handling. Rik's code could no doubt be improved here.
That's the great thing about open source: let the best idea win! No doubt in 2.5 we'll see these two VM schemes hash it out in much more complete form (i.e. lose the remaining kernel 2.2-isms, maybe add physical page mapping, almost certainly swapfs -- mostly for Rik's scheme; I'm not sure what the next steps for Andrea's VM should be).
Damn! I just saw those today! I was getting very, very frequent Netscape crashes (and I wasn't even running Java or script..) and tars wouldn't verify, and on and on. And I looked at /var/log/messages and saw lots of those 'allocation failed' msgs.
But I'm running 2.4.9, so I'm not sure .10 is the cause of those.
You can check your RAM chips using memtest86 ( http://www.teresaudio.com/memtest86/ ). Using this program I could detect a very tiny problem in one chip that had caused one box to panic after over 20 days of uptime (also had an allocation problem message running 2.4.9, but Linux wasn't the culprit). This is a good tool to have, especially now that we have these huge and cheap RAM chips.. a tiny bit that's fscked up in there can be a real mess. The only thing is this testing can take ages on older CPUs, although major problems show up almost immediately.
Ruling aside the obvious objections (changing major subsystems in a so-called "stable" kernel, NIH syndrome) I can only assume Alan's objection is that it was yet another really neat thing developed (or sponsored) by rival Linux company SuSE (like reiserfs, which he also objected to)
The sooner Redhat stops leveraging its collection of kernel hackers to drive Linux kernel development, the better for the rest of us who don't care for their crappy distribution.
I don't think RedHat had ANYTHING to do with Alan's objections. For one, numerous people have reported severe stability problems with Andrea's VM, which are things that simply should not occur in a so-called "stable" kernel. This kind of experimentation should be occurring in 2.5, not 2.4. The problem is that the VM got almost no testing before being rolled into 2.4.10. The same was true of ReiserFS when it was introduced into 2.4 -- it, too, had a number of problems, including stability and data integrity problems. Rik's VM is not very good, granted, but 2.5 is the proving grounds for new features, not 2.4.
Ruling aside the obvious objections (changing major subsystems in a so-called "stable" kernel, NIH syndrome) I can only assume Alan's objection is that it was yet another really neat thing developed (or sponsored) by rival Linux company SuSE (like reiserfs, which he also objected to)
That's a very strong allegation, and you'd better have some solid facts to back it up. I don't care for RedHat but I have great respect for Alan Cox. His objections seem valid to me. I'd also be very reluctant to do a major change in the stable release of any software, especially if I was the primary maintainer (like Alan Cox is for Linux). You'd better come up with some concrete evidence to justify your claim, or I'll assume you are just trolling.
My personal feeling on the new VM is that it was the right decision. The VM problems have been going on for months. When people would report a problem, Rik would pretty much say: I don't have time to work on so-and-so.. feel free to pay me or convince my employer to fund the work. Which is fine, that is his choice... But if I was Linus, this would make me more open to looking at alternative approaches even if the short-term risks were moderate.
It is also interesting to note that Rik's VM core has had, say, 15 kernel releases (unstable + stable) to become stable and live up to the expectations that Rik sold the kernel hackers on in the first place, and judging from the reports on LK it is just now becoming stable enough for most workloads.
The new 2.4.10+ VM had a couple minor-to-moderate problems for _SOME_ workloads but overall has received very good reports, as far as I can tell, for being so new. 2.4.11 is bound to be even better.
Some people are complaining about the inclusion of major VM modifications in the stable tree. I believe the truth is that most people who use Linux in production do not roll their own kernels. They use the vendor-supplied kernels.
Redhat for example will be releasing a 2.2.7-11-AC kernel which uses Rik's VM; it is what they have been testing for months and thus is what they will end up shipping. So the fact that Linus made this change in the "Stable" tree makes very little difference to me from a stability standpoint, and I think it will prove to be a very good call in the short/medium/long run.
VM in 2.4.10 is absolutely broken. The LKML is rife with reports of hangs, strange behavior, evil performance, etc, under heavy loads..
Pretty much fixed in 2.4.10-ac10-eatcache. Almost as fixed in 2.4.11, but more work definitely needs to be done before a company like RedHat will be willing to ship one of these kernels with the new VM code.
The problem with Rik's VM was Rik. He has been an arrogant piss ant for as long as I've been watching the list. He obviously ain't no dummy and I have no problem with working with people like that but I think Linus was itchn' to get that monkey off his back. They were applying all sorts of desperate patches ("tuning") and falling all over each other in the process. They just don't know why his VM goes off into la la land under high loads. What do you do about that? Stable or not Andrea totally rewrote the VM in like 5 weeks. Sometimes rewriting something from scratch like that is just the Right Thing to do. Linus saw that on the surface it worked better than Rik's and took it as a blessing. Sure 2.4.10 was bleeding before it left the gate and immediately needed triage (anyone running 2.4.10 should get this release patch folks) but so far it's not been a disaster like some people have been warning about. In fact most people claim it's quite a bit better than Rik's. If you've been using 2.4 without luck, try this one folks.
Linux has always been forked in certain areas. Just never the VM before. Not to mention Alan has stated on LK that he is going to stick with Rik's VM until the other one proves out. He did NOT say that he will keep it until 2.5.
My reference is his post on LK when someone asked him directly what his plan was for 2.4 once Linus hands it off to go work on 2.5. Alan stated he does not know, but he does not expect the new VM to be in a predictable state for a while.
This post was in the last few days, so it shouldn't be hard to track down and verify. I would do so but I am off to bed :)
But it's interesting to see that Rik's VM did NOT start to perform well until Linus adopted Andrea's VM. Rik started posting a bunch of fixes to Alan Cox just around 2.4.10pre7, or whenever Linus made the change.
I believe the truth is that most people that use Linux in production do not roll their own kernels
I don't think you're right there at all. Companies are more likely to tweak the default installation, recompile the kernel for a known set of hardware, and then roll out a "company standard", using for instance RedHat's kickstart scripts.
Using the stock kernel is made very difficult, at least for RedHat users. RedHat's ongoing refusal to support reiserfs while installing (only recently while upgrading), and shipping (at least with 7.1, from memory) a reiserfs module that was significantly slower due to debugging being left on, makes kernel recompilation necessary.
I can understand their reservations, but faster fsck times aren't the only reason to move away from ext2.
When people would report a problem, Rik would pretty much say: I don't have time to work on so-and-so.. feel free to pay me or convince my employer to fund the work.
From what I understood, Rik was making changes/fixes but Linus was not applying them. Alan Cox was saying he was tired of resubmitting the same VM changes to Linus. I only lightly read the kernel mailing list, but if this is accurate, then it is really Linus's fault for the behavior of the old VM. From what I understand, the VM in ac kernels is not bad either and it is based on Rik's VM work.
Being someone who reads all the posts from the core kernel hackers (at least those that are public), I feel pretty confident in saying that for the longest time Rik was too busy to fix bugs. Again, he has this right.. Once other people started writing VM code (I think it started once pushonce was being tested by Daniel Phillips), Rik has been churning out code at the rate he was in the pre-2.4 release days (back when he was bidding to get his code included). So I would not be surprised in the least that once pushonce was included, Rik's patches have been ignored... the reason for which should be obvious: Linus decided to take another direction with the VM, and Rik's patches were incompatible with that direction.
The new Andrea VM is *much* smoother and more reliable for me in my standard desktop "working set". My machine has 512 MB RAM and 640 MB swap. I run KDE 2.2 and normally have Mozilla, KMail, the CRiSP editor, XMMS, GAIM, and a sprinkling of xterms open and doing stuff. I update and compile several large projects frequently including Wine and XMame.
Prior to 2.4.10, this resulted in the machine gradually filling all swap and then becoming very slow. With the 2.4.10+ VM my system rarely if ever touches swap, and when it does it often eventually comes back out of swap when necessary. It's overall much faster and smoother, and my HD runs less. I haven't tried any of the late-model AC kernels where Rik actually started fixing his problems (spurred on no doubt by Linus giving up on him) - they may also run well too, I don't know.
What I do know is 2.4.10 and .11 are among the smoothest kernels I've run since back in 2.2 (as Alan points out, Andrea was ultimately responsible for smoothing out 2.2's VM as well).
One caveat with 2.4.11: starting with 2.4.11pre5 it plays very poorly with USB MS Intellimice. I have to unplug mine while booting 2.4.11 or else I get a continuous scroll of errors and no further boot progress (plugging it back in later resulted in normal operation including in X, but I'm still wary of the updated USB drivers).
Hmm, I don't know if that's your problem, but your swap/RAM distribution has problems. Rik has pretty much flat-out said that your swap needs to be at least twice as large as your RAM or the VM stops working right. Either way, though, 2.4.10 does run much nicer on my end too.
X chews up RAM quickly. Add Mozilla or Netscape and it is worse. I switched from 64 to 128 MB RAM a while back and noticed a world of difference. Mainly, my machine stopped swapping so often.
That said, I have no idea what you need 512 MB RAM and 640 MB swap for, but I am assuming it is a database or webserver with lots of dynamic content. With X and related stuff I rarely (never) use swap with 512 MB of RAM.
Don't confuse NT's need for swap with Linux's. NT aggressively swaps everything to disk to ensure that there is always lots of free RAM; I believe win95/98 is worse. Linux (and BSD), on the other hand, only swap when more RAM is needed.
My main machine with 512 MB of RAM rarely swaps. So rarely, in fact, that I can't remember the last time I checked and saw swap in use.
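For what it's worth, checking is easy; this little sketch just reads /proc/meminfo (it assumes a Linux /proc, and the values are in kB):

```shell
#!/bin/sh
# How much swap is actually in use right now, straight from /proc/meminfo.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
swap_used=$((swap_total - swap_free))
echo "swap in use: ${swap_used} kB of ${swap_total} kB"
```

If the first number stays at or near zero under your normal workload, you're in the same boat.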
I believe the truth is that most people who use Linux in production do not roll their own kernels; they use the vendor-supplied kernels. Red Hat, for example, will be releasing a 2.2.7-11-AC kernel which uses Rik's VM; it is what they have been testing for months and thus what they will end up shipping.
Anyone running a production site worth their salt will be running their own kernel base, tuned for their own environment. The vendor kernels are a compromise, trying to please everyone, with every service you could ever imagine compiled in (and hence every bug/exploit included). Production boxes doing serious work are more likely to have a kernel built for the purpose. Vendor kernels are far more likely to be used by people who are not that bothered about kernels and stability.
FWIW, my production boxes run a heavily patched 2.2.19.
Gotta love SDSL. I hope to have 2.4.11 running on my workstation tonight and on the servers at work tomorrow.
Anyone happen to know if there's an RH 7.X-friendly .rpm available for those who are too timid to compile and install their own kernel? Several folks at my office will only install .rpm kernels. Would be nice to get 2.4.11 going at work as soon as possible. I only know a small amount of rpm voodoo, so I suppose I'll give it a shot if one isn't already available.
Stable branch or not.
You really should NOT run production servers (the ones at work anyway) on the latest and greatest kernels.
Who knows what data-corrupting bugs are in a new kernel? I recall a kernel released a few years back that corrupted data over time. (Albeit that was in the testing branch, 2.1.44, but it's a matter of principle.)
At least set it up on test servers first before launching on production servers.
Do yourself (and us) a favour, try before you buy.
NTFS, NTFS, NTFS, boys. In a year or two most systems out there will have it via XP, and Linux will be catching up to support it. We can make a run at the majority of the NTFS 5.0 changes now, so at least people will be able to access their drives.
Is that so? I have 4-5 NTFS partitions and my fresh install didn't understand any of them (couldn't mount them at all). I'm a Linux newbie, but I have some friends who practically live in the OS, and their understanding is that NTFS support is "not good": "there, but not good".
There's been a read-write driver since 2.2.x (at least), but it was marked experimental and could easily cause filesystem corruption. A lot of work is being done on write support right now. Of course, since NTFS is undocumented and Microsoft keeps making subtle changes to it, it's very hard to get stable read-write support. Read-only works perfectly, though. I've used it myself way back when I had NT 4 on my machine.
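For the read-only case, a typical /etc/fstab entry looks something like this (the device and mount point here are just placeholders; adjust for your own partition layout):

```
# read-only NTFS mount; the umask strips write bits so files don't show as writable
/dev/hda1   /mnt/ntfs   ntfs   ro,umask=0222   0   0
```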
I think that was one of the things causing some MM problems under heavy loads. Have they gotten rid of this yet? I think it was gone just after 2.4.10. But, I don't like the sound of "resurrect oom handling" in the 2.4.11 changelog.
I don't think that anyone takes the kernel versioning as seriously as they used to. I thought that stable kernels were not supposed to include any really new core features but mostly just bug fixes and perhaps new drivers, etc.
Rik's VM should have either showed up in the 2.3 tree and been stabilized there before entering 2.4 or the 2.5 tree should have been opened with it. I guess since 2.4 had to be pushed out the door (and I'm glad it was) there was no time for his VM to mature inside 2.3. But would it be worthwhile to let those ideas stagnate? So much really new activity has been going on since 2.4 that perhaps it would be too hard to manage 2.4 and 2.5 kernels with lots of active development going on both simultaneously.
It seems to me to be a hard management decision to make. The 2.4 series needed a lot of fixes and at the same time there has been a lot of new stuff floating around. Would introducing the 2.5 a few versions ago have slowed development on 2.4 and increased overall patch-management headaches? I suppose the answer is yes but I don't have an idea about how badly it would slow things down.
I do think, however, that it is wonderful to have both Linus and Alan Cox around and maintaining diverging credible trees. They can both gain perspective watching the other's code grow and break. When the two trees do finally merge again we (hopefully) will have the best characteristics of both.
I lost faith in Linux versioning and tree management a looooong time ago. I pretty much stick with distribution kernels these days. There are several things wrong with the current process, which could be fixed.
There needs to be OVERLAP of development kernels. For example, when 2.3 turned into 2.4-test, the 2.5 branch should have IMMEDIATELY shown up. That way, there is always a place for those who are good at doing new stuff and a place for those fixing what's there. This also greatly improves turnaround time. Also, Linus sucks at maintenance. He's good at development, but not at stabilizing and maintaining. Alan Cox is wonderful in that area. The _instant_ 2.3 became 2.4-test, the reins should have been handed to Alan Cox, to be released as 2.4.0 whenever Alan said it was ready. That way, Linus can spend his time dreaming up wonderful things and Alan can make it all work.
Anyway, I'd post this to LKML, but I don't have time to be a kernel hacker myself.
To an outsider, it would look like the answer is Yes:
The AC and LT source trees are diverging, on issues sometimes technical, sometimes 'political'. Will there ever be a full merge?
The 2.5 tree is not coming out, and 2.4 keeps merging huge and 'revolutionary' patches.
It seems to me that the model which worked so well for the 2.2/2.3 series is not working anymore. In true bazaar fashion, a new model is already trying to define itself, and the AC and LT trees may be part of it. Maybe it is just time to admit it and try to define the new model a bit more clearly, if possible.
During the stable life of 2.4.x it became more or less clear to me that the current model of development for the Linux kernel doesn't work very well.
Changes that were too experimental for a stable kernel but too important to be deferred to an experimental kernel were included in 2.4.x all the time (the VM changes in 2.4.10 being the best example).
This makes me wonder: isn't it possible to improve the scheme of x.even.y = stable and x.odd.y = unstable? Even as we speak the -ac series provides an experimental kernel within the stable series. Maybe we could enhance this model into something more official.
I'm not sure about the actual form yet. I was thinking about something in the line of three kernels:
Stable: users should be able to rely on this blindly. This kernel works. Each and every release.
Testing: this kernel should evolve into the next stable kernel. More ambitious than the current -pre kernels; longer running development and more testing. Yet, nothing really radically new.
Experimental: playground for hackers. New features are introduced here.
The 'Testing' branch is new. I imagine these kernels being released every month or so, at about the rate the stable kernel is released now. As soon as the Testing kernel proves something works and is stable, it's up for inclusion in the stable kernel.
Stable kernels should IMHO be lower-paced: maybe a major release every four to six months or so. The VM is allowed to change radically, but only after having been tested extensively in the Testing series. Of course, simple bugfixes should be allowed in. This would still give us a stable kernel every month; it just wouldn't be a terribly interesting one, which is as it should be.
The Experimental kernels are as experimental as the current x.odd.y series.
What you describe is like the current Debian branch model, which seems to work very well.
If stability is your highest priority, you stick to the stable release, which is pretty much guaranteed uptime. So it's good for important servers. Can get out-of-date quickly, though, depending on your needs.
For desktops and less mission-critical servers, the testing release suits. Get the (almost) latest software, and retain a good level of stability.
Developers and masochists get to play with the unstable release.
Just like the majority of you readers, I am not a kernel developer. But I like to know what I'm running. My conclusion is that if you want a stable kernel, ignore Linus' tree and use the Alan Cox tree. To put it bluntly, 2.4.10+ really is 2.5, and you should only run it if you are prepared for some weird behaviour.
Now am I a troll? Hope not. I did get my info from Kernel Traffic [zork.net], which I've been reading for months. It is a very good, understandable and clear digest of all the important things that happen on the linux-kernel mailing list. If you use Slashdot as your only information portal about the kernel, you are *braindead*.
Ok, now my point - it is the VM subsystem. By now you should know that 2.4.x, until recently 2.4.10, used the VM code by Rik van Riel. That code has taken some time to develop, but you definitely can't blame Rik as the cause of all the 2.4 stability problems, or of the eternal delay of 2.5. But according to the l-k list, Linus himself made several errors in including Rik's patches, which indeed caused 2.4.7 and up to be unstable! Ok, now stop and think about this. Linus has an enormous responsibility. He didn't realize where the fault was, but he did perceive that the stable kernels were NOT stable. He knew that Andrea Arcangeli was still working on his own VM (that work improved Rik's VM too, back in 2.3. Not having a monopoly really does improve invention!) Then Linus made the big step: even in a *stable* series, he took in Andrea's VM and threw out Rik's. This is really an important decision, and I applaud it!
The only thing Linus should not have done is labeling this thing 2.4.10. It really is 2.5. [lwn.net] For the general public, that kernel was definitely everything but a stable kernel. Luckily a lot of problems have been solved since (2.4.11 is a hell of a lot better than 2.4.10), and I consider Andrea Arcangeli a really good coder, but actually I trust Alan Cox the most. He commented that Linus' recent kernels trashed several boxes of his overnight. Alan really sees the -ac tree as the stable one currently. I run 2.4.9-ac18 too, with the kernel preemption patch mentioned earlier, on a p2-233 with quite some load, and it doesn't show any strange behaviour. (The kernel preemption patch doesn't really do much here: I still get skips when I record an mp3 from my soundcard and switch desktops in the meantime. But I should not expect wonders. :))
One last thing: Rik van Riel's VM has improved *too*. Alan Cox picks up his patches very speedily. No more big bugs; Rik even added some optimizations in 2.4.9-ac16. I can't see that directly, of course, but overall the system is a lot more responsive than under 2.4.3-pre6, my last kernel before this one.
So my advice: use the ac-series [kernelnewbies.org] of the kernel. Linus has made some wise decisions. I think he should start 2.5 and leave 2.4 to Alan, before people go sulking about 2.4.10 versus the always-stable reputation of the Linux kernel.
Looking at the impact on 7.2... the big changes in the VM say something about the older VM that will no doubt be packaged with Red Hat. Hope they can get any issues with it nailed down, because their .2 series has always been rock-solid stable. Ahh well, there is always .3.
I'm still happily running 2.4.3 on everything, it still works as well as it did when I installed it.
As always, what compelling reason is there to upgrade? It's not like other OSes, where you have to unless you want major security or stability issues, and I have yet to find one app that has a kernel requirement.
Add to that the fact that Red Hat 7.1 is a major pain in the arse to upgrade without the blessed Red Hat rpm packages. (Hey, at least I got work to run Linux, and it had to be Red Hat for the support and the fact that the CEO holds some RHAT stock.)
If someone could come up with a decent way to install a current kernel on RH 7.1 without breaking everything that runs on startup (kudzu and all the other fodder), and without waiting for Red Hat to put one together and bless it, that would be great.
Other than that one issue, there is no reason for a corporate user or the regular user to upgrade the kernel.
XFree 4.1 requires a v2.4.10 or v2.4.11 kernel to use DRI/DRM. On the other hand, Xfree 4.0 doesn't work with v2.4.10/v2.4.11.
Other than that, the need for upgrading is mostly if you experience problems or have new
hardware.
AFAIK you can use make rpm to build an RPM of your kernel nowadays (new in v2.4.(some number > 3)). For Debian, the counterpart is make-kpkg, which has existed for ages.
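Roughly, the workflow looks like this; the commands are echoed rather than executed here, since they only make sense inside a configured kernel source tree, and the path and revision string are just placeholders:

```shell
#!/bin/sh
# Sketch: packaging a kernel build instead of installing it by hand.
# Echo-only; drop the echoes to actually run inside your kernel tree.
KSRC=${KSRC:-/usr/src/linux}
echo "cd $KSRC && make oldconfig dep"
echo "make rpm                                      # builds a kernel RPM (2.4.x)"
echo "make-kpkg --revision=custom.1.0 kernel_image  # Debian counterpart"
```

The resulting package can then be handed to the folks who will only install .rpm (or .deb) kernels.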
XFree 4.1 requires a v2.4.10 or v2.4.11 kernel to use DRI/DRM.
There's always the possibility that I could be missing something here, but... either I'm highly insane or you are very wrong. According to my XFree86 log, I'm running version 4.1.0 (released on June 2, 2001).
Wouldn't this mean that XFree 4.1 was released before there even was a 2.4.10 kernel? My X setup is the same one that came with Slackware 8.0, which ships with Linux kernel version 2.4.5. I've been playing Quake3 and Unreal Tournament on this setup for months now, DRI and all.
Does anyone know if the "Reiserfs cleanup" noted in the changelog includes big-endian support?
The base reiserfs code ONLY supports little-endian architectures (shame!). I recently put one of my PPC-based servers on the AC tree to get big-endian reiserfs support, but I've heard the AC tree patches have file-fragmentation problems. I'm a little nervous about going live with this thing because of the reported VM problems and a potentially flaky reiserfs.
I hadn't checked there today, so I missed the patch for 2.4.11-pre6. Kudos to Jeff, and thanks for the update. I guess the answer is that they still haven't mainstreamed the big-endian patches.
Anyone else out there find that after compiling 2.4.11 and then recompiling the nvidia kernel module, X wouldn't work? I did... I tried older versions of the nvidia drivers as well.
Let's hope what Linus said about other operating systems is not true:
5. What do you think of the FreeBSD 5 kernel and WindowsXP's new features from a clearly technical point of view?
Linus Torvalds: I don't actually follow other operating systems much. I don't compete - I just worry about making Linux better than itself, not others. And quite frankly, I don't see anything very interesting on a technical level in either.
I think if this is true, Linus is being extremely stupid in this regard. Many operating systems have had serious design flaws that permanently staggered their development. Paying attention to other, similar systems is a very important part of system development: it keeps you from making the same mistakes others have made.
As you already said, it is on kernel.org. Unfortunately, it seems to be missing on the mirrors at this point. At least the patch from 2.4.10 to 2.4.11 was missing the five or six times I tried ftp.us.kernel.org (at which point I gave up and hit ftp.kernel.org).
Maybe that's why it wasn't 'announced' yet on www.kernel.org: mirrors hadn't picked it up yet.
I would have included the changelog and mirrors link, but I had never submitted a story to Slashdot before, and the thought of having my first try be successful, AND about a kernel release at that, was too overwhelming. I had been having problems with 2.4.10, so I was often refreshing kernel.org with hopes of a 2.4.11 magically appearing... how nice. :-)
The two trees are very different in certain cases, and are likely to stay that way for a while.
The -ac tree has the following major additions:
- Uses the Riel VM (Linus uses AA)
- 32bit uid safe quota
- Ext3 file system
- PnPBIOS support
- Various PPro and Pentium workarounds
- Simple boot flag
- Faster x86 syscall path
- PPPoATM
- Elevator flow control
- DRM 4.0 and 4.1 support, not just 4.1
- CMS file system
- Intermezzo file system
- isofs compression
I don't think this will happen, at least not for OS X. It could become kind of hairy for tech support if your grandma starts recompiling kernels. If there are updates to the Mac kernel, you'll probably only be able to get them through Software Update.
Now if you just mean Darwin, sure you could upgrade that to your liking, but I think just upgrading Darwin with OS X on top could potentially break things in OS X (we'll never know, it's closed source).
IANAKH (I am not a kernel hacker), but I think 2.4.10 was a pretty big change (big VM changes; the VM has been bad/odd since at least 2.4.0, according to kt.zork.net). That leads me to believe there will be at least a couple more patches before 2.5 really gets going. I imagine it will take a little while for Alan Cox to really get with it and accept the new VM. (But I don't presume to speak for the man, so take that with a grain of salt.)
In any event, I doubt that the 2.4 series will go down in Linux history as anything great.
In any event, I doubt that the 2.4 series will go down in Linux history as anything great.
This is true; as the kernel has moved along, each new version has been more evolutionary than revolutionary. This is the way it is supposed to be: at some point many parts become stable and need little or no attention, such as serial and LPT ports. Things just work, and at that point each release is more or less adding support for current hardware, routine bug fixes and the occasional rewrite of one or more subsystems.
The 1394 support has been working - more or less - for quite a while, including backports to 2.2.x. The thing is, it is also a work in progress: developments in the 1394 subsystems (video/isochronous packets, storage systems, ip-over-1394, etc.) can have subtle effects on what works with a given card and peripherals.
Personally I still tend to rely more on the patches directly from the 1394 project (linux1394 on SourceForge) still, although the Mandrake 8.0 1394 stuff worked for me out of the box (mostly, except for a patch to the video driver for an NTSC camcorder). Haven't tried 8.1 yet, nor the stock kernels.
For powermacs and especially powerbooks, you'd better go with benh. For other powerpc you might want to use paulus. I don't think the stock kernel is the best choice for any ppc.
does anyone know when this will be included in redhat or mandrake? i dont know how to make a kernel myself. the howto was of no use either
Never. They will likely use their own tweaked versions. In particular, Red Hat will use an Alan Cox variant. When they release 7.2, for example, it will have one of the more stable versions, tweaked and hamstrung in various ways to their taste. The kernels you get from kernel.org are kinda raw: you have to know your config and make adjustments on the fly. Stick with the post-processed distro kernels.
If you want to stick with a debian-based distribution, I recommend trying a straight debian install again.
After Mandrake pissed me off for the last time, I decided to give a bunch of distributions a try. This was earlier this spring, when a bunch of new distribution versions were coming out. I tried progeny, libranet, debian, redhat, mandrake again, and maybe a couple others that I can't remember.
The first time I installed Debian, I downloaded the stable ISOs. I definitely didn't want to stick with stable, but I couldn't find an unstable ISO, so I installed stable. I had problems with the dist-upgrade, so I decided to do a network install. Even though the Debian installer doesn't do much in the way of hardware detection, and it took a couple of tweaks to get everything right, I'm very satisfied now. All my boxes at home and work run Debian now.
Overall, if you want something that is going to auto-detect your hardware, and basically do the install for you, go with RedHat. If you want something that is going to be very easy to maintain, go with Debian.
Anyway, good luck with whatever distribution you choose
It is nice that you are still working on getting Linux set up! You might want to give Progeny a go; it is based on Debian but the install is much easier to get through. SuSE is not a bad bet either; they have some really nice tools for X configuration. As for mixing and matching distros, I would recommend that you stick with one setup. Some of the differences between distros have to do with the type of package management they use, whether they use custom kernels, the type of init scripts, how the packages were compiled, even the file system. That is probably not something that you want to mess with. Then again, maybe it is and you are just that brave... have fun. Take it for what it is worth; many people here will have valid arguments against any recommendation that someone else is willing to make.
But won't that make the problem even worse? Unless you have a veeery slow Linux box (486, 16MB RAM) you can probably do a dozen compiles or more in one week:)
And I can pretty much say with 100% certainty that the kernel will not compile with LCC. Plus, you must have an as86-compatible assembler to build the more interesting parts. Nevertheless, good luck, and if you do succeed, let us know! :-)
I had the same freezing problem with my USB mouse also. Once or twice a week it would just lock in place and I'd have to unplug it and plug it back in. I'm pretty sure it went away when I put 2.4.10 on my system. I had been running the Debian 2.4.9-K7 image.
Question for the Uber geeks. (Score:2, Interesting)
Got an old Redhat box with 2.2.16 (IIRC) and would like to bring it up to the latest stable release. Any chance of that being done easily?
I've done incremental updates before but never major overhauls.
Moose.
"That is a valid question, now how about a valid answer." (I forget whom)
Re:Question for the Uber geeks. (Score:2, Informative)
not an uber geek but I'll give it a try.
Check the README in the kernel source directory for the list of required software for the 2.4.x series.
From the kernel version you are using, I'd expect to be upgrading a whole lotta stuff.
Re:Question for the Uber geeks. (Score:3, Informative)
If you're comfortable compiling a kernel, it shouldn't be any trouble.
Re:Question for the Uber geeks. (Score:3, Informative)
Re:Question for the Uber geeks. (Score:2, Informative)
gcc - 2.95.3
make - 3.77
binutils - 2.9.1.0.25
util-linux - 2.10o
modutils - 2.4.2
e2fsprogs - 1.19
ppp - 2.4.0
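For reference, you can check what you currently have with a tiny script in the spirit of the ver_linux helper shipped in the kernel source; the flags below are the common ones, and a missing tool is just reported rather than aborting:

```shell
#!/bin/sh
# Print the first line of each tool's version output, or note that it's missing.
ver() {
    tool=$1; shift
    out=$("$tool" "$@" 2>/dev/null | head -n 1)
    echo "$tool: ${out:-not installed}"
}
ver gcc --version
ver make --version
ver ld -v            # binutils
ver ldd --version    # glibc
```

Compare the output against the list above (and against Documentation/Changes, which covers more packages).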
The Changes file is more complete though.. read it to know the other changes you might need to make. Oh, and the recommended version of glibc for kernels in the 2.4 series is 2.2.x.. so you might want to upgrade that as well, though it isn't required.
Re:Question for the Uber geeks. (Score:2)
On my machine where I used to work I tried the manual round to update redhat... I remember breaking a lot of stuff.
Humm... a slight quandary over what to do, as I have 3 *ell PowerEdges, all dualies, varying amounts of memory. (New job, don't want to screw anything up.) One runs the webserver, one the Samba and PowerVault, and the one I just re-did is soon to be a win2k box for GIS stuff and SQL Server (hope it can take the strain; think it might be just a single proc... damn).
Sigh, the worst part is that Red Hat is standing up against the SSSCA, but *IF* I redo the webserver and Samba box I was leaning toward Slackware... Ack, a technical and "moral" dilemma. (Error, Error....)
Anywho, after years of using and installing both (IMO, mind you): Slackware if you want flat-out speed (shared libs), and Red Hat (Mandrake too... nice installer, btw) for compatibility, ease of install, and a few nifty utilities. {My own observations... some or no basis in reality... you decide.}
(Side note-- tooting my own horn time: Had Slack, RH, BeOS, 98se and 2000 on one box, if that isn't computer abuse, dunno what is!)
Safest thing to do is back it up or do a disk dump/re-mirror the disk and see what happens. High "pucker factor" either way because as far as I can tell everything is running perfectly.
Moose.
Two worst things you can do in a position of authority:
1) change too much
2) change too little.
(how true, how very, very true)
changelog (Score:3, Informative)
final:
- Jeff Garzik: net driver updates
- me: symlink attach fix
- Greg KH: USB update
- Rui Sousa: emu10k driver update
pre6:
- various: fix some module exports uncovered by stricter error checking
- Urban Widmark: make smbfs use same error define names as samba and win32
- Greg KH: USB update
- Tom Rini: MPC8xx ppc update
- Matthew Wilcox: rd.c page cache flushing fix
- Richard Gooch: devfs race fix: rwsem for symlinks
- Björn Wesen: Cris arch update
- Nikita Danilov: reiserfs cleanup
- Tim Waugh: parport update
- Peter Rival: update alpha SMP bootup to match wait_init_idle fixes
- Trond Myklebust: lockd/grace period fix
pre5:
- Keith Owens: module exporting error checking
- Greg KH: USB update
- Paul Mackerras: clean up wait_init_idle(), ppc prefetch macros
- Jan Kara: quota fixes
- Abraham vd Merwe: agpgart support for Intel 830M
- Jakub Jelinek: ELF loader cleanups
- Al Viro: more cleanups
- David Miller: sparc64 fix, netfilter fixes
- me: tweak resurrected oom handling
pre4:
- Al Viro: separate out superblocks and FS namespaces: fs/super.c fathers
fs/namespace.c
- David Woodhouse: large MTD and JFFS[2] update
- Marcelo Tosatti: resurrect oom handling
- Hugh Dickins: add_to_swap_cache racefix cleanup
- Jean Tourrilhes: IrDA update
- Martin Bligh: support clustered logical APIC for >8 CPU x86 boxes
- Richard Henderson: alpha update
pre3:
- Al Viro: superblock cleanups, partition handling fixes and cleanups
- Ben Collins: firewire update
- Jeff Garzik: network driver updates
- Urban Widmark: smbfs updates
- Kai Mäkisara: SCSI tape driver update
- various: embarrassing lack of error checking in ELF loader
- Neil Brown: md formatting cleanup.
pre2:
- me/Al Viro: fix bdget() oops with block device modules that don't
clean up after they exit
- Alan Cox: continued merging (drivers, license tags)
- David Miller: sparc update, network fixes
- Christoph Hellwig: work around broken drivers that add a gendisk more
than once
- Jakub Jelinek: handle more ELF loading special cases
- Trond Myklebust: NFS client and lockd reclaimer cleanups/fixes
- Greg KH: USB updates
- Mikael Pettersson: separate out local APIC / IO-APIC config options
pre1:
- Chris Mason: fix ppp race conditions
- me: buffers-in-pagecache coherency, buffer.c cleanups
- Al Viro: block device cleanups/fixes
- Anton Altaparmakov: NTFS 1.1.20 update
- Andrea Arcangeli: VM tweaks
Check out the Preemptible Kernel patches... (Score:5, Insightful)
Anyway - doesn't look like much changed since pre-6 so the pre-6 patches should work but if you want to be sure you can wait until rml releases the 2.4.11 final patch. I'd recommend checking it out if you have the time...
Re:Check out the Preemptible Kernel patches... (Score:3, Interesting)
I'm assuming that it's not in 2.4 because it probably changes a lot of things and needs to be done in 2.5.
Re:Check out the Preemptible Kernel patches... (Score:5, Informative)
AFAIK, there are two reasons why these patches aren't in the default kernel. First, I understand that it decreases latency at the price of slightly decreasing throughput. The second is that though the patch is small, its effects can be complex, and nobody's too sure it doesn't have any bad side effects (crashes, oopses, etc.).
Re:Check out the Preemptible Kernel patches... (Score:2)
Re:Check out the Preemptible Kernel patches... (Score:2)
Re:Check out the Preemptible Kernel patches... (Score:4, Insightful)
Having a preemptible kernel makes things feel faster because what you're doing right now is getting serviced the most, but the overall system performance is actually decreased a bit.
Which is bad why? The important thing is not (always) some arbitrary absolute measurement of "speed", but rather the apparent (to the user) speed of the system. If you're reading mail, you probably won't care, or even notice, that your compile takes 49 seconds instead of 47.
Re:Check out the Preemptible Kernel patches... (Score:2)
That ftp site owner will be a bit perturbed, however.
So he can just use a regular kernel. I assume something like this would be made a kernel configuration option, something like SMP (though I admit I haven't actually looked at the patches themselves, so I don't know how feasible this is); then you'd just turn it on for a workstation and off for a server, or something like that.
Re:Check out the Preemptible Kernel patches... (Score:2)
Re:Check out the Preemptible Kernel patches... (Score:3, Informative)
I definitely agree with you though, the PE people's work is exciting, and much less of a hack than the low-latency patches. Way to go hackers!
Re:Check out the Preemptible Kernel patches... (Score:2)
Re:Check out the Preemptible Kernel patches... (Score:4, Insightful)
For things like playing buffered video and sound, where you just need to get the CPU every few milliseconds, I would think that the system call code paths are not so long that you really need a preemptible kernel. I would expect that it would be enough to just change the time quantum from 1/100th of a second to, say, 1/5000th, by changing the "#define HZ 100" in include/asm/param.h to "#define HZ 5000". I have not tried this, but this sort of thing has been discussed on the linux-kernel mailing list. One person there reported that doing this caused his Palm cradle to no longer be able to sync, so be warned that this seems to trip at least one bug.
As someone who has only looked through the preemptible-system-call patch and never tried it, my impression is that while it may be great, I expect its design to change a bit. Right now, under this patch, you build the kernel with basically a fixed number of fake CPUs that make your computer look like it has more CPUs than it does. The kernel being preempted causes the old kernel state to become associated with one of these fake CPUs, and then the preempting context takes over a real CPU. [I'm really not doing justice to the code in this oversimplified and possibly misinformed description.]
In the future, I would hope that the need for a fixed number of fake CPUs would disappear, and that the "old-fashioned" way of doing context switching would also disappear when the preemptible kernel option is selected. In other words, that would be the only way context switching would normally occur, rather than having two ways of doing the same thing.
I have always regarded the potential for a preemptible kernel as the biggest side benefit of the move to SMP in Linux 2.0, and I'm glad to see people turning it into a reality. However, maintaining the option of building a non-preemptible kernel may be worthwhile, at least for a uniprocessor kernel, because the preemptible kernel code relies on running a multiprocessing kernel (even on a uniprocessor), which has a slight performance cost in setting and releasing all those locks that never once experience contention.
Re:Check out the Preemptible Kernel patches... (Score:3, Informative)
--Bob
Re:Check out the Preemptible Kernel patches... (Score:2)
If you're talking about spinlocks and you're running on a single-CPU machine (even with an SMP kernel), the kernel never blocks on a spinlock, because there is never spinlock contention (except for a kernel locking bug, in which case the kernel will lock up hard at that point). The overhead of checking the spinlocks is also very small (nanoseconds on that single-CPU system, especially since there is no cache snooping). So, the delays that are long enough to deplete sound buffers occur because the granularity of time slices between processes is too long, not because of lock contention.
With HZ=100, the timer tick is 1/100th of a second (ten milliseconds), and any process running at a CPU priority of nice 0 (the standard), nice -1, nice -2 or nice -3 will get five ticks (see the definition of TICK_SCALE in kernel/sched.c), so each time slice will be 50ms, which begins to approach the buffer size of sound cards when you have a few runnable processes, and is already much longer than the interval between video frames.
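That arithmetic is easy to sanity-check. A quick sketch (the 5-tick figure is taken from the TICK_SCALE discussion above, and the 4 KB buffer is just an illustrative sound-card fragment size, not anything from a specific driver):

```shell
# Back-of-the-envelope 2.4 scheduler time-slice math with HZ=100.
HZ=100
TICK_MS=$((1000 / HZ))             # one timer tick = 10 ms
TICKS_PER_SLICE=5                  # nice 0..-3 get 5 ticks (per TICK_SCALE)
SLICE_MS=$((TICK_MS * TICKS_PER_SLICE))
echo "time slice: ${SLICE_MS} ms"  # 50 ms

# For comparison: a 4 KB fragment at 44.1 kHz, 16-bit stereo (4 bytes/frame)
# holds 4096 / (44100 * 4) seconds of audio.
BUFFER_MS=$((4096 * 1000 / (44100 * 4)))
echo "4 KB audio buffer: ~${BUFFER_MS} ms"   # ~23 ms, i.e. less than one slice
```

So a single 50ms slice spent in another process already outlasts a small audio fragment, which is exactly why a few runnable processes can cause skips.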
Re:Check out the Preemptible Kernel patches... (Score:2)
Re:Changing timer int. is not a good idea (Score:2)
Re:Check out the Preemptible Kernel patches... (Score:4, Informative)
This means that the GUI then pre-empts background tasks, like on Windoze, and other systems intended for desktop use. Of course you don't want to do that on a server machine, but only Microsoft are stupid enough to do it by default even on their "server" OSes.
I'd like to see "workstation" installs do it automatically, but there are a few small notes:
(a) if you renice it too low, it also ends up pre-empting audio tasks too much, and audio could conceivably skip when you move windows about. That shouldn't happen on today's reasonably fast computers, and it's easily fixed by careful tuning, perhaps including renicing important audio tasks too if your computer's really slow.
(b) If you're using the xfs font server, it needs tuning too: if it's starved of CPU time, you might actually make text-heavy parts of the GUI slower, not faster. I really wish distros would stop using xfs, since TrueType support is now built into the X server, and server-side font support is being phased out thanks to XRender and Xft anyway.
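For anyone wanting to experiment with the renicing described above: lowering a child's priority is unprivileged, while raising it (negative nice values, as you'd use for the X server) needs root. A minimal sketch, assuming GNU coreutils, where `nice` run without a command prints the current niceness:

```shell
# Run `nice` itself at niceness +5; invoked with no command it prints
# its own niceness, so this shows the +5 actually took effect.
n=$(nice -n 5 nice)
echo "child ran at niceness $n"

# Raising the X server's priority would instead look like this (root only;
# the -10 value and the process name "X" are purely illustrative):
#   renice -10 -p "$(pidof X)"
```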
Where do I find more detailed changelogs? (Score:3)
Re:Where do I find more detailed changelogs? (Score:4, Informative)
- Mixer improvements (should add support for treble, bass, volume, and others).
- Fixed a deadlock in emu10k1_volxxx_irqhandler.
- Small code cleanup.
Re:Where do I find more detailed changelogs? (Score:2)
XFS (Score:2)
http://sourceforge.net/project/showfiles.php?grou
Major kudos to all the kernel folks!
Re:Ooops! EXT3 is in the kernel (Score:2)
VM Changes (Score:4, Interesting)
So: is that the case? Have there been any stability problems? Is the performance better (not that it really matters as a workstation user, but...)?
Re:VM Changes (Score:5, Informative)
So while I do appreciate Alan Cox's caution, the new VM works substantially better for me and I say "Go Andrea and Al!"
Re:VM Changes (Score:4, Informative)
I ran all the 2.4.x's, both at home and at work. I am a software developer (not kernel, though) and so I beat on my systems pretty heavily. both systems run dualhead X and my work system additionally runs hardware (dac960) raid. cpu is a k7 tbird, in the ghz range.
anyway, 2.4.9 was ok for me. I tried 2.4.10 and both my systems (home and work) locked up within days. hard tight lockup.
I brought both back to 2.4.9, and so far, so good (less than a week running, though; it was only a week ago I went to .10 and had those problems).
I, too, worry about 3k line commits to so-called 'stable' trees to radically change an algorithm or model. can't say for sure if .10 was really a dog for me, but my systems usually run for months and months before being rebooted (usually due to my swapping of pci cards and such, necessitating a shutdown to do the board swap). so it does seem unusual for me to have a modern linux kernel freeze on _both_ of my hard-working linux boxes. hmm..
Re:VM Changes (Score:3, Informative)
Machine went down hard as hell when I tried to logout of X.
I am currently compiling 2.4.11 so we shall see how that goes.
YMMV. Best of luck to you all.
Re:VM Changes (Score:2)
Same problems here. Maybe I am just superstitious, but I found banging on the keyboard and waving the mouse around a bit gradually brought it back to life. Perhaps the keyboard/serial interrupts got the kernel out of an infinite loop. :-)
Re:VM Changes (Score:5, Interesting)
If you compare 2.4.11 to anything, please compare it to the latest -ac kernels from Alan, where the traditional 2.4 VM is actually working very well. There's NO sense in comparing 2.4.11 to 2.4.9; the VM in 2.4.9 and its kin was just plain broken.
Side note: In Rik's VM, please remember to not just look at swap used as a gauge of whether you're swapping or not. All anonymous pages are mapped to swap, so the space is simply allocated. You can create a huge image in GIMP [gimp.org] and lots of swap will be allocated, but without a drop of disk I/O! Use vmstat and look at the 'si' and 'so' columns to see if you're actually writing pages to swap. Or look in /proc/meminfo and subtract "SwapCached" from the amount of swap you think you're using. That's the amount of *written* swap you're using (a better comparison to 2.4.10).
This needs to be made sensible in 2.5, if this VM is to be resurrected.
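The subtraction in that side note is simple to script. Here's a minimal sketch run against a canned /proc/meminfo snippet (the numbers are invented for illustration; real values come from your own /proc/meminfo):

```shell
# "Written" swap = (SwapTotal - SwapFree) - SwapCached, per the side note above.
meminfo='SwapTotal:      524280 kB
SwapFree:       458744 kB
SwapCached:      40960 kB'
used=$(printf '%s\n' "$meminfo" | awk '/^SwapTotal/{t=$2} /^SwapFree/{f=$2} END{print t-f}')
cached=$(printf '%s\n' "$meminfo" | awk '/^SwapCached/{print $2}')
echo "apparent swap used: ${used} kB"             # 65536 kB allocated
echo "actually written:   $((used - cached)) kB"  # 24576 kB truly on disk
```

In other words, of the 64 MB of "used" swap a naive reading reports, only 24 MB in this example has ever been written out; the rest is just allocated mappings.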
Andrea's work has cleaned up the handling of inactive pages (which could have been done under the old system), but the new "classzone" approach and VM balancing aren't documented anywhere outside the code itself. In addition, there are very normal loads where it performs badly compared to the -ac tree. Here is a test suite [arizona.edu] that tests different aspects of aging and swapping, and the results as provided to linux-kernel [helsinki.fi]. 2.4.10 (patched with Andrea's VM tweaks) swapped more pages, took longer, and had to swap more pages back in when the tests completed (i.e. it could have chosen better pages to swap out). It also caused XMMS to skip mp3 playback throughout the tests, whereas -ac didn't.
Nothing's perfect of course; a process that randomly walks through pages performs better in 2.4.10 [helsinki.fi] since it's more streamlined and not trying to be as "intelligent" about page handling. Rik's code could no doubt be improved here.
That's the great thing about open source: let the best idea win! No doubt in 2.5 we'll see these two VM schemes hash it out in much more complete form (i.e. lose the remaining kernel 2.2-isms, maybe add physical page mapping, almost certainly swapfs -- mostly for Rik's scheme; I'm not sure what the next steps for Andrea's VM should be).
Re:VM Changes (Score:2)
but I'm running 2.4.9, so I'm not sure
Re:VM Changes (Score:2, Informative)
You can check your RAM chips using memtest86 (http://www.teresaudio.com/memtest86/). Using this program I could detect a very tiny problem in one chip that had caused one box to panic after over 20 days of uptime (I also had an allocation-problem message running 2.4.9, but Linux wasn't the culprit). This is a good tool to have, especially now that we have these huge and cheap RAM chips.
Thought it could help some of you.
Re:VM Changes (Score:3, Informative)
Did you know you could make it a lilo target? Makes things very convenient.
Put memtest86 (the binary) in /boot and add this to /etc/lilo.conf (then re-run lilo):
image=/boot/memtest86
label=memtest
It's that simple, and it's a great util. I've just not had the downtime to be able to run a real long memtest.
Re:VM Changes (Score:2, Insightful)
Re:VM Changes (Score:2)
Re:VM Changes (Score:5, Insightful)
That's a very strong allegation, and you'd better have some solid facts to back it up. I don't care for RedHat but I have great respect for Alan Cox. His objections seem valid to me. I'd also be very reluctant to do a major change in the stable release of any software, especially if I was the primary maintainer (like Alan Cox is for Linux). You'd better come up with some concrete evidence to justify your claim, or I'll assume you are just trolling.
Re:Stupid troll (Score:4, Informative)
ext3 (Score:2, Interesting)
Thoughts on the 2.4.10+ VM (Score:5, Interesting)
It is also interesting to note that Rik's VM core has had, say, 15 kernel releases (unstable + stable) to become stable and live up to the expectations that Rik sold the kernel hackers on in the first place, and judging from the reports on l-k it is only now becoming stable enough for most workloads.
The new 2.4.10+ VM had a couple of minor to moderate problems for _SOME_ workloads but overall has received very good reports, as far as I can tell, for being so new. 2.4.11 is bound to be even better.
Some people are complaining about the inclusion of major VM modifications in the stable tree. I believe the truth is that most people who use Linux in production do not roll their own kernels. They use the vendor-supplied kernels. Redhat, for example, will be releasing a 2.2.7-11-AC kernel which uses Rik's VM; it is what they have been testing for months and thus is what they will end up shipping. So the fact that Linus made this change in the "stable" tree makes very little difference to me from a stability standpoint, and I think it will prove to be a very good call in the short/medium/long run.
That's my 2 cents anyway.
Re:Thoughts on the 2.4.10+ VM (Score:2, Interesting)
Pretty much fixed in 2.4.10-ac10-eatcache. Almost as fixed in 2.4.11, but more work definitely needs to be done before a company like RedHat will be willing to ship one of these kernels with the new VM code.
Re:Thoughts on the 2.4.10+ VM (Score:4, Insightful)
Re:Thoughts on the 2.4.10+ VM (Score:2, Interesting)
Re:Thoughts on the 2.4.10+ VM (Score:2)
This post was in the last few days, so it shouldn't be hard to track down and verify. I would do so, but I am off to bed.
Re:Thoughts on the 2.4.10+ VM (Score:2, Interesting)
Re:Thoughts on the 2.4.10+ VM (Score:2, Interesting)
I don't think you're right there at all. Companies are more likely to tweak the default installation, recompile the kernel for a known set of hardware, and then roll out a "company standard", using for instance RedHat's kickstart scripts.
Using the stock kernel is made very difficult, at least for RedHat users. RedHat's ongoing refusal to support reiserfs at install time (and only recently at upgrade time), plus shipping (at least with 7.1, from memory) a reiserfs module that was significantly slower due to debugging being left on, makes kernel recompilation necessary.
I can understand their reservations, but faster fsck times aren't the only reason to move away from ext2.
Re:Thoughts on the 2.4.10+ VM (Score:2)
From what I understood, Rik was making changes/fixes but Linus was not applying them. Alan Cox was saying he was tired of resubmitting the same VM changes to Linus. I only lightly read the kernel mailing list, but if this is accurate, then it is really Linus's fault for the behavior of the old VM. From what I understand, the VM in ac kernels is not bad either and it is based on Rik's VM work.
Re:Thoughts on the 2.4.10+ VM (Score:2, Interesting)
Re:Thoughts on the 2.4.10+ VM (Score:2)
Prior to 2.4.10, this resulted in the machine gradually filling all swap and then becoming very slow. With the 2.4.10+ VM my system rarely if ever touches swap, and when it does it often eventually comes back out of swap when necessary. It's overall much faster and smoother, and my HD runs less. I haven't tried any of the late-model AC kernels where Rik actually started fixing his problems (spurred on no doubt by Linus giving up on him); they may also run well, I don't know.
What I do know is that 2.4.10 and 2.4.11 work well for me.
One caveat with 2.4.11: starting with 2.4.11pre5 it plays very poorly with USB MS Intellimice. I have to unplug mine while booting 2.4.11 or else I get a continuous scroll of errors and no further boot progress (plugging it back in later resulted in normal operation including in X, but I'm still wary of the updated USB drivers).
Re:Thoughts on the 2.4.10+ VM (Score:2)
Re:Thoughts on the 2.4.10+ VM (Score:2)
X chews up RAM quickly. Add Mozilla or Netscape and it is worse. I switched from 64 to 128 MB of RAM a while back and noticed a world of difference; mainly, my machine stopped swapping so often.
That said, I have no idea what you need 512 MB of RAM and 640 MB of swap for, but I am assuming it is a database or webserver with lots of dynamic content. With X and related stuff I rarely (never) use swap with 512 MB of RAM.
Re:Thoughts on the 2.4.10+ VM (Score:3, Informative)
Don't confuse NT's need for swap with Linux's. NT aggressively swaps everything to disk to ensure that there is always lots of free RAM. I believe win95/98 is worse. Linux (and BSD), on the other hand, only swap when more RAM is needed.
My main machine with 512 MB of RAM rarely swaps. So rarely, in fact, that I can't remember the last time I checked and saw swap in use.
Production boxes using vendor kernels? (Score:3, Insightful)
I believe the truth is that most people that use Linux in production do not roll their own kernels. They use the vendor supplied kernels. Redhat for example will be releasing a 2.2.7-11-AC kernel which uses Rik's VM, it is what they have been testing for months and thus is what they will end up shipping.
Anyone running a production setup worth their salt will be running their own kernel base, tuned for their own environment. The vendor kernels are a compromise, trying to please everyone, with every service you could ever imagine compiled in (and hence every bug/exploit included). Production boxes doing serious work are more likely to have a kernel built for the purpose.
Vendor kernels are far more likely to be used by people who are not that bothered about kernels and stability.
FWIW my production boxes run a heavily patched 2.2.19.
Schweeeet! -- Is there a friendly .rpm of it? (Score:2)
Anyone happen to know if there's a RH 7.X-friendly .rpm of it?
Thanks in advance!
On your servers? (Score:2, Insightful)
You really should NOT run production servers (the ones at work anyway) on the latest and greatest kernels.
Who knows what data-corrupting bugs are in a new kernel? I recall a few years back when a kernel was released that corrupted data over time. (Albeit that was in the testing branch, 2.1.44, but it's a matter of principle.)
At least set it up on test servers first before launching on production servers.
Do yourself (and us) a favour, try before you buy.
IrDA (Score:3, Insightful)
NTFS, NTFS, NTFS, boys. In a year or two most systems out there will have it via XP, and Linux will be catching up to support it. We could make a run at the majority of the NTFS 5.0 changes now, so at least people will be able to access their drives.
Re:IrDA (Score:2)
Re:IrDA (Score:2)
Re:IrDA (Score:2, Informative)
Re:IrDA (Score:3, Informative)
Re:IrDA (Score:4, Funny)
Re:IrDA (Score:3, Informative)
oom_kill()? Not in my kernel! (Score:2, Informative)
-Chris
"Stable" Versioning (Score:2, Insightful)
Rik's VM should either have shown up in the 2.3 tree and been stabilized there before entering 2.4, or the 2.5 tree should have been opened with it. I guess since 2.4 had to be pushed out the door (and I'm glad it was) there was no time for his VM to mature inside 2.3. But would it have been worthwhile to let those ideas stagnate? So much really new activity has been going on since 2.4 that perhaps it would be too hard to manage 2.4 and 2.5 kernels with lots of active development going on in both simultaneously.
It seems to me to be a hard management decision to make. The 2.4 series needed a lot of fixes, and at the same time there has been a lot of new stuff floating around. Would introducing 2.5 a few versions ago have slowed development on 2.4 and increased overall patch-management headaches? I suppose the answer is yes, but I don't have an idea of how badly it would slow things down.
I do think, however, that it is wonderful to have both Linus and Alan Cox around and maintaining diverging credible trees. They can both gain perspective watching the other's code grow and break. When the two trees do finally merge again we (hopefully) will have the best characteristics of both.
Re:"Stable" Versioning (Score:2)
There needs to be OVERLAP of development kernels. For example, when 2.3 turned into 2.4-test, the 2.5 branch should have IMMEDIATELY shown up. That way, there is always a place for those who are good at doing new stuff and a place for those fixing what's there. This also greatly improves turnaround time. Also, Linus sucks at maintenance. He's good at development, but not at stabilizing and maintaining. Alan Cox is wonderful in that area. The _instant_ 2.3 became 2.4-test the reins should have been handed to Alan Cox, to be released as 2.4.0 whenever Alan said it was ready. That way, Linus can spend his time dreaming up wonderful things and Alan can make it all work.
Anyway, I'd post this to LKML, but I don't have time to be a kernel hacker myself.
Is kernel development model broken?? (Score:2)
To an outsider, it would look like the answer is yes.
It seems to me that the model which worked so well for the 2.2/2.3 series is not working anymore. In true bazaar fashion, a new model is already trying to define itself, and the -ac and Linus trees may be part of it. Maybe it is just time to admit it and try to define the new model a bit more clearly, if possible.
Thoughts on kernel development model (Score:5, Insightful)
During the stable life of 2.4.x it became more or less clear to me that the current model of development for the Linux kernel doesn't work very well.
Changes that were too experimental for a stable kernel but too important to be deferred to an experimental kernel were included in 2.4.x all the time (the VM changes in 2.4.10 being the best example).
This makes me wonder: isn't it possible to improve the scheme of x.even.y = stable and x.odd.y = unstable? Even as we speak the -ac series provides an experimental kernel within the stable series. Maybe we could enhance this model into something more official.
I'm not sure about the actual form yet. I was thinking about something in the line of three kernels:
Stable kernels should IMHO be lower-paced: maybe a major release every four to six months or so. The VM would be allowed to change radically, but only after having been tested extensively in the Testing series. Of course, simple bugfixes should be allowed in. This would give us a stable kernel every month; it just wouldn't be a terribly interesting one, which is as it should be.
The Experimental kernels are as experimental as the current x.odd.y series.
Re:Thoughts on kernel development model (Score:2)
If stability is your highest priority, you stick to the stable release, which pretty much guarantees uptime. So it's good for important servers. It can get out of date quickly, though, depending on your needs.
For desktops and less mission-critical servers, the testing release suits. Get the (almost) latest software, and retain a good level of stability.
Developers and masochists get to play with the unstable release.
Could be a good idea.
I won't run 2.4.11 (Score:4, Interesting)
Now am I a troll? Hope not. I did get my info out of Kernel Traffic [zork.net], which I've been reading for months. It is a very good, understandable and clear compression of all important things that happen on the linux kernel mailinglist. If you use Slashdot as your only information portal about the kernel, you are *braindead*.
Ok, now my point - it is the VM subsystem. By now you should know that 2.4.x, until recently 2.4.10, used the VM code by Rik van Riel. That code has taken some time to develop, but you definitely can't blame Rik as the cause for all 2.4 stability problems, as well as the eternal delay of 2.5. But according to the l-k list, Linus himself made several errors in including Rik's patches, which indeed caused 2.4.7 and up to be unstable! Ok, now stop and think about this. Linus has an enormous responsibility. He didn't realize where the fault was, but he did perceive that the stable kernels were NOT stable. He knew that Andrea Arcangeli was still working on his own VM (that work improved Rik's VM too in 2.3. Not having a monopoly really does improve invention!) Then Linus made the big step: even in a *stable* series, he took over Andrea's VM and threw out Rik's one. This is really an important decision, and I applaud it!
The only thing Linus should not have done is labeling this thing 2.4.10. It really is 2.5. [lwn.net] For the general public, that kernel was definitely everything but a stable kernel. Luckily a lot of problems have been solved since (2.4.11 is a hell of a lot better than 2.4.10), and I consider Andrea Arcangeli a really good coder, but actually I trust Alan Cox most. He commented that Linus' recent kernels trashed several boxes of his overnight. Alan really sees the -ac tree as the stable one currently. I run 2.4.9-ac18 too, with the kernel preemption patch as mentioned earlier, on a P2-233 with quite some load, and it doesn't show any strange behaviour. (The kernel preemption patch doesn't really do much here: I still get skips when I record an mp3 from my soundcard and switch desktops in the meantime. But I shouldn't expect wonders.)
One last thing: Rik van Riel's VM has improved *too*. Alan Cox catches up with his patches very speedily. No more big bugs; Rik even added some optimizations in 2.4.9-ac16. I can't see that of course, but overall the system is a lot more responsive than 2.4.3-pre6, my last kernel before this one.
So my advice: use the ac-series [kernelnewbies.org] of the kernel. Linus has made some wise decisions. I think he should start 2.5 and leave 2.4 to Alan, before people go sulking about 2.4.10 versus the always-stable reputation of the Linux kernel.
Redhat 7.2 (Score:2)
Re:I must be stupid... (Score:2)
reasons to upgrade.. (Score:2)
As always, what compelling reason is there to upgrade? It's not like other OSes, where you have to unless you want major security or stability issues. And I have yet to find one app that has a kernel requirement.
Add to that the fact that RedHat 7.1 is a major pain in the arse to upgrade without the blessed RedHat rpm packages. (Hey, at least I got work to run Linux, and it had to be RedHat for the support and the fact that the CEO holds some RHAT stock.)
It would be great if someone could come up with a decent way to install a current kernel on RH 7.1 without breaking everything that runs on startup (kudzu and all the other fodder), and without waiting for RedHat to put one together and bless it.
Other than that one issue, there is no reason for a corporate user or the regular user to upgrade the kernel.
Re:reasons to upgrade.. (Score:2, Insightful)
XFree 4.1 requires a v2.4.10 or v2.4.11 kernel to use DRI/DRM. On the other hand, Xfree 4.0 doesn't work with v2.4.10/v2.4.11.
Other than that, the need for upgrading is mostly if you experience problems or have new hardware.
AFAIK you can use make rpm to build an RPM of your kernel nowadays (new in v2.4.(some number > 3)). For Debian, the counterpart is make-kpkg, which has existed for ages.
Re:reasons to upgrade.. (Score:3, Informative)
There's always the possibility that I could be missing something here, but... either I'm highly insane or you are very wrong. According to my XFree86 log, I'm running version 4.1.0 (released on June 2, 2001).
Wouldn't that mean XFree86 4.1 was released before there even was a 2.4.10 kernel? My X setup is the same one that came with Slackware 8.0, which ships with Linux kernel version 2.4.5. I've been playing Quake3 and Unreal Tournament on this setup for months now, DRI and all.
Big Endian Reiser? (Score:3, Interesting)
The base reiser code ONLY supports little endian architectures (shame!). I recently put one of my PPC based servers on the AC tree to get big-endian reiser support, but I've heard the AC tree patches have file fragmentation problems. I'm a little nervous about going live with this thing because of the reported VM problems and a potentially flaky reiserfs.
Re:Big Endian Reiser? (Score:2)
Well, there's always ext3.
2.4.11 broke my nvidia card (Score:2, Informative)
Oh well... it's back to 2.4.10 for me.
Linus' response was VERY naive (Score:2)
I think if this is true, Linus is being extremely stupid in this regard. Many operating systems have had serious design flaws that permanently hampered their development. Paying attention to other similar systems is a very important part of system development -- it keeps you from making the same mistakes others have made.
Re:ext3 (Score:3, Informative)
on a related topic, I see the 7.2 directory on ftp.redhat.com
Re:Funny... (Score:3, Insightful)
Maybe that's why it wasn't 'announced' yet on www.kernel.org: mirrors hadn't picked it up yet.
Re:Funny... (Score:3, Informative)
And another link on an Internet 2 capable site (Score:2, Informative)
Re:Syncing with AC kernels (Score:5, Informative)
The -ac tree has the following major additions:
- Uses the Riel VM (Linus uses AA)
- 32bit uid safe quota
- Ext3 file system
- PnPBIOS support
- Various PPro and Pentium workarounds
- Simple boot flag
- Faster x86 syscall path
- PPPoATM
- Elevator flow control
- DRM 4.0 and 4.1 support not just 4.1
- CMS file system
- Intermezzo file system
- isofs compression
Re:Syncing with AC kernels (Score:2)
Hmmm. Is one or the other of these responsible for the notorious VM problems of 2.4.[0-9]?
I'd like to know because I recently installed the 2.4.10-ac10 patches to get bigendian support for reiser. So far it's been fine.
Re:Syncing with AC kernels (Score:4, Funny)
Re:OS X (Score:2)
Now if you just mean Darwin, sure you could upgrade that to your liking, but I think just upgrading Darwin with OS X on top could potentially break things in OS X (we'll never know, it's closed source).
F-bacher
Yes, I meant mach, not mac kernel (Score:2)
Re:Perhaps 2.5? (Score:2)
In any event, I doubt that the 2.4 series will go down in Linux history as anything great.
Re:Perhaps 2.5? (Score:2)
In any event, I doubt that the 2.4 series will go down in Linux history as anything great.
This is true; as the kernel has moved along, each new version has been more evolutionary than revolutionary. This is the way it is supposed to be: at some point many parts become stable and need little or no attention, such as serial and LPT ports. Things just work, and at that point each release is more or less adding support for current hardware, routine bug fixes and the occasional rewrite of one or more subsystems.
Re:IEEE1394 works? (Score:2)
Personally I still tend to rely more on the patches directly from the 1394 project (linux1394 on SourceForge), although the Mandrake 8.0 1394 stuff worked for me out of the box (mostly, except for a patch to the video driver for an NTSC camcorder). Haven't tried 8.1 yet, nor the stock kernels.
Re:The status of the PowerPC updates? (Score:2)
Re:first winmodem driver in the kernel (Score:2, Interesting)
Re:new kernel (Score:2)
Never. They will likely use their own tweaked versions. In particular, Red Hat will use an Alan Cox variant. When they release 7.2, for example, it will have one of the more stable versions, tweaked and hamstrung in various ways to their taste. The kernels that you would get from kernel.org are kinda raw; you have to know your config and make adjustments on the fly. Stick with the post-processed distro kernels.
Re:OT: Which distro has a good installer? (Score:2, Informative)
After Mandrake pissed me off for the last time, I decided to give a bunch of distributions a try. This was earlier this spring, when a bunch of new distribution versions were coming out. I tried progeny, libranet, debian, redhat, mandrake again, and maybe a couple others that I can't remember.
The first time I installed Debian, I downloaded the stable iso's. I definitely didn't want to stick with stable, but I couldn't find an unstable iso, so I installed stable. I had problems with the dist-upgrade so decided to do a network install. Even though the Debian installer doesn't do much in the way of hardware detection, and it took a couple of tweaks to get everything right, I'm very satisfied now. All my boxes at home and work run Debian now.
Overall, if you want something that is going to auto-detect your hardware, and basically do the install for you, go with RedHat. If you want something that is going to be very easy to maintain, go with Debian.
Anyway, good luck with whatever distribution you choose
Re:OT: Which distro has a good installer? (Score:2, Informative)
Re:Can you compile the kernel under windows? (Score:2)
Also, check out CygWin [cygwin.com]
-adnans
Re:woo emu10k and usb (Score:2)