Andrew Morton And The Low-Latency Kernel Patch 151
An Anonymous Coward writes: "KernelTrap has interviewed Linux kernel hacker Andrew Morton, author of the low-latency patch. Though his patch has received less attention than Robert Love's preemptible kernel patch (recently merged into the 2.5 kernel), it results in significantly lower latencies. The interview is quite interesting, delving into the low-latency patch, explaining how it works and the differences between it and the preempt patch. He also talks about his ext3 work, porting that journaling filesystem from the older stable 2.2 kernel to the current stable 2.4 kernel."
realtime? (Score:2)
Is there a formal difference between low latency and a realtime OS?
What about the Timesys kernel patches? How do things match up to QNX another realtime OS??
Re:realtime? (Score:5, Informative)
For example, suppose you send a packet off into the internet: a realtime OS would guarantee that the packet was sent within x nanoseconds. A realtime OS would maintain this guarantee regardless of the load on the system, the size of the packet, etc.
Re:realtime? (Score:2)
Exactly. There's also a corollary, which many people miss: realtime does not necessarily equate to high performance. Sometimes, you do things to enforce a bound on the worst case that actually make the average case worse. Anybody who has read Hennessy and Patterson should remember the formula for the value of an optimization (paraphrased because I don't have my copy handy):
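The formula itself didn't survive transcription. Judging from the cache discussion that follows (hit rates, miss penalty, "the second half of the equation"), it is most likely the average memory access time equation from Hennessy and Patterson, paraphrased here as a best guess:

```latex
T_{\text{avg}} = T_{\text{hit}} + (\text{miss rate}) \times (\text{miss penalty})
```

With hit rates near 1, the second term is negligible on average; a worst-case (realtime) analysis has to assume it dominates.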
Now consider a CPU cache. What a lot of people forget is that there is such a thing as a cache miss penalty, because in most systems hit rates are so high that the second half of the equation above remains negligible. However, a realtime system designer has to be pessimistic and assume very low hit rates. Only accesses that can be absolutely proven to be hits - e.g. repeated access not separated by too many other accesses including those from higher-priority tasks - can be counted, and all others must be considered misses. In practice, that sort of proof is usually too much of a pain in the ass so every access is assumed to be a miss. Since cache misses are actually more expensive than uncached accesses (the miss penalty), it's not uncommon to find that a critical code path has some possibility of missing its deadline if accesses are through the cache, but it can be guaranteed to complete in time with uncached accesses. So the cache gets turned off. Obviously, performance will suck, but at least it will suck predictably and that's the more important concern in realtime. For similar reasons, realtime systems often preallocate resources that then sit idle, because they can't afford to contend for them later.
The above examples should demonstrate why realtime systems might actually perform worse than general-purpose systems. Trying to make system behavior more predictable and responsive is great, and to that end we should all welcome the low-latency and preemption patches, but treating "realtime" as some kind of mantra for "better performance" is an illusion.
Re:realtime? (Score:5, Insightful)
Soft real time means that you can almost guarantee the latency. Generally, of course, you want these latencies to be pretty small. Soft real time is for when you check the "use real time where available" option in xmms and run it under sudo.
I hear that Linux (probably with patches) is a little better than windows and a little worse than os X for latency.
Re:realtime? (Score:2, Informative)
Re:realtime? (Score:1)
Re:realtime? (Score:1)
Re:realtime? (Score:1)
However, I have a (offtopic, though it may be) question for you (and others):
Here in the US (Ohio), the local PBS (WOSU) has been airing Father Ted at 11:00 pm on Saturdays. Great show. However, they all seem to say feck, rather than fuck. Is that just the way the Irish say it? Or is it easier to say with the accent? Or what?
Thanks.
Re:realtime? (Score:1)
Re:realtime? (Score:2)
In Darwin almost everything is a module but when they are loaded, they are loaded into kernel space rather than user space. This solved a lot of the performance problems at the (very minor) expense of some stability (it's still plenty stable though).
I don't know what the difference in latency between Darwin/Linux/W2K is either, though. I do know that Apple went to a lot of trouble to make the kernel preemptible, add some realtime support, etc. This is why iTunes will almost never skip (at least for me), even under very heavy loads.
Anyone have any numbers?
Re:realtime? (Score:2)
In Darwin almost everything is a module but when they are loaded, they are loaded into kernel space rather than user space.
Errm, with this definition, Linux (and many other modern Unices) would be a micro-kernel too. Modules run in the same "protection domain" as the kernel itself, and hence can really be considered to be part of the kernel (even though they are loaded later). With a real micro-kernel, the different system services (filesystem, virtual memory, ...) run in a different protection domain.
The advantage of a real microkernel is that bugs in one system service don't endanger the stability of the whole system, whereas in Linux, a buggy module may scribble all over the kernel code, and cause failures anywhere.
The downside of a real microkernel is of course lower performance (although, it doesn't necessarily need to be as abysmal as Mach...), because of the numerous context switches.
Re:realtime? (Score:5, Informative)
Yes. A realtime OS _guarantees_ that certain events trigger defined responses within specified times. A realtime OS is almost by definition an embedded OS, i.e., its hardware is rigorously specified and very tightly bound. A realtime OS also typically provides a very limited set of functions, as opposed to a general-purpose OS. A low-latency OS, on the other hand, provides generalized structures for 1st-level/2nd-level interrupt handlers, real/virtual memory management, and facilities for locking, preemptive-priority dispatching, etc., but offers low latency on a merely best-efforts basis, depending upon whatever happens to be in flight at the moment. See the difference?
Examples of realtime systems: automotive control systems including engine power/emissions management, suspension and braking management, even airbag controls; aircraft fly-by-wire systems that control aerodynamically unstable airframes.
Examples of low-latency systems: mainframes - if you're a high-priority system task, you get _very_ low latencies - but exact timings aren't guaranteed in all situations.
Re:realtime? (Score:2)
You don't waste time and stability using an OS when one is not needed, and for 95% of the true embedded systems out there, no OS is used or needed.
Re:realtime? (Score:2)
Realtime stuff is starting to move toward using OSes, though. There's a big resource hit when everyone develops a new task scheduler for each new platform they work on, so it makes more sense for an OS producer (e.g. WindRiver) to produce an OS once and for everyone else to license it. This is becoming more the case as processors get more complex and time-to-market gets shorter: it's cheaper to license an OS and only use a small fraction of it than it is to get some of your guys to write a new one from scratch.
Grab.
Re:realtime? (Score:3, Interesting)
In EVERY embedded application there are multiple layers of stuff happening, ranging from ultra-high-priority interrupts that need microsecond-accurate scheduling down to background-loop stuff that doesn't need to run more often than every few seconds.
A single loop running round is fine if your code needs to do nothing more complex than a Windows program, which any 16-year-old kiddie can write. The moment it exceeds that complexity, you're screwed. For example, consider a car engine controller (which I design software for, BTW). Scheduling the start and stop times for injector and ignition pulses requires the processor to recalculate the times a fraction of a second before each pulse, to make sure the fuel and ignition pulses are accurate for the current conditions. And importantly, how often you need to do this changes with engine speed, since you need to update every engine rev. It is unacceptable to burden this ultra-fast processing with stuff which doesn't need to run 7000 times a second, e.g. toggling the indicators.
So the solution is to go to a multi-rate system. Stuff which needs to run fast, runs fast; stuff which can run slow, runs slow. This frees up processing time for the fast stuff which can then handle more iterations per second. And in order to work this, you need something to tell all your functions when to run. Sometimes it's designed as part of your main application, sometimes it's a separate bit of object code bought-in, but it's always required. Even your autopilot will be doing this - as a minimum there'll be a fast loop controlling the aircraft, and a slow loop sending info back to the pilots.
So there are many different task rates, each running in its own time frame. For example, in the Ford project I'm currently working on, there's a task that runs twice per rev to schedule fuel and spark, another that runs once per cam revolution, and time-based tasks at 10ms, 16ms, 32ms, 50ms and 100ms rates. This lets us allocate resources to the processing that needs it, such as critical tasks like keeping people alive.
Grab.
Realtime doesn't necessarily mean embedded! (Score:1)
They are not exactly embedded. Both use a microkernel architecture that allows the rest of the OS to run as separate processes on top of the microkernel. But they do have a form of X Windows and can be run on the desktop. Of course you can also strip them down, flash them, and run them in an embedded environment. That said, the task-switching latency is supposed to be higher than that of other truly embedded RTOSs like VxWorks and pSOS.
Re:realtime? (Score:2, Informative)
This is NOT a troll: NT makes it VERY hard to meet any true real-time requirements without writing at the driver level, which is a massive pain and exposes you to BIG risks of destabilising the machine.
Linux currently (without these patches) has very good average latency; with these patches it has fantastic worst-case latency. Windows CE (which is supposed to be real-time) cannot match it.
Windows hides behind its guaranteed-latency multimedia capabilities: fine if you want to do multimedia, useless if you need machine control or have other real-time requirements.
Re:realtime? (Score:1)
Re:realtime? (Score:1)
Realtime is a boolean attribute: either you're realtime, or you're not. A realtime kernel specifies the maximum and maybe the minimum latency for various requests.
These two attributes are orthogonal: a kernel may be low-latency but not real-time, vice-versa, both, or neither.
For example, a real-time kernel may guarantee that you'll get to do one disk read per second (as long as the disk hasn't completely failed, the kernel hasn't crashed, etc). It might make the guarantee stick by ALLOWING only one disk read per second, even if the disk is idle 99% of the time, and your disk is doing absolutely nothing for 999.9ms between consecutive read requests. Not low-latency, but certainly real-time.
A low-latency kernel may allow read requests to complete as quickly as possible, but it may not guarantee a maximum time for read requests to complete. So your application will be able to start executing 100ns after a read operation returns data, but if there are a lot of read operations queued up then it might take 5000ms for the read operations to finish. This is definitely not real-time, but it is low-latency.
Re:realtime? (Score:2)
The real definition of realtime is fast-enough response under all worst-case scenarios. One person's realtime is not necessarily another's.
A realtime OS can work from a clock and polling, in which case there is no concept of latency.
Ok. Stupid question. (Score:1)
It's been stated that the realtime patch lowers the throughput of Linux while making responsiveness quite good, meaning good for desktop use, bad for server use.
Now my question. What does the low-latency patch do to the throughput? Increase? Decrease? Stay the same, but everything is just 'snappier'?
Re:Ok. Stupid question. (Score:1)
There are some cases where the preemptive patch lowers throughput, but in the majority of cases it only helps.
Re:realtime? (Score:2)
Botched Fixes (Score:5, Funny)
A day in the life of a kernel hacker.
Re:Botched Fixes (Score:1)
Re:Botched Fixes (Score:1)
Can't be. BillG does not use more than 5 words in E-mail replies. =)
Re:Botched Fixes (Score:4, Insightful)
Re:Botched Fixes (Score:2)
Actually, although it sounds like a way to 'trick' developers into fixing your bug, I find that broken patches are quite nice from the other end too (ie, as the one doing the fixing).
It seems easier to fix a broken patch (even one so broken that you end up rewriting the whole thing) than to get round to doing it yourself from scratch.
I'd also suggest people try submitting broken documentation for various projects. Even if you don't understand something, still document it. The developers are more likely to correct your text than they are to spontaneously write it themselves...
Re:Botched Fixes (Score:2)
Re:Botched Fixes (Score:2)
In such a case, nobody might notice that the patch is really botched for several months. It might be more productive, and better for Linux's stability/reputation if you contacted the maintainer directly about the problem, rather than deliberately botching his code.
Dilbert vs Open Source (Score:1)
Mistranscription? or is there YAAIDK (Yet Another Abbreviation I Don't Know) being thrown about?
Re:Dilbert vs Open Source (Score:2, Funny)
Process scheduling (Score:5, Interesting)
I hope someday that Linux will use a method similar to Irix, where you can specify a priority from 0 to 255, modify a process's timeslice, and make it realtime or timeshared. This was one of the best things about Irix, and something I could really use for Linux.
Re:Process scheduling (Score:1)
Correct me if I'm wrong (and I probably am in some respects), but the priority comparisons and the code to continuously re-shift kernel time should slow the kernel; unless people actually used it often, it would slow the system (very slightly) overall rather than re-allocating time to where it's needed and speeding up critical processes.
The overhead would also increase significantly faster than linearly (quadratically? exponentially?) with the number of processes and CPUs, which would make it very difficult to scale well.
I hope I'm not completely wrong here, any responses?
Re:Process scheduling (Score:3, Interesting)
It'll waste CPU cycles, all right. But if it makes network, disk and interface responsiveness faster, odds are the CPU will have more information to do processing with.
There are very, very few CPU-constrained jobs a computer does anymore. The ones that are (graphics rendering, key cracking) either have the budget to add an extra machine per 100 to get back the 1%, or are already working on a timeframe where the time lost doesn't really matter.
If you wait 3 months for something, whats an extra 12 hours?
That said, I don't know how much this actually slows a congested machine down. But one of the big benefits of Solaris on Sun hardware is that you can get it up to a load of about 1000 before it starts to choke (become choppy). Sure, no task is moving quickly -- but they're all moving.
FreeBSD I find gets slammed around 150, and Linux (last I tried was 2.0.x) was around 60.
It's the type of stuff that makes Bigiron worth the money.
DISCLAIMER: Load numbers are by my own independent testing on varying hardware. It was a large Sun box, but not an order of magnitude above the Linux / BSD one. Test consisted of FTP connections downloading varying sized files at varying speeds.
Re:Process scheduling (Score:1)
And doesn't this already exist (a couple priority levels) somewhere? (you can tell I'm not a power user, much less involved with kernel design, which is a Good Thing [tm])
Re:Process scheduling (Score:1)
It all came about when I discovered in the man page for make that -j without any arguments sets no limit on the number of processes spawned when compiling.
cd
And boom, the system becomes pretty unresponsive (500 MHz PIII with 320 MB of RAM). All good fun though.
Re:Process scheduling (Score:2)
Well, it's more that 30% is meaningless if the task takes a second, and quite meaningful if it takes 3 months: if it takes 3 months, a 30% difference is another month, but what's another third of a second?
And this is exactly why you see the HPC folks caring about fortran-versus-C and all that, but to anyone else -- who cares?
If you think Linux does bad under load, try loading down Windows. My machine crawls to an unusable halt under the most basic of loads.
C//
Re:Process scheduling (Score:3, Funny)
`uptime`:
4:06pm up 1:44, 6 users, load average: 337.62, 241.84, 115.30
My box is a plain-old PII/233.
The only problem is that now any unniced process that does real CPU-intensive work (as opposed to interactive work) can get only about 20% of the CPU. It is just blatantly unfair to let one unniced process compete with 500+ others, even though they are niced to 19.
Of course, the programs I'm running don't take too much memory. When one runs out of memory (like make -j), the system swaps like crazy, and then it IS unresponsive.
Gladly pay CPU cycles for I/O acceleration. (Score:1)
Ok. Let's say a processor averages one instruction every 500 picoseconds in a little burst, reading from L1. At that rate, you tell the processor, "I'm doubling your workload, and hurry the hell up." So the introduced CPU latency adds up to, what, something on the order of a hundred nanoseconds? Of course that depends on a bazillion things; I'm not sure, but I understand the context.
At 100 MHz, a wire or trace carrying current rings easily and resonantly if it is about 10 inches long. At 1 GHz, 1 inch is a very long distance. If there is some sort of ground plane, it is its own tank circuit--guaranteed messy--making things that much worse. Put your finger nearby, and watch the form shift on the oscilloscope. Not good. Now try to speed that up, and what do you get? Bottlenecks.
We hear about Moore's Law this and Moore's Law that. Inside the chip, that's fine. Outside the chip--while trying to approach significant fractions of 1 GHz--we have already reached diminishing returns. So people come along and start to reverse the trend of CPU-work offloading. (Consider the old "Advanced Technology" bus using direct memory access and bus mastering of peripherals while processors were running at 12 MHz.) It doesn't make sense to do that anymore, and anyone who knows how to build kernels knows this. Because of the bus/crossbar/backplane/fabric delays, CPUs will just slack off anyway, waiting for data.
If you look at the proposed specifications of PCI bus replacement technology, it's basically a local area network inside the beige box of the future. Everything is based on protocols. For all you know, within only a few years, data will be compressed and decompressed between a processor-L1-etc amalgam and a hard disk drive. It will be essentially like a modem connection. Fibre Channel disks are almost there already. By the time this stuff becomes generic, the customer's internal questions will be about the tradeoffs between bottlenecking or peripheral interfacing at all. Upgrades will be of a different form altogether. They will have to be.
Re:Process scheduling (Score:1)
Linux may already have something similar to this--it appears you can set priorities from 0-99. There are three types available: FIFO, Round Robin, and the old-style priority. I don't know much about real-time scheduling, so I'm not sure if this is what you wanted or not.
For more info, try man 2 sched_setscheduler, and if you check the kernel syscalls (look in the kernel include files--probably at /usr/include/asm/unistd.h), you'll find that it is an actual Linux system call.
Someone made a little utility called setpriority; check it out at Freshmeat.net [freshmeat.net]. It appears to only be able to set the scheduling policy after the process is started (like renice), but I imagine it would be trivial to make a utility that runs a program with a specific priority set (like nice does).
Re:Process scheduling (Score:5, Informative)
On Linux, a low-priority process won't take much CPU away from a high-priority process... But if the low-priority process does a lot of disk I/O, it can cause significant delays in the high-priority process's own disk I/O. i.e. the notion of priority does not carry over to disk I/O. Whereas on Irix, you can set up a process to get a guaranteed level of disk bandwidth...
Look for this feature to appear in Linux soon though. The newly-introduced I/O elevator should make it easier to implement prioritization for disk I/O.
Re:Process scheduling (Score:2)
Re: (Score:3, Insightful)
Re:Process scheduling (Score:3, Informative)
Thankfully Andre Hendrick's IDE patch seems to find the optimal hdparm settings for a drive automatically - once I started using the patch, I got uniformly high transfer rates (20-30 MB/sec) without running hdparm manually.
Re:Process scheduling (Score:1)
" Using DMA does not necessarily provide any improvement in throughput or system performance, but many folks swear by it. Your mileage may vary."
Re:Process scheduling (Score:1)
This is a great example of why I love Linux (Score:4, Insightful)
That's why Linux is so great -- even if you're not good enough to work on the kernel, you can read about some of the issues that pop up. If you use Linux for a while, and if you get to the point where you roll your own kernels and apply patches, you end up learning a lot about how the system works.
The MS guys are smart, and they're making some good systems now, but you can spend your whole life with them and not have much of a clue about what's going on under the hood.
If MS would open up their internal developer discussions to the public, it would take MS system administration to a whole new level. I understand why they can't do that, but it is a great example of what's nice about Linux.
Re:This is a great example of why I love Linux (Score:2, Interesting)
Re:This is a great example of why I love Linux (Score:1)
Well, let's see. In the last couple of years, Linux and GNU received more support and development from the leading computer corporations than any other OS. Oracle, Sun, IBM, SGI and now HP are all announcing new Linux solutions. But maybe this is just coincidence. After all, a collection of modular free software can't be worth all that much.
Its kernel has undergone rapid development in the last year, offering many solutions for the end user: multiple virtual memory systems, preemptible kernel patches, etc. The OS can conform to anything you desire, use any display frontend, and includes several free and fully functional display management systems.
I could go on and talk about all the free software and source code out there available for all of us to use, for free, and all the advantages that implies, but the fact of the matter is it's not worth anything. It's free, and it'll always be free. So I'm afraid we'll have to wait until Microsoft fixes all the bugs and security holes and sells us a new
Re:This is a great example of why I love Linux (Score:1)
The guys at SysInternals [sysinternals.com] have lots of inside knowledge of NT.
COM/COM+ is heavily documented (how do you think Gnome/Mozilla managed to copy it so well?). Lots of source code/examples are available too.
If you read any good OS book, it'll tell you things like the real time capabilities of NT compared to Solaris etc.
I don't see how knowing the scheduling algorithm used by Windows 2000 would help system administrators... but if you want to know, the information is out there. Perhaps you should start reading Windows-technology websites and cut down on the Linux evangelist websites?
Re:This is a great example of why I love Linux (Score:2)
I fail to see how your post has anything to do with the original post, which stated that the openness of Linux kernel development was one of the reasons he likes Linux.
What the hell do COM and COM+ have to do with kernel development?
Lately, I'm getting the impression a lot more Microsoft zealots are trolling Slashdot and just generally spewing total disinformation and nonsense.
Re:This is a great example of why I love Linux (Score:2)
Written by people who don't know to impress people who know even less.
A few tidbits here and there, often wrong, is no substitute for complete and accurate.
Re:This is a great example of why I love Linux (Score:1)
Modern Operating Systems [amazon.com] by Andrew Tanenbaum.
or
Windows Internals: The Implementation of the Windows Operating Enviroment [amazon.com]
I like this sentence the best (Score:5, Funny)
Try getting your head round that one when needing sleep
Re:I like this sentence the best (Score:4, Informative)
"With an internally preemptible kernel, the explicit task yielding is not necessary because the context switch is performed in the interrupt return path via open-coded yields, which are hidden in the unlock code. But you cannot preempt an in-kernel process while it holds locks, so all the unlock, relock and fixup code is needed in either approach."
--Ben
Re:I like this sentence the best (Score:1)
Unfortunately...
Adding three parentheses, does not improve the flow, by a significant factor, of the quote.
Re:I like this sentence the best (Score:2, Funny)
porting FROM 2.2? (Score:2)
journaling filesystem from the older stable 2.2 kernel to the current stable 2.4 kernel."
I'm confused... I was under the impression that most of the journaling file systems required 2.4. Granted, many started their life on 2.2, but still... most recommend or require 2.4.
On a side note, support for XFS and/or ext3 on 2.2 would be very nice, as we currently have many servers running Debian (potato) with kernel 2.2. We would consider upgrading the filesystem, but little else: "If it ain't broke, don't fix it". About all that doesn't work well now is ext2... fsck sucks... we have 2 hours of UPS but no generators, and living in Vermont means a 4-hour power outage about three times a year.
Re:porting FROM 2.2? (Score:1)
It's a baby step, so what's the big deal? (Score:4, Insightful)
but with the locking changes it should also yield 1-2 millisecond latencies." On what speed processor? 1.5ms is way too long for any kind of processor being sold these days. Try 100us maximum latency on a 133 MHz Pentium for starters and go down from there. And learn to use the term "deterministic" and I might raise an eyebrow. Make it POSIX 1003.1 compliant and someone will have a serious solution.
Programmers either need deterministic response in their applications or they don't. If they do, then Linux is not their OS. If they don't, then these half-baked solutions to reduce context switching time and interrupt latency are probably going to be fun to play with, but will cause nightmares in the long run.
Re:It's a baby step, so what's the big deal? (Score:1, Insightful)
The point is that there is a range of behaviour that is satisfying, then beyond that you start to worry. For audio or MIDI 1ms or even 10ms error may be acceptable. Even a 200ms error is acceptable when it occurs only once a week.
The task simply doesn't justify the trouble and cost of what you call a "serious solution", but at the same time it does require that some effort be put into constraining worst-case latency. Much like cooking an egg, really.
Re:It's a baby step, so what's the big deal? (Score:2, Interesting)
Think about this for a minute. Linux runs on all kinds of hardware. There are some severely broken hardware interfaces out there that require interrupts to be turned off for substantial amounts of time.
As mentioned in the interview, this (and the preempt patch) are mostly aimed at the audio world, where a couple ms latency is no problem, but more than a few becomes problematic.
Finally, if you have total control over the hardware that you're running on it is possible to get better than the stated performance, simply because you know what software will be running and can profile it yourself.
Re:It's a baby step, so what's the big deal? (Score:3, Insightful)
Re:It's a baby step, so what's the big deal? (Score:1)
I'm not sure about that. With a highly loaded system and reiserfs, it's on the order of 3 seconds or so. At least, that's what I deduce from a system completely unresponsive for 3 seconds while doing disk I/O; when it comes back, xosview is up to something like load 17.0 or some similarly ridiculous figure, indicating that everything else had been blocked.
Re:It's a baby step, so what's the big deal? (Score:2)
If you're looking for hard real-time, then you need a real-time operating system. Try QNX.
Linux is a general-purpose operating system, and achieving the same level of real-time performance as QNX just isn't worth it. These patches demonstrate the level of real-time performance you can get with a general-purpose operating system. For a great many applications this is 'good enough', and it allows developers to stay with their comfortable general-purpose OS where they would otherwise have to switch to something more esoteric.
Re:It's a baby step, so what's the big deal? (Score:2)
Well, if this patch makes X more responsive (as was mentioned in the article, I believe), then it's useful just for that reason. Programmers may not "need" it, but lots of Linux users will sure be appreciative.
On the other hand, couldn't such a patch be useful for systems which are recording data at a specific sample rate? For example, if a system needs to read data from some input device at 250Hz, wouldn't 1.5ms worst-case latency be enough to guarantee that no data samples are missed?
half-baked solutions? (Score:1)
Time for bed... (Score:3, Funny)
Sounds just like a title of a bedtime story.
I also recommend you read "How CowboyNeal saved the world (with a little help from / and
Why not SoftUpdates for Linux iso Journalling? (Score:5, Interesting)
IMHO, SoftUpdates are better than Journalled File Systems. There's no journal file to maintain, just careful ordering of the writes. Why no discussion of it for Linux?
Re:Why not SoftUpdates for Linux iso Journalling? (Score:3)
For those of you that don't know, or aren't familiar with FreeBSD, you can build the entire OS from source with one command. It's not a port or package, but the entire base OS (kernel, filesystem utils, OpenSSH, OpenSSL, bind, sendmail, all the crypto, etc...).
I do agree that softupdates would be preferable in most cases. McKusick had his shit in order when he wrote SU. Journaling had its place a year or two ago, but with today's more robust systems and affordable UPSs, why not invest more attention in a unified VM, or better system tools?
For me, FreeBSD has a kick-ass VM and a rock-solid filesystem. Using SU in Linux wouldn't hurt, but you'd need to port over UFS to make it work. That wouldn't be hard, though, since BSD code is pretty much there for the taking. YMMV.
Re:Why not SoftUpdates for Linux iso Journalling? (Score:2)
AFAIK there isn't a real performance advantage for one or the other.
I think soft updates need more memory but do fewer I/Os (no need to maintain the journal on disk), so I expect soft updates will eventually have a bigger advantage over journaling (memory will increase rapidly in size, but disks won't become much faster in the near future).
soft updates problems (Score:1, Insightful)
This may be a corrupt sector containing metadata (maybe even for the "/" directory or "/kernel", if you were writing a new kernel at the time of the crash), or it may be other corrupt data which became corrupted in a cascade failure that resulted in the crash after one or more corrupted blocks were written to disk.
Soft updates simply can't recover from this.
If, on the other hand, it were a kernel panic that didn't result in corrupt data being written to disk, then there's no danger of a corrupt sector from a DC failure, and there is no danger of other corrupt data needing fsck'ing, so you would be in the situation where the only thing that would be out of date is the cylinder group bitmaps; you could clean this in the background by "locking" access on a cylinder group by cylinder group basis for a short period of time, to clear bits in the bitmap that said an unallocated sector was allocated. This might be seen as brief stalls by an especially observant user or program (say someone is doing profiling of code at the time), but could be accomplished in the background.
The problem is that you can not know the reason for the crash, until after the recovery.
If there were available CMOS, you could write a "power failure" value into it at boot time, and then write a "safe panic" or an "unsafe panic" code into it at crash time (a power failure would leave the "power failure" code there).
The only valid background case would be for a "safe panic", if you could really guarantee such a thing.
The worst possible failure resulting in a reboot is a hardware failure of the disk; I would really be loathe to try cleaning in the background after a track failure or even a sector failure (sector failures are identical to sector format corruption after a DC failure during a write, FWIW).
Look, soft updates are a good thing, but they aren't a panacea for all problems. Let's laud them for what they do right, but not misrepresent them as doing something they can't.
Re:soft updates problems (Score:1)
That's fairly high overhead, and I would want to know how often corrupt sectors actually get written to disk. Nothing is safe against software faults, not even journalling. My working hypothesis is that most crashes are actually hangs or deadlocks. Accidental power failure/reset also happens, but resets are also caused deliberately in order to recover.
In this case, I would think that modern disks have a fairly sophisticated power-down routine, probably involving completing a certain amount of write-out (at least the sector) before parking the heads. Power comes from platter spin-down.
Re:soft updates problems (Score:2)
I'll be charitable and say your comment is merely misleading. This scenario is no more a problem for soft updates than it is for journaling. The only way it could be a problem would be if you had enabled write caching on a drive that didn't maintain write order and didn't have enough reserve power to flush its write cache on power loss. Well, guess what? Take that same impossible-to-find drive, use it to store your journal instead of soft updates, and you'd be just as screwed.
Journaling is no panacea either, and it involves additional performance costs that many find unacceptable. On balance, soft updates still seem like a far better solution.
The Applix 1616 project (Score:2)
Advantages? (Score:1)
Is it just me? (Score:1)
Re:bsd fully sucks (Score:1, Troll)
Quite funny. Wouldn't that be
Anyhow.. Back to compiling Postgresql and friends under Windows using that