Linux Software

The New Linux Speed Trick 426

Brainsur quotes a story saying "Linux kernel 2.6 introduces improved IO scheduling that can increase speed -- "sometimes by 1,000 percent or more, [more] often by 2x" -- for standard desktop workloads, and by as much as 15 percent on many database workloads, according to Andrew Morton of Open Source Development Labs. This increased speed is accomplished by minimizing the disk head movement during concurrent reads."
This discussion has been archived. No new comments can be posted.

  • Cool (Score:4, Informative)

    by JaxWeb ( 715417 ) on Tuesday April 06, 2004 @08:05AM (#8778313) Homepage Journal
It seems there are two IO scheduling modes you can choose from at boot time.

    "The anticipatory scheduling is so named because it anticipates processes doing several dependent reads. In theory, this should minimize the disk head movement. Without anticipation, the heads may have to seek back and forth under several loads, and there is a small delay before the head returns for a seek to see if the process requests another read. "

    "The deadline scheduler has two additional scheduling queues that were not available to the 2.4 IO scheduler. The two new queues are a FIFO read queue and a FIFO write queue. This new multi-queue method allows for greater interactivity by giving the read requests a better deadline than write requests, thus ensuring that applications rarely will be delayed by read requests."

Nice, but this is making things more complex. I admit I'll just keep all the kernel settings wherever Mandrake sets them. Will other people play about and specialise their systems for the tasks they do?
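    As a rough sketch of the deadline idea (a toy model in Python, not the kernel's code; it ignores the separate read/write queues, and the class name and the half-second figure are invented):

        class ToyDeadlineScheduler:
            # Requests are normally served in whatever order minimises the seek,
            # but each request also carries an expiry time, and an expired
            # request jumps the queue so that it cannot starve.
            def __init__(self, deadline=0.5):
                self.deadline = deadline
                self.queue = []  # list of (arrival_time, sector)

            def add(self, sector, now):
                self.queue.append((now, sector))

            def next_request(self, now, head_pos):
                if not self.queue:
                    return None
                oldest = min(self.queue, key=lambda r: r[0])
                if now - oldest[0] >= self.deadline:
                    choice = oldest  # starved: honour the deadline
                else:
                    # otherwise keep minimising head movement
                    choice = min(self.queue, key=lambda r: abs(r[1] - head_pos))
                self.queue.remove(choice)
                return choice[1]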
  • Re:SCSI (Score:3, Informative)

    by B1ackDragon ( 543470 ) on Tuesday April 06, 2004 @08:11AM (#8778347)
    Since it mentioned that the OS is keeping "per-process statistics on whether there will be another dependent read 'soon'", I really doubt the drive controller would even be able to do that, much less want to.
  • Re:1,000 percent? (Score:5, Informative)

    by gowen ( 141411 ) <gwowen@gmail.com> on Tuesday April 06, 2004 @08:13AM (#8778356) Homepage Journal
    My guess is that it's a fairly specific, non-standard load that will garner a 1000x gain
    My guess is that you haven't spotted that 1,000% is not 1,000x. A 10-fold increase isn't completely implausible for a workload whose read pattern matches the assumptions built into the anticipatory scheduler.
  • Re:SCSI (Score:2, Informative)

    by pararox ( 706523 ) on Tuesday April 06, 2004 @08:14AM (#8778361)
    As a college student, I feel proud to say I've access to a quad-Xeon SCSI machine; this bad thing truly burns.

    I run WebGUI [plainblack.com] on this machine, which receives some 3 and a quarter million hits per month. Nothing to raise the eyebrows at; but check it: on this machine the average load value (from uptime) is some 0.80. My personal (p3) machine, running a BBS, mail, bittorrent, and web service maintains a constant 1.3+.

    I've gauged the importance of SCSI drives in the equation via a (sadly) messy, but soon to be SourceForged, Perl program. The result, confirming that which I've heard repeatedly, is that SCSI drives truly make the difference.
  • by Anonymous Coward on Tuesday April 06, 2004 @08:17AM (#8778381)
    I believe that the anticipatory scheduler builds on the model of the deadline scheduler. See "Linux Kernel Development" by Robert Love.
  • Re:Anti-MS Patent (Score:1, Informative)

    by Anonymous Coward on Tuesday April 06, 2004 @08:21AM (#8778395)
    I believe this feature is in NTFS.
  • Re:1,000 percent? (Score:5, Informative)

    by tonywestonuk ( 261622 ) on Tuesday April 06, 2004 @08:21AM (#8778398)
    Isn't 1000%, 11x?
    15% = 1.15x
    100% = 2x
    200% = 3x
    300% = 4x
    ...
    900% = 10x
    1000% = 11x

    In general: a% faster = (a+100)/100 times as fast.
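    The same arithmetic as a throwaway Python check (the function name is made up):

        def percent_gain_to_multiplier(gain):
            # an a% speedup means (100 + a)/100 times the original speed
            return (100 + gain) / 100.0

        for g in (15, 100, 1000):
            print("%d%% -> %.2fx" % (g, percent_gain_to_multiplier(g)))
        # 15% -> 1.15x, 100% -> 2.00x, 1000% -> 11.00x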
  • Re:Amiga Disks (Score:3, Informative)

    by jtwJGuevara ( 749094 ) on Tuesday April 06, 2004 @08:21AM (#8778400)
    Isn't this one of the reasons Windows takes ages to boot? (many processes all competing for the one disk resource?).

    Which version of Windows are you referring to? At the risk of sounding like a fanboy here, I must say that the OS load times for XP are quite fast compared to previous versions and to most vanilla Linux distributions I've tried in the past (Mandrake 9.x, Redhat 8/9). Whether or not this is related to resolving two processes arguing over access to read from the disk, I don't know. Does anyone have any more information on this?

  • Benchmark (Score:5, Informative)

    by zz99 ( 742545 ) on Tuesday April 06, 2004 @08:22AM (#8778403)
    Here's an older benchmark [kerneltrap.org] made by Andrew Morton showing the anticipatory scheduler vs the previous one.

    The benchmark was made before 2.6.0, but I still think it shows the big difference from the 2.4 IO scheduler.

    Quote:
    Executive summary: the anticipatory scheduler is wiping the others off the map, and 2.4 is a disaster.
  • CFQ (Score:4, Informative)

    by kigrwik ( 462930 ) on Tuesday April 06, 2004 @08:31AM (#8778446)
    The cfq scheduler in the -mm (Andrew Morton) trees gives very good results in desktop use.

    With anticipatory or deadline, I'm experiencing awful skips with artsd under KDE 3.2 every time there is a heavy disk access, but it's [almost] completely gone with cfq.

    To use it, compile a -mm kernel and add 'elevator=cfq' to the kernel boot parameters through Lilo or Grub.

    See this lwn article [lwn.net] for more info.
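    For reference, the boot-loader side of that looks roughly like this (the kernel path and version are examples, not necessarily what you'll have):

        In /etc/lilo.conf, inside the image section: append="elevator=cfq"
        In GRUB's menu.lst, on the kernel line: kernel /boot/vmlinuz-2.6.5-mm ro root=/dev/hda1 elevator=cfq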
  • Re:SCSI (Score:5, Informative)

    by DuSTman31 ( 578936 ) on Tuesday April 06, 2004 @08:35AM (#8778469)

    Yeah, I think so. IIRC it's called tagged command queueing - the drive can have multiple requests pending and, instead of handling them first come, first served, it fulfils them in order of estimated latency from the head's current position.

    I believe Western Digital's recent Raptor IDE drives have the same feature.

    The benefit of this seems contingent upon having multiple requests pending, which AFAIK is hard on Linux as there's no non-blocking file IO. To me, this reads like a workaround for that.
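    A crude model of that reordering (purely illustrative; the sector numbers are invented):

        def serve_shortest_seek_first(head, pending):
            # With several requests outstanding, serve whichever is cheapest
            # to reach next, instead of first come, first served.
            order = []
            pending = list(pending)
            while pending:
                nxt = min(pending, key=lambda s: abs(s - head))
                pending.remove(nxt)
                order.append(nxt)
                head = nxt
            return order

        print(serve_shortest_seek_first(450, [500, 10, 490, 20]))
        # [490, 500, 20, 10] -- 540 sectors of travel, vs 1490 served in arrival order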

  • Re:SCSI (Score:5, Informative)

    by KagatoLNX ( 141673 ) <kagato@@@souja...net> on Tuesday April 06, 2004 @08:35AM (#8778474) Homepage
    SATA is basically the SCSI protocol (the good part) over IDE. There's a reason why some SATA drives appear as SCSI adapters under Linux.

    Expensive, yes. Aging, no. Ten years ago people said SCSI was the future. Now everyone runs it, they just don't know it.

    IDE in its original form has never been able to keep up with a 10k RPM (or higher) disk.

    I think what the parent post is alluding to is Tagged Queueing. Tagged Queueing allows you to group blocks together and tell the drive to write them in some priority. That sort of thing is used to guarantee journaling and such. Interestingly, the lack of this mechanism is why many IDE drives torch journalled fs's when they lose power during a write--they do buffering, but without any sort of priority. You can imagine I was pretty torqued the first time I had to fsck an ext3 (or rebuild-tree on reiserfs) after a power failure.

    The reason that the kernel helps even with the above technology is that the drive queue is easily filled. Even when you have a multimegabyte drive cache and a fast drive, large amounts of data spread over the disk can take a while to write out.

    This scheduler is able to take into account Linux's entire internal disk cache (sometimes gigs of data in RAM) and schedule that before it hits the drives.
  • by warrax_666 ( 144623 ) on Tuesday April 06, 2004 @08:38AM (#8778490)
    AFAIK the "anticipation" bit is not so much about predicting head movement, but is more about reducing head movement. Reads
    cause processes to block while waiting for the data (and can thus stall processes for long amounts of time if not scheduled appropriately), whereas writes are typically fire-and-forget. This last bit means that you can usually just queue them up, return control to the user program, and perform the actual write at some more convenient time, i.e. later. Since reads (by the same process) are usually also heavily interdependent, it is also a win to schedule them early from that POV.

    That's my understanding of it.
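    That fire-and-forget behaviour is visible from userspace, incidentally (a trivial sketch; the path is just an example):

        import os

        fd = os.open("/tmp/scratch", os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd, b"hello")  # normally completes against the page cache
        os.fsync(fd)            # only this call waits for the data to reach the disk
        os.close(fd)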
  • Re:Cache? (Score:5, Informative)

    by Erik Hensema ( 12898 ) on Tuesday April 06, 2004 @08:44AM (#8778548) Homepage

    Sure, and both Linux 2.4 and 2.6 do caching and read-ahead (reading more data than requested, hoping that the application will request the data in the future).

    The I/O scheduler however lies beneath the cache layer. When it's decided that data must be read from or written to disk, the request is placed in a queue. The scheduler may reorder the queue in order to minimize head movements.

    Also, 2.6 has the anticipatory I/O scheduler: after a read, the scheduler simply pauses for a (very) short period, on the assumption that the application will request more data from the same general area of the disk. Even when other requests are in the I/O queue, requests for the area where the disk's heads are hovering get priority.

    While this increases latency (the time it takes for a request to be processed) a bit, throughput (the amount of data transferred in a time period) also increases.

    It took a fair amount of experimenting and tuning to make the I/O scheduler work as well as it does now. However, there may still be some corner cases where the new scheduler is much slower than the old one.
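    A back-of-the-envelope simulation of why the pause pays off (a toy model; the sector numbers and the strict alternation are invented, not measured kernel behaviour):

        def head_travel(sectors):
            # total head movement if requests are served in the given order
            travel, head = 0, 0
            for s in sectors:
                travel += abs(s - head)
                head = s
            return travel

        a = [0, 1, 2, 3]              # process A reads sequentially near sector 0
        b = [1000, 1001, 1002, 1003]  # process B reads sequentially near sector 1000

        interleaved = [s for pair in zip(a, b) for s in pair]  # no anticipation
        batched = a + b                                        # with anticipation

        print(head_travel(interleaved))  # 6997 sectors of travel
        print(head_travel(batched))      # 1003 sectors of travel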

  • Re:But how? (Score:2, Informative)

    by Anonymous Coward on Tuesday April 06, 2004 @08:47AM (#8778576)
    Clusters that are close together logically are going to be close together on the disk surface. They're not actually talking about controlling the head movement directly, but about minimising head movement by exploiting how a hard disk lays out sector accesses.
  • by bflong ( 107195 ) on Tuesday April 06, 2004 @08:48AM (#8778590)
    Make sure that you set X's "nice" value to 0. Some distros set it to something like -10 so that X is not disturbed by other procs. Under 2.4, this was a good thing. However, under 2.6, with its superior scheduler, the kernel will keep interrupting X and you will see lagging performance. Google for it to get a better explanation.
  • Re:Cool (Score:5, Informative)

    by PyromanFO ( 319002 ) on Tuesday April 06, 2004 @08:48AM (#8778592)
    This troll comes up in any thread that has anything to do with Linux at all. Who the hell said anything about asking people to choose? This is for developers and hackers to mess with. The distro you're using will choose for you, just like Microsoft chooses what Windows drivers you have loaded by default. Does every person who runs a Dell Windows machine have to decide what version of the driver to use? No, Dell installs it for them. However, power users can install newer/beta drivers if they want. Same thing here: power users can enable this if they want. If not, you'll never have to know about it or touch it.

    Sorry for biting on the troll but I felt like explaining it.
  • Re:Anti-MS Patent (Score:4, Informative)

    by dave420 ( 699308 ) on Tuesday April 06, 2004 @08:50AM (#8778607)
    So, in turn, should the Linux community cease developing/including things that are "inspired" by Windows?
  • by sylvester ( 98418 ) on Tuesday April 06, 2004 @08:52AM (#8778634) Homepage
    What sort of George 'verbal abortion' Bushism is " privilegiate "?

    You (presumably) wanted just 'privilege' as a transitive verb:
    you either privilege front end services (GUI) or back end services (apache, etc)
    Given the name "Mirko", I would imagine that the grandparent was Finnish. Finnish is not an indo-european language, and has very very different suffixing rules from English. They commonly then derive interesting suffix-forms of words.

    Nice of you to point out the mistake like an ass, though. (Yes, just like I'm doing.)

    -Rob, a Canadian in Finland
  • by Anonymous Coward on Tuesday April 06, 2004 @08:53AM (#8778642)
    Linux has had preemption patches for ages. As far as I can tell, the work from TimeSys Corp is the leading implementation. It uses a variety of patches to make Linux fully preemptible and suitable for use as a hard real-time OS.

    The only reason a disk would need defragmenting is if the FS sucks so badly that it causes massive fragmentation. Utilizing better storage methods such as certain RAID levels, LVM, and any of the Linux or Unix filesystems drastically reduces any problems one might have.

    As an example, I'm using UFS2 (FreeBSD 5.2.1) with soft updates and have been using the same UFS slices for well over a year (not always 5.2.1) constantly (webserver, fileserver, and I do some light compiling) and have 0.2% fragmentation. You're just used to terrible filesystems.
  • Re:But how? (Score:3, Informative)

    by pseudorandom ( 35988 ) on Tuesday April 06, 2004 @08:54AM (#8778645)
    The absolute translation of logical block to head position is unknown to e.g. Linux. While it is possible to reverse engineer the physical disk layout by looking at timings, for general purpose computing this is going way too far. I think the upcoming ATA-7 hard disk standard has some more options to get information about the layout of the disk, but I'm not sure of that.

    Anyway, simple sorting on LBA address will typically reduce head seeks to a large extent, resulting in most of the potential benefit. It is important however to make sure that multiple requests are available to the driver to sort.
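    In its simplest form that sort is just a one-directional sweep (illustrative only):

        def elevator_sort(pending_lbas, head_lba):
            # serve everything at or above the current head position in
            # ascending LBA order, then wrap around to the low ones
            ahead = sorted(l for l in pending_lbas if l >= head_lba)
            behind = sorted(l for l in pending_lbas if l < head_lba)
            return ahead + behind

        print(elevator_sort([870, 12, 400, 901, 250], head_lba=300))
        # [400, 870, 901, 12, 250]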

  • by k-hell ( 458178 ) on Tuesday April 06, 2004 @08:57AM (#8778669)
    2CPU.com has a Linux kernel comparison [2cpu.com] of 2.6.4 and 2.4.25 on a SMP system with interesting results.
  • by Anonymous Coward on Tuesday April 06, 2004 @08:58AM (#8778675)
    Direct IO has been available in Linux since 2.4.0 at least.
  • by Yokaze ( 70883 ) on Tuesday April 06, 2004 @08:58AM (#8778676)
    Elevator seeking looks at the current request queue and bundles requests which are close together, to minimise head movement. This is indeed old. IIRC, Linux has had it since 2.2 or so.

    The anticipatory scheduler tries to anticipate future requests (who would have guessed that?), and is relatively new [acm.org]
  • by pmjordan ( 745016 ) on Tuesday April 06, 2004 @08:58AM (#8778681)
    Yeah, the same thing happens under Windows if you read from CD-ROM. The whole thing just slows to a crawl if you try to read two files at once. I'd assume it's a hardware problem (long seek times, large error margins), not necessarily Windows' fault, but I don't use CDs much anymore (hooray for ethernet and huge hard drives) so I don't know.

    Of course, this raises the point that aligning the data on a game CD or DVD for a console is a science in itself. PC game development is easy in comparison! (plonk everything on the hard drive)

    phil
  • Re:Amiga Disks (Score:4, Informative)

    by jarran ( 91204 ) on Tuesday April 06, 2004 @09:00AM (#8778694)
    Because it's a lot more complicated than you suggest. What happens if A gets in first, but is doing an extremely long disk-bound task? B will never get a chance to access the disk. It could even be that B needs only a very short amount of disk access, in which case it has to wait until A is done, even though interleaving the reads would have been the "right thing to do".

    Being multi-user complicates things even further. Sure, if you are a single user on a desktop machine and you double-click on two programs in rapid succession, queuing them for loading one after the other may be the right thing to do. But what if those programs are actually being loaded by two different users? Can we completely lock out one user just because they started loading their program slightly later? Again, what if user A runs emacs, and a fraction of a second later, user B runs ls? Under your system, B effectively has to wait as long as it would take to load emacs, plus as long as it would take to load ls.

    You can't even realistically separate the queues by user. In many situations, a single unix user may be running on behalf of many physical users (AKA human beings ;) ), e.g. in the case of any kind of server.

    I'm not saying that any of these problems are intractable (Linux is now doing a pretty fine job), just that they aren't even remotely as trivial as queuing loads one after another.

    Oh BTW, thanks for bringing back happy Amiga memories. Them were the days! :-)
  • by Anonymous Coward on Tuesday April 06, 2004 @09:00AM (#8778695)
    It isn't just elevator seeking. This logic can actually pause a new request for a little bit if it thinks that another process is about ready to do a read in the general area where the head is. The elevator seek simply sorts all the requests into order. This method tries to anticipate a request before it is issued. Kind of like holding the door open in the elevator for you instead of you missing it and having to catch the next one.
  • by Mgdm ( 586001 ) on Tuesday April 06, 2004 @09:00AM (#8778696)
    ...that the Red Hat "kernel development systems engineer"'s name is Stephen Tweedie, not Tweed :)
  • Re:Speed-ups (Score:3, Informative)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Tuesday April 06, 2004 @09:06AM (#8778730)
    Effectively, SCSI does do I/O speedups. Firmware, not hardware, but so is everything. And the speedups gained by giving SCSI a lot to do and letting it do it in its preferred order can be significant. But SCSI cannot "see" processes - nor file systems. The OS can work out that a process is reading a file and read the next bit of the file - where SCSI would read the next bit of the disk, if it did so at all. The OS can see when you have reached EOF, or closed the file, and know there is no point prereading.

    You don't mean multiple heads on an arm. Multiple heads on an arm would all move together, and you couldn't use two at the same time - the feedback servo which keeps it on track can only respond to one track. What you mean, I think, is two groups of arms (all the arms move together). Manufacturers have looked at that but decided against it.

    The arms and associated actuators are some of the most expensive parts of the drive. If you are going to double this cost, why not throw in a few more platters and an enclosure and have twice the capacity - and twice the throughput?

    Putting two actuators in the drive increases power consumption a lot, and heat as well. Both are real problems for current drives. And a "specialist" drive doesn't have the economies of scale, and could cost more than twice as much as two simple drives - which, together, have the same number of heads and twice the capacity.

    The real killer is turbulence. If you have two arms on the same surface, each is flying in the wake of the other. And, unlike its own wake, the other's alters dynamically, so that seeking arm 1 can perturb arm 2.

    Google has it right - lots of dumb hardware, lots of clever software. What we need is filesystems whose allocation patterns are "RAID aware". Particularly with RAID 0, I can see filesystem allocation patterns which could (in conjunction with the optimisations mentioned here) greatly improve performance.
  • by AlecC ( 512609 ) <aleccawley@gmail.com> on Tuesday April 06, 2004 @09:12AM (#8778779)
    No, this is not the elevator algorithm. This is an anticipatory algorithm that pre-queues reads that it expects the application to do in the future. Linux already has the elevator algorithm - had it before Windows, I believe.
  • Re:I second that. (Score:2, Informative)

    by incuso ( 747340 ) <[moc.liamg] [ta] [osucni]> on Tuesday April 06, 2004 @09:20AM (#8778842)
    The new nvidia driver is 2.6.x compliant. I am using it on my PC with 2.6.4.

    M.
    --
    Monete Italiane [altervista.org]

  • Re:Amiga Disks (Score:3, Informative)

    by shyster ( 245228 ) <brackett@uflPOLLOCK.edu minus painter> on Tuesday April 06, 2004 @09:41AM (#8779026) Homepage
    I've always wondered why there wasn't something in the OS to force this behaviour, Ie, making sure that App 2 access to the disk is queued until app 1 has finished. Isn't this one of the reasons Windows takes ages to boot? (many processes all competing for the one disk resource?).

    AFAIK, the reason Windows used to take ages to boot was that drivers and services were started sequentially and no optimization was ever done for the boot process. Windows XP, OTOH, had a goal of less than 30 seconds for a cold boot. In order to achieve this, new BIOS specs were implemented as well as optimization of the boot process. The main things done to speed up the boot process were doing driver and service initialization and disk I/O in parallel, and prefetching. MS claims [microsoft.com] a 4-5x increase in speed using a chunked read of all boot files, but others disagree [serverworldmagazine.com] and think that prefetching accounts for most of the increase.

    With a new PC and a fresh install of XP, it's very possible to get to the desktop in less than 30 seconds. Even with my aging PIII-500MHz laptop (without the BIOS optimizations called for by MS) and with additional startup software, my PC is usable in less than a minute. To be honest, it's the one reason I switched to XP from 2000.

  • Re:Disk Transfer QoS (Score:4, Informative)

    by Xouba ( 456926 ) on Tuesday April 06, 2004 @09:44AM (#8779059) Homepage

    Two words: IRIX, XFS.

    IRIX had some sort of "quality of service applied to disk accesses", as you wrote, thanks to XFS. The filesystem allows defining zones that have a guaranteed minimum throughput configured. I can't say more about it because I only know of it secondhand O:-)

    XFS is available for Linux since 2.6.0 and 2.4.24, IIRC, and I think this feature is also available in the latest kernels. Though it's still experimental, IIRC.

  • by aussersterne ( 212916 ) on Tuesday April 06, 2004 @09:59AM (#8779186) Homepage
    Aside from much better I/O performance, 2.6.x also has much better performance on my notebook (IBM T-series ThinkPad).

    I don't know if it's due to SpeedStep support being in the kernel or what, but when I was running 2.4.x with the pre-emptible kernel patches, switching from wall power to battery power meant massive slowdowns, as though I had switched from a PIII-1GHz to a 100MHz Pentium classic. Simple commands like "ps" would take seconds to complete and screen redraws were visible. The whole system would feel like sludge. In spite of this fact, battery life was relatively poor. The combined effect (much slowed system, very short battery life) meant that it was difficult to get anything at all done on battery power.

    Now with 2.6.x, when I switch to battery power, there is no perceptible slowdown whatsoever when compared to wall power, and battery life is much improved. Downside: suspending 2.6.x kills USB-uhci, so I've had to compile it as a module and hack up my suspend/resume scripts to reload it each time. But for the speed increase, it's well worth the trouble.
  • Heh. (Score:2, Informative)

    by Mr Z ( 6791 ) on Tuesday April 06, 2004 @10:04AM (#8779226) Homepage Journal
    It is new with respect to 2.4.x. The anticipatory scheduler was introduced in the 2.5.x-mm tree and made its way into the kernel by the time 2.6 was released.
  • by Zoolander ( 590897 ) on Tuesday April 06, 2004 @10:05AM (#8779251)
    Use 'elevator=as' (or cfq, or deadline)
    The anticipatory scheduler is the default for the vanilla 2.6 kernel.
  • by Zoolander ( 590897 ) on Tuesday April 06, 2004 @10:21AM (#8779421)
    The point of the new scheduler(s) is that most access to the disk by a process is sequential (i.e. many blocks at a time), so if another process wants to access some other part of the disk, it most often pays off to make that process wait for a while before serving it, since the original process will most likely want more data near the current block.
    That way, you don't need to move the head nearly as much as if you responded directly to the other process.
    Robert Love has written an excellent article about the new schedulers here: I/O Schedulers [linuxjournal.com]
  • Standard mistake (Score:2, Informative)

    by fizbin ( 2046 ) <martin@s[ ]plow.org ['now' in gap]> on Tuesday April 06, 2004 @10:53AM (#8779739) Homepage
    You're making the standard mistake of assuming that the labor pool of "people who work on linux" is of a fixed size, and that man hours are interchangeable.

    Linux doesn't work like that. The vast majority of people who work to improve Linux aren't doing it because they're getting paid; instead they work on what interests them. If someone is focusing on feature X, that's not necessarily taking any time or energy away from feature Y - if they weren't doing X, they might very well not be contributing to Linux at all.

    Seriously, complaints like this remind me of a manager coming in and discovering that some developers were talking about the finer points of thread interactions in a specific application and saying: "Who cares how the threading works? I just want something the customers can use!"

    If it makes you feel better, you should learn to simply ignore discussion of technical features that upset you - this discussion does not in fact take away from discussions of user friendliness, nor does it imply that the user will be forced to follow it in order to use the outcome. If the user does wish to follow this discussion, they might glean something interesting from it; supplying users with extra optional information can't be a bad thing, can it?

    And as for access time, I have to ask: making the computer as a whole more responsive to my actions won't make me like using it better? Maybe 10% isn't going to make much perceived difference most of the time, but when it means the difference between a stutter-free movie playback and the occasional dropped frame, I'm going to notice.
  • Re:Disk Transfer QoS (Score:2, Informative)

    by diegocgteleline.es ( 653730 ) on Tuesday April 06, 2004 @12:33PM (#8780914)
    Yes, it's done. Search for "CFQ scheduler". It's in the -mm tree. You get "io priorities", so you can tell apache "you've been guaranteed 80% of the disk bandwidth", or run updatedb cron jobs at a lower io priority so they don't interfere with mozilla/openoffice... IRIX has had it for years. Linux has it now. MS is planning this for Longhorn... (it makes the OS a bit more "realtime", so you won't have video pauses, because your videos have a higher priority...)
  • by Sajma ( 78337 ) on Tuesday April 06, 2004 @12:35PM (#8780943) Homepage
    The original research for anticipatory disk scheduling was done at Rice University by Sitaram Iyer and Peter Druschel and is described here [rice.edu].
  • by chongo ( 113839 ) * on Tuesday April 06, 2004 @01:55PM (#8782092) Homepage Journal
    While you are waiting to install the new kernel code, you might try a filesystem mount option called noatime that has been in many *n*x distributions for a while now.

    If you don't care about last access times on your files, then you should consider mounting your filesystems with the noatime mount flag as in this /etc/fstab line:

    LABEL=/blah /blah ext3 defaults,noatime 1 2

    Reading a file under noatime means that the kernel does not need to go back and update the last access time field of that file's inode. Sure, multiple reads over a span of a few seconds will only cause the in-core inode to be modified, but eventually that modified inode must be flushed out to disk. Why cause an extra write to the disk for a feature that you might not care about?

    For example: think about those cron jobs / progs that scan the file tree (tmpwatch, updatedb, etc.). Unless you mount with the noatime option, your kernel must at least update the last access time fields of every directory's inode! Think about those /etc files that are frequently read (hosts, hosts.allow, DIR_COLORS, resolv.conf, etc.) or the dynamic shared libs (libc.so.6, ld-linux.so.2, libdl.so.2, etc.) that are frequently used by progs. Why waste write-ops updating their last access time fields?

    Yes, the last access time field has some uses. However, the cost of updating those last access timestamps, IMHO, is seldom worth the extra disk ops.

    There are other advantages to using the noatime mount option ... however to wind up this posting I'll just say that I always mount my ext3 filesystems with the noatime mount flag. I recommend that you consider looking into this option if you don't use it already.
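    If you want to see the effect for yourself, something like this shows whether a read touched the atime (the path is just an example, and some mount options defer the update):

        import os

        path = "/etc/hosts"  # any frequently-read file
        before = os.stat(path).st_atime
        with open(path) as f:
            f.read()
        after = os.stat(path).st_atime
        print(before, after)  # equal under noatime; otherwise the read
                              # dirtied the inode for no real gain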

  • by arunarunarun ( 196635 ) <arunissatan.gmail@com> on Tuesday April 06, 2004 @02:14PM (#8782363)
    You can fix it. There's a kernel patch that allows you to use a corrected ACPI DSDT rather than the broken one in your BIOS, and corrected versions of the DSDT are available, put up by people who've already repaired theirs. You can even do it yourself using Intel's ACPI tools.

    I did this on a Compaq Presario 2100 laptop. Look up the ACPI4Linux project [sourceforge.net].
  • Re:Heh. (Score:2, Informative)

    by Mr Z ( 6791 ) on Tuesday April 06, 2004 @03:24PM (#8783380) Homepage Journal

    I'm not sure where you'd find it, but you might make some headway searching for "anticipatory scheduler" on kerneltrap.org [kerneltrap.org]. This scheduler was discussed multiple times on that site.

    --Joe
  • FreeBSD runs faster (Score:1, Informative)

    by Anonymous Coward on Tuesday April 06, 2004 @08:55PM (#8787406)
    I have tested the Linux kernel 2.6.x series using the fastest Linux distro, Slackware (I customized it and compiled it), and FreeBSD still runs faster with the default settings (no tweaking)!
