
Preload Drastically Boosts Linux Performance 144

Posted by kdawson
from the getting-in-line-early dept.
Nemilar writes "Preload is a Linux daemon that stores commonly-used libraries and binaries in memory to speed up access times, similar to the Windows Vista SuperFetch function. This article examines Preload and gives some insight into how much performance is gained for its total resource cost, and discusses basic installation and configuration to get you started."
This discussion has been archived. No new comments can be posted.


  • LiveCDs do this... (Score:4, Informative)

    by SaidinUnleashed (797936) on Tuesday February 26, 2008 @01:19AM (#22555226)
    This is exactly why live CDs like Damn Small Linux (and Knoppix, if you have a ton of ram) run so fast if you load the CD image to ram. Ram is fast!
    • by calebt3 (1098475)
      I haven't tried DSL or Knoppix lately, but Puppy was terrible. It felt like I was running XP on a machine with the minimum specs and running bloated software. Even the mouse was jerky.
      • Re: (Score:2, Informative)

        by ozmanjusri (601766)

        It felt like I was running XP on a machine with the minimum specs and running bloated software. Even the mouse was jerky.

        Something's not right there. Puppy's normally responsive on machines that'd be slow with 98SE.

        • by calebt3 (1098475)
          Maybe it's because it was on a new machine? I know that DBAN [sourceforge.net] has (had) some issues with speed on Intel Core 2 Duo machines. Maybe Puppy suffers from the same problem?
          • Re: (Score:3, Informative)

            by ozmanjusri (601766)
            Puppy's like lightning on my Core 2 laptop. If the mouse is lagging, I'd suspect the graphics card/driver. Try selecting a different Vesa mode next time you boot and see what happens.
        • I'll say something's not right there. I loaded Puppy on a 1-gig USB drive, tried it on my 7-year-old homebuilt, and it was way slower than even XP.

          Jerky mouse, freezing up, crashing, it brought back the days of Win98SE.

          I thought it was because the USB load was "experimental".

          I'll try DSL, or some other lighter weight distro and see what happens.

    • Hey I ran Knoppix on my PC when it temporarily had 3 GB of ram and it still lagged on menu loads and stuff. I couldn't figure out how to get it to keep loading stuff into memory and leave it there longer. Then again I don't know linux very well. If you know of a way, please share.
      Anyway, back to the article. This is an especially good idea because these days you can put in 4 GB of PC5300 on up for around $100 from Newegg and probably never use half of it. I ran Oblivion while burning a DVD and a lot o
      • by Anonymous Coward on Tuesday February 26, 2008 @03:41AM (#22555932)
        Make sure you pass the "toram" parameter when you boot the livecd at the kernel load prompt. You can press the various function keys at boot time to find the correct method.
        • Linux does cache all requests to the disk, so I really don't see why you would need to use toram to get more speed. The only benefit I can see is freeing up the DVD drive. Sure, seek times during boot are going to be eliminated, but you could do a precache stage in the boot instead.

          But I've always wondered: does Linux cache the compressed blocks, or the uncompressed?
          • Re: (Score:3, Informative)

            by BobPaul (710574) *
            The series of comments to which you replied is about Linux LiveCDs, which don't require/touch the hard disk unless you explicitly tell them to. Using "toram" or "docache" or similar kernel switches allows the entire contents of the CD to be loaded into a ramdisk, dramatically speeding up loading and allowing one to remove the CD.

            Even if your particular LiveCD is set to watch for and automatically use swap partitions, a HD is still significantly faster than an optical drive. If you install Linux permanently to
            • by BobPaul (710574) *
              Sorry, I misread disk as "HD" and not as "DVD" on my first pass. Obviously you're in the right place.
            • by Junta (36770)

              dramatically speeding up loading

              Not so much in my experience, it just pushes all the loading and *then some* to the boot process. You may only ever hit, say, 40% of the disk content on a livecd if not tuned to your usage, and therefore incur 40% of the content having to be read painfully slow on demand. By virtue of being on demand, every operation that reads new files will be obviously slow the first go around.

              Meanwhile, toram and such cause 100% of the disk content to be hit up during boot process, and can make it excruciatingly slow

              • by BobPaul (710574) *
                I guess by loading I meant of applications, not of bootup. I use it if I'm going to use the livecd for an extended period of time rather than doing something quick like re-install grub.

                As a side note, from my experience even repeated loadings take a long time. If I open firefox it will often hang waiting for the disk to spin up after it's open and again if I close and reopen it. Even using the terminal can cause this. I pretty much hate using LiveCDs if they aren't cached in ram, but often I don't feel like
    • by tacocat (527354) <tallison1.twmi@rr@com> on Tuesday February 26, 2008 @06:30AM (#22556578)

      Several things come to mind when I read the post.

      I thought Linux cached used libraries in RAM already, resulting in the appearance that Linux was always using up all my memory but wasn't really. If true, then this basically does what? Guesses what you want to use and loads them for me? Decides what I use a lot and makes certain they never fall out of memory? In both cases, someone is not using my resources in an optimum manner.

      If I use the price of my first desktop computer and use it to purchase a new computer at Dell, I am moving up 40 times in speed, 2x in bus architecture, 4x in cores, blah blah blah. Compared to the last computer I purchased (2006), I can still easily get something double in performance.

      So, I'm not sure what you need in performance, but between the stupid amount of computing power and Linux already doing a lot of in-memory caching, there might be a pretty small margin for improvement. But I guess what I really struggle with is the idea of someone/something trying to proactively determine what I'm going to use and then force my computer into a certain behavioural pattern that makes assumptions about my use. Sure, it screams marketing demographics, but even without a PR department for Linux I still don't think there is sufficient need for something like this.

      Can someone elaborate on practical reasons why this is something I would really need?

      • by Ibn al-Hazardous (83553) <filip@@@blueturtle...nu> on Tuesday February 26, 2008 @09:34AM (#22557408) Homepage
        You start with a false presumption. I don't know what distro you use, so I can't tell you whether it does anything nifty - but "Linux" sure as hell does not do this already. If another app has already loaded a shared library, it may well be in RAM, but it can just as well be swapped out. For all other cases, the answer is probably that your shared libraries are not cached or preloaded - and so this will give you quite a speed-up.

        The thing that eats all your RAM is nothing Linux specific at all, it is your applications asking for more RAM than they are currently going to use. Why should they do such a thing? Well, what do you think memory management would look like if hundreds of apps, daemons and kernel threads ask for two bytes at a time? It'd paint a pretty fragmented picture, so they ask for gobs of pages at a time. Pages seldom touched get swapped out, but still there's an awesome amount of overallocation - thus your memory seems to be 100% allocated 100% of the time.

        So, preloading libs that are frequently used is probably going to use your RAM in a more meaningful way unless you already have a problem with constant swapping.
        • by joto (134244) on Tuesday February 26, 2008 @10:13AM (#22557732)

          I disagree with both of you. There should not be a need for this. Linux memory management should be closer to optimal for desktop users, but unfortunately the current strategy just doesn't work. It's optimized for servers, paging out interactive apps whenever there's something going on in the background.

          In particular, the locatedb daemon makes everything unresponsive because Linux caches every file on your filesystem it touches, even though it's pretty much guaranteed nobody else needs those files anytime soon. This may be theoretically "optimal" in the general case, but it certainly doesn't feel that way for desktop users. Most desktop users would be more than happy to have background jobs run slower if it didn't impact responsiveness. Also, I believe many people would prefer predictable response times; it's better for the disk to churn while loading a huge file than for it to churn everywhere else, paging in libraries that were paged out because the huge file is in memory.

          Adding a daemon to predict shared library usage is a step in the wrong direction. Not because it doesn't fix the problem; I haven't tried it, and sure, it might even work fantastically. It's a step in the wrong direction because it's a kludge, not a proper fix for having memory management strategies in the kernel that users actually want. Unfortunately, fixes to this problem are hard to do, and every time someone tries a proper fix, it is debated to death on the kernel mailing list and then dies slowly as it ages out of tree. For all I know, that's also the right outcome: if it's going to be in-kernel, it should also be *right*. A daemon might be a better place to experiment, and hell, if it solves the problem for 99% of users, we might not even need to change the current strategy, which is certainly right for servers. After all, we live with kludges in other places, such as the X Window System needing to run as root and access raw kernel memory.

          But yeah, memory management is complicated. I doubt you can solve this on a piece of paper. If it works, I'm all for it! Maybe this is a proper kludge?

          • Re: (Score:3, Informative)

            by EvilRyry (1025309)
            The kernel already supports hinting like this. Indexing programs should throw the kernel the hint that the files it reads should not be cached. Whether the programs actually do this or not is another matter.
          • by Junta (36770)
            updatedb is a little different. Yes, I suppose on some amounts of memory, it will dislodge cached pages here and there, but updatedb doesn't do *much* with the files. My desktop has 12GB of ram, and has been up through a number of updatedb cycles and heavy heavy usage. It currently has 8.6 GB of ram it cannot possibly figure out a use for, that is left free, and 1.9GB of disk blocks held in cache. The actual memory used is 1.4GB, and there are 217MB of bufferspace allocated. With 4GB of ram, I would b
            • by bgat (123664)
              "In relatively recent history, IO scheduling has been painful on the desktop..."

              It doesn't help that a lot of desktop I/O hardware is really, really crappy, and bogs the CPU down with nothing more than disk-platter baby-sitting and checksum calculations, after first starving it of memory and disk bandwidth.

              The minute you invest in decent hardware, a lot of the I/O scheduling problems go away.
          • At the risk of sounding stupid, I'll have to disagree with both you and the parent.

            In particular, Linux has long had an overcommit memory system. An app can ask for vast quantities of memory and the kernel will happily grant it. But that doesn't change the amount of memory being used, nor does it kick things out of the cache. Only when that memory is actually used will the kernel *really* allocate it and make room in RAM.

            Also, Linux does not cache every file on the system that it touches. locatedb sc

        • Re: (Score:3, Informative)

          by billnapier (33763)
          Sigh. The reason Linux always reports all your memory as being used is the page cache, where it caches pages that are read from block devices (like your hard disk). Physical pages in memory that are unused (as opposed to virtual pages that your application just hasn't accessed yet) are used to store data read from disk in case you need to access it again. If your application starts to actually use pages that it allocated (like accessing things in shared libraries), Linux will dump those disk cache pages fro
        • Ok, linux box here, free -m:

                       total   used   free  shared  buffers  cached
          Mem:          2026   1512    513       0      770     379
          -/+ buffers/cache:    363   1663
          Swap:         2870      0   2870

          slashdot won't seem to let me format the way I want, but run free -m on your own. The cached column is the 379 figure.
          Note the 379M number. That is the amount of data read from disk and kept in RAM. When an application needs to malloc and there is no completely free memory, yes, it will free up those pages (it ideally pi

      • The moment you start running something, preload all libraries this process is known to reference? That way it doesn't matter if the application blocks on each library being loaded.

        Though I am pulling this out of my ass.

      • Re: (Score:3, Informative)

        by mhall119 (1035984)

        I thought Linux cached used libraries in RAM already, resulting in the appearance that Linux was always using up all my memory but wasn't really

        Linux uses a disk cache in RAM to keep from re-hitting the HD for often-accessed files. It actually is using up all your memory that hasn't been allocated to an application (which is good, because unused memory is wasted memory), but it will drop some disk cache space when other applications need more memory.

        If true, then this basically does what? Guesses what you want to use and loads them for me?

        Essentially, yes. One of the bigger bottlenecks to application startup is disk seek/read times. By performing this action in the background before it is requested, you won't hit that lag time. The

      • Re: (Score:3, Interesting)

        Unless you're still coming from the Windows mindset where you're used to closing an application after every use of it, preload isn't of much use at all. If you never close an application, startup time is not an issue. The firefox window I'm posting this response from now has an uptime longer than any windows box with automatic updates turned on and is only clocking in at 118M/22M resident/shared. I could possibly see it being of some use if you actually open and close OO.o regularly (it's a slow, bloated
        • As fun as it is to poke fun at MS for updates incurring reboots, most modern distros end up issuing a kernel update every couple of weeks, so linux is not immune. Also, I have some misbehaving drivers on my laptop that currently preclude suspending, and thus shut down when I have to travel with it, so it's often the case my linux laptop incurs resets that flush cache. Finally, the simple reality is if you want to build a platform for the masses, you have to deal with some users that even with everything l
        • by tacocat (527354)

          Good point. I generally just keep applications up and move to a new desktop when I need the real estate.

          This is a Microsoft solution for Microsoft software design.

          Please don't put it in my distro. If I use something large, like OO, I don't mind the start up time because once it's up, I can just leave it up all day long.

  • ramdisk (Score:2, Insightful)

    by wcpalmer (1232598)
    I read a guide on the Gentoo forums a while ago about copying different directories into ram to "preload" them.

    http://forums.gentoo.org/viewtopic-t-296892.html [gentoo.org]

    I never actually tried it, but I might now that I have 4 GB of RAM! A daemon to help automate this process would be welcome, though.
    • Re: (Score:3, Informative)

      by arivanov (12034)
      I do this on a couple of systems that see only "occasional" use so I can spin down the disks. Works quite well actually.
  • by Anonymous Coward
    I find it odd that the article "highly recommends" (RTFA before replying) installing this on a desktop Linux machine, but Vista's implementation is seen as "RAM hogging" and considered "bloat." I'm curious what sort of logical argument underlies this, as the "goodness" of preloading seems to change based upon which operating system it is implemented in. It is *almost* as if there is no logical basis, but, surely that can't be the case with the erudite, level-headed Slashdot crowd, right?
    • Re: (Score:3, Interesting)

      by bersl2 (689221)
      You have the option not to run this sort of program. If it sucks, turn it off.

      Also, Windows' VM system (IMNSHO) has always sucked and will continue to suck; predictive loading of entire bits of software has nothing to do with it.
      • Re: (Score:1, Informative)

        by Anonymous Coward
        SuperFetch is easy to turn off, and Microsoft made pretty effective improvements to the VM system in Server 2003, which Vista is built upon. It's the System Restore, Shadow Copies, and Indexing services that strangle Vista with continous disk I/O. Power those services off, provide a healthy quantity of RAM, and Vista will be a much more adept multitasking system than XP ever was.
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Presumably the Linux implementation is better. Heck, just look at how disabling swap (or using a ramdisk for swap) dramatically improves performance in Windows because of its sucky virtual memory system, if you don't believe me.
    • Re: (Score:1, Informative)

      by Bondolon (1000444)
      Vista is typically seen as being pretty memory-hogging to begin with, whereas I've successfully run gutsy with compiz on my EeePC, and it doesn't, even then, have a problem with memory. The article pretty directly says that at the very most, the machine in question was set up to use no more than 87MiB, and out of that it wasn't even using a third.
    • by Daemonax (1204296)
      Does Vista dynamically turn this on or off depending on the amount of available RAM? I'm guessing not. This preload thing is something that you can turn on if your machine can deal with the extra memory requirements. Currently, Vista wants to do too much on crappy hardware; even on good machines it's painfully slow at times. So yes, both features would seem to be RAM hogging, but this one can be turned on or off, whereas I'm guessing the Vista one can't be, or at least is on by default even when the machi
    • by Anonymous Coward on Tuesday February 26, 2008 @01:59AM (#22555478)
      Vista's implementation is marketed as being useful for older, slower machines with less RAM, where it actually may be unwanted, and could cause performance issues (unless it's disabled below a certain threshold - it might be). It's only really useful if you have lots of RAM (around 2GB or so). Yes, SuperFetch has an extra mode where it uses a USB-2 stick as a secondary disk cache, but that's not what we're talking about here. That mode is generally perceived as a gimmick.

      Linux handles having lots of RAM a lot better than Windows (XP) does, because of differences in the way the caching system was designed. Linux (and OS X) was intended to run entirely from RAM and use little swap. I've run, say, OpenOffice once, not used it for several weeks, and the next time I start it it loads almost instantly, because it was still sitting in the cache. My machines have 2GB of RAM, with much less than 500MB actually in use - the remaining 1.5GB is being used as disk cache. Swap usage is either zero, or very close. Of course, performance goes to hell if you do something that flushes the disk cache, or if you try using such a system on a machine with 256MB of RAM.

      Windows, on the other hand, was designed to run almost entirely from swap, and tends to drop stuff from the disk cache when it's not been used in a while, as well as moving stuff out to swap rather aggressively. That works great if you barely have enough RAM to run the OS, but it's terribly wasteful if you have more than enough RAM. In this case, SuperFetch is actually useful, allowing it to catch up to and actually surpass Linux, by monitoring which files are actually used and making sure they're already in the disk cache.

      That's great, although nothing new. Other OSes have had this for years (this Linux implementation dates back to 2005, Mac OS X has had it for ages, and neither implementation was original) - Microsoft were just the first to brand it.

      TFA said nothing about Vista's implementation.

      I think the primary problem people have with Microsoft's implementations is that they're typically very complicated, and have a tendency to degrade over time. XP is the typical whipping boy for this - none of the self-maintaining performance stuff (prefetching, or the prelinker) actually works for longer than about six months, meaning that an XP installation starts off fast, gradually gets faster, and then rapidly slows down as the system tries to speed itself up.
      • Re: (Score:1, Troll)

        by dbIII (701233)

        Linux handles having lots of RAM a lot better than Windows (XP) does, because of differences in the way the caching system was designed

        It's also due to linux (and just about everything else including MS Server 2003) correctly supporting the Pentium Pro and later processors. With the 64 bit versions the 2GB ceiling vanishes - but with 32 bit Vista the ceiling is far too close to the floor in my opinion.

    • by Yfrwlf (998822)
      With the logical responses to this, I have to make my own emotional one and say that even if your statement were true and Microsoft really did "come up with it first", even if Steve himself actually got the idea before anyone else thought of loading things into RAM to make things faster (hold the laughter til I'm done), it'd still be acceptable IMO to make fun of them, in principle.

      To use some old cliches, if a soccer-sewing sweatshop company donates some money to orphans, would you be all praise? If a
    • by MrNemesis (587188)
      As the AC pointed out in his excellent post, not all methods of prefetch/preload are created equal. It's not just whether the OS caches certain bits of certain things into RAM, it's also about what it does with them later. Disclaimer: I've not used Vista, but in previous versions of NT (including XP which also used prefetch), the OS has a terrible habit of dropping apps (especially ones that are minimised) into swap to make room in memory for more file cache (try minimising a reasonably chunky FF, copying a
  • Old tech (Score:1, Insightful)

    by mokeyboy (585139)
    This seems an odd article given preload made it into distros as a standard component as far back as Fedora Core 1 (RHEL3). It's been around since the late 2.4 kernel series was still mainstream. What was the significance of the article? It didn't even update the numbers for a modern hardware config.
  • Hope... (Score:4, Funny)

    by deadmongrel (621467) * <karthik@poobal.net> on Tuesday February 26, 2008 @01:37AM (#22555346) Homepage
    it doesn't make GNU/Linux as *fast* as Microsoft Vista.
  • Is this functionality available in Apple's OS X?
    • by Nemilar (173603)
      OS X uses Prebinding [wikipedia.org], which is a bit of a different thing.
      • by dreemernj (859414)
        Prebinding isn't really similar to preload. Preload is actively trying to figure out what the user is going to access and loads it into RAM in advance. Prebinding links an executable to libraries. It only needs to be run when you install an application or when you update the libraries. And prebinding is deprecated now. It wasn't a big savings in performance and has been replaced, I believe, by a system cache of library symbols.

        Back when I used to use Macs fairly regularly (back around OS X's release)
  • Nice but... (Score:5, Informative)

    by pizzach (1011925) <<pizzach> <at> <gmail.com>> on Tuesday February 26, 2008 @01:39AM (#22555354) Homepage
    I never had any luck with preload the times I tried it (a year or two ago?). Nowadays I use alltray [sourceforge.net] for preloading often used apps that are a bit chunky such as Firefox or Openoffice. Openoffice also has a built in preload feature...but you can use alltray anyway for the same effect.
    • Re:Nice but... (Score:5, Informative)

      by Nemilar (173603) on Tuesday February 26, 2008 @01:45AM (#22555398) Homepage
      Alltray and preload are two totally different things. With Alltray, you're talking about keeping the application open, just minimized to the system tray. With Preload, you're talking about caching the binaries/libraries in memory so when you do open the application, it's reading the data from RAM rather than the hard drive. Sure, AllTray moves the load to RAM, but at the cost of entire applications. The point of preload is that it just caches the most commonly used files.
      • by pizzach (1011925)
        I said alltray worked pretty well when preload seemed to do nothing. They are different programs, but they can be used to achieve similar effects. I did not say they were the same.

        I'm guessing you tried doing something silly like loading your whole /bin and /usr/src/bin into alltray and were given massive errors? For MOST Linux users, the load times of only a few programs are the real pet peeves; and that is where alltray makes sense.

        Sure, alltray moves the load to RAM, but at the cost of entire applications. The point of preload is that it just caches the most commonly used files.

        Sure, preload caches the commonly used files, but what if your openoffice

  • Blogspam (Score:5, Informative)

    by bziman (223162) on Tuesday February 26, 2008 @02:05AM (#22555504) Homepage Journal
    The submitter is the author of the blog, and is merely paraphrasing the whitepaper written by the author of the software -- and that is two years old. Nothing new or interesting here, just someone trying to draw eyeballs to his blog.
    • Re: (Score:2, Insightful)

      by DragonTHC (208439)
      what's wrong with trying to drum up a little readership?

      For those of us with our own blogs, how on earth do you get readers without tooting your own horn?
      • Re:Blogspam (Score:5, Insightful)

        by xenocide2 (231786) on Tuesday February 26, 2008 @05:33AM (#22556364) Homepage
        By doing something productive, not spamming me with shit I already knew about. Blog about new information you've generated. Maybe make some charts about disk head position during boot and demonstrate whether I/O is throughput or seek bound. Above all else, don't just copy someone else's shit and translate it into HTML.
      • Fresh content offering even a microscopic mote of originality? By that standard the article here fails.
      • by jd (1658)
        You could bugfix the two-year-old software, do a release on SourceForge, announce on Freshmeat, and THEN post on Slashdot. I bet that would not only boost readership, but it would be a readership that appreciated your efforts. All it would take is a relatively minor bugfix to be a real release. Run splint over it, or put it through dmalloc, fix compiler errors for gcc 4.2, or a dozen other things. A few minutes work, perhaps.
  • Similar? (Score:5, Funny)

    by jsse (254124) on Tuesday February 26, 2008 @02:36AM (#22555658) Homepage Journal

    Preload is a Linux daemon that stores commonly-used libraries and binaries in memory to speed up access times, similar to the Windows Vista SuperFetch function
    I might be wrong, but the similar function in Windows Vista should be "Reload".

    Vista users respond positively to the speed boost every time we "Reload" their Vista. The downtime and data loss as a result of "Reload" might irritate some disgruntled users, but most of them enjoy the free break at the expense of the company.

    Nothing in those Linux thingies could beat that user satisfaction. I might be biased though.
  • by JanneM (7445) on Tuesday February 26, 2008 @04:09AM (#22556038) Homepage
    I have a pretty good amount of memory on my current machine - 2 GB - and I mostly just never close any applications, especially with the big ones like Gimp just reusing the already-open instance when you open a new file. I suspect that preload would not actually be all that useful for me in practice; I'm still going to enable it to see if I'm wrong, though.
  • by pieleric (917714) on Tuesday February 26, 2008 @05:12AM (#22556254) Homepage
    Currently I use readahead [fedoraproject.org] which, at boot time, basically uses a special Linux syscall to tell the kernel to read some files ahead whenever it has nothing better to do.

    Does anyone know the difference between the two projects? Does preload have a better algorithm for selecting the files to read? Does it also use this special syscall?

    • by caseih (160668)
      If you look at the FA, and then read the referenced paper, you'll find your questions are all answered by the original author. It's an interesting paper; just skip over the statistical analysis if you don't understand it.
  • And then you go and run Gnome on it...

     
  • As far as I can tell, Kubuntu 7.10 has readahead installed by default. I can't find much information on whether preload is to replace that, or if they work together or what.

    Has anyone got any insight into what's better, or if they will peacefully work together? I'd prefer faster app times over faster boot times as I hardly ever reboot.
  • by Ferret55 (589859) on Tuesday February 26, 2008 @07:53AM (#22556854)
    Tried it out on my little eeePC and it definitely made a difference; on average it's sped up all loading times by about 30 percent. This is especially good because I upgraded to a 2-gig stick of RAM, but most programs hardly need that much and on average I'm left with about 1.2 gigs just sitting there doing nothing. Now the RAM is more productive and the loading time is noticeably faster, e.g. Firefox on a cold start without preload took 10 seconds to load before; now on a cold start it loads in 6 :). Also, since the CPU is relatively slow, the overhead of fetching data and moving it around is cut down a lot. I'd love to shake the creator's hand for this plucky little piece of software :) thanks!
  • by Uzik2 (679490)
    I thought read-ahead caching in the hard disk would have accomplished 90% of what this does.
    I guess being smarter is more beneficial than I had thought.

"Stupidity, like virtue, is its own reward" -- William E. Davidsen

Working...