Linux Software

Other Uses For The Linux RAM Disk?

Dante_J asks: "Recently I discovered an old Amiga DOS 1.3 Manual I had lying around. While thumbing through it I remembered all the joyful days of good fun hacking. One thing I particularly remembered was how anyone with 3 MB of RAM was considered especially blessed with resources, because they could copy all their system files into the RAM disk and have a 'trans-warp' fast machine on their hands. In this age of more RAM than sense, why are RAM disks only used for Linux installation floppies? Sure buffers are great, but why not mount /tmp on a RAM disk, and the cache directory for Web browsers too? Does Linux support dynamically resizing RAM disks? Surely they would be vital in remote booting, diskless thin clients."
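For context, on kernels with tmpfs support (2.4 and later), a RAM-backed /tmp with a hard size cap can be set up roughly as follows; this is only a sketch, and the 128m figure is illustrative:

# one-off mount, assuming tmpfs is compiled into the kernel
mount -t tmpfs -o size=128m tmpfs /tmp

# or the equivalent /etc/fstab entry
tmpfs   /tmp   tmpfs   size=128m   0   0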
This discussion has been archived. No new comments can be posted.

  • by drudd ( 43032 ) on Tuesday September 19, 2000 @05:31AM (#768630)
    The problem with allocating that much ram to just hold cache for a web browser or similar program is that they're designed to expect that disk cache to be very VERY slow.

    So you'd be better off just using that memory to allow the OS to buffer disk accesses and your programs to do their own in-memory caching than to have it act as a ramdisk.

    Doug
  • I never tried it on my linux box, but I know it always sped things up on a Mac.
  • If I have RAM, it's going to go toward fitting more executables in memory at one time. Personally, why don't you just get a faster drive?
  • by Jason Straight ( 58248 ) on Tuesday September 19, 2000 @05:32AM (#768633) Homepage
    The 2.4 kernel does support dynamic ramdisks.
  • People have been putting their NS cache into ram disks on Macs for a long time now, it speeds things up considerably.
  • Bob Young spoke at LinuxWorld Expo in San Jose earlier this year and took aim at Microsoft's "Innovations". He didn't mention M$, Bill Gates, or M$ Word or Excel, but he did mention a Redhat/Dell innovation of building a webserver into the kernel and breaking the hit/sec threshold.

    Isn't that basically putting the most important process into the RAM disk to make it run extremely fast?

  • With only 128 meg or so of ram, I rarely have any left over after opening about 6 netscapes (until they crash), 15 Eterms, Star Office, etc. etc.

    Maybe I'll try this at work, I've got over 300 megs there.

    Too bad RAM prices aren't dropping like disk storage prices...

  • Where I used to work, /tmp and swap were shared via a resizable ramdisk on Solaris. It was a nightmare because we had a developer who was very sloppy and wouldn't clean up temp files... after a while, the machine would crash (out of memory) and would be fine after a reboot. I never could convince him that he should erase temporary files after he was done with them...

    That being said, in some respects, it's a denial of service attack waiting to happen, though probably no more than a malloc() loop...

  • Instead of using eterm, use something smaller, like rxvt.
    --
  • by Ether ( 4235 )
    "Sure buffers are great, but why not mount /tmp to a Ram Disk, and the cache directory for Web browsers too?"

    Or, you could simply increase the size of the Memory Cache in Netscape. Edit...Preferences, Advanced, Cache, Memory Cache Size.

    I'm not enough of an expert on Linux's tmpspace to comment on that part of the proposal... but IMO, it seems like a really bad idea.
  • Using a ram-disk to cache web-content seems a bit strange.. Most browsers I have seen already have options for setting how much ram you want to use for cache, and all you would have to do is increase it. When it comes to /tmp, if you have enough ram, Linux will just write-behind cache the file until its use is over.
  • Speaking as a person who does program, the kernel looks like ancient Greek written by a dying man. Also, isn't creating more and more programs in dedicated kernel mode a really, really bad idea? Isn't this what causes NT to die hideously, because of its integrated kernel graphics mechanism?
  • I was under the impression that disk caching renders ram disks obsolete except for the cases where you want to be absolutely certain that the data stays in memory (such as when your files are on removable media).
  • by larien ( 5608 ) on Tuesday September 19, 2000 @05:36AM (#768643) Homepage Journal
    At least, the /tmp bit. Under Solaris, /tmp is effectively part of virtual memory (it used to be just swap space, now it includes physical RAM). This causes several problems:
    • files disappearing from /tmp on reboot which users didn't expect
    • large files eating up swap space
    Personally, I find it a great idea; it's just that some admins don't like it as much.
    --
  • I haven't looked into this at all, so I could be totally off in left field, but wouldn't it be possible to run a chroot jail's storage on a ramdisk? If the content of whatever is in the jail doesn't really change then I don't see how it would matter if the jail were lost, and I would assume that it would be easier to remove a potentially hacked jail by removing the ramdisk or restarting the computer (not that I am in favor of restarting, I don't think that it should ever be necessary). It would also be interesting to boot a box from CD, dump the 'live' distro into a very large ramdisk, and run it that way, so if someone rooted the box and installed their rootkit, a simple reboot would remove all their changes...
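    A rough sketch of that disposable-jail idea, assuming a kernel with tmpfs and a pristine copy of the jail tree kept somewhere read-only; the paths, size, and daemon name below are invented for illustration:

    mount -t tmpfs -o size=64m tmpfs /jail      # RAM-backed, so it vanishes on reboot by design
    cp -a /srv/jail-master/. /jail/             # repopulate the jail from the master copy
    chroot /jail /usr/sbin/some-daemon          # run the exposed service inside the disposable jail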

  • by VoidOfReality ( 156286 ) on Tuesday September 19, 2000 @05:37AM (#768645)
    Where I work, we use a bunch of Linux boxes to serve our website. Currently, all of our content is located in ramdisk, as well as a data cache used by the web applications that we run. I'm currently on a project to evaluate the merits of using hard disk for this as opposed to the ramdisk that we're currently using.

    The results of the performance test that I ran were somewhat surprising - it seems the machine with the hard disk actually performed _better_ than the machine with the ramdisk. I'm not a kernel hacker so I don't know exactly why this is the case, but I know that the buffer caching in the kernel really kicks ass (we're running 2.2.10) and I suspect having a ramdisk hampers the kernel's ability to manage the buffer cache (i.e., it takes up space that could be used for buffer cache). Just my $.02...

    -VoR
  • by Consul ( 119169 ) on Tuesday September 19, 2000 @05:38AM (#768649) Journal
    There's one other option, as well. Why not place the entire operating system on an EEPROM? Large-sized EEPROMs are getting pretty cheap these days.

    Using an EEPROM would allow you to upgrade/patch the OS as necessary. Also, some clever engineering would make it all but immune to viruses (putting the OS in a true ROM would do wonders for virus protection, but make it difficult to upgrade your system software).

    Hell, you could put Linux and X-Windows with the Window manager of your choice all on an EEPROM and have a superfast, instant booting machine.

    I'm sure this is being done somewhere. Any ideas or links anyone cares to share?

  • There's no reason to put the netscape cache in swap. There's already a setting for memory cache as well as disk cache. Personally, Netscape takes up enough memory already. Besides, do you really want 20-30M of RAM wasted when Netscape isn't even running?
  • You might have an otherwise busy computer that is serving thousands of httpd requests from hard disk, your filesystem cache is flushing itself over and over again, and you still want Netscape's cache files to be in a speedy environment?

    Of course, having more RAM than disk is a fine solution when the OS buffers with your free memory, but obviously everyone can't have that.

    Having a dynamic ramdisk (like /tmp on Solaris and the aforementioned Amiga ramdisks) is quite a big win speed-wise in many kinds of situations.

    As long as you have free memory, temp file creation will go nearly as fast as RAM allows, and whenever it gets full, you get swap speed, which is more or less what you would have gotten in the first place with the cache on the local disk anyhow.
  • [joke]Well, I suppose you could always put your ram disk in your virtual memory. Or your virtual memory in your ram disk [/joke] as one pseudo-geek tried to get me to do .....

    :P

    - - - - - - - -
    "Never apply a Star Trek solution to a Babylon 5 problem."

  • by Anonymous Coward on Tuesday September 19, 2000 @05:41AM (#768663)
    ...I am just waiting for the inevitable suggestion that the ramdisk be used for "swap" :)
  • by account_deleted ( 4530225 ) on Tuesday September 19, 2000 @05:41AM (#768666)
    Comment removed based on user account deletion
  • In an effort to speed up some calculations I was doing, I directed the I/O to/from a ramdisk rather than the usual HDD. I figured, RAM is much faster than HDD, so I will remove any I/O blocking time. (This particular application was very I/O intensive. Basically, two programs communicating via files. I.e., program A runs a calculation, program B reads the output, generates new input for A, and so on. Very messy...)

    Anyway... I found that there was NO speed-up. None. My conclusion: the Linux filesystem caching (I am a chemist, not a kernel hacker) was extremely good. I imagine that these smallish files were rarely ever actually written to and/or read from disk. They were just stored in some cache. Essentially the normal use case and the ramdisk case were basically identical in that all of the I/O was in RAM anyway.

    Now, these files were small, and updated frequently, and I have tonnes of RAM (0.5 GB), and what not (2xPIII450, SCSI disk,...) so YMMV... just my experience with ramdisks, for what it is worth.

  • Linux 2.4's ramfs dynamically resizes. The traditional method of creating an ext2fs on a ramdisk does not.
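    For comparison, a minimal sketch of both approaches; the mount points are illustrative, and the fixed ramdisk's size is whatever the ramdisk_size= boot parameter allows:

    # traditional fixed-size ramdisk: make a filesystem on /dev/ram0 and mount it
    mke2fs -q /dev/ram0
    mount /dev/ram0 /mnt/rd

    # 2.4 ramfs: no mke2fs step, and it grows and shrinks with its contents
    mount -t ramfs none /mnt/ramfs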

    __
  • In all honesty, the kernel knows much better than you do which files it's always accessing, so it can optimize itself better than you.

    Type in "free" and you'll see that almost all your RAM is in use -- that's because it's got a RAM buffer of most recently accessed files so they can be accessed again faster. In fact, if you create a temporary file and then delete it, often that file will never touch the disk.

    This, of course, is why you have to unmount disks - the unmounting writes the buffers to the disk so that the changes won't be lost.

    So, it's already done for you, assuming you want a RAMdisk of your most frequently accessed files.

    Sometimes, people want a rarely used file to be easily accessed to reduce load times, which is something that buffering won't help with. So, you just flip the sticky bit on the file, and it's done for you.

    By making a RAMdisk, you're taking away from the available RAM that the kernel could be using for intelligent buffering, and actually slowing down the machine.
  • It's a fairly simple and straightforward sysadmin exercise to make a script (well, one line, really) to delete all files in /tmp that haven't been accessed in say 3 days or so. (It involves reading the manpage on 'find'.)
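    A sketch of the sort of one-liner meant here, assuming GNU find and a three-day cutoff, run from root's crontab:

    # delete regular files under /tmp that haven't been accessed in more than 3 days
    find /tmp -type f -atime +3 -exec rm -f {} \;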

    Heck, at cc.purdue.edu they put quotas on /tmp at one point, just to ensure that students wouldn't fill up /tmp and bring down the machine.

    ---
  • by DFDumont ( 19326 ) on Tuesday September 19, 2000 @05:49AM (#768684)
    One of the Fundamental differences between all Unixes and every other OS ever invented is the use of memory to buffer the filesystem. There are no direct writes to a disk, ever! All file IO is done to the buffers in memory, and then eventually the bdflush daemon runs and syncs the disk to the image in memory. Notwithstanding the recent journaling file-systems, and the sync-write IO calls, Unix today still does all its file-IO in this manner...which is why a RAMdisk is redundant. You already HAVE a RAMdisk.

    Mac, Windows, VMS, MVS, Amiga, et al. all do direct and/or synchronous writes to the disks. That's why a RAMdisk has such an effect.

    Linux boot floppies use a RAMdisk because they can't put all the needed files onto a 1.44MB floppy without compressing the image. The RAMdisk is simply the "disk" to which the decompression writes its output. If the root filesystem could fit entirely onto a floppy, there'd be no need for a RAMdisk upon install. See Redhat version 3.x.
  • What if you have an application which reads and writes files that you specifically do not ever want writing to disk eg for security reasons? That's one other good reason for a ram disk.
  • Copy /usr/X11R6 to a ram disk on boot and then mount the RAMdisk there?

    Maybe someone should do a dist that's optimized to use a 100 megabyte (or so) RAMdisk to speed up the loading of the most used and slowest loading apps. Netscape would probably qualify. All the gnome stuff... star office... The field's wide open.

    Of course, that space might be better used as disk buffers...
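    For what it's worth, a rough sketch of the copy-X-into-RAM idea, assuming a 2.4-era kernel with tmpfs and bind mounts; the size and mount point are invented, and the copy makes booting noticeably slower:

    mount -t tmpfs -o size=200m tmpfs /mnt/x11ram
    cp -a /usr/X11R6/. /mnt/x11ram/          # pre-stuff the RAM copy at boot
    mount --bind /mnt/x11ram /usr/X11R6      # serve X11 from RAM from here on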

  • For remote-booting thin clients I would recommend the network block device (Documentation/nbd.txt in the kernel source). It allows you to mount any filesystem on a TCP connection, although you can't use it for swap space (at least not in 2.2-series kernels).
  • Besides, on NT, the reason having the GDI subsystem in kernelspace sucks is because the drivers tend to be buggy. The kernel httpd only serves up text pages; anything more hefty it tries to send to the actual httpd. So although it IS a security and stability risk, it's a lot easier to debug (all an httpd does, at its basic level, is parse a text request, read a file from disk, and print it out to a socket).
  • Unix/Linux does this for you automatically. The disk caching functionality will keep the disk blocks belonging to recently used programs in memory -- so if you have a lot of memory, you'll simply find that once you've typed a few commands, the machine doesn't have to go to disk to fetch them on subsequent runs.

    This actually reflects the perfect way of doing this: add optimization, but don't bug the users about it -- it's not their problem.
    --
  • Actually, every modern OS has buffering options. But you really do need to have the option of synchronous I/O. Databases, for example. Why do you think Oracle prefers to have raw disks? Filesystems get in the way, and async I/O is a real bummer for data integrity. :-)
  • The last few versions of rxvt have "transparency" support. You may have to compile your own version if your distro doesn't compile it with that option...

    "Free your mind and your ass will follow"

  • The problem with this approach is that on *every* boot you have to pre-stuff the ramdisk with files, regardless of whether you're about to use them or not.

    And of course, the memory so allocated is tied up whether or not those programs are in use.

    The tip-top way of achieving this effect is some uber-control over the disk caching mechanisms to allow pre-stuffing of the cache with known files at bootup / login.

    Now's the time for someone to point me at a FAQ telling me how to do this....
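    Short of real control over the buffer cache, the crude version of this is just to read the files once at boot so they are warm in the cache; a sketch, with made-up paths, assuming everything fits in free RAM:

    # warm the buffer cache with files you expect to need soon
    cat /usr/lib/netscape/* /usr/X11R6/bin/* > /dev/null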

  • No faster, I've tried it... Compiling is processor-bound, even on a quad Xeon 450. It shaved less than a minute off of 'make World', which normally takes just over 48 minutes. I think the Linux ramdisk has processor usage issues, because a ramdisk should in theory be faster than the UW/2 SCSI drive I normally compile on.
  • When the third drive died on my powerbook 180, I created a seven floppy boot-set. I had the thing maxed out at 14mb, so I could install system 7.1, word 4 (I'd been using 5, but backtracked), and excel 3, along with my usual utilities, and still have enough room left to work . . .

    Of course, given that batteries were only good for 1:40, and that some genius didn't include a capacitor to back up ram while changing . . .

    hawk, who still has the pieces of that machine
  • Way back when, in the Oooold Days, Linux had problems with large memory machines. RAM disks were the only way to effectively utilise memory in excess of 64 megs.

    The same is probably true, today. Simply because few people will be able to test and refine the code on extreme memory machines, RAM Disks will probably still be the way to go.

    There is -one- other case for RAM Disks that I can think of. =VERY= Extreme RAM cases. Where the size of RAM is comparable to, or exceeds, the size of the HD(s), it is not efficient to keep swapping to and from drives. They're slow. It's much faster to simply dump the drive(s) to RAM and write-through to disk as you go. All actual read operations, after boot, would be to RAM.

    Beyond that, fast IDE's or SCSI's, with decent on-board cache, are all-round a better idea than RAM disks.

    Going back to the extreme memory case for a moment, this would be ideal for a laptop. Non-volatile RAM is going to eat batteries far less than a mechanical drive. (Especially on power-up, or where there is extensive disk activity.)

  • Oh boy! Virtual Memory! I'm going to make a HUGE ramdisk!
    --
  • I've got this problem with some machines at a client, which I need constant access to.

    Each machine runs a number of message queue daemons. The 'bright spark' developer of these decided to store the queue state information in a scratch file in /tmp. They also haven't heard of shared libraries, so each queue process is enormous due to being statically linked against all bar the C library. And there's over a hundred of these processes per machine.

    The system will run out of memory once a week or so, and when it does so the queue scratch files can't get updated. When the errant process is killed, all queues are corrupted.

    They need a competent sysadmin. Unfortunately they're based in the SE of England, where competent staff are thin on the ground, and they seem to employ anyone who knows a few buzzwords.

  • I have been wondering the same thing as the asker of the question. I am planning on buying a new machine, and thinking that if it is possible to load some parts of the system into memory rather than disk, this may make it faster. One thing I'd like to have in memory is my /tmp space. I'd also like to have swap mounted as a RAM disk too. I read a ramdisk HOWTO and it did not seem like an easy task. Is there an easy way to make this happen?

    A machine with 512MB that is a personal workstation could have 256MB for memory, 128 for /tmp and 128 for swap. This could add that necessary performance increase I need for video.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • CPU was much higher when testing the ramdisk, at 100%

    Oh good grief. Of course the CPU usage was higher.

    Hint: with a RAMdisk you don't have to wait for the disk.

  • by Dave Zarzycki ( 8609 ) on Tuesday September 19, 2000 @06:24AM (#768726)
    ROMs are slow. Ask any true Mac geek about ROM-in-RAM accelerators for traditional Mac OS (version 9 and below). Why? Modern SDRAM latency is less than 10ns. ROM latency is about 150ns. Need I say more?
  • by Azog ( 20907 ) on Tuesday September 19, 2000 @06:26AM (#768729) Homepage
    I don't think this would gain you much. When Linux loads, the kernel gets stuffed into memory, and as far as I know, pretty much stays there until the machine powers down. Maybe parts of it swap out (not sure) but it doesn't get loaded from the image again. After all, that's why kernels are usually compressed - make bzImage gives you a mostly zipped kernel which uncompresses as it loads.

    So, loading from EEPROM would perhaps get you a faster bootup, but not much more than that.

    There is a project I read about somewhere where people are actually putting a modified Linux kernel directly into the Flash to replace the system BIOS. This is neat because they boot in less than a second - so fast they have to explicitly wait for the hard drives to spin up before looking for them!

    Other embedded systems use Linux on "Disc On Chip" hardware. Have a look at the September Linux Journal, which has a lot on the use of Linux in embedded applications.


    Torrey Hoffman (Azog)
  • In most modern operating systems, using a RAM-based filesystem will only confuse things. The entire operating system was designed with the thought in mind that disk access was slow; hence the paging, virtual memory, and caching subsystems. Essentially, all of these things work together to ensure that the right pieces of code/data are in memory more often than not. As something is loaded, the OS will defer loading it into memory until it is actually called upon, at which point it will read it off the disk (generally asynchronously, assuming that it will take a long time). It should remain in memory until physical memory gets tight and/or that piece of code/data hasn't been used in a while. All of these subsystems working together are generally much better at guessing dynamically what needs to be in RAM than the user taking away 100MB of RAM and forcing all of that into memory. Theoretically, if you had enough RAM, the OS would pretty much load everything it needed into memory and be done with it. So if you want more of the benefits of a "ramdrive", then add physical memory to your system.

    The only real place I can imagine where this might help is one where there is a lot of disk writing going on on a particular filesystem, and the system is so busy it never has a chance to flush its cache without causing incoming requests to wait, such as a way overloaded mail server (I've seen this). Of course, the problem there is that it's not a night and day performance difference anyway when you're talking about modern disks with built-in caches and OSes with decent lazy-write techniques. And of course there's the little tiny issue that you're completely compromising your data integrity, since if the system dies, everything in that filesystem goes with it. A far better solution, if you feel you must second-guess the OS's caching/paging/VM system, is a controller card with on-board cache and a battery backup. This maintains your data integrity, and will give you a little performance boost for busy disks with lots of writing. Of course, you'd still probably see just as much performance improvement by adding memory...

  • by Mr Z ( 6791 ) on Tuesday September 19, 2000 @06:26AM (#768731) Homepage Journal
    The results of the performance test that I ran were somewhat surprising - it seems the machine with the hard disk actually performed _better_ than the machine with the ramdisk. [...] and I suspect having a ramdisk hampers the kernel's ability to manage the buffer cache.

    You're partly right. The other reason is that it forces more pages out to swap, since the RAM disk can't be paged out (I'm pretty sure). Placing your data in a traditional Linux RAM disk has two bad effects:

    • It reduces the total amount of RAM available to applications. This results in more paging activity.
    • It reduces the size of the buffer cache, meaning that files outside the set you've placed in RAM are more likely not cached. This also includes filesystem metadata, such as block bitmaps and the like.

    Even if the Linux ramdisk can be swapped out (I think the new ramfs may be capable of this), it will still likely be slower than a traditional filesystem if you push into swap, because swap gets fragmented over time. In contrast, ext2 resists fragmentation pretty well and so will perform better as a result.

    My 0x02 cents...

    --Joe
    --
  • Actually, where I work we use the standard Solaris start up files mostly and those wipe /tmp (usually we have that on disk) on boot.

    This is a sensible strategy, and frankly I wouldn't expect to keep files in /tmp over a reboot.

    You can limit the size of /tmp on the tmpfs filesystem by using the mount option -o size=sz, where sz can be in bytes, kilobytes or megabytes (123, 123k or 123m).
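    For reference, a sketch of the corresponding /etc/vfstab line on Solaris, capping /tmp at an illustrative 512m:

    #device   device-to-fsck   mount-point   FS-type   fsck-pass   mount-at-boot   options
    swap      -                /tmp          tmpfs     -           yes             size=512m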

    The main reasons that we don't use it are DBAs...
    We have had one DBA ask us to reduce the size of /tmp and use that space to create a new partition... And they generally don't want to use ANY memory for things other than their programs. tmpfs keeps the filesystem control structures in memory.

    Z.
  • ...I am just waiting for the inevitable suggestion that the ramdisk be used for "swap" :)

    which isn't such a bad idea if you use the slram patch on a computer where not all RAM is cached (e.g., some Pentium boards with more than 64M)


    --
  • Ummm... The iOpener [iopener.net]?

    It has a 16MB flash disk, and it runs QNX. There have been various hacks reported on Slashdot about installing Linux, as well as attaching IDE drives, etc.

    Sure, it boots fast, and for mobile or embedded applications the hardware can be more reliable. But flash is still way more expensive than magnetic storage. Eventually the speed and cost of solid state storage will fall below magnetic and optical, but that won't be for at least a decade.

  • Does Linux support dynamically resizing Ram Disks?

    Sorry, no. You'll have to run Windows 98SE for that pleasure. Linux is unnecessarily stable for such a task.

  • Just increase the size of your buffer cache.

    All of the data stored on your disk is already proactively cached, most recently used is much more likely to be in cache longer.

    Full blown RAM disks have limited applications, especially when the buffer cache can almost always do a better job. If you have gobs of RAM you're not using, increase the size!

  • by Grab ( 126025 ) on Tuesday September 19, 2000 @06:32AM (#768743) Homepage
    This is exactly why we don't use RAM-disks on modern machines.

    The stock A500 had 1/2Meg of RAM, so most stuff was designed to run in that memory space. Most word-pros and spreadsheets would run in this, but didn't have much room spare for file data, so more serious users got a 1/2Meg RAM upgrade for this (and even now, you can store a lot of text in a 1/2Meg file). If you had the money for a 2Meg (or more) expansion card, the world was your oyster. You could then run 2 or more heavy-duty programs simultaneously, and use any space left over to cache your frequently-used commands in a RAM-disk. Well cool at the time.

    Now back to today. It's no longer strange to run several heavy-duty applications together - at any one time in Windows (sorry, but that's what I use at work), I may have Word, Excel, Access, DevStudio, Outlook, Matlab, Acrobat and IE all running together. At this point, the Amiga would have reached the "heavy heavy heavy, man" stage and died with a Guru Meditation error. We have vast stacks of RAM now, but our expectations have risen too, and so have the program sizes. You could still sit down and code a graphics app in Intel assembler if you really wanted to (as one Amiga developer did to get fastest performance and minimum code size), but I wouldn't recommend it.

    Also, the purpose of a RAM-disk has pretty much vanished. When we used floppies, the disk access time was enormous and slowed things down considerably, but modern hard drives are so fast that disk access time isn't as big a deal as it was then. Even then, if you had a HDD (20Megs was state-of-the-art then!) then you didn't really need to use a RAM-disk.

    Grab.
  • I think there is work being done on this, and I seem to recall it was mentioned in a previous story, but I don't recall where.

    What I'm curious about is why, once you have a good, stable boot configuration, you can't store an image of memory at the moment the first login screen comes up, and have a boot loader that just loads that image at startup. I realize that this would be undesirable on a lot of systems, but I sure would appreciate this near-instant-on with the ancient IBM Thinkpad I carry around for lightweight tasks -- mostly text editing and my private development projects.

    --

  • What about hardware RAM disks?

    I've seen some (don't remember where) that were designed to be used like a normal disk but on the insides they were a bunch of battery-backed RAM. I believe the high-end models even had the ability to automagically sync to a physical disk in the enclosure on power loss/off/shutdown and restore to RAM.

    I think the advantage of these solutions over a conventional host-based RAM disk was that by treating the RAM disk as a SCSI device you could make it much faster than conventional host RAM (special controllers, interleaving, etc).

    I don't remember the name of the company that was selling this and they don't appear to be around anymore. Maybe the cost of RAM relative to the sizes people needed just made it commercially impractical.
  • You might wish to check out the Linux BIOS project at http://www.acl.lanl.gov/linuxbios/ [lanl.gov]. It's not exactly the same, but remarkably similar.
  • Due to buffer cache, the only real need for RAMdisks is during installation, bootup (initrd, as per RedHat), or in embedded systems.

    Linux installation from floppy uses the RAMdisk to store the installation filesystem. This is not only quicker than running from a floppy, but allows the RAMdisk image to be compressed. Debian and Slackware do this, and I presume others do as well.
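    Roughly how such a compressed ramdisk image gets built, as a sketch (sizes and paths are illustrative; the kernel or boot loader decompresses it into the ramdisk at boot):

    dd if=/dev/zero of=rootfs.img bs=1k count=4096   # empty 4MB image
    mke2fs -F -q rootfs.img                          # put an ext2 filesystem on it
    mount -o loop rootfs.img /mnt/img                # ...populate /mnt/img with the installer files...
    umount /mnt/img
    gzip -9 rootfs.img                               # ship rootfs.img.gz on the floppy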

    When I've used RAMdisks in the past on other systems, it has always been when other media was slow. A common one was under DOS on a floppy-only 8086: copying COMMAND.COM to a small (30K) RAM disk (stored in spare RAM on the non-standard video adaptor IIRC), and setting COMSPEC accordingly. Saved me having to swap floppies just to load COMMAND.COM on program exit.

    The only advantage I can see on a non-embedded Linux is if you have some data or executable that you need to guarantee is in cache, and you pre-load this into a RAM disk beforehand. Faking benchmarks springs to mind here.

  • My parents have a Performa 6116; it has a PPC 601 @66 mhz and a 33 Mhz bus. Running Netscape 3 is quite a task for the poor machine (Netscape 4 is right out :).

    It was even more of a task before I did just what you described and made a RAM Disk and told Netscape to use it. Sped it up by about 3 times.
  • by hey! ( 33014 ) on Tuesday September 19, 2000 @06:51AM (#768762) Homepage Journal
    I'm not sure that your boot up time would be all that much faster. After all, most things that boot from ROM boot fast because they are pretty simple and small. Perhaps better to put /etc/rc.d on a diet.

    Read only media are a good idea though.

    For security, if super fast boot time is not an issue, then you might consider booting from CD-ROM or other read-only medium on some machines. If your home page and apache configuration files are on CD-ROM, then your home page cannot be defaced no matter how clever the cracker. Likewise, even if somebody does manage to use a root kit on your box, they can't replace one of the regular utilities with a trojan if the directory it's in resides on a CD-ROM.

    Anybody know a Linux how-to for doing this? I've seen it done with the BSDs.

  • Yeah, but compare that with the avg 9 to 9.5ms access time of a HD and there is a nice performance gain (theoretically).

  • by pruneau ( 208454 ) <pruneau.gmail@com> on Tuesday September 19, 2000 @07:06AM (#768766) Journal
    Well, we're using Solaris at work, and placing swap and /tmp in the same memory bucket has another interesting "side-effect".

    Another rule of the game is: when memory-pig applications are running and swap grows too much, /tmp shrinks accordingly.

    Here we are using an operating system simulator which was designed like a Fortran-77 app. That is, at startup it reserves as much memory as it thinks it needs (commonly around 100-200M), and then works with it (or dies). When a sufficient number of those apps are running (around 5-7), even our 1G-memory enterprise servers begin to stumble under the load.

    Needless to say, when swap grows to the point of crushing /tmp in the memory bucket, you've got a "file system full" on /tmp...

    Remember the ol' Unix /tmp definition:
    /tmp is the place (filesystem) where any file that won't be needed after the next reboot should be placed, but _EVERYBODY_ should be able at least to write and read there.

    It's funny how you re-discover the importance of this rule by noticing how many mundane tools need /tmp for writing, and indeed refuse to move when /tmp is full...

    So my .5 cents to whoever will design such a mechanism in (free)*(*n*x). This mechanism really speeds things up, but on the other hand, PLEASE MAKE SURE THAT:
    - either your system does not vitally need /tmp any more
    - or /tmp will remain sufficiently large for your system to keep running, whatever the conditions.

    Obviously, this is only needed when you run memory-hungry applications. But obviously we modern designers are very careful not to use too much memory ;-)
  • With my whopping 2.5 megs of ram I used to run a "sticky disk" in RAM of 880k. The disk stored system files in non-volatile memory.

    Yup. I did that too. A500, Workbench 2.1, 20 meg SCSI hard disk.

    Even though the 880k of RAM could have been better used for program space, I found that the performance penalty of having the system files on a slow band-stepper hard drive was such that the memory hit was less obtrusive.

    And, I had a later-revision A500, into which I'd plopped a Fat Agnus chip. With a little hacking, of course, it was possible to set up the memory expansion on the bottom of the A500 to become an extra 512k of Chip RAM; the 2 megs in my hard disk controller setup was the Fast RAM.

    The only reason I bring this up is that, if nothing else, the Amiga is a great source of inside jokes, like "Guru Meditation Errors" and "volatile memory"...

    Or, if you've been downloading too much off the local high-speed (2400 baud!) Warez BBS, "Your Amiga is alive..."

    Ya know, a Mac says, "Sorry, a System Error has occurred"; Windows tells you that "This program has performed an illegal operation" or any number of other nasty things.

    But, besides Eudora 3.x ("Eudora is tired of waiting for the system to respond."), can anyone else think of any really nasty or sarcastic error messages like the Guru Meditation Error?

  • by PureFiction ( 10256 ) on Tuesday September 19, 2000 @07:23AM (#768774)
    In Linux, all unused memory is used for filesystem caching. In general, Linux does this caching much better than you could by mounting specific disks and files in a RAMDISK. Linux chooses things which are accessed frequently or very recently, among other things, for this FS cache.

    By creating a RAM disk to do this, you would force a much smaller subset into memory, which would be great for what you are using there, but would hinder performance on other things which linux does not have the room to cache now.

    So, unless there is something very specific that needs to be cached, there is no rationale for this, and the chances are that Linux will cache it for you anyway if it's that much of a performance hit.

    Last but not least, the biggest reason RAM disks are slightly faster than average (when Linux does cache) is that they never have to sync to a physical medium. If you don't care that all the data you have written there is gone *poof* once a crash occurs, or if the system is shut down, then that's OK. If you have a file there, and 'oops', forgot to write it to disk, it's gone.

  • That looked interesting, so I looked it up (I'm on a windows machine at work right now, so I have no /usr/src/*) on Google.

    Current state: It currently works. Network block device looks like being pretty stable. I originally thought that it is impossible to swap over TCP. It turned out not to be true - swapping over TCP now works and seems to be deadlock-free, but it requires heavy patches into Linux's network layer.

    Looks like it can swap now. Pretty cool stuff.
  • Check out the Diskless nodes Linux HOWTO [linuxdoc.org]. It describes the hows and whys of exactly this.
  • Even though the lack of a swap file capability on the Amiga meant you sometimes didn't have enough memory to do some things, it did give you a certain philosophy that worked well.

    If you can run it, it will run well. If an app _has_ to use lots of memory because of its fundamental nature (such as image processing), it will intelligently do the swapping itself.

    Because memory was not considered virtually unlimited, people developed with an eye towards keeping memory requirements down.

    The requirements of programs for OSes with swap files have rocketed over the last few years.

    It's sad to see the Amiga style of operation disappear without any debate as to its merits. People made swap files because they could, not because they were essential.

    Bill Gates reputedly once said '640k should be enough for anybody'. In my own mind I think that 64 meg should be enough for most people, yet I routinely do tasks on 64-meg machines that are swapping merrily away. It's not because I'm not considering the possibilities of things that could be done (like Bill did), but rather that I feel those things could be done in 64 meg. I feel like my memory is going to waste.

    I have ICQ running at the moment. It's using 6 meg, and I haven't a clue what it actually puts in that 6 meg. If memory were not considered unlimited, how much would it be using?
  • But it _isn't_ simple to make a safe way to delete files in /tmp which haven't been accessed in a while. If there were, all Unices would do it. About the only safe time to remove temp files is at boot, because you can't have malicious attackers manipulating /tmp.
  • Not necessarily a huge problem: you can simply copy code from ROM to RAM when you boot the machine. Or, more economically, you can have a minimal OS on the machine that downloads and unpacks the regular installation from the network.

  • My ProGen laptop did this. If you hit a certain key combination at any time, the BIOS would come to the forefront and copy the entire RAM contents to a 70-something Meg partition at the end of the drive. This did take awhile, though... waking the system up was a small bit faster than booting, but shutting down the system took a LOT longer.

    All of that worked fine for windows, but last I tried it, Linux would begin acting very strangely shortly after waking up. I keep my laptop on all the time (it's only portable when I need it to be. :P), and I blasted the Save-To-RAM partition a long time ago, so I couldn't check to see if newer kernels don't mind it.
  • You have to admit that if I somehow got an "Ask Slashdot" posted where I asked the question: "How do I get my Windows 98 box to recognize 4 ethernet cards without crashing?", I'd get the following five answers:

    1. Works fine for me - under Linux.
    2. Why are you using Windows?
    3. Linux supports theoretically infinite ethernet devices.
    4. Natalie Portman!
    5. FUCK YOU AND YOUR WINDOWS BOX, YOU WHORE
  • by Arker ( 91948 ) on Tuesday September 19, 2000 @08:00AM (#768799) Homepage

    The results of the performance test that I ran were somewhat surprising - it seems the machine with the hard disk actually performed _better_ than the machine with the ramdisk.

    This is exactly what I would suspect - I'm glad you posted this because I don't have time today to test it myself, but this is the result I would bet on.

    The reason ramdisks aren't very useful with Linux is that the kernel has very good buffer/caching code - the effect is the same as having a ramdisk, except that the kernel can dynamically determine the contents based on actual usage. If you stick commonly used data on the ramdisk, you should be able to beat any caching algorithm, in theory, but this requires that there is certain data that you know will always be the most frequently accessed. In the real world, this rarely works out.

    Say you put 16 megs of what you think is the most commonly used data on a ramdisk. Say further that you are right - over time, that 16 megs of data is the most frequently accessed data in your system, by far. If your box is hitting that data constantly, every bit of it, every few cycles, you might get a small performance boost. But more than likely some parts of it will not be hit all that frequently, and there are also likely to be times when it's not being hit at all. Put the same data on hard disk and let the kernel have the ram to manage, and it will manage things on the fly, responding to the actual demands of the system... it's very hard to beat.

    Now, if the caching algorithm was more primitive, something like smartdrv for instance, then you can get a big performance boost out of the ramdisks. I used to play that game quite frequently. But this only worked because smartdrv really isn't very smart.

  • by Black Art ( 3335 ) on Tuesday September 19, 2000 @08:02AM (#768801)
    A friend of mine uses a RAM disk for his Netscape cache. It saves him the trouble of having to clear the cache out manually on every boot. (He is on a Mac.) I have been considering doing something similar on my Linux box, but too many things are already needing to be done.
  • The reason ramdisks were so useful on your old Amiga was the absence of a good disk caching program. Linux has excellent dynamic disk caching, you're better off letting the kernel have that memory to play with instead of locking it into a ramdisk.

  • by WNight ( 23683 ) on Tuesday September 19, 2000 @09:02AM (#768819) Homepage
    Yup. Having a ramdisk limits the amount of RAM available for caching, and caching caches the files actually used, not just the ones you specify.

    There's one time ramdisks are good... If you have a small set of files (relative to your total ram) that you don't use very often, but when you do, you want them to load with as little delay as possible...

    Not really something you'll run into with a webserver where a 10ms lag in HD access will be hidden under at least 50ms of network lag and probably 250ms of rendering lag.

    But, if you run a machine that is doing realtime data sampling and you need to run various transforms on the data, you might want to keep some of the essential tools in RAM. (Just a contrived example...)

    The best way to do this though would be to ask the caching subsystem to keep some, higher priority (specified by you, and by overall usage patterns) in ram even when they might have been dumped, because they're either timing critical, or likely to be used a lot in the future. (The way an ASM programmer could hint to the CPU which branch will be taken by making a seldom taken loop fall-through when skipped and a often-taken loop fall-into when taken.)
  • Every filesystem type comes with a special set of options you can use when mounting one of those filesystems. For Solaris' tmpfs, you can set the maximum size of the "partition" so that it won't actually use up all of swap.

    Most admins bitching about the large files problem aren't aware of this option. (I wasn't for a while.)
  • The sticky bit keeps a program on swap, but swap is probably still a disk. This was useful in the days when you might have had a swap device that was faster than your primary storage system. Since most installations are either using a single disk, or have disks that all run at the same speed, this doesn't gain much. I'd like to be able to redefine the sticky-bit to tell the kernel "Keep this program resident in memory" -- but I wonder whether that would break anything?
    --
  • Comment removed based on user account deletion
  • I have one too...it makes a great serial terminal to my SPARCstation 5 home server. :P
    I can't find the damn manuals, though...could you point me to somewhere on the net where they can be found?
    "If ignorance is bliss, may I never be happy.
  • E is derived from FVWM?
  • Eproms are slow and not particularly easy to update. If the OS has a good caching algorithm, why not let *IT* decide how to best use the RAM?

    The HD-less network workstation (Internet and Citrix) that Larry Ellison (sp?) is pushing uses a CD-ROM to hold its Linux OS. Presumably the boot-up takes some time, but a good cache algorithm will handle things from there.

    Do we need to push boot times? How often do you need to boot Linux?

  • I have a 2.4 kernel and it works fine.
  • by Nathaniel ( 2984 ) on Tuesday September 19, 2000 @10:07AM (#768843)
    "There's one time ramdisks are good... If you have a small set of files (relative to your total ram) that you don't use very often, but when you do, you want them to load with as little delay as possible... "

    That's what I wanted when I was playing Civ:CTP last year. I made a 300M swap file and copied parts of the graphics directory structure to the ramdisk, then pointed at it with symlinks.

    This greatly improved game play. Of course, it was at the expense of slower access to other files, and caused the system to swap out some programs, but I didn't care, because I was planning to play the game for a while and didn't need the other programs in memory.

    Of course, I turned the ramdisk off while I wasn't playing, and had to pay the start up cost of loading the ramdisk each time I turned it back on.

    I don't see how a ramdisk would be a useful thing for a web server unless you could be sure you didn't cause anything you care about to be swapped out. That would require adding more memory, but adding more memory would give you more buffer cache, which would have the same (arguably better) effect as the ram disk in the first place.

    It's a specialized tool, useful in some specific cases. A web server doesn't seem to be such a case.
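    The symlink trick described above looks roughly like this as a sketch; the game path and size are invented, and it assumes tmpfs (a fixed /dev/ram0 ramdisk works the same way once formatted):

    mount -t tmpfs -o size=300m tmpfs /mnt/rd
    cp -a /usr/local/games/ctp/graphics /mnt/rd/             # copy the heavy graphics data into RAM
    mv /usr/local/games/ctp/graphics /usr/local/games/ctp/graphics.ondisk
    ln -s /mnt/rd/graphics /usr/local/games/ctp/graphics     # point the game at the RAM copy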

  • "Unix/Linux does this for you automatically."

    True, to a point. Linux will use available ram as a disk cache. It won't choose to swap programs out in order to provide more space for the disk cache, even if you have programs you haven't used in a long time, and a little more disk cache would mean you could get everything you are using into the disk cache, instead of rotating a bunch of stuff through the cache.

    This is because Linux gives higher priority to programs (even seldom used programs) than disk blocks (even frequently used disk blocks).

    Using a RAM disk can give you a way around that by letting you dictate that the disk cache is more important than keeping other programs in memory, and allowing you to specify which programs should be cached.

    This isn't a really common problem, which explains the fact that RAM disks aren't used all that often.

  • by Valdrax ( 32670 ) on Tuesday September 19, 2000 @10:29AM (#768850)
    One of the Fundamental differences between all Unixes and every other OS ever invented is the use of memory to buffer the filesystem.

    Actually, disk caching is NOT a unique idea at all.

    Macs have supported a disk cache for performance since at least System 3.0, in 1986. You can see a history of the old Mac OS here. However, I'm not sure if this is a read cache only, and what form of cache-writing scheme it supports, if any, nowadays.

    While I can't really say about the DOS-based Windows variants, the NT versions of the Win32 API have lots of support for asynchronous file I/O [foliage.com]. By default, all normal disk writes are written to a disk cache which is lazily flushed. You can specify certain options when opening a file handle with CreateFile() [leb.net] to force it to write straight through to disk rather than lazily cache it. In fact, NT gets its asynchronous packet-based I/O subsystem design from VMS. (The designers of the NT kernel were ex-VMS designers. [win2000mag.com])

    Finally, while I can't speak about the Amiga, I can speak about MVS's descendant OS, OS/390, which can handle asynchronous file I/O. I can't find you a good link, but most of the references I could find on this talk about OS/390's UNIX services. Apparently around release 2 of OS/390, they began to comply to the XOpen definition of a UNIX, so I guess that doesn't help that much.
  • Putting /dev in a ramdisk is a recommended way to reduce disk activity. The various open devices have their last access time updated often, and those file access times are written to the disk every 30 seconds or so. If /dev is in a ramdisk, those updates are not written to the hard drive.

    The purpose of this is to let the hard drive in a laptop spin down. It is one of several suggestions in an old Linux laptop power reduction list...which I can't find at the moment.
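    A very rough sketch of how a boot script might do that, assuming ramfs support and that nothing has the old device nodes open yet (paths are illustrative):

    cp -a /dev /root/dev.copy           # snapshot the on-disk device nodes
    mount -t ramfs none /dev            # put a RAM-backed filesystem over /dev
    cp -a /root/dev.copy/. /dev/        # recreate the nodes in RAM; atime updates now stay off the disk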

  • With a blue background and High intensity white lettering. Ready prompt and flashing block cursor!

    I want my PIII to boot like a C64!!

  • If you've got something like a DBM [washington.edu] file that you're going to be doing absolutely massive numbers of updates on, it would be a slick idea to store that file on a RAMdisk so that updates wouldn't get forced out to disk on a regular basis.

    Obviously this will be vulnerable to failure, but for something that collects massive quantities of statistics, such as Ifile, [mit.edu] it can be worthwhile.

    With Ifile, an early edition stored stats in DBM files, and would do simply massive numbers of increments to entries. On disk, this meant that for a relatively small mail spool, the analysis would take hours.

  • Well, yes but all the fvwm code is long gone...

    "Free your mind and your ass will follow"

  • (LINK) [lanl.gov]

    sig you!!
  • I think his point was that you wouldn't have any writable media in the machine which would prevent what you're talking about.

    Of course the corollary to that is if you could create a ramdisk then you can write to it. So if you leave out ramdisks and all network filesystems - you'd be set. Just log everything remotely via syslog (or something better/custom) and you'd be pretty much defacement proof... won't stop 'em from bringing down the server - and because you're running on ROM you won't be able to fix the problem very quickly so they could just keep performing the same exploit over and over... haha!


  • But if you've ever USED a ramdisk before, you'd know that it stores non-active executables, or files you want memory-resident without actually being in use. On MacOS you can stick Word and IE into the ramdisk and they will start pretty darned fast, as they don't need to access the disk in order to open. The cache is for stuff that's already been used; the ramdisk is for things you might use.
  • To get the best of both worlds, it would be very handy if you could use logical volume management (or maybe something simpler) to ensure that the /tmp filesystem started out using RAM disk, then migrated onto a second physical volume on disk.

    But perhaps the real issue is filesystem-specific caching parameters - if you could configure the /tmp filesystem caching to be much more aggressive, using more memory, this would be self-tuning (i.e. expanding to disk when needed) and probably work better all round. Though perhaps a special filesystem would still be needed to avoid writing to disk unless you've run out of RAM.
  • This is common in most decent laptops, and is sometimes called Hibernate - it requires extra code to reset devices to the state they were in at hibernate time, but it always worked very well on my IBM Thinkpad under Windows.
  • WTF? What overhead are you talking about? UFS has way more overhead than TMPFS for various reasons. In any case:
    jr:air% cd /var/tmp
    jr:air% time dd if=/dev/zero of=afilename bs=512 count=10000
    10000+0 records in
    10000+0 records out
    dd if=/dev/zero of=afilename bs=512 count=10000 0.24s user 1.98s system 29% cpu 7.537 total
    jr:air% cd /tmp
    jr:air% time dd if=/dev/zero of=afilename bs=512 count=10000
    10000+0 records in
    10000+0 records out
    dd if=/dev/zero of=afilename bs=512 count=10000 0.14s user 1.45s system 65% cpu 2.440 total
    I.e., tmpfs is over twice as fast as UFS. And that's with UFS logging enabled, which generally increases the speed.

    For reference, this is a SPARCstation 5/170 running Solaris 7 and the memory is pretty much full, so /tmp is probably using the disk swap by now.
    --

  • The system I'm talking about is an in-house written transaction processing package. My responsibility is ensuring that remote systems can communicate properly, obeying the various specifications.

    The problem here is that there is no one in overall control of the network of at least 7 high-end Solaris machines. Security is a joke - everyone logs in to the same machine as the same user, then rlogins to the rest of the network. They often run out of PTYs (due to constant use of login between machines without logging out) and have no idea who is consuming them; this results in either blind panic or a system reboot. They also have a nasty habit of rebooting whenever performance suffers, instead of performing preventative measures. They once rebooted one of these machines due to performance problems - all that had happened was someone had accidentally started a couple of huge web server daemons on this machine and then not terminated them. And so on.....

    When I hear how much money some of these people are on, I get really angry.

  • Another rule of the game is : when memory-pigs applications are running, and swap grows too much, the /tmp shrinks accordingly.

    I've been bitten by this (twice!) the other way round: When someone decides to do a 600-meg download into /tmp (because his quota is 20M), the virtual memory shrinks accordingly. After a while all you get at the prompt is this:

    $ls
    zsh: fork failed: Not enough memory

    This symptom is usually followed by an email to the sysop, and a few minutes later everything is back to normal (except for the user who lost a few hundred megs of downloaded stuff :-).

  • but you can't unmount root.
  • I agree. fvwm2 is my wm of choice. I was just stating a fact.

    BTW I AM Ed Gein, muahahaha.

    "Free your mind and your ass will follow"

  • Obviously file operations to a ram disk are faster than their hard-drive counterparts. But my point is that programs which cache data optimize for the 99% who are caching to a high-latency medium, not memory. Most programs also tend to do their own second layer of caching in memory, as well as the OS's buffering of file operations. Both of these optimizations are harmed by reducing the overall amount of available ram, and thus using a ramdisk probably does more harm than good (unless you are rich enough to just buy 10x the ram you could possibly ever need).

    Doug
  • The particular strain I had would (in addition to replicating itself) eat floppies and crash your system.

    Yeah. The strain of the virus that I caught didn't do anything overtly destructive. It just replicated.

    Not that bad floppies were uncommon on the Amiga...

    How could you tell that you had a virus?

  • That was because DiskDoctor thought it had done a good job of getting back the data on the disk. More usually it meant that you needed Jesus to get your data back.

    Urk. I think that was something that I'd very carefully closed away into a little, isolated part of my mind, sealed up, and walked away from, hoping that it would never corrode its way out through the barriers that I had built...

    And now look at what you've done.
