How To Use a Terabyte of RAM

Spuddly writes with links to Daniel Phillips and his work on the Ramback patch, and an analysis of it by Jonathan Corbet up on LWN. The experimental new design for Linux's virtual memory system would turn a large amount of system RAM into a fast RAM disk with automatic sync to magnetic media. We haven't yet reached a point where systems, even high-end boxes, come with a terabyte of installed memory, but perhaps it's not too soon to start thinking about how to handle that much memory.
  • by Digi-John ( 692918 ) on Thursday March 20, 2008 @02:38PM (#22810556) Journal
    Finally, I'll have enough space to run Firefox, OpenOffice, and Eclipse *all at the same time*! As long as I don't leave Firefox running too long.
    • by smittyoneeach ( 243267 ) * on Thursday March 20, 2008 @02:44PM (#22810672) Homepage Journal
      You are wise to avoid discussion of emacs...
      • Re: (Score:2, Informative)

        by Digi-John ( 692918 )
        emacs is a Lisp interpreter, an editor, a games package, an IRC client, many things, but its memory usage is just a drop in the bucket compared to the monstrosities I mentioned above. Of course, there are quite a few complete operating systems that can boot in the amount of RAM required by emacs :)
      • Re:1 TB of memory... (Score:4, Interesting)

        by jrockway ( 229604 ) <jon-nospam@jrock.us> on Thursday March 20, 2008 @03:25PM (#22811310) Homepage Journal
        It's interesting how times have changed. Over the years, emacs has used pretty much the same amount of memory. (My big emacs with erc and gnus is using about 67M right now. Firefox is using 1.7G.)

        In the 80s, the overhead of a lisp machine just to make your application customizable was absurd (hence the emacs jokes). Writing an editor all in C was a great idea. Speed! Memory savings! This approach made vi very popular.

        Now that it's 2008 and every new computer has a few gigs of RAM, it's not so absurd to write an editor in a dynamic language running on top of a minimal core. An experienced elisp coder can add non-trivial functionality to emacs in just a few hours. emacs makes that easy and enjoyable.

        vi(m) may use less memory, but that just doesn't matter anymore. If you want to customize it (non-trivially), you have to hack vim and recompile. So while emacs jokes are hilarious, making them dates you to the early 80s. There is no reason to write tiny apps in assembly anymore. Big apps that can be extended are a much better approach.
        • Mobile much? (Score:5, Insightful)

          by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Thursday March 20, 2008 @04:01PM (#22811834) Homepage Journal

          Now that it's 2008 and every new computer has a few gigs of RAM
          Handheld computers don't.

          There is no reason to write tiny apps in assembly anymore.
          Other than the fact that embedded systems outnumber PCs?
        • by Usquebaugh ( 230216 ) on Thursday March 20, 2008 @04:31PM (#22812216)
          In the early 80s there was this funny machine called a System/38 from IBM, which morphed into the AS/400, which is now called the iSeries. Now this machine was an RDBMS engine with simple green screens attached.

          Under the covers the System/38 was a CISC box, the AS/400 could be CISC or RISC, and the iSeries is all RISC. From an app dev point of view, the same compiled object code could run on all three. Stop and think about that for a second.

          Now the System/38 had a very advanced constraint-based security system. For example, you could use an object that you could not see. In general it allowed for very fine-grained control of security. Of course this has been improved all the way through to the iSeries.

          Also, this machine had a single address space for all storage. An app didn't need to worry about memory size; the machine automatically used RAM as disk and disk as RAM.

          Of course this machine has had a life of 30+ years, and most OS designers have zero idea just how revolutionary it is; it's the same sort of thing as the MCP for the Burroughs B5000. People who do not know history are doomed to repeat it over and over again.
             
        • by hey hey hey ( 659173 ) on Thursday March 20, 2008 @05:25PM (#22812760)
          vi(m) may use less memory, but that just doesn't matter anymore. If you want to customize it (non-trivially), you have to hack vim and recompile. So while emacs jokes are hilarious, it dates you to the early 80s. There is no reason to write tiny apps in assembly anymore. Big apps that can be extended are a much better approach.

          Give it up, this is a religious war. Those of us who prefer vi(m) consider it a more focused editor. We neither need nor want the extensibility you crave. Those of you who prefer emacs consider the extensibility vital to your work, and can't imagine how anyone can live without it.

          We have been debating this forever, and will continue to do so, as long as there are vi(m) and emacs users out there. There is no "right" answer, so just enjoy the jokes (they are normally harmless, and often good for at least a smile).

        • by blair1q ( 305137 ) on Thursday March 20, 2008 @05:27PM (#22812776) Journal
          You don't customize emacs. It customizes you.
    • by frovingslosh ( 582462 ) on Thursday March 20, 2008 @03:24PM (#22811280)
      I'm not sure why people are rating your post as funny. I have not had moderator points in a long time, but if I did I would mark it insightful.

      As to the problem of how to use 1 TB of RAM: spending any time at all thinking about this is foolish and wasteful. Of course, I remember the days when we rated our computers by how many kilobytes of memory we had, and plenty of readers here will remember having 20 to 40 meg hard disks in PCs with far less than 1 meg of physical RAM. In those days (and I'll avoid the famous Bill Gates quote on the subject), how would you have spent your time deciding what to do with the memory if you had a computer with 1 gig, 2 gigs or even 4 gigs of memory? You may have come up with all sorts of amazing ideas. But none of them would have done you any good, because the developers (mostly Microsoft, but Linux is far from lean and mean anymore either) had already decided what to do with it: waste it and leave you wanting more. And one of your ideas for a 4 gig system might not have even been to just pretend that most of the last gig of memory wasn't there and ignore it!

      So why even have a post about what to do with a terabyte of memory? The solution is simple: install Windows 9 and try to quickly order more memory online before the memory-hungry service pack comes out, forces its install on you, and your TB isn't enough.

      • Re:1 TB of memory... (Score:5, Interesting)

        by Bandman ( 86149 ) <bandman@nOsPAM.gmail.com> on Thursday March 20, 2008 @03:31PM (#22811394) Homepage
        Virtual machines. Lots of 'em.
      • by Colin Smith ( 2679 ) on Thursday March 20, 2008 @06:55PM (#22813722)
        Well, closer to 1.2 TB. 40 systems with 32GB each. Want to know what it's used for? Disk cache... It's virtually all I/O buffer.

        All RAM is used as cache anyway. When an application allocates some RAM, it's in lieu of directly manipulating the permanent (disk) storage, because that's horribly, horribly slow. That's really an operating system failure. Network filesystems, disk, and RAM should all be completely transparent; the OS should abstract all that away and allow application programmers to handle it simply as storage.
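
        In user-space terms, mmap() is about the closest thing we have to that today. A minimal sketch (the file name is made up, and the file needs to be at least 4KB) of poking at "storage" as if it were plain memory, with the kernel doing the caching and writeback behind the scenes:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("data.bin", O_RDWR);   /* any existing file >= 4KB */
            if (fd < 0) { perror("open"); return 1; }

            /* Map 4096 bytes of the file; reads and writes now go through the
               page cache, and the kernel writes dirty pages back when it likes. */
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            memcpy(p, "hello", 5);   /* "disk" manipulated as plain memory */

            munmap(p, 4096);
            close(fd);
            return 0;
        }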
         
        • by kesuki ( 321456 ) on Thursday March 20, 2008 @08:32PM (#22814510) Journal
          You know, this subject (1 TB RAM) brings up an annoying point: every year, RAM access has gotten slower and slower relative to the CPU. When you bought a 486 computer, the RAM and processor ran at essentially the same speed; as long as data could be transferred from disk to RAM, that was good enough, and the CPU never missed a cycle for want of data. But on every new system from the Pentium 1 on up, RAM has gotten slower relative to the CPU, so now the CPU comes with 256KB to 8MB of very fast cache RAM specially designed to run at the speed of the processor, for the times when system memory simply isn't fast enough to keep up.

          I have a gaming rig I custom-built 5 or 6 years ago with some very sweet OCZ RAM with 2-2-2-2 timings, but now that I'm wish-listing a new gaming PC, the best RAM I can find is 3-4-4-15 timings. That's ALMOST HALF THE SPEED, which means it's going to hit those "unable to fetch RAM for the CPU" stalls TWICE AS OFTEN, with horrendous results... And it's getting worse: DDR3 RAM is all running at 5-5-5-15 timings stock, and mind you, 4-4-4-15 is the normal variety of "fast" DDR2 RAM, and that, again, was OCZ overclocked RAM...

          With multi-core processors this is only going to get worse. With a dual-processor rig, to truly keep both processors from missing cache you realistically need 1-1-1-10 RAM, and they KEEP MAKING THINGS WORSE by bumping up the amount of "burst" data the RAM can put out, instead of how FAST the RAM can be accessed and reloaded!!!

          Really, with such pathetic timings, a dual core is realistically going to spend about 20% of its cycles waiting on RAM if it needs randomly accessed memory that can't be burst-read. A lot of applications need random access: databases, server farms, complex 3D video game graphics... The reason 512MB graphics cards cost so much is that they all need REALLY FAST random access memory, way faster than stock DDR3. And the reason frame rates don't scale well with more processor pipelines is that those cards keep missing cycles because the memory wasn't loaded in time for the processor to work on it...

          I can't think of a single mainstream computing need to "burst" more GB/second instead of improving latency, yet the crazy computer scientists keep making it worse by engineering for burst mode rather than latency.

          It almost makes one want to use normal DDR1 RAM, with the sweet 2-2-2-2 timings, instead of DDR2...
      • Chip design apps (and I imagine a number of other ones) will likely need 1TB in a year or so. I already know of several companies using boxes with 64G of RAM, and the apps are consuming around 40-50G of it. Designing (and analyzing) those multi-billion-transistor designs eats memory. My sw package was designed to allow for ~80G per cell in the hierarchy. Since my system allows 128K cells, that's about 10TB of RAM that could be used. I have even wondered if the 80G limit needs to be increased in the near
  • Given that the core components of an OS are only a few GB, even 8GB systems might be able to do this, today.
    • Re: (Score:3, Funny)

      by Digi-John ( 692918 )
      640K should be enough for anyone.
    • by Kjella ( 173770 ) on Thursday March 20, 2008 @02:53PM (#22810816) Homepage
      Personally I just wish there was better cache hinting on current software. For example, playing a huge movie will swap out all my software to disk even though the 30GB Blu-Ray movie will likely be played start-to-finish once and give no benefit whatsoever. To the best of my knowledge (at least I've never seen it exposed to any API I've used), there's nothing like "Open, for reading, with READ cache but don't bother keeping it around in SYSTEM cache" flags.
      • Re: (Score:3, Interesting)

        by Ed Avis ( 5917 )
        Database systems use that sort of thing all the time, telling the kernel not to bother caching their file I/O but to send it straight to disk (of course, they have their own cache, configured by the database administrator). Typically, if it needs to scan a table larger than available memory, it reads the data from start to finish off the disk but doesn't cache any of it.
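
        On Linux that's O_DIRECT. A minimal sketch (file name made up, and real database engines are far more careful about alignment and error handling):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            /* O_DIRECT: I/O goes straight to the device, bypassing the page cache. */
            int fd = open("table.dat", O_RDONLY | O_DIRECT);
            if (fd < 0) { perror("open"); return 1; }

            /* Direct I/O requires block-aligned buffers, offsets, and lengths. */
            void *buf;
            if (posix_memalign(&buf, 4096, 4096) != 0) return 1;

            ssize_t n = read(fd, buf, 4096);   /* cold read, nothing cached */
            printf("read %zd bytes\n", n);

            free(buf);
            close(fd);
            return 0;
        }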
      • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Thursday March 20, 2008 @03:04PM (#22810988)
        See posix_fadvise. Using that API, a process can have as much control over a file as it needs; too bad the kernel does basically nothing with that information.
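
        For the record, the calls look like this (file name hypothetical); whether the kernel honors each hint is, as I said, another matter:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("movie.m2ts", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* len = 0 means "to the end of the file". */
            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL); /* read front to back */
            posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);    /* data used only once */

            /* ... stream the movie here ... */

            /* Afterwards, invite the kernel to drop the cached pages. */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

            close(fd);
            return 0;
        }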
      • by beuges ( 613130 ) on Thursday March 20, 2008 @04:15PM (#22811996)
        Yeah, except Vista already does aggressive caching and makes full use of RAM that isn't currently being used by applications, but Slashdot keeps going on about how it's a bloated piece of crap that uses 2GB of RAM when idle. Yet they don't complain that their system runs a lot smoother thanks to prefetching, which analyses program usage and preloads (in the background) data that it anticipates will be loaded from disk in the future.

        Here's a question... if you actually had a system with 1TB of RAM, wouldn't you like to see a lot of your hard drive contents loaded into RAM in the background, because you have the RAM to store it and you know it can be discarded at any time, since it's just cache memory and not committed memory? I mean, you've gone to all the trouble and cost of getting yourself that much RAM... do you ONLY want to make use of it all on the rare occasion you need to edit a 500-megapixel picture in Photoshop? Do you want your RAM to sit idle the rest of the time, and have your hard drive grind away, because /. would rather see the OS use 100MB of RAM at idle and have the rest doing nothing?
    • Re: (Score:3, Interesting)

      by Dolda2000 ( 759023 )
      Yeah, imagine, then, to be able to use such a fast disk as your swap device! That'll make your system swiftz0rs. Or, hey, wait a minute...

      In all honesty, though, I don't really get the point of this. Isn't the buffer cache already supposed to be doing kind of the same thing, only with a less strict mapping?

      • Swap exists because there is not enough RAM to hold all data, so it is swapped out to the HDD. We are talking about a situation where a system has so much RAM that swap is unnecessary, so parts of the HDD are stored in RAM instead!

        That is the point. A buffer cache still requires spinning up the HDD to fill it; if this is used to replace the buffer cache, then the HDD is only spun up once, during boot, and never again except to synchronize the data in RAM to HDD.
      • Re: (Score:3, Informative)

        Yeah, imagine, then, to be able to use such a fast disk as your swap device! That'll make your system swiftz0rs.

        Bingo. That is one way you can use the Violin 1010 [violin-memory.com] without needing any special backing to disk at all. In fact, this is a nigh-on perfect use of the device, because the 2x8x PCI-e bus connection, while fast, is still not as fast as main memory. But the swap subsystem knows how to manage that latency increase quite nicely. Such a swap arrangement will even tend to bring things back into balance as far as the Linux VM goes, since in the good old days when swap was invented, disk was only two or three order

    • by mpapet ( 761907 )
      Except, maybe I'd like to cache the mother of all queries from my multiple terabytes' worth of DB data? I'm at least half serious. There are a number of viable scenarios where this could be great.

      There must be a few more relevant applications. Pitch in!

      I'm all for new ideas and getting them out there for people to test. It's one of the major benefits of open systems.
    • Re: (Score:3, Informative)

      by exley ( 221867 )
      Things like this [sourceforge.net] (somewhat smaller scale) already are [gentoo.org] (somewhat bigger scale) being done.

    • You could do it now.

      But, think a bit further about the implications of this. It isn't the OS that this is aimed at. From the OS side, it would be nice to run a lot of it in RAM, but the reality is that most of the important parts of the OS (shared libs, kernel, and whatnot) are resident in RAM most of the time anyway.

      There are a couple ways to use this just off the top of my head that might make this a more interesting thing than is presented.
      • Re: (Score:3, Insightful)

        by zappepcs ( 820751 )
        Go a couple of steps further. Today's motherboards are designed to support one CPU system and a compatible OS. Why not design them to support multiple CPU systems? A VME system will allow you to plug in multiple CPU cards and memory cards, map your apps to the right memory space, and share memory.

        Something similar could be done with one supervisor CPU handling video mapping for multiple slave CPUs, as well as managing a RAID-5 or better disk system that is partitioned and mapped to RAM disk mirrors/buffers. Most
  • by saibot834 ( 1061528 ) on Thursday March 20, 2008 @02:38PM (#22810574)
    For those of you who don't have Adblock: Printerfriendly Version [idg.com.au]
  • Memory usage (Score:5, Interesting)

    by qoncept ( 599709 ) on Thursday March 20, 2008 @02:39PM (#22810578) Homepage
    I would think that, since we aren't even close to having boxes with more memory than we actively use, and RAM isn't growing any faster than we are using it up, using it as a "disk" is even further off than the article would seem to imply.
    • Re: (Score:3, Insightful)

      by Bryansix ( 761547 )
      For some uses we use all the RAM, and for others we don't. For instance, Win98 boot disks create a RAMDRIVE, which is pretty useful when you can't access any of your hard drives because they aren't formatted or partitioned.
    • Re:Memory usage (Score:5, Interesting)

      by wizardforce ( 1005805 ) on Thursday March 20, 2008 @02:51PM (#22810796) Journal

      since we aren't even close to having boxes with more memory than we actively use
      640K should be enough for anyone. You do realize that computer manufacturers happily bundling over 2 gigs of RAM in a default install, just so it runs Vista all prettily, gives those of us on Linux a fantastic advantage, since we don't use anywhere near that on a regular basis. There are already Linux distros small enough to sit entirely in RAM, some even small enough to run in L2/L3 cache if you like. Being able to do things like this is going to be a major advantage.
    • Re:Memory usage (Score:4, Insightful)

      by Ephemeriis ( 315124 ) on Thursday March 20, 2008 @03:20PM (#22811218)
      RAM is getting cheaper every day. Capacity is constantly growing. I just bought 4 GB RAM for about the same price I paid a few years ago for 1 GB. Right now I could build a system with 16 GB RAM without breaking the bank, all from basic consumer-grade parts available on NewEgg. It isn't going to be long before we see systems with more RAM than we know what to do with. Turning a chunk of it into a big RAMdisk sounds like a good idea to me.
      • Re: (Score:3, Interesting)

        by geekoid ( 135745 )
        Well, I wish 64-bit would get pushed and 32-bit actively phased out. As in, stop making it.

        I can get a 64-bit mobo and a 64-bit proc, and still have problems finding one that can take more than 8 gigs of RAM.

        I want to load my games into a RAM disk and play them from there. I did it in the bad ol'/good ol' days. I want to put a 2-hour movie entirely in RAM. I want 100+ gigabytes of RAM, damn it. I've been stuck at 4 gigs for years. Enough already.

        Also, I want a pony.
    • by cgenman ( 325138 )
      This sounds a lot like Google's server needs: truly random access at high speeds.

      RAM disks were available on the Mac in 1990. You can get specialized rocket drives that are entirely RAM. How is this so "far off" again?
  • by Cedric Tsui ( 890887 ) on Thursday March 20, 2008 @02:39PM (#22810586)
    One Terabyte ought to be enough for anybody.
  • Windows 7? (Score:4, Funny)

    by Lectoid ( 891115 ) on Thursday March 20, 2008 @02:41PM (#22810624)
    See also, Windows 7 minimum requirements.
  • Vista SP1 (Score:4, Funny)

    by sakdoctor ( 1087155 ) on Thursday March 20, 2008 @02:42PM (#22810634) Homepage
    Is that the recommended or minimum requirement?
  • 8 GB (Score:5, Funny)

    by Rinisari ( 521266 ) on Thursday March 20, 2008 @02:42PM (#22810644) Homepage Journal
    I have 8 GB of RAM and rarely use more than four of it unless I'm playing a 64-bit game which eats it up (Crysis). Yes, I am running both 64-bit Linux and Windows.

    One time, I opened up more than a thousand tabs in Firefox just because I could.
    • Oh yea? (Score:5, Funny)

      by SeePage87 ( 923251 ) on Thursday March 20, 2008 @03:10PM (#22811078)
      Well I can do cock push-ups.
    • by ls -la ( 937805 )
      I have 1 GB of RAM, and I rarely use it all up. Of course, I don't play RAM-hungry games or use Vista; those two, and maybe compiling large programs, are all I can think of that would need more than a gig of RAM to function at a reasonable speed.

      As a side note on the compiling: I'm doing a thesis on memory paging, and the largest trace we have is of compiling a Linux kernel: over 4 million distinct pages, each 4kB, for a total footprint of over 16GB.
    • I have 4GB, still two more slots for another 4GB...

      How the hell do you use ~4GB? I do video encoding, compression, editing, graphics, etc., all simultaneously, and honestly never go above 2GB. The only time I ever go over that is when I boot up XP via VMware (RAM set to use up to 1GB), although I think I've only done that once since I got Photoshop CS2 and Flash 8 running fine under WINE.
  • Power Failure (Score:3, Informative)

    by Anonymous Coward on Thursday March 20, 2008 @02:45PM (#22810688)
    One important thing to consider is that if you're using a ramdisk for important stuff, what happens when the power dies?

    For example, will the stuff synced from magnetic media be stored elsewhere? If so, what happens to the speed?

    -B
    • Re:Power Failure (Score:5, Informative)

      by itsjz ( 1080863 ) on Thursday March 20, 2008 @02:55PM (#22810854)
      There are about three paragraphs in the article discussing this. Basically, use a UPS:

      If line power goes out while ramback is running, the UPS kicks in and a power management script switches the driver from writeback to writethrough mode. Ramback proceeds to save all remaining dirty data while forcing each new application write through to backing store immediately.
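
      In rough user-space terms, that writeback-to-writethrough switch amounts to this pattern (a loose analogy, not ramback's actual driver code; the signal choice and names are invented):

      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      static volatile sig_atomic_t writethrough = 0;  /* 0 = writeback mode */

      /* The UPS/power-management script would send us SIGUSR1 on line failure. */
      static void on_power_fail(int sig) { (void)sig; writethrough = 1; }

      static void do_write(int fd, const void *buf, size_t len)
      {
          write(fd, buf, len);
          if (writethrough)
              fsync(fd);  /* push each new write through to backing store */
      }

      int main(void)
      {
          signal(SIGUSR1, on_power_fail);
          for (;;) {
              do_write(STDOUT_FILENO, "data\n", 5);  /* stand-in for the device */
              sleep(1);
          }
      }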
      • Re:Power Failure (Score:4, Insightful)

        by Znork ( 31774 ) on Thursday March 20, 2008 @04:20PM (#22812070)
        Basically, use a UPS

        Then it goes on with the other questions, like "what if the hardware or kernel crashes?", and answers them with "use things that don't crash".

        Agh. I mean, that's really, really bad engineering. You don't engineer things with the assumption that everything will work. You engineer them to fail gracefully when everything that can go wrong does go wrong. And preferably with margin.

        If the system requirements for this are a UPS, crashproof hardware, and a completely bug-free OS, well, I'm sorry, but there's no system in the world capable of fulfilling them.

        Still, I'm sure there are cases where it's useful; as long as speed is of higher importance than data integrity, this sounds very attractive.
  • by erroneus ( 253617 ) on Thursday March 20, 2008 @02:47PM (#22810716) Homepage
    ...I might be able to run Vista!!! (I wonder how many people have written this prior to me already?)

    It's a lot of RAM and at today's computational speeds, it's not likely that it could be used for anything beyond a RAM drive.

    Is it too soon to think about how to use that much RAM? NO! It's the lack of forward thinking that caused a lot of the artificial limitations that have had to be worked around in the past. We're still dealing with limitations in filesystems and the like. I've got an old Macintosh that can't access more than 128GB or something like that because its BIOS can't handle it... I had to get another PCI controller installed to handle larger drives.

    What it is time to think about is how to code without such limitations built in. That would better enable things to grow more easily and naturally.
  • by Gybrwe666 ( 1007849 ) on Thursday March 20, 2008 @02:49PM (#22810748)
    The System Tray would end up filling most of my dual monitors with all the crap Microsoft will inevitably find "necessary" to run the OS, leaving me with a small, 640x480 patch and approximately 640k for applications.
    • If you run MS SQL Server and don't manage the RAM then it will use it all just for the fun of it.
      • by W2k ( 540424 ) on Thursday March 20, 2008 @02:58PM (#22810888) Journal
        If you run MS SQL Server and don't manage the RAM then it will use it all just for the fun of it.

        If you find this in any way strange, wrong or confusing, perhaps you should read up on what the primary purpose of a frikkin' DATABASE SERVER is.

        Here's a hint: the more data it can keep readily accessible (that is, in RAM), the better it will perform. And as you mentioned, you can of course set it to use less RAM if you have to. It's just that it's optimized for performance by default.
        • No, I know that it optimizes for performance. What I don't understand is how a 128k database with no logs and no users would still need to use up a terabyte of RAM. It even does this to the detriment of the console session of the OS GUI. It's a Microsoft product, and it isn't even smart enough to be aware that Windows might need some RAM to function correctly.
  • by Anonymous Coward on Thursday March 20, 2008 @02:49PM (#22810766)
    You wrote: "We haven't yet reached a point where systems, even high-end boxes, come with a terabyte of installed memory." This is not true: Sun's E25K can go over 1TB of memory.
  • How ? (Score:5, Funny)

    by herve_masson ( 104332 ) on Thursday March 20, 2008 @02:49PM (#22810770)
    #include <stdlib.h>
    #include <string.h>
    // Use 1TB of RAM
    char *ptr = malloc(1099511627776ULL);
    if (ptr) memset(ptr, 1, 1099511627776ULL);
  • nothing new here (Score:4, Informative)

    by dltaylor ( 7510 ) on Thursday March 20, 2008 @02:51PM (#22810790)
    Linux gobbles free RAM to add to the buffer cache. This is already a large RAM disk with automatic sync. In embedded systems, you can even decouple the buffer cache from any physical media and just live in a variable-size RAM disk, which means Linux is finally catching up to AmigaDOS.
  • by Enleth ( 947766 ) <enleth@enleth.com> on Thursday March 20, 2008 @02:59PM (#22810906) Homepage
    I'm using regular ramdisks initialized with data on bootup, composited with temporary, empty disk partitions using unionfs, and synchronized back to their real partitions on powerdown, so that I get extremely fast read times for most things on such a disk, and conventional write/re-read times. However, the problem is that to the upper layers of the kernel, those ramdisks are not RAM at all, just another block device - and when it comes to loading executables and libraries, they are copied, well, from memory to memory. What's missing is some way to tell the damn thing to use the data pages that are already there and issue a copy-on-write only when required. If this mechanism can do that - well, I'll be in as soon as they make it a little bit more fault-tolerant.
  • How is this different from the already existing kernel VFS buffer store, other than for the repopulation at startup?

    Could you not accomplish this much more simply by having a process read all the blocks in a given block device at startup, thus faulting everything into the kernel buffer cache?
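
    For what it's worth, that warm-up pass is trivial to write (device path arbitrary, needs read permission):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Read the whole block device once; every block we touch lands in
           the kernel's buffer cache as a side effect. */
        int fd = open("/dev/sda1", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[1 << 20];  /* 1MB chunks */
        while (read(fd, buf, sizeof buf) > 0)
            ;  /* discard the data; we only want the caching side effect */

        close(fd);
        return 0;
    }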
    • by arcade ( 16638 )
      It doesn't guarantee to sync your data to disk, only to the ramdisk. It will _attempt_ to sync the data to disk, but it won't block to do so.

      This means that all your read and write operations will go splendidly fast.

      It also means that you lose if you have a sudden power loss. But in many situations, that might actually not matter so much compared to the speed advantage you get out of this.

  • The analysis thankfully makes a comparison to the I/O caching that normally happens. The distinction seems to be that this "innovation" makes calling "sync" a lie, and that just doesn't seem like a good thing.

    I put 16 GB of RAM in a system, and operations are quite snappy, the disk cache happily filling and draining, and it feels more or less like a ramdisk system once the data has been read into memory the first time on read operations. Sure, sync t
  • Not so far off (Score:4, Interesting)

    by Guspaz ( 556486 ) on Thursday March 20, 2008 @03:07PM (#22811018)
    Current high-end server boards support up to 64GB of RAM (16 slots, 4GB DIMMs).

    By Moore's Law, we should hit 1TB in a high-end server in 6 years, in high-end desktops (assume 8GB of RAM, currently selling for $180 CAD) in 10.5 years, and in the average midrange desktop (assume 2GB of RAM, currently selling for $45 CAD) in 13.5 years.
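
    (For reference, the arithmetic, assuming a doubling every 18 months: 1TB / 64GB = 16 = 2^4, so 4 doublings x 18 months = 6 years; 1TB / 8GB = 2^7 gives 10.5 years; 1TB / 2GB = 2^9 gives 13.5 years.)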

    We might be a while off in consumer applications, but for high-end servers, 6 years doesn't seem very far away.
  • by JoeRandomHacker ( 983775 ) on Thursday March 20, 2008 @03:11PM (#22811084)
    Check out the specs on the Motorola (formerly BroadBus) B-1 Video Server:

    http://www.motorola.com/content.jsp?globalObjectId=7727-10991-10997 [motorola.com]

    Sounds like a good use for a terabyte of RAM to me.

    Disclosure: I currently work for Motorola, but I don't speak for them, and don't have any involvement with this product beyond salivating over it when it was announced that we were buying BroadBus.
  • by darkmeridian ( 119044 ) <<moc.liamg> <ta> <gnauhc.mailliw>> on Thursday March 20, 2008 @03:12PM (#22811108) Homepage
    Ten years ago, my PC had 8 megs of system RAM. My laptop now has four gigs of RAM. In ten more years, I am sure we'll have a terabyte of RAM.
  • by ecloud ( 3022 ) on Thursday March 20, 2008 @03:12PM (#22811110) Homepage Journal
    If you are planning on having a few minutes' worth of UPS backup then why would you need to write to the hard drive continuously? Keep the hard drive spun down (saving power). If the system is being shut down, or AC power fails, then spin up the drive and make a backup of your ramdisk, thus being ready to restore when the power comes back up.

    Next step beyond that: stop using a filesystem at runtime. Just assume your data can all fit in memory (why not, if you have a terabyte of it?) This simplifies the code and prevents a lot of duplication (why copy from RAM to RAM, just to make the distinction that one part of RAM is a filesystem and another part is the working copy?) But you will need a simple way to serialize the data to disk in case of power-down, and a simple way to restore it. This does not need to be a multi-threaded, online operation: when the system is going down you can cease all operations and just concentrate on doing the archival.

    This assumption changes software design pretty fundamentally. Relational databases for example have historically been all about leaving the data on the disk and yet still fetching query results efficiently, with as little RAM as necessary.

    Next step beyond that: system RAM will become non-volatile, and the disk can go away. The serialization code is now used only for making backups across the network.

    Now think about how that could obsolete the old Unix paradigm that everything is a file.
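
    As a toy user-space sketch of that serialize-at-shutdown model (the size, the file name, and the choice of SIGTERM as the "power is failing" notification are all just illustration):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define STORE_SIZE (64 * 1024 * 1024)  /* pretend this is the whole dataset */

    static char *store;                    /* all data lives here at runtime */
    static volatile sig_atomic_t dump_requested = 0;

    /* A UPS monitor or shutdown script could send SIGTERM on power failure. */
    static void on_shutdown(int sig) { (void)sig; dump_requested = 1; }

    /* Serialize the entire in-memory store to disk in one pass. */
    static void dump_store(const char *path)
    {
        FILE *f = fopen(path, "wb");
        if (!f) { perror("fopen"); return; }
        fwrite(store, 1, STORE_SIZE, f);
        fclose(f);
    }

    int main(void)
    {
        store = calloc(1, STORE_SIZE);
        if (!store) return 1;
        signal(SIGTERM, on_shutdown);

        /* ... normal operation: read and write 'store' directly; no filesystem ... */
        while (!dump_requested)
            pause();                       /* wait for the shutdown signal */

        dump_store("store.img");           /* archival happens only at power-down */
        return 0;
    }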
  • Geez. Why would I ever need it!?!

    If you ever want a fast OS, run Windows 3.1 on a 300 MHz P2 with 64 MB of RAM. Blazing fast.

    Let's get to 128 GB of RAM before we start pimping 1 TB.

  • The first thing I thought of was pr0n. Is that so wrong?
  • by flyingfsck ( 986395 ) on Thursday March 20, 2008 @03:18PM (#22811204)
    Geez, I wrote a floppy disk cache driver as a programming homework exercise in the 1980s. Talk of re-inventing the wheel...
  • When I started my programming career (1997), my employer had 3-4 servers, the newest of which had a RAID array of Micropolis drives totaling a staggering 18GB for the volume. The older servers had 6GB and 9GB volumes. While we did have to take a bit more care then than now to conserve space, that was enough for an awful lot of tasks.

    If I'm reading the specs right, you can now get parts for a PC with 12GB of RAM (mixing DDR2 and DDR3) from NewEgg for something on the order of $1000. While I wouldn't sugge
  • During games or analysis I could store the entire 3-6 man endgame tablebases in memory and get rid of the bottleneck that an HD is when doing a lot of searching in a 1.5 TB dataset. So yes, it could be useful to some people. Perhaps not mom and pop who check email, but researchers who crunch large datasets.
  • cachefs (Score:3, Informative)

    by argent ( 18001 ) <(peter) (at) (slashdot.2006.taronga.com)> on Thursday March 20, 2008 @03:36PM (#22811476) Homepage Journal
    A fully caching file system that could be layered on top of your network or disk file system. Sun did this for dataless workstations and it worked pretty well.

    Another historically interesting RAM file system was the Amiga Recoverable RAM Disk. You could even boot off it.
  • by heroine ( 1220 ) on Thursday March 20, 2008 @03:45PM (#22811624) Homepage
    Still think the floating point voxel octree version of Google Earth will use that memory before any ramdisk gets it.

  • Speed vs tmpfs? (Score:5, Interesting)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday March 20, 2008 @04:26PM (#22812152) Journal

    How it seems to work:

    Actual "ramdisk" -- that is, like /dev/rd -- that is, appears as a block device. You can run whatever filesystem you want on it, but it's still serializing and writing out to... well, RAM, in this case. No sane way for the kernel to free space on that "disk" that's not actually used.

    How I wish it worked:

    No Linux that I know of has used an actual ramdisk in forever. Instead, we use tmpfs -- a filesystem which actually grows or shrinks to our needs, up to an optional configurable maximum size. It'll use swap if available/needed. It's basically a RAM filesystem, instead of a RAM disk.

    Even initrds are dead now -- we use initramfs. Basically, instead of the kernel booting and reading a ramdisk image directly to /dev/rd0, it instead boots and unpacks a cpio archive (like a tarball, but different/better/worse) into a tmpfs filesystem, and uses that.

    So, how I would like this to work is, use a tmpfs filesystem -- as I suspect it will be faster, and in any case simpler, than a ramdisk -- and back it to a real filesystem on-disk. The only challenge here is that it's not as deterministic -- it would be more like a cp than a dd.
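
    For comparison, putting up a size-capped tmpfs from C is a one-liner via mount(2) (mount point arbitrary, needs root; most people just use /etc/fstab or the mount command):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* A tmpfs capped at 1GB; it grows and shrinks on demand below the cap,
           and can spill to swap under memory pressure. */
        if (mount("tmpfs", "/mnt/fast", "tmpfs", 0, "size=1g") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }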

    An even better (crazier) idea:

    Use a filesystem like XFS or Reiser4 -- something which delays allocation until a flush. In either case, it would take a bit of tweaking -- you want to make sure no writes, or fsyncs, block while writing to disk, so long as the power is on -- but you'll hopefully already be caching an obscene amount anyway, so reads will be fast.

    In this case, forcing everything out to disk could be as simple as "mount / -o remount,sync" -- or something similar -- forcing an immediate sync, and all future writes to be synchronous.

    Conclusion:

    Either of the two ideas I suggested should work, and could perform better than a traditional ramdisk. If it is, in fact, a simple disk-backed ramdisk (not ram filesystem), then it's both not as flexible (what if your app suddenly wants 50 gigs of RAM in application space?) and a bit of a hack -- probably a hack around traditional disk-backed filesystems not being able to take advantage of so much RAM by themselves.

    In fact, glancing back at TFA, it seems there are some inherent reliability concerns, too:

    If UPS power runs out while ramback still holds unflushed dirty data then things get ugly. Hopefully a fsck -f will be able to pull something useful out of the mess. (This is where you might want to be running Ext3.)

    Now, true, this should never happen, but in the event it does, the inherent problem here is that the ramdisk doesn't know anything about the filesystem, and so it doesn't know in what order it should be writing stuff to disk. Ext3 journaling makes NO sense for a ramdisk when the ramdisk itself knows nothing about the journal -- the journal is just going to slow down the RAM-based operation. Compare this to a sync call to XFS -- individual files might be corrupted, but all the writes will be journaled in some way, so at least the filesystem structure will be intact.

    This gets even better with something like Reiser4's (vaporware) transaction API. If the application can define a transaction at the filesystem level, then this consistent-dump-to-disk will happen at the application level, too. Which means that while it would certainly suck to have a UPS fail, it wouldn't be much worse than the same happening to a non-ramdisk device, at least as far as consistency goes. (Some data will be lost, no way around that, but at least this way, some data will be fine.)

  • memory test (Score:3, Funny)

    by Skapare ( 16644 ) on Thursday March 20, 2008 @05:07PM (#22812558) Homepage

    You better skip the memory test.

  • Virtual Machines (Score:3, Insightful)

    by nurb432 ( 527695 ) on Thursday March 20, 2008 @06:44PM (#22813630) Homepage Journal
    That's the only reason I can see to have that much RAM. Unless our current crop of so-called programmers bloat their code to fill the expansion for yet another worthless feature.
  • Access (Score:3, Insightful)

    by Forty Two Tenfold ( 1134125 ) on Friday March 21, 2008 @07:30AM (#22817788)
    The question should be not WHAT to fill it with, but how to read/write gargantuan amounts of data quickly.
