How To Use a Terabyte of RAM
Spuddly writes with links to Daniel Phillips and his work on the Ramback patch, and to an analysis of it by Jonathan Corbet up on LWN. The experimental new design for Linux's virtual memory system would turn a large amount of system RAM into a fast RAM disk with automatic sync to magnetic media. We haven't yet reached the point where systems, even high-end boxes, come with a terabyte of installed memory, but perhaps it's not too soon to start thinking about how to handle that much memory.
1 TB of memory... (Score:5, Funny)
Re:1 TB of memory... (Score:5, Funny)
Re: (Score:2, Informative)
Re:1 TB of memory... (Score:5, Funny)
emacs, for starters
Re: (Score:2, Insightful)
Eighty Megamebibytes And Constantly Swapping?
Besides, isn't it obvious how one should use a terabyte of RAM? Use it to upgrade your PC to run Windows Vista MegaUltimate, of course. :-D
Or, to put it another way, the question of what to do with the extra RAM is a non-issue. Install software. Software developers will find a way to waste as much RAM as you can put in and performance will still be slow. It's just the nature of progress....
*sigh*
Re: (Score:3)
The new hotness in memory suckage is anything based on Java.
Re: (Score:3, Funny)
Re:1 TB of memory... (Score:5, Funny)
Re:1 TB of memory... (Score:5, Funny)
Re:1 TB of memory... (Score:5, Funny)
Re:1 TB of memory... (Score:5, Funny)
Yes, but they're substantially less functional operating systems than Emacs.
Re: (Score:3, Funny)
Re:1 TB of memory... (Score:4, Interesting)
In the 80s, the overhead of a Lisp machine just to make your application customizable was absurd (hence the emacs jokes). Writing an editor all in C was a great idea. Speed! Memory savings! This approach made vi very popular.
Now that it's 2008 and every new computer has a few gigs of RAM, it's not so absurd to write an editor in a dynamic language running on top of a minimal core. An experienced elisp coder can add non-trivial functionality to emacs in just a few hours. emacs makes that easy and enjoyable.
vi(m) may use less memory, but that just doesn't matter anymore. If you want to customize it (non-trivially), you have to hack vim and recompile. So while emacs jokes are hilarious, making them dates you to the early 80s. There's no reason to write tiny apps in assembly anymore; big apps that can be extended are a much better approach.
Mobile much? (Score:5, Insightful)
Re:1 TB of memory... (Score:5, Insightful)
Under the covers the System/38 was a CISC box, the AS/400 could be CISC or RISC, and the iSeries is all RISC. From an app-dev point of view, the same compiled object code could run on all three. Stop and think about that for a second.
Now, the System/38 had a very advanced constraint-based security system. For example, you could use an object that you could not see. In general it allowed very fine-grained control of security. Of course this has been improved through to the iSeries.
Also, this machine had a single address space for all storage. An app didn't need to worry about memory size; the machine automatically used RAM as disk and disk as RAM.
Of course this machine has had a life of 30+ years, and most OS designers have zero idea just how revolutionary it is; it's the same sort of thing as the MCP for a Burroughs B5000. People who do not know history are doomed to repeat it over and over again.
Re:1 TB of memory... (Score:4, Insightful)
Give it up, this is a religious war. Those of us who prefer vi(m) consider it a more focused editor. We neither need nor want the extensibility you crave. Those of you who prefer emacs consider the extensibility vital to your work, and can't imagine how anyone could live without it.
We have been debating this forever, and will continue to do so, as long as there are vi(m) and emacs users out there. There is no "right" answer, so just enjoy the jokes (they are normally harmless, and often good for at least a smile).
Re:1 TB of memory... (Score:4, Funny)
Re:1 TB of memory... (Score:4, Funny)
Re:1 TB of memory... (Score:5, Insightful)
As to the problem of how to use 1 TB of RAM, spending any time at all thinking about this is foolish and wasteful. Of course, I remember the days when we rated our computers by how many kilobytes of memory we had, and plenty of readers here will remember having 20 to 40 MB hard disks in PCs with far less than 1 MB of physical RAM. In those days (and I'll avoid the famous Bill Gates quote on the subject), how would you have spent your time deciding what to do with the memory if you had a computer with 1, 2, or even 4 gigs of memory? You may have come up with all sorts of amazing ideas. But none of them would have done you any good, because the developers (mostly Microsoft, but Linux is far from lean and mean anymore either) already decided what to do with it: waste it and leave you wanting more. And one of your ideas for a 4-gig system would probably not have been to pretend that most of the last gig of memory wasn't there and just ignore it!
So why even have a post about what to do with a terabyte of memory? The solution is simple: install Windows 9 and try to quickly order more memory online before the memory-hungry service pack comes out, forces its install on you, and your TB isn't enough.
Re:1 TB of memory... (Score:5, Interesting)
Re: (Score:3, Funny)
Some of us do have access to 1TB or more of RAM (Score:5, Interesting)
All RAM is used as cache anyway. When an application allocates some RAM, it does so in lieu of directly manipulating the permanent (disk) storage, because disk is horribly, horribly slow. That's really an operating system failure. Network file systems, disk, and RAM should all be completely transparent; the OS should abstract all that away and let application programmers handle it simply as storage.
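Something like mmap(2) already gives a taste of this. A minimal sketch (the file path is made up; a real program would check sizes properly): the kernel pages data in and out behind an ordinary pointer, and the program never issues explicit disk I/O.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/bigfile", O_RDWR);   /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* "storage" now looks like ordinary memory */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';          /* a write; the kernel syncs it back eventually */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}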
Re:Some of us do have access to 1TB or more of RAM (Score:4, Interesting)
I have a gaming rig I custom-built 5 or 6 years ago with some very sweet OCZ RAM with 2-2-2-2 timings, but now when I was wish-listing a new gaming PC, the best RAM I could find was 3-4-4-15 timings. That's ALMOST HALF THE SPEED in cycle counts, which means it's going to hit those 'unable to fetch RAM for the CPU' stalls TWICE AS OFTEN, with horrendous results... And it's getting worse: DDR3 RAM is all running at 5-5-5-15 timings stock, and mind you, 4-4-4-15 is the normal variety of 'fast' DDR2 RAM; that was again OCZ overclocked RAM...
With multi-core processors this is only going to get worse. With a dual-processor rig, to truly keep both processors from missing cache you realistically need 1-1-1-10 RAM, and they KEEP MAKING THINGS WORSE by bumping up the amount of 'burst' data the RAM can put out, instead of how FAST the RAM can access and reload!!!
Really, with such pathetic timings, a dual core is realistically going to spend about 20% of its cycles waiting on RAM if it needs randomly accessed memory that can't be burst-read. A lot of applications need random access: databases, server farms, complex 3D video game graphics... The reason 512MB graphics cards cost so much is that they all need REALLY FAST random-access memory, way faster than stock DDR3... and the reason frame rates don't scale well with more processor pipelines is that those cards keep missing strokes because the system wasn't able to load the memory in time for the processor to work on it...
I can't think of a single mainstream computing need to 'burst' more GB/second instead of improving latency, yet the crazy computer scientists keep making it worse by engineering for burst mode rather than latency.
It almost makes one want to use normal DDR1 RAM, with the sweet 2-2-2-2 timings, instead of DDR2...
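(Though to be fair, timings count bus cycles, not nanoseconds, so a faster clock offsets a higher CL. A back-of-envelope check, assuming the standard I/O clocks for each generation:)

#include <stdio.h>

int main(void)
{
    /* CAS latency in wall-clock terms = CL cycles / I/O clock */
    struct { const char *name; double mhz; double cl; } ram[] = {
        { "DDR-400   CL2", 200, 2 },   /* the old 2-2-2-2 sticks */
        { "DDR2-800  CL4", 400, 4 },   /* 4-4-4-15 'fast' DDR2   */
        { "DDR3-1066 CL5", 533, 5 },   /* 5-5-5-15 stock DDR3    */
    };
    for (int i = 0; i < 3; i++)
        printf("%s: %.1f ns\n", ram[i].name, ram[i].cl / ram[i].mhz * 1000.0);
    return 0;
}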
Re: (Score:3, Insightful)
Closer than you think (Score:3)
You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re: (Score:3, Funny)
Re:You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re: (Score:3, Interesting)
Re:You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re:You only need 16GB of RAM for this to be useful (Score:5, Interesting)
Re: (Score:3, Informative)
Re:You only need 16GB of RAM for this to be useful (Score:5, Interesting)
Here's a question: if you actually had a system with 1TB of RAM, wouldn't you like to see a lot of your hard drive contents loaded into RAM in the background, because you have the RAM to store it, and you know it can be discarded at any time because it's just cache memory and not committed memory? I mean, you've gone to all the trouble and cost of getting yourself that much RAM... do you ONLY ever want to make use of it all on the rare occasion you need to edit a 500-megapixel picture in Photoshop? Do you want your RAM to sit idle the rest of the time, and have your hard drive grind away because nothing was preloaded into it?
Re: (Score:3, Interesting)
In all honesty, though, I don't really get the point of this. Isn't the buffer cache already supposed to be doing kind of the same thing, only with a less strict mapping?
Re: (Score:2)
That is the point. A buffer cache still requires spinning up the HDD to fill it; if this is used to replace the buffer cache, then the HDD is only spun up once, during boot, and never again except to synchronize the data in RAM to HDD.
Re: (Score:3, Informative)
Yeah, imagine, then, to be able to use such a fast disk as your swap device! That'll make your system swiftz0rs.
Bingo. That is one way you can use the Violin 1010 [violin-memory.com] without needing any special backing to disk at all. In fact, this is a nigh-on perfect use of the device, because the 2x8x PCI-e bus connection, while fast, is still not as fast as main memory. But the swap subsystem knows how to manage that latency increase quite nicely. Such a swap arrangement will even tend to bring things back into balance as far as the Linux VM goes, since in the good old days when swap was invented, disk was only two or three orders of magnitude slower than main memory.
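(Attaching it is a one-liner from C, too; the device path here is hypothetical, and mkswap has to have initialized it first:)

#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
    /* attach the PCIe memory appliance as a swap device; needs root */
    if (swapon("/dev/violin0", 0) != 0)
        perror("swapon");
    return 0;
}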
Re: (Score:2)
There must be a few more relevant applications. Pitch in!
I'm all for new ideas and getting them out there for people to test. It's one of the major benefits of open systems.
Re: (Score:3, Informative)
Re: (Score:2)
You could do it now.
But, think a bit further about the implications of this. It isn't the OS that this is aimed at. From the OS side, it would be nice to run a lot of it in RAM, but the reality is that most of the important parts of the OS (shared libs, kernel, and whatnot) are resident in RAM most of the time anyway.
There are a couple ways to use this just off the top of my head that might make this a more interesting thing than is presented.
Re: (Score:3, Insightful)
Something similar could be done with one supervisor CPU handling video mapping for multiple slave CPUs, as well as managing a RAID-5 or better disk system that is partitioned and mapped to RAM disk mirrors/buffers. Most
Re: (Score:2)
Actually, no, it doesn't. Flash drives are based on non-volatile memory and are just about as slow as a regular hard drive. This article talks about using volatile memory, which is many, many times faster.
Re: (Score:3, Interesting)
Granted, it doesn't run Linux (or if it does, it's kept hidden from the user). But with these awesome specifications, I have to wonder why they don't just sell general-purpose computers -- people would port Linux to them, and they'd clean up! Is there something special about their processors that makes them good at running Java, or what?
Ad-Free one-page Version of the story (Score:5, Informative)
Re:Ad-Free one-page Version of the story (Score:5, Informative)
http://lwn.net/Articles/272011/ [lwn.net]
Memory usage (Score:5, Interesting)
Re: (Score:3, Insightful)
Re:Memory usage (Score:5, Interesting)
Re: (Score:2)
Re:Memory usage (Score:4, Insightful)
Re: (Score:3, Interesting)
I can get a 64-bit mobo and a 64-bit proc, and still have problems finding one that can take more than 8 gigs of RAM.
I want to load up my games into a RAM disk and play them from there. I've done it in the bad ol'/good ol' days. I want to put a 2-hour movie entirely in RAM. I want 100+ gigabytes of RAM, damn it. I've been stuck at 4 gigs for years. Enough already.
Also, I want a pony.
Re: (Score:2)
RAM disks were available on the Mac in 1990. You can get specialized rocket drives that are entirely RAM. How is this so "far off" again?
One Terabyte (Score:4, Funny)
Re:One Terabyte (Score:5, Funny)
Obviously you're running windows XP, not Vista!
Re: (Score:2)
Re: (Score:2)
Windows 7? (Score:4, Funny)
Vista SP1 (Score:4, Funny)
8 GB (Score:5, Funny)
One time, I opened up more than a thousand tabs in Firefox just because I could.
Oh yea? (Score:5, Funny)
Re: (Score:2)
As a side note on the compiling: I'm doing a thesis on memory paging, and the largest trace we have is of compiling a Linux kernel: over 4 million distinct pages, each 4 kB, for a total footprint of over 16 GB.
Re: (Score:2)
How the hell do you use ~4GB? I do video encoding, compression, editing, graphics, etc. etc. all simultaneously and honestly never go above 2GB. The only time I ever go over that is when I boot up XP via VMware (ram set to use up to 1GB), although I think I've done that once since I've gotten Photoshop CS2 and Flash 8 running fine under WINE.
Power Failure (Score:3, Informative)
For example, will the stuff synced from magnetic media be stored elsewhere? If so, what happens to the speed?
-B
Re:Power Failure (Score:5, Informative)
Re:Power Failure (Score:4, Insightful)
Then it goes on to the other questions, like "what if the hardware or kernel crashes?", and answers them with "use things that don't crash".
Agh. I mean, that's really, really bad engineering. You don't engineer things on the assumption that everything will work. You engineer them to fail gracefully when everything that can go wrong does go wrong. And preferably with margin.
If the system requirements for this are a UPS, crash-proof hardware, and a completely bug-free OS, well, I'm sorry, but there's no system in the world capable of fulfilling those requirements.
Still, I'm sure there are cases where it fits: as long as speed is of higher importance than data integrity, this sounds very useful.
With that much RAM... (Score:3, Insightful)
It's a lot of RAM, and at today's computational speeds it's not likely that it could be used for anything beyond a RAM drive.
Is it too soon to think about how to use that much RAM? NO! It's the lack of forward thinking that caused a lot of the artificial limitations we've had to work around in the past. We're still dealing with limitations in file systems and the like. I've got an old Macintosh that can't access more than 128GB or something like that, because its BIOS can't handle it... I had to get another PCI controller installed to handle larger drives.
What it is time to think about is how to code without such limitations built in. That would better enable things to grow more easily and naturally.
The problem with giving Windows 1TB... (Score:5, Funny)
Re: (Score:2)
Re:The problem with giving Windows 1TB... (Score:4, Informative)
If you find this in any way strange, wrong or confusing, perhaps you should read up on what the primary purpose of a frikkin' DATABASE SERVER is.
Here's a hint: the more data it can keep readily accessible (that is, in RAM), the better it will perform. And as you mentioned, you can of course set it to use less RAM if you have to. It's just that it's optimized for performance by default.
Re: (Score:2)
uh - there is at least one system with 1TB of RAM (Score:5, Informative)
Re:uh - there is at least one system with 1TB of R (Score:3, Informative)
How ? (Score:5, Funny)
#include <stdlib.h>
#include <string.h>
char *ptr = malloc(1ULL << 40);          /* 1 TiB = 1099511627776 bytes */
if (ptr) memset(ptr, 1, 1ULL << 40);     /* touch every last page */
Re: (Score:2)
nothing new here (Score:4, Informative)
What about copy-on-write for executables? (Score:4, Interesting)
How is this different.... (Score:2)
Could you not accomplish this much more simply by having a process read all the blocks in a given block device at startup, thus faulting everything into the kernel buffer cache?
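Something like this trivial cache-warmer, say (the device path is a placeholder; run it as root at startup):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    static char buf[1 << 20];             /* read in 1 MiB chunks */
    int fd = open(argc > 1 ? argv[1] : "/dev/sda", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    while (read(fd, buf, sizeof buf) > 0)
        ;                                 /* contents now sit in the page cache */
    close(fd);
    return 0;
}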
Re: (Score:2)
This means that all your read and all your write operations will go splendidly fast.
It also means that you lose if you have a sudden power loss. But in many situations, that might actually not matter so much compared to the speed advantage you get out of this.
Not quite understanding... (Score:2)
I put 16 GB of RAM in a system, and operations are quite snappy, the disk cache happily filling and draining, and it feels more or less like a ramdisk system once the data has been read into memory the first time on read operations. Sure, sync to disk still has to happen eventually...
Re: (Score:2)
Not so far off (Score:4, Interesting)
By Moore's Law, we should hit 1TB in a high-end server in 6 years, in high-end desktops (assume 8GB of RAM, currently selling for $180 CAD) in 10.5 years, and in the average midrange desktop (assume 2GB of RAM, currently selling for $45 CAD) in 13.5 years.
We might be a while off in consumer applications, but for high-end servers, 6 years doesn't seem very far away.
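(The arithmetic behind those figures, assuming a doubling every 18 months and the ~64GB high-end server that the six-year figure implies:)

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* years to reach 1 TB = 1.5 * log2(1024 GB / size today) */
    double sizes[] = { 64, 8, 2 };   /* server, high-end desktop, midrange (GB) */
    for (int i = 0; i < 3; i++)
        printf("%4.0f GB -> 1 TB in %.1f years\n",
               sizes[i], 1.5 * log2(1024.0 / sizes[i]));
    return 0;
}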
Video Streaming Server (Score:3, Interesting)
http://www.motorola.com/content.jsp?globalObjectId=7727-10991-10997 [motorola.com]
Sounds like a good use for a terabyte of RAM to me.
Disclosure: I currently work for Motorola, but I don't speak for them, and don't have any involvement with this product beyond salivating over it when it was announced that we were buying BroadBus.
We'll be there soon enough. (Score:3, Interesting)
take it to the next step... (Score:5, Interesting)
Next step beyond that: stop using a filesystem at runtime. Just assume your data can all fit in memory (why not, if you have a terabyte of it?) This simplifies the code and prevents a lot of duplication (why copy from RAM to RAM, just to make the distinction that one part of RAM is a filesystem and another part is the working copy?) But you will need a simple way to serialize the data to disk in case of power-down, and a simple way to restore it. This does not need to be a multi-threaded, online operation: when the system is going down you can cease all operations and just concentrate on doing the archival.
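A minimal sketch of that shutdown-time archival (the names are made up; real code would want error recovery and a checksum):

#include <stdio.h>

/* dump the whole in-memory state in one sequential pass at shutdown */
static int archive(const char *path, const void *arena, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(arena, 1, len, f);
    fclose(f);
    return n == len ? 0 : -1;
}

/* read it straight back at boot; no filesystem access needed afterward */
static int restore(const char *path, void *arena, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(arena, 1, len, f);
    fclose(f);
    return n == len ? 0 : -1;
}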
This assumption changes software design pretty fundamentally. Relational databases for example have historically been all about leaving the data on the disk and yet still fetching query results efficiently, with as little RAM as necessary.
Next step beyond that: system RAM will become non-volatile, and the disk can go away. The serialization code is now used only for making backups across the network.
Now think about how that could obsolete the old Unix paradigm that everything is a file.
Windows 3.1 can't even address that much memory (Score:2)
Geez. Why would I ever need it!?!
If you ever want a fast OS, run Windows 3.1 on a 300 MHz P2 with 64 MB of RAM. Blazing fast.
Let's get to 128 GB of RAM before we start pimping 1 TB.
Am I alone in thinking? (Score:2)
Re-inventing the disk cache wheel (Score:4, Interesting)
Stop thinking in terms of caching? (Score:2)
If I'm reading the specs right, you can now get parts for a PC with 12GB of RAM (mixing DDR2 and DDR3) from NewEgg for something on the order of $1000. While I wouldn't sugge
Yes I could Use it. (Score:2)
cachefs (Score:3, Informative)
Another historically interesting RAM file system was the Amiga Recoverable RAM Disk. You could even boot off it.
Floating point voxel octree Google Earth (Score:4, Interesting)
Speed vs tmpfs? (Score:5, Interesting)
How it seems to work:
Actual "ramdisk" -- that is, like /dev/rd -- that is, appears as a block device. You can run whatever filesystem you want on it, but it's still serializing and writing out to... well, RAM, in this case. No sane way for the kernel to free space on that "disk" that's not actually used.
How I wish it worked:
No Linux that I know of has used an actual ramdisk in forever. Instead, we use tmpfs -- a filesystem which actually grows or shrinks to our needs, up to an optional configurable maximum size. It'll use swap if available/needed. It's basically a RAM filesystem, instead of a RAM disk.
Even initrds are dead now -- we use initramfs. Basically, instead of the kernel booting and reading a ramdisk image directly to /dev/rd0, it instead boots and unpacks a cpio archive (like a tarball, but different/better/worse) into a tmpfs filesystem, and uses that.
So, how I would like this to work is, use a tmpfs filesystem -- as I suspect it will be faster, and in any case simpler, than a ramdisk -- and back it to a real filesystem on-disk. The only challenge here is that it's not as deterministic -- it would be more like a cp than a dd.
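Roughly, from C (the mount point and size cap are hypothetical; needs root and an existing mount point):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* tmpfs grows and shrinks on demand, up to the size= cap;
       a userspace daemon would then mirror its contents to the real disk */
    if (mount("tmpfs", "/mnt/fastdisk", "tmpfs", 0, "size=512g") != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}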
An even better (crazier) idea:
Use a filesystem like XFS or Reiser4 -- something which delays allocation until a flush. In either case, it would take a bit of tweaking -- you want to make sure no writes, or fsyncs, block while writing to disk, so long as the power is on -- but you'll hopefully already be caching an obscene amount anyway, so reads will be fast.
In this case, forcing everything out to disk could be as simple as "mount / -o remount,sync" -- or something similar -- forcing an immediate sync, and all future writes to be synchronous.
Conclusion:
Either of the two ideas I suggested should work, and could perform better than a traditional ramdisk. If it is, in fact, a simple disk-backed ramdisk (not ram filesystem), then it's both not as flexible (what if your app suddenly wants 50 gigs of RAM in application space?) and a bit of a hack -- probably a hack around traditional disk-backed filesystems not being able to take advantage of so much RAM by themselves.
In fact, glancing back at TFA, it seems there are some inherent reliability concerns, too:
Now, true, this should never happen, but in the event it does, the inherent problem here is that the ramdisk doesn't know anything about the filesystem, and so it doesn't know in what order it should be writing stuff to disk. Ext3 journaling makes NO sense for a ramdisk when the ramdisk itself knows nothing about the journal -- the journal is just going to slow down the RAM-based operation. Compare this to a sync call to XFS -- individual files might be corrupted, but all the writes will be journaled in some way, so at least the filesystem structure will be intact.
This gets even better with something like Reiser4's (vaporware) transaction API. If the application can define a transaction at the filesystem level, then this consistent-dump-to-disk will happen at the application level, too. Which means that while it would certainly suck to have a UPS fail, it wouldn't be much worse than the same happening to a non-ramdisk device, at least as far as consistency goes. (Some data will be lost, no way around that, but at least this way, some data will be fine.)
memory test (Score:3, Funny)
You'd better skip the memory test.
Virtual Machines (Score:3, Insightful)
Access (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
Then a power outage wouldn't be an issue. Power comes up, machine PXE boots off a machine in a neighboring town, state, country, whatever.
I know--not really feasible, but you'd be the king of basement dwellers if you could pull it off...
Re: (Score:2, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
I don't know about AIX and Linux, but I don't believe i5/OS can actually access the entire 2TB max of the i595.
Assuming that memory limitations follow the same limitations as processors, a single partition would be able to access at least 1TB. Right now, the 595 will go up to 64-way, but i5/OS partitions have a limit of 32 processors. I'm assuming memory would be similarly limited. Still, (2) 32-way 1TB machines would be
Re: (Score:2)