How To Use a Terabyte of RAM 424
Spuddly writes with links to Daniel Phillips and his work on the Ramback patch, and an analysis of it by Jonathan Corbet up on LWN. The experimental new design for Linux's virtual memory system would turn a large amount of system RAM into a fast RAM disk with automatic sync to magnetic media. We haven't yet reached the point where systems, even high-end boxes, come with a terabyte of installed memory, but perhaps it's not too soon to start thinking about how to handle that much.
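For readers who want the gist of the design, here is a toy write-back sketch in Python. The real Ramback lives at the block-device layer inside the kernel; this class and its names are purely illustrative.

    # Toy sketch of the write-back idea: serve all I/O from RAM, trickle
    # dirty blocks out to the backing store in the background. The real
    # patch works at the block layer in the kernel; this is illustrative.
    import os
    import threading
    import time

    BLOCK_SIZE = 4096

    class RamBackedStore:
        def __init__(self, backing_path, num_blocks):
            self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]
            self.dirty = set()
            self.lock = threading.Lock()
            self.backing = open(backing_path, "w+b")
            self.backing.truncate(num_blocks * BLOCK_SIZE)
            threading.Thread(target=self._writeback, daemon=True).start()

        def read(self, block_no):
            # Reads never touch the disk: RAM is authoritative.
            with self.lock:
                return self.blocks[block_no]

        def write(self, block_no, data):
            # Writes complete at RAM speed; the disk catches up later.
            with self.lock:
                self.blocks[block_no] = data
                self.dirty.add(block_no)

        def _writeback(self):
            # Background sync of dirty blocks out to magnetic media.
            while True:
                time.sleep(1)
                with self.lock:
                    pending = [(n, self.blocks[n]) for n in self.dirty]
                    self.dirty.clear()
                for n, data in pending:
                    self.backing.seek(n * BLOCK_SIZE)
                    self.backing.write(data)
                if pending:
                    self.backing.flush()
                    os.fsync(self.backing.fileno())

Reads and writes complete at memory speed; the gap between a write and the next flush is exactly the data-loss window debated in the comments below.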
You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re:Memory usage (Score:3, Insightful)
With that much RAM... (Score:3, Insightful)
It's a lot of RAM, and at today's computational speeds it's not likely to be useful for anything beyond a RAM drive.
Is it too soon to think about how to use that much RAM? NO! A lack of forward thinking is what caused many of the artificial limitations we've had to work around in the past. We're still dealing with limitations in file systems and the like. I've got an old Macintosh that can't access drives larger than 128GB or so because its firmware can't handle them; I had to get another PCI controller installed to handle larger drives.
What it is time to think about is how to code without such limitations built in. That would let things grow more easily and naturally.
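To make the point concrete, here's a tiny Python sketch of a hypothetical on-disk header (the format and names are made up for illustration): sizing fields for the future costs almost nothing today.

    # A 32-bit length field caps you at 4 GiB; a 64-bit one is good
    # for 16 EiB. "DEMO" and the header layout are purely hypothetical.
    import struct

    HEADER_V1 = struct.Struct("<4sI")   # magic + 32-bit length: limited
    HEADER_V2 = struct.Struct("<4sQ")   # magic + 64-bit length: room to grow

    five_tib = 5 * 2**40
    try:
        HEADER_V1.pack(b"DEMO", five_tib)   # overflows the 32-bit field
    except struct.error as e:
        print("v1 header can't express 5 TiB:", e)

    print(HEADER_V2.pack(b"DEMO", five_tib).hex())  # fine with 64 bits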
Re:You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re:How ? (Score:1, Insightful)
Re:You only need 16GB of RAM for this to be useful (Score:5, Insightful)
Re:Memory usage (Score:4, Insightful)
Re:1 TB of memory... (Score:5, Insightful)
As to the problem of how to use 1 TB of RAM, spending any time at all thinking about this is foolish and wasteful. Of course, I remember the days when we rated our computers by how many kilobytes of memory we had, and plenty of readers here will remember having 20 to 40 meg hard disks in PCs with far less than 1 meg of physical RAM. In those days (and I'll avoid the famous Bill Gates quote on the subject), how would you have spent your time deciding what to do with the memory if you had a computer with 1 gig, 2 gig, or even 4 gig of memory? You may have come up with all sorts of amazing ideas. But none of them would have done you any good, because the developers (mostly Microsoft, but Linux is far from lean and mean anymore either) had already decided what to do with it: waste it and leave you wanting more. And one of your ideas for a 4 gig system probably wouldn't have been to just pretend that most of the last gig of memory wasn't there and ignore it!
So why even have a post about what to do with a terabyte of memory? The solution is simple: install Windows 9 and try to quickly order more memory online before the memory-hungry service pack comes out, forces its install on you, and your TB isn't enough.
Re:1 TB of memory... (Score:2, Insightful)
Eighty Megamebibytes And Constantly Swapping?
Besides, isn't it obvious how one should use a terabyte of RAM? Use it to upgrade your PC to run Windows Vista MegaUltimate, of course. :-D
Or, to put it another way, the question of what to do with the extra RAM is a non-issue. Install software. Software developers will find a way to waste as much RAM as you can put in, and performance will still be slow. It's just the nature of progress....
*sigh*
Re:You only need 16GB of RAM for this to be useful (Score:3, Insightful)
Something similar could be done with one supervisor CPU handling video mapping for multiple slave CPUs, as well as managing a RAID-5 (or better) disk system that is partitioned and mapped to RAM-disk mirrors/buffers. Most home users don't have threading issues; they see I/O bottlenecks. When your BitTorrent client is downloading and buffering a file while you are trying to watch a DVD, it's difficult to get a full-system ClamAV scan done in the background. With multiple systems, this would be possible and easy. The supervisor system could give you picture-in-picture or tiled views of the video displays of all slave systems, so that while you're watching the DVD, a pop-up window from your system scan shows up in a corner somewhere.
Sharing hardware among processes works, but if you really want speed, you need each process to have the full attention of a CPU. More RAM and specialized hardware would allow that for multiple processes. Tasks could be shared out by the supervisor to any non-busy processors on the system, so that initializing a virus scan via the supervisor pushes the process off to the most available slave CPU.
Well, that is the thought. I'm certain that many will tell me why it won't work. I just think that if you are going to make specialized hardware, you should do more than add a bit of extra RAM. Go full-on with mini clusters or supervised slave systems; a rough sketch of the dispatch idea follows.
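Here's a minimal Python sketch of that dispatch idea, with ordinary OS processes standing in for slave systems. The worker names and the naive load counter are invented for illustration; a real supervisor would decay the counts as tasks finish.

    # Supervisor hands each new task to the least-busy "slave".
    import multiprocessing as mp

    def worker(name, queue):
        # Each worker drains its queue; None is the shutdown signal.
        for task in iter(queue.get, None):
            print(f"{name} handling {task}")

    class Supervisor:
        def __init__(self, num_workers):
            self.queues, self.loads = [], []
            for i in range(num_workers):
                q = mp.Queue()
                mp.Process(target=worker, args=(f"slave-{i}", q)).start()
                self.queues.append(q)
                self.loads.append(0)

        def dispatch(self, task):
            # Pick the slave with the fewest dispatched tasks, like
            # pushing a virus scan off to the most available CPU.
            i = min(range(len(self.loads)), key=self.loads.__getitem__)
            self.loads[i] += 1
            self.queues[i].put(task)

        def shutdown(self):
            for q in self.queues:
                q.put(None)

    if __name__ == "__main__":
        sup = Supervisor(4)
        for job in ["dvd-decode", "bittorrent-hash", "clamav-scan", "backup"]:
            sup.dispatch(job)
        sup.shutdown()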
I currently sit in front of four screens at work. I'd like them all to be in the same box if possible, thanks. Running VMs might be an idea, but I like how they work separately too much. And yes, I would add one NIC for each slave system; they're cheap.
Once you see the size of some mini-ATX boards, it's not inconceivable that you could put five CPU systems in one tower case and have a 1TB RAID-5 array in there as well. You just need a bit of specialized hardware, and some drivers to make it all look and feel real to the slave systems. You could support built-in video and NICs on the plug-in CPU cards if you wanted. Treat it like a special motherboard with 4+ slots for system-on-module expansion cards. The variants of the PCI standard would make it fairly easy... I think.
Mobile much? (Score:5, Insightful)
Re:Power Failure (Score:4, Insightful)
Then it goes on to the other questions, like what happens if the hardware or kernel crashes, and answers them with "use things that don't crash."
Agh. I mean, that's really, really bad engineering. You don't engineer things with the assumption that everything will work. You engineer them to fail gracefully when everything that can go wrong does go wrong. And preferably with margin.
If the system requirements for this are a UPS, crash-proof hardware, and a completely bug-free OS, well, I'm sorry, but there's no system in the world capable of fulfilling them.
Still, I'm sure there are cases where it makes sense: as long as speed matters more than data integrity, this sounds very useful.
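To make that tradeoff concrete, here's a minimal Python sketch (the function names are ours, not the patch's): the only difference between the two is whether you wait for stable storage before declaring success.

    import os

    def fast_write(path, data):
        # Returns as soon as the kernel has the data in RAM (or a
        # Ramback-style mirror). Fast, but a crash or power failure
        # before writeback loses it.
        with open(path, "wb") as f:
            f.write(data)

    def durable_write(path, data):
        # Doesn't return until the data is on stable storage. Slower,
        # but survives everything short of the disk itself failing.
        with open(path, "wb") as f:
            f.write(data)
            f.flush()                # flush the userspace buffer
            os.fsync(f.fileno())     # force the kernel to hit the platter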
Re:1 TB of memory... (Score:5, Insightful)
Under the covers the System/38 was a CISC box, the AS/400 could be CISC or RISC, and the iSeries is all RISC. From an app-dev point of view, the same compiled object code could run on all three. Stop and think about that for a second.
Now, the System/38 had a very advanced constraint-based security system. For example, you could use an object that you could not see. In general it allowed for very fine-grained control of security. Of course, this has been improved all the way through to the iSeries.
Also, this machine had a single address space for all storage. An app didn't need to worry about memory size; the machine automatically used RAM as disk and disk as RAM.
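You can get a faint userland taste of that programming model with mmap, where a file is read and written as if it were memory and the kernel shuttles pages for you. The sketch below is Python and the file name is hypothetical; it's nothing like the real single-level store, which covered all storage in hardware and OS, but it shows the idea.

    import mmap

    # Create a small backing file for the demo (hypothetical name).
    with open("dataset.bin", "wb") as f:
        f.truncate(8192)

    with open("dataset.bin", "r+b") as f:
        mem = mmap.mmap(f.fileno(), 0)    # map the whole file
        mem[0:4] = b"\xde\xad\xbe\xef"    # looks like a memory write...
        value = bytes(mem[4096:4100])     # ...and a memory read, but the
        mem.flush()                       # kernel pages to/from disk
        mem.close()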
Of course, this machine has had a life of 30+ years, and most OS designers have zero idea of just how revolutionary it was; it's the same sort of thing as the MCP on a Burroughs B5000. People who do not know history are doomed to repeat it over and over again.
Re:1 TB of memory... (Score:4, Insightful)
Give it up; this is a religious war. Those of us who prefer vi(m) consider it a more focused editor: we neither need nor want the extensibility you crave. Those of you who prefer emacs consider the extensibility vital to your work, and can't imagine how anyone could live without it.
We have been debating this forever, and will continue to do so, as long as there are vi(m) and emacs users out there. There is no "right" answer, so just enjoy the jokes (they are normally harmless, and often good for at least a smile).
Virtual Machines (Score:3, Insightful)
Re:Some of us do have access to 1TB or more of RAM (Score:3, Insightful)
Access (Score:3, Insightful)