Linux Breaks 100 Petabyte Ceiling
*no comment* writes: "Linux has broken through the 100 petabyte ceiling, doing it at 144 petabytes." And this is even more impressive in pebibytes, too.
watchit (Score:2, Insightful)
well i for one am scared by the fact that one day soon 144 petabyte files will seem small
- Lord of the Rings is boring. There is a distinct lack of giant robots in it. Good movies have giant robots, bad movies don't. -james
OK this is great... (Score:5, Insightful)
What I am really wondering is: is there at the current moment ANY company/application/whatever that requires this amount of storage? I thought that even a large bank could manage with a few TBs.
Not intended as a flame, just interested.
but still, this is a Good Thing (r)
144 PB, not really (Score:5, Insightful)
The IDE driver can support 48-bit addressing. With 2^48 sectors of 512 bytes, you get 144 PB. But there are a LOT of other barriers to huge filesystems or files.
For instance, the Linux SCSI driver has always supported 32-bit addressing, good enough for 2 terabytes on a single drive. But until recently, you couldn't have a file larger than 2 gigabytes (1024x smaller) in Linux. I think the ext2 filesystem still has a limit of 4 TB for a single partition.
So while the IDE driver may be able to deal with a hard drive 144 PB in size, you would still have to chop it into 4 TB partitions.
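The arithmetic behind the numbers in this comment is easy to check. A minimal sketch, assuming the traditional 512-byte sector size throughout:

```python
# Back-of-the-envelope check of the limits discussed above.
SECTOR = 512  # bytes per sector (traditional disk sector size)

lba48 = 2**48 * SECTOR  # 48-bit LBA, the new IDE driver limit
lba32 = 2**32 * SECTOR  # 32-bit addressing, as in the SCSI driver example

print(lba48)            # 144115188075855872 bytes
print(lba48 // 10**15)  # 144 -- i.e. ~144 petabytes (decimal)
print(lba32 // 2**40)   # 2 -- i.e. 2 terabytes (binary)
print(lba32 // 2**31)   # 1024 -- factor vs. the old 2 GB file-size limit
```

Note the decimal/binary mismatch: 2^48 sectors is 144 PB counting in powers of ten, but only 128 PiB counting in powers of two.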
Finally something is done right.... (Score:1, Insightful)
Btw, don't mix up two distinct things: 1) being able to address 2^48 sectors on an IDE disk, and 2) having a filesystem that can handle files as large as 2^48 sectors.
Re:Ok... (Score:1, Insightful)
I would still like to see where the figures come from, though.
Re:Somebody will probably correct me ... (Score:5, Insightful)
The BABAR experiment [stanford.edu] at SLAC [stanford.edu] is using Objectivity for data storage. Unfortunately, I cannot find a publicly available web page about computing at BABAR right now.
The amount of data BABAR produces is on the order of tens of terabytes per year (maybe a hundred), and even storing this amount in Objectivity is not without problems. The LHC [web.cern.ch], which is currently under construction, will generate much more data than BABAR, but even if they reach 10 petabytes per year one day, I very much doubt that they will be able to store it in Objectivity.
Re:Ok... (Score:2, Insightful)
Current codecs already do a pretty decent job of compressing smaller-resolution streams. However, what if I want my linux box feeding my HDTV projector at high resolution? This might be one more step toward my vision of the ultimate entertainment center.
I wouldn't say they are the first/only to do this. (Score:2, Insightful)
Hey
I also think you can use Vinum to mount such a petabyte-sized file system fairly easily.
Really, FreeBSD doesn't get enough credit for the work that's been done. I know linux has a lot of good marketing for technical features, but you'd have to believe everything you read to fall for it.
1.44 petabytes is half a lifetime (Score:5, Insightful)
However, there might one day be information processing systems to which 1.44 petabytes is a small amount of information. In a sense, these systems will have a richer experience of the world than human beings. I wonder if human consciousness would seem marvellous or valuable to such a machine.