Linux Breaks 100 Petabyte Ceiling
*no comment* writes: "Linux has broken through the 100 petabyte ceiling, and it's doing it at 144 petabytes." And this is even more impressive in pebibytes, too.
"What if" is a trademark of Hewlett Packard, so stop using it in your sentences without permission, or risk being sued.
512? That can't be right. (Score:4, Funny)
Hm, that can't be right, I swear I heard it was supposed to be two raised to the power of 50, multiplied by 128.. hm.
Re:512? That can't be right. (Funny!) (Score:1)
I sacrifice my karma to you.
Tom.
Re:512? That can't be right. (Score:1)
Re:512? That can't be right. (Score:2, Informative)
2^48 blocks * 512 bytes/block = 144115188075855872 bytes
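If you want to check it yourself, bc reproduces the figure (assuming the usual 512-byte sectors):
  echo "2^48 * 512" | bc          # 144115188075855872 bytes, ~144 PB decimal
  echo "2^48 * 512 / 2^50" | bc   # 128 -- i.e. exactly 128 pebibytes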
Nice! (Score:1)
Re:Nice! (Score:2)
FreeBSD had 48-bit IDE addressing support in the CVS repository on Oct 6! A full month before these patches to Linux were released. So far no released kernel supports this.
Re:Nice! (Score:2)
Finally! (Score:1)
Just wondering... (Score:1)
"We almost forgot to mention this, but Linux recently became the first desktop OS to support enormously large file sizes."
So what about non-desktop OS then?
One Long Video (Score:4, Funny)
Re:One Long Video (Score:5, Funny)
Finally they can release the uncut version of '2001: A Space Odyssey'.
Re:One Long Video (Score:3, Funny)
Finally! (Score:2)
Now Linux can really own as a legitimate desktop OS!
Seriously though...Isn't there a better place for someone who has the time to contribute? I'd rather see a better desktop environment, a better E-mail package, etc...
(Flame away, all of you running on 200Mhz machines with a four gig drive who will post about how awesome this new support is!)
Somewhat misleading (Score:5, Interesting)
So does Linux support 18 PB files? Kind of -- pieces of it do, but the system as a whole does not.
Re:Somewhat misleading (Score:3, Interesting)
glibc limits the file size to 64 bits (9 million terabytes), so unless the POSIX LFS [www.suse.de] API changes, that is the current maximum file size regardless of the file system (on x86, that is).
A 9 million terabyte file size limit isn't a large problem for me ....
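The '9 million terabytes' figure is just a signed 64-bit offset; a bc one-liner shows where it comes from:
  echo "2^63" | bc   # 9223372036854775808 bytes, roughly 9.2 million terabytes (8 EiB)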
Re:Somewhat misleading (Score:2)
I could be wrong, but I think the article said FreeBSD broke the petabyte limit by going to 18 petabytes on October 6. Just a month ago. A few more addressable bits, and Linux can now do 144. Still way short of the, what, 8 exabytes NTFS can handle?
I also think an article about anything supporting a petabyte FILE SIZE, let alone partition size, does not warrant over 200 comments! At least the pine/mutt and vi/emacs/pico wars are discussing actually USING something!
I know I'll see petabyte arrays during my career, but arrays of 1 petabyte drives? I doubt it. Imagine the time to rebuild a DDD 1 petabyte drive. Discussing the writing of a 144 petabyte file in 2001 is the worst pissing contest I've seen to date.
We *definitely* have more important stuff to address first. And I've been on the other side of that argument before.
Great! (Score:1)
Or in other words... (Score:4, Interesting)
97.7 billion 1.44 meg floppy disks.
1.44 million 100 gig hard drives
or
35 trillion 4 KB RAM chips (remember those?)
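If anyone wants to reproduce these with bc, here's one way, taking 144 PB as 2^48 * 512 bytes, a 1.44 meg floppy as 1,474,560 bytes, a 100 gig drive as 10^11 bytes, and a "4K" chip as 4,096 bytes (my assumptions, so the figures are only approximate):
  echo "2^48 * 512 / 1474560" | bc   # ~97.7 billion floppies
  echo "2^48 * 512 / 10^11" | bc     # ~1.44 million hard drives
  echo "2^48 * 512 / 4096" | bc      # ~35 trillion RAM chips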
Re:Or in other words... (Score:2)
Re:Or in other words... (Score:1)
given enough monkeys and typewriters...
hehe
meneer de koekepeer
Re:Or in other words... (Score:2)
Think we could be in for some serious I/O bandwidth problems here? I guess this is good for upward expandability, but it's not worth much more than bragging rights in practice; unless you are porting Carnivore to Linux.
watchit (Score:2, Insightful)
Well, I for one am scared by the fact that one day soon 144 petabyte files will seem small.
- Lord of the Rings is boring. There is a distinct lack of giant robots in it. Good movies have giant robots, bad movies don't. -james
Re:watchit (Score:2, Funny)
122.5 PB of 160 EB downloaded @ 7 PB/Sec
6 hours, 29 minutes and 48 seconds remaining
Sigh...
1.44 petabytes is half a lifetime (Score:5, Insightful)
However, there might one day be information processing systems to which 1.44 petabytes is a small amount of information. In a sense, these systems will have a richer experience of the world than human beings. I wonder if human consciousness would seem marvellous or valuable to such a machine.
Re:1.44 petabytes is half a lifetime (Score:2)
Uh huh... and AI will be with us any day now (Score:2)
An entire day of a human life could be recorded in perfect detail (with no compression) on a 120 GB disk.
Let me guess - you're using roughly the bitrate of DVD, extrapolating over 24 hours, and fudging the numbers. Well, either you or whoever came up with this figure.
Let's look at this from a cocktail napkin perspective. At the CURRENT resolution and audio sampling rate etc. for DVD, 24 hours is about 50GB of storage. Only problem is, this assumes that DVD catches every single bit of visual/audio information that is out there. Well, just ask your dog how well 44 kHz records high pitched noises. And then remember that not everyone has as poor eyesight/hearing as the masses. So even fudging this number by a factor of 2 or 3 starts to hit and overtake 120GB.
Oh wait, this assumes that all we care about is what the eyes see and the ears hear. Too bad that things are happening all around you. Also too bad that you have 3 other external senses, plus several other internal ones (balance comes to mind) that are continually inputting data into your brain.
Estimates like this really make me shake my head, as they assume artificial limitations that just aren't there in the real, ANALOG world.
Re:Uh huh... and AI will be with us any day now (Score:2)
A JND (just noticeable difference) is the change in the level of a stimulus that is just large enough to be noticeable. For example, the JND in the brightness of a dim light is extremely small while the JND in the brightness of a bright light is quite large. It is this phenomenon that allows compression techniques like MP3 to discard information from a signal without audibly changing the signal - loud sounds can be stored with less precision than quiet sounds, quiet sounds that are masked by loud sounds of the same pitch can be discarded, etc. Of course MP3 compression doesn't perfectly match your own psychoacoustic compression, so sometimes the difference is audible. But in theory it is possible to remove information from an audio signal without creating a noticeable difference (e.g. by reducing the sampling rate from 500 kHz to 250 kHz).
A Subjective Time Quantum is a period of time about one sixteenth of a second long. Two stimuli that occur within the same STQ are experienced simultaneously - the subject cannot tell which occurred first. If the time separation is greater than 1/16 s, the subject can detect the order in which the events occurred. This phenomenon is related to 'binding', in which separate stimuli are identified as aspects of the same event. To test it for yourself, try watching a game of football from the other side of the playing field. Because light travels faster than sound, you will see the ball being kicked before you hear the thump. If you are less than 1/16 s away at the speed of sound (about 20 m), you won't notice the delay. But if you are further away (and it's a quiet day) you'll notice that the sight and sound of the ball being kicked become two separate experiences. You still know at a logical level that they are aspects of the same event, but at the level of immediate experience it's obvious that one occurred before the other.
Just Noticeable Differences and Subjective Time Quanta mean that the amount of information received by our senses is smaller than the amount of information that could potentially be received. (Common sense tells us the same thing - our senses cannot be 100% accurate, they are subject to noise and distortion like any other physical device, and there's no point in recording below the noise floor.) In other words, although the world is analog our experience of it is quantised. (After all, sensory information is carried by nerve impulses with invariant magnitude, similar to digital signals.) Using experimentally-derived JNDs for all the senses (not just sight and sound like a DVD), Norretranders calculated that the bandwidth of human experience was 11 million bits per second. That's 1,375,000 bytes per second or 118,800,000,000 bytes (roughly 120 GB) per day.
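The arithmetic behind that last figure is a one-liner (11 million bits per second, 8 bits to the byte, 86,400 seconds in a day):
  echo "11 * 10^6 / 8 * 60 * 60 * 24" | bc   # 118800000000 bytes/day, roughly 120 GB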
Can we see a SLASHDOT version of linux (Score:1)
You guys already know how so why not share?
Performance (Score:1)
Surely the number of bits needed to address this is going to increase, and more data spent on addressing means less data for good ol' file transfer.
Is this going to be a noticeable difference, or am I just being a bit whore?
XFS (Score:5, Informative)
XFS is a full 64-bit filesystem, and thus, as a filesystem, is capable of handling files as large as a million terabytes:
2^63 = 9 x 10^18 = 9 exabytes
In future, as the filesystem size limitations of Linux are eliminated, XFS will scale to the largest filesystems.
Allright... I'll bite. (Score:4, Funny)
"144 PB should be enough for anybody."
- Bowie J. Poag, November 7, 2001
Re:Allright... I'll bite. (Score:2)
>>- Bowie J. Poag, November 7, 2001
>*sigh*, how easily people forget the habits of geeks and pr0n!
Forget pr0n. Given the increasing size of successive releases, wouldn't it be good if something similar were implemented by Microsoft?
working with large files (Score:1)
that said, this is when i turn this into a mini ask-slashdot:
while i have no problems writing/reading large files (i.e., >2GB), most regular linux software can't deal with them
for instance, i can't upload them with ftp. i'm having this problem with a mysqldump file that's part of a system backup.
right now it's not a real problem since i can gzip the file and the size goes down to approx 250MB, but how do you guys handle large files in linux anyway?
Re:working with large files (Score:4, Informative)
cat file | ssh user@host "cat > file"
More recent builds of SCP will also support 2GB+, so:
scp file user@host:/path
or
scp file user@host:/path/file
will both work.
In fact, probably the best way for syncing two directories is rsync. Rsync's major weakness is that it's *tremendously* slow for large numbers of files, and I believe it has to read every byte of a large file before it can incrementally transfer it (so you're looking at 2GB+ of reading before transferring). The following will do rsync over ssh:
rsync -e ssh file user@host:/path/file
rsync -e ssh -r path user@host:/path
For incremental log transfers, I actually had a system built that would ssh into the remote side, determine the file size of the remote file, and then tail from the total file size minus the size of the remote file. It was a bit messy, but it was incredibly reliable. It did have problems when the remote logs got cycled, but it wasn't too ugly to detect that the remote file size was smaller than the local file size. Just a shell script, after all.
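Roughly, the idea was something like the sketch below (hypothetical paths and host, and it assumes key-based ssh and a tail that accepts the '-c +N' byte-offset form):
  # push only the new bytes of a local log to a remote copy (hypothetical names)
  LOG=/var/log/myapp.log
  RHOST=user@host
  RFILE=/backup/myapp.log
  rsize=$(ssh "$RHOST" "wc -c < $RFILE" 2>/dev/null || echo 0)
  lsize=$(wc -c < "$LOG")
  if [ "$rsize" -gt "$lsize" ]; then
      # the local log was rotated, so resend it from the beginning
      ssh "$RHOST" "cat > $RFILE" < "$LOG"
  else
      # send only the bytes the remote copy doesn't have yet
      tail -c +$((rsize + 1)) "$LOG" | ssh "$RHOST" "cat >> $RFILE"
  fi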
SFTP should, as far as I know, handle 2GB+ without a hitch.
Both SCP and SSH of course have compression support via the -C flag; alternatively you can pipe the data through gzip over SSH.
Email me for further info; there are some SSH docs on my home page as well. Good luck.
--Dan
www.doxpara.com
Article got it wrong on BeOS - 18 EXAbytes! (Score:5, Informative)
Just wanted to set the record straight.
OK this is great... (Score:5, Insightful)
What I am really wondering is: is there at the current moment ANY company/application/whatever that requires this amount of storage? I thought that even a large bank could manage with a few TB.
Not intended as a flame, just interested
but still, this is a Good Thing (r)
Somebody will probably correct me ... (Score:3, Interesting)
Of course, this was probably salescrap. Does anyone know the truth on this?
Re:Somebody will probably correct me ... (Score:5, Insightful)
The BABAR experiment [stanford.edu] at SLAC [stanford.edu] is using Objectivity for data storage. Unfortunately, I cannot find a publicly available web page about computing at BABAR right now.
The amount of data BABAR produces is on the order of tens of terabytes per year (maybe a hundred), and even storing this amount in Objectivity is not without problems. The LHC [web.cern.ch], which is currently under construction, will generate much more data than BABAR, but even if they reach 10 petabytes per year one day, I very much doubt that they will be able to store this in Objectivity.
Re:OK this is great... (Score:2, Informative)
Re:OK this is great... (Score:2, Informative)
Re:OK this is great... (Score:2)
Re:search internet using grep (Score:2)
Random statistics.... (Score:4, Funny)
So you whip out your two hundred million CD recordables and start inserting them. Let's say you get 1 frisbee for each 25 700 MB CDs.
This leaves you with eight million frisbees.
That's a stack 13 kilometres high.
So who needs this on a desktop OS again?
Re:Random statistics.... (Score:5, Funny)
Silly Moo!
You back it up to your *other* 144 petabyte drive!
Re:Random statistics.... (Score:4, Informative)
Re:Random statistics.... (Score:2)
That's why you use your Standard Parallel Interlink Fiber (SPIF). You know, Interplanetary Federation standard IFP-340-A or B if you have the new IFP-560 standard chipset. Backup should take about 10 seconds for the former, 7 seconds for the latter.
Re:Random statistics.... (Score:2)
You need three 144 petabyte drives to do HD backup - backup A to B and then A to C alternately. Verify the backup and you should always have at least one consistent file system.
Hey, I just said all this in a message to an 'Ask Slashdot'
Baz
Re:Random statistics.... (Score:2)
Re:Random statistics.... (Score:2)
If you were unfortunate enough to be still using a 300 bps modem, this would take 152,227,742 years (including start and stop bits)
Your MP3 collection would need to have 183,000 years of continuous music to fill 144 petabytes.
The bandwidth of a single 144 petabyte file being carried across the Pacific in a 747 is an impressive 3,336,000,000,000 bytes per second (assuming a 12-hour flight time).
And the RIAA probably wants to control this. Muhahaha.
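The first and last of those figures fall straight out of bc (assuming 10 bits per byte on the modem link and a 12-hour flight):
  echo "2^48 * 512 * 10 / 300 / (60*60*24*365)" | bc   # ~152 million years at 300 bps
  echo "2^48 * 512 / (12*60*60)" | bc                  # ~3.3 trillion bytes/sec for the 747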
FreeBSD had it first. (Score:1, Informative)
http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/d
144 PB, not really (Score:5, Insightful)
The IDE driver can support 48-bit addressing. With 2^48 sectors of 512 bytes, you get 144 PB. But there are a LOT of other barriers to huge filesystems or files.
For instance, the Linux SCSI driver has always supported 32-bit addressing, good enough for 2 terabytes on a single drive. But until recently, you couldn't have a file larger than 2 gigabytes (1024x smaller) in Linux. I think that the ext2 filesystem still has a limit of 4 TB for a single partition.
So while the IDE driver may be able to deal with a hard drive 144 PB in size, you would still have to chop it into 4 TB partitions.
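To put a number on that (assuming the 4 TB ext2 limit holds), the partition count works out to:
  echo "2^48 * 512 / (4 * 2^40)" | bc   # 32768 partitions of 4 TB each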
Re:144 PB, not really (Score:2)
You could, provided you were using a 64-bit architecture. Linux isn't just x86/other 32-bit architectures.
Just to put this into perspective... (Score:2, Informative)
http://www.cacr.caltech.edu/~roy/dataquan/ [caltech.edu]
Uh, no? (Score:3, Informative)
I'm looking at the Linux XFS feature page [sgi.com], which states:
My understanding is that the 2TB limit per block device (including logical devices) is firm (regardless of the word size of your architecture), and unrelated to what Mr. Hedrick did. Am I wrong? Does this limit disappear if you build the kernel on a 64-bit architecture? And, on 32-bit architectures, there's no way to get the buffer cache to address more than 16TB.
144 or 128 petabytes? (Score:2, Interesting)
If 1kB = 1024 Bytes, then I've always assumed that 1MB = 1024kB (instead of 1000kB), 1GB = 1024MB, and so on.
Normally this doesn't make that much difference, but when you consider the cost of a 16 (144 minus 128) petabyte hard drive, the difference becomes more important.
Re:144 or 128 petabytes? (Score:2)
Re:144 or 128 petabytes? (Score:2)
Re:144 or 128 petabytes? (Score:2)
Very nice, but not really what I'd like to see... (Score:2, Interesting)
1st desktop OS? Well, not quite. (Score:5, Informative)
A slashdot story pointing out how without the FreeBSD ATA code, the Linux kernel would be 'lacking'
The FreeBSD press release announcing the code is stable [freebsd.org]
If The Reg actually researched the story, Andy would have noticed it is not a 'first' but more a 'dead heat' between the two leading software libre OSes. Instead, The Reg does more hyping of Linux.
Pebibytes? (Score:4, Informative)
Well, according to the IEC standard [nist.gov], one petabyte is 10^15 (or 1e+15) bytes, while one pebibyte is 2^50 (or 1.125899e+15) bytes.
So 144 petabytes is 1.44e+17 bytes or 127.89769 pebibytes. Can't say that's more impressive tho. :P
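The conversion itself is a one-liner:
  echo "scale=5; 144 * 10^15 / 2^50" | bc   # 127.89769 pebibytes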
Reality check... (Score:5, Informative)
I'm already fed up with the time it takes to back up large disks to tape. Tape drive transfer rate has not improved at the rate of disk capacity in the last few years and is becoming a bottleneck. It was unimportant when the backup time of a single disk was well below one hour (our Ultrium tapes give about 40 GB/hour).
Just figure that if you want to transfer 144 PB in about one day, you need a transfer rate on the order of 1 TB/s. Electronics is far from there, since that means about 10 terabits/second. Even fiber is not yet there. Barring a major revolution, magnetic media and heads can't be pushed that far; it is well beyond the foreseeable future.
Don't get me wrong, it is much better to have more address bits than needed, to avoid the painful limitations of 528 MB, 1024 cylinders, etc. But, as somebody who used disks over 1 GB on mainframes around 1984-1985, I easily saw all the limitations of the early IDE interfaces (with the hell of CHS addresses and their ridiculously low bit counts once you mixed the BIOS and interface limitations) and insisted on SCSI on my first computer (now CHS is history thanks to LBA, but the transition has sometimes been painful).
However, right now big data centers don't always use the biggest drives, because they can get more bandwidth by spreading the load across more drives (they are also slightly wary of the latest and greatest because reliability is very important). Backing up already starts to take too much time.
In short, the 48-bit block number is not a limit for the next 20 years or so. I may be wrong, but I'd bet it'll take at least 15 years, perhaps much more, because it depends on radically new technologies, and because the demand for bandwidth to match the increase in capacity will become more pressing. Increasing the bandwidth is much harder, since you'll likely run into noise problems, which are fundamental physical limitations.
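The order-of-magnitude figure above (1 TB/s for a one-day backup) is easy to check:
  echo "2^48 * 512 / (24*60*60)" | bc   # ~1.7 * 10^12 bytes/sec to move 144 PB in a day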
Re:Reality check... (Score:2)
Wavelength division multiplexing can give this rate already, but the erbium amplifiers used to boost the signal do not amplify across the whole usable spectrum, so this data rate would only be possible for sub-50 km distances.
Additionally, you'd need a whole rack of electronics to decode the demuxed streams.
Waiting for the obligitory... (sp?) (Score:3, Funny)
<Insert Poster's Name Here>
<Insert Sig Here>
Re:Waiting for the obligitory... (sp?) (Score:2)
It is a start (Score:5, Interesting)
The real advance here is that the disk drive weenies have at last realised that they need to come out with a real fix for the 'big drive' problem and not yet another temporary measure.
Despite the fact that hard drives have increased from 5 MB to 100 GB of storage over the past 20 years, the disk drive manufacturers have time after time proposed new interface standards that have been obsolete within a couple of years of their introduction.
Remember the 2 GB barrier? Today we are rapidly approaching the 128 GB barrier.
What annoys me is that the disk drive manufacturers seem to be unable to comprehend the idea of 'automatic configuration'. Why should I have to spend time telling my BIOS how many cylinders and tracks my drive has? I have a couple of older machines with somewhat wonky battery backup for the settings; every so often the damn things forget what size their boot disk is. Like just how many days would it take to define an interface that allowed the BIOS to query the drive about its own geometry?
Of course in many cases the figures you have to enter into the drive config are fiddled because the O/S has some constraint on the size of drives it handles.
We probably need a true 64 bit Linux before people start attaching Petabyte drives for real. For some reason file systems tend to be rife with silly limitations on file sizes etc.
Bit saving made a lot of sense when we had 5 MB hard drives and 100 KB floppy drives. It does not make a lot of sense to worry about a 32-bit or 64-bit file size field when we are storing 100 KB files.
If folk go about modifying Linux, please don't let them just deal with the drives of today. Insist on at least 64 bits for all file size and location pointers.
We are already at the point where terabyte storage systems are not unusual. Petabyte stores are not exactly commonplace, but there are several in existence. At any given time there are going to be applications that take a thousand-odd of the largest disks available in their day. Today that means people are using 100 TB stores; it won't be very long before 100 PB is reached.
Re:It is a start (Score:2)
Your experience of computing is obviously not great enough to make that type of attack.
I have six systems from various sources that are post 96 that require the BIOS to be programmed for the disk geometry. One of those systems has an Intel motherboard so it is hardly an obscure problem.
The BIOS does have an 'auto-config' setting. However the damn thing does not work. Instead of reading out one set of geometry settings and using it the BIOS allows cylinders to be traded for tracks and vice versa.
This is kind of a strange way of looking at the problem. It is not as if changing the config file changes the geometry of the disk!
What is really going on here is that there is a bizarre set of hacks where we tell the BIOS some lies about the disk geometry so that it can use a disk that is somewhat larger than the largest available when the machine was made.
My 1996 machine has a Providence motherboard which was designed for use in servers. The auto-config only works on a 3.5" disk smaller than about 20 GB. Above that point the number of cylinders goes above 65536 and some BIOS field overflows.
Now this may constitute 'auto-config' for geeks but it certainly does not in my book, it means that I have to spend time fixing machines that should not need fixing.
20 GB was larger than the disks that were common when the machine came out (just), but it was pretty obvious that this was a very short-term issue. I had a 6 GB disk in the machine when I bought it and had swapped that out for a 12 GB pretty soon after.
A large part of the problem is that the disk drive manufacturers used one kludge after another to extend the IDE spec for another 18 months or so. Instead of fixing the basic problem they did things like saying 'blocks are now 4 times the amount of data they were before'.
Screw what the BIOS thinks (Score:2)
Any "modern" OS doesn't use what the BIOS thinks anyway. Try it with Linux sometime. Stick a 60 gig disk into an old 486 that can't handle it, set it to none, boot up Linux and watch it tell you that there's a 60 gig disk there, and more importantly, watch it WORK in all respects. Watch it have full access to the whole thing. Be careful, if the BIOS is set to something other than NONE, it *can* lie to Linux when the kernel asks for the size of the drive. But I've done this with systems that have a 32 gig limit on disk size, and have it work just fine.
Windows can do this too, sometimes. Not exactly certain on the details there, but having done this myself with Linux, I know that much of it does work.
Re:Screw what the BIOS thinks (Score:2)
Secondly, if the mobo is capable of booting from CD, then it expects the CD drive to be set to NONE or AUTO anyway. Even when it's NONE, the CD will detect and boot. I know, I have mine setup exactly that way. It boots from CD just fine.
PXE has to have motherboard support anyway, for booting over ethernet. I don't see how that applies. It's a different boot method that doesn't need drive geometry anyway.
BTW, I have all my drives set to none. It boots from the hard drive anyway. Try it sometime.
49 years to read the file (Score:2, Interesting)
144 * 2^50         # bytes
/ (100 * 2^20)     # bytes/sec (ATA-100)
= 1.44 * 2^30      # seconds of I/O
/ (60*60*24*365)   # seconds per year
= 49.03            # years of I/O
Re:49 years to read the file (Score:2)
Of course the index would be too large for the filesystem if literally each byte were to be indexed. At the very least, each byte would need a 7.125 byte pointer to it.
Just how much is 144 PB? (Score:5, Interesting)
144,000,000,000,000,000 bytes, or 144*10^15 -- it's impossible to comprehend.
Here's a way to visualise it - although it's also mindboggling:
Take a sheet of graph paper. If you put a single byte in each 5mm by 5mm (1/5" by 1/5") square and use both sides, you'd need about 1,800,000 km^2 of paper to have room for those 144 PB. That's roughly 700,000 square miles for you people who don't use the metric system.
So when people say "it doesn't sound like a lot", you know how to get them to understand that it really IS a lot.
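For the skeptical, the area figure above comes straight from bc (25 mm^2 per byte, halved for using both sides, with 10^12 mm^2 to the km^2):
  echo "2^48 * 512 * 25 / 2 / 10^12" | bc   # ~1.8 million km^2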
Re:Just how much is 144 PB? (Score:2, Funny)
you would have IPv6 addresses left over even if you assigned an address for each byte on that disk.
this is just for perspective - not because you actually would.....
Re:Just how much is 144 PB? (Score:2)
>Greenland (the largest island in the world) is 2,175,600 km^2
7,686,810 km^2 > 2,175,600 km^2? Is this the new math?
Some uses for all that space... (Score:2, Funny)
Keeping an archive of Slashdot. As the solar system's population grows and grows, it won't be long before every little news story gets a thousand comments per minute. There will be so many moderators that law of averages suggests that every comment will be modded up to 5, and in an ironic twist Slashdot will be flooded. Still, it's Slashdot, and no self-respecting high-bandwidth nerd will be without an up-to-date archive of Slashdot.
Leeching Aminet. By the time we actually have these monster size drives, processors will finally be fast enough to properly emulate an Amiga, WinUAE will have been perfected and bandwidth will be so plentiful that we can all enjoy the latest Amiga software, whether we want it or not.
Freaking out newbies. Remember your scriptkiddie days when you would h4x0r some dude's Windows machine and pop up something resembling the Matrix? Simply add a little matter-to-energy technology, and you can download the newbie onto his computer, FTP him along (resumable downloading, now, we don't want him to materialise with missing parts!) and rematerialise him in your fridge. He'll think he's been transported to some crazy ice planet. Just like in sci-fi, eh folks!
Somewhere to keep all your Pokémon hentai! Don't try and hide it, man. I've seen your sick pictures of Misty and Bulbasaur. [geocities.com]
You'll finally have enough diskspace to install Windows 2024. Naturally, you'll be using Linux instead, but it's nice to brag that you could, if you wanted.
Limit is for a single IDE disk (Score:3, Informative)
Since my machine has 2 IDE controllers, with 2 buses each, and 2 drives per bus, you could make a system with eight 144 PB drives, put an XFS partition on it, and have 1152.92 PB of storage.
And for meaningless statistics' sake: I make my MP3s (from CDs that I own, thankyouverymuch) at an average of 160 kbit/sec. At that rate, the specified drive array would store 1,826,693 YEARS of MP3s. None of which would be Britney Spears.
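That last number checks out with bc, taking 160 kbit/sec as 20,000 bytes/sec and eight drives of 2^48 * 512 bytes each:
  echo "8 * 2^48 * 512 / 20000 / (60*60*24*365)" | bc   # ~1.83 million years of MP3s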
I wouldn't say they are the first/only to do this. (Score:2, Insightful)
Hey
I also think you can use Vinum to mount such a petabyte-sized file system fairly easily.
Really, FreeBSD doesn't get enough credit for the work that's been done. I know Linux has a lot of good marketing for its technical features, but you also have to believe everything you read to fall for it.
Not that large... (Score:2)
Sheesh. I at least want to be able to chronicle the entire history of mankind in uncompressed video on my Linux box. Right now I'll have to settle for the history of the Industrial Age, or split my documentary into several smaller files.
128 petabytes, not 144 (Score:2)
So what? (Score:2)
This is just geek fodder.... Hey, we're using 48-bit addressing, which means you can have 10 petabytes of pr0n now!!! It just sounds cool is all; it doesn't mean anything practical.
But, impracticality is much more interesting, isn't it?
Just think... (Score:2)
Let's all pitch in and buy a big fat bandwidth pipe and fancy hardware interface and an array of these drives and we can store everything we want.
Being prepared (Score:2)
Coulda been better (Score:2)
Article Updated (Score:2, Informative)
The Register [slashdot.org] updated their article. It now acknowledges FreeBSD as being the first Unix to support multi-petabyte filesizes.
However, NTFS 5.0 (the filesystem used by Windows 2000) has had 64-bit addressing since Windows 2000 was released. This yields a theoretical maximum file size of 16 exabytes, which is 16,384 petabytes. That's right, Windows has supported files over a hundred times larger than what Linux now gets with an experimental patch, and has for the past few years. Still, by the time people actually start needing this kind of storage, I don't think it'll actually matter much...
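Taking the 16 exabyte (2^64 byte) figure at face value, the ratio to the new Linux limit is easy to compute:
  echo "2^64 / (2^48 * 512)" | bc   # 128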
Re:Ok... (Score:1, Insightful)
Re:Ok... (Score:1)
Re:Ok... (Score:2, Insightful)
Example... (Score:3, Informative)
BTW, it may also re-open the debate:
Re:Example... (Score:2, Interesting)
Re:Ok... (Score:1)
Of course, we all have eyes capable of appreciating this insane quality... don't we?
Re:Ok... (Score:2, Insightful)
Current codecs already do a pretty decent job of compression on smaller (resolution) streams. However, what if I want my Linux box feeding my HDTV projector at high resolution? This might be one more step in my vision of the ultimate entertainment center.
Re:Ok... (Score:3, Insightful)
Re:Ok... (Score:2)
Obviously, there are more relevant issues. For example, how are you going to store X bits of information using Y particles? At least for classical computing, you have a problem if Y is orders of magnitude less than X. Hence storing 8 numbers for each atom in the galaxy would be impossible if you were confined to using only the atoms on the earth, at least in classical computing. (I believe that with quantum computing you can in principle be clever and get around this, but I don't know enough to say for sure.)
But to answer your question: since over 70% of the baryonic matter is hydrogen, nearly all the rest is helium, and less than 2% is heavier, the average molar mass of baryonic matter in the universe is less than 2.
Re:Ok... (Score:3, Informative)
Well, it's good to see that Linux has caught up, but the article is not correct that Linux is the first OS to support 48-bit ATA; FreeBSD has had this support for over a month now.
See for example: this file [freebsd.org] which is one of the files containing the ATA-6r2 code, committed to FreeBSD on October 6.
Re:Forgot my Greek (Score:3, Informative)
Re:Slashbox (Score:1)
Re:Slashbox (Score:1)
Re:Not so Happy (Score:1)
Re:Big deal (Score:4, Informative)
This obviously mattered to the people who implemented it. If you'd rather see development move in a different direction, by all means, write some code that you feel is useful.
See, the people who implemented this probably don't give a damn what you feel is important, they care about what they feel is important.
It's really very simple, put up or shut up.
Re:Big deal - even more OT (Score:2)
The point is, if I have the choice, I will choose to develop on a system where I have access to the source, for a number of reasons, only partially technical. There is no "collective mind". Developers are highly independent and like to work on what interests them. If you're interested in reaping the rewards from something, you sometimes need to earn them, whether that means actually contributing code, funding development, etc.
Open source platforms were created by hackers, for hackers. And typically we don't give a damn about widespread acceptance or overthrowing Microsoft's dominance of the desktop. We just want something that works well for what we need. Try to understand it from that perspective and you'll do better.
People want a lot of things, but the only people who really matter here are the people implementing this system. See, that's the great part: if you want it to be something it's not, make it that way. And personally, I do think they're possible. If it weren't for legacy applications, Linux would likely be on a lot more desktops than it is. I know any clueful sysadmin would much rather maintain a bunch of Linux boxen than Windows boxen.
From a management perspective, Linux is lightyears beyond Windows. Especially considering that if something doesn't work right, instead of looking for a kludge or trying to get a vendor to include the needed functionality (usually a combination of the two), you can locate the problem, isolate it, and correct it. I know of at least one place I've worked where this ability would have saved the company literally millions of dollars.
There's nothing wrong with cheerleaders to keep the team motivated. As a matter of fact, if you really think about it, recognition is the sole form of payment quite a few OSS developers receive.
It's all about the right tool for the job. If you want to play the latest and greatest games, Linux isn't a good desktop choice for you. For the people maintaining 5000 corporate PCs with custom apps, it becomes a very sensible desktop OS.
Personally, I run FreeBSD and a mix of NT/2000. Windows is still a requirement for me (a couple of addictive games, and some apps that my job requires). And the majority of the time I'm in Windows, I have emacs/tcsh/python windows up (Exceed is a dream here). I personally would LOVE to get Windows off my desktop, but it's the applications that keep me there; applications are key.
Thanks for the thoughtful reply :)