Linux Software

Linux Breaks 100 Petabyte Ceiling

*no comment* writes: "Linux has broken through the 100 petabyte ceiling, doing it at 144 petabytes." And this is even more impressive in pebibytes, too.
This discussion has been archived. No new comments can be posted.

  • by fonebone ( 192290 ) <jessephrenic@nin ... rg minus painter> on Wednesday November 07, 2001 @05:20AM (#2531747) Homepage
    The 144 Petabyte figure is obtained by raising two to the power of 48, and multiplying it by 512.

    Hm, that can't be right, I swear I heard it was supposed to be two raised to the power of 50, multiplied by 128.. hm.
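    (Both factorings give the same number, as a quick check in any shell with 64-bit arithmetic will show, e.g. bash:

    $ echo $((2**48 * 512)) $((2**50 * 128))
    144115188075855872 144115188075855872

    That's about 144.1 x 10^15 bytes -- 144 "decimal" petabytes, or 128 binary ones.)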

  • ..."but Linux recently became the first desktop OS to support enormously large file sizes"...
    • by imp ( 7585 )
      Check out the corrected Register article. FreeBSD had 48-bit IDE addressing support in the CVS repository on Oct 6! A full month before these patches to Linux were released. So far no released kernel supports this. :-)
  • Now I can finally rip all my CDs at the bitrate they deserve!

  • "We almost forgot to mention this, but Linux recently became the first desktop OS to support enormously large file sizes."

    So what about non-desktop OSes, then?
  • by CritterNYC ( 190163 ) on Wednesday November 07, 2001 @05:29AM (#2531769) Homepage
    This would be handy for over 8200 years of DVD video.
  • This is what we've all been waiting for!

    Now Linux can really own as a legitimate desktop OS!

    Seriously though...Isn't there a better place for someone who has the time to contribute? I'd rather see a better desktop environment, a better E-mail package, etc...

    (Flame away, all of you running on 200Mhz machines with a four gig drive who will post about how awesome this new support is!)
  • Somewhat misleading (Score:5, Interesting)

    by nks ( 17717 ) on Wednesday November 07, 2001 @05:33AM (#2531780)
    The IDE driver supports such ridiculously large files, but no filesystem that I know of currently does, not to mention the buffer management code in the kernel.

    So does Linux support 18 PB files? Kind of -- pieces of it do. But the system as a whole does not.
    • by geirt ( 55254 )

      glibc limits the file size to 64 bits (9 million terabytes), so unless the POSIX LFS [www.suse.de] API changes, that is the current maximum file size regardless of the file system (on x86, that is).

      A 9 million terabyte file size limit isn't a large problem for me ....
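      (For what it's worth, this is the LFS switch being referred to: on 32-bit x86 an application typically opts in to 64-bit file offsets at compile time, with an illustrative command line being

      gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE myapp.c -o myapp

      which makes off_t 64 bits wide and transparently maps open()/lseek()/stat() to their 64-bit variants. The source file name here is made up, just a sketch of the idea.)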

  • This is great! Now all I need is a 144 petabyte hard drive.
  • Or in other words... (Score:4, Interesting)

    by PD ( 9577 ) <slashdotlinux@pdrap.org> on Wednesday November 07, 2001 @05:34AM (#2531783) Homepage Journal
    2.197265625 trillion Commodore 64's.

    98.7881981 billion 1.44 meg floppy disks.

    1.44 million 100 gig hard drives

    or

    35 trillion 4K RAM chips (remember those?)
  • watchit (Score:2, Insightful)

    by mr_exit ( 216086 )
    Remember when 640K was enough for everybody?
    Well, I for one am scared by the fact that one day soon 144 petabyte files will seem small.

    - Lord of the Rings is boring. There is a distinct lack of giant robots in it. Good movies have giant robots, bad movies don't. -james
    • File 1 of 1: holodeck_program.hol downloading
      122.5 PB of 160 EB downloaded @ 7 PB/Sec
      6 hours, 29 minutes and 48 seconds remaining

      Sigh...
    • by mrogers ( 85392 ) on Wednesday November 07, 2001 @10:30AM (#2532445)
      The size of information storage devices and the bandwidth of networks are approaching meaningful limits: the size and bandwidth of human experience. Tor Norretranders claims in his book The User Illusion [amazon.com] that the amount of information absorbed by the senses is around 11 Mbits per second. In other words, a totally immersive virtual experience with sight, sound, smell, taste, touch and motion could be transmitted over a standard Ethernet connection. An entire day of a human life could be recorded in perfect detail (with no compression) on a 120 GB disk. So there is a limit to how much information you could ever want to store. In your entire life you will experience less than 3.5 petabytes of information. 1.44 petabytes will never seem small to a human being.

      However, there might one day be information processing systems to which 1.44 petabytes is a small amount of information. In a sense, these systems will have a richer experience of the world than human beings. I wonder if human consciousness would seem marvellous or valuable to such a machine.

      • Yes, but there is always the possibility that the human mind will grow. Evolutionarily speaking, it just might have no choice but to. After generations of information overload, perhaps the mind will increase in capacity to accommodate later generations. At least, I hope.
      • Most of what you said can basically be summed up in one sentence:


        An entire day of a human life could be recorded in perfect detail (with no compression) on a 120 GB disk.


        Let me guess - you're using roughly the bitrate of DVD, extrapolating over 24 hours, and fudging the numbers. Well, either you or whoever came up with this figure.

        Let's look at this from a cocktail napkin perspective. At the CURRENT resolution and audio sampling rate etc. for DVD, 24 hours is about 50GB of storage. Only problem is, this assumes that DVD catches every single bit of visual/audio information that is out there. Well, just ask your dog how well 44 kHz records high-pitched noises. And then remember that not everyone has as poor eyesight/hearing as the masses. So even fudging this number by a factor of 2 or 3 starts to hit and overtake 120GB.

        Oh wait, this assumes that all we care about is what the eyes see and the ears hear. Too bad that things are happening all around you. Also too bad that you have 3 other external senses, plus several other internal ones (balance comes to mind) that are continually inputting data into your brain.

        Estimates like this really make me shake my head, as they assume artificial limitations that just aren't there in the real, ANALOG world.

        • Tor Norretranders' estimate had nothing to do with DVDs. It was based on two psychological phenomena known as Just Noticeable Difference (JND) and Subjective Time Quantum (SZQ, from the German).

          A JND is the change in the level of a stimulus that is just large enough to be noticeable. For example, the JND in the brightness of a dim light is extremely small while the JND in the brightness of a bright light is quite large. It is this phenomenon that allows compression techniques like MP3 to discard information from a signal without audibly changing the signal - loud sounds can be stored with less precision than quiet sounds, quiet sounds that are masked by loud sounds of the same pitch can be discarded, etc. Of course MP3 compression doesn't perfectly match your own psychoacoustic compression, so sometimes the difference is audible. But in theory it is possible to remove information from an audio signal without creating a noticeable difference (e.g. by reducing the sampling rate from 500 kHz to 250 kHz).

          A Subjective Time Quantum is a period of time about one sixteenth of a second long. Two stimuli that occur within the same SZQ are experienced simultaneously - the subject cannot tell which occurred first. If the time separation is greater than 1/16 s, the subject can detect the order in which the events occurred. This phenomenon is related to 'binding', in which separate stimuli are identified as aspects of the same event. To test it for yourself, try watching a game of football from the other side of the playing field. Because light travels faster than sound, you will see the ball being kicked before you hear the thump. If you are less than 1/16 s away at the speed of sound (about 21 m), you won't notice the delay. But if you are further away (and it's a quiet day) you'll notice that the sight and sound of the ball being kicked become two separate experiences. You still know at a logical level that they are aspects of the same event, but at the level of immediate experience it's obvious that one occurred before the other.

          Just Noticeable Differences and Subjective Time Quanta mean that the amount of information received by our senses is smaller than the amount of information that could potentially be received. (Common sense tells us the same thing - our senses cannot be 100% accurate, they are subject to noise and distortion like any other physical device, and there's no point in recording below the noise floor.) In other words, although the world is analog our experience of it is quantised. (After all, sensory information is carried by nerve impulses with invariant magnitude, similar to digital signals.) Using experimentally-derived JNDs for all the senses (not just sight and sound like a DVD), Norretranders calculated that the bandwidth of human experience was 11 million bits per second. That's 1,375,000 bytes per second or 118,800,000,000 bytes (roughly 120 GB) per day.

  • Can we see a SLASHDOT version of Linux that's made to be a secure standalone webserver (Apache, MySQL, PHP)? Forget the banner ads -- I'd pay you some money if you could create an ISO for us /.'ers.

    You guys already know how, so why not share?
  • I am just wondering here: is there some sort of performance hit in addressing normal-sized files while adding in support for this "petabyte" feature?

    Surely the number of bits needed to address this is going to increase, and more data for addressing means less data for good ol' file transfer.

    Is this going to be a noticeable difference, or am I just being a bit whore?
  • XFS (Score:5, Informative)

    by starrcake ( 25459 ) on Wednesday November 07, 2001 @05:40AM (#2531796) Homepage
    http://oss.sgi.com/projects/xfs/features.html

    XFS is a full 64-bit filesystem, and thus, as a filesystem, is capable of handling files as large as a million terabytes.

    2^63 = 9 x 10^18 = 9 exabytes

    In future, as the filesystem size limitations of Linux are eliminated, XFS will scale to the largest filesystems.
  • by Bowie J. Poag ( 16898 ) on Wednesday November 07, 2001 @05:43AM (#2531802) Homepage


    "144 PB should be enough for anybody."

    - Bowie J. Poag, November 7, 2001
  • This might be useful for some very large database tables (assuming you don't use raw devices).
    That said, this is when I turn this into a mini Ask Slashdot:

    While I have no problems writing/reading large files (i.e., >2GB), most regular Linux software can't deal with them.
    For instance, I can't upload them with FTP. I'm having this problem with a mysqldump file that's part of a system backup.
    Right now it's not a real problem since I can gzip the file and the size goes down to approx. 250MB, but how do you guys handle large files in Linux anyway?

    • by Effugas ( 2378 ) on Wednesday November 07, 2001 @06:56AM (#2531920) Homepage
      SSH has done quite a bit of work to support +2GB files. As always, the following will and always has worked:

      cat file | ssh user@host "cat > file"

      More recent builds of SCP will also support +2GB, so:

      scp file user@host:/path
      or
      scp file user@host:/path/file

      will both work.

      In fact, probably the best way for syncing two directories is rsync. Rsync's major weakness is that it's *tremendously* slow for large numbers of files, and I believe it has to read every byte of a large file before it can incrementally transfer it (so you're looking at 2GB+ of reading before transferring). The following will do rsync over ssh:

      rsync -e ssh file user@host:/path/file
      rsync -e ssh -r path user@host:/path

      For incremental log transfers, I actually had a system built that would ssh into the remote side, determine the filesize of the remote file, and then tail from the total file size minus the size of the remote file. It was a bit messy, but it was incredibly reliable. Did have problems when the remote logs got cycled, but it wasn't too ugly to detect that remote filesize was smaller than localfilesize. Just a shell script, after all. (A rough sketch of the idea is at the end of this comment.)

      SFTP should, as far as I know, handle 2GB+ without a hitch.

      Both SCP and SSH of course have compression support in the -C tag; alternatively you can pipe SSH through gzip.

      Email me for further info; there are some SSH docs on my home page as well. Good luck :-)

      --Dan
      www.doxpara.com
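      (A rough sketch of the incremental log transfer Dan describes above -- hypothetical paths, assumes GNU tail and wc on both ends, untested:

      SRC=/var/log/app.log
      REMOTE_SIZE=$(ssh user@host "wc -c < $SRC" 2>/dev/null || echo 0)
      # If the remote copy is bigger than ours, the log was cycled; start from scratch.
      [ "$REMOTE_SIZE" -gt "$(wc -c < "$SRC")" ] && REMOTE_SIZE=0
      tail -c +$((REMOTE_SIZE + 1)) "$SRC" | ssh user@host "cat >> $SRC"

      Not Dan's actual script, just the shape of the idea.)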
  • by Snard ( 61584 ) <mike,shawaluk&gmail,com> on Wednesday November 07, 2001 @05:46AM (#2531806) Homepage
    Just a side note: BeOS has support for files up to 18 exabytes, not 18 petabytes, as stated in the article. This is roughly 18,000 petabytes, or 2^64 bytes.

    Just wanted to set the record straight.
  • by TheMMaster ( 527904 ) <hp.tmm@cx> on Wednesday November 07, 2001 @05:46AM (#2531807)
    Now, I can really imagine someone that buys a 144 PB drive (array) and will use IDE?? I would personally go for SCSI there ;-)

    What I am really wondering is: is there at the current moment ANY company/application/whatever that requires this amount of storage? I thought that even a large bank could manage with a few TBs.
    Not intended as a flame, just interested

    but still, this is a Good Thing (r)
    • ... but a couple of years ago, I was investigating OODBMSs. The sales bloke for (I think it was) Objectivity claimed that CERN were using their database for holding all the information from the particle detector things - which I can see being a shedload of data (3d position + time + energy). He was suggesting figures of 10 petabytes a year for database growth (so it must be frigging huge by now).

      Of course, this was probably salescrap. Does anyone know the truth on this?
      • by Anonymous Coward on Wednesday November 07, 2001 @06:47AM (#2531903)

        Of course, this was probably salescrap. Does anyone know the truth on this?

        The BABAR experiment [stanford.edu] at SLAC [stanford.edu] is using Objectivity for data storage. Unfortunately, I cannot find a publicly available web page about computing at BABAR right now.

        The amount of data BABAR produces is on the order of tens of terabytes per year (maybe a hundred), and even storing this amount in Objectivity is not without problems. The LHC [web.cern.ch], which is currently under construction, will generate much more data than BABAR, but even if they reach 10 petabytes per year one day, I very much doubt that they will be able to store this in Objectivity.

    • by Nadir ( 805 )
      Actually you would go for FC (Fibre Channel), not SCSI. Go to http://www.fibrechannel.org [fibrechannel.org] for more information.
    • The bank I work for currently stores 1.5 TB a day worth of data. Almost none of it is ever looked at again, but a huge proportion of it is required by regulators. Of course this all goes on tape, since there is no requirement for speedy access.
    • Maybe this guy [slashdot.org] would need it.
  • by tunah ( 530328 ) <sam&krayup,com> on Wednesday November 07, 2001 @05:47AM (#2531810) Homepage
    Let's say you have this 144 petabyte drive. Okay, it's Friday, time to back up.

    So you whip out your two hundred million CD recordables, and start inserting them. Let's say you get 1 frisbee for each 25 700 MB CDs.

    This leaves you with eight million frisbees.

    That's a stack 13 kilometres high.

    So who needs this on a desktop OS again?

    • by ColaMan ( 37550 ) on Wednesday November 07, 2001 @06:33AM (#2531876) Journal
      So you whip out your two hundred million cd recordables, and start inserting them. Let's say you get 1 frisbee for each 25 700Mb CDs.

      Silly Moo!

      You back it up to your *other* 144 petabyte drive!
      • by Anonymous Coward on Wednesday November 07, 2001 @06:59AM (#2531925)
        Suppose you copy at full PCI bus speed: 133 Megabytes per second. Said backup would take about 34 years.
        • That's why you use your Standard Parallel Interlink Fiber (SPIF). You know, Interplanetary Federation standard IFP-340-A or B if you have the new IFP-560 standard chipset. Backup should take about 10 seconds for the former, 7 seconds for the latter.

      • But after 78 petabytes have been transferred, your first disk develops a hardware fault. And your backup disk now has a half-backed-up filesystem so corrupted that you can't get the data back!

        You need three 144 petabyte drives to do HD backup - backup A to B and then A to C alternately. Verify the backup and you should always have at least one consistent file system.

        Hey, I just said all this in a message to an 'Ask Slashdot' :)

        Baz
    • Keep in mind that you could also back it up onto a mere 1.5 million 100GB tapes [exabyte.com].
    • Transmitting 144 petabytes over a 1M link takes 4355 years, 3 months

      If you were unfortunate enough to be still using a 300 bps modem, this would take 152,227,742 years (including start and stop bits)

      Your MP3 collection would need to have 183,000 years of continuous music to fill 144 petabytes.

      The bandwidth of a single 144 petabyte file being carried across the Pacific in a 747 is an impressive 3,336,000,000,000 bytes per second, or over 3 TB/s (assuming a 12-hour flight time).

      And the RIAA probably wants to control this. Muhahaha.
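      (Quick check of that last figure with bc, taking "144 PB" as 2^48 sectors of 512 bytes and a 12-hour flight:

      $ echo "2^57 / (12 * 3600)" | bc
      3335999723978

      i.e. a bit over 3.3 terabytes per second of sneakernet bandwidth.)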
  • by Anonymous Coward
    FreeBSD had it first. For over a month. Read the committer CVS Logs and weep, penguin boys.

    http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/dev/ata/ata-disk.c -> version 1.114
  • 144 PB, not really (Score:5, Insightful)

    by tap ( 18562 ) on Wednesday November 07, 2001 @05:53AM (#2531818) Homepage
    Sounds like all they are saying is that the new IDE driver can support 48-bit addressing. With 2^48 sectors of 512 bytes each, you get 144 PB. But there are a LOT of other barriers to huge filesystems or files.

    For instance, the Linux SCSI driver has always supported 32-bit addressing, good enough for 2 terabytes on a single drive. But until recently, you couldn't have a file larger than 2 gigabytes (1024x smaller) in Linux. I think that the ext2 filesystem still has a limit of 4 TB for a single partition.

    So while the IDE driver may be able to deal with a hard drive 144 PB in size, you would still have to chop it into 4 TB partitions.
    • But until recently, you couldn't have a file larger than 2 gigabytes (1024x smaller) in Linux.

      You could, provided you were using a 64-bit architecture. Linux isn't just x86 and other 32-bit architectures.
  • Just how much data IS 144 petabytes? It's hard to visualize it off the top of one's head, but this link may help to give you perspective on the sheer enormity of the amount:

    http://www.cacr.caltech.edu/~roy/dataquan/ [caltech.edu]

  • Uh, no? (Score:3, Informative)

    by srichman ( 231122 ) on Wednesday November 07, 2001 @06:02AM (#2531830)
    Correct me if I'm wrong, but isn't this very very misleading? The article states that the Linux IDE subsystem can now support single ATA drives up to 144 petabytes (i.e., Linux ATA now has 48-bit LBA support), but my understanding is that many other aspects of the Linux kernel limit the maximum file size to much less.

    I'm looking at the Linux XFS feature page [sgi.com], which states:

    Maximum File Size
    For Linux 2.4, the maximum accessible file offset is 16TB on 4K page size and 64TB on 16K page size. As Linux moves to 64 bit on block devices layer, file size limit will increase to 9 million terabytes (or the system drive limits).

    Maximum Filesystem Size
    For Linux 2.4, 2 TB. As Linux moves to 64 bit on block devices layer, filesystem limits will increase.

    My understanding is that the 2TB limit per block device (including logical devices) is firm (regardless of the word size of your architecture), and unrelated to what Mr. Hedrick did. Am I wrong? Does this limit disappear if you build the kernel on a 64-bit architecture?

    And, on 32-bit architectures, there's no way to get the buffer cache to address more than 16TB.

  • by ukryule ( 186826 )
    Is 1 petabyte 1000^5 or 1024^5? (i.e. is it 10^15 or 2^50?)

    If 1kB = 1024 Bytes, then I've always assumed that 1MB = 1024kB (instead of 1000kB), 1GB = 1024MB, and so on.

    Normally this doesn't make that much difference, but when you consider the cost of a 16 (144-128) petabyte hard drive, then the difference is more important :-)
    • Well, if you're talking about HDDs, they're usually marketed using base-10 sizes. That's why they have the small print saying "1 MB = 1000000 Bytes"
    • If you're buying a hard drive from a store, 1 meg = 1,000,000 bytes. If you're talking on Slashdot or about any scientific research, 1 meg = 1,048,576 bytes. My pansy 32 bit calculator can't comprehend Terabytes...
    • The number is derived from the addressability, which is binary-based. Specifically, it's talking about 2^48*512, so in this case, it's using the base-2 interpretation. See this thread [slashdot.org] for a more humourous discussion about this.
  • From my perspective, while obscenely large limits on file system sizes are no bad thing, I'm more interested by the prospect for scalability in the context of realistic problems. I see much larger challenges in establishing systems to maximally exploit locality of reference. I'd also like to see memory mapped IO extended to allow direct use to be made of entire large scale disks in a single address space using a VM-like strategy ... but I guess this will only be deemed practicable once we're all using 64 bit processors. Are there any projects to approximate this on 32 bit architectures?
  • by mr ( 88570 ) on Wednesday November 07, 2001 @06:35AM (#2531878)
    Before you start thumping your chest about how superior or cutting edge *Linux is, go look at these two links
    A slashdot story pointing out how without the FreeBSD ATA code, the Linux kernel would be 'lacking'
    The FreeBSD press release announcing the code is stable [freebsd.org]

    If The Reg actually researched the story, Andy would have noticed it is not a 'first' but more a 'dead heat' between the two leading software libre OSes. Instead, The Reg does more hyping of *Linux.
  • Pebibytes? (Score:4, Informative)

    by Rabenwolf ( 155378 ) on Wednesday November 07, 2001 @06:44AM (#2531897)
    And this is even more impressive in pebibytes, too.

    Well, according to the IEC standard [nist.gov], one petabyte is 10^15 (or 1e+15) bytes, while one pebibyte is 2^50 (or 1.125899e+15) bytes.

    So 144 petabytes is 1.44e+17 bytes or 127.89769 pebibytes. Can't say that's more impressive tho. :P
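    (The conversion is easy to reproduce with bc if you want to check it:

    $ echo "scale=5; 144 * 10^15 / 2^50" | bc
    127.89769

    Same number as above -- the decimal "peta" prefix is what makes 144 PB sound bigger than it is in binary units.)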

  • Reality check... (Score:5, Informative)

    by Anonymous Coward on Wednesday November 07, 2001 @07:09AM (#2531941)
    Does anybody realize that, even with a data rate on the order of 1 GB/s, much higher than what current platters can do, it takes about 5 years to fill such a disk?

    I'm already fed up with the time it takes to back up large disks to tape. Drive transfer rate has not improved at the rate of disk capacity in the last few years and is becoming a bottleneck. It was unimportant when the backup time of a single disk was well below one hour (our Ultrium tapes give about 40 GB/hour).

    Just figure that if you want to transfer 144 PB in about one day, you need a transfer rate on the order of 1 TB/s. Electronics is far from there, since it means about 10 terabits/second. Even fiber is not yet there. Barring a major revolution, magnetic media and heads can't be pushed that far. At least it is well beyond the foreseeable future.

    Don't get me wrong, it is much better to have more address bits than needed to avoid the painful limitations of 528 MB, 1024 cylinders etc... But, as somebody who used disks over 1 GB on mainframes around 1984-1985, I easily saw all the limitations of the early IDE interfaces (with the hell of CHS addresses and its ridiculously low bit numbers once you mixed the BIOS and interface limitations) and insisted on SCSI on my first computer (now CHS is history thanks to LBA, but the transition has been sometimes painful).

    However, right now big data centers don't always use the biggest drives because they can get more bandwidth by spreading the load over more drives (they are also slightly wary of the latest and greatest because reliability is very important). Backing up already starts to take too much time.

    In short, the 48 bit block number is not a limit for the next 20 years or so. I may be wrong, but I'd bet it'll take at least 15 years, perhaps much more because it is too dependent on radically new technologies and the fact that the demand for bandwidth to match the increase in capacity will become more prevalent. Increasing the bandwidth is much harder since you'll likely run into noise problems, which are fundamental physical limitations.
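    (For what it's worth, the "order of 1 TB/s" figure above is easy to reproduce with bc:

    $ echo "144 * 10^15 / 86400" | bc
    1666666666666

    i.e. roughly 1.7 TB/s sustained for a full day, before any overhead.)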

    • Fibre's really not far off.

      Wavelength division multiplexing can give this rate already, but the erbium amplifiers used to boost the signal do not amplify across the whole usable spectrum, so this data rate would only be possible for sub-50 km distances.

      Additionally you'd need a whole rack of electronics to decode the demuxed streams.
  • by Talez ( 468021 ) on Wednesday November 07, 2001 @07:16AM (#2531945)
    <Insert witty joke vaguely relating to 144 petabytes and Microsoft software space requirements here>

    <Insert Poster's Name Here>

    <Insert Sig Here>
  • It is a start (Score:5, Interesting)

    by Zeinfeld ( 263942 ) on Wednesday November 07, 2001 @07:21AM (#2531950) Homepage
    The announcement is pretty irrelevant; all it says is that there is a Linux driver for the new disk drive interface that supports bigger disks.

    The real advance here is that the disk drive weenies have at last realised that they need to come out with a real fix for the 'big drive' problem and not yet another temporary measure.

    Despite the fact that hard drives have increased from 5 MB of storage to 100 GB over the past 20 years, the disk drive manufacturers have time after time proposed new interface standards that have been obsolete within a couple of years of their introduction.

    Remember the 2 GB barrier? Today we are rapidly approaching the 128 GB barrier.

    What annoys me is that the disk drive manufacturers seem to be unable to comprehend the idea of 'automatic configuration'. Why should I have to spend time telling my BIOS how many cylinders and tracks my drive has? I have a couple of older machines with somewhat wonky battery backup for the settings; every so often the damn things forget what size their boot disk is. Like just how many days would it take to define an interface that allowed the BIOS to query the drive about its own geometry?

    Of course in many cases the figures you have to enter into the drive config are fiddled because the O/S has some constraint on the size of drives it handles.

    We probably need a true 64 bit Linux before people start attaching Petabyte drives for real. For some reason file systems tend to be rife with silly limitations on file sizes etc.

    Bit saving made a lot of sense when we had 5 MB hard drives and 100 KB floppy drives. It does not make a lot of sense to worry about a 32-bit or 64-bit file size field when we are storing 100 KB files.

    If folk go about modifying Linux, please don't let them just deal with the drives of today. Insist on at least 64 bits for all file size and location pointers.

    We are already at the point where terabyte storage systems are not unusual. Petabyte stores are not exactly commonplace, but there are several in existence. At any given time there are going to be applications that take 1000-odd of the largest disks available in their day. Today that means people are using 100 TB stores; it won't be very long before 100 PB is reached.

  • by Anonymous Coward
    I figure that at ATA-100 speeds, it would take 49 years to read the entire file.

    144 * 2^50 # n bytes
    / 100 * 2^20 # bytes/sec ATA-100
    = 1.44 * 2^30 # n I/O seconds
    / 60*60*24*365 # ~ secs/year
    = 49.03 # n I/O years
    • And yet, if each byte were indexed in a balanced binary tree, it would take 57 operations to find it.

      Of course the index would be too large for the filesystem if literally each byte were to be indexed. At the very least, each byte would need a 7.125 byte pointer to it.
  • by Hektor_Troy ( 262592 ) on Wednesday November 07, 2001 @07:42AM (#2531974)
    144 Petabytes doesn't sound like a lot. When putting it into writing:

    144,000,000,000,000,000 or 144*10^15

    it's impossible to comprehend.

    Here's a way to visualise it - although it's also mind-boggling:

    Take a sheet of paper with the squares on it. If you put a single byte in each 5mm by 5mm (1/5" by 1/5") square and use both sides, you'd need:

    1,800,000 km^2 of paper to have room for those 144 PB. That's roughly 695,000 square miles for you people who don't use the metric system.

    So when people say "it doesn't sound like a lot", you know how to get them to understand that it really IS a lot.
    • Let's assume that you have just one array in a machine. IPv6 has scope for 6 x 10^23 addresses per square meter of the Earth's surface.

      you would have IPv6 addresses left over even if you assigned an address for each byte on that disk.

      this is just for perspective - not because you actually would.....

  • A hundred pebbybytes or whatever you call it might seem like a lot, but if I remember correctly from my tagline collection, Hard Drive Myth #1 is "You'll never use all that space." Here are a few suggestions as to what you might like to fill those spare terabytes with...

    Keeping an archive of Slashdot. As the solar system's population grows and grows, it won't be long before every little news story gets a thousand comments per minute. There will be so many moderators that law of averages suggests that every comment will be modded up to 5, and in an ironic twist Slashdot will be flooded. Still, it's Slashdot, and no self-respecting high-bandwidth nerd will be without an up-to-date archive of Slashdot.

    Leeching Aminet. By the time we actually have these monster size drives, processors will finally be fast enough to properly emulate an Amiga, WinUAE will have been perfected and bandwidth will be so plentiful that we can all enjoy the latest Amiga software, whether we want it or not.

    Freaking out newbies. Remember your scriptkiddie days when you would h4x0r some dude's Windows machine and pop up something resembling the Matrix? Simply add a little matter-to-energy technology, and you can download the newbie onto his computer, FTP him along (resumable downloading, now, we don't want him to materialise with missing parts!) and rematerialise him in your fridge. He'll think he's been transported to some crazy ice planet. Just like in sci-fi, eh folks!

    Somewhere to keep all your Pokémon hentai! Don't try and hide it, man. I've seen your sick pictures of Misty and Bulbasaur. [geocities.com]

    You'll finally have enough diskspace to install Windows 2024. Naturally, you'll be using Linux instead, but it's nice to brag that you could, if you wanted.

  • by wowbagger ( 69688 ) on Wednesday November 07, 2001 @09:02AM (#2532139) Homepage Journal
    This limit is for a SINGLE IDE disk. Now, if you use Logical Volume Management [sistina.com] (which is in the standard 2.4 kernel, no patches required) you can combine multiple disks into one.

    Since my machine has 2 IDE controllers, with 2 buses each, and 2 drives per bus, you could make a system with eight 144 PB drives, put an XFS partition on it, and have 1152.92 PB of storage. (A rough sketch of the commands is at the end of this comment.)

    And for meaningless statistics sake: I make my MP3s (from CDs that I own, thankyouverymuch) at an average of 160 kb/sec. At that rate, the specified drive array would store 1826693 YEARS of MP3s. None of which would be Brittany Spears.
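    (A hypothetical sketch of what that LVM + XFS setup could look like -- device names made up and exact flags vary between LVM versions, so treat it as the shape of the thing rather than a recipe:

    pvcreate /dev/hda /dev/hdb /dev/hdc /dev/hdd /dev/hde /dev/hdf /dev/hdg /dev/hdh
    vgcreate bigvg /dev/hda /dev/hdb /dev/hdc /dev/hdd /dev/hde /dev/hdf /dev/hdg /dev/hdh
    lvcreate -L 500G -n biglv bigvg     # size is illustrative; pick one matching the real drives
    mkfs.xfs /dev/bigvg/biglv
    mount /dev/bigvg/biglv /mnt/huge

    The real thing would of course use partitions and sizes matching the actual hardware.)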
  • http://www.freebsd.org/news/newsflash.html#2001November3:1

    Hey... look at that, 48-bit addressing ATA drivers are now working? Wow... maybe FreeBSD people should run around making bogus claims too. FreeBSD invented the Question Mark! Wooo hoo.

    I also think you can use Vinum to mount such a petabyte-sized file system fairly easily.

    Really, FreeBSD doesn't get enough credit for the work that's been done. I know Linux has a lot of good marketing for technical features, but you also have to believe everything you read to fall for it.
  • That's only about 177 years' worth of 640x480, 24-bit color, 30fps uncompressed video.

    Sheesh. I at least want to be able to chronicle the entire history of mankind in uncompressed video on my Linux box. Right now I'll have to settle for the history of the Industrial Age, or split my documentary into several smaller files.
  • The 144 Petabyte figure is obtained by raising two to the power of 48, and multiplying it by 512.
    That sounds to me like two to the power of 57. If we follow the established pattern, this would be 128 petabytes, not 144:

    • 2^10 = kilobyte
    • 2^20 = megabyte
    • 2^30 = gigabyte
    • 2^40 = terabyte
    • 2^50 = petabyte
    • 2^57 = 2^7 * 2^50 = 128 petabytes
  • A 100GB drive right now is about $180. A 10 petabyte drive would be how much? Oh, more than anyone on earth could afford. Not to mention all the technical barriers....


    This is just geek fodder....Hey, we're using 48bit addressing. Which means you can have 10 petabytes of pr0n now!!! It just sounds cool is all, it doesn't mean anything practical.

    But, impracticality is much more interesting, isn't it?

  • That single drive could hold... everything!

    Let's all pitch in and buy a big fat bandwidth pipe and fancy hardware interface and an array of these drives and we can store everything we want.
  • It's impossible to conceive of Linux _needing_ that big a hard drive. But think of how fast Microsoft code bloats. Every few years M$ has to invent a new file system to properly handle the larger drives needed to hold Windows & Office. And so who knows how big common disk drives will be in 10 years? But Linux is ready NOW... ;-)
  • This would be a much better article if the headline read "IBM breaks 100 petabyte barrier." Or Maxtor. Or Western Digital. Or perhaps Quantum. See what I mean?
  • Article Updated (Score:2, Informative)

    by Jobby ( 135237 )

    The Register [slashdot.org] updated their article. It now acknowledges FreeBSD as being the first Unix to support multi-petabyte filesizes.

    However, NTFS 5.0 (the filesystem that is used by Windows 2000) has had 64-bit addressing since Windows 2000 was released. This yields a maximum capacity of 16 exabytes, which is 16,384 petabytes. That's right, Windows has supported files about 128 times larger than what Linux gets with this experimental patch, for the past few years. Still, by the time people actually start needing this kind of storage, I don't think it'll actually matter much...
