Reaching Beyond Two-Terabyte Filesystems

Jeremy Andrews writes: "Peter Chubb posted a patch to the lkml, with which he's now managed to mount a 15 terabyte file (using JFS and the loopback device). Without the patch, Peter explains, "Linux is limited to 2TB filesystems even on 64-bit systems, because there are various places where the block offset on disc are assigned to unsigned or int 32-bit variables." Peter works on the Gelato project in Australia. His efforts include cleaning up Linux's large filesystem support, removing 32-bit filesystem limitations. When I asked him about the new 64-bit filesystem limits, he offered a comprehensive answer and this interesting link. The full thread can be found here on KernelTrap. Reaching beyond terabytes, beyond pentabytes, on into exabytes. I feel this sudden discontent with my meager 60 gigabyte hard drive..."
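
For the curious, the 2 TB figure is just arithmetic: as the quote says, block offsets end up in 32-bit variables, and a disk sector is 512 bytes, so the largest addressable offset is 2^32 x 512 bytes. A quick back-of-the-envelope (this is only the arithmetic, not the kernel code):

    #!/usr/bin/perl
    # Largest byte offset addressable with a 32-bit sector number and 512-byte sectors.
    my $sector_size = 512;               # bytes per sector
    my $max_sectors = 2 ** 32;           # unsigned 32-bit sector numbers
    my $max_bytes   = $sector_size * $max_sectors;
    printf "max: %.0f bytes = %.1f TiB = %.2f TB\n",
           $max_bytes, $max_bytes / 2**40, $max_bytes / 1e12;
    # max: 2199023255552 bytes = 2.0 TiB = 2.20 TB (half that again wherever a signed int is used)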
  • Testing (Score:3, Interesting)

    by Rick the Red ( 307103 ) <Rick.The.Red@ g m a il.com> on Saturday May 11, 2002 @07:25AM (#3501717) Journal
    The thought of generating the test files is mind-boggling. Unless you work at CERN, where they probably have 16T files just lying around...

    • #!/usr/bin/perl
      # This creates a "sparse file" of 16 terabytes.
      # It will not test all attributes of file creation,
      # as the blocks on disk are not actually written,
      # but it will fail on modern Linux boxes. Now,
      # the question of whether Perl is 64-bit clean,
      # down to the seek(2) call, is interesting....
      $tmpf = "ohmyyourabigoneaintcha";
      open(TESTFILE, ">$tmpf") or die "open: $!";
      # seek takes (handle, offset, whence) -- offset first, then whence
      seek(TESTFILE, (1024**4) * 16, 0) or die "seek: $!";
      print TESTFILE "\0";   # write one byte so the file actually takes on this length
      close(TESTFILE);
      print "Test file ($tmpf) is ", -s $tmpf, " bytes\n";
    • Actually it's PETAbytes, NOT pentabytes. On a project that I'm on, there's discussion of 9-petabyte storage systems.
    • This is probably the worst place ever to mention this, but:

      Since NTFS support under Linux is pretty shoddy, maybe it's time to get serious here and switch to Windows 2000. Recall that NTFS theoretically has NO maximum file size. [pcguide.com]

      On the other hand, if you are doing your calculations using Linux-proprietary software, you could mount the Win2k storage array as a samba volume under Linux, and store your data using, say, gigabit ethernet. Another solution is to write proxy software to create an in-between filesystem between the program and the actual filesystem. The data would be stored contiguously in a "virtual filesystem", which would actually consist of multiple files in the actual file system.

      Since this software is pretty new, I don't know if I'd trust it with any Terabyte-sized files right now.
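
      Reduced to its bare bones, the "in-between filesystem" idea above is not much code. Here's a rough sketch (ChunkedFile and the 1 GB chunk size are made up for illustration; a real version would need reads, locking, error handling, and so on):

        #!/usr/bin/perl
        # Sketch: present one big logical byte range as many fixed-size chunk files.
        use strict;
        use warnings;

        package ChunkedFile;

        sub new {
            my ($class, $dir, $chunk_size) = @_;
            mkdir $dir unless -d $dir;
            return bless { dir => $dir, chunk => $chunk_size }, $class;
        }

        # Write $data at logical byte $offset, splitting across chunk files as needed.
        sub write_at {
            my ($self, $offset, $data) = @_;
            while (length $data) {
                my $idx   = int($offset / $self->{chunk});     # which chunk file
                my $off   = $offset - $idx * $self->{chunk};   # offset inside that chunk
                my $piece = substr($data, 0, $self->{chunk} - $off, '');
                my $path  = "$self->{dir}/chunk.$idx";
                open my $fh, (-e $path ? '+<' : '+>'), $path or die "open $path: $!";
                binmode $fh;
                seek $fh, $off, 0 or die "seek: $!";
                print {$fh} $piece;
                close $fh;
                $offset += length $piece;
            }
        }

        package main;

        # A logical file backed by 1 GB chunks, written to just past the 3 GB mark.
        my $vf = ChunkedFile->new("vfs-demo", 1024**3);
        $vf->write_at(3 * 1024**3 + 5, "hello from out past 3 GB\n");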

      To see a real-world example of huge amounts of data, visit Microsoft TerraServer. [msn.com] From the site:

      "All the imagery and meta-data displayed on the TerraServer web site is stored in Microsoft SQL Server databases. The TerraServer image data is partitioned across three SQL Server 2000 1.5 TB databases. USGS aerial imagery is partitioned across two 1.5 TB databases. The USGS topographical data is stored in a single 1.5 TB database. Each database server runs on a separate, active server in the four-node Windows 2000 Datacenter Server cluster... (Let mySQL try THAT...)"

      "Microsoft TerraServer runs exclusively on Compaq servers and storage arrays. Compaq Corporation donated the 4 Compaq ProLiant 8500 database servers. The disk storage equipment, 13.5 TB in total, was donated by the StorageWorks division of Compaq Corporation. The web servers are eight Compaq ProLiant DL360, "1u" processors."


      See... Bill DOES know where you live! ;-)
      • .... Server 2000 1.5 TB databases. USGS aerial imagery is partitioned across two 1.5 TB databases. The USGS topographical data is stored in a single 1.5 TB ....

        1.5 TB < 2 TB

        • Which only goes to show... who needs files of about 20 TB? And why is the Linux community worried about it, then? People who are dealing with files that big probably have lots of $money$, and can afford not to use a free OS.
          • You've never worked in a large organisation, have you? (Despite the homepage you mentioned ;-)

            Actually big organizations with lots of money are much more likely to use a free OS for custom implementations because it's a lot more reliable, faster (and cheaper, but they have tons of cash, so that's not the deciding point) to modify Linux/BSD than to hope some other corp will put out an OS that works for you.

  • Brain Contents (Score:3, Interesting)

    by dscottj ( 115643 ) on Saturday May 11, 2002 @07:26AM (#3501720) Homepage
    I seem to recall reading, probably in a science fiction book, that the human brain is thought to store somewhere in the neighborhood of ~2-4 terabytes of information.


    Aside from all sorts of quantum fiddly bit problems, I wonder just how long it will be before we can store the state of every neuron in a brain (doesn't have to be human, at least not at first) on a hard drive.


    Of course, then what would you do with it?

    • You'd think that since the brain (obviously) doesn't store in bytes/bits, every estimate you hear would be bullshit.



      Seriously, I'm wondering how exactly they "estimate" that.

      • (* You'd think that since the brain (obviously) doesn't store in bytes/bits, every estimate you hear would be bullshit.....Seriously, I'm wondering how exactly they "estimate" that. *)

        I don't think the fact that the brain is not digital should prevent equivalency efforts. Music is not initially digital either, but we still know the issues involved in translating it. Generally, the same issues apply: how "accurate" do you want the representation to be? For example, does the neuron "firing threshold" value need to be stored at double precision? Maybe one byte is enough. Do we need to store the activation curve for each one, or can each cell be tagged into a "group" that supplies a sufficient activation approximation formula? That we don't really know.

        How accurate the representation needs to be is still hotly debated. We can do things to our brain like drink wine or coffee, which alter its state a bit, and kill some cells, yet it does not crash (at least not stay crashed). Thus, it does seem to have fairly high tolerances, meaning that super-detailed emulation is probably not necessary for a practical representation.
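
        Just to show how much the answer depends on the assumptions, here is the crude synapse-counting arithmetic such figures usually come from. The neuron and synapse counts are order-of-magnitude textbook numbers; the bits-per-synapse values are pure guesses, which is exactly the point:

            #!/usr/bin/perl
            # Crude capacity estimate: neurons x synapses-per-neuron x bits-per-synapse.
            my $neurons  = 1e11;    # ~100 billion neurons (order of magnitude)
            my $synapses = 1e3;     # ~1,000-10,000 synapses per neuron
            for my $bits (0.25, 1, 8) {    # how much each synapse "stores" -- guesswork
                my $bytes = $neurons * $synapses * $bits / 8;
                printf "%4.2f bits/synapse -> %5.1f TB\n", $bits, $bytes / 1e12;
            }
            # 0.25 bits/synapse ->   3.1 TB   (roughly the 2-4 TB figure above)
            # 8.00 bits/synapse -> 100.0 TB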

    • The storage capacity of the brain is a slippery thing. First off, no one fully understands exactly how it works, but it at least seems that the state of the neurons as well as the state of the connections between them is involved.
      Second, the estimate goes up along with our notion of what counts as big storage. I don't think any serious person would ascribe any sort of byte value to it. But if you think about it, we don't have that much capacity in our minds: our data storage is extremely lossy, and recall largely works by deriving a likely past state from very few details.
    • Yeah, that'll be good: put a back-up copy of your brain on a multi-terabyte (or larger) storage medium and then forget about it and leave it in your car on a hot summer day. Oh! My brain back-up is in my car!

      BTW, how do you back up an exabyte, with an Iomega exo-drive?

      What's up with Constellation 3D and those other guys who are developing TB-capable disks? Has anyone gotten one to market yet, or soon? C3D said on their website that they had an HD-TV recorder working and displayed it at a trade show, but I haven't seen anything about it since.
  • Pentabytes? (Score:4, Funny)

    by mbrubeck ( 73587 ) on Saturday May 11, 2002 @07:29AM (#3501724) Homepage
    Petabytes, please!
  • Does this mean I can stop backing up all my pr0n to CDR?

    No, it doesn't.
  • I look through what he's doing, and I find:

    General Kernel stuff
    Fix all kernel warnings

    All kernel warnings? That's almost like being a firefighter in hell...

  • Well, we have here a RAIDed 60 TB array which runs well under Mac OS X. This is mainly because Darwin is based on FreeBSD. The BSD series comes from the professional/academic Unix world and has had automatic 64-bit support at all levels for nine years or so.
    It's not very surprising that Linux is lacking these features. It's more hobbyist style and still contains some serious design failures, like the missing microkernel Mac OS X has had for some time now.
    Many people here at Slashdot bitch at the academic/professional world, but examples like this show that professional, thoughtful design always pays off in time.
    • On the other hand, approximately 0% of Linux's intended users need 60 TB at this time. As the world and the kernel evolve, this will be fixed if the Linux community needs it.

      Yes, it's hobbyist based. Yes, it's great that FreeBSD supports it. Honourable! But Linux has had more important features to implement before this, because only very few people have had access to these kinds of disks.

      However, 2 TB is not that much, and it's about time Linux supported it.

      • I'm a bit confused about the tone of your message -- it seems like you feel defensive or threatened by the fact that FreeBSD and other OSes have had this capability for a while. No need to feel that way, especially about BSD -- the BSD community has in general not tried to one-up Linux. A lot of beneficial code sharing goes both ways.

        As far as what you actually said, I think we have a chicken-and-egg fallacy here that actually seems to limit the scope of Linux. You say that 0% of Linux's intended users need 60 TB (or >2 TB). But that's just it -- as long as Linux doesn't support 60 TB files, none of the people who need 60 TB files will use Linux. Who is doing the intending here? Is there some group that decides what are "intended" markets for Linux? No, I see people applauded all the time for using Linux in random and completely unintended uses, and it is amazing how many different ways Linux can be used.

        So what are you trying to say anyway -- that it is ok that Linux isn't as good as FreeBSD/OS X because anybody who uses Linux is not going to be worried about big-time stuff anyway? Yuk.

        I think this is a great patch -- it fixes a problem that didn't need to be there and that prevented Linux from entering a fairly important niche. This opens up another group of "intended" users.
      • On the other hand, approximately 0% of Linux's intended users need 60 TB at this time.

        Probably because 100% of users who need 60 TB at this time see that Linux can't do it, and decide to use something else.

    • serious design failures like the missing microkernel
      Missing? You're going to have to be a bit more clear; it's a bit like saying that a car is clearly defective because it doesn't use the type of engine you like.
    • I'm a proud owner of a Mensa membership card.

      Really? Who did you take it from?

      - A.P.
    • Interesting. I recently saw a FreeBSD kernel developer say that anything over 1TB was dangerous on FreeBSD. Other research into using FreeBSD for our fileserver suggested that 2TB was the max size, but probably wouldn't work properly. We did end up with FreeBSD on our fileserver instead of linux (and several 1TB filesystems), but it was more-or-less a flip-of-the-coin thing in the end.

      And what do you mean by "automatical"? Overall, I think your post probably has more propaganda than real experience behind it.

      -Paul Komarek
  • So a 15 TB file must exist on a 15+ TB filesystem = 15,000+ GB.

    Now, the last time I looked, the biggest common HD was a 180 GB Seagate Barracuda, so they would still need nearly 100 of these babies to get to 15 TB, costing well over $100,000, and that's before you get to the power/housing/cooling nightmare.

    Or do they have some fancy way to store bits using thin air that the rest of us don't know of?
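
    For the record, the drive-count arithmetic; it shifts by several drives depending on whether "15 TB" means decimal or binary terabytes:

      #!/usr/bin/perl
      # How many 180 GB drives does a 15 TB filesystem need?
      my $drive = 180e9;                            # a "180 GB" drive, as marketed (decimal)
      printf "decimal TB: %.1f drives\n", 15e12      / $drive;
      printf "binary TB:  %.1f drives\n", 15 * 2**40 / $drive;
      # decimal TB: 83.3 drives
      # binary TB:  91.6 drives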

    • Yup. But divide that number by two every two years, and you'll see that it'll be within small business' price range within a decade.
    • Expensive, but definitely not unheard of. We will have drives much larger than that someday. The best now isn't the best ever, it's in next year's bargain box. It doesn't hurt to plan ahead when designing any framework for a computer system, much less a filesystem. It's ironic that the OS with the best basic design (BeOS) had such a short lifetime.
    • It would be fairly simple to hack up a little block device that would act like a 15 TB FS and look like a normal file. /dev/zero looks like about 800 quadrillion bytes if you read it for long enough. Just as long as the device returned filesystem information where it needed to, it could fill the "files" themselves with repeating patterns. As long as it simulates a 15 TB filesystem realistically, it'd be good enough for testing.

      Of course, they probably actually did it with a real file, but there's no reason it couldn't work this way.
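
      If you go the sparse-file route (one obvious way to fake up a huge backing file for the loopback device), it's easy to see that nothing much is actually written: the file's logical length and the blocks allocated on disk tell two different stories. A small sketch, scaled down to 1 GB so it runs anywhere:

        #!/usr/bin/perl
        # A sparse file occupies (almost) no disk despite its huge logical length.
        my $f = "sparse-demo";
        open my $fh, '>', $f or die "open: $!";
        seek $fh, 1024**3, 0 or die "seek: $!";   # jump 1 GB out...
        print {$fh} "x";                          # ...and write a single byte there
        close $fh;
        my @st = stat $f;
        printf "logical size: %d bytes, allocated: %d KB\n", $st[7], $st[12] / 2;
        unlink $f;                                # st_blocks ($st[12]) counts 512-byte units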
    • Yea, I used 8 DiamondMax 160s with two Promise Ultra133 TX2s and managed to get a filesystem with 1.09TB when formatted with ReiserFS-3. (But I gotta watercool the crap out of the drives, otherwise they cause the internal temperature of the case to jump to about 120 degrees Fahrenheit within 5 minutes. Oy.) I got the basic idea from a slashdot article [slashdot.org] from January. (BTW, the array only cost me $4k to build, including shipping.)

      Now if I can just get the 760-MPX chipset to stop locking up every time the system boots, I'll be happy and finally post benchmarks. :)

      -TheDarAve-
    • Re:Some Disk Array (Score:2, Informative)

      by Scott Laird ( 2043 )
      The point is that you can build a 2+TB system for well under $10k, using 160GB IDE drives and 3ware cards. I have 5 of them, and I've actually had problems -- my first partitioning attempt gave me a 2.06 TB RAID, which mke2fs decided was only 60 GB :-(.

      The next round of storage servers that I buy will probably be even bigger, and it'd be nice to be able to use them as one big partition. Pity that I'll have to wait for 2.6 for that.
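
      Incidentally, 60-ish GB is exactly what you'd expect if the size got squeezed through a 32-bit count of 512-byte sectors somewhere: only the part past the 2 TiB mark survives the wrap. A sketch of the arithmetic, assuming that's what happened here:

        #!/usr/bin/perl
        # What's left of a 2.06 TiB device after its sector count wraps at 2^32.
        my $bytes   = 2.06 * 2**40;            # the 2.06 TB RAID, read as TiB
        my $sectors = int($bytes / 512);
        my $kept    = $sectors - 2**32;        # the low bits that survive the wrap
        printf "%.0f sectors total, %.0f after wrapping = %.1f GB\n",
               $sectors, $kept, $kept * 512 / 1e9;
        # 4423816314 sectors total, 128849018 after wrapping = 66.0 GB -- about the 60 GB mke2fs saw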
  • what is that, 5 bytes? ;-P

  • Great name for a person with size issues.
  • Wow! (Score:5, Funny)

    by gazbo ( 517111 ) on Saturday May 11, 2002 @07:48AM (#3501765)
    Any other patches been submitted to the kernel? Perhaps an off-by-one error has been found; maybe an unchecked buffer has been fixed?

    Keep it up guys - until they create some sort of 'Linux kernel mailing list' the Slashdot front page is my only source for this information.

    • Keep it up guys - until they create some sort of 'Linux kernel mailing list' the Slashdot front page is my only source for this information.

      I suppose you suggest everybody wade through 250 mails/day to find the interesting ones? The logical extension of your argument is that no news sites are needed, because people can do their own research.
  • xfs for linux (Score:5, Informative)

    by mysticbob ( 21980 ) on Saturday May 11, 2002 @07:50AM (#3501773)
    xfs for linux has provided significantly larger than 2 TB filesystems for a while. the official size supported is:

    2^63 bytes = about 9 x 10^18 bytes = 9 exabytes

    check out the feature list. [sgi.com]

    • Been running XFS on a debian install since I got the beta disk at linux world 2000, awesome.
    • My understanding is that the filesystem's supported size is only half the problem; there's a layer that handles disk accesses (the block device layer), and that's probably where the limitation lies.
    • arithmetic? (Score:3, Informative)

      by Anonymous Coward

      For those who wish to communicate with the rest of the world, the following calculations actually make sense:

      • 10^18 bytes = 1 000 000 000 000 000 000 bytes = 1 decimal exabyte = 1 exabyte = 1 EB
      • 2^60 bytes = 1 152 921 504 606 846 976 bytes = 1 binary exabyte = 1 exbibyte = 1 EiB

      For the uninitiated, these terms are described here [cofc.edu]

      Note also that 2^63 != 9 * 10^18 exactly (9223372036854775808 != 9000000000000000000); 2^63 bytes is 8 EiB, or roughly 9.2 EB.
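
      A quick way to double-check these figures, and the rest of the prefix zoo (just the arithmetic):

        #!/usr/bin/perl
        # Decimal (SI) vs. binary prefixes, written out in full.
        for my $row (['kilo', 3, 10], ['mega', 6, 20], ['giga', 9, 30],
                     ['tera', 12, 40], ['peta', 15, 50], ['exa', 18, 60]) {
            my ($name, $dec, $bin) = @$row;
            printf "%-4s  10^%-2d = %-20.0f  2^%-2d = %.0f\n",
                   $name, $dec, 10**$dec, $bin, 2**$bin;
        }
        printf "and 2^63 = %.0f bytes, i.e. 8 EiB or roughly 9.2 EB\n", 2**63;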

      • Why the *$&% was the parent modded up? The day any sane person (as opposed to a hypocephilic metriphile) uses kibibye, mebibyte, gibibyte or any of those thrice-accursed neologisms is the day that the world begins to end.

        Intelligent people have no problem with the idea that a kilobyte has 1,024 characters. Hard drive manufacturers always have, but they are hardly paragons worthy of emulation.

        Stamp out the kibibyte nonsense now, before it gets any further.

    • Re:xfs for linux (Score:2, Insightful)

      by maswan ( 106561 )
      To quote from that page you just linked to:
      Maximum Filesystem Size

      For Linux 2.4, 2 TB. As Linux moves to 64 bit on block devices layer, filesystem limits will increase.

      This is exactly the problem that was addressed by the patch referenced in the story.

  • Once upon a time, I saw a big company producing classified devices for the Soviet military-industrial complex. Of course, the company had an accounting department, and there was a company accounting database. It was a single file about 80 MB long (the typical drive size in those days was 20-40 MB). To simplify the access tasks, the programmers who created the database software decided that all the data from time immemorial were to be kept in this file. The file grows with every operation, and since the data are thought to be needed forever, there is no way to remove irrelevant entries.

    The programmers didn't imagine that in a couple of years the database would be so big that it wouldn't fit on any available HDD.

    Maybe it will be a lesson for some people who are going to misuse filesystem features?
  • Woohoo! Now I have enough space to download every pirated movie before it comes out in theatres! Woohoo! *rips up Star Wars tickets*

  • Only two filesystems, XFS and JFS, seem to really work with disks larger than 2 TB.
  • Files that big (Score:3, Interesting)

    by Alien54 ( 180860 ) on Saturday May 11, 2002 @08:11AM (#3501812) Journal
    I can see certain high resolution videos getting this large.

    But I worry about other data types.

    For example, I grumble at the MS stupidity of putting all data files into one large container file in a database under Access in Windows. Which is why I never use it. I prefer discrete files. If one gets hosed, then it is easier to fix.

    Obviously a database that big would run into other performance issues as well, some of which are handled by Moore's law, and some of which aren't.

    For similar reasons I tend to divide my drive into various partitions, regardless of which OS I use.

    • 9 exabytes, yeah that should be sufficient to hold an Outlook mailbox with a published email address.

      Nooooo! another 150 spam emails and the database will corrupt!
    • Don't be afraid of filesystems that are really databases - you're using one right now. Even the simplest of filesystems (unless it's just a stack of files) has an index of files. Most handle file locking. Most can reorder themselves for efficient lookup of files. So really, most filesystems in existence behave like databases.

      Personally, I'd love to see MS use its Jet (Access) database for their next version of Windows - they'd lose all their market share in five days tops.

  • by Anonymous Coward


    As you may know if you've been following recent IEC [www.iec.ch] and IEEE [ieee.org] standards (or if you've ever bothered to figure out exactly how large a terabyte is), what disk manufacturers call a terabyte and what this article calls a terabyte differ slightly.


    When used in the standard way [nist.gov], the "tera" prefix means 1 * 10^12, so a terabyte would be 1 000 000 000 000 bytes. Unfortunately, computer systems don't use base 10 ("decimal"), they use base 2 ("binary"). When trying to express computer storage capacities, somebody noticed that the SI [www.bipm.fr] prefixes [www.bipm.fr] kilo, mega, giga, tera, and so on (meaning 10^3, 10^6, 10^9, 10^12, ...) were about the same as 2^10, 2^20, 2^30, 2^40, and so on, so used the terms as multiples of 1024 rather than the usual 1000. On the other hand, many hardware manufacturers (especially hard disk manufacturers) use these prefixes in the standard way [nist.gov] to mean exactly multiples of 1000.


    This discrepancy causes some confusion [pels.org]. For instance, if you could afford to purchase such a 2 terabyte hard disk, you might well be annoyed when your system tells you your disk is almost 200 gigabytes (2 * (2^40 - 10^12)) smaller than you thought it would be (most systems would report a 2 terabyte disk as a 1.8 terabyte disk).
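
    To put the same numbers in script form (a quick sketch):

      #!/usr/bin/perl
      # What a "2 terabyte" (decimal, as marketed) disk looks like in binary units.
      my $marketed = 2e12;                           # 2 TB as the manufacturer counts it
      printf "reported as: %.2f TiB\n", $marketed / 2**40;
      printf "shortfall vs 2 TiB: %.0f GB = %.0f GiB\n",
             (2 * 2**40 - $marketed) / 1e9, (2 * 2**40 - $marketed) / 2**30;
      # reported as: 1.82 TiB; shortfall vs 2 TiB: 199 GB = 185 GiB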


    The moral of the story is one of:

    1. don't buy 2 terabyte hard disks (blame the hard disk manufacturers)
    2. complain about it then continue the current ambiguity
    3. use the standard terminology [nist.gov] for binary units



    Interestingly the Slashdot community seems to think [slashdot.org] it should be a combination of 1 and 2.

    • The so-called standard terminology to which you refer ranks among the dumber ideas in history. Metriphiles--world-renowned for their foolishness as they are--cannot grasp the fact that a kilobyte is 1,024 bytes, a megabyte 1,024*1,024 bytes and so on. Naturally, any sane person can deal with this, but there is a tiny-minded sort which cannot.

      The solution is to label hard drives in accordance with the rest of computer technology. A kilobyte is 1,024 bytes, not 1,000. The kibibyte does not exist!

  • fsck times (Score:4, Funny)

    by danny ( 2658 ) on Saturday May 11, 2002 @08:17AM (#3501823) Homepage
    Because fsck would take so long, it's unlikely that a non-journalled filesystem would be used on a large partition/logical volume.
    You can say that again! Fscking even 60 gig takes a painfully long time - with 10 terabytes it wouldn't be "go away and take a long coffee break", it would be more like "go away and read a book". And with the 9 EB limit he mentions, maybe "go away and write a book"!
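
    To put a very rough number on it, assume fsck time scales more or less linearly with the amount of data it has to walk (optimistic), and call the 60 GB case 20 minutes purely for the sake of illustration:

      #!/usr/bin/perl
      # Naive linear scaling of fsck time from a made-up 60 GB / 20 minute baseline.
      my ($base_gb, $base_min) = (60, 20);
      for my $gb (60, 10_000, 9e9) {                # 60 GB, 10 TB, ~9 EB
          my $min = $base_min * $gb / $base_gb;
          printf "%12.0f GB -> %12.1f hours = %10.1f days\n", $gb, $min / 60, $min / 60 / 24;
      }
      # 10 TB comes out around 55 hours; 9 EB around 5,700 years. Bring a long book.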

    Danny.

    • Simple solution, use softupdates or a journaling filesystem.

      I prefer the former myself.
    • by orcwog ( 526336 )
      Fscking even 60 gig takes a painfully long time

      Wow, fsck used to mean fsck, and not....uh...ahem

      You know you read /. (and Penny Arcade) too much when you read that and think about Gabe putting a harddrive down his pants.

  • While not on the actual Linux box, what about sizes of very large (e.g. >2.1 TB) NFS mounts?
  • Imagine if the Truman Show (like in the movie) was recorded as one huge MPEG video - you could store it on one of these! :-)

    You could fit movies of everything anyone's ever seen on a Beowulf cluster of these filesystems!
    • Curious....

      I forget the specifics, but let's say 30 years x an average of 400 cameras (the number grew as the show got larger) x good quality DivX (.3MB/sec).

      That works out to roughly 110 PB for the Truman Show (0.3 MB/s x 86,400 s/day x 365 days x 30 years x 400 cameras).

      I'm sure there is a lot of cruft that isn't required -- like 8 hours a night, among other things, and you only actually need a few of the cameras recording at any given time -- but the raw take is still petabytes for decent viewing quality.

      I also left out any additional channels dedicated to describing it, extra sound channels for announcers, etc.
    • Hmm, let's say you're storing it in standard VCD format, which is 1150kbps, with sound at 192kbps; that's 167.75 KB per second of film.

      That's about 13.8 GB per day. Let's say Truman lives until he's 80. That's about 402,500 GB of film, which is what? 393 TB? And that's just assuming one camera :)
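
      For anyone who wants to fiddle with the assumptions, the same arithmetic as a little script (the bitrate and the 80-year run are just the guesses above):

        #!/usr/bin/perl
        # One camera at a VCD-ish bitrate, recorded continuously for 80 years.
        my $kbps     = 1150 + 192;             # video + audio, kilobits per second
        my $bytes_s  = $kbps * 1000 / 8;       # 167,750 bytes/s (the "167.75 KB" above)
        my $per_day  = $bytes_s * 86_400;
        my $lifetime = $per_day * 365 * 80;
        printf "per day:  %.1f GiB\n", $per_day / 2**30;
        printf "80 years: %.0f TB (decimal) = %.0f TiB\n", $lifetime / 1e12, $lifetime / 2**40;
        # per day: 13.5 GiB; 80 years: 423 TB = 385 TiB -- the same ballpark as the ~393 TB above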
    • you could also keep a local cache of everything on Kazaa (provided that you manage to download it)
  • No problem (Score:2, Funny)

    by Beliskner ( 566513 )
    Multi-terabyte files. Hmmmm.

    Problem solved: Use lzip [sourceforge.net]

    MBA Managers won't notice ;-)

    For the hardcore, we can build lzip into the FS. So we'll have Reiserfs, ext2, ext3, JFS, and lzipFS. Heck lzipFS might be faster than RAM!

    • Only if you can "tune" your lzipFS and trade compression for speed. Something like:

      tunelzipfs -c [compression %] /dev/sda1
      • Only if you can "tune" your lzipFS and trade compression for speed. Something like:

        tunelzipfs -c [compression %] /dev/sda1
        Uhhh, dude, I was kidding, *someone please mod parent as funny before other innocent people get confused*. Lzip is lossy compression. With a MySQL database or similar this would REALLY test the recovery features. Since MySQL doesn't attach a CRC to each field to ensure field data integrity, you might as well set lzip to 100% compression.

        In other words, when you try to save a file to lzipFS it might as well return "yeah" immediately. You tell lzipFS to fsync() and it'll return "yeah" immediately:

        class lzipFS {
            .....
            long int fsync() {
                // cache->doflush(); /* what we save will be lossy, so what's the point? */
                return YEEEEAH_FSYNC_SUCCESSFUL;
            }
            .....
        };

  • by nr ( 27070 )
    Peter explains, "Linux is limited to 2TB filesystems even on 64-bit systems, because there are various places where the block offset on disc are assigned to unsigned or int 32-bit variables."

    From the Linux Kernel mailinglist on the status of XFS merge into 2.5:

    I know it's been discussed to death, but I am making a formal request to you to include XFS in the main kernel. We (The Sloan Digital Sky Survey) and many, many other groups here at Fermilab would be very happy to have this in the main tree. Currently the SDSS has ~20TB of XFS filesystems, most of which is in our 14 fileservers and database machines. The D-Zero experiment has ~140 desktops running XFS and several XFS fileservers. We've been using it since it was released, and have found it to be very reliable.

    Uh, so Peter Chubb says there is a 2 TB limit, but these science guys at Fermilab are using Linux with 20 TB of XFS filesystems via the SGI XFS port.

  • Or is that _still_ at a meager 2GB limit?
  • Here is a (somewhat incomplete) answer to the two questions everyone seems to have about 2TB of data:

    1) Where would you store it?
    Well, you could store it in a holographic Tapestry drive [inphase-technologies.com]. The prototype, just unveiled a few months ago, stores 100GB on a removable disk, and that is nowhere near the maximum density of the technology. In their section on projections for the tech, they say that a floppy-sized disk should hold about 1TB in a couple of years. Impressive.

    2) What would you do with it?
    Well, other than high-definition video or scientific experiments, nothing on your own PC, unless you are making a database of all the MP3s ever made or backing up the Library of Congress. But on a file server, you could easily use this much space. The 2TB limit will probably never affect most home users (realizes he will be quoted as an idiot in 10 years when 50TB HDs are standard). On the other hand, Tapestry will probably be useful in portable devices, especially video cameras.
  • by Jah-Wren Ryel ( 80510 ) on Saturday May 11, 2002 @10:34AM (#3502182)
    Reaching beyond terabytes, beyond pentabytes, on into exabytes

    Woohoo! A filesystem on a tape drive, that's what I need.
    • There have been solutions for Windows that do that for many, many years.

      I had a 2 GB DAT streamer working under Win98 with a special program that presented the tape as just another drive letter. It was really cool, except for the latency ;)
  • It seems to me that it would be more practical to make the file storage and management system be *independent* of the OS. This would allow storage companies to get economies of scale by not having to worry about OS-specific issues.

    The "native" disk storage could be used as a kind of cache. The "big fat" storage would be like a *service* that could be local or remote. The OS would not care. It simply makes an API call to the "storage service".
    • That's what you do sometimes.

      But the storage device needs to run on something. It needs to have an IP stack, a network card driver, filesystem support etc, and so it needs an OS.

      • (* But the storage device needs to run on something. It needs to have an IP stack, a network card driver, filesystem support etc, and so it needs an OS. *)

        Maybe a "controller" of some sort. I was thinking that any networking would be handled by a "manager OS", but not by the controller itself. The manager OS would not be using its own file system. IOW, the manager OS might still have to be local to the controller. However, if you have a direct connection between the disk system and the application's OS, then you would not need a separate manager OS for networking.

        I suppose there are a lot of different ways to partition it all. My point is that a big file/disk system can exist independently of the OS, so that even Windows 3.1 could access huge amounts of storage without having it built into the OS.
  • by Webmoth ( 75878 ) on Saturday May 11, 2002 @10:52AM (#3502250) Homepage
    Looks like we'll have to come up with a different naming scheme. Someone's already trademarked the exabyte [exabyte.com].

    Couldn't it weaken the trademark to have Western Digital or Seagate making a '9 exabyte' hard drive? Or HP or Sony making an 'exabyte-class' tape drive? Wouldn't a judge find (in favor of Exabyte) that the consumer would easily be confused?

    *The USPTO are idiots.*
  • Remember all the hell when the world moved from 16 bit to 32 bit? All sorts of lazy code was broken. And here we go again. This isn't a Linux thing or a Windows thing; it's just the basic nature of human beings.

    The good news is, once we move to a 64-bit processor, that's it. We'll correct the code one more time and that's the end of it, since 64 bit ints are sufficient for any imaginable program.
    • I've already moved to 64bits with my IA64 box. In fact I'm about to add another 2TB to my 1TB RAID, I may have to do a bit of digging now though. Shame that, I actually need at least 15TB when the system goes live. Could always move it onto my AIX system though, that has at least 370TB on HSM.
    • since 64 bit ints are sufficient for any imaginable program.

      That's just like saying, no one would ever need more than 640K.

  • OK, lots of usable storage space is good, but what about the time epoch? Won't it run out in 2036 or something?
    When's that going to be fixed?
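
    For what it's worth, the usual 32-bit signed time_t runs out a little later than that, in January 2038. A quick check (assuming a perl whose gmtime accepts these values):

      #!/usr/bin/perl
      # The largest value a signed 32-bit time_t can hold, and the date it lands on.
      my $max = 2**31 - 1;                       # 2147483647 seconds past the epoch
      print scalar gmtime($max), "\n";           # Tue Jan 19 03:14:07 2038
      print scalar gmtime(-2**31), "\n";         # ...and the date a 32-bit time_t wraps to:
                                                 # Fri Dec 13 20:45:52 1901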
