Data Storage | Software | Linux

Linux Gains Lossless File System

Anonymous Coward writes "An R&D affiliate of the world's largest telephone company has achieved a stable release of a new Linux file system said to improve reliability over conventional Linux file systems, and offer performance advantages over Solaris's UFS file system. NILFS 1.0 (new implementation of a log-structured file system) is available now from NTT Labs (Nippon Telegraph and Telephone's Cyber Space Laboratories)."

Comments Filter:
  • Bloat? (Score:3, Insightful)

    by shadowknot ( 853491 ) * on Tuesday October 04, 2005 @10:07AM (#13712789) Homepage Journal
Please correct me if I'm wrong here, but wouldn't a log that is only appended to and never overwritten cause a massive amount of bloat after a period of prolonged use?
    • Re:Bloat? (Score:3, Interesting)

      by Ayanami Rei ( 621112 )
      There is a "cleaner" that is on the TODO list.
Of course, you can delete files and re-use the space. But performance slows down greatly once you start filling in "holes" left in the log after wrapping to the end of the allocated area (a situation similar to a database, where you might want to compact, vacuum, or condense a table).
hm, might be interesting (read: exciting but not necessarily useful) to implement this in some sort of really huge circular buffer, where old files are simply overwritten when their time is up.
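That circular-buffer idea can be sketched in a few lines of Python (purely illustrative; this is not how NILFS reclaims space): a fixed-capacity ring where the oldest records are silently overwritten once the log wraps.

```python
# Hypothetical sketch of the circular-log idea from the comment above.
class CircularLog:
    def __init__(self, capacity):
        self.capacity = capacity   # max number of records kept
        self.records = [None] * capacity
        self.next = 0              # next slot to write
        self.count = 0             # records currently held

    def append(self, record):
        self.records[self.next] = record        # overwrite the oldest slot
        self.next = (self.next + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def replay(self):
        """Yield surviving records, oldest first."""
        start = (self.next - self.count) % self.capacity
        for i in range(self.count):
            yield self.records[(start + i) % self.capacity]

log = CircularLog(capacity=3)
for i in range(5):
    log.append(f"write #{i}")
print(list(log.replay()))   # ['write #2', 'write #3', 'write #4']
```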
    • Aside from gasoline and water, data is the most valuable thing in the world.

Walmart's most prized possession is their billion-billion-billion transaction customer sales database. They use it to find patterns like, among other things, that men tend to buy beer and diapers at the same time.

      With disks costing $1.00/GB or less these days, many people including myself simply DON'T delete data anymore. I keep all my original digital photos (in .tiff format) along with full-quality movies and all the games I've ever played b
    • Re:Bloat? (Score:5, Informative)

      by ivan256 ( 17499 ) * on Tuesday October 04, 2005 @10:25AM (#13712999)
I wrote a (unfortunately, closed source) filesystem that was remarkably similar to this once. Generally these types of filesystems are used when you're constantly writing new data. You're going to be eating the space anyway, but you want the reliability of synchronous writes with the performance of asynchronous cached writes. Reading from these filesystems is incredibly slow in comparison.

The version I wrote took advantage of the client's bursty IO pattern and used the slow periods to offload the data to an ext2 filesystem on a separate disk. Hopefully your system memory was large enough that the offload to the secondary filesystem happened without any disk reads. Once that was done, the older sections of the log could be re-used... but only once the disk filled up and wrapped back to the beginning, because you want to keep your writes sequential (essentially... there are other timing tricks you can play to get more speed).

      There's been lots of research done on this method of write structuring. Look for papers on the "TRAIL [sunysb.edu]" project (also closed source), for example.
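For readers unfamiliar with the tradeoff ivan256 describes, here is a minimal Python sketch of the write side (the workload and record format are assumptions, not taken from his filesystem): because the log is append-only, every write can be synchronous-safe and still sequential.

```python
import os

# Sketch (assumed record format): synchronous, append-only writes.
# Each record is durable once fsync() returns, yet the disk head
# never leaves the tail of the log, so the usual seek cost of
# "safe" writes largely disappears.
def append_record(fd, payload: bytes):
    os.write(fd, len(payload).to_bytes(4, "big") + payload)  # length-prefixed record
    os.fsync(fd)   # durable before we acknowledge the write

fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
for i in range(3):
    append_record(fd, f"event {i}".encode())
os.close(fd)
```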
    • Re:Bloat? (Score:2, Informative)

      by b100dian ( 771163 )
Actually, this is a journal filesystem, as opposed to a journaling one. That is, each file is a journal.
    • Actually.

      Log file systems are faster, safer, and just better. Period.
I usually put /var/log on a separate partition anyhow. The easy solution would be to put this FS just on the areas that you want to be "lossless" and leave the rest with standard filesystems.
Because old stuff can be overwritten when you fill the whole disk. As I mentioned in another posting, data writes are Real Fast in log filesystems, but data reads are Real Slow.

      The biggest problem of this filesystem [nilfs.org] (link is missing from the original posting) is that it's Not Really Ready (among other important stuff, mmap() is not implemented yet).
    • Re:Bloat? (Score:3, Informative)

      by DrSkwid ( 118965 )
for some of us, our whole file system is append-only:

      http://cm.bell-labs.com/sys/doc/venti/venti.html [bell-labs.com]

      http://cm.bell-labs.com/magic/man2html/8/venti [bell-labs.com]

Sean Quinlan [bell-labs.com] now works at Google; I'm not sure if Sean Dorward does, but it seems most of the other people who built Plan 9 at Bell Labs do.
  • New Improved? (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Tuesday October 04, 2005 @10:07AM (#13712792) Journal
    The article was a bit light on details. Perhaps someone could enlighten me as to exactly why this is better than existing log-structured filesystems, such as NetBSD's LFS.
    • Re:New Improved? (Score:3, Informative)

      by cowens ( 30752 )
      On its project page LFS is listed as a related project.
    • The article was a bit light on details. Perhaps someone could enlighten me as to exactly why this is better than existing log-structured filesystems, such as NetBSD's LFS.

Log structures are susceptible to termites, carpenter ants, and various forms of rot.

    • Re:New Improved? (Score:5, Informative)

      by Feyr ( 449684 ) on Tuesday October 04, 2005 @10:54AM (#13713299) Journal
the why depends on your application,

for common servers or day-to-day use, it isn't.

but notice how this was developed by a telecom company? a log-structured filesystem is perfect or even required, due to speed and integrity constraints (depending on the size of the network), when you're dealing with billing and monitoring data on a telecom network. you want something that's simple and extremely resistant to failures. a complete system crash (which never happens, short of nuking the box) should result in no data loss, or at the extreme minimum, and you should be able to recreate that data from somewhere else (eg, the other endpoint in a telephone network).

a log-structured filesystem allows this: the "head" is never over previous data in normal operation. you don't typically read the data back until the end of a cycle (whatever that cycle may be) or in a debugging situation. you simply append to the end, minimizing head movement and thus increasing mtbf (replacing a disk in those things is costly).

this is also extremely useful for logging to WORM media (write once, read many), mostly for security logs. you don't want a hacker to be able to remove them, no matter what they do. (a software approximation of that tamper evidence is sketched below.)
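The tamper evidence wanted from WORM media can be approximated in software with a hash chain. A hedged sketch with an invented record format: real WORM guarantees come from the hardware, and the latest digest still has to be anchored somewhere an attacker can't reach.

```python
import hashlib

# Each record is chained to the digest of everything before it,
# so editing any earlier record breaks verification of the tail.
def append_entry(entries, message):
    prev = entries[-1][0] if entries else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    entries.append((digest, message))

def verify(entries):
    prev = "0" * 64
    for digest, message in entries:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "login root from 10.0.0.5")
append_entry(log, "rm -rf /var/log attempted")
print(verify(log))                    # True
log[1] = (log[1][0], "all quiet")     # attacker edits a record in place...
print(verify(log))                    # ...False: the chain catches it
```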
    • As mentioned on the LFS website, there were a number of attempts to implement a log-based filesystem for Linux. Some did work for the kernels they were written for, others do not appear to have worked at all. Virtually all were abandoned - and, to judge from my own research and that of the people who wrote the LFS website, many were abandoned around the Linux 2.2 era.

      I guess you can argue that if a project is actively maintained, any problems are potentially fixable. Even with Open Source, an abandoned proj

  • Horrible headline (Score:5, Insightful)

    by Quasar1999 ( 520073 ) on Tuesday October 04, 2005 @10:08AM (#13712796) Journal
A lossless file system? Good lord... I most certainly hope all the existing file systems out there are not lossy. I have hundreds of gigabytes of data that I don't want to lose.

    Or is this filesystem somehow able to recover data once the hard drive crashes? That would be neat...
    • Re:Horrible headline (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Tuesday October 04, 2005 @10:23AM (#13712976) Journal
      The title was written by a numpty. This is a log-structured filesystem. These systems have been around for ages. NetBSD has LFS (originally from 4.4BSD), and I believe Minix also had some form of log-structured filesystem.

      A log-structured filesystem doesn't modify existing files. Every time you write to the disk, you simply append some deltas. This gives very good write performance, but poor read performance (since almost all files will be fragmented, and the entire log for that file must be replayed to determine the current state of the file). To help alleviate this, most undergo a vacuuming process[1], whereby the log is replayed, and a set of contiguous files is written. This also frees space - something that is not normally done since deleting a file is done simply by writing something at the end of the log saying it was deleted. In addition to the good write performance, log-structured filesystems also have an intrinsic undo facility - you can always revert to an earlier disk state, up until the last time the drive was vacuumed.

      The snapshot facility is not particularly impressive. It's a feature intrinsic to log-structured filesystems, and also available in other filesystems (such as UFS2 on FreeBSD and XFS on Linux). The performance advantage claims must be taken with a grain of salt - write performance for log-structured filesystems is always close to the theoretical maximum of the disk, but this is at the expense of some disk space, and read speed (although LFS did beat UFS in several tests on NetBSD).

      [1] This is usually done in the background when there is little or no disk activity.
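A toy model of the replay, delete-as-append, and undo behaviour described above (the record format is invented; NILFS's on-disk structures are far more elaborate):

```python
# The log is append-only; "current state" is whatever replaying it
# yields, and replaying only a prefix yields an older snapshot.
log = []   # each entry: ("write", name, data) or ("delete", name)

def append(op):
    log.append(op)

def replay(upto=None):
    """Rebuild the namespace from the log; upto=N gives the state
    as of log entry N, i.e. a free snapshot/undo point."""
    files = {}
    for op in log[:upto]:
        if op[0] == "write":
            files[op[1]] = op[2]
        elif op[0] == "delete":
            files.pop(op[1], None)   # delete is just another log record
    return files

append(("write", "a.txt", "v1"))
append(("write", "a.txt", "v2"))
append(("delete", "a.txt"))
print(replay())        # {} -- a.txt is gone
print(replay(upto=2))  # {'a.txt': 'v2'} -- rolled back before the delete
```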

      • by addaon ( 41825 ) <addaon+slashdot@nOsPAM.gmail.com> on Tuesday October 04, 2005 @10:48AM (#13713230)
It should be said that "good write performance, bad read performance" is essentially the point, not a defect. It's easy to speed up reads a huge amount through caching; these days 100MB+ of UBC isn't rare. But when you have to write, you have to write (for reliability reasons); this can't be cached in memory, so it should be optimized for. The goal here is to make BOTH operations as fast as possible, though one is made fast at the disk layer and one is made fast above it.
Where I work, we are looking to do the same thing within document file formats.
        • I hope you deal with very large documents. If you are dealing with things under a couple of MBs then you will get better performance by overwriting the entire file than by writing small chunks (at least, on mechanical hard disks - it's different if you are using Flash). Hard disks are best at transferring large amounts of contiguous data - small reads and writes can cripple their performance.
For ASCII text, the easiest way to maintain a log system is to use a revision control system such as RCS, CVS, Subversion, etc. Essentially, that's all these systems are, except that they usually support forks in the log; they're not strictly linear collections of diffs.

          For binary document formats (eg: MS Office's .doc format), things get tougher. There are versions of diff that'll work on binary files (which is why you can get binary patches), but it's more common to see logging done as a series of macros wher
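The "linear collection of diffs" model is easy to demonstrate with Python's standard library: difflib can store one revision step as a delta and recover either endpoint from it (illustrative only; RCS's actual delta format differs).

```python
import difflib

# One revision step stored as an ndiff delta; either side of the
# step can be restored from the delta alone.
v1 = "hello\nworld\n".splitlines(keepends=True)
v2 = "hello\nbrave new world\n".splitlines(keepends=True)

delta = list(difflib.ndiff(v1, v2))            # the stored revision step

assert list(difflib.restore(delta, 1)) == v1   # roll back
assert list(difflib.restore(delta, 2)) == v2   # roll forward
```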

      • The title was written by a numpty

        I learned a new word today!
It depends on your reference. It is lossless compared to FAT and NTFS. In that context, it is a huge breakthrough...
  • So... (Score:5, Funny)

    by Juiblex ( 561985 ) on Tuesday October 04, 2005 @10:08AM (#13712803)
    If it is lossless, I won't be able to store MPEG, XVid, JPEG and MP3 on it anymore? :(
    • Re:So... (Score:2, Funny)

      by valeriyk ( 914993 )
      No, but you can use the soon to be released MILF 1.0 file system for your jpg and mpg needs.
      • Re:So... (Score:3, Funny)

        by BottleCup ( 691335 )

        No, but you can use the soon to be released MILF 1.0 file system for your jpg and mpg needs.

        Now that's one filesystem I would like to fsck upon every boot(y) ;)

    • Re:So... (Score:2, Funny)

      by Iriel ( 810009 )
      I wouldn't have space for that on a Linux box after my (practically monthly) regular download of $distro['foo']!

      That's why I have Windows, because I can afford to lose what's on it ^_^
</tongueincheek>
  • Old news (Score:5, Funny)

    by Anonymous Coward on Tuesday October 04, 2005 @10:08AM (#13712804)
    Websites with MILFS have been around for years.

    Oh, wait. NILFS. My bad.
  • Database Servers (Score:5, Insightful)

    by mysqlrocks ( 783488 ) on Tuesday October 04, 2005 @10:09AM (#13712819) Homepage Journal
    Log-structured filesystems write down all data in a continuous log-like format that is only appended to, never overwritten. The approach is said to reduce seek times, as well as minimizing the kind of data loss that occurs with conventional Linux filesystems.

    This sounds a lot like how database servers work. They keep both a log file and a database file. The log file is continuously written to and is only truncated when backups occur.
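A minimal sketch of that database pattern (file names and record format are made up): changes are appended to a write-ahead log and made durable immediately; the log is only truncated once a checkpoint has folded it into the main store.

```python
import json, os

def log_write(key, value, logf="wal.log"):
    with open(logf, "a") as f:
        f.write(json.dumps({"k": key, "v": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())      # record is durable before we acknowledge it

def checkpoint(storef="store.json", logf="wal.log"):
    store = {}
    if os.path.exists(storef):
        with open(storef) as f:
            store = json.load(f)
    if os.path.exists(logf):
        with open(logf) as f:
            for line in f:        # replay every logged change over the store
                rec = json.loads(line)
                store[rec["k"]] = rec["v"]
        with open(storef, "w") as f:
            json.dump(store, f)
        os.truncate(logf, 0)      # safe now: the store holds everything

log_write("balance:alice", 100)
checkpoint()                      # like "truncated when backups occur"
```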
    • Has anyone considered the privacy implications of this yet?

      Not sure I like logs listing that 3 years ago, I had a file named bad_kiddie_pr0n.jpeg (or whatever) on my computer.

      They'd better have a good cleanup script!

      --LWM
  • by digitalgimpus ( 468277 ) on Tuesday October 04, 2005 @10:10AM (#13712825) Homepage
    Will there be a Windows Driver?

If there isn't, this has no chance of taking off. Consumers today want portability. They don't like lock-in. A Linux-exclusive format is lock-in.

    Create a good windows (and Mac OS) driver, and it's got massive potential.
    • by pesc ( 147035 ) on Tuesday October 04, 2005 @10:16AM (#13712895)
      Consumers today want portability. They don't like lock-in.
That's unfortunately not true, as proved by all the people using NTFS (or Office).
    • by reynaert ( 264437 ) on Tuesday October 04, 2005 @10:17AM (#13712915)

      Will there be a Windows Driver? If there isn't, this has no chance on taking off.

      Yes, that's why I only use FAT filesystems on my Linux server.

      • by thc69 ( 98798 )
        Will there be a Windows Driver? If there isn't, this has no chance on taking off.

        Yes, that's why I only use FAT filesystems on my Linux server.
        You're probably joking, but fyi... There's at least one driver for mounting ext2 fs in windows: ext2fsd. If you don't need to mount it, explore2fs works well too.
    • Huh?

      Say, what?

The following is left as an exercise for the reader:
1. Please list all the Linux file systems available.

2. Please list all the Linux file systems available with read/write support in both Linux and Windows.

      3. Please add up the total amount invested by various corporations in the development of the file systems listed in #1.

      Please don't forget that although you may use differently tweaked filesystems between servers and desktops, there is a great deal of overlap. Linux as a desktop system may no
  • Stable? (Score:5, Informative)

    by theJML ( 911853 ) on Tuesday October 04, 2005 @10:11AM (#13712839) Homepage
I like how they say it's reached a stable release, but if you look at the known bugs on the Project Home Page http://www.nilfs.org/ [nilfs.org] you'll see that:

    The system might hang under heavy load.

    The system hangs on a disk full condition.
Aren't those kind of important when you're calling something stable?

  • by Work Account ( 900793 ) on Tuesday October 04, 2005 @10:12AM (#13712850) Journal
    NILFS is a log-structured file system developed for the Linux kernel 2.6. NILFS is an abbreviation of the New Implementation of a Log-structured File System. A log-structured file system has the characteristic that all file system data including metadata is written in a log-like format. Data is never overwritten, only appended in this file system. This greatly improves performance because there is little overhead regarding disk seeks. NILFS also has the following specific features:

            * Slick snapshots.
            * B-tree based file and inode management.
            * Immediate recovery after system crash.
            * 64-bit data structures; support many files, large files and disks.
            * Loadable kernel module; no recompilation of the kernel is required.
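The "immediate recovery after system crash" bullet is worth unpacking. In a log-structured design, recovery can amount to scanning the log forward and stopping at the first record that fails its checksum. A hedged sketch (the record format here is invented, not NILFS's on-disk layout):

```python
import struct, zlib

# Each record carries a CRC, so after a crash we replay forward and
# stop at the first record that fails to verify -- no full-disk fsck.
def write_record(f, payload: bytes):
    f.write(struct.pack(">II", len(payload), zlib.crc32(payload)) + payload)

def recover(path):
    valid = []
    with open(path, "rb") as f:
        while True:
            hdr = f.read(8)                     # 4-byte length, 4-byte CRC
            if len(hdr) < 8:
                break
            length, crc = struct.unpack(">II", hdr)
            payload = f.read(length)
            if len(payload) < length or zlib.crc32(payload) != crc:
                break                           # torn write at the tail: stop
            valid.append(payload)
    return valid                                # consistent prefix of the log

with open("log.bin", "wb") as f:
    write_record(f, b"checkpoint 1")
    write_record(f, b"checkpoint 2")
    f.write(b"\x00\x07garbage")                 # simulate a torn tail write
print(recover("log.bin"))                       # [b'checkpoint 1', b'checkpoint 2']
```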
  • NTFS (Score:2, Interesting)

    " When the system reboots, the journal notes that the write did not complete, and any partial data writes are lost. "

    Isn't this similar to NTFS's journaling file system?

  • Bundling (Score:3, Interesting)

    by superpulpsicle ( 533373 ) on Tuesday October 04, 2005 @10:15AM (#13712883)
If they are serious about a filesystem, it has to be bundled with the Linux distros in every release. Take Reiser and JFS, for example: some distros have them, some don't. Not every release of the same distro has them; what a mess. Only two have stayed permanent: ext2 and ext3. Everything else is trendy.

    • I've never used anything but reiserfs. If a distro won't support it, I won't use that distro, simple as that. It's a really nice filesystem.
If they are serious about a filesystem, it has to be bundled with the Linux distros in every release. Take Reiser and JFS, for example: some distros have them, some don't. Not every release of the same distro has them; what a mess. Only two have stayed permanent: ext2 and ext3. Everything else is trendy.

      Reiser and JFS have been in the mainline kernel since umm, I think early 2.4. They were put in around the same era that ext3 showed up in the mainline kernel.

      I don't know about you but I never use the included

  • More file integrity is always good. Ever since journaling file systems became available I just started turning the power off to my computers (via a power strip) rather than going through the shutdown command. It never made sense to me that we'd have to "shut down" as opposed to just turning the thing off.
    • by pesc ( 147035 ) on Tuesday October 04, 2005 @10:24AM (#13712998)
      Ever since journaling file systems became available I just started turning the power off to my computers (via a power strip) rather than going through the shutdown command.

That's a very bad idea. Normally, journaling file systems only guarantee that the file/directory structure remains intact. They do not necessarily guarantee that the data in the files hit the disk. Also, your disk will probably have a cache that is lost when you remove power; whatever is in that cache will be lost too.

      So your file system may be intact, but your practices will probably destroy data.
...and it is actually very bad not to have proper ACPI support, because if your model is supported by your operating system's ACPI, then pressing the power button will shut the operating system down by itself, Linux included. No loss of data, a correct shutdown, and no waiting around for it to happen.

But if it is _not_ supported - well, it could be very bad for your laptop, for example. I know a lot of laptops without Linux support for this will overheat, and that can cause big trouble. With prope
    • Wow, just wow. Seriously, I really, really hope you don't have any important data.

Here's a little (simplified) tutorial on what happens when a program writes a file to disk:

      1. It goes into the OS filesystem cache. This is often several hundred MBs. At some point, the OS decides it is not important enough to be kept around. At this point,
      2. it is written to the hard drive. Here, it sits in the hard drive controller's on-board cache. When the cache is full,
      3. it is written to disk.
        • At any given time, yo
      • by 0xABADC0DA ( 867955 ) on Tuesday October 04, 2005 @12:29PM (#13714170)
        Close, but no cigar:

        1. It goes into the OS filesystem cache. After 5 seconds the modified data gets flushed to the disk (sometimes set to 30 sec).
        2. It is written to the hard drive. Here, it sits in the hard drive controller's on-board cache until the head arrives at the write point, which is a fraction of a second.
        3. It is written to disk.

So it *can* happen that data is not written properly, but unlike the scary picture you paint, it is extremely unlikely. Even if you just saved your data, just do a sync and you'll be fine turning the power off.
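For anyone following the exchange above, this is what "just do a sync" looks like programmatically; a sketch using Python's standard library (file names are illustrative, and whether fsync() reaches the drive's on-board cache depends on the drive and kernel configuration):

```python
import os

fd = os.open("important.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"data I cannot lose\n")
os.fsync(fd)               # flush the OS page cache for this file and
                           # ask the drive to commit it to stable storage
os.close(fd)

dirfd = os.open(".", os.O_RDONLY)
os.fsync(dirfd)            # make the directory entry durable too
os.close(dirfd)

os.sync()                  # the "just do a sync" from the comment above
```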
  • by totallygeek ( 263191 ) <sellis@totallygeek.com> on Tuesday October 04, 2005 @10:18AM (#13712919) Homepage
    I installed this lossless file system. rm is now chmod 444. I have not been able to lose information since.

    Note: instead of modding this +1 funny, mod it +0.1 pathetic.

  • ...but I want a business card that says I work at "Cyber Space Laboratory"!
  • by davegaramond ( 632107 ) on Tuesday October 04, 2005 @10:23AM (#13712977)
I'd looove to replace ext2/3, which I've used as my filesystem for years, since it's not so fast and most distros don't enable b-tree directory indexing for ext3 (so large directories are slow). Unfortunately I haven't been able to do so. Here are my requirements:

    1. Distro support. I don't want to have to compile my own kernel. The FS needs to be supported by the distro (Debian in this case). I want to be able to create root partition and RAID with the FS.

    2. ACL and extended attributes.

    3. extended inode attributes would be nice ("chattr +i" is handy sometimes).

    4. optionally I would like to be able to create large Bestcrypt partitions (e.g. 30GB) with that FS.

5. fast performance with large directories and small files (I have millions of small files on my desktop).

    6. no need to fsck or fast fsck (i.e. journalling or some other technique or whatever).

    7. disk quota!

    8. optionally, transparent compression and encryption will be a big plus point.

    9. Snapshots would be nice too, for consistent backups.

    10. Versioning is also very welcome.

    XFS: very close but it still has problems with #4. It also doesn't have undelete like ext2/ext3 (not that it's a requirement though).

    JFS: it just lacks many features.

Reiser3: How's the quota support, do you still have to patch the kernel every time? Plus it doesn't have ACLs.

    Reiser4: not ready yet.

    I might have to look at FreeBSD after all. Background fsck, hmm....
    • by metamatic ( 202216 ) on Tuesday October 04, 2005 @10:32AM (#13713074) Homepage Journal
      Reiser3 works fine on Debian with no kernel patching required.

      It seems as if you're holding out for perfection, not willing to upgrade from ext3 to anything else unless you find The Perfect Filesystem. I think that's kinda silly; better to get 90% of what you need now, than to wait another 2-4 years, surely?
    • by m50d ( 797211 ) on Tuesday October 04, 2005 @10:38AM (#13713134) Homepage Journal
Reiser3: How's the quota support, do you still have to patch the kernel every time? Plus it doesn't have ACLs.

It does have ACLs, and quota support is fine, at least in gentoo kernels (can't check a vanilla one atm)

    • 8. optionally, transparent compression and encryption will be a big plus point.

      9. Snapshots would be nice too, for consistent backups.

      10. Versioning is also very welcome.

      I sure hope that none of these things are ever part of the filesystem itself. I want my filesystems 100% portable, and fast. You know why NTFS isn't so much, right? All the extra, nearly useless features that should be handled by the OS, but that are done by the file system instead.

      These should be layers on top of the file system that ar
      • 9 is basically a cron job

        Ummm, no.

9 (snapshots) is a very important feature that makes it possible to create a cron job to do nice, consistent backups easily. Unless you don't mind writing a cron job that remounts the fs as read-only before doing the backup... an approach that's likely to cause one or two small problems.

        That said, snapshotting doesn't need to be implemented in the file system. LVM, for example, implements it below the file system, at the level of a block device. That approach has
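For readers wondering how block-level snapshots work below the filesystem, here is a greatly simplified copy-on-write sketch in the spirit of LVM (invented interface, not LVM's actual mechanism): before a block is first overwritten, its old contents are copied aside so the snapshot stays consistent.

```python
class Volume:
    def __init__(self, nblocks):
        self.blocks = {i: b"\x00" for i in range(nblocks)}
        self.snapshot = None        # block number -> saved old contents

    def take_snapshot(self):
        self.snapshot = {}          # starts empty: nothing has diverged yet

    def write(self, blockno, data):
        if self.snapshot is not None and blockno not in self.snapshot:
            self.snapshot[blockno] = self.blocks[blockno]   # copy-on-write
        self.blocks[blockno] = data

    def read_snapshot(self, blockno):
        if self.snapshot is not None and blockno in self.snapshot:
            return self.snapshot[blockno]   # preserved pre-snapshot data
        return self.blocks[blockno]         # unchanged since the snapshot

vol = Volume(4)
vol.write(0, b"old")
vol.take_snapshot()
vol.write(0, b"new")
print(vol.read_snapshot(0))   # b'old' -- the backup sees a frozen view
print(vol.blocks[0])          # b'new' -- the live volume keeps changing
```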

    • I might have to look at FreeBSD after all. Background fsck, hmm....

      It might be worth it:

      1. Not really an issue, since there aren't really FreeBSD distros. UFS2 is supported by NetBSD, however.
      2. Yup. Both available.
      3. Yup. Been around since 4.4BSD (1993). It's called chflags though, not chattr.
      4. No idea.
      5. It's not bad, but I don't have any real performance figures so try it and see.
      6. Softupdates should ensure consistency after a fsck, and the fsck can run in the background. There was a Summer of Code projec
    • I'm not sure, but does ext3 actually have all the features you mention?

Anyway, some other things... AFAIK ext3 undeletion doesn't work anymore (at least not using debugfs; lsdel always gives me an empty list these days). Also, snapshots are possible with any filesystem in Linux, as long as the fs is on LVM.

      I'm sticking with Ext3 too. Heard too many scary stories about XFS, JFS and even Reiser, so I'll stick with what's known to work.
RAID obviously protects against complete hard disk failure, and this protects against data loss during other hardware failures like power loss, BUT is there anything that protects against data loss due to slight defects on the hard disk?

I am probably not the only one to come back to an old file saved years ago only to find a glitch in it. I noticed it with a couple of movies. Movies I know were perfect, since I watched them and never copied them afterwards. So the only explanation is that part of the disk got corrupted.

    The soluti

RAID 5 allows you to keep one or more parity checksums of the volume. In principle you could use partitions on the same disk if you cannot afford a multi-disk setup.
    • Sounds like you want ZFS - it stores per-block checksums, so you can detect single-block faults on a disk (not sure if it includes correction codes as well). As the other poster mentioned, if you are willing to add more hardware, RAID 5 does this.
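The per-block checksum idea is simple to sketch (detection only, no correction codes; the block size and digest choice are assumptions, not ZFS's actual format): store a digest beside each block and verify it on every read, so silent corruption raises an error instead of returning bad data.

```python
import hashlib

BLOCK = 4096

def write_blocks(path, data):
    sums = []
    with open(path, "wb") as f:
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            f.write(block)
            sums.append(hashlib.sha256(block).hexdigest())
    return sums   # in a real design these live in metadata, not in RAM

def read_block(path, n, sums):
    with open(path, "rb") as f:
        f.seek(n * BLOCK)
        block = f.read(BLOCK)
    if hashlib.sha256(block).hexdigest() != sums[n]:
        raise IOError(f"checksum mismatch in block {n}: silent corruption")
    return block

sums = write_blocks("data.bin", b"x" * 10000)
print(len(read_block("data.bin", 1, sums)))   # 4096: verified on read
```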
  • HDFS (home-dir FS)? (Score:5, Interesting)

    by Ramses0 ( 63476 ) on Tuesday October 04, 2005 @10:32AM (#13713069)
    I've had an idea kicking around for a while now... "HDFS / Home-Dir File System" ... I want a (s)low-performance, bloated, version controlled, roll-back featured, viewcvs.cgi enabled file system for my /home/rames (or at least /home/rames/documents).

    With FUSE [sourceforge.net] it might even be possible for mere mortals like me.

Basically, I very rarely push around more than 100-200kb of "my stuff" at a time unless it's big OGGs or tgz's, etc. Mostly source files, documents, resumes, etc. In that case, I want to be able to go back to any saved revision *at the file-system level*, kind of like "always on cvs / svn / (git?)" for certain directories. Then when I accidentally nuke files or make mistakes or whatever, I can drag a slider in a GUI and "roll back" my filesystem to a certain point in time and bring that saved state into the present.

Performance is not an issue (at first), as I'm OK if my files take 3 seconds to save in vim or OpenOffice instead of 0.5 seconds. Space is not an issue because I don't generally revise Large(tm) files (and it would be pretty straightforward to have a MaxLimit size for any particular file). Maintenance would also be pretty straightforward: crontab "@daily dump revisions > 1 month". Include some special logic like "if a file is changing a lot, only snapshot versions every 5-10 minutes" and you could even handle some of the larger stuff like images without too much work.

    Having done quite a bit of reading of KernelTraffic [kernel-traffic.org] (Hi Zack) and recently about GIT [wikipedia.org], maybe it's time to dust off some python and C and see what happens...

    --Robert
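Pending a real FUSE implementation, the core of this idea can be approximated at user level. A sketch with made-up paths and policy: stash the old copy in a .versions directory before each overwrite, and prune anything older than a month.

```python
import os, shutil, time

def save_with_history(path, new_contents: str, keep_days=30):
    vdir = os.path.join(os.path.dirname(path) or ".", ".versions")
    os.makedirs(vdir, exist_ok=True)
    if os.path.exists(path):
        stamp = time.strftime("%Y%m%d-%H%M%S")   # one stash per second, at most
        shutil.copy(path, os.path.join(vdir, f"{os.path.basename(path)}.{stamp}"))
    with open(path, "w") as f:
        f.write(new_contents)
    # prune: the '@daily dump revisions > 1 month' cron job from above
    cutoff = time.time() - keep_days * 86400
    for name in os.listdir(vdir):
        full = os.path.join(vdir, name)
        if os.path.getmtime(full) < cutoff:      # stash time drives pruning
            os.remove(full)

save_with_history("resume.txt", "draft 1")
save_with_history("resume.txt", "draft 2")   # draft 1 is now in .versions/
```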
What you want is something like the katie [netcraft.com.au] fs; it is a versioned filesystem. You can access the current version by saying vi /home/user/foo, or an older version by saying vi /home/user/foo@@main/5, where main is the branch and 5 is the version number. I don't know if katie is still under active development, though.
  • I suppose the released version is lossless, compared to their first development version, which was a wee little bit buggy?
  • nobody seems to know the difference between lossy and lossless filesystems. neither do I, and neither does whoever wrote the article, it seems.

    but hey, that's never slowed me down.

    This new filesystem is like old ones, with a big difference and a few small ones.

It has something called 'snapshots', which seems to mean that you can work off of a partition, but separately load up the version of that partition you had before you last had a power failure, or whatever went wrong.

    it also claims to:
Sounds to me an awful lot like a tape drive. Start at one end and keep writing until you're done. I can see the point of wanting to keep all parts of each single file together in one block so that it's not broken up. That way there is no need to defrag, but I thought ext2 and ext3 did that type of thing already. Correct me if I'm wrong, but I was told that ext2/ext3 would keep a file whole at just about every cost pending a really really full drive and absolutely no contiguous room to put it, then it'd b

  • Here is a sampling of the known bugs

    The system might hang under heavy load.
  • The UFS filesystem used by Solaris provides a data "snapshot" feature that prevents such data loss, NTT Labs says, but filesystem operation must be suspended to use the feature, reducing performance. NILFS, in contrast, can "continuously and automatically [save] instantaneous states of the file system without interrupting service," NTT Labs says.

BSD's original Soft Updates [wikipedia.org] and snapshot implementation has a *minimal* impact on operation. See McKusick's paper Running Fsck in the Background [usenix.org].
