Denial-of-Service Attack Found In Btrfs File-System

An anonymous reader writes "It's been found that the Btrfs file-system is vulnerable to a Hash-DoS attack, a denial-of-service attack caused by hash collisions within the file-system. Two DoS attack vectors were uncovered by Pascal Junod, who described them as working with astonishing and unexpected success. It's hoped that the security vulnerability will be fixed for the next Linux kernel release." The article points out that these exploits require local access.
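
The attack hinges on the fact that Btrfs keys directory items on a short CRC32C hash of the file name, so an attacker who precomputes a pile of colliding names can jam everything that touches the affected directory index. The toy C sketch below illustrates only the general shape of the problem (it is not Btrfs code; the hash, the names, and the table are invented for illustration): once many names share one hash value, operations on that bucket degrade from a short probe to a long linear scan, and the collision-handling corner cases get hammered.

    /* Toy illustration of the hash-DoS idea -- not Btrfs code. A directory
     * index keyed by a short, non-cryptographic hash of the name: once an
     * attacker creates many names sharing one hash value, that bucket
     * degenerates into a long linear chain. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 1024

    struct dirent_node {
        char name[32];
        struct dirent_node *next;
    };

    static struct dirent_node *buckets[NBUCKETS];

    /* Deliberately small, weak hash, standing in for a fixed-size name hash. */
    static unsigned name_hash(const char *s)
    {
        unsigned h = 0;
        while (*s)
            h = h * 31 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    static void index_insert(const char *name)
    {
        struct dirent_node *n = malloc(sizeof(*n));
        unsigned b = name_hash(name);
        snprintf(n->name, sizeof(n->name), "%s", name);
        n->next = buckets[b];
        buckets[b] = n;
    }

    static unsigned chain_len(unsigned b)
    {
        unsigned len = 0;
        for (struct dirent_node *n = buckets[b]; n; n = n->next)
            len++;
        return len;
    }

    int main(void)
    {
        char name[32];
        unsigned found = 0;

        /* Normal use: 20000 ordinary file names spread across the buckets. */
        for (unsigned i = 0; i < 20000; i++) {
            snprintf(name, sizeof(name), "file%u", i);
            index_insert(name);
        }

        /* Attack: brute-force names offline until 20000 of them land in
         * bucket 0, then create them all -- the analogue of precomputing
         * CRC32C collisions for one directory. */
        for (unsigned long i = 0; found < 20000; i++) {
            snprintf(name, sizeof(name), "evil%lu", i);
            if (name_hash(name) == 0) {
                index_insert(name);
                found++;
            }
        }

        printf("ordinary bucket chain length: %u\n", chain_len(name_hash("file0")));
        printf("attacked bucket chain length: %u\n", chain_len(0));
        return 0;
    }
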
  • by Anonymous Coward on Friday December 14, 2012 @10:02PM (#42297855)

    ZFS on FreeBSD or FreeNAS is great. Easily saturates gigE with a simple mirror of recent 7200rpm disks. It scales up from there, and FreeBSD is pretty rock solid.

  • by Tough Love ( 215404 ) on Friday December 14, 2012 @10:22PM (#42297977)

    NTFS doesn't have snapshots. Instead it relies on volume shadow copies, with known severe performance artifacts caused by needing to move snapshotted data out of the way when new writes come in. Btrfs, like ZFS and NetApp's WAFL, uses a far more efficient copy-on-write strategy that avoids the write penalty. The takeaway: I would not go so far as to claim Microsoft has an enterprise-worthy solution either. If you want something with industrial-strength dedup, snapshots and fault tolerance, you won't be getting it from Microsoft.

  • by cryptizard ( 2629853 ) on Friday December 14, 2012 @10:38PM (#42298081)
    Sort of, but at least you can recover from those attacks by restarting or booting from an external source to clean up your filesystem. The second attack here leaves you with undeletable files because the file system code responsible for deleting cannot handle the multiple hash collisions. There is no way to recover from that until a patch is pushed out that fixes the problem.
  • I have seen the user-level ZFS crash multiple times, and it's also slow as hell. It's still worth it if you are short on storage and want to reduce the size of your backup, but I wouldn't exactly call it ready for production.

  • by maz2331 ( 1104901 ) on Friday December 14, 2012 @10:51PM (#42298139)

    ZFS on Linux does exist as a kernel module that is pretty stable and works well. http://zfsonlinux.org/ [zfsonlinux.org] -- it was put out by Lawrence Livermore National Lab, but can't be included with the kernel distros due to GPL / CDDL license compatibility issues.

  • by dbIII ( 701233 ) on Friday December 14, 2012 @11:00PM (#42298219)
    The kernel-level port probably is ready, but not on 32-bit (big hassles there, though probably not a big deal to most). On 64-bit there are some memory usage problems, and performance seems to suck when a dozen or so hosts keep connections to files on ZFS open via NFS at the same time. There's still a way to go before ZFS on Linux gets to where it is on FreeBSD, but it's still early days, and for many usage patterns it looks like it is ready for production.
  • by Anonymous Coward on Friday December 14, 2012 @11:01PM (#42298223)

    Linux has production-level encryption, snapshots, and LVM2. What are you talking about?

    Unless you have very specific uses, deduplication really should be done at your storage array. It's not a high priority to implement in the filesystem. (No, your anecdote does not make it a high priority.)

  • by Anonymous Coward on Saturday December 15, 2012 @12:46AM (#42298839)

    Tried to find some more information on this. First discovery: VSS stands for "Volume Shadow copy Service", not "Visual SourceSafe", as was my first association. :)

    AFAICT he's saying pretty much what Microsoft is saying [microsoft.com]:

    When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a "differences area", which preserves a copy of the data block before it is overwritten with the change. Using the blocks in the differences area and unchanged blocks in the original volume, a shadow copy can be logically constructed that represents the shadow copy at the point in time in which it was created.

    The disadvantage is that in order to fully restore the data, the original data must still be available. Without the original data, the shadow copy is incomplete and cannot be used. Another disadvantage is that the performance of copy-on-write implementations can affect the performance of the original volume.

    Do you have a newer reference?
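
    For what it's worth, the mechanism in that quote is simple enough to model. Here is a minimal C sketch of a copy-before-write "differences area" along the lines Microsoft describes (block counts, sizes, and names are invented for illustration; this is not the VSS implementation): saved blocks are read back from the differences area, everything else from the live volume, which is also why the shadow copy is useless without the original.

        /* Toy sketch of a copy-before-write ("differences area") snapshot,
         * following the description quoted above. Block counts, sizes, and
         * names are invented for illustration. */
        #include <stdio.h>
        #include <string.h>

        #define NBLOCKS   8
        #define BLOCKSIZE 16

        static char volume[NBLOCKS][BLOCKSIZE];    /* the live volume         */
        static char diff_area[NBLOCKS][BLOCKSIZE]; /* saved pre-change copies */
        static int  saved[NBLOCKS];                /* which blocks were saved */

        /* Write path: before the live block is overwritten, its old contents
         * are copied into the differences area -- the extra read and write
         * that the thread below argues is so expensive. */
        static void volume_write(int blk, const char *data)
        {
            if (!saved[blk]) {
                memcpy(diff_area[blk], volume[blk], BLOCKSIZE);
                saved[blk] = 1;
            }
            snprintf(volume[blk], BLOCKSIZE, "%s", data);
        }

        /* Read path for the shadow copy: saved blocks come from the
         * differences area, everything else still comes from the original
         * volume -- hence the snapshot is incomplete without the original. */
        static const char *snapshot_read(int blk)
        {
            return saved[blk] ? diff_area[blk] : volume[blk];
        }

        int main(void)
        {
            /* Contents as they were when the shadow copy was created. */
            snprintf(volume[0], BLOCKSIZE, "%s", "old data");
            snprintf(volume[1], BLOCKSIZE, "%s", "untouched");

            /* A write after the snapshot: the old block is copied aside first. */
            volume_write(0, "new data");

            printf("live volume  : %s\n", volume[0]);        /* new data  */
            printf("shadow copy  : %s\n", snapshot_read(0)); /* old data  */
            printf("unchanged blk: %s\n", snapshot_read(1)); /* untouched */
            return 0;
        }
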

  • by LordLimecat ( 1103839 ) on Saturday December 15, 2012 @01:26AM (#42299005)

    FAT32 is going to be faster than a LOT of filesystems precisely because it lacks features like dedup, any notion of real ACLs, and, oh, I don't know, data integrity. That's why if you want a really fast RAMDisk, you don't use NTFS or ReFS, you use FAT16 or FAT32.

  • by Tough Love ( 215404 ) on Saturday December 15, 2012 @03:17AM (#42299481)

    VSS is the snapshot solution for NTFS, and of course it uses copy-on-write

    Well. Maybe you better sit down in a comfortable chair and think about this a bit. From Microsoft's site: When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a “differences area”, which preserves a copy of the data block before it is overwritten with the change. [microsoft.com]

    Think about what this means. It is not "copy-on-write", it is "copy-before-write". It is a gross abuse of terminology if anybody tries to call it "copy-on-write", which has the very specific meaning [wikipedia.org] of "don't modify the destination data; instead, copy it, then modify the copy." OK, are we clear? VSS does not do copy-on-write, it does copy-before-write.

    Now let's think about the implications of that. First, the write needs to be blocked until the copy-before-write completes, otherwise the copied data is not guaranteed to be on stable storage. The copy-before-write needs to read the data from its original position, write it to some save area, then update some metadata to remember which data was saved where. How many disk seeks is that, if it's a spinning disk? If the save area is on the same spinning disk? If it's flash, how much write amplification is that? When all of that is finally done, the original write can be unblocked and allowed to proceed. In total, how much slower is that than a simple, linear write? If you said "an order of magnitude" you would be in the ballpark. In fact, it can get way worse than that if you are unlucky. In the best imaginable case, your write performance is going to take a hit by a factor of three. Usually, much much worse. (A rough accounting sketch follows at the end of this comment.)

    OK, did we get this straight? As a final exercise, see if you can figure out who was talking nonsense.
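
    To put rough numbers on the accounting above, here is a back-of-the-envelope C sketch that just counts block operations per overwrite under the two strategies. The per-write costs are assumptions for illustration, not measurements of VSS or Btrfs, and real implementations batch their metadata updates; but the extra read plus two extra writes queued in front of every blocked application write is where the factor-of-three-in-the-best-case estimate comes from, before a single seek is counted.

        /* Rough I/O accounting for the two snapshot strategies, counting
         * block-level operations per overwrite of an already-snapshotted
         * block. The numbers are illustrative assumptions only. */
        #include <stdio.h>

        struct io_cost {
            unsigned reads;
            unsigned writes;
        };

        /* Copy-before-write (VSS-style "differences area"):
         *   1. read the old contents of the target block
         *   2. write the old contents into the save area
         *   3. write metadata recording where the old block was saved
         *   4. finally perform the application's write, which was blocked
         *      until steps 1-3 reached stable storage */
        static struct io_cost copy_before_write(unsigned nwrites)
        {
            struct io_cost c = { .reads = 0, .writes = 0 };
            for (unsigned i = 0; i < nwrites; i++) {
                c.reads  += 1;      /* read old block            */
                c.writes += 1;      /* save old block            */
                c.writes += 1;      /* differences-area metadata */
                c.writes += 1;      /* the actual new data       */
            }
            return c;
        }

        /* Copy-on-write (Btrfs/ZFS/WAFL-style): the new data goes to a fresh
         * location and the old block is simply left in place for the
         * snapshot; no read of the old data on the write path. */
        static struct io_cost copy_on_write(unsigned nwrites)
        {
            struct io_cost c = { .reads = 0, .writes = 0 };
            for (unsigned i = 0; i < nwrites; i++) {
                c.writes += 1;      /* new data at a new location         */
                c.writes += 1;      /* metadata pointing at the new block */
            }
            return c;
        }

        int main(void)
        {
            struct io_cost cbw = copy_before_write(1000);
            struct io_cost cow = copy_on_write(1000);

            printf("copy-before-write: %u reads, %u writes\n", cbw.reads, cbw.writes);
            printf("copy-on-write    : %u reads, %u writes\n", cow.reads, cow.writes);
            return 0;
        }
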

  • by maxwell demon ( 590494 ) on Saturday December 15, 2012 @03:25AM (#42299517)

    DOS = Disk Operating System
    DoS = Denial of Service

  • by Tough Love ( 215404 ) on Saturday December 15, 2012 @06:35AM (#42300203)

    Modifications in the middle of files are extremely rare. It's true, running a database on top of a snapshotted spinning disk is probably going to suck. For normal users, keeping regular files mostly linear and files in the same directory near each other is what matters, and yes, Btrfs does a credible job of that.

    I know why shadow copy works the way it does. 1) It's simple, therefore likely to work. 2) It's an easy answer to the "how do you control fragmentation" question. But the write performance issue is so bad that it's a poor solution no matter how you justify it. It's just an attempt to get away with being lazy for a largely uncritical audience that isn't big into benchmarking, or indeed, isn't used to good disk performance.

  • by LWATCDR ( 28044 ) on Saturday December 15, 2012 @10:20AM (#42300989)

    You then turn it off.... And go take your meds.
    I do not think you know what dedup means. You as a user still see two copies of the file. If you make changes to one copy of the file, it will only change that copy of the file. It is not like a link. In other words it is totally transparent to the end user but saves drive space. So if you work in a large organization and someone sends out an email to all 4,000 people, that email will only take up the space of one email, even if everyone saves it on the IMAP server. (A toy sketch at the end of this comment shows the idea.)

    In other words, you do not know what you are talking about; you probably do not need these functions because you probably do not run a server or servers for a large organization; you seem to have some anger issues; and you may be just a little nuts.
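
    To make the 4,000-recipient email example concrete, here is a toy C sketch of content-based dedup with reference counting (the names and sizes are invented for illustration, not how any particular mail store or file system does it): identical data is stored once, and editing one logical copy breaks the sharing instead of changing the other user's copy.

        /* Toy sketch of block-level deduplication: two logical files with the
         * same contents share one physical block; writing to one of them
         * breaks the sharing rather than changing the other. */
        #include <stdio.h>
        #include <string.h>

        #define BLOCKSIZE 64
        #define MAXBLOCKS 16

        struct phys_block {
            char data[BLOCKSIZE];
            unsigned refcount;      /* how many logical files point here */
        };

        static struct phys_block pool[MAXBLOCKS];
        static unsigned pool_used;

        /* Store data, reusing an existing physical block if the contents
         * match. A real implementation would index blocks by a strong
         * checksum instead of comparing contents directly. */
        static unsigned store_block(const char *data)
        {
            for (unsigned i = 0; i < pool_used; i++) {
                if (strncmp(pool[i].data, data, BLOCKSIZE) == 0) {
                    pool[i].refcount++;     /* dedup hit: share the block */
                    return i;
                }
            }
            snprintf(pool[pool_used].data, BLOCKSIZE, "%s", data);
            pool[pool_used].refcount = 1;
            return pool_used++;
        }

        /* Modifying a shared block: allocate a private copy first, so the
         * other logical file is unaffected -- dedup stays invisible to users. */
        static unsigned write_block(unsigned blk, const char *data)
        {
            if (pool[blk].refcount > 1) {
                pool[blk].refcount--;
                blk = store_block(data);
            } else {
                snprintf(pool[blk].data, BLOCKSIZE, "%s", data);
            }
            return blk;
        }

        int main(void)
        {
            /* The same email saved by two users consumes one physical block. */
            unsigned alice = store_block("company-wide announcement");
            unsigned bob   = store_block("company-wide announcement");
            printf("physical blocks used: %u (alice=%u, bob=%u)\n",
                   pool_used, alice, bob);

            /* Bob edits his copy; Alice's copy is untouched. */
            bob = write_block(bob, "company-wide announcement, annotated");
            printf("after edit: alice='%s', bob='%s'\n",
                   pool[alice].data, pool[bob].data);
            return 0;
        }
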
