


Open Source Deduplication For Linux With Opendedup
tazzbit writes "The storage vendors have been crowing about data deduplication technology for some time now, but a new open source project, Opendedup, brings it to Linux and its hypervisors — KVM, Xen and VMware. The new deduplication-based file system called SDFS (GPL v2) is scalable to eight petabytes of capacity with 256 storage engines, which can each store up to 32TB of deduplicated data. Each volume can be up to 8 exabytes and the number of files is limited by the underlying file system. Opendedup runs in user space, making it platform independent, easier to scale and cluster, and it can integrate with other user space services like Amazon S3."
In case you don't know much about it (Score:5, Informative)
Data deduplication [wikipedia.org]
( I don't )
Re:In case you don't know much about it (Score:5, Informative)
Data deduplication is huge in virtualized environments. Four virtual servers with identical OSes running on one host server? Deduplicate the data and save a lot of space.
This is even bigger in the virtualized desktop environment, where you could literally have hundreds of PCs virtualized on the same physical box.
Re:In case you don't know much about it (Score:2)
Unless you “deduplicate” the CPU work, that’s not going to happen. ^^
Re:In case you don't know much about it (Score:3, Informative)
Hundreds of virtualized desktops per physical server does happen; my employer sells such solutions from several vendors.
Re:In case you don't know much about it (Score:4, Informative)
If you have a couple hundred people running business apps, it ain't all that difficult. Generally you will get spikes of CPU utilization that last a few seconds, sandwiched between many minutes or even hours of very low CPU utilization. A powerful server can handle dozens or even hundreds of virtual desktops in this type of environment.
Re:In case you don't know much about it (Score:3, Informative)
It really is hundreds, on a modern Nehalem-core system with 64 gigs of memory or so. We used to do dozens on each node in a Citrix farm back in the PIII days.
Re:In case you don't know much about it (Score:2)
Unless you "deduplicate" the CPU work, that's not going to happen. ^^
Sure it does. CPU power is generally the _last_ thing you run out of in virtualised environments, and that's been true for years.
On a modern, Core i7-based server, you should be able to get 10+ "virtual desktops" per core on average, without too much trouble. IOPS and RAM are typically your two biggest limitations.
Re:In case you don't know much about it (Score:2)
I don't know much about the subject, so forgive me if this is a dumb question, but in that scenario, if the data for a file becomes corrupted on the hard drive, say a critical system file, doesn't that mean that all vm's using it are pooched?
Re:In case you don't know much about it (Score:5, Informative)
In a word, No. There are many types of 'virtualization' and more than one approach to de-duplication. In a system as engineered as one with de-duplication, you should have replication as part of the data integrity processes. If the file is corrupted in all the main copies (everywhere it exists, including backups) then the scenario you describe would be correct. This is true for any individual file that exists on computer systems today. De-duplication strives to reduce the number of copies needed across some defined data 'space' whether that is user space, or server space, or storage space etc.
This is a problem in many aspects of computing. Imagine you have a business with 50 users. Each must use a web application which has many graphics. The browser cache of each user has copies of each of those graphics images. When the cache is backed up, the backup is much larger than it needs to be. You can do several things to reduce backup times and storage space and improve user quality of service:
1 - disable caching for that site in the browser and cache them on a single server locally located
2 - disable backing up the browser caches, or back up only one
3 - enable deduplication in the backup and storage processes
4 - implement all or several of the above
The problems are not single-ended, and the answers or solutions will not be single-ended or single-faceted either; that is, no one solution is the answer to all possible problems. This one has some aspects that are appealing to certain groups of people. Your average home user might not be able to take advantage of this yet. Small businesses, though, might need to start looking at this type of solution. Think how many people got the same group email message with a 12MB attachment. How many times do all those copies get archived? In just that example you see the waste that duplicated data represents. Solutions such as this offer an affordable way to positively affect bottom lines in fighting those types of problems.
Re:In case you don't know much about it (Score:2)
Put all client side caches and temp directories on a RAM disk. Save backup space and time, reduce your IOPS, and decrease client latency.
Protected Space (Score:2)
This is why some vendors protect some duplicated VM data (like the OS).
And sure, stock dedup is not the end-all be-all, but it goes a long way toward that goal, and the risks are more than worth the gains.
Re:In case you don't know much about it (Score:2)
I don't know much about the subject, so forgive me if this is a dumb question, but in that scenario, if the data for a file becomes corrupted on the hard drive, say a critical system file, doesn't that mean that all vm's using it are pooched?
Yes, but not because of deduplication. If you had one sector go bad then yes, you could affect many more VMs if you were using data deduplication than if you weren't, but in my experience data corruption is seldom just a '1 sector' thing, and once you detect it you should restore anything that uses that disk from a backup that probably was taken before the corruption started (which is tricky... how do you know when that was?)
Bitrot is one of the nastiest failure modes around.
Re:In case you don't know much about it (Score:2)
I don't know much about the subject, so forgive me if this is a dumb question, but in that scenario, if the data for a file becomes corrupted on the hard drive, say a critical system file, doesn't that mean that all vm's using it are pooched?
Yes, but a) this is something inherent to anything using shared resources, and b) there's not a lot of scope for such corruption to happen in a decent system (RAID, block-level checksums, etc).
Re:In case you don't know much about it (Score:2)
Or just use chroot or something. I don't know.
Re:In case you don't know much about it (Score:2)
Right.
It's one of the things I never really managed to wrap my head around: why would you want to install many instances of the same OS on the same machine to begin with? Besides using lots of disk space, each instance will also use up memory and redundantly use up resources for updates, background tasks that each OS is running, and basically everything else they have in common.
Sure, you can add on a lot of clever tricks to deduplicate resource usage, but why introduce the duplication in the first place?
Re:In case you don't know much about it (Score:3, Interesting)
It is one of those things that once you start using it, the benefits become apparent.
Here are some:
1) One application on one machine. No more wondering if application X has somehow messed up application Y. The writers of the software probably developed the application in a clean environment, and this lets you run it in a clean environment. Gets rid of vendor finger-pointing, too.
2) One application on one machine. If application X fouls the nest, you can reboot it and know that you are not also terminating applications Y, Z, A, and B.
3) Machine portability. The drivers in a VM guest are generic -and- uniform. Nothing inside the (guest) machine changes if you move the machine from a host with an Intel NIC to a host with a Broadcom NIC. The benefit here is that when hardware fails (and it will), it is pretty quick and easy to assign the boot disk to a different host, and boot the machine up. Think 10 - 30 minutes (per machine) to recover from a burned up power supply*.
4) Machine portability. There are some solutions that let you auto-fail-over to a new host when the guest stops responding. That burned up power supply could now be a two minute outage and NO emergency notification call.
5) Machine portability. Platespin lets you auto-migrate machines on a schedule to a few blades at night, power down those blades for power savings, and then power them up a little before business hours and migrate back. In a large data center, the electricity savings is enough to make it worth it.
6) Machine flexibility. Does application X not need much in the way of processing power? With the VM manager software, assign it one CPU and 256 MB RAM. Later find out that wasn't enough? Up the specs and reboot.
7) Reboot speed. In paravirtualized environments, the OS is already loaded in the host VM, so the guest VM just links and loads. I've seen entire machine reboots that take 16 seconds.
Along these lines, an anecdote from my life: How to add RAM to a server so nobody notices: virtualize [slashdot.org]
Hope this helps explain why some people are such a fan of virtualization.
*This is really a benefit that comes from disconnecting the machine from its disks, but VM and SAN go exceptionally well together.
Re:In case you don't know much about it (Score:3, Interesting)
Almost every mission critical system these days is running in either a clustered or virtualized environment. I work in the financial services industry and there are many reasons we virtualize pretty much everything these days. These, however, are probably the biggies:
- Redundancy: If a physical machine dies, its virtual machines can be moved over to a spare, often with no interruption in service.
- Isolation: Just because you can run multiple services on a box doesn't mean you should. It poses potential security problems (one compromised app can open the door to compromise another), makes managing users and resources more difficult, and the applications can interact or conflict in unexpected ways. Many vendors demand that their application be the only one running on a machine or they won't support it.
- Portability: An OS configured for use on a virtual machine can be run on any platform which runs the virtual machine without modification.
Re:In case you don't know much about it (Score:2, Funny)
Re:In case you don't know much about it (Score:3, Funny)
Hey, slow down cowboy. Explain that concept to me again. I don't know if it's applicable here, but if we find a way to implement it, it might just prove revolutionary.
I work in the quality assurance department of Geeknet Inc, Slashdot's parent company. We are constantly looking for ways to improve all the sites on our network.
I don't know if this method you propose, that, if I understand correctly, would involve parsing the content of the html document linked, and having an editor analyze the output of such html document after being rendered (let's call it, reading the story), is at all possible. But if we implement it the right way, it might prove useful.
We'll get our research team to work over this reading-the-story concept. It's something absolutely novel to us, so it might take a while. We'll let you know when we reach a conclusion, so that we might license this reading-the-story technology from you.
Kind Regards,
Lazy Rodriguez
GeekNet INC.
See Also LESSFS (Score:4, Interesting)
Another nice OpenSource FS De-Dup project to look into is LESSFS.
Block-level de-dup and good speed. Also offers per block encryption and compression.
I'm using it to back up VMs. 2TB of raw VMs plus 60 days of changes stores down to 300GB. Writes to the de-dup FS run at > 50MB/s.
Re:See Also LESSFS (Score:3, Interesting)
ZFS also offers block-level dedupe support since ZFSv21. You can run it via FUSE on Linux, or natively on OpenSolaris. Hopefully, it'll also be available in FreeBSD 9.0 if not sooner (FreeBSD 7.3/8.0 have ZFSv14).
Since ZFS already checksums every block that hits the disk, dedupe is almost free, as those checksums are re-used for finding/tracking duplicate blocks.
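As a rough illustration of why checksum re-use makes dedup cheap (this is a toy model I'm sketching, not ZFS code; the class and method names are invented): if every block is already stored under a strong checksum for integrity checking, the same checksum table doubles as the dedup index.

import hashlib

class ChecksummedStore:
    def __init__(self):
        self.by_digest = {}                  # digest -> block bytes

    def write(self, block):
        d = hashlib.sha256(block).hexdigest()
        self.by_digest.setdefault(d, block)  # duplicate blocks collapse into one entry
        return d                             # caller keeps the digest as the block pointer

    def read(self, d):
        block = self.by_digest[d]
        # the same digest serves as both the integrity check and the dedup key
        if hashlib.sha256(block).hexdigest() != d:
            raise IOError("checksum mismatch: bitrot detected")
        return block

Finding a duplicate is just a dictionary lookup on a checksum you were going to compute anyway, which is the "almost free" part.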
Re:In case you don't know much about it (Score:2)
Here's another explanation: http://storagezilla.typepad.com/storagezilla/2009/02/unified-storage-file-system-deduplication.html [typepad.com]
There's a table about half-way down showing the differences between file-level dedup (elimination of duplicate files), fixed-block dedup (elimination of duplicate blocks as stored on the disk, which is what Opendedup is doing), and variable-block dedup (which handles non-block-aligned data, such as when you insert or delete something at the start of a large file). File-level dedup is (almost) drop-dead easy: you just take a checksum of every file and link those that match to a single copy. (Handling file updates can be problematic, though. You want your deduped files to be read-only.) Fixed-block is almost as easy, since a file is just a list of blocks. You use FUSE to turn those blocks into fixed-length files, which are then themselves deduped. This fixes the file-update problem, since each update creates a new block.
Variable-block dedup looks for special groups of bytes to divide a file into chunks (like using newlines to divide a text file into lines). These chunks are then deduped as above. If you aren't careful, you can waste space (since the chunks aren't exact multiples of the disk's block size). Random seeks can be harder, since you can't just multiply the block number by the block size to find a location.
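To make the variable-block idea concrete, here is a minimal Python sketch of content-defined chunking plus dedup. It is purely illustrative and not OpenDedup's (or anyone's) actual implementation: the constants and the toy running hash are arbitrary choices, and real products use a sliding-window Rabin fingerprint so that chunk boundaries resynchronize after an insert or delete.

import hashlib

MASK = 0x1FFF                        # boundary when hash & 8191 == 0 (~8 KB average chunk)
MIN_CHUNK, MAX_CHUNK = 2048, 65536   # keep chunk sizes within sane bounds

def chunk(data):
    start, h = 0, 0
    for i, byte in enumerate(data):          # iterating over bytes yields ints
        h = (h * 31 + byte) & 0xFFFFFFFF     # toy running hash, reset per chunk
        size = i - start + 1
        if size >= MIN_CHUNK and ((h & MASK) == 0 or size >= MAX_CHUNK):
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def dedupe(files):
    # files: dict of name -> bytes; returns the unique-chunk store and a
    # per-file "recipe" of chunk digests that reconstructs each file
    store, recipes = {}, {}
    for name, data in files.items():
        recipe = []
        for c in chunk(data):
            digest = hashlib.sha256(c).hexdigest()
            store.setdefault(digest, c)      # each unique chunk is stored once
            recipe.append(digest)
        recipes[name] = recipe
    return store, recipes

Feeding in two identical files shows every chunk digest shared between their recipes; the resynchronize-after-edit property, though, comes from the sliding-window fingerprint that real implementations use, which this toy hash only approximates.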
Re:Patent 5,813,008 (Score:2)
Good try, but after skimming it, it does not seem to apply. It seems to be for deduplicating e-mail attachments.
Re:Patent 5,813,008 (Score:2)
Which claims apply? I can see no claim that does not reference "information items [...] transferred between a plurality of servers connected on a distributed network". In fact, e-mail attachment dedup is seen as prior art (Background, fourth paragraph). File dedup is simpler than that.
Re:Patent 5,813,008 (Score:3, Interesting)
Claim 1(a) requires "dividing an information item into a common portion and a unique portion".
It may be that the patent covers the case where the unique portion is empty, but then again maybe not, especially if the computer never takes the step to find out! In other words, if you treat every item as a common item (even if there is only one copy), there is a good chance the patent might not apply.
(There is also a good chance that the patent is written the way it is specifically because it doesn't apply to that case -- it may be that there is prior art in one of the referenced patents.)
This is for hard disks (Score:3, Interesting)
Also, is there an easy way to get multiple machines running 'as one' to pool resources for running a VM setup? Does openmosix do that?
Re:This is for hard disks (Score:2)
Re:This is for hard disks (Score:2, Funny)
what an idiot I am T.T
Re:This is for hard disks (Score:2, Funny)
Re:This is for hard disks (Score:2)
Does software like ESX and others (Xen etc.) perform this in memory already for running VMs? I.e., if you have 2 Windows VMs, will it only store one copy of the libs etc. in the host's memory?
I don't know about Xen, but VMware will do that.
is there easy way to get multiple machines running 'as one' to pool resources for running a vm setup? Does openmosix do that?
I am not entirely certain what you mean by 'as one' to pool resources. Openmosix more or less is a load distributor that dispatches jobs across hosts. I am not sure what advantage you would gain by virtualizing the hosts other than granularity.
A hypothetical question. (Score:2)
Re:A hypothetical question. (Score:3, Interesting)
Most likely in the implementation itself, not the de-duplication process.
Let's say user A and B have some file in common. Without de-duplication, the file exists on both home directories. With de-duplication, one copy of the file exists for both users. Now, if there is an exploit such that you could find out if this has happened, then user A or B will know that the other has a copy of the same file. That knowledge could be useful.
Ditto on critical system files - if you could generate a file and have it match a protected system file, this might be useful to exploit the system. E.g., /etc/shadow (which isn't normally world-readable). If you can find a way to tell that the deduplication happened, you have effectively confirmed the contents of a critical file you otherwise couldn't read.
Note that you can't *change* the file (because that would just split the files up again), but being able to read the file (when you couldn't before) or knowing that another copy exists elsewhere can be very useful knowledge. But the de-duplication mechanism must inadvertently reveal when this happens.
Re:A hypothetical question. (Score:2)
Re:A hypothetical question. (Score:2)
Covert channels are fairly easy to achieve in a virtualized setup, particularly if you oversubscribe -- and if you don't oversubscribe you generally gain nothing from virtualization. Allocating physical CPUs, memory, network interfaces, and disks for each virtual server is impractical. Therefore I don't think the covert channel attack is much of a threat.
Detecting whether a particular file exists on other machines is interesting though. You can do that with Arkeia (deduplicating backup) I believe, by creating a particular file and checking how much data is actually sent across the network for that backup. If it's less than the compressed size of the file, then someone else on the same backup server has the same file...
Re:A hypothetical question. (Score:2)
Note that you can't *change* the file (because that would just split the files up again), but being able to read the file (when you couldn't before) or knowing that another copy exists elsewhere can be very useful knowledge.
If you can "generate a file" that can be deduplicated, then by definition you already know about the date in that file.
Re:A hypothetical question. (Score:2)
Maybe not; you might be able to fool the dedupe engine with a hash collision and get it to turn your file full of gobbledygook into the actual file contents. I agree, though, that you would need to know an awful lot about the file to pull that off: its size, a hash of whatever type the dedupe engine uses, timestamps.
Some of that you might be able to control yourself, like atime, through other access, but I don't know how you'd get the rest (thinking about the GP's example of /etc/shadow).
Re:A hypothetical question. (Score:2)
Leaving aside vulnerabilities in any particular implementation, the only possible attack vector I see would be a brute-force approach. Basically, a user in one VM creates random n-byte files covering all possible combinations of that size (of course, this would only be feasible for very small files, but /etc/shadow is usually small enough, and so is everything in $HOME/.ssh/). Eventually, the user would create a file that matches a copy on another VM. Of course, this would be useless without a way to check whether another file was matched and deduplication took place. If the deduplication solution has any virtual guest software (like VMware Tools), and that tool shares this kind of information with other systems, it might be possible, but that's a big might.
Any reasonably implemented deduplication solution should be 100% transparent to the guest, and very secure.
And, to all the people talking about "shared resources", deduplication doesn't create "shared resources". Deduplication is not similar to symbolic links (ln -s). If you want to compare it to links, you have to compare it to hard links, and then to hard links that are automatically broken out into a fresh copy of the blocks as soon as a user writes to the file. Remember, as soon as the file changes on any given guest, the information is not the same anymore, and so that file is not deduplicated anymore. A user can change his copy of the file, not other people's files.
Re:A hypothetical question. (Score:2)
of course, this would only be feasible for very small files, but /etc/shadow is usually small enough,
/etc/shadow is typically >1kB, which is 2^(1000*8) possibilities. A stupid brute-force approach isn't going to work. If you can be sure which users exist in the file and in which order, and root is the only one with a password, then maybe, but I doubt you could get it fast enough even in that case. If it turns out to be a threat, we just need to increase the salt size.
Re:A hypothetical question. (Score:2)
Deduplication often relies on copy-on-write to maintain separate versions after deduplication.
Once a block is deduplicated between users A, B and C into a single shared block Z, and user B then changes his file, the filesystem records the change as a new block and points user B's file at that new block instead of Z.
Other security issues (permissions) should be handled by the filesystem table, not the physical file.
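A toy model of that copy-on-write behaviour (illustrative only; the refcounting details and the class and method names are invented for the example):

import hashlib

class CowStore:
    def __init__(self):
        self.blocks = {}    # digest -> block bytes
        self.refs = {}      # digest -> reference count
        self.files = {}     # (owner, name) -> list of block digests

    def _put(self, data):
        d = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(d, data)      # identical blocks stored once
        self.refs[d] = self.refs.get(d, 0) + 1
        return d

    def write_file(self, owner, name, blocks):
        self.files[(owner, name)] = [self._put(b) for b in blocks]

    def write_block(self, owner, name, index, data):
        # copy-on-write: only this owner's pointer moves to a new block;
        # other owners still sharing the old block are untouched
        old = self.files[(owner, name)][index]
        self.refs[old] -= 1
        if self.refs[old] == 0:
            del self.refs[old]
            del self.blocks[old]
        self.files[(owner, name)][index] = self._put(data)

If users A, B and C each store an identical file, the store holds one physical copy of each block; when B overwrites one block, only B's recipe changes, which is the "hard link that turns into a private copy on write" analogy from the earlier comment.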
Re:A hypothetical question. (Score:2)
It's had a vulnerability because Microsoft made it. Vulnerabilities are their signature.
And, as I explained before, it was a Microsoft product (which means it wasn't fixed).
Hasn't this been posted before? (Score:5, Funny)
Re:Hasn't this been posted before? (Score:2)
Well, at least this comment has been posted before.
Dude, you’re only piling it up. Like with trolling: If you react to it, you only make it worse.
And because I’m not better, I’m now gonna end it, by stating that: yes, yes, I’m also not making it better. ^^ :)
Oh wait... now I am!
Yea, I RTFA, but... (Score:3, Interesting)
...from what I can tell, this is NOT a way to deduplicate existing filesystems or even layer it on top of existing data, but a new filesystem operating perhaps like eCryptfs, storing backend data on an existing filesystem in some FS-specific format.
So, having said that, does anyone know if there is a good way to resolve EXISTING duplicate files on Linux using hard links? For every identical pair found, a+b, b is deleted and instead hardlinked to a? I know there are plenty of duplicate file finders (fdupes, some Windows programs, etc), but they're all focused on deleting things rather than simply recovering space using hardlinks.
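For what it's worth, here is a rough Python sketch of the kind of tool being asked for (my own illustration, not a reference to any existing project): hash files, then replace byte-identical duplicates with hard links to one copy. The usual caveats apply: it only works within a single filesystem, it ignores ownership/permission/mtime differences, it hashes everything rather than only size collisions, and, as noted further down the thread, once two paths share an inode a write through either one changes both, which is exactly what real deduplication's copy-on-write avoids.

import hashlib, os, sys
from collections import defaultdict

def digest(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(bufsize), b''):
            h.update(block)
    return h.hexdigest()

def hardlink_dupes(root):
    groups = defaultdict(list)                   # (size, digest) -> [paths]
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            groups[(os.path.getsize(path), digest(path))].append(path)
    for paths in groups.values():
        keep = paths[0]
        for dupe in paths[1:]:
            if os.path.samestat(os.stat(keep), os.stat(dupe)):
                continue                         # already the same inode
            tmp = dupe + '.dedup-tmp'
            os.link(keep, tmp)                   # make the new link first,
            os.replace(tmp, dupe)                # then atomically swap it in

if __name__ == '__main__':
    hardlink_dupes(sys.argv[1])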
Re:Yea, I RTFA, but... (Score:2)
Re:Yea, I RTFA, but... (Score:2)
Hmm, yea I've used FSLint but I didn't pay close enough attention to the options it seems :)
Thanks
Re:Yea, I RTFA, but... (Score:4, Informative)
Deduplication systems pay attention to this and maintain independent indexes to do copy-on-write and the like to preserve the independence of each reference.
Re:Yea, I RTFA, but... (Score:2)
If I couldn't find a good tool from responses here I would have written one for sure.
Re:Yea, I RTFA, but... (Score:3, Interesting)
I wrote fileuniq (http://sourceforge.net/projects/fileuniq/) exactly for this reason. You can symlink or hardlink, decide how identical a file must be (timestamp, uid...), or delete.
It's far from optimized, but I accept patches :-)
Re:Yea, I RTFA, but... (Score:2)
Sweet! Thanks a lot :)
Comment removed (Score:2)
Re:Yea, I RTFA, but... (Score:2)
http://code.google.com/p/hardlinkpy/ [google.com]
Re:Yea, I RTFA, but... (Score:2)
That can be managed for simple use cases, but yea I see your point.
Or get inline deduplication (Score:2)
with NexentaStor CE [nexentastor.org], which is based on OpenSolaris b134. It's free.. and has an excellent Storage WebUI. /plug
For a detailed explanation of OpenSolaris dedup see this blog entry [sun.com].
~Anil
Re:Or get inline deduplication (Score:2)
Grr.. meant inline/kernel dedup.
Re:Or get inline deduplication (Score:2)
Plus you get the "real" ZFS, zones, and tightly integrated, bootable system rollbacks using zfs clones :)
Re:Or get inline deduplication (Score:2)
Plus you get the "real" opensolaris experience:
- poor (like really really poor) hardware compatibility. Starting with basic stuff, many on-board Ethernet controllers with flaky or no support, very hard to choose a motherboard that's available and without too many compromises and fully supported. A guy asked if Android pairing is available (to use phone as modem for OpenSolaris), made me spill my coffee...
- doubtful future
- no security patches (yes, you read that right)
- major features like ZFS encryption slipping schedule for years (in the works since 2008; the last promise was the 2010.2 release, which itself slipped to 2010.3, and that one seems to be delayed as well since it was supposed to be released on the 26th; in any case it's quite certain that encryption won't make it anyway)
Thanks, but no thanks.
Re:Or get inline deduplication (Score:2)
Hardware compatibility is pretty good. Really. All decent brands (storage controllers/NICs) support OpenSolaris. The doubtful-future part is FUD: Oracle made it clear that OpenSolaris development and community functions will continue as is. The security patches costing $$ apply to enterprise Solaris, not OpenSolaris. Encryption is late... big deal... some things are set to lower priority than others. Dedup is present, and works very well.
If it's a storage box you're looking at, what's really important? An in-kernel, established, and widely deployed filesystem like ZFS (without support for Android phones), or a new, user-space dedup filesystem, nascent and not in production (but it can pair with your Android phone!)?
~Anil
How useful is this in realistic scenarios? (Score:2)
Re:How useful is this in realistic scenarios? (Score:2)
These all sound very realistic to me...
Re:How useful is this in realistic scenarios? (Score:4, Informative)
First of all.... one of the most commonly duplicated blocks is the NUL block, that is, a block of data where all bits are 0, corresponding to unused space, or space that was used and then zeroed.
If you have a virtual machine on a fresh 30GB disk with 10GB actually in use, you have at least 20GB that could be freed up by dedup.
Second, if you have multiple VMs on a dedup store, many of the OS files will be duplicates.
Even on a single system, many system binaries and libraries will contain duplicate blocks.
Of course multiple binaries statically linked against the same libraries will have dups.
But also, there is a common structure to certain files in the OS, similarities between files so great that they will contain duplicate blocks.
Then if the system actually contains user data, there is probably duplication within the data.
For example, mail stores... will commonly have many duplicates.
One user sent an e-mail message to 300 people in your organization -- guess what, that message is going to be in 300 mailboxes.
If users store files on the system, they will commonly make multiple copies of their own files..
Ex... mydocument-draft1.doc, mydocument-draft2.doc, mydocument-draft3.doc
Can MS Word files be large enough to matter? Yes.. if you get enough of them.
Besides, they have a common structure that is the same for almost all MS Word files. Even documents whose text is not at all similar are likely to have some duplicate blocks, which you have just accepted in the past; it's supposed to be a very small amount of space per file, but in reality a small amount of waste multiplied by thousands of files adds up.
Just because data seems to be all different doesn't mean dedup won't help with storage usage.
Re:How useful is this in realistic scenarios? (Score:2)
A major use case is NAS for users. Think of all those multi-megabyte files, stored individually by thousands of users.
However, normally deduplication is block level, under the filesystem, invisible to the user. This is implemented by NetApp SANs, for instance. After having RTFA, OpenDedup seems to be file-level, running between the user and an underlying file system. I'm not sure it's a good idea.
Re:How useful is this in realistic scenarios? (Score:4, Informative)
If you cut up a large file into lots of chunks of whatever size, let's say 64KB each, then you look at the chunks. If you have two chunks that are the same, you remove the second one and just place a pointer to the first one. Data deduplication is much more complicated than that in real life, but basically, the more data you have, or the smaller the chunks you look at, the more likely you are to have duplication, or collisions. (How many Word documents have a few words in a row in common? Remove every repeat of the phrase "and then the" and replace it with a pointer, if you will.)
This is also similar to WAN acceleration, which at a high enough level, is just deduplicating traffic that the network would have to transmit.
It is amazing how much space you can free up when you're not just looking at the file level. This has become very big in recent years because storage has exploded and processors are finally fast enough to do this in real time.
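The fixed-size version of that chunking takes only a few lines; this toy sketch (the function name is made up, and the 64KB size just follows the description above) keeps the first copy of each block and records a digest "pointer" for every later repeat:

import hashlib

CHUNK = 64 * 1024    # 64KB chunks, as in the description above

def dedupe_fixed(data):
    store, recipe = {}, []
    for i in range(0, len(data), CHUNK):
        block = data[i:i + CHUNK]
        d = hashlib.sha256(block).hexdigest()
        store.setdefault(d, block)   # first occurrence of a block is stored
        recipe.append(d)             # repeats become digest pointers only
    return store, recipe

WAN acceleration works on the same principle: if the far end already has a chunk with that digest, you send the digest instead of the chunk.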
Re:How useful is this in realistic scenarios? (Score:2)
All good dedupe systems are block-level, not file-level, so you don't just save where whole files are identical but on *any* identical data that's on the disks.
If you're running VMs with the same OS you'll probably find that close to 70% of the data can be de-duplicated - and that's before you consider things like farms of clustered servers where you have literally identical config, or fileservers with lots of idiots saving 40 "backup" copies of the same 2GB Access database just in case they need it.
Our deduped backup array is currently storing ~70TB of backups on 10TB of raw space and it's only about 40% full - to me, that's useful.
Re:How useful is this in realistic scenarios? (Score:4, Informative)
I wonder how much this approach really buys you in "normal" scenarios especially given the CPU and disk I/O cost involved in finding and maintaining the de-duplicated blocks. There may be a few very specific examples where this could really make a difference but can someone enlighten me how this is useful on say a physical system with 10 Centos VMs running different apps or similar apps with different data? You might save a few blocks because of the shared OS files but if you did a proper minimal OS install then the gain hardly seems to be worth the effort.
Assume 200 VMs at, say, 2GB per OS install. Allowing for some uniqueness, you'll probably end up using something in the ballpark of 20-30GB of "real" space to store 400GB of "virtual" data. That's a *massive* saving, not only disk space, but also in IOPS, since any well-engineered system will carry that deduplication through to the cache layer as well.
Deduplication is *huge* in virtual environments. The other big place it provides benefits, of course, is D2D backups.
Re:How useful is this in realistic scenarios? (Score:2)
Yeah, but the problem there is the cost. We run on 17GB boot disks, so your 200 VMs would require under 4TB of disk to store. I am sorry, but 4TB of storage is peanuts and I can do that easily with a low-end DS3400.
Now the million dollar question to ask is how much does your dedupe solution cost? The reason being any dedupe that is supported against a virtualization solution we have looked at costs more than just buying the frigging disk. One then has to question the point of bothering with the extra layer of complexity.
The level of dedupe in bulk storage is likely to be low as well, besides which the cost of dedupe on a couple hundred TB of disks is ridiculous. Even for backup one has to wonder as well; tape is again really cheap, and dedupe for hundreds of TB is bloody expensive.
Re:How useful is this in realistic scenarios? (Score:2)
Now the million dollar question to ask is how much does your dedupe solution cost?
Nothing. Our NetApp has it by default (who charges extra for dedupe these days ?).
The reason being any dedupe that is supported against a virtualization solution we have looked at costs more than just buying the frigging disk.
Except it doesn't cost any more and it saves IOPS, meaning we need to buy less disk not only for space, but for performance as well.
The level of dedupe in bulk storage is likely to be low as well, besides which the cost of dedupe on a couple hundred TB of disks is rediculas. Even for backup one has to wonder as well, tape is again really cheap, and dedupe for hundreds of TB is bloody expensive.
If your dedupe solution has differing costs depending on how much data you have, you've got the wrong solution.
Re:How useful is this in realistic scenarios? (Score:2)
This doesn't even save a single hard drive at current storage densities. :-(
Re:How useful is this in realistic scenarios? (Score:2)
Re:How useful is this in realistic scenarios? (Score:2)
Ok I'll bite.
It's a real rarity in any of the enterprise environments that I have ever seen for minimal OS installs to be the mode of operation on application servers (Unix and the like), and I have never seen it on Windows-based application servers. I am not even certain I agree that it's such a good idea. Sure, all the daemons not in use should not be started, and ideally should have had their execute bits turned off to avoid mistakes, but when things go wrong it's often helpful to have full platform availability.
So in lots of SAN-based storage scenarios I suspect there is a great deal more than a few blocks to be saved on OS files alone.
Now for an application, think about your typical corporate mail server, where users usually send 100 people a copy of the same spreadsheet; times a thousand spreadsheets, times a few hundred users. Yea, it would be nice if you could get them to use the collaboration application or at least the file server, but that will never happen. Exchange prior to 2k10 did that type of dedupe in the information store, but not any more. Let's assume you have 5 or 7TB of online mail storage. An often-quoted figure is that 30% will be duplicate in the environment I described. SAN storage is still expensive enough that if you can cut that mail store down by a TB, that is meaningful savings. If that is not reason enough for you, imagine you are doing some kind of SAN-level replication to a hot site. The less data you need to move, the less connectivity you need to have to do it; at least here in the States, D3s are not inexpensive. Even if you are just scratching tape backups every night, cutting down the size of the snapshot in any way possible is a big win; anyone who has ever been stressed to figure out backup windows will tell you that.
Re:How useful is this in realistic scenarios? (Score:2)
``You might save a few blocks because of the shared OS files but if you did a proper minimal OS install then the gain hardly seems to be worth the effort.''
Right, but note the if. Most of the places where I've seen virtualization used have most of the VMs running instances of a proprietary operating system which shall remain unnamed. Together with other components that tend to be common, the amount of data that is common among instances can easily be over 10 GB per instance.
There is certainly a more efficient way to deal with > 10GB of common data per instance than storing the same data multiple times, and deduplication is one way to do things more efficiently.
Re:How useful is this in realistic scenarios? (Score:2)
The point is, though, that even if you save 20GB per OS instance, that only comes to 2TB over 100 virtual machines. You are talking about saving four RAID1 450GB 15k rpm SAS/FC arrays, or eight disks. It really just is not worth the additional complexity. At your 10GB per instance we are talking two arrays, or four disks: even less worth the additional complexity.
Then once you look at mature commercial implementations and you start paying by the TB deduped, it becomes utterly pointless. For sure an open source implementation can change that, but not one implemented in Java, for crying out loud.
Re:How useful is this in realistic scenarios? (Score:2)
All sales literature, mind you. My personal experience with it will begin in a few months, when we get our new Celerra installed :-)
As far as I know, Celerras only do file-level dedupe.
This just gave me a good idea! (Score:4, Interesting)
It just occurred to me that it would not be difficult to write a quick script to extract everything into its own tree, run sha1sum on all files, and identify duplicate files automatically, probably in just one or two lines.
So in other words -- thanks Slashdot! The otherwise unintelligible summary did me a world of good -- mostly because there was no context as to what the hell it was talking about, so I had to supply my own definition...
Re:This just gave me a good idea! (Score:4, Informative)
Try this:
# rotate the previous snapshot out of the way
mv backup.0 backup.1
# unchanged files become hard links into backup.1, so each snapshot looks complete but only changed files take new space
rsync -a --delete --link-dest=../backup.1 source_directory/ backup.0/
see this [mikerubel.org]
Re:This just gave me a good idea! (Score:2)
Why not just rsync? ...
Now that that's out of the way...
For one thing, rdiff will give you a mess... rsync will give you multiple full filesystem trees...
Re:This just gave me a good idea! (Score:2)
Two things to look into:
rsync snapshots [mikerubel.org]
rsnapshot, for a better rsync snapshot [rsnapshot.org]
Re:This just gave me a good idea! (Score:2)
I'll take a look at this - thanks for the post.
Look at StoreBackup (Score:2)
We're a bit off topic here, seeing as this has nothing to do with file systems, but being off-topic is on-topic for /.
Anyhow: StoreBackup is a great backup system that automatically detects duplicates.
Re:This just gave me a good idea! (Score:2)
Re:This just gave me a good idea! (Score:2)
Deduplicated backups: http://backuppc.sourceforge.net/info.html [sourceforge.net]
Re:This just gave me a good idea! (Score:2)
For more good ideas like this, watch this screencast from pragmatic TV.
http://bit.ly/Pk3z3 [bit.ly]
Jim Weirich explains how git (the version control tool) works from the ground up, and in doing so, builds a hypothetical system that sounds like what you are trying to do.
Re:This just gave me a good idea! (Score:2)
Re:This just gave me a good idea! (Score:2)
On my system, each incremental backup of a 24GB dataset occupies about 600MB (depending how many files have changed). And each incremental backup appears as a complete, uncompressed copy of the dataset (unchanged files are hard links), making extracting files trivial!
It'll also back up across the network with ssh, so you can back up remote servers; it'll even back up Windows machines. It does proper backup rotation (I store two weeks' worth of daily backups, then a couple of weekly backups, then monthly). It's totally awesome.
Re:This just gave me a good idea! (Score:2)
Re:This just gave me a good idea! (Score:2)
User Land? Come on! (Score:2)
[...] Opendedup runs in user space, making it platform independent, easier to scale and cluster, [...]
... and slow, prone to locking issues, etc. There's a reason no one runs ZFS over FUSE; why would we do it with this?
Confusing summary (Score:2)
Can anyone offer wisdom on what the volume size is supposed to signify, being different from the maximum size that SDFS is scalable to?
Re:Confusing summary (Score:2)
It's the raw capacity of the filesystem compared to the maximum amount of deduplicated data it can handle. So you can have 8PB of raw disk space, on which you can store up to 8EB of deduplicated data (depending on the dedupe ratios you get - I think 1000x is a little optimistic; 30x-40x is more common).
Great idea (Score:2)
Too bad it's just another new filesystem. I would have preferred integration into (some future version of) EXTn or BTRFS.
Not only would that mean it gets more widely available, it also means you don't have to miss all the nice functions of these filesystems. You may even be able to use it out of the box.
Re:Excellent! (Score:2, Redundant)
Yeah, I gave up on bitching about code inefficiency back in the early 90s. Do they even teach assembly any more?
Offtopic? (Score:4, Informative)
If you'd mentioned the fact that this appears to be written in Java, you might have a point. But despite this, and the fact that it's in userland, they seem to be getting pretty decent performance out of it.
And keep in mind, all of this is to support reducing the amount of storage required on a hard disk, and it's a fairly large programming effort to do so. Seems like this entire project is just the opposite of what you claim -- it's software types doing extra work so they can spend less on storage.
Re:deduplication (Score:2)
Re:deduplication (Score:3, Funny)
So, Blade Runner was about de-duplication?
Re:deduplication (Score:5, Funny)
What kind of lame recursive acronym is "deduplication"? I'm flummoxed in any attempt to decipher it.
Deduplication Eases Disk Utilization Purposefully Linking Information Common Among Trusted Independent Operating Nodes
Re:Let's get down to brass tacks. (Score:2)
Well, just how repetitive is your porn collection?
Re:Let's get down to brass tacks. (Score:4, Funny)
very repetitive. back and forth. back and forth. oh wait... that's not what you meant. never mind.
Re:redundant if saving large amounts of data to SA (Score:2)
Re:redundant if saving large amounts of data to SA (Score:2)
Re:New use for an old algorithm? (Score:2)
It seems so, but the ordering was always: physical, partition, filesystem, compression (sometimes with the FS integrating compression), and compression applied to relatively small chunks (blocks).
Now you have a compression layer above the partition layer, which means two identical files on two different partitions will physically occupy the space of one.
So, say, your LAMP server takes up a 4GB generic system plus 1GB of custom data. One TB of storage could fit 200 partition-files of such a server. Now you'll fit 995 of them, and it will work faster, as the commonly used parts of the FS will be read and buffered once for all instances.
Re:See also: LessFS (Score:2)
And given that it is not written in Java, it is likely to perform much better.