Linux Backups Made Easy 243
mfago writes "A colleague of mine has written a great
tutorial on how to use rsync to create automatic "snapshot-style" backups. Nothing is required except for a simple script, although it is thus not necessarily suitable for data-center applications. Please try to be gentle on his server: it is the $80 computer that he mentions in the tutorial. Perhaps try the Google cache." An excellent article answering a frequently asked question.
thank you... (Score:3, Insightful)
Perhaps more article submitters (or editors) could add these links more frequently?
Re:thank you... (Score:2, Interesting)
Yes -- that was a refreshing change from the usual postings, where the page is already slashdotted into oblivion by the time anyone clicks the link.
Re:thank you... (Score:2)
I prefer the idea that has been suggested by many previously, putting copies of linked articles right here on Slashdot.
Re:thank you... (Score:2, Insightful)
Re:thank you... (Score:4, Insightful)
This "ads pay for everything on the internet" mentality is INSANE!!
Re:thank you... (Score:2)
I'm sure they're perfectly happy to get the exposure from Slashdot linking to their cache. If they weren't, I'm sure their programmers could figure out if ($ENV{'HTTP_REFERER'}=~/slashdot/i) {print "Content-type: text/plain\n\nplease don't link directly to our cache.";}
i've been slashdotted! (Score:4, Interesting)
The site was never down; it's just that my roommate, a Windows user, noticed the connection was slow and reset the cable modem. He's quite upset about being unable to play Warcraft III. :)
I've never had a slashdot nick before, so I just created this one today. I'll try to go through some of the comments and provide useful feedback.
Thanks for your interest everyone!
Mike
Re:i've been slashdotted! (Score:3, Funny)
First suggestion: Don't list your actual backup configuration.
First Mirror (Score:4, Informative)
My mirror is here [matthewmiller.net]
'man dump' (Score:2, Interesting)
Re:'man dump' (Score:2)
Re:'man dump' (Score:1)
Re:'man dump' (Score:2)
Re:'man dump' (Score:2, Informative)
Re:'man dump' (Score:2)
I specifically said "rsync" does not support that. Whether he fools around with a script to do it for him, is another story. MY POINT is that dump does this ALL FOR YOU, and would have required LESS time to implement and would be a more reliable solution, as it was designed for doing filesystem backups!
Re:'man dump' (Score:4, Informative)
rm -rf backup.3                                   # drop the oldest snapshot
mv backup.2 backup.3                              # age the remaining snapshots by one
mv backup.1 backup.2
cp -al backup.0 backup.1                          # hard-link copy: unchanged files take no extra space
rsync -a --delete source_directory/ backup.0/     # refresh the newest snapshot from the live data
There. That's the script basically. Add more snapshot levels as needed, stick it in cron at whatever interval you need.
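For instance, an hourly entry in root's crontab might look like this (the script name is just whatever you saved the above as):

0 * * * * /usr/local/bin/make_snapshot.sh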
dump only supports ext2/3. This supports any file system, and retrieving a file from backups is as simple as running "cd" to the directory of the snapshot you need and "cp" the file out.
I run backups from Linux to IRIX and other UNIXes using GNU rsync and OpenSSH. This little trick is going to be very handy for me. I can't waste my time worrying about which filesystem type the files came from originally.
Re:'man dump' (Score:2, Informative)
rsync --backup-dir
2 years ago I wrote a script to do pretty much what the linked product does - i.e., maintain a duplicate set of data areas on another machine via rsync.
I use the --backup-dir option to relocate copies of the files which the current rsync run would otherwise delete or modify.
With a bit of rotation, we can have users helping themselves to a full view of their home directory as of last night, and we can also restore files effectively from any day of the week, going back 7 days in our case.
Sure does cut down on the number of tape restore requests.
As mentioned it is incredibly efficient - we deal with about 900GB of data backed up in this way - but rsync actually transfers about only 10-30GB of differences each night.
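The core of it looks roughly like this (not my actual script -- the paths and the rotation scheme here are just placeholders):

DAY=`date +%A`                       # e.g. "Monday"
rm -rf /backups/changed-$DAY         # recycle last week's directory for this weekday
rsync -a --delete --backup --backup-dir=/backups/changed-$DAY /home/ /backups/current/

Restoring yesterday's version of a file is then just a cp out of the right changed-* directory.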
Only problem is my script was a crap prototype, which is why I'm not letting anyone see it.
But I do have a design in my head for a more professional effort (it will be open-sourced) -- I might even get enough peace at work to write it one day!
Re:'man dump' (Score:2, Informative)
I was originally using the --backup-dir trick, and you're right, it allows you to back up the same data. The advantage to doing it as described in the article is that you get what appear to be full backups at each increment. This makes it simpler for your users, who can now think of the backup directories as full backups.
Hope that helps--
Mike
Because Linus says dump isn't reliable. (Score:5, Informative)
That's probably one good reason.
Re:Because Linus says dump isn't reliable. (Score:2)
I thought we beat up Billy Boy for doing that.
Re:Because Linus says dump isn't reliable. (Score:1, Funny)
Re: (Score:1)
Re:Because Linus says dump isn't reliable. (Score:2)
Tar or other systems that get the files through the regular file reading interface are better because they take advantage of the filesystem interface abstraction layer instead of going around it. That works well, and there's no reason to do backups otherwise. None. Not a single one. IMHO.
Re:Because Linus says dump isn't reliable. (Score:2)
The fact that reiserfs doesn't include a "dump" of its own isn't a failing in "dump", but a failure of the ReiserFS developers.
Yes, dump is and always was fs-specific. That's something that's always been understood.
It's also the only way to back up ACL's and other extended metadata. Data backup is good, but file metadata is important, too. You wouldn't back up your data with no file names, would you? File names are a small part of the metadata associated with a file. Tar and cpio only get a subset of that data.
As filesystems move toward extending the amount of metadata they store (ACL's and extended attributes now; ReiserFS is moving toward ever more complex metadata), backup programs are going to have to be extended to store that information in the archives. Until they do, only dump is reliable.
Spread the word.
Re:Because Linus says dump isn't reliable. (Score:2)
Re:Because Linus says dump isn't reliable. (Score:2)
Tar and cpio back up the standard UNIX permission set, but that set is really inadequate. Until they can back up the full set of ACL's, they're basically useless on systems that use them.
Re:Because Linus says dump isn't reliable. (Score:3, Informative)
The argument is that dump can't reliably back up the filesystem because of the kernel's filesystem caching, and that future kernel development is headed further in that direction, so you might as well not depend on dump. Seems pretty reasonable to me; go ahead and use dump if you like, though.
Re:'man dump' (Score:1)
One good reason... there are NO good HOW-TOs for dump on ext2/3. man is great, but somebody has to write something humanly readable/understandable for those of us who just don't bother to read all the features of a command before using it.
step 1) this
step 2) that
step 3) done
Works great! (Score:3, Insightful)
I use this method at work (Score:2)
Re:I use this method at work (Score:2, Informative)
So... (Score:1, Informative)
Tar? (Score:1)
Critical daily backups done by the clueless. (Score:1, Funny)
Re:Critical daily backups done by the clueless. (Score:2)
[referring to using 'tar' to do daily backups]
And people wonder why computer techs get a bad name.
Eh? There's nothing wrong with tar per se. For example, let's say you want to transport your backups over a network securely (i.e., via ssh). Your choices are:
1. Allow ssh access with no password (public-key access, preferably). I'm leery of this, because allowing anything like this to run automatically means entrusting all the auth data to the machine, where it can be compromised.
2. Copy the backups asynchronously from making them, allowing user-initiated authentication. This was the approach I opted for when I had to put together a backup system overnight at one company.
Couple of cron jobs that ran incremental tar's on a list of directories, storing them in the scratch partition with higher permissions (so user processes cleaning up after themselves couldn't nuke them accidentally). Then at my leisure I would run the transport script (mornings about 10 AM, typically) which would suck the backups across and copy them to the tape. This worked fine for the time the project was active. Note that I was backing up to tape, which meant I needed to manually rotate tapes anyway, so this system helped ensure that new backups didn't overwrite old ones if I came in late -- and we definitely did not want these backups exported to our network. I also had the advantage of only needing to worry about one server.
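Stripped down, the two pieces looked something like this (the paths, hostnames, and tape device here are made up, not the real setup):

# nightly cron job on the server: incremental tar into a root-owned scratch area
tar -czf /scratch/backups/home-`date +%Y%m%d`.tar.gz \
    --listed-incremental=/scratch/backups/home.snar /home

# transport script, run by hand from the backup host around 10 AM --
# the scp is where you type the password, which was the whole point
scp backupserver:/scratch/backups/home-*.tar.gz /var/tmp/incoming/ \
    && tar -cf /dev/st0 -C /var/tmp/incoming .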
Just because tar is old and a bit... esoteric at times, doesn't mean it's therefore automatically a stupid idea to use it. If it's what you know, and it gets the job done, there's no need to feel guilty about not using a fancier system. Even Linus likes tar, because it's rock-solid reliable.
Now if you have (faint hope) a valid criticism of this guy's use of tar in his environment, then I'm all ears. But I doubt that, since he didn't give enough detail for you to have one.
I don't know why I even bother with this given it's an AC post, except that assholes like this are a major reason why Linux advocates get a bad rep.
rdist can perform same task with more flexibility (Score:1)
A simple example for my home directory is:
#
# Make a local copy of the contents of the home directory.
# Also make a local copy populated with hard links.
#
# This has the effect of preserving snapshots through time
# without too much overhead. (Cost of hard links + changed files.)
#
~ -> localhost
install -oremove
except ~/tmp
cmdspecial ~ "DATE=`/bin/date +\"%%Y-%%m-%%d.%%T\"` ; cp -al
Note that I get dated backup directories, and that I can add as many "except" clauses as I want, so I don't need to back up junk directories.
(.mozilla caches, etc.)
My backup drive is mounted via automount, so it stays unmounted except when a backup is actually running. Just change "localhost" to host the backup on another machine.
tar or dd? (Score:1)
I guess the whole thing goes to prove that, within anything computer related, there is more than one way to do it. Clever tutorial, gang. =^_^=
Check out glastree (Score:3, Informative)
It works just great! At one site I've got about 7 weeks of depth from 3 different servers, all mirrored via ssh-nfs on one lowly Pentium 133. We still spin tapes, mind you, but glastree has been flawless.
Been meaning to buy the author a virtual beer for some time now...
http://igmus.org/code/
From the website:
'The poor man's daily snapshot, glastree builds live backup trees, with branches for each day. Users directly browse the past to recover older documents or retrieve lost files. Hard links serve to compress out unchanged files, while modified ones are copied verbatim. A prune utility effects a constant, sliding window.'
--
Re:Check out glastree (Score:1)
It looks like this might be almost the exact same thing as is linked in the article. It's the same basic premise.
Re:Check out glastree (Score:2, Informative)
Yes, my software does essentially this, wrapped up in a nice utility (though, you get day resolution).
What we want, of course, is a better replica of plan9's dumpfs [bell-labs.com], featuring a real filesystem layer and compressed block differences. This is on my TODO list.
-jeremy
Rsync + Samba = Pretty good backup (Score:1)
Just thought I'd share our little Linux backup experience.
Slashdotting (Score:1)
*(change all pronouns to the appropriate gender)
Re: (Score:1)
What about MAC OS X??? (Score:2)
Re:What about MAC OS X??? (Score:2)
Re:What about MAC OS X??? (Score:1)
You can do the exact same thing in OS X. You just need to get rsync, which can be installed as part of fink [sourceforge.net] if you have it.
Otherwise, you can get the rsync sources yourself and build it without too much trouble.
Re:What about MAC OS X??? (Score:1)
Re:What about MAC OS X??? (Score:2)
This won't work with HFS because of the file forks. If you use UFS with OS X, the file forks appear as normal files. Eg, if you have a file named "foo", "._foo" is the resource fork. I don't know where they keep the finder fork, and I've never cared to investigate.
Here's a tip if you have to use OS X for a file server of any kind: use two partitions (or two disks), one HFS and one UFS. The OS and any applications are installed on the HFS partition and all data goes on UFS. Use HFS for the OS because a lot of stuff breaks when running under UFS and UFS performance is still roughly twice as bad as HFS in 10.2 (run your own little benchmarks if you don't believe me). Keep user data on UFS so you can use tools like tar, rsync, etc. to back up and manipulate files. Remember, tar won't work on most HFS files (those with forks). If you're deploying OS X Server, you should definitely keep user data on a separate partition anyway since any tiny little mistake (eg, LDAP typo in Directory Assistant) will require a reformat-reinstall.
Another tip: if you create a tarball off a UFS filesystem and then untar that onto a HFS filesystem, it will preserve the forks correctly. This has come in quite useful in making "setup" scripts for end-user machines, where all the applications to install are stored in tarballs created on a UFS machine and you can untar them onto the target HFS machine (the advantage is that you can script this - add in a couple of niutil commands and you can recreate a user machine in a couple minutes from one script).
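The guts of such a script are roughly this (the application and user names are invented for the example):

# on the UFS build machine: roll up an application, forks ("._" files) and all
tar -czf someapp.tar.gz -C /Applications SomeApp.app

# on the target HFS machine: unpack it, then add a local user via NetInfo
tar -xzf someapp.tar.gz -C /Applications
niutil -create . /users/jdoe
niutil -createprop . /users/jdoe uid 503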
I have a couple of OS X Server machines (bosses like the GUI user management stuff). I just tried rsync over NFS to a Linux box and it works fine since the data is on a UFS partition on the OS X Server box. PITA to set up an NFS share remotely (since I don't have Macs at home -> no Remote Desktop, no usable VNC servers for OS X -> have to do it over ssh -> must figure out how NFS exports are stored in netinfo -> gnashing of teeth), but it works and I might try this little trick next week since we're not doing anything systematic for backups on the OS X boxen.
Also, radmind [umich.edu] is a great tool for managing filesystems of OS X client machines. It supports HFS (by using AppleSingle internally).
Backups are for wimps (Score:1, Funny)
SSH comment needs to be added! (Score:3, Informative)
Also, it should probably be done from the real server to the backup server, so that you cannot just break one machine and get into all of them. (If you break into the real machine as root, then you should be able to get into the backup machine.)
This allows the backup machine to have only one open port: ssh, which can be tcpwrapped to allow connections only from the machines that it backs up.
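Concretely, each real server pushes something like this (hosts and paths are placeholders), and tcpwrappers on the backup box decides who may even talk to sshd:

rsync -a --delete -e ssh /home/ backupbox:/backups/realserver1/home/

# /etc/hosts.allow on the backup machine:
sshd: 192.168.1.10 192.168.1.11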
Re:SSH comment needs to be added! (Score:2)
oh my, rsync backups roxxor (Score:3, Interesting)
I've been doing backups this way on Linux for aLongTime(tm). On FreeBSD I've also used dump/restore to an NFS-mounted RAID drive (does dump work okay on Linux these days? I've always been afraid to try it for some reason, maybe earlier versions weren't stable).
rsync is just so cool. First of all, it can work over the network through ssh, or through its own daemon (faster), or on a local filesystem. You can "pull" backups from the server or "push" them from the client. Over the network, it can divide the files into blocks and send just the blocks that are different. It has a fairly sophisticated way to specify files to exclude/include (for instance, exclude /home/*/.blah/* can be used to not save the contents of everybody's .blah directory, but keep the directory itself). You can set up a script to just back up given subdirectories so you can checkpoint your important project without backing up the whole show. Etc etc.
I use it both to save over the network using the rsync daemon, and to a local separate drive. On a local drive it's great, because you can easily retrieve files that you've accidentally deleted, just using cp. It's also great for stuff like "diff -r /etc /backups/etc" to see if something changed.
I never thought of his technique for incremental backups, but since it uses hard links, I wonder how that interferes with the original hard links in your files?? Looks interesting.
rsync has many flags and options; here are the ones I use to pull complete backups from another host onto a local drive (yeah, --archive is a bit redundant here).
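From memory, it's something along these lines (the daemon module name and destination path are just examples):

rsync -v --archive --recursive --links --perms --times \
      --group --owner --devices --delete \
      otherhost::backups/ /mnt/backupdrive/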
rdiff-backup is easier and more efficient (Score:5, Informative)
http://rdiff-backup.stanford.edu/
Re:rdiff-backup is easier and more efficient (Score:2, Informative)
Rdiff-backup is an excellent utility, and Ben Escoto (its author) and I link to each other. You must realize, though, that the purposes are different. Rdiff-backup is more space efficient for things like text, email, and so on. My rotating snapshot trick is less space-efficient, but much simpler for the average user to understand ("just go into your snapshot directory and copy the old file back into reality"). It works on all kinds of files, and barely touches the CPU (since it isn't doing diffs). I would use rdiff-backup for administrative backups of email, code, and that sort of thing, where text is involved and user restore is not an issue.
Different tools for different jobs!
Mike
Re:rdiff-backup is easier and more efficient (Score:2, Informative)
Re:rdiff-backup is easier and more efficient (Score:3, Informative)
New users: use the development version; it's a lot more efficient if you have a lot of small files, because it uses librsync instead of executing rdiff for each file. I've measured a factor-of-20 speedup on my devel directory!
Email messages shouldn't change at all. (Score:2)
Re:Email messages shouldn't change at all. (Score:2)
Really? How do you safely modify or delete a message from an mbox file? You make a new file and copy the existing one while changing the data that you need changed and then atomically replace the original file. This means you use double the space of the mbox file and take the time to rewrite the entire file. Or, you modify the original mbox file and hope the system doesn't crash while doing so or you risk corrupting the entire thing. And you have to deal with locking in both cases. How is this better than Maildir?
As for being slow, there are benchmarks to prove you wrong (on courier-mta.org).
This is really neat. (Score:1)
It's stories like this that keep me reading Slashdot. (Other than ranting on YRO stories, but that is no where near as cool as a neat trick like this)
What I'd really like... (Score:5, Interesting)
A few years ago I saw a neat (expensive!) disc array that could 'freeze' the disc image at a single point in time so that a backup could be taken from the frozen image. The backup software saw only the frozen image, while the rest of the OS saw the disc as normal, including updates made after the freeze occurred. The disc array maintained the frozen image until the backup was complete, guaranteeing a true snapshot as of a specific instant in time.
I wonder whether such a thing would be possible in software. Possibly it can even be done through cunning application of the tools that we already have. I imagined that you might be able to do something like it by extending the loopback device interface. Does anyone out there have any cunning ideas?
Re:What I'd really like... (Score:5, Informative)
Re:What I'd really like... (Score:2)
Sounds like the Network Appliance [netapp.com] Filer's "snapshot" feature, but less advanced. (You can also get exactly the feature described under Linux purely in software, via LVM, now.) Under the NetApp version, you gain an extra directory ".snapshot", which contains previous versions of each file. So, if you screw up editing some file (delete/corrupt it, whatever) you can just grab a previous snapshot copy. Like having a series of online backups - but without all the extra space+hardware needs. Like CVS, but without the hassle (or fine-grained control) of doing "commits". Just tell the Filer "take a snapshot now" and 30 seconds later, it's done. Or "take snapshots every hour".
Neat feature - you could almost get this using LVM under Linux, but not quite...
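(The LVM part -- getting a frozen image you can back up from -- looks roughly like this; the volume and mount names are made up:)

lvcreate --snapshot --size 1G --name homesnap /dev/vg0/home
mount -o ro /dev/vg0/homesnap /mnt/snap
tar -czf /backups/home-snapshot.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/homesnap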
Re:What I'd really like... (Score:2)
Re:What I'd really like... (Score:2)
chattr +u (Score:2)
What I would really like, however, is the ability to have the file system keep versions of a file as the file is written to or deleted; I don't want a snapshot every hour, I want a new single-file snapshot for every change to the file. And I want to be able to set or clear an attribute to control which files/directories this gets done in (i.e., chattr +u [linuxcommand.org], which currently doesn't really do anything). And I want the old snapshots to age and vanish on their own, say, 3 days after they are made (or however many days the sysadmin chooses).
Under Windows, with Norton Utilities, you can get this sort of functionality with the Norton Protected Recycle Bin. I have been wishing for this on Linux for quite some time.
I remember reading about something called the "Snap filesystem" which would someday offer this, but I can't find anything about it now on the web.
steveha
Re:What I'd really like... (Score:3, Informative)
We used to do this years ago before any such "options" were provided by drive manufacturers.
We were doing large Oracle backups, and there were issues with taking too much time to do a backup.
What we did was to throw some extra drives into the (at the time, software) RAID, so that we had a mirror of what we wanted to back up. At backup time, we'd shut down the Oracle instance, break the mirror, and then re-start the Oracle instance. The whole procedure resulted in less than 2 minutes of downtime for the instance, which was more than acceptable. We'd then take the "broken" mirror, re-mount it under a "temp" mount point, and then take our time backing it up (it usually took about 6-8 hours). Once we were finished backing it up, we'd re-attach the broken mirror and re-silver it. This was all done via software RAID, before journalling was available.
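With today's md tools the dance looks roughly like this (device names are made up -- back then it was raidtools and rather more manual):

# (stop the database first so the on-disk image is consistent)
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1    # break the mirror
# (restart the database -- downtime is just these few commands)
mount -o ro /dev/sdc1 /mnt/frozen                     # mount the detached half read-only
tar -cf /dev/st0 -C /mnt/frozen .                     # back it up at leisure
umount /mnt/frozen
mdadm /dev/md0 --add /dev/sdc1                        # re-attach and let it re-silver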
We did this about once a week, and it worked out great.
Not snapshots (Score:5, Informative)
At home, I store xfsdump output encrypted with GnuPG on an almost public (and thus untrusted) machine with lots of disk space (on multiple disks). At work, I do the same, but the untrusted machine is in turn backed up using TSM. In both cases, incremental backups work in the expected way. Of course, all this doesn't solve the snapshot problem (I'd probably need LVM for that), but with the encryption step, you can more easily separate the backup from your real box (without worrying too much about the implications).
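The pipeline is roughly this (the key ID, host, and paths are placeholders):

xfsdump -l 0 -L home0 -M file - /home | gpg --encrypt --recipient backup-key \
    | ssh untrusted-box 'cat > /bigdisk/dumps/home.0.xfsdump.gpg'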
Re:Not snapshots (Score:2, Informative)
Mike
why get so complex? (Score:3, Funny)
where exclude =
Stick it in a cron job. You can also add --delete if you want. It's basic, but easy.
Re:why get so complex? (Score:2)
be gentle? (Score:1)
Did anyone else think to themselves... I'm gunna click on that link just because it said go easy on it?
#!/bin/sh
# Rotate the last five archives, then take a fresh one.
rm -Rf /SAVE/bkup.tar.gz.5
mv /SAVE/bkup.tar.gz.4 /SAVE/bkup.tar.gz.5
mv /SAVE/bkup.tar.gz.3 /SAVE/bkup.tar.gz.4
mv /SAVE/bkup.tar.gz.2 /SAVE/bkup.tar.gz.3
mv /SAVE/bkup.tar.gz.1 /SAVE/bkup.tar.gz.2
mv /SAVE/bkup.tar.gz /SAVE/bkup.tar.gz.1
tar -zcf /SAVE/bkup.tar.gz /etc /var/spool/mail /home /var/www
Then I have an FTP script that runs once per day on the OTHER server sitting there (dare I say, the MS box) that grabs the bkup.tar.gz from the Linux box and does much the same as far as replication goes.
at WHAT time in the morning? (Score:2)
I guess it's better to trust your server at 4:20 than the operator -- well, for many operators, that is. Even if it's 4:20pm, I'd still prefer to let the machine do the critical work instead of some sysadmins. Knowing what I know about many sysadmins at 4:20, that is...
[hint: double entendre on 420. not sure if the author knew this or not. or maybe I just stated what was terribly obvious.]
4:20 (Score:2)
IMHO, this is a great solution - I've been looking for something like this for fuss-free backups at work. Voila.
Being the only "computer guy" at work sucks ass when you're the programmer/sysadmin/engineer/tech. Gah.
Flexbackup (Score:2)
I don't consider snapshot backups backups; they're snapshots.
I've been using a utility called Flexbackup -- it's a Perl script which will do multi-level (i.e., incremental) backups, spew to tape or file, use tar, afio, or dump, and handle compression. Oh yes, and it will use rsh/ssh for network backups. I wish I could buy the author a beer or few, but it seems to be unsupported now. Oh well.
Email me if you want a copy and can't find it. I've also got a patch to fix a minor table of contents bug with modern versions of mt.
Re:Flexbackup (Score:2)
Re:Flexbackup (Score:2)
Care to explain the difference for the uninitiated? Why can't a "snapshot" serve the same function as a backup?
I didn't say it couldn't serve as a backup, but it's not a backup in the sense that I can keep the last 6 months' worth of changes and pull from any of them. With snapshots I need to either keep 6 months of full daily backups or postprocess the daily snapshots and turn them into differential backups.
An example might help. I do daily backups of our servers. Let's call the daily backups level 3 backups. Now each week I do a level 2 backup. Each month I do a level 1 backup, and every quarter I do a level 0 backup. Let's analyze:
With full snapshot backups this would take an insane amount of disk space. As I said earlier I could postprocess the snapshots and create differential backups but why do the extra work when tar/afio does this automatically? RSync isn't that special, and with an incredible script like Flexbackup it's even less special.
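For the curious, this is the sort of thing tar does under the hood when Flexbackup drives it (the paths are just examples):

# level 0 (full), quarterly:
tar -czf /backups/home.0.tar.gz --listed-incremental=/backups/home.level0.snar /home
# level 1 (everything changed since the level 0), monthly -- work from a copy of the
# level-0 snapshot file so the level-0 state isn't clobbered:
cp /backups/home.level0.snar /backups/home.level1.snar
tar -czf /backups/home.1.tar.gz --listed-incremental=/backups/home.level1.snar /home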
It would be great if rsync could tell the other end "this file has changed, here are the changes" and have the backing-up end copy the file and apply the changes -- i.e. allowing the creation of differential backups. That's not what it's designed for, though.
Re:Flexbackup (Score:2)
I hate to break it to you, but that is what rsync does. If the file already exists where it is copying to, it will send the delta (think diff, but more efficient, and it works with binary files) and only update the changes.
You didn't read carefully enough: "...and have the backing-up end copy the file and apply the changes." I don't want one snapshot, I want a base snapshot and then any changes to be saved in an entirely new tree structure.
Basically this: Take your snapshot normally. Now ask for all the files that changed between the snapshot and today. rsync sends the diff. Now for each file mentioned in the diff, copy the entire file from the snapshot to another directory and apply the diff to that copy. Now your new directory has full files that are up to date, but only the diffs were sent over. That is not what rsync does.
Re:Flexbackup (Score:2)
You may be interested in this [bentlogic.net]. It does backups via rsync over ssh. It's still in development, so I'm sure features can be added if users request them.
It's got the exact same basic problem that the backup method featured in this article has -- they're snapshots, not multi-level backups. Each "pull" is a complete copy; I can't say "Give me all the files which have changed since my last levelx backup." That's what Flexbackup (well tar or afio) allows me to do. That's exactly what rsync isn't designed to do, as far as I can tell (I've used it before but I'm not an expert at it).
Are backups the right solution? (Score:2)
Re:Are backups the right solution? (Score:2)
You can exclude any part of the filesystem from the backups, or particular types of files, or files that match a particular pattern; see the "exclude" section in the rsync man page.
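For example (these patterns are just illustrations):

rsync -a --delete --exclude-from=/usr/local/etc/backup_excludes source_directory/ backup.0/

# where /usr/local/etc/backup_excludes contains lines like:
#   *.mp3
#   /home/*/tmp/
#   core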
I'm not sure I agree that applications should handle their own backups! Don't forget that applications are run as their owners, so if they are broken or hacked, they can destroy the backups too. Far better, I think, to have the backups removed where user-level processes can't touch them. And probably a lot simpler too!
Mike
Re:Are backups the right solution? (Score:2)
You can exclude any part of the filesystem from the backups, or particular types of files, or files that match a particular pattern; see the "exclude" section in the rsync man page.
I don't know about you, but my filesystem certainly isn't organized enough for that to be useful.
Don't forget that applications are run as their owners, so if they are broken or hacked, they can destroy the backups too.
Well, I was thinking more along the lines of backing up to a third party server over the internet, in which case there wouldn't be permission to delete old copies until after a certain period of time. I dunno, in the case of my system, there's very little that needs to be backed up. In fact, I really can't think of anything.
Hard links and file diffs? (Score:2)
Anyone know anything about this issue? I can't find the necessary info in the rsync docs [anu.edu.au].
Judging by the fact that this technique does seem to work, I presume that rsync never modifies a file in-place, but I wonder if that's a guarantee, or just the current behaviour?
(Also, I am aware of the --whole-file command-line argument, but that's an orthogonal issue.)
The answer? (Score:2)
Re:The answer? (Score:2, Interesting)
Rsync source code, then a lot of testing! :)
Mike
ps: You're right, if there is any change in the file, the original is unlinked first, then the new one is written over top of it. So it does work as advertised! Thanks for your help answering questions btw.
CVS, cron, and an RW (Score:2)
This works because I don't throw my mp3/ogg, pr0n, etc. into the repository. I'll have to figure out a new solution when I hit the 650MB/800MB limit, but it works for now. I'll probably just put my repository on a different computer and use ssh, or get another HD specifically for backup purposes.
I started using this system after reading the Pragmatic Programmer [addall.com]. They recommend using CVS for everything that is important. It's great for more than just code. And this way, whenever I install a new distro, I have all my settings, since I save my .emacs, .mozilla, .kde, .etc directories.
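The mechanics are about as simple as it gets (the repository path and module name are just examples):

export CVSROOT=/mnt/backup/cvsroot          # or :ext:otherhost:/cvsroot to go over ssh
cvs init                                    # one-time repository setup
cvs import -m "initial dotfiles" dotfiles mylocal start   # run from the directory to track
cvs checkout dotfiles                       # gives you a working copy to commit from
cvs commit -m "tweaked .emacs"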
What about backups to tape ? (Score:2, Interesting)
And what about people like me who back up to a DLT (or whatever) tape drive? Not much use then either.
In any case I don't see this as being extremely useful in the real world (i.e. beyond the casual backing up of a home machine)...
Re:What about backups to tape ? (Score:2, Insightful)
rsync-backup - a similar approach (Score:2, Informative)
Re:Linux sucks (mod as funny) (Score:1, Offtopic)
I am really tempted to mod this up as Funny, but I am afraid that you are serious. There are so many things wrong with your statements that I don't know where to start. I guess I'll just have to assume that this was a joke, and you weren't really serious. If you *were* serious, then you really really need to educate yourself about these new fangled computer dealies.
Re:Linux sucks (Score:1)
Re:Linux sucks (Score:1)
Re:Linux sucks (Score:1)
Re:Bandwidth isn't free, thats what. (Score:2, Insightful)
It's a sign of peer approval.
Re:Win2k has free backups made easy, too! (Score:1)
Re:Win2k has free backups made easy, too! (Score:1, Offtopic)
Move along, sheep. Go wait for your shepherd to tell you what to do.
Re:Win2k has free backups made easy, too! (Score:1)
There's a little daemon called 'Samba.' It's fairly obscure, and doesn't have many users, but I will explain how it works:
one can create SMB shares available to Windows clients, and then use the rather trivial Win2k backup util.
You might find mention of 'samba' in the bowels of Google somewhere, but I don't think anyone still uses it.
It's ok to use easy stuff for backups, just as users of automatic transmissions aren't pussies or limp wristed buffoons for picking something simple and automated.
Re:Win2k has free backups made easy, too! (Score:2)
I'd like to hear from you on this subject.
Re:simple encrypted backup (Score:2, Informative)
Not if you use the ssh-agent, and maybe keychain.
Before you run that command in a script, put this code previous to it:
keychain -q ~/.ssh/id_rsa          # point it at whichever key you actually use
. ~/.keychain/${HOSTNAME}-sh       # source the file keychain writes, so ssh can find the agent
tar cvzf - $1 | ssh $2 "( cd $3; tar xvzf - )"   # double quotes so $3 expands on the local side
Now the first time you run the command, it will ask you for your key passphrase, but any subsequent runs will work passwordlessly.
I use a similar script with rsync and it works great. Set up a cron job to automatically do the backup, and once after the box boots, start a manual backup (thus loading the key); it'll work automatically from there.
Keychain can be found here: http://www.gentoo.org/projects/keychain.html
Re:what's the big deal? (Score:2, Insightful)
Re:But how do you do cron+ssh+rsync? (Score:2)
Use an RSA key with no password. If you are paranoid enough to be using ssh, you should be paranoid enough to be using the strong authentication provided by RSA keys.
You don't really need ssh-agent.
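A minimal version of the setup (the paths and hostnames are placeholders):

ssh-keygen -t rsa -N "" -f ~/.ssh/backup_key          # passwordless key used only for backups
# append ~/.ssh/backup_key.pub to backuphost:~/.ssh/authorized_keys, then cron can run:
rsync -a --delete -e "ssh -i ~/.ssh/backup_key" /home/ backuphost:/backups/home/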