Linux

Best Backup Server Option For University TV Station? 272

idk07002 writes 'I have been tasked with building an offsite backup server for my university's television station, to back up our Final Cut Pro Server and our in-office file server (a Drobo) in case the studio spontaneously combusts. Total capacity between these two systems is ~12TB. Neither is at all full yet, but we would like the backup system to have the same capacity so that we can get maximum life out of it. It looks like it would be possible to get rack space somewhere on campus, with Gigabit Ethernet and possibly fiber coming into our office. Would a Linux box with rsync work? What is the sweet spot between value and longevity? What solution would you use?'
This discussion has been archived. No new comments can be posted.

  • by neiras ( 723124 ) on Wednesday September 16, 2009 @10:50PM (#29449713)

    Try one of these babies [backblaze.com] on for size. 67TB for about $8,000.

    There's a full parts list and a Solidworks model so you can get your local sheet metal shop to build cases for you.

    Talk to a mechanical engineering student on campus; they can probably help with that.

  • by Z8 ( 1602647 ) on Wednesday September 16, 2009 @11:05PM (#29449805)

    You may also want to check out rdiff-backup [nongnu.org]. It produces a mirror like rsync does, and uses a similar algorithm, but it keeps reverse binary diffs in a separate directory so you can restore to previous states. However, because it keeps these diffs in addition to the mirror, it's best used when you have extra space on the backup side.

    There are a few different front-ends/GUIs for it, but I don't have experience with them.
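
    A minimal sketch of typical usage, with a hypothetical host and paths:

        rdiff-backup /data backuphost::/backups/data                    # mirror, plus reverse diffs kept automatically
        rdiff-backup -r 7D backuphost::/backups/data/video /tmp/video   # restore that directory as it was 7 days ago
        rdiff-backup --remove-older-than 8W backuphost::/backups/data   # prune increments older than 8 weeks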

  • Why a backup server? (Score:3, Interesting)

    by cyberjock1980 ( 1131059 ) on Wednesday September 16, 2009 @11:08PM (#29449845)

    Why not a complete duplicate of all of the hardware? If the studio combusts you have an exact copy of everything.. hardware and all. If you use any kind of disk imaging software, you can simply recover to the server with the latest image and lose very little data.

  • I love rdiff-backup, but I'd never use it on any large dataset. I once tried it on ~600GB of data with about 20GB of additions every month, and it ran dog slow: 6+ hours every day (there were a lot of small files; dunno if that was the killer).

    For larger datasets, like what the poster has, I'd go with a more comprehensive backup system like Bacula. I use it to back up about 12TB, and it's rock solid and fast. There's a bit of a learning curve, but the documentation is very good.

    If Bacula is too intimidating, rsnapshot would be a viable route. It's similar to rdiff-backup but simpler (pretty much just rsync plus cp with hard links), faster, and easier to use. It's not as space-efficient, but diffing video data is probably a waste of time anyway.
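
    For illustration, the hard-link rotation that rsnapshot automates boils down to something like this (the paths and retention here are made up):

        # rotate snapshots: drop the oldest, shift the rest up by one
        rm -rf /backup/daily.6
        for i in 5 4 3 2 1; do mv /backup/daily.$i /backup/daily.$((i+1)); done
        cp -al /backup/daily.0 /backup/daily.1      # hard-link copy: near-zero extra space
        rsync -a --delete /data/ /backup/daily.0/   # refresh daily.0; unchanged files stay shared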

  • by Anonymous Coward on Wednesday September 16, 2009 @11:33PM (#29450059)

    Actually, I'd suggest using OpenSolaris so that you can take advantage of ZFS. Managing large filesystems and pools of disks is *stupidly* easy with ZFS.

    You could also do it with Linux, but that would require you to use FUSE, which carries a considerable performance penalty. I'm not sure about the state of ZFS on FreeBSD, although I imagine the Solaris implementation is going to be the most stable and complete. (For what it's worth, I've been doing backups via ZFS/FUSE on Ubuntu for about a year without any major problems.)

    The FreeBSD port of ZFS actually works pretty damn nicely. I'm using a RAID-Z configuration on my FreeBSD 7.2 server and it works great!
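
    To give a sense of how easy that management is, a sketch with hypothetical pool and device names:

        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0               # pool with double parity, one command
        zfs create tank/backups                                                   # carve out a dataset
        zfs snapshot tank/backups@nightly                                         # instant, nearly free snapshot
        zfs send tank/backups@nightly | ssh offsite zfs receive vault/backups     # replicate it offsite ("vault" is hypothetical)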

  • by illumin8 ( 148082 ) on Thursday September 17, 2009 @12:03AM (#29450257) Journal

    Try one of these babies on for size. 67TB for about $8,000.

    There's a full parts list and a Solidworks model so you can get your local sheet metal shop to build cases for you.

    Talk to a mechanical engineering student on campus; they can probably help with that.

    Better yet, just subscribe to Backblaze and pay $5 a month for your server. Problem solved.

  • Here's what I do (Score:3, Interesting)

    by MichaelCrawford ( 610140 ) on Thursday September 17, 2009 @12:36AM (#29450443) Homepage Journal
    First, let me point out that there are natural disasters that could take out your backup if it's on the same campus as your TV station - think of Hurricane Katrina. And you certainly want your Final Cut projects to survive a direct nuclear hit.

    Anyway, I have a Fedora box with a RAID 5 made of four 1TB disks. There is a partition on the RAID called /backup0. That's not really a backup so much as a staging area. I back up all my data to /backup0, then right away use rsync to copy the new data to an external drive, either /backup1 or /backup2.

    I have a safe deposit box at my bank. Every week or two I swap the external drive on my desk with the external drive in the safe deposit box.

    The reason I have that /backup0 filesystem is so I don't have to sync the two external drives to each other - otherwise I would have to make twice as many trips to the bank, and there would be some exposure if my house burned down while I had both external drives at home.
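
    In outline, the two-step copy is just (the source path and mount point here are hypothetical):

        rsync -aH --delete /home/ /backup0/             # stage onto the RAID first
        rsync -aH --delete /backup0/ /media/backup1/    # then mirror the stage to whichever external drive is home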

    My suggestion for you is to find two other university facilities that are both far away and offer to trade offsite backup services with them.

    You would host two backup servers in your TV station, one for each of your partners, and they would each host two as well: one for you and one for the other partner.

    That way only a hit by a large asteroid would lose all your data.

    I got religion about backing up thoroughly after losing my third hard drive in twenty years as a software engineer. Fortunately I was able to recover most of that last one, but one of the other failures was a total loss, with very little of its data being backed up.

  • by mlts ( 1038732 ) * on Thursday September 17, 2009 @12:54AM (#29450553)

    Backups for UNIX, backups for Windows, and backups across the board almost always require different solutions.

    For an enterprise "catch all" solution, I'd go with TSM, Backup Exec, or Networker. These programs can pretty much back up anything that has a CPU, although you will be paying for that privilege.

    If I were in an AIX environment, I'd use sysback for local machine backups and backups to a remote server.

    If I were in a general UNIX environment, I'd use bru (it used to be licensed with IRIX, and it has been around so long that it works without issue on any UNIX variant). Of course, there are other solutions, both freeware and commercial, that work just as well.

    If I were in a solidly Windows environment, I'd use Retrospect or Backup Exec. Both are good utilities and support synthetic full backups, so you don't need to worry about a full/differential/incremental schedule.

    If I were in a completely mixed environment, I'd consider Retrospect (it can back up a few UNIX variants as well as Macs), Backup Exec, or an enterprise level utility that can back up virtually anything.

    Please note, these are all commercial solutions. Bacula, Amanda, tar over ssh, rsync, and many others can work just as well and will likely be a lot lighter on the pocketbook. However, for a business, enterprise features like copying media sets, or backing up a live database to tape or other media for offsite storage, may be something to consider for maximum protection.
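
    At the do-it-yourself end of that list, tar over ssh is about as simple as it gets (the host and paths here are hypothetical):

        # stream a compressed archive straight to the offsite box, no local temp file needed
        tar czf - /projects | ssh backuphost "cat > /vault/projects-$(date +%F).tar.gz"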

    The key is figuring out what you need for restores. A backup system that is ideal for a bare-metal restore may be a bit clunky if you have a machine with a stock Ubuntu config and just a few documents in your home directory. On the other hand, having 12 terabytes on Mozy and needing to reinstall a box from scratch, with custom apps and funky license keys, would be a hair puller. The best thing is to use some method of backups for the "oh crap" bare-metal stuff, plus an offsite service in case you lose your backups at that location.

    Figure out your scenario, too. Are multiple Drobos good enough, or do you need offsite storage in case the facility is flooded? Is tape an option? Tape is notoriously expensive per drive, but becomes very economical once you start using multiple cartridges. Can you get away with plugging in external USB/SATA/IEEE 1394 hard disks, backing up to them, and then plopping them in the Iron Mountain tub?

  • Re:lose the drobo (Score:3, Interesting)

    by mlts ( 1038732 ) * on Thursday September 17, 2009 @01:06AM (#29450603)

    I have not heard of any catastrophic data losses firsthand, but I don't like my data stored in a vendor-specific format that I couldn't dig out by plugging the component drives into another machine.

    If you are the homebrew type, you might consider building yourself a generic server-grade PC running your favorite OS [1] that can do software RAID, and using that for your backups. That way, when you need more drives, you can move to external SATA enclosures.

    [1]: Almost all UNIX variants support RAID 5, Linux supports RAID 6 (two drives' worth of parity), and of course some BSDs and Solaris support ZFS with RAID-Z. Windows 2000 Server, Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 all support RAID 5.
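
    On Linux, for instance, a minimal software RAID 6 setup with mdadm might look like this (the device names are hypothetical):

        mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]   # 8 drives, 2 of them parity
        mkfs.ext4 /dev/md0                                                # format the array
        mount /dev/md0 /backup                                            # and mount it for backups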

  • by mysidia ( 191772 ) on Thursday September 17, 2009 @01:27AM (#29450711)

    The hard drives are desktop-class: not designed for 24x7 operation, and not designed for the massive write traffic that server backups generate.

    Latent defects on disks are a real concern.

    You write your data to a disk, but there's a bad sector or a miswrite, and when you go back later (perhaps when you actually need the backup), there are errors in the data you read back.

    Moreover, you have no way of detecting that, or of deciding which array has recorded the "right" value for that bit...

    That is, unless every bit has been copied to 3 arrays and you compare all 3 every time you read the data. (Or you keep two copies and a checksum.)

    And the complexity of that redundancy reduces overall reliability and has a cost of its own.
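
    A minimal sketch of the copies-plus-a-checksum idea with stock tools (the paths here are hypothetical):

        find /backup -type f -exec sha256sum {} + > /backup-manifest.sha256   # record checksums right after writing
        sha256sum -c /backup-manifest.sha256                                  # later, verify the copy before trusting a restore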

  • Re:BackupPC (Score:2, Interesting)

    by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Thursday September 17, 2009 @05:22AM (#29451601) Homepage Journal

    I love BackupPC more today than ever. I had a run with some of the more often-used commercial offerings, and the grass is NOT greener on the other side. Despite their fancy wizards and support, BackupPC beats any one of them anytime.

    I back up about 230GB of user data each night, and still the pool is only 241GB after many months of use.

    "There are 6 hosts that have been backed up, for a total of:

            * 51 full backups of total size 1895.95GB (prior to pooling and compression),
            * 36 incr backups of total size 62.33GB (prior to pooling and compression). "

    The pooling works really well and saves oodles of space. The best thing is that it's very easy to set up and to restore files through the web GUI, and it demands no tinkering at all once it's installed. I don't think the learning curve is worse than for anything else. Even if you can easily install a commercial system with its defaults, it takes a lot of learning, tinkering, and work before you can leave it alone.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday September 17, 2009 @11:12AM (#29453731) Journal

    "Why bother?"

    See GP. If the hardware I want isn't supported by Solaris, but is supported by Linux, I'll want to use that.

    "OpenSolaris will run rsync just fine"

    It'll also run NFS, so if the hardware will support it, you do have a point -- even if I "needed" Linux for some reason, I could still use Solaris for the physical storage.

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein
