Ask Slashdot: >2GB Backup Software for Linux?

Fer asks: "Are there backup programs for Linux that do not have filesystem or volume size limits? I am trying to make a full backup of a 22 GB FTP server, using 4 GB TR-4 tapes. I have tried tar, dump, afio, taper, and afbackup, and every one of them either did not allow >2GB volumes or had weird problems with >4GB filesystems. Currently I am using dd to do the job, but I think there must be another option. Any suggestions on free programs I could use?"
  • by Anonymous Coward

    I use gtar + gzip (i.e. tar czvf ...)

    Works fine on my 4 gig tape drive.
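
    Something like this, say, assuming a SCSI tape drive on /dev/st0 (adjust the device and paths for your setup):

    tar czvf /dev/st0 /home /var/ftp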

    Mark
  • by Anonymous Coward
    If all you want is a poor man's RAID 1 mirroring, then yes, a second hard drive where you can store a tar-gzip of your filesystem will do fine. You can also purchase a second hard drive of the same capacity as your first and tell the Linux kernel to mirror all data to it. This will cost a little extra money, since the second drive will have to be larger to hold uncompressed data, but it saves the CPU time of gzipping everything, not to mention it makes immediate backups of your files rather than waiting for your crontab to run.

    This, however, is not what tape users use tape for. Nobody with a tape drive has just one tape to go with it; they have many tapes, with backups from yesterday, last week, and last month.

    A hard drive backup can only be used for disaster recovery: if one drive crashes, you have all the data neatly stored on the second. However, if a file has been deleted or modified and you later realize that you need the old version back, you are SOL if your second hard drive has already been backed up to again, while a tape user will have a backup from before the file was deleted or modified.

    Hard drive backups are good for disaster recovery only; tape backups are good not only for recovering your entire filesystem, but for recovering individual files from your filesystem as well.

    Oh, did I forget to mention that tape is cheaper too?
  • by Anonymous Coward
    IBM makes 25G IDE hard drives, and I'm planning to buy a couple as backup drives. They would probably stay spinning all the time but only be mounted a couple of times a month to copy any files that need to be backed up. I might use tar/bzip on one drive and a straight find/cp on the other.

    The GP series runs at 5400 RPM, slower than their other high-capacity IDE drive, the GXP, although I'd like it to be even lower, around 4000 RPM, like the Quantum Bigfoot TS drives. For $100 less per drive, I might get a couple of Quantum 19.2 Bigfoot TSs instead. For the "need that file I deleted a year ago" situation, I would burn a CD-R or two every couple of months for longer-term storage.

    But then Toshiba has that $450 DVD-RAM drive, with double-sided 5.2G media around $40...

  • by Anonymous Coward
    Don't tapes cost more, though? I'm not sure of the price of a tape nowadays (I'm thinking I saw some for about $50-$100 the other day), but a $30 1000x rewriteable CD sounds best to me. And a $1 non-rewriteable can be more economical if you like to "rotate" your backups but still keep some (you'll keep 'em all this way). You won't need to buy 10 tapes and pay $500-$1000 for them.

    Of course, for a large installation (i.e. a business, not at home), tapes are the only way to go. You can't fit 10 GB on a CD, and using over 10 CDs is just prohibitively sLooooW!

    Note: I believe cdrecord will burn from stdin, but if I remember right, the author strongly recommends against this... Oh well, I've got a CD-Rewritable, guess I'll give it a shot later and tell you my luck.
  • by Anonymous Coward
    If your filesystem only supports 2GB files, then just pipe the output of tar into split to break it up into chunks. Then write the chunks to tape.

    BTW: TR-4 tapes hold about 3.8 GB of data, so you might want to use slightly smaller chunks than 2GB, to make sure you can fit them onto the tape without wasting space.
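
    For instance, something along these lines (just a sketch: it assumes a non-rewinding tape device at /dev/nst0 and enough scratch disk to hold the chunks; the paths are made up):

    tar cf - /home/ftp | split -b 1900m - /scratch/ftpback.
    for f in /scratch/ftpback.*; do dd if=$f of=/dev/nst0 bs=64k; done
    mt -f /dev/nst0 rewind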
  • by Anonymous Coward
    You're right. One weird thing I noticed was that while the score of your post is 2, the reason next to it is flamebait. A +2 score should not have a negative reason next to it. But the fact that it has +2 makes it inherently more likely to be read.

    My view is that the question referred to backup solutions, and you offered one. I disagree with the solution, and am preparing to respond to your post. But your post seems to be entirely on topic. It looks like some good moderators came by and corrected the scoring problem.

    anonymous moderator
  • by Anonymous Coward
    At my last job, we used tar with great success to back up one 100GB and one 150GB RAID to a 14-tape DLT changer. There is a bug in GNU tar regarding incremental backups and renaming of files; I don't know if it's been fixed yet.
  • The Intel 32-bitness of the processor / system architecture has nothing to do with >2GB file size problems. For instance, a true 64-bit filesystem can (and does) exist on a 32-bit machine, which allows >2GB files.

    In fact, if you abstract the filesystem from the OS well and allow for 64-bit references, you could have any kind of large filesystem, like SGI, WinNT, etc... Maybe he just needs to expand the integer sizes in the kernel to support true(r) 64-bit filesystems. Maybe that's what this is all about.
  • by Anonymous Coward
    Afio should handle >2GB tapes since version 2.4.5. It has never had a limitation on the size of the filesystem. If there are cases where it fails for >2GB, I would like to get a bug report in e-mail.

    Koen.
    (koen@win.tue.nl, current afio maintainer)
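
    (For reference, basic afio usage is along these lines - a sketch only, so check the afio man page for the exact options in your version:

    find /home/ftp -xdev -print | afio -o -v /dev/nst0
    afio -t -v /dev/nst0

    The first line archives everything under /home/ftp to the tape; the second reads the table of contents back to verify it.)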
  • Backup Exec and Legato are good choices for NT. If your needs aren't complex, NTBACKUP.EXE works. You can backup files on Linux boxes using SAMBA (don't ask me about ownership/permissions) and tape drive support under NT is quite good.

    HP just released a cheap (~$250) 7/14GB Colorado IDE tape drive using Travan technology that I use to backup my network. [i'm not an hp rep, i just think hp is a reliable company and that's a good price for that size] For some reason the software and drivers that shipped with it don't work with NT Server (just Workstation) likely due to some "licensing" issues, since the binaries would be identical...

    "If Bill Gates had a nickel for every time Windows crashed...
    Oh, That's right. He does."
  • by Anonymous Coward
    I found that you have to retension travan tapes 2 or 3 times if you want to get any kind of reliability for large files (say the three and a half gigs I tar/gzip onto mine). Travan is the "common man's" tape drive; it's not really meant for serious business operations. At work, we use mostly DLTs with a few drives still using DAT.

    The one thing that bothers me most about Travans is how HOT they get. There's a reason why each tape has a heat sink built in!
  • by Anonymous Coward
    A ten-gig UDMA hard drive is $175 at the computer store down the street.

    An eight-gig Travan tape is $20 at the same place.

    From Web vendors, a DLT that will compress up to 70 GIGS of binary data is $80.

    I think that pretty much speaks for itself.
  • by Anonymous Coward
    Legato Networker works great and can be used in the enterprise network.
  • by Anonymous Coward
    Try this command and see if it works

    mkisofs -R /filesystem/ | cdrecord -v fs=6m speed=2 dev=0,0 -eject -dummy -

    and if it does remove the -dummy to make the real thing

    mkisofs -R /filesystem/ | cdrecord -v fs=6m speed=2 dev=0,0 -eject -

    You may have to change some of the flags. If your CDR won't do double speed remove the speed= (or change it to 4 if you are lucky enough to have a 4x drive) and if the drive is at a different SCSI id you'll have to change the dev= flag. I find that the fs= flag is not necessary but that it doesn't hurt either.

    One caveat: if the filesystem is larger than 650MB, you will just have made yourself a coaster. You can check the filesystem size with

    du -s /filesystem

    or, for more accurate results,

    mkisofs -print-size /filesystem
  • by Anonymous Coward on Sunday May 30, 1999 @09:42PM (#1874053)
    The Intel 32-bitness, or any kind of system architecture, has nothing to do with >2GB file size problems. 64-bit and larger integers are used all the time and you probably don't even think about it, e.g. 128-bit encryption, NTFS, timestamps, version numbers on MS binaries, etc. It's just a bit more of a bother on non-64-bit processors.

    When we had 16-bit DOS, did it mean we could only have 64K files? You might say, "but 32 bits only addresses 2GB (signed) of memory or whatever". Programs don't typically load 2-4GB worth of file at a time anyway, but that doesn't mean we can't or shouldn't use the contents of an 80GB file just because we have a 32-bit processor.

    A true 64-bit filesystem can exist on a 32-bit machine, which allows >2GB files. In fact, if you abstract the filesystem from the OS well and allow for 64-bit references, you could have any kind of large filesystem, like SGI, WinNT, etc... Maybe he just needs to expand the API / working integer sizes in the kernel to support true(r) 64-bit filesystems. Maybe that's what this is all about.

  • I have to second that. Arkeia rocks, especially in a mixed Linux and NT environment since it seems to be the only Unix-based backup system that preserves NT's permissions, ownership and ACLs.
  • Not in the entire Unix world; only in the Unix-on-32-bit-processor world. For example, AlphaLinux and Digital Unix (now with the odd name Compaq Tru64) shouldn't encounter this sort of problem.
  • A price check on CDW shows 2GB Jaz cartridges selling for $124.95. The Tandberg drive we use at work can put 16GB (uncompressed) on a single $65.71 tape. The server the tape drive is on has a bunch of Seagate ST39140W drives which hold about 8GB apiece (CDW says they hold 9.1 GB) and cost $339.10. The hot swap hardware for the drives adds to the cost, and the disks have to be present when the system is booted.

    Two of those disks cost $678.20 and would have about 2GB more capacity than a $65.71 tape (IIRC a 25GB tape can be had for about $78), and it would cost $999.60 (probably a little less, because there's a discount for 3-packs) to buy eight Jaz cartridges, which would equal a single tape. The disks and cartridges would probably be faster (although it would be annoying to sit for hours changing cartridges when you can stick in a tape and go home), but imagine paying $8000 to keep twice-a-week backups going back six weeks when you could pay $800 plus the one-time expense of the drive. Imagine if you did daily backups (which we don't), or kept a few months' worth of backups... or kept 'em indefinitely...

    The drive *is* rather expensive, but because of media prices tape is cheaper than other solutions if you make frequent backups and keep them for a while.

    ...and, the 8 Jaz cartridges take up as much space in your fireproof container (which should hopefully be designed to protect magnetic media, as mentioned elsewhere in this thread) as four tapes - that's four backups in the physical space of one.
  • A big tape is cheaper than a $30 rewritable CD when you consider that (if I did my math right) you'd pay about $738 to buy CD-RWs with the capacity of a $65 tape (assuming $30 per 650 MB rewritable CD and 16GB per $65 tape - I got the tape size & cost from the tapes used on our Linux server), but the cost of the drives is murder for a home user. You're right about CDs being cheaper if you want to keep all the backups forever (about $24 for 16GB of write-once CDs, if I didn't screw up my math).

    But nice big fast tape drives are sooo much more expensive than CD burners... :(

    (gahck, if you read my posts in this article you'd think I'm some crazy tape evangelist... I sound like your stereotypical OS bigot... uh-oh, here come the men in white coats... :)
  • Yes, the reason is not the speed, or whether it does backups properly... it's the cost of operation, and a reliability factor. Travan tapes cost between $15 and $25 each, whereas DAT tapes are around $5-10. DLT tapes are more expensive, but have a capacity of over 35 gigs uncompressed. It's also a reliability thing: Travan tapes are limited to several hundred backup-restore cycles, while DLT/DAT are in the thousands, which means they produce fewer errors. DLT tapes are even more cool; they have a shelf storage life of up to 30 years, compared to the 5 years of Travan (I don't know about DAT tapes).
  • There is a Linux client for Networker. It's been unsupported for a while now, but even that just changed recently.

    -Colin
  • by Indomitus ( 578 ) on Sunday May 30, 1999 @07:40AM (#1874060) Homepage Journal
    We use a program called Lone-Tar at the ISP [spinn.net] where I work. We have a couple of 2+ gig partitions and it handles them nicely. It also has internal dialogs for setting up cron jobs, and tons of other stuff. I like it a lot, as much as one can like a backup program, I guess. Lone-Tar.com [lone-tar.com].
  • There are a couple of problems with this post.
    1. ...around ten boxes running Linux 5.2: Red Hat Linux 5.2, I assume - or perhaps SuSE. There is no Linux 5.2 as of yet, though - it's 2.2.
    2. Wait ages for tar?: I haven't noticed tar being slow. What in particular is wrong with it for backup? That's kind of its point - tape archiver.
    3. Textmode amanda?: What's wrong with textmode? If something does the work, why is it bad if it's textmode?
    There are plenty of people talking about how GNU tar is perfect for what they need. I'm not sure exactly how the lack of GUI backup tools somehow makes Linux not 'enterprise-ready' - perhaps you'll enlighten me? After all, if there is a deficiency in this department, it takes people who actually need to use this sort of thing to point it out so it can be fixed.
  • Posted by stodge:

    What's the best backup system - i.e. hardware - for a single-user PC Linux system? The machine is just for my use at home, but I need to back up some files. Are Zip drives really worth it, and supported out of the box by Red Hat (it's what I use, OK?!)? Anyone hear anything about the Orb device that came out recently?
  • Posted by somar:

    Arkeia is great. You may be able to get a significant discount on the commercial product. I had communication with the president of the company and received a 54% discount on the mini lan product + 1 server class backup license for personal use. I just received my licensed product directly from the president of Knox Software. I previously tested the product to backup NT and Linux to a 8mm Exabyte Drive.

    -Scott
  • Posted by FascDot Killed My Previous Use:

    If ANYONE suggests Legato Networker, run away as fast and as far as you can.

    We had this piece of crap installed on Netware and it SUCKED. Oh, it backed up and restored just fine. But it was literally an all day event to restore a single file. And the "user interface" (in quotes because it barely qualified) was the WORST I have ever seen for ANY program. And it required numerous patched NLMs to handle file-locking correctly. And it still brought down our servers regularly.
    --
    "Please remember that how you say something is often more important than what you say." - Rob Malda
  • I used to do backups to a SCSI DAT, but I recently broke my drive. Anyway, I wasn't able to fit more than 2G on one tape, and I assumed that was the limit of DATs.

    If I want to backup 4G or 8G of data, and I don't want to have to switch media halfway through, what should I buy?

    Bonus points if it's media that is still likely to be in fashion (and thus easily readable) five years from now.

    I'm leaning toward the idea of not using tapes for backups at all, but just cloning everything to a spare hard disk, that is only ever mounted during the backup process.

  • I don't understand. Our file/print server is one machine, and I'm installing the tape backup on that same machine.

    Maybe I'm missing something REALLY obvious here, but what's wrong with that?

    The email/dialup/firewall is a separate box for security reasons, but why *should* the backup server be a different computer? If the main server dies, you're fried either way. I have another machine I can throw the tape into and restore from, but I don't understand what's inherently wrong with using the same machine...

    Please email me as well as posting to this forum; /. really needs a "check replies to my thread" feature off the main screen.
  • ...and, the 8 Jaz cartridges take up as much space in your fireproof container (which should hopefully be designed to protect magnetic media, as mentioned elsewhere in this thread) as four tapes - that's four backups in the physical space of one.

    ...and in the first fire that eats through the floor and crashes said safe onto the floor below, you lose your four backups, since Jaz drives can't stand being dropped off a desktop, let alone down a story.

    I don't like Jaz. At all. Zips are damn near indestructible, as are tapes.
  • If you take large amounts of backups and need to store them off site, you need tape. Tape is much cheaper than hard disks. Maintaining only one backup is also foolish: media fails, so you need two hard disks anyway. And are you going to have 20 of those 22GB hard disks for backups? When you transport a hard disk for offsite storage, the heads might fail. Tape has not been superseded.
  • Could someone point to any efforts related to addition of snapshot backup capabilities for the ext2 (or ext3) filesystems, like the one that Veritas offers?
  • by krady ( 2201 ) on Sunday May 30, 1999 @07:50AM (#1874070)
    I would recommend the Amanda backup system, which we have used at work for many years; it deals nicely with these problems.
  • Yes, but you can use tar to back up directly to the device. Sounds like this guy is creating a tarball on an ext2 filesystem somewhere first.

    Back up straight to the device!!!
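
    A sketch of the difference (assuming a SCSI tape on /dev/st0; the paths are just examples):

    # creating a tarball on an ext2 filesystem first runs into the 2GB file limit:
    #   tar czvf /backup/full.tar.gz /home/ftp
    # writing straight to the tape device avoids that limit entirely:
    tar czvf /dev/st0 /home/ftp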
  • Yer still sposed to pay the license yknow!

    ADSM is very powerful; very "enterprise", but it's like repeatedly beating your head against a plasterboard wall. Legato's much nicer tho not as powerful.
  • Use DDS-2 or DDS-3 drives and tapes. I'll be ordering an HP DAT24 DDS-3 drive next week, so I can let you know how I get on (it's for an NT box though :-().

    DDS-3 is 12GB (24GB compressed) per tape. Something like 7GB/hour compressed. And the drives can read & write your original DDS-1 tapes as well.

    Kenn
  • Arkeia is good...go check it out...www.arkeia.com...

    --Warning: Start of Shameless plug --
    We are also a reseller, so if you are interested give us a call: +1-201-384-4444 x204

    --end Shameless plug --
  • The Seagate drive is actually just a Sony drive relabeled... We had problems with the Seagate drives, and after a month of swapping and dealing with tech support we got a Sony drive instead... and voila, it was the exact same drive as the Seagate... Even Seagate tech support said they were waiting for a firmware update from Sony for their drives... BTW: the Sony firmware update will allow the drives to do 35/70GB or so instead of 25/50GB... Can we say free upgrade?

    You will need new tapes though... Also, one other note: if you use the AIT drives with Arkeia, do not buy the tapes with MIC (in-cartridge memory); Arkeia does not use it (nor does tar, BRU, etc.)... Save yourself $20 per tape...

    Also... it looks like the cause of our problem was a bad batch of tapes from Seagate (also made by Sony, confirmed by a Seagate rep)... Unfortunately I don't have any more info to pass on (like lot numbers)...

    Good luck with backups... and the restores when need be...
  • The 2GB file size limit is actually a VFS limitation, so it applies to all filesystems. But it only happens on 32-bit Linux platforms, and there's a patch that addresses it. Other 2GB limits: MS-DOS FAT partition size (addressed by FAT32 or a real FS), IDE on non-LBA BIOSes (can be worked around).
  • by Malor ( 3658 ) on Sunday May 30, 1999 @10:08AM (#1874077) Journal
    Well, I'm just now installing Legato Networker for UNIX and it seems pretty good.

    We've had endless trouble with Arcserve for Windows NT. It backs up UNIX clients very slowly, and trashes its databases on a regular basis. I finally called up Arcserve to scream at them about it, and the upshot is that their database system cannot handle more than 16 million records, about 1GB, and the sheer volume of files we need to backup overwhelms their database engine. Of course, it doesn't gracefully fail, it just quietly corrupts itself without telling you. I have been dodging bullets with that system for months.

    Apparently, if you install a SQL server and use it to store your database of records, it works fine for larger installations, but I'm so pissed at them about not documenting such a basic limitation of their system that I've been exploring other alternatives.

    I did a test install of Legato Networker on a Solaris 2.5.1 box, and it seems to work pretty well. It spools multiple volumes to tape at the same time, seems to run A LOT faster than Arcserve did under UNIX (about the same speed as Arcserve/NT backing up NT files, about 2.5MB/minute to a 40GB SCSI DLT tape unit), and the restores run fine. It is a bit slow about responding to commands, sometimes taking a minute or so to set itself up for the next job, but it seems good on Unix at least.

    Unfortunately, there is no Linux client for Networker. :( Arcserve DOES have a Linux client, but that software is so bad I've never even tried it.

    Have you used Networker in anything other than a Novell environment? S'possible that their Netware stuff isn't so good, while their UNIX stuff is fine. I have all of one week experience with it, and while it seems fine to me, I haven't really loaded it down yet. :)

  • If you want to use files larger than 2 GB (the largest number that will fit in a signed 32-bit integer), then get a 64-bit system to put them on. If you want to have simple, efficient, easy to code and easy to port seeks within your files, then you're going to want to be able to use a signed integer to seek back and forth (let's not even mention the trouble with mmap() if your files are larger than a pointer can index...)

    The *last* thing Linux should do about 2GB files is try and use hack after kludge to satisfy people who want to use Intel chips but don't want to hear about their limitations.
  • by jsholovitz ( 4243 ) on Sunday May 30, 1999 @08:31AM (#1874079)
    On some Linux installations, I have used the BRU 2000 backup software. It costs a couple of hundred $$$, IIRC, but it is really excellent software with many features. So if you are willing to spend some money, that should work for you. I must defer to others, however, in the area of doing it with free tools.
  • I'll be ordering a HP DAT24 DDS-3 drive next week so I can let you know how I get on
    I've been using one for several months. It's great! I routinely write about 10G to the tape, and still have more than 1/3 of the tape to spare. The native capacity of DDS-3 is 12G, and they use compression to store more. Due to the nature of compression, your mileage may vary.

    If you shop around on the net, you can find name-brand DDS-3 tapes for under $14.

    Sony and HP have announced their DDS-4 drives. DDS-4 gets 20G native capacity. I was a little disappointed when DDS-4 was announced, as it is only a 67% capacity increase over DDS-3, while DDS-3 was a 200% increase over DDS-2. Anyhow, as near as I can tell, Sony is shipping their DDS-4 drive already but HP is not. I haven't seen any DDS-4 tapes at retail yet.

    Some people whine about the cost of DDS drives. It's almost irrelevant to me; I spend much more money on tapes. I find it to be a much better tradeoff to buy an expensive drive that uses inexpensive tapes. Most of the Travan and similar things are priced the other way around, so I have no use for them.

  • Exactly. I've been using tar and cpio to do backups of 20 and 40 gig RAID volumes with no hiccups... Could it be simply that Travan tape drives suck? I've never gotten them to perform up to my standards of reliability... Use DAT or DLT.
    ----------------------------------------------
    bash# lynx http://www.slashdot.org >>/dev/geek
    Matt on IRC, Nick: Tuttle
  • Tape is more reliable. And if you ever work for a company that is audited by its shareholders regularly, you will find that you are required to keep backups around for quite some time... For instance, a financial institution may keep backups of the transaction journal forever! An insurance company I once did work for was required to keep weekly grandfather and daily incremental backups for 5 years... that's a lot of storage for HDDs :-)
    ----------------------------------------------
    bash# lynx http://www.slashdot.org >>/dev/geek
    Matt on IRC, Nick: Tuttle
  • I have no problems with my HP Travan 4/8GB (internal) drive... doing a monthly full backup (of 2+4+2+0.5GB) and nightly incremental backups. I just don't see your point.

    /* Steinar */
  • Arcserve.... URGH!
    I remember using ARCserve 5 on NetWare... Perhaps the only backup program that was more reliable was NT's backup. I could almost guarantee that a backup run would result in an ABEND or corrupt files on tape. We switched to ARCserve 6 when we moved to Novell 4.1 - it didn't crash the server regularly either. But I have to say that Novell is not a brilliant platform on which to judge cross-platform enterprise backup software.

    We currently use Legato Networker to back up a small constellation of Suns, a couple of NT boxen, and a few Linux boxes - there are Linux clients, for i386 and for SPARC; they just don't have the graphical interfaces.
  • IBM has a version of the ADSM client for Linux. I haven't checked into the situation in about a year, but back then the client was not officially supported. However, ADSM is really quite nice. This, of course, requires that the tape system be connected to a non-Linux machine.

    The company I worked for when I looked into this performs remote backups via network lines. They also offer Internet access over the network lines they use for the backups. It's really a nice system. If you happen to live in Houston, check them out: www.edms.net
  • I'd like to see you cost-effectively backup a 5 TB database to disk. Oh, and don't forget that you have to be able to restore the system to its state on any day during the past month and at the beginning of any month in the past five years. And as others have mentioned, you want to make two copies and keep one off-site in case of disaster.

    Yes, for systems under 100 GB which don't need to keep historical data, backing up to disk is feasible, and in many circumstances is better than tape. However, that doesn't mean that tape has "been superseded" entirely.

  • Yeah. The newest-gen DLT tapes on Ultra-LVD busses can hit near 400 MB/min. And this is on WinNT. I'd love to see what these babies would do on a nice Linux box. Unfortunately, that is not in our test plans. :(
  • I haven't seen any thoughts on afbackup and burt. I auditioned both last summer for network backups of multiple 9GB machines over ssh and settled on afbackup to a DDS-2 DAT.

    afbackup [freshmeat.net] is pretty painless to set up, does speedy backups, can run over ssh, prompts by email when tape changes are needed, and does reasonable restores of entire backup sets, but is very slow for selected-file restores.

    burt [wisc.edu] is wicked fast for backups, has a Tcl-based interface that is IMHO elegant, and can run over ssh. afbackup was better documented and offered an emergency restore option that I preferred at the time.

    I ruled out amanda because it is complex and tends to want a holding disk the size of an entire backup set.


  • My company has been running CTAR for 3 1/2 years, and it has never let us down. It can run from the command line with tar-like commands, or via a very well-designed character-mode menu. I've been backing up 8+ gigs at a time on our main SCO Unix server with no problems.

    It's available for Linux as well as most varieties of Unix. $195 for the Linux version.
  • by h2odragon ( 6908 ) on Sunday May 30, 1999 @07:48AM (#1874090) Homepage
    I use standard GNU tar v1.12 to back up several systems to 12G DAT tapes, and have never had a problem using the '--multi-volume' switch to put an 18G filesystem on 2 tapes.
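
    For example, something like this should span one filesystem across two tapes (a sketch; it assumes a SCSI tape at /dev/st0, and note that GNU tar won't combine --multi-volume with compression):

    tar --create --multi-volume --verbose --file=/dev/st0 /home
    # tar prompts for the next tape when the first fills up; restore the same way:
    tar --extract --multi-volume --file=/dev/st0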

  • For backup:

    tar cv /directory/you/want/to/backup | split -b 2048m - bak_mm-dd-yyyy_tar_

    For restore:

    cat bak_mm-dd-yyyy_tar_* | tar xvf -
  • But the tar that comes with Slowlaris regularly reminds me of the advantage of going GNU. The first thing I do to a Solaris system is replace /bin/tar with GNU's tar.
  • by Booya ( 7619 ) on Sunday May 30, 1999 @07:56AM (#1874093) Homepage
    I've had problems with tar (and the other commands) as well when the number of files I had was extremely large. This is regardless of file size; e.g. I had 200,000+ small files, but the total storage size was only about a gig or so, and I still had problems with tar. I ended up just breaking my backups down into batches, something like:

    tar -cf part-a-of-tree.tar /files/parta
    tar -cf part-b-of-tree.tar /files/partb

    etc...
  • by True Dork ( 8000 )
    I know I've seen a few people mention BRU, but I'll put my .02 in as well. Someone mentioned the 24x6 from HP used in-house at estinc.com. I have one and loved it, but outgrew it. I'm on a 15-tape DLT robotic changer now, run by BRU and a rather ugly hack I did to a changer driver. (I commented out an error, and it works fine, hehe.) The guys at BRU are quite helpful, and their product runs around 210 gigs per week across my changer on a Linux 2.2 system for me. I plan on fixing my backup scripts (so they're not QUITE so ugly), making them public on the truedork domain after I get off my butt, and getting estinc.com to link to me for them. Not fancy, but hell, they work. For single-drive backups, they have a GUI too. I'd love a GUI, but it doesn't work with changers at the moment. Anyway, they took damn good care of me for the few hundred we spent on them. Check them out. Remember that commercial products are not all evil. Some companies have to charge to pay the rent :) See ya!

    The True Dork
  • What we use is a hybrid system which backs everything up to disk quickly, then during the day moves things off to tape: ADSM by IBM. When you have a dozen large UN*X boxes, 60+ NT servers, and a dozen Novell servers scattered about on different properties, it's the only way to do things.

    We were running into the 16million record database size limitation on Arcserve, and also our backups were not finishing in time. ADSM, while rather expensive, was a job saver.

    You could surely do something similar with the available free GNU utilities, and some smart perl scripting.
  • This software works well with our 24x6 auto changer from HP. There are 5 24GB tapes, and one cleaning tape. Matter of fact this is the same drive that is used in house by the folks that make BRU 2000. (I'm just a happy customer btw.)

    Dana
  • Have a look at the ADSM home page [ibm.com] for more info. The ADSM server needs to run on a non-Linux box (AIX, HP/UX, Solaris, NT), but it has a Linux client, as well as clients for most OSes, and also lots of database back-ends (DB/2, Oracle, Sybase, SQL Server, Notes, Exchange).
  • I agree that DLT and AIT are awesome if you can afford them (anyone backing up a network had better figure out a way to afford them), but for personal use, I prefer Travan over DAT. Maybe it's just that my experience is with HP drives, but when we were using HP DATs, we had to clean them daily, and even then sometimes they wouldn't get through a whole backup without blinking the "Error, I need cleaning" light. I've had numerous file restore failures and had to revert to older copies of the backups. Yes, we were using HP tapes too. Since then we have gone to DLTs and they're great.
    For my personal machine backups, I've had an Iomega TR1 (400/800M), a Seagate TR4 (4/8G), and now have an Aiwa NS20 TR5 (10/20G) drive. I've never had a single failure over hundreds of backups and dozens of restores. The Aiwa finishes a 5.6G backup in an hour, validates while writing, has hardware compression, and costs... Oh yeah, I use GNU tar to back up 6GB of data that lives on an 8GB drive, and I have no problems. I'm not spanning tapes, though; that might be an issue if there's a bug.
  • Hmm, probably not too feasible. You might see if cdrecord/cdwrite will burn from stdin; then you could do something like `tar czf - /dir | cdrecord -v dev=0,0 speed=1 -' and then access it with tar.

    CD-Rs make good long-term backup media, but for most backup needs tapes are far superior. You can't make a coaster out of a tape.
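
    If you do want to experiment, something along these lines might work (a sketch only; dev=, fs= and speed= are guesses for your burner, and gzip may grumble about trailing padding when reading back):

    tar czf - /home | cdrecord -v fs=6m speed=2 dev=0,0 -
    dd if=/dev/cdrom bs=2048 | tar tzvf -

    The first line streams a gzipped tar straight onto the disc as a raw data track; the second lists it back from an ordinary CD-ROM drive.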
  • 15 Linux boxes, 2 HP9000s, 5 IRIX, a dozen NT, handful of Macs, 4 VMS, and one NetApp toaster backed up through Linux NFS-mount (poor Linux 2.0 NFS performance is an advantage here... when mounted on the HP it pretty much killed the network).

    The interface isn't that terrible. 5.5 is much better. We have had intermittent problems backing up a 36GB RAID filesystem (Linux 2.2) though.

    It's far from free, and the server requires NT or a commercial UNIX. We run it on HP-UX 10.20. But for multiplatform backup on a high-end tape changer robot, you need to go the commercial route.
  • With the price and capacity of currently available hard disks, why on earth would you want to use tape anyway?

    Why not just get one or two large 20+GB HDs and backup stuff to them on a regular basis. I've been using a spare HD for backups for years. crontab scripts creating tar-gzips.

    I remember glancing through ads the other day and seeing that IBM makes a 22GB 3.5" IDE HD, which could back up that FTP server with no probs.

    Tape is just slow these days and it's been superseded... it might be cheaper, but then old tech always is (until it becomes antique! :))

    Just my 2 bits
  • I didn't actually mean use the drive solely as a RAID1 device, but rather to create an archive file (or a few backups of different ages and/or parts of the filesystem) on it, for access whenever.

    Dunno about you, AC, but I find

    tar -zxvf Backup.tar.gz /

    fairly simple to do to get back any file I've ever needed to retrieve from my backups.

    Yea tapes are cheaper, but HDs are MUCH faster to access :)

    Getting a few HDs together and implementing a RAID5 system is much better for reliability anyway, RAID1 is probably a waste of time. Cheap IDE RAID is getting affordable.
  • arse.. silly me SHOULD have used that preview button ..

    tar -zxvf Backup.tar.gz [path]/[file]

    was what I meant... doh :)
  • Heard of removable disks? :) .. to store in aforementioned fireproof (earthed?) container.
  • True, put in that context the previous 4 or 5 posts are right, and okay, I see circumstances where tapes are useful (and probably necessary): backing up financial records for tax purposes and multi-terabyte databases.

    But,

    I was responding to the article about backing up just 22GB, and for the home or semi-pro user tape is just not worth the hassle any more (tapes stretch, to counter the 'heads crash' argument) when multiple disks (can you say RAID, if you want online reliability of data) can be had for much better performance.

    Okay, I admit maybe 'superseded' was a bit strong given the arguments raised, but I think anyone has to admit that the storage capacity of HDs (vs. cost) has shot up incredibly against tapes in the last few years.

  • Your PC (and all attached disks) gets fried by a lightning strike or other electrical fault. Your house catches on fire or is flooded. What do you do?

    A tape can be stored off-site or in a fireproof container.

  • by CucKo0 ( 11870 )
    Oops, forgot to mention that KDat uses "tar", which is good if you have multiple servers, in case you need to untar it on any Unix box :)

    (did i mention that it has a nifty gui front end :)
  • by CucKo0 ( 11870 ) on Sunday May 30, 1999 @08:08AM (#1874111) Homepage
    I'm currently doing an 8+ gig backup using KDat from KDE and it works well (no compression tho)

    We had some trouble backing up to another server via an NFS mount, which would only allow me a 2 gig max file :(

    If I use tar with compression I can get up to a 24 gig backup to tape (12 gig without compression)

    but KDat allows you to span tapes and keeps a nice little index of all previous files backed up on that tape (very nice GUI app)

    Not sure if that will help, but it's all I got right now :)

  • Travan tapes are limited to several hundred backup-restore cycles, DLT/DAT are in the thousands.. which means that they produce less errors.
    I don't know about DLT, but DAT tapes certainly will not give you thousands, or even hundreds, of passes. Keep in mind this is a rotating head medium, and a medium designed for audio at that.

    cjs

  • At my previous job, I had a Linux box doing 30+ gigs of backup every night. It read the info from the network via NFS, and used GNU tar to put it onto a 40G DLT. One of the servers involved had at least 13G of stuff on it alone... the others had 9G and 8G and the like.

    I'd recommend the DLT drives to anyone who can afford them....they're awesome.

    "The value of a man resides in what he gives,
    and not in what he is capable of receiving."

  • That limit only exists on 32bit machines. 64bit Linux platforms, such as Linux/Alpha, have a truly astronomic file size limit, IIANVMM.
  • by jerodd ( 13818 ) on Sunday May 30, 1999 @07:58AM (#1874115) Homepage
    I like to back up one of my filesystems which has huge files (a 6.4GB file and a 2.5GB file). I use GNU tar for it. Simply use the -M switch, and tar doesn't have any problems with the huge files. I'm not sure if SysV tar supports it (i.e. the tar in any non-GNU Unix system). Note that the filesystem I am using (JFS) supports big files, and I had to use emx v0.9d to get 64-bit file lengths (I'm using OS/2).

    You can also use AMANDA backup, which I use on my GNU/Linux machines for backup. It seems to handle the large backup sizes acceptably.

    Finally, you can always just split up huge files using dd.
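
    For instance (a rough sketch with made-up names and sizes):

    # carve a big file into roughly 2000MB pieces
    dd if=hugefile of=hugefile.1 bs=1024k count=2000
    dd if=hugefile of=hugefile.2 bs=1024k skip=2000 count=2000
    dd if=hugefile of=hugefile.3 bs=1024k skip=4000
    # reassemble it later
    cat hugefile.1 hugefile.2 hugefile.3 > hugefile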

    Cheers,
    Joshua.

  • I suspect that all of DAT, DLT, and Exabyte 8mm tapes will be around and usable in five years (all of them are sufficiently popular now). Current generations of all of them store over 2G. Second hard drives that are active are susceptible to various problems, such as overheating, that can take them out at the same time as the main drive.

    On the other hand: if you really want to make sure that the data stays usable over time, nothing beats keeping it on live disks. As JWZ mentions implicitly, various forms of backups risk either the hardware or the software (or both) no longer being capable of reading them a few years down the road. If the data is on your live disks, you will copy it around as you change and upgrade disks, machines, and operating systems, and will hopefully remember to keep being able to read it through this.

    This isn't strictly a problem for pure backups (where everything in the backup is just a duplicate of what you have on the active drive), but most people and places wind up using things both for backup and for archival purposes. It may also be a problem for disaster recovery; if your existing computer melts down, can you still buy a tape drive that can read the machine's tape backups? (The paranoid (or just cautious) will also check the head alignment periodically to make sure that it hasn't drifted so that the tape drive is making tapes only it can read.)

    The university I work at is currently going through this as we decommission the machine one of our last 9-track tape drives is on. We're having to sort through old 9-track tapes, work out what's important and what's not, coax the old hardware into reading the tapes, and dump the data somewhere. (We're probably going to put much of it onto ISO-9660 format CD-Rs; hopefully that will have an equally good or better lifetime.)

  • Often, there's not much point in giving dump plausible numbers about the tape size. If you're dumping in a situation where a tape that is unexpectedly full won't get swapped, you might as well tell dump whatever lie it needs so that it never thinks it needs to change tapes.

    The Linux dump manpage I have handy even suggests that most of the time dump will directly notice end of media and deal with it properly, regardless of the tape size specified.
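
    In other words, something like this (a sketch; the absurd B value just means "never ask for a new tape", and the devices are examples):

    /sbin/dump 0uBf 999999999 /dev/nst0 /dev/sda1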

  • The other problem with trying to be accurate (or slightly conservative) about the tape size to dump is putting multiple dumps onto the same tape. To get the sizing right, you need to tell dump how much tape is still left in the second and later dumps, which means capturing and parsing dump's output to find out how much of the tape has been used up by each filesystem.

    For relatively simple situations (where you don't plan to pack tapes to the brim and don't plan on tapes ever overflowing), this is a chunk of somewhat arcane work for little purpose. On the other hand, if one really needs to do this, I suspect that things like Amanda have the code already. And researching the wheel is a lot easier than reinventing it.

    Anyone looking for a project could always look into adding a switch to dump so that it produces program-friendly output that's easy to parse for this sort of information.

  • I think that a lot of decisions will depend on what sort of disasters you want to recover from. Planning for only 'normal' aging drives having media failures is a lot different (and easier) than planning for your office burning down.

    There are all sorts of potentially problematic tradeoffs with various sorts of hard drives for backup. Things like:

    • a normal drive operating and in the machine is susceptible to common-mode failures that get both drives at once (such as overheating). If you keep it powered off it generally requires a reboot to see. External SCSI drives and some magic can avoid this.
    • Hot-swappable or otherwise removable HDs cost more and probably require rebooting the machine to use.
    • From what I understand, removable media like Zip and Jazz drives are not truly HDs, and are not as reliable as them (although they can be swapped in and out with the machine on). Repeated insert/write/remove cycles may expose you to a far-larger-than-HDs risk of media damage and data loss.

    I suspect that everything short of Zip/Jaz cartridges is more susceptible to damage when removed than tapes are. Especially if they aren't mounted in an external enclosure and are carried around with the drive electronics bare.

    If you make more than one backup, tapes may become substantially cheaper than more and more HD space. If you store and handle lots of backups, again tapes may become easier and cheaper to deal with than other media.

    I have a fairly high trust for the long term durability of tapes sitting around. Manufacturers test this stuff and will tell you about it, including cautions for temperature limits and so on. Do disk manufacturers give equivalent figures for removed drives?

    For personal home use, I think that anything is better than nothing; a second HD is cheap and easy (especially if you aren't worried about things that would take both drives out at once, like overheating or fire). For professional use in an office or the like (even a home office), I'd trust tape more. The up-front costs are bigger, but the benefits can be substantial, and there are things that are far more convenient with tapes that are very important for professional use (such as periodic offsite backups).

    If you trust tapes they can also be used for archival purposes (where you delete the data off the HD after storing it on tape and verifying the tape) as well as normal backups. Depending on how much call you have for this, this may also represent a money savings with tape over disks.

  • Is the 2Gb size limit an ext2-limit, or a Linux-limit?
  • So, when the Y2K hype is gone, we still have a >2GB problem left to solve in the entire UNIX world. Perhaps NT has the same problem. I think so...
  • I use a CD-RW writer as my backup media. It works pretty well, but I haven't figured out yet how to make tar create multi-volumes when I have to pipe it to the CD-writer program... But it's nice to be able to put the CD in an ordinary CD drive and tar -xvf /dev/cdrom... Does someone have a script or an idea for making multivolumes?
  • We have found this not to be true with a number of tape drives (notably Seagate DATs), therefore the conservative limit. :)

    --When in doubt kludge it.

    -AP (Jordan Husney)


    A couple of people have written in asking how to do a restore operation with "restore" (the companion program to dump). By far the easiest way is to do:

    # restore -i /dev/st0

    (where 0 is your tape drive number)


    However, if you have multiple filesystems on a single tape (like the example above), you must first use mt to fast-forward to the correct tape mark. Let's say we want to get the second filesystem off the tape:

    # mt fsf 1
    # restore -i /dev/st0

    This will then put you in restore's little "shell" for adding files/directories to be restored. For example:

    --- 8< *snip* ---

    restore > ls
    .:
    .automount/ bin/ lib/ proc/ usr/
    .bash_history boot/ lost+found/ root/ var/
    .mc.hot dev/ misc/ sbin/
    .mc.ini etc/ mnt/ tftpboot/
    .netwatch home/ net/ tmp/

    restore > ?
    Available commands are:
    ls [arg] - list directory
    cd arg - change directory
    pwd - print current directory
    add [arg] - add `arg' to list of files to be extracted
    delete [arg] - delete `arg' from list of files to be extracted
    extract - extract requested files
    setmodes - set modes of requested directories
    quit - immediately exit program
    what - list dump header information
    verbose - toggle verbose flag (useful with ``ls'')
    help or `?' - print this list
    If no `arg' is supplied, the current directory is used
    restore > add etc
    restore > extract
    You have not read any tapes yet.
    Unless you know which volume your file(s) are on you should start
    with the last volume and work towards the first.
    Specify next volume #: 1

    ( it will now restore from the tape to your cwd)

    Done!

    --- 8< ---
    Just to sum up, the example above opens the tape, lists the files, and adds "etc/" to the list of files to be extracted. Since this is a level 0 backup (a full, non-incremental backup), I need not put in any other tapes and can simply say "1" when it asks me for the next volume number.

    The etc/ directory (with all its sub-directories) will be in whatever directory I started restore from. If you are doing a system restore, do it from "/".


    -AP (Jordan Husney)


  • For the ISP I run, we do a nightly cron-driven backup using dump. The key is that you must specify the total length of the tape. We have a 23GB tape drive, for which we conservatively say the tape is 20GB long.

    Here is our cron.daily/daily.dump file:

    #!/bin/sh
    ## Daily full system backup, given a 20GB tape capacity
    mt rewind
    mt erase
    mt rewind
    /sbin/dump 0uBf 20000000 /dev/nst0 /dev/sdb1
    /sbin/dump 0uBf 20000000 /dev/nst0 /dev/sda1
    /sbin/dump 0uBf 20000000 /dev/nst0 /dev/sdc1
    mt rewind

    This dumps all three of our partitions out to a single tape. The 0 ("zero") option dumps the entire thing, as our tape drive is fast, vs. specifying a dump level > 0 (which is for doing various levels of incremental backups); the u option updates a human-readable /etc/dumpdates file; B gives the number of blocks ("kilobytes") the tape can hold (this is your problem); and finally f: the device to dump to.

    One of the things that really gets people is how to pass arguments correctly to dump. A little diagram might serve as an aid:

    dump [arg name 1][arg name 2][arg name 3] [arg value 1][arg value 2][arg value 3]
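
    Taking the first line of the script above as a worked example (same command, just annotated): 0 and u take no values, B takes 20000000, f takes /dev/nst0, and the filesystem to dump comes last:

    /sbin/dump 0uBf 20000000 /dev/nst0 /dev/sdb1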
    Hope that helps!

    We use the /dev/nst0 device to write to the tape three times without the thing rewinding. This is the key to putting more than one filesystem on per tape.

    If anybody has any questions about using dump, I would be happy to help.

    -AP
    jordanh@remotepoint.com [mailto]

  • by Bryan Andersen ( 16514 ) on Sunday May 30, 1999 @01:46PM (#1874128) Homepage

    WARNING: A tape or hard disk in a fireproof container will still be destroyed in a fire. Most fireproof containers are designed to save paper from burning by a combination of steaming away water and thermal insulation. As such, the internal temperature of the container will easily get over 210 degrees F. Most tapes and hard disks will be destroyed at that point.

  • by Jonas Öberg ( 19456 ) <jonas@gnu.org> on Sunday May 30, 1999 @07:45AM (#1874133) Homepage
    I'd be interested to know what trouble you've been having with dump. I've been doing dumps of 2+ gig partitions for ages with dump and it works very well. Perhaps you're experiencing some other problem which is not related to the backup program you're using? If you can send me more information about where dump fails, I'd be happy to have a look at it.
  • Tapes are a great way to get a snapshot of your file system as of right now. With a sensible tape rotation, it's easy to go back and get a file as of a few months ago, i.e. before someone accidentally deleted the first 20 pages of the document or whatever, and nobody noticed until now.

    Keeping stuff online with big hard drives is a great way to backup data that is static, i.e. scanned documents, but is somewhat less useful for stuff that is constantly changing.

    My 0.02 $CAN
  • I use commercial backup software called Arkeia. You can get it from http://www.arkeia.com/ [arkeia.com].

    There is also a free for personal use version for linux server/clients. Just go to http://www.arkeia.com/downloadfree.html [arkeia.com] .

    Regards,
    Oliver.
  • The man pages for dump on linux used to say it could not handle multivolume dumps. I note that at least in Debian 2.1 this is no longer true.

    I've personally restored my 20GB 6 way stripe set twice from dumps under linux from a single (large) tape with no problems.
  • We use Legato Networker in our enterprise backup environment. They have had an unsupported Linux client for several months now, but just recently released a supported Linux client. We use it to back up a couple of our Linux boxes here, and it works great. Much more reliable than Arcserve or Alexandria. We use Legato to back up Oracle, Sybase, MS SQL Server, Windows NT, and HP-UX, and we are just now starting a pilot to move all of our Novell stuff to Legato.

    The Legato software so far has been rock solid, as long as you keep up with it. Restores have gone off without a hitch. We have NEVER lost a file with Networker.

    If anyone needs more info, email me and I would be happy to tell you about our experience.

    --Michael Brown
  • Both, I believe. To access more than 2GB, you need to use 64 bit file access functions. fseek(), for example, uses an offset value that is a 32-bit signed int, so it can only address 2GB of a file. I think that SGI and others use special functions for 64 bit file access (fseek64() for example), and leave the traditional system calls alone. It's going to mean recompiling everything, at the least.
  • by -Surak- ( 31268 ) on Sunday May 30, 1999 @08:43AM (#1874145)
    I highly recommend the Sony AIT drives (I think Seagate or Quantum also sells a variation of the same format). They do 25 GB native per 8mm tape, at 5MB/s. The drives are under $2000, and tapes are around $60. It may be a bit overkill for what you need, but the speed is VERY nice. It does compression as well, but people that quote compressed capacities should be shot.

    They also have a cool feature that allows storing directory info on NVRAM on the tape cartridge - 16 KB or so.
    And because it's Sony, it's definitely likely to stay around. I think they still sell Betamax decks, and I kinda think they know what they are doing when it comes to helical-scan recording equipment :)
  • by -Surak- ( 31268 ) on Sunday May 30, 1999 @08:13AM (#1874146)
    Tar and others should work fine as long as you are writing directly to tape, instead of to a temp file. Linux has a 2GB (2^31-1) maximum file size, so if your backup software is trying to spool to disk before streaming to tape, it may fail.

    Amanda [amanda.org] handles this by splitting the disk files into 2 GB chunks and reassembling them when it writes to tape. It also deals well with network backups. The filesystem side backend is dump or GNU TAR, so it's fairly standard in that regard. I've had no problems with 8+ GB filesystems using Amanda.

    I would not recommend using e2fsdump - AFAIK, it's still beta, and I had problems with the interactive restore and some other issues. Because it accesses the filesystem at a lower level than standard file access (I believe), I'd be careful about trusting important backups to it.
    tar is definitely a safer choice.

    BTW, I have a question myself... does anyone know how to get tar (or something else) to restore permissions on symlinks? Typically it doesn't matter, but Apache uses symlink permissions for the SymLinksIfOwnerMatch directive, and every time I restore or copy a web partition, I have to go through and fix all the links that are now root-owned.
  • BackupEdge is the most powerful backup program for Linux (or any other Unix) that I've seen.
    It does do what you want, and has a lot of other great features:

    - great automatic backup/verify
    - backup recovery programs
    - bootdisk manager

    Downside: it's commercial.
    See www.microlite.com for details
  • by Wodin ( 33658 )
    AFAIK Amanda uses dump or tar behind the scenes.

  • There is, at alpha.gnu.org [gnu.org] and mirrors, a version of tar which supports 64-bit file access and which can be used with glibc >= 2.1 to make archives bigger than 2 GB on 32-bit hosts. I just downloaded it; it compiles and the checks succeed without any problem. I just can't really test it because I only have 50 MB of disk left...
  • If you split your backup into a few different archives (say, back up ~ftp/pub/mirrors and then ~ftp/pub/linux or whatever), then you should be able to get away with tar, as in the sketch below.
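
    E.g., with a non-rewinding tape device each tar run lands as a separate archive on the same tape (a sketch; the device and paths are examples):

    tar cvf /dev/nst0 ~ftp/pub/mirrors
    tar cvf /dev/nst0 ~ftp/pub/linux
    mt -f /dev/nst0 rewind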

    This will change soon when SGI releases portions of xfs as open source, and when ext3fs is ready.
  • by nakaduct ( 43954 ) on Sunday May 30, 1999 @02:19PM (#1874164)
    Unfortunately, there is no Linux client for Networker.

    ftp://ftp.legato.com/pub/Unsupported/Linux_Client/ [legato.com] has both 4.2 and 5.1 client kits, in .gz and .rpm formats. The clients are unsupported, but they work well for us.

    We use Networker to back up ~500 GB, spread across 30 clients (NT, Digital Unix/Tru64, and Linux). Backup performance is excellent (by interleaving sessions over two network cards and the local disks, it can keep two loaders running at ~5MB/sec each).

    I don't know what the maximum "partition" size is, but we've backed up 150GB file domains with no problems.

    Restore performance is slower, of course, but emphatically not an "all day event"; it takes a few seconds to find what you need in the database, and a couple minutes to load the tape (we're using twin 280GB DLT loaders). After that, the speed is the same as it would be for tar/dump/whatever; the tape drive must seek to your files and read, and that can take up to an hour.

    If your files are spread across multiple tapes (either because you're using incremental or differential backups, or because a single saveset spans multiple tapes), then it can be as long as two hours. If you have only a few clients, these times are reduced somewhat.

    The only time I've spent an entire day doing restores is when we lost the Networker server (and its media indices), and had to use Networker's bootstrap procedure to bring back the index, followed by regular restores to bring back everything else. Because I hadn't bothered to keep hardcopies of the logs, Networker had to scan the tapes for a suitable bootstrap. The searching alone took a few hours.

    A couple caveats, though: it's not cheap, and it's not easy.

    Networker was designed for the kind of environment we've set up, and you may find it overkill for one or two clients. The GUI is marginal, but the command-line tools can completely eliminate it, and do more besides.

    Expect to spend a couple of weeks configuring it, and a couple more getting comfortable with the (extremely powerful, IMHO) command-line tools.

    You'll need a capable server to hold the media indices -- we keep data in the index for a quarter, and the database is over 2GB. We're using a dual-CPU Alpha 4100 @600MHz w/2GB memory, running Tru64 Unix (it's used for a number of other things, of course).

    NB: starting with Networker 5, you can have the tape devices and databases on separate machines, which reduces the need for one mammoth server to do backups and media management. It's also good if you have multiple sites separated by sub-LAN-speed links; you can put a tape device on each LAN.

    cheers,
    mike

  • One more comment to add to the other replies re: usage of tape for backups:

    Not only do most companies need backup rotations, they also need a backup for disaster recovery.

    Ideally, you have a backup of your data kept offsite, in case of disaster. Tape makes this simple - it's readily portable. Every week night our DBA throws a couple tapes into his bag before heading out. If our building goes down in flames, we still have our data.

    Insurance money can help to rebuild, but it won't get back data!
  • by jaimec ( 55077 ) on Sunday May 30, 1999 @10:49AM (#1874169)


    I have used a program called CTAR to cure these problems. Check it out here.

    http://www.ctar.com
