Ask Slashdot: User-Friendly, Version-Preserving File Sharing For Linux?

petherfile writes: I've been a professional with Microsoft stuff for more than 10 years, and I'm a bit sick of it, to be honest. The thing that's got me stuck is really not where I expected it to be. You can use a combination of DFS and VSS to create a file share where users can put whatever files they are working on, one that is both redundant and keeps "previous versions" of files they can recover. That is, users have a highly available network location where they can "go back" to how their file was an hour ago. How do you do that with Linux?

This is a highly desirable situation for users. I know there are nice document management systems out there that make SharePoint look silly, but I just want a simple file share, not a document management utility. I've found versioning file systems for Linux that do what Microsoft does with VSS so much better (for keeping previous versions of files available). I've found distributed file systems for Linux that make DFS look like a bad joke. Unfortunately, they seem to be mutually exclusive. Is there something simple I have missed?
  • FreeNAS (Score:5, Informative)

    by Anonymous Coward on Friday June 26, 2015 @02:22PM (#49996799)

    FreeNAS will do all of that shiny stuff. And snapshots, too.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Or the fork
      http://www.nas4free.org/

      ZFS for snapshots

    • Re:FreeNAS (Score:5, Informative)

      by Tailhook ( 98486 ) on Friday June 26, 2015 @03:00PM (#49997143)

Yes, FreeNAS will get you there. Since versioning is a key requirement, you will want to use ZFS. The thing you need for that is plenty of RAM. It's not just a performance concern: ZFS can be unstable if not fed enough RAM.

      So budget for something with a lot of installed RAM on day one, and some room to grow as you add more storage.

      Yes, FreeNAS isn't Linux. The simple fact is that Linux has so far failed to achieve parity with other systems, both contemporary and historical, that provide advanced file system features. BTRFS might get there one day. ZFS is persona non grata. LVM can serve some of your expectations, but not all.

So look beyond Linux. In addition to FreeNAS there is proprietary stuff; they still make NetApps and they still work as well as ever. Dell has EqualLogic boxes that will snapshot volumes all day long. If you have the dosh there are all sorts of solutions. If you're dosh-challenged then look to FreeNAS.

      • by Rich0 ( 548339 )

        Can ZFS actually do versioning on every file close? I know it can do snapshots, but of course btrfs can do that as well. I'd think that the goal of a versioning filesystem would be that versions are captured anytime a file is written, not just when the admin hits the snapshot button, or once an hour, or whatever.

        As far as I've seen the COW filesystems only do snapshots when they're asked to, and I'm not sure they're designed to scale to the point where you have billions of snapshots for millions of files.

        • by MobyDisk ( 75490 )

          Can ZFS actually do versioning on every file close?

          The versioning filesystem that Windows Server provides does not version at every file close. It does it via snapshots. So that shouldn't be part of the submitter's requirements.

          • If that was the case he'd still be on windows. He wants something better, that's both distributed and has version control.
            • by MobyDisk ( 75490 )

              If that was the case he'd still be on windows.

              If what was the case?

              He wants something better, that's both distributed and has version control.

              The submission does not ask for those things.

          • by Rich0 ( 548339 )

            Can ZFS actually do versioning on every file close?

            The versioning filesystem that Windows Server provides does not version at every file close. It does it via snapshots. So that shouldn't be part of the submitter's requirements.

            He never said he was happy with Windows Server's versioning.

            He did mention sharepoint, which does retain a version on every file save.

            I'm well aware that zfs and btrfs can be told to snapshot the entire filesystem as often as you want to fire off a cron job.
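
            For what it's worth, such a cron job can be a one-liner. A hypothetical /etc/crontab entry (the dataset name tank/share is made up, and note that % must be escaped in crontab):

            ```
            # hourly ZFS snapshot, timestamped in the snapshot name
            0 * * * * root /sbin/zfs snapshot tank/share@$(date +\%Y-\%m-\%d_\%H00)
            ```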

            • by MobyDisk ( 75490 )

              He never said he was happy with Windows Server's versioning.

              From the submission.

              That is, users have a highly available network location where they can "go back" to how their file was an hour ago. How do you do that with Linux? This is a highly desirable situation for users.

              I took "is highly desirable" to indicate he was happy with it.

      • Not just a lot of RAM, it MUST be ECC RAM for ZFS. Without ECC, even a single flipped bit can cause ZFS to corrupt the entire file system.

        • Re: (Score:2, Informative)

          by Rutulian ( 171771 )

Nonsense. ECC RAM may help avoid certain kinds of on-disk errors, but it's a heavily debated topic. Your statement,

          even a single flipped bit can cause ZFS to corrupt the entire file system

          is completely unsubstantiated BS.

          • Nonsense. ECC RAM may help avoid certain kinds of on-disk errors, but it's a heavily debated topic.

With ZFS it is NOT a debate -- ZFS is very RAM-intensive and uses RAM in more critical ways than many other filesystems. In particular, for reasons having to do with how ZFS works, small RAM errors can (and have) made a ZFS filesystem unmountable. And given that there are NO recovery tools for ZFS, that means your data is gone.

            You can debate how often such things happen, but they can and have. In most other filesystems, bad RAM is mostly a concern about corruption of some files on disk, but with ZFS it

            • by fnj ( 64210 )

              Sorry, Rutulian is entirely correct. You are full of bull, and that blog is unsubstantiated and clueless. The FreeNAS community is full of people who think they know more than they actually do (there are some very bright people there too, but it's up to you to figure out which ones).

            • I'll go with one of the co-architects of ZFS, Matthew Ahrens,
              http://arstechnica.com/civis/v... [arstechnica.com]

              There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

              I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.

              In other words, there is a non-zero chance that you will get silent data corruptions on disk if you don't use ECC RAM. It is the same risk with ZFS as with any other filesystem. And yet, personal computers have been running without ECC RAM for decades and it hasn't been a travesty. So yeah, if you are running in the type of situation where you absolutely must ensure the highest level of data integrity, then you mu

        • by fnj ( 64210 )

          Not just a lot of RAM, it MUST be ECC RAM for ZFS. Without ECC, even a single flipped bit can cause ZFS to corrupt the entire file system.

          it MUST be ECC RAM for ZFS

          Utter bullshit. Do not spread silly but destructive lies. If you are anal, non-ECC RAM can corrupt ANY filesystem. Nay, not just any filesystem, but any CONCEIVABLE filesystem. ZFS does not have any magic dependency on ECC that you are magically free from if you do not use ZFS.

      • Ahem,
        http://zfsonlinux.org/ [zfsonlinux.org]

        • by hoggoth ( 414195 )

          > zfsonlinux

Using it here in production. Very happy with it. Windows permissions are a hassle, but that's a Samba issue. (zfsonlinux doesn't have built-in CIFS export.)

          • Same here.

            zfsonlinux doesn't have built in CIFS export

            It's not built in, but it is integrated. Just use
            zfs set sharesmb=on pool/srv

            If you are having perms issues, make sure you have acl support enabled in your kernel and userland, and then use the aclinherit property on your zfs pool. Samba should handle the translation between NT and posix ACLs seamlessly, but you may need to use Samba4 for the best results.

            • by hoggoth ( 414195 )

Thanks. I had given up and gone to a simple permissions model, with user/group forced for all files under certain directory trees.

I might be missing something here, but I run a small BSD ZFS file server with Samba, and the little PC it runs on only has 2 GB of RAM. I am not doing anything more complex than a simple mirror (I have backups; the mirror is for availability), the equivalent of RAID 1+0, and sure, caching is off, but performance is fine for the 6 people that use it (some even run their virtual machines off it, though I try to discourage that). What am I missing?

      • by Wolfrider ( 856 )

        > ZFS can be unstable if not fed enough RAM.

        --I have seen ZFS (FreeBSD and Linux) be "stable" with ~2GB of RAM, as long as you are using a 64-bit OS. In my experience, these days ZFS only needs lots of RAM if you are doing de-duplication - although this can be gotten around to a certain extent if you use an SSD drive for L2ARC; it will just be slower. Maxing out your RAM is recommended if you want max speed and you can afford it, but there are other ways. As long as you're not doing de-dup and not exp

  • ZFS (Score:3, Informative)

    by blowfly7012 ( 1780006 ) on Friday June 26, 2015 @02:26PM (#49996837)
    Look at ZFS. Supports snapshots, SMB and NFS. And you can show the snapshots as a read-only directory to users.
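
    A sketch of exposing those snapshots as a read-only directory, assuming a hypothetical dataset named tank/share (.zfs/snapshot is ZFS's built-in, read-only snapshot browser):

    ```
    # make the hidden .zfs directory visible to users
    zfs set snapdir=visible tank/share
    # take a snapshot, then browse it read-only
    zfs snapshot tank/share@before-lunch
    ls /tank/share/.zfs/snapshot/before-lunch/
    ```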
    • by Rich0 ( 548339 )

      Can you make it snapshot anytime a file is modified? Also, can you easily find all the snapshots for a single file? Or do you just have 3 million subdirectories where a given file might or might not change between any random pair of them?

If you want per-application snapshots then you want the application to be in charge of the snapshots - not the file system. The file system does not know when an application is finished making changes to a file. Possibly many files must be changed - the file system does not know, so it cannot make any assumptions. Applications should be in charge of their own document snapshots using some form of version control. If one wanted, one could script it so that a ZFS snapshot was generated - but you are bet

        • by Rich0 ( 548339 )

          Well, the whole idea would be to have simple snapshots of every file version without having to re-implement every application I use.

          And it sounds like the answer to my second question is no. btrfs works in the same way. The snapshot is at the filesystem/subvolume level, and if you want to find all the versions of a particular file you basically have to find the file in every snapshot that exists and diff them all.

          I love btrfs. I just don't think that it solves this particular problem, and neither does an

      • Can you make it snapshot anytime a file is modified? Also, can you easily find all the snapshots for a single file?

It sounds like you're looking for a versioned filesystem, not a snapshotting filesystem. The latter is a point-in-time view of the whole filesystem tree; the former is file-centric. Windows derives from VMS, which did file versioning by default, so that's not too surprising.

        Tux3 or copyfs on Linux might be ways to do it. A quick google said that there's a way to make Alfresco present an SMB sha

      • Also, can you easily find all the snapshots for a single file?

        If you export the filesystem via Samba, you can enable the VSS compatibility feature, which allows Windows users to access the "Previous Versions" tab. There is no equivalent for other Mac or Linux clients, or other network filesystems (NFS, etc) that I know of. It would be a nice feature to have.
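
    The Samba side of this is the shadow_copy2 VFS module. A minimal smb.conf sketch (share name, path, and snapshot-name format are illustrative, and the format string must match whatever tool creates your snapshots):

    ```ini
    [share]
        path = /tank/share
        vfs objects = shadow_copy2
        ; where snapshots live, relative to the share path
        shadow:snapdir = .zfs/snapshot
        ; snapshot directory names must match this timestamp format
        shadow:format = auto-%Y-%m-%d_%H%M
        shadow:sort = desc
    ```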

  • VMS (Score:5, Interesting)

    by FranTaylor ( 164577 ) on Friday June 26, 2015 @02:27PM (#49996845)

    use VMS, it's built in

    TOPS-20 had it too

    yearning already for the lost technology of the 1970s

    • DIR :== PURGE/ERASE/NOCONFIRM/NOLOG




      I knew the goddam lameness filter was going to interfere. I'm not shouting, it's the Vomit Making System!!!
    • by VAXcat ( 674775 )
      And RSX
    • by RogL ( 608926 )

And Wicat systems, also from back in the day. I used them in the '80s.

    • I loved that feature of VMS.

      Fat-fingered a config file? No problem!

copy file.conf;6 file.conf

      FIXED!

      Whereas now it is dependent upon zealously careful sysadmin process:

      cp file.conf file.conf-yyyymmdd
      vim file.conf

      oops, fat-fingered it, or used a deprecated setting that broke the service, or vim decided to go nuts and insert dkghkjh3kh34534kj5h43kj54k3j when you saved.......

      cp file.conf-yyyymmdd file.conf

      And, if you didn't make that backup file? Hope you have a proper backup regimen!

  • OwnCloud (Score:5, Informative)

    by jd142 ( 129673 ) on Friday June 26, 2015 @02:33PM (#49996913) Homepage
    Take a look at the features list at https://owncloud.org/features/ [owncloud.org]. It seems to have what you want. I played with it a couple of years ago and it was easy to set up then. Unfortunately I never had the opportunity to try it in production.
    • by Junta ( 36770 )

      Depending on your usage, I'd recommend http://seafile.com/en/download... [seafile.com]

We were using owncloud, but at least as of last check a non-admin user needed a lot of things done by an admin on their behalf, and seafile lets people make their own groups and such.

While most of the nifty features are tied to the paying version, Seafile seems quite cool, but I see basic users like me cannot install it as a PHP script on a shared server.
I'm still using Owncloud, though not the latest version, for daily syncs of a dozen machines from desktops to phones and tablets, and it has "just worked" for a couple of years now. Its only downside is that, unlike Dropbox, no local-loop transfers are allowed (everything must transit through the server). This is related to the way Ownclou

        • by Junta ( 36770 )

          So here we had a few admins and a bunch of 'normal' users. The normal users needed an admin to create a new group to facilitate sharing. With seafile, the users could create their own groups. That and frankly we hit a few bugs with sync and seafile seemed to do better.

          owncloud's document preview and the plugins were a bit worse than seafile's baked in, but primarily it's just a platform for replicating and sharing file content for us, we don't really care about anything beyond that. We don't use the com

Owncloud is a poor substitute for a fileserver because everything has to be owned by the webserver user at the filesystem level. All of the access controls and versioning are handled by the owncloud webapp. So if you need to operate outside of the owncloud environment you are screwed, for example when using owncloud's dropbox-like client for easy syncing while at the same time exporting the filesystem via Samba for people to map network drives.

  • by bluefoxlucid ( 723572 ) on Friday June 26, 2015 @02:34PM (#49996919) Homepage Journal
What is this of which you speak? Can someone expand on these things?
    • by MobyDisk ( 75490 )

It's the Microsoft Volume Shadow Copy Service. It's not the same thing as Sharepoint. FYI: A Sharepoint admin recently told me, "If you are just using Sharepoint as a way to share files, then you are using its weakest, most awful feature."

  • WebDAV (Score:4, Informative)

    by Anonymous Coward on Friday June 26, 2015 @02:42PM (#49996969)

    Search Google for WebDAV auto-versioning.

    I have set up (and used for many years) a WebDAV file share served by Apache (with an SVN backend). It can be used as an SVN repository (with checkin comments, etc.) or used as a simple remote file share that automatically creates revisions for the changes. I have used various WebDAV clients (built in to Linux, Mac, Windows) to access and modify the contents of the files.

    Hope that gives you another area to explore.

    • by melstav ( 174456 )

      I agree with this AC.

      If using SVN directly doesn't fit your needs -- if you REALLY want the transparency of a shared filesystem (as opposed to explicitly saying "synchronize the contents of this directory with the image on the server" ) WebDAV builds that on top of SVN. And if you want access controls, Apache's mod_auth provides them. Encryption? mod_ssl.

    • I literally wanted to set this up for myself using WebDAV, Apache and Subversion (SVN) to keep versions of my web programming but could not find a decent tutorial.

      Did you use a tutorial or is there a package that helps?
      • Re:WebDAV (Score:5, Informative)

        by flink ( 18449 ) on Friday June 26, 2015 @07:39PM (#49999407)

Assuming you already have svn installed and a copy of Apache httpd with mod_dav_svn, and you are running on some flavor of *NIX.

Create an svn repository somewhere, e.g.

        # svnadmin create /opt/svn-repo

        Create a password for your repo, replace username and password as appropriate

        # htpasswd /opt/svn-repo/conf/htpasswd USERNAME PASSWORD
        # chmod 640 /opt/svn-repo/conf/htpasswd

        Fix the permissions of the repo so that the user that httpd runs as can write to the repo database. Replace www with whatever the appropriate user is:

        # cd /opt/svn-repo
        # chgrp -R www .
        # chmod -R g+r .
        # chmod -R g+rwX db locks
        # find db locks -type d -exec chmod g+s '{}' ';'

        Open httpd.conf and add/uncomment the following lines in the LoadModule section:


        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so

At the very bottom of your httpd.conf, add a location for your repository:

        <Location /repo>
            DAV svn
            SVNPath /opt/svn-repo
            SVNAutoversioning on

            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /opt/svn-repo/conf/htpasswd

            Require valid-user
        </Location>

        Restart apache and then test your config:

    # svn ls http://localhost/repo --username USERNAME --password PASSWORD --no-auth-cache
            #

        No errors means everything is working.

        See the manual [red-bean.com] for instructions on mounting the WebDAV share with various clients. Note that Windows is kind of problematic for this out of the box and you may need to use a third party file system driver such as NetDrive.

  • Samba 4 will integrate nicely with btrfs and do previous versions for you. To get redundancy, just put the btrfs volume on RAID, perhaps?

Specifically, Samba 4.2 [samba.org] with Snapper. It's probably still a scheduled snapshot, but it looks to be better than how we did things in 4.0 and 4.1.
  • by Junta ( 36770 ) on Friday June 26, 2015 @02:48PM (#49997033)

    make sharepoint look silly

    Sharepoint needs absolutely zero help to look silly.

Of MS's world of products, Sharepoint is perhaps the worst festering thing they've got.

    • by thegarbz ( 1787294 ) on Friday June 26, 2015 @07:52PM (#49999449)

You must not be using it right. Sharepoint can do EVERYTHING. It is a one-stop shop for all your world's problems. It has a feature list that makes most OSes look lame in comparison. Just think of all the use cases:

      - You want document management with integration into popular MS products which breaks horribly whenever you use something non-standard? SHAREPOINT!
      - You want a metadata search system which fails to properly search metadata fields? SHAREPOINT!
      - You want to run a website with dynamic content that is so difficult to edit you'll yearn for using HTML and Notepad? SHAREPOINT!
      - Are your users being too productive? Want to slow people down a notch with slow load times so they don't embarrass you? SHAREPOINT!
      - How about an integrated user management system which allows you to search users by hitting a single key and waiting half an hour! SHAREPOINT!

And on the days when you think you have hit rock bottom, when your corporation is slowly falling apart after an SAP implementation has screwed the accounts system so badly that few invoices can be found, we have three words for you that you can integrate to ensure that the invoices you do find become unusable: SHARE FUCKING POINT!

      Sharepoint is a great OS, all it needs is a decent text editor. Oh wait, MS Word. ... SHAREPOINT!

  • If someone updates a file in place, do you really want to create a new version for every write call? On the other hand, apps that update files atomically do so by renaming original and backup, which breaks tracking these as the same file.

    What you can do is make hourly snapshots and make them available as read only shared directories. Easy enough with simple hard links, and many filesystems support snapshots natively.
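
    The hard-link approach above can be sketched with GNU cp; paths are illustrative, and real setups pair this with rsync so changed files get fresh copies rather than shared inodes:

    ```shell
    #!/bin/sh
    set -e
    rm -rf /tmp/demo-share /tmp/demo-share.snap-1400

    # a "share" with one file in it
    mkdir -p /tmp/demo-share
    echo "draft 1" > /tmp/demo-share/report.txt

    # snapshot: a new directory tree, but file contents shared via hard links
    cp -al /tmp/demo-share /tmp/demo-share.snap-1400

    # both names now point at the same inode, so the snapshot costs almost nothing
    ls -li /tmp/demo-share/report.txt /tmp/demo-share.snap-1400/report.txt
    ```

    The catch, as the comment notes, is in-place edits: a hard link shares content, so something has to break the link (or make a fresh copy) when the file changes.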

    Protocols like WebDav do support versioning, but it would work best with WebDav clients, not

    • If someone updates a file in place, do you really want to create a new version for every write call?

      This is precisely how VMS does it, it works great. You can control how many generations it keeps. You can manually delete older versions if you want. You can explicitly refer to the older versions in the path if you want. If, for instance, you are creating a database file, you can disable the versioning.

      Wow, mainstream 1970s technology is just way too advanced for this crowd.

      • What happens if someone opens a file and changes a single 80-byte line inside, one byte at a time?

        • by jbolden ( 176878 )

          Open, change, close is a version.
          Open, change is not a version since it didn't get closed.

The versioning pattern can keep older versions; it doesn't have to be just the "last 10". On a better versioning system it can be:
Last 10, up to 1 per month for 12 months, 1 per 6 months forever. See Google Docs or Wikipedia for good examples of this.

    • by jbolden ( 176878 )

      If someone updates a file in place, do you really want to create a new version for every write call?

      Potentially yes. You might throw some of those away but...

      What you can do is make hourly snapshots and make them available as read only shared directories.

      And then the user on a file independent basis needs to know when bunches of changes happened. So for example file X had:

      Large number of changes between April 2015 and May 2015
      large number of changes between Nov 2014 and Jan 2015
      large number of change

  • There's this thing called VMS....
  • Why not a revision control system like SVN or git?

  • "I want document management, but not a document management utility."

    Don't fight it. Use the right tools for the job.

  • by Orgasmatron ( 8103 ) on Friday June 26, 2015 @04:29PM (#49997949)

    rsync with the --compare-dest option will give changed files, and --link-dest will give whole file trees at set times.

You can do it pretty simply or quite elaborately, depending on your needs and preferences.

    rsync --link-dest makes a new tree with the current time, using only enough space for the directory tree and the changed files. On my box, I use it in a cron that runs every 5 minutes and cycles through my backup list. If any of them are older than the interval, it fires off the backup script specific to that type of connection (local LVM, nfs, CIFS/SMB, ssh, etc).

A second cron then prunes those directories so that I've got fewer copies as I go back in time. An example would be pulling a copy every 15 minutes and keeping every copy for 2 weeks, one from each hour for a month, one per day for a year, one per month for 10 years, and one per year forever.

    This can be easily adapted to other schemes. --compare-dest will make a tree with just the changed files, which you can then gather up and sort into the archival tree. Run a second (plain) rsync to sync up the comparison directory when done.
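
    A minimal, self-contained sketch of the --link-dest rotation described above (paths and snapshot names are illustrative):

    ```shell
    #!/bin/sh
    set -e
    SRC=/tmp/demo-src
    DST=/tmp/demo-backups
    rm -rf "$SRC" "$DST"
    mkdir -p "$SRC" "$DST"
    echo "v1" > "$SRC/file.txt"

    # first run: a plain full copy
    rsync -a "$SRC/" "$DST/snap-1400/"

    # later runs: unchanged files become hard links into the previous snapshot,
    # so each tree looks complete but only changed files consume new space
    rsync -a --link-dest="$DST/snap-1400" "$SRC/" "$DST/snap-1500/"
    ```

    Pruning then just means deleting whole snapshot directories; the hard links keep shared content alive as long as any snapshot still references it.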

    • by bartle ( 447377 ) on Friday June 26, 2015 @04:40PM (#49998051) Homepage
      I would also recommend looking at rsnapshot which is built on top of rsync.

I used to use a development system where the entire file tree was mirrored at the top of every hour. Recovering old files was as simple as navigating to a different directory.

      Personally, I like the rsync solution because it is filesystem agnostic. It also has been around for a long time; whatever you're trying to do, I can guarantee that someone was doing it with rsync 20 years ago.

In a past life, I set up the free version of Alfresco for my teams. Configuration can be tough for those who don't like insanely deep trees of config files, but it has a nice webdav server which integrates with the rest of its quite awesome versioning capabilities. It's great for versioning anything that isn't source code.
  • Apache can serve webdav shares. It can back them with SVN.

    WebDAV can be mounted as a FS by all major OSs, and SVN also has a web based history. You need to provide some auth system to hook into (obviously), but otherwise it works really very well.

  • Different users find different things friendly. I want options and configurability. Grandma Culture20 wants one button and a wizard.
  • by mr_mischief ( 456295 ) on Friday June 26, 2015 @06:35PM (#49998943) Journal

    Contrary to popular rumors, there are a number of ways to do what you want. I can't vouch for all of these combinations working and wouldn't be too optimistic about tackling some of them. The more advanced stuff can take quite a while to ramp up to speed.

If you don't mind FUSE as an intermediary, there's gitfs [presslabs.com] that uses git as a file system (which it kind of is anyway, beyond being just a VCS). It creates a new version on every file close. You can point it to a git remote on the same machine or across a network, which can live on any filesystem.
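
    To illustrate what per-close versioning buys you, here is plain git standing in for gitfs (gitfs automates the commit step via FUSE; the paths and messages are made up):

    ```shell
    #!/bin/sh
    set -e
    REPO=/tmp/demo-repo
    rm -rf "$REPO"
    mkdir -p "$REPO"
    cd "$REPO"
    git init -q .

    # each "save" becomes a commit, i.e. a recoverable version
    echo "first draft"  > notes.txt
    git add notes.txt
    git -c user.name=demo -c user.email=demo@example.com commit -qm "save 1"

    echo "second draft" > notes.txt
    git add notes.txt
    git -c user.name=demo -c user.email=demo@example.com commit -qm "save 2"

    # every saved version of ONE file is easy to list -- unlike whole-tree
    # snapshots, where you would have to diff the file across every snapshot
    git log --oneline -- notes.txt
    ```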

You already found that there are some non-mainline kernel modules for filesystems like next3, ext3cow, or tux3 [wikipedia.org] that do versioning on write. NILFS [wikipedia.org] is actually in the kernel these days (since 2.6.something). More information about NILFS2 [nilfs.org] shows that it's somewhat slow [phoronix.com] but that it is in fact a stable, dependable file system [phoronix.com].

Subversion has a feature where you can put WebDAV in front of it, mount the WebDAV as a filesystem somewhere, and every write creates a new revision of the file in SVN. That gets you networked and versioned. This works similarly to gitfs but uses WebDAV. You could, if you wanted, use davfs2 [nongnu.org] in front of that to treat it like a normal file system again.

You can then share any of these over SMB with Samba. Or you can share them via NFS.

If you need really high-end, fast, replicated network filesystems you can use any of the clustered storage systems that will use a storage node's underlying files with any of these below that, but that will put your revisions underneath everything else rather than on top. Then there's using something like gitfs with the remote on top of, for example, DRBD [linbit.com], XtreemFS [xtreemfs.org], or Ceph (for example even across CephFS [ceph.com], which presents Ceph as a normal POSIX filesystem). This latter option puts your revisions closer to the user, and then each revision gets replicated.

    I've personally never used some of the more exotic combinations listed here. You could in theory put NILFS2 on LVM with DRBD as the physical layer (since DRBD supports that) and then serve that file system via Samba (CIFS) or NFS which I would expect to work well enough if slowly.

    • so, which one would you vouch for?

      since you lead off with FUSE (jesus christ) layered on git (wtf?), i really want to know which one of these you think is the most stable and whether you'd vouch for it if your life depended on it.

      • I doubt my life will ever depend on the OP's needs of a versioning distributed filesystem.

        If I had to pick some combination out of that which I've seen work well with my own eyes, I'd share NILFS2 over CIFS or NFS, possibly with DRBD underneath LVM underneath the NILFS2. I've never done that exact combination. I have run NILFS2 on top of LVM, and I have run DRBD underneath LVM. I've shared lots of things, including NILFS2 and various other FSes on LVM over CIFS and NFS. I've never done DRBD under LVM with t

GlusterFS's trash feature is able to recover previous file versions.
Everyone works with their files locally; changes are synced via a common server. Everyone has a compressed backup of the complete history of the entire filesystem for disaster recovery. Everyone should be able to browse and recover any version of any file without adding load to the server, though usability might be slightly lacking. You could also set up a FUSE filesystem on a linux box to browse the history.

    You may need to partition the file storage into multiple repositories, so that people don't need to

  • #! /bin/bash

    # stop on errors
    set -e

    scriptname=$(basename "$0")
    pidfile="/var/run/${scriptname}"

    # lock it: open fd 200 on the pidfile and take an exclusive flock
    exec 200>"$pidfile"
    flock -n 200 || exit 1
    pid=$$
    echo "$pid" 1>&200

    # Rest of code

    SRC=$1
    SNAPSHOTS="$SRC/.snapshots"
    MOST_RECENT="$SNAPSHOTS/1 hour ago"
    RSYNC="/root/rsync-HEAD-20130616-0452GMT/rsync"
    RSYNC_OPTIONS="-ah --no-inc-recursive"

    LINK_DEST=""
    FILTER=""

    if [ -d "$MOST_RECENT" ]; then
        LINK_DEST="--link-dest=\"$MOST_RECENT\""
    fi
    if [ -f "$SRC/.snapshots-filter" ]; then
        FILTER="--fil
