
Ask Slashdot: Best *nix Distro For a Dynamic File Server?

An anonymous reader (citing "silly workplace security policies") writes "I'm in charge of developing for my workplace a particular sort of 'dynamic' file server for handling scientific data. We have all the hardware in place, but can't figure out what *nix distro would work best. Can the great minds at Slashdot pool their resources and divine an answer? Some background: We have sensor units scattered across a couple square miles of undeveloped land, each of which collects ~500 gigs of data per 24h. When these drives come back from the field each day, they'll be plugged into a server featuring a dozen removable drive sleds. We need to present the contents of these drives as one unified tree (shared out via Samba), and the best way to go about that appears to be a unioning file system. There's also a requirement that the server boot in 30 seconds or less off a mechanical hard drive. We've been looking around, but are having trouble finding info for this seemingly simple situation. Can we get FreeNAS to do this? Do we try Greyhole? Is there a distro that can run unionfs/aufs/mhddfs out of the box without manual recompiling? Why is documentation for *nix always so bad?"

  • Why do you need a unified filesystem? Can't you just share /myShareOnTheServer and then mount each disk to a subfolder in /myShareOnTheServer (such as /myShareOnTheServer/disk1)?

    • Seems someone ate the rest of my text. But if you do it this way, then the Windows computers will see a single file system with a folder for each hard disk. The only reason this might cause problems is if you really need files from different hard disks to appear as if they are in the same folder.
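
      A minimal sketch of that layout, assuming the sleds show up as /dev/sdb1 onwards and the share lives under /srv/field (all names are illustrative):

        # Mount each sled under one exported parent, then share only the parent.
        mkdir -p /srv/field/disk{1..12}
        mount -o ro /dev/sdb1 /srv/field/disk1    # repeat per sled, or drive it from fstab/udev
        # smb.conf fragment:
        #   [field]
        #      path = /srv/field
        #      read only = yes

      Clients would then see disk1..disk12 as ordinary folders inside the one share.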
       

      • by Anrego ( 830717 ) * on Saturday August 25, 2012 @12:37PM (#41123377)

        I have to assume they are using some clunky Windows analysis program or something that lacks the ability to accept multiple directories.

        Either way, the aufs (or whatever they use) bit seems to be the least of their worries. They bought and installed a bunch of gear and are just now looking into what to do with it, and they've decided they want it to boot in 30 seconds (protip: high-end gear can take that long just doing its self-checks, which is a good thing! Fast booting and file servers don't go well together).

        Probably a summer student or the office "tech guy" running things. They'd be better off bringing in someone qualified.

    • by Anonymous Coward on Saturday August 25, 2012 @12:56PM (#41123521)

      OP here:

      I left out a lot of information from the summary in order to keep the word count down. Each disk has an almost identical directory structure, so we want to merge all the drives in such a way that when someone looks at "foo/bar/baz/" they see all the 'baz' files from all the disks in the same place. While the folders will have identical names, the files will be globally unique, so there's no concern about namespace collisions at the bottom levels.
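
      If it helps, one low-fuss way to get that merge is a FUSE-based union like mhddfs, which needs no kernel recompiling; a rough sketch, assuming the sleds are already mounted read-only under /mnt/disk1..12 (paths are illustrative):

        apt-get install mhddfs                          # packaged in Debian/Ubuntu
        mkdir -p /srv/union
        mhddfs /mnt/disk1,/mnt/disk2,/mnt/disk3 /srv/union -o allow_other
        # aufs equivalent, where the kernel ships with it:
        #   mount -t aufs -o br=/mnt/disk1=ro:/mnt/disk2=ro none /srv/union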

      • Then you probably should ditch that idea right away and instead use a distributed file system on all your servers and simply export that via Samba.
  • Wow (Score:5, Insightful)

    by Anonymous Coward on Saturday August 25, 2012 @12:29PM (#41123315)

    I know I’m not going to be the first person to ask this, but if I understand it the plan here was:

    1 - buy lots of hardware and install
    2 - think about what kind of software it will run and how it will be used

    I think you got your methodology swapped around, man!

    Why is documentation for *nix always so bad?

    You are looking for information that your average user won’t care about. Things like boot time don’t get documented because your average user isn’t going to have some arbitrary requirement to have their _file server_ boot in 30 seconds. That’s a very weird use case. Normally you reboot a file server infrequently (unless you want to be swapping disks out constantly...). I’m assuming this requirement is because you plan on doing a full shutdown to insert your drives... in which case you really should be looking into hot-swap.

    Also mandatory: you sound horribly underqualified for the job you are doing. Fess up before you waste even more (I assume grant) money, and bring in someone who knows what the hell they are doing.

    • Re:Wow (Score:4, Insightful)

      by LodCrappo ( 705968 ) on Saturday August 25, 2012 @12:40PM (#41123405)

      [...]

      Wow.. I completely agree with an AC.

      The OP here is in way over his head and the entire project seems to have been planned by idiots.

      This will end badly.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        He still hasn't told us what filesystem is on these drives they're pulling out of the field. That's the most important detail...........

      • Re:Wow (Score:5, Informative)

        by mschaffer ( 97223 ) on Saturday August 25, 2012 @12:49PM (#41123477)

        [...]

        Wow.. I completely agree with an AC.

        The OP here is in way over his head and the entire project seems to have been planned by idiots.

        This will end badly.

        Like that's the first time. However, we don't know all of the circumstances, and I wouldn't be surprised if the OP had this dropped into his/her lap.

      • by u38cg ( 607297 )
        Yeah, welcome to academic computing.
    • Re:Wow (Score:5, Informative)

      by arth1 ( 260657 ) on Saturday August 25, 2012 @01:04PM (#41123565) Homepage Journal

      Yeah. Before we can answer this person's questions, we need to know why he has:
      1: Decided to cold-plug drives and reboot
      2: Decided to use Linux
      3 ... to serve to Windows

      Better yet, tell us what you need to do - not how you think you should do it. Someone obviously needs to read data that's collected, but all the steps in between should be based on how it can be collected and how it can be accessed by the end users. Tell us those parameters first, and don't throw around words like Linux, Samba, and booting, which may or may not be part of the solution. Don't jump the gun.

      As for documentation, no other OSes are as well-documented as Linux/Unix/BSD.
      Not only are there huge amounts of man pages, but there are so many web sites and books that it's easy to find answers.

      Unless, of course, you have questions like how fast a distro will boot, and don't have enough understanding to see that that depends on your choice of hardware, firmware and software.
      I have a nice Red Hat Enterprise Linux system here. It takes around 15 minutes to boot. And I have another Red Hat Enterprise Linux system here. It boots in less than a minute. The first one is -- by far -- the better system, but enumerating a plaided RAID of 18 drives takes time. That's also irrelevant, because it has an expected shutdown/startup frequency of once per two years.

      • I would further enhance the question by asking: What the hell are you collecting that each sensor stores 500GB in 24 hours - photos? Seriously, these aren't sensors - they're drive fillers.

        Seriously, if "sensor units scattered across a couple square miles" means 10 sensors - that's 5 Terabytes to initialize and mount in 30 seconds. I suspect that the number is greater than 10 sensors because the rest of the requirements are so ridiculous.

        And why the sneakernet? If they're in only a couple of square miles

        • Re:Wow (Score:4, Insightful)

          by plover ( 150551 ) * on Saturday August 25, 2012 @02:42PM (#41124229) Homepage Journal

          While I'm curious as to the application, it's his data rates that ultimately count, not our opinions of whether he's doing it right.

          500GB may sound like a lot to us, but the LHC spews something like that with every second of operation. They have a large cluster of machines whose job it is to pre-filter that data and only record the "interesting" collisions. Perhaps the OP would consider pre-filtering as much as possible before dumping it into this server as well. If this is for a limited 12 week research project, maybe they already have all the storage they need. Or maybe they are doing the filtering on the server before committing the data to long term storage. They just dump the 500GB of raw data into a landing zone on the server, filter it, and keep only the relevant few GB.

          Regarding mesh networking, they'd have to build a large custom network of expensive radios to carry that volume of data. Given the distances mentioned, it's not like they could build it out of 802.11 radios. Terrain might also be an issue, with mountains and valleys to contend with, and sensors placed near access roads. That kind of expense would not make sense for a temporary installation.

          I don't think he's an idiot. I just think he couldn't give us enough details about what he's working on.

        • by drolli ( 522659 )

          6 megasamples per second @ 8 bits comes to more than 500GB per day.

          So that would be a pretty low-end A/D converter.
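
          Back-of-the-envelope, assuming one byte per sample and decimal gigabytes:

            # 6 MS/s x 1 byte/sample x 86400 s/day
            echo $(( 6 * 1000 * 1000 * 86400 / 1000000000 ))   # => 518 GB/day, just over the stated 500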

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          Seismic data.
          Radio spectrum noise level.
          Acoustic data.
          High frequency geomagnetic readings.
          Any of various types of environmental sensors.

          Any of the above, or combination thereof, would be pretty common in research projects, and could easily generate 500gb+ per day. And the only thing you thought of was photos. You're not a geek, you're some Facebook Generation fuckwit who knows jack shit about science. Go back to commenting on YouTube videos.

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Op here:

        1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).

        2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?

        3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.

        > Better yet, tell us what you need to do

        - Take a server that is off, and boot it remotely (via ethernet magic packet)
        - Ha

        • Re:Wow (Score:4, Informative)

          by msobkow ( 48369 ) on Saturday August 25, 2012 @01:48PM (#41123861) Homepage Journal

          The "under 30 seconds part" is not as easy as you think.

          You're mounting new drives -- that means Linux will probably want to fsck them, which with such volume, is going to take way more than 30 seconds.

        • Re:Wow (Score:4, Interesting)

          by arth1 ( 260657 ) on Saturday August 25, 2012 @04:08PM (#41124865) Homepage Journal

          1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).

          Yes, I will ask why. Like why booted, and not hibernated, for example, if part of the reason is that it has to be powered off.
          If the server is single-purpose file serving of huge files once, it does not benefit from huge amounts of RAM, and can hibernate/wake in a short amount of time, depending on which peripherals have to be restarted.
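
          If hibernate plus wake-on-LAN would satisfy the "booted on demand" constraint, something along these lines might work (assuming the NIC and BIOS support WOL; the interface name and MAC address are placeholders):

            ethtool -s eth0 wol g            # enable magic-packet wake on the server's NIC
            pm-hibernate                     # suspend to disk instead of a full shutdown
            # From another machine on the LAN, wake it back up:
            wakeonlan 00:11:22:33:44:55      # or: etherwake -i eth0 00:11:22:33:44:55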

          2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?

          Yes? While Microsoft usually sucks, it can still be the least sucky choice for specific tasks. And there are more alternatives than Linux out there too.

          3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.

          What's the format on the drives? That can be a limiting factor. And what are the specifics of "sharing"? Must files be locked (or lockable) during access, and are there access restrictions on who can access what?
          For what it's worth, Windows Vista/7/2008R2 all come with Interix (as "Services for Unix") NFS support. So that's also an alternative.

          - Take a server that is off, and boot it remotely (via ethernet magic packet)

          That you want to "wake" it does not imply that the server has to be shut off. It can be in low power mode, for example - Apple's "bonjour" (which is also available for Linux) has a way to "wake" services from low-power states.

          - Have that server mount its drives in a union fashion, merging the nearly-identical directory structure across all the drives.

          Why? Sharing a single directory under which all the drives are mounted would also give access to all the drives under a single mount point - no need for union unless you really need to merge directories and for some reason cannot do the equivalent with symlinks ("junctions" in MS jargon).
          Unions are much harder, as you will need to decide what to do when inevitably the same file exists on two drives (even inconspicuous files like "desktop.ini" created by people browsing the file systems).
          Even copying the files to a common (and preferably RAIDed) area is generally safer - that way, you also don't kill the whole share if one drive is bad, and can reject a drive that comes in faulty.
          But you seem to have made the choices beforehand, so I'm not sure why I answer.

          - Do all this in under 30 seconds

          You really should have designed the system with the 30 seconds as a deadline then.

          If I were to do this, I would first try to get rid of the sneakernet requirement. 4G modems sending the data, for example. But if sneakernetting drives is impossible to get around, I'd choose a continuously running system with hotplug bays and automount rules.

          Unless the data has to be there 30 seconds from when the drive arrives (this is not clear - from the above it appears that only the client access to the system has that limit), I'd also copy the data to a RAID before letting users access it.

          Sure, Linux would do, but there's no particular flavour I'd recommend. ScientificLinux is a good one, but *shrug*.
          If you need support, Red Hat, but then you also should buy a system for which RHEL is certified.

        • Re:Wow (Score:4, Informative)

          by Fallen Kell ( 165468 ) on Saturday August 25, 2012 @09:47PM (#41126681)
          Even though I believe I am being trolled, I will still feed it some.

          1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).

          You will never find enterprise-grade hardware that will do this. You will be even harder pressed to do it with mechanical drives (for the OS), and harder still with random new drives being attached that may need integrity scans performed. This requirement alone is asinine and against every rule in the data center and system administration handbook for something that is serving data to other machines. If you need a box that you can halt and shut down to load the drives, do that on something other than the box which is servicing the data requests to other computers, and copy the data from that one system to the real server.

          2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?

          No, you don't need to justify it, but you do need to explain it some. For the most part it sounds like most people where you work do not have much experience with *nix systems, because if you did, you would never have had requirement (1) in the first place. The whole point of *nix is that everything is compartmentalized and isolated, so you don't have to bring down the system just to update/replace/remove one particular service, application, or piece of hardware. The only time you should ever need to bring down the system is a catastrophic hardware failure or a kernel update; everything else should be built in such a way that it is hot-swappable, redundant, and/or interchangeable on the fly.

          3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.

          Well, SAMBA is the only thing out there that will share to Win/Mac clients from *nix, so that is the right solution.

          - Take a server that is off, and boot it remotely (via ethernet magic packet)
          - Have that server mount its drives in a union fashion, merging the nearly-identical directory structure across all the drives.
          - Share out the unioned virtual tree in such a way that it's easily accessible to mac/win clients
          - Do all this in under 30 seconds

          I don't know why people keep focusing on the "under 30 seconds" part, it's not that hard to get linux to do this.....

          They are focusing on the "under 30 seconds" part because they know that it is an absurd requirement for dealing with multiple hard drives which may or may not have a working filesystem, as they have not only traveled/been shipped but have also been out in the actual field. The probability of data corruption is so much higher that the "under 30 seconds" requirement is idiotic at best.

          For instance, I can't even get to the BIOS in 30 seconds on anything that I have at my work. Our data storage servers take about 15-20 minutes to boot. Our compute servers take about 5-8 minutes. They spend more than 30 seconds just performing simple memory tests at POST, let alone hard drive identification and filesystem integrity checks or actually booting. This is why people are hung up on the "under 30 seconds".

          If you had a specially built system, in which you disabled all memory checks (a REALLY BAD IDEA on a server, though, since if your memory is bad you can corrupt your storage, because writes to your storage typically come from memory), used SSDs for your OS drives, had no hardware RAID controllers on the system, and used SAS controllers which do not have firmware integrity checks, you might, just might, be able to boot the system in 30 seconds. But I sure as hell would not trust it for any kind of important data, because you had to disable all the hardware tests, which means you have no idea if there are hardware problems corrupting your data.

          • by arth1 ( 260657 )

            Well, SAMBA is the only thing out there that will share to Win/Mac clients from *nix, so that is the right solution.

            There's OpenAFS.
            And let's not forget NFS, since non-home versions of Windows Vista/7/2008R2 come with Interix and an NFS client. (Control Panel -> Programs and Features -> Windows Features -> Services for NFS -> Client for NFS)
            NFS is the least secure, but it uses the least amount of resources on the server (this seems to be old inherited hardware), and if the rsize/wsize is bumped up on the server side, it's faster too.

            (I'm sharing my music collection over NFS to Windows 7, so I know it works.)
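
            If the NFS route is worth testing, a rough sketch of the export and a client mount with larger transfer sizes (paths and subnet are illustrative):

              # /etc/exports on the server:
              #   /srv/field  192.168.1.0/24(ro,async,no_subtree_check)
              exportfs -ra                     # re-read the export table
              # *nix client mount with bumped rsize/wsize:
              mount -t nfs -o ro,rsize=1048576,wsize=1048576 server:/srv/field /mnt/field
              # Windows with "Client for NFS" enabled mounts it roughly as: mount \\server\srv\field Z: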

      • by gweihir ( 88907 )

        Good call on the cold-plug. Hot-plugging SATA on Linux works very well, provided you have the right controller. I do it frequently for testing purposes.

      • by CAIMLAS ( 41445 )

        Add to that the fact that he doesn't really even seem to understand the problem himself, or know the tools he's got to work with.

        I'm sorry, but: "UNIX has bad documentation"? Wikipedia itself is chock full of useful documentation in this regard. You can find functional "this is how it works" information on pretty much every single component and technology with ease. (You do, however, need to know what you're looking for.)

        Try to do the same for Windows. The first 12 pages of search results will likely be marketin

    • Re:Wow (Score:4, Informative)

      by Anonymous Coward on Saturday August 25, 2012 @01:14PM (#41123653)

      Op here:

      The gear was sourced from a similar prior project that's no longer needed, and we don't have the budget/authorization to buy more stuff. Considering that the requirements are pretty basic, we weren't expecting to have a serious issue picking the right distro.

      >You are looking for information that your average user won’t care about.

      Granted, but I thought one of the strengths of *nix was that it's not confined to computer illiterates. Some geeks somewhere should know which distros can be stripped down to bare essentials with a minimum of fuss.

      As for the 30 seconds thing, there's a lot of side info I left out of the summary. This project is quirky for a number of reasons, one of them being that the server itself spends a lot of time off and needs to be booted (and halted) on demand. (Don't ask, it's a looooooong story).

      • by Nutria ( 679911 )

        Some geeks somewhere should know which distros can be stripped down to bare essentials with a minimum of fuss.

        Debian (The Universal OS)
        RHEL/CentOS/Scientific
        Gentoo
        Slackware

        • by quist ( 72831 )

          ...know which distros can be stripped down ... with a minimum of fuss.[?]

          Debian (The Universal OS)
          RHEL/CentOS/Scientific [...]

          And don't forget to compile a bespoke, static kernel.

      • Basically all distros can be stripped down easily. However, you will have the most success if you install the server version of Ubuntu, CentOS, Red Hat or Debian, since the base install there is already very restricted and you would not need to strip it further; just make sure that you disable the desktop in CentOS/Red Hat/Debian (no need in Ubuntu, since the server edition is console-only by default).

        Instead of a unionfs you should RAID the drives, but since that seems to be no option(?) then you probably need something li

      • by rwa2 ( 4391 ) *

        Sounds reasonable.

        Any recent Linux distro should do... just stick with whatever you have expertise with. Scientific Linux would probably be the most suitable RHEL / CentOS clone... but it also comes with OpenAFS (which also has Windows clients) which might allow you another option to improve filesharing performance over Samba (I haven't played with it myself, though). Linux Mint is my current favorite Debian / Ubuntu distro.

        Either would likely mount SATA disks that were hotplugged automatically under /me

  • Use OpenAFS with Samba's modules. Distribution doesn't matter.

    • by wytcld ( 179112 )

      Looking at the OpenAFS docs, they're copyright 2000. Has the project gone stale since then?

      • by Monkius ( 3888 )

        OpenAFS is not dead. IIRC, any Samba AFS integration probably is. This doesn't sound like a job for AFS, however.

  • by Anonymous Coward on Saturday August 25, 2012 @12:30PM (#41123329)
    Why does it have to be a mechanical hard drive? Why not use an SSD for the boot drive?
    • by davester666 ( 731373 ) on Saturday August 25, 2012 @01:15PM (#41123659) Journal

      They already bought a $20 5400rpm 80GB drive and don't want it to be wasted.

    • by mspohr ( 589790 )

      It sounds like they inherited a bunch of hardware and don't have a budget for more stuff.
      So... make do with what you have.

    • Why does it have to be a mechanical hard drive?

      Their "couple square miles of undeveloped land" is actually a minefield, and to avoid accidental detonation (you know, magnetic triggers and all that), the situation calls for a purely mechanical hard drive - perhaps one running on water power.

      Just a wild guess...

  • by guruevi ( 827432 ) on Saturday August 25, 2012 @12:32PM (#41123345)

    Really, singular hard drives are notoriously bad at keeping data around for long. I would make sure you have a copy of everything. So make a file server with RAIDZ2 or RAID6 and script the copying of these hard drives onto a system that has redundancy and is backed up as well.
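
    A sketch of that ingest step, assuming each sled lands under /mnt/incoming and the redundant array is mounted at /data (names and layout are assumptions):

      SLED=/mnt/incoming/disk1
      DEST=/data/field/$(date +%F)/$(basename "$SLED")
      mkdir -p "$DEST"
      rsync -a "$SLED"/ "$DEST"/ && sync     # copy off the sled, then send it back to the field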

    How many times have I seen scientists come out with their 500GB portable hard drives only to find them unreadable... way too many. If you fill 500GB in 24 hours, there is no way a portable hard drive will survive for longer than about a year. Most of our drives (500GB 2.5" portable drives) last a few months, once they have processed about 6TB of data full-time they are pretty much guaranteed to fail.

    • by Macrat ( 638047 )

      Most of our drives (500GB 2.5" portable drives) last a few months, once they have processed about 6TB of data full-time they are pretty much guaranteed to fail.

      This is interesting. How have SSDs held up under that use?

      • by guruevi ( 827432 )

        The cheap ones fail just as fast; it seems like I ship the same OCZ Onyx drive back and forth to their RMA site, and I've killed a couple of Vertexes also. The more expensive ones (SLC) are much better, and the Intel 32GB SLCs have held up, but they're so expensive it's really not worth it.

        • by Macrat ( 638047 )

          You should contact Other World Computing to see if they would loan you an SSD drive.

          They are a small company and pay close attention to any issues with their products. I bet they would LOVE for you to stress their SSD products that quickly. :-)

          http://eshop.macsales.com/shop/SSD/OWC/

    • by Osgeld ( 1900440 )

      funny, I still have a 20 meg MFM drive with its original os installed on it

  • CentOS may be your best bet. It's Red Hat Enterprise Linux rebuilt from the Red Hat source code, minus the Red Hat trademarks.
    • by e3m4n ( 947977 ) on Saturday August 25, 2012 @12:45PM (#41123437)

      Scientific Linux is also a good option for similar reasons. Given it's a science grant, they might like the idea that it's used at labs like CERN.

    • by wytcld ( 179112 ) on Saturday August 25, 2012 @12:51PM (#41123491) Homepage

      "Enterprise class" is a marketing slogan. In the real world, all the RH derivatives are pretty good (including Scientific Linux and Fedora as well as CentOS), and all the Debian derivatives are pretty good (including Ubuntu). Gentoo's solid too. "Enterprise class" doesn't mean much. The main thing that characterizes CentOS from Scientific Linux - which is also just a recompile of the RHEL code - is that the CentOS devs have "enterprise class" attitude. Meanwhile, RH's own devs are universally decent, humble people. Those who do less often thing more of themselves.

      For a great many uses, Debian's going to be easiest. But it depends on just what you need to run on it, as different distros do better with different packages, short of compiling from source yourself. No idea what the best solution is for the task here, but "CentOS" isn't by itself much of an answer.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        "Enterprise class" means that it runs the multi-million dollar crappy closed source software you bought to run on it without the vendor bugging out when you submit a support ticket.

        • Someone please mod this guy up.

          I swear, the higher the price of the software is, the more upper management just drools all over it, and the bigger the piece of shit it is. Millions of dollars spent per year licensing some of the biggest turds I've ever had the displeasure of dealing with. Just so management can say that some big vendor is behind it and will "have our backs when it fails".

          Guess what, the support is awful too. The vendor never has your back. You'll be left languishing with downtime while they

      • The nice thing about CentOS is that if/when you wind up on RHEL (it comes with hardware, it's what your hosting provider is using, etc.) the migration will be pretty simple.
    • I disagree. This guy:
      • doesn't like to read man pages
      • wants other people to tell him what buttons to push.

      Red Hat, with a support contract, is for him.

      • I disagree. This guy:

        • doesn't like to read man pages
        • wants other people to tell him what buttons to push.

        Red Hat, with a support contract, is for him.

        Well if he starts with CentOS that migration will be pretty simple.

  • Here we go again (Score:3, Insightful)

    by Anonymous Coward on Saturday August 25, 2012 @12:45PM (#41123441)

    Another "I don't know how to do my job, but will slag off OSS knowing someone will tell me what to do. Then I can claim to be l337 at work by pretending to know how to do my job".

    It's called reverse psychology, don't fall for it! Maybe shitdot will go back to its roots if no one comments on junk like this and the slashvertisements?

    • Damn, you outed yourself there as an old loser who never "made it" in your career field. "Go back to its roots"? You would obviously have told the OP how to do this, and shown off how l337 you were, if you had the slightest clue of how to do this.

      Lame. Even as an AC troll.

      Go eat another twinkie and play with your star wars dolls in your mom's basement.

  • by NemoinSpace ( 1118137 ) on Saturday August 25, 2012 @12:45PM (#41123445) Journal
    • Enterprise-ready? No - Greyhole targets home users.

    Not sure why the 30s boot-up requirement is there, so it depends on what you define as "booted". Spinning up 12 hard drives and making them available through Samba within 30s guarantees your costs will be 10x more than they need to be.
    This isn't another example of my tax dollars at work is it?

    • This isn't another example of my tax dollars at work is it?

      I hope not! Or my university tuition fees, or really any other spending, even other people's money.

      Who cares if the server boots up in 30 seconds or 30 minutes? The OP now has up to 12 500GB drives to either copy off or access over the LAN. There are hours of data access or data transfer here.

  • Questionable (Score:2, Informative)

    by Anonymous Coward

    Why would you want a file server to boot in 30 secs or less? OK, let's skip the fs check, the controller checks, the driver checks, hell, let's skip everything and boot to a recovery bash shell. Why would you not network these collection devices if they are all within a couple of miles and dump to an always-on server?

    I really fail to see the advantage of a file server booting in under 30 seconds. Shouldn't you be able to hot swap drives?

    This really sounds like a bunch of kids trying to play server admin. M

  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday August 25, 2012 @01:10PM (#41123625) Journal

    Booting in under 30 seconds is going to be a bit of a trick for anything servery. Even just putzing around in the BIOS can eat up most of that time (potentially some minutes if there is a lot of memory being self-tested, or if the system has a bunch of hairy option ROMs, as the SCSI/SAS/RAID disk controllers commonly found in servers generally do...). If you really want fast, you just need to suck it up and get hot-swappable storage: even SATA supports that (well, some chipsets do, your mileage may vary, talk to your vendor and kidnap the vendor's children to ensure you get a straight answer, no warranty express or implied, etc.), and SAS damn well better, and it supports SATA drives. That way, it doesn't matter how long the server takes to boot; you can just swap the disks in and either leave it running or set the BIOS wakeup schedule to have it start booting ten minutes before you expect to need it.

    Slightly classier would be using /dev/disk/by-label or by-UUID to assign a unique mountpoint for every drive sled that might come in from the field (i.e. allowing you to easily tell which field unit the drive came from).
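
    That could be as simple as labelling each sled once and giving it a fixed fstab entry; a sketch that assumes ext4 on the sleds (the on-disk filesystem is still an open question in this thread), with the label and mountpoint as placeholders:

      e2label /dev/sdb1 unit01               # one-time, when the sled is first provisioned
      # /etc/fstab:
      #   LABEL=unit01  /srv/field/unit01  ext4  ro,noauto,nofail  0 0
      mount /srv/field/unit01                # after the sled is inserted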

    If the files from each site are assured to have unique names, you could present them in a single mount directory with unionFS; but you probably don't want to find out what happens if every site spits out identically named FOO.log files, and (unless there is a painfully crippled tool somewhere else in the chain) having a directory per mountpoint shouldn't be terribly serious business.

    • Thinking of what you just wrote I'd like to add a bit.

      First of all, I don't think they will serve the data from those disks, if only because you will probably want yesterday's data available as well, and drives are constantly being swapped out. So upon plugging in a drive, I'd have a script copy the data to a different directory on your permanent storage (which of course must be sizeable to take several times 500 GB a day - he says each sensor produces 500 GB of data - so several TB of data a day, hundreds o

  • But he either likes you, or is setting you up. Build one of these instead: http://hardware.slashdot.org/story/11/07/21/143236/build-your-own-135tb-raid6-storage-pod-for-7384 [slashdot.org] It's already been talked about.
    I know you already stated the hardware is already in place. This is about exercising your newfound authority. Go big or go home.
  • by Anonymous Coward on Saturday August 25, 2012 @01:14PM (#41123645)

    500G in a 24h period sounds like it will be highly compressible data. I would recommend FreeBSD or Ubuntu with ZFS Native Stable installed. ZFS will allow you to create a very nice tree with each folder set to a custom compression level if necessary. (Don't use dedup.) You can put one SSD in as a cache drive to accelerate the shared folders' speed. I imagine there would be an issue with restoring the data to magnetic storage while people are trying to read off the SMB share. An SSD cache or SSD ZIL drive for ZFS can help a lot with that.
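
    A rough sketch of that setup on a ZFS-capable system; the pool name, device names, and compression level are all assumptions:

      zpool create tank raidz2 sdb sdc sdd sde sdf sdg   # redundant pool from six of the bays
      zfs create -o compression=gzip-6 tank/field        # per-dataset compression; leave dedup off
      zpool add tank cache sdh                           # SSD as L2ARC read cache
      zpool add tank log sdi                             # separate ZIL device, if sync writes matter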

    Some nagging questions though.
    How long are you intending on storing this data? How many sensors are collecting data? Because even with 12 drive bay slots, assuming cheap SATA drives of 3TB apiece (36TB total storage with no redundancy), let's say 5 sensors: that's 2.5TB a day of data collection, and assuming good compression of 3x, 833GB a day. You will fill up that storage in just 43 days.
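
    The same arithmetic as a quick shell sanity check (integer math, decimal units):

      echo $(( 12 * 3000 ))               # 36000 GB of raw capacity
      echo $(( 5 * 500 / 3 ))             # ~833 GB/day after ~3x compression
      echo $(( 36000 / (5 * 500 / 3) ))   # ~43 days until full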

    I think this project needs to be re-thought. Either you need a much bigger storage array, or data needs to be discarded very quickly. If the data will be discarded quickly, then you really need to think about more disk arrays so you can use ZFS to partition the data in such a way that each SMB share can be on its own set of drives so as to not head thrash and interfere with someone else who is "discarding" or reading data.

    • 500G in a 24h period sounds like it will be highly compressible data

      It sounds like audio and video to me, which is not very compressible at all if you need to maintain audio and video quality. And good luck booting a system with that many drives in "under 30 seconds", especially on a ZFS system, which needs a lot of RAM (assuming you are following the industry standard of 1GB of RAM per 1TB of data you are hosting), as you will never make it through RAM integrity testing during POST in under 30 seconds.

  • You are in over your head. Buy a Pogoplug and some USB2 hubs. Connect your drives to the hubs and they appear as a unified file system on your clients. Or, if you need better performance accessing the data, call an expert.
  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Saturday August 25, 2012 @01:21PM (#41123695)
    Unless you're talking about millions of individual files on each drive, it should be relatively quick to mount each hard drive and set up symbolic links in one shared directory to the files on each of the mounted drives. Just make sure Samba has "follow symlinks" set to yes and the Windows clients will just see normal files in the shared directory.
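
    For reference, a hypothetical smb.conf fragment for that approach (share name and path are assumptions); note that on newer Samba, links pointing outside the share path also need wide links enabled:

      # Append to /etc/samba/smb.conf:
      #   [data]
      #      path = /srv/merged
      #      read only = yes
      #      follow symlinks = yes
      #      wide links = yes
      #      # on Samba >= 3.4.6, also set "unix extensions = no" in [global]
      testparm -s      # sanity-check the config before restarting smbd
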
  • Aka: I do not want the insecurity of losing my workplace if my boss happens to learn on Slashdot how clueless I am.

    Seriously... could you send us the resumé that you sent to get that job?

  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Saturday August 25, 2012 @01:30PM (#41123761)
    Comment removed based on user account deletion
  • waaaay over head (Score:5, Insightful)

    by itzdandy ( 183397 ) on Saturday August 25, 2012 @01:39PM (#41123807) Homepage

    What is the point of 30 second boot on a file server? If this is on the list of 'requirements', then the 'plan' is 1/4 baked. 1/2 baked for buying hardware without a plan, then 1/2 again for not having a clue.

    unioning filesystem? what is the use scenario? how about automounting the drives on hot-plug and sharing the /mnt directory?

    Now, 500GB/day in 12 drive sleds... so 6TB a day? Do the workers get a fresh drive each day, or is the data only available for a few hours before it gets sent back out, or are they rotated? I suspect that mounting these drives for sharing really isn't what is necessary; it's more like pulling the contents to 'local' storage. Then why talk about unioning at all? Just put the contents of each drive in a separate folder.

    Is the data 100% new each day? Are you really storing 6TB a day from a sensor network? 120TB+ a month?

    Are you really transporting 500GB of data by hand to local storage and expecting the disks to last? Reading or writing 500GB isn't a problem, but constant power cycling and then physically moving/shaking the drives around each day to transport them is going to put the MTBF of these drives in months, not years.

    dumb

    • by gweihir ( 88907 )

      I agree. "30 seconds boot time" is a very special and were un-server-like requirement. It is hard to achieve and typically not needed. Hence it points to a messed-up requirements analysis (or some stupid manager/professor having put that in there without any clue what it implies). This requirement alone may break everything else or make it massively less capable of doing its job.

      What I don't get is why people without advanced skills feel they have any business setting up very special and advanced system configurat

  • Use systems of symbolic links.

    Also, why "30 seconds boot time"? This strikes me as a bizarre and unnecessary requirement. Are you sure you have done careful requirements analysis and engineering?

    As to the "bad" documentation: Many things are similar or the same on many *nix systems. This is not Windows where MS feels the need to change everything every few years. On *nix you are reasonably expected to know.

  • OP here (Score:5, Informative)

    by Anonymous Coward on Saturday August 25, 2012 @02:32PM (#41124155)

    Ok, lots of folks asking similar questions. In order to keep the submission word count down I left out a lot of info. I *thought* most of it would be obvious, but I guess not.

    Notes, in no particular order:

    - The server was sourced from a now-defunct project with a similar setup. It's a custom box with a non-standard design. We don't have authorization to buy more hardware. That's not a big deal because what we have already *should* be perfectly fine.

    - People keep harping on the 30 seconds thing.
    The system is already configured to spin up all the drives simultaneously (yes, the PSU can handle that) and get through the BIOS all in a few seconds. I *know* you can configure most any distro to be fast; the question is how much fuss it takes to get it that way. Honestly, I threw that in there as an aside, not thinking this would blow up into some huge debate. All I'm looking for are pointers along the lines of "yeah, distro FOO is bloated by default, but it's not as bad as it looks because you can just use the BAR utility to turn most of that off". We have a handful of systems running WinXP and Linux already that boot in under 30 seconds; this isn't a big deal.
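
    In that spirit, on a RHEL/CentOS-style install the fuss is mostly just turning services off; a rough sketch (the service names are examples and vary per install):

      chkconfig --list | grep ':on'                 # see what starts at boot
      for svc in cups avahi-daemon bluetooth; do
          chkconfig "$svc" off
      done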

    - The drives in question have a nearly identical directory structure but with globally unique file names. We want to merge the trees because it's easier for people to deal with than dozens of identical trees. There are plenty of packages that can do this; I'm looking for a distro where I can set it up with minimal fuss (i.e. apt-get or equivalent, as opposed to manual code editing and recompiling).

    - The share doesn't have to be samba, it just needs to be easily accessible from windows/macs without installing extra software on them.

    - No, I'm not an idiot or derpy student. I'm a sysadmin with 20 years experience (I'm aware that doesn't necessarily prove anything). I'm leaving out a lot of detail because most of it is stupid office bureaucracy and politics I can't do anything about. I'm not one of those people who intentionally makes things more complicated than they need to be as some form of job security. I believe in doing things the "right" way so those who come after me have a chance at keeping the system running. I'm trying to stick to standards when possible, as opposed to creating a monster involving homegrown shell scripts.

    • Not gonna happen. (Score:5, Insightful)

      by Anonymous Coward on Saturday August 25, 2012 @03:10PM (#41124453)

      You have to be able to identify the disks being mounted. Since these are hot swappable, they will not be automatically identifiable.

      Also note, not all disks spin up at the same speed. Disks made for desktops are not reliable either - though they tend to spin up faster. Server disks might take 5 seconds before they are failed. You also seem to have forgotten that even with all disks spun up, each must be read (one at a time) for them to be mounted.

      Hot swap disks are not something automatically mounted unless they are known ahead of time - which means they have to have suitable identification.

      UnionFS is not what you want. That isn't what it was designed for. Unionfs only has one drive that can be written to - the top one in the list. Operations on the other disks force a copy to the top disk for any modification. Deletes don't happen to any but the top disk.

      Some of what you describe is called HSM (hierarchical storage management), and requires a multi-level archive where some volumes may be online, others offline, and yet others in between. Boots are NOT fast, mostly due to the need to validate the archive first.

      Back to the unreliability of things - if even one disk has a problem, your union filesystem will freeze - and not nicely either. The first access to a file that is inaccessible will cause a lock on the directory. That lock will lock all users out of that directory (they go into an infinite wait). Eventually, the locks accumulate to include the parent directory... which then locks all leaf directories under it. This propagates to the top level until the entire system freezes - along with all the clients. This freezing is one of the things that an HSM handles MUCH better. A detected media error causes the access to abort, and that releases the associated locks. If the union filesystem detects the error, then the entire filesystem goes down the tubes, not just one file on one disk.

      Another problem is going to be processing the data - I/O rates are not good going through a union filesystem yet. Even though UnionFS is pretty good at it, expect the I/O rate to be 10% to 20% less than maximum. Now client I/O has to go through a network connection, so that may make it bearable. But trying to process multiple 300 GB data sets in one day is not likely to happen.

      Another issue you have ignored is the original format of the data. You imply that the filesystem on the server will just "mount the disk" and use the filesystem as created/used by the sensor. This is not likely to happen - trying to do so invites multiple failures; it also means no users of the filesystem while it is getting mounted. You would do better to have a server disk farm that you copy the data to before processing. That way you get to handle the failures without affecting anyone that may be processing data, AND you don't have to stop everyone working just to reboot. You will also find that local copy rates will be more than double what the server's client systems can read anyway.

      As others have mentioned, using the Gluster file system to accumulate the data allows multiple systems to contribute to the global, uniform filesystem - but it does not allow for plugging in/out disks with predefined formats. It has a very high data throughput though (due to the distributed nature of the filesystem), and would allow many systems to be copying data into the filesystem without interference.

      As for experience - I've managed filesystems with up to about 400TB in the past. Errors are NOT fun as they can take several days to recover from.

      • by olau ( 314197 )

        If UnionFS is not the answer, and the OP doesn't want something more complicated, it honestly sounds like the simplest solution is to just symlink everything into another directory once it's up and running.

        You have to do something about failures, but if you have that in hand, making the symlinks is really easy; you can do it with three lines of bash (for each path in find -type f, create a symlink). If you need a bit more control, and it's not easily doable with grep, I'd write a small Python program, you can google
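
        A sketch of that symlink pass, preserving the shared foo/bar/baz layout (mount points and target directory are assumptions; it leans on the OP's globally unique file names):

          mkdir -p /srv/merged
          for d in /mnt/disk*; do
              ( cd "$d" && find . -type f -print0 ) |
              while IFS= read -r -d '' f; do
                  mkdir -p "/srv/merged/$(dirname "$f")"
                  ln -s "$d/$f" "/srv/merged/$f"
              done
          done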

    • The boot-time problem can be solved simply by buying a small (32/64GB) SSD and installing Linux on that. This will cost you less than $200. And if you don't have the budget for that, then please send your story to thedailywtf.com, because it sounds like some interesting fuckup happened somewhere in your organization.

    • My suggestion would be:

      0. Do consider writing this yourself... a 100-line shell script (carefully written and documented) may well be easier to debug than a complex off-the-shelf system.

      1. You can easily identify the disks with e.g. /dev/disk/by-uuid/ which, combined with some udev rules, will make it easy to identify which filesystem is which, even if the disks are put into different caddies. [Note that all SATA drives are hot-swap/hot-plug capable: remember to unmount, and to connect power before data; disco
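
      A hypothetical udev rule along those lines (the UUID and the helper script are placeholders, not a tested recipe):

        # /etc/udev/rules.d/99-field-sleds.rules
        #   ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD-5678", \
        #     RUN+="/usr/local/sbin/mount-sled.sh %E{ID_FS_UUID}"
        udevadm control --reload-rules     # pick up the new rule without rebooting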

  • The first thing I thought of was loss of one of the drives during all this moving around. Seems the protection of the data would be of the utmost priority here. Keeping this in mind, I'd go with a RAID 5 or 10 [thegeekstuff.com] setup. This will eliminate having the data distributed on different "drives", so to speak, and it would appear to the system as one single drive. This would increase the drive count, but losing a drive, either physically (oops, dropped that one in the puddle) or electronically (oops, this drive cras

  • by oneiros27 ( 46144 ) on Saturday August 25, 2012 @04:17PM (#41124915) Homepage

    What you're describing sounds like a fairly typical Sensor Net (or Sensor Web) to me, maybe with a little more data logged than is normal per platform. (I believe they call it a 'mote' in that community).

    Some of the newer sensor nets use a forwarding mesh wireless system, so that you relay the data to a highly reduced number of collection points -- which might keep you from having to deal with the collection of the hard drives each night (maybe swap out a multi-TB RAID at each collection point each night instead).

    I'm not 100% sure of what the correct forum is for discussion of sensor/platform design. I know they have presentations in the ESSI (Earth and Space Science Informatics) focus group of the AGU (American Geophysical Union). Many of the members of ESIPfed (Federation of Earth Science Information Partners) probably have experience in these issues, but it's more about discussing managing the data after it comes out of the field.

    On the off chance that someone's already written software to do 90% of what you're looking for, I'd try contacting the folks from the Software Reuse Working Group [nasa.gov] of the Earth Science Data System community.

    You might also try looking through past projects funded through NASA AISR (Advanced Information Systems Research [nasa.gov]) ... they funded better sensor design & data distribution systems. (Unfortunately, they haven't been funded for a few years ... and I'm having problems accessing their website right now). Or I might be confusing it with the similar AIST (Advanced Information Systems Technology [nasa.gov]), which tends more towards hardware vs. software. ... so, my point is -- don't roll your own. Talk to other people who have done similar stuff, and build on their work, otherwise you're liable to make all of the same mistakes, and waste a whole lot of time. And in general (at least ESSI / ESIP-wide), we're a pretty sharing community ... we don't want anyone out there wasting their time doing the stupid little piddly stuff when they could actually be collecting data or doing science.

    (and if you haven't guessed already ... I'm an AGU/ESSI member, and I think I'm an honorary ESIP member (as I'm in the space sciences, not earth science) ... at least they put up with me on their mailing lists)

  • Since you already bought the hardware, odds are you're going to run into driver issues. Since you're not already a *nix guy, my suggestion is to just run Windows on your server. Next, buy big fat USB enclosures, the kind that can hold DVD drives, and put the drive sleds in there. Now you don't have to reboot when adding the drives.

  • Pretty much any unix like OS will do fine. Personally I use NetBSD and if linux is a requirement, Debian.

  • The best explanation I've heard so far for why technical documentation in general (and in this case *nix documentation in particular) is often so poor is from a sci-fi TV series, called Eureka [wikipedia.org]. In one episode, the characters search in vain for a manual to help shut down an antiquated launch system. When they figure there never was a manual, one character asks another why the builders did not bother to write one. The reply he receives is "Well, what do you want: progress or poetry?"

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...