Ask Slashdot: Best *nix Distro For a Dynamic File Server?
An anonymous reader (citing "silly workplace security policies") writes "I'm in charge of developing for my workplace a particular sort of 'dynamic' file server for handling scientific data. We have all the hardware in place, but can't figure out what *nix distro would work best. Can the great minds at Slashdot pool their resources and divine an answer? Some background: We have sensor units scattered across a couple square miles of undeveloped land, each of which collects ~500 gigs of data per 24h. When these drives come back from the field each day, they'll be plugged into a server featuring a dozen removable drive sleds. We need to present the contents of these drives as one unified tree (shared out via Samba), and the best way to go about that appears to be a unioning file system. There's also a requirement that the server has to boot in 30 seconds or less off a mechanical hard drive. We've been looking around, but are having trouble finding info for this seemingly simple situation. Can we get FreeNAS to do this? Do we try Greyhole? Is there a distro that can run unionfs/aufs/mhddfs out-of-the-box without messing with manual recompiling? Why is documentation for *nix always so bad?"
Do you need a unified filesystem at all? (Score:3)
Why do you need a unified filesystem? Can't you just share /myShareOnTheServer and then mount each disk to a subfolder in /myShareOnTheServer (such as /myShareOnTheServer/disk1)?
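Something like this, roughly (device names, mount points and the share name are all placeholders):

    # mount each field drive under one directory...
    mkdir -p /srv/share/disk1 /srv/share/disk2
    mount -o ro /dev/sdb1 /srv/share/disk1
    mount -o ro /dev/sdc1 /srv/share/disk2

    # ...and export only that one directory; /etc/samba/smb.conf fragment:
    [fielddata]
        path = /srv/share
        read only = yes
        guest ok = yes

Each client then sees a single share with a disk1/, disk2/, ... folder per drive.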
Re: (Score:2)
Seems someone ate the rest of my text. But if you do it this way, then the Windows computers will see a single system with a folder for each hard disk. The only reason this might cause problems is if you really need files from different hard disks to appear as if they are in the same folder.
Re:Do you need a unified filesystem at all? (Score:5, Insightful)
I have to assume they are using some clunky Windows analysis program that lacks the ability to accept multiple directories, or something along those lines.
Either way, the aufs (or whatever they use) bit seems to be the least of their worries. They bought and installed a bunch of gear and are just now looking into what to do with it, and they've decided they want it to boot in 30 seconds (protip: high-end gear can take that long just doing its self-checks, which is a good thing! Fast booting and file servers don't go well together).
Probably a summer student or the office "tech guy" running things. They'd be better off bringing in someone qualified.
Re: (Score:2)
Massive Ubuntu installs taking 2 minutes to boot? Whatever its faults, Ubuntu was the one distro most focused on boot time for a long while, and even a standard desktop install goes from BIOS hand-off to login screen in 10 - 12 secs with a standard HD.
Re: (Score:2)
Linux in general is a pretty fast booter, but it's the dependencies that are the problem.
An Ubuntu server on its own is snappy as heck to boot, but once you load it up with a bunch of services, each with its own dependencies on other services, no distribution is going to fix that.
Re:Do you need a unified filesystem at all? (Score:4, Informative)
FreeNAS is based on FreeBSD, and boot speed (no matter what the OS) is based entirely on the hard drive speed + CPU speed + 'automagic' configuration.
FreeBSD boots pretty fast, but you need to turn off things like the bootloader menu delay, and set fixed IP addresses. Same on Linux, but Linux tends to be sloppy about starting up services.
In either case you can usually just turn anything you don't need off, and just turn on what you do need.
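As a rough illustration of the kind of trimming meant here, on FreeBSD (the interface name, addresses and service names are assumptions for the example):

    # /boot/loader.conf -- skip the boot menu countdown
    autoboot_delay="0"

    # /etc/rc.conf -- static address so boot doesn't wait on DHCP,
    # and only the services you actually need
    hostname="fileserver"
    ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
    defaultrouter="192.168.1.1"
    sendmail_enable="NONE"
    samba_enable="YES"    # rc knob name depends on the installed Samba port

On Linux the equivalent is disabling unneeded init scripts (chkconfig <service> off, or update-rc.d -f <service> remove) and giving the NIC a static address.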
FreeBSD's ZFS is better than anything you can setup on Linux, but unless the box has a lot of RAM you're not going to get the expected performance.
Most of the NAS devices you see for sale run FreeNAS if they're based on x86-64 CPUs, or Linux if they're not (PPC/MIPS/ARM), but they're not particularly great pieces of hardware. You pretty much end up with something silly like:
OS -> UFS/EXT2/EXT3 -> Samba share
for Windows clients, but you can also do this on FreeBSD/FreeNAS (ZFS is terrible under Linux-FUSE)
FREEBSD->ZFS (using all drives, even remote drives) -> iSCSI
iSCSI is something that you must have GigE/10Gb fiber for, and decent processing power. Most of the systems you see (including Dell's) that do iSCSI are woefully underpowered for a small server, or extreme overkill (enterprise).
Windows however supports iSCSI out of the box. So you can do something theoretically stupid like this:
FreeBSD -> ZFS ->iSCSI ->Windows box accesses iSCSI and shares it with other Windows machines.
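In ZFS terms that chain is just a zvol handed to an iSCSI target; a rough sketch, with pool and device names made up and the target daemon's own config omitted (istgt was the usual choice on FreeBSD of that era):

    zpool create tank raidz da1 da2 da3 da4
    zfs create -V 4T tank/lun0    # block device appears at /dev/zvol/tank/lun0
    # point the iSCSI target (e.g. istgt) at /dev/zvol/tank/lun0, then attach it
    # from Windows with the built-in iSCSI Initiator and share the volume from there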
So it depends what you really want to do. From your description, it sounds like you want to hotplug a bunch of drives into a system, have that system "union" them via filesystem mounts (nobody says you have to mount everything at root), and then share them out via Samba.
But another possibility, not clearly indicated, is that the drives have overlapping file systems that you want to see as one (e.g. same directory structure, different file names). This is more complicated to deal with, so I'd probably not try to share off the hotswapped drives directly, and instead rsync all the drives to another filesystem and share that.
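The rsync route is only a few lines per drive; a sketch, with the mount point and destination paths made up:

    mount -o ro /dev/sdb1 /mnt/field
    rsync -rt /mnt/field/ /srv/share/unit01/    # copy, preserving timestamps
    umount /mnt/field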
Re: (Score:2)
FreeNAS is based on FreeBSD, and boot speed (no matter what the OS) is based entirely on the hard drive speed + CPU speed + 'automagic' configuration.
This used to be true, but it's not anymore. Look at modern init systems like Upstart, OpenRC or systemd. You'll see that these have very different results than what we saw with, say, sysv-rc. Note that there's a (huge) ongoing discussion inside Debian to choose what we will be using next (as Debian still uses the old sysv-rc thing).
Re:Do you need a unified filesystem at all? (Score:4, Informative)
OP here:
I left out a lot of information from the summary in order to keep the word count down. Each disk has an almost identical directory structure, and so we want to merge all the drives in such a way that when someone looks at "foo/bar/baz/" they see all the 'baz' files from all the disks in the same place. While the folders will have identical names, the files will be globally unique, so there's no concern about namespace collisions at the bottom levels.
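In other words, something like this, assuming the sleds are already mounted (paths are placeholders):

    # mhddfs (FUSE; packaged on Debian/Ubuntu, no recompiling):
    mhddfs /mnt/d1,/mnt/d2,/mnt/d3 /srv/merged -o allow_other

    # or aufs, where the distro kernel already ships the module:
    mount -t aufs -o br=/mnt/d1=ro:/mnt/d2=ro:/mnt/d3=ro none /srv/merged

Then /srv/merged/foo/bar/baz/ shows the 'baz' files from every disk at once.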
Wow (Score:5, Insightful)
I know I’m not going to be the first person to ask this, but if I understand it the plan here was:
1 - buy lots of hardware and install
2 - think about what kind of software it will run and how it will be used
I think you got your methodology swapped around man!
Why is documentation for *nix always so bad?
You are looking for information that your average user won't care about. Things like boot time don't get documented because your average user isn't going to have some arbitrary requirement to have their _file server_ boot in 30 seconds. That's a very weird use case. Normally you reboot a file server infrequently (unless you want to be swapping disks out constantly...). I'm assuming this requirement is because you plan on doing a full shutdown to insert your drives... in which case you really should be looking into hot-swap bays.
Also mandatory: you sound horribly underqualified for the job you are doing. Fess up before you waste even more (I assume grant) money and bring in someone that knows what the hell they are doing.
Re:Wow (Score:4, Insightful)
I know I’m not going to be the first person to ask this, but if I understand it the plan here was:
1 - buy lots of hardware and install
2 - think about what kind of software it will run and how it will be used
I think you got your methodology swapped around man!
Why is documentation for *nix always so bad?
You are looking for information that your average user won't care about. Things like boot time don't get documented because your average user isn't going to have some arbitrary requirement to have their _file server_ boot in 30 seconds. That's a very weird use case. Normally you reboot a file server infrequently (unless you want to be swapping disks out constantly...). I'm assuming this requirement is because you plan on doing a full shutdown to insert your drives... in which case you really should be looking into hot-swap bays.
Also mandatory: you sound horribly underqualified for the job you are doing. Fess up before you waste even more (I assume grant) money and bring in someone that knows what the hell they are doing.
Wow.. I completely agree with an AC.
The OP here is in way over his head and the entire project seems to have been planned by idiots.
This will end badly.
Re: (Score:2, Interesting)
He still hasn't told us what filesystem is on these drives they're pulling out of the field. That's the most important detail...
Re:Wow (Score:5, Informative)
[...]
Wow.. I completely agree with an AC.
The OP here is in way over his head and the entire project seems to have been planned by idiots.
This will end badly.
Like that's the first time. However, we don't know all of the circumstances, and I wouldn't be surprised if the OP had this dropped into his/her lap.
Re:Wow (Score:5, Informative)
Yeah. Before we can answer this person's questions, we need to know why he has:
1: Decided to cold-plug drives and reboot
2: Decided to use Linux
3: ... to serve to Windows
Better yet, tell us what you need to do - not how you think you should do it. Someone obviously needs to read data that's collected, but all the steps in between should be based on how it can be collected and how it can be accessed by the end users. Tell us those parameters first, and don't throw around words like Linux, samba, booting, which may or may not be a solution. Don't jump the gun.
As for documentation, no other OSes are as well-documented as Linux/Unix/BSD.
Not only are there huge amounts of man pages, but there are so many web sites and books that it's easy to find answers.
Unless, of course, you have questions like how fast a distro will boot, and don't have enough understanding to see that that depends on your choice of hardware, firmware and software.
I have a nice Red Hat Enterprise Linux system here. It takes around 15 minutes to boot. And I have another Red Hat Enterprise Linux system here. It boots in less than a minute. The first one is -- by far -- the better system, but enumerating a plaided RAID of 18 drives takes time. That's also irrelevant, because it has an expected shutdown/startup frequency of once per two years.
Re: (Score:3)
I would further enhance the question by asking: What the hell are you collecting that each sensor stores 500GB in 24 hours - photos? Seriously, these aren't sensors - they're drive fillers.
Seriously, if "sensor units scattered across a couple square miles" means 10 sensors - that's 5 Terabytes to initialize and mount in 30 seconds. I suspect that the number is greater than 10 sensors because the rest of the requirements are so ridiculous.
And why the sneakernet? If they're in only a couple of square miles
Re:Wow (Score:4, Insightful)
While I'm curious as to the application, it's his data rates that ultimately count, not our opinions of whether he's doing it right.
500GB may sound like a lot to us, but the LHC spews something like that with every second of operation. They have a large cluster of machines whose job it is to pre-filter that data and only record the "interesting" collisions. Perhaps the OP would consider pre-filtering as much as possible before dumping it into this server as well. If this is for a limited 12 week research project, maybe they already have all the storage they need. Or maybe they are doing the filtering on the server before committing the data to long term storage. They just dump the 500GB of raw data into a landing zone on the server, filter it, and keep only the relevant few GB.
Regarding mesh networking, they'd have to build a large custom network of expensive radios to carry that volume of data. Given the distances mentioned, it's not like they could build it out of 802.11 radios. Terrain might also be an issue, with mountains and valleys to contend with, and sensors placed near to access roads. That kind of expense would not make sense for a temporary installation.
I don't think he's an idiot. I just think he couldn't give us enough details about what he's working on.
Re: (Score:2)
6 megasamples per second at 8 bits comes to more than 500GB per day.
So that would be a pretty low-end A/D converter.
Re: (Score:2, Insightful)
Seismic data.
Radio spectrum noise level.
Acoustic data.
High frequency geomagnetic readings.
Any of various types of environmental sensors.
Any of the above, or a combination thereof, would be pretty common in research projects, and could easily generate 500GB+ per day. And the only thing you thought of was photos. You're not a geek, you're some Facebook Generation fuckwit who knows jack shit about science. Go back to commenting on YouTube videos.
Re: (Score:2)
Anybody care how fast a Blue Gene/P boots? :-)
Just hire an IBM consultant to boot it for you. That will give you a new perspective on the costs of boot time...
Re: (Score:3, Interesting)
Op here:
1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).
2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?
3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.
> Better yet, tell us what you need to do
- Take a server that is off, and boot it remotely (via ethernet magic packet)
- Ha
Re:Wow (Score:4, Informative)
The "under 30 seconds" part is not as easy as you think.
You're mounting new drives -- that means Linux will probably want to fsck them, which, at that volume, is going to take way more than 30 seconds.
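One common way around that is to keep the data sleds out of the boot-time fsck entirely and check them after boot; a sketch of the idea, with hypothetical labels and filesystem type:

    # /etc/fstab fragment: last field 0 = never fsck'd at boot,
    # nofail = boot doesn't hang if a sled is missing
    LABEL=unit01  /mnt/d1  ext4  defaults,nofail  0  0
    LABEL=unit02  /mnt/d2  ext4  defaults,nofail  0  0
    # then fsck/mount the sleds from a script once the system is up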
Re:Wow (Score:4, Interesting)
1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).
Yes, I will ask why. Why booted, and not hibernated, for example, if part of the reason is that it has to be powered off?
If the server is a single-purpose box serving huge files once, it does not benefit from huge amounts of RAM, and can hibernate/wake in a short amount of time, depending on which peripherals have to be restarted.
2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?
Yes? While Microsoft usually sucks, it can still be the least sucky choice for specific tasks. And there are more alternatives than Linux out there too.
3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.
What's the format on the drives? That can be a limiting factor. And what's the specifics for "sharing"? Must files be locked (or lockable) during access, are there access restrictions on who can access what?
For what it's worth, Windows Vista/7/2008R2 all come with Interix (as "Services for Unix") NFS support. So that's also an alternative.
- Take a server that is off, and boot it remotely (via ethernet magic packet)
That you want to "wake" it does not imply that the server has to be shut off. It can be in low power mode, for example - Apple's "bonjour" (which is also available for Linux) has a way to "wake" services from low-power states.
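For reference, sending the magic packet itself is trivial from any box on the LAN (the MAC address below is obviously a placeholder); the target's BIOS/NIC just has to have wake-on-LAN enabled:

    wakeonlan 00:11:22:33:44:55
    # or, with the etherwake package (needs root):
    etherwake -i eth0 00:11:22:33:44:55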
- Have that server mount its drives in a union fashion, merging the nearly-identical directory structure across all the drives.
Why? Sharing a single directory under which all the drives are mounted would also give access to all the drives under a single mount point - no need for union unless you really need to merge directories and for some reason cannot do the equivalent with symlinks ("junctions" in MS jargon).
Unions are much harder, as you will need to decide what to do when inevitably the same file exists on two drives (even inconspicuous files like "desktop.ini" created by people browsing the file systems).
Even copying the files to a common (and preferably RAIDed) area is generally safer - that way, you also don't kill the whole share if one drive is bad, and can reject a drive that comes in faulty.
But you seem to have made the choices beforehand, so I'm not sure why I answer.
- Do all this in under 30 seconds
You really should have designed the system with the 30 seconds as a deadline then.
If I were to do this, I would first try to get rid of the sneakernet requirement. 4G modems sending the data, for example. But if sneakernetting drives is impossible to get around, I'd choose a continuously running system with hotplug bays and automount rules.
Unless the data has to be there 30 seconds from when the drive arrives (this is not clear - from the above it appears that only the client access to the system has that limit), I'd also copy the data to a RAID before letting users access it.
Sure, Linux would do, but there's no particular flavour I'd recommend. Scientific Linux is a good one, but *shrug*.
If you need support, Red Hat, but then you also should buy a system for which RHEL is certified.
Re:Wow (Score:5, Funny)
I have worked with networked computers in a professional capacity longer than all of you combined, and I completely agree with the person you are replying to.
You are absolutely, definitely unqualified to make any design decisions about the project you have described. The design is stupid, the requirements are idiotic, and if it were implemented in such a manner it would not work, for many reasons that you don't seem to be capable of understanding.
On top of that massive ignorance, you are stupid.
Re:Wow (Score:4, Informative)
1) The cold plug is not the issue, rather, the server itself needs to be booted and halted on demand (don't ask, long story).
You will never find enterprise-grade hardware which will do this. You will be even harder pressed to do this on mechanical drives (for the OS), and harder still with random new drives being attached which may need to have integrity scans performed. This requirement alone is asinine and against every rule in the data center and system administration handbook for something that is serving data to other machines. If you need something that you can halt and shut down so you can load the drives, you do that on something other than the box which is servicing the data requests to other computers, and you copy the data from that system to the real server.
2) Because it's better? Do I really need to justify not using windows for a server on Slashdot?
No, you don't need to justify it, but you do need to explain it some. For the most part it sounds like most people where you work do not have much experience with *nix systems, because if you did, you would never have had requirement (1) in the first place. The whole point of *nix is to separate everything so that you don't have to bring down the system just to update/replace/remove one particular service/application/piece of hardware; everything is compartmentalized and isolated, which means the only time you should ever need to bring down the system is catastrophic hardware failure or a kernel update. Everything else should be built in such a way that it is hot-swappable, redundant, and/or interchangeable on the fly.
3) The shares need to be easily accessible to mac/win workstations. AFAIK samba is the most cross-platform here, but if people have a better idea I'm all ears.
Well, SAMBA is the only thing out there that will share to Win/Mac clients from *nix, so that is the right solution.
- Take a server that is off, and boot it remotely (via ethernet magic packet)
- Have that server mount its drives in a union fashion, merging the nearly-identical directory structure across all the drives.
- Share out the unioned virtual tree in such a way that it's easily accessible to mac/win clients
- Do all this in under 30 seconds
I don't know why people keep focusing on the "under 30 seconds" part, it's not that hard to get Linux to do this...
They are focusing on the "under 30 seconds" part because they know that it is an absurd requirement for dealing with multiple hard drives which may or may not have a working filesystem, as they have not only traveled/been shipped but have also been out in the actual field. The probability of data corruption is so much higher that they know the "under 30 seconds" is idiotic at best.
For instance, I can't even get to the BIOS in 30 seconds on anything that I have at my work. Our data storage servers take about 15-20 minutes to boot. Our compute servers take about 5-8 minutes. They spend more than 30 seconds just performing simple memory tests at POST, let alone hard drive identification and filesystem integrity checks or actually booting. This is why people are hung up on the "under 30 seconds".
If you had a specialty-built system, in which you disabled all memory checks (a REALLY BAD IDEA on a server, since if your memory is bad you can corrupt your storage, because writes to your storage typically come from memory), used SSDs for your OS drives, had no hardware RAID controllers on the system, and used SAS controllers which do not have firmware integrity checks, you might, just might, be able to boot the system in 30 seconds. But I sure as hell would not trust it for any kind of important data, because you had to disable all the hardware tests, which means you have no idea if there are hardware problems corrupting your data.
Re: (Score:3)
Well, SAMBA is the only thing out there that will share to Win/Mac clients from *nix, so that is the right solution.
There's OpenAFS.
And let's not forget NFS, since non-home versions of Windows Vista/7/2008R2 come with Interix and an NFS client. (Control Panel -> Programs and Features -> Windows Features -> Services for NFS -> Client for NFS)
NFS is the least secure, but it uses the least amount of resources on the server (this seems to be old inherited hardware), and if the rsize/wsize is bumped up on the server side, it's faster too.
(I'm sharing my music collection over NFS to Windows 7, so I know it works.)
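A minimal sketch of that setup, with made-up paths and subnet:

    # /etc/exports on the server, read-only export to the local subnet:
    /srv/share  192.168.1.0/24(ro,async,no_subtree_check)
    # reload the export table:
    exportfs -ra

    # on Windows 7 with "Client for NFS" enabled, something like:
    #   mount \\192.168.1.10\srv\share Z: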
Re: (Score:2)
No, it's just a generic science data collection application by the sounds of it.
The data rate he's describing is absolutely nothing unusual in the sciences.
Re: (Score:2)
Good call on the cold-plug. Hot-plugging SATA on Linux works very well, provided you have the right controller. I do it frequently for testing purposes.
Re: (Score:3)
Add to the fact that he doesn't really even seem to understand the problem himself, or know the tools he's got to work with.
I'm sorry, but: "UNIX has bad documentation"? Wikipedia itself is chock full of useful documentation in this regard. You can find functional "this is how it works" information on pretty much every single component and technology with ease. (You do, however, need to know what you're looking for.)
Try to do the same for Windows. The first 12 pages of search results will likely be marketin
Re:Wow (Score:4, Informative)
Op here:
The gear was sourced from a similar prior project that's no longer needed, and we don't have the budget/authorization to buy more stuff. Considering that the requirements are pretty basic, we weren't expecting to have a serious issue picking the right distro.
>You are looking for information that your average user won’t care about.
Granted, but I thought one of the strengths of *nix was that it's not confined to computer illiterates. Some geeks somewhere should know which distros can be stripped down to bare essentials with a minimum of fuss.
As for the 30 seconds thing, there's a lot of side info I left out of the summary. This project is quirky for a number of reasons, one of them being that the server itself spends a lot of time off, and needs to be booted (and halted) on demand. (Don't ask, it's a looooooong story).
Re: (Score:2)
Some geeks somewhere should know which distros can be stripped down to bare essentials with a minimum of fuss.
Debian (The Universal OS)
RHEL/CentOS/Scientific
Gentoo
Slackware
Re: (Score:2)
...know which distros can be stripped down ... with a minimum of fuss.[?]
Debian (The Universal OS)
RHEL/CentOS/Scientific [...]
And don't forget to compile a bespoke, static kernel.
Re: (Score:2)
Basically all distros can be stripped down easily. However, you will have the most success if you install the server version of Ubuntu, CentOS, Red Hat or Debian, since the base install there is already very restricted and you would not need to strip it further; just make sure that you disable the desktop in CentOS/Red Hat/Debian (no need in Ubuntu, since the server edition is console-only by default).
Instead of a unionfs you should RAID the drives, but since that seems to be no option(?) then you probably need something li
Re: (Score:2)
Sounds reasonable.
Any recent Linux distro should do... just stick with whatever you have expertise with. Scientific Linux would probably be the most suitable RHEL / CentOS clone... but it also comes with OpenAFS (which also has Windows clients) which might allow you another option to improve filesharing performance over Samba (I haven't played with it myself, though). Linux Mint is my current favorite Debian / Ubuntu distro.
Either would likely mount SATA disks that were hotplugged automatically under /me
Re: (Score:2)
Yeah, it reminds me of my experiences trying to get help in forums. I didn't want to just put in a CD and watch it install and boot to a shell. I had the hardware I had and for fun wanted it to do interesting things. Instead of conversations and tips / tricks I received a lot of "you're stupid if you want to do that and you're even stupider if you can't figure it out. I (make it seem like I) know and I won't tell you."
Re: (Score:2)
So a person messing things up, not learning (because the curve is too steep) and wasting the money, time and effort of others is going to help how? Unless you propose that now it is every person for himself, and that doing something about the crisis by actually being efficient and effective is completely irrational.
You know, people only get fired for incompetence because they are not even competent enough to see their own limits, or because they lack the guts to do something about it. Like getting a job they can actually do well.
OpenAFS+Samba (Score:2)
Use OpenAFS with Samba's modules. Distribution doesn't matter.
Re: (Score:2)
Looking at the OpenAFS docs, they're copyright 2000. Has the project gone stale since then?
Re: (Score:2)
OpenAFS is not dead. IIRC, any Samba AFS integration probably is. This doesn't sound like a job for AFS, however.
Mechanical Hard Drive (Score:3, Insightful)
Re:Mechanical Hard Drive (Score:4, Funny)
They already bought a $20 5400rpm 80GB drive and don't want it to be wasted.
Re: (Score:3)
It sounds like they inherited a bunch of hardware and don't have a budget for more stuff.
So... make do with what you have.
Re: (Score:2)
Why does it have to be a mechanical hard drive?
Their "couple square miles of undeveloped land" is actually a minefield, and to avoid accidental detonation (you know, magnetic triggers and all that), the situation calls for a purely mechanical hard drive - perhaps one running on water power.
Just a wild guess...
I would automate the copying (Score:5, Informative)
Really, single hard drives are notoriously bad at keeping data around for long. I would make sure you have a copy of everything. So make a file server with RAIDZ2 or RAID6 and script the copying of these hard drives onto a system that has redundancy and is backed up as well.
How many times have I seen scientists come out with their 500GB portable hard drives only to find them unreadable... way too many. If you fill 500GB in 24 hours, there is no way a portable hard drive will survive for longer than about a year. Most of our drives (500GB 2.5" portable drives) last a few months; once they have processed about 6TB of data full-time they are pretty much guaranteed to fail.
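A sketch of that kind of setup, with made-up pool, device and path names: a double-parity pool as the permanent copy, plus a trivial ingest loop for whatever sleds happen to be mounted.

    zpool create datapool raidz2 da1 da2 da3 da4 da5 da6
    zfs create datapool/field           # mounted at /datapool/field by default

    for d in /mnt/d*; do
        rsync -rt "$d"/ /datapool/field/ && logger "ingested $d"
    done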
Re: (Score:2)
Most of our drives (500GB 2.5" portable drives) last a few months, once they have processed about 6TB of data full-time they are pretty much guaranteed to fail.
This is interesting. How have SSDs held up under that use?
Re: (Score:2)
The cheap ones fail just as fast; it seems like I ship the same OCZ Onyx drive back and forth to their RMA site, and I've killed a couple of Vertexes also. The more expensive ones (SLC) are much better, and the Intel 32GB SLCs have held up, but they're so expensive it's really not worth it.
Re: (Score:2)
You should contact Other World Computing to see if they would loan you an SSD drive.
They are a small company and pay close attention to any issues with their products. I bet they would LOVE for you to stress their SSD products that quickly. :-)
http://eshop.macsales.com/shop/SSD/OWC/
Re: (Score:2)
funny, I still have a 20 meg MFM drive with its original os installed on it
Re: (Score:2)
>If you fill 500GB in 24 hours, there is no way a portable hard drive will survive for longer than about a year
These are fullsize desktop drives for exactly that reason.
You realize that being "full size desktop drives" makes zero difference for write duty cycle on mechanical drives?
As long as you're not on the bleeding edge of platter density, the manufacturers use the same process for all platters, both large and small. For lower-capacity, larger drives they just reduce the number of platters in the drive.
Re: (Score:2)
You are doing it with desktop-drives? You know that they can take far less shock/heat than 2.5" notebook drives, right?
Re: (Score:2)
It could well be physical shock; I've changed from spinning disks in a 2.5" caddy to an SSD. Some of the problems I had were to do with cheap-assed caddies with lousy power electronics that would fail. But I had three disks die in about a year; I changed to SSD and it's been going strong ever since.
These disks were only transported twice a day: to work, and then home. But they inevitably got dropped sooner or later.
Re: (Score:2)
High elevations can do it too. Mechanical hard drives need some air pressure. Take a typical hard drive up to 17000 ft, unpressurized, and it will fail very quickly. Most hard drive documentation mentions somewhere that they shouldn't be used above 10000 ft, and aren't warranted for that.
Re: (Score:2)
Well, the data is read/write quite intensively and HDD's don't write modified data to new areas as SSD's may do. They are 500GB capacity but the dataset is maybe 20-30GB all together for a week long analysis.
Yes, it's probably dropped and mishandled a lot by the students who do the work as well and the enclosures are pretty crappy. The manufacturer states that the drive will statistically generate an error for every 12TB and they're probably not built for intensive use.
CentOS, its enterprise class (Score:2)
Re:CentOS, its enterprise class (Score:4, Informative)
Scientific Linux is also a good option for similar reasons. Given it's a science grant, they might like the idea that it's used at labs like CERN.
Re:CentOS, its enterprise class (Score:5, Insightful)
"Enterprise class" is a marketing slogan. In the real world, all the RH derivatives are pretty good (including Scientific Linux and Fedora as well as CentOS), and all the Debian derivatives are pretty good (including Ubuntu). Gentoo's solid too. "Enterprise class" doesn't mean much. The main thing that distinguishes CentOS from Scientific Linux - which is also just a recompile of the RHEL code - is that the CentOS devs have an "enterprise class" attitude. Meanwhile, RH's own devs are universally decent, humble people. Those who do less often think more of themselves.
For a great many uses, Debian's going to be easiest. But it depends on just what you need to run on it, as different distros do better with different packages, short of compiling from source yourself. No idea what the best solution is for the task here, but "CentOS" isn't by itself much of an answer.
Re: (Score:3, Insightful)
"Enterprise class" means that it runs the multi-million dollar crappy closed source software you bought to run on it without the vendor bugging out when you submit a support ticket.
Re: (Score:2)
Someone please mod this guy up.
I swear, the higher the price of the software is, the more upper management just drools all over it, and the bigger the piece of shit it is. Millions of dollars spent per year licensing some of the biggest turds I've ever had the displeasure of dealing with. Just so management can say that some big vendor is behind it and will "have our backs when it fails".
Guess what, the support is awful too. The vendor never has your back. You'll be left languishing with downtime while they
Eventual migration to RHEL (Score:2)
Re: (Score:2)
Redhat, with a support contract is for him.
Re: (Score:2)
I disagree. This guy:
Redhat, with a support contract is for him.
Well if he starts with CentOS that migration will be pretty simple.
Here we go again (Score:3, Insightful)
Another "I don't know how to do my job, but will slag off OSS knowing someone will tell me what to do. Then I can claim to be l337 at work by pretending to know how to do my job".
It's called reverse psychology, don't fall for it! Maybe shitdot will go back to its roots if no one comments on junk like this and the slashvertisements?
Re: (Score:2)
Damn, you outed yourself there as an old loser that never "made it" in your career field. "Go back to its roots"? You would obviously have told the OP how to do this, and shown off how l337 you were, if you had the slightest clue of how to do this.
Lame. Even as an AC troll.
Go eat another twinkie and play with your star wars dolls in your mom's basement.
What Greyhole isn't (Score:5, Insightful)
Not sure why the 30s boot-up requirement is there, and it depends on what you define as "booted". Spinning up 12 hard drives and making them available through Samba within 30s guarantees your costs will be 10x more than they need to be.
This isn't another example of my tax dollars at work is it?
Re: (Score:2)
This isn't another example of my tax dollars at work is it?
I hope not! Or my university tuition fees, or really any other spending, even other people's money.
Who cares if the server boots up in 30 seconds or 30 minutes? The OP now has up to 12 500GB drives to either copy off or access over the LAN. There are hours of data access or data transfer here.
Questionable (Score:2, Informative)
Why would you want a file server to boot in 30 secs or less? OK, let's skip the fs check, the controller checks, the driver checks, hell, let's skip everything and boot to a recovery bash shell. Why would you not network these collection devices if they are all within a couple of miles and dump to an always-on server?
I really fail to see the advantage of a file server booting in under 30 seconds. Shouldn't you be able to hot swap drives?
This really sounds like a bunch of kids trying to play server admin. M
Partly easy, partly not... (Score:3)
Booting in under 30 seconds is going to be a bit of a trick for anything servery. Even just putzing around in the BIOS can eat up most of that time (potentially some minutes if there is a lot of memory being self-tested, or if the system has a bunch of hairy option ROMs, as the SCSI/SAS/RAID disk controllers commonly found in servers generally do...). If you really want fast, you just need to suck it up and get hot-swappable storage: even SATA supports that (well, some chipsets do, your mileage may vary, talk to your vendor and kidnap the vendor's children to ensure you get a straight answer, no warranty express or implied, etc.) and SAS damn well better, and it supports SATA drives. That way, it doesn't matter how long the server takes to boot; you can just swap the disks in and either leave it running or set the BIOS wakeup schedule to have it start booting ten minutes before you expect to need it.
Slightly classier would be using /dev/disk/by-label or by-UUID to assign a unique mountpoint for every drive sled that might come in from the field(ie. allowing you to easily tell which field unit the drive came from).
If the files from each site are assured to have unique names, you could present them in a single mount directory with unionFS; but you probably don't want to find out what happens if every site spits out identically named FOO.log files, and (unless there is a painfully crippled tool somewhere else in the chain) having a directory per mountpoint shouldn't be terribly serious business.
Re: (Score:2)
Thinking of what you just wrote I'd like to add a bit.
First of all I don't think they will serve the data from those disks; if only because you will probably want yesterday's data available as well, and drives are constantly being swapped out. So upon plugging in a drive, I'd have a script copy the data to a different directory on your permanent storage (which of course must be sizeable to take several times 500 GB a day - he says each sensor produces 500 GB of data - so several TB of data a day, hundreds o
Your boss has no idea what he is doing (Score:2)
I know you already stated the hardware is already in place. This is about exercising your new found authority. Go big or go home.
ZFS Filesystem will help (Score:4, Insightful)
500G in a 24h period sounds like it will be highly compressible data. I would recommend FreeBSD or Ubuntu with ZFS Native Stable installed. ZFS will allow you to create a very nice tree with each folder set to a custom compression level if necessary. (Don't use dedup) You can put one SSD in as a cache drive to accelerate the shared folders speed. I imagine there would be an issue with restoring the data to magnetic while people are trying to read off the SMB share. An SSD cache or SSD ZIL drive for ZFS can help a lot with that.
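The ZFS side of that is just a few commands; the pool, dataset and device names here are placeholders:

    zfs create tank/sensordata
    zfs set compression=gzip-6 tank/sensordata   # or lzjb for lighter CPU load
    zpool add tank cache ada4                    # SSD read cache (L2ARC)
    zpool add tank log ada5                      # SSD ZIL/SLOG, if sync writes hurt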
Some nagging questions though.
How long are you intending on storing this data? How many sensors are collecting data? Because even with 12 drive bay slots, assuming cheap SATA of 3TB apiece (36TB total storage with no redundancy), let's say 5 sensors, that's 2.5TB a day of data collection, and assuming good compression of 3x, 833GB a day. You will fill up that storage in just 43 days.
I think this project needs to be re-thought. Either you need a much bigger storage array, or data needs to be discarded very quickly. If the data will be discarded quickly, then you really need to think about more disk arrays so you can use ZFS to partition the data in such a way that each SMB share can be on its own set of drives so as to not head thrash and interfere with someone else who is "discarding" or reading data.
Re: (Score:3)
500G in a 24h period sounds like it will be highly compressible data
It sounds like audio and video to me, which is not very compressible at all if you need to maintain audio and video quality. And good luck booting a system with that many drives in "under 30 seconds", especially on a ZFS system which needs a lot of RAM (assuming you are following the industry rule of thumb of 1GB RAM per 1TB of data you are hosting), as you will never make it through RAM integrity testing during POST in under 30 seconds.
Pogoplug (Score:2)
You could also just use symbolic links. (Score:4, Insightful)
Re: (Score:2)
They are, actually [microsoft.com] (at least in Vista and later, and for just directories as far back as 2K), but that's irrelevant.
If you set Samba to follow symlinks, it will present it to any client applications as though it were the actual file. So even old DOS-based Windows systems can handle it.
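The relevant smb.conf settings look roughly like this (the share name and path are placeholders); note that newer Samba versions require "unix extensions = no" before "wide links" takes effect:

    [global]
        unix extensions = no

    [fielddata]
        path = /srv/share
        follow symlinks = yes
        wide links = yes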
Silly workplace security policies (Score:2)
Aka: I do not want the insecurity of losing my workplace if my boss happens to learn on Slashdot how clueless I am.
Seriously... could you send us the resumé that you sent to get that job?
waaaay over head (Score:5, Insightful)
What is the point of 30 second boot on a file server? If this is on the list of 'requirements', then the 'plan' is 1/4 baked. 1/2 baked for buying hardware without a plan, then 1/2 again for not having a clue.
A unioning filesystem? What is the use scenario? How about automounting the drives on hot-plug and sharing the /mnt directory?
Now, 500GB/day in 12 drive sleds... so 6TB a day? Do the workers get a fresh drive each day, or is the data only available for a few hours before it gets sent back out, or are they rotated? I suspect that mounting these drives for sharing really isn't what is necessary; more like pull the contents to 'local' storage. Then, why talk about unioning at all? Just put the contents of each drive in a separate folder.
Is the data 100% new each day? Are you really storing 6TB a day from a sensor network? 120TB+ a month?
Are you really transporting 500GB of data by hand to local storage and expecting the disks to last? Reading or writing 500GB isn't a problem, but constant power cycling and then physically moving/shaking the drives around each day to transport them is going to put the MTBF of these drives in months, not years.
dumb
Re: (Score:2)
I agree. "30 seconds boot time" is a very special and very un-server-like requirement. It is hard to achieve and typically not needed. Hence it points to a messed-up requirements analysis (or some stupid manager/professor having put that in there without any clue what it implies). This requirement alone may break everything else or make it massively less capable of doing its job.
What I don't get is why people without advanced skills feel they have any business setting up very special and advanced system configurat
Unifed filesystem is a crutch (Score:2)
Use systems of symbolic links.
Also, why "30 seconds boot time"? This strikes me as a bizarre and unnecessary requirement. Are you sure you have done careful requirements analysis and engineering?
As to the "bad" documentation: Many things are similar or the same on many *nix systems. This is not Windows where MS feels the need to change everything every few years. On *nix you are reasonably expected to know.
OP here (Score:5, Informative)
Ok, lots of folks asking similar questions. In order to keep the submission word count down I left out a lot of info. I *thought* most of it would be obvious, but I guess not.
Notes, in no particular order:
- The server was sourced from a now-defunct project with a similar setup. It's a custom box with a non-standard design. We don't have authorization to buy more hardware. That's not a big deal, because what we have already *should* be perfectly fine.
- People keep harping on the 30 seconds thing.
The system is already configured to spin up all the drives simultaneously (yes, the PSU can handle that) and get through the BIOS all in a few seconds. I *know* you can configure most any distro to be fast, the question is how much fuss it takes to get it that way. Honestly I threw that in there as an aside, not thinking this would blow up into some huge debate. All I'm looking for are pointers along the lines of "yeah, distro FOO is bloated by default, but it's not as bad as it looks because you can just use the BAR utility to turn most of that off". We have a handful of systems running WinXP and Linux already that boot in under 30 seconds; this isn't a big deal.
- The drives in question have a nearly identical directory structure but with globally-unique file names. We want to merge the trees because it's easier for people to deal with than dozens of identical trees. There are plenty of packages that can do this, I'm looking for a distro where I can set it up with minimal fuss (ie: apt-get or equivalent, as opposed to manual code editing and recompiling).
- The share doesn't have to be samba, it just needs to be easily accessible from windows/macs without installing extra software on them.
- No, I'm not an idiot or derpy student. I'm a sysadmin with 20 years experience (I'm aware that doesn't necessarily prove anything). I'm leaving out a lot of detail because most of it is stupid office bureaucracy and politics I can't do anything about. I'm not one of those people who intentionally makes things more complicated than they need to be as some form of job security. I believe in doing things the "right" way so those who come after me have a chance at keeping the system running. I'm trying to stick to standards when possible, as opposed to creating a monster involving homegrown shell scripts.
Not gonna happen. (Score:5, Insightful)
You have to be able to identify the disks being mounted. Since these are hot swappable, they will not be automatically identifiable.
Also note, not all disks spin up at the same speed. Disks made for desktops are not reliable either - though they tend to spin up faster. Server disks might take 5 seconds before they are failed. You also seem to have forgotten that even with all disks spun up, each must be read (one at a time) for them to be mounted.
Hot swap disks are not something automatically mounted unless they are known ahead of time - which means they have to have suitable identification.
UnionFS is not what you want. That isn't what it was designed for. Unionfs only has one drive that can be written to - the top one in the list. Operations on the other disks force it to copy the file to the top disk for any modifications. Deletes don't happen to any but the top disk.
Some of what you describe is called an HSM (hierarchical storage management), and requires a multi-level archive where some volumes may be online, others offline, yet others in between. Boots are NOT fast, mostly due to the need to validate the archive first.
Back to the unreliability of things - if even one disk has a problem, your union filesystem will freeze - and not nicely either. The first access to a file that is inaccessible will cause a lock on the directory. That lock will lock all users out of that directory (they go into an infinite wait). Eventually, the locks accumulate to include the parent directory... which then locks all leaf directories under it. This propagates to the top level until the entire system freezes - along with all the clients. This freezing behavior is one of the things that an HSM handles MUCH better. A detected media error causes the access to abort, and that releases the associated locks. If the union filesystem detects the error, then the entire filesystem goes down the tubes, not just one file on one disk.
Another problem is going to be processing the data - I/O rates are not good going through a union filesystem yet. Even though UnionFS is pretty good at it, expect the I/O rate to be 10% to 20% less than maximum. Now client I/O has to go through a network connection, so that may make it bearable. But trying to process multiple 300 GB data sets in one day is not likely to happen.
Another issue you have ignored is the original format of the data. You imply that the filesystem on the server will just "mount the disk" and use the filesystem as created/used by the sensor. This is not likely to happen - trying to do so invites multiple failures; it also means no users of the filesystem while it is getting mounted. You would do better to have a server disk farm that you copy the data to before processing. That way you get to handle the failures without affecting anyone that may be processing data, AND you don't have to stop everyone working just to reboot. You will also find that local copy rates will be more than double what the server's client systems can read anyway.
As others have mentioned, using the Gluster file system to accumulate the data allows multiple systems to contribute to the global, uniform filesystem - but it does not allow for plugging in/out disks with predefined formats. It has a very high data throughput though (due to the distributed nature of the filesystem), and would allow many systems to be copying data into the filesystem without interference.
As for experience - I've managed filesystems with up to about 400TB in the past. Errors are NOT fun as they can take several days to recover from.
Re: (Score:2)
If UnionFS is not the answer, and the OP doesn't want something more complicated, it honestly sounds like the simplest solution is to just symlink everything into another directory once it's up and running.
You have to do something about failures, but if you have that in hand, making the symlinks is really easy; you can do it with three lines of bash (for each path in find -type f, create a symlink). If you need a bit more control, and it's not easily doable with grep, I'd write a small Python program, you can google
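Roughly that three-liner, spelled out (the /mnt/d* and /srv/merged paths are placeholders):

    cd /mnt
    find d*/ -type f | while read -r f; do
        dest="/srv/merged/${f#*/}"          # strip the leading dN/ component
        mkdir -p "$(dirname "$dest")"
        ln -s "/mnt/$f" "$dest"
    done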
Re: (Score:2)
The boot-time problem can be solved simply by buying a small (32/64GB) SSD and installing Linux on that. This will cost you less than $200. And if you don't have the budget for that, then please send your story to thedailywtf.com, because it sounds like some interesting fuckup happened somewhere in your organization.
Re: (Score:3)
My suggestion would be:
0. Do consider writing this yourself... a 100-line shell script (carefully written and documented) may well be easier to debug than a complex off-the-shelf system.
1. You can easily identify the disks with e.g. /dev/disk/by-uuid/ which, combined with some udev rules, will make it easy to tell which filesystem is which, even if the disks are put into different caddies. [Note that all SATA drives are hot-swap/hot-plug capable: remember to unmount, and to connect power before data; disco
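As an illustration of the by-label/udev idea (the label, paths and rule are hypothetical, and an automounter or a RUN script is the more robust way to do the actual mounting): label each drive's filesystem once in the field unit, e.g. e2label /dev/sdb1 unit07, and then a rule like this gives it a predictable home on hot-plug.

    # /etc/udev/rules.d/99-field-drives.rules
    KERNEL=="sd?1", ENV{ID_FS_LABEL}=="unit07", \
        RUN+="/bin/mount -o ro /dev/%k /mnt/unit07"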
First thing I thought of... (Score:2)
The first thing I thought of was loss of one of the drives during all this moving around. Seems the protection of the data would be of the utmost priority here. Keeping this in mind, I'd go with a RAID 5 or 10 [thegeekstuff.com] setup. This will eliminate having the data distributed on different "drives", so to speak, and it would appear to the system as one single drive. This would increase the drive count, but losing a drive, either physically (oops, dropped that one in the puddle) or electronically (oops, this drive cras
If you want good documentation... (Score:2)
Ask the correct community : science informatics (Score:5, Informative)
What you're describing sounds like a fairly typical Sensor Net (or Sensor Web) to me, maybe with a little more data logged than is normal per platform. (I believe they call it a 'mote' in that community).
Some of the newer sensor nets use a forwarding mesh wireless system, so that you relay the data to a highly reduced number of collection points -- which might keep you from having to deal with the collection of the hard drives each night (maybe swap out a multi-TB RAID at each collection point each night instead).
I'm not 100% sure of what the correct forum is for discussion of sensor/platform design. I know they have presentations in the ESSI (Earth and Space Science Informatics) focus group of the AGU (American Geophysical Union). Many of the members of ESIPfed (Federation of Earth Science Information Partners) probably have experience in these issues, but it's more about discussing managing the data after it comes out of the field.
On the off chance that someone's already written software to do 90% of what you're looking for, I'd try contacting the folks from the Software Reuse Working Group [nasa.gov] of the Earth Science Data System community.
You might also try looking through past projects funded through NASA AISR (Advanced Information Systems Research [nasa.gov])... they funded better sensor design & data distribution systems. (Unfortunately, they haven't been funded for a few years... and I'm having problems accessing their website right now). Or I might be confusing it with the similar AIST (Advanced Information Systems Technology [nasa.gov]), which tends more towards hardware vs. software.
So, my point is -- don't roll your own. Talk to other people who have done similar stuff, and build on their work, otherwise you're liable to make all of the same mistakes and waste a whole lot of time. And in general (at least ESSI/ESIP-wide), we're a pretty sharing community... we don't want anyone out there wasting their time doing the stupid little piddly stuff when they could actually be collecting data or doing science.
(and if you haven't guessed already ... I'm an AGU/ESSI member, and I think I'm an honorary ESIP member (as I'm in the space sciences, not earth science) ... at least they put up with me on their mailing lists)
Why *nix? (Score:2)
Since you already bought the hardware, odds are you're going to run into driver issues. Since you're not already a *nix guy, my suggestion is to just run Windows on your server. Next, buy big fat USB enclosures, the kind that can hold DVD drives, and put the drive sleds in there. Now you don't have to reboot when adding the drives.
Take your pick (Score:2)
Pretty much any unix like OS will do fine. Personally I use NetBSD and if linux is a requirement, Debian.
Poor technical documentation? (Score:2)
FreeNAS 8 and ZFS (Score:2)
Re: (Score:3)
You know, I was in the "it doesn't matter" camp until I read your post. Now I just changed my mind.
Yes, any distro will do it. You'll have the same (lack of) trouble configuring the service on any distro. So, choose a distro that is easy to get into bare bones and to upgrade, because those a
Re: (Score:2)
Any Linux distribution will boot in less than 30 seconds if [..]
Linux does. Too bad it takes the BIOS and RAID array of a server up to minutes to do their checks...
Re: (Score:3)
Actually AUFS requires kernel patches, it's never been mainlined because the kernel maintainers like their own union mounts better ... even though they are far less useful (not write-through like AUFS, which is really nice for something like a file server) and forever undelivered. That said, I think Ubuntu and SUSE still come with AUFS patched in ... for Debian you have to compile your own patched kernel or use something like the Liquorix kernel.
Re: (Score:3, Insightful)
Saying "only good mp3 player" makes no sense unless you specify your criteria. Amarok, Banshee, VLC, Rhythmbox, or smplayer are all capable mp3 players by various criteria and easily found by googling for "linux mp3 player". If you use Ubuntu, searching for mp3 player in Software Center finds a plethora of good players. Googling "list of linux audio software" easily finds other things besides just mp3 players: maybe something like Audacity satisfies your requirements better. Search for "mp3" on xmms2.org fi
Re: (Score:2)
OP rather clearly stated criteria for "good mp3 player". Here it is, since you missed it the first time: "sorts music like XMMS and since I'm used to XMMS does most everything in a similar fashion as well."