
DRBD To Be Included In Linux Kernel 2.6.33

An anonymous reader writes "The long-time Linux kernel module for block replication over TCP, DRBD, has been accepted as part of the main Linux kernel. Amid much fanfare and some slight controversy, Linus has pulled the DRBD source into the 2.6.33 tree, expected to be released in February 2010. DRBD has existed as open source and been available in major distros for 10 years, but lived outside the main kernel tree in the hands of LINBIT, based in Vienna. Being accepted into the main kernel tree means better cooperation and wider user accessibility to HA data replication."
  • by DrDitto ( 962751 ) on Thursday December 10, 2009 @09:40PM (#30397514)
    How does this differ from the Network Block Device (NBD)? http://en.wikipedia.org/wiki/Network_block_device [wikipedia.org]
  • Very Useful Software (Score:5, Interesting)

    by bflong ( 107195 ) on Thursday December 10, 2009 @09:58PM (#30397638)

    We use DRBD for some very mission critical servers that require total redundancy. Combined with Heartbeat, I can fail over from one server to another without any single point of failure. We've been using it for more than 5 years, and never had any major issues with it. It will be great to have it in the mainline kernel.
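    For anyone curious what such a setup looks like, a minimal drbd.conf resource is roughly this (DRBD 8.x syntax; the host names, devices and addresses are made up, and heartbeat then decides which node is primary at any given moment):

      resource r0 {
        protocol C;                  # synchronous replication: a write completes only once both nodes have it
        on alpha {
          device    /dev/drbd0;
          disk      /dev/sdb1;       # local backing partition
          address   10.0.0.1:7788;
          meta-disk internal;
        }
        on beta {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.2:7788;
          meta-disk internal;
        }
      }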

  • by MichaelSmith ( 789609 ) on Thursday December 10, 2009 @09:59PM (#30397656) Homepage Journal

    But your hardware device is just another computer running software for which this feature might be useful.

  • by wiredlogic ( 135348 ) on Thursday December 10, 2009 @10:13PM (#30397756)

    Just what we need, yet another networking module built into the kernel. Creating a fresh config with the 2.6 series kernels has become even more of a hassle since there are so many modules that are activated by default. To stop the insanity I have to go through and eliminate 90% of what's there so that 'make modules' doesn't take longer than the kernel proper. Most of them are targeted for special applications and don't need to be in a default build.
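    One way to tame that, assuming you're on 2.6.32 or later where the target exists, is to let the build system trim the config down to the modules actually loaded on the box:

      lsmod > /tmp/modules-before        # optional: snapshot what's currently loaded
      make localmodconfig                # trims the .config to the modules lsmod reports
      make -j4 bzImage modules
      sudo make modules_install install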

  • by Lemming Mark ( 849014 ) on Thursday December 10, 2009 @10:36PM (#30397904) Homepage

    Doing it in software for purely virtual hardware is useful. I know it's been used to sync disks across the network on Xen hosts, the idea being that if the local and remote copies of the disk are kept in close sync, you can migrate a virtual machine with very low latency. Should be able to do similar tricks with other Linuxy VMMs. Having software available to do this stuff makes it easy to configure this sort of thing quickly, especially if you're budget-constrained, hardware-wise.
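    The trick for low-latency migration is that both hosts have to be able to hold the device open at once; if memory serves, with DRBD 8.x that is just a matter of allowing dual-primary in the resource's net section:

      net {
        allow-two-primaries;    # both hosts may be Primary during the migration window
      }

    After that, something like 'xm migrate --live guest otherhost' (or the equivalent for your VMM) moves the domain while the disks stay in sync underneath it. Guest and host names here are hypothetical.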

  • by martin-boundary ( 547041 ) on Thursday December 10, 2009 @11:01PM (#30398042)
    People who build (and test) their own custom kernels are important. Sometimes, a bug won't show up except with some weird combination of kernel options, because some code path dependencies are missed with the fully configured kernels that the distros build for you.
  • Linux FS rocks (Score:5, Interesting)

    by digitalhermit ( 113459 ) on Friday December 11, 2009 @12:02AM (#30398250) Homepage

    I admin AIX systems for my day job... One thing that's really nice about AIX is that the filesystem and underlying block device are highly integrated. This means that to resize a volume you can run a single command that does it on the fly. For AIX admins who are new to Linux it seems like a step backwards, and they liken it to HP-UX or some earlier volume management...

    Ahh, but the beauty of having separate filesystem and block device layers is that it's so damn flexible. I can build an LVM volume group on iSCSI LUNs exported from another system. In that VG I can create a set of LUNs that I can use as the basis of my DRBD volume. In that DRBD volume I can carve out other disks. Or I can multipath them. Or create a software RAID. (There's a rough sketch of this layering at the end of this comment.)

    Anyhoo, DRBD is a really cool technology. It gives the ability to create HA pairs on the cheap. You can put anything from a shared apache docroot there to the disks for Oracle RAC. With fast networking available for cheap, almost any shop can have the toys that were once only affordable to big companies...
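    A rough sketch of the layering described above, with made-up device, target and volume names (the point being that each layer only ever sees a block device):

      # log in to the iSCSI LUN exported by the other box; say it shows up as /dev/sdc
      iscsiadm -m node -T iqn.2009-12.example:lun0 -p 10.0.0.5 --login
      # build an LVM volume group on it and carve out a backing volume
      pvcreate /dev/sdc
      vgcreate vg_remote /dev/sdc
      lvcreate -L 100G -n lv_drbd vg_remote
      # point a DRBD resource's "disk" at /dev/vg_remote/lv_drbd, then bring it up
      drbdadm create-md r0
      drbdadm up r0
      # /dev/drbd0 is itself just a block device: make it a PV, a RAID member, whatever
      pvcreate /dev/drbd0
      vgcreate vg_ha /dev/drbd0
      lvcreate -L 20G -n lv_data vg_ha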

  • Re:Linux FS rocks (Score:4, Interesting)

    by mindstrm ( 20013 ) on Friday December 11, 2009 @12:05AM (#30398268)

    Or you could have ZFS, where you don't even need to resize... it just happens.

    And you still have block device representations if you want them, along with all the other benefits of zfs.
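    For comparison, the ZFS version of the same dance, with invented pool and dataset names (on Solaris or FreeBSD at the time, since ZFS isn't in the Linux kernel):

      zpool create tank mirror da0 da1    # the pool; grow it later with 'zpool add'
      zfs create tank/home                # datasets draw from the shared pool, no resize step
      zfs set quota=50G tank/home         # optional cap if you want a fixed size
      zfs create -V 10G tank/vol0         # a zvol: the block-device view mentioned above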

  • by pjr.cc ( 760528 ) on Friday December 11, 2009 @12:31AM (#30398376)

    I don't like DRBD (though I've used it for a while)... it's a massive, convoluted and complex mess, and fairly inflexible.

    Personally, I'm hoping dm-replicator gets near completion sometime soon, though details of it are rather scarce (I do have a kernel built with the dm-replicator patches, but trying to do anything with it seems near impossible)...

    I do a fair amount of work inside the storage world and DRBD is just such a mess in so many ways.

    I sound very critical of DRBD, and that's not the way I mean to come across. What I'm really trying to say is that it's bloated for the small amount of functionality it provides, and with a couple of minor tweaks it could do much, MUCH more. It's a kewl piece of software, but like many FOSS projects it has a hideous, weighty config prone to confusion (something you just don't need with DR).

    Still, that is the way it is!

  • by dgym ( 584252 ) on Friday December 11, 2009 @02:01AM (#30398722)
    I'm not about to dismiss your experience, but things have changed over the last 15 years so it might not be as relevant as it once was.

    In that time processors have become much faster, memory has become much cheaper, commodity servers have also become much cheaper and a lot of software has become free. While that has happened, hard disks have become only a little faster. As a result many people consider custom hardware for driving those disks to be unnecessary - generic hardware is more than fast enough and is significantly cheaper.

    There might still be some compelling reasons to go with expensive redundant SAN equipment, but for many situations a couple of generic servers full of disks and running Linux and DRBD will do an admirable job. The bottleneck will most likely be the disks or the network, both of which can be addressed by spending some of the vast amount of money saved by not going with typical enterprise solutions.
  • 2002 (Score:1, Interesting)

    by Anonymous Coward on Friday December 11, 2009 @02:57AM (#30398878)

    FreeBSD users have been doing it for 7 years with the default kernel. I guess that's one reason why it's more popular with companies that depend on HA, such as Bank of America. I love having ZFS as well, the combination is sooooo bad ass :-)

    For those that run DRBD and want to try it, this [74.125.77.132] is worth a read.

  • by DerPflanz ( 525793 ) <bart@NOSPAm.friesoft.nl> on Friday December 11, 2009 @03:59AM (#30399134) Homepage

    We have used drbd 0.7 for some mission critical servers, but it gave more headaches than a warm (or even cold) standby. The main problem is keeping your nodes synchronised for the disks that are NOT in the drbd setup (e.g. /, /etc, /usr, etc.). We put our software on one drbd disk and the database on another. However, when adding services, it is easy to 'forget' to add the startup script in /etc/ha.d, and the first failover results in not all services being started (see the haresources example below). Which leads to a support call.

    I understand that we should perhaps change the setup to include a 'correct' way to provide updates, but just putting a RAID-1 in a server, with database replication somewhere else, seems to be less of a hassle.
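    To make the failure mode concrete: with heartbeat v1, everything that should move on failover has to be listed on the one haresources line, so a service started only from /etc/init.d on the primary gets silently left behind. The resource and service names below are made up:

      # /etc/ha.d/haresources
      node1 drbddisk::r0 Filesystem::/dev/drbd0::/srv::ext3 10.0.0.50 postgresql myapp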

  • Re:Oh c'mon now... (Score:4, Interesting)

    by evilviper ( 135110 ) on Friday December 11, 2009 @04:30AM (#30399270) Journal

    Why is Linux still locking up? Windows fixed that problem years ago with 2k/XP!

    It isn't. In our mid/large company, we have hundreds of Linux workstations, and they've all been working for years without a single hitch, from day one. No permission problems, never had an update causing significant issues, don't even ALLOW users to get a command-line, etc. Vastly easier to debug when there is a problem, and has allowed the company to replace a large group of Windows experts with a small group of Linux experts, and the vastly improved productivity has allowed the company to significantly reduce the number of employees (or rather, just cease to replace them when there is turnover).

    Just the other day I noticed the uptime on one of the Linux workstations was over a year. No lockups. The few issues we've had with the systems have been directly traced to hardware problems.

    If yours is a true story (which I seriously doubt) you should look at hiring at least one half-way decent Linux SysAdmin at a reasonable salary to fix the pathological issues with the installation which was likely done by minimum-wage idiots without a clue.

  • by sydb ( 176695 ) <michael@NospAm.wd21.co.uk> on Friday December 11, 2009 @06:15AM (#30399692)

    I implemented a DRBD/heartbeat mail cluster for a client about six years ago. At the same time I implemented a half-baked user replication solution using Unison when we should have been using LDAP. I picked up DRBD and heartbeat easily under pressure and found the config logical and consistent once I understood the underlying concepts. Certainly not bloated. Unison on the other hand caused major headaches. So quite clearly, like LSD, DRBD affects different users in different ways and perhaps you should stick to the crack you're smoking.
