Linux Software

Linux On HP Blades

HNFO writes: "HP is unveiling their new 'blade' servers that fit onto a single card. Their press release is here. They are currently available with your choice of RedHat, Debian and SuSE. A picture of the card can be found here and a picture of the chassis can be found here." If you're looking for high-density slot-based computers, earlier postings about RLX's Transmeta blades and OmniCluster's x86 variety might interest you as well.
  • useable for media (Score:4, Interesting)

    by cavemanf16 ( 303184 ) on Wednesday December 05, 2001 @11:05AM (#2659839) Homepage Journal
    It may be designed for high-density, minimum use of space servers for companies, but personally, I would love to encase that puppy in a little black box and make it my media server at home. It would make a nice, neat, hardly noticeable (compared to my ugly beige Dell case - blech!) all encompassing, reconfigurable media server for piping mp3's, DVD's, mpeg's, and other digitized media to my home theatre from all over the house...
    • Re:useable for media (Score:2, Informative)

      by Anonymous Coward
      It may be designed for high-density, minimum use of space servers for companies, but personally, I would love to encase that puppy in a little black box and make it my media server at home.

      You might, but you'd have to fit your own cooling system and PSU, as most 'blade' equipment relies on the frame it's mounted to for power and heat dispersal.
    • A cheaper solution is to buy a 2U rack case and a motherboard to fit. Works great; I have 2 80-gig drives in there, with space for 2 more. It has 2 NICs in the only 2 PCI slots (right-angle stacked... kinda cool). Then I have WebSurfer Pros and Audiotrons around the house for my audio pleasure. Eventually I will replace the Audiotrons and WebSurfers with 1 more rackmount PC with 2 sound cards running a jukebox system into my FAST brand whole-house audio system.

      Do a search for CAJUN for the software behind the jukebox system.
    • The pictures on HP's site show that only 8 blades fit into a chassis, and only 3 chassis fit into a standard rack. It's not that small.
  • Blade/Chassis links to the same image; I'll try to dig up the URL for the actual chassis.
    • I think this *should* be it: http://www.hp.com/products1/servers/blades/products/bh7800/index.html

      "The HP Blade Server bh7800 Chassis architecture incorporates network switching, storage interconnect, and space for multiple servers into a single, highly available chassis infrastructure. The horizontally scaled 38-slot, 13U-high HP Blade Server bh7800 chassis has both front and back access. It supports from 1 to 16 server blades, 1 or 2 network blades, 1 to 16 storage blades of multiple types, and an intelligent management blade."
  • by fizz-beyond ( 130257 ) on Wednesday December 05, 2001 @11:07AM (#2659857) Homepage
    Did anyone else notice that the two pictures link to the exact same thing?
  • by ThatComputerGuy ( 123712 ) <amrit AT transamrit DOT net> on Wednesday December 05, 2001 @11:08AM (#2659860) Homepage
    Does anyone know how much heat each of these blades will generate? Nowadays just the idea of 2 Athlons in a single tower screams "SPACEHEATER!", but what are the specs on these things? Are they made to each be really high performance, or good performance at lower power usage/heat release?
    • Not sure, but I think I heard some sysadmins planning to roast marshmallows in the server room in celebration of buying the new blades...
    • by Xzzy ( 111297 ) <sether AT tru7h DOT org> on Wednesday December 05, 2001 @11:22AM (#2659951) Homepage
      > Does anyone know how much heat each of these
      > blades will generate?

      My guess is that the people these things are marketed to won't care how much heat they generate.

      Think about it.. you're some struggling dotcom who's managed to survive the blowout and are just barely keeping your head above water. All your servers are located at a hosting firm where they charge an assload of cash for rackspace.

      Here's the caveat.. they DON'T charge you for excessive power consumption or heat output. At least, they didn't a while back when I still worked in the area; I admit it could be different now. But the point is, your goal is to get as many CPUs into as few rack units as possible, and if it starts melting the rack cuz yer making so much heat, you don't care. That's the ISP's issue, because they don't charge you for cold air.

      Now obviously part of the air conditioning is covered in your monthly fee, but they don't scale it based on how much heat you're making. All hosting firms worry about is ethernet drops and rack units.
      • Now obviously part of the air conditioning is covered in your monthly fee, but they don't scale it based on how much heat you're making. All hosting firms worry about is ethernet drops and rack units.


        Well, the dumb ones maybe. Somebody has to pay for the power, both for your rack of heaters and for the air conditioning. If an ISP doesn't figure out a way to pass those costs on (proportionately, you'd hope) to customers, it's eventually going to fail.


        In fact it seems to me that a smart .com would try to optimize their power/page ratio and negotiate better terms from their ISP based on that effort. Convince the ISP to stick it to the people in the next rack!
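
        To put rough numbers on the parent's point, here's a quick back-of-the-envelope sketch in Python. The 50 W per slot figure comes from HP's data sheet (quoted in a reply further down); the electricity price and the cooling overhead factor are illustrative assumptions, not vendor numbers.

            # Rough hosting power cost for one fully loaded chassis.
            SLOT_WATTS = 50        # HP data sheet: "capable of 50 Watts per slot"
            SLOTS = 38             # bh7800 chassis slot count (quoted above)
            PRICE_PER_KWH = 0.10   # assumed electricity price, $/kWh
            COOLING_FACTOR = 2.0   # assume ~1 W of cooling per 1 W of IT load

            it_kw = SLOT_WATTS * SLOTS / 1000.0   # worst-case chassis load, kW
            total_kw = it_kw * COOLING_FACTOR     # IT load plus cooling
            monthly_kwh = total_kw * 24 * 30      # 30-day month

            print(f"Chassis load: {it_kw:.1f} kW; with cooling: {total_kw:.1f} kW")
            print(f"Monthly energy: {monthly_kwh:.0f} kWh -> ${monthly_kwh * PRICE_PER_KWH:.2f}")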

    • You know, that's funny, because my dual 1.4GHz Athlon box is named SPACEHEATER, and it runs at a cool 65°C.
    • This thread has 5 replies and no one has an answer yet?

      On the data sheet (there's a nice link in the article, I'm sure you can find it), you'll find the specs you're looking for:
      Capable of 50 Watts per slot.
      Single Pentium III 700 MHz, 512 MB ECC (PC100), 30GB IDE 2.5" HD, cPCI hot swap, dual 10/100base-T.
      smart temperature monitor and failsafe circuitry

      So, it's just good performance, not ultra-high.
    • Intel just last month introduced the 700 MHz ultra-dense server Pentium III ULV on a 0.13 µm process, which I think is 4W typical / 7W max power.

      I can't find out whether the CPU in the new HP Blade is this model, but it would make sense.
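
      Putting the two figures from this sub-thread side by side (50 W per slot from the data sheet, 4 W typical / 7 W max for the ULV part), a quick Python sanity check of how much of the slot budget the CPU would eat:

          SLOT_BUDGET_W = 50.0   # HP data sheet: capable of 50 Watts per slot
          CPU_TYPICAL_W = 4.0    # 700 MHz ULV Pentium III, typical (as quoted)
          CPU_MAX_W = 7.0        # same part, max power (as quoted)

          print(f"CPU share of slot budget: {CPU_MAX_W / SLOT_BUDGET_W:.0%} worst case")
          print(f"Left for RAM, disk, NICs and losses: {SLOT_BUDGET_W - CPU_MAX_W:.0f} W")
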
  • by webword ( 82711 ) on Wednesday December 05, 2001 @11:08AM (#2659868) Homepage
    Buy the razor at a reduced cost first, then pay for blade after blade after blade.

    (Actually, all joking aside, this really does happen in the technology business. Especially HP! Buy the printer at a very reasonable cost and then pay big time for the stinking ink cartridges.)
  • by turbine216 ( 458014 ) <turbine216@@@gmail...com> on Wednesday December 05, 2001 @11:12AM (#2659890)
    try this link [hpservernews.com].
  • CompactPCI Board.. (Score:3, Insightful)

    by Anonymous Coward on Wednesday December 05, 2001 @11:12AM (#2659891)
    Uhh, so what? It's just another CompactPCI board. Check out Force Computers, Motorola, and a dozen other companies that make cPCI boards.. (and have for at least 4+ years..)

    News flash: HP reinvents the compactPCI board...
    • No kidding. I've been working with blades for 3 years now here at Motorola. MontaVista has provided a PPC/x86 Linux solution for almost 2 years. This post about HP's products missed the blade boat by years.

    • Well, the interesting part is that they support Linux on it, and ship it with it.

      Now you can probably get Linux to run on any other CompactPCI card, but this way you can be sure that it's supported: no missing drivers, etc. Nice to know if you want to use Linux on a cPCI board.

      Now, as a Linux zealot, I find it interesting anyway; especially the statement below, which is rather unusual and may merit mentioning:

      HP blade server products will initially run on the Linux operating system distributions of Red Hat, Debian and SuSE. HP-UX and Microsoft® Windows® are expected to be available on the blade server in the first half of 2002

      They really seem to give Linux a high priority there - getting it to run even before their own OS.

  • Not so dense? (Score:2, Interesting)

    by mybecq ( 131456 )
    I like this analysis [theregister.co.uk], where it seems that you'll get 48 in a 40U rack. Compared to the RLX, which gets several hundred, it isn't quite so flash.

    Of course having Linux available before Windows and HP-UX is interesting...
  • Link Correction (Score:2, Interesting)

    by Vrallis ( 33290 )
    Go here [hpservernews.com] for links to all the Blade photos (front, back, chassis, and specialty blades).
  • by jacquesm ( 154384 ) <j@nOSpaM.ww.com> on Wednesday December 05, 2001 @11:13AM (#2659901) Homepage
    At www.clustercompute.com [clustercompute.com] I thought I had the previous highest-density record... not any more :)
    • Pretty :)

      But the question is: did you leave the power supplies like that, or did you finish the job and hack them too? (They're pretty small compared to their boxes - most likely for ventilation - but a setup like this couldn't use very much power; you're running off of motherboards and floppies, so using very underpowered power supplies would be a sweet option if you could get them cheaply enough.)
    • I reckon he built this just to study the Slashdot effect :-) Come on, people, let him study!
  • This is very cool, on many levels: space-saving, open architecture, and so forth.
    And sure, there's a lot of collaboration going on behind it as the press release says, but what's the likelihood that Blades will actually be a force in server hardware? A lot of companies are worried enough about financial situations without replacing large amounts of their assets.
    Just seems like a helluva risk to take, with this New Cool thing. When it DOES gain popularity, though, it'll be nice to hear success stories about physically cooler server rooms (I'd imagine) with more space for NERF combat [thinkgeek.com] or Ultimate Frisbee [rochester.edu].
  • Compaq (Score:5, Interesting)

    by RedX ( 71326 ) <(redx) (at) (wideopenwest.com)> on Wednesday December 05, 2001 @11:16AM (#2659921)
    According to Cnet [cnet.com], Compaq will be offering Proliant BL series of bladed servers soon as well. According to the article, HP was able to beat Compaq and others to market with their bladed offerings because HP went with an existing CompactPCI architecture, whereas Compaq believes CompactPCI doesn't offer high enough data transfer rates for bladed servers.
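
    For context on Compaq's bandwidth complaint: CompactPCI uses standard PCI signaling, so peak theoretical throughput is just bus width times clock rate. A quick sketch; these are the standard PCI configurations, and the article doesn't say which one HP's backplane actually runs:

        # Peak theoretical PCI throughput: width (bits) x clock (MHz) / 8 -> MB/s.
        # CompactPCI uses standard PCI signaling, so the same arithmetic applies.
        def pci_peak_mb_per_s(width_bits: int, clock_mhz: int) -> float:
            return width_bits * clock_mhz / 8

        for width, clock in [(32, 33), (64, 33), (32, 66), (64, 66)]:
            print(f"{width}-bit @ {clock} MHz: {pci_peak_mb_per_s(width, clock):.0f} MB/s")
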
  • by Anonymous Coward on Wednesday December 05, 2001 @11:16AM (#2659924)
    My office evaluated a Blade a little while back, since we were in the market for a new build machine to replace an aging Dell PowerEdge (dual P3-400). The Blade performed very well and was rock solid running Debian 2.2r3 (upgraded to kernel 2.4.15). However, there was little to distinguish the Blade from most of its cheaper competitors, besides its easy upgradeability. We ran some benchmarks with the department next door, and their Compaq server blew the Blade out of the water, even though they both had identical CPUs. The Blade was also kind of pokey at 3-D rendering; we think the network cards that it came with were a bit underpowered. (We use a nice 3Com 10/100 switch, so normally fast streaming data coming from the server flies down the pipe.)

    Overall we came to the conclusion that the Blades were novel, but overpriced and underpowered, at least for our needs. But organizations who can afford to pay extra and get very little for it won't mind the Blades.

    df

  • by chris.dag ( 22141 ) on Wednesday December 05, 2001 @11:17AM (#2659927) Homepage
    The biggest problem I have with these systems (and the ones from RLX) is that they put cheezy laptop hard disks on the blades. The not-so-fast 4300 RPM drives or whatever they are using now are simply not fast enough for I/O intensive tasks.

    I'll stick to standard high density rackmounts for my cluster projects that need better local disk IO.

    my $.02 of course

    • The biggest problem I have with these systems (and the ones from RLX) is that they put cheezy laptop hard disks on the blades. The not-so-fast 4300 RPM drives or whatever they are using now are simply not fast enough for I/O intensive tasks


      One of my good friends works as a chip designer for Dell. We were talking over beers last weekend about how Dell is coming out with the same thing soon, only with the option of having either the cheezy laptop drives OR a normal sized SCSI drive. You'll be able to choose between density or speed.
    • The one pictured on the HP site looks like a real snoozer [ibm.com], especially with the 12ms access time.
    • Why not just skip putting the frigging IDE or SCSI hard drive in the case? Get an FDDI daughter unit so you don't have to sacrifice density, plus less heat.
    • Well, the OmniCluster units can use either standard IDE drives (or a laptop drive with an adapter), or they can use the drives in whatever system you plug them into. We had one of their reps by last week, and expect some test blades soon.

      The OmniClusters can also use the PCI bus as a high-speed network between blades on the same bus.

      Slick idea all around, and could be useful in some applications (we're going to test them as Citrix servers).
    • The main reasons for cheap drives are:

      1. Most blade applications don't require or even use hard drives. They are a point of failure and add cost ($100 for every hard drive + $100 for every SCSI addition to the board x 100's of boards in most installs; worked through below) to any project. When you spend millions, dropping $100,000 is significant. If you want I/O, go with something else.
      2. Most projects that use blades are also realtime applications in telco or internet. You can't really have a realtime OS that spends a lot of time reading/writing from a slow HD. Therefore, these are just there for startup and so forth, when I/O isn't all that exciting.

      That said, there are SCSI PMC modules that can be added, and there are some Force and Motorola chassis that support SCSI natively, but not for each blade.
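
      The parent's cost arithmetic, spelled out; the 500-board install size is an assumption, chosen to reproduce the ~$100,000 figure above:

          DRIVE_COST = 100   # $ per hard drive (parent's figure)
          SCSI_COST = 100    # $ per SCSI addition to the board (parent's figure)
          BOARDS = 500       # assumed install size ("100's of boards")

          total = BOARDS * (DRIVE_COST + SCSI_COST)
          print(f"Local disks on {BOARDS} boards add ${total:,}")  # -> $100,000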

    • This post is almost certainly too late to get anyone's attention, but there you go...

      It doesn't surprise me that the blade servers come with fairly ho-hum internal disks. We have a large Citrix farm of 1U servers (we call them "pizza boxes"), all attached to our SAN, which is only a step away from blade servers. We'd ideally not use any disk storage in the servers themselves, preferring to get it all from the SAN, and I imagine that this is a direction the blade servers will be going in.

      We've found that in practice we can't happily get our pizza boxes to boot from the SAN disk images, hence we have internal disks for the operating system, with the application data itself residing on the SAN fabric. The 1U boxes we buy only have a single fibre-channel card at present, which is a bit worrying for true redundancy.

      If you are using an internal disk for booting a blade you'd want it to be at least adequate for the OS (latency etc). The comments about the hard disk being a bit underwhelming still apply, unless these blades can boot straight off a fibre-channel card.

      Aegilops
  • Blades are cool (Score:1, Insightful)

    by LazyDawg ( 519783 )
    What we need are PCs that come with a single, directing processor on the mainboard and a bunch of PCI slots for daughtercard machines, running an OS geared towards clustering and parallel processing (see the toy sketch below). They'd be able to get a lot more oomph than current-generation single-processor machines, and a non-von Neumann architecture with multiple processing points might finally get people out of the WIMP interface paradigm.

    These Linux-running blade machines seem to be a good first step on this evolutionary path.
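
    As a toy illustration of that "directing processor plus daughtercard machines" idea, here's a minimal master/worker sketch using Python's standard multiprocessing module. The task and worker count are made up for illustration; a real blade cluster would dispatch work over a network fabric rather than to local processes.

        # Toy master/worker sketch: one directing process farms tasks out to a
        # pool of workers, each standing in for a daughtercard machine.
        from multiprocessing import Pool

        def daughtercard_task(n: int) -> int:
            # Stand-in workload: sum of squares up to n.
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            jobs = [100_000, 200_000, 300_000, 400_000]
            with Pool(processes=4) as pool:   # four pretend daughtercards
                print(pool.map(daughtercard_task, jobs))
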
  • Doesn't Sun already have a blade [sun.com]? Look out! Here come the landsharks.
  • Well, I didn't want to go through the whole silly 'save as' crap, so here is the link to the high-res photo:

    http://www.hpservernews.com/blades/photos/HPServerbc1100_pr_01675.jpg
  • This thing is a joke (Score:3, Interesting)

    by frost22 ( 115958 ) on Wednesday December 05, 2001 @11:50AM (#2660105) Homepage
    This product looks dead in the water.

    They need a ridiculous 13U to house 16 blade servers - that's about 1.2 servers per U (math below).

    Have a look at the RLX beasts linked in the article. Those have 24 blades in a 3U case - that's a whopping 8 servers per U. Now, that's "ultra density".

    The HP stuff is just... sort of... like... ahem... dense...

    f.
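
    Worked through in Python, using the figures quoted in this thread (16 blades in 13U for the HP bh7800, 24 blades in 3U for the RLX, and the 40U rack from the Register analysis above):

        # Blades per rack unit and per 40U rack, counting whole chassis only.
        systems = {
            "HP bh7800": (16, 13),  # 16 server blades per 13U chassis
            "RLX":       (24, 3),   # 24 blades per 3U case
        }
        RACK_U = 40

        for name, (blades, units) in systems.items():
            per_rack = (RACK_U // units) * blades
            print(f"{name}: {blades / units:.1f} servers/U, {per_rack} per {RACK_U}U rack")

    That reproduces both the "1.2 servers per U" figure and the Register's 48-per-rack count, against 312 for the RLX.
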
  • Management Blade (Score:3, Informative)

    by Anonymous Coward on Wednesday December 05, 2001 @11:55AM (#2660146)
    I worked on the management blade. It's based around a StrongARM 110 and runs Linux 2.4. It has no hard disk and uses a RAM disk instead. Power-on to prompt in 20 secs.
  • Rack space cheap! (Score:4, Interesting)

    by Computer! ( 412422 ) on Wednesday December 05, 2001 @12:01PM (#2660178) Homepage Journal
    With the recent exodus (sorry) from hosting providers, is rack space all that valuable anymore? I mean, for people who aren't still stuck in contracts?
  • I have a friend who works for a company here in Atlanta making "blade" systems. It's called Racemi [racemi.com] (pronounced ray-see-me).

    According to my friend, they have actual customers and a shipping product, which is more than most of the other blades on the market seem to have (although I would bet HP already has preorder customers). I wonder how a big company like HP will affect the market for smaller companies like Racemi and RLX.

    The Racemi box is very open-source friendly in terms of software and the like. They do a lot of the scheduling code in python, which is one of my favorite languages.

    How much do these things cost, anyway (any of them)? Miniaturization is always expensive. Just look at the (now dead) Apple Cube. Cool, but overpriced.

  • by lelitsch ( 31136 ) on Wednesday December 05, 2001 @12:16PM (#2660294)
    Good one. HP is putting the name "blade" on a small-scale server that will go directly against the low-end Sun Blade [sun.com] 100s and 1000s.
  • by Anonymous Coward
    There are a lot of other companies also making blades for compactPCI.

    Motorola makes a whole line of them based on the G3 and G4 chips. Nortel uses them (running linux) for their compact VoIP solutions.
  • Why is it that the Linux choices vendors offer are always limited to just SysV-style distributions? If they really believe choice is good, why not offer a real choice and include some different kinds of systems as well?

    • Well, if you want a more BSD-oriented Linux distro, Slackware Linux [slackware.com] supposedly fits the bill. I can't make any real comparisons, but I've been running it without any problems for a number of years, and find working with it much simpler than configuring Red Hat.
      • I know about Slackware Linux [slackware.com]. Want to help me make vendors more aware of it? And I don't mean they have to go so far as to actually offer and support it for their customers. They only need to do enough to let the system administrator run the Linux distribution of his choice, or even one of the free BSDs, with a reasonable expectation of the hardware working correctly (e.g. not blaming the software unless they have actual reason to know the software is at fault).

    • Why is it that some people just don't know how to say "thank you"? They didn't have to offer a form of *NIX AT ALL, and they give you 3 different distros of Linux, and the BEST you can come up with is that they are only SysV-style distros?
      • When one of these big corporations offers specific Linux distributions, they generally deny support ... even support for the hardware itself ... unless you run not just that distribution (or one of them, if more than one is offered), but also run only the copy they provide to you. When the choices they make are not all that diverse (well, Debian is a bit different from Red Hat or SuSE, but not in everything), the customers are basically limited.

        The best hardware vendor will be one that offers OS support for whatever OS they want to offer support for, but also offers _hardware_ support for plain hardware. And they would also make sure that hardware is sufficiently standardized to work not only with virtually every Linux distribution that uses a stock kernel, but also with the big three open-source BSDs.

        Ultimately, I don't want their distribution anyway. I can put my own on there. But I do know that when the vendors are offering an OS like this, they are declining support for the hardware when alternatives are used. That is the problem.

  • I've evaluated the RLX chassis-based systems before, and compared to these, I think RLX has them beat hands-down. RLX offers 3 NICs per board, lower power requirements, and probably equal speed.

    I'm also sure that RLX costs less, unless you buy the IBM relabeled ones.

    So what it comes down to is a nice first try for HP, but I'll stick with RLX until Compaq makes their entry--then I'll re-evaluate.
  • It would be *really* cool if they'd make a laptop that would accept blades. Then you could pull a server out of the chassis and take it on the road with ya...
  • "Blade" hype (Score:4, Informative)

    by Animats ( 122034 ) on Wednesday December 05, 2001 @02:33PM (#2661100) Homepage
    Single-board computers in 6U Eurocard form factors have been around for years. The new ones have turn handles, like an AT&T 5ESS switch, rather than thumbscrews, for mounting. And Compact PCI single board computers have been around for a while, too. They've been sold in small volumes for industrial automation, and overpriced for that reason, but they're not new.

    Eurocard is good packaging. Industrial control, telephone COs, traffic light controllers, and Sun servers have been built that way since the 1980s.

    A note on nomenclature: Eurocard is a physical packaging standard dating from 1981. Eurocards come in 3U, 6U, and 9U heights. Compact PCI generally uses 3U, VMEbus uses 3U and 6U, and Sun servers used 9U. "VMEbus" is sometimes confused with Eurocard, but there's lots of stuff in Eurocard packaging that's not VMEbus compatible. These "blade" machines are 6U Eurocard, but the signals at the back connectors are, as I understand it, network interfaces and such, not a bus.
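
    For reference, the sizes involved: a rack unit is 1.75 in (44.45 mm) per the EIA standard, and the Eurocard board heights below are the nominal IEC 60297 values. Note that a 3U/6U/9U Eurocard designation names the subrack height, so the board itself is a bit shorter than U x 44.45 mm.

        RACK_UNIT_MM = 44.45  # 1U = 1.75 in (EIA rack standard)

        # Nominal Eurocard board heights for the sizes named above.
        eurocard_board_mm = {3: 100.0, 6: 233.35, 9: 366.7}

        for u, board_mm in eurocard_board_mm.items():
            print(f"{u}U Eurocard: board {board_mm} mm high, "
                  f"subrack {u * RACK_UNIT_MM:.2f} mm")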

  • And I wonder if Sun will sue... they have a series of workstations called Blades.

    Different class and slightly different market, but how does using the name on another computer device affect trademarks and/or copyright?
  • I like the "Network Switch Blade" the best.
  • A datasheet on HP's site mentions that the blade servers support PA-RISC software. Has Transmeta done PA-RISC code-morphing software, or is there just another blade server module that has a PA-RISC CPU instead of a Crusoe?
    If there is PA-RISC emulation, then it should be easy to add other architectures. A Crusoe-based computer that could run x86, PowerPC and PA-RISC software would be very nice. Being able to run Mac OS X on my PC from time to time would be really nice.
  • I suppose they can run Linux, *BSD, or something else on them, but Linux is a nice buzzword at the moment.
  • Many companies are planning to move to InfiniBand-based blades. Dell, for one; they are calling them bricks. Here the blade is a standard IB form-factor module. This lets vendors do some really nice things. Get rid of PCI, for one. Next, get rid of internal I/O (storage, Ethernet). The blade uses the IB backplane to connect to the IB fabric and thus to other blades for IPC, and to I/O modules for Ethernet and storage connectivity. With speeds at 2.5 Gb/s, 10 Gb/s, and 30 Gb/s you can come up with some really nice clustering applications. And you get to use a standard that many companies are backing. Now the blade just houses processors, memory, and an InfiniBand Host Channel Adapter chip or two. Moving the I/O out leaves you a lot more room. You could probably fit 8 blades or so in 3U of space. And these blades can use top-shelf I/O like Gb Ethernet and 2Gb Fibre Channel, where most blades today are 100 Mb Ethernet and IDE or SCSI.
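
    The three speeds listed line up with the 1x, 4x, and 12x InfiniBand link widths; each lane signals at 2.5 Gb/s, so the arithmetic is just lanes times 2.5:

        LANE_GBPS = 2.5  # InfiniBand 1.0 signaling rate per lane

        for lanes in (1, 4, 12):
            print(f"{lanes}x link: {lanes * LANE_GBPS:g} Gb/s")  # 2.5 / 10 / 30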
