
Baserock Slab Server Pairs High-Density ARM Chips With Linux

Nerval's Lobster writes with a report at Slash Datacenter that a portion of the predicted low-power-ARM-servers future has arrived, in the form of Codethink's Baserock Slab ARM Server, which puts 32 cores into a half-depth 1U server. "As with other servers built on ARM architecture, Codethink intends the Baserock Slab for data centers in need of extra power efficiency. The Slab supports Baserock Linux, currently in its second development release (known as 'Secret Volcano'), as well as Debian GNU/Linux. While Baserock Linux was first developed around the X86-64 platform, its developers planned the leap to the ARM platform. Each Slab CPU node consists of a Marvell quad-core 1.33-GHz Armada XP ARM chip, 2 GB of ECC RAM, a Cogent Computer Systems CSB1726 SoM, and a 30 GB solid-state drive. The nodes are connected to the high-speed network fabric, which includes two links per compute node driving 5 Gbits/s of bonded bandwidth to each CPU, with wire-speed switching and routing at up to 119 million packets per second."

  • Slashvertisement (Score:4, Insightful)

    by daniel23 ( 605413 ) on Thursday August 23, 2012 @10:33AM (#41096325)

    The summary is almost unreadable, too

    • To be fair, this one was actually mildly interesting compared to the inanity and insanity of most /BI posts.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Only slashdot can make "bi" posts uninteresting.

  • by godrik ( 1287354 ) on Thursday August 23, 2012 @10:39AM (#41096423)

    The main question is how much GFlop per watt you get out of it, or the number of transactions per watt. Saying it is ARM so it is energy efficient is as stupid as saying it is pink so it is pretty.

    Some applications are best processed (energy-wise) by a kick-ass, power-hungry GPU. Who cares if you consume a lot of electricity if you have tremendous throughput?
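    For illustration, here is the sort of per-watt comparison being asked for, as a small Python sketch. Every figure below is made up; none of it comes from Codethink, Marvell, or Intel.

        # Toy performance-per-watt comparison; the metric is the point, not the made-up values.
        boxes = {
            "hypothetical ARM node":   {"gflops": 10.0,  "requests_per_s": 4000,  "watts": 15.0},
            "hypothetical x86 server": {"gflops": 120.0, "requests_per_s": 30000, "watts": 200.0},
        }
        for name, spec in boxes.items():
            print(f"{name}: {spec['gflops'] / spec['watts']:.2f} GFLOPS/W, "
                  f"{spec['requests_per_s'] / spec['watts']:.0f} requests/s per watt")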

    • Also no 64-bit, so it might be memory constrained compared to other architectures.

      • I suspect that it is particularly memory constrained by there being 2GB of RAM hard-soldered to each compute card...

        I think that the Armada XPs used on these things support LPAE, so it would theoretically be possible to have more than 4GB of RAM; but with the 32-bit constraints on per-process addressing. For whatever reason, it looks like they went with substantially less RAM than even the 4GB one might have expected.

      • Actually, I believe current-generation ARM processors address memory using 40 bits, not 32. I'm trying to dig up a reference, though; I could be dreaming this up.
        • by Desler ( 1608317 )

          You're thinking of the Cortex-A15 processors, which introduce 40-bit addressing; none of them are on the market yet.

        • I think that ARM is currently where x86 was before the 64-bit move: at least some of the classier chips have a PAE-like scheme that allows more than 4GB of address space, but with limits on how effectively any single process can use more than it could on a 32-bit system (and, also similar to PAE-era x86, it isn't terribly common to actually find ARM systems kitted out with even 4GB of RAM, especially at the price points that don't involve a visit from a sales team).

          • A number of 64-bit chips were released with smaller address buses: the PowerPC G5, for instance, had a 42-bit address bus, and my Core 2 Duo has a 36-bit address bus. This is flat, mind you, not banked like PAE.

        • by exabrial ( 818005 ) on Thursday August 23, 2012 @11:09AM (#41096907)
          Sorry, I was wrong. Current-generation Cortex-A9 processors support up to 4 GB _per process_ using some virtualization tricks. The Cortex-A15 has 40-bit addressing, supporting up to 1 TB of RAM. A15 processors are just being released right now...
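          Putting the addressing limits from this sub-thread side by side (plain arithmetic, not vendor specifications):

              # 32-bit per-process virtual addressing vs. 40-bit LPAE physical addressing.
              GiB = 2 ** 30
              print(2 ** 32 // GiB, "GiB of virtual address space for a 32-bit process")  # 4
              print(2 ** 40 // GiB, "GiB of physical address space with 40-bit LPAE")     # 1024, i.e. 1 TiB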
    • by tgd ( 2822 )

      The main question is how much GFlop per watt you get out of it, or the number of transactions per watt. Saying it is ARM so it is energy efficient is as stupid as saying it is pink so it is pretty.

      Some applications are best processed (energy-wise) by a kick-ass, power-hungry GPU. Who cares if you consume a lot of electricity if you have tremendous throughput?

      No, all the important information for this advertisement is there -- the link to Slashdot's other site with its full page advertisement.

      • by nullchar ( 446050 ) on Thursday August 23, 2012 @11:13AM (#41096961)
        From the fine "article":

        Typical ARM cores consume just a fraction of the power of an X86-based server. While Codethink hasn’t outright disclosed the actual power needs of the Slab, its 260-watt power supply offers something of a clue. Meanwhile, the forward-compatible SOMs (server object managers) will allow operators to replace the CPUs with newer models.

        First, it's like the GP said, "it's ARM therefore it's low power" without giving any specifications. To market this, it seems like they would really need tested specs from a decent benchmark tool.

        Finally, to praise the quality of the "article", I thought "SoM" meant System on Module [wikipedia.org]. A "server object manager" sounds like something running inside a java virtual machine.

        I don't understand how Geek.net thinks attaching poor-quality blog posts (they're not really articles) to the Slashdot brand will help them... Slashdotters see through those BI/Cloud/DataCenter posts every time.

    • by hattig ( 47930 ) on Thursday August 23, 2012 @11:05AM (#41096839) Journal

      Total data centre power consumption is a major problem. We have the space in the racks for more servers, but no more power. In that case, getting (example figures) 50% of the CPU performance at 25% of the power consumption is totally worth it.

      The problem for these ARM servers is whether a 64-core cluster in 150W beats a quad-core low-power x86 server in 150W. "Beating" in this situation means either performance, cost or both.

      • by godrik ( 1287354 )

        I can understand that. But do you ACTUALLY get 50% of the computing power for 25% of the electric power? You still need disks and memory running. Less computing power per node means you might need more nodes, which means more network equipment, fans, ...

        Is it really worth it? Note that it is a real question, not a rhetorical one.
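        A back-of-the-envelope sketch of that question, using the grandparent's example figures (half the compute at a quarter of the CPU power) plus an assumed fixed per-node overhead for disks, RAM, NICs and fans; all values are illustrative guesses, not measurements:

            # Toy rack-level comparison: fixed rack power budget, fixed per-node overhead.
            # All figures are invented to illustrate the question, not measured.
            RACK_BUDGET_W = 10_000
            X86_CPU_W, X86_PERF = 130.0, 1.0              # baseline x86 node, relative performance 1.0
            ARM_CPU_W, ARM_PERF = 0.25 * X86_CPU_W, 0.5 * X86_PERF
            OVERHEAD_W = 40.0                             # disks, RAM, NICs, fans: paid either way

            for label, cpu_w, perf in (("x86", X86_CPU_W, X86_PERF), ("ARM", ARM_CPU_W, ARM_PERF)):
                nodes = int(RACK_BUDGET_W // (cpu_w + OVERHEAD_W))
                print(f"{label}: {nodes} nodes, total relative throughput {nodes * perf:.1f}")

        With these particular guesses the ARM rack still comes out ahead, but the fixed per-node overhead eats most of the CPU's power advantage, which is exactly the concern above; different real-world numbers could easily flip the result.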

    • The main question is how much GFlop per watt you get out of it

      Provided your workload is floating-point heavy. ARM has historically been weak at floating-point arithmetic, but I'm under the impression that ARM might do better per watt on integer workloads than x86.

      • by Desler ( 1608317 )

        Cortex-A15 is, according to ARM, supposed to be much, much beefier for floating point and have better NEON performance. Plus with 40-bit physical addressing it could be quite an impressive competitor.

      • by godrik ( 1287354 )

        Any metric will be good for me. If you prefer the number of HTTP requests per watt, I am fine with that. The performance will highly depend on the application anyway. Without actual numbers it is difficult to know whether it is interesting or not.

  • by fuzzyfuzzyfungus ( 1223518 ) on Thursday August 23, 2012 @10:42AM (#41096489) Journal

    I seriously hope that the mechanical design isn't as nasty as the rendering makes it look...

    So, we've got a 260-watt PSU in a half-depth 1U. By my count, there are nine of those weedy little low-profile fans that start buzzing on cheap GPUs after about a week, plus one blower and a 40mm fan in the PSU. Also, there are air intake/exhaust slits on the front and rear of the case (which could be a problem, since the manufacturer recommends mounting them back-to-back to achieve full rack density...), but none on the sides and (as best one can tell from the rendering) no obvious flow path from intake to exhaust, just a lot of churn.

    I can only hope that this is a low volume product, for which doing actual case design was uneconomic...

    • by hamjudo ( 64140 )
      It is less than half depth. There is a gap for hot air between the front and back units. In the pictures and animation on the Baserock site, there are more ventilation slots. It appears that the air enters each unit through the front and both sides, and exits through the back. This will produce a chimney of heat in the center of each rack.
      • which is a nice idea - cables and heat in the centre of the rack rather than having a hot and cold aisle. Of course, cabling them up might be tricky, but as they're only half-width, it should be easy to pull them out for access to the back-ends.

        Still, ARM SoCs aren't known for producing massive amounts of heat, so I think the cheapo fans are just there for show more than anything, but I agree - a better designed case with air throughput flowing from front to back would be more efficient. The current case de

  • by Anonymous Coward

    This isn't an SSI either. The interconnects are actually 2x2.5 gigabit ethernet links to a '24 port switch', ethernet bonding, and 2x10 gigabit output for interlinking modules. That's from the site.

    I was kinda curious what sort of ARM chips were available with actual interconnects. Combined with the lousy 2 GB of memory per module, these things sound like a very expensive FAIL for anything other than frontend web services.
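    Taking the quoted figures at face value (eight quad-core nodes for the 32 cores, two bonded 2.5 Gbit/s links per node, two 10 Gbit/s uplinks), the off-chassis oversubscription works out roughly as follows; the exact fabric topology isn't documented, so treat this as a sketch:

        # Back-of-the-envelope fabric arithmetic from the figures quoted above.
        nodes = 8                      # 32 cores / 4 cores per Armada XP node
        node_gbit = 2 * 2.5            # two bonded 2.5 Gbit/s links per node = 5 Gbit/s
        uplink_gbit = 2 * 10           # two 10 Gbit/s links out of the chassis
        internal = nodes * node_gbit
        print(f"{internal:.0f} Gbit/s node-to-switch vs {uplink_gbit} Gbit/s uplink: "
              f"{internal / uplink_gbit:.0f}:1 oversubscription for off-chassis traffic")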

  • HPC wants fast everything and tons of RAM. Virtualization wants tons of RAM and tons of I/O. Non-parallelizable workloads need fast everything, tons of RAM and tons of I/O. As far as I can tell this thing seems like a proof of concept more than anything.

    • My guess would be that this is the 'almost as good; but built out of cheap commodity stuff and therefore a lot cheaper' stab at the same niche that Sun was going after with their "T1" and "T2" cores and the T1000 and successor servers based on them. I don't know how well it worked out in practice (obviously not well enough to save Sun; but this was just one product line among others); but the theory was to target certain web and small-database-many-users workloads that tended to have a large number of comput

    • Mostly-static web server, maybe? Hook it up to a SAN for storage, let it cache in RAM or on the internal SSD. A large number of small cores would alleviate many of the problems in handling thousands of concurrent connections, and if none of the pages require intensive calculation, it could work pretty well.
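      As a sketch of the "many small cores, thousands of connections" idea, here is a minimal one-static-file-worker-per-core layout sharing a port via SO_REUSEPORT (Linux-only; the port and layout are arbitrary illustrations, not anything Codethink ships):

          # Minimal sketch: one static-file HTTP worker per core, all sharing port 8080 via SO_REUSEPORT.
          import os
          import socket
          from http.server import HTTPServer, SimpleHTTPRequestHandler

          class ReusePortHTTPServer(HTTPServer):
              def server_bind(self):
                  # Let every worker bind the same port; the kernel spreads incoming connections.
                  self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
                  super().server_bind()

          def serve():
              ReusePortHTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()

          if __name__ == "__main__":
              workers = os.cpu_count() or 1        # e.g. 4 on a quad-core Armada XP node
              for _ in range(workers):
                  if os.fork() == 0:               # child: serve forever, never returns
                      serve()
              for _ in range(workers):             # parent: wait on the workers
                  os.wait()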

    • by zrq ( 794138 )

      I agree it probably is a proof of concept with limited applications at the moment, but how about something like Hadoop, which was designed to work on a distributed set of CPUs and disks?

      http://tech.slashdot.org/story/12/08/16/2343249/dremel-based-project-accepted-as-apache-incubator [slashdot.org]

  • I've been looking for a 1U, non-x86, low-power server (i.e. designed to run 24/7, with proper cooling, GigE, multiple disks, etc.) for quite some time... I read about various ARM servers as well as the Chinese Loongson MIPS-based boards, and have been reading about them regularly for a couple of years now...

    And what do all these things have in common? None of them are actually available to purchase anywhere!

  • It's pretty clear that data centers are rapidly turning into service providers that sell VM time and maybe add value in the form of SaaS. That's true even for internal data centers that are used only by the companies that own them — they just use a different billing procedure for their customers.

    So, a serious developer doesn't buy a 1u server and rent colo space. He buys VM time and any other services he needs, and lets the provider worry about the hardware. Much more cost effective, much easier to

  • http://www.theregister.co.uk/2012/08/13/xeon_vs_calxeda_arm_apache_bench/ [theregister.co.uk] Calxeda produced a biased benchmark that showed it was more efficient than a Xeon. Intel replied with a fair benchmark which shows the Xeon is still better both per watt and per core.
