Linux Software

23 Second Kernel Compiles

b-side.org writes "As a fine testament to how quickly Linux is absorbing technology formerly available only to the computing elite, an LKML member posted a 23-second kernel compile time to the list this morning, the result of building a 16-way NUMA cluster. The NUMA technology comes gifted from IBM and SGI. Just one year ago, a Sequent NUMA-Q would have cost you about $100,000. These days, you can probably build a 16-way Xeon (4x 4-way SMP) system off of eBay for two grand, and the NUMA comes free of charge!"
This discussion has been archived. No new comments can be posted.


  • 2 grand? I think not :(
    • That's what I was thinking...

      But here's a 4-P3 system [ebay.com], and it's nearly $1.5k, and a CPU alone [ebay.com] isn't more than $50, so...
      • Nice try... (Score:4, Informative)

        by castlan ( 255560 ) on Sunday March 10, 2002 @07:51AM (#3137485)
        But the reserve for this machine is $3850. The article says 16-way, which would be four of these four-way SMP systems. That also doesn't take into account the need for a high-bandwidth, low-latency interconnect (like SGI's NUMAlink). If you aren't expecting more than 16-way SMP, then you can probably get away with switched Gigabit Ethernet, as long as it is kept distinct from the normal network connectivity. If the Gigabit upgrade is still dual-port, then you are set. If not, you'll need another NIC - though you will only really need one for the whole cluster.

        Maybe instead of two grand, the poster meant twenty grand. Either way, $20K is better than $100K!
        • How well would FireWire, Fibre Channel, or SCA work as NUMA interconnects? How would they compare, price-wise and in effectiveness, to 1000baseT?
      • Re:hmph (Score:3, Insightful)

        by Paul Jakma ( 2677 )
        What about the interconnect? The machine in question is /not/ a simple Beowulf cluster, it's NUMA: Non-Uniform Memory Access, which implies there is some form of shared memory architecture, and that the main difference between that architecture and a normal computer's is that access to it is non-uniform.

        I.e., the CPUs in this computer share a common address space and can reference any memory; it's just that some memory (e.g. memory located at another node) has a higher cost of access than other memory (as opposed to a typical SMP system, where all memory has an equal 'cost of access').

        At the moment, under Linux, this implies that there is special hardware between those CPUs to provide the memory coherency - i.e. lots of bucks - because there is no software means of providing that coherency (at least not in Linux today).

        NB: normal Linux SMP could run fine on a NUMA machine (from the memory-management POV), but it would be slower because it would not take the non-uniformity into account.

        Anyway... despite what the post says, this machine is /not/ a collection of cheap PCs connected via 100Mb/1Gb Ethernet or another high-speed packet interconnect.
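        To make the "non-uniform cost of access" point concrete, here is a minimal C sketch that times a walk over memory allocated on the local node versus a remote node. It is only a sketch: it assumes a NUMA box with at least two nodes and the libnuma library (numa.h, link with -lnuma), a convenience layer that postdates this thread, and the buffer size and node choices are arbitrary.

            /* Minimal sketch: compare the cost of touching memory that lives on
             * the local node vs. a remote node.  Assumes libnuma is installed
             * (numa.h, -lnuma) and the machine really has >= 2 NUMA nodes. */
            #include <numa.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/time.h>

            #define BUF_SIZE (64UL * 1024 * 1024)   /* 64 MB per test buffer */

            /* Walk the buffer once, one read per cache line; return elapsed usec. */
            static long touch(volatile char *buf)
            {
                struct timeval t0, t1;
                gettimeofday(&t0, NULL);
                for (size_t i = 0; i < BUF_SIZE; i += 64)
                    (void)buf[i];
                gettimeofday(&t1, NULL);
                return (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
            }

            int main(void)
            {
                if (numa_available() < 0 || numa_max_node() < 1) {
                    fprintf(stderr, "need a NUMA system with at least two nodes\n");
                    return 1;
                }
                numa_run_on_node(0);                 /* pin ourselves to node 0 */

                char *local  = numa_alloc_onnode(BUF_SIZE, 0);                /* near RAM */
                char *remote = numa_alloc_onnode(BUF_SIZE, numa_max_node());  /* far RAM  */
                if (!local || !remote)
                    return 1;
                memset(local, 1, BUF_SIZE);          /* fault the pages in first */
                memset(remote, 1, BUF_SIZE);

                printf("local node : %ld us\n", touch(local));
                printf("remote node: %ld us\n", touch(remote));

                numa_free(local, BUF_SIZE);
                numa_free(remote, BUF_SIZE);
                return 0;
            }

        On real NUMA hardware the second number comes out measurably larger than the first, which is exactly the non-uniformity described above.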

  • Tempting... (Score:4, Insightful)

    by JoeLinux ( 20366 ) <joelinux AT gmail DOT com> on Sunday March 10, 2002 @05:26AM (#3137315)
    OK... I'm NOT about to start the proverbial deluge of people wanting to know about a Beowulf cluster of these things. But what I will ask is this: if it can do that for a kernel, I wonder how long it would take to do Mozilla, or XFree86? It'd be interesting to see those stats.

    JoeLinux
    • I concur: let's get us some benchmarks!
    • You might want to add OpenOffice to that list.

    • Re:Tempting... (Score:4, Interesting)

      by castlan ( 255560 ) on Sunday March 10, 2002 @12:10PM (#3138026)
      A Beowulf cluster of these? That's so two years ago... I'd love to see a NUMA-linked cluster of these! And I wonder how long it would take that cluster, running GNOME under XFree86, to have Mozilla render this page nested at -1!

      Seriously, I wonder how long it takes to boot. Every NUMA machine I've ever used took more than its fair share of time to boot... much more than a standard Unix server. It would be pretty funny if compiling the kernel turned out to be trivial compared to booting!
      • Re:Tempting... (Score:4, Informative)

        by hansendc ( 95162 ) on Sunday March 10, 2002 @01:42PM (#3138416) Homepage
        Seriously, I wonder how long it takes to boot.

        They do take a good bit of time to boot. In fact, it makes me much more careful when booting new kernels on them because if I screw up, I've got to wait 5 minutes, or so, for it to boot again! But, they do boot a lot faster when you run them as a single quad and turn off the detection of other quads.
  • by Anonymous Coward on Sunday March 10, 2002 @05:28AM (#3137322)
    No way. Just a no-CPU, no-memory case and motherboard costs $500. More like $2000 to $3000 for an old quad.

    • by Wells2k ( 107114 ) on Sunday March 10, 2002 @07:51AM (#3137486)
      No way. Just a no-CPU, no-memory case and motherboard costs $500. More like $2000 to $3000 for an old quad.


      I am actually in the process of building a quad Xeon right now with bits and pieces I bought off of eBay [ebay.com], and this is certainly doable. Not sure about the $500, but $2000-$3000 is high. I have the motherboard and memory riser now for $150, I am pretty sure that I can get a used rackmount case for $100 or so, the CPUs are going to cost around $60-70 each (P-III Xeon 500s), and memory is cheap as well.

      I figure I will be in it for around $1000 in the end. Yes, $500 is a low number, but I also know that your estimate of $2000-3000 is high.

    • Bzzzt...

      Old Compaq ML530 with 4 processors... $850.00

      I just got one off of eBay...

      Problem: Hard drives for an ML530 are overpriced modified drives that you HAVE to buy from Compaq.
      I puked when I found out that the 4 drives I need to get this running will cost me $3500.00 from Compaq.

      Damn those jerks and their custom hot-plug sleds... why can't they use standard hot-plug drives and mounts? (Same with the rack mount... a Compaq WILL NOT mount in a standard rack without heavy modification.)
      • by cmkrnl ( 2738 )

        Have to buy hard drives from CPQ my arse! What's stopping you from putting in a stock U160 controller and hanging drives off the back of that? $3500 buys a LOT of Atlas 10K3s these days.

        The CPQ drive rails mount standard SCSI drives and are freely available if you talk to your nearest CPQ reseller nicely. Compaq uses stock Quantum, Fujitsu and Seagate drives; they just change the vendor mode page entry to make it look like they are 'adding value' somehow.

        Curmudgeon
      • Does it use the older beige drives or the new ultra2/3? I may be able to help you.

      • Let me get this right... you buy a used computer, and then go straight to the manufacturer for replacement parts??? (Surely you know 'accessories' are one of their higher-margin profit centers!)

        Still... if you're in the Seattle, WA area, stop in the Boeing Surplus Retail Store [boeing.com]. I was there last week, and they had a bucket of what looked like 80-pin 2.1GB Compaq hot-plug drives. They were just sitting there next to a cash register like candy would be at a supermarket. I don't remember the price, but I want to say they wanted ~$5 each for 'em.

        They were also selling an Indigo ($50), lots of PCs (mostly old Dell OptiPlex models, $20 - $300, Pentium MMXs to P-IIs), and even a Barco data-grade projector ($2500). Fun place to go and blow half a day poking around.

  • 42 seconds (Score:4, Informative)

    by decep ( 137319 ) on Sunday March 10, 2002 @05:28AM (#3137323)
    23 seconds is impressive. I, personally, have seen a 42-second compile time of a 2.2-series kernel on an Intel 8-way system (8GB RAM, 8 550MHz PIII Xeons w/ 1MB L2). It was in the 1-minute range with a 2.4 kernel.

    Definitely the most impressive x86 system I have ever seen.
    • by Chexum ( 1498 )
      Dave S. Miller (the SPARC guy) boasted of a 42-second kernel compile in a post; although the exact article is not available in web archives, at least two quotes are on a 68k list [google.com] and a Hungarian Linux list [google.com].

      Remember, this was in 1996. Now, how much have we progressed in the last five or six years? :)

      • Well, considering a 36-processor SGI Challenge with 5GB of RAM would have cost you several multiples of six figures in 1996, I don't really think the comparison's valid...
    • by kigrwik ( 462930 ) on Sunday March 10, 2002 @06:14AM (#3137390)
      Arthur Dent: Ford, I've got it! "What's the kernel compile time in seconds on an Intel 8-way Xeon?"
      Ford: 42! We're made!
    • by Anonymous Coward
      That's nothing. On my 40MHz 386DX, compiling the 1.2 series kernel used to take just 90 minutes.
      • Bah. When I first started with Linux (around the 0.99pl12 days), the kernel compile took 4.5 hours on my 386SX-16 with 4MB of RAM.

        A year or two ago I compiled the NetBSD kernel on a MicroVAX II with 8MB of RAM and it took about 24 hours.
  • OK, it compiles a kernel hella fast. But can it be applied to other stuff? Obviously these fast-compiling machines are best for places where you have tons of users compiling all over the place, but will other compilers be as fast?
    • You mean, will it run Counter-Strike server?
    • Yes, it can be applied to other stuff...and there are always CPU-bound problems that can use the speed. (I hope someone knowledgeable about current computer graphics technology can comment on what could be done with the machine under discussion.)
      • Re:Is it worth it? (Score:2, Insightful)

        by castlan ( 255560 )
        By computer graphics technology, do you mean a render farm? That would be much better suited to a standard Beowulf cluster, because the interprocess communication is minimal. That is an example of an "embarrassingly parallel" computing problem. As for live graphics, an Onyx workstation doesn't benefit from CPU power so much as from its RealityEngine/InfiniteReality graphics pipeline. When you need better graphics performance, you can utilize multiple graphics pipelines. Some of the Onyx 3000s can use (I think) as many as 16 different IR3s for improved graphics output, like in RealityCenters.

        The point of this article isn't that kernel compilation is fast because it is usually CPU bound and 16 CPUs alleviate that problem. In fact, kernel compilation isn't strictly CPU bound... there are other performance limits too, especially disk performance. The significance of this article is that multithreaded kernel compiles benefit from the increased interprocess communication potential of NUMA architectures... performance would be much worse trying to spread that across a Beowulf cluster.

        While rendering (not displaying) graphics or basic number crunching does not benefit much from a NUMA setup compared to a Beowulf-style setup, some complex computations do benefit... computing the first million digits of Pi would use interprocess communication, as would a large-scale data mining application. It's been a few years since I was there, but I saw a huge cluster of Origin 2000s CC-NUMAed together with one Onyx2, which handled displaying the results of the data mining. (An Onyx2 is basically an Origin 2000 with a graphics pipeline. An Onyx 3000 without any graphics bricks is an Origin 3000.)
    • Re:Is it worth it? (Score:2, Informative)

      by oxfletch ( 108699 )
      These machines were designed to run huge databases. The I/O scalability isn't there in Linux yet the way it was in DYNIX/ptx, and there hasn't been as much work on the scalability of Linux as there was on ptx, but it'll get there pretty soon ;-)

      So yes, it will apply to other stuff, though maybe not as well as it could, quite yet.
  • by leviramsey ( 248057 ) on Sunday March 10, 2002 @05:33AM (#3137337) Journal

    ...who wondered, "I didn't know that Clive Cussler [numa.net] had gotten into cluster design?"

  • by James_G ( 71902 )
    Maybe this is a silly question.. but why would you want to compile a kernel in 23 seconds? I mean, ok, it's cool and everything, but is there some hidden application of this that I'm not seeing? Or are people really devoting hardcore time to this just because they can?

    6 years ago, a kernel compile for me took about 3 hours. These days, it takes less than 3 minutes, which is more than fast enough for my needs. So, you can push it down to 23 seconds with a few thousand $ - what's the point? Someone help me out here!
    • The shorter the compile time, the better. That's especially useful for companies that have lots of software to compile, e.g. Linux distributors.
    • Unless I am completely daft (please note if so), we're talking about a general progression in processing power, and a particular workload (e.g., compiling a piece of software) can serve as a rough benchmark against other systems, since it's a task many are familiar with and carries such a "wow" factor. It's not some strange optimization that only affects kernel compiles; it's about memory optimization (read the definition of the NUMA tech [techtarget.com]).
    • Re:Why? (Score:1, Interesting)

      by Anonymous Coward

      Think "outside the box" (sorry, horrible pun) of just kernel compiles, and I suspect you'll understand the potential value here.

      Let's say you run a decent-sized development house, employing a healthy number of coders. Now, as these folks churn through their days (nights?), they're gonna be ripping through a lot of compiles (if they're using C/C++/whatever). From my personal experience as a developer in the industry, a large portion of a developer's time is spent just compiling code.

      If you can implement cost-effective tech like this to reduce time spent on routine tasks like code burns, you increase productivity.

      Holy shit, I may have actually come up with a halfway decent justification for "hippie tech" to throw at the suit-wearing types... ;).

      temporary email, because MS deserves a good Linux box [mailto].

    • Well, for my own little pet project [obsession.se], a full rebuild takes ~5 minutes. On my nearly vintage K6-233, that is. One main reason I'm looking forward so much to a new computer system (besides the gaming, that is) is the chance to shrink that time by a significant amount. If I was a kernel developer, the ability to do a full rebuild in 23 seconds wouldn't hurt a bit, I'm fairly sure.
    • Re:Why? (Score:4, Informative)

      by quintessent ( 197518 ) <my usr name on toofgiB [tod] moc> on Sunday March 10, 2002 @05:53AM (#3137365) Journal
      is there some hidden application of this that I'm not seeing?

      How about doing other stuff really fast?

      3D modeling. 3D simulations. Even extensive Photoshop editing with complex filters can benefit from this kind of raw speed.

      It wouldn't be a catchy headline, though, if it said "render a scene of a house in 40 seconds--oh, and here are the details of the scene so you can be impressed if you understand 3D rendering..."

      There are hundreds of applications for this, many of which we don't do every day on our desktop simply because they take too much juice to be useful. With ever-faster computers, we will continue to envision and benefit from these new possibilities.
    • Because 6 years ago, you would have been asking, "Maybe this is a silly question.. but why would you want to compile a kernel in 3 minutes?"

    • Re:Why? (Score:3, Informative)

      by JabberWokky ( 19442 )
      Maybe this is a silly question.. but why would you want to compile a kernel in 23 seconds?

      That's not the point - kernel compilation (or the compilation of any large project like KDE or XFree[1]) is a fairly common benchmark for general performance. It chews up disk access and memory and works the CPU quite nicely.

      [1] Large is, of course, a relative thing. Also, some compilers (notably Borland) are incredibly efficient at compiling (sometimes by manipulating the language spec so that the programmer lines things up and the compiler can go through the source once, compiling as it goes).

      Still, benchmarks are suspect to begin with, and kernel compile time is a decent loose benchmark. What was that quote from Linus about the latest release being so good he benchmarked an infinite loop at just under 6 seconds? :)

      --
      Evan

      • Re:Why? (Score:2, Interesting)

        by cheezehead ( 167366 )
        Also, some compilers (notably Borland) are incredibly efficient at compiling

        You can say that again. Back in '95 or '96, Borland was claiming that their Delphi Object Pascal compiler compiled 350,000 lines per minute on a Pentium 90. I never checked this, but I do know that it was incredibly fast.

        What I do know from my own experience, however:
        Back in those days we built a system on Solaris, implemented in C++, that took about 1 hour to compile for about 100,000 lines of source code (the hardware was kind of modest compared to today's stuff).
        For a bizarre reason that I won't go into, we had to build part of the system on a PC platform. This was done using Borland C++ 3.0 for DOS. Some fool had configured something the wrong way, so all the 3rd-party libraries were recompiled from source every time. This was more than 1 million lines of C++. It took about 10 minutes on a 486/33!
    • Re:Why? (Score:1, Insightful)

      by Adnans ( 2862 )
      Maybe this is a silly question..

      Yes it is... :-)

      -adnans
    • Well, for people who really want to be up to date, they could recompile the kernel every time they log in or boot up.
    • Re:Why? (Score:5, Insightful)

      by LinuxHam ( 52232 ) on Sunday March 10, 2002 @07:53AM (#3137487) Homepage Journal
      but why would you want to compile a kernel in 23 seconds?

      I think this benchmark is used time and time again because it's really the only one that nearly any Linux user can compare to their own experience. If they said 1.2 GFLOPS, I (and I suspect most others) could only say "Wow, that sounds like a lot. I wonder what that looks like." OTOH, I have seen how long it takes to download 33 Slackware diskettes in parallel on a v.34 modem, and I still run 3 P75s today.

      I've been told that I will soon be deploying Beowulf HPC clusters to many clients, including universities and biomedical firms. If they were to tell me that the clusters would be able to do protein folds (or whatever they call it -- referring back to the nuclear simulation discussion) in "only 4 weeks", I wouldn't have a clue how to scale that relative to customary performance of the day.

      Sure, there are many other applications that run on clusters, but kernel compiles are the one thing that all of us do. They can give us an idea of what kind of performance you'd get out of other processor-intensive operations. And yes, many people will tell you there are so many variables with kernel compiles that it's ridiculous to compare the results.

      Check out beowulf.org and see what people are doing with cluster computing. I've always wanted to open a site that compiles kernels for you. Just select the patches you want applied and paste in your .config file. I'll compile it and send back to you by email a clickable link to download your custom tarball. Of course no one here would trust a remotely compiled kernel :)
    • Re:Why? (Score:5, Funny)

      by sohp ( 22984 ) <snewtonNO@SPAMio.com> on Sunday March 10, 2002 @08:25AM (#3137529) Homepage
      Never ask a geek, "why?". Just nod your head and back away slowly.
    • Re:Why? (Score:2, Informative)

      by pslam ( 97660 )
      Maybe this is a silly question.. but why would you want to compile a kernel in 23 seconds? I mean, ok, it's cool and everything, but is there some hidden application of this that I'm not seeing?

      I'll give you the benefit of the doubt and assume you're not just a trolling karma whore here. The answer is as obvious as always: faster is always better. If nothing yet needs that speed, it's because it wasn't previously viable and nothing got written with it in mind. If every computer were this fast, compiling huge projects would be viable on small workstations.

      And here's a great example - where I work there are three things that reduce productivity because of technical bottlenecks:

      • Internet speed (both ways).
      • Waiting for CVS (we've got this down to less than a minute for the whole tree).
      • Compiling.

      Of these, the major bottleneck is compiling. If it takes 30 seconds just to recompile a single source file and link everything, you end up writing and debugging code in "batch" fashion rather than in tiny increments. And it's 30 seconds where you're not doing anything except waiting for the results.

      If I had a machine like this on my desk, I'd probably get twice as much work done.

    • Re:Why? (Score:2, Funny)

      by satterth ( 464480 )
      Why you ask...

      'Cause 23 seconds is bragging rights for a bitchin' fast machine...

      /satterth

    • by rew ( 6140 )
      The kernel compile is a "benchmark" in the sense of a bunch of reasonably independent CPU-intensive jobs. If you do well on a kernel compile, you'll do very well on well-parallelizable hardcore computations as well.

      Here, we do kernel development. For us it's REAL time that we spend in compiling kernels. I really would like to have a machine that does 23 second kernel compiles...

      Roger.
  • by autopr0n ( 534291 ) on Sunday March 10, 2002 @05:38AM (#3137342) Homepage Journal
    But does anyone know how NUMA compares with, say, a Beowulf cluster? Does NUMA allow you to 'bind' multiple systems into one, so that I wouldn't need to rewrite my software? Did these guys use a stock GCC or something special? I know you would need to use MPI or similar for Beowulf. Is NUMA as scalable as Beowulf in terms of building huge-ass machines (of course, if I were going to expend the effort to do that, I might as well write custom software)?

    If this type of system would allow 'supercomputer' performance on regular programs... well... that would be really nice. How much work is it to setup?
    • by macinslak ( 41252 ) <macinslakNO@SPAMmac.com> on Sunday March 10, 2002 @06:26AM (#3137405)
      NUMA is rather different from Beowulf.

      NUMA is just a strategy for making computers that are too large for normal SMP techniques. I read a few good papers on sgi.com a couple of years ago that explained it in detail, and the NUMA link in the article has a quick definition. NUMA systems run one incarnation of one OS throughout the whole cluster, and usually imply some kind of crazy-ass bandwidth running between the different machines. I don't think you could actually create a NUMA cluster out of separate quad Xeon boxes, and it would probably be ungodly slow if you tried.

      There probably isn't any difference for kernel compiles between the two, but NUMA clusters don't require any reworking of normal multithreaded programs to utilize the cluster and can be commanded as one coherent entity (make -j 32, wheee).
      • NUMA stands for (Score:2, Informative)

        NUMA means Non-Uniform Memory Access. It is a kind of computer where you have shared memory, but you don't have the same access time from every processor to every memory location. Every processor has access to all of memory, but the access will sometimes take more or less time (depending on whether the memory belongs to another processor).

        In a Beowulf cluster, you don't have shared memory (except inside a node, if you have an SMP machine) and you must use message passing to communicate (unless you are using DSM - Distributed Shared Memory - maybe over SCI).
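        As a rough illustration of that difference: on a message-passing cluster, even handing a single integer from one node to another has to be spelled out explicitly, for example with MPI. This is a generic sketch, not anything the machine in the story ran; it assumes an MPI implementation is installed (build with mpicc, run with something like mpirun -np 2).

            /* Message passing on a Beowulf-style cluster: data on node 0 is
             * invisible to node 1 until it is explicitly sent.  On a NUMA box,
             * both processes could simply have read the same shared memory. */
            #include <mpi.h>
            #include <stdio.h>

            int main(int argc, char **argv)
            {
                int rank, value;
                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                if (rank == 0) {
                    value = 42;                      /* lives only in node 0's memory */
                    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                } else if (rank == 1) {
                    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);     /* ...until explicitly received */
                    printf("node 1 received %d\n", value);
                }

                MPI_Finalize();
                return 0;
            }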
      • You can buy the bits needed to build your own NUMA hardware system out of separate boxes relatively[1] cheaply. The speed depends on how you manage the memory and I/O. You'd need Linux to support it as a coherent whole though and I'm not sure that it does.

        [1] For large values of relatively.

    • NUMA vs Beowolf (Score:2, Informative)

      by castlan ( 255560 )
      Beowulf clusters are considered horizontal scaling, while NUMA clusters are considered vertical scaling. In my experience (SGI CC-NUMA), a NUMA cluster looks like a single computer system with a single shared memory. (SGI systems are even cache-coherent, so that there is minimal performance loss if your data is in the RAM of the most distant CPU... a significant issue with 256 nodes.) This means that you don't have to deal with MPI or other systems for handling the disparate memory of separate machines, so you can mostly code as if it were a single supercomputer. In fact, that is how SGI actually makes their supercomputers.

      NUMA clusters tend to have scalability problems related to the cache-coherence issue, so for a vertically scalable CC-NUMA box, you have to pay SGI big bucks. I haven't looked at IBM's NUMA technology, but since they own Sequent, they probably have similar capability.

      As for the work to set one up, SGI's 3000 line is fairly trivial: the hardware is designed to handle it, and I think you only need NUMAlink cables to scale beyond what fits inside a deskside case, if not a full-height case. Now if you have a wall of these systems, you will need the NUMAlink (née CrayLink) lovin'. As for an Intel-based system, I suspect it wouldn't be nearly as easy... unless your vendor provides the setup for you. On your own, you would need to futz with cabling the systems together, just like in a Beowulf - except that your performance depends on finding a reasonably priced, high-bandwidth, low-latency interconnect. Gigabit Ethernet won't scale very far, so going past 16 CPUs would be very unpleasant. If you expend the effort, you will end up with a cluster of machines that behaves very much like a "supercomputer", though. Good luck!
    • by jelson ( 144412 ) on Sunday March 10, 2002 @01:18PM (#3138315) Homepage
      NUMA is somewhere in between clustering (e.g. Beowulf) and SMP.

      On a normal desktop machine, you typically have one CPU and one set of main memory. The CPU is basically the only user of the memory (other than DMA from peripherals, etc.) so there's no problem.

      SMP machines have multiple CPUs, but each process running on each CPU can still see every byte of the same main memory. This can be a bottleneck as you scale up, since larger and larger numbers of processors that can theoretically run in parallel are being serviced by the same, serial memory.

      NUMA means that there are multiple sets of main memory -- typically one chunk of main memory for every processor. Despite the fact that memory is physically distributed, it still looks the same as one big set of centralized main memory -- that is, every processor sees the same (large) address space. Every processor can access every byte of memory. Of course, there is a performance penalty for accessing nonlocal memory, but NUMA machines typically have extremely fast interconnects to minimize this cost.

      Multi-computers, or clusters such as Beowulf, completely disconnect memory spaces from each other. That is, each processor has its own independent view of its own independent memory. The only way to share data across processors is by explicit message passing.

      I think the advantage of NUMA over Beowulf from the point of view of compiling a kernel is just that you can launch 32 parallel copies of gcc, and the cost of migrating those processes to processors is nearly 0. With Beowulf, you'd have to write a special version of 'make' that understood MPI or some other way of manually distributing processes to processors. Even with something like MOSIX, an OS that automatically migrates processes to remote nodes in a multicomputer for you, the cost of process migration is very high compared to the short lifetime of a typical instantiation of 'gcc', so it's not a big win. (MOSIX is basically control software on top of a Beowulf-style cluster, plus the kernel mods needed to do transparent process migration.)

      I hope this clarified the situation rather than further confusing you. :-)
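      In case it helps, here is a stripped-down sketch of what that "launch 32 parallel copies of gcc" step amounts to on a single-system-image machine (SMP or NUMA): fork one compiler per file and let the ordinary scheduler spread the children over whatever CPUs are free, no MPI required. The file names and compiler flags below are made-up placeholders, not the kernel's actual build commands.

          /* A miniature "make -j": one cc process per translation unit, placed
           * on CPUs by the ordinary kernel scheduler.  File names here are
           * hypothetical stand-ins for a real source tree. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              const char *files[] = { "sched.c", "fork.c", "exec.c", "signal.c" };
              int nfiles = sizeof(files) / sizeof(files[0]);

              for (int i = 0; i < nfiles; i++) {
                  pid_t pid = fork();
                  if (pid == 0) {
                      /* Child: compile one file, then exit. */
                      execlp("cc", "cc", "-O2", "-c", files[i], (char *)NULL);
                      perror("execlp");             /* only reached if exec fails */
                      _exit(127);
                  } else if (pid < 0) {
                      perror("fork");
                      return 1;
                  }
              }

              /* Parent: wait for every child compiler; the link step would go
               * here once all the objects exist. */
              int status;
              while (wait(&status) > 0)
                  ;
              return 0;
          }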
    • NUMA provides you with a single system image, so there's no need to rewrite your software. At the moment, we're working on default behaviours so that normal software works reasonably well. For something like a large database, we're providing APIs that will allow you to specify things about how processes interact with their memory and each other, allowing you to increase performance further.

      The hardware looks a little like 4 x 4-way SMP boxes with a huge fat interconnect pipe slung down the back (10-20 Gbit/s IIRC). But there's all sorts of smart cache-coherency / memory-transparency hardware in there too, to make the whole machine look like a single normal machine.

      Yes, I used stock GCC (Red Hat 6.2).

      Re scalability, the largest machine you can build out of this stuff would be a 64-proc P3/900 with 64GB of RAM. SGI can build larger machines, but I think they're IA-64 based, which has its own problems.

      It's not that hard to set up, but not something you would build in your bedroom ;-)
  • by Ed Avis ( 5917 ) <ed@membled.com> on Sunday March 10, 2002 @05:38AM (#3137343) Homepage
    You can also get 23-second kernel compiles in software using Compilercache [erikyyy.de] :-).
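    For the curious, the general idea behind a compiler cache is simple. The following is only a toy sketch of the concept, not Compilercache's actual code, and the cache directory is invented: hash the preprocessed source, and if an object file for that hash was saved earlier, reuse it instead of recompiling.

        /* Toy compiler-cache wrapper: cache objects under /tmp/ccache-demo,
         * keyed by an FNV-1a hash of the preprocessor output. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Hash the output of "cc -E file.c". */
        static unsigned long long hash_preprocessed(const char *src)
        {
            char cmd[512], buf[4096];
            unsigned long long h = 14695981039346656037ULL;
            size_t n;
            snprintf(cmd, sizeof cmd, "cc -E %s", src);
            FILE *p = popen(cmd, "r");
            if (!p)
                return 0;
            while ((n = fread(buf, 1, sizeof buf, p)) > 0)
                for (size_t i = 0; i < n; i++)
                    h = (h ^ (unsigned char)buf[i]) * 1099511628211ULL;
            pclose(p);
            return h;
        }

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s file.c file.o\n", argv[0]);
                return 1;
            }
            char cached[512], cmd[1024];
            snprintf(cached, sizeof cached, "/tmp/ccache-demo/%llx.o",
                     hash_preprocessed(argv[1]));

            if (access(cached, R_OK) == 0) {
                /* Hit: the same preprocessed source was compiled before. */
                snprintf(cmd, sizeof cmd, "cp %s %s", cached, argv[2]);
            } else {
                /* Miss: compile for real, then stash the object for next time. */
                snprintf(cmd, sizeof cmd,
                         "mkdir -p /tmp/ccache-demo && cc -c %s -o %s && cp %s %s",
                         argv[1], argv[2], argv[2], cached);
            }
            return system(cmd) != 0;
        }

    A real compiler cache would also fold the compiler version and flags into the hash; this sketch skips that for brevity.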
    • Ed Avis, I kiss you!!!

  • by m0RpHeus ( 122706 ) on Sunday March 10, 2002 @05:43AM (#3137352)
    This may be good news, but what the heck! They should at least have included the .config they used, so that we know what drivers/modules were compiled in - or maybe this is just a bare-bones kernel with enough to run the basics. We need to know the complexity of the configuration before we can really say it's fast.

  • by Wakko Warner ( 324 ) on Sunday March 10, 2002 @05:44AM (#3137354) Homepage Journal
    But where can I get a NUMA cluster for $80? Should I Ask Slashdot?

    - A.P.
  • That's on a "16 way NUMA-Q, 700MHz P3's, 4Gb RAM".

    I've been following that thread wondering if anybody would post better results with a dual Athlon or similar. Any lucky souls with really cool hardware who want to post benchmarks? In fact, it would be interesting to know how quickly the kernel compiles on a single P3/700, just to get an idea of how it scales.

  • So if I could compile a new kernel in less time than it takes to boot up, then a new kernel would be ready before the boot process was finished. So I'd have to restart with the new kernel, and if I start a new kernel compile too, then that boot wouldn't be able to finish before there was another new kernel, so I'd restart with the new new kernel and begin another compile...

    Maybe naming this box 'Zeno' wasn't such a good idea after all.

    (PS. You can now compile a kernel faster than Nautilus opens a folder. Go fig.)
  • ... just so long as I never have to program on Sequent iron, and its insidious operating system, ever again. Of course, that was 10 years ago, when DYNIX, trying to be the best of both worlds, was really neither AT&T nor Berkeley!

    ... of course the other problem was indeed the expense, leaving us in situations where we had to program at odd hours and on off days because the client couldn't afford a "development" machine.

    ... two issues which I would hope are solved by a 16-way Xeon for $2K ... hence making it a REAL-world bargain.
  • HELLLOOOOOO??? (Score:4, Insightful)

    by Anonymous Coward on Sunday March 10, 2002 @06:04AM (#3137382)
    You can't build a NUMA cluster worth a crap without a fast, low-latency interconnect.

    Sequent's NUMA Boxen use a flavor of SCI (Scalable Coherent Interface) which is integrated into the memory controller.

    While you can use some sort of PCI-based interconnect, the results are just plain not worth it.

    InfiniBand should be better, though I've heard the latency is too high to make this a marketable solution.

    Keep your eyes on IBM's Summit chipset-based systems. These are quads tied together with a "scalability port" and go up to 16-way. They should go to 32 or higher by 2003. That's when NUMA will -finally- be inevitable...
  • Unless it's two years old, I don't believe the price of that cluster is $2k. The cheapest quad-Xeon motherboard on Pricewatch is $500. If you cut that in half for being used, that's still $1k for just the motherboards. Add 16 processors, RAM, cases, NICs, drives, power supplies, and other parts, and there's no WAY you're coming in under $5k; $10k would be more realistic.

    On the other hand, if someone IS selling such a beast and I can win the bidding with a $2k bid, I might be tempted...
  • by boris_the_hacker ( 125310 ) on Sunday March 10, 2002 @06:21AM (#3137399) Homepage
    ... with the advent of this new technology and raw speed, you should actually be able to use them!

    [this is actually a joke]
  • by swirlyhead ( 95291 ) on Sunday March 10, 2002 @06:27AM (#3137406) Homepage

    I went and looked at the email and noticed that the very first patch he mentions was from the woman who came and gave a talk to EUGLUG [euglug.org] last spring. For one of our Demo Days we emailed IBM and asked them if they would send down someone to talk about IBM's Linux effort. We were kind of worried that they would send a marketing type in a suit who would tell us all about how much money they were going to spend, etc., etc. But we were very pleasantly surprised when they sent down a hardcore engineer who had been with Sequent until they were swallowed by IBM.

    She did a pretty broad-ranging overview of the Linux projects currently in place at IBM, and then dived into the NUMA-Q stuff that she had been working on. The main gist was that Sequent had these 16-way fault-tolerant redundant servers that needed Linux because the number of applications that ran on the native OS was small and getting smaller. It turned out that even the SMP code in the current tree at the time did not quite do it. She had some fairly hairy debugging stories; apparently sprinkling print statements through the code doesn't work too well when you're debugging boot time on a multiprocessor system, because it causes the kernel to serialize where in normal circumstances it wouldn't...

    I think the end result of all this progress with multiprocessor systems is that we'll be able to go down to the hardware store, buy more nodes, plug 'em into the bus, and compute away.

    • I don't know if she told you this too, but IBM saved Sequent from going out of business. I lived a couple of blocks from their headquarters in the Silicon Forest (Portland, OR), and in the last couple of years two or three of their buildings had the Sequent sign taken down. There were articles in the paper about how much trouble they were in, guys at Intel (Sequent was an Intel spin-off) saying they were sorry to see such a good idea go, etc. Sure, they had a few years left, but without IBM, NUMA-Q probably would have gone the way of the Alpha.
  • by CoolVibe ( 11466 ) on Sunday March 10, 2002 @06:31AM (#3137412) Journal
    Jeez... That's not even enough to finish a cup of coffee! Better scratch that remark in the kernel compile howto about doing something else while the kernel is building...

    What can you do in 23 seconds?

    • smoke 1/5th of a cigarette?
    • drink 1/9th of a cup of coffee?
    • finish 1/8th of a bottle of Mtn Dew?
    (Heck, I haven't had enough coffee yet, you guys think of more examples :)

    But otherwise, impressive. I wonder when Moore's law will have progressed so much we'll have systems like that in normal households...

  • by J4 ( 449 )
    Yeah, kick-ass system and all, but we're talking 'make clean && make dep && make' here, I hope - otherwise it's, like, really not saying much.
  • by wowbagger ( 69688 ) on Sunday March 10, 2002 @10:06AM (#3137692) Homepage Journal
    I would assert that a simple "time make -j32 bzImage" (which is what is being quoted) is not a very good benchmark as it is.

    Reason? Not enough information as to the options.
    • What kernel version was he building? (Actually, the LKML post did give this, but as a general statement the objection still stands.)
    • What were his compile options? Building a kernel with everything possible built as modules will take a great deal less time to build bzImage (the non-module part of the kernel) than would a kernel with everything built in.
    • Then there's the issue of buffer cache - to be consistent you would have to do a "make -j32 bzImage && make -j32 clean && time make -j32 bzImage" in order to have a consistent set of files in the VFS buffer cache (a small timing wrapper along these lines is sketched after this comment).

    Nevertheless:

    I WANT ONE
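    Along the lines of the buffer-cache point above, a small wrapper like the following does one warm-up build, a clean, and then times the real build. It is only a sketch; the -j value and the targets are simply the ones quoted in this thread.

        /* Warm the VFS buffer cache, clean, then time the rebuild so that
         * repeated runs measure the same thing. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        static void run(const char *cmd)
        {
            if (system(cmd) != 0) {
                fprintf(stderr, "command failed: %s\n", cmd);
                exit(1);
            }
        }

        int main(void)
        {
            run("make -j32 bzImage");   /* throwaway build: pulls sources into cache */
            run("make -j32 clean");     /* discard the objects, keep the cache warm  */

            struct timeval t0, t1;
            gettimeofday(&t0, NULL);
            run("make -j32 bzImage");   /* the measured build */
            gettimeofday(&t1, NULL);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            printf("kernel build took %.1f seconds\n", secs);
            return 0;
        }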
  • by nbvb ( 32836 ) on Sunday March 10, 2002 @10:19AM (#3137718) Journal
    http://samba.org/~anton/e10000/maketime_24

    Wheeeeeee!

    And seriously, I saw some comments about needing a really fast interconnect... check out Sun's Wildcat.

    --NBVB
  • It's not just a kernel compile. It's also bzipping, which takes a few seconds alone on most machines, and which can't effectively be done in parallel.

    Very nice. :)

  • My 386DX-40 with Weitek coprocessor and 8MB RAM, at 1.36 BogoMIPS, will compile a 2.2 kernel in only 27 hours 13 minutes.
  • That's good, but compilation is awfully parallelizable: You could (almost) just assign a computer to compile each individual source file; the total time would be the time to compile the slowest file plus link time. You could accomplish this with a shell script and a network file system -- what's the benefit of doing it with a shared-memory system like NUMA?
  • Well, a 23-second kernel compile is impressive and all, but the most important question I would have of such a machine is: How fast can it run Quake-3?

    If it can do 1280 * 1024 * 32bpp at 300 frames/second, then I'm getting one.

    :-),
    Schwab
