23 Second Kernel Compiles
b-side.org writes "As a fine testament to how quickly Linux is absorbing technology formerly available only to the computing elite, an LKML member posted a 23-second kernel compile time to the list this morning, the result of building a 16-way NUMA cluster. The NUMA technology comes gifted from IBM and SGI. Just one year ago, a Sequent NUMA-Q would have cost you about USD $100,000. These days, you can probably build a 16-way Xeon (4 x 4-way SMP) system off of eBay for two grand, and the NUMA comes free of charge!"
hmph (Score:1)
Re:hmph (Score:1)
But here's a 4-P3 system [ebay.com], and it's nearly $1.5k, and a CPU alone [ebay.com] isn't more than $50, so...
Nice try... (Score:4, Informative)
Maybe instead of two grand, the poster meant twenty grand. Either way, $20K is better than $100K!
Other options? (Score:2)
Re:Other options? (Score:2)
Thanks for the info on FireWire, though.
Re:Other options? (Score:2)
run SCSI over FC-AL, however. You can also run SCSI over OC384 or a 300 baud modem now (iSCSI).
Re:hmph (Score:3, Insightful)
I.e., the CPUs in this computer share a common address space and can reference any memory; it's just that some memory (e.g. memory located at another node) has a higher cost of access than other memory (as opposed to a typical SMP system, where all memory has an equal cost of access).
At the moment, under Linux, this implies that there is special hardware in between those CPUs to provide the memory coherency - i.e. lots of bucks - because there is no software means of providing that coherency (at least not in Linux today).
NB: normal Linux SMP could run fine on a NUMA machine (from the memory management POV), but it would be slower because it would not take the non-uniform bit into account.
anyway... despite what the post says, this machine is
Re:hmph (Score:1)
Tempting... (Score:4, Insightful)
JoeLinux
Re:Tempting... (Score:1)
Re:Tempting... (Score:1)
Re:Tempting... (Score:4, Interesting)
Seriously, I wonder how long it takes to boot. Every NUMA machine I've ever used took more than its fair share of time to boot... much more than a standard Unix server. It would be pretty funny if compiling the kernel turned out to be trivial compared to booting!
Re:Tempting... (Score:4, Informative)
They do take a good bit of time to boot. In fact, it makes me much more careful when booting new kernels on them, because if I screw up, I've got to wait five minutes or so for it to boot again! But they do boot a lot faster when you run them as a single quad and turn off the detection of other quads.
$500 for a quad xeon? (Score:3, Informative)
motherboard costs $500. More like $2000
to $3000 for an old quad.
Re:$500 for a quad xeon? (Score:5, Informative)
motherboard costs $500. More like $2000
to $3000 for an old quad.
I am actually in the process of building a quad Xeon right now with bits and pieces I bought off of eBay [ebay.com], and this is certainly doable. Not sure about the $500, but $2000-$3000 is high. I have the motherboard and memory riser now for $150, I am pretty sure that I can get a used rackmount case for $100 or so, the CPUs are going to cost around $60-70 each (P-III Xeon 500's), and memory is cheap as well.
I figure I will be in it for around $1000 in the end. Yes, $500 is a low number, but I also know that your estimate of $2000-3000 is high.
Re:$500 for a quad xeon? (Score:2)
Old Compaq ML530 with 4 processors... $850.00
I just got one off of eBay...
Problem: hard drives for an ML530 are overpriced modified drives that you HAVE to buy from Compaq.
I puked when I found out that the 4 drives I need to get this running will cost me $3500.00 from Compaq.
Damn those jerks and their custom hot-plug sleds... why can't they use standard hot-plug drives and mounts? (Just like the rack mount... a Compaq WILL NOT mount in a standard rack without heavy modification.)
Re:$500 for a quad xeon? (Score:2, Informative)
Have to buy hard drives from CPQ my arse! What's stopping you putting in a stock U160 controller and hanging drives off the back of that? $3500 buys a LOT of Atlas 10K3s these days.
The CPQ drive rails mount standard SCSI drives and are freely available if you talk to your nearest CPQ reseller nicely. Compaq uses stock Quantum, Fujitsu and Seagate drives; they just change the vendor mode page entry to make it look like they are 'adding value' somehow.
Curmudgeon
Re:$500 for a quad xeon? (Score:2)
Re: Compaq hot-plug drives (Score:2)
Let me get this right... you buy a used computer, and then go straight to the manufacturer for replacement parts??? (Surely you know 'accessories' are one of their higher-margin profit centers!)
Still... if you're in the Seattle, WA area, stop in the Boeing Surplus Retail Store [boeing.com]. I was there last week, and they had a bucket of what looked like 80-pin 2.1GB Compaq hot-plug drives. They were just sitting there next to a cash register like candy would be at a supermarket. I don't remember the price, but I want to say they wanted ~$5 each for 'em.
They were also selling an Indigo ($50), lots of PCs (mostly old Dell OptiPlex models, $20 - $300, Pentium MMXs to P-IIs), and even a Barco data-grade projector ($2500). Fun place to go and blow half a day poking around.
42 seconds (Score:4, Informative)
Definitely the most impressive x86 system I have ever seen.
in 1996 (Re:42 seconds) (Score:2, Interesting)
Remember, this was in 1996. Now, how much have we progressed in the last five or six years? :)
Re:in 1996 (Re:42 seconds) (Score:2)
Re:in 1996 (Re:42 seconds) (Score:2, Informative)
You appear to be equating clock speed with processor speed, but they're apples and oranges. If we consider all of the technological advances in modern ia32 processors vs. their earlier brethren, the comparison is even less favorable... modern processors should be exceptionally faster. But they aren't. There are two primary reasons for this: increasing inefficiency, and increasing complexity. Present-day programmers are far less motivated to write "good code" because they live under the fallacy that the processor is fast enough to run anything. ("No one will notice the difference.") In fact, they are generally incapable of generating efficient code, as they've never been taught to think that way. These people will surely spend an eternity in computing hell writing programs in BASIC on 1MHz machines with a 32x16 character console displayed on a 12" BW TV. (Any resemblance to the movie Brazil is unintentional.)
Complexity breeds more inefficiency. As the saying goes, "Make it work. Then make it fast."
As for my comments about Sparc... unless Sun is deploying reverse-engineered alien technologies, the core of the processor (i.e. how it adds and subtracts) hasn't changed much. It's the clock speed (how fast it runs through the "add" procedure) that makes it faster. The efficient adaptation of code to the native 64-bit environment also helps a lot. (Better code + better compiler yields faster execution.)
Re:42 seconds (Score:5, Funny)
Ford: 42! We're made!
Re:42 seconds (Score:1, Funny)
Re:42 seconds (Score:2)
A year or two ago I compiled the NetBSD kernel on a MicroVAX II with 8MB of RAM and it took about 24 hours.
Re:42 seconds (Score:2)
Is it worth it? (Score:1)
Re:Is it worth it? (Score:1)
Re:Is it worth it? (Score:2)
Re:Is it worth it? (Score:2, Insightful)
The point of this article isn't that kernel compilation is fast because it is usually CPU bound and 16 CPUs alleviate that problem. In fact, kernel compilation isn't strictly CPU bound... there are other performance limits too, especially disk performance. The significance of this article is that multithreaded kernel compiles benefit from the increased interprocess communication potential in NUMA architectures... performance would be much worse trying to spread that across a Beowulf cluster.
While rendering (not displaying) graphics or running basic number crunching does not benefit much from a NUMA setup as compared to a Beowulf-style setup, some complex computations do benefit... computing the first million digits of Pi would use interprocess communication, as would large-scale data mining applications. It's been a few years since I was there, but I saw a huge cluster of Origin 2000s CC-NUMAed together with one Onyx2, which handled displaying the results of the data mining. (An Onyx2 is basically an Origin 2000 with a graphics pipeline. An Onyx 3000 without any graphics bricks is an Origin 3000.)
Re:Is it worth it? (Score:2, Informative)
So yes, it will apply to other stuff, though maybe not as well as it could, quite yet.
Was I the only one... (Score:4, Funny)
...who wondered, "I didn't know that Clive Cussler [numa.net] had gotten into cluster design!"
Why? (Score:2)
Six years ago, a kernel compile for me took about 3 hours. These days, it takes less than 3 minutes, which is more than fast enough for my needs. So, you can push it down to 23 seconds with a few thousand dollars - what's the point? Someone help me out here!
Re:Why? (Score:2)
Re:Why? (Score:2)
Re:Why? (Score:1)
Re:Why? (Score:1, Interesting)
Think "outside the box" (sorry, horrible pun) of just kernel compiles, and I suspect you'll understand the potential value here.
Let's say you run a decent-sized development house, employing a healthy number of coders. Now, as these folks churn through their days (nights?), they're gonna be ripping through a lot of compiles (if they're using C/C++/whatever). From my personal experience as a developer in the industry, a large portion of a developer's time is spent just compiling code.
If you can implement cost-effective tech like this to reduce time spent on routine tasks like code burns, you increase productivity.
Holy shit, I may have actually come up with a halfway decent justification for "hippie tech" to throw at the suit-wearing types...
temporary email, because MS deserves a good Linux box [mailto].
Re:Why? (Score:2)
Re:Why? (Score:4, Informative)
How about doing other stuff really fast?
3D modeling. 3D simulations. Even extensive Photoshop editing with complex filters can benefit from this kind of raw speed.
It wouldn't be a catchy headline, though, if it said "render a scene of a house in 40 seconds--oh, and here are the details of the scene so you can be impressed if you understand 3D rendering..."
There are hundreds of applications for this, many of which we don't do every day on our desktop simply because they take too much juice to be useful. With ever-faster computers, we will continue to envision and benefit from these new possibilities.
Re:Why? (Score:2)
Because 6 years ago, you would have been asking, "Maybe this is a silly question.. but why would you want to compile a kernel in 3 minutes?"
Re:Why? (Score:3, Informative)
That's not the point - kernel compilation (or the compilation of any large project like KDE or XFree[1]) is a fairly common benchmark for general performance. It chews up disk access and memory and works the CPU quite nicely.
[1] Large is, of course, a relative thing. Also, some compilers (notably Borland's) are incredibly efficient at compiling (sometimes by shaping the language spec so that the programmer lines things up and the compiler can go through the source once, compiling as it goes).
Still, benchmarks are suspect to begin with, and kernel compile time is a decent loose benchmark. What was that quote from Linus about the latest release being so good he benchmarked an infinite loop at just under 6 seconds? :)
--
Evan
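[Editor's sketch of the benchmark ritual referred to above: a minimal, hedged example of how people typically time a 2.4-era kernel compile. The config path and the -j value are illustrative, not from the thread.]

    cd /usr/src/linux
    make mrproper                           # scrub the tree (this also deletes .config)
    cp ~/configs/known-good.config .config  # restore a saved configuration (path is illustrative)
    make oldconfig dep                      # 2.4-era prep: re-answer config questions, build dependencies
    time make -j16 bzImage                  # the number people actually quote

The -j value is usually matched roughly to the CPU count; quoting the time without the .config and the -j value is exactly the ambiguity complained about later in this thread.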
Re:Why? (Score:2, Interesting)
You can say that again. Back in '95 or '96, Borland was claiming that their Delphi Object Pascal compiler compiled 350,000 lines per minute on a Pentium 90. I never checked this, but I do know that it was incredibly fast.
What I do know from my own experience, however:
Back in those days we built a system on Solaris, implemented in C++, that took about 1 hour to compile for about 100,000 lines of source code (the hardware was kind of modest compared to today's stuff).
For a bizarre reason that I won't go into, we had to build part of the system on a PC platform. This was done using Borland C++ 3.0 for DOS. Some fool had configured something the wrong way, with the result that all the third-party libraries were recompiled from source every time. This was more than 1 million lines of C++. It took about 10 minutes on a 486/33!
Re:Why? (Score:2)
No it doesn't - to compile, it (and associated tools like cpp) goes through at *least* three times: once for the preprocessor, once for parsing function headers, and then it starts the actual compile process. As I said, Borland uses its language choice to force authors to write source that can be parsed straight through. Turbo Pascal, and the Modula-3ish Delphi, are languages constructed in such a way that the compiler can attack them with lots of compile-speed optimization tricks.
Of course, Borland's C/C++ compilers (which take multiple passes) are also speed demons, but their Pascal-based compilers are sickeningly fast. Quite amazing.
One reason Borland is fast for C/C++ is that they don't implement the full spec (e.g. trigraphs and escaped newlines are not implemented in the compiler proper), and they do little optimization.
Again, they tweak the language for the compiler, although I *think* they support trigraphs (I haven't used Borland in years, and may very well be wrong). They do optimize the code quite nicely, though... at least in the era when I used their compiler, it was usually ranked third in a large list of common compilers (Watcom sat at the top of that list forever). I used Borland, Turbo and Mix's Power C mostly (plus all the other things like Clipper, bizarre P-code COBOL compilers, etc.). Nowadays compiler optimization has fallen way down on my list of "important selling points".
Come to think of it, the language tool arena has really shrunk, just like the OS arena. And if Borland ~= Apple, gcc ~= Linux/BSD and Visual Studio/.NET ~= Windows, it has kind of similar dynamics (lots of other specialized stuff, just like there are lots of other specialized OSes, but nothing with a high profile or large market share). Interesting.
--
Evan
Re:Why? (Score:1, Insightful)
Yes it is...
-adnans
Re:Why? (Score:1)
Re:Why? (Score:5, Insightful)
I think this benchmark is used time and time again because it's really the only one that nearly any Linux user can compare against their own experience. If they said 1.2 GFLOPS, I (and I suspect most others) could only say "Wow, that sounds like a lot. I wonder what that looks like." OTOH, I have seen how long it takes to download 33 Slackware diskettes in parallel on a v.34 modem, and I still run 3 P75s today.
I've been told that I will soon be deploying Beowulf HPC clusters to many clients, including universities and biomedical firms. If they were to tell me that the clusters will be able to do protein folds (or whatever they call it -- referring back to the nuclear simulation discussion) in "only 4 weeks", I won't have a clue as to how to scale that relative to customary performance of the day.
Sure, there are many other applications that are run on clusters, but kernel compiles are the ones that all of us do. They can give us an idea of what kind of performance you'd get out of other processor-intensive operations. And yes, many people will tell you there are so many variables with kernel compiles that it's ridiculous to compare the results.
Check out beowulf.org and see what people are doing with cluster computing. I've always wanted to open a site that compiles kernels for you. Just select the patches you want applied and paste the
Re:Why? (Score:2)
I too started in '96, and I think it's rare to find anyone who builds their own distro here. I may get six or seven "I roll my own" replies, but there are 600,000 people here. You should try compiling your own kernel sometime. You'll probably learn so much more about Linux's capabilities just by looking at all of the features that are available but deactivated by default. I'd bet you'd be shocked at what features you could activate by doing a simple recompile.
Of course, if you don't recompile because you're a desktop user and don't need to tweak the system's performance or support odd hardware, then we would love to know your name even more, so we could make you our poster boy for "Linux is not too hard for the average user."
Re:Why? (Score:5, Funny)
Re:Why? (Score:2, Informative)
I'll give you the benefit of the doubt and assume you're not just a trolling karma whore here. The answer is as obvious as always: faster is always better. If nothing needed that speed before, it's because it wasn't previously viable and nothing got written with it in mind. If every computer were this fast, compiling huge projects would become viable on small workstations.
And here's a great example - where I work there are three things that reduce productivity because of technical bottlenecks:
Of these, the major bottleneck is compiling. If it takes 30 seconds just to recompile a single source file and link everything, you end up writing and debugging code in "batch" fashion rather than in tiny increments. And it's 30 seconds where you're not doing anything except waiting for the results.
If I had a machine like this on my desk, I'd probably get twice as much work done.
Re:Why? (Score:2, Funny)
'Cause 23 seconds is bragging rights for a bitchin' fast machine...
/satterth
Re:Why? (Score:2)
Here, we do kernel development. For us it's REAL time that we spend in compiling kernels. I really would like to have a machine that does 23 second kernel compiles...
Roger.
ok this is NOT a troll (Score:4, Interesting)
If this type of system would allow 'supercomputer' performance on regular programs... well, that would be really nice. How much work is it to set up?
Re:ok this is NOT a troll (Score:5, Informative)
NUMA is just a strategy for building computers that are too large for normal SMP techniques. I read a few good papers on sgi.com a couple of years ago that explained it in detail, and the NUMA link in the article has a quick definition. NUMA systems run one incarnation of one OS throughout the whole cluster, and usually imply some kind of crazy-ass bandwidth running between the different machines. I don't think you could actually create a NUMA cluster out of separate quad Xeon boxes, and it would probably be ungodly slow if you tried.
There probably isn't any difference for kernel compiles between the two, but NUMA clusters don't require any reworking of normal multithreaded programs to utilize the cluster, and can be commanded as one coherent entity (make -j 32, wheee).
NUMA stands for (Score:2, Informative)
In a Beowulf cluster, you don't have shared memory (unless inside a node, if you have an SMP machine) and you must use message passing to communicate (unless you are using DSM -- Distributed Shared Memory -- maybe with SCI).
You can actually - Dolphin SCI (Score:2)
[1] For large values of relatively.
NUMA vs Beowulf (Score:2, Informative)
NUMA clusters tend to have scalability problems related to the cache coherence issue, so for a vertically scalable CC-NUMA box, you have to pay SGI the big bux. I haven't looked at IBM's NUMA technology, but if they own Sequent, then they probably have similar capability.
As for the work to set one up, SGI's 3000 line is fairly trivial; the hardware is designed to handle it, and I think you only need NUMAlink cables to scale beyond what fits inside a deskside case, if not a full-height case. Now if you have a wall of these systems, you will need the NUMAlink (nee CrayLink) lovin'. As for an Intel-based system, I suspect it wouldn't be nearly as easy... unless your vendor provides the setup for you. On your own, you would need to futz with cabling the systems together, just like in a Beowulf, except that your performance depends on finding a reasonably priced, high-bandwidth, low-latency interconnect. Gigabit Ethernet won't scale very far, so going past 16 CPUs would be very unpleasant. If you expend the effort, you will end up with a cluster of machines that behave very much like a "supercomputer" though. Good luck!
Re:ok this is NOT a troll (Score:4, Informative)
On a normal desktop machine, you typically have one CPU and one set of main memory. The CPU is basically the only user of the memory (other than DMA from peripherals, etc.) so there's no problem.
SMP machines have multiple CPUs, but each process running on each CPU can still see every byte of the same main memory. This can be a bottleneck as you scale up, since larger and larger numbers of processors that can theoretically run in parallel are being serviced by the same, serial memory.
NUMA means that there are multiple sets of main memory -- typically one chunk of main memory for every processor. Despite the fact that memory is physically distributed, it still looks the same as one big set of centralized main memory -- that is, every processor sees the same (large) address space. Every processor can access every byte of memory. Of course, there is a performance penalty for accessing nonlocal memory, but NUMA machines typically have extremely fast interconnects to minimize this cost.
Multi-computers, or clustering, etc. such as Beowulf completely disconnects memory spaces from each other. That is, each processor has its own independent view of its own independent memory. The only way to share data across processors is by explicit message-passing.
I think the advantage of NUMA over Beowulf, from the point of view of compiling a kernel, is just that you can launch 32 parallel copies of gcc, and the cost of migrating those processes to processors is nearly 0. With Beowulf, you'd have to write a special version of 'make' that understood MPI or some other way of manually distributing processes to processors. Even with something like MOSIX, an OS that automatically migrates processes to remote nodes in a multicomputer for you, the cost of process migration is very high compared to the typically short lifetime of an instantiation of 'gcc', so it's not a big win. (MOSIX is basically control software on top of a Beowulf-style cluster, plus the kernel mods needed to do transparent process migration.)
I hope this clarified the situation rather than further confusing you.
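[Editor's sketch of the "same address space, non-uniform cost" point above, using numactl, a later userspace tool not mentioned in the thread. Node numbers and the -j value are illustrative; this assumes a NUMA-aware kernel.]

    numactl --hardware                        # list the nodes, their CPUs and memory, and the access-distance matrix
    time numactl --cpunodebind=0 --membind=0 make -j4 bzImage
                                              # memory on the same node as the CPUs: local, cheap accesses
    time numactl --cpunodebind=0 --membind=1 make -j4 bzImage
                                              # memory deliberately placed on a remote node: every cache miss
                                              # crosses the interconnect, so the same build runs measurably slower

The whole point of NUMA-aware kernel work like the patches in the article is to get the first placement automatically, without anyone having to bind anything by hand.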
Re:ok this is NOT a troll (Score:2, Informative)
The hardware looks a little like 4 x 4-way SMP boxes, with a huge fat interconnect pipe slung down the back (10-20 Gbit/s IIRC). But there's all sorts of smart cache coherency / memory transparency hardware in there too, to make the whole machine look like a single normal machine.
Yes, I used stock GCC (Red Hat 6.2).
Re: scalability, the largest machine you can build out of this stuff would be a 64-proc P3/900 with 64GB of RAM. SGI can build larger machines, but I think they're ia64-based, which has its own problems.
It's not that hard to set up, but not something you would build in your bedroom.
Alternatively... (Score:4, Funny)
Thank You!!!! (Score:2)
Re:Why is this alternative funny? (Score:3, Informative)
Re:Why is this alternative funny? (Score:2)
kernel     compilercache     time
default    no                5m28.860s
default    yes, but empty    6m56.490s
default    yes, filled       2m51.900s
modified   yes, filled       3m58.730s
It looks pretty safe, especially if you've been burned by a badly written Makefile. The FAQ explains the difference between compilercache and makefiles pretty well.
As a bonus, compilercache ignores changes made to the comments (since it uses the preprocessed source (with the comments stripped) to calculate an MD5 checksum), so you can fix/add comments without worrying about an extra long compile.
I probably won't use it, though -- my projects tend to require only one file to be recompiled per build.
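[Editor's sketch of the compiler-cache pattern described above. The exact invocation depends on the wrapper you install; "compilercache gcc" here is a stand-in, and the -j value is illustrative. The wrapper hashes the preprocessed source plus the compiler flags, and on a hit serves the cached object file instead of recompiling.]

    time make CC="compilercache gcc" -j4 bzImage   # first build: cold cache, pays full price
    make clean                                     # throw the objects away; only the cache survives
    time make CC="compilercache gcc" -j4 bzImage   # second build: identical preprocessed sources hash
                                                   # to the same entries, so most compiles are cache hits

That matches the table above: an empty cache costs a little extra (everything is compiled and stored), while a filled cache roughly halves the build time.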
this may be good but... (Score:3, Insightful)
Yeah, that's great, and all... (Score:4, Funny)
- A.P.
That's on "old" hardware too (Score:2)
That's on a "16 way NUMA-Q, 700MHz P3's, 4Gb RAM".
I've been following that thread, wondering if anybody would post better results with a dual Athlon or similar. Any lucky souls with really cool hardware who want to post benchmarks? In fact, it would be interesting to know how quickly the kernel compiles on a single P3/700, just to get an idea of how it scales.
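[Editor's back-of-envelope on that scaling question, under the loud assumption of perfect linear speedup across the 16 CPUs. Real parallel builds are never perfectly efficient, so the true single-CPU time would come in somewhat under this figure.]

    # 16 CPUs x 23 seconds = an upper bound on the single-P3/700 build time
    echo $((23 * 16))    # 368 seconds, i.e. roughly six minutes at most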
Re:That's on "old" hardware too (Score:2, Informative)
Mod this up too (Score:2)
It takes me 23 seconds to boot (Score:1)
Maybe naming this box 'Zeno' wasn't such a good idea after all.
(PS. You can now compile a kernel faster than Nautilus opens a folder. Go fig.)
Who cares about the compile time ... (Score:2)
... of course the other problem was indeed the expense, leaving us in situations where we had to program at odd hours and on off-days because the client couldn't afford a "development" machine.
... two issues which I would hope are solved by a 16-way Xeon for $2K
HELLLOOOOOO??? (Score:4, Insightful)
Sequent's NUMA Boxen use a flavor of SCI (Scalable Coherent Interface) which is integrated into the memory controller.
While you can use some sort of PCI-based interconnect, the results are just plain not worth it.
InfiniBand should be better, though I've heard the latency is too high to make this a marketable solution.
Keep your eyes on IBM's Summit chipset based systems. These are quads tied together with a "scalability port" and go up to 16-way. They should go to 32 or higher by 2003. That's when NUMA will -finally- be inevitable...
Re:HELLLOOOOOO??? (Score:2)
Data General had been doing 32-way NUMA boxes. (Score:2)
They've since been bought by EMC and shut down, but they had it working *and* scaling to 32 CPUs, and on the market. 64-CPU systems were well on the way, but I don't recall if they finished them.
4x4 cluster for $2k? Show me. (Score:1)
On the other hand, if someone IS selling such a beast and I can win the bidding with a $2k bid, I might be tempted...
Re:4x4 cluster for $2k? Show me. (Score:2, Informative)
Great news for mozilla and nautilus... (Score:5, Funny)
[this is actually a joke]
Re:Great news for mozilla and nautilus... (Score:1)
Re:Great news for mozilla and nautilus... (Score:1)
But WRT my original post, take it with a pinch of salt; I'm British.
Re:Great news for mozilla and nautilus... (Score:2)
As for Mozilla? GREAT project, the Web *needs* Mozilla, but for my desktop? I'll stick with Galeon, thanks
IBM and Sequent being good citizens (Score:5, Informative)
I went and looked at the email and noticed that the very first patch he mentions was from the woman who came and gave a talk to EUGLUG [euglug.org] last spring. For one of our Demo Days we emailed IBM and asked them if they would send down someone to talk about IBM's Linux effort. We were kind of worried that they would send a marketing type in a suit who would tell us all about how much money they were going to spend, etc., etc. But we were very pleasantly surprised when they sent down a hardcore engineer who had been with Sequent until they were swallowed by IBM.
She did a pretty broad-ranging overview of the Linux projects currently in place at IBM, and then dived into the NUMA-Q stuff that she had been working on. The main gist was that Sequent had these 16-way fault-tolerant redundant servers that needed Linux, because the number of applications that ran on the native OS was small and getting smaller. It turned out that even the SMP code in the current tree at the time did not quite do it. She had some fairly hairy debugging stories; apparently sprinkling print statements through the code doesn't work too well when you're dealing with boot time on a multiprocessor system, because it causes the kernel to serialize where in normal circumstances it wouldn't...
I think the end result of all this progress with multiprocessor systems is that we'll be able to go down to the hardware store, buy more nodes, plug 'em into the bus, and compute away.
Re:IBM and Sequent being good citizens (Score:2)
23 seconds?!? (Score:5, Funny)
What can you do in 23 seconds?
But otherwise, impressive. I wonder when Moore's law will have progressed so far that we'll have systems like that in normal households...
Re:23 seconds?!? (Score:4, Funny)
Re:23 seconds?!? (Score:1)
Including you ?
Re:23 seconds?!? (Score:1)
23 seconds? (Score:2)
make bzImage is not a very good benchmark (Score:3, Informative)
Reason? Not enough information as to the options.
Nevertheless:
I WANT ONE
Mod parent up please (Score:2)
1. 2.4.18, and I also told you what patches I was using (though some of them won't be published until next week).
2. OK, I just posted the config file. http://lse.sourceforge.net/numa/config.mem
3. I did five kernel compiles in a row (though I omitted to mention that).
Hi Martin!
--
Daniel
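[Editor's sketch of reproducing the posted numbers, using the config URL from the comment above. The kernel version and -j value are from the thread; the source path, the use of wget, and running the build several times are assumptions, and the unpublished patches obviously can't be reproduced here.]

    cd /usr/src/linux-2.4.18
    wget http://lse.sourceforge.net/numa/config.mem -O .config   # the config posted above
    make oldconfig dep                                           # 2.4-era prep steps
    time make -j16 bzImage                                       # repeat several times and compare runs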
Sorry, Anton Blanchard Wins (Score:3, Informative)
Wheeeeeee!
And seriously, I saw some comments about needing a really fast interconnect... check out Sun's Wildcat.
--NBVB
Not just a kernel compile... (Score:2)
Very nice. :)
and I hold the other record (Score:2)
at 1.36 bogomips, will compile a 2.2 kernel in only 27 hours 13 minutes.
Compilation is highly parallelizable (Score:2)
The Real Question (Score:2)
Well, a 23-second kernel compile is impressive and all, but the most important question I would have of such a machine is: How fast can it run Quake-3?
If it can do 1280 * 1024 * 32bpp at 300 frames/second, then I'm getting one.
Schwab
Re:Who would have guessed... (Score:4, Interesting)
Re:Who would have guessed... (Score:1)
Recently I "experienced" compiling the 2.2.19 kernel on an Intel 486 DX 100 with 16MB RAM. It took about 4.5 hours - that's about 700 times slower.
Re:Who would have guessed... (Score:2)
When it's upgrade time, I can start a compile, go to the pub, have a few beers, go back, see that the compile failed (because of , sparc32 and Linux 2.4 don't seem to mix very well without some heavy tweaking), fix the mistake, start again, and go back to the pub :)
Thanks to my slow sparcstations, I have a life! :)
Re:Who would have guessed... (Score:3, Funny)
Re:Who would have guessed... (Score:2, Interesting)
I'm afraid I have to disagree entirely, mate. I'm no neo-Luddite by any stretch of the imagination... I too spend a good proportion [English is hilarious] of my time on the internet. I could, indeed, be said to be leading a 'double life' by the unobservant. Notwithstanding Mr Postman, Still and Talbot, whom I cannot speak for, your assessment of the intrinsically 'good' or 'evil' nature of technology is far from clearly correct. Your allusion to the internet as a block of marble, awaiting us to sculpt meaning into its form by using it, is desperately far from the truth. For example, books are not tabula rasa objects, waiting for readers to impress upon them meaning and effect. When you read the Bible, the Koran, Hermann Hesse or whoever, is it not the author that steers your experience of reading?
There are many forms of media in our lives, and the internet is just one of them; that it is accessible to many folks does not detract from its power to affect, sometimes to enormous proportions, our culture, purpose and ultimately 'mystical' existence.
Some instances of a particular medium may merely 'incline' us to consider something... bland books, poor television programs or vacuous theatre productions, but there are some instances that inspire us and drive us to better our existence, or in some cases cause blight, cruelty and ruinous events... Language, for example, has allowed our brains to extend far beyond the confines of our bony skulls and enables us to communicate and share ideas. If you've ever been in the presence of a great speaker, you will know instantly that words are not merely empty sounds awaiting our interpretations, but are weighted vehicles for the influential dissemination of ideas, and are very seldom 'neutral'.
So my point is this: the internet is not a neutral object awaiting our interpretation, but a rich and varied medium that can influence you... it can shock, scare, amuse, frighten... and more things than you can find in a thesaurus or dictionary, to boot; and it is NOT guided or limited by your own mind...
Off by one... (Score:1)
Re:Why use NUMA? (Score:2)
If the end goal of this was just to compile kernels fast, you would be right. These numbers were posted because everybody knows how fast their own kernel compiles. If someone posts TPC-H or SpecWeb99 numbers, no one notices; with a kernel compile, normal people can say, "Wow, that is fast!"