Linux Software

Pixar Eclipses Sun with Linux/Intel

lieutenant writes "Pixar Animation Studios is replacing servers from Sun in its render farm with eight new blade servers from Rackspace. In all, the blade system contains 1,024 Intel 2.8GHz Xeon processors, and it runs the open-source Linux operating system. Pixar has ported its Renderman software to run on Linux." I'd love to see their electric bill ;)
This discussion has been archived. No new comments can be posted.

  • Any word on... (Score:2, Interesting)

    How much faster can they render now than on the old Sun servers?

    (imagine a Beowulf cluster of THESE!)

    • 1,024 Intel 2.8GHz Xeon processors... I'd love to see their electric bill ;)

      Well, ignoring the power requirements of RAM, bus controllers, network adapters, hard disks which are probably used for boot only...

      Intel rates these things for 74.0W thermal dissipation [intel.com], which is a pretty good measure of the electrical power consumed... since, unless something is badly wrong, your Xeon chip will not dissipate energy as light or sound.

      74W x 1,024 = 75,776W continuous.

      Assume they're on 24/7. Assume a cost of $0.06 per kWh, including distribution, debt retirement, Ontario's capped electric rates, etc.

      There are 30 days in the average month. There are 24 hours in the average day [grin]. Therefore, there are 720 hours per month.

      720 hours @ 75,776W = 54,558,720kWh.

      Just a little over $3.2 million per month.

      I'd imagine it's less than that; their electric rate is probably somewhat less based on their consumption. But consider that the depreciation on that hardware is probably a greater monthly expense than the electricity to power it...

      I'm glad Linux is ready for Pixar, because Linux sure ain't ready for the desktop [glowingplate.com].

      • by Anonymous Coward on Sunday February 09, 2003 @01:24PM (#5265324)
        Isn't

        720 hours @ 75,776W = 54,558,720kWh.

        actually 54,558,720 Wh (watt-hours, not kilowatt-hours), which is about 54,559 kWh,

        making it not $3.2 million, but only roughly $3,300 a month?
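
        For concreteness, the corrected arithmetic as a minimal Python sketch; the only inputs are the figures already assumed in this thread (74 W per CPU, 1,024 CPUs, a 720-hour month, $0.06/kWh):

        # Back-of-the-envelope power cost for the render farm, using the
        # figures quoted in this thread (74 W per Xeon, 1,024 CPUs, $0.06/kWh).
        cpus = 1024
        watts_per_cpu = 74.0
        rate_per_kwh = 0.06          # dollars per kilowatt-hour
        hours_per_month = 24 * 30    # 720 hours

        total_kw = cpus * watts_per_cpu / 1000.0       # ~75.8 kW continuous
        kwh_per_month = total_kw * hours_per_month     # ~54,559 kWh
        cost_per_month = kwh_per_month * rate_per_kwh  # ~$3,274

        print(f"{total_kw:.1f} kW -> {kwh_per_month:,.0f} kWh/month -> ${cost_per_month:,.2f}/month")
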
      • 3.2 million for a thousand processors?

        Raise your hand if you're paying $3,200 a month for a single processor, please. No? Somebody needs to double-check their estimate. :-)

      • Well, ignoring the power requirements of RAM, bus controllers, network adapters, hard disks which are probably used for boot only...

        The cost of powering the computers themselves pales compared to the cost of air conditioning for the data center. That is the single biggest cost; in a data center in New York it even outweighs rent!
      • You know, I really don't know what the logic is in arguing that. The people who are using Linux on their desktops now know Linux well enough to completely disregard that. I suppose you will scare newbies away until someone gives them a Knoppix CD to play with, but MS already spends BILLIONS on that; your little rant is insignificant in comparison.

        Maybe Linux is more than ready for the desktop; it just isn't ready for your narrow view of what a desktop should be. And it is not that I really care that you are not satisfied, but bitching to a bunch of volunteers seems a bit insane, because I don't think they really care that you are not satisfied, either.

        Regardless, Linux isn't going away anytime soon (at least not in my lifetime), so why don't you create a project devoted to "making it ready for the desktop according to my definitions" instead of wasting your life away making complaints about the fruits of a VOLUNTEER EFFORT.

        Do you complain about the Salvation Army or Goodwill Industries, as well?
        • Salvation Army sure ain't ready for the battlefield.
        • by BigBlockMopar ( 191202 ) on Sunday February 09, 2003 @02:53PM (#5265869) Homepage

          You know, I really don't know what the logic is in arguing that. The people who are using Linux on their desktops now know Linux well enough to completely disregard that. I suppose you will scare newbies away until someone gives them a Knoppix CD to play with,

          I use Linux on my desktop. It's great. It's beautiful. But it's *still not ready* for the desktop - as in, it's still not ready to compete with Windows - because it's still more comparable with Windows 3.1 than it is with Windows XP.

          Maybe Linux is more than ready for the desktop; it just isn't ready for your narrow view of what a desktop should be. And it is not that I really care that you are not satisfied, but bitching to a bunch of volunteers seems a bit insane, because I don't think they really care that you are not satisfied, either.

          Maybe my viewpoint is narrow. Or maybe I'm as big a power user as you can get without actually *thinking* in C.

          Note that I administer my own domain on a server farm of Linux and OpenBSD machines which live in my bedroom.

          Primarily, my main desktop is an e-mail drone. If Evolution actually worked (i.e. didn't take 8 minutes to exit on my machine), then it would be fine. But without a spellchecker competitive with prevalent software, Linux/KDE or Linux/Gnome doesn't even make a good e-mail drone. The spellchecker is so 1995. I want an underlining spell checker.

          Does that give me a narrow viewpoint, because I expect features which I could take for granted among the apps of more established operating systems? Apparently.

          Your lack of a realistic viewpoint and your immediate dismissal of my page as FUD is symptomatic of what is wrong with the Linux/OS community, and why I'm starting to believe that Linux will never be able to get its shit together enough to be more than a fringe group like Apple users.

          Try using Windows 2000 or XP sometime. Look at it from a user's perspective - you know, the sort of idiot who opens e-mail viruses and who makes up the *bulk* of the computer-using public. From that perspective, Windows is great. It does everything reasonably well, whether you're a newbie or an expert. Linux doesn't do that yet, and therefore isn't as good a desktop solution as Windows.

          I'm waiting for the day someone can prove me wrong, but until you get some actual real-world experience with what end-users want from their operating systems, you'll still just be a whiny 14-year-old living in Mommy and Daddy's basement.

          • "you'll still just be a whiny 14-year-old living in Mommy and Daddy's basement."

            Or a financial analyst for a leading semiconductor supplier. Besides, you are the one who is doing the whining, but I digress . . .

            You seem to have established many assumptions about how a desktop should work; one such assumption is that desktops, servers, and every other MS product should be an ENTIRELY separate system. Therefore, though Linux is good for the server, it ain't ready for the Desktop. However, I believe the post-Internet era changes this completely (ironic that you compare Linux to Windows 3.1 . . .).

            The fact of the matter is that, up until open source (to me, synonymous with "the Internet"), all software came as square pegs. This is because square pegs are much easier to produce than customized pegs. Proprietary software, which doesn't utilize the power of the Internet to its fullest, is limited to the square peg model. This can work great in niche markets (which is why MS is trying to make niches all over the place), but it depends on controlling all standards within the market, which is increasingly difficult as the Internet progresses and as more and more people learn how to program. And, as we see with the adoption of Linux, when you don't have to use a square peg, a lot can be gained.

            But, I suppose my biggest argument is based on the fact that the majority of the world does NOT own desktops. The definition of "Desktop" depends on this majority. For these users, who lack the assumptions you have been conditioned with, Linux is already a superior product and is being adopted at a very fast rate. As time passes, thanks to Linux, this majority will gain access to "the Desktop." Of course, since Linux is constantly improving itself, I will never really be able to prove that it was ready when I made this post, but reality tends to be grey like that, not as black and white as some people see it.

            However, I am afraid that without a Madonna song playing in the background and a video of someone flying around, I have failed to convince you. Oh, well . . . I can't say I cared much to begin with. Linux is definitely ready for the desktop as far as I am concerned, regardless of what you think.

            Disclaimer:
            I am not the spokesman of Open Source. Nobody is the spokesman of Open Source. Using stereotypes is an indication of a simplistic mind struggling to oversimplify a complex world. Using stereotypes for the Open Source community is downright ludicrous. So get a grip and come to terms with the fact that people can still share software even if they don't always agree . . .
        • by nusuth ( 520833 ) <oooo_0000us.yahoo@com> on Sunday February 09, 2003 @03:29PM (#5266124) Homepage
          My friend and my fan, I have to disagree with you on that. Once installed according to the requirements of the user, Linux is more than enough for any desktop use. But it is not trivial to find which components make up the desktop you require, or how you can troubleshoot, upgrade, or just add software to Linux. These things require a bit of expertise.

          The most important Linux skill is knowing how to use the Internet for help, not any particular Unix skill. For a newbie, it is a hit-or-miss affair. He grabs a modern desktop-oriented Linux and installs it in 30 minutes or less; if all of his hardware is supported and all the programs the newbie wants are already installed, good news, we have a new Linux fan. Chances are, that won't happen.

          If something goes wrong, the best option for Linux fans is that the newbie just forgets the idea, right then. Most probably he now has a functional system, but with a non-functional USB mouse or CD burner, or a sub-optimal refresh rate. He will want to fix and use the system. It is just the mouse, or the printer, or Excel documents. He almost succeeded in this Linux thing!

          Wrong. He still lacks the crucial skill.

          He will try to fix it and fail, seek help and fail again, try to skim docs and fail, learn where to seek help and fail, read documents and seek help at the correct place with the correct attitude, and, if he has some luck, succeed at last. Now we have a brand new whiner instead of a fan. Worse, he half knows what he is talking about.

          Everyone whines about Windows all the time too, but it is not the same thing. We don't want to scare off potential new users. In the case of Windows, the user already knows how much of that whining is about a real problem; that is not the case with Linux.

          The solution is aiming higher. Linux has to be considerably easier to use and install than Windows, because non-techie users just have a lot of experience with Windows. Even if the fix isn't optimal, there is always a fix just a phone call to someone you know away. Linux doesn't have nearly the same installed base, so it is denied that luxury. Linux still requires a crucial skill; it shouldn't.

          In some areas (considering the desktop) Linux already is better than Windows, and in others it is not too far behind. But it has to be better on all fronts. Till then, Linux is not ready. You can argue that had the market shares of Linux and Windows magically flipped, we would be saying Windows is not ready. Probably you would be right, too. But the market share (or rather, the user base) has not magically flipped, and that is not irrelevant.

          I know, I should have read the grandparent.

    • Re:Any word on... (Score:5, Informative)

      by Anonymous Coward on Sunday February 09, 2003 @04:16PM (#5266385)
      Sorry Guys... This article looks to be a bit off base!

      -- Not an Official RS response --

      I work for Rackspace Managed Hosting, the company the link "Rackspace" references in the C-Net article. This kind of cluster is not consistent with our business. We are mostly focused on web-centric managed hosting versus colocation. A rendering cluster is something that, from my experience, we've never done. Also, we don't carry blade servers. C'mon, /., I thought you guys were better about checking this kind of thing out! Just because it's on C-Net doesn't mean it's accurate. Well, kudos to whoever really got this job.

      Matthew Montgomery
      Rackspace Managed Hosting.
  • electric bill.... (Score:3, Interesting)

    by sirmalloc ( 648119 ) on Sunday February 09, 2003 @01:00PM (#5265152)
    1024 Xeons? Jeez, my electric bill is $120/mo with one AMD and one Intel running half the time.
  • For Around... (Score:3, Informative)

    by viper432 ( 589797 ) on Sunday February 09, 2003 @01:02PM (#5265163)
    For around $25,000 you too can make Pixar quality movies (+ the cost of those servers). https://renderman.pixar.com/
  • SCARED (Score:2, Funny)

    by wwwgregcom ( 313240 )
    I am actually scared, to imagine a beowulf cluster of these.
  • 1024 CPUS? (Score:3, Interesting)

    by Pharmboy ( 216950 ) on Sunday February 09, 2003 @01:03PM (#5265174) Journal
    My god, I thought they had trouble scaling Linux that far. Seriously. How the hell do you do that when "stock" Linux doesn't like 8 CPUs?

    • I think they run the CPUs independently, so that they assign one frame to render to each machine and let them run in parallel...
    • by NetJunkie ( 56134 ) <jason.nash@nosPam.gmail.com> on Sunday February 09, 2003 @01:07PM (#5265203)
      Not 1024 CPUs in one box. Each CPU sits on a "blade" card and acts like a separate system. It's a big cluster.
    • Re:1024 CPUS? (Score:5, Informative)

      by MerlynEmrys67 ( 583469 ) on Sunday February 09, 2003 @01:11PM (#5265241)
      Just in case you didn't guess, this is a cluster of Linux servers, not a single server

      If you have a task that can be easily partitioned (oh, like each individual frame would be an easy break for this), you can send each task to a different machine, allowing you to parallelize the work.

      This is a poor man's version of NUMA (Non-Uniform Memory Access), created and popularized by Sequent (now a division of IBM), where rather than having a single pool of addressable memory, you have multiple pools of memory, some with very fast access, some slower.

      What I am wondering is what do they do for the cluster cross connect. In large scale cluster environments, this tends to be a significant bottleneck. In large scale clusters you start seeing things like HIPPI, VIA, and soon to be Infiniband... wonder what this is stocked up with
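
      To make the partitioning concrete, here is a minimal Python sketch of farming independent frames out to nodes; the hostnames and the render command are hypothetical placeholders, not anything Pixar is known to run:

      # Sketch of farming independent frame renders out to cluster nodes.
      # Hostnames and the "render" command are hypothetical placeholders;
      # the point is that each frame is a self-contained job, so the only
      # cross-node traffic is dispatching work and copying results back.
      import subprocess
      from concurrent.futures import ThreadPoolExecutor

      nodes = ["node01", "node02", "node03", "node04"]   # hypothetical blade names
      frames = range(1, 101)                             # frames 1..100

      def render_on(node, frame):
          # One ssh session per frame; a real farm would use a queueing system.
          cmd = ["ssh", node, "render", f"--frame={frame}", "scene.rib"]
          return subprocess.run(cmd, capture_output=True).returncode

      with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
          results = [pool.submit(render_on, nodes[f % len(nodes)], f) for f in frames]
          failures = [r for r in results if r.result() != 0]
      print(f"{len(frames) - len(failures)} frames rendered, {len(failures)} failed")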

    • Re:1024 CPUS? (Score:5, Informative)

      by sql*kitten ( 1359 ) on Sunday February 09, 2003 @01:20PM (#5265299)
      My god, I thought they had trouble scaling Linux that far. Seriously. How the hell do you do that when "stock" Linux doesn't like 8 CPUs?

      Because it's not a single system image. Rendering movies is easy to parallelize because you don't need to have one scene rendered before you can render the next; all the information you need is in the model file.
    • Re:1024 CPUS? (Score:5, Informative)

      by dprice ( 74762 ) <daprice@NOsPam.pobox.com> on Sunday February 09, 2003 @01:33PM (#5265382) Homepage

      My god, I thought they had trouble scaling Linux that far. Seriously. How the hell do you do that when "stock" Linux doesn't like 8 CPUs?

      I often see this misconception about multiprocessor machines. Some machines have a true tightly coupled multiprocessor architecture with a shared memory space, like big-iron machines from SGI, Sun, and HP. These can be used to run a multithreaded process to speed up time-to-solution for a task. The speed-up is subject to the usual Amdahl's Law restrictions. The blade server machines, like the ones Pixar is using, are 'tightly bolted' multiprocessors which share mechanical components and power supplies, but they effectively look like separate computers. Possibly some of the blades have shared multiprocessors, but no more than 2-4 CPUs per blade. Separate instances of the OS run on each blade.

      For easy to partition tasks like computer graphic rendering, each frame render task can be run single threaded, and there can be many tasks running at the same time. The time-to-solution for a single rendered frame is not reduced by parallelization, but the overall throughput is increased by multiple tasks.

      Nine women cannot make a baby in one month, but nine women can make nine babies in nine months.
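
      A rough Python illustration of that latency-versus-throughput point; the 95% parallel fraction and the 10-minute frame time are made-up example numbers, not measurements:

      # Throughput vs. latency: Amdahl's Law caps the speedup of a single
      # multithreaded render, while independent per-frame jobs scale almost
      # linearly in throughput. The parallel fraction and frame time below
      # are made-up example numbers.
      def amdahl_speedup(parallel_fraction, n_cpus):
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

      n_cpus = 1024
      frame_minutes = 10.0   # hypothetical single-CPU render time per frame

      # Tightly coupled: one frame spread across all CPUs, 95% parallelizable.
      print(f"Amdahl speedup on one frame: {amdahl_speedup(0.95, n_cpus):.1f}x")

      # Embarrassingly parallel: one frame per CPU, latency unchanged,
      # but the farm finishes 1,024 frames in the time one CPU does one.
      frames_per_hour = n_cpus * (60.0 / frame_minutes)
      print(f"Per-frame latency is still {frame_minutes:.0f} min, "
            f"but throughput is {frames_per_hour:,.0f} frames/hour")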

      • by digitalcowboy ( 142658 ) on Sunday February 09, 2003 @04:28PM (#5266450)
        Nine women cannot make a baby in one month, but nine women can make nine babies in nine months.

        Won't Microsoft's soon-to-be-released BabyMaker .NET allow for nine women to make a baby in one month?

        I thought I saw a press release about it a while back but can't seem to find a link now.
    • You've heard of this phenomenon, yes? Assigning many machines to a task? That's how rendering is done. Machine 1 handles scene 1 or frame 1, machine 2 handles frame 36, blah, blah, blah...

      It's not a 1024-CPU box... it's several hundred boxes with one or more CPUs each. Hence the term 'render farm', along the lines of 'server farm'...

  • Power (Score:5, Funny)

    by tiktok ( 147569 ) on Sunday February 09, 2003 @01:05PM (#5265189) Homepage
    With that type of processing power, they should be able to calculate to infinity...and beyond.
  • Didn't I follow the same link from the earlier Rendezvous with Rama story [slashdot.org]?
  • Raw CPU power (Score:4, Insightful)

    by EwokNinja ( 29993 ) on Sunday February 09, 2003 @01:06PM (#5265195) Homepage
    Perhaps if Sun had spent more time making their processors faster at a reasonable cost, they wouldn't be losing this kind of ground. Sun took way too long to come out with their UltraSPARC III processor, and now clustering technology is at the point where it's much cheaper to string together a bunch of commodity PCs than to purchase a high-end Sun box.
    • by That_Dan_Guy ( 589967 ) on Sunday February 09, 2003 @01:24PM (#5265330)
      I teach MCSE courses down in Chatsworth; recently we got a lot of engineers from Boeing coming over for Windows XP classes. Why? They're dumping all their SPARCstations and moving to XP on cheap Intel hardware. It's faster, and 2/3 of the applications they need already run on it. The last third they were working on.

      The IT people I talked to were surprisingly happy with XP so far. These were all Unix only kind of people actually.

      The other thing they were doing was looking into dumping their Crays in favor of Linux clusters. The comments were along the lines of how much faster and cheaper it was to put together a cluster of 100 cheap Intel boxes than to get a new Cray. That, and they were all already familiar with the Unix-style interface. On top of it all, the GUI (I think they were running GNOME) was so much nicer than CDE on Solaris.

      So Sun is getting it from both sides - cheap Wintel boxes and cheap Linux boxes. No wonder they finally relented and released Solaris 9 on Intel.
    • Re:Raw CPU power (Score:5, Informative)

      by ottffssent ( 18387 ) on Sunday February 09, 2003 @01:34PM (#5265391)
      Sun isn't about raw CPU power. For that we have POWER and x86. Sun is about massive scaling. Sure, 1 POWER4 or P4 or Athlon beats an Ultrasparc. And 8 USIIIs lose out to 8 POWER4s or Xeons or Hammer CPUs. But Intel and AMD drop off at about 8P systems (though ItaniumII can handle larger systems, and Opteron can scale past 8P with a HT bridge), and the POWER architecture scales to hundreds of processors. Sun though can pack a thousand chips in a single system image, with plans to scale to 4096 (IIRC) within the next 2 years.

      I'm sure Sun would love to have a high-performance CPU to field against massive clusters being deployed for highly parallelizable tasks such as rendering, but the fact is that's not where their strengths lie. Huge tasks which cannot be efficiently split are what Sun is good at, tasks where superb scalability in terms of both CPU power and memory are an absolute must.

      For more, read Ace's Hardware's excellent volume multiprocessor articles:
      Part 1 [aceshardware.com]
      Part 2 [aceshardware.com]
      Part 3 [aceshardware.com]
      • The problem is that everyone else is getting better at scaling. IBM's POWER series is a serious threat, and you can bet that POWER5 will be even better than what they have out there now.

        The comment you replied to is entirely on track. Scaling is not enough. Other processors concentrated on speed first, and scaling up cheaply (obviously if you put enough work into the backplane you can make anything scale, but there comes a point at which it's no longer cost-effective for 99.44% of the market, like a Cray) just wasn't part of the equation. Now that clustering is getting cheaper and more effective (it used to be that workstation-class machines were puny; they are becoming comparatively more powerful and have more memory for "large" data sets), it is more important to scale smoothly; THIS is when other companies will put effort into that.

  • Yea, well.. (Score:3, Interesting)

    by Anonymous Coward on Sunday February 09, 2003 @01:06PM (#5265196)
    It was bound to happen. When you're talking about a big farm of boxes, Sun doesn't make much sense. Look at Google - all run on commodity hardware.

    Now, on the other hand, if you're talking a situation where a server can not just die and be swapped out without notice, then you'd stay with Sun. Or, I would anyway.
  • Not Xserves? (Score:3, Interesting)

    by splattertrousers ( 35245 ) on Sunday February 09, 2003 @01:07PM (#5265211) Homepage
    I assumed that Apple created their Xserve rack-mounted servers for exactly this purpose: not just for animation studios, but for Pixar in particular (since Steve Jobs runs both companies and does things like selling Pixar DVDs to Apple to give away in promotions, thereby increasing the number of DVDs sold at launch, getting his movies in front of more people, and of course providing more incentive to buy whatever it is he's bundling the DVDs with).

    I guess the density of the blade servers is higher than the Mac servers, but it would have been a big boost to the Xserve's credibility if Pixar had chosen to use a ton of them. Perhaps Apple will make a new server (Xblade?) that's more suited to this use. It wouldn't surprise me...
    • Re:Not Xserves? (Score:3, Insightful)

      Are you kidding? Xserves don't have anywhere near the computational horsepower of the Intel hardware. And, at that, they are probably more expensive than the Xeon machines per unit.

      I've said it before and I'll say it again: you are not paying for cutting-edge hardware when you buy an Apple. You are paying for easy to use software. Movie production houses which have teams of professional administrators do not need the handholding that OSX Server would provide.

    • Re:Not Xserves? (Score:3, Interesting)

      by SlamMan ( 221834 )
      You hit the nail right on the head: density. Xserves are great and wonderful machines (possibly excluding the god-awful sound they make), but they just don't compete with these blade servers. And I'm assuming they're not supposed to. We all know the advantages of using blades for this kind of thing, so it'd be foolhardy to use even 1U devices here.
  • by Anonymous Coward on Sunday February 09, 2003 @01:09PM (#5265219)
    "I'd love to see their electric bill "

    Dude, they render stuff... would you not prefer to see that...
  • One of the advantages of *nix is that if your code is well written then you can write once and compile anywhere (except possibly Windows), so it's nice to see a company showing that this is possible. It's a shame they chose Xeons, though. The Xeon is the latest in a long line of chips which evolved from the 8086, a fairly nondescript 16-bit chip. It still carries a lot of legacy baggage around with it and so has to run several times faster (and drink several times as much power, generating several times as much heat) as a well-designed chip in order to go as fast. Intel and AMD are able to sell chips which compete with RISC chips only because they sell enough that they can seriously outspend everyone else on R&D, making a dead architecture last just a little longer. It would have been nice if the Alpha had been able to compete with x86 on price when NT4 was released; maybe then we'd have left the x86 ship for good. Whenever I hear someone suggest OS X or Solaris on x86, I always wonder what they have against the OS that they want to cripple it by putting it on such an evil architecture.
    • 'nuff said.
  • by SuperDuG ( 134989 ) <[be] [at] [eclec.tk]> on Sunday February 09, 2003 @01:10PM (#5265238) Homepage Journal
    That Sun had tried renderman (or whatever they call it) to run on 32-bit processors and it was a horrible disaster. Something about how it seemed more feasible and cost-efficient to use Sun until the days in which the competing 64-bit processors became cheaper.

    I could have sworn that the software couldn't run at all in 64 bit. I'm just wondering if they didn't take a step down when they converted 64-bit optimized code to run on regular high cache 32-bit pentiums.

    Great for Linux, and anyone who has half a brain knows that you can make a very nice system from Intel Xeon chips and Linux. But SPARCs aren't x86s and they certainly don't run the same. I've been running a server off of a PII 400 MHz Xeon with 2 MB of cache on it for nearly 4 years now. It's never failed me yet and I have no intention of upgrading anytime soon, but then again I'm not rendering anything in 3 dimensions either.

    Doesn't dreamworks use this type of technology already?

    Damned MPAA members ... we hate you because of your striving for world domination, but then you go and support Linux ... bastards, we just love to hate you.

    Lastly, I'm really surprised that Pixar didn't go for a server farm of OS X boxen; just goes to show ya, right tool for the job. Maybe they'll throw Darwin on there at least.

    • by donglekey ( 124433 ) on Sunday February 09, 2003 @02:15PM (#5265659) Homepage
      The parent post is somewhat misleading and more than a little spotty, but it got modded up, so I feel I should clarify.

      That Sun had tried renderman (or whatever they call it) to run on 32-bit processors and it was a horrible disaster. Something about how it seemed more feasible and cost-efficient to use Sun until the days in which the competing 64-bit processors became cheaper.

      Renderman is a standard for exporting frames to a renderer. Pixar's implementation is called PhotoRealistic RenderMan. Sun is not involved in this at all. It has run on x86 procs, as well as Linux, for quite a while now. Renderers are relatively easy to port, especially between different Unixes. I am not sure if there are speed advantages to 64-bit computers, or if it is just accuracy and memory like always, which is still a big advantage for a renderer. (Can anyone clarify?) I have a PRMan-rendered image on my desktop right now on my 450 MHz PIII. The above quote is pretty much completely false.

      Doesn't dreamworks use this type of technology already?

      The technology is just running off-the-shelf software and hardware. Different parts of DreamWorks do use Linux heavily.

      Damned MPAA members ... we hate you because of your striving for world domination, but then you go and support Linux ... bastards, we just love to hate you.

      This is horribly misinformed. I don't have the energy to go into the whole issue here, but suffice it to say that this is wildly misplaced frustration. First of all, Pixar is not a member of the MPAA. They have a deal with Disney, which is. That attitude would be fitting and understandable with Disney for various reasons, but making Pixar your enemy is just wrong (except when they sued Larry Gritz personally to hold off competition to Renderman). The same goes for visual effects companies. ILM, Imageworks, Digital Domain, PDI, Pixar, Rhythm and Hues, Weta, etc. are the best thing that's happening to Linux right now. They are so far removed from the wrongdoings of the MPAA that it's like me blaming someone for crime when their friend's dad is part of the NRA. They are doing only good for Linux, and they are not hypocrites. They do have deals with studios that are in turn part of the MPAA. Not everything is perfect, and these issues are not something that they as companies are, should be, or will be concerned about. They are also starting to contribute to Linux, and I am confident more will come as Linux matures in their pipeline. Building up anger towards visual effects companies perpetuates the stereotype of free software advocates being zealots who don't understand the whole issue.
  • by bmarklein ( 24314 ) on Sunday February 09, 2003 @01:15PM (#5265266)
    As far as I know Rackspace is a managed hosting company. Rackable Systems [rackable.com] makes servers - Yahoo and Google both use them. Anyone know if the article has it wrong, and Pixar is actually using Rackable machines?
    • Likely Rackable! (Score:5, Informative)

      by Crypt0pimP ( 172050 ) on Sunday February 09, 2003 @01:25PM (#5265331) Homepage
      They have half-depth 1U boxes. That's right, two servers in 1U, back to back.

      Includes space between the two for cabling and cooling.

      They specialize in delivering easy to manage (physically) racks of highly commoditized systems.
      (I work with them in a reseller relationship)

      Imagine a 71U rack (minus 1U for a switch), with 142 boxes, all dual-proc. 284 procs in a rack!

      Man, I wish they'd put the right link in there.
  • Is renderman open source yet?

    Linux is one step; making sure they have a completely open system is another.
    • Is renderman open source yet?

      Renderman is a specification, not a product. There are various open-source efforts to implement the renderman specification, but they all seem to be dormant at the moment. See here. [dotcsw.com]

      • Actually there is one hosted at Sourceforge that is very active, called Aqsis. There were a couple of other projects like gman that never took off, or were just University projects. Aqsis is making good progress:

        Aqsis [aqsis.com]

        There are a few other implementations that also run on Linux, like AIR, the aforementioned RenderDotC (which I believe Cinesite used), and 3Delight. Hopefully a product like Liquid (from a guy who worked at Weta), which is a Maya-to-RIB translator (kinda like MTOR), will also take off, which could help in making a more powerful combo.
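
        Because RenderMan is an interface rather than a single product, the same scene description (RIB) can be handed to any compliant renderer. A minimal sketch of generating a tiny RIB file from Python; invoking the renderer as a command named aqsis is an assumption about how that implementation is installed:

        # RenderMan is a scene-description interface (RIB), so the same file can
        # be fed to any compliant renderer. This writes a one-sphere scene and
        # hands it to a renderer; the "aqsis" command name is an assumption about
        # how that open-source implementation is invoked on your system.
        import subprocess

        rib = """\
        Display "sphere.tiff" "file" "rgba"
        Format 320 240 1
        Projection "perspective" "fov" [40]
        WorldBegin
          Translate 0 0 5
          LightSource "ambientlight" 1
          Surface "plastic"
          Sphere 1 -1 1 360
        WorldEnd
        """

        with open("sphere.rib", "w") as f:
            f.write(rib)

        subprocess.run(["aqsis", "sphere.rib"], check=True)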

  • Sun?? (Score:3, Informative)

    by SuperDuG ( 134989 ) <[be] [at] [eclec.tk]> on Sunday February 09, 2003 @01:21PM (#5265306) Homepage Journal
    This link [pixar.com] makes no mention of Renderman running on anything Sun-related; I see IRIX, Windows XP, and Red Hat mentioned here. Is this the Sparc-64 tree of Red Hat??

    I must be lost here, but most of the render farms I've seen that use Sun products use them for network storage solutions, though they're even losing that market share these days. I think what people are starting to realize is that just because you paid a whole lot for it doesn't mean you got "The Best".

    The supercomputers of 5 years ago can be built today from computers being thrown away and set up into a computing cluster. Obviously the good old days of 40-trillion-dollar supercomputers paid for by the government aren't the supercomputers of today.

  • by MisterP ( 156738 ) on Sunday February 09, 2003 @01:28PM (#5265351)
    A lot of people are going to be saying "just one example of how Sun is dying", but coming from a place that runs several hundred Sun machines (and being a Sun fanboy), I can understand why they made this switch. For sheer processing power on the cheap, the x86 world has had a lead on Sun and other big UNIX vendors for a few years. Having a decent OS (Linux) to run on those machines makes it even easier to switch.

    It's about using the right tool for the job, and now that x86/linux/bsd has matured to a point where it can be used for some professional applications, it only makes sense to see things like this happen.

    Sun is going to be around for a long time. As many other people have pointed out, they're just retreating somewhat to more of a niche market, where they are the right tool for the job.

    • and now that x86/linux/bsd has matured to a point where it can be used for some professional applications

      For some reason, I just can't seem to resist nit picking here. BSD has been mature enough for professional use for quite a long time now.

      In fact, I seem to remember a time (pre-Solaris) when Sun systems ran a form of BSD.
  • by BrianUofR ( 143023 ) on Sunday February 09, 2003 @01:32PM (#5265375)
    This is a big win for Linux, and that is cool, but performance is only half the battle.


    The executives at my company [elementk.com] are very interested in Linux because of the outrageous leap in processing power per dollar, and the reduction in CPU-based licensing costs for software like Oracle is staggering. The concern, though, is stability.


    Sun Fire and Enterprise servers are really expensive, but they stay up all the time. Swapping a failed processor or NIC or memory stick without halting the box is really important on a mission-critical server. Likewise, a well built Sun box never panics, and if it ever does, Sun will insist that their engineers look at the crash dump to figure out what went wrong.

    I think Linux has won the performance battle, but what about the stability battle? You need to win both to win the war.

  • Google (Score:5, Informative)

    by mrm677 ( 456727 ) on Sunday February 09, 2003 @01:33PM (#5265384)
    I recently attended a talk by Google's chief engineer. They have approximately 15,000 x86 machines running Linux at seven data centers in the United States.

    Weird failures occur so often, such as disks returning garbage without the controller informing the OS, that Google does a checksum on _every_ data structure in their user-level software. He also talked about how Linux is good enough for them, but it doesn't perform well with respect to I/O under heavy load. He says they like Linux because they have the source code and because they minimize excessive I/O loads on their machines. Nobody asked why they don't use FreeBSD, but I suspect it's because Linux has better hardware support and Google builds their own machines with numerous different components based on the latest technology.
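
    The checksum-everything idea is easy to sketch; a minimal Python version (illustrative only, not Google's actual code) that stores a CRC alongside each serialized record and verifies it on read:

    # Minimal sketch of the "checksum every data structure" idea: store a CRC
    # next to each serialized record and verify it on read, so silently
    # corrupted bytes from a flaky disk or controller are detected rather
    # than trusted. (Illustrative only, not Google's actual code.)
    import pickle
    import zlib

    def pack(record):
        payload = pickle.dumps(record)
        return zlib.crc32(payload).to_bytes(4, "big") + payload

    def unpack(blob):
        stored, payload = int.from_bytes(blob[:4], "big"), blob[4:]
        if zlib.crc32(payload) != stored:
            raise IOError("checksum mismatch: data corrupted on disk or in transit")
        return pickle.loads(payload)

    blob = pack({"doc_id": 42, "terms": ["pixar", "linux"]})
    assert unpack(blob) == {"doc_id": 42, "terms": ["pixar", "linux"]}
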
    • Re:Google (Score:2, Informative)

      by foyle ( 467523 )

      Nobody asked why they don't use FreeBSD, but I suspect it's because Linux has better hardware support and Google builds their own machines with numerous different components based on the latest technology.

      I keep seeing people say that Linux has better hardware support than FreeBSD, but it has not been my experience. In the past year, I've had three machines that Redhat 7.3 and 8.0 refuse to work on. Redhat 7.x installers would choke and the 8.0 installer works but leaves you with an unbootable machine when it finishes. Linux just doesn't get along with the Adaptec AIC-789x controller that was built into the motherboard on these machines. FreeBSD, on the other hand, installs and boots fine without any problems.

    • Re:Google (Score:5, Interesting)

      by Alomex ( 148003 ) on Sunday February 09, 2003 @02:47PM (#5265837) Homepage

      Look, the mean time to failure of a hard drive is 15,000 to 20,000 hours. This means that a hard drive stops working at Google every hour of every day. Truly 24/7.

      If you were to look at their dumpster in the back alley, you'd find about 170 hard drives dumped every week.

      Wouldn't you checksum every data transfer under those conditions too?
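
      The arithmetic behind that failure rate, as a quick Python check; it assumes roughly one drive per machine and uses the 15,000-machine figure and MTTF range quoted above:

      # Expected drive failures per hour and per week for a fleet of ~15,000
      # machines, assuming roughly one drive per machine and the MTTF range
      # quoted above (15,000-20,000 hours per drive).
      drives = 15_000
      for mttf_hours in (15_000, 20_000):
          failures_per_hour = drives / mttf_hours
          failures_per_week = failures_per_hour * 24 * 7
          print(f"MTTF {mttf_hours} h: {failures_per_hour:.2f}/hour, "
                f"{failures_per_week:.0f}/week")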

  • by speeding_cat ( 631744 ) on Sunday February 09, 2003 @01:40PM (#5265435)
    Sun uniprocessor performance has been very uncompetitive for quite some time now. I bet they would have switched a long time ago if it was not for the difficulty of porting software from Solaris to Linux. Plus human inertia ...

    The worst problem for Sun is that once they lose customers to Linux, there is no turning back.
    They still hold up well in the 64-bit area; however, once commodity hardware such as x86-64 gets there, this battle will also be over.

    This is the main reason why the company is likely to go down the drain.
  • by Anonymous Coward
    This cluster is so powerful, when they try to render anything with it, all they get is "42" on the console.
  • by alienthoughts ( 648978 ) on Sunday February 09, 2003 @01:57PM (#5265542)
    Pixar is on the right track. I do ASIC verification, mainly on Sun boxes (fastest UltraSPARC IIIs, multi-processor, 14 GB of memory, etc.). Lately, I have been running the exact same jobs on an LSF-enabled Linux farm of Intel boxes.
    The improvement is a 3-4x speedup, i.e. 8-hour Sun jobs take 2 hours on the Intels.
    For the price of one dual-processor Sun workstation, you can get ten Intel boxes running Linux.
    Not only is the speedup great, I need fewer licenses to run the CAD software (doing multiple regression jobs). Since a license seat per CAD tool can run from 30K to 200K each, plus a 10% a year maintenance fee, the savings are huge.

    Changing over to Linux was trivial. I like and have used Suns for years, and Suns were a major player in this industry. But I firmly believe that this paradigm is going to be a SUN KILLER!
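
    A rough Python sketch of why the license savings dominate, using the figures from this post; the $100K seat price is just a point inside the quoted $30K-$200K range, and the regression-suite size and overnight window are made up:

    # Rough first-year license comparison based on the figures above: jobs run
    # ~4x faster on the Linux boxes, so far fewer concurrently checked-out
    # seats cover the same overnight regression window. Seat price, suite size,
    # and window length are illustrative assumptions.
    seat_price = 100_000
    maintenance_rate = 0.10            # 10% of seat price per year
    suite_jobs = 160                   # hypothetical regression jobs per night
    window_hours = 8                   # overnight window

    def seats_needed(hours_per_job):
        batches = window_hours // hours_per_job      # sequential batches that fit
        return -(-suite_jobs // batches)             # ceil(suite_jobs / batches)

    for name, hours in (("Sun", 8), ("Linux", 2)):
        seats = seats_needed(hours)
        cost = seats * seat_price * (1 + maintenance_rate)
        print(f"{name}: {seats} seats, first-year license cost ${cost:,.0f}")
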
  • Sun is setting... (Score:2, Interesting)

    by JavaJoint ( 612671 )
    Anyone get the feeling that Sun's "brightest hours" are behind them? As others have mentioned, they're getting hit from the Windows XP side, as well as Linux. If Solaris dwindles as a result of this, and becomes a niche/high-end item, what does this say for HP, SGI, and the rest of the Unixen?

    I've been thinking in terms of what are/will be the Big Three:
    Linux, Mac OS X, and that other thing... uh, Windows XP. I wouldn't bet on traditional Unixen as a growth area, by any means. It won't be long before some companies become "Unix-free and Windows-free" zones...
    • by tgd ( 2822 ) on Sunday February 09, 2003 @02:46PM (#5265825)
      I agree. I think what's going to end up happening long term is Windows will take and keep the desktop (I just don't see it happening with Linux, and this is coming from someone who's used it as his only OS at home for ten years), Linux in the datacenter, and OS X in the same niche role it's in now, with the caveat that I think it'll start pulling away the tiny percentage of people who want to run Unix on their desktop.

      Ten years running Linux, and tomorrow morning I'm dropping the bills on one of those spiffy gigahertz 17" iMacs. I want Unix, and I want more functional stability than Linux has ever given me (not OS stability, but stability in terms of what programs I can use to do what, what works with what else, etc... )

  • by digitalgimpus ( 468277 ) on Sunday February 09, 2003 @02:17PM (#5265680) Homepage
    1982 - born in Nebula - incorporated with 4 employees

    1984 - protostar - NFS is introduced

    1995 - main sequence begins - Java Released

    1996 - red giant - Using Java technology, NASA engineers develop an interactive application allowing anyone on the Internet to be a "virtual participant" in the space administration's groundbreaking mission to Mars.

    SUPERNOVA - Sun battles MS over Java and Windows

    Blackhole - TODAY!

    References:
    http://www.sun.com/aboutsun/coinfo/history.html
  • I would think that they would need more RAM per node than blade servers would allow. I mean, my friend's a graphic arts student, and even his modeling jobs frequently bump against 4-gig limits. I would have thought two- or four-way Opterons with gobs of RAM would be more up their alley.
  • by Pig Hogger ( 10379 ) <pig.hogger@gmail ... m minus caffeine> on Sunday February 09, 2003 @02:50PM (#5265851) Journal
    I'd love to see their electric bill ;)
    I'd rather have their HEATING BILL...
  • Parallelism (Score:5, Interesting)

    by rugwuk ( 525954 ) on Sunday February 09, 2003 @02:57PM (#5265895) Homepage Journal
    It's all about the distinction between shared and distributed memory architectures. Different applications benefit from the different types of parallelism which the above architectures provide. If the problem can be solved by running independent chunks of code that require no communication at run time, then clearly a blade-type solution (distributed memory) is viable; but if the calculations are co-dependent on each other and require communication of interim results, then the overhead of communication can quickly become the critical path, and shared memory parallelism becomes a better solution. It also depends on the level of parallelism built into the implementation of the algorithms inside Pixar's rendering program itself.
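
    When the chunks really are independent, the distributed-memory case needs no communication at all during the run; a minimal Python sketch of that embarrassingly parallel pattern (the per-frame "work" is a stand-in, not Pixar's renderer):

    # Embarrassingly parallel case: each frame is handled by an independent
    # worker process with its own memory, and no results are exchanged until
    # the end. The per-frame "work" below is a stand-in, not Pixar's renderer.
    from multiprocessing import Pool

    def render_frame(frame):
        # Stand-in for a real render: burn some CPU and return a result.
        acc = 0
        for i in range(200_000):
            acc += (frame * i) % 7
        return frame, acc

    if __name__ == "__main__":
        with Pool() as pool:                  # one worker per local CPU
            results = pool.map(render_frame, range(1, 65))
        print(f"rendered {len(results)} frames independently")
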
  • Surprising choice (Score:5, Insightful)

    by Thagg ( 9904 ) <thadbeier@gmail.com> on Sunday February 09, 2003 @03:43PM (#5266208) Journal
    I have heard from several places that Intel's PR flacks have been flogging this story mercilessly, so it's not too surprising to see it show up in Slashdot. Twice.

    To get the inaccuracy out of the way -- RenderMan has been running on Linux for several years now, and I would be surprised if Linux wasn't the dominant platform for RenderMan for quite some time, outside of Pixar of course.

    I am really surprised, though, that at this point in time they'd go from 64-bit to 32-bit machines, especially as 64-bit PC-like machines are just becoming available. Why not go with Itanium or the new Hammer? Each of Pixar's movies to date has been gloriously more complex and hard to render than the last one -- and while I know that they go to fairly extreme lengths to keep the memory footprint down, I would think that they'd be bumping up against the 4GB limit already. If not now, then quite soon.

    Perhaps this is just a stopgap to get Nemo finished, even 1024 servers is a fairly small cost. Certainly it would be compared to the RenderMan licenses :)

    Every RenderMan user except for Pixar has to look to get the maximum rendering power per CPU, as the licenses are $5,000 and up, while the CPUs are far far cheaper than that. I suppose Pixar's figure of merit is rendering power per dollar or rendering power per BTU (for cooling limited situations), or even render power per ft^2. Still, the 32-bit machines are a baffling choice to me.

    thad

    ps. My company has a render garden (too small to be a render farm) of a dozen or so Athlons.
  • by BollocksToThis ( 595411 ) on Sunday February 09, 2003 @05:10PM (#5266712) Journal
    I just eclipsed Windows with Linux on my home system.

    I just eclipsed my old toothbrush with a new one.

    I just eclipsed the shit in my ass-crack with toilet paper.

    Now, don't I sound FUCKING STUPID? Yes, I do.
  • by Gavitron_zero ( 544106 ) on Sunday February 09, 2003 @06:29PM (#5267183)
    but when are they going to spend some money and teach their animators how to model a human that doesn't look like a puppet?

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...