Linux Software

Cool Linux Tricks With Atlas

dpilgrim writes: "Looks like some powerful players want to see Linux going toe to toe with Unix 'big iron.' Would you like to be able to run two Linuxes simultaneously on the same box? Or seamlessly swap processors and memory in and out of your machine? The Atlas project aims to bring you all that and more. There's a press release from TurboLinux reported here, and a more in-depth article running on SourceForge's Linux on Large Systems Foundry."
  • Or can the OS turn off power to the sockets first, as well as stop using them, in preparation for the exchange?
    • Why don't you install their mods, bust open your case, & try it to find out?

      Heh heh.

      Dumb ass.

    • by zeno_2 ( 518291 ) on Thursday December 20, 2001 @05:52PM (#2734781)
      Atlas seems to be relying on this new Intel chipset and the McKinley processor (one of Intel's new 64-bit processors). It sounds like this new chipset will support hot-swapping, and any motherboard maker using that chipset would make sure the slots could do it. So yes, you do need a special mobo, but it won't be available for a while.
    • You only need a special motherboard for hotswap PCI/CPU/RAM if you want your system to keep running, and your components to work afterwards ;-).

      Seriously, though, yes you need special hardware on the motherboard side (HP has hot-swap PCI system(s), and IBM has hot-swap everything but not for ia32 or ia64 class hardware). AFAIK, the CPU/RAM/PCI card itself does not need to be special, just the motherboard.

      As for hot-swap RAM, it is _very_ hard to do this without special hardware support. You would need something like RAID for RAM + special hardware that would let you remove/install DIMMs live.
      • Actually RAM would be one of the easiest to do, as in requiring the least complex hardware. Mainly you need an OS that will let you swap entire banks to disk on demand and stop using them. Then make your mobo just stop powering the DIMM slot. No need for special hardware tricks to make it seem like the RAM is still there, even though it's not.
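        For a rough idea of what the software side looks like, here's a minimal sketch assuming the memory-hotplug sysfs interface that later Linux kernels grew (nothing like this existed on kernels of this era); the block number and paths are illustrative only:

          # Offline one memory block via the (later) Linux memory-hotplug sysfs
          # interface before cutting power to its DIMM slot. The kernel migrates
          # or frees the pages first, or refuses if it can't.
          import pathlib, sys

          def offline_memory_block(block_id: int) -> None:
              state = pathlib.Path(f"/sys/devices/system/memory/memory{block_id}/state")
              if state.read_text().strip() != "offline":
                  state.write_text("offline")   # fails if the pages can't be moved

          if __name__ == "__main__":
              offline_memory_block(int(sys.argv[1]))   # e.g. block 32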
        • True, but in that case you're putting the complexity in the OS instead of in the hardware.

          I realize it's not difficult to add that functionality to an OS, but you introduce at least one and probably many third parties (the OS developers) that you then need to rely on to add this functionality according to some spec that might take years for everyone to agree on... And of course, it won't work with older versions of the OS(es)...

          In any case, hot-swappable-everything is neat, but usually not worth the increased price of the hardware involved unless you have very specific needs... For most reliable-server type applications it makes more sense to just have more standard hardware in a load-balanced configuration that allows you to bring one box down for a time, make changes, etc., without the clients losing access. Of course, for fully interactive sessions built on older protocols like telnet, this doesn't work so well and hot-swapping is more desirable.

  • Starcat (Score:5, Insightful)

    by nbvb ( 32836 ) on Thursday December 20, 2001 @05:44PM (#2734734) Journal
    If I'm going to spend lots of money for hardware like this [sun.com] anyway, why would I use Linux?

    I'm not trolling, I mean it. What does Linux offer me that Solaris doesn't?

    And please avoid the philosophical ramifications -- I have nothing against commercial software, except that 99% of it sucks. :-)

    --NBVB
    • Re:Starcat (Score:3, Insightful)

      by dytin ( 517293 )
      Money is money... Every penny that you can save helps you out. Would you willingly throw away $1,000 just because you were buying a house on the same day, and $1,000 is only a small percentage of the house? I think not.
      • Yes, money may be money, but Solaris is free. What does Linux offer that Solaris doesn't?
        -- kai
        Money is money... Every penny that you can save helps you out. Would you willingly throw away $1,000 just because you were buying a house on the same day, and $1,000 is only a small percentage of the house? I think not.

        Surely if you buy a Sun, you get Solaris too?
    • What does Linux offer me that Solaris doesn't?

      In my experience, Linux is much more capable when it comes to minimalist or embedded tasks. With a machine that could swap NICs and memory on the fly, you could have a high-reliability router/smart switch without paying Cisco an arm and a leg.
    • Re:Starcat (Score:4, Insightful)

      by Camel Pilot ( 78781 ) on Thursday December 20, 2001 @06:32PM (#2734949) Homepage Journal
      Or

      "What does Solaris offer me that Linux doesn't"
      • Re:Starcat (Score:1, Informative)

        by Anonymous Coward
        Reliability, scalability, a working VM subsystem...the list goes on and on.
      • Given that Solaris comes bundled with Sun hardware, the original question makes more sense. Dumping Solaris and installing Linux requires a conscious decision, time and effort, so you'd better be looking for a good reason to prefer Linux over Solaris before you do it.

    • something very tiny.

      the sourcecode to the OS.

      This is the ONE thing that is vitally important in an Open Source OS. It blows away everyone else because I have the sourcecode.

      If you don't ever need to modify your system beyond what they want you to have, then no, you don't need it at all.
      • Re:Starcat (Score:5, Insightful)

        by bconway ( 63464 ) on Thursday December 20, 2001 @07:15PM (#2735124) Homepage
        You can download the Solaris source code to your heart's content here [sun.com]. You can edit, change, and rebuild all you like if it will really suit your needs. You can send patches back to Sun for incorporation if you feel so inclined. The only thing you can't do is redistribute it, which, from what it sounds like your needs are, really isn't that important anyway.
    • Re:Starcat (Score:3, Interesting)

      There aren't many reasons at the moment to switch from Solaris to Linux on big-iron hardware. But tomorrow is another matter...

      IBM is now marketing Linux as a big-iron OS and is actively selling S/390 mainframes with Linux. I believe that Linux now has a good chance of becoming the standard OS for big-iron systems - IBM and SGI first, then Compaq and HP, and finally Sun. Sun have switched Unixes before. I worked at Sun during the transition from SunOS 4 (BSD) to Solaris (SVR4). If they can do it once, they can do it again. Solaris is also gradually becoming more Linux-like, with a Linux compatibility layer and GNOME. This could ease an eventual transition from Solaris to Linux. I'm not saying that this will happen, just that it's becoming increasingly likely.

      HH
    • Re:Starcat (Score:5, Insightful)

      by rgmoore ( 133276 ) <glandauer@charter.net> on Thursday December 20, 2001 @07:09PM (#2735100) Homepage
      I'm not trolling, I mean it. What does Linux offer me that Solaris doesn't?

      The primary thing that Linux offers is the ability to run on non-Sun hardware. That's actually bigger than you might think. Consider the following ways that it might be nice:

      • Your system is growing. You've been using Linux on commodity x86 hardware to this point, but you now need a bigger machine. Having a bigger machine available that can run Linux should make the transition easier.
      • You wish to avoid vendor lock-in. Yes, Sun makes nice machines, but so do other manufacturers like IBM, HP, etc. If every manufacturer has Linux available as an OS, it's much easier to jump to another vendor if circumstances dictate.
      • You anticipate that IA64 will be competitive with SPARC in the near future, so you're going to buy commodity IA64 hardware instead of single-vendor Sun stuff. Since Solaris is only available for Sun platforms, it won't be an option.

      Basically, the fact that big iron manufacturers already have their own OSes is not a strong argument against adding big-iron features to Linux. That's especially true if I'm a manufacturer and I want to break into that very lucrative market. It may very well be cheaper for me to help develop the needed features in Linux and put that on my new hardware than to develop my own OS. By making those things available in a commodity OS you have the potential to convert big iron into a commodity market, just as commodity OSes for desktop systems helped turn them into commodity goods.

      • You are certainly aware that you can download Solaris for Intel for free, as well as for real hardware?
        • But being available for x86 isn't the whole game. That might give you the ability to move easily from x86 to Sparc, provided that you had been running Solaris x86 to start out with. But hardly anyone is going to be starting with Solaris on x86 with the intention of making it easy to move to Sun's big iron when the need comes, especially because reviews of Solaris for x86 seem to be generally negative compared to Linux and *BSD.

          And even if you had started on Solaris for x86, that still doesn't solve the issue of migrating from one big iron vendor to another. What happens if it turns out that you really want to be running on one of IBM's POWER-based systems, or a S390? Then you'll have a vendor to vendor migration problem. The advantage of Linux is that it's comparatively vendor agnostic. Once the kernel works on a given processor, it's likely to keep working there, and its range currently seems to be better than any of the proprietary Unixes. The big reason to use those proprietary Unixes is that they support "big iron" features; if and when Linux does the same there will be no reason not to use it instead.

    • Interesting. (Score:4, Interesting)

      by mindstrm ( 20013 ) on Thursday December 20, 2001 @09:32PM (#2735590)
      A lot of people are going on and on about what Linux can do that Solaris can't... cross-platform, open source, etc.

      But I think your question was: given the SPARC platform, why not use Solaris?

      At this point, you are right. Solaris is where it's at.. I mean, if you are buying Sun.. you obviously want more than just a fast machine... you want the support, etc.

      But... as to why I prefer using linux to solaris, in general...
      Linux is the new reference platform. New tools are developed on Linux first, then ported to other Unixes (the majority, anyway).
      The number of tools quickly & easily available for Linux vastly outnumbers the same for Solaris. Yes, you can get, compile, and run pretty much everything on Solaris... it's easy to port from Linux to Solaris... but it's still easier to use Linux.

      Linux is open.. I just, well, I DO like that. Sure, I'm not gonna go out and modify a kernel.. but it means I'm not necessarily stuck with what Sun tells me I'm stuck with.

      Would I buy a server farm of Solaris boxes and run Linux on it now? No.
      Would I if Linux SPARC support was as good as its Intel support? Probably (once SMP is fixed).
    • I wonder if people at Sun get excited every time someone hits the buy button on a $4 million machine. (No, recalculate... $5 million.) I wonder if they are disappointed when people then back out of the transaction....
      It can't happen *that* often..
      • Nobody buys a Starfire or Starcat via the web.

        Companies that buy those things have dedicated sales weasels.......

        Which, truthfully, is a cool thing. The dedicated-to-your-account part is important, 'cuz then the sales guy has *2* interests:

        1) Getting his MONEY.
        2) Making sure you spend MO' MONEY.

        Unlike generic sales weasels, the dedicated sales weasel has only ONE account. So if he makes you unhappy, he stops getting money. Therefore, they tend to treat you really well, in hopes you spends lots more money.

        So they do things like throw in ServerStarts for free and extra disk and things like that...

        All the more reason to stick with Solaris. :-)
  • Webservers? (Score:2, Interesting)

    It would be useful for webservers (or any other server for that matter). Upgrade w/o reboot, full redundancy (if one crashes, the other goes on). This would increase the reliability of the internet tremendously. Not to mention, you can run Debian + RedHat at the same time :)
    • Hah, the number one danger to the internet is Joe Backhoe cutting a Sprint fiber line...


    • At my university, our information servers get hammered on the day grades are released. EVERYBODY is checking out how they did throughout the term. What we do is set up a 'revolving DNS', which basically cycles through a small group of servers to keep users from being unable to get through.

      So what's this have to do with upgrading? We can upgrade machines without bringing down the whole webserver. Just unplug one, open it, swap in components, and add it back to the rack. Rinse and repeat, without losing web time.
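      For what it's worth, the 'revolving DNS' trick is just publishing several A records for one hostname and letting the name server rotate through them; a client-side sketch (the hostname and addresses are made up for illustration):

        # Round-robin ("revolving") DNS from the client's point of view: one name,
        # several A records, handed back in rotating order by the name server.
        import socket

        def addresses(host, port=80):
            infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
            return [sockaddr[0] for _, _, _, _, sockaddr in infos]

        print(addresses("grades.example.edu"))   # e.g. ['10.0.0.11', '10.0.0.12', '10.0.0.13']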

      -Senine
      • As far as I am aware, when using round-robin DNS, if a box goes down a user can still be directed to that box and get an error. Especially given that a lot of web browsers cache DNS entries for the session, they would be trying to get to the downed box every time you hit refresh.

        The proper solution is a load balancer, which has a single IP at the internet end and distributes requests to multiple webservers (and knows which ones are up and which are down).

        Of course if your load balancer goes down... :) (Actually, you just have two load balancers. The chance of both failing is pretty slim.)
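        For the curious, the selection logic a basic load balancer applies is roughly this (a sketch only; the backend addresses are made up, and real balancers do the health checks out-of-band rather than per request):

          # Round-robin backend selection that skips boxes failing a quick TCP
          # health check -- the behaviour described above, in miniature.
          import itertools, socket

          BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]
          _ring = itertools.cycle(BACKENDS)

          def healthy(addr, timeout=0.5):
              try:
                  with socket.create_connection(addr, timeout=timeout):
                      return True
              except OSError:
                  return False

          def pick_backend():
              for _ in range(len(BACKENDS)):      # try each backend at most once
                  addr = next(_ring)
                  if healthy(addr):
                      return addr
              raise RuntimeError("no healthy backends")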
  • Turbo Linux sent us some free CDs, and it was very nice of them, but the text-based installer is rather horrid and their post-install setup leaves something to be desired.

    Mandrake and Red Hat also sent us CDs. They were nice, but they also installed GNOME and KDE along with other huge apps that an average user wouldn't recognize, let alone ever use.

    Personally I would like to see Turbo up and going again, but I haven't heard anything positive coming out of it besides its huge market share in Asia.

  • Drat. (Score:2, Interesting)

    You got my hopes up, only to find this is for future enterprise hardware.

    I want hot-swap PCI now. The memory swapping would be good in the case of a failed DIMM or two. The processor swapping...well, I'll just admit that wouldn't work too well in a uniprocessor computer.

    Since I really doubt memory connectors are grounded properly to handle hot-swapping, that leaves PCI as the only one that's remotely feasible with today's computers. I know Solaris SPARC has it, what about x86?

    Hot-swap PCI could be a really nifty feature on x86 machines. Especially for net guys like me who move NICs around all the time...
    • Any server class x86 has hot-swap PCI.
    • Hey: if the PCI bus can stand it (both physical and logical layers), Linux can cope with it just fine; that's what "insmod" is for. I don't believe most PCI slots have the grounding that allows for it, but, if it did, then I believe it'd be a total no-brainer. I think you're seeing a software shortcoming that just doesn't exist.
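      To illustrate the software half of it: later Linux kernels expose PCI remove/rescan through sysfs, so (grounding and hardware support aside) the hot-swap sequence is little more than this sketch; the slot address is made up, and none of this applied to kernels of this era:

        # Detach a PCI device, let the card be swapped, then rescan the bus --
        # using the sysfs knobs later kernels provide (run as root).
        import pathlib

        def replace_pci_device(slot="0000:02:00.0"):
            dev = pathlib.Path("/sys/bus/pci/devices") / slot
            dev.joinpath("remove").write_text("1")                 # unbind driver, drop device
            input("Swap the card, then press Enter...")            # physical swap happens here
            pathlib.Path("/sys/bus/pci/rescan").write_text("1")    # re-enumerate the bus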
    • exactly what do you have that must have 100% uptime? will you lose several thousand dollars for the 15 minutes of downtime it takes to power down, swap out that dead ISA network card (as you were operating on the hot backup) and power up?

      99% of all enterprise computing needs fall outside this. In even gigantic companies, less than 3% of all their servers will lose the company huge sums of money for being down for a short amount of time.

      Now research? That's a different story. Running a computation that takes 3 months to complete on a 500-processor machine at 99 gigahertz per processor (or whatever insane amounts of processing computers are up to now), the shutdown to replace a NIC card or RAM module would have substantial effects... and I highly doubt that more than 5 people here at Slashdot ever do such advanced mathematics.

      You don't need it; don't waste the time and money on it.
      • People doing research don't need that much reliability; they would more likely go for cheaper, faster hardware. They are the ones who write the code for their computations, so they include checkpoints so that they can recover from any crashes. They also prefer using many machines instead of a single machine if that will increase their price/performance, as it also improves reliability: if a processor goes down, the computation slows down but does not stop. You can see that research people are moving very fast to Beowulf architectures. The only people who cannot move are the ones who need some very fast networking architectures.
    • Why bother replacing the failed DIMM module when you can just install the "badram" kernel patch [vanrein.org] that will let Linux work around bad bits in a memory module? Don't let a few bad bits spoil an entire module... (-:
  • by torqer ( 538711 ) on Thursday December 20, 2001 @05:52PM (#2734779)
    Quick, set a timer to see how long it takes until someone drools and says the magic words:

    Beowulf cluster

  • Hmmmm..... (Score:3, Funny)

    by Peridriga ( 308995 ) on Thursday December 20, 2001 @05:52PM (#2734780)
    Would you like to be able to run two Linuxes simultaneously on the same box?

    On KDE I just push the big button with the 2 on it...

  • I've been looking for a Linux-based replacement for MS Expedia Streets and Trips. Imagine how happy I was to read this message!

    I can't wait to run Atlas on Linux!

    That's what it's talking about, right?

    (Shrug.)

    --SC

  • by Proud Geek ( 260376 ) on Thursday December 20, 2001 @06:01PM (#2734825) Homepage Journal
    This goes hand in hand with a lot of the work planned for 2.5 to make it scale to larger systems. Linus feels that the current architecture is just about ideal for SMP's of any size, but there are really two obstacles to Linux working on big iron and competing against solutions from IBM, Sun and others. The first is scalability to NUMA machines. This issue is being addressed by the kernel development team. The second is support for the reliability features that the really high end hardware provides. That's the work described here. Together they will make Linux the winning combination even on the very high end!
    • by Anonymous Coward

      ...competing against solutions from IBM, Sun and others.

      So far as IBM is concerned, they don't seem to be "competing against" Linux, they are running it on their servers right now. If you want some big iron running Linux, look at their zSeries machines. [ibm.com] They run hundreds of simultaneous Linux images through their Virtual Image Facility. [ibm.com] Of course, this has been discussed before here [slashdot.org] and probably some other places...

      Other computer companies might have the same offerings, but IBM is the only one I am familiar with. Of course, we are also talking some serious cash for these IBM machines, whereas the Atlas project seems to be geared at more mid-range stuff that a smaller company could afford. So, I know it's a little like comparing apples to oranges, but I thought it might be of interest...

    • by cpeterso ( 19082 ) on Thursday December 20, 2001 @06:56PM (#2735043) Homepage
      I read that the Linux kernel developers are planning a number of kernel improvements to increase overall system reliability. Some of the minor updates include a completely new VM, new block IO layer, new VFS layer, new kernel NFS server, new device naming management, new SCSI layer, new IDE layer, and an in-kernel web server (khttpd and TUX) for improved system reliability.

      Just like the similar complete rewrites in Linux 2.0, 2.2, and 2.4, Linux will once again finally be a winning combination on the very high end!
      • Color me stupid, but do we really need _another_ new VM in the kernel?

        What's wrong with the one we have?

        What features would a new VM have that would make it better for these high-end systems?

        Why do these high-end systems with skads and kaboodles of RAM need VM?

        Just curious.
  • All very well, but I wonder if they wouldn't be better starting with a BSD base if long uptime and file system reliability is an issue. They'll have to introduce something better than ext2fs anyway, so why bother working from a kernel that boots from that filesystem?

    Don't get me wrong, I run Linux on my home servers, but I wouldn't run it on kit with the operational requirements that go with those hefty price tags.

    • Not exactly. A filesystem in the Linux kernel can be changed rather quickly, and the kernel does not need ext2 to boot from; you can boot the kernel from whatever you want, as long as there is a bootloader that supports it. Take a look at loadlin or yaboot.

      And for what is talked about here, there is a Linux filesystem: it's called XFS.

      Stability is something else; it's said that (Open|Free|Net)BSD or Solaris are more stable than Linux, but I can't say anything about that.

      An interesting thing is SGI: they're quite interested in Linux for their HUGE boxen, so maybe they will release a special certified kernel which is stable on this machine type.

      Maybe it's really easier to build a kernel for a machine you built yourself and which exists only in a few varieties, like SGI or Sun can do. (Solaris for x86 doesn't support a lot of hardware, so: the BeOS policy, we support only good hardware.)
  • I'd like to see an Atlas cluster of these babies !
    ;-)
  • linuxes=linii?
  • What we need (Score:5, Insightful)

    by TheEviscerator ( 240966 ) on Thursday December 20, 2001 @06:36PM (#2734973) Homepage
    It's important to remember that much of Linux's competition comes not from the dreaded MS, but from commercial UNIX vendors, like Sun and IBM.

    Most companies that currently employ Linux tend to use it for things like DNS, Web servers, and file sharing. Fitting Linux with enterprise features is critical in moving beyond these types of services and truly entering the enterprise world of hot plugging, scalability, and *proven* reliability.

    While I realize that its reliability is more than proven to most of us here, it's important that it be proven to executives as well. Not only must it be reliable, but proven companies must have track records of standing behind the product 100%.

    One concern I've heard voiced is that no company providing support for Linux will take ultimate responsibility for a product that isn't theirs.

    Get a few more years and services behind Linux, and we should see it explode.
    • Re:What we need (Score:3, Insightful)

      by Lumpy ( 12016 )
      One concern I've heard voiced is that no company providing support for Linux will take ultimate responsibility for a product that isn't theirs.
      NO company will take ultimate responsibility for products that are theirs.

      Microsoft, SCO, Sun, Silicon Graphics: everyone has in the license that they are not responsible for anything, for any reason.

      This concern needs to be met with a direct response: no company will, even for their own product.
    • It's important to remember that much of Linux's competition comes not from the dreaded MS, but from commercial UNIX vendors, like Sun and IBM.

      Unfortunately, this is true the other way around: Linux isn't replacing Windows anywhere, but other Unices like Solaris or AIX. If only it were as good an OS...

    • Re:What we need (Score:3, Informative)

      by conway ( 536486 )
      While I realize that its reliability is more than proven to most of us here, it's important that it be proven to executives as well.

      No, it's not proven, at all!

      The fact that you ran your Linux boxen at home and at work without powering down for a year doesn't prove anything. You haven't gotten your machines even close to the level of load that enterprise server machines handle each day. Also, most of us run uniprocessor or 2-CPU machines. Not too much stuff is being done on the 32-CPU enterprise machines with gigs and gigs of RAM, hundreds of disks, network connections, and PCI buses.

      Linux has not been proven in these environments at all. And even if you say that it runs on those machines, when you install an OS on a $1 million machine, you'd better damn well be sure it's proven to be reliable.

      Now, the big Unixes -- Sun, IBM, HP, etc. -- have entire teams of people running stress tests on these machines, and (as a former developer of HP-UX) I know that developers must run through at least 12 hours of stress testing a system (that's a system running through automated test suites that exercise every subsystem and get system load averages to about 200 consistently) when making kernel changes. These things are TESTED.

      No one does that with Linux, because no one wants to do it -- it's not fun work at all. But the companies do do it, and must do it, since they must guarantee 99.99% uptime for the "executives" to buy the system.

      So don't blame them for not jumping on Linux.

    • IBM's Regatta server allows you to run multiple servers on the same box (virtual servers); they can run a combination of OSes: AIX and Linux. So there you have it. Also, AIX 5L supports Linux apps (you need to recompile, though), and Solaris provides an environment for running Linux apps. Oh yeah, Linux has been released for the OS/390 as well.
    • Keep in mind, most of these Unix vendors are not in direct competition with Linux in and of itself. See IBM's embracing of Linux. Sun couldn't care less if you use Solaris: they give it away free. They want you to use Sun hardware, because that's what they charge you money for. IBM with AIX and HP with HP-UX are similar. If they could run Linux on their big iron, they would no longer need to develop their own OS in-house, significantly improving their margins.
  • I love Linux, and aside from two IRIX boxes we run all Linux machines in our department, and it's wonderful. I've been a proponent for years, but maybe... maybe... just maybe... we should worry about a stable virtual memory system before we worry about hot-swappable processors?
  • Other idea (Score:4, Interesting)

    by ocie ( 6659 ) on Thursday December 20, 2001 @06:51PM (#2735030) Homepage
    Would you like to be able to run two Linuxes simultaneously on the same box?

    Actually, I'd like to be able to run one Linux on N boxes, or M Linuxes on N boxes where M!=N. Just imagine a cluster of 50 machines where the failure of one machine has no effect on the operation of the cluster as a whole. There are some good projects in this area, but I don't think they can quite offer this kind of transparency.
  • I'd love to be able to pull out my single CPU and put in a new one, all without shutting the machine down ;)

    Great geek party trick.
    • I'd love to be able to pull out my single CPU and put in a new one, all without shutting the machine down ;)

      As long as the board maintains the RAM contents, and can vector a newly inserted CPU to an area of memory capable of setting up the CPU where it was previously, this shouldn't be too impossible.

      After all, it's not that far removed from the 'hibernation' mode that most laptops offer (under Windows), with the difference that you're not powering the whole machine back up to a previous state, just the CPU.

      Is it a useful feature on a single-CPU machine? Maybe, maybe not. It would reduce downtime for a server running on a 1-CPU box, but most serious servers where that matters aren't running 1 CPU.

      The only complication would be that most legitimate reasons to allow hot-swapping a CPU on a single-CPU board are based around failure conditions, where you probably can't assume that the contents of RAM are 'sane'.
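      As a point of reference, later Linux kernels do let you quiesce and re-enable a CPU through sysfs, which is the software half of what's described here; a minimal sketch (assuming that interface, and noting that CPU 0 usually can't be offlined -- the single-CPU catch again):

        # Take a CPU out of service and bring it back via the (later) Linux
        # CPU-hotplug sysfs interface. Requires root and hotplug-capable hardware.
        import pathlib

        def set_cpu_online(cpu, online):
            path = pathlib.Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
            path.write_text("1" if online else "0")

        # set_cpu_online(1, False)   # quiesce CPU 1 before pulling it
        # set_cpu_online(1, True)    # bring the replacement into service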

  • partitioning (Score:2, Interesting)

    by cweber ( 34166 )
    I've always been very intrigued by the various partitioning options which you can get from commercial Unixes. Personally, I think Solaris is light-years ahead of the rest, but any of the available solutions look interesting.

    Partitioning, especially the dynamic variety, lets you take maximum advantage of a large multiprocessor machine. Can you say 'OS upgrade without downtime'? From testing to gradual rollout to full deployment, and, if needed, rollback, all without having to bring the machine down. Really cool!

    I realize that Atlas only envisages static partitioning for now. But can dynamic partitioning be far behind?
  • I just can't wait till they make it for Windows, so I'll be able to get half the "stability" Windows offers... ;-)
  • > Would you like to be able to run two Linuxes simultaneously on the same box?

    VMWARE [vmware.com] has been doing this for years, on Intel architecture. Plus, you can run multiple operating systems, not only Linux. It creates a virtual machine, so it runs in protected mode, has a completely independent BIOS, uses the memory you assign... Works like a breeze.

    I frequently run Win 2000 AND Debian Linux AND Win 98 (this one for some testing purposes), at the very same time. So you can have the best of all worlds.
    • Re:Virtual machines (Score:2, Informative)

      by DGolden ( 17848 )
      However, there's no two ways about it, VMWare is intrinsically a kludge, because the x86 architecture as it stands today simply wasn't designed to be fully virtualisable (irritatingly, 32-bit x86s CAN fully virtualise 16-bit x86 code - dunno why Intel didn't make the conceptual leap to doing the same for the 32-bit stuff - perhaps IBM had patents on it or something, thanks to their POWER architecture, for all I know).

      And true virtualisation, mainframe-style, needs not just CPU support, but support from lots more of the hardware in the computer system.

      So, VMWare is part virtualiser, part emulator.

      The open source VMWare clone, plex86, similarly, has a lot in common with the bochs x86 emulator.

      It's all very clever, but the PeeCee architecture is simply vile, and definitely an example of the "Eat shit, 10 billion flies can't be wrong" effect - even way, way back, there were better designed hardware architectures than the PC available for similar prices - the dominance of the PC arose partly because it was easy to semi-legally produce IBMPC-compatible clones, and partly through non-technological forces (i.e. lying salesmen and marketers combined with completely computer-clueless businessmen who believed them).
  • by Ungrounded Lightning ( 62228 ) on Thursday December 20, 2001 @11:48PM (#2736018) Journal
    Looks like some powerful players want to see Linux going toe to toe with Unix 'big iron.'

    This isn't a linux issue. It's a hardware issue.

    The significant thing about 'big iron' is that it's an enabling hardware technology.

    Once you have it you can write firmware and software that creates the illusion that the hardware never fails.

    Until you have it, you can't.

    The hardware described looks about right - if they handled machine checks properly. (And the fact that they even used the term implies they either did or are trying.) Basic idea: The machine catches ANY error, with enough state saved that you can:

    CORRECT the error,

    IDENTIFY any failed components,

    MOVE tasks to non-failed components or reconfigure the failing components to limp along,

    NOTIFY the OS of any problem, so it can do things like start moving things off a dying component, and

    pick up the computation where it left off WITHOUT the error.

    When you can do this you can write a modified Linux, Windows, BeOS, or what-have-you that can do the things a mainframe can. (But you'll need to have a REALLY reliable OS for your starting point - you're now talking uptimes measured in decades. The software had better not take the system down in the absence of hardware trouble, and there IS NO hardware trouble. B-) )

    Hot-swappable parts are more a side-effect than something key. You have to be able to hot-swap to replace a broken part with the system live. Once you have the ability to hot-swap in a replacement for a failed part and add it back into a running domain, it's trivial to generalize that to "fix" parts that were "bad" because they had never been installed.

    Partitioning is also implied: You need a minimum of two domains ("virtual machine" subsets of the total device) - working (where the live system is) and diagnostic (where the maintenance guys check out the parts). Once you have that mechanism, making a LARGE number of working domains (with varying amounts of resource, including full or time-shared CPUs) is straightforward.
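    To make the machine-check sequence above concrete, here is a purely conceptual sketch -- none of these helper names correspond to a real kernel API; it only encodes the order of operations (correct, identify, move, notify, resume):

      # Conceptual machine-check flow: the helpers are placeholders, not real APIs.
      def try_correct(error):        return error.get("correctable", False)
      def failed_components(error):  return error.get("components", [])
      def migrate_work_off(part):    print(f"migrating tasks off {part}")
      def notify_os(error, parts):   print(f"OS notified about {parts}")
      def resume(saved_state):       print(f"resuming at {saved_state}")

      def handle_machine_check(saved_state, error):
          corrected = try_correct(error)        # CORRECT the error if possible
          parts = failed_components(error)      # IDENTIFY what failed
          for part in parts:
              migrate_work_off(part)            # MOVE work to healthy components
          notify_os(error, parts)               # NOTIFY the OS so it can drain the part
          if corrected:
              resume(saved_state)               # pick up WITHOUT the error
          else:
              raise SystemExit("uncorrectable: contain and restart the domain")

      handle_machine_check({"pc": 0x1000}, {"correctable": True, "components": ["DIMM 3"]})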

  • IBM zSeries [ibm.com] supports multiple Linuxes running simultaneously on the same box.
  • but to never use. My first thoughts were something along the lines of:
    "Now I just have to pop in this new memory stick, and I'm done. Oh @$%#!@! I must have shorted something out when I accidentally dragged this contact against something on the motherboard."

"In my opinion, Richard Stallman wouldn't recognise terrorism if it came up and bit him on his Internet." -- Ross M. Greenberg

Working...