Linux Business

A Quarter-Million Dollar Box For A Free OS

popeyethesailor writes: "According to a CNET story, the server startup Egenera will debut its high-end Linux servers for financial-services customers, running Red Hat Linux. An earlier CNET story details the design." That's a hefty price tag, but the companies they hope to sell to (financial-services companies and service providers) aren't shy about investing in tools. Of course, an S/390 isn't cheap either, no matter how many GNU/Linux images it's running ;)
  • It's... a mainframe.

    Unix returns to its home.
    • Excuse me? Unix's home is a minicomputer, not a mainframe.

      -- nobody

      ps: Unix ticks past its billionth second next Saturday!
    • Actually it's more like a really dense server farm.

      Possible markets? The only thing that really comes to mind for me is ISPs. This could replace racks of essentially standalone machines quite nicely. But of course that's not the best market to go after right now.

      I'm wondering if more conventional companies would go for this. There are lots of companies that have a ridiculous number of little servers floating around. If they had been deployed on a beast like this from the start then they could have saved a lot of money. But now that they have all these little servers, it's hard to imagine them throwing them all out and replacing them with one box.

  • Correct me if I'm wrong, but aren't all these servers going to run with 4GB of RAM? What good will they do running Linux when Linux can't currently scale past 4GB of RAM?
    • 2.4.x supports >4GB of ram.
      See http://www.spack.org/index.cgi/LinuxRamLimits
    • Common misconception!


      Linux can scale to more than 4GB of RAM. The problem is that, on the x86 architecture, a single process can't address more than 4GB (a hardware limitation). That doesn't mean you can't have, say, eight 4GB processes. And this applies to x86; I don't know what happens with the big-iron beasts.


      BTW, an S/390 does not run just a single Linux image.

    • I'm not sure here, but AFAIK Linux 2.4 supports more than 4GB of RAM using Intel's Physical Address Extension (PAE), which every Pentium Pro or newer should have.

      I know this is a workaround, not a clean solution, but the problem here is the 32-bit architecture, not Linux itself.

      At least I'm sure I saw more than one announcement of a "linux monster" like this in the past.
    • Besides the fact that Linux can access more than 4GB of RAM, this thing isn't a single server; it's a cluster. If each of the nodes has 4GB of RAM, that adds up to a lot.
    • The Linux kernel documentation states that you can use 16 gigs or more by using high memory. Of course, only about 1GB is directly mapped at a time, but that's all handled at the kernel level, so unless it's being used for something like DMA, it won't really matter.
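
      To make the distinction above concrete, here is a minimal sketch (my own illustration, not from the article; it assumes a 32-bit x86 Linux box, and the 256 MB chunk size is arbitrary). A single process keeps mmap()ing anonymous memory until its virtual address space runs out; on 32-bit x86 it stops around 3 GB no matter how much physical RAM the machine has, PAE or not.

        /* addr_space_probe.c - how much can ONE process map?
         * Anonymous, untouched mappings consume address space, not RAM,
         * so this measures the per-process virtual limit discussed above.
         * Build: gcc -o addr_space_probe addr_space_probe.c */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void) {
            const size_t chunk = 256UL * 1024 * 1024;   /* 256 MB per attempt */
            size_t total = 0;

            for (;;) {
                void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                    break;      /* address space (or map count) exhausted */
                total += chunk;
            }
            printf("mapped %lu MB before mmap() failed\n",
                   (unsigned long)(total / (1024 * 1024)));
            return 0;
        }

      The kernel itself, with highmem (and PAE on >4GB configurations) enabled, can still hand the remaining physical RAM to other processes or use it as page cache; it is only each individual process's view that is capped.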
  • by GreyPoopon ( 411036 ) <gpoopon@gma i l .com> on Monday September 03, 2001 @08:50AM (#2247895)
    Could somebody please explain to me where the $250,000 value is? Is this just another case of bad allocation of venture capital? The $250,000 is the BASE price of a system that can hold up to 24 cpu boards that CAN be connected to a network or CAN be connected to a drive array. The stated purpose in the article is to provide redundancy for failover. The only cool thing I can see is that if a cpu fails, another cpu will assume its name, characteristics and storage space. What wasn't clear was whether or not all 24 CPU boards were redundant, or whether you could have several redundant machines within the same "cabinet." But there wasn't anything really magical going on here. These boards would contain either 2 or 4 high-end processors (just over 1 GHz). I can see a price tag of maybe $40,000 or something, but certainly nowhere near the order they are asking. Anybody have any insight on this?
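
    As a crude illustration of where the gap lies (my arithmetic, not the article's): if the $250,000 base price bought a fully populated chassis of 24 four-way boards, that would be roughly $2,600 per processor slot, against about $400 per processor at the $40,000 figure above; the difference is what Egenera is charging for the failover plumbing, the interconnect, and the management layer.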
    • I think that for such a price you not only get a piece of hardware running an OS with various pieces of software on it, which may account for $40,000 or so; you also get support, backed by a good service level agreement. For businesses, good guaranteed support is worth much more than just hardware and an OS.
    • This is a solution meant for the "big guys". The ones who require absolute, or near-absolute, failover. The suits are willing to spend this kind of cash because they're looking for peace of mind. I'm sure we could all slap something together for them in a week or two that would do the same for 40K, but would it keep running for a couple of years with minimal attention, and would it be easy to configure and keep running?

      If you think you can do it for 40K then you've obviously got yourself a golden opportunity, call up the venture capitalists today! Okay, tomorrow.

      • This is a solution meant for the "big guys".


        Obviously. :)


        The suits are willing to spend this kind of cash because they're looking for peace of mind.


        I'm glad to hear that they find peace of mind in paying these kinds of prices, but I honestly don't think that even at the $250K price tag they will get the performance and service they are expecting.


        If you think you can do it for 40K then you've obviously got yourself a golden opportunity, call up the venture capitalists today! Okay, tomorrow.


        Nah, I'd need at least a week. :) But seriously, I think that after the $40K level, there's really no additional value that can be added. Sure, they can add tons of service and support agreements, but if there's a drive array failure, the whole thing is down anyway. If lightning strikes the building and fries all of the power supplies, there's no redundancy. I don't know, maybe they are planning on setting up a full repair shop right next to all of their customers. Somehow, though, I think that at the first "incident," this company is going belly-up. Am I being too harsh?

        • by Anonymous Coward
          You probably are being too harsh. IBM Mainframes have MTBFs in the 50-year range. That's reliable.

          An IBM mainframe will "call home" and order a replacement part as soon as it detects fluctuations in performance indicative of imminent failure.

          When the CPUs are running, they're each really two CPUs - at a per-instruction level on the silicon itself, if the results from both CPUs differ, the CPU is "retired".

          There are similar failsafes and interlocks all through the system.

          And the I/O throughput is both phenomenal, and transactional.

          The PC has an MTBF of a few years, often much less. Thus, while you might get equivalent computing power, once you get up to a few hundred PCs you spend as much time running around "changing lightbulbs" - i.e. replacing PCs that have failed - as doing useful work.

          The PC hardware architecture is a "toy" compared to a mainframe, or even a commercial unix box, or (ironically, given the amiga's "toy" image) even an old amiga motherboard.
          • At (workplace - 2) I was the PFY at a place that used a horde of PCs in a compute cluster. Horde as in north of 150. Probably half of our time was spent simply running around fixing dead or dying machines. I think we had an average of one total machine failure a week, with lots of lesser events thrown in to make life interesting. The most common failure mode was just a power supply crapping out (not surprising, because these guys were running at 90+% system load 24x7x365).
          • But if the part that the computer thought was defective turns out to be O.K. will it go on a murderous rampage?
        • I think that after the $40K level, there's really no additional value that can be added


          You are plain wrong, I'm afraid; the fact that you haven't seen any tasks where more money equals more value just means you ain't seen nothin'.


    • It's surprising to me that people don't see the value in high-end server machines (Sun Enterprise, IBM mainframes), but then most of you have never had to poll 35 international long-distance switches at an AVERAGE of 7,000 transactions a minute. And then be comfortable enough to walk into the data center in front of the execs and pull the network connection out of the back of the main server, to prove to the guys touring the building that we have serious failover protection. When you can do that, people suddenly don't care about the price they paid, just that they can sleep at night... :-)
  • by Anonymous Coward on Monday September 03, 2001 @08:54AM (#2247901)


    Looks like they bought a copy of the Red Hat High Availability Server for about $2000 and loaded it onto a rack of CPUs.


    Pretty much any competent tech could do it. I've had customers running systems like this for Geophysical 3D Migrations for over a year now. No big deal really.


    It sure took me forever to find a "product" on their website. Mostly just organisational and marketing bullshit.


    • And why not? RedHat is great at marketing; although it's not my distribution of choice, it sure as hell is my favorite Linux company. RedHat is the best at making you pay for something you can get for free.

      • RedHat is the best at making you pay for something you can get for free.

        That's the only way most corporations will ever accept the use of (Free || Open Source) Software. I work as an IT consultant to @BIG_OIL_COMPANIES, and you wouldn't believe how hard it is to get them to accept things like perl. Hell, I think the only reason they did eventually let us use perl is because ActiveState is around so an actual company is out there that we can point to. Sad? yes. But that's the way it is out in the trenches.

        • > Sad? yes. But that's the way it is out in the trenches.

          I too have worked for a Big Oil Company.
          The real problem in IT that leads to this type of complaint is something more fundamental.

          People with knowledge, experience, and skills are rarely, if ever, placed in positions of authority to make decisions.

  • by sticks_us ( 150624 ) on Monday September 03, 2001 @08:54AM (#2247905) Homepage
    Here's why:

    1. The dot-com boom has pretty much evaporated, leaving the realm of "professional computer work" to geeky types with college degrees and bad hair (I'm one of them). The work that is done now is more mundane and laborious (billing, insurance, reporting, etc.) than the $20K-bonus-scooter-riding-dot-com-hipster-streaming-multimedia stuff. [netslaves.com] (I'm not bitter, I'm jealous)

    2. Computers are now getting bigger and more mainframe-y (see comment above [slashdot.org]). More and more enterprises are centralizing mission-critical functions, primarily for ease of management as well as power and security. Proof: we've already got Linux/390 [ibm.com], the Solaris E10K [sun.com], there's some new big and exciting Intel box out there I keep hearing about that has 64-way SMP, and now this.

    Anyone have the newest Creative Computing?
    • Speaking as another geeky type with a college degree and bad hair who works on things like this (distributed data storage/high-end computing), I'd agree with point 1 and disagree with point 2.

      There was a revolution to put computing power on the desk; we did it, it's been done, it's time to move on to more interesting things. A century or two ago the commoners knew nothing about medicine, leaving everything to the *professionals*. Nowadays most of my dept can do CPR, basic first aid, and diagnosis of common ailments. This doesn't make them doctors - just as using Office doesn't make people Computer Scientists.

      Point 2: Computers are now getting bigger and more mainframe-y. The defining point of the mainframes was their fixed location for processing, with requests being sent from essentially dumb terminals.

      The current research interest is in tying high-end systems together as a utility, Grid-like resource, where lots of mainframes are tied together to either provide a specific service or to do the job in the quickest time possible. This deviates from the mainframe concept of the eighties in that there is no one central point for the system, and processing, storage, etc. can be done wherever is easiest/cheapest without having to spend money on redundant resources.

      Cases to prove my point: IBM's investment of $4 billion in server farms/grid architecture for businesses, the Dutch government's four-location grid, the UK government's £120 million investment in a National Grid with lots of nice new supercomputers and data storage all linked together, and the NSF's $53 million investment in a teraflop+ processing grid shared between SDSC, NCSA and two other sites.
    • Hmm... interesting points. I've been doing a lot of thinking about the mainframe/dumb-terminal vs. power-user PCs on all desktops models lately.

      I work in I.T. for a business (7 locations around the U.S.) that is in the process of centralizing our servers and storage right now. We started out with VT-100 dumb terminals and DEC VAX servers originally, and as the PC revolution progressed, moved to a much more distributed model. (We had 2 file/print servers installed on-site at each of our locations, and tape backups were done independently at each location.)

      Now, we've invested heavily in Citrix Metaframe and a storage area network solution, and power is becoming centralized again.

      I think the primary reason people moved away from a centralized computing model was the rise of the GUI interface. Microsoft and Apple brought the GUI to the forefront, and millions of individuals became comfortable using it. Despite all of its problems, it made it possible for a whole generation of workers to learn the basics of using a computer at home. Instead of training someone in exactly what menu selections to choose in a custom-written app on a dumb terminal at one particular business, you could train them in general PC usage concepts instead. The knowledge carried over to anything you put in front of them.

      Now, the GUI has been fully integrated to the point where it's feasible to make a dumb-terminal (now renamed "thin client") that works just like the full-power desktop PC. For reasons of security and ease of administration, I.T. has always wanted to centralize the PC environment. Until now, though, the benefits of using GUI desktops outweighed that desire.

      The challenge, now, is convincing users to let go of some of the control we gave them in the 90's. Where I work, our more knowledgeable users feel punished when you take that Pentium III off their desk and replace it with a thin client. You can overcome some of that by upgrading their monitor. (Everyone likes a bigger, brighter screen.) Still, they resist when they discover they can't install those 30-day free trial programs anymore, or they can't load a driver for that custom 6-button cordless mouse they bought without I.T. knowing about it.

      I think the final solution will really be a mix of high-end PCs and thin clients. The high-end PCs will be able to enter and exit the Citrix Metaframe environment at will, while everyone else "lives" in the Citrix environment all day long. People who can give legitimate reasons to keep a PC will do so. Otherwise, they're getting the thin client.

      In summary, we might have come full circle, but today, users have been on both sides of the fence. You'll see more of them wanting a mix - rather than one model or the other.
    • Anyone have the newest Creative Computing?

      Now that's a blast from the past. Used to love that old mag. That and 80 Micro.

      Anyway, I'm waiting for IBM to come out with the IBM Personal Computer code named "We really mean it this time."

    • IBM is even running TV ads now for Linux on the S/390.
  • progress ;) (Score:4, Funny)

    by ^Z ( 86325 ) on Monday September 03, 2001 @08:55AM (#2247908) Homepage Journal
    What did we see 10 years ago? Last-generation hardware being used for servers. Now we see newer and better software running on older hardware designs (e.g. S/390). Do the math. The next generation of even more powerful software will run on even older (yet refurbished) hardware designs: expect Linux 4.x to run on an 8192-processor UNIVAC, with a 5.0 kernel for the 50GHz ENIAC in the works.
    • Servers are like Rolls Royces... they cost a bundle but are as reliable as hell.

      Like Rolls Royces, servers tend to stay away from bleeding edge technology, instead focusing on tried and tested stuff built with only the best components and only the best craftsmen.

  • Sounds like 5 years ago. I remember like it was yesterday.

    Running 1000 user Netware 3.x on a Netframe 450. (I especially remember the $5000/1GB drives)

    This "new" architecture sounds a lot like a repackage of that idea. They have multiple server blades in 1 chassis with a proprietary (800Mhz) backplane to communicate. They could even run Netware/OS2 and NT in the same chassis.

    This new one even has the "Rcon" (lights out) capability (hee hee).
  • by TheLoneCabbage ( 323135 ) on Monday September 03, 2001 @09:00AM (#2247917) Homepage

    This is a very good trend when you stop to think about it.

    One of the key issues technical column writers have been b!tching about is that Linux lacks enterprise server credibility.

    With Linux driving mainframes and massive credit-card / insurance-company type machines, who could complain about Linux's ability to handle their business demands? (If it can balance the budget for a Fortune 500 company, it can host your stupid ASP/Intranet/fileserver/DB.)

    Think about the (Ugh! I'm gonna be sick) marketing angle... the average small business, or even home user, can have access to the same toys as multi-billion-dollar corporations and governments. (Barring the obvious memory and other hardware limits; this is about perception, after all.)

    And it's not about a free OS. It's about the ability to develop the app on a PC and recompile it to run on a computer that makes Deep Thought look like Rain Man. And on top of all that, the big system will work just like any other Linux box running X. So it's easy to administer (wow! Who would have thought to say that about Linux!!)

  • TCO (Score:3, Insightful)

    by onion2k ( 203094 ) on Monday September 03, 2001 @09:05AM (#2247924) Homepage
    This just goes to show that the total cost of ownership for Linux/Unix/NT/2K has very little to do with the license for the OS at all. Hardware, admin, the software running on the box and so on more than make up for the trivial price differences between most server operating systems. Just because a Linux CD might be free doesn't mean running it on an enterprise box is going to save you a single penny.
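
    To put a purely illustrative number on it: a few hundred dollars' difference in OS licence cost per box disappears next to even one administrator at, say, $60,000 a year over a three-year service life ($180,000); the labour and hardware lines dwarf the licence line in almost any realistic TCO calculation.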
    • The labor component of TCO (the biggest) is inversely proportional to the size of the population that knows about and can support the system. As more and more programmers and sysadmins get "on board" with Linux, TCO goes down.

      This is also called "lock-in": the primary value of a software product is not intrinsic, it's how many people know about and use your system. It works very much like rock music... the better known it is, the more popular it becomes (even if it is god-awful). Of course, in software it's doubly powerful, because people familiar with the software make other software that depends on the base software, creating a multiplier on top of an already powerful effect.

      NT and Office have a "low" TCO, since one can *hire* people off the street to administer and use these products without additional training. Hopefully Linux will become the TCO leader by saturating the sysadmin market from the bottom up. If sysadmins prefer Linux over NT, then Linux will eventually have the lower labor component of TCO.
    • Right. Unless you are talking about a large MS install, in which case yearly licensing and forced upgrade purchases make a significant dent in your operating budget.
      • The yearly licensing fees even for MS BackOffice are still overshadowed by the cost of the one or two experienced technical people to manage and maintain the servers.

        I am one of the rare people who works on both sides of the fence, and likes both sides. Both *nix/BSD and Microsoft have advantages and disadvantages. Though I don't use it or like it myself, I even concede that Apple has its place.

        Every platform has benefits and drawbacks, and TCO for a MS platform office isn't that different from TCO for a *nix based office. My personal opinion is that TCO is best when you mix both services together along with people experienced with both, and use whatever platform works best for the task at hand.
    • Re:TCO (Score:1, Interesting)

      by Anonymous Coward
      Much of my training was on Linux at home on Red Hat 5.1, and I have a job as an admin on Solaris. What is your point? If I did that on Windows I would have had to buy NT plus C++ and VB compilers, Access, and SQL Server on a modern machine. I did this on a 166MHz box. Training costs on Linux are nil, as I created a training environment for myself on Linux: Apache, C and Postgres. Do you find it surprising I can do Solaris and Informix (which is just similar)? Even a $1000 hit at the training stage raises labor cost a bunch. Free software lowers the cost all the way from training to production, as well as the license management overhead that bites us in the ass every year. It just goes to show you do not know the implications.

      If software prices are trivial, why don't vendors just raise the price? If I were a stockholder...
    • It can make a significant difference in TCO. In a medium-sized company with in-house IT staff, the difference could allow you to hire a junior admin.
    • This requires big metal because it needs big power! Nobody said the hardware was free.

      If you spend megabucks on hardware, not having to pay for the software softens the blow.

      On the other hand, requiring a bunch of machines running Windows to do the exact same task can be expensive too. [uiuc.edu]

      Linux is more cost effective because you aren't shelling out several billion [washtech.com] in software.

  • That's a hefty price tag, but the companies they hope to sell to (financial-services companies and service providers) aren't shy about investing in tools. Of course, an S/390 isn't cheap either, no matter how many GNU/Linux images it's running ;)

    This must be what Microsoft is talking about when they say that Linux has a high total cost of ownership ;)

    Bryguy

    • If indeed NT has a lower TCO than Linux, it is only a short-term item. For every person that learns how to use $500 NT Advanced Server, there are two who can't afford the $500 and learn $0 Linux instead. Eventually this change in mindshare will catch up with Microsoft and the TCO table will shift, with Microsoft on the higher end... since those who know Microsoft NT Advanced Server will be in shorter supply.

      So... Microsoft may be right about their price in the short term; the market is quite inelastic. But in the long term the market is quite elastic... and it certainly notices the value proposition Linux provides.

  • Not A New Idea (Score:2, Informative)

    by GeekSoup ( 447371 )
    Check out www.rlxtechnologies.com [rlxtechnologies.com]. They have had the same technology available for almost a year now. The 'blade plane' for reducing the number of cables needed... etc... etc... And you can get three blades in a 3U case for $5k.

    • No - those are different "beasts"

      The RLX machines are specifically for ISPs (look at the blade card - you'll see a Transmeta processor plus a hard disk) - so when you get a new client, you put an image on it: PHP, MySQL/PostgreSQL, FrontPage extensions, an IP - and let the client do what it wants to do...

      With this one it's a different thing - you'll definitely connect a SAN to it, you'll have Xeon processors that can crunch numbers much better than Transmeta's - and this machine doesn't give a damn about power saving...
  • well then I better get into the hardware business.
  • Obviously, this machine is worth more than I am :)

  • This box looks to be designed for current-day applications. Think about it: a normal 'application' these days consists of
    • A number of front-end or presentation servers, often web-based
    • A set of middleware/application servers
    • A number of back-end servers, normally running a database of some sort
    In addition you might see a load balancer in there as well for more complex systems. This box allows you to put all of these things in a single physical unit, with a nice high speed interconnection between them, along with the ability to add servers as required.

    A single server with many CPUs like the Sun E10K is great but very complex and really expensive. It doesn't give you the freedom to separate out components. That's why people moved away from monolithic boxes and on to the distributed model. This machine is trying to combine the best of both worlds, with modularity of servers but a much better sense of locality for a single application spread across multiple systems.

    Sharing interfaces to the real world makes sense, too, as most of the traffic can stay internal. Think of the cost of $2-3K per fibre channel interface and $1k per GigE interface, not to mention the relevant switches, and suddenly this box doesn't seem to be too expensive after all.
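
    For example (using the figures above, purely as an illustration): giving each blade in a chassis of, say, 24 its own Fibre Channel card at $2,500 and its own GigE card at $1,000 comes to about $84,000 in interface hardware alone, before a single switch port is bought; sharing a handful of external interfaces across the chassis takes a large bite out of that.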

    I imagine that this will ultimately stand or fall on the TCO, the biggest part of which is bound to be management...

    • Sun's StarFire machine (the E10K - the code name sounds cooler) does let you separate out components. That's one of the major selling points. Press button, wait for clear, remove card. That's the separation of components - in the same form as these blade thingies. As for software, everything runs in a virtual machine, with one really freakin' simple OS to partition them.
      • It's not really the same thing. The StarFire was really designed to be a large SMP server; the domaining was secondary, but required to sell the thing. Last time I checked, the StarFire didn't have things like hot-swap of one domain for another, sharable FC interfaces, and the like. That said it's been a while since I last took a look at it, so it might have evolved a bit since then.

        Anyway, the 10K was always designed as a really high-end server, rather than a simple-to-administer cluster, which is where the BladeFrame is aiming at being.

        Oh yeah, and there's about an order of magnitude in the price difference between the two systems...
  • The only time price matters is when you are talking about recouping your investment. I have worked with a few financial companies, and if this thing can give them a bit better performance, then the cost will be made up in days or weeks.

    I look at this product as akin to Windows 2000 Datacenter, a product which costs at least $500K on a 32-way system (from Compaq).

    This is the time to look at a product like this and say "Wow, if they can sell it to companies who have traditionally run mainframes, MVS, VMS or some 'big' Unix, then it is good for Linux."

    -Jeff
  • It's a cluster, not a server. It's hard to tell how this is that different from a rack full of 1U servers, but I didn't read their Web site carefully.
  • The actual benchmark machine for 'Charlie' was a rather low-end machine, probably $1 million total cost. With 40,000 images that's 25 bucks a server. Let's say that in practice that's off by a factor of 20. That's right, let's say the benchmark understates the actual cost by 95%. That still ends up at 500 bucks a server. Still too much?
    • How much horsepower does each of those virtual servers get? It can't be that much. $500/server would be too much if it was only 100 MIPS.
      • How much horsepower does each of those virtual servers get? It can't be that much. $500/server would be too much if it was only 100 MIPS.

        That depends on the application. Lots of servers spend most of their time idle. If you expect to be doing CPU intensive work a lot of the time, then no, a VM on a partitioned server is not for you. If however you want cheap, reliable, high availability for the kind of applications that do not tax the CPU, this is ideal.
    • Don't confuse a large number of 'logical' machines with physical ones. If a Pentium III had the ability in hardware to subdivide itself into thousands of functionally identical logical processors, you would be able to run thousands of Linux instances on that one CPU. You probably see the problem you would immediately encounter: each Linux instance would have only a tiny fraction of a percent of the PIII's processing power. Yes, you'd have thousands of distinct running instances of Linux, but they would be very slow whenever several of them try to do something CPU-intensive at the same time.

      A mainframe CPU is not dramatically faster than (any other) microprocessor anymore. In recent years I've only been able to compare benchmarks indirectly; it seems that IBM isn't interested in submitting its mainframes for industry-standard benchmarking these days. Bottom line: a 12-CPU mainframe is still a 12-CPU box, even if it's running 1,000 or 10,000 instances of Linux.
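
      To put rough numbers on that (my own illustration, not a benchmark): divide a 12-CPU box evenly across 10,000 Linux images and each image averages 12/10,000 of a processor, a little over 0.1% of one CPU; the arrangement only works because the overwhelming majority of those images are idle at any given moment.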

      The mainframe's value is no longer in being a honker of a computer. Reliability, the ability to run existing OLTP workloads, and manageability are the big reasons people still buy mainframes.

      Move along now; there's no magic going on here.
      • Nor does it have ESCON adapters or OSAs or virtual routers or WLS or RACF, all of which, if they were functionally implemented on a PC, would tend to eat it alive. The point is that while the CEC itself may not execute more 'ticks' than a Pentium, the system architecture is designed to provide efficient performance.
  • these guys are toast.

    Sun and IBM are gonna do anything for a sale. When business gets slow is when these firms really get nasty. The pie (IT budgets) is shrinking fast and most firms plan to continue to reduce spending.

    A few years ago, when there was plenty to go around, they probably could have carved out a niche. Now, no way.

    I give them less than two years.

    • by Anonymous Coward
      The problem is that many tech people don't get business (but feel like they can bitch about it), and you've just demonstrated that. Business, especially finance, is about relationships. These "guys" have received funding from the financial industry, to build a product for the financial industry, which will be used by the funding parties (including CSFB and Goldman Sachs). If this product works to their satisfaction, nobody will be toast.
      • I work for an Investment Bank, and have for almost twenty years.

        I've a Masters in Finance, in addition to Undergraduate Math / Computer Science.

        IBM and Sun already have extensive relationships with Investment Banks - the very market these guys are trying to enter.

        And, as I previously pointed out, they will do anything to protect it. I've dealt with both firms, and they will cut almost any kind of deal - in a good market.

        Every firm on Wall Street and in The City in London is simplifying tech, cutting back on the number of vendors and relationships.

        In this market, it's a really bad time to be a tech startup, and especially one that is selling a commodity product.

  • Vern Brownell was CTO at GS. He's CEO, not CTO, at Egenera [egenera.com].
  • I'm not sure that this would be an acceptable solution for a real business that has the $250K to spend on a server of this class. I purchase a large amount of enterprise-level hardware, and I know I would have a very difficult time selling a solution by some unknown company named Engenra.

    I've always believed that supportability was one of the most important "-ilities" when evaluating hardware and software (i.e. scalability, reliability, supportability, etc.). I would have great concerns that the company who manufactured my $250K refrigerator wouldn't be around in a couple of years to support it.

    While it is exciting to see new Linux-based platforms emerge, I know that I would have a very difficult time getting my CEO to cough up $250K for this box. Even the technically un-savvy would have to ask "What about solutions from IBM or Sun?".

    We've already seen several examples of high-end boxes that have the capability to run Linux from more established manufacturers. We're familiar with the S/390 and the Sun E10K. There are also lesser-known high-end solutions from other behemoths like Unisys. It may be unfair, because Engenra's technology may be far superior to any of the others (but I couldn't make even a premature judgment... their website doesn't give much detail).

    But there's one thing I do know: spending $250K on mission-critical hardware from an unknown startup is a tough pill to swallow for the people who sign the checks. (Remember back in the late 90s when companies used to do things like that - we used to call it venture capital ;)

    • Yeah, most people would have a very difficult time, since the company is not named "Engenra." The company is actually "Egenera," but if you can't even spell it, and management probably can't spell it, then maybe I see your point.
  • Even though the system in question is not a mainframe (it's an Intel-based blade-plane type system), I do want to say a few things about the S/390... or whatever IBM is calling them now. Every time MS or Linux adds support for some newfangled thing (say, the new buses on the PC that are supposed to be mainframe-channel-like...), well, the mainframe has been doing it for years! When the PC folk added virtual RAM via paging stuff out to disk, that came first on the mainframe. Almost every type of PC technology that comes down the pike has its roots in the mainframe world. PCs need better I/O buses... in come channels, and so on and so on. The mainframe support consultant I work with used to call PCs "pretend computers" because they didn't have half of what the mainframe did. Now servers are starting to get these I/O things and we are supposed to gasp because it's new. Well, it isn't new; it's been around for 15 years on the mainframe. Mainframes are solid, so long as your network stays up and you don't have students hammering on the thing! :)

  • by ikekrull ( 59661 )
    This thing has got nothing on my cluster of three 12MB 486DX2/66s hooked up with fat 10Mbps Ethernet to a screaming 16MB P-75 controller running Slackware with IPVS kernel patches and a giant 800MB IDE disk.
