A Quarter-Million Dollar Box For A Free OS
popeyethesailor writes: "According to a CNET story, the server startup Egenera will debut its high-end Linux servers for financial services customers, running Red Hat Linux.
An earlier CNET story details their design." That's a hefty price tag, but the companies they hope to sell to ("financial-services companies and service providers") aren't shy about investing in tools. Of course, an S/390 isn't cheap either, no matter how many GNU/Linux images it's running ;)
Everything old is new again (Score:1, Insightful)
Unix returns to its home.
Re:Everything old is new again (Score:1)
-- nobody
PS: Unix ticks past its billionth second next Saturday!
Re:Everything old is new again (Score:1)
nobody
Re:Everything old is new again (Score:1)
Maybe someone else can explain it better.
I stand corrected. (Score:2)
Re:Everything old is new again (Score:3, Interesting)
Actually it's more like a really dense server farm.
Possible markets? The only thing that really comes to mind for me is ISPs. This could replace racks of essentially standalone machines quite nicely. But of course that's not the best market to go after right now.
I'm wondering if more conventional companies would go for this. There are lots of companies that have a ridiculous number of little servers floating around. If they had been deployed on a beast like this from the start then they could have saved a lot of money. But now that they have all these little servers, it's hard to imagine them throwing them all out and replacing them with one box.
Re:Everything old is new again (Score:1)
Re:Everything old is new again (Score:2)
Then there are render farms (for movies) and various scientific applications.
But it still seems like a pretty small market. Of course, a small company doesn't need to sell that many boxes to survive, so maybe that's ok.
Re:Everything old is new again (Score:1)
4GB RAM? (Score:1)
Re:4GB RAM? (Score:1)
See http://www.spack.org/index.cgi/LinuxRamLimits
Re:4GB RAM? (Score:1)
Re:4GB RAM? (Score:1)
Linux can scale to more than 4GB of RAM. The problem is that, on the x86 architecture, a single process can't address more than 4GB (a hardware limitation). That doesn't mean you can't have, say, eight 4GB processes. And this applies to x86; I don't know what happens with the big-iron beasts.
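To put rough numbers on this (a back-of-the-envelope sketch; the 3/1 user/kernel split is the common Linux default, not something stated in this thread):

```python
# Back-of-the-envelope numbers behind the 4GB discussion above.
GiB = 2**30

virt_bits = 32        # x86 virtual address width: each process sees 2^32 bytes
pae_phys_bits = 36    # physical address width with PAE enabled

per_process_va = 2**virt_bits / GiB    # 4 GiB of virtual space per process
pae_physical = 2**pae_phys_bits / GiB  # 64 GiB of physical RAM addressable

# With the usual Linux 3/1 split the kernel keeps the top 1 GiB,
# so user space actually gets about 3 GiB of that 4 GiB.
user_va = per_process_va - 1

print(f"virtual space per process:  {per_process_va:.0f} GiB")
print(f"usable by user space (3/1): {user_va:.0f} GiB")
print(f"physical RAM under PAE:     {pae_physical:.0f} GiB")
```

So the box as a whole can hold far more than 4GB; it's each individual process that is stuck with a 32-bit view of it.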
BTW, an S/390 does not run just a single Linux image.
Re:4GB RAM? (Score:1)
I know this is a workaround, not a clean solution, but the problem here is the 32-bit architecture, not Linux itself.
At least I'm sure I saw more than one announcement of a "linux monster" like this in the past.
Re:4GB RAM? (Score:2)
"More than 16" (Score:1)
Oh, great segments:offsets again! (Score:1)
Re:??? (Score:1)
Pardon me, but "desktop targeted"? My ass. Red Hat is more interested in the server space, with the desktop as a (close) second. Do you assume that if something is designed to install easily, or offers an array of choices for a GUI environment, it couldn't POSSIBLY be targeted at being a server?
A CLI is not a necessity for a server (a Mac OS X system can be a fine server, and it has a real purdy GUI).
Re:??? (Score:1)
Well, how should I say this: Red Hat's standard user distribution would not make the best server in the world. It's true that Red Hat has enterprise-level solutions, but I would say that there are other OSes (*BSD and Debian come to mind) that offer superior serving.
As to your argument about the CLI, I am quite sure that a good CLI is more important than a pretty GUI for a server. It's much easier to handle things at a lower level if you know what you're doing. Mac OS X... I never tried using it as a server, but I can't imagine how it gets good performance -- it seems so sluggish for workstation use. Remember: in any case, a GUI takes up resources...
Another destined failure? (Score:3, Interesting)
Re:Another destined failure? (Score:1)
Re:Another destined failure? (Score:1)
If you think you can do it for 40K then you've obviously got yourself a golden opportunity, call up the venture capitalists today! Okay, tomorrow.
Re:Another destined failure? (Score:1)
Obviously.
The suits are willing to spend this kind of cash because they're looking for peace of mind.
I'm glad to hear that they find peace of mind in paying these kinds of prices, but I honestly don't think that even at the $250K price tag they will get the performance and service they're expecting.
If you think you can do it for 40K then you've obviously got yourself a golden opportunity, call up the venture capitalists today! Okay, tomorrow.
Nah, I'd need at least a week.
Re:Another destined failure? (Score:3, Informative)
An IBM mainframe will "call home" and order a replacement part as soon as it detects fluctuations in performance indicative of imminent failure.
When the CPUs are running, they're each really two CPUs: at a per-instruction level on the silicon itself, if the results from the two CPUs differ, the CPU is "retired".
There are similar failsafes and interlocks all through the system.
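The per-instruction comparison happens in silicon, but a toy software analogue of the idea (purely illustrative; not how IBM actually implements it) might look like this:

```python
# Toy analogue of lockstep execution: run the same work on two "units"
# and retire the pair if their results ever disagree. In real hardware
# the divergence would come from a silicon fault, not from the code.
def lockstep(fn, args, units=("cpu-a", "cpu-b")):
    results = {unit: fn(*args) for unit in units}
    a, b = (results[unit] for unit in units)
    if a != b:
        raise RuntimeError(f"results diverged ({results}); retiring CPU")
    return a

# Both "CPUs" agree, so the result is accepted.
print(lockstep(lambda x, y: x + y, (2, 3)))  # -> 5
```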
And the I/O throughput is both phenomenal, and transactional.
The PC has an MTBF of a few years, often much less. Thus, while you might get equivalent computing power, once you get up to a few hundred PCs you spend as much time running around "changing lightbulbs" -- i.e., replacing PCs that have failed -- as doing useful work.
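To put illustrative numbers on the lightbulb problem (the MTBF and fleet size below are assumptions, not figures from the post):

```python
# Expected failures per week grow linearly with the size of the fleet.
mtbf_years = 3   # assumed MTBF of a single commodity PC
fleet = 300      # assumed number of PCs in the farm

failures_per_week = fleet / (mtbf_years * 52)
print(f"~{failures_per_week:.1f} PC failures every week")  # ~1.9
```

At roughly two dead boxes a week, someone is effectively employed full-time as a lightbulb changer.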
The PC hardware architecture is a "toy" compared to a mainframe, or even a commercial Unix box, or (ironically, given the Amiga's "toy" image) even an old Amiga motherboard.
lightbulb analogy is a good one (Score:2)
thrown in to make life interesting. The most common failure mode was just a power supply crapping out (not surprising, because these guys were running at 90+% system load, 24x7x365).
Sorry Dave.... (Score:2)
Re:Another destined failure? (Score:1)
You are plain wrong, I'm afraid; the fact that you haven't seen any tasks where more money equals more value just means you ain't seen nothin'.
Re:Another destined failure? (Score:1)
redhat high availability server? (Score:3, Informative)
Looks like they bought a copy of the Red Hat High Availability Server for about $2000 and loaded it onto a rack of CPUs.
Pretty much any competent tech could do it. I've had customers running systems like this for geophysical 3D migrations for over a year now. No big deal really.
It sure took me forever to find a "product" on their website. Mostly just organisational and marketing bullshit.
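The core of any such HA setup is just a heartbeat plus a takeover rule. A minimal sketch (hypothetical addresses and timeouts; a real deployment would use the Red Hat HA tooling rather than anything hand-rolled):

```python
# Minimal heartbeat/failover sketch: the primary announces liveness over
# UDP; the standby claims the service if heartbeats stop arriving.
import socket
import time

PEER = ("10.0.0.2", 9999)  # hypothetical address/port of the standby node
INTERVAL = 1.0             # seconds between heartbeats

def primary():
    """Primary node: announce liveness once a second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"alive", PEER)
        time.sleep(INTERVAL)

def standby(timeout=5.0):
    """Standby node: take over if no heartbeat arrives within `timeout`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PEER[1]))
    sock.settimeout(timeout)
    try:
        while True:
            sock.recvfrom(16)  # heartbeat received; primary is alive
    except socket.timeout:
        print("no heartbeat; claiming the service address")  # failover here
```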
Re:redhat high availability server? (Score:1)
selling free stuff (Score:2)
That's the only way most corporations will ever accept the use of (Free || Open Source) Software. I work as an IT consultant to @BIG_OIL_COMPANIES, and you wouldn't believe how hard it is to get them to accept things like Perl. Hell, I think the only reason they did eventually let us use Perl is that ActiveState exists, so there's an actual company out there we can point to. Sad? Yes. But that's the way it is out in the trenches.
Re:selling free stuff (Score:2)
I too have worked for a Big Oil Company.
The real problem in IT that leads to this type of complaint is something more fundamental.
People with knowledge, experience, and skills are rarely, if ever, placed in positions of authority to make decisions.
We're heading back to the '80s (Score:4, Insightful)
1. The dot-com boom has pretty much evaporated, leaving the realm of "professional computer work" to geeky types with college degrees and bad hair (I'm one of them). The work that gets done is now more mundane and laborious (billing, insurance, reporting, etc.) than $20K-bonus-scooter-riding-dot-com-hipster-streami...
2. Computers are now getting bigger and more mainframe-y (see comment above [slashdot.org]). More and more enterprises are centralizing mission-critical functions, primarily for ease of management as well as power and security. Proof: we've already got Linux/390 [ibm.com], the Solaris E10K [sun.com], there's some new big and exciting Intel box out there I keep hearing about that has 64-way SMP, and now this.
Anyone have the newest Creative Computing?
Re:We're heading back to the '80s (Score:1)
There was a revolution to put computing power on the desk; we did it, it's been done, it's time to move on to more interesting things. A century or two ago the commoners knew nothing about medicine, leaving everything to the *professionals*. Nowadays most of my dept can do CPR, basic first aid, and diagnosis of common ailments. This doesn't make them doctors - just as using Office doesn't make people Computer Scientists.
Point 2: "Computers are now getting bigger and more mainframe-y." The defining trait of the mainframes was their fixed location for processing, with requests being sent from essentially dumb terminals.
The current research interest is in tying high-end systems together as a utility Grid-like resource, where lots of mainframes are tied together either to provide a specific service or to do the job in the quickest time possible. This deviates from the mainframe concept of the eighties in the idea that there is no one central point for the system, and that processing, storage, etc. can be done wherever is easiest/cheapest, without having to spend money on redundant resources.
Cases to prove my point: IBM's investment of $4 billion in server farms/grid architecture for businesses, the Dutch government's four-location grid, the UK government's £120 million investment in a National Grid with lots of nice new supercomputers & data storage all linked together, and the NSF's $53 million investment in a teraflop+ processing grid shared between SDSC, NCSA & two other sites.
Re:We're heading back to the '80s (Score:1)
I work in I.T. for a business (7 locations around the U.S.) that is in the process of centralizing our servers and storage right now. We started out with VT-100 dumb terminals and DEC VAX servers originally, and as the PC revolution progressed, moved to a much more distributed model. (We had 2 file/print servers installed on-site at each of our locations, and tape backups were done independently at each location.)
Now, we've invested heavily in Citrix Metaframe and a storage area network solution, and power is becoming centralized again.
I think the primary reason people moved away from a centralized computing model was the rise of the GUI. Microsoft and Apple brought the GUI to the forefront, and millions of individuals became comfortable using it. Despite all of its problems, it made it possible for a whole generation of workers to learn the basics of using a computer at home. Instead of training someone in exactly what menu selections to choose in a custom-written app on a dumb terminal at one particular business, you could train them in general PC usage concepts instead. The knowledge carried over to anything you put in front of them.
Now, the GUI has been fully integrated to the point where it's feasible to make a dumb-terminal (now renamed "thin client") that works just like the full-power desktop PC. For reasons of security and ease of administration, I.T. has always wanted to centralize the PC environment. Until now, though, the benefits of using GUI desktops outweighed that desire.
The challenge, now, is convincing users to let go of some of the control we gave them in the 90's. Where I work, our more knowledgeable users feel punished when you take that Pentium III off their desk and replace it with a thin client. You can overcome some of that by upgrading their monitor. (Everyone likes a bigger, brighter screen.) Still, they resist when they discover they can't install those 30-day free trial programs anymore, or they can't load a driver for that custom 6-button cordless mouse they bought without I.T. knowing about it.
I think the final solution will really be a mix of high-end PCs and thin clients. The high-end PCs will be able to enter and exit the Citrix Metaframe environment at will, while everyone else "lives" in the Citrix environment all day long. People who can give legitimate reasons to keep a PC will do so. Otherwise, they're getting the thin client.
In summary, we might have come full circle, but today, users have been on both sides of the fence. You'll see more of them wanting a mix - rather than one model or the other.
Re:We're heading back to the '80s (Score:2)
Now that's a blast from the past. Used to love that old mag. That and 80 Micro.
Anyway, I'm waiting for IBM to come out with the IBM Personal Computer code named "We really mean it this time."
Re:We're heading back to the '80s (Score:1)
progress ;) (Score:4, Funny)
Servers are like Rolls Royces (Score:1)
Like Rolls Royces, servers tend to stay away from bleeding edge technology, instead focusing on tried and tested stuff built with only the best components and only the best craftsmen.
Just like old times (Score:1)
Running 1000-user NetWare 3.x on a NetFrame 450. (I especially remember the $5000/1GB drives.)
This "new" architecture sounds a lot like a repackaging of that idea. They have multiple server blades in one chassis with a proprietary (800MHz) backplane to communicate. They could even run NetWare, OS/2, and NT in the same chassis.
This new one even has the "Rcon" (lights-out) capability (hee hee).
Enterprise credibility (Score:4, Interesting)
This is a very good trend when you stop to think about it.
One of the key issues technical column writers have been b!tching about is that Linux lacks enterprise server credibility.
With Linux driving mainframes and massive credit card / insurance company type machines, who could complain about Linux's ability to handle their business demands? (If it can balance the books for a Fortune 500, it can host your stupid ASP/Intranet/fileserver/DB.)
Think about the (ugh! I'm gonna be sick) marketing angle... the average small business, or even home user, can have access to the same toys as multi-billion dollar corporations and governments. (Barring the obvious memory and other hardware limits; this is about perception, after all.)
And it's not about a free OS. It's about the ability to develop the app on a PC and recompile it to run on a computer that makes Deep Thought look like Rain Man. And on top of all that, the big system will work just like any other Linux box running X. So it's easy to administer (wow! Who would have thought to say that about Linux!!)
Re:Enterprise credibility (Score:1)
NT's multi-CPU and native SMB support don't make up for its internal memory bugs and inevitable system failures.
In my mind there is no question that Linux's (or really any Unix's) superb range of database support, combined with the reliability of Apache (or Tux) and PHP as a development language, is a fantastic combination. Linux out-scales NT easily, not only in its clustering capabilities but also in its portability (most anything developed on Linux can be ported in short order).
But the problem lies not in reality, but in perception. Nerds can scream at the walls till their throats are raw, but it won't impress the suits that sign the checks. All they understand is that MS is a big company that makes good-enough software that they "understand" (read: hand-holding). No one ever got fired for buying Microsoft (I'd fire the f#cker!!) And that's the real problem.
TCO (Score:3, Insightful)
Labor Component of TCO is what's important... (Score:2, Interesting)
This is also called "lock-in": the primary value of a software product is not intrinsic, it's how many people know about and use your system. It works very much like rock music... the more well known it is, the more popular it becomes (even if it is god-awful). Of course, in software it's doubly powerful, because people familiar with the software make other software that depends on the base software, creating a multiplier on an already powerful effect.
NT and Office have a "low" TCO, since one can *hire* people off the streets to administer and use these products without additional training. Hopefully Linux will become the TCO leader by saturating the sysadmin market from the bottom up. If sysadmins prefer Linux over NT, then Linux will eventually have the lower labor component of TCO.
Re:TCO (Score:2)
Re:TCO (Score:1)
I am one of the rare people who works on both sides of the fence, and likes both sides. Both *nix/BSD and Microsoft have advantages and disadvantages. Though I don't use it or like it myself, I even concede that Apple has its place.
Every platform has benefits and drawbacks, and TCO for a MS platform office isn't that different from TCO for a *nix based office. My personal opinion is that TCO is best when you mix both services together along with people experienced with both, and use whatever platform works best for the task at hand.
Re:TCO (Score:1, Interesting)
If software prices are trivial, why don't vendors just raise the price? If I were a stockholder...
Per seat licensing and upgrades... (Score:1)
Re:TCO (Score:1)
This requires big metal because it needs big power! Nobody said the hardware was free.
If you spend megabucks on hardware, not having to pay for the software softens the blow.
On the other hand, requiring a bunch of machines running Windows to do the exact same task can be expensive too. [uiuc.edu]
Linux is more cost effective because you aren't shelling out several billion [washtech.com] in software.
TCO (Score:2)
This must be what microsoft is talking about when they say that Linux has a high total cost of ownership ;)
Bryguy
High "TCO" of Linux is only a short-term thing. (Score:1)
So... Microsoft may be right about their price in the short term; the market is quite inelastic. But in the long term the market is quite elastic... and it certainly notices the value proposition Linux provides.
Not A New Idea (Score:2, Informative)
Re:Not A New Idea (Score:2)
The RLX machines are specifically for ISPs (look at the blade card - you'll see a Transmeta processor + hard disk), so when you get a new client, you put an image on it - PHP, MySQL/PostgreSQL, FrontPage extensions, an IP - and let the client do what it wants to do.
In this case it's a different thing: you'll definitely connect a SAN to it, you'll have Xeon processors that can crunch numbers much better than Transmeta's, and this machine doesn't give a damn about power saving...
Hardware business (Score:1)
Obviously (Score:1)
Moderators note (Score:1)
Designed for current-day applications (Score:2, Insightful)
A single server with many CPUs like the Sun E10K is great but very complex and really expensive. It doesn't give you the freedom to separate out components. That's why people moved away from monolithic boxes and on to the distributed model. This machine is trying to combine the best of both worlds, with modularity of servers but a much better sense of locality for a single application spread across multiple systems.
Sharing interfaces to the real world makes sense, too, as most of the traffic can stay internal. Think of the cost of $2-3K per Fibre Channel interface and $1K per GigE interface, not to mention the relevant switches, and suddenly this box doesn't seem too expensive after all.
I imagine that this will ultimately stand or fall on the TCO, the biggest part of which is bound to be management...
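To make the interface math concrete (the blade count is an assumption; the per-port prices are the figures quoted above):

```python
# Cost of giving every server its own interfaces vs. sharing them
# through a chassis. Switch ports would add to the standalone figure.
servers = 24            # assumed number of blades in one chassis
fc_per_server = 2500    # midpoint of the $2-3K Fibre Channel figure
gige_per_server = 1000  # the $1K GigE figure

standalone = servers * (fc_per_server + gige_per_server)
print(f"interfaces alone for {servers} standalone boxes: ${standalone:,}")
# -> interfaces alone for 24 standalone boxes: $84,000
```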
Re:Designed for current-day applications (Score:2)
Re:Designed for current-day applications (Score:1)
Anyway, the 10K was always designed as a really high-end server, rather than a simple-to-administer cluster, which is what the BladeFrame is aiming to be.
Oh yeah, and there's about an order of magnitude in the price difference between the two systems...
Re:Designed for current-day applications (Score:2)
What price??? (Score:1)
I look at this product as akin to Windows 2000 Datacenter, a product which costs at least $500K on a 32-way system (from Compaq).
This is the time to look at a product like this and say, "Wow, if they can sell it to companies who have traditionally run mainframes, MVS, VMS, or some 'big' Unix, then it is good for Linux."
-Jeff
Not a server (Score:2)
s/390 not cheap? REALLY. (Score:2)
Re:s/390 not cheap? REALLY. (Score:3, Interesting)
Re:s/390 not cheap? REALLY. (Score:2)
That depends on the application. Lots of servers spend most of their time idle. If you expect to be doing CPU intensive work a lot of the time, then no, a VM on a partitioned server is not for you. If however you want cheap, reliable, high availability for the kind of applications that do not tax the CPU, this is ideal.
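The consolidation arithmetic behind that claim, with both the utilization and headroom figures assumed for illustration:

```python
# If small servers idle along at 5% CPU, one partitioned box can, in
# principle, absorb many of them before it runs out of headroom.
avg_utilization = 0.05  # assumed average load of a lightly used server
headroom = 0.80         # assumed ceiling: don't fill the big box past 80%

consolidation_ratio = headroom / avg_utilization
print(f"one big box can absorb ~{consolidation_ratio:.0f} such servers")
# -> one big box can absorb ~16 such servers
```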
No, S/390 is not cheap (in price/performance) (Score:2)
A mainframe CPU is not dramatically faster than (any other) microprocessor anymore. In recent years I've only been able to compare the benchmarks indirectly; it seems that IBM isn't interested in submitting its mainframes for industry-standard benchmarking these days. Bottom line: a 12-CPU mainframe is still a 12-CPU box, even if it's running 1,000 or 10,000 instances of Linux.
The mainframe's value is no longer in being a honker of a computer. Reliability, the ability to run existing OLTP workloads, and manageability are the big reasons people still buy mainframes.
Move along now; there's no magic going on here.
But it doesn't have that ability (Score:2)
In this market (Score:1)
Sun and IBM are gonna do anything for a sale. When business gets slow is when these firms really get nasty. The pie (IT budgets) is shrinking fast, and most firms plan to continue reducing spending.
A few years ago, when there was plenty to go around, they probably could have carved out a niche. Now, no way.
I give them less than two years.
Why are they toast? Did you read the article? (Score:1, Insightful)
Re:Why are they toast? Did you read the article? (Score:1)
I've a Masters in Finance, in addition to Undergraduate Math / Computer Science.
IBM and Sun already have extensive relationships with investment banks - the very market these guys are trying to enter.
And, as I previously pointed out, will do anything to protect it. I've dealt with both firms, and they will cut almost any kind of deal - in a good market.
Every firm on Wall Street and in The City in London is simplifying tech, cutting back on the number of vendors and relationships.
In this market, its a really bad time to be a tech startup, and especially one that is selling a commodity product.
Cnet Got the Founder's title wrong (Score:2)
Can you sell this to your CTO/CEO? (Score:1)
I've always believed that supportability was one of the most important "-ilities" when evaluating hardware and software (i.e. scalability, reliability, supportability, etc.). I would have great concerns that the company who manufactured my $250K refrigerator wouldn't be around in a couple of years to support it.
While it is exciting to see new Linux-based platforms emerge, I know that I would have a very difficult time getting my CEO to cough up $250K for this box. Even the technically un-savvy would have to ask "What about solutions from IBM or Sun?".
We've already seen several examples of high-end boxes that can run Linux from more established manufacturers. We're familiar with the S/390 and the Sun E10K. There are also lesser-known high-end solutions from other behemoths like Unisys. It may be unfair, because Egenera's technology may be far superior to any of the others (but I couldn't even make a premature judgment... their website doesn't give much detail).
But there's one thing I do know: spending $250K on mission-critical hardware from an unknown startup is a tough pill to swallow for the people who sign the checks. (Remember back in the late 90s when companies used to do things like that - we used to call it venture capital ;)
Re:Can you sell this to your CTO/CEO? (Score:1)
You know.... (Score:2)
Ha! (Score:2)