All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com)
Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.
From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. Since 2003, the TOP500 was on its way to Linux domination. By 2004, Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize Linux's open-source code to their one-off designs. The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
This is the year (Score:2, Funny)
Linux makes it to the desktop, of a supercomputer.
Re: (Score:2)
The Chevy Gen 2 Volt's infotainment system runs GMLinux. Saw it in the core dump from a crash on another driver's car.
Re: (Score:2)
Is there Linux in this shit?
Re: (Score:2)
This is the year Linux makes it to the desktop, of all supercomputers.
Re: (Score:3, Interesting)
There is a logical reason for this, and it has nothing to do with Linux.
Supercomputers' level of OSS use is primarily a concern of science: it compiles on multiple platforms and is well maintained on most of them. Windows and MacOS are only available for the x86-64, ARM, and PPC platforms, and even then, not all of them. That only leaves FreeBSD as an option, and FreeBSD isn't as virtualization friendly, and drivers aren't readily available for GPU systems.
So it's quite literally the only logical choice, owing to the fact that the other choices would have required engineering resources.
Re:This is the year (Score:5, Insightful)
...it's quite literally the only logical choice
Oh I know, right? But the big fact you danced around is that Linux is just better than the others. It's faster and more reliable. Otherwise the top 500 would not use it. Like, they tried to use Windows, they really did. Microsoft was paying academic institutions to install it and providing teams of free engineers. Still didn't do it. Why? Windows can't handle the load; it can't run continuously under load. It just gets more and more unstable, then it falls over. Even when it does stay up, it can't touch the storage, scheduling, or memory management efficiency of Linux.
Re: (Score:2, Informative)
It has nothing to do with not being able to handle the load. It has everything to do with costs. Linux is free. Windows isn't. Most of the tools for supercomputing were written for the Linux platform. There are tools for Windows, but since it's a niche market there aren't nearly as many. And since in the supercomputing world having a good desktop/GUI environment doesn't mean squat, there is no real incentive to use Windows outside of certain circumstances.
In all the time I used it I never encountered any ser
Re: (Score:3)
You are right, it's not about the ability to take on load (though there is the matter of how self-reliant shops can be when trying to analyze failures, which is unlimited with Linux and inherently limited in Windows). However:
the lack of tools and tracking down a couple numerical inconsistencies
Those are pretty huge things. It all stems from the origin of supercomputing as a Unix thing, and as such, similarity to Unix allowed seamless porting. Windows, however, is very different, and porting the whole technical computing ecosystem to it requires work that no one wants to do (except for
Re:This is the year (Score:5, Insightful)
It has nothing to do with not being able to handle the load. It has everything to do with costs. Linux is free. Windows isn't.
If I get you right: you spend all this money on a supercomputer, and then you logically use the cheapest OS out there instead of a paid one that should work better?
Sounds legit.
Re:This is the year (Score:5, Informative)
> Supercomputers' level of OSS use is primarily a concern of science: it compiles on multiple platforms and is well maintained on most of them. Windows and MacOS are only available for the x86-64, ARM, and PPC platforms, and even then, not all of them.
This makes no sense. Almost all supercomputers are x86-64 based (+/- GPUs).
> That only leaves FreeBSD as an option, and FreeBSD isn't as virtualization friendly, and drivers aren't readily available for GPU systems.
Lol. Supercomputers don't use virtualization.
> So it's quite literally the only logical choice, owing to the fact that the other choices would have required engineering resources.
That's not true: supercomputers on the top500 list within the past 5 years have used Windows, AIX, BSD, and Linux. It's just that Linux is better for the job than the others.
> That said, Linux does not belong in safety systems, and I hope it never ends up in automotive systems, power plants, or spacecraft.
I hope nobody who thinks supercomputers use virtualization ever has their opinion on a computing matter taken seriously by the designer of a safety-critical system.
Linux is in safety-critical systems already. But it depends on the level and capabilities you're talking about. Processing Doppler radar data and sending it to ATC systems in a timely manner is one thing. Running tight control loops in automotive engine and control systems is completely different and just isn't appropriate for Linux.
> Everything else is fair game. These systems need real time operating systems that are highly threaded and can respond to events instantly, not be scheduled, or deferred due to eating all the swap space (one of Linux's worst default features, and what makes it woefully awful for web servers by default.)
You're mixing up all sorts of things here. Nothing responds to interrupts "instantly"; what you want is guaranteed hard upper limits. It often doesn't even have to be all that fast; it just has to have an upper limit so you can design the system to meet response-time requirements. Linux can respond "immediately" to interrupts, by the way. It does not have to be "scheduled". Work can be done in interrupt context (see the sketch at the end of this comment).
"Highly threaded" what? That's nothing to do with real time.
"Deferred due to eating all swap space" What is this meaningless drivel? Automotive and aircraft control systems don't use swap space. They don't even use virtual memory for god's sake lol.
> (one of Linux's worst default features, and what makes it woefully awful for web servers by default.)
Apparently better than all the others at that too. Windows, OSX, and BSD must *really* be shit if Linux is so bad yet it still beat them all there too.
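On that interrupt-context point: here's a minimal, hypothetical sketch of a Linux kernel module doing its work directly in the interrupt handler. The IRQ number and device cookie are made up for illustration (a real driver gets its IRQ from the bus or device tree); treat this as a teaching sketch, not a production driver.

#include <linux/module.h>
#include <linux/interrupt.h>

#define EXAMPLE_IRQ 42            /* hypothetical IRQ line, for illustration */
static int example_cookie;        /* unique dev_id cookie for a shared line */

static irqreturn_t example_handler(int irq, void *dev)
{
    /* Runs in interrupt (atomic) context: no sleeping, no scheduling.
     * Short, bounded work happens right here; anything longer would be
     * deferred to a threaded handler or a workqueue. */
    pr_info("example: handled irq %d in interrupt context\n", irq);
    return IRQ_HANDLED;
}

static int __init example_init(void)
{
    /* IRQF_SHARED lets this handler coexist with another driver on the line. */
    return request_irq(EXAMPLE_IRQ, example_handler, IRQF_SHARED,
                       "example", &example_cookie);
}

static void __exit example_exit(void)
{
    free_irq(EXAMPLE_IRQ, &example_cookie);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");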
Re:This is the year (Score:5, Interesting)
That said, Linux does not belong in safety systems
Dedicated real-time operating systems obviously have their uses, but due to advances in embedded hardware they're becoming less and less relevant. Even with the overheads of an "almost real time" OS like Linux with some compile switches, most modern embedded hardware is capable of making the deadlines in all but some special super-low-latency use cases. The only places where a real-time OS is even necessary these days are rare super-low-latency and super-low-power cases (as in under 0.25 W).
Seriously, 6502s and Z80s are no longer the standard embedded hardware out there.
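To make the "almost real time" point concrete, here's a userspace sketch using only standard Linux/POSIX calls: pin memory so nothing can page out, switch to SCHED_FIFO, and run a 1 ms periodic loop against absolute deadlines. The period and priority are arbitrary example values (and it needs root or CAP_SYS_NICE), so treat it as an illustration rather than a tuning recommendation.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
    /* No swap/paging surprises: pin all current and future pages in RAM. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Real-time FIFO scheduling so the loop preempts normal tasks. */
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 1000; i++) {
        /* Advance the absolute deadline by 1 ms and sleep until it. */
        next.tv_nsec += 1000000;
        if (next.tv_nsec >= 1000000000) {
            next.tv_nsec -= 1000000000;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* bounded control-loop work would go here */
    }
    return 0;
}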
Doesn't guarantee success on the desktop (Score:3, Insightful)
Unix never made inroads on the desktop.
This might actually be harmful: if people think Linux is complicated or designed for heavy hardware, they may not consider it for desktops and use cases involving desktop apps.
Linux has been ready for the desktop since about 1999; before that there were dependency issues and hardware wasn't always supported. Now hardware is more likely to be better supported on Linux than on Windows. I'm writing this on Windows, but that's only because Windows came on this machine; I'll be installing Linux when I have a week of downtime.
Enlightenment is probably the best looking desktop software anywhere; its customizability makes it hard to include with distros, but it should be considered as evidence that it's not user-friendliness or beauty holding Linux back.
I think it's a bit sad to see Linux software becoming overly simplified in the wake of Apple's success the way other software is.
Linux needs to remain the enthusiast and expert operating system more than it needs broad acceptance. Look what happened with the internet; Linux is great without the ads, malware, and other problems I associate with popularity.
That said, Linux skills are still hugely undervalued and not taught in schools, which needs to change. A Linux machine is still your best bet that your machine will still be running, with data and apps updated but not broken, after 10-15 years.
Re: (Score:2)
Eh, no, not really. You're talking about a KDE 1.0, pre-Gnome desktop... I used it, but I wouldn't have inflicted it on anyone I needed to support. Five years later it was certainly reasonable, at least where the average non-technical user was concerned.
Re:Doesn't guarantee success on the desktop (Score:5, Informative)
Eh, no, not really. You're talking about a KDE 1.0, pre-Gnome desktop... I used it, but I wouldn't have inflicted it on anyone I needed to support. Five years later it was certainly reasonable, at least where the average non-technical user was concerned.
KDE was '96, GNOME '97... by 1999 KDE 2.0 was already on the way. I didn't use that, but I remember trying RHL 6.2 [everythinglinux.org], which came out in April 2000 and looks pretty much like a normal desktop to me. Remember that it was going head to head with Windows ME as the consumer desktop; using either was a major PITA. Granted, XP was a big step up, but then you had Vista... you can make a lot of excuses for YotLD not happening, but that Microsoft brought their A-game is not one of them.
The cornerstone of Microsoft's dominance is Office, and Excel in particular: all those people who had to use Windows at work naturally took what little knowledge and training they had and bought a Windows machine for home too. When Outlook kicked Lotus Notes to the curb, they locked that market up good.
Re: (Score:2)
Outlook didn't kick Lotus Notes to the curb... IBM kicked Lotus Notes to the curb.
Re: (Score:2)
I've sort of made my peace with the fact that desktop Linux will always be a niche player for enthusiasts and experts, plus a tiny percentage of normal people who have experts to admin their systems. There are a lot of reasons for this, but I think that an OS which is still a CLI-focused system at its heart has some disadvantages as a desktop solution for the masses.
On the other hand, I think Linux is really in its element with specialized environments like supercomputing. You can hack everything to make
Re:Doesn't guarantee success on the desktop (Score:5, Informative)
Linux still isn't ready for any desktop it isn't installed on. It IS installed on lots of desktops in places like research labs, mine included. But if it's going to make it to anybody else's desk it needs some basic things fixed. I don't know if it's possible to do something as simple as configure a graphics driver in Ubuntu's GUI, but it's certainly not easy.
Everything else works perfectly fine, but none of the GUI systems seem to offer a user-friendly way for command-line-averse users to fiddle with their system settings.
Both AMD and Nvidia package their config UI with their binary blob driver on Linux.
Re:Doesn't guarantee success on the desktop (Score:5, Interesting)
On a scale of LFS to Mandrake, how bad was your experience?
I'll be installing Linux when I have a week of downtime.
It takes me a couple of months to transition to a new Windows workstation. Each time, I try to learn what the native software options are and whether they can meet my current needs. Where they don't, I install or use recommended software in most cases to see if it does what I need (WSL), though I do have exceptions for my personal favorite software in a few instances where I'm just unwilling to learn something new (EditPlus, GIMP, VLC, Sysinternals, PuTTY).
A new install of Linux takes the same process, but it has apt or yum or whatever, which speeds things up pretty dramatically. With a new Linux desktop install, I just rarely have to learn too many new things... usually. (Eyeballing you hard here, systemd!)
If I have to support any significant-sized network, and if it's possible, I'd do Linux desktops everywhere. I'd rather use those admin systems than admin Windows... but I work in a job where I have to support Windows because that's all the core software runs on. As an admin, I can do about anything I need to on Microsoft servers and workstations. It'd be false modesty to say I'm not good at admin on Microsoft systems. On the other hand, I have used Linux and various BSDs at home and work (on servers) since the late '90s. I could eliminate Microsoft in our workplace and cut our IT department's work by maybe 30% if only our primary system ran on Linux. I'd miss some of the AD/DHCP/DNS/DFS stack. I'd miss Excel (running native) and Exchange/Outlook, but honestly, running the alternatives in the cloud or LibreOffice would probably reduce our helpdesk workload after a year or two.
I'm good at my jobs, and whatever systems I admin, I'll learn to be good at. Given the ideal scenario, I could run several thousand workstations with the same effort I'd use for a couple hundred Windows workstations. The scenarios I've been hired to handle haven't been ideal, so I've learned to take advantage of the environments I'm in. I'm good at my jobs because I like to learn. I like tinkering, trying new things, scripting, and writing real code. That makes me useful; maybe it even helps make me valuable.
On the other hand, my varied experiences and experimenting have made me aware that my own weakness is a desire to try new things. If I were designing the systems for a company responsible for my income, it wouldn't be Linux or Windows or Mac. It'd be PC-BSD on the workstations and AIX on the servers. They're boring. Boring is what I look for in a business network. Ideally, the network will be so stable that IT doesn't spend any time working on the backend systems, and that means boring is the goal. I like Linux because I'm always learning new stuff, and I like Windows... sorta, because I'm always being forced to learn new things. That's why I'm sorry to see AIX drop off the top 500, but I can see why; Linux is fun.
Re: (Score:2)
Enlightenment is probably the best looking desktop software anywhere; its customizability makes it hard to include with distros, but it should be considered as evidence that it's not user-friendliness or beauty holding Linux back.
Note that 'beauty' is relative, and it is certainly not equivalent to 'user-friendly'.
I will, though, agree with the sentiment that there is no winning 'user friendly': the main desktop environments are user-friendly enough, but there just isn't enough upside for the casual user to even think about changing. As such, diminishing the 'enthusiast' experience for the sake of the casual user is a strange thing to do.
Re: (Score:2, Insightful)
How the hell would you know? Ever tried new software before? Everything has a learning curve. An OS change is much more complicated, because you need to replace programs and workflows you already know.
Try switching to OSX (or Windows, if you're already using OSX primarily). It's different, and I bet it'll take you more than a week to get used to it and find alternative programs.
You are nothing but a Windows (or OSX) bitch, and your comment could not have been more idiotic.
Re: (Score:3)
If you need a week of downtime to install an OS which you already know (which the GP implies), something is not right.
Re: (Score:2)
Actually, if you want to be prepared and have a solid rollback strategy, you'll remove the HD, insert a fresh one and start. And should something fail that you cannot resolve in a timely manner, you put the old one back. That is how we do the very risky server migrations / software updates at work during the annual downtime window.
As for the backup: you should always have a decent, up-to-date backup. If that is something you need to plan a long time in advance, you're already one hard disk failure away from
Re: Doesn't guarantee success on the desktop (Score:5, Funny)
>> "I'll be installing Linux when I have a week of downtime." :)
If you need a week of downtime from MS to convince you to switch to Linux, you should rather stay with MS until having a month long downtime. Then you'll be really convinced
Re: (Score:2)
I've done it several times when building a new machine. You pop in the Windows disk, and when it's done installing, you install your drivers and apps. The last couple of years I haven't had to use the command line for anything. Depending on the number of apps and the amount of data you want to move, it takes time, but generally not a lot of hassle.
I admit it's been 10 years since I worked with Linux. Back in those days, I could install Linux fairly easily, but there was always something that required a serious amount
Re: (Score:2)
Note that I didn't say it was Linux's fault. I don't really care whose fault it is or was. All I care about is that when I pop in the installation CD, things get installed to a decent working state without a lot of finagling. Linux used to have that problem. That might have various reasons, but ultimately the 'why' doesn't matter to the user.
Honest question: let's say I take a 2-year-old laptop. I know for sure that I can install Windows, and that after installation I will have sound, wifi and a dec
Not surprising (Score:4, Interesting)
From what I know about the Windows kernel, it couldn't scale upward well enough to run in this league. And if I remember correctly, one of the key goals of Linux was to make sure it could scale well on big-iron systems.
We still don't know if you can successfully Beowulf-cluster a bunch of the old Microsoft Barneys, though.
Microsoft's supercomputing efforts (Score:2)
As Linux began to crack the TOP500 list in the 1990s, Bill Gates tried to ignite a supercomputer effort at Microsoft, but it never amounted to much. I wish I could find a link to it. Anyway, I found the following timeline for Microsoft's "Project Catapult" AI-related supercomputing effort, which might not be in the TOP500 list's league:
2010: Microsoft researchers meet with Bing executives to propose using FPGAs to accelerate Indexserve.
2011: A team o
Re:Microsoft's supercomputing efforts (Score:4, Interesting)
There was an "HPC" edition of windows 2003, and microsoft managed to sponsor a few places to build clusters using it that made it into the top500 list...
I don't recall anyone ever using it of their own volition though, only if Microsoft were paying, and at least one of those clusters was a dual-boot experiment which climbed 50 places in the ranking when booted into Linux.
Re: (Score:2)
From what I know about the Windows kernel, it couldn't scale upward well enough to run in this league. And if I remember correctly, one of the key goals of Linux was to make sure it could scale well on big-iron systems.
Originally? No, not at all. "I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones." But it's the sort of thing you can patch a kernel to do, so this [xkcd.com] happened.
Re: (Score:2)
The main issue with Windows is not scaling; it's been able to make good use of >100 cores since the early 2000s. The issue is management.
These computers don't run one single OS. They run multiple copies of the same OS on nodes, and dispatch work to those nodes using special high-speed interconnects. When you have thousands of CPUs, power supplies, RAM modules, etc., some of them are going to fail, so you divide them up into nodes that can fail and recover individually.
Windows is not well suited to this. Wi
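To make the parent's dispatch model concrete, here's a toy MPI program (not any particular site's code): every node boots its own OS instance and runs one or more ranks, and the interconnect only enters the picture through message passing. Compile with mpicc, launch with mpirun.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks across all nodes */

    /* Each rank computes a partial sum over its slice of the problem... */
    long long local = 0;
    for (long long i = rank; i < 1000000; i += size)
        local += i;

    /* ...and the interconnect gathers the partial results on rank 0. */
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %lld (computed across %d ranks)\n", total, size);

    MPI_Finalize();
    return 0;
}

If a node dies, the batch scheduler can requeue the job on healthy nodes; the OS copies on the other nodes never notice.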
Re:Not surprising (Score:5, Insightful)
And these piece of shit Linux nodes fail all the time.
That must be why all 500 are running Linux - for the great failure rate. Or maybe you're wrong.
Re: (Score:3)
So they cost $5-6 million in electricity to run and $100-200 million to design and assemble, but what really decides the OS is cost.
OK.
Re: (Score:2)
Take cosmic rays: you'll have one double-bit flip in memory per day across 75,000 DIMMs http://www.fiala.me/pubs/paper... [fiala.me]
Titan has 18,688 nodes.
As the number of nodes increases, the time to first failure gets lower and lower. I heard that next-gen supercomputers will have the maximum job time reduced from the current 1 day to 4-6 hours.
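Back-of-the-envelope, assuming (simplistically) independent failures at a constant rate, the scaling works out like this; the DIMMs-per-node count is a made-up example value:

#include <stdio.h>

int main(void)
{
    /* Assumption from the parent: one uncorrectable double-bit flip per day
     * across 75,000 DIMMs, i.e. a per-DIMM rate of 1/75000 per day. */
    double events_per_dimm_per_day = 1.0 / 75000.0;
    int dimms_per_node = 8;        /* hypothetical node configuration */
    int nodes = 18688;             /* Titan's node count, cited above */

    double events_per_day = events_per_dimm_per_day * dimms_per_node * nodes;
    printf("expected uncorrectable events/day: %.2f\n", events_per_day);
    printf("expected hours to first event:     %.1f\n", 24.0 / events_per_day);
    return 0;
}

With those made-up numbers it prints roughly 2 events per day, i.e. about 12 hours to the first one, which is why shrinking maximum job times (plus checkpointing) makes sense as machines grow.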
Re:Any cluster scales (Score:5, Funny)
The technical term for this is "botnets".
What a pathetic bunch of comments so far (Score:4, Insightful)
Re: (Score:3)
I signed up here almost from the very beginning of Slashdot time. A few days earlier and I might have had ID#100000. From what people have been saying here in all those 19+ years, Slashdot was dying, has always been dying, is still dying, and will always be dying. I just ignore the irritating cruft, and I'd advise everyone to do that.
So, moving right along... how about those TOP500 scores!
Re: (Score:2)
Imagine a Beowulf cluster of these?
Re: (Score:3)
Slashdot has definitely changed. I remember when it was more of a marketplace of ideas, where interesting comments were actually modded "interesting" instead of "flamebait" or "troll". I remember when everything wasn't a conspiracy of some kind.
Back on topic, imagine a Beowulf cluster of the top 500 supercomputers!
Re: (Score:2)
Can you elaborate on what you think the problem is?
I haven't noticed them being particularly bad, but I have noticed there are fewer and fewer, particularly in the last 4 months or so. Has the gradual decline sped up?
Re: (Score:2)
Though I think you will find that the vast majority are running some mainstream Linux distribution on the nodes. Whether that is a RHEL derivative (CentOS/Scientific Linux) or an LTS version of Ubuntu, etc., if it's recent, it's systemd.
Obligatory xkcd (Score:2)
Some features are used more than others. [xkcd.com] ;)
Re: (Score:3)
That time when Linus's wife could not be Rickrolled because her Linux box had no Flash capability was a searing tragedy in the annals of computer history.
So, Linux turned the Top500 into a Monoculture (Score:3)
So, the Top 500 list of computers was dominated by many variants of Unix, with a little sprinkle of other weird stuff (among them, VMS). Which is not a monoculture.
Then, as the other weird stuff waned, Windows took its place (for a short while). Not directly as a replacement of course, but rather as a percentage of Top500 systems.
On the other side of the fence, Linux began to take increasing market share of the Top500 because of low cost, a shallow learning curve from *nix, and the possibility to modify source code, in an accelerated path to become a monoculture (at least where the Top500 is concerned).
And now, finally, we have a monoculture in the Top500, with Linux all the way... No *BSD, no AIX, HP-UX, or Solaris. Just Linux all the way.
Better not catch anyone complaining about Chrome Monoculture, Windows Monoculture, or Android monoculture! M'kay? ;-)
Re: (Score:2)
These latest TOP500 project teams seem to have exercised free choice of OSes, free from force or coercion. Choice is good, so I'm not sure if you're complaining, and if so, why? If we knew that all 500 projects were using completely interchangeable code and hardware we might have a monoculture at play, but the reality is that they used the best available OS option for their own specific, bespoke, custom needs. I hope I understood your comment correctly.
Re: (Score:2)
[...] so I'm not sure if you're complaining, and if so, why? [...] I hope I understood your comment correctly.
Right at the end of the comment, there is a ;-) emoticon.
You may have missed it.
Re: (Score:3, Insightful)
So, the Top 500 list of computers was dominated by many variants of Unix, with a little sprinkle of other weird stuff (among them, VMS). Which is not a monoculture.
Then, as the other weird stuff waned, Windows took its place (for a short while). Not directly as a replacement of course, but rather as a percentage of Top500 systems.
On the other side of the fence, Linux began to take increasing market share of the Top500 because of low cost, a shallow learning curve from *nix, and the possibility to modify source code, in an accelerated path to become a monoculture (at least where the Top500 is concerned).
And now, finally, we have a monoculture in the Top500, with Linux all the way... No *BSD, no AIX, HP-UX, or Solaris. Just Linux all the way.
Better not catch anyone complaining about Chrome Monoculture, Windows Monoculture, or Android monoculture! M'kay? ;-)
I think the reason it's become a Linux "monoculture" is that it isn't really a monoculture.
Top-500 should be an area that's amenable to variety. Any one project is big enough that some serious customization is going to occur, so traditionally any one OS could focus on a specific feature set and nab themselves a bit of the market. That's why the big Unixes co-existed for so long: if your problem was a round hole you'd grab the Unix that looked the most like a round peg, and if you had a square hole you'd gra
BSD (Score:3)
The *BSDs will keep on doing what they have always done: run well with minimal upkeep and not beta-test features on production releases. Under Linux the mentality is: if it compiles, ship it. I ran Linux in the 2.0.x kernel days. What they call Linux today is so far removed it might as well be a different operating system. Some distros don't even include tools like nslookup or traceroute anymore. Good luck installing that package if your default route isn't set. Oh, and "route" has been changed to
Re: (Score:2)
I switched to FreeBSD from Linux ages ago because it was a complete system. Linux was a kernel with a bunch of utilities bolted onto it, and there was That One Day where I was trying to upgrade something and needed a key utility for configuring something and it wouldn't run, and there was no "source" for an updated version. I gave FreeBSD a spin and just liked that everything was a part of a larger whole, and not a bunch of pieces with varying standards.
FreeBSD can have other problems, sometimes certain p
Re: (Score:2)
What needed to be maintained exactly? Were there bugs or security problems? If the answer is no then why fix something that was never broken?
Re: (Score:2)
So you're saying people are using Linux as a 'framework' upon which they build their own (custom) supercomputer OS? Nice! :-)
Re: (Score:2)
I suppose it's all about cost for the proprietary OSes
Not for the top-500. I'm sure Linux is cheaper than the Unixes, but you don't get a machine on the top-500 by cost-cutting.
but what of the BSDs? They're free, and the license would let custom work on a supercomputer's OS be closed and even sold. Is it networking speeds or the like? Parallelism? I would really like to know, didn't come here for "'cause Linux Rulz!" ass-hattery
I think it's simpler than that, what does BSD have that Linux doesn't?
Linux gives you an open source Unix with a massive community and a ton of corporate backing.
BSD gives you an open source Unix with a small community and a little bit of corporate backing.
I'm sure there's some specific application where BSD might have an advantage, but there's going to be a lot of applications where Li
Re: (Score:2)
We are talking about supercomputers here. Surely you mean TOPS-20!
Yeah. And? (Score:3)
What'd you expect it to run? Windows?
Re: (Score:2)
I'm sure some of those rigs could spare a few CPU cycles to run VMs in case somebody needs to Skype their basement-dwelling maladroit kid.
Re:Yeah. And? (Score:4, Informative)
Five years ago, 3 of the top 500 did run Windows, and in 2011, 4 did.
Re: (Score:2)
And, I'm betting that Microsoft sponsored all of them, just to have SOMETHING ON THE LIST. But did M$ ever manage to bribe enough people to get 1 lousy percent of the top 500?
For most people, the extra HUMAN expense of making a cluster work at all, and the extra TIME expense of having it run like a pig when you do get it to run, isn't worth even a free cluster funded by a massive M$ bribe (as long as you run Windows). It sort of depends on whether actually getting your work done is more important to you than pa
Re: (Score:2)
What'd you expect it to run? Windows?
Some people would expect that. But the Linux kernel is certainly more customizable than a Windows black box (which would require the help of Microsoft engineers).
Distro (Score:2)
Anyone have any information on what distro they use? The article didn't say.
Re: (Score:2)
I'd expect that most of them are not distro-based but rather LFS-based: http://www.linuxfromscratch.or... [linuxfromscratch.org]
Of course it requires a supercomputer (Score:4, Funny)
... to fully appreciate all the features of the latest Enlightenment desktop.
Limitation of a single computer (Score:3)
For most parallel problems, it's possible to divide them up and send each piece to a different computer, rather than a different core on the same computer. For even more highly parallel problems, using GPUs to do the computation is even faster.
With 100-gig Ethernet, we're starting to see networking speeds closer to the bus speeds on motherboards themselves, and it's cheaper, faster to scale (especially dynamically), and probably more fault-tolerant (node fail? send the job to a different node) to use more compute nodes rather than more processors in a single computer.
Distributed computing has almost made supercomputers irrelevant -- except for people with a hole in their pocket. Folding@home [wikipedia.org] is more powerful than anything on their list, while we have no idea what monster compute clusters work inside Google or Facebook -- but given the open-source software they have released (e.g. Facebook's 360 degree video stitcher [github.com]) and how slow it is on a single machine, the only way it'd be usable on their site is if they have a massive cluster.
Re: (Score:2)
Distributed computing has almost made supercomputers irrelevant
Not really, no. Supercomputers are distributed computers with good interconnects. For many calculations the interconnect is really, really REALLY important, which is why a good number have the interconnect right there on die with the CPU.
FreeBSD (Score:2)
I miss the days when the list had a ton of FreeBSD systems. To this day, it remains my preferred OS. Two little software compatibility issues prevent me from running it as my desktop OS anymore, although I did for many years. It still has a home on several servers here in my house, where it has distinct advantages over Linux.
Windows subsystem for Linux. (Score:2)
Linux compatibility is so essential to getting a toehold there that Microsoft had to support the Linux way of doing things. It finally relented and introduced "Windows Subsystem for Linux" support.
Does it support incoming ssh connections? I use ssh to go out of Windows to connect to Linux machines in my network. If the Linux subsystem allows incoming ssh and RSA
Re: (Score:2, Insightful)
There is no second reason.
Linux is not used because it's better, it's used because it is cheaper.
In the end, cheaper almost always wins over better.
Re: (Score:3, Informative)
Bullshit.
Linux is used because it's far, far, FAR more flexible, less resource intensive and more efficient than Windows, while supporting and making good use of vastly larger amounts of RAM and CPUs.
If your baseline is one of the proprietary Unices, it's still more flexible, less archaic, and more familiar to users, while supporting a wider range of hardware and being infinitely cheaper.
Re: (Score:3)
Price is a positive thing of course, but not why it is used; the cost of OS software in a supercomputer would be a fraction of the hardware and infrastructure costs anyway.
The thing is that Linux has excellent scalability when it comes to I/O throughput; this is something that many companies and individuals have worked hard to achieve. So it is possible to adapt an OS installation to be suitable for extreme throughput.
The compute nodes themselves don't really need a proper operating system (and many sup
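One small, hedged illustration of the kind of I/O-path adaptation the parent means: O_DIRECT bypasses the page cache entirely, which is what throughput-hungry systems often want. Buffers, offsets, and lengths must be block-aligned; the file path here is just a placeholder.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT bypasses the page cache: data moves straight between the
     * device and our buffer, avoiding an extra copy and cache pollution. */
    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);  /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires aligned buffers; 4096 matches most devices'
     * logical block size. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;

    ssize_t n = pread(fd, buf, 4096, 0);
    printf("read %zd bytes, bypassing the page cache\n", n);

    free(buf);
    close(fd);
    return 0;
}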
Re: 'This happened for two reasons.' (Score:5, Funny)
Odd statement, considering Microsoft's mantra declares Linux far more expensive than Windows.
I think you got that backwards
Re: (Score:2)
I think he got it just right.
Re:That's because... (Score:5, Informative)
Linux was originally made in Finland.
Re:That's because... (Score:5, Funny)
Actually he abandoned his homeland in search of warmth.
Re: (Score:3)
After optimizing his computer to the point the CPU couldn't produce enough heat to warm his house anymore.
Re: (Score:2)
Actually he abandoned his homeland in search of warmth.
He couldn't have bought a Pentium 4?
Re:That's because... (Score:5, Insightful)
Anybody still believe Linus Torvalds about how Linux was just for fun?
Of course. Linux was just for fun; now he makes a living out of it. A person's motivations for doing something don't have to remain exactly the same for the whole time they do it.
Re: (Score:2)
Linus lives in America now, since he abandoned his homeland in search of money, money, money.
Linux 2: The Search for More Money
Linux: The Breakfast Cereal
Linux: The Toilet Paper
Linux: The Flame Thrower (people really love this one and Linus is no stranger to spewing flames)
Re: (Score:2)
it's made in the USA, USA, USA!
No, it's because they're running Beowulf clusters. Sorry, couldn't resist; hadn't seen it in a long time. </nostalgia>
Re: (Score:2)
I'm rather sure that means that since the supercomputer builders are building their own OS out of Linux, they don't have to supply anybody the source, as they're not sharing it. However, I may be mistaken; anybody here that knows the contract of Linux that can verify this?
Re:Where's the source? (Score:5, Informative)
I'm not even sure what you are asking here. Do you truly have no idea how the GPL works?
Anyway, you have this exactly backwards. The reason Linux became popular during the parallel supercomputing "revolution" (and I say this as a modest expert, at least at that time) is because it IS an open source operating system, so you could hack the kernel, write your own kernel drivers, fix things like networking bugs or system balance issues, and handle memory at a very primitive level. You got then, and can easily get now, the complete source of the OS and all of its device drivers, although the latter has been a constant source of contention between hardware mfrs who think that a device driver that makes their hardware run is some sort of "trade secret" and the keepers of the Linux kernel. Over decades (at this point) the mfrs have largely given up and actively help with kernel drivers instead of insisting on binary-only distributions. This played a critical role in the development of early parallel supercomputers once Linux had its first kernel capable of symmetric multiprocessing with two (and rapidly more) CPUs or (later) cores, or both. That would be roughly kernel 2.0, although there were still serious issues with race conditions, (network) driver interrupts and lockups, memory management, and so on, through 2.0.4+ -- really they went on forever as the 2.0 kernel wasn't truly symmetric, handled interrupt locking "badly", and took a lot of revision and some new paradigms to smooth out and stabilize. Ah, those were the days...
Microsoft, on the other hand, made you sign away your firstborn child in order to get a copy of the OS source -- even as a research institution. If (say) your network drivers were slow, or locked up while multiprocessing, you were SOL. You COULDN'T fix it. You couldn't even find the bug. And it wasn't worth the effort -- even if you sacrificed a goat and got the source -- to learn to work with the source because it changed at MS's whim and all your work could go down the tubes at any moment and if you DID develop anything that ran on their system in some "custom" fashion, you ran into serious issues if you wanted to share it. You COULDN'T share your work with anybody else, not unless they had a surplus of goats or firstborn children too.
"Anybody" (with a need and decent programming chops) could join the linux kernel list and communicate directly with the main kernel developers and report bugs, contribute fixes or drivers, etc. There was a lot of healthy debate about what needed to be fixed, or improved, first, second, third etc, as well as just how to go about fixing them -- sometimes it required substantial redesign and had to wait for a major bump (and a lot of testing). You could of course hack/fix your own kernels or add your own device drivers, or fix broken drivers, or mess with internal "tuning", and I and many others did, but behind the public scenes the actual kernel developers -- the heart of linux, as it were -- made steady, inexorable progress.
By the year 2000, Linux had made serious inroads into the top 500, and beyond it there were literally uncounted small clusters that weren't fast enough (or weren't architected correctly) to crack the top 500, which relied on things like the Linpack benchmark to determine who to include. There were lots of folks who didn't USE linear algebra in their computations who built massively parallel compute farms with many different architectures and purposes who didn't even have the benchmark software installed (or give a shit about their "ranking"). Both PVM and MPI were fully ported onto Linux and most of their ongoing development was taking place on Linux boxes. Additional tools for management and job distribution and much more were developed -- on mostly Linux boxes, but yeah, there were still SGI and Sun Microsystems clusters and much more out there. They suffered -- badly suffered, terminally badly suffered in pretty much all cases -- from being much, much more expensive than over the counter Intel or AMD box
Re: (Score:3)
I'm rather sure that means that since the supercomputer builders are building their own OS out of Linux, they don't have to supply anybody the source, as they're not sharing it. However, I may be mistaken...
You are mistaken. Top 500 shops are regular contributors to mainline Linux development, with test cases, patches, and more than a few core developers. They do it because they benefit from it, and they save money that way: they don't need to carry their own patches. And they aren't "competing" in the commercial sense; they just want the best system they can have, and that means playing with the community.
anybody here that knows the contract of Linux that can verify this?
Contract??? You really don't get it, good luck with that.
Re: (Score:2)
Most programs run on supercomputers are probably as old as Linux, if not older. I am pretty sure a dual-processor quad-core Intel system will beat the pants off a Cray Y-MP, let alone a CDC 7600.
It might take a while to hack the Fortran from FTN to GCC, but it's a lot easier if the supercomputers are 64-bit machines running Linux and not KRONOS on a 60-bit machine.
Re: (Score:2)
Who is "they"?
Re: (Score:2)
Several of the top500 are using GPUs, but for calculations rather than displaying graphics. Having an active video display on a large cluster would be stupid; most supercomputer nodes won't have screens attached, and while the power consumption of an idle display controller is pretty low, it's not 0, and multiplied by thousands of nodes it's a terrible waste of power.
Re: (Score:2)
Do supercomputers ever come with a graphics card (that is intended to drive a display)?
I'd imagine that if you want a console for your supercomputer, you set up a PC next to it and run X Window remotely or something.
Re: (Score:2)
Yeah, but Triumph of the Free.
Re: (Score:2)
Yes.
Supercomputers usually have front and back doors. They're needed for maintenance.
So yes, they are always backdoored.
https://media2.s-nbcnews.com/j... [s-nbcnews.com]