
All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 288

Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 on, the TOP500 was headed toward Linux domination. By 2004, Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize Linux's open-source code for their one-off designs.
The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward

    Linux makes it to the desktop, of a supercomputer.

    • by MouseR ( 3264 )

      Chevy Gen2 Volt's infotainment runs GMLinux. Saw it on the core dump of a crash of another driver's car.

    • by stooo ( 2202012 )

      This is the year Linux makes it to the desktop, of all supercomputers.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      There is a logical reason for this, and it has nothing to do with Linux.

      The level of OSS use on supercomputers is primarily a matter of science. It compiles on multiple platforms and is well maintained on most of them. Windows and MacOS are only available for the x86-64, ARM, and PPC platforms, and even then, not all of them. That only leaves FreeBSD as an option, and FreeBSD isn't as virtualization friendly, and drivers aren't readily available for GPU systems.

      So it's quite literally the only logical choice, owing to the fact that the other choices would have required engineering resources.

      • by Tough Love ( 215404 ) on Wednesday November 15, 2017 @04:26AM (#55552969)

        ...it's quite literally the only logical choice

        Oh I know, right? But the big fact you danced around is that Linux is just better than the others. It's faster and more reliable; otherwise the Top 500 would not use it. Like, they tried to use Windows, they really did. Microsoft was paying academic institutions to install it and providing teams of free engineers. It still didn't happen. Why? Windows can't handle the load; it can't run continuously under load. It just gets more and more unstable and then falls over. Even when it does stay up, it can't touch the storage, scheduling, or memory-management efficiency of Linux.

        • Re: (Score:2, Informative)

          by Xyrus ( 755017 )

          It has nothing to do with not being able to handle the load. It has everything to do with costs. Linux is free. Windows isn't. Most of the tools for supercomputing were written for the Linux platform. There are tools for Windows, but since it's a niche market there aren't nearly as many. And since a good desktop/GUI environment doesn't mean squat in the supercomputing world, there is no real incentive to use Windows outside of certain circumstances.

          In all the time I used it I never encountered any ser

          • by Junta ( 36770 )

            You are right that it's not about the ability to take on load (though there is the matter of how self-reliant shops can be when trying to analyze failures, which is unlimited with Linux and inherently limited with Windows). However:

            the lack of tools and tracking down a couple numerical inconsistencies

            Those are pretty huge things. It all stems from the origin of supercomputing as a Unix thing; that similarity to Unix allowed seamless porting. Windows, however, is very different, and porting the whole technical-computing ecosystem requires work that no one wants to do (except for

          • by Ol Olsoc ( 1175323 ) on Wednesday November 15, 2017 @09:51AM (#55554043)

            It has nothing to do with not being able to handle the load. It has everything to do with costs. Linux is free. Windows isn't.

            If I get you right, you spend all this money on a supercomputer, then logically use the cheapest OS out there instead of a paid one that should work better?

            Sounds legit.

      • Re:This is the year (Score:5, Informative)

        by Anonymous Coward on Wednesday November 15, 2017 @07:46AM (#55553417)

        > The level of OSS use on supercomputers is primarily a matter of science. It compiles on multiple platforms and is well maintained on most of them. Windows and MacOS are only available for the x86-64, ARM, and PPC platforms, and even then, not all of them.

        This makes no sense. Almost all supercomputers are x86-64 based (+/- GPUs).

        > That only leaves FreeBSD as an option, and FreeBSD isn't as virtualization friendly, and drivers aren't readily available for GPU systems.

        Lol. Supercomputers don't use virtualization.

        > So it's quite literally the only logical choice, owing to the fact that the other choices would have required engineering resources.

        That's not true; supercomputers on the TOP500 list within the past 5 years have used Windows, AIX, BSD, and Linux. It's just that Linux is better for the job than the others.

        > That said, Linux does not belong in safety systems, and I hope it never ends up in car automotive systems, power plants, or spacecraft.

        I hope nobody who thinks supercomputers use virtualization ever has their opinion on a computing matter taken into account by the designer of a safety-critical system.

        Linux is in safety critical systems already. But it depends on the level and capabilities you're talking about. Processing doppler radar data and sending it to ATC systems in a timely manner is one thing. Running tight control loops in automotive engine and control systems is completely different and just isn't appropriate for Linux.

        > Everything else is fair game. These systems need real time operating systems that are highly threaded and can respond to events instantly, not be scheduled, or deferred due to eating all the swap space (one of Linux's worst default features, and what makes it woefully awful for web servers by default.)

        You're mixing up all sorts of things here. Nothing responds to interrupts "instantly"; what you want is guaranteed hard upper limits. It often doesn't even have to be all that fast; it just has to be an upper limit so you can design the system to meet response-time requirements. Linux can respond "immediately" to interrupts, by the way. It does not have to be "scheduled"; work can be done in interrupt context.

        "Highly threaded" what? That's nothing to do with real time.

        "Deferred due to eating all swap space" What is this meaningless drivel? Automotive and aircraft control systems don't use swap space. They don't even use virtual memory for god's sake lol.

        > (one of Linux's worst default features, and what makes it woefully awful for web servers by default.)

        Apparently better than all the others at that too. Windows, OSX, and BSD must *really* be shit if Linux is so bad yet it still beat them all there too.

      • by PeeAitchPee ( 712652 ) on Wednesday November 15, 2017 @07:54AM (#55553441)
        The other big logical reason is licensing. No one is ever going to pay for the required Windows or MacOS server licenses for installations of this size when there is free (as in beer) software available. They'd rather put that money into other parts of the project (more cores, etc.) to eke out even more performance.
      • Re:This is the year (Score:5, Interesting)

        by The Cynical Critic ( 1294574 ) on Wednesday November 15, 2017 @07:56AM (#55553447)

        That said, Linux does not belong in safety systems

        Dedicated real-time operating systems obviously have their uses, but due to advances in embedded-level hardware they're becoming less and less relevant. Even with the overhead of an "almost real time" OS like Linux with some compile switches, most modern embedded hardware is capable of making the deadlines in all but some special super-low-latency use cases. The only places where a dedicated real-time OS is even necessary these days are rare super-low-latency and super-low-power cases (as in under 0.25 W).

        Seriously, 6502s and Z80s are not the standard embedded hardware out there anymore.

  • by Anonymous Coward on Tuesday November 14, 2017 @08:45PM (#55551581)

    Unix never made inroads on the desktop.

      This might actually be harmful: if people think Linux is complicated or designed for heavy hardware, they may not consider it for desktops and use cases involving desktop apps.

      Linux has been ready for the desktop since about 1999; before that there were dependency issues and hardware wasn't always supported. Now hardware is more likely to be better supported on Linux than on Windows. I'm writing this on Windows, but that's only because Windows came on this machine; I'll be installing Linux when I have a week of downtime.

      Enlightenment is probably the best-looking desktop software anywhere. Its customizability makes it hard to include with distros, but it should be considered as evidence that it's not user-friendliness or beauty holding Linux back.

      I think it's a bit sad to see Linux software becoming overly simplified in the wake of Apple's success, the way other software is.

      Linux needs to remain the enthusiast and expert operating system more than it needs broad acceptance. Look what happened with the internet; Linux is great without the ads, malware, and other problems I associate with popularity.

      That said, Linux skills are still hugely undervalued and not taught in schools, which needs to change. A Linux machine is still your best bet that your machine will still be running, with data and apps updated but not broken, after 10-15 years.

    • by c ( 8461 )

      Linux has been ready for the desktop since about 1999...

      Eh, no, not really. You're talking about a KDE 1.0, pre-Gnome desktop... I used it, but I wouldn't have inflicted it on anyone I needed to support. Five years later it was certainly reasonable, at least where the average non-technical user was concerned.

      • by Kjella ( 173770 ) on Tuesday November 14, 2017 @10:33PM (#55552025) Homepage

        Eh, no, not really. You're talking about a KDE 1.0, pre-Gnome desktop... I used it, but I wouldn't have inflicted it on anyone I needed to support. Five years later it was certainly reasonable, at least where the average non-technical user was concerned.

        KDE was '96, GNOME '97... in 1999 you'd already have KDE 1.1. I didn't use that much, but I remember trying RHL 6.2 [everythinglinux.org], which came out in April 2000 and looks pretty much like a normal desktop to me. Remember that it was going head to head with Windows ME as the consumer desktop; using either was a major PITA. Granted, XP was a big step up, but then you had Vista... you can make a lot of excuses for YotLD not happening, but that Microsoft brought their A-game is not one of them.

        The cornerstone of Microsoft's dominance is Office, and Excel in particular; all those people who had to use Windows at work naturally took what little knowledge and training they had and bought a Windows machine for home too. When Outlook kicked Lotus Notes to the curb, they locked that market up for good.

    • by argumentsockpuppet ( 4374943 ) on Tuesday November 14, 2017 @11:25PM (#55552251)

      On a scale of LFS to Mandrake, how bad was your experience?

      I'll be installing Linux when I have a week of downtime.

      It takes me a couple of months to transition a new workstation to Windows. Each time, I try to learn what the native software options are and whether they can meet my current needs. Where they don't, I install or use recommended software in most cases to see if it does what I need (WSL), though I do have exceptions for my personal favorite software in a few instances where I'm just unwilling to learn something new (EditPlus, GIMP, VLC, Sysinternals, PuTTY).

      A new Linux install takes the same process, but it has apt or yum or whatever, which speeds things up pretty dramatically. With a new Linux desktop install, I just rarely have to learn too many new things... usually. (Eyeballing you hard here, systemd!)

      If I had to support any significant-sized network, and if it were possible, I'd do Linux desktops everywhere. I'd rather admin those systems than admin Windows... but I work in a job where I have to support Windows because that's all the core software runs on. As an admin, I can do about anything I need to on Microsoft servers and workstations. It'd be false modesty to say I'm not good at admin on Microsoft systems. On the other hand, I have used Linux and various BSDs at home and at work (on servers) since the late '90s. I could eliminate Microsoft in our workplace and cut our IT department's work by maybe 30% if only our primary system ran on Linux. I'd miss some of the AD/DHCP/DNS/DFS stack. I'd miss Excel (running native) and Exchange/Outlook, but honestly, running the alternatives in the cloud or LibreOffice would probably reduce our helpdesk workload after a year or two.

      I'm good at my jobs, and whatever systems I admin, I'll learn to be good at. Given the ideal scenario, I could run several thousand workstations with the same effort I'd use for a couple hundred Windows workstations. The scenarios I've been hired to handle haven't been ideal, so I've learned to take advantage of the environments I'm in. I'm good at my jobs because I like to learn. I like tinkering, trying new things, scripting, and writing real code. That makes me useful; maybe it even helps make me valuable.

      On the other hand, my varied experiences and experimenting have made me aware that my own weakness is a desire to try new things. If I were designing the systems for a company responsible for my income, it wouldn't be Linux or Windows or Mac. It'd be PC-BSD on the workstations and AIX on the servers. They're boring. Boring is what I look for in a business network. Ideally, the network will be so stable that IT doesn't spend any time working on the backend systems, and that means boring is the goal. I like Linux because I'm always learning new stuff and I like Windows.. sorta, because I'm always being forced to learn new things. That's why I'm sorry to see AIX take a dip off the top 500, but I can see it; Linux is fun.

    • by Junta ( 36770 )

      Enlightenment is probably the best-looking desktop software anywhere. Its customizability makes it hard to include with distros, but it should be considered as evidence that it's not user-friendliness or beauty holding Linux back.

      Note that 'beauty' is relative and certainly it is not equivalent to 'user friendly'.

      I will agree, though, with the sentiment that there is no winning 'user friendly': the main desktop environments are user friendly enough, but there just isn't enough upside for the casual user to bother even thinking about a change. As such, diminishing the 'enthusiast' experience for the sake of the casual user is a strange thing to do.

  • Not surprising (Score:4, Interesting)

    by jwhyche ( 6192 ) on Tuesday November 14, 2017 @09:04PM (#55551657) Homepage

    From what I know about the Windows kernel, it couldn't scale upward well enough to run in this league. And if I remember correctly, one of the key goals of Linux was to make sure it could scale well on big iron systems.

    We still don't know if you can successfully Beowulf-cluster a bunch of the old Microsoft Barneys, though.

    • As Linux began to crack the TOP500 list in the 1990s, Bill Gates tried to ignite a supercomputer effort at Microsoft but it never amounted to much. I wish I could find a link to it. Anyways, I found the following timeline for Microsoft's "Project Catapult" AI-related supercomputing effort, which might not be in the TOP500 list's league:
      2010: Microsoft researchers meet with Bing executives to propose using FPGAs to accelerate Indexserve.
      2011: A team o

    • by Kjella ( 173770 )

      From what I know about the Windows kernel, it couldn't scale upward well enough to run in this league. And if I remember correctly, one of the key goals of Linux was to make sure it could scale well on big iron systems.

      Originally? No, not at all. "I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones." But it's the sort of thing you can patch a kernel to do, so this [xkcd.com] happened.

    • by AmiMoJo ( 196126 )

      The main issue with Windows is not scaling; it's been able to make good use of >100 cores since the early 2000s. The issue is management.

      These computers don't run one single OS. They run multiple copies of the same OS on nodes, and dispatch work to those nodes using special high speed interconnects. When you have thousands of CPUs, power supplies, RAM modules etc. some of them are going to fail, so you divide them up into nodes that can fail and recover individually.
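
      (As a rough illustration of that model, and not anything taken from an actual TOP500 system, here is a trivial MPI sketch. It assumes an MPI implementation such as Open MPI or MPICH: each node runs its own OS image, the job launcher starts one copy of the program per rank, and the ranks talk to each other over the interconnect.)

        /* node_model.c -- illustrative sketch; build with mpicc, run with e.g.
         *   mpirun -np 4 ./node_model
         */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I        */
            MPI_Comm_size(MPI_COMM_WORLD, &nranks); /* how many ranks in the job */

            char host[MPI_MAX_PROCESSOR_NAME];
            int len;
            MPI_Get_processor_name(host, &len);     /* which node I landed on    */

            /* Each rank works on its own share; rank 0 usually coordinates. */
            printf("rank %d of %d running on %s\n", rank, nranks, host);

            MPI_Finalize();
            return 0;
        }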

      Windows is not well suited to this. Wi

  • by GerryGilmore ( 663905 ) on Tuesday November 14, 2017 @09:24PM (#55551735)
    How far has the discussion quality fallen? Apparently this low, even without a political bent.
    • Sorry, out of mod points...not that they help that much when we're swarmed by immature idiots and AC trolls.
    • I signed up here almost from the very beginning of Slashdot time. A few days earlier and I might have had ID#100000. From what people have been saying here in all those 19+ years, Slashdot was dying, has always been dying, is still dying, and will always be dying. I just ignore the irritating cruft, and I'd advise everyone to do that.

      So, moving right along... how about those TOP500 scores!

      • by Steve72 ( 52902 )

        Imagine a Beowulf cluster of these?

      • by AmiMoJo ( 196126 )

        Slashdot has definitely changed. I remember when it was more of a marketplace of ideas, where interesting comments were actually modded "interesting" instead of "flamebait" or "troll". I remember when everything wasn't a conspiracy of some kind.

        Back on topic, imagine a Beowulf cluster of the top 500 supercomputers!

    • Can you elaborate on what you think the problem is?

      I haven't noticed them being particularly bad; however, I have noticed there are fewer and fewer of them, particularly over the last 4 months or so. Has the gradual decline sped up?

    • That time when Linus's wife could not be Rickrolled because her Linux box had no Flash capability was a searing tragedy in the annals of computer history.

  • by williamyf ( 227051 ) on Tuesday November 14, 2017 @09:54PM (#55551847)

    So, the Top 500 list of computers was dominated by many variants of Unix, with a little sprinkle of other weird stuff (among them, VMS). Which is not a monoculture.

    Then, as the other weird stuff waned, Windows took its place (for a short while). Not directly as a replacement, of course, but rather as a percentage of Top500 systems.

    On the other side of the fence, Linux began to take increasing market share of the Top500 because of low cost, a shallow learning curve coming from *nix, and the possibility to modify the source code, on an accelerated path to becoming a monoculture (at least where the Top500 is concerned).

    And now, finally, we have a monoculture in the Top500, with Linux all the way... No *BSD, no AIX, HP-UX, or Solaris. Just Linux all the way.

    Better not catch anyone complaining about a Chrome monoculture, Windows monoculture, or Android monoculture! M'kay? ;-)

    • These latest TOP500 project teams seem to have exercised free choice of OSes, free from force or coercion. Choice is good, so I'm not sure if you're complaining, and if so, why? If we knew that all 500 projects were using completely interchangeable code and hardware we might have a monoculture at play, but the reality is that they used the best available OS option for their own specific, bespoke, custom needs. I hope I understood your comment correctly.

      • [...] so I'm not sure if you're complaining, and if so, why? [...] I hope I understood your comment correctly.

        Right at the end of the comment, there is a ;-) emoticon.

        You may have missed it.

    • Re: (Score:3, Insightful)

      by quantaman ( 517394 )

      So, the Top 500 list of computers was dominated by many variants of Unix, with a little sprinkle of other weird stuff (among them, VMS). Which is not a monoculture.

      Then, as the other weird stuff waned, Windows took its place (for a short while). Not directly as a replacement, of course, but rather as a percentage of Top500 systems.

      On the other side of the fence, Linux began to take increasing market share of the Top500 because of low cost, a shallow learning curve coming from *nix, and the possibility to modify the source code, on an accelerated path to becoming a monoculture (at least where the Top500 is concerned).

      And now, finally, we have a monoculture in the Top500, with Linux all the way... No *BSD, no AIX, HP-UX, or Solaris. Just Linux all the way.

      Better not catch anyone complaining about a Chrome monoculture, Windows monoculture, or Android monoculture! M'kay? ;-)

      I think the reason it's become a Linux "monoculture" is that it isn't really a monoculture.

      The Top 500 should be an area that's amenable to variety. Any one project is big enough that some serious customization is going to occur, so traditionally any one OS could focus on a specific feature set and nab itself a bit of the market. That's why the big Unixes co-existed for so long: if your problem was a round hole you'd grab the Unix that looked the most like a round peg, and if you had a square hole you'd gra

      • The *BSDs will keep on doing what they have always done: run well with minimal upkeep and not beta-test features on production releases. Under Linux the mentality is: if it compiles, ship it. I ran Linux in the 2.0.x kernel days. What they call Linux today is so far removed it might as well be a different operating system. Some distros don't even include tools like nslookup or traceroute anymore. Good luck installing that package if your default route isn't set. Oh, and "route" has been changed to

        • by swb ( 14022 )

          I switched to FreeBSD from Linux ages ago because it was a complete system. Linux was a kernel with a bunch of utilities bolted onto it, and there was That One Day where I was trying to upgrade something and needed a key utility for configuring something and it wouldn't run, and there was no "source" for an updated version. I gave FreeBSD a spin and just liked that everything was a part of a larger whole, and not a bunch of pieces with varying standards.

          FreeBSD can have other problems, sometimes certain p

      • So you're saying people are using Linux as a 'framework' upon which they build their own (custom) supercomputer OS? Nice! :-)

    • Top500?

      We are talking about Supercomputers here. Surely you mean TOPS20!

  • by thedarb ( 181754 ) on Tuesday November 14, 2017 @10:02PM (#55551891)

    What'd you expect it to run? Windows?

    • I'm sure some of those rigs could spare a few CPU cycles to run VMs in case somebody needs to Skype their basement-dwelling maladroit kid.

    • Re:Yeah. And? (Score:4, Informative)

      by iggymanz ( 596061 ) on Tuesday November 14, 2017 @10:27PM (#55551995)

      Five years ago, 3 of the top 500 did run Windows, and in 2011, 4 did.

      • And, I'm betting that Microsoft sponsored all of them, just to have SOMETHING ON THE LIST. But did M$ ever manage to bribe enough people to get 1 lousy percent of the top 500?

        For most people, the extra HUMAN expense of making such a cluster work at all, and the extra TIME expense of having it run like a pig when you do get it running, isn't worth it even for a free cluster funded by a massive M$ bribe (as long as you run Windows). It sort of depends on whether actually getting your work done is more important to you than pa

    • What'd you expect it to run? Windows?

      Some people would expect that. But the Linux kernel is certainly more customizable than a Windows black box (that would require the help of Microsoft engineers).

  • Anyone have any information on what distro they use? The article didn't say.

  • by 93 Escort Wagon ( 326346 ) on Wednesday November 15, 2017 @01:06AM (#55552501)

    ... to fully appreciate all the features of the latest Enlightenment desktop.

  • by Prien715 ( 251944 ) <agnosticpope@gmail. c o m> on Wednesday November 15, 2017 @01:32AM (#55552557) Journal

    For most parallel problems, it's possible to divide them up and send each piece to a different computer, rather than to a different core on the same computer. For even more highly parallel problems, using GPUs to do the computation is even faster. (A toy sketch of the divide-and-send idea follows at the end of this comment.)

    With 100-gig Ethernet, we're starting to see networking speeds closer to the bus speeds on motherboards themselves, and it's cheaper, faster to scale (especially dynamically), and probably more fault tolerant (node fail? send the job to a different node) to use more compute nodes rather than more processors in a single computer.

    Distributed computing has almost made supercomputers irrelevant -- except for people with a hole in their pocket. Folding@home [wikipedia.org] is more powerful than anything on their list, while we have no idea what monsters of compute clusters work inside Google or Facebook -- but given the open-source software they have released (e.g. Facebook's 360-degree video stitcher [github.com]) and how slow it is on a single machine, the only way it'd be usable on their site is if you have a massive cluster.
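
    (Here's that toy sketch of the divide-and-send pattern, again purely illustrative and assuming an MPI implementation built with mpicc: each rank sums its own slice of the work and a reduction combines the partial results on rank 0. A real GPU-offloaded or fault-tolerant version would obviously look very different.)

      /* split_work.c -- illustrative only; build with mpicc. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int rank, nranks;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          const long N = 100000000L;          /* total work items  */
          long chunk = N / nranks;
          long lo = (long)rank * chunk;
          long hi = (rank == nranks - 1) ? N : lo + chunk;

          double partial = 0.0;
          for (long i = lo; i < hi; i++)      /* this rank's slice */
              partial += 1.0 / (double)(i + 1);

          double total = 0.0;
          /* Combine every rank's partial sum on rank 0. */
          MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("sum over %ld terms = %f\n", N, total);

          MPI_Finalize();
          return 0;
      }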

    • Distributed computing has almost made supercomputers irrelevant

      Not really, no. Supercomputers are distributed computers with good interconnects. For many calculations, the interconnect is really, really, REALLY important, which is why a good number have the interconnect right there on the die with the CPU.

  • I miss the days when the list had a ton of FreeBSD systems. To this day, it remains my preferred OS. Two little software compatibility issues prevent me from running it as my desktop OS anymore although I did for many years. It still has a home on several servers here in my house where it has distinct advantages over Linux.

    Amazon, Google, and Microsoft have cloud platforms. Technically, by the "network is the computer" mantra from the Sun Solaris days, these are supercomputers.

    Linux compatibility is so essential to getting a toehold there that Microsoft had to support the Linux way of doing things. It finally relented and introduced the Windows Subsystem for Linux.

    Does it support incoming ssh connections? I use ssh to go out from Windows to connect to Linux machines on my network. If the Linux subsystem allows incoming ssh and RSA
