The World's Fastest Supercomputers Hit Higher Speeds Than Ever With Linux (zdnet.com)

An anonymous reader quotes a report from ZDNet: In the latest Top 500 supercomputer ratings, the average speed of these Linux-powered racers is now an astonishing 1.14 petaflops. The fastest of the fast machines haven't changed since the June 2019 Top 500 supercomputer list. Leading the way is Oak Ridge National Laboratory's Summit system, which holds top honors with an HPL result of 148.6 petaflops. This is an IBM-built supercomputer using Power9 CPUs and NVIDIA Tesla V100 GPUs. In a rather distant second place is another IBM machine: Lawrence Livermore National Laboratory's Sierra system. It uses the same chips, but it "only" hit a speed of 94.6 petaflops.

Close behind at No. 3 is the Sunway TaihuLight supercomputer, with an HPL mark of 93.0 petaflops. TaihuLight was developed by China's National Research Center of Parallel Computer Engineering and Technology (NRCPC) and is installed at the National Supercomputing Center in Wuxi. It is powered exclusively by Sunway's SW26010 processors. TaihuLight is followed by the Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China. Powered by Intel Xeon CPUs and Matrix-2000 accelerators, it has a top speed of 61.4 petaflops. Coming in at No. 5 is the Dell-built Frontera, a Dell C6420 system powered by Intel Xeon Platinum processors. It speeds along at 23.5 petaflops and lives at the Texas Advanced Computing Center of the University of Texas. The most powerful new supercomputer on the list is AiMOS, at Rensselaer Polytechnic Institute's Center for Computational Innovations (CCI). It made the list in the 25th position with 8.0 petaflops. The IBM-built system, like Summit and Sierra, is powered by Power9 CPUs and NVIDIA V100 GPUs.
In closing, ZDNet's Steven J. Vaughan-Nichols writes: "Regardless of the hardware, all 500 of the world's fastest supercomputers have one thing in common: They all run Linux."
  • Linux (Score:5, Informative)

    by nnet ( 20306 ) on Monday November 18, 2019 @07:16PM (#59428316) Journal
    I pronounce Linux as Linux.
  • Imagine booting those with systemd?

    Guess I need to point this out -- the above is humor, trying to avoid seeing another flame war

    • by marcle ( 1575627 )

      Imagine booting those with systemd?

      Guess I need to point this out -- the above is humor, trying to avoid seeing another flame war

      As only a casual Linux user, I'm curious. Does systemd really slow things down?
      I've heard and agree with many arguments about how systemd perverts the Linux philosophy, co-opts more and more functionality, and generally doesn't play well with others in the Linux ecosystem. Does that include performance hits?

      • Re: (Score:1, Insightful)

        "As only a casual Linux user, ... I've heard and agree with many arguments about how systemd perverts the Linux philosophy, coopts more and more functionality, and generally doesn't play well with others in the Linux ecosystem."

        It is easy to make this mistake when you are a "casual Linux user". systemd is an init system designed to work with modern systems, and solves problems that literally can't be solved with ancient init systems. When people say "Linux philosophy" it shows that they don't know what Unix is, and haven't figured out that relying on shell scripts as a core part of your OS is ridiculous in 2019.

        • Re:humor (Score:5, Insightful)

          by PPH ( 736903 ) on Monday November 18, 2019 @09:01PM (#59428586)

          relying on shell scripts as a core part of your OS is ridiculous in 2019

          I'm afraid you will have to back that statement up with some actual arguments.

          • Yeah sure. I'll just teach you everything you need to know about processes, sockets, asynchronous system events, shell scripts, and software development in general in a post. No problem. If you don't already get this fact then you need to get an education before you can understand why this very obvious thing is true.
            • by PPH ( 736903 )

              Yeah sure. I'll just teach you everything you need to know about

              Some of which are shell scripts wrapped around executables or parsing IPC between processes. I'm looking at some dbus stuff as an example. I know you kids don't like command line parameters and pipelines. But that's how a lot of stuff still gets done. Even if we have to hide it from the systemd people to keep them from throwing tantrums.

              In the meantime, here's a ball. Go outside and play, kid.

              • Comment removed based on user account deletion
        • So please enlighten me as to why systemd is concerned with my DNS queries?

          • Because it has been told to do so, and for whatever reason you haven't educated yourself enough to know that you can systemctl disable systemd-resolved if you don't want it to.
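              For example (a minimal sketch; the nameserver address below is a placeholder, and whether /etc/resolv.conf is a symlink varies by distro):

                  # stop the stub resolver and keep it from starting at boot
                  sudo systemctl disable --now systemd-resolved.service
                  # /etc/resolv.conf may be a symlink to the stub; replace it with your own
                  sudo rm /etc/resolv.conf
                  echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf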
            • Why is it enabled by default in the first place? An init system shouldn't even be near DNS.

              • Your distribution chose it as the default. That is why it is enabled by default. systemd is not just an init system. There are many myths [0pointer.de] about it, and you will never learn anything listening to those who "hate" it here on Slashdot. A few know some things they don't like about it, but they share a universal characteristic: they decided to hate it first and learn about it never (i.e., "cancel culture"), rather than learn [wikipedia.org] about it [freedesktop.org] first and decide later whether it is really the end of Linux as we know it.
          • So please enlighten me as to why systemd is concerned with my dns queries?

            Um, I can tell you about that. As to the value or whatever you want to draw from that, I leave that up to the reader.

            So typically resolution is handled by the glibc functions which receive their configuration via resolv.conf [die.net]. That said, the glibc resolver does not do things like caching or encrypted requests. You need to set up a caching server or some other kind of server if you want to do those things. Typically, if you do those things, then you set the resolv.conf to 127.0.0.1. However, you may also
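            For illustration, that classic setup is just this (a sketch, assuming a local caching daemon such as dnsmasq or unbound is listening on loopback):

                # /etc/resolv.conf
                nameserver 127.0.0.1   # hand all queries to the local caching resolver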

        • Re:humor (Score:5, Insightful)

          by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday November 19, 2019 @12:45AM (#59429140) Homepage Journal

          systemd is an init system designed to work with modern systems, and solves problems that literally can't be solved with ancient init systems.

          Name one.

          When people say "Linux philosophy" it shows that they don't know what Unix is, and haven't figured out that relying on shell scripts as a core part of your OS is ridiculous in 2019.

          Shell scripting is a core feature of Unix. Having the same language for scripting and interactive use is a benefit, not a drawback; it was an advance over prior systems.
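          To illustrate the point, a minimal sysvinit-style service script (a sketch only; "mydaemon" is a made-up name):

              #!/bin/sh
              # /etc/init.d/mydaemon -- the same shell you use interactively, as init glue
              case "$1" in
                start)   /usr/sbin/mydaemon ;;
                stop)    kill "$(pidof mydaemon)" ;;
                restart) "$0" stop; "$0" start ;;
                *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
              esac

          Every line of that can be typed at a prompt and tested a piece at a time, which is exactly the benefit being described.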

          • by syn3rg ( 530741 )
            I may disagree with you on politics Drinkypoo, but I agree with you wholeheartedly here.
      • by raymorris ( 2726007 ) on Monday November 18, 2019 @08:06PM (#59428442) Journal

        > I've heard and agree with many arguments about how systemd perverts the Linux philosophy, co-opts more and more functionality, and generally doesn't play well with others in the Linux ecosystem.

        All true. Systemd is designed using the same philosophy as Microsoft Office. Office is a successful product, and it is the opposite of the Unix philosophy.

        > Does that include performance hits?

        I don't believe so. In theory, init isn't doing anything at all at runtime. Init launches the services at boot and it's done, so it can't affect runtime performance. Except of course systemd is a heck of a lot more than init.

        Probably the most relevant to runtime performance would be syslog - err systemd, the logging code part. If you had only one application running on the machine, systemd logging shouldn't be slower than syslog logging. Where you COULD have an impact is that one application that is doing a lot of logging could slow down other applications, because systemd is putting logs for all applications into the same file, meaning they all share one file.

        On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else. Systemd would get in the way since all logs from everything are written to one combined file. In theory log writes could block because other services have the log busy.
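        For the record, the kind of setup meant here is just a dedicated mount (a sketch; the device name is hypothetical):

            # /etc/fstab -- give httpd logs their own spindle, away from the OS disk
            /dev/sdb1   /var/log/httpd   ext4   noatime   0 2

        with httpd's ErrorLog/CustomLog directives pointed under that mount, so web log IO never competes with anything else.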

        • "On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else. Systemd would get in the way since all logs from everything are written to one combined file. In theory log writes could block because other services have the log busy."

          This is a ridiculous claim. Calls to the logging API are non-blocking, and writing to a single file means better performance not worse.

          • I'm not sure if you forgot to read the first sentence you quoted.

            Anyway, it's non-blocking only until a page is full. It's mmap underneath. Of course it HAS to block at some point, or lose data - machines don't have unlimited RAM. In the case of systemd, the block is when the page is written to disk. Pages are 4KB, so it doesn't block UNTIL all processes combined have written 4KB.

            Are you by chance aware that disk drives are by far the slowest component in your system? The CPU can easily handle hundreds o

            • You need to get a better understanding of how the OS works. Page sizes do not have to be 4k, and you can build the kernel with 64k or 128k page sizes (et al.). You are also confusing page size with buffer size. A buffer can be any size and can be made up of multiple pages. You can also put the systemd (journald) logfile on any kind of file system, including SSD and RAMDISK, or on a SAN over the network. Furthermore, on modern systems a blocking process only ties up one core, so during blocking you will have
              • by raymorris ( 2726007 ) on Monday November 18, 2019 @10:07PM (#59428770) Journal

                Bro, drive throughput has nothing whatsoever to do with CPU cores. When you try to send more data than the drive can write, CPU affinity doesn't affect that in any way.

                > You are also confusing page size with buffer size. A buffer can be any size and can be made up of multiple pages

                Journald COULD have used a buffer, of any size. It doesn't because that's far slower. It uses mmap, which maps file pages straight into memory and skips a lot of file crap. It's a lot like using swap space - it's memory and it's disk. You write to the memory address, and that memory happens to be the same memory page that corresponds to certain disk blocks.

                Google mmap and play with some mmap programming to understand it better. That helped me understand it 15 years ago. That was shortly before I got into kernel programming https://cdn.kernel.org/pub/lin... [kernel.org]

                Or, you can proceed to tell us how we wrote the system calls, while totally misusing the vocabulary so badly that it's obvious you've never even read our man pages, much less our code.

                • And btw, where exactly do you see MAP_HUGETLB?
                  Because I'm looking right at the mmap on line #72 and I don't see MAP_HUGETLB there.

                  https://github.com/systemd/sys... [github.com]

                  • Okay, I'm chill now. If you decide to look over the code to see how it works, I'd be glad to answer any questions you may have.

                    This type of stuff is new to most people. It happens to be the programming I was doing back around 2012.

                • Claiming that CPU cores don't come into play is ridiculous. Again, if there was any blocking, that blocking would be only for that CPU, leaving the others to continue running programs. A SATA 6Gb/s interface has 6 Gigabit/s of data throughput. A typical SSD has a write throughput of 520 MB/s. For server applications you are using RAID to take advantage of the full SATA throughput. If you are saturating that, there is something seriously wrong with your logging, and in any case your 4k per write cap becomes insignifican
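                  For reference, the back-of-the-envelope numbers: SATA 6Gb/s uses 8b/10b encoding, so the usable payload rate is 6 Gbit/s x 8/10 = 4.8 Gbit/s = 600 MB/s, and a single 520 MB/s SSD already fills about 87% of one link on its own.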
                  • > Claiming that CPU cores don't come into play is ridiculous. Again, if there was any blocking that blocking would be only for that CPU

                    Alright buddy. When your drive IO is saturated, go ahead and buy a faster CPU. That'll fix it.

                    • That's not what I said and you know it. There are multiple places where blocking can occur. You aren't I/O blocking because you are logging too many events, and I have to believe you know that. You are also getting away from your initial claims that having multiple files open is more efficient than having a single file open, and that journald doesn't allow you to use a separate filesystem and hardware for logging if you so desire. I don't think it is intentional, but it appears like you are
                    • by raymorris ( 2726007 ) on Tuesday November 19, 2019 @12:05AM (#59429066) Journal

                      Remember when I said "I'm not sure if you forgot to read the first sentence you quoted"?

                      Let's try reading it now:
                      On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else.

                      Maybe even include the sentence before and after, for context:

                      --

                      Where you COULD have an impact is that one application that is doing a lot of logging could slow down other applications, because systemd is putting logs for all applications into the same file, meaning they all share one file.

                      On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else. Systemd would get in the way since all logs from everything are written to one combined file. In theory log writes could block because other services have the log busy.
                      --

                      There are some things you know a lot about. You really don't have to pretend to know more about everything than everyone and anyone. IO throughput is a real thing. And it's per-device. If your httpd / smtpd / whatever-the-server-does log is on the same drive / device as everything else, it's going to affect everything else when there is heavy io. And a billion CPU cores aren't going to make that drive any faster. If 26 daemons are logging to one drive, 26 daemons are going to be waiting every time that drive gets busy.

                      If instead you put your constant writes from $main_service on a separate, dedicated drive, io throughput to that drive doesn't affect any other processes - they are all logging to wherever /var is mounted.

                      This is particularly true when you have a lot of data, so you're using spindles. For example, we log all network activity for 3,000 users. We don't put /var/log/messages on the same set of spindles, or even the same raid card, because you don't want heavy logging from the service to bog down *system* (os) functions.

                      This is actually an enhancement of an earlier idea, something that became popular after Mike and I started doing it in 1996. The 1996 concept was called read drive / write drive. The platter of an HD moves a lot faster than the heads can seek. That means HDs are WAY faster at writes if they are purely sequential as the platter spins under the head, with no seeks. If you do even 2% reads from the same drive, reading other files while trying to log, that requires the heads to seek to other tracks and totally ruins your throughput. So what you can do is have your log writes go to one pair or set of spindles and do all your data reads from another set. Read drive / write drive, /var/www on one set of hardware, /var/log on another. Because uninterrupted sequential writes are so much faster than seek-interrupted ones.


                    • "On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else."

                      I read that and you ignored the point, which is that you can do the same thing with journald.

                      "For example, we log all network activity for 3,000 users. We don't put /var/log/messages on the same set of spindles, or even the same raid card, because you don't want heavy logging from the service to bog down *system* (os) functions."

                      Which again, you are free to

                    • > That you can do the same thing with journald.

                      Why would you do that, since "Calls to the logging API are non-blocking, and writing everything to a single file means better performance not worse"? :D

                      You could just admit that two disks are faster than one, rather than trying to pretend you didn't say that, and trying to use words you almost understand.

                    • I concede that, since you completely left out the part about having two different filesystems in your reasoning, alluding to it only as an anecdote later, I did not address that case, and my statement was with regard to multiple files on the same drive. Still, your "why would I do that" comment shows that you won't admit that you claimed systemd didn't allow the approach you describe when it does.
                    • Btw, your user name is interesting. Why did you choose that?

                      > you won't admit that you claimed systemd didn't allow the approach you describe when it does

                      When did I say that? And ... do you happen to be familiar with how that's done?

                      > you completely left out the part about having two different filesystems in your reasoning, alluding to it only as an anecdote late

                      The sentence you quoted in your very first post, which I asked you twice to please read, is:

                      On our busy web servers, we used to have a httpd log drive separate from the OS or other storage, so that the httpd log IO didn't affect anything else.

                      What is the 13th word in that sentence? The word between "log" and "separate". You quoted it in your very first post, saying that's ridiculo

                    • I was a teenager when I chose it. I was into the hacking and phreaking thing and thought that it was the coolest hacker moniker ever at the time. Pun intended, obviously.

                      man journald.conf

                      See the Storage= section. You can disable the log completely and use your original setup.
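                      For example (a sketch of the relevant journald.conf bits):

                          # /etc/systemd/journald.conf
                          [Journal]
                          Storage=none          # drop journald's own on-disk journal entirely
                          ForwardToSyslog=yes   # keep handing events to syslog/rsyslog/syslog-ng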
                    • Yes, and since you can do that with systemd it isn't significant; the subject was how systemd could negatively impact performance. Most people aren't doing that, so the single-file solution is faster, and those who are doing that can still do what you suggest, either by pointing the binary journal at the device or by disabling journald logging and using only syslog, syslog-ng, rsyslog, etc.
                    • Lol I had to read that twice to notice the pun, even after you pointed it out.

                      > You can disable the log completely and use your original setup

                      I see. So the way "you can do that with journald" is by disabling, not using journald. Cool.

                    • Close. The way you can do it with systemd is to keep journald running and disable the binary logfile creation with an option in journald.conf, relying on the fact that journald passes events on to the other logging systems. Not using journald would involve systemctl disable journald.service, which is a different thing entirely. Recall that this discussion was about systemd, and that you stated that "syslog - err systemd, the logging code part" (i.e., journald) would prevent your approach. I proved it does
          • by MrKaos ( 858439 )

            Systemd would get in the way since all logs from everything are written to one combined file. In theory log writes could block because other services have the log busy.

            This is a ridiculous claim. Calls to the logging API are non-blocking, and writing to a single file means better performance not worse.

            No, it doesn't. First, because if you use separate files, you can specify different filesystems to write log data to and increase I/O bandwidth for logs, or separate your demanding application logs from system logs.

            • Your mistake is in your belief that you cannot do the same with systemd, and is founded on the belief in the myth that it is monolithic. If you want you can disable journald completely and use any other logging tools. The entire educated Linux community accepts your apology.
              • by MrKaos ( 858439 )

                Your mistake is in your belief that you cannot do the same with systemd, and is founded on the belief in the myth that it is monolithic.

                You said: Calls to the logging API are non-blocking, and writing to a single file means better performance not worse. Maybe you can do the same thing with unit files; however, that's not the assertion you made.

                If you want you can disable journald completely and use any other logging tools.

                I'm sure it works fine.

                The entire educated Linux community accepts your apology.

                Dear Linux community,
                I'm sorry that you are being subjected to systemd.

                Sincerely
                MrKaos

                • "Maybe you can do the same thing with unit files however it's not the assertion you made."

                  My assertion was that you are an idiot who doesn't understand systemd but runs around telling people it can't do what it is perfectly capable of doing. You have proved that is true countless times now. Off you go little turd ...

                  • by MrKaos ( 858439 )

                    "Maybe you can do the same thing with unit files however it's not the assertion you made."

                    My assertion was that you are an idiot who doesn't understand systemd but runs around telling people it can't do what it is perfectly capable of doing.

                    Wow, you're actually having an emotional episode over systemd. No need to make it personal, fella. I'm trying to understand what motivates you to choose systemd, even though there is little choice left.

                    The thing is, I've seen the sar data with the configurations you described more than once, and thought it appropriate to mention considering this is about as nerdy as news gets: talking about extracting performance from a supercomputer. Or clusters. That's the core behavior that I've observed generated whilst syst

        • you can literally keep the exact same setup by:
          - switching journald to not store anything on disk (Storage=none) or only in a ramdisk (Storage=volatile, or auto without a disk directory)
          - running syslog concurrently, and having journald send all its output to syslog (ForwardToSyslog=true)
          - (or using the parameters in the unit files to forward only that service's output to syslog: "StandardOutput=syslog", "StandardError=syslog", "SyslogIdentifier", etc.)
          - keeping the same syslog settings.

          Alternatively, you can also redirect
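          As a concrete sketch of the per-unit variant above (the unit and identifier names here are made up), using a drop-in override:

              # /etc/systemd/system/mydaemon.service.d/override.conf
              [Service]
              StandardOutput=syslog
              StandardError=syslog
              SyslogIdentifier=mydaemon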

          • > Switching journald to not store anything on disk (Storage=none) or only in ramdisk (storage=volatile, or auto without a disk directory)
            > run concurrently syslog, and have journald send all its output to syslog (ForwardToSyslog=true)

            > All the criticism toward journald is unfounded as you can still run it next to your preferred logging daemon and do everything like pre-systemd

            Journald is great because you can basically turn it off and run syslog instead? Well, that is a good thing.

              Journald is great because you can basically turn it off and run syslog instead? Well, that is a good thing.

              No.
              What I mean is people like you who are whining "Wah! Journald broke my setup!!!!" can switch the whole thing off and keep the old setup you hold so dear. Despite all your complaining, it is not actually *forced* upon you. You can turn it off, if you want. It's just that nearly every single distro in the wild has decided to switch to it for various reasons (early logging from the start of the boot, the possibility of keeping the entirety of the logs in RAM for embedded devices, etc.)
              But if you feel that sysl

      • You may have heard that crap, but none of it is true.
        What is true is that your system boots way faster running systemd.
        That's what the "imagine booting with systemd" comment is about.
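        For what it's worth, on a systemd box you can measure this directly rather than argue about it:

            systemd-analyze           # total kernel + initrd + userspace boot time
            systemd-analyze blame     # per-unit startup cost, slowest first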

        • by Wolfrider ( 856 )

          --And yet, antiX boots to GUI in less than 40 seconds on my 2011 iMac, while Ubuntu 19.10 with systemd takes over a minute.

      • by MrKaos ( 858439 )

        Does that include performance hits?

        Yes, and stability hits too.

    • by deek ( 22697 )

      Nice one! Both with the systemd joke, and in saying you want to avoid a flame war. Very amusing.

    • by donaldm ( 919619 )

      Imagine booting those with systemd ?

      Guess I need to point this out -- the above is humor, trying to avoid seeing another flame war

      Now you have done it. Flame On! :-)

      Personally I don't have any issue with "systemd", and in the majority of cases it is unobtrusive, unless you are one of those people who wants to fiddle with everything. From an enterprise perspective, you as the system admin may have to change some recommended systemd parameter(s) (e.g., for Oracle), but normally you don't have to worry about it.

  • Who knew you needed a mainframe to run Linux desktop?
    • IBM runs World Community Grid. It has hundreds of thousands of members, but only a small percentage are active. It generates about 1,500,000 results a day. Now, the fastest supercomputer should be able to do that in under an hour. But I think it is very expensive just to program for a supercomputer. That is why IBM continues to ask for volunteers to generate those results. It must take a lot of computing power just to download the work units and upload the results. The results are checked to ensure they are

  • by jshark ( 623406 ) on Monday November 18, 2019 @07:38PM (#59428378)
    Imagine a beowulf cluster of these puppies.
    • Yes. IBM would use a different operating system to compete for the title, but they don't have a lot of cash for that sort of thing. I hope you mean open / libre and not "without monetary cost."
    • It's free.

      A big part of the reason is that the computer makers have the source code and can make Linux run on their specific hardware, unlike with Microsoft's and Apple's proprietary code.

  • by _merlin ( 160982 ) on Monday November 18, 2019 @07:50PM (#59428402) Homepage Journal

    On the three fastest machines, the main compute elements aren't the CPUs. On Summit and Sierra the main compute elements are the NVIDIA GPUs. On TaihuLight the compute elements are specialised vector units (kind of like IBM Cell SPEs, but using MIPS as the starting point rather than PowerPC). The compute elements aren't running Linux, the compute jobs run pretty close to the metal.

    Linux runs on the CPUs that manage distributing jobs to the compute elements, and managing the communication infrastructure in the cluster. That's not to say this is a trivial task - efficient job distribution and result collection is critical to getting the most out of the compute elements - but the actual computing doesn't run on Linux for the most part.

    • by egr ( 932620 )
      And how do you expect Linux itself to run the "compute elements"? That's all an OS really is: a task scheduler and a resource distributor. All computations, regardless of the OS, are done on the bare metal.
      • by _merlin ( 160982 )

        OK smartarse. The Linux kernel is not loaded into memory on the compute elements - they run purpose-built lightweight messaging kernels.

    • On TaihuLight the compute elements are specialised vector units (kind of like IBM Cell SPEs, but using MIPS as the starting point rather than PowerPC).

      So, like the Playstation 2, not the Playstation 3? The Emotion Engine was MIPS-derived.

  • Frontera! (Score:5, Interesting)

    by Orp ( 6583 ) on Monday November 18, 2019 @07:52PM (#59428406) Homepage

    I am an early user on Frontera, finishing up on Blue Waters. Frontera replaces Blue Waters as the National Science Foundation "leadership class" machine: it requires external funding, code that has proven to scale well, and is only accessible via the proposal process. Blue Waters (NCSA/U of Illinois) was a fine machine, and I raise my glass to 'er. I had a major research breakthrough on that machine that changed my academic career for the better.

    But the new shiny machine in town is quite nice. Frontera has 56 cores per node, whereas Blue Waters had 16 (well, 32, but really 16). I can blast through simulations on 256 Frontera nodes in half the time it took 625 Blue Waters nodes. And I thought Blue Waters was fast when I started using it! There are some growing pains on the machine (mostly with Lustre and people blasting too many files to the FS at once), but overall I'm pretty psyched about how research will go on Frontera.

    Further, Frontera isn't even fully assembled yet. A large chunk of the machine will be GPUs, which aren't yet active; currently it's just the CPU side that is working. Our research team is going to use the GPUs for post-processing rather than deep learning / simulation, and we have code that is ready to go.

    Anyhoo yay Linux and yay supercomputers. Oh, I simulate supercell thunderstorms and tornadoes, if you were wondering! http://orf.media/ [orf.media]

  • Comment removed based on user account deletion
    • You are clearly pretty butthurt; so much so that you feel the need to pretend Linux hasn't already overtaken Windows, with macOS a never-ran, for the vast majority of systems on the planet. You can get everything you need done without ever using a Windows system, but even if you have Windows you can't get anything useful done without also using a Linux system. Even if you just have a LAN not connected to the internet, you are still using Linux. DOH!
    • The World's Fastest Supercomputers Hit Higher Speeds Than Ever With Linux

      And now they can run all the best software, like Abi'sWord and K-Phaint and Cali-Libre and Tux Races and Snake! and play all the hottest new action/strategy titles like Battle of Westnorth, and FreeCivilization, Mrs. Mac-Pan, and some port of a port of a cracked version of an early edition of Doomed, where "IDFA" works acceptably but "IDCLIP" yields unpredictable results and sometimes when you try to use it it hangs your four-Raspberry PI bramble cluster ... and you can use a fork of a port of whatever the forerunner to Windamp was based on, (for all those times you really just want to lick a llama's ass... or whatever,)

      I'm kidding, of course. GNU/Linux is awesome and you can run literally everything on it.

      LOL... kidding again! Gotcha again.

      But seriously, now that the fastest supercomputers run even faster on GNU/Linux, it will only be a few more months before the rest of the world realizes the glories of GNU/Linux, abandons Windows AND macOS, every BSD, and everything else for that matter, and someone suggests it might finally be... the year... of Linux... on the desktop.

      I wait with baited breath. :-)

      never, NEVER bait your breath, only folly will ensue.

    • It's 'bated' breath - like 'abated'. And "BATIN!".

    • by dddux ( 3656447 )
      Your post makes me wonder how old you are... or what you've been smoking, or consuming. Just kidding! :)))) I just wonder what kind of drugs you're using, and what kind of mental illness you might be afflicted with. Just kidding! :)))) Maybe you're just a kid, after all. d= ;)
  • They all run a heavily modified version of the Linux kernel and, on top of it, a lot of proprietary code (for instance, NVIDIA VOLTA drivers are 99% closed source aside from thin kernel interfaces) and open code which may or may not originate from the GNU project.

    And these supercomputers' software stacks have very little to do with what an average Linux desktop user has installed.

    • I would not say that. If you ssh onto those systems you'd feel like you're on any Linux system. It's not like a *BSD; it runs a recent distro that is likely very close to a standard RHEL (if not an unmodified RHEL). Sure, you're not using any Linux desktop, it's command line.

      On top of that, you install the NVIDIA driver and CUDA, but that is not different from your laptop if you have an NVIDIA GPU and use CUDA.

      It's not x86, but honestly you don't really feel it until you write assembly code.

  • What does the operating system have to do with the computational speed of the CPU?

    • by AHuxley ( 892839 )
      Re "computational speed of the CPU"
      The really smart people who can link software over a huge number of consumer CPU parts can call it a supercomputer?
      That needs a lot of perfect code to work on some "math".
      Don't have the code quality and that network of consumer CPU parts? Don't get the math done in time, and it won't be nation-ranking impressive.
      Supercomputing for some math, and everyone can only do the same limited set of math.
      But can virtue signal about their nation's education system, green CPU tech, ne
    • by fintux ( 798480 )

      What does the operating system have to do with the computational speed of the CPU?

      It doesn't have anything to do with the computation speed of the CPU. It has a lot to do with the computational speed of the system, and for example how well the system can utilize the computational speed of the CPU.

      The kernel does things like task managing, memory managing, inter-task communication and any other messaging, NUMA node assigning etc. These can make a huge difference.
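      As a small illustration of how much placement matters (the binary name is a placeholder):

          # confine a job's CPUs and memory to NUMA node 0
          numactl --cpunodebind=0 --membind=0 ./my_solver
          # or pin its threads to specific cores
          taskset -c 0-55 ./my_solver

      Letting the kernel scatter threads and memory pages across nodes instead can cost a large fraction of peak throughput on this kind of hardware.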

  • a CoC starts tracking the projects and suggesting no gov and no mil work?
    Projects doing SJW CoC approved math only?
  • Being more than 1000 petaflops and ARM powered to boot.

  • " Hey, we made the list! Now let's budget some upgrades and code optimizations and try to improve our score! "
