
Intel's Clear Linux Distribution Offers Fast Out-Of-The-Box Performance (phoronix.com) 137

An anonymous reader writes: In a 10-way Linux distribution battle including openSUSE, Debian, Ubuntu, Fedora, and others, one of the fastest out-of-the-box performers was a surprising contender: Intel's Clear Linux Project, which is still in its infancy. Clear Linux ships in a form optimized for the best performance on x86 hardware: many compiler optimizations are enabled by default, software bundles are highly tuned, function multi-versioning picks the most performant code paths for the CPU at hand, and AutoFDO provides automated feedback-directed optimization, among other performance-driven features. Clear Linux is a rolling-release-inspired distribution that issues new versions a few times a day and is up to version 5700.
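
For readers unfamiliar with the term, the function multi-versioning mentioned in the summary can be approximated with GCC's target_clones attribute: the compiler emits several variants of a function and a resolver picks the best one for the running CPU at load time. A minimal sketch, assuming GCC 6 or newer; Clear Linux's actual build machinery may differ, and the dot() example is purely illustrative:

    #include <stdio.h>

    /* GCC emits one clone per listed target plus a "default" fallback,
     * and an ifunc resolver selects the best match for the CPU at load time. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    int main(void)
    {
        double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
        printf("%f\n", dot(a, b, 4));   /* same source, CPU-specific code path */
        return 0;
    }
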
  • Different compiler (Score:5, Informative)

    by BarbaraHudson ( 3785311 ) <barbara.jane.hud ... minus physicist> on Tuesday January 12, 2016 @12:50PM (#51288173) Journal
    Intel's own compiler in many cases generated code that runs 15%-20% faster than code compiled under GCC.
    • by Bert64 ( 520050 ) <.moc.eeznerif.todhsals. .ta. .treb.> on Tuesday January 12, 2016 @12:52PM (#51288197) Homepage

      Not everything can be compiled with Intel's compiler...
      I believe Gentoo has offered an option to build with Intel's compiler for a while, but not all packages will work that way.

    • by sofar ( 317980 ) on Tuesday January 12, 2016 @01:02PM (#51288277) Homepage

      Clear uses gcc-5.3.0 - see https://download.clearlinux.or... [clearlinux.org]

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        And what does it look like when running on an AMD core?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      TFA says that GCC was used, not ICC.

    • by Locke2005 ( 849178 ) on Tuesday January 12, 2016 @01:09PM (#51288357)
      A few years back, GCC took out all the processor-specific optimizations and put in only general purpose optimizations. That made it less efficient, so I can believe putting back in all the processor-specific stuff would make it run 15% to 20% faster. I also suspect the Intel Linux Distro is extremely limited as to what hardware it runs on, but that's the tradeoff you make for maximum optimization.
      • by Anonymous Coward

        My guess is that the hard-to-find optimization notice for Intel compilers (hard to find because they host it as an image file instead of plain text) probably applies here. In other words, purposely compiling poorly for AMD processors. https://software.intel.com/en-us/articles/optimization-notice/

      • by Anonymous Coward

        When you dig into a default kernel config for a major distribution, you find a ton of options that carry performance hits but that you might never need. Specifically, debug options and certain features that, according to the built-in descriptions, have a small impact on performance. When you turn off two dozen such options and a few debug options (build a secondary debug-oriented kernel from a separate config if you need it for bug hunting), you stack quite a few performance improvements together.

      • I think you are mistaken. You can still tune GCC, but it defaults to safe compiler optimization flags.

        I use Funtoo, and I have my CFLAGS tuned to my architecture. Things run real zippy even on older or low end hardware when optimized for your architecture.
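
        As a tiny illustration of what architecture-specific CFLAGS buy you (a generic sketch, not Funtoo- or Clear-Linux-specific; the saxpy example and filenames are made up), the same C source compiles to very different machine code depending on -march, e.g. the loop below can be auto-vectorized with AVX2 when the flags allow it:

            #include <stdio.h>

            /* Build with:
             *   gcc -O2 saxpy.c -o saxpy                (generic x86-64 baseline)
             *   gcc -O3 -march=native saxpy.c -o saxpy  (tuned to the build host,
             *                                            e.g. AVX2 auto-vectorization)
             */
            static void saxpy(float *restrict y, const float *restrict x, float a, int n)
            {
                for (int i = 0; i < n; i++)
                    y[i] = a * x[i] + y[i];
            }

            int main(void)
            {
                float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
                float y[8] = {0};
                saxpy(y, x, 2.0f, 8);
                printf("%f\n", y[7]);   /* prints 16.0 */
                return 0;
            }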

      • by Anonymous Coward on Tuesday January 12, 2016 @01:54PM (#51288723)

        A few years back, GCC took out all the processor-specific optimizations and put in only general purpose optimizations

        WTF are you talking about? That is patently false.

      • by nnull ( 1148259 )
        So all of a sudden running Gentoo makes sense.
  • Telemetry (Score:2, Informative)

    by Anonymous Coward

    "In support of the goal to provide an agile Linux* distribution that rapidly detects and responds to quality issues in the field, Clear Linux for Intel® Architecture includes a telemetry solution, which notes events of interest and reports them back to the development team."

    https://clearlinux.org/feature... [clearlinux.org]

    • by NotInHere ( 3654617 ) on Tuesday January 12, 2016 @01:03PM (#51288295)

      Linux 10?

    • The first paragraph there says the end user can disable it.

      • Note how nobody has said how.
        • Note how nobody has said how.

          Well, I imagine rm -rf libtelemetry.so would do the trick ... :)

          Seriously, though, it's pretty clear that Clear Linux is designed for server deploys, in situations where I'd guess the telemetry service might catch issues in order to make an admin's task easier. It's touted by Intel as a feature of the distro, after all, so they obviously think some people will find it useful. They also note that the telemetry service is open source, so I imagine you could vet the code if you really wanted to.

          Would I want a telemetry service running on my linux box? Hell, no! But I'm also not the target market for this distro.

          • "Would I want a telemetry service running on my linux box? Hell, no! But I'm also not the target market for this distro."

            You actually think enterprise environments are LESS concerned about their data leaking to unauthorized third parties? Hardly. In fact, depending on what data leaks through such a service, it could even be a PCI compliance violation, be illegal, and/or open them up to liability.

            "It's touted by Intel as a feature of the distro, after all, so they obviously think some people will find it useful."

            If
      • by Anonymous Coward

        The first paragraph there says the end user can disable it.

        And you take their word for it? Also, who's to say that they won't pull an MS and re-enable it after an update?

  • Having started programming before a lot of you were born, I'd say that's called management too cheap to hire a testing group, with the programmers trying something out, seeing if it works for them, and pushing it out.

    In other words, sounds like M$ std. development system....

                          mark

    • by Anonymous Coward

      There are SO many things that you can bash Microsoft for, but this is not one of them.

      They spend a lot of time on testing, regression testing, and backward-compatibility testing. They release updates and patches almost exclusively once per month. When you consider the vastness and diversity of their ecosystem, the number of issues they constantly deal with, and the very few instances of breakage that occur, you cannot reasonably deny that they do a fantastic job of developing, testing, and deploying.

      Be serio

      • While what you say may be generally true, MS has been on a big move towards telemetry gathering since the Win 10 release. I've seen huge performance gains even in Win 7 and Win 8 simply by turning off newly-installed telemetry. The amount of telemetry vendors gather tends to be a huge drain on start-up times. So this goes beyond simple privacy concerns. As much as everyone seems to be in love with Win 10, I think it's a regression (relative to Win 7) despite increased telemetry gathering. So you can't simp
    • Generally agreed. One might say this is the EXTREME version of the release early and release often development model.

      It has its ups and downs, and a similar development model is spreading everywhere as devops takes over and developers are being given the keys to the castle. Unfortunately, in the past, young and enthusiastic developers have been restrained both by more experienced programmers and by experienced admins. When a young but talented programmer ignores a more experienced programmer the admins could be c
    • Well, it seems that several moderators must have marked this as "troll". I can't work out whether that's from resentment of the (perhaps) implied superiority of "before a lot of you were born", or from natural resentment that Mark permitted himself to criticize M$.

      I'm standing back to back with Mark, so please moderate this reply "troll" to your heart's content. Or stop and think for a few moments about what he actually said.

  • Does it use (Score:1, Insightful)

    by Anonymous Coward

    Systemd? Hopefully they chuck it.

    • Re: (Score:3, Interesting)

      by KermodeBear ( 738243 )

      There's one thing about systemd that I don't understand: If it is terrible (and I have no doubt that it is, from its philosophy to its implementation), why have almost all of the major Linux distributions moved to it?

      • by fnj ( 64210 )

        There's one thing about systemd that I don't understand: If it is terrible (and I have no doubt that it is, from its philosophy to its implementation), why have almost all of the major Linux distributions moved to it?

        Seriously? People are stupid and compliant. Look at the incompetent boob occupying the white hut for the last 7 years. That was by popular vote - twice. And before anyone jumps down my throat, look at the last 27 years of boobs that were installed as president.

      • You're saying that many distros have decided in favour of systemd[1], and surely they can't all be wrong? Except they aren't really independent decisions. Most distros are either respins of RedHat or rely heavily on source provided by them, just due to their sheer size. The downstreams made the decision, rightly or wrongly, that excising it is either not possible or not feasible. Some suspect it's been intentionally written that way.

        So the decision was made once - by, or under the influence of, Lennart

        • Nope. That's not what I'm saying. The question is meant to be taken quite literally: "systemd seems to have quite a lot of flaws, so why are so many distributions accepting it so quickly?" I apologize if you ended up taking more meaning out of that than what was intended - it's really that simple a question.

          I did a bit of searching around and found this [zd.net], but it seems tin-foil-hatty and I don't have the deep, low-level linux experience to tell if it is true or not.

          The whole open source thing is great, in th

      • Re:Does it use (Score:4, Insightful)

        by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Tuesday January 12, 2016 @05:28PM (#51290283)

        There's one thing about systemd that I don't understand: If it is terrible (and I have no doubt that it is, from its philosophy to its implementation), why have almost all of the major Linux distributions moved to it?

        Because maybe it isn't as terrible as it seems?

        Sure, there's a lot of NEW things in it, but isn't Linux all about new? And things are different, which gets people tied up in knots.

        And the other thing is, people don't realize the shortcomings of the ever-popular SysVinit. I mean, why do we use SysVinit scripts to emulate what init itself already does? Init is a daemon manager - in practically all Linux distros it's managing getty (which spawns login). When you end your session getty dies, and init duly restarts it, like a good daemon manager does. And you can have daemons killed and restarted based on runlevel. This is built-in, standard default behavior of init. Yet everyone creates elaborate scripts that do the same thing, or even programs that spawn a child to run the service and respawn it when it crashes or dies - something init already does. Init even does rate limiting - if a daemon quits too quickly, init stops starting it for a few minutes.

        SystemD formalizes this as a fundamental part of the system - init really should manage daemons, not a rough collection of shell scripts that try to mimic its behavior.

        Granted, things are more complex, like how PulseAudio made audio more complicated. But then you realize that audio IS complicated these days, especially on a desktop OS. There was a time you could open /dev/dsp and that was it, but those days are long gone, because users have multiple audio devices, and not only that, those devices can change suddenly. And the hardware itself can change - perhaps they're listening on wireless headphones through Bluetooth, but then they want to switch to speakers, which requires switching the underlying hardware, and so forth.

        And initialization and startup is similar.

        In the end, what's happening to Linux is what Android did to Linux. Android has its own init system (init manages daemons, like it should), its own graphical system, its own audio layer and much more.

        And it was done because the demands of mobile make it necessarily complex, and consumer expectations ensure it isn't easy.
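
        As a toy illustration of the respawn-and-rate-limit behaviour attributed to classic init above (just the shape of the idea in C, not actual init or systemd source; the supervised command /bin/sleep is a stand-in):

            #include <stdio.h>
            #include <stdlib.h>
            #include <sys/types.h>
            #include <sys/wait.h>
            #include <time.h>
            #include <unistd.h>

            /* Keep a child command running: restart it whenever it exits,
             * and back off if it dies too quickly, roughly what init has
             * always done for entries marked "respawn" in inittab. */
            int main(void)
            {
                char *const cmd[] = {"/bin/sleep", "5", NULL};

                for (;;) {
                    time_t started = time(NULL);
                    pid_t pid = fork();

                    if (pid == 0) {              /* child: become the daemon */
                        execv(cmd[0], cmd);
                        _exit(127);              /* exec failed */
                    } else if (pid < 0) {
                        perror("fork");
                        return 1;
                    }

                    waitpid(pid, NULL, 0);       /* parent: wait for it to die */

                    if (time(NULL) - started < 2) {
                        fprintf(stderr, "respawning too fast, sleeping 60s\n");
                        sleep(60);               /* crude rate limiting */
                    }
                }
            }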

      • by Anonymous Coward

        The people who find it terrible are clearly a minority among distro devs, including those of Debian, which is made by volunteers.
        Apparently, from a distro maker's viewpoint, it is not so terrible that they avoid it like the plague.
        (Distro) devs who do find it terrible are still just as free as ever to (fork to) create non-systemd distros, modifying programs to work without systemd, which some are doing.
        Some users/admins are not content with the non-systemd offerings these devs are making, but they are free to become devs (o

      • by dbIII ( 701233 )
        Because RedHat does a lot of the low-level work in user space, and Lennart has sold RedHat on systemd as an in-house project. No conspiracy, and also no proof of it being better than the alternatives; it's just easier to copy RedHat's stuff and move on.

        It does mean a lot of server and workstation stuff is going to be stuck on RHEL6 for a few years until the systemd stuff has either settled down or been abandoned. If Lennart moves on to something else someone who just gets the job done instead of striving to be a "r
        • by ssam ( 2723487 )
          RHEL6 uses Upstart, which is absent from most anti-systemd folks' lists of favourite init systems.
          • by dbIII ( 701233 )
            You've missed that systemd is no longer just an init system and has even expanded to include another version of sudo. Upstart is limited to being an init system and has far fewer points of failure - it's about getting a single simple job done instead of replacing everything in Linux that doesn't have Lennart's name on it.
            A lot of commercial stuff has init scripts and is too slow-moving to go chasing after a moving target like systemd, whether it is good or not - hence stuck on RHEL6/CentOS6 where it will work due to b
    • If Intel is doing a Linux distro, why not also a FreeBSD distro, where they can also work their compiler w/ LLVM/Clang?
  • by fahrbot-bot ( 874524 ) on Tuesday January 12, 2016 @01:19PM (#51288415)

    Clear Linux is a rolling-release-inspired distribution that issues new versions a few times a day and is up to version 5700.

    Big deal. Firefox will catch up with that shortly.

  • "Clear Linux Project for Intel Architecture"

    In reading a few pages, I don't think they used the word 'Intel' enough....

  • They also claim the code to be 100% thetan-free.
  • Coming from a major U.S. corporation that is on the NSA's and the U.S. government's leash, this is an important question to ask, and something that people should look into before adopting this.

  • What would you expect? Intel is using a custom kernel optimized for Intel processors and chipsets. The other distros ship generic kernels to work with various processors and chipsets. If you prepare custom kernels for the specific hardware at hand, any of the distros listed in the summary will perform wickedly fast.

  • Linux for Scientologists. Thetan free.
  • by Ungrounded Lightning ( 62228 ) on Tuesday January 12, 2016 @01:42PM (#51288627) Journal

    Upon seeing the description of function multi-versioning I thought of three distinct ways to use it for malware in as many minutes, and the ideas are still coming. (And I don't write malware, so someone in the field would probably think of more, faster.)

    It's also a great way to make competitors' processors look bad: Detect their processors and fall back on the minimalist defaults or even hand them "grinched" code that does worse, or contains odd kickers. Or just don't support THEIR accelerations. Also: Don't support their implementations of YOUR accelerations.
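
    For readers wondering what that kind of dispatch looks like in practice, GCC exposes runtime CPU detection directly. A hedged sketch of the mechanism (illustrative only, not Clear Linux code; fast_path/plain_path are made-up names), showing both a feature-flag check and the kind of vendor check the parent is worried about:

        #include <stdio.h>

        void fast_path(void)  { puts("AVX2-optimized path"); }
        void plain_path(void) { puts("baseline path"); }

        int main(void)
        {
            __builtin_cpu_init();   /* populate GCC's CPU feature data */

            /* Dispatching on a feature flag treats every vendor's CPU the
             * same way, as long as it implements the extension... */
            if (__builtin_cpu_supports("avx2"))
                fast_path();
            else
                plain_path();

            /* ...whereas dispatching on the vendor string is how a build
             * could quietly route non-Intel CPUs to a slower path. */
            if (__builtin_cpu_is("intel"))
                puts("vendor check: intel");
            else
                puts("vendor check: other");

            return 0;
        }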

  • by Eravnrekaree ( 467752 ) on Tuesday January 12, 2016 @01:45PM (#51288651)

    It's nice to see this; however, we really should, in general, have a better way for Linux programs to easily take advantage of the CPU extensions available without a recompile. There are dozens of permutations of CPU extensions, so distributing a binary for each permutation is not feasible, and full from-source compilation takes too long for many users. Letting Linux binaries use the CPU's most advanced features has long been a problem. One solution that I favor is to take a page from the AS/400, in a variation of that approach: in each library file, put a copy of the machine code, but also a copy of the abstract syntax tree from the last compilation phase. If the binary is moved to a new CPU, the AST is run through the code generator to regenerate the machine code in the file according to the options the CPU supports, all done in situ. This is much better than storing a copy of a binary for each CPU permutation in a library file. It makes things easy to use and is faster than compiling from source, as the lexer and parser phases do not need to be repeated.

    • One solution that I favor is to take a page from the AS/400, in a variation of that approach: in each library file, put a copy of the machine code, but also a copy of the abstract syntax tree from the last compilation phase.

      You're thinking of distributing LLVM bitcode. Google was thinking of the same thing when designing PNaCl.

  • by hey! ( 33014 ) on Tuesday January 12, 2016 @01:51PM (#51288689) Homepage Journal

    The rankings in individual benchmarks were all over the place; a composite of those benchmarks is only valid for some theoretical "average" workload that's the average of all the workloads each individual benchmark is supposed to represent, and almost nobody actually has a workload that resembles that "average".

    In fact the whole "shootout" scenario is silly because Clear Linux is a container-centric distro. It makes no sense at all to compare it to general-purpose distros like Ubuntu and plain-vanilla CentOS and then leave out Red Hat/CentOS's Atomic Host flavors.

    In any case if performance is your paramount concern, then "out-of-the-box" performance is bound to be irrelevant to you because you'll be compiling from source with your own choice of compiler and flags, as well as fiddling with all those bells and whistles exposed in the /sys interface. What's interesting would be an exploration of why various distros did better or worse on individual benchmarks.

  • by BrookHarty ( 9119 ) on Tuesday January 12, 2016 @02:12PM (#51288849) Journal

    No matter what distro I use for my desktop, I always use the latest pf-kernel [natalenko.name], with the BFQ scheduler, low latency, CPU optimizations, etc. I can overload the desktop and music/video stays smooth as silk, and compiling is faster. It's a real-world performance boost.

    I'd love to see how a pf-kernel does vs stock on each distro.

    • Thanks for mentioning pf-kernel here, but I would suggest you evaluate the real benefit in numbers, not just by feel, compared to the stock distro kernel. Otherwise there will always be lots of anonymous people talking about placebo. Nevertheless, thanks for using pf-kernel.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...