AMD Linux

New Ryzen Running Stable On Linux, Threadripper Builds Kernel In 36 Seconds (phoronix.com) 186

An anonymous reader writes: After AMD confirmed a "performance marginality problem" affecting some Ryzen Linux users, RMAs are being issued and replacement Ryzen processors are arriving for affected open-source fans. Phoronix has been able to confirm that the new Ryzen CPUs are running stable, without the segmentation fault problem that would occur under very heavy workloads. They have also now been able to test the Ryzen Threadripper 1950X. The Threadripper 1950X on Linux is unaffected by any issues, unless you count the lack of a thermal reporting driver. With its 32 threads under Linux, they have been able to build the Linux kernel in just about half a minute.
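
For readers who want to reproduce this kind of number, a kernel-build benchmark is usually just a timed parallel make from inside a kernel source tree. A minimal sketch in Python (assuming you are in a checked-out kernel tree with the usual build toolchain installed; the exact configuration Phoronix built isn't specified here):

    import os, subprocess, time

    jobs = os.cpu_count() or 1                           # use every hardware thread
    subprocess.run(["make", "defconfig"], check=True)    # generate a default configuration
    start = time.monotonic()
    subprocess.run(["make", f"-j{jobs}"], check=True)    # parallel build across all threads
    print(f"Built with -j{jobs} in {time.monotonic() - start:.1f} s")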

  • by Joe_Dragon ( 2206452 ) on Tuesday August 29, 2017 @01:46PM (#55104891)

    Apple needs this, not the Intel CPU that costs $700 more!

    • If Apple really wants to lower their costs they'll put their own A-series CPU/GPUs in their Macs.

      Now that I have a gaming PC next to my Mac mini, I don't care as long as the future Mac can run the software I need. Bonus points if it means a cheaper MacBook Air that can run for 20 hours per charge.

      • I think he was probably talking about performance, not cost. I've seen many claims that ARM processors can match desktop ones, but they are bad claims. ARM ones can do very well on microbenchmarks, but they lack the cache and memory bandwidth to go fast, because those things are expensive in terms of power, heat and die area.

        • It's not like you can't pair any core with a bigger cache and memory controller, though.
        • What about multiple multi-core ARMs working in parallel compared to a single multi-core mobile CPU? Because that's what Apple uses in the Mac mini, the MacBooks and the iMac. The MacBook is even worse: it uses an ultra-low-power mobile CPU, and a single ARM is probably at least head-to-head with it, if not faster.

          • What about multiple multi-core ARMs working in parallel compared to a single multi-core mobile CPU?

            Depends on the workload. However, there aren't a lot of single-core processors left in the mobile space.

            Because that's what Apple uses in the Mac Mini

            The bottom-end Mac mini has a dual-core part. But it's a pretty old part, so the comparison isn't great.

            the MacBooks and the iMac. The MacBook is even worst, using an ultra-low-power mobile CPU, a single ARM is probably at least head-to-head with it, if not fast

            • Read my comment again. I said multiple multi-core ARMs vs. a single multi-core x86. Surely Apple's own A10-whatever costs a lot less than any Intel CPU they're using, so they could put more than one A10 in their Macs.

              • Oh, OK, but the ARM cores don't have expensive, power-hungry, high-bandwidth inter-CPU links. Only the server processors have those at the moment.

                It's much harder to make those 10,000 chickens work well on a wide variety of tasks.

      • by enjar ( 249223 )
        I wonder which is cheaper: 1) porting OS X to A-series CPUs, then supporting multiple processors for a while and potentially losing customers/vendors, or 2) buying AMD outright. AMD's market cap is only $12.4 billion; Apple could pay for that out of cash. Apple probably has a decent number for what they paid to move to Intel, and what moving to Intel got them.
        • Whatever the cost, remember the keynote where they told us they had been running Mac OS X on Intel CPUs since practically the beginning? You can be sure they also have macOS running on an A11 (or whatever) in their labs right now.

          And since power requirements and heat dissipation are much different in something the size of a MacBook Air compared to something like an iPhone or iPad, you can be sure whatever numbers we've seen for the iPad Pro are lower than what we'd see in an A11-powered MacBook.

          Perhaps they also

          • And the MacBook Pro and iMac have had AMD GPUs for a long while. What is the problem?

            http://appleinsider.com/articl... [appleinsider.com]

          • by enjar ( 249223 )

            Whatever the cost, remember the keynote where they told us they had been running Mac OS X on Intel CPUs since practically the beginning? You can be sure they also have macOS running on an A11 (or whatever) in their labs right now.

            And since power requirements and heat dissipation are much different in something the size of a MacBook Air compared to something like an iPhone or iPad, you can be sure whatever numbers we've seen for the iPad Pro are lower than what we'd see in an A11-powered MacBook.

            Perhaps they also have dual- or even quad-A11 setups in their prototypes. I don't know the cost of an A11 compared to an i5, but I'm thinking they could put multiple A11s inside a Mac before reaching the cost of a single i5.

            There's a hardware cost, but there's also a non-trivial software cost -- maintaining two compiler versions, supporting two versions of applications, etc. If they just swapped in AMD for Intel, they wouldn't need to leave x86 land. And if they bought AMD outright, they'd control the cost of the chips, too.

            • by stabiesoft ( 733417 ) on Tuesday August 29, 2017 @03:05PM (#55105549) Homepage

              Does AMD have an x86 license that is transferable in the case of a change of control? It could be that no one can buy AMD and retain the jewel.

              • by Agripa ( 139780 )

                No, AMD's x86 licensing deals are not transferable and would have to be renegotiated.

            • As I said, they won't say it, but I'm sure the non-trivial software cost is already paid and always has been, since the first days of Mac OS X. It's not like Apple are strangers to platform changes: 68K to PPC, PPC to Intel, and maybe Intel to ARM in the (near) future.

              They also had fat binaries with both PPC and x86 code at the same time, then 32-bit and 64-bit too, so don't count them out because they'd have to support two versions of everything. They've done it before and they're probably still doing

              • Going ARM will kill fast VMs and other x86-64-only stuff

                • Since iPhones, iPads, iTunes and the rest bring in a lot more money than all Macs combined, does it really matter to Apple to lose a small percentage of Mac sales if it means lower-priced Macs with a bigger profit margin? (e.g., an Intel CPU costs $300, Apple's A11 costs $30; they lower the Mac price by $200 and still make $70 more profit than before.)

            • There's a hardware cost, but there's also a non-trivial software cost -- maintaining two compiler versions, supporting two versions of applications, etc.

              I've shipped MacOS applications built for three different processors (PowerPC 32 bit, x86 32 and 64 bit), and lots of people ship iPhone apps for three different processors (armv7, armv7s, aarch64). Apple has no problem whatsoever supporting any number of different processors if they want to.

              If Apple built an iPhone with an Intel processor, for example, that wouldn't be more than five minutes' work for most developers.

            • There's a hardware cost, but there's also a non-trivial software cost -- maintaining two compiler versions, supporting two versions of applications, etc

              They already maintain the compiler, toolchain, kernel, and most of their core frameworks on x86-64, AArch32 and AArch64 and they only just stopped supporting them on x86-32. The biggest issue would be for third-party developers, but Apple hasn't had a problem with this in the past and they already support fat binaries and have infrastructure for building them in their dev tools.

              If they just swapped in AMD for Intel, they wouldn't need to leave x86 land. And if they bought AMD outright, they'd control the cost of the chips, too.

              They already have their own in-house CPU and GPU design teams. The only thing that they'd gain from buying AMD would be an x86 l

        • by ganjadude ( 952775 ) on Tuesday August 29, 2017 @04:30PM (#55106053) Homepage
          Dear god, no. Apple cannot buy AMD. They are finally waking up after a decade-long slumber; the last thing I want to see is Apple owning them.
        • Apple switched to Intel because the PowerPC consortium wasn't delivering on its R&D commitments sufficiently to stay competitive with Intel's power/performance ratio. Apple hardware was falling behind PC hardware. Part of why Steve Jobs was able to convince the Apple board to buy NeXT was that their OS could already deploy on either architecture.

          Intel's R & D investments were justified by the guaranteed volume. PowerPC was a niche server (IBM) and desktop (Apple) player, in cont
    • by alvinrod ( 889928 ) on Tuesday August 29, 2017 @02:11PM (#55105135)
      I think I had read that Apple is locked into a deal with Intel for several more years, so I wouldn't expect to see any AMD processors soon.

      I suspect that in the long run, Apple's plan is to replace Intel with their own custom chips. Their recent ARM SoCs don't clock as high as Intel chips, but they have been able to achieve similar performance per clock [extremetech.com] in many areas.

      It's probably still a few years before they make the move to their own chips, but it seems like that's where they're going. This seems even more likely as the amount of performance needed for consumer PCs is going to remain relatively fixed while improvements in chip design and fabrication processes make it economically possible for Apple to use their own SoCs in their notebooks or desktops even if they can't compete with the most powerful high-end Intel or AMD chips.

      Perhaps Apple will start designing products intended for the professional market that still use those high-end CPUs from Intel/AMD, but most of their customers don't require that level of power, and it's probably much more cost-effective for Apple to use their own custom chips, especially if they have lower power draw for similar levels of performance.
      • by Bert64 ( 520050 )

        They may not clock as high, but they're used for small battery powered mobile devices with passive cooling. They could probably clock them up quite significantly with some moderate heatsinks and fans.

      • by tlhIngan ( 30335 )

        I think I had read that Apple is locked into a deal with Intel for several more years, so I wouldn't expect to see any AMD processors soon.

        Unless AMD has changed, Apple will never invest in an AMD processor.

        The reason is simple. It's why Apple is using Intel these days after abandoning PowerPC. Besides PowerPC performance issues, Motorola and IBM both failed to deliver on their commitments - shortages of Macs were well known in the PowerPC days, because Apple just could not get their hands on enough PowerP

    • Intel has been without a viable competitor for some time now (especially on low power CPUs). But they've been careful to keep it that way by giving OEMs sweetheart deals.
    • Why? Apple is mostly consumer-oriented, and the majority of consumer software and tasks get little or zero benefit from being able to spin up so many threads. They usually get a far better return from a faster core than from lots of cores (see the quick Amdahl's-law sketch below).
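      A quick, made-up Amdahl's-law illustration of that trade-off (the 50% parallel fraction and the clock figures below are hypothetical, chosen only to show the shape of the argument):

          # If a workload is only 50% parallelizable, a few faster cores beat many slower ones.
          def speedup(parallel_fraction, cores):
              return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

          p = 0.5
          many_slow = speedup(p, 16)         # 16 cores at a baseline clock
          few_fast = 1.3 * speedup(p, 4)     # 4 cores, each clocked 30% higher

          print(f"16 slow cores: {many_slow:.2f}x, 4 faster cores: {few_fast:.2f}x")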
  • Inaccurate Article (Score:2, Interesting)

    by Anonymous Coward

    AMD is using CPUs from week 25+ to fulfill RMAs. They have been doing additional testing in Customer Service on those CPUs; people are getting boxes that have been opened, with handwritten notes relating to this testing.

    It's *not known with certainty that ALL* week 25+ CPUs are good. AMD has made no official statement on that. They sent Phoronix a testing CPU just like they have been sending to their RMA customers.

    Most stores and retail sellers are still selling pre-week 25 CPUs, so those may still be im

    • by courteaudotbiz ( 1191083 ) on Tuesday August 29, 2017 @01:59PM (#55104989) Homepage
      That's what PR is all about. It's not about getting the problems fixed: it's about getting people to think that the problem is fixed.

      Engineers usually fix problems. But right now, they don't want to issue a full recall, so they keep selling the old, defective CPUs, assuming that most people run Windows on them.

      Does any company really care about a desktop processor running a "server" OS like Linux? No.

      Hell, most consumer / prosumer Intel chipsets have no drivers for W2K12 / W2K16. Tweaks exist, but not for the faint of heart.
      • Getting people a CPU that works is a fix. If it doesn't impact your workload, you don't need to worry about it.

        Cost-of-risk factors into business overhead, and requires reserve cash. That means a business's profitability fluctuates--maybe 12% this year, 4% that year, average of 9% profit, once in a while you have to dump your cash coffers to take massive growth (opportunity risk) or deal with 6 straight years of billion-dollar losses (threat risk)--and they need the revenue and thus the pricing to cover

    • by Kjella ( 173770 )

      From what I understand, the "fixed" CPUs have the same stepping, indicating that it's a build-tolerance issue that AMD will initially solve by adding it to their quality control. Presumably they have some inventory that's already boxed, but if you RMA a CPU with this issue they'll explicitly test for it so you don't get affected a second time. I agree it's still not a guarantee you won't experience this problem, but if the cure is an RMA away, it's no worse than any other defective/DOA equipment. IMHO that

      • It's also apparently only affecting Linux, so they can shelve and test the RMA units and then roll them back out as refurbished units if this proves to not affect other users. They could announce that, or they could do it quietly before announcing the issue has been resolved completely and just rely on 98% of their consumer base being Windows users building ridiculous gaming boxes and deal with "my Linux won't work" exactly as they're doing it today, although someone is going to notice the pattern in refu

  • by CycleFreak ( 99646 ) on Tuesday August 29, 2017 @01:58PM (#55104979)

    For those of us that have not actually built a kernel, is 36 seconds astonishingly fast? A little faster? A totally random number with no meaning whatsoever?

    Maybe some of you that do build kernels every once in a while could share your times along with specs for your rig.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      About 20 years ago, my 486DX took 2.5 days to build a kernel.

      • I have a laptop with a mobile Core i7 processor. It takes me 2 or 3 minutes to cross-compile a Linux kernel for an embedded ARM device. I run my compiler in a VM. 36 seconds is nice, but not mind-blowing.
        • It's most likely I/O bound too.

          C (and far worse, C++) compilation is incredibly I/O bound because of the insanely archaic include system (and preprocessor, too). I'm not even talking out of my ass. It's such a problem that Facebook and (IIRC) Google have both come up with custom solutions to try and reduce compile times, because they're so insanely taxing on their day-to-day operations.

          Fun side note: Andrei Alexandrescu and Walter Bright (creators behind the D language) were directly involved in helping Face

          • In the case of Linux it would be more CPU-bound, because it's a big project with lots of files that can be compiled in parallel. A 16-core/32-thread system can work on a lot of .c and .h files at a time.

          • C (and far worse, C++) compilation is incredibly I/O bound because of the insanely archaic include system (and preprocessor too)

            Not on any reasonably spec'd machine. After the first time a header is read, it's going to be in the buffer cache and so the next compilation job that needs them gets them out of RAM (unless you have such a tiny amount of RAM that they're evicted already, but even huge projects only have a few hundred MBs of headers and a single compiler invocation will use more RAM than that). Watching our build machines, they never read from the disk during big C/C++ compile jobs.

            The bottleneck for C/C++ is that you e

    • by TimothyHollins ( 4720957 ) on Tuesday August 29, 2017 @02:07PM (#55105081)

      For those of us that have not actually built a kernel, is 36 seconds astonishingly fast?

      I did a little checking and here's what I found. It's faster than 37 seconds, but not as fast as 35 seconds.

    • by kalpol ( 714519 )
      I can't remember how long it used to take, but I do remember starting Gentoo kernel compilations in the evening on my Inspiron 2600 back in the day and it would be done in the morning.
      • How long before there's a GRUB2 module (installed by default on Gentoo) that will recompile the kernel from source at every boot? And how long after that until SystemD integrates a compile-on-access setup for all managed services?

        Just imagine, every time you turn the system on, everything is recompiled from source, and you don't even notice! Every time something connects to a socket over the network, the service is compiled fresh! :)

        • How long before there's a GRUB2 module (installed by default on Gentoo) that will recompile the kernel from source at every boot?

          It exists already. This was one of the demos for tcc (which didn't optimise much, but compiled really fast): it would compile the kernel from the loader and then boot.

    • by mark-t ( 151149 )
      In my experience, it typically takes upwards of 10 minutes, so yes... 36 seconds is astonishingly fast.
    • Re: (Score:3, Informative)

      by Anonymous Coward

      Depends a lot on how you configure it. If it's 36 seconds to compile a stripped-bare kernel, it's mildly impressive. If it's 36 seconds to compile a complex kernel with tons of options, modules, etc., it's very impressive. I used to compile kernels with options for a home computer in 10-15 minutes in the late '90s / early aughts (486-PIII days). A complex config might take hours. No doubt modern kernel compilation is more complex.

    • The 36-second stat is meaningless for the Threadripper 1950X without something to compare it against. How long does it take to compile the exact same kernel with the exact same configuration on another box with an i7-7700K or a Ryzen 1800X?

      That is like me saying my new database program takes 15 seconds to do a certain SQL select statement against a 5 million row table. It means nothing until I tell you that the exact same statement running on Postgres on the same box takes 38 seconds.
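
      To make that concrete, here is a rough sketch of how the PostgreSQL side of such a comparison could be timed (the DSN, table, and query are made-up placeholders, and this assumes the psycopg2 driver is installed):

          import time
          import psycopg2  # PostgreSQL driver, assumed to be installed

          QUERY = "SELECT category, COUNT(*) FROM big_table GROUP BY category"  # placeholder

          def best_of(dsn, query, runs=3):
              best = float("inf")
              with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
                  for _ in range(runs):
                      start = time.monotonic()
                      cur.execute(query)
                      cur.fetchall()
                      best = min(best, time.monotonic() - start)
              return best

          print(f"Postgres, best of 3: {best_of('dbname=bench', QUERY):.2f} s")

      Run the exact same statement through the other engine on the same box, and the comparison actually says something.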
    • by wings ( 27310 )

      For reference, a 386SX-16 would take about 4-1/2 hours to build R2.x kernels back in the mid-'90s.

  • Can Confirm (Score:4, Informative)

    by iCEBaLM ( 34905 ) on Tuesday August 29, 2017 @03:41PM (#55105769)

    Have a Ryzen 7 1700 that was affected by the segfault issue. Contacted AMD; they wanted a pic of my case to make sure it wasn't a thermal issue, then asked me to try some different vcore/vsoc voltages and retest. When I still had the problem, they shipped me a brand-new-in-box CPU, and it's been working perfectly.

    AMD support is bloody stellar.

    • Re:Can Confirm (Score:5, Interesting)

      by ffkom ( 3519199 ) on Tuesday August 29, 2017 @05:26PM (#55106397)
      "Bloody stellar" I would call if they:

      - Had an understanding of the root cause of the bug, and told the public what it was and how they solved it
      - Told everyone how to distinguish affected CPUs from unaffected ones (without a multi-hour run of some test script that was implemented and made available not by AMD, but by desperate affected buyers; see the sketch below)
      - Recalled all the defective CPUs and replaced them with working ones, including CPUs sold as part of computers

      What you describe is rather the bare minimum they owe customers going through lots of trouble due to a defective product they were sold.
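
      For context, the community-written test mentioned above boils down to hammering every core with parallel compile jobs in a loop, for hours, and stopping as soon as one of them crashes. A very rough sketch of that idea (not the actual community script; the source-tree path and workload are placeholders):

          import multiprocessing, subprocess, sys

          SRC_TREE = "/path/to/some/source/tree"   # e.g. a large C project checkout
          JOBS = multiprocessing.cpu_count()

          def one_pass():
              subprocess.run(["make", "clean"], cwd=SRC_TREE, check=True)
              # A segfaulting compiler process makes make fail, which raises here.
              subprocess.run(["make", f"-j{JOBS}"], cwd=SRC_TREE, check=True)

          passes = 0
          while True:
              passes += 1
              try:
                  one_pass()
                  print(f"pass {passes} OK")
              except subprocess.CalledProcessError as err:
                  sys.exit(f"Build failed on pass {passes}: {err} (possible segfault)")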
    • by guruevi ( 827432 )

      You call that good support? You are affected by a known issue, you have to call them and then they still make you jump through a bunch of hoops to get a replacement part?

  • by Billly Gates ( 198444 ) on Tuesday August 29, 2017 @04:24PM (#55106015) Journal

    Yes, they are very similar, but Threadripper is the consumer version of their upcoming Xeon competitor.

    AMD admitted it did little testing of the regular Ryzen line under Linux, as most consumers would be running Windows anyway, and said that in the future it will test for this. FreeBSD is also impacted by the same bug, where things get out of order and corrupted under heavy loads.

    Threadripper has more cache and a different cache and memory arrangement, as it supports NUMA and non-NUMA modes for server-oriented loads, and this is where the bug lies.

    Unfortunately, this makes me very cautious about purchasing an AMD system, as it does have a reputation of being bargain grade. But it is a brand-new architecture from scratch. I may be open to Ryzen 2 or Threadripper 2 after some of the bugs are worked out.

    • by arth1 ( 260657 )

      This bug is precisely why I just ordered a dual-Xeon system instead of a Ryzen Threadripper. I can't risk it.
      The Xeons for the same price are clocked lower than the 1950X, but the build load is mostly thread- and I/O-bound, so that's not as big a worry as a thread dying.

      • This bug is precisely why I just ordered a dual-Xeon system instead of a Ryzen Threadripper. I can't risk it.
        The Xeons for the same price are clocked lower than the 1950X, but the build load is mostly thread- and I/O-bound, so that's not as big a worry as a thread dying.

        The newest Ryzens already don't have the issue, and if you see it, you can get a new one through AMD QA. Since it only triggers on Linux, it probably affects a small subset of customers, though we would all have loved it if they could explain exactly what happened.

  • If you order one now, will it have the fix or not? In other words are they still selling units that were made before the fix was incorporated into the production process?

    I am building a heavily threaded Windows program that does data management. One of the things it does is break SQL queries into small pieces and run them in parallel on multi-core boxes. I have queries that run more than twice as fast (or is that less than half as long?) as the same queries on PostgreSQL v9.5 running on the same box. I am
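
    The parallel-query trick described there amounts to range-partitioning one query, fanning the pieces out to worker threads, and combining the partial results. A minimal sketch of the idea (the table, column, ranges, and the SQLite stand-in backend are all hypothetical; the poster's actual engine is a custom Windows program):

        import sqlite3
        from concurrent.futures import ThreadPoolExecutor

        DB = "bench.db"  # placeholder database file
        RANGES = [(0, 1_000_000), (1_000_000, 2_000_000),
                  (2_000_000, 3_000_000), (3_000_000, 4_000_000)]

        def partial_sum(bounds):
            lo, hi = bounds
            # One connection per worker; SQLite connections aren't shared across threads.
            with sqlite3.connect(DB) as conn:
                row = conn.execute(
                    "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE id >= ? AND id < ?",
                    (lo, hi)).fetchone()
            return row[0]

        with ThreadPoolExecutor(max_workers=len(RANGES)) as pool:
            total = sum(pool.map(partial_sum, RANGES))
        print("total:", total)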
    • by ffkom ( 3519199 )
      AMD has never explained (and might not even know) the root cause of this bug. It is merely hearsay that newer CPUs are probably not affected.

      So yes, if you buy a Ryzen now you could still get one that is affected. Better make sure you run the test scripts for this bug for multiple hours before assuming you got a flawless CPU.
  • AMD in many countries sends people who are affected by this bug off to their computer dealer if they did not buy a boxed retail CPU, but bought a whole computer with the Ryzen CPU as part of it.

    Which basically means they are screwed: Computer dealers totally lack the knowledge or incentive to even understand the nature of this bug, especially given the fact that AMD has never explained which of their CPUs are affected and which are not.

    So at best, your computer dealer will give you another computer in r
  • I've been avoiding AMD-based CPUs/chipsets for a while now, mostly down to the fact that it's just a mistake to use AMD for running Linux (terrible GPU drivers). Any Linux users out there care to comment on how AMD fares against Intel these days?
    Thanks

  • There's a manufacturing problem. To work around it, they've come up with a stress test that yields enough CPUs known to pass the provocation test that they can ship those to people complaining about that problem. The CPUs being sold continue to be as buggy as before, since in Windows such bugs get excused as video-card shittiness or whatever.

  • I can install Gentoo in under a day.

    • Came here for this, was not disappointed. That was my first thought too. "When I get one of those in another year or two, maybe I should look at Gentoo again..." 7-8 years ago I got tired of compiling Gentoo all of the time. A 36-second kernel compile would definitely pique my interest in Gentoo once again. Gentoo can be blindingly fast, and I'm wondering exactly how fast with a CPU like this. If the kernel compile goes that fast, how about the rest of it?

  • Phoronix reported that AMD has debugged the problem and that all CPUs (1600-1800X, for example) manufactured after week 30 (30 July) will not have this problem.
    There is a date of manufacture engraved on the CPU cover.
