New Ryzen Running Stable On Linux, Threadripper Builds Kernel In 36 Seconds (phoronix.com) 186
An anonymous reader writes: After AMD confirmed a "performance marginality problem" affecting some Ryzen Linux users, RMAs are being issued and replacement Ryzen processors are arriving for affected open-source fans. Phoronix has been able to confirm that the new Ryzen CPUs are running stable, without the segmentation fault problem that would occur under very heavy workloads. They have also now been able to test the Ryzen Threadripper 1950X. The Threadripper 1950X on Linux is unaffected by any issues, unless you count the lack of a thermal reporting driver. With its 32 threads under Linux they have been able to build the Linux kernel in just over half a minute.
Apple needs this not the $700 more intel cpu! (Score:5, Insightful)
Apple needs this not the $700 more intel cpu!
Re: (Score:3)
If Apple really wants to lower their costs they'll put their own A-series CPU/GPUs in their Macs.
Now that I have a gaming PC next to my Mac mini, I don't care as long as the future Mac can run the software I need. Bonus points if it means a cheaper MacBook Air that can run for 20 hours per charge.
Re: (Score:3)
I think he was probably talking about performance not cost. I've seen many claims that ARM processors can match desktop ones, but they are bad claims. ARM ones can do very well on microbenchmarks, but they lack the cache and memory bandwidth to go fast, because those things are expensive in terms of power, heat and die area.
Re: (Score:2)
Re:Apple needs this not the $700 more intel cpu! (Score:4, Insightful)
Re: (Score:2)
What about multiple multi-core ARMs working in parallel compared to a single multi-core mobile CPU? Because that's what Apple uses in the Mac Mini, the MacBooks and the iMac. The MacBook is even worse, using an ultra-low-power mobile CPU; a single ARM is probably at least head-to-head with it, if not faster.
Re: (Score:3)
What about multiple multi-core ARMs working in parallel compared to a single multi-core mobile CPU?
Depends on the workload. However, there aren't a lot of single-core processors left in the mobile space.
Because that's what Apple uses in the Mac Mini
The bottom-end Mac mini has a dual-core part. But it's a pretty old part, so the comparison isn't great.
the MacBooks and the iMac. The MacBook is even worse, using an ultra-low-power mobile CPU; a single ARM is probably at least head-to-head with it, if not faster
Re: (Score:2)
Read my comment again. I said multiple multi-core ARMs vs a single multi-core x86. Surely Apple's own A10-whatever costs a lot less than any Intel CPU they're using, so they could put more than one A10 in their Macs.
Re: (Score:2)
Oh OK, but the ARM cores don't have expensive, power-hungry, high-bandwidth inter-CPU links. Only the server processors have those at the moment.
It's much harder to make those 10,000 chickens work well on a wide variety of tasks.
Re: (Score:2)
Re: (Score:3)
Whatever the cost, remember the Keynote where they told us they had Mac OS X running on Intel CPUs since practically the beginning? You can be sure they also have macOS running on A11 (or whatever) in their labs right now.
And since power requirements and heat dissipation are much different in something the size of a MacBook air compared to something like an iPhone or iPad, you can be sure whatever numbers we've seen for the iPad Pro are lower than what we'd see in a A11-powered MacBook.
Perhaps they also
Re: (Score:2)
And the MacBook Pro and iMac have had AMD GPUs for a long while. What is the problem?
http://appleinsider.com/articl... [appleinsider.com]
Re: (Score:2)
Whatever the cost, remember the Keynote where they told us they had Mac OS X running on Intel CPUs since practically the beginning? You can be sure they also have macOS running on A11 (or whatever) in their labs right now.
And since power requirements and heat dissipation are much different in something the size of a MacBook air compared to something like an iPhone or iPad, you can be sure whatever numbers we've seen for the iPad Pro are lower than what we'd see in a A11-powered MacBook.
Perhaps they also have dual- or even quad-A11 in their prototypes. I don't know the cost of an A11 compared to an i5, but I'm thinking they can put multiple A11s inside a Mac before reaching the cost of a single i5.
There's a hardware cost, but there's also a non-trivial software cost -- to maintain two compiler versions, support two versions of applications, etc. If they just swapped AMD in for Intel, they wouldn't need to leave x86 land. And if they bought AMD outright, they'd control the cost of chips, too.
Re:Apple needs this not the $700 more intel cpu! (Score:4, Insightful)
Does AMD have an x86 license that is transferable in the case of a change of control? It could be that no one can buy AMD and retain the jewel.
Re: (Score:2)
No, AMD's x86 licensing deals are not transferable and would have to be renegotiated.
Re: (Score:3)
As I said, they won't say it, but I'm sure the non-trivial software cost has already been paid and has always been there since the first days of Mac OS X. It's not like Apple are strangers to platform changes: 68K to PPC, PPC to Intel, and maybe Intel to ARM in the (near) future.
They also had fat binaries with both PPC and x86 code at the same time, then 32-bit and 64-bit too, so don't count them out because they'd have to support two versions of everything. They've done it before and they're probably still doing it.
Re: (Score:2)
Going ARM will kill fast VMs and other x86-64-only stuff.
Re: (Score:2)
Since iPhones and iPads, iTunes and others bring in a lot more money than all Macs combined, does it really matter to Apple to lose a small percentage of Mac sales if it means lower-priced Macs with a bigger profit margin? (e.g., an Intel CPU costs $300, Apple's A11 costs $30; they lower their Mac price by $200 and still make $70 more profit than before.)
Re: (Score:2)
There's a hardware cost, but there's also a non-trivial software cost -- to maintain two compiler versions, support two versions of applications, etc.
I've shipped MacOS applications built for three different processors (PowerPC 32 bit, x86 32 and 64 bit), and lots of people ship iPhone apps for three different processors (armv7, armv7s, aarch64). Apple has no problem whatsoever supporting any number of different processors if they want to.
If Apple built an iPhone with an Intel processor for example, that wouldn't be more than five minutes time for most developers.
Re: (Score:2)
There's a hardware cost, but there's also a non-trivial software cost -- to maintain two compiler versions, support two versions of applications, etc
They already maintain the compiler, toolchain, kernel, and most of their core frameworks on x86-64, AArch32 and AArch64 and they only just stopped supporting them on x86-32. The biggest issue would be for third-party developers, but Apple hasn't had a problem with this in the past and they already support fat binaries and have infrastructure for building them in their dev tools.
If they just swapped AMD in for Intel, they wouldn't need to leave x86 land. And if they bought AMD outright, they'd control the cost of chips, too.
They already have their own in-house CPU and GPU design teams. The only thing that they'd gain from buying AMD would be an x86 license.
Re:Apple needs this not the $700 more intel cpu! (Score:4, Insightful)
Doubt Apple will buy AMD (Score:2)
Intel's R&D investments were justified by the guaranteed volume. PowerPC was a niche server (IBM) and desktop (Apple) player, in contrast
Re: (Score:2)
Actually the PASemi PA6T was very good in terms of performance and power, and it was clearly more advanced than Intel's Core2 Duo: memory controller and PCI-E on die, while Intel's needed an external chipset. It would have been perfect in a laptop
On paper. They never actually shipped any silicon. Switching to them would have been a huge gamble and would have left them in the same situation that they were attempting to avoid previously: paying 100% of the R&D costs and competing with companies that were only paying a quarter of it.
Re:Apple needs this not the $700 more intel cpu! (Score:5, Insightful)
I suspect that in the long run, Apple's plan is to replace Intel with their own custom chips. Their recent ARM SoCs don't clock as high as Intel chips, but they have been able to achieve similar performance per clock [extremetech.com] in many areas.
It's probably still a few years before they make the move to their own chips, but it seems like that's where they're going. This seems even more likely as the amount of performance needed for consumer PCs is going to remain relatively fixed, while improvements in chip design and fabrication processes make it economically possible for Apple to use their own SoCs in their notebooks or desktops, even if they can't compete with the most powerful high-end Intel or AMD chips.
Perhaps Apple will start designing products intended for the professional market that still use those high-end CPUs from Intel/AMD, but most of their customers don't require that level of power, and it's probably much more economical for Apple to use their own custom chips, especially if they have lower power draw for similar levels of performance.
Re: (Score:2)
They may not clock as high, but they're used for small battery powered mobile devices with passive cooling. They could probably clock them up quite significantly with some moderate heatsinks and fans.
Re: (Score:2)
Unless AMD has changed, Apple will never invest in an AMD processor.
The reason is simple. It's why Apple is using Intel these days after abandoning PowerPC. Besides PowerPC performance issues, Motorola and IBM both failed to deliver on their commitments - shortages of Macs were well known in the PowerPC days, because Apple just could not get their hands on enough PowerPC
Re: Apple needs this not the $700 more intel cpu! (Score:2)
Re: (Score:2)
I doubt Apple pays those prices (Score:2)
Re: (Score:2)
what a dribbling paid Intel shill (Score:2, Interesting)
Ryzen is generation ONE of a new architecture and it already slaughters Intel's entire x86 range, top to bottom. So Intel, in desperation, floods forums with FUD.
This lying dribbler, guruevi, trusts that you, the Slashdot reader, are clueless. AMD encryption instructions are much faster than Intel's. Hyperthreading gen 1 on AMD is much more efficient than Intel hyperthreading gen 8, and what the hell are 'encryption' and 'hyperthreading' 'compatibility' even supposed to mean? These are things measured in per
Re: (Score:2)
All the advanced features that (especially) VMware and other big-name vendors use either don't support AMD or don't work cross-platform, so you can't do a live migration, or you have to reconfigure your VMs to use different CPU architectures.
This may change with Ryzen but, as you said, it's the first generation; I'm not going to entrust my datacenter to something that had trouble handling a Linux kernel compile. Encryption is about half the speed on AMD vs Intel. Check the benchmark
Re: (Score:2)
Who modded this insightful - nobody serious uses AMD.
True, now, but that's a fairly recent development. The Opteron shipped in 2003. It wasn't until 2009 that Intel integrated the memory controller on die and caught up. In that period, all serious server deployments were AMD. Xeons were overpriced and underpowered in comparison.
Inaccurate Article (Score:2, Interesting)
AMD is using CPUs from week 25+ to fulfill RMAs. They have been doing additional testing in Customer Service on those CPUs; people are getting boxes that have been opened, with handwritten notes relating to this testing.
It's *not known with certainty that ALL* week 25+ CPUs are good. AMD has made no official statement on that. They sent Phoronix a tested CPU just like they have been sending to their RMA customers.
Most stores and retail sellers are still selling pre-week 25 CPUs, so those may still be impacted
Re:Inaccurate Article (Score:4, Interesting)
Engineers usually fix problems. But right now, they don't want to issue a full recall, so they still sell the old (defective) CPUs, assuming that most people run Windows on top of them.
Does any company really care about a desktop processor running a "server" OS like Linux? No.
Hell, most consumer / prosumer Intel chipsets have no drivers for W2K12 / W2K16. Tweaks exist, but not for the faint of heart.
Re: (Score:2)
Getting people a CPU that works is a fix. If it doesn't impact your workload, you don't need to worry about it.
Cost-of-risk factors into business overhead, and requires reserve cash. That means a business's profitability fluctuates--maybe 12% this year, 4% that year, average of 9% profit, once in a while you have to dump your cash coffers to take massive growth (opportunity risk) or deal with 6 straight years of billion-dollar losses (threat risk)--and they need the revenue and thus the pricing to cover
Re: (Score:2)
From what I understand, the "fixed" CPUs have the same stepping, indicating that it's a build-tolerance issue that AMD will initially solve by adding it to their quality control. Presumably they have some inventory that's already boxed, but if you RMA a CPU with this issue they'll explicitly test for it so you don't get affected a second time. I agree it's still not a guarantee you won't experience this problem, but if the cure is an RMA away it's not worse than any other defective/DOA equipment. IMHO that
Re: (Score:3)
It's also apparently only affecting Linux, so they can shelve and test the RMA units and then roll them back out as refurbished units if this proves to not affect other users. They could announce that, or they could do it quietly before announcing the issue has been resolved completely and just rely on 98% of their consumer base being Windows users building ridiculous gaming boxes and deal with "my Linux won't work" exactly as they're doing it today, although someone is going to notice the pattern in refurbished units
Re: (Score:2)
Actually, the OS can apply microcode updates as well as the BIOS, so it's possible for Linux / Windows / etc to apply the fix even if the mobo mfg says pound sand.
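You can check which microcode revision the kernel has loaded without any extra tools: on Linux it's reported per-CPU in /proc/cpuinfo. A tiny sketch:

    """Print the microcode revision Linux reports for the first CPU."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("microcode"):
                print(line.strip())  # e.g. "microcode : 0x..." (revision varies by CPU and update)
                break

(Note that for the marginality issue discussed above, the remedy was an RMA rather than a microcode update.)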
What is an average kernel build time? (Score:5, Interesting)
For those of us that have not actually built a kernel, is 36 seconds astonishingly fast? A little faster? A totally random number with no meaning whatsoever?
Maybe some of you that do build kernels every once in a while could share your times along with specs for your rig.
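For anyone who wants to put their own number next to Phoronix's, a minimal, repeatable measurement could look like the sketch below (in Python for portability; it assumes a kernel source checkout at ~/linux -- a placeholder path -- with make and a compiler installed, and uses defconfig so runs are comparable):

    #!/usr/bin/env python3
    """Time a default-config Linux kernel build. Rough sketch, not a rigorous benchmark."""
    import os
    import subprocess
    import time

    KERNEL_TREE = os.path.expanduser("~/linux")  # placeholder: path to a kernel source tree
    JOBS = os.cpu_count() or 1                   # one job per hardware thread

    def run(*args):
        subprocess.run(args, cwd=KERNEL_TREE, check=True)

    run("make", "mrproper")   # start from a clean tree
    run("make", "defconfig")  # default config, so numbers are comparable across machines
    start = time.monotonic()
    run("make", f"-j{JOBS}")
    print(f"Build took {time.monotonic() - start:.1f}s with -j{JOBS}")

Run it a few times and take the median; the first run also warms the disk cache.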
Re: (Score:2, Insightful)
About 20 years ago, my 486DX took 2.5 days to build a kernel.
Re: (Score:2)
Re: (Score:3)
It's most likely I/O bound too.
C (and far worse, C++) compilation is incredibly I/O bound because of the insanely archaic include system (and preprocessor too). I'm not even talking out of my ass. It's such a problem that Facebook and (IIRC) Google have both come up with custom solutions to try and reduce compile times, because they're so insanely taxing on their day-to-day operations.
Fun side note: Andrei Alexandrescu and Walter Bright (creators behind the D language) were directly involved in helping Facebook
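The include-expansion cost is easy to see for yourself: compare raw versus preprocessed line counts for one translation unit. A small sketch (assuming gcc on the PATH; the file name is a placeholder):

    """Show how much text #include pulls into a translation unit."""
    import subprocess

    SOURCE = "main.c"  # placeholder: any C file with a few standard #includes

    with open(SOURCE) as f:
        raw_lines = sum(1 for _ in f)

    # gcc -E runs only the preprocessor and writes the expanded source to stdout.
    pre = subprocess.run(["gcc", "-E", SOURCE], capture_output=True, text=True, check=True)
    print(f"{SOURCE}: {raw_lines} lines raw, {len(pre.stdout.splitlines())} after preprocessing")

A handful of standard headers can easily turn a 50-line file into thousands of preprocessed lines.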
Re: (Score:2)
In the case of Linux it would be more CPU bound, because as a big project it has many files that can be compiled in parallel. A 16-core, 32-thread system can work on a lot of .c and .h files at a time.
Re: (Score:2)
C (and far worse, C++) compilation is incredibly I/O bound because of the insanely archaic include system (and preprocessor too)
Not on any reasonably spec'd machine. After the first time a header is read, it's going to be in the buffer cache and so the next compilation job that needs them gets them out of RAM (unless you have such a tiny amount of RAM that they're evicted already, but even huge projects only have a few hundred MBs of headers and a single compiler invocation will use more RAM than that). Watching our build machines, they never read from the disk during big C/C++ compile jobs.
The bottleneck for C/C++ is that you e
Re: (Score:3)
I only remember needing to turn it off for games which were clock locked.
Re:What is an average kernel build time? (Score:5, Funny)
For those of us that have not actually built a kernel, is 36 seconds astonishingly fast?
I did a little checking and here's what I found. It's faster than 37 seconds, but not as fast as 35 seconds.
Re: (Score:2)
How many times did you run the test?
Re:What is an average kernel build time? (Score:5, Insightful)
That is incredibly helpful insight. You should work at Gartner.
Re: (Score:2)
Re: (Score:2)
It's faster than 37 seconds, but not as fast as 35 seconds.
So in other words, it's merely average.
Re: (Score:2)
It's theoretically meaningful, but only if it's a measurement others can replicate. In the olden days, I'd build my own kernel every time a new release came out. I had to go through and select all the drivers and features I wanted, but most of the kernel code in the tarball never got compiled. Assuming he's taking the default source config, and assuming that the configuration process doesn't go out and automatically detect a bunch of drivers and automatically select them (this would cause different platforms
Re: (Score:2)
Re: (Score:2)
How long before there's a GRUB2 module (installed by default on Gentoo) that will recompile the kernel from source at every boot? And how long after that until SystemD integrates a compile-on-access setup for all managed services?
Just imagine: every time you turn the system on, everything is recompiled from source, and you don't even notice! Every time something connects to a socket over the network, the service is compiled fresh! :)
Re: (Score:2)
How long before there's a GRUB2 module (installed by default on Gentoo) that will recompile the kernel from source at every boot?
It exists already. This was one of the demos for tcc (which didn't optimise much, but compiled really fast): it would compile the kernel from the loader and then boot.
Re: (Score:3)
Re: (Score:3)
I remember it taking several hours, but that was 2.0.3x running on a 386dx40 with 8MB of RAM... now get off my lawn...
Re: (Score:3, Informative)
Depends a lot on how you configure it. If it's 36 seconds to compile a stripped-bare kernel, it's mildly impressive. If it's 36 seconds to compile a complex kernel with tons of options, modules, etc., it's very impressive. I used to compile kernels with options for a home computer in 10-15 minutes in the late '90s/early aughts (486-PIII days). A complex config might take hours. No doubt modern kernel compilation is more complex.
Re: (Score:2)
That is like me saying my new database program takes 15 seconds to do a certain SQL select statement against a 5 million row table. It means nothing until I tell you that the exact same statement running on Postgres on the same box takes 38 seconds.
Re: (Score:2)
For reference, a 386SX-16 would take about 4-1/2 hours to build 2.x kernels back in the mid-'90s.
Can Confirm (Score:4, Informative)
Have a Ryzen 7 1700 that was affected by the segfault issue. Contacted AMD; they wanted a pic of my case to make sure it wasn't a thermal issue. Then asked me to try some different vcore/vsoc voltages and retest. When I still had the problem they shipped me out a brand-new-in-box CPU, and it's been working perfectly.
AMD support is bloody stellar.
Re:Can Confirm (Score:5, Interesting)
Stellar support would have meant AMD:
- Had an understanding of the root cause of the bug, and told the public what it was and how they solved it
- Told everyone how to distinguish affected CPUs from unaffected CPUs (without a multi-hour run of some test script that not AMD, but desperate affected buyers, implemented and made available)
- Recalled all the defective CPUs and replaced them with working ones, including CPUs sold as part of computers
What you describe is rather the bare minimum they owe customers going through lots of trouble due to a defective product they were sold.
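For reference, the community test scripts mentioned above boiled down to looping heavy parallel compile jobs until one crashed. A minimal sketch in the same spirit (not the actual script; the source path and iteration count are placeholders):

    """Loop parallel builds and stop if a child process dies with SIGSEGV."""
    import os
    import signal
    import subprocess

    SRC = os.path.expanduser("~/linux")  # placeholder: any large C project will do

    for i in range(100):  # roughly "run it for hours"; adjust to taste
        subprocess.run(["make", "-C", SRC, "mrproper"], check=True)
        subprocess.run(["make", "-C", SRC, "defconfig"], check=True)
        result = subprocess.run(["make", "-C", SRC, f"-j{os.cpu_count()}"])
        # A negative return code means the child was killed by that signal.
        if result.returncode < 0 and -result.returncode == signal.SIGSEGV:
            print(f"make was killed by SIGSEGV on iteration {i}")
            break
        if result.returncode != 0:
            print(f"build failed on iteration {i}; check the output for gcc segfaults")
            break
    else:
        print("no failures observed")

On affected chips the characteristic symptom was gcc (or another child process) segfaulting at random points under sustained parallel load.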
Re: (Score:2)
You call that good support? You are affected by a known issue, you have to call them and then they still make you jump through a bunch of hoops to get a replacement part?
Threadripper != Ryzen (Score:4)
Yes, they are very similar, but Threadripper is their consumer version of the upcoming Xeon competitor.
AMD admitted it did little testing on the regular Ryzen line, as most consumers would be running Windows anyway, and admitted that in the future they will test this. FreeBSD is also impacted by the same bug, where things get out of order and corrupted under heavy loads.
Threadripper has more cache and a different cache and memory design, as it supports NUMA and non-NUMA for server-oriented loads, and this is where the bug is.
Unfortunately, this makes me very cautious about purchasing an AMD system, as AMD does have a reputation of being bargain grade. But it is a brand-new architecture from scratch. I may be open to Ryzen2 or Threadripper2 after some of the bugs are worked out.
Re: (Score:2)
This bug is precisely why I just ordered a dual Xeon system instead of a Ryzen Threadripper. I can't risk it.
The Xeons for the same price are slower clocked than the 1950X, but the build load is mostly thread and IO bound, so that's not as big a worry as a thread dying.
Re: (Score:2)
This bug is precisely why I just ordered a dual Xeon system instead of a Ryzen Threadripper. I can't risk it.
The Xeons for the same price are slower clocked than the 1950X, but the build load is mostly thread and IO bound, so that's not as big a worry as a thread dying.
The newest Ryzens already don't have the issue, and if you see it, you can get a new one through AMD QA. Since it only triggers on Linux, it is probably a small subset of customers affected, though we would all have loved for them to explain exactly what happened.
Can you still buy one with the bug? (Score:2)
I am building a heavily threaded Windows program that does data management. One of the things it does is break SQL queries into small pieces and run them in parallel on multi-core boxes. I have queries that run more than twice as fast (or is that less than half as long?) as the same queries on PostgreSQL v9.5 running on the same box. I am
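The parent's approach -- range-partitioning one big query and running the slices concurrently -- looks roughly like the sketch below (assuming PostgreSQL with the psycopg2 driver; the table and column names are placeholders, not from the original post):

    """Range-partition a COUNT over a big table and run the slices in parallel."""
    from concurrent.futures import ThreadPoolExecutor

    import psycopg2

    DSN = "dbname=test"  # placeholder connection string
    CHUNKS = 8           # one slice per worker

    def count_range(bounds):
        lo, hi = bounds
        # One connection per worker: psycopg2 connections shouldn't be shared across threads.
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM events WHERE id >= %s AND id < %s", (lo, hi))
                return cur.fetchone()[0]
        finally:
            conn.close()

    def parallel_count(max_id):
        step = max_id // CHUNKS + 1
        slices = [(i * step, (i + 1) * step) for i in range(CHUNKS)]
        # Threads suffice here: the work happens server-side and the driver
        # releases the GIL while waiting on the network.
        with ThreadPoolExecutor(max_workers=CHUNKS) as pool:
            return sum(pool.map(count_range, slices))

Whether this beats a single query depends on the database's own parallelism; PostgreSQL 9.5 (mentioned above) predates the parallel query execution that landed in 9.6, so hand-partitioning could plausibly win there.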
Re: (Score:2)
So yes, if you buy a Ryzen now you could still get one that is affected. Better make sure you run the test scripts for this bug for multiple hours before assuming you got a flawless CPU.
Bought the CPU inside a computer? You are screwed. (Score:2)
Which basically means they are screwed: Computer dealers totally lack the knowledge or incentive to even understand the nature of this bug, especially given the fact that AMD has never explained which of their CPUs are affected and which are not.
So at best, your computer dealer will give you another computer in r
Headline: Early Adopters Have Problems (Score:2)
AMD vs Intel on linux (Score:2)
I've been avoiding AMD-based CPUs/chipsets for a while now, mostly down to the fact that it's just a mistake to use AMD for running Linux (terrible GPU drivers). Any Linux users out there care to comment on how AMD fares against Intel these days?
Thanks
In other words, (Score:2)
There's a manufacturing problem. To solve it, they've come up with a stress test that gets them enough CPUs known to pass the provocation test, so that they can ship those to people complaining about that problem. CPUs being sold continue to be as buggy as before, since on Windows such bugs get excused as video card shittiness or w/e.
Re: (Score:2)
Finally (Score:2)
I can install Gentoo in under a day.
Re: (Score:2)
Came here for this, was not disappointed. That was my first thought too. "When I get one of those in another year or two, maybe I should look at Gentoo again..." 7-8 years ago I got tired of compiling Gentoo all of the time. A 36 second kernel compile would definitely pique my interest in Gentoo once again. Gentoo can be blindingly fast, and I'm wondering exactly how fast with a CPU like this. If the kernel compile goes that fast, how about the rest of it?
Ryzen microcode patch as new cpu with mcode update (Score:2)
Phoronix reported that AMD has debugged the problem and that all CPUs (1600-1800X, for example) manufactured after week 30 (30 July) will not have this problem.
There is a date of manufacture engraved on the CPU cover.
Re:Stable at last! (Score:5, Funny)
My AMD 80386 DX-40 was stable.
Re: (Score:2)
Re:Stable at last! (Score:4, Informative)
What the hell does heat have to do with stability?
I've been running AMD processors since the X2. And X4... and AMD FX-8370. All of which run 100% fine to this day. (Even though I've had more than 2 motherboards die in the last couple years, the same CPUs keep running fine.)
My childhood friend ran an AMD Athlon 64 when they first came out.
I used an AMD K6-266 when I was a teenager, and have numerous 486's (and even a 586 IIRC) lying around that still run. I even have a fucking AMD 8088 in my Compaq "Portable" (36 LBS!) built in 1986.
And I'm not even a complete AMD fanboy. I'm a fanboy for my wallet. I've run nVidia videocards ever since 3DFX and my Voodoo 2 and 3 went tits up.
But as for unreliable, I have no fucking idea what you're talking about. And there are tons of hot Intel CPUs out there. Pentium 4 HT's ran at a whopping 110 Watts back in the year 2000. My FX-8370 runs at... 125 W. And the Core i7 3970X Extreme Edition runs at... 150W. Now, you can cite the FX-9570 at 220W but that was a joke CPU (Google: "outlier") using a dated architecture to keep a little trickle of money coming into AMD from die-hard enthusiasts. It cost over $100 more, and only got like 15% more throughput than my 8370, while consuming another 100 watts of power.
So yeah, AMDs typically run a little hotter because they have to make up for their worse fab technology (of which Intel is the supreme leader). But as for super hot, or being unreliable... you better pull some citations out of your ass.
Re: (Score:2)
What the hell does heat have to do with stability?
A LOT back in the day. There was no protection system scaling back processor speed when temperatures were exceeded. The first sign of high temperature is stability issues. Hell even to this day overclocking is still often a function of how much heat you can remove from a chip vs its stability at full load.
The problem was, back in the day, not everyone wanted a vacuum cleaner in the study. Yeah, my friend's Athlon 800 was perfectly stable. Mine however was always on the edge. Something to do with not wanting a
Re: (Score:2)
No shit. Every CPU back in the day was hot enough that if you got three workstations running them in a room, you could turn the heat off.
Re: (Score:2)
No shit. Every CPU back in the day was hot enough that if you got three workstations running them in a room, you could turn the heat off.
Welcome to nostalgia. Truth is, the Athlon/Athlon XP/Athlon 64 topped out at 72/79/89W, with a few FX processors going up to 125W, roughly the same as a modern-day mainstream CPU. It was however a *huge* power hike from the 34W power consumption of a Pentium 3, and Intel's Netburst was even worse, but in the gigahertz race power consumption was completely ignored. If the workstations aren't running as hot or hotter today, it's because they're idle...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No shit. Every CPU back in the day was hot enough that if you got three workstations running them in a room, you could turn the heat off.
Sure, for x86. Not so for Motorola.
Re: (Score:2, Insightful)
Re: (Score:3)
I just decommissioned my Athlon XP 2200+ 2 years ago. It had been in operation for 13 years with the original motherboard and processor. Rock-solid stability on Linux, 3 months between BSODs on Windows XP. Used a Vantec heatsink--nothing exotic. Oh, and I beat the hell out of that thing--I gamed on it all through college, and then used it for a home server.
Decommissioned because the motherboard died. Capacitors finally wore out and burst after 13 years... Processor still worked
Re: (Score:3)
Back in the day, most people agreed that the P4 performed better and ran much cooler than the Athlon XP. You usually needed very good cooling to run any interesting workload on an Athlon XP if you wanted it to be stable.
Who modded this up?
The Athlon XP was light years ahead because it had an integrated memory controller on the CPU and not in the chipset on the board. It also had more FPU units and didn't have long scalar pipelines with terrible memory latency with Rambus RAM like the Pentium IV.
I call BS as well, as the P4 was for the clueless who bought Dells and bought Intel for brand name only. The only thing good about it was that the extreme edition in 2004 had hyperthreading. Meanwhile AMD had the Athlon MP for dual-socket setups
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No way. Pentium M or Intel Core sure. But P4 (especially the 'Presc-hot' version) of the 'Netbust' architecture were anything but that.
Re: (Score:2)
Re: (Score:2)
I do believe there was an 80386 DX-40. Intel's topped out at 33MHz, but AMD had a 40MHz part. IIRC.
Re: (Score:2)
Seriously, do people not even bother using Google anymore? Here's the AMD 80386 DX-40 [mixeurpc.free.fr].
My A10-5800k is rock solid (Score:2)
WTF? (Score:2)
My Phenom II X2 550 BE, which had two unlockable cores in addition to the two "official" ones, has been running rock-solid in quad-core mode since I built the system, and does fine with the stock AMD cooler. Yeah, I lucked out with the extra cores, but it's been running 24/7, aside from occasional hardware changes and OS updates, for at least six years now.
My only regret was not springing for 8 GB of ECC memory instead of 4 GB. At the time, I could only get 4 x 1GB sticks of ECC RAM at a reasonable price. B
Re: (Score:2)
Aside from that, a quick search for "FX-8320E problems" on Google gets me a ton of results showing that many folks have problems with this CPU... OK, many because they try to overclock it. But some folks get the problem that only 2 out of 4 cores are usable...
Yeah, nice product man.
Re: (Score:2)
My non-OCed FX-6100 has been rock-stable for 4.5 years.
Re: Stable at last! (Score:2)