
Why Linus Torvalds Prefers x86 Over ARM (pcworld.com) 150

Linus Torvalds answered a question about his favorite chip architecture at the Linaro Connect conference. An anonymous Slashdot reader quotes PCWorld: People are too fixated with the instruction set and the CPU core, Torvalds said. But ultimately "what matters is all the infrastructure around the instruction set, and x86 has all that infrastructure... at a lot of different levels. It's open in a way that no other architecture is... Being compatible just wasn't as big of a deal for the ARM ecosystem as it has been traditionally for the x86 ecosystem... I've been personally pretty disappointed with ARM as a hardware platform, not as an instruction set, though I've had my issues there, too. As a hardware platform, it is still not very pleasant to deal with."
You can watch the whole half-hour conversation on YouTube. My favorite part is where Linus candidly acknowledges that "sometimes my grumpiness makes more news than my being nice... 99% of the time I'm a very happy manager, and I mentally pat people on the head all the time. That maybe then highlights the times when things don't work so well a bit more."

Comments Filter:
  • by Anonymous Coward

    Well... he has a point on all fronts.

    1) x86 is so backward compatible it's... grand. Except for legacy bugs to push forward

    2) ARM is, or rather, was, not afraid to put efficiency above complete and total backward compatibility

    3) He gets a whole lot of news for being an ass. And that may help /. because I always come here to see the comments after news of a Linus blowup. It's awesome, coming from a multi-disciplinary background where job A's culture is nothing like job B... but oh, would it be great to comb

    • by AmiMoJo ( 196126 ) on Sunday October 09, 2016 @05:15PM (#53043801) Homepage Journal

      As Linus says, the main issue with ARM is not the CPU core but all the other stuff you need to make a computer. On x86 most of it has become standardized, even if the standards are terrible. On ARM manufacturers do their own thing and produce a "board support package" (BSP) that provides semi-standard interfaces to it, but of course it's a pain for an open OS like Linux to deal with and many of them are not interested in providing enough documentation for native drivers to be written.

      ARM is kind of a pain in the arse to do low level development for due to the BSP stuff, but on the other hand in the low power/low cost segments x86 isn't even a player. You can get low end ARM parts for less than a Euro. If they were not such a bugger to work with they would be displacing 8 bit parts at a much greater rate, but 8 bit's simplicity keeps it popular.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        If ARM had a BIOS with PnP it'd be most of the way to solving this.

        That little HAL we all ignore goes a long way.

        • by Anonymous Coward on Sunday October 09, 2016 @06:00PM (#53044001)

          Plug and play along with BIOS are not the words you want to use. Standard firmware interfaces and IO/memory maps would be more appropriate. We don't want a repeat of the bad old days.

        • Pretty much all vaguely modern ARM SoCs have FDT in the firmware (some of the server-oriented ARMv8 ones use ACPI instead). This provides the default memory maps, the locations of devices, the names of the drivers needed to use those devices, and so on. They also all typically now use an ARM GIC (earlier ARM SoCs often used incompatible vendor-provided interrupt controllers, which made life a bit more painful) or, if not the ARM implementation, something that exposes the same interfaces.

          ARM has done a
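
          To make that concrete, here is a minimal C sketch of what an OS or bootloader can pull out of that FDT blob using libfdt; the "vendor,example-uart" compatible string and the single address/size cell layout are made-up assumptions for illustration, not from any real board:

          /* Sketch: find a device node in a flattened device tree and read its
             "reg" property (base address and size). Assumes one address cell and
             one size cell purely for brevity. */
          #include <stdio.h>
          #include <libfdt.h>

          void show_uart(const void *fdt_blob)
          {
              int node = fdt_node_offset_by_compatible(fdt_blob, -1, "vendor,example-uart");
              if (node < 0) {
                  printf("no such node: %s\n", fdt_strerror(node));
                  return;
              }

              int len = 0;
              const fdt32_t *reg = fdt_getprop(fdt_blob, node, "reg", &len);
              if (reg && len >= 2 * (int)sizeof(fdt32_t))
                  printf("uart at 0x%x, size 0x%x\n",
                         (unsigned)fdt32_to_cpu(reg[0]), (unsigned)fdt32_to_cpu(reg[1]));
          }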

      • by thesupraman ( 179040 ) on Sunday October 09, 2016 @05:34PM (#53043891)

        As usual, Linus is talking 'general' but thinking focused.
        What he is actually talking about is high-level computers (which these days includes smartphones, tablets, etc., though there is a little more crossover there).
        Where he has no knowledge, understanding, or consideration is lower-level applications - i.e. embedded.
        ARM's flexibility, and its tendency to closely integrate hardware at the low level, makes it a fantastic micro CONTROLLER implementation in general.
        The STM32 series are a great example of this, and it is an area where Intel seems to have lost the plot. Despite Intel gushing money into such areas from time to time, very, very few would touch them with a barge pole. Their IO infrastructure is just TOO complex and unnecessary for such applications (no one there uses PCIe, etc. Even USB tends to emulate a serial device).

        In the mid range - i.e. cellphones, tablets, etc. - the ARM chip sellers' integration is great, but their documentation is TERRIBLE, and they do not seem to understand that open hardware specifications are gold, or to realise that sharing that knowledge gets a LOT more developers on side (I am looking at you Allwinner, Rockchip, Amlogic, HiMedia, MediaTek, et al.) - or possibly they hide it because of IP fears... who can be sure. There are vendors without such problems, however (generally but not exclusively the non-Chinese chip makers).

        In the high end - PCs, servers, etc. - well, that's a mess right now. Perhaps AMD etc. will help sort it out, or perhaps not.

        In the end, ARM makes sense in a whole lot of niches, just not really the ones Linus focuses on - his primary focus has always been large server and workstation hardware, an area ARM is only just starting to overlap into in a small way.

        So, what he says is factual in one area, but that area is a niche to ARM and a stronghold of Intel, so is it really a surprise?

        • He's indeed talking about desktop and mobile devices though, where this is an issue.

          • by ttucker ( 2884057 ) on Sunday October 09, 2016 @11:03PM (#53044927)

            It is actually also a major problem for many embedded devices. Have you really looked at the DD-WRT project lately? It is completely dead, largely due to the lack of a common platform. My embedded router is in the rubbish heap now; we have switched to an x86 device running normal Linux...

            • Re: (Score:2, Insightful)

              by Anonymous Coward

              "Have you really looked at the DD-WRT project lately? It is completely dead, largely due to the lack of a common platform."

              That's a tiny part of it. The _other_ part of it is that (fancy management GUI aside) OpenWRT kicked the everloving shit out of DD-WRT in terms of capabilities and code quality for _years_. DD-WRT was _dead_ eight years ago... it just couldn't compete with OpenWRT.

        • Comment removed based on user account deletion
      • by Anonymous Coward

        That is it right there. The BSP issue is huge. Everyone wants to be 'the one everyone else follows'. PC already had that moment in the mid 80s. If you want to be a PC you have to expose particular things. If you want to use ARM everything is slightly different all over the place.

        It is one of the big problems I have with IoT in its current state. No one wants to follow; everyone wants to lead. So you end up with hundreds of fragmented systems. Systems that fall by the wayside once interest from the com

      • You can get low end ARM parts for less than a Euro.

        But how indicative is this of the way things must be? The 386 is over 30 years old. 686 procs are 20 years old. All we need is one company who can envision a market segment for low cost x86 (something like Raspberry Pi x86?) so that someone will put in the legwork required to develop a royalty-free or low-royalty product.

      • by SumDog ( 466607 )

        I'm struggling with Armbian and ArchARM on a ClearFog Pro right now. Wi-Fi drivers like to have all kinds of weird problems. I can't just cross-compile a newer kernel to see if the problem is fixed, because the kernel for the ClearFog has a lot of specific kernel patches, even though ARM boards support the standard PCIe and USB buses.

        If I had to do this again, I would have tried to find a small x86/Atom style board. That way I could use Grub/EFI and the tools I'm familiar with.

        • by ruir ( 2709173 )
          And that is exactly why I have not bought a ClearFog/Omnia router. I also bought a Lamobo R1, which nowadays runs a generic kernel; I ended up physically cutting off the Wi-Fi chip because Realtek sucks big time, and I use it as my home server. For Wi-Fi, an Archer C2 with OpenWRT is doing a great job for much, much less than a ClearFog Pro.
      • As Linus says, the main issue with ARM is not the CPU core but all the other stuff you need to make a computer. On x86 most of it has become standardized, even if the standards are terrible. On ARM manufacturers do their own thing and produce a "board support package" (BSP) that provides semi-standard interfaces to it, but of course it's a pain for an open OS like Linux to deal with and many of them are not interested in providing enough documentation for native drivers to be written.

        ARM is kind of a pain in the arse to do low level development for due to the BSP stuff, but on the other hand in the low power/low cost segments x86 isn't even a player. You can get low end ARM parts for less than a Euro. If they were not such a bugger to work with they would be displacing 8 bit parts at a much greater rate, but 8 bit's simplicity keeps it popular.

        The thing is, a PC is really just one hardware design. Memory is in one fixed spot in the memory map. I/O is in the same spot (in the IO bus, since it's x86... but even so..).

        It's at the point where you could take any x86 based PC and program it because you know where all the components are. If you need to talk to the VGA adapter, well, the framebuffer is always in the same location.
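
        For instance, the classic fixed-address case is the VGA text buffer. A minimal freestanding C sketch, assuming an identity-mapped or real-mode-style environment such as a toy kernel (not something a normal userspace program can do):

        /* "The framebuffer is always in the same location": on any IBM compatible,
           the VGA text-mode buffer sits at physical address 0xB8000, 80 columns wide,
           one 16-bit cell per character (low byte = character, high byte = attribute). */
        #include <stdint.h>

        static volatile uint16_t *const vga_text = (volatile uint16_t *)0xB8000;

        void vga_putc(int row, int col, char c)
        {
            vga_text[row * 80 + col] = (uint16_t)((0x07 << 8) | (uint8_t)c);  /* 0x07 = light grey on black */
        }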

        And we have to admit, what we call "x86" really is "IBM PC Compatible" because there were many x86 designs in the early days that were not compatible. We just happened to base our modern PC design off of what is now a 30+ year old PC design. Heck, I think Intel emulates the A20 Gate functionality on the CPU (a design leftover from the move from 8086 to the 80286) - there is a pin that basically states the value of the gate.

        Add in the other peripherals that are basically identical like keyboard controllers, DMA controllers (not used anymore), etc, and you have what is effectively just a single hardware platform. It doesn't matter if you have Intel or AMD or nVidia graphics - at a basic level, VGA mode works identically on all of them.

        ARM, on the other hand, isn't a monolithic design - it's a CPU core and people attach peripherals to it to meet the design requirements. There is no universal keyboard controller for ARM, because one doesn't exist - some devices have no keyboards and don't need it, others have a full PC keyboard, and yet others still have a basic one able to scan 16 keys in a 4x4 matrix.

        The biggest difference is that most x86 designs are not SoC based - so your options aren't as fixed. If you want a different graphics chip, it's usually external, etc. But for ARM, they are SoCs and thus everything is combined to form an almost complete system in a single chip - no CPU, northbridge, IO controller (southbridge), etc. type PC design.

        And lest we forget, x86 can be incompatible - Intel made a few SoCs that run fine on Linux, but cannot run Windows at all because Windows requires things a certain way, which was not provided. Even today, Intel SoCs often have a "Windows compatibility" block that contains peripherals and other things required for Windows support.

        In the end, a PC is like a car - it has 4 wheels, a steering wheel, a transmission, etc, and they all basically drive the same - you need a key, you need to start the engine or activate the car via the ignition (for EVs or hybrids), apply brakes, release the parking brake, shift transmission into reverse, then back out of your spot. Then put the transmission into drive, press down on the accelerator and off you go.

        ARM would be more like Caterpillar or Briggs and Stratton - they sell the engine, which forms the core of whatever, but the final vehicle may not be a car. It may have treads instead of tires, use a differential control mechanism, etc. Or the vehicle's movement may not be related to the engine at all

      • by Lennie ( 16154 )

        of course it's a pain for an open OS like Linux to deal with

        Microsoft has the exact same problem.

    • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Sunday October 09, 2016 @05:21PM (#53043835)

      Well... he has a point on all fronts.

      1) x86 is so backward compatible it's... grand. Except for legacy bugs to push forward

      2) ARM is, or rather, was, not afraid to put efficiency above complete and total backward compatibility

      Except what he was really talking about was x86-based "IBM-compatible personal computers", which have, in at least some layers other than the instruction set, a lot of similarity, vs. ARM-based {smartphones, tablets, embedded systems, etc.}, where everything other than the CPU may be significantly different from system to system.

      That has nothing to do with the instruction sets, it has to do with the fact that x86 got its big boost from the IBM PC and the clones of it, which were all pretty similar machines so that MS-DOS and its successors would Just Work on them, while ARM got its big boost from phones/tablets/embedded boxes, where the vendor supplied some or all of the OS, and they didn't care much about having to tweak the OS for the next machine.

      • by Kjella ( 173770 )

        That has nothing to do with the instruction sets, it has to do with the fact that x86 got its big boost from the IBM PC and the clones of it, which were all pretty similar machines so that MS-DOS and its successors would Just Work on them, while ARM got its big boost from phones/tablets/embedded boxes, where the vendor supplied some or all of the OS, and they didn't care much about having to tweak the OS for the next machine.

        Not to mention the fact that computers have always had all sorts of expansion cards and peripherals so you needed a general system to discover what you have installed right now and what it's capable of, as well as a layer of OEMs mixing and matching. A phone comes as one fixed hardware package, often a complete system on a chip. You can get away with a blob that assumes that's the way everything is and unlike PCs it seems acceptable that they go out of support after a few years.

      • Seeing a computer as a mini-network is a change that needs to happen. Having mini ARM cores doing things like coordinating I/O and graphics, so that all the main processor sees is somewhere to send instructions in a standard language (kind of like Vulkan is doing with graphics, and kind of like PostScript in some ways), would let all that 'board support' stuff be moved away from the main CPU, and all the complex compatibility and driver code be isolated to a separate, low-power core. That could be done with both

    • by ruir ( 2709173 )
      Your first point is spot on. x86 has always taken backwards compatibility to the point of idiocy; a big part of the chip is dedicated to compatibility with the past. Similarly, a big part of ARM processors is dedicated to video instructions.
      The Intel architecture has a horrific trend where security is concerned: you cannot do away with firmware signed by Intel. Similarly, on some ARM platforms, namely the Raspberry Pi, you suffer from the exact same problem.
      It is virtually impossible at the moment to design a product based in
    • by Bengie ( 1121981 )

      2) ARM is, or rather, was, not afraid to put efficiency above complete and total backward compatibility

      They were only "highly efficient" because they were not high performance. They've tried to make CPUs as powerful as x86 cores, but those consumed much more power and were still quite a bit weaker. This has been their weakness in the datacenter: all of the power savings are lost to needing more cores and systems. Great for embedded-type systems, or possibly low-end desktops, but horrible for most high-end server workloads.

    • Well... he has a point on all fronts.

      1) x86 is so backward compatible it's... grand. Except for legacy bugs to push forward

      2) ARM is, or rather, was, not afraid to put efficiency above complete and total backward compatibility

      3) He gets a whole lot of news for being an ass. And that may help /. because I always come here to see the comments after news of a Linus blowup. It's awesome, coming from a multi-disciplinary background where job A's culture is nothing like job B... but oh, would it be great to combine the two! Billion-dollar sovereign debt deals and computer science. I wanna be able to yell at these geniuses like those assholes yell at those assholes. Uh... makes me get all warm down there.

      1. Some of the bugs were sidestepped or avoided when AMD first moved the ISA from 32-bit to 64-bit. A lot of the CPU efficiencies learned from the CISC vs. RISC era were implemented on the 64-bit-only side of the instruction set, so that if at any point in the future a CPU drops 32-bit support and becomes 64-bit only, it will either be an out-and-out RISC CPU, or something real close.

      2. The issue is more that every ARM vendor is at liberty to implement a CPU any way they see fit. Which is great in terms of au

  • Fitting (Score:3, Insightful)

    by GrahamJ ( 241784 ) on Sunday October 09, 2016 @05:15PM (#53043795)
    I guess it's fitting that he doesn't realize patting someone on the head is a condescending gesture.
    • Re:Fitting (Score:5, Insightful)

      by ShanghaiBill ( 739463 ) on Sunday October 09, 2016 @05:21PM (#53043829)

      I guess it's fitting that he doesn't realize patting someone on the head is a condescending gesture.

      That is culture dependent. In some cultures patting, or even touching, someone's head is offensive. In other cultures, it means nothing.

      I have met Linus a few times on a person-to-person level, and he was always friendly and considerate. Tove is also a very nice person.

      • by Anonymous Coward

        > Tove is also a very nice person.

        You're just saying that because she is a six-time Finnish karate champion.

    • I guess it's fitting that he doesn't realize patting someone on the head is a condescending gesture.

      There, there. You know you're special, right? That's all that matters.

    • by wjcofkc ( 964165 )
      Try giving the a-okay sign in Argentina. That is a surefire way to kill a deal.
      • I'm not even Argentinian but that looks like drawing a butt hole with your fingers and that's not a very good idea. Though in France, it might mean a derogatory "zero", which is slightly better than a butt hole but still sucks.

    • He is European so he probably understands irony.
    • by Anonymous Coward

      I guess it's fitting that he doesn't realize patting someone on the head is a condescending gesture.

      It's even more fitting that people who have decided to hate Linus, regardless of what he does or says, will indeed find fault in anything he does or says, even actively misconstruing good things to make them look bad.

  • by nyet ( 19118 ) on Sunday October 09, 2016 @05:19PM (#53043815) Homepage

    There is no reason to coddle developers who write shit code.

    And make no mistake, ARM driver developers employed by ARM vendors are truly horrifyingly bad programmers.

    • by Anonymous Coward

      There's a lot of truth in that statement. To make matters worse, I've seen an appreciable number of cases where perhaps 75% of the code for something is either okay or actually pretty nice, but there are also really shitty chunks that appear to have been written by people under the influence of a random assortment of hard drugs sprinkled across the whole codebase. This can make further work quite "challenging" sometimes. -PCP

      Captcha: simplify

      • there are also really shitty chunks that appear to have been written by people under the influence of [...] -PCP

        Indeed.

      • I've seen an appreciable number of cases where perhaps 75% of the code for something is either okay or actually pretty nice, but there are also really shitty chunks that appear to have been written by people under the influence of a random assortment of hard drugs sprinkled across the whole codebase.

        See my other comment; the ARM environment more or less drives that: you've got 75% that's common, and the remaining 25% is semi-documented special-snowflake crap that you have to reverse-engineer from partial docs in order to get it to work.

    • by arglebargle_xiv ( 2212710 ) on Sunday October 09, 2016 @07:02PM (#53044231)
      It's not just the code, it's the hardware environment as a whole. Every single freaking ARM SoC is a custom special-snowflake device with its own special-case add-on IP cores, Chinese-menu instruction set (we'll do this extension, and that one, but not that one over there, and the config register read that tells you whether it's available is privileged so it'll trigger an exception if you try and read it), undocumented memory-mapped crap, or a 1,000-page manual with partial documentation which in any case changes completely if you order an XYZb rather than an XYZa even though it's the same family from the same manufacturer. Just the perfect environment for vendor lock-in, but terrible for devs.
    • Your post is still entirely true if you delete all occurrences of the word 'ARM'.
  • by Lisandro ( 799651 ) on Sunday October 09, 2016 @05:49PM (#53043961)

    Android suffers this very issue, where you end up needing a bytecode VM (Dalvik) just to ensure compatibility across devices. This doesn't mean that the ARM instruction set isn't a joy to work on though.

    • by Alomex ( 148003 )

      Dalvik is last generation. The latest iteration is the Android Runtime (ART), which compiles once at installation time, for the specific device.

      • Same deal. The problem is deeper than just applications though - this is also the reason why you can't have a generic Android installer for multiple platforms.

    • Android suffers this very issue, where you end up needing a bytecode VM (Dalvik) just to ensure compatibility across devices.

      There's no need for a VM to isolate differences between 32-bit ARM CPUs; if you want to support both 32-bit and 64-bit ARM with the same binary, or support ARM and x86 with the same binary, a bytecode interpreter/JIT is one way to do that, but you don't need it to support two machines using the same CPU or using compatible CPUs, and it's not the only way to handle it (you could do fat binaries, as Apple does in iOS, for example).

      The issue in question is not with the ARM CPU cores, it's with the stuff aroun

      • Or, you can get an x86 platform which offers backwards-compatible instruction sets and relatively standardized architectures.

        The problem with ARM goes way beyond CPU compatibility, which is the point made by Linus in the video: there's just too many CPU+hardware combinations out there, all (mostly) incompatible with each other. Apple gets away with multi-platform (fat) binaries simply because their ecosystem is way more constrained.

        • The problem with ARM goes way beyond CPU compatibility, which is the point made by Linus

          And by me in the last paragraph of the posting to which you responded.

          There are the CPU issues, such as "what version of VFP does the processor have, if any?" and "does the processor have Advanced SIMD?". The NDK has an API that can be used to get the answer to those questions [android.com] (and to similar questions for x86 and MIPS), and there are the "rest of the platform" issues. The former may affect applications, but the latter don't, so the VM isn't needed to handle the latter, nor are fat binaries.
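
          For what it's worth, that NDK check looks roughly like this; a minimal C sketch assuming the cpufeatures helper library that ships with the NDK, with the surrounding fallback logic made up for illustration:

          /* Runtime CPU-feature check via the NDK cpufeatures library: decide
             whether a NEON (Advanced SIMD) code path can be used on this device. */
          #include <stdint.h>
          #include <cpu-features.h>

          int can_use_neon(void)
          {
              if (android_getCpuFamily() != ANDROID_CPU_FAMILY_ARM)
                  return 0;                              /* x86/MIPS builds: take the portable path */
              uint64_t features = android_getCpuFeatures();
              return (features & ANDROID_CPU_ARM_FEATURE_NEON) != 0;
          }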

          Apple gets away with multi-platform (fat) binaries simply because their ecosystem is way more constrained.

          Again, the "r

          • There are the CPU issues, such as "what version of VFP does the processor have, if any?" [...]

            Which is also tackled in the x86 world... [wikipedia.org]

            Again, the "rest of the platform" issues aren't relevant here, other than perhaps screen size (iPhone vs. iPad). I'm not sure what processors Apple's used have in the way of floating-point or SIMD support, so I'm not sure what flavors of "fat" are needed other than "ARMv6 vs. ARMv7 vs. ARMv8-A 64-bit".

            ARM11 (iPhone 1 to 3), A4 (iPhone 4), A5 (iPhone 4S), A7 (iPhone 5), A8 (iPhone 6), A10 Fusion (iPhone 7). And that's just for their phones.

      • by Anonymous Coward
        I doubt anybody ever /needed/ yet another shitty java VM.

        The big heads decided to repeat that mistake because of their groupthink and incompetence, and then the code monkeys simply had to love and accept it.

        Many applications that matter have their core written in C or C++ (which is the way to write something actually portable) and have it packaged in separate NDK .so libraries for the different architectures (if they bother to support anything other than ARM).

        What is quite remarkable about Android

    • by ShakaUVM ( 157947 ) on Sunday October 09, 2016 @07:47PM (#53044403) Homepage Journal

      >This doesn't mean that the ARM instruction set isn't a joy to work on though.

      Yes, I'm glad somebody here said this. I have programmed assembly for x86, 68k, MIPS, SPARC, etc., and ARM is my favorite by far to program in. It's very sane and sensible. The ISA's documentation is... ok, there could be better documentation on ARM's part, but it's good enough I suppose.

      I was able to take an image manipulation library function call written in C++ from 6 seconds to .03s using assembly in about an hour's work. (A 4K image file held in memory, processed on an RPi 2.) That would be good enough to do sepia toning in real time on a 4K video stream if the RPi 2 were actually capable of doing I/O fast enough to feed the function.
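
      For a flavour of the kind of per-pixel loop being hand-vectorized, here is a minimal NEON sketch; it uses C intrinsics rather than the hand-written assembly from the post, and it just halves 8-bit pixel values 16 at a time rather than doing the actual sepia/library routine:

      /* Process 16 greyscale pixels per iteration with NEON, scalar tail for the rest. */
      #include <stddef.h>
      #include <stdint.h>
      #include <arm_neon.h>

      void dim_pixels(uint8_t *px, size_t n)
      {
          size_t i = 0;
          for (; i + 16 <= n; i += 16) {
              uint8x16_t v = vld1q_u8(px + i);   /* load 16 pixels */
              v = vshrq_n_u8(v, 1);              /* halve each value */
              vst1q_u8(px + i, v);               /* store back */
          }
          for (; i < n; i++)                     /* leftover pixels */
              px[i] >>= 1;
      }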

      • I remember having a "whoa" moment the first time I found out ARM supported free bit shifts within a MOV.
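
        That folded shift looks something like the following; a minimal sketch using GCC inline assembly, assuming a 32-bit ARM (A32) target where the classic shifted-operand MOV syntax is accepted:

        /* The ARM barrel shifter: the shift rides along with the MOV operand for free. */
        unsigned times_eight(unsigned x)
        {
            unsigned r;
            __asm__("mov %0, %1, lsl #3" : "=r"(r) : "r"(x));  /* r = x << 3, one instruction */
            return r;
        }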

      • by AmiMoJo ( 196126 )

        Wow, that must either be a seriously crap compiler or some very badly optimized C code. Perhaps the compiler couldn't vectorize it? We had a similar issue with older versions of GCC that was fixed by using in-line assembler Neon instructions, but newer versions and some carefully written C allow the compiler to get the same result in a more maintainable, portable way.

        • It was not vectorizing properly. I rewrote the library call in C++ and got it to vectorize correctly, which took it from 6 seconds to about 0.06 seconds, but with assembly I was able to beat it by a factor of 2. Again, without much work on my part.

          Actually, the most time consuming thing was implementing the whole thing a fourth way using C++ intrinsics, which are supposed to compile down to assembly in an optimal fashion, but are sparsely documented and didn't end up being any faster than what the optimizer

      • by Anonymous Coward

        ARM is a "joy" when you are writing bog-simple single-threaded code.

        For anything else (say, like a *kernel*), it is an utter pain in the ass due to the need for explicit synchronization everywhere. If you don't even know what I am talking about, you are not entitled to judge anything related to low-level multi-threaded programming.

        X86 is quite a hideous ISA, but it gets *one* thing right: it is easy to write correct multi-threaded programs in it, as it is fully coherent. ARM doesn't have even that much.
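
        The textbook illustration of that difference is message passing between two threads. A minimal C11 sketch, where the release/acquire pair is the explicit synchronization in question; which barrier instructions it becomes depends on the target:

        /* Producer publishes data, consumer waits for it. On ARM the release store and
           acquire load emit explicit barrier/ordered instructions (dmb, or stlr/ldar on
           ARMv8); on x86 the same source compiles to ordinary mov instructions because
           the memory model is already strong enough. */
        #include <stdatomic.h>

        static int payload;
        static atomic_int ready;

        void producer(void)
        {
            payload = 42;                                            /* plain data write */
            atomic_store_explicit(&ready, 1, memory_order_release);  /* publish */
        }

        int consumer(void)
        {
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;                                                    /* spin until published */
            return payload;                                          /* guaranteed to see 42 */
        }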

        Gr

  • One chip was designed to be very powerful and work well.
    Another chip was designed to be new, better, cheaper. The option is to have other closed hardware do graphics, sound and other tasks.
    Years later the different design ideas are back on the desktop again.
    Do applications want one good chip that is well understood, or a cheap chip that locks a generation in to other, deeper hardware to get the same performance?
    Nice if you have a closed OS, online shop, can alter app code standards and can command develop
  • by caseih ( 160668 ) on Sunday October 09, 2016 @11:06PM (#53044939)

    I am probably not the only one who has a drawer full of devices and SoCs with ARM processors on them that I thought would be more useful than they turned out to be. There's nothing wrong with the ARM processor itself; it's just the funky bootloaders, proprietary peripherals with proprietary firmware, and custom kernels that make them a lot less useful to me than if someone made a little x86 SoC with a full complement of I/O pins (including an ADC) and a normal EFI/BIOS.

    For some things like my router/firewall, I thought a little ARM-based device would be perfect, but an Intel NUC with a micro SD card ended up being easier to deal with (though an order of magnitude more expensive). Easier to keep updated, and it can run a stock distro.

    I just saw that GlobalScale is producing a new ARM board aimed at networking, which looks interesting, but it's nearly the same hardware as their old Plug computer products (only 1.2 GHz, but with a lot more RAM), married to a 3-port gigabit switch fabric. It still means dealing with a custom/proprietary U-Boot loader, flashing kernels, etc. Not something I care to deal with anymore.

    Of course, other devices are different and easy to boot off an SD card. But that's the problem, isn't it? There's no such thing as an ARM version of Debian that runs on all ARM devices. We have to have custom spins for each board. They may as well each be their own complete platform, which is impossible for Linus and crew to deal with. So we have to rely on vendors to supply custom versions of the kernel and a matching distro.

    • Yeah, I think that's what Linus means though.
      It's also why I'm using a NUC as well. I eventually gave up on the countless garbage ARM boards with proprietary boot loaders and incompatible things of all kinds. The NUC lets me focus on what matters; basically, the ARM boards all get in the way.
      If x86 got as cheap as ARM, I suspect there would be no more ARM.

      • by gtall ( 79522 )

        That wouldn't make ARM go away. Intel tells you how to construct your computing; ARM allows you to construct it yourself with their licensing schemes. Until that changes, no one who doesn't have a fairly large, uniform market is going to trust Intel.
