
ARM Is a Promising Platform But Needs To Learn From the PC

jbrodkin writes "Linux and ARM developers have clashed over what's been described as a 'United Nations-level complexity of the forks in the ARM section of the Linux kernel.' Linus Torvalds addressed the issue at LinuxCon this week on the 20th anniversary of Linux, saying the ARM platform has a lot to learn from the PC. While Torvalds noted that 'a lot of people love to hate the PC,' the fact that Intel, AMD, and hardware makers worked on building a common infrastructure 'made it very efficient and easy to support.' ARM, on the other hand, 'is missing it completely,' Torvalds said. 'ARM is this hodgepodge of five or six major companies and tens of minor companies making random pieces of hardware, and it looks like they're taking hardware and throwing it at a wall and seeing where it sticks, and making a chip out of what's stuck on the wall.'"

  • "...tens of minor companies making random pieces of hardware..."

    Has this guy never seen the PC hardware section at Fry's?

    • Re: (Score:1, Flamebait)

      by Desler ( 1608317 )

      He's talking about CPUs, moron.

      • Re:Wait, what? (Score:5, Insightful)

        by thsths ( 31372 ) on Thursday August 18, 2011 @12:55PM (#37132272)

        What the desktop is in the PC world, the SoC is in the embedded world. It even comes with RAM and flash (not on chip, but on package) if you want it to.

        The difference is that the PC environment has over a long time filtered down to a few typical devices for each type. Your network hardware is probably Realtek, or maybe Intel or an embedded AMD chip. Your graphics card is NVidia, AMD or Intel. Your mouse does not matter, because it always talks USB HID, etc.

        In the ARM world, you also have standard components, but every integrator makes tiny (and usually pointless) changes that render them incompatible at the software level. Linus is right - this is neither necessary nor sustainable. It is one of the reasons that you can get software updates for a 5-year-old PC, but not for a 6-month-old smartphone.

        • Re:Wait, what? (Score:4, Informative)

          by petermgreen ( 876956 ) <plugwash@@@p10link...net> on Thursday August 18, 2011 @01:15PM (#37132488) Homepage

          The difference is that the PC environment has over a long time filtered down to a few typical devices for each type. Your network hardware is probably Realtek, or maybe Intel or an embedded AMD chip. Your graphics card is NVidia, AMD or Intel. Your mouse does not matter, because it always talks USB HID, etc.

          And perhaps most importantly, your main system bus is either PCI or something that looks like PCI to software, and by accessing the configuration space of that bus you can read the device IDs of everything on it, whereas with ARM the software is expected to know the complete hardware setup in advance.
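
          To make "accessing the configuration space" concrete, here is a minimal user-space sketch of how legacy PCI enumeration works: write a bus/device/function address to I/O port 0xCF8 and read the 32-bit vendor/device ID back from 0xCFC. It assumes x86 Linux with glibc's <sys/io.h> and root privileges for iopl(); a kernel does essentially the same walk (or the memory-mapped PCIe equivalent) at boot, which is exactly what a typical ARM SoC has no generic way to offer.

          /* Sketch: enumerate PCI devices through the legacy 0xCF8/0xCFC ports.
           * Assumes x86 Linux, root, and glibc's <sys/io.h> (iopl/outl/inl).
           * Only function 0 of each slot is probed, to keep the example short. */
          #include <stdio.h>
          #include <stdint.h>
          #include <sys/io.h>

          #define PCI_CONFIG_ADDRESS 0xCF8
          #define PCI_CONFIG_DATA    0xCFC

          static uint32_t pci_read32(int bus, int dev, int fn, int off)
          {
              uint32_t addr = 0x80000000u | (uint32_t)(bus << 16) | (uint32_t)(dev << 11)
                            | (uint32_t)(fn << 8) | (uint32_t)(off & 0xFC);
              outl(addr, PCI_CONFIG_ADDRESS);   /* select bus/device/function/register */
              return inl(PCI_CONFIG_DATA);      /* read the selected config dword */
          }

          int main(void)
          {
              if (iopl(3) < 0) {                /* raw port access needs privilege */
                  perror("iopl");
                  return 1;
              }
              for (int bus = 0; bus < 256; bus++)
                  for (int dev = 0; dev < 32; dev++) {
                      uint32_t id = pci_read32(bus, dev, 0, 0);  /* vendor/device ID */
                      if ((id & 0xFFFF) == 0xFFFF)
                          continue;             /* 0xFFFF vendor: nothing in this slot */
                      printf("%02x:%02x vendor=%04x device=%04x\n",
                             bus, dev, id & 0xFFFF, id >> 16);
                  }
              return 0;
          }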

          • And perhaps most importantly, your main system bus is either PCI or something that looks like PCI to software, and by accessing the configuration space of that bus you can read the device IDs of everything on it, whereas with ARM the software is expected to know the complete hardware setup in advance.

            Amen to that. It seems like every vendor's pet ARM variant, and every stepping of every variant, has tons of semi-custom value-add features that they have to add because if they didn't differentiate their variant from everyone else's, you might buy the other guy's device. There's usually no way to tell what's present and what isn't, so you end up creating these complex expert systems that "know" that if you're using vendor X, product Y, stepping Z, then this set of additional functionality is available. An

        • So I wish I could agree. But ARM is following the same diversity explosion and Darwinian selection as FOSS, and for the same reasons. Out of this chaos comes the wonderful bounty of choices in our modern digital buffet. PCs have become stagnant. If you want one, they are still available, but they don't really do anything more than they did 15 years ago.
          • Yet if you look at the FOSS projects with any real market penetration (outside the FOSS world), they are all the market leaders: Firefox, Apache, MySQL, OpenOffice, and so on. Yes, KOffice exists on Windows, but show me one non-Linux type running it...

            Right now ARM is a bunch of FOSS projects with no clear leader. Once there is one, it will get the mindshare, and hence the support. Then others will be compatible so they can use the ecosystem, and things will get better. But right now, it is Linux 199
          • Sure, sure. Play the old cynic. The one under my desk is running particle simulations interactively using hundreds of GFlop/s. Looking back 15 years, that would have been the first 3dfx board and maybe a version of Quake. I think they have done quite a few new things in that time. Improving vector processing performance by four orders of magnitude has resulted in new applications (for the PC; these things would have been available on mainframes previously) ...

        • Re:Wait, what? (Score:4, Informative)

          by TheRaven64 ( 641858 ) on Thursday August 18, 2011 @01:26PM (#37132608) Journal

          You're missing the point. He's not talking about add-ons like network adaptors, he's talking about fundamental core bits of hardware, like interrupt and DMA controllers, which need to be configured by the kernel before it can even bring things like serial ports online for a console.

          Every PC, except some early Intel Macs, is capable of booting PC-DOS 1.0. It has interrupt controllers and device I/O configured in the same way and accessible via the standard BIOS interface. You don't get great performance if you use these, but you can bring up a system with them and then load better drivers if you have them. With ARM, every SoC needs its own set of drivers for core functionality long before you get to things like video or network adaptors. Oh, and on the subject of video, you don't even get something like the PC's text console as standard, let alone a framebuffer (via VESA).

          • by Rob Y. ( 110975 )

            It almost brings to mind the Linux desktop situation. Sure, the underlying engine (Linux kernel, drivers, etc.) is the same across distros, just as the basic ARM instruction set is the same for every ARM chip. But all the glue that holds a system together is different: choice of desktops, sound systems, desktop interprocess communication. Every distro puts together a Linux 'system' from the Linux kernel, X11 and various combinations of these other software components the way every ARM box generates a system

          • Actually all that cruft does slow things down in a major way when you try to initialize hardware. Compare the times of Coreboot --> Linux to BIOS --> bootloader --> Linux.

            Anyway, standardizing some of these components wouldn't be a bad idea; say, the ARM group providing a few standard modules that can be licensed with the core designs.

        • by Luckyo ( 1726890 )

          In a nutshell, essentially everything in and out of the PC is standardized, and when it's not, drivers usually provide abstraction to some standardized layer to ensure compatibility. This is true, and is probably the point you're trying to make. But it ended up coming out pretty badly mangled.
          In fact, the sheer size of the falsehood in your claim is astonishing. The entire point of the PC platform is that it supports a massive number of different hardware configurations that all work with the same x86 (amd64) code.

          Networkin

        • This, like Linus's comment, sounds like it comes from someone who cannot have done any ASIC/embedded development.
          There are standard graphics pipelines, but you will integrate them onto your SoC with direct access to your SDRAM. This removes any standard bus architecture. You may even write your own 3D pipeline. You will probably write your own SDRAM controller. You will add your own peripherals. Why do this rather than plug in standard components? Well, if you just stick off-the-shelf stuff together, then how is your product any differ

          • There are standard graphics pipelines, but you will integrate them onto your SoC with direct access to your SDRAM. This removes any standard bus architecture. You may even write your own 3D pipeline. You will probably write your own SDRAM controller. You will add your own peripherals. Why do this rather than plug in standard components? Well, if you just stick off-the-shelf stuff together, then how is your product any different from your competitor?

            That's exactly the thinking of the device vendors that's got us into the mess we're in now. It's fine if you're a device vendor, it's OK if you're a manufacturer who can ship 100M units with exactly one ARM-based ASIC in it that'll never change in its lifetime, and it's a nightmare if you're doing anything else. For example to do something as basic as add TCP offload on ARM ASICs to our stuff we'd have had to add custom code for every single fscking vendor's bizarro TOE concept. In the end we did it all in

            • Well put yourself in our shoes.
              You've got to remember an ATI chipset is not a standard; OpenGL is the standard. Why shouldn't I be able to develop my own pipeline as long as it complies with the standard?
              SDRAM is a standard. ARM's implementation of their controller might be targeted for their processor but not for the 3D pipeline you're building, so by building your own you can do better. Trust me, an ARM processor and a 3D pipeline need very different things from an SDRAM controller. Why shouldn't I be able t

    • Re:Wait, what? (Score:4, Insightful)

      by kbolino ( 920292 ) on Thursday August 18, 2011 @12:53PM (#37132250)

      All of which is, more or less, interchangeable. The Intel x86/IBM PC platform, despite its many flaws, has reached a stable point where there are well accepted and commonly implemented standards for the boot process, the storage formats, the hardware interfaces, etc. ARM, despite a "purer" and "simpler" instruction architecture, lacks much of this common surrounding infrastructure.

      • by LWATCDR ( 28044 )

        And that is called innovation.
        The original PC "standard" sucked.
        You had to assign memory spaces, interrupts, and I/O ports when you added cards. Not every card worked with every PC.
        PC compatibility was hit or miss. The magazines would use Lotus 1-2-3 and Microsoft Flight Simulator as the benchmarks. If both of those ran, then it was PC compatible. Of course, if you bought anything but a real IBM PC or AT, you could still find software that didn't run.
        Then you had the x86 CPU, which also was terrible. Segmented memor

        • by mikael ( 484 )

          ARM descended from Acorn Computers, who provided the Archimedes computer along with RISC OS. They seem to have bought out every possible semiconductor design group and merged them together.

          Remember those times in the early-to-mid 1990s. As mentioned, there was a vast variety of different consumer PCs, along with experimental operating systems like TAOS - a Java-like system with cross-platform compilation and dynamic linking.

          Graphics boards were upgraded every six months as they are now: CGA, Hercules, EGA,

          • by LWATCDR ( 28044 )

            No, graphics cards were not upgraded every six months.
            CGA 1981 http://en.wikipedia.org/wiki/Color_Graphics_Adapter [wikipedia.org]
            EGA October of 1984. http://en.wikipedia.org/wiki/Enhanced_Graphics_Adapter [wikipedia.org]
            VGA 1987 http://en.wikipedia.org/wiki/Video_Graphics_Array [wikipedia.org]
            Hercules was in the 1982-83 area, but it was not a "standard" card; it was well supported, but also only mono.
            As you can see, before the OS offered drivers and abstracted the hardware, change was slow. You really had to wait for IBM to set a "standard" or you had t

            • by mikael ( 484 )

              Maybe it was just me then - got my first PC in 1989 (20MHz Dell 310), got an upgrade to a 256K Paradise VGA board about 6 months later, moved on to a TIGA board with a few megabytes of memory and a 32-bit framebuffer another 6 months later. Another six months later, I'd got a job using a custom PC.

              Had great fun programming the VGA directly; doing fun things like changing the character set, trying out Mode X from Dr. Dobbs, with 256 colors and page flipping, implementing scrollable viewports, palette editors

              • by LWATCDR ( 28044 )

                By 1990 we were already 9 years into the PC lifecycle. In those years we had gone through 4 generations of mainstream video standards, three generations of CPUs, and were on the third generation of Windows (frankly the first usable one, but still only version three). It was well after the standard was set. The thing was that even with all that development on the "standard", it still sucked to high heaven.
                The Mac, Amiga, and Atari ST were all still better, and they were all from the mid-80s. If you adopt a

                • by mikael ( 484 )

                  I always remember that time in 1986 when our lab PCs had CGA graphics, EGA was high-end, and even the Atari 800XL could do 256 colors using some HBIs. All the artists I knew used Amigas for rendering characters and levels.

    • by Osgeld ( 1900440 )

      yeah, and that PC hardware uses the same CPU platform

      with ARM, well, shit, there's TI's flavor which doesn't play well with ST's version, and let's not even get into the "ARM based" stuff like PIC32

      it is a mess, much like PCs in the late 70's and early 80's: they all have BASIC, but are totally incompatible

      • Re:Wait, what? (Score:4, Insightful)

        by jedidiah ( 1196 ) on Thursday August 18, 2011 @01:32PM (#37132698) Homepage

        It is NOTHING like computers in the 70s and 80s.

        In the 80s, you had machines made out of standard 3rd party components. Your CPU was the same as the next guy even if he got his computer from a competing brand. This is why an Atari could emulate a Mac. The actual CPU was a particular part that everyone bought from the same place. This is why you can have versions of Linux targeting those 80s/90s era machines. A 68000 in one machine is the same as the next, or a 6502, or a 68030.

        The old home computer landscape seems positively orderly by comparison.

        • Re: (Score:3, Insightful)

          by JDG1980 ( 2438906 )
          The CPUs were standard, but little else was. Sure, the C-64 and Atari 800 both had a 6502-based CPU, but they also had different video chips, different sound chips, different and mutually incompatible disk drive formats and serial communications protocols, etc. One nice thing was that even though each company used their proprietary chips, they didn't feel the need to hide implementation details from users. If you wanted to know exactly what each register in the VIC-II chip did, it was right there in the man
          • by jedidiah ( 1196 )

            Apple was the only one that had a "mutually incompatible" format. The rest, not so much.

            While there were a lot of custom chips, there were also a good number of stock parts as well. This included floppy controllers, IO controllers, and sound chips.

            Now the bit about everything being documented is a good point. This is how it is that I am still somewhat familiar with the parts that were in my old machine. This probably made the 030 Linux versions a lot easier to deal with.

      • by skids ( 119237 )

        It's a complete mess and currently a huge barrier to development. You don't even have to get into coding for the kernel -- just getting a toolchain for your particular flavor of ARM is enough to turn away lots of developers. We're talking several DAYS spent figuring out how to produce a goddamn libgcc.a that has the correct endianness, MMU-or-not, and doesn't hose the system because it uses an undefined instruction to implement prefetch()... and then another night trying to figure out how to get that libg

      • The PIC32 isn't ARM-based. It's MIPS32-based.

        The Philips/NXP LPC series is (for the most part) ARM-based though: an ARM CPU core tied to custom peripherals (though, as I understand it, the UART is a stock-standard 16550).

  • by Anonymous Coward on Thursday August 18, 2011 @12:55PM (#37132278)

    They're not trying to cut corners for the hell of it, but for performance, power usage, and other actual engineering reasons.

    You just can't build smartphones and tablets with that same common architecture, or else you're adding too many chips and circuits you don't need.

    It's no big deal that PCs ship with empty PCI slots and huge chunks of the BIOS and chipset that are rarely or never used (onboard RAID, ECC, and so on), but when you're trying to put together a device as trim and minimalist as possible, you're going to end up with something slightly different for each use case.

    • He's acknowledging that, but at the same time discounting the advantages of having a minimalist option. I don't see any problem with having a heavier-duty ARM available, but suggesting that there's no value in having chips with just the necessary circuits is silly.

    • There is a difference between %feature% being present/absent and %feature% having 30 different implementations (of which 12 are actually hostile to the others).

      When you have to have a Venn diagram with PLAID as one of the circles, then you are in trouble.

      • Unfortunately, one of the advantages of ARM is that the chip maker can heavily customize what is on the SoC. Most of them don't mess with the core. I don't think the different makers are intending to have hostile features, but given time constraints for development, they can't check with other companies (some of them competitors) to see if their optimization hurts others.
        • by hitmark ( 640295 )

          I think perhaps the biggest complaint is that ARM lacks a unified bootstrap and hardware bus. As in, there is no BIOS like on x86, nor is there a PCI or similar bus that one can query to get a dump of device IDs. So for a lot of the SoCs you basically need to know what is on there before you start sending signals.

          • Right, because do you want to boot your router from ROM? Or your IP phone from flash attached over what will later be GPIO? Or your mobile phone from SD card? Your tablet from an embedded SSD, your MP3 player from its flash chip over a custom interface*, your set-top box from its hard drive? What if you have one processor booting another? What if it needs to do a first-stage loader from ROM and then grab the image over Ethernet (using your own Ethernet implementation, of course)? What if it's not Ethernet but SPI?
            So

      • by AmiMoJo ( 196126 )

        That is the reason we have drivers. Unfortunately, the ones supplied by embedded manufacturers tend to be kinda crap in my experience. There is also a lack of solid APIs for many embedded system features, whereas desktop ones are quite comprehensive and mature now.

        At the place I work, some guys are making a handheld logger using Windows 7 Embedded, and the support from the manufacturer is terrible. It took two weeks and sending test code to their office in Israel just to get the vector processing features of t

    • This whole article is bullshit. Is everyone forgetting the varying instruction sets of the 386, 486, Pentium, Pentium 2-4, Xeon, x86-64, etc.? Plus all the millions of Northbridge and Southbridge chipsets from Intel, Via, etc., plus all the different busses through the ages, plus 92 different kinds of temperature monitoring, USB, ATAPI, ACPI...

      And we're badmouthing ARM for being a constantly moving target? And that manufacturers are throwing shit at the wall? Huh???
      • by Pentium100 ( 1240090 ) on Thursday August 18, 2011 @02:01PM (#37133006)

        And yet, you can run, say, DOS on all of those computers. Critical devices will support a "generic" interface. Any VGA card will support standard VGA calls, disk drives can be accessed using the standard IDE interface (SATA controllers can emulate it), SCSI drives can be accessed using INT 13h (the controller BIOS takes care of it), and keyboard/mouse use one of a few interfaces (and USB can emulate PS/2).

        Now, when you get the basic system running, you can load drivers to access all of the features of the hardware (for example, different resolutions of the VGA card).

        For ARM you have to recompile the kernel for most of the chips and boards for it to even boot. So, how would you create a way to install an operating system from media without using another PC?
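
        As a sketch of what that "generic interface" buys you: the snippet below reads the first sector of the first hard disk through BIOS INT 13h, in the style of a 16-bit real-mode DOS compiler such as Turbo C (int86x(), FP_SEG() and FP_OFF() come from that compiler's <dos.h>). The same call works whether the disk sits behind an IDE, SCSI, or legacy-emulating SATA controller, because the controller's own BIOS answers the interrupt.

        /* Sketch: read the first sector of the first hard disk via BIOS INT 13h.
         * Assumes a 16-bit real-mode DOS compiler (e.g. Turbo C) whose <dos.h>
         * provides int86x(), FP_SEG() and FP_OFF().  The call is identical no
         * matter which vendor's controller is fitted: the controller's option
         * ROM implements the service behind INT 13h. */
        #include <stdio.h>
        #include <dos.h>

        static unsigned char buf[512];

        int main(void)
        {
            union REGS r;
            struct SREGS s;

            r.h.ah = 0x02;          /* BIOS function: read sectors */
            r.h.al = 1;             /* one sector */
            r.h.ch = 0;             /* cylinder 0 */
            r.h.cl = 1;             /* sector 1 (sector numbers are 1-based) */
            r.h.dh = 0;             /* head 0 */
            r.h.dl = 0x80;          /* first hard disk */
            s.es   = FP_SEG(buf);   /* ES:BX = destination buffer */
            r.x.bx = FP_OFF(buf);

            int86x(0x13, &r, &r, &s);

            if (r.x.cflag) {        /* carry set: BIOS reports an error */
                printf("INT 13h failed, status %02X\n", r.h.ah);
                return 1;
            }
            printf("first bytes: %02X %02X ...\n", buf[0], buf[1]);
            return 0;
        }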

    • Not sure how this is insightful.

      They're not trying to cut corners for the hell of it, but for performance, power usage, and other actual engineering reasons. You just can't build smartphones and tablets with that same common architecture, or else you're adding too many chips and circuits you don't need.

      A common firmware interface (like BIOS, OF, or EFI) and something like a device tree don't require extra chips. At most it's maybe a few KB of flash.
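
      For readers who haven't met a device tree: the firmware hands the kernel a flattened blob describing the hardware, and software looks devices up in it instead of probing a discoverable bus. Here is a rough sketch using the libfdt API (fdt_check_header(), fdt_node_offset_by_compatible() and fdt_getprop() are real libfdt calls; the "ns16550a" compatible string and the single-cell "reg" layout are illustrative assumptions):

      /* Sketch: find a UART in a flattened device tree instead of hard-coding
       * its address.  Uses libfdt; build with -lfdt. */
      #include <stdio.h>
      #include <libfdt.h>

      static int find_uart(const void *dtb)
      {
          int node, len;
          const fdt32_t *reg;

          if (fdt_check_header(dtb) != 0)
              return -1;                        /* not a valid device tree blob */

          /* Ask the tree for any node that claims 16550 compatibility. */
          node = fdt_node_offset_by_compatible(dtb, -1, "ns16550a");
          if (node < 0)
              return -1;                        /* no such device described */

          /* "reg" carries the MMIO base; real code must honour
           * #address-cells/#size-cells, a single 32-bit cell is assumed here. */
          reg = fdt_getprop(dtb, node, "reg", &len);
          if (reg == NULL || len < 4)
              return -1;

          printf("UART described at MMIO base 0x%08x\n",
                 (unsigned)fdt32_to_cpu(reg[0]));
          return 0;
      }

      int main(int argc, char **argv)
      {
          static char blob[1 << 20];            /* room for a typical .dtb */
          FILE *f;

          if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL) {
              fprintf(stderr, "usage: %s board.dtb\n", argv[0]);
              return 1;
          }
          fread(blob, 1, sizeof blob, f);
          fclose(f);
          return find_uart(blob) == 0 ? 0 : 1;
      }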

  • by sgt scrub ( 869860 ) <saintium@nOSPaM.yahoo.com> on Thursday August 18, 2011 @01:02PM (#37132358)

    Goals for Friday.
    1) play all pink floyd albums in a continuous loop.
    2) make bubbly gurgle sounds with my "sandwich".
    3) contemplate "making a chip out of what sticks on the wall".

  • "ARM should be more like my previous employer Transmeta".
  • Openness? (Score:3, Insightful)

    by Baloroth ( 2370816 ) on Thursday August 18, 2011 @01:08PM (#37132426)

    Is Linus Torvalds (implicitly, at least) criticizing ARM because it is open and therefore anyone can create their own version of it? As opposed to x86, which has a restricted licensing set (AMD/Intel/Via... Via still exists, right?)? Because that is, AFAICT, exactly why ARM is so varied: anyone can roll their own. With the result that many do.

    Kinda ironic (and I do mean *ironic*) that the creator of Linux would be complaining about this. I guess he is finally discovering why, in some cases, a regulated and restricted environment can be good (note: if x86 were a monopoly, I would not be saying that. But AMD and Intel are fierce competitors, so it isn't at all monopolistic). Open environments often become "hodgepodges" and lend themselves to non-standardization. Closed ones don't (well, they can, but generally they don't; definitely not as fast as an open one) and can be easily standardized (witness how Intel accepted AMD's x86-64 instruction set for consumers over their own IA-64). The result is, in the case of CPUs, good for consumers.

    Note: I am not proclaiming the virtues of proprietary systems, or claiming they are better than free and open ones. Just pointing out the irony of the situation.

    • Linus doesn't have the RMS/ESR stick up his ass about "open." Linux was built out of necessity because no good x86-based *NIX or BSD was available. If HURD had gotten off the ground, Linus wouldn't have bothered with Linux.

    • Re:Openness? (Score:5, Insightful)

      by Jonner ( 189691 ) on Thursday August 18, 2011 @01:30PM (#37132660)

      Is Linus Torvalds (implicitly, at least) criticizing ARM because it is open and therefore anyone can create their own version of it? As opposed to x86, which has a restricted licensing set (AMD/Intel/Via... Via still exists, right?)? Because that is, AFAICT, exactly why ARM is so varied: anyone can roll their own. With the result that many do.

      ARM is not any more "open" than x86. To sell chips implementing modern versions of either instruction set, you must obtain a license from at least one company, and nothing prevents you from extending that instruction set. Many companies have implemented (and often extended) each set over the years, though there are fewer implementing x86 now than ARM. There are probably fewer implementors of x86 because it is much more complex.

      I think Linus is criticizing the lack of a common platform surrounding ARM rather than the instructions themselves. The instruction set of x86 chips has grown a lot, especially with x86_64, but the way you boot a PC hasn't changed much for example.

      • by yuhong ( 1378501 )

        ARM is not any more "open" than x86. To sell chips implementing modern versions of either instruction set, you must obtain a license from at least one company and nothing prevents you from extending that instruction set

        Yes, but I think ARM is much easier to license than x86.

    • Open? How is ARM open? ARM is a very popular but *licensed* core that you must pay a good deal of money to license. According to the Wikipedia article on ARM, in 2006 it cost about $1,800,000 per license.
      • Open? How is ARM open?

        Probably because there are royalty-free ARM designs freely available for use by anyone. It's not their leading-edge designs, but ARM is freely available.

        • by Arlet ( 29997 )

          Or, more likely, the $1 million+ license fees, and the royalties per core are not a big obstacle for dozens of different licensees.

          In return for the license, you get a high quality core for your ASIC, so that's worth it for a lot of customers.

        • There are? OpenCores has one beta VHDL implementation (it hasn't been updated since December 2009) that I can find with a quick search; everything else I find leads to a dead end. I don't see any ARM cores listed on OpenCores that have been ASIC proven.

          While there may be some designs available, I don't think any of the ARM implementations that are in the Linux kernel are based on an open core. If you are aware of an open core that can run Linux, I would appreciate a pointer.

          Beyond anything else, ARM is a

    • by JackDW ( 904211 )

      Actually it is the other way around. The x86 platform is mostly based on open standards. There are more 486-compatible clones than you may realise. ARM, on the other hand, is strongly proprietary. There are no clones at all. The ARM fragmentation has occurred because of a lack of open standards - while the PC guys were standardising PCI, USB and VGA, every ARM licensee was reinventing the wheel to give their own SoC the features that nobody else had. While the core ISA is always the same, the system archite

    • The problem I think is that the abstraction of the CPU is seen differently in the ARM community from the x86 community. So Linus is frustrated that the ARM side doesn't see the CPU as "processor plus memory management plus bus management plus system control".

      ARM is going after a different market than the typical desktop/server and so has different needs. Whereas Intel, AMD and others want to be very compatible and mostly plug compatible at the software/OS level so that you don't have to have different ver

    • Well written and much more diplomatic than what I would have said:

      "Haha, suck it, Mr. We-don't-need-no-stable-kernel-ABI! Not so much fun being on the other end of the 'can't we program to a uniform standard' problem, is it, Linus?"

  • by Weaselmancer ( 533834 ) on Thursday August 18, 2011 @01:11PM (#37132452)

    The reason why x86 is so unified is because they're all in PCs. You only have the one form factor to shoot at. So of course the CPUs will be highly similar.

    ARM fills a different niche. You see ARM chips in tablets, phones, industrial control, routers...all over the place. Of course ARM chips will vary more wildly. They're trying to hit more targets. And those targets have unique and tightly defined parameters. That will put them at odds with other designs.

    I mean hell, if the x86 has it all figured out so well, then why isn't your cellphone using one?

    • Uh, x86 is everywhere. PCs. Supercomputers. Microcontrollers. Embedded systems (you can still buy i386 chips because a lot of embedded systems like traffic light controllers use them). There have even been a few game consoles using it (the original Xbox and the WonderSwan series). Quite a few of them don't follow the PC standard, and that's fine. But there should still be a standard for common uses - even just covering smartphones, tablets and netbooks would be a major improvement over the current chaos.

      • by Svartalf ( 2997 )

        ARM's everywhere. Look at most of your consumer electronics... Odds on, you're looking at an ARM in most of them. There's at least 1-2 ARMs in your X86 machines as well, doing tasks you wouldn't relegate to the X86.

      • Well, yeah, you can find x86 in a few other places. I was working on a grocery store scale that was x86. But the thing was, from an architecture point of view, a PC in a funny box with a scale on top. Standard Linux distros worked on it unmodified. I tested that personally. The thing ran Ubuntu without a hitch.

        And yeah, you probably would expect to find a 386 in traffic lights. Traffic lights are older than ARM chips, so you'd expect that.

        But there should still be a standard for common uses - even just cov

        • What standard would you propose? What standard could cover a CPU that you find in everything from routers to car dashboards? ARM is meant to be adaptable to corner cases. How would you fence that without hindering development?

          Once again, you misunderstand me. I'm not suggesting we make a standard for any possible ARM device. I'm suggesting we make a set of standards for PC-like ARM computers - tablets, smartphones, netbooks, maybe even desktops and servers. That much is possible - x86 is able to work passably well in each, and it has a rather outdated standard not designed for those things. If you designed the standard to fit ARM's strengths (low power consumption, low cost), you could come up with something that works just as w

          • I'm not trying to misunderstand you - honest. I just don't see what your standard would fix. Code portability isn't really a huge problem on ARM. I do a lot of Windows CE work. And 99% of the code that runs on one platform will run on another. The base Microsoft binaries linked during a sysgen do not change from platform to platform, regardless of many design choices. Just select ARMV4I and you're good to go.

            Sure, you could make a standard that says "This is what an ARM tablet is." Microsoft has al

  • One of the reasons ARM has succeeded over Intel in the embedded space is exactly because it's a hodgepodge in terms of implementation. ARM just designs the chip; they don't make it. They leave that up to others, who in turn support their own chips by providing kernel patches - which has been amazingly successful for Linux (and incidentally the non-Linuxy iPhone as well).

    Not to talk trash; he definitely understands the kernel and software, but the nuances of hardware development and what makes hardwar

    • by Jonner ( 189691 )

      A lot has changed since then but ARM has done nothing but help Linux. If your chip vendor has a poopy Linux implementation they'll sell less. If they have a great one (and great documentation) they'll sell more. TI's a pretty good example of an awesome ARM / Linux implementation, and.. there are less awesome examples..

      How do you define "help Linux"? The popularity of Linux on ARM has produced a giant, acrimonious fork which is not helpful to the community in general. Obviously, this wouldn't have happened in the first place if Linux and ARM weren't good for each other, but for the community to function well, things need to change. Linus is hopeful that this will be resolved in four or five years as a result of his and others' efforts to fix the very problems he's complaining about. The problem is not so much "poopy" Linu

  • Can someone give me an example of the kind of non-compatible functionality you'd get with a desktop ARM versus a mobile version?

    It seems to me they should implement all the features for great cross-compatibility, but just make them slower if need be. I doubt it would take up much more die space...

    I really dislike fragmented environments, and at most we should have 2 ARM versions, preferably one, and not, for example, 20,000.

    • It's not desktop vs. mobile, it is manufacturer X vs. manufacturer Y. ARM is just the core - the company doesn't make chips. They license their core to people who design with it. What is fragmented is everything outside the core - that is, the value that each licensee adds to the core to make their own product. They're embedded processors - they get surrounded by many peripherals such as analog-to-digital converters, interrupt controllers, serial ports, memory interfaces ... the list goes on and on.

      To me,

      • by Twinbee ( 767046 )

        Thanks, that helps clear things up.

        But apart from maybe "memory interfaces", the other things you mentioned (like analog-to-digital converters) wouldn't be of concern to the average programmer, who would still maintain cross-compatibility across devices, assuming he didn't write for special things like cameras, sound recorders or networking, etc.

        • All the core does (by itself) is math/logic functions, conditionals, and moving data around. *EVERYTHING* else is done by a peripheral. Embedded processors are (almost by definition) packed with peripherals. We get very used to these peripherals, but they're there, and if you want your computer to do anything other than serve as a way to deplete batteries, you've got to send data to/from a peripheral. Every different ARM variant has a different set of peripherals and different ways to use them - hence the frag
        • ARM is a bunch of different companies all contributing code to make their own device work. The problem is that very little effort is being spent to extract all the common bits, so you end up with many slightly different implementations of the same thing.
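
        A deliberately made-up sketch of what those "slightly different implementations of the same thing" look like in practice: even printing one character needs per-SoC register knowledge, so every board grows its own #ifdef branch. All vendor names, addresses and status bits below are invented for illustration (build with e.g. -DBOARD_VENDOR_A).

        /* Hypothetical illustration of per-SoC divergence: three imaginary SoCs,
         * three UART register layouts, so even "print a character" needs
         * board-specific code.  All macros, addresses and bits are made up. */
        #include <stdint.h>

        #define MMIO32(addr) (*(volatile uint32_t *)(uintptr_t)(addr))

        #if defined(BOARD_VENDOR_A)          /* imaginary SoC A */
        #  define UART_DATA   0x10001000u
        #  define UART_STATUS 0x10001004u    /* bit 5 = transmitter ready */
        #  define TX_READY()  (MMIO32(UART_STATUS) & (1u << 5))
        #elif defined(BOARD_VENDOR_B)        /* imaginary SoC B */
        #  define UART_DATA   0x20002000u
        #  define UART_STATUS 0x20002014u    /* bit 0 = TX FIFO full (inverted sense) */
        #  define TX_READY()  (!(MMIO32(UART_STATUS) & 1u))
        #elif defined(BOARD_VENDOR_C)        /* imaginary SoC C */
        #  define UART_DATA   0x30003000u
        #  define UART_STATUS 0x3000307cu    /* bit 1 = TX holding register empty */
        #  define TX_READY()  (MMIO32(UART_STATUS) & (1u << 1))
        #else
        #  error "unknown board: add yet another variant here"
        #endif

        /* The part that could be common, if only the register layouts were. */
        void uart_putc(char c)
        {
            while (!TX_READY())
                ;                            /* spin until the transmitter is free */
            MMIO32(UART_DATA) = (uint8_t)c;
        }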

  • linus's views sound very similar to what i've written about, at some length on this subject: https://lkml.org/lkml/2011/7/1/473 [lkml.org]

    the thing is that absolutely nobody has come up with any solutions. the only solution i've heard is the one that i recommended, and there's been no reaction or response to it, as of yet.

    the problem is the sheer overwhelming diversity. therefore, the solution is to prioritise linux kernel patches that come from hardware syndicates or specifications that cover more than just the one
