Ryan Gordon Ends FatELF Universal Binary Effort

recoiledsnake writes "A few years after the Con Kolivas fiasco, the FatELF project to implement the 'universal binaries' feature for Linux that allows a single binary file to run on multiple hardware platforms has been grounded. Ryan C. Gordon, who has ported a number of popular games and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches. Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.' The launch of the project was recently discussed here. The FatELF project page and FAQ are still up."

  • by harmonise ( 1484057 ) on Thursday November 05, 2009 @01:34PM (#29997484)

    He needs thicker skin if he's going to deal with the LKML crowd. I wouldn't give up just because it's not merged into the official tree.

    • by omuls are tasty ( 1321759 ) on Thursday November 05, 2009 @01:36PM (#29997516)
      Heck, an elephant needs a thicker skin if he's going to deal with the LKML crowd.
    • Exactly... given enough time, if enough people find FatELF binaries useful, the maintainers may just rethink its usefulness in the kernel source tree.
    • He should have sent Theo in ;^)

  • by hansraj ( 458504 ) on Thursday November 05, 2009 @01:37PM (#29997540)

    I like my elves the way I like my tea: thin and exotic.. served while still hot.

  • by smooth wombat ( 796938 ) on Thursday November 05, 2009 @01:38PM (#29997546) Journal
    and some showed up just to be rude.

    I would never have believed that people in the Linux community would show up at an event just to be rude. I've always heard such glowing praise about the Linux community. They're always there to help the new guy, willing to mentor those learning the "So simple a caveman can do it" operating system and break the monopoly of Microsoft once and for all.

    His comments can't be correct. Everyone knows what fine, upstanding individuals make up the Linux community.
  • Fat elves [jojosagency.com.au]
  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday November 05, 2009 @01:41PM (#29997606)

    The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).

    Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming. To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC. Plus possibly MIPS, SPARC, and Itanium. Most of those in 32-bit and 64-bit flavours. Those elves are going to be very fat indeed.

    • It has been reported that a lawsuit was filed against the Keebler corporation by a group of fat elves.

      Their spokesperson was quoted as saying, "It's *mmmm* their fault, these cookies *numnumnum* are just too good!"

    • by nxtw ( 866177 ) on Thursday November 05, 2009 @02:30PM (#29998348)

      The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).

      I disagree. Solaris and Mac OS X are the only operating systems I would say handle it well.

      OS X 10.6 includes i386 and x86_64 versions of almost everything. By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64. It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64-bit processes.

      One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.

      Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries. It includes 32- and 64-bit kernels as well, all in the same installation media.
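
      As a rough illustration of what that looks like on disk, here is a minimal C sketch that walks the header of a Mach-O universal binary and prints its slices. The struct layouts mirror Apple's <mach-o/fat.h> but are inlined here so the sketch is self-contained; all header fields are stored big-endian.

        /* Sketch: list the architecture slices in a Mach-O universal ("fat") binary. */
        #include <stdio.h>
        #include <stdint.h>
        #include <arpa/inet.h>              /* ntohl() */

        #define FAT_MAGIC 0xcafebabeU       /* magic of a universal binary */

        struct fat_header { uint32_t magic; uint32_t nfat_arch; };
        struct fat_arch   { uint32_t cputype; uint32_t cpusubtype;
                            uint32_t offset;  uint32_t size; uint32_t align; };

        int main(int argc, char **argv)
        {
            const char *path = argc > 1 ? argv[1] : "/usr/lib/libbz2.dylib";
            FILE *f = fopen(path, "rb");
            if (!f) { perror(path); return 1; }

            struct fat_header fh;
            if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
                fprintf(stderr, "%s: not a fat binary\n", path);
                return 1;
            }

            uint32_t n = ntohl(fh.nfat_arch);
            for (uint32_t i = 0; i < n; i++) {
                struct fat_arch fa;
                if (fread(&fa, sizeof fa, 1, f) != 1) break;
                printf("slice %u: cputype=%u offset=%u size=%u\n", (unsigned)i,
                       (unsigned)ntohl(fa.cputype), (unsigned)ntohl(fa.offset),
                       (unsigned)ntohl(fa.size));
            }
            fclose(f);
            return 0;
        }

      At exec time the OS X kernel and dyld do essentially this walk and pick the best matching slice, which is roughly what FatELF proposed to do for ELF objects.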

    • Actually, not a solution in search of a problem. The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types. You cannot solve that with package management because you need all packages installed at the same time -- i.e., your Emacs binary must run on x86, amd64, sparc, whatever and you simply cannot install three different packages from three different architectures onto the same file server a
  • by vadim_t ( 324782 ) on Thursday November 05, 2009 @01:44PM (#29997634) Homepage

    I don't get the point in bringing it up.

    Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.

    This in particular seems like a solution in search of a problem to me. Especially since on a 64-bit distro pretty much everything, with very few exceptions, is 64-bit. In fact, I don't think 64-bit distributions contain any 32-bit software except for closed source that can't be ported, and compatibility libraries for applications the user might want to install manually. So to me there doesn't seem to be much point in trying to solve a problem that matters less and less as time passes and proprietary vendors release 64-bit versions of their programs.

    • Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.

      Except this seems to be the only place that doesn't acknowledge the usefulness of fat binaries.

      Windows has had them since DOS, although no one uses them. OS X has them, and FreeBSD has talked about them without flatly rejecting the idea.

      I've seen many features in my career that seemed pointless, tabbed browsing for instance, my OS already supports

      • by nxtw ( 866177 )

        Windows has had them since DOS, although no one uses them.

        Is it possible to have a single Windows executable with both x86-64 and i386 code?

      • by vadim_t ( 324782 )

        Except this seems to be the only place that doesn't acknowledge the usefulness of fat binaries.

        Well, please explain what the usefulness would be.

        Especially when the two architectures most people care about are i386 and amd64, and the former works perfectly fine on the latter.

        Windows has had them since DOS, although no one uses them.

        That would seem to suggest that the idea wasn't really useful in that case.

        OS X has them

        But it went through a considerable architecture shift from one CPU to another that was incom

      • Re: (Score:2, Troll)

        Tell me how an Apple developer can run a server that lets the client select a program and have the correct version downloaded and installed automatically, the way Debian repositories do. That problem has already been solved, and the solution is better (it also gives you plenty of other features).

        Oh, and closed-source companies can have their own repositories too. Example: http://download.skype.com/linux/repos/debian/ [skype.com]

    • by Auroch ( 1403671 ) on Thursday November 05, 2009 @01:58PM (#29997866)

      This in particular seems like a solution in search of a problem to me. Especially since on a 64-bit distro pretty much everything, with very few exceptions, is 64-bit. In fact, I don't think 64-bit distributions contain any 32-bit software except for closed source that can't be ported, and compatibility libraries for applications the user might want to install manually. So to me there doesn't seem to be much point in trying to solve a problem that matters less and less as time passes and proprietary vendors release 64-bit versions of their programs.

      EXACTLY! We don't want choice, we want it to just work! Damnit, force people to do things the way they ought to do them, don't give them choice, they'll just screw it up.

      Especially when that choice makes things EASY!

    • by cheesybagel ( 670288 ) on Thursday November 05, 2009 @01:59PM (#29997878)
      It just shows Ryan isn't used to contributing free software to someone else's project. I once had to wait months before I got my code accepted into a free software project, and it wasn't the kernel. If the maintainers added every submission to a project, it would end up an unstable, unmaintainable mess. Code can last a long time, and someone will have to maintain it even after he's lost interest in it. I am especially leery of code that touches a lot of different places at the same time, as is undoubtedly the case here.
    • There are places this would be very useful to have. Any time we're distributing binaries to users, hosting binaries on a network file share, or carrying portable media, it's a big pain in the butt to maintain completely separate architecture trees. In some cases it wastes a lot of space too, if there are significant data files alongside the executables, because we generally wind up replicating those in each arch install tree.

      I've definitely appreciated OS X's universal binaries in the past, it's a shame to
      • by vadim_t ( 324782 )

        There are places this would be very useful to have. Any time we're distributing binaries to users, hosting binaries on a network file share, or carrying portable media, it's a big pain in the butt to maintain completely separate architecture trees. In some cases it wastes a lot of space too, if there are significant data files alongside the executables, because we generally wind up replicating those in each arch install tree.

        If your architectures are i386 and amd64 then you can just ship i386 and not bother wit

  • rude (Score:3, Insightful)

    by QuietLagoon ( 813062 ) on Thursday November 05, 2009 @01:47PM (#29997678)
    ...some showed up just to be rude...

    Oh well, so goes it with parts of the Linux culture.

    • Oh well, so goes it with parts of the Linux culture.

      Coulda been worse. He could have been trying to get something folded in to the OpenBSD kernel. Theo makes those surly Linux kernel developers look like Miss Congeniality. :)

  • by bcmm ( 768152 ) on Thursday November 05, 2009 @01:47PM (#29997686)
    This idea is kind of broken for Linux. On MacOS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to the data. On Linux, with a couple of dozen architectures, executable code *is* going to start to take up relevant amounts of space, and the effort involved in preparing the binaries will be nontrivial. If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32-bit) would need to check which archs a binary supported before downloading it, which is about as difficult as choosing which one to download would have been.
    • by argent ( 18001 ) <peter&slashdot,2006,taronga,com> on Thursday November 05, 2009 @01:57PM (#29997850) Homepage Journal

      On Linux, with a couple of dozen architectures,

      Kind of, but not really. No more than there are four architectures (PPC, ARM, X86, X86_64) for OS X. There are two architectures for Linux that actually matter, and they're the same two that run Snow Leopard: X86 and X86_64.

      I can see why people are going to get up in arms about this. I've been as big a RISC booster as anyone, I think Apple gave up on PPC too soon, and I'm still bitter about Alpha, but that game's over. 32-bit and 64-bit Intel architectures are what matter, and those are the ones that almost all binaries will work for. I'm not running YDL any more, and neither are you. Game's over, instruction sets lost to marketing. The game's over, the fat lady's sung, picked up her paycheck, and gone home to watch House on her Tivo. Give it up and quit holding up the bus.

    • Re: (Score:3, Interesting)

      by ceoyoyo ( 59147 )

      True, but the ability to handle such things can come in handy. As an example, suppose you've got a setup where you're running apps off a server. You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at. A fat binary is pretty much the only way to solve that problem.

    • Just wanted to say that MacOS for a while there supported 4 architectures: i386 and PPC, both in 32 and 64 bit. Very few apps actually shipped with all four, since there were also fallbacks in place. (A few high-end apps did, but only a few.)

      And even then there were a half-dozen utilities out there for 'cleaning' the architectures you didn't need out of the files. Which could get back a fair amount of disk space.

    • by volsung ( 378 )
      Minor nit: Mac OS X (until Snow Leopard) had to deal with 4 architectures:

        $ file /usr/lib/libbz2.dylib
        /usr/lib/libbz2.dylib: Mach-O universal binary with 4 architectures
        /usr/lib/libbz2.dylib (for architecture ppc7400): Mach-O dynamically linked shared library ppc
        /usr/lib/libbz2.dylib (for architecture ppc64): Mach-O 64-bit dynamically linked shared library ppc64
        /usr/lib/libbz2.dylib (for architecture i386): Mach-O dynamically linked shared library i386
        /usr/lib/libbz2.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
  • by Gopal.V ( 532678 ) on Thursday November 05, 2009 @01:48PM (#29997696) Homepage Journal

    In the entire forked-up mess of the unix tree, there was only one thing that anybody & everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week. For a good part of 4 years, I worked on portable.net, which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs. It worked nearly everywhere too.

    Binary compatibility never took off because there is so little that can be shared between binary platforms. Sure, the same file could run on multiple archs, but in reality that is no different from a zip file with six binaries in it. Indeed, it needed someone to build 'em all in one place to actually end up with one of these, which is more effort than letting each distro arch-maintainer do a build whenever they please. OS X build tools ship with the right cross-compilers in XCode, and they have more of a monoculture in library versions, looking backwards.

    Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah... yeah) is somehow a solution with no real problem attached. Unless you can make the default build-package workflow do this automatically, this simple step means a hell of a lot of work for the guy doing the build.

    And that's just the problems with getting a universal binary. Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches. But colour me uninterested in this particular problem he's trying to solve. If he manages to convince me that it's a real advantage over 4 binaries that I pick & choose to download, hell ... I'll change my opinion so quickly, it'll leave you spinning.

    • The problem isn't that it's not possible, it's that it's hard. Your argument is that since it's hard now, since the tools aren't ready for it, it shouldn't be done ...

      Sounds pretty silly to me.

      It would be hard to start from scratch and write a modern OS ... but that is indeed what Linux is.

      If you never take the effort to make the hard easier, it will remain hard. Changing from single-threaded to multithreaded is hard; do you think we should not do that either, because the tools to do it don't make it a cake wa

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        For grandma who has a netbook running an ARM processor, and a desktop or laptop running an x86 processor, it's probably a little different, don't you think?

        No because a package manager makes it easy to install software for the current arch. Even grandma doesn't benefit from having x86, AMD64 and arm binaries in a single package, much less from some random untrusted binary she downloaded from the internet.

      • by IICV ( 652597 ) on Thursday November 05, 2009 @03:25PM (#29999110)

        You missed half the argument. It's hard, and it's pointless.

        The grandma who has a netbook running ARM and a desktop running x86 will install software by going into Add/Remove Programs and picking "Fun Times Photo Album for Grandmas" out of a list. The package manager will figure out what needs to be installed for her, on both her ARM and her x86 computers.

        She's not going to go to some random website and download a random installer file and use it on both her computers - her kids have told her over and over again that that's not safe, and she may lose her knitting patterns if she does it.

        Seriously, the people who advocate this junk seem to be entirely unaware of the joys of package management. All FatELF does is re-solve a problem that package management has had licked for a couple of years now, and it solves the problem in a less efficient way.

        It's hard, yes - but it's not worth doing just because it's hard.

    • In the entire forked-up mess of the unix tree, there was only one thing that anybody & everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week.

      This guy worked in the closed-source world of video games, where it's often not even legal to share your source code (due to middleware licensing and trade secrets), and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc). It's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entir

      • Re: (Score:3, Insightful)

        by kill-1 ( 36256 )

        But the lack of universal binaries is not the reason why it's hard to release closed source software on Linux.

        • by adisakp ( 705706 )
          I agree. I never said universal binaries were the problem -- rather, it's the lack of binary compatibility. It's the whole "source compatibility" / "binary incompatibility" that gives the win to Windoze over Linux in the gaming world.
      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
  • by spitzak ( 4019 ) on Thursday November 05, 2009 @01:55PM (#29997808) Homepage

    My objection is that any such hierarchy of data could be stored as files.

    Linux needs tools so that a directory can be manipulated as a file more easily. For instance, cp/mv/etc should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default. Then programs are free to treat any point in the hierarchy as a "file". A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec'ing the directory itself. We also need filesystems designed for huge numbers of very small files and for making such manipulations efficient.

    We need the tools to be advanced into the next century, not stuck with the workarounds of previous ones as currently practiced on Unix and Windows.
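
    As a rough sketch of what "exec the directory" could mean in practice, here is a trivial launcher. The app.fat layout and the uname -m based file naming are invented for the example; they are not part of any existing convention or of the actual proposal.

      /* Sketch: run the right per-architecture binary from a "fat directory".
       * Hypothetical layout:  app.fat/x86_64, app.fat/i686, app.fat/armv7l, ... */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/utsname.h>

      int main(int argc, char **argv)
      {
          if (argc < 2) {
              fprintf(stderr, "usage: %s <fat-directory> [args...]\n", argv[0]);
              return 1;
          }

          struct utsname un;
          if (uname(&un) != 0) { perror("uname"); return 1; }

          /* Pick the sub-binary named after the running machine, e.g. app.fat/x86_64. */
          char path[4096];
          snprintf(path, sizeof path, "%s/%s", argv[1], un.machine);

          execv(path, &argv[1]);    /* argv[1] becomes the child's argv[0] */
          perror(path);             /* only reached if exec failed */
          return 1;
      }

    The kernel-level version of the idea would simply do this lookup in the exec path instead of needing a wrapper.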

  • a better idea.. (Score:5, Interesting)

    by Eravnrekaree ( 467752 ) on Thursday November 05, 2009 @01:57PM (#29997842)

    FatELF was never really a great idea in my opinion. Putting two binaries in a file is not a good way to solve the problem, as there are many more variations of CPU type, including all of the x86 variants, than just one or two. It would be a better idea to do something similar to the AS/400: include an intermediate form in the file, such as a syntax tree, convert it to native code at runtime on the user's system, and then store the native code inside the file next to the intermediate code. If the binary is moved to a new system, the native code can be regenerated from the intermediate code. This does not even require kernel support; the front of the file could contain shell code that calls the code generator installed on the system, generates the native code, and then runs it. This way, things like the various x86 extensions can also be supported, and so on.
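
    A minimal sketch of that compile-on-first-run idea, assuming kernel support really isn't needed: the stub checks for cached native code for the current machine, regenerates it from the intermediate representation if it is missing, and then execs it. The "irgen" code generator, the .ir suffix and the cache naming are all hypothetical, invented only for the example.

      /* Sketch: stub that (re)generates and caches native code per architecture. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/utsname.h>

      int main(int argc, char **argv)
      {
          struct utsname un;
          if (uname(&un) != 0) { perror("uname"); return 1; }

          /* Cached native code lives next to the program, keyed by architecture. */
          char native[4096], cmd[8192];
          snprintf(native, sizeof native, "%s.native-%s", argv[0], un.machine);

          if (access(native, X_OK) != 0) {
              /* No native code for this machine yet: run the (hypothetical)
               * code generator on the embedded intermediate representation. */
              snprintf(cmd, sizeof cmd, "irgen --arch %s -o %s %s.ir",
                       un.machine, native, argv[0]);
              if (system(cmd) != 0) {
                  fprintf(stderr, "code generation failed\n");
                  return 1;
              }
          }

          argv[0] = native;
          execv(native, argv);      /* hand over to the freshly built native code */
          perror(native);
          return 1;
      }

    Whether the intermediate form is a syntax tree or something AS/400-like is orthogonal to this dispatch step.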

    • Re:a better idea.. (Score:4, Insightful)

      by idontgno ( 624372 ) on Thursday November 05, 2009 @02:24PM (#29998228) Journal
      So you're advocating Java?
    • by jaclu ( 66513 )

      There is already a method for supporting multiple binary formats.

      It's called source code

      • Re:a better idea.. (Score:4, Insightful)

        by Eravnrekaree ( 467752 ) on Thursday November 05, 2009 @03:30PM (#29999180)

        Only the last phase of compilation, code generation, would occur on the user's computer. One of the problems with source code is that it can take hours to compile, and getting it to compile right is never easy enough for granny. The purpose of a universal executable is that it should be easy enough for granny to use, which means download, double-click, and it runs. None of this fiddling with a million dependencies and so on. Granted, the problem is partly due to the fact that each Linux distribution does things differently and puts things in a different place.

  • FatELF was the wrong solution to the problem. In the Linux community, we do have a cross-distribution application issue. But it's one of pure stubbornness.

    What do I mean? Suse has its way of setting up RPMs, Mandriva has its, RedHat (Fedora) has its. The three big names in RPM all fight each other over stupid things like RPM macros, when RPMs are all 95% the same. We can't decide how to classify anything, so we fight over stuff like Amusements/Arcade vs. Games/Arcade. To some degree the same issue exists be

    • We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!"

      That's because you don't have to descend into the hell that is rpmbuild, which was a pile of rotting dingo fetuses ten years ago and hasn't gotten one bit better since.

      It's long past time that they gave up that ghastly binary blob and defined a new "rpmx" format, that would look kind of like this:

      A gzipped or bzipped tarball, containing:

      1. a di

  • Petty fiefdoms and not-invented-here syndrome will continue to torpedo any chance of a decent Linux on the desktop. Until Linux has a single binary and a universal installation strategy, they will continue to be mostly harmless and largely irrelevant to the desktop market at large.
  • Maybe it's just me, but I see zero advantages to an executable with multiple binaries.

    Shouldn't this all be handled by the package manager? Isn't including all these binaries just jacking up download sizes for no gain?

    A boot CD that can run on multiple archs is the only real use I see for this, but I would have to think there is a better way to handle that than changing the fundamentals of executables and libraries.

    Maybe he received a less than warm reception from other devs because his idea provided virtually no

  • by Yvan256 ( 722131 ) on Thursday November 05, 2009 @02:14PM (#29998092) Homepage Journal

    I want fat binaries for microcontrollers! Give me binaries that can run on the PIC16F88, eZ80 and 68HC11!

    There's nothing worse than having to replace a $0.50 chip with another that costs $0.51!

  • The issue wasn't that there were lots of people saying "That's a stupid idea" or "That's a stupid implementation of an otherwise good idea."

    The issue was lots of people saying "You are stupid."

    There is a big difference.

    I'd weighed in on this, because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is, ideally, the same software load. (Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of c

  • by pclminion ( 145572 ) on Thursday November 05, 2009 @02:24PM (#29998232)
    I don't understand what the kernel has to do with any of this. Fat binaries can be (almost) completely implemented at the userspace level by extending the dynamic loader (ld-linux.so). The way this would work is that the fat binary would have a boilerplate ELF header that contains just enough information to convince the kernel to load it and launch its interpreter program, which could piggyback on the standard dynamic loader. The fat binary interpreter would locate the correct architecture within the fat binary, map its ELF header into memory, then call out to the regular dynamic loader to finish the job. The only hitch is that a 64-bit kernel will refuse to load a 32-bit ELF, and vice versa, so you would need an EXTREMELY minor patch to the kernel to allow it to happen. I mean like a one-liner.
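
    To make the idea concrete, here is a small sketch of just the architecture-selection step such a userspace interpreter would perform. The fat-index record layout is invented for illustration; it is not the real FatELF on-disk format, and a real interpreter would go on to mmap the chosen image and hand control to ld-linux.so.

      /* Sketch: pick the embedded ELF image that matches the running machine. */
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>
      #include <elf.h>                  /* EM_386, EM_X86_64, ELFCLASS64, ... */
      #include <sys/utsname.h>

      struct fat_record {               /* hypothetical per-architecture entry */
          uint16_t e_machine;           /* EM_386, EM_X86_64, EM_ARM, ...      */
          uint16_t elf_class;           /* ELFCLASS32 or ELFCLASS64            */
          uint64_t offset;              /* where that ELF image starts         */
          uint64_t size;
      };

      /* Map the uname -m string to an ELF machine number (deliberately incomplete). */
      static uint16_t current_machine(void)
      {
          struct utsname un;
          uname(&un);
          if (strcmp(un.machine, "x86_64") == 0) return EM_X86_64;
          if (un.machine[0] == 'i' && strstr(un.machine, "86")) return EM_386;
          if (strncmp(un.machine, "arm", 3) == 0) return EM_ARM;
          return EM_NONE;
      }

      /* Return the index of the record the loader should map, or -1 if none fits. */
      static int pick_record(const struct fat_record *recs, int n)
      {
          uint16_t want = current_machine();
          for (int i = 0; i < n; i++)
              if (recs[i].e_machine == want)
                  return i;
          return -1;
      }

      int main(void)
      {
          /* Fake index purely for demonstration. */
          struct fat_record recs[] = {
              { EM_386,    ELFCLASS32, 0x1000,  123456 },
              { EM_X86_64, ELFCLASS64, 0x20000, 234567 },
          };
          int i = pick_record(recs, 2);
          if (i < 0) { fprintf(stderr, "no matching architecture\n"); return 1; }
          printf("would map the ELF image at offset 0x%llx\n",
                 (unsigned long long)recs[i].offset);
          return 0;
      }

    The class field is where the "only hitch" above shows up: mixing 32-bit and 64-bit images behind a single boilerplate header is the part that needs the one-line kernel change mentioned.
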
  • by w0mprat ( 1317953 ) on Thursday November 05, 2009 @06:57PM (#30001694)
    Programming languages are already far too high-level, which incurs a performance hit. We should all be coding in assembler. Personally, for fast-executing binaries I prefer to tap the bits into the hard drive platter with a magnetized needle.
  • stupid idea (Score:4, Insightful)

    by jipn4 ( 1367823 ) on Thursday November 05, 2009 @08:00PM (#30002076)

    FatELF is a stupid implementation of a stupid idea. I.e., even if you want fat binaries, modifying the ELF format is the wrong way of doing it.

    Yay for the Linux kernel developers for keeping this kind of crap out of the kernel.
