
Ryan Gordon Ends FatELF Universal Binary Effort

recoiledsnake writes "A few years after the Con Kolivas fiasco, the FatELF project, which set out to bring 'universal binaries' to Linux so that a single binary file can run on multiple hardware platforms, has been grounded. Ryan C. Gordon, who has ported a number of popular games and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches. Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.' The launch of the project was recently discussed here. The FatELF project page and FAQ are still up."
  • by spitzak ( 4019 ) on Thursday November 05, 2009 @02:55PM (#29997808) Homepage

    My objection is that any such hierarchy of data could be stored as files.

    Linux needs tools so that a directory can be manipulated as a file more easily. For instance, cp/mv/etc. should pretty much act as though -r/-a were always on, and such recursive operations should be provided by libc and the kernel by default. Then programs would be free to treat any point in the hierarchy as a "file". A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself (a rough launcher sketch follows this comment). We would also need filesystems designed for huge numbers of very small files, so that such manipulations are efficient.

    We need the tools to advance into the next century, not to keep using the previous century's workarounds as currently practiced on Unix and Windows.
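
    For illustration, a minimal userspace launcher along those lines might look like the sketch below. The directory layout is made up for the example (one binary per architecture, named after `uname -m`); nothing in the kernel or libc supports exec-ing a directory today, so this wrapper only approximates the idea.

        # Minimal sketch (assumed layout, not a real convention): the "app" is a
        # directory holding one binary per architecture, named after `uname -m`:
        #     MyApp.app/x86_64, MyApp.app/i686, MyApp.app/aarch64, ...
        # Usage: python3 fatrun.py MyApp.app [args...]
        import os
        import platform
        import sys

        def run_fat_dir(app_dir, argv):
            arch = platform.machine()                 # e.g. "x86_64", same as `uname -m`
            candidate = os.path.join(app_dir, arch)
            if not os.access(candidate, os.X_OK):
                sys.exit(f"{app_dir}: no binary for architecture {arch}")
            os.execv(candidate, [candidate] + argv)   # replace this process with it

        if __name__ == "__main__":
            if len(sys.argv) < 2:
                sys.exit("usage: fatrun.py <app-directory> [args...]")
            run_fat_dir(sys.argv[1], sys.argv[2:])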

  • a better idea.. (Score:5, Interesting)

    by Eravnrekaree ( 467752 ) on Thursday November 05, 2009 @02:57PM (#29997842)

    FatELF was never really a great idea in my opinion. Putting two binaries in a file is not a good way to solve the problem, as there are many more variations of CPU type, including all of the x86 variants, than just one or two. It would be a better idea to do something similar to the AS/400: include an intermediate form in the file, such as a syntax tree, convert it to native code at runtime on the user's system, and then store the native code inside the file next to the intermediate code. If the binary is moved to a new system, the native code can be regenerated from the intermediate code. This does not even require kernel support: the front of the file could carry stub code that calls the code generator installed on the system, generates the native code, and then runs it (a rough sketch of that caching step follows this comment). This way, things like the various x86 extensions can also be supported, and so on.
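
    A purely illustrative sketch of that caching step: translate_ir_to_native() below is a placeholder for whatever code generator the system would actually ship, and the cache naming scheme is invented for the example.

        # Sketch of the AS/400-style flow described above: ship an intermediate
        # representation (IR), translate it to native code for the host CPU on
        # first run, cache the result next to the IR, and re-translate whenever
        # the binary lands on a different system.  translate_ir_to_native() is a
        # stand-in for a real code generator; nothing here actually compiles IR.
        import os
        import platform
        import sys

        def translate_ir_to_native(ir_path, native_path, arch):
            # Hypothetical hook: invoke the code generator installed on the system.
            raise NotImplementedError(f"no code generator for {arch} installed")

        def ensure_native(ir_path):
            arch = platform.machine()
            native_path = f"{ir_path}.native.{arch}"
            # Regenerate if the cached native code is missing or older than the IR.
            if (not os.path.exists(native_path)
                    or os.path.getmtime(native_path) < os.path.getmtime(ir_path)):
                translate_ir_to_native(ir_path, native_path, arch)
            return native_path

        if __name__ == "__main__":
            native = ensure_native(sys.argv[1])
            os.execv(native, [native] + sys.argv[2:])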

  • by Auroch ( 1403671 ) on Thursday November 05, 2009 @02:58PM (#29997866)

    This in particular seems like a solution in search of a problem to me. Especially since, on a 64-bit distro, pretty much everything, with very few exceptions, is 64-bit. In fact, I don't think 64-bit distributions contain any 32-bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually. So to me there doesn't seem to be much point in trying to solve a problem that exists less and less as time passes and proprietary vendors make 64-bit versions of their programs.

    EXACTLY! We don't want choice, we want it to just work! Damnit, force people to do things the way they ought to do them, don't give them choice, they'll just screw it up.

    Especially when that choice makes things EASY!

  • by ceoyoyo ( 59147 ) on Thursday November 05, 2009 @03:06PM (#29997982)

    True, but the ability to handle such things can come in handy. As an example, suppose you've got a setup where you're running apps off a server. You've got several different hardware platforms going, but you want your users to be able to double-click the server-hosted apps without worrying about picking the right one for the computer they happen to be sitting at. A fat binary is pretty much the only way to solve that problem.

  • by sbeckstead ( 555647 ) on Thursday November 05, 2009 @03:07PM (#29998000) Homepage Journal
    Petty fiefdoms and not-invented-here syndrome will continue to torpedo any chance of a decent Linux on the desktop. Until Linux has a single binary format and a universal installation strategy, it will continue to be mostly harmless and largely irrelevant to the desktop market at large.
  • by Joe Mucchiello ( 1030 ) on Thursday November 05, 2009 @03:08PM (#29998012) Homepage

    Commercial Games. That's who.

  • by morgauxo ( 974071 ) on Thursday November 05, 2009 @03:23PM (#29998220)
    Maybe he could have done so; he does seem to have some sort of programming background. Maybe not, if it's been mostly Windows stuff. If getting a program to work normally involves combing through log files, then it's really still just a programmer's toy. That's sad, considering the alternative (WMC) respects the copy flag.

    It sounds like he was just trying to make it work. If it takes digging to that level just to get it running, then there is a problem. I thought Myth was supposed to be in a working state at this point? Was he trying to build the latest right out of source control? If he was just installing a stable release with typical options, then I think it's pretty reasonable to expect that anything which could go wrong would be too high-level to require going to a bug log.

    I've given Myth a few tries myself (on Gentoo). It's the one thing I have yet to get working. I see lots of people do have it working, but it seems most have to resort to a live CD. I, for one, don't want to be stuck with a distro that was built with only Myth in mind. I want the machine I have already customized as I like it, just with mythbackend running in the background.
  • by pclminion ( 145572 ) on Thursday November 05, 2009 @03:24PM (#29998232)
    I don't understand what the kernel has to do with any of this. Fat binaries can be (almost) completely implemented at the userspace level by extending the dynamic loader (ld-linux.so). The way this would work is that the fat binary would have a boilerplate ELF header that contains just enough information to convince the kernel to load it and launch its interpreter program, which could piggyback on the standard dynamic loader. The fat binary interpreter would locate the correct architecture within the fat binary, map its ELF header into memory, then call out to the regular dynamic loader to finish the job (a userspace-only sketch of the slice-selection step follows this comment). The only hitch is that a 64-bit kernel will refuse to load a 32-bit ELF, and vice versa, so you would need an EXTREMELY minor patch to the kernel to allow it to happen. I mean like a one-liner.
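
    For illustration only, here is a userspace-only sketch of that slice-selection step. It does not use FatELF's real on-disk format or the ld-linux.so trick described above; it assumes an invented container layout (a record count, then (machine-name, offset, size) entries, then the raw per-architecture ELF images), copies the matching slice into an in-memory file, and execs it. The 32-bit-versus-64-bit kernel caveat still applies.

        # Invented container layout (NOT FatELF's actual format), little-endian:
        #   4-byte record count N, then N records of
        #   (16-byte machine name, NUL-padded; 8-byte offset; 8-byte size),
        #   followed by the raw per-architecture ELF images.  Linux-only
        # (relies on memfd_create and /proc/self/fd).
        import os
        import platform
        import struct
        import sys

        RECORD = struct.Struct("<16sQQ")

        def run_fat(path, argv):
            want = platform.machine().encode()
            with open(path, "rb") as f:
                (count,) = struct.unpack("<I", f.read(4))
                records = [RECORD.unpack(f.read(RECORD.size)) for _ in range(count)]
                for name, offset, size in records:
                    if name.rstrip(b"\0") == want:
                        f.seek(offset)
                        image = f.read(size)
                        break
                else:
                    sys.exit(f"{path}: no slice for {want.decode()}")
            # Drop the chosen ELF image into an anonymous in-memory file and exec it.
            fd = os.memfd_create("fat-slice", 0)
            os.write(fd, image)
            os.execv(f"/proc/self/fd/{fd}", [path] + argv)

        if __name__ == "__main__":
            run_fat(sys.argv[1], sys.argv[2:])
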
  • by dgatwood ( 11270 ) on Thursday November 05, 2009 @04:16PM (#29998978) Homepage Journal

    Another really big advantage is easier developer workflow. With multi-architecture binaries and libraries, you can test and debug the 32-bit and 64-bit versions of an application without rebooting into a separate OS, without building some weird chroot environment, without using a special linker that changes the paths of the libraries if it detects a 32-bit binary, etc. This means that your development system is essentially identical to the user systems (except for the kernel), and thus the likelihood of bizarre "Unable to reproduce" bugs goes way down.

    Another big advantage is that if you build a universal install DVD, you have half as many binary packages. That means a less complex installer and thus reduced potential for bugs, reduced install testing overhead, etc.

    Another big advantage is that when a user finds a 64-bit-specific bug in a tool, the user can do an "arch i386 grep 'blahblahblah' *" instead and work around it until somebody comes up with a fix. Without fat binaries, that's well beyond the ability of most typical computer users, including many IT people. You might as well tell them to fix the bug themselves. That doesn't do anybody any good....

    But probably the most important reason for it is that Linux is late to the party. Many other operating systems support fat binaries---Mac OS X, Windows, NetBSD, etc. It's not like this is a new idea, nor is there a lack of a clear benefit. Obviously if there weren't clear benefits, there wouldn't be such broad support for this concept. And that's not just a bandwagon appeal; people judge operating systems by what they can do, and if there's an important, user-visible feature that one OS is missing, that's a win for operating systems that do have the feature....

  • by Hobophile ( 602318 ) on Thursday November 05, 2009 @04:30PM (#29999186) Homepage

    Commercial Games. That's who.

    Exactly. Take Blizzard, who ships Windows and Mac versions of their games on the same media. Fat chance of getting an official Linux release in the absence of a universal binary solution. Blizzard tends to ignore platform-specific package formats in favor of their own installers, the better to control and customize the installation experience. By avoiding the standard MSI format on Windows, for instance, they avoid introducing a lot of unrelated dependencies and vastly simplify the post-release patching process.

    If you don't mind hacking around on the command line to get a game to work, the current state of affairs probably suits you just fine. But there's no business reason for Blizzard to support Linux users with an official release, if the best they could provide is a different set of command line inputs to type in. This of course assumes they would not develop installers for every Linux distribution on every compatible architecture, along with the necessary documentation and technical support for each. I think that's a fair assumption.

  • by Eric Green ( 627 ) on Thursday November 05, 2009 @05:05PM (#29999694) Homepage
    Regarding compiling: anybody doing cross-platform development by definition has the compilers to produce binaries for those platforms. Cross-compiling is not needed as long as you have a tool capable of tagging ELF hunks and concatenating them together into "fat" binaries and "fat" libraries. We've been there, done that; it's a solved problem. Your software repository is on an NFS share, you compile into an architecture-specific directory on each of your platforms to create the individual binaries that are to be turned into fat binaries and libraries, and then on the platform with the fat binary tools you run them to assemble the architecture-specific pieces into the actual binaries (thanks to ELF's hunk-based mechanism for assembling multiple hunks into one binary, in which unknown hunks are ignored); a small sketch of that assembly step follows this comment. At one point Apple was actually doing this for three different platforms, before realizing that dropping PowerPC support for Snow Leopard would sell more Intel Macs. In a prior job we had a compile lab of 20 different machines running different architectures or OS versions that we fired up to create the final build; each machine dropped its driblets into the proper place on the NFS share, and the final build machine put it all together into the release package. (That setup was not based on fat binaries, but the build process works the same for fat binaries, except that the final build machine does a bit more work.) It's called professional Unix development, and we were doing it decades ago, long before Linux existed.

    Regarding hard drive and network speed: in today's world of gigabit to the desktop, 10-gigabit backbones and 2-terabyte hard drives, I don't know what you're talking about with the "10 megabit" and "20 megabyte" cracks. You do realize that the primary expense in a networked workstation environment is administration, not hardware, right? The proper use for the local hard drive in a networked workstation environment is caching, not software installation. We knew this truth about workstation management twenty years ago, but for some reason it has been forgotten in a world where Microsoft and their deranged, horribly expensive and virtually impossible to manage workstation environment seems to be the model for how to do things. How many years of IT experience did you say you had, again? :)
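
    A small, purely illustrative sketch of that assembly step, using the same invented container layout as the reader sketch earlier in the thread (not FatELF's real format, and not real ELF hunk merging), and assuming each build machine has already dropped its binary into build/<arch>/myapp on the NFS share.

        # Pack per-architecture builds into one "fat" file using the invented
        # layout from the earlier sketch: 4-byte count, (16-byte machine name,
        # 8-byte offset, 8-byte size) records, then the raw images back to back.
        # Example: python3 mkfat.py myapp.fat x86_64=build/x86_64/myapp i686=build/i686/myapp
        import os
        import struct
        import sys

        RECORD = struct.Struct("<16sQQ")

        def build_fat(output, inputs):
            # inputs: list of (machine_name, path) pairs.
            header_size = 4 + RECORD.size * len(inputs)
            records, blobs, offset = [], [], header_size
            for machine, path in inputs:
                with open(path, "rb") as f:
                    blob = f.read()
                records.append(RECORD.pack(machine.encode(), offset, len(blob)))
                blobs.append(blob)
                offset += len(blob)
            with open(output, "wb") as out:
                out.write(struct.pack("<I", len(inputs)))
                out.writelines(records)
                out.writelines(blobs)
            os.chmod(output, 0o755)

        if __name__ == "__main__":
            pairs = [arg.split("=", 1) for arg in sys.argv[2:]]
            build_fat(sys.argv[1], pairs)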

  • by Grishnakh ( 216268 ) on Thursday November 05, 2009 @05:38PM (#30000158)

    It seems to me that this problem would be better solved by packaging tools, rather than messing with the kernel. After all, this doesn't seem like it would get around library dependency problems at all, unless you require everything to be statically linked.

    Besides, packaging tools in Linux, while certainly better than those of other OSes, could still use a lot of work. For instance, it'd be nice if we could standardize on a single one, instead of deb, rpm, and tgz all being used. I can understand why people can't agree on Gnome vs. KDE, but honestly, what is so great about deb/rpm that rpm/deb doesn't do? Or can't be made to do with some modifications? (Don't say "apt"; that's a layer above, and it has been made to work with rpm, plus it has a clone called zypper that OpenSUSE uses, which does basically the same thing from what I've seen.)

    How about if all the packaging folks decided to bury the hatchet and create a single package standard (we could call it "dpm" or "rpkg", or maybe "upkg") that does everything rpm and dpkg do? They could probably add support for multiple architectures too, which would make it easy for commercial software developers to distribute software to Linux users in a Linux-friendly way, without the users even needing to know whether they're using an i386, x86_64, or even ARM system (a rough sketch of that arch-selection step follows this comment). The total download size difference shouldn't be that much, since much of the data is not arch-specific (graphics and such), and since only the proper arch-specific binaries would be installed, the non-applicable stuff would be deleted along with the .upkg after installation, so there'd be no wasted space on the HD. Distros, of course, could keep using the same apt/zypper tools they use now to manage dependencies.
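
    A hedged sketch of the install-time arch selection such a hypothetical "upkg" would need; the archive layout and names below are invented for the example, not anything that exists today.

        # Install from a hypothetical multi-architecture "upkg" (a plain tar
        # archive here).  Assumed layout inside the archive:
        #   data/          arch-independent files (graphics, docs, ...)
        #   bin-x86_64/    binaries for x86_64
        #   bin-i686/      binaries for i686
        #   ...
        # Only data/ and the matching bin-<arch>/ tree get unpacked; the rest
        # is simply never extracted, so no disk space is wasted on it.
        import platform
        import sys
        import tarfile

        def install_upkg(pkg_path, dest="."):
            arch = platform.machine()
            wanted = ("data/", f"bin-{arch}/")
            with tarfile.open(pkg_path) as pkg:
                members = [m for m in pkg.getmembers() if m.name.startswith(wanted)]
                if not any(m.name.startswith(f"bin-{arch}/") for m in members):
                    sys.exit(f"{pkg_path}: no binaries for {arch}")
                pkg.extractall(dest, members=members)

        if __name__ == "__main__":
            install_upkg(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else ".")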

  • Re:mod up, please (Score:5, Interesting)

    by Anonymous Coward on Thursday November 05, 2009 @06:21PM (#30000714)

    Here's the log of the full conversation. He was hardly abused, and got persistent help over several hours which was patient and helpful, ultimately culminating in him realizing what he had done wrong and admitting to it.

    Clearly, the myth IRC folk are at fault.

    http://pastebin.com/m2cfd19dd

  • by tlambert ( 566799 ) on Thursday November 05, 2009 @09:28PM (#30002228)

    So, remind me again: why exactly is it not possible to implement all that in a package manager and we need to have a Really Fat ELF?

    Because Linux distributions can not agree on a single GUI technology, let alone a package manager.

    -- Terry

  • by zapakh ( 1256518 ) on Friday November 06, 2009 @04:15AM (#30003748)

    I didn't smear MythTV; I pointed out how arrogant assholes can ruin someone's experience and cause them to leave.

    In the pastebin link you cited as being "less one-sided", you are barking orders at people and citing your credentials. What you say about arrogance turning potentially contributing members of a community away is sometimes true, common enough to have become cliche, and quite unfortunate. However, you appear to have had a deep expectation that this is how you would be met. When you're convinced that you have good reason to "hate linux people", as you put it, you will tend to see what you believe. Especially after you have (admirably!) spent two frustrating days trying to find a solution.

    It puts my teeth on edge to read the tone of your post here, and also of your linked IRC log. It came off -- and I say this not as an insult but as a barometer -- in a similar way to this guy [bash.org]. I'm not saying you're like that guy; but the heaviness with which you tried to control the conversation could have been perceived as a sense of entitlement. I certainly would have perceived it that way had I been present, and I likely would have reacted in a way that reinforced your dislike of the denizens of help channels.

    It's only on multiple readings that I can see that you didn't actually have a chip on your shoulder, and did not actually possess the sense of entitlement that I attributed to you. Rather, you were venting frustration. Maybe you dreaded the trip you would have to make to #mythtv-users because you expected that you'd missed something obvious and would feel stupid when it was pointed out. If you are anything like me, this expectation will always render you very sensitive to being rubbed the wrong way by a rough sense of humor or an assumption that you are a noob (which, a priori, is the most likely hypothesis). If you're sensitive to it, then it doesn't matter how gently or politely they express this assumption; it will get your hackles up. And if the noob assumption is expressed less-than-gently because you opened with a statement that you intended to be humbly self-deprecating but which contained no mythtv-related query, you are likely to perceive it as a full-blown assault on your legitimacy.

    If there is any personal advice I can offer, it is to maintain a sense of humor when entering any situation like this. You'll be encountering a lot of strong personalities. Maybe you expect them to respect your frustration, your intelligence, and the time you put into a solution so far (they may assume you're a noob). Maybe they expect you to ask your question first thing, all on one line (you didn't). Your sense of humor is a sort of shock absorber to ride out the first few missed expectations and maintain your cool. It smooths over the beginning of the conversation. Small matters of etiquette can be allowed to slide on both sides. Thereafter, if someone's legitimately an asshole, they're easy to spot and ignore.

    As far as the nature of the community... hell, even if IRC were the most wretched hive of scum and villainy I'd still dread that support experience less than, say, Dell's.

  • Re:"Insightful"? (Score:3, Interesting)

    by segedunum ( 883035 ) on Friday November 06, 2009 @11:50AM (#30006118)

    I've never used a package manager that forced you to upgrade all dependencies to the latest version to install a package.

    Anyone with an ounce of sense and experience knows that if you have a package for the version of the software that you want, but it's only built for and available in a later version of your distribution, then installing it will result in a cascade that will as good as update your entire system. There wouldn't be dependencies otherwise. On a system where you can automatically recompile like Gentoo then this probably won't be the case, but on binary-only systems it most certainly will be. That's why you have a lot of distro hopping, churn and updating.

    Then you get it from elsewhere if the official repos don't provide it.

    Translation: In practice you don't get the software you want to install at all unless you give up on the package manager and wind back the clock a couple of decades.

    You can even build your own package, something you certainly ought to be capable of if you're applying your own patches to software. You can even set up your own repos!

    Which is something software vendors have steadfastly, and correctly, said they won't do, especially not for multiple distributions, versions of distributions and architectures! How's your multiplication? Deployment on this scale is error-prone and requires a ton of support that they just won't provide, because other, more popular platforms don't make them do this. You just don't get the applications, not even the up-to-date free/open source applications, that you get on platforms where this sort of thing isn't a problem. Screw you, in other words.

    So basically what you're saying is, "you're not using the package manager except if you are". Gee, really?

    Hmmmmm. You're the one who has just recommended building your own specific packages, or setting up repositories for multiple distributions, versions and architectures, with not the faintest idea of the costs involved in doing that, just to stay within that package management system, and you're wondering why someone might suggest staying outside of that brain damage? Hmmmmm. That 'Check Updates....' thing in Firefox that works everywhere else: why doesn't it fucking work on a Linux platform, and why does everyone need to redo the work of providing updates? It's a puzzle. I wonder.............

    Most of your post you've been toeing a fine line between being just wrong and being wrong and a trolling asshat. Guess which side you just landed on?....No, the short of it is that you're a moron and your entire post is bullshit, and both you and everyone who modded your post up need to be mercilessly beaten with a cluestick.

    I've come to the conclusion that there are a lot of borderline people who hover around Linux distributions, some of them even developers, who have never known what it is like to develop software for a living, or even as an independent free project, and to get it deployed and updated quickly and easily on users' systems. Package management must be the answer to that. You can't question it. You can't look at the Windows or Mac OS world, learn anything from it, and ask, "Why the hell do they have more free and open source applications packaged and updated regularly for their non-free/non-open-source platforms?"

    Package management is the one true solution to software installation. I mean, if it isn't, then the sky might turn fucking pink, or purple, or something. Christ. Anything could happen.

  • Re:"Insightful"? (Score:3, Interesting)

    by Lord Bitman ( 95493 ) on Friday November 06, 2009 @06:29PM (#30010662)

    And what do most package managers do? Utterly lazy dependency management. "Well, you need this package... so you should have the latest version of it. If you want another version, you should rename the package and depend on something else instead."

    I've never used a package manager that forced you to upgrade all dependencies to the latest version to install a package. All of them allow not just required packages but required versions of packages, and only force upgrades of dependencies when you don't have a sufficiently recent version.

    And anyone who has ever wanted to upgrade just one package can tell you that this is clearly insufficient, because every package ever made lazily specifies all its maintainer knows: "this package works with the version I have, therefore it requires the version I have." If this lazy way is the easiest method of specifying requirements, it is what will be used; you can tell, because it is what is used. If you are dealing with what package managers consider a non-trivial requirement, you will have run into this problem: "I know this requires version N. I have read the source, I know where Foo is being called, I know that this works in all versions since A. But the package maintainer didn't say 'requires this feature', they said 'requires version Q of package Foo', so I can't use the dependency management for this package." Blame the maintainer, not the system? Not bloody likely. Sometimes you see ugly hacks like "virtual packages" and "meta packages", which attempt to abuse the limits of the package manager and act as if a real provides/requires system were in place.

    And that would be almost excusable, except for the brain-dead "open source is king" approach to updates: "The whole thing's free anyway, why not just re-send the whole thing?" Binary patches are pretty much unheard of. Of course, sending the whole thing is really just a work-around because-

    Some can do patches. I think RPM can. But unless you're using dialup, they're not really that much of an advantage. And you also have the problem of having to provide patches from lots of versions to lots of versions. Or you can provide only patches from the last version to the current one, in which case they're useless for anyone who misses an upgrade.

    This is a solved problem. Some package managers actually /do/ send binary patches, as does every software company on the planet. If there were a valid excuse for not doing it, it wouldn't be a problem.

    Package managers generally do NOT bother to detect when they are about to clobber or alter "the wrong file". When they do, they don't bother to keep a record of what they /would/ consider to be "the right file", making "merging" impossible and difference examination a guessing game.

    I don't know any package manager that does this. For example, Pacman, the package manager of Arch (my current distro of choice), installs new versions of files with the suffix '.pacnew' if the old version was modified and doesn't clobber. (A sketch of that general technique follows this comment.)

    The "not clobbering" part is /usually/ true of configuration files, though it really all depends on what you consider to be "configuration", which you'll find yourself disagreeing with often enough to wonder why the hell the rule isn't applied universally. And yes, the second-half of that applies even to configuration files: Putting "oh, I didn't clobber this file, here's the one I wanted to stick somewhere" in a random tree, sometimes with no notification, and rarely with /useful/ notification ("speak now or forever hold your peace" notification is the bane of any upgrade, especially when combined with the point above about too-many-things-upgrading just because you wanted _one_ change), does nothing to help you find out what has changed between versions. If the only thi

  • Re:"Insightful"? (Score:1, Interesting)

    by Anonymous Coward on Friday November 06, 2009 @10:24PM (#30012036)

    I've never used a package manager that forced you to upgrade all dependencies to the latest version to install a package. All of them allow not just required packages but required versions of packages, and only force upgrades of dependencies when you don't have a sufficiently recent version.

    So instead of forcing you, they merely offer no other choice... that is a fail.

    Some can do patches. I think RPM can. But unless you're using dialup, they're not really that much of an advantage. And you also have the problem of having to provide patches from lots of versions to lots of versions. Or you can provide only patches from the last version to the current one, in which case they're useless for anyone who misses an upgrade.

    Those are excuses not to track what changes between RPM revisions. Patch revision n can include the deltas for revisions n-1, n-2, n-3, and so on, and still not require the whole package. They can be differential, not incremental.

    I don't know any package manager that does this. For example, Pacman, the package manager of Arch (my current distro of choice), installs new versions of files with the suffix '.pacnew' if the old version was modified and doesn't clobber.

    He is talking about change-tracking, not backing up configuration files. This plays into the lack of patching: neither the package manager nor the package installer can tell you the difference between package revision n and package revision n-1, because packaging systems go the all-or-nothing route.

    This is true on pretty much any OS. Multiple versions of the same package will install to the same paths, and your package manager would have to be pretty fucked up to do that. If you'd like to horribly violate widely adopted filesystem organization standards and patch your software a bunch to make it work properly with your new layout, you can do that, but there's no real gain.

    Not on Solaris, or on systems without package managers. Most (all?) Linux packages are both non-relocatable and have no concept of alternate roots. Uses include: installing/patching an OS root mounted somewhere else for recovery purposes; software shares (see all non-Linux OSs, though I suppose the general lack of commercial software helps Linux here); and chroots, which have quite interesting security implications because they are often not patched, being outside the package management. I think I could go on, but I'm tired.

    They manage packages

    They can install the latest versions of dependencies. It's hard to say they manage anything other than, what, the list of packages installed? They really violate the whole idea of "releases". I can't figure out how RedHat puts up with it. I guess by just not publishing regressions, because their package "management" can't do anything about them.

    Then you get it from elsewhere if the official repos don't provide it. You can even build your own package, something you certainly ought to be capable of if you're applying your own patches to software. You can even set up your own repos!

    Small problem being all the integration done midstream. Out-of-repo Firefox sucks dick, for instance, whereas other OSs have public APIs for the integration Firefox needs. That's not really a package manager problem but an OSS thing.

    Location sensitivity sucks.
    Overbearing uniqueness requirement sucks.
    Installing to alternate root sucks.
    Dependencies suck.

    Solution: keep the OS and user applications as separate as possible, install whole OS only.
    Package managers solve problems nobody has, and fail at things they should be good at. They mix system and user software. Modern OSs really need software repositories or stores, stable public APIs, and clear system/user separation. Package managers should assist those goals, not get in the way.
