
Ryan Gordon Wants To Bring Universal Binaries To Linux

Posted by timothy
from the grossly-obese-binaries dept.
wisesifu writes "One of the interesting features of Mac OS X is its 'universal binaries' support, which allows a single binary file to run natively on both PowerPC and Intel x86 platforms. While this comes at the cost of a larger binary file, it's convenient for end users and for software vendors distributing their applications. Linux has so far lacked support for such fat binaries, but Ryan Gordon has decided this should change."
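
For reference, this is roughly how such a universal binary is assembled and inspected on Mac OS X with Apple's lipo tool (the file names here are only illustrative):

    # glue two single-architecture builds into one universal binary
    lipo -create myapp_ppc myapp_x86 -output myapp
    # list which architectures the result contains
    lipo -info myapp
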
  • by Monkey-Man2000 (603495) on Sunday October 25, 2009 @08:34AM (#29863737)
    While this is true, of course a lot of free software can run on OS X as well. Compiling it is nearly as easy as on Linux, but it's still quite useful just to download a universal binary of the full application if one is available. Smaller apps aren't a big problem, but for bigger ones it can become an unnecessary hassle. For example, I just had to compile Inkscape from scratch on Snow Leopard and spent an afternoon tracking down and compiling all the dependencies because the universal binary doesn't currently run on 10.6. I really would have benefited from the universal binary if I weren't so bleeding edge.
  • by John Hasler (414242) on Sunday October 25, 2009 @08:34AM (#29863739) Homepage

    > It's dead simple. We need something like this in Linux.

    "aptitude install " (or the pointy-clicky equivalent) works for me.

  • by Sique (173459) on Sunday October 25, 2009 @08:38AM (#29863771) Homepage

    But... "compiling for your platform" is just another way to install software. You could wrap this in a little application (call it "setup"), where you click "Next >" several times, and as a result you have a binary for your platform.
    For those who know what they are doing, there is always the "expert configuration" button.
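
    A minimal sketch of such a "setup" wrapper, assuming a standard autotools-style source tree (the prompts and install prefix are arbitrary choices):

        #!/bin/sh
        # hypothetical "setup": build for the local platform behind a couple of prompts
        set -e
        printf 'Press Enter to build for this machine... ' && read _
        ./configure --prefix="$HOME/.local"
        make
        printf 'Press Enter to install... ' && read _
        make install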

  • Re:Linking problems (Score:3, Interesting)

    by WaywardGeek (1480513) on Sunday October 25, 2009 @08:38AM (#29863773) Journal

    While your reply sounds a bit like flame-bait, I basically have to agree. The format isn't a universal binary that gets translated to each machine architecture when installed. Instead, it's basically an archive of pre-compiled binaries for each platform you support. So, for example, my stupid Qt application has to be compiled separately for Fedora and Ubuntu. This technology would in theory allow me to merge the binaries into a single FatELF binary. Personally, I'd rather just provide separate .deb and .rpm files.

    However, the idea of a universal binary is cool. We could do something like the old p-code, where we compile to a virtual architecture and then translate it to machine code during installation. I liked the idea when they had it way back in the early 80's (late 70's?), and I was sad that we didn't have the compute power back then to make it fly. I bet we do now.
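
    A rough modern analogue of that p-code idea, using LLVM as the virtual architecture (illustrative only; LLVM bitcode is not a stable cross-version distribution format):

        clang -O2 -emit-llvm -c app.c -o app.bc   # compile once to a "virtual" ISA
        llc app.bc -o app.s                       # lower to native assembly at install time
        cc app.s -o app                           # assemble and link for this machine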

  • by C0vardeAn0nim0 (232451) on Sunday October 25, 2009 @08:44AM (#29863811) Journal

    You know, just trick the good ol' .DEB package format into including several archs, then let dpkg decide which binaries to extract (a rough sketch follows at the end of this comment).

    It's not as if on Linux the binaries are one big blob of executables, libs, images, videos, help files, etc., all distributed as a single "file" that is actually a directory with metadata which the Finder hides as a "program file", the way it is on the Mac.

    Being able to copy an ELF binary from one box to another doesn't guarantee it'll work, especially for GUI apps that may require other support files, so fat binaries on Linux would simply be a useless gimmick. Either distribute fat .DEBs, or just do the Right Thing(tm): distribute the source.
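
    A very rough sketch of how a maintainer script in such a fat .deb might pick the right payload (the package layout and paths are hypothetical):

        #!/bin/sh
        # postinst sketch: the package ships /usr/lib/fatpkg/<arch>/app for several
        # arches and links in the one matching this system
        set -e
        arch="$(dpkg --print-architecture)"
        if [ -x "/usr/lib/fatpkg/$arch/app" ]; then
            ln -sf "/usr/lib/fatpkg/$arch/app" /usr/bin/app
        else
            echo "no binary for $arch in this package" >&2
            exit 1
        fi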

  • by Hal_Porter (817932) on Sunday October 25, 2009 @09:03AM (#29863901)

    Nextstep isn't really gone, it just possessed MacOS and now it walks around in its body, a bit like VMS did to Windows.

  • Universal Source ? (Score:2, Interesting)

    by obarthelemy (160321) on Sunday October 25, 2009 @09:20AM (#29864001)

    I'm already amazed we have a universal x86 binary. With the architectural differences between an Atom and a Core7 or 9... I dare not think of all the inefficiencies this creates.

    Wouldn't it be better to shoot for a Universal Source, with the install step integrating a compile+link step? I know Gentoo does this (see the sketch at the end of this comment), but Gentoo is marginal within the marginality that is Linux on the desktop.

    I'm amazed you can do real-time x86 emulation on non-x86 CPUs, but still can't have a Universal Source.
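
    For what it's worth, this is roughly what a source-based install already looks like on Gentoo, where the compile+link step is the install step and can be tuned to the local CPU (the flags and package name are only examples):

        # /etc/portage/make.conf
        CFLAGS="-O2 -march=native"

        # installing a package then compiles it for exactly this machine
        emerge --ask inkscape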

  • by Alain Williams (2972) <addw@phcomp.co.uk> on Sunday October 25, 2009 @09:40AM (#29864121) Homepage
    Architecture Neutral Distribution Format [wikipedia.org] was tried some 20 years ago. The idea was to have a binary that could be installed on any machine. From what I can remember, it involved compiling to some intermediate form; when the package was installed, compilation to the target machine code was done.

    It never really flew.

    If someone wants to do this then something like Java would be good enough for many types of software. There will always be some things for which a binary tied to the specific target is all that would work; I think that it would be better to adopt something that works for most software rather than trying to achieve 100%.
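
    A minimal sketch of the "good enough for many types of software" route, using Java bytecode as the architecture-neutral form:

        javac Hello.java    # produces architecture-neutral bytecode (Hello.class)
        java Hello          # the local JVM does the machine-specific work at run time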

  • by slim (1652) <{john} {at} {hartnup.net}> on Sunday October 25, 2009 @09:42AM (#29864135) Homepage

    I agree that fat binaries are not appropriate for applications in the distribution's archive. And I agree that the first port of call for any user should be apt-get / up2date / etc.

    However, there are many kinds of app that might not get into the distro archive, for all kinds of reasons. Maybe it's of really niche interest, maybe it's too new, maybe the distro maintainer just isn't interested in it. Or maybe it's proprietary. Some people are willing to compromise on freedom.

    The last application I had trouble installing on Linux, due to glibc versioning problems, was a profiler for WebMethods Integration Server. Something like that is never going to get into the APT repository.

  • by PeterBrett (780946) on Sunday October 25, 2009 @09:48AM (#29864183) Homepage

    Well, the effort of packaging an application (a) for different platforms and (b) for different distributions is quite a duplicated one, involving a lot of people (and time).

    Firstly, this proposal has absolutely no relevance to the difficulty of packaging for different distributions.

    Secondly, packaging for different platforms has been solved. Most distributions now have compile farms where you submit a package specification (usually a very simple compilation script and a set of distribution-specific patches) and packages for all the various architectures get spat out automatically (an example follows at the end of this comment).

    This proposal is a solution looking for a problem, as far as Free software is concerned. The only utility is where the application is closed-source and can't go through a Linux distribution's normal package compilation and distribution workflow.
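
    As an illustration of the build-farm workflow described above, Fedora's Koji is one real example (the build target and package names here are made up):

        koji build f12 my-package-1.0-1.fc12.src.rpm   # the farm builds it for every supported arch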

  • by Anonymous Coward on Sunday October 25, 2009 @09:49AM (#29864187)

    That's true for free software that happens to be part of your distribution, yes. It's not true for non-free software, nor is it true for free software that isn't included in your distribution, or for free software that has updated versions that haven't been packaged yet.

    Basically, there are three ways to handle an architecture transition.

    First, you can use fat binaries. Apple have used this technique to transition between PowerPC and x86, and are using it again to transition between 32- and 64-bit. It works brilliantly.

    Second, you can take Microsoft's approach. Assume that 64-bit applications know what they're doing, and that 32-bit applications do not. Keep separate copies of all of the system libraries, one for each. Apply lots of crazy path (and registry) redirection voodoo to any running 32-bit applications. Depending on which mode the current application runs in, it'll have a completely different view of the filesystem (and registry). This approach only really works for libraries included in the system. Third-party libraries have to include multiple copies, in different files, and use manifest files to work out which one to use. You still need separate versions for 32- and 64-bit systems, even though the 32-bit version will usually run on a 64-bit system. For some types of application, you need the 64-bit version on a 64-bit OS, because the 32-bit version doesn't work. For things like plug-ins or video codecs, you usually either need just the 32-bit version, or both the 32- and 64-bit versions installed at once.

    Finally, you have the approach Linux currently uses. A 64-bit OS is 64-bit only. It can run statically linked 32-bit applications, but nothing else unless you install a separate set of 32-bit libraries. This approach is only possible on an OS where the vast majority of available software is both open source, and works when compiled as 64-bit. It would not be possible on Windows or Mac OS X. It works OK, until you try to install a single 32-bit binary application. Then you're screwed. You still have the problem of working out which version of something you actually need. Usually, you want a 64-bit version on a 64-bit system, but not always. Sometimes, you still need the 32-bit version (rare, because it often won't work), and some other times you would need both (very rarely - video codecs that need to be used in the native media player and a 32-bit web browser, for example).

    There are only two ways to "fix" this problem. The first is to take Microsoft's approach, which doesn't solve the problem of having to choose which version of something to install. It also requires a lot of extra maintenance overhead, and considering that your average Linux distribution has much more software than Windows does, and virtually every library in the repository is treated as a system library, it's probably not possible to actually maintain such a beast. If it were, then someone would probably have already done it.

    The other way is to take Apple's approach. You supply one copy of each library and executable, and let the OS work out which one to use. All libraries and plug-ins would be available to both 32- and 64-bit applications. The OS installer would be able to install the 64-bit version automatically if you have a 64-bit machine, and install the 32-bit version otherwise. You could upgrade from 32-bit to 64-bit simply by installing a 64-bit kernel.

    It becomes an even better idea if you throw ARM into the mix. While there would be no benefit in supplying x86/x86-64/ARM packages in a distribution, there would be a benefit in distributing stand-alone packages that way. It'd just cost a tiny bit of disk space for applications / libraries / plug-ins that were installed outside of the distribution's package manager. And you could easily write a tool to strip the unused parts of the binaries out if you don't need them; these exist for Mac OS X already. You could even do this as part of installing the package: if you install an x86/x86-64/ARM package on an ARM machine, just strip the x86 and x86-64 parts from the binary.
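
    The stripping tools mentioned above do exist on Mac OS X; a FatELF equivalent would presumably look much the same (the OS X commands are real, the file names are illustrative):

        lipo -info someapp                               # list the architectures in a fat binary
        lipo -thin x86_64 someapp -output someapp.thin   # keep only the slice this machine needs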

  • by TheRaven64 (641858) on Sunday October 25, 2009 @10:04AM (#29864289) Journal
    And who then complains about lacking features GNUstep has had for years? EVERYBODY.
  • Re:Apple dropped it (Score:4, Interesting)

    by peragrin (659227) on Sunday October 25, 2009 @10:24AM (#29864419)

    Apple does that. When 10.3 came out, Apple stopped installing OS 9 Classic by default as well. Support backwards compatibility for 2-3 generations and then phase it out: the first phase is simply not installing it by default, the second phase is not supplying it at all. Snow Leopard is the third generation of OS after Rosetta came out; installed by default in Tiger and Leopard, it stopped being installed by default in 10.6.

    Personally, I wish MSFT would do the same thing. I get really pissed when my "new application" requires the same installer Win95 had, and in order to run it I have to reboot into safe mode because my antivirus won't let it run. Seriously, why does an application built in 2009 still require the Win16 subsystem to run? Why aren't the coders moving on to new toolkits? Apple nudges and then pushes programmers forward. MSFT lets them stay in the previous century and use bare-metal knife switches to turn on the lights.

  • by Tacvek (948259) on Sunday October 25, 2009 @11:25AM (#29864753) Journal

    Just the other day I tried to compile Berkeley SPICE under Linux. That was a real pain in the ass, since it apparently not only predates Linux, but also predates ANSI C and POSIX being widespread enough to depend on. By default it assumed that void* pointers were not available, so it used char* pointers unless a specific #define was set, in which case some but not all of the erroneous char*'s were converted to void*. It made incorrect assumptions about which header files declared a function (relying on headers implicitly including other headers when they are not required to), and in some cases it bypassed including headers altogether and just added extern function declarations. Since this was K&R C, the function declarations did not list the arguments, but they still managed to use return types different from those specified in POSIX. And in a few cases the arguments passed were of the wrong type, because apparently they were specified differently in early UNIX.

    That is not counting the places where a function returned a local array, rather than a copy of the array. (In fairness, the author did comment this, asking if it should have been returning a copy instead.)

    Some of the function names conflicted with those used in C99, which they obviously could not have predicted, but which did mean I needed to compile with "-std=c89 -posix", something that took me a little while to realize. Etc.

    So despite the code being targeted at a Unix, it took me several hours to compile it for Linux. This just goes to show that porting can be a real pain, and end users should not be required to compile programs themselves. Now, locally compiling programs can be a valid install strategy, as seen in Gentoo, but a porter must have checked on each targeted platform that the code compiles as-is, or provided patches where needed. It is also a pain if the system you are installing on does not have compilation tools for some reason, such as being an embedded system with limited space.
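
    For what it's worth, a rough sketch of that kind of build invocation, assuming a plain Makefile that honours CC and CFLAGS (the feature-test macro is a guess at what old UNIX sources expect):

        make CC="gcc -std=c89" CFLAGS="-O2 -D_POSIX_SOURCE -Wimplicit-function-declaration"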

  • by jedidiah (1196) on Sunday October 25, 2009 @11:40AM (#29864851) Homepage

    Sometimes the insightful comment is necessarily flamebait.

    You want "platform independence"? We already had that with source.

    It just got FUD'ed to death because it seems kind of scary, and there never really were any nice n00b-friendly package mechanisms for it. Perhaps Gentoo comes close, but I doubt it would fit the (n00b) bill.

  • by 99BottlesOfBeerInMyF (813746) on Sunday October 25, 2009 @11:46AM (#29864893)

    The whole Linux distribution and installation system (such as with apt-get) is great for setting up a server, but it's very awkward and unnatural for desktop apps. Apple is far ahead in that respect, and I see no reason why Linux shouldn't follow their lead.

    You've got to be kidding? Super-easy installation and automatic security updates for all applications is 'awkward'?

    Neither Linux on the desktop nor OS X is perfect when it comes to software installation, and both should borrow from the other. Right now Ubuntu, probably the front runner for usability in desktop Linux, still has multiple programs used to manage packages and fails to handle installation from Web sites or disks well. It cannot run applications off a flash drive easily and reliably, and frankly it sucks for installation of commercial software. A lot of commercial and noncommercial software is simply not in the repositories, and I end up running a binary installer by hand or resorting to complex CLI copy-and-paste to get what I want running. But they're working on it, and the new RC has a new package manager with which they eventually intend to solve some of these problems. Both OpenStep and multiple binaries would further the goal of having more usable application installation. For example, one could install an application on a flash drive, then plug that drive into multiple computers with different architectures and run it without having to install it on the machines themselves (which is often not even an option).

    If I understood you correctly, your suggestion is that desktop software should be hard to find, it should be installed from whatever website I happen to ultimately find and it shouldn't automatically get security updates. Sounds fabulous.

    Like it or not, regardless of the platform I'm running, when I want new software I usually turn to Google. I read Web pages, reviews, and comparisons, and look at the developer's Web site. It follows, then, that an easier workflow is to install directly from a Web site by clicking a link in the Web browser. Ideally, this link would point to a software repository that pulls the application into my package manager; this is possible in some package managers but almost never used, because there is no single standard for package management on Linux (a sketch of this follows at the end of this comment). A less useful workflow is what I normally use: I go through the process of deciding what software I want by looking at Web pages, then I open my package manager and paste the name into it. Sometimes I find it there and install it with only one wasted step, but often I don't, so I go back to my browser and install by hand using whatever method is necessary.

    In short, there's a lot of room for improvement, and multiple binaries are one way to make an improvement for desktop use, though it may well never happen because Linux development is still dominated by the server role.
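
    The "click a link and have the package manager take over" workflow described above does exist in places; Ubuntu, for instance, registers a handler for apt: URLs (the package name is just an illustration):

        xdg-open "apt:inkscape"   # hands the request to the registered handler (apturl on Ubuntu)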

  • by jipn4 (1367823) on Sunday October 25, 2009 @11:54AM (#29864939)

    Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine.

    OS X needs fat binaries because it doesn't have package management. I wish people would stop trying to bring OS X (mis)features over to Linux. If I wanted to use OS X, I'd already be using it.

  • by jedidiah (1196) on Sunday October 25, 2009 @12:12PM (#29865041) Homepage

    ...and a great counterexample is MovieGate.

    It's right there on the Apple downloads page, and pretty prominent too.

    It doesn't do any of this "automatic dependency management". It could sorely use it.

  • by Hurricane78 (562437) <deleted&slashdot,org> on Sunday October 25, 2009 @02:58PM (#29866315)

    You obviously have no grasp of software design principles.

    Look at how it's done on pretty much all Linux distributions: you choose your architecture when you choose the install medium. From then on, the package manager pulls your packages for the right arch. There is no need to re-compile for every user if the result is the same; just offer specially optimized binaries for every arch right on the package repository servers (see the repository sketch at the end of this comment). That's the basic principle: never do something twice. It's like caching.

    The other principle is efficiency. In the case of such standard Linux package management, having the machine code for several architectures in one binary, despite knowing your architecture exactly, is not only inefficient for no freakin' reason at all, but downright stupid. Yeah, disk space is cheap. But does that mean you should flood it for no reason? No, of course not. Otherwise, why not just run "dd if=/dev/zero of=/var/cache/$(date +%N).trash bs=1G" on every boot? It's cheap, no? And it's just as pointless.

    What do you think? How likely is it that a user copies a binary to a different architecture? I'd say zero point zero zero. There is just no freakin' point to it, except "because I can". You have lost all connection to reality if you think it's a good idea.
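
    For context, a typical APT repository already keeps a separate binary index per architecture on the server side; a simplified layout (paths illustrative):

        dists/stable/main/binary-i386/Packages.gz
        dists/stable/main/binary-amd64/Packages.gz
        dists/stable/main/binary-powerpc/Packages.gz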

"There are things that are so serious that you can only joke about them" - Heisenberg

Working...