Ryan Gordon Wants To Bring Universal Binaries To Linux 487
wisesifu writes "One of the interesting features of Mac OS X is its 'universal binaries' feature that allows a single binary file to run natively on both PowerPC and Intel x86 platforms. While this comes at the cost of a larger binary file, it's convenient for end users and for software vendors distributing their applications. While Linux has lacked support for such fat binaries, Ryan Gordon has decided this should be changed."
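For readers wondering what a fat binary looks like mechanically: it is essentially an index of per-architecture payloads packed into one file, and the loader picks the slice matching the host. A minimal sketch in Python (the record layout below is invented for illustration; it is not FatELF's or Mach-O's actual on-disk format):

```python
import struct

# Hypothetical container: a count, then fixed-size records of
# (arch name, offset, size), then the concatenated payloads.
RECORD = "<16sII"  # 16-byte arch name, 32-bit offset, 32-bit size

def pack_fat(payloads):
    """payloads: dict of arch name -> bytes. Returns one fat blob."""
    header_size = 4 + len(payloads) * struct.calcsize(RECORD)
    records, body = [], b""
    for arch, data in sorted(payloads.items()):
        name = arch.encode().ljust(16, b"\0")
        records.append(struct.pack(RECORD, name, header_size + len(body), len(data)))
        body += data
    return struct.pack("<I", len(payloads)) + b"".join(records) + body

def extract(fat, arch):
    """Pull out the payload for `arch`, or None if it isn't present."""
    (count,) = struct.unpack_from("<I", fat, 0)
    for i in range(count):
        name, off, size = struct.unpack_from(RECORD, fat, 4 + i * 24)
        if name.rstrip(b"\0").decode() == arch:
            return fat[off:off + size]
    return None
```

Selecting the right payload at exec() time is then just a header walk, which is, as I understand it, roughly what the proposed kernel support would have to do.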
Gee, just 14 years (Score:4, Funny)
after the demise of NeXTStep!
(c)Innovation!!(tm)(R)
Re:Gee, just 14 years (Score:5, Informative)
Re: (Score:2, Informative)
Isn't that what http://autopackage.org/ [autopackage.org] is trying to do too?
I find it a benefit of Linux if there is only one instance of an XML library in my memory, though.
Re: (Score:3, Insightful)
As a former Autopackage developer, no, it isn't what Autopackage is about.
Autopackage attempts to do more than just packaging: it also tries to fight the binary compatibility problem. Probably the most widely known example is this one: compile a binary on distro A, run it on distro B, and get "symbol foobar@GLIBC_2.8 not found" errors. (There are a lot more binary compatibility issues than that though.)
However, making cross-architecture binaries is not one of Autopackage's goals.
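The failure mode described above can be modeled in a few lines: the binary records which versioned symbols it was linked against, and the dynamic linker checks them against what the installed libc exports. This is only a toy model; real version requirements live in the ELF .gnu.version_r section, not in a Python dict:

```python
def missing_symbols(required, provided):
    """required: (symbol, version) pairs a binary was linked against.
    provided: dict of symbol -> set of versions the installed libc exports.
    Returns the complaints the dynamic linker would print."""
    return [
        f"symbol {sym}@{ver} not found"
        for sym, ver in required
        if ver not in provided.get(sym, set())
    ]

# Built on distro A against glibc 2.8, run on distro B with an older glibc:
needs = [("foobar", "GLIBC_2.8")]
old_libc = {"foobar": {"GLIBC_2.4"}}
```

Running `missing_symbols(needs, old_libc)` reproduces exactly the kind of error the parent quotes.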
Re:Gee, just 14 years (Score:4, Insightful)
The problem is that disk space is NOT cheap at all, or plentiful. ARM-based Linux is used on a lot of embedded devices where there's only 16 or 32MB of flash space, total. This "fatELF" idea makes no sense, because adding in x86, x86_64, MIPS, Alpha, and SPARC binaries to your ARM binary will make everything take so much space that much more (expensive) flash memory would be needed.
This just isn't a very good idea. It makes sense for Apple, which only has to worry about 2 architectures on the desktop, and wants to make things easy for consumers, but that's it. It doesn't make sense for Linux. And I'll bet that Apple doesn't use this "universal binary" thing on their iPhone, either.
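The flash-budget argument is easy to quantify. A rough sketch, optimistically assuming data can be shared across architectures (a container of whole per-arch ELF files would duplicate the data sections too, making it even worse):

```python
def fat_size_kb(code_kb, data_kb, n_arches):
    """Approximate size of a fat binary: code duplicated once per
    architecture, shared data stored once.  Ignores headers and
    alignment padding."""
    return code_kb * n_arches + data_kb

# A 2 MB app (1.5 MB code, 0.5 MB data) built for six architectures,
# on an embedded device with a 16 MB (16384 KB) flash budget:
single = fat_size_kb(1536, 512, 1)  # 2048 KB
fat = fat_size_kb(1536, 512, 6)     # 9728 KB
```

The six-arch build eats well over half of a 16 MB flash part on its own, which is the parent's point.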
Re:Gee, just 14 years (Score:5, Insightful)
And I'll bet that Apple doesn't use this "universal binary" thing on their iPhone, either.
You'd lose that bet. Apple provides complete support for universal binary on iPhone, allowing developers to compile for ARMv6 (compatible with every iDevice) and ARMv7 (newer ISA; works on iPhone 3GS + iPod Touch 3G).
It makes sense for Apple, which only has to worry about 2 architectures on the desktop
Actually, 4: PowerPC, PowerPC 64, x86, and x86 64. Though for the purposes of your argument it's probably an immaterial difference.
Re: (Score:3, Informative)
The down side of this approach is that it consumes a bit more disk space because you have a copy of all of the data (not just the code) in every binary.
I'm not terribly familiar with GNUstep, but, in Mac OS X's implementation of application bundles, this is simply not true. Of course, architecture-dependent compiled executable code must necessarily be duplicated for each supported architecture, but the application data (which almost always is the most significant fraction of an application's size) is shared. The only reason data would have to be duplicated is if for some reason it is compiled into the binary. Though compiling data into the binary is com
Re: (Score:3, Insightful)
Comment removed (Score:4, Informative)
Re: (Score:2, Interesting)
Nextstep isn't really gone, it just possessed MacOS and now it walks around in its body, a bit like VMS did to Windows.
Re: (Score:3, Informative)
The 68k/PPC binaries were referred to as Fat Binaries. As a marketing term, Universal was only ever applied to PPC/x86.
When you say that MacOS doesn't use universal binaries any more, that's not strictly true. It doesn't ship with many that use both PPC and x86, because as you say OS 10.6 won't run on PPC Macs. But it still fully supports them and many software vendors are still producing them. These days, the same tech is used to store a 32 bit and a 64 bit version of the same x86 executable.
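The 32-bit-plus-64-bit trick is the same fat header: a small table in front of two complete Mach-O files. A sketch that builds and reads a synthetic header; the field layout and magic follow Apple's mach-o/fat.h, but the offset, size and align values below are made up:

```python
import struct

FAT_MAGIC = 0xCAFEBABE            # big-endian magic of a fat Mach-O file
CPU_TYPE_I386 = 7
CPU_TYPE_X86_64 = 7 | 0x01000000  # same base type plus the 64-bit ABI flag

def list_cputypes(blob):
    """Return the cputype of each slice a fat header advertises."""
    magic, nfat = struct.unpack_from(">II", blob, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat Mach-O file")
    # Each fat_arch record is five big-endian 32-bit fields:
    # cputype, cpusubtype, offset, size, align. cputype is field one.
    return [
        struct.unpack_from(">5I", blob, 8 + i * 20)[0]
        for i in range(nfat)
    ]

# Synthetic header for a 32-bit + 64-bit x86 universal binary
# (cpusubtype, offset, size and align are placeholder values):
hdr = struct.pack(">II", FAT_MAGIC, 2)
hdr += struct.pack(">5I", CPU_TYPE_I386, 3, 4096, 100, 12)
hdr += struct.pack(">5I", CPU_TYPE_X86_64, 3, 8192, 100, 12)
```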
Re:Gee, just 14 years (Score:5, Informative)
When did VMS take-over Windows? Which iteration? NT5 (2000/XP) or NT6 (Vista/Win7)? Or earlier?
Dave Cutler, the architect of VMS, developed Windows NT. Lots of Windows NT kernel-mode terminology - working sets, paged pools, IRQLs, IRPs and so on - comes from VMS and was not present in 16-bit Windows (which didn't really have any architecture).
http://windowsitpro.com/Windows/Articles/ArticleID/4494/pg/2/2.html [windowsitpro.com]
If you take the next letter after V you get W, after M you get N, and after S you get T, so W(indows)NT is a successor to VMS. The Windows NT kernel ran on DEC's preferred MIPS architecture (and later the DEC Alpha) before it ran on x86. Much of the development of 64-bit Windows was done on Alpha.
Actually, before Cutler worked on Windows NT at Microsoft he worked on a project to run Unix and VMS binaries on a single kernel in separate subsystems. Originally Windows NT supported Win32, POSIX, OS/2 and Win16+DOS subsystems, though Win32 obviously ended up being by far the most important. In fact, Windows NT was originally so CPU-agnostic that it ran Win16 and DOS applications using a software emulator on RISC chips before it ran them using V86 mode on x86.
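The V→W, M→N, S→T shift mentioned above is easy to check, and is the same one-letter trick as the old HAL→IBM joke:

```python
def shift_letters(name):
    """Advance each letter by one place in the alphabet."""
    return "".join(chr(ord(c) + 1) for c in name)
```

shift_letters("VMS") gives "WNT", and shift_letters("HAL") gives "IBM".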
Re: (Score:3, Informative)
DEC did sue Microsoft, but they settled for royalties.
Re: (Score:3, Informative)
> The way you express it, DEC would have had a case
> against Microsoft for stealing their technology. Are
> you aware of any evidence that this happened?
As a matter of fact, yes. See http://www.roughlydrafted.com/2009/07/30/readers-write-how-microsoft-got-windows-nt/#more-3661 [roughlydrafted.com]
| "So, Cutler walked down the street to Microsoft and offered them
| Mica which became NT. Later DEC sued MS and, in an out of court
| settlement, got royalties for the filched technology. Part of the
| deal included targeting
Only useful for non-free applications (Score:5, Insightful)
If you have access to the source, you can always compile a version for your platform. The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.
Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac. Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.
Re:Only useful for non-free applications (Score:5, Insightful)
Re: (Score:3, Interesting)
But... "compiling for your platform" is just another way to install software. You could wrap this in a little application (call it "setup"), where you click "Next >" several times, and as a result you have a binary for your platform.
For those who know what they are doing, there is always the "expert configuration" button.
Re: (Score:2)
But then you need a fat binary for your little installation program.
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Re:Only useful for non-free applications (Score:5, Insightful)
But... "compiling for your platform" is just another way to install software. You could wrap this in a little application (call it "setup"), where you click "Next >" several times, and as a result you have a binary for your platform.
Wow, the lack of grasp on reality around here really amazes me sometimes. But it looks like it worked for you. The open source fanatic fan boys shot your karma through the roof. Congratulations!
Compiling non-trivial applications from source can take a long time. That fact alone can make precompiled binaries a big win for most users.
I did the "compile from source" thing for a long time on FreeBSD before finally realizing the pointlessness of it all. Not only was I completely unnecessarily beating up on my hardware, but spending far too much time waiting for compiles to complete.
These days, I grab precompiled packages whenever possible, and you know what? It's a hell of a lot better.
Re: (Score:3, Interesting)
You obviously have no grasp of software design principles.
Look at how it's done on pretty much all Linux distributions: You choose your architecture when you choose the install medium. From then on, the package manager pulls your packages for the right arch. There is no need to re-compile it for every user, if it's the same. Just offer specially optimized binaries for every arch right on the package repository servers. That's the basic principle: Never do something twice. It's like caching.
The other one is
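The "never do something twice" principle the parent describes is exactly what repository metadata encodes: one source package, several prebuilt binaries, and the client just filters by architecture. A stripped-down sketch (the index schema is invented, not real apt/yum metadata):

```python
def select_package(index, name, arch):
    """Mimic a package manager choosing the prebuilt binary for one
    architecture.  'all' marks arch-independent packages."""
    for pkg in index:
        if pkg["name"] == name and pkg["arch"] in (arch, "all"):
            return pkg["url"]
    return None

# One source package, compiled once per arch on the repository side:
index = [
    {"name": "hello", "arch": "amd64", "url": "pool/hello_amd64.deb"},
    {"name": "hello", "arch": "armel", "url": "pool/hello_armel.deb"},
    {"name": "hello-doc", "arch": "all", "url": "pool/hello-doc_all.deb"},
]
```

Each user downloads only their own slice, so nothing is duplicated on the user's disk, unlike a fat binary.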
Re: (Score:3, Insightful)
16 2.0 GHz cores and compiling will only be marginally more time-intensive than binaries.
On what kind of battery?
Re:Only useful for non-free applications (Score:4, Insightful)
You can't tell me you've never run into the situation where installing a single open source package ends up taking you down a three hour maze of dependencies. Sure, sometimes you get lucky and everything just works, but many other times you discover that application A needs libraries B and C installed, and B needs libraries D, F and G, and then the version of F you downloaded isn't compatible with package Y, so you try to upgrade Y only to discover that it doesn't work with package Z, until you say "fuck it" and just go try to find a binary.
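The "three hour maze" is, formally, a walk of the dependency graph, which is precisely what a package manager automates. A minimal resolver sketch (no version constraints, and those conflicting versions are where the real pain in the story above comes from):

```python
def install_order(deps, target):
    """deps maps a package to the packages it needs.  Returns an
    install order with dependencies first: a depth-first topological
    sort.  Assumes the graph is acyclic."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps.get(pkg, []):
            visit(dep)
        order.append(pkg)

    visit(target)
    return order

# The maze from the comment above: A needs B and C, B needs D, F and G.
deps = {"A": ["B", "C"], "B": ["D", "F", "G"]}
```

install_order(deps, "A") yields every dependency before the package that needs it, ending with A itself.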
Re:Only useful for non-free applications (Score:5, Insightful)
"Not everyone is skilled enough to compile the source on their own"
By "end user" we can here understand "distribution maintainer", who already has the skills to compile the source (and that's only one part of the many things that have to be done in order to integrate some software in a distribution).
"I would think this might be useful for distro maintainers who do not want to maintain separate packages across multiple architectures"
But they have to: they still must build and integrate for their supported platforms, then rebuild when bugs are found or the software is upgraded, then test... It's just the last step (producing the actual binary packages) that changes: instead of multiple packages you'd end up with a single multiplatform package. The distributor still needs (almost) as much disk space and infrastructure as before, but each and every user ends up spending much more space on their hard disk (imagine the fat binary for, say, Debian, supporting eleven platforms).
And then, please note that this allows single binaries across different hardware platforms but not across different compilation targets (so it won't be useful for obtaining a single binary for, say, amd64 across Debian, Red Hat and SUSE).
It seems it will only benefit those who want to publish their software in binary-only form outside the framework of established distributions, and that means closed source software. Of course they can run their business however they see fit; it's just that they don't get my sympathy, so I don't give a damn about them.
Re:Only useful for non-free applications (Score:4, Interesting)
I agree that fat binaries are not appropriate for applications in the distribution's archive. And I agree that the first port of call for any user should be apt-get / up2date / etc.
However there are many kinds of app that might not get into the distro archive, for all kinds of reasons. Maybe it's of really niche interest, maybe it's too new, maybe no distro maintainer is interested in it. Or maybe it's proprietary. Some people are willing to compromise on freedom.
The last application I had trouble installing on Linux, due to glibc versioning problems, was a profiler for WebMethods Integration Server. Something like that is never going to get into the APT repository.
Re: (Score:3, Informative)
"The last application I had trouble installing on Linux, due to glibc versioning problems, was a profiler for WebMethods Integration Server. Something like that is never going to get into the APT repository."
Why not? If it's free software it certainly can go in the "contrib" repo; if it's closed source but still redistributable it will go in "non-free". Being "niche" doesn't preclude it from being in the repos: usually if it is license-compatible and there's at least one person willing to take the effo
Re:Only useful for non-free applications (Score:5, Insightful)
Please elaborate.
I too agree that this is pointless for the end user in Linux, at least when it comes to free software. Only closed binary blobs will benefit, which IMHO is not something worth putting effort towards helping. They made their design choices and accepted the reality in doing so.
As for the end user, she should just use the package manager of her distro and find whatever she needs, worrying about neither compiling nor platforms.
For example, in Debian/Ubuntu you could easily package your installer to simply drop a file in /etc/apt/sources.d. Not only will the user be able to use the package manager to install your app like any other, she will also get the security updates you publish.
Let the package system handle these things; it does them well and doesn't bloat your boat.
Re:Only useful for non-free applications (Score:4, Informative)
"Package managers are a necessary evil."
Your opinion. I've extensively managed software installations on Linux and Windows and even if package managers are a necessary evil, they are much better than the alternatives. By far.
"If I want software [...] I don't have to hope that the company I bought the operating system has put it in their database."
You forget that a package manager is one thing and a package source a very different one. Any company that cares can provide its own tightly integrated package source for a distribution, without permission or cooperation from the operating system vendor (yes: even closed source vendors can do that). And by using the package manager the end user gets centralized software inventory and upgrades for free, without having to chase each and every vendor's procedures as they reinvent the wheel.
"Package managers are only necessary because of the fragmented nature of the Linux universe."
Oh! That certainly explains why on the Windows side they have reinvented them (the control panel's install/uninstall app, InstallShield, Windows Update, MSI files...), only worse.
Re: (Score:3, Insightful)
Something else occurs to me... if you can't be bothered to create packages for distinct platforms (in this case CPU architectures) then it seems that you probably can't be bothered to go through the even greater trouble of actually testing them.
If you support both x86 and PPC with any level of diligence then some sort of fat binary format really doesn't buy you that much.
If you are any good, compiling source and building packages is COMPLETELY AUTOMATED.
This is a very important fact lost on the clueless n00bs.
Re: (Score:3, Insightful)
If you are any good, compiling source and building packages is COMPLETELY AUTOMATED.
If it is as you say, then please explain why, e.g., Debian has a whole legion of people called "developers" whose sole purpose in life is to take sources from other people and create deb packages?
It appears you don't play video games (Score:3, Insightful)
I do not propose anything. Why should I?
You propose no solution for distribution of non-free software. You propose no solution for funding development of free video game software. Therefore, you appear to propose the elimination of the video game industry. A lot of Slashdot users who like to play video games disagree with your proposal.
Re:Only useful for non-free applications (Score:4, Interesting)
Just the other day I tried to compile Berkeley SPICE under Linux. That was a real pain in the ass, since it apparently not only predates Linux, but also predates ANSI C and POSIX being widespread enough to depend on them. By default it assumed that void* pointers were not available, so it used char* pointers unless a specific #define was set, in which case some but not all of the erroneous char*'s were converted to void*. It made incorrect assumptions about which header files declare which functions (relying on header files implicitly including other header files when they are not required to), and in some cases it bypassed the header files entirely and just added extern function declarations. Since this was K&R C, the function declarations (prototypes) did not list the arguments, but they still managed to use return types different from those specified in POSIX. And in a few cases the arguments passed were of the wrong type, because apparently they were specified differently in early UNIX.
That is not counting the places where a function returned a local array, rather than a copy of the array. (In fairness, the author did comment this, asking if it should have been returning a copy instead.)
Some of the function names conflicted with those used in C99, which they obviously could not have predicted, but did mean I needed to compile with "-std=c89 -posix", which took me a little while to realize. Etc.
So despite the code being targeted at a Unix, it took me several hours to compile it for Linux. This just goes to show that porting can be a real pain, and end users should not be required to compile programs themselves. Now, locally compiling programs can be a valid install strategy, as seen in Gentoo, but a porter must have checked on each targeted platform that the code compiles as-is, or provided patches if needed. It also is a pain if the system you are installing on does not have compilation tools for some reason, such as being an embedded system with limited space.
Re:Only useful for non-free applications (Score:4, Interesting)
Re:Only useful for non-free applications (Score:5, Informative)
While you may be able to claim none of those points are overly compelling and target a very small part of the population, you have to recognize there's more than just satisfying non-free applications. Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).
Re: (Score:3, Insightful)
Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).
He means free as in Stallman-free, not free as in cost. That's what I don't like about this 'free' as in 'freedom' thing: it's needlessly confusing, trying to change the meaning that first comes to people's minds. They could've just gone with libre or something.
Re:Only useful for non-free applications (Score:4, Funny)
Thank god English is free of that stupid distinction.
Re:Only useful for non-free applications (Score:4, Informative)
It was... a joke. Thank god English is free of that stupid distinction?
Re:Only useful for non-free applications (Score:4, Insightful)
Re:Only useful for non-free applications (Score:5, Funny)
Yes, that's right. That's why a 'freeman' was someone you didn't have to pay for his work, whereas a 'slave' was, er...
Re: (Score:3, Informative)
Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).
I'm sorry, I should have been more clear. I mean free as in freedom. MS Visual Studio Express isn't free, it just doesn't cost any money to purchase.
Re: (Score:2)
The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.
On the other hand, it's a poor platform if it is too hostile to third party software (some of which will be sufficiently specialist to be effectively commercial-only, because you're really paying for detailed support). The big benefit is being able to say "this is a Linux program" as opposed to "this is a 32-bit x86 Linux program"; for most end users this is just a much easier statement to handle because they don't (and won't ever want to) understand the technical parts. (There's a smaller benefit to people
Re: (Score:2)
Re: (Score:2)
There are already smart update systems: there is a single source package which is compiled into multiple binary packages, and your smart client only transfers the binaries appropriate for the architecture it uses. Because different architecture versions have different filenames, the packages themselves can already sit alongside each other inside a distribution repository... There is nothing currently stopping you creating a multi-architecture install DVD. The reason it's not done is because it would b
Re: (Score:2)
Actually, there's one case where I can see this being useful. I was talking a while ago to some of the OpenBSD developers about the planned Dell laptops that had both ARM and x86 chips. Their idea was to have a /home partition shared between the two and let users boot OpenBSD on either. If you had fat binaries, you could share everything.
The canonical use for fat binaries with NeXT was for applications on a file server. You would install the .app bundle on a central file server and then run it from wor
Re: (Score:2)
You could take that a step further actually...
Boot the core OS on the ARM CPU, and use that for all your day-to-day tasks, but power up the x86 on demand for heavy computing workloads. Think of the early PPC Amiga add-on cards: the core system was still 68k based, but you could use the PPC chip for certain power-hungry apps or games.
Not sure how hard it would be to engineer, at the very least you could boot the x86 system headless, and have a virtual network between the two so you could access it virtually re
Re: (Score:2)
Maybe "most apps for Linux are free and the source is available" partly due to difficulties of distributing and installing binaries?
The whole Linux distribution and installation system (such as, with apt-get) is great for setting up a server, but it's very awkward and unnatural for desktop apps. Apple is far ahead in that respect, and I see no reason why Linux shouldn't follow their lead.
I read an opinion somewhere, and it made sense to me, that Linux treats all software as system software -- as part of th
Re:Only useful for non-free applications (Score:5, Insightful)
You've got to be kidding? Super-easy installation and automatic security updates for all applications is 'awkward'?
If I understood you correctly, your suggestion is that desktop software should be hard to find, it should be installed from whatever website I happen to ultimately find and it shouldn't automatically get security updates. Sounds fabulous.
Don't get me wrong, I agree that package management systems have their flaws (even inherent ones) but you just aren't making a good case against them... You could start with explaining what's unnatural about "Open 'Add applications', check what you want, click Install", and then continue with explaining what's awkward about totally automatic security updates.
Re:Only useful for non-free applications (Score:4, Interesting)
The whole Linux distribution and installation system (such as, with apt-get) is great for setting up a server, but it's very awkward and unnatural for desktop apps. Apple is far ahead in that respect, and I see no reason why Linux shouldn't follow their lead.
You've got to be kidding? Super-easy installation and automatic security updates for all applications is 'awkward'?
Neither Linux on the desktop nor OS X is perfect when it comes to software installation, and both should borrow from the other. Right now Ubuntu, probably the front runner for usability in desktop Linux, still has multiple programs used to manage packages and fails to handle installation from Web sites or disks well. It cannot run applications off a flash drive easily and reliably, and frankly it sucks for installation of commercial software. A lot of commercial and noncommercial software is simply not in the repositories, and I end up running a binary installer by hand or resorting to complex CLI copy-and-paste to get what I want running. But they're working on it, and the new RC has a new package manager that is eventually intended to solve some of these problems. Both OpenStep-style bundles and multi-architecture binaries would further the goal of more usable application installation. For example, one could install an application on a flash drive, then plug that drive into multiple computers with different architectures and run it without having to install it on the machines themselves (which is often not even an option).
If I understood you correctly, your suggestion is that desktop software should be hard to find, it should be installed from whatever website I happen to ultimately find and it shouldn't automatically get security updates. Sounds fabulous.
Like it or not, regardless of the platform I'm running, when I want new software I usually turn to Google. I read Web pages and reviews and comparisons and look at the developer's Web site. It follows then that an easier workflow is to directly install from a Web site by clicking a link in the Web browser. Ideally, this link should be a link to a software repository that will pull the application into my package manager; this is possible in some package managers but almost never used, because there is no single standard for package management on Linux. A less useful workflow is what I normally use: I decide what software I want by looking at Web pages, then open my package manager and paste the name into it. Sometimes I find it and install from there with only one wasted step, but often I don't, so I go back to my browser and install by hand using whatever method is necessary.
In short, there's a lot of room for improvement and multiple binaries are one way to make an improvement for desktop use, but which may well never happen because Linux development is still dominated by the server role.
Re:Only useful for non-free applications (Score:4, Informative)
On my mac, I just download the app. Run it. If the app supports auto updating, it just hooks in on first run.
No package manager required. No dependency tracking, it just works. When I want to uninstall it, I just delete it and it cleans itself up on its own, sometimes not completely until next login.
A great example of this is CrossOver for Mac.
A package manager is nice for finding apps, but trying to say that Apple's system is bad in comparison is just silly. When you get a bunch of commercial vendors together, putting them all on the same 'repository' gets to be a bitch; they fight too much. This is why it's rare for commercial software unless they can buy their way to the front of the display list.
No one is suggesting it be hard to find, not even Apple, which is why they have their own site with the common Mac software you can buy, or download if it's free or has a trial.
You can't compare Linux package managers, which are practically designed to be anti-commercial, to a commercial environment. It's just not the same ballgame.
Re: (Score:3, Interesting)
...and a great counterexample is MovieGate.
It's right there on the Apple downloads page, and pretty prominent too.
It doesn't do any of this "automatic dependency management". It could sorely use it.
Re: (Score:2)
Yes... you can.
Let's use OpenOffice.org as an example, if for no other reason than I was looking into building it optimized specifically for my computer (Windows).
Step 1) Getting the source [openoffice.org]
Okay, I probably don't need testautom
Re:Only useful for non-free applications (Score:4, Insightful)
No, my example wasn't a Linux one. Who cares. The main point is that it's not just that easy to build from source.
Well, since TFA is about a fat binary system for Linux, it is kinda relevant to narrow your scope to just Linux. How stuff in Windows or any other operating system work has nothing to do with this new Linux-specific feature.
That said, Windows is probably the worst platform for consumers wanting to compile their own applications. It doesn't provide any tools to do so by itself and if the source you want to compile doesn't include something like a Visual Studio project file, you're in for a very hard time. Linux doesn't suffer this fate at all. Compiling an application is in most cases nothing more complex than typing "./configure && make" and you're good to go.
Besides, what is so horrible about having fat binaries on Linux?
Nothing. I'm not saying it is. In fact, I'm saying it isn't. It just doesn't surprise me that it took a long time before someone started to develop something like this, while other platforms had this feature for quite a few years, because the need for this on Linux isn't on the same level as it was for Mac OS X back when Universal Binaries made their entry.
Does Linux even need them? (Score:5, Insightful)
Re: (Score:2)
Well, the effort of packaging an application (a) for different platforms and (b) for different distributions is quite duplicated, involving a lot of people (and time).
Re:Does Linux even need them? (Score:4, Interesting)
Well, the effort of packaging an application (a) for different platforms and (b) for different distributions is quite duplicated, involving a lot of people (and time).
Firstly, this proposal has absolutely no relevance to the difficulty of packaging to different distributions.
Secondly, packaging to different platforms has been solved. Most distributions now have compile farms where you submit a package specification (usually a very simple compilation script and a set of distribution-specific patches) and packages for all the various architectures get spat out automatically.
This proposal is a solution looking for a problem, as far as Free software is concerned. The only utility is where the application is closed-source and can't go through a Linux distribution's normal package compilation and distribution workflow.
We need 1-file installs (Score:4, Insightful)
We don't need the universal binary, so much as we need the "1-file install" idea that MacOS has. This would greatly simplify installing a standalone application.
For those of you who don't know, if you download an app for MacOSX (say, Firefox) you are presented with one icon to drag into your "Applications" folder. This is really a payload, a "Firefox.app" directory that contains the program and its [static?] libraries. But to the user, you have dragged a single "file" or "app" into your "Applications" folder - thus, installing it.
It's dead simple. We need something like this in Linux.
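The .app "file" described above is just a directory with a conventional layout; the file manager hides the details. A sketch that builds the skeleton (the directory names follow Apple's bundle convention; the Info.plist content is only a placeholder):

```python
import os

def make_app_bundle(root, name):
    """Create the skeleton of a Mac-style .app bundle under `root`
    and return the path to its Contents directory."""
    contents = os.path.join(root, name + ".app", "Contents")
    os.makedirs(os.path.join(contents, "MacOS"))      # executable(s), possibly fat
    os.makedirs(os.path.join(contents, "Resources"))  # shared, arch-independent data
    with open(os.path.join(contents, "Info.plist"), "w") as f:
        f.write("<plist><!-- bundle metadata placeholder --></plist>\n")
    return contents
```

"Installing" the app is then one directory copy, and uninstalling is one delete, which is the whole point.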
Re:We need 1-file installs (Score:5, Interesting)
> It's dead simple. We need something like this in Linux.
"aptitude install <packagename>" (or the pointy-clicky equivalent) works for me.
Re: (Score:3, Funny)
Re: (Score:2)
"aptitude install <packagename>" (or the pointy-clicky equivalent) works for me.
No, it doesn't. "yum install firefox" and its equivalents in apt and the like install something called firefox. In many cases this will be OK, but in many others it won't, e.g. when new releases of OpenOffice.org or Firefox arrive. These often won't show up in normal repositories for a while, if they show up at all before a later release of the distribution they are running.
Re:We need 1-file installs (Score:4, Insightful)
That is great for software supplied by your distro's repository, and most distros have lots of software available in their "contrib" or equivalent repository. Firefox of course usually comes installed out of the box, so it isn't an issue.
Where this could be beneficial is for software that isn't popular enough for the distros to package. At the moment, you have to publish different packages for each distro and for each architecture, and you probably won't bother about much beyond i386 and amd64.
Re: (Score:2)
Re: (Score:2)
Which ancient OS are you using?
In Ubuntu at least the first command works fine (Haven't bothered to try the others)
Re: (Score:2)
apt-get install firefox
or synaptic (click) firefox (check) apply (click)
A lot easier than finding/going to the website, clicking the download icon, waiting, dragging an icon... Mozilla does this wonderfully (the agent string is used to present the download for your OS/language); most other sites require you to look for the download you need.
Re:We need 1-file installs (Score:4, Informative)
Re:We need 1-file installs (Score:4, Interesting)
Re: (Score:3, Funny)
Re: (Score:2)
Linux's way of saying "NO, NEVER DO THIS! Also, it doesn't work." to installing stuff by downloading it yourself has a certain beauty, where Apple's concept is malware-prone.
The only way I can see your suggestion implemented is by providing a file format that just contains what package you would like to install (for each possible distribution).
e.g. for Firefox.app: ... etc
Gentoo: ensure_installed(www-client/mozilla-firefox)
Debian: ensure_installed(firefox)
Fedora: ensure_installed(firefox)
Declarative, that is. Then
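That declarative idea could be modeled as a simple lookup table. Everything below is hypothetical: the manifest entries mirror the examples above, and the command strings are just the usual front-end tools for each distro.

```python
# Hypothetical "declarative install" manifest for Firefox: the file only
# names the package to ensure per distribution; the native package
# manager does the rest. Entries mirror the examples in the comment.
MANIFEST = {
    "gentoo": "www-client/mozilla-firefox",
    "debian": "firefox",
    "fedora": "firefox",
}

INSTALL_COMMANDS = {
    "gentoo": "emerge",
    "debian": "apt-get install",
    "fedora": "yum install",
}

def install_command(distro, manifest=MANIFEST):
    """Translate the declarative manifest entry into the distro's
    install command (as a string, for illustration only)."""
    if distro not in manifest:
        raise KeyError("no manifest entry for " + distro)
    return "{} {}".format(INSTALL_COMMANDS[distro], manifest[distro])
```

The download is then just data naming a package, so the distro's own repository and signature checking still apply.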
convenient for _closed source_ software vendors (Score:4, Insightful)
Software vendors? This only makes life easier for _closed source_ software makers. For everyone else it's a solution looking for a problem, as package management and repositories don't really have a problem with different arches and versions.
I'm not saying this is useless (people do want to run closed source software), but the kernel, glibc and other patches better be good and non-invasive if this guy wants them to land...
Re: (Score:3, Insightful)
Actually, having to maintain packages across several architectures can be tricky at times. Some packages need to be patched to run correctly on different architectures, and the upstream maintainers can accidentally break those patches (e.g. if they are not personally testing on a given architecture). It could even be the case that different architectures have different versions of the same packages, because the distro maintainers are busy trying to get everything to work. I am not saying that this "universal binary" solution is the answer, but it might help streamline the build process at the distro level.
Re: (Score:2)
Oh yes, maintaining packages for several archs is real work, I'm not claiming otherwise. I just don't see how universal binaries makes things easier. Coding, compiling, testing, patching -- all of those need to be done with all supported archs in mind in any case.
Re:convenient for _closed source_ software vendors (Score:5, Insightful)
"Actually, having to maintain packages across several architectures can be tricky at times."
Of course yes. But let's see if the single fat binary reduces complexity.
"Some packages need to be patched to run correctly on different architectures"
And they still will need that. Or do you think that the ability to produce a single binary will magically make those incompatibilities disappear?
"the upstream maintainers can accidentally break those patches (e.g. if they are not personally testing on a given architecture)"
That can happen too with a single binary exactly the same way.
"It could even be the case that different architectures have different versions of the same packages, because the distro maintainers are busy trying to get everything to work."
Probably with a reason (like the new version needing to be patched to work on this or that platform). How do you think going with a single binary will avoid that problem? It's arguable that in this situation you would end up worse. At least with different binaries you can make the decision of staying with foo 1.1 on arm but promoting foo 1.2 on amd64 in the meantime; with a single binary it would mean foo 1.1 for everybody.
"I am not saying that this "universal binary" solution is the answer, but it might help streamline the build process at the distro level."
Still, you didn't produce any argument about *how* it could help.
Re: (Score:2)
Another option is for distributors to have compile farms, whereby they have an example of each architecture available to them, and it's a simple case of submitting a source package and it gets built automatically for each available architecture.
Also, most patch breakage comes from changes which prevent the patch from applying; most build systems will apply arch-specific patches even if you aren't building for that arch, so breakage will be quickly noticed... A well-written patch to fix a specific arch should not have
Is this really necessary? Or even advantageous? (Score:2)
oh boy, just pack all archs on a .deb (Score:5, Interesting)
you know, just trick the good ol' .DEB package format to include several archs, then let dpkg decide which binaries to extract.
it's not like on the Mac, where binaries are one big blob with executables, libs, images, videos, helpfiles, etc. all distributed as a single "file" which is actually a directory with metadata that the Finder hides as being a "program file".
being able to copy a binary ELF from one box to another doesn't guarantee it'll work, especially for GUI apps that may require other support files, so fat binaries in linux would be simply a useless gimmick. either distribute fat .DEBs, or just do the Right Thing(tm): distribute the source.
Not scalable (Score:5, Insightful)
To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.
This is something you might decide to ignore if you are only supporting two architectures. Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to. I really don't think this idea makes sense.
(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)
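The scaling argument is easy to put in numbers. A rough sketch, assuming a small fixed header and ignoring alignment padding:

```python
def fat_binary_size(per_arch_sizes, header_overhead=64):
    """Size of a FatELF-style container: a small header plus the
    concatenated per-architecture ELF objects. The 64-byte header
    overhead is an assumption for illustration, not FatELF's real
    figure; alignment padding is ignored."""
    return header_overhead + sum(per_arch_sizes)

# A 2 MB binary built for all twelve Debian Lenny architectures:
sizes = [2 * 1024 * 1024] * 12
print(fat_binary_size(sizes) / (1024 * 1024))  # ~24 MB, vs. 2 MB for one arch
```

The header cost is negligible; the payload grows essentially linearly with the number of architectures, which is the whole objection.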
Data Code (Score:3, Informative)
And in fact, for large classes of interesting applications, installer and installed size is overwhelmingly data, not code. Games are going to be 95%+ data (check out how small the actual app is sometimes; often less than 1% the size of the data files). Microsoft Office has far, far more space allocated to fonts, clip art, all those multilingual spelling dictionaries, and templates than the actual *.exe files.
And even the self-contained .exe files in the above examples will also include a ton of bitmapped im
Unix (OSF) tried it with ANDF (Score:5, Interesting)
It never really flew.
If someone wants to do this then something like Java would be good enough for many types of software. There will always be some things for which a binary tied to the specific target is all that would work; I think that it would be better to adopt something that works for most software rather than trying to achieve 100%.
Re: (Score:3, Informative)
It was called P-code [wikipedia.org]
In the early 1980s, at least two operating systems achieved machine independence through extensive use of p-code. The Business Operating System (BOS) was a cross-platform operating system designed to run p-code programs exclusively. The UCSD p-System, developed at The University of California, San Diego, was a self-compiling and self-hosted operating system based on p-code optimized for generation by the Pascal programming language.
Less Fat, more Unreal (Score:3, Insightful)
I don't get it (Score:3)
What's the big benefit again? Instead of the package manager making the decision once at install time and all of the un-needed parts for platforms I'm not using stay on the install disk, now the decision is made each time I run the app and I get to clog my HD (or worse, my SSD) with all of them?
Now I can have the world's LARGEST hello world program with support for alpha, arm, avr32, blackfin, cris, frv, h8300, ia64, m32r, m68k, m68knommu, mips, parisc, powerpc, ppc, s390, sh, sh64, sparc, sparc64, um, v850, x86, and xtensa?
I'm guessing if this catches on, the most commonly used program will be 'diet' the program that slims down fat binaries by removing the architectures you will never encounter in a zillion years. (Just what are the odds that I will one day replace my workstation with an s390?)
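A hypothetical "diet" pass is trivial to sketch if you model the container as a list of per-architecture records; a real tool would also have to rewrite the FatELF header and file offsets, which this toy ignores.

```python
def diet(records, keep):
    """Strip a fat binary down to the architectures you actually use.

    `records` models the container as (arch, payload) pairs; `keep` is
    the set of architecture names to retain. Purely illustrative: a
    real implementation would patch the container header and offsets."""
    kept = [(arch, payload) for arch, payload in records if arch in keep]
    if not kept:
        raise ValueError("no matching architecture records")
    return kept
```

This mirrors what Apple's `lipo -thin` does for Mach-O universal binaries: the per-arch objects are independent, so dropping some is just record surgery.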
If they want to do this, they should do it right and implement something like TIMI [wikipedia.org]. Done well, it would mean that an app could run on a platform that didn't even exist when it was shipped (it worked for IBM).
Beyond the technical advantages of TIMI, it will provide us years of South Park references.
please stop trying to turn Linux into OS X (Score:3, Interesting)
Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine.
OS X needs fat binaries because it doesn't have package management. I wish people would stop trying to bring OS X (mis)features over to Linux. If I wanted to use OS X, I'd already be using it.
Actually yes Linux needs a universal format (Score:3, Funny)
Linux needs to become more like Mac OSX than Windows.
What I would like to see in Linux in the near future:
Universal file format for x86, x64, and PowerPC executables that replaces the ELF format (WIZARD format, ELF needs food badly!)
GNOME and KDE merged into one GUI that emulates both of them, GNIGHT or something.
Ability for Linux to use Windows based drivers when Linux based drivers do not exist, something better than that NDISwrapper but under a GPL license and built into Linux.
GNUStep being developed into something that resembles Aqua, Aero, and other GUIs and is backward compatible with the Mac OSX API calls to recompile OSX programs for Linux. Maybe even in the near future run OSX Universal binaries somewhat like WINE runs Windows programs.
Re:Linking problems (Score:5, Insightful)
Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?
Probably not. Or not without getting headaches like you get with assemblies on Vista. Keying off the system architecture (32-bit x86 vs. 64-bit ia64) is much simpler than keying off library versions.
The fix with standard libraries is for the makers of them to stop screwing around and stick with ABI compatibility for a good number of years. OK, this does tend to codify some poor decisions but is enormously more supportive of application programmers. Note that I differentiate from API compat.; rebuilding against a later version of the API can result in a different - later - part of the ABI being used, and it's definitely possible to extend the ABI if structure and offset versioning is done right. But overall, it takes a lot of discipline (i.e., commitment to being a foundational library) from the part of the authors of the standard libs, and some languages make that hard (it's easier in C than in C++, for example).
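The structure-and-offset-versioning discipline mentioned above can be illustrated with ctypes: as long as a public struct carries a size/version field and only ever appends members, every v1 member keeps its offset in v2, so code built against the old ABI keeps working. A minimal sketch (the `Event` structs are invented for illustration):

```python
import ctypes

# ABI discipline sketch: extend a public struct only by appending
# fields, never inserting or reordering, so v1 offsets stay valid.
class EventV1(ctypes.Structure):
    _fields_ = [("size", ctypes.c_uint32),   # struct size, set by caller
                ("kind", ctypes.c_uint32),
                ("value", ctypes.c_int64)]

class EventV2(ctypes.Structure):
    _fields_ = [("size", ctypes.c_uint32),
                ("kind", ctypes.c_uint32),
                ("value", ctypes.c_int64),
                ("flags", ctypes.c_uint32)]  # appended at the end only

# Every v1 member sits at the same offset in v2.
for name, _ in EventV1._fields_:
    assert getattr(EventV2, name).offset == getattr(EventV1, name).offset
```

The library checks the caller-supplied `size` to know which version it was handed; this is easy to keep up in C, and harder in C++ where vtables and name mangling enter the ABI.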
Re:Linking problems (Score:4, Funny)
I think FatELF is too skinny for that. You want SantaELF, which links all those libraries statically in each binary...
Re: (Score:3, Interesting)
While your reply sounds a bit like flame-bait, I basically have to agree. The format isn't a universal binary that gets translated to each machine architecture when installed. Instead, it's basically an archive of pre-compiled binaries for each platform you support. So, for example, my stupid Qt application has to be compiled separately for Fedora and Ubuntu. This technology would in theory allow me to merge the binaries into a single FatELF binary. Personally, I'd rather just provide separate .deb and
Java (Score:4, Insightful)
Kind of like Java then.
Use the source, Luke! (Score:3, Insightful)
Now why would I want to do anything that fucktarded, when I can just use the source?
And if I needed cross-platform that badly, I can always ship ONE java app with ONE instance of data.
The '90s called, they want their obsolete fat and universal binaries back.
Re:Use the source, Luke! (Score:5, Funny)
Now why would I want to do anything that fucktarded, when I can just use the source? And if I needed cross-platform that badly, I can always ship ONE java app with ONE instance of data. The '90s called, they want their obsolete fat and universal binaries back.
The elusive (+1 Insightful, -1 Flamebait) post makes a brief appearance, flashing its brightly-colored plumage, before disappearing back into the brush.
Re: (Score:3, Interesting)
Sometimes the insightful comment is necessarily flamebait.
You want "platform independence"? We already had that with source.
It just got FUD'ed to death because it seems kind of scary and there
were never really any nice n00b-friendly package mechanisms. Perhaps
Gentoo comes close, but I doubt it would fit the (n00b) bill.
Apple is still using it (Score:3, Informative)
Apple is still using it for x86/x86_64 fat binaries in Snow Leopard.
Re:Apple dropped it (Score:5, Informative)
No, Apple didn't drop support for Universal Binaries. Most apps available for Mac today are universal binaries and work on PPC or Intel macs, and in some cases support PPC 32, PPC 64, Intel 32 and Intel 64. Just because a new OS doesn't support an older CPU architecture doesn't mean the functionality for Universal or "Fat" binaries is not supported.
Re: (Score:2)
It's also theoretically possible to include ARM support too, to make a binary that would also run on an iPhone...
Re: (Score:3, Insightful)
Wouldn't surprise me if this is to encourage users to demand a native x86 version of software - once every significant application exists as x86 binaries, Apple can drop support for Rosetta altogether and that's another developer or two freed up to work on furthering their products rather than backward compatibility.
Re: (Score:3, Informative)
They already have, to a degree. In Snow Leopard, you have to choose to specifically install Rosetta. If you don't, you can't run PPC programs.
If you try, OS X will prompt you to install Rosetta (which it will do at the press of a button), but it's not there any more by default.
Re:Apple dropped it (Score:4, Interesting)
Apple does that. When 10.3 came out, Apple stopped installing OS 9 Classic by default as well. They support backwards compatibility for 2-3 generations and then phase it out: the first phase is simply not installing it by default, the second phase is not supplying it at all. Snow Leopard is the 3rd generation of OS after Rosetta came out; installed by default in Tiger and Leopard, they stopped installing it by default for 10.6.
Personally, I wish MSFT would do the same thing. I get really pissed when my "new application" requires the same installer that Win95 had, and in order to run it I have to reboot into safe mode as my antivirus won't let it run. Seriously, why does an application built in 2009 still require the Win16 subsystem to run? Why aren't the coders moving onto new toolkits? Apple nudges and then pushes programmers forward. MSFT lets them stay in the previous century and use bare metal knife switches to turn on the lights.
Re:We really care (Score:4, Informative)
Read the fine website:
Re:Apple Universal Binary is kinda of a joke. (Score:5, Informative)
Re: (Score:3, Insightful)
Calm down. Gentoo has almost finished building. In a few hours, you'll be able to use your machine again.
Re:Linux is fine, but how about other platforms (Score:4, Informative)
Somebody didn't read the article...
Q: Do you have to read the entire FatELF file to load it?
A: Nope! Just a few bytes at the start, and then the specific ELF object we want is read directly. The other ELF objects in the file are ignored, so the disk bandwidth overhead is almost non-existent.
Q: So this...adds PowerPC support to my Intel box?
A: No. FatELF is not an emulator, it just glues ELF binaries together. If you have a FatELF binary with PowerPC and Intel records, then PowerPC and Intel boxes will pick the right one and do the right thing, and other platforms will refuse to load the binary, like they would anyway.
Q: Does this let me run 32-bit code on a 64-bit system or vice versa?
A: No. This doesn't let 32-bit and 64-bit code coexist, it just lets them both reside in the file so the platform can choose 32 or 64 bits as necessary.
Q: Do I need to have PowerPC (MIPS, ARM, whatever) support in my FatELF file?
A: No. Put whatever you want in there. The most popular scenario will probably be x86 plus x86_64, to aid in transition to 64-bit systems.
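The "just a few bytes at the start" behavior is easy to sketch. Note this uses a simplified, made-up record layout and magic number for illustration, not FatELF's actual on-disk format:

```python
import struct

# Illustrative container layout (NOT the real FatELF format):
#   header:  u32 magic, u32 record_count
#   records: u32 machine_id, u64 offset, u64 size  (each 20 bytes)
MAGIC = 0xFA7E1F00  # made-up magic for this sketch

def select_record(blob, wanted_machine):
    """Read just the header and record table, then return (offset, size)
    of the ELF object matching this machine -- the payload bytes of the
    other architectures are never touched."""
    magic, count = struct.unpack_from("<II", blob, 0)
    if magic != MAGIC:
        raise ValueError("not a fat binary")
    for i in range(count):
        machine, offset, size = struct.unpack_from("<IQQ", blob, 8 + i * 20)
        if machine == wanted_machine:
            return offset, size
    raise ValueError("no record for this machine")
```

A loader using this scheme pays a fixed small read up front, then seeks straight to the matching ELF object, which is why the FAQ says the disk bandwidth overhead is almost non-existent.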