Debian Wheezy To Have Multi-Architecture Support
dkd903 writes "Debian has announced they are introducing support for multiarch in Debian Wheezy. Multiarch support means a Debian system will be able to install and run applications built for a different target system."
Re: (Score:2)
Re: (Score:3)
Re: (Score:1)
Link is (was?) broken (Score:5, Informative)
Re:Link is (was?) broken (Score:5, Funny)
Read about it here: http://wiki.debian.org/Multiarch [debian.org]
The link wasn't forgotten. It's Debian and as such the community is supposed to provide the links.
Re: (Score:3, Informative)
Wait, isn't that what Gentoo has been doing since, well, forever?
Not really: Gentoo has native multilib support, not native multiarch support. And as I understand it, Debian's multiarch support is much more integrated than Gentoo's crossdev efforts at multiarch.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
where is TFA? (Score:2, Informative)
I guess the best way to prevent people from reading TFA is to not post a proper link...
Anyway, here is the Debian announcement. [debian.org]
Re: (Score:2)
The misleading summary helps, too.
Re: (Score:2)
Re: (Score:3)
32 vs 64 bit isn't the only reason to want something like this. What about systems that happen to provide support for another binary format with hardware translation?
Re: (Score:2)
Which distribution is currently shipping cross-compile environments? Which distribution has a solution that extends beyond Intel's architecture?
SUSE. It's called Open Build Service and has existed for five or so years.
The original submission was a self-serving ad anyway. (Score:2)
Wheezy F Baby... (Score:2, Funny)
Is it powered by drugs and cough syrup?
Party like it's 2004 (Score:1)
Woot! Only 7 years late to the party, Debian.
Re: (Score:1)
Better late than never, I suppose. This is good news nonetheless; it makes it easier for us Debian/Ubuntu fans to use a 64-bit OS. Score one more point for the Linux world!
...*Ahem*, I mean GNU/Linux world, there, happy RMS? :)
Re:Party like it's 2004 (Score:4, Informative)
Uhm no, this is not the bi-arch crap that keeps dragging us down -- as in, 32-bit libs on a 64-bit system. This is about having any library and any header co-installable alongside its builds for other architectures. Just think how much easier this makes cross-building things...
Fat Binary or just some kind of FS organization? (Score:1)
Re: (Score:1)
It means they are finally supporting the part of the LSB that defines /lib64 for x86_64 libraries, which has been part of the standard for something like 7 years now.
/lib64 is not enough (Score:1)
No, they are explicitly *not* doing that. /lib64 is a narrow solution to the more general problem of multi-architecture support.
The new approach here is to put e.g. Linux/amd64 libraries in /usr/lib/x86_64-unknown-linux-gnu/. The hope is that a future revision of the FHS will incorporate this.
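For illustration, the layout would look something like this (libfoo is a made-up library name, and the exact spelling of the triplets was still being settled at the time):

    /usr/lib/x86_64-linux-gnu/libfoo.so.1     # amd64 build
    /usr/lib/i386-linux-gnu/libfoo.so.1       # i386 build
    /usr/lib/arm-linux-gnueabi/libfoo.so.1    # armel build

All three can be installed at the same time without stepping on each other, which is the whole point.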
Re: (Score:1)
Ah, so you're retarded, gotcha.
Hint: RedHat solved a very narrow problem with x64/x86 only. Debian's solution is far more general. But, of course, you'd have to actually read the fucking article to understand that... assuming, of course, you're capable of reading and comprehending it, which, judging by your comment, seems unlikely.
Re:/lib64 is not enough (Score:5, Interesting)
So they are doing it in a stupid way to be different?
No, they're doing it a different way to solve a broader set of problems, as their rationale [debian.org] says. Feel free to debate whether the problems they claim to be solving are worth solving, or whether their solution is the right one for those problems, but don't claim they're just being stupid until you've read the rationale. (I don't have a dog in this fight; the OS I use and develop for/on uses fat binaries for the multi-architecture part of that. I'm just noting that there's a rationale to read before concluding that they're just being different to be different.)
Re:/lib64 is not enough (Score:4, Insightful)
(Disclaimer: I actually rather like Debian, even if the likes of Ubuntu have made it unfashionable)
Knowing the good people behind Debian, it'll be an absurdly over-engineered solution that will only be supported by software in the Debian repository. And it'll be rather poorly documented so figuring out exactly how it's been done will be an exercise in frustration. But once you've figured it out - and provided you're only using packages in the repository - it'll be beautifully elegant and work so nicely you'll wonder why nothing else works the same way.
Re:/lib64 is not enough (Score:5, Interesting)
No. Your reaction illustrates that you don't understand what you are talking about.
The LSB says that /lib64 is where x86_64 libraries go on a system that also carries 32-bit x86 libraries, but it says nothing about any other machine architectures and/or combinations. The problem itself is not x86/x86_64-specific, however, so the LSB-specified solution is incomplete.
The LSB is wrong. Debian is solving the problem correctly.
The fact that "something has worked just fine for Red Hat for years now" only reflects the fact that Red Hat doesn't focus anywhere other than x86. For that and several other reasons, Red Hat is a toy operating system as far as I'm concerned.
Re: (Score:1)
Red Hat is a toy for targeting architectures that are actually relevant? That doesn't make Red Hat a toy OS; it makes Red Hat a distribution built by a company that needs to keep the lights on and pay its employees.
Re: (Score:2)
And it seems to me that Debian can simply make a symlink from the new x86_64 directory to /lib64 for compatibility's sake, if it's ever needed.
Seems like a great addition for compiling for ARM on amd64 or any other combination you can think of.
Re: (Score:2)
The fact that "something has worked just fine for Red Hat for years now" only reflects the fact that Red Hat doesn't focus anywhere other than x86.
Even at that, one of the hardest Linux sysadmin things I've done has been to upgrade live systems from i686 to x86_64 while allowing for only one quick reboot. Having proper coexistence would make this task much more straightforward.
I like Fedora-derived distributions, but Debian is making an improvement here. I hope Fedora follows suit.
Re: (Score:3)
Actually, it hasn't worked fine. That is exactly why Debian wants to change it.
You may not be a distribution developer, but the folks at Fedora/RedHat and Debian/Ubuntu and so on do have to deal with it.
If something is easy for you, that doesn't mean they didn't spend a lot of time on it to make it so.
Re: (Score:2)
They do, but they are not happy with the way it is handled now in Linux distributions. They want to improve it.
Correct Link (Score:2)
Original submission [slashdot.org] had the link correctly. Timothy replaced it with a bad one.
Wheezy (Score:4, Funny)
Looks like Wheezy will finally get a piece of the pie.
I bet it took a whole lot of trying just to get up that hill.
Now that they are up in the big leagues, they will get their turn to bat.
And I don't think there is anything wrong with that.
Re: (Score:2)
Re: (Score:1)
Don't you mean GNU/RMS?
At Last! (Score:1)
Great Idea (Score:1)
Re: (Score:1)
Re: (Score:2)
This is about having both 64-bit and 32-bit libs on the system at once and implementing the filesystem paths to support this.
Re: (Score:1)
Re: (Score:3)
It generalizes to cases larger than 64-bit vs. 32-bit. Big-endian vs. little-endian, hardware FPU vs. software FPU emulation, etc.
Re:Great Idea (Score:4, Interesting)
It makes that possible, yes. Combine that with tools from the emdebian project, and you have one wicked embedded Linux development environment AND runtime platform. I for one am following this development closely; I have been using emdebian tools for almost two years now, and having them supported in mainline Debian installations via multiarch is a huge step forward. Indescribably huge.
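To give a flavour of it, a cross-build session might look roughly like this once multiarch lands (the package names and the --add-architecture interface are my guesses, not a tested recipe):

    dpkg --add-architecture armel             # teach dpkg about the target arch
    apt-get update
    apt-get install libssl-dev:armel          # -dev packages become installable per-arch
    arm-linux-gnueabi-gcc -o app app.c -lssl  # cross-compiler links against the armel libs

No more hand-rolled sysroots just to get the target's headers and libraries in place.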
Re: (Score:1)
So This Means...... (Score:3)
Re: (Score:2)
Multiarch is not going to delay a Debian release. If it could, it would have done so already. Multiarch has been a release goal in some form for years now, but Lenny and Squeeze (the last two releases) went ahead as normal without it, simply because it wasn't ready.
Debian releases will continue to be approximately 2 years apart.
Debian is experimenting with timed freezes, which means the release schedule should be more predictable (although the time between the freeze and the release will still work according to
When (Score:2)
At the risk of feeding trolls: when will they update Wine?
Re: (Score:1)
Re: (Score:3)
I asked a friend who's a core Debian member. His answer: when someone gets interested in it.
So... you can be the guy in charge of Wine for Debian if you're interested.
Re: (Score:1)
Multiarch FTW!! (Score:1)
So you can run STABLE on both a TOPS-20 *AND* an 80286!!! :D
Re: (Score:2)
So you can run STABLE on both a TOPS-20 *AND* an 80286!!! :D
I would so mod you up if I had mod points!
Re: (Score:1)
Can't wait to be able run my PDP-8 software through a pipe!
nice link (Score:2)
did you accidentally the whole thing?
Re: (Score:2)
I clicked it and now I cannot see. (Maybe my eyes are bleeding too!)
Good for chinese MIPS processors (Score:1)
Although the feature is still vaporware, Chinese Loongson 3 CPUs stand to benefit from this, as they feature hardware-assisted x86 emulation (using QEMU on the processor). It's broken on the Loongson 3A, but hopefully will work on the 3B and up.
Comments about /lib32 and /lib64 miss some of the point: here we would want to run most things on 64-bit MIPS, and some stuff, maybe Windows games through Wine, on the i386 arch. That's potentially two arches, each with both 32-bit and 64-bit variants. Another case would be running on a 32-bit ARM...
Lil Wayne? (Score:1)
The thing that's nice about this (Score:2)
is that it allows the package manager to co-install packages of two different architectures in certain cases. This means that you can install a 32-bit Firefox (if you have some proprietary plugin) and have the rest of the system be 64-bit. Or you can install most of the packages from the armel port (ARM EABI soft-float) and install floating-point intensive ones from the armhf port (ARM EABI hard-float).
Previously, in order to install any meaningful amount of i386 software on an amd64 system, you had to install the monolithic ia32-libs package, which bundled 32-bit copies of a whole pile of libraries.
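Concretely, the expected workflow is something like this (sketched against the interface dpkg was growing at the time, so treat the exact commands as provisional):

    dpkg --add-architecture i386   # register a foreign architecture with dpkg
    apt-get update
    apt-get install libc6:i386     # the same library package, i386 flavour, co-installed

rather than dragging in one giant compatibility blob.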
a step in the wrong direction (Score:1)
Unfortunately, this is deceptively bad.
The point isn't to support lots of platforms, it's to allow x86 to run on x86_64 platforms. In theory it's fantastic, but the reality is that this will enable people to be lazy: they will only release a build for x86 and simply ignore 64-bit platforms, because they can. While it would be great, not every Linux application or game is open source. Why make a 64-bit build when it can cause incompatibility because of bad coding practices? Even with open source
Re: (Score:2)
Um... In case you don't realize it, that's the way it currently is: with my x86_64 system and the ia32-libs package (and a couple of others), I can (and do) run 32-bit executables. On Windows, you've got the same thing.
Thing is, x86 isn't gone. All desktop processors may be x86_64, but we've got lots of low-powered chips that are 32-bit, and they will continue to be so for the foreseeable future.
We also have ARM chips taking center stage, and being able to seamlessly install and run x86 software on ARM and vice-versa
Re: (Score:2)
Re: (Score:2)
Apple chose to bundle all supported (2) architectures in all packages. While wasteful, it was effective given the low number of supported architectures. Given that Debian supports quite a few architectures, that route really isn't feasible. Is it more clear now?
Re: (Score:2)
Apple officially supported 4 architectures: PPC32, PPC64, x86, and x86_64. And you could even store binaries optimized for sub-architectures (e.g. G3 vs. G4), all completely transparent to the user. No extra files. The reason this was practical is the Mach-O binary format: it was trivial to build and distribute universal binaries and libraries. This was baked in from the start, something inherited from NeXT.
Considering that we're mainly talking about running x86 binaries on x86_64 Linux machines,
Re: (Score:2)
Both solutions are transparent for the user --- it is only something that matters to packagers. Debian's solution is also transparent for the developer, though Apple's (I guess) is not; but then, Apple does not have packagers.
This idea (nicknamed "FatELF") was considered and rejected across the board --- no one wanted it. While I am not an expert, it seems wasteful to me to load a bunch of architectures you won't ever need, and wasteful to install a bunch of libraries for architectures you don't use. Typically, only a very small part of your installation will actually need to be multiarch. Of course, if I were *also* selling hardware, I might be a bit more wasteful :)
Re: (Score:2)
This idea (nicknamed "FatELF") was considered and rejected across the board --- no one wanted it.
Their loss because it could have made the current 32 to 64 bit transition a lot smoother had it been generally accepted and used. Think one kernel, at least 2 different architectures. Also, it would presumably extend down to the driver level too. You wouldn't even have to think about how many "bits" your system is. Everything would just work. The boot loader would pick the optimal arch and you'd be set. But I guess Linux users don't like things to be too easy. (I was a Linux user for many years, and this was actually the case in many ways).
Re: (Score:2)
Their loss because it could have made the current 32 to 64 bit transition a lot smoother had it been generally accepted and used. Think one kernel, at least 2 different architectures. Also, it would presumably extend down to the driver level too. You wouldn't even have to think about how many "bits" your system is. Everything would just work. The boot loader would pick the optimal arch and you'd be set. But I guess Linux users don't like things to be too easy. (I was a Linux user for many years, and this was actually the case in many ways).
Everything did just work --- bugs excepted. And it still does. The problem this is going to solve is "how to handle N architectures on the same installation, dependencies and all." Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.
While I am not an expert, it seems wasteful to me to load a bunch of architectures you won't ever need,
Define "load." It isn't l like you load all architectures into memory for all executables. They're just there if you need them. Also, I could be mistaken, but I believe the way OS X uses message passing, I don't think all shared libraries need to be multiarch. 32 bit programs can more smoothly interact with 64 bit programs/libraries.
You would be mistaken. Message passing in an architecture-independent way is expensive.
and wasteful to install a bunch of libraries for architectures you don't use. Typically, only a very small part of your installation will actually need to be multiarch. Of course, if I were *also* selling hardware, I might be a bit more wasteful :)
Oh please. Disk space is dirt cheap. That's just an excuse for laziness on the part of developers, packagers, and distribution maintainers.
Oh please. Laziness is just stuffing in everything, that's the banal, ham-fisted solution.
Re: (Score:2)
Everything did just work --- bugs excepted. And it still does. The problem this is going to solve is "how to handle N architectures on the same installation, dependencies and all."
A problem that Apple solved years ago in a far more elegant way.
Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.
Bundling is one of the best parts of OS X, IMO. Who the hell cares if it wastes a bit of disk space? It WORKS. And it is totally hassle-free. No installers. No package managers. You just copy the app to your desktop... and run. Don't want it anymore? Delete it. Want to test two versions of the same software (one beta, perhaps) side by side? Keep both copies. You don't have to worry about an installer or package manager trying to overwrite the old version. THAT is what computers should be like. OS X handles dependency resolution by making it a non-issue. Disk space is cheap as hell. I'll take a polished, hassle-free user experience with some wasted disk space any day.
Re: (Score:2)
A problem that Apple solved years ago in a far more elegant way.
No. The dependency problem was left unsolved, and only a limited number of architectures were supported. They built a mud hut and you claimed they solved the problem of building skyscrapers. Come on, these are simple facts!
Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.
Bundling is one of the best parts of OS X, IMO. Who the hell cares if it wastes a bit of disk space? It WORKS. And it is totally hassle-free. No installers. No package managers. You just copy the app to your desktop... and run. Don't want it anymore? Delete it. Want to test two versions of the same software (one beta, perhaps) side by side? Keep both copies. You don't have to worry about an installer or package manager trying to overwrite the old version. THAT is what computers should be like. OS X handles dependency resolution by making it a non-issue. Disk space is cheap as hell. I'll take a polished, hassle-free user experience with some wasted disk space any day.
Bundles mean multiple copies of shared libraries. This has 2 implications: 1. the libraries cannot share memory pages, and 2. security updates are not applied across the board, meaning that security updates are cumbersome to apply. So it might look like a good idea to someone who doesn't know much about the subject...
Re: (Score:2)
Bundles mean multiple copies of shared libraries. This has 2 implications: 1. the libraries cannot share memory pages, and 2. security updates are not applied across the board, meaning that security updates are cumbersome to apply.
Some shared libraries are copied, but since OS X is a predictable base system, you don't actually have to bundle most things. And the things you do bundle are likely not used by anything else anyway. So they're non-issues. I used Linux for 10 years and I'll tell you, bundling eliminates so much hassle. Packages are fine for the base system, but user applications need to be much more flexible. Most Linux distributions are just big monolithic beasts where every damn application is tightly coupled with the next.
Re: (Score:2)
Some shared libraries are copied, but since OS X is a predictable base system, you don't actually have to bundle most things. And the things you do bundle are likely not used by anything else anyway. So they're non-issues. I used Linux for 10 years and I'll tell you, bundling eliminates so much hassle. Packages are fine for the base system, but user applications need to be much more flexible. Most Linux distributions are just big monolithic beasts where every damn application is tightly coupled with the next.
Only poor developers depend solely on included shared libraries, and yes, the remaining libraries could be reused. You are just making excuses for a crude system.
So it might look like a good idea to someone who doesn't know much about the subject,
LOL. Wow, dude. I'm a developer myself. I know what I'm talking about.
If you think that base and included libraries are sufficient for any real work, you are not a very good developer. You will be taking much longer, with a lot more bugs, than if you used existing libraries extensively.
Yea... not quite the same.
Since you cannot tell me what the difference is, exactly, I'll as
Re: (Score:2)
Only poor developers depend solely on included shared libraries, and yes, the remaining libraries could be reused. You are just making excuses for a crude system.
I'll take a system that puts user experience ahead of developmental purity, thanks. No excuses necessary.
If you think that base and included libraries are sufficient for any real work, you are not a very good developer. You will be taking much longer, with a lot more bugs, than if you used existing libraries extensively.
I'm sure it depends entirely on what I'm writing. Most software out there can rely on base and included libraries. And when they can't, they can just bundle what they need at very little real cost to the user. The trick is to provide a sufficiently robust base. Something that Linux has utterly failed to do. So it is understandable that you might not think much of "base" systems. On Linux you can't even
Re: (Score:2)
Ok, I will try to restate the points you apparently cannot see.
I'll take a system that puts user experience ahead of developmental purity, thanks. No excuses necessary.
It is OS X that puts user experience over developer (OS X developer) convenience. Automatic dependency resolving is some work, which you sidestep by bundling. Bundling costs the user, in terms of security and in terms of performance (esp. wasted memory). That is why bundles have never been popular in the Linux world. Of course, both of these are sneaky problems, which only eat at you in tiny nibbles.
I'm sure it depends entirely on what I'm writing. Most software out there can rely on base and included libraries. And when they can't, they can just bundle what they need at very little real cost to the user. The trick is to provide a sufficiently robust base. Something that Linux has utterly failed to do. So it is understandable that you might not think much of "base" systems.
If you have a huge base, you will be dragging along crap for a long time, because any library you put in the base has to be supported forever unless you want to break contract on that base. And any libraries accumulate cruft, stuff that wasn't designed as well as it should be, or rested on assumptions that are no longer true.
Re: (Score:2)
It is OS X that puts user experience over developer (OS X developer) convenience. Automatic dependency resolving is some work, which you sidestep by bundling. Bundling costs the user, in terms of security and in terms of performance (esp. wasted memory). That is why bundles have never been popular in the Linux world. Of course, both of these are sneaky problems, which only eat at you in tiny nibbles.
They've never been popular in the Linux world because Linux users have typically been willing to deal with the extra administrative overhead of package management and the lack of proprietary software. But Linux users are not typical users and they never will be.
If you have a huge base, you will be dragging along crap for a long time, because any library you put in the base has to be supported forever unless you want to break contract on that base. And any libraries accumulate cruft, stuff that wasn't designed as well as it should be, or rested on assumptions that are no longer true.
You don't need a "huge" base. You only need a sufficiently robust base, which you have with OS X. You're speaking in purely theoretical terms. Open your eyes. OS X is a nice system to use and work with.
I have dozens of proprietary programs on this computer, mainly games. None seem to have a problem. The package management system doesn't get in the way, like you seem to think. It is very easy: you bundle all the shared libraries you need with your program, stuff them in some directory (./lib is popular), and add LD_LIBRARY_PATH=./lib to the launcher. And no, this is not something the user does; it is something the developer does, just like the developer might create a deb package, or an InstallShield package, or whatever OS X uses. See, it's all in your mind?
Dozens, huh? And you're telling me they all just work
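To illustrate the launcher trick described in the quote above, a minimal version looks like this (the paths and binary name are made up):

    #!/bin/sh
    # run the bundled binary against the bundled libraries
    here=$(dirname "$0")
    export LD_LIBRARY_PATH="$here/lib"
    exec "$here/bin/mygame" "$@"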
Re: (Score:2)
They've never been popular in the Linux world because Linux users have typically been willing to deal with the extra administrative overhead of package management and the lack of proprietary software. But Linux users are not typical users and they never will be.
Again, what overhead? The lack of proprietary software is, on the other hand, an issue, but it has nothing to do with Debian; it's about the installed base. Mac OS X is another that suffers from the latter, if to a lesser degree.
You don't need a "huge" base. You only need a sufficiently robust base, which you have with OS X. You're speaking in purely theoretical terms. Open your eyes. OS X is a nice system to use and work with.
You can choose between a huge base or constantly reinventing and reimplementing stuff. OS X does not satisfy my requirements for being highly productive (e.g., I need to have all the source code for the operating system and the libraries I use available in a convenient way). If I were to use something else
Re: (Score:2)
More information. (Score:5, Informative)
The summary is terrible. And it's not just the invalid link.
Here's a more informative link [debian.org] than the one posted by lnunes.
Multiarch is not gonna let you run ARM binaries on an Intel chip or anything like that - nor will it let you run Windows code on Debian. What it will do, however, is let you run x86 compiled binaries on an x64 system. It will also allow for things like mixing armhf and armel code on modern ARM, but for the most part, running 32-bit x86 code on 64-bit x64 (amd64) systems will be the benefit most of us will get.
How will we benefit? You'll be able to run binary-only x86 code on your x64 system. This means Adobe Flash and Skype. Any open source code is fine, because it can be compiled for your own architecture - but for binary-only proprietary software, it may not be available for your architecture.
"But this is already possible" you may be thinking. It is, but it's a nasty kludge at the moment. These packages, when installed on 64-bit systems, depend on 32-bit versions of several system libraries, which are separate packages. There's a series of kludges to make them work, and it's not very flexible.
The heart of multiarch support is a re-designed filesystem layout that accounts for the architecture of any binaries. So instead of putting binary libraries in /lib/, it puts them in an architecture-qualified directory like /lib/x86_64-linux-gnu/ or /lib/i386-linux-gnu/. This is the first step in allowing the same package to be installed for different architectures. Then, dpkg will have to be modified to track packages from more than one architecture on one system.
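Once dpkg is taught about multiple architectures, inspecting a mixed system should look something like this (a sketch based on the proposal, not a finished interface):

    dpkg --print-architecture            # the native arch, e.g. amd64
    dpkg --print-foreign-architectures   # extra arches dpkg is tracking
    dpkg -l 'libc6*'                     # the same library listed once per architecture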
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
How will we benefit? You'll be able to run binary-only x86 code on your x64 system. This means Adobe Flash and Skype. Any open source code is fine, because it can be compiled for your own architecture - but for binary-only proprietary software, it may not be available for your architecture.
This will also make 32-bit mplayer much easier to install on 64-bit machines, which is often needed due to binary-only win32 codecs.
Re: (Score:1)
It will also allow you to dist-upgrade from an x86 system to an x86_64 system if the CPU supports it. Right now it is possible to install a 64-bit kernel and some compatibility libs on a 32-bit system, but this is really only good for running precompiled 64-bit binaries. The reverse can be done on a 64-bit system.
It doesn't allow for installing 64-bit debs, though. But if they can pull this off, then I can take a machine that is 32-bit and upgrade it to a 64-bit arch.
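If they do pull it off, the crossgrade might go roughly like this (emphatically a sketch, with guessed package names; don't try it on a machine you care about):

    dpkg --add-architecture amd64         # on a running i386 install
    apt-get update
    apt-get install linux-image-amd64     # 64-bit kernel first, then reboot
    apt-get install dpkg:amd64 apt:amd64  # flip the core package tools over
    # ...then walk the rest of the package set across, arch by arch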
Re: (Score:2)
"Multiarch is not gonna let you run ARM binaries on an Intel chip or anything like that"
Actually it does. They plan to couple multiarch with an emulation solution such as QEMU to run ARM binaries on x86 processors and vice versa. They also plan to let you run cross-OS binaries, like running FreeBSD binaries on Linux and vice versa.
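Presumably via something like this (qemu-user-static plus the kernel's binfmt_misc is the usual mechanism; the package name and workflow here are my assumptions):

    apt-get install qemu-user-static   # registers binfmt handlers for foreign binaries
    dpkg --add-architecture armel && apt-get update
    apt-get install hello:armel        # an ARM build of a package
    hello                              # the kernel hands the armel binary to qemu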
I don't see PowerPC listed (Score:2)
so that takes away about 90% of the coolness factor
Not hot innovation (Score:1)
Disney (Score:3, Insightful)
After all Disney has done for cultural freedom, it's nice to see Debian is still honoring their properties with its OS names.
Re: (Score:2)
Couple of things: Bruce Perens [wikipedia.org], who used to work at Pixar and developed Electric Fence [wikipedia.org] while there, was once a Debian Project Leader and started the convention of naming Debian releases after Toy Story characters. This was all before Disney bought Pixar [debian.org]. HTH.
Re: (Score:2)
Yeah, I understand Pixar used to be cool and hip. But Wheezy was named after Disney bought them out.
Bad name for asthmatics (Score:2)