Debian Wheezy To Have Multi-Architecture Support

dkd903 writes "Debian has announced they are introducing support for multiarch in Debian Wheezy. Multiarch support means a Debian system will be able to install and run applications built for a different target system."
This discussion has been archived. No new comments can be posted.

  • by lnunes ( 1897628 ) on Thursday July 28, 2011 @03:50PM (#36913560)
    Read about it here: http://wiki.debian.org/Multiarch [debian.org]
  • where is TFA? (Score:2, Informative)

    by Anonymous Coward

    I guess the best way to prevent people from reading TFA is to not post a proper link...

    Anyway, here is the Debian announcement. [debian.org]

  • When the href is finally added, can we please make sure it's to the Debian announcement rather than the link provided in the submission? Slashvertisements are depressing.
  • by Anonymous Coward

    Is it powered by drugs and cough syrup?

  • Woot! Only 7 years late to the party, Debian.

    • Better late than never, I suppose. This is good news nonetheless; it makes it easier for us Debian/Ubuntu fans to use a 64-bit OS. Score one more point for the Linux world!

      ...*Ahem*, I mean GNU/Linux world, there, happy RMS? :)

    • by KiloByte ( 825081 ) on Thursday July 28, 2011 @06:41PM (#36915338)

      Uhm no, this is not bi-arch crap that keeps dragging us down -- as in, 32 bit libs on a 64 bit system. This is about having any library and any header co-installable with its versions from other architectures. Just think how much easier this makes cross-building things...
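
      A minimal sketch of what that buys you, assuming a wheezy-or-later dpkg and apt (the armhf example and the package name are illustrative, not from the announcement):

        # Enable a foreign architecture, then co-install its development
        # libraries next to the native ones -- no separate sysroot needed.
        dpkg --add-architecture armhf
        apt-get update
        apt-get install libssl-dev:armhf   # lands beside the amd64 copies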

  • Not sure what they mean by multiarch. Is it like Mac OS X and its use of fat Mach-O binaries? I think something similar could be implemented on Linux through FatELF. That is, the same binary can run on every supported arch.
    • It means they are finally supporting the part of the LSB that defines /lib64 for x86_64 libraries, which has been part of the standard for like 7 years now.

      • by Anonymous Coward

        No, they are explicitly *not* doing that. /lib64 is a narrow solution to the more general problem of multi-architecture support.

        The new approach here is to put e.g. Linux/amd64 libraries in /usr/lib/x86_64-linux-gnu/. The hope is that a future revision of the FHS will incorporate this.
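
        To illustrate the layout (a sketch: the triplet directory names are the ones Debian settled on, and the listing shows what one would expect on a system with both architectures enabled, not captured output):

          $ ls /usr/lib/*-linux-gnu/libc.so.6
          /usr/lib/i386-linux-gnu/libc.so.6      # 32-bit copy
          /usr/lib/x86_64-linux-gnu/libc.so.6    # 64-bit copy, co-installed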

  • The original submission [slashdot.org] had the correct link. Timothy replaced it with a bad one.

  • Wheezy (Score:4, Funny)

    by Frankie70 ( 803801 ) on Thursday July 28, 2011 @04:18PM (#36913878)

    Looks like Wheezy will finally get a piece of the pie.
    I bet it took a whole lot of trying just to get up that hill.
    Now that they are up in the big leagues, they will get their turn to bat.

    And I don't think there is anything wrong with that.

  • Now I can play those Mac games! Debian so rocks! Or is that not what they meant?
  • I actually really like this idea. I do a lot of embedded development, and it would be AMAZING to be able to compile for ARM or run ARM binaries directly -- and not just ARM, but even x86.
    • I should explain: x86, because I run all 64-bit desktops, and x86_64 and x86 aren't always directly compatible even with kernel support. I usually end up running an i386 chroot.
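
      For reference, that chroot workaround usually looks something like this (a sketch; the suite and mirror are placeholders):

        # Build and enter a minimal i386 environment on an amd64 host.
        debootstrap --arch=i386 wheezy /srv/i386-chroot http://ftp.debian.org/debian
        chroot /srv/i386-chroot /bin/bash
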
    • This is about having both 64-bit and 32-bit libs on the system at once and implementing the filesystem paths to support this.

      • There are a lot of times when multi-libs don't work well. Hence running an i386 chroot environment. I'm not saying it's not possible -- I know it is -- but the success rate has always been poor.
      • by bgat ( 123664 )

        It generalizes to cases larger than 64-bit vs. 32-bit. Big-endian vs. little-endian, hardware FPU vs. software FPU emulation, etc.

    • Re:Great Idea (Score:4, Interesting)

      by bgat ( 123664 ) on Thursday July 28, 2011 @04:55PM (#36914304) Homepage

      It makes that possible, yes. Combine that with tools from the emdebian project, and you have one wicked embedded Linux development environment AND runtime platform. I for one am following this development closely; I have been using emdebian tools for almost two years now, and having them supported in the mainline Debian installations via multiarch is a huge step forward. Indescribably huge.
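
      To make that workflow concrete (a sketch; the cross-compiler name follows the emdebian-style toolchains, but exact package names vary):

        # Cross-compile a test program for ARM on an x86 build box.
        arm-linux-gnueabi-gcc -O2 -o hello hello.c
        file hello   # should report: ELF 32-bit LSB executable, ARM ...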

  • by segedunum ( 883035 ) on Thursday July 28, 2011 @04:43PM (#36914166)
    .......that we're not going to get another Debian release for another fifty years?
    • by mmj638 ( 905944 )

      Multiarch is not going to delay a Debian release. If it could, it would have done so already. Multiarch has been a release goal in some form for years now, but Lenny and Squeeze (the last two releases) went ahead as normal without it, simply because it wasn't ready.

      Debian releases will continue to be approximately 2 years apart.

      Debian is experimenting with timed freezes, which means the release schedule should be more predictable (although the time between the freeze and the release will still work according to...)

  • at the risk of feeding trolls, when will they update Wine?

  • by Anonymous Coward

    So you can run STABLE on both a TOPS-20 *AND* an 80286!!! :D

  • did you accidentally the whole thing?

  • Although the feature is still vaporware, Chinese Loongson 3 CPUs stand to benefit from this, as they feature hardware-assisted x86 emulation (using QEMU in the process). It's broken on the Loongson 3A, but hopefully it will work on the 3B and up.

    Comments about /lib32 and /lib64 miss some of the point; here we would want to run most things on MIPS64, and some stuff, maybe Windows games through Wine, on the i386 arch. That's potentially two arches, each with both 32-bit and 64-bit variants. Another case would be running on a 32-bit ARM...

  • Does this come bundled with his latest single?
  • is that it allows the package manager to co-install packages of two different architectures in certain cases. This means that you can install a 32-bit Firefox (if you have some proprietary plugin) and have the rest of the system be 64-bit. Or you can install most of the packages from the armel port (ARM EABI soft-float) and install floating-point intensive ones from the armhf port (ARM EABI hard-float).
    Previously, in order to install any meaningful amount of i386 software on an amd64 system, you had to install...
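
    The contrast, sketched (the :arch suffix is the multiarch package qualifier; some-plugin is a hypothetical stand-in for any binary-only i386 package):

      # Old kludge on amd64: one monolithic 32-bit compatibility bundle.
      apt-get install ia32-libs
      # Multiarch way: enable i386, then install the actual package with its real dependencies.
      dpkg --add-architecture i386
      apt-get update
      apt-get install some-plugin:i386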

  • Unfortunately, this is deceptively bad.

    The point isn't to support lots of platforms, it's to allow x86 binaries to run on x86_64 platforms. In theory it's fantastic, but the reality is that this will enable people to be lazy: they will only release a build for x86 and just ignore 64-bit platforms entirely, because they can. And not every Linux application or game is open source. Why make a 64-bit build when it can cause incompatibility because of bad coding practices? Even with open source...

    • Um... In case you don't realize it, that's the way it currently is: with my x86_64 system and the ia32-libs package (and a couple of others), I can (and do) run 32-bit executables. On Windows, you've got the same thing.
      Thing is, x86 isn't gone. All desktop processors may be x86_64, but we've got lots of low-powered chips that are 32-bit, and they will continue to be so for the foreseeable future.
      We also have ARM chips taking center stage, and being able to seamlessly install and run x86 software on ARM and vice versa...

    • Or look at OS X where none of this shit even matters. Hell, even running two completely different architectures of the same machine was transparent to the user. 32bit vs. 64bit hardly mattered at all.
      • Apple chose to bundle all supported architectures in every package. While wasteful, it was effective given the low number of supported architectures (2). Given that Debian supports quite a few architectures, that route really isn't feasible. Is it more clear now?

        • Apple officially supported 4 architectures: PPC32, PPC64, x86, and x86_64. And you could even store binaries optimized for sub-architectures (i.e. G3 vs. G4). All completely transparent to the user. No extra files. What makes this practical is the Mach-O binary format. It was trivial to build and distribute universal binaries and libraries. This was baked in from the start. Something inherited from NeXT.

          Considering that we're mainly talking about running x86 binaries on x86_64 Linux machines...
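
          For comparison, building such a universal binary really was just a couple of commands (a sketch using Apple's gcc -arch flag and lipo; the file names are hypothetical):

            # Compile one slice per architecture, then glue them into one fat Mach-O.
            gcc -arch i386   -o myapp_i386   myapp.c
            gcc -arch x86_64 -o myapp_x86_64 myapp.c
            lipo -create myapp_i386 myapp_x86_64 -output myapp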

          • Both solutions are transparent for the user --- it is only something that matters to packagers. Debian's solution is also transparent for the developer, though Apple's (I guess) is not; but then, Apple does not have packagers.

            This idea (nicknamed "FatELF") was considered and rejected across the board --- no one wanted it. While I am not an expert, it seems wasteful to me to load a bunch of architectures you won't ever need, and wasteful to install a bunch of libraries for architectures you don't use. Typically, only a very small part of your installation will actually need to be multiarch. Of course, if I were *also* selling hardware, I might be a bit more wasteful :)

            • This idea (nicknamed "FatELF") was considered and rejected across the board --- no one wanted it.

              Their loss because it could have made the current 32 to 64 bit transition a lot smoother had it been generally accepted and used. Think one kernel, at least 2 different architectures. Also, it would presumably extend down to the driver level too. You wouldn't even have to think about how many "bits" your system is. Everything would just work. The boot loader would pick the optimal arch and you'd be set. But I guess Linux users don't like things to be too easy. (I was a Linux user for many years, and this was actually the case in many ways).

              • Their loss because it could have made the current 32 to 64 bit transition a lot smoother had it been generally accepted and used. Think one kernel, at least 2 different architectures. Also, it would presumably extend down to the driver level too. You wouldn't even have to think about how many "bits" your system is. Everything would just work. The boot loader would pick the optimal arch and you'd be set. But I guess Linux users don't like things to be too easy. (I was a Linux user for many years, and this was actually the case in many ways).

                Everything did just work --- bugs excepted. And do. The problem this is going to solve is "how to handle N architectures on the same installation, dependencies and all." Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.

                While I am not an expert, it seems wasteful to me to load a bunch of architectures you won't ever need,

                Define "load." It isn't l like you load all architectures into memory for all executables. They're just there if you need them. Also, I could be mistaken, but I believe the way OS X uses message passing, I don't think all shared libraries need to be multiarch. 32 bit programs can more smoothly interact with 64 bit programs/libraries.

                You would be mistaken. Message passing in an architecture-independent way is expensive.

                and wasteful to install a bunch of libraries for architectures you don't use. Typically, only a very small part of your installation will actually need to be multiarch. Of course, if I were *also* selling hardware, I might be a bit more wasteful :)

                Oh please. Disk space is dirt cheap. That's just an excuse for laziness on the part of developers, packagers, and distribution maintainers.

                Oh please. Laziness is just stuffing in everything; that's the banal, ham-fisted solution.

                • Everything did just work --- bugs excepted. And do. The problem this is going to solve is "how to handle N architectures on the same installation, dependencies and all."

                  A problem that Apple solved years ago in a far more elegant way.

                  Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.

                  Bundling is one of the best parts of OS X, IMO. Who the hell cares if it wastes a bit of disk space? It WORKS. And it is totally hassle free. No installers. No package managers. You just copy the app to your desktop... and run. Don't want it anymore? Delete it. Want to test two versions of the same software (one beta, perhaps) side by side? Keep both copies. You don't have to worry about an installer or package manager trying to overwrite the old version. THAT is what computers should be like. OS X handles dependency resolution by making it a non-issue. Disk space is cheap as hell. I'll take a polished, hassle free user experience with some wasted disk space any day.

                  • A problem that Apple solved years ago in a far more elegant way.

                    No. The dependency problem was left unsolved, and only a limited number of architectures was supported. They built a mud hut and you claim they solved the problem of building skyscrapers. Come on, these are simple facts!

                    Of course, OS X doesn't handle automatic dependency resolution at all, does it? It relies on bundling instead, wasting resources left and right, if I recall correctly.

                    Bundling is one of the best parts of OS X, IMO. Who the hell cares if it wastes a bit of disk space? It WORKS. And it is totally hassle free. No installers. No package managers. You just copy the app to your desktop... and run. Don't want it anymore? Delete it. Want to test two versions of the same software (one beta, perhaps) side by side? Keep both copies. You don't have to worry about an installer or package manager trying to overwrite the old version. THAT is what computers should be like. OS X handles dependency resolution by making it a non-issue. Disk space is cheap as hell. I'll take a polished, hassle free user experience with some wasted disk space any day.

                    Bundles mean multiple copies of shared libraries. This has two implications: 1. the libraries cannot share memory pages, and 2. security updates are not applied across the board, meaning that security updates are cumbersome to apply. So it might look like a good idea to someone who doesn't know much about the subject...

                    • Bundles mean multiple copies of shared libraries. This has two implications: 1. the libraries cannot share memory pages, and 2. security updates are not applied across the board, meaning that security updates are cumbersome to apply.

                      Some shared libraries are copied, but since OS X is a predictable base system, you don't actually have to bundle most things. And the things you do bundle are likely not used by anything else anyway. So they're non-issues. I used Linux for 10 years and I'll tell you, bundling eliminates so much hassle. Packages are fine for the base system, but user applications need to be much more flexible. Most Linux distributions are just big monolithic beasts where every damn application is tightly coupled with the next.

                    • Some shared libraries are copied, but since OS X is a predictable base system, you don't actually have to bundle most things. And the things you do bundle are likely not used by anything else anyway. So they're non-issues. I used Linux for 10 years and I'll tell you, bundling eliminates so much hassle. Packages are fine for the base system, but user applications need to be much more flexible. Most Linux distributions are just big monolithic beasts where every damn application is tightly coupled with the next.

                      Only poor developers depend solely on included shared libraries, and yes, the remaining libraries could be reused. You are just making excuses for a crude system.

                      So it might look like a good idea to someone who doesn't know much about the subject...

                      LOL. Wow, dude. I'm a developer myself. I know what I'm talking about.

                      If you think that base and included libraries are sufficient for any real work, you are not a very good developer. You will be taking much longer, with a lot more bugs, than if you used existing libraries extensively.

                      Yea... not quite the same.

                      Since you cannot tell me what the difference is, exactly, I'll as...

                    • Only poor developers depend solely on included shared libraries, and yes, the remaining libraries could be reused. You are just making excuses for a crude system.

                      I'll take a system that puts user experience ahead of developmental purity, thanks. No excuses necessary.

                      If you think that base and included libraries are sufficient for any real work, you are not a very good developer. You will be taking much longer, with a lot more bugs, than if you used existing libraries extensively.

                      I'm sure it depends entirely on what I'm writing. Most software out there can rely on base and included libraries. And when it can't, it can just bundle what it needs at very little real cost to the user. The trick is to provide a sufficiently robust base. Something that Linux has utterly failed to do. So it is understandable that you might not think much of "base" systems. On Linux you can't even...

                    • Ok, I will try to restate the points you apparently cannot see.

                      I'll take a system that puts user experience ahead of developmental purity, thanks. No excuses necessary.

                      It is OS X that puts user experience over developer (OS X developer) convenience. Automatic dependency resolving is some work, which you sidestep by bundling. Bundling costs the user, in terms of security and in terms of performance (esp. less memory). That is why bundles have never been popular in the Linux world. Of course, both of these are sneaky problems, which only eat at you in tiny nibbles.

                      I'm sure it depends entirely on what I'm writing. Most software out there can rely on base and included libraries. And when it can't, it can just bundle what it needs at very little real cost to the user. The trick is to provide a sufficiently robust base. Something that Linux has utterly failed to do. So it is understandable that you might not think much of "base" systems.

                      If you have a huge base, you will be dragging along crap for a long time, because any library you put in the base has to be supported forever unless you want to break contract on that base. And any libraries accumulate cruft, stuff that wasn't designed as well as it should be, or rested on assumptions that are no longer true.

                    • It is OS X that puts user experience over developer (OS X developer) convenience. Automatic dependency resolving is some work, which you sidestep by bundling. Bundling costs the user, in terms of security and in terms of performance (esp. less memory). That is why bundles have never been popular in the Linux world. Of course, both of these are sneaky problems, which only eat at you in tiny nibbles.

                      They've never been popular in the Linux world because Linux users have typically been willing to deal with the extra administrative overhead of package management and the lack of proprietary software. But Linux users are not typical users, and they never will be.

                      If you have a huge base, you will be dragging along crap for a long time, because any library you put in the base has to be supported forever unless you want to break contract on that base. And any libraries accumulate cruft, stuff that wasn't designed as well as it should be, or rested on assumptions that are no longer true.

                      You don't need a "huge" base. You only need a sufficiently robust base, which you have with OS X. You're speaking in purely theoretical terms. Open your eyes. OS X is a nice system to use and work with.

                      I have dozens of proprietary programs on this computer, mainly games. None seem to have a problem. The package management system doesn't get in the way, like you seem to think. It is very easy: you bundle all the shared libraries you need with your program, stuff them in some directory (./lib is popular), and add LD_LIBRARY_PATH=./lib to the launcher. And no, this is not something the user does; it is something the developer does, just like the developer might create a deb package, or an InstallShield package, or whatever OS X uses. See, it's all in your mind?

                      Dozens, huh? And you're telling me they all just work...
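
                      The launcher trick described above, as a minimal sketch (paths and the binary name are placeholders):

                        #!/bin/sh
                        # Hypothetical launcher shipped next to a bundled ./lib directory:
                        # prepend it to the loader search path, then exec the real binary.
                        HERE=$(dirname "$(readlink -f "$0")")
                        LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
                          exec "$HERE/bin/myapp" "$@"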

                    • They've never been popular in the Linux world because Linux users have typically been willing to deal with the extra administrative overhead of package management and the lack of proprietary software. But Linux users are not typical users, and they never will be.

                      Again, what overhead? Lack of proprietary software is, on the other hand, an issue, but it has nothing to do with Debian; it is about the installed base. Mac OS X is another system that suffers from the latter, if to a lesser degree.

                      You don't need a "huge" base. You only need a sufficiently robust base, which you have with OS X. You're speaking in purely theoretical terms. Open your eyes. OS X is a nice system to use and work with.

                      You can choose between a huge base and constantly reinventing and reimplementing stuff. OS X does not satisfy my requirements for being highly productive (e.g., I need to have all the source code for the operating system and the libraries I use available in a convenient way). If I were to use something else...

    • It is more important to make the transition from 32-bit to 64-bit as smooth and seamless as possible. For the most part, 64-bit programs are not necessary. In many cases they're actually slower. As long as your kernel is 64-bit (can address all of your memory) and any memory-intensive apps are 64-bit, that's all you need. Everyone else can keep writing 32-bit software, and as long as your system handles it gracefully, it doesn't matter.
  • More information. (Score:5, Informative)

    by mmj638 ( 905944 ) on Thursday July 28, 2011 @08:13PM (#36916396)

    The summary is terrible. And not just the invalid link.

    Here's a more informative link [debian.org] than the one posted by lnunes.

    Multiarch is not gonna let you run ARM binaries on an Intel chip or anything like that - nor will it let you run Windows code on Debian. What it will do, however, is let you run x86 compiled binaries on an x64 system. It will also allow for things like mixing armhf and armel code on modern ARM, but for the most part, running 32-bit x86 code on 64-bit x64 (amd64) systems will be the benefit most of us will get.

    How will we benefit? You'll be able to run binary-only x86 code on your x64 system. This means Adobe Flash and Skype. Any open source code is fine, because it can be compiled for your own architecture - but for binary-only proprietary software, it may not be available for your architecture.

    "But this is already possible" you may be thinking. It is, but it's a nasty kludge at the moment. These packages, when installed on 64-bit systems, depend on 32-bit versions of several system libraries, which are separate packages. There's a series of kludges to make them work, and it's not very flexible.

    The heart of multiarch support is a redesigned file system layout which accounts for the architecture of any binaries. So instead of putting some binary libraries in /lib/, it puts them in /lib/amd64/ or /lib/i386/. This is the first step toward allowing the same package to be installed for different architectures. Then, dpkg will have to be modified to track packages from more than one architecture on the one system.
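
    (For what it's worth, the directory names Debian eventually settled on are GNU triplets like /usr/lib/i386-linux-gnu/ rather than /lib/i386/. The dpkg side can then be inspected like so -- both options exist in wheezy's dpkg; the outputs shown are examples:)

      dpkg --print-architecture            # native architecture, e.g. amd64
      dpkg --print-foreign-architectures   # extra archs enabled for co-installation, e.g. i386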

    • You can already run Windows code on Debian. What this should do (among lots of other things) is make the operating system more hardware-agnostic. Imagine a single installed system on a portable hard drive. You plug the drive into an x86 machine and it's your trusted Debian system. Next, you take the drive out and plug it into your ARM-based smartphone while you're on the plane. Same OS, different hardware. Next, your old x64 breaks and you buy a replacement x128, and you just swap the hard drives. If you're a sys...
    • Huh? RHEL already does this. I had to install 32-bit libs for HP's utils, e.g. hpacucli and hponcfg.
    • by Trogre ( 513942 )

      How will we benefit? You'll be able to run binary-only x86 code on your x64 system. This means Adobe Flash and Skype. Any open source code is fine, because it can be compiled for your own architecture - but for binary-only proprietary software, it may not be available for your architecture.

      This will also make 32-bit mplayer much easier to install on 64-bit machines, which is often needed due to binary-only win32 codecs.

    • It will also allow you to dist-upgrade from an x86 system to an x86_64 system if the CPU supports it. Right now it is possible to install a 64-bit kernel and some compatibility libs on a 32-bit system, but this is really only good for running precompiled 64-bit binaries. Vice versa can be done on a 64-bit system.

      It doesn't allow for installing 64-bit debs. But if they can pull this off, then I can take a machine that is 32-bit arch and make it 64-bit.
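
      The opening moves of such a crossgrade might look like this (a heavily simplified sketch; a real 32-to-64-bit crossgrade has many more steps and sharp edges):

        # On an i386 install whose CPU is 64-bit capable:
        dpkg --add-architecture amd64
        apt-get update
        apt-get install linux-image-amd64   # reboot into a 64-bit kernel first
        # ...then crossgrade dpkg, apt, and everything else package by package.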

    • by godrik ( 1287354 )

      "Multiarch is not gonna let you run ARM binaries on an Intel chip or anything like that"

      Actually, it does. They plan to couple multiarch with emulation solutions such as QEMU to run ARM binaries on x86 processors and vice versa. They also plan to let you run cross-OS binaries, like running FreeBSD binaries on Linux and vice versa.
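
      That envisioned combination would look roughly like this (a sketch; qemu-user-static's binfmt registration is a real Debian mechanism, and hello is just a convenient test package):

        apt-get install qemu-user-static   # registers binfmt handlers for foreign ELF
        dpkg --add-architecture armhf
        apt-get update
        apt-get install hello:armhf        # an ARM binary on an x86 box
        hello                              # runs transparently via qemu-arm-static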

  • so that takes away about 90% of the coolness factor

  • This multiarch support is no hot innovation. For years, NetBSD has been able to cross-build the entire system and offer 32-bit binary compatibility on 64-bit architectures.
  • Disney (Score:3, Insightful)

    by bill_mcgonigle ( 4333 ) * on Thursday July 28, 2011 @10:19PM (#36917330) Homepage Journal

    After all Disney has done for cultural freedom, it's nice to see Debian is still honoring their properties with its OS names.

    • by npsimons ( 32752 ) *

      After all Disney has done for cultural freedom, it's nice to see Debian is still honoring their properties with its OS names.

      Couple of things: Bruce Perens [wikipedia.org], who used to work at Pixar and developed Electric Fence [wikipedia.org] while there, was once a Debian Project Leader and started the convention of naming Debian releases after Toy Story characters. This was all before Disney bought Pixar [debian.org]. HTH.

  • It's difficult for an asthmatic to even hear the word "wheezy" and not reach for an inhaler.

"Out of register space (ugh)" -- vi

Working...