
Linux x32 ABI Not Catching Wind

Posted by Soulskill
from the try-a-bigger-sail dept.
jones_supa writes "The x32 ABI for Linux allows the OS to take full advantage of an x86-64 CPU while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers. Though the x32 ABI limits the program to a virtual address space of 4GB, it also decreases the memory footprint of the program and in some cases can allow it to run faster. The ABI has been talked about since 2011 and has had mainline kernel support since 2012. x32 support within other programs has also trickled in. Despite this, there still seems to be no widespread interest. x32 support landed in Ubuntu 13.04, but no software packages were released. In 2012 we also saw some x32 support out of Gentoo and some Debian x32 packages. Beyond the kernel, support for the x32 Linux ABI landed last year in Glibc 2.16 and GDB 7.5. The only Linux x32 ABI news Phoronix had to report on in 2013 was of Google wanting mainline LLVM x32 support and other LLVM project x32 patches. The GCC 4.8.0 release this year also improved the situation for x32. Some people don't see the ABI as worthwhile: it still requires a 64-bit processor, and the performance benefits aren't convincing enough across workloads to justify maintaining an extra ABI. Would you find the x32 ABI useful?"
This discussion has been archived. No new comments can be posted.

  • no (Score:4, Insightful)

    by Anonymous Coward on Tuesday December 24, 2013 @07:24PM (#45778949)

    no

  • Subject (Score:2, Insightful)

    by Daimanta (1140543) on Tuesday December 24, 2013 @07:24PM (#45778951) Journal

    With memory being dirt cheap I ask: Who cares?

  • Eh? (Score:4, Insightful)

    by fuzzyfuzzyfungus (1223518) on Tuesday December 24, 2013 @07:27PM (#45778973) Journal
    If I wanted to divide my nice big memory space into 32-bit address spaces, I'd dig my totally bitchin' PAE-enabled Pentium Pro rig out of the basement, assuming the rats haven't eaten it...
  • Nice concept (Score:3, Insightful)

    by Anonymous Coward on Tuesday December 24, 2013 @07:34PM (#45779025)

    I do not see many cases where this would be useful. If we have a 64-bit processor and a 64-bit operating system, then the only benefit of running a 32-bit binary seems to be a slightly smaller memory footprint. Chances are that is a very small difference. Maybe the program loads a little faster, but is it a measurable, consistent amount? For most practical use cases this technology does not look useful enough to justify compiling a new package. Now, if the process worked with 64-bit binaries and could automatically (and safely) decrease pointer size on them, then it might be worthwhile. But I'm not going to re-build an application just for smaller pointers.

  • Re:Subject (Score:4, Insightful)

    by mellon (7048) on Tuesday December 24, 2013 @07:48PM (#45779103) Homepage

    Memory? What about cache? Is cache dirt cheap?

  • by Anonymous Coward on Tuesday December 24, 2013 @08:18PM (#45779241)

    My dad drives a Ford and your dad drives a Chevy. Your dad sucks.

    Didn't we do this already? Like when we were twelve years old.

  • by billcarson (2438218) on Tuesday December 24, 2013 @08:22PM (#45779267)
    Wouldn't this require all common shared libraries (glib, mpi, etc.) to be recompiled for both x86-64 and x32? What am I missing here?
  • Re:Subject (Score:5, Insightful)

    by mellon (7048) on Tuesday December 24, 2013 @08:34PM (#45779327) Homepage

    In answer to my question, no, it is not dirt cheap. For any size cache you will get fewer cache misses if your data structures are smaller than if they are larger. Until the cache is so big that everything fits in it, you always win if you can double what you can cram into it.

  • Re:Subject (Score:2, Insightful)

    by Anonymous Coward on Tuesday December 24, 2013 @08:51PM (#45779453)

    ECC memory is artificially expensive. Were ECC standard, as it ought to be, it would only cost about 12.5% more (1 extra bit for every byte). That is a pittance compared to the cost of the machine and the value of one's data and time. It is disgusting that Intel uses this basic reliability feature to segment their products.

  • Re:no (Score:5, Insightful)

    by mlts (1038732) on Tuesday December 24, 2013 @09:27PM (#45779627)

    For general computing, iffish.

    For embedded computing where I am worried about every chunk of space, and I can deal with the 3-4 GB RAM limit, definitely.

    This is useful and, IMHO, should be considered for the mainline kernel, but it isn't something everyone would use daily.

  • Re:no (Score:5, Insightful)

    by GPLHost-Thomas (1330431) on Wednesday December 25, 2013 @06:16AM (#45781123)
    Well, I do find it extremely useful. Especially in Debian & Ubuntu, we have multi-arch support. For some specific workloads using interpreted languages, it just cuts the memory footprint in half. PHP and Perl, for example. If you have ever run Amavis and SpamAssassin, you certainly know what I mean: they take double the amount of RAM on 64-bit. Since most of our servers run PHP, Amavis and SpamAssassin, this would be a huge benefit (from 800 MB down to 400 MB as the minimum server footprint), while still being able to run the rest of the workload as 64-bit: for example, Apache itself and MySQL, which don't take much RAM anyway compared to these anti-spam dogs.
