Linux x32 ABI Not Catching Wind
jones_supa writes "The x32 ABI for Linux allows the OS to take full advantage of an x86-64 CPU while using 32-bit pointers, thus avoiding the overhead of 64-bit pointers. Though the x32 ABI limits a program to a virtual address space of 4GB, it also decreases the program's memory footprint and in some cases can make it run faster. The ABI has been talked about since 2011 and there's been mainline support since 2012. x32 support within other programs has also trickled in. Despite this, there still seems to be no widespread interest. x32 support landed in Ubuntu 13.04, but no software packages were released. In 2012 we also saw some x32 support out of Gentoo and some Debian x32 packages. Besides the kernel support, last year we also saw x32 ABI support land in Glibc 2.16 and GDB 7.5. The only Linux x32 ABI news Phoronix had to report on in 2013 was of Google wanting mainline LLVM x32 support and other LLVM project x32 patches. The GCC 4.8.0 release this year also improved the situation for x32. Some people don't see the ABI as worthwhile: it still requires 64-bit processors, and the performance benefits aren't convincing enough across workloads to justify maintaining an extra ABI. Would you find the x32 ABI useful?"
no (Score:4, Insightful)
no
Subject (Score:2, Insightful)
With memory being dirt cheap I ask: Who cares?
Eh? (Score:4, Insightful)
Nice concept (Score:3, Insightful)
I do not see many cases where this would be useful. If we have a 64-bit processor and a 64-bit operating system, the only benefit to running a 32-bit binary seems to be a slightly smaller memory footprint. Chances are that is a very small difference. Maybe the program loads a little faster, but is it a measurable, consistent amount? For most practical use cases this technology does not look useful enough to justify compiling a new package. Now, if the process worked with existing 64-bit binaries and could automatically (and safely) decrease their pointer size, it might be worthwhile. But I'm not going to rebuild an application just for smaller pointers.
Re:Subject (Score:4, Insightful)
Memory? What about cache? Is cache dirt cheap?
Re:Wont use Linux without it! (Score:2, Insightful)
My dad drives a Ford and your dad drives a Chevy. Your dad sucks.
Didn't we do this already? Like when we were twelve years old.
What about shared libraries? (Score:4, Insightful)
Re:Subject (Score:5, Insightful)
In answer to my question: no, it is not dirt cheap. For any size of cache, you will get fewer cache misses if your data structures are smaller. Until the cache is so big that everything fits in it, you always win if you can double what you can cram into it.
Re:Subject (Score:2, Insightful)
ECC memory is artificially expensive. Were ECC standard, as it ought to be, it would only cost about 12.5% more (one extra bit per byte). That is a pittance compared to the cost of the machine and the value of one's data and time. It is disgusting that Intel uses this basic reliability feature to segment their products.
Re:no (Score:5, Insightful)
For general computing, iffish.
For embedded computing where I am worried about every chunk of space, and I can deal with the 3-4 GB RAM limit, definitely.
This is useful and, IMHO, deserves its place in the mainline kernel, but it isn't something everyone would use daily.
Re:no (Score:5, Insightful)