Linux Software

Smallest Possible ELF Executable? 460

taviso writes "I recently stumbled across this paper (google cache), where the author investigates the smallest possible ELF executable on linux, some interesting stuff, and well worth a read. The author concludes, 'every single byte in this executable file can be accounted for and justified. How many executables have you created lately that you can say that about?'
  • by mesocyclone ( 80188 ) on Saturday October 19, 2002 @12:54PM (#4485138) Homepage Journal
    It isn't amazing until it's also palindromic!
  • by Anonymous Coward on Saturday October 19, 2002 @12:58PM (#4485152)
    I just heard the news on slashdot -- Frodo Baggins, the smallest elf, was just executed! No other details were available.
  • by ebuck ( 585470 ) on Saturday October 19, 2002 @12:59PM (#4485153)
    Last time I read this on Slashdot was less than a year ago. I imagine in 4 or 5 months we'll see it again.

    The article is great. It really is a good intro to refresh your assembly, understand ELF, and do neat stuff. I still have the tiny assembler installed on my machine from the last go-round.

    I've heard of a guy who is trying to make the world's smallest 'cat' program. I wonder how many other utilities have been similarly "optimized".
  • Turbo Pascal (Score:2, Interesting)

    I seem to remember making some damn small Turbo Pascal .COM files. Under 4096 bytes, IIRC.
  • by tuxedo-steve ( 33545 ) on Saturday October 19, 2002 @01:00PM (#4485163)
    ... wanting to execute the smallest possible elf. You Americans and your bloodsports. Barbarians.

    If you guys go ahead with your cold-hearted plan to execute this elf, the Olsen twins better watch their backs next time they're in Ireland, if you catch my drift.
  • by Fuzzums ( 250400 ) on Saturday October 19, 2002 @01:01PM (#4485165) Homepage
    in assembly: RET

    All this one-byte program does is terminate execution. If it's ever infected by a virus, you'll see soon enough, because the size will have increased.

    Of course, with today's macro viruses this doesn't work anymore :(
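    For the curious, a minimal sketch of that trick (this is a DOS .COM program, not an ELF one; the filename and build line are illustrative):

    ; tiny.asm -- a one-byte DOS .COM program consisting of a single RET.
    ; On entry, DOS pushes a zero word on the stack, and offset 0 of the
    ; PSP (at CS:0000) holds an INT 20h "terminate program" instruction,
    ; so RET pops 0000h and jumps straight to program termination.
    ; Assemble (illustrative): nasm -f bin tiny.asm -o tiny.com
            org 0x100       ; .COM programs load at offset 100h
            ret             ; the entire program: one 0xC3 byte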
  • by Anonymous Coward on Saturday October 19, 2002 @01:02PM (#4485168)
    Linux software is horribly bloated, like even "ls" is above 30k, that's just insane for a program that's supposed to just list files in a directory. About time someone did something about it.
    • Well it still manages to list files faster than my eyes can read them.

      So don't expect me to do anything about it.
    • Excellent troll! (Score:4, Insightful)

      by PurpleFloyd ( 149812 ) <`zeno20' `at' `attbi.com'> on Saturday October 19, 2002 @01:49PM (#4485391) Homepage
      Linux software is horribly bloated, like even "ls" is above 30k
      ls is probably statically linked (all necessary libs reside within the executable), so it will function in almost any circumstance where the executable itself is not corrupted. Would you really want to try to repair a broken system without ls? Most critical utilities and shells are available in statically-linked forms (if not, you can do it yourself). While executable size is an important consideration, it isn't the only one. I would rather have a set of basic programs (like ls) that work even if all the lib directories are toasted than save a few K here and there and have a system that could never pull itself back up if broken.
      • Re:Excellent troll! (Score:5, Informative)

        by DeeKayWon ( 155842 ) on Saturday October 19, 2002 @02:10PM (#4485453)
        Funny. My /bin/ls (Debian unstable) is nearly 60k, yet is dynamically linked, and is even stripped.

        % ls -l /bin/ls
        -rwxr-xr-x 1 root root 59592 Oct 8 20:17 /bin/ls*

        % ldd /bin/ls
        librt.so.1 => /lib/librt.so.1 (0x40022000)
        libc.so.6 => /lib/libc.so.6 (0x40034000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x40147000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

        % file /bin/ls
        /bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.0, dynamically linked (uses shared libs), stripped
  • by Anonymous Coward on Saturday October 19, 2002 @01:02PM (#4485171)
    This makes my new 100-gig hard drive seem WAY too big.
  • Glibc is the thief! (Score:2, Informative)

    by Goodbyte ( 539941 )
    I've always wondered what all the glibc overhead (compared to, for instance, uClibc) does. I've never noticed any functional difference when setting up an initrd image using uClibc instead of glibc.
  • There isn't much practical here beyond an ELF education, but it was a very interesting read... and being able to stuff a payload into the header? Nicely done. But given that, in most cases, disk space is vast and memory is plentiful, there isn't much in the way of usefulness. Maybe in some niche applications running on tight hardware... but running Linux?
    • But given that, in most cases, disk space is vast and memory is plentiful, there isn't much in the way of usefulness. Maybe in some niche applications running on tight hardware... but running Linux?
      Consider for a moment what the GNU world would be like if every byte, particularly in our base system, were accounted for. Imagine a Linux distribution on the shelf with a "Zero Overhead" label on it - and being able to mean it!

      Consider Linux 2.4.x running on a 486 with 16MB of RAM - and having 14 of it free for applications even with init and Bash running.

      Now expand the concept to GUI applications, XFree86, etc. and think of how blazingly FAST the entire Linux experience would be, even on the most mediocre hardware. People would get a CPU upgrade and their systems would boot to KDM as if it was already loaded.

      Consider too that every 8KB of program code (an assumption based on my own HDDs and limited knowledge of IDE transfer code) requires a separate disk read operation to load into cache. Every 8KB that's shaved off an application's startup routines is one less disk read, which means those dusty old ATA33 hard drives would suddenly seem a lot more worthwhile to keep around (not to mention they'd be big enough, what with reduced size constraints) - an especially Good Thing<TM> considering recent changes in manufacturer policy [slashdot.org] where new drives are concerned.

      The excuse that CPU/RAM/HDD is inexpensive is a lousy one at best. It's cheap because bloated programs and operating systems have driven up demand, which has caused a surge in supply, which has dropped the prices. Imagine a world, though, where it was only the Windows weenies who had to trundle out to their resident computer store [highfield.ca] every other month to accommodate their latest cadre of software updates. We'd be able to laugh at them, knowing full well that our K6-2/400s were smoking their brand-new P4 3.0GHz super-screamer systems.

      </RANT>

  • umm.... yeah? (Score:2, Insightful)

    by Lxy ( 80823 )
    Basically, what the author is saying is that by default, C is somewhat bloated (you need to include massive libraries just to use one function). Writing system-level calls in assembly can replace the unnecessary bloat of a library that's only being used by one function.

    I remember this trick from when I was learning x86 assembly. I wrote a hello world program in assembly. Assembled, it came to something like 35 bytes. In C++, it took over 10K.

    Now, also note his statement that he is abandoning portability, because he's using Linux-specific system calls. So, in a nutshell, C++ makes big code that's portable; assembly makes tiny code that's static. (See the syscall sketch at the end of this thread.)

    Did I miss something, or was this a long-winded article about why assembly is better than C++?
    • Re:umm.... yeah? (Score:4, Interesting)

      by Leonel ( 52396 ) on Saturday October 19, 2002 @01:07PM (#4485203) Homepage
      Did you bother to read it?

      From the article, after the first try in asm:

      Looks like we shaved off a measly twelve bytes. So much for all the extra overhead that C automatically incurs, eh?
      • Yes, but he goes on to say that he was still using a C library function to END the program; dropping that is how he knocked off 2K+ of space when he wrote it in assembly, forcing incompatibility with other platforms.
    • Assembled, it came to something like 35 bytes. In C++, it took over 10K.

      Obviously you weren't using the ELF format then:

      There is no getting around the fact that the 45th byte in the file, which specifies the number of entries in the program header table, needs to be non-zero, needs to be present, and needs to be in the 45th position from the start of the ELF header.

      Maybe ELF is just too inefficient. :)
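      For reference, a minimal sketch of the libc-free exit the article builds on (file name and build line are illustrative, not the author's exact listing):

      ; tiny_exit.asm -- exit with status 42 via the Linux x86 syscall
      ; interface, with no C library involved at all.
      ; Build (illustrative): nasm -f elf tiny_exit.asm && ld -o tiny_exit tiny_exit.o
      section .text
      global _start
      _start:
              mov     eax, 1          ; eax = 1 = __NR_exit
              mov     ebx, 42         ; exit status
              int     0x80            ; invoke the kernel

      Even this still carries a full ELF header, program header, and whatever padding ld emits, which is where the article's header-crunching comes in.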
  • by Masque ( 20587 ) on Saturday October 19, 2002 @01:07PM (#4485200)
    This guy clearly doesn't get the point!

    67% of Americans are overweight. They can't account for most of the bites they use. By developing software that is just as bloated, the users feel good about themselves.

    This kind of skinny programming is very insensitive to the fatass society we Americans live in! Hopefully the U.S. Congress hears of this soon, so that they may legislate this kind of software right off the face of the earth.

    Masque, head of the Sensitive Programming Foundation*

    [*A division of Maxtor Corporation; come check out our new 320GB drives, featuring room for tomorrow's applications...today.]
  • Windows exe (Score:4, Interesting)

    by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Saturday October 19, 2002 @01:07PM (#4485201)
    Cas, of StorageReview.com's forums, created a 324-byte Windows 2000 PE executable. It completely blew away all of mine, the smallest of which were about 700 bytes.
  • by mbogosian ( 537034 ) <<matt> <at> <arenaunlimited.com>> on Saturday October 19, 2002 @01:09PM (#4485209) Homepage
    every single byte in this executable file can be accounted for and justified

    The author's sanity, however, cannot.
  • Not bad... (Score:5, Interesting)

    by Captain Pedantic ( 531610 ) on Saturday October 19, 2002 @01:10PM (#4485220) Homepage
    But I'd like to see them get a Breakout clone in 1K [demon.co.uk]
  • Even Shorter... (Score:3, Interesting)

    by MarvinMouse ( 323641 ) on Saturday October 19, 2002 @01:12PM (#4485223) Homepage Journal
    While the program would be completely useless, you could make it even shorter by having it return absolutely nothing (just having it execute and finish).

    It could be useful for catching when something starts to modify programs on your computer, because if the "thing" just modifies programs, it will recognize this as a program and increase its size noticeably.

    I really like the 45-byte program, though; too bad that once it dropped below 100 bytes, it became totally non-compliant.
    • Re:Even Shorter... (Score:3, Informative)

      by zsmooth ( 12005 )
      No, I don't think you can make it any shorter even by removing that call. The program is 45 bytes, and the 45th byte is required to be there (a critical part of the ELF header), or else it won't execute at all.
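      To make this concrete, here is a conservative, spec-abiding version of the whole file laid out nasm-style, along the lines of the article's listing (label names are illustrative). The dw 1 at offset 44 (e_phnum) is that required 45th byte:

      ; tiny.asm -- a complete, non-overlapping tiny ELF executable (91 bytes).
      ; Build (illustrative): nasm -f bin tiny.asm -o tiny && chmod +x tiny
      BITS 32
              org     0x08048000
      ehdr:                                   ; Elf32_Ehdr (file offsets in comments)
              db      0x7F, "ELF", 1, 1, 1    ;  0: magic; 32-bit, LSB, version 1
              times 9 db 0                    ;  7: padding
              dw      2                       ; 16: e_type      = ET_EXEC
              dw      3                       ; 18: e_machine   = EM_386
              dd      1                       ; 20: e_version
              dd      _start                  ; 24: e_entry
              dd      phdr - $$               ; 28: e_phoff
              dd      0                       ; 32: e_shoff
              dd      0                       ; 36: e_flags
              dw      ehdrsize                ; 40: e_ehsize    = 52
              dw      phdrsize                ; 42: e_phentsize = 32
              dw      1                       ; 44: e_phnum <-- the 45th byte
              dw      0                       ; 46: e_shentsize
              dw      0                       ; 48: e_shnum
              dw      0                       ; 50: e_shstrndx
      ehdrsize        equ     $ - ehdr
      phdr:                                   ; Elf32_Phdr
              dd      1                       ; p_type = PT_LOAD
              dd      0                       ; p_offset
              dd      $$                      ; p_vaddr
              dd      $$                      ; p_paddr
              dd      filesize                ; p_filesz
              dd      filesize                ; p_memsz
              dd      5                       ; p_flags = read + execute
              dd      0x1000                  ; p_align
      phdrsize        equ     $ - phdr
      _start:
              mov     bl, 42                  ; exit status (ebx starts out zeroed)
              xor     eax, eax
              inc     eax                     ; eax = 1 = __NR_exit
              int     0x80                    ; invoke the kernel
      filesize        equ     $ - $$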
  • by Anonymous Coward on Saturday October 19, 2002 @01:12PM (#4485224)
    Hard drive sizes being what they are now, the smallest sector size I see is 512 bytes. If the file stored in that sector is smaller than 512 bytes, it still takes up 512 bytes. Very interesting article, however.
  • by ksw2 ( 520093 ) <[obeyeater] [at] [gmail.com]> on Saturday October 19, 2002 @01:12PM (#4485229) Homepage
    I used to kill myself trying to strip a few lines of code from my programs... in my mind, I was trying to emulate the PDP hackers of the 60s (my heroes) by finding the one "Right Thing" for each program.

    Soon I realized that smaller programs are not the end-all goal of programming. If a slightly bigger program is easier to understand for the next person who modifies or maintains it, then that is the new "Right Thing" for that application... and I realized the efficient programming of the PDP days was a byproduct of necessity more than anything else. It's seldom needed with today's blazing hardware capabilities.

    This isn't to deny that many of today's programs are over-bloated; it's just to point out the trade-off between small and easy to understand.

  • by tomoose ( 449824 ) on Saturday October 19, 2002 @01:12PM (#4485230)
    Reminds me of one of Bill Gates' first programs - Micro-Soft's 1975 Altair BASIC. Unfortunately the page I wanted to link to has gone, but this is something from The Register at the time: http://www.theregister.co.uk/content/4/18949.html

    Finally found a web archive of the page I wanted: http://web.archive.org/web/20011031094552/www.rjh.org.uk/altair/4k/index2.html

    A real pity that standards have slipped so much since then.

    (I refuse to post anonymously even though I have mentioned Microsoft in a thread about Small Exes. So there :p )
  • by TibbonZero ( 571809 ) <Tibbon@@@gmail...com> on Saturday October 19, 2002 @01:13PM (#4485232) Homepage Journal
    We really need more efficient programming in OSes today. Look at the system requirements for OSes over the past few years. It's gone crazy. Check out the requirements for NT Workstation 4.0 [microsoft.com], Windows XP Pro [microsoft.com] and Windows 2000 Pro [microsoft.com].

    Doesn't something seem messed up? What have we really gained since 4.0 that justifies 4x the memory, 3x the processor, and almost 15x the hard drive space? Is USB and FireWire support really that big? And have you ever tried to run XP on the minimum system? It doesn't work so well. I remember being able to tweak a system to run Windows 95 on a 386 with 5MB of memory and a 45MB hard drive. It wasn't pretty, but it could run. Today if you aren't going 1GHz+, they want to leave you behind.

    They are just using really fast hardware as an excuse for bloating the code.
    Even Linux (Red Hat more so) is guilty of this.
    Remember when awesome games could fit on a handful of floppies? I think that could fly today if they tried. Look at the demo scene: 64K can do a lot of graphics. The most awesome games, like Betrayal at Krondor, were only a few floppies. Sure, if you have big hardware, use it, but don't waste it. Programmers are just getting slack, including (literally) everything in the world and not writing anything for themselves. They aren't looking to optimize stuff, just to kick it out and make money (obviously open source isn't guilty of the money or the fast-kickout thing)...
    • It'll go full circle at some point. A kind-of example might be PalmOS, insofar as it is wildly stripped down and slim when compared with Windows CE. The problem is that it costs a lot to develop really beautifully engineered code. Some day it'll happen, though - like a nice, tight office suite that loads ridiculously quickly and does everything you want.
    • by coupland ( 160334 ) <dchaseNO@SPAMhotmail.com> on Saturday October 19, 2002 @02:02PM (#4485425) Journal

      Unfortunately my take on this situation is a bit more sinister. Your post mentions that NT 5.0 (Windows 2000 Pro) requires 64M of RAM, yet NT 5.1 (Windows XP) requires 128M of RAM. Why the twofold increase for a minor upgrade? Well, consider two things:

      1. Windows XP was released during one of the slowest hardware sales slumps in PC history. All the big players were hoping to see XP spur sales. Not coincidentally, for many people XP required a new PC.
      2. Microsoft can only stand to benefit from these PC sales in the form of OEM licenses.

      Yes, I'm cynical, but I've always been of the belief that the bloat in XP is engineered, not the simple result of bad programming. To think that the project managers and marketing don't talk about these sorts of things is naive.

    • by Fastolfe ( 1470 ) on Saturday October 19, 2002 @02:46PM (#4485588)
      I think a lot of it is due to pressure to get a product out. Developers rely exclusively nowadays on high-level languages, even in OS design, and those who write the compilers don't spend as much time on getting good, compact, precise, optimized code out of high-level code. Nobody cares. CPU is cheap, hard disk is cheap. Why should they work to make their stuff efficient when they can just claim their product is so advanced it requires twice the resources?

      Part of it also lies on the shoulders of developers. A lot of developers today are simply programmers that learned C in high school. They have little understanding of machine languages, assembly, or the CPU architectures they're coding for. They know just what the high-level languages look like and one or two ways of accomplishing their goal. What they need to know is how their software design decisions actually get implemented by the assembler and executed by the architecture. Memory efficiency never even crosses their mind. Who wants to pay for programmers that actually know their shit when they can just claim their product is so advanced it now requires four times the resources?

      Perhaps this is another area in which open source software can shine some day...
      • by yerricde ( 125198 ) on Saturday October 19, 2002 @03:40PM (#4485854) Homepage Journal

        CPU is cheap, hard disk is cheap.

        Maybe on PCs, but not on embedded systems, handheld systems, or game consoles. The Game Boy Advance, for instance, has only 384 KB of RAM, and all but 32 KB are 16-bit bus width with muchos wait states. Many microcontrollers inside such things as microwave ovens are as powerful as an Atari 2600 VCS, with 128 bytes of RAM and about 12 bytes of VRAM (if that).

    • Several things (Score:3, Informative)

      by Sycraft-fu ( 314770 )
      1) The increase in OS requirements is partially due to the increase in OS functions. XP provides a lot more eye candy than NT, which needs more processor power to handle. You may not think it's a good idea, but most people like it.

      2) The increase in OS requirements is mainly due to an increase in software requirements as a whole. An OS is worthless if you can't run anything on it, so you need to set your requirements with software in mind. MS made this mistake with Windows 95. Yes, technically IT would run with 4MB of RAM, but that wasn't enough to load anything else. XP's stated minimum isn't the actual minimum, but a practical one when you account for applications.

      3) As others have mentioned, compact code comes at the price of maintainability. Sure, I can write a program in 100% assembly, and then, if I'm really good, tweak the machine code to make sure it is as efficient as possible. Now try to maintain that. This is hard enough if it's a tiny app, but if it is something large like, say, Mozilla, even the original programmer would find maintenance very difficult, and anyone else would find it almost impossible.

      4) Along those lines, portability requires that you code in a higher-level language, and often that you make some changes that increase your code size. If you do everything in optimized assembly, well, it's a one-platform thing. I can guarantee you'd have to do a massive rewrite of an assembly Windows app to make it run on x86 Linux, just because of the API differences. If you are talking about another hardware platform, then it's a total and complete rewrite.

      5) Your 64K demo thing I'm assuming is referring to the now-infamous Farbrausch demos. It is simply stunning what they can get done in 64K, BUT it comes at a huge price. First there is the memory usage: look at your task manager sometime when one of those is running; they use something like 80MB. Because of their tiny disk usage they have to decompress to memory. Second, their compatibility is horrible: their newer one, FR22, works properly on my system at work, but not at home, the only big difference being that at home I have a GeForce 4 and at work I have a GeForce 3. Finally, these things are only made possible by the "bloated" Windows framework, with things like DirectX to simplify low-level access.

      6) Most people see little point in trying to make things run well on a 386 when you can get an entire system running at over 1GHz for about $500.
  • 3d Tube in 256 bytes (Score:5, Interesting)

    by timbong ( 311303 ) on Saturday October 19, 2002 @01:14PM (#4485241) Homepage
    I think this [pouet.net] is more interesting and the source code (assembler) is included too.
    • 256b... (Score:5, Interesting)

      by Skiboo ( 306467 ) on Saturday October 19, 2002 @01:54PM (#4485406) Homepage
      www.256b.com [256b.com] is a site full of little 256 byte windows programs. It's amazing what they can do in that tiny amount of space. All assembler, all with source code, it's worth checking out.

      It's kind of a contest to see who can come up with the best 256 byte program.
  • by Anonymous Coward
    You know -- the bullshitters that say: "Optimizing C compilers write small/better/faster code than hand-tuned assembly."

    Hand-tuned assembler is always faster/smaller/better than C code, except when it comes to portability.

    And this just goes to show that fact again!
    • by Waffle Iron ( 339739 ) on Saturday October 19, 2002 @01:59PM (#4485417)
      Hand-tuned assembler is always faster/smaller/better than C code, except when it comes to portability.

      Maybe in theory. In practice, once your program gets too big to fit it all in your head at once, you're going to run out of the mental energy required to stay ahead of the C compiler (and remain bug-free).

      If you've disassembled the output of a good optimizing compiler lately, you've seen that it usually produces pretty good code. Except for the inner loops of numerical algorithms, I doubt that anyone will consistently be able to produce code that is more than 25% faster than the C compiler's.

      The thing is, the compiler is able to spit out this code at thousands of lines per minute all day long. It doesn't get tired. The human programmer is going to get tired of the boredom, and will start creating higher level abstractions in assembly. He'll start using macros. He'll use a simplified parameter passing protocol so that he doesn't have to inline and hand-allocate the registers for every little subroutine call.

      Before long, he's fallen behind, and the C code will run faster overall. And the C program will have taken less time to write, as well.

      • 26 bytes (Score:4, Funny)

        by bcrowell ( 177657 ) on Saturday October 19, 2002 @03:38PM (#4485843) Homepage
        Yeah, and now here's a thumb in the eye for all those C bigots and all those assembler bigots:

        $ cat >a.pl
        #!/usr/bin/perl
        exit(42);

        $ chmod +x a.pl
        $ ./a.pl
        $ echo $?
        42
        $ ls -l a.pl
        -rwxr-xr-x 1 bcrowell bcrowell 26 Oct 19 12:41 a.pl

        Only takes up 26 bytes on my hard disk!

  • Now the author has to shrink the cluster size of his hard drive, and make up some new, more efficient indexing structure, so that any shrinking of the executable actually matters... Whether the executable is 32 bytes or 4096 bytes, if you only have 4K clusters, you're still eating the same amount of space.
  • nasm (Score:3, Informative)

    by elykyllek ( 543092 ) on Saturday October 19, 2002 @01:22PM (#4485277) Homepage
    The NASM assembler site that he mentions [web-sites.co.uk] in the article seems /.'d; there's a SourceForge project [sourceforge.net] site instead.
  • I remember years ago writing a run-length decompressor in Z80 asm that was 32 bytes. I think the compressor was 50-something bytes!

    I also recall adding up the clock cycles for all of this to try and find the fastest implementation!

    I'm over it now though!

    I'm just glad to forget those cassette-tape based, hand coded assembler days, but it is kind of a shame to see how bloated code has got. If only I'd had a macro-assembler on my Sinclair ZX Spectrum (Timex something or other in the US) in those days... oh the world could've been mine!!
  • by 4minus0 ( 325645 ) on Saturday October 19, 2002 @01:33PM (#4485321)
    Linus wept.

    linuxdoc.org
    Chapter 11
    Verse 35
  • by eng69 ( 618859 ) on Saturday October 19, 2002 @01:39PM (#4485345)
    The current state of elf processors demands an astounding amount of system resources. When combined with a dwarf co-processor, it provides for unparalleled carnie access.
  • It's not so much the executable size that matters (when talking about bloat); it's the memory consumption of the program. I could write a very small program, size-wise, that would drain your memory and crash your system, or make it slow down to a crawl.
  • MenuetOS (Score:4, Interesting)

    by jaaron ( 551839 ) on Saturday October 19, 2002 @01:40PM (#4485352) Homepage
    On a similar topic, MenuetOS [menuetos.org] is a full OS written in assembly and fits on a floppy. Yeah, lots of OS's used to fit on floppies, but it's still cool. It's amazing what all you can fit into a small space if you're careful.
  • On bloatnesses (Score:4, Insightful)

    by Ektanoor ( 9949 ) on Saturday October 19, 2002 @01:41PM (#4485354) Journal
    Well, this reminds me of the golden days of DOS (not Denial of Service but Disk Operating System... well, anyway, it didn't make much difference). Back then people fought for every bit of code, and assembler was as popular as C or Pascal.

    However, using assembler this way is not the best use of resources. Frankly, this piece of code is only useful if you need some really tiny program and you are running out of space and speed. Today, 99.99...% of tasks don't need it. The optimal way to use such tricks is to concentrate on tasks that really need "the best and fastest code ever": drivers, and situations where speed is worth gold. Usually this is done by injecting the necessary asm directives into C or some other language. Writing everything in pure assembler is impractical, and the result may become harder to understand than the Rosetta Stone.

    However, the article does make a point about how unoptimised present compilers are. For example, GCC is mostly C in C. That makes it highly portable, but if anyone decided to repeat Turbo Pascal's feat (most of its base code was assembler), I know the binary code would shrink to the impossible. Right now we may not feel this drawback, as bloat still doesn't clog everything. In the future this situation may change, if speed and reliability become higher priorities.

    A note for the bloat FUDders: this is not a reason for Linux distros being bloated. First, learn to be rational about your needs and don't install everything on one box. Second, learn a little bit of administration, maybe some programming, and kick that (mega_kernel) + (some_highly_featured_libs) + (several_unuseful_apps) out of your box. Then you will know that Linux can help fry eggs on your processor with lightning speed. Till then, keep the flame to yourself and read "Why I switched from Mac to Windows".
  • i love this (Score:2, Insightful)

    by sysrequest ( 325177 )
    It may not be all TOO practical, since a lot of people try to ensure that their programs run on multiple architectures and platforms, but I also miss the old (DOS) days when the demo scene tried to optimize their intros to fit half an hour of entertainment into 64K, with full Sound Blaster support. The registers of the VGA cards were abused to no end, and lightning-fast assembler procedures were optimized either for size, or for speed by unrolling loops, etc.

    While that isn't practical anymore these days, a LOT of code has become very sloppy. More than once have I stumbled over some college kid's C app that was supposed to demonstrate linked lists and instead was using one class with an array.

    programming is an art, like acting. many try and are good enough for some purposes, but only a selected few are masters. sounds pretty damn philosophical, don't read too much into it :)
  • by hey ( 83763 ) on Saturday October 19, 2002 @01:46PM (#4485375) Journal
    On Red Hat 8.0 I get:

    $ wc -c /bin/true
    9752 /bin/true

    That's thousands of extra bytes - eek.
  • by Kenneth Stephen ( 1950 ) on Saturday October 19, 2002 @01:48PM (#4485388) Journal

    Looking through the comments here, I see two main threads: (1) Squeezing the last bit of overhead out of a program leads to hard-to-understand, hard-to-maintain code and thus is not worth the effort. (2) What's the big deal anyway, in this era of 100GB disks and 2GHz processors?

    While both these criticisms are valid, they miss the point. Firstly, it wasn't the objective of the author to squeeze the last few bytes out of that program to save resources. He was just putting his hard-earned knowledge to use. He was doing it because he could! This is the same motivation that drives people to climb mountains: because the mountain is there, and because they can climb it. Indeed, if the author were seriously looking into saving resources, he'd hardly be wasting his time on a trivial program, would he?

    Secondly, one of the author's intentions was to demonstrate the limits to which austerity can be taken. Certainly, this was a trivial program, but the same principles could be used to shrink larger, non-trivial programs, and in those cases the savings could be larger. Of course, in those cases the largest savings would come from a good optimizing compiler rather than from crunching the headers together. More importantly, the author has exposed whole new ideas and lines of possibility to programmers.

  • 4K Demos (Score:5, Interesting)

    by Wraithlyn ( 133796 ) on Saturday October 19, 2002 @01:54PM (#4485407)
    Some of the 4K demos I've seen written for ASM competitions completely blow my mind... check out this one [sanction.de]: it's basically a flythrough of the first level of Descent, with texture mapping, source lighting, animated lava and a recharger field, a MIDI soundtrack, etc... all in 4095 bytes!!!

    Here is Sanction's home page; it contains a couple more very impressive 4K demos.
    • Re:4K Demos (Score:3, Informative)

      by Ektanoor ( 9949 )
      The demo scene has always beaten the usual coders. Not long ago we had a national festival with guys coming from all over Russia. Some demos, mainly Amiga and Spectrum, were impressive. Some 3D effects were shown on machines that lack any kind of acceleration, and these things ran at nearly the same speeds we frequently saw on some powerful Pentiums. Besides, the PC demos presented things shrunk to the impossible, with speed, sound, space, and color effects that beat many popular games.

      I wonder what speed and effects a Doom III would have if it were written mainly in asm...
      • Re:4K Demos (Score:3, Interesting)

        by Wraithlyn ( 133796 )
        Do you remember Future Crew? And the legendary Second Reality demo of '93? (Available here [mbhs.edu], but can be hard to run properly on modern systems) Apparently many of those guys are now working at Remedy... which may explain why Max Payne is such a graphically beautiful game... I wouldn't be surprised in the slightest if the Max FX engine employs some nice ASM routines. Also, check out their Final Reality benchmark [instantnetworks.net], the final "cityscape flythrough" is a homage to a nearly identical (albeit flat shaded) sequence in Second Reality. Cool shit.
  • by Jugalator ( 259273 ) on Saturday October 19, 2002 @02:00PM (#4485418) Journal
    Stand-alone console EXE (Release Build):

    #include "stdafx.h"

    int _tmain(int argc, _TCHAR* argv[])
    {
        return 42;
    }

    Size: 20,992 bytes

    To be compared with the non-optimized gcc version at 3,998 bytes. :-)

    I wonder how small you can make a Windows EXE..
    • by Jugalator ( 259273 ) on Saturday October 19, 2002 @02:29PM (#4485529) Journal
      Hm... I stripped stdio.h from the code and replaced _TCHAR* with char*, so stdafx.h doesn't really do much at all. Then I turned on size optimizations and turned off boundary checks etc. in the compiler. Still exactly 20,992 bytes. Huh?? I browsed the exe and there are full text messages like "ooh something corrupted the state of this program and it cannot safely continue". Which is actually a great addition by Microsoft, but can't you remove such things? :-)

      But I guess the .NET compiler has its lower limits, where bloat gets called a feature; it's just surprising that it seems to compile at the minimum size by default... Or perhaps it uses some kind of silly padding, so even if there's less code, the physical size isn't reduced.
  • by shoppa ( 464619 ) on Saturday October 19, 2002 @02:13PM (#4485466)
    It's a well-known fact that
    Any program can have at least one redundant byte removed or optimized away and still function

    In fact, just apply this fact iteratively and you'll find that any program can be written in zero bytes!

  • by cgleba ( 521624 ) on Saturday October 19, 2002 @02:29PM (#4485527)
    http://linuxassembly.org/asmutils.html

    Check it out, download it and assemble it.
    They create the smallest set of binaries for the basic Linux tools that I have found, and they employ a good portion of the stuff mentioned in this paper.

    They make busybox look bloated by comparison.

    Another neat trick is to use the link option "-Wl,--gc-sections" when linking a static binary -- it tries to weed out all the unused portions of the libraries it links against.

    The last trick I usually use is to link against uClibc or dietlibc rather than glibc. Makes a noticeable difference. Red Hat has been working on a C library called "newlib" which is supposed to do the same thing as uClibc or dietlibc, but better (for embedded stuff).
  • by yorgasor ( 109984 ) <.ten.shcetirt. .ta. .nor.> on Saturday October 19, 2002 @02:51PM (#4485611) Homepage
    Just because a program or executable file is smaller doesn't necessarily mean it's more efficient. For instance, some compiler optimizations actually produce larger executables. If you unroll a loop, the compiler generates code for each iteration of the loop, but saves time because it's faster to keep going forward than to branch backwards and run through the code again.

    Similarly, inline functions get inserted directly into each function that calls them. Every caller gets its own copy, which produces larger code but saves a lot of time, since there's no need to push the arguments onto the stack, branch to the new function, and return with the value.

    Finally, the biggest speed gains are generally algorithmic in nature. You can do a bubble sort in just a few lines of code; it's a lot simpler and smaller than the larger, more complicated quicksort or merge sort. But I know which one I'd rather wait for with a million items to sort.

    So remember, just because something is bigger, doesn't mean it's more bloated, and just because something is smaller doesn't mean it's faster or more efficient.
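    As a toy illustration of the unrolling trade-off (a hand-written nasm sketch, not compiler output):

    ; Zero ecx dwords starting at edi, one per iteration:
    ; small code, but one branch per element.
    zero_loop:
            mov     dword [edi], 0
            add     edi, 4
            dec     ecx
            jnz     zero_loop

    ; The same loop unrolled 4x (assumes ecx is a multiple of 4):
    ; roughly four times the body code, but one branch per four stores.
    zero_unrolled:
            mov     dword [edi], 0
            mov     dword [edi+4], 0
            mov     dword [edi+8], 0
            mov     dword [edi+12], 0
            add     edi, 16
            sub     ecx, 4
            jnz     zero_unrolled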

  • by Antity ( 214405 ) on Saturday October 19, 2002 @03:19PM (#4485740) Homepage

    The first few examples are quite noteworthy, but when the author starts to put code inside the ELF header, it gets really ugly...

    Saying that these bytes are "only padding anyway for future extensions" doesn't feel that good. :-)

    This reminds me of early attempts on AmigaOS to shrink and speed up executables. People could be sure that all available Amigas would only use the lower 24 bits of the 32-bit address registers, since the machines could only address 24 bits physically. So they put application data into the upper 8 bits of the registers. Worked fine.

    Then came newer machines which really used the full set of 32 address lines, and all those dirty programs crashed for no obvious reason...

    The author says "if we leave compatibility behind.." but what he's doing is not only leaving inter-OS compatibility behind - what he creates isn't even an ELF executable anymore. It's just something that happens to work with this special Linux version.

    So since this isn't even an ELF executable any more, there's no reason not just to write "exit 42" in bash (which would be an amazing 8 bytes in size *g*).

    Don't misunderstand me, I really like those hacks. But I myself will never, ever again code something that is prone to break in the future just because I didn't follow standards.

    One could say that this is what programming is about. :-) No offence meant.

  • by Billly Gates ( 198444 ) on Saturday October 19, 2002 @08:29PM (#4487282) Journal
    It's a much better OS. I just wish I had a good editor on it.
