
Ryan Gordon Ends FatELF Universal Binary Effort

recoiledsnake writes "A few years after the Con Kolivas fiasco, the FatELF project, which aimed to bring 'universal binaries' to Linux so that a single binary file could run on multiple hardware platforms, has been grounded. Ryan C. Gordon, who has ported a number of popular games and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches. Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.' The launch of the project was recently discussed here. The FatELF project page and FAQ are still up."
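
As a rough illustration of the concept (not FatELF's actual on-disk format, which isn't described here), a hypothetical multi-arch container could be as simple as an index of (machine type, offset, size) records followed by the per-architecture ELF images; the loader side then picks the record matching the running machine. A minimal Python sketch of that idea, with an invented magic number and record layout:

```python
import struct

MAGIC = b"FAT1"  # invented magic number, not the real FatELF one

def pack(records, out_path):
    """records: list of (e_machine, path_to_elf) pairs to bundle into one file."""
    blobs = [(machine, open(path, "rb").read()) for machine, path in records]
    header_size = 4 + 4 + 16 * len(blobs)          # magic + count + index entries
    index, offset = [], header_size
    for machine, blob in blobs:
        index.append((machine, offset, len(blob)))
        offset += len(blob)
    with open(out_path, "wb") as f:
        f.write(MAGIC)
        f.write(struct.pack("<I", len(blobs)))
        for machine, off, size in index:
            f.write(struct.pack("<IIQ", machine, off, size))   # 16 bytes per entry
        for _, blob in blobs:
            f.write(blob)

def extract(fat_path, wanted_machine):
    """Return the embedded ELF image whose e_machine matches wanted_machine."""
    with open(fat_path, "rb") as f:
        data = f.read()
    if data[:4] != MAGIC:
        raise ValueError("not a fat container")
    (count,) = struct.unpack_from("<I", data, 4)
    for i in range(count):
        machine, off, size = struct.unpack_from("<IIQ", data, 8 + 16 * i)
        if machine == wanted_machine:
            return data[off:off + size]
    raise LookupError("no image for this machine type")
```

In the real proposal this selection logic would live in the kernel's ELF loader (and in the dynamic linker, for fat shared libraries) rather than in a userspace script; the sketch only shows the bookkeeping involved.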
  • by harmonise ( 1484057 ) on Thursday November 05, 2009 @02:34PM (#29997484)

    He needs thicker skin if he's going to deal with the LKML crowd. I wouldn't give up just because it's not merged into the official tree.

  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Thursday November 05, 2009 @02:41PM (#29997606)

    The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).

    Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming. To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC. Plus possibly MIPS, SPARC, and Itanium. Most of those in 32-bit and 64-bit flavours. Those elves are going to be very fat indeed.

  • by vadim_t ( 324782 ) on Thursday November 05, 2009 @02:44PM (#29997634) Homepage

    I don't get the point in bringing it up.

    Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.

    This in particular seems like a solution in search of a problem to me, especially since on a 64-bit distro pretty much everything, with very few exceptions, is 64-bit. In fact, I don't think 64-bit distributions contain any 32-bit software except closed-source programs that can't be ported, plus compatibility libraries for applications the user installs manually. So there doesn't seem to be much point in solving a problem that matters less and less as time passes and proprietary vendors release 64-bit versions of their programs.

  • rude (Score:3, Insightful)

    by QuietLagoon ( 813062 ) on Thursday November 05, 2009 @02:47PM (#29997678)
    ...some showed up just to be rude...

    Oh well, so it goes with parts of the Linux culture.

  • by bcmm ( 768152 ) on Thursday November 05, 2009 @02:47PM (#29997686)
    This idea is kind of broken for Linux. On Mac OS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to data. On Linux, with a couple of dozen architectures, executable code *is* going to start to take up relevant amounts of space, and the effort involved in preparing the binaries will be nontrivial. If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32-bit) would need to check which archs a binary supported before downloading it, which is about as difficult as choosing which one to download would've been.
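
    Checking which arch a given binary was built for already means reading the ELF header (this is what `file` and `readelf -h` report). A minimal sketch of that check in Python; the e_machine offset and the EM_* values come from the ELF specification, and the name table here is deliberately incomplete:

```python
import struct

# A few e_machine values from the ELF specification (incomplete on purpose).
EM_NAMES = {3: "x86", 8: "MIPS", 20: "PowerPC", 40: "ARM",
            43: "SPARC v9", 62: "x86-64", 183: "AArch64"}

def elf_machine(path):
    with open(path, "rb") as f:
        ident = f.read(16)                      # e_ident
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little-endian, 2 = big-endian
        f.read(2)                               # skip e_type
        (machine,) = struct.unpack(endian + "H", f.read(2))
    return EM_NAMES.get(machine, "e_machine %d" % machine)

print(elf_machine("/bin/ls"))
```

    A fat container would simply move this decision from the person downloading to the loader.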
  • by Gopal.V ( 532678 ) on Thursday November 05, 2009 @02:48PM (#29997696) Homepage Journal

    In the entire forked-up mess of the unix tree, there was only one thing that anybody & everybody cared about - source compatibility. C99, POSIX, SUSv3: so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week. For a good part of 4 years, I worked on portable.net, which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs. It worked nearly everywhere too.

    Binary compatibility never took off because there is so little that can be shared between binary platforms. Sure, the same file could run on multiple archs, but in reality that is no different from a zip file with six binaries in it. Indeed, it needs someone to build them all in one place to end up with one of these, which is more effort than letting each distro arch-maintainer do a build whenever they please. OS X build tools ship with the right cross-compilers in Xcode, and they have more of a monoculture in library versions, looking backwards.

    Attempting this in a world where even an x86 binary won't work on all x86 Linux boxes (static linking, yeah... yeah) is a solution with no real problem attached. Unless you can make the default build-package workflow do this automatically, this simple step means a hell of a lot of work for the guy doing the build. (See the build-driver sketch after this comment.)

    And that's just the problems with getting a universal binary. Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches. But colour me uninterested in this particular problem he's trying to solve. If he manages to convince me that it's a real advantage over 4 binaries that I pick & choose to download, hell ... I'll change my opinion so quickly, it'll leave you spinning.
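
    For what it's worth, the per-arch compile step itself can be scripted once cross toolchains are installed; the genuinely hard part is having each target's headers and libraries around. A rough sketch of such a build driver, assuming Debian-style cross compilers are present (the target triplets and the source file name are placeholders):

```python
import subprocess

# Placeholder target triplets; each needs its cross gcc plus target libraries installed.
TARGETS = ["x86_64-linux-gnu", "arm-linux-gnueabihf", "powerpc-linux-gnu"]

def build_all(source="hello.c"):
    outputs = []
    for triplet in TARGETS:
        out = "hello.%s" % triplet
        # The compile itself is one command; getting the target's libs right is the real work.
        subprocess.check_call(["%s-gcc" % triplet, "-O2", "-o", out, source])
        outputs.append(out)
    # These per-arch binaries are what a fat container (or a plain zip) would bundle.
    return outputs

if __name__ == "__main__":
    print(build_all())
```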

  • by BitZtream ( 692029 ) on Thursday November 05, 2009 @02:56PM (#29997832)

    Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.

    Except this seems to be the only place that doesn't acknowledge the usefulness of fat binaries.

    Windows has had them since DOS, although no one uses them. OS X has them, and FreeBSD has talked about them and isn't flatly rejecting the idea.

    I've seen many features in my career that seemed pointless at first. Tabbed browsing, for instance: my OS already supported tabs of a sort on the taskbar. Then, once you have them and use them for a while, you come back and say 'hey, that's a really good idea'.

    People who are anti-closed source need to just go hide in a cave somewhere and talk about when the revolution is going to come. There will be a place for closed source and open source, side by side, for the foreseeable future. Trying to deny that is only hurting yourself.

  • Re:Good riddance (Score:2, Insightful)

    by maxume ( 22995 ) on Thursday November 05, 2009 @02:56PM (#29997838)

    That's terrible! They will quickly become dehydrated and lose flavor.

  • On Linux, with a couple of dozen architectures,

    Kind of, but not really. No more than there are four architectures (PPC, ARM, X86, X86_64) for OS X. There are two architectures for Linux that actually matter, and they're the same two that run Snow Leopard: X86 and X86_64.

    I can see why people are going to get up in arms about this. I've been as big a RISC booster as anyone, I think Apple gave up on PPC too soon, and I'm still bitter about Alpha, but that game's over. 32-bit and 64-bit Intel architectures are what matter, and those are the ones that almost all binaries will work for. I'm not running YDL any more, and neither are you. Instruction sets lost to marketing. The game's over, the fat lady's sung, picked up her paycheck, and gone home to watch House on her TiVo. Give it up and quit holding up the bus.

  • by cheesybagel ( 670288 ) on Thursday November 05, 2009 @02:59PM (#29997878)
    It just shows Ryan isn't used to contributing free software to someone else's project. I once had to wait months before I got my code accepted into a free software project, and it wasn't the kernel. If the maintainers added every submission, the project would end up an unstable, unmaintainable mess. Code can last a long time, and someone will have to maintain it even after the author has lost interest. I am especially leery of code that touches a lot of different places at the same time, as is undoubtedly the case here.
  • by BitZtream ( 692029 ) on Thursday November 05, 2009 @03:04PM (#29997952)

    The problem isn't that it's not possible, it's that it's hard. Your argument is that since it's hard now, since the tools aren't ready for it, it shouldn't be done ...

    Sounds pretty silly to me.

    It would be hard to start from scratch and write a modern OS ... but that is indeed what Linux is.

    If you never take the effort to make the hard easier, it will remain hard. Changing from single-threaded to multithreaded is hard; do you think we should not do that either, because the tools to do it don't make it a cakewalk RIGHT NOW?

    Seems a silly way to look at things to me. Fortunately other people made multithreading work on other platforms long before x86 could really do it properly, which made it easier to do on x86. Imagine if Linus said 'multithreading in an OS is hard on x86, you have to use timer interrupts and blah blah blah, I'm not doing it' back in the 90s ...

    For you, it might not be any different, but you won't know until you give it a try.

    For grandma who has a netbook running an ARM processor, and a desktop or laptop running an x86 processor, it's probably a little different, don't you think? Do you want to remain in this hole forever, or do you want to get out and catch up to the rest of the world?

  • by xianthax ( 963773 ) on Thursday November 05, 2009 @03:08PM (#29998020)

    Maybe it's just me, but I see zero advantage in an executable that contains multiple binaries.

    Shouldn't this all be handled by the package manager? Isn't including all these binaries just jacking up download sizes for no gain?

    A boot CD that can run on multiple archs is the only real use I see for this, but I'd have to think there is a better way to handle that than changing the fundamentals of executables and libraries.

    Maybe he received a less than warm reception from other devs because his idea provides virtually no benefit to the end user and requires more work by the devs.

  • by Anonymous Coward on Thursday November 05, 2009 @03:18PM (#29998142)

    For grandma who has a netbook running an ARM processor, and a desktop or laptop running an x86 processor, it's probably a little different, don't you think?

    No, because a package manager makes it easy to install software for the current arch. Even grandma doesn't benefit from having x86, AMD64, and ARM binaries in a single package, much less from some random untrusted binary she downloaded from the internet.
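
    Picking the right binary for the current arch is indeed a short script, which is part of the argument here. A sketch of what such a per-arch launcher could look like for the "archive with several binaries" approach (the directory layout, binary names, and arch mapping are invented for illustration):

```python
import os
import platform
import sys

# Map what the kernel reports to an invented per-arch subdirectory name.
ARCH_DIRS = {"x86_64": "amd64", "i686": "i386", "i586": "i386",
             "armv7l": "armhf", "aarch64": "arm64"}

def run_native(app_dir, program):
    machine = platform.machine()                 # e.g. "x86_64" or "armv7l"
    subdir = ARCH_DIRS.get(machine)
    if subdir is None:
        sys.exit("no build of %s for %s" % (program, machine))
    binary = os.path.join(app_dir, subdir, program)
    os.execv(binary, [binary] + sys.argv[1:])    # replace this process with the real binary

run_native("/opt/example-app", "example-app")    # hypothetical install location
```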

  • by wowbagger ( 69688 ) on Thursday November 05, 2009 @03:21PM (#29998194) Homepage Journal

    The issue wasn't that there were lots of people saying "That's a stupid idea" or "That's a stupid implementation of an otherwise good idea."

    The issue was lots of people saying "You are stupid."

    There is a big difference.

    I'd weighed in on this, because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is, ideally, the same software load. (Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of code doesn't make them any less embedded.) People called ME stupid - not "That's a stupid design" or "That's a stupid reason to want FatELF", but "You are stupid."

    Yes, developing a thick skin, so that when somebody says "That's a stupid idea" you realize that it is the IDEA, and not YOU, that they are criticizing, is important to any engineer.

    But at the same time, saying to somebody "You are stupid" just because you don't like their idea, or don't see how it applies to your needs, is immature and unprofessional.

  • Re:a better idea.. (Score:4, Insightful)

    by idontgno ( 624372 ) on Thursday November 05, 2009 @03:24PM (#29998228) Journal
    So you're advocating Java?
  • by cheesybagel ( 670288 ) on Thursday November 05, 2009 @03:25PM (#29998262)
    So one of the project's developers tracked down the issue online, for free, and you think their support sucks? I won't even share my issues with a certain piece of closed-source software here, which required going through many layers of corporate bureaucracy to fix.

    I once found a bug in DOSBox which none of the developers cared about. I debugged and read the code myself, made a patch that "fixed" the bug (although my fix introduced bugs elsewhere), and posted it along with screenshots showing the game working when it didn't even boot before. This was enough for a couple of people to start talking about it. The next release of DOSBox came out and, guess what, the bug was fixed. Properly.

    With closed-source software you are truly stuck, because whoever developed the software is the only one who can fix it. You cannot fix it yourself even if you wanted to and were able to. How is that better?

  • by ckaminski ( 82854 ) <slashdot-nospam@ ... m ['r.c' in gap]> on Thursday November 05, 2009 @03:26PM (#29998272) Homepage
    But the truth is the Linux movement needs every warm body it can to fight Microsoft.

    THAT is the problem. Stop trying to FIGHT Microsoft. Start making better software. Innovate: make something so tremendously better that they start copying YOU.

    See AMD versus Intel on x86_64 and the VT extensions.

    Except software has zero marginal cost, so once you take the lead, it'll take a serious fuckup, and not just money, to lose it.
  • by nxtw ( 866177 ) on Thursday November 05, 2009 @03:26PM (#29998286)

    There's much more to the question of whether or not something will run on an arbitrary copy of Linux than the CPU arch.

    This issue would limit the usefulness of a fat ELF feature, but it seems this is a problem that should be solved regardless of the existence of fat ELF support.

  • by kill-1 ( 36256 ) on Thursday November 05, 2009 @03:29PM (#29998338)

    But the lack of universal binaries is not the reason why it's hard to release closed source software on Linux.

  • by nxtw ( 866177 ) on Thursday November 05, 2009 @03:30PM (#29998348)

    The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).

    I disagree. Solaris and Mac OS X are the only operating systems I would say handle it well.

    OS X 10.6 includes i386 and x86_64 versions of almost everything. By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64. It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.

    One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.

    Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries. It includes 32- and 64-bit kernels as well, all in the same installation media.

  • by 99BottlesOfBeerInMyF ( 813746 ) on Thursday November 05, 2009 @03:40PM (#29998486)

    Can you remind me again of the advantages of such fat binaries over a tar/deb/rpm file with multiple binaries? Thank you.

    One really nice thing is that you can install a single fat binary on a shared network drive and clients with different architectures can all run it, without having to know what architecture they are on and without installing a client-side script that tries to identify the client's architecture. This is really useful in places where you want to offer software with limited licenses to users on site, when you don't know what they will be using.

    With multiple binaries in a tar/deb/rpm you end up with multiple binaries and end users randomly trying them in the hopes that one will be the right one for their computer. A lot of users don't know their chip architecture or if it is 32 or 64 bit.

    Another advantage comes from applications being run from flash drives, which has similar benefits. Being able to perform automated hardware upgrades is a nice advantage as well. For software in OSS repositories users can just grab them from the repositories when updating. For closed source software, however, being able to pull the applications directly from your old hardware to your new hardware (regardless of architecture) and have it work is really nice. Otherwise you have to find each and every commercial software package, re-download them, and then dig up all your serial numbers and re-register them. It's a huge pain, alleviated only by the lack of commercial software available on Linux these days. Ideally much of this could be mitigated by better package management that caters to commercial developers, but it certainly isn't there today and still does not handle software installed from optical disks.

  • by Anonymous Coward on Thursday November 05, 2009 @03:54PM (#29998674)

    While I won't disagree that commercial games are a potentially valid argument, let's take a step back for a minute. Most commercial games that would benefit from this (i.e., the ones that are flashy and have big advertising budgets) are likely to require certain hardware capabilities (processing speed, graphics capability, high-end sound) to operate well. You might be able to force a lower-end system to run it, but NO game developer spends time optimizing the experience on crappy hardware; it's just not cost-effective for them. So they have hardware requirements, and chances are that is going to mean running on an Intel CPU, with an ATI or Nvidia GPU and some sort of decent sound processor. Don't try to tell me you seriously think that the latest FPS or MMO should run on your ARM netbook - that's not what they're designed for and we all know it.

    So that leaves the question of x86 vs. x64 binaries, and I just don't buy any argument that it's too much work for companies to build both (or to just stick with 32-bit and be done with it). NO modern game comes without a comprehensive installer, and most have internal updaters as well, so having them pick 32- or 64-bit as appropriate is a perfectly reasonable thing to do. Even with high-speed Internet, do you really want to download files that are twice the size you need them to be just so they can run on hardware you don't have?

    And if you're trying to make the case for small, indie game developers being able to supply a 'universal binary' to make their games available on more platforms, I ask you this: if they can't afford to have all of the different architectures available for development, testing and troubleshooting, what makes you think they're going to want to provide any sort of binary to run on those architectures? I don't even want to think about the headaches (and karma losses) that devs would go through trying to support platforms that they can compile for but not actually run on...

    Universal Fat Binaries have a single problem they are meant to solve: how to provide a single version of a closed-source program for a proprietary OS that is currently shifting from one hardware platform to another in a manner that is supposed to be transparent to the user. MacOS has done it twice - M68K->PPC, and PPC->x86 (I don't consider x86->x64 in the same way, because a proper x64 OS will run x86 32-bit binaries seamlessly anyway). I was off of Macs by the time they went Intel, but was right in the middle of the M68K->PPC shift and think it was handled pretty well. But that's because a) the hardware options for the platform were highly fixed to begin with, b) the development environments for the platform were fairly limited as well, and c) EVERYONE knew that while Fat Binaries were intended to make the transition easier, they were not a permanent thing and eventually M68K support would be dropped and everything would be PPC only.

    In case you missed that, I'll repeat it on its own: Fat Binaries are designed to TEMPORARILY support a TRANSITION from one architecture to another, and after a time they STOP SUPPORTING THE OLD ONE and go back to being thin.

    Linux already supports something like this, in that multiple major versions of shared libraries are supported on the system for when there's an ABI change. But there's no reason to support Fat Binaries for different hardware architectures because a) Since the OS is open one is not locked into vendor-specific hardware, and thus vendor specified architectures that can be changed AND enforced, and b) there is no major architectural hardware transition in process that is likely to affect the consumers who would be the likely targets of this change. People buy hardware these days because the software they want to use runs on it. Even if a company were to offer Fat Binaries for multiple architectures, people would buy the one that will run the most software they want to use, and unless EVERYONE produced Fat Binaries for Linux people would STILL buy Intel systems because that's what most of the so

  • by Khyber ( 864651 ) <techkitsune@gmail.com> on Thursday November 05, 2009 @04:17PM (#29998988) Homepage Journal

    Yep, pretty easy for SOMEONE WHO WORKS ON COMPUTERS.

    Now let's see you tell that to the average joe, who has no clue about architectures, distros, or even the desktop management system.

    Nimrod.

  • by Lord Bitman ( 95493 ) on Thursday November 05, 2009 @04:25PM (#29999108)

    The state of package management is atrocious, and so should not be looked to for solutions? I'd call that a pretty big one.

    MOST packages need only the functionality of a dependency manager, everything else being a nice-to-have-when-you-need-it feature. This is why dependency management can be considered the central feature of a package manager: if you don't have dependency management, you'd be hard-pressed to find anyone who claims you have a working package manager.

    And what do most package managers do? Utterly lazy dependency management. "Well, you need this package... so you should have the latest version of it. If you want another version, you should rename the package and depend on something else instead." (A sketch of a constraint-aware alternative follows this comment.)

    And that would be almost excusable, except for the brain-dead "open source is king" approach to updates: "The whole thing's free anyway, why not just re-send the whole thing?" Binary patches are pretty much unheard of. Of course, sending the whole thing is really just a workaround, because...

    Package managers generally do NOT bother to detect when they are about to clobber or alter "the wrong file". When they do, they don't bother to keep a record of what they /would/ consider to be "the right file", making "merging" impossible and difference examination a guessing game. That doesn't even matter, because the first step in an "Upgrade" is usually to just completely remove the existing package, which means...

    Multiple versions of a single package co-existing on the same base install is generally impossible. Which really makes you wonder what the hell a package manager /does/ manage.

    It's not third-party software, that's for sure. You want the bleeding-edge version of something? You just want to patch a broken package? That means you're not using the package manager, and that means you're on your own for everything. Either you build a /package/ for what you're doing on the side, or you don't get access to any of the supposed features. And anything that depends on what you're doing, you may as well just compile and track yourself, 'cause that's what you like doing, right?

    The short of it is: Package managers seem so fundamentally broken that giving them another task seems like a waste of time. They'll just be replaced by a better system eventually anyway, right? And then you'll need to do it all again.

    The closest to "right" I've seen is GoboLinux.

  • by IICV ( 652597 ) on Thursday November 05, 2009 @04:25PM (#29999110)

    You missed half the argument. It's hard, and it's pointless.

    The grandma who has a netbook running ARM and a desktop running x86 will install software by going into Add/Remove Programs and picking "Fun Times Photo Album for Grandmas" out of a list. The package manager will figure out what needs to be installed for her, on both her ARM and her x86 computers.

    She's not going to go to some random website and download a random installer file and use it on both her computers - her kids have told her over and over again that that's not safe, and she may lose her knitting patterns if she does it.

    Seriously, the people who advocate this junk seem to be entirely unaware of the joys of package management. All FatELF does is re-solve a problem that package management has had licked for a couple of years now, and it solves the problem in a less efficient way.

    It's hard, yes - but it's not worth doing just because it's hard.

  • by Sparks23 ( 412116 ) on Thursday November 05, 2009 @04:26PM (#29999124)

    Usability.

    Your average desktop user does not want to go, 'Oh, well, I'm running on Processor X, with distribution Y, patch Z. I guess that means I need /this/ tarball (or this subdirectory of the big tarball).' Fat binaries solve this problem.

    If I am a Mac OS X developer, fat binaries mean I don't have to make a separate Intel download, or separate PowerPC download. No worries about Joe User downloading the PowerPC version, then complaining about performance (not realizing they're running a PowerPC binary in Rosetta on an Intel machine), or so on. I can just have one download on my website, and the loader handles finding the correct binary.

    Similarly, I can bundle 32-bit and 64-bit binaries for a given architecture into the same binary, rather than having separate 32 and 64-bit downloads (as is common on Windows). Tech-literate users may well know whether their system is 32-bit or 64-bit, but if I sat my father down in front of a brand-new Windows 7 machine from Best Buy, I doubt he would know whether to pick a '32-bit' or '64-bit' download for an antivirus program on a given website. He would, instead, call me.

    Now, some software solves this problem by having a tiny installer you download, which then goes out and pulls down the correct packages from the Internet after examining your machine. This is one solution, though not entirely ideal (it means in order to do any install, you need to have internet access). Some installers include the entire set of binaries, and just install the correct one; this is fine, as long as you have an installer, but can break down if you try to transplant the hard drive into a new machine. For instance, Joe User picks up that nice Windows 7 Home Premium machine he saw at Best Buy, and plugs his Windows Vista drive in to copy over applications, unaware his old computer was running Vista x64, while his new Windows 7 machine is 32-bit. Joe has some Problems now, when he tries to run some of his old installed software that was 64-bit only.

    At any rate, there are plenty of solutions to this problem; fat binaries are just one. None are perfect and all have their tradeoffs; in the case of fat binaries, the main problem is disk space. Package management tools have their own problems. (RPM dependency hell any time you want to go outside of your distribution's available packages, for instance, and the 'screw this, I'm installing PHP from source' result some sysadmins turn to.)

    From a server standpoint, fat binaries aren't necessarily the most useful solution (unless you're dealing with clustered machines with variant processors or configurations, but a shared filesystem between them), but from a *desktop user standpoint*, fat binaries may be friendlier than other options.

    At any rate, my *personal* opinion is that from a general desktop end-user standpoint (as opposed to a sysadmin/techy standpoint), disk space is cheap but usability is priceless. And my experience is that fat binaries require less work on the part of the end user (though, admittedly, more work on the part of the developer; building Universal Mac OS X binaries of software outside of Xcode can be a hair-pulling experience at times and inspire fond thoughts of Windows installers that just pick the right binary based on a system check).

    So whether you feel Linux benefits from fat binaries may well boil down to whether you feel Linux needs to target general, non-techy desktop users more or not. Your own opinions may well differ from mine; not everyone's criteria and priorities are identical, which is probably a good thing. Otherwise we'd have a pretty homogenous software community out there!

  • Re:a better idea.. (Score:4, Insightful)

    by Eravnrekaree ( 467752 ) on Thursday November 05, 2009 @04:30PM (#29999180)

    Only the last phase of compilation, code generation, would occur on the user's computer. One of the problems with source code is that it can take hours to compile, and getting it to compile right is never easy enough for granny. The purpose of a universal executable is that it should be easy enough for granny to use, which means download, double-click, and it runs. None of this fiddling with a million dependencies and so on. Granted, the problem is partly due to the fact that each Linux distribution does something differently and puts things in a different place.
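
    One existing way to do "only the last phase of compilation on the user's computer" is to ship a portable intermediate representation and generate native code at install time. The comment doesn't name a toolchain; LLVM bitcode is one plausible vehicle, so the sketch below assumes clang and llc are installed and that a vendor shipped hello.bc (both names are hypothetical):

```python
import platform
import subprocess

# Map the local machine to an llc target name (assumed mapping, not exhaustive).
LLC_ARCH = {"x86_64": "x86-64", "i686": "x86", "armv7l": "arm", "aarch64": "aarch64"}

def install_from_bitcode(bitcode="hello.bc", out="hello"):
    arch = LLC_ARCH[platform.machine()]
    # llc: portable bitcode -> native assembly for this machine only.
    subprocess.check_call(["llc", "-march=" + arch, bitcode, "-o", out + ".s"])
    # clang assembles and links against the libraries already on this system.
    subprocess.check_call(["clang", out + ".s", "-o", out])
    return out

install_from_bitcode()
```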

  • by dgatwood ( 11270 ) on Thursday November 05, 2009 @04:35PM (#29999260) Homepage Journal

    Sometimes I think that user forums and developer forums should be one and the same, and that developers should be forced to see the user flame wars, so that by being exposed to this without the option to get rid of it, they will be inspired to actually write software that is easier to configure and use. Maybe it's just me.

  • by azmodean+1 ( 1328653 ) on Thursday November 05, 2009 @05:20PM (#29999892)
    Ah yes, the old "the free IRC-based tech support I got from random volunteers wasn't up to my high standards" problem. This really has no bearing on the issue with FatELF, but I see it over and over again: people demanding prompt, polite tech support from a roomful of random lurkers and project volunteers. These people are spending their free time performing one of the most annoying, boring, thankless jobs in IT, and they get abuse because they don't fix your problem fast enough. There are a few things you might want to consider before "punishing" a project by abandoning it based on experiences in an IRC channel:

    1. Ability: There is no guarantee that the people that kept giving you the same suggestions over and over know enough about the project to look into it more deeply, but you assume that they just weren't interested in helping. It's more likely that they know little more than you do about the project, but have a short list of the most commonly encountered problems and likely solutions. (kind of like tier 1 tech support, but free)

    2. Affiliation: There is no guarantee that any of the people you talked to even have anything to do with the project other than lurking in their IRC channel. In my experience quite a few users lurk in channels of software they like, regardless of how capable they are of helping other people.

    3. Incentive: I'm sure your problem was YOUR top priority at the time, but quite a few people on IRC lurk most of the time while they are doing other things, some of which are more important to them than trying to fix your problem. Also they have almost zero direct incentive to try to be nice to you.

    4. Price: You mention this only to dismiss it, but seriously, this is a very valuable service that you are receiving for free, and you even had your quite obscure sounding problem diagnosed.

    For-pay tech support either eliminates or hides these problems from the end user; volunteer tech support doesn't have the resources to do this.

    1. Ability is handled by tiering. If this were commercial software, you would have had to wait days to weeks to reach a level of tech support able to diagnose a bug in a sub-library not maintained by the business in question, and that's assuming you had paid enough for support to go that far for you (a hint: just buying a device will NOT get you this level of support). Instead you had an answer in under a day, and even a chance that the bug will get fixed based on your input.

    2. Affiliation: This is the easy one. Even if you do get support from someone outside a company you've purchased something from, you aren't going to blame the experience on the company, but rather on the individual. With open source, however, if you find some random jerk who claims to be part of the project and proceeds to piss you off, you blame the project, not the individual. And regardless, unless you have a support contract with someone, it's just one person helping another.

    3. Incentive: Paid services have a lockdown on this one too; tech support that doesn't maintain at least the barest facade of civility won't be working in tech support for much longer. (There are exceptions, but in general they will be more highly incentivized to pretend to like you. However, as someone who worked in tech support for a while, I can guarantee there is approximately zero chance that they will actually like you or care about your problem, which you have a pretty decent chance of with open source volunteers.)

    At the end of the day, your problem was solved at no cost to yourself. Additionally, I don't see any mention of your helpers even being rude. Is this just an omission, or did you really just go into a roomful of random people and end up screaming at them (figuratively, of course) because they couldn't help you, with no direct provocation? If so, holy crap, you're a jerk.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday November 05, 2009 @05:26PM (#29999998) Journal

    I've really appreciated just being able to pull down a single executable from a site and have it "just work".

    Have you ever done that? Even once?

    I'm willing to bet you haven't -- that you've instead downloaded zips, dmgs, or mpkgs, none of which are executables, and all of which would be perfectly capable of including multiple binaries and a script to select the correct one.

    my tolerance for pointless frustration decreases steadily with age.

    That's why I use a package manager, which eliminates the whole issue -- I just need the name of a package, and it Just Works.

  • by Anonymous Coward on Thursday November 05, 2009 @05:54PM (#30000374)
    Agreed. There is also the question of why the authors of a GNU licensed program are expected to show a passionate interest in a project whose central focus is the distribution of binaries. FatELF makes sense when the source is not available. When the source is available, package management, with a per architecture build, makes more sense.
  • by moonbender ( 547943 ) <moonbenderNO@SPAMgmail.com> on Thursday November 05, 2009 @06:04PM (#30000504)

    Unless it's not available in the repos in which case it Just Won't Work. I'm in this situation all the time, both with software which simply isn't in the repository, and with software that's available but outdated.

    1) If someone has set up a repository for the software, that's great, and it happens more and more often since it's fairly painless to set up a PPA on Launchpad; it's still not a one-step solution, though.

    2) Or you download a deb, which usually (by design?) is for a single architecture, and run it through GDebi. Which is pretty painless, too, if it works.

    3) Or it's one of the few precompiled blobs that you set +x on and that just seem to work as if by magic, but I bet that was a pain to create and is even more of a pain to keep updated or maybe uninstalled.

    4) Source distribution. You'd better hope you've got all those dependency -dev packages installed. Could you hook apt into the build process and auto-install dependency source packages when they're needed?

  • stupid idea (Score:4, Insightful)

    by jipn4 ( 1367823 ) on Thursday November 05, 2009 @09:00PM (#30002076)

    FatELF is a stupid implementation of a stupid idea. I.e., even if you want fat binaries, modifying the ELF format is the wrong way of doing it.

    Yay for the Linux kernel developers for keeping this kind of crap out of the kernel.

  • by localman ( 111171 ) on Friday November 06, 2009 @07:26AM (#30004370) Homepage

    I've really appreciated just being able to pull down a single executable from a site and have it "just work".

    Have you ever done that? Even once?

    Absolutely yes. And if you're willing to forgive the word "executable" and allow a dmg with a single app in it that I can just drag and drop without picking which one to use or running any scripts, then I've done it quite often.

    The universal binary system on OS X was pretty sweet during the transition. I went from PowerPC to Intel and very rarely had to think about it at all. If you think that's nothing special, fine, but to a lot of users it's a very nice feature.

    Cheers.
