Ryan Gordon Ends FatELF Universal Binary Effort 549
recoiledsnake writes "A few years after the Con Kolivas fiasco, the FatELF project to implement the 'universal binaries' feature for Linux that allows a single binary file to run on multiple hardware platforms has been grounded. Ryan C. Gordon, who has ported a number of popular games and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches. Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.' The launch of the project was recently discussed here. The FatELF project page and FAQ are still up."
He needs thicker skin (Score:5, Insightful)
He needs thicker skin if he's going to deal with the LKML crowd. I wouldn't give up just because it's not merged into the official tree.
Re:He needs thicker skin (Score:5, Funny)
Re:He needs thicker skin (Score:5, Funny)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
The problem with fixating on something like ELF for fatness is that it's just the tip of the iceberg.
There's much more to the question of whether or not something will run on an arbitrary copy of Linux than the CPU arch.
Same goes for an Apple fat binary really...
BS: "tip of the iceberg" (Score:5, Informative)
The problem with fixating on something like ELF for fatness is that it's just the tip of the iceberg.
There's much more to the question of whether or not something will run on an arbitrary copy of Linux than the CPU arch.
Why do you think one of the discriminators in a fat binary can't be a distribution identifier, such that there are fat slices for supporting Debian, RedHat, Ubuntu, etc., all from the dame binary file?
Or that they can't have different slices in the fat binary for Gnome vs. KDE, or desktop vs. Android, and so on?
Also, the arguments about disk space are specious; at least in the Mac OS X world, there is a utility called "lipo" which will pull apart a fat binary into only the pieces you need/want to install. Typically you only install the actual fat binary on server systems, where the software has to be able to run on multiple client machines, and otherwise you run scripts to reclaim disk space (or in the case of an embedded device, you run them over the install image before installing it).
Same goes for an Apple fat binary really...
Obligatory disclaimer: I am the person who maintains the fat binary loader in the Mac OS X kernel.
-- Terry
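For the curious, the thinning step with lipo looks roughly like this (a minimal Python sketch, purely illustrative; it assumes the standard lipo tool is on the PATH, and the paths and slice name are placeholders):

# Rough illustration of "thinning" a Mac universal (fat) binary with lipo.
import subprocess

def thin_binary(fat_path, out_path, arch="x86_64"):
    # Show which slices the fat binary contains.
    info = subprocess.run(["lipo", "-info", fat_path],
                          capture_output=True, text=True, check=True)
    print(info.stdout.strip())
    # Keep only the slice we actually want on this machine.
    subprocess.run(["lipo", fat_path, "-thin", arch, "-output", out_path],
                   check=True)

thin_binary("/Applications/SomeApp.app/Contents/MacOS/SomeApp",
            "/tmp/SomeApp-x86_64-only")

Running the equivalent over an install image is what reclaims the disk space mentioned above.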
Re: (Score:2)
Can you remind me again of the advantages of such fat binaries over a tar/deb/rpm file with multiple binaries? Thank you.
Re: (Score:2)
Re:BS: "tip of the iceberg" (Score:5, Insightful)
Can you remind me again of the advantages of such fat binaries over a tar/deb/rpm file with multiple binaries? Thank you.
One really nice thing is you can install a single fat binary on a shared network drive and clients with different architectures can all run it without having to know what architecture they are on or without a client side script that needs to be installed, or a script that tries to identify the client's architecture. This is really useful in places where you want to offer software with limited licenses to users on site, when you don't know what they will be using.
With multiple binaries in a tar/deb/rpm you end up with multiple binaries and end users randomly trying them in the hopes that one will be the right one for their computer. A lot of users don't know their chip architecture or if it is 32 or 64 bit.
Another advantage comes from applications being run from flash drives, which has similar benefits. Being able to perform automated hardware upgrades is a nice advantage as well. For software in OSS repositories users can just grab them from the repositories when updating. For closed source software, however, being able to pull the applications directly from your old hardware to your new hardware (regardless of architecture) and have it work is really nice. Otherwise you have to find each and every commercial software package, re-download them, and then dig up all your serial numbers and re-register them. It's a huge pain, alleviated only by the lack of commercial software available on Linux these days. Ideally much of this could be mitigated by better package management that caters to commercial developers, but it certainly isn't there today and still does not handle software installed from optical disks.
Re:BS: "tip of the iceberg" (Score:5, Interesting)
Another really big advantage is easier developer workflow. With multi-architecture binaries and libraries, you can test and debug the 32-bit and 64-bit versions of an application without rebooting into a separate OS, without building some weird chroot environment, without using a special linker that changes the paths of the libraries if it detects a 32-bit binary, etc. This means that your development system is essentially identical to the user systems (except for the kernel), and thus the likelihood of bizarre "Unable to reproduce" bugs goes way down.
Another big advantage is that if you build a universal install DVD, you have half as many binary packages. That means a less complex installer and thus reduced potential for bugs, reduced install testing overhead, etc.
Another big advantage is that when a user finds a 64-bit-specific bug in a tool, the user can do an "arch i386 grep 'blahblahblah' *" instead and work around it until somebody comes up with a fix. Without fat binaries, that's well beyond the ability of most typical computer users, including many IT people. You might as well tell them to fix the bug themselves. That doesn't do anybody any good....
But probably the most important reason for it is that Linux is late to the party. Many other operating systems support fat binaries---Mac OS X, Windows, NetBSD, etc. It's not like this is a new idea, nor is there a lack of a clear benefit. Obviously if there weren't clear benefits, there wouldn't be such broad support for this concept. And that's not just a bandwagon appeal; people judge operating systems by what they can do, and if there's an important, user-visible feature that one OS is missing, that's a win for operating systems that do have the feature....
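To make the arch workaround mentioned above concrete, here is a tiny illustrative Python wrapper (it assumes the Mac OS X arch tool and the -i386 flag it had at the time; the grep invocation is just an example):

# Illustrative only: force a tool to run as its 32-bit (i386) slice,
# the way the "arch i386 grep ..." workaround above does.
import subprocess, sys

def run_as_i386(cmd):
    # arch(1) tells the loader which slice of a fat binary to use.
    return subprocess.call(["arch", "-i386"] + cmd)

if __name__ == "__main__":
    sys.exit(run_as_i386(sys.argv[1:] or ["grep", "blahblahblah", "somefile"]))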
Re:BS: "tip of the iceberg" (Score:4, Informative)
The advantage is on the typical user side. As a Mac user over two architecture transitions, I've really appreciated just being able to pull down a single executable from a site and have it "just work". Disk space is cheap and my tolerance for pointless frustration decreases steadily with age. A distribution mechanism like fat binaries makes it so the user really has to go out of their way to get it wrong.
Re: (Score:3, Informative)
Also useful in copying applications between Application folders on Macs with different platforms.
Re: (Score:3, Insightful)
Unless it's not available in the repos in which case it Just Won't Work. I'm in this situation all the time, both with software which simply isn't in the repository, and with software that's available but outdated.
1) If someone has set up a repository for this software, that's great, and it happens more and more often since it's fairly painless to set up a PPA on Launchpad; it's still not a one step solution anymore, though.
2) Or you download a deb, which usually (by design?) is for a single architecture an
Re:BS: "tip of the iceberg" (Score:4, Insightful)
I've really appreciated just being able to pull down a single executable from a site and have it "just work".
Have you ever done that? Even once?
Absolutely yes. And if you're willing to forgive the word "executable" and allow a dmg with a single app in it that I can just drag and drop without picking which one to use or running any scripts, then I've done it quite often.
The universal binary system on osx was pretty sweet during transition. I went from PowerPC to Intel and very rarely had to think about it at all. If you think that's nothing special, fine, but to a lot of users that's a very nice feature.
Cheers.
Re:BS: "tip of the iceberg" (Score:5, Insightful)
The state of package management is atrocious, and so should not be looked to for solutions? I'd call that a pretty big one.
MOST packages need only the functionality of a dependency manager, everything else being a nice-to-have-when-you-need-it feature. This is why dependency management can be considered to be the central feature of a package manager- if you don't have dependency management, you'd be hard-pressed to find anyone who claims you have a working package manager.
And what do most package managers do? Utterly lazy dependency management. "Well, you need this package... so you should have the latest version of it. If you want another version, you should rename the package and depend on something else instead."
And that would be almost-excusable, except for the brain-dead "open source is king" approach to updates: "The whole thing's free anyway, why not just re-send the whole thing?" Binary patches are pretty much unheard of. Of course, sending the whole thing is really just a work-around because-
Package managers generally do NOT bother to detect when they are about to clobber or alter "the wrong file". When they do, they don't bother to keep a record of what they /would/ consider to be "the right file", making "merging" impossible and difference examination a guessing game. That doesn't even matter, because the first step in an "Upgrade" is usually to just completely remove the existing package, which means...
Multiple versions of a single package co-existing on the same base install is generally impossible. Which really makes you wonder what the hell a package manager /does/ manage.
It's not third-party software, that's for sure. You want the bleeding-edge version of something? You just want to patch a broken package? That means you're not using the package manager, and that means you're on your own for everything. Either you build a /package/ for what you're doing on the side, or you don't get access to any of the supposed features. And anything that depends on what you're doing, you may as well just compile and track yourself- 'cause that's what you like doing, right?
The short of it is: Package managers seem so fundamentally broken that giving them another task seems like a waste of time. They'll just be replaced by a better system eventually anyway, right? And then you'll need to do it all again.
The closest to "right" I've seen is GoboLinux.
"Insightful"? (Score:4, Informative)
I've never used a package manager that forced you to upgrade all dependencies to the latest version to install a package. All of them allow not just required packages but required versions of packages, and only force upgrades of dependencies when you don't have a sufficiently recent version.
Some can do patches. I think RPM can. But unless you're using dialup, they're not really that much of an advantage. And you also have the problem of having to provide patches from lots of versions to lots of versions. Or you can provide only patches from the last version to the current one, in which case they're useless for anyone who misses an upgrade.
I don't know any package manager that does this. For example, Pacman, the package manager of Arch (my current distro of choice), installs new versions of files with the suffix '.pacnew' if the old version was modified and doesn't clobber.
This is true on pretty much any OS. Multiple versions of the same package will install to the same paths, and your package manager would have to be pretty fucked up to do that. If you'd like to horribly violate widely adopted filesystem organization standards and patch your software a bunch to make it work properly with your new layout, you can do that, but there's no real gain.
They manage packages. Just because they don't implement every feature you'd like doesn't mean that's not true, though evidently they do implement many features you'd like, but you are too busy raging pointlessly to pay attention to the facts. In fact, the only feature they don't implement that you'd like is a major design decision that would require altering pretty much all software on the system for no benefit to the vast, vast majority of users.
Then you get it from elsewhere if the official repos don't provide it. You can even build your own package, something you certainly ought to be capable of if you're applying your own patches to software. You can even set up your own repos!
So basically what you're saying is, "you're not using the package manager except if you are". Gee, really?
Most of your post you've been toeing a fine line between being just wrong and bei
Re: (Score:3, Interesting)
Anyone with an ounce of sense and experience knows that if you have a package for the version of the software that you want, but it's only built for and available in a later version of your distribution, then installing it will result in a cascade that will as good as update your entire system. There wouldn't be dependencies otherwise. On a system where you can automatically recompile li
Re: (Score:3, Interesting)
I've never used a package manager that forced you to upgrade all dependencies to the latest version to install a package. All of them allow not just required packages but required versions of packages, and only force upgrades of dependencies when you don't have a sufficiently recent version.
And anyone who has ever wanted to upgrade just one package can tell you that this is clearly insufficient, because every package ever made lazily specifies all they know: "this package works with the version I have, therefore it requires the version I have." If this lazy way is the easiest method of specifying requirements, it is what will be used- you can tell, because it is what is used. If you are dealing with what is considered to be a non-trivial requirement by package managers, you will have run into
Re:BS: "tip of the iceberg" (Score:5, Insightful)
Usability.
Your average desktop user does not want to go, 'Oh, well, I'm running on Processor X, with distribution Y, patch Z. I guess that means I need /this/ tarball (or this subdirectory of the big tarball).' Fat binaries solve this problem.
If I am a Mac OS X developer, fat binaries mean I don't have to make a separate Intel download, or separate PowerPC download. No worries about Joe User downloading the PowerPC version, then complaining about performance (not realizing they're running a PowerPC binary in Rosetta on an Intel machine), or so on. I can just have one download on my website, and the loader handles finding the correct binary.
Similarly, I can bundle 32-bit and 64-bit binaries for a given architecture into the same binary, rather than having separate 32 and 64-bit downloads (as is common on Windows). Tech-literate users may well know whether their system is 32-bit or 64-bit, but if I sat my father down in front of a brand-new Windows 7 machine from Best Buy, I doubt he would know whether to pick a '32-bit' or '64-bit' download for an antivirus program on a given website. He would, instead, call me.
Now, some software solves this problem by having a tiny installer you download, which then goes out and pulls down the correct packages from the Internet after examining your machine. This is one solution, though not entirely ideal (it means in order to do any install, you need to have internet access). Some installers include the entire set of binaries, and just install the correct one; this is fine, as long as you have an installer, but can break down if you try to transplant the hard drive into a new machine. For instance, Joe User picks up that nice Windows 7 Home Premium machine he saw at Best Buy, and plugs his Windows Vista drive in to copy over applications, unaware his old computer was running Vista x64, while his new Windows 7 machine is 32-bit. Joe has some Problems now, when he tries to run some of his old installed software that was 64-bit only.
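The architecture check such a stub performs is roughly this simple (a minimal Python sketch; the package names and download URL are made up):

# Illustrative stub: pick the right package for the machine it runs on.
import platform

def pick_package():
    machine = platform.machine().lower()              # e.g. 'x86_64', 'i686', 'armv7l'
    is_64bit = platform.architecture()[0] == "64bit"
    if machine in ("x86_64", "amd64") and is_64bit:
        return "myapp-1.0-amd64.deb"
    if machine.startswith("arm"):
        return "myapp-1.0-armel.deb"
    return "myapp-1.0-i386.deb"

print("Would download: http://example.com/packages/" + pick_package())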
At any rate, there are plenty of solutions to this problem; fat binaries are just one. None are perfect and all have their tradeoffs; in the case of fat binaries, the main problem is disk space. Package management tools have their own problems. (RPM dependency hell any time you want to go outside of your distribution's available packages, for instance, and the 'screw this, I'm installing PHP from source' result some sysadmins turn to.)
From a server standpoint, fat binaries aren't necessarily the most useful solution (unless you're dealing with clustered machines with variant processors or configurations, but a shared filesystem between them), but from a *desktop user standpoint*, fat binaries may be friendlier than other options.
At any rate, my *personal* opinion is that from a general desktop end-user standpoint (as opposed to a sysadmin/techy standpoint), disk space is cheap but usability is priceless. And my experience is that fat binaries require less work on the part of the end user (though, admittedly, more work on the part of the developer; building Universal Mac OS X binaries of software outside of Xcode can be a hair-pulling experience at times and inspire fond thoughts of Windows installers that just pick the right binary based on a system check).
So whether you feel Linux benefits from fat binaries may well boil down to whether you feel Linux needs to target general, non-techy desktop users more or not. Your own opinions may well differ from mine; not everyone's criteria and priorities are identical, which is probably a good thing. Otherwise we'd have a pretty homogenous software community out there!
Re: (Score:3, Interesting)
It seems to me that this problem would be better solved by packaging tools, rather than messing with the kernel. After all, this doesn't seem like it would get around library dependency problems at all, unless you require everything to be statically linked.
Besides, packaging tools in Linux, while certainly better than other OSes, could still use a lot of work. For instance, it'd be nice if we could standardize on a single one, instead of deb, rpm, and tgz all being used. I can understand why people can't
Re: (Score:2)
all from the dame binary file
This is an abbreviation for "same damn binary file", isn't it?
Re:BS: "tip of the iceberg" (Score:4, Funny)
all from the dame binary file
This is an abbreviation for "same damn binary file", isn't it?
I was waiting for the gritty exposition to begin.
"not possible [...] in a package manager"? (Score:4, Interesting)
So, remind me again: why exactly is it not possible to implement all that in a package manager and we need to have a Really Fat ELF?
Because Linux distributions can not agree on a single GUI technology, let alone a package manager.
-- Terry
Re: (Score:3, Insightful)
This issue would limit the usefulness of a fat ELF feature, but it seems this is a problem that should be solved regardless of the existence of fat ELF support.
Re: (Score:2)
He should have sent Theo in ;^)
I sympathize with you. (Score:2)
That is a tragic experience. The thing is, MythTV is in shambles and I know it. It's not easy to configure or use. I still support MythTV. But there is room for improvement. There are other Linux competitors to Myth out there. Support them if need be. But the truth is the Linux movement needs every warm body it can get to fight Microsoft.
Re:I sympathize with you. (Score:5, Insightful)
But the truth is the Linux movement needs every warm body it can get to fight Microsoft.
THAT is the problem. Stop trying to FIGHT Microsoft. Start making better software. Innovation, something so tremendously better they start copying YOU.
Vis-a-vis AMD and Intel and x86_64 and VT extensions.
Except software has zero marginal cost, so once you take the lead, it'll take a serious fuckup, and not just money to lose it.
Re: (Score:2)
Same thing.
Re: (Score:2)
> they need an attitude adjustment
Yup. I told them to fix their attitudes too.
Some got the idea but declined.
Some didn't seem to hear what I was saying.
Some appeared just to be rude.
Re: (Score:2, Insightful)
I once found a bug in DOSBox which none of the developers cared about. I debugged and read the code myself, made a patch that "fixed" the bug (although my fix made bugs elsewhere), posted it and screenshots showing the game working when
Re: (Score:2)
If you're able to fix the bug, that's great. For most users, though, that isn't an option. The options are:
- Free software, free support from surly individuals who cast aspersions on your intelligence and competency, while declining to actually listen enough to help you until you find the rare gem that DOES help you. The existence of that one generous developer in the crowd of jerks is not enough to counteract the generally unhelpful community. (If the community wanted to have good support, they would h
Re: (Score:3, Insightful)
Sometimes I think that user forums and developer forums should be one and the same---that developers should be forced to see the user flame wars---that by exposing them to this without the option to get rid of it, they will be inspired to actually write software that is easier to configure and use. Maybe it's just me.
Re: (Score:2)
Re: (Score:2)
I'm not sure it's restricted to OSS development. The phenomenon seems to be well-established in mailing lists, web forums, and any number of Usenet newsgroups as well. You end up with a bunch of assholes who dominate the conversation and drive away everyone else. In the case of software developers, at least on the successful projects, they tend to be smart and highly skilled assholes, but that still doesn't make them any more pleasant to deal with. While part of it reflects the leadership (or lack of it) in
Re:He needs thicker skin (Score:5, Informative)
Not really, he can just go peddle his warez to someone who is more open to ideas.
Why should anyone subject themselves to dealing with a bunch of assholes to help them make their stuff better?
Reminds me of my recent MythTV experience ...
I joined the IRC channel for it, asked a question, laid out what was wrong, and was then told repeatedly that I had configured the server wrong and it wasn't accepting connections, even though I said repeatedly that I was able to connect to it from one client but not another, so it was unlikely to be a server problem.
After trying to explain that I had read the wiki, the mailing lists and done a fair amount of googling and already seen the 3 suggestions I kept getting over and over again, I got to the point where I told them to go fuck themselves basically. At which point one guy, who hadn't been there earlier, listened long enough to ask for the debug output.
Turns out, lo and behold, it was a combination of client configuration error and a bug in the mysql libs that caused it to hang and never report an error.
A day later, I'd dumped MythTV and gone back to WMC under Win7. I've lost a few features in the process, but it works on all my hardware and has yet to require me to deal with a bunch of jackasses who are too arrogant to be useful. (With WMC you deal with "too ignorant to be useful" instead.)
Does anyone care what I run? Of course not, but they've lost potential developer support. Instead of porting my custom extensions to WMC over to work in a MythTV setup and sharing them, I'll just continue to make them work in WMC. I filed the bug on the way out the door so someone else can fix it, but overall the total loss will be on the MythTV end.
You don't get help by being a jackass to people, regardless of how much better than them you think you are. You see a lot of this in OSS software (not just Linux, as anyone who has dealt with Theo knows). I partially understand, they aren't getting paid, they don't have any motivation to hide their true colors. Well, at least any instant motivation. Turning people away is never a good thing. I would have been happy to donate to the project instead of buying more XBox 360s to use as extenders. Now I'll just get a couple more rather than re-using my existing PCs and donating to the project.
He doesn't need thicker skin, they need an attitude adjustment. It's a safe bet that he doesn't really care that much. He's obviously not a cluebie, he has some knowledge, and now they won't benefit. The problem isn't his.
It's a shame you have to lie about this to make your "point". The person you claim wasn't there earlier actually was and had been asking for a debug dump the entire time. Bot logs are bad for liars.
Re:He needs thicker skin (Score:5, Informative)
> Reminds me of my recent MythTV experience ...
>
>I joined the IRC channel for it, asked a question, laid out what was wrong, and was then told repeatedly that I had configured the server wrong and it wasn't accepting connections, even though I said repeatedly that I was able to connect to it from one client but not another, so it was unlikely to be a server problem.
>
>After trying to explain that I had read the wiki, the mailing lists and done a fair amount of googling and already seen the 3 suggestions I kept getting over and over again, I got to the point where I told them to go fuck themselves basically. At which point one guy, who hadn't been there earlier, listened long enough to ask for the debug output.
>
>Turns out, lo and behold, it was a combination of client configuration error and a bug in the mysql libs that caused it to hang and never report an error.
Woah woah woah. Back up here. I was there for those 2 IRC conversations and you kept complaining when you were asked for details or to check your settings and the wiki. You repeatedly got upset that someone was "assuming" you were a newbie, and refused to even consider that the problem was with your configuration. You were hardly an innocent angel who was being picked on by some bullies.
And later you owned up that it was YOUR configuration issue and apologized and thanked the people in the channel.... and now, days later, you are using that to smear them and MythTV and OSS projects in general?
Really classy.
FYI for those reading this:
At the top of the following log is him admitting it was his mistake and saying thank you for the help.
http://mythtv.beirdo.ca/ircLog/channel/1/2009-10-28
Unfortunately I don't have the rest logged and the log bot was down for part of that day.
Re:mod up, please (Score:5, Interesting)
Here's the log of the full conversation. He was hardly abused, and got persistent help over several hours which was patient and helpful, ultimately culminating in him realizing what he had done wrong and admitting to it.
Clearly, the myth IRC folk are at fault.
http://pastebin.com/m2cfd19dd
Re:He needs thicker skin (Score:4, Interesting)
I didn't smear MythTV, I pointed out how arrogant assholes can ruin someone's experience and cause them to leave.
In the pastebin link you cited as being "less one-sided", you are barking orders to people and citing your credentials. What you say about arrogance turning potentially contributing members of a community away is sometimes true, commonly enough as to have become cliche, and quite unfortunate. However, you appear to have had a deep expectation that this is how you would be met. When you're convinced that you have good reason to "hate linux people", as you put it, you will tend to see what you believe. Especially after you have (admirably!) spent two frustrating days trying to find a solution.
It puts my teeth on edge to read the tone of your post here, and also of your linked IRC log. It came off -- and I say this not as an insult but as a barometer -- in a similar way to this guy [bash.org]. I'm not saying you're like that guy; but the heaviness with which you tried to control the conversation could have been perceived as a sense of entitlement. I certainly would have perceived it that way had I been present, and I likely would have reacted in a way that reinforced your dislike of the denizens of help channels.
It's only on multiple readings that I can see that you didn't actually have a chip on your shoulder, and did not actually possess the sense of entitlement that I attributed to you. Rather, you were venting frustration. Maybe you dreaded the trip you would have to make to #mythtv-users because you expected that you'd missed something obvious and would feel stupid when it was pointed out. If you are anything like me, this expectation will always render you very sensitive to being rubbed the wrong way by a rough sense of humor or an assumption that you are a noob (which, a priori, is the most likely hypothesis). If you're sensitive to it, then it doesn't matter how gently or politely they express this assumption; it will get your hackles up. And if the noob assumption is expressed less-than-gently because you opened with a statement that you intended to be humbly self-deprecating but which contained no mythtv-related query, you are likely to perceive it as a full-blown assault on your legitimacy.
If there is any personal advice I can offer, it is to maintain a sense of humor when entering any situation like this. You'll be encountering a lot of strong personalities. Maybe you expect them to respect your frustration, your intelligence, and the time you put into a solution so far (they may assume you're a noob). Maybe they expect you to ask your question first thing, all on one line (you didn't). Your sense of humor is a sort of shock-absorber to ride out the first few missed expectations and maintain your cool. It smooths over the beginning of the conversation. Small matters of etiquette can be allowed to slide on both sides. Thereafter, if someone's legitimately an asshole, they're easy to spot and ignore.
As far as the nature of the community... hell, even if IRC were the most wretched hive of scum and villainy I'd still dread that support experience less than, say, Dell's.
Re:He needs thicker skin (Score:5, Insightful)
1. Ability: There is no guarantee that the people that kept giving you the same suggestions over and over know enough about the project to look into it more deeply, but you assume that they just weren't interested in helping. It's more likely that they know little more than you do about the project, but have a short list of the most commonly encountered problems and likely solutions. (kind of like tier 1 tech support, but free)
2. Affiliation: There is no guarantee that any of the people you talked to even have anything to do with the project other than lurking in their IRC channel. In my experience quite a few users lurk in channels of software they like, regardless of how capable they are of helping other people.
3. Incentive: I'm sure your problem was YOUR top priority at the time, but quite a few people on IRC lurk most of the time while they are doing other things, some of which are more important to them than trying to fix your problem. Also they have almost zero direct incentive to try to be nice to you.
4. Price: You mention this only to dismiss it, but seriously, this is a very valuable service that you are receiving for free, and you even had your quite obscure sounding problem diagnosed.
For-pay tech support either eliminates or hides these problems from the end-user, volunteer tech support doesn't have the resources to do this.
1. Ability is handled by tiering, if this were commercial software, you would have had to wait days to weeks in order to reach a level of tech support that would have been able to diagnose a bug in a sub-library not maintained by the business in question, and that's assuming you had paid enough for support to go that far for you (a hint, just buying a device will NOT get you this level of support). Instead you had an answer in under a day, and even a chance that the bug will get fixed based on your input.
2. Affiliation: This is the easy one, even if you do get support from someone outside a company you've purchased something from, you aren't going to blame the experience on the company, but rather on the individual. With open source however, if you find some random jerk that claims to be part of the project that proceeds to piss you off, you blame the project, not the individual. And regardless, unless you have a support contract with someone, it's just one person helping another.
3. Incentive: Paid services have a lockdown on this one too, tech support that doesn't maintain at least the barest facade of civility won't be working in tech support for much longer. (there are exceptions, but in general they will be more highly incentivized to pretend to like you, however as someone who worked in tech support for a while, I can guarantee you there is approximately zero chance that they will actually like you or care about your problem, which you have a pretty decent chance of with open source volunteers.)
At the end of the day, your problem was solved at no cost to yourself. Additionally I don't see any mention of your helpers even being rude, is this just an omission, or did you really just go into a roomful of random people and end up screaming at them (figuratively of course) because they couldn't help you with no direct provocation? If so, holy crap, you're a jerk.
Re: (Score:2, Interesting)
It sounds like he was just trying to make it work. If it takes digging to that level just to get it running then there is a problem. I thought Myth was supposed to be in a working state.
Re: (Score:3, Informative)
Yes, Mythbuntu.
The problem was not a security issue.
The problem was using the wrong port for the database server in the client configuration, a port that was open and listening, but not MySQL.
The MySQL client connects, MythTV says 'Okay, got the database!' and goes on to do other things as if everything is working properly. Then when it gets around to actually using the database it just locks up waiting for the DB to respond, but since it's not actually talking to a database it doesn't respond, and the REAL
Good riddance (Score:5, Funny)
I like my elves the way I like my tea: thin and exotic.. served while still hot.
Re: (Score:2, Insightful)
That's terrible! They will quickly become dehydrated and lose flavor.
Re: (Score:2)
I like my elves like I like everything else, finely minced, baked with mushrooms and served in a legendary obsidian dining hall.
*elf meat biscuit*
Losing is fun. Dwarf Fortress.
I don't believe it! (Score:5, Funny)
I would never have believed that people in the Linux community would show up at an event just to be rude. I've always heard such glowing praise about the Linux community. They're always there to help the new guy, willing to mentor those learning the "So simple a caveman can do it" operating system and break the monopoly of Microsoft once and for all.
His comments can't be correct. Everyone knows what fine, upstanding individuals the Linux community is.~
Re: (Score:2, Funny)
There you go, someone listened and gave you that troll point that you asked for!
Here are a couple of fatelves (Score:2, Funny)
Solution in search of a problem (Score:5, Insightful)
The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).
Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming. To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC. Plus possibly MIPS, SPARC, and Itanium. Most of those in 32-bit and 64-bit flavours. Those elves are going to be very fat indeed.
Fat Elves Sue Keebler (Score:3, Funny)
It has been reported that a lawsuit was filed against the Keebler corporation by a group of fat elves.
Their spokesperson was quoted as saying, "It's *mmmm* their fault, these cookies *numnumnum* are just too good!"
Re:Solution in search of a problem (Score:5, Insightful)
I disagree. Solaris and Mac OS X are the only operating systems I would say handle it well.
OS X 10.6 includes i386 and x86_64 versions of almost everything. By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64. It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.
One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.
Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries. It includes 32- and 64-bit kernels as well, all in the same installation media.
Rejecting solutions to problems (Score:2)
Re:Rejecting solutions to problems (Score:4, Interesting)
Regarding hard drive and network speed, in today's world of gigabit to the desktop and 10 gigabit backbones and 2 terabyte hard drives I don't know what you're talking about with "10 megabit" and "20 megabyte" cracks. You do realize that the primary expense in a networked workstation environment is administration, not hardware, right? The proper use for local hard drive in a networked workstation environment is for caching, not for software installation. We knew this truth about workstation management twenty years ago, but for some reason it has been forgotten in a world where Microsoft and their deranged horribly expensive and virtually impossible to manage workstation environment seems to be the model for how to do things. How many years of IT experience did you say you had, again? :).
Wait, what does Con Kolivas have to do with this? (Score:5, Insightful)
I don't get the point in bringing it up.
Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.
This in particular seems like a solution in search of a problem to me. Especially since on a 64 bit distro pretty much everything, with very few exceptions, is 64 bit. In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually. So to me there doesn't seem to be a point in trying to solve a problem that exists less and less as time passes and proprietary vendors make 64 bit versions of their programs.
Re:Wait, what does Con Kolivas have to do with thi (Score:2, Insightful)
Except this seems to be the only place that doesn't acknowledge the usefulness of fat binaries.
Windows has had them since DOS, although no one uses them. OS X has them, and FreeBSD has talked about them and isn't flatly rejecting the idea.
I've seen many features in my career that seemed pointless, tabbed browsing for instance, my OS already supports
Re: (Score:2)
Is it possible to have a single Windows executable with both x86-64 and i386 code?
Re: (Score:2)
Well, please explain what would the usefulness be.
Especially when the two architectures most people care about are i386 and amd64, and the former works perfectly fine on the latter.
That would seem to suggest that the idea wasn't really useful in that case.
But it went through a considerable architecture shift from one CPU to another that was incom
Re: (Score:2, Troll)
Tell me how an Apple developer can run a server allowing the client to select the program and it'll download and install the correct version, like Debian repositories. That problem has already been solved, and the solution is better (it also gives you plenty of other features).
Oh, and closed-source companies can have their own repositories too. Example: http://download.skype.com/linux/repos/debian/ [skype.com]
Re:Wait, what does Con Kolivas have to do with thi (Score:4, Interesting)
This in particular seems like a solution in search of a problem to me. Especially since on a 64 bit distro pretty much everything, with very few exceptions, is 64 bit. In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually. So to me there doesn't seem to be a point in trying to solve a problem that exists less and less as time passes and proprietary vendors make 64 bit versions of their programs.
EXACTLY! We don't want choice, we want it to just work! Damnit, force people to do things the way they ought to do them, don't give them choice, they'll just screw it up.
Especially when that choice makes things EASY!
Re:Wait, what does Con Kolivas have to do with thi (Score:4, Insightful)
Re:Wait, what does Con Kolivas have to do with thi (Score:2)
I've definitely appreciated OS X's universal binaries in the past, it's a shame to
Re: (Score:2)
If your architectures are i386 and amd64 then you can just ship i386 and not bother wit
Re: (Score:2)
If what you said were correct (which it is not), then why would Intel/AMD stop making 32-bit chips, as they are concerned only with processor efficiency (and not RAM)?
32-bit is only more memory-efficient than 64-bit. It is not computationally so.
Re: (Score:2)
Ok, but what do you expect to come out of this?
For instance, suppose this patch goes in. But you won't get an i386/amd64/ARM binary from me, because I don't have an ARM system, so I won't compile for something I can't test. Some architectures have things like alignment requirements. Not all code will work everywhere even if a binary for the platform can be produced. A different byte order will require writing code that correctly reads data files on all platforms.
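The byte-order point just means never reading raw native-order integers out of a data file; a tiny sketch (Python; the 8-byte record layout is invented):

# Read a data file the same way on big- and little-endian hosts by always
# naming the byte order in the format string instead of using native order.
import struct

def read_record(f):
    data = f.read(8)
    # '<' forces little-endian everywhere; '=' (native order) would not.
    magic, version = struct.unpack("<II", data)
    return magic, version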
And why would you want code for 8 different
rude (Score:3, Insightful)
Oh well, so goes it with parts of the Linux culture.
Re: (Score:2)
Oh well, so goes it with parts of the Linux culture.
Coulda been worse. He could have been trying to get something folded in to the OpenBSD kernel. Theo makes those surly Linux kernel developers look like Miss Congeniality. :)
Kind of broken by design (Score:4, Insightful)
Game's over, quit holding up the bus. (Score:4, Insightful)
On Linux, with a couple of dozen architectures,
Kind of, but not really. No more than there are four architectures (PPC, ARM, X86, X86_64) for OS X. There's two architectures for Linux that actually matter, and they're the same two that run Snow Leopard. X86 and X86_64.
I can see why people are going to get up in arms about this. I've been as big a RISC booster as anyone, I think Apple gave up on PPC too soon, and I'm still bitter about Alpha, but that game's over. 32-bit and 64-bit Intel architectures are what matter, and those are the ones that almost all binaries will work for. I'm not running YDL any more, and neither are you. Game's over, instruction sets lost to marketing. The game's over, the fat lady's sung, picked up her paycheck, and gone home to watch House on her Tivo. Give it up and quit holding up the bus.
Re: (Score:3, Interesting)
True, but the ability to handle such things can come in handy. As an example, suppose you've got a setup where you're running apps off a server. You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at. A fat binary is pretty much the only way to solve that problem.
Re: (Score:2)
Just wanted to say that MacOS for a while there supported 4 architectures: i386 and PPC, both in 32 and 64 bit. Very few apps actually shipped with all four, since there were also fallbacks in place. (A few high-end apps did, but only a few.)
And even then there were a half-dozen utilities out there for 'cleaning' the architectures you didn't need out of the files. Which could get back a fair amount of disk space.
Re: (Score:2)
Re:Isn't someone going to ask ... (Score:5, Interesting)
Commercial Games. That's who.
Re: (Score:3, Interesting)
Commercial Games. That's who.
Exactly. Take Blizzard, who ships Windows and Mac versions of their games on the same media. Fat chance of getting an official Linux release in the absence of a universal binary solution. Blizzard tends to ignore platform-specific package formats in favor of their own installers, the better to control and customize the installation experience. By avoiding the standard MSI format on Windows, for instance, they avoid introducing a lot of unrelated dependencies and vastly simplify the post-release patching
Re: (Score:2)
Ryan Gordon does. He's ported a metric buttload of commercial apps for Linux. See Loki games & icculus.org for more information.
Story of binary compatibility is short and tragic (Score:5, Insightful)
In the entire forked-up mess of the unix tree, there was only one thing that anybody & everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week. For a good part of 4 years, I worked on portable.net, which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs. It worked nearly everywhere too.
Binary compatibility never took off because there is so little stuff that can be shared between binary platforms. Sure, the same file could run on multiple archs, but in reality that is no different from a zip file with six binaries in it. Indeed, it needed someone to build 'em all in one place to actually end up with one of these. Which is more effort than letting each distro arch-maintainer do a build whenever they please. OS X build tools ship with the right cross-compilers in XCode, and they have more of a monoculture in library versions, looking backwards.
Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah...yeah) is somehow a solution with no real problem attached. Unless you can make the default build-package workflow do this automatically, this simple step means a hell of a lot of work for the guy doing the build.
And that's just the problems with getting a universal binary. Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches. But colour me uninterested in this particular problem he's trying to solve. If he manages to convince me that it's a real advantage over 4 binaries that I pick & choose to download, hell ... I'll change my opinion so quickly, it'll leave you spinning.
Re:Story of binary compatibility is short and trag (Score:3, Insightful)
The problem isn't that it's not possible, it's that it's hard. Your argument is that since it's hard now, since the tools aren't ready for it, it shouldn't be done ...
Sounds pretty silly to me.
It would be hard to start from scratch and write a modern OS ... but that is indeed what Linux is.
If you never take the effort to make the hard easier it will remain hard. Changing from single threaded to multithreaded is hard, do you think we should not do that either, because the tools to do it don't make it a cake wa
Re: (Score:3, Insightful)
No because a package manager makes it easy to install software for the current arch. Even grandma doesn't benefit from having x86, AMD64 and arm binaries in a single package, much less from some random untrusted binary she downloaded from the internet.
Re:Story of binary compatibility is short and trag (Score:4, Insightful)
You missed half the argument. It's hard, and it's pointless.
The grandma who has a netbook running ARM and a desktop running x86 will install software by going into Add/Remove Programs and picking "Fun Times Photo Album for Grandmas" out of a list. The package manager will figure out what needs to be installed for her, on both her ARM and her x86 computers.
She's not going to go to some random website and download a random installer file and use it on both her computers - her kids have told her over and over again that that's not safe, and she may lose her knitting patterns if she does it.
Seriously, the people who advocate this junk seem to be entirely unaware of the joys of package management. All FatELF does is re-solve a problem that package management has had licked for a couple of years now, and it solves the problem in a less efficient way.
It's hard, yes - but it's not worth doing just because it's hard.
Re:Story of binary compatibility is short and trag (Score:2)
In the entire forked-up mess of the unix tree, there was only one thing that anybody & everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week.
This guy worked in the closed-source world of video games where it's often not even legal to share your source code (due to middle-ware licensing and trade secrets) and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc). It's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entir
Re: (Score:3, Insightful)
But the lack of universal binaries is not the reason why it's hard to release closed source software on Linux.
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Structure should be at the filesystem level (Score:3, Interesting)
My objection is that any such hierarchy of data could be stored as files.
Linux needs tools so that a directory can be manipulated as a file more easily. For instance cp/mv/etc should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default. Then programs are free to treat any point in the hierarchy as a "file". A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself. We also need filesystems designed for huge numbers of very small files and to make such manipulations efficient.
We need the tools to be advanced into the next century. Not use the workarounds of the previous ones as currently practiced on Unix and Windows.
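As a rough illustration of the exec-a-directory idea (Python; the one-binary-per-architecture layout is assumed here, nothing defines it today):

# Hypothetical launcher for a "fat directory": one real binary per
# architecture stored inside it, picked at run time.
import os, platform, sys

def run_fat_dir(app_dir, args):
    arch = platform.machine()                 # e.g. 'x86_64', 'armv7l'
    candidate = os.path.join(app_dir, arch)   # e.g. myapp.fat/x86_64
    if not os.path.exists(candidate):
        sys.exit("no binary for %s in %s" % (arch, app_dir))
    os.execv(candidate, [candidate] + args)   # replace this process

run_fat_dir("/opt/myapp.fat", sys.argv[1:])

In the proposal above the kernel would do this selection itself when the directory is exec'd; the script only shows the lookup such support would perform.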
a better idea.. (Score:5, Interesting)
FatELF was never really a great idea in my opinion. Putting two binaries in a file is not a really good way to solve the problem, as there are many more variations of CPU type (including all of the x86 variations) than just one or two. It would be a better idea to do something similar to the AS/400: include an intermediate form in the file, such as a syntax tree, convert it to native code at runtime on the user's system, and then store the native code inside the file next to the intermediate code. If the binary is moved to a new system, the native code can be regenerated from the intermediate code. This doesn't even require kernel support; the front of the file could contain shell code that calls the code generator installed on the system, generates the native code, and then runs it. This way, things like the various x86 extensions can also be supported, and so on.
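Very roughly, the stub would behave like this sketch (Python; the ir-codegen tool and the file layout are entirely hypothetical placeholders):

# Hypothetical self-regenerating program: if no native code is cached for
# this machine's architecture, run a (made-up) code generator over the
# bundled intermediate form, then exec the result.
import os, platform, subprocess, sys

def run(app_dir, args):
    arch = platform.machine()
    native = os.path.join(app_dir, "native", arch)
    if not os.path.exists(native):
        os.makedirs(os.path.dirname(native), exist_ok=True)
        # 'ir-codegen' stands in for whatever turns the stored intermediate
        # representation into native code for this CPU.
        subprocess.run(["ir-codegen", os.path.join(app_dir, "program.ir"),
                        "-o", native], check=True)
    os.execv(native, [native] + args)

run("/opt/someapp", sys.argv[1:])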
Re:a better idea.. (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
There is already a method for supporting multiple binary formats.
It's called source code
Re:a better idea.. (Score:4, Insightful)
Only the last phase of compilation, code generation, would occur on the user's computer. One of the problems with source code is that it can take hours to compile, and getting it to compile right is never easy enough for granny. The purpose of a universal executable is that it should be easy enough for granny to use, which means download, double click, and it runs. None of this fiddling with a million dependencies and so on. Granted, the problem is partly due to the fact that each Linux distribution does something differently and puts things in a different place.
The wrong Solution to the problem. (Score:2)
FatELF was the wrong solution to the problem. In the Linux community, we do have a cross distribution application issue. But its one of pure stubbornness.
What do I mean? Suse has its way of setting up RPMs, Mandriva has its, RedHat (Fedora) has its. The three big names in RPM all fight each other over stupid things like RPM macros, when RPMs are all 95% the same. We can't decide how to classify anything, so we fight over stuff like Amusements/Arcade vs. Games/Arcade. To some degree the same issue exists be
You want people to quit whining about RPM? (Score:2)
We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!"
That's because you don't have to descend into the hell that is rpmbuild, which was a pile of rotting dingo fetuses ten years ago and hasn't gotten one bit better since.
It's long past time that they gave up that ghastly binary blob and defined a new "rpmx" format, that would look kind of like this:
A gzipped or bzipped tarball, containing:
1. a di
Petty fiefdoms and not invented here... (Score:2, Interesting)
maybe the idea was just bad... (Score:2, Insightful)
maybe it's just me but I see 0 advantages for an executable with multiple binaries.
shouldn't this all be handled by the package manager? isn't including all these binaries just jacking up download sizes for no gain?
a boot CD that can run on multiple archs is the only real use I see for this, but I would have to think there is a better way to handle that than changing the fundamentals of executables and libraries.
maybe he received a less than warm reception from other devs because his idea provided virtually no
Forget fat binaries for Linux (Score:4, Funny)
I want fat binaries for microcontrollers! Give me binaries that can run on PIC16F88, eZ80 and 68HC11!
There's nothing worse than having to replace a $0.50 chip with another that costs $0.51!
"That's a stupid idea" vs. "You are stupid" (Score:2, Insightful)
The issue wasn't that there were lots of people saying "That's a stupid idea" or "That's a stupid implementation of an otherwise good idea."
The issue was lots of people saying "You are stupid."
There is a big difference.
I'd weighed in on this, because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is, ideally, the same software load. (Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of c
Why did he even talk to the kernel people? (Score:3, Interesting)
Not worth the performance hit? (Score:3, Funny)
stupid idea (Score:4, Insightful)
FatELF is a stupid implementation of a stupid idea. I.e., even if you want fat binaries, modifying the ELF format is the wrong way of doing it.
Yay for the Linux kernel developers for keeping this kind of crap out of the kernel.
If I wanted to write in Java, I guess... (Score:2)
Why have universal binaries when you have Java to do that?
If I don't care about performance I can already write in Perl, Awk, Python, Tcl, or something. Why do I want to put up with Java?
I would LOVE to be able to remove all the "if `uname -cokebottle` matches "[xX]86" pick this RPM else pick that RPM" code from my installers. OK, that still leaves a fair amount of packaging hell unfixed but every little bit helps.
C Byte Code (Score:3, Funny)
If I don't care about performance I can already write in Perl, Awk, Python, Tcl, or something. Why do I want to put up with Java?
Well, maybe what we could use instead is C byte code, or some other form of byte code, and then have the JIT do the low-level compilation.
My first reaction to FatELF was that it is a good idea, since it works on the Mac. After listening to the issues people bring up about the large selection of CPU architectures, I can understand the objection. On the other hand, why not allow ELF to suppo