Linux Descending into DLL Hell?
meldroc writes "Or should I call it "Shared Library Hell"? The top story of LWN.net this week states that the newest version of GnuCash requires sixty libraries(!), many of which are new versions that aren't available out of the box with most Linux distributions. Managing shared libraries, with all their interdependencies and compatibility problems, is a major hassle. How should we go about dealing with the multitudes of shared libraries without driving ourselves mad or descending into the DLL Hell that makes Windows machines so unreliable?" Well, GnuCash 1.4.1 works fine for me, and I feel no immediate need to update to 1.6, the version that needs 60 libraries. But still a good point here.
Re:Is there a real solution to this? (Score:2)
Theoretically there should be no problem (Score:2)
So long, happy linking.
Re:Difference from Windows... (Score:5)
Re:Relax. (Score:2)
libguppi11
libwrapguile
libwww-perl
slib
scm
guile1.4
guile1.4-slib
libgwrapguile0
libnet-perl
liburi-perl
libhtml-parse-perl
guile-common
libnet-telnet-perl
libhtml-tagset-perl
This is on a system with a fairly extensive set of apps in normal use. I can see that if you use Perl for a bunch of your stuff, and Guile, there really is no choice but to install the full set of development files and libs on a system that has no Guile stuff on it.
However, it's 40-ish MB of archives that I have to download (thank $DEITY for fast ADSL!).
Re:Why doesn't Linux adopt a Mac OS X type scheme? (Score:2)
Question.
Is it like this?
\SomeAppDir\!Application
\SomeAppDir\!Application\!Sprites
etc.
Or is it like this?
\SomeAppDir\!Application
\SomeAppDir\!Sprites
etc.
Just curious... RISC OS sounds fun.
Re:Libraries: Harken to the Bad Old Days (Score:3)
But, really, this breaks down under certain usage patterns. On a system like Debian, where package installation is trivial compared to Windows, there are a ton of packages. I currently have 694 packages installed, though a significant number of them are libraries.
Consider another pattern -- extended environments. Gnome is an instance, as is KDE. I have 12 Python packages installed, and Python by itself doesn't even do anything. I won't speculate on how large Gnome or KDE are.
I have 41 gnome packages installed (or, at least packages named gnome*). What would happen if I had 41 copies of the Gnome libraries for these applications? What if packages had even greater granularity? What if I get to choose which applets I want installed? What GTK engines I want? Hell, I don't even know how engines could work with 41 copies of the libraries.
Symlinking identical libraries to save space wouldn't do any good, because that offers nothing better than what I have now (which works just fine, actually) -- where most libraries end up in /usr/lib. In a funny way, I suppose hard links would almost work right.
On Windows, per-application DLLs kind of make sense. On Windows, people don't have that many applications installed. Each application tends to do a lot on its own. This is partially the influence of commercial tendencies (people don't want to pay for something too small), and partially because small applications quickly become unmanageable on Windows. But Debian doesn't have this problem, and RPM-based systems have, well, not too much of this problem. Why "fix" what isn't broken?
Next you'll have us putting application files in directories named after domain names or company names, to avoid naming conflicts. /usr/applications/org/gnu/gnucash. Uh huh.
Re:Hey programmers (Score:2)
People really need to make up their minds. I know I for one would prefer to reuse someone else's code for common functionality, rather than having to rewrite it myself every time I need it (or static link it, making unnecessarily bloated binaries).
_____
Sam: "That was needlessly cryptic."
60 Libraries indeed! (Score:2)
Re:Yea but (Score:2)
Most of the GNUcash dependencies are satisfied merely by "obtaining latest version of associated desktop". That hardly rates as "including every single doo-dad" or "spans 10 DVD's".
This is more a matter of a developer using "bleeding edge" libraries rather than creating some perverse octopus.
Re:Relax. (Score:3)
I agree that code reuse can cause bloated software, in that libraries often have to deal with a very general case of the problem, which requires much more than just coding the specific case yourself.
However, I can prove that code reuse isn't always bloat: the ANSI C library on your system. If the ANSI C library were statically linked, there wouldn't be any memory shared between your processes for it. When you run 'top' and a process says it takes up a few more megabytes than you thought it would, be sure to check the shared column.
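Here's a little sketch of what I mean (my own illustration, Linux-specific): the third field of /proc/self/statm is the same "shared" figure top reports, and on a dynamically linked system most of it is mapped library pages.

    /* shared.c -- print how much of this process is shared with others.
     * Linux-specific sketch; build with: gcc shared.c */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned long size, resident, shared;
        FILE *f = fopen("/proc/self/statm", "r");

        if (!f)
            return 1;
        /* fields are in pages: total, resident, shared, ... */
        if (fscanf(f, "%lu %lu %lu", &size, &resident, &shared) != 3) {
            fclose(f);
            return 1;
        }
        fclose(f);

        long page_kb = sysconf(_SC_PAGESIZE) / 1024;
        printf("total: %lu KB, resident: %lu KB, shared: %lu KB\n",
               size * page_kb, resident * page_kb, shared * page_kb);
        return 0;
    }

Run it (or look at any process in top) and the shared figure is mostly libc and friends -- memory that a statically linked world would pay for once per process.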
Saying that code reuse causes bloat is not the whole story. Code reuse serves both sides of the bloat war.
\\\ SLUDGE
Relax. (Score:4)
The software requirements [gnucash.org] require "60 libraries" because "The majority of the GNUCash 1.6.0 dependencies are satisfied by Gnome 1.4".
If major distros don't yet support the libraries of recent software releases, that's fine with me. The push for newer versions should come from bleeding edge software.
Aside from that, I personally commend the code reuse of GNUCash. Functionality needs to be reused as much as possible: We're working alongside giants. Let's stand on each other's shoulders.
\\\ SLUDGE
Re:Installing Free Software (Score:2)
X drivers, etc. (Score:2)
You're wrong. Even the regular XF86_SVGA server must directly access the IO space, and it does so through the kernel. On my Alpha, when I still ran Linux on it, I had nothing but problems with X. A perverse combination of user operations (e.g. a fill or scroll) would completely lock up the machine.
Yes, there are times when some poorly-written X app kills the server, and there are times when the X server bugs out without killing the OS. But your statement "X and the Kernel are clearly different" is a gross oversimplification, ignoring iovec(), memory-mapped IO, and the like. Even the most lowly frame buffer drivers must directly access the hardware.
Rev. Dr. Xenophon Fenderson, the Carbon(d)ated, KSC, DEATH, SubGenius, mhm21x16
Re:This is why 'Stable' distributions exist (Score:2)
I found it a little bit ironic that I read the lwn.net article about Gnucash yesterday afternoon and this morning my daily 'apt-get update ; apt-get -dy dist-upgrade' of Debian unstable put a copy of gnucash-1.6.0 on my desktop. Also included in Debian unstable are Evolution 0.10, Mozilla 0.9.1, and Nautilus 1.0.3; they all seem to work together fairly well. It seems that a version of Netscape is also available, but I wouldn't know, I haven't used Netscape in some time.
Shared libraries are good. And shared libraries are especially good in Linux where it is a trivial thing to have different applications use different versions of the same dynamic libraries. With the next generation of Linux distributions all of these libraries will be included (probably by default), and so installing will be a piece of cake.
Re:Difference from Windows... (Score:2)
The difference is that in Linux you can use LD_LIBRARY_PATH so that you can guarantee that your application loads the dynamic libraries that you need. Couple that with a library versioning system that has major and minor revision numbers and that allows you to have several different versions of a library and you have a system that is basically DLL-Hell-proof.
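Here's a rough sketch of why that works (my own illustration; "libfoo" is hypothetical): each application asks the dynamic loader for the specific major version it was linked against, so incompatible versions can sit side by side in /usr/lib.

    /* versions.c -- two major versions of a hypothetical library coexisting.
     * Build with: gcc versions.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* an old binary effectively asks for this soname ... */
        void *old_api = dlopen("libfoo.so.1", RTLD_NOW);
        /* ... while a freshly built one asks for this: */
        void *new_api = dlopen("libfoo.so.2", RTLD_NOW);

        printf("libfoo.so.1: %s\n", old_api ? "found" : "missing");
        printf("libfoo.so.2: %s\n", new_api ? "found" : "missing");

        if (old_api) dlclose(old_api);
        if (new_api) dlclose(new_api);
        return 0;
    }

Installing gnucash's new libraries doesn't overwrite libfoo.so.1; old apps keep loading the file they've always loaded.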
The author of the original lwn.net article apparently simply doesn't know what the heck he is talking about. If the Linux system that you are using does not make adding new libraries a trivial undertaking then this needs to be filed as a bug. Gnucash 1.6.0 is a new release, and to run new releases you either have to know how to build and install software, or you have to wait until someone else does it for you.
To illustrate this, I spent a small portion of time this morning playing with Debian unstable's version of gnucash-1.6.0. I installed it with a simple 'apt-get install gnucash' and it painlessly downloaded gnucash and all of the required libs (that I didn't already have). After nearly an hour of playing things seem to be just fine (and gnucash is much improved over the 1.4 series).
Dynamic libraries are good. They are even good on Windows, now that Windows 2000 has finally got its act together and allows multiple versions of the same DLL to be in memory at the same time. They have been useful on Linux for quite some time.
Re:This is why 'Stable' distributions exist (Score:2)
My entire /etc/apt/sources.list consists of two lines:
deb http://http.us.debian.org/debian unstable main contrib
deb http://non-us.debian.org/debian-non-US unstable non-US/main
Notice that I am not using Ximian's packages. I have used them in the past, but unstable has generally had the software I needed without the extra hassle of dealing with Ximian and their sometimes not quite Debian compliant packages. I ended up removing all of the ximian packages some time ago.
RedHat is a fine choice for a distribution, and Debian isn't for everyone. It seems to me that RedHat probably would be the way to go if you were primarily interested in Ximian's Gnome packages. At least with RedHat those packages are likely to be well tested.
Either way, there certainly is no evidence of DLL Hell. It is just a case of a program that has a lot of required libraries. To my mind this is the best sort of code reuse, and is definitely a good thing.
Don't Confuse DLL Hell with the Linux Situation (Score:5)
Linux isn't experiencing anything remotely similar to DLL Hell.
DLL Hell is when Foo DLL 1.0 and Foo DLL 6.0 are both stored in the file foo.dll (unlike libfoo.so.1.0 and libfoo.so.6.0), and brain-damaged installer programs blindly replace foo.dll version 6.0 with foo.dll version 1.0, thus breaking every single program that depends on the newer version of foo.dll.
Because so many of these crappy programs exist, Microsoft has made an attempt at fixing the problem by introducing the Critical Files Protection mechanism, in which the operating system itself monitors file creations and modifications, looking for these stupid installers as they attempt to replace the new versions with the old versions of the libraries. Attempts at change silently fail, and the installer runs along its merry course without breaking things too badly.
Basically the problem is only gnucash (Score:2)
I am not surprised that they chose gnome-1.4, which has been available for 6 weeks, as a base for a finance app that would happily work as a plain gtk app (like gimp).
--
Let the package maintainers take care of it (Score:2)
Thanks
Bruce
Re:You wanna talk hell... (not NeXT style) (Score:2)
Re:Installing Free Software (Score:2)
Why the hell is there anything more complicated than this? I really want to be able to test the software without "installing" it.
Just put the libraries in there and fix the system so it is trivial for a program to turn argv[0] into its execution location, and have it look in that location for shared libraries first. Yeah, it's different, but it will be far better.
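A rough sketch of the argv[0] idea (my own, assuming Linux's /proc/self/exe and a hypothetical libbundled.so shipped next to the binary):

    /* relload.c -- load a library from the same directory as the executable.
     * Build with: gcc relload.c -ldl */
    #include <dlfcn.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char dir[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", dir, sizeof(dir) - 1);
        if (n < 0)
            return 1;
        dir[n] = '\0';

        /* strip the program name, keep the directory */
        char *slash = strrchr(dir, '/');
        if (slash)
            *slash = '\0';

        char lib[PATH_MAX + 32];
        snprintf(lib, sizeof(lib), "%s/libbundled.so", dir);

        void *h = dlopen(lib, RTLD_NOW);
        printf("loading %s: %s\n", lib, h ? "ok" : dlerror());
        return h ? 0 : 1;
    }

For what it's worth, I believe the dynamic loader's $ORIGIN rpath feature gets you much the same effect without any code.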
Or put the source code in there and when the user double-clicks it pops up a window that says "please wait, compiling gnucash" and then it works and is perfectly adapted to your system. This would be a huge win, and is something closed-source providers just cannot do! Don't worry about the time it takes, people are apparently willing to waste hours running "install" and the compile is nothing compared to that.
Re:Difference from Windows... (Score:2)
I am just making normal DLLs (whatever you get when you "link /DLL" all your .obj files). It does appear it searches $PATH when it needs a DLL (I always must remember to delete my optimized copy, which is in an earlier $PATH directory, before running if I want debugging to work). In this case it works exactly like LD_LIBRARY_PATH (which reminds me, I am just as confused by Linux as by Windows: why isn't this variable *always* needed, and why is there some other mechanism to locate shared libraries?)
However, one nasty screw-up I have seen several times (but I can't reproduce at will, that's NT for you!) is that sometimes the old copy of the library will get "locked into memory", so that it refuses to load my new copy. I swear that I have exited, and even deleted, all the executables that are using that shared library, but it is still there! It appears that logging out and back in fixes it. Also, deleting the library itself works (though that crashes running apps using it -- why aren't the files locked the way NT locks executables?)
Another annoyance not mentioned here about NT is that you must use a DLL if you want to support "plugins", because all functions the plugins call must be in the DLL. Linux also has this, but you can link your program with "-shared" and it then works like the Unix systems I am more familiar with (though I don't do this here, as the software has to port to NT and replicating the difficulties of NT helps to keep things portable). For this reason I have to produce a DLL for no good reason at all: only my program uses it, and it is likely only one copy of my program is running at any time. I have heard there is some way to make a "DLL executable" but I can't locate it -- just building a DLL out of all the source does not make a runnable program.
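Here's a rough sketch of that Unix-style approach (my own illustration; "plugin.so" and plugin_init() are hypothetical). The trick is exporting the main program's own symbols to plugins -- gcc's -rdynamic (ld's --export-dynamic) does that -- so the shared functions can stay in the executable instead of being split off into a DLL:

    /* host.c -- a plugin host whose callable API lives in the executable.
     * Build with: gcc -rdynamic host.c -ldl
     * (plugin.so would be built separately with: gcc -shared -fPIC plugin.c) */
    #include <dlfcn.h>
    #include <stdio.h>

    /* a function the plugin can call back into -- no DLL needed */
    void host_log(const char *msg)
    {
        printf("host: %s\n", msg);
    }

    int main(void)
    {
        void *h = dlopen("./plugin.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        void (*init)(void) = (void (*)(void))dlsym(h, "plugin_init");
        if (init)
            init();   /* plugin_init() is free to call host_log() */

        dlclose(h);
        return 0;
    }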
Another huge Windows annoyance is "__declspec(dllexport)". That crap is going to pollute header files in code for ages after NT is dead and buried. What a waste.
Re:Difference from Windows... (Score:2)
This can be demonstrated easily by removing those links: programs still work, but you cannot compile programs (or they find the static versions of the libraries).
You can use symbolic links to get DLL hell, as I will sheepishly admit I have done to get downloaded programs to work. Link the version name of the library the program complained about to whatever version you have, and often the program will work! However, newer things like libc seem to have version information in the file that is checked, so you can't outwit it that way.
Re:It's not DLL hell that makes Windows unreliable (Score:2)
PS: MFC is also as big a lock-in for Microsoft as Word is. A huge stumbling block for getting programs ported to Linux is that they are written with MFC or VB systems. The MFC source code is totally useless for the port, as it calls Win32 stuff (like structured file parsers) that nobody would ever use in any real code, but MFC does so they can hide the implementation.
Re:Libraries: Harken to the Bad Old Days (Score:2)
Would this work, or am I just clueless? Is anybody trying this? Never saw anything about it in all the file system discussion.
Re:Debian != smart (Score:2)
Too many posters here seem to have no concept of a program that you can "run" without "installing" it first, and go on about how great apt-get is at "installing".
Listen to the original poster. The steps desired to run a new program are:
1. Download the file
2. Double click it.
3. It's running! No, it is not "installing", it is RUNNING!!!! Get that through your thick heads.
Also not mentioned is what the user does if they decide the program is crap and don't want it:
1. They drag the downloaded file to the trash (and empty it, I guess).
2. THAT'S IT! We are exactly in the same state as they were before they downloaded it. It does not "deinstall", it's one file!
What is being talked about has nothing to do with "installation".
If you want "installation" here is how it should work:
1. The user runs the program
2. The program somehow shows a question like "Do you like this program and want it to appear in your Gnome/KDE startup, and in everybody else's?" (or for services, "Do you want to run it for real, versus the simulation that is running now?"). The user clicks "yes".
3. It pops up a panel and asks for the root password.
4. It churns and it is "installed".
To "deinstall" you do this:
1. Somewhere in the system directories is a file with exactly the same name. You (as root) take this and put it in the trash. It's GONE!!!!
2. Somewhat later KDE/Gnome may notice that the menu items don't point at anything and hide or delete them.
Re:It's not DLL hell that makes Windows unreliable (Score:2)
Hell, as a regular user you can hang X and the console badly enough that you can't use it or reset it. I did it yesterday with RH7.1 and Konqueror (sp?). The X server should not be able to crash based on requests from its clients, but it does.
--
the telephone rings / problem between screen and chair / thoughts of homicide
Re:Libraries: Harken to the Bad Old Days (Score:3)
I know virtually nothing about MS Windows and MS's plans for it, but I do remember the commotion here on /. a year or so ago when MS patented "a new OS feature", basically automatic links. You can have as many copies of the same file on disk as you want, without using more space than one file. When an application wants to change a file, the link is substituted with a real file.
Managing DLLs was the purpose of this "invention".
Lars
__
Re:static goddamnit! (Score:2)
Um, you don't run a multiuser system, do you... the problem is that if this program is statically linked against 60 libraries, it's probably going to use up a fuck of a lot of memory. And, if you have 10 users running statically linked programs, you hurt. Badly. Especially since it's usually not just one program -- imagine Gnome statically linked. There are several resident programs, all linked against X and libc at a minimum (call it an extremely -- almost psychotically -- optimistic 5 megs each for library code, plus the app.) That's a lot of memory for 10 people running a desktop... and they might want to do work too. Shared libraries exist for a reason.
Difference from Windows... (Score:5)
The major reason we refer to it as dll hell on Windows is very simple -- there's no concept of a version. App A uses v6 of foo.dll, app B uses v8. It's still named foo.dll. Oops -- the API changed. Hell.
Unix systems have the concept of a version -- you change the API, you rev the major version of the library. The old one's still there if you need it, but apps will get (dynamically) linked against the version of the library they were originally linked against.
Yeah, it's a bitch (and a half) to compile all that shit -- I've compiled Gnome (and all dependencies of it) on a Solaris box from sources. It's a pain in the ass. But, as Bruce Perens said in another post, that's the job of the packager -- Ximian, RedHat, the Debian volunteers (thanks.)
Re:You wanna talk hell... (not NeXT style) (Score:2)
It's a bad use of a word/phrase that already has a meaning, and it will in the end only confuse young and budding developers. I hate Java's "Design Patterns" (in this case, the naming scheme for JavaBean accessors and event methods) for the same reason -- "Design Patterns" already had a well-used, and mostly well-understood, meaning.
--
You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)
This has been solved in other areas... (Score:2)
The problem is a particular instance of the general "continuous maintenance problem", which is now reasonably well understood (i.e., it's not academically interesting any more). This has been solved in particular cases as far back as Multics and, more topically, in the relational database world.
I have one of Paul Stachour's papers on the subject, although it needs to be OCR'd...
What's a suitable place/list to discuss this?
dave (formerly DRBrown.TSDC@Hi-Multics./ARPA) c-b
Re:Is it really necessary.. (Score:2)
rm -rf
hell, even better, try this:
rm -rf
now you're down to a bare-bones, 10 MB linux install with all the cool commands left (ls, cp, sort, etc.). Pretty cool eh?
GLibC can't be upgraded (Score:2)
I pointed out some of them on the bug-glibc [gnu.org] mailing list:
The current situation is not acceptable.
Why doesn't Linux adopt a Mac OS X type scheme? (Score:5)
Couldn't something like this be done to reduce the clutter? Have "gnome-libs.pkg" which is actually a tar or tgz file. When an application needed to use a library, it would involve an extra step -- extracting it from the tarfile -- but that would only be on first load (after that it would be cached in swap) and the cost to retrieve the file would be minimal.
On the other hand, the possible gains would be enormous. Packaging would become simple. For most applications, install/uninstall could simply be putting the binary in bin and the libpkg in
I guess what I'm thinking of is like GNU stow, just taken further. Why not make those directories into tarfiles?
Want to make $$$$ really quick? It's easy:
1. Hold down the Shift key.
It's so simple! (Score:2)
(why do open source software authors pay so little attention to their users' needs?)
Re:Let the package maintainers take care of it (Score:2)
Go you big red fire engine!
Re:Relax. (Score:2)
Sure. You can then install Abiword, Gnumeric, Nautilus, Evolution, and Dia in another few megabytes. That's the point of shared libraries.
Oh, and by the way, have you checked just how much screenshot-heavy documentation gnucash provides?
Go you big red fire engine!
Re:Difference from Windows... (Score:4)
When we're talking about DLL hell, the first thing to keep in mind is that usually we are talking about incompatible changes in behavior or interface in libraries that are *meant* to be shared. Once a software designer decides to expose an interface, he becomes hostage to his installed base. Effectively, as others have pointed out, the name of the DLL becomes its interface identifier. So by convention you should encode the lowest compatible revision number of the interface in the DLL name. You can keep the name the same as long as you are binary-compatible, no matter how much the implementation changes. For example, MFC didn't change much from version 4.2, so when Microsoft came out with MFC 5.0, the names of the DLLs didn't change from MFC42*.DLL because they were still binary compatible.
If you need to share some state even between different versions of the same facility (like a list of file handles for different versions of libc), you are going to have to factor the internal state into yet another interface -- and that one had better be evolvable for a very long time.
The sad fact of the matter is that maintaining binary compatibility is hard, because there are technical details to master, and because of the discipline required to keep all potential clients working without having to rebuild and relink. You can't change the names or signatures of functions, you can't change the layouts or sizes of structures or classes, and old bugs that clients rely on as features become features that have to be supported. If it's a C++ interface, you can't add virtual functions to base classes, although you can add new nonvirtual class functions. Of course, you can add new functions. It is also possible to "version" structures for expandability, but this is tricky.
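A quick sketch of that struct-versioning trick (my own, hypothetical names -- the same idea as the cbSize field in many Win32 structures): fields are only ever appended, and the callee checks how much the caller actually filled in.

    /* foo_options version 1 ended at old_field; version 2 appended new_field.
     * The caller always sets .size to the sizeof() it was compiled against. */
    #include <stddef.h>
    #include <stdio.h>

    struct foo_options {
        size_t size;
        int    old_field;
        int    new_field;   /* appended in "version 2" of the interface */
    };

    void foo_configure(const struct foo_options *opt)
    {
        int new_field = 0;  /* safe default for callers built against v1 */

        if (opt->size >= offsetof(struct foo_options, new_field) + sizeof(int))
            new_field = opt->new_field;

        printf("old_field=%d new_field=%d\n", opt->old_field, new_field);
    }

    int main(void)
    {
        struct foo_options opt = { sizeof(opt), 42, 7 };
        foo_configure(&opt);
        return 0;
    }

It works, but as you say it's tricky: one forgotten size check and you're reading past the end of an old caller's struct.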
By the way, while static linking is a way around some DLL problems, one of the cases in which it can't work is when a DLL implements an interface to functionality that requires shared state in the same address space. You can have multiple clients of a DLL in the same address space when both an application and its extension libraries refer to that DLL. This happens a lot with ActiveX controls on Windows (and in-process COM servers in general), and I suspect that this also happens in Apache on Unix as well.
Both Windows XP and Mac OS X have mechanisms for allowing shared libraries to be hived off so that you can have a private version for yourself, but I don't know much about how they work.
Note that shared Java classes are much easier to deal with, but it isn't as if interface changes there have no binary compatibility implications.
The other way around this problem is to expose the interface(s) in a language-neutral object model that supports interface evolvability, such as [XP]COM or CORBA. Unfortunately, it's not always easy to use these kinds of interfaces, and most of these kinds of systems don't offer implementation inheritance (with the notable exception of .NET, which is currently Windows-only and also requires language extensions), which is a handy feature of a C++ class exposed in a DLL.
Re:Welcome to Windows? (Score:4)
On Windows, the libs are called Dynamic Linked Libraries. On UNIX, they are called SHARED libraries. Of course, we all know they are the same thing, but apparently on Windows many people don't understand their purpose.
Part of the DLL hell is the vast number of them that are unnecessarily created by people who don't understand when static linking will work just fine. I still hear people claim that DLLs magically keep the executable size small. DUH! All it does is unnecessarily chunk up your program, increase the file count, and increase loading time.
So far under Linux I have hardly seen any abuses of this. Shared libraries are generally reserved for genuinely sharable code, and the rest is statically linked the way it should be.
It sounds like GNU Cash is using shared libs correctly and once the distros catch up we'll wonder what the fuss was about.
Re:Welcome to Windows? (Score:2)
Assuming we remember the fuss. You're giving our collective attention spans way too much credit.
--
Re:Yea but (Score:2)
The deps for GnuCash are at the forefront of the state of the art for the GNU desktop, but c'mon: this ain't no half-hour hack; it's one of the more serious desktop applications for Linux, and thus requires some recent libraries. Or, you could just dual-boot and run Micros~1 Money or Quicken...
...jsled
Helping the community? (Score:5)
Hmmm... 2 or 3 GnuCash developers independently post story submissions to /. about how they've released a significant new version [gnumatic.com] of a key Linux application ... one which has the potential to remove some people's last hurdle for switching away from Micros~1 completely...
And /. decides to reject all those, and instead posts a poor LWN piece [gnumatic.com] which overstates [gnucash.org] a problem that is valid but has nothing to do with GnuCash, and everything to do with the poor state of Linux software installation, package management, and the impatience of users regarding the package system they're using.
Thanks Slashdot story selectors! The GnuCash folks did their part and wrote a bunch of code that works really well... ignore it if you must, but don't piss on their efforts.
Re:Is it really necessary.. (Score:3)
If you release a version that uses all kinds of new libraries, take 3 more seconds and release a statically linked version so you don't undo every success the advocates have achieved...
I ran into this gnucash problem 3 days ago, and now I need to tell my small group of new linux users that the newest version is not for them and will probably never be for them. (Getting linux there was hard enough; asking them to re-install to redhat 8.0-8.1-8.2-etc to satisfy the whims of programmers? No way.)
My suggestion? Go ahead and use 925 libraries for your program, but if you expect people other than programmers and gurus to use it, make a package that is 100% compatible with a current distribution -1 (i.e. subtract a decimal from the distribution release). Sorry to pick on gnucash directly, but it is one of those apps that was able to switch windows users over, so they have more riding on their backs than they really know.
Re:Difference from Windows... (Score:2)
Now if only glibc followed this guideline...
not really dll hell, more like mini-mayhem.. (Score:3)
For those just tuning in, linux has the concept of package management, which doesn't really exist in windows, and in theory should solve the library problem.
In practice there are two competing standards, AKA Debian DEB/APT (which works pretty well) and Red Hat RPM (which is a mess, and not truly automatic).
There are solutions that try to solve the problems with RPM (mostly the lack of centralized QA on individual packages), like the distributions themselves (many, many, many distributions), but these cause other problems with different distributions' packages (SuSE RPMs on a Red Hat system are likely to have problems, etc).
Some other solutions are surfacing, like Aduva's manager, which uses a central database as well as a client that analyzes the dependencies/interrelations on the user's machine. But that solution is not Free software/GPL/OSS.
Another problem with these solutions is that typical Linux users are reluctant to use automatic systems on their systems.
(I'm feeling generous today)
--------------------------------
Re:Let the package maintainers take care of it (Score:3)
Since the majority of Linux software comes in packages we'll never enter the DLL Hell in the way Windows has because you can always see where each file comes from and how they relate just by bothering to do some legwork.
Upgrading is no big hassle for the most part. Use something like one of the above-mentioned tools to stay updated, or if you must resort to a program that is non-standard, try something like wget (or in many cases just plain old ftp) to suck in all the needed files.
Re:Difference from Windows... (Score:2)
Sure, it works great in theory -- until you realize that many of these shared-library authors haven't bothered to CHANGE that major version number when the API changes! And this has even happened with libc!!!
Re:Installing Free Software (Score:2)
libjpeg DLL problem (Score:2)
Since it's only two programs, I've modified the Mozilla startup script and created a startup script for Opera to set LD_LIBRARY_PATH to find libjpeg.so.6.2, instead of 6.1. I've tried upgrading ImageMagick, but both non-SuSE rpms and the source break, and the SuSE rpm essentially requires upgrading all of SuSE. oh well - DLL Hell is warm in the winter...
dll hell on Linux? (Score:2)
I don't want a lot, I just want it all!
Flame away, I have a hose!
Re:Difference from Windows... (Score:2)
Congratulations, Microsoft has successfully duplicated (er, I mean "innovated") the technology of multiple shared libraries and LD_LIBRARY_PATH. They're only how many years late to the party? :)
Caution: contents may be quarrelsome and meticulous!
Re:Is it really necessary.. (Score:5)
It's never necessary, unless of course you want one of the cool new features in that library. It doesn't take too many active developers before you're pulling in a lot of new technology and thus a lot of new library dependencies.
For most people this won't be a problem since their distro will figure it out, and for everybody else that's trying to install RPMs or build from source, take a look at the mailing list archives on gnucash.org [gnucash.org]. If gnucash didn't use brand-new tools like Guppi and Gnome 1.4 libs, much of the cool new stuff in gnucash 1.6 wouldn't be there.
It's not really DLL hell, since you can have multiple copies of the same library installed for use by different apps. DLL hell would be if gnucash blew away the libs that gnumeric needs, and then reinstalling gnumeric screwed up the libs that nautilus wants, etc.
Caution: contents may be quarrelsome and meticulous!
GnuCash 1.6.0 problems and solutions. (Score:3)
grab them here [dhs.org]
Incidentally, I think it's very worthwhile to upgrade to GnuCash 1.6.0. It's slick. Play around with the reports for a few minutes and read the excellent documentation. The documentation gives an excellent summary of the principles behind handling your finances and covers the new features in 1.6.0.
Re:Why use shared libraries? (Score:2)
I don't think so. I'm pretty sure that Linux merely memory-maps the libraries when they get linked in. Only the parts that are actually accessed are loaded into memory, where they are cached for later use. It's very fast.
Re:It seems to me that.... (Score:2)
Not all Linux distributions are like that at all. I use Debian GNU/Linux, and when I want to install package foo I just type "apt-get install foo" at a prompt. Done. apt will resolve all dependency issues, download all required packages, install them, and configure them without you having to lift a finger. Just about any piece of software you can think of is available as a package in the Debian system.
Debian also has guidelines about how packages are built, so that software is installed in predictable places. All packages have a dedicated maintainer (and changelogs) and are generally of higher quality, i.e. they work on the first try.
Yes, the Debian installer is a real piece of crap, but the Progeny installer is Slicker 'n Snot(tm). Yes, there is a bit less handholding, but it is also less required, as stuff Just Works(tm) out of the box. If you want all the benefits of apt but don't want Debian, however, you can always use the new Mandrake or Conectiva, which have moved to apt.
Re:Why use shared libraries? (Score:2)
The filesystem does read-ahead caching, as well as using all available memory (save 1-2MB) for filesystem cache. I think it would be inefficient to have free memory sitting around not doing anything for the system. Any performance problems would only occur on the first load of a shared library, and shouldn't be a problem unless you are randomly accessing a lot of libraries (see the LD_BIND_NOW=true trick that KDE uses on how to work around this).
Unix-style systems are generally optimized to start lots of small processes very quickly, and small processes are often used where threads are used on other OS's. For example, on Linux process creation time is comparable to (if not faster than) thread creation on NT, and threads aren't scheduled any differently than processes. Sorry, tangent alert.
I think that in the common case, libraries for whatever you are using will usually be loaded into memory already.
Re:Load libraries by checksum, not version number (Score:2)
Autogenerate a test suite for your app. Say you are a coder and you just made this nifty app Foo. Your compiler figures out what interfaces you need from shared libraries, then builds a test suite to find these interfaces. On the user's machine, the test suite would automatically launch on install, query all installed DLLs for the required interfaces, figure out if these interfaces lead to the expected implementation (kinda like OS fingerprinting), then install your app against stuff that will work. If some interfaces were missing, then tell the user which versions of which libraries are preferred for the stuff to work.
The search bot could even be smart and try to consolidate interfaces, i.e. find and use the libraries that provide the most interfaces, so as to reduce memory footprint. It amazes me that computers can do complicated logic, yet much of the install process requires the user to implement that logic by hand.
Re:You wanna talk hell... (not NeXT style) (Score:2)
Yes, there is. (Score:3)
Vik
Re:Difference from Windows... (Score:2)
Wrong...it's a problem that arises when developers DO NOT follow the convention and you DO NOT have the ability to alter code. In the Windows world, bad code is uncorrectable.
Re:Difference from Windows... (Score:2)
Take, as an example, msado.dll. You want to use the Recordset object. We'll assume there are two versions. Your registry will look something like this:
(This is from memory, I don't have a win machine in front of me)
HKEY_LOCAL_MACHINE\Classes_Root\ADODB.Recordset
HKEY_LOCAL_MACHINE\Classes_Root\ADODB.Recordset
...
HKEY_LOCAL_MACHINE\Classes_Root\{aaaaa-11111-bl
...
HKEY_LOCAL_MACHINE\Classes_Root\{bbbbb-22222-fo
The upshot of this is that if you use static binding, i.e. you point your compiler at the DLL, the ClassID of the dependency is compiled into your program, and you're guaranteed to get that version or later loaded at runtime.
If, on the other hand, you use late binding (CreateObject() in VB), the ClassID is determined at run time by checking the registry. If multiple copies exist, I think you get the one with the newest version string.
Re:Helping the community? (Score:2)
But then I started thinking about it. Eight of the libraries were X11 libraries. And one other library was Qt, which is the equivalent to seven or eight libraries in the GTK world.
Obviously there are two different library philosophies here. One philosophy is to bundle all of the functionality into a single massive library, like Qt. The other philosophy, used by X11, GTK and others, is to split the functionality amongst smaller libraries. For example, the KDE world does not have the equivalent of libxml, since that functionality is already rolled up into Qt.
Neither of these philosophies is wrong. A few large libraries or many small libraries -- both have their advantages and disadvantages. Personally, I like being a Qt user. I don't have to worry about dependencies or about one module getting out of sync with another. But I'm also a Qt developer, and I often wish I could use just one small module from Qt without having to use all of it.
As long as GnuCash can keep its dependencies under control, then I don't have a problem with it using 60 libraries.
Re:Solution: Mix Dynamic and Static (Score:2)
Once the library used by the program is less bleeding edge and has a more stable API, then dynamic linking would be more appropriate.
Solution: Mix Dynamic and Static (Score:5)
Things have changed. While the larger, most common libraries (GTK, QT, glibc, X libs, gnome and kde libs) should remain dynamic, it would be helpful for binary packagers to statically link the smaller and more obscure libraries, especially if they are using a bleeding edge version that is not even in the most current distributions.
With a combination of static and dynamic linking, you'll achieve the majority of the benefit of shared libs because your largest and most common libs will be dynamic, but you'll be able to avoid much of the DLL hell and upgrade hell that accompanies packages that use bleeding edge libraries.
Re:It's not DLL hell that makes Windows unreliable (Score:2)
It's not DLL hell that makes Windows unreliable (Score:4)
How should we go about dealing with the multitudes of shared libraries without driving ourselves mad or descending into the DLL Hell that makes Windows machines so unreliable?
DLL hell is a small part of the problems Windows faces, but most of the better programmers started either putting their libraries in their app directory or statically linking... Most every app on my Windows machine can be uninstalled by deleting a directory. But don't blame Windows instability on DLL hell. DLL hell is just another symptom of the same thing that causes the instability.
What makes Windows boxes unstable, plain and simple, is faulty drivers and applications. Out of the box, the NT series has been rock-solid ever since 1.0 (version 3.1). The Windows 9x series has also been way more reliable than its reputation suggests. Drivers provided directly by Microsoft have traditionally been very stable, even if not very feature-rich. The drivers provided by third parties, however, tend to suck overall. I would estimate 50% of the instability problems I see are related to VIDEO drivers.
The big thing people forget when they compare stability of Windows and other OSes is that in monolithic kernels, the drivers are provided by the guys that know the kernel, thus are typically more stable. I cannot say the same for many Windows drivers. In addition, something like a FreeBSD web server is hardly comparable to an end-user Windows machine, yet this is always the example held up by the anti-Windows crowd. Add MP3 software, graphics apps, kids' games, a crappy sound and video card, and all the other stuff people put on user machines and then see how stable the other OS is.
I'm not blind. I know that on the whole Windows boxes are not so stable. I'm a professional Windows developer. I can say from first-hand knowledge that the bulk of the problems with Windows are due to lazy, unknowledgeable, or sometimes hurried and overtaxed programmers. It's a real problem. I also keep a FreeBSD boot around and I'm very pro-GNU/Linux and especially pro-FreeBSD. But I program Windows as my work, and know that the instability blamed on Windows itself rarely has anything to do with code written by Microsoft.
Re:It's not DLL hell that makes Windows unreliable (Score:4)
I was referring to a clean box, and said so clearly in my original post. There are plenty of sloppy coders working in the apps division of MS. The MDAC installs are among the most awful installs of anything available. No argument from me there.
If you can't expect Microsoft themselves to be able to write software that doesn't crash windows how do you expect your average VB programmer to?
I don't :). The average VB programmer is a beginner and doesn't know much more than drag-and-drop window building and basic event-handling. I'd guesstimate that fewer than 5% of VB programmers understand how Windows actually works. I'll also say that most VB apps can't crash Windows, either, because they don't have access to anything privileged.
The fact of the matter is that it's awfully difficult to write an app for windows that won't take down the system occasionally, if for no other reason than that you are using DLLs and you have no idea what's in them. How many apps depend on wininet.dll or the vb runtime or MSVCxxx DLLs?
Bullshit. I write system services and plenty of system-privilege apps that don't crash Windows. The *app* may crash until I get it finished, but that's not what you said. Anyone writing code that depends on wininet or the MSVC runtime and MFC dlls, frankly, is asking for what they get. But if you use the interfaces correctly, these don't crash Windows either. There are bugs in the common libraries, to be sure, but that can be said for common Linux libraries as well, can't it? Of course the massive advantage with most Linux libraries is that the source is available. But my original post wasn't focused on the availability of source.
You know I went to install the MS soap toolkit the other day and it insisted that I install IE5.5 and some service pack or another, then the rope.dll wouldn't register properly so I had to go download a later version and register it by hand. Just to be able to work with XML I had to download over thirty megabytes of stuff. So If I write an app using this toolkit is it my fault if the app crashes or is it possible that somewhere along that 30 megabytes of crap I installed on my machine something was broken?
Yep, it is your fault. All that shit is not necessary to work with XML. There are some reasonably good third-party libs that don't require all that crap. You apparently know as well as I do that developers using all that crap (besides the existence of that crap itself) are the cause of the problem. The most useful frame of mind I've found for writing Windows apps is "think small". Write with the fewest dependencies and libs possible. That pretty much rules out developing in VB or using MFC, and means avoiding the C runtime if possible. ATL is an excellent library, though, and there are some outstanding compilers out there if you prefer BASIC (check out http://www.powerbasic.com [powerbasic.com]).
Re:Why doesn't Linux adopt a Mac OS X type scheme? (Score:3)
Shared libraries are treated much the same way - they go in folders called 'OpenGL.framework' or whatever. They contain the library, the headers, other resources, and can contain multiple versions of the framework, which are versioned with a major/minor scheme - minor upgrades don't contain API changes, while major versions do - this way, programs can still use the right version of a framework if an incompatible one is installed, but can still benefit from framework upgrades.
I really do wish Linux and other UNIXes would move to this scheme - it's really nice.
Re:You wanna talk hell... (not NeXT style) (Score:5)
Most shared libraries on NeXT/Mac OS X are installed as frameworks. These frameworks are basically directories with
Each framework contains a folder called Versions - this folder contains all the installed versions of a framework, which includes shared libraries, the headers necessary to use them, and any other necessary resources. They are versioned with a major/minor numbering system - minor versions get bumped for compatible upgrades, and major versions get bumped for API changes and the like. Programs know what version they were linked against, so if you install a new, incompatible version, the program can still use the old version. This pretty much eliminates DLL hell - you can install new versions without breaking old stuff.
Apple's frameworks go in
It's a really cool system, and makes a lot of sense. Unfortunately, there seems to be a trend in Apple to install many unix shared libraries the regular way instead of as frameworks, to increase compatibility - makes many more things 'just compile'. I'd be much happier if more unix systems went to frameworks instead.
Re:Difference from Windows... (Score:5)
First, some background - I'm a Windows developer, and have been since Windows 3.0 (programming for a living since '82), and I even played around with development before that.
Windows, when looking for a NON-OLE DLL, first looks in MEMORY, then in the local application directory, then the Windows dir, then the Windows system dir, then the app path. When loading an OLE-type DLL (as most are today), it looks where the REGISTERED version is, and ignores all other copies on your system.
Putting the NON-OLE DLL in your application's directory works fine, IF you ONLY run that one app at a time. What happens if you run the app that uses the OLD version of the DLL FIRST, then load the second app at the same time? RIGHT -- the DLL is already in memory.
What is supposed to be done, and what I've spent years yelling at some of my co-workers about, is this:
YOU NEVER BREAK BACKWARDS COMPATIBILITY WITHOUT RENAMING the DLL. It will bite you in the ass. Your install program should NEVER overwrite a newer version of a DLL with an older version.
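Here's roughly what that version check looks like (my own sketch, made-up paths; build against the Win32 SDK and link with version.lib):

    /* vercheck.c -- refuse to clobber a newer foo.dll with an older one */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* returns nonzero and fills in the packed 64-bit file version on success */
    static int get_file_version(const char *path, ULONGLONG *version)
    {
        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeA(path, &handle);
        if (size == 0)
            return 0;

        void *data = malloc(size);
        VS_FIXEDFILEINFO *info = NULL;
        UINT len = 0;
        int ok = data != NULL
              && GetFileVersionInfoA(path, 0, size, data)
              && VerQueryValueA(data, "\\", (void **)&info, &len)
              && info != NULL;
        if (ok)
            *version = ((ULONGLONG)info->dwFileVersionMS << 32)
                     | info->dwFileVersionLS;
        free(data);
        return ok;
    }

    int main(void)
    {
        ULONGLONG installed, shipped;

        if (get_file_version("C:\\Windows\\System32\\foo.dll", &installed) &&
            get_file_version("foo.dll", &shipped) &&
            shipped <= installed) {
            printf("an equal or newer foo.dll is already there -- leave it alone\n");
            return 0;
        }
        printf("ok to copy our foo.dll into place\n");
        return 0;
    }

Twenty-odd lines of checking, and yet so many installers skip it.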
The BIG problems are the following:
1) Developers writing non-backwards-compatible DLLs - Crystal Reports is famous for this. CTRL3D32 is also an example.
2) MANY companies think that running install programs is too much work, because their in-house developers come out with "The Version Of The Day", so they use batch files to overwrite DLLs/programs without checking versions or what is in the registry.
Folks, MSFT has said "Don't Do That" for years. Unfortunately, they don't prevent you from doing this. That is why XP is supposed to (last I heard) load a different copy of the DLL for each application - then the idea of "DLL in the application directory" works great.
Shared libs will cause this same descent into hell if we're not careful to prevent this problem. The answer is NOT "Make sure you install the latest RPMs (debs)", assuming that they will be backwards compatible - this WILL fail. It's the same answer Microsoft took - "If you do it right, it works". The problem is, when you become the OS that sits everywhere, people WILL do it wrong, and then complain.
Re:Installing Free Software (Score:3)
In some situations this might be a bad thing. But in a great many situations it is actually a good thing. Indeed, there is a whole industry providing tools to make it difficult to install programs under Windows. On the vast majority of corporate systems you explicitly don't want end users installing apps, any more than you want them customising their company cars or taking woodworking tools to their desks.
Re:Why doesn't Linux adopt a Mac OS X type scheme? (Score:3)
If a directory name starts with a '!', it's an application directory (this could have been done better, but it's not too bad). Inside is a file called !Boot, which is run when the directory is first 'seen' by the filer. This sets up filetypes and other info for auto-running the application if a document is run for instance.
There's a !Sprites file which provides icons for the application in the filer window. There's also a !Help file which gives info on the application.
Finally, there's a !Run file which is run if the application is doubleclicked.
This makes it very easy to keep multiple copies of the same application around. Installation consists of copying the app-directory onto your hard drive. Uninstall involves just deleting it.
Things can get more complicated if it starts using shared modules, but I never had problems with backwards compatibility with these.
Re:Difference from Windows... (Score:3)
So, the REAL problem with DLL Hell is simply that people are installing DLLs into the system directory instead of with their applications.
Re:Installing Free Software (Score:3)
You can (mostly) under linux, if you're using the right distribution.
apt-get -y install gnucash; gnucash
Type that into any command prompt or whoknowswhat run dialog, and gnucash will automatically be installed and run. Oh, you have to be 'root' to do it, I'm sorry. Linux has this nice thing called 'permissions', so that you can't break anything unless you've logged in as root.
No need to show folders, or double-click anything, etc... Debian's Advanced Package Tool will automagically put an icon in Start->Programs->Applications->GNUCash, so it's even easier than what you're describing.
I'd love to spend some time wiring up a few scripts to "apt-cache search", "apt-cache show", and "apt-get install", with a nice GUI interface, so that it *would* be as easy as double-clicking. Ahhh... a project for another day.
--Robert
Re:Why DLL Hell exists on Windows... (Score:3)
Re:Libraries: Harken to the Bad Old Days (Score:5)
Well, not a bad idea totally. It'd help for DLL-hell type problems, but let's raise a few points:
1) Firstly, decent packaging makes DLL-hell much less likely. I use the experimental variant of Debian, and even then, there are rarely library dependency problems. The problems that do arise are usually easily fixable, as opposed to most of the DLL-hell problems that Windows has (i.e. two applications requiring incompatible versions of the same library).
2) Secondly, Linux (as a *nix variant) allows one to have multiple library versions installed at the same time, without trouble. That was one of the design considerations.
3) Security. The main reason for shared libraries isn't space-saving (as you imply), but rather security and stability. The FooBar application relies on library Wingbat. So does app YooHoo. Now, the Wingbat library has a security hole that was just found. Oops! Well, a patch is released, packages are made and sent out. You upgrade (or your computer does it automatically), and poof - all of a sudden, YooHoo, FooBar, and all the other apps that use the Wingbat library are more secure. Ditto with stability. The Wingbat library has a nasty bug which causes crashes. Okay, a fix is made, packages are installed, and now neither the YooHoo nor the FooBar apps are susceptible to that particular bug.
Anyways, I just wanted to say that the main reason for shared libraries isn't really the space issue. Nor is it a performance thing. It's a quality thing.
Barclay family motto:
Aut agere aut mori.
(Either action or death.)
Hehe (Score:5)
Load libraries by checksum, not version number (Score:3)
A better alternative is to load libraries by checksum instead of version number - the library's filename would be its checksum, which would be verified by the loader. If two apps were compiled against exactly the same version of a library they would share the same copy in memory, but if one was compiled against 0.8.1 and the other against (supposedly compatible) 0.8.2 they would each load their own version into memory. This approach would waste memory in some cases, but you would never see your apps behaving unexpectedly because of changes to the underlying libraries. I think the memory tradeoff would be worthwhile for the increased stability and, of course, the smug sense of superiority. ("DLL hell? Oh please. Don't tell me you're still loading shared libraries by filename on your OS!?")
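A toy sketch of the loader side of this (my own; the path and checksum are made-up values, and FNV-1a is just a stand-in for whatever real checksum such a scheme would use):

    /* hashload.c -- only load the library whose bytes match what we were
     * built against.  Build with: gcc hashload.c -ldl */
    #include <dlfcn.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t fnv1a_file(const char *path)
    {
        uint64_t h = 1469598103934665603ULL;
        FILE *f = fopen(path, "rb");
        int c;

        if (!f)
            return 0;
        while ((c = fgetc(f)) != EOF) {
            h ^= (unsigned char)c;
            h *= 1099511628211ULL;
        }
        fclose(f);
        return h;
    }

    int main(void)
    {
        /* at build time the toolchain would record the checksum of the
         * exact library the app was linked against; these are made up */
        const char *path = "/usr/lib/libfoo.so.0.8.1";
        uint64_t wanted = 0x0123456789abcdefULL;

        if (fnv1a_file(path) != wanted) {
            fprintf(stderr, "%s is not the library this app was built against\n",
                    path);
            return 1;
        }

        void *h = dlopen(path, RTLD_NOW);
        printf("loading %s: %s\n", path, h ? "ok" : dlerror());
        return h ? 0 : 1;
    }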
--
Yea but what? (Score:3)
I just installed a Debian GNU/Linux system with 3 floppies (not CDs, not DVDs, but 3 1.44 FLOPPIES) and a network connection.
Once the system is up, I have access to, what is it now, over 6000 packages?
I hate to say this, but the network really *is* the computer, if you take advantage of it.
Re:static goddamnit! (Score:3)
If you use shared libraries, each gets loaded into memory ONCE. This is particularly good for something like the GNOME or KDE desktop. Do you really want 13 copies of the KDE libraries loaded into RAM because you used statically linked binaries??
Anyway, I wouldn't sweat it. This will not become Windows. Open source apps do not install new versions of existing libraries, and libraries increment version numbers when they break compatibility.
I haven't even thought about this in a long time since apt began taking care of it for me.
The solution is old news (Score:5)
Unix solved the problems that constitute true ".dll hell" ages ago, and linux uses the solution. It's not the system's fault that some writer doesn't understand what he's talking about.
If you want to use the new gnucash, you need the libraries it relies on (or a statically compiled version, which may be a workable alternative for a short time but is really an inferior solution in the long run.) This does not constitute .dll hell at all. Dll hell is when two applications on a windows machine require different versions of a shared library. Only one will work, and that is a problem. But on a *nix machine, there is a little something called library versioning that eliminates that problem entirely. Installing the newer libraries gnucash needs will not cause other applications to quit functioning. No .dll hell. Just another misleading slashdot headline.
"That old saw about the early bird just goes to show that the worm should have stayed in bed."
Re:Relax. (Score:5)
The other week I had a go at building and installing GNOME under Solaris so as to get a certain piece of scientific software running on our Sun Solaris systems (which range from versions 2.6 to 8).
I first went to the GNOME site and downloaded the libraries the web pages told me I needed... plus everything else from the stable directory. I thought that as I'd read the instructions and downloaded all of the stable source I would be home and dry; it'd just need a lot of configure, build and install cycles.
Little did I know that most of the libraries in the "stable" branch required a multitude of libraries from the unstable one, many of which didn't play well in the configure scripts (they assumed that
Basically, after a week of trying to build things, finding I needed yet another library or utility from an unstable branch or from an obscure web site somewhere, I managed to build enough of the core libraries to build the application.
And before anyone says "Why didn't you download a binary version?" -- I did look into this, but the version on SunFreeware was designed as a single-machine installation, not a network-wide one. The recently released Sun beta version of GNOME 1.4 is only for Solaris 8, AND half the libraries needed by the scientific application were newer than in the GNOME 1.4 "release."
If the library APIs aren't standardised soon and kept stable, then people will shun such things as GNOME. For someone tinkering on their own system it may be fine to play for weeks getting things working, but in the "real world" this is just untenable.
Re:static goddamnit! (Score:3)
Check out http://www.gnu.org/software/libc/FAQ.html#s-1.17 [gnu.org], and try objdump -T sometime.
This is why 'Stable' distributions exist (Score:5)
I think this is one of the primary hurdles facing Linux's wider adoption. Nobody wants to mess with upgrading/downgrading libraries, and you rarely have to do that stuff in Windows. For example, I have never been able to get the newest versions of galeon, mozilla, and nautilus installed and working at the same time. And, perhaps unrelated, gnumeric and netscape 4.7 no longer work. Of course, it's not impossible to fix, and I'm not trying to sound like I'm whining, but I don't know too many of my friends who are Windows power users & programmers who would put up with this stuff.
Hopefully things will improve when libraries become more stable, and apps move into versions 1, 2, 3, and higher. Then, release cycles should get longer and less drastic, and everything should be easier to use together.
Re:Why doesn't Linux adopt a Mac OS X type scheme? (Score:3)
The RISC OS scheme has security problems on multiuser systems (running !Boot when you look at a directory is not good!) but ROX [sf.net] doesn't use boot files.
Incidentally, ROX uses Gtk and XDND, so it should play nice with GNOME apps.
Re:Why DLL Hell exists on Windows... (Score:3)
That's the way it is now, but it's only going to get worse as Linux gains popularity. For the reason why, let me quote another portion of your post:
(i) Windows developers got lazy and didn't put versions in their filenames. (ii) Windows developers got lazy and put all their application DLLs on the path (ie system directory) instead of in the application directory.
The application developers got lazy. And if/when Linux picks up mass market speed, you can expect the exact same thing to happen.
Re:static goddamnit! (Score:3)
then how come cdparanoia (or any of a dozen other applications) insists on a different version of libc.so.6 than the one I have installed? Silly me, in twenty years of Unix experience I had naively expected that "requires libc.so.6" meant just (and only) that.
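(The usual culprit here is glibc's symbol versioning: the soname is still libc.so.6, but a binary built against a newer glibc references, say, GLIBC_2.1 symbols that an older libc.so.6 doesn't export. You can see exactly which versions a binary demands; the binary name below is just an example and the output is trimmed:)

    # List the symbol versions this binary requires from each library
    $ objdump -p /usr/bin/cdparanoia | sed -n '/Version References/,$p'
    Version References:
      required from libc.so.6:
        GLIBC_2.1  GLIBC_2.0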
Responsibility is the key (Score:4)
Using libraries to add functionality to applications is essential; there can be little doubt here. It is easier, more robust, and better practice to use the standard implementation of a piece of functionality than to attempt to rewrite that functionality yourself. However, there are some important basic principles that must be understood when writing applications that use shared libs, and more particularly when developing them.
In the development of any code project there will be times when the code's design is undergoing rapid change. In the case of a library this means that the API will be constantly changing. In a closed-source environment this does not cause too many problems, because no one ever sees this code. In a cooperative Free Software project the source to a library is always available, so there is a temptation for the application developer to use a development version of a library to get a certain feature. This is the beginning of the road to hell. It is essential that applications never have stable releases that depend on development libraries. I remember some years ago, when gtk+ 1.1 was being developed towards 1.2, many application developers were chasing the development branch because they wanted the features it had. The result was chaos. It became nearly impossible to install many combinations of gtk+ applications because they all required different versions.
On the library development side there is a need for a great deal of responsibility. Library developers need to learn about, and really understand, library interface versioning. The interface version of a library is completely different from the release version number; it is used by the runtime linker to ensure that it is safe to link a binary to a library at runtime. With properly managed interface versions it is quite possible to have many different incompatible versions of a library installed and available for runtime linking. GNU [gnu.org] libtool [gnu.org] has excellent support for library interface versioning, and the libtool documentation explains how to correctly assign interface version numbers to a release.
Large numbers of libraries can be managed effectively and cleanly as long as these principles are understood by both library and application developers, and good practice is followed.
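(For library authors, a sketch of what that looks like with libtool. The -version-info triple is CURRENT:REVISION:AGE, which is deliberately not the release number; the update rules are paraphrased from the libtool manual, and the library name here is made up.)

    # Makefile.am fragment for a hypothetical libfrobnicate
    lib_LTLIBRARIES = libfrobnicate.la
    libfrobnicate_la_SOURCES = frob.c
    # -version-info current:revision:age
    #   source changed, interface untouched     -> revision++
    #   interfaces added (backward compatible)  -> current++, revision=0, age++
    #   interfaces removed or changed           -> current++, revision=0, age=0
    libfrobnicate_la_LDFLAGS = -version-info 3:0:1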
Re:This is why 'Stable' distributions exist (Score:3)
I think you have hit the nail on the head here. If people want to live on the bleeding edge and always have the latest release installed, they will encounter these problems. However, average users who don't really know what they are doing are best sticking to the stable distributions. Anyway, things like Ximian are good at keeping the average user up to date; it has definitely helped new Linux users in my office keep their workstations upgraded with current packages.
We should remember that code reuse is something that makes the Bazaar model of programming so strong: we take code from here and there and save ourselves the time of reinventing the wheel. I don't think there is any problem in relying on many libraries. In fact, I think it is better than developing something that is just complete bloatware.
Also, developers always have to look ahead at future releases of libraries. Some developers may choose to, let's say, use a development release of glib because the functionality is just what they are looking for, thinking that by the time they finish developing and their work becomes stable, the glib they are using will also be stable. Sometimes it doesn't quite work like that. However, don't complain if you want to run the latest version but the requirements are too difficult to meet. The only time you might want to complain, in a polite way that is, is if a vital security bug is fixed in a version you can't upgrade to without breaking everything else on your system. In such a case, developers should really release a patch for the previous version.
Re:Relax. (Score:5)
Re:Difference from Windows... (Score:3)
Also, if you create a ".local" file in your app's directory, Windows will first try and load all shared libs from your app's directory.
For example, "foo.exe".... if you wanna load ALL dlls out of your own directory, just create a file called "foo.local", and problem solved.
-- russ
Huge Misunderstanding (Score:3)
There seems to be a HUGE misunderstanding. Yes, there is a problem with shared libraries. But the problem with shared libraries is minor compared to DLL hell. Shared libraries are a sunburn. DLL hell is hell.
Microsoft DLL hell is largely caused by a deliberate attempt to create copy protection by obscuring how the operating system works. You can't fix the problem because you have no reasonable way of getting the information you need.
This class of problem just doesn't exist with Open Source software. Yes there may be difficulties, but all the information necessary is available. The people who write Open Source software are smart and are solely motivated by the desire to do a good job. You can be sure that they will find ways of solving the problems. See comments #56, #165, #59, and #17 above for ways of solving the problems.
On the other hand, to a commercial monopoly like Microsoft, DLL hell is an advantage. It gets people to upgrade quickly in the hope of having fewer problems.
See #53: "DLL hell would be if gnucash blew away the libs that gnumeric needs, and then reinstalling gnumeric screwed up the libs that nautilus wants, etc."
Re:It's not DLL hell that makes Windows unreliable (Score:5)
>(IE, as root, I can hose a Linux system, no
>matter how stable it is supposed to be).
(/me waving goodbye to the karma..)
Unless the admin has been paranoid / smart / had enough spare time to implement quotas, a generic user on a Red Hat Linux box (and AFAIK, other distros) can crash the whole thing with a one-liner fork bomb. I know; I did it to a co-worker who was relentlessly (and ignorantly) trolling me about how flakey NT is vs. the never-crashes, 200-day-uptime, uber-secure Linux machine he was using as his workstation. A rude thing to do, admittedly, but he doesn't go on about it quite so much now. And he couldn't find a way to do that to my NT machine... running BIND, a dial-on-demand IP gateway & NAT, Apache w/ mod_proxy and mod_perl, a local mail server, plus my generic workstation apps (mail, mozilla, emacs, cygwin apps etc), currently at nearly 60 days' uptime. Nothing special there, I agree, but really NT isn't as bad as some of the zealots would like to believe...
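(For what it's worth, the quota fix is about two lines of config. Exact file locations and whether pam_limits is wired in vary by distro, so treat this as a sketch rather than gospel:)

    # /etc/security/limits.conf: cap processes per user, so a runaway
    # fork()er hits a wall instead of taking the whole box down
    *        hard    nproc   256
    # or, per shell, e.g. from /etc/profile:
    ulimit -u 256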
Incidentally I'm only still on NT whilst I get an OpenBSD config working to provide the network services and get round to Bastillifying my Linux machine. And I'm lazy. And there aren't enough hours in the day...
Back on topic: I moved from RH6.1 to Mandrake 7 and lost Gnuchess & Xboard along the way. The girlfriend complained (she's (50%) Russian, so she really needs her chess practice ;) - "no problem" I chortled, "I'll just grab the source, configure, make, make install..." HA! chess is > 20Mb as src. OK, I'll take an RPM. And then lib hell began... three days later I gave up and told her I couldn't do it. Perhaps I should give Debian another go... whatever, RPM sucks.
--
"I'm not downloaded, I'm just loaded and down"
Re:It's not DLL hell that makes Windows unreliable (Score:3)
A> The OS can only protect itself from applications whose permissions are limited. (I.e., as root, I can hose a Linux system, no matter how stable it is supposed to be.)
B> Windows 9x is a single-user OS, meaning that everything runs as root. We can spend a long time discussing how bad that is, but it was necessary for backward compatibility.
Win9x can't be blamed for its instability; it allows direct access to hardware, for crying out loud. What you *could* blame is the Win9x design, which prevents the system from being stable.
MS did a wonderful job making it semi-stable, though. I mean, I can sit at a 9x machine and not get a BSOD for an hour at a time, sometimes.
--
Two witches watch two watches.
Re:It's not DLL hell that makes Windows unreliable (Score:3)
They *can* run on ring 1/2; the reason they don't is that some processors only have two levels of separation, so NT & Linux (no idea why 9x doesn't implement it, as it's about as platform-specific as possible) are forced to use two levels of separation, kernel (ring 0) & user (ring 3), because they are portable.
Second, ntoskrnl.exe & explorer.exe *do* run in separate spaces. Explorer.exe runs on ring 3, as a user-level process.
(I wouldn't be surprised if notepad.exe would run at ring 0 on 9x, btw)
What you are referring to is USER & GDI. GDI is the graphics primitives layer, which actually translates the drawing calls from the program to the hardware.
USER is where most of Win32 lives; it handles the GUI (a higher abstraction than GDI), Windows messaging, etc. [*]
Because of the way NT is designed, if USER crashes, regardless of where USER lives, NT goes down as well.
Why? Because without USER, all the information that Win32 has goes south and dies.
This has adverse effects on any program that uses Win32 (99.999999999% of them do).
There is nothing I can compare this to in Linux, I believe, as Linux employs a monolithic, rather than layered, model.
On Win9x, there is nothing to "catch" the computer if USER goes kaboom.
* This is not a technical explanation, but it should give you some understanding of how it works.
Third, I didn't say that Win9x is single-task; I said that its design was influenced by the single-tasking nature of DOS.
Because one of the *major* design goals of Win9x was to stay compatible with DOS applications.
This made it impossible to implement real separation of applications from the hardware, as many DOS applications made direct hardware calls.
Memory protection was also a problem.
The single-user aspect of 9x kicks you in the balls when you want to talk about security & separation of tasks.
It's just an extension of the problems caused by DOS compatibility, though.
Fourth, X does make direct hardware calls, and as such is capable of making the machine hang.
--
Two witches watch two watches.
The problem with DLL's (Score:3)
And all this DLL hell stuff... I haven't run into a single DLL problem since Win95 (I haven't run into any problems on Linux either)... And WinXP will have even better core DLL protection and stuff...
By the way, one really (seemingly) cool thing about
Installing Free Software (Score:4)
Installing proprietary software requires putting in a serial number and then installing some updates.
But with Free Software I can always get the latest version, and there's no need for serial numbers.
But installing apps on Linux is more difficult than on Windows.
When I want to install something, what should happen is:
I download its tar file.
I double-click on it to uncompress it.
This shows a Gnucash folder.
It's installed!
I can go in the folder and open Gnucash.
I can then move that folder where I want it.
Why can't Linux do this?
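(It can, more or less, if the packager is willing to ship everything in one directory and launch through a tiny wrapper. A sketch, where the layout, the bundled-libs idea, and the AppRun name are only an illustration in the ROX/RISC OS style mentioned above, not how any real GnuCash package is built:)

    GnuCash/
      AppRun            # the wrapper the user actually runs
      bin/gnucash       # the real binary
      lib/              # private copies of the not-yet-common libraries

    # AppRun:
    #!/bin/sh
    here=`dirname "$0"`
    LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
    exec "$here/bin/gnucash" "$@"

The trade-off is disk space and duplicated copies of libraries to patch, which is the usual argument against this layout.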
Good ideas, but I don't like LD_LIBRARY_PATH (Score:3)
if incompatible libraries are found, the installation process should wrap its binaries in scripts which set LD_LIBRARY_PATH to the necessary compatibility libraries (/usr/lib/compat) -- and they should be linked to _specifically by version_, so that different versions of compatibility libraries don't fight with each other.
Excellent plan. Just so everyone knows, though, LD_LIBRARY_PATH is rarely needed. In this case, it is only needed because the binaries are precompiled. If you ever have to set LD_LIBRARY_PATH, the software should be recompiled correctly!
Neat eye-opening information about LD_LIBRARY_PATH can be found at Why LD_LIBRARY_PATH is Bad [visi.com]
I don't think we're going to see anything analogous to the DLL problem because most shared libraries use explicit versions. But I would love to get rid of the madness of being told to set LD_LIBRARY_PATH to run software I just compiled! All you have to do is set LD_RUN_PATH during compilation. (See that link!) One notes that Perl's MakeMaker system always sets LD_RUN_PATH appropriately when compiling an extension module.
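(Concretely, the difference is one environment variable at link time instead of one at run time for every user forever after. A minimal sketch; the library name and paths are made up:)

    # Bad: every user has to remember this before running the program
    LD_LIBRARY_PATH=/opt/foo/lib ./myapp

    # Better: bake the search path into the binary when you link it
    LD_RUN_PATH=/opt/foo/lib gcc -o myapp myapp.o -L/opt/foo/lib -lfoo
    # (equivalently: gcc -Wl,-rpath,/opt/foo/lib -o myapp myapp.o -L/opt/foo/lib -lfoo)
    ./myapp    # now finds libfoo.so with no environment fiddling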
Windows and Linux Hell (Score:4)