Linux Descending into DLL Hell?

meldroc writes "Or should I call it 'Shared Library Hell'? The top story of LWN.net this week states that the newest version of GnuCash requires sixty libraries(!), many of which are new versions that aren't available out of the box with most Linux distributions. Managing shared libraries, with all their interdependencies and compatibility problems, is a major hassle. How should we go about dealing with the multitudes of shared libraries without driving ourselves mad or descending into the DLL Hell that makes Windows machines so unreliable?" Well, GnuCash 1.4.1 works fine for me, and I feel no immediate need to update to 1.6, the version that needs 60 libraries. But there's still a good point here.
  • by Anonymous Coward
    http://www.linuxbase.org is trying to accomplish this. They have good backing, and if all goes well we should have a bit more of a standard base library set.
  • by Anonymous Coward
    To get into serious trouble, simply disobey one of these rules:
    1. Perform proper versioning on your shared libraries. The simplest rule is: if your new version breaks binary compatibility (on any platform!), change the soname. Normally you would bump the major version number. The versioning abstraction layer provided by GNU libtool is far from perfect, but it does its job.
    2. Make your shared libraries fully self-contained. Many platforms (if not most) provide everything needed to accomplish this. At runtime (and at compile time, for executables or shared libraries that your library is linked into), the system must be able to resolve all of your library's needs. The Executable and Linkable Format (ELF) provides the dynamic tags NEEDED and RPATH for this. The most prominent example of ignoring this rule is libpng: you think you need to say -lpng -lz to link against libpng? You are dead wrong! If libpng were self-contained, the linker would know not only that libpng needs libz, but also where to look for libz (in case it's not in /usr). So there would be no need to say -lz as long as you don't use libz in your own code.
    But there are problems. First of all, GNU ld needs to be fixed: it fails to use the RPATH of shared libraries when looking for NEEDED libraries at link time. I sent patches, but nobody seemed interested.
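    To make the two rules concrete, here is roughly what following both looks like with plain gcc on GNU/Linux. This is a sketch, not a recipe; libfoo, libbar, and the paths are hypothetical:
    # Rule 1: embed a soname, and bump it (libfoo.so.2) when you break ABI.
    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c
    ln -s libfoo.so.1.0.0 libfoo.so.1   # what the runtime linker looks for
    ln -s libfoo.so.1 libfoo.so         # what the compile-time linker looks for
    # Rule 2: make the library self-contained by recording its own
    # dependencies (NEEDED) and where to find them (RPATH):
    gcc -shared -fPIC -Wl,-soname,libbar.so.1 -Wl,-rpath,/opt/bar/lib \
        -o libbar.so.1.0.0 bar.c -lz   # libz becomes a NEEDED entry of libbar
    # A client then links with just -lbar and never repeats -lz
    # (modulo the GNU ld bug described above).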
    So long, happy linking.
  • by Anonymous Coward on Thursday June 14, 2001 @11:21PM (#149400)
    Wrong - you're talking about COM interfaces. Look up "COM" and "DLL" to see what is being discussed here. If MS had in general followed the versioning requirements of COM, perhaps there wouldn't have been a DLL hell. As someone else pointed out, MS merrily changed the entry points of DLLs (note: not the same as a COM interface), which was simply stupid. It caused unimaginable pain for users and installations. Nothing to do with COM and GUIDs.
  • by Alan ( 347 )
    Good point, except that I'm happily running gnome 1.4 under debian unstable and selecting gnucash caused 15 extra packages to be installed:

    libguppi11
    libwrapguile
    libwww-perl
    slib
    scm
    guile1.4
    guile1.4-slib
    libgwrapguile0
    libnet-perl
    liburi-perl
    libhtml-parse-perl
    guile-common
    libnet-telnet-perl
    libhtml-tagset-perl

    This is on a system with a fairly extensive set of apps installed already. I can see how, if an app uses perl for a bunch of its stuff, and guile too, there really is no choice but to install the full set of development files and libs on a system that has no guile stuff on it.

    However, it's 40ish mb of archives that I have to download (Thank $DEITY for fast adsl!).
  • That is pretty cool.

    Question.

    Is it like this?

    \SomeAppDir\!Application
    \SomeAppDir\!Application\!Sprites
    etc.

    Or is it like this?

    \SomeAppDir\!Application
    \SomeAppDir\!Sprites
    etc.

    Just curious... RISC OS sounds fun.
  • When's the last time you saw a "disk full" error?
    What, efficiency isn't worth anything these days?

    But, really, this breaks down under certain usage patterns. On a system like Debian, where package installation is trivial compared to Windows, there are a ton of packages. I currently have 694 packages installed, though a significant number of them are libraries.

    Consider another pattern -- extended environments. Gnome is an instance, as is KDE. I have 12 Python packages installed, and Python by itself doesn't even do anything. I won't speculate on how large Gnome or KDE are.

    I have 41 gnome packages installed (or, at least packages named gnome*). What would happen if I had 41 copies of the Gnome libraries for these applications? What if packages had even greater granularity? What if I get to choose which applets I want installed? What GTK engines I want? Hell, I don't even know how engines could work with 41 copies of the libraries.

    Symlinking identical libraries to save space wouldn't do any good, because that offers nothing better than what I have now (which works just fine, actually) -- where most libraries end up in /usr/lib. In a funny way, I suppose hard links would almost work right.

    On Windows, per-application DLLs kind of make sense. On Windows, people don't have that many applications installed. Each application tends to do a lot on its own. This is partially the influence of commercial tendencies (people don't want to pay for something too small), and partially it's because small applications quickly become unmanageable on Windows. But Debian doesn't have this problem, and RPM-based systems have, well, not too much of this problem. Why "fix" what isn't broken?

    Next you'll have us putting application files in directories named after domain names or company names, to avoid naming conflicts. /usr/applications/org/gnu/gnucash. Uh huh.

  • So what would you _like_ programmers to do, then? People talk about reusing existing code, but then complain when you require more shared libs, due to the fact that you're using someone else's code for some particular functionality instead of writing your own (to redo the exact same thing - why solve the same problem AGAIN?).

    People really need to make up their minds. I know I for one would prefer to reuse someone else's code for common functionality, rather than having to rewrite it myself every time I need it (or static link it, making unnecessarily bloated binaries).
    _____

    Sam: "That was needlessly cryptic."
  • According to the gnucash site, the requirement is for GNOME 1.4 plus 4 other libraries. This is hardly as complicated as the situation is being made out to be in the editorial.
  • ...Another fine example of slashdot respondents not bothering to inform themselves on the issue before spouting off.

    Most of the GNUcash dependencies are satisfied merely by "obtaining latest version of associated desktop". That hardly rates as "including every single doo-dad" or "spans 10 DVD's".

    This is more a matter of a developer using "bleeding edge" libraries rather than creating some perverse octopus.
  • by Sludge ( 1234 ) <slashdot@NosPaM.tossed.org> on Thursday June 14, 2001 @10:07PM (#149413) Homepage

    I agree that code reuse can cause bloated software, in that libraries often have to deal with a very general case of the problem, which takes much more than just coding your particular case by yourself.

    However, I can prove that code reuse isn't always bloat: the ANSI C library on your system. If the ANSI C library were statically linked, there wouldn't be any shared memory regarding it between your processes. When you run 'top' and a process says it takes up a few more megabytes than you thought it would, be sure to check the shared column.

    Saying that code reuse causes bloat is not the whole story. Code reuse serves both sides of the bloat war.

    \\\ SLUDGE

  • by Sludge ( 1234 ) <slashdot@NosPaM.tossed.org> on Thursday June 14, 2001 @09:20PM (#149414) Homepage

    The software requirements [gnucash.org] require "60 libraries" because "The majority of the GnuCash 1.6.0 dependencies are satisfied by Gnome 1.4".

    If major distros don't yet support the libraries of recent software releases, that's fine with me. The push for newer versions should come from bleeding edge software.

    Aside from that, I personally commend the code reuse of GNUCash. Functionality needs to be reused as much as possible: We're working alongside giants. Let's stand on each other's shoulders.

    \\\ SLUDGE

  • hence the vast market for Ghosting and Imaging software.
  • You're wrong. Even the regular XF86_SVGA server must directly access the IO space, and it does so through the kernel. On my Alpha, when I still ran Linux on it, I had nothing but problems with X. A perverse combination of user operations (e.g. a fill or scroll) would completely lock up the machine.

    Yes, there are times when some poorly-written X app kills the server, and there are times when the X server bugs out without killing the OS. But your statement "X and the Kernel are clearly different" is a gross oversimplification, ignoring iovec(), memory-mapped IO, and the like. Even the most lowly frame buffer drivers must directly access the hardware.


    Rev. Dr. Xenophon Fenderson, the Carbon(d)ated, KSC, DEATH, SubGenius, mhm21x16
  • I found it a little bit ironic that I read the lwn.net article about Gnucash yesterday afternoon, and this morning my daily 'apt-get update ; apt-get -dy dist-upgrade' of Debian unstable put a copy of gnucash-1.6.0 on my desktop. Also included in Debian unstable are Evolution 0.10, Mozilla 0.9.1, and Nautilus 1.0.3; they all seem to work together fairly well. It seems that a version of Netscape is also available, but I wouldn't know, I haven't used Netscape in some time.

    Shared libraries are good. And shared libraries are especially good in Linux where it is a trivial thing to have different applications use different versions of the same dynamic libraries. With the next generation of Linux distributions all of these libraries will be included (probably by default), and so installing will be a piece of cake.

  • The difference is that in Linux you can use LD_LIBRARY_PATH, so you can guarantee that your application loads the dynamic libraries it needs. Couple that with a library versioning system that has major and minor revision numbers and that allows you to have several different versions of a library, and you have a system that is basically DLL-Hell-proof.
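
    For instance (a sketch, with hypothetical paths and library names), you can keep an old library version in a private directory and point just one application at it:

    mkdir -p /opt/oldlibs
    cp /usr/lib/libfoo.so.1.0 /opt/oldlibs/
    LD_LIBRARY_PATH=/opt/oldlibs gnucash   # this process resolves libfoo here first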

    The author of the original lwn.net article apparently simply doesn't know what the heck he is talking about. If the Linux system that you are using does not make adding new libraries a trivial undertaking then this needs to be filed as a bug. Gnucash 1.6.0 is a new release, and to run new releases you either have to know how to build and install software, or you have to wait until someone else does it for you.

    To illustrate this, I spent a small portion of time this morning playing with Debian unstable's version of gnucash-1.6.0. I installed it with a simple 'apt-get install gnucash' and it painlessly downloaded gnucash and all of the required libs (that I didn't already have). After nearly an hour of playing things seem to be just fine (and gnucash is much improved over the 1.4 series).

    Dynamic libraries are good. They are even good on Windows, now that Microsoft has finally gotten its act together in Windows 2000 and allows multiple versions of the same DLL to be in memory at the same time. They have been useful on Linux for quite some time.

  • My entire /etc/apt/sources.list consists of two lines:

    deb http://http.us.debian.org/debian unstable main contrib
    deb http://non-us.debian.org/debian-non-US unstable non-US/main

    Notice that I am not using Ximian's packages. I have used them in the past, but unstable has generally had the software I needed without the extra hassle of dealing with Ximian and their sometimes not quite Debian compliant packages. I ended up removing all of the ximian packages some time ago.

    RedHat is a fine choice for a distribution, and Debian isn't for everyone. It seems to me that RedHat probably would be the way to go if you were primarily interested in Ximian's Gnome packages. At least with RedHat those packages are likely to be well tested.

    Either way, there certainly is no evidence of DLL Hell. It is just a case of a program that has a lot of required libraries. To my mind this is the best sort of code reuse, and is definitely a good thing.

  • by Mihg ( 2381 ) on Thursday June 14, 2001 @10:38PM (#149421)

    Linux isn't experiencing anything remotely similar to DLL Hell.

    DLL Hell is when Foo DLL 1.0 and Foo DLL 6.0 are both stored in the file foo.dll (unlike libfoo.so.1.0 and libfoo.so.6.0), and brain-damaged installer programs blindly replace foo.dll version 6.0 with foo.dll version 1.0, thus breaking every single program that depends on the newer version of foo.dll.
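
    A quick illustration (library name and versions hypothetical) of why the Unix naming scheme sidesteps this:

    $ ls -l /usr/lib/libfoo*
    libfoo.so.1 -> libfoo.so.1.0.4   # old binaries keep loading this
    libfoo.so.6 -> libfoo.so.6.0.1   # new binaries load this one
    libfoo.so.1.0.4
    libfoo.so.6.0.1

    Both versions coexist; installing 1.0.4 can't clobber 6.0.1 because they never share a file name.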

    Because so many of these crappy programs exist, Microsoft has made an attempt at fixing the problem by introducing the Windows File Protection mechanism, in which the operating system itself monitors file creations and modifications, looking for these stupid installers as they attempt to replace the new versions of libraries with the old ones. The attempted changes silently fail, and the installer runs along its merry course without breaking things too badly.

  • The developers ALWAYS use the latest and greatest odd little library. The program hasn't worked on SuSE in forever.

    I am not surprised that they chose gnome-1.4, which has been available for all of 6 weeks, as the base for a finance app that would happily work as a plain gtk app (like gimp).

    --
  • Once this software becomes part of a Linux distribution, the package dependencies will take care of those 60 libraries for you. At least that's the case with Debian, if Red Hat doesn't put this in their official distribution it might be a bit harder.

    Thanks

    Bruce

  • I think this idea has been suggested several times here, and it is a good one. However, it does mean the application bundles all the libraries it uses. What you are saying is that the application does not necessarily use those bundled libraries if the same version can be found in a more central place. Personally I feel this could be done with some kind of hash coding of all the read-only pages swapped into memory, thus merging any matches without any version or file matching at all, but what do I know...
  • YES EXACTLY!

    Why the hell is there anything more complicated than this? I really want to be able to test the software without "installing" it.

    Just put the libraries in there, and fix the system so it is trivial for a program to turn argv[0] into its execution location and look in that location for shared libraries first. Yeah, it's different, but it would be far better.
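
    There is, in fact, a loader feature that gets close to this today: the $ORIGIN token in an rpath. A sketch (app and library names hypothetical):

    gcc -o myapp main.o -Wl,-rpath,'$ORIGIN/lib' -lfoo
    # At run time the loader expands $ORIGIN to the directory containing
    # the binary, so <dir-of-myapp>/lib is searched for libfoo before the
    # system directories. Ship the whole thing as one relocatable
    # directory and it runs from wherever you unpack it.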

    Or put the source code in there and when the user double-clicks it pops up a window that says "please wait, compiling gnucash" and then it works and is perfectly adapted to your system. This would be a huge win, and is something closed-source providers just cannot do! Don't worry about the time it takes, people are apparently willing to waste hours running "install" and the compile is nothing compared to that.

  • I have been writing shared libraries for NT for a while, and it does appear to do weird, unpredictable things. I don't bother to figure it out; I just log out and back in (that seems to reset it) and continue.

    I am just making normal DLLs (whatever you get when you "link /DLL" all your .obj files). It does appear it searches $PATH when it needs a DLL (I always have to remember to delete my optimized copy, which is in an earlier directory, before running if I want debugging to work). In this case it works exactly like LD_LIBRARY_PATH (which reminds me, I am just as confused by Linux as by Windows: why isn't this variable *always* needed? Why is there some other mechanism to locate shared libraries?)
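
    For what it's worth, LD_LIBRARY_PATH isn't always needed on Linux because it is only one step in the loader's search order, roughly: the RPATH baked into the binary, then LD_LIBRARY_PATH, then the ldconfig cache, then /lib and /usr/lib. A sketch of the other mechanism (myapp hypothetical):

    cat /etc/ld.so.conf   # extra directories the system trusts
    ldconfig              # rebuild /etc/ld.so.cache after installing a lib
    ldd ./myapp           # show which file each NEEDED entry resolves to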

    However, one nasty screw-up I have seen several times (but can't reproduce at will -- that's NT for you!) is that sometimes the old library gets "locked into memory", so that it refuses to load my new copy. I swear that I have exited, and even deleted, all the executables that use that shared library, but it is still there! It appears that logging out and in fixes it. Also, deleting the library itself works (though that crashes running apps using it -- why aren't these files locked the way NT locks executables?)

    Another annoyance not mentioned here about NT is that you must use a DLL if you want to support "plugins", because all functions the plugins call must be in the DLL. Linux also has this, but you can link your program with "-shared" and it then works like the Unix systems I am more familiar with (though I don't do this here, as the software has to port to NT, and replicating the difficulties of NT helps to keep things portable). For this reason I have to produce a DLL for no good reason at all: only my program uses it, and it is likely only one copy of my program is running at any time. I have heard there is some way to make a "DLL executable" but I can't locate it; just building a DLL out of all the source does not make a runnable program.

    Another huge Windows annoyance is "__declspec(dllexport)". That crap is going to pollute header files in code for ages after NT is dead and buried. What a waste.

  • That link is used to locate the newest version when linking a new program. Already-linked programs have the version number in them and do not use the link (though after that it gets confusing and I do not understand it; there are complex rules to select from many possibilities, and only the first number in the filename must match).

    This can be demonstrated easily by removing those links, programs still work, but you cannot compile programs (or they find the static versions of the libraries).

    You can use symbolic links to get DLL hell, as I will sheepishly admit to having done to get downloaded programs to work. Link the version name of the library the program complained about to whatever version you have, and often the program will work! However, newer things like libc seem to have version information in the file that is checked, so you can't outwit it that way.
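
    For the curious, the trick sheepishly admitted above looks something like this (libfoo hypothetical), when a downloaded binary wants libfoo.so.3 but only libfoo.so.4 is installed:

    ln -s /usr/lib/libfoo.so.4 /usr/lib/libfoo.so.3
    ldconfig   # re-scan so the loader's cache sees the new name
    # Often works; fails exactly when the soname was bumped for a real
    # ABI change -- and versioned symbols (as in glibc) will catch the lie.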

  • Woe is me to defend Microsoft, but MFC really is equivalent to Qt, while the Win32 APIs are equivalent to libc plus Xlib. Before you complain that somebody says "don't use MFC", you should realize that there are plenty of Linux people who say that for reliability you should not use Qt. The same argument applies to both systems.

    PS: MFC is also as big a lock-in for Microsoft as Word is. A huge stumbling block for getting programs ported to Linux is that they are written with MFC or VB. The MFC source code is totally useless for the port, as it calls Win32 stuff (like structured file parsers) that nobody would ever use in any real code, but MFC does so it can hide the implementation.

  • How come nobody has tried some kind of "compression" file system that uses hash codes or something to locate identical blocks of files and maps them to the same place? This would allow thousands of copies of the same DLL to take much less disk space, and would even allow "static linked DLL" where the disk and memory are shared just because the pages are equal, and in general would work with no symbolic links or os support.

    Would this work, or am I just clueless? Is anybody trying this? Never saw anything about it in all the file system discussion.

  • I agree with you 100%

    Too many posters here seem to have no concept of a program that you can "run" without "installing" it first, and go on about how great apt-get is at "installing".

    Listen to the original poster. The steps desired to run a new program are:

    1. Download the file

    2. Double click it.

    3. It's running! No, it is not "installing", it is RUNNING!!!! Get that through your thick heads.

    Also not mentioned is what the user does if they decide the program is crap and don't want it:

    1. They drag the downloaded file to the trash (and empty it, I guess).

    2. THAT'S IT! We are exactly in the same state as they were before they downloaded it. It does not "deinstall", it's one file!

    What is being talked about has nothing to do with "installation".

    If you want "installation" here is how it should work:

    1. The user runs the program

    2. The program somehow shows a question like "do you like this program and want it to appear in your Gnome/KDE startup, and in everybody else's?" (or, for services, "do you want to run it for real, versus the simulation that is running now?"). The user clicks "yes".

    3. It pops up a panel and asks for the root password.

    4. It churns and it is "installed".

    To "deinstall" you do this:

    1. Somewhere in the system directories is a file with exactly the same name. You (as root) take this and put it in the trash. It's GONE!!!!

    2. Somewhat later KDE/Gnome may notice that the menu items don't point at anything and hide or delete them.

  • as root, I can hose a Linux system, no matter how stable it is supposed to be).

    Hell, as a regular user you can hang X and the console badly enough that you can't use it or reset it. I did it yesterday with RH7.1 and Konqueror (sp?). The X server should not be able to crash based on requests from its clients, but it does.
    --
    the telephone rings / problem between screen and chair / thoughts of homicide
  • I mean, is this really the right answer? Do I need 20 copies of the same damned .NET DLL on my disk, one for each application? I think not. I do not consider this an intelligent move at all.

    I know virtually nothing about MS Windows and MS's plans for it, but I do remember the commotion here on /. a year or so ago when MS patented "a new OS feature", basically automatic links. You can have as many copies of the same file on disk as you want, without using more space than one file. When an application wants to change a file, the link is substituted with a real file.

    Managing DLLs was the purpose of this "invention".

    Lars
    __

  • Um, you don't run a multiuser system, do you... the problem is that if this program is statically linked against 60 libraries, it's probably going to use up a fuck of a lot of memory. And, if you have 10 users running statically linked programs, you hurt. Badly. Especially since it's usually not just one program -- imagine Gnome statically linked. There are several resident programs, all linked against X and libc at a minimum (call it an extremely -- almost psychotically -- optimistic 5 megs each for library code, plus the app.) That's a lot of memory for 10 people running a desktop... and they might want to do work too. Shared libraries exist for a reason.

  • by buysse ( 5473 ) on Thursday June 14, 2001 @09:21PM (#149440) Homepage

    The major reason we refer to it as dll hell on Windows is very simple -- there's no concept of a version. App A uses v6 of foo.dll, app B uses v8. It's still named foo.dll. Oops -- the API changed. Hell.

    Unix systems have the concept of a version -- you change the API, you rev the major version of the library. The old one's still there if you need it, but apps will get (dynamically) linked against the version of the library they were originally linked against.

    Yeah, it's a bitch (and a half) to compile all that shit -- I've compiled Gnome (and all dependencies of it) on a Solaris box from sources. It's a pain in the ass. But, as Bruce Perens said in another post, that's the job of the packager -- Ximian, RedHat, the Debian volunteers (thanks.)


  • As a developer, I hate that use of the word "framework". A framework already has a meaning in OO-speak, and not all of the "frameworks" in the Mac system fit that definition... probably very few.

    It's a bad use of a word/phrase that already has a meaning, one that will in the end only confuse young and budding developers. I hate Java's "Design Patterns" (in this case, the naming scheme for JavaBean accessors and event methods) for the same reason -- "Design Patterns" already had a well-used, and mostly well-understood, meaning.
    --
    You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)

  • The problem is a particular instance of the general "continuous maintenance problem", which is now reasonably well understood (i.e., it's not academically interesting any more). This has been solved in particular cases as far back as Multics and, more topically, in the relational database world.
    I have one of Paul Stachour's papers on the subject, although it needs to be OCR'd...

    What's a suitable place/list to discuss this?

    dave (formerly DRBrown.TSDC@Hi-Multics./ARPA) c-b

  • well, if you want a slimmed down version of Linux, try:
    rm -rf /usr/share

    hell, even better, try this:
    rm -rf /home /usr /var

    now you're down to a bare-bones, 10 MB linux install with all the cool commands left (ls, cp, sort, etc.). Pretty cool eh?
  • There are very serious problems with the shared library versioning scheme on Linux.

    I pointed out some of them on the bug-glibc [gnu.org] mailing list:

    • if glibc (or really any shared library, but glibc is fundamental for all apps) is updated (even from, say, 2.1.1 to 2.1.2), then all applications (including things like X-windows) on the system need to be recompiled for guaranteed correct behaviour.
    • If a user has glibc 2.2.2, and downloads an application which was compiled with glibc 2.2.1 but uses some shared library already on the user's computer compiled under glibc 2.2.2, correct behaviour is not guaranteed.
    I haven't had time to propose a solution, but there should be one if you think hard enough...

    The current situation is not acceptable.
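
    You can at least inspect how exposed a given binary is; a sketch (any dynamically linked binary will do, paths may differ on your system):

    objdump -p /usr/bin/gnucash | grep NEEDED   # which sonames it wants
    objdump -T /lib/libc.so.6 | grep GLIBC_     # versioned symbols glibc exports
    # A binary built against glibc 2.2.1 records symbol versions such as
    # GLIBC_2.2; mixing it with libraries built against 2.2.2 is exactly
    # the under-specified case described above.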

  • by VanL ( 7521 ) on Thursday June 14, 2001 @09:41PM (#149447)
    One of the most interesting features I read about in Mac OS X was the way in which they distributed binaries. They have a "package" that appears as one binary object, but it acts (for the loader and linker, I presume) like a directory. Inside are stored versioned libraries.

    Couldn't something like this be done to reduce the clutter? Have "gnome-libs.pkg" which is actually a tar or tgz file. When an application needed to use a library, it would involve an extra step -- extracting it from the tarfile -- but that would only be on first load (after that it would be cached in swap) and the cost to retrieve the file would be minimal.

    On the other hand, the possible gains would be enormous. Packaging would become simple. For most applications, install/uninstall could simply be putting the binary in bin and the libpkg in /usr/lib.

    I guess what I'm thinking of is like GNU stow, just taken further. Why not make those directories into tarfiles?

    Want to make $$$$ really quick? It's easy:
    1. Hold down the Shift key.

  • Problems such as these can be easily avoided:
    • If you're the package maintainer, release source code and statically linked binaries, so that people can use your program without hassle
    • If you're a distribution maintainer, figure out the dependencies and only release problematic versions once all the necessary libraries are available in your distribution
    I would also recommend that projects that need so many libraries do not depend on the latest version of all of them (usually there's no real reason to hard-code such dependencies into configure scripts etc.; it's just the lack of information about incompatibilities between library versions). If you can't ascertain that your code will work with older library versions, or if you're just too lazy to check, stick to these older versions while developing - it's what your users will have!

    (why do open source software authors pay so little attention to their users' needs?)

  • Indeed. GnuCash 1.6.0 is already in Debian unstable, in fact (thanks to the great work of the Debian maintainer).

    Go you big red fire engine!
  • Come on. I'm sure they could have done better. Installing gnucash on my system, not counting the libraries I already have installed, would take 35332k! (yes I actually checked).

    Sure. You can then install Abiword, Gnumeric, Nautilus, Evolution, and Dia in another few megabytes. That's the point of shared libraries.

    Oh, and by the way, have you checked just how much screenshot-heavy documentation gnucash provides?

    Go you big red fire engine!

  • by rpk ( 9273 ) on Friday June 15, 2001 @06:19AM (#149456)
    I've got some experience with this issue, having had to make different versions of DLLs from the same vendor coexist, and perhaps some of you Linux whippersnappers will learn something. Windows DLLs can be used in an upward-compatible manner; it's just that it's hard for most people to understand the issues of DLL compatibility.

    When we're talking about DLL hell, the first thing to keep in mind is that usually we are talking about incompatible changes in behavior or interface in libraries that are *meant* to be shared. Once a software designer decides to expose an interface, he becomes hostage to his installed base. Effectively, as others have pointed out, the name of the DLL becomes its interface identifier. So by convention you should encode the lowest compatible revision number of the interface in the DLL name. You can keep the name the same as long as you are binary-compatible, no matter how much the implementation changes. For example, MFC didn't change much from version 4.2, so when Microsoft came out with MFC 5.0, the names of the DLLs didn't change from MFC42*.DLL because they were still binary-compatible.

    If you need to share some state even between different versions of the same facility (like a list of file handles for different versions of libc), you are going to have to factor the internal state into yet another interface -- and that one had better be evolvable for a very long time.

    The sad fact of the matter is that maintaining binary compatibility is hard, because there are technical details to master, and because of the discipline required to keep all potential clients working without having to rebuild and relink. You can't change the names or signatures of functions, you can't change the layouts or sizes of structures or classes, and old bugs that clients rely on as features become features that have to be supported. If it's a C++ interface, you can't add virtual functions to base classes, although you can add new non-virtual class functions. Of course, you can add new functions. It is also possible to "version" structures for expandability, but this is tricky.

    By the way, while static linking is a way around some DLL problems, one of the cases in which it can't work is when a DLL implements an interface to functionality that requires shared state in the same address space. You can have multiple clients of a DLL in the same address space when both an application and its extension libraries refer to that DLL. This happens a lot with ActiveX controls on Windows (and in-process COM servers in general), and I suspect that this also happens in Apache on Unix as well.

    Both Windows XP and Mac OS X have mechanisms for allowing shared libraries to be hived off so that you can have a private version for yourself, but I don't know much about how they work.

    Note that shared Java classes are much easier to deal with, but it isn't as though interface changes there have no binary-compatibility implications.

    The other way around this problem is to expose the interface(s) in a language-neutral object model that supports interface evolvability, such as [XP]COM or CORBA. Unfortunately, it's not always easy to use these kinds of interfaces, and most of these systems don't offer implementation inheritance (with the notable exception of .NET, which is currently Windows-only and also requires language extensions), which is a handy feature of a C++ class exposed in a DLL.

  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Thursday June 14, 2001 @09:34PM (#149457) Homepage Journal
    You forget the other part of the equation.

    On Windows, the libs are called Dynamic Link Libraries. On UNIX, they are called SHARED libraries. Of course, we all know they are the same thing, but apparently on Windows many people don't understand their purpose.

    Part of the DLL hell is the vast number of them that are unnecessarily created by people who don't understand when static linking will work just fine. I still hear people claim that DLLs magically keep the executable size small. DUH! All it does is unnecessarily chunk up your program, increase file count, and increase loading time.

    So far under Linux I have hardly seen any abuses of this. Shared libraries are generally reserved for genuinely sharable code, and the rest is statically linked the way it should be.

    It sounds like GNU Cash is using shared libs correctly and once the distros catch up we'll wonder what the fuss was about.
  • It sounds like GNU Cash is using shared libs correctly and once the distros catch up we'll wonder what the fuss was about.

    Assuming we remember the fuss. You're giving our collective attention spans way too much credit.
    --

  • Indeed. In fact, the development of GnuCash helped push a couple of those packages to grow new features [there was a lot of back-and-forth with respect to Guppi, for instance, that benefited both projects]. Guile got a few things that GnuCash had been carrying itself moved into the guile distro, IIRC.

    The deps for GnuCash are at the forefront of the state of the art for the GNU desktop, but c'mon: this ain't no half-hour hack; it's one of the more serious desktop applications for Linux, and thus requires some recent libraries. Or, you could just dual-boot and run Micros~1 Money or Quicken...

    ...jsled
  • by jsled ( 11433 ) on Thursday June 14, 2001 @10:54PM (#149464) Homepage

    Hmmm... 2 or 3 GnuCash developers independently post story submissions to /. about how they've released a significant new version [gnumatic.com] of a key Linux application ... one which has the potential to replace some people's last hurdle for switching away from Micros~1 completely...

    And /. decides to reject all those, and instead posts a poor LWN piece [gnumatic.com] which overstates [gnucash.org] a problem that is valid but has little to do with GnuCash itself, and much more to do with the poor state of Linux software installation and package management, and the impatience of users with the package system they're using.

    Thanks Slashdot story selectors! The GnuCash folks did their part and wrote a bunch of code that works really well... ignore it if you must, but don't piss on their efforts.

  • by Lumpy ( 12016 ) on Friday June 15, 2001 @05:00AM (#149467) Homepage
    Ok, that's fine and dandy, but the gnucash developers refuse to release statically built binaries so that the 90% of the world that would try to use it actually can. I was grinning from ear to ear when ximian came out and I was able to help several windows-only friends make the crossover to linux. Now I have to tell them NOT to upgrade their most-used software (like gnucash) because the developers would rather they not have it, or I have to start a linux babysitting group and build everything for everyone. Sorry, it's crap like this that makes the job really hard for us advocates out on the front line getting linux in the faces and on the desks of normal users.

    If you release a version that uses all kinds of new libraries, take 3 more seconds and release a statically linked version so you don't undo every success the advocates have achieved...

    I ran into this gnucash problem 3 days ago, and now I need to tell my small group of new linux users that the newest version is not for them and will probably never be for them. (getting linux there was hard enough, asking them to re-install to redhat 8.0-8.1-8.2-etc to satisfy the whims of programmers? no way.)

    my suggestion? go ahead and use 925 libraries for your program, but if you expect people other than programmers and gurus to use it, make a package that is 100% compatible with current distribution -1 (i.e., subtract a decimal from the distribution release). Sorry to pick on gnucash directly, but it is one of those apps that was able to switch windows users over, so they have more riding on their backs than they really know.
  • Unix systems have the concept of a version -- you change the API, you rev the major version of the library.

    Now if only glibc followed this guideline... :-)
  • The problem in Linux is actually different from simple DLL incompatibilities: it's the lack of an agreed standard.

    For those just tuning in, Linux has the concept of package management, which doesn't really exist in Windows, and which in theory should solve the library problem.

    In practice there are two competing standards: Debian's DEB/APT (which works pretty well), and Red Hat's RPM (which is a mess, and not truly automatic).

    There are solutions that try to solve the problems with RPM (mostly the lack of centralized QA on individual packages), like the distributions themselves (many, many, many distributions), but these cause other problems between different distributions' packages (SuSE RPMs on a Red Hat system are likely to have problems, etc.).

    Some other solutions are surfacing, like Aduva's manager, which uses a central database as well as a client that analyzes the user's dependencies/interrelations on his machine. But that solution is not Free software/GPL/OSS.
    Another problem with these solutions is that typical Linux users are reluctant to use automatic systems on their machines.

    (I'm feeling generous today)
    --------------------------------
  • Tools like apt, Red Carpet, and the Apache Toolbox are doing a good job of handling this problem for me. I constantly run the newest versions of applications without any extra legwork.

    Since the majority of Linux software comes in packages we'll never enter the DLL Hell in the way Windows has because you can always see where each file comes from and how they relate just by bothering to do some legwork.

    Upgrading is no big hassle for the most part. Use something like one of the above-mentioned tools to stay updated, or if you must resort to a program that is non-standard, try something like wget (or in many cases just plain old ftp) to suck all the needed files in.
  • Unix systems have the concept of a version -- you change the API, you rev the major version of the library. The old one's still there if you need it, but apps will get (dynamically) linked against the version of the library they were originally linked against.


    Sure, it works great in theory -- until you realize that many of these shared-library authors haven't bothered to CHANGE that major version number when the API changes! And this has even happened with libc!!!
  • As I look through the System Folder on my Powerbook I notice there aren't a large number of platform-specific tools in there. The drivers are Apple drivers conforming to driver rules, and everything is compiled for PPC. Oh yeah, Mac OS 8 worked on just about every piece of Mac hardware after the introduction of the 603. Linux is pretty much the same way. I've installed binary packages on lots of different computers and never had a hardware-related problem before. If you DO have problems, that's your own fucking fault for using shitty hardware that doesn't conform to specs like everyone else's does. You don't install different C libraries because you've got an Athlon on a Via chipset.

    The dude had a valid point: installation procedures on linux are pretty retarded sometimes and turn people off to using it. Not only that, but it is fairly daunting to have a thousand little apps that do the exact same thing a different way. Do I really need Yet Another (insert function name here)?

    Saying OS X is inferior because it doesn't work on Y platform is retarded. Linux doesn't "support" any more chipsets than x86. All the other platforms are worked on by third parties besides Cox and Torvalds. And lastly, Linux != X (as in windows).
  • I use SuSE 6.2, with lots of stuff I've added on, mostly compiled myself. But I've got one big DLL problem: ImageMagick 5.1.0 requires libjpeg.so.6.1, and breaks on 6.2, while Mozilla & Opera [opera.com] require 6.2, and break on 6.1.

    Since it's only two programs, I've modified the Mozilla startup script and created a startup script for Opera to set LD_LIBRARY_PATH to find libjpeg.so.6.2 instead of 6.1. I've tried upgrading ImageMagick, but both non-SuSE RPMs and the source break, and the SuSE RPM essentially requires upgrading all of SuSE. Oh well - DLL Hell is warm in the winter...
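
    The wrapper amounts to a few lines of shell; this is a sketch of the approach with hypothetical paths, not the exact script:

    #!/bin/sh
    # force this app to resolve libjpeg.so.6.2 from a private directory
    LD_LIBRARY_PATH=/usr/local/lib/jpeg62${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    export LD_LIBRARY_PATH
    exec /usr/local/mozilla/mozilla "$@"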

  • Not the same thing. As many others have said, Linux uses version numbers. If you really wanted, you could have multiple versions of the same libraries on your system. For instance, gtk+ 1.0.x and gtk 1.2.x, or libc5 and libc6, and even more revisions of libc6, like glibc 2.0, 2.1 and 2.2, all on the same system. Get the point?

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Congratulations, Microsoft has successfully duplicated (er, I mean "innovated") the technology of multiple shared libraries and LD_LIBRARY_PATH. They're only how many years late to the party? :)

    Caution: contents may be quarrelsome and meticulous!

  • by ethereal ( 13958 ) on Thursday June 14, 2001 @09:29PM (#149480) Journal

    It's never necessary, unless of course you want one of the cool new features in that library. It doesn't take too many active developers before you're pulling in a lot of new technology and thus a lot of new library dependencies.

    For most people this won't be a problem since their distro will figure it out, and for everybody else that's trying to install RPMs or build from source, take a look at the mailing list archives on gnucash.org [gnucash.org]. If gnucash didn't use brand-new tools like Guppi and Gnome 1.4 libs, much of the cool new stuff in gnucash 1.6 wouldn't be there.

    It's not really DLL hell, since you can have multiple copies of the same library installed for use by different apps. DLL hell would be if gnucash blew away the libs that gnumeric needs, and then reinstalling gnumeric screwed up the libs that nautilus wants, etc.

    Caution: contents may be quarrelsome and meticulous!

  • The real problem with GnuCash 1.6.0 wasn't really its large number of dependencies. It was that the binaries provided didn't seem to be built for a specific platform. It required some new things found in the latest Gnome 1.4 while also requiring an older version of guile, for example. It wasn't too bad to build the RPMs for the latest Ximian platform. I rebuilt them for Red Hat 7.1 against the latest Ximian packages:

    grab them here [dhs.org]

    Incidentally, I think it's very worthwhile to upgrade to GnuCash 1.6.0. It's slick. Play around with the reports for a few minutes and read the excellent documentation. The documentation gives an excellent summary of the principles behind handling your finances and covers the new features in 1.6.0.

  • Worse, when you bring in a shared library, you bring in the whole thing, not just what you need.

    I don't think so. I'm pretty sure that Linux merely memory-maps the libraries when they get linked in. Only the parts that are actually accessed are loaded into memory, where they are cached for later use. It's very fast.
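
    You can watch the sharing happen (a sketch; paths and addresses will vary by system):

    grep libc /proc/self/maps
    # The read-only and executable segments of libc.so.6 show up mapped
    # from the same file in every process using them; the pages backing
    # them are shared, and only the small writable data segment is
    # private per process.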

  • Not all Linux distributions are like that at all. I use Debian GNU/Linux, and when I want to install package foo I just type "apt-get install foo" at a prompt. Done. apt will resolve all dependency issues, download all required packages, install them and configure them, without you having to lift a finger. Just about any piece of software you can think of is available as a package in the Debian system.

    Debian also has guidelines about how packages are built, so that software is installed in predictable places. All packages have a dedicated maintainer (and changelogs) and are generally of higher quality, i.e. they work on the first try.

    Yes, the Debian installer is a real piece of crap, but the Progeny installer is Slicker 'n Snot(tm). Yes, there is a bit less handholding, but it is also less required, as stuff Just Works(tm) out of the box. If you want all the benefits of apt but don't want Debian, however, you can always use the new Mandrake or Conectiva, which have moved to apt.

  • The filesystem does read-ahead caching, as well as using all available memory (save 1-2MB) for filesystem cache. I think it would be inefficient to have free memory sitting around not doing anything for the system. Any performance problems would only occur on the first load of a shared library, and shouldn't be a problem unless you are randomly accessing a lot of libraries (see the LD_BIND_NOW=true trick that KDE uses for a way to work around this).

    Unix-style systems are generally optimized to start lots of small processes very quickly, and small processes are often used where threads are used on other OS's. For example, on Linux process creation time is comparable to (if not faster than) thread creation on NT, and threads aren't scheduled any differently than processes. Sorry, tangent alert.

    I think that in the common case, libraries for whatever you are using will usually be loaded into memory already.
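
    The LD_BIND_NOW trick mentioned above is just an environment variable, usable with any dynamically linked program:

    LD_BIND_NOW=true konqueror   # resolve all symbols at startup, not lazily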

  • What would be really neat is to have the compiler autogenerate a test suite for your app. Say you are a coder and you just made this nifty app Foo. Your compiler figures out what interfaces you need from shared libraries, then builds a test suite to find these interfaces. On the user's machine, the test suite would automatically launch on install, query all installed DLLs for the required interfaces, figure out if these interfaces lead to the expected implementation (kinda like OS fingerprinting), then install your app against stuff that will work. If some interfaces were missing, it would tell the user which versions of which libraries are preferred for the stuff to work. The search bot could even be smart and try to consolidate interfaces, i.e. find and use the libraries which offer the most interfaces, so as to reduce memory footprint. It amazes me that computers can do complicated logic, yet much of the install process requires the user to implement that logic by hand.
  • The application does not need to bundle all the libraries it uses. The framework is, in most cases, separate from the application, though there is nothing stopping a framework from being included as part of the application's bundle. For more information see Framework Versioning [apple.com]. There are probably more docs on the issue, but this should get you started.
  • by vik ( 17857 ) on Thursday June 14, 2001 @11:04PM (#149489) Homepage Journal
    The way around it is for people to build against the lowest version number of GLIBC/libSVGA etc. rather than the latest we-are-the-dogs-bollox version shipping on Freshmeat. It never ceases to amaze me how the compilers of distros seem to go out of their way to make the latest version of the code incompatible with the stable version. Take RH6.x for example, when you were hard pushed to find any new software because it was all in V4.0 RPMs whether it needed the V4.0 features or not! That was only fixed by RH eventually releasing a V4-compatible RPM for RH6.x.

    Vik :v)
  • As does Windows - it's not an 'exclusively Unix' kind of thing. The problem only arises when developers DO NOT follow the convention that you change the filename when the version changes.

    Wrong...it's a problem that arises when developers DO NOT follow the convention and you DO NOT have the ability to alter code. In the Windows world, bad code is uncorrectable.
  • Windows DLLs do have versions. That's what the Class ID is for. When you break binary compatibility, you generate a new ClassID.

    Take, as an example, msado.dll. You want to use the Recordset object. We'll assume there are two versions. Your registry will look something like this:
    (This is from memory, I don't have a win machine in front of me)

    ...
    HKEY_LOCAL_MACHINE\Classes_Root\ADODB.Recordset\ClassID = {aaaaa-11111-blahblah}
    HKEY_LOCAL_MACHINE\Classes_Root\ADODB.Recordset\ClassID = {bbbbb-22222-fooey}
    ...
    HKEY_LOCAL_MACHINE\Classes_Root\{aaaaa-11111-blahblah}\InProcServer32 = C:\Program Files\Common\ODBC\msado15.dll
    ...
    HKEY_LOCAL_MACHINE\Classes_Root\{bbbbb-22222-fooey}\InProcServer32 = C:\WINNT\System32\msado20.dll

    The upshot of this is that if you use static binding, i.e. you point your compiler at the DLL, the ClassID of the dependency is compiled into your program, and you're guaranteed to get that version or later loaded at runtime.

    If, on the other hand, you use late binding (CreateObject() in VB), the ClassID is determined at run time by checking the registry. If multiple copies exist, I think you get the one with the newest version string.
  • At first I was going to rebut this post, since to my mind, 60 libraries for the type of application that GnuCash is sounds absurd. I found that WindowMaker uses only 16 libraries, and Konqueror only 28.

    But then I started thinking about it. Eight of the libraries were X11 libraries. And one other library was Qt, which is the equivalent to seven or eight libraries in the GTK world.

    Obviously there are two different library philosophies here. One philosophy is to bundle all of the functionality into a single massive library, like Qt. The other philosophy, used by X11, GTK and others, is to split the functionality amongst smaller libraries. For example, the KDE world does not have the equivalent of libxml, since that functionality is already rolled up into Qt.

    Neither of these philosophies is wrong. A few large libraries or many small libraries: both have their advantages and disadvantages. Personally I like being a Qt user. I don't have to worry about any dependencies, or about one module getting out of sync with another. But I'm also a Qt developer, and I often wish I could use just one small module from Qt without having to use all of it.

    As long as GnuCash can keep its dependencies under control, then I don't have a problem with it using 60 libraries.
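
    For anyone who wants to reproduce the counts, a sketch (assuming the binaries are in your PATH; WindowMaker's binary is typically named wmaker):

    ldd $(which wmaker) | wc -l      # direct plus indirect shared objects
    ldd $(which konqueror) | wc -l

    ldd follows the whole NEEDED chain, which is why the numbers include things like the X11 libraries.
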
  • I agree with this problem, which is why I suggest that static linking be used primarily for the bleeding edge libraries - especially any library that is more recent than the most up to date included in the majority of current distributions.

    Once the library used by the program is less bleeding edge and has a more stable API, then dynamic linking would be more appropriate.
  • by GroundBounce ( 20126 ) on Thursday June 14, 2001 @10:00PM (#149497)
    One of the original reasons behind DLLs in the first place was to save redundant disk space and memory. This is still true, but when DLLs were first popularized on PCs by Windows 2.x (or whatever it was), most machines had a 20-30MB hard drive and 1MB of RAM.

    Things have changed. While the larger, most common libraries (GTK, QT, glibc, X libs, gnome and kde libs) should remain dynamic, it would be helpful for binary packagers to statically link the smaller and more obscure libraries, especially if they are using a bleeding edge version that is not even in the most current distributions.

    With a combination of static and dynamic linking, you'll achieve the majority of the benefit of shared libs because your largest and most common libs will be dynamic, but you'll be able to avoid much of the DLL hell and upgrade hell that accompanies packages that use bleeding edge libraries.
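
    The mixed scheme is expressible in one link line; a sketch with a hypothetical bleeding-edge libobscure:

    gcc -o myapp myapp.o \
        -Wl,-Bstatic -lobscure \
        -Wl,-Bdynamic -L/usr/X11R6/lib -lgtk -lglib -lX11

    Everything after -Bstatic is pulled from .a archives, and everything after -Bdynamic goes back to shared objects, so the oddball library travels inside the binary while the big common ones stay shared.
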
  • You're right that most applications can't be uninstalled that way. That's *another* symptom of the problem. As to the shared dlls in common folders, note I said most of the good developers aren't writing their apps that way. Granted, you may have file associations that need tending. I personally feel Windows ought to handle that by itself, and to a small degree it does with the "Open With" dialog. Other info stashed in the registry generally amounts to exactly squat; it's no more harmful than having an unused config file on a unix box.
  • by Katravax ( 21568 ) on Thursday June 14, 2001 @09:22PM (#149502)

    How should we go about dealing with the multitudes of shared libraries without driving ourselves mad or descending into the DLL Hell that makes Windows machines so unreliable?

    DLL hell is a small part of the problems Windows faces, but most of the better programmers started putting their libraries in their app directory or statically linking them... Most every app on my Windows machine can be uninstalled by deleting a directory. But don't blame Windows instability on DLL hell. DLL hell is just another symptom of the same thing that causes the instability.

    What makes Windows boxes unstable, plain and simple, is faulty drivers and applications. Out of the box, the NT series has been rock-solid ever since 1.0 (version 3.1). The Windows 9x series has also been way more reliable than its reputation suggests. Drivers provided directly by Microsoft have traditionally been very stable, even if not very feature-rich. The drivers provided by third parties, however, tend to suck overall. I would estimate that 50% of the instability problems I see are related to VIDEO drivers.

    The big thing people forget when they compare stability of Windows and other OSes is that in monolithic kernels, the drivers are provided by the guys that know the kernel, thus are typically more stable. I cannot say the same for many Windows drivers. In addition, something like a FreeBSD web server is hardly comparable to an end-user Windows machine, yet this is always the example held up by the anti-Windows crowd. Add MP3 software, graphics apps, kids' games, a crappy sound and video card, and all the other stuff people put on user machines and then see how stable the other OS is.

    I'm not blind. I know that on the whole Windows boxes are not so stable. I'm a professional Windows developer. I can say from first-hand knowledge that the bulk of the problems with Windows are due to lazy, unknowledgeable, or sometimes hurried and overtaxed programmers. It's a real problem. I also keep a FreeBSD boot around, and I'm very pro-GNU/Linux and especially pro-FreeBSD. But I program Windows as my work, and know that the instability blamed on Windows itself rarely has anything to do with code written by Microsoft.

  • by Katravax ( 21568 ) on Friday June 15, 2001 @12:55AM (#149503)

    ...There has never been an MDAC released by Microsoft that didn't crash windows? IE never crashed windows? MS-Office never crashed windows? Give me a break.

    I was referring to a clean box, and said so clearly in my original post. There are plenty of sloppy coders working in the apps division of MS. The MDAC installs are among the most awful installs of anything available. No argument from me there.

    If you can't expect Microsoft themselves to be able to write software that doesn't crash windows how do you expect your average VB programmer to?

    I don't :). The average VB programmer is a beginner and doesn't know much more than drag-and-drop window building and basic event-handling. I'd guesstimate that fewer than 5% of VB programmers understand how Windows actually works. I'll also say that most VB apps can't crash Windows, either, because they don't have access to anything privileged.

    The fact of the matter is that it's awfully difficult to write an app for windows that won't take down the system occasionally, if for no other reason than that you are using DLLs and you have no idea what's in them. How many apps depend on wininet.dll or the VB runtime or the MSVCxxx DLLs?

    Bullshit. I write system services and plenty of system-privilege apps that don't crash Windows. The *app* may crash until I get it finished, but that's not what you said. Anyone writing code that depends on wininet or the MSVC runtime and MFC DLLs, frankly, is asking for what they get. But if you use the interfaces correctly, these don't crash Windows either. There are bugs in the common libraries, to be sure, but that can be said for common Linux libraries as well, can't it? Of course, the massive advantage with most Linux libraries is that the source is available. But my original post wasn't focused on the availability of source.

    You know, I went to install the MS SOAP toolkit the other day and it insisted that I install IE 5.5 and some service pack or another; then rope.dll wouldn't register properly, so I had to go download a later version and register it by hand. Just to be able to work with XML I had to download over thirty megabytes of stuff. So if I write an app using this toolkit, is it my fault if the app crashes, or is it possible that somewhere along that 30 megabytes of crap I installed on my machine something was broken?

    Yep, it is your fault. All that shit is not necessary to work with XML. There are some reasonably good third-party libs that don't require all that crap. You apparently know as well as I do that developers using all that crap (besides the existence of that crap itself) are the cause of the problem. The most useful frame of mind I've found for writing Windows apps is "think small": write with the fewest dependencies and libs possible. That pretty much rules out developing in VB or using MFC, and means avoiding the C runtime if possible. ATL is an excellent library, though, and there are some outstanding compilers out there if you prefer BASIC (check out http://www.powerbasic.com [powerbasic.com]).

  • I agree that the way Mac OS X does things is pretty cool, but you're slightly off. Applications are distributed as folders, but the GUI treats these folders as single objects (although there is a command to open the package). From the command line, they still look like folders. You launch them like 'open OmniWeb.app'.

    Shared libraries are treated much the same way - they go in folders called 'OpenGL.framework' or whatever. They contain the library, the headers, other resources, and can contain multiple versions of the framework, which are versioned with a major/minor scheme - minor upgrades don't contain API changes, while major versions do - this way, programs can still use the right version of a framework if an incompatible one is installed, but can still benefit from framework upgrades.

    I really do wish Linux and other UNIXes would move to this scheme - it's really nice.
  • Actually, NeXT didn't necessarily bundle all the packages inside the app, although some applications did. I never owned a NeXT box, but most of this works the same way in Mac OS X, which I do use.

    Most shared libraries on NeXT/Mac OS X are installed as frameworks. These frameworks are basically directories with .framework suffix, like OpenGL.framework.

    Each framework contains a folder called Versions - this folder contains all the installed versions of a framework, which includes shared libraries, the headers necessary to use them, and any other necessary resources. They are versioned with a major/minor numbering system - minor versions get bumped for compatible upgrades, and major versions get bumped for API changes and the like. Programs know what version they were linked against, so if you install a new, incompatible version, the program can still use the old version. This pretty much eliminates DLL hell - you can install new versions without breaking old stuff.

    Apple's frameworks go in /System/Library/Frameworks, local frameworks go in /Library/Frameworks/, user-installed frameworks go in ~/Library/Frameworks, and application frameworks can go in the application bundle. I'm not really sure of the precedence though.
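
    For the curious, the on-disk layout looks roughly like this (a sketch from memory, so details may vary):

        OpenGL.framework/
            OpenGL -> Versions/Current/OpenGL
            Headers -> Versions/Current/Headers
            Versions/
                A/
                    OpenGL      (the shared library itself)
                    Headers/
                    Resources/
                Current -> A

    The top-level symlinks mean naive tools see an ordinary library and headers, while the Versions folder quietly keeps old majors around.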

    It's a really cool system, and makes a lot of sense. Unfortunately, there seems to be a trend in Apple to install many unix shared libraries the regular way instead of as frameworks, to increase compatibility - makes many more things 'just compile'. I'd be much happier if more unix systems went to frameworks instead.
  • by CharlieG ( 34950 ) on Friday June 15, 2001 @04:11AM (#149525) Homepage
    Doesn't work in Windows, and I've been telling developers this for years.

    First, some background - I'm a Windows developer, and have been since Windows 3.0 (programming for a living since '82), and I even played around with development before that.

    Windows, when looking for a NON-OLE DLL, first looks in MEMORY, then in the local application directory, then the Windir, then the Winsysdir, then the apppath. When loading an OLE-type DLL (as most are today), it looks where the REGISTERED version is, and ignores all other copies on your system.

    Putting the NON-OLE DLL in your application's directory works fine, IF you ONLY run that one app at a time. What happens if you run the app that uses the OLD version of the DLL FIRST, then load the second app at the same time? RIGHT - the DLL is already in memory.

    What is supposed to be done (and I've spent years yelling at some of my co-workers about this) is the following:

    YOU NEVER BREAK BACKWARDS COMPATIBILITY WITHOUT RENAMING THE DLL. It will bite you in the ass. Your install program should NEVER overwrite a newer version of a DLL with an older version.

    The BIG problems are the following:
    1) Developers writing non-backwards-compatible DLLs - Crystal Reports is famous for this, and CTL3D32 is another example.
    2) MANY companies think that running install programs is too much work, because their in-house developers come out with "The Version Of The Day", so they use batch files to overwrite DLLs/programs without checking versions or what is in the registry.

    Folks, MSFT has said "Don't Do That" for years. Unfortunately, they don't prevent you from doing it. That is why XP is supposed to (last I heard) load a different copy of the DLL for each application - then the idea of "DLL in the application directory" works great. The .NET project is also supposed to fix this - we'll see.

    Shared libs will cause this same descent into hell if we're not careful to prevent this problem. The answer is NOT "make sure you install the latest RPMs (debs)" and assume that they will be backwards compatible - this WILL fail. It's the same answer Microsoft gave: "If you do it right, it works." The problem is, when you become the OS that sits everywhere, people WILL do it wrong, and then complain.
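
    For contrast, the ELF soname convention on Linux bakes the rename-on-breakage rule into the file names themselves. A sketch, using a hypothetical libfoo:

        $ gcc -shared -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.3 foo.o
        $ ls -l /usr/lib/libfoo*
        libfoo.so -> libfoo.so.1          (dev link, used at link time)
        libfoo.so.1 -> libfoo.so.1.0.3    (soname link, used at run time)
        libfoo.so.1.0.3                   (the real file)

    A compatible bugfix ships as libfoo.so.1.0.4 and the symlink gets repointed; an incompatible release becomes soname libfoo.so.2 and coexists peacefully with the old one.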

  • by mpe ( 36238 ) on Friday June 15, 2001 @01:14AM (#149527)
    But installing apps on Linux is more difficult than on Windows.

    In some situations this might be a bad thing. But in a great many situations it is actually a good thing. Indeed, there is a whole industry providing tools to make it difficult to install programs under Windows. On the vast majority of corporate systems you explicitly don't want end users installing apps, any more than you want them customising their company cars or taking woodworking tools to their desks.
  • RISC OS has been doing something like this for ~15 years, and is one of the things I really like about it.

    If a directory name starts with a '!', it's an application directory (this could have been done better, but it's not too bad). Inside is a file called !Boot, which is run when the directory is first 'seen' by the filer. This sets up filetypes and other info for auto-running the application if, for instance, a document is run.

    There's a !Sprites file which provides icons for the application in the filer window. There's also a !Help file which gives info on the application.

    Finally, there's a !Run file which is run if the application is doubleclicked.

    This makes it very easy to keep multiple copies of the same application around. Installation consists of copying the app-directory onto your hard drive. Uninstall involves just deleting it.
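
    Putting it together, an app-directory looks something like this (using a hypothetical !Draw as the example):

        !Draw/
            !Boot      (run when the filer first sees the directory)
            !Sprites   (icons for the filer window)
            !Help      (info on the application)
            !Run       (run on double-click)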

    Things can get more complicated if it starts using shared modules, but I never had problems with backwards compatibility with these.
  • by throx ( 42621 ) on Friday June 15, 2001 @10:02AM (#149540) Homepage
    I wasn't sure at the time that it worked on Win9x as well. I just tried it. It does.

    So, the REAL problem with DLL Hell is simply that people are installing DLLs into the system directory instead of with their applications.
  • by Ramses0 ( 63476 ) on Thursday June 14, 2001 @11:34PM (#149560)

    You can (mostly) under linux, if you're using the right distribution.

    apt-get -y install gnucash; gnucash

    Type that into any command prompt or whoknowswhat run dialog, and gnucash will automatically be installed and run. Oh, you have to be 'root' to do it, I'm sorry. Linux has this nice thing called 'permissions', so that you can't break anything unless you've logged in as root.

    No need to show folders, or double-click anything, etc... Debian's Advanced Package Tool will automagically put an icon in Start->Programs->Applications->GNUCash, so it's even easier than what you're describing.

    I'd love to spend some time wiring up a few scripts to "apt-cache search", "apt-cache show", and "apt-get install", with a nice GUI interface, so that it *would* be as easy as double-clicking. Ahhh... a project for another day.
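
    A minimal sketch of such a wrapper (hypothetical and untested - just the shape of the thing):

        #!/bin/sh
        # pkgtool: tiny front-end over APT (sketch only)
        case "$1" in
            search)  apt-cache search "$2" ;;
            show)    apt-cache show "$2" ;;
            install) apt-get -y install "$2" ;;
            *)       echo "usage: pkgtool search|show|install <pkg>" ;;
        esac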

    --Robert

  • by VFVTHUNTER ( 66253 ) on Friday June 15, 2001 @11:18AM (#149565) Homepage
    Not really. Windows programmers get paid to write code, and laziness in this sense means getting the project done quickly and out to market. We in the Linux community write software in our free time because we love it, and do not release things until they kick ass.
  • by dbarclay10 ( 70443 ) on Thursday June 14, 2001 @10:12PM (#149573)
    then the next will really floor you -- applications will keep their own .DLL's in their own application directories. Just like in the DOS days, you will be able to blow an app completely off your machine by deleting its directory, and version differences will become irrelevant.

    Well, it's not a totally bad idea. It'd help with DLL-hell-type problems, but let's raise a few points:

    1) Firstly, decent packaging makes DLL hell much less likely. I use the experimental variant of Debian, and even then, there are rarely library dependency problems. The problems that do arise are usually easily fixable, as opposed to most of the DLL-hell problems that Windows has (i.e. two applications requiring incompatible versions of the same library).
    2) Secondly, Linux (as a *nix variant) allows one to have multiple library versions installed at the same time, without trouble. That was one of the design considerations.
    3) Security. The main reason for shared libraries isn't space-saving (as you imply), but rather security and stability. The FooBar application relies on library Wingbat. So does app YooHoo. Now, the Wingbat library has a security hole that was just found. Oops! Well, a patch is released, packages are made and sent out. You upgrade (or your computer does it automatically), and poof - all of a sudden, YooHoo, FooBar, and all the other apps that use the Wingbat library are more secure. Ditto with stability: the Wingbat library has a nasty bug which causes crashes. Okay, a fix is made, packages are installed, and now neither the YooHoo nor the FooBar apps are susceptible to that particular bug.

    Anyways, I just wanted to say that the main reason for shared libraries isn't really the space issue. Nor is it a performance thing. It's a quality thing.
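
    You can see the mechanics with ldd (the app and library names here are made up to match the example):

        $ ldd /usr/bin/foobar | grep wingbat
            libwingbat.so.2 => /usr/lib/libwingbat.so.2 (0x40020000)
        $ ldd /usr/bin/yoohoo | grep wingbat
            libwingbat.so.2 => /usr/lib/libwingbat.so.2 (0x40020000)

    Both binaries resolve to the same file on disk, so replacing that one file patches every application that links against it.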



    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • by Jailbrekr ( 73837 ) <jailbrekr@digitaladdiction.net> on Thursday June 14, 2001 @09:51PM (#149576) Homepage
    I'll bet the MS developers are rubbing their hands and giggling gleefully right now. "Oh yes, now they understand, now they UNDERSTAND!"
  • by mrogers ( 85392 ) on Friday June 15, 2001 @01:21AM (#149589)
    The whole point of shared libraries is to save memory (and disk space, but that's not nearly as important). If each application loads its own version of a library, you defeat the purpose of using libraries - effectively you'd be statically linking everything. (To get an idea of how much bloat that would introduce, add up the resident set size (RSS) of all of your apps in top).

    A better alternative is to load libraries by checksum instead of version number - the library's filename would be its checksum, which would be verified by the loader. If two apps were compiled against exactly the same version of a library they would share the same copy in memory, but if one was compiled against 0.8.1 and the other against (supposedly compatible) 0.8.2 they would each load their own version into memory. This approach would waste memory in some cases, but you would never see your apps behaving unexpectedly because of changes to the underlying libraries. I think the memory tradeoff would be worthwhile for the increased stability and, of course, the smug sense of superiority. ("DLL hell? Oh please. Don't tell me you're still loading shared libraries by filename on your OS!?")
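
    On disk, the proposed scheme might look something like this (hypothetical paths, checksums truncated):

        /usr/lib/by-hash/d2c1a7...   (libfoo 0.8.1)
        /usr/lib/by-hash/9f07b3...   (libfoo 0.8.2)

    An app built against 0.8.1 records the first checksum in its headers and always gets exactly those bytes; one built against 0.8.2 records the second. Apps built against identical libraries naturally converge on a single in-memory copy.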
    --

  • by knife_in_winter ( 85888 ) on Thursday June 14, 2001 @10:13PM (#149590)
    Never happen.

    I just installed a Debian GNU/Linux system with 3 floppies (not CDs, not DVDs, but 3 1.44 FLOPPIES) and a network connection.

    Once the system is up, I have access to, what is it now, over 6000 packages?

    I hate to say this, but the network really *is* the computer, if you take advantage of it.
  • by blakestah ( 91866 ) <blakestah@gmail.com> on Thursday June 14, 2001 @09:29PM (#149598) Homepage
    RAM usage.

    If you use shared libraries, each gets loaded into memory ONCE. This is particularly good for something like the GNOME or KDE desktop. Do you really want 13 copies of the KDE libraries loaded into RAM because you used statically linked binaries??
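
    You can see the sharing directly in /proc (the PIDs, addresses, and inode numbers here are illustrative):

        $ grep libkdecore /proc/1234/maps /proc/5678/maps
        /proc/1234/maps:40123000-402a0000 r-xp 00000000 03:01 8821 /usr/lib/libkdecore.so.3
        /proc/5678/maps:40123000-402a0000 r-xp 00000000 03:01 8821 /usr/lib/libkdecore.so.3

    The read-only text segment in both processes is backed by the same file, so the physical pages are shared.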

    Anyway, I wouldn't sweat it. This will not become Windows. Open source apps do not install new versions of existing libraries, and libraries increment version numbers when they break compatibility.

    I haven't even thought about this in a long time since apt began taking care of it for me.
  • by Arker ( 91948 ) on Friday June 15, 2001 @05:37AM (#149600) Homepage

    Unix solved the problems that constitute true ".dll hell" ages ago, and Linux uses the solution. It's not the system's fault that some writer doesn't understand what he's talking about.

    If you want to use the new gnucash, you need the libraries it relies on (or a statically compiled version, which may be a workable alternative for a short time but is really an inferior solution in the long run.) This does not constitute .dll hell at all. Dll hell is when two applications on a windows machine require different versions of a shared library. Only one will work, and that is a problem. But on a *nix machine, there is a little something called library versioning that eliminates that problem entirely. Installing the newer libraries gnucash needs will not cause other applications to quit functioning. No .dll hell. Just another misleading slashdot headline.
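
    To see versioning at work, look at any reasonably stocked /usr/lib (version numbers illustrative):

        $ ls -l /usr/lib/libgtk.so.*
        libgtk.so.0 -> libgtk.so.0.99.3     (old apps keep using this)
        libgtk.so.1 -> libgtk.so.1.2.10     (new apps use this)

    Both majors sit side by side; installing one never evicts the other.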


    "That old saw about the early bird just goes to show that the worm should have stayed in bed."
  • by MROD ( 101561 ) on Friday June 15, 2001 @01:16AM (#149609) Homepage
    The problem here is that you're looking at things from a purely Linux-centric viewpoint.

    The other week I had a go at building and installing GNOME under Solaris so as to get a certain piece of scientific software running on our Sun Solaris systems (which range from versions 2.6 to 8).

    I first went to the GNOME site and downloaded the libraries the web pages told me I needed... plus everything else from the stable directory. I thought that as I'd read the instructions and downloaded all of the stable source I would be home and dry; it'd just need a lot of configure, build and install cycles.

    Little did I know that most of the libraries in the "stable" branch required a multitude of libraries from the unstable one, many of which didn't play well in the configure scripts (they assumed that /bin/sh == bash, and a few other Linux-centric assumptions).

    Basically, after a week of trying to build things, finding I needed yet another library or utility from an unstable branch or from an obscure web site somewhere, I managed to build enough of the core libraries to build the application.

    And before anyone says "Why didn't you download a binary version?" - I did look into this, but the version on SunFreeware was designed as a single-machine installation, not a network-wide one. The recently released Sun beta of GNOME 1.4 is only for Solaris 8, AND half the libraries needed by the scientific application were newer than those in the GNOME 1.4 "release."

    If the library APIs aren't standardised soon and kept stable, then people will shun such things as GNOME. For someone tinkering on their own system it may be fine to play for weeks getting things working, but in the "real world" this is just untenable.
  • by rcw-home ( 122017 ) on Friday June 15, 2001 @12:09AM (#149628)
    Libc6 uses symbol versioning. An application originally linked to glibc version 2.0 symbols continues to operate using the glibc 2.0 ABI even after glibc 2.2 is installed. However, an application linked to glibc version 2.2 won't work until glibc 2.2 is installed.

    Check out http://www.gnu.org/software/libc/FAQ.html#s-1.17 [gnu.org], and try objdump -T sometime.
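
    For example (output trimmed and approximate - the exact symbols and columns will vary by system):

        $ objdump -T /bin/ls | grep GLIBC_
        00000000  DF *UND*  00000000  GLIBC_2.0  getenv
        00000000  DF *UND*  00000000  GLIBC_2.1  ...

    Each undefined symbol carries the glibc version it was linked against, so the loader can bind it to the matching ABI even after an upgrade.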

  • by big.ears ( 136789 ) on Thursday June 14, 2001 @09:25PM (#149642) Homepage
    From what I understand, one of the primary reasons for having two or three levels of distribution is to avoid this dependency hell. In Debian, e.g., everything in Stable (should) play nicely together, but the newer stuff is more likely to break other stuff. This means that a lot of the stable stuff is a year or more old, and given Linux's nascent status, if you want something that is usable you have to deal with conflicts.

    I think this is one of the primary hurdles facing Linux's wider adoption. Nobody wants to mess with upgrading/downgrading libraries, and you rarely have to do that stuff in Windows. For example, I have never been able to get the newest versions of galeon, mozilla, and nautilus installed and working at the same time. And, perhaps unrelated, gnumeric and netscape 4.7 no longer work. Of course, it's not impossible to fix, and I'm not trying to sound like I'm whining, but I don't know too many of my friends who are Windows power users & programmers who would put up with this stuff.

    Hopefully things will improve when libraries become more stable, and apps move into versions 1, 2, 3, and higher. Then, release cycles should get longer and less drastic, and everything should be easier to use together.

  • The 10-char limit was removed a while back IIRC.

    The RISC OS scheme has security problems on multiuser systems (running !Boot when you look at a directory is not good!) but ROX [sf.net] doesn't use boot files.

    Incidentally, ROX uses Gtk and XDND, so it should play nice with GNOME apps.

  • by Chester K ( 145560 ) on Friday June 15, 2001 @09:50AM (#149652) Homepage
    Linux is NOT descending into DLL hell, for the simple reason that it has a logical way of maintaining the separation between major versions of DLLs. As long as developers are being good on Linux, DLL hell won't exist the way it does on Windows.

    That's the way it is now, but it's only going to get worse as Linux gains popularity. For the reason why, let me quote another portion of your post:

    (i) Windows developers got lazy and didn't put versions in their filenames. (ii) Windows developers got lazy and put all their application DLLs on the path (ie system directory) instead of in the application directory.

    The application developers got lazy. And if/when Linux picks up mass market speed, you can expect the exact same thing to happen.
  • by peccary ( 161168 ) on Thursday June 14, 2001 @09:39PM (#149665)
    Open source apps do not install new versions of existing libraries, and libraries increment version numbers when they break compatibility.

    then how come cdparanoia (or any of a dozen other applications) insists on a different version of libc.so.6 than the one I have installed? Silly me, in twenty years of Unix experience I had naively expected that "requires libc.so.6" meant just (and only) that.
  • by alriddoch ( 197022 ) on Friday June 15, 2001 @02:05AM (#149698) Homepage

    Using libraries to add functionality to applications is essential; there can be little doubt here. It is easier, more robust, and better practice to use the standard implementation library for a piece of functionality than to attempt to re-write that functionality yourself. However, there are some important basic principles that must be understood when writing applications that use shared libs, and more particularly when developing them.

    In the development of any code project, there will be times when the code's design is undergoing rapid change. In the case of a library this means that the API will be constantly changing. In a closed-source environment this does not cause too many problems, because no one ever sees this code. In a cooperative Free Software project, the source to a library is always available, so there is a temptation for the application developer to use a development version of a library to get a certain feature. This is the beginning of the road to hell. It is essential that applications never have stable releases that depend on development libraries. I remember some years ago, when gtk+ 1.1 was being developed towards 1.2, many application developers were chasing the development branch because they wanted the features it had. The result was chaos. It became nearly impossible to install many combinations of gtk+ applications because they all required different versions.

    On the library development side there is a need for a great deal of responsibility. Library developers need to learn about, and really understand, library interface versioning. The interface version of a library is completely different from the release version number, and it is used by the runtime linker to ensure that it is safe to link a binary to a library at runtime. With properly managed interface versions it is quite possible to have many different incompatible versions of a library installed and available for runtime linking. GNU [gnu.org] libtool [gnu.org] has some excellent support for library interface versioning, and the libtool documentation explains how to correctly assign interface version numbers to a release.
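
    In practice that means setting libtool's current:revision:age triple rather than the release number - a sketch for a hypothetical libfoo, in Makefile.am:

        # interface 3, first cut, still compatible back to interface 2
        libfoo_la_LDFLAGS = -version-info 3:0:1

    Bump "revision" for internal fixes, "current" (and "age") for compatible additions, and reset "age" to zero when you break the interface; libtool turns the triple into the right soname for each platform.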

    Large numbers of libraries can be managed effectively, and cleanly as long as these principles are understood by both library and application developers, and good practice is followed.

  • I think you have hit the nail on the head here. If people want to live on the bleeding edge and always have the latest release installed, then they will encounter these problems. However, for the average user who doesn't really know what they are doing, it is best to stick to the stable distributions. Anyway, things like Ximian are good at keeping the average user up to date; it has definitely helped new Linux users in my office keep their workstations upgraded with packages.

    We should remember that code reuse is something that makes the Bazaar model of programming so strong: the fact that we take code from here and there and save ourselves the time of reinventing the wheel. I don't think there is any problem in relying on many libraries. In fact, I think it is better than developing something that is just complete bloatware.

    Also, developers always have to look ahead at future releases of libraries. Some developers may choose to, let's say, use a development release of glib because the functionality is just what they are looking for, thinking that by the time they finish developing and their work becomes stable, the glib they are using will also be stable. Sometimes it doesn't quite work like that. However, don't complain if you want to run the latest version but the requirements are too difficult to meet. The only time you might want to complain - in a polite way, that is - is if there is a vital security fix that you can't upgrade to without breaking everything else on your system. In such a case, developers should really release a patch for the previous version.

  • by BlowCat ( 216402 ) on Friday June 15, 2001 @06:06AM (#149725)
    Don't just complain on Slashdot. Send patches to the developers. Many developers don't have Solaris or another commercial UNIX to test their software on. What is obvious to you is not always obvious to them. Help yourself. Educate developers. Next time they will think before starting a script with #!/bin/bash. An hour spent by you on bug reports will save an hour for thousands of users of your OS.
  • by rabtech ( 223758 ) on Friday June 15, 2001 @04:53AM (#149732) Homepage
    Windows 2000 can already do this... multiple copies of different versions of a DLL in memory at the same time.

    Also, if you create a ".local" file in your app's directory, Windows will first try and load all shared libs from your app's directory.

    For example, "foo.exe".... if you wanna load ALL dlls out of your own directory, just create a file called "foo.local", and problem solved.
    -- russ

  • There seems to be a HUGE misunderstanding. Yes, there is a problem with shared libraries. But the problem with shared libraries is minor compared to DLL hell. Shared libraries are a sunburn. DLL hell is hell.

    Microsoft DLL hell is largely caused by a deliberate attempt to create copy protection by obscuring how the operating system works. You can't fix the problem because you have no reasonable way of getting the information you need.

    This class of problem just doesn't exist with Open Source software. Yes there may be difficulties, but all the information necessary is available. The people who write Open Source software are smart and are solely motivated by the desire to do a good job. You can be sure that they will find ways of solving the problems. See comments #56, #165, #59, and #17 above for some approaches.

    On the other hand, to a commercial monopoly like Microsoft, DLL hell is an advantage. It gets people to upgrade quickly in the hope of having fewer problems.

    See #53: "DLL hell would be if gnucash blew away the libs that gnumeric needs, and then reinstalling gnumeric screwed up the libs that nautilus wants, etc."

    >(IE, as root, I can hose a Linux system, no
    >matter how stable it is supposed to be).

    (/me waving goodbye to the karma..)

    Unless the admin has been paranoid / smart / had enough copious spare time to implement quotas, a generic user on a Red Hat Linux system (and AFAIK, other distros) can crash the whole thing with a one-liner fork bomb. I know - I did it to a co-worker who was relentlessly (and ignorantly) trolling me about how flakey NT is vs. the never-crashes, 200-day-uptime, uber-secure Linux machine he was using as his workstation. A rude thing to do, admittedly, but he doesn't go on about it quite so much now. And he couldn't find a way to do that to my NT machine... running BIND, dial-on-demand IP gateway & NAT, Apache w/ mod_proxy and mod_perl, a local mail server, plus my generic workstation apps (mail, mozilla, emacs, cygwin apps etc.), currently with nearly 60 days' uptime. Nothing special there, I agree, but really, NT isn't as bad as some of the zealots would like to believe...
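
    (For what it's worth, the usual defence is a per-user process cap - e.g. via PAM's limits module; the number here is arbitrary:)

        # /etc/security/limits.conf
        *    hard    nproc    100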

    Incidentally I'm only still on NT whilst I get an OpenBSD config working to provide the network services and get round to Bastillifying my Linux machine. And I'm lazy. And there aren't enough hours in the day...

    Back on topic: I moved from RH6.1 to Mandrake 7 and lost Gnuchess & Xboard along the way. The girlfriend complained (she's (50%) Russian, so she really needs her chess practice ;) - "no problem" I chortled, "I'll just grab the source, config, make, make install..." HA! chess is > 20Mb as src. OK, I'll take an RPM. And then lib hell began... three days later I gave up and told her I couldn't do it. Perhaps I should give Debian another go... whatever, RPM sucks.
    --
    "I'm not downloaded, I'm just loaded and down"

  • I agree, but you have forgotten two things:
    A> The OS can only protect itself from applications whose permissions are limited. (I.e., as root, I can hose a Linux system, no matter how stable it is supposed to be.)
    B> Windows 9x is a single-user OS, meaning that everything runs as root. We can spend a long time discussing how bad that is, but it was necessary for backward compatibility.
    Win9x can't be blamed for its instability - it allows direct access to hardware, for crying out loud. What you *could* blame is Win9x's design, which prevents the system from being stable.
    MS did a wonderful job making it semi-stable, though. I mean, I can sit at a 9x machine and not get a BSOD for an hour at a time, sometimes. :-D



    --
    Two witches watch two watches.
  • Okay, first, drivers don't have to run in kernel space.
    They *can* run on ring 1/2; the reason that they don't is that some processors only have 2 levels of separation, so NT & Linux (no idea why 9x doesn't implement it, as it's as platform-specific as possible) are forced to use 2 levels of separation - kernel (ring 0) & user (ring 3) - because they are portable.

    Second, ntoskrnl.exe & explorer.exe *do* run in separate spaces. Explorer.exe runs on ring 3, as a user-level process.
    (I wouldn't be surprised if notepad.exe ran at ring 0 on 9x, btw.)
    What you are referring to is USER & GDI. GDI is the graphics primitives, which actually translate the orders from the program to the hardware.
    USER is where most of Win32 lives; it handles the GUI (a higher abstraction than GDI), Windows messaging, etc. [*]

    Because of the way NT is designed, if USER crashes, regardless of where USER lives, NT goes down as well.

    Why? Because without USER, all the information that Win32 has goes south and dies.
    This has adverse effects on any program that uses Win32 (99.999999999% of them do).

    There is nothing I can compare this to in Linux, I believe, as Linux employs a monolithic, rather than layered, model.

    On Win9x, if USER goes down, there is nothing to "catch" the computer.

    * This is not a technical explanation, but it should give you some understanding of how it works.

    Third, I didn't say that Win9x is single-task; I said that its design was influenced by the single-taskness of DOS.
    Because one of the *major* design goals of Win9x was to stay compatible with DOS applications.

    This made it impossible to implement real separation of applications from the hardware, as many DOS applications made direct hardware calls.
    Memory protection was also a problem.

    The single-user aspect of 9x kicks you in the balls when you want to talk about security & separation of tasks.
    It's just an extension of the problems caused by DOS compatibility, though.

    Fourth, X does make direct hardware calls, and as such is capable of making the machine hang.

    --
    Two witches watch two watches.
  • by Strangely Unbiased ( 313686 ) on Friday June 15, 2001 @02:03AM (#149809) Homepage
    As a Windows coder myself, I know a fair bit about DLLs, and I know it's not the way they work that's causing the so-called Windows "DLL hell". It is the fact that Microsoft allowed programmers to include core Windows libraries with their programs to ensure compatibility (and those programs could replace the Windows ones, too). Sure, it's cool to keep your exe size down by dynamically linking a few system libraries, but most people think it's better to shove all these libs into their installer than to static-link (I personally like to build one- or two-file apps). DLLs have their uses (e.g. when you want your program to be easily upgradeable, or as plug-ins), but people, you don't have to use them. It's not cool, it's confusing, and it serves little purpose.

    And all this DLL hell stuff... I haven't hit a single DLL problem since Win95 (I haven't hit any problems on Linux either)... And WinXP will have even better core DLL protection and stuff...

    By the way, one really (seemingly) cool thing about .NET programming is that you can use the .NET DLLs to access a huge number of cool .NET functions for RAD programming. We shall wait and see, though...


  • by Tachys ( 445363 ) on Thursday June 14, 2001 @09:39PM (#149830)
    It always seems to me that Free Software could get an edge with ease of installation.

    Installing proprietary software requires putting in a serial number and then installing some updates.

    But with Free Software I can always get the latest version, and no need for serial numbers.

    But installing apps on Linux is more difficult than on Windows.

    When I want to install something, what should happen is:

    I download its tar file.

    I double-click on it to uncompress it.

    This shows a Gnucash folder.

    It's installed!

    I can go in the folder and open Gnucash.

    I can then move that folder where I want it.

    Why can't Linux do this?
  • if incompatible libraries are found, the installation process should wrap its binaries in scripts which set LD_LIBRARY_PATH to the necessary compatibility libraries (/usr/lib/compat) -- and they should be linked to _specifically by version_, so that different versions of compability libraries don't fight with each other.

    Excellent plan. Just so everyone knows, though, LD_LIBRARY_PATH is rarely needed. In this case, it is only needed because the binaries are precompiled. If you ever have to set LD_LIBRARY_PATH, the software should be recompiled correctly!

    Neat eye-opening information about LD_LIBRARY_PATH can be found at Why LD_LIBRARY_PATH is Bad [visi.com]

    I don't think we're going to see anything analogous to the DLL problem because most shared libraries use explicit versions. But I would love to get rid of the madness of being told to set LD_LIBRARY_PATH to run software I just compiled! All you have to do is set LD_RUN_PATH during compilation. (See that link!) One notes that Perl's MakeMaker system always sets LD_RUN_PATH appropriately when compiling an extension module.
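
    A quick sketch of the difference (paths and names hypothetical):

        # fragile: every user must remember this at run time
        $ LD_LIBRARY_PATH=/opt/foo/lib ./myapp

        # better: bake the search path in at link time
        $ LD_RUN_PATH=/opt/foo/lib gcc -o myapp myapp.o -L/opt/foo/lib -lfoo
        $ ./myapp    # finds libfoo.so with no environment tricks

    (The -Wl,-rpath,/opt/foo/lib linker flag does the same thing explicitly.)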

  • by Bastard0 ( 452998 ) on Friday June 15, 2001 @09:40AM (#149842)
    When I think of Linux applications, I think of lean, mean, server-side programs with no fluff: no paperclips popping up to tell me what to do, just a few text-based config files and raw power. That's how I like it. When I think of Windows, I think of powerful, feature-rich desktop applications (Quicken, Word, Cubase). The interesting thing is that as Linux begins to tread into the desktop arena, it's starting to face some of the same problems that Windows has gotten a lot of flack for. As you add features you add complexity, increase the size of the code, and increase the risk of some kind of flaw or failure. Windows is more bloated/unstable/convoluted because it has so many features. Just look at GNOME. That thing is a true monster which looks like it's just going to continue to grow. Frankly, I haven't seen one desktop application / user environment (including GNOME) on Linux that isn't 10 years behind a comparable application on Windows. Don't get me wrong - I love Linux; I just don't see the point of putting it on my desktop.
