
Bundled Applications for GNU/Linux?

munehiro asks: "As an addicted GNU/Linux and Mac OS X user, I recently tried to install binaries and libraries on a Linux box using an approximation of the elegant and clean approach known as the Mac OS X bundle (everything about each app or lib under its own directory), as opposed to the standard Linux approach of 'everything under a common prefix' (normally /usr or /usr/local), with applications and libraries mixed together in the standard subdirs bin, lib, share and so on -- and I found administration life much easier. What do other, more experienced readers think about the problems and improvements involved in dropping the current Linux approach for a 'bundle-like' one in Linux distributions?"
  • darwinports (Score:3, Informative)

    by russellh ( 547685 ) on Thursday January 13, 2005 @06:57PM (#11354184) Homepage
    I think DarwinPorts [opendarwin.org] is working on something like this. Unless I'm mistaken of course...
    • You are correct.
      Unfortunately this is because you are mistaken.

      DarwinPorts does work on the BSDs and Solaris, as well as on Darwin. The new default installation structure installs files under private directory spaces, but they are not intended to be used from those locations; they must be activated by linking the files into canonical paths for use (e.g. /opt/local or /usr/local).
  • Correct me... (Score:4, Insightful)

    by Doctor O ( 549663 ) on Thursday January 13, 2005 @06:59PM (#11354206) Homepage Journal
    ...but to me it seems that this approach is the way to go. Install/uninstall by cp/rm or drag/drop, whatever you prefer. Resource waste is definitely no argument on today's machines, at least on the desktop.
    • I've worked on OSes that used both methods, as well as others. Either of the two mentioned here is fine. You just spend a few minutes learning how your system does it, and deal with it.

      There are, IMO, much better uses of good engineers' and programmers' time than fighting this battle.

      Any logical approach, that's my preferred approach. And both of these are logical enough.
    • You can emulate both approaches with symbolic links (or even hard links). See Pkglink [unm.edu].
    • Libraries aren't about saving disk space, they're about saving space in memory, and also about having exactly one version of the library on the system, which the packaging software will update as required. (Particularly important security-wise).
      • Ah, ACK. Good point. Although using something jail-esque to allow only the proper app to use the insecure lib would help greatly: an exploit would not only have to find that insecure version but also first gain the rights to run it.

        Yeah, users/groups work fine, but any additional layer of security helps, and it's not so much of a performance hit unless you're in big business. We have a handful of FreeBSD servers containing lotsa jails for users, and they work great. No performance hassles except when
  • It's great to work with... simple as hell. But I wonder: what if a lib needs to be replaced (updated)? Will all the bundles get the new version?
    • I don't think that's necessarily a worry unless people are using a cvs version of the package.

      If (for example) zlib needed updating, you would need to get a new version of the entire package, for every package that used zlib. Or at least a diff between packages. Much more inefficient, but at least this way you're guaranteed the package got tested with the new zlib.

    • So set up a central lib repository with a way for each lib to report its version; the app simply checks the system lib directory to see if there's an updated, compatible version of the library, and if not it defaults to its own copy (a sketch of this fallback appears below). I'm pretty sure OS X uses a similar system (it uses major and minor version numbers to indicate compatibility breaks).
      • Great, but... I already have /usr/lib and /usr/local/lib.

        People whining about a central approach should learn a package manager (rpm/deb/whatever), which works fine.
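      A rough sketch of that fallback as a launcher script (paths hypothetical; a real system would more likely bake the same search order into the dynamic linker via rpath):

      #!/bin/sh
      # prefer the system copy of the library if one is installed,
      # otherwise fall back to the copy shipped inside the bundle
      if [ -e /usr/lib/libfoo.so.2 ]; then
          export LD_LIBRARY_PATH=/usr/lib
      else
          export LD_LIBRARY_PATH=/apps/Foo.app/lib
      fi
      exec /apps/Foo.app/bin/foo "$@"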
    • Well, the libs would be in a different bundle. You would update the lib bundle, and the app would simply load the new lib - assuming that nothing critical has changed, there would be no error.
    • Most such bugs are in libraries that are generally available, like libjpeg.

      The things that are included in an app "bundle" are things like the correct version of Qt. As far as I know there has never been a "security bug fix" of these. Instead all updates are "the new version" which requires the program to be recompiled.

      In any case, if they remain shared libraries, the knowledgeable user can delete the instance from the bundle and it will use the main shared one.

  • I have a Linux fileserver with the system on a 2.5 GB HDD plus 4-5 large HDDs.

    With the present organisation, if two programs must use a lib, they both access this one lib, which is present only once.

    With the bundle system, I have the same lib as many times as I have programs using it. Okay, you can get a 400 GB HDD easily nowadays, but my system only has this poor 2.5 GB for the system... some use even smaller HDDs/flash drives.

    Plus, when upgrading my Debian box (easy apt-get) I have to update one library once, not ev
    • How about the best of both worlds, using hard links? As an added bonus, uninstalling the last application that uses a particular library will remove the library as well, instead of leaving it around as cruft. Of course this requires that the libs and applications exist on the same filesystem.
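      A toy illustration (hypothetical paths; both bundles must live on the same filesystem):

      # both bundles now share one on-disk copy of the library;
      # the data survives until the last link to it is removed
      ln /apps/foo/lib/libz.so.1 /apps/bar/lib/libz.so.1
      rm -rf /apps/foo    # /apps/bar/lib/libz.so.1 still works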
    • You can have the best of both worlds with symbolic links. A tool like Pkglink [unm.edu] can do most of the heavy lifting and also give you the ability to have multiple versions of the software installed.
    • I wouldn't think that to be as much of a problem as downloading gigabytes of crap every time you really need to update just 100 MB...
    • You misunderstand the idea. You would still have shared libraries. Here is the idea (explained through Mac OS):

      You want to install a new piece of software. You insert the CD and its window opens up in the Finder (like Windows Explorer). You open up your hard drive and browse to where you want the program to go (in OS X I think it's "Applications"). You then drag the program from the CD to that folder. But you don't drag the program, plus its data files, plus this, plus that; you drag just the program. One fil

      • You are correct, but you are talking about two different points that play off each other.

        Every Mac file has two parts: one for the data and a resource fork. If the pointer to the data doesn't match the pointer the resource fork records for the data's location, it gets automatically updated. So if you move a file, an alias (which uses the resource information to find the file) starts looking for it.

        Now, OS X doesn't use a lot of separately installed shared libraries from what I can tell; it appears most of them are installed to begin with. So install
      • Just to add a bit of clarity...

        A .app is just a directory. Mac OS X sees the .app extension and treats it as an application. Inside are various directories with names like MacOS and Resources.

        This comes from the NeXTSTEP era, when a single '.app' would run on multiple architectures.

        I also believe RiscOS did something similar... a file with a ! at the start (e.g. !Draw, !Paint, !Browser) would be treated as an application, and would really be a folder with a set of files inside, including a loader, icon, et
      • Yeah, but you still need the shared libraries... so that is one thing where 'rm a dir' would not work.

        Secondly, data is also shared a lot (for example icons). If I update my theme, I want all my programs to pick up the new theme automatically. Etc.

        I really fail to see the advantage of the 'bundled' approach. It just creates a dependency hell of symlinks...

        I do see why the semi-bundled approach on Windows is a mess, but please do not take this to Linux.

        My idea: a package manager. Double clicking on mypro
    • by stuuf ( 587464 ) <sac+sd@atomicrad i . us> on Thursday January 13, 2005 @08:14PM (#11354961) Homepage Journal
      Sounds like what they did with GTK+ on Windows. Apparently anyone who wants to install a GTK+ app (other than GIMP) cannot be trusted to download and install GTK+ first, so they have to bundle it into the installer. So, once you install Gaim, GIMP, Ethereal, GTK Radiant, etc. you end up with 3 or 4 copies of the GTK+ libs scattered around. (The most absurd one I've seen is Ethereal, which stuffs into the installer two versions of the app, one linked with GTK 1.x, the other with GTK 2.x, and both GTK runtime versions, for a plump 17MB installer.) Whenever this approach is used, space is always wasted because of duplicates, and it makes it more difficult to update a shared library without reinstalling each application using it. Installing applications into their own separate locations does make administration easier. One of the only advantages of the current system is that you can have a PATH variable with a finite number of directories (/bin /sbin /usr/bin /usr/sbin /usr/local/bin /usr/local/sbin) and every application is quickly accessible from a shell command line. Now that many programs are launched from a desktop menu instead of the command line, this is not always needed. But bundling libraries with applications usually impedes maintenance and administration. It's also unnecessary, because most package management systems (portage, apt, rpm, etc.) handle dependencies automatically (portage also has the depclean command to remove unneeded library packages).
      • The solution to the path problem is easy.

        Have one directory called "links" or something like that. Each application can create a symbolic link to the main executables in that directory. Then, other than the obvious links to /usr/bin and the like, you just have "links" in your path.

        On my Sun workstation, one tool tells me "absurdly long path truncated" or something like that. But I have to keep the path long in order to get access to all of the tools that I use on a daily basis. Nice.
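        A quick sketch of the "links" directory idea (assuming per-app trees under a hypothetical /apps):

        # one symlink per executable, so PATH needs only one extra entry
        mkdir -p /usr/local/links
        for f in /apps/*/bin/*; do
            ln -sf "$f" /usr/local/links/
        done
        export PATH="/usr/local/links:$PATH"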
  • Hi there,

    GNUstep (http://www.gnustep.org) applications use application bundles as well. This tends to piss off a lot of anal-retentive folk, especially in the anal-retentive Debian Developer reality, but we do it because it ACTUALLY MAKES SENSE. It doesn't make sense to have stuff for one app in ten different non-parentally-unified folders.

    I strongly suggest you check it out, if you've not previously. I'd personally like to see a unified AppBundle Freedesktop standard. Rox also uses AppBundles, as far as
  • This is the way I would like all programs. No more
    chasing around the disk to remove stray bits.

    Same idea as having all user data under one branch so we can back it all up.

    rcb
    • How much harder is it to do "emerge -C program" as opposed to "rm -rf /usr/share/program"?

      And yes, we do like all user data under one branch, so we can back it up. Personally, I also like all config data under one branch, even if it isn't anywhere near the programs that use it, so that it can be backed up.

      Plus, how often do you install/uninstall programs? How often do you use them? With a shared /lib you use less RAM, because you don't have to cache two copies of the same library. But installing
    • You don't have to chase around the disk to remove stray bits. That's what package managers are for. Assuming you have an intelligent packaging system, it can track all the files that come in a software package and install/remove/update them as necessary. The only thing Gentoo's Portage system can't clean up after (that I've seen) is a kernel tree with leftover object files in it. But the only thing you have to do to fix that is "make distclean" before you "emerge -C =your-kernel-source-x.y.z-a".
  • by forsetti ( 158019 ) on Thursday January 13, 2005 @07:10PM (#11354315)
    Gobolinux: http://gobolinux.org/
    Stow: http://www.gnu.org/software/stow/stow.html

    I think you will find that you are not alone ...
  • If someone would make USE of the Library versioning system, and -then- make SANE use of LIBRARIES, then this would be an absolute no-brainer.

    You put your LIBRARIES in a common directory. Everything else that relates to a piece of software goes in its own directory.

    LIBRARIES ARE THINGS THAT ARE MEANT TO BE SHARED BY MULTIPLE SOFTWARES.

    When I go to install some program and find I have to install 8,000 different libraries that are mostly only used by that ONE piece of software... that really farking pisses
    • This is an excellent point. For the people complaining about PATH vars: we just need intelligent path handling. For example, PATH=/software, where /software contains package directories, each with a bin subdirectory; when searching for binaries in the path, look in /software/*/bin.
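      For instance, a shell profile could build the path with a glob (a minimal sketch, assuming the /software layout just described):

      # append every package's bin directory to PATH at login
      for d in /software/*/bin; do
          PATH="$PATH:$d"
      done
      export PATH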
    • Potentially, libraries are not the only thing that is shared amongst programs. Plus, the designer of a program may build a library that is intended to be shared, but nobody comes and shares it -- whose fault is that?
      • I can't think of anything else that should be shared between applications.

        Well, if no one shares it, that's fine. It's still a library, and libraries belong in one place. (although groups of libraries that belong together should be in their own directory from there)

        And instead of having libgtk1.2-62 or whatever the heck I have, and libgtk2.0-2.6.1-121 or whatever it is I currently have, shouldn't I just have

        libgtk1
        libgtk2

        ? At least until interfaces become broken, then oh my god, I might
  • It's called /opt (Score:5, Insightful)

    by LeninZhiv ( 464864 ) * on Thursday January 13, 2005 @07:17PM (#11354394)
    I'm sure many will correct me if I'm not hearing you right, but it should be noted that there is a widely accepted and fully GNU/Linuxy way to have an application housed with its own directory tree (organised however the application wants): /opt.

    The Filesystem Hierarchy Standard also provides /usr/local for cases where the UNIX filesystem hierarchy is adhered to (with /usr or even / used if the software is included in the default distro/UNIX version).
    • It's possible to build and install a package into /opt/packages/packagename-1.2.3.4/[bin][lib][sbin]

      And then symlink anything that it would have installed in /usr/[bin][lib][sbin] back to /usr/local/wherever. It makes removing the package pretty easy: remove /opt/packages/packagename-1.2.3.4 and then check /usr/local for dangling links...

      Then there is only one copy of programs, libraries, and everything else, but it's all symlinked so that each package can be contained within its own dir.
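      Removal then looks something like this (a sketch; GNU find's -xtype l matches symlinks whose targets no longer exist):

      rm -rf /opt/packages/packagename-1.2.3.4
      # prune any links under /usr/local that now dangle
      find /usr/local -xtype l -delete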
    • The way OSX bundles functionality in directories isn't really the same as the /opt or /usr/local trees on GNU/Linux or Unix. The latter are places where users can customize a system with their own software, but it's just a housekeeping matter, really.

      On OSX, on the other hand, you're actually bundling up functionality in a self-contained wrapper: applications go in .app directories, libraries go in .framework directories, installers go in .pkg directories, etc. These bundles can, in turn, contain other bu

  • Global updates (Score:4, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday January 13, 2005 @07:18PM (#11354397) Journal
    To what extreme does this go? For example, where is the standard C library?

    Suppose there's a major security flaw in a reasonably popular library. If each package must keep everything inside its own folders, then the library update only goes to apps which are maintained actively -- and which noticed that the library was updated.

    If, on the other hand, we use traditional UNIX, then one file is replaced in /lib, and at worst we get a warning that something some program is doing with that library is deprecated and will be removed. But this gives the individual program maintainers more time to update, because they don't have to rush things out the door to make the security patch. They have until the next library release to get with the program.

    And resource management DOES matter. There is no good reason that my dad, a commodity/stock broker, needs 512 megs of RAM on his machine -- except for the use of this kind of design. It's not just how much space it takes up on disk: if you have to load glibc fifty times into RAM, you've got problems.
    • Obviously requiring every application to install and load its own copies of base Unix, Carbon and Cocoa libraries would be insane and obviously Apple doesn't do that.
    • To what extreme does this go? For example, where is the standard C library?

      On Mac OS X the C library is in the System framework. Frameworks are bundles that contain shared libraries and support files. Frameworks can be in /System/Library/Frameworks/, /Library/Frameworks/, /Network/Library/Frameworks/, ~/Library/Frameworks/, or inside the application.

      Suppose there's a major security flaw in a reasonably popular library. If each package must keep everything inside its own folders, then the library u

  • Doesn't scale well (Score:2, Informative)

    by Anonymous Coward
    For each application I would have to add an entry to
    PATH and possibly LD_LIBRARY_PATH, either globally or (even worse) in each user's profile file.
    With package management systems such as dpkg and rpm maintaining the /usr hierarchy, I don't think there is any advantage in moving each library/application to
    a different directory. There is even software available that will track where files are placed when locally compiled
    packages are installed. So, where's the advantage? I see lots of
    drawbacks and no real bene
    • For each application I would have to add an entry to PATH and possibly LD_LIBRARY_PATH

      Let's think outside the box =).

      Since there are applications I want to run but can't trust (such as p2p, or anything proprietary), it would be great to partition my little security island (I mean my user account) for each application that I run.

      Thus, when I double-click on the app I want to run (think OS X application bundles), a script takes care of a chroot jail, setting up resource auditing, etc. Nothing need happe
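      A bare-bones sketch of such a launcher (hypothetical paths; no resource auditing, the jail must already be populated, and chroot needs root):

      #!/bin/sh
      # run the untrusted app with / remapped to its private jail
      exec chroot /jails/p2papp /bin/p2papp "$@"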
  • It's sad that it takes Apple to get the obvious going.

    A good system would work as follows:

    Base system (everything needed to get the system up into a usable-at-all state, but no serious apps beyond vi) in /bin /lib /sbin /etc /man.

    The secondary set of basic apps (like grep, find, cut, and so on) in /usr/bin /usr/lib, etc.

    Add on apps (like gcc, emacs, Quake3, etc) in mini hierarchies: /apps/gcc/bin /apps/gcc/lib...

    Common libraries anywhere convenient (probably /apps/commonlibs/) and soft-linked into the
  • by Rysc ( 136391 ) * <sorpigal@gmail.com> on Thursday January 13, 2005 @07:34PM (#11354577) Homepage Journal
    App bundles are okay for some people, but they are not the holy grail most seem to be touting them as here.

    Sure, library updates are a problem. But that isn't why app bundles are a bad idea.

    App bundles are a bad idea because they solve more problems than exist, and cause more problems than they solve.

    For every deficiency in the way UNIX traditionally works there is a workaround. The problems with the system are well known. The system has very few flaws... but one of those flaws is really glaring to desktop users, especially Mac heads.

    Because they see only what is broken and not what isn't, they propose a Mac-like system. The app bundle idea isn't new and it isn't bad, but it does not solve the right problems. It solves one, perhaps two, problems, mostly for one class of users. And, while those problems are being solved, it creates dozens of difficult problems for several classes of users.

    The people who get the new problems tend to be the uber-admins, the developers, and the people who create distros. Those people do not adopt app bundles because, from their point of view, the "sense" that they make is none. In an admin-centric cost-benefit analysis, app bundles nearly always lose to the *nix way.

    If someone could figure out a way to solve the problem that app bundles solve for desktop users without screwing over the admins and developers, distros would convert in droves. Since the existing solution is to "Screw different people, screw more people, just unscrew ME!" no one really feels obliged to comply.
    • 1) Give me an example of these problems you're referring to. I really can't think of any way that app bundles are inferior to the Unix way.

      2) What's wrong with giving the *users* of the system the easiest way of doing things and letting the Administrators and Developers, the people who KNOW computers, do the troubleshooting? The users can't troubleshoot; Administrators and Developers can.
      • Give me an example of these problems you're referring to. I really can't think of any way that app bundles are inferior to the Unix way
        1) Bandwidth and space usage:
        Downloading and keeping around 30 copies of the GTK+ library if you have 30 GTK apps is a waste.
        2) Security and bug fixes:
        If there's a problem in a library you want to just fix it, not wait for each developer to release an update for their application with the new library.
        3) Memory use:
        You don't want your system loading a copy of ever
        • So do what MacOS and Windows do. Get a single window environment and make it the system default. Then you won't need to load the libraries for every application, because they'll be part of the system. Not every MacOS application has to load its own copy of Aqua, because Aqua comes installed on all Macs. If Linux would team up, get organized, and figure it out, all these problems would be solved.
          • Are you feigning ignorance here, or did you really not understand that GTK was an example for any library on your system? Shared libraries exist for a reason: so that multiple applications can use the same library without taking up extra memory. They can't do that if each application is loading its own version inside its own "application directory". Perhaps in your world, windowing systems are the only shared library, but in the real world, Linux standardizing on a particular windowing system will not solve
      • 1) Give me an example of these problems you're referring to. I really can't think of any way that app bundles are inferior to the Unix way.

        The most glaringly obvious example is that the current structure is made so that as many files as possible can be 1) mounted on read-only drives and 2) shared across machines.

        If you read the FHS, it makes these points very clear. /usr can be mounted read-only and shared by many machines over a network, so updates only have to be done once. To do updates, the drive is remou
    • Virtual file systems are the way to go I think. Make there be two "overlays" for your filesystem. Novice users see all the libs and binaries grouped into a single package. Advanced users can see the real unix hierarchy.
      • Make there be two "overlays" for your filesystem. Novice users see all the libs and binaries grouped into a single package. Advanced users can see the real unix hierarchy.

        But this is precisely how OSX solves the problem!

        In the Finder, which is how novices and people uninterested in system internals will spend most or all of their time, you only see the application (etc.) bundles, and the other functionality is hidden from view. Moreover, key system directories (such as /etc, /var, and /usr) are hidden f

          • But it still leaves the problem of having tons of little folders in your PATH, LD_LIBRARY_PATH, etc. (I imagine OSX works a bit differently for these, but enough Linux users won't want to change that switching to whatever method OSX uses is unlikely.) You need to be able to have novice users see a "bundle" that is actually one file in /usr/bin, a few in /usr/lib, some in /usr/doc and /usr/man, and a folder in /usr/share. And you need to have experts able to see it as it really is. It could easily beco
          • Yeah, I'm not really sure how this organization scheme could be adapted to Linux.

            The $PATH variable works in the usual sense on the command line -- it typically looks for commands in places like /bin, /usr/bin, /usr/local/bin, ~/bin, etc -- but it doesn't appear to be relevant to the GUI, or to the open command which can be used to open files in the GUI (e.g. open -a Safari ~/Sites/index.html).

            As near as I can tell, /Applications is the GUI analogue to */bin directories, and the Finder is just transpar

  • This is a little like the Windows system. It will make system administration more convenient.
  • In fact, it's the layout I happen to prefer. I've had too many namespace clashes, because application writers don't pick original names for things.

    Probably the arrangement I've seen the most often is to have a directory tree under /opt, or in /usr/local. The top-most directory is the application name. The directories under that contain the version numbers. Inside of that, you have the usual bin, lib, libexec, share, etc.

    This is much more practical to maintain, on modern Linux systems, as many important

  • by Luarvic ( 302768 ) on Thursday January 13, 2005 @08:01PM (#11354860) Homepage
    OK, let's count the advantages and disadvantages of the proposed software installation system.

    Advantages:
    • You can easily know which files belong to which software packages
    • You can easily remove the entire package by using a simple rm -r command

    All these goals can easily be achieved using any reasonable package management system. Now let's look at the disadvantages:
    • Every time you install a package you have to change the PATH variable. Existing applications must be restarted to see this change, because environment variables are inherited and can be changed on the fly only if the application itself is a shell or has some shell-like functionality.
    • Many packages have variable files (logs, data, caches, pipes, etc.), which are normally placed in the /var directory. Often /var resides on a separate filesystem, because it has different requirements for speed, reliability, backup and other criteria. Under the proposed schema we cannot have a separate variable filesystem.
    • Shared library dependencies become a nightmare. If you have no version number in the package directory name, you cannot install different versions of a shared library, so forget about compatibility with old packages. If you have a version number, the library moves to a different place every time you upgrade the package. Don't forget that shared library version numbers do not necessarily reflect the package version; they reflect ABI changes (see the soname sketch after this list).
    • Where are you going to keep configuration files? If in the package directory, you must copy them every time you upgrade the package. If for some reason you decide to remove a package and then install it again, you lose all the package's config files.
    • You have problems if you decide to split a package into subpackages. The directory structure changes, and all applications which use programs or libraries from the split package must be updated or restarted. The same problem exists when you unite packages (as fileutils, sh-utils and textutils were united into the coreutils package).
    • Relying on the PATH environment variable for invoking other programs is sometimes dangerous, especially for system services and set-UID programs. Usually a full pathname is used in these cases. What kind of pathname can be used under the proposed schema, if the invoked program's package can be changed (split into separate packages, united) or the program can be moved from one package to another?

    So, what do we gain? Nothing. There are some advantages, which can easily be achieved another way, but there are very serious disadvantages.
    When managing a system, stop thinking in terms of files. Think in terms of software packages. Consider /usr/bin a namespace which contains user-level programs and which is populated when packages are installed. Consider /usr/lib a namespace which contains libraries.
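    To illustrate the ABI-versioning point above: under the standard layout, a shared library typically appears as a chain of soname links, which is what lets the package version and the ABI version move independently (version numbers illustrative):

    ls -l /usr/lib/libz.so*
    # libz.so -> libz.so.1.2.2      (dev link, used at compile time)
    # libz.so.1 -> libz.so.1.2.2    (soname link; tracks the ABI, not the package version)
    # libz.so.1.2.2                 (the actual file)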
    • by Blakey Rat ( 99501 ) on Thursday January 13, 2005 @09:09PM (#11355473)
      Let's go over your list the OS X way. Noting that I'm not an expert:

      1) PATH variable only applies to CLI applications. Apple solves this problem by putting CLI applications in the standard UNIX places.

      2) /users/username/library/application support

      3) Shared libraries cause as many problems as they solve. Modern computers aren't short on RAM or disk space and there's no need to use them.

      4) /users/username/library/preferences

      5) I have no clue what you're talking about on this one.

      6) Bundles should be as self-sufficient as possible. The only external applications they should be calling are those that are *guaranteed* to be there.
      • 3) Shared libraries cause as many problems as they solve. Modern computers aren't short on RAM or disk space and there's no need to use them.

        Sadly, libraries aren't similarly short of security problems. This is what happens [gzip.org] when a library that is commonly statically linked is found to have a security vulnerability. [gzip.org]

      • 3 - Yes, maybe you have lots of memory and disk space - that may be true. But consider other things, like CPU cycles (please don't tell me the "modern computers" argument covers the CPU too, because I simply won't believe it) - and having a setup where every app comes with its own libs (so no shared libs) causes:

        * More disk space usage (we have settled on that).
        * More RAM usage (I can settle on disks being cheap; RAM is not cheap IMHO, and there is never too much of it for me - I don't waste my RAM on crap).
        * More disk
    • > # You can easily know which files belong to which
      > software packages

      Well, that was handled already. Just don't put everything into the system with "make install"; add another layer to the process and use a package management system. E.g. RPM (I know there are others that do the same; I'll focus on RPM only) - if you want to know which files came with a package, do: rpm -ql foo; if you have a file in the FS and want to know which package it belongs to, do: rpm -qf /somepath/somefile - so this is not an advantage, since
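      For reference, the two RPM queries mentioned:

      rpm -ql foo                  # list every file installed by package foo
      rpm -qf /somepath/somefile   # name the package that owns a given file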
    • While I agree with your points, to be effective, you need, as you said, a decent package management system.

      However - I love /opt when it comes to stuff I install from source - that way I know where it all is.

      What would be nice is a standard way of installing something from a tarball which puts something, let's say /var/genericpackagemanagment, which contains a list of all the files installed, where, and the directories created.

      That way, removal becomes something as simple as rm `cat /var/genericpackagemanagme
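      A rough sketch of that idea using a staged install (the manifest directory /var/gpm is a shortened, hypothetical stand-in for the poster's path; DESTDIR is the usual GNU make convention):

      make install DESTDIR=/tmp/stage
      # record the manifest, then copy the staged tree into place
      ( cd /tmp/stage && find . -type f | sed 's|^\.||' ) > /var/gpm/foo.list
      ( cd /tmp/stage && tar cf - . ) | ( cd / && tar xf - )
      # removal is then just:
      xargs rm -f < /var/gpm/foo.list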
  • by Ogerman ( 136333 ) on Thursday January 13, 2005 @08:07PM (#11354903)
    What do other, more experienced readers think about the problems and improvements related to dropping the current Linux approach for a 'bundle-like' one in Linux distributions?

    OK... this question is really 1st-year CS material, so hopefully this will set all y'all newbie young'ns straight. "Bundling applications," defined as giving every app its own copies of the libraries it uses, is just plain stupid if at all avoidable. Here's why:

    1.) What happens when a bug or security flaw is found in a library? Without a shared copy, you must figure out which apps are using it (which may be thousands) and then upgrade every application "bundle" instead of one library for the whole system. And what if some apps are using an older version of the library which nobody bothered to patch?

    2.) Disk caching. Today's hard disks may be really large, but they're still really slow (compared to the rest of the system). If you have to load separate copies of a library for each app, you lose all the benefit of disk caching.

    3.) Memory usage. Shared libraries allow a single copy of the library in memory to be used by multiple applications. This also reduces load time if the library is already in memory. (This is why it makes sense efficiency-wise to use either KDE or GNOME and not a mixture of apps from both.) It's also partly why OpenOffice and Firefox take so long to load on Windows compared to Office and IE (they don't use all the standard Windows libraries).

    4.) Shared libraries are a major driving force in pushing application developers to stay on their toes and keep up with the progress of the library developers.

    5.) You shouldn't be compiling your own apps unless you're their developer or have very specific security or optimization needs. It's a waste of time unless you're learning something in the process. Leave that job to distro package maintainers and do something useful with your time like becoming a better programmer and/or contributing to your favorite app. Once Linux ceases to be a toy for you, you'll avoid compiling everyday software like the plague.

    I could go on for several points, but that should be enough to convince ya. (:
    • "Bundling applications," as defined as giving every app it's own copies of used libraries, is just plain stupid if at all avoidable.

      On the other hand, I may have originally misread what you were getting at. So, if that's not what you meant, I'm afraid my answer is different..

      Bundling apps / libraries in the sense of giving them their own directory and then symlinking back to some common path like /System/Libraries/KDE or what have you, is not such a bad idea. Check out GoboLinux for a distro aimed at t
    • This is the problem with Linux developers. Developer-centric thinking, not user-centric thinking. Think like an Apple programmer for a few minutes here:

      1) A security flaw in an application is the responsibility of the company that created and distributed that application, and it's their job to inform users and fix it.

      2) Users don't give a crap about this, as long as their applications run. 95% of users don't know how much disk space an application takes up.

      3) Again, users don't give a crap about this.
        1) A security flaw in an application is the responsibility of the company that created and distributed that application, and it's their job to inform users and fix it.

        What if the company has gone out of business? Not to mention that users don't want to have to update everything when a bug is found in something like glibc.

        3) Users don't want to deal with dependencies. If you tell your computer to download and install, say, a video game, the user doesn't want to see your computer downloading funky-happy-m

      • 2) Users don't give a crap about this, as long as their applications run. 95% of users don't know how much disk space an application takes up.

        3) Again, users don't give a crap about this. 95% of users can't even tell you how much memory an application is using, or how to find that out.

        Most users don't know how to check memory usage and/or disk usage.

        But most users DO notice when the hard drive starts making noise and the system slows down (paging).

        And most users DO notice when they have to go deletin

      • This is the problem with Linux developers. Developer-centric thinking, not user-centric thinking. Think like an Apple programmer for a few minutes here:

        This is the problem with (most) Mac users. Proprietary thinking, not Open Source thinking. So... you've obviously never used a modern Linux distro, based on your comments. Besides the ones about not caring about performance, every single one of your points assumes that we're talking about proprietary software from a vendor. In the world of Linux and Op
    • I'm wasting mod points replying but this needs to be said.

      App Bundles can be configured to search installed system libraries first. This also solves the security update issue. The bundled libs are only used as a last resort.

      Regarding config files and /var: this is mainly aimed at user applications. Would you install MySQL this way? Maybe to play with, but NEVER as a server. This is perfect for apps like Gimp, k3b, OpenOffice, Firefox, etc. Config files can always be checked in two locations. A system
  • I'm feeling pretty ignorant right now, but can someone please explain what problems are arising from shared libraries under Linux? Is this the same as "program x needs library y, which needs library z, but that will screw up program w"? Or is it "I want to distribute program x that has 20 other dependencies, so I'll just put them all together"?

  • It's a step backward (Score:3, Informative)

    by Khazunga ( 176423 ) * on Thursday January 13, 2005 @08:35PM (#11355175)
    The Unix filesystem hierarchy has network maintenance taken into account. Having program 'bundles' may be great for a single workstation, but is hell for a network-wide system. The FHS explains this much better than I could, so please read the rationale there [pathname.com].

    Personally, I see this as going back to the DOS days. Linux/BSD have been dealing with shared libs in pretty sane ways. Although rpm is sometimes a pain in the butt, Debian's package system and Gentoo's Portage prove that dealing with dependencies automatically is feasible and comfortable.

    And, anyhow, for special cases you can always drop apps into /opt and get the equivalent of a bundle. Oracle does this, VMware does this, and countless others do too.

  • pkgsrc is a source-based packaging system that works on MacOS X, Linux and many other operating systems (even Windows, with SFU).

    More information:

    http://www.pkgsrc.org/ [pkgsrc.org]
    http://www.feyrer.de/Texts/Own/21c3-pkgsrc-slides.pdf [feyrer.de]

  • Several comments have mentioned concerns like "What if there's a security problem in a commonly used library? You'll have to update all your apps instead of a shared lib!" or "All those extra copies of things will eat up too much RAM and drive space!"

    Please take a few minutes and look at Apple's documentation about how bundles work before rehashing those topics.

    http://developer.apple.com/documentation/MacOSX/Conceptual/SystemOverview/Bundles/chapter_4_section_1.html [apple.com]

    -Ster
  • When I (compile &) install something locally I use stow.
    ./configure --prefix=/usr/local/stow/package
    make
    make install
    cd /usr/local/stow
    stow package
    But in general, as a Debian user, most stuff one needs is in the deb repository, so it's just a matter of

    apt-get install package

  • Plan9's soft file systems would really help for this kind of system

    In Plan 9 the file tree isn't bound to the disks; it is made up from a series of bind commands (kind of like mount), where the server for each bind can be anything, from a disk server to an ftp client to a pipe -- all it takes is a file descriptor. This file tree is set per process, with children able to inherit their parent's tree or start off with a blank one and bind in devices as necessary (a separate # namespace is used as the seed for dev
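    For the curious, the Plan 9 primitive looks something like this (rc shell; the -a flag unions the new directory after the old one, per process):

    # overlay an app's binaries onto /bin for this process and its children
    bind -a /usr/glenda/apps/games/bin /bin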
  • ROX (Score:2, Informative)

    ROX [sourceforge.net] (RiscOS On X), which has a filer, a window manager and a session manager, uses Application Directories taken from RISC OS. This sounds very similar to Apple Application Bundles.

    Installation is done by copying the directory, and the first time you run it, it will be compiled. You do have to run it from ROX-Filer for this to be supported (just double-click on the application directory), otherwise you have to run a script inside the directory.

    Recently ROX has combined AppDirs with the Zero Install [0install.net] insta

  • Are nasty, as they look like one file but are actually directories that don't open when you double-click.

    Linux package management takes all the pain away from having everything in /, and on a well-implemented system (which probably doesn't exist yet) you can have the rpm as an icon for the app: double-click, a scrollbar goes across the screen, and it's done. It could not be simpler. Of course apt-get makes this even easier.

    How the application is stored in the filesystem has no impact whatsoever on usability either, c
  • /etc is great (Score:3, Insightful)

    by rduke15 ( 721841 ) <rduke15.gmail@com> on Friday January 14, 2005 @05:09PM (#11367975)
    While I can see the advantages of having every app isolated in its own directory, I feel that one of the things I really like in Linux is having all configuration in one relatively small, pure-text hierarchy: /etc.

    I can grep it easily when I look for something, and easily edit the relevant file, which is usually well commented. I cannot grep the entire / tree. Well, I suppose I could, but I certainly don't want to.

    For the rest, grouping all an application's files together sounds attractive, but I would be happy enough if every app just clearly documented what it did at install time so it's easy to undo. (I don't believe much in "uninstall" programs/scripts, seeing how they (don't quite) work on Windows.)

      While I can see the advantages of having every app isolated in its own directory, I feel that one of the things I really like in Linux is having all configuration in one relatively small, pure-text hierarchy: /etc

      OSX has an answer to this as well. By widely accepted convention:

      • ~/Library/Preferences has user-level application settings
      • /Library/Preferences has system-wide, custom-set application settings
      • /System/Library/Preferences has system-wide, vendor-set application settings
      • /etc has settin
  • I have Quanta Gold from theKompany (good on Linux, sucks on OS X; oh well). Anyway, they bundle all the Qt libs in a single directory. It is theoretically possible, yes, but it screws other things up. For example, I had several X clients in my old classroom and ran them off a P3 933 512MB system. It ran fine. But if I ran 6 copies of Mozilla, each with its own libraries, it'd come to a grinding halt. The problem is Windowsy, really, since with free software it's not usually too hard to update a library.
  • by bfree ( 113420 )

    One existing user of bundled applications on GNU/Linux is klik [atekon.de], which was originally designed for installing additional programs on Knoppix by simply installing the klik client and clicking on links on the klik site. klik has evolved since its inception, so that it now builds compressed images as bundles, supports 4 distributions (Knoppix 3.7, Kanotix BHZ, Simply MEPIS 2004.04 and Linspire 5.0), can work with dialog|Xdialog as well as kdialog, and firefox|elinks as well as konqueror, and finally offers the ent

  • Unsupported Operating System

    If you were visiting this site with Linux, you could install thousands of applications simply with a klik. You can download a free copy of Linux here. Please come back with a standards compliant operating system and browser.

    This site is optimized for Konqueror and Firefox.


    How narrow-minded not to let me scope it out from Windows/IE.
  • I think that the MacOS X approach (which is similar to Windows, only cleaner) partly differs from the Linux/Unix approach because of a different software development philosophy.

    Most Linux apps are open source (ok I run Debian so I'm biased). There's a strong emphasis on making things modular. Libraries are generally bundled separately from applications, because as soon as one app has a nifty feature it gets yoinked out and made into a general purpose library/API/framework (think GTK the "Gimp Toolkit").
