
AutoPackaging for Linux (623 comments)

Isak Savo writes "The next-generation packaging format for Linux has reached 1.0. With Autopackage officially declared stable, there is now an easy way for developers to create up-to-date, easily installable packages. There are lots of screenshots available, including a Flash demo of a package installation."
  • Their server is now in smoking ruins.
  • by dg41 ( 743918 )
    Aren't there enough package/software installation formats for Linux that aren't being used enough as it is?
    • by FooBarWidget ( 556006 ) on Sunday March 27, 2005 @02:50PM (#12061177)
      I knew someone would say that. Read our FAQ. Mike posted a part of it below. Please read our website for the full FAQ once the Slashdotting is over.
      And we'll be available in #autopackage on irc.freenode.net if you have questions.
    • by diegocgteleline.es ( 653730 ) on Sunday March 27, 2005 @03:01PM (#12061241)
      yes, but that's not the main problem IMO. Package formats are often thought of as the number 1 offender when it comes to inter-distro compatibility, but they're not the main problem. The main problem is that package maintenance happens at distro level instead of developer level. What we need is to move as much maintenance work as possible to the developers. It's the number 1 cause of problems: a package in Debian may require "libfoo", but in Fedora it may depend on "foo" or "foo+randomstring". If all those things were done at developer level, they'd be more coherent, and inter-distro compatibility would be greater. Package format would be irrelevant.

      And this is why autopackage is great. It doesn't try to replace apt/rpm/Gentoo; it just tries to be a good way of distributing software, and encourages developers to create their own "autopackage packages" so they work in every distro.
      • If all those things were done at developer level, they'd be more coherent, and inter-distro compatibility would be greater

        Woah woah. Let me just stop and laugh for a moment. You're telling me that random developer X can do a better job of making a package than the people who develop the friggin' distro? Are you kidding?? Seriously, the idea that *more* cooks in the kitchen will somehow result in a "more coherent" set of packages is incredibly laughable...
        • by diegocgteleline.es ( 653730 ) on Sunday March 27, 2005 @07:36PM (#12062633)
          Woah woah. Let me just stop and laugh for a moment. You're telling me that random developer X can do a better job of making a package than the people who develop the friggin' distro? Are you kidding??

          They won't do the "packaging" better; the result will simply be better. The developer of project foo may say: "foo version 2.15-b depends on project bar version 1.1 to run properly", and everyone would follow it. Distros could still package things in a different way, but that won't be too common, and at that point people could say "hey, your Fedora package doesn't work properly in Debian". My point is that a common package format

          WON'T SOLVE ANYTHING. Autopackage doesn't help because it's a better format; it helps because it has a different philosophy. It doesn't matter how good deb or rpm are - they will NEVER work on another distro, simply because of their philosophy.

  • by FooBarWidget ( 556006 ) on Sunday March 27, 2005 @02:41PM (#12061107)
    No doubt lots of people will have all kinds of questions about autopackage, such as:
    - "What idiots!! Another packaging format is the last thing we need!"
    - "What's wrong with apt-get?"
    - "Everybody should use Gentoo!"

    Slashdotters are highly encouraged to read the autopackage FAQ [autopackage.org]! Our project has existed for over 2 years now, and many people have asked us those questions. In short: autopackage is not meant to replace RPM/DEB/apt-get/etc.

    If you have more questions, feel free to come over to #autopackage on irc.freenode.net.
    We'll be glad to answer your questions.
    • It seems our servers can't withstand slashdotting. Mike posted the FAQ below. Scroll down to read it.
    • by Anonymous Coward on Sunday March 27, 2005 @02:50PM (#12061175)
      The only thing I'd like to see in a package manager is the ability for non-root users to install software (perhaps under $HOME; perhaps under /usr/local if they're members of the group "local").

      It's absurd that you need to enter a root password to do something as simple as install a user-space program - and it's absurd that package managers only support dependency checking for stuff installed in the main system directories.

      At work, the main directories (/usr, /bin, etc.) can only be accessed by the IT guys, but every department has a directory ("/usr/department/engineering", for example) that members of that group can install software in. We have a newer version of Perl in ours. It really sucks that package managers can't help deal with the dependencies in an environment like this.
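
      For what it's worth, a per-user install of an autoconf-style project can already be approximated without root, assuming the build system honours --prefix (the paths below are only illustrative):

        # configure, build and install into the user's home directory
        ./configure --prefix=$HOME/.local
        make
        make install
        # make the per-user binaries and libraries visible
        export PATH=$HOME/.local/bin:$PATH
        export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH

      What no mainstream package manager does, as the parent notes, is track dependencies for software installed under such a private prefix.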

    • by FooBarWidget ( 556006 ) on Sunday March 27, 2005 @02:53PM (#12061197)
      Mike set up a mirror of the autopackage website! It's here [dur.ac.uk]. The FAQ is also available on that mirror.
    • by Screaming Lunatic ( 526975 ) on Sunday March 27, 2005 @02:58PM (#12061225) Homepage
      All Your Packages Are Belong To Us [autopackage.org]

      All Your Bandwidth Are Belong To Us.

    • by Anonymous Coward on Sunday March 27, 2005 @03:44PM (#12061446)
      Dammit! All I get is "emerge: there are no ebuilds to satisfy 'autopackage'".

      Crap! *now* what do I do?!?!?

    • The FAQ answers that question with:

      What RPM is not good at is non-core packages, ie programs available from the net, from commercial vendors, magazine coverdisks and so on.

      Why not? If you're going to spend a metric shitload of time creating a new packaging format and you want anyone to use it, some actual justification might help.

      Personally, I'd add non-DB backends, suggests/recommends, and non-root installs to RPM, or add better file/signature verification, a DB backend, and non-root installs to d
      • by Nailer ( 69468 ) on Sunday March 27, 2005 @08:41PM (#12062905)
        Actually, I'll bite on some of his later answers where, unlike the one to the previous question, he does go through some specific issues he has with RPM/dpkg:

        "Other dependencies are not so simple, there is no file that reliably expresses the dependency, or the file could be in multiple locations"

        Yes, that's what virtual dependencies / capabilities in both rpm and dpkg provide. As for files moving around, that's what the FHS is for. Got a

        But in reality, you don't encounter this scenario until you install RPMs/debs on another distro. Big deal: binary packages will always be built against specific library versions. Autopackage doesn't solve that.

        "because RPM is, at the end of the day, a tool to help distro makers, they sometimes add new macros and features to it and then use them in their specfiles. People want proper integration of course, so they use Mandrake specific macros "

        That's a human problem. You can't fix it with technology. Taking away the ability to have custom macros is a bad thing. Encouraging proper behavior is a better thing.

        "Bad interactions with source code: because the current versions of RPM don't check the system directly, they only check a database"

        You don't mean bad interaction with source code. Installing from source works fine; provided you know how to run make install, you can easily create an RPM of most autoconf apps in about 2 minutes.

        You mean bad interaction with non-packaged software. Again, that's a human issue, but one that's been solved better and better over time. Both OSS projects and proprietary vendors, including Adobe, BEA, Macromedia, etc., release properly packaged binaries.

        And frankly, I like having a database. I want to be able to find out what package was responsible for installing a file, and a URL where I can get a new version of the software. I like having a checksum of all my files at install time, so I can see if they've changed later. None of which autopackage does.

        I don't like anything that encourages people to install unsigned applications from the internet, which autopackage does.

        "a dependency does not encode any information on where to find it"

        Yes. Great that they kept that shit out of the dependency, isn't it? Because what provides that capability isn't determined by the software. My app needs a webserver. That could be thttpd, Apache httpd, Roxen, or whatever else. Rather than specifying what package provides that dependency, they simply list the dependency. Three of my available packages say they provide a webserver capability. When a new webserver comes out, it will say it provides that capability too, without requiring a new package of the thing that wants a webserver. Brilliant stuff!
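
        For readers unfamiliar with virtual capabilities, a rough spec-file sketch of the idea (the capability and package names are only examples):

          # in thttpd's spec file
          Provides: webserver

          # in the dependent application's spec file
          Requires: webserver

        Any installed package that declares "Provides: webserver" then satisfies the dependency, which is the behaviour described above.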

    • A few quick comments on your FAQ:

      The first reason is the lack of dependency management. Because you are simply moving folders around, there is no logic involved, so you cannot check for your app's dependencies.

      OS X handles this at runtime, i.e. you can install the software, but the folder contents contain enough information for the OS to give you an error message when you run it. Usually this amounts to nothing more than "OS X 10.3 required!", but it could be more. FWIW, I think the lack of a "standard" set of OS services in Linux complicates this issue. Under OS X (and to a certain degree Windows), developers always know which libraries they can depend on, and which ones they should bundle.

      You'll note that bundled APIs on OS X and Windows tend not to duplicate each other across a given set of installed programs.

      One obvious one is that there is no uninstall logic either, so the app never gets a chance to remove any config files it placed on the system.

      This would be true on Linux, but it is NOT true on OS X. In OS X, the desktop integrates with the filesystem and learns via events when an application is added or deleted. This means that OS X users see file associations as soon as the program is added to the system, and they also see the deletion of those associations when the program is removed. It's all very seamless and prevents the dangling associations that plague Windows.

      The OS X FS takes things one step further by storing file system IDs instead of path names for everything. This means that if I move my program from the Desktop to Applications, the associations will move with it. Similarly, if I have a file open and I move it, my file will save to the new location instead of the old one.

      Another is that the app menus are largely determined by filing system structures. This means that it's hard to have separate menus for each user/different desktop environments without huge numbers of (manually maintained) symlinks.

      1. An application menu IS just a bunch of glorified symlinks.

      2. OS X handles this by having a system wide Applications folder for all users. If users wish to have private programs, they can drag them to their desktop or home folder.

      3. The dock is the user's customized menu. The most commonly used apps are usually already there so that users don't have to hunt for them. If a user wants another app on his dock, he can drag it on there to create a shortcut. Shortcuts are deleted by dragging the icon off the dock or into the trash can.

      What your FAQ should say is, "Unfortunately, the Linux design philosophy precludes the use of an appfolders system, as the services required to make such a system work are most likely unavailable on most installations."

      Now with that cleared up, good job guys. I'm glad to see that someone is finally tackling my number one Linux gripe. (Check my Journal for more info.) :-)
      • You'll note that bundled APIs on OS X and Windows tend not to duplicate each other across a given set of installed programs.

        OK, I don't know much about Mac but I have to call bullshit on Windows there. Windows packages are constantly rolling their own "common" DLLs, with slight differences, overwriting identically-named DLLs from other packages and clobbering that package's symbols. "DLL hell" wasn't just a clever assonance someone came up with.

      • I like the OS X (and GNUstep) way of handling things, but you are wrong about one thing. Deleting a .app bundle does not delete the configuration files. File associations are stored in a .plist file in each .app bundle, and must be periodically refreshed (fine, the system handles this), but other settings are stored in /Library/ or ~/Library/, and stay around after you have removed the application. This is required due to the way in which applications are upgraded in OS X - delete the old one, drag the n
  • Some FAQ entries (Score:5, Informative)

    by IamTheRealMike ( 537420 ) on Sunday March 27, 2005 @02:42PM (#12061115)
    So much for sunsite having plenty of bandwidth and fast servers! Well, here are some select pieces from the FAQ:

    # What is autopackage?

    For users: it makes software installation on Linux easier. If a project provides an autopackage, you know it can work on your distribution. You know it'll integrate nicely with your desktop and you know it'll be up to date, because it's provided by the software developers themselves. You don't have to choose which distro you run based on how many packages are available.

    For developers: it's software that lets you create binary packages for Linux that will install on any distribution, can automatically resolve dependencies and can be installed using multiple front ends, for instance from the command line or from a graphical interface. It lets you get your software to your users quicker, easier and more reliably. It immediately increases your userbase by allowing people with no native package to run your software within seconds.

    # Is autopackage meant to replace RPM?

    No. RPM is good at managing the core software of a distro. It's fast, well understood and supports features like prepatching of sources. What RPM is not good at is non-core packages, i.e. programs available from the net, from commercial vendors, magazine coverdisks and so on. This is the area that autopackage tackles. Although in theory it'd be possible to build a distro based around it, in reality such a solution would be very suboptimal, as we sacrifice speed for flexibility and distro neutrality. For instance, it can take several seconds to verify the presence of all required dependencies, something that RPM can do far quicker.

    # Why a new format? Why do the RPMs I find on the net today only work on one distro?

    There are a number of reasons, some obvious, some not so obvious. Let's take them one at a time:

    • Dependency metadata: RPMs can have several types of dependencies, the most common being file deps and package deps. In file deps, the package depends on some other package providing that file. Depending on /bin/bash for a shell script is easy, as that file is in the same location with the same name on all systems. Other dependencies are not so simple: there is no file that reliably expresses the dependency, or the file could be in multiple locations. That means sometimes package dependencies are preferred. Unfortunately, there is no standard for naming packages, and distros give them different names, as well as splitting them into different sized pieces. Because of that, dependency information often has to be expressed in a distro-dependent way. (A short spec-style sketch contrasting the two kinds of dependency follows this list.)
    • RPM features: because RPM is, at the end of the day, a tool to help distro makers, they sometimes add new macros and features to it and then use them in their specfiles. People want proper integration of course, so they use Mandrake specific macros or whatever, and then that RPM won't work properly on other distros.
    • Binary portability: This one affects all binary packaging systems. A more detailed explanation of the problems faced can be found in Chapter 7 [slashdot.org] of the developer guide.
    • Bad interactions with source code: because current versions of RPM don't check the system directly (they only check a database), it is hard to install them on distros like Gentoo even when they only use file deps. Similar problems arise if you compile things from source.
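
    To make the file-dep versus package-dep distinction concrete, a hedged spec-style sketch (the package names are only examples and differ per distro):

      # file dependency: portable, because the path is the same on every distro
      Requires: /bin/bash

      # package dependency: distro-specific; Fedora calls this package gtk2-devel,
      # while Debian-based systems call the equivalent libgtk2.0-dev
      Requires: gtk2-devel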

    There are various reasons why a new format was created rather than changing RPM

    • Re:Some FAQ entries (Score:5, Informative)

      by IamTheRealMike ( 537420 ) on Sunday March 27, 2005 @02:49PM (#12061167)
      Why bother?

      # What's wrong with centralized repositories, apt style?

      The system of attempting to package everything the user of the distro might ever want is not scalable. By not scalable, we mean the way in which packages are created and stored in a central location, usually by separate people to those who made the software in the first place. There are several problems with this approach:

      • Centralisation introduces lag between upstream releases and actually being able to install them, sometimes measured in months or years.
      • Packaging as separate from development tends to introduce obscure bugs caused by packagers not always fully understanding what it is they're packaging. It makes no more sense than UI design or artwork being done by the distribution.
      • Distro developers end up duplicating effort on a massive scale. 20 distros == the same software packaged 20 times == 20 times the chance a user will receive a buggy package. Broken packages are not rare: see Wine, which has a huge number of incorrect packages in circulation, Mono, which suffers from undesired distro packaging, etc.
      • apt et al. require extremely well-controlled repos; otherwise they can get confused and ask users to provide solutions manually: this requires an understanding of the technology we can't expect users to have.
      • Very hard to avoid the "shopping mall" type user interface, at which point choice becomes unmanageably large: see Synaptic for a pathological example of this. Better UIs are possible, but you're covering up a programmatic model which doesn't match the user model, especially for migrants.
      • Pushes the "appliance" line of thinking, where a distro is not a platform on which third parties can build with a strong commitment to stability but merely an appliance: a collection of bits that happen to work together but may not tomorrow: you can use what's on the CDs but extend or modify it and you void the warranty. Appliance distros have their place: live demo CDs, router distros, maybe even server distros, but not desktops. To compete with Windows for mindshare and acceptance we must be a platform.

      # What's wrong with NeXT/MacOSX style appfolders?

      One of the more memorable features of NeXT based systems like MacOS X or GNUstep is that applications do not have installers, but are contained within a single "appfolder", a special type of directory that contains everything the application needs. To install apps, you just drag them into a special Applications folder. To uninstall, drag them to the trash can. This is a beguilingly easy way of managing software, and it's a common conception that Linux should also adopt this mechanism. I'd like to explain why this isn't the approach that autopackage takes to software management.

      The first reason is the lack of dependency management. Because you are simply moving folders around, there is no logic involved, so you cannot check for your app's dependencies. Most operating systems are made up of many different components that work together to make the computer work. Linux is no different, but due to the way in which it was developed, Linux has far more components and is far more "pluggable" than most other platforms. As such, the number of discrete components that must be managed is huge. Linux is different to what came before not only in this respect, but also because virtually all the components are freely available on the internet. Because of this, software often has large numbers of dependencies which must be satisfied for it to work correctly. Even simple programs often make use of many shar

      • by jxdxbx ( 158858 ) on Sunday March 27, 2005 @03:53PM (#12061492)
        Also, there are no more DMG exploits. There is nothing wrong with having a few XML files around that belong to an application you no longer have; if that really irks you, or if programs leave behind large caches, there are plenty of pieces of software that will delete preferences and caches belonging to software you no longer have.

        Most applications shouldn't need to modify the OS to run, and for that minority that do, OS X still does have packages. This is how haxies and so forth work.

        The only valid objection I've seen to bundles is the one about how a user shouldn't be able to install random software from the internet. This is a pretty good point, but I fail to see how, even in a system that uses an apt repository, you would be able to prevent a user from downloading and installing some random RPM from a website. You would have to have a severely crippled OS.
        • by IamTheRealMike ( 537420 ) on Sunday March 27, 2005 @06:05PM (#12062159)
          Are you sure about that? How do you know there are no more exploits? Do you have some power of clairvoyance nobody else does?

          The thing that concerns me about the DMG exploits is that they were caused by the fundamental design of the system, not simple typos or poor coding practice. Having appfolders integrate with the system by registering file associations/URL handlers silently through the shell seems like the obvious way to handle this stuff in an "install free" environment, though really it's just doing the install at a later time. But it had unintended side effects which were devastating for security.

          The problem is, to solve this you either have to go back to some explicit action integrating software with the system, or pile on more hacks to try and solve the security exploits. Apple chose both - Tiger boasts an improved installer, iTunes comes inside a package etc. But the approach they took with Safari reminds me of Internet Explorer: cover up a flawed technology like ActiveX with more and more hacks and security restrictions that somehow always managed to leak.

          You are right that most applications should not need to modify the "system" to run. This is the principle behind authentication-less installation, which we only approximate on Linux with the install-to-$HOME feature in autopackage. Figuring out the exact set of permissions that are safe for installers to have and then enforcing them is somewhat tricky: both Windows and MacOS X are riddled with programs that demand the administrator password, which implies that so far nobody has quite identified the sweet spot.

        • by Nailer ( 69468 ) on Sunday March 27, 2005 @08:51PM (#12062937)
          "I fail to see how that, even in a system that uses an apt repository, you would be able to prevent a user from downloading and installing some random RPM from a website. You would have to have a severely crippled OS."

          It's pretty simple: if the package isn't signed by someone you trust, refuse to install it. This has been the behaviour in up2date since it was created, and yum does the same thing. I'd be very surprised if apt-get (at least on systems where package signing is expected) didn't do the same.

          RPM itself, when used directly, currently throws up a warning if a package isn't signed by someone trusted, but (unlike up2date, yum, etc.) still installs it. This behaviour may change in future, though.
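
          For the curious, the manual equivalent looks roughly like this (the key and file names are placeholders):

            # import the vendor's signing key, then verify the package signature
            rpm --import RPM-GPG-KEY-vendor
            rpm --checksig some-package-1.0-1.i386.rpm
            # yum refuses unsigned packages when its config says so:
            #   gpgcheck=1   (in /etc/yum.conf or a repository section)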

    • To me it seems like anything that makes it easy for users to install random software off the internet is a REALLY BAD THING. Using Linux right now is a pleasure because if you're using Gentoo, Ubuntu, Fedora, Mandrake, etc... you get all your software packaged for you by your distribution. No spyware, no viruses, no adware, no shareware. It all generally works, dependencies are sorted out and managed; it's all a really good system.

      Encouraging users to install Comet Cursors for Linux seems to me like
      • But what do you do if the software you need is not included with your distro?

      • by Master of Transhuman ( 597628 ) on Sunday March 27, 2005 @05:00PM (#12061820) Homepage
        "To me it seems like anything that makes it easy for users to install random software off the internet to be a REALLY BAD THING."

        This is hardly the point of the project.

        The point of the project is to eliminate problems for developers in packaging their software to be able to run across distros.

        The fact that it makes it easier to relieve dependency hell is a bonus for those users who want packages not included in their distro.

        Anybody who says EVERYTHING they'll ever need is included in their distro is just being a troll. Because it simply is not possible that ANY distro is "finished." And a lot of people don't want to wait months until something they want shows up in a repository.

        If Windows did that, everybody would still be using DOS.

        Finally, the notion that it is somehow "evil" to install software from the Net is just stupid. The Net exists to distribute information - and programs are part of that.

        Practically everything I use on the Windows side of my machine was downloaded off some Web site or another - and I have several gigs of stuff on my Linux side to explore yet which also has the same origin.

        And I have NEVER had a spyware/virus/trojan problem from such software. (Although I have had software that simply screwed up the machine due to stupid programming.)

        Users get spyware and other crap from stupid, pointless little programs offered by commercial entities because the user acts like a kid in a candy store when offered something "free". If the users really knew what freeware was about and where to get anything they need, they would be less likely to do stupid stuff like downloading a calendar program loaded with spyware.

        While it is true that CORPORATE users should be restricted from downloading any damn thing they see (unless it has a productivity purpose), home users certainly should not be.

        Your solution smacks of the paternalism I hate about Windows. You want your distro to control your machine just as much as Gates wants to control Windows users.

        Sorry - not acceptable.

        • Missing the point (Score:3, Insightful)

          by theantix ( 466036 )
          "To me it seems like anything that makes it easy for users to install random software off the internet to be a REALLY BAD THING."

          This is hardly the point of the project.


          Sadly, that is the point of the project. It's meant to aid the installation of packaged software from third-party sources and manage dependencies in order to accomplish this. That is specifically my problem with it: it is a tool for enabling dangerous behaviour for inexperienced users.

          Anybody who says EVERYTHING they'll ever need is
          • ...It's meant to aid the installation of packaged software from third-party sources and manage dependencies in order to accomplish this. That is specifically my problem with it: it is a tool for enabling dangerous behaviour for inexperienced users.

            The factor blocking the deployment of "malware" to Linux isn't the intractability of installing Linux software, but the low population of unskilled users, and the low exploitation value of that population (which is also the largest reason there are few "viruses").
    • by N3wsByt3 ( 758224 )
      "What RPM is not good at is non-core packages, ie programs available from the net, from commercial vendors, magazine coverdisks and so on. "

      You can say that again. In fact, this has been exactly my gripe with Linux, including the so-called user-friendly distros.

      Apt-get, rpm, whatever - but if you are just browsing the net and want to install something, it's a real PITA with Linux. There is no equivalent of an .exe, so you either have to be lucky that they not only have a Linux version, but the right rpm
  • The biggest problem with Linux distributions is that different distributions have different ideas about where things should go: does this file go in /sbin, /usr/sbin, /usr/bin/, /usr/local/bin, or somewhere else? Where do the configuration settings go? /home/*/.config? /etc/profile?

    So, does this address the problem? Most software makers would really like to be able to release ONE package for their software and know that it will end up somewhere sensible.

    I know we all love to bash Microsoft, BUT, I have
    • Right, installing shit is great on Windows. The suicide hotlines start overloading when you try to remove software.
    • by Abcd1234 ( 188840 ) on Sunday March 27, 2005 @02:46PM (#12061150) Homepage
      Umm, that's what the Linux Standard Base [linuxbase.org] is for. Blame the distro makers and packagers for not following it. After all, the LSB has been out for a *long* time...
    • Who cares about M$? Installing software on Windows is a lot of work, having to manually leech (or *shudder* buy) the software and click through dozens of dialogs. apt-get is easy, and I have seldom had any problems with it.

      Which brings me to the question: why would any free software author bother with autopackage when all decent distros provide mechanisms such as apt-get? And what about non-x86 architectures? In my view autopackage support is _wasted time_. Any self-respecting user of my software already u
      • Because maybe your package is a small project not yet picked up by the distributions. Are you as the maintainer going to package it for debian, mandrake, redhat, and suse? Or would you rather convert your tarballs into autopackages?

        I think if I were an upstart package, and nobody were packaging me for their distro, I would want to be converted to an autopackage.
    • by FooBarWidget ( 556006 ) on Sunday March 27, 2005 @02:56PM (#12061215)
      By default, autopackage installs either to /usr or to $HOME (depending on what the user wants). If you really want it to use another prefix, you still can. The reason we use /usr instead of /usr/local is that there are many broken distributions that don't set up paths for /usr/local correctly. And /usr is the standard for many/most distributions.

      There's a mirror of the autopackage website's information here [dur.ac.uk].
  • by rice_burners_suck ( 243660 ) on Sunday March 27, 2005 @02:43PM (#12061134)
    There aren't many replies to this story yet, but I can already see it: Lots of people are going to complain, "Why the fsck do we need yet another packaging solution?!?! We already have rpm, deb, tgz, blah blah blah..."

    The reason is that most of these packaging solutions, while great for developers and those who want detailed knowledge of the inner workings of their systems, simply suck when given to mortal users.

    And they don't handle a number of edge cases too well... What if you want different versions of some software to coexist on the same system? What if you want ten different versions of a library? Yes, these can all be handled by current stuff... but not very well. It's bad enough that when we install software here, we actually get the rpms or whatever and then re-package them ourselves to serve our needs.

    A packaging solution that actually works is desperately needed.

    • by Anonymous Coward on Sunday March 27, 2005 @03:17PM (#12061324)
      I hate to say it, but...

      It seems to me that {NeXT,Open,GNU}step-style apps are both good for developers, and great for mortal users. Drag an app (it's just a file) to your Applications folder, double-click it to run, drag it to the trash to delete. They also handle your "edge cases" (multiple installed versions) just fine.

      They're actually quite a bit simpler for users because an app is just a file -- a first-class object in the system. You don't need a special program just to "install" and "uninstall" programs. You don't need ugly hacks like the "start menu" (Gnome or KDE's reimplementation of it). Users think an app should be a first-class object, and it's perfectly feasible, so as developers we should make that the case.

      The autopackage FAQ has "what's wrong with NeXT/MacOSX style appfolders", but it seems to consist mostly of hand-waving and straw men. They don't seem to understand how NeXT/Mac apps work, e.g., w.r.t. linking.
      • Mod parent up please!

        Packages are much more complicated conceptually (files all over the place for example) than appfolders.
        Then why use packages? I think the reasons are:
        1. Efficiency of HD space and memory
        2. Some things are easier to do with files all over the place.

        In the end, it's millions of people who have to put up with the confusing situation of packages just so that a few very smart developers can have it easier. That's the wrong way around! Even those same developers USE the program many more ti
      • by IamTheRealMike ( 537420 ) on Sunday March 27, 2005 @07:17PM (#12062558)
        The autopackage FAQ has "what's wrong with NeXT/MacOSX style appfolders", but it seems to consist mostly of hand-waving and straw men. They don't seem to understand how NeXT/Mac apps work, e.g., w.r.t. linking.

        Could you elaborate please? Of course it's possible to build a system that uses appfolders, NeXTStep is one, however:

        • Doing that on Linux is hard, because of dependencies and the lack of a large platform. I'm not interested in theoretical systems, I'm interested in what I use which is Linux. I'm also not interested in MacOS X because I don't use that, and because it's proprietary which is totally not the way forward for our society.

        • Linux isn't designed to allow appfolders easily. The freedesktop.org specs are oriented around the concept of "drop a file in this directory, update a cache". This works well for package managers, less well for appfolders. It's done that way for valid reasons: the menu system is designed with administrators, Windows compatibility and internationalization in mind. The appfolders menu system isn't really designed per se, it's just a convention to put stuff in /Applications (and a shockingly large number of apps break if they aren't in /Applications). (A minimal desktop-entry file of the kind these specs describe is sketched after this list.)

        • The issues with linking etc. are well known. Yes, I understand how MacOS X linking works; that includes how frameworks work and also the weak linkage Apple use. Before suggesting we don't understand linking, how about reviewing things like apbuild and relaytool? Nonetheless, new libraries are manufactured at a quite astonishing rate on Linux: the openness lends itself to massive and extremely fine-grained code reuse. That means we can't just rely on having a bigass platform, though one would be nice. Dep resolution is still important simply to prevent your OS becoming totally obsolete within a year.
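
        To illustrate the "drop a file in this directory" convention mentioned above, a minimal desktop entry of the kind the freedesktop.org specs describe (the application name is made up):

          [Desktop Entry]
          Type=Application
          Name=FooEditor
          Exec=fooeditor %F
          Icon=fooeditor
          Categories=Utility;TextEditor;

          # installed to e.g. /usr/share/applications/fooeditor.desktop,
          # after which the desktop's menu cache is refreshed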

        The only person who has really done serious work with appfolders on Linux is Thomas Leonard, an extremely smart man who was behind the ROX desktop. Note that appfolders aren't a Mac or NeXT invention, they were done by RISC OS back when I was a wee kid.

        As Thomas found out, dependencies are kind of a tricky problem with appfolders. His solution was ZeroInstall, a very nice piece of work I must say. However it had issues, as software often does, and now he is designing something called the "Injector". This has quite a few concepts similar to autopackage, namely management of interfaces and dependencies. So in the process of "fixing" appfolders to have all the features people wanted, he ended up with something that vaguely resembles systems like autopackage and apt-get. Note: that doesn't mean to imply that the Injector is useless or anything, it's not and has some good ideas. I encourage people to check it out!

        Just be aware that appfolders tend to evolve into installers eventually. They did on DOS, RISC OS, and of course on MacOS X as well (iTunes comes in an installer, and it's not the only one...).

  • Wrong Paradigm (Score:5, Insightful)

    by user9918277462 ( 834092 ) on Sunday March 27, 2005 @02:45PM (#12061139) Journal
    I've said it before and I'll say it again: The Windows model of acquiring and running software from a large number of random third parties is broken. It is fundamentally unsafe and, frankly, archaic in 2005. We do not trade 5.25" floppy disks with BASIC games on them, and we certainly shouldn't be downloading self-extracting installers from sketchy websites anymore, regardless of OS.

    The current Linux model of distros integrating and authenticating software from upstream authors helps ensure the security of the userbase as well as providing installation ease of use. This is something we should be proud of rather than trying to imitate the technically inferior competition.

    • Wow... now *that* is an interesting point I'd never considered before. No, I have nothing else to add to this conversation, other than, well put. :)
    • Without a bridge, how do we get even a portion of the change-fearing masses?
    • Re:Wrong Paradigm (Score:4, Insightful)

      by Steven Edwards ( 677151 ) on Sunday March 27, 2005 @02:54PM (#12061203)
      If the Windows Paradigm was broken, people would not use Windows. Yes, there are some things about Windows that suck, but MSI and InstallShield installers are not an example. Windows security in most regards does suck, but packaging is one of the few things Windows does right. You do know you can sign a package in Windows, right? Vendor certificates work; just install any package from Microsoft or from any other major third-party vendor.

      I guess you would only be happy if we just pulled everything down from SVN/CVS and built from source.
      • Re:Wrong Paradigm (Score:3, Insightful)

        by schon ( 31600 )
        If the Windows Paradigm was broken, people would not use Windows.

        Yeah, just like if the ActiveX-plugin paradigm was broken, nobody would use IE, right?

        Most users have *no* clue if a piece of software is designed incorrectly or not, it has exactly zero bearing on whether the masses use a particular piece of software or not.
      • Re:Wrong Paradigm (Score:5, Insightful)

        by labratuk ( 204918 ) on Sunday March 27, 2005 @03:44PM (#12061440)
        If the Windows Paradigm was broken, people would not use Windows.

        I'll tell you this now: the packaging system is not the factor that people base their decision to run Windows on.

        Yes, there are some things about Windows that suck, but MSI and InstallShield installers are not an example.

        When you are installing from installshield, you're basically saying: 'Hello random executable from the internet (even if you are signed by someone), here, overwrite any of my libraries you'd like, with whatever obscure or customised version you want. Oh, and while you're at it, do whatever you want to my registry...'

        I guess you would only be happy if we just pulled everything down from SVN/CVS and built from source.

        That's a strawman attack. He didn't say anything like that - in fact it's the complete opposite of what he was arguing.
      • Re:Wrong Paradigm (Score:5, Insightful)

        by ferratus ( 244145 ) * on Sunday March 27, 2005 @04:01PM (#12061542) Homepage
        I don't think MSI or InstallShield (or any other Windows installer, for that matter) is broken, but I do agree with the parent post that the way to *get* software on Windows is not all that good.

        If there's one thing I love about Linux, it is the way I can download and install software using a single command (or a GUI tool) in most distros.

        Even Gentoo, not exactly regarded as the most user-friendly distro, allows one to download and install software by doing:

        emerge XYZ

        That's it. Same goes for Mandrake, Debian, Fedora, etc. End-user distros like Linspire even go further by allowing you to browse through all available software, look up the description and then perform a "one-click" install.

        I think that's great, and a whole lot better than the Windows (and Mac OS X) alternative, where you have to look for software on the web, try to see if it contains malware, download it, run the installer, etc.

        One of the advantages of the system is that the upstream provider (i.e. usually your distro) checks the package for validity. The packages you download won't contain viruses or spyware (even if those were to exist on Linux) because the provider would likely not allow them... something MS would certainly do if they controlled the software people are downloading.

        I know some packages are hard to install (Gnome for example) but for the most part, I feel software installation is a lot easier on Linux than on Windows, unless you go the CVS/SVN route and compile everything yourself.

        At least on Mac OS X, you usually simply drag and drop the Application in the Applications folder and that's it. While not perfect, it's a whole lot better than Windows.
        • Re:Wrong Paradigm (Score:3, Insightful)

          by Julian352 ( 108216 )
          That is all nice until you find a package that you need that somehow escaped the repository. At that time comes the painful part of finding all the dependencies, installing them and then manually configuring and installing the package. (And don't even ask me how to uninstall)

          For example, I needed a swi-prolog installation for a small class project a couple days back. I needed the GUI library, which means the package available in Gentoo (swi-prolog-lite) would not be sufficient. Thus I had to download the .
      • Re:Wrong Paradigm (Score:3, Interesting)

        by internic ( 453511 )

        I don't know much technical detail about software packaging, either in Windows or Linux, so I can only speak from my experience. My experience has been that package management in Linux is much preferable to Windows. When I used Windows 2000, I would often install software that later could not be completely uninstalled (broken uninstall), and often software that was installed would make undesirable and unauthorized changes to settings like file associations. I'm not talking about malware he

    • Re:Wrong Paradigm (Score:5, Insightful)

      by karmaflux ( 148909 ) on Sunday March 27, 2005 @02:57PM (#12061223)
      Bittorrent calls you a liar, buddy. We trade 5.25" floppies in a metaphorical sense constantly. When I develop a program that takes random input and outputs Frank & Earnest cartoons, I don't want to have to wait for some Board of Linux Usage Oversight to give my 5k perl script the Stamp of Approval.

      Nobody's trying to copy the Windows paradigm with autopackage. What they're trying to do is break down that barrier to cross-distribution software releasing. Your average desktop user does not want to compile software. Dropping to a terminal, cd pathtoapp, tar -zxvf whatever.tar.gz, cd newpath, ./configure; make; make install is too much shit for a user -- and then how do you uninstall? Keep the source directory there forever?

      "If they can't compile they should run Windows" is a stupid, backwards attitude, and autopackage is trying to fix it. Relying on upstream content providers is dangerous -- what happens when you disagree with your upstream provider? You have to switch distributions? Pat recently dropped Gnome support for Slackware -- I still run gnome. I do it with a third-party package from dropline. Is that broken? No.

      The way to fix the problems you describe is to educate users, not to remove their usage privileges. Teach people not to install untrusted software -- and teach them how to tell what software to trust! Don't just slap their hand and yell NO.
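
      For reference, the full ritual being criticised, plus the uninstall step that only works when the project's generated Makefile happens to provide an uninstall target (file names are placeholders):

        tar -zxvf whatever.tar.gz
        cd whatever-1.0
        ./configure && make
        sudo make install
        # later, only if the Makefile supports it:
        cd /path/to/whatever-1.0 && sudo make uninstall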
      • Re:Wrong Paradigm (Score:3, Interesting)

        by labratuk ( 204918 )
        Bittorrent calls you a liar, buddy. We trade 5.25" floppies in a metaphorical sense constantly.

        That's content, not programs. Completely different. And when it's Linux ISOs, they're always the same because they are checked against the hashes. The end result is the same thing as having a very fast FTP server.

        When I develop a program that takes random input and outputs Frank & Earnest cartoons, I don't want to have to wait for some Board of Linux Usage Oversight to give my 5k perl script the Stamp of Approval
      • "Dropping to a terminal, cd pathtoapp, tar -jxvf whatever.tar.gz, cd newpath, ./configure; make; make install is too much shit for a user -- and then how to uninstall? Keep the source directory there forever? "

        Agreed. But how is using 2 package systems (as the autopackage author recommends), with a weird distinction between what's installed in your current distro and 'third party' apps, easier than:

        1). Putting a link to 'Synaptic software installer'
        2). Having them browse for their app or simply type its name.
        3
  • by mp3phish ( 747341 ) on Sunday March 27, 2005 @02:46PM (#12061148)
    I have been following autopackage for a while now.. It looks promising. This release will be the test to see if anybody will take it seriously (I hope so). Autopackage brings some really cool features to the table:
    • Frontends to different windowing and desktop systems.
    • Able to resolve dependencies even if you installed other software from source, or with RPM or DEB
    • You will be able to download one package and install it on several different distributions.
    Essentially, this will be as flexible as tarballs, only they will install easily, and have clean upgrade paths and uninstall paths, with clean dependency resolution. It sounds too good to be true, but you can only know if you try it.

    Here is the sourceforge link [freshmeat.net] with some more info and downloading.
  • Seriously. I had envisioned something similar [madpenguin.org] last year, but this really takes the cake, so to speak. I have yet to try Autopackage, but after seeing this, I'm sold - especially if it works as intended. Cross-distro package management is what we need. Sure, DEB, RPM, TGZ, etc. are all excellent in their own right, but being able to install packages across multiple distros is what we really need. I for one am impressed. Of course there are a few technical details that I need to learn about as far as cros
  • ...if one could just make one package and have it install on any distribution. It seems that the page is slashdotted at the moment so I can't see anything but the frontpage. :-/

    .haeger

  • Mirrordot of Flash (Score:5, Informative)

    by vectorian798 ( 792613 ) on Sunday March 27, 2005 @03:02PM (#12061245)
    Flash Demo [mirrordot.com] Screenshots [mirrordot.com]

    I have to say this is like a godsend for Linux. Most lay users will want some easy installation like this instead of using something like Yum (even if it is a GUI front-end to yum like GYUM). This is one giant step towards a viable desktop Linux - and I believe that it isn't a replacement for apt/yum/[INSERT YOUR FLAVOR HERE] but uses them under the hood.

    Before everyone starts bashing it and saying that apt or emerge or whatever they use is the way to go, seriously think about it - one-click installation, from a FRIENDLY user interface, and an easy-to-manage system for installing and uninstalling programs. Now if this were part of the base install on many distributions and some sort of standard were established (seriously, we need standards), I could probably convince my scared-of-Linux-because-it-is-hardcore friends to actually try Linux out.
  • by dotslashdot ( 694478 ) on Sunday March 27, 2005 @03:03PM (#12061253)
    One format to rule them all, One format to find them One format to bring them all and in the package bind them...
  • by jesterzog ( 189797 ) on Sunday March 27, 2005 @03:12PM (#12061300) Journal

    I'm presently running Debian. I've briefly played with making my own .deb files so I'd be able to install some of my own things without necessarily completely losing track of everywhere they were scattered. With all of the extra meta files that need editing, the source packages versus binary packages, and everything else, though, the whole process of designing a .deb package looked a bit too structured and complicated for me to bother learning about... at least within the time I was prepared to spend.

    If AutoPackage has a straightforward way to generate a simple package, such as from a tar.gz file, I might find it very helpful. What I'm wondering about, though, is how bad does it get when two package managers conflict? E.g. Apt and AutoPackage might end up trying to control the same file, get mixed up about which manager is providing a particular dependency (particularly if Apt tries to grab a package that AutoPackage has already installed), or whatever else.

    It also sounds a bit of extra work having two sets of commands to manage packages on one system, so ideally (in my world), I guess AutoPackage would wrap around Apt or whatever other manager, and invoke it when appropriate. Does AutoPackage just fight for control when there are conflicts, or does it integrate with other package managers nicely?

    The server seems to be very slashdotted right now, so I can't do much reading up on it. Does this sort of conflict thing turn out to be much of a problem?
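
    (One partial answer on the Debian side, separate from AutoPackage: the checkinstall tool wraps "make install" and registers the result with dpkg, so locally built software at least stays visible to the native package manager. A rough sketch, with a made-up project name:)

      ./configure && make
      sudo checkinstall --pkgname=myapp --pkgversion=1.0
      # produces and installs a .deb that dpkg/apt know about,
      # so it can later be removed with: sudo dpkg -r myapp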

  • by levell ( 538346 ) on Sunday March 27, 2005 @03:20PM (#12061333) Homepage
    Until I can install an autopackage on my FC3 desktop, and rpm (and therefore yum) can use it in dependency resolution and update it, I don't intend to use it.

    This isn't meant to be a criticism; I realise that they plan to do this and it takes time - doing everything that everyone wants is a long process ;)
  • BackPackage (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Sunday March 27, 2005 @03:21PM (#12061339) Homepage Journal
    The package frontends are getting better. But we desperately need better backend apps. The inter-package dependencies are getting more complex, and broken dependency references abound. And we need a structure that represents a further distinction than just source/binary: we need a -dev structure, so packages that depend on the headers and config tools of other packages can find them automatically, without having to figure out their (distro-dependent) development package name. For that matter, a single, universal "lib-config" tool that spits back the versions, headers and filesystem locations of any library installed by the package client would really improve productivity and reliability.
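
    (pkg-config already covers much of that "lib-config" role for libraries that ship .pc files; a quick illustration, using GTK+ as the example library:)

      pkg-config --modversion gtk+-2.0     # installed version
      pkg-config --cflags gtk+-2.0         # header search paths
      pkg-config --libs gtk+-2.0           # linker flags and library locations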

    The really big leap in backends would be a distributed repository. Instead of just a network of (unsync'ed) mirrors of a single monolithic repository, we need a mirrored or otherwise distributed directory of repositories, each with an overlapping fraction of the current repositories. That will accommodate the bandwidth and storage requirements for installing specific versions of packages, across the exploding Internet userbase, especially as the mirror:client ratio gets worse. Alternate repositories should be the rule, not just the exception [apt-get.org].
    • Re:BackPackage (Score:5, Interesting)

      by IamTheRealMike ( 537420 ) on Sunday March 27, 2005 @03:42PM (#12061430)
      The really big leap in backends would be a distributed repository.

      I suspect you are thinking of something like Conary. However... that said, a distributed and decentralised package management system is what autopackage is all about. Autopackages can resolve dependencies in a manner that does not require large monolithic databases: the packages themselves know where to go to find their dependencies if they're missing (and in future they'll be able to use yum, apt-get, etc. as well, just in case).

      Basically, the apt-get type model doesn't work too well once you start having, say, >50 repositories altogether, as co-ordinating the overlaps becomes too much of a nightmare. A much less tightly coupled system works better, which is what we have here.

  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Sunday March 27, 2005 @03:22PM (#12061347)
    Comment removed based on user account deletion
  • by jxdxbx ( 158858 ) on Sunday March 27, 2005 @03:32PM (#12061391)
    For distributing user applications, at least.

    There is no earthly reason why a GUI application should scatter files hither and yon across a hard drive, and why installing a program should require some package or installer or whatever.
    I cannot believe the hassle that I have to go through to install software on my Linux box as opposed to my Mac.

    An OS X application consists of one file -- really a bundle. It is a directory that acts like a single executable file. Everything it needs to run that is not part of the basic OS X setup is in that file.

    You don't even need to install the application. You can just run it from its compressed disk image that is still sitting in your downloads directory, if you like. Or you can copy it to your hard drive wherever you like. When you tire of it, you delete it.

    Now, "Linux" is not capable of doing this because no one runs just Linux. But there is no reason why, say, Gnome apps can't be distributed this way. If there are technical issues in the way, they need to be resolved. Because the OS X way is better that the Linux and the Windows methods, and ought to be copied.

    (ps: I do know that Unix programs are often installed via packages in OS X, as well as software that for whatever reason needs to modify the OS. But these are very rare and approached warily by seasoned OS X users.)
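
    (For those who have never looked inside one, a typical bundle layout, abbreviated:)

      MyApp.app/
        Contents/
          Info.plist            <- metadata: identifier, version, document types
          MacOS/MyApp           <- the actual executable
          Resources/            <- icons, nibs, localisations
          Frameworks/           <- private copies of non-system libraries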
  • by Cylix ( 55374 ) * on Sunday March 27, 2005 @04:00PM (#12061529) Homepage Journal
    Where can I find the rpm? ;)
  • by agraupe ( 769778 ) on Sunday March 27, 2005 @05:57PM (#12062108) Journal
    ...and I was amazed by how well it worked! I think this could easily be the answer to the Linux software installation problem. I am a Gentoo user, and I like Portage, but autopackage is a slick piece of software. Perhaps it could be used in conjunction with Portage (i.e. remove the idea of "Gentoo packages" for end-user-type apps and make Portage interface with autopackage). Either way, I think that a stable Autopackage is definitely a step forward for desktop Linux.
