
CDE — Making Linux Portability Easy

ihaque writes "A Stanford researcher, Philip Guo, has developed a tool called CDE to automatically package up a Linux program and all its dependencies (including system-level libraries, fonts, etc!) so that it can be run out of the box on another Linux machine without a lot of complicated work setting up libraries and program versions or dealing with dependency version hell. He's got binaries, source code, and a screencast up. Looks to be really useful for large cluster/cloud deployments as well as program sharing. Says Guo, 'CDE is a tool that automatically packages up the Code, Data, and Environment involved in running any Linux command so that it can execute identically on another computer without any installation or configuration. The only requirement is that the other computer have the same hardware architecture (e.g., x86) and major kernel version (e.g., 2.6.X) as yours. CDE allows you to easily run programs without the dependency hell that inevitably occurs when attempting to install software or libraries. You can use CDE to allow your colleagues to reproduce and build upon your computational experiments, to quickly deploy prototype software to a compute cluster, and to submit executable bug reports.'"
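To give a rough sense of the mechanism: CDE traces the program as it runs and copies every file it touches into a self-contained package directory. The sketch below is not CDE itself (CDE hooks file-related system calls with ptrace and also redirects paths when the package is replayed on another machine); it is only a crude Python illustration of the same record-and-copy idea, driven by strace, and the package layout and script name are invented for this example.

    #!/usr/bin/env python3
    # Crude illustration of the idea behind CDE, not CDE itself: run a command,
    # record every file it successfully opens (here via strace; CDE uses ptrace
    # directly), and copy those files into a package directory that mirrors
    # their original paths. Assumes strace is installed; the package layout
    # ("cde-like-package/root") is made up for this sketch.
    import re
    import shutil
    import subprocess
    import sys
    import tempfile
    from pathlib import Path

    PACKAGE_ROOT = Path("cde-like-package/root")

    def trace_opened_files(command):
        """Run `command` under strace and return the paths it opened successfully."""
        with tempfile.NamedTemporaryFile(suffix=".trace", delete=False) as log:
            logfile = log.name
        subprocess.run(["strace", "-f", "-e", "trace=open,openat",
                        "-o", logfile] + command, check=False)
        # Successful opens look like: openat(AT_FDCWD, "/lib/libc.so.6", O_RDONLY) = 3
        pattern = re.compile(r'open(?:at)?\(.*?"([^"]+)".*\)\s*=\s*\d+')
        opened = set()
        for line in Path(logfile).read_text().splitlines():
            match = pattern.search(line)
            if match:
                opened.add(match.group(1))
        return opened

    def copy_into_package(paths):
        """Copy each regular file into PACKAGE_ROOT, preserving its absolute path."""
        copied = 0
        for path in paths:
            src = Path(path)
            if not src.is_file():
                continue
            dest = PACKAGE_ROOT / src.resolve().relative_to("/")
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            copied += 1
        return copied

    if __name__ == "__main__":
        # Example: python3 package_sketch.py python3 my_analysis.py input.dat
        files = trace_opened_files(sys.argv[1:])
        print(f"copied {copy_into_package(files)} files into {PACKAGE_ROOT}/")
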
  • by Anonymous Coward on Friday November 12, 2010 @08:38PM (#34212442)

    CDE will always mean Common Desktop Environment to me.

  • CDE 2 (Score:3, Informative)

    by ukpyr ( 53793 ) on Friday November 12, 2010 @08:41PM (#34212460)

    I'm just pointing out that a major application - one that's not so major anymore - the Common Desktop Environment, already uses this acronym :)
    Does sound like a neat tool though!

  • by countertrolling ( 1585477 ) on Friday November 12, 2010 @08:56PM (#34212554) Journal

    The Mac has been doing this since almost the very beginning... The app was a single file, not even a package. The other systems, Windows and Linux, are madness. It's like splatter painting. Or, for your airplane analogy, it's like designing the instrument panel with a shotgun.

  • by avaik6 ( 1927978 ) on Friday November 12, 2010 @09:07PM (#34212616)

    I would LOVE to have mod points and vote this up... I always loved the fact that applications use the existing blocks on the system to do their magic, and by sharing those blocks you can have more apps in less space than on other platform$. But with this... if you install every f*cking package this way, you will end up with 12 different glibc versions (hey, maybe 12 copies of the same glibc version, one for each app that requires it), and 81725 copies of every other library. I think it's a _GREAT_ tool for a fast, no-brains-available deployment of some in-development app, or for debugging some weird shit. But please, do not pretend to install all my software like this... I don't want gedit to weigh 100MB.
  • by Anonymous Coward on Friday November 12, 2010 @09:07PM (#34212618)

    Nope. Mac programs are still "packages". Even before OS X, you had the data fork and resource fork that held different parts of the app.

    Right click any Mac app -- you'll get a context-menu item called "Show Package Contents".

  • by 99BottlesOfBeerInMyF ( 813746 ) on Friday November 12, 2010 @09:08PM (#34212624)

    Great, now we can have outdated exploitable libs and every other kind of BS that comes with this. Might as well just statically link everything. Package managers exist for a reason; use them. Do not bring the errors of Windows to us.

    Or you could just use OpenStep, get dynamic libraries and portable apps. This is a long solved problem.

  • by h4rr4r ( 612664 ) on Friday November 12, 2010 @09:10PM (#34212640)

    No, the program should just use the libraries the system has. This is what package management is all about. What you prefer is only the product of brain-dead OSes that lack proper package management.

  • Re:It's About Time (Score:5, Informative)

    by neocephas ( 840876 ) on Friday November 12, 2010 @10:21PM (#34213012)

    I think most people here are not understanding the target audience for this tool (hint: it's not for your typical Linux environment). It's not about package management or having a universal installer... it's about being able to run your application in a different environment where you don't have admin rights.

    In a lot of university clusters or compute grids, researchers have access to a large collection of compute nodes, but they usually don't have any rights on those machines. In fact, most of the time the programs are run in a sandbox and have a restrictive environment. To run their codes reliably, researchers often have to perform some sort of static linking or package up all of the dependencies with the executable. apt-get or yum are not options in these environments... you may not even be able to ssh into them. Ideally, you could ask the system administrator who controls the cluster to install certain packages, but again, this is not always possible, particularly if the researcher requires a niche package used in their domain.

    Moreover, the cluster may be composed of a heterogeneous set of machines with different versions of Linux. Package management does not help you here. The only way to reliably execute your programs on such a heterogeneous cluster is to statically link or include your dependencies. If you are wondering who would use such a maddening environment where you have no admin rights... google Condor, OpenScienceGrid, and Globus. This is how a lot of research computation is done.

    Of course, the hot new thing is virtual machines and clouds... but firing up a VM each time you want to run an application is very heavyweight, especially if your application has a short run-time.

    TL;DR: this isn't for your typical Ubuntu or Fedora install; it's for scientific research that is done on restrictive computing clusters and grids.

    As a side note, I made and use a much cruder tool http://bitbucket.org/pbui/starch/ [bitbucket.org] that packages everything up (executables, libraries, and data) in a self-extracting tarball which can be executed on remote hosts. It's not as slick as CDE, but it's been used with success by various research groups that I collaborate with.
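
    For a rough feel of the general idea (this is not starch itself, just an illustrative sketch): bundle a directory into a tarball, embed it in a single runnable file, and have that file unpack itself and launch a command on the remote host. The sketch below does this in Python; the names are invented and the only assumption is that the remote host has Python 3.

        #!/usr/bin/env python3
        # Illustrative sketch of a self-extracting bundle (not starch itself).
        # Packs a directory plus a launch command into one Python file that,
        # when run on a remote host, extracts itself to a temp dir and runs
        # the command. Only assumes Python 3 on the remote side.
        import base64
        import io
        import sys
        import tarfile

        STUB = """import base64, io, os, subprocess, tarfile, tempfile
        workdir = tempfile.mkdtemp(prefix="sfx-")
        with tarfile.open(fileobj=io.BytesIO(base64.b64decode(PAYLOAD))) as tar:
            tar.extractall(workdir)
        os.chdir(workdir)
        subprocess.run(COMMAND, check=True)
        """

        def build(bundle_dir, command, output="bundle.py"):
            buf = io.BytesIO()
            with tarfile.open(fileobj=buf, mode="w:gz") as tar:
                tar.add(bundle_dir, arcname=".")
            payload = base64.b64encode(buf.getvalue()).decode("ascii")
            with open(output, "w") as out:
                out.write(f"PAYLOAD = {payload!r}\n")
                out.write(f"COMMAND = {list(command)!r}\n")
                out.write(STUB)
            print(f"wrote {output}; run it remotely with: python3 {output}")

        if __name__ == "__main__":
            # e.g. python3 make_bundle.py ./myapp ./run_experiment.sh --trials 10
            build(sys.argv[1], sys.argv[2:])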

  • by Anonymous Coward on Friday November 12, 2010 @10:48PM (#34213148)

    Rather, when that library gets updated, it doesn't break 20 apps. It only has the potential to break the ones that do not contain their own copy (and thus link to the system-supplied version.) After updating a library, I'd rather not have to also update 20 apps just to get a working system again.

  • Linux is not Windows (Score:3, Informative)

    by Zero__Kelvin ( 151819 ) on Saturday November 13, 2010 @12:24AM (#34213560) Homepage

    "Then the package manager will fail to install the program because the program requires a later version of a given library than is available in the long-term-supported distribution that the end user is running."

    No, it won't. Unlike Windows, Linux is perfectly happy having multiple versions of any given library installed and available: "Fortunately, on Unix-like systems (including Linux) you can have multiple versions of a library loaded at the same time, so while there is some disk space loss, users can still run 'old' programs needing old libraries." [faqs.org]
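
    As a quick illustration (the library directory and sonames below are only examples and vary by distro), you can often see side-by-side major versions sitting on disk and load one of them explicitly:

        import ctypes, glob
        # Example only: the directory and sonames vary by distro. Several major
        # versions of the "same" library can be installed at once; each program
        # is linked against a specific soname and the loader gives it that one.
        for path in sorted(glob.glob("/usr/lib/*/libcrypto.so.*") + glob.glob("/usr/lib64/libcrypto.so.*")):
            print(path)  # e.g. libcrypto.so.1.1 and libcrypto.so.3 side by side
        try:
            ctypes.CDLL("libcrypto.so.3")  # ask for one specific installed version
            print("loaded libcrypto.so.3")
        except OSError:
            print("libcrypto.so.3 not installed here; use a soname from the list above")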

  • by bmo ( 77928 ) on Saturday November 13, 2010 @01:17AM (#34213744)

    You're not even replying to anything I said, except to post an ill-informed rant.

    There are *three* package types. And what these do is far more than anything seen in the Windows universe.

    Windows doesn't even *have* a packaging scheme. Sure, there are installers, but that's all they do. There is no dependency resolution. There isn't any updating except manually or if the individual program checks by itself. Things really haven't come very far from the self-extracting .zip file method of installing. What we've gotten over the years is a graphical front-end to an archive extraction and a script to tweak the registry, and a script to uninstall. That's it. It's arcane. It's stone knives and bear skins.

    At least Microsoft dispensed with the bogus "add and remove software" and renamed it "remove software" in the Control Panel. In XP if you came from Debian, you'd expect the "add and remove software" would be where you'd find the Windows equivalent of Synaptic. Sorry, chuckles, it's not. Now in Windows 7 it's finally fixed. It's a small amount of honesty, but it's still better than before.

    And besides, do you know how long it takes to make a package for Linux after compiling? Literally less than 5 minutes, and it can be automated. Doing packaging for 3 package formats (.deb, .rpm, tar.gz) is not a lot of work at all.

    What is available for packaging and distributing Linux software is light years ahead of anything seen in the Windows universe. The Apple "app store" is the only thing that comes close, and even that is clunky compared to a fully functioning Debian or RPM repository.

    Your rant is typical of the Windows user who thinks he knows something about Linux but doesn't really. I suggest you acquaint yourself with what you're talking about before you open your mouth here again and sound even more silly.

    --
    BMO

  • by ischorr ( 657205 ) on Saturday November 13, 2010 @02:42AM (#34213994)

    Why in the world would you need an installer for something like this? I don't understand why 99% of applications are ever distributed that way, except for poor OS design or just bad developer habits.

    OS X (and OpenStep before it) did it right, in the sense that the app is a self-contained unit: well-laid-out and hierarchical inside, but a single unit to the user. "Installation" typically involves a single move command or drag-and-drop of a single "file" (the bundle), and portability across systems can't be beat, since the developer can distribute a single bundle that transparently functions on as many platforms as exist for the OS. It's a beautifully well-designed piece of work. The choice of usability and ease of distribution versus some amount of bloat is left to the developer, and I think it's a great trade-off.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday November 13, 2010 @04:17AM (#34214242) Journal

    OS X is routinely the first to fall in the Pwn2Own contest, among others.

  • Now it depends on your needs; if you're constantly building stuff from git/CPAN, it sounds like you're intentionally living on the bleeding edge and/or doing development...
    For most users, and especially production servers, having non-cutting-edge packages managed centrally is far more desirable. Personally, when I want cutting edge I use Gentoo, so I can have a mix of stable core packages installed with a select few at cutting-edge versions while still being package-managed. If I want to install something not covered by the package system (which is quite rare with all the Gentoo overlays available), I can build my own package quite easily, which makes it easy to replace if a newer package comes out later.

  • by Anonymous Coward on Saturday November 13, 2010 @05:30AM (#34214458)

    "I'm unfamiliar with this 'CDE' but you're compelling me to try it."
    You can't unless you've got a proper UNIX system (Solaris, HP-UX, Ultrix/Tru64). You can get some of the look with KDE's CDE theme.
    As far as I remember, the first versions of KDE looked and felt very much like CDE (I used a Red Hat 4.x that looked just like CDE);
    the "Windows-ization" of KDE's UI (we need a Start-like menu button in the lower left corner) happened much later.

  • Re:It's About Time (Score:3, Informative)

    by Lennie ( 16154 ) on Saturday November 13, 2010 @07:57AM (#34214876)

    You can also run different profiles at the same time with -no-remote -P <profile>. You can run different versions, or the same one.
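
    For instance (a sketch only: -no-remote and -P are Firefox's standard command-line options, but the profile names here are invented and would have to exist already), launching two independent instances side by side:

        import subprocess
        # Launch two Firefox instances, each bound to its own profile.
        # -no-remote keeps the second launch from handing off to the first
        # instance; -P selects a named profile. "work" and "testing" are
        # example names (create profiles beforehand via the profile manager).
        for profile in ("work", "testing"):
            subprocess.Popen(["firefox", "-no-remote", "-P", profile])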

  • Re:It's About Time (Score:3, Informative)

    by multipartmixed ( 163409 ) on Saturday November 13, 2010 @09:21AM (#34215112) Homepage

    Why dream?

    Buy a Mac.

    Software is most frequently distributed in .dmg (disk image) files. You download the file, maybe gunzip it, then you click on it. That mounts the volume. Then you click on the application. That runs it.

    If you want to turf the .dmg and put the application in your application folder, you just drag it from the dmg to the application folder (or fan) and let go.

    The biggest piece of the puzzle to making this work, from a systems POV, is the Mach-O linker. The linker understands executable-relative linking. The next biggest pieces are the uniformity of the install environment, the common directory structure that goes with applications (i.e., stuff that ships with the app goes with the app, instead of parts landing in /bin, /etc, /usr/lib, and so on), and the file manager (Finder) understanding that directory structure.

    Put all these pieces together and you have a really nice, cohesive environment for day-to-day use. The best part is, it's still a UNIX box, so you can pop open a terminal window and do whatever the hell you want.

    It will be a long time before Linux sees this type of paradigm, as it requires deep cooperation between the KDE/GNOME folks, the development toolchains, the system linker, and a reinvention of the directory layout in the LSB.

  • by Rich0 ( 548339 ) on Saturday November 13, 2010 @09:39AM (#34215170) Homepage

    "From the user's point of view it's completely illogical to upgrade the whole system just because you want a new feature in amaroK 2.4 while your distro only packages 2.3, you expect one application to install or upgrade independently of any other application. That does not happen with Linux."

    Sure it does. It just doesn't happen with Ubuntu, or Debian, or RHEL/Fedora/SUSE/etc.

    If you use a source-based distro, then unless the newer version depends on some API exposed by a newer library, it will build just fine and run against the old one.

    It used to be that the preferred upstream way of distributing packages (outside of a package manager) was as source, for exactly this reason. The same tarball worked just fine on sparc/sgi/hpux/linux/freebsd/etc.

    I do see the utility of something like this tool, but I'd hate to see it used widely. Besides security, there is also wasted memory: if 10 programs are linked against the same library, it is only loaded into RAM once. If 10 programs are linked against 10 versions of the library, or even the same version built 10 times (and they load those different copies), then the system can't share that memory.

  • by Junta ( 36770 ) on Saturday November 13, 2010 @03:46PM (#34216984)

    "4) It needs to be open-source for this model to work. Some software isn't. =)"

    Not true. Adobe publishes their closed Flash plugin via their own proprietary apt and yum repositories. The distros allow arbitrary vendors to hook into the single update-management infrastructure with nothing more than the end user's permission.

"If it ain't broke, don't fix it." - Bert Lantz

Working...