CDE — Making Linux Portability Easy

ihaque writes "A Stanford researcher, Philip Guo, has developed a tool called CDE to automatically package up a Linux program and all its dependencies (including system-level libraries, fonts, etc!) so that it can be run out of the box on another Linux machine without a lot of complicated work setting up libraries and program versions or dealing with dependency version hell. He's got binaries, source code, and a screencast up. Looks to be really useful for large cluster/cloud deployments as well as program sharing. Says Guo, 'CDE is a tool that automatically packages up the Code, Data, and Environment involved in running any Linux command so that it can execute identically on another computer without any installation or configuration. The only requirement is that the other computer have the same hardware architecture (e.g., x86) and major kernel version (e.g., 2.6.X) as yours. CDE allows you to easily run programs without the dependency hell that inevitably occurs when attempting to install software or libraries. You can use CDE to allow your colleagues to reproduce and build upon your computational experiments, to quickly deploy prototype software to a compute cluster, and to submit executable bug reports.'"
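For a concrete picture of what the tool automates, here is a rough, hypothetical Python sketch of the packaging half of the idea: run a command, record which files it opens, and copy them into a self-contained package root. This is not Guo's implementation (CDE hooks system calls with ptrace and also redirects file accesses when the package runs on another machine); the sketch simply shells out to strace, which it assumes is installed.

```python
#!/usr/bin/env python3
"""Toy sketch of the CDE packaging idea: trace which files a command opens
and copy them into a self-contained package root.  Not Guo's implementation
(real CDE uses ptrace directly and pairs packaging with a run-time component);
this version just parses strace output and assumes strace is installed."""

import os
import re
import shutil
import subprocess
import sys
import tempfile

PKG_ROOT = "cde-package-sketch"   # hypothetical output directory
# Matches open()/openat() calls in strace output and captures path and result.
OPEN_RE = re.compile(r'open(?:at)?\(.*?"([^"]+)".*\)\s*=\s*(-?\d+)')

def package(cmd):
    logpath = os.path.join(tempfile.mkdtemp(), "trace.log")
    # Run the command under strace, logging every open()/openat() call,
    # following child processes too (-f).
    subprocess.run(["strace", "-f", "-e", "trace=open,openat",
                    "-o", logpath] + list(cmd), check=True)
    with open(logpath) as log:
        for line in log:
            match = OPEN_RE.search(line)
            if not match:
                continue
            path, ret = match.group(1), int(match.group(2))
            if ret < 0 or not os.path.isfile(path):
                continue              # skip failed opens and non-regular files
            dest = os.path.join(PKG_ROOT, path.lstrip("/"))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)  # keep permissions and timestamps

if __name__ == "__main__":
    package(sys.argv[1:])             # e.g. ./sketch.py python3 simulation.py
```

Note that this covers only the packaging half; CDE itself also handles the run side, redirecting the program's file accesses into the packaged tree on the target machine.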
  • by Jeremiah Cornelius ( 137 ) on Friday November 12, 2010 @08:45PM (#34212472) Homepage Journal

    Me too.

    Common to Sun and HP. :-)

    I guess Ultrix, too.

    Regarding this development - it's really what NeXT and later Mac OS X application bundles do. In the Windows world there are VMware ThinApp and Microsoft's App-V.

  • Re:It's About Time (Score:5, Interesting)

    by couchslug ( 175151 ) on Friday November 12, 2010 @08:54PM (#34212544)

    Making applications portable is handy for doing things like running them from a USB stick. It also makes backup much more convenient.

    Copy the program and its data in one shot, carry it with you, and use anywhere.

    Windows apps are ahead of the game on this one:

    http://portableapps.com/ [portableapps.com]

  • Re:It's About Time (Score:2, Interesting)

    by h4rr4r ( 612664 ) on Friday November 12, 2010 @09:05PM (#34212590)

    If you need the same app everywhere, it is easy enough to either make the data format portable or run the entire OS from the USB stick. Your method just lets you move outdated and almost guaranteed insecure software around. Windows only has this because it still lacks proper package management.

    Bringing that sort of braindead thinking to Linux is a curse, not a blessing.

  • by Haeleth ( 414428 ) on Friday November 12, 2010 @09:32PM (#34212786) Journal

    > Installing software on linux is easier than on windows or osx.

    For software that is in the package manager, yes.

    Where something like this "CDE" might be handy is for software that is not in the package manager. Suppose you've written a program that is only of interest to a handful of users. There's no way it's going to find package maintainers for every major distro, and your users might not be happy building from source code.

    There are also classes of software that are not allowed in the main repositories for some major distros like Fedora and Debian. For example, the authors of indie games might want to let Linux users play without making the whole game open source. Even if they open-sourced the engine, some distros will not permit it in the repos if its only use is to access non-free data.

    Basically, "package once, run everywhere" is very appealing to a certain class of software distributor.

    What I don't see is what CDE offers over any of the dozens of existing autonomous packagers. Or why they chose to confuse people by using the same name as the standard UNIX desktop environment.

  • by xgoat ( 629400 ) on Friday November 12, 2010 @09:33PM (#34212800)

    I very recently published a tool [xgoat.com] that performs a similar task. dynpk (my tool) bundles programs up with packages from your system, then wraps them with some other stuff to create a bundle that essentially allows you to run a Fedora program on a RHEL machine (and probably Ubuntu or Debian, but this is outside my needs...).

    Recompiling loads of libs for RHEL isn't fun or particularly maintainable. Therefore, use the ones from Fedora!

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday November 12, 2010 @10:12PM (#34212974) Homepage Journal

    One method is to have a tool for interrogating the API version and also testing the API against some set of tests that relate to the application being installed. You'd then apply the following:

    • If the API version is within bounds, do not install the library
    • If the API version is outside of bounds but the tests succeed, do not install the library
    • If the API version is greater than the latest supported and the tests fail and a backwards-compatibility library which IS within bounds of the API provided is within the archive, install the backwards-compatibility library
    • If the API version is greater than the latest supported and the tests fail and no backwards-compatibility library is usable, install the supplied library locally to the application, using the package manager, using an alias so there's no name-clash with primary libraries
    • If the API version is less than the minimum supported and the tests fail and the user authorizes an upgrade, use the package manager to upgrade to the supplied library
    • In all other cases, install the supplied library locally to the application, using the package manager, using an alias so there's no name-clash with primary libraries
    • Where the library is installed locally, all information regarding the supplied API must be removed since it's vital it doesn't clash with anything else - however, there must be a maintenance tool for cleaning out such local libraries when they are no longer required

    This should keep redundancy to a minimum. There will be some, since there's nothing in this to collaborate between apps using this method, but it's a start.
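    A rough Python sketch of that decision procedure, with all of the names hypothetical and the actual API probing and compatibility tests stubbed out into parameters:

```python
from dataclasses import dataclass

@dataclass
class Library:
    """Hypothetical description of a library shipped inside a package."""
    name: str
    min_api: int            # oldest system API version the app accepts
    max_api: int            # newest system API version the app accepts
    has_compat_shim: bool   # archive ships a backwards-compat library that is
                            # itself within the accepted API bounds

def decide(lib: Library, system_api: int, tests_pass: bool,
           user_allows_upgrade: bool) -> str:
    """Return the action for one library, following the rules listed above."""
    if lib.min_api <= system_api <= lib.max_api:
        return "skip"                        # API version within bounds
    if tests_pass:
        return "skip"                        # out of bounds but tests succeed
    if system_api > lib.max_api and lib.has_compat_shim:
        return "install-compat-shim"         # newer API, shim available
    if system_api > lib.max_api:
        return "install-local-alias"         # private, aliased copy for the app
    if system_api < lib.min_api and user_allows_upgrade:
        return "upgrade-system-library"      # via the package manager
    return "install-local-alias"             # all other cases: private copy
```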

  • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Friday November 12, 2010 @10:21PM (#34213010) Homepage Journal

    > No, the program should just use the libraries the system has.

    Then the package manager will fail to install the program because the program requires a later version of a given library than is available in the long-term-supported distribution that the end user is running. For example, PHP 5.3 made PHP far less of a pain in the ass, but Ubuntu 8.04 LTS (the version found on Go Daddy dedicated servers, for example) still has 5.0.something or other.

  • Re:It's About Time (Score:5, Interesting)

    by ischorr ( 657205 ) on Friday November 12, 2010 @11:42PM (#34213390)

    That's probably another use, but I really don't think that's the main place where it'd be useful. I DREAM of being able to just download an application archive, extract it *anywhere I want*, and just run it. Just use it, without having to worry. Any application - not the apps (and versions) that some distribution maintainer has gotten around to porting to my flavor.

  • by Zero__Kelvin ( 151819 ) on Saturday November 13, 2010 @12:28AM (#34213578) Homepage

    "I can take something as bloated as MS Office or Photoshop straight from one machine to the next."

    I can pirate software too, but I prefer to use superior free software.

  • Re:It's About Time (Score:3, Interesting)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday November 13, 2010 @04:06AM (#34214216) Journal

    > apt-get or yum are not options in these environments...

    Really? Why not?

    Put another way: Rubygems can install to a home directory, and only requires Ruby itself to be in the path. Are you saying that the sandbox environment doesn't allow you to have reliable filesystem access or modify your environment variables? Because that's all it takes.

    I realize apt-get or yum may require system-level access in their current configuration, but if they can't be configured per-user, that's a limitation of apt-get or yum, not of the idea of a package manager.
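    For instance, a per-user install needs nothing more than a writable directory and a couple of environment variables. A rough sketch using pip's --user mode as a stand-in for the Rubygems example (the package name is just a placeholder):

```python
"""Illustration that per-user package installation needs no root: only a
writable directory (~/.local by default) and environment variables.
pip's --user mode stands in for the Rubygems example; "requests" is just
a placeholder package name."""

import site
import subprocess
import sys

def user_install(package):
    # Installs into the per-user site directory; no admin rights involved.
    subprocess.run([sys.executable, "-m", "pip", "install", "--user", package],
                   check=True)

if __name__ == "__main__":
    print("per-user site-packages:", site.getusersitepackages())
    user_install("requests")
```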

    > firing up a VM each time you want to run an application is very heavyweight...

    Not necessarily, especially when VMs can be cloned with COW memory.

    > the cluster may be composed of a heterogeneous set of machines with different versions of Linux.

    Doesn't seem like an insoluble problem, either. If you're just going to statically link anyway, bundling system libraries doesn't seem like that big a stretch. A chroot jail would be a better solution (and I think BSD has a secure version of these), but you can fake that without root anyway.

    The advantage of doing things this way is that you still get the advantages of dynamic linking (lower memory and disk usage for multiple programs installed, easier updates, etc). The only component missing is admin rights, and there's nothing special about admin rights that relates to any of these.
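    As a rough illustration of running against bundled libraries without root (the ./bundle layout here is made up), all it takes is pointing the dynamic linker at the shipped lib/ directory before exec'ing the binary:

```python
"""Sketch of launching a bundled program against its own shipped libraries
without admin rights: prepend the bundle's lib/ directory to LD_LIBRARY_PATH
and exec the real binary.  The ./bundle layout and "myapp" name are
hypothetical."""

import os
import sys

BUNDLE = os.path.abspath("bundle")   # hypothetical: bundle/bin/, bundle/lib/

def run(argv):
    env = dict(os.environ)
    libdir = os.path.join(BUNDLE, "lib")
    # Bundled libraries are searched first; system libraries stay as fallback,
    # so the usual benefits of dynamic linking are preserved.
    env["LD_LIBRARY_PATH"] = libdir + os.pathsep + env.get("LD_LIBRARY_PATH", "")
    binary = os.path.join(BUNDLE, "bin", argv[0])
    os.execve(binary, argv, env)     # replaces this process; no root needed

if __name__ == "__main__":
    run(sys.argv[1:] or ["myapp"])
```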

    So, TL;DR: This looks like just another case where the correct solution was staring you in the face, but you went with the easy solution instead. That's not necessarily wrong -- you could actually make a much better case that it's too hard to write a proper package manager for this environment, and that the target audience is too small to justify the effort, compared to a simpler solution which just Gets It Done.

    But then, why is CDE a big enough deal to be Slashdotted? Presumably it took a significant amount of work...

  • by The Mighty Buzzard ( 878441 ) on Saturday November 13, 2010 @04:38AM (#34214302)

    No, solving dependency hell is far worse today. Building from source back in the tarball-only days, you had problems if version W of library X was not installed. Building from source today, you have that problem, plus your distro of choice may not have version W of library X in its repository, plus you can have version W of library X that you built from source installed and your package manager will still refuse to install anything that depends on it, because it refuses to acknowledge the existence of anything outside its list of installed packages.

    You also have issues like CPAN, which is reasonably current, versus your distro's package manager, which is usually anything but.

    If it weren't for checkinstall I'd seriously consider LFS over package management in situations where I was constantly having to build things from git/cpan/etc... And I'd probably have a huge dent in my desk from where I constantly banged my head instead of the only moderately sized one I have now.

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Saturday November 13, 2010 @04:44AM (#34214324) Homepage

    AppBundles are beautiful and easy, but also not flexible enough to handle many cases, which is why quite a bit of software on Mac OS X ships as a .pkg with an installer instead of as an AppBundle. What Mac OS X did get right, however, is making the installer a standard part of the OS, not something that has to be reinvented by every software developer as on Windows (.msi should probably fix that one day).

  • by The Mighty Buzzard ( 878441 ) on Saturday November 13, 2010 @04:52AM (#34214342)

    Wait, doesn't source code actually require at least a tiny bit of thought? I thought we were trying to be like Windows now. I mean they're still lightyears ahead of us up in Redmond. They have tons of software that installs without the user even having to be aware while even the package manager distros of Linux still require the user to actually authorize it to get the latest and greatest botnet software installed on their box. ( The preceding was a joke. rm ~/.ass/stick before flaming )
  • by The Mighty Buzzard ( 878441 ) on Saturday November 13, 2010 @05:11AM (#34214408)

    It's really more of a problem when one or two packages on a non-bleeding-edge system need to be bleeding edge for some specific reason, but actually updating those packages would break backwards compatibility for some other essential something.

    Trust me, it's far from gone even on arch/gentoo/slackware.

  • by RichiH ( 749257 ) on Saturday November 13, 2010 @06:44AM (#34214686) Homepage

    Sorry if the topic sounds a tad personal, but hey...

    > The real problem is that Linux distributions, taken together or individually, presents developers with too many completely unnecessary choices as to where essential library files can be put, and also, there is no standard version naming and locating convention.

    Do you need it to boot? Prefix is /
    Do you need it after boot? Prefix is /usr
    Do you want to install custom stuff that is not handled via the system's default software handling solution? Prefix is /usr/local or /opt
    Do you want to install into home dir? Prefix is ~/local or ~/opt
    If you are in a heterogeneous environment with a shared home directory between lots of architectures etc., /import/x86 etc. is a good place

    This leads to a clean & clear separation of software, following a system people poured a lot of thought into. Is it easy to grasp at first sight for someone used to Windows? No. But that is _not_ the priority. Sorry, it's not. People writing code need to learn how the language works. Why shouldn't they learn how the system works?

    > Package managers are a complex solution to a problem that need not have existed in the first place, if it was realized that unnecessary choice is deadly dangerous, in the world of large-scale software interoperability.

    Yeah, cause grabbing random downloads of .bat, .exe, .msi, .whatnot turned out to be awesome. Especially the integrated updates. Oh, what's that? Everyone is implementing their own system leading to dozens of parallel update mechanisms on a single machine? Now _that_ is efficient! And the programs that don't have an update routine? Simple, just write them bug-free, without holes and a complete feature set in 1.0!

    > There does not need to be any choice for where on a file system a given application or a given library should be located.

    That is true if you consider every machine to be an island. Unix thrived and continues to thrive cause you can create huge shared environments with almost no work.

    > That should be completely determined by the app or library name, version (using a standard versioning scheme), variant (using a standard variant naming scheme), and origin person-or-organization, using a standard organization identifying scheme.

    My custom mplayer is in /usr/local/mplayer. My custom git is in /usr/local/git. My custom vim is in /usr/local/vim. I can delete any of those and remove the program, along with all its libraries and whatnot, with one single rm.
    If devs simply don't know that they should default to /usr/local for stuff, again... It's their problem, same as if they did not know how to open() a file.

    > It goes without saying that there should also be a standard globally unique URI for such libraries and apps (including the unique name, version, variant, origin identification).

    No. No. No. This breaks any and all assumptions about being able to install different versions of stuff for different reasons. Use prefixes and use LD_LIBRARY_PATH, etc.

    > So there should be no choice about where on the internet to get it (except for the choice involved in a standard mirroring URI scheme), and no choice about where to put it.

    Maybe you are too young to have seen this yourself, but after a few years, most URLs are dead. With gittorrent, ideally with a DHT sprinkled on top, this might change in the longer run, but what if the next VCS that whoops git's ass comes along? Static information on the internet is mostly a myth. (Also, git would need to get rid of SHA1 for fully automated code distribution, imo)

    > With this discipline, obviously needed in today's universe of code, all such package management, as well as dependency acquisition and installation, could be managed by a single unified and incredibly simple automated package manager; call it the

  • by Rich0 ( 548339 ) on Saturday November 13, 2010 @09:43AM (#34215186) Homepage

    LTS is good for production environments where you have lots of machines to support.

    Suppose you run Ubuntu on 1000 workstations. Each of those runs a variety of programs, not all the same, and some aren't from the PM (they might not be open source, or widely used - they could even be homegrown). Every time there is an upgrade one of these programs could break.

    The idea of LTS is that, for the most part, everything stays put but you still get security updates. For something like a corporate desktop that is exactly what you want. Why do you think so many computers are still running XP? Simple: it works, and as long as it works nobody wants to pay the small fortune it would take to figure out what will break, and fix it, when upgrading the 20,000 PCs running it.

  • by Renevith ( 1556657 ) on Saturday November 13, 2010 @12:06PM (#34215782)

    This is not intended to be a package manager replacement. In fact, keeping older possibly-buggy versions of libraries around is a good thing in some of the expected use cases, e.g.:

    • Giving other academic researchers your code that will reproduce your results exactly, even if your code was triggering a bug in a specific library version you have
    • A professor distributing a class assignment in executable form, and not having to worry about how many flavors of linux his students are running

    There are many more use cases illustrated on the website [stanford.edu], which you obviously did not read or else you wouldn't have compared CDE to a package manager. Almost all the examples he uses are ad-hoc transfers where the CDE package will only be used on a temporary basis, or where the effort required to bundle a package for each OS would be unwarranted, or where the lack of library upgrades is actually an advantage.

"If it ain't broke, don't fix it." - Bert Lantz

Working...