
Debian Upgrade May Cause Serious Breakage

daria42 writes "Debian developer Bill Allombert has e-mailed the Debian community saying he estimates about 30% of users upgrading from Debian Woody to Sarge will suffer 'serious breakage'. Allombert says the upgrade process suffers from a number of bugs reported before the release went live several days ago. Chief among the problems, he said, were cyclic dependencies and the fact that software installation tool apt depended heavily on the changing C++ libraries. Allombert wants developers to test the upgrade cycle continuously during development and not just during the freeze period just before release."
This discussion has been archived. No new comments can be posted.
  • Chief among the problems, he said, were cyclic dependencies and the fact that software installation tool apt depended heavily on the changing C++ libraries.

    Let this be a lesson to those of you [slashdot.org] who claimed that "APT is unbreakable." There's no such thing as an unbreakable technology. There is, however, such a thing as a robust technology that resists failure. As packaging systems go, APT is fairly good. However, my belief is that packaging systems are inherently flawed.

    What you want in an OS is a method for determining the precise core upon which you can base your applications. Such a core would effectively be an immutable set of system APIs that cannot be changed. The upshot is that the given system is verifiable, i.e. I can have a script go through and ensure that everything that should exist does exist. From that information, I can then do a delta to find out what exists that shouldn't exist.
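
    A minimal sketch of what I mean, assuming a hypothetical core.manifest of md5sum-format lines ("hash  path") and a hypothetical /core prefix:

      md5sum -c --quiet core.manifest       # reports anything missing or modified
      find /core -type f | sort > /tmp/present
      awk '{print $2}' core.manifest | sort > /tmp/expected
      comm -23 /tmp/present /tmp/expected   # the delta: exists but shouldn't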

    This is in direct opposition to a packaging system that builds an OS out of inter-dependent components. The problem with such a strategy is that using inter-dependent components only works if you're building from scratch. As anyone who has managed a version control system can tell you, things get extremely complicated (and tend to require manual intervention) as soon as files start branching. The same thing happens in packaging systems as soon as you start doing upgrades to individual components. Soon you find yourself with a mess of mismatched dependencies which require constant manual intervention to solve. Not a good situation.

    In the case of a defined core, you can simply wipe out the old core and replace it with the new one. As long as testing has been done to ensure that the new components are still backward compatible with old software, everything should work fine after the upgrade.

    Food for thought, anyway. To the Debian team: Thanks for the new release! Even if there are some growing pains, it's still nice to see you back in the game. :-)
    • by Tharkban ( 877186 ) on Friday June 10, 2005 @01:00PM (#12781123) Homepage Journal
      Give it a rest.

      The Linux Standard Base is dead.

      There is too much freedom for even the distributions to make cores effectively. Debian doesn't develop the software, they package it. They have no direct control over compatibility issues between versions in their software. This makes their job a whole lot harder than in commercial OSes, where one entity controls both the core software and the packaging.

      They also don't have the resources to make security patches for every package without upgrading to a newer version of said package (i.e. backporting). They really do a phenomenal job given their constraints.
      • They also don't have the resources to make security patches for every package without upgrading to a newer version of said package (i.e. backporting). They really do a phenomenal job given their constraints.

        I agree wholeheartedly. I'm not attempting to "diss" the Debian distro or its maintainers. I'm only pointing out that the packaging system is beginning to strain under the pressure of so many packages. The complexity of the package system is quickly becoming too difficult to maintain, especially since the packaging system mixes the core system APIs with the user applications. (Always a recipe for trouble.) Thus it is time to start thinking about something new.

        The Linux Standard Base is dead.

        The LSB was always about the "least common denominator" and not about "the most usable configuration". For what it was, it wasn't too bad. But a real standard at this point would have to define a lot more libraries, although perhaps at the level of library versions rather than trying to pin down individual APIs.

        With that in mind, I don't think that such a standard should be attempted across all distros. For one, that would limit their ability to be different and provide new competitive services. For another, it tends to be better to allow a few different standards to compete before you attempt to pick one or two out of the fold. For example, there used to be many standards for Linux base distros. Now all distros tend to fork from either RedHat or Debian. Standards thus emerged.

        The same thing should happen today. We should see different distros attempt differing solutions to the issue and see which ones work best. Symphony [symphonyos.com] is certainly one of the most interesting, but mostly because it's the first attempt to break away from the current designs that Linux is stuck in. :-)
      • There is too much freedom for even the distributions to make cores effectively. Debian doesn't develop the software, they package it. They have no direct control over compatibility issues between versions in their software. This makes their job a whole lot harder than in commercial OSes, where one entity controls both the core software and the packaging.

        You mean like Darwin/OS X? A fair amount of that OS comes from FreeBSD. Apple doesn't "control" FreeBSD.

        What they do control is their development and tes

      • by runswithd6s ( 65165 ) on Friday June 10, 2005 @02:37PM (#12782291) Homepage
        They also don't have the resources to make security patches for every package without upgrading to a newer version of said package (i.e. backporting). They really do a phenomenal job given their constraints.

        I'm not sure what weed you're smoking, but Debian backports ALL of their security fixes from upstream software to the version packaged in stable. Really, consult the Debian Security FAQ [debian.org] for more details.

    • The upshot is that the given system is verifiable, i.e. I can have a script go through and ensure that everything that should exist does exist.

      This is probably one of my biggest gripes about Linux.

      The larger Unix OSes like Solaris & AIX have 'patchlevels' which seem to meet some of your criteria. To upgrade the OS, you install a patchlevel on top of your existing OS. It's easy to verify all components of the OS.

      You know exactly what the OS contains--- "Solaris 5.9 build 100041-23" contains Kernel version X.Y.Z, C compiler X.Y.Z, C libraries X.Y.Z.
      • rpm -qa (rpm based distros)
        dpkg -l (deb based distros)

        Thank you, next question.
        • Thanks genius, but that doesn't answer any of my questions.

          Here's another one of my top gripes about Linux. I ask a question, and I get a stupid answer from some dumb snot who clearly doesn't understand the question.

          Listing the packages is easy. But your solution doesn't deal with patchlevels at all, or show how to verify the installed packages in a patchlevel, or check to see if any files are missing from the packages, etc. You can do some of this stuff with individual packages, but not with clusters of patche
          • I'm not really getting you on this one.

            I think RHEL does the kind of thing you're asking for.

            I think of the Linux world as split in two. There are distros that are fundamentally CD based - like Fedora core, RHEL, Xandros, SuSe, etc. The CD based distribution system lends itself to the kind of thing you're talking about: patchlevels, service packs, batch upgrades - it provides known checkpoints for your OS.

            Then there are distros that are internet based - they use central repositories to manage software,
          • by wobblie ( 191824 ) on Friday June 10, 2005 @03:32PM (#12783011)
            And there is always some post like yours, which clearly demonstrates you haven't even tried to figure out the answer to this simple question.

            rpm systems: rpm -q --changelog <package>
            deb systems: /usr/share/doc/<package>/changelog.Debian.gz

            These are almost always more informative than the kind of crap I see on commercial unixes.

            There is no such thing as "patch levels" or "clusters of patches" in any Linux distro I know of.

            It is, in fact, a rather dumb idea anyway.

            Each package is updated alone, as it should be.
      • You know exactly what the OS contains--- "Solaris 5.9 build 100041-23" contains Kernel version X.Y.Z, C compiler X.Y.Z, C libraries X.Y.Z.

        No, you don't know that. 100041-23 is only the kernel patch version. What about other patches? Did you install it through the recommended patch cluster? Or the security patch cluster? Or was it the Solaris update cluster? Or maybe you installed this kernel patch as part of the "Java" patch cluster? There is no such thing as an overall patch level on Solaris.
      • With Linux it's the opposite. You need to check the version of each individual package. You have an md5 string to compare against the .deb, .rpm or .tar.gz file, but how can you verify all components in the system?

        rpm --verify -a

        and to check you've got all the current updates:

        yum check-update

        This is on FC3, but both yum and rpm have been around for quite some time.
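
        The deb side has a rough counterpart, assuming the optional debsums package is installed (it's a separate package, not part of dpkg itself):

          debsums -a   # check every packaged file, config files included, against stored md5sums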

        • and to check you've got all the current updates:

          yum check-update


          Good point, and Yum is a great tool. But doesn't that only compare against the new updates in the repository?

          What if I don't want the latest & greatest files in the repository? Can I check all packages against a particular OS level? On Solaris, I can say "check that I have all packages released in Solaris x.y.z patch 5, which was released on June 2, 2005." Can I do that with Yum?

          I frequently don't want the newest versions of the package
    • by listen ( 20464 ) on Friday June 10, 2005 @01:07PM (#12781191)
      You again ;-)

      Take a look at the Conary [specifix.com] system. It has some interesting ideas that could certainly help in this kind of situation: especially transactions for upgrades. If a bit fails, the whole upgrade rolls back, and you can even roll back completed transactions.

      I like this idea better than choosing some arbitrary core of code to upgrade as a massive lump, and statically linking hundreds of copies of anything not in the core into the separate apps. As to your verifiability-detecting script, I see no reason this cannot be done for a packaging system. And before you go on about corrupt databases, please remind yourself what a filesystem is: that's right, a corruptible database.

      I will agree with you on compatibility: people should stop breaking ABI. I'm looking at you, Freetype...
    • by Greyfox ( 87712 ) on Friday June 10, 2005 @01:11PM (#12781227) Homepage Journal
      If they statically linked it. Which they should really do for a base level of core utilities anyway. I've been burned by library upgrades and crippled recovery processes several times in the past because the correct libraries were no longer available. For something that might have a library pulled out from under it like apt, it really makes sense to incur the size penalty so that you never have to worry about it dying on you when you replace system libraries.
      • by Spazmania ( 174582 ) on Friday June 10, 2005 @01:22PM (#12781353) Homepage
        That's not quite true. For example, the statically linked apt in a previous upgrade could run into trouble looking up DNS entries. The problem? /etc/nsswitch.conf got upgraded and the statically linked DNS library didn't understand some of the new options.

        However, offering a statically linked apt would probably have helped.
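
        A toy illustration of the trade-off being described (mytool.c is a hypothetical source file): a statically linked binary carries its own copies of the libraries it links against, so replacing the system libraries mid-upgrade can't strand it.

          gcc -static -o mytool mytool.c   # binary embeds the libraries it needs
          ldd mytool                       # prints "not a dynamic executable"

        Though, as noted above, anything still resolved at runtime (glibc's NSS modules, /etc/nsswitch.conf) can bite even a static binary.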
    • by KiloByte ( 825081 ) on Friday June 10, 2005 @01:13PM (#12781255)
      The only issue is: if you don't read the freaking release notes, you will have problems. The apt in Woody is broken. The release notes say that you need to update it first, to let it handle circular dependencies.
      The only fault of Debian is not putting this in a bold enough font.

      Also, this breakage gives us a yet another reason to bash C++ as a poor excuse for a language :p
      • That was a good post, until you decided to troll.

        C++ may be a poor excuse for a language, but changing C++ libraries is not one of the reasons. That fault lies with the gcc team, and even then it's not really a fault.

        It would have been much more accurate to say that the g++ portion of the GNU compiler collection is driven by people who are poor excuses for developers, but that's another story entirely.

        And just so you know, some of the greatest languages ever designed suffer from similar (or greater) pr

    • I understand the problem you're discussing, and what you suggest is one avenue to solve that problem. But I think that there is another, potentially simpler solution that should solve all the major problems with package systems: get rid of the idea of package conflicts.

      Circular dependencies are not a problem - if you view the set of packages and their dependencies as a directed finite graph - it's easy to come up with an algorithm for reliably figuring out the closure of dependency relations for any given
      • Circular dependencies are not a problem - if you view the set of packages and their dependencies as a directed finite graph - it's easy to come up with an algorithm for reliably figuring out the closure of dependency relations for any given package or set of packages.

        Exactly! You're thinking right along the same lines I am. The key is to view the system as a whole, because that is how it will operate. Once the system *is* whole, then many of the circular dependency issues go away. It's only through the co
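
        A rough sketch of that closure computation in bash against apt-cache (the package name is a placeholder, and apt-cache's output quirks - virtual packages in angle brackets, alternatives - are glossed over); the seen-list makes cycles harmless:

          closure() {
              local dep
              grep -qx "$1" /tmp/seen 2>/dev/null && return   # already visited: cycle or duplicate
              echo "$1" | tee -a /tmp/seen
              for dep in $(apt-cache depends "$1" 2>/dev/null | awk '/Depends:/ {print $2}'); do
                  closure "$dep"
              done
          }
          rm -f /tmp/seen; closure mypackage | sort -u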
      • For one, conflicting file locations must be gotten rid of. I.e. each package affects only its own discrete set of subtrees in the filesystem.

        This isn't a new idea, and you can do it right now. Just install your packages from source and while building them, make sure that either:
        1. You link everything statically, or
        2. All libraries required for an app get installed within that app's directory.

        This way no app will interfere with any other, and you can upgrade any of them without having to worry about library

    • However, my belief is that packaging systems are inherently flawed.

      Yeah, it sucks to drag and drop an application anywhere on my hard drive and have it work.

      I don't know guys. Using OS X for a while really makes other OSes seem very, very primitive. Almost all applications can just be DNDed or run out of a disc image and they work. "Applications" in OS X are specially organized directories and all of the libraries and/or helper applications (I've even seen perl scripts inside of an application) are c
      • You can't put libraries inside an OSX app without ugly hacks manually updating LD_LIBRARY_PATH, and if you have multiple executables to run that isn't going to work. For proper installation they need to go in /usr/lib or /usr/local/lib, with your binary in /usr/bin just like any other application.

        The idea of using directories as apps was done years ago with RiscOS. It only works for very simple applications with one entrypoint, but if you have a larger application it just doesn't work to do that - yo
        • You can't put libraries inside an OSX app without ugly hacks manually updating LD_LIBRARY_PATH, and if you have multiple executables to run that isn't going to work. For proper installation they need to go in /usr/lib or /usr/local/lib, with your binary in /usr/bin just like any other application.

          But you can statically link them. It's effectively the same result. A Linux system based on the AppFolder concept could easily fix this oversight if so desired.

          if you have a larger application it just doesn't
    • What you want in an OS is a method for determining the precise core upon which you can base your applications. Such a core would effectively be an immutable set of system APIs that cannot be changed. The upshot is that the given system is verifiable, i.e. I can have a script go through and ensure that everything that should exist does exist. From that information, I can then do a delta to find out what exists that shouldn't exist.

      This amounts to wishing the problem away. Sure,

      • The point is, the world constantly changes and progresses. You cannot isolate a large system of interest from external developments, so stop trying. It would be nice if all package installations went smoothly, but not nice enough to justify using Gnome 0.6 for all eternity.

        Now you're just being silly. Is OS X isolated from outside developments? No. Because the entire system core moves as one large piece. The applications are more dynamic, but they are always targeted directly at a minimum system level.
  • by LearnToSpell ( 694184 ) on Friday June 10, 2005 @12:49PM (#12781023) Homepage
    Schadenfreude, I think they call it. Testing for how long, and now this? Ah well. It'll get worked out. Gotta release at some point to find all the ugly bugs. :-P
  • by ShaniaTwain ( 197446 ) on Friday June 10, 2005 @12:49PM (#12781025) Homepage
    Everything is falling apart. You may experience some discomfort. Just thought we would let you know. Have a nice day.
  • by JimDabell ( 42870 ) on Friday June 10, 2005 @12:50PM (#12781029) Homepage

    Obviously this was a rushed job. Typical Debian, always cutting corners, never taking the time to do things properly :P.

    • You can laugh all you want. Funny, sure. All the Linux geeks turning their collective nose up at Red Hat. But, do they have problems? YES! Like this? NO!

      Red Hat: Not Cutting Edge But Stable!

  • by Anonymous Coward on Friday June 10, 2005 @12:52PM (#12781052)
    Any reason why I should switch from Ubuntu to Debian?
  • well duh (Score:3, Insightful)

    by Tharkban ( 877186 ) on Friday June 10, 2005 @12:52PM (#12781057) Homepage Journal
    Well duh, if you wait that long between release cycles... you're going to have some major problems upgrading, as everything you had was ancient and everything you're upgrading to is merely old.

    I love Debian for their philosophy; however, when I tried their distribution and it downgraded the kernel from 2.4 to 2.2 when 2.6 had already come out... I don't think I even started X before deleting it. Maybe I'd have had a different experience if someone had told me "testing" didn't mean what it usually does.

    All of that said, it seems these problems could probably have been avoided with more testing. :(
    • Why? Did you have an actual problem with 2.2? Or did you ditch Debian because it was making your e-penis feel small? It's rock solid stable, which is more than I can say for most 2.4 and every 2.6 release I've ever tried. They believe in stability, they really do. If you know what you're doing they support going up to 2.4 and probably 2.6, but if you don't know how and don't even read the help screen, it makes sense for Debian to assume you don't know what you're doing.
      • Why? Did you have an actual problem with 2.2? Or did you ditch Debian because it was making your e-penis feel small?

        I wonder if you have even tried to use Linux on the desktop. Living in the land that time forgot is all well and good until you need drivers for new hardware, or less buggy drivers for old hardware, or want to boot off a cylinder above 1024, or use advanced network routing and QoS, etc etc.

        Debian is "stable" in the sense that packages play well together, and don't change much. But that

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday June 10, 2005 @12:54PM (#12781073)
    What, specifically, are the apps that will cause the problems and how does he determine that 30% of the boxes out there will have those apps?

    I've upgraded 6 boxes and have not had a single problem on any of them. They run a combination of Apache, perl, python, MySQL, php, bind9, DHCP, etc.

    If there is a circular dependency problem on an app, but no one uses that app, then there won't be any problem upgrading.
    • Same here, no real issue for any of the machines I've upgraded.

      However, I had to restart `apt-get dist-upgrade` twice because some upgrade process went wrong. But in the end it all worked.
      (Much like the `run live update a couple of times to get all updates`, or the `run windows update ...`).
    • So far I've seen one user with problems with TTF fonts, so if you're trying to pack every font possible on your computer, you'll end up getting stuck on "Regenerating font cache" (this particular user was stuck on ttf-bitstream-vera, so it may just be this particular font, or their language setting (French, I think)).

      If someone does run into a circular dependency, I'd suggest using dselect to run the upgrade, or simply going into apt's package cache and using dpkg -i to install all the packages in the circ
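
      For anyone trying that second route: everything apt has already fetched sits in its cache, and handing the whole set to dpkg in one run lets it configure mutually dependent packages together:

        dpkg -i /var/cache/apt/archives/*.deb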
  • by Gentoo Fan ( 643403 ) on Friday June 10, 2005 @12:55PM (#12781079) Homepage
    We had a Woody to Sarge upgrade fail on boot because LILO barfed a kernel panic on root mount. Installing grub fixed it. I forget how lilo was set up prior to Sarge, but whatever. My suggestion if you have SATA root mounts: install grub before installing Sarge!
  • by TekGoNos ( 748138 ) on Friday June 10, 2005 @01:02PM (#12781150) Journal
    Nice to know I'm not alone.

    Suddenly apt-get dist-upgrade didn't do anything good; I had to do an apt-get -f install multiple times until the dependency stuff was sorted out. In the process, some packages (notably apache and ftpd) were simply de-installed and I had to re-select them manually.

    Good for me that it was a server and apache and ftpd were the only important hand-selected packages. I fear for the desktop systems with several dozen hand-selected packages.

    So, I guess it is a good thing that Debian only releases a major update every two years :|
    • WTF were you doing using "apt-get dist-upgrade" anyway? If you'd read the release notes then you'd know that the recommended way of doing the upgrade was to use aptitude, to prevent just those sorts of problems.
    • by BeBoxer ( 14448 ) on Friday June 10, 2005 @01:18PM (#12781324)
      Suddenly apt-get dist-upgrade didn't do anything good; I had to do an apt-get -f install multiple times until the dependency stuff was sorted out. In the process, some packages (notably apache and ftpd) were simply de-installed and I had to re-select them manually.

      I can't say for sure that it would have helped, but the instructions specifically say to use aptitude because it handles dependencies better than apt. So while I feel your pain, I'm not sure it's a valid complaint.
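
      From memory, the sequence in the release notes is roughly this (check the notes themselves for the exact flags):

        apt-get update
        apt-get install aptitude                     # get the newer, cycle-aware resolver first
        aptitude -f --with-recommends dist-upgrade   # then let it drive the upgrade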
  • by costela ( 198904 ) on Friday June 10, 2005 @01:08PM (#12781206)
    This is FUD, even by Slashdot standards.

    The problems do exist, but the "severe breakage" described does not mean unbootable machines or unusable software. Cyclic dependencies mostly mean the algorithm used to select packages for upgrade or installation will not run as expected and will probably leave the problematic package on hold.

    This is not a new problem, and it affects Debian mainly because of its distributed and loosely coupled model of organization, where integration problems can go unnoticed for quite some time.

    The original mail was intended to push more developers into taking action on these integration errors and to make sure the upgrade paths are always clear, which is a very big and important task.

    I, for one, hope his message doesn't fall on deaf ears, but also hope it doesn't generate more FUD like this.
    • I guess if you're a vocal Debian user you have to expect this, though, don't you? Debian users claim seamless upgrading is a tenet of their distro. Along with stability, being able to upgrade the same distro year after year is one of the two major features of Debian. So when you lose that, it is a big deal. Also, severe breakage means different things to different people. When a distro you expect to work perfectly and be worry-free suddenly doesn't do what it's supposed to, I can see why this is a big deal.

      "Thi
  • I have been among those criticising Debian for its long release cycles, but the big advantage of them was that releases weren't released until they were done. If there was breakage this big, why didn't they leave it frozen until it was fixed?

    I don't think testing all the way along is the answer; that way nothing would get done. It's great that Debian has a period devoted solely to testing before the distribution gets released; it means things get fixed. It's hard to be motivated about fixing things tha

  • by MECC ( 8478 ) *


    This is, after all, Debian

  • These problems were always there in the last Debian release - it's just such a long time ago that all Debian users forgot about them (the Debian Sea Scrolls recorded it)... good job I use Slack ;-)
  • This makes me wonder why the upgrade process is being run under the target installation in the first place. How about upgrading the system off-line through a live-cd of some sort?

    This live-CD would be able to access the installation's filesystems and examine the configuration, particularly the database that APT keeps of the installed packages. The relevant updated packages could then be installed without worrying about clobbering libraries and/or other files that the update process is dependent upon.

    At
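
    The nearest off-the-shelf approximation I know of (device and mount point names are placeholders): boot a live CD, mount the target, and run the upgrade in a chroot. That still executes the target's own tools, so it isn't the fully external upgrade imagined above, but nothing the running live system depends on can be clobbered:

      mount /dev/hda1 /mnt/target
      mount --bind /proc /mnt/target/proc
      chroot /mnt/target apt-get update
      chroot /mnt/target apt-get dist-upgrade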
    • This makes me wonder why the upgrade process is being run under the target installation in the first place. How about upgrading the system off-line through a live-cd of some sort?

      Because only extremely limited distributions would require you to take the machine offline to upgrade it. The Linux you're looking for is Red Hat or Mandrake. I need to be able to upgrade my Debian box from thousands of miles away... I'm sure as hell not going to fly out to a datacenter somewhere and reboot my box onto a boota

  • Fine, I had to babysit my upgrade; it wasn't a simple apt-get dist-upgrade...

    At first, when I did an apt-get upgrade, it wanted to REMOVE all the core packages I use that machine for (ssh, apache, mysql, courier...). I had to upgrade one at a time, making sure nothing would barf. Only at the end did I upgrade APT and then do a dist-upgrade to get everything up to speed.

    This was a long time in the waiting; of course it wasn't going to be really easy.

  • I said the first 1,500 bug reports would be up by the next morning, and then said "This is what they get for releasing early."

    I thought it was supposed to be funny. I guess I'm just not cut out for /. humor.
  • This is true. (Score:5, Interesting)

    by cdn-programmer ( 468978 ) <<ten.cigolarret> <ta> <rret>> on Friday June 10, 2005 @01:46PM (#12781658)
    I attempted an upgrade from woody to sarge about a month ago and it broke my system. I have thousands of zombies running around. These show up as defunct processes. It's not the end of the world, mind you, but you can't kill a zombie since it is already dead.

    I have reported this and warned that there will be a lot of folks with broken systems. I was very surprised to hear that sarge went stable before this problem was sorted out.

    A sarge install from scratch, however, is fine. It's just the upgrade that is broken, and in more than one place.
  • I'm upgrading all of my boxes from woody to sarge. So far so good. You need to follow the instructions here [debian.org].
    Some perspective: It is a giant leap from woody to sarge. Each server I'm upgrading has a purpose, and the application software to support that purpose has taken a big version jump. Of course, you're going to have issues doing that.
  • The upgrade wanted to remove half of my installed apps and it kept back the other half. Even after sorting it out (hours and hours), it did something screwy with the ifupdown utility so it never came back from a reboot.

    This was the final nail in the coffin for debian. No updates since 2002 and now this.
  • This upgrade/dependency problem is the same, most important, problem Debian releases have throughout the cycle. Or at least, it's the same solution: a robust, easy installer. Not just a flexible installer that handles all the many combinations of hardware, software, network, and configurations, hiding the complexity under the hood of a simple GUI which allows configuration of every option. The installer must also detect failures, and report them back to the release managers. Of course this failure notice sh
  • No man wants to read these 3 words. I will be sure to steer clear!
  • by cahiha ( 873942 ) on Friday June 10, 2005 @04:43PM (#12783867)
    If you really care about a system and minimal downtime due to upgrades, have two root partitions (it's only an extra 5-10G). Instead of upgrading, you make a clean install on the unused root partition. Clean installs generally work better to begin with, and you have the old install both mounted and bootable to figure out any problems and copy over configuration files.
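
    A sketch of that layout (partition names and the mirror URL are placeholders): keep an idle second root, bootstrap the new release onto it, carry the config over, and add a boot entry for it:

      mount /dev/hda2 /mnt/newroot
      debootstrap sarge /mnt/newroot http://ftp.debian.org/debian
      cp -a /etc/network/interfaces /mnt/newroot/etc/network/
      # then add a boot loader entry for /dev/hda2 and reboot into it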

    As for complaints about this sort of thing, I still prefer Debian. I just spent several hours upgrading an OS X system from Jaguar to Tiger. A trivial file system inconsistency in HFS caused the installer to crash reproducibly and eventually required me to manually patch inodes (apparently a fairly common problem on Macintosh). And I'll have to wipe a Windows machine clean tomorrow because hardware has mysteriously stopped working and no amount of fiddling will fix it.

    In comparison, these Debian upgrade woes seem minor. And unlike the Mac and Windows problems, the Debian upgrade problems will generally fix themselves after a few days when the package maintainers catch up.
