Fedora Introduces Offline Updates

itwbennett writes "Thanks to a new feature approved this week by the Fedora Engineering Steering Committee, you won't hear Fedora 18 users bragging about systems that have been running continuously for months on end. 'Fedora's new Offline System Update feature will change the current system to something that is more Windows- and OS X-like: while many updates can still be made on the fly, certain package updates will require the system to be restarted so the patches can be applied in a special mode, according to the Fedora wiki page on the feature,' writes blogger Brian Proffitt."
  • > Are Gentoo/Debian really that intertwined with whatever Fedora maintainers decide?

    Not yet. But if GNOME and/or Firefox start requiring the feature, other distros will have a choice between two bad options. This new Linux-only notion that started with systemd is increasingly Fedora-only. You can have your own distro if you want, as long as it behaves just like Fedora: one big monolithic blob of alien tech. Want the new udev? It's part of systemd. Want the new GNOME? Wait for it to become unusable without udev, which requires systemd. And so on.

  • by bill_mcgonigle ( 4333 ) * on Thursday June 21, 2012 @08:17PM (#40405947) Homepage Journal

    It's too bad btrfs still has such performance problems with common applications (BTDTBTEXT4).

    We really ought to have each package on its own filesystem. When there's an update, snapshot the filesystem, let currently running processes reference the old stuff so they don't crash, but new processes can have the new stuff. When the old version no longer has any references left, it can go away. This might not always make sense, but for a desktop it's a lot better than what we have now.

    Yeah, there's some plumbing work that needs doing (rpm, container interaction perhaps, VersionKit or whatever), but this idea of rebooting a Linux system to get consistent updates is just picking at a scab, and indicative that a real solution is still needed.
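The snapshot-per-package idea above boils down to reference counting: keep every installed version around until the last process using it exits, while new processes always resolve to the newest one. A toy sketch of that bookkeeping (all names here are hypothetical, not any real packaging API):

```python
# Toy model of snapshot-style updates: a package version stays available
# until the last process that opened it exits; new processes always get
# the newest version. PackageStore is an illustrative name, nothing real.
class PackageStore:
    def __init__(self):
        self.refs = {}     # (name, version) -> open count
        self.latest = {}   # name -> newest installed version

    def install(self, name, version):
        self.refs[(name, version)] = 0
        self.latest[name] = version

    def open(self, name):
        version = self.latest[name]
        self.refs[(name, version)] += 1
        return version

    def close(self, name, version):
        self.refs[(name, version)] -= 1
        # Drop a superseded version once nothing references it.
        if self.refs[(name, version)] == 0 and self.latest[name] != version:
            del self.refs[(name, version)]

store = PackageStore()
store.install("libfoo", "1.0")
held = store.open("libfoo")       # a running process holds 1.0
store.install("libfoo", "2.0")    # update lands; 1.0 is still kept
assert store.open("libfoo") == "2.0"
store.close("libfoo", held)       # last 1.0 user exits; 1.0 goes away
print(("libfoo", "1.0") in store.refs)
```

Running this prints `False`: the old snapshot vanishes exactly when its reference count hits zero, with no reboot anywhere in the picture.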

  • by Anonymous Coward on Thursday June 21, 2012 @08:20PM (#40405971)

    The correct solution seems to be how Gentoo does it: install BOTH versions of the library. Until nothing is accessing the old version anymore, it doesn't get uninstalled. So in this case, until Firefox is restarted it will keep the old version of XULRunner around all the live-long day.

    Or is using library versioning really such a foreign concept to newer Linux developers? Hell, my Gentoo box right now has four different versions of Python installed: 2.4, 2.6, 2.7, and 3.2, so I can do cross-version development against latest-2, latest-3, and CentOS 5.x and CentOS 6.x equivalent versions.

  • by serviscope_minor ( 664417 ) on Thursday June 21, 2012 @08:50PM (#40406253) Journal

    If X11 needs an update, bounce the user to the console until it completes.

    Or, update X, and the user gets the new version when X is next restarted. This is what Arch does: the old files (binaries) are unlinked, but their data remains on disk until the programs using them exit. I've had uptimes of many months and never had trouble with X being updated.
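The mechanism behind this is plain POSIX unlink semantics: deleting a file removes its directory entry, but the data survives as long as some process still holds it open. A minimal demonstration, with an ordinary temp file standing in for a shared library:

```python
import os
import tempfile

# Write the "old library", keep it open, then delete it while it is open.
fd, path = tempfile.mkstemp()
os.write(fd, b"old library code")
os.unlink(path)            # directory entry gone: new processes can't find it

# The existing open descriptor still reads the original contents.
os.lseek(fd, 0, os.SEEK_SET)
print(os.read(fd, 100))    # b'old library code'
os.close(fd)               # last reference dropped; the space is reclaimed
```

This is exactly why a running Firefox keeps working against a deleted libxul while freshly started processes pick up the new one.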

  • Re:um... (Score:5, Interesting)

    by smash ( 1351 ) on Thursday June 21, 2012 @09:07PM (#40406399) Homepage Journal
    I *suspect* it is perhaps due to securing system files so they can't be modified while running multi-user (and are thus immune to exploitation by user code) - kinda like FreeBSD securelevels, perhaps. At least, that's the only reason I can see for it.
  • by TheRealGrogan ( 1660825 ) on Thursday June 21, 2012 @09:30PM (#40406591)

    Well, yes they can get away with patching on the fly for a lot of things, but really the system needs a reboot or it might be unstable. Some things may segfault if libraries get yanked out from underfoot unless they are 100% drop-in, binary compatible. Ever see what happens when a glibc upgrade goes awry? You can't even so much as run "ls". Many times have I had to boot with other media and finish a glibc install by hand. (I soon learned to "make install install_root=/someplace/else" first before running "make install" so I could easily manually install the finished product if the make install failed)

    It's a lot easier when a distributor with packaging utilities is just installing the same version with patches backported, because the entry points and symbols and such are all the same. But these "offline updates" that happen in a "special mode" (while few daemons are running) will actually give them more freedom to upgrade things.
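The grandparent's `install_root` trick - stage the install into a scratch root first, and only touch the real system once that succeeds - can be simulated with ordinary file operations. The paths below are made up for illustration; a second temp directory stands in for `/`:

```python
import pathlib
import shutil
import tempfile

# Stage the install into a scratch root first, as with
# `make install install_root=/someplace/else`.
stage = pathlib.Path(tempfile.mkdtemp())
(stage / "usr" / "lib").mkdir(parents=True)
(stage / "usr" / "lib" / "libdemo.so.1").write_text("new library\n")

# Only after the staged install has succeeded, sync it into the real
# root (a second temp dir stands in for / here).
root = pathlib.Path(tempfile.mkdtemp())
shutil.copytree(stage / "usr", root / "usr")

print((root / "usr" / "lib" / "libdemo.so.1").read_text())
```

If the staged build blows up, the live system is untouched and there is no half-installed glibc to rescue from boot media.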

  • by arth1 ( 260657 ) on Thursday June 21, 2012 @10:31PM (#40406973) Homepage Journal

    try resizing your /var or / LVM partitions while the system is running, let me know how that goes

    An OS upgrade has no business resizing your /var or root partitions. Period. Heck, you have to be pretty ignorant if you presume they're always local.

    Seriously, what has happened to Fedora lately? First they added gvfs, so backup programs broke when root was denied access. Then they went Gnome 3 to cater to the iGeneration. Then they went to systemd, where you can't quickly and easily decide what to run on startup in each runlevel, but have to edit dozens or hundreds of different files. And don't even get me started on mtab. Now this?
    Thank goodness RHEL is supposed to be feature-frozen for many years to come. And there, updates don't require a reboot. They actually TEST.

  • by jmorris42 ( 1458 ) * <jmorris@[ ] ['bea' in gap]> on Thursday June 21, 2012 @10:39PM (#40407027)

    > redhat (aka fedora) has always had a pathetic package mgmt system.

    No, rpm is a better system than deb. It had signed packages years before Debian, and nice all-in-one-file .srpm/.src.rpm files that build from pristine sources plus patches and a .spec. And rpm specifically forbids interactive install/update (although clueless commercial vendors sometimes ignore it), which is absolutely required for mass installs or unattended updates. I think you are getting confused by the update systems built atop them. There I would agree that Debian's apt family was better than what Red Hat was shipping until fairly recent versions of Yum, and even then I would note that Yum is still a P.I.G. And delta rpms absolutely rock if you aren't sitting on the wire with a local mirror.

    > Debian has handled updates and major upgrades flawlessly for decades

    Not exactly. It took far longer to update my MythTV system from Debian 5.0 to 6.0 following the directions than I have ever spent doing a single version upgrade on a RH/Fedora system. Admittedly, that was my first time doing a version update on Debian on a machine I cared about, and I was taking pains to avoid pooching the system, so that probably accounts for some of the extra time. Along with the fact that it was connected to a TV and the console was not easily visible, so I was also taking care to keep it available over the network at all times during the process, something I haven't tried yet on RH. Point is, neither one does version upgrades 100% hands-off.

    > (years before RH existed)

    Obviously you weren't actually using Linux way back then, or you wouldn't have said something so ignorant. Debian saw first light in Aug 1993 and got its first primitive package system with 0.01 in Jan 1994. It didn't hit 1.x until years later. RH released 1.0 in Oct 1994 and got rpm in 1995.

    > Debian just restarts the bits that are likely to break if they were not restarted.

    Like RH up until this idiotic new notion.

  • by arth1 ( 260657 ) on Thursday June 21, 2012 @10:58PM (#40407147) Homepage Journal

    It's not 1998 anymore, every OS is stable and can boot quickly. Nobody cares about your "uptime".

    Business users sure care about uptime.
    And a modern system can easily take 10 minutes or more sorting out and testing the hardware and RAIDs before it even begins to boot.

    Continue living in your iPad world, but know that you can access diddley squat on your iPad unless the servers at the other end work reliably.

  • by arth1 ( 260657 ) on Thursday June 21, 2012 @11:07PM (#40407181) Homepage Journal

    And we just had a bunch of development LAMP services go haywire because Python updates changed the on-disk libraries in a way that confused our mod_wsgi Python web apps. And the packages apparently don't track dependencies like that, so we had to restart HTTPD manually.

    My server, when it upgrades apache or one of its dependencies, calls "httpd -k graceful", so unused processes restart, and used processes restart as soon as they become unused.
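This is not Apache's implementation, but the general graceful-reload pattern is easy to sketch: instead of being killed and restarted, the daemon re-reads its configuration when it receives a signal, so in-flight work is never dropped. (SIGHUP is the conventional "reload" signal for daemons; `httpd -k graceful` sends SIGUSR1 to the parent internally.)

```python
import os
import signal

config = {"workers": 2}

def reload_config(signum, frame):
    # Re-read configuration in place; existing connections keep running
    # and pick up the new settings as they naturally cycle.
    config["workers"] = 4

# Install the handler and simulate the reload by signalling ourselves.
signal.signal(signal.SIGHUP, reload_config)
os.kill(os.getpid(), signal.SIGHUP)
print(config["workers"])    # 4: config changed without a restart
```

The same idea scales up to Apache's graceful restart, where idle children exit immediately and busy children finish their current request before re-executing with the new config.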

  • Re:um... (Score:4, Interesting)

    by ILongForDarkness ( 1134931 ) on Thursday June 21, 2012 @11:41PM (#40407363)

    Yeah, every other day is Patch Tuesday.

  • by bill_mcgonigle ( 4333 ) * on Thursday June 21, 2012 @11:46PM (#40407399) Homepage Journal

    Btrfs performance is bad due to a lot of seeks. With an SSD and Facebook's Flashcache to cache your rotating clunkers, it performs very nicely.

    Don't apologize for bad performance. ZFS has a very similar feature set to Btrfs but doesn't have all these problems. Maybe Btrfs just isn't ready yet - I wonder if the developers ever wish people would stop rushing it into production.

    Anyway, I use SSD+Flashcache on some servers - it works great. But that shouldn't be the minimum system requirement for a Fedora machine.
