Linux 4.0 Getting No-Reboot Patching

An anonymous reader writes: ZDNet reports that the latest changes to the Linux kernel include the ability to apply patches without requiring a reboot. From the article: "Red Hat and SUSE both started working on their own purely open-source means of giving Linux the ability to keep running even while critical patches were being installed. Red Hat's program was named kpatch, while SUSE's is named kGraft. ... At the Linux Plumbers Conference in October 2014, the two groups got together and started work on a way to patch Linux without rebooting that combines the best of both programs. Essentially, what they ended up doing was putting both kpatch and kGraft in the 4.0 Linux kernel." Note: "Simply having the code in there is just the start. Your Linux distribution will have to support it with patches that can make use of it."
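For a rough idea of what "patches that can make use of it" means in practice, here is a hedged sketch of the workflow a distribution-built live patch might follow. The patch and module names below are made up, kpatch-build is Red Hat's userspace tooling (it also needs the matching kernel source and config, omitted here), and the sysfs path is the interface the new livepatch core exposes:

$ kpatch-build fix-example.patch # build a binary patch module from a source diff
$ sudo insmod kpatch-fix-example.ko # the result is an ordinary kernel module
$ cat /sys/kernel/livepatch/kpatch_fix_example/enabled
1 # 1 = the patch is applied to the running kernel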
  • by Gumbercules!! ( 1158841 ) on Wednesday March 04, 2015 @09:02AM (#49179875)
    I'm starting to feel old. I'm still on 2.6.x on my boxes.
    • Just curious, why?

      • by Gumbercules!! ( 1158841 ) on Wednesday March 04, 2015 @09:50AM (#49180219)
        Coz all my servers are production or purpose defined, and based on CentOS or VyOS. They all work. They all do their jobs - so I haven't had a compelling reason to upgrade. I did put one server briefly on CentOS 7.0 (Kernel 3.10 or something) and the client couldn't figure out how to use it, so I rolled it back.
        • by NatasRevol ( 731260 ) on Wednesday March 04, 2015 @10:05AM (#49180315) Journal

          Only people without servers in production/critical environments ask 'why haven't you upgraded already?'

          • by haruchai ( 17472 ) on Wednesday March 04, 2015 @12:10PM (#49181163)

            And that's how my team ended up supporting 10-25 year old fossilized gear running all kinds of old, insecure shit that almost no one can remember the login for, or sometimes even what it's for.

            • by Anonymous Coward

              No, your workplace's complete lack of an enforced documentation policy is why you have that mess. Your job isn't to run shiny new OS installs; your job is to keep everything running smoothly. Good documentation is a part of that.

            • There's a huge difference between 'why haven't you upgraded?' and 'why haven't you upgraded already?'.

              • by haruchai ( 17472 )

                Perhaps.
                But depending on the system or product, that difference may not be large.
                One vendor allowed us to run fully supported on a particular product version for 5+ yrs but now requires us to upgrade again, after only 2 years, by the end of 2015.

                Upgrading this system affects about 5000 users & 35 external partners and requires almost 4 months of preparation & coordination.

            • by dwywit ( 1109409 )

              And that's why my hourly rate goes up, up, and away when the client can't supply documentation. It doesn't decrease the frustration, but helps to make it bearable.

        • by Anonymous Coward

          If you were lost in the 3.0 kernels just wait until you try 4.0. Gone are the days of simply using ifconfig or adding a shell script to run on startup. Move to some form of BSD where the development process is sane. Changing for the sake of change is not a good idea.
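          For anyone actually making that jump, the replacements being grumbled about look roughly like this on an iproute2/systemd distro (the unit and script names here are illustrative, not anything a distro ships):

          $ ip addr show # replaces "ifconfig -a"
          $ sudo ip link set eth0 up # replaces "ifconfig eth0 up"

          # /etc/systemd/system/myscript.service -- replaces dropping a script into rc.d
          [Unit]
          Description=Run my startup script at boot

          [Service]
          Type=oneshot
          ExecStart=/usr/local/bin/myscript.sh

          [Install]
          WantedBy=multi-user.target

          $ sudo systemctl enable myscript.service # run it at every boot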

    • No worries! Since Torvalds decided to number the successor of kernel 2.4.x as 2.6.x, in spite of major changes, the kernel numbering system stopped being just technical.
    • At my place of work we're still running the 2.6.35 kernel. Reason: It's the latest kernel supported by our hardware (Freescale i.MX53).
    • $ uname -a
      Linux snail 2.4.37.11 #2 Tue Dec 28 14:55:32 PST 2010 i586 Pentium MMX GenuineIntel GNU/Linux

  • I remember being surprised when I found out Ksplice costs money.
    • by Zan Lynx ( 87672 )

      The kernel parts have always been free under GPL2.

      The part of this that is worth money is creating patches to apply with this system. Patches that won't crash the machine or corrupt data.

    • Re:Finally... (Score:5, Interesting)

      by Bacon Bits ( 926911 ) on Wednesday March 04, 2015 @09:35AM (#49180117)

      Oracle bought it. Still surprised?

      Not only that, but Oracle bought it on July 21, 2011. The current version of Ksplice? Released on July 28, 2011. The major feature of the current release? The changelog says the only change was "Removed unnecessary zlib detection from configure." But now only Oracle Linux is supported.

      It's still available as source code [oracle.com], which you can find with a bit of digging (you can't navigate to it from the top-level page, as far as I can tell... Ksplice isn't listed as a project). I think the amount of investment and effort put into that site makes it clear what Oracle's stance is.

      At least Microsoft extends before they extinguish....

  • Finally, they gave us something that makes the change from 3.x to 4.x make sense.
  • by Marginal Coward ( 3557951 ) on Wednesday March 04, 2015 @09:14AM (#49179945)

    "Simply having the code in there is just the start. Your Linux distribution will have to support it with patches that can make use of it."

    Darn. It looks like I'm gonna have to patch and reboot so I won't have to reboot after I patch.

    • Yo dawg, I heard you

      Ah, screw it.

    • Yo Dawg! We heard you liked to patch. So we got you a patch to patch your patches while you're patching.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      "Simply having the code in there is just the start. Your Linux distribution will have to support it with patches that can make use of it."

      Darn. It looks like I'm gonna have to patch and reboot so I won't have to reboot after I patch.

      FTFS: Essentially, what they ended up doing was putting both kpatch and kGraft in the 4.0 Linux kernel.

      In other words, the Red Hat and openSUSE teams decided that no compromise is the best compromise. GNU/Linux used to stand on principles; now it is all about corporate control and marketing.

    • Without reboots, how am I to know if I set up my rc.d scripts correctly? Or how do I kill that background app a user left running with nohup myscript.sh &?
      I mean, without having to reboot the system, I'm expected to do real systems administration work, not just a blanket refresh of everything once in a while. Patches gave me a good excuse to do that.

    • It's like Sun's Live Upgrade -- apply patches / updates to a copy of the running environment, then reboot into it. Nice enough idea. Ironically there was always a long list of patches that needed to be applied traditionally, often entailing a reboot, before LU could be run.
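      For readers who never ran it, the Live Upgrade flow went roughly like this (options simplified from memory and they varied by Solaris release; the boot-environment name is made up):

      $ lucreate -n patched-be # clone the running boot environment
      $ luupgrade -t -n patched-be -s /var/tmp/patches # apply patches to the inactive copy (real runs also list patch IDs)
      $ luactivate patched-be # make the patched BE the next boot target
      $ init 6 # the one remaining reboot, into the new BE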
  • scientific computing (Score:5, Interesting)

    by e**(i pi)-1 ( 462311 ) on Wednesday March 04, 2015 @09:18AM (#49179983) Homepage Journal
    will be important for scientific computing. One of the weak points of OSX is the necessity to reboot even for minor stuff (but it's also getting better there). Most upgrades in Linux already do not require any reboot, which is nice when you have jobs running for weeks.
    • by chuckymonkey ( 1059244 ) <charles.d.burton ... m ['gma' in gap]> on Wednesday March 04, 2015 @09:31AM (#49180079) Journal
      If you have weeks long running jobs on your desktop you're doing it wrong. There's a reason servers exist in datacenters. I work in scientific computing, and people running jobs on their desktops is a huge problem: they spend ridiculous amounts of money on something like a Mac Pro to run this stuff when they should be buying actual servers instead. Then they complain when their desktop is running like shit or their job fails because the building took an intermittent power hit. You can even put GPU compute in servers and have a lot less concern about your systems going down.
      • If you have weeks long running jobs on your desktop you're doing it wrong.

        I disagree, speaking as someone who has in fact had a weeks long job running on my desktop before. I mean if you have a fast PC (desktop processors are often faster per core, at the expense of fewer cores) and it's a single threaded job (sadly, not all workloads are parallelisable) then it makes sense to run it locally.

        If power outages happen once per year (I think that's pessimistic), and the job lasts a month then that's an average

        • by bored ( 40072 )

          I disagree, speaking as someone who has in fact had a weeks long job running on my desktop before. I mean if you have a fast PC (desktop processors are often faster per core, at the expense of fewer cores)

          Well, you have to differentiate whether you're talking about an Intel desktop machine or a "workstation"-class machine. The difference at this point is that the "workstation" is using Xeon-class processors and has ECC. The problem is that the "workstation" has exactly the same processors as the rack mount machin

        • If you have a weeks-running job and it isn't fault-tolerant, you're doing it wrong, period. As long as it's fault-tolerant, it isn't a big deal where it's run.

          That said, if you have a job that takes days to run on a single computer, it'd be a good idea to either invest in a compute cluster or get some time on one.

      • If you have weeks long running jobs on your desktop you're doing it wrong. There's a reason servers exist in datacenters. I work in scientific computing and people running jobs on their desktop is a huge problem, they spend ridiculous amounts of money for something like a Mac Prol to run this stuff on when they should be buying actual servers instead.

        I agree the Mac Pro is a bad choice (overpriced), but most servers are more expensive than a fast desktop with similar performance. Given that you need a desktop PC anyway, it often makes sense to invest an extra $500 or even $1000 in it to save the cost of a server.

      • If you have weeks long running jobs on your desktop you're doing it wrong.

        Some of us cannot afford our very own personal render farm [wikipedia.org], or justify the cost of renting time on one, merely to satisfy our little hobbies. ;)

        Personally though, it's not just work that keeps us from rebooting. On my part, it's usually a month or two between reboots on my MBP laptop, and even then patching is usually the only reason... why bother waiting for a full boot process to finish when I don't have to? Close the lid and let it go to sleep... it's only a few seconds waiting for it to wake up when I w

      • by dissy ( 172727 )

        If you have weeks long running jobs on your desktop you're doing it wrong. There's a reason servers exist in datacenters.
        *SNIP*
        when they should be buying actual servers instead.
        *SNIP*
        You can even put GPU compute in servers and have a lot less concern for your systems going down.

        Well, since you offered, could you make your PayPal payment to me of about $6000 USD for a mid-range server?
        Or, since you're being so generous offering to pay for servers for us, how about a nice even $10000 and I'll get one of those newfangled blade systems!

    • scientific computing. One of the weak points of OSX

      I would have guessed that the high price per unit of work for their proprietary hardware would be the limiting factor. Can't you hire a dedicated Linux admin for "free" with the cost difference between clusters?

      Or is there a specific advantage OSX is bringing to the table? XGrid is long dead, right?

      • scientific computing. One of the weak points of OSX

        I would have guessed that the high price per unit work for their proprietary hardware would be the limiting factor.

        Not really - you can still buy old XServe boxes for a relatively reasonable price, pack them with RAM, and load ESXi on each one so that you can run a buttload of little OSX VMs on each one. Yes, it's perfectly legit to do exactly that under the Apple EULA (I did it for a former employer who wanted rack-mounted OSX instances for testing - it was its own little cluster in a vSphere farm, and it was far easier to clone off replacements or new VMs.)

    • by MachineShedFred ( 621896 ) on Wednesday March 04, 2015 @09:36AM (#49180121) Journal

      On OS X the reboot is for user convenience. If you use the command line software update tools, you can install them as you wish, and not reboot. Then you can restart services with launchctl or reload patched kexts and save yourself a reboot. Does this take a lot of extra time and testing? Sure - thus the reboot.
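      For the curious, the manual route on OS X of that era looks something like this (the daemon plist and kext bundle IDs below are placeholders, not things Apple ships):

      $ softwareupdate -l # list available updates
      $ sudo softwareupdate -i -a # install them all without an automatic reboot
      $ sudo launchctl unload /Library/LaunchDaemons/com.example.mydaemon.plist
      $ sudo launchctl load /Library/LaunchDaemons/com.example.mydaemon.plist
      $ sudo kextunload -b com.example.driver # reload a patched kext by bundle ID
      $ sudo kextload -b com.example.driver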

      • Yeah, it's also like, "In order for the update to fully take effect and work correctly, we need to restart a bunch of services and applications. You should save all your work, since various things might close or stop working for a little bit." You can explain that to users, have them not pay attention, and then have them get pissed off because the update closed a document they didn't save. Or you can just tell them that you're going to reboot.

        Users understand rebooting better.

  • by gb7djk ( 857694 ) * on Wednesday March 04, 2015 @09:28AM (#49180063) Homepage
    Am I the only one who is rather uncomfortable about the ability to do seamless, run-time patching on (any) operating system? Isn't there a rather large elephant of a precedent out there somewhere for the sorts of things this feature could be misused for?
    • by MachineShedFred ( 621896 ) on Wednesday March 04, 2015 @09:43AM (#49180175) Journal

      It's been used for decades everywhere except the PC and its server variants. It's no more a risk than current patching that requires a reboot, except that you don't have the downtime of a reboot.

      A bad patch is a bad patch. Have backups, have redundancy.

      • by gb7djk ( 857694 ) *
        Yes, it's been around in server environments for years. Whilst one can see a case for adding this facility to linux servers in properly wrangled environments, my concern is on yer actual PCs, such as the one I am typing this on at the moment.
      • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday March 04, 2015 @10:09AM (#49180351) Journal

        It's no more a risk than current patching that requires a reboot, except that you don't have the downtime of a reboot.

        Sure, if your concern is error, rather than malice. An attacker who gains root could use this to dynamically patch a backdoor into the running kernel. Rebooting the machine would potentially enable someone to notice.

        As another poster noted, though, you can already dynamically patch the kernel for malicious purposes by loading a malicious module, assuming that hasn't been disabled. In contexts where security is crucial, I would disable both dynamic module loading and run-time patching.
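        A minimal sketch of what "disable both" can look like on a stock kernel (the patch name is a placeholder; a hardened build would simply leave CONFIG_LIVEPATCH out entirely):

        $ sudo sysctl -w kernel.modules_disabled=1 # one-way switch: no more module loading until reboot
        $ echo 0 | sudo tee /sys/kernel/livepatch/<patch_name>/enabled # revert/disable a loaded live patch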

        • by ledow ( 319597 )

          At all points you can modify the kernel, there's a potential for mischief, of course.

          But what you're saying is that rebooting is somehow a magic cure-all that guarantees the system isn't infected somehow, or that there's a user (or SecureBoot) there to "notice" something amiss.

          If SecureBoot can be fooled into loading an older kernel that can then be upgraded on-the-fly, it can be fooled into doing that at boot too.

          How often do you check your machine boot-up process to ensure it's on the version that grub et

          • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday March 04, 2015 @12:48PM (#49181487) Journal

            But what you're saying is that rebooting is somehow a magic cure-all that guarantees the system isn't infected somehow

            Don't be condescending. I'm not saying rebooting is a magic anything.

            Whether or not this matters depends on the threat model and why the attacker is interested in patching the kernel. For example, one purpose would be to disable other kernel security features, such as SELinux, or dm-verity. Most SELinux rules are configured and the configuration can be altered by root, but some are compiled into the kernel and can only be modified by modifying the kernel. Altering the persistent kernel image may not be possible for a variety of reasons (read-only media, SecureBoot, etc.). In addition, in security-sensitive and mission-critical contexts an unexpected reboot may well be noticed.

            I don't understand your assertion about SecureBoot. Are you referring to some known vulnerability of some particular secure boot system? Given a decent implementation of secure/verified boot, an attacker should not be able to convince the system to boot a modified kernel image, which means that run-time modification of the kernel is the only option if the attacker needs to bypass some kernel security enforcement.

            In general, the security model of a high-security Linux system assumes that the kernel is more trustworthy than root. The ability for root to modify the running kernel invalidates this assumption, which most definitely is a security issue.

            In the context of a system without mandatory access controls there may not be any reason to care, since once an attacker has obtained root there probably isn't any limit to what he can do.

  • by Anonymous Coward

    In a world, where slashdot stories get repeated at least twice per week, one man had finally had enough.

    Dilbert Smith was your average computer programmer, until one day, it happened, and the world would never be the same.

    Jean Claude Van Damme is .... The UNDUPLICATOR.

  • Was there a general consensus that both methods complemented each other, or was it one of those "ours is best so we want it in" situations? Having looked at how they work, each has its pluses and minuses, but they couldn't have come up with one? Seems to me like they were sitting around going "yeah, these are so different there is no way to combine them to make one... and we don't want ours to be left out, so fuck it, use 'em both."
  • Didn't Torvalds talk about this last week? This is hardly news.
  • Very cool that you can now patch and reload the core without a reboot. I just wonder how they handle it when data structures change dramatically between major versions; will they replace the running data with something predefined?

    • by ledow ( 319597 )

      Don't quote me on it, but from my understanding of the trampoline kernel patches there's a point at which the calls to old system calls are blocked and the calls to the new replacement system calls are demanded.

      There's a lot of logic involved in determining when the system is in a state to do that such that you don't end up feeding new structures to old syscalls, or old structures to new syscalls mid-way through (by checking that their dependent / source syscalls are all upgraded by that point, etc.)

      But, mo

  • While the kernel can be live patched, some fundamental pieces of the desktop will still lack live patching, like X.org and libc. OK, rebooting a desktop is not that terrible a task, and not as inconvenient as it is for a server. But it'd be nice to have.
  • We are heading to the situation where patching the kernel will be faster than patching applications:

    Kernel upgrade: no downtime

    Adjusting a parameter in a Java application: wait 4 minutes for Glassfish to restart

  • Even though the technology has been around for some time, it's good that these organizations have collaborated and implemented this. Awesome stuff. GNU/Linux is probably the only OS that is able to accomplish this. Windows can't even touch a no-reboot OS like this. So those using Microsoft will continue to patch and reboot their systems on a regular basis, which takes a LOT of resources. Obviously, GNU/Linux will and should excel in various markets, because it truly is better and more stable.
