


Ubuntu Picks Upstart, KVM 97

derrida writes "Because the traditional System V init daemon (SysVinit) does not deal well with modern hardware, including hotplug devices, USB hard and flash drives, and network-mounted filesystems, Ubuntu replaced it with the upstart init daemon. Several other replacements for SysVinit are also available. One of the most prominent, initng, is available for Debian and runs on Ubuntu. Solaris uses SMF (Service Management Facility) and Mac OS uses launchd. Over time, Ubuntu will likely come to incorporate features of each of these systems into Upstart. Furthermore, heading in a different direction from its main rivals, Ubuntu Linux will use KVM as its primary virtualization software. Red Hat Enterprise Linux and Novell's Suse Linux Enterprise Server both use the Xen virtualization software, a 'hypervisor' layer that lets multiple operating systems run on the same computer. In contrast, the KVM software runs on top of a version of Linux, the 'host' operating system that provides a foundation for other 'guest' operating systems to run in a virtual mode." Slashdot shares a corporate overlord with

  • Great (Score:5, Informative)

    by laptop006 ( 37721 ) on Monday February 11, 2008 @11:10PM (#22387526) Homepage Journal
    Except Upstart has been in Ubuntu since, IIRC, 6.10, and nothing has even changed about the design.
  • News? (Score:5, Insightful)

    by webmaster404 ( 1148909 ) on Monday February 11, 2008 @11:23PM (#22387654)
    How is this news? The Ubuntu project came up with Upstart and therefore they are going to use it? What's next, Debian using apt-get rather than RPM?
    • Re: (Score:3, Informative)

      by macshit ( 157376 )

      Agreed, upstart doesn't seem like news, but I'd be curious to hear a bit of back-and-forth as to the benefits of the various initscript replacements. Ubuntu makes a case for upstart on their site; it would be nice to know what others think.

      Similarly for kvm vs. xen: xen is on a roll these days, with everybody and their dog using it, but it seems like the company behind it is moving in an increasingly proprietary direction, so it would be good to hear what's up with that, and how kvm compares.

      • Interesting discussion here []. Along with a couple low-temp flame-squabbles.
      • so it would be good to hear what's up with that, and how kvm compares.

        Because KVM requires hardware support for the virtualization, it won't work on any of my computers.

        Xen does.
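A quick way to check the claim above on your own machine is to look for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo, which is what KVM's accelerated mode needs. A minimal sketch, wrapped in a function so the flags line can come from anywhere:

```shell
# Does this CPU advertise hardware virtualization extensions?
has_hw_virt() {
    # $1 is a CPU flags line, e.g. the output of `grep flags /proc/cpuinfo`
    echo "$1" | grep -qwE 'vmx|svm'
}

if has_hw_virt "$(cat /proc/cpuinfo 2>/dev/null)"; then
    echo "hardware virtualization available: KVM can accelerate guests"
else
    echo "no vmx/svm flag: KVM acceleration unavailable on this CPU"
fi
```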

        • That is a rather loose definition of "works" from my experience.

          We tried Xen as a possible alternative to VMware and it isn't even close.
          • With VMware you can't arbitrarily set your MAC address to whatever you want. How else am I going to virtualize the machines running all my expensive license servers? You can't dynamically add CPUs or RAM to guest instances. You can't migrate running instances without the expensive VMotion add-on. And so forth.

            What Xen is missing is a nice console. Maybe in version 4.
    • by Tarlus ( 1000874 )
      Not to mention, I think they've been using upstart since version 7.04.

      Facts for nerds. Stuff that matters.
    • ummmm... KVM is an upstart in the virtualization market. That's what they're saying... they are not referring to a piece of software called "Upstart".
  • kvm (Score:3, Interesting)

    by debatem1 ( 1087307 ) on Monday February 11, 2008 @11:56PM (#22387900)
    Not sure how much traction KVM is going to get here unless Ubuntu can wrap it well. I'm no expert but I know my way around most virtualization technologies, and KVM seems to be one of, if not the, hardest to use productively. Ubuntu has a great track record with this kind of thing though, and *if* this signals a new push in that direction I eagerly await the results.
    • Re:kvm (Score:4, Interesting)

      by neotokyo ( 465238 ) on Tuesday February 12, 2008 @01:47AM (#22388636)
      You've got to be kidding, right? You can install kvm without having to reboot and be installing a guest OS (given that you have the cd) in mere minutes. []

      All of two commands after you've installed kvm:

      1. create disk image
      2. launch installer

      Maybe a little more description of the experience behind your 'one of, if not the, hardest to use productively' claim might persuade folks that the above is not as trivially simple as it looks.
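The two steps described above can be sketched roughly as follows. The helper only *prints* the command lines rather than running them, so the sketch is safe to inspect; the disk image size, memory size, and ISO name are placeholders, and qemu-img/kvm would need to be installed to actually run the printed commands:

```shell
# Print the two commands for a minimal KVM guest install.
kvm_install_cmds() {
    img=$1; size=$2; iso=$3
    echo "qemu-img create -f qcow2 $img $size"        # 1. create disk image
    echo "kvm -m 512 -cdrom $iso -hda $img -boot d"   # 2. launch installer from CD
}

kvm_install_cmds disk.img 10G ubuntu-7.10-desktop-i386.iso
```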
      • I believe the GP may be referring to the lack of a GUI. If not, then I am - VMWare, Parallels, VirtualBox and Xen all have various GUIs to manage them, some of which are very convenient. I am unaware of a convenient GUI for kvm (the Ubuntu wiki page to which you refer doesn't mention one), but perhaps I am just uninformed? If so, please correct me.

        Also - again, not sure about this, correct me if I am wrong - but kvm requires hardware support, i.e., a fairly new CPU. Whereas some or most other virtualizat
        • Let me ask something related. I have a kind of slow home machine running Ubuntu (an old 1st gen AMD Sempron 2300+ @ 1.56 GHz, DDR1, SATA1, AGP, you get the picture), and I'd like to have some additional OSes available as virtual machines to just play around and maybe, eventually, who knows, do something useful in them. But all as a simple end user, someone who, in Windows, would be playing on something easy such as Virtual PC (which I in fact used before removing Windows and switching to Ubuntu).

          So, from al
          • Re:kvm (Score:4, Informative)

            by kripkenstein ( 913150 ) on Tuesday February 12, 2008 @07:16AM (#22390220) Homepage
            VMWare Server is probably the safest choice. It's stable, works, and is fairly convenient.

            Parallels just came out with convenient installation for Ubuntu, I haven't checked it out yet. But it is supposedly very user-friendly on other platforms, so it might be worth a shot if VMWare isn't working out.
            • by jschrod ( 172610 )
              Except that VMware Server Beta2 replaced the simple server console app with a horrendous memory-eating Tomcat-based monster web application. I downloaded it once and when I saw a footprint of 350 MB without even a single VM instantiated, I checked the VMware forums. There, a clear commitment to this new management interface and its resource demands could be read, in spite of the protests of many many users.

              We will have to see how many have the type of hardware that one seems to need for VMware server 2. U

            • VirtualBox is just as good as VMware and it's open source. I prefer it for desktops and laptops and such. VMware is better for centralized server virtualization though.
          • by LWATCDR ( 28044 )
            VirtualBox seems to work really well.
            On an AMD AM2 machine I have run Vista Home with no problems.
            On my older AMD machine I have run Ubuntu, Kubuntu, Xubuntu, PCLinuxOS, and Minix 3 with no problems.
            Under Ubuntu you just install it from Add Software. The GUI is pretty simple and it seems to work just fine.

        • KVM has the same GUI that open-source Xen does, namely virt-manager. Virt-manager provides a GUI for creating guests as well as managing them. KVM does require hardware support, but I don't think that's a huge deal. VT/SVM has been around since ~2005, and I'd say that in many cases folks have access to newer processors or will very soon. And as you mention, those folks that haven't upgraded to a new processor since 2005 can certainly use other solutions.
      • You're right.
        The last time I tried it, it repeatedly failed with error messages that were, to say the least, unhelpful - but my information seems to be out of date. Using those instructions it was, indeed, trivially easy.
    • Re:kvm (Score:4, Informative)

      by twistedcubic ( 577194 ) on Tuesday February 12, 2008 @04:01AM (#22389294)
      Wow. Dude, KVM is the only one I've gotten to work with no fuss. For me it was downloading kvm-57.tar.gz, ./configure; make; sudo make install; qemu-img create ... image.img; qemu-system-x86_64 -hda ...; qemu-system-x86_64 image.img. Installing Debian is best (vs., say, Fedora) because it's faster with a smaller amount of memory. Now go ahead and download KVM and enjoy!
  • I for one (Score:2, Funny)

    by Monsuco ( 998964 )

    Slashdot shares a corporate overlord with
    And I for one would like to welcome ...
  • by Anonymous Coward on Tuesday February 12, 2008 @12:59AM (#22388370)
    Beware! Richard Stallman is behind this! First they replace init with an event-driven system. Next they start migrating services out of the kernel. It will be proposed, and seem natural, to have these services driven by events (after all, the init system is). Then of course it will seem obvious to abstract the event system into its own package. This package will be called Mach. Finally a name change will be proposed: that we rename Linux to GNU/Hurd. Don't say I didn't tell you! We will be the butt of our own Hurd jokes.
  • by arcade ( 16638 ) on Tuesday February 12, 2008 @01:59AM (#22388722) Homepage
    Mac uses launchd
    Ubuntu uses Upstart
    Solaris uses SMF
    Debian uses initng
    RedHat uses sysvinit (?? not sure ??)

    Meaning that a sysadmin who needs to support those systems has to write scripts that take care to use the correct mechanism on each and every platform. Blergh. I hate it when this kind of thing happens, instead of just sticking with the old stuff or _agreeing_ on a new way to do it. Instead, we now have a multitude of ways of doing it. Okay. Options are good. This isn't options, though - this is differences being forced on you by various vendors, guaranteeing that you'll have to do more work.
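The per-platform pain described above can be sketched as one wrapper that emits the native "restart this service" command for each init system. The mapping below is illustrative and far from complete (it only prints the command line, never runs it):

```shell
# Map an init system to its native restart invocation for a service.
restart_cmd() {
    init=$1; svc=$2
    case "$init" in
        sysvinit) echo "/etc/init.d/$svc restart" ;;
        upstart)  echo "initctl stop $svc; initctl start $svc" ;;
        smf)      echo "svcadm restart $svc" ;;
        launchd)  echo "launchctl stop $svc; launchctl start $svc" ;;
        *)        echo "don't know init system '$init'" >&2; return 1 ;;
    esac
}

restart_cmd smf ssh    # -> svcadm restart ssh
```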

    • by Daengbo ( 523424 ) <daengbo AT gmail DOT com> on Tuesday February 12, 2008 @02:04AM (#22388742) Homepage Journal
      Upstart still accepts the SysV init scripts.
      • by g1zmo ( 315166 )
        So does SMF. Maybe others do too, but SMF is the only other one I know well besides SysV init.
    • by ashridah ( 72567 )
      Debian uses initng

      Uh. Debian *can* use initng. I'm pretty sure it's not being installed by default, unless they've made this decision in testing (I don't have any testing/unstable boxes atm). That seems like a fairly irregular thing for debian to agree to do.

    • When I moved from Red Hat to Debian (remember Bruce Perens' UserLinux?) and to Ubuntu, the thing I missed was the clarity of SysV init and the simple tools to add and remove programs from a runlevel.

      The article quoted shows examples of upstart scripts. I don't quite see if compatibility with SysV init is a goal of upstart.

      It sure would be nice if upstart means easier application sharing between Red Hat and Ubuntu.
      • "When I moved from Red Hat to Debian (remember Bruce Perens' UserLinux?) and to Ubuntu, the thing I missed was the clarity of SysV init and the simple tools to add and remove programs from a runlevel."

        The problem is just your ignorance; don't put the blame on anyone else.

        Debian and Ubuntu *use* SysV init scripts by default and they have *one* single tool to add and remove programs from a runlevel (update-rc.d).
        • I happened to be interested in this, so took the opportunity to see what man had to say about update-rc.d.

          Please note that this program was designed for use in package
          maintainer scripts and, accordingly, has only the very limited
          functionality required by such scripts. System administrators are
          not encouraged to use update-rc.d to manage runlevels. They should
          edit the links directly or use runlevel editors such as
          sysv-rc-conf and bum instead.

          This is from Ubuntu 7.10 x86_64.
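What "edit the links directly" means in practice: SysV runlevels are just directories of S (start) / K (kill) symlinks pointing back at /etc/init.d scripts, which is exactly what update-rc.d manages for you. A safe-to-run simulation in a temp dir ("myservice" and the sequence number 20 are made up):

```shell
# Recreate the /etc/rc2.d symlink layout in a throwaway directory.
root=$(mktemp -d)
mkdir -p "$root/init.d" "$root/rc2.d"
printf '#!/bin/sh\necho started\n' > "$root/init.d/myservice"
ln -s ../init.d/myservice "$root/rc2.d/S20myservice"   # start at sequence 20
ls "$root/rc2.d"   # -> S20myservice
```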

          • "I happened to be interested in this, so took the opportunity to see what man had to say about update-rc.d."

            So Debian *does* have SysV-style init scripts, doesn't it? And Debian *does* have simple tools to add and remove programs from a runlevel (that's exactly what update-rc.d does), doesn't it?

            What was your point, again?
    • by Yfrwlf ( 998822 )
      Which is why everything needs to stay on a common standard communication interface while behind-the-scenes system reformation is allowed to occur, and entirely new features are tacked onto the interface as additional settings or new commands, ensuring backwards compatibility.

      There is no point in using a completely different interface for two things that do the same thing though, and going back to the topic Upstart is supposed to be "compatible", though it'd be nice to use an identifier/name and flags that
      • "There is no point in using a completely different interface for two things that do the same thing though"

        Unless, of course, I really think my way is better (on the other hand, why would I try a different path if I didn't think it's better?).

        "I believe Linux could be completely seamless between most all distros"

        If they were so seamless, what would be the segregating factor to choose one over the other?

        "I hope one day us Linux users will actually have the freedom to use whatever system we prefer"

          • I think you'd prefer not to have to spend the time and effort analysing those different things: of course, the more similar the choices, the lower the risk of being wrong. But that's not freedom; it's laziness.

          I disagree. Myself, I would try more distros if I didn't have to spend the time learning a new way of thinking each time. Why? Because I'm lazy? No, I have a single computer that I use for my work. The more time spent on system tweaking/learning is less time doing the work I have an interest in.

          I believe Linux could be completely seamless between most all distros

          If they were so seamless, what would be the segregating factor to choose one over the other?

          The community, performance, etc. etc. Remember, your parent is discussing the interface ... not the implementation. For example, yum and apt-get are pretty close in what you can do with them. Why not try to

          • "I disagree. Myself, I would try more distros if I didn't have to spend the time learning a new way of thinking each time."

            Two points:
            1) If you think learning a new distribution is "learning a new way of thinking" then you are still too much of a rookie (it's only a matter of learning some specific tools, not much more).
            2) Again, if they worked the same, why lose time testing the same thing?

            "For example, yum and apt-get are pretty close in what you can do with them. Why not try to unify the interface?"

            For once, yum
            • I disagree, so I guess we can agree to disagree ;)
            • by Yfrwlf ( 998822 )
              "...because apt-get developers already thought about it and they still thought their way was better..."

              You're talking about differences between programs. You're on the wrong page. I'm not saying there shouldn't be differences between programs. I'm saying programs should try to use the same interfaces where possible and not just user interfaces if and where desired/able, but inter-program interfaces too, because you should use standardized things where and when possible to make your programs modular. T
              • "Then, for the flags, you could have them use the same names "

                Then again, the stupidity of looking at it from the end-user point of view... The end user won't EVER use anything but the flags someone programmed; it's plain obvious. Now, go and tell the programmer he should use GNU-like long options when he (for whatever reason) dislikes them.

                I really don't know the syntax for yum, but let's say for the sake of argument it goes like yum -install package where apt-get goes apt-get install package. Are *you*, as an
            • by Yfrwlf ( 998822 )
              Oh and one more thing, things are changing, modular programming is becoming more and more popular thankfully. Just so you know. But, things still need more work, and certain companies may not be so eager to see it happen as some companies are competitors. Programs and companies should compete, but everyone's freedom should not suffer because of it, it should not make things more proprietary. Open source is all about working together (at least, often it is) to create great things, but that spirit doesn't
              • "things are changing, modular programming is becoming more and more popular thankfully"

                Modular programming? Maybe. Modular "toolizing"? It's decades old. What do you think `ls -l | less` or `echo time > out.txt` means? But then there comes some guru deciding that piping, IPC, POSIX system calls... are too cumbersome and that there's the need for some "high level abstraction" or "service oriented" monolithic approach and, guess what? As long as he puts his effort where his mouth is, you end up with some peop
          • by Yfrwlf ( 998822 )
            Exactly right on all counts. (see my reply to parent also if interested) Everyone should be concerned about bringing "systems" together, because as you do that you bring users and developers together, and make learning and life easier for everyone. I'm happy that there is a growing interest in keeping things modular so that we all have more freedom. It's one of many things that will help to make Linux more powerful, too, but it's simply intelligent programming. If I go to write a program to do someth
            • Good stuff. I don't get the obsession people have with wanting to keep everything complicated. It's like they equate making things easier as being 'noobish'*. Just because I can handle complicated things that doesn't mean I wish to. I'm glad to see your response, I didn't have the energy to do a counter-argument.

              * I use the term noob on purpose because it's ridiculous. Everyone starts out as a beginner. I find it quite arrogant and obnoxious.
              • by Yfrwlf ( 998822 )
                Yes it's very true, I like using noob because I think it's silly and playful, but yeah, while it may be fun for you writing a script to launch a program in a certain way at a certain time, and while it's what I do for a living, it's NOT what I want to always do for fun lol, sorry! You can enjoy a car without knowing everything about the inside of it. Besides we HAVE to make things easy if we want any kind of Linux adoption, and I do, or more specifically I want the adoption of a truly free system, because
        • by Yfrwlf ( 998822 )
          I think you misunderstood the way I truly meant my post. I'm not saying to make everything the same. Far, far from it. I'm saying that the common *interfaces* could be made the same, so that things are *modular* and can be more easily swapped out for *other* things, giving users more freedom. If I have to install OS Y (avoiding OS X jokes) in order to get program Y to run, and can only use OS Z to get program Z to run, that's not freedom, that's lock-in. I'm being locked in to this one particular OS to
    • There is no way Apple will agree on agreeing with anyone else - especially not with some pesky open-source distributors or Sun, who are not in the desktop but the server market (mostly). They are either a) going to buy the stuff (CUPS, KHTML) or b) completely ignore everyone else and roll their own, like they always did.

      I don't say this is a bad thing - I'm rather trying to point out that the OP is wishful thinking. And to add to the list: Gentoo has several alternatives. Heck, there are two versions of their ba
      • When I moved my laptop from a tuned Gentoo (einit) to Debian I was really happy to know suspend is finally in the kernel - it would take forever to boot.

        When I went to 2.6.23, it was before Mandriva had an officially blessed update available, hell they still might not, but I had to patch in tuxonice and modify my init scripts to work with it and write a daemon to automate the hibernation process. It was kind of annoying, but not a very big deal. There is a reason why we're not running Windows on everything.
      • There is no way Apple will agree on agreeing with anyone else - especially if it's some pesky open-source distributors or Sun, who are not in the desktop, but the server market (mostly).

        Actually, Apple frequently adopts technologies and standards from both Sun and Linux distros. The thing is, Apple also is not willing to compromise on all things and will sometimes roll their own when they think they can do it better and cleaner (sometimes just better for them). They are less interested in being just like everyone else than they are in doing what is right for their developers and users. I think LaunchD is one example of this and Webkit is another. In the first case they created a new solu

      • "In a year's time we'll probably have a winner... or, let them be two years."

        Yes, of course. SysV vs BSD has been there, how long? 25 years? But, of course, it'll settle now in a year or two. Yes, of course: Red Hat will throw away their years of development and fine-tuning of their init scripts because it will settle. Debian... well, even if they decided it (and they won't), you probably won't see Lenny's successor in such a timeframe.

        You forget that:
        1) There are not such terrible differences between
    • Don't forget Gentoo has its own weird (but improved, IMHO) system. If only their package management was as nice as their init stuff.

      For my own software, I prefer to write an init-script for Ye Olde sysvinit and the actual distro I'm currently running, and leave others out in the cold.
  • At last year's Linux.Conf.Au I attended both the virtualization mini-conf and kernel hacker virtualization talks with interest, since I dabble a bit in virtualization but not enough to keep up to date on current trends.

    I was struck with the immense gulf in opinion between the "virtualization folks" and the "kernel folks".

    Most (possibly all) of the talks in the virtualization stream could be summarized as "Xen! Xen! Xen! Yay! Yay! Yay! Xen, xen, xen, xen xen, xen, xen. Xen! Xen Roxx0rs! Xen! Clients! Xen! Xen! XEN!!!". Lots of action, lots of progress, lots of excitement, lots of Real People in Real Companies doing Real Work and discovering Best Practices.

    It was quite a shock to walk into the "kernel hacker Q&A" with kernel maintainers from several big Linux distros and some major names and hear a simple "Xen sucks. Use KVM". Talking to one unnamed kernel hacker who actually wrote a big chunk of Xen code, even he basically flat out said Xen was a terrible solution which he only saw as a stopgap until KVM had picked up some speed.

    So the impression I walked away with was that while Xen is the current poster child for virtualization, its days are numbered.

    Once KVM has had time to move away from being shiny new code that only a kernel dev could love to a Real Product, Xen is going to have its ass kicked by the new Blessed Child.

    Fortunately I don't have anything invested in either side (I mostly use qemu because my needs are more for pure isolation and speed isn't needed at all) but it looks like this match is shaping up as a hell of a flame war.

    And by the sounds of it, Ubuntu just lit up the first flamethrower on behalf of KVM.

    Now where did I put those marshmallows.
    • So, what, your point is that, since KVM is better than Xen in the opinion of experienced techies, it will kick Xen's ass on the market?

      When did technical superiority ever matter on the market?
      • If we end up with two competing open source virtualization strategies with commercial support, but one has the blessing of (and encouragement from) the kernel devs and the other doesn't, then over the medium term, quite possibly yes.

        Or at least that's the impression I got from the kernel dev I had a few drinks with.

        But hey, he could be wrong.

        My point is that by the looks of things, we've got a pretty major competitive war coming, and Ubuntu going to KVM isn't as unusual a choice as it might seem.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        KVM is better than Xen in the opinion of the kernel maintainers. Therefore, KVM will be (already is) available in the stock Linux kernel, and all distros that use any recent-ish kernel. KVM will be available pretty much everywhere by default, and requires very little in the way of distro support - a couple of simple userland apps will handle everything, and they don't even need root access to work.

        Once KVM has the remaining kinks worked out of it, it'll be everywhere (on Linux at least) by default, and will
        • IIRC, at the time of those discussions, it was said that Xen can always use KVM.

          KVM is essentially the official kernel interface to the virtualization hardware. But Xen has to work even without hardware support, so they do lots of things on their own.

          As soon as the number of servers with hardware-supported virtualization reaches critical mass, Xen would start using KVM where available, since it is part of the kernel, since it is the official interface.

          IOW, in the long term Xen and KVM are complementing, not competing, technologi

      • When did technical superiority ever matter on the market?
        Since when did market share dictate Linux design decisions?

        Linux (and open source in general) is much less swayed by commercial popularity than proprietary vendors are.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      "the first" flamethrower? Huh? Fedora has been supporting KVM for a while! Yes, Fedora _also_ supports Xen, but that might change, given that Xen is always a few kernel versions late and Fedora ships very recent kernels (so there's now a special old kernel-xen used for the Xen Dom0 and DomU kernels, basically the situation sucks).
    • Re: (Score:2, Informative)

      by digipres ( 877201 )
      And if the virtualisation waters weren't already muddy enough, we have kernel hacker Paul [] Rusty [] Russell coming up with lguest [] .

      So we have a kernel guy and his own take on Linux and virtual machines. This may prove hugely popular, though I hear that not too many turned up for Rusty's lguest tutorial at LCA08. Then again that may be because he scared us off with a "if you haven't done the homework, d
    • "Fortunately I don't have anything invested in either side (I mostly use qemu"

      Then you are doubly fortunate since KVM userspace tools and even the "native" disk image format are those from qemu.
  • Doesn't KVM require virtualisation extensions to run? Will Ubuntu be integrating qemu's CPU virtualisation into the UI they write so it can be used for more than a tiny fraction of the Ubuntu using population?
    • Re: (Score:3, Informative)

      by CmdrTHAC0 ( 229186 )
      It requires them to run with performance, yes. It falls back to pure emulation when they aren't available, because it *is* still qemu as well.

      Meanwhile, my crystal ball shows me that VT-capable hardware is not going away, so the "tiny fraction" will become the majority. It seems important to consider them when thinking of future directions.
      • by Bloater ( 12932 )
        If Ubuntu's standard development suite is benchmarked virtualising at 1/5 to 1/10 the speed of Microsoft's Development suite, Ubuntu loses.

        I mean, 1/5 to 1/10 the speed, Dude. And this existing hardware isn't going away any time soon and it can't just go to landfill for people to replace it. They'll just run Windows and Virtual Server - it's so easy and fast.
  • If UML gets SKAS support on AMD working there wouldn't be any argument about what is the best VM on Linux. It works great on Intel hardware.
  • Doesn't KVM require support for VM in hardware? ( ie, newer hardware ).

    Leaves out a lot of people that don't have brand new fancy stuff. ( or have things changed )
