
oVirt 3.4 Means Management, VMs Can Live On the Same Machine

Posted by timothy
from the right-there-in-the-open dept.
darthcamaro (735685) writes "Red Hat's open source oVirt project hit a major milestone this week with the release of version 3.4. It's got improved storage handling so users can mix and match different resource types, though the big new feature is one that seems painfully obvious. For the first time oVirt users can have the oVirt Manager and oVirt VMs on the same physical machine. 'So, typically, customers deployed the oVirt engine on a physical machine or on a virtual machine that wasn't managed or monitored,' Scott Herold, principal product manager for Red Hat Enterprise Virtualization said. 'The oVirt 3.4 release adds the ability for oVirt to self-host its engine, including monitoring and recovery of the virtual machine.'" (Wikipedia describes oVirt as "a free platform virtualization management web application community project.")


  • by TWX (665546) on Saturday March 29, 2014 @02:55AM (#46608489)
    ...around the supposed benefits of server-side virtual machines.

    You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

    I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?
    • Re: (Score:3, Funny)

      by invictusvoyd (3546069)

      but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

      Some hosting customers require control of the OS to run their proprietary security and optimization apps. Besides, virtualization allows for efficient utilization of hardware, power, and rack space.

    • by MightyMartian (840721) on Saturday March 29, 2014 @03:04AM (#46608511) Journal

      As someone who uses KVM VMs, I see a number of advantages:

      1. More efficient use of resources. A dedicated server usually idles a lot, and those cycles do nothing. Running guests allows empty cycles to be put to work.
      2. Load balancing and moving resources around is a lot easier. Have a busy host, move a guest to an idle one.
      3. Hardware abstraction. This is the big one for me. Guests are no longer tied to specific hardware, and I can build a new VM host and move guests to it a helluva lot more painlessly than I could with an OS installed directly on hardware.
      4. Backup options. Coupled with functionality like logical volumes, I can make snapshots for backup or testing purposes with incredible ease.
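The snapshot idea in point 4 can be sketched as a toy copy-on-write model in Python. This is purely illustrative; real LVM snapshots work at the block-device level, not on Python dicts.

```python
# Toy copy-on-write model of an LVM-style snapshot: the snapshot is
# cheap because it initially shares every block with the origin, and
# blocks only diverge when one side writes.

class CowVolume:
    def __init__(self, blocks):
        # block number -> contents; each volume keeps its own block map,
        # but the contents themselves start out shared
        self.blocks = dict(blocks)

    def snapshot(self):
        # instant: copies only the block map, not the data
        return CowVolume(self.blocks)

    def write(self, blockno, data):
        # diverge only this volume's view of the block
        self.blocks[blockno] = data

origin = CowVolume({0: "boot", 1: "data-v1"})
snap = origin.snapshot()    # instant, shares all blocks
origin.write(1, "data-v2")  # origin moves on; snapshot keeps the old data

print(snap.blocks[1])       # data-v1
print(origin.blocks[1])     # data-v2
```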

    • by Anonymous Coward

      ...around the supposed benefits of server-side virtual machines.

      You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

      I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

      jeez, it's 2014. i think you've missed something, somewhere.

      how about running multiple different OSes on a single piece of hardware?
      how about decoupling your workload from hardware?
      how about hardware upgrades without downtime (move the vm, put the hv into maintenance, do the firmware upgrade or whatever)?

      to name just a few.

      • Re: (Score:2, Insightful)

        by jon3k (691256)
        The scary thing is it's modded +5 interesting. Seriously, Slashdot?
    • What's good for paying customers is also good for you. If you are running a host OS on a single box and you need to expand, then you will need to get another box with the same hardware so you can load the same host OS onto it. On the other hand, if you are virtualizing your own host OS, then you can throw disparate hardware at the problem.
    • by omglolbah (731566)

      Well, one reason is when you have a vendor which does not support your system -at all- if you install any unauthorized software packages or even OS updates that have not been cleared.

      At that point you want 'clean' VMs that follow the vendor spec exactly.

      • "At that point you want 'clean' VMs that follow the vendor spec exactly."

        Except, of course, when the vendor insists that their software shouldn't be virtualized at all.

        • by Lehk228 (705449)
          then you find a vendor who has not been asleep at the switch for the last decade and a half
    • by jimicus (737525)

      A couple off the top of my head:

        - You wouldn't believe the number of poorly written applications that will happily bring a server to its knees no matter how powerful. This way you can reset just that application, not the whole business.
        - An application that was never written with any sort of HA in mind can be made highly available without any changes.

    • Play with one under a decent system at some point. They're useful for all of the reasons people have already given, plus they make fantastic forensic, repair and testing environments.

    • by Anonymous Coward
      Every time the topic comes up, someone like you comes along and acts confused.

      I really can't be bothered to explain, and other posters have had a stab. However, I will point out the irony, given your sig: IBM have been doing LPARs on S/370 since 1972. Yet apparently the concept of virtualising your platform is still new and confusing to some people.
    • I'd say the killer feature is pure remote management. You don't need to physically manage your systems anymore.

    • ...around the supposed benefits of server-side virtual machines.

      You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

      I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

      There are tons of benefits to virtualization. One is more efficient use of resources on the server (CPU, RAM, I/O, etc.). The other is the ability in some platforms to actually move running VMs from server to server which can be useful for balancing resources and maintenance. You already pointed out the benefit in a multi-tenant environment.

      Done properly, where it makes sense (which is for many, many applications) it can save money and provide a more robust environment.

    • One big issue is that virtual machines allow for different OSes. So if you provide a variety of services, legacy applications for example, you can consolidate them all onto one machine.

      It also allows for easier testing. Say for example you need to stress test your application on some combination of Red Hat, SUSE, Debian, FreeBSD, WinServer, Mac, and Solaris, or even a variety of different versions of those OSes. Putting them all in virtual machines is much simpler than re-installing or having a dedicated machine for each.
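That OS/version matrix is easy to enumerate programmatically; a minimal Python sketch, where the inventory below is made up purely for illustration:

```python
# Enumerate an OS/version test matrix; each combination would map to
# one disposable VM. The inventory here is illustrative, not real.
oses = {
    "RHEL": ["6.5", "7.0"],
    "Debian": ["7"],
    "FreeBSD": ["10.0"],
}

matrix = [(name, ver) for name, vers in sorted(oses.items()) for ver in vers]
print(len(matrix))  # 4 combinations -> 4 VMs to spin up
```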
    • by jon3k (691256)
      Uh, it's pretty simple. Take 50 relatively idle servers. Combine them onto two physical servers as VMs. Spend 1/25th the money on servers and have complete hardware redundancy for every host, which is now a VM. What is there not to get?
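The parent's arithmetic holds up as a back-of-the-envelope sketch; the unit price below is an assumption, chosen only to show the ratio:

```python
# Back-of-the-envelope consolidation math from the parent comment.
# price_per_server is an assumed, illustrative figure.
idle_servers = 50
consolidated_hosts = 2
price_per_server = 4000

before = idle_servers * price_per_server        # 50 mostly idle boxes
after = consolidated_hosts * price_per_server   # 2 hosts (same unit price assumed)
print(before // after)  # 25 -> roughly 1/25th the hardware spend
```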
    • by nine-times (778537) <nine.times@gmail.com> on Saturday March 29, 2014 @10:39AM (#46609505) Homepage

      I may be confused, but... are you questioning the whole idea of hypervisors on servers at all?

      There are a lot of reasons for that. One of the simple reasons is that it's cheaper. When you're working in IT, you often have a bare minimum of hardware you have to buy with each server in order to be safe, e.g. dual hot-plug power supplies, hot-plug RAID enclosures and drives, lights-out management, etc. Because of that, each server you buy is going to end up being about $4k minimum, and the price goes up from there. If you have to buy 5 servers, you might be spending $25k even if they aren't powerful servers. However, you may be able to run all of those servers on a single server that costs $10k. In addition to the initial purchase being less, it will also use less power, take up less space, and put out less heat. All of that means it'll be cheaper over the long term. It will also require less administration. For example, if an important firmware update comes out that requires a certain amount of work to schedule and perform, you're doing that update on 1/5 of the servers you would be doing it on. Oh, and warranty renewals and other support will probably be cheaper.

      So more directly addressing the question, which I think was, "Why not just buy one big server and install everything on it?" There are lots of reasons. I think the most important reason is to isolate the servers. I'm a big believer in the idea of "1 server does 1 thing", except when there are certain tasks that group well together. For example, I might have one server run the web and database services for multiple web apps, and another run DNS/DHCP/AD, but I don't really want one server to do both of those things.

      And there are a few reasons for that. Security is a big one. There are services that need to be exposed to the internet, and then there are services where I don't want the server running them to be internet-accessible. Putting all of those services on the same physical server creates a security problem, unless I virtualize and split the roles into different virtual machines. Or it may be that I need to provide administrative access to the server to different groups of people, but no group can have administrative access to the others' data. Hosting providers are a good example of this: You and I could both be hosting our web applications on the same physical machine at the same hosting provider, and we both might need administrative access to the server. However, I don't want you having access to my files and you don't want me having access to yours.

      Another big reason you'll want to isolate your servers is to meet software requirements. I might have one application that runs on Windows, but is only supported up to 2008R2. I might have another application or role that needs to run on Linux. I might have a third role where I really want to use Windows 2012R2 to take advantage of a feature that's unavailable in earlier versions of Windows. How would I put those things on the same server without using virtual machines?

      Isolating your servers is also good because it tends to improve stability. Many poorly written applications can cause crashes or security problems, and keeping each on its own VM prevents those applications from interfering with other applications running on the same physical hardware. I can even decide how to allocate the RAM and CPU across the virtual machines, preventing any one application from slowing down the rest by being a resource hog.
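That RAM/CPU allocation point can be sketched as a tiny proportional-share function in Python. This is a hypothetical helper for illustration only; real hypervisors enforce caps in their schedulers, not like this.

```python
# Sketch of capping VM resource allocations so one hog can't starve
# the rest: scale every request down proportionally when the host is
# oversubscribed. Hypothetical helper, for illustration only.

def allocate(total_mb, requests):
    """Grant each VM its requested RAM, scaled down if oversubscribed."""
    asked = sum(requests.values())
    scale = min(1.0, total_mb / asked)
    return {vm: int(mb * scale) for vm, mb in requests.items()}

# a 16 GB host with one VM asking for everything
grants = allocate(16384, {"web": 4096, "db": 8192, "hog": 16384})
print(grants)  # the "hog" gets scaled down along with everyone else
```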

      Aside from all that, there are a bunch of other peripheral benefits. For example, with virtual machines, you have more options for snapshotting, backup and replication, restoring to dissimilar hardware, etc. With traditional installs, I need special software to do bare-metal restores in case something goes wrong, and the techniques used in that software often don't work quite right. With virtualized machines, I just need the VM's files copied to a compatible hypervisor, and I can start it up wherever I need to. With the right software, I can even move the whole VM live, without shutting it down, to another physical server.

      There are probably a few other benefits that I'm just not thinking of off the top of my head.

    • by red crab (1044734)
      AIX Workload Partitions and Solaris Zones already implement that concept, but it's more about application mobility than optimal performance. It usually makes sense to have your own box when you need more control of your environment. And anyway, the resources are dynamically allotted, so it's unlikely that your web server box would be holding on to its 32 GB of allocated memory even when it's not heavily loaded.
    • by tji (74570)

      Common inexpensive server machines are very powerful today. Many cores, many GB of RAM. It becomes a management and flexibility nightmare to host all the desired servers on a single operating system.

      For example, group A needs a web app hosted in a Tomcat environment; B needs a JBoss-based app; C and D need two different Django apps; E and F need Rails apps. All of those apps together still only need 10% of the resources of the server. So, you can also host 20 other services on it. Good luck managing that.

    • by bolsh (135555)

      Some benefits:

      1. Instead of running 10 services on one physical machine the way we used to, you run one service per VM (one web server, one middleware server, one database server, etc.) - you add the overhead of multiple operating system runtimes, but thanks to hypervisor optimizations, identical memory pages are merged, so you don't use much more RAM
      2. If one server gets over-subscribed, you just live migrate a running service to a less loaded server. No more rebuilding the infrastructure on another server.
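The page-merging claim in point 1 (KSM, in the KVM world) can be modelled crudely: count unique page contents instead of total pages. A toy model, not the real scanning algorithm:

```python
# Crude model of kernel samepage merging: identical pages across VMs
# are kept once and shared copy-on-write.

PAGE_SIZE = 4096

def merged_footprint(pages):
    """Bytes needed if identical pages are shared."""
    return len(set(pages)) * PAGE_SIZE

# three VMs booted from the same image share kernel/library pages
vm_pages = (["kernel", "libc", "app-a"]
            + ["kernel", "libc", "app-b"]
            + ["kernel", "libc", "app-c"])
print(merged_footprint(vm_pages) // PAGE_SIZE)  # 5 unique pages instead of 9
```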

  • Now if we could just get it interfaced to the Open Cloud Computing Interface (https://en.wikipedia.org/wiki/Open_Cloud_Computing_Interface), all would be well.
