
Novell's Virtualization Partnership 54

Jane Walker writes "The push for a virtual data center and utility computing continued this week as Novell announced that SuSE Linux would have support for Virtual Iron out of the box." Novell has also guaranteed 'that all existing independent software vendor (ISV) certifications will not be affected.' From the article: "'The applications certification [component] is huge,' said Novell director of data center applications Justin Steinman. 'Customers want to know that their existing applications are not going to break when they deploy their technology [on a virtual server].'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by 0110011001110101 ( 881374 ) on Wednesday February 08, 2006 @02:44PM (#14671437) Journal
    For those, like me, not immediately aware of what virtualization actually is, here's Wikipedia with some more detail!!

    In computing, virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. This new virtual view of the resources is not restricted by the implementation, geographic location or the physical configuration of underlying resources. Commonly virtualized resources include computing power and data storage.

    A good example of virtualization is modern symmetric multiprocessing computer architectures that contain more than one CPU. Operating systems are usually configured in such a way that the multiple CPUs can be presented as a single processing unit. Thus software applications can be written for a single logical (virtual) processing unit, which is much simpler than having to work with a large number of different processor configurations.

    A new trend in virtualization is the concept of a virtualization engine which gives an overall holistic view of the entire network infrastructure.

    Virtualization is a broad term that refers to the abstraction of resources across many aspects of computing. Some common applications of virtualization are listed below.

    A virtual machine is an environment which appears to a "guest" operating system as hardware, but is simulated in a contained software environment by the host system. The simulation must be robust enough for hardware drivers in the guest system to work. With paravirtualization, the virtual machine does not simulate hardware but instead offers a special API.

    Operating system-level virtualization is virtualizing a physical server at the operating system level, enabling multiple isolated and secure virtualized servers on a single physical server.

    Partitioning is the splitting of a single, usually large, resource (such as disk space or network bandwidth) into a number of smaller, more easily utilized resources of the same type. This is sometimes also called "zoning," especially in storage networks.

    Aggregation, spanning, or concatenation all combine multiple resources into larger resources or resource pools. For example, symmetric multiprocessing combines many processors; RAID and volume managers combine many disks into one large logical disk; RAIN and network equipment uses multiple links combined to work as though they offered a single, higher-bandwidth link. At a meta-level, computer clusters do all of this.

    Wikipedia article [wikipedia.org]

    and another great article with an introduction to Virtualization [kernelthread.com]

  • thank you novell (Score:5, Interesting)

    by Pavel Stratil ( 950257 ) on Wednesday February 08, 2006 @02:49PM (#14671488) Homepage Journal
    When Novell bought SUSE, I thought that nothing apart from the name change would happen. Over the last months Novell turned out to be a big surprise for me. Those guys really push some innovation into the Linux world. Just to remind you, there's Novell's xforms implementation, support for a large number of open source projects (e.g. GNOME), or, among current efforts, the most-wanted Win/Mac apps poll and the opening of Xgl... pretty cool.
  • by PornMaster ( 749461 ) on Wednesday February 08, 2006 @02:57PM (#14671555) Homepage
    I've not really seen any reports of utility computing being used on a regular basis. Is anyone actually using it day to day? I can see how something like the Sun Grid would be used for special projects, but I'm not convinced that general-purpose utility computing is suitable for most companies in their ongoing operations.

    That's not to say that virtualization isn't happening, and that it wouldn't also be useful for utility computing... but the real world examples I hear about aren't related.
    • by PCM2 ( 4486 ) on Wednesday February 08, 2006 @04:12PM (#14672226) Homepage
      The only mention of utility computing in TFA is this:
      "The utility computing, base data center model everyone is striving for today cannot be done without virtualization," Walsh said.
      I guess the idea is that virtualization can be a step in that direction, but otherwise you're sort of offtopic.
    • by lucabrasi999 ( 585141 ) on Wednesday February 08, 2006 @05:43PM (#14672979) Journal
      I can see how something like the Sun Grid would be used for special projects, but I'm not convinced that general-purpose utility computing is suitable for most companies in their ongoing operations.

      My current client is trying to move to utility computing, and they expect to save hundreds of millions of dollars over the course of a decade. This is because they have over 1,200 Wintel servers that are, on average, only using 5% of their CPU. By moving all of the dev and test servers (and the less critical prod servers) into a virtualized environment, we think we can reduce their hardware footprint to about 100 Wintel servers.

      On the Unix side, they are moving to a virtualized flava (as in Flava Flav) of Unix, where they can begin to use less than 15% of each CPU (if they so desire). In other words, if we actually make this project work, it will be a huge success. Check back with me in about three years.
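
      For a rough sense of the arithmetic behind the parent's claim, here is a minimal Python sketch (the 60% target utilization per consolidated host is my own assumption, not a figure from the comment):

          # Back-of-the-envelope consolidation estimate using the figures quoted above.
          physical_servers = 1200
          avg_utilization = 0.05       # each existing box averages ~5% CPU
          target_utilization = 0.60    # assumed safe ceiling per consolidated host

          cpu_demand = physical_servers * avg_utilization    # ~60 "full" CPUs of real work
          hosts_needed = cpu_demand / target_utilization      # ~100 consolidated hosts
          print("about %.0f hosts to absorb %d servers" % (hosts_needed, physical_servers))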

  • by Anonymous Coward on Wednesday February 08, 2006 @02:58PM (#14671571)
    Just a month ago, we got budgetary approval on migrating our entire Windows/Linux/BSD datacenter from individual machines to virtual. We selected VMware's ESX Server as our hypervisor platform. We'll be moving over 200 physical servers to about ten of what VMware calls "virtual infrastructure nodes". For storage, we'll be using our fiber-channel EMC Clariions (two CX700's) and some new iSCSI storage. I've been researching this for over a year now and the time is right. 2006 will be the year that virtualization really takes off and goes mainstream.

    FYI: The only thing we're not moving to ESX will be our 8 and 16 CPU SQL Servers. As it stands right now, ESX only allows 2-way virtual SMP. With ESX 3.0 in Q2, they will up that to 4-way virtual SMP. Nonetheless, anything requiring a ton of throughput is best left to dedicated hardware as opposed to VM's. (for now, anyway)
    • Yes, 2006 is definitely the big year for virtualization. I also think 2006 will be a big year for iSCSI (IP) SANs. Vendors like EqualLogic (http://www.equallogic.com/ [equallogic.com]) have made IP SANs enterprise-ready, and there are also many products available for the lower end.
  • by digitaldc ( 879047 ) * on Wednesday February 08, 2006 @03:08PM (#14671662)
    "It sounds as if Novell will be shipping a variant of their standard kernel with the changes needed to support Virtual Iron," Haff said.

    I had a virtual iron once, but I had to get rid of it because I became constantly worried that I had left the damn thing on.
    • I could never get my virtual iron to work, something about it not being AC compatible and needing a virtual power source. And don't even get me started on the adapters needed to get it working overseas ...
  • VMware and VI (Score:4, Informative)

    by TheSpillmonkey ( 942973 ) on Wednesday February 08, 2006 @03:09PM (#14671675)
    VMware ESX and Virtual Iron target different market segments. They both virtualize, but Virtual Iron utilizes a dynamic Linux cluster of machines (there is a compatibility list for hardware as well as software; "SuSE compatible" means on the client side) and requires a lot of specialized low-latency hardware such as InfiniBand fiber components (starting in the $15k range for the very, very low end). VMware ESX runs on a single high-end box and has a much lower price point. They really don't cross over as much as you would think. BTW, I have my VCP for VMware and have also been working closely with Virtual Iron for the past 6 months or so (they don't have official certs yet). Both are very good products. I can't wait for ESX 3 and the next VI product in the following quarter (big stuff happening there).
  • The Gambit (Score:4, Informative)

    by lazarus ( 2879 ) on Wednesday February 08, 2006 @03:27PM (#14671819) Journal
    Over the past week I think I have installed (or tried to install) every freely available open-source Linux virtualization technology:

    - Xen
    - Linux-vserver
    - OpenVZ

    Or researched others:

    - OpenVPS
    - FreeVPS

    And ones that are not open source:

    - VMware Server (the new free Beta version of the old GSX Server product)

    My personal recommendation is that you not bother unless you have a lot of time to kill and don't mind disappointment. I have nothing but respect for the fine (and very smart) people who are working on this technology for Linux, but it's not ready for simple people like myself.

    I spent two full days (about 24 hours total) working on Xen and in the end I was never able to get iptables to work in a domain. The documentation was mostly incomplete and thus there was a lot of scurrying around trying to find bits and pieces of info that would allow me to get it together.

    I had the most success with linux-vserver, and it was by far the easiest to get running (after I had recompiled the RPMs (FC4) for my x86_64 SMP target machine). My first vserver was pretty badly mangled once I was done with it, and, wanting to remove it, I found that there was no actual *documented* process for deleting it. I dare you to try to find a description anywhere of how to remove a vserver...

    Finally I pooched my system by trying OpenVZ.

    Virtualization is a "good thing" in my opinion, and as an architect I build it into many of my designs. But in the free Linux space you might end up asking yourself, "do I really need it?" For me the answer is "yes," since I want to run multiple mail servers with different configurations on the same box. For you, unless you really need it, you might want to see if you can make do the old-fashioned way.

    I'm going to keep playing. If something you have tried works really well for you in a FC4/x86_64/SMP environment please let me know.
    • Re:The Gambit (Score:2, Interesting)

      by five18pm ( 763804 )
      How is VMware Server? We have been using VMware Workstation for many of our tests. It is a fine piece of software. Our testing is basically installing operating systems repeatedly in a hosted environment. If I had tried that on physical machines I would have been hosed a long time ago.

      I am planning to move to VMware Server now that it is available for free. Let me see how that gambit goes.
        VMware Server seems to be working fine. I've installed it on a test server and have copied a few existing virtual machines across to see how well they work. The only real problem so far is that the Windows console app seems to be a bit wobbly over a DSL link, whereas GSX, although slow, was still usable. It took me 10 minutes to log in to a Windows 2003 server and change a password through the console via DSL.

        I built the system up from a CENTOS server 4 CD. Downloaded the main RPM and MUI .tar.gz. Inst
    • Re:The Gambit (Score:3, Informative)

      by Bender0x7D1 ( 536254 )
      We are using Xen in an educational environment - providing a network of VMs to each student so they can play around, break stuff, and get some practical experience.

      For your iptables problem... I think you just need to add a rule to forward packets so they will be transferred through the software bridge. Make sure to use -I to insert the rule; appending it doesn't help, since the packet will be dropped before it ever reaches your forward rule. (A sketch of what I mean is below.)
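
      A minimal sketch of that rule, wrapped in a small Python helper (the physdev match and the rule position are my assumptions; adjust for your own bridge setup):

          import subprocess

          # Insert (-I, not -A) an ACCEPT rule at the top of FORWARD so bridged
          # domU traffic matches before any earlier DROP/REJECT rule does.
          rule = ["iptables", "-I", "FORWARD", "1",
                  "-m", "physdev", "--physdev-is-bridged",
                  "-j", "ACCEPT"]
          subprocess.run(rule, check=True)   # needs root and the physdev match module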

      I agree with you that documentation isn't all that great right now, but
    • by Anonymous Coward
      I've also played around a lot with the alternatives. Notably missing on your list are the two that worked best for me.
      • QEMU - works well enough for me to test our software on Red Hat, SUSE, Debian, and (yuck) Windows 2000 - and with the accelerator ($0 but non-free) it runs reasonably well.
      • UML - I host the domains for 5 friends with User Mode Linux, and it works just fine and was quite trouble-free, though not as flexible as the above (can't run Windows)

      If you're running on a supported OS, VMWare's awesome. Bu

      • I've been playing with Solaris, FreeBSD, and Windows 2000 under VMware Server (host OS is SuSE 9.3). Not bad. (One caveat: for Sol 10 be sure to allocate at least 512MB to the VM before doing the install, else your kids will have graduated school by the time it's done. But then, if you've installed Solaris before, you probably already figured that one out.) Last time I tried VMware Workstation, it seemed to be suffering from the "Linux = Red Hat" mindset and they wanted me to recompile my stock SuSE 9.2 kernel ju
    • Re:The Gambit (Score:2, Informative)

      by VAXGeek ( 3443 )
      I don't know about FC4 or x86_64, but I do have SMP working perfectly fine with iptables under Xen 3.0. By default iptables is not compiled in. You have to download the xen source and build it yourself. After it is included in the kernel, you should have no problems.
    • Re:The Gambit (Score:2, Informative)

      by askegg ( 599634 )
      I think you're right - it is not ready for the mainstream yet, but things are looking promising.

      Novell showcased some nice server management technology during BrainShare a while ago. Using a web browser they were able to migrate virtual machines between hardware platforms with very little interruption (sub-second). This alludes to the future of data centre computing, IMHO.

      There are a lot of clever people working on technologies to cluster small machines together to form one virtual machine. This is t
    • Re:The Gambit (Score:3, Insightful)

      by KiloByte ( 825081 )
      VMware is a fine piece of proprietary software. In fact, it is an absolute must if you have to make a non-trivial installer for a piece of Windows software -- I can't imagine anyone reinstalling the whole damn thing for every test build. And Windows doesn't support COW (copy-on-write).
      However, it's expensive. Even VMware Workstation costs as much as a new PC, and around here we can hire a person for two months for that much money. Thus, we own only a single license and gradually move away from it.
      Qemu+kqemu is marginall
      • VMware Workstation costs as much as a new PC


        Really? Please tell me where I can buy a new PC for $189.00.

        Thanks!
        • Then it's less than half the price it used to be -- but it's still $189 more expensive than more powerful products from the competition.
          • As I've already mentioned elsewhere in this discussion, Parallels seems to work quite well for what I need it for (building/testing ports of my employer's software - the coding's done by others), and it's only $49. Unfortunately, I don't think Solaris is supported.

            However, while I like Parallels heaps better (it's much easier to install and configure IMO, and it uses fewer resources), VMWare Server is now free, and supports Solaris 9 and 10, which are two of our target platforms.

            (I didn't realise that VMWar
            • VMWare Server is now free

              Good to know. I'll be sure to check it out -- it may be a good replacement for our old test farm. Since it does brute-force virtualisation and emulates fake pieces of hardware, it simply can't be as fast as Xen for servers -- but since Xen doesn't do Windows, sometimes that kind of virtualisation is needed.

              So please pardon my earlier snarkiness :p It's always good to be told new things.
  • by spinfire ( 148920 ) <dpn@isomerica.net> on Wednesday February 08, 2006 @03:29PM (#14671831) Homepage
    More and more companies are getting into the Virtual Private Server business, serving customers who aren't quite ready for colocation or a dedicated server but have outgrown basic shared hosting or have special needs. This is a good fit for people who need a hosting environment they can configure and customize but don't want the overhead of an extra machine. Furthermore, because of the nature of server load, it is efficient to put lots of customers on one massive machine.

    With the rise of dual-core Opteron offerings from AMD, one can have a very nice server that can support a huge number of customers. It won't replace colocation for people who want a very personalized setup or need lots of power, but cheap virtual servers will likely gain market share soon.
  • nice! (Score:2, Informative)

    by slackaddict ( 950042 )
    Novell has been doing some great things for the OSS community - releasing AppArmor and now this. Nice work, Novell!
  • Anyone familiar with virtualization and Linux guests will know about the time sync issues that are around. I have experienced this with a SuSE Enterprise Linux guest running on VMware ESX Server. My question is: does anyone know if this virtualization system has taken care of this issue?

    Basically put, Linux guests lose or gain time, up to hours a day. A major issue in the enterprise. (A quick way to measure the drift is sketched below.)
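
    As a quick sanity check, here is a minimal Python sketch that compares the guest clock against an NTP server (the pool server name and the bare-bones SNTP query are my own assumptions, not something from the VMware KB):

        import socket, struct, time

        NTP_SERVER = "pool.ntp.org"    # assumption: any reachable NTP server will do
        NTP_TO_UNIX = 2208988800       # seconds between the 1900 and 1970 epochs

        def ntp_time(server=NTP_SERVER):
            # Minimal SNTP client request (RFC 4330): 48 bytes, LI=0, VN=3, Mode=3.
            packet = b"\x1b" + 47 * b"\x00"
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(5)
                s.sendto(packet, (server, 123))
                data, _ = s.recvfrom(48)
            seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer part
            return seconds - NTP_TO_UNIX

        drift = time.time() - ntp_time()
        print("guest clock is off by %+.2f seconds" % drift)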

    The issue is described below; this is taken from the VMware knowledge base.

    Linux guest operating systems keep time by counting
    • Any reason you don't run a custom kernel with all the SuSE patches but the timer set to 100 Hz? Doesn't that resolve 99% of the problem, with NTP able to handle the rest?
      • Getting it done and set up in a busy enterprise company takes time that shouldn't be needed. They need to fix this in the virtualization software. SuSE should work in a VM out of the box.
        • If you're too lazy to recompile a kernel, you shouldn't be working for an enterprise company. 1000 Hz is meant for desktop systems, while 100 Hz is best for servers/SMP systems.
          • Once again, it has nothing to do with what I can or want to do. This should not be necessary. Windows works out of the box in a VM. Also, I'm using SuSE Enterprise Linux provided to me by my manager; shouldn't it already be set to 100 if you're saying that's what servers are supposed to use? Regardless, it shouldn't need a kernel recompile. I'm tired of that being the fix: just recompile the kernel! Starting to feel like this is where Linux is slacking.
  • I am currently implementing VMware in our company. We are a development house, and VMs on high-end boxes are a really good solution for us. At least I think so right now. We predefine VMs with certain patch levels, service packs, software versions, etc., and take snapshots. The plan is to move from about 60 servers of various platform types (Linux, Solaris, Winblows) to 6 running VMs. We bought a whole mess of SATA RAID storage to back up the VMs.

    Of course none of our production serve
    • Hi there. After having rolled out a 60+ CPU production VMware ESX implementation for my organisation, I can elaborate. VMware is currently pushing virtualization much further than any other vendor, open source or commercial. When you buy into VMware (yes, point #1: it ain't free) you are buying into a very scalable and open environment with good compatibility across guest operating systems. Point #2: the fact that we can have multiple physical servers in our VirtualCenter farm accessing shared storage me
