
Can Red Hat Do For OpenStack What It Did For Linux?

Brandon Butler writes "Red Hat made its first $1 billion commercializing Linux. Now it hopes to make even more doing the same for OpenStack. Red Hat executives say OpenStack – the open source cloud computing platform – is just like Linux: the code just needs to be massaged into a commercially hardened package before enterprises will really use it. But just because Red Hat successfully commercialized Linux does not guarantee that its OpenStack effort will go as well. Proponents say businesses will trust Red Hat as an OpenStack distribution company because of its work in the Linux world. But others say building a private cloud takes a lot more than just throwing some code on top of a RHEL OS."
  • RedHat be unsmart? (Score:4, Insightful)

    by mwvdlee ( 775178 ) on Monday June 17, 2013 @12:39PM (#44031559) Homepage

    But others say building a private cloud takes a lot more than just throwing some code on top of a RHEL OS

    And somehow those "others" also believe Red Hat to be incapable of doing any more than just throwing some code on top of RHEL?

    • by AvitarX ( 172628 ) on Monday June 17, 2013 @12:43PM (#44031615) Journal

      Red Hat better hope that throwing on the code isn't all it takes. Being an Enterprise Linux company takes more than throwing code on top of the kernel, and that's why Red Hat made a billion dollars, and Slackware didn't (not trying to knock Slackware, just trying to contrast two fairly early distros I used 15 or so years ago).

      If all it takes is the code, Red Hat is screwed.

      • Re: (Score:2, Interesting)

        by jedidiah ( 1196 )

        Plenty of companies forgo Red Hat contracts, just as plenty of companies used to forgo Sun contracts.

        The idea that you need someone else to blame is mainly held by a certain small group of large companies that are darlings of the financial news media.

        • The company I'm with uses both Red Hat and CentOS (as well as other paid and free Linux vendors).

          Sometimes you want the paid-for support of a RedHat license. Other times it's an unnecessary expense.

        • by AvitarX ( 172628 )

          Yes, and all of those people forgoing it (or using other companies for support, even) keep the ecosystem thriving. Red Hat is good at competing with that. If they are the ones throwing on the code, they will be the experts; they sell expertise and accountability.

      • by Lumpy ( 12016 )

        Remember those two are different targets.

        Red Hat is an enterprise distro. Slackware is a hobbyist distro. Two very different things. It's like comparing Boeing to Cessna. They both make airplanes... but they target completely different markets.

        • by eric_herm ( 1231134 ) on Monday June 17, 2013 @03:19PM (#44033271)

          Well, that's the point: you need more than hobbyists. When it comes to being "enterprise" ready, people expect documentation, training, certification, and support, and that is not free (while some people enjoy writing documentation or doing support, there aren't many who do it for free). And once you start to pay, you tend to expect someone to handle sales, someone to negotiate, etc.

        • by Arker ( 91948 ) on Monday June 17, 2013 @03:29PM (#44033343) Homepage

          "Redhat is an enterprise Distro. Slackware is a hobbiest Distro Tow very different things."

          True that they are two different things, sure. Though they are actually extraordinarily similar (to the point that I usually get modded down when I say they should be treated as two different, though closely related, operating systems rather than carelessly referred to as just 'Linux').

          "IT's like comparing Boeing to Cessna. They both make airplanes... but they both target completely different markets."

          A very misleading analogy. It's more like Boeing and Cessna both building a plane on the same airframe and engines. Slackware produces one that lifts off a thousand pounds lighter, with engines that produce more thrust and a stark, functional control layout, and gives it away for free, expecting whoever flies it to have pilots and mechanics on staff, do their own due diligence, and accept their own liability. Red Hat produces a much heavier version on the same airframe and essentially rents it to you under a support contract, where their mechanics keep it flying and they accept (some of) the liability.

          That analogy isn't great either, actually, but it's a lot better than one that implies RHEL is somehow going to 'lift more weight' than Slackware. Given the same hardware, the opposite would be true.

    • But others say building a private cloud takes a lot more than just throwing some code on top of a RHEL OS

      And somehow those "others" also believe Red Hat to be incapable of doing any more than just throwing some code on top of RHEL?

      Others, in fact, also believe Red Hat to be incapable of doing any more than just throwing some code on top of GNU/Linux.
      That sounds somewhat more correct.

  • Breaks one day, because snapshots are a "Technology Preview", and destroys all your disk images when Red Hat releases a new update to the RHEL 7 production stable feed.

    • by Archangel Michael ( 180766 ) on Monday June 17, 2013 @01:06PM (#44031857) Journal

      What? Your data isn't backed up ... three times?

      And you patched a production server, without testing?

      You're screwed because you didn't do your job. For the crap that happens on Red Hat's end, if you're paying for support, pointing the finger at Red Hat for their part of the blunder is exactly what you pay for. Everything else is your problem.

      If you did that while working for me, I'd fire you. Quit trying to impose your mom-and-pop Linux views on enterprise environments.

      • by bluefoxlucid ( 723572 ) on Monday June 17, 2013 @01:41PM (#44032207) Homepage Journal

        Ubuntu inherits Debian policy. Anything--supported or not--is not updated in any way that breaks things. You might not be able to get security patches for stuff in Universe or Multiverse in a timely manner without rolling and submitting it yourself; but they won't go releasing a package that no longer does X when X worked before. The idea is that, if your configuration works, it will continue to work *exactly* the way you have it without modification no matter which version of the package you have across the entire lifecycle of a stable release--if it doesn't, that's a bug and they need to undo that breakage. Extending is fine, breaking is *not* acceptable.
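        To make that concrete, here is a minimal sketch of what the policy means in practice (the release name and mirrors are illustrative, not from my actual boxes): with apt sources pinned to a single stable release, routine upgrades are behavior-preserving by policy.

          # /etc/apt/sources.list pinned to one stable release ("precise" here is
          # just an example); within it, 'apt-get upgrade' only pulls compatible
          # fixes -- a package that drops a working feature is a release bug
          deb http://archive.ubuntu.com/ubuntu precise main universe
          deb http://security.ubuntu.com/ubuntu precise-security main universe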

        Red Hat, on the other hand, released RHEL 6.4 and removed crmsh, the configuration shell for Pacemaker, replacing it with pcs. This wasn't documented in the release notes, either. Suddenly the things that configure high-availability fail-over on RHEL 6 don't work; running the same tools/scripts/whatnot breaks. This is still RHEL 6 stable, and under Debian policy that's not supposed to happen. Red Hat doesn't have such a policy, so it happens.
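        For concreteness, here is roughly what that breakage looks like at the command line. The resource name and address are made up; the two syntaxes are the real crmsh and pcs forms, and any script written against the first simply stops working when only the second ships.

          # before RHEL 6.4: a floating IP defined via crmsh (name/IP hypothetical)
          crm configure primitive cluster_vip ocf:heartbeat:IPaddr2 \
              params ip=192.168.1.10 cidr_netmask=24 op monitor interval=30s

          # after RHEL 6.4: the same resource has to be recreated with pcs
          pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
              ip=192.168.1.10 cidr_netmask=24 op monitor interval=30s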

        That means you're persistently at risk of a situation where your patching priorities demand more resources. I can usually keep patching Ubuntu while my dev team comfortably readies our stuff for the next LTS or the next 9-month release; but one day RHEL has patches, and either I fall out of step with my company's security policy by not upgrading, or we find resources (meaning, sometimes, hiring more people) to step up the porting process.

        With RHEL, the risk is that we may need more manpower (labor cost: salaries) to support the same security policy, and that we still may not keep step as quickly as under a Debian-style update policy (i.e., there may be more lag while we rewrite scripts and configurations and do more dev testing before releasing patches). On top of that, we face more frequent large roll-outs: things that worked in dev might not work in production, and then we're rolling out a patch that breaks production along with a bunch of follow-up patches to un-break it, hoping it all holds together.

        Yes, I blame RHEL for this.

        • So, basically, you patch production machines without testing. Gotcha.

          Hell, I don't even do that (patch without testing) on Ubuntu OR Windows systems. Patching breaks things. Eventually.

          • Not everybody has a dev environment for everything.

            Not everything works just because it's been tested. Every time we release something out of in-house dev, it goes through a month of dev testing. Then... it breaks 15 minutes after it's released and takes 3 hours to un-break.

            • by Anonymous Coward on Monday June 17, 2013 @02:31PM (#44032785)

              Not everybody has a dev environment for everything.

              Why the hell not? It's 2013 and virtualization is cheap.
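              In that spirit, a throwaway test box is a couple of commands on any KVM host of the era (template and VM names here are hypothetical):

                # clone a template VM into a disposable test instance, boot it,
                # and apply the candidate patches inside it
                virt-clone --original rhel6-template --name patch-test --auto-clone
                virsh start patch-test
                # ...verify the update, then discard the clone and its disk:
                virsh destroy patch-test
                virsh undefine patch-test --remove-all-storage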

              Not everything works just because it's been tested. Every time we release something out of in-house dev, it goes through a month of dev testing. Then... it breaks 15 minutes after it's released and takes 3 hours to un-break.

              It sounds like your team is pretty bad at testing, then. Do you have dev or staging servers? Do they mirror the production setup? Is software versioning equivalent between the two (I'm talking distribution + supporting packages, clearly, not the software you're releasing)? Have you load tested to make sure your new shit isn't introducing something that will crush all resources in its path as soon as X people hit it? Do you have proper tests set up?

              "Every time" is terrible, and I don't know your organization, but I'm pretty sure you should feel terrible, if only for having to work in such an environment.

            • You don't have a dev environment... Go grab 2 workstations and a switch and MAKE ONE NOW!

              It also sounds like your testing regime needs work. Devs do not say the code is ready. Users do. They get to break it 15 minutes into the test. They don't have to follow the Official Tests. This is called User Acceptance Testing. Devs will whinge about this because their mistakes get highlighted and "it worked for them". I speak from experience. I hate it when users fail my code. But it's my fault, and I need

        • by eric_herm ( 1231134 ) on Monday June 17, 2013 @03:40PM (#44033441)

          I am sure your company policy should include "do not use Technology Previews on production servers". If it doesn't, then I suggest adding it, and then complaining that RHEL does not have the packages you need, if you want to switch to Debian. That would be much smoother than blaming the vendor for your lack of clue about what is supported and what is not (especially when comparing to Ubuntu, where you don't even have a guarantee that Mark won't change his mind and just stop a project, or refocus it on something else, as happened with the desktop, with bzr, and with several other things).

          • Our company policy is non-existent and SOP is insane. We'll leave it at that. I've suggested jumping off RHEL repeatedly because it's such a shit heap, but I get arguments about how "the industry knows RedHat is a good product" when it's really not.

            The fact is this "technology preview" is a critical feature for a huge business case--fail-over servers. You have to supply 99.999% SLA, so you have 3 servers, and when one fails it transfers its IP address to another one and there's barely a flicker. Yes,

        • "RedHat on the other hand released RHEL 6.4 and removed crmsh, the configuration system for Pacemaker, to be replaced with PCS. This wasn't documented in the release notes, either. Suddenly things that configure high-availability fail-over on RHEL 6 don't work. Running the same tools/scripts/whatnot breaks. This is still RHEL 6 stable, and under Debian policy that's not supposed to happen. RedHat doesn't have such a policy, so it happens."

          Well, that's not entirely accurate. Pacemaker is a Technology Preview

          • How about the technical release notes for RHEL 6.4 [redhat.com]?

            The pacemaker packages have been upgraded to upstream version 1.1.8, which provides a number of bug fixes and enhancements over the previous version. (BZ#768522)

            To minimize the difference between the supported cluster stack, Pacemaker should be used in combination with the CMAN manager. Previous versions of Pacemaker allowed to use the Pacemaker plug-in for the Corosync engine. The plug-in is not supported in this environment and will be removed very soon.

            They at least warn that Corosync won't start Pacemaker in a future release--that's normal; we have the pacemaker init.d script running it, not the Corosync plug-in. Removing it in the middle of a stable release would be a mistake, though.

            They don't mention that they've removed crmsh and added PCS. They do give this warning:

            With this update, Pacemaker provides a simpler XML output, which allows the users easier parsing and querying of the status of cluster resources.

            Status, but not configuration. Nothing about configuration input, which was previously handled by crmsh but is now handled by pcs. pcs isn't even

        • Ubuntu inherits Debian policy. Anything--supported or not--is not updated in any way that breaks things. You might not be able to get security patches for stuff in Universe or Multiverse in a timely manner without rolling and submitting it yourself; but they won't go releasing a package that no longer does X when X worked before. The idea is that, if your configuration works, it will continue to work *exactly* the way you have it without modification no matter which version of the package you have across the entire lifecycle of a stable release--if it doesn't, that's a bug and they need to undo that breakage. Extending is fine, breaking is *not* acceptable.

          I call bullshit. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=561578#61 [debian.org]

          • -- System Information:

            Debian Release: squeeze/sid

            Sid is Debian unstable: the development branch where anything can change and anything can break; eventually it gets snapshotted and marked as a release. Until then it isn't a release any more than Fedora Rawhide is.

            Nice job trying to counter-example my argument about how stable production releases work by showing that shit breaks in the unstable alpha development cycle. You'll only convince a few people who don't know anything about Debian, or who can't decipher what's in the e-mail at all, that way.
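            (If in doubt which suite a box actually tracks, it's trivial to check; these are standard Debian paths and tools:)

              # stable prints a plain version like "6.0.7"; a box tracking
              # testing/unstable prints something like "squeeze/sid"
              cat /etc/debian_version
              # or see which archive suites apt actually pulls from
              apt-cache policy | grep -o 'a=[a-z-]*' | sort -u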

    • https://access.redhat.com/support/offerings/techpreview/ [redhat.com]

      What part of "no garantee" is hard to understand ?Seriously, if you cannot spare a system to test, that's not a reason to test it on production when the vendor explictely say "do not do it".

      • What part of "we don't support this -at- -all-, but there will be no updates in current production stable release that break it" is hard to understand?

        Red Hat's tech previews include every sort of fail-over mechanism. That means you have to choose between the risk of running without high availability and the risk of Red Hat breaking your shit. When you do dev testing and find that it breaks, you then have to deal with the risk of a longer update cycle than would have been necessary if they had

    • by Chirs ( 87576 ) on Monday June 17, 2013 @04:40PM (#44034017)

      What part of "...these features are not fully supported under Red Hat Enterprise Linux Subscription Level Agreements, may not be functionally complete, and are not intended for production use." didn't make sense?

      Why were you using a technology preview feature in production when they explicitly say not to?

      • The product wasn't fully supported. That's fine. Supplied as-is.

        I want the product supplied as-is.

        What I don't want is a product supplied as-whatever-the-fuck-happens. It's not functionally complete? It's not supported? Not guaranteed to work? Okay, we can manage that risk. It works for us, and then you rip half of it out and throw something completely different in, during a "stable" system release? Not okay.

        I get that something might be "not complete or production ready" and I might come back

  • by Attila Dimedici ( 1036002 ) on Monday June 17, 2013 @12:44PM (#44031623)
    I think this is a headline that breaks the law of headlines, which says the answer to any headline phrased as a question is always no. Red Hat certainly CAN do for OpenStack what they did for Linux. That does not mean that they will do so, even if they put in the necessary effort. The last statement of the summary is irrelevant, because Red Hat certainly knows that and almost certainly understands the magnitude of the project they are undertaking here. Red Hat is the sort of company that can do this. However, the project is complicated enough that they may fail.
    • Re: (Score:3, Interesting)

      by atom1c ( 2868995 )

      I concur.

      I took OpenStack for a spin last week, and I like where they're going. It combines the CLI access of the AWS world with the mature web-based deployment tools of Azure, and it allows GCE-style sophisticated apps to run within its context. That unique combination has made it appealing to me, and it surely whets the appetites of the many onlookers waiting for the chips to fall before making any company-wide commitments.

      Ostensibly, they'll have to endure 2-3 major outages in the next few

    • ...t because Red Hat certainly knows that and almost certainly understands the magnitude of the project they are undertaking here. Red Hat is the sort of company that can do this. However, the project is complicated enough that they may fail.

      Can't their OpenStack product just use Red Hat's existing build, test, and QA processes? Development still happens in the community; they just have to track/patch bugs, offer support, populate some new channels in RHN with software, and figure out how to charge for their build. Perhaps they'll hire some OpenStack project leaders - heck, maybe some already work at Red Hat.

      In the end, I think the biggest winner may be RHEV. If greater focus on Openstack development and support means that some of the Open

      • I don't see any reason to expect Red Hat to fail, but I can think of several ways that they could fail (one of them being reinventing the wheel when it comes to build, test and QA processes rather than using the processes they already have).
  • by ArcadeMan ( 2766669 ) on Monday June 17, 2013 @12:49PM (#44031669)

    After all these years I still don't know what that is supposed to mean. I know about servers, FTP, server-side languages, etc. But "cloud computing platform" just sounds like a buzzword clusterfuck from the marketing department.

    If I look on wikipedia [wikipedia.org] then even a simple website with a CMS is "cloud computing".

    • by ADRA ( 37398 )

      Self-managed, OS-based hosting entirely over the internet (no physical access). That's it, bud. Anything else is just window dressing.

      • by jekewa ( 751500 )

        You're talking PaaS (or IaaS) there, which, while a big part of it, isn't the only part. SaaS is another big part, and that can be run from within any infrastructure, so there may be physical access (which is the only point I'm contending).

        I agree that it's really whatever anyone working on the Internet (usually software) contributes. Your blog or your wicked REST service, your server, your workstation, your router... all the cloud. And yeah, that remote machine you think you have all to yourself that run

    • by dwheeler ( 321049 ) on Monday June 17, 2013 @01:04PM (#44031831) Homepage Journal
      You might look at "The NIST Definition of Cloud Computing": http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf [nist.gov] - it's one-and-a-half pages once you skip the boilerplate.
      • by ArcadeMan ( 2766669 ) on Monday June 17, 2013 @01:12PM (#44031921)

        The fact that "cloud computing" needs 1.5 pages for definition alone is proof that the concept was created by the Marketing Department of the Sirius Cybernetics Corporation.

        • by FreeUser ( 11483 )

          The fact that "cloud computing" needs 1.5 pages for definition alone is proof that the concept was created by the Marketing Department of the Sirius Cybernetics Corporation.

          And here I thought it was Tyrell Corp, developing it as a ploy to use up the limited lifespan of any Android foolish enough to escape their servitude.

        • Honest to god, those motherfuckers are going to be first up against the wall when the Revolution comes. Them and Congress.
        • It doesn't really need 1.5 pages of description.

          Cloud computing is a strong abstraction layer between the physical machine and the logical machine. It's very similar in concept to memory virtualization: each process is given its own address space and doesn't understand or care how that address space maps to physical memory. In a cloud computing environment, you request a new machine, and the cloud computing infrastructure automatically allocates one based on your requirements. The abstractio
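          With the OpenStack CLI of the day, that request looks something like this (the flavor and image names are made up):

            # ask the cloud for "a machine with these requirements"; the scheduler,
            # not the caller, decides which physical host actually runs it
            nova boot --flavor m1.small --image fedora-18-x86_64 my-instance
            nova show my-instance   # inspect what was actually allocated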

          • In short: the hardware is virtualized, and virtual machines run inside the virtual hardware. Sounds like running a GameBoy emulator for the PSP inside a PSP emulator for Windows.

    • by asmkm22 ( 1902712 ) on Monday June 17, 2013 @01:10PM (#44031903)

      Cloud computing is defined less by what it is and more by how it's used. The general idea is to offload storage and applications from your local setup (workstation, network, etc.) onto something "in the cloud", which basically means the internet.

      The industry does a really poor job of explaining that not all cloud experiences are the same, however. I've seen some cloud providers basically just offering seats on a Windows RDP server, which may not sound all that special to you or me, but they've managed to carve a business out of it.

      • by div_2n ( 525075 )

        Not necessarily the internet. You can have a private cloud. And also not necessarily offloading from your local setup. Storage, for example, might be local but synced into the cloud. Or computing resources might be local and scale up dynamically to the cloud as the workload demands.

        Modern cloud architectures seem to be working towards an experience as seamless as possible between what's local and what's not -- i.e. not just have your computer/phone be a dumb terminal to some remote app.

        • Keep in mind that I was speaking in very general terms. Of course there can and will be variations to the whole process, which doesn't help the confusion at all. Yes, you can have a private cloud that's hosted on a local network, or maybe even through an MPLS or something, but anyone who's dealing with that configuration probably doesn't need to ask what cloud computing means in the first place.

          Cloud computing is still very much a consumer term, and I don't know any admins who even use it unless they are

        • Funny, Unix has made the distinction between local and remote seamless for decades now.
    • by Anonymous Coward

      Here's my take: a simple website with a CMS is not a cloud.

      A cloud would be a pool of servers offering up resources (virtual machines, in my case; they could be CMSes) in such a way that the resource is not directly concerned with the underlying hardware. So I could take 5 servers, tie them together with OpenStack, and then fire up a number of virtual machines. The virtual machines aren't started on one particular physical box, as I'd do in a cluster; I'd start them in the cloud stack, and that decides what hardware
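      The contrast shows up right at the command line (host and VM names are hypothetical):

        # classic virtualization/clustering: the admin picks the physical box
        virsh -c qemu+ssh://node3.example.com/system start cms-vm

        # cloud: the request names no host; the OpenStack scheduler places it
        nova boot --flavor m1.medium --image cms-appliance cms-vm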

      • Re: (Score:3, Insightful)

        by pLnCrZy ( 583109 )

        What you're describing is virtualization.

        "Cloud" is a stupid buzzword that quite simply means "resides on someone else's stuff."

        Whether it's Amazon's stuff, Rackspace's stuff, or Microsoft's stuff -- it's not your stuff. You don't worry about physical servers, disks, or the OS (in many cases). Take it a level higher, and if your cloud service includes databases or middleware, you don't worry about those either. Or even applications. Amazon's Elastic Beanstalk basically lets you publish your website code direc

    • If you think of a development diagram where you map the interfaces of your application, oftentimes the back end will be a cloud, similar to this [philipotoole.com] image, which isn't the best example but is all I could pull up in a hurry. It basically means: we don't really care what's in this part of the application, as that is someone else's responsibility.

      Things like AWS give you the illusion of a full server in their cloud. The idea being that you don't care what the back end is; you just care about the unit of performance they prov

    • Think mainframe with the option of having distributed mainframes.
  • Probably won't.... (Score:4, Insightful)

    by Junta ( 36770 ) on Monday June 17, 2013 @12:57PM (#44031751)

    Red Hat has been angling to shoot down VMware in the enterprise space. They made a go of it with RHEV-M and thus far have failed to get traction. This is despite RHEV-M having a lot of the most common capabilities VMware offers. It's a tad different, and in some ways exposes users to quirks that don't make a lot of sense (VMware has its own quirks, but being first has advantages). OpenStack in general is aimed in a direction quite divergent from where vSphere went, and isn't particularly far along even in that direction. If Red Hat couldn't dislodge vSphere with a solution that matches its capabilities, I can't imagine how they come back with a less resilient architecture and are suddenly viewed favorably...

    • I disagree only because there is definitely a groundswell out there for an alternative to VMware. In practice, VMware isn't as stable as claimed (although it is quite resilient); it comes down to dollars, and there are legions unwilling to pay...

    • " I can't imagine how they come back with a less resilient architecture and suddenly be view favorably..."

      I can $ee how they might $way potential cu$tomer$ away from v$phere. vSphere is very pricey and a hard sell. With ESXi they get you with the "first one's free" mentality, but the jump from free ESXi to paying for ESX in small installations is VERY steep. Red Hat could compete by having a better pricing model. Whether they will or not is another question.

      "but being first has advantages"

      A guy I used

      • by Junta ( 36770 )

        I saw that with their RHEV-M product and anticipated more traction, but didn't see it happen.

        Going from a typical vSphere set of expectations to the OpenStack vision is a whole lot bigger leap than from vCenter to RHEV-M.

        Unfortunately, my sense is that as MS's solution gains maturity, it seems to get more mindshare than RHEV.

    • Red Hat has been angling to shoot down VMware in the enterprise space. They made a go of it with RHEV-M and thus far have failed to get traction. This is despite RHEV-M having a lot of the most common capabilities VMware offers.

      Our 300-VM RHEV 3.1 stack dates back to the beta test of RHEV in 2009. IMHO, companies claim certain capabilities, but the actual implementation can vary widely in sophistication. Our RHEV infrastructure has characteristics we like: built on open source and open standards, excellent performance and security, inexpensive to license. In other areas, like management tools, storage migration, power management, etc. (the list goes on), our RHEV solution can't match the comparable VMware offerings.

      I like

      • by hetz ( 516550 )

        My guess (based on past experience with Red Hat's road maps) is that you may see RHEV 3.x versions with fixes (like RHEL 6.x), but RHEV 4 will probably be OpenStack (or an OpenStack-based solution). Red Hat is already working hard on OpenStack, and you can see it in the Fedora releases.

  • by Anonymous Coward

    I hope OpenStack will become less hype and more usable now that Red Hat is on it.

  • by Anonymous Coward

    Then we will all be sad.

  • If it weren't for SuSE, Slackware, Mandrake, and a few others, Red Hat would have gone under. The commercialization of Linux was a team effort, despite the shameless historical revisionism of the article lead.

  • by Anonymous Coward

    You mean ruin it?

    I kid, I kid...

  • Can Red Hat do for Open Stack what it did for Linux?

    If by that you mean "can Red Hat break things that have worked perfectly for years" (clustering in FC13-16 vs. 17+, and the godawful mess that is systemd replacing perfectly serviceable and reliable UNIX mainstays such as sysv init, etc.), then the answer is most definitely:

    YES

    On a recent conference call with Red Hat, they dismissed OpenStack and touted their own proprietary products for "cloudy" type infrastructure. Bringing fuel into the fold won't be

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This post is so full of misinformation it's ridiculous.

      RH dismissed OpenStack in a recent conference call? That seems kind of odd seeing that they are contributing heavily to it now, and Red Hat Summit was just last week where they, you know, didn't dismiss it.

    • by hetz ( 516550 )

      Fedora (the "Core" part of the name was removed in Fedora 7 IIRC) IS MEANT to be a DEVELOPMENT version. Version X has ABC, Version Y - the ABC has been kicked out of the window. Red Hat team mention it in big letters!

      Want stable? Either buy RHEL or use CentOS.

  • by Anonymous Coward

    Let's face it: OpenStack isn't the easiest thing to work with. As in the early Linux days, things change with every release. Red Hat (and other distros too) were able to tame Linux a bit and make it stable enough for the people who needed it to work. They've already got an "in" with lots of Fortune 500 companies, which are likely at the point where they have a lot of hardware but are starting to look at cloud computing. Giving those companies an easy way to turn their hardware into a private cloud for le

  • IBM is investing big in OpenStack. They talk about it all over the place.

    Rackspace is also investing big in OpenStack.

    Both of these players dwarf Red Hat.

    • by thule ( 9041 )
      I thought Rackspace worked closely with Red Hat. So Rackspace probably gets a pretty good deal if they run RHEL.
  • I thought Red Hat was throwing its weight behind oVirt [ovirt.org]. OpenStack from the beginning has been very cobbled together, and it is still a major pain to set up from scratch. Although oVirt hasn't been around as long, it definitely has a more mature interface, as far as I'm concerned. It was my understanding that Red Hat was the major player behind oVirt and was looking to make it their cloud platform of choice.

    • Yeah well, before that they were the major player behind what, Xen? And had the only decent management tools for it? They change their mind about this every week. I'll believe it when I see it.

      • by cblack ( 4342 )

        Yeah well, before that they were the major player behind what, Xen?

        Xen: yes, they were behind Xen for a while, years ago. The community and the experts moved on to KVM, and so did Red Hat.

        And had the only decent management tools for it?

        I don't see how this is a question. Maybe they did, I dunno. It hardly matters now that relatively few people still use Xen.

        They change their mind about this every week. I'll believe it when I see it.

        No they don't. And you won't believe it when you see it because you won't recognize it.
        Because you're dumb.

    • by cblack ( 4342 )

      oVirt is a virtualization management tool. It controls lower virtualization layers like KVM.

      KVM is the virtualization infrastructure in the Linux kernel.
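      The layering is easy to see on any Linux host (standard tools and paths):

        # kernel layer: are the KVM modules loaded?
        lsmod | grep kvm                      # expect kvm plus kvm_intel or kvm_amd
        # hardware support underneath that layer
        egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means VT-x/AMD-V is present
        # oVirt, RHEV, and OpenStack are all management layers driving this same stack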

      • OpenStack doesn't control lower virtualization layers like KVM? OpenStack isn't a virtualization management tool? You might want to re-think those distinctions.

    • by thule ( 9041 )
      oVirt is part of the RHEV product. RHEV is comparable to VMware; OpenStack is something different, but they both use KVM for virtualization. OpenStack is for computing on demand. RHEV, like VMware, is designed to manage the full life cycle of an OS and application deployment.
    • They are. oVirt is the basis for the underlying part of RHEV. I think oVirt and OpenStack have some different design objectives, but in practice there's a lot of functional overlap. I hope OpenStack's modular nature (from what I understand, having not looked at the code) allows for some cross-pollination with oVirt/RHEV.
  • I did some testing with it. It's good, but it's clearly a work in progress, and a technical marvel. It's also lacking the tools and toolset for deployment and management, though. In the end, the only cloud winners will be the ones with great toolsets and management. If something remains a huge mass of spaghetti that only a finite set of skilled souls can pull together and run, it will be eaten alive.

    I have not tested Juju, but that seems to be the kind of deployable, real-world, end-user tooling that t
