
Ubuntu Gets Container-Friendly "Snappy" Core

judgecorp writes: Canonical just announced Ubuntu Core, which uses containers instead of packages. It's the biggest Ubuntu shakeup in 20 years, says Canonical's Mark Shuttleworth, and is built on a tiny core that will run Docker and other container technology better, faster, and more securely than other Linux distros. Delivered as alpha code today, it's going to become a supported product designed to compete with both CoreOS and Red Hat Atomic, the two leading container-friendly Linux approaches. Shuttleworth says it came about because Canonical found it had solved the "cloud" problems (delivering and updating apps while maintaining security) by accident, in its work on a mobile version of Ubuntu.
  • by Fwipp ( 1473271 ) on Tuesday December 09, 2014 @07:40PM (#48560079)

    No dependency management or fooling around with packages that require conflicting library versions, and possibly near-instant "installation" (depending on whether they're distributing Dockerfile-equivalents* or containers directly). Sounds good to me - I'll have to take a look sometime.

    *Yes, I know that Docker is not the only way to do containers, but it's easy to imagine they could be using a similar "build" step.

    • by Fwipp ( 1473271 ) on Tuesday December 09, 2014 @07:59PM (#48560213)

      Update: Playing with it, it looks like there isn't a build step. Following the steps here: http://www.ubuntu.com/cloud/to... [ubuntu.com] - it seems that Docker is installed under /apps/docker (~20 files total). They're basically just distributing tarballs which contain all the necessary dependencies.

      From there, it puts a quick wrapper script at ~/snappy-bin/docker, containing:

      #!/bin/sh
      ##TARGET=/apps/docker/1.3.2.007/bin/docker
      #TMPDIR=/run/apps/docker
      cd /apps/docker/1.3.2.007
      aa-exec -p docker_docker_1.3.2.007 -- /apps/docker/1.3.2.007/bin/docker $@

      Building your own "snappy package" looks as simple as "cd mydir; snappy build ."
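
      A minimal sketch of that flow, using the snappy subcommands that appear elsewhere in this thread; the .snap filename and the local-install invocation are assumptions for illustration, not verified syntax:

      cd mydir
      snappy build . # bundles everything in this directory into a .snap package
      sudo snappy install ./mydir_0.1_all.snap # hypothetical: install the freshly built bundle
      snappy info # lists the installed release, frameworks and apps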

    • by Imagix ( 695350 )
      But... when a library is patched, you have to re-download your entire machine again. What's the point of shared libraries? Just statically link everything and you get a lot of the same result. (Not all of it... there are other features of containerization... but static linking solves your library hell. And then there are the GPL/LGPL issues....)
      • by Anonymous Coward

        Actually it seems pretty obvious to me that the whole "container" thing is a deliberate sidestep of [L]GPL.
        The binary isn't technically "linked" with the GPL code yet, so it's not infected.
        You're free to modify the GPL'd files in the container, so the GPL is happy.

        Waiting for GPLv4 to come out and forbid "containerization" in 3... 2... 1...

      • Re: (Score:3, Informative)

        by geantvert ( 996616 )

        The article indicates that containers can be upgraded by sending only the modified files, so upgrading a lib should not cost too much.
        If their system is not too stupid, then it will manage the container at the file level and will try to share identical files between packages.

        That should not be too difficult using checksums and hard links. If they don't do that, then the kernel won't be able to share files between applications and the whole system will use a lot more memory. Linux applications are sharing libs b
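
        A rough sketch of that checksum-plus-hard-link idea between two unpacked app trees; purely illustrative, not how snappy actually behaves, and it assumes both trees live on one filesystem with no whitespace in any path (the /apps/app1 and /apps/app2 paths are made up):

        #!/bin/sh
        # Index the first tree by content hash, then hard-link matching files in the
        # second tree so disk and page cache hold a single copy of each shared file.
        find /apps/app1 -type f -exec sha256sum {} + | sort > /tmp/app1.sums
        find /apps/app2 -type f | while read -r f; do
            sum=$(sha256sum "$f" | awk '{print $1}')
            match=$(grep -m1 "^$sum " /tmp/app1.sums | sed 's/^[0-9a-f]* *//')
            [ -n "$match" ] && ln -f "$match" "$f" # both paths now share one inode
        done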

      • The reason to use containers is isolation - the thing inside the container has a rigidly defined and strictly controlled limit on what external resources it can access and how much of the machine's computing resources it can see and use: how many CPU cores and how much of each core it can utilize, disk space and rate of disk input and output, network resources and rate of network input and output, RAM, etc...

        So yes, you lose the advantage of shared libraries. But it gives you most of the important advantage
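
        For what it's worth, those caps are set when the container is launched; a quick sketch with stock docker-run options (the nginx image and the numbers are just examples):

        # Hard memory ceiling plus a relative CPU weight for everything inside the
        # container; other containers and the host are unaffected.
        docker run -d --name web --memory 256m --cpu-shares 512 nginx
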
    • by MMC Monster ( 602931 ) on Tuesday December 09, 2014 @09:33PM (#48560909)

      How does this mesh with the ideas that:
      1. shared libraries allow a small memory footprint
      and
      2. allowing the OS to manage dependencies makes applications more secure, since all applications that rely on a shared library benefit when a security update of the library gets installed

      • 2 is one of my main concerns too. Let application developers develop their applications and library developers develop their libraries. Not every OSS application contributor wants to apply security updates in their free time.

      • because installing security updates is always easy. At least with a container system you can install and test it in isolation, then just run the new container with confidence. Rather than the old fun of "It's patch day! Everyone get ready to validate everything still works again."

    • by turbidostato ( 878842 ) on Tuesday December 09, 2014 @10:08PM (#48561131)

      "No dependency management or fooling around packages that require conflicting library versions, possibly near-instant "installation" (depending on if they're distributing Dockerfile-equivalents* or containers directly). Sounds good to me"

      Congratulations. You have discovered static linking. Welcome to the fifties.

      Now, in less than ten years you will find the problems with your approach and will also reinvent dynamic linking and I'll gladly welcome you to the sixties.

      • by Fwipp ( 1473271 )

        I mean, I'm a big fan of static linking, so that probably explains my enthusiasm for this. I haven't yet figured out how to statically-link an entire django project, though. :)

      • by DuckDodgers ( 541817 ) <keeper_of_the_wo ... inus threevowels> on Wednesday December 10, 2014 @10:01AM (#48563803)
        I wrote this elsewhere, but to repeat: the advantage of Docker and similar containers isn't static linking. The advantage is that you get most of the important benefits of whole operating system virtualization with much lower overhead in terms of disk space and resource use. Instead of building one 5GB CentOS or Ubuntu VM that runs a copy of your web server software and pushing that all over your network and having every running instance hold 100MB of RAM just for a virtual kernel in addition to the web server itself, you build a 500MB Apache/Nginx/Whatever Docker container with the same web server and push it everywhere, and its memory and CPU overhead on top of the web server is tiny. And since, yes, it uses statically linked libraries, it doesn't matter if the host OS has a different version of libevent as long as it has a compatible version of Docker or lmctfy or whatever container technology you use.

        Now obviously if you wanted to make the virtualized environment available to a remote user as a remote desktop, or for remote login directly into the virtual environment with ssh for configuration changes, you want a full virtualized operating system. In that case you need VirtualBox, KVM, Xen, VMware, Hyper-V, etc... but for deploying identical instances of a configured application to a handful or even hundreds of machines with the smallest feasible overhead and a nice security layer to prevent most breaches of the hosted application from leaking into the host operating system, a container is great. If you look up how Docker and lmctfy work, they are mostly an API wrapper around the Linux kernel cgroup and selinux features for configurable restriction of a process's ability to utilize CPU, RAM, and IO.
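
        A rough idea of the cgroup plumbing being wrapped, poked at by hand through the cgroup v1 filesystem; the paths assume the usual /sys/fs/cgroup layout and "demo" is an arbitrary group name (run as root, illustrative only):

        # Create a group, cap its memory and CPU weight, then move this shell into it;
        # every child process started afterwards inherits the limits.
        mkdir -p /sys/fs/cgroup/memory/demo /sys/fs/cgroup/cpu/demo
        echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
        echo 512 > /sys/fs/cgroup/cpu/demo/cpu.shares
        echo $$ > /sys/fs/cgroup/memory/demo/tasks
        echo $$ > /sys/fs/cgroup/cpu/demo/tasks
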
        • "the advantage of Docker and similar containers isn't static linking. The advantage is that you get most of the important benefits of whole operating system virtualization with much lower overhead in terms of disk space and resource use [...] for deploying identical instances of a configured application to a handful or even hundreds of machines with the smallest feasible overhead"

          Please take a moment to think about what you wrote. Once you deploy your app to "hundreds of machines" you are back to square one:

          • You're missing all of the other aspects and focusing on static linking. Okay, that's not fair - you're also correct that puppet or chef should be used instead of just copying image files around. A container also gets you:
            1. Isolating contained software from the rest of the operating system, so you can host third party applications without worrying about them inspecting their host environment.
            2. Limiting contained software's ability to use host resources - disk space, CPU, memory, network IO, disk IO.
    • by gmuslera ( 3436 )

      Docker is not just containers; image/container filesystem management is a key element too. Union fs with copy-on-write makes a big difference against traditional containers. And the image ecosystem, the easy creation with Dockerfiles, and a good API/powerful command line are pretty important elements too.

      Other container technologies could learn from and adapt those other Docker ideas, and even VMs could get a bit closer to them. No matter if Docker is the dominant implementation there in the future or not, with thos
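
      A quick sketch of the Dockerfile workflow mentioned above (image name and contents are arbitrary examples): every instruction becomes a copy-on-write layer in the union filesystem, which docker history then lists.

      # Write a three-line Dockerfile, build it, and inspect the resulting layers.
      printf '%s\n' 'FROM ubuntu:14.04' \
          'RUN apt-get update && apt-get install -y nginx' \
          'CMD ["nginx", "-g", "daemon off;"]' > Dockerfile
      docker build -t demo/nginx . # one image layer per instruction
      docker history demo/nginx # shows the stacked, shareable layers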

  • by Anonymous Coward on Tuesday December 09, 2014 @07:55PM (#48560191)

    And here we go again, adding yet another layer to an already wobbling stack of layers.

    First we have hardware. Then we're running Xen or some other hypervisor on that hardware, so we can have numerous VMs running Linux on one physical system. Then each of these Linux VMs is in turn running VirtualBox, which in turn is running Linux, which in turn is running some container system. Then each of these containers is running some set of software. In some cases these containers are running something like the Java VM, which is, of course, another layer. Then in some truly idiotic cases, we have something like JRuby running on that JVM, with some half-baked Ruby code running on JRuby.

    Let's visualize this stack:

    - Ruby code
    - JRuby
    - JVM
    - Container
    - Linux
    - VirtualBox
    - Linux
    - Xen
    - Hardware

    Now that there's all this compartmentalization, it becomes a royal pain in the ass to share data between the apps running in the containers running in the VMs running on the actual hardware. So we start seeing numerous networking hacks to try and make it all into something barely usable. So throw on Apache, Varnish, and other software into the mix, too.

    I'm sure that within a few years, we'll start seeing containers within containers, if that isn't already being done. Then those will need sandboxing, so there will be sandboxes for the containers that contain the containers.

    Meanwhile, it's just one hack after another to intentionally get around all of this isolation, in order to do something minimally useful with this stack. The performance of the system goes swirling down the shitter as a result of all of the layers, and all of the effort needed to bypass these layers.

    What a fucking mess!

    • by skids ( 119237 ) on Tuesday December 09, 2014 @07:59PM (#48560219) Homepage

      You think that's bad, you should see what goes on on the network side these days. All the layers of encapsulation can very often be larger than the innermost payload.

    • I haven't used Docker before. Does this mean if I have two (or more) servers running on a JVM, that each container will have its own JAVA_HOME? If so, wouldn't that make maintenance a nightmare? Similar for Python (or other language) based services? Or items running a database? Each will have its own MySQL or PostgreSQL instead of just adding another DB to an existing server? Or do the containers sit on top of the traditional mode of installing these things?
      • by Fwipp ( 1473271 ) on Tuesday December 09, 2014 @08:38PM (#48560525)

        Each container would contain all of the stuff it needs to run - in this case, Java + associated modules.

        It simplifies stuff, because if one server requires Foo v1.11.4 but another needs Foo v1.10.8, neither server "sees" the other. You simply configure each container separately, without worrying what the other container's doing. When distributing the container, all you have to do is send out one image. If you want to run 12 containers on a host, that's cool. If you want to run only 1, that's fine too. And that same container will work just fine whether it's running on the server or the new kid's development laptop.

        It's not an all-or-nothing approach, so you can choose if you want the database to live in a container of its own, on the host, in the app container, or somewhere distant.
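
        To make that concrete, a sketch of two containers built against conflicting library versions running side by side (image names and tags are invented):

        # Neither container can see the other's filesystem, so the version conflict never arises.
        docker run -d --name svc-old example/foo-service:1.10.8
        docker run -d --name svc-new example/foo-service:1.11.4
        docker ps # both instances up on the same host, each with its own Foo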

        • Re: (Score:2, Insightful)

          by 0123456 ( 636235 )

          Why is everyone trying to turn Linux into Windows?

          • by epyT-R ( 613989 )

            They're not. Windows has and makes use of dlls. They're trying to turn linux into dos.

            • In a way they are, though. Win7 keeps every distinct version of every dll that each installed program requires, so you can quickly wind up with a folder under Windows/ filled with 20 different versions of the same library, even if most of the programs requiring them could run just as well on one of them.

              • Only for managed code (.NET); the old COM stuff is still in the registry and as ugly as it ever was.
                • oh, no.. the .NET code is in the GAC and is just as crufty as COM. Even their best plans soon turn into old habits at Microsoft.

                  (if you really want to worry, take a look at the "I have no clue which assembly is actually loaded" way .NET decides which DLLs to run using Probing heuristics [microsoft.com].)

        • So that would mean you would need a far larger resource footprint. Say a single server with 4 domains could get by with (just for round figures) 5 GB of RAM. From the sounds of it, I would think you would require 5+ GB of RAM using containers, because each container needs a minimum footprint before you add in the resources required by the application. Same for a database. I would guess that it would be significantly more than the original (significantly more than 5 GB), but I don't think it has
          • by Fwipp ( 1473271 )

            The containers aren't like full-fledged VMs - they're generally running only the processes they need. If you enter it & bring up top, you're quite likely to see 3 processes in there - one for top, one for your shell, and one for your actual app.

            You're going to have a slightly larger footprint, yeah - I don't believe you can take advantage of shared libraries to reduce RAM usage (though... possibly in some cases. I'm not sure) - but it's much lighter than running a VM.

            http://www.infoq.com/news/2014... [infoq.com] if

            • To get away from 'VM' terms, we know there can be several apps that might need MySQL. So this would act like an embedded MySQL server for each app instead of one. Or say like Python virtualenv, only different. It seems like it might allow different versions of stuff, but it also occurs to me this could get confusing after a while. I guess it's something else to learn, but I can't really see what the benefit is yet. chroot still works for good security.
          • I think of it like running multiple DOS applications under Windows 3.1. Some VM things were happening but it was lightweight enough you didn't really have to care. Except Wolf 3D (as sole running DOS application) was slower. When Windows 95 came out we'd hold F8 and boot to DOS out of habit (and still made a proper config.sys) lol.

            For containers I believe and hope that at least the disk cache is one and the same for all apps (one SMARTDRV.EXE for all processes). With cgroups containers trivially get a "RAM

            • I still program and run a few servers, even though it isn't my day job any more (I used to be a C Unix programmer). I'm trying to understand the benefit vs switching to a new paradigm, to try to use the word properly. It has a silver bullet smell to me. I hope this doesn't mean that Ubuntu will only be available with containers. Otherwise I'll likely have to make the switch to BSD.
              • If you switch to FreeBSD then you'll have.. jails.. which are like entirely the same thing as containers.

                Anyway it's the "Ubuntu Core" edition that's new, not containers, and it sure would be entirely optional. It seems to me it's a set of tools to spawn many copy-paste server instances in a gigantic "cloud" farm depending on level of activity or need to scale up. That's trendy but totally useless if you have the more usual need of caring about "that one server".

                But a container is maybe like running a proce

      • Does this mean if I have two (or more) servers running on a JVM, that each container will have its own JAVA_HOME?

        Isn't this already the case with large Java stuff? That despite the "run everywhere" mantra, people package it with large apps because in reality, it's not going to work?

        • So I have Artifactory and two versions of Glassfish on my dev box. But only one JVM. One JAVA_HOME. I also have Maven and Netbeans IDE using the same JAVA_HOME. Why would I want 4 different JVMs installed when one works just fine? And if there is a security flaw in Java and I need to upgrade, now I would need to download 1 update as opposed to 4 times that much (plus other crap like databases and other app code, assuming they want you to just download a whole new complete container whenever you upgrade). I
          • Why would I want 4 different JVMs installed when one works just fine?

            Why don't you ask Oracle or IBM? As far as I understand, these are the people bundling their own JVMs with whatever humongous product you buy from them, not me.

            • Artifactory, Netbeans, Maven, Glassfish, and Java are open source or close enough for me. Are you suffering from self imposed ignorance or arrogance? Either way, you sound like an uninformed dick who is trying too hard to sound programmer hip. I thought you'd appreciate the insult since from your sig you seem to be into cocks. Whatever floats your boat.
              • I have absolutely no idea what either your anecdotal evidence or insults are supposed to accomplish, since neither has any relevance for rational reasoning.
                • Then don't slag people for using Oracle products. If you can't answer a question, then shut the fuck up. I don't need to listen to assholes with nothing to say spouting some smart mouth remark instead of helping or passing on the question.
                  • The only two questions you asked were 1) why would you want 4 different JVMs installed when one works just fine, and 2) whether I am suffering from self imposed ignorance or arrogance. To which my respective answers were 1) you should ask IBM and Oracle, not me; and 2) no.
              • It sounds like YOU have no clue what you're talking about. Classically, multiple JVMs were required for such things as the Cisco PIX / ASA web interface (doesn't work with anything newer than 1.5), HP iLO (likes 1.6 / older), BlackBerry Enterprise Server (tends to blow up when you attempt an upgrade of the packaged JVM), and Fiery printer interfaces.

                But I guess none of that stuff counts.

                • I don't know about containers. It is also why I mentioned databases and other servers. I happen to know a good deal about JVMs. So just because I used Java as PART of an example don't go all aspergers and fixate on it. If you can't answer the question about containers then shut up. I responded to another fucking goof for acting like an asshole and implying I had a problem because I used Java based tools. Having worked on several projects in the last 15 years that had budgets of close to a billion dollars ea
                  • Have you ever wanted to know how to ensure that noone wants to have discussion with you? Cause I think you cracked the code.

          • Ok, so you have several reasonably well maintained open-source (or close) things running with the same JVM. Good for you. In your use case, you only need one VM.

            Now try running some legacy enterprise crap from 2003 which hasn't been touched for the last 10 years on JDK 8. And now imagine it uses JNI.

            --Coder
            • Doesn't have to be 10-year-old crap. Fiery printers made in the last few years blow up if you try to use 1.7u45 or newer, because those versions enforce certificate signing, which Fiery doesn't do. They're not the only big vendor with such issues either.

      • by Shados ( 741919 )

        Think of it as running separate VMs on a hypervisor, but SOME stuff can be shared. If it's all in the package, yes, they'd have their own MySQL or Postgres, but it's the same thing as if you had VMs with everything included.

        Nothing stopping you from having an instance for the database, and an instance for the web server that connects to the database.

    • Shove DosBOX running Windows 3.11 on top of that and you're golden!

    • Soo... that's a pretty artificially sadistic case. Does anybody really run hypervisors under hypervisors commonly? Virtualbox under Xen? Really?

      • by Anonymous Coward

        Don't underestimate human stupidity.

      • Nesting virtualization containers can be useful to test VMs on an OS you don't/can't run.

      • Yes, my VCP training class used vSphere as the host for several vSphere vApps, so that each student could have their own cluster to play with.

        It actually makes a lot of sense for testing.

    • Well, from the hardware layer you only have to run one virtual layer, and Linux as a guest of that is perfect, so why would you make that instance host another? If you also have a custom kernel with only the drivers and services you need, then even better.
    • You've added a few. For one, JRuby is compiled to run on the JVM. Unless you're just playing around, at worst that stack should be:

      -|JVM| - locked in a container; it's isolation, not a layer
      -Linux
      -Hardware

      Or if you're doing development you might have something more like


      -|JVM| - locked in a container; it's isolation, not a layer
      -Linux
      -VirtualBox
      -Windows or OSX
      -Hardware

    • Ideally, you have two layers:
      0. Host operating system to run the bare hardware.
      1. Containers to isolate running contained applications from each other, and govern their resource access.
      The application runs inside the container, without any virtual anything.

      With Java, in theory you could run Java on the host operating system under different user accounts. So you have the host operating system at level 0 and the JVMs running at level 1, and you use different user accounts to set different JAVA_HOMEs
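
      A sketch of that per-user JAVA_HOME arrangement, with made-up user names, JDK paths and jar names:

      # Each account gets its own JDK; the host OS is the only layer underneath.
      sudo -u appuser1 sh -c 'export JAVA_HOME=/opt/jdk7; "$JAVA_HOME/bin/java" -jar /srv/app1.jar'
      sudo -u appuser2 sh -c 'export JAVA_HOME=/opt/jdk8; "$JAVA_HOME/bin/java" -jar /srv/app2.jar'
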
      • Indeed, I wonder if a container is that much different from running as a different user. The other day I was lazy and just ran ssh -X localhost to get a web browser loading in a blank state. If I could get ssh running with a null cipher and have some GUI launcher with user selection, I would kind of have "containerized desktop applications".

        • There's overlap between containers and running a different user, but unlike running under a different user, a container:
          1. Can't see what applications are installed on the machine in the global /bin and other path locations
          2. Can have the amount of RAM, disk space, and CPU they can use capped.
          3. Can't see many other aspects of the host - like a list of active sockets, how much memory is used, etc...

          If you just want to run your own applications in an isolated fashion as an extra layer of security
  • Because you don't look to containers for security.

  • It may not be systemD but does this mean it gets a free pass?

    • You mean because something like this has already been suggested by Lennart Poettering? Yeah, there is something to it. Funnily enough, the first dude answering the Shuttleworth post was a systemD + btrfs fanboy...

      But it's good the Ubuntu people removed this stupid btrfs requirement. I'm a fan of btrfs myself, but things should be exchangeable.

  • by sconeu ( 64226 ) on Tuesday December 09, 2014 @09:04PM (#48560733) Homepage Journal

    I'm just a casual user, not a sysadmin.

    But I thought containers were kind of like VMs, not like packages.

    What's the difference between a VM, a container, a chroot jail, and packages?

    Auto analogies are always welcome.

    • by Trongy ( 64652 )

      Broadly speaking, a VM is a virtual environment that runs a separate kernel.
      Containers are like chroot jails in that they provide virtualization of the user environment for processes that execute under the parent kernel. Containers generally provide more sophisticated control over system resources (CPU, RAM, network I/O) than a simple chroot jail.
      This wikipedia article provides a comparison of different types of container: http://en.wikipedia.org/wiki/O... [wikipedia.org]

    • by skids ( 119237 )

      A VM runs its own kernel.
      Containers share a kernel, but have their own namespace so e.g. they see only their own process table
      chroot jails just really control permissions and environment/libs

      On the one hand, all of them have some pretty compelling use cases, like ease of moving machines to new/backup hardware. On the other hand, they all lend themselves to horrible abuses and serve to keep crummy, buggy code in service way past when it should be flushed down the toilet with extreme prejudice, and attract a s

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Ok let's get you up to speed on containers in 7 paragraphs and there is some pottering hiding somewhere in here to keep folks interested. A VM emulates the entire hardware layer. A container depends on cgroups and namespaces support in the Linux kernel to create a lightweight isolated OS environment with network support. So you could be running a Debian host and multiple Redhat, Centos, Ubuntu, Fedora etc containers and vice versa.

      The advantage is that because containers are not emulating a hardware layer, you ge
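
      Those kernel pieces can also be poked at directly; a minimal namespace sketch using a recent util-linux unshare (needs root, illustrative only):

      # New PID and mount namespaces with a fresh /proc, so the shell inside sees
      # only its own process table -- the same isolation containers build on.
      sudo unshare --pid --mount --fork --mount-proc /bin/sh
      # inside the new namespace, "ps aux" shows only this shell and its children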

    • Docker and its like are more than just containers. Docker is more like a format and ecosystem around the core LXC containers that have been around forever.

      Just speaking of the container, it is more in line with chroot/jail, with even more isolation.

      Docker, the entire ecosystem, is more like Amazon's AWS in that there are many prebuilt containers.

      And it's kinda like a configuration management system (chef, puppet, cfengine) in that there is a scriptable interface for building new containers.

      And kind of a continuo

    • You're right. Here's a solid overview of the type of technologies that Docker impacts which appeared on Hacker News a couple of months back http://www.brightball.com/devo... [brightball.com]

  • I Don't Get It (Score:5, Insightful)

    by ewhac ( 5844 ) on Tuesday December 09, 2014 @09:28PM (#48560879) Homepage Journal
    Am I getting hopelessly old and unable to appreciate new things, or is this not anywhere near as whoop-de-doo as they're making out?

    "You can update transactionally!!" Great. What does that mean? Is it like git add newapp; git commit -a? If so, how do I back out a program I installed three installations ago?

    Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what's running on a particular system, [ ... ]

    dpkg -l

    You can roll updates back, [ ... ]

    dpkg -i <previous_version>

    ...lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool.

    #include <cheap_shots/systemd.h>

    Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps.

    debsums

    Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps.

    ...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.

    I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:

    $ snappy info
    release: ubuntu-core/devel
    frameworks: docker, panamax
    apps: owncloud

    That's much easier to manage and reason about at scale.

    No, it isn't!! What the hell is OwnCloud pulling in? What's it using as an HTTP server? As an SSL/TLS stack? Is it the one with the Heartbleed bug, the POODLE bug, or some new bug kluged in by the app vendor to add some pet feature that was rejected from upstream because it was plainly stupid?

    Honestly, I'm really not getting this. It just sounds like they created a pile of tools that lets "cloud" administrators be supremely lazy. What am I missing here?

    • "Honestly, I'm really not getting this. It just sounds like they created a pile of tools that lets "cloud" administrators be supremely lazy. What am I missing here?"

      Only three things:
      1) Most people are not really that good at their trade.
      2) Youngsters more so, if only because they've had no time yet to become any wiser; but, youngsters being youngsters, they still think they know it all.
      3) Due to IT advancements, a lot of ignorant but otherwise energetic youngsters can now be very vocal about how their elde

    • Re:I Don't Get It (Score:4, Interesting)

      by ArsonSmith ( 13997 ) on Wednesday December 10, 2014 @12:39AM (#48561805) Journal

      Being able to go from zero to a fully functional and testable running application with multiple tiers in minutes is a bit different.

      Being able to completely uninstall without dpkg-old or random .bak files laying around is kinda nice.

      Rolling back three versions is as simple as a docker run (sketched below).

      Honestly I've been doing this for 22 years and docker is the first time I've looked at a tech and it scared me. It is going to relegate systems administration to little more than rack and stackers. The current DevOps trend is going to become just Dev and everything will be code.

      If you're an administrator of pretty much any type, you'd better start learning to program. The days of long-lived static apps with a full-time support staff are going away. The art of setting up and configuring the exact combination of packages, standards, access, etc. will be gone.
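
      The docker run rollback mentioned above, sketched under the assumption that every release is pushed as its own image tag (names and tags are invented):

      docker stop myapp && docker rm myapp # drop the current instance
      docker run -d --name myapp registry.example.com/myapp:1.4.0 # start the image from three releases back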

    • ...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.

      He didn't say that, because that's not how Linux works. You can load two different versions of the same library as long as they are named appropriately, so we don't have DLL hell no matter how many libraries we have loaded on our systems.

    • Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps.

      ...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.

      Too late -- Ruby on Rails has already brought DLL Hell to Linux. I challenge you to install a Ruby on Rails application without having the exact version of Ruby and its dependencies that was used to develop it. This is why almost everyone uses Ruby version managers such as RVM

    • Docker et al are mostly wrappers for cgroups and selinux. The static linking tradeoff is not the big deal. The big deal is that you get most of the benefits of whole operating system virtualization with lower overhead. You can set strict limits on the amount of disk space, disk IO, network bandwidth, memory, and CPU resources the container can use, and block the container from knowing anything about the host operating system, and block the container from knowing about any other containers on the machine.
    • Fellow greybeard here. I actually just looked into Docker a few days ago so I'm far from an expert, but it seemed like a neat idea. Docker doesn't really give you anything that you couldn't do before, but it does make it easier to let developers do their jobs and sysadmins do their jobs.

      The idea is that the output of software development is a "container", and that container is then handed off to the tech services group to deploy wherever makes sense. It's very similar to a VM, except it's (I think) based of

  • From TFA:

    Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it.

    So when a library needs a security / other update, I'll possibly have to update several snappy packages that all contain the affected library? Ya, that's sooo much better Mark.

    • I have to do this already with thousands of servers all running apps. This makes it much easier to do. No longer do I have to have some kind of monitoring in place to ensure that every nginx box has the latest SSL and bash fix along with vendor patches and other crap. One container, redeploy everywhere and restart. Only one thing to check.

  • It is interesting that they picked this name; it reminds me of the good old days of Fedora Core :)
  • Comment removed based on user account deletion
  • Docker reminds me of Qubes in some ways. https://qubes-os.org/ [qubes-os.org]
  • I assume there is a component to this called Snappyd.
