Ubuntu Gets Container-Friendly "Snappy" Core 149
judgecorp writes: Canonical just announced Ubuntu Core, which uses containers instead of packages. It's the biggest Ubuntu shakeup in 20 years, says Canonical's Mark Shuttleworth, and is based on a tiny core, which will run Docker and other container technology better, quicker, and with greater security than other Linux distros. Delivered as alpha code today, it's going to become a supported product, designed to compete with both CoreOS and Red Hat Atomic, the two leading container-friendly Linux approaches. Shuttleworth says it came about because Canonical found it had solved the "cloud" problems (delivering and updating apps, and keeping them secure) by accident - in its work on a mobile version of Ubuntu.
This actually sounds pretty cool. (Score:5, Informative)
No dependency management or fooling around with packages that require conflicting library versions, and possibly near-instant "installation" (depending on whether they're distributing Dockerfile-equivalents* or containers directly). Sounds good to me - I'll have to take a look sometime.
*Yes, I know that Docker is not the only way to do containers, but it's easy to imagine they could be using a similar "build" step.
Re:This actually sounds pretty cool. (Score:5, Informative)
Update: Playing with it, it looks like there isn't a build step. Following the steps here: http://www.ubuntu.com/cloud/to... [ubuntu.com] - it seems that Docker is installed under /apps/docker (~20 files total) - they're basically just distributing tarballs which contain all the necessary dependencies.
From there, it puts a quick wrapper script at ~/snappy-bin/docker, containing:
#!/bin/sh
##TARGET=/apps/docker/1.3.2.007/bin/docker
#TMPDIR=/run/apps/docker
cd /apps/docker/1.3.2.007
aa-exec -p docker_docker_1.3.2.007 -- /apps/docker/1.3.2.007/bin/docker $@
To build your own "snappy package" looks as simple as "cd mydir; snappy build ."
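To make that concrete, here is a minimal sketch of the self-contained app-dir-plus-wrapper pattern described above, using a temp directory in place of /apps. All names and paths here are illustrative, not Snappy's actual tooling, and the aa-exec AppArmor confinement step is omitted since it needs a loaded profile:

```shell
set -e
ROOT=$(mktemp -d)

# "Package": one directory holding the app and everything it needs,
# mirroring the /apps/<name>/<version> layout seen above.
mkdir -p "$ROOT/apps/hello/1.0.0/bin"
cat > "$ROOT/apps/hello/1.0.0/bin/hello" <<'EOF'
#!/bin/sh
echo "hello from a self-contained app dir"
EOF
chmod +x "$ROOT/apps/hello/1.0.0/bin/hello"

# Wrapper script, analogous to ~/snappy-bin/docker in the parent post.
mkdir -p "$ROOT/snappy-bin"
cat > "$ROOT/snappy-bin/hello" <<EOF
#!/bin/sh
cd "$ROOT/apps/hello/1.0.0"
exec "$ROOT/apps/hello/1.0.0/bin/hello" "\$@"
EOF
chmod +x "$ROOT/snappy-bin/hello"

"$ROOT/snappy-bin/hello"
```

Removing the app cleanly is then just removing its one directory and the wrapper, which is the appeal of the whole scheme.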
Re: (Score:1)
Actually it seems pretty obvious to me that the whole "container" thing is a deliberate sidestep of [L]GPL.
The binary isn't technically "linked" with the GPL code yet, so it's not infected.
You're free to modify the GPL'd files in the container, so the GPL is happy.
Waiting for GPLv4 to come out and forbid "containerization" in 3... 2... 1...
Re: (Score:3, Informative)
The article indicates that containers can be upgraded by sending only the modified files so upgrading a lib should not cost too much.
If their system is not too stupid then it will manage the container at the file level and will try to share identical files between packages.
That should not be too difficult using checksums and hard links. If they don't do that then the kernel won't be able to share files between applications and the whole system will use a lot more memory. Linux applications are sharing libs b
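The checksum-plus-hard-link scheme the parent describes is easy to sketch; this is just the general technique, not anything the Snappy tools are documented to do:

```shell
set -e
D=$(mktemp -d)
mkdir -p "$D/pkgA" "$D/pkgB"

# Two packages ship an identical "library" file.
printf 'libfoo contents v1\n' > "$D/pkgA/libfoo.so"
printf 'libfoo contents v1\n' > "$D/pkgB/libfoo.so"

# Dedup pass: if the checksums match, replace the second copy with a
# hard link, so both names point at one inode and the kernel's page
# cache holds the file's contents only once.
sumA=$(sha256sum "$D/pkgA/libfoo.so" | cut -d' ' -f1)
sumB=$(sha256sum "$D/pkgB/libfoo.so" | cut -d' ' -f1)
if [ "$sumA" = "$sumB" ]; then
    ln -f "$D/pkgA/libfoo.so" "$D/pkgB/libfoo.so"
fi

# Both paths now report the same inode number.
stat -c '%i' "$D/pkgA/libfoo.so" "$D/pkgB/libfoo.so"
```

The catch, of course, is that two packages rarely ship byte-identical library builds, which limits how much this recovers in practice.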
Re: (Score:2)
So yes, you lose the advantage of statically linked libraries. But it gives you most of the important advantage
Re:This actually sounds pretty cool. (Score:5, Insightful)
How does this mesh with the ideas that:
1. shared libraries allow a small memory footprint
and
2. allowing the OS to manage dependencies allows applications to be more secure since all applications that rely on a shared library benefit when a security update of the library get installed
Re: (Score:2)
2 is one of my main concerns too. Let application developers develop their applications and library developers develop their libraries. Not every OSS application contributor wants to apply security updates in their free time.
Re: (Score:2)
Because installing security updates is always easy. At least with a container system you can install and test it in isolation, then just run the new container with confidence, rather than the old fun of "It's patch day! Everyone get ready to validate that everything still works again."
Re:This actually sounds pretty cool. (Score:5, Insightful)
"No dependency management or fooling around packages that require conflicting library versions, possibly near-instant "installation" (depending on if they're distributing Dockerfile-equivalents* or containers directly). Sounds good to me"
Congratulations. You have discovered static linking. Welcome to the fifties.
Now, in less than ten years you will find the problems with your approach and will also reinvent dynamic linking and I'll gladly welcome you to the sixties.
Re: (Score:2)
I mean, I'm a big fan of static linking, so that probably explains my enthusiasm for this. I haven't yet figured out how to statically-link an entire django project, though. :)
Re: (Score:2)
not what you're looking for but cool for the static linker fan:
http://blog.xebia.com/2014/07/... [xebia.com]
Re: (Score:2)
Haha, I have read that and, predictably, found it pretty cool. :)
Re:This actually sounds pretty cool. (Score:4, Informative)
Now obviously if you wanted to make the virtualized environment available to a remote user as a remote desktop, or for remote login directly into the virtual environment with ssh for configuration changes, you want a full virtualized operating system. In that case you need VirtualBox, KVM, Xen, VMware, Hyper-V, etc... but for deploying identical instances of a configured application to a handful or even hundreds of machines with the smallest feasible overhead and a nice security layer to prevent most breaches of the hosted application from leaking into the host operating system, a container is great. If you look up how Docker and lmctfy work, they are mostly an API wrapper around the Linux kernel cgroup and selinux features for configurable restriction of a process's ability to utilize CPU, RAM, and IO.
Re: (Score:3)
"the advantage of Docker and similar containers isn't static linking. The advantage is that you get most of the important benefits of whole operating system virtualization with much lower overhead in terms of disk space and resource use [...] for deploying identical instances of a configured application to a handful or even hundreds of machines with the smallest feasible overhead"
Please take some time to think about what you wrote. Once you deploy your app to "hundreds of machines" you are back to square one:
Re: (Score:2)
1. Isolating contained software from the rest of the operating system, so you can host third party applications without worrying about them inspecting their host environment.
2. Limiting contained software's ability to use host resources - disk space, CPU, memory, network IO, disk IO.
Re: (Score:2)
"I wish static linking was standard. kernels/hypervisors detect duplicate memory anyway, so dynamic linking has not a single advantage anymore."
Except, of course, not having half a dozen versions of foo running around with a dozen different security bugs to patch - or not to patch, since the dozen different applications are defined to use exactly version x.y.z of foo and they'll break apart if you try to use x.y.z+1.
Re: (Score:2)
Docker is not just containers, but image/container fs management is a key element too. Union fs with copy-on-write makes a big difference against traditional containers. And the image ecosystem, the easy creation with dockerfiles and a good api/powerful cmdline command are pretty important elements too.
Other container technologies could learn from/adapt those other Docker ideas, and even VMs could get a bit closer to them. No matter whether Docker is the dominant implementation there in the future or not, with thos
Re: (Score:3)
Yeah, there's a lot of drama 'round there. Cynically, it makes sense for CoreOS to roll their own solution, as Docker continues to reach into CoreOS's core areas. On the other hand, the Rocket announcement certainly sounded nice and desirable.
Neither the Rocket announcement nor the Docker response made me feel like I was reading a technical appraisal of either technology. Too much politics for me to dig through, so I'll wait for smarter people than I to weigh in before making a decision.
Re: (Score:2)
Good thing I have 128 GB of ram.
So many goddamn layers. (Score:5, Insightful)
And here we go again, adding yet another layer to an already wobbling stack of layers.
First we have hardware. Then we're running Xen or some other supervisor on that hardware, so we can have numerous VMs running Linux running on one physical system. Then each of these Linux VMs is in turn running VirtualBox, which in turn is running Linux, which in turn is running some container system. Then each of these containers is running some set of software. In some cases these containers are running something like the Java VM, which is, of course, another layer. Then in some truly idiotic cases, we have something like JRuby running on this JVM. There's some half-baked Ruby code running on JRuby.
Let's visualize this stack:
- Ruby code
- JRuby
- JVM
- Container
- Linux
- VirtualBox
- Linux
- Xen
- Hardware
Now that there's all this compartmentalization, it becomes a royal pain in the ass to share data between the apps running in the containers running in the VMs running on the actual hardware. So we start seeing numerous networking hacks to try and make it all into something barely usable. So throw on Apache, Varnish, and other software into the mix, too.
I'm sure that within a few years, we'll start seeing containers within containers, if that isn't already being done. Then those will need sandboxing, so there will be sandboxes for the containers that contain the containers.
Meanwhile, it's just one hack after another to intentionally get around all of this isolation, in order to do something minimally useful with this stack. The performance of the system goes swirling down the shitter as a result of all of the layers, and all of the effort needed to bypass these layers.
What a fucking mess!
Re:So many goddamn layers. (Score:4, Insightful)
You think that's bad, you should see what goes on on the network side these days. All the layers of encapsulation can very often be larger than the innermost payload.
What about things like the JVM inside a container? (Score:2)
Re:What about things like the JVM inside a contain (Score:4, Interesting)
Each container would contain all of the stuff it needs to run - in this case, Java + associated modules.
It simplifies stuff, because if one server requires Foo v1.11.4 but another needs Foo v1.10.8, neither server "sees" the other. You simply configure each container separately, without worrying what the other container's doing. When distributing the container, all you have to do is send out one image. If you want to run 12 containers on a host, that's cool. If you want to run only 1, that's fine too. And that same container will work just fine whether it's running on the server or the new kid's development laptop.
It's not an all-or-nothing approach, so you can choose if you want the database to live in a container of its own, on the host, in the app container, or somewhere distant.
Re: (Score:2, Insightful)
Why is everyone trying to turn Linux into Windows?
Re: (Score:2)
They're not. Windows has and makes use of dlls. They're trying to turn linux into dos.
Re: What about things like the JVM inside a contai (Score:1)
In a way they are, though. Win7 keeps every distinct version of every dll that each installed program requires, so you can quickly wind up with a folder under Windows/ filled with 20 different versions of the same library, even if most of the programs requiring them could run just as well on one of them.
Re: (Score:2)
oh, no.. the .NET code is in the GAC and is just as crufty as COM. Even their best plans soon turn into old habits at Microsoft.
(if you really want to worry, take a look at the "I have no clue which assembly is actually loaded" way .NET decides which DLLs to run, using Probing heuristics [microsoft.com])
Re: (Score:2)
The containers aren't like full-fledged VMs - they're generally running only the processes they need. If you enter it & bring up top, you're quite likely to see 3 processes in there - one for top, one for your shell, and one for your actual app.
You're going to have a slightly larger footprint, yeah - I don't believe you can take advantage of shared libraries to reduce RAM usage (though... possibly in some cases. I'm not sure) - but it's much lighter than running a VM.
http://www.infoq.com/news/2014... [infoq.com] if
Re: (Score:2)
I think of it like running multiple DOS applications under Windows 3.1. Some VM things were happening but it was lightweight enough you didn't really have to care. Except Wolf 3D (as sole running DOS application) was slower. When Windows 95 came out we'd hold F8 and boot to DOS out of habit (and still made a proper config.sys) lol.
For containers I believe and hope that at least the disk cache is one and the same for all apps (one SMARTDRV.EXE for all processes). With cgroups containers trivially get a "RAM
Re: (Score:2)
If you switch to FreeBSD then you'll have.. jails.. which are like entirely the same thing as containers.
Anyway it's the "Ubuntu Core" edition that's new, not containers, and it sure would be entirely optional. It seems to me it's a set of tools to spawn many copy-paste server instances in a gigantic "cloud" farm depending on level of activity or need to scale up. That's trendy but totally useless if you have the more usual need of caring about "that one server".
But a container is maybe like running a proce
Re: (Score:2)
Does this mean if I have two (or more) servers running on a JVM, that each container will have its own JAVA_HOME?
Isn't this already the case with large Java stuff? That despite the "run everywhere" mantra, people package it with large apps because in reality, it's not going to work?
Re: (Score:2)
Why would I want 4 different JVMs installed when one works just fine?
Why don't you ask Oracle or IBM? As far as I understand, these are the people bundling their own JVMs with whatever humongous product you buy from them, not me.
Re: (Score:2)
It sounds like YOU have no clue what you're talking about. Classically, multiple JVMs were required for such things as the Cisco PIX / ASA web interface (doesn't work with anything newer than 1.5), HP iLO (likes 1.6 / older), BlackBerry Enterprise Server (tends to blow up when you attempt an upgrade of the packaged JVM), and Fiery printer interfaces.
But I guess none of that stuff counts.
Re: (Score:2)
Have you ever wanted to know how to ensure that no one wants to have a discussion with you? 'Cause I think you cracked the code.
Many VMs is about legacy crap, not Open-source (Score:2)
Now try running some legacy enterprise crap from 2003 that hasn't been touched for the last 10 years on JDK8. And now imagine it uses JNI.
--Coder
Re: (Score:2)
Doesn't have to be 10-year-old crap. Fiery printers made in the last few years blow up if you try to use 1.7u45 or newer, because those versions enforce certificate signing, which Fiery doesn't do. They're not the only big vendor with such issues, either.
Re: (Score:2)
Think of it as running separate VMs in a hypervisor, but SOME stuff can be shared. If it's all in the package, yes, they'd have their own mysql or postgres, but it's the same thing as if you had VMs with everything included.
Nothing stopping you from having an instance for the database, and an instance for the web server that connects to the database.
Re: (Score:2)
Shove DosBOX running Windows 3.11 on top of that and you're golden!
Re: (Score:2)
DosBOX running DESQview/X with a remote xterm from the bottom layer for recursive bonus points.
So many goddamn layers. (Score:2)
Soo... that's a pretty artificially sadistic case. Does anybody really run hypervisors under hypervisors commonly? Virtualbox under Xen? Really?
Re: (Score:1)
Don't underestimate human stupidity.
Re: (Score:2)
Nesting virtualization containers can be useful to test VMs on an OS you don't/can't run.
Re: (Score:2)
Yes, my VCP training class used vSphere as the host for several vSphere vApps, so that each student could have their own cluster to play with.
It actually makes a lot of sense for testing.
Re: (Score:2)
You've added a few. For one, JRuby is compiled to run on the JVM. Unless you're just playing around, at worst that stack should be
-|JVM| - locked in container it's isolation not a layer
-Linux
-Hardware
Or if you're doing development you might have something more like
-|JVM| - locked in container it's isolation not a layer
-Linux
-VirtualBox
-Windows or OSX
-Hardware
Re: (Score:2)
Vagrant is nice for development integration, but we use it with Docker rather than virtual box. Works much much faster.
Re: (Score:2)
0. Host operating system to run the bare hardware.
1. Containers to isolate running contained applications from each other, and govern their resource access.
The application runs inside the container, without any virtual anything.
With Java, in theory you could run Java on the host operating system under different user accounts. So you have the host operating system at level 0 and the JVMs running at level 1, and you use different user accounts to set different JAVA_HOMEs
Re: (Score:2)
Indeed I wonder if a container is that much different from running a different user. The other day, I was lazy and just ssh -X localhost to get some web browser loading in a blank state. If I could get ssh running with a null cipher and have some GUI launcher with user selection I would kind of have "containerized desktop applications".
Re: (Score:2)
1. Can't see what applications are installed on the machine in the global
2. Can have the amount of RAM, disk space, and CPU they can use capped.
3. Can't see many other aspects of the host - like a list of active sockets, how much memory is used, etc...
If you just want to run your own applications in an isolated fashion as an extra layer of security
Can't wait to try this on Qubes (Score:2)
Because you don't look to containers for security.
No init (Score:2)
It may not be systemD but does this mean it gets a free pass?
Re: (Score:2)
You mean like something that has already been suggested by Lennart Poettering? Yeah, there is something to it. Funnily, the first dude answering the Shuttleworth post was a systemD + btrfs fanboy...
But it's good the Ubuntu people removed this stupid btrfs requirement. I'm myself a fan of btrfs, but things should be exchangeable.
OK. I'm thoroughly confused (Score:3)
I'm just a casual user, not a sysadmin.
But I thought containers were kind of like VMs, not like packages.
What's the difference between a VM, a container, a chroot jail, and packages?
Auto analogies are always welcome.
Re: (Score:2)
Broadly speaking, a VM is a virtual environment that runs a separate kernel.
Containers are like chroot jails in that they provide virtualization of the user environment for processes that execute under the parent kernel. Containers generally provide more sophisticated control over system resources (CPU, RAM, network I/O) than a simple chroot jail.
This wikipedia article provides a comparison of different types of container: http://en.wikipedia.org/wiki/O... [wikipedia.org]
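A quick way to see the kernel mechanism behind this distinction: every process's namespaces are visible under /proc, and a container runtime starts the contained process in fresh ones, while a plain chroot changes none of them. This works on any reasonably recent Linux, no container runtime needed:

```shell
# Each entry names a namespace this shell currently belongs to. Two
# processes in the same container share these IDs; a containerized
# process gets new ones (new pid table, new network stack, etc.).
for ns in pid net mnt uts ipc; do
    printf '%s -> %s\n' "$ns" "$(readlink /proc/self/ns/$ns)"
done
```

Comparing the output from inside and outside a container makes it obvious which namespaces the runtime actually unshared.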
Re: (Score:2)
A VM runs its own kernel.
Containers share a kernel, but have their own namespace so e.g. they see only their own process table
chroot jails just really control permissions and environment/libs
On the one hand, all of them have some pretty compelling use cases, like the ease of moving machines to new/backup hardware. On the other hand, they all lend themselves to horrible abuses and serve to keep crummy, buggy code in service way past when it should be flushed down the toilet with extreme prejudice, and attract a s
Re: (Score:2, Informative)
Ok, let's get you up to speed on containers in 7 paragraphs, and there is some pottering hiding somewhere in here to keep folks interested. A VM emulates the entire hardware layer. A container depends on cgroups and namespaces support in the Linux kernel to create a lightweight, isolated OS environment with network support. So you could be running a Debian host and multiple Red Hat, CentOS, Ubuntu, Fedora, etc. containers, and vice versa.
The advantage is because containers are not emulating a hardware layer you ge
Re: (Score:2)
Docker and its like are more than just containers. Docker is more like a format and ecosystem around the core LXC containers that have been around forever.
Just speaking of the container itself, it is more in line with chroot/jail, with even more isolation.
Docker the entire ecosystem is more like Amazon's AWS in that there are many prebuilt containers.
And kinda like a configuration management system (chef, puppet, cfengine) in that there is a scriptable interface for building new containers.
And kind of a continuo
Re: (Score:2)
You're right. Here's a solid overview of the type of technologies that Docker impacts which appeared on Hacker News a couple of months back http://www.brightball.com/devo... [brightball.com]
I Don't Get It (Score:5, Insightful)
"You can update transactionally!!" Great. What does that mean? Is it like git add newapp; git commit -a? If so, how do I back out a program I installed three installations ago?
dpkg -l
dpkg -i <previous_version>
#include <cheap_shots/systemd.h>
debsums
...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.
No, it isn't!! What the hell is OwnCloud pulling in? What's it using as an HTTP server? As an SSL/TLS stack? Is it the one with the Heartbleed bug, the POODLE bug, or some new bug kluged in by the app vendor to add some pet feature that was rejected from upstream because it was plainly stupid?
Honestly, I'm really not getting this. It just sounds like they created a pile of tools that lets "cloud" administrators be supremely lazy. What am I missing here?
Re: (Score:2)
"Honestly, I'm really not getting this. It just sounds like they created a pile of tools that lets "cloud" administrators be supremely lazy. What am I missing here?"
Only three things:
1) Most people are not really that good at their trade.
2) Youngsters more so, if only because they have had no time yet to become any wiser - but, youngsters being youngsters, they still think they know it all.
3) Due to IT advancements, a lot of ignorant but otherwise full-of-energy youngsters can now be very vocal about how their elde
Re:I Don't Get It (Score:4, Interesting)
Being able to go from zero to a fully functional and testable running application with multiple tiers in minutes is a bit different.
Being able to completely uninstall without dpkg-old or random .bak files lying around is kinda nice.
rolling back 3 versions ago is as simple as
docker run
Honestly I've been doing this for 22 years and docker is the first time I've looked at a tech and it scared me. It is going to relegate systems administration to little more than rack and stackers. The current DevOps trend is going to become just Dev and everything will be code.
If you're an administrator of pretty much any type, you'd better start learning to program. The days of long-lived static apps with a full-time support staff are going to go away. The art of setting up and configuring the exact combination of packages, standards, access, etc. will be gone.
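The "rolling back 3 versions ago is as simple as docker run" point above relies on images being immutable and tagged. A dry-run sketch of the idea, with an invented image name and a stub standing in for a real Docker daemon:

```shell
set -e
# Dry-run stub: prints each command instead of running it, since no
# daemon is assumed here. Delete this function to run it for real.
docker() { echo "docker $*"; }

# Because images are immutable and tagged, "rolling back" is just
# starting an older tag again; nothing on the host has to be un-patched.
docker stop myapp
docker run -d --name myapp registry.example/myapp:1.4.0
```

Contrast this with `dpkg -i <previous_version>` from the parent comment, which mutates the running host in place rather than swapping one immutable artifact for another.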
Re: (Score:2)
While not the lowest, I'm sure my ID is lower than yours. And normally I would agree with you.
Re: (Score:2)
...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.
He didn't say that, because that's not how Linux works. You can load two different versions of the same library as long as they are named appropriately, so we don't have DLL hell no matter how many libraries we have loaded on our systems.
I Don't Get It (Score:2)
...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.
Too late -- Ruby on Rails has already brought DLL Hell to Linux. I challenge you to install a Ruby on Rails application without having the exact version of Ruby and its dependencies that was used to develop it. This is why almost everyone uses Ruby version managers such as RVM
I Don't Get It (Score:2)
The second line above was supposed to be quoted (I blame Slashdot Beta...)
Re: (Score:2)
Fellow greybeard here. I actually just looked into Docker a few days ago so I'm far from an expert, but it seemed like a neat idea. Docker doesn't really give you anything that you couldn't do before, but it does make it easier to let developers do their jobs and sysadmins do their jobs.
The idea is that the output of software development is a "container", and that container is then handed off to the tech services group to deploy wherever makes sense. It's very similar to a VM, except it's (I think) based of
Re: (Score:2)
In that case, why bother with dynamic linking at all? Why not stati
Re: (Score:2)
The GPL is not bypassed, because that's not what it was designed for. The GPL was not designed to prevent you from doing useful things on your own computer. You just can't give the software to someone else and not at the same time give them the same freedom that you got. Downloading GPL software and linking it locally is totally OK, because it does not restrict someone else's freedom.
Re: (Score:2)
The GPL has never had a problem with bundling programs with different licensing or linking LGPL into non-GPL, or GPL into non-GPL for that matter; it's only if you distribute that mix that GPL has a problem with it. What you do on your own computer is up to you.
Re: (Score:2)
Nope, it brings DLL hell to Linux.
Sure, you get the developer's environment, but every program can and will depend on different versions of the same library.
This is what DLL hell was in Windows. Program A needed version A of msvexample.dll while program B needed version D; version A was newer, but overwriting version D stopped program B from working, so they put in both. You now have two versions of the same library, and just like Windows you are still vulnerable to any problem in the library, because program B needs versi
Re: (Score:2)
Autoremove is not a magic bullet, and I have personally seen it nuke things that were in active use (as evidenced by the immediate breakage).
Redundant updates? (Score:2)
Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it.
So when a library needs a security / other update, I'll possibly have to update several snappy packages that all contain the affected library? Ya, that's sooo much better Mark.
Re: (Score:2)
I have to do this already with thousands of servers all running apps. This makes it much easier to do so. No longer do I have to have some kind of monitoring in place to ensure that every nginx box has the latest SSL and bash fix along with vendor patches and other crap. One container, redeploy everywhere, and restart. Only one thing to check.
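The "one container, redeploy everywhere and restart" workflow described above might look something like this. Hostnames and the image name are invented, and the ssh call is stubbed out as an echo so the sketch is a dry run:

```shell
set -e
# Invented image tag carrying the library/security fix baked in.
IMAGE=registry.example/frontend:openssl-fix

# Stub: prints the remote command instead of running it. The real
# version would be: run_remote() { ssh "$1" "$2"; }
run_remote() { echo "ssh $1 -- $2"; }

# Same immutable image everywhere; the only per-host step is pull+restart.
for host in web1 web2 web3; do
    run_remote "$host" "docker pull $IMAGE && docker restart frontend"
done
```

The point the parent is making is that verification collapses to one check (which image tag is running), instead of auditing package versions per box.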
Re: (Score:2)
Nah, it's already supplied by the base system(d).
Re:20 years? (Score:5, Informative)
Well TFA says:
"This is in a sense the biggest break with tradition in 10 years of Ubuntu..."
Editor fail.
Re: (Score:3)
It'd be fucking great if old Mark decided to instead ship some stable, mature software once.
I'm fucking sick of Ubuntu Unity freezing, locking up or getting stuck (lol you're stuck between workspaces and can't do anything about it but reboot or bounce Xorg!)
Re: (Score:2)
It's measured in recruitment years, so they find the longest DevOps/node.js experience in the company. It rises fast, though - it could have risen to 40 years since the article was written, for such a hot, dynamic company as this.
Re: (Score:2)
AC just assumed that Slashdot is continuing its excellence in the timely reporting of two-year-old news.
Re: (Score:3)
Two years ago's 20-years-ago would have been 1992? Unless leap years work very differently than I've been told...
Re: (Score:2)
Or AC was using the article to figure out today's date.