Docker Images To Be Based On Alpine Linux (brianchristner.io) 86
New submitter Tenebrousedge writes: Docker container sizes continue a race to the bottom with a couple of environments weighing in at less than 10MB. Following on the heels of this week's story regarding small images based on Alpine Linux, it appears that the official Docker images will be moving from Debian/Ubuntu to Alpine Linux in the near future. How low will they go?
Re: (Score:2)
The point is pulling an application.
How the people that made the docker image you are using created their Docker image doesn't matter much.
You can still use Debian/Ubuntu as the base of _your_ Docker images.
These are just the official Docker images. I can see how that makes sense for something like a MySQL Docker image. You just want it to run the database server.
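To illustrate: even if the official library images move to Alpine, a hypothetical Dockerfile for your own image can keep a Debian base (image tag and app path here are made up):

```dockerfile
# Sketch: your own image keeps a Debian base regardless of
# what the official library images are built on.
FROM debian:jessie

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
    && rm -rf /var/lib/apt/lists/*

COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```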
Re: (Score:2)
There are already tools to detect that, at least some of them are FOSS:
https://www.alfresco.com/blogs... [alfresco.com]
For example this one is open source:
https://coreos.com/blog/vulner... [coreos.com]
Re: (Score:2)
I think the idea behind the tools is to catch stuff just in case something gets forgotten.
You can easily do updates/rebuilds of Docker containers, as long as you automate the builds and write 12-factor apps.
Re: (Score:2)
If I didn't misunderstand them, keeping the common core up to date was one of the goals behind Red Hat's Project Atomic.
Re: (Score:2)
But in practice the applications don't update this base OS, so the copy of openssl that is loaded into memory when the app launches will be vulnerable since there is no practical way to automatically keep them updated. The app vendor would have to basically rebuild the image every time a single package would have to be updated.
First of all, there's often little reason to even include OpenSSL in your container. You can attach to it through Docker. And only expose ports your app uses. The attack vector is reduced. Secondly, practices around containers are definitely evolving, so what is "in practice" now isn't necessarily the way it will always be.
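As a rough sketch of that reduced surface (image name and port are made up), you publish only the one port the app listens on:

```
$ docker run -d -p 8080:8080 my-app    # only port 8080 is reachable from outside
```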
Re: (Score:1)
Re:Newbie question (Score:5, Interesting)
Systemd is a new init system whose primary advantage is that it promises to unify the behavior of services running on different linux distros. The problem that a lot of people (including myself) have with it is that since it unifies behavior, you lose choice in how you configure that behavior. In my organisation we've been using Debian for _years_ and upgrading our servers to the latest version has only ever involved minor tweaks to our config scripts. With Debian 8's systemd, we pretty much have to rewrite it all from scratch, which is going to be a huge, dangerous project. Of course, the trend of history is toward automation and standardization, but I think that systemd is too ambitious and too early. Only time will tell though.
I hadn't heard of Alpine Linux before today, being an old Debian guy. Besides the whole systemd thing, I've had the sense that Debian was losing its way for a while and have been looking around for something to replace it when Debian 7 reaches EOL. As a grey beard I want something light-weight and without systemd, but as a practical grey beard I want something stable, that I'll be able to run for another decade. So far, CentOS seemed to be the way of the future for my organization, although it makes me vomit in the back of my throat a little to go closer to the root of the systemd tree.
Make no mistake, though, Docker is the way of the future and will put a lot of people in this forum out of a job. If Alpine has the backing of Docker, it might be the linux distro of the future. It has some really interesting features, like the ability to save all of your system configuration into a package that you can install on other systems via the package manager. That's really cool, and a neat alternative to puppet. I'm not very happy that it isn't binary compatible with stuff built using glibc (which means that commercial software will be limited), and AFAICT it doesn't have some of the dev tools I like to use, but I think this will be a major contender soon.
I'll be watching it. It might
Re: (Score:3, Interesting)
I suspect docker is the only real contender to the total systemd domination. I liked systemd until I realized how much fragility it brings to the system with all those supposedly nice dependencies between services. The problem is that when one of the dependencies fails, the rest stops starting/shutting down/working when in fact the failure can be very much transient. So I started to like how docker resists supporting full-featured container dependencies. It essentially requires for containers to deal with c
Re: (Score:1)
and again alpine doesn't have systemd
no gnome yeah
Re:Newbie question (Score:4, Interesting)
Re: Newbie question (Score:1)
How so, when the daemons have different names across distributions (httpd vs. apache2), store their preferences in different locations, and keep their configuration in different locations? Standardizing those things has nothing to do with systemd.
Re: (Score:2)
Daemons do not have different names across distributions; the Apache2 daemon is httpd regardless of distribution. However, as you point out, configuration locations can and will vary, and also package names (but package names have nothing to do with systemd). If we look at the systemd unit file for Apache2 from RHEL7, there are these two lines:
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
So what happens here is that the unit file is the same regardless of distribution, and the distribution then puts its configuration locations into /etc/sysconfig/httpd
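For reference, a minimal sketch of a unit file in that RHEL7 style (the actual shipped httpd.service carries more directives than shown here):

```ini
[Unit]
Description=The Apache HTTP Server
After=network.target

[Service]
Type=notify
# Distribution-specific settings live in the environment file,
# so the unit itself can stay the same across distributions.
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful

[Install]
WantedBy=multi-user.target
```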
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Thanks F.Ultra
Alpine is poorly configured and you need to fiddle with those lovely script files to get some parts firing.
You would be absolutely lost with systemd.
Not every distro works exactly the same, that's why we have init scripts.
Re: (Score:2)
Re:Newbie question (Score:4, Interesting)
Make no mistake, though, Docker is the way of the future and will put a lot of people in this forum out of a job.
Yea, you're not really a grey beard if you make stupid statements like that.
Adding another layer of virtualization to our existing stack with several layers already in it isn't going to magically make things better.
In the last 10 years I've seen the same web server go from running on bare metal to running under 4 layers of hypervisor by the time you get to the docker container ... and you know what ... now that server farm takes 3 times as many people to run because you still need the apache guys ... and you still need the linux guys ... but now you need the docker guys, the vmware guys, and somebody who can coordinate the whole fucking mess.
So let's look at this; we went from:
Hardware -> Linux -> AppVM running Apache (1 layer of address translation, the AppVM/Linux kernel boundary)
to
Hardware -> Linux -> KVM -> Linux -> Docker -> Linux -> AppVM running Apache (that's 5 layers of translation ... Yes, that's REALLY what most people using Docker will do)
Yea, that's definitely going to put people out of work ... it's easy to understand.
Docker is another example of people doing something because they figured out how to do it, not because they actually should do it. Worse still, Docker is a solution to the fact that Linux is a mess from a file system perspective where everyone just dumps all their bins, system or 3rd party, all in the same directories, all together. And then they make fun of Windows like System32 is different than /usr/lib on any given Linux box. System or app config files ... ALL of them are in /etc ... WTF? You know there's this /usr/local idea right? You know that you can put libs in the same directory as the application and then you don't have to run a VM to get the same sort of separation, right? I mean seriously, you can't call yourself a grey beard and say Docker is good at the same time; you just admit you have no fucking clue how to be an admin or how docker works.
And you're conflating it with systemd? Do you not have any idea what either one of them are?
Yea, I'm ranting. People who think Docker is good are idiots, typically developers trying to be sys admins, or 'DevOps' as they call it ... and they're clueless and don't understand wtf they are doing. It's not Docker's fault. 'Zones' are something Solaris has had for years, and they weren't new when Solaris did it. Mainframes have had the concept since the 70s. The problem is devs who don't know when and where to use them and are just throwing them all over the place
Re: (Score:1)
Alpine doesn't have systemd
Re: (Score:1)
Calling You an SJW is without merit, as Your post has nothing to do with social justice.
Instead, I shall call You an assbandit for calling my system to be a "nothing" without systemd. And a total idiot for claiming that GNU is nothing without Linux. And that Linux is nothing without GNU. In short, fuck off and educate Yourself.
Re: (Score:1)
Re: (Score:2)
Debian GNU/HURD
why no SLS (Score:2)
a.out and static binaries, go with linux 0.12. everything since has been bloated crap.
Jails (Score:2, Interesting)
You could run in a Jail on BSD, and ship pretty much no OS at all. That's the future of containers (and 20 year old tech too...)
Re: (Score:3)
Jail --> Honking Big Ass Server running a bunch of restricted processes. While said processes run self-contained, they're not easily transferable at all.
Docker --> Self-contained program (that can be plugged into many different systems now). Runs self-contained. Easily transferable, everywhere, anytime.
Think something like nginx running as a proxy front-end, which could pass thru to servers, VMs, dockers on a server, etc. Or using docker images of different configurations
Re: (Score:3)
You obviously never used jails. You can take a FreeBSD jail, archive and transfer it to any other server quite easily. For instance, with a running jail using ezjail-admin [there are actually many ways to do this] :
The archive file will be saved [according to configuration] in /usr/j
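A rough sketch of that ezjail-admin flow (jail name is made up; the archive directory depends on your configuration, and flags may vary by version):

```
$ ezjail-admin stop myjail
$ ezjail-admin archive myjail              # writes a tarball to the configured archive dir
$ scp <archive-dir>/myjail-*.tar.gz otherhost:
```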
Re: Jails (Score:3)
There are many disadvantages to docker [boycottdocker.org]
Re: (Score:2)
Or an Illumos Zone.. Illumos being a fork of OpenSolaris.
Even better, you can run Linux inside a Zone on SmartOS, an Illumos distribution, via system call translation..
And Smartdatacenter [github.com]+ sdc-docker [github.com] = SmartOS based IaaS solution with docker support..
Smartdatacenter and SmartOS are made by Joyent [joyent.com]; everything is open source.
That's what they use to power their public cloud..
Truly a hidden gem, as is other Joyent open source work like manta [github.com]..
http://dockersl.im (Score:1)
DockerSlim shrinks standard Ubuntu containers by 30X.
Sample images (built with the standard Ubuntu 14.04 base image):
nodejs app container: 431.7 MB => 14.22 MB
python app container: 433.1 MB => 15.97 MB
ruby app container: 406.2 MB => 13.66 MB
java app container: 743.6 MB => 100.3 MB (yes, it's a bit bigger than others :-))
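If I understand the tool correctly, the invocation is along these lines (image name made up):

```
$ docker-slim build my-ubuntu-app    # emits a minified variant of the image
```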
Summaries, how do they work? (Score:3, Insightful)
Re:Summaries, how do they work? (Score:4, Informative)
Really? You've never heard of Docker? Docker is a system that allows you to build "containers" for an application that contain all of its dependencies. Then you can deploy it on a machine where it runs as a VM, using its local copy of software and configuration if there is one, or the host's copy if not. It allows you to package applications that can run on any compatible server without interfering with other applications on that server. When you need to spin up a new machine, you can just copy the container over and the application and all of its configuration is automagically moved. It's awesome.
In the short term, Docker is going to change the way that every Linux system is administered. It will change the way that every Linux application is deployed. In the longer term, Docker will finally fulfill the promise of "write once, run everywhere"... linux, unix, windows, android; it won't matter any more. Docker is going to change the world.
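The workflow the parent describes looks roughly like this (image and host names are made up):

```
$ docker build -t myapp .                        # bake app + dependencies into an image
$ docker save myapp | ssh newhost docker load    # copy the image to the new machine
$ ssh newhost docker run -d myapp                # same app, same dependencies, new host
```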
Re: Summaries, how do they work? (Score:5, Insightful)
In other words, it's nothing new, but now with a new added lack of security oversight, because administration is hard and therefore worthless in today's race to the bottom.
Re: Summaries, how do they work? (Score:5, Informative)
The novelty comes from having a lot of tools to quickly maintain images and such. As you say, there's also 'dockerhub' to let you download canned applications complete with OS libraries. The former I find to be handy, the latter I find problematic.
On the one hand, it can be a handy resource to dive into something to have a hands on example as you learn to deal with it yourself.
However, a few big downsides:
-Some projects have gotten very lazy about packaging. They make a half-hearted effort, or none at all, to offer up distro packages, because 'hey, docker!'. I suppose this wouldn't be so bad, except for...
-As you say, these are various images with varying degrees of discipline in applying updates.
Complicating matters that even if you 'trust' a particular publisher, docker's infrastructure isn't exactly thorough about things like signing images and such. Updates become gigantic, because you are updating the entire OS even if one library needs a hand.
Re: (Score:3, Interesting)
First, docker VMs are incredibly efficient. The overhead is really negligible, even on older hardware. Secondly, apt-get is nice, but it affects everything on the system. What if you're using an application that uses removed features of PHP 5.2 and another that relies on new features of PHP7 and a bunch of others that are happy on 5.6? This is a royal PITA to set up on one machine; you are better off buying separate servers. With Docker, you can apt-get the version you need WITHIN THE CONTAINER and the p
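A sketch of that side-by-side PHP scenario as a hypothetical docker-compose.yml (service names and paths are made up; official `php` images exist for the 5.6 and 7.0 lines):

```yaml
version: "2"
services:
  legacy-app:
    image: php:5.6-apache     # app that is happy on 5.6
    volumes:
      - ./legacy:/var/www/html
  modern-app:
    image: php:7.0-apache     # app that needs PHP 7 features
    volumes:
      - ./modern:/var/www/html
```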
Re:Summaries, how do they work? (Score:4, Insightful)
I've been running linux as my main OS on my personal computers for about a dozen years. I browse through Slashdot nearly every day. I have never heard of Docker. I work in a Windows world and don't do application development. I'm not sure why you think this would be on the radar of every single reader of Slashdot.
This failure to explain, or even link to an explanation, of the core concept of a summary is probably one of the biggest recurring editorial failures on Slashdot. (And yeah, that's saying something.) Technology has a lot of specialized branches. I know plenty of application developers who don't know anything about networking, or network admins who don't know anything about databases or writing code, etc. etc.
Re: (Score:2, Informative)
Apologies, I had thought that the links were sufficiently informative, especially given that we had an article on the same subject earlier this week. I've never used Docker personally, and have a fairly loose grasp of what it entails, but the idea of application containers has been around for something like 20 years -- BSD Jails, lxc, systemd-nspawn, Solaris zones, and whatever that CoreBoot based one is -- there was an article about it on Thursday. Half of the comments are saying how Docker is a dressed-up
Re: (Score:2, Informative)
I'm not sure why you think this would be on the radar of every single reader of Slashdot.
You've failed to integrate into the hive-mind, then. Honestly, if every single thing had to be explained at the lowest common denominator, slashdot would be a horrible place. Well, more horrible than it already is, anyway.
I have never used Docker, and had only the vaguest idea what it was about. I googled it and read about it. God forbid you should have to do the same.
Re: (Score:2, Interesting)
You might also be interested to know that Microsoft is adopting Docker as well. That you haven't heard of it is a little surprising because it's been talked about awhile both here and elsewhere. I too am in the Windows world nearly all of the time and had heard of it although I've yet to use it (one of my Linux based systems supports it though). It looks like pretty interesting concept although when I approach Linux guys about it the first thing they say is that it's a security nightmare
Re: (Score:3)
I'm surprised, too. As was said at SCALE this year.... The first rule of Docker is that you never shut the fuck up about Docker!
Re: (Score:1)
It's like totally cool man, like imagine your containers are on a ship that sails the tubes of the interwebs, docker is the automated robot captain who guides the ship into port and fastens the ropes to the wharf and begins the cranes that turn the containers into packages on the SYSTEM!
Re: (Score:3)
Docker is Cloud 2.0 and is the biggest generational/watershed/great leap for IT since VMs.
Even Microsoft offers docker compatibility with their new NanoServer images.
This is The One Way Forward.
There's one guy working on a project called Atom/Atomic/Atome or something, which is basically your app compiled into an OS container, instead of being built on top of an OS container, but still responding similarly to a docker container.
In the mean time there are Linux Distros like Ran
Re:Summaries, how do they work? (Score:4, Interesting)
There's one guy working on a project called Atom/Atomic/Atome or something, which is basically your app compiled into an OS container, instead of being built on top of an OS container, but still responding similarly to a docker container.
Maybe several people doing the same, but this is one such project: http://www.includeos.org/ [includeos.org]
Simply add one include in your C++ project and compile it into a VM image.
Re: Summaries, how do they work? (Score:2)
I've used Docker a bit, and I get it. I'd like to build a container of my client's Rails app + gems + ruby and just run it wherever without having to 'dirty' my system with all that stuff. I get it.
What I don't get is why I'd want Docker for MySql or Postgres. I install that on a dedicated box, so whatever crap it wants to pull in and spew all over the place is fine with me. When it's time to upgrade, I've yet to see if Docker can help me; I can't see how (at the very least, not much).
So... what I'm saying is
Re: (Score:2)
If you're using something like CoreOS with systemd, you can spin up the database in a cluster of nodes, and something like fleetctl will spin up the database again on another node if you lose that node. If you write your database container correctly, then it will look for existing db containers in the node cluster and spin itself up as a secondary database, attaching to the primary and allowing you to spin up and down database capacity as needed, sort of your own ec2 system that can adjust itself based on l
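A sketch of such a fleet unit (names are hypothetical; fleet units are ordinary systemd units plus an [X-Fleet] section that fleet uses for scheduling):

```ini
[Unit]
Description=Postgres container
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container before starting a fresh one.
ExecStartPre=-/usr/bin/docker rm -f postgres
ExecStart=/usr/bin/docker run --name postgres postgres:9.4
ExecStop=/usr/bin/docker stop postgres

[X-Fleet]
# One instance per node; fleet reschedules it elsewhere if the node is lost.
Conflicts=postgres@*.service
```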
Re: (Score:2)
So, WTF is docker?
The same thing it was when we ran articles about it 3 days ago.
Well the race is still open (Score:2)
As long as one cannot print out the image as a QR code ;)
Docker is Slower (Score:2)
Stop using docker (Score:2)
To use the potential of docker, you need to use prebuilt images, otherwise it doesn't speed anything up. Prebuilt images are just like "okay, somebody uploaded something, I execute it and feed my important data to it". Needless to say, it's a bad idea.
What to use instead?
Use plain LXC. LXC works great; you can easily generate a template with debootstrap (it ships with a script that does that) from official debian packages.
Then use ansible to install your stuff. An ansible file just looks like a Dockerfile, onl
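The LXC + debootstrap step above can be sketched like this (container name made up; template names vary by distro):

```
$ lxc-create -n web -t debian            # the debian template uses debootstrap under the hood
$ lxc-start -n web
$ lxc-attach -n web -- apt-get update    # then point ansible at the container
```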