[kwlug-disc] Docker on VPSes

Mikalai Birukou mb at 3nsoft.com
Tue Jan 22 10:39:37 EST 2019


< -- inlined replies -- >

> I am working on migrating a LAMP application -- Drupal, as it turns
> out -- from a VPS on Linode onto a different VPS on Linode. (Yes, I
> really want to migrate to a new instance and not just upgrade the
> existing one.)

Talking only about migration: transplant the AMP stack (tar, untar, 
recreating users with the same uid/gid's) into an LXC container on the 
same machine/linode. Snapshot the container, preferably while it is 
stopped, and import it on the new machine/linode.
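A rough sketch of that flow, assuming LXD is already set up on both 
linodes (the container, snapshot and image names here are made up):

  # on the old linode: make a container and copy the stack into it
  # (tar up the AMP pieces on the host first, e.g. into /tmp/amp.tar.gz)
  lxc launch ubuntu:18.04 lamp
  lxc file push /tmp/amp.tar.gz lamp/root/amp.tar.gz
  lxc exec lamp -- tar -xzf /root/amp.tar.gz -C /
  # recreate users with the same uid/gid inside the container, then:
  lxc stop lamp
  lxc snapshot lamp migration
  lxc publish lamp/migration --alias lamp-migration
  lxc image export lamp-migration /tmp/lamp-migration

  # on the new linode
  lxc image import /tmp/lamp-migration.tar.gz --alias lamp-migration
  lxc launch lamp-migration lamp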

Note that this uses container tech without going the Docker route. With 
LXD, your LXC container has a normal system inside, modulo imposed 
restrictions. With Docker, your (also LXC-based) container is meant to 
hold a single process. For example, starting a container in LXD feels 
like starting a computer/VM, while starting a Docker container comes 
with specifying a command to run in it (the specification is either 
explicit or implicit, but it is always present). Another example: the 
Nginx Docker image runs nginx as a non-daemon (!) foreground process. 
Feel it: ... a non-daemon process. You tell Docker to restart the 
process in the container if it quits.
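To make the contrast concrete (the image names are just the stock ones):

  # LXD: feels like booting a small machine, with init and the usual daemons
  lxc launch ubuntu:18.04 web

  # Docker: you start one foreground process and tell Docker to restart it
  docker run -d --restart unless-stopped --name web -p 80:80 nginx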

> It seems that Docker is the cool new thing to use for deploying
> applications. It sounds as if Docker is good for scaling things out,
> and for allowing different versions of LAMP components (different
> versions of PHP, for example) to exist on the same VPS when they are
> used by different applications. But does it make sense if you are
> sticking with a standard LAMP configuration on a standard Ubuntu
> install? It might be possible to containerize this application, but
> would it be worth the trouble? What advantages would there be?

Let's unpack scaling. Splitting AMP into two or three images should 
allow you to scale just those sections of the stack that need to scale. 
The ingestion part may still depend on an HAProxy in front of the LAMP 
stack -- clouds may or may not provide that ingestion magic, but you can 
also run it as one more container. And this split for scaling can be 
achieved via LXD as well, e.g. having different machines for different 
parts of the stack (see the sketch below).
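As a purely illustrative LXD version of that split (container names are 
made up):

  lxc launch ubuntu:18.04 proxy   # HAProxy ingestion in front
  lxc launch ubuntu:18.04 web     # Apache/Nginx + PHP
  lxc launch ubuntu:18.04 db      # MySQL/MariaDB
  # point HAProxy at the web container, and the web tier at the db container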

Docker.

Since there will be more than one container, Docker Compose is your 
friend. Since the db part has state, and Docker containers like to be 
stateless, you must use volumes (logs had better go to volumes, too). 
Not everyone trusts Docker to run a db, but that trust is coming. Me? I 
have dbs in LXD. Of course, the testing cycle uses a db in Docker, but 
that's not production.
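As a sketch only (the service names, images and mounts are placeholders 
to adjust to your stack), a compose file along these lines keeps db data 
on a volume:

  # docker-compose.yml
  version: "3"
  services:
    web:
      image: php:7.2-apache
      ports:
        - "80:80"
      volumes:
        - ./site:/var/www/html
    db:
      image: mariadb:10.3
      environment:
        MYSQL_ROOT_PASSWORD: change-me
      volumes:
        - db-data:/var/lib/mysql
  volumes:
    db-data: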

> I am also confused how one keeps all of these containerized images up
> to date,

You assemble an updated image and introduce it in place of the old one.

When you have a sea of machines/linodes, it is useful that the versions 
of your container images alone define the up-to-date state on each 
computing drop in your cloud.
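In practice that update cycle tends to look like this (the registry 
address and tag are placeholders):

  docker build -t registry.example.com/mysite:2019-01-22 .
  docker push registry.example.com/mysite:2019-01-22
  # on each machine/linode
  docker pull registry.example.com/mysite:2019-01-22
  docker stop mysite && docker rm mysite
  docker run -d --name mysite --restart unless-stopped \
      registry.example.com/mysite:2019-01-22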

So, you make a new version of the site. The Docker way might be packing 
a new container based on Nginx, adding the static content. The dynamic 
part goes into a container with your scripting language inside. Yes, 
instead of 2 MB, your new version artifact will be hundreds of MB.
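A minimal sketch of that packing (the source paths are hypothetical):

  # Dockerfile for the static part
  FROM nginx:1.15
  COPY ./public /usr/share/nginx/html

  # Dockerfile for the dynamic part
  FROM php:7.2-fpm
  COPY ./src /var/www/html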

>   and even why I should trust images that come from
> hub.docker.com . I trust Ubuntu/Debian updates because I understand
> the social infrastructure that makes them relatively trustworthy.

I make my containers from Ubuntu's official one. Making means adding 
lines in a Dockerfile like "RUN apt install -y gcc ...". And in one case 
I managed to use less space than some suggested image (a node+C+fuse 
build container for a gitlab runner).
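Such a build container might start from the official Ubuntu image along 
these lines (the package list is only illustrative):

  FROM ubuntu:18.04
  RUN apt update && apt install -y gcc make libfuse-dev nodejs npm \
      && rm -rf /var/lib/apt/lists/*
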
>   I also understand that I can upgrade these components with an "apt
> upgrade". I do not know how people do this in the Docker world.

Your Docker container setup will probably be more bespoke in terms of 
settings than your single machine/linode or LXD setup.

By now I have a few LXD containers, based on Ubuntu. Regular apt is used 
in them. apt-cacher-ng for caching packages is your friend when you have 
more than one machine.
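A sketch of that caching setup (the cache host address is a placeholder):

  # on the caching host; apt-cacher-ng listens on port 3142 by default
  apt install apt-cacher-ng

  # in each container, point apt at the cache
  echo 'Acquire::http::Proxy "http://10.0.0.5:3142";' \
      > /etc/apt/apt.conf.d/01proxy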

> Help? There are a bunch of tutorials in getting started with Docker,
> but not much about when to choose it, and under what situations it
> does/does not make sense.

Is it a vendor's app? If yes: if they give you a Docker image -> use it; 
if they give you updates via apt -> use LXD.

If it's your own app, and you already have a flow for packing things 
onto one machine, start splitting it across a few machines. You may say 
that is the micro-services route; I say it is a good engineering 
practice that will help with scaling. Let's not forget, how a chef 
decides to chop the meat is the chef's choice. Put the chops into LXD. 
Put all the lxc creation commands into a script (a toy example follows). 
Turning that into a Dockerfile will be easy, when you decide to go 
Docker.
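A toy version of such a script (names and packages are made up); it maps 
almost line by line onto a later Dockerfile, with the apt lines becoming 
RUN and the file pushes becoming COPY:

  #!/bin/sh
  # create-web.sh: build the "web" chop as an LXD container
  lxc launch ubuntu:18.04 web
  lxc exec web -- apt update
  lxc exec web -- apt install -y apache2 php libapache2-mod-php
  lxc file push ./site.tar.gz web/root/site.tar.gz
  lxc exec web -- tar -xzf /root/site.tar.gz -C /var/www/html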

More so, you can mix Docker and LXD, because the subsystems talk over 
the network anyway.

Right now I am staring at GitLab's continuous integration script. We use 
a Docker-based GitLab runner. The developer (!) has the info to say what 
is in the container into which the new version of the software is 
packaged. This is cool. There is still the "it runs on my machine" 
syndrome, but it is contained, as now I have to make it run in this 
particular contained environment. The Dockerfile provides a pretty good 
prescription of it.
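In .gitlab-ci.yml the developer picks that environment via the image: 
key; a hypothetical minimal job:

  # .gitlab-ci.yml
  build:
    image: registry.example.com/build-env:latest   # container the job runs in
    script:
      - make test
      - make package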





