VMs in small ARM servers
Background (I swear I get to VMs later)
I have been running a personal server at my office for a little while (See 1 and 2) where I run a number of containerized services.
My "production" server Pinky, in its final form. A Radxa Zero with 4GB of RAM and 32GB of eMMC, plus two 1TB HDDs in RAID1 with btrfs. pic.twitter.com/ckhPloQXPO
— Roberto H. Alsina (@ralsina) August 4, 2022
Since I had it, I wanted to add a way to easily deploy my own experimental code so I can do quick "servers" for things I am playing with.
I could just write my code as, say, a Flask app, build a container for it, deploy it that way, then add ingress rules in my gateway and ... it gets exhausting pretty fast.
What I wanted was a way to run my own Heroku, sorta. Just write a bit of code, run a command, have it be available.
After some googling I found a solution that didn't require me to set up a k8s cluster: faasd. The promise is:
- Run a shell script to install faasd
- Write your code as a function
- Run a command to deploy
- It all runs out of a single ingress path (except CORS of course)
So: minimal config, easy deployment, no constant tweaking of my gateway. All good!
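Just to give an idea of how small that workflow is, here is more or less what "write a function, run a command" looks like with faas-cli (the function name, template and gateway address here are placeholders, not anything from my setup):

# Scaffold a function from a template, e.g. python3
faas-cli new hello --lang python3
# Edit hello/handler.py, then build, push and deploy in one go
faas-cli up -f hello.yml
# The function ends up reachable under the gateway's single ingress path
curl http://my-gateway:8080/function/hello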
Except ... faasd doesn't play along with Docker. Both use containerd, CNI and other things as their backend, and faasd really wants specific versions of them, installed by its own installer rather than by the system, and once that happens running Docker gets pretty dicey.
So, I could just get a second server. It's not like I don't have more small computers.
I have a problem.
— Roberto H. Alsina (@ralsina) August 2, 2022
But I also have solutions! pic.twitter.com/u1qj5d21Cj
But my server has spare capacity! So I don't WANNA START A SECOND SERVER!
Also, this is often going to be toy code I have not carefully vetted for security, so it would be better if it ran in isolation.
So? I needed a VM.
Really, a VM inside a tiny ARM computer
My server is a Radxa Zero. It's smaller than a credit card. It has, however, 4 cores and 4GB of RAM, so surely there must be a way to run a VM in it that can isolate faasd and let it run its wonky versions of things while the rest of the system doesn't care.
And yes, there is!
Firecracker claims that you can start a VM fast, that it has overhead comparable to a container, and that it provides isolation! It's what Amazon uses for Lambda, so it should be enough for me.
On the other hand, Firecracker is a pain if you aren't a freaking Amazon SRE, which I am really not, but ...
Ignite is a VM manager that has a "container UX" and can manage VMs declaratively!
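The "container UX" part is not an exaggeration; if you know docker run, this will look familiar (the image and VM name here are just an example):

# Boot a VM from an image, docker-run style
sudo ignite run weaveworks/ignite-ubuntu --name demo --cpus 1 --memory 512MB --ssh
# Then list, ssh into, stop and remove it much like a container
sudo ignite vm ls
sudo ignite ssh demo
sudo ignite vm stop demo
sudo ignite vm rm demo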
So I set out to run ignite on my server. And guess what? It works!
It's packaged for Arch, which is what I am using, so I just installed it and ran a couple of scripts to create a VM:
[ralsina@pinky faas]$ cat build.sh
#!/bin/sh -x
# Create and configure a VM with faasd in it
set -e
NAME=faas
# Wait until something is listening on host $1, port $2
waitport() {
while ! nc -z $1 $2 ; do sleep 1 ; done
}
sudo ignite create weaveworks/ignite-ubuntu \
--cpus 2 \
--memory 1GB \
--size 10GB \
--ssh=id_rsa.pub \
-p 8082:8081 \
--name $NAME
sudo ignite vm start $NAME
# Grab the VM's IP address from ignite's listing
IP=$(sudo ignite vm ls | grep faas | cut -f9 -d\ )
waitport $IP 22
ssh -o "StrictHostKeyChecking no" root@$IP mkdir -p /var/lib/faasd/secrets
ssh root@$IP "echo $(pass faas.ralsina.me) > /var/lib/faasd/secrets/basic-auth-password"
scp setup.sh root@$IP:
ssh root@$IP sh setup.sh
# Login
export OPENFAAS_URL=http://localhost:8082
ssh root@$IP cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --password-stdin
# Setup test function
faas-cli store deploy figlet
echo 'Success!' | faas-cli invoke figlet
[ralsina@pinky faas]$ cat setup.sh
#!/bin/sh -x
set -e
apt update
apt upgrade -y
apt install -y git
# Get faasd and run its installer (it pulls in its own containerd, CNI, etc.)
git clone https://github.com/openfaas/faasd
cd faasd
./hack/install.sh
If you run build.sh, it will create an Ubuntu-based VM with faasd installed, start it, map a port into it, set up SSH keys so you can ssh into it, and configure authentication for faasd so you can log into that too.
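If you want to double check that faasd actually came up inside the VM, something like this should do (assuming faasd's install script set up its usual faasd and faasd-provider systemd units):

# Inside the VM: faasd runs as two systemd services
systemctl status faasd faasd-provider
# And the logs are there if something looks off
journalctl -u faasd --no-pager | tail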
Does it work?
Indeed it does!
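Since build.sh maps host port 8082 to the gateway in the VM, the test function can be exercised straight from the host too; the /function/ prefix is the standard OpenFaaS route:

# From the host, through the mapped port
curl -d 'Success!' http://localhost:8082/function/figlet
# Or, equivalently, with faas-cli
export OPENFAAS_URL=http://localhost:8082
echo 'Success!' | faas-cli invoke figlet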
Are there any problems?
There is one and it's pretty bad.
If the server shuts down uncleanly (that is, without explicitly shutting down the VM first), the VM gets corrupted, every time. It either ends up in a "Running" state in ignite while it's dead in containerd, or the network allocation is somehow duplicated and denied, or one of half a dozen other failure states, at which point it's easier to remove everything in /var/lib/firecracker and recreate it.
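When that happens, the recovery is blunt; roughly this (assuming nothing else on the box cares about /var/lib/firecracker):

# Force-remove the broken VM, wipe the firecracker/ignite state, start over
sudo ignite vm rm -f faas || true
sudo rm -rf /var/lib/firecracker/*
./build.sh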
Is it easy to deploy stuff?
You betcha! Here's an example from https://nombres.ralsina.me: if I run build.sh it builds it, deploy.sh deploys it, and the actual code is in the busqueda/ and historico/ folders.
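Deploying boils down to roughly this (stack.yml is just the usual faas-cli convention; the actual build.sh and deploy.sh in that repo may differ):

# Build, push and deploy the functions defined in the stack file
export OPENFAAS_URL=http://localhost:8082
faas-cli up -f stack.yml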
It's very simple to write code, and it's very simple to deploy.
If I found a better way to handle the VMs I would consider this finished.