
VMs in small ARM servers

Background (I swear I get to VMs later)

I have been running a personal server at my office for a little while (See 1 and 2) where I run a number of containerized services.

Since I set it up, I have wanted a way to easily deploy my own experimental code, so I can spin up quick "servers" for things I am playing with.

I could just write my code as, say, a Flask app, build a container for it, deploy it that way, then add ingress rules in my gateway and ... it gets exhausting pretty fast.

What I wanted was a way to run my own Heroku, sorta. Just write a bit of code, run a command, have it be available.

After googling I found a solution that didn't require me to implement a k8s cluster: faasd. The promise is:

  • Run a shell script to install faasd
  • Write your code as a function
  • Run a command to deploy
  • It all runs out of a single ingress path (except CORS of course)

So, minimal config, ease of deployment, no constant tweaking of my gateway. All good!
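
In practice that workflow boils down to a handful of faas-cli commands. Something like this (a sketch of how I understand it; the function name and language template are placeholders, not anything faasd ships with):

faas-cli template pull                  # fetch the default language templates
faas-cli new hello --lang python3       # scaffolds hello/handler.py and hello.yml
# ... write your code in hello/handler.py ...
faas-cli up -f hello.yml                # build, push and deploy in one step
echo hi | faas-cli invoke hello         # call it through the gateway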

Except ... faasd doesn't play along with Docker. Both use containerd and CNI and other things as their backend, and faasd really wants specific versions that it installs itself rather than using the system's, at which point running Docker gets pretty dicey.

So, I could just get a second server. It's not like I don't have more small computers.

But my server has spare capacity! So I don't WANNA START A SECOND SERVER!

Also, this is often going to be toy code I have not carefully vetted for security, so it would be better if it ran in isolation.

So? I needed a VM.

Really, a VM inside a tiny ARM computer

My server is a Radxa Zero. It's smaller than a credit card. It has, however, 4 cores and 4GB of RAM, so surely there must be a way to run a VM in it that can isolate faasd and let it run its wonky versions of things while the rest of the system doesn't care.

And yes, there is!

Firecracker claims that you can start a VM fast, that it has overhead comparable to a container, and that it provides isolation! It's what Amazon uses for Lambda, so it should be enough for me.

On the other hand, Firecracker is a pain if you aren't a freaking Amazon SRE, which I am really not, but ...

Ignite is a VM manager that has a "container UX" and can manage VMs declaratively!
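
To give you an idea of what that "container UX" means, this is roughly how you drive Ignite from the CLI (a sketch; the VM name and sizes are just examples):

sudo ignite run weaveworks/ignite-ubuntu \
        --name demo --cpus 1 --memory 512MB --ssh    # create and start, docker-run style
sudo ignite vm ls                                    # list VMs, like docker ps
sudo ignite ssh demo                                 # shell into the VM
sudo ignite vm stop demo && sudo ignite vm rm demo   # tear it down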

So I set out to run Ignite on my server. And guess what? It works!

It's packaged for Arch, which is what I am using, so I just installed it and ran a couple of scripts to create a VM:

[ralsina@pinky faas]$ cat build.sh
#!/bin/sh -x
# Create and configure a VM with faasd in it
set -e

NAME=faas

waitport() {
    while ! nc -z $1 $2 ; do sleep 1 ; done
}

sudo ignite create weaveworks/ignite-ubuntu \
        --cpus 2 \
        --memory 1GB \
        --size 10GB \
        --ssh=id_rsa.pub \
        -p 8082:8081 \
        --name $NAME

sudo ignite vm start $NAME

# Figure out the VM's IP address from the ignite listing so we can ssh into it
IP=$(sudo ignite vm ls | grep faas | cut -f9 -d\        )
waitport $IP 22

ssh -o "StrictHostKeyChecking no" root@$IP mkdir -p /var/lib/faasd/secrets
ssh root@$IP "echo $(pass faas.ralsina.me) > /var/lib/faasd/secrets/basic-auth-password"
scp setup.sh root@$IP:
ssh root@$IP sh setup.sh

# Login
export OPENFAAS_URL=http://localhost:8082
ssh root@$IP cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --password-stdin

# Setup test function
faas-cli store deploy figlet

echo 'Success!' | faas-cli invoke figlet
[ralsina@pinky faas]$ cat setup.sh
#!/bin/sh -x

set -e
apt update
apt upgrade -y
apt install -y git

git clone https://github.com/openfaas/faasd
cd faasd
./hack/install.sh

If you run build.sh it will create an Ubuntu-based VM with faasd installed, start it, map a port to it, set up SSH keys so you can ssh into it, and configure authentication for faasd so you can log into that too.
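
A quick sanity check is sshing into the VM the same way build.sh does and looking at the faasd service (I am assuming here that faasd's installer set up its usual systemd unit):

ssh root@$IP systemctl status faasd     # $IP as obtained in build.sh; should be active (running)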

Does it work?

Indeed it does!
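
You can also poke it from the host through the port build.sh maps (8082), hitting the figlet test function it deployed (this assumes the faas-cli login from build.sh already happened):

faas-cli list --gateway http://localhost:8082               # should show figlet
curl -d 'It works' http://localhost:8082/function/figlet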

Are there any problems?

There is one and it's pretty bad.

If the server shuts down uncleanly (that is, without explicitly shutting down the VM first), the VM gets corrupted, every time. It either ends up in a "Running" state in ignite while it's dead in containerd, or the network allocation is somehow duplicated and denied, or it lands in one of half a dozen other failure states, at which point it's easier to remove everything in /var/lib/firecracker and recreate it.
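
For what it's worth, my recovery routine when that happens is basically "nuke it and rebuild"; something along these lines (a sketch, the exact cleanup needed varies with the failure mode):

sudo ignite vm stop faas || true     # often fails, the VM is already half-gone
sudo ignite vm rm -f faas || true    # best-effort removal of the stale VM object
sudo rm -rf /var/lib/firecracker     # wipe ignite's state entirely
./build.sh                           # recreate the VM from scratch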

Is it easy to deploy stuff?

You betcha! Here's an example from https://nombres.ralsina.me: running build.sh builds it, deploy.sh deploys it, and the actual code is in the busqueda/ and historico/ folders.
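
For flavor, a deploy script for this kind of setup can be as small as pointing faas-cli at the VM's gateway and pushing a stack file. This is a hypothetical sketch, not the actual deploy.sh from that repo (the stack file name is made up):

#!/bin/sh -x
set -e
export OPENFAAS_URL=http://localhost:8082   # the gateway port build.sh maps to the VM
faas-cli up -f nombres.yml                  # build, push and deploy the functions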

It's very simple to write code, and it's very simple to deploy.

If I found a better way to handle the VMs, I would consider this finished.

