
Ralsina.Me — Roberto Alsina's website

Schizo Desktop I: USB switch

I have an office. It's like a home office but it's in another place. It's just mine, so I get the peace and quiet of a home office but also get to go outside to get there. It's a good setup.

BUT I have two computers there. Well, actually I have like 10, but I interact with two. One is my personal desktop computer, the other is my work laptop. And I want to use the same peripherals in the same way with both of them. That's why I have the world's most complicated schizo desktop setup.

This series of posts will document it, the why and how of it, and the various things I've learned along the way, along with making you want to buy weird Chinese gadgets.

Today: My USB Switch

What is it? A 4x2 switch. That means you can connect 4 USB devices to it, and it can be connected to 2 computers.

It's not an expensive device, it costs around 6 dollars!

USB switch

How do you use it? It has a button. When you click it, all the devices move from one computer to the other.

What's plugged into it? My webcam, my microphone (not really, it's a bit more complicated, but that's another article) and my headphones.

The headphones are actually connected via a small USB audio card because that gives me a volume knob for them (again, about 6 dollars).

USB audio card

Why these devices? Because they are my "video call" devices. If I am doing a video call for work, I use them from the work computer, and when I want to do a personal one, I use them from my personal computer.

They switch in about half a second, and they work fine.

I do not use this for keyboard and mouse. I have other, better solutions (again, another article).

Physically, how is it mounted? I attached it to the bottom of the desk to my left, next to a hook for the headset and the arm for the microphone. When I want to switch, I just reach down and press the button.

What can fail?

  • If you updated the kernel on one of the machines and didn't reboot, then hotplugging the USB devices may fail. Don't do that :-)
  • Some operating systems or users may get confused by audio devices popping into existence and disappearing.
  • If you forget to switch, nothing works and you will be the person in the video call saying "can you hear me?"

Getting started with Ansible

I have a server, her name is Pinky

Pinky does a lot of things, but Pinky has one problem: Pinky is totally hand-made. Everything in it has been installed by hand, configured by hand, and maintained by hand. This is ok.

I mean, it's ok, until it's not ok. It has backups and everything, but when a chance presents itself to, for example, move to a new server, because I just got a nice new computer ... I would need to do everything by hand again.

So, let's fix this using technology. I have known about ansible for a long time, and I have used things like ansible. I have used packer, and salt, and puppet, and (related) docker, and kubernetes, and terraform, and cloudformation, and chef, and ... you get the idea.

But I have never used ansible!

So, here's my plan:

  • I will start doing ansible playbooks for pinky.
  • Since ansible is idempotent, I can run the playbooks on pinky and nothing should change.
  • I can also run them on the new server, and everything should be set up.
  • At some point the new server will be sufficiently pinky-like and I can switch.

So, what is ansible?

In non-technical terms: Ansible is a tool to change things on machines. Ansible can:

  • Set up a user
  • Copy a file
  • Install a package
  • Configure a thing
  • Enable a service
  • Run a command

And so on.

Additionally:

  • It will only do things that need to be done.
  • It will do things in the requested order.
  • It will do things on multiple machines.

First: inventory

The first thing I need to do is to tell ansible where to run things. This is done using an inventory file. The inventory file is a list of machines, and groups of machines, that ansible can run things on.

Mine is very simple, a file called hosts in the same directory as the playbook:

[servers]
pinky ansible_user=ralsina
rocky ansible_user=rock

[servers:vars]
ansible_connection=ssh 

This defines two machines, called pinky (current server) and rocky (new server). Since rocky is still pretty much brand new, it has only the default user it came with, called rock. I have logged into it and done some things ansible needs:

  • Enabled ssh
  • Made it so my personal machine, where ansible runs, can log in without a password
  • Installed python
  • Made rock a sudoer so it can run commands as root using sudo

So, I tell ansible I can log in as ralsina on pinky and as rock on rocky, in both cases using ssh.
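With the inventory in place, a quick sanity check (assuming ansible is installed on the machine you run it from) is an ad-hoc run of ansible's ping module against the whole group:

```shell
# Ad-hoc module run: "ping" checks ssh login and python on each host
ansible servers -i hosts -m ping
```

If both hosts answer "pong", the playbooks below have somewhere to run.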

First playbook

I want to be able to log into these machines using my user ralsina and my ssh key. So, I will create a playbook that does that. Additionally, I want my shell, fish, and my prompt, starship, to be installed and enabled.

A playbook is just a YAML file that lists tasks to be done. We start with some generic stuff like "what machines to run this on" and "how do I become root?"

# Setup my user with some QoL packages and settings
- name: Basic Setup
  hosts: servers
  become_method: ansible.builtin.sudo
  tasks:

And then guess what? Tasks. Each task is a thing to do. Here's the first one:

    - name: Install some packages
      become: true
      ansible.builtin.package:
        name:
          - git
          - vim
          - htop
          - fish
          - rsync
          - restic
        state: present

There, ansible.builtin.package is a module that installs packages. Ansible has tons of modules, and they are all documented in the ansible documentation.

Each task can take parameters, which depend on what the module does. In this case, as you can see, there's a list of packages to install, and state: present means I want them to be there.
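The same module can also make sure something is not there; a hypothetical task (not part of my playbook) would look like this:

```yaml
    - name: Make sure nano is not installed
      become: true
      ansible.builtin.package:
        name: nano
        state: absent
```

Running it when nano is already gone does nothing, which is the idempotence mentioned earlier.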

BUT while rocky is a Debian, pinky is arch (btw), so there is at least one package I need to install only on rocky. That's the next task:

    - name: Install Debian-specific packages
      become: true
      when: ansible_os_family == 'Debian'
      ansible.builtin.apt:
        name:
          - ncurses-term
        state: present

Same thing, except:

  • It uses a Debian-specific package module, called ansible.builtin.apt
  • It has a when clause that only runs the task if the OS family is Debian.
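If pinky ever needed an arch-only package, the mirror-image task would look much the same. This is a sketch, not part of my playbook; some-arch-only-package is a placeholder, and on Arch ansible_os_family reports 'Archlinux':

```yaml
    - name: Install Arch-specific packages
      become: true
      when: ansible_os_family == 'Archlinux'
      community.general.pacman:
        name:
          - some-arch-only-package  # placeholder, not a real package
        state: present
```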

What next? Well, more tasks! Here they are; you can understand what each one does by looking up the docs for each ansible module.

    - name: Add the user ralsina
      become: true
      ansible.builtin.user:
        name: ralsina
        create_home: true
        password_lock: true
        shell: /usr/bin/fish
    - name: Authorize ssh
      become: true
      ansible.posix.authorized_key:
        user: ralsina
        state: present
        key: "{{ lookup('file', '/home/ralsina/.ssh/id_rsa.pub') }}"
    - name: Make ralsina a sudoer
      become: true
      community.general.sudoers:
        name: ralsina
        user: ralsina
        state: present
        commands: ALL
        nopassword: true
    - name: Create fish config directory
      ansible.builtin.file:
        path: /home/ralsina/.config/fish/conf.d
        recurse: true
        state: directory
        mode: '0755'
    - name: Get starship installer
      ansible.builtin.get_url:
        url: https://starship.rs/install.sh
        dest: /tmp/starship.sh
        mode: '0755'
    - name: Install starship
      become: true
      ansible.builtin.command:
        cmd: sh /tmp/starship.sh -y
        creates: /usr/local/bin/starship
    - name: Enable starship
      ansible.builtin.copy:
        dest: /home/ralsina/.config/fish/conf.d/starship.fish
        mode: '0644'
        content: |
          starship init fish | source

And that's it! I can run this playbook using ansible-playbook -i hosts setup_user.yml and it will do all those things on both pinky and rocky, if needed:

> ansible-playbook -i hosts setup_user.yml

PLAY [Basic Setup] ******************************

TASK [Gathering Facts] **************************
ok: [rocky]
ok: [pinky]

TASK [Install some packages] ********************
ok: [rocky]
ok: [pinky]

TASK [Install Debian-specific packages] *********
skipping: [pinky]
ok: [rocky]

TASK [Add the user ralsina] *********************
ok: [rocky]
ok: [pinky]

TASK [Authorize ssh] ****************************
ok: [rocky]
ok: [pinky]

TASK [Make ralsina a sudoer] ********************
ok: [rocky]
ok: [pinky]

TASK [Create fish config directory] *************
changed: [rocky]
changed: [pinky]

TASK [Get starship installer] *******************
ok: [rocky]
ok: [pinky]

TASK [Install starship] *************************
ok: [rocky]
ok: [pinky]

TASK [Enable starship] **************************
changed: [rocky]
changed: [pinky]

PLAY RECAP **************************************
pinky : ok=9    changed=2    unreachable=0    failed=0    skipped=1 
        rescued=0    ignored=0
rocky : ok=10   changed=2    unreachable=0    failed=0    skipped=0 
        rescued=0    ignored=0

If you look carefully you can see rocky ran one more task, and pinky skipped one (the Debian-specific package installation), and that only two things actually got executed on each machine.

I could run this a dozen times from now on, and it would not do anything.

Did it work?

Sure, I can ssh into rocky and everything is nice:

> ssh rocky
Linux rock-5c 5.10.110-37-rockchip #27a257394 SMP Thu May 23 02:38:59 UTC 2024 aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Jun 26 15:32:33 2024 from 100.73.196.129
Welcome to fish, the friendly interactive shell
Type `help` for instructions on how to use fish

ralsina in 🌐 rock-5c in ~ 

There is a starship prompt, and I can use fish. And I can sudo. Nice!

I can now change the inventory so rocky also uses the ralsina user and delete the rock user.

Next steps

There is a lot more to ansible, specifically roles, but this is already enough to get useful things done, and hopefully it will be useful to you too.
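For the curious, a role is mostly a directory convention for bundling related tasks, files and variables; a minimal layout (following the convention from the ansible docs, with a made-up role name) looks like:

```
roles/
  common/
    tasks/main.yml      # tasks the role runs
    handlers/main.yml   # handlers the tasks can notify
    templates/          # jinja2 templates
    files/              # static files to copy
    vars/main.yml       # role variables
    defaults/main.yml   # overridable defaults
```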

Using duckdb to make CSV files talk

Sometimes you want to ask data questions. And often that data is in a CSV. Sure, you can write a quick Python script and use that to extract the information you want. Or you can import it into a database and use SQL.

But TIL the easiest thing is to just ask the duck.

The duck is DuckDB here.

Why? Because you can use SQL queries directly on CSV files.

For example, let's use a random CSV called luarocks-packages.csv that I have lying around.

It starts like this:

name,src,ref,server,version,luaversion,maintainers
alt-getopt,,,,,,arobyn
bit32,,,,5.3.0-1,5.1,lblasc
argparse,https://github.com/luarocks/argparse.git,,,,,
basexx,https://github.com/teto/basexx.git,,,,,
binaryheap,https://github.com/Tieske/binaryheap.lua,,,,,vcunat
busted,,,,,,
cassowary,,,,,,marsam alerque
cldr,,,,,,alerque
compat53,,,,0.7-1,,vcunat
cosmo,,,,,,marsam

And how do I query it? Well, suppose I want to find all packages where alerque is one of the maintainers:

> duckdb
v1.0.0 1f98600c2c
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
D select name from 'luarocks-packages.csv' where maintainers like '%alerque%';
┌───────────┐
│   name    │
│  varchar  │
├───────────┤
│ cassowary │
│ cldr      │
│ fluent    │
│ loadkit   │
│ penlight  │
└───────────┘

And boom! There you go. So, if you know even some very basic SQL (and you should!) you can leverage duckdb to extract information from CSV files quickly, reliably and in a repeatable manner.
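For comparison, the "quick Python script" alternative mentioned at the top might look like this, a small sketch over a few of the rows shown above, using only the standard library:

```python
import csv
import io

# A few of the rows from luarocks-packages.csv shown above
data = """name,src,ref,server,version,luaversion,maintainers
cassowary,,,,,,marsam alerque
cldr,,,,,,alerque
compat53,,,,0.7-1,,vcunat
cosmo,,,,,,marsam
"""

# Same question as the SQL query: which packages list alerque
# among their maintainers?
names = [
    row["name"]
    for row in csv.DictReader(io.StringIO(data))
    if "alerque" in (row["maintainers"] or "")
]
print(names)  # ['cassowary', 'cldr']
```

It works, but the duckdb one-liner is both shorter and closer to how you think about the question.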

Which is awesome!

Using docker to cross-compile things

What and Why Cross-Compiling

Sometimes you want a program compiled for an architecture which is not the one you are using. Specifically, I sometimes want to build Crystal binaries for ARM so I can run them in my home server, but the computer I normally use is an x86 one, with an AMD CPU.

There are at least two solutions for this:

1) Build it on the server, or on a machine similar to the server.

This may be tricky because of many factors:

  • Maybe the server is too busy
  • Maybe it doesn't have development tools
  • Maybe I don't have another similar machine
  • Maybe it's a heck of a lot slower

2) Build a binary for the server's architecture on my machine, even though it's a different architecture. That's cross-compiling.

This tutorial explains one of the possible ways to do that.

For other ways you can see this tutorial or the official crystal docs.

I think this solution is simpler than both of them :-)

The Magic of qemu-static

If you don't know QEMU, it's an awesome open source emulator. It lets you run virtual machines for almost any architecture on almost any other.

One offshoot of this project is qemu-static, which enables you to build and run containers for other architectures via transparent emulation.

You first need to run this command so everything else will work:

$ docker run --rm --privileged \
        multiarch/qemu-user-static \
        --reset -p yes

What that does is configure binfmt handlers for binaries for a number of platforms:

Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
...

You can read about this in more detail, but the short version is: binaries for any platform now work, and since in Linux containers are just a way to run isolated binaries ... well, container images for other platforms work too.

Building Crystal code using Docker

Let's create a simple docker image that can compile crystal code:

# This makes it use the platform we specify in the docker commandline
# rather than the one of the system you are on. So we just use alpine
# as a base
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine AS base

# And then we install crystal in it
RUN apk add crystal shards

We can build an image, let's call it crystal, using that:

$ docker build . -t crystal
[+] Building 1.5s (7/7) FINISHED                                                        docker:default
 => [internal] load build definition from Dockerfile                                              0.0s
 => => transferring dockerfile: 120B                                                              0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                  1.4s
 => [auth] library/alpine:pull token for registry-1.docker.io                                     0.0s
 => [internal] load .dockerignore                                                                 0.0s
 => => transferring context: 2B                                                                   0.0s
 => [1/2] FROM docker.io/library/alpine:latest@sha256:...8a8bbb5cb7188438  0.0s
 => CACHED [2/2] RUN apk add crystal                                                              0.0s
 => exporting to image                                                                            0.0s
 => => exporting layers                                                                           0.0s
 => => writing image sha256:...7f6abb5fe6d393a94689834bef88      0.0s
 => => naming to docker.io/library/crystal   

So if we have some crystal code, like hello.cr:

puts "Hello!"

And we can use that image to build a statically linked binary (don't be scared by the long command):

 $ docker run -ti -u $(id -u):$(id -g) \
    -v .:/src -w /src crystal \
    crystal build hello.cr -o hello --static

This tells docker to run in an interactive terminal (-ti), as the current user (-u $(id -u):$(id -g)), with the current folder visible as /src (-v .:/src), inside the folder /src (-w /src), using the image crystal, the command crystal build hello.cr -o hello --static.

After a second or so, a new hello file appears in your folder. It's the compiled version of hello.cr and is a regular binary:

$ ll hello
-rwxr-xr-x 1 ralsina users 3.6M Jun 24 16:18 hello*

$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=5c7de9ae7321754c7c53c8ea60670c19f3424fbe, with debug_info, not stripped

Well, regular up to a point. It's statically linked. And it was not built in my arch linux system, it was built in the alpine container! If it wasn't static, it would depend on musl instead of glibc, and in fact, I don't even need to have a crystal compiler in my system at all!

Bringing It All Together

So, if we know how to build crystal code using Docker, and we have a system that can run Docker images for other architectures ... why not build our code using Crystal in a container for other architectures?

First: we build an ARM version of our crystal container:

$ docker build . --platform=aarch64 -t crystal
[+] Building 1.4s (7/7) FINISHED                                                        docker:default
 => [internal] load build definition from Dockerfile                                              0.0s
 => => transferring dockerfile: 120B                                                              0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                  1.3s
 => [auth] library/alpine:pull token for registry-1.docker.io                                     0.0s
 => [internal] load .dockerignore                                                                 0.0s
 => => transferring context: 2B                                                                   0.0s
 => [1/2] FROM docker.io/library/alpine:latest@sha256:b89d9c93e9ed3597455c90a0b88a8bbb5cb7188438  0.0s
 => CACHED [2/2] RUN apk add crystal                                                              0.0s
 => exporting to image                                                                            0.0s
 => => exporting layers                                                                           0.0s
 => => writing image sha256:95feb8f2b9773f6946bd39b07e4dab7fb974012db58f81375772e88d417a323e      0.0s
 => => naming to docker.io/library/crystal 

The only thing different from before is the --platform=aarch64 argument, which makes Docker build an ARM image.

And we can use the same argument to build an ARM version of hello:

$ docker run --platform=aarch64 -ti -u $(id -u):$(id -g) -v .:/src -w /src crystal crystal build hello.cr -o hello --static

$ ll hello
-rwxr-xr-x 1 ralsina users 3.6M Jun 24 16:24 hello*

$ file hello
hello: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, BuildID[sha1]=56183bdedb28fd383643fcbd234fdcee6aae2b4f, with debug_info, not stripped

As you can see, it's an aarch64 binary now (which is ARM), not an x86_64 one.

You can even create a couple of shell aliases so that you have "crystal-arm" and "shards-arm" commands:

$ alias crystal-arm="docker run --platform=aarch64 -ti -u $(id -u):$(id -g) -v .:/src -w /src crystal crystal"

$ alias shards-arm="docker run --platform=aarch64 -ti -u $(id -u):$(id -g) -v .:/src -w /src crystal shards"

And then you just build things as always, but using the alias:

$ crystal-arm build hello.cr -o hello

Caveats and Conclusions

  • There is a performance penalty, the ARM version of crystal running in emulation will be slower than the x86 version.
  • If you are building more complex things using shards then you may have to change the Dockerfile and add dependencies such as libraries or C compilers in the crystal image.
  • qemu-static itself only works on x86, so you cannot use this to cross-compile to x86 from ARM.
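Regarding the second caveat, extending the Dockerfile is straightforward; a hypothetical sketch for a project whose shards need a C toolchain and some static libraries (the package names beyond crystal and shards are examples, adjust for your actual dependencies):

```dockerfile
FROM --platform=${TARGETPLATFORM:-linux/amd64} alpine AS base

# Crystal plus a C toolchain and static libraries that
# shards commonly link against (examples, adjust to taste)
RUN apk add crystal shards gcc musl-dev openssl-libs-static sqlite-static
```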

I think this is not much documented elsewhere, and similar approaches should work for any language where you don't want to bother setting up a cross-compiling toolchain, or if the tooling doesn't allow it.

Version 0.1.3 of Hacé is out

A new release of Hacé, my make-like tool backed by Croupier, is out!

New in this version

Features

  • Set variables from the command line
  • Allow passing output files as arguments
  • Auto mode works better
  • Handle bogus arguments better
  • Made --question more verbose, and only report stale tasks matching arguments
  • New -k option to keep going after errors
  • Switched to croupier main, supports depending on directories
  • Automatically build binaries for release
  • General housekeeping
  • Build itself using a Hacefile instead of a Makefile
  • Reject if two tasks share outputs (limitation of croupier for now)

Bugs Fixed:

  • Warn about unknown tasks used in the command line
  • Tasks with outputs passed wrong target to croupier
  • Command output was not visible in the log

Full Changelog: v0.1.2...v0.1.3


Contents © 2000-2024 Roberto Alsina