Setting up a Raspberry Pi dashboard

Lots of companies set up dashboards that allow them to keep an eye on their build pipelines and system monitoring, often on large TV screens. Raspberry Pis are a good option for the computer that connects and displays this information: they are low power, have the right connectors, and running a web browser on one is trivial.

With everyone working from home at the moment lots of us are missing those dashboards. You may well have a Raspberry Pi and a screen or TV you can hook it up to, but contention over the VPN may be the bottleneck. A quick trick to ‘share’ the bit of the connection you need is ssh reverse port forwarding. This allows you to set up a port on the remote machine (the Pi) that connects to the address you provide, using your machine as a proxy.

From the machine I’m working on, connected to the VPN, I use this ssh command to connect to the Raspberry Pi:

# hostnames here are placeholders; substitute your Jenkins server and your Pi
ssh -fN -R 8080:jenkins.example.com:443 pi@raspberrypi.local > /dev/null 2>&1

This opens up port 8080 on the Raspberry Pi for the HTTPS connection to the Jenkins server.

I then update the /etc/hosts entry on the Pi to point that network name at localhost (the hostname here is a placeholder for your real internal name):

127.0.0.1	jenkins.example.com

That should mean the dashboard address then works from the Pi. The high port number is so that we don’t need to ssh in as root, as ports below 1024 require root access to open. You can pick your own port numbers, and you can open up connections to multiple hosts on different ports by simply adding more -R options.
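For example, forwarding two internal dashboards at once might look like this. The hostnames are placeholders, and ssh is stubbed out with a shell function here so the sketch is safe to run as-is:

```shell
# stub standing in for the real ssh, so this sketch just prints what it would run;
# delete the function to run it for real
ssh() { echo "would run: ssh $*"; }

# one -R option per internal host; each gets its own local port on the Pi
ssh -fN \
    -R 8080:jenkins.internal:443 \
    -R 8081:grafana.internal:443 \
    pi@raspberrypi.local
```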

I also add a shell script on the Pi to fire up the browser and tidy up the screen, ensuring it doesn’t turn off.

# try to ensure Chrome doesn’t complain about unclean exits
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/Default/Preferences
sed -i 's/"exit_type":"Crashed"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences
export DISPLAY=:0.0
# disable the screensaver and display power management so the screen stays on
xset s 0 0
xset s noblank
xset s noexpose
xset dpms 0 0 0
# sudo apt-get install unclutter
# hides mouse cursor
unclutter &
chromium-browser --kiosk

Note that if you are running from a Windows machine you should still be able to do reverse port forwarding using ssh, you just need to pick a suitable ssh client (recent versions of Windows 10 ship with OpenSSH).

Installing self signed certs automatically

As part of a new project to do automated testing I’ve been investigating using Selenium Grid, driving it with Cucumber and NodeJS. I’m not 100% convinced I’ll go for the full grid thing, but it’s worth a go. We’ve been using a fairly pared-down WebDriver setup until now, so this is a chance to test an alternative.

As part of this I’m spinning up what is essentially a fully functional website, with SSL and everything. It’s just on a dev box where we don’t want to have real certificates, so these are self signed.

If you look about you’ll find that the general advice is to turn off certificate verification, and doing that is fairly trivial. I would rather have the browser validate the certificate: if we then do have problems with certificates, that’s likely to indicate a real problem that needs fixing. If we turn verification off completely, we won’t spot misconfigurations, or problems with third party integrations.

Installing a self signed certificate isn’t too difficult. In fact the excellent tool mkcert by Filippo Valsorda has made the generation and installation of them trivial. If you’re ever unsure how to install certificates his code is an excellent reference for quite a few platforms. It has code for various browsers, and also tools like curl.

The rub with automated testing is that you want to start from scratch each time, in this case with a blank container, and install the certificate into that. It appears that certificates get installed into a profile, and profiles aren’t created until the browser has been run for the first time. This means you need to do a bit of a dance in order to install the certificate.

With Google Chrome I’ve found this to be a reasonable method for getting the certificate installed:

FROM selenium/node-chrome-debug

USER root
RUN apt-get update && apt-get install -y libnss3-tools
USER seluser
# --repl makes Chrome start and exit straight away, creating the profile;
# the URL just needs to be something reachable
RUN google-chrome --headless --disable-gpu --first-run \
    --profile-directory=Default --repl https://example.com
COPY my-core-trust.crt /usr/local/share/ca-certificates/mine/
RUN certutil -d sql:/home/seluser/.pki/nssdb -A -t TC \
    -n "My Certificate" \
    -i /usr/local/share/ca-certificates/mine/my-core-trust.crt

Starting with the Selenium Chrome image, we install libnss3-tools, which provides the certutil tool for installing the certificate.

I then run the browser against a real site in headless mode, and specify the --repl flag, which is intended to let you run things interactively, but in this case essentially means the browser loads up and exits straight away, which is perfect. This triggers the browser to create the profile we need. We can then install the certificate into it using the certutil utility provided by libnss3-tools.

In principle you can do the same thing with Firefox, but a) its profile directories containing the cert db are less predictable, and b) Selenium seems to do something stranger with the profiles. What this means is that so far I haven’t figured out how to get an automated workflow working with Firefox.

Selenium appears to copy the profile folder over from the test machine to the browser machine when using Firefox. This is to keep runs repeatable and avoid issues like sessions still being logged in. I’ve experimented with both forcing the use of a particular profile on the Firefox machine, and placing my pre-built profile on the test box so that it gets copied over. While that works in terms of loading Firefox with a profile with the cert loaded, it then sits there like a lemon doing nothing. My guess is that there’s something special about the profile that Selenium sets up for Firefox to use. I just haven’t had the time to figure out what’s going on fully.

Note that the equivalent trick to using the --repl command line flag for Firefox seems to be to use the --screenshot command line flag which spins it up briefly and then closes it back down.

At this point I figure I have enough to be useful: I can test reliably with Google Chrome, and I figured I may as well share what else I’d worked out in case it’s of use, and so that I can remember where I got to when I want to pick this back up. Firefox is completely usable, just with the insecure certs setting. It would just be neat to be able to install the certificate authority properly into it.

Using systemd DNS with OpenVPN


With the new Ubuntu 18.10, new systemd DNS tools are exposed. The TL;DR for this post is that you can tell systemd to use a particular DNS server for a particular interface. This is convenient when you have different DNS servers provided by DHCP and the one you want isn’t getting used. You can assign the one you want to the other interfaces you’re using too.

resolvectl dns $interface $dns_ip
# for example: resolvectl dns wlp3s0
# ( is a documentation-range placeholder; use your DNS server's IP)

The long story is I use a VPN to connect to work, and that provides a DNS server that has a superset of regular DNS, including some effectively internal DNS entries. I only send internal traffic down that VPN, and regular internet traffic goes out the normal way.

My previous local Network Manager configuration for the OpenVPN connection had these extra domains mentioned in the dns-search parameter. This meant that queries meant for those domains were sent to the correct interface.

With 18.04 this worked reasonably effectively. With 18.10 I found I couldn’t resolve any of the internal domains.

Looking at the current Network Manager configuration page I’m not entirely clear why that shouldn’t work any more. Initially I thought the parameter name had changed, but I now suspect that might be a typo in the wiki. Certainly trying to set dns-searches, as mentioned in the systemd section of the documentation, reports that it’s not a valid parameter on my system.

Investigating further, it turns out that I could talk to the IP addresses, and I could even see that the DNS server was configured, simply by running nmcli on its own. Running it with no parameters displays all the current network configuration, and at the bottom lists all the DNS configuration.

A straight up nslookup using the right server worked. Having realised that, I realised it was time to look at how systemd was working.

It turns out that it too could be proven to be able to use the DNS server. Using tun0, the VPN’s interface, I could ask it to resolve an address and that worked. Leaving off the interface specifier, or using the main internet connection’s interface, failed.

# ask it to resolve a name via the VPN interface only (hostname is a placeholder)
systemd-resolve -i tun0 internal.example.com

In my situation I realised that since I’m only connecting to a single VPN, and since our DNS server provides answers for all domains, not just the internal ones, I could effectively switch everything to using that DNS server when I’m connected to the VPN.

The trick then was to find out how to ask it to do that. This is where resolvectl comes in. This appears to be new from the point of view of 18.10, and allows us to specify the DNS to use for a particular interface.

If you run it on its own it provides a listing of the resolver configuration. With the ‘dns’ command it sets the DNS server to use for a particular interface.

resolvectl dns $interface $dns_ip
# for example: resolvectl dns wlp3s0
# ( is a documentation-range placeholder; use your DNS server's IP)

Now, since I use a shell script to connect to the VPN, I just added an extra line to switch the DNS server once it’s connected. It’s not perfect, but it gets things working. Hopefully I’ll find a better way of managing this, so that I can either direct the correct queries to the correct server, or ensure that I use a particular server when I connect to a particular connection, but for now this gets me working again.
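The connect script does something along these lines. The connection name and DNS address are made-up examples, and the real commands are stubbed out with shell functions here so the flow can be shown (and run) safely:

```shell
# stubs standing in for the real tools; delete these to run it for real
nmcli()      { echo "nmcli $*"; }
resolvectl() { echo "resolvectl $*"; }

connect_vpn() {
    nmcli connection up work-vpn      # bring the VPN up (made-up connection name)
    resolvectl dns tun0    # then point lookups at the VPN's DNS (placeholder IP)
}

connect_vpn
```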

In theory, according to the Network Manager page, I should be able to set the priority of the DNS connection for the VPN, although it actually looks like it ought to give VPN connections priority already. Playing with the settings in real time doesn’t really seem to make a difference though.

$ nmcli c show "tun0" | grep ipv4 | grep dns

ipv4.dns: --
ipv4.dns-search: --
ipv4.dns-options: ""
ipv4.dns-priority: 100
ipv4.ignore-auto-dns: no

$ nmcli c modify tun0 ipv4.dns-priority -1

Now there is one thing that I perhaps should take into account: I’m using an Ubuntu variant, Xubuntu, rather than plain Ubuntu, so it might be something odd with that. The differences between the distributions are generally relatively cosmetic though, so I’d be surprised if the problem lay there.

Tweaking dumb-init

I’ve been having a look at Envoy, only I’ve been looking at it in docker-compose rather than Kubernetes, so the usual sidecar style of deployment possible there wasn’t an option. Simply starting one program in the background and the other in the foreground ‘works’, but not ideally: if the background process dies, the other one carries on oblivious, as if nothing is wrong.

Putting a full init system in a docker container is generally missing the point of containers. Then it occurred to me that what I really want is a dumb-init that spins up 2 processes instead of just the one. So I adapted it and did just that. By separating the commands with a semicolon you can otherwise use dumb-init in the usual way.

Note that I’ve done nothing special about stdout/err, or indeed anything much, so things may not behave perfectly for your situation, but so far in my testing everything has worked better than I expected.

With this fork you start 2 processes like this,

$ ./dumb-init echo hello \; echo world

And if you try this,

$ ./dumb-init sleep 1000 \; echo test

You’ll see that it exits as soon as the echo completes, as it closes down if either process ends.
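The ‘exit when either process ends’ behaviour can be sketched in plain bash. This is a toy illustration of the idea, not what the dumb-init fork actually does internally (dumb-init also handles signal forwarding and reaping), and wait -n needs bash 4.3+:

```shell
# start two children: one short-lived, one long-lived
(sleep 0.2; echo "short task finished") &
short=$!
sleep 30 &
long=$!

wait -n                      # returns as soon as whichever child exits first
kill "$long" 2>/dev/null     # tear the survivor down, like the tweaked dumb-init
result="one process exited; shutting down"
echo "$result"
```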

I’m not sure I really plan on doing anything serious with this fork, and I’m not sure there aren’t better alternatives that I’ve missed. I haven’t tested it much either; it looks like it does what I want, but I haven’t done any scientific testing.

Setting up a new CPAN Mini Mirror

It’s time for me to set up a new dev laptop and I wanted to set up a CPAN mirror quickly. Using a mirror is something I’ve done for a long time now; it’s great for coding without internet. Unless you’re doing lots of module installs it’s likely to use more bandwidth than downloading on an ad hoc basis, but the tradeoff is that I can install almost any module I realise I need, regardless of whether I have connectivity. The only limitations will be down to external dependencies on things like operating system packages/tools.

I have previously set up mirrors using lxc and salt to provision the machine, but this time I decided to convert that setup to Docker. I also simplified the setup to just the mirror, as I no longer need to inject modules into the CPAN server. In truth I never really did on my laptops; I simply did that because it was useful for work at the time. I created the salt configuration to make it easier to re-provision new servers for a work setup with a private mirror that also carried work modules, allowing for a full CPAN-style deployment process of both public and private code.

To do this I’ve used docker-compose, which is almost always the best option for a laptop setup. Even with a single machine it generally makes life simpler, as you can encode all the configuration in one file and end up with a few simple, consistent commands to build/setup/run your containers. I’ve also set the compose file to the highest version number currently available, partly just to see what’s available, and partly because I want to make use of some of the newer features of volumes. It doesn’t appear that I can express everything I want perfectly in the docker-compose file, but I suspect they aren’t really targeting my situation. Docker is partly so popular because it works really well on developer laptops: cached file system layers and lightweight machines work really well in a constrained environment. Fundamentally though, Docker is aiming for servers, so they’re trying to deal with issues of sharing resources across multiple machines rather than working on a single machine.
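For flavour, the shape of the compose file is roughly this. This is a hand-written sketch with made-up image and volume names, not the actual configuration from the repository:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine              # serves the mirrored files over plain http
    volumes:
      - cpan-data:/usr/share/nginx/html:ro
  updater:
    image: perl:5.30                 # run on demand to refresh the mirror
    command: sh -c "cpanm CPAN::Mini && minicpan -l /mirror -r"
    volumes:
      - cpan-data:/mirror
volumes:
  cpan-data: {}                      # module files live outside the containers
```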

What I really wanted was to be able to specify the exact details of the volume shared between the containers, from its location down to the user IDs. If that’s currently fully possible I haven’t seen a way to do it.

This setup is not designed for the general internet or with security in mind (not that a simple mirror should have much in the way of threats). I don’t even expose the ports, I just print out the URL for use when running cpanm. I also update the server manually, rather than setting up a cron job, as I don’t want to use too much bandwidth on this. I don’t use the mirror that often, but when I do it’s really valuable, even if it doesn’t have the latest and greatest versions of every module.

Having all the modules locally can also be great when you want to do some analysis of what existing modules are doing. It’s reasonably easy to write scripts to say find all the XS modules and then extract their C code to see which call a particular function.

The configuration is on GitHub. The modules downloaded are kept in a volume outside the containers, so updating/removing them should be easy. In theory it should even be possible to wrap the setup around an existing mirror if you already have the files.

An Operations guide to Catalyst configuration in Docker

We use the Perl Catalyst web framework at $work.  A lot.  It’s got most of the stuff you want for web sites and services, and it’s pretty solid and it’s lovely and stable.

As with most systems it has a well established method of configuration and it allows you to use configuration files.  These are handy for all sorts of reasons, but in a docker environment most of the file system generally wants to be essentially static.  If you need to change config files on every deployment of your container you’re probably going to need to do something ugly.  Docker is much happier with allowing you to push in settings via environment variables instead.  This fits much more neatly into the 12 factor app methodology that using docker itself fits neatly into.

Of course Catalyst has a solution for that which doesn’t require wholesale change: you can just load the Catalyst::Plugin::ConfigLoader::Environment plugin into your application.  That allows you to override parts of your configuration via environment variables.

Perl modules generally have names like My::App, usually (but not always) with capitalized names and :: between words.  The corresponding Catalyst configuration file for that application would be named my_app.conf: simply a lowercase version of the app name, with :: replaced by _.
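The transformation is easy enough to play with in the shell (My::App is just a stand-in name):

```shell
app="My::App"
# lowercase the name and turn :: into _ to get the config file's base name
conf=$(echo "$app" | tr '[:upper:]' '[:lower:]' | sed 's/::/_/g')
echo "${conf}.conf"     # my_app.conf
```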

The environment variables you override the config with need to start with an all-uppercase version of the app name, again with :: replaced by _, so they would start MY_APP.  Then you put another _ after that, and then you specify the configuration section you want to override.

Let’s take a look at an example config,

using_frontend_proxy 1

<Model::Cabbages>
    connect_info dbi:Pg:host=db;dbname=patch
    connect_info someuser
    connect_info averysecurepassword
    <connect_info>
        quote_char "\""
        name_sep .
        pg_enable_utf8 1
    </connect_info>
</Model::Cabbages>

Note that Catalyst config files can generally be in lots of different formats.  This one is in an Apache-like style, but it could also be yaml or various other formats.  The configuration loader Catalyst apps commonly use is very flexible.

If we want to change the using_frontend_proxy setting we can set the environment variable MY_APP_using_frontend_proxy to 0.
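The name derivation mirrors the description above, and can be sketched in shell (My::App is again a stand-in name):

```shell
app="My::App"
# uppercase the app name and turn :: into _ to get the variable prefix
prefix=$(echo "$app" | tr '[:lower:]' '[:upper:]' | sed 's/::/_/g')
echo "$prefix"                                   # MY_APP
# then append _ and the setting you want to override
export "${prefix}_using_frontend_proxy=0"
env | grep "^${prefix}_using_frontend_proxy"     # MY_APP_using_frontend_proxy=0
```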

To set more complex data structures we can set a json value.  This will get merged into the config, so if something exists in the hash/dictionary but isn’t overwritten then it will generally be left alone.
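The merge behaviour can be illustrated with a toy deep-merge in Python. This imitates the effect, it is not Catalyst’s actual merge code, and the names and password are made up:

```python
# toy deep-merge: dictionaries are merged key by key, anything else is replaced
def merge(base, over):
    for key, value in over.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merge(base[key], value)
        else:
            base[key] = value
    return base

config = {
    "Model::Cabbages": {"connect_info": ["dsn", "someuser", "oldpassword"]},
    "using_frontend_proxy": "1",
}

# override the whole connect_info array, as you would via the env variable
merge(config, {"Model::Cabbages": {"connect_info": ["dsn", "someuser", "newpassword"]}})

print(config["Model::Cabbages"]["connect_info"][2])  # newpassword
print(config["using_frontend_proxy"])                # 1, untouched keys survive
```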

The configuration file above roughly translates to a dictionary looking like this in code (expressed in json),

{
    "Model::Cabbages": {
        "connect_info": [
            "dbi:Pg:host=db;dbname=patch",
            "someuser",
            "averysecurepassword",
            {
                "name_sep": ".",
                "pg_enable_utf8": "1",
                "quote_char": "\""
            }
        ]
    },
    "using_frontend_proxy": "1"
}

Note how the repeated connect_info lines were translated into an array, with the nested block becoming a dictionary.  This may seem strange, but the connect_info block is a very common structure in the Perl world, as most if not all of it will be passed straight on to DBI, the workhorse doing most database work.

So to change the password in the database configuration we set something like this (the new password here is just a placeholder):

MY_APP_Model__Cabbages='{"connect_info":["dbi:Pg:host=db;dbname=patch","someuser","anewpassword",{"name_sep":".","pg_enable_utf8":"1","quote_char":"\""}]}'

Since connect_info is an array of configuration settings we end up having to specify the whole lot, otherwise we lose some of the configuration.  Also note that we turned the :: into __.  That is generally a good thing when working on the command line, as putting :: into environment variables via bash is tricky (if possible at all).

The one oddity is if you have a configuration key with more than one set of double colons, e.g. Model::Payment::Client.  Then you just need to live with the :: in the variable name, otherwise your override will be interpreted in an odd fashion.

Perl and docker-compose configurations can set environment variables containing :: easily though, so this generally isn’t a big deal.  env and set can confuse matters however, as one will show environment variables with :: in the name and the other won’t.


To summarise:

1. Take the configuration filename, drop the extension, and upper case it.

2. Add an underscore.

3. Take the first section or key and append that to the environment variable. That’s the variable that will target that config section.

4. Set its value to either a string (if it’s a simple string value), or a json structure that matches the configuration’s effective structure.

Now you discover whether someone’s done something confusing and eschewed convention. Hopefully not.

There is something missing here, and it’s how to test changes, and generate the correct json easily. I have the tooling to make it reasonably simple, I just need to open source it.

PAUSE permissions code outline

This is a quick brain dump from my brief coding session on PAUSE working on the permissions. This won’t make sense without reading the code yourself. Unless you’re interested in modifying PAUSE or reviewing my changes I would ignore this blog post. I’m writing this for my future self as much as anyone. The behaviour I modified in my pull request is in italics.

The code for setting the permissions when a new distribution is indexed is largely in PAUSE::package. It’s triggered by ./cron/mldistwatch and PAUSE::mldistwatch, but the code that actually sets the permissions is in PAUSE::package.

perm_check checks that the current user has permission to be uploading this distro and bails if not. If this is a new package (and they do have permission) then it also adds the user to the perms table as they are effectively first come.

give_regdowner_perms adds people to the perms table where necessary. This was supporting the module permissions amongst other things. It is now also where we copy the permissions from the main module.

The checkin method, as well as adding the record to the packages table, also adds to the primeur table via the checkin_into_primeur function. If the x_authority flag is used then that user is added; otherwise the main module is looked up, or failing that, the uploading user is used.

Note that the first come (primeur) users appear in both the perms and primeur tables.

There are special checks for Perl distributions in the permissions code that will change the behaviour of the permissions. I am purposely not mentioning them as I never took the time to understand them.

A quick note on testing. As well as the tests, which work off the dummy distributions in the corpus directory, there is a test utility. To get a look at the permissions database in practice use the build-fresh-cpan script. This will build a one-off cpan environment that you can examine. Just pass it a package to index and then you can check the permissions database.

$ perl one-off-utils/build-fresh-cpan corpus/mld/009/authors/O/OO/OOOPPP/Jenkins-Hack-0.14.tar.gz
Initialized empty Git repository in /tmp/h4L3GU6Awi/git/.git/
$ ls
cpan  db  git  Maildir  pause.log  run
$ cd db
$ ls
authen.sqlite  mod.sqlite
$ sqlite3 mod.sqlite 
SQLite version 2014-10-29 13:59:56
Enter ".help" for usage hints.
sqlite> select * from primeur;
sqlite> select * from perms;

Docker logging and buffering

When you start using Docker, one of the things it’s quite possible you’ve hit is output buffering. Docker encourages you to log to stdout/err from your program, and then use docker to feed your regular logging solution.

This generally exhibits itself as you seeing no logging, then after the program has been running for a while you come back and find the logs are all there. It doesn’t get as much press as caching, but it can be just as head-scratching.

With Perl for example the buffering kicks in when it’s not connected to a terminal. For that this somewhat idiomatic snippet can be useful,

# turn on autoflush for STDERR and STDOUT so log lines appear immediately
select( ( select(\*STDERR), $|=1 )[0] );
select( ( select(\*STDOUT), $|=1 )[0] );

As ever with Perl there is more than one way to do it, of course; IO::Handle’s autoflush method is arguably the more readable option these days.

Note that you could experience this sort of thing with other logging setups too. If you pipe output to logger to get it into rsyslog, you could see the same issues.

Debugging Web API traffic

Note that this blog post mostly assumes you’re operating on Linux. If you’re using Windows just use Fiddler; it probably does everything you need. Actually, having just looked at their page, it looks like it may well work in lots of places other than Windows too, so it might be a good option regardless.

When developing programs that consume APIs that use HTTP at some level, it’s often useful to check what is actually going over the wire. Note that this is talking about unencrypted traffic. If you’re talking to a server over HTTPS you will need to MITM your connection or capture the keys for the session using the SSLKEYLOGFILE environment variable.

The simplest way to capture traffic is generally to use tcpdump. Wireshark is often a good tool for looking at network traces, but for lots of HTTP requests it tends to feel clunky. This is where I turn to a Python utility named pcap2har, which converts a packet capture to a HAR file. A HAR file is essentially a json file containing the HTTP requests: an array of requests, each noting the headers and content of the request and response. The HAR file format is documented here.

You will find that Google Chrome also allows you to export a set of requests as a HAR file, from the Network tab of its Developer Tools.

The pcap2har utility isn’t packaged in a particularly pythonic way, and it doesn’t actually extract the request body, so I created a minor tweak to it on a branch. This branch does extract the request body, which is often the interesting part of an API call. You’ll need to pip install dpkt; the rest of the dependencies are bundled in the repo. Then you run it like this,

git clone --branch request_body
cd pcap2har
sudo pip install dpkt
tcpdump port 8069 -w packets.dump
pcap2har packets.dump traffic.har

Having said that HAR is a lot easier to consume, there are viewers, but I’ve not found one I particularly like. I tend to either pretty print the json and look at it in a text editor, or use grep or code to extract the information I want.
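As a tiny worked example of pulling the request URLs out with a few lines of code (the HAR here is hand-made and minimal, standing in for real pcap2har output, and python3 is assumed to be available):

```shell
# a minimal hand-made HAR file standing in for real pcap2har output
cat > traffic.har <<'EOF'
{"log": {"entries": [
  {"request": {"method": "GET", "url": "http://example.com/api"},
   "response": {"status": 200}}
]}}
EOF

# list every request method and URL in the capture
python3 -c 'import json
for entry in json.load(open("traffic.har"))["log"]["entries"]:
    print(entry["request"]["method"], entry["request"]["url"])'
```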

For OpenERP/Odoo API calls I created a quick script to explode the contents of the API calls out to make it easier to read. It explodes the xml/json contents within the requests/responses out to json at the same level as the rest of the HAR data, rather than having encoded data encoded within json.

Perl QA Hackathon report (#perlqah2016)

Thank you to all the people who sponsored the Perl QAH hackathon and all those that provided their time. It was a very productive environment for a lot of projects.

I worked on 2 primary things relating to PAUSE permissions. PAUSE (indexing) itself and a new module for testing that permissions are consistent named Test::PAUSE::ConsistentPermissions.

The permissions of distributions on CPAN are generally something you don’t think about until you become a co-maintainer of a distribution and you see red permission warnings on the CPAN search sites. Then you discover that there is a system by which each module in your distribution is effectively owned by somebody, and that some others might also have been granted permission to upload a file.

Let’s take Test::DBIx::Class as an example. I’m a co-maintainer, and JJNAPIORK is the owner. There is also another Co-Maintainer.

Using the script from Test::PAUSE::ConsistentPermissions I can look at the permissions currently assigned.

$ pause-check-distro-perms Test-DBIx-Class Test::DBIx::Class
Distribution: Test-DBIx-Class
Module: Test::DBIx::Class

When I upload a new release of Test::DBIx::Class to PAUSE any new modules added will be given to me. So I will be owner and no-one will be Comaint. I then need to grant PHAYLON comaint (which I can do as owner), and then pass ownership back to JJNAPIORK.

There is an alternative mechanism involving a bit of metadata named ‘x_authority’. With that we could ensure that JJNAPIORK retains ownership of all the modules within the distribution. The downside of that is that while I would also gain comaint on those new modules as the uploader, PHAYLON would not. Since I wasn’t the owner either, I wouldn’t be able to assign him Comaint, and I would have to ask JJNAPIORK to do that instead.

I believe there were historically alternative methods for managing this, but PAUSE has been through some rationalisation and simplification of some features and those features don’t exist anymore.

I came to this hackathon wanting to work on this problem having encountered Karen Etheridge (ETHER)’s prototype Dist::Zilla::Plugin::AuthorityFromModule which suggested a potential solution.

We had a meeting with the interested parties at the hackathon about how we could deal with this scenario better. Ricardo Signes (RJBS) suggested that we could make use of the fact that there is a designated ‘main module’ for permissions, and use that for the defaults. This is one step better than the previously suggested solution, as it builds on the earlier rationalisation of PAUSE permissions and won’t require authors to provide extra metadata. This should mean that permissions are much more likely to just work out of the box.

With that potential solution suggested RJBS gave me some assistance to get working on the PAUSE indexer. I started with adding tests, then wrote the code.

The change made so far is very minimal, only affecting the indexing of new packages. No changes have been made to the user interface, or to the complex permissions you can currently end up with. The pull request is here. Note that it also benefits from Peter Sergeant (Perl Careers)’s work to hook the PAUSE tests into Travis, giving it a green tick to indicate the tests passed.

The other thing I worked on was Test::PAUSE::ConsistentPermissions, to allow us to check that a distribution has consistent permissions. I created a script for checking a distro on PAUSE (not too dissimilar to Neil Bowers’ App::PAUSE::CheckPerms module) and a test function for use in a distro’s release tests. This is a bit like Test::PAUSE::Permissions, but rather than check whether you have permission to upload the distribution, it checks whether the permissions are consistent. These two properties don’t necessarily coincide.

During the event I was also able to create a couple of minor pull requests.

Here’s the obligatory thank you to the full list of sponsors. Thank you all.

The sponsors for the Perl QA Hackathon 2016,