Nov 17, 2020 - Python CLI script template


Python CLI script template

Usually my little Python scripts start with simply using sys.argv and print(). But after refactoring they always end up with a structure like this, using the argparse and logging libraries. It’s about time to put this into some kind of copy/pasteable template:

import argparse
import logging

DESC = '''
This command line tool does something with a file.
'''

parser = argparse.ArgumentParser(description=DESC)
parser.add_argument("file", help="Some file to process")
parser.add_argument("output", nargs="?", help="Optional output file")
                             #nargs="+" for one or more positional arguments
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="Verbosity (-v, -vv, etc)")
parser.add_argument("-p", "--param", help="Some additional parameter")
parser.add_argument('--another', default="something", help="Optional parameter "
                                                           "with default value")
parser.add_argument('--flag', action="store_true", default=False,
                    help='Optional flag')

args = parser.parse_args()
loglevel = 30 - (args.verbose * 10)
logging.basicConfig(level=loglevel, format='%(levelname)s: %(message)s')

print(f"Do something with {args.file}")

if args.output:
  print(f"Write to {args.output}")

if args.param:
  print(f"With additional parameter {args.param}")

print(f"The other parameter is {args.another}")

if args.flag:
  print("The flag was set too.")

logging.debug("This is a debug message")
logging.info("This is an info message")
logging.warning("This is a warning message")
logging.error("This is an error message")

Paste this code into a file and have a play with it, especially with the different log levels using -v, -vv, etc. You’ll also notice that you automatically get a nice help message (-h) from argparse for free :-)
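For reference, here is how the -v count from the template maps onto the numeric logging levels (a small standalone sketch):

```python
import logging

# 0 -> WARNING (30), -v -> INFO (20), -vv -> DEBUG (10)
for verbose in range(3):
    loglevel = 30 - (verbose * 10)
    print("-" + "v" * verbose if verbose else "(none)",
          loglevel, logging.getLevelName(loglevel))
```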

Oct 8, 2020 - Communication between Arduino Nano and Raspberry Pi via 2.4 GHz radio


Communication between Arduino Nano and Raspberry Pi via 2.4 GHz radio

Using the nRF24L01 chip, with the RF24 library on both ends.

Check that the pins are correctly connected!

For this example, on the Arduino CE -> D9 and CSN -> D10, hence RF24 is initialized with RF24 radio(10,9);. And on the Raspberry Pi CSN -> 25 (it looks like CE doesn’t matter), hence the radio is initialized with radio = RF24(25,0). The 0 comes from the SPI device name. Check with ls /dev | grep spi, e.g. spidev0.0 = 0, spidev0.1 = 1, etc.

I’m still not quite sure whether the documentation is wrong about the pins or I just don’t get it.

Anyway, with the above mentioned setup and the following code I finally got it working! The example is very simple and just sends a timestamp from the Arduino to the Raspberry Pi. But I hope you can use it for testing or as a template to build on.

Arduino

#include <SPI.h>
#include "nRF24L01.h"
#include "RF24.h"
#include "printf.h"

// CE -> D9, CSN -> D10 (see note on pins above)
RF24 radio(10,9);

const uint64_t rx_address = 0xF0F0F0F0E1LL;
const uint64_t tx_address = 0xF0F0F0F0D2LL;

void setup() {
  Serial.begin(115200);
  radio.begin();
  radio.setRetries(5, 15);
  radio.openWritingPipe(tx_address);
  radio.openReadingPipe(1, rx_address);
  radio.stopListening();  // this node only transmits
  Serial.println(F("Setup done."));
}

void loop() {
  Serial.print(F("Now sending: "));
  unsigned long start_time = micros();
  Serial.print(start_time);
  Serial.print(F(" ... "));
  if (!radio.write( &start_time, sizeof(unsigned long) )){
    Serial.println(F("failed."));
  } else {
    Serial.println(F("ok."));
  }
  delay(1000);
}

Raspberry Pi

from __future__ import print_function
import time
from RF24 import *
import RPi.GPIO as GPIO

tx_address = 0xF0F0F0F0E1
rx_address = 0xF0F0F0F0D2

# CSN -> 25, 0 = SPI device spidev0.0
radio = RF24(25, 0)
radio.begin()
radio.setRetries(5, 15)
radio.openReadingPipe(1, rx_address)
radio.startListening()

while True:
    if radio.available():
        # The Arduino sends an unsigned long: 4 bytes, little-endian
        receive_payload = radio.read(4)
        print("Received: {}".format(int.from_bytes(receive_payload,
              byteorder='little', signed=False)))
    time.sleep(0.5)
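The payload on the wire is just the Arduino’s unsigned long: 4 bytes, little-endian. How such a payload decodes on the Pi side can be sketched without any radio hardware (the value here is made up):

```python
import struct

# Simulate what the Arduino puts on the air: an unsigned long,
# 4 bytes, little-endian ('<L' in struct notation)
payload = struct.pack('<L', 123456789)

# Two equivalent ways to decode it:
print(struct.unpack('<L', payload)[0])                            # 123456789
print(int.from_bytes(payload, byteorder='little', signed=False))  # 123456789
```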

Aug 21, 2020 - More Unix tools


More Unix tools

Haven’t posted anything in ages, just too busy with various things… Anyway, here are a few usage examples of some more very useful standard Unix tools.

awk

Often you have a lot of useful information encoded in file and directory names. In the past I used cut, tr, etc. to extract this information. This can sometimes get quite awkward. I knew there’s a tool called awk, but I never really bothered to use it. Until recently :-) It’s actually quite easy and very useful. Here’s an example:

Imagine you organized your holiday photos like this (the years and locations are just examples):

/mnt/photos/2019/Italy/IMG_0001.jpg
/mnt/photos/2019/Italy/IMG_0002.jpg
/mnt/photos/2020/Norway/IMG_0042.jpg

Now lets say you want to create a CSV file with an inventory of your photos:

cd /mnt/photos
echo "Year,Location,Filename" > ~/my_photos.csv
find * -type f | awk 'BEGIN { FS = "/" } ; {print $1","$2","$3}' >> ~/my_photos.csv

Note: This use case could be handled in various, probably simpler, ways; however this hopefully demonstrates how awk works. As you can guess, awk is also quite handy for extracting information from CSV files.
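For comparison, the same Year/Location/Filename split can also be sketched in Python (the paths are made-up examples of what find would emit relative to /mnt/photos):

```python
from pathlib import Path

# Example relative paths, as `find *` would print them
paths = [
    "2019/Italy/IMG_0001.jpg",
    "2020/Norway/IMG_0042.jpg",
]

rows = ["Year,Location,Filename"]
for p in paths:
    # Path.parts splits on "/" like awk with FS = "/"
    year, location, filename = Path(p).parts
    rows.append(",".join([year, location, filename]))

print("\n".join(rows))
```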

find

I used it already in the previous section, but find is also useful to process files serially. For example, calculate a checksum for all zip files:

find * -iname "*.zip" -exec sha1sum {} \; >> ~/checksums.sha1

Note: {} is substituted for every file find finds. The command which is executed has to be terminated with \;.
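The same serial hashing can be sketched in Python with only the standard library (the file name and contents below are made up for the demo):

```python
import hashlib
import tempfile
from pathlib import Path

def sha1sum(path):
    # Equivalent of `sha1sum <file>` for a single file
    return hashlib.sha1(Path(path).read_bytes()).hexdigest()

# Demo with a throwaway file (its contents are not a real zip)
with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "example.zip"
    f.write_bytes(b"abc")
    for p in sorted(Path(tmp).rglob("*.zip")):
        print("{}  {}".format(sha1sum(p), p.name))
```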

parallel

Not really a standard tool, but very useful if you want to make the most of your multicore CPU! See GNU Parallel.

Example: We’ll do the same, calculate the checksums of all zip files:

# First get all absolute paths to the zip files and put them in a file:
find * -iname "*.zip" -exec readlink -f {} \; >> ~/zip_files.txt

# Process them in 5 parallel threads:
parallel -a ~/zip_files.txt --eta -j5 --joblog log.txt --delay 2 -k sha1sum {} >> ~/checksums.sha1

Notes: With parallel you don’t need the \;. Please ignore that this example doesn’t make much sense, because sha1sum is so fast you wouldn’t use parallel for it, but it shows a few options which can be very useful in other cases:

--eta gives you some progress information.

-j5 means use 5 parallel jobs.

--joblog saves a log of the jobs to log.txt.

--delay delays the start of each job by 2 seconds. This option can be very important: if you kick off a process which right at the start hits a database (or another limited resource) very hard, you don’t want 5, 10, or however many jobs to do that at exactly the same time.

-k ensures that the output from each job is written in the same order as the input. Without this option the output order would be the order in which the jobs finish, which could be pretty random. Often you want to preserve the order so you can easily match up input and output files.
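Roughly the same fan-out pattern (a fixed number of workers, output kept in input order like -k) can be sketched in Python with concurrent.futures:

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def sha1sum_bytes(data):
    # Hash one work item, standing in for one parallel job
    return hashlib.sha1(data).hexdigest()

items = [b"a", b"b", b"c", b"d", b"e"]

# max_workers=5 is the analogue of -j5; pool.map() yields results
# in input order, like parallel's -k option
with ThreadPoolExecutor(max_workers=5) as pool:
    for digest in pool.map(sha1sum_bytes, items):
        print(digest)
```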