A blog about OpenStack clouds, Linux and tech. Sometimes I ramble about things I don't like. I also ramble about things I like, but that is rare.

I gave a short talk at the #12 Open Source Developer Meetup hosted by OpenSource.hk in Hong Kong

The talk was about two easy-to-use tools to improve email privacy



Here are the slides and a link to the event

Link to slides

Link to the event

Looking forward to the next meetup! :)

Posted 07/03/18 13:19 Tags:

For as long as I can remember using Linux I kept as much shell history as possible; I commonly had history files with thousands of lines. One of my sources of pride was this function to selectively choose what to save and what not to:


HISTIGNORE=(mpv mplayer player many many commands zless)
HISTRESTRICTED=(nova neutron glance heat cinder keystone sudo git)
RESTRICTEDWORDS=(delete router-delete net-delete subnet-delete image-delete console-log show \
port-show net-show stack-create stack-delete image-list stack-show help apt-get aptitude rm mv $HISTIGNORE show)

zshaddhistory () {
    # do not store command-not-found commands
    { whence ${${(z)1}[1]} >| /dev/null || return 1 }

    # split the command line into words
    local -a line
    line=(${(z)1})

    # here I filter out dangerous things
    if [[ $line[1] == "git" && $line[2] == "push" && $line[3] == "--force" ]]; then
        return 1
    fi

    # here, the magic of what to save and what not to
    if [[ ${HISTIGNORE[(i)$line[1]]} -le ${#HISTIGNORE} ]]; then
        return 2
    fi
    if [[ ${HISTRESTRICTED[(i)$line[1]]} -le ${#HISTRESTRICTED} ]]; then
        if [[ ${RESTRICTEDWORDS[(i)$line[2]]} -le ${#RESTRICTEDWORDS} ]]; then
            return 2
        fi
    fi
}

Needless to say, searching backwards (a.k.a. Ctrl+R) for a command was a nightmare: very similar occurrences would pop up, making it easy to make mistakes or forcing me to rewrite part of the command (the very thing I wanted to avoid in the first place!)

A few months ago I started a new job, which basically requires me to code and run the same 10 commands all day long. The history setup I had was unnecessary (very few commands) and exposed me to risks (one bad backwards search and I'd be running fabric against the wrong environment).

So I trimmed down the history settings to this

zshaddhistory() {
    # do not store command-not-found commands
    { whence ${${(z)1}[1]} >| /dev/null || return 1 }

    return 2
}

This will keep up to 5000 lines of history local to that shell, but it won't save any of them to the $HISTFILE
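For reference, that 5000 presumably comes from HISTSIZE; a minimal sketch of the relevant ~/.zshrc bits (the history file path is just an example):

HISTSIZE=5000               # lines of history kept in memory, per shell
HISTFILE=~/.zsh_history     # not touched by these shells, since zshaddhistory returns 2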

My usual workflow is to open a terminal for the lifetime of a particular change/deploy/whatever, and when I'm done I close that terminal, losing all history and environment variables, which is awesome (no chance of hitting the wrong cloud provider region, account, virtualenv, etc.)

My $HISTFILE now has 12 lines with the commands I use the most and that are long to write, with safe defaults (nonexistent hostnames, noop=1, etc.)

Now I feel liberated, as I stopped carrying a backpack full of stones

Posted 10/02/18 03:28 Tags:

I have read many blogs that at some point state that a post exists to document something for future use.
This post is like that: it is something I did very recently, and now I'm writing it down so I know how to do it again in the future :)

I have a central mail server where I keep my email; other systems relay email to that server so I can consolidate my cron mail and delete it altogether.
I nailed this two years ago for my laptop, but I never wrote it down, so when I got new machines they just didn't send out email.
Recently I needed to send email from a machine, so I had to make it work again.

Generate a CA for this purpose

$ easy-rsa smtps-ca
$ cd smtps-ca
$ vi vars
$ source vars
$ ./clean-all
$ ./build-ca
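A quick sanity check that the CA came out as expected (keys/ being easy-rsa's default output directory):

$ openssl x509 -in keys/ca.crt -noout -subject -dates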

Create keys for all the parties involved

$ ./build-key-server mailserver
$ ./build-key-server client1
$ ./build-key-server clientN

Copy the keys to each client machine

$ cd keys
$ cat client1.crt client1.key > client1.pem
$ scp client1.pem mailserver.crt client1:/etc/nullmailer/

Add the key fingerprint to the list of certificates we allow to relay email through us (this is done on the mail server)

# openssl x509 -in client1.crt -fingerprint -sha1 -noout | awk -F = '{print $2 " client1" }' >> /etc/postfix/tls/relay_clientcerts
# postmap /etc/postfix/tls/relay_clientcerts

My master.cf config (partial), on my smarthost

$ cat /etc/postfix/master.cf

submission inet n       -       n       -       -       smtpd
  -o content_filter=
  -o syslog_name=postfix/submission
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,permit_tls_clientcerts,reject
  -o smtpd_tls_req_ccert=no
  -o smtpd_tls_ask_ccert=yes
  -o smtpd_tls_auth_only=yes
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_cert_file=/etc/postfix/tls/mailserver.crt
  -o smtpd_tls_key_file=/etc/postfix/tls/mailserver.key
  -o smtpd_tls_fingerprint_digest=sha1
  -o relay_clientcerts=hash:/etc/postfix/tls/relay_clientcerts
  -o smtpd_relay_restrictions=permit_sasl_authenticated,permit_tls_clientcerts,reject
  -o smtpd_tls_CAfile=/etc/postfix/tls/keys/ca.crt
  -o smtpd_sender_restrictions=$submission_sender_restrictions
  -o smtpd_client_restrictions=
  -o smtpd_helo_restrictions=
  -o smtpd_data_restrictions=
  -o smtpd_milters=inet:
  -o non_smtpd_milters=inet:
  -o milter_default_action=accept
  -o message_size_limit=211113302
  -o cleanup_service_name=subcleanup


Configure nullmailer on the clients; the cool thing is that nullmailer will validate the smarthost as well

$ ssh client1
# cd /etc/nullmailer
# cat remotes
mailserver smtp --port=587 --starttls --x509certfile=/etc/nullmailer/client1.pem  --x509cafile=/etc/nullmailer/mailserver.crt
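To check the client certificate handshake end to end, something like this run from a client should show the mail server accepting the cert (587 being the submission port from master.cf above):

$ openssl s_client -connect mailserver:587 -starttls smtp \
    -cert /etc/nullmailer/client1.pem -CAfile /etc/nullmailer/mailserver.crt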

Things to be improved? A few, probably. For example, I think relay_clientcerts is redundant; Postfix should trust all certs issued by this CA, after all I created it for that very reason... but it doesn't bother me much yet, maybe next time I add a machine I'll fix it.
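If I ever do, the fix is probably just swapping permit_tls_clientcerts for permit_tls_all_clientcerts (which trusts any certificate that verifies against smtpd_tls_CAfile); a rough, untested sketch, assuming a Postfix new enough for postconf -P:

# postconf -P "submission/inet/smtpd_recipient_restrictions=permit_sasl_authenticated,permit_tls_all_clientcerts,reject"
# postconf -P "submission/inet/smtpd_relay_restrictions=permit_sasl_authenticated,permit_tls_all_clientcerts,reject"
# postfix reload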

EDIT: make the post more clear

Posted 09/02/17 05:39 Tags:

I bought a bunch of Raspberry Pi 3s for a project (I'll post about it soon). I had one spare, so I ran a kernel compile on it.

/mnt is an old, slow, ugly USB stick I got at a conference, go figure!

$ make -j4 bcm2709_defconfig O=/mnt
$ time make -j4 O=/mnt
H16TOFW firmware/edgeport/boot2.fw
H16TOFW firmware/edgeport/down.fw
H16TOFW firmware/edgeport/down2.fw
IHEX2FW firmware/whiteheat_loader.fw
IHEX2FW firmware/whiteheat.fw
IHEX2FW firmware/keyspan_pda/keyspan_pda.fw
IHEX2FW firmware/keyspan_pda/xircom_pgs.fw
make[1]: Leaving directory '/mnt'

real    110m27.565s
user    372m1.100s
sys     18m31.400s

Not bad, not bad at all! :)

Posted 26/01/17 10:11 Tags:

This isn't a new topic; many people do it already. You can google and see for yourself, but I'm doing something else.

I run a local domain at home (example.casa), and I also run a local domain on my laptop (example.lap) for VMs/containers/LXC.

The problem with unbound is that it will fail to validate example.lap and example.casa, and when I use my laptop at someone else's home it will fail for that local domain too.

The solution is to whitelist the domain example.lap all the time, and to parse the local domain of whatever network I am on and whitelist it too (this domain will change on every network).

I configure dnsmasq to serve DNS on port 5353 and answer DHCP requests on br0
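Roughly like this; a sketch from memory rather than the exact file (the dhcp-range is just an example for the 192.168.250.0/24 subnet implied by the reverse zone below):

$ cat /etc/dnsmasq.d/example-lap.conf
port=5353
interface=br0
domain=example.lap
local=/example.lap/
dhcp-range=192.168.250.50,192.168.250.150,12h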



unbound is configured to listen on my br0 iface


        interface: <br0 address>
        interface: ::1
        access-control: 127.0.0.0/8 allow
        access-control: ::1 allow
        access-control: <br0 subnet> allow

I tell unbound to trust example.lap and forward its queries to dnsmasq


    do-not-query-localhost: no
    private-domain: "example.lap."
    domain-insecure: "example.lap."
    private-domain: "250.168.192.in-addr.arpa."
    domain-insecure: "250.168.192.in-addr.arpa."
    local-zone: "250.168.192.in-addr.arpa" transparent

forward-zone:
    name: "example.lap."
    forward-addr: 127.0.0.1@5353

forward-zone:
    name: "250.168.192.in-addr.arpa."
    forward-addr: 127.0.0.1@5353

Here I tell isc-dhcp-client to set up the forwarding for the local domain in unbound and disable DNSSEC for it (that is what the +i flag to unbound-control does)


case $reason in
    BOUND)
        unbound-control forward_add +i $new_domain_name $new_domain_name_servers
        ;;
    RELEASE)
        unbound-control forward_remove +i $old_domain_name
        ;;
esac
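That snippet lives in a dhclient exit hook; on Debian, dropping it in place looks something like this (the file name is just what I would call it):

# cp unbound-forward /etc/dhcp/dhclient-exit-hooks.d/unbound-forward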

The last piece is the cherry on the cake and the weakest link in the chain

I'm parsing untrusted data, and feeding it to my local resolver

Someone could pass .com as local domain and I'd be effectively disabling DNSSEC for all .com domains :o

I tried to use psl to find out which domains cannot be registered, so I could allow only those. But apparently any domain I use or can think of can be registered now

$ psl .corp
.corp: 1
$ psl .casa
.casa: 1
$ psl .local
.local: 1 **WTF**

Useless, I won't use it :(

I need to be careful after connecting to untrusted networks; anyway, a simple

# service unbound restart

will clean the forward zones

There is a command, unbound-control forward_remove.... but I won't remember that command tomorrow :P
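Writing it down anyway, for future me (zone name just as an example):

# unbound-control forward_remove +i example.casa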

This won't work on IPv6-only networks

I'm using DHCPv4 events (BOUND, RELEASE), so DNS servers and domain names announced over IPv6 only won't get configured.

I think I can live with that :) I don't think I'll live long enough to see local IPv6-only networks, and I'm not even sure they make sense (I may be wrong on that point, but I think I'm right)

Anyway, it is just a start, and I can finally ditch systemd-networkd/systemd-resolved :)

Posted 13/12/16 07:25 Tags:

I'm just going to make a quick comment: I wish Ansible had something like SaltStack's Pillar

I could use vars: and Vault to replace the Pillar, but it is not the same :(
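For the record, the vars: + Vault workaround looks roughly like this (the file layout is just an example):

$ ansible-vault create group_vars/production/vault.yml
$ ansible-playbook -i production site.yml --ask-vault-pass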

Posted 13/07/16 10:18 Tags:

I usually run my own infrastructure, so I never really played with AWS until now. I mean, I used the free tier a few times but rarely went past that stage. The cloud system/orchestration/whatever I'm most familiar with is by far OpenStack. Having said that, let's get to the bone of this post.

I have been writing CloudFormation templates for medium-sized environments like the following:

  • 1 VPC spanning 2 AZs
  • Between 1 and 3 ELBs and Auto Scaling Groups
  • A bunch of standalone EC2 instances
  • RDS
  • ElastiCache
  • SecurityGroups
  • CloudFront
  • Between 1 and 3 Route53 zones

As writing templates by hand sucks, and ideally I would like to reuse the templates with other clouds, I tried other alternatives like:

  • Ansible
  • Terraform

Of those two I liked Ansible more; even if it does not appear to be the right tool for the job, it has support for AWS and can create resources on it. Terraform, on the other hand, has been designed to do these tasks on a variety of platforms like OpenStack, AWS, VMware (I think), etc.

In the end I settled on CloudFormation, for a couple of reasons.

This is what I have to say about Ansible

  • Ansible is super cool, especially if you are going to provision Linux instances; it's a pleasure to continue provisioning the instance with the same tool you used to provision the infrastructure
  • YAML (Ansible) syntax is a lot nicer than JSON (CloudFormation)
  • It is hard to develop Ansible playbooks if you are not running Linux. OK, I know Windows people can do it, but it is not as simple as it is for Linux people.
  • There is no rollback capability included if an update fails
  • There are no modules for everything

And about Terraform

  • It can deploy changes to the infrastructure, but it does it the other way around from CloudFormation: it destroys the old resources first and then creates the new ones :(
  • You need to keep a state file; if you're working with a team this instantly becomes a pain point
  • I liked the syntax more than CloudFormation

About CloudFormation Templates

  • JSON is ugly, but it is well supported by many editors
  • Visual Studio Code is great for Windows developers, and it runs on Linux too
  • vim-json is great for writing JSON in Vim, I can only say nice things about it. Add this to your ~/.vimrc to treat .template files as JSON:

    au! BufRead,BufNewFile *.json set filetype=json " Probably not necessary but won't hurt.

    au! BufRead,BufNewFile *.template set filetype=json

  • I passed my first templates through underscore to tidy them up; the end result was awful, and I had to beg my teammates to review them (an example of that kind of wholesale reformat is right after this list)

  • Keep them tidy yourself; your $EDITOR can help you a lot if you spend some time configuring it
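If you just want a one-off pretty-print, python's json.tool does the job (not what I used back then, and the file names are made up):

$ python -m json.tool ugly.template pretty.template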

The main problems I faced were the state file and the destruction of resources (Terraform), and the feeling, the whole time I spent with Ansible, that it wasn't the right tool; that feeling ultimately made it difficult for me to "sell" it to my teammates.

I have a rant post about CloudFormation coming, but now it is time to sleep...

Posted 30/06/16 20:26 Tags:

Besides the lolz, I was involved in an "identity theft" incident: somebody created a GPG key with the same short ID as mine. It is important to mention that while the short IDs are identical, the complete IDs of the two keys are different.

Gunnar Wolf wrote about the issue in great detail: http://gwolf.org/node/4070

And he even posted to LWN

Enrico Zini created a utility to verify keys: https://github.com/spanezz/verify-trust-paths

and the corresponding blog post http://www.enricozini.org/blog/2016/debian/verifying-gpg-keys

The TL;DR on what to do to avoid falling into this trap is this:

  • Add keyid-format 0xlong to ~/.gnupg/gpg.conf so GPG will show you long IDs by default
  • If your scripts handle GPG IDs, use long IDs: you can pass --keyid-format long or --keyid-format 0xlong; alternatively, --with-colons will give you output that is easy to parse from shell scripts, with long key IDs!!! (see the example right after this list)
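For example, to list keys with long IDs, or to pull them out of the machine-readable output (the awk just grabs the key ID field from the pub records):

$ gpg --keyid-format 0xlong --list-keys
$ gpg --with-colons --list-keys | awk -F: '/^pub/ {print $5}'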

I'm not adding much if you already read Gunnar's and Enrico's blogs, but I think it is worth repeating that valuable advice.

PS: I should have posted about this long ago

Posted 30/06/16 20:26 Tags:

[DC16 logo]

I'm going to DebConf 16 in Cape Town, South Africa. I can't wait to be there :D

Posted 26/04/16 00:44 Tags:

Sometimes you don't have IPv6 configured but you need to SSH to an IPv6 server; you can use a bastion server for that.

Host *
    ForwardAgent yes
    VerifyHostKeyDNS no
    StrictHostKeyChecking no
    GSSAPIAuthentication no
    HashKnownHosts no
    TCPKeepAlive yes
    ServerAliveInterval 60
    ProxyCommand ssh -W %h:%p <bastion server>
    IdentityFile ~/.ssh/id_rsa

Unfortunately, it does not work with IPv6 :( I don't know why, but I found a fix using my dear old friend socat

Host pi
   ForwardAgent yes
   ProxyCommand ssh -q -A <bastion server> "~/socat STDIO TCP:[2404:XXXX:XXXX:58XX::a38]:22"
   #ProxyCommand ssh -q -A <bastion server> "~/socat STDIO TCP6:IPv6.FQDN:22"
   User root
   IdentityFile ~/.ssh/id_personal

Another cool trick, to access servers behind a second bastion, is to use the second bastion in the ProxyCommand (the connection to that second bastion itself goes through the first one, thanks to the Host * block above)

Host machine.domain.casa
   ForwardAgent yes
   #ProxyCommand ssh <second bastion> nc %h 22
   ProxyCommand ssh -W %h:%p <second bastion>
   IdentityFile ~/.ssh/id_personal

Use the first, commented out, ProxyCommand line if your second bastion does not forward the credentials correctly (Dropbear, very old OpenSSH)

Posted 14/04/16 19:55 Tags: