An OpenStack Clouds, Linux and Tech blog. Sometimes I ramble about things I don't like. I also ramble about things I like, but that is rare.

I have read many blogs that at some point state that a post is there to document something for future use.
This post is like that: this is something I have done very recently, and now I'm writing it down so I know how to do it again in the future :)

I have a central mail server where I keep my email; other systems relay their email to that server so I can consolidate my cron mail and delete it all in one place.
I nailed this 2 years ago for my laptop, but I never wrote it down, so when I got new machines they simply didn't send out email.
Recently I needed to send email from one of those machines, so I had to make it work again.

Generate a CA for this purpose

$ easy-rsa smtps-ca
$ cd smtps-ca
$ vi vars
$ source vars
$ ./clean-all
$ ./build-ca

Create keys for all the parties involved

$ ./build-key-server mailserver
$ ./build-key-server client1
$ ./build-key-server clientN

Copy the keys to each client

$ cd keys
$ cat client1.crt client1.key > client1.pem
$ scp client1.pem mailserver.crt client1:/etc/nullmailer/

Add the key fingerprint to the list of certificates we allow to relay email through us

# openssl x509 -in client1.crt -fingerprint -sha1 -noout | awk -F = '{print $2 " client1" }' >> /etc/postfix/tls/relay_clientcerts
# postmap /etc/postfix/tls/relay_clientcerts

My master.cf config (partial), on my smarthost

$ cat /etc/postfix/master.cf

submission inet n       -       n       -       -       smtpd
  -o content_filter=
  -o syslog_name=postfix/submission
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,permit_tls_clientcerts,reject
  -o smtpd_tls_req_ccert=no
  -o smtpd_tls_ask_ccert=yes
  -o smtpd_tls_auth_only=yes
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_cert_file=/etc/postfix/tls/mailserver.crt
  -o smtpd_tls_key_file=/etc/postfix/tls/mailserver.key
  -o smtpd_tls_fingerprint_digest=sha1
  -o relay_clientcerts=hash:/etc/postfix/tls/relay_clientcerts
  -o smtpd_relay_restrictions=permit_sasl_authenticated,permit_tls_clientcerts,reject
  -o smtpd_tls_CAfile=/etc/postfix/tls/keys/ca.crt
  -o smtpd_sender_restrictions=$submission_sender_restrictions
  -o smtpd_client_restrictions=
  -o smtpd_helo_restrictions=
  -o smtpd_data_restrictions=
  -o smtpd_milters=inet:
  -o non_smtpd_milters=inet:
  -o milter_default_action=accept
  -o message_size_limit=211113302
  -o cleanup_service_name=subcleanup


Configure nullmailer on the clients; the cool thing is that nullmailer will validate the smarthost as well

$ ssh client1
# cd /etc/nullmailer
# cat remotes
mailserver smtp --port=587 --starttls --x509certfile=/etc/nullmailer/client1.pem  --x509cafile=/etc/nullmailer/mailserver.crt
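
To debug the TLS side by hand, an openssl s_client session against the smarthost should show whether the server cert validates and the client cert is accepted (hostnames and paths here are the ones from the examples above, adjust to taste):

```shell
# Manually open a STARTTLS session on the submission port with the client cert
openssl s_client -connect mailserver:587 -starttls smtp \
    -cert /etc/nullmailer/client1.pem \
    -CAfile /etc/nullmailer/mailserver.crt
```

The combined client1.pem works for -cert because openssl reads the private key from the same file when no -key is given.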

Things to be improved? A few, probably. For example, I think relay_clientcerts is redundant: Postfix should trust all certs created by this CA, after all I created it for that reason... but it doesn't bother me much yet; maybe next time I add a machine I'll fix it.

EDIT: made the post clearer

Posted 09/02/17 05:39 Tags:

I bought a bunch of Raspberry Pi 3 boards for a project (I'll post about it soon). I have one spare, so I ran a kernel compile on it.

/mnt is an old, slow, ugly USB stick I got at a conference, go figure!

$ make -j4 bcm2709_defconfig O=/mnt
$ time make -j4 O=/mnt
H16TOFW firmware/edgeport/boot2.fw
H16TOFW firmware/edgeport/down.fw
H16TOFW firmware/edgeport/down2.fw
IHEX2FW firmware/whiteheat_loader.fw
IHEX2FW firmware/whiteheat.fw
IHEX2FW firmware/keyspan_pda/keyspan_pda.fw
IHEX2FW firmware/keyspan_pda/xircom_pgs.fw
make[1]: Leaving directory '/mnt'

real    110m27.565s
user    372m1.100s
sys     18m31.400s

Not bad, not bad at all! :)

Posted 26/01/17 10:11 Tags:

This isn't a new topic; many people do it already. You can google it and see for yourself, but I'm doing a bit more.

I run a local domain at home (example.casa), and I also run a local domain on my laptop (example.lap) for the VMs/containers/LXC I run.

The problem with unbound is that it will fail to validate example.lap and example.casa, and when I use my laptop at John's it will fail to validate the john.casa domain.

The solution for this is to whitelist the domain example.lap all the time, and to parse the local domain of whatever network I am on and whitelist it too (this domain will change on every network).

I configure dnsmasq to publish DNS on port 5353 and answer DHCP requests on br0



unbound is configured to listen on my br0 interface


        interface: ::1
        access-control: allow
        access-control: ::1 allow
        access-control: allow

I tell unbound to trust example.lap and forward its queries to dnsmasq


    do-not-query-localhost: no
    private-domain: "example.lap."
    domain-insecure: "example.lap."
    private-domain: "250.168.192.in-addr.arpa."
    domain-insecure: "250.168.192.in-addr.arpa."
    local-zone: "250.168.192.in-addr.arpa" transparent

forward-zone:
            name: "example.lap."
            forward-addr: 127.0.0.1@5353

forward-zone:
            name: "250.168.192.in-addr.arpa."
            forward-addr: 127.0.0.1@5353

Here I tell isc-dhcp-client to set up the forwarding for the local domain in unbound and to disable DNSSEC for it


case $reason in
    BOUND)
        unbound-control forward_add +i $new_domain_name $new_domain_name_servers
        ;;
    RELEASE)
        unbound-control forward_remove +i $old_domain_name
        ;;
esac

The last piece is the cherry on the cake and the weakest link in the chain

I'm parsing untrusted data, and feeding it to my local resolver

Someone could pass .com as the local domain and I'd effectively be disabling DNSSEC for all .com domains :o

I tried to use psl to work out which domains cannot be registered and only allow those, but every domain I tried/can think of can be registered now

$ psl .corp
.corp: 1
$ psl .casa
.casa: 1
$ psl .local
.local: 1 **WTF**

Useless, I won't use it :(

I'll be careful after connecting to untrusted networks; I don't care much, as a simple

# service unbound restart

will clean the forward zones

There is a command, unbound-control forward_remove... but I won't remember that command tomorrow :P

It won't work on IPv6-only networks

I'm only using DHCP IPv4 events (BOUND, RELEASE), so DNS servers advertised and domain names configured over IPv6 only won't get configured.

I think I can live with that :). I don't think I'll live long enough to see local IPv6-only networks; I'm not even sure they make sense (I may be wrong on that point, but I think I'm right).

Anyway, it is just a start, and I finally could ditch systemd-networkd/systemd-resolved :)

Posted 13/12/16 07:25 Tags:

I'm just going to make a quick comment on it: I wish Ansible had something like SaltStack's Pillar

I could use vars: and Vault to replace the Pillar, but it's not the same :(

Posted 13/07/16 10:18 Tags:

Besides the lolz, I was involved in an "identity theft" incident: somebody created a GPG key with the same short ID as mine. It is important to mention that while the short IDs are the same, the complete IDs of the two keys are different.

Gunnar Wolf wrote about the issue in great detail: http://gwolf.org/node/4070

And he even posted to LWN

Enrico Zini created a utility to verify keys: https://github.com/spanezz/verify-trust-paths

and wrote the corresponding blog post: http://www.enricozini.org/blog/2016/debian/verifying-gpg-keys

The TL;DR on what to do to avoid falling into this trap is this:

  • Add keyid-format 0xlong to ~/.gnupg/gpg.conf so GPG will show you long IDs by default
  • If your scripts handle GPG IDs, use long IDs: you can pass the options --keyid-format long or --keyid-format 0xlong; alternatively, --with-colons will give you output that is easily parseable by shell scripts, and long key IDs!!!
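
As an illustration of the --with-colons output, the long key ID is the fifth colon-separated field of the pub line (the key ID below is a made-up sample):

```shell
# Extract the long key ID from gpg's machine-readable output.
# The sample line stands in for `gpg --with-colons --list-keys`.
sample='pub:u:4096:1:AD1921F5DB3CCF22:1424957023:::u:::scESC::::::23::0:'
echo "$sample" | awk -F: '/^pub/ {print $5}'
```

In a real script you would pipe gpg --with-colons --list-keys straight into the awk.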

I'm not adding much if you have read Gunnar's and Enrico's blogs, but I think it is worth repeating that valuable advice.

PS: I should have posted about this long ago

Posted 30/06/16 20:26 Tags:

I usually run my own infrastructure, so I never really played with AWS until now. I mean, I used the free tier a few times but rarely got past that stage. The Cloud system/orchestration/whatever I'm most familiar with is by far OpenStack. Having said that, let's get to the bone of this post.

I have been writing CloudFormation templates for medium-size environments like the following:

  • 1 VPC spanning 2 AZs
  • Between 1 and 3 ELB and AutoScale Groups
  • A bunch of standalone EC2 instances
  • RDS
  • ElastiCache
  • SecurityGroups
  • CloudFront
  • Between 1 and 3 Route53 zones

As writing templates by hand sucks, and ideally I would like to reuse the templates with other Clouds, I tried other alternatives:

  • Ansible
  • Terraform

Of those 2 I liked Ansible more; even if it does not appear to be the right tool for the job, it has support to handle AWS and create resources on it. Terraform, on the other hand, has been designed to do these tasks on a variety of platforms like OpenStack, AWS, VMWare (I think), etc.

Finally I settled on CloudFormation, for a couple of reasons.

This is what I have to say about Ansible

  • Ansible is super cool, especially if you are going to provision Linux instances; it's a pleasure to continue the provisioning of the instance with the same tool you used to provision the infrastructure
  • YAML (Ansible) syntax is a lot nicer than JSON (CloudFormation)
  • It is hard to develop Ansible playbooks if you are not running Linux. OK, I know Windows people can do it, but it is not as simple as it is for Linux people.
  • There is no rollback capability included if an update fails
  • There are no modules for everything

And about Terraform

  • It can deploy changes to the infrastructure, but it does it the other way around from CloudFormation: it destroys the old resources first, then creates the new ones :(
  • You need to keep a state file; if you're working with a team this instantly becomes a pain point
  • I liked the syntax more than CloudFormation

About CloudFormation Templates

  • JSON is ugly but it is well supported by many editors
  • Visual Studio Code is great for Windows developers, and it runs on Linux too
  • vim-json is great for writing JSON in Vim, I can only say nice things about it. Add this to your ~/.vimrc to mark .template files as JSON:

    au! BufRead,BufNewFile *.json set filetype=json " Probably not necessary but won't hurt.

    au! BufRead,BufNewFile *.template set filetype=json

  • I passed my first templates through underscore to make them tidy; the end result was awful, and I had to beg to get them reviewed by my teammates

  • Keep them tidy yourself; your $EDITOR may help you a lot if you spend some time configuring it
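
One low-effort way to keep a template tidy is to round-trip it through Python's stdlib pretty-printer; a sketch with a toy template (filenames are just examples, real templates work the same):

```shell
# Write a minimal template, then pretty-print it with consistent indentation
printf '{"Resources":{"MyBucket":{"Type":"AWS::S3::Bucket"}}}' > /tmp/template.json
python3 -m json.tool /tmp/template.json > /tmp/template.tidy.json
cat /tmp/template.tidy.json
```

As a bonus, json.tool fails loudly on invalid JSON, so it doubles as a cheap syntax check before uploading a template.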

The main problems I faced were the state file and the destruction of resources (Terraform), and the feeling, all the time I spent with Ansible, that it wasn't the right tool; that feeling ultimately made it difficult for me to "sell" it to my teammates.

I have a rant post about CloudFormation coming, but now it is time to sleep...

Posted 30/06/16 20:26 Tags:

[DC16 logo]

I'm going to DebConf 16 in Cape Town, South Africa, I can't wait to be there :D

Posted 26/04/16 00:44 Tags:

Sometimes you don't have IPv6 connectivity configured and you need to SSH to an IPv6 server; you can use a bastion server for that.

Host *
    ForwardAgent yes
    VerifyHostKeyDNS no
    StrictHostKeyChecking no
    GSSAPIAuthentication no
    HashKnownHosts no
    TCPKeepAlive yes
    ServerAliveInterval 60
    ProxyCommand ssh -W %h:%p <bastion server>
    IdentityFile ~/.ssh/id_rsa

Unfortunately, it does not work with IPv6 :( I don't know why, but I found a fix using my dear old friend socat

Host pi
   ForwardAgent yes
   ProxyCommand ssh -q -A <bastion server> "~/socat STDIO TCP:[2404:XXXX:XXXX:58XX::a38]:22"
   #ProxyCommand ssh -q -A <bastion server> "~/socat STDIO TCP6:IPv6.FQDN:22"
   User root
   IdentityFile ~/.ssh/id_personal

Another cool trick to access servers behind a second bastion is to use the second bastion in the ProxyCommand

Host machine.domain.casa
   ForwardAgent yes
       #ProxyCommand ssh <second bastion> nc %h 22
       ProxyCommand ssh -W %h:%p <second bastion>
   IdentityFile ~/.ssh/id_personal

Use the first, commented-out ProxyCommand line if your second bastion does not forward the credentials correctly (Dropbear, very old OpenSSH)

Posted 14/04/16 19:55 Tags:

I played with AWS a bit; more than playing with AWS, I used AWS to play with LVM striping.

[root@ip-172-31-13-194 ~]# lvcreate -i20 -L 10G vg00
  Using default stripesize 64.00 KiB.
  Logical volume "lvol0" created.

[root@ip-172-31-13-194 ~]# lvcreate  -L 10G vg00
  Logical volume "lvol1" created.

[root@ip-172-31-13-194 ~]# lsblk
xvda         202:0      0  10G  0 disk
├─xvda1      202:1      0   1M  0 part
└─xvda2      202:2      0  10G  0 part /
xvdb         202:16     0   1G  0 disk
├─vg00-lvol0 253:0      0  10G  0 lvm
└─vg00-lvol1 253:1      0  10G  0 lvm
xvdc         202:32     0   1G  0 disk
xvdd         202:48     0   1G  0 disk
xvde         202:64     0   1G  0 disk

[ snip, server had > 80 volumes ]

xvddb        202:26880  0   1G  0 disk
xvdbx        202:19200  0   1G  0 disk
xvddc        202:27136  0   1G  0 disk
xvddd        202:27392  0   1G  0 disk
xvdde        202:27648  0   1G  0 disk
xvddg        202:28160  0   1G  0 disk
xvddh        202:28416  0   1G  0 disk
xvddi        202:28672  0   1G  0 disk
xvddj        202:28928  0   1G  0 disk
xvddk        202:29184  0   1G  0 disk

[root@ip-172-31-13-194 ~]# dd if=/dev/zero of=/dev/vg00/lvol1 bs=1M count=1200
1200+0 records in
1200+0 records out
1258291200 bytes (1.3 GB) copied, 44.4547 s, 28.3 MB/s

[root@ip-172-31-13-194 ~]# dd if=/dev/zero of=/dev/vg00/lvol0 bs=1M count=1200
1200+0 records in
1200+0 records out
1258291200 bytes (1.3 GB) copied, 20.245 s, 62.2 MB/s

Clearly striping improves the performance :)

To create the volumes I used this simple script

$ cat ~/bin/iterations
#!/bin/bash
# usage: iterations N command [args...]
iterations=$1; shift
for ((i=1;i<=$iterations;i++)); do "$@"; done

$ iterations 10 aws ec2 create-volume --availability-zone ap-northeast-1c --size 1

To attach the volumes I used this for loop

$ for i in a b c d e f g h i j k l m n o p q r s t u v w x y z ; do aws ec2 attach-volume --volume-id `aws ec2 describe-volumes --max-items 1 --filters "Name=status,Values=available" |jq .Volumes |grep VolumeId |cut -c 18-29` --instance-id i-b77a3938 --device "/dev/xvd$i" ; done

and repeat with devices xvda$i, xvdb$i, etc.
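
Those repeated device-letter batches can be generated in the shell itself; a sketch (the empty prefix plus a-c gives the first four batches of 26 names):

```shell
# Generate /dev/xvda../dev/xvdz, then /dev/xvdaa../dev/xvdaz, /dev/xvdba.., /dev/xvdca..
for p in '' a b c; do
  for i in {a..z}; do
    printf '/dev/xvd%s%s\n' "$p" "$i"
  done
done
```

Feeding these names to aws ec2 attach-volume avoids retyping the letter list for every batch.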

Posted 12/04/16 22:01 Tags:

I've used duplicity for a long time now, not because I like it but because it encrypts everything before uploading it.

I remember using it to back up my stuff to rsync servers, and to back up stuff at a previous $WORK; in both cases the destination servers were safe but not the transport (clear-text rsync over the internet).

Now I have access to Cloud Files for free, so I plan to use it to back up my stuff. While I would like to use backup2swift, I wasn't able to make it work yet, so I will take an old duplicity script, modify it a little bit and use it.

sudo cat /etc/cron.daily/backup-duplicity

cd /opt/duplicity/bin
./duplicity --full-if-older-than 1M \
--include /root --include /etc --include /usr/local --include /home/myuser/.mozilla --include /home/myuser/.purple \
--exclude /home/myuser/repos --exclude /home/myuser/mail --exclude /home/myuser/downloads --exclude /home/myuser/.owncloud \
--exclude /home/myuser/.cache \
--include /home/myuser \
--exclude '**' /  cf+http://notebook-bkp >/dev/null 2>/dev/null
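
To check that the chain actually lands in Cloud Files, duplicity's collection-status and list-current-files actions can be pointed at the same target URL as the script (paths as in the virtualenv setup below):

```shell
# Inspect the backup sets and list the files in the latest backup
/opt/duplicity/bin/duplicity collection-status cf+http://notebook-bkp
/opt/duplicity/bin/duplicity list-current-files cf+http://notebook-bkp
```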

Duplicity needs the pyrax library to talk to Cloud Files; it is not packaged for Debian, so I have to run it from a virtualenv. How did I do it?

apt-get install librsync-dev
virtualenv /opt/duplicity
source /opt/duplicity/bin/activate
pip install pyrax
cd /opt/duplicity
wget https://code.launchpad.net/duplicity/0.6-series/0.6.26/+download/duplicity-0.6.26.tar.gz   # always get the latest version
tar xvzf duplicity-0.6.26.tar.gz
python setup.py install
Posted 04/01/16 00:51 Tags: