Syncing an external hard drive with Dropbox for backup


This little project started because Bitcasa is dropping their Personal Drive product, which I used to use. This forced me to change to another cloud storage provider, and I decided to use Dropbox. (During this process I found out how broken Bitcasa is/was and got really furious, but that will be a topic for another blog post.)

One of the things I liked about Bitcasa is that they provided a FUSE filesystem that I could just mount anywhere. There was no “syncing” of the files in the sense that the files only existed at the cloud provider. It would download chunks of the requested files on demand and keep them in a cache. This meant I didn’t have to worry about disk space on my physical hard drive.

Dropbox, on the other hand, doesn’t work like this. When you set up the daemon, you select a folder to be mirrored to the cloud. The daemon monitors any changes in the folder or the cloud and keeps both copies synced. The problem with this is that it requires having as much free space on the device holding the Dropbox folder as the contents stored in Dropbox. For my immediate situation that would work, but it is definitely not going to scale: I have a 256 GB disk and around 100 GB of data to store in Dropbox.

One possibility is to restrict the content to be mirrored, which gets you a partial sync of your Dropbox account in your local folder. But after what happened to me with Bitcasa (I lost files, MANY files), I want to have a physical backup copy on an external HD to be on the safe side in any event.


After doing some research I decided to take the following approach in order to tackle the problem.

I run an instance of Dropbox solely for the purpose of syncing my external hard drive. This way it doesn’t interfere with the files that I actually want to have always synced on my desktop.

I run the external HD’s Dropbox instance manually and I haven’t automated this process. The reason behind this decision is that if I accidentally delete something from Dropbox, the backup will still have it, and it won’t sync until I tell it to do so.

Running a second instance of Dropbox

Dropbox installs the folders .dropbox and .dropbox-dist under the home directory.

The former holds all the configuration for the Dropbox instance, while the latter holds the dropboxd binary and the files it requires.

If you try executing dropboxd, it will complain that Dropbox is already running (syncing the folder in the home directory).

The key to being able to run more than one Dropbox instance is knowing how Dropbox determines the location of the .dropbox configuration folder. This folder is where all the configuration for an instance is stored, where all the cached elements are kept, and also where the pid file that prevents multiple instances from using the same config lives.

The location used by Dropbox for the configuration directory is $HOME/.dropbox. Thus, by changing the value of the HOME environment variable when we execute dropboxd, we can change the configuration folder and have as many instances as we want.

I mount my external hard drive on /mnt/external-hd/, so I just execute HOME=/mnt/external-hd/ /home/santiago/.dropbox-dist/dropboxd.

The first time, it will ask for the instance’s setup information: account, password, location of the mirrored folder, etc. After that, it runs silently.
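If you launch this often, a tiny wrapper script keeps things tidy. This is just a sketch assuming the paths above (the script name dropbox-backup.sh is made up):

#!/bin/sh
# Launch a second Dropbox instance whose configuration lives on the
# external HD. Overriding HOME makes dropboxd keep its .dropbox folder
# (config, cache and pid file) under /mnt/external-hd/ instead of
# clashing with the desktop instance's folder.
HOME=/mnt/external-hd/ exec /home/santiago/.dropbox-dist/dropboxd "$@"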

One caveat: if the mount directory of your external hard drive changes, you should be careful when starting the external HD’s Dropbox service. If Dropbox thinks you have deleted the data, it will sync that deletion upstream and you will lose the data. To prevent this, before running it, create a symlink from the old location to the new one, and then move the location to the new one using Dropbox’s configuration setup.
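For example, if the drive that used to be mounted on /mnt/external-hd/ now shows up somewhere else (the path /mnt/new-mount/ below is purely illustrative), a symlink bridges the gap until you update the folder location in Dropbox’s settings:

$ sudo ln -s /mnt/new-mount /mnt/external-hd  # old path now points at the new mount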


Setting up a NAT network for QEMU on Mac OS X


I am working on a really cool project where we have to manage virtual machines launched through QEMU. The project is meant to run on Linux, but since I (and all my colleagues) develop on a Mac, we wanted to be able to run the project under Mac as well. Yes, I know QEMU’s performance on Mac sucks!

We couldn’t use QEMU’s user mode network stack due to its limitations. We needed to use TAP interfaces, the machines had to be able to acquire their network configuration through DHCP, and their traffic had to be NATted.

A schema of how I wanted things to be is as follows:

[Figure: VM NAT bridged network schema]

Enabling TAP interfaces on Mac

The first issue I stumbled upon is the fact that Mac does not have native support for TAP interfaces.

In order to have TAP interfaces on Mac, we need to download and install TunTap.

Once it is installed, we will see a list of TAP device nodes of the form /dev/tapX. TunTap determines the maximum number of possible TAP interfaces and sets them up.
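A quick way to confirm the device nodes are there after installing:

$ ls /dev/tap*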

Creating the bridge interface

Once we are able to use TAP interfaces, we need to create the bridge where we can attach them.

This is really straightforward on Mac. For a temporary bridge we just need to issue the following command with elevated privileges:

$ sudo ifconfig bridge1 create

The next step is to configure the address of the newly created bridge. Which IP we give it depends on the network we want to use for our VM network. As an example, I will use the 192.168.64.0/24 network, so I will assign 192.168.64.1 to the bridge. It will act as the default gateway for all the virtual machines, which is why we need to assign that IP statically.

$ sudo ifconfig bridge1 192.168.64.1/24

Packet forwarding and NAT

The next step toward our goal is to configure our Mac so that packets arriving from the bridge1 interface are routed correctly. We also need to NAT these packets, as otherwise they won’t find their way back.

Enabling packet forwarding is really easy; we just need to execute:

$ sudo sysctl -w net.inet.ip.forwarding=1

For the NAT we need to create a pf configuration file stating the rule that will do the NAT. In a file, we write the following:

nat on en0 from bridge1:network to any -> (en0)

This rule tells pf to NAT packets that:

  1. Go through en0 (you should replace this interface with the one connected to the internet) and
  2. Have a source IP in the network range associated with bridge1 (here goes the bridge name of our VM network)

The address to use for the NAT is indicated after the ->. We need to put the interface connected to the internet in parentheses. The parentheses are important because they force the address associated with the interface to be evaluated each time the rule is applied. Without them, the address is resolved at load time, and if it later changes, the rule will keep using the stale address.

Now we need to enable pf with the given rule.

$ sudo pfctl -F all # This flushes all active rules
$ sudo pfctl -f /path/to/pfctl-nat-config -e # Enable pf with the config
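To double-check that the rule was loaded, you can ask pf to print the active NAT rules:

$ sudo pfctl -s nat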

Setting up the DHCP server

Before setting up the virtual machine, we need to set up the DHCP server so the VM will be able to acquire the network configuration easily.

Fortunately, Mac OS comes with a DHCP server installed by default and the only thing we need is to set it up and start it. The server is bootpd and is located under the /usr/libexec/ directory.

The DHCP server reads its configuration from the file /etc/bootpd.plist, which we need to edit.

The bootpd.plist file has 3 main sections (detailed explanation in the official documentation):

  • Root dictionary keys: these properties control the behavior of the daemon as a whole.
  • Subnets property: an array of the subnetworks the DHCP server manages.
  • NetBoot property: used to configure the NetBoot options.

We are interested in the first two sections, as they are the ones needed to have the DHCP service up and running.

Here’s the file as we need it (every address below belongs to the example 192.168.64.0/24 network, and the DNS server is just a placeholder; substitute your own values):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>dhcp_enabled</key>
    <array>
        <string>bridge1</string>
    </array>
    <key>Subnets</key>
    <array>
        <dict>
            <key>name</key>
            <string>VM NAT Network (192.168.64.0/24)</string>
            <key>net_address</key>
            <string>192.168.64.0</string>
            <key>net_mask</key>
            <string>255.255.255.0</string>
            <key>net_range</key>
            <array>
                <string>192.168.64.2</string>
                <string>192.168.64.254</string>
            </array>
            <key>allocate</key>
            <true/>
            <key>dhcp_router</key>
            <string>192.168.64.1</string>
            <key>dhcp_domain_name_server</key>
            <string>8.8.8.8</string>
        </dict>
    </array>
</dict>
</plist>

Let’s drill down into the important elements of the XML file.

The dhcp_enabled key in the root dictionary states which interfaces we want the DHCP service associated with. We must add the bridge interface name here; otherwise the DHCP service won’t listen for DHCP requests on that interface.

The other thing we need to do is add an entry in the array associated with the Subnets key. Each entry is a dictionary that describes a subnetwork to be used by the DHCP service. The following is a description of the main keys used above (again, for the complete list see the documentation):

  • name: a string that just gives the subnetwork a human-readable name.
  • net_address: the subnetwork base address. In our example, 192.168.64.0.
  • net_mask: the subnetwork’s mask, 255.255.255.0 in our example.
  • net_range: the range in this subnetwork managed by the DHCP server. The value of this property is an array that contains two strings: the lower and upper bounds of the addresses to manage. We want the DHCP server to manage all the hosts except the one assigned to the host machine, so our example range is 192.168.64.2 to 192.168.64.254.
  • allocate: this boolean property tells the DHCP server whether or not to assign IP addresses from the range. We must set it to true.

The other two keys are used to push configuration to the DHCP clients. We want to push the default gateway as well as the DNS; for that we use the dhcp_router and dhcp_domain_name_server options.

Now that the configuration is in place, we need to start the DHCP server. To do that, we just execute $ sudo /usr/libexec/bootpd -D. This launches the server in the background with DHCP capabilities on. If we want to have it in the foreground and see how it is working, it can be launched with the -d flag instead.
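A quick sanity check to confirm that bootpd is actually listening for DHCP requests (UDP port 67):

$ sudo lsof -i UDP:67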

QEMU and interface setup

The last thing to do is launch the virtual machines and set up the attached interfaces so that they are correctly attached to the bridge.

We are going to use a TAP interface setup in QEMU with a virtio NIC. We cannot use the bridge setup on Mac due to the nonexistent qemu-bridge-helper for the platform.

To configure the virtio device we need the following command line arguments: -net nic,model=virtio. This is also where we would specify the MAC address for the interface if we wanted to.

The command line argument specification to set up the interface as TAP is: -net tap[,vlan=n][,name=name][,fd=h][,ifname=name][,script=file][,downscript=dfile][,helper=helper]

Of those arguments we are interested in two in particular: script and downscript. The files given in those arguments are executed right after the TAP interface is created and right before the TAP interface is destroyed, respectively. We use these scripts to attach and detach the interface from the bridge.

The scripts receive one command line argument with the name of the interface involved. We need to create two scripts:

  • qemu-ifup.sh will be used as the start script and will attach the interface to the bridge:

#!/bin/sh
ifconfig bridge1 addm "$1"  # $1 is the name of the TAP interface, passed in by QEMU
  • qemu-ifdown.sh will be used in the downscript to detach the interface from the bridge before it is destroyed:

#!/bin/sh
ifconfig bridge1 deletem "$1"  # detach the TAP interface before it is destroyed

All that’s left is to start the VMs and enjoy the newly created NAT network.
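Putting the pieces together, a launch command could look roughly like this; disk.img and the script paths are placeholders for your own setup, and it runs with sudo so the ifup/ifdown scripts have the privileges to modify the bridge:

$ sudo qemu-system-x86_64 -m 1024 -hda disk.img \
    -net nic,model=virtio \
    -net tap,script=/path/to/qemu-ifup.sh,downscript=/path/to/qemu-ifdown.sh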


Free SSL: Using "Let's Encrypt" for TLS certificates on your website


When I was setting up this new blog, I wanted to see if there was a way of enabling TLS for it without having to buy a certificate. Let’s face it: this site has nothing confidential, nor is there sensitive information like logins that needs protecting, so paying to have a valid SSL-protected site did not make any sense.

I started looking for a CA that offered free certificates or anything of the sort. That’s when I came across this interesting project: “Let’s Encrypt”.

Let’s Encrypt is a free and open CA that issues certificates in a fully automated way. The generated certificates last only 90 days. You might be thinking that that sucks, because every 3 months you will have to go through the hassle of renewing the certificates for your sites. The really cool thing about this project is that it provides a client that makes generating and renewing certificates extremely simple, with minimal intervention when generating a certificate and no intervention at all when renewing.

Setting up TLS in nginx using “Let’s Encrypt”

Here I’m going to show the steps I took to set up TLS for this blog on my Ubuntu 14.04 server using Let’s Encrypt and nginx.

I am not going to use the nginx plugin for Let’s Encrypt because, according to the documentation, it is in an experimental state and is not shipped with Let’s Encrypt.

I will be using the webroot plugin. This plugin requires access to the root folder of the site you want to generate the certificate for. The reason is that Let’s Encrypt, in order to validate that the requester actually controls the domain the certificate is being asked for, requires completing a challenge. One of the challenge methods (the one used by this plugin) is to serve a file with requested random content from a certain location on the site.

Installing the Let’s Encrypt client

The best way to install the client is by cloning their repo. This also makes it easy to update.

$ sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Updating is just pulling the latest changes from upstream into our repo.
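For example:

$ cd /opt/letsencrypt && sudo git pull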

Generating the certificate

This is the beauty of Let’s Encrypt: the whole process is reduced to a one-liner.

Let’s assume that the site’s root is located at /var/www/ and the name is www.mysite.com. All you need to do to generate the certificate is:

$ /opt/letsencrypt/letsencrypt-auto certonly --webroot -w /var/www -d www.mysite.com

Want something easier than that? Impossible.

After the process has finished, Let’s Encrypt will have generated a set of PEM files under the directory /etc/letsencrypt/live/www.mysite.com/. Actually, what you are going to find in that directory are symlinks to the current certificates for the site. You can find all the certificates the client has ever generated for the site under /etc/letsencrypt/archive/www.mysite.com/.
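Listing the live directory should show something like the following symlinks:

$ ls /etc/letsencrypt/live/www.mysite.com/
cert.pem  chain.pem  fullchain.pem  privkey.pem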

If you want to change the defaults for certificate generation, take a look at the different arguments you can pass to the Let’s Encrypt client. You can also set up a configuration file with everything you need for a certain certificate generation and give only that to the client.

Enabling TLS on a site using nginx

This is really straightforward. Just open nginx’s configuration file for your site and add the following inside the server section:

listen 443 ssl;

ssl_certificate /etc/letsencrypt/live/www.mysite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.mysite.com/privkey.pem;

That’s it! Reload the server’s configuration and voilà, you have TLS enabled with a valid certificate.
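For example, on Ubuntu 14.04:

$ sudo service nginx reload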

Now, if you want to go for something stronger security-wise, you will need to generate strong Diffie-Hellman parameters and restrict which ciphers are used.

To generate the Diffie-Hellman parameters, run the following command:

$ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

And to make nginx use them and restrict the ciphers add the following right after the changes you made to enable TLS in your site’s configuration file:

ssl_prefer_server_ciphers on;

ssl_dhparam /etc/ssl/certs/dhparam.pem;
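The snippet above doesn’t pin a cipher list by itself. If you also want to restrict the ciphers explicitly, one illustrative (by no means authoritative) directive is:

ssl_ciphers HIGH:!aNULL:!MD5;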


Certificate renewal

Renewing the certificate is just as easy as generating a new one, requiring a single command:

$ /opt/letsencrypt/letsencrypt-auto renew

This will renew all certificates managed by Let’s Encrypt that are due for renewal (less than 30 days to expiration).

In any case, having to remember to run the command every 60-90 days is still a pain, and forgetting would leave your site with an invalid certificate.

That’s why I think setting up automatic renewal is key. As you can imagine, this is really easy using cron. Open the crontab for the root user with $ sudo crontab -e and add the following cron jobs:

30 20 * * 6 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/letsencrypt-renew.log
35 20 * * 6 service nginx reload

This will execute the renewal process every Saturday at 20:30 and reload the nginx configuration 5 minutes later.