Setting up exim4 for Google Apps SMTP in Debian Wheezy

Whenever I install a new system, I always have to look up how to set my servers and workstations to use Google’s SMTP server for sending outbound mail. At Digium I just used the Zimbra server, and it was simple. For my VPSes, workstations, and laptops, it’s more complicated. Nonetheless, it’s still straightforward to set up Google Apps’/GMail’s SMTP as my smarthost.

  1. exim4 should already be installed. To set it up to use Google’s SMTP server, run the following command:
    dpkg-reconfigure exim4-config
    1. The first choice is the “General type of mail configuration.” Since this server (and all of my workstations/laptops) doesn’t process any local mail, I chose “mail sent by smarthost; no local mail”.
    2. I used my system’s hostname as the system mail name (in this case, it was “betachunk”).
    3. For the IP address or hostname of the outgoing smarthost, I used “smtp.gmail.com::587”
    4. I chose not to keep the DNS queries minimal, since most of my systems have an “always-on” connection.
    5. I decided not to split the configuration file into small files, to keep the system robust (small files can introduce breakage).
    6. I set the root and postmaster mail recipient to be my email address, trey@blancher.net.
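
    For reference, the choices above are recorded in /etc/exim4/update-exim4.conf.conf (the mail name itself lands in /etc/mailname). The relevant entries on my system looked roughly like this; the file holds several other dc_* settings as well:

      # /etc/exim4/update-exim4.conf.conf (excerpt)
      dc_eximconfig_configtype='satellite'   # "mail sent by smarthost; no local mail"
      dc_smarthost='smtp.gmail.com::587'     # note the double colon before the port number
      dc_minimaldns='false'
      dc_use_split_config='false'
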
  2. Now that exim4 has its basic configuration, I can add my credentials.
    1. First I edited /etc/exim4/exim4.conf.template, and added the following after the line “.ifdef DCconfig_smarthost DCconfig_satellite”:
      send_via_gmail:
        driver = manualroute
        domains = ! +local_domains
        transport = gmail_smtp
        route_list = * smtp.gmail.com

      …before the very next “.endif”.

    2. I then searched for this comment, and added the gmail_smtp section below it:
      ### transport/30_exim4-config_remote_smtp_smarthost
      #################################
      ######
      # GMail smtp
      gmail_smtp:
        driver = smtp
        port = 587
        hosts_require_auth = $host_address
        hosts_require_tls = $host_address
    3. Next, I searched for “begin authenticators”, and added a section for gmail_login:
      begin authenticators
      #########
      # GMail SMTP
      gmail_login:
        driver = plaintext
        public_name = LOGIN
        client_send = : your_account@gmail.com : your_password

      Replace your_account and your_password with your GMail account name and an application-specific password (or, if you don’t use GMail’s two-factor authentication, your actual GMail password). Note that I use a dummy GMail account for all of my SMTP services; I don’t use my primary Google Account.

    4. Finally, I had to comment out the “login:” section at the end of the file since it conflicts with the gmail_login section (“public_name = LOGIN” can only appear once in the file).

    And that’s it for exim4.conf.template.

  3. I didn’t add anything to /etc/exim4/passwd.client.
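
    For what it’s worth, the stock Debian smarthost configuration reads its credentials from this file rather than having them hard-coded in the template as I did above. The format is shown below with placeholder credentials; note that an entry has to match the host name the smarthost resolves to, which for GMail is usually something under google.com:

      # /etc/exim4/passwd.client -- target.mail.server.example:login:password
      smtp.gmail.com:dummy.account@gmail.com:app-password-here
      *.google.com:dummy.account@gmail.com:app-password-here
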
  4. Finally, I could run the following commands to load the configuration:
    update-exim4.conf
    service exim4 restart    # a reload may be sufficient
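
    A quick sanity check, before trusting the new setup with real mail, is to ask exim which router and transport would handle an outside address:

      exim4 -bV                       # syntax-check the generated configuration
      exim4 -bt someone@example.com   # should report the send_via_gmail router and gmail_smtp transport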

That’s it for the configuration. To test, I used the “mail” command from the shell:

mail trey@blancher.net
Subject: Test Alpha
Alpha Test
.

That should send the message to my primary email address. If all goes well, I should receive a message with subject “Test Alpha” and body “Alpha Test”. When I was writing these instructions, I was configuring betachunk so I could document it as I went. It almost never happens, but this time the first test message got sent correctly, with no further configuration or troubleshooting necessary.
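
The same test can also be sent non-interactively, which is handier when scripting checks later:

echo "Alpha Test" | mail -s "Test Alpha" trey@blancher.net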

A note on my SMTP service testing: I use the Greek alphabet to label the test messages. I always start with alpha. I also start a running tail “tail -f /var/log/exim4/mainlog”, so I can verify that the system tried to send the message. Once I see the message has been sent successfully, I refresh my GMail inbox until the message arrives, completing the circuit. In this case I checked the email headers (“show original” in Gmail) to ensure it came from betachunk.

Should mainlog show an error, I investigate the problem, making the requisite changes. Then I send the next message, with the next Greek letter. Unfortunately my recollection of the Greek alphabet is rather spotty, so after the first five or six letters I just pick them at random.

And there you have it! Hopefully I won’t have to do this again for a while.

Migrate install to new VPS…

I have a ChunkHost Virtual Private Server (or VPS), and I ran into a predicament. I will admit my Debian knowledge is ever increasing, and now that sid has “thawed”, its package selection is much more mercurial. Currently this poses a problem for my web sites hosted on the VPS: amberandtrey.us and this site (eldon.me). At the moment sid is going through a particular transition, from apache-2.2 to apache-2.4, so many of the packages I need are in flux, and on any given day the system could be horribly broken. I decided to switch back to Wheezy, but there’s no real way to downgrade; I’d have to reinstall.

ChunkHost gives a really (insidiously) easy way to fire up a new VPS (called a “chunk”). Within minutes I had my betachunk. I named my first chunk “alphabouncer” because I originally intended it to be an IRC bouncer (so if someone were to attack my IP address they’d be attacking my chunk, not my home IP address), but I digress. It wasn’t a simple matter of just copying files; I had to back up the MySQL databases, back up the WordPress files, copy my WeeChat configuration, restore it all, point the DNS records for both domains at the new IP address, and coax it all into working.

NOTE: All of the commands were performed within my backup directory, /root/backup/2013-06-15, unless otherwise noted.

  1. The first thing I did was to backup the databases. Since both websites use their own databases within the confines of the single MySQL instance, I needed to ensure the mysqldump command captured all databases. To conserve space, I ran the results through bzip2. The command I used was this:
    mysqldump -u root -p --all-databases --events | bzip2 -c - > websites_$(date +%F).sql.bz2

    The --events flag may not have been necessary (and could be problematic if the database is in active use at the time of the dump), but mysqldump issues a warning if it is not passed. These databases are small enough that grabbing the events should not pose a problem.
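
    If these databases had been busier, adding --single-transaction would have taken a consistent snapshot of the InnoDB tables without locking everything, along these lines:

      mysqldump -u root -p --all-databases --events --single-transaction | bzip2 -c - > websites_$(date +%F).sql.bz2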

  2. The next task was to back up the WordPress files. These lived in two places: /usr/share/wordpress for amberandtrey.us, and /var/www/eldon.me. I wanted to move amberandtrey.us to /var/www/amberandtrey.us to be more consistent with the rest of my directory structure, but more on that below. I also grabbed /etc and /root in case I needed them (I definitely did, mainly for the SSL certificates and the apache2 and WordPress server files). To back these up, I just used standard tar commands:
    tar -cvjf amberandtrey.us_$(date +%F).tar.bz2 /usr/share/wordpress
    tar -cvjf eldon.me_$(date +%F).tar.bz2 /var/www/eldon.me/
    tar -cvjf etc_$(date +%F).tar.bz2 /etc
    tar -cvjf alphabouncer-root_$(date +%F).tar.bz2 --exclude=backup --exclude=downloads --exclude=.cpan /root

    I used the excludes because the destination is in /root/backup, and I didn’t need to have all the cruft lying around on the new server.

  3. To copy my Weechat configuration, I created a matching username on the new VPS (betachunk). Rather than tar up my ~/.weechat directory, I used Midnight Commander (or “mc” for short) on alphabouncer to select the directory and copy it over. Since the two VPSes are in the same data center, I thought the transfer would occur quickly. It did, but some of the logs were still outrageously large, so it took a few minutes to complete.
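
    If you prefer the command line over mc, an rsync over ssh does the same job (the username here is a placeholder):

      rsync -avz ~/.weechat/ youruser@betachunk:.weechat/
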
  4. Restoring it all was a little tricky, compounded by the fact that I wanted to make minor structural changes to the WordPress directory layout. I copied the contents of alphabouncer:/root/backup/2013-06-15/* to betachunk:/root/backup/2013-06-15/; I can’t remember exactly, but I think I used mc to copy the entire backup directory over. I then changed directory (“cd”) into the backup directory and untarred the etc backup file into the current directory (a nonstandard location), because I was going to copy its files over one at a time:
    tar -xvf etc_2013-06-15.tar.bz2

    I first copied over all of my SSL files for both sites, and the StartCom (my cert provider) files as well. This included everything for /etc/ssl/certs and /etc/ssl/private.
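
    Run from the backup directory, that copy looks something like this (-a preserves ownership and permissions, which matters for the private keys):

      cp -a etc/ssl/certs/* /etc/ssl/certs/
      cp -a etc/ssl/private/* /etc/ssl/private/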

  5. Next I unpacked amberandtrey.us, created the directory where I wanted the files to ultimately live, and moved the resulting files into that directory:
    tar -xvf amberandtrey.us_2013-06-15.tar.bz2
    mkdir -p /var/www/amberandtrey.us
    mv usr/share/wordpress/* /var/www/amberandtrey.us

    I then repeated the process with eldon.me:

    tar -xvf eldon.me_2013-06-15.tar.bz2
    mv var/www/eldon.me /var/www/

  6. The next step was the tricky part. I had to restore all databases, but the MySQL database server hadn’t been installed yet (installing the wordpress package did NOT resolve this dependency!):
    aptitude install mysql-server

    Installing the mysql-server package prompted me to set the root password, but I still needed to copy the /etc/mysql/debian.cnf file from the previous installation. This allowed the MySQL daemon to start cleanly.
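
    With the daemon up, the dump from step 1 could be loaded back in. The restore itself would have been something along these lines (mysql prompts for the root password, and since the dump includes the grant tables, a FLUSH PRIVILEGES afterwards makes the restored WordPress database users take effect):

      bunzip2 -c websites_2013-06-15.sql.bz2 | mysql -u root -p
      mysql -u root -p -e 'FLUSH PRIVILEGES;'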

  7. The final step in transferring the WordPress sites to betachunk was to set up the /etc/wordpress/config-*.php files. I copied them over from the restored /etc backup location, and we were almost in business.
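
    From the backup directory, that amounts to roughly:

      cp -a etc/wordpress/config-*.php /etc/wordpress/
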
  8. Since I was using SNI (Server Name Indication; see my KeePassX database merge topic for a brief discussion of this), I needed to tie the new IP address to both domains. ChunkHost has a neat little DNS tool that can be used to do this. Since my domain registrar (GoDaddy; I use them because they were convenient when I bought my first domain, and inertia keeps me with them) already points both domains at the ChunkHost nameservers, I just needed to update ChunkHost’s DNS records with the new IP address, and I was running!

Well, I thought I was running. amberandtrey.us worked no problem, but eldon.me had issues. Apparently I have the error log for eldon.me in a weird file (it’s not /var/log/eldon.me-error.log, like it probably should be). This has now been corrected. Anyway, I had to run “ls -al /var/log/apache2/” in a watch command to see what file was changing when I tried accessing eldon.me (and was getting a 500 Internal Server Error):

watch -n 1 ls -al /var/log/apache2

I saw the nonstandard log file (eldon.me-ssh-error.log) changed size and modification date/time, so I investigated that file. I found these errors:

[Sat Jun 15 03:33:14 2013] [error] [client 216.207.245.1] PHP Warning: require_once(/etc/wordpress/config-eldon.me.php): failed to open stream: Permission denied in /var/www/eldon.me/wp-config.php on line 19, referer: https://eldon.me/wp-admin/post-new.php
[Sat Jun 15 03:33:14 2013] [error] [client 216.207.245.1] PHP Fatal error: require_once(): Failed opening required '/etc/wordpress/config-eldon.me.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/eldon.me/wp-config.php on line 19, referer: https://eldon.me/wp-admin/post-new.php

That led me to believe that /etc/wordpress/config-eldon.me.php did not have the correct permissions. I changed the ownership to www-data and set the permissions to match those of config-amberandtrey.us.php. Now eldon.me works!
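
The fix itself amounted to roughly the following (chmod --reference copies the mode from the file that already works):

chown www-data /etc/wordpress/config-eldon.me.php
chmod --reference=/etc/wordpress/config-amberandtrey.us.php /etc/wordpress/config-eldon.me.php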

Whew, what a long post! It took me longer to write than I would have expected because I ran into the problems on eldon.me (the server providing the WordPress post form I’m using). We’ll see if it lets me post it!