Basic Postfix Config, Backed by PostgreSQL

May 1st, 2014 by exhuma.twn

Following my previous post, I now have a working config with PostgreSQL.

This post is meant to set up a config which is “just enough” to get postfix working with PostgreSQL. There are many tutorials out there, but they all give you a full-blown set-up with spam/virus checking, POP/IMAP access with authentication, webmail, and so on. None of the ones I have read managed to explain the inner workings of postfix; they just gave you a bunch of copy/pasteable “templates”. This post intentionally leaves out additional features. You can add them yourself if you want to.

Database administration (database/user creation, authentication, pg_hba.conf & co.) is out of the scope of this document. You should read up on those topics elsewhere if you don’t feel comfortable with them!

The aim here is to have a postfix installation capable mainly of “aliasing” e-mails. Let’s say I have a domain “example.com” and I want to manage e-mail addresses for that domain. But it’s a small domain with only a handful of users, and I don’t want to store mails locally, just alias them to the users’ private e-mail addresses.

While the aim is only to alias e-mails, the config explained below is still capable of delivering mail locally (storing it directly on disk). But as there is no set-up to access those mails (POP/IMAP/webmail), that is only marginally useful. It does give you a working framework if you want to add these features yourself.
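As a taste of where this is heading: a pgsql: lookup table replaces a hash: file with a small config file telling postfix how to query the database. A minimal sketch (the database, table and column names here are hypothetical):

    # /etc/postfix/pgsql-valiases.cf
    hosts = localhost
    user = postfix
    password = changeme
    dbname = mail
    query = SELECT destination FROM valiases WHERE source = '%s'

In main.cf such a map is then referenced as virtual_alias_maps = pgsql:/etc/postfix/pgsql-valiases.cf (this requires the postfix-pgsql package on Debian/Ubuntu).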

Posted in Linux | No Comments »

postfix config from scratch.

April 27th, 2014 by exhuma.twn

There are many postfix tutorials out there. I’ve always wondered what the hell I was copy/pasting onto my system and decided to start (nearly) from scratch. I took one of those tutorials (don’t remember which one) as inspiration, but based the final result on the official docs. I kept what I liked, changed some settings I did not like so much, and threw out a few other things which I deemed useless.

The main problem with those tutorials is that they show you a final result without telling you how they arrived at it; sometimes it looks like an amalgam of other tutorials, ending up in a huge “frankenconfig”. I don’t like deploying something when I don’t know what it’s doing…

The final result is a config which is stripped down to my most basic needs.

The first iteration will be an extremely simple config:

  • No database will be used to store the mail config. (This, however, is something I will certainly implement later.)
  • Only simple spam handling using blocklists.
  • No antivirus.
  • No webmail access.
  • No POP access.
  • No IMAP access.
  • No relaying of mail for other hosts (only local delivery).

For now, I only require e-mail aliasing. That is, I only want to handle e-mails destined for my domains, but I only want to “forward” them to other destinations. Local delivery (and access to those mails) may or may not be implemented later. It will be set up on an Ubuntu Precise Pangolin server and should get you started with a basic mail server.

Here’s the main config:

Note:

The template contains 3 “variables”. These need to be replaced with your own values before deploying!

{{fqdn}}
    The fully qualified hostname of your server.
{{vgid}}
    The group ID of the local system group owning files stored on the local disk.
{{vuid}}
    The user ID of the local system user owning files stored on the local disk.

The most interesting part is at the end of the config, after the “Virtual Mail” header. This part defines which e-mail addresses the MTA handles, and how: will the mails be stored locally, or “aliased” to another e-mail address?

The config itself should be documented well enough.

    # Debian specific:  Specifying a file name will cause the first
    # line of that file to be used as the name.  The Debian default
    # is /etc/mailname.
    myorigin = /etc/mailname

    smtpd_banner = $myhostname ESMTP $mail_name
    biff = no

    # appending .domain is the MUA's job.
    append_dot_mydomain = no

    # Generate "delayed mail" warnings when delivery is delayed by more
    # than 4 hours.
    delay_warning_time = 4h

    readme_directory = no

    # TLS parameters
    smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_use_tls=yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

    # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
    # information on enabling SSL in the smtp client.

    myhostname = {{fqdn}}
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    mydestination = {{fqdn}}, $myorigin
    relayhost =
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    mailbox_size_limit = 51200000
    recipient_delimiter = +
    inet_interfaces = all
    inet_protocols = all

    # How long to keep a message in the queue before bouncing it as
    # undeliverable.
    maximal_queue_lifetime = 7d
    # How many addresses can be used in one message. An effective stopper
    # for mass spammers (and accidental copies to a whole address list),
    # but it may restrict intentional mail shots.
    smtpd_recipient_limit = 16
    # How many errors before slowing the client down.
    smtpd_soft_error_limit = 3
    # How many errors before disconnecting the client.
    smtpd_hard_error_limit = 12
    # Requirements for the HELO statement
    smtpd_helo_restrictions = permit_mynetworks, warn_if_reject
        reject_non_fqdn_hostname, reject_invalid_hostname, permit
    # Requirements for the sender details
    smtpd_sender_restrictions = permit_mynetworks, warn_if_reject
        reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining,
        permit
    # Requirements for the connecting server
    smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org,
        reject_rbl_client blackholes.easynet.nl
    # Requirement for the recipient address
    smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks,
        reject_non_fqdn_recipient, reject_unknown_recipient_domain,
        reject_unauth_destination, permit
    smtpd_data_restrictions = reject_unauth_pipelining
    # require proper helo at connections
    smtpd_helo_required = yes
    # waste spammers' time before rejecting them
    smtpd_delay_reject = yes
    disable_vrfy_command = yes


    # ----------------------------------------------------------------------------
    #   Virtual Mail
    # ----------------------------------------------------------------------------

    # basic security (user ID mapping)
    virtual_minimum_uid = 100
    virtual_gid_maps = static:{{vgid}}
    virtual_uid_maps = static:{{vuid}}

    # base folder
    virtual_mailbox_base = /var/spool/mail/virtual

    # Domains for which we only ALIAS (mail will not be stored on the local disk).
    virtual_alias_domains = hash:/etc/postfix/valias_domains

    # Domains for which we deliver mail LOCALLY (mail will be stored on local
    # disk).
    virtual_mailbox_domains = hash:/etc/postfix/vdomains

    # Aliases. Maps one e-mail to another. If the target of an alias is to
    # be delivered LOCALLY (i.e. stored on the local disk), the end-point
    # of the alias (the "right-hand side") must be on a domain which is
    # delivered LOCALLY (see `virtual_mailbox_domains` above).
    virtual_alias_maps = hash:/etc/postfix/valiases

    # Mappings for locally delivered mail (maps to files/folders which are stored
    # below the base folder `virtual_mailbox_base`)
    virtual_mailbox_maps = hash:/etc/postfix/vmailbox

Examples for the hash files (for an explanation of what they do, see above):

---- valias_domains -- This is a "list", so the left-hand-side
---- is usually the same as the right-hand-side.

domain1.tld    domain1.tld
domain2.tld    domain2.tld

---- vdomains -- This is another list.

domain3.tld    domain3.tld

---- valiases -- This is a "map". Think "key/value". So,
---- naturally, the LHS differs from the RHS.

user@domain1.tld          john.doe@external.domain.tld
user2@domain2.tld         user@domain1.tld

---- vmailbox -- This is another "map".
# The trailing slash makes this a Maildir mailbox.
user3@domain3.tld         folder/subfolder/user3/

# Without a trailing slash, it is an mbox file.
user4@domain3.tld         folder/subfolder/user4
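One step the files above rely on but which is easy to forget: hash: tables are looked up through compiled .db files, so each file must be run through postmap once, and again after every change:

    postmap /etc/postfix/valias_domains
    postmap /etc/postfix/vdomains
    postmap /etc/postfix/valiases
    postmap /etc/postfix/vmailbox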

Posted in Linux | No Comments »

Optimising the ipaddress module from Python 3.3

February 27th, 2014 by exhuma.twn

As of Python 3.3, the “ipaddress” module has been integrated into the stdlib. Personally, I find it a bit premature, as the library code does not look very PEP8 compliant. Still, it fills a huge gap in the stdlib.

Over the last few days, I needed to find a way to collapse consecutive IP networks into supernets whenever possible. Turns out, there’s a function for that: ipaddress.collapse_addresses. Unfortunately, I was unable to use it as-is, because I don’t have a collection of networks, but rather object instances which have “network” as a member variable. And once collapsed, there would be no way to correlate the results back to the original instances.

So I decided to dive into the stdlib source code and get some “inspiration” to accomplish this task. To me, the code was fairly difficult to follow: about 60 lines comprising two functions, one of which calls the other recursively.

I thought I could do better, and preliminary tests are promising. It’s no longer recursive (it’s shift-reduce-ish, if you will) and about 30 lines shorter. Now, the original code does some type checking which I might decide to add later on, increasing the line count a bit and maybe even hurting performance. I’m still confident.
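To make the idea more concrete, here is a minimal sketch of such a stack-based collapse. It is not the code from this post, just an illustration of the “shift-reduce” approach, and it assumes all inputs are ipaddress network objects of the same IP version:

    import ipaddress

    def collapse(networks):
        stack = []
        for net in sorted(networks):
            # "shift" the next network onto the stack ...
            stack.append(net)
            # ... then "reduce" the top of the stack as long as possible.
            while len(stack) > 1:
                a, b = stack[-2], stack[-1]
                if (b.network_address >= a.network_address and
                        b.broadcast_address <= a.broadcast_address):
                    # b is already covered by a: drop it.
                    stack.pop()
                elif (a.prefixlen == b.prefixlen
                        and a.supernet() == b.supernet()):
                    # a and b are the two halves of the same supernet: merge.
                    stack[-2:] = [a.supernet()]
                else:
                    break
        return stack

    nets = [ipaddress.ip_network(n) for n in
            ('192.0.2.0/25', '192.0.2.128/25', '198.51.100.0/24')]
    print(collapse(nets))
    # [IPv4Network('192.0.2.0/24'), IPv4Network('198.51.100.0/24')]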

A run with 300k IPv6 networks took 93 seconds with the new algorithm, using up 490MB of memory. The old stdlib code took 230 seconds to finish, with a peak memory usage of 550MB. All in all, good results.

Note that in both cases the 300k addresses had to be loaded into memory first, so they take up a considerable amount as well, but that part is the same in both runs.

I still have an idea in mind to improve the memory usage. I’ll give that a try.

Here are a few stats:

With the new algorithm:

collapsing 300000 IPv6 networks 1 times
generating 300000 addresses...
... done
new:  92.98410562699428
        Command being timed: "./env/bin/python mantest.py 300000"
        User time (seconds): 92.79
        System time (seconds): 0.28
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:33.07
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 491496
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 123911
        Voluntary context switches: 1
        Involuntary context switches: 154
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

and with the old algorithm:

collapsing 300000 IPv6 networks 1 times
generating 300000 addresses...
... done
old:  229.66894743399462
        Command being timed: "./env/bin/python mantest.py 300000"
        User time (seconds): 229.35
        System time (seconds): 0.38
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 3:49.76
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 549592
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 144970
        Voluntary context switches: 1
        Involuntary context switches: 1218
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

I’ll add more details as I go… I’m too “into it” and keep losing track of time and forgetting to post the fun stuff on-line… stay tuned.

Posted in Python | No Comments »

Setting up Cygwin/X

January 15th, 2014 by exhuma.twn

In this article we will set up Cygwin with an X11 server, so you can use X11
forwarding to run remote graphical applications on Windows. To allow a
password-less log-in, we will use public-key authentication. Even though this
is technically out of the scope of this document, I will summarize the
necessary steps to make this a comprehensive guide.
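The public-key part, in a nutshell (a sketch: host and user names are placeholders, and it assumes Cygwin’s openssh package is installed):

    # On the Cygwin side: generate a key pair, then install the public
    # key on the remote host.
    ssh-keygen -t rsa
    ssh-copy-id user@remotehost

    # With the X server running, point clients at the local display and
    # connect with X11 forwarding enabled.
    export DISPLAY=:0
    ssh -Y user@remotehost xterm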

Posted in Linux | No Comments »

Colourising python logging for console output.

December 27th, 2013 by exhuma.twn

I’ve seen my fair share of code fragments colourising console output, especially when using logging. Sometimes the colour codes are directly embedded into the format string, which makes it really hairy to deal with different colours for different levels. Sometimes even the log message is wrapped in a colour string, along the lines of LOG.info("{YELLOW}Message{NORMAL}"), or something equally atrocious.

Most logging frameworks support this use-case with “Formatters”. Use them! Here’s a quick example of how to do it “the right way™”:
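For the impatient, a minimal sketch of the idea (the complete code lives in the gist linked below; the colours and format string here are just placeholders):

    import logging

    # Map log levels to ANSI escape sequences.
    COLORS = {
        logging.DEBUG: '\033[36m',       # cyan
        logging.INFO: '\033[32m',        # green
        logging.WARNING: '\033[33m',     # yellow
        logging.ERROR: '\033[31m',       # red
        logging.CRITICAL: '\033[1;31m',  # bold red
    }
    RESET = '\033[0m'

    class ColorFormatter(logging.Formatter):

        def format(self, record):
            # Let the base class do the actual formatting, then wrap the
            # finished line in the colour for this record's level.
            line = super().format(record)
            return COLORS.get(record.levelno, RESET) + line + RESET

    handler = logging.StreamHandler()
    handler.setFormatter(ColorFormatter('%(levelname)s %(message)s'))
    logging.getLogger().addHandler(handler)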

Disclaimer: for whatever reason, this gist is borking the foobar.lu theme. I’m guessing it’s the UTF-8 char in the docstring, so maybe a web-server misconfig? Either way, I’ll have to link it the “old way”! Go figure…

Clicky clicky → https://gist.github.com/exhuma/8147910

Posted in Python | No Comments »

Introduction to google-closure with plovr

September 1st, 2013 by exhuma.twn

I’m about to embark on a quest to understand the development of custom google-closure components (UI widgets, if you will). Reading through the relevant section in “Closure – The Definitive Guide” makes me believe it’s not all too difficult. But there are still a bunch of concepts I need to familiarize myself with. This article briefly outlines my aim for this “learning trail”, and starts off with a tiny HelloWorld project using plovr. It assumes a minimal knowledge of google closure (you should know what “provides” and “requires” are; “exportSymbol” should also not surprise you).

Posted in JavaScript | No Comments »

Automagic __repr__ for SQLAlchemy entities with primary key columns with Declarative Base.

July 5th, 2013 by exhuma.twn

According to the Python documentation on __repr__, a call to repr() should give you a valid Python expression if possible. This is a very useful guideline, and it is also something I like to implement in my Python projects as much as possible.

Now, for mapped database entities, you might argue that it makes sense to have a default constructor as long as it accepts the primary key columns.

By default, it is possible to create new instances by specifying column values in SQLAlchemy. For example:

user = User(name=u'John Doe', email=u'john.doe@example.com')

It should be possible to create such “repr” values automatically for primary keys: all the required meta information is available. Digging through the SA docs, I found that it is possible to customize Base in order to add behaviour to all mapped entities!

Here’s the result:
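In sketch form (illustrative, not necessarily the verbatim gist), a customized declarative Base that builds the repr from the mapper’s primary key columns could look like this:

    from sqlalchemy import inspect
    from sqlalchemy.ext.declarative import declarative_base

    class _ReprBase(object):

        def __repr__(self):
            # Build "ClassName(pk_attr=value, ...)" from the primary key
            # columns of the mapped class.
            mapper = inspect(type(self))
            pk_attrs = (mapper.get_property_by_column(col).key
                        for col in mapper.primary_key)
            fields = ', '.join('%s=%r' % (key, getattr(self, key))
                               for key in pk_attrs)
            return '%s(%s)' % (type(self).__name__, fields)

    Base = declarative_base(cls=_ReprBase)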

With this in place, all representations of DB entities will finally make sense and be copy/pasteable directly into your code.

Of course, by the nature of ORMs, new instances created this way will be detached from the session and need to be merged before you can do any DB-related operations on them! A simple example:

from mymodel import User, Session

sess = Session()
user = User(name=u'John Doe', email=u'john.doe@example.com')
user = sess.merge(user)
sess.refresh(user)

Posted in Python | 6 Comments »

Uploading the contents of a variable using fabric

June 25th, 2013 by exhuma.twn

More than once, I have needed to create files on the staging/production box which I had no need for on the local development box (for example, complex logging configuration).

This fragment contains a simple function which tries to do this in a safe manner, while also ensuring proper cleanup in case of failure.
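In sketch form (assuming Fabric 1.x, where put() accepts a file-like object; the temp-file naming is arbitrary), the idea is to upload into a temporary file, only move it into place on success, and clean up on failure:

    from StringIO import StringIO

    from fabric.api import put, run

    def upload_content(content, remote_path):
        # Upload into a temporary file first, so a failed transfer never
        # leaves a half-written file at remote_path.
        tmp = remote_path + '.tmp'
        try:
            put(StringIO(content), tmp)
            run('mv %s %s' % (tmp, remote_path))
        except Exception:
            run('rm -f %s' % tmp)
            raise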

Posted in Coding Voodoo, Python | No Comments »

Formatting PostgreSQL CSV logs

April 24th, 2013 by exhuma.twn

The problem

Today I needed to keep an eye on PostgreSQL logs. Luckily, I decided upon installation to log everything using the “csvlog” format. But there’s a small catch, depending on how you read that log: newline characters in database queries.

This has nothing to do with PostgreSQL directly. In fact, it does the right thing, in that it quotes all required fields. Now, a quoted field can contain a newline character. But if you read the file on a line-by-line basis (using methods like file_handle.readline), this will cause problems. No matter what programming language you use, if you call readline, it will read up to the next newline character and return that. So, let’s say you have the following CSV record:

2013-03-21 10:41:19.651 CET,"ipbase","ipbase_test",13426,"[local]",514ad5bf.3472,139,"SELECT",2013-03-21 10:41:19 CET,2/5828,3741,LOG,00000,"duration: 0.404 ms  statement: SELECT\n                 p2.device,\n                 p2.scope,\n                 p2.label,\n                 p2.direction\n             FROM port p1\n             INNER JOIN port p2 USING (link)\n             WHERE p1.device='E'\n             AND p1.scope='provisioned'\n             AND p1.label='Eg'\n             AND (p1.device = p2.device\n                 AND p1.scope = p2.scope\n                 AND p1.label=p2.label) = false",,,,,,,,,""

If you read this naïvely with “readline” calls, you will get the following:

 1:2013-03-21 10:41:19.651 CET,"ipbase","ipbase_test",13426,"[local]",514ad5bf.3472,139,"SELECT",2013-03-21 10:41:19 CET,2/5828,3741,LOG,00000,"duration: 0.404 ms  statement: SELECT
 2:                p2.device,
 3:                p2.scope,
 4:                p2.label,
 5:                p2.direction
 6:            FROM port p1
 7:            INNER JOIN port p2 USING (link)
 8:            WHERE p1.device='E'
 9:            AND p1.scope='provisioned'
10:            AND p1.label='Eg'
11:            AND (p1.device = p2.device
12:                AND p1.scope = p2.scope
13:                AND p1.label=p2.label) = false",,,,,,,,,""

Now, this is really annoying if you want to parse the file properly.

The solution

Read the file byte-by-byte, and feed a line to the CSV parser only when you hit a newline outside of quoted text. Obviously, you should consider the newline style (\n, \r or \r\n) and the proper quote and escape characters when doing this.

What about Python?

It turns out Python’s csv module suffers from this problem: the built-in CSV module reads files line-by-line. However, it is possible to override the default behavior.
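In sketch form, the override might look like this: read character by character and only hand complete records to csv.reader. (Simplified on purpose: it assumes \n newlines and the default double-quote character, with embedded quotes escaped by doubling.)

    import csv

    def records(stream, quote='"', newline='\n'):
        # Accumulate characters until a newline shows up *outside* of
        # quoted text, then emit the buffer as one complete record.
        buf = []
        in_quotes = False
        while True:
            char = stream.read(1)
            if not char:
                break
            if char == quote:
                # A doubled quote ("") toggles twice and cancels out.
                in_quotes = not in_quotes
            if char == newline and not in_quotes:
                yield ''.join(buf)
                buf = []
            else:
                buf.append(char)
        if buf:
            yield ''.join(buf)

    with open('postgresql.csv') as logfile:
        for row in csv.reader(records(logfile)):
            print(row)  # each row is now one complete csvlog record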

For my own purposes, I wrote a simple script which reads from the postgres log until interrupted.

You are free to use it for your own purposes, and to modify or extend it as you like.

You can find it here: exhuma/postgresql-logmon

Posted in Python | No Comments »

Recovering from a corrupted git repo

February 23rd, 2013 by exhuma.twn

I do a lot of work on the go. Offline. Sometimes it takes a long time to push changes to a remote repository. As always, Murphy’s law applies, and the one repo that blows up in my face is the one with ten days’ worth of work in it.

While working, my laptop suddenly hung. Music looping. No mouse movement. Nothing. The only possible solution was a cold reboot. I was not worried: everything was saved, I had only changed a few lines, and I could easily recover if something went awry. So I rebooted.

Once back in the system, I immediately wanted to do a git status and git diff. Git spat back the following error message:

jukebox$ git st
fatal: object 9bd41c2f96f295924af92a9da175cb3686f13359 is corrupted

My laptop had shown some strange and erratic behaviour over the last few days already. I had already left a memtest running for about 24 hours earlier that week, without errors. The only possible explanation left was the hard disk.

Fun times ahead! 10 days of work at risk… 10 days of important changes! Sweat was building up on my forehead. Bloody sweat!

I trust my tools to keep my code safe. I trust git. I trust vim. I do microscopic commits, and I knew my current uncommitted changes only involved a few lines. So maybe only the last commit got corrupted? Let’s see…

Posted in Coding Voodoo | No Comments »
