1. BSD License Audit

    I recently did an audit of the "BSD" licenses in the FreeBSD ports tree. This pertains strictly to ports defined as LICENSE=BSD, which could be any one of several licenses. Manually verifying the license of each port was an extremely tedious process, but it is now complete except for a dozen ports whose licenses are unidentifiable or still awaiting email responses from the authors.

    Things I've learned:

    • Lots of people don't understand open source licenses and mislabel their own; they'll call an MIT license "BSD", and so on.

    • Services like PyPI don't get any more granular than "BSD", which made this audit frustrating and perpetuates the idea that there is a single "BSD" license. Go look at PKG-INFO files -- they just say License: BSD.
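    To illustrate, here's a fabricated PKG-INFO excerpt (the package name and metadata are made up, but real PyPI packages look just like this):

    ```shell
    # Fabricated PKG-INFO metadata -- real PyPI packages are no more specific
    f=$(mktemp)
    cat > "$f" <<'EOF'
    Metadata-Version: 1.1
    Name: example-package
    Version: 1.0
    License: BSD
    EOF

    # All the metadata tells you is the bare word "BSD" -- which BSD? Who knows.
    grep '^License:' "$f"
    rm -f "$f"
    ```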

    • Developers have this fantastic habit of declaring "This project is under the BSD license" and then never pointing the end user to any license text anywhere.

    • Many people are leaving their LICENSE or COPYING files out of their release tarballs -- incredibly daft of them.

    • BSD community members seem to understand that when you author software you license files, not entire projects, and they put the license in the header of every source file. (Thanks!)

    • Some people think they can just edit standard licenses because they're smarter than the lawyers who helped develop them, causing unnecessary work for me and others. (ZPL 2.1 with a clause cut out)

    • There are far too many variants of the MIT license.

    • OpenBSD actually uses the ISC license (ISCL in ports terms), not a classic BSD license. (Don't worry, it's just shorter.)

    • Even Debian can make mistakes. (That's not a GPLv3 license.)

    • There are tons of copies of the BSD 3-CLAUSE out there with clauses numbered 1., 2., and 4. It makes me chuckle every time I see it.

    • An unofficial BSD 1-CLAUSE is floating around in use by a few projects, which indicates the author only cares about source distribution and not binaries...

    • The Sendmail license had an older variant that implied that you have to fly to California to defend yourself if you violate it.

    • Never trust the license of a package. If you're a vendor you better verify it by hand before selling your product.
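    For fun, those mis-numbered "3-clause" variants are easy to spot mechanically. A quick sketch with fabricated clause text standing in for the real license wording:

    ```shell
    # Fabricated mis-numbered BSD text: clause 3 (the old advertising clause)
    # was deleted without renumbering, leaving 1., 2., and 4.
    f=$(mktemp)
    printf '%s\n' \
        '1. Redistributions of source code must retain...' \
        '2. Redistributions in binary form must reproduce...' \
        '4. Neither the name of the project...' > "$f"

    # Print the clause numbers that actually appear
    grep -o '^[0-9]\.' "$f"
    rm -f "$f"
    ```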

    Results:

       1 ART20
       1 BSD1
       1 BSD2 BSD3 ART10
       1 BSD2 MIT
       1 BSD3 TclTk
       1 CC
       1 CPL
       1 GPLv2 BSD3CLAUSE BSD4CLAUSE
       1 GPLv2 ISCL
       1 GPLv3
       1 PHP202
       1 PHP30
       1 Sendmail
       1 ZPL21
       2 BSD2 BSD3
       2 BSD3 MIT
       2 REPOZE -- ZPL21 modified
       4 GPLv2
       4 TclTk
       5 CUSTOM
       8 BSD4
      17 ISCL
      24 MIT
      62 BSD2
     148 BSD3
    

    This isn't 100% accurate either: some ports had multiple licenses defined and I only fixed and noted the "BSD" one. The entries above with multiple licenses listed are cases where I discovered the project didn't fit strictly under a single license.

    What a nightmare.

  2. Outlook-compatible WebDav with Nginx

    Microsoft Outlook has a Publish Online feature for sharing specific calendar information by publishing iCal files to WebDav. I don't use Apache on my personal servers, so here's how to configure it on Nginx.

    You first need to ensure that you have both Nginx WebDAV modules installed: the built-in http_dav module and the third-party dav_ext module. You need dav_ext because Outlook issues requests (PROPFIND and OPTIONS) that only that module provides. Other calendar clients may not have the same requirements.
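    You can check which modules a build has by inspecting nginx -V. As a sketch against a captured sample of that output (the configure flags shown here are fabricated for illustration):

    ```shell
    # Fabricated `nginx -V` configure line -- on a real server, capture it with:
    #   nginx -V 2>&1
    sample='configure arguments: --with-http_dav_module --add-module=../nginx-dav-ext-module'

    # Both the stock DAV module and the ext module need to be present
    case "$sample" in
      *http_dav_module*dav-ext*) echo "WebDAV modules present" ;;
      *)                         echo "missing a DAV module" ;;
    esac
    ```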

    After you have both of these installed you can configure your webdav share like this:

    server {
            listen  80;
            listen  [::]:80;
            server_name cal.yourdomain.com;
            root /usr/local/www/caldav;
    
            location / {
                    dav_methods     PUT DELETE MKCOL COPY MOVE;
                    dav_ext_methods     PROPFIND OPTIONS;
                    create_full_put_path    on;
                    dav_access      user:rw  group:rw  all:r;
            }
    }
    

    You should probably secure this with SSL and possibly password protect your webdav share. Using the limit_except Nginx feature you could be clever and allow read access from everyone but prevent publishing without a password. This will prevent your server from being a target for public file storage. :-)

        limit_except GET {
            auth_basic "Restricted for authorized users!";
            auth_basic_user_file        /usr/local/etc/nginx/htpasswd;
        }
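
    Putting the two snippets together, limit_except nests inside the DAV location (the htpasswd path is whatever file you created with your password tool of choice):

    ```
    location / {
            dav_methods     PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods     PROPFIND OPTIONS;
            create_full_put_path    on;
            dav_access      user:rw  group:rw  all:r;

            limit_except GET {
                    auth_basic "Restricted for authorized users!";
                    auth_basic_user_file        /usr/local/etc/nginx/htpasswd;
            }
    }
    ```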
    
  3. Setting up Xymon with Nginx

    Xymon has been a favorite monitoring tool of mine for quite some time now, largely due to its simplicity and flexibility. However, I despise running Apache unless absolutely necessary. Previous attempts at getting Nginx and Xymon to play nice were not successful without some lazy hacks, but I finally sat down and made it work as cleanly as possible. See the Nginx config below. You may have to make minor adjustments if you're not on FreeBSD.

    server {
            listen 80;
            listen [::]:80;
            server_name xymon.feld.me;
            index   index.html;
            root /usr/local/www/xymon/server/www;
    
            location /xymon/ {
                    alias /usr/local/www/xymon/server/www/;
            }
    
            location /cgi-bin/ {
                    alias  /usr/local/www/xymon/cgi-bin/;
            }
    
            location /cgi-secure/ {
                    alias  /usr/local/www/xymon/cgi-secure/;
            }
    
            location ~ ^/.*\.sh$ {
                    gzip off;
                    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                    fastcgi_param DOCUMENT_ROOT /usr/local/www/xymon/;
                    fastcgi_param REMOTE_USER $remote_user;
                    include fastcgi_params;
                    fastcgi_pass unix:/var/run/fcgiwrap/fcgiwrap.sock;
            }
    }
    

    Upstream Xymon supplies an Apache configuration that aliases /xymon-cgi/ to /cgi-bin/ and /xymon-seccgi/ to /cgi-secure/. Due to the way that Nginx handles aliases and fastcgi SCRIPT_NAME we don't have much choice here. We're just going to use them exactly as they're named on the filesystem and adjust Xymon to play along.
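
    The .sh location above hands CGI execution off to fcgiwrap, which isn't part of Xymon. On FreeBSD it's the www/fcgiwrap port; something along these lines in /etc/rc.conf should produce the socket the config expects (variable names per that port's rc script -- double-check against yours):

    ```
    fcgiwrap_enable="YES"
    fcgiwrap_user="www"
    fcgiwrap_socket="unix:/var/run/fcgiwrap/fcgiwrap.sock"
    ```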

    Change the following in your xymonserver.cfg:

    XYMONSERVERCGIURL="/cgi-bin"
    XYMONSERVERSECURECGIURL="/cgi-secure"
    

    Now you should have a fully functional Xymon interface served by Nginx. I strongly suggest you protect the Xymon interface or at least the /cgi-secure/ with a password, though. I'll leave that up to the reader.

  4. Kindly Subverting POODLE

    Let's pretend for a moment you live in a world where you need to protect your customers from POODLE without completely breaking access for IE6 users. Scary errors or a complete failure to connect to the server are not options. Well then, this blog post is for you!

    This solution varies based on where SSL is being terminated. If it's on Apache or Nginx this is easy. If you have a different webserver or hardware load balancers you might have to investigate on your own. You should be able to apply these same techniques. If not, I suggest upgrading your infrastructure to something more modern. :-)

    Keep in mind this will likely cause you to fail some audits. However, with proof of your configuration you could contest the failure and (hopefully) win. I wouldn't run this permanently either. Maybe a month or two until your customers get a clue.

    Note: In these examples I redirect the end user to Microsoft's Windows XP End of Support page which has a lot of good information, but you might want to redirect the user to a company-branded page with your own message.

    Apache:

    RewriteCond %{SSL_PROTOCOL} = SSLv3
    RewriteRule .* http://windows.microsoft.com/en-us/windows/end-support-help [R=302,L]
    

    Nginx:

    if ($ssl_protocol = SSLv3) {
        return 302 http://windows.microsoft.com/en-us/windows/end-support-help;
    }
    

    Cisco ACE:

    You can't actually do this on the Cisco ACE, but it appears you could tell it to add a header and then you could watch for this header on the backend webservers and do a redirect similar to what we're doing above. I don't know the full syntax for the ACE but digging through the manual I found this gem:

    ssl header-insert session Protocol-Version
    

    The Cisco ACE only has the values TLSv1 and SSLv3 for Protocol-Version, so the following should work for Apache:

    RewriteCond %{HTTP:Protocol-Version} = SSLv3
    RewriteRule .* http://windows.microsoft.com/en-us/windows/end-support-help [R=302,L]
    

    And there you have it. You can inform your customers without sacrificing their security. The worst case scenario is a bad guy sniffs the traffic and captures a redirect. On a side note, this would also be a clever way to detect and inform of an SSL Downgrade attack...

    Have fun!

    Edit: Make sure you use 302 redirects so they aren't permanently cached by clients...

  5. pfSense On Citrix XenServer

    pfSense 2.2 snapshots are now based on FreeBSD 10 which means that support for Xen is built into the GENERIC kernel. This means virtualizing pfSense is very easy. If you install pfSense on Citrix XenServer it will not let you live migrate the VM to another host unless the Xen Tools are installed. Fortunately, there's an easy fix for that!

    Step 1: Install the tools. If you haven't installed the pkg utility yet it will automatically fetch that first.

    # pkg install xe-guest-utilities
    

    Step 2: Enable it to start on boot.

    Unfortunately pfSense has largely diverged from FreeBSD and won't run the rc scripts unless they end in .sh. We can fix this with a symlink. These two commands will enable it to run on boot.

    # echo "xenguest_enable=\"YES\"" >> /etc/rc.conf.local
    # ln -s /usr/local/etc/rc.d/xenguest /usr/local/etc/rc.d/xenguest.sh
    

    Step 3: Start the service manually or reboot.

    # service xenguest start
    

    Ta-da! Fully functional Xen Tools on your virtualized pfSense firewall.

    Note: If you happen to read this the day I wrote this you may find the memory reporting is broken and every few minutes some errors scroll on the console:

    bc: cannot find dc: No such file or directory
    bc: cannot find dc: No such file or directory
    bc: cannot find dc: No such file or directory
    bc: cannot find dc: No such file or directory
    bc: cannot find dc: No such file or directory
    

    You can hand-patch or just wait a week until the FreeBSD repo has xe-guest-utilities 6.0.2_3 or later.

    diff --git a/src/sbin/xe-update-guest-attrs b/src/sbin/xe-update-guest-attrs
    index 981f62f..44a0c63 100755
    --- a/src/sbin/xe-update-guest-attrs
    +++ b/src/sbin/xe-update-guest-attrs
    @@ -129,11 +129,11 @@ done
     if [ $MEMORY_MODE -eq 1 ] ; then
            # calc memory... used http://www.cyberciti.biz/files/scripts/freebsd-memory.pl.txt as guide
            pagesize=$(sysctl hw.pagesize | cut -d ':' -f2)
    -       memtotal=$(echo "$(sysctl hw.physmem | cut -d ':' -f2) / 1024" | bc)
    -       meminactive=$(echo "$(sysctl vm.stats.vm.v_inactive_count | cut -d ':' -f2) * $pagesize / 1024" | bc)
    -       memcache=$(echo "$(sysctl vm.stats.vm.v_cache_count | cut -d ':' -f2) * $pagesize / 1024" | bc)
    -       memfree=$(echo "$(sysctl vm.stats.vm.v_free_count | cut -d ':' -f2) * $pagesize / 1024" | bc)
    -       memavail=$(echo "$meminactive + $memcache + $memfree" | bc)
    +       memtotal=$(let "$(sysctl hw.physmem | cut -d ':' -f2) / 1024")
    +       meminactive=$(let "$(sysctl vm.stats.vm.v_inactive_count | cut -d ':' -f2) * $pagesize / 1024")
    +       memcache=$(let "$(sysctl vm.stats.vm.v_cache_count | cut -d ':' -f2) * $pagesize / 1024")
    +       memfree=$(let "$(sysctl vm.stats.vm.v_free_count | cut -d ':' -f2) * $pagesize / 1024")
    +       memavail=$(let "$meminactive + $memcache + $memfree")
    
            # we're using memavail as "free" for now
            xenstore_write_cached "data/meminfo_total" "$memtotal"
    

    It really just replaces each echo-through-bc pipeline with shell arithmetic via the let builtin.
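    The same computation works with plain $(( )) shell arithmetic too. Here's the idea, with made-up sample values standing in for the sysctl output:

    ```shell
    # Sample values standing in for `sysctl hw.pagesize` and
    # `sysctl vm.stats.vm.v_inactive_count` output
    pagesize=4096
    inactive_pages=25000

    # Inactive memory in KiB -- no bc (and therefore no dc) required
    meminactive=$((inactive_pages * pagesize / 1024))
    echo "$meminactive"
    ```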

  6. Archiveopteryx: The IMAP Server You Always Wanted

    Archiveopteryx (aox) is a highly scalable PostgreSQL-backed IMAP/POP server. As described on its website:

    Archiveopteryx is an Internet mail server, optimised to support long-term archival storage. It seeks to make it practical not only to manage large archives, but to use the information therein on a daily basis instead of relegating it to offline storage.

    and

    Archiveopteryx is designed to impose no limits on the size or usage of the archive to the extent of the server hardware's capabilities.

    With aox it's possible to have millions of messages in an IMAP folder without experiencing performance issues. My own Postgres configuration is moderately tuned and performs spectacularly. Because the mail lives in a database, large operations such as marking thousands of messages as read or moving them between IMAP folders are fast and inexpensive.

    Notable features:

    • Automatic deduplication
    • Retention: Specify which messages must be deleted/retained, including by search.
    • Undelete: Search for accidentally deleted messages and recover them.
    • Export: Search for messages and export them.
    • Easy backup: just a SQL dump!
    • LDAP authentication
    • Bleeding edge RFC support -- many new IMAP features land here first!

    For those curious how this compares to DBMail read here.

    One thing to note about aox is the goal of "long-term archival storage". Aox intends for you to be able to read your email with a mail client written 20 years from now. Your mail will be safely stored and RFC compliant. Mail clients of the future should not have to implement thousands of quirks and workarounds to deal with malformed messages from poorly written mail clients of old. This means that in some circumstances your mail may be modified to be standards compliant: headers changed slightly, encoding fixed, etc. This will not be noticeable when read, unless it happens to a PGP-signed mail, in which case it can break the signature. I occasionally see this happen, but it's not been a major concern of mine. If this doesn't worry you, forge ahead!

    Installation & Setup

    On FreeBSD:

    # pkg install archiveopteryx
    

    We will now run the installer. If the Postgres server is local, running the installer as the pgsql user will create the aox database and accounts. If the database is not local, I will leave it as an exercise to the reader to grant the installer the remote database access needed to create the required accounts and database.

    Make sure the citext extension is available.

    On FreeBSD this is in the postgresql93-contrib package for Postgres 9.3 servers.

    # /usr/local/libexec/aox/installer
    Connecting to Postgres server /tmp/.s.PGSQL.5432 as Unix user pgsql.
    Creating the 'aox' PostgreSQL user.
    Creating the 'aoxsuper' PostgreSQL user.
    Creating the 'archiveopteryx' database.
    Adding citext to the 'archiveopteryx' database.
    Loading the database schema.
    SET
    SET
    CREATE TABLE
    INSERT 0 1
    CREATE EXTENSION
    CREATE TABLE
    CREATE INDEX
    CREATE FUNCTION
    (yadda yadda this goes on for a bit)
    ...
    Granting database privileges.
    Generating default /usr/local/etc/archiveopteryx/archiveopteryx.conf
    Generating default /usr/local/etc/archiveopteryx/aoxsuper.conf
    Setting ownership and permissions on
    /usr/local/etc/archiveopteryx/archiveopteryx.conf
    Done.
    

    Add to /etc/rc.conf:

    archiveopteryx_enable="YES"
    

    Let's start it up and create a user. This will add a user and prompt for the password. The username can be anything you want, or the email address if that's more convenient.

    # service archiveopteryx start
    # aox add user -p <username> <email-address>
    

    You may want to use your own SSL/TLS key. Concat the certificate, key, and chain into a single file and add an entry to /usr/local/etc/archiveopteryx/archiveopteryx.conf:

    tls-certificate = /usr/local/etc/archiveopteryx/yourcert.pem
    

    Restart the service and you're ready to connect your mail client, webmail, etc to the server and start using it!

    Again, the other intricacies of email hosting are left as an exercise to the reader: LMTP delivery from your favorite MTA, spam filtering, valid matching A and PTR records, SPF, DKIM, etc.

    I guarantee this setup is easier than competing IMAP servers, with significantly fewer confusing knobs to turn.

    Other tips

    Import mail from mbox or maildir with:

    # aox import
    

    Importing lots of mail? Turning off the SQL index might help speed things up:

    # aox tune database mostly-writing
    

    Turn it back on when finished (be patient!):

    # aox tune database mostly-reading
    

    A nightly cron will clean up the database by removing emails that have expired beyond the configured undelete-time:

    0 0 * * *  root  /usr/local/bin/aox vacuum
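
    The undelete-time knob itself lives in archiveopteryx.conf; the value is in days (49 is the shipped default, if I recall correctly -- verify against your install):

    ```
    undelete-time = 49
    ```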
    

    Don't care about enforcing IMAP Quotas? Turn them off in archiveopteryx.conf; they're expensive:

    use-imap-quota = off
    

    The official aox releases are rare. The authors do a full audit of the codebase for each release, which takes significant time. Following git is encouraged if you need a certain bugfix or feature. The following IMAP features missed the 3.2.0 release because their reliability couldn't be vetted in time:

    THREAD=ORDEREDSUBJECT
    THREAD=REFS
    THREAD=REFERENCES
    

    Their absence means you won't see mail threads in webmail clients like Roundcube but it's easy to patch them back in.

  7. SSH Two Factor Authentication on FreeBSD

    Setting up two factor auth for SSH on FreeBSD is actually quite simple. This can be achieved with minimal effort via the security/pam_google_authenticator port.

    # pkg install pam_google_authenticator
    

    Edit /etc/pam.d/sshd and add the following line at the top of the list:

    auth            required        /usr/local/lib/pam_google_authenticator.so
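
    For the OTP prompt to actually reach the user over SSH, sshd must allow keyboard-interactive (challenge-response) authentication through PAM. These are the defaults on most FreeBSD installs, but it's worth verifying in /etc/ssh/sshd_config, then restarting sshd:

    ```
    ChallengeResponseAuthentication yes
    UsePAM yes
    ```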
    

    Now each user who wants to log in via ssh with a password has to generate their two-factor secret with the google-authenticator command. Users authenticating with an SSH key bypass the two factor authentication entirely. Here's an example of the process:

    $ google-authenticator 
    
    Do you want authentication tokens to be time-based (y/n) y
    https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/feld@my.server.name%3Fsecret%3DXD2SJCBPO2NAMGTS
    Your new secret key is: XD2SJCBPO2NAMGTS
    Your verification code is 666608
    Your emergency scratch codes are:
      83144609
      39391374
      49272727
      99788106
      18387881
    
    Do you want me to update your "/home/feld/.google_authenticator" file (y/n) y
    
    Do you want to disallow multiple uses of the same authentication
    token? This restricts you to one login about every 30s, but it increases
    your chances to notice or even prevent man-in-the-middle attacks (y/n) y
    
    By default, tokens are good for 30 seconds and in order to compensate for
    possible time-skew between the client and the server, we allow an extra
    token before and after the current time. If you experience problems with poor
    time synchronization, you can increase the window from its default
    size of 1:30min to about 4min. Do you want to do so (y/n) y
    
    If the computer that you are logging into isn't hardened against brute-force
    login attempts, you can enable rate-limiting for the authentication module.
    By default, this limits attackers to no more than 3 login attempts every 30s.
    Do you want to enable rate-limiting (y/n) n
    

    See, easy!

  8. FreeBSD Poudriere Cheat Sheet

    On FreeBSD poudriere is now the best way to maintain your software from the ports tree. It provides a cleanroom build environment and your packages will always be built properly. Manual installation and portmaster are certainly still viable, but they should be handled with care by advanced users. For those who cannot use the public FreeBSD pkg repository, here's a quick rundown on maintaining your own.

    Install poudriere. At this time I recommend poudriere from packages or your own ports tree.

    # cd /usr/ports/ports-mgmt/poudriere
    # make install clean
    

    Edit poudriere.conf to your own liking.

    # vi /usr/local/etc/poudriere.conf
    

    Create a build jail. If you want to target FreeBSD 10.1 amd64:

    # poudriere jail -c -j 101amd64 -v 10.1-RELEASE -a amd64
    

    Create the poudriere ports tree. I recommend following svn.

    # poudriere ports -c -m svn+http
    

    Generate a list of packages you'd like to build and put them in a text file. I prefer to call mine origins.txt.

    audio/beets
    audio/murmur
    comms/minicom
    comms/picocom
    ... etc
    

    Set the build options for your build environment and ports. Edit /usr/local/etc/poudriere.d/make.conf

    DEFAULT_VERSIONS= mysql=5.5 pgsql=9.3 php=5
    
    # custom options
    accessibility_redshift_SET= GNOME GUI
    audio_beets_SET= BEATPORT CHROMA DISCOGS FFMPEG
    audio_murmur_UNSET= ICE
    ... etc
    

    Setup the web interface for poudriere. I use nginx, so here's my vhost config:

    server {
            listen       1.2.3.4:80;
            server_name  pkg.feld.me;
            root         /usr/local/share/poudriere/html;
    
            # Allow caching static resources
            location ~* ^.+\.(jpg|jpeg|gif|png|ico|svg|woff|css|js|html)$ {
                    add_header Cache-Control "public";
                    expires 2d;
            }
    
            location /data {
                    alias /usr/local/poudriere/data/logs/bulk;
    
                    # Allow caching dynamic files but ensure they get rechecked
                    location ~* ^.+\.(log|txz|tbz|bz2|gz)$ {
                            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                    }
    
                    # Don't log json requests as they come in frequently and ensure
                    # caching works as expected
                    location ~* ^.+\.(json)$ {
                            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                            access_log off;
                            log_not_found off;
                    }
    
                    # Allow indexing only in log dirs
                    location ~ /data/?.*/(logs|latest-per-pkg)/ {
                            autoindex on;
                    }
    
                    break;
            }
    
            location /repo {
                    alias /usr/local/poudriere/data/packages;
            }
    }
    

    The clients will be fetching from http://pkg.feld.me/repo/${ABI}/. ${ABI} will expand to FreeBSD:10:amd64. You cannot use FreeBSD:10:amd64 as a build jail name because of the colons, so make a symlink so this just works automagically.

    # cd /usr/local/poudriere/data/packages
    # ln -s 101amd64-default FreeBSD:10:amd64
    

    Setup a build script to update ports, clean poudriere package repo for unused packages, and build your list of packages:

    #!/bin/sh
    
    poudriere ports -u
    poudriere pkgclean -y -j 101amd64 -f /path/to/origins.txt
    poudriere bulk -j 101amd64 -f /path/to/origins.txt
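
    To run this unattended, a crontab entry along these lines works (the script path and 3 AM schedule are just examples):

    ```
    0 3 * * * root /usr/local/bin/poudriere-build.sh
    ```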
    

    Do your first build run!

    # poudriere bulk -j 101amd64 -f /path/to/origins.txt
    

    Now, your jails or servers can use your package repository. Until the release of pkg 1.3.0 I recommend you do not attempt to mix package repositories as it will not work as expected. On the clients/servers/jails you can use these two config files to activate your repo and disable the official FreeBSD repo:

    /usr/local/etc/pkg/repos/feld.me.conf

    feld: {
    url: "http://pkg.feld.me/repo/${ABI}/",
    mirror_type: "a",
    enabled: yes
    }
    

    /usr/local/etc/pkg/repos/freebsd.conf

    FreeBSD: {
    enabled: no
    }
    

    Now on your clients you can install packages with pkg. If you have additional needs such as signing your repository or different repositories with their own specific global options (sets) check the poudriere man page -- it's all there!

    04/09/2015: Updated to reflect some more recent changes

  9. New Blog: Pelican

    I've never been into blogging. I did a few articles for a friend on Timedoctor.org but never made time to write about stuff I'm working on. I most recently had my blog on Tumblr but I only wrote two articles and then gave up because remembering to log into Tumblr was inconvenient. I'm quite regularly shelled into my server so maybe I can jot down useful information and publish it on here.

    We'll see how this works out.

    Check out Pelican if you want something lightweight for your blog.

  10. Denon E400 firmware update loop

    My Denon E400 is a nice AVR, but for some reason it fails to do firmware updates when plugged into my Ubiquiti Toughswitch. If I attempt an update, it fails to connect to the server for some strange reason and gets stuck in an update loop with an error on the display that looks like

    ConnectionFailed10e

    The fix is to do a Network Card reset, which on this model happens to also be a microprocessor reset. Unfortunately you will be stuck with factory defaults, and this model doesn't seem to let you do a backup/restore of settings.

    sigh

    Anyway, the trick is to power off the device and hold down

    [SOURCE SELECT], [->] and [ZONE2 ON/OFF]

    while powering on until the display starts blinking for a few seconds. For the record, the regular microprocessor reset is done by holding down both of the SOURCE SELECT buttons while powering on and waiting for the display to blink a few times.


    Update, 2/19/15:

    My friend has an AVR X4000 and he needed to do the same after trying to update while plugged into a Ubiquiti. He called Denon and his reset sequence was the following:

    • Hold down the up and down buttons and power button simultaneously
    • Let the screen flash 5 times
    • Release
    • Navigate to the Setup Menu with your remote and then exit