A new tiebreaker for footy tipping

When people managed their workplace footy tipping competitions on paper, they needed a simple method to decide tiebreakers. The closest guess at the margin of the round’s blockbuster game was usually the way to sort end-of-week results, and this was sometimes applied cumulatively to sort end-of-season results.

Many online competitions follow this tradition and still use the same method to separate contestants on the same number of wins. It’s OK, but it could be improved. Now that tipping systems are automated, why not rely on the computer to provide a better ranking of participants?

The problem as I see it: Imagine two participants tip eight of nine games correctly. Participant A’s incorrect tip was in a game decided by a margin of one point, and Participant B incorrectly tipped a game decided by a margin of 80 points. Both participants were wrong about one game, but Participant B was “more wrong”. This, I think, ought to come into play for tiebreakers before worrying about choosing margins of blockbuster games.

Tip Quality

My suggestion is for a ‘tip quality’ figure (TQ), based on a similar principle to the AFL ladder’s percentage tiebreaker, to be introduced to separate those contestants on the same base score. For each correct tip, the participant’s TQ will increase according to a function of that game’s margin. For each incorrect tip, the participant’s TQ will decrease likewise. The competitor with the higher TQ will win any required tiebreak.

Initially I had imagined a simple calculation – that all margins in games tipped correctly be summed and added to the competitor’s TQ, and all margins in games tipped incorrectly be summed and subtracted from their TQ. However, margin alone is inadequate – TQ would need to consider the margin as a percentage of the combined total score of both teams to provide an adequate gauge.

Consider three hypothetical results of AFL matches:

Carlton   110 - 90 Collingwood 
Melbourne  60 - 40 St Kilda
Fremantle  30 - 10 Sydney

All three games were decided by a margin of twenty points, but while Carlton beat Collingwood by a margin equal to 10% of the total score, Melbourne beat St Kilda by 20% of the total score, and Fremantle beat Sydney by a whopping 50% of the total score. While there are many in-game variables that fans would consider when determining which game was the best match, all that a computer can go on is the raw score, which is appropriate anyway since it is score alone that decides the result. By that metric, Fremantle had the greater win.
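In code, the margin-as-percentage idea looks like this (a minimal sketch using the three hypothetical results above):

```python
# Margin expressed as a percentage of the combined total score,
# using the three hypothetical results above.
games = [
    ("Carlton", 110, "Collingwood", 90),
    ("Melbourne", 60, "St Kilda", 40),
    ("Fremantle", 30, "Sydney", 10),
]

def margin_pct(winner_score, loser_score):
    """Margin as a percentage of the total points scored in the game."""
    return 100.0 * (winner_score - loser_score) / (winner_score + loser_score)

for winner, ws, loser, ls in games:
    print("%s beat %s by %d points: %.0f%% of the total score"
          % (winner, loser, ws - ls, margin_pct(ws, ls)))
```

Identical 20-point margins produce very different percentages (10%, 20% and 50%), which is exactly the distinction the raw margin misses.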

Round 1, 2015

Here is an example from the AFL (Round 1, 2015). Both participants tipped five games correctly, but while Participant A’s incorrect tips were in reasonably close games, Participant B tipped a side that was flogged by twelve goals.

TQ example

The first game was Carlton vs Richmond. Both participants correctly tipped Richmond, who won by a margin of 27 points, or 14.75% of the 183 points scored in the game. For TQ purposes this percentage is used as a raw number, so 14.75 is added to each participant’s TQ.

The seventh game was Adelaide vs North Melbourne. The Crows won by a margin of 77 points, or 37.93% of the 203 points scored in the match. Participant A picked this correctly and had 37.93 added to their TQ. Participant B picked incorrectly and had 37.93 subtracted from their TQ.

At the end of Round 1, Participant A achieved a TQ of 80.14, which puts them ahead of Participant B’s 19.28, and via a TQ tiebreaker gives them the win for the week. For any individual week the score would be reset, but a cumulative value would be calculated over the course of the season to provide an end-of-year TQ value for each tipster, providing a tiebreaker for the final standings.
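The per-game TQ adjustment described above can be sketched as a small function (the function name is mine, and the Round 1 team scores below are reconstructed from the stated totals and margins):

```python
def tq_delta(winner_score, loser_score, tipped_correctly):
    """TQ change for one game: the margin as a percentage of the combined
    total score, added for a correct tip and subtracted for an incorrect one."""
    pct = 100.0 * (winner_score - loser_score) / (winner_score + loser_score)
    return pct if tipped_correctly else -pct

# Carlton v Richmond: 183 total points, 27-point margin (105 to 78)
print(round(tq_delta(105, 78, True), 2))    # 14.75 for both participants

# Adelaide v North Melbourne: 203 total points, 77-point margin (140 to 63)
print(round(tq_delta(140, 63, True), 2))    # 37.93 for Participant A
print(round(tq_delta(140, 63, False), 2))   # -37.93 for Participant B
```

Summing these deltas over a round (or a season) gives each tipster their TQ.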

The TQ is less useful in an individual round than it is over the course of a whole season, since people are more likely to pick the same combination of results in a single week. The “guess the margin” option could still be employed as a further tiebreaker in that scenario, and I think this is a fairer check to apply before resorting to that method.

Over the course of a home-and-away season it is unlikely that two people will have tipped the same combination of teams, so it provides an excellent tiebreaker. In the cutthroat world of footy tipping, with pride and money at stake, a tiebreaker that better reflects the skill (or fortune) of the competitors would be welcome, and a relatively simple task for an online tipping system to provide.

Delete old tweets selectively using Python and Tweepy

For some time I’ve used an online service to delete tweets that are more than one week old. I do this because I use Twitter for levity, for throwaway comments and retweets on issues of the day, and I don’t really want those saved for posterity. Thanks to search crawlers and caches I can never be certain that tweets are gone forever, but this is a small step in that direction.

When I joined Keybase I discovered that I needed to prevent my ‘proof’ tweet from being deleted, and the simple method used by the online deletion service was no longer an option. My solution uses an exception list containing the IDs of the tweets I wish to save, and these are ignored when their contemporaries are merged with the infinite.

I’ve written a Python script that uses Tweepy to scan the contents of my timeline and delete any tweet that meets two criteria – more than seven days old and not in my exception list. It’s very simple and there are probably better ways of doing it (please let me know), but it works well for me as a nightly cron job.
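The two criteria can be expressed as a standalone predicate (a sketch; the function name is mine, and `now` is injectable so the logic can be tested without hitting Twitter):

```python
from datetime import datetime, timedelta

def should_delete(tweet_id, created_at, ids_to_save, days_to_keep=7, now=None):
    """True only when a tweet is older than the cutoff AND not in the exception list."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days_to_keep)
    return tweet_id not in ids_to_save and created_at < cutoff

# An old tweet on the exception list is kept; any other old tweet goes.
now = datetime(2015, 3, 15)
print(should_delete(573245340398170114, datetime(2015, 3, 5), [573245340398170114], now=now))  # False
print(should_delete(111, datetime(2015, 3, 5), [573245340398170114], now=now))                 # True
```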

Please note that although I’ve been deleting my old tweets this way for some time, I’ve never had issues with the Twitter API rate limits. Every deletion is an API call, so if you have many tweets you may need to consider initially limiting the number returned via the .items() method. This is demonstrated in the Tweepy cursor tutorial.

To get the required authentication keys you will need to register a Twitter application.


Since my initial post I’ve added functionality to unfavor (or ‘unfavorite’) tweets, too. I’ve included the full script below.

#!/usr/bin/env python

import tweepy
from datetime import datetime, timedelta

# options
test_mode = False
verbose = False
delete_tweets = True
delete_favs = True
days_to_keep = 7

tweets_to_save = [
    573245340398170114, # keybase proof
    573395137637662721, # a tweet to this very post
]
favs_to_save = [
    362469775730946048, # tony this is icac
]

# auth and api
consumer_key = 'XXXXXXXX'
consumer_secret = 'XXXXXXXX'
access_token = 'XXXXXXXX'
access_token_secret = 'XXXXXXXX'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# set cutoff date, use utc to match twitter
cutoff_date = datetime.utcnow() - timedelta(days=days_to_keep)

# delete old tweets
if delete_tweets:
    # get all timeline tweets
    print "Retrieving timeline tweets"
    timeline = tweepy.Cursor(api.user_timeline).items()
    deletion_count = 0
    ignored_count = 0

    for tweet in timeline:
        # where tweets are not in save list and older than cutoff date
        if tweet.id not in tweets_to_save and tweet.created_at < cutoff_date:
            if verbose:
                print "Deleting %d: [%s] %s" % (tweet.id, tweet.created_at, tweet.text)
            if not test_mode:
                api.destroy_status(tweet.id)
            deletion_count += 1
        else:
            ignored_count += 1

    print "Deleted %d tweets, ignored %d" % (deletion_count, ignored_count)
else:
    print "Not deleting tweets"
# unfavor old favorites
if delete_favs:
    # get all favorites
    print "Retrieving favorite tweets"
    favorites = tweepy.Cursor(api.favorites).items()
    unfav_count = 0
    kept_count = 0

    for tweet in favorites:
        # where tweets are not in save list and older than cutoff date
        if tweet.id not in favs_to_save and tweet.created_at < cutoff_date:
            if verbose:
                print "Unfavoring %d: [%s] %s" % (tweet.id, tweet.created_at, tweet.text)
            if not test_mode:
                api.destroy_favorite(tweet.id)
            unfav_count += 1
        else:
            kept_count += 1

    print "Unfavored %d tweets, kept %d" % (unfav_count, kept_count)
else:
    print "Not unfavoring tweets"

Use Getflix or Unblock-Us servers selectively with Dnsmasq

I subscribe to Getflix, which is quite similar to Unblock-Us in that it allows users to access geo-blocked content. The basic method to use these services is to set one’s device to use their provided DNS servers, but this sends all DNS requests their way. I wanted to use their DNS servers only to resolve specific geo-blocked domains.

There are a couple of reasons you might want to do this – you may be concerned about yet another party being privy to your site visits, and in my case I wanted to retain the faster, closer DNS servers provided by my ISP for the majority of my web requests.

Dnsmasq is present in several flavours of custom firmware available for many consumer routers, but since that was unavailable to me I have set it up on my NAS, which runs Ubuntu Server. There are many guides for setting up Dnsmasq on many systems (for me it was as easy as “sudo apt-get install dnsmasq”), so I’ll stick to explaining why I’ve configured it as I have.

Here is my Dnsmasq configuration file. Much of this isn’t necessary for this goal but I’ve kept it intact for context. I’ll go through why I’ve made certain decisions and it may help someone else.

# /etc/dnsmasq.conf

# regular dns servers (IPs redacted)

# getflix primary dns

# getflix secondary dns

# settings
interface=em1       # accept requests from the em1 interface
bogus-priv          # don't forward non-routable (local) addresses
domain-needed       # don't forward incomplete hostnames (names without dots)
no-resolv           # don't read /etc/resolv.conf to get upstream servers
all-servers         # query all servers, use the first response
#strict-order       # query servers in the order they appear
domain=local        # set the domain name of this network
local=/local/       # set selected domains to only resolve locally
expand-hosts        # add our domain name to our local hostnames
cache-size=10000    # increase the cache to 10k records
no-hosts            # don't use the regular hosts file
addn-hosts=/etc/dnsmasq.hosts   # use alternate hosts file

# dhcp: set range, netmask and lease time for unidentified clients
read-ethers                     # read the /etc/ethers file for static assignment
dhcp-option=3,       # set the gateway (router)

# logging
log-facility=/var/log/dnsmasq   # log file
#log-queries                    # log dns queries
#log-dhcp                       # log dhcp activity

# disable a bunch of windows stuff
filterwin2k                     # block certain unnecessary windows requests
dhcp-option=19,0                # set ip-forwarding off
dhcp-option=44,          # set netbios-over-TCP/IP (WINS) nameserver(s)
dhcp-option=45,          # netbios datagram distribution server
dhcp-option=46,8                # netbios node type
dhcp-option=252,"\n"            # tell windows not to ask for proxy info
dhcp-option=vendor:MSFT,2,1i    # tell windows to release lease on shutdown

The upstream DNS servers have been selected by their speed from my location (according to namebench). Farther down I’ve also set the “all-servers” flag, which means that every request I make is resolved by each server that I’ve configured, and the first response is accepted. Like this fellow, I found that it resulted in a tremendous resolution speed increase. This is a terrible setting for a big network to use because of the increased traffic, but since I’m just a home user and since I’m caching my requests it’s not such a big deal. Were I not using this I might have gone for the “strict-order” option, to ensure that the faster servers I’ve listed at the top are tried first.

The Getflix server block defines which domains are to be resolved via the Getflix servers, using some domains I found here, plus a few more that they hadn’t updated at the time of writing. Each server line says that for each of these domains, this particular DNS server should be used to resolve it. I could have put all of them on one line, but preferred to separate them according to the service being accessed. I have repeated this whole block for the secondary Getflix DNS server.
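The shape of that block, using dnsmasq’s `server=/domain/address` syntax (the address and domains below are placeholders, not the real Getflix servers):

```
# getflix primary dns (203.0.113.1 is a placeholder address)
server=/netflix.com/203.0.113.1
server=/hulu.com/203.0.113.1
```

Any domain not matched by a `server=/…/` rule falls through to the regular upstream servers.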

I’ve commented the settings but it’s worth mentioning a few. I have specified the interface to listen on even though there’s only the one point of entry on my network. Recent versions of Dnsmasq block all traffic if nothing is specified here, which is the opposite to its previous behaviour.

I’ve specified that Dnsmasq is not to read nameservers from the /etc/resolv.conf file and not to read hostnames from the /etc/hosts file. Both of these are used by the system for other purposes as well, and I wanted to keep Dnsmasq ‘clean’. I’ve specified my own hosts file specifically for Dnsmasq instead. It looks something like this:

# /etc/dnsmasq.hosts (IPs redacted)
red
green
blue
yellow
purple

Dnsmasq is also being used as a DHCP server, so I’m specifying my gateway (the router) and an IP range to be used for unidentified clients. This includes a subnet value, which is required because my router is a DHCP relay. Thanks to the “read-ethers” option I can specify clients requiring static IPs in the /etc/ethers file, which looks a little like this:

# /etc/ethers

While troubleshooting my setup I was logging DHCP and DNS activity on top of the standard Dnsmasq reporting, but I’ve turned both off now. The final block of the config turns off a bunch of stuff related to Windows clients, which I do have, but my network is so small that they are pointless overheads.

That’s about it! Let me know if you have any questions about my configuration, or if you can help me improve upon it. My thanks to these articles, which pointed me in the right direction:


17 May 2014: Since posting this I’ve changed router, and the new one doesn’t support DHCP relaying. So I’m now doing DHCP on the router itself and am simply using Dnsmasq for DNS. I have commented out all of the DHCP lines in /etc/dnsmasq.conf and therefore no longer use /etc/ethers, but everything still works as before.