A new tiebreaker for footy tipping

When workplace footy tipping competitions were managed on paper, a simple method was needed to decide tiebreakers. The closest guess to the margin of the round’s blockbuster game was usually used to sort end-of-week results, and this was sometimes applied cumulatively to sort end-of-season results.

Many online competitions follow this tradition and still use the same method to separate contestants on the same number of wins. It’s OK, but it could be improved. Now that tipping systems are automated, why not rely on the computer to provide a better ranking of participants?

The problem as I see it: Imagine two participants tip eight of nine games correctly. Participant A’s incorrect tip was in a game decided by a margin of one point, and Participant B incorrectly tipped a game decided by a margin of 80 points. Both participants were wrong about one game, but Participant B was “more wrong”. This, I think, ought to come into play for tiebreakers before worrying about choosing margins of blockbuster games.

Tip Quality

My suggestion is for a ‘tip quality’ figure (TQ), based on a similar principle to the AFL ladder’s percentage tiebreaker, to be introduced to separate those contestants on the same base score. For each correct tip, the participant’s TQ will increase according to a function of that game’s margin. For each incorrect tip, the participant’s TQ will decrease likewise. The competitor with the higher TQ will win any required tiebreak.

Initially I had imagined a simple calculation – that all margins in games tipped correctly be summed and added to the competitor’s TQ, and all margins in games tipped incorrectly be summed and subtracted from their TQ. However, margin alone is inadequate – TQ would need to consider the margin as a percentage of the combined total score of both teams to provide an adequate gauge.

Consider three hypothetical results of AFL matches:

Carlton   110 - 90 Collingwood 
Melbourne  60 - 40 St Kilda
Fremantle  30 - 10 Sydney

All three games were decided by a margin of twenty points, but while Carlton beat Collingwood by a margin equal to 10% of the total score, Melbourne beat St Kilda by 20% of the total score, and Fremantle beat Sydney by a whopping 50% of the total score. While there are many in-game variables that fans would consider when determining which game was the best match, all that a computer can go on is the raw score, which is appropriate anyway since it is score alone that decides the result. By that metric, Fremantle had the greater win.
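The weighting above can be sketched in a few lines of Python. The function name and data layout are my own, not part of any tipping system:

```python
# Margin as a share of the combined score, for the three hypothetical games above.
def win_weight(winner_score, loser_score):
    """Return the winning margin as a percentage of the combined score."""
    margin = winner_score - loser_score
    total = winner_score + loser_score
    return 100 * margin / total

games = [
    ("Carlton", 110, "Collingwood", 90),
    ("Melbourne", 60, "St Kilda", 40),
    ("Fremantle", 30, "Sydney", 10),
]

for winner, ws, loser, ls in games:
    # Prints 10%, 20% and 50% respectively for the three games
    print(f"{winner} d. {loser}: {win_weight(ws, ls):.0f}% of total score")
```

Same raw margin in each game, but a very different share of the scoring.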

Round 1, 2015

Here is an example from the AFL (Round 1, 2015). Both participants tipped five games correctly, but while Participant A’s incorrect tips were in reasonably close games, Participant B tipped a side that was flogged by twelve goals.

[Image: TQ example]

The first game was Carlton vs Richmond. Both participants correctly tipped Richmond, who won by a margin of 27 points, or 14.75% of the 183 points scored in the game. For TQ purposes this figure is used as a raw number (not a percentage), so 14.75 is added to each participant’s TQ.

The seventh game was Adelaide vs North Melbourne. The Crows won by a margin of 77 points, or 37.93% of the 203 points scored in the match. Participant A picked this correctly and had 37.93 added to their TQ. Participant B picked incorrectly and had 37.93 subtracted from their TQ.

At the end of Round 1, Participant A achieved a TQ of 80.14, which puts them ahead of Participant B’s 19.28, and via a TQ tiebreaker gives them the win for the week. For any individual week the score would be reset, but a cumulative value would be calculated over the course of the season to provide an end-of-year TQ value for each tipster, providing a tiebreaker for the final standings.
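A round tally along these lines could look like the following sketch. The data structures and function name are hypothetical, and only the two games quoted above are included (not the full nine-game round), so the printed totals won’t match the 80.14 and 19.28 figures:

```python
# Sketch of a round TQ tally. Each result records the winner, the margin,
# and the combined total score; each tip records the team the participant picked.
def round_tq(results, tips):
    """Sum signed margin-percentages: + for correct tips, - for incorrect."""
    tq = 0.0
    for game_id, (winner, margin, total) in results.items():
        share = 100 * margin / total          # e.g. 27/183 -> 14.75
        tq += share if tips[game_id] == winner else -share
    return tq

results = {
    "CARL v RICH": ("Richmond", 27, 183),   # figures from the Round 1 example
    "ADEL v NM":   ("Adelaide", 77, 203),
}
tips_a = {"CARL v RICH": "Richmond", "ADEL v NM": "Adelaide"}
tips_b = {"CARL v RICH": "Richmond", "ADEL v NM": "North Melbourne"}

print(round(round_tq(results, tips_a), 2))  # 14.75 + 37.93 -> 52.69
print(round(round_tq(results, tips_b), 2))  # 14.75 - 37.93 -> -23.18
```

A season-long TQ is just the same sum carried over every round instead of being reset.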

The TQ is less useful in an individual round than it is over the course of a whole season, since people are more likely to pick the same combination of results in a single week. The “guess the margin” option could still be employed as a further tiebreaker in that scenario, and I think this is a fairer check to apply before resorting to that method.

Over the course of a home-and-away season it is unlikely that two people will have tipped the same combination of teams, so it provides an excellent tiebreaker. In the cutthroat world of footy tipping, with pride and money at stake, a tiebreaker that better reflects the skill (or fortune) of the competitors would be welcome, and a relatively simple task for an online tipping system to provide.

London 2012 medal tally by use of capital punishment

At the end of the Olympic Games, people like to play around with medal tallies ordered or weighted in a variety of (dis)interesting ways. Here’s mine:

Country has death penalty Gold Silver Bronze
No 192 211 254
Yes 110 93 102
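Producing a summary like this is a simple grouping job. Here’s a sketch in Python using a tiny illustrative subset of the data; the full tally came from the sources noted at the end:

```python
# Sum each country's medals into "Yes"/"No" buckets by death-penalty status.
# Only four countries shown here for illustration, so the totals are partial.
from collections import defaultdict

medals = {                      # country: (gold, silver, bronze)
    "United States": (46, 29, 29),
    "China": (38, 27, 22),
    "Great Britain": (29, 17, 19),
    "Russian Federation": (24, 25, 33),
}
has_death_penalty = {"United States", "China"}

tally = defaultdict(lambda: [0, 0, 0])
for country, counts in medals.items():
    key = "Yes" if country in has_death_penalty else "No"
    for i, n in enumerate(counts):
        tally[key][i] += n

for key in ("No", "Yes"):
    print(key, *tally[key])
```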

The part of the world that does not legally kill its citizens is victorious! So, is removing an archaic form of justice the secret to Olympic success? It’s not quite that simple. The top two nations in the general tally are also in this illustrious list:

Country with DP Gold Silver Bronze
United States 46 29 29
China 38 27 22
Japan 7 14 17
Cuba 5 3 6
Iran 4 5 3
North Korea 4 0 2
Ethiopia 3 1 3
Belarus 2 5 5
Uganda 1 0 0
India 0 2 4
Thailand 0 2 1
Egypt 0 2 0
Indonesia 0 1 1
Malaysia 0 1 1
Botswana 0 1 0
Qatar 0 0 2
Singapore 0 0 2
Afghanistan 0 0 1
Bahrain 0 0 1
Kuwait 0 0 1
Saudi Arabia 0 0 1
Total 110 93 102

The USA and China give the off-with-their-heads mob a great start, but they can’t compete with the sheer number of little-nations-that-could in this enlightened tally:

Country without DP Gold Silver Bronze
Great Britain 29 17 19
Russian Federation 24 25 33
South Korea 13 8 7
Germany 11 19 14
France 11 11 12
Italy 8 9 11
Hungary 8 4 5
Australia 7 16 12
Kazakhstan 7 1 5
Netherlands 6 6 8
Ukraine 6 5 9
New Zealand 6 3 5
Jamaica 4 4 4
Czech Republic 4 3 3
Spain 3 10 4
Brazil 3 5 9
South Africa 3 2 1
Croatia 3 1 2
Romania 2 5 2
Kenya 2 4 5
Denmark 2 4 3
Azerbaijan 2 2 6
Poland 2 2 6
Turkey 2 2 1
Switzerland 2 2 0
Lithuania 2 1 2
Norway 2 1 1
Canada 1 5 12
Sweden 1 4 3
Colombia 1 3 4
Georgia 1 3 3
Mexico 1 3 3
Ireland 1 1 3
Argentina 1 1 2
Serbia 1 1 2
Slovenia 1 1 2
Tunisia 1 1 1
Dominican Republic 1 1 0
Trinidad and Tobago 1 0 3
Uzbekistan 1 0 3
Latvia 1 0 1
Algeria 1 0 0
Bahamas 1 0 0
Grenada 1 0 0
Venezuela 1 0 0
Mongolia 0 2 3
Slovakia 0 1 3
Armenia 0 1 2
Belgium 0 1 2
Finland 0 1 2
Bulgaria 0 1 1
Chinese Taipei 0 1 1
Estonia 0 1 1
Puerto Rico 0 1 1
Cyprus 0 1 0
Gabon 0 1 0
Guatemala 0 1 0
Montenegro 0 1 0
Portugal 0 1 0
Greece 0 0 2
Moldova 0 0 2
Hong Kong 0 0 1
Morocco 0 0 1
Tajikistan 0 0 1
Total 192 211 254

The source for the tally is ScraperWiki, and the source for the capital punishment stats is Wikipedia. I’ve given some countries the benefit of the doubt by counting those that have “abolished in practice” on the side of no death penalty.

It’s been fun! See you again in Sochi 2014. I love the ski jump.

What the Brownlow Medal isn’t

So, Chris Judd has won the 2010 Chas Brownlow Trophy, and some people aren’t very happy about it. I reckon this is because they misunderstand what the award is.

The Brownlow Medal

  • is an award given to an AFL player in recognition of a good season. It’s considered to be the highest individual award in the competition, which is more due to its history and status (not to mention how much the media loves to pump it up) than any other consideration.
  • isn’t an accurate indication of the “best” player of the year. The winner is always among the best players and in some years we might agree that the winner was the very best, but that’s not too often.

Chris Judd is a champion and his great year has been recognised. Good, he deserves it. I’ve always been a critic of the Brownlow, though – not of the medal itself, but of what it’s held up to be. Footy followers think that it should always be awarded to the best player of the year (and they always claim to know who that player is!) but there are two big problems that prevent that from happening.

Problem 1: The umpires cast the votes

The umpires have a lot to do during a match, and they spend most of it chasing after the ball. Consequently they see a lot of action from the midfielders, and may miss some of the more subtle parts of the game. They also have a different interest in the game from the average viewer, and they’re charged with finding the “fairest and best” player of the match, so they probably take things other than sheer brilliance into account. Finally, the Brownlow is an individual medal in a team game, which is always problematic. Individual skill needs to be recognised, but I think the way players execute their team’s plan should also be considered, and the umpires can’t possibly judge that.

Problem 2: It has a poor voting system

At the end of a game, the umpires allocate their 3-2-1 votes to three separate players. This is the case regardless of whether a match is marked by a big team effort, or whether a few players did all the work. There aren’t enough votes to go around – some good players miss out entirely, and sometimes three votes aren’t enough to measure the influence a player had on the game.

The Solution

We already have an award that does a pretty good job of finding the best player of the year: the AFL Coaches Association Champion Player of the Year. What makes this award so good is that it addresses both of the above problems. It’s voted on by the coaches, who have the best understanding of how well each player filled their given role. They also know which opposition players caused them the most problems. Although the flashier players will usually still get more votes, this opens it up a little more to the less glamorous roles, like defenders.

It also has a reasonable scoring system. Each coach picks five players to award votes on a 5-4-3-2-1 scale. That’s a total of thirty votes between the two coaches. Sometimes the two coaches’ choices overlap, sometimes not. The high scoring system separates the best from the rest in a more definite way than the lower-scored Brownlow. It’s still not ideal, but it’s an improvement. It did a good job at ranking the best players this year:

2010 AFLCA Champion Player of the Year
114 – Dane Swan (Collingwood)
88 – Luke Hodge (Hawthorn)
80 – Joel Selwood (Geelong)
75 – Aaron Sandilands (Fremantle)
71 – Chris Judd (Carlton)
70 – Gary Ablett (Geelong)
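As a side note, the 5-4-3-2-1 allocation is easy to sketch. The ballots below are invented placeholders using names from the list above, not real coaches’ votes:

```python
# AFLCA-style vote allocation: each coach ranks five players, best first,
# worth 5-4-3-2-1 votes; per-game ballots are accumulated across the season.
from collections import Counter

def add_ballots(totals, ballots):
    """Apply one game's ballots: each is a list of five players, best first."""
    for ballot in ballots:
        for votes, player in zip((5, 4, 3, 2, 1), ballot):
            totals[player] += votes
    return totals

totals = Counter()
game_ballots = [
    ["Swan", "Hodge", "Selwood", "Sandilands", "Judd"],    # home coach (invented)
    ["Swan", "Selwood", "Ablett", "Hodge", "Sandilands"],  # away coach (invented)
]
add_ballots(totals, game_ballots)
print(totals.most_common(3))  # [('Swan', 10), ('Selwood', 7), ('Hodge', 6)]
```

Thirty votes per game rather than six, which is exactly why the spread separates the best from the rest more decisively.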

But even the AFLCA put Judd in the top five for 2010, so those who claimed that Judd didn’t even deserve to make the All-Australian team can get stuffed.

Sad face

The stature of the Brownlow drowns out the other awards, and so everyone – the public, the media, the players – puts their faith in the Brownlow and demands that it be awarded to the clear player of the year. We don’t always see eye-to-eye on who that player is, but in 2010 everyone seems to agree that it was Dane Swan, so the knockers have been more vocal than usual. Swan did have a great year, and Brownlow night must have been a terrible let-down given that the media had already awarded it to him. But that doesn’t make Judd any less a champion: he had a great year, and he deserves his award. It’s a shame to see people attacking him with their disappointment.

The Brownlow simply isn’t the award that the public wants it to be. It awards something unique – something you can’t quite put your finger on – and it would be great if people recognised and appreciated that. It would also be great if the coaches award was elevated to a higher importance to fill the “best player” void. The TV networks wouldn’t go much on it – the count would probably be decided earlier in the evening and the winner would rarely be a surprise – but the public would get the result they want. And maybe they’d stop knocking champions for their success.

But that probably won’t happen as long as there are Collingwood supporters.