Fix the Triple J Hottest 100 voting system

[Image: the cover of Triple J’s upcoming compilation]

The twentieth anniversary of the Hottest 100 inspired a “best of the last twenty years” version, the winners of which were announced last weekend. As always, there was much angst as to what appeared, what didn’t, and where they ranked. As with the “hottest of all time” count from 2009, the biggest criticism of this latest poll seems to be the lack of women.

I’ve read a number of articles giving reasons for why this might be, and each of those may be correct. I’ve also read some things about how and why popularity isn’t a good metric for quality, and they’re probably also correct. What I’d like to question is whether the Hottest 100 is even a good measure of popularity, full-stop. Although I accept that the results are skewed towards the particular section of the community that votes in the poll, I don’t even think they accurately represent the opinions of that group.

I believe that the wrong voting system creates this problem, and the sheer number of tracks from which listeners can choose exacerbates it. When the Hottest 100 began, there was no way around this – the current method would have been the easiest way to process phone votes. Today, voting is done via the Web, so it’d be pretty easy to switch to a more appropriate system.

The problem with the current system

I’ll be using the recent vote as my example, but the annual events work the same way. Listeners were asked to pick a maximum of twenty tracks out of the tens of thousands of songs that might appeal to their demographic as a whole. They were not given the opportunity to rank those songs: each track listed on a ballot receives a single, equal vote. The totals are calculated, and the winners announced. Unfortunately I don’t have access to the ballots themselves, so I can’t prove that the voting system skews the results, but I suggest that it’s a possibility.
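To make the mechanics concrete, the whole count boils down to something like the following minimal sketch. The ballot structure and the sample data are my assumptions; Triple J hasn’t published its internals.

```python
from collections import Counter

# Hypothetical ballots: each one is an unordered set of up to twenty
# track titles. The real ballot format isn't public.
ballots = [
    {"Wonderwall", "Paranoid Android", "Karma Police"},
    {"Wonderwall", "How To Disappear Completely"},
    {"Little Lion Man", "Wonderwall"},
]

tally = Counter()
for ballot in ballots:
    # Every track on a ballot gets exactly one vote: no ranking,
    # no weighting, no transfers. Position on the ballot means nothing.
    tally.update(ballot)

# The Hottest 100 is simply the hundred largest totals.
for track, votes in tally.most_common(100):
    print(votes, "-", track)
```

Note that a grudging twentieth pick counts exactly as much as a passionate first choice; that equivalence is at the root of what follows.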

[Image: Oasis – Wonderwall album cover]

The poll was topped by Oasis’ Wonderwall. Now, it may very well be that a plurality of listeners thinks that it’s the best song of the last twenty years. But it’s also possible that a large number of listeners voted for a bunch of other songs as their favourites, and put Wonderwall somewhere else in their lists for nostalgic reasons, perhaps as a shout-out to a fondly-remembered time in their lives. Tweep @NatalieGaronzi made this point somewhat more pithily.

There are a few Hottest 100 number ones that I (perhaps cynically) presume were given votes for novelty reasons, by voters who didn’t necessarily consider the song the top track of the year. The flat voting system means that if enough people do this, the song can win. Perhaps that’s not a bad thing: like a Condorcet voting method, it will favour candidates generally acceptable to the majority over candidates passionately supported by a minority. Depending on your definition of “hottest song”, this may be fine. Condorcet, however, at least allows the voters to rank their choices.
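For contrast, here is roughly how a Condorcet count works on ranked ballots. This is a sketch with invented ballots, and it only handles the easy case where every ballot ranks every candidate:

```python
from itertools import combinations

# Invented ranked ballots: earlier in the list = more preferred.
ballots = [
    ["Karma Police", "Wonderwall", "Novelty Hit"],
    ["Novelty Hit", "Wonderwall", "Karma Police"],
    ["Wonderwall", "Karma Police", "Novelty Hit"],
]

candidates = sorted({song for ballot in ballots for song in ballot})
wins = {c: 0 for c in candidates}

# A Condorcet winner beats every other candidate head-to-head.
for a, b in combinations(candidates, 2):
    prefer_a = sum(1 for ballot in ballots
                   if ballot.index(a) < ballot.index(b))
    if prefer_a > len(ballots) - prefer_a:
        wins[a] += 1
    elif prefer_a < len(ballots) - prefer_a:
        wins[b] += 1

condorcet = [c for c in candidates if wins[c] == len(candidates) - 1]
print("Condorcet winner:", condorcet[0] if condorcet else "none (a cycle)")
```

In this toy data each song tops exactly one ballot, yet Wonderwall wins both of its head-to-head contests: the broadly acceptable candidate prevails, but only after every voter has had the chance to say which songs they prefer to it.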

The problem with the number of tracks available

[Image: Radiohead – Kid A album cover]

I like Radiohead; they have a mountain of quality songs, and I love a few of them equally: Paranoid Android, Karma Police, How To Disappear Completely, Everything In Its Right Place. So I guess I could have voted for all of them. But I love lots of different types of music, and I don’t think I like Radiohead enough to give them four votes out of twenty. So I decided to choose between them. Out of this lot my favourite is probably How To Disappear Completely, and I voted for it knowing it was unlikely to feature in the final count. Should I have voted for Paranoid Android or Karma Police instead, to boost their totals?

[Image: PJ Harvey – Rid Of Me album cover]

Picking on Oasis again: only one of their songs featured, and it topped the count. Oasis have many popular songs, but none stands out as obviously as Wonderwall, so their fans’ votes concentrate on it. Radiohead had two songs (Paranoid Android and Karma Police) feature, at 13 and 35. Numerous other bands had two or three songs appear. Is it possible that prolific, long-lived, well-loved, consistently good bands suffer in such counts because their votes are split? I love PJ Harvey and voted for a few of her tracks, but it was hard to choose only a few. Are there other PJ fans who were in the same boat and chose differently to me? Maybe not; maybe I’m inventing problems here. But I think it’s a possibility worth considering.

A proposal

To solve these problems I would like to see the Hottest 100 allow voters to rank their songs, and use an STV-based proportional system such as Hare-Clark to tally the results. Let users select as few or as many tracks as they like (perhaps capped at the number of vacancies, to stop people going overboard and crashing the system), and give them the ability to drag and drop the tracks into their preferred order. The formulas for calculating quotas, surpluses, and exclusions are straightforward; let computers do the work.
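I’m not privy to how the ABC would build this, but the core of such a count is small enough to sketch. This version uses simplified weighted-inclusive Gregory transfers and naive tie-breaking; real Hare-Clark has extra rules (last-bundle transfers, countbacks) that are omitted here:

```python
import math

def stv_count(raw_ballots, seats):
    # Each ballot is [weight, ordered preferences]; every ballot starts at 1.
    ballots = [[1.0, list(prefs)] for prefs in raw_ballots]
    quota = math.floor(len(ballots) / (seats + 1)) + 1  # the Droop quota
    elected, excluded = [], set()

    def current_pick(prefs):
        # A ballot counts towards its highest preference still in the running.
        for p in prefs:
            if p not in elected and p not in excluded:
                return p
        return None  # exhausted: no live preferences left

    while len(elected) < seats:
        totals = {}
        for weight, prefs in ballots:
            pick = current_pick(prefs)
            if pick is not None:
                totals[pick] = totals.get(pick, 0.0) + weight
        if not totals:
            break  # every ballot exhausted before all places were filled
        top = max(totals, key=totals.get)
        if totals[top] >= quota:
            # Elected. Scale down the ballots that elected this candidate
            # so only the surplus above the quota flows to later preferences.
            transfer = (totals[top] - quota) / totals[top]
            for ballot in ballots:
                if current_pick(ballot[1]) == top:
                    ballot[0] *= transfer
            elected.append(top)
        else:
            # Nobody reached quota: exclude the lowest-polling candidate;
            # its ballots move to their next preference at full value.
            excluded.add(min(totals, key=totals.get))

    return quota, elected
```

With a million ballots and a hundred places, the Droop quota works out at floor(1,000,000 / 101) + 1 = 9,901: any track reaching that is elected, its surplus flows on, and a vote for an unpopular favourite transfers rather than evaporating.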

This would also go some way towards alleviating the ‘number of eligible songs’ problem. Since I could vote for as many songs as I liked, I’d put all four Radiohead songs somewhere in my list without much concern about wasting my vote: if How To Disappear Completely were excluded early, my vote would simply flow to my next preference. I’d vote for all of the PJ Harvey songs I like and still have plenty of room for my other favourites.

Alternatively, some form of run-off voting could be used to whittle the field down first (to 500 tracks or so), with the main vote then restricted to those tracks alone. But this would create its own problems, and I would much prefer to kill both birds with the one stone above.

Perhaps nothing I’ve suggested here would make a difference. Perhaps Triple J listeners genuinely aren’t fans of women in music, and maybe they genuinely love novelty songs. But fixing the voting system would at least remove these doubts and give us a clearer picture. Tracing preference flows would also provide some interesting metadata. Are fans of Mumford & Sons also into Of Monsters and Men? I bet they are.
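With full ballot data (which, again, I don’t have), questions like that are a single pass over the preferences. A sketch, in which the track-to-artist lookup is assumed rather than built:

```python
def also_voted_for(ballots, artist_a, artist_b, artist_of):
    """Fraction of ballots naming a track by artist_a that also name a
    track by artist_b. `artist_of` maps track title -> artist; building
    it from the eligible-track list is assumed, not shown."""
    with_a = [b for b in ballots
              if any(artist_of[t] == artist_a for t in b)]
    if not with_a:
        return 0.0
    with_both = sum(1 for b in with_a
                    if any(artist_of[t] == artist_b for t in b))
    return with_both / len(with_a)

# e.g. also_voted_for(ballots, "Mumford & Sons", "Of Monsters and Men", artist_of)
```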

Finally, it may be that I’m trying to wedge the wrong voting system into the wrong paradigm. If any psephologists read this, feel free to poke holes in it; I’d love to hear some alternatives.

What the Brownlow Medal isn’t

So, Chris Judd has won the 2010 Chas Brownlow Trophy, and some people aren’t very happy about it. I reckon this is because they misunderstand what the award is.

The Brownlow Medal

  • is an award given to an AFL player in recognition of a good season. It’s considered to be the highest individual award in the competition, which is more due to its history and status (not to mention how much the media loves to pump it up) than any other consideration.
  • isn’t an accurate indication of the “best” player of the year. The winner is always among the best players, and in some years we might agree that he was the very best, but not often.

Chris Judd is a champion and his great year has been recognised. Good; he deserves it. I’ve always been a critic of the Brownlow, though: not of the medal itself, but of what it’s held up to be. Footy followers think that it should always be awarded to the best player of the year (and they always claim to know who that player is!), but there are two big problems that hinder that from happening.

Problem 1: The umpires cast the votes

The umpires have a lot to do during a match, and they spend most of it chasing the ball. Consequently they see a lot of action from the midfielders, and may miss some of the more subtle parts of the game. They also watch with a different interest from the average viewer’s: they’re charged with finding the “fairest and best” player of the match, so they probably take things other than sheer brilliance into account. Finally, the Brownlow is an individual medal in a team game, which is always problematic. Individual skill needs to be recognised, but I think how well a player executes the team plan should also be considered, and an umpire can’t possibly judge that.

Problem 2: It has a poor voting system

At the end of a game, the umpires allocate their 3-2-1 votes to three separate players. This is the case regardless of whether a match is marked by a big team effort, or whether a few players did all the work. There aren’t enough votes to go around – some good players miss out entirely, and sometimes three votes aren’t enough to measure the influence a player had on the game.

The Solution

We already have an award that does a pretty good job of finding the best player of the year: the AFL Coaches Association Champion Player of the Year. What makes this award so good is that it addresses both of the problems above. It’s voted on by the coaches, who understand better than anyone how well each player filled his given role, and who know which opposition players caused them the most trouble. Although the flashier players will usually still get more votes, this opens the award up a little more to the less glamorous roles, like defenders.

It also has a sensible scoring system. Each coach picks five players and awards votes on a 5-4-3-2-1 scale, for a total of thirty votes between the two coaches. Sometimes the two coaches’ choices overlap, sometimes not. The higher-scoring system separates the best from the rest more decisively than the Brownlow’s 3-2-1 does. It’s still not ideal, but it’s an improvement, and it did a good job of ranking the best players this year:

2010 AFLCA Champion Player of the Year
114 – Dane Swan (Collingwood)
88 – Luke Hodge (Hawthorn)
80 – Joel Selwood (Geelong)
75 – Aaron Sandilands (Fremantle)
71 – Chris Judd (Carlton)
70 – Gary Ablett (Geelong)
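The difference in resolution is easy to put numbers on. This is plain arithmetic on the two published scales, not real vote data:

```python
# Points on offer per match under each system.
brownlow_pool = sum([3, 2, 1])          # one 3-2-1 card: 6 points
aflca_pool = 2 * sum([5, 4, 3, 2, 1])   # two coaches' cards: 30 points

# The most a single player can poll over a 22-round season.
brownlow_max = 3 * 22                   # 66
aflca_max = (5 + 5) * 22                # 220: top votes from both coaches weekly

print(brownlow_pool, aflca_pool, brownlow_max, aflca_max)  # 6 30 66 220
```

Five times the points each match is what lets gaps like Swan’s 114 to Hodge’s 88 open up; a Brownlow count squeezes the same field into a much narrower band.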

But even the AFLCA put Judd in the top five for 2010, so those who claimed that Judd didn’t even deserve to make the All-Australian team can get stuffed.

Sad face

The stature of the Brownlow drowns out the other awards, and so everyone – the public, the media, the players – puts their faith in the Brownlow and demands that it be awarded to the clear player of the year. We don’t always see eye-to-eye on who that player is, but in 2010 everyone seems to agree that it was Dane Swan, so the knockers have been more vocal than usual. Swan did have a great year, and Brownlow night must have been a terrible let-down given that the media had already awarded it to him. But that doesn’t make Judd any less a champion: he had a great year, and he deserves his award. It’s a shame to see people attacking him with their disappointment.

The Brownlow simply isn’t the award that the public wants it to be. It awards something unique – something you can’t quite put your finger on – and it would be great if people recognised and appreciated that. It would also be great if the coaches award was elevated to a higher importance to fill the “best player” void. The TV networks wouldn’t go much on it – the count would probably be decided earlier in the evening and the winner would rarely be a surprise – but the public would get the result they want. And maybe they’d stop knocking champions for their success.

But that probably won’t happen as long as there are Collingwood supporters.