Playing the Moldovans at Rugby

Perplexed by the IRB Player of the Year Awards? So were we. So after ummm tweeted about the nominations, appearing to have a better understanding of the whole crazy thing than we do, we asked him to write about it for the site.

Much has been made of the announcement of the IRB shortlist for the 2012 Player of the Year, and a lot of it has been a little bit frothy around the mouth. It’s understandable: the inclusion of a player nobody had been particularly gushing about all year raised eyebrows and objections in equal measure.

I have to admit I feel sorry for Owen Farrell. He’s like the poor kid at school who gets a compliment from teacher and immediately becomes the butt of every joke for the rest of the month, or even year, you know how kids are.

The teacher in this case is a select group of players who have singled out a shortlist of Richie McCaw, Dan Carter, Frédéric Michalak and – yes, that boy – Owen Farrell, and who now have to listen to all and sundry give them a better list. But again, like Farrell, this isn’t really their fault. The blame lies with a poor selection system, not with the selectors themselves.

Selection itself is simple. Each selector picks three standout players from each match, ranked 1, 2 and 3. From this they arrive at a group in which the cream has, it’s assumed, risen to the top. But a little analysis shows the process to be entirely flawed.

This process treats all matches as equal, and all opposition in those matches as equal. Whether you play the top-ranked or the 10th-ranked team in the world, if you’re in the top 3 you’re on the list. But if you’re 4th best in a tough match, sorry son, the guy who played well in a facile match is rated better than you. And if you’re consistently 4th best, you might get lots of nice things written about you, but you’ve got zero points for the year.

So forget asking “What about Lobbe?”, or about anyone else. Lobbe would have needed to outplay 13 All Blacks, Aussies or South Africans just to get a ranking point in some matches this year; the deck is stacked against him. Meanwhile Owen Farrell kicked some goals against Italy and, according to the system, is therefore a better shout for Player of the Year. Not right at all. Against much tougher opposition, Lobbe’s efforts in a losing team count, under this system, for no more than those of a replacement getting 5 minutes at the end.
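For the number-lovers, here’s a minimal sketch of the kind of per-match tallying described above. The 3/2/1 points scheme, the players and the matches are all my own inventions for illustration; the structural point is that opposition strength never enters the calculation.

```python
from collections import defaultdict

# Hypothetical points scheme: 3/2/1 for the top three in each match,
# nothing for anyone else. The exact values are an assumption.
POINTS = {1: 3, 2: 2, 3: 1}

# Each match contributes only its top three (player, rank) pairs.
# Invented examples: a tough Test where our back-rower was 4th best,
# and an easy win where a fly-half kicked his goals.
matches = [
    [("All Black A", 1), ("All Black B", 2), ("Springbok C", 3)],   # tough match
    [("Fly-half D", 1), ("Winger E", 2), ("Prop F", 3)],            # facile match
]

totals = defaultdict(int)
for match in matches:
    for player, rank in match:
        totals[player] += POINTS[rank]

# The player who was 4th best in every hard match never appears at all,
# so he finishes the year on zero, whoever he was up against.
print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

Whatever the real weighting, the flaw is the same: a top-3 rating against Italy counts exactly as much as one against the All Blacks, and a 4th place against anyone counts for nothing.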

Fans of American sports will probably be aware of the Bowl Championship Series, a system that uses computer rankings to help decide who gets to play in what is effectively the college football final (OK, it isn’t quite a final, and it’s more complicated than that, but you get the idea). The maths graduate in me has nothing but love for numbers and algorithms, but even I know this is a soulless and ultimately unsatisfying way of choosing a deserving winner. After enough uproar the system was modified; the BCS still uses algorithms to select teams, but now gives greater weight to opinion polls, including the coaches’ poll.

Many good articles about the shortlist have been let down by the absence of any analysis of the process. Much has been made of the fact that the selectors “should” have picked this player or that. Maybe they should have (OK, they absolutely should have), but the fact is, they couldn’t. They couldn’t because the selection criteria are fundamentally broken, yet nobody is pointing this out. Articles that miss this misinform the reader and add to the general air of ignorance about the underlying problem.

Another issue is deciding which matches to watch. In 2009 Richie McCaw won the Player of the Year award. The selectors made their choice after reviewing 49 matches, which seems like a lot, but is it?

There are 15 Six Nations matches a year, and in 2009 there were 9 Tri-Nations games. In November each of the six Northern Hemisphere teams would have played 3 home games, 18 in total (give or take; I’m ignoring teams adding on an extra match). In July each of the three Tri-Nations teams would also have played 3 home games, another 9. That adds up to 51 matches, so not all of them were viewed, possibly because, as a Lions year, the Australian and New Zealand matches weren’t all considered. And that’s before you ask about Argentina, Samoa, Fiji, Tonga, Japan and others. It’s speculation on my part, but does that mean the squads for those teams weren’t considered worth a look when deciding the Player of the Year? Effectively it would seem the shortlist is drawn from a long list of Six Nations plus Rugby Championship players.
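A back-of-the-envelope tally of those estimates (mine, not official figures), for the curious:

```python
# Rough count of 2009 Tier 1 fixtures, using the estimates above.
fixtures = {
    "Six Nations": 15,
    "Tri-Nations": 9,
    "November internationals (6 NH hosts x 3 games)": 18,
    "July internationals (3 SH hosts x 3 games)": 9,
}

total = sum(fixtures.values())
print(total)        # 51
print(total - 49)   # 2 matches apparently not reviewed
```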

I suppose you have to draw the line somewhere, and a player such as Finland’s Ilkka Tuomaala is never really going to be in with a shout, but still. Samoa are ranked in the top 8; were their players ignored while 12th-placed Scotland’s were ranked by the system?

The selectors are probably guilty of painting themselves into a corner, and for that they should shoulder some of the blame; they should have lobbied to have the system changed years ago. But if we’re going to have a pop, let’s get the right target in our sights: the IRB itself, for choosing to do it this way. I don’t envy the selectors their job; they can’t watch every match, and they can’t rate all 46 players involved in each one. The amount of work would make it impossible. So let’s be nicer to them, and have a pop at whoever came up with this broken and entirely stupid system.

With thanks to ummm; follow him on Twitter here for more Connacht and general rugby-related stuff.