November 3, 2002

Rediff reaction to ICC ODI rating

M J Manohar Rao and Srinivas Bhogle

One's first reaction is that the ICC is getting smarter: the ICC ODI Championship, which ranks teams playing one-day international cricket, is without doubt much more sensible than the ICC Test Championship.

But is it good enough? That's something that we will try to discover as we go along.

First, let us quickly outline the ICC ODI method.

The rating depends on points obtained and the number of matches played. The system of awarding points is quite novel.

Let's explain with an example.

Consider the forthcoming India-West Indies series. India (placed at No 5) currently have 3301 points from 31 matches, for a rating of 3301/31 = 106. West Indies (No 7) have 2074 points from 22 matches, for a rating of 2074/22 = 94. Suppose India win the first ODI of the series. The win gives India 50 + 94 = 144 points (50 plus the rating of the opponent, in this case West Indies). The defeat gives West Indies 106 - 50 = 56 points (the rating of the opponent, in this case India, minus 50).

So after the first match, India will have 3301 + 144 = 3445 points from 32 matches, for a modified rating of 3445/32 = 108. West Indies will have 2074 + 56 = 2130 points from 23 matches, for a rating of 2130/23 = 93. India therefore gain a couple of rating points while West Indies lose one.
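For readers who like to check such arithmetic, here is a minimal Python sketch of the update, using the figures above. Rounding to the nearest whole number is our assumption, since the ICC publishes ratings as integers.

    # A team's rating is its total points divided by matches played;
    # rounding to the nearest whole number is our assumption.
    def rating(points: int, matches: int) -> int:
        return round(points / matches)

    # Figures from the India-West Indies example above.
    ind_points, ind_matches = 3301, 31   # rating 106
    wi_points, wi_matches = 2074, 22     # rating 94

    ind_rating = rating(ind_points, ind_matches)   # 106
    wi_rating = rating(wi_points, wi_matches)      # 94

    # India win (ratings gap under 40): winner earns 50 + opponent's
    # rating, loser earns opponent's rating - 50.
    ind_points += 50 + wi_rating   # +144 -> 3445
    wi_points += ind_rating - 50   # +56  -> 2130
    ind_matches += 1
    wi_matches += 1

    print(rating(ind_points, ind_matches))  # 3445/32 -> 108
    print(rating(wi_points, wi_matches))    # 2130/23 -> 93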

If, instead, India (106) were to defeat top-placed Australia (128), India would gain 50 + 128 = 178 points and their rating would rise to [3301 + 178]/32 = 109. Australia would drop marginally, from 128 to 126. The scheme thus amply rewards wins against the strongest opponents.

There's another condition which we must explain at this stage.

If the match were between Australia (128) and Zimbabwe (67), things would be different, because the gap between the two ratings (128 - 67 = 61) is 40 or more.
In such an instance a win fetches Australia 10 points more than its own rating, i.e. 128 + 10 = 138 points, and Zimbabwe 10 points less than its own rating, i.e. 67 - 10 = 57 points. If, by some strange quirk of circumstance, Zimbabwe were to win, they would get 90 points more than their own rating (67 + 90 = 157) and Australia 90 points less than theirs (128 - 90 = 38).
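Putting the two cases together, the full points rule, as we read it, can be sketched in a few lines of Python (the function name and the threshold constant are our own):

    GAP_THRESHOLD = 40  # our name for the 40-point cut-off described above

    def match_points(winner_rating: int, loser_rating: int) -> tuple[int, int]:
        """Points earned by the winner and the loser of one ODI."""
        if abs(winner_rating - loser_rating) < GAP_THRESHOLD:
            # Close match: winner gets opponent's rating + 50,
            # loser gets opponent's rating - 50.
            return loser_rating + 50, winner_rating - 50
        if winner_rating > loser_rating:
            # Expected result: the stronger side wins.
            return winner_rating + 10, loser_rating - 10
        # Upset: the weaker side wins.
        return winner_rating + 90, loser_rating - 90

    print(match_points(128, 67))  # Australia beat Zimbabwe: (138, 57)
    print(match_points(67, 128))  # Zimbabwe beat Australia: (157, 38)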

Currently the rating considers all matches played after 1 August 2000, and will continue to do so until 31 July 2003.
On 1 August 2003, results from the most distant year (i.e. the period 1 August 2000 to 31 July 2001) will drop off. At any point of time, therefore, there is a 'window of reckoning' spanning between two and three years. Today's window spans 1 August 2000 to 1 November 2002. On 22 July 2003, the window will span 1 August 2000 to 22 July 2003, i.e. almost three calendar years. On 5 August 2003, however, it will span 1 August 2001 to 5 August 2003, i.e. just over two calendar years.

The rating also tries to ensure that the impact of a win diminishes over time, by giving progressively reducing weights. At the moment, points from all matches played during 1 August 2002 to 1 November 2002 carry a weight of 1; matches played during 1 August 2001 to 31 July 2002 carry a weight of 2/3; and matches played during 1 August 2000 to 31 July 2001 carry a weight of 1/3. On 1 August 2003, when the starting date of reckoning changes from 1 August 2000 to 1 August 2001, the weight 'window' advances by one year; matches played during 1 August 2001 to 31 July 2002 -- which currently carry a weight of 2/3 -- will then carry a weight of 1/3, and so on.
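A sketch of the weighting, for a rating computed on 1 November 2002, follows; the helper names are ours, and we assume the match count in the denominator is weighted the same way as the points:

    from datetime import date

    def match_weight(played: date) -> float:
        """Weight of a match for a rating computed on 1 November 2002."""
        if played >= date(2002, 8, 1):
            return 1.0        # most recent band: full weight
        if played >= date(2001, 8, 1):
            return 2.0 / 3.0  # one band back
        if played >= date(2000, 8, 1):
            return 1.0 / 3.0  # oldest band still inside the window
        return 0.0            # outside the window: dropped entirely

    def weighted_rating(results) -> int:
        """results: list of (date_played, points_earned) pairs."""
        num = sum(match_weight(d) * p for d, p in results)
        den = sum(match_weight(d) for d, _ in results)
        return round(num / den)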

That, briefly, summarizes the new ICC method devised by David Kendix.

What do we think of it?

First, let's quickly go over what appear to be niggling concerns. To start with, one wonders why there are two different formulae on either side of the 40-point gap. What's so sacred about 40? Let's see what happens if India (106) play Zimbabwe (67). Here the gap is 106 - 67 = 39, so an Indian win fetches 67 + 50 = 117 points. If, instead, India are rated 107, the gap becomes 40 and the formula changes, but an Indian win would still fetch 107 + 10 = 117 points. So there appears, a priori, to be no worrying discontinuity at 40, and perhaps we shouldn't worry too much.
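Indeed, reusing the match_points sketch from earlier, the boundary check goes through directly:

    print(match_points(106, 67))  # gap 39, close rule: (67 + 50, 106 - 50) = (117, 56)
    print(match_points(107, 67))  # gap 40, mismatch rule: (107 + 10, 67 - 10) = (117, 57)

The winner's haul is an identical 117 on either side of the boundary.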

Next, one might wish to question the choice of weights (1, 2/3 and 1/3) as we go back year after year. But given that the maximum window span is three years, these weights appear reasonable enough. The cut-off date of 1 August is, however, more curious, and certainly more worrying. One gets the impression that a team is better off winning immediately after 1 August than immediately before, because those wins earn a full weight of 1 for a much longer time.

We now look at the more serious concerns, after the following brief comment.

Generally speaking, there are probably four or five factors that one must take into account while evolving an ODI rating. One: it is harder to beat a strong side, so beating a stronger side must receive a higher "weight". Two: the rating must reflect current form, so more recent wins must carry a higher weight. Three: it is easier to win on "home" grounds than on "away" grounds, so the rating must give a higher weight to "away" wins. Four: it is harder to win the "big" matches (e.g. a tournament final), so big wins must receive a higher weight. Five: wins by large margins point to greater team ability, so an "easy" win merits more weight.

While one would like to accommodate all the factors, the complexity of the rating formula grows with each additional factor. So every statistician must, rather reluctantly, put the full stop somewhere.

David Kendix, as we have seen, has chosen to put the full stop after the first two factors. So beating a stronger side is worth more, and more recent wins are worth more. But he ignores the "home-away" factor, the "tournament win" factor and the "win-by-large-margin" factor.

We could perhaps wink at the "win-by-large-margin" factor because it is probably the least significant -- and the overhead of accommodating it objectively is rather large.

What about the "home-away" factor?

Let's quickly look at the percentage of recent home and away wins for the nine major ODI-playing countries:

    Team            Home %   Away %
    Australia         75       65
    England           54       33
    India             54       38
    New Zealand       51       24
    Pakistan          47       54
    South Africa      75       56
    Sri Lanka         73       43
    West Indies       56       22
    Zimbabwe          15       22

These numbers tell a very simple tale: most teams win significantly more at home than away. Even Australia win more at home than away; teams less equipped than Australia win much more at home than away (Pakistan and Zimbabwe are exceptions, but their numbers do not undermine the hypothesis). So why did the ICC ignore the "home-away" factor? One can only guess that it was harder to build into the model (especially because ODIs really involve the even more complex "home-away-neutral" venue factor).

Finally, the ICC ODI equation completely ignores the "tournament" factor. So a win in a World Cup final is essentially as valuable as a win in an early, and perhaps inconsequential, league encounter. Recall, for instance, our example of India winning the first ODI against West Indies: the win nudges India's rating up by a couple of points and clips a point off West Indies. Even if that match were a World Cup final, India would gain exactly the same!
This, to us, appears to be the second major weakness in the ICC model.

A few minutes on the calculator reveal some interesting scenarios. If South Africa (120) were to defeat Australia (128) in three consecutive matches, they would become No 1. If Sri Lanka (117) defeat South Africa (120) in their first game, South Africa's rating would be just 0.5 ahead of Sri Lanka's. If Sri Lanka win two in a row against South Africa, they become No 2. This happens partly because of the way the formula is constructed: a win pushes Sri Lanka up and, at the same time, brings South Africa down! So, relatively speaking, the teams approach each other at double speed.
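The double-speed effect is easy to reproduce with the match_points sketch from earlier. The match counts below (30 apiece) are invented, since only the ratings appear in the text, but the qualitative outcome matches our scenario:

    # Reuses match_points from the earlier sketch. The match counts (30
    # each) are invented; only the ratings 120 and 128 come from the text.
    def play(winner, loser):
        """Update both sides' totals after one match; mutates the dicts."""
        w_rating = round(winner['points'] / winner['matches'])
        l_rating = round(loser['points'] / loser['matches'])
        w_pts, l_pts = match_points(w_rating, l_rating)
        winner['points'] += w_pts
        winner['matches'] += 1
        loser['points'] += l_pts
        loser['matches'] += 1

    sa = {'points': 120 * 30, 'matches': 30}   # South Africa, rating 120
    aus = {'points': 128 * 30, 'matches': 30}  # Australia, rating 128

    for game in range(1, 4):
        play(sa, aus)  # South Africa beat Australia
        print(game,
              round(sa['points'] / sa['matches']),
              round(aus['points'] / aus['matches']))
    # Prints: 1 122 126 / 2 124 124 / 3 125 123 -- South Africa are on
    # top by the third straight win.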

This, to us, suggests two things: first, we are heading for really topsy-turvy times with rankings perhaps fluctuating far too frequently. Second, uneasy will truly be the head that wears the crown! This is especially so because the team at the top will always lose more, in relative terms, than its opponent. We just read that the ACB has welcomed the new ratings and Aussie skipper Ricky Ponting is eager to lay his hands on the new ICC ODI Trophy. We would advise Ponting to hurry up because the Trophy may vanish rather quickly -- of course, it may return rather quickly too. This, we fear, could well become the trend, especially if Australia's performance drops by a gear -- as it must one of these days.

How does the ICC rating compare with the Rediff ODI rating? A quick comparison with our rankings -- before we introduced the 'tournament' factor -- reveals that the top six places would have been identical; the only difference is that Rediff currently place New Zealand (No 8 for the ICC) just a whisker ahead of West Indies (No 7). If we consider the 'distance' between individual teams' indices (e.g. the 'distance' between Australia and South Africa is 128 - 120 = 8), we find that the gap between Australia and South Africa is wider in the Rediff ratings, as is the gap between South Africa and Sri Lanka. We would like to believe this happens because the Rediff ODI rating takes into account that Australia recently whipped South Africa in South Africa (something the ICC rating cannot 'spot'). We also note that Sri Lanka win predominantly at home; this tends to marginally lower their index in the Rediff ODI rating.

We will end by asking the ICC (as a large number of our readers have suggested) to review their Test ratings as well. It must be embarrassing to find South Africa inching ever closer to Australia by virtue of defeating Bangladesh when, by anyone's reckoning, Australia are miles ahead of all other teams, especially in Test cricket.


M J Manohar Rao is professor and director, Department of Economics, University of Mumbai, Mumbai; Srinivas Bhogle is scientist and head, Information Management Division, National Aerospace Laboratories, Bangalore.
