Varsity Rankings – They Don’t Tell All

Most academicians pay scant attention to rankings produced by commercial newspapers and magazines for good reason.

Especially at the higher end, universities are not in the business of producing standardised products that can be ranked according to some measurable objective scale.

For good or ill, most universities pride themselves on differentiation, not standardisation.

Diversity in academia is considered a good in itself, providing consumers with more varied choices and encouraging innovation in the creation and dissemination of new knowledge.

There are many methodological grounds on which to question university rankings, besides the fact that they mostly compare apples and oranges.

The outcome of any ranking process depends entirely on what variables the rankers choose to include, how they measure them, and what weight they assign to each factor in the ranking, all of which are subjective.
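As a purely illustrative sketch of that point (the scores, weights and institution names below are invented, not drawn from any actual ranking), the same raw data can produce opposite orderings under different weighting schemes:

```python
# Hypothetical illustration: identical raw scores yield different rankings
# depending solely on the weights a ranker chooses. All numbers are invented.

scores = {
    # university: (research output, teaching quality, spending per student)
    "University A": (9.0, 6.0, 7.0),
    "University B": (6.0, 9.0, 8.0),
}

weightings = {
    "Ranker 1 (research-heavy)": (0.6, 0.2, 0.2),
    "Ranker 2 (teaching-heavy)": (0.2, 0.6, 0.2),
}

for ranker, weights in weightings.items():
    totals = {
        name: sum(w * s for w, s in zip(weights, vals))
        for name, vals in scores.items()
    }
    order = sorted(totals, key=totals.get, reverse=True)
    print(ranker, "->", " > ".join(order))

# Ranker 1 places University A first; Ranker 2 places University B first,
# even though neither institution's underlying scores have changed.
```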

Competition between rankers virtually forces them to differentiate their rankings.

Even in the best known rankings of a relatively (though still incompletely) standardised academic product — full-time United States MBA programmes — Business Week, US News & World Report and The Wall Street Journal all rank the same programmes differently. …

MBA programme rankings at least have the virtue of comparing apples with apples — the same programme at different institutions.

It is much more difficult to compare heterogeneous comprehensive universities with each other, especially without differentiating among disciplines and degree programmes.

For example, Harvard may indeed be the best university in the world.

But that does not mean it has the best undergraduate business degree programme (it has none), or that you would go there to study engineering (Illinois and many others are much better), or South-east Asia (Michigan, Cornell and Wisconsin have richer curricula).

Besides their dubious methodological validity, university rankings are of questionable utility to the consumer. Most rankings are based on measurement of inputs, not outputs. Inputs include such items as the grade-point average or the American Scholastic Assessment Test (SAT) score of the average entering undergraduate, the amount of money spent per student, the student-faculty ratio and so on.

These do not tell us the value-added of a university education at a particular institution, that is, the difference it makes to the lifetime welfare (both in income and non-pecuniary terms) of the individual student. …

There are, of course, many reasons to go to university, of which learning a particular set of job skills (that might also be acquired through other means) is only one.

Other reasons include knowledge or status acquisition for their own sake (as consumption goods), credentialing and job placement services, access to compatible social networks (including potential life or business partners), or simply the overall experience of a university education.

When you put all these factors together, and tailor them to a particular individual with specific capabilities, interests, ambitions, maturation needs, likes and dislikes (for example, for a big-city versus a small-town location) and financial constraints, it becomes apparent that university rankings are not that helpful.

– by Linda Lim, excerpted from an article in The Straits Times, Friday, December 17, 2004
