In researching universities in the United States, you’re likely to come across information about how an institution is “ranked” in comparison with other, similar institutions. And if you’re like most people, where a school is ranked will probably affect your desire to go there. This is natural. Everyone wants the best education possible, so when we read that University X is ranked #1 and University Y is ranked #100, we quite understandably prefer to attend University X.
This assumes, however, that the ranking system itself is an accurate portrayal of the quality of education provided by the university in question. Is this a valid assumption?
Various ranking systems exist in the U.S. The most famous and influential is U.S. News & World Report’s annual ranking of U.S. institutions of higher education. This report ranks institutions in groups according to several different criteria (for example, the average test scores of admitted students). The result is a well-ordered and very accessible list of institutions. Here, for example, is the list of the top MBA programs:
- Wharton (University of Pennsylvania)
- Sloan (MIT)
- Kellogg (Northwestern)
Not everyone believes, however, that this method of compiling rankings achieves an accurate portrayal of an institution’s educational quality. In this article, Amy Graham and Nicholas Thompson lay out what they view as problems with U.S. News & World Report’s methodology:
Unfortunately, the highly influential U.S. News & World Report annual guide to “America’s Best Colleges” pays scant attention to measures of learning or good educational practices, even as it neatly ranks colleges in long lists of the sort that Americans love. It could be a major part of the solution; instead, it’s a problem.
U.S. News’ rankings primarily register a school’s wealth, reputation, and the achievement of the high-school students it admits. At one time, most academics believed in one simple equation: Good students plus good faculty equals good school. The rankings reflect this outlook, tabulating things such as percent of faculty with a doctorate (to measure the quality of the professors) and SAT scores of the freshman class (to get at quality of the students). That’s like measuring the quality of a restaurant by calculating how much it paid for silverware and food: not completely useless, but pretty far from ideal.
In a similar vein, one might argue that it’s inappropriate to rank a university (or even a program) as an entire unit:
[E]ach institution is nothing more than a collection of local chapters of international intellectual fraternities. The quality of each chapter at each institution is more or less independent of the quality of any other chapter at the same institution, except to the extent that financial muscle can attract better quality across the board.
In other words, the quality of education depends mostly on individual professors and departments, whose strengths are not necessarily reflected in the rankings of their host institutions.
Those who compile the rankings are well aware of these methodological difficulties, and even a well-established methodology like U.S. News & World Report’s is continually refined. The struggle to arrive at a comprehensive and fair set of criteria for ranking programs and institutions has led to the formation of various national and international associations, for example The International Observatory on Academic Ranking and Excellence.
The point is that each ranking system has its own set of criteria, which may or may not align with your own personal or professional goals. So when you see that University X is #1, that does not necessarily mean that University X is #1 for you.
The rankings that are most useful for you are going to be the ones you make yourself.