I. Introduction 

This chapter reviews the performance of Ivy League universities across five of the most widely consulted global higher education rankings over the 2012-2018 period. The scores and rankings contained in the specialized publications analyzed here are highly influential: they guide the decisions of prospective university students across the globe, are used by universities themselves for benchmarking, and are even consulted by national governments to monitor the development of research and education in their countries. However, given the large number of rankings now published, it is important to understand the background and nature of each in order to draw valuable information and accurate judgements of university performance from these evaluations. Although the rankings included reflect similar trends in performance for Ivy League universities, the variations in their methodologies yield some variation in the resulting ranks. The following paragraphs analyze the general trends in performance for nine universities: Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, Yale University, and the Massachusetts Institute of Technology (included among the Ivy League universities for this study, given its standing as a top university across the period analyzed). 

The rankings whose university performance results and methodologies will be analyzed are the Academic Ranking of World Universities (Shanghai Ranking), the QS World University Rankings, the Times Higher Education World University Rankings, the Center for World University Rankings, and the World Top 20 Global University Rankings. Combined, these rankings evaluate hundreds of universities from all corners of the world, in addition to the nine that are the focus of this book. It is therefore important to keep in mind that the relative performances discussed in this chapter are considered not within that larger universe of universities, but within a universe of the eight Ivy League universities and MIT. Even though these nine universities do not necessarily occupy the first nine positions in each of the rankings analyzed, this chapter will analyze their relative positions, 1st through 9th, in isolation.

The chapter is structured as follows: the second section gives background information on the rankings selected and a general description of the available data. It is followed by a section that describes the main trends found across rankings and highlights the main discrepancies among them. The fourth section offers a detailed discussion of the main trends in ranking methodology and common indicators, and how these are reflected in the top and bottom performing universities. The fifth section looks closely at less common indicators and variations in their weighting, and how these are reflected in discrepancies across rankings. The last section analyzes the factors influencing each ranking's selection and weighting of indicators, and how universities focus on and perform in the specific aspects these indicators measure. We begin with the rankings themselves, followed by a bird's-eye view of performance.

II. Selected Rankings

Since the first publication of “America’s Best Colleges Ranking” by US News in 1983, international university rankings have proliferated—Study International (a site that aggregates university rankings) alone lists more than 40. As the number of rankings grows, so does the range of methodologies and scopes under which universities are evaluated. Hence, knowledge about individual rankings is important when relying on these evaluations of universities for any type of decision-making. The rankings selected for this chapter are the Academic Ranking of World Universities (Shanghai Ranking), the QS University Rankings, the Times Higher Education World University Rankings, the Center for World University Rankings, and the World Top 20 Global University Rankings. These rankings are among the most influential and also represent a useful variety of intents and methodologies—which helps in building a broad understanding of the field.

Academic Ranking of World Universities (ARWU)

The Academic Ranking of World Universities (ARWU) was first published in 2003 by Shanghai Jiao Tong University in China. The project was framed within a national initiative to position the country among the leaders in scientific research (as The Economist reports, this target might have already been achieved!), for which it was necessary to assess Chinese universities’ performance at the time and benchmark it against universities across the world. The launch of ARWU shifted the focus from national or regional university benchmarking to a global scope. In 2009, Shanghai Ranking was established as a consulting firm independent of Shanghai Jiao Tong University and the Chinese government. Among the five rankings, ARWU has the most clearly defined position with regard to the education-research dichotomy that runs through this chapter: it relies heavily on a university’s research capability to assess its overall quality.

ARWU.png

QS World University Ranking (QSWUR)

The QS World University Ranking (QSWUR) was first published in 2004 in what was then called the Times Higher Education Supplement, the product of a partnership between Quacquarelli Symonds (QS) and the Times newspaper. QS Ltd. specialized in the recruitment and admission of international students, while the Times sought to satisfy readership demand for information on education. Given that QS’s target audience is international students, it has sought to tailor its evaluation to the benefits students can expect from their education, surveying the authorities who would eventually be responsible for rewarding and/or hiring the prospective alumni: academics and employers. In QSWUR, academic and employer surveys make up 50% of a university’s score. Thus, QSWUR stands in stark opposition to Shanghai’s ARWU, as the ranking that gives the most importance to the educational benefits a university provides and the least to the research it produces.

QSWUR.png

Times Higher Education World University Ranking (THEWUR) 

The Times Higher Education World University Ranking was first published by Times Higher Education in 2009, following the termination of its partnership with QS. The split reflected growing differences between the target audiences of QS and the THE publication—while the former concentrated on international students, the latter’s readership was considerably broader. To appeal to this wider range of actors, THE reduced the weight given to the surveyed opinions of academics and employers and instead included additional objective metrics for evaluating the quality of teaching (such as the staff-to-student ratio, doctorate-to-bachelor’s ratio, doctorates-awarded-to-academic-staff ratio, and institutional income). 

THEWUR.png

The Center for World University Ranking (CWUR)

The Center for World University Rankings (CWUR), based in Jeddah, Saudi Arabia, was first published in 2012 as a project ranking 100 world universities. Since then, CWUR has expanded to cover over 1,000 universities, becoming the largest global university ranking, and has moved its headquarters to the United Arab Emirates. As a latecomer relative to the previously mentioned rankings, CWUR embodies an attempt to address potential flaws in its predecessors. Whereas QS and THE rely (to different degrees) on reputational surveys to gauge educational quality, CWUR pursues the same end through objective, quantifiable data. For example, the 45% combined weight of its education-related indicators includes a count of university alumni who have held CEO positions at top companies.

CWUR.png

World Top 20 Global University Rankings (Top20)

The World Top 20 Global University Ranking (Top20), first published in 2013, stands out as serving an end different from those previously mentioned—it prioritizes neither educational quality nor research, but a university’s social impact. The project was started by New Jersey Minority Educational Development, a nonprofit that promotes the impact of education on young people and their communities. Hence, in addition to educational and research components, Top20 includes impact indicators covering innovation, facilities and infrastructure, and social responsibility. Top20 first collects data from publicly available sources and then averages it with QS, THEWUR, and CWUR results for validation. Furthermore, Top20 only publishes its ranking of the best 20 universities worldwide, and data are available only for its most recent publication (2018). Given this data collection methodology and scope of availability, Top20 is useful in our analysis for completing the picture of university performance by shedding light on recent university impact relative to educational quality and research.

Description and Weighting of Indicators by Ranking

graph1.jpg

III. Bird’s-Eye View of University Performance

General commonalities and trends

From the particularities described above about each ranking’s background, purpose and methodology, one would expect to find large disagreements between their end results. However, as the graph below illustrates from a bird’s-eye view, there is little variation in the Ivies’ rankings relative to each other across publications. As can be inferred from the description and weighting of indicators by ranking, the large similarities are a consequence of the overlap in what the rankings are trying to measure. Across all five rankings considered, the Ivies fall into three distinct groups: top performers (Harvard University and MIT), bottom performers (Brown University and Dartmouth College), and middle performers (Columbia, Cornell, Princeton, U-Penn and Yale). Among the top performers, CWUR and Shanghai consistently place Harvard above MIT; MIT consistently beats Harvard in the QS ranking, and, in the THE ranking, MIT only beats Harvard from the 2016 edition onwards. At the other end of the Ivy League competition, dominance is even clearer—Brown consistently outperforms Dartmouth in the Shanghai, QS, and THE rankings. Only in CWUR does Dartmouth outperform Brown, between 2014 and 2017, before slipping back to 9th in 2018.  

 Average Ivy Rankings Across Available Years

 
graph2.jpg
 

Most of the action occurs in the middle group. An average across years and rankings places the group in this order: 3rd Princeton, 4th Yale, 5th Columbia, 6th Cornell and 7th U-Penn. However, there is large variation: in 2018 alone, THE ranks U-Penn 4th, while QS ranks Cornell 4th, above Yale and Columbia. Despite variations in Ivy League rankings across publications, specific universities tend to move in the same direction across most rankings at the same points in time. In 2012-2013, for example, Columbia improves from 5th to 3rd in CWUR, and from 6th to 5th place in THE. In the same period, U-Penn climbs from 6th to 5th in QS and from 7th to 6th in THE. In 2015-2016, Cornell improves from 6th to 5th in CWUR, and from 7th to 5th in QS. Finally, in 2017 Yale falls from 4th to 5th in both the QS and THE rankings, and from 5th to 6th among the Ivies in the Shanghai ranking. While discrepancies among rankings are often a consequence of different views of, and methodologies for measuring, university performance, consistent movements across rankings point to significant changes in a university that are broadly considered influential for its overall performance. The changes that drove the consistent movements mentioned above will be examined when we discuss indicators further ahead. For now, let’s focus on when different views and measuring methodologies result in diverging trends and discrepancies.  
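Before moving on, it is worth making explicit how the ordering above was obtained: each university's Ivy-relative position (1st through 9th) is simply averaged over every ranking and year for which data are available. Below is a minimal sketch of that computation; the rank lists are hypothetical placeholders standing in for the underlying data, not the actual published positions.

```python
# Minimal sketch of the averaging behind the middle-group ordering:
# each university's Ivy-relative rank (1-9) is averaged over every
# (ranking, year) combination available. Ranks shown are placeholders.
from statistics import mean

ranks_by_university = {
    # {university: [rank in each (ranking, year) combination]}
    "Princeton": [3, 3, 2, 3, 4, 3],
    "Yale":      [4, 5, 4, 4, 4, 5],
    "Columbia":  [5, 3, 6, 5, 5, 5],
    "Cornell":   [6, 7, 6, 5, 6, 6],
    "U-Penn":    [7, 6, 7, 8, 6, 7],
}

average_rank = {u: mean(r) for u, r in ranks_by_university.items()}
for university, avg in sorted(average_rank.items(), key=lambda kv: kv[1]):
    print(f"{university}: average rank {avg:.2f}")
```

With these placeholder values the ordering reproduces the one quoted above (Princeton, Yale, Columbia, Cornell, U-Penn), but it is the averaging step itself, not the numbers, that matters here.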

Diverging trends 

When, in a given period, a university moves in opposite directions in different rankings, the particularities of each ranking become most relevant. A prospective student or university administrator will want to know precisely what is being done better or worse. For the Ivy League universities in our selected rankings, this happens in a few cases. While Columbia climbs from 5th to 3rd out of the nine Ivies between 2012 and 2014 in CWUR, it falls from 5th to 6th in the QS ranking. Similarly, between 2015 and 2018, while U-Penn falls from 5th to 8th in QS, it improves from 6th to 4th in THE. How can a university’s performance improve in one ranking and worsen in another during the same period? What are the specific factors that drive a university’s performance up or down in these rankings? In the next section, we take a first look at how a university’s performance is estimated. The set of graphs on the next page shows university performance per ranking across all available years.

IV. Looking Inside a Ranking

Your usual indicators

Despite our recurrent focus on discrepancies, the agreement between rankings is much larger than the disagreement—for example, we were able to group the universities into the same three performance tiers across all rankings. This is mainly because, although nuances vary, there is general agreement on what makes a good university: (i) having faculty that excel in their field, (ii) preparing students to excel in their fields, and (iii) producing valuable academic research. All rankings try to measure how a university performs at each of these tasks; they differ only in their methodology for collecting such measures and in how much weight they give to each. This section describes the overlaps between rankings for each of these three components. 
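Mechanically, each of these rankings reduces a university to the same operation: multiply every indicator score by its assigned weight and sum the results into a composite. The sketch below illustrates this with weights that follow the QSWUR structure referenced in this chapter (surveys 50%, faculty-to-student ratio 20%, citations per faculty 20%, international ratios 10%); the indicator scores themselves are hypothetical placeholders, not published values.

```python
# Minimal sketch of a weighted composite score, assuming QSWUR-style weights.
# The indicator scores below are hypothetical and for illustration only.

qs_style_weights = {
    "academic_reputation": 0.40,     # survey-based
    "employer_reputation": 0.10,     # survey-based
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

hypothetical_scores = {
    "academic_reputation": 100.0,
    "employer_reputation": 100.0,
    "faculty_student_ratio": 92.0,
    "citations_per_faculty": 99.5,
    "international_faculty": 88.0,
    "international_students": 91.0,
}

def composite(scores: dict, weights: dict) -> float:
    """Weighted sum of indicator scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * scores[k] for k in weights)

print(round(composite(hypothetical_scores, qs_style_weights), 1))
```

The differences between rankings, then, come down to which indicators appear in the dictionary and what weight each carries, as the following subsections describe.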

  • Faculty

Quality of faculty is the most widely used and most heavily weighted component for estimating a university’s performance across rankings. This component carries a weight of 40% (out of 6 indicators) in QS, 30% (out of 5 indicators) in THE, 20% (out of 6 indicators) in Shanghai, and 15% (out of 7 indicators) in both CWUR and World Top 20. In addition, QS, THE, and Shanghai attribute weight to the quantity of faculty relative to students of 20%, 4.5% and 10%, respectively. Shanghai and CWUR also both rely on the awards earned by faculty members to estimate their quality. Despite small variations in quantification, the estimate is based on the number of Nobel Prizes and Fields Medals won in recent periods. Let’s see how this commonly evaluated component drives the main university performance trends across rankings.

The general trends described before are maintained: Harvard and MIT consistently rank as the top two performers and Dartmouth and Brown consistently rank at the bottom of all rankings for quality of faculty. Furthermore, a deeper look shows that university performance on quality of faculty follows the same trends across rankings that consider a common metric: awards earned by faculty. The graphs below show university performance on quality of faculty in the Shanghai and CWUR rankings between 2011 and 2019. The resemblance in university performance across these indicators is high. In the Ivy universe, Harvard University is the undisputed best performer, MIT and Princeton are in close competition for the second spot, Columbia is unanimously in 4th place, Yale and Cornell compete for 5th, and there is agreement on U-Penn as 7th, Brown as 8th and Dartmouth as the last of the nine. This resemblance of individual university performance across rankings illustrates the case in which differences in methodology are not significant—here because the different methodologies capture the same dimensions of the evaluated component. However, it can also be the case that the difference in performance across universities is so large that their rankings are not affected by slight differences in scoring.

  • Students

Given that this chapter takes a university’s mission to be preparing young people for their professional careers, students should be at the center of any measurement of university performance. Indeed, a measurement of the professional performance of a university’s alumni is an important component of most of the rankings considered: it carries a weight of 30% in CWUR, 20% in Top20, and 10% in QS and Shanghai. However, unlike faculty, who are mostly academics specializing in particular fields, university students are not restricted to any professional career path or field. Evaluating alumni professional performance under a single metric is thus a daunting task. This is reflected in the fact that the four rankings mentioned in this paragraph use different metrics to evaluate alumni professional performance: Top20 measures alumni employability, QS surveys employers for their opinions, and Shanghai measures awards earned by alumni (as explained above for faculty), while CWUR measures both awards received by alumni and the number of alumni who have held CEO positions. In sum, although there is consensus on the importance of alumni professional performance in evaluating universities, discrepancies in measurement yield varying university rankings. For example, while U-Penn averages 2nd place among the Ivies in CWUR’s indicator for alumni employment, it averages 7th in the corresponding QS and Shanghai indicators; conversely, while MIT averages 2nd among the Ivies in QSWUR and ARWU, it ranks 4th in CWUR.

  • Research

A university’s capacity for research production is important in determining its value. Research production can be regarded as complementary to student formation, but it can also be considered an end in its own right. Regardless, all rankings in this study consider this component, give it significant weight, and rely on similar indicators. Rankings measure research performance based on the income it generates, the number of publications a university produces, the frequency with which a university’s research is cited, and the quality and influence of a university’s publications and of the publications that cite them. The combined weight that each ranking gives to this component is the largest by a considerable margin: research indicators weigh a combined 60% in Shanghai, 55% in CWUR, 50% in THE, and 20% in both QS and Top20. The overlap in the metrics employed to measure this component leads to parallel trends across rankings for these indicators (shown in the graphs). However, the 40-percentage-point range between the highest and lowest weight given to the component is also a source of variation in overall university scoring.

Your usual top performer

Given the commonalities in evaluation methodologies and in university performance across rankings, there must also be a common profile for a top performing university. Although university rankings invest significant time and resources in collecting and processing data to evaluate universities, a fair amount of this information is publicly available and common knowledge. For example, ranking publications consult academic awards databases to estimate the quality of a university’s faculty and teaching, but the general public may also be familiar with recent Nobel laureates and their affiliations. Hence, top performing universities do not only excel in obscure metrics and calculations, but also in aspects that have high visibility and contribute to their public profile. Below are some specific characteristics and dimensions of what constitutes a top performing university.

A highly visible indicator for evaluating the quality of faculty, for example, is the Nobel Prize. The historical ranking by university goes as follows: Harvard is alma mater to 36 laureates, MIT to 20, Columbia to 18, and 14 graduated from Princeton. The universities from which CEOs of top performing companies graduated are also often mentioned in the media. For 2018, the UK Domain reports that Harvard has 15 CEOs in top-flight companies, followed by U-Penn with 6. Lastly, the extent to which universities are cited as the source of specific research, whether in other research or in the mass media, is also correlated with their standing across rankings. IDEAS/RePEc, the largest freely available bibliographic database for economic literature, scores institutions by the number of publications and the number of times they are cited. The top three performers from our selection are ranked in the following order: Harvard (78 authors), MIT (48 authors) and Princeton (53 authors). These examples illustrate (i) that, despite varying degrees of methodological sophistication, ranking results are not far from what may be gathered by an average media consumer, and (ii) the high levels of production required for a seat among the top performing universities in the world.  

V. Diving In

We now know that rankings may serve different purposes and measure university performance differently, but that this only results in small variations from a general common trend. Nevertheless, the potential richness of an individual ranking, and its usefulness relative to the other four analyzed here or the dozens now publicly available, lie precisely in these discrepancies. The paragraphs below analyze both the similarities that bind these universities together as the Ivy League and the discrepancies in rankings and methodologies that define and characterize their individual strengths and particularities.

ARWU.png
THEWUR.png
CWUR.png

Disagreement at the peak: Harvard or MIT?

While ARWU, CWUR and Top20 place Harvard as the top performer for all years analyzed, QSWUR gives that position to MIT. In THEWUR, MIT outperforms Harvard only from 2016 onwards. What explains the opposite results between QSWUR and ARWU, CWUR and Top20? What shift is THEWUR picking up that leads to the alternation between Harvard and MIT? 

As stated earlier, QSWUR gives a 50% weighting to surveys. This large weight attributed to subjective, non-public responses could be a source of ambiguity if MIT’s advantage were driven by the surveys—say, if a large number of questions addressed a specific component at which a particular university outperforms the others but which the reader would consider irrelevant, or if opinion-based respondents favored universities with which they identify. However, both MIT and Harvard obtain the maximum possible survey scores every year. MIT outperforms Harvard in faculty-to-student ratio (20%), citations per faculty (20%), and international faculty ratio (5%). Though Harvard may outperform MIT in the absolute number of citations and international faculty, the aforementioned indicators (which together account for 45% of the total QSWUR score) are all ratios—hence MIT benefits from being close to Harvard in volume of production while being a significantly smaller school (2015 enrollment was 28,297 at Harvard and 11,301 at MIT). Whereas other rankings give high importance to research production and influence or impact in the community, QSWUR targets international students and therefore incorporates a dimension of the student experience. Keeping pace on absolute indicators while remaining a medium-sized school is a dimension at which MIT undoubtedly excels. 
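The size effect is easy to see with back-of-the-envelope arithmetic. The sketch below uses the 2015 enrollment figures cited above together with hypothetical citation counts (and uses enrollment as a stand-in for faculty headcount, which the text does not give): a smaller school with comparable absolute output scores far higher on any per-capita ratio.

```python
# Why ratio-based indicators favor smaller schools.
# Enrollment figures are the 2015 numbers cited in the text;
# the citation counts are hypothetical, chosen only to show the effect.

schools = {
    "Harvard": {"enrollment": 28_297, "citations": 100_000},  # citations hypothetical
    "MIT":     {"enrollment": 11_301, "citations": 90_000},   # citations hypothetical
}

for name, s in schools.items():
    per_capita = s["citations"] / s["enrollment"]
    print(f"{name}: {per_capita:.2f} citations per enrolled student")

# Harvard leads in absolute volume (100,000 vs 90,000), yet MIT's
# per-capita figure comes out roughly 2.3x higher (about 7.96 vs 3.53),
# mirroring how QSWUR's ratio indicators can favor the smaller school.
```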

The shift between MIT and Harvard in THEWUR is mostly explained by Harvard’s drop in the Teaching indicator, from a score of 92.9 in 2015 to 83.6 in 2016, while MIT slightly improved from 89.1 to 89.4 and maintained this advantage through 2019. The Teaching indicator is composed of several sub-indicators: a reputation survey (15% weight), staff-to-student ratio (4.5% weight), doctorate-to-bachelor’s ratio (2.25% weight), doctorates-awarded-to-academic-staff ratio (6% weight), and institutional income (2.25% weight), which together account for the pillar’s 30% weight. However, individual university performance across these sub-indicators is not publicly available, making it difficult to understand precisely what drives the change in positions between Harvard and MIT in this ranking. 

In addition, given the heavy weight placed on opinion-based surveys, it is possible that scores pick up popular perceptions about universities based on dimensions that are not explicitly laid out in a ranking’s methodology. Two such dimensions are innovation and funding—given close to zero weight in the rankings analyzed in this chapter but much more importance in other publications. The Reuters Top 100 Innovative Universities ranking, for example, includes indicators on patents by university and placed MIT above Harvard in 2018. Similarly, The Center for Measuring University Performance ranks universities by their expenditure on federal research between 2006 and 2015, and places both Columbia and U-Penn above Harvard and MIT. This further stresses the point that although rankings may agree on general trends, there exist slight but important disagreements on the importance given to particular components of a university’s performance. 

Consensus third: Princeton 

Even more consistent than Harvard’s and MIT’s first and second places across rankings is Princeton’s third. For 2018, all five rankings place Princeton in third place. Although some fluctuation is present, and Princeton slips to 4th or climbs to 2nd in given years in some rankings, only in 2012-2013 do we see conflicting signals: CWUR and THEWUR record the same drop from 3rd in 2012 to 4th place in 2013, yet the underlying indicators move in opposite directions. In THEWUR, Princeton’s scores for Teaching (30% weight), Citations (20% weight) and Industry Income (3% weight) improve, while in CWUR its performance slips in Quality of Faculty (15% weight), Publications (15% weight), Influence (15% weight) and Citations (10% weight). 

Given the difficulty in analyzing the Teaching component of THEWUR explained above, the most important common element is an apparent worsening of research-related performance. This should be most evident in a research-centered ranking such as Shanghai’s ARWU, yet Princeton’s fall in 2013 is not reflected in its position there. However, its score does decline (though not by enough to cost it 3rd place) in the Awards (20% weight) and Papers Indexed (20% weight) indicators, confirming the trend in these areas. Hence, we can conclude that Princeton’s decline in the rankings from 2013 onwards was primarily due to a decrease in its share of faculty recently awarded international prizes or Fields Medals, and in the number of times its research was cited and indexed in other publications, relative to close competitors such as MIT and Columbia.

Columbia and Yale

Another very even competition across our selected rankings is for fourth place, between Columbia and Yale. While CWUR has Columbia as 3rd and Shanghai’s ARWU and Top20 have it as 4th, Columbia doesn’t make it past 5th in THEWUR or QSWUR. By contrast, THEWUR and QSWUR have Yale in 4th for most of the available years. Interestingly, even though THEWUR and QSWUR parted ways over disagreements about what they intended to measure and how, it would appear that both maintained a common core that makes them compare Yale and Columbia very similarly to each other but differently from the other rankings. Alternatively, it could be that each is evaluating a different aspect of these universities but that the result is the same on average.

In the research-centered ARWU, Columbia outperforms Yale across all components in every year. Similarly, in CWUR, Columbia only yields its dominance to Yale in the component for Quality of Education (15% weight), which is measured by the number of alumni winning awards. On the other hand, in both THEWUR and QSWUR, Columbia only beats Yale in one indicator, but it does so systematically in every available year and it is the same indicator in both rankings: how often the academic work of its faculty is cited (20% weight in both rankings). This hints at Yale performing better in rankings that give more weight to education and Columbia performing better in those that give more weight to research. However, CWUR attempts to strike a balance between research and education just like QSWUR and THEWUR, and yet it aligns with ARWU when comparing Columbia and Yale. This is because, while QSWUR and THEWUR assess quality of education mostly through surveys, CWUR also relies on the number of alumni holding CEO positions in top firms, and Columbia is favored by this. A possible explanation is that students seeking corporate careers are attracted to a New York-based university for networking, in the same way that Washington, DC attracts public sector-oriented students. In fact, the Columbia Spectator reported in 2016 that Columbia students report higher stress levels than their Ivy League peers, citing location and pressure to network as probable causes.  

Cornell and U-Penn

The duel between Cornell and U-Penn is not specifically for the next position (6th) in the Ivy ranking, as it plays out over a range of positions—both universities oscillate between 4th and 7th place across the selected rankings. ARWU and Top20 have Cornell as 6th and U-Penn as 7th for all available years, while U-Penn falls over the years in QSWUR, climbs in THEWUR, and oscillates between 6th and 7th in CWUR. Cornell moves in the opposite direction in QSWUR, progressively climbing positions, while it drops once in THEWUR and oscillates between 5th and 7th in CWUR. 

Unlike the previous rivalries analyzed in this section (Harvard vs. MIT and Columbia vs. Yale), Cornell and U-Penn perform very similarly across all categories analyzed. The fact that they are constantly displacing each other in the rankings is as much a reflection of each university’s own performance as of its rival’s. For example, Cornell’s improvement in the QSWUR ranking of Ivy League universities between 2016 and 2019 is mirrored by U-Penn’s movement in the opposite direction. However, comparing both universities across individual indicators shows that they follow similar trends, with Cornell rising more steeply when the trend is improving and U-Penn declining more sharply when both universities fall. Another case that illustrates this point: between 2017 and 2018, U-Penn moves up in THEWUR from 5th to 4th and in CWUR from 8th to 7th, without any indication of improvement across its own indicators. In both rankings, however, there is a stronger decline in Cornell’s performance across several indicators. The largest differentials, and the foundation of U-Penn’s improvement, are that while U-Penn’s THEWUR Teaching score falls only from 85.9 to 83.7 (2.2 points), Cornell’s falls from 79.7 to 76.2 (3.5 points); and while U-Penn drops in CWUR’s Employment indicator from 4th to 9th place in the U.S., Cornell has the longer fall, from 17th to 28th. 

The comparison of Cornell and U-Penn across these rankings does not yield much clear evidence about their relative performance. It is important, however, in illustrating that rankings can exaggerate differences in universities’ scores. Ranking positions only tell us which university performs above another; they say very little about how much better or worse one university is than another. Therefore, although the actual difference in performance between two universities on any particular component may not be significant, a ranking will place one on top of the other and hide how close together or far apart they actually are. For this reason, it is important to consider how universities score and not only their relative positions in these rankings. Furthermore, in close cases such as Cornell vs. U-Penn, a more detailed look also allows readers to attribute more or less importance to aspects that appeal to them personally. For example, if a prospective student had a strong preference for an urban rather than a rural setting, this alone could be enough to tip the balance in favor of U-Penn.  
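The point can be made concrete with a toy example (the scores below are hypothetical and not drawn from any of the five rankings): identical one-position gaps in a ranking can correspond to very different gaps in the underlying score.

```python
# Toy illustration (hypothetical scores) of how rank positions hide
# the size of score differences: position says who is ahead, not by how much.

scores = {"University A": 85.4, "University B": 85.1, "University C": 78.0}

ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for position, (name, score) in enumerate(ranked, start=1):
    print(f"{position}. {name} ({score})")

# A outranks B on a 0.3-point difference, while B outranks C on a
# 7.1-point difference -- the same one-position gap, very different
# performance gaps.
```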

Race away from the bottom: Brown and Dartmouth

The last two places among the Ivies plus MIT are very clearly defined. While neither university makes the Top20 ranking, in ARWU, THEWUR and QSWUR Brown outperforms Dartmouth every year since 2012. CWUR, however, tells a different story: there, Brown only outperforms Dartmouth in 2013 and 2018. By now, it is clear that when ARWU departs from the rest, research performance is the distinguishing feature. However, when there is divergence between THEWUR, QSWUR, and CWUR—which distribute indicator weight more evenly across research and education—the differences in what is being measured and how are subtler. What are the elements of CWUR in which Dartmouth outperforms Brown that are not being picked up by the other rankings?

As shown in the graph above, the answer is clear: Dartmouth largely outperforms Brown in CWUR’s indicators for Quality of Education (15% weight) and Alumni Employment (15% weight). As mentioned above, these indicators measure the number of alumni winning major international awards and the number of alumni who have held CEO positions in top firms. In contrast, THEWUR and QSWUR base their evaluation of these components mostly on surveys. Hence it is clear that, at least for a select group of graduating alumni, Dartmouth is a better platform for achieving high-level recognition (receiving major international awards or becoming CEO of a top firm). However, the measures employed by CWUR to reach these conclusions cover a narrower range of what might comprise an optimal university education. On the other hand, while opinion surveys often provide a broader reflection of education quality, a reader of a survey-based ranking has no access to respondents’ analytical process or their justification for the responses. 

VI. Discussion and Conclusion

In the public’s mind, the factors that hold the Ivy League universities together as a group are stronger than the individual characteristics that set them apart from each other. Although this chapter has focused mostly on how the Ivies perform differently across individual rankings and indicators, it should also have suggested that it is along these same indicators that these universities rank considerably better than most others. 

Although the Ivy League was originally a football conference, the characteristics that brought these universities together extended beyond athletics. In addition to their rivalry in sports, Ivy League universities identified with each other as part of the first generation of American universities, through academic excellence, and through a common New England legacy expressed not only in geographic proximity but in similar architecture and tradition. From 1954 (when the Ivy League was formed) until now, however, membership in this group has transcended these common elements to become synonymous with being a top world university. Hopefully, by now it is clear what it means to be a top university and how specialized rankings across the world determine how “top” each university is.

The rankings considered for this analysis are useful tools for pinning a university’s performance down to concrete indicators. As we have seen, there is extensive common ground on the elements these rankings consider important components of a university’s performance, such as the quality of research, teaching, and students’ future success. This common ground is the reason why general views about university performance are confirmed across most rankings. However, when embarking upon a more detailed analysis of university performance, it is most useful to understand where and how these rankings differ.

In learning, for example, that the Shanghai Ranking (ARWU) was part of an effort to position China among the leaders in scientific research, we understand that its evaluation of universities focuses mostly on the research they produce. Similarly, by understanding that Top20 stems from a nonprofit’s mission to spread educational opportunity in the world, we can assume that universities will be assessed on their impact beyond their walls. Most importantly, evaluating the evaluators lets us draw greater value from what they tell us about universities, and lets us appreciate the Ivy League and all universities for their distinctive features, allowing each of us to interpret them from our own perspective and according to our own experience.

Bibliography

  1.  Study International. About. (2018). http://www.studyinternational.com/about 

  2.  Shanghai Ranking. About ARWU. (2018) http://www.shanghairanking.com/aboutarwu.html 

  3.  The Economist. “How China could dominate science”. (2019) https://www.economist.com/leaders/2019/01/12/how-china-could-dominate-science 

  4.  Top Universities. About QS. (2018). https://www.topuniversities.com/about-qs 

  5.  Tertiary Sector Performance Analysis, Ministry of Education of New Zealand. The latest performance of New Zealand universities in international rankings. (2018) 

  6.  Times Higher Education. About Us. (2018). https://www.timeshighereducation.com/about-us 

  7.  Holmes, Richard. “The QS World University Rankings and THE World University Rankings by Subject Group and Subject”. Perspektywy Education Foundation. (2005)

  8.  Center for World University Rankings. About CWUR. (2018). https://cwur.org/about.php 

  9.  Bhardwa, Seeta. “Where do the top CEOs go to university?”. Times Higher Education (2018). https://www.timeshighereducation.com/student/news/where-do-top-ceos-go-university

  10.  The UK Domain. (2018). https://www.theukdomain.uk/top-flight-ceos/

  11.  StateUniversity. “College Rankings - Top 500 Ranked Colleges - Highest Total Enrollment”. p. 6. (2016). http://www.stateuniversity.com/rank/tot_enroll_rank/6

  12.  Ewalt, David. “Reuters Top 100: The World's Most Innovative Universities – 2018”. Reuters (2018). https://www.reuters.com/article/us-amers-reuters-ranking-innovative-univ/reuters-top-100-the-worlds-most-innovative-universities-2018-idUSKCN1ML0AZ

  13. Spitz, Jessica. “Are Columbia students the most stressed in the Ivy League?”. Columbia Spectator (2016) http://features.columbiaspectator.com/news/2016/04/14/are-columbia-students-the-most-stressed-in-the-ivy-league/