Mark Olivito

College Rankings: Houston, We Have a Problem!

Updated: Mar 17, 2022

We are a society that loves brand names.

When you hear certain business names you immediately have a picture in your mind.

First, I asked my high school freshman: "Dom, name me 3 brand names that come to mind first."

"Google. Apple. Microsoft."

Let's call this group "technology."


A few other categories:

Cars: Mercedes. Tesla. Corvette.

Fast Food: McDonald's. Burger King. Chipotle.


How about higher education? Some random examples:

Harvard. Yale. Rutgers. Camden Community College. Northeastern University.


All of the above are brands, and elicit certain connotations and thoughts in your mind about what they represent.


Do you know what else our country loves? RANKINGS.

The Fortune 500. The Forbes World's Richest. Etc.


In the world of higher education, I would argue that the entire country has become obsessed with the annual rankings of colleges and universities, in particular the US News & World Report rankings. College leadership watches them closely, whether they admit it or not.

U.S. News & World Report Annual College Rankings

While some simply monitor, others will re-orient their entire strategy to move up the rankings. Some will even border on outright fraud to game the rankings. How widespread fraud or "mis-reporting" is remains hard to tell, but it absolutely exists.


Parents, students, teachers and alumni all root for their alma mater to move up or at least hold its position.


Rankings are both a blessing and a curse. By definition, they try to "quantify" schools across a range of factors so that people have a common basis for evaluating a group of schools. More on that later, but in general, I applaud the intent of trying to OBJECTIVELY understand what a version of "truth" really is.

The topic of college rankings is near and dear to my heart, for a couple of reasons.


1) My beloved alma mater, Northeastern University, has by any measure catapulted up the national rankings, from barely existing on them in my day ('92-'97) to a top 50 national university today.


2) Setting personal opinions aside for the moment as to the value of college rankings, there is little doubt that Northeastern University is a case study in "reverse engineering" the university ranking contest.


I talk about reverse engineering a bunch on Brick by Brick: understanding the success drivers people look for, working backwards to master them, and going all in on them. NU's leadership appears to have taken exactly this approach, one that is strategic, intentional and borderline "all in" on improving their ranking. Max Kutner published a solid article in Boston Magazine in 2014 that details the effort, which you can review.


But I digress.


If we can acknowledge that rankings matter, the question that I think really matters is this:


"Does a school's ranking (as measured by US News & World Report) actually map to the core reason why the student's actually pursue a college education to begin with?" In other words, what is important to the customers of the College, and does the schools delivery against that? If the college does, then great! But if they do not, shouldn't their ranking be lower? AND.....given the rapidly increasing cost of college, financed by debt, more and more people are asking "is it worth it?" Shouldn't a ranking try and get at this basic question?"

To understand this, why don't we first look at why students choose to pursue a degree in the first place. Judging by the survey data below from newamerica.org, I think most would agree the top 3 reasons are practical, economic reasons related to "better career and earning opportunities."


Also worth noting, plenty of "altruistic" reasons are also cited: "Becoming a better person, self confidence, set an example, learn more about the world, meet new people, my parents wanted me to go." Nothing wrong with these reasons, but again, the stakes are high with potentially crippling debt. You can become self-actualized and more cultured in a number of less risky ways, so let's focus on the top 3 pragmatic reasons the survey revealed.

Source: https://www.newamerica.org

Back to the rankings: what do they actually measure? I'll paste in the methodology and weighting factors, and give my critical questions/thoughts on the relevant sections. I'll also give a layman's point of view AS IF I were sitting on the leadership committee of a university trying to improve its ranking. In addition, I'll raise some questions related to the state of higher education today, in particular rising tuition and debt levels.


Here are the broad categories with their weightings (specific breakdowns within each category follow below); a quick sketch of how a weighted composite like this comes together follows the list.

* Graduation/Retention: 30%

* Undergraduate academic reputation: 20%

* Faculty resources: 20%

* Financial resources per student: 10%

* Student selectivity: 7%

* Graduate indebtedness: 5%

* Social mobility: 5%

* Average alumni giving: 3%
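
Just to make the mechanics concrete, here is a minimal sketch (Python) of how a weighted composite like this gets computed. The weights are the published category weights above; the 0-100 category scores are invented, hypothetical numbers (US News standardizes each factor before weighting, and the exact scaling isn't public), so treat this purely as an illustration of the arithmetic.

# Weighted composite sketch. Weights mirror the broad categories above;
# the 0-100 scores are invented for illustration only.
weights = {
    "graduation_retention": 0.30,
    "academic_reputation": 0.20,
    "faculty_resources": 0.20,
    "financial_resources_per_student": 0.10,
    "student_selectivity": 0.07,
    "graduate_indebtedness": 0.05,
    "social_mobility": 0.05,
    "alumni_giving": 0.03,
}
school_scores = {  # invented, standardized 0-100 scores for one school
    "graduation_retention": 88,
    "academic_reputation": 72,
    "faculty_resources": 65,
    "financial_resources_per_student": 70,
    "student_selectivity": 80,
    "graduate_indebtedness": 55,
    "social_mobility": 60,
    "alumni_giving": 40,
}
composite = sum(weights[k] * school_scores[k] for k in weights)
print(f"Composite score: {composite:.1f}")  # roughly 73 with these invented inputs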


My overall take:
  • The US News & World Report has become the de-facto "standard" in the world of measuring the quality of higher education. But in my mind it is VERY flawed, for the following reasons:

  • They totally ignore the primary constituency's (the students') reason for going to college in the first place: career advancement, earning power, etc. They are not measuring employment outcomes: % employed in a job of their choice, starting pay, speed of employment, speed of debt repayment. Those are the basic inputs to a basic ROI, which starts to get at "is it even worth it?" (a simple payback sketch follows this list).

  • They are not measuring the secondary audience for a school's services: employers.

  • They have a SIGNIFICANT bias towards rewarding small classes taught by full-time faculty with the highest degree, and the more those faculty earn, the better the ranking. And if the school spends more per student, even better.
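
And here is the simple payback sketch referenced in the list above. Every number is invented; it only illustrates the kind of outcome math ("is it even worth it?") that the rankings never attempt.

# Back-of-envelope payback sketch with invented numbers. None of these figures
# come from the rankings or from any real school.
total_cost = 120_000          # four-year all-in cost (hypothetical)
starting_salary = 55_000      # first job out of school (hypothetical)
no_degree_salary = 35_000     # earnings baseline without the degree (hypothetical)

annual_earnings_premium = starting_salary - no_degree_salary
simple_payback_years = total_cost / annual_earnings_premium
print(f"Earnings premium: ${annual_earnings_premium:,}/year")
print(f"Simple payback on total cost: {simple_payback_years:.1f} years")
# With these made-up inputs: a $20,000/year premium pays back the cost in 6.0 years.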


A simple example of that faculty bias: 2 different universities teaching a marketing strategy course. Consider:

School A: A class of 19 students, taught by a full-time professor of marketing with a PhD, earning 15% more than the industry average.
School B: The same class, but 45 students, taught by a PART-TIME adjunct who happens to be a full-time chief marketing officer at a prominent firm local to the university.
Which school/class delivers a better outcome for the students? Is that school higher or lower ranked?

First, the outcome of learning and satisfaction is not even measured, so let's take that off the table. BUT the characteristics of the class and the teacher ARE.


School A, according to the US News & World Report methodology, would score higher on dimensions that collectively make up 30% of the rankings weighting. So for School A....

  • The professor is full-time; that counts more.

  • The professor has a PhD; that counts more.

  • The professor is well compensated; that counts more.

  • The class size is 19, and that counts more.

That's 4 factors on which School A beats School B. And in my opinion, very little of it is relevant.

Why?


At the core of education is the student and the teacher. The teacher matters MASSIVELY. Let's envision which class would have the better outcome, let's say better overall learning, in the eyes of those who matter MOST: the students.


A full-time, well-compensated PhD in marketing vs a Chief Marketing Officer (CMO) from a prominent local business who chooses to teach part time. If given the choice, who would the majority of students pick to learn marketing strategy from: the one who is studying it full time, or the one who is LIVING it?


My personal experience over 40 or so undergrad classes plus executive education classes? Comparing my level of learning and overall engagement from the "teacher-practitioner" (i.e. the CMO coming into the class) vs the lifelong academic, give me the one coming back into the classroom after spending a day in the trenches. I suspect I would not be in the minority.

This would be an easy experiment to run: offer this class at the same school with 2 different teachers, control for grade inflation, and force a bell curve distribution. I would be shocked if the CMO didn't generate better outcomes for the students. But no such experiment will be run when the scorecard does not incent such outcomes.


Keep in mind, why would a CMO actually want to invest their valuable time to begin with? Presumably it wouldn't be "worth their time" unless there are other intangibles, possibly recruiting and getting a sneak peek at up-and-coming talent. Wouldn't a practitioner with hiring ability be more valuable in the eyes of students, given the very reasons they are attending college to begin with?


None of this matters in the eyes of the rankings. College A outranks B.


The reality is quite ironic: in the real world, the CMO may not even be CONSIDERED to teach unless they have a Master's degree. Not always, but search for adjunct openings and rarely will you not see a Master's requirement.


Let me cite one small example: Professor of Marketing Scott Galloway of NYU. In my opinion, he is an entrepreneur/investor/board member/podcast co-host who just happens to be a professor. That level of experience in front of a group of students is something the typical full-time academic simply can't match. Here are some highlights from his Prof G show.


Other thoughts on the rankings.

These rankings seem to have a built-in "soft cap" on increasing the overall supply of available seats or students. Everything is measured "per student" and full time. Artificially constraining supply has one impact on price: it pressures it NORTH, exactly as we have seen over the decades.

Next, there's a STRONG value placed on graduation rates, which I DO think makes sense. HOWEVER, it is well known that 40% of students drop out of college. When you analyze why that is, many of the reasons cited involve some type of financial pressure, more so than performance or lack of grades.


So what type of behavior COULD this incent among college admission officers? Take 2 students: one who will go SIGNIFICANTLY into debt vs another with full family resources to fund college without aid. Does it not pass the common sense test that we may be INCENTING schools to have a bias towards the latter student?


Last, there is clearly a bias towards "the more spending per student the better." What about the overall financial health of the university? In other words, if a school is borrowing LARGE sums of money (let's say $100MM+) to pop up new buildings, or in some cases to fund operating losses, where is that accounted for? Short answer: it's not. But what is the value of a degree when/if a school goes out of business? One only needs to look at college bond ratings to see a fairly large spread between universities with investment grade bonds and those below junk level. One would think that would be a factor; it's measurable and third-party validated. If the opinion of a college president in Iowa impacts my NJ school's ranking, you would think Moody's rating my debt below junk bond status, indicating I have long-term viability issues, would be important as well.


Rigor, like beauty, is in the eye of the beholder.

Here are my specific comments on each ranking factor below.


Graduation rate average- 17.6%: This is the percentage of entering first-year students who graduated within a six-year period, averaged over the classes entering from fall 2011 through fall 2014. This excludes students who transferred to the school after their first year and then graduated.

If a school submits fewer than four years of graduation rate data, the average is based on the number of years the school submits. A higher average graduation rate scores better than a lower graduation rate in the ranking because completion is integral for students to get the most value in their careers from their education.


Generally speaking, it is hard to argue that schools with higher graduation rates shouldn't rank better than schools with lower ones. But a couple of points. Is it not a bit ironic that the bar is now measured within 6 years? The standard used to be 4 years! Now, despite skyrocketing costs, we are calling graduating within 6 years success.


Is there NOT an unintended consequence that could be set up by measuring and rewarding high graduation rates? Grade inflation is a fairly accepted phenomenon that has occurred over time. With grade inflation established, I'm curious whether "graduation inflation" is also being built into the system.


First-year student retention rate average -4.4%: This represents the percentage of first-year students who returned to the same college or university the following fall. The average first-year student retention rate indicates the average proportion of the first-year classes entering from fall 2016 through fall 2019 who returned the following fall.

If a school submits fewer than four years of first-year retention rate data, the average is based on the number of years that a school submits to U.S. News. A higher average first-year retention rate scores better than a lower average retention rate in the ranking model because students staying enrolled demonstrates a school's continued appeal.


All things being equal, schools with higher returning-student percentages will outrank schools with lower ones. In general, I think this is hard to argue with being in the mix of factors. Customers who come back vs exit matter. But does the opinion of other school administrators matter 4.5x more than returning students (20% weighting vs 4.4%)? According to the ranking, the answer is yes.


Graduation rate performance -8% : This is a comparison between the actual six-year graduation rate for students entering in fall 2013 and fall 2014 and the predicted graduation rate for the proportion who graduated six years later in 2019 and 2020. For the second consecutive year, the graduate rate performance indicator is based on the average of the two most recent graduating classes – for this ranking, it's of the fall 2013 and fall 2014 cohort. The predicted graduation rate is based on characteristics of the entering class, as well as characteristics of the institution.

If the actual graduation rate is higher than the predicted rate, the college is enhancing achievement or is overperforming. If its actual graduation rate is lower than the predicted rate, it's underperforming. U.S. News divided the actual rate by the predicted rate. The higher the ratio, the better the score.

This indicator of added value shows the effect of the college's programs and policies on the six-year graduation rate of students after controlling for spending per student, the proportion of undergraduates receiving Pell Grants, standardized test scores and high school class standing of the entering classes, and for the second year in a row, the proportion of undergraduates who are first-generation. Also, the proportion of science, technology, engineering and math, or STEM, degrees out of the total degrees granted was a variable used to calculate the predicted graduation rate for each school in the National Universities ranking category only.

To determine whether an awarded degree was considered STEM, U.S. News used the U.S. Department of Homeland Security's STEM Designated Degree Program list. The list includes a diverse array of degrees in general STEM areas such as biology and engineering, as well as specific STEM degree tracks in nontraditional STEM fields, such as business statistics and digital communication and media.

Graduation rate performance has been used in the National Universities and National Liberal Arts Colleges ranking categories since the 1997 edition of Best Colleges, and in the Regional Universities and Regional Colleges ranking categories starting with the 2014 edition.


I'll admit, I've read this a few times and it feels pretty "black box": measuring a variable such as graduation rate against a PREDICTED graduation rate. So there's a big formula to predict something, and if you exceed that prediction, you get a higher score. But in general, graduation rate IS an important outcome, and the higher the better.
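
As a rough sketch of the mechanics (the predicted rate comes from a regression US News does not publish, so it's just an assumed input here):

# Graduation rate performance: actual six-year rate divided by a predicted rate.
actual_grad_rate = 0.82      # hypothetical six-year graduation rate
predicted_grad_rate = 0.75   # hypothetical output of the non-public prediction model
performance_ratio = actual_grad_rate / predicted_grad_rate
print(f"Graduation rate performance: {performance_ratio:.2f}")  # above 1.0 = "overperforming"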


Expert opinion -20%: This is a measure of how a school is regarded by administrators at peer institutions on a peer assessment survey. A school's peer assessment score is determined by surveying presidents, provosts and deans of admissions, or officials in equivalent positions, at institutions in the school's ranking category.

Each individual was asked to rate peer schools' undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished). Individuals who do not know enough about a school to evaluate it fairly were asked to mark "don't know."

A school's score is the average score of all the respondents who rated it. Responses of "don't know" count neither for nor against a school.

U.S. News averaged the two most recent years of peer assessment survey results – spring and summer 2020 and 2021 – to compute the academic reputation peer assessment score used in the rankings. This increases the number of ratings each school received and more fully represents the views of high-level academics. Also, it reduces the year-to-year volatility in the average peer assessment score.

The overall response rates among all 10 ranking categories were 34.1% for the spring and summer 2021 survey, and 36.4% for the 2020 survey. The peer assessment response rate for the 2021 surveys in the National Universities category was 41.5%, and in National Liberal Arts category was 44.8%.

A higher average peer assessment score does better than a lower peer assessment score in the ranking model. The academic peer assessment rating is used in the National Universities, National Liberal Arts Colleges, Regional Universities and Regional Colleges rankings.


By far the most controversial, and the second most heavily weighted category in the ranking set. A survey goes out to a bunch of schools. They rate MY school, and the collective results of "peer opinions" give me a reputation score? Yup. Where in the private sector does anything remotely similar happen? I can't think of one.

So the President of Rutgers and a few other administrators are asked to weigh in on schools A-Z, and that yields quality data to determine a school's reputation? What does the President of Rutgers know about schools A-Z anyway? Does he attend classes there? Sit on their boards? Donate there? And not to be "conspiratorial," but wouldn't a person have a built-in motive to score others with a gravitational pull south, since by definition this is a ranking?

Where is the voice of the true stakeholders when it comes to an opinion on reputation? In other words, an EMPLOYER who has the option to actually HIRE students from any university across the country, sees those students perform, and decides whether to continue recruiting from School A: does that opinion not even count? It doesn't. Apparently, "reputation" exists only in the eyes of the current system, not external stakeholders.

OK, so I'm a president and now I have to deal with this. Guess what happens? A charm offensive: sales forces, opinion shaping, resources devoted to moving the opinion needle among people whose opinion really does not matter for delivering a great education product.


The great Malcolm Gladwell talks about this component in the video below, starting at the 19:40 mark.


Class size -8%: This assesses the ability of students to engage with their instructors in class. This was based on the average of fall 2019 and fall 2020 class size data. This is a change from previous years' rankings, when only the most recent year of class size data was used. This change was made to try to adjust for the impact of the coronavirus on class size during fall 2020, when at many schools there was a large increase in the number of online and hybrid classes that took the place of traditional face-to-face classes offered for the same class in previous years.

Schools receive the most credit in this index for their proportions of undergraduate classes with fewer than 20 students. Classes with 20 to 29 students score second highest, 30 to 39 students third highest and 40 to 49 students fourth highest. Classes that have 50 or more students receive no credit. U.S. News has not disclosed to any schools the weights assigned to different intervals within the index. To calculate the two-year average, the class size indexes for fall 2019 and fall 2020 were averaged.


Generally, this ranking factor screams "the smaller the better" and gives the most points to classes with fewer than 20 students. At 8%, this is valued over 2.5x alumni giving, which is 3%. Not sure I can say that relative difference seems off. HOWEVER, a couple of points. If I look back at my 40 college classes and RANK them from high to low on personal learning impact, there was zero relationship between class size and how much I learned. In other words, some lecture halls with 100 students ranked higher than small classes of 20. In my opinion, great professors are great whether they are teaching a class of 100 or 20. Second, does it not stand to reason that a focus on small class size and lowering the student-to-faculty ratio can have the effect of escalating overall costs? If I'm a college president with a student-to-faculty ratio of 25 and I need to get it closer to 15, how does that get funded? Ultimately through more professors, and that cost gets reflected in escalating tuition prices. So improving on this metric has a real possibility of driving higher overall costs (high certainty) with questionable certainty of improving the educational experience.
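
US News does not disclose the credit assigned to each class-size bucket, so the credits in this sketch are pure guesses; only the bucketing logic follows the published description.

# Class size index sketch. Bucket credits are hypothetical (US News keeps them private).
bucket_credit = {"<20": 1.0, "20-29": 0.75, "30-39": 0.5, "40-49": 0.25, "50+": 0.0}

def size_bucket(size: int) -> str:
    if size < 20:
        return "<20"
    if size < 30:
        return "20-29"
    if size < 40:
        return "30-39"
    if size < 50:
        return "40-49"
    return "50+"

class_sizes = [19, 45, 12, 100, 28]  # made-up section sizes at one school
index = sum(bucket_credit[size_bucket(s)] for s in class_sizes) / len(class_sizes)
print(f"Class size index: {index:.2f}")  # 0.60 here; higher scores better in the rankings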


Average faculty salaries- 7%: Research shows there is a link between academic outcomes and compensation of faculty. The average faculty pay is adjusted for regional differences in cost of living. This includes full-time assistant, associate and full professors. Salary values are computed as an average for 2019-2020 and 2020-2021 academic year values. This is a change from previous rankings, when the indicator was based on the average of the most recent year of data. For the second consecutive year, U.S. News no longer includes benefits in our calculations due to definition changes made by the American Association of University Professors on their faculty compensation surveys, which U.S. News follows.

The regional differences in cost of living in this ranking are taken from the December 15, 2020, update to the Regional Price Parities by State and Metro Area indexes from the U.S. Department of Commerce's Bureau of Economic Analysis.

BEA's regional price indexes allows comparisons of buying power across the 50 states and the District of Columbia, or from one metro area to another, for a given year. Price levels are expressed as a percentage of the overall national level. The regional price indexes cover all consumption goods and services, including housing rent.

Higher average faculty salaries after adjusting for regional cost of living score better than lower average faculty salaries.


First off, credit to US News & World Report for combining a couple of years and adjusting for regional differences to try to smooth the data and account for cost of living. But the "link between academic outcomes and compensation of faculty" is, I think, suspect. It rests on the premise that if school A has an average comp of $90k and school B's comp level is $70k, school A must be better. I think that is loose at best. It's an award of "points" simply for higher compensation, with total disregard for collective performance. And again, what are the implications of this ranking factor for tuition increases? If colleges are watching closely and they need more headcount at a higher cost per average headcount, does that not FURTHER PRESSURE tuition increases?
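
A rough sketch of the cost-of-living adjustment, with invented salaries and invented regional price parities (the real values come from the BEA indexes cited above):

# Deflate average salary by a regional price parity (national level = 100). Numbers invented.
schools = {
    "School A": {"avg_salary": 110_000, "regional_price_parity": 115.0},  # high-cost metro
    "School B": {"avg_salary": 90_000, "regional_price_parity": 88.0},    # low-cost region
}
for name, s in schools.items():
    adjusted = s["avg_salary"] / (s["regional_price_parity"] / 100)
    print(f"{name}: cost-of-living adjusted salary ${adjusted:,.0f}")
# School A ~ $95,652 and School B ~ $102,273: the lower nominal salary can
# score better once the regional adjustment is applied.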


Faculty with a doctoral or terminal degree-3%: This is the percentage of full-time faculty members with a doctorate or the highest degree possible in their field or specialty during the 2020-2021 academic year. Schools with a larger proportion of full-time faculty with the terminal degree in their field score better than schools with a lower proportion, demonstrating their commitment to employing experienced faculty.


There is clearly a preference for the full-time professor vs the part-time, and if the full-time professor has a doctorate, even better. How does this translate into a better educational experience? Almost universally, my best educational experiences were with ADJUNCT professors (part time) coming back to school a day a week to teach a class while they juggled their startup, etc., vs the full-time, tenure-track professor with a PhD. So you mean to tell me that a lifelong academic, full time with a PhD in marketing, will deliver a better learning experience for their students than a 40-year-old Chief Marketing Officer with a bachelor's degree in marketing, who is CURRENTLY leading a team of 30 at, say, a billion dollar company, but wants to come back to their alma mater and teach marketing strategy? Offer 2 classes, each with 100 seats. Put the 2 teacher profiles up and see which one fills up quicker. Tell the class that this will be a forced bell curve grading system, so there is no threat of grade inflation affecting student satisfaction scores. Then measure the outcomes. I would be SHOCKED if the PhD came even close to the satisfaction/learning outcomes of the part-time adjunct teacher. But the rankings think otherwise.


Proportion of full-time faculty -1%: A proxy of faculty resources, this is the proportion of the 2020-2021 faculty that is full time. We divide the count of full-time faculty members by the count of full-time-equivalent faculty members (full-time faculty members plus one-third the count of part-time faculty members).

U.S. News does not include faculty in preclinical and clinical medicine; administrative officers with titles such as dean of students, librarian, registrar or coach, even though they may devote part of their time to classroom instruction and may have faculty status; undergraduate or graduate students who are teaching assistants or teaching fellows; faculty members on leave without pay; or replacement faculty for those faculty members on sabbatical leave.

To calculate this percentage, we divide the total full-time faculty by the full-time-equivalent faculty. A higher proportion of faculty members who are full time scores better than a lower proportion in the ranking model.


Again, a clear bias towards the full-time faculty member vs the part-time. I fail to see how the full-time faculty member is inherently better than the part-time teacher coming into the classroom (who may be bringing highly relevant, current full-time experience with them).
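
The published formula here is simple enough to sketch directly; the headcounts are made up:

# Proportion of full-time faculty = full-time / (full-time + part-time / 3), per the description above.
full_time_faculty = 600   # hypothetical headcounts
part_time_faculty = 300
fte_faculty = full_time_faculty + part_time_faculty / 3
proportion_full_time = full_time_faculty / fte_faculty
print(f"Proportion of full-time faculty: {proportion_full_time:.1%}")
# 600 / (600 + 100) = 85.7%. Hiring more adjuncts lowers the score, even if
# they bring current industry experience into the classroom.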


Student-faculty ratio: This is the ratio of full-time-equivalent students to full-time-equivalent faculty members during fall 2020. This excludes faculty and students of law, medical, business and other stand-alone graduate or professional programs in which faculty members teach virtually only graduate-level students. Faculty numbers also exclude graduate or undergraduate students who are teaching assistants.

Each school's student-faculty ratio is compared with the largest ratio reported in its ranking category. Consequently, a lower student-faculty ratio (fewer students per each faculty member) scores better than a higher ratio in the ranking model.


More full-time teachers per student is better than fewer, and if they are higher compensated, even better.


Financial resources -10%: This is a proxy for a school's ability to have a strong environment for instruction and impact in academia. Financial resources are measured by the average spending per full-time-equivalent student on instruction, research, public service, academic support, student services and institutional support during the fiscal years ending in 2019 and 2020. If a school submits fewer than two years of data, one year is used.

The number of full-time-equivalent undergraduate and graduate students is equal to the number of full-time students plus one-third the number of part-time students.

U.S. News first scales the public service and research values by the percentage of full-time-equivalent undergraduate students attending the school. This is done to account for schools with robust graduate programs likely having greater research and public service focuses. Next, U.S. News adds total instruction, academic support, student services and institutional support, and then divides by the number of full-time-equivalent students. After calculating this value, U.S. News applies a logarithmic transformation to it prior to standardizing.

Higher average expenditures per student score better than lower expenditures in the ranking model. However, the use of the logarithmic transformation means schools with expenditures per student that are far higher than most other schools' values see diminishing benefits in the rankings.


Logarithmic transformation, sounds fancy! More resources per student counts more than less. Again, just because spending is higher does not mean outcomes are. Look no further than the world of Moneyball: Major League Baseball, where low-payroll teams (the Oakland Athletics) manage to generate more wins per payroll dollar than the large-payroll teams.


A couple of things. Why are part-time students excluded from this? Part-time students exist for a number of reasons; many choose part-time because they are working, either out of necessity (financial constraints) or choice. Like the full-time vs part-time teacher issue I raised, a part-time student who is working and comes to class ADDS to the learning environment for those around them; they do not subtract. Second, the key word in this calculation of "average spending per full-time-equivalent student" is the word "PER." By definition, holding the denominator relatively constant increases the likelihood of this ratio rising, which is good for the rankings. But the law of supply and demand kicks in: holding available seats STEADY helps the faculty/student ratio and spending per student, but it also combines to pressure tuition NORTH, a major problem with the entire system. The real problem is affordability, and here's another ranking factor that puts upward pressure on tuition.
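
A sketch of the per-student spending math and the logarithmic transformation, with invented spending figures:

import math

spending = {  # invented annual spending, in dollars
    "instruction": 200_000_000,
    "academic_support": 40_000_000,
    "student_services": 30_000_000,
    "institutional_support": 50_000_000,
}
full_time_students = 15_000
part_time_students = 3_000

fte_students = full_time_students + part_time_students / 3  # part-timers count one-third
per_student = sum(spending.values()) / fte_students
print(f"Spending per FTE student: ${per_student:,.0f}")
print(f"Log-transformed value: {math.log(per_student):.2f}")
# Doubling per-student spending only adds ln(2) ~ 0.69 to the transformed value,
# so extreme spenders see diminishing returns -- but more spending still scores higher.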


SAT/ACT scores -5% : Average test scores on both the SAT math and evidence-based reading and writing portions, and the composite ACT of all enrolled first-time, first-year students entering in fall 2020 are combined for the ranking model.

These tests are used in the rankings because they measure, in a standardized way, a school's ability to attract students who can handle rigorous coursework. A higher average entering class test score on the SAT math and evidence-based reading and writing portions and the composite ACT does better than a lower average SAT and ACT test score in the ranking model.

Before being used as a ranking factor, the reported scores from both SAT tests and the ACT composite test are each converted to their national percentile distributions.

To most accurately represent the entire entering class, we use a calculation based on the percentage of the fall entering class that submitted each test. For example, if twice as many applicants to a school submitted ACT scores versus SAT scores, the ACT scores would have twice as much effect on that school's ranking.

Schools were instructed to report scores for all exams they had on record, including in cases when an exam was not used in the admissions decision. Schools that excluded groups of students in their reporting – such as student athletes and international students – had their test scores discounted in the calculations by 15%. Separately, in a change due to the disruption of SAT and ACT test taking and the college admission process caused by the coronavirus, schools reporting a total count of SAT and ACT submissions less than 50% of their fall 2020 entering classes had their test scores discounted by 15% in the rankings calculations. This is a change from previous years, which were based on a 75% submission rate of fall entering students before the ACT and SAT scores were discounted by 15% in the rankings.


SAT/ACTs have a key feature: they are standardized, and therefore results can be compared. One high school's pre-calc class is hard to compare to another's. The question is, what does school A having an average SAT score of, say, 1,100 say about the quality of its education vs school B with an average score of 1,200? I would argue, very little; you are simply measuring the scores of who is sitting in the seat, not the outcomes for those students while they are there. The implication is that higher scores = a better college.
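
Here's a sketch of how the submission-share weighting described above plays out, assuming the percentile conversions have already been done (all numbers invented):

# Blend the SAT and ACT percentiles by how many entering students submitted each test.
sat_percentile = 78.0     # invented national percentile of the school's average SAT
act_percentile = 82.0     # invented national percentile of the school's average ACT
sat_submitters = 1_000
act_submitters = 2_000    # twice as many ACT submitters => twice the weight
entering_class = 7_000

total_submitters = sat_submitters + act_submitters
blended = (sat_percentile * sat_submitters + act_percentile * act_submitters) / total_submitters
print(f"Blended test-score percentile: {blended:.1f}")   # 80.7

if total_submitters / entering_class < 0.50:   # under the 50% submission threshold
    blended *= 0.85                            # the 15% discount described above
    print(f"After the 15% discount: {blended:.1f}")      # 68.6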


High school class standing - 2%: This is the proportion of students enrolled for the academic year beginning in fall 2020 who graduated in the top 10% (for National Universities and National Liberal Arts Colleges) or top 25% (for Regional Universities and Regional Colleges) of their high school class.

A higher proportion of students from either the top 10% or top 25% of their high school class scores better than lower proportions in the rankings because students who earned high grades in high school can be well-suited to handle challenging college coursework. Colleges reporting high school class standing based on less than 20% of their entering classes had their scores discounted before being used in the rankings. Values based on less than 10% are not used in the rankings at all, and in those cases, the schools get an estimate for ranking purposes.


Pretty straightforward: ranking colleges higher for their selectivity, as measured by the % of their student body that graduated in the top 10% of their high school class. So the assumption, or proxy, is that if College A has 40% of its students (for example) who graduated in the top 10% of their high school class, and College B has 25%, College A gets more points in the ranking.


I view this as "more points for putting higher-ranking high school kids in the seat, not for what we do to educate the people in that seat."


Graduate indebtedness total -3%: This is the average amount of accumulated federal loan debt among the 2019 and 2020 bachelor's degree graduating class. This is a change from previous years' rankings, when only the most recent year of graduate indebtedness total was used. Student debt can be volatile when there is a small cohort of students, and this change reduces that volatility.


The good news is that a measure of debt is actually on the list, although it's only weighted at 3%. The other issue? It's accumulated debt measured at a point in time. What does it look like 10 years from now; are the students still hanging onto the debt? That starts to get at the RETURN on the degree, not just the cost of it. In theory, high levels of debt among grads are less of a problem if they are out from under it relatively quickly. So debt makes the sheet, but it's a lazy metric.


Graduate indebtedness proportion with debt- 2%: This is the percentage of graduates from the 2019 and 2020 bachelor's degree graduating class who borrowed federal loans. This is a change from previous years' rankings, when only the most recent year of graduate indebtedness proportion with debt was used. Like with the graduate indebtedness total debt indicator, this change reduces the inherent year-to-year volatility compared with when only one year of data is used.

The federal debt figures used by U.S. News in both ranking indicators are:


  • The 2019 and 2020 undergraduate classes, in which the class of 2019 is defined as all students who started at an institution as first-time students and received a bachelor's degree between July 1, 2018, and June 30, 2019, and the 2020 class is defined as all students who started at an institution as first-time students and received a bachelor's degree between July 1, 2019, and June 30, 2020.

  • Only loans made to students who borrowed while enrolled at the institution from which they graduated.

  • Co-signed loans.


The federal debt figures used by U.S. News exclude:


  • Students who transferred to the school.

  • Money students borrowed at other institutions.

  • Parent loans.

  • Students who did not graduate or who graduated with another degree or certificate (but no bachelor's degree).


In the two rankings factors, U.S. News counts the federal Perkins and federal Stafford subsidized and unsubsidized loan programs. These include both Federal Direct Student Loans and Federal Family Education Loans.

A lower dollar amount of average debt for bachelor's degree graduates and a lower percentage of bachelor's degree graduates with debt score higher in the two ranking indicators than a school with relatively higher average debt loads and a large percentage of graduates with debt.


This metric can be criticized more for what it is NOT measuring than for what it IS measuring. First, it excludes parent loans from the calculation. Why? Presumably, in the absence of the parent loan, the student would not even be able to attend. Second, and maybe more important, it excludes students who do NOT graduate. The % of students who attend college but do NOT graduate is high, around 40%, and many of them carry debt. Not sure why the rankings would ignore the debt levels of the people who drop out; it's a real negative outcome and very likely to differ among universities.


Also, while the debt metrics are important, is there not an unintended consequence in the form of "selectivity bias"? Said differently, if a school has a fixed number of seats, incented by the previously mentioned ranking factors, all things being equal, are schools NOT better off selecting the MORE financially resourced student who is NOT subjecting themselves to high debt levels? I think it's common sense that this happens.


Pell Grant graduation rates -2.5%: This social mobility ranking factor measures a school's success at graduating Pell Grant students, who are from low-income backgrounds. To calculate this, we use a school's six-year graduation rate among new fall 2013 and fall 2014 entrants receiving Pell Grants. A higher Pell Grant graduation rate scores better than a lower one. Because achieving results from a broader base is more challenging, schools whose fall 2013 and fall 2014 cohorts were made up of less than 50% Pell students had their Pell graduation rates multiplied by the proportion of that cohort that is Pell. For all remaining schools that demonstrated significant economic diversity by being comprised of at least 50% Pell students, we multiplied their Pell graduation rates by 0.5.


Pell Grant graduation rate performance-2.5% : This social mobility ranking factor assesses success at achieving equitable outcomes for students from underserved backgrounds. It divides each school's six-year graduation rate among fall 2013 and fall 2014 new entrant Pell recipients by the rate achieved by non-Pell recipients, with higher ratios scoring better than lower ratios. The significant minority of schools whose Pell graduation rates are equal to or greater than non-Pell graduation rates receive the best possible score of 1 before adjustment for the proportion of the entering class that received Pell Grants. Schools whose cohorts were at least 50% Pell students have their scores augmented by 0.5; schools below 50% Pell students had their scores augmented by the proportion that received Pell Grants.


Pell Grants, by definition, go to students whose family's EFC (expected family contribution) is below roughly $6,000. So this is a group of students with real financial need, at least as compared to students whose families exceed that EFC threshold: financial-need-based federal aid. If the rankings are focusing (which they are) on students from low-income households and measuring graduation rates among this segment of the student body, this SHOULD count for higher rankings. In total this is worth 5% of the rankings. Contrast that with the selectivity measure, which has nothing to do with school outcomes; it only captures one characteristic of who they put in the seat.
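
The adjustment rules for the first Pell factor are spelled out above, so they can be sketched directly (the grad rates and Pell shares here are invented):

def adjusted_pell_grad_rate(pell_grad_rate: float, pell_share: float) -> float:
    # Published adjustment: multiply by the Pell share if under 50%, else by 0.5.
    multiplier = pell_share if pell_share < 0.50 else 0.5
    return pell_grad_rate * multiplier

print(f"{adjusted_pell_grad_rate(0.70, 0.30):.2f}")  # 0.21 for a 30% Pell cohort
print(f"{adjusted_pell_grad_rate(0.70, 0.60):.2f}")  # 0.35 for a majority-Pell cohort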




Alumni giving rate average-3%: This is the non-weighted mean percentage of undergraduate alumni of record who donated money to the college or university. The percentage of alumni giving serves as a proxy for how satisfied students are with the school. A higher average alumni giving rate scores better than a lower rate in the ranking model.

Following guidelines of reporting to the Voluntary Support of Education Survey, alumni of record are former full- or part-time students who received an undergraduate degree and for whom the college or university has a current address. Alumni who earned only a graduate degree are excluded.

Undergraduate alumni donors are alumni with undergraduate degrees from an institution who made one or more gifts for either current operations or capital expenses during the specified academic year.

The alumni giving rate is calculated by dividing the number of alumni donors during a given academic year by the number of alumni of record for that same year. The two most recent years of alumni giving rates available are averaged (added together and divided by two) and used in the rankings. For the 2022 edition, the two separately calculated alumni giving rates that were averaged were for giving in the 2018-2019 and 2019-2020 academic years.


I agree that capturing alumni giving is a proxy for how satisfied past students are with the school. It passes the common sense test that if School A has 25% of its alumni giving back and School B has 10%, there is SOMETHING about School A that should warrant more overall points. HOWEVER, if you read the above methodology, this calculation is simply the % of alumni who give relative to the alumni base. Let's stay with the simplified example of School A and School B. School A has 25% of its alumni giving back; let's assume it has 100 alumni and each of the 25 donors gave $1,000, so $25k in total contributions for School A. School B, let's say, had 10 donors who each gave $2,500, for the same $25,000 in combined giving, and let's assume it also has 100 alumni. Same dollars, different % of alumni giving. If I'm heading up School B, the goal is to improve the % of the alumni base that contributes. If I got 65% of them to give just one dollar, do I not catapult past School A on this metric, while doing effectively nothing to generate more dollars for my university? According to this calculation, the answer is yes.
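
The scenario above is simple enough to check in a few lines (using the hypothetical numbers from my example):

# Alumni giving sketch: the ranking factor is participation rate only; dollars never enter it.
school_a = {"alumni": 100, "donors": 25, "gift_per_donor": 1_000}   # hypothetical
school_b = {"alumni": 100, "donors": 10, "gift_per_donor": 2_500}   # hypothetical

for name, s in (("School A", school_a), ("School B", school_b)):
    rate = s["donors"] / s["alumni"]
    total_dollars = s["donors"] * s["gift_per_donor"]
    print(f"{name}: giving rate {rate:.0%}, total raised ${total_dollars:,}")
# Both raise $25,000, but only the 25% vs 10% participation rate feeds the ranking.
# Get 65 School B alumni to give $1 each and its rate (65%) leapfrogs School A
# while adding almost nothing in dollars.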








