Ratings Description and Discussion

I will post here about any changes made to these ratings and about other major developments or explanatory thoughts as they come up.

Change to the formula

posted Sep 29, 2016, 7:44 PM by Ted P   [ updated Sep 29, 2016, 7:57 PM ]

While the fundamentals will remain the same, I think this will be my most significant change in the formula since 2010, which is when I implemented a comprehensive system for FCS/I-AA opponents.  This will be implemented with my first ratings of the season, which should be posted in the early morning hours on Sunday the 2nd.

In short, I'm going to give more credit for quality wins so teams can more easily overcome losses.  In last season's final ratings for instance, Appalachian St. was in the top 25, and Arkansas was not.  When I made the change to emphasize quality wins more than avoiding losses, the two teams roughly switched spots.  I will show the top 25 under the new formula for last year below, but first I wanted to explain more how we got to this point.

The formula in the current form as represented on this site was not the first formula I used.  I had started with a fairly simple formula that was more similar to an RPI.  Whereas RPI formulas usually add things like winning percentage and opponents' winning percentage together, I found a way to make a strength of schedule based on opponents and opponents' opponents.  I then took that number and multiplied it by the team's winning percentage.  I thought this was preferable since if you don't actually beat anyone, you don't get any credit.  

When I began the ratings posted here, I took a further step as I realized that provided a team has a win, their score can be unfairly increased by losing to a good opponent with a good schedule.  This especially dawned on me after one of the games Troy played against an SEC opponent.  I'm not going to look up the game or the stats for that season, but a comparable example took place last season when Georgia played Georgia Southern.  Under the old formula, Georgia Southern would have improved from a rating of about 3.00 to about 3.36 despite losing.  That doesn't make any sense, obviously.

So my solution to this problem was to add another step in the process.  The number I had been using changed from an overall rating to an opponent rating.  So now when you lose, you have a number subtracted no matter what; and when you win, you have a number added no matter what.  Georgia Southern's rating as an opponent still improved, but things like this balance out when you play 12 games or more.  The teams that beat Georgia Southern got a boost of 0.012.  For context, the two closest top-10 teams in the final ratings last season were Houston and Oklahoma.  They were separated by 0.021.  As for Georgia Southern itself, their rating fell from about 0.046 to about 0.040, so the problem of a team like that improving with a loss was eliminated.

I'm not completely sure why this doesn't make a bigger difference, but the change I decided to make is this: instead of dividing the opponent scores by 30 in the event of a win, I'll only divide them by 10.  For losses, I still subtract 1/(opponent score x 2).  I guess the main reason the effect is small is that, since everyone has roughly the same number of games, every win you have is a loss you've avoided.  So whether the formula is 80% based on rewarding teams for wins or 80% based on punishing teams for losses, the order of most teams will not change.
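
To make the arithmetic concrete, here is a minimal sketch (in Python) of how a single game's contribution changes under the new divisor.  The function name and the sample opponent score of 3.6 are mine for illustration, not part of the actual spreadsheet.

def game_contribution(opponent_score, won, win_divisor=10):
    # A win adds the opponent's score divided by the divisor
    # (formerly 30, now 10); a loss subtracts 1/(opponent score x 2),
    # which is unchanged by this update.
    if won:
        return opponent_score / win_divisor
    return -1.0 / (opponent_score * 2)

# Hypothetical opponent with a score of 3.6:
print(game_contribution(3.6, won=True, win_divisor=30))   # old credit: 0.12
print(game_contribution(3.6, won=True, win_divisor=10))   # new credit: 0.36
print(game_contribution(3.6, won=False))                  # loss: about -0.139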

So this was the top 30 in the 2015 final ratings.  I'll show the changes below that.  Somehow, only a few teams experienced much movement.  I italicized the teams that moved up more than a single spot.  They were all teams with good schedules.  Arkansas was #1 in strength of schedule, USC was #9, Mississippi St. was #10, and Florida was #26.  North Carolina and Utah did not have amazing strengths of schedule, but North Carolina had a loss to #93 South Carolina and Utah had two losses outside the top 40 (#44 UCLA and #63 Arizona).  So this allows the wins by North Carolina and Utah (double digits in both cases) to overcome the bad losses.

1 Alabama 1.788745
2 Clemson 1.340525
3 Michigan St. 1.154173
4 Ohio St. 1.123617
5 Stanford 1.086094
6 Oklahoma 0.933149
7 Houston 0.912188
8 Iowa 0.830925
9 Ole Miss 0.790432
10 TCU 0.727804
11 LSU 0.696757
12 Western Kentucky 0.685069
13 Michigan 0.672603
14 Notre Dame 0.645402
15 Northwestern 0.632768
16 Navy 0.629937
17 Utah 0.618819
18 Florida 0.580560
19 Toledo 0.548158
20 Oklahoma St. 0.512307
21 Appalachian St. 0.469065
22 Mississippi St. 0.454009
23 Baylor 0.448700
24 Georgia 0.446010
25 Wisconsin 0.434243
26 Oregon 0.429406
27 Tennessee 0.406242
28 North Carolina 0.367099
29 Florida St. 0.312854
30 Arkansas 0.307026

New formula:
1 Alabama 1.835621
2 Clemson 1.381007
3 Michigan St. 1.298501
4 Stanford 1.222672
5 Ohio St. 1.182743
6 Oklahoma 1.110108
7 Houston 1.021402
8 Ole Miss 1.006611
9 Iowa 0.934286
10 Utah 0.887724
11 LSU 0.886032
12 Western Kentucky 0.864322
13 Michigan 0.859931
14 TCU 0.848495
15 Florida 0.829168
16 Northwestern 0.820768
17 Notre Dame 0.805415
18 Navy 0.748729
19 Toledo 0.743622
20 Mississippi St. 0.707822
21 Oklahoma St. 0.696839
22 Oregon 0.692704
23 Baylor 0.686210
24 North Carolina 0.670169
25 Arkansas 0.667304
26 Tennessee 0.647179
27 Florida St. 0.643034
28 Southern California 0.631457
29 Georgia 0.627629
30 Washington St. 0.622407
31 Appalachian St. 0.616337
32 Wisconsin 0.610187

Early top 10 after three weeks

posted Sep 14, 2014, 4:34 PM by Ted P   [ updated Sep 14, 2014, 4:36 PM ]

1. Oklahoma – What does it take to be #1 after three weeks? It wasn’t even close, by the way. Being 3-0 by itself puts you pretty close to the top (I don’t count FCS wins in my records, but I give teams credit separately), but I’ll go deeper into this. Oklahoma has beaten two teams who themselves have two wins apiece (Tennessee and Louisiana Tech). The third team, Tulsa, is 1-2. So being 3-0 and having opponents with a combined 5 (FBS) wins right now is why Oklahoma is a clear #1.

2. Ole Miss – Most importantly, the Rebels are also 3-0. Also important is the fact that Boise St. has two wins. Vanderbilt has an FBS win. Ole Miss got zero points for beating UL-Lafayette apart from the influence on its overall strength of schedule. Losing to an undefeated team and a 2-1 team (Louisiana Tech) gives ULL a decent strength of schedule.

3. UCLA – I think luck will catch up to the Bruins like it did to the cross-town rival Trojans, but hear me out. Like Ole Miss, they are 3-0 with a win over an otherwise unbeaten team (although Virginia has one FBS and one FCS win) and a second win over a team with an FBS win. The third team, Memphis, has a good strength of schedule because the only FBS team they’ve played is undefeated. I know that seems like circular logic, but when Memphis has played a few other FBS teams, strength of schedule will begin to be more meaningful. This is another reason to wait for time to pass before I officially use these.

4. Notre Dame – The Irish are 3-0, and Michigan is 2-1, so that’s a good start this early. Purdue also has an FBS win. Rice has no wins, but being that they’ve only played Notre Dame and Texas A&M, that gives them a pretty good strength of schedule.

5. Mississippi St. – Two Mississippi schools in the top five. Can you tell yet why we’re still in the preliminary stages? Guess the Bulldogs’ record. The rest will also generally follow the above script. Mississippi St. is the only team to beat UAB, who has an FBS win and an FCS win. Mississippi St. also beat South Alabama, whose other two weeks are an FBS win (albeit over a winless team) and a bye. Mississippi St. also beat Southern Mississippi, whose only FBS games have been against 3-0 teams.

6. Arizona – A real top 10 team probably would have won by more against Nevada and UTSA, but as a reminder, this does not factor in margin of victory, which I think you need to do after only three weeks. Arizona is also 3-0, that Nevada team I mentioned has an FBS win and no other losses, and Arizona also beat Texas-San Antonio (again by a small margin), who has an FBS win. UNLV, whose only win is over an FCS team, does not really help except for Arizona’s record of course.

7. Oregon – So the next three are all teams you’ll see in the top 10 of pretty much any major rankings at this point. Oregon is the first team on this list that does NOT have 3 FBS wins (one was over an FCS team). Wyoming, whom the Ducks just beat, has one win over an FBS team and one over an FCS team. Michigan St. only has an FCS win, but at least they don’t have any other losses.

8. Texas A&M – Should be no surprise here. Of course, South Carolina has won two games over otherwise-undefeated teams since losing to the Aggies. East Carolina in turn beat Virginia Tech (who had beaten Ohio St.) after losing to the Gamecocks. I mentioned Rice in reference to Notre Dame. The Aggies have also beaten North Texas, who is 1-2, and an otherwise-unbeaten FCS team.

9. Alabama – We’re back to another 3-0 team, but obviously they wouldn’t be behind two 2-0 teams (with FCS wins) if they had great wins. West Virginia does have an FBS win and an FCS win though. Florida Atlantic is 1-2, and Southern Mississippi (common opponent with Mississippi St.) just has an FCS win.

10. Washington – I took the Huskies out of my subjective top 25 after struggling to beat Hawaii (who was barely able to beat Northern Iowa last night/this morning) and Eastern Washington, an FCS team. Washington does have another FBS win now, over Illinois, who itself has FBS (over Western Kentucky) and FCS wins. Also, Eastern Washington is 1-0 against the FCS.

Hopefully that’s somewhat enlightening about how the system works. The above does not use any reference whatsoever to preseason rankings or prior seasons. It’s as if the entire FBS started from scratch this year. So it’s completely about what you’ve proven, and if a team has played and won every week and their opponents are in the FBS (especially if such opponents also have a number of FBS wins), that team will have a huge advantage whoever they are.

So when you look at other ratings, Sagarin’s for instance, you might see more of the teams you expect, because they do include reference to prior seasons. That said, his top four teams are all in my top 10, and there are 10 teams in common between our respective top 15s. The ones in my top 15 that are not in his are all teams you would rightly be suspicious of based on recent seasons: Mississippi St., Arizona, Pittsburgh, Washington, and Syracuse.

Top 4 Comparisons, 2008-2011

posted Jul 3, 2012, 3:24 PM by Ted P   [ updated Jul 3, 2012, 3:24 PM ]

With the move to a 4-team national semifinal beginning in 2014, I wanted to go back and look at the semifinals with my system and compare those to the ones that would have been played within the BCS system.  Apparently, college football will become like baseball and basketball and all ratings and formulae will just be for fun, so the BCS top-4 might or might not reflect what a committee would have chosen.

2008
My ratings:
(1) Oklahoma vs. (4) Boise St.
(2) Florida vs. (3) Texas

BCS:
(1) Oklahoma vs. (4) Alabama
(2) Florida vs. (3) Texas

2009
My ratings:
(1) Alabama vs. (4) Florida
(2) Texas vs. (3) Cincinnati

BCS:
(1) Alabama vs. (4) TCU
(2) Texas vs. (3) Cincinnati

2010
My ratings:
(1) Auburn vs. (4) TCU
(2) Oregon vs. (3) Oklahoma

BCS:
(1) Auburn vs. (4) Stanford
(2) Oregon vs. (3) TCU

2011
My ratings:
(1) LSU vs. (4) Boise St.
(2) Alabama vs. (3) Oklahoma St.

BCS:
(1) LSU vs. (4) Stanford
(2) Alabama vs. (3) Oklahoma St.

I would have included a "mid-major" conference team every year but 2009, when I would have included Cincinnati of the Big East (which it seems has been kicked out of the top tier beginning in 2014).

The BCS standings would have done so twice but in 2009 would have included both Cincinnati and TCU.

In 2008 and 2011, I would have included Boise St. while the BCS would have limited participation to the major conferences.

I also note that the BCS would have included a total of three (now) Pac-12 teams in the last two seasons while my ratings would have only included Oregon as the #2 team in 2010.  I would have included one more Big XII team (Oklahoma in 2010) than the BCS standings would have chosen. 

The SEC teams evened out, as the two ratings would have disagreed about Alabama in 2008 and Florida in 2009. The same is true of the Mountain West, in which Boise St. competed in 2011 and TCU competed in 2009.  Boise St. was in the WAC in 2008, so only mine would have allowed in a WAC team over the last four years.

Massey comparison/corrections/outliers

posted Oct 7, 2010, 2:08 PM by Ted P   [ updated Oct 8, 2010, 2:13 PM ]

The abbreviation for these ratings on the Massey comparison page is now KNT.  I believe it was KR last year.

When you look at the comparison page, there are red numbers and blue numbers for a team's highest rank and lowest rank.  My ratings are not that bizarre, so if you see those, they USUALLY indicate mistakes and will be corrected if necessary later in the week.

Mistakes are more often toward a lower rank, because most teams are toward the middle, so I notice a team that should be toward the middle if they're mistakenly toward the top, but I won't notice as easily if that team is toward the bottom.

The only team for which I still can't find anything wrong is Tulane.  I've checked everything I can imagine for them, and they are #63.  They are in the 60s in a few of the ratings, and #70 in GBE.  The fact that I added another step seems to have helped them further, and as a result my ratings rank Tulane higher than any other system does.

I think it's a matter of there not being bad losses and the fact that I segregate out I-AA/FCS wins so they do not unduly weaken an overall schedule.  They provide very little credit, but in the second half of the list, the rating is much more about how much you're penalized for losses than how much you're credited for wins.  Also this early in the season, the I-AA/FCS games do carry more weight since the credit given is divided by the number of overall games.  I could start out by dividing by 13, but I think this is best for the context of a week-to-week rating.  It would be too much of an advantage to those who have played an I-A/FBS team every week at this point.  Also, I like it better that the I-AA/FCS wins fade into the background as the season progresses.  Tulane opponent Ole Miss has rebounded nicely from an embarrassing start; and the Green Wave's other loss is to Houston, which despite not having a great schedule, is 3-1 with the only loss coming to a currently top-25 team (UCLA, ironically Tulane head coach Bob Toledo's former team).  Houston, Rutgers, and Ole Miss are all more highly rated as opponents than they are as teams overall.

I didn't like the old formula (where the three Tulane opponents mentioned would be higher {Houston would even be ahead of UCLA} and Tulane would be slightly lower), because I don't like a game being treated as better simply because the other games on the schedule change.  Under formulas like that, Arkansas gets more points for beating ULM now that Arkansas has played Alabama.  Losing should not strengthen the wins, no matter who the opponent is.  GBE is the closest to the old formula (that I know of) if you want to compare.  Kansas St. would be #1, however.  That sounds alarming, but the same number of ratings have Kansas St. #1 as have LSU #1, even though LSU's average rank is 9th (which is the 8th-best average) and Kansas St.'s average rank is 20.5 (which is the 22nd-best average).

I'm not trying to impress anyone, I'm just explaining the things that I look at in order to correct either mistakes or situations that I believe are not properly addressed by my formula.  Either way, I make an effort to notify anyone who might be interested here.

Special #1 Explanation

posted Oct 3, 2010, 10:50 AM by Ted P   [ updated Oct 3, 2010, 11:02 AM ]

I wrote this to try to explain why the ratings came out the way they did, so I think it might help people understand how it works.  If not, just disregard.

Without trying to get too technical, there are two basic steps to my ratings, rating teams as opponents and then using those ratings to calculate the overall ratings.  When we’re only talking about wins, your rating directly corresponds to the ratings as opponents of the teams that you beat.  By telling you how teams are rated as opponents, I was hoping that would explain this a little better.

Florida (at #9 as an opponent, the highest among teams with losses) is the highest-rated win this season, so that’s why I was thinking it might be enough for Alabama, but Penn St. ranks 27th as an opponent, lower than West Virginia, whom LSU beat.  Arkansas is 28th, and Mississippi St. is 29th, so Alabama’s best, second-best, and third-best opponents cancel out LSU’s best and second-best opponents.  But then LSU catches up.  North Carolina is a respectable 43rd, and Tennessee is 67th.  Duke and San Jose St., neither of whom has an FBS (I-A) win, are #109 and #111, respectively.  LSU’s lowest-rated win is Vandy, who is #74 as an opponent.  Had Penn St. beaten Iowa, Duke beaten Maryland (a close game), and San Jose St. beaten UC-Davis (a close game), along with maybe Mississippi St. and UNC losing, that might have all combined to put Alabama ahead of LSU.  The distance between Alabama and Oklahoma is not very much, so those three Alabama opponents winning might have been enough even if Oklahoma stayed the same.  For instance, I accidentally had North Carolina at 3-1 (I double-checked every team after noticing that error), and that would have given LSU an extra .07 in the ratings.  Oklahoma and Alabama are less than .04 apart, and LSU is less than .02 ahead of Oklahoma.

Maybe this year Vandy finishes 1-11 and Penn St. finishes 11-2, but it goes by where everyone is right now, including opponents, opponents’ opponents, and even to some extent those teams’ opponents (because I don’t think you can fully rate the opponents without this information).  Also, it doesn’t look ahead and say that LSU stands to gain fewer points because they have two easy in-state opponents later in the year, so that’s another factor.  If LSU were playing WVU and UNC later and those weaker opponents earlier, LSU wouldn’t be in the conversation.  If, on top of that, BYU were better this year and had beaten Air Force and Utah St. (coincidentally two of Oklahoma’s opponents), Alabama would probably be #1 right now.   It’s going to take a few more games for the ratings to reflect the teams’ accomplishments more accurately without being a function of scheduling quirks and opponents who start out differently from how they will end up later.

To contact me

posted Sep 28, 2010, 6:55 AM by Ted P   [ updated Nov 4, 2017, 11:29 PM ]

Write an email to theknightswhosay@msn.com or leave a comment on my blog site: https://theknightswhosay.wordpress.com/.

Change for FCS (I-AA) losses

posted Sep 27, 2010, 11:31 AM by Ted P   [ updated Sep 28, 2010, 4:58 PM ]

I have always had trouble trying to come up with a good solution to the problem of FBS (I-A) teams losing to FCS teams.  I don't find the wins over FCS teams that problematic, because it hurts the team in question to be denied the increased winning percentage that would have occurred with a win over another FBS team instead.  That is considered when points are added to the winning FBS team as the last step in calculating the initial rating (see below for what that means) and final rating.  Also, the least numerically significant type of game is a win against a lesser opponent, FCS or FBS.  The differences between FCS wins are usually hundredths of a point, when there are almost 5 full points separating the best teams from the worst at the end of the season.

Addressing the losses in that manner was unsatisfactory because, by adding a loss, that loss was not as bad as it should have been depending on how good the FBS strength of schedule was.  For example, if it were a loss by a Sun Belt team whose best opponent was 50th, there wasn't much of a problem there, but if it were a team from an automatic-qualifying conference with a decent overall schedule, adding the loss was less significant.  Most of the problem was in rating this FBS team as an opponent, because I think the final ratings had adequate adjustments, but small problems are magnified when "win chain" (team A beat team B, who beat team C) opponents are considered on multiple levels.  It was also too much of a subjective judgment on what a "fair" penalty was.   An equitable inverse number is not as easy to conceive and would probably get into the type of math that I designed my rating system to avoid.

Anyway, what I have done, by employing the number I came up with to translate FCS results into FBS results, is to add in the FCS opponents as losses.  I have to use that number since their SoS is not independently calculated.  It changes a 5-5 record to something like 2-13.  If it were an FCS team with a win over an FBS team (probably a bad one, certainly on the day they played that team) and 5 losses, I think that's about the correct correlation.  The opponents' winning percentage for FCS opponents' opponents is simply the winning percentage that FCS teams in general have against FBS teams.  I will add in the actual opponents' opponents' winning percentage for FBS teams.  Earlier in this process, I may not have bothered, but there are too many FCS teams that have more than one FBS opponent at this point.

I realize this doesn't perfectly differentiate the quality of the FCS opponents, but it's enough to ensure losing to an Appalachian St. doesn't hurt as much as losing to an Idaho St., for example.  I do understand that, just like in FBS (or arguably even more so), going undefeated in one conference can be drastically different from going undefeated in another, but I've made sure to limit the harm done by any loss to what is appropriate for one week.  It's hard to tell the difference between an FBS team who can't beat anyone else and an FCS team who can't beat anyone else.  For the better FCS teams, I have made sure they're not rated any more highly than a mediocre FBS team (slightly better than average if the FCS team goes undefeated in the FCS and beats one or more good FBS teams as well).  This is based on observations over years of looking at on-field results and computer ratings, even ones that attempt to integrate all the divisions into one formula.

Speaking of which, I doubt that even the BCS ratings have come up with an ideal solution in the framework of their ratings anyway.  I know that Wes Colley, for example, groups together several FCS opponents into one "team" and rates that team as an opponent.  From my own calculations, it seems there is no way to fully consider the difference in opponents from one division to the next.  We do not have a representative sample of how good one division is compared to another.  There are both very good and very bad FCS teams who (their coaches and administrators anyway) choose not to play FBS teams and both very good and very bad FBS teams who choose not to play FCS teams.  At any rate, I have a full day job, and I have no input into who makes these big bowl games, so it's enough to enter in all the FBS results without trying to come up with a more complex system for the FCS teams.  My ratings are a balance between saving time and being fair.  These goals often coincide (I honestly used to stay up nights worried about whether I was fair in my ratings and would change them as one week progressed toward the next...it's just too hard for me to filter out bias and establish a fair assessment of 120 teams without the objective number-crunching which is the basis of these ratings), but where they do not, I believe I have struck a reasonable balance between the two. 

Adjustment

posted Nov 17, 2009, 3:34 PM by Ted P   [ updated Sep 26, 2010, 12:39 PM ]

Based on the cases of Clemson and Eastern Michigan, I have decided to revise how teams who are the only I-A team to lose to a given opponent are penalized.  Any team that gets a fractional (below 1.000000) rating as an opponent will instead get a rating of 1, so that no team loses more than half a point from its point total for a given loss.  Given that even after 10 games the best teams are only at just over 1 for their total rating (not to be confused with "rating as an opponent," which runs as high as 6.2), I think half a point is more than an adequate penalty for even the worst losses.  Losses to I-AA teams (since obviously such teams also have at least one I-A win) will also result in a subtraction of no more than half a point.  I-AA opponents aren't added in the same way, but I try to ensure that these losses are applied similarly to other bad losses.  Even though in the case of Eastern Michigan they only go from #120 to #119, it is worth mentioning that they had been penalized more heavily for losing to Ball St. than Western Kentucky had been for losing to Central Arkansas.
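
Here is a rough sketch of the revised penalty, assuming the loss subtraction of 1/(opponent's rating as an opponent x 2) described in the formula posts elsewhere on this page; the floor of 1 is what keeps any single loss from costing more than half a point.  The function name and sample ratings are mine.

def loss_penalty(opponent_rating):
    # Opponents rated below 1.000000 as an opponent are treated as exactly 1,
    # so 1/(rating x 2) can never exceed 0.5.
    effective = max(opponent_rating, 1.0)
    return 1.0 / (effective * 2)

print(loss_penalty(0.25))  # 0.5, rather than 2.0 under the old treatment
print(loss_penalty(4.0))   # 0.125 for losing to a strong opponent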

Wins against such teams will also be slightly higher, but it is such a small difference (about .01), I don't mind the change for the sake of consistency.

Credits

posted Oct 25, 2009, 6:21 AM by Ted P   [ updated Oct 25, 2009, 6:48 AM ]

I'd like to thank (in chronological order) Jeff Sagarin, Kenneth Massey, George Eldredge, Matthew Welch and David Wilson.

I became interested in computer rankings in the 2003 college football season.  I contacted Jeff Sagarin with some questions, a couple of which he was nice enough to answer.  After following his ratings over some time, I developed a degree of respect for them. 

I started following the BCS computers some time after that and eventually found my way to Massey's comparison page.  I looked at a few that I liked, and this led me to George (GBE) and Matthew (SquareGear).  I used their pages for help with my SoS, and tinkering with George's formula is how I developed my own ratings system.

David Wilson suggested that I post on this page, and I believe he will be adding this to his ratings directory.

Description

posted Oct 22, 2009, 8:46 PM by Ted P   [ updated Sep 27, 2010, 3:15 PM ]

Why I do this

Back in 1994, I strongly believed that Penn St. was the best team in college football.  But Nebraska remained #1 through the course of both teams' undefeated seasons and easily won the final vote in both polls.  There had been #1 controversies in 1989, 1990, 1991, and 1993 as well, so this was just the last straw for me.  At the beginning of the next season, I started doing my own ratings just like I imagine poll voters do: looking at results, adding in observations from games I saw, and putting the teams in order.  This was long before the days of DVR, and I never figured out how to use a VCR to record something I wasn't watching, so I mostly used it to know what games to watch on Saturdays.

In 2003, I believed LSU was (1) most deserving of the BCS title game, and then (2) more deserving of #1 at the end of the year.  In following the progression of the computer polls (which became relevant to the national title with the advent of the BCS), I read a lot into them and wrote to the authors of some of the BCS ratings.  For example, I asked Jeff Sagarin what his ratings would have been in 2002 had Miami not been flagged for pass interference in the first overtime.  I searched for a long time to come up with a good way of rating teams without either a secret formula being involved or something very complex (involving such things as logarithms and least-squares).  I finally found one, GBE, and my own ratings began developing from there.

College Football News has an interesting take on ranking teams: "You must rank teams based on how good you believe they are at the moment. That's the point when it comes to putting the teams in some order. However, once the year is complete, it's only fair to take the subjectivity out of it and go by what actually happened on the field."

I believe that first part is utter nonsense.  So if a winless team beats the best team in the country, I should then rank the winless team #1 the following week, assuming I was correct in picking out who #1 was up to that point?  This philosophy is also why you see "forgiveness votes" given to a team like USC, who lost to a team like Stanford a couple of years ago.  They lost ground in the rankings for a while, but since (I imagine) voters believed they were good (and they were decent on the whole) except for that moment that had passed, they deserved to be ahead of other teams with perhaps better on-field results.

The last part: "take the subjectivity out of it and go by what actually happened on the field," is exactly what needs to be done, but why on earth would you want to wait until the year is complete?  There is no point to doing weekly ratings then.

General description


What I try to measure is what a team has accomplished to date through wins and what they've failed to accomplish in losses. Note that as of the rankings linked to above, the average team's score is -.33. So I suppose, like any other ranking, some teams will view a medium rank as an accomplishment, and some will not be satisfied with even a very high rank. The average number being negative is not indicative of how I think one should feel about a given team being average. I just want to rate teams objectively with an eye toward who the best teams are rather than placing the most average team 60th.

Leaving out the math for a moment...I do not use a strict RPI formula, but it is more record-based, and there is no complex math like there is in an Elo formula. This is a two-tiered ranking system. An initial rating, which is similar to the RPI, is computed first, which allows me to then rate teams as opponents. This initial rating is similar to the RPI in that opponents' winning percentage counts for twice as much as opponents' opponents' winning percentage. It is dissimilar in that the strength of schedule (SoS) is on a 12-point scale and is multiplied by the record. Then I make a dramatic departure from the RPI. Every win (as long as the opponent has any value in the initial rating which I just described) adds to a team, and every loss subtracts from a team. So a team with no wins has only negative weeks. Unlike in the RPI, wins are not segregated from strength of opponent. For instance, in the RPI, if team A and team B beat team C, the team with the better SoS (which results from unrelated games) between team A and team B gets more points.  This rating ensures team A and team B will get identical credit for the same win.  Margin of victory is not considered except under a very precise set of standards that exist only to address the heightened degree of difficulty that exists when a team plays on the road (see below).

How I get the SoS for the initial rating is like this: (1/(opponents' winning percentage)) x 2 + (1/(opponents' opponents' winning percentage)). Then I subtract that number from 12. If it's a really bad schedule (like Hawaii's for a long time in 2007), that first number I compute is higher than 12, so it doesn't work (resulting in unrated teams early on). The rest of what I do is multiply the SoS by the winning percentage (with a couple of adjustments for home and away and FCS {I-AA} opponents), which gives me a "value" for each team. Beating a team of a certain value gives you one number (the better the value, the higher the number); losing to them gives you a different number (the better the value, the lower the number).  Then, I take the loss number and subtract it from the win number (again with home/away adjustments and adjustments for wins over FCS {I-AA} teams).
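
Here is a minimal sketch of that first tier, leaving out the home/away and FCS adjustments; the function name and the sample numbers are mine.

def initial_rating(win_pct, opp_win_pct, opp_opp_win_pct):
    # SoS on a 12-point scale: opponents' winning percentage counts twice
    # as much as opponents' opponents' winning percentage.
    sos = 12 - (2 * (1 / opp_win_pct) + 1 / opp_opp_win_pct)
    # The initial rating ("value") is the SoS multiplied by the record.
    return sos * win_pct

# A 9-3 team whose opponents are .550 and whose opponents' opponents are .500:
print(initial_rating(9 / 12, 0.550, 0.500))  # about 4.77
# A very weak schedule makes the first number exceed 12 and the SoS go negative,
# which is why some teams come out unrated early in the season:
print(initial_rating(1.0, 0.150, 0.400))     # about -3.83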

The second tier of the ratings, technical description and illustration

The rating is divided by 30 to give me the "win number," and 1/(the rating times 2) gives me the "loss number." Losses hurt more than wins help, but the priority is to determine the best teams, so I think this is appropriate. I want a bad loss to hurt, and when it is a close call in strength of wins, I'd rather benefit the 1-loss team with a more understandable loss. Teams that win most of the time get a little more leniency for playing weaker teams. But the other side of that coin is that if most of the opponents are not very good, there is still a relatively limited opportunity to accumulate points.
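
In code form, a sketch of this second tier under stated assumptions (a simple sum over games; the FCS handling and other adjustments described elsewhere are omitted).  The /30 divisor here is the one described in this post; the September 2016 post above changes it to /10.  Function names and the sample ratings are mine.

def win_number(opp_rating):
    # Credit added for beating an opponent with this rating as an opponent.
    return opp_rating / 30

def loss_number(opp_rating):
    # Amount subtracted for losing to an opponent with this rating.
    return 1 / (opp_rating * 2)

def overall_rating(results):
    # results: list of (opponent's rating as an opponent, won) pairs.
    total = 0.0
    for opp_rating, won in results:
        total += win_number(opp_rating) if won else -loss_number(opp_rating)
    return total

# Hypothetical season slice: three wins of varying quality and one loss.
print(overall_rating([(5.8, True), (4.1, True), (3.0, True), (2.2, False)]))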

In 2008's final ratings, Utah did not have a loss and got a lot of credit for beating once-beaten Alabama but still finished behind Florida and Texas. Florida had a worse loss than Texas did (even though the team to which Florida lost, Ole Miss, defeated the team to which Texas lost, Texas Tech) but had more good wins. Texas had more good wins than Utah. Alabama (a loser to Utah) and Oklahoma (a loser to Texas) were similar, and Oklahoma was a little higher.  Also, Texas' other wins were still enough to overcome the one loss. I think this is an appropriate balance. The loss didn't hurt Texas so much that they dropped below Utah, but losses still make enough of a difference to help the better records go to the top. I don't want the rest of the season to drag a team like Utah down too much despite a good win like that, but teams without good wins and an otherwise similar schedule would have no chance of getting so high.

Home/away

This is all I do for home and away games. First of all, this is only triggered if the home team wins by 3 or less or in overtime. It is pretty consistent, across many seasons and both NFL and college football, that a home team on average has a 3-point advantage. Most ratings which consider home/away treat all games universally, but I do not believe in this, since it probably didn't matter where the game was played if a team wins by more than that. Think about it this way. If the game were played in an unfamiliar, centrally located stadium with no people, the game would start off 0-0 and there would be no effect on what happened due to location. So the average advantage is exactly 0. That means a team even winning by 1 has shown itself to be superior for that instance. It gets a little murkier when it is not a neutral environment. Since the average advantage is about 3 for the home team, that's all I'm willing to consider, even though of course a crowd can help or hurt one play, and that play could have a much bigger impact than three points (an interception return instead of a touchdown, for example, {assuming one-point PATs} is a 14-point variation). Anyway, the only impact is that a winning home team's victim is counted as 9/10 as good a team as its rating indicates for the home team's rating, and the losing road team's opponent is considered 11/10 as good as its rating actually indicates. This makes the win count for just a little less, and the loss subtract just a little less.
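
As a sketch (the function name and numbers are mine): the adjustment only changes how good the opponent is treated as being for that one game, and only when the home team wins by three or fewer points or in overtime.

def effective_opponent_rating(opp_rating, is_home_team, close_home_win):
    # For a home win by 3 or less (or in overtime), the home team's beaten
    # opponent counts as 9/10 as good, and the road loser's opponent counts
    # as 11/10 as good; a larger effective rating means a smaller loss penalty.
    if not close_home_win:
        return opp_rating
    return opp_rating * (0.9 if is_home_team else 1.1)

# The home team wins 24-21: its credit for the win is based on 90% of the
# loser's rating, while the road team's loss is charged as if the winner
# were 110% as good (which slightly shrinks the subtraction).
print(effective_opponent_rating(3.0, is_home_team=True, close_home_win=True))   # 2.7
print(effective_opponent_rating(5.0, is_home_team=False, close_home_win=True))  # 5.5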

I-AA games

I used a complicated formula that I can't even quote you exactly to determine how much credit to give to I-AA wins. Basically, I differentiated how I-AA opponents perform against each other (exactly .500 of course) from how they perform against I-A teams in general.

This enabled me to value the average win against a I-AA opponent. So you do get credit for beating a I-AA team that is successful against other I-AA teams over one that is not, but even beating an otherwise perfect I-AA team who has a win over a mediocre I-A opponent will only get you about as much credit as beating a mediocre I-A team yourself. Just like I-A opponents, if a I-AA team has no wins, you get no credit. And I made it so that if the I-AA team gets few wins it is only marginally better than beating a I-A team with no wins. But there is no automatic quality that exists just because a team is I-A (Western Kentucky, who finished #120 in 2008, proved nothing by moving up to I-A and losing). You don't get any credit for suiting up and going onto the field no matter who your opponent is. A I-AA team who beats I-AA opponents still has shown some degree of competence in a football game.

As for how this affects SoS, I-AA wins are not factored in (with the adjustment mentioned on 9/27, I'm going to subtract back out the I-AA losses).  Your first reaction may be that this helps teams, and opponents of teams, who play I-AA games, but except as to the SoS ratings, this is not the case.  The credit that a team gets is added after the first step of the calculation. SoS is as strong as a team’s schedule against I-A opponents only, although FBS (I-A) opponents who play FCS teams have those FCS games reflected in the initial ratings.

SoS computation

There are numerous ways to calculate SoS according to opponents and opponents’ opponents as I do.  Of course I did not invent this idea.  Some just add together twice the wins and losses for opponents (if they even count that part twice) plus the cumulative records of opponents’ opponents.  You can also compute the average for each component (opp. and opp. opp.), again factoring in opponents twice.  I believe this method is unsatisfactory.  I want each game to have the same weight (which is why I made the adjustment mentioned above).  The rating is based on accomplishments.  Just as beating a team in Week 1 is equal to beating that team in Week 12, Week 1 should have the same impact on SoS as Week 2.  So what I do is pool all of the opponents' wins and losses so that each game, not each opponent, carries the same weight.  For example, if after 4 weeks a team has played a 1-3 team, a 2-2 team, and a 3-1 team, and had a bye week, the total record of its opponents would be 6-6.  It would be the same if the records were 1-4, 2-1, and 3-1.  I think the team that has played more games should count for more because there is more basis on which to judge.  This also keeps FCS wins by opponents from playing too large of a role.
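
A small sketch of that pooling, with a hypothetical function name; each opponent's games are summed so that every game, not every opponent, carries equal weight.

def pooled_opponents_record(opponent_records):
    # opponent_records: list of (wins, losses) tuples, one per opponent.
    wins = sum(w for w, _ in opponent_records)
    losses = sum(l for _, l in opponent_records)
    return wins, losses, wins / (wins + losses)

# The example above: opponents at 1-3, 2-2, and 3-1 pool to 6-6 (.500).
print(pooled_opponents_record([(1, 3), (2, 2), (3, 1)]))
# Records of 1-4, 2-1, and 3-1 also pool to 6-6, and the opponent with five
# games naturally counts for more than the opponent with three.
print(pooled_opponents_record([(1, 4), (2, 1), (3, 1)]))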


Although many ranking systems (Sagarin's, for instance) average ratings from the final results (after the fact) and I have an SoS rating as a preliminary step, something I have in common with those systems is that there is more to the formula than SoS and record.  The SoS listed may or may not correspond to the SoS computed by averaging opponents' ratings or by dividing the final ratings by winning percentage.  Please be aware of this fact when using my SoS to argue for or against a certain team.

Timing of first rating

I sometimes still get unusual results (such as -25 ratings), and even teams that are impossible to rate, in the first couple of weeks of doing these ratings in October.  As the season goes on, the chance of these events diminishes.  5 weeks of play seems to take care of most statistical anomalies that run afoul of my system, so to begin the ratings any earlier would be neither constructive nor instructive.
