Four Point Scale


Duke: Why the hell do you have to be so critical?
Jay: I'm a critic!
Duke: No, your job is to rate movies on a scale from "good" to "excellent"!
Jay: What if I don't like them?
Duke: That's what "good" is for.

The Critic, Pilot

If you take a stroll through professional game review websites, you will notice that scores tend to be in the 6.0 to 10.0 range, even when they're nominally using a ten-point scale. This is called the four point scale, also sometimes known as the 7 to 9 scale. There are two common takes on why this is so.

The first view considers the four point scale to be a bad thing and holds it up as evidence of a website's lack of integrity (an accusation most often aimed at mainstream outlets). The accusation is rarely leveled at the writers themselves; the blame is usually placed on a site's editors or Executive Meddling.

The game journalism industry, like all forms of journalism, thrives on access. Game magazines and websites need a steady flow of new games, previews, and promotional materials directly from the publishers in a timely manner, or they're irrelevant. Unfortunately, the game industry is under no obligation to provide this access, and game review sites and magazines are far more reliant on the companies that produce the games than movie critics are on movie studios; indeed, since most websites are expected to provide their content for free, industry advertising is perhaps their most important source of income. There are murky tales of editorial mandates or outright bribery, but the whole system is set up so that publishing a highly critical review of a company's triple-A title is akin to biting the hand that feeds you. This is especially true of previews, which tend to have an artificially positive tone: the company didn't have to show the journalist the game in the first place, and if he pans it, he's unlikely to be invited back to see any of their other work. As such, you're unlikely to see major titles, even awful licensed crap, get panned too hard in for-profit publications.

The other view considers the four point scale a perfectly reasonable way to award and interpret review scores. This is easiest to understand by comparison with the way school assignments are graded. In any given class, people will usually get scores ranging from 60% to 100%, with the average around 70-75%. This leads people, both reviewer and reader, to expect review scores to mean something similar to what they've already encountered in real life: roughly 60% means "this sucks, but it can still be considered a game", 70% is "average", 80% is "decent/solid", and anything above 90% is a mark of excellence.
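
To see how compressed this reading really is, here's a minimal sketch in Python of a reader decoding a nominal hundred-point scale the way they'd decode a graded assignment. The exact cutoffs are illustrative assumptions drawn from the paragraph above, not any outlet's official rubric.

```python
# A minimal sketch of the school-grade reading of review scores.
# The cutoffs are assumptions taken from the description above,
# not any publication's actual rubric.
def interpret_score(percent: float) -> str:
    """Decode a nominal 0-100 review score the way a reader
    trained on school grading tends to."""
    if percent >= 90:
        return "mark of excellence"
    if percent >= 80:
        return "decent/solid"
    if percent >= 70:
        return "average"
    if percent >= 60:
        return "sucks, but can still be considered a game"
    return "beyond merely not being good"

for s in (95, 83, 72, 61, 45):
    print(f"{s}% -> {interpret_score(s)}")
```

Note how four of the five bands sit in the top 40% of the scale, which is the whole trope in a nutshell.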

Think of just how hard it is to actually score lower than 60% on an assignment. Even if you hand in complete crap for your essay on the Punic Wars, it will be hard to get much lower than 60%. To get under 60%, you pretty much have to turn in something that goes beyond merely "not being good". Unless you write in another language, forget to include the last three pages of your essay, and accuse Napoleon Bonaparte of engineering the Punic Wars to cause the September 11 attacks, you probably did enough to earn a passing grade, despite not having mastered the material by a long shot. Game developers that achieve this level of suck quickly go out of business, which in turn explains why games rarely ever get scores below 60%. It also explains why few games score under 75%: most game developers know that churning out sub-par products isn't good long-term business practice, and those who don't know it quickly learn the lesson.

The situation with the four point scale has led some reviewers to drop rating scores altogether, or to favor an A/B/C/D grading system. Professional reviews tend to keep a rating system to reduce the chance of being misquoted or misinterpreted: it's evident that you did not mean the game was "excellent" if there's a big "6/10" or "D" at the end of the article.

The same basic concept applies to every industry; reviewers tend to place things in the upper half of whatever their reviewing scale happens to be, and for the same reasons.

Of course, if reviewers get too negative there's always the risk of fan backlash, because Reviews Are the Gospel. Contrast So Okay It's Average, where something falling just below this scale is acknowledged to have some quality, if not a lot. See also Broke the Rating Scale and F Minus Minus.

Examples of Four Point Scale include:

Real-world examples

General

  • This happens to an extent with fan reviews too. Go to any site where shows can be rated (like Anime News Network) and most shows will float above 6.0. Fan reviewers do tend to be, well, fans, which skews reviews positively. That, and they may pattern themselves after official reviews, even without meaning to. And sometimes fan reviewers "cheat", handing out extreme scores to drag a work's rating toward their desired number. The real problem is the way the scores are averaged, which encourages this kind of behaviour: a single 1/10 pulls a mean down far harder than a 7/10 does. By taking the median score (or using a fancier formula), an 8/10-rated movie can be made to be affected the same way by a 7/10 and a 1/10, as illustrated in the sketch after this list.
    • There's plain selection bias here; no one is forced to watch anime they remotely suspect they won't like. Comparing the spread and population of ratings from people who've seen some episodes against those from people who've seen all of them can be instructive.
    • Exception: Some anime series with exceptionally bad Macekre dubs will still have the original version rated highly, but the dub will get low ratings.
      • Fan reviews of video games also apply here. You'll find a mixture of reviews that are all perfect scores or close to it, and reviews that give the game the lowest score possible.
  • Truth in Television: the whole business can also be justified in many cases by score entropy. Here's how it goes: you independently, objectively and honestly review game A. You give it, say, 95%. A year later, you review game B. Game B is pretty much game A with the awesome cranked to eleven, or with the same awesome but all the miscellaneous suck ironed out. Watchagonnado? You objectively have to give it 96% or higher. Cue next year. Some outlets, such as Gamespot, claim that their standards rise as the average quality of what they review rises, averting this problem in theory but giving rise to a lot of Fan Dumb if actually followed.
    • This is the same concept behind having the Olympic favorites in events like figure skating do their routines last. If they went first and got a perfect score, only to be one-upped by an underdog, the judges couldn't score the underdog higher than perfect, and controversy would erupt.
    • This trope can also be explained in basically all industries by the fact that, if you assume the scores are like grades in school, getting a 50% is absolutely terrible. This does lead to bizarre situations with user-submitted reviews. One person will give a game a 5 or 6 out of ten while claiming it was average or somewhat above average, while somebody scoring it a 7 claims it was mediocre but without major flaws. Somebody else will give a game an 8.5 or 9/10 because "nothing can be perfect" or because it's not on the system the reviewer likes, while yet another person scores it a ten, saying it's the best game on the system by far despite a few minor flaws.
    • Averted by EGM - see below.
  • Lore Sjoberg played with this by giving his first and possibly only F on the Book of Ratings to Scrappy-Doo.

Lore on Potato Bugs: "'Fouler insect never swarmed or flew, nor creepy toad was gross as 'tato bug. Remove the cursed thing before I freak.' -- Wm. Shakespeare, Betty and Veronica, Act 1, Scene 23. I can't even go into how nightmarish these vile little affronts to decency and aesthetics are. If I were having an Indiana Jones-style adventure, the Nazis would lock me in a crypt with a herd of potato bugs. And, I might add, I'd choke myself to death with my own whip right then and there rather than let a single evil little one of them touch my still-living body. They're still better than Scrappy-Doo, though. D-"

  • This holds true on just about any site where viewers can post their opinions as well. Simply put, the only people who vote are either going to 1) post glowing reviews and scores, because they loved it, or 2) post really bad ones, because they hated it. Genial appreciation, the response of the much larger majority, never gets factored in. (And generally there's a lot more fanboy squeeing, at least if the high user rating for Revenge of the Fallen is to be believed...)
  • Any horoscope that rates the upcoming day on an alleged scale of one to ten will use a Four Point Scale.
  • A well-known gun writer came right out and said that negative reviews were not allowed by the editorial staff. He went on to say that they simply wouldn't print reviews of bad guns, so if a new gun came out and none of the major industry mags were reviewing it, take a hint.
  • Averted by Mountain Time's cheesy book cover reviews. The first three covers reviewed received scores of 8 stars, 11 stars (and 1 wolf), and "all the remaining stars in the universe" (and 12 wolves) respectively. All subsequent covers have received arbitrary ratings, such as "3 groping Space Frankensteins."
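
As a worked illustration of the averaging problem in the first bullet above, here is a quick Python sketch. The ratings are purely hypothetical numbers, chosen only to show the arithmetic.

```python
# Why averaging invites score-bombing: one strategic 1/10 drags the
# mean far more than an honest 7/10 does, while the median treats
# the two dissents identically.
from statistics import mean, median

base    = [8, 8, 7, 9, 8]   # a show most raters genuinely like
bombed  = base + [1]        # plus one spite vote
dissent = base + [7]        # plus one honest mild dissent

print(round(mean(bombed), 2), round(mean(dissent), 2))  # 6.83 vs 7.83
print(median(bombed), median(dissent))                  # 8.0 vs 8.0
```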


Cars

  • New car reviews in both magazines and newspapers. Even the Yugo received lukewarm reviews from the major car magazines; these publications are truly frightened at the thought of losing advertising revenue over a poor review. This is doubly true after General Motors pulled its advertising from the Los Angeles Times when one of GM's products was panned in print. Of course, this may be the only case where the trope is potentially Justified: compared to everything else on this list, cars and other vehicles are very expensive, and if you buy one the dealer isn't inclined to take returns...
    • Car review site The Truth About Cars prides itself on averting this trope, and its brutally honest reviews have resulted in several car manufacturers barring it from attending press events or being given review cars. The most infamous example involved pissing off Subaru AND BMW after the proprietor of the site, Robert Farago, penned a review of the Subaru B9 Tribeca in which he declared that the grille on said car "looks like a flying vagina." (Farago was fired from the San Francisco Chronicle, too.) Don't know why BMW was offended... Anyway, TTAC uses a five-point scale, and it is quite common for cars to get 3s, 2s, or even 1s. Similarly, Middle-Eastern site Drive Arabia is honest in its reviews (although not as aggressive or hostile as TTAC can sometimes be), and it has received similar treatment from local dealers.
      • The only known executive of a Detroit automaker who acknowledges TTAC's existence is Alan Mulally, CEO of Ford Motor Company. Ford also happens to be the only American automaker that didn't have to be bailed out by the government during the recent economic downturn. Think about it for a minute there...
    • European motorcycle magazines seem to have a particular love for BMW motorcycles. A flat spot in the torque curve is a minus for any other marque, but the BMW is praised for having high-end power. Or a test of three comparable motorcycles where the two Japanese cycles win on points in the summary, but the article still proclaims the BMW number 1. It's either Euro-chauvinism or influence from the BMW advertising budget. It doesn't help that BMW routinely provides reviewers with bikes fitted with all the optional extras. Reviewers will gush for the entire review about the technological gewgaws, then mention in one sentence at the end that these are all optional and cost money. Guess what readers remember?
    • In 1969, Car and Driver panned an Opel station wagon so thoroughly (the article's photos of the car were taken in a junkyard) that GM cancelled its advertising for months, and Buick Division (which sold Opels in the US) pulled its own for the rest of the decade.
    • One manufacturer threatened to withdraw its advertising from the BBC after having a car panned on Top Gear.
  • Consumer Reports has a policy of only reviewing cars and household goods that it bought incognito from retailers. Nonetheless, most of its ratings are Good, Very Good or Excellent.
  • Cars at a car show, or ones being appraised, are scored on a scale of 1-6, with 1 being perfect and 6 being junk. Most score a 2 or 3, because people generally don't take junk vehicles to a car show.


Films (Live Action)

  • A related phenomenon is noted in the Laserblast episode of MST3K, wherein Mike checks Leonard Maltin's Movie Rating Guide, notices that the (incredibly terrible) movie they just watched got two and a half stars, and mocks how this implies the film is equivalent to other films Maltin gave the same score (such as Animal House and Amadeus).
  • The ABC's movie critics, Margaret and David, on At the Movies use the full range of 0-5 stars. They've given plenty of low scores to rubbish movies. While their individual scores for a given film are usually similar, sometimes there's quite the disagreement: one of them might give a film one star while the other gives it four.


Live-Action TV

  • Go to TV.com. Pick a show you hate, any show. It's pretty much guaranteed that most of the ratings won't drop below 7 out of 10. In some cases, reviewers will rate an episode before it's even aired, in an "I think this will be good" way.
  • For British television dramas, "average" is actually 77%. Even so, very few dramas go below 70 or over 90 (much was made over the Doctor Who Series 4 finale getting 91% for both parts).
  • And speaking of Doctor Who, the show's official magazine ran a poll asking readers to rate all two hundred Doctor Who stories out of ten. The lowest-rated story was "The Twin Dilemma" at 38%, with the average score being 69% and all but a handful scoring at least 50%. Yeah, it's that bad.
  • As a reality TV example from Dancing With the Stars, you can trip, shuffle, and walk your way across the dance floor for two minutes and still get a four or five. Two and three are put in play extremely rarely, when the judges are trying to force an inferior dancer off the show. In ten seasons, no one has ever been given a one.
    • Head judge Len once gave an explanation of each of the ten scores: getting on the floor and moving your feet grants you a 2, being vaguely aware that there is music playing is a 3, and dancing mostly in time to said music gets a 4. To get a 1, you would literally have to not dance at all.
    • On Strictly Come Dancing, Craig Revel-Horwood, in particular, has been criticised for his "low" marking - which means that he sometimes gives out fives and sixes or even (the horror!) less than five to dancers who aren't particularly good, while the other judges give out sub-6 scores so rarely that it tends to look like a personal insult when they do. This criticism ignores the fact that, logically, if you're using a ten-point scale then a five or six should be average and a seven or above should be good. Things get even worse once the season passes the quarter-final stage, when any mark lower than 9 tends to be roundly booed by the audience.
  • Ice Age (formerly Stars on Ice, the Russian Dancing With the Stars on ice) uses standard figure skating scales: 0.0 to 6.0. To put things into perspective, the worst average score in the five-year history of the show, awarded to the worst pair on the very first day, was 4.8. It's become worse over the years: now the average score is 6.0, noticeable mistakes mean 5.9, and a bad performance goes as low as 5.8. To add insult to injury, judges sometimes complain that they don't have enough grades to noticeably differentiate between performances of similar quality, apparently ignoring the fact that they have 57 other grades at their disposal.


Music

  • Q Magazine has never gotten over giving five stars to the legendary Oasis train wreck (for some, anyway) Be Here Now.
  • Sounds of Death, aka S.O.D., is infamous for this. In past years they would publish "reviews" of albums with copy taken straight from the record label's press releases, and in many cases run a glowing review of an album opposite a full-page ad for the same CD!
  • Allmusic zig-zags this:
    • It rarely rates an album below three stars, and never rates an album five stars when it first comes out. So it is a literal Four Point Scale.
    • It isn't unheard of for them to go a little lower: Brooks & Dunn's and Kenny Chesney's discographies include at least one two-star and one two-and-a-half-star album apiece (Chesney has two two-stars).
    • With certain artists it shifts the scale about one-and-a-half stars lower.
    • Allmusic also seems to have a strange hate for later "Weird Al" Yankovic albums, which are usually well-received by others.
    • Some of the reviews date from when Allmusic was still in book form, and in those cases, the stars don't always match up — so they might say an album is unremarkable yet give it four stars, or say it's great but only give it three.
  • In a similar vein, Country Weekly magazine has used a five-star rating in its album reviews section since late 2003, a couple of years after Chris Neal took over as primary reviewer. Almost everything seemed to get an automatic three stars or higher, with the occasional two-and-a-half at worst. Perhaps the only time he averted this trope was in one issue where a Kidz Bop-esque covers album got one star. Before the star-rating system, the mag's reviews were even more unflinchingly favorable, both from Neal and his predecessors. When Jessica Phillips took over the reviews in late 2009, she got a little more conservative with the stars; she gave one album only two-and-a-half stars although the tone of her review didn't suggest that the album was even mediocre. The other reviewers who have joined her have been equally rational.
  • Robert Christgau used to be much more diverse in his ratings, which either ranged from E- to A+ (before 1990) or ran through a wide variety of grades including "dud," "neither," honorable mention, and B+ to A+. Now that he no longer takes the encyclopedic approach to reviewing he once did, he only rates albums he likes as part of his "Expert Witness" blog, effectively limiting grades to B+ through A+, literally only four different grades. Though this limits his effectiveness as a reviewer, the new scale makes him considerably more likable as a person.


Professional Wrestling

  • This trope hits professional wrestling reviews hard. Virtually nobody is satisfied with any rating below four stars. Mike Campbell, a reviewer of Japanese wrestling, has gotten a reputation as a horribly biased, negative critic simply because he averts this trope very hard while explaining the pros and cons of a wrestling match in meticulous detail.


Sports

Do keep in mind that many sports lacking a clear dividing line between amateur and professional (or having no real professionals at all) use the same grading system across the entire sport, from local amateur events to the Olympics. This means the only occasion normal people see them being graded is at the very top level, where most of the athletes have twenty years of practice and genuinely deserve the points they get, even when they are all clustered close to "perfect". It's an attempt to get the same kind of empirical scoring that most athletic events have, showing a progression of ability throughout a career and giving athletes consistent marks to aim for. Without that kind of system there would be little way for an athlete to actually know how good they really are, particularly at the local and regional level where high-quality competition may be in short supply. So some of these should come with a YMMV tag.

  • Happens with some competitive sports, such as martial arts tourneys. You could technically give someone less than a 7 on the 1-10 grading scale when judging, but you'd get a hell of a lot of stares and earn a reputation as a biased jerk. But nothing is stopping you aside from any sense of decency.
    • A specific example is the ten-point must system, specifically as applied to mixed martial arts. The winner of a round gets 10 points (both get 10 in case of a drawn round), the loser gets "9 or less". Actually finding a judge who will award a round 10-8, let alone 10-7 or 10-4, is rare indeed. Finding a judge who will score a 10-10 round is rarer still.
      • Most (probably all, actually, but there is no way of knowing) MMA productions require that their judges give one fighter a 10 and the other a number less than 10 for each round. In practice, 10-9 is the most common score, with 10-8 meaning one fighter got his ass kicked. Lower scores are virtually unheard of. That said, this is questionable as a use of this trope, since the judges are comparing exactly two things to each other in a vacuum (how each fighter did in the fight), so the scale isn't really relevant to anything outside of itself. They could just as easily use a 0/1+ system, but people like big numbers. Also, since most MMA contests are trying to give the audience a good fight between fighters of comparable skill and fitness, the difference in scores should in theory be fairly low; if someone is ahead by a larger margin, the opponent is likely to be on the receiving end of a technical knockout and/or concussion before the end of the round anyway.
    • A similar thing happens in competitive debating (it's like martial arts with words!). At tournaments, 75 is considered an average speech, and virtually all speaker scores fall between about 70 and 80, with 79 or 80 being a demigod-level speech. Supposedly, if someone simply gets up, repeats the topic of the debate, and sits down, that's about a 50. Getting enough judges for a debate can be a problem; often the judging forms are very ... specific ... to try to get around the fact that some judges may be, effectively, people who wandered in because they smelled coffee. There are forms where the judge is asked to circle a number from 1 to 5 in 20 different categories, then add the numbers up to give the final score. Since in some categories a 2 is roughly equivalent to "Did not mumble incomprehensible gibberish during the entirety of the debate," 40-50 is about the lowest score you can get if you even attempt to look like you're self-aware.
      • Variant: In some forensics (fancy name for "public speaking," though CSI competitions would be awesome) formats, each competitor's score is determined by adding the judges' individual scores, each one out of fifty points. Judges are instructed to both score and rank each competitor. Where the fun begins is that judges aren't allowed to give tied scores, and adjacent scores are only allowed to differ by one point. The result is that first place, in every round, automatically carries a 50, second place a 49, and so on (see the sketch after this list). Even if a competitor starts his piece over more than once (which automatically carries a ten-point penalty or worse, depending on the format), they're often just given the last-place score. Few judges ever rock the vote; a judge who awards a first place a 49 (let alone, say, a 45) is regarded as being unfamiliar with the format. The dark irony hits when you realize that the most veteran judges are the ones willing to be tough; judges who don't know their way around the competition usually just punt it.
  • Rivals.com, a football recruiting site, ranks prospects using the standard 1-5 star scale. Then they have a vague additional ranking system that ranks players on a 4.9-6.1 scale.
  • In ski jumping each jump is scored by five judges. They can award up to 20 points each for style based on keeping the skis steady during flight, balance, good body position, and landing. The highest and lowest style scores are disregarded, with the remaining three scores added to the distance score. However, anything below 18 is usually considered a slightly botched jump and scores below 14 are only ever seen when the jumper falls flat on his face upon landing.
  • In NCAA football, going through an NFL draft voids the remainder of your scholarship years, which often prevents players from finishing any degrees they have not completed. In order to "help" kids who were on the fence about declaring or staying in school, the NCAA allowed them to consult a panel that would predict where they would be drafted should they come out. However, this panel was notoriously optimistic, frequently telling hundreds of kids a year that they would be drafted in the first three rounds. This had very real consequences: many kids were lured by the promise of NFL riches, fell to the late rounds because they were raw players, and washed out of the NFL before developing.
  • Gymnastics is theoretically scored out of 10, but is really marked between 9 and 10. Anything below 9 pretty much means "fell off equipment".
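
As promised in the forensics item above, here is a minimal Python sketch of how a no-ties, one-point-gap ballot collapses a fifty-point scale into a rank order. The convention that first place anchors at 50 is an assumption drawn from the description above, not any league's official rule.

```python
# With ties forbidden and adjacent placings exactly one point apart,
# a judge's "fifty-point" ballot is fully determined by rank alone.
def effective_scores(num_competitors: int) -> dict[int, int]:
    """Map rank (1 = first place) to the score that rank carries."""
    return {rank: 51 - rank for rank in range(1, num_competitors + 1)}

print(effective_scores(6))
# {1: 50, 2: 49, 3: 48, 4: 47, 5: 46, 6: 45}
# A six-person round only ever uses the top six points of the scale.
```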


Technology

  • This is also true of computer hardware (heck, probably all electronics); never buy any product given less than an 8 on a 10-point scale. The reasons for this are complicated, but basically boil down to the following:
    • Almost everything that is worth reviewing is competently developed and put together; these days it's rare for a product to be genuinely unusable, so they all get points there.
    • Everything is always an improvement over last year's model, again, because it pretty much has to be, so there is always a tendency to go upwards with every iteration. If last year's was an 85, and this year's is faster and has more cool stuff, how can it be lower?
    • Almost every complaint that you could make about most well-known high-tech products is either based on taste (iOS vs. Android, say) or is strongly counterbalanced by price (a top-end graphics card against a $60 model). The few complaints that don't fall into those two categories tend towards nitpicking and are often only visible when sitting two products next to each other. So whatever problems you might find can't take too many points off if the device does what it is supposed to for that price.
    • Gadgets have some of the most vehement fanboys on the internet, and so a site that tries to cater to all of them has to hedge its scores to keep everyone happy, further pushing the scores closer together.
    • Finally, they have to keep the manufacturers happy too, because those smartphones, SLR cameras and 3D TVs aren't cheap. So they will almost always focus a review on the "new" feature being touted by the manufacturer and how amazing it is, while ignoring the same feature on similar products whose makers are pushing a different part of their widget as being awesome.
  • Attack of the Show!'s Gadget Pr0n segment has never rated any reviewed item below 70%. Even a digital camera with a grainy picture, difficult menus, unresponsive buttons, low battery life, insufficient storage space, and inadequate low-light sensitivity that is several hundred dollars too expensive will still get the equivalent of a B+.
  • Zig-zagged by Mac|Life back when it was still called MacAddict. At the time, they had three review sections: a generic one, one for interactive CD-ROMs and one for children's software. All three used a four-point scale with their mascot, Max: "Freakin' Awesome", "Spiffy", "Yeah, Whatever" and "Blech!".
    • The catch-all section had reviews written by a panel of reviewers, summarized with the corresponding four-point rating and a good news/bad news blurb. If they could find even one good thing to say about a product, it usually got a "Spiffy" at worst. "Yeah, Whatever" was usually reserved for unspectacular products, and "Blech!" was all but nonexistent.
    • The interactive CD-ROM section, however, was just the opposite. It used a three-reviewer panel for each CD-ROM, and it was very rare that any of the three had anything good to say about any of the interactive CD-ROMs. You could pretty much guarantee at least one "Blech!" here.
    • And finally, the children's section used feedback from actual children, with a summary from a regular reviewer. The children's panel and the main reviewer were weighted to give the overall rating, but even then, you'd be hard-pressed to find a "Blech!"
    • All of this went out the window when the magazine repackaged itself as more staid and formal, going with a standard five-star scale (which has remained with the shift to Mac|Life).


Video Games

  • Amiga Power used the full range of its percentage system, to the point that Commodore executives would berate them over the phone for not toeing the party line, and games company Team 17 refused to send them any more games. The magazine also ran a regular feature in which it compared its low scores to those of other magazines. According to Amiga Power, the cut-off point for most games magazines' reviews of poor (but politically important) games was 73%.
    • Amiga Power were known for being very critical of other Amiga magazines; not without cause. At one point CU Amiga rated Epic as 91% seven months before it was actually released, and their review of Saint Dragon used screenshots from Dragon Breed, the Irem game of which Saint Dragon was a ripoff.
    • Let's quote Stuart Campbell's absolutely stunning tirade in his review of Kick Off '96 in the last issue of the magazine. For the record, the game got 1%.

"Giving something like Sensible World of Soccer 95% is utterly devaulued if you also give, for example, Rise Of The Robots 92%. Percentage ratings are meaningless unless you use the full range, and you can't give credit where it's due if you're pretending that everything's good. What encouragement does that give developers to produce quality? They might as well knock it out at half the cost and in a third of the time if they're only going to get another 3% for doing it properly."

  • Another exception is Destructoid, whose loyal userbase demands as honest a review as possible (going so far as to rate and compare the reviewers themselves in the community sections), and which has a very strict review guide as to what its review scores mean. Their reviews editor Jim Sterling is on record as feeling very strongly about the four point scale and is always working at bucking the trend with his reviewers. Sterling has, however, been accused of panning popular new releases and giving badly-received titles high ratings simply to stir up controversy.
  • Games Radar is fully aware of the Four Point Scale, and examines the phenomenon in its article "Crap games that scored a seven out of ten."
  • Edge magazine is one publication that, over the years, has attempted to stick to a rating system where a score of 5 should ideally be perceived as average, not negative. However, its mean score definitely skews closer to 7, simply because the magazine is more likely to review relatively polished high-profile games than the bargain-bin budget titles that would balance out the weighting the other way. Edge has done quite a lot of self-analysis of its own reviewing/scoring practices over the years, with articles like E124's look at how reviewing practices vary across the gaming publications industry (how much time a reviewer should spend with a game before rating it, how styles of criticism and ratings criteria vary depending on the target audience, and so on). Up until a few years ago, they also did a lot to build up the prestige and mythology around their rarely-awarded Ten Out Of Ten score (see, for example, their 10th anniversary issue's (E128) retrospective look at the highly exclusive club of four games that had received that score up until that point).
    • Then in 2007, Halo 3, The Orange Box, and Super Mario Galaxy were awarded 10s three months running, and since then the score has been awarded a lot more frequently. (See this interview with the editor for a discussion of their reviewing philosophy from around that time.) In contrast to 10/10, they've only used the dreaded 1/10 score twice: for the godawful Kabuki Warriors, and for FlatOut 3.
  • An exception can be found in the notoriously hard-assed Famitsu review team. However, the number of 40/40 reviews has risen sharply in recent times.
  • A partial exception is UK magazine NGamer, which, while falling into the Four Point Scale trap occasionally, gives a rather wide range of scores, with the reviews of the worst games being particularly funny to read. In extreme cases they can even break out of the hundred-point scale, usually for ultra-niche imported games that readers are unlikely to actually buy: Secret Flirts got -47%, Doki Doki Majo Shinpan received a score of "No", and some game about hunting deer scored ":(". Meanwhile, the Japanese version of Wario Ware D.I.Y. got 100% on the reasoning that the minigames were made by NGamer and hence were by definition perfect. (They did, however, admit that was a joke the next issue. When the UK edition was reviewed, it got a more realistic 88%.)
  • PC Gamer is another exception, or at least a partial one. Until recently it used a 100-point system (it has since gone to a 10-point scale), and it has been known to give scores as low as 3% (South Park). The magazine really does use the grading system from schools, so something that's So Okay It's Average gets 75%. Incidentally, this means there's a lot more range among awful games than among good ones.
    • PC Gamer, Edge magazine and NGamer are all produced by Future Publishing, so perhaps there's a whole publisher holding out. That said, they also do Nintendo's official magazine. Future Publishing also publishes Maximum PC, which is also usually an exception to this trope... as long as you don't mention Half-Life 2. A huge, in-depth cover story on the game, followed immediately by a review in which it gets an 11 on a scale from 1 to 10? It almost seems like a parody of this very trope, as does the aforementioned PC Gamer practically having an orgasm over Quake II. The front-page headline "THE BEST GAME EVER!" set the tone for the review.
      • Not a games magazine, but Future's SF magazine SFX is more than happy to give a program it's doing a cover feature on a lousy review if it thinks it deserves it. In the days when it used a grade system, from D-minus to A-plus, it once gave something an E.
      • PC Gamer's compatriot video game magazine, PC Format, was beautifully brutal during the 1994-1996 period, in that it used its percentage scale actively and aggressively. While some reviewers like Paul Pettingale seemed disposed to grade downward, the reviews ranged from 1-5% all the way up to 94% (the highest-rated game ever, which was the CD talkie version of Sam and Max Hit The Road - and it was a sub-review!). Then again, PC Format also refused to review any game that wasn't off the shelf, refusing to do previews for cash or offer scores based on games that were different from the games the readers could actually buy themselves. Games would routinely score in the 60-70 range, including classic games like Dune 2 (which gained points when it became a budget game), with real corkers breaking the 80% range. Ah, memories.
      • Amiga Power, which is noted elsewhere on this page as infamous for averting this trope, was also a Future stablemate.
  • Shortly before being discontinued, Games for Windows: The Official Magazine (previously Computer Gaming World) switched to a letter-grade system like that used in schools, precisely because of this problem. This system is now used on its corresponding website, 1up.com.
    • Computer Gaming World rather famously didn't have numerical or starred reviews for its first fifteen years or so, until the mid-Nineties, when readers who just wanted to look at a score without reading the whole article finally complained enough that it started giving out 0-5 stars. When they did start giving scores to their reviewed games, in most cases they were more than willing to use the entire scale. They even had an "unholy trinity" of games that were rated at zero (Postal 2, Mistmare, and Dungeon Lords).
  • The notorious game reviewer Jeff Gerstmann (who was responsible for the 8.8 trope) was fired by Gamespot after panning Kane and Lynch (a game heavily advertised on the site) with a 6.0; the site, however, says he was fired for personal reasons. He was not exactly alone among reviewers in scoring the game poorly, either. After this controversy and his firing, Gerstmann started up Giant Bomb, where he and his crew use an X-Play-style review scale (1-5 stars, no half-stars) and are more than willing to dish out 1- and 2-star reviews for bad games. He later reviewed the sequel, Kane and Lynch: Dog Days, which he gave a 3 out of 5 (an average score).
    • Alex Navarro (a co-worker and supporter of Gerstmann's) often broke the four point scale when he reviewed games including Big Rigs, Robocop, and Land of the Dead.
    • Gamespot is partially guilty of the scale: browsing its review archive, over half of its 233 pages so far (123 of them) score between 7 and 10, and only seven games have a perfect score - perfect scores took time to appear, the fourth coming in 2001 and the next two only in 2008.
  • A non-review example of this occurs in the Guitar Hero games: you will never get less than 3 stars on anything, no matter how badly you do. It's just a question of whether you get 3, 4, or 5.
    • However, Rock Band averts this. As you build up to the base score, which is the score you'd get for hitting every single note if there was no combo system and no Overdrive, you go from 0 stars to 1, to 2, and finally to 3. With the combo system and Overdrive, however, getting 3 stars is still laughably easy on most songs. 4- and 5-starring songs is still just as hard (or easy, depending on the song) as it was in Guitar Hero. This all means that it's more than possible to complete songs with scores below three stars.
      • It's still not possible to get 0 stars—someone tested this with the song "Polly" by Nirvana. The song literally has only eight notes in its drum part, so it's possible not to hit any of them (and, thus, not to score any points) and still pass the song. The results screen? 0 points and 1 star.
      • Guitar Hero Metallica introduces a star meter somewhat similar to Rock Band's. The difference is, you still can't get less than three stars in GHM; until you have at least three stars, the star meter will "help" you fill it until you reach three, which sometimes entails, for example, automatically filling itself during sections with no notes.
    • Guitar Hero sort of justifies it, because "failed a song" means "got a bad review" and so if you get less than three stars you failed. It's more like a Hand Wave than a real justification, though.
    • The minimum of 3 stars in Guitar Hero only applies to the guitar. You can get 2 stars on the drums or the mic; the reasoning behind this is unclear, but it's possible.
    • The opposite end of the spectrum occurs for certain DDR clones. In The Groove 2? An "A" is somewhere around low 80%; after A+ is S-, S, S+, one star, two stars, three stars and four stars.
  • Your Sinclair was willing to run the full gamut, all the way from 99% (Mercenary) to 9% (Count Duckula 2, which is so bad that even that was thought generous). Commodore Format was similar, giving scores ranging from 100% (Mayhem in Monsterland) to 11% (Dick Tracy). Both of these were Future Publishing titles as well; see above.
  • Averted with X-Play, which has outright condemned the use of a 10-point or especially a 100-point scale, as they're trying to be more general ("What's the difference between a 6.7 and a 6.3?"). They use a five-point scale (no half-points) with each point clearly defined:

1 Star - Absolute trash. Not worth any effort.
2 Stars - Some recommendation, but severely flawed.
3 Stars - Entertaining but nothing spectacular.
4 Stars - Good fun, but maybe not for everyone.
5 Stars - Brilliant and innovative game.

    • In order to stay true to this rating system, they have gone on record as refusing to review specific games because they didn't want to add a zero to the scale. (Such games include Barbie Horse Adventures, a flag simulator for the PC, and the infamous Big Rigs: Over the Road Racing.)
      • The dividing line seems to be whether said game has actual gameplay, an issue first brought up with Pokemon Channel. 1-star games are usually playable... up until a Game-Breaking Bug hits or the player destroys his or her controller in disgust.
    • Computer and Video Games beat them to this in the mid-nineties, though it was hardly the first publication to rate on a five-point scale.
  • Game Informer also averts this, regularly giving scores of 6 or lower for bad games. They've even given a couple of 0.5s in their history. Since they're owned lock, stock, and barrel by a major game retailer which provides most of their funding and games, they don't have to cultivate good relations with the game companies.
    • Granted, it still mostly follows this trope: if a game runs without major glaring bugs, it usually makes a 6 or better, and usually only one or two of the ten or so games reviewed per issue will be below 6. They don't follow this trope as egregiously as some other examples on this list, but they don't completely avert it.
  • Good Game averts this as well, having given several crappy but technically working games (such as Lost: Via Domus or Iron Man) a grand total of three. The fact that they are on a government station rather than a commercial one helps somewhat. They also quite regularly give grades between 4 and 6 to subpar or unimpressive games that may be of interest to specific groups. Having two reviewers also helps: a game that gets a 5 and an 8 is probably quite good if you are part of the target demographic but not worth the effort otherwise, while a pair of 6.5s means that the game is unimpressive, though inoffensive.
  • Australian magazine PC PowerPlay uses the full scale, except for 100% (though they did award a 10/10 when they changed from percentages to decimals a few years ago). The lowest-scoring game ever was Howzat, getting 1%, and they occasionally refuse to review especially bad games, such as Squad Leader and Tiberian Sun: Firestorm. The magazine has been heavily criticized by its readers for two specific reviews in this category: giving high scores to Deus Ex: Invisible War and Iron Storm when most people would say otherwise. They did apologize for Iron Storm, though.
  • Independent review site WorthPlaying.com has a typical floor of 4.0 unless the game is flat-out broken (in the sense of significant glitches).
  • Hardcore Gamer Magazine has an interesting version of this. Each game is reviewed by two staffers; the first gives the in-depth review of the game and awards a score (on a 0.0-5.0 scale), then the second comes in with a "second opinion" score and usually a one- or two-sentence aside about the game. The two scores are averaged. And while it's refreshing to see the two scores differ by about half a point, the real entertainment comes from watching the second opinion completely derail the score of the main reviewer.
  • RPGFan is notorious for this - with rare exceptions, even a game the reviewer spends the entire piece criticizing will still get at least a 70. They posted an editorial about it, providing an explanation of their methods and somewhat admitting that the lower half of their scale is pointless, but sidestepped describing their reasoning, instead saying that you should focus on the text of their reviews.
  • RPGamer used to score on a scale of 1-10, but ultimately dropped this in favor of a 1-5 system because of this very trend. Their reviews since the change actually use the entire scale, with several 1s and 2s given to games that truly tortured the staff members reviewing them. While older scores remain unchanged, the review scoring page provides a conversion table that has left many games with a severe drop in score when converted to the latest scale.
  • Videogame magazine Electronic Gaming Monthly, or EGM, made a conscious effort to avert this: most (previously all) titles they featured were handled by three separate reviewers, and highly varying impressions were surprisingly common. Closer to the end of its run, they switched from a 1-10 scale to a 'grade' system (A, B, B+, etc.) for the purpose of avoiding the Four Point Scale trap entirely.
    • Towards the end of the mag's original run, they handed off the really awful games to internet personality Seanbaby, who wrote humorous reviews lambasting them for being so bad that nobody would - or should - ever play them (many of the reviews can be seen, in extended and uncensored forms, on his website).
      • Eventually this reached its ridiculous-yet-logical conclusion when EGM was denied a review copy of the Game Boy Advance tie-in game for The Cat in the Hat movie, which the developer said was because they "didn't want Seanbaby to make fun of it". Or, to put it another way, they acknowledged right out of the gate that their game was so bad it wouldn't even rate a 1 in the normal review section. Seanbaby obligingly went out and purchased a copy just so he could lambaste it.
    • There were letters from the editor talking about how some company or another wouldn't give them information about their games anymore because of the bad scores they handed out. This happened at least twice with Acclaim and once with Capcom. In their first encounter with Acclaim, EGM had handed out very low review scores to their Total Recall game for the NES; when Acclaim threatened to pull advertising if they didn't give the game a better review, editor-in-chief Ed Semrad wrote in an editorial column that they could go right ahead, because they were sticking by the review even if it cost them money, because journalistic integrity was more important than a paycheck. The second time this happened, it was because EGM had blasted BMX XXX (and rightfully so); this time, Acclaim threatened to never let them review another game of theirs ever again, to which EGM said "fine by us". Capcom's case was a somewhat different affair: it wasn't a review that got them angry, but instead EGM badmouthing the constant stream of "updates" to Street Fighter II; when Capcom asked EGM to apologize for the remarks in exchange for not pulling advertising, EGM again said that they would not retract the statements even if it cost them Capcom's money, because they felt honesty and independence in their publication was more important. In all three cases, Acclaim and Capcom pulled ads from the mag for a few months before buying adspace again.
    • It should also be noted that EGM's review system was heavily inspired by Famitsu's review system. The first issue of EGM, however, featured scores that ranged from 'miss' to 'DIRECT HIT!'...
    • Actually inverted by EGM in 1998, when they revised their review policy in order to give HIGHER scores, specifically 10s. There was a period from late 1994 to mid-1998 when no reviewer had given out a single 10 (Sonic & Knuckles being the last game to receive one). After a slew of excellent high-profile games such as GoldenEye and Final Fantasy VII passed through in 1997 with 9.5s, the mag revised its policy in the summer of 1998. Previously, a 10 was only awarded if a reviewer believed the game to be "perfect". But as Crispin Boyer pointed out in his editorial discussing the change, "Since you can find flaws in any game if you wanted...there's really no point in having a 10-point scale if we're only using 9 of them." Thus, a 10 would be given out if the game was considered a gold standard of gaming and its genre. The very next issue, Tekken 3 broke the three-plus-year spell by receiving 10s from three of its four reviewers, and later that year, Metal Gear Solid and Ocarina of Time became the first games to receive 10s across the board in the magazine's long history.
    • EGM also received criticism from readers that some games would receive high scores one year, but the next year, a new-and-improved sequel or an extremely-similar-but-better game would come out to lower scores; alternately, a game that received high scores upon its original release may be ported to another system, or remade years later, to lower scores. Reader logic was that if Game B was better than Game A, objectively, Game B had to be rated higher on the numerical scale (see an entry above). This was addressed multiple times in the reader mail and editorial sections, where it was explained that they did not follow this rule, as long-running and generally high-scoring yearly sports series like Madden or Tony Hawk's Pro Skater would have hit the 10-point ceiling years ago due to improvements in each version. Furthermore, at least technically speaking, games will always be improving due to the more powerful consoles and computers that are released every few years. Finally, innovation naturally tended to score higher because of its originality than when all those ideas were incorporated into every game the next year. EGM explained that instead, they rated games based on the current marketplace, and specifically compared new releases to others within their own genres, while their level of standards would naturally increase into the future as games became more ambitious.
  • Somethingawful.com reverses this trope by using the entire spectrum to rate games - and beyond. It rates games on five factors, each worth up to ten points, adding up to a maximum perfect score of 50. They will not hesitate to go into the negatives for really bad games; the awful piece of garbage Thundra was hailed as the worst of them with a remarkable minus fifty.
  • Dr. Ashen's review of Karting Grand Prix mocks this, with Ashen referring to the game as "irredeemably awful", then giving it a score of 73% "because I'm a fucking idiot."
    • In an earlier review of the Gamestation, a flea-market handheld game system resembling the original PlayStation, Dr. Ashen gives the system 7/10, saying that it's the lowest score one can give "before the company pulls their advertising".
    • And in yet another review he gives a product 8/10, but "only because it's made in China, and I'm terrified of their government."
  • German gaming magazine Gamestar averts this on a 100% scale. They make it clear that games below 70 percent are generally "still okay" for fans of the genre and that those below 60% are average at best with miscellaneous faults, but they have still given ratings as low as 4%. No game has ever gotten a higher score than 94%, and the staff has said that a 100% is flat-out impossible, since it would have to be an eternally perfect game providing endless unhampered entertainment for all time.
    • Even more importantly, they avoid the I-have-to-give-the-sequel-a-better-rating problem by downgrading games: games lose points as they age, due to better hardware making objectively better games possible.
    • To explain this a bit more: games are rated on 10 factors, 6 of which always apply and the other 4 depending on the genre. A game can reach any whole number from 0 to 10 in these categories, and any number between 0 and 10 actually shows up regularly. To go even further, for reviews of more than one page, they will generally show a table of all ten categories, outlining the game's main pros and cons in each. They also never review a game that hasn't actually been released on the PC yet, and games such as MMOs don't get reviewed at release, since they take far too long to properly evaluate.
  • Zero Punctuation does not give out numerical scores for just this reason.
    • He did give out a numerical score for Wolfenstein: two out of five stars, which is already an aversion of this trope. Likely the only reason he gave out a rating at all was that he did the review almost entirely in limerick form and just needed a rhyme.
    • In response to "How can you even call it a review without a score?" from his Super Smash Bros. Mailbag Showdown: "If you want a score, how about four, as in four-k you" accompanied by the commenter being flattened by a giant number 4.
    • It is also worth mentioning that, his lack of scores aside, Yahtzee subverts the whole reason for this trope in the first place (that is, reviewers withholding bad reviews more or less to keep their jobs). His job practically is to give bad reviews, and he often receives criticism when he praises a game.
    • Kotaku doesn't give scores either, leaving some commenters confused.
  • British gaming magazine PC Zone's reviews run the whole gamut from 7% to 98%. Similarly, a score of 80%+ does NOT automatically gain a "Highly Recommended" award; although these often ARE given out to high-scoring games, on occasion they have been withheld from games that are technically good but lacking some kind of "soul" that the reviewer (and the Second Opinion reviewer) would have liked to see present.
  • This compilation of MetaCritic scores shows the trope in all its glory: 70% is worth no points, 60% is -1, and anything below that is -2. It doesn't really prove consistency, for one thing - measuring that would take standard deviations, while this is just a total of points. For another, putting the negative cutoffs that high just makes the lower scorers look even worse. Talk about spin.
  • GameTrailers generally has very informative and reliable reviews that coherently explain the points they try to make as the review itself is going on, but the score at the end falls squarely into this trap, the lowest score they usually give being somewhere in the 4.7 to 5.0 range. It once gave a humorous "suicide review" of Ultimate Duck Hunting presented in the form of the reviewer having killed himself over the game and his review being his suicide note, and went on about how it was bad enough to push him over the edge at every turn...and then gave it a 3.2.
  • Nintendo Power is usually good at averting this trope, but games in popular franchises tend to be given high ratings by default.
    • With this magazine, what you have to watch for is not the score, but the number of pages of the review. The Nintendo blockbusters get two, three, even four page reviews, squishing out reviews for other games.
    • They also admitted, in response to a letter, that while they use a full ten-point scale, they won't publish a review for a game that would score lower than a two, reasoning that such a game is too bad to even bother with, and they only give out tens to the super-duper cream of the crop.
  • Averted by Computer Games Magazine from Greece (no longer in existence, sadly). Its scores typically ranged from 3 to 10, with especially rare cases of games getting a 2. It once refused to rate Dark and Light, stating that the game would probably have earned the first 1 in the magazine's history. Eventually, it started giving ones to games like Warhammer 40,000: Fire Warrior, and even the occasional 0.5.
  • Amiga Computing gave 100% to Xenon 2. A reader called them out on this, asking if they'd give a higher score to an even better game. ("Yup.") They later gave out a score of 109%, and another 100% in the same issue.
  • The UK Official Dreamcast Magazine aimed to avert this trope (back around the turn of the millennium, even) by insisting on a rating scheme where 5/10 was strictly "Average". This led to a huge number of complaints from fans who missed the intention behind the scheme and complained that a game they liked got a "harsh" score (the creators of Fur Fighters commented that the 7/10 they got from the magazine was the lowest score the game received). Eventually, the magazine staff made a phrase for each number and put it under each review score so the reader knew what the rating actually "meant". (For instance, any 7/10 rating had the word "good" under it. Shenmue was the only game that let us find out that the word under a 10/10 was "genius".)
  • The Finnish gaming magazine Pelit uses this to a degree: they use a percentage scale for their game reviews, and they do use the entire gamut of their scoring system, but anything below 65 is still relatively rare. The magazine used to include an info box explaining that anything below 65% was below all standards, and that 50% or lower meant the game was truly atrocious. While the 50-or-lower reviews are amusing to read (such as their Fight Club review, where the entire review was just the phrase "Rule 1 of Fight Club: You do not talk about the Fight Club" with a 20% score), the staff hardly ever go out of their way to seek bad games to review, because they don't hate themselves that much. Instead, they pick games that they know they'll like, or ones that have interesting subject matter or are otherwise noteworthy. Their scoring system was originally chosen to maintain compatibility with other gaming magazines of the time; by the early 2000s there were basically no other respectable magazines around that still used the same scale, and the staff have mentioned repeatedly that they would like to switch to a star-based system or no score at all.
  • Averted by Noobtoob, an independent gaming podcast that uses a binary "thumbs up/thumbs down" rating system.
  • Averted by VGF's old Reviews Moderator and several other members, who would actually set the "average" level at five on a ten-point scale. The reviews forum is now defunct, but several members who review games still treat a 5 as average. Some reviewers on GameFAQs do the same, genuinely praising a game they gave a 7/10 (the site's accepted "average") and calling it good.
  • Ars Technica has started reviewing video games on a three-point scale: Buy, Rent, and Skip. They expand a bit upon why they use that scale and why they aren't part of Metacritic.
    • ScrewAttack has the same review system, except that it uses "F' It" rather than "Skip." It's also the system used for the video game reviews in Boys' Life (the magazine of the Boy Scouts), under the names "Buy," "Borrow," and "Bag," but not many people care about that.
    • Disney Adventures used this rating system as well.
    • Nintendo Power uses a three-tier system for digital download reviews ("Recommended", "Hmmm...", and "Grumble Grumble").
  • Inside Pulse tried to avoid this, but got so many threatening letters from developers that it gave up on a numeric scale entirely, describing games with positive and negative adjectives instead.
  • When Assassin's Creed 2 was due for release, Ubisoft kicked up a major shitstorm by announcing that they wouldn't send out advance copies unless the reviewer agreed beforehand to give the game a positive review. Apparently, it didn't need the "boost".
    • Eidos also pulled this trick for Tomb Raider: Underworld.
  • An odd aversion with user reviews: Gamefly customers regularly rate games about a point below the critical average, probably because they play a lot of games. Padding of any kind will seriously damage a game's rating there, but overall length isn't as important as it is to regular reviewers. Only about six games a year do better than an 8.8.
  • Video game review site actionbutton.net rates on a four-point scale, and has been routinely lambasted by fans who believe this or that game should have gotten five stars.
  • Spanish magazine Nintendo Acción runs on this, to the point that some Pokémon fans complained when Pokémon Black and White got "only" a 94 while other games were getting 96-98. In the fans' defense, the review also lambasted the game's graphics, despite its great animated sprites and Scenery Porn, and somehow scored the previous games higher in that department.
  • Averted by Metro GameCentral, who have repeatedly insisted that, whatever the industry decides, any maths teacher can tell you 5/10 is an average score. They have handed out a decent number of 1/10 scores over the years, and even the occasional 0/10.
  • While Toonami hosted dozens of video game reviews over the course of the show, only a handful ever scored below a 7 out of 10, and no game ever scored lower than a 6.
  • Averted by The Angry Joe Show. He clarifies in some of his videos that a 5/10 to him is "average", and will usually have at least a few positive comments on games with such a rating.


Web Originals

  • YouTube had a rating system that let people give a video up to five stars, though hardly anyone gave less than three unless the video was particularly bad; this graph illustrates just how ridiculous it was. That led to a few widespread incidents of vote-bots handing dozens or hundreds of one-star ratings to people whose videos disagreed with the attackers' political or religious beliefs, and since a drop to even four stars greatly reduces a video's traffic, the attacks worked. YouTube has since dropped the five-star system in favor of a simple like/dislike system.
  • Similarly, a website that hosts community content for Left 4 Dead lets people review the created content on a scale of 1-100. Trolls, and people exaggerating how much they hate a piece of custom content, generally give ratings between 1 and 20; anyone who wants to praise the author to the heavens (or who is the author on an alt account) gives 90-100. That latter crowd will ridicule anyone who gives a score between 60 and 80, even when the content doesn't merit anything higher. In other words, if the content is decent, you had better hand out high scores or risk being flamed by the community as too harsh or a troll.
    • The site then added an "I agree/I disagree" system to combat people abusing the scores, similar to the Like/Dislike feature many sites use. Unfortunately, it backfired: a review voted down far enough had its score removed from the overall average, which meant a group could team up and bury any review that wasn't a perfect 100. The reviewer, in turn, could simply reset their score to put it back up, defeating the purpose of the voting system entirely. The system was eventually removed.
    • There is also a critic scoring system, displayed alongside the community average (a campaign can, for example, hold an 85 from the community and a 60 from members with critic status). However, since critic status just goes to regular members who have written a lot of reviews, critics are as free as anyone to spam low or high scores.
  • Newgrounds is somewhat of an aversion. While the scale is only 0-5, it's an unspoken rule that a submission not up to snuff for the portal gets a 0, one you merely didn't like gets a 2, and one you love gets a 5; 1, 3, and 4 exist, but hardly anyone uses them. This is undoubtedly due in part to the "Blam"/"Protection" system which, broadly, rewards you for rating content high when others rate it high and rating it low when others rate it low, all in a blind system (a toy model of how such a reward might work appears after this list).
    • Though, as Retsupurae shows in their Newgrounds LPs, the comments are often full of text reviews, and it's surprisingly common to see someone complain about how bad a game or video is and then give it a 10/10.
  • Netflix lets you rate movies and aggregates all the user ratings into a star rating. Because some people will like something no matter how bad it is, and others will hate something no matter how good it is, aggregate ratings of a flat 1 or 5 stars are effectively impossible. Still, if a movie can't climb above 1½ stars you should probably avoid it, and if it reaches 4½ stars it's probably worth watching. The scale is skewed, but still relatively accurate.
  • Fandango.com not only does this, but always rounds up. Not so coincidentally, it rates the very movies it sells tickets to.
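
The "Blam"/"Protection" reward described in the Newgrounds entry above is easy to model. Newgrounds has never published its exact formula, so the pass threshold and point values in this Python sketch are invented purely for illustration; the point is only that paying voters for agreeing with the crowd's eventual verdict pushes votes toward the extremes of the scale:

    # Toy model of a Newgrounds-style "Blam/Protection" reward. The threshold
    # and point values are hypothetical stand-ins, not Newgrounds' real
    # (unpublished) numbers.
    PASS_THRESHOLD = 1.6  # assumed minimum average (0-5) score to survive judgement

    def judge(votes):
        """Decide a submission's fate from its average score."""
        average = sum(votes) / len(votes)
        return "protected" if average >= PASS_THRESHOLD else "blammed"

    def reward(vote, fate):
        """Award 1 point when a blind vote agrees with the final outcome."""
        voted_low = vote < PASS_THRESHOLD
        agreed = (fate == "blammed") == voted_low
        return 1 if agreed else 0

    votes = [0, 0, 2, 5, 0]                        # mostly "not up to snuff"
    fate = judge(votes)                            # average 1.4 -> "blammed"
    print(fate, [reward(v, fate) for v in votes])  # blammed [1, 1, 0, 0, 1]

Under any rule of this shape, a 0 or a 5 is the safest bet, while the middling votes the entry mentions are the ones most likely to land on the losing side of the verdict.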

Real Life

  • A particularly interesting example of this trope occurs with brokerages, which have a quid pro quo relationship with the firms they're supposed to be rating. There's usually an informal understanding that if a brokerage advises its investors to sell a particular firm's assets, that firm will stop providing the brokerage with information and other privileges. So brokerages almost never give a firm a "sell" rating.
    • You can see a Four Point Scale in corporate credit ratings, where junk bonds and high-risk investments get a B rating while better investments get A, AA, AAA, and so on. In an ordinary education system, a B is a respectable grade and a C is a clear pass.
      • While there are independent rating agencies that are more honest, the big three (S&P, Moody's, and Fitch) all receive contributions and payments from the very companies they rate, and when the time comes to evaluate a company, the big three are generally the most listened-to voices. This incestuous relationship has been theorized to have greatly contributed to the 2008 Economic Meltdown.
      • It's worth noting that the lowest investment-grade (below which anything is "high yield" or "junk") is BBB-, and anything below AAA and above default can have a +/- modifier. Below BBB is BB, then B, then CCC, then CC, and a C rating is generally reserved for companies that are paying on time, but who have breached their collateral requirements and are in imminent danger of defaulting. A D is given if you actually default.
    • And now S&P has made the historic move of reducing the credit rating of the United States federal government from "triple A" to "double A plus." And if that sounds like some kind of joke... well, it is (see the Futurama example below). No word yet on whether the next grade down is plain "double A" or some increment like "double A plus minus."
      • AA+ is a completely valid rating that S&P was using long before giving it to the United States Government. Several other countries have AA+ rated sovereign debt, and corporations wear a AA+ rating as a badge of honor (it's EXTREMELY difficult for a corporation to obtain a AAA rating on its bond issues, as AAA implies no risk of default within the next 12 months). Thus, it would be perfectly respectable to have a AA+ rating but for the fact that it used to be AAA.
    • Related: Jim Cramer of Mad Money (featured in Arrested Development and Iron Man, as well as being an actual show) received a lot of flak from Jon Stewart of The Daily Show when it was revealed that he had recommended buys and holds on stocks and companies that were, days later, revealed to be financially and ethically bankrupt. Further investigation revealed that Cramer and some of his business partners had used his show to artificially run up the prices of stock they owned by encouraging buys and then selling, a bizarre pump-and-dump scheme he has never been prosecuted for.
      • This may explain the observed fact that you can indeed make money by following Cramer's stock picks... specifically, by selling them short the day after he touts them as a "buy" on his show.
  • eBay ratings, as parodied here in xkcd.
    • eBay has only a Positive/Neutral/Negative rating system, but it still skews very much toward positive. Some people leave neutral feedback for sellers when they really should leave negative. Part of this is because eBay doesn't allow anonymous feedback, and a few sellers would flip out and give the buyer retaliatory negative feedback.
      • The system itself actively discourages anything but positive feedback: to leave a neutral, the user must confirm that they have given the seller ample time, that they have tried to contact the seller about the problem, and that they understand what they're doing. That's more confirmation than signing up for the site requires.
      • Sellers are now not allowed to rate buyers at all, which leads to ratings extortion: a buyer can withhold positive feedback after receiving the item unless the seller refunds a portion of their money. It gets worse. eBay now has "Detailed Seller Ratings", nominally based on a five-point scale, but sellers receive a warning (possibly followed by the withdrawal of certain selling privileges) if any of their averages falls below 4.5. In effect, 4 out of 5 is a bad score, and it's actually better to receive no rating at all than anything less than a perfect 5/5 (a back-of-the-envelope sketch of the arithmetic appears after this list).
        • It's less a four-point scale and more that we tend to assume 5/5 should mean something above "doing the job properly". On eBay, 5/5 means that what you received was as advertised, arrived promptly, and had fair postage costs, and that there were generally no problems; anything below that means "there was a problem". Since eBay largely runs on trust, any of those things going wrong frequently means that maybe you can't trust the seller. And if you stop and think about it, how could an eBay seller earn a 5/5 on any other terms? Short of throwing in free stuff, or phoning to serenade you in a deep, soothing baritone when the order arrives, the absolute best they can do is send you your stuff.
  • By and large, the lowest average score you'll see for an album, book, or movie on Amazon.com is around four stars. Similarly, on a seller's page, if their rating is under 4 stars, either they are flatly taking people's money and making no effort to send the product, or (more likely) somebody is overreacting to something out of the seller's hands, like the post office losing the package, or to simply not liking the product. It gets especially egregious when a comment reads "Product was supposed to be like new but did not work at all" and still awards 3 out of 5.
  • The LSAT has a minimum score of 120 and a maximum of 180; the empty range below the minimum is twice the size of the scored range.
    • Somewhat related: the Dutch Cito test at the end of primary school, which partially determines what kind of secondary education a pupil can or will take, has a range of 500-550 (chosen specifically so that Cito results can't be misread as IQ scores). There, the empty range is ten times the size of the scored range.
  • In music festival ratings (mostly for high-school choirs and bands), there are theoretically five levels at which to rate a performance: 5 = Poor, 4 = Fair, 3 = Good, 2 = Excellent, 1 = Superior. Very few groups get a 4 or 5, and a 3 is what's given when something was terrible; the "Excellent" rating goes to groups ranging from merely acceptable to very good. It's partly meant to be encouraging, but judges also have to sign their rating forms, they're paid for the work, and they want to be invited back.
    • Reportedly, the only groups that ever get 4s or 5s are those entering a festival for the first time; most judges may simply consider a 4 to be below whatever they usually see.
  • Grading on a 100-point scale is genuinely difficult, yet many rating sites use one anyway, so even the best amateur critics tend to produce bimodal or trimodal score distributions.
  • The Crash of '07, anyone? Banks rated a lot of loans on a four-point scale.
  • Competitive high school debate organizations use a different scoring system for each event, but a particularly Egregious example of this trope can be seen in the Lincoln-Douglas event. Judges are asked to score competitors on a 30-point scale, but any score below 20 is reserved for extreme circumstances and requires a written justification from the judge. Basically, as long as a contestant gets up, says enough words to fill the time limit, and doesn't use any foul language, they get at least a 20/30.
  • A (mostly Southern) California thing: restaurants are given a letter grade based on health and safety standards, mostly just how clean the place is. While the rankings follow the usual A, B, C, D, F scheme, most restaurants have an A, and it's rare for a place to carry a B (even in food courts where its neighbors all have A's). People accordingly accuse the system of another kind of Rank Inflation, since an A carries no real information when everyone has one.
  • USDA beef grading. The grades most consumers ever have access to are, from lowest to highest, Select, Choice, and Prime. There are also five grades below those, from lowest to highest: Canner, Cutter, Utility, Commercial, and Standard. Kobe beef from Japan, however, causes Rank Inflation: it's so good it effectively has its own grade above Prime. Angus beef may be in a similar situation.
  • Ever watched the Olympics? Try the gymnastics events sometime. Despite the nominal 10-point scale, it's rare for any competitor to score below a 9.5. Rank Inflation is so bad that critical flaws (such as a gymnast actually tripping and falling on their face) cost only about a tenth of a point, and flaws viewers can't even distinguish cost a hundredth. Scores generally range from 9.7 to 9.9.
  • Telephone customer service personnel will occasionally ask you to rate their level of service on a scale of 1 to 10. If you answer 9 or below, they'll ask for specific reasons why you didn't give them a 10, and customers who can't or don't care to name specific flaws will usually amend their rating to 10. This makes a rating of 10 equivalent to acceptable service with no specific complaints, rather than to outstanding or beyond-expectations service.
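
The "Detailed Seller Ratings" arithmetic from the eBay entry above is worth spelling out. This is only a back-of-the-envelope sketch: the 4.5 warning threshold is the figure the entry reports, and the plain running mean below stands in for whatever exact windowing eBay actually applies:

    # How many 5-star ratings does it take to keep a seller's average at or
    # above the (reported) 4.5 warning threshold after one lower rating?
    WARNING_THRESHOLD = 4.5

    def fives_needed(rating):
        """Smallest number of 5s keeping the mean at or above the threshold
        alongside a single rating of the given value."""
        fives = 0
        while (5 * fives + rating) / (fives + 1) < WARNING_THRESHOLD:
            fives += 1
        return fives

    for r in range(1, 5):
        print(f"one {r}-star rating needs {fives_needed(r)} perfect five(s)")
    # one 1-star rating needs 7 perfect five(s)
    # one 2-star rating needs 5 perfect five(s)
    # one 3-star rating needs 3 perfect five(s)
    # one 4-star rating needs 1 perfect five(s)

Every 4 must be matched by a perfect 5 just to tread water, and a single 1-star costs seven of them, which is exactly why sellers treat anything below 5/5 as damage.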


Examples in Media

Video Games

  • In My First IGN Interview (from the IGF Pirate Kart), you get the option to do a practice interview with an IGN applicant, who then asks you to rate how well she did. Your choices are 10, 9, 8, or 7 out of 10, and if you pick 7 she gets as offended as if you had chosen 1 (an obvious poke at IGN's game rating system).


Web Comics

  • Dorkly, in this strip, has a game rated by UGN: "7.2: Literally unplayable".


Western Animation

  • Parodied in the TV show The Critic: Jay is told by his boss that his job is to "rate movies on a scale from good to excellent." Jay himself is an inversion: he dislikes pretty much everything, and the best score he ever gave a film was a 7 out of 10.
  • In Futurama, Dr. Wernstrom gives Dr. Farnsworth the lowest rating ever: "A, minus, MINUS!"
  • In an episode of The Simpsons, a journalist who travels around America reviewing destinations visits Springfield. He's repeatedly tricked and abused by the residents and storms off to give Springfield the lowest rating he's ever given anywhere... 6/10.
    • In another episode, Homer becomes a food critic. At first, being Homer, he gives everything an excellent review; his fellow critics eventually convince him to be crueler, but he still won't give anything lower than "seven thumbs up".
  1. For reference, with 32 teams in the NFL, that equates to telling all of them they are one of the top 100 players in the draft pool that year.