THE RATINGS SYSTEM

A Moronically Detailed Explanation

1. The Chieftain Level [everybody please read]

When I first designed the ratings system that is still in use on this site (and has never given me any really acute headaches, for that matter), it was supposed to operate on a few relatively simple principles. I never seriously thought of numbers as the chief way of expressing your attitude towards an album - the attitude is primarily expressed in the review itself. Roughly speaking, a number is far too strict and exact a way to summarize your feelings about a work of art, unlike words - which is why I always say that an overall rating of 12 is an overall rating of 13 on a good day and an overall rating of 11 on a bad day.

On the other hand, numerical (or letter, or "formula-like", i.e. "excellent", "very good", etc.) ratings serve as a good way of getting your preferences into some kind of systematic order. It's nice to have them around - you can make funny (and sometimes actually meaningful) statistical calculations, create best-of lists, etc., etc. In other words, numbers should not define your life, but they can make it more comfortable, to an extent.

One thing that always bothered me about "simple" ratings systems, though, was their obvious inability to be applied to every band on a "hierarchical" basis. Suppose, for instance, that out of the thousands and thousands of records you have heard, only a small handful approaches the "perfect ideal" for a record - quite a possible, in fact almost inevitable, situation. It would then turn out that out of these thousands of records, you would give the highest rating (10/10, A+, "perfect", etc.) only to a select few, probably made by some of your favourite bands. This would mean that a whole heap of other artists would never get this "perfect" rating, which would sort of complicate the task of indicating, in numerical form, which particular record was their peak.

A different solution, such as the one employed by Founding Father Mark Prindle, would be to simply give out a 10/10 to the band's peak, no matter what kind of band it is (to be honest, today not every band on Prindle's site has a 10/10, but I am referring to the original approach here, when every single band did receive a 10). This would lead to a situation where, say, both Madonna and The Beatles would receive a 10/10 rating for one of their albums, even if it is rather obvious - even for Mark, I think - that these are, well, different 10/10s.

Thus I came up with the idea of making two separate, although connected, sets of ratings. There would be a basic "record" rating on a standard 1 to 10 scale. There would also be a "band rating", applied to the band/artist in general, on a 1 to 5 (A to E) scale. The two, when combined, would yield an "overall rating", on a scale from 1 to 15 (technically speaking, the lowest overall rating is "2": band rating 1 (E) + record rating 1 = 2).

This gives us an easy possibility to distinguish between "artist peaks" and where these peaks actually are on the general scale of things. For instance: The Eagles are an E-class (1-star) band. Their best album is Eagles. Since it is their best album, it gets a record rating of 10. 10+1 = 11; overall rating is 11. Bob Dylan is an A-class (5-star) artist. His best album is Highway 61 Revisited. Since it is his best album, it gets a record rating of 10. 10+5 = 15; overall rating is 15. Thus, both Eagles and Highway 61 Revisited are "tens", meaning they represent the peaks of their creators; but on the overall scale, the first one is an "eleven", and the second one is a "fifteen", meaning that even the highest peak of the Eagles is still vastly inferior to the highest peak of Bob Dylan.
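For the numerically inclined, here is a minimal sketch of that arithmetic in Python, purely for illustration - the letter-to-number mapping (A = 5 ... E = 1) and the two worked examples come straight from the paragraph above, while the function and variable names are mine and not anything actually used on this site:

```python
# Band classes map to band ratings: A = 5 stars ... E = 1 star.
BAND_CLASS_VALUE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def overall_rating(band_class, record_rating):
    # Overall rating = band rating (1-5) + record rating (1-10); range 2-15.
    return BAND_CLASS_VALUE[band_class] + record_rating

print(overall_rating("E", 10))  # Eagles / Eagles -> 11
print(overall_rating("A", 10))  # Bob Dylan / Highway 61 Revisited -> 15
```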

2. The Warlord Level [only for those truly concerned]

So far, so good. But this situation seemingly leads to a minor/major glitch, which I have been accused of so many times it's not even funny any more. If I'm not mistaken, this glitch can be formulated as follows:

    Giving out two separate ratings for the band and the record is not a problem. But you are thoroughly wrong when you make out the overall rating by a simple process of adding the two.

Example 1: The Rolling Stones are a great, A-class (5-star)-worthy band. But suppose one day they make an absolute piece of crap, like Undercover or Dirty Work. This means they will have it rated higher than necessary simply because they are the Rolling Stones, and thus get an extra 5 points for every piece of shit they may record. In this way, a shitty Rolling Stones record may turn out to be better than a particularly great Allman Brothers record.

Example 2: Love are a mediocre, derivative, D-class (2-star)-worthy band. But suppose one day they make an absolute chef-d'oeuvre, like Forever Changes. This means they will have it rated lower than necessary simply because they are Love, and thus only get an extra 2 points for every piece of beauty they may record. In this way, a Love masterpiece may turn out to be worse than a particularly shitty Bob Dylan record.

Objections noted. Let us even disregard the fact that I have serious arguments for not considering Undercover and Dirty Work worthless pieces of garbage, as well as for not considering Forever Changes the timeless masterpiece it is often made out to be. Suppose I'm in full agreement here. What then?

The trick lies in establishing which rating is primary and which rating is secondary. If the band rating were given out beforehand, there would be no escaping this argument. But the band rating doesn't get conjured out of thin air. The band rating is derived from listening to these very same records, and that's the point people often forget when arguing with me on these issues. The Beatles and The Rolling Stones aren't judged worthy of 5 stars because of their hairstyles, or their longevity, or their commercial success - they're judged worthy of 5 stars because of the quality of their musical output. And primarily because of the quality of their best musical output.

In other words, when deciding upon the actual numbers, the primary rating - the one that's given out before everything else - is not the record one, nor the band one. How can I give out a band rating if I've only heard, say, one record by a band that has 15? How can I give out a record rating if I don't have any idea of what the band's actual peak might be, having only heard one record out of 15? No way. The primary rating that is being given out is the overall one, and you can see this clearly on pages for those artists whose output I'm unfamiliar with to a large extent. These pages, with one or two records reviewed, only offer overall ratings. It's not until I have acquired all the key records for a band/artist that I'm ready to offer a "band rating" and "record ratings" for all the albums.

Thus, when listening to a new record, the first step is to decide upon a particular number on a scale of 2 to 15. From there, the process is simple. The band's peak is the highest overall rating it gets. Once it becomes clear that no album by that band deserves a higher overall rating than the highest it has already received, you subtract ten from that rating and you get the "band rating".

Example: The Clash have Sandinista! as their highest overall rated record, with an overall rating of 13. Subtract 10 and you get a 3 - this is the "band rating" of the Clash (Class C). Subtract 3 from every "overall rating" on the Clash page and you get a system of "record ratings" for the Clash.

We can therefore say that, although formally it looks like the overall rating is calculated according to the formula "record rating + band rating", in reality it is the opposite: the record rating is calculated according to the formula "overall rating - band rating". The overall rating is the primary one, which is why my ratings page is only structured according to the overall ratings, and not to the record ones.
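To make the direction of the calculation explicit, here is a small Python sketch of the derivation just described - the Sandinista! figure of 13 comes from the example above, while the other two Clash overall ratings are purely hypothetical placeholders, as are the function and variable names:

```python
def derive_ratings(overall_ratings):
    # The overall ratings (2-15) are primary; the band rating is the peak
    # overall rating minus ten, and each record rating is the album's
    # overall rating minus the band rating.
    band_rating = max(overall_ratings.values()) - 10
    record_ratings = {album: overall - band_rating
                      for album, overall in overall_ratings.items()}
    return band_rating, record_ratings

# Sandinista!'s 13 is from the text; the other figures are made up for illustration.
clash = {"Sandinista!": 13, "London Calling": 12, "Cut The Crap": 7}
band_rating, record_ratings = derive_ratings(clash)
print(band_rating)     # 3 -> Class C
print(record_ratings)  # {'Sandinista!': 10, 'London Calling': 9, 'Cut The Crap': 4}
```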

3. The Prince Level [don't bother reading if you're not really anal about it]

This is, of course, not the end of the story. The immediate ensuing question is:

    Is it right to determine band ratings based on their peak level only?

Answer: possibly. Unless God himself comes down from the sky and establishes a system for rating bands and artists, any violation of which will lead to you burning in Hell for ten thousand years, people are free to choose their own criteria. In my understanding, bands should be judged primarily - if not exclusively - by their peak output.

Why? For a simple reason: quantity doesn't mean all that much. There have been plenty of artists, writers, and composers over the centuries who have garnered fame and recognition for only one or two works of art - which have nevertheless put them on equal footing with those artists, writers, and composers who have been amazingly prolific. The entire collected works of Cervantes Saavedra can occupy a whole shelf in your library, yet, unless they happen to be serious experts in Spanish literature, people only know and revere him for Don Quijote - and he has still entered the golden row of late Renaissance-era writers on pretty much the same terms as the far more prolific (in terms of writing universally acknowledged chefs-d'oeuvre, I mean) Shakespeare; at least, I don't often hear discussions on the "who's better?" subject in this particular case.

Continuing this analogy, I do not see any necessity to "penalize" bands for their bad output if they can counteract it with good output. I frankly do not care how many shitty albums a band like Jethro Tull has released over the last two decades; what I really care about is how great this band was at its peak in the early Seventies. When enough time has passed, nobody will remember Jethro Tull as the band that put out that piece of shit, Under Wraps, in 1984; everybody will remember Jethro Tull as the band that put out that masterpiece, Thick As A Brick, in 1972. (Assuming people will still remember anything at all, I mean.) In the same way, I do not care about the enormous quantity of crap that Count Leo Tolstoy wrote in his later years (believe me, he wrote a lot of it - and a huge pile of preachy, self-righteous, misogynistic bullshit it was) - all I care about is that this is the same Leo Tolstoy who made War And Peace.

This puts an end to the Love/Forever Changes side of the argument. Were I to decide that Forever Changes is a masterpiece for eternity, 14-worthy or something, I wouldn't hesitate to give Love a band rating of 4 (Class B) and leave it at that - even if it were the only Love album, or their only good album, followed by five hundred mediocre or even atrocious ones. It would simply mean that at least at one point in their career, Love were a magnificent band capable of producing a masterpiece, and deserve to be remembered for that one point in their career, disregarding everything else.

Essentially, it all comes down to the question: what the hell are these band ratings there for? My answer is simple: to attract people's attention to what I consider to be good/great bands. Suppose there were no band ratings at all - how would I manage to draw people's attention to those bands that I consider more important? On Mark Prindle's site, you have hundreds of different bands which would take you zillions of hours to sift through. And if I have no idea what these bands sound like beforehand, there's no way I can understand Mark's own priorities in many, many of these cases. With the system offered here, you can immediately see which bands I'm inviting people to check out first and foremost - even if these are bands with only one or two great albums to their names.

Suppose I penalized Jethro Tull for that lengthy streak of uninspired Eighties product. Could that lead to people dismissing the band because of their 2-star (Class D), rather than 3-star (Class C), status? Possibly. By accentuating the good and downplaying the bad, I can make more people interested in the band, not to mention that it's a somewhat nicer position in the overall moral sense. Nobody's forcing anybody to buy Rock Island anyway.

Is it morally right, though, to equate a band with 1 or 2 great albums to a band with 10 or 15 great albums? Well - see the Cervantes/Shakespeare analogy on that matter. Essentially, it depends on whether we're taking the bad into account. If we're not, I see no problem with that issue. If you still do, well - make yourself a different ratings system, that's what freedom of choice is for.

4. The King Level [only destined for a true giant of a man]

Fine, that's taken care of. But now what about the problem of Example 1 (overrating shitty stuff by great artists)?

So, let's do a little elementary math here. The lowest possible overall rating is 2 (band rating 1 + record rating 1). The lowest possible overall rating that a 5-star band can receive is 6 (band rating 5 + record rating 1). If "overall ratings" are indeed, as I say, primary, then what's to happen when a 5-star band records an album that I don't even see fit for an overall rating of 6? The whole system seems to be ruined.

First of all, for a few special cases I have introduced a record rating of zero (for instance, Genesis' Calling All Stations). This decreases the "unaccounted interval" by one point at least. But, of course, it's not really a true solution to the problem.

Where the true solution lies is in empirical reality. So far, although I have reviewed around a couple thousand albums and listened to quite a few more, I have not yet encountered such a situation. In other words, I have not yet heard a Rolling Stones album (or a Beatles album, or a Who album, or a Bob Dylan album) that would deserve less than an overall six. Mind you, even from other artists I have heard very few albums that would deserve less than an overall six - I think about a couple dozen at most, out of everything I've heard (and some of these ratings were given out quite some time ago - I seriously think I might have underrated a few records out there). An overall rating of less than six isn't given out to an album that's simply mediocre or boring or unoriginal or devoid of personality; it's given to an album that seriously, offensively violates the laws of good taste (the way I see them, of course), and very few artists have managed to offend me that way (Rod Stewart has a particular talent for it, though - but he's nowhere near a 5-star artist either).

In other words, on a purely logical/mathematical basis the system truly does not work where the lowest ranges are concerned. However, empirical experience has so far told me that nobody who has made an album worthy of an overall 15 could ever come up with one worthy of an overall 5. Again, if your empirical experience differs from mine, that's all right. Make your own system. The day I see the Rolling Stones coming up with an album as bad as Rod Stewart's Tonight I'm Yours, I'll seriously reevaluate my attitude towards the band. Yet I honestly believe such a day will never arrive - if it hasn't arrived in the past thirty years, it's way too late for it now.

Of course, there can be some complicated cases. For instance, a band might turn from "totally cool" to "absolute shit" if it basically becomes a different band - losing its main creative members and exchanging them for a bunch of hacks, retaining its name but radically changing its essence. This is quite often the case (ELO Part II, Bloodrock, etc.), yet, once again, so far it has never been a major problem, because these things only happen to "lesser" bands - none of the outfits that I rank as 5-star or 4-star ones have ever seriously suffered from it. And if the problem were to arise, I could easily solve it by issuing different band ratings to different "stages" of the band, because what's in a name? Nothing.

It's interesting to note, though, that much too often I've been asked the reverse question, referring to the inability of "lesser" bands to receive "higher" album ratings rather than the inability of "major" bands to receive "lower" ones, like: "Okay, The Beach Boys are a 3-star band, but surely Pet Sounds is the greatest album of all time, so how can it get a 13/15? Your system is flawed". My answer is simple: "No, it isn't, because - in my understanding - Pet Sounds is not the greatest album of all time, and only deserves 13/15. If I rated it 15/15, the Beach Boys would be a 5-star band". It's as if people thought I was seriously downgrading excellent albums not because I don't think they're perfect but because I've invented a stupid system that won't let me declare them perfect even if I myself think they are. That's putting the cart before the horse, if you get my meaning.

5. The Emperor Level [if you're still here, you've officially received the title of The Great Nerdmaster]

Here's one more question that, believe it or not, occasionally gets asked (not always in a polite manner):

    So, if you think it possible to rate artists based on their peak album, why not go further and rate them based on their peak song? If an artist with just one great album can be better than an artist with ten good albums, why can't we prolong this absurdity and say an artist with just one great song can be better than an artist with a thousand good songs?

The answer is essentially very simple: a song is not an album. If I were reviewing primarily Fifties and early Sixties single-oriented artists, this principle might actually be reasonable. The majority of my reviews, however, falls within the period when rock was album-oriented, and songs were organized in cycles, whether conceptual or not. It is, of course, possible to dissect any given album and discuss the merits of each and every individual song (and I very frequently do that), but much too often one has to resort to the old trusty maxim "more than the sum of its parts".

It is, of course, possible to argue that a set of albums by one artist should also be judged in the context of each other, and quite often I agree - especially in the case of artists like Frank Zappa, whose output was supposed to possess a certain amount of "continuity". But this is still a radically different matter. Albums are self-contained unities, sold apart from each other and not presupposing that you rush out and buy all the "prequels" and "sequels" if you do not feel like it.

Thus, I believe that positioning The Album (LP, CD) as rock music's "basic" unit when reviewing a huge conglomerate of said entities is much more reasonable than doing the same thing with The Song.

6. The Deity Level [don't ever come near me, you sick fuck!]

So what about these "general evaluations"? Where do they come from?

Answer: the "general evaluations" are more of a curious experiment than anything else. They were added to the system much later, after all the other principles had already been defined, which is why I'm also adding this section as the last one.

If there is one thing that doesn't quite fit into the system (and has evidently confused many a reader - at one point, I almost thought of removing the general evaluations altogether, but then I realized that would be taking myself too seriously), it IS the general evaluation, and I will be the first to admit it, as well as the first to add that it doesn't really bother me that much. It contradicts the principle that the band rating is determined according to the band's peak, because it seems to provide the band rating according to an overall assessment of the band's career.

Well - that IS the idea of the experiment: to see how well the "band rating based on the peak" and the "band rating based on an overall evaluation" agree with each other. Again, when making this overall evaluation, I usually concentrate on the 'main' output of the band, leaving out, for instance, the tail years of its career, but this can vary and is, indeed, very subjective. And anyway, how can you come up with a strict criterion for deciding whether the "overall listenability" of a band is a 2 or a 3, for instance? You can't.

So the five-pronged parameter system of the general evaluation is there to give the reader a few pointers - in a condensed, compact form - rather than to add to the numerological confusion. Using it, you can immediately see whether I consider the band to be diverse, or original, or emotionally resonant, and draw conclusions from there. I'd go as far as to say that the short lines of text after each of the numbers are much more important there than the numbers themselves.

In some cases, I actually manipulate these ratings, and I'm absolutely not ashamed of it. If, for instance, I'm in doubt over whether to give a 2 or a 3 for listenability, I give the number that accords better with the original band rating. Take the Ramones: I could have given them a 2 for originality (because they only had one truly original album), or a 3 for originality (because that one album was so tremendously original and influential). I chose 3, because that gave me an average of 3.6 = 4, agreeing with the rating of The Ramones (the debut album) as an overall 14. Was it "cheating"? Maybe it was, and maybe it wasn't. Who can really tell? I do not have a scientific method for rating records; I'm only using an instinctive approach to what I feel could some day turn into a scientific method.
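As a rough illustration of the kind of averaging involved, here is a Python sketch - the five individual parameter values for the Ramones are hypothetical placeholders; only the resulting 3.6, the rounding to 4, and the choice of 3 over 2 for originality come from the paragraph above:

```python
def general_evaluation(scores):
    # Average of the five general-evaluation parameters, each on a 1-5 scale.
    return sum(scores) / len(scores)

# Hypothetical Ramones parameter values, chosen here only so that they average to 3.6.
ramones = [4, 3, 4, 4, 3]
average = general_evaluation(ramones)
print(average)         # 3.6
print(round(average))  # 4 -> agrees with a band rating of 4 (Class B)
```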

To conclude: any number you see on this site, unless it refers to the year of an album's release or to the number of minutes in a given song, is highly relative. In this lengthy rant I have given every explanation I could about every possible use of these numbers, but even so, each of them has to be taken with a grain of salt. Remember - 'a 12 is a 13 on a good day, an 11 on a bad day'. The same applies to band ratings, record ratings, and general evaluation criteria. So if you're already weaving a huge red banner saying "PET SOUNDS IS A 14!" in gold letters, save your efforts. It is a 14. On a good day.
