Yes, this is another post about the 100-point scale. Why?
I ranted a few weeks ago about why I use the 100-point scale and I wanted to set the record straight–I am no fan of it, but I really do not see a better alternative. There were a couple of comments on last week’s post proclaiming that the problem was not so much with the scale per se, but rather with what Anatoli of Talk-a-Vino and Bill of Duff’s Wines described as, essentially, “score inflation.”
Given my background in teaching, I thought I would explore this concept of “score inflation” a bit more.
One of the reasons given for why the 100-point scale is so widely used is that it is easily understood since most (all?) people were themselves graded on a 100-point scale at some point during their education. People “get” what it means.
But do they really?
When thinking about how to approach this article, I looked for parallels between score inflation in wine and grade inflation in education, and there are many. As a former high school and college teacher, I thought it would be useful to first look at the academic side of the “problem”.
As I see it, there are essentially five reasons for grade inflation:
First: The scale itself. At the heart of the 100-point scale is the assumption that the scale is continually re-centered. The scale was created with the central tenet that a grade of 75 is “average,” that most students (68%, roughly two-thirds) would score between 65 and 85, that a few (13%, about one in eight) would get an A, and that a minuscule 2% (one in fifty) would receive the top grade. Well, that is all fine and dandy until you start putting it into practice.
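Those percentages fall straight out of a bell curve. A minimal sketch, assuming the classic parameters (mean of 75 and a standard deviation of 10, which is my assumption to match the figures above, not anything stated in the original scale), shows where the numbers come from:

```python
# The bell-curve arithmetic behind the 100-point grading scale.
# Assumes scores follow a normal distribution with mean 75 and
# standard deviation 10 (an illustrative assumption).
from math import erf, sqrt

MEAN, SD = 75.0, 10.0

def below(x: float) -> float:
    """Probability that a score falls below x under Normal(MEAN, SD)."""
    return 0.5 * (1 + erf((x - MEAN) / (SD * sqrt(2))))

average_band = below(85) - below(65)  # scores of 65-85, one SD either side
a_band       = below(95) - below(85)  # an "A", 85-95
top_grade    = 1 - below(95)          # the top grade, 95 and up

print(f"65-85: {average_band:.0%}")   # ~68%, roughly two-thirds
print(f"85-95: {a_band:.0%}")         # ~14%, about one in eight
print(f"95+:   {top_grade:.0%}")      # ~2%, about one in fifty
```

The catch, of course, is that these proportions only hold within the population the curve was fit to, which is exactly the problem described next.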
Imagine two different classes: one with relatively high achievers and the other with a bunch of dullards. A “C” in the class of brainiacs is very different from a “C” in the class of knuckle-draggers. In other words, the scale works fine within the confines of its own population, but comparing students across populations becomes difficult, if not impossible.
This led to teachers using the scale not to measure relative performance (how well students did compared to other students, i.e., what the scale is designed to do) but rather to measure absolute performance—grading students based on established norms of what they should know, which leads directly to the next reason….
Second: Established norms. If you asked 100 people what a high school graduate should know, you would be broaching a topic that has no end in this country. I doubt you would ever even get past what it means “to know,” much less into any real discussion of curricula. I could go on for days on this point, but instead….
Third: Better students. It can be argued that students have become smarter, or at least better prepared, when compared to students from previous eras. There are several reasons for this—improved studying techniques, better educated parents, more prevalent pre-school, a better understanding of nutrition, and so on. Basically, each successive generation benefits from the knowledge gained from its predecessors.
Fourth: Higher stakes. Many people would agree that attending college is imperative in today’s economy, and thus so is doing well in school (and performing well on standardized tests), so there is a greater emphasis on grades and test scores. A billion-dollar industry has sprung up, from test preparation to tutoring to you name it, all focused on getting kids into college.
Fifth: Politics. In education, many people have a vested interest in seeing kids do well in school and many (most?) of the reasons have nothing to do with the students. Principals and, increasingly, teachers want to keep their jobs; politicians want to elevate the stature of their district (and this happens on every level, all the way up to national).
Switching over to wine and the 100-point scale, here is how I see the parallels:
First: The scale itself. The limitations of the scale are just as striking with wine, perhaps more so. Wine critics are constantly evaluating wines across variety, region, and vintage, resulting in countless permutations. If a 1988 Austrian Grüner Veltliner, a 2006 Red Burgundy, and a 2012 Dry Creek Valley Zinfandel all receive a “90,” what does that mean?
Second: Established norms. There are no defined norms on what makes a “good” wine. Sure, there are some elements that are common among good and great wines, but not everyone agrees on the degree to which those elements need to be present. In addition, some might prefer bigger, more bombastic wines, while others fancy a more refined style–are any such opinions inherently “wrong” (other than those who like insipid Pinot Grigio, of course–they are clearly off their rocker)?
Third: Better wine. I firmly believe that the definition of “average” when it comes to wine has shifted–today’s average wines are better than average wines from 30 years ago. There have been countless improvements from viticulture to winemaking over the last several decades, and with climate change many European regions (at least for the time being) are producing better fruit and more consistent wines. Is an “average” wine today “better” than an “average” wine from 30 years ago? Are today’s 100-point wines “better” than 100-point wines from 30 years ago? I would say “maybe” to both. [Some would say that the process has actually gotten worse (more “science” and therefore less “art”), but that would be an argument for another rant.]
Fourth: Higher stakes. I have certainly heard stories about how the simple difference between a score of 89 and a 90 can be monumental for a winery. I have visited many wineries and met countless people in the industry, and it is impossible not to at least think about them when tasting their wines (I would even argue that it makes tasting the wine more enjoyable). I can only imagine that the same happens with the more “established” wine critics–do they take that into consideration? You can’t convince me that they don’t. This is why, in my opinion, you rarely see scores below 85 in print or on blogs–there is far too much good wine to write about, no need to tear anyone down. And I did not even mention the billion-dollar industry set up to help sell these wines (I guess I did now, though).
Fifth: Politics. Might there be a reason (or several) for a wine writer not to publish his honest opinions on a wine? Are you kidding? Of course there are.
What are your thoughts on “score inflation”? Are there other reasons that I missed?
I would love to hear your comments, as always!