I started this little blog almost three years ago to the day (my actual blogiversary is 1/16/12). Since then, I have been to a total of two Wine Bloggers Conferences, I have read and “liked” countless other blogs, and, with the help of many others, we have tried to broaden the “Wine Blogging Community” (whatever that means) through the Monthly Wine Writing Challenge and the Secret Wino “programs” that we have created. But when I started doing a little research for this post…
…I had no idea…
… of how many wine blogs there are.
There have been attempts to catalogue the wine blogs out there, the most comprehensive of which is certainly the list compiled on Vinography, where over 700 blogs are currently “registered”, and those are just the English-language blogs. There are no doubt more; I remember sending an email to Alder Yarrow, Mr. Vinography himself, a couple of years ago to get this blog included on the list.
Why do I care?
A few weeks ago, I exchanged a few emails with another blogger, at the end of which he asked when I was going to stop using the 100-point scale in my wine reviews. This was not the first time I had encountered the anti-100-point bias. A few months ago, I was interviewed by Jameson Fink, one of my favorite wine bloggers, for the Grape Collective. He suggested that since I fancy myself a bit of a statistician, I should create an alternative to the 100-point scale.
Why would I want to do that?
I am not here to say that the 100-point scale for rating wine is “perfect”, but it does serve a purpose: it gives readers a relative assessment of a product they would otherwise have no way to evaluate without purchasing it themselves. And it is easy to understand.
So this week, I did a bit of research. I looked at a ton of blogs: some that I read on a regular basis, others that I did not even know existed. What did I find? Of the sites that reviewed wine, over half used some sort of numerical score to rate the wine, and exactly half of those used the 100-point scale. The other numerical raters used a scale that translates fairly easily into the 100-point scale: a 10-point scale with decimal increments (giving a wine 9.3 points out of 10 is the 100-point scale, folks), a 20-point scale (the aforementioned Vinography uses a 10-point scale with half-point increments, which is really a 20-point scale), or a 10-point scale (usually claimed to be a 5-point scale, but introducing half-points makes it a 10-point scale).
There are others that use some quasi-numeric scale: some use a 5-star scale (which is really a 10-point scale), while others use the scholastic grading scale (A, B, C, etc.). That last one, by the way, is almost precisely the same as the method I use; I just use numbers to convey the exact same thing.
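The arithmetic behind these conversions is simple enough to sketch. Here is a minimal illustration; the function names and example scores are my own made-up examples, not any particular blogger's system:

```python
# Illustrative sketch of the scale conversions above; the function
# names and example scores are made up for demonstration only.

def points_out_of_10_to_100(score):
    """A score out of 10 with decimal increments *is* the 100-point
    scale in disguise: 9.3/10 reads the same as 93/100."""
    return round(score * 10, 1)

def effective_scale(stated_max, step=0.5):
    """A stated_max-point scale judged in half-point increments
    really has stated_max/step distinct steps above zero."""
    return int(stated_max / step)

print(points_out_of_10_to_100(9.3))  # 93.0
print(effective_scale(5))            # 10: a "5-point" scale with half-points
print(effective_scale(10))           # 20: ten points scored by halves
```

In other words, the granularity of the scale, not its stated maximum, is what distinguishes these systems, and most of them collapse into the same band of the 100-point scale.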
There was one blogger, Jon Thorsen, the Reverse Wine Snob, who actually used a bit of math to arrive at his rating. Jon uses a linear formula to create a QPR (Quality-Price Ratio) rating: his final score is weighted 75% quality and 25% “value”.
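I don't know the details of Jon's arithmetic beyond the 75/25 split, but a weighted blend of that shape is a one-liner. The function name and the example scores below are hypothetical:

```python
def qpr_score(quality, value):
    """Blend a quality score with a price-based 'value' score,
    weighted 75% quality / 25% value. Both inputs are assumed to be
    on the same 100-point scale; this is a sketch, not Jon's actual code."""
    return 0.75 * quality + 0.25 * value

# Hypothetical example: a wine that drinks like a 90 but is priced
# such that its value score is only 70.
print(qpr_score(90, 70))  # 85.0
```

The appeal of a weighted blend like this is that price discipline pulls the headline number down without erasing the quality assessment.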
A few (I was surprised how few there were, actually) have devised their own scale, the most common being “Recommended”, “Highly Recommended”, and so on. There were a few more clever derivations of this theme (Loie at Cheap Wine Curious rates her wines from “Blech!” to “Case Worthy!”), but they were essentially the same.
Last, about a third of the blogs that I researched wrote wine reviews but did not explicitly “rate” the wine on any type of evident scale. These writers, through their prose, generally indicated their impressions of the wine and left it to the reader to draw any conclusions about the relative quality of the wine.
So why did I conduct this “research”?
Another good question.
No, despite popular opinion, I do not have unlimited leisure time to peruse scores of wine blogs. Rather, I was hoping to discover why there is such disdain for the 100-point scale. It seems to be rooted in the animus that many feel toward the “Father of Modern Wine Ratings” (I just made that up), Robert Parker, Jr., who did not “invent” the 100-point scale but certainly popularized its attachment to wine reviews.
I get most of the criticism:
- It is impossible to affix a numerical value to wine.
- The scores (even among the same critic) can be widely variable.
- The scale has essentially been reduced to an 85-95 point range.
- And perhaps more….
But I still don’t get it. Any review, whether or not a scale is attached to it, is subjective. There is simply no way (yet) to get around the human element when it comes to evaluating wine.
Isn’t the goal to let people know what you think about the wine? If I do that using numbers between 80 and 100, is that significantly different from a 10-, 20-, or 30-point scale? So why all the fuss about the 100-point scale? I just do not understand how it is categorically worse than “8+/10”, “3 Stars”, “A-”, or “Recommended Plus”.
What am I missing?