I noted in an earlier post that wine retailers often cherry-pick the critics' quality scores that they quote when advertising their wines (Laube versus Suckling — their scores differ, but what does that mean for us?). They can do this because the quality scores from different critics often do not agree, and it is to the retailer's pecuniary advantage for any given wine to appear as good as possible.
However, not all wine advertising is like this. For example, Wine Rack Media, a company offering services to online wine retailers, makes available the scores (for any given wine) from a range of critics, and these scores are sometimes quoted in full by retailers. Both Fisher's Discount Liquor Barn and Incredible Wine Store, for instance, have web pages "powered by Wine Rack Media", and both use that company's wine database.
Let's take a specific example of a well-known wine, to see what information we get: Caymus 'Special Selection' Cabernet Sauvignon. Checking online, we are given wine descriptions (including quality scores) for most of the vintages from 1978 to 2005, many of them from multiple sources: Wine Spectator, Wine Enthusiast, Wine and Spirits, Stephen Tanzer, Connoisseurs' Guide to California Wine, and Vintage Tastings.
What is more interesting, though, is that we are often given multiple scores from each of these sources, rather than a single score. Most reputable sources of wine-quality scores will have tasted the wines on several occasions, often with different tasters, so that multiple scores are available. Most wine retailers report only the highest of these scores for each wine (i.e. a cherry-picked score), but that is not the case in this example.
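To make the cherry-picking concrete, here is a minimal Python sketch of what such a retailer is doing: pool the scores for each vintage, and quote only the maximum. The numbers are invented placeholders, not actual critics' scores.

```python
# Hypothetical scores for a few vintages of one wine. The numbers
# are invented placeholders, not actual critics' scores.
scores = {
    1994: [91, 95],
    1995: [82, 98],
    1996: [88, 97],
}

# A cherry-picking retailer quotes only the highest score per vintage,
# so every vintage suddenly looks like a 95+ point wine.
cherry_picked = {vintage: max(s) for vintage, s in scores.items()}
print(cherry_picked)  # {1994: 95, 1995: 98, 1996: 97}
```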
The first graph shows all of the quality scores we are given from the Wine Spectator magazine, with the vintages listed horizontally and the scores vertically. Where there are multiple scores for a particular vintage, I have connected them with a line.
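For anyone who wants to draw the same sort of graph from their own data, a short matplotlib sketch might look like this (the vintages and scores below are placeholders, not the actual Wine Spectator values):

```python
import matplotlib.pyplot as plt

# Placeholder data: vintage -> list of quality scores from one source.
scores = {
    1990: [92],
    1991: [90, 94],
    1992: [82, 98],
    1993: [88],
    1994: [89, 96],
}

fig, ax = plt.subplots()
for vintage, vals in scores.items():
    # Plot each score as a point; repeated scores for the same
    # vintage are joined by a vertical line segment.
    ax.plot([vintage] * len(vals), vals, "o-", color="tab:blue")

ax.set_xlabel("Vintage")
ax.set_ylabel("Quality score")
plt.show()
```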
As you can see, the repeated scores tend to differ from each other by at least a couple of points, although one pair differs by 16 points (82 versus 98), and others differ by 9 or 10 points. Clearly, cherry-picking a score would be very effective here. Indeed, every score below 90 points is paired with a much higher one: we could, if we were so inclined, easily claim that the Caymus Cabernet is consistently better than a 90-point wine!
For comparison, the second graph shows some of the other quality scores, as well.
Note that the Tanzer scores are almost always lower than the other scores for the same vintage, and the Connoisseurs' Guide scores are usually lower as well. Indeed, the only vintages for which there is good agreement are 2000 and 2004, where all of the scores are within 1 point of each other.
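That notion of "agreement" can be quantified the same way: compute the spread of the pooled scores within each vintage, and flag the vintages where all of the critics fall within a point of each other. Again, the numbers below are invented for illustration:

```python
# Hypothetical pooled scores from several critics, per vintage.
pooled = {
    2000: [94, 94, 95],   # spread of 1 point: good agreement
    2001: [85, 92, 96],   # spread of 11 points: poor agreement
    2004: [93, 93, 94],   # spread of 1 point: good agreement
}

for vintage, vals in sorted(pooled.items()):
    spread = max(vals) - min(vals)
    verdict = "agreement" if spread <= 1 else "disagreement"
    print(f"{vintage}: spread {spread} points -> {verdict}")
```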
With this sort of presentation of the quality scores, we can see at a glance where we stand with regard to the variability among the critics' scores. If only all wine retailing were this honest about the multitude of data available.