For your reading pleasure, at the bottom of this post I have included links to a selection of online commentaries (mostly negative) about this issue. The principal objections seem to be one or more of these:
- They are broad generalizations — they do not account for within-region variation in quality
- The ratings oversimplify — there is also between-vineyard variation within local areas
- There is no recognition of site selection — there is even within-vineyard variation
- Modern wine-making (along with global warming) produces reasonably consistent quality, so that vintage variation mainly concerns quantity, instead
- Do the charts rate wine longevity or drinkability?
- Vintage variation influences style, but not necessarily quality
- Charts produced by different people inevitably differ, often strongly disagreeing
- Do wine drinkers actually prefer highly rated vintages?
Different people, different charts
Most of the well-known wine magazines produce vintage charts, which are available online. The first graph below compares two of these charts for the vintages from 2000-2011. The dots represent the vintage scores from the Wine Advocate (vertically) and the Wine Enthusiast (horizontally) pooled for the following Italian regions: Barolo, Barbaresco, Brunello di Montalcino, and Chianti. If the two magazines gave each vintage the same score, then the dots would all be along the pink line.
As you can see, there is a great deal of disagreement between these two charts, as only four of the dots are actually on the line, and another five differ by 1 point. But more importantly, the eight Wine Enthusiast scores between 80 and 88 form two clusters of quality scores as far as the Wine Advocate is concerned, with four of the vintages scoring much lower (74-77) than the other four (89-93).
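For those who like to check this sort of thing themselves, the agreement count described above takes only a few lines of Python. The scores below are invented for illustration; they are not the actual Wine Advocate or Wine Enthusiast values:

```python
# Hypothetical vintage scores (0-100) for the same twelve vintages,
# from two imaginary charts; real charts disagree in just this way.
chart_a = {2000: 95, 2001: 90, 2002: 80, 2003: 88, 2004: 89,
           2005: 94, 2006: 91, 2007: 84, 2008: 86, 2009: 93,
           2010: 95, 2011: 87}
chart_b = {2000: 95, 2001: 89, 2002: 74, 2003: 88, 2004: 90,
           2005: 94, 2006: 90, 2007: 77, 2008: 85, 2009: 94,
           2010: 95, 2011: 83}

def agreement(a, b):
    """Count vintages scored identically, and those differing by one point."""
    same = sum(1 for v in a if a[v] == b[v])
    close = sum(1 for v in a if abs(a[v] - b[v]) == 1)
    return same, close

same, close = agreement(chart_a, chart_b)
print(same, close)  # prints: 4 5
```

With these made-up numbers, four vintages land on the pink line and five more differ by a single point, which is exactly the kind of tally reported above.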
As an alternative example, Jancis Robinson has organized some blind tastings of the red Bordeaux vintages from this century (C21 Bordeaux vintages - a ranking). During the tastings in 2015 and 2016, the attending wine professionals were "asked to rank the last 13 vintages in qualitative order." We thus have a total of 18 (2016) and 15 (2015) rankings for the same 12 vintages (2000-2011). These are compared in the next graph, where each dot represents a single vintage, located according to the sum of ranks from 2015 (horizontally) and 2016 (vertically). Note that a smaller rank indicates a "better" vintage.
There is obviously a lot of agreement here. However, there are four vintages in the middle of the graph that had very similar ranks in 2015 but split into two very different groups in 2016, so that two of the dots lie a long way below the line. That is, the 2006 and 2008 vintages were evaluated similarly in the two tastings, but the 2003 and 2004 vintages dropped significantly in the ranking between 2015 and 2016.
Andrew Jefford has a comment on these rankings at Decanter (Kicking the hell out of Bordeaux 2011).
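The overall agreement between two such rankings can be summarized with Spearman's rank correlation coefficient. Here is a minimal sketch, using invented ranks rather than the actual tasting results:

```python
# Hypothetical orderings of twelve vintages from two tastings (1 = best);
# the years and ranks are illustrative, not Robinson's data.
ranks_2015 = {2000: 5, 2001: 9, 2002: 12, 2003: 6, 2004: 7, 2005: 1,
              2006: 8, 2007: 11, 2008: 4, 2009: 2, 2010: 3, 2011: 10}
ranks_2016 = {2000: 5, 2001: 9, 2002: 12, 2003: 10, 2004: 11, 2005: 1,
              2006: 7, 2007: 8, 2008: 4, 2009: 2, 2010: 3, 2011: 6}

def spearman_rho(a, b):
    """Spearman rank correlation for two tie-free rankings of the same items."""
    n = len(a)
    d2 = sum((a[v] - b[v]) ** 2 for v in a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(round(spearman_rho(ranks_2015, ranks_2016), 3))  # prints: 0.797
```

A coefficient near 1 means near-identical orderings; the two vintages that "dropped" between tastings are what pulls this hypothetical value down from 1.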
Wine vintage charts must apply to specified wine-making regions, with a score for each vintage in each region. Unfortunately, these regions are often unconscionably large, so that a single number cannot possibly describe the wine quality across the whole region. While countries like France, Spain and Italy usually get divided into several wine-making regions, even somewhere as large as California sometimes gets treated as a single region.
However, to me, the classic example of silliness is trying to treat an entire continent like Australia as a single region, or even "south-eastern Australia". The following maps compare the size of Australia to both Europe (minus Scandinavia and the Baltic states) and the USA. As you can see, even south-eastern Australia is as large as Spain + Portugal, or California + Oregon + Washington. Moreover, the variation in wine-growing climates throughout south-eastern Australia is at least as large as any of these other conglomerations.
Traditional wine-making regions sometimes get subdivided, on the grounds that the within-region climate variation produces different wines. Thus, Bordeaux red wine is sometimes divided into the Right Bank (Saint Emilion and Pomerol) and the Left Bank (the Médoc).
The next graph compares the vintage rankings for these two Banks, from the Wine Cellar Insider, for the vintages from 1982-2014. Each dot represents a single vintage, located according to its rank for the Left Bank (horizontally) and the Right Bank (vertically). Once again, smaller ranks indicate "better" vintages; and if a vintage had the same rank for both Banks then its dot would lie along the pink line. Not all vintages made it into the rankings (i.e. some were not considered good enough to be worth ranking).
While there is some consistency in the rankings, there are many anomalies, where the two Banks had very different qualities in the same vintage. In particular, there are six vintages (shown as red dots) where a vintage made it into the ranking for one Bank but not the other.
The Global Wine Score blog has a similar analysis for these two Banks (Bordeaux 2015 vintage: Right Or Left Bank?).
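The red-dot situation (a vintage ranked for one Bank but not the other) is simply the symmetric difference of the two sets of ranked vintages. A small sketch, with made-up lists of which vintages each ranking happens to include:

```python
# Illustrative (invented) sets of vintages that made it into each Bank's
# ranking; a vintage can be ranked on one Bank and omitted on the other.
left_bank = {1982, 1985, 1986, 1989, 1990, 1995, 1996, 2000, 2005, 2009, 2010}
right_bank = {1982, 1985, 1989, 1990, 1998, 2000, 2001, 2003, 2005, 2009, 2010}

# Symmetric difference: vintages appearing in exactly one of the rankings.
one_bank_only = sorted(left_bank ^ right_bank)
print(one_bank_only)  # prints: [1986, 1995, 1996, 1998, 2001, 2003]
```

With these invented sets there are six such vintages, matching the count of red dots in the graph.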
Modern consistency of vintage quality
I have published several recent blog posts that illustrate the changing nature of vintage scores over the past 25 years (Two centuries of Bordeaux vintages — Tastet & Lawton; A century of Barolo vintages — Fontanafredda; More than a century of Barolo vintages — Marchesi di Barolo). The bottom line is that the scores have increased during that time, as well as becoming less variable from year to year.
A similar point is made for Australian vintages in this paper:
V.O. Sadras, C.J. Soar, P.R. Petrie (2007) Quantification of time trends in vintage scores and their variability for major wine regions of Australia. Australian Journal of Grape and Wine Research 13:117-123.
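The two claims here (rising scores, and shrinking year-to-year variation) can each be checked with elementary statistics: a least-squares slope for the trend, and the standard deviation of early versus late years for the variability. The scores below are invented to mimic the pattern, not taken from any of the charts or papers:

```python
import statistics

# Invented annual scores for 25 vintages (1990-2014) that rise and become
# steadier over time, mimicking the pattern described in the text.
years = list(range(1990, 2015))
scores = [78, 92, 75, 88, 70, 90, 74, 86, 80, 91,
          84, 88, 82, 90, 85, 92, 86, 91, 88, 93,
          90, 92, 91, 94, 93]

def ols_slope(x, y):
    """Least-squares slope of y on x (score points per year)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

trend = ols_slope(years, scores)          # points gained per year
early_sd = statistics.stdev(scores[:10])  # 1990-1999
late_sd = statistics.stdev(scores[-10:])  # 2005-2014
print(f"trend: {trend:+.2f}/yr, sd: {early_sd:.1f} -> {late_sd:.1f}")
```

A positive slope together with a smaller late-period standard deviation is the signature of "better and more consistent" vintages.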
Preference for high versus low vintage ratings
This issue of whether wine drinkers actually prefer the vintages recommended by the wine charts is addressed in another published article:
Roman L. Weil (2001) Parker v. Prial: the death of the vintage chart. Chance 14(4):27-31.
A free copy is available here.
The paper discusses an empirical test of the claim (specifically by Frank Prial; see the link below) that the modern vintage chart is redundant. The author got many people to do tastings of paired wines, one from a good vintage as decreed by the Wine Advocate chart and one from a poor vintage; and his conclusion is:
"The 240 wine drinkers on whom I've systematically tested Prial's hypothesis cannot distinguish between wines of good and bad vintages, except for Bordeaux, and even when they can distinguish, their preferences and the chart's do not match better than a random process would imply."
In other words, a high vintage score in a chart is no guarantee that you will actually like the wines.
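Weil's basic question (can the tasters beat a coin flip?) is a job for a simple sign test. Here is a sketch of that kind of calculation, using a hypothetical preference count rather than Weil's actual data:

```python
from math import comb

def two_sided_sign_test(successes, n, p=0.5):
    """Two-sided binomial p-value: how surprising is this preference split
    if each of n tasters were guessing with probability p?"""
    k = max(successes, n - successes)
    tail = sum(comb(n, i) for i in range(k, n + 1)) * p ** n
    return min(1.0, 2 * tail)

# Suppose (hypothetically) 130 of 240 drinkers preferred the "good" vintage
# of their pair; that split is well within what coin-flipping produces.
p_value = two_sided_sign_test(130, 240)
print(round(p_value, 3))
```

A p-value well above 0.05 means the preference split is indistinguishable from random guessing, which is the essence of Weil's conclusion.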
The commentaries
- Frank J. Prial, The New York Times: So who needs vintage charts?
- Paul Gregutt, The Seattle Times: Rating vintage ratings; not high
- Paul Kaan, Filthy Good Vino blog: Using a vintage chart to pick wines sucks … here's a better way!
- W. Blake Gray, The Gray Report blog: Vintage charts for California are worthless
- Dan Berger, Vintage Experiences newsletter: Vintage chart fallacies
- Richard Hemming, Jancis Robinson blog: Are official vintage charts meaningless?