Monday, 19 February 2018

Wine-quality scores for premium wines are not consistent through time

When dealing with professional wine-quality scores, the usual attitude seems to be: "one wine, one score". We have all seen wine retailers where, for each wine, only one quality score is advertised from each well-known wine critic or magazine. This is often either the most recent score that has been provided, or it is the highest score that has been given to that particular wine.

However, we all know that this is overly simplistic. The score assigned to a wine by any given taster can vary through time for one or more of several reasons, including: bottle variation, tasting conditions, personal vagaries, and the age of the wine. So, one score is actually of little practical use, even though that is usually all we get from retailers.

The point about the age of the wine is of particular interest to wine lovers, since there is a perception that premium wines should increase in quality through time (that's why we cellar the wine), before descending slowly to a mature old age (the wine, as well as us). It is therefore of interest to find out whether this is actually so. When wine critics repeatedly taste the same vintage of the same wine, do their assigned quality scores show any particular pattern through time? Or do they correctly assess the wine when it is young, so that it continues to get the same score as it matures?

This turns out not to be an easy question to answer, because in very few cases do critics taste a single wine often enough for us to get a worthwhile answer; and even when they do repeat their tastings, they do not always publish all of the results. I have previously looked at the issue of repeated tastings by comparing pairs of tastings for several wines (Are the quality scores from repeat tastings correlated?), but I have not looked at single wines through time.

Some data

So, I searched around, and found as many examples as I could of situations where a single critic has publicized scores for the same wine (single winery and vintage) at least six different times since 2003. I got my data from CellarTracker, WineSearcher and 90Plus Wines (as described in a previous post).

It turns out that very few people have provided quality scores for more than five repeats of any one wine (who can afford to?). It also turns out that the most likely place to find such scores is among the icon wines from the top Bordeaux châteaux. The critics I found are: Jeff Leve (27 wines), Richard Jennings (3 wines), Jancis Robinson (2 wines) and Jean-Marc Quarin (1 wine).

The graphs are tucked away at the bottom of this post, and I will simply summarize here what they show. They all show roughly the same thing: a lot of variation in scores through time, with a spread of points for any one wine never being less than 2; and the scores generally show a slight decrease through time.
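The summary statistic behind this description is simply the per-wine spread (range) of the repeated scores. As a minimal sketch of that calculation, using invented scores rather than the critics' actual numbers:

```python
# Illustrative only: hypothetical repeat-tasting scores for two wines.
# The wine names are real, but these score lists are made up.
scores = {
    "Latour 1990": [96, 98, 97, 95, 98, 96],
    "Margaux 2000": [98, 96, 99, 95, 94, 96],
}

for wine, s in scores.items():
    spread = max(s) - min(s)  # range of scores across repeated tastings
    print(f"{wine}: spread = {spread} points over {len(s)} tastings")
```

With these stand-in numbers, the spreads come out as 3 and 5 points, which is the kind of variation described below.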

The first four graphs are from Jeff Leve (at the Wine Cellar Insider). The first graph is for seven vintages of Château Latour. The scores generally stay within 2-3 points for each wine; and only the 1990 could be considered to show any sort of increase in score through time. The second graph is for Château Lafite-Rothschild, Château Mouton-Rothschild and Pétrus — the first two generally stay within 2 points, but the latter is all over the place. The third graph covers seven vintages of Château Margaux, which rarely stay within 2 points, and the 2000 vintage shows a strong decrease in score through time. The fourth graph covers nine vintages of Château Haut-Brion. The scores often do not stay within 2 points, especially for the 1961 vintage; and only the 1998 vintage increases slightly through time.

The fifth graph is for Richard Jennings (from RJ on Wine). All three of the vintages covered show a decrease in score through time. Finally, the sixth graph shows a couple of wines of Château Latour from Jancis Robinson and one from Jean-Marc Quarin, both of whom use a 20-point quality scale. Their scores range by at least 2 points per wine; and Quarin's wine strongly decreases in score through time.


I think that it might be stretching a point to claim that any of these wines show a consistent score through time — they go up and down by at least 2 points, and often more. We certainly can't claim that the scores increase with repeated tastings — if anything, the general trend is more often downwards.

There are a couple of possible explanations for this variation, in addition to the obvious one that the critics don't have much idea what they are doing.

The classic explanation is "bottle variation" (rather than "taster variation"). For example, Robert Parker once wrote (Wine Advocate #205, March 2013): "I had this wine four different times, rating it between 88 and 92, so some bottle variation (or was it my palate?) seems at play." Parker's results would fit perfectly into the graphs below. As confirmation of this point, the widely reported 2010 results of the Australian Wine Research Institute’s Closure Trial certainly indicated a very large amount of bottle variation for cork-closed bottles (see Wine Spectator, Wine Lovers).

If this is the explanation, then the consistently erratic nature of the results, and the expected high quality of the wines, does make me wonder about the advisability of buying expensive wines. Huge bottle variation for cheap wines might be expected, but cannot be acceptable for the supposedly good stuff, even if only for financial reasons. This topic is discussed in more detail by, among others, Wilfred van Gorp, Jamie Goode, and Bear Dalton.

At the extreme, bottle variation can refer to flawed wines, of course. In the graph for Richard Jennings, one of the scores for Château Haut-Brion is missing, because he scored it as "flawed". Indeed, he did this for 3 of the 188 Grand Cru wines for which he provided scores (1.6%). James Laube estimates the rate of flawed wine as 3-4%. The other tasters may also have encountered flawed wines, but not reported this, as recently discussed by Oliver Styles.

Another point is the extent to which the tasters may have taken into account how old the wine was at the time they tasted it. If the wines are not tasted blind, then this remains a serious question mark over the quality scores assigned.

Anyway, there is certainly a lot of leeway for retailers to select the score(s) they report on their shelf talkers and web pages. The Wine Searcher database addresses this issue by simply reporting the most recent score available.


Jeff Leve:

Jeff Leve's scores for Château Latour

Jeff Leve's scores for the Rothschilds and Pétrus

Jeff Leve's scores for Château Margaux

Jeff Leve's scores for Château Haut-Brion

Richard Jennings:

Richard Jennings' scores

Jancis Robinson and Jean-Marc Quarin:

Scores from Jancis Robinson and Jean-Marc Quarin

Monday, 12 February 2018

California grapes: quantity versus quality

Grape production is a balancing act between quantity and quality — producing a greater quantity is usually assumed to result in a reduction in quality. Therefore, attempts by grape growers to increase grape quality are usually associated with trying to decrease quantity. The relationship works both ways.

It is therefore of interest to look at the big picture of this supposed relationship. This quality/quantity relationship is usually investigated only at the micro level — for example, individual growers might decide to increase their grape quality by thinning their crop. But what happens at the macro level, across all growers? This question seems rarely to have been asked.

One simple way to start looking at this topic is to compare the production area of particular grape types with the amount of fruit they produce. We might anticipate that the highest quality varieties produce less fruit than do lower quality varieties. This is a simplistic approach, of course, because there are many factors that affect fruit production, most notably the weather; but if we restrict ourselves to a particular viticultural area, then it might be a useful place to start.

So, I decided to look at the California grape data provided by the United States Department of Agriculture (USDA). The latest report is from April 2017, which shows the acreage of productive vines (in each US state), for both red varieties and white varieties. I then compared these data to the data for the 2016 California grape crush provided by the American Association of Wine Economists (AAWE), for the top reds and top whites.

The data are shown in the two graphs, one for each type of grape. Within each graph, each point represents a single grape variety in California, showing its bearing acreage horizontally and its grape crush vertically. The lines on the graphs are best-fit linear regressions, illustrating the "average" production expected from each variety based on its acreage. In both cases the lines fit the data quite well, explaining c. 85% of the variation in the data.

California red grapes by area and crush

The first graph shows the data for the red varieties, where Cabernet sauvignon is by far the most widely planted grape variety, as well as the one most highly esteemed by winemakers. I therefore calculated the regression line, as shown, without including this variety, so that the line is fitted only to the other varieties — this then tells us what production to "expect" from Cabernet, based on the observed data for the other varieties. We can see that, as anticipated for the top variety, Cabernet sauvignon produces a much smaller crop than do the other varieties.
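The "expected" production for Cabernet can be sketched as follows: fit a straight line to all of the other varieties, then read off the line's prediction at Cabernet's acreage. The acreage and crush figures below are invented placeholders, not the actual USDA/AAWE data:

```python
# Sketch of fitting the regression WITHOUT the top variety, then using
# the fitted line to get an expectation for it. Numbers are invented.
import numpy as np

acreage = np.array([20_000, 45_000, 71_000, 44_000, 19_000, 90_000])  # bearing acres
crush   = np.array([110_000, 260_000, 400_000, 250_000, 100_000, 380_000])  # tons

# Index 5 stands in for Cabernet sauvignon; exclude it from the fit.
mask = np.arange(len(acreage)) != 5
slope, intercept = np.polyfit(acreage[mask], crush[mask], 1)

expected = slope * acreage[5] + intercept
print(f"expected crush: {expected:,.0f} tons vs observed {crush[5]:,} tons")
```

With these placeholder figures the fitted line predicts roughly 513,000 tons against an observed 380,000, i.e. the excluded variety crops well below expectation, which is the pattern the graph shows for Cabernet.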

Interestingly, both Zinfandel and Rubired (labeled on the graph) produce a larger grape quantity than we might expect from their acreage, whereas all of the other varieties are close to their expectation. This is notable because Zinfandel is the second most widely planted red grape, and it is usually considered to also be a premium variety. Other common premium varieties, such as Pinot Noir, Merlot and Syrah, produce crops at about their expected level in California.

California white grapes by area and crush

A similar pattern is seen when we look at the white grape varieties, as shown in the second graph. Indeed, the regression lines in both graphs have almost the same slope (and intercept), indicating that red and white production both have the same relationship to area.

Chardonnay is both the most widely planted white grape variety and the one most highly esteemed by winemakers. It is obvious from the graph that Chardonnay produces less quantity than is expected based on the other white varieties, as anticipated.

Interestingly, three of the next four most widely planted white varieties (labeled on the graph) produce a larger grape quantity than we might expect from their acreage, whereas all of the other varieties are close to their expectation. This matches the pattern observed for the red varieties, where only the top variety has a reduced crop.

Finally, the California Department of Food and Agriculture's California Grape Crush Report Preliminary 2017 allows us to look at the broad-scale economics of wine-grape production. The graph below shows the inflation-adjusted price per ton of wine grapes (vertically) versus the grape crush tonnage (horizontally). Each point represents the crop for one year from 1989 to 2016, inclusive.

Price per ton of California grapes, 1989-2016

As can be seen, for the white-wine grapes the price is unrelated to the crop size — prices do not go up or down when the crop is large. On the other hand, for the red-wine grapes the price has a tendency (i.e. with a few exceptions) to rise when the crop is large (correlation = 33%). In both cases, price is not related to scarcity, which is the important point. This implies that voluntarily restricting crop size does not affect the overall economics — the reduction in crop is likely to be compensated for by an increased price.
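The "correlation = 33%" figure is just a Pearson correlation between the yearly crush tonnage and the inflation-adjusted price. A minimal sketch of that calculation, using invented yearly stand-ins rather than the Grape Crush Report figures:

```python
# Plain Pearson correlation coefficient, computed from scratch.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Invented yearly stand-ins, not the actual Grape Crush Report data:
red_crush = [2.9, 3.1, 3.4, 3.0, 3.6, 3.3, 3.8]   # million tons crushed
red_price = [780, 800, 830, 790, 850, 810, 860]   # $/ton, inflation-adjusted

r = pearson(red_crush, red_price)
print(f"Pearson r = {r:.2f}")  # positive r: price tends to rise with crop size
```

A positive r, as reported for the red grapes, means larger crops coincide with higher prices, not lower ones.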


So, Cabernet sauvignon and Chardonnay are the most widely planted and most esteemed red and white grape varieties, respectively, in California, and they both produce smaller crops than might be expected based on the production levels of other varieties. Furthermore, the situation differs for some of the other widely planted varieties, which produce larger crops than might be expected. This seems to match what is anticipated from the suggested relationship between quantity and quality — quantity is less when quality is at its very highest. For California grapes, less is more.

Monday, 5 February 2018

Where does all of this wine come from and go to?

A few weeks ago I commented on some of those countries that are importing expensive versus cheap wine (The USA imports more expensive wines than anywhere else). This leads us inevitably to consider, globally, where all the wines are coming from and going to.

The data I will use to explore this come from Comtrade, the United Nations International Trade Statistics Database. I accessed all of the data available for 2016 in the category: "Wine; still, in containers holding more than 2 litres" (code 22042). This covers pretty much any wine shipped in bulk (i.e. not single bottles), but excludes sparkling and fortified wines.

I have plotted the results in the graph, which shows the total reported exports (in kg, which ≈ liters) horizontally, and the total reported imports vertically, with each point representing a single country (as recognized by the UN). Some of the countries are labeled, but most are not. Note that both axes have logarithmic scales, so that the most active countries are dealing with up to 1 million tons of wine annually.

Exports and imports of wine by country

For those countries above the line, their imports exceed their exports, while for those below the line, exports exceed imports. Obviously, most countries are net importers of wine. For the USA, imports exceed exports by c. 50%.
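The above-the-line / below-the-line split is just the ratio of imports to exports for each country. A short sketch, with illustrative placeholder figures (in kg of bulk wine) rather than the real Comtrade 2016 values:

```python
# Classify countries as net importers or exporters of bulk wine.
# Quantities are invented placeholders, not actual Comtrade figures;
# the USA pair is chosen to reproduce the c. 50% excess noted in the text.
trade = {
    "Spain":   {"exports": 1_300_000_000, "imports": 60_000_000},
    "Germany": {"exports": 180_000_000, "imports": 700_000_000},
    "USA":     {"exports": 200_000_000, "imports": 300_000_000},
}

for country, t in trade.items():
    ratio = t["imports"] / t["exports"]
    role = "net importer" if ratio > 1 else "net exporter"
    print(f"{country}: imports/exports = {ratio:.2f} ({role})")
```

On the log-log scatter, a ratio above 1 places a country above the diagonal line and a ratio below 1 places it below.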

Those countries that are large net exporters of wine are well known, including Spain, Chile, Italy, Australia and South Africa. France is not in this list because, according to the data, it imports nearly three times as much wine as it exports. Portugal is another absentee, as it imports more than twice as much wine as it exports. I discussed these two issues in the previous post (The USA imports more expensive wines than anywhere else).

The next group of net exporters includes (in order) Moldova, New Zealand, Macedonia, Myanmar (Burma) and Argentina, followed by Hungary, Israel, Morocco and Bulgaria. For Myanmar 96% of the wine goes to Suriname, and for Morocco 89% goes to France, which is why you have never tasted either of these wines. Macedonian wine principally goes to Germany (41%) and Serbia (34%), while Hungarian wine goes to Germany (30%) and Czechia (23%). The USA takes the largest share of the Israeli wine (46%), although France (11%) and the UK (10%) get their share, as well. Moldova sends its wines mainly to Belarus (40%) and the Ukraine (19%), while Bulgarian wine goes to Poland (56%) and Sweden (25%).

The biggest net importers are generally (but not always) those countries with large populations but with only a relatively small wine industry: Germany, the United Kingdom, France, Russia and China — that's right, France is the third biggest net importer of wine in the world! These countries are followed by (in order) Iceland, Sweden, Czechia, USA, Belgium, the Netherlands and Switzerland.

I have commented before that there seem to be a number of countries that are credited with exporting far more wine than they actually produce (Bizarre wine data). In the Comtrade dataset, these countries include: Denmark, Finland, Belgium, the Netherlands, Sweden, Thailand, Norway, Luxembourg, Singapore, Hong Kong and Iceland. Normally, I would conclude that these data involve re-exports of imported wine; however, "re-export" is officially a separate category of data in the Comtrade database. It therefore seems to me that the data might not be organized as well as we would like.