Monday, December 31, 2018

Is there truth in (wine) numbers?

Everyone knows the expression in vino veritas (in wine there is truth), which (in one form or another) seems to date all the way back to the 6th century BCE. However, in this blog, I spend a lot of time looking at numbers. This immediately raises the oft-asked question of whether "truth" also lies in numbers. In this post I will look at four informative examples where there is truth in some wine numbers, but in each case all is not quite as it seems.


Introduction

Many people are wary of numbers. The issue is that truth lies not in the numbers themselves but in our ability to interpret them. Numbers cannot speak for themselves, and thus they can tell us nothing directly. We have to look at them and work out for ourselves what truth lies therein.

The same applies to words, of course. The same combination of letters can mean quite different things, in different contexts or in different places. Even in English, the words "lead" (pronounced leed) and "lead" (pronounced led) are spelled identically but have different meanings. (Did you know that this is why the band Led Zeppelin spelled their name that way? That was how they wanted it to always be pronounced, as the name comes from the expression "going down like a lead zeppelin".)

We have to be aware of this sort of thing, if we are to make much sense of the world around us; and we all get it wrong more often than we would like. Having so many different languages only makes it much worse, of course.

It is the same with numbers, even though there is only one mathematical language. This is why Mark Twain famously referred to "Lies, damned lies, and statistics". The first two emphasize the problems with words, and the third one the problem with numbers. It is easy to fool ourselves when interpreting numbers, and to thereby intentionally or unintentionally mislead others.

I mention this because I recently encountered four different examples of misinterpreting numbers in the wine industry, in a way that led to wrong conclusions, even though the numbers were (almost all) truthful. The first example comes from a book, the second from a blog post, the third from a press release, and the fourth from a research paper. Numbers are everywhere!

Example 1

An easy one to start with. This table is from a book about Madeira wine. It discusses the wine production from each of the main grape varieties. Back when I taught experimental design to university students, I used examples just like this one to drill into those students the importance of presenting numbers correctly in tables. Can you spot the error?


Any time you see a table where the numbers are supposed to add up to a given total, check whether they do — you might be surprised how often they don't (e.g. see the Postscript). In this case, the Production data for Other European Varieties cannot possibly be right, although the Percentage of total harvest is apparently correct. Working backwards from the Total given, the true Production should be 38,936.05 hL, not 39.04.

Note that this is similar to a typographical error, but of a somewhat complex type, and with important consequences.
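This sort of total-checking is trivial to automate. Here is a minimal sketch in Python — the row values other than the suspect 39.04 hL entry are hypothetical placeholders (the book's actual rows are not reproduced here), chosen so that the recovered value matches the 38,936.05 hL mentioned above.

```python
# Sum-check sketch: all figures except the suspect entry are hypothetical
production_hl = {
    "Tinta Negra": 25000.00,                 # hypothetical value, hL
    "Verdelho": 1200.50,                     # hypothetical value, hL
    "Other European Varieties": 39.04,       # the suspect entry from the book
}
stated_total_hl = 65136.55                   # hypothetical stated total, hL

discrepancy = stated_total_hl - sum(production_hl.values())
if abs(discrepancy) > 0.01:
    # Work backwards from the stated total to recover the suspect value
    corrected = production_hl["Other European Varieties"] + discrepancy
    print(f"Rows sum to {sum(production_hl.values()):,.2f} hL, not {stated_total_hl:,.2f} hL")
    print(f"'Other European Varieties' should probably be {corrected:,.2f} hL")
```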

Example 2

Now let's look at a slightly more tricky instance. Some years ago, a retailer blog post from Australia contained this comment:
I could not help but be struck by how many wines the tasters rated at 90 points or more on a scale where the maximum is 100. An analysis of the list of 365 wines (excluding the French champagnes - all of which were rated above 90) showed that the average score given to this range was 91.36 and the lowest score given to a wine was 83. Of the 211 reds listed, 83 were rated as 93 points or higher. Now under the 20 point system used in Australian wine shows, 18.5 points is gold medal standard. Multiply by five to get 92.5 and it seems that almost 40 per cent of all the reds (83 out of 211) are gold medal standard, and every red and white wine on their list is well above the minimum 15.5 out of 20 (or 77.5 out of 100) needed to gain a bronze medal.
The author's conclusion does, indeed, follow if his arithmetic is right; but it isn't right. The issue here is converting from one scale (20 points) to another (100 points). The arithmetic assumption that the author makes is that both scales start at 0, whereas the 100-point scale actually starts at 50. These different equivalences are compared in this table:
20-point scale   100-point scale (0 = 0)   100-point scale (0 = 50)   Show medals
 0.0                0.0                       50.00
 0.5                2.5                       51.25
 1.0                5.0                       52.50
 1.5                7.5                       53.75
 2.0               10.0                       55.00
 2.5               12.5                       56.25
 3.0               15.0                       57.50
 3.5               17.5                       58.75
 4.0               20.0                       60.00
 4.5               22.5                       61.25
 5.0               25.0                       62.50
 5.5               27.5                       63.75
 6.0               30.0                       65.00
 6.5               32.5                       66.25
 7.0               35.0                       67.50
 7.5               37.5                       68.75
 8.0               40.0                       70.00
 8.5               42.5                       71.25
 9.0               45.0                       72.50
 9.5               47.5                       73.75
10.0               50.0                       75.00
10.5               52.5                       76.25
11.0               55.0                       77.50
11.5               57.5                       78.75
12.0               60.0                       80.00
12.5               62.5                       81.25
13.0               65.0                       82.50
13.5               67.5                       83.75
14.0               70.0                       85.00
14.5               72.5                       86.25
15.0               75.0                       87.50
15.5               77.5                       88.75                     Bronze
16.0               80.0                       90.00
16.5               82.5                       91.25
17.0               85.0                       92.50                     Silver
17.5               87.5                       93.75
18.0               90.0                       95.00
18.5               92.5                       96.25                     Gold
19.0               95.0                       97.50
19.5               97.5                       98.75
20.0              100.0                      100.00

Allowing for the fact that 0 on the 20-point scale equals 50 on the 100-point scale does away with the author's concern about over-inflation of scores, because a Gold medal requires 96 points, not 93 points, and not all of the wines would get a Bronze medal (which requires 89 points not 77.5).

However, even this simple correction does not necessarily produce the "correct" conversion from 20 points to 100 points. For example, Australia's Winestate magazine has used a conversion where 15.5 points on the 20-point scale is equivalent to 90 points on the 100-point scale, not 89 points (as shown in the table).
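For anyone who wants to do the conversion themselves, here is a minimal sketch of the two linear mappings discussed above. The Bronze (15.5) and Gold (18.5) thresholds come from the text; the Silver threshold of 17 is the usual Australian show standard, and is my assumption here.

```python
def to_100_naive(score_20):
    """Assumes 0 on the 20-point scale corresponds to 0 on the 100-point scale."""
    return 5 * score_20

def to_100_offset(score_20):
    """Assumes 0 on the 20-point scale corresponds to 50 on the 100-point scale."""
    return 50 + 2.5 * score_20

# Medal thresholds: Bronze and Gold as quoted in the text; Silver assumed
for medal, score_20 in [("Bronze", 15.5), ("Silver", 17.0), ("Gold", 18.5)]:
    print(f"{medal}: naive = {to_100_naive(score_20):.1f}, offset = {to_100_offset(score_20):.2f}")

# Bronze: naive = 77.5, offset = 88.75
# Silver: naive = 85.0, offset = 92.50
# Gold: naive = 92.5, offset = 96.25
```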

Example 3

It took a lot of work to identify the source of the error in this example.

In 2015, the climats and terroirs of the wine region of Burgundy were added to the UNESCO World Heritage List. To quote UNESCO: "The climats are precisely delimited vineyard parcels on the slopes of the Côte de Nuits and the Côte de Beaune south of the city of Dijon. They differ from one another due to specific natural conditions (geology and exposure) as well as vine types and have been shaped by human cultivation. Over time they came to be recognized by the wine they produce ... The site is an outstanding example of grape cultivation and wine production developed since the High Middle Ages."

The UNESCO documentation suggests that there are 1,247 climats in this World Heritage site. However, Paul Messerschmidt (an amateur wine researcher from the UK; paulmess[at] gmail.com) noted a discrepancy between this number and the count of those actually listed in the UNESCO documentation.

After a lot of (tedious) work, he realized that, while there are 1,247 climat names in Burgundy, there are actually "1,628 separate, distinct, and precisely delimited vineyard parcels in the Côte d'Or". The difference appears to come from searching the database for "climat names" rather than "named climats". For example, "there are vineyards called Les Cras in Chambolle-Musigny, Vougeot, Aloxe-Corton, Pommard, and Meursault (ie. five "named climats"), but they share only one "climat name", as listed in UNESCO's count of 1,247". The difference of 381 vineyards is hardly trivial, especially if you happen to own part of one of them.
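The distinction is easy to see with a toy example: counting a handful of (commune, vineyard name) pairs in two different ways. The five Les Cras communes below are those listed above; the sixth entry is just an illustrative extra, not part of anyone's official count.

```python
# Toy sample of Burgundy vineyard parcels, as (commune, climat name) pairs
vineyards = [
    ("Chambolle-Musigny", "Les Cras"),
    ("Vougeot", "Les Cras"),
    ("Aloxe-Corton", "Les Cras"),
    ("Pommard", "Les Cras"),
    ("Meursault", "Les Cras"),
    ("Gevrey-Chambertin", "Clos Saint-Jacques"),   # illustrative extra entry
]

climat_names = {name for commune, name in vineyards}   # counting "climat names"
named_climats = set(vineyards)                         # counting "named climats"

print(len(climat_names))    # 2 distinct climat names
print(len(named_climats))   # 6 distinct named climats (commune + name)
```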

Paul is apparently now compiling the discrepancies between the UNESCO list and those of The Wines of Burgundy, by Sylvain Pitiot & Jean-Charles Servant, and Inside Burgundy, by Jasper Morris, if anyone wants to help him with his work.

Example 4

Let's return to the subject of scoring wines at a wine show, and awarding medals. This will illustrate a situation where we can easily be misled when dealing with statistical summaries.

There are a number of research papers where judge scores have been compiled, and I will illustrate my point with a paper in the Journal of Wine Research (1996, 7:83-90). In this case, the judges evaluated 174 wines, and this graph shows the scores for three of the judges (each vertical bar represents the number of wines that received each of the scores shown on the horizontal axis):


One standard way to summarize data like this, and thus to compare the judges, is to calculate the mean score for each judge. In this case, the mean for Judge 3 is 11.8 and for Judge 5 it is 11.3. These two means are almost identical, suggesting that the judges are rather similar, and yet their scores, as shown in their graphs, are quite different. Indeed, Judge 3 seems to have two main groups of scores that are favored, with a score of 12 not commonly being used — and yet this is actually the mean score (11.8)! In this case, the mean does not help us understand the scoring behavior of this judge.

This is even more obvious when we look at the data for Judge 1. Once again, there are scores that the judge rarely uses, such as 9 and 10, and yet the mean score is 9.8. When the scores form two distinct clusters like this, we call the distribution "bi-modal", and in such a case the calculation of any sort of average score is going to mislead us badly. The data need to be clustered around the mean, if the mean is going to tell us anything useful.
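The point is easy to demonstrate with some made-up scores (these are not the judges' actual data):

```python
from statistics import mean

# Hypothetical bi-modal judge: scores cluster around 7-8 and around 12-13,
# with the in-between values almost never awarded
scores = [7, 7, 8, 8, 8, 7, 8, 12, 13, 12, 12, 13, 13, 12]

print(mean(scores))        # 10 -- a score this judge never actually gave
print(scores.count(10))    # 0 wines received the "average" score
```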

Conclusion

So, remember that truth lies not in the words or numbers, but in our ability to interpret them. Compare this with the situation of a medical doctor diagnosing a disease based on the patient's symptoms. The symptoms really do indicate the disease, and hopefully the doctor extracts the truth most of the time. However, sometimes the doctor is unfamiliar with the disease, and sometimes the doctor misinterprets the symptoms, and sometimes the doctor fails to connect the symptoms with the disease. This is not good for the patient, or the doctor for that matter; but they both need to deal with it.

As a final word example, Swedes have a term for a live-in partner, "sambo", which is a shortening of "sammanboende" (living together). Americans do not introduce their partner as their sambo, but Swedes quite happily do so. This confuses Americans, but not Swedes.

Postscript

How many of you have ever noticed the error in a widely distributed description of the original UC Davis 20-point wine score card (as pointed out to me by Bob Henry)? The description is: "Appearance (2), Color (2), Aroma & Bouquet (4), Volatile Acidity (2), Total Acidity (2), Sugar (1), Body (1), Flavor (1), Astringency (1), and General Quality (2)". The true numbers should be: Flavor = 2, Astringency = 2, so that scores then correctly sum to 20. [I warned you to always check totals!] The written description also refers to a "fairy wine", which would be a very interesting thing, if it existed.
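For the record, the check itself takes only a few lines, using the numbers quoted above:

```python
# The card's categories as given in the widely distributed description
as_printed = {"Appearance": 2, "Color": 2, "Aroma & Bouquet": 4, "Volatile Acidity": 2,
              "Total Acidity": 2, "Sugar": 1, "Body": 1, "Flavor": 1,
              "Astringency": 1, "General Quality": 2}
print(sum(as_printed.values()))    # 18 -- not the 20 points the card should total

# With Flavor and Astringency corrected to 2 points each
corrected = dict(as_printed, Flavor=2, Astringency=2)
print(sum(corrected.values()))     # 20
```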

Monday, December 24, 2018

The first wine-themed Christmas card (1843)


The first known commercially produced Christmas card

This is a copy of the first known commercially produced Christmas card. John Calcott Horsley, a British painter, designed the card in May 1843, for Henry Cole, a British civil servant and entrepreneur. Apparently, two batches totaling 2,050 cards were printed, and sold later that year for 1 shilling each. There are believed to be about 10-12 of the cards surviving.

Rather than the more modern idea of a wintery scene (which has no meaning in December in the southern hemisphere!), the card apparently depicts three generations of a family, with some of them raising a toast of wine to the card's recipient. The scenes on either side show food and clothing being given to the poor.

God jul och gott nytt år! (Merry Christmas, and a Happy New Year!)

Monday, December 17, 2018

How much do recent US wine imports differ between years?

In a previous post I looked at the volume of wine-brand imports into the USA in 2012 (Yellow Tail — wine imports into the USA do fit a "power law"). It might be of interest to now look at some more recent data, to see how the different brands fare through time.

In the following graph I compare the lists of the top-25 imported wine brands in terms of volume sales in the USA, for 2012 (horizontally; the data come from my previous post) and 2016 (vertically; the data come from the AAWE Facebook page). Note that both axes of the graph are logarithmic.

Top US wine imports in 2012 and 2016

There are only 17 wine brands that appear in both top-25 lists (represented by blue diamonds in the graph) — that is, roughly one-third of the wine brands were replaced in the top-25 imported volume sellers across the 4 years.

Five of the top six wine brands were at the top in both years, as labeled in the graph. These brands have retained consistent sales volumes across the 4 years. Both Yellow Tail and Lindemans come from Australia, while Cavit and Riunite are imported from Italy, and Concha y Toro is from Chile.

The big difference in the top grouping is that Principato drops off the list, from 3rd place in 2012 (the pink diamonds represent wine brands that appear in only one of the two lists). This Italian wine brand seems to have greatly decreased its portfolio recently, now down to just three distinct wines.

The remaining 12 wine brands were much less consistent in their sales, even though they appeared on both lists — that is, the sales volume in 2016 did not necessarily reflect the sales volume in 2012. There were also another seven wine brands (besides Principato) that were replaced from the 2012 list.

So, the answer to the title question is: reasonably consistent at the top, but not very consistent elsewhere on the top-25 list. There seems to be a lot of room for maneuvering the volume of brand sales among most of the imported wines.

Finally, it is worth looking at the topic covered in my previous post, which is the fit of a Power Law to the data, specifically Zipf's Law, which refers to the "size" of each event relative to its rank order of size. In the previous post, I noted that data for the volume of imported wine do fit a Power Law very well. That is, when you plot a graph of the logarithm of sales volume (vertically) and the rank order of the wine brands (horizontally), it forms a straight line.
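As a rough sketch, such a fit can be checked with an ordinary least-squares regression on the logarithms (the standard way of fitting Zipf's law, with log volume regressed on log rank). The volumes below are made-up placeholders, not the actual import figures.

```python
import numpy as np

# Hypothetical sales volumes, sorted from largest to smallest
volumes = np.array([8200, 4100, 2730, 2050, 1640, 1370, 1170, 1020])
ranks = np.arange(1, len(volumes) + 1)

# Zipf's law predicts a straight line of log(volume) against log(rank)
slope, intercept = np.polyfit(np.log10(ranks), np.log10(volumes), 1)
predicted = slope * np.log10(ranks) + intercept
residuals = np.log10(volumes) - predicted
r_squared = 1 - residuals.var() / np.log10(volumes).var()

print(f"exponent = {slope:.2f}, R^2 = {r_squared:.3f}")
```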

This next graph compares the 2012 data (in blue) with the 2016 data (in pink).


This graph shows that the Power Law for 2012 was not a fluke, but was repeated in 2016. Indeed, the fit of the data to a Power Law is actually slightly better in 2016 (99% vs. 97%).

I think that it is remarkable that so often in our world, complex economic data, like wine sales volumes, which are presumably subject to all sorts of market forces, actually fit very simple mathematical models. This implies that all of the different forces cancel each other out, leaving a simple pattern for us to see and interpret. In this case, for example, we should not expect Yellow Tail wines to sell any better than they already do.

Monday, December 10, 2018

Classification of the cellaring wines of Australia

I have previously written about Australia's most collected wines. These are the wines that are among the most commonly found in wine-storage facilities, which means that they are suitable for at least short-term cellaring. Another way to investigate the wines most suitable for cellaring is to look at which wines appear most commonly at auctions, on the basis that these wines were originally purchased for storage and later sale (at a profit).


This is precisely what Langton's Fine Wine auction house has done, with their Langton's Classification of Distinguished Australian Wines, currently in its VIIth edition. This lists the most collectable wines, as determined by their auction sales records. Langton's was originally an independent organization, but it is now owned by Australia's largest alcohol retailer (with just over 50% of the market), the Woolworths supermarket chain.

The Classification is updated roughly every 4.5 years, with lists produced in 1991, 1996, 2000, 2005, 2010, 2014 and 2018. Langton’s says the classification is compiled by analyzing the track records in the Australian wine auction market over several years — “It’s a combination of the volume of wine sold and prices achieved, balanced against release price. Our number crunchers use this combination of elements to arrive at a number, which is what determines the classification.” This has been likened to the process used for the 1855 classification of Bordeaux wines, but without the politics (see Introducing the fine wines of Australia).



The current Classification has three main levels (in increasing order): Excellent, Outstanding and Exceptional. However, the best of the Exceptional wines are distinguished as being Heritage. Previous classifications have used somewhat different names for the groups, and four of them (II-V) had an extra Distinguished level at the bottom.

The current classification contains 136 wines — 5 Heritage, 17 Exceptional, 46 Outstanding, and 68 Excellent. The previous classifications mostly had fewer wines — 34 (Classification I), 63 (II), 89 (III), 101 (IV), 123 (V), 139 (VI). Wines have come and gone among the classification lists, with a total of 185 wines having appeared among the seven classifications, although only 21 of them have appeared in all seven of the lists.

The current Classification

Most wine producers have only one wine in the Classification, although some of them do better than this — Grosset, Mount Mary, Vasse Felix, and Wynns Coonawarra Estate (3 each), Henschke (4), Wendouree (5) and Penfolds (10). The latter producer (part of Australia's biggest wine company, Treasury Wine Estates) is probably the best known among fine-wine appreciators outside Australia.

The Classification contains 116 red wines, 17 white wines, and 3 fortified wines. This extreme bias is the basic limitation of using auctions to classify wines — most people don't cellar white wines and put them up for sale on the auction market; and most Australian fortified wines are ready to be drunk when released, and so there is no point in storing them. We cannot treat this as a quality classification of Australian wine in general.

This next table shows the distribution of grape varieties among the wines. As expected, Australia's iconic grape variety is Shiraz, which makes wines stylistically quite different from the Syrah wines of the rest of the world — fully 40% of the wines are straight Shiraz. These are the Australian wines most beloved of Robert M. Parker, Jr (see The Parker influence). Next, 29% of the wines have Cabernet sauvignon as the dominant component, which generally produces a somewhat less heavyweight wine. Indeed, at least one of the Heritage cabernets has not at all met with Parker's approval (see Sharp differences of opinion over Mount Mary).

Wine type                Heritage   Exceptional   Outstanding   Excellent
Red
  Cabernet sauvignon        -           3             8            12
  Cabernet blend            1           1             5             6
  Cabernet Shiraz           -           -             2             2
  Pinot noir                -           1             6             6
  Shiraz                    3           8            17            27
  Shiraz blend              -           1             5             2
White
  Chardonnay                1           1             2             5
  Riesling                  -           1             -             4
  Semillon                  -           -             1             2
Fortified                   -           1             -             2

As for the origin of the wine grapes, 54% of the wines come from the state of South Australia, with Victoria producing another 26%, as shown in the next table. Most wines from South Australia can be considered to be warm-climate wines, while those from Victoria are mostly cool-climate. It is somewhat surprising that Western Australia still produces such a small percentage of the classified wines (11%), given its rapidly rising status over the past decade. It is, however, not surprising that New South Wales has so few wines — when I was young it was considered to be a premium wine-grape state, but it is now over-shadowed by most of the other states.

State                 Heritage   Exceptional   Outstanding   Excellent
New South Wales           1           2             1             5
Queensland                -           -             -             -
South Australia           3          10            25            36
Tasmania                  -           -             1             1
Victoria                  1           3            12            19
Western Australia         -           2             6             7

The next table shows the situation when we dig deeper, into the Geographical Indication areas (you can check out their locations in my prior post: Welcome to the wine regions of Australia). The world-renowned Barossa Valley is way out in front, with 19% of the classified wines. Margaret River, the best region in Western Australia, is next (10%), followed closely by Coonawarra, the coolest region in South Australia (9.5%). The Clare Valley is surprisingly next (8%), given that it is an often-overlooked region.

Geographical Indication        Heritage   Exceptional   Outstanding   Excellent
Adelaide, SA                       -           -             -             1
Barossa Valley, SA                 -           4            12            10
Beechworth, Vic                    -           1             -             2
Canberra District, NSW             -           1             -             -
Clare Valley, SA                   1           2             4             4
Coonawarra, SA                     -           1             2            10
Eden Valley, SA                    1           1             1             2
Frankland River, WA                -           -             1             -
Geelong, Vic                       -           -             1             1
Goulburn Valley, Vic               -           -             1             -
Grampians, Vic                     -           1             1             2
Heathcote, Vic                     -           -             2             -
Henty, Vic                         -           -             -             1
Hunter Valley, NSW                 -           1             1             4
Langhorne Creek, SA                -           -             1             1
Macedon Ranges, Vic                -           -             2             -
Margaret River, WA                 1           2             4             7
McLaren Vale, SA                   -           1             3             4
Mornington Peninsula, Vic          -           -             1             3
Pyrenees, Vic                      -           -             -             2
Riverina, NSW                      -           -             -             1
Rutherglen, Vic                    -           -             -             2
South Gippsland, Vic               -           1             1             -
Sunbury, Vic                       -           -             -             1
Tasmania                           -           -             1             1
Yarra Valley, Vic                  1           -             3             5
South Australia                    1           1             2             4
South-Eastern Australia            -           -             1             -
Western Australia                  -           -             1             -
Finally, it is worth noting that the last three of the regions listed refer to wines blended across several GIs. This includes 10 of the wines, notably Penfolds Grange, which has always been at the top of the Classification, as Australia's most common auction wine. Worldwide, it is unusual for blended wines to be considered premium — indeed, it has been suggested that "the great contribution of Australia to the world of wine has been lifting the art of blending to a whole new level" (Specific site or blending?).

Mind you, Penfolds Grange currently retails for a price that far exceeds its recent auction value (for details, see Penfolds Collection 2018 – An outstanding release… but there’s a twist) — currently, it loses a third of its value during the year immediately after release. This has been going on for at least a decade (see Making, selling, Grange and other wine business).

Previous Classifications

Presumably, we are meant to infer from the relative stability of the wine numbers over the past three classifications that things are becoming settled. However, 16 wines were dropped between Classifications VI and VII, with another 13 being added. Furthermore, 13 of the wines went up at least one level and 15 went down. This means that 42% of the Classification changed in some way between 2014 and 2018.

There is nothing unusual about this instability, given what has happened in previous classifications. Of the 185 wines that have appeared among the seven classifications, 21 of them have appeared in all seven of the lists, and 25 of them in six out of the seven; 27 wines have appeared only once, and 34 twice. The Australian fine-wine market is not a stable one, at least as far as auction sales are concerned.

So, it has been pointed out that the Classification portrays an evolving winemaking culture in Australia. However, few people seem to have looked at this evolution (eg. A look at Langton's Classification from 1991 to 2005). I may do this at some time in the future; but for now I will simply look at which wines have been most consistently present in the Langton's Classification.

To do this, I have compiled all seven of the lists, which was no mean feat, given that Langton's literally replaces each list with the new one, and that the available documentation is unclear about Classification II (variously 62, 63 or 64 wines). I then simply scored each of the wines as 1-4 based on their classification level. From this, I have constructed a network of the 90 wines that had the highest average score across the lists, and were classified as Outstanding at least once. (Fortified wines were excluded, because they were not present in Classifications III-V.)
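For anyone wanting to repeat the exercise, the scoring step can be sketched as below. The level-to-score mapping and the example entries are my assumptions for illustration, not Langton's data (and the earlier classifications used different level names).

```python
# Hypothetical scoring of wines across the seven classification lists;
# a wine absent from a list scores 0 for that list
LEVEL_SCORE = {"Excellent": 1, "Outstanding": 2, "Exceptional": 3, "Heritage": 4}

classifications = [                                   # one dict per list (I-VII)
    {"Penfolds Grange": "Exceptional", "Wendouree Shiraz": "Outstanding"},   # made-up entries
    {"Penfolds Grange": "Heritage", "Mount Mary Quintet": "Exceptional"},
    # ... and so on for the remaining lists
]

wines = {wine for listing in classifications for wine in listing}
average_score = {
    wine: sum(LEVEL_SCORE.get(listing.get(wine), 0) for listing in classifications)
          / len(classifications)
    for wine in wines
}

for wine, score in sorted(average_score.items(), key=lambda item: -item[1]):
    print(f"{wine}: {score:.2f}")
```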

Network of Langton's Classification of Australian Wine


The 18 wines at the top-right of the network are those that have always been classified highly — presumably, these are the cream of the crop.

Those wines at the lower-right did not make it into the first and / or second classification lists. On the other hand, those at the top-middle have dropped down or off the list in recent classifications.

Those wines at the top-left moved to the top of the list in Classifications IV or V, while those at the bottom-left moved to top in Classification VI. Those wines at the bottom-middle made it to the top only in the current classification (VII).

Conclusion

Neither the Langton's list of wines nor the Wine Ark list (as discussed in the previous post on cellaring wines) can be considered to represent all of Australia's finest wines. Indeed, I noted last time that there are notable differences between the two lists (and they also differ from the results of both wine shows and critic reviews — see How to find consensus on the top drops for your cellar).

Both lists refer solely to wines preferred for long-term storage, and therefore white and sparkling wines are severely under-represented (there have only ever been two of the latter on the list) — people buy most white and sparkling wines to drink soon, not to store (or sell). Indeed, the best sparkling wines are often not released until they are ready for drinking (most of these are from Tasmania), and neither are the fortified wines (mostly from northern Victoria, and South Australia).

Cult wines are also excluded, because they are not made in enough volume to affect the auction market, or to occupy much space in cellar-storage facilities. Quite a number of such wineries survived the cult-wine boom of the 1990s (see InvestDrinks), including the wines of Clarendon Hills, Fox Creek, Greenock Creek, Noon Winery, Three Rivers (aka Chris Ringland), and Wild Duck Creek Estate. The main cult winery in the Langton's list is Wendouree (with all five of its wines!).

Monday, December 3, 2018

Which people and publications are name-checked in wine-store emails?

Anyone who takes their wine seriously is likely to be inundated with email "blasts" from all and sundry. Moreover, if you have ever registered with a wine store or distributor, then you will be getting emails daily, weekly or monthly, extolling the virtues of the current crop of wines, and telling you just how advantageous are the current prices.


These emails may even contain more than a grain of truth. In support of that truth, wine sellers often use the opinions of well-known people and publications, at least when those opinions are supportive. Indeed, the more supportive they are, the more likely they are to be quoted. So, an email inbox full of these offerings does, in my mind anyway, raise the question of just how often particular people / publications get cited. Moreover, I have wondered whether the USA and Europe quote the same people to the same extent.

To answer these questions, I have used two sources of information. The data for the USA come from Bob Henry (a business school-educated, ad agency-trained wine marketer based in Los Angeles), who has long used Gmail to archive email-blast offerings from many of the leading US wine merchants (they are listed at the bottom of this post). I searched through the 14,403 emails from late 2006 to April 2017. The data for Europe are my own, which I collected from April 2017 to March 2018, totaling 1,728 emails. These came from merchants in Germany, Italy, Spain, Sweden, and the United Kingdom (they are also listed at the bottom of this post).

The results are shown in the following table.


                                 USA        Europe
Emails searched                  14,403     1,728

Publications
  Wine Advocate                  7,212      45
  Wine Spectator                 3,802      127
  Vinous                         879        3
  Wine Enthusiast                510        45
  Decanter                       431        75
  La Revue du Vin de France      8          0
  Falstaff                       2          87
  El Mundo Vino                  2          0

People
  Robert Parker                  6,890      1,107
  James Suckling                 1,209      66
  Stephen Tanzer                 1,143      0
  Antonio Galloni                1,015      13
  Neal Martin                    447        2
  Jancis Robinson                118        10
  James Laube                    94         6
  Ian D'Agata                    36         24
  Bettane & Desseauve            7          0
  Tim Atkin                      6          0
  Jeff Leve                      4          0
  Daniele Cernilli               2          0

Not very surprising, is it? [Note: I did search for other publications and names, but drew a blank for them.]

The Wine Spectator has had around 350,000 paid subscribers, while the Wine Advocate has had 40-50,000. In both cases, these are likely to be predominantly US residents. Mind you, the subscription penetration rate of habitual US wine drinkers is likely to be <1%. However, in the matter of marketing wines this is what the retailers have, and the Advocate is usually noted to out-perform the Spectator, in terms of quotability. However, this is only true in the USA. In Europe, the Spectator, the UK-published Decanter, and the Austrian Falstaff magazine appear to do better.

For individuals, Robert M. Parker Jr has been the world's wine guru for several decades, although his influence may now be on the wane. Perhaps he has been under-performing, since he only made it into 48% of the US emails? After all, he made it into 64% of the European emails! However, note that the US wine stores may mention either Parker or the Advocate (his former newsletter), whereas the European stores focus almost solely on the man himself, when quoting wine opinions.
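Those two percentages come straight from the table above:

```python
print(f"USA:    {6890 / 14403:.0%}")   # 48% of the 14,403 US emails
print(f"Europe: {1107 / 1728:.0%}")    # 64% of the 1,728 European emails
```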

Indeed, other than Parker, the European emails do not cite many critics at all. This is in marked contrast to the US emails, where at least three other people get quoted 5-10% of the time. This may have something to do with the fact that most of the critics are based in the USA, even if they do comment on wines worldwide.

The profession (or sport) of quotable wine criticism was developed in the USA — in Europe, there are far fewer critics whose commentary is worded in a manner likely to help sell a lot of wine. Indeed, the most quotable information is usually a quality score, and many of the European critics eschew the art (or sport) of scoring wines. For example, Jancis Robinson may well be "better known than Robert Parker Jnr, at least outside the US", but her wine notes (and lack of scores) are not the stuff that advertising copywriters dream of.

This, perhaps, is the situation in Europe — there is no reason why the local critics should be helping to advertise wine, as opposed to helping consumers identify desirable wines. So, the critics are not providing quotability. The US critics, on the other hand, are very much doing so, and are thereby being cited more prominently in their homeland. Parker and his Wine Advocate have provided the archetypal selling points (ie. quality scores), along with the Wine Spectator. It will be interesting to see what happens under the new regime at the Advocate.

You may also ask yourself why sites like CellarTracker are not quoted. After all, approval by a community site should, in theory, encourage other consumers. Perhaps it has a lot to do with the fact that the scores are averaged across a number of wine drinkers, and are therefore lower than those of individual critics (a score of 89 will not sell a wine but 90 will). More importantly, perhaps, the scores do not accumulate until after the retailers have started selling the wines (post hoc scores will not sell wines but a priori ones will). This suggests that the critics will continue to wield power, at least in the world of wine marketing.



The emails came principally (but not solely) from the following stores.

USA:

Aabalat Fine & Rare Wines
Calvert Woodley Wines & Spirits
Golden West Wines
Hi-Time Wine Cellars
K&L Wine Merchants
Post Wines & Spirits
The Rare Wine Co.
D. Sokolin & Co.
Wine Access
Wine Cellarage
Wine Exchange
Wine Garage


Europe:

AporVino
Bernabei
eBuy Wines
Enoteca d'autore
Nickolls & Perks
SangiShop
Sicilstore
Superiore
Vinello
Vinissimus
Vinoteket
Vintjansten
WineFinder
XtraWine