Review: The Wine Trials 2010
Take hundreds of wines that are commonly available nationwide, varying in price from $3 to $150. Randomly pick a few at a time, staying within a major flavor category (e.g. heavy New World red). Put them in brown bags and number them. Have groups of people taste and rank sets of wines, blind. Repeat until thousands of people have tasted hundreds of wines. Compile the results into an ordinal ranking. That should give you an objective measure of wine quality.
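The aggregation step above can be sketched in a few lines of Python. This is my own illustration, not the authors’ actual methodology; in particular, scoring each wine by its average blind rank across sessions is an assumption.

```python
from collections import defaultdict

def ordinal_ladder(tastings):
    """Aggregate many small blind-ranking sessions into one ordinal ladder.

    tastings: list of sessions; each session is a list of wine names,
    ordered best-to-worst by one blind taster.
    Assumed scoring rule: a wine's score is its average rank across
    all sessions it appeared in (lower average rank = better).
    """
    totals = defaultdict(lambda: [0, 0])  # wine -> [rank_sum, appearances]
    for session in tastings:
        for rank, wine in enumerate(session, start=1):
            totals[wine][0] += rank
            totals[wine][1] += 1
    return sorted(totals, key=lambda w: totals[w][0] / totals[w][1])

# Three tasters, three hypothetical wines in brown bags:
ladder = ordinal_ladder([
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
])
print(ladder)  # prints ['A', 'B', 'C']
```

Because no taster ever sees a label or a price, the ladder reflects flavor preference alone — exactly the point of the protocol.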
Robin Goldstein and Alexis Herschkowitsch did just that in The Wine Trials 2010, and as icing on the cake, they gave us a list of the blind tasters’ top 150 wines for $15 and under. Their shocking finding was that those “under $15” wines were the category winners.
The recommendations list is appropriately humble, and everything I’ve tasted from it (only four of the wines) has been good. As a guide for your everyday wine purchases, it’s worth the cover price.
But as a piece of skeptical inquiry that exposes wine rating, wine pricing and wine judging as biased and unreliable, well, it’s pretty… acerbic.
That the Wine Spectator (WS) 0-100 scale reviews are biased has been a gripe of vintners for some time, at least according to the authors. They cite unfair practices such as selling ad space in the same issue as a review of the wine, as well as major gifts for the big reviewers. (Fellow bloggers take note! Wine reviewing is a great way to get lavishly comped!) The authors give us an exposé of the wine review industry through interviews, accounts, and references to journalism on the subject. They do some dirt-digging of their own as well, entering a WS restaurant wine list competition with a fake restaurant. They created a wine list from WS’s lowest-rated Italian wines, paid the $250 entry fee, and won the Award of Excellence in 2008, with no communication from WS other than a call to say (I paraphrase), “Congratulations! You won an Award of Excellence! Now, do you want to buy a $3,000 or $8,000 ad in the issue where we announce the results?”
More pleasing to my palate, they also review the scientific research that’s been done to date (theirs included) on wine. They cite the red-dyed-white-wine experiment, in which reviewers could not tell red from white once dye obscured the color, using words like “red fruits”, “spicy” and “heavy tannins” to describe a chardonnay. They describe the fine-wine-cheap-bottle experiment: apparently, regardless of the price of your wine, if you serve it in a fancy bottle it gets positive wine-terms from reviewers, and if you serve it in a cheap bottle it is more likely to get negative ones. And they discuss the most recent statistical analysis of wine judging, in which a statistician named Hodgson got his hands on the California wine judges’ record sheets, ran the numbers, and found that regardless of all other factors, “…the likelihood of receiving a gold medal can be statistically explained by chance alone.” Some have disputed Hodgson’s findings, as well they should (I am a fan of skeptical inquiry). Personally, I would like to see another analysis with a larger sample; 13 competitions and 375 wines is a relatively low N. 375 is enough to call the results interesting and probably indicative of a pattern — or lack thereof — but not enough to be conclusive. They also re-published Almberg and Almberg’s 2008 wine experiment, which, using over 6,000 blind tastings, found that wine price was not at all correlated with drinker preference.
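Hodgson’s “chance alone” claim is easy to illustrate with a toy simulation. This is my own sketch, not his actual analysis; the 9% per-entry gold rate is an assumed number chosen purely for illustration.

```python
import random

def simulate_medals(n_wines=375, n_competitions=13, p_gold=0.09, seed=1):
    """Toy null model: every entry wins gold independently with
    probability p_gold, i.e. judging is pure chance.

    Returns a list giving each wine's total gold-medal count.
    All wines are identical by construction, yet some will still
    collect several golds while most collect none or one -- the
    same pattern a real competition circuit could show without any
    signal from wine quality.
    """
    rng = random.Random(seed)
    return [sum(rng.random() < p_gold for _ in range(n_competitions))
            for _ in range(n_wines)]

golds = simulate_medals()
print(max(golds), sum(g == 0 for g in golds))
```

The point is not that judging *is* random, only that a medal tally like this one is consistent with randomness — which is why a larger N matters for settling the dispute.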
The authors don’t mean to say that there is no such thing as good wine, or that the human palate can’t detect it. The entire thesis of The Wine Trials 2010 is that we should be using blind tasting.
Clearly, price influences reviewers. Social psychologists have known this for decades, having done countless studies getting at the issue from multiple angles. So, the authors ask, if you get $150 worth of pleasure out of a $150 wine, who cares if $135 of the price tag pays for a placebo effect? The answer: nothing is wrong with it if you’re a wine reviewer. But if you’re a vintner, there is a whole lot wrong with it. That “Veblen effect” (an economics term) disincentivizes vineyards from producing the best wines, placing the emphasis on marketing and image instead.
Beer has a related problem: Anheuser-Busch has a 51% market share of all beer sold in the United States. This is because of marketing and unfair business practices, definitely not quality. Watch Beer Wars, a fabulous documentary similar in style to a Michael Moore film, for more info on this. Wine, luckily, isn’t going in that direction. A little sociology: Wine is moving toward an “enchanted” state (dressed up to be exciting and appear elite, regardless of quality) while beer has become disenchanted (homogenized, bland, McBeer).
Both industries (beer and wine) are dealing with a mutiny in their ranks. For beer, it’s craft brewers. A honkin’ 4% of the market share is held by these small breweries (Sam Adams, Dogfish Head, New Belgium) but they’re reminding consumers that beer doesn’t have to taste like piss. For wine, it’s the blind tasting advocates. What educated person can honestly say an open tasting is more fair than a blind tasting?
Wine judges — that’s who. They say that they need “label information” on region, year, and grape to appropriately assess a wine’s true character. But if you have to read the bottle to know that information, what does that make you? A wine judge or a bottle-reader? That information may put the wine in context with other wines and vintages (e.g. 2007 was a good year for Russian River Valley Chardonnay), but it is post facto to flavor. The region, year, grape and vintner may tell you a lot if you’re a consumer with a good wine education, looking for a good wine; but a reviewer needs to evaluate the wine first, then use the quality of the wine to evaluate the region, year, grape and vintage — not the other way around.
Freethinker values oppose arguments from tradition and authority, and using wine information other than flavor to rate a wine’s quality smacks of both. The Wine Trials 2010 exposes the corrupt authority of the big wine reviewers and competitions. It invalidates the wrongheaded tradition of using the origins and price of a wine to pre-judge its flavor. Go read this book.
Postscript: Wine Trials Results
You may ask why there is a list of recommended wines in a book that promotes blind tasting. The authors are self-conscious about their list. First, they qualify it: do your own blind tasting and drink what you like, regardless of what they or anyone else says. Second, they clarify that these recommendations are not theirs; they are the winners of a massive blind tasting they conducted around the US, with hundreds of wines and thousands of people. I’ve enjoyed the wines I’ve tasted from their results list, and maybe you will too.