The white wine trap
Thursday, 23 March 2017 - 14:00
Do experts rate red wines more highly than white wines, regardless of price, vintage, and region? Does this mean there is a critical bias in favour of red wines? Is it a flaw in the 100-point scoring system used by the major wine magazines?
Or are red wines inherently more complex than whites, which accounts for the score disparity?
Or is something else going on?
That was the question we wanted to answer when I teamed up with Dr Suneal Chaudhary, a data scientist, wine lover, and former college math professor, to analyse almost 62,000 wine scores, dating back to the 1970s, drawn from reviews in the major wine magazines. The question of why red wines always seem to score higher than whites has puzzled anyone who pays attention to scores.
Critics do seem to favour red wines over white. We found, first, that reds typically score higher than whites, and do so across vintage and region. Second, red wines are over-represented above 90 points, and whites are over-represented below 90 points; in fact, reds are 20% more likely than whites to be rated above 90. Finally, as a score crossed 90 points, both the price of a wine and the variation in that price increased quickly. In some cases this led to counter-intuitive results, such as median reds costing more than more highly rated whites.
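The over-representation finding is, at bottom, a simple tally. Here is a minimal sketch of how such a share could be computed; the colour/score records below are invented for illustration and are not the study's data:

```python
# Hypothetical illustration of the ">90 over-representation" tally.
# These records are invented examples, not the study's 61,809 scores.
records = [
    ("red", 92), ("red", 91), ("red", 89), ("red", 93), ("red", 90),
    ("white", 88), ("white", 91), ("white", 87), ("white", 89), ("white", 90),
]

def share_above(records, colour, cutoff=90):
    """Fraction of a colour's scores strictly above the cutoff."""
    scores = [s for c, s in records if c == colour]
    return sum(s > cutoff for s in scores) / len(scores)

red_share = share_above(records, "red")      # 3 of 5 reds above 90 -> 0.6
white_share = share_above(records, "white")  # 1 of 5 whites above 90 -> 0.2
ratio = red_share / white_share              # reds 3x as likely in this toy data
```

The study's "20% more likely" figure is exactly this kind of ratio, computed over the full set of scores rather than a toy sample.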
None of these things should happen if bias didn’t exist. In looking at the numbers, our goal was not to issue an ultimatum from on high, but to identify what seems to be a problem with the critical process. And we were well aware that there could be many reasons for the scoring disparities that had nothing to do with critical bias.
First, red wines may be inherently better than white wines. This was the thrust of much of the criticism of the study, and several wine writers, including Fred Swan and David Morrison, questioned why there was even a need to do it. Morrison wrote, in fact, that the study was trying to prove something that everyone already knew, what he called a common error in study methodology.
Second, red wines can be more expensive to make than white wines. They require more (and more costly) oak, more ageing, and the land used to grow red grapes can be significantly more expensive, as in Bordeaux and Napa Valley. Given the perception in the wine business that cost equates to quality, the fact that red wines cost more might have affected the findings.
Third, the review process itself may have influenced the study. Not every critic publishes every wine he or she reviews, and those that were published may have been more favourable to reds than whites. That may have skewed the findings, said Mike Veseth, professor emeritus of international political economy at the University of Puget Sound and editor of the Wine Economist website.
Fourth, the scoring process, which is infamous for its inconsistencies, may have accounted for what the study found. Any study is only as good as the data it uses, and if scores are inherently flawed – and there are many reasons to believe they are – then our results come from flawed data, and the study might not have value.
Still, even allowing for those caveats, something seems to be amiss. Three wine academics, including Veseth, reviewed our work before we published it, and each agreed the numbers say that what is happening is more than a coincidence. For one thing, the size of the database, those 61,809 scores, is big enough so that the results are statistically significant according to accepted standards. In addition, how is it possible, unless there is bias, that 90% of the 2010 red wines that we had scores for got 90 points or higher? Could red wines be that excellent naturally?
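The article does not spell out which test underpins that significance claim; a two-proportion z-test is one standard choice for comparing the share of reds and whites scoring above 90. The sketch below uses invented counts, not the study's actual figures:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions (one standard
    choice; the study does not state which test its reviewers applied)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Invented counts for illustration only: say 12,000 of 30,000 reds
# scored above 90, versus 8,000 of 30,000 whites.
z, p = two_proportion_z(12000, 30000, 8000, 30000)
```

With tens of thousands of scores on each side, even a modest gap in proportions produces a very large z and a vanishingly small p-value, which is why the size of the database matters so much to the argument.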
In fact, talk to winemakers and oenologists, the people who deal with red and white wine quality on a daily basis, and three things emerge. First, it makes sense for red wines to consistently score higher than whites, since reds can be more complex, and complexity is seen as integral to a ‘better’ wine.
Much of this is because many of the polyphenols, the chemical compounds in wine that give it its distinctive flavours and aromas, come from grape skins, and the skins are more important in red-wine production than in white, says Matt Brain, winemaker at the Fresno State winery and a lecturer in the school’s department of viticulture and oenology.
But are reds just better?
“We have a different way of approaching reds than we do whites,” says Dr Stephen Menke, associate professor of oenology at the Western Colorado Research Center at Colorado State University. “Because a white wine doesn’t taste like a red, we assume it isn’t as complex, and score it accordingly. But that’s not necessarily anything to do with the wine. That’s perception, and it has to do with how we are trained to judge wine.”
Menke’s other point? That the tannic bitterness in red wines that is taken as one sign of complexity is an acquired taste. Typically, he says, humans prefer sweet over bitter, which dates to when we were hunter-gatherers who had to decide if something was safe to eat by how it tasted. Sweet was safe, while bitter could be poisonous. Our appreciation of bitter, in foods like coffee, took tens of thousands of years to acquire.
Which leads to the third point. Brain says that scores may have as much to do with consensus among critics as they do with the quality of the various wines, something he describes as a compromise among peers. “If they’re rating the wine for complexity and power, then they’re going to rate reds higher than whites,” he says. “That’s the kind of wine that makes the strongest impact on the mind. There are few white wines that cause that kind of complexity. That’s what makes it such an interesting question, and you have to wonder: Is the score sometimes higher than the component parts?”
Hence, this critical approach, by definition, discounts any judgment based on style. If a Sauvignon Blanc or Chardonnay is as varietally correct as it can be, with the proper balance, it will still lose points because it doesn’t fit the complex/powerful guidelines. And, in fact, that’s one reason why we wanted to do the study: Why do so many well-made whites never score more than the high 80s?
One question we didn’t look at, but that may be worth considering: Does this apparent bias affect the marketplace? We know that red wines that score more than 90 points become increasingly more expensive than similar white wines, but does it also explain why white wines tend to be less expensive overall than red wines? And does it affect what consumers buy?
In regions where land prices are similarly costly for both reds and whites, like Burgundy and Napa Valley, there is a difference in price between reds and whites that can’t necessarily be explained by land prices. And in regions where whites predominate, but land prices are still high, like Alto Adige, whites are comparatively less expensive than reds from other parts of Italy.
This dichotomy could become increasingly important in the marketplace as some whites and rosés gain market share. We didn’t include rosés in the study, but their scores also seem to fare poorly compared to reds, based on anecdotal evidence. Will winemakers, faced with difficulty in getting higher prices for whites and rosés, be forced to ignore consumer sentiment and make reds because they can charge more?
Finally, this scoring conundrum could well be part of what Dr Chaudhary has identified as the chicken-egg-chick dilemma, in which the chicken and egg dilemma becomes even more complicated.
Did the chicken come first? Critics rate red wine more highly because it’s more prestigious, but it’s more prestigious because critics have determined that the best wines are more complex and powerful – qualities that are only possible in red wine.
Or did the egg? Producers spend more money to make red wine, using more expensive oak and grapes from the most expensive vineyards, than they do to make white wine. That’s because critics see red wine as more prestigious and producers want to do all they can to get higher scores for a more prestigious wine. That’s what the pricing analysis in the study seemed to support, where we found that red wines cost more to buy than whites with similar or even higher scores. For example, a 93 red cost more than a 94 white.

And where does the chick fit in?
Consumers are willing to pay higher prices for red wine because red is seen as more prestigious than white. And why do consumers see red as more prestigious? Because producers put more resources into it and critics rate it more highly – both on the assumption that red is more valued than white.
Perhaps the best evidence of this chicken-egg-chick dilemma comes from our analysis of scores by region. Reds outscored whites in 20 of the 21 wine regions for which we had data. The only region where whites outscored reds was Germany, where the common perception is that the white wines – which are not powerful, and where the complexity revolves around sweetness rather than tannins – are better than the reds. In other words, reds are always better than whites, except where they aren’t.