When I watch the neighbour’s girls get out of the car with two red-and-white drink containers in their hands, I’m not too jealous. It’s overly sweet, bubbly stuff they drink, and there’s an unpalatable aftertaste to it. Pure water is better than most liquids people consume. There’s one exception, though, that I’ll vouch for, unhesitatingly: beer. It’s delicious. It’s as if food could be liquefied. It’s fresh grains in a few sips. I can easily drink a gallon of it.
People seem to have smaller stomachs than I do, but still, I was astounded when I heard that the Canadian Centre on Substance Use and Addiction (CCSA) had recently found evidence prompting it to advise humans not to drink more than two glasses a week. It was followed hot on its heels by a statement from the United States Department of Agriculture (USDA) that current advisories are being revised and that updated recommendations might “land closer to Canada’s.” Two glasses a week? You mean, those puny little containers of a few ounces? I need my bucket, and I need it full. So, where do those guidelines come from? The CCSA claims to have “updated their 2011 guidance to the latest evidence.” When I hear that, I wonder: what evidence might that be? Alcohol has been around for a few thousand years, so I’d expect we’d already have a solid scientific understanding of its impact. Of course, new evidence can arise, but for it to upset a decades-old consensus, it has to be strong.
Let’s have a closer look at how the CCSA arrived at this “updated” guidance. There is a vast body of literature on health outcomes associated with alcohol consumption. Some of it actually points to positive outcomes, though a wide range of detrimental effects is, of course, reported as well. The CCSA is aware of how expansive the literature on this subject is, which prompted it to establish “rigorous” selection criteria for which results would be included for further processing. More details can be found in the CCSA’s guidance and the associated technical report.
The first criterion is whether the study meets the “Population, Exposure/Comparison, Outcome” setup, which may still sound reasonable. The decision tree then continues the screening by looking for studies that satisfy:
1. Comprehensive literature search
2. Characteristics of included studies
3. Quality of included studies
4. Inclusion and exclusion criteria
At this point, we need to pause and evaluate the above. Science is a field in which independent researchers conduct studies in ways they deem appropriate, and the quality of that research is assessed by peer review. Different scientists will set up studies in different ways, and that is perfectly fine from the perspective of the scientific process. For that reason, the criteria above may well exclude high-quality studies simply because the CCSA judged their list of literature references shorter than it would have liked, or, worse, because they did not use the exact inclusion and exclusion criteria the CCSA deems to be the “standard.” While this second step in the CCSA’s decision process has some scientific underpinning, it can readily be used to “massage” results in a preferred direction. We already see a hint that the selection is too stringent in the fact that the third step includes an option for how to proceed if only one study passes step two; in an objective analysis, more than one study should pass the second step.
If we allege that results may be cherry-picked in the first two steps of the CCSA’s decision process, we find that assertion corroborated, to our astonishment, in the third step. The third step specifically states that if more than one study per outcome passes the second step, only a single study will be retained: either the one that passed the most decision criteria, or the most recent one in case of a tie. Truly? This third step runs contrary to how a scientifically objective meta-analysis would actually be conducted. Recall that the input studies are systematic reviews, in which the authors aim to perform the same task as the CCSA: review the existing literature and draw a general set of conclusions as to how salubrious alcoholic epicurean indulgence may or may not be, usually with respect to one or more outcomes. Each of these reviews will have somewhat different results. The task of a meta-analysis is then to draw cross-study, population-wide conclusions, whose significance actually increases as more studies are included. To select just one systematic review per outcome and treat its results as the absolute truth for that outcome thenceforth is a very questionable practice.
(If your selection process is this stringent, you will inevitably conclude that Wild Horse Wisdom passes the test to subscribe to.)
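For contrast, here is a minimal sketch, in Python and with made-up numbers, of how a conventional meta-analysis combines several studies rather than crowning a single winner: each study’s estimate is weighted by the inverse of its variance, so the pooled estimate grows more precise as studies are added. Nothing below is taken from the CCSA’s source reviews; it only illustrates the standard pooling technique.

```python
# A fixed-effect meta-analysis in miniature: pool log relative risks from
# several studies by weighting each with the inverse of its variance.
# The numbers are invented for illustration, not taken from the CCSA's sources.
import math

# (log relative risk, standard error) per hypothetical systematic review
studies = [(0.10, 0.05), (0.04, 0.08), (0.12, 0.06), (-0.02, 0.10)]

weights = [1.0 / se ** 2 for _, se in studies]                 # precision weights
pooled_log_rr = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))                      # shrinks as studies are added

low, high = (math.exp(pooled_log_rr + z * 1.96 * pooled_se) for z in (-1, 1))
print(f"pooled RR: {math.exp(pooled_log_rr):.3f}  (95% CI {low:.3f}-{high:.3f})")
```

The point is not these particular numbers but the behaviour of the pooled standard error: it shrinks as more studies are included, and retaining a single review per outcome throws that advantage away.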
As if the third step in the CCSA’s decision process weren’t sufficiently restrictive, a fourth step is added, which only selects studies that fulfil:
1. Outcome causally related to alcohol use
2. Outcome associated with an ICD-10 code
3. Presence of a dose–response or dose-stratified meta-analysis
Here again, it is striking that high-quality studies may be excluded from further processing for merely administrative reasons, such as the authors having neglected to link the outcome to an ICD-10 code.
Having scrutinized the criteria the CCSA uses to select scientific results, let’s check, by the numbers, the assertion that those criteria are excessively restrictive. The CCSA started out from a total of 5,953 candidate studies. That sounds like a reasonable number to base wide-ranging conclusions upon, right? However, after the screening process described above, only sixteen studies remain. That’s right: 5,937 out of 5,953 studies, or 99.7% of those considered, were discarded. Scientific rigour is laudable, but when it discards vast amounts of data that could have led to different conclusions, the true conclusion may be that there has been too much of an attempt at “rigour.”
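For readers who like to see the funnel laid out explicitly, here is a loose sketch of the four-stage cascade as I read it from the guidance. The record fields and thresholds are stand-ins of my own, not the CCSA’s actual implementation; only the headline counts (5,953 candidates in, sixteen out) come from the report.

```python
# A loose sketch of the four-stage screening cascade described above.
# Fields and thresholds are hypothetical stand-ins; only the headline counts
# (5,953 candidate studies in, sixteen out) come from the CCSA's report.
from dataclasses import dataclass

@dataclass
class Review:
    outcome: str
    year: int
    meets_peco: bool      # step 1: "Population, Exposure/Comparison, Outcome" setup
    quality_score: int    # step 2: how many of the four quality items are satisfied
    causal_link: bool     # step 4: outcome causally related to alcohol use
    icd10_coded: bool     # step 4: outcome associated with an ICD-10 code
    dose_response: bool   # step 4: dose-response or dose-stratified meta-analysis

def screen(reviews: list[Review], min_quality: int = 3) -> list[Review]:
    step1 = [r for r in reviews if r.meets_peco]
    step2 = [r for r in step1 if r.quality_score >= min_quality]
    # Step 3: keep only ONE review per outcome -- highest quality score,
    # most recent year on ties -- exactly the practice questioned above.
    winners: dict[str, Review] = {}
    for r in step2:
        best = winners.get(r.outcome)
        if best is None or (r.quality_score, r.year) > (best.quality_score, best.year):
            winners[r.outcome] = r
    step3 = list(winners.values())
    # Step 4: the administrative requirements.
    return [r for r in step3 if r.causal_link and r.icd10_coded and r.dose_response]
```

The narrowest gate in a cascade like this is step 3, which caps the survivors at one review per outcome no matter how many good ones exist.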
Now that we are aware of the selection process and the limited number of selected studies, let’s have a closer look at which studies managed to pass all the criteria. For a public health agency like the CCSA to consider them, one would expect each of these studies to demonstrate an indisputable causal effect between alcohol consumption and long-term health outcomes, right? And to have an impact on public health policy, those outcomes must be vicious types of cancer, right? Nothing could be further from the truth. Here is one study that passed the test, and it links drinking alcohol to … pneumonia.
Pneumonia? Every grad student knows that pneumonia is predominantly caused by infection, not by having a beer. Let’s look at what other sources say about this. Wikipedia, a source so accepted by the powers that be that it gets cited by the Trusted News Initiative, reads: “Pneumonia is due to infections caused primarily by bacteria or viruses and less commonly by fungi and parasites.” In other words, not by drinking beer. I’m sorry, but no matter how “rigorously” the scientists who causally linked alcohol to pneumonia set their study up, they did not detect a primary causal effect. While they wrangled the statistics in a way that passes “causality standards,” what these scientists were really observing are secondary effects. For instance, people who drink alcohol are more often engaged in social interactions at parties and bars and may, therefore, be more likely to pick up the pathogens that cause pneumonia. Likewise, severe alcoholics have a weakened immune system, which makes them more susceptible to pneumonia (and other diseases). Yet such secondary effects should not lead to a change in policy that affects the general population.
You’d think that a study linking alcohol to pneumonia would be the only surprising entry in this puny list of sixteen selected studies. But there are two more studies that link alcohol to … accidents. One has “injury” in general as the outcome, whereas the second links alcohol to “motor vehicle injury.” We all know that alcohol impairs driving and that impaired driving may lead to a traffic accident. However, such results should not be used to justify policy changes on alcohol consumption. Impaired driving can be countered in other ways: by enhanced traffic enforcement, or, as the US National Transportation Safety Board is instructing US car manufacturers to implement, by alcohol detection in vehicles. Again, studies that link alcohol consumption to traffic and other injuries are exactly the ones that should be excluded from a list used to set consumption policy. Yet in the CCSA’s “update,” we find two such studies included, while 5,937 others did not “meet the bar.”
Even after “rigorous” selection of the studies that “meet the standards,” the remaining question is how to translate those results into a recommended number of glasses to consume at a given frequency. This is where the CCSA decided to depart from the approach adopted in 2011 and “update” its methods to the “latest scientific evidence.” Adapting methods from two papers published in 2014 and 2016, the CCSA decided to assess the impact of alcohol consumption by building lifetime risk models: essentially, statistical models that relate years of life lost (YLL) to weekly consumption.
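To give a feel for what such a model does (and emphatically not to reproduce the CCSA’s actual one), here is a toy version: assume a constant baseline mortality hazard, scale it by a hypothetical dose-dependent relative risk, and compare the resulting expected lifetimes. Every parameter below is invented for illustration.

```python
# Toy lifetime-risk model: expected years of life lost (YLL) versus weekly
# drinks. Every parameter is invented for illustration; this is NOT the
# CCSA's model, only the general shape of such a calculation.

def relative_risk(drinks_per_week: float) -> float:
    # hypothetical dose-response: +0.1% all-cause mortality per weekly drink
    return 1.001 ** drinks_per_week

def expected_lifetime(rr: float, base_hazard: float = 0.012, horizon: int = 110) -> float:
    # discrete-time survival with a constant baseline hazard scaled by rr
    surviving, years = 1.0, 0.0
    for _ in range(horizon):
        years += surviving
        surviving *= 1.0 - min(base_hazard * rr, 1.0)
    return years

baseline = expected_lifetime(relative_risk(0))
for drinks in (0, 2, 7, 14):
    yll = baseline - expected_lifetime(relative_risk(drinks))
    print(f"{drinks:>2} drinks/week -> {yll * 12:5.2f} expected months of life lost")
```

With these made-up parameters the losses come out at a few months at most; the deeper issue, though, is how much trust such models deserve in the first place.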
At this point, I would remark that there seems to be excessive trust in models in today’s society. Certain types of models are highly reliable; models that simulate chemical manufacturing plants, for instance, have proven very effective. However, the public health sector does not have the best track record with respect to modeling. Neil Ferguson at Imperial College, for instance, oversaw the models that forecast COVID deaths by the day and which, depending on the day, were sometimes off by a factor of a hundred or more. If chemical manufacturers ran models of such poor quality, Houston would by now be a toxic chemical wasteland, with plants exploding on a weekly basis.
Another remark, well known among data scientists, is that a model is only as good as the data it is built from. In other words, what a model predicts or establishes can be steered by being selective about the data it is based upon. We have already seen how stringent the selection process was in this case. Yet even with input that was close to hand-picked, it still seems to have been a struggle to derive lowered alcohol consumption guidance from this model. To arrive at the conclusion that only two beers per week pose a “low risk,” the CCSA set the bar at 17.5 years of life lost (YLL) per 100 people, or, in other words, about two months of life estimated “lost” per individual. By contrast, if an individual is willing to accept the risk of half a year of life lost (note that this is not even a guaranteed outcome, just an average), then even by the CCSA’s most “rigorous” model the recommendation could have stayed where it was back in 2011. I’ll leave it up to the reader to decide whether they’d rather have a life expectancy of 82.1 years (Canada’s 2019 average) by being teetotalers, 81.9 years by drinking only two beers a week, or 81.6 years by consuming what was acceptable even by the CCSA’s standards prior to 2023.
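The arithmetic behind that comparison fits in a few lines; the thresholds and life-expectancy figures are the ones quoted above, and the conversion is nothing more than dividing by 100 and multiplying by 12.

```python
# Converting the risk thresholds quoted above into per-person terms.
def per_person_months(yll_per_100_people: float) -> float:
    return yll_per_100_people / 100 * 12

print(per_person_months(17.5))   # ~2.1 months: the "low risk" bar behind two drinks a week
print(per_person_months(50.0))   # 6.0 months: the half-a-year level mentioned above

# Life expectancies quoted in the text (Canada's 2019 average as the baseline):
for label, years in [("teetotaler", 82.1), ("two drinks/week", 81.9), ("pre-2023 guidance", 81.6)]:
    print(f"{label:>17}: {years} years ({(82.1 - years) * 12:.1f} months below the teetotaler)")
```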
The reader may have valid reasons to opt for the zero-alcohol path, such as a history of addiction or religious beliefs, but unfortunately my conclusion is that the CCSA’s “scientific guidance” is not one of them. What I am observing here fits into a broader trend emanating from today’s government agencies, as well as from the hazy web of non-profits associated with them. There is a politically desired outcome; the “scientific” investigation is then set up to meet that outcome; and the results are presented as “rigorous science.” They look carefully researched, but under scrutiny the house of cards quickly falls apart.
I will dissect a few more such examples in the future. But one thing I know for sure right now: given that the CCSA’s guidelines lack a setup amenable to balanced conclusions, they will not stop me from having my bucket of beer. I will also keep reading the publications that find health benefits in beer. It’s liquid grains; it cannot be that bad. Beer for the horses, and whiskey for my man! I know my owner enjoys his shot of bourbon out on the porch from time to time … Let the man enjoy what he likes, even if at age 81 he may die a month sooner for it.
Everyone knows that meta-analysis is the gold standard, and the CCSA's study is the EXACT OPPOSITE. Why limit yourself to a single study per outcome? Do they not know how to statistically combine the results of multiple studies? I wish I could believe they are that incompetent; unfortunately, that is the best-case scenario. I fear the truth is much more sinister.
I’m in Australia, Canada’s twin sister. After watching RFK Jr. talk about food colouring made from petroleum, I checked out the TGA standards. Similarly illogical, similarly to the US … strangely lax about recognised toxins in our food. Strange times we live in.