Monday, September 21, 2020

Risk assessments can be very risky

In the past, the general public has often simply accepted all publicly available information as being okay, at least for practical purposes, although some people are skeptical about social and political biases in the media. However, in the modern world, people have become much more aware that the internet and its social media are major sources of mis-information (eg. Misleading information about Covid-19 spreads through texts and emails), and there are now even books on the topic (eg. Cindy L. Otis: True or False — A CIA Analyst's Guide to Spotting Fake News). These days, fake information from the internet even makes it into the traditional media (eg. see my post on False reports of US women's breast sizes).

Sadly, in the professional world, it sometimes goes way beyond this. For example, there is a thing called Forensic Engineering, which investigates construction mishaps of all types. Such mishaps often involve failures by professional engineers, and these need to be checked and corrected, to help prevent future events. In this case, 20:20 hindsight is considered to be a valuable thing. If you want to read some brief reports of such investigations, then the Brady Heywood site has quite a few examples (as podcasts, blog posts, and publications).


The point here is that the human failures are of two different types. There are mis-calculations by professional experts, which go undetected, and there is pig-headedness by supervising bureaucrats, who fail to heed the warning signs. My own interest is in the former, in that my professional experience as a scientist is that things always need to be double-checked — no matter how confident your colleagues profess to be in their own work, or how insulted they claim to be that you are checking on them in the first place.

A recent biological example, of some relevance to us all, involves the original human dosing study of the insecticide chlorpyrifos, which has been used for treating crops, animals, and buildings. The study that brought the problems to light is: Flawed analysis of an intentional human dosing study and its impact on chlorpyrifos risk assessments; and there is a more readable interview with the main author here: Data omission in key EPA insecticide study shows need for review of industry analysis.

Here is the authors’ summary of their work:
In March 1972, Coulston and colleagues at the Albany Medical College reported results of an intentional chlorpyrifos dosing study to the study’s sponsor, Dow Chemical Company. Their report concluded that 0.03 mg/kg-day was the chronic no-observed-adverse-effect-level (NOAEL) for chlorpyrifos in humans. We demonstrate here that a proper analysis by the original statistical method should have found a lower NOAEL (0.014 mg/kg-day) ... The original analysis, conducted by Dow-employed statisticians, did not undergo formal peer review; nevertheless, EPA cited the Coulston study as credible research and kept its reported NOAEL as a point of departure for risk assessments throughout much of the 1980s and 1990s ... This work demonstrates that reliance by pesticide regulators on research results that have not been properly peer-reviewed may needlessly endanger the public.
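To make the NOAEL idea concrete, here is a minimal sketch in Python (assuming the scipy library is available). The numbers are invented purely for illustration; only the two dose levels come from the summary above. In essence, a NOAEL is the highest tested dose at which the measured response is not significantly worse than that of the control group:

    from scipy import stats

    # Hypothetical enzyme-activity measurements: the values are made up, and
    # only the dose levels (mg/kg-day) are taken from the quoted summary.
    control = [100, 99, 101, 102, 98, 100]
    groups = {
        0.014: [99, 97, 100, 101, 96, 98],
        0.03:  [90, 88,  92,  91, 89, 87],
    }

    noael = None
    for dose in sorted(groups):              # test doses from lowest to highest
        t, p = stats.ttest_ind(control, groups[dose])
        mean = sum(groups[dose]) / len(groups[dose])
        adverse = p < 0.05 and mean < sum(control) / len(control)
        print(f"dose {dose}: p = {p:.3f}{' (adverse)' if adverse else ''}")
        if adverse:
            break                            # first dose with a detectable effect
        noael = dose                         # highest dose with no detected effect
    print("NOAEL:", noael)                   # -> 0.014, with these made-up numbers

The crucial point is that a NOAEL is only as good as the statistical analysis behind it: change the analysis, or the data fed into it, and the NOAEL changes too.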
Note that there are actually several different issues here, almost all of which are also repeated themes in other fields, such as Forensic Engineering. It is this diversity of issues that is my main warning about risk assessments:
  • although the actual experiment was done independently, the study design and data analysis were not, but were carried out by an organization with a clear conflict of interest
  • the study design and data analysis were not reviewed by an independent person (or persons), either formally or (apparently) informally
  • the results were not reviewed adequately by any of the government-funded regulatory bodies responsible for implementing any conclusions (eg. US EPA, FAO/WHO)
  • the results were (apparently) never scrutinized in the professional literature, in spite of being frequently cited
  • the study design was complicated by having measurements taken at inconsistent times
  • the data analyses were then limited by the design complication, and the subsequent attempts to overcome this (including omitting some of the data)
  • the study's conclusion missed the true one by a long way (at least double, in this case)
  • the wrong conclusion was potentially life-threatening for bystanders — the chemical was subsequently (wrongly) approved for use in homes (eg. flea treatments for pets), as well as on crops.


My point here is not to examine the details of this one example, which you can read for yourself if you are interested. However, the original technical issues are worth reviewing: 1) the actual study design had little statistical power to detect the true effect of the chemical; and 2) valid data points were omitted from the analysis, which obscured an effect that the study would otherwise have detected.
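The second of these issues is easy to demonstrate. Here is a minimal sketch (again in Python with scipy, and again with made-up numbers, not the Coulston data) of how dropping valid observations can hide a perfectly real effect:

    from scipy import stats

    # Hypothetical enzyme activity measured on 8 occasions for a control group
    # and a dosed group; in the dosed group, the depression develops only in
    # the later measurements, as the chemical accumulates.
    control = [100, 101,  99, 100, 102,  98, 100, 101]
    dosed   = [100,  99, 101,  98,  92,  90,  88,  86]

    # Using all of the data, a two-sample t-test detects the depression.
    t, p = stats.ttest_ind(control, dosed)
    print(f"all 8 occasions: p = {p:.3f}")     # p ~ 0.016: a clear effect

    # Omitting the later measurements (eg. because their timing was judged to
    # be "inconsistent") discards exactly the observations carrying the effect.
    t, p = stats.ttest_ind(control[:4], dosed[:4])
    print(f"first 4 only:    p = {p:.3f}")     # p ~ 0.54: no effect detected

Nothing about the trimmed analysis looks wrong in isolation; the problem is visible only if you know what was left out.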

As an expert in designing biological experiments (I even used to teach this subject to both undergraduate and postgraduate biology students), I consider the Coulston et al. study to be inadequate by modern standards (and, incidentally, also unethical). If one of my students had turned up with this design, they would never have passed the exam. Therefore, I think that even a 1971 review by an independent researcher would have severely questioned the practical usefulness of the study.

The data analyses were performed using the tools available at the time, which were (sadly) inadequate compared to what we now have. However, even then, the outcomes were poorly interpreted. Part of the issue here is the perennial failure to distinguish between experimental effects that are “statistically significant” and those that are “biologically significant” (see Biological importance and statistical significance). In this case, the main problem was deciding beforehand what the pattern of biological effect would be, and then analyzing the data to detect only that pattern, rather than the effect that actually did occur in the experiment.
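The same point can be made with one more small sketch (invented numbers once again): an analysis that is set up to detect only one pre-specified pattern, in this hypothetical case a difference between the baseline and the final measurement, can completely miss an effect that occurred at a different point in the study:

    from scipy import stats

    # Hypothetical enzyme activity for 6 subjects, measured at baseline, in
    # the middle of the dosing period, and at the end (after recovery).
    baseline = [100, 102,  98, 101,  99, 100]
    middle   = [ 88,  90,  86,  91,  87,  89]   # a clear depression
    final    = [ 99, 101,  97, 102,  98, 100]   # back near baseline

    # The pre-specified baseline-versus-final comparison detects nothing ...
    t, p = stats.ttest_rel(baseline, final)
    print(f"baseline vs final:  p = {p:.3f}")   # p ~ 0.20: not significant

    # ... but the effect was there, at the time point the analysis ignored.
    t, p = stats.ttest_rel(baseline, middle)
    print(f"baseline vs middle: p = {p:.3f}")   # p < 0.001: unmistakable

The statistics are not wrong in either case; the wrong question was simply asked of them.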

Sadly, the US EPA immediately used the study when making regulatory decisions regarding human health; and it then subsequently decided not to institute a review, despite growing evidence during the 1980s that the chemical might pose a health hazard in residential environments. More bizarrely, as late as 2017 the US EPA administrator formally continued the existing registration of chlorpyrifos for use, despite the fact that, the year before, the EPA’s own scientists had explicitly recommended a new official reference dose that would effectively have discontinued all uses of the chemical.

Conclusion

The bottom line here is that the weak link when assessing risks is always humans. As humans, we have only one line of defense: we need to double- and triple-check everything. You'd think that after a few thousand years of recorded history we would have gotten the message; but obviously not. We all make mistakes, but if those mistakes threaten the lives of other humans, then we need to employ our defense, always. Life has enough threats as it is — we don’t need to create extra (unnecessary) ones of our own.
