My favorite all-time pundit and cynic defined the word “predict” in a way that never made sense—until now. Ambrose Bierce said it was a transitive verb: “To relate an event that has not occurred, is not occurring, and will not occur.”[1] COVID-19 is on us, in us, around us, and may do a lot of us in. But this blog is not about actuality; it’s about predicting the future during a public health crisis of unfathomable consequence. It’s about the ethics of writing predictions in the COVID-19 era.

Most of us are not professional predictors. So we rely on scientists, statisticians, data geeks, and epidemiologists to predict our future by guessing at the parameters, assuming consistency, and working around the absence of research-based predictive tools. That’s the best they can do in an environment drowning in uncertainty. So what ethical norms might apply, given how much we don’t know about the dreaded COVID-19?

Let’s start with a prediction created by humans with the help of artificial intelligence. “An artificial intelligence tool accurately predicted which patients newly infected with the COVID-19 virus would go on to develop severe respiratory disease, a new study found.”[2] The not-so-fine print expanded the prediction: “While work remains to further validate our model, it holds promise as another tool to predict the patients most vulnerable to the virus, but only in support of physicians’ hard-won clinical experience in treating viral infections.”

Is this AI helping humans or humans moderating AI?

The AI study noted, “Limitations of the study, say the authors, included the relatively small data set and the limited clinical severity of disease in the population studied. The latter may be due in part to an as yet unexplained dearth of elderly patients admitted into the hospitals during the study period.”

The predictive tool did identify which newly infected patients would go on to develop severe disease. It does hold promise as a tool. It does not replace clinical experience. It was based on a small data set. It noted an unexplained dearth of elderly patients. From an ethical perspective, this study does its job. It reads as honest, limited, not exaggerated. Those are ethical norms recognized and used by the authors.

Words matter. Ethics matters. Let’s hope for more of that. Let’s call it AI ethics.



[1] The Enlarged Devil’s Dictionary by Ambrose Bierce, compiled and edited by Ernest J. Hopkins, Doubleday & Company, Garden City, New York, 1967 at 225.

[2] https://www.sciencedaily.com/releases/2020/03/200330152135.htm