This blog is about the ethics of writing. Not writing per se, just the ethics of it. Occasionally I feel compelled to remind readers of this limitation. Especially when the topic is so esoteric that most writers have never written a single word about it, much less a whole sentence. The ethics of writing artificial intelligence fall into this category.

“Artificial Intelligence” is not an oxymoron, although many use it oxymoronically, as in “My AI is smarter than yours is.” It’s a two-word thing. “Artificial” means made or produced by people. When a writer modifies it by linking it to intelligence, actual or otherwise, voilà: you have Artificial Intelligence. Maybe it should be hyphenated, like artificial-intelligence, or perhaps just italicized.

E.E. Cummings would turn over in his grave, but there you have it. Some say it should be further modified: artificial-non-human intelligence. But then the joke would be on the writer, not the modifier.

For short, and to eliminate further debate about its oxymoronic tendencies, let’s call it by its trade name: AI. It gives it a certain nom de plume feel, like its older cousins, GI, Aye Aye, PI, and the ever-popular SI.

AI has ethical norms, albeit applied ethics. They harness the “disruptive” potentials of new AI technologies. The net geniuses have scribbled, cobbled, and artificially imagined a set of paradigms. Technology developers in Silicon Valley and lesser domains will “adhere as far as possible.” Facebook, Twitter, TikTok, Wowee, Louie, and Suzy are in, so we have nothing to fear, right?

Now that the first drafts are available, the first question is whether AI’s ethical guidelines could “harness” human decision-making in machine learning. The short answer is, “Are you kidding?” The longer engineering answer is, “Perhaps, but let’s wait and see.” The engineers got that from the political crowd.

A learned paper on the effort analyzed twenty-two so-called major AI ethics guidelines. It started with a whimper, saying the authors would attempt to “overcome the relative ineffectiveness of these guidelines.”[1] The paper’s authors explained, “AI ethics—or ethics in general—lacks mechanisms to reinforce its own normative claims. Of course, the enforcement of ethical principles may involve reputational losses in the case of misconduct, or restrictions on memberships in certain professional bodies. Yet altogether, these mechanisms are rather weak and pose no eminent threat. Researchers, politicians, consultants, managers and activists have to deal with this essential weakness of ethics.”

From a lay perspective, as a writer, I agree—their ethical enforcement mechanisms “pose no eminent threat.” Good luck dealing with the “essential weakness of ethics.” But if it works, their next project should be the ethics of politics, especially wing-nut variations.

An alternative path to ethical practices in the world of machine learning is an examination of how AI’s designers hope to improve their systems’ ability to perform tasks. For starters, the machines learn from big data sets without human intervention. Is it possible for machines to make decisions without human assessment, or without humans understanding the machines’ pathways of decision-making?[2]

The Machine Intelligence Research Institute, MIRI for short, has a unique mission: “We do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.” The premise (AI could become smarter than humans) is why we need foundational mathematical research. They have done foundational work on ethics as well as mathematics. “The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves.”[3]

MIRI’s research is fascinating given they want to protect not just human folk, but “other morally relevant beings” as well. Who might they be? And to make it even more intriguing, they are looking into “the moral status of the machines themselves.” Who would have thought that AI would bring ethics to nonhumans and morality to machines? We are definitely well beyond the twilight zone and into the ether world of mechanical thought. Stay tuned.


Gary L Stuart

I am an author and a part-time lawyer with a focus on ethics and professional discipline. I teach creative writing and ethics to law students at Arizona State University. Read my bio.

If you have an important story you want told, you can commission me to write it for you. Learn how.


[1] https://link.springer.com/article/10.1007/s11023-020-09517-8

[2] https://www.apc.org/en/blog/inside-digital-society-why-do-we-need-ethics-ai

[3] https://intelligence.org/files/EthicsofAI.pdf