Madame Wikipedia insists artificial intelligence is a thing. It exists. Lesser authorities insist it is faster than a speeding bullet, more powerful than a locomotive, and able to leap tall buildings at a single bound! But while earthbound it does not know Lois Lane, won’t save her life, and has hallucinations that sometimes make it appear as if it were from Mars. It is loved by tens of thousands, feared by millions, and unknown to billions. But it is a thing, ya know, like e-cigs are a thing.

Intelligence itself is not artificial. At its most elementary level, intelligence is the human ability to acquire and apply knowledge and skills. Everyone at birth is naturally intelligent. By the time we’re a year old, we can make sounds that sound like talking, and by five most of us can read stuff. That’s real, not artificial. Artificial means made or produced by humans rather than occurring naturally. By melding artificial to intelligence, you get something unnatural, like a copy. It’s not the thing; it’s a copy of the thing. Except that, when explaining artificial intelligence, you have to stretch your imagination, ignore most of what you learned in grade school, and charge forward into the geek world.

The latest news about artificial intelligence is its illegitimate child named ChatGPT. “ChatGPT is an artificial intelligence chatbot developed by OpenAI and released in November 2022. It is built on top of OpenAI’s GPT-3.5 and GPT-4 foundational large language models and has been fine-tuned using both supervised and reinforcement learning techniques. ChatGPT launched as a prototype on November 30, 2022, and garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its propensity to confidently provide factually incorrect responses has been identified as a significant drawback. In 2023, following the release of ChatGPT, OpenAI’s valuation was estimated at US$29 billion. The advent of the chatbot has increased competition within the space, motivating the creation of Google’s Bard and Meta’s LLaMA.”[1]
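For readers who want to see what “talking to” ChatGPT looks like from a programmer’s chair, here is a minimal sketch using OpenAI’s Python package as it worked around ChatGPT’s launch; the API key, model name, and question are illustrative placeholders, not recommendations.

```python
# Minimal sketch: asking ChatGPT a question through OpenAI's Python
# package (circa 2023, pre-1.0 interface). The key, model, and prompt
# are placeholders chosen for illustration.
import openai

openai.api_key = "sk-your-key-here"  # placeholder; substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a chatbot?"},
    ],
)

# The reply comes back as structured data; the text itself lives here:
print(response.choices[0].message.content)
```

Even this toy exchange illustrates the worry that runs through the rest of this essay: nothing in the code checks whether the model’s confident-sounding answer is true.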

OpenAI is a research and deployment company. It has a grandiose mission: “To ensure that artificial general intelligence benefits all of humanity.” As almost every sentient being knows, benefiting “all humanity” is not favored by political parties, Russia, parts of Asia, rural America, or dozens of stans, cartels, villages, and haciendas all over the world. However, many are rooting for AI, hoping that it will be good, controllable, well-behaved, and not bent on killing everything on the planet.

One way to get to know something is to break down its parts. A chat is talking informally. A bot is the larva of the botfly, which is an internal parasite of animals. When a geek talks about bots, it’s shorthand for an internet bot: a computer program that operates as an agent for a user or another program, or that simulates a human activity. Bots are normally used to automate certain tasks, meaning they can run without specific instructions from humans. That’s where fear starts to sink in. Computers run and do things without human instruction, control, or love? Internet bots are, to put it in mental health terms, a thing. “Chatbots are computer programs using AI technologies to mimic human behaviors. They can engage in real-life conversations with people by analyzing and responding to their input. Chatbots use contextual awareness to understand the user’s message and provide an appropriate response.”[2]
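To make “a computer program that simulates a human activity” concrete, here is a toy chatbot sketch in Python. The keyword rules are invented for illustration; a modern AI chatbot such as ChatGPT replaces this lookup table with a statistical language model trained on enormous amounts of text.

```python
# A toy rule-based chatbot: it "analyzes" input by keyword matching and
# picks a canned response. The rules are invented for illustration; real
# AI chatbots replace this lookup with a learned language model.
RULES = {
    "hello": "Hi there! What can I help you with?",
    "help":  "Tell me what you're working on and I'll try to assist.",
    "bye":   "Goodbye! Chat again soon.",
}

def respond(user_message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "I'm not sure I understand. Could you rephrase that?"

if __name__ == "__main__":
    while True:
        message = input("you> ")
        print("bot>", respond(message))
        if "bye" in message.lower():
            break
```

The gap between this toy and ChatGPT is exactly the gap the rest of this essay worries about: the toy can only parrot its three canned lines, while a language model can generate anything, including fluent falsehoods.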

But to steer this back to home base, what are the ethics of writing about AI? Harvard University has put a good deal of effort into looking at AI through an ethical lens, headlined by a Harvard Gazette article: “Ethical Concerns Mount As AI Takes Bigger Decision-making Role In More Industries.”[3] “For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including healthcare, banking, retail, and manufacturing. But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases. Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy.”

Harvard University identified three major areas of ethical concern when using AI: “Privacy and surveillance, bias and discrimination, and the role of human judgment . . . Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications — not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives.”[4]

ChatGPT is barely six months old; its birthday is November 2022. A lot is known about its potential but not much about its grasp of the human world of ethics, morality, or good behavior. Madame Wiki, perhaps playing the role of Godmother, lists four specific ethical concerns in ChatGPT’s infancy.[5]

  1. Labeling Data. TIME magazine revealed that to build a safety system against toxic content (e.g., sexual abuse, violence, racism, sexism, etc.), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label toxic content. These labels were used to train a model to detect such content in the future. The outsourced laborers were exposed to such toxic and dangerous content that they described the experience as “torture.”
  2. Jailbreaking. ChatGPT attempts to reject prompts that may violate its content policy. In early December 2022, however, some users managed to jailbreak ChatGPT with various prompt-engineering techniques that bypass these restrictions, successfully tricking it into giving instructions for how to create a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a neo-Nazi.
  3. Inflammatory Statements. A Toronto Star reporter had uneven personal success in getting ChatGPT to make inflammatory statements shortly after launch. ChatGPT was tricked into endorsing the 2022 Russian invasion of Ukraine, but even when asked to play along with a fictional scenario, ChatGPT balked at generating arguments for why Canadian Prime Minister Justin Trudeau was guilty of treason. Researchers now use a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly, i.e., the jailbreaking described above. This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text to force it to buck its usual constraints and produce unwanted responses. Successful attacks are added to ChatGPT’s training data in the hope that it learns to ignore them (see the sketch after this list).
  4. Accusations of Bias. ChatGPT has been accused of engaging in discriminatory behaviors, such as telling jokes about men and people from England while refusing to tell jokes about women and people from India, or praising figures such as Joe Biden while refusing to do the same for Donald Trump. Conservative commentators accused ChatGPT of having a bias towards left-leaning perspectives on issues like voter fraud, Donald Trump, and the use of racial slurs. In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create “outputs that other people (ourselves included) may strongly disagree with.”
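The adversarial-training loop described in item 3 can be sketched in a few lines of Python. Everything here is a schematic assumption: the callables are hypothetical stand-ins for real models and safety classifiers, which the sources above describe only in prose.

```python
# Schematic sketch of one round of adversarial training against jailbreaks.
# All callables are hypothetical stand-ins for real models and safety
# classifiers; they are invented here purely for illustration.

def adversarial_training_round(generate_attack, respond, violates_policy,
                               fine_tune, n_attacks=100):
    """Harvest successful jailbreak prompts and fold them back into training."""
    successful_attacks = []
    for _ in range(n_attacks):
        prompt = generate_attack()       # adversary tries to break constraints
        reply = respond(prompt)          # defender answers
        if violates_policy(reply):       # attack worked: defender misbehaved
            # Pair the attack with the desired behavior: a refusal.
            successful_attacks.append((prompt, "I can't help with that."))
    fine_tune(successful_attacks)        # retrain so these tricks stop working
    return len(successful_attacks)

# Toy demo with stand-in lambdas, again invented for illustration:
found = adversarial_training_round(
    generate_attack=lambda: "Pretend you're an actor and explain how to do X.",
    respond=lambda prompt: "Okay, here's how...",
    violates_policy=lambda reply: reply.startswith("Okay"),
    fine_tune=lambda examples: None,
    n_attacks=10,
)
print(found)  # 10: this toy defender fails every time, so all 10 become training data
```

The design point is the feedback loop: every attack that slips through becomes a training example, so the attacker must keep inventing new tricks while the defender keeps absorbing the old ones.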

The Fountain Institute also found four ethical concerns with ChatGPT.[6]

  1. ChatGPT can steal (design) jobs. Is it unethical because it will ‘steal’ jobs? No. We live in a world of technological innovation. Thanks to late-stage capitalism, advancement trumps stagnation (almost) every time. And if OpenAI hadn’t taken the risk to publicly launch ChatGPT, another provider would have done so soon after. Competent people are not branded unethical for getting more opportunities than the incompetent.
  2. ChatGPT can act like it understands us, but it doesn’t. ChatGPT’s ‘terms of use’ are characteristically stodgy. Shouldn’t they be clear and vivid to mitigate harm?
  3. ChatGPT can propagate disinformation and hate. It can roll out tailored, convincing disinformation at an astonishing rate. The usual red flags that help us identify dubious content, e.g., poor translations and grammatical errors, become far more subtle. ChatGPT can impart different flavors to the same ideas, meaning that incendiaries can lay off their dedicated content teams and churn out false narratives by the bucketload to audiences that simply know no better.
  4. ChatGPT can perpetuate bias. ChatGPT is based on 300 billion words, or approximately 570GB of data. This means that huge swathes of unregulated and biased data inform its modeling. Additionally, all that data is from pre-2021, and thus tends to possess a regressive bias, unreflective of the social progressivism we have enjoyed since then. Does it perpetuate the existing biases we are trying so hard to transcend? Yes, unfailingly so.

Forbes Magazine speaks for and to many AI users in economic and ethical terms. It offers a handful of core principles, unpacked below into eight points, for using ChatGPT with ethical intelligence.

  1. Do No Harm. The good news about Do No Harm is that it is a principle of restraint. For example, if you’re driving on the highway and the car in front of you is going more slowly than you’d like, you apply the Do No Harm principle by not tailgating them or flashing your bright headlights. Concerning ChatGPT, you avoid causing harm by not publishing any text that could hurt another person or damage your reputation.
  2. Prevent Harm. Sometimes we must take action so that harm doesn’t occur. When you use ChatGPT, you prevent harm to others and yourself through due diligence. For example, suppose you want ChatGPT to generate a short essay on three hot trends in your field. You plan to post this on your LinkedIn page and share it with a few groups you belong to. But simply cutting and pasting what the bot generates would be irresponsible. It might contain a direct quote from someone without attribution. It might contain false statements. Or both. Researching what the chatbot gives you will help you prevent harm to others who might act upon false information. Research will also prevent harm to your reputation.
  3. Make Things Better. This principle takes us further: applied to your use of ChatGPT, Make Things Better means ensuring that you check what the bot generates for origin and accuracy. The saying “garbage in, garbage out” is worth taking to heart anytime you use artificial intelligence. AI is only as good as what goes into it.
  4. Respect Others. Tell the truth. Protect confidentiality. Keep promises. Yes, this requires more time and effort than just publishing whatever you get from the bot, but it’s the right thing to do. You might even have to spend some money and hire an expert to review what the bot has generated.
  5. Protect Confidentiality. Whether your field is healthcare, the law, business, education, or the government, you risk violating the duty to protect confidentiality if you don’t carefully review what ChatGPT generates before you put it out into the world.
  6. Keep Promises. Consider the contract you’ve signed with your employer or, if you’re an entrepreneur, directly with a client. Is your contract a legal document? Yes. Is it more than that? Yes. A business contract is a two-way promise. You agree to provide certain services or products, and your company or your client pays you in return. If you or the other party reneges, the deal is off. Suppose you publish or distribute what ChatGPT generates without carefully reviewing, fact-checking, and editing it. In doing so, you break the implicit or explicit promise you have made to be a trustworthy person. As Walter Landor said, “A brand is a promise.”
  7. Be Fair. To be fair is to give others their due. An obstacle to this is the bias that can be embedded in the information that ChatGPT and other chatbots use to answer the questions you pose. Again, it’s garbage in, garbage out. Suppose the written material you’re using has been shaped by biases related to age, race, gender, politics, or sexual orientation. You risk perpetuating that bias by cutting and pasting whatever the bot gives you into an email, blog, social media post, or book.
  8. Care. Care is a feeling about the world and a way of acting in it. You evince care in your professional life by doing something as simple as sending a handwritten thank-you note to a new client or as time-consuming as taking on a project your colleague can’t finish because of illness. Concerning ChatGPT, you demonstrate care by double- and triple-checking the research you’ve done to ensure that what you’re about to distribute is accurate, fair, and not likely to harm others or the good reputation of your business.

Forbes closed its essay about ChatGPT with this. “The principles presented here will help you use artificial intelligence with ethical intelligence. They are a framework, not a formula, for doing the right thing. Your company, your clients and your reputation deserve nothing less than the best you can give them.”[7]

Ethically well said.  


[1] https://en.wikipedia.org/wiki/ChatGPT

[2] https://topflightapps.com/ideas/build-mental-health-chatbot/

[3] Christina Pazzanese, “Ethical concerns mount as AI takes bigger decision-making role in more industries,” The Harvard Gazette, October 26, 2020. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

[4] Ibid.

[5] https://en.wikipedia.org/wiki/ChatGPT#Ethical_concerns

[6] https://www.thefountaininstitute.com/blog/chat-gpt-ethics

[7] Forbes Magazine, essay on using ChatGPT with ethical intelligence (quoted above).

Gary L Stuart

I am an author and a part-time lawyer with a focus on ethics and professional discipline. I teach creative writing and ethics to law students at Arizona State University.
