Scientists, technologists, and columnists are warning us. We are at risk, they say. The risk: the peril that artificial intelligence poses to humankind. Wow, that’s a mouthful. It seems half the world is ramping up AI while the other half is looking for an AI bomb shelter. The AP reported on May 31, 2023, that industry leaders, including executives at Microsoft and Google, warned, “Mitigating the risk of extinction from AI should be a global priority alongside other societal risks such as pandemics and nuclear war.”[1]
As everyone who can read at the high school level knows, artificial intelligence systems might be able to outthink, outsmart, and outfight us, taking out our solar system, and us along with it. Even without human brains, they are highly capable chatbots. They are the new GIs, Godzilla, black holes, and bad government, putting us at meta-risk. The worst-case scenario is chatbots ganging up, destroying the universe, and dying with us. That would serve them right, given the ordinary terrors we already face: (1) catastrophic climate change, (2) nuclear war, (3) another global pandemic, (4) ecological catastrophe, (5) global system collapse, (6) major asteroid impact, (7) supervolcano, (8) synthetic biology, (9) nanotechnology.[2] Artificial intelligence, as it happens, is number ten on that list. They didn’t even mention the possibility of another Trump presidency. The combination is mind-shattering.
Scientists and technologists insist that autonomous weapons with AI could destroy humanity through “Self-Life/Death Decisions.” Bots that smart could be malicious. They could be hijacked by hackers with bad intent and used to carry out attacks on critical infrastructure, such as power grids. But could they really destroy us? Experts say yes: they could potentially surpass human intelligence, which already seems to be in short supply. Those bots could decide that humans are a bother and not really necessary anymore. Once they make that decision, we would be a threat to their existence, and they could launch a global-scale attack, wiping out humanity in the process![3]
Elon Musk said, “A.I. Will Destroy Humanity.”[4] The New York Times asked, “How Could A.I. Destroy Humanity?”[5] The Center for AI Safety said, “Artificial Intelligence possesses the potential to benefit and advance society. Like any powerful technology, AI also carries inherent risks, including some which are potentially catastrophic.”[6] Hundreds of artificial intelligence experts signed a “Statement on AI Risk.”[7] Approximately 350 researchers, executives, and engineers working on AI systems signed it: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[8] The statement was absurdly short, considering that the signatories included over 100 professors of AI, the two most-cited computer scientists, Turing laureates Geoffrey Hinton and Yoshua Bengio, the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social science, and other fields.[9]
Senate Majority Leader Charles E. Schumer, D-N.Y., said he and his staff have met with more than 100 CEOs, scientists, and other experts to figure out how to draw up legislation. The National Telecommunications and Information Administration, or NTIA, is gathering comments from industry groups and tech experts on how to design audits that can examine AI systems and ensure they’re safe for public use. And former Federal Trade Commission officials are urging the agency to use its authority over antitrust and consumer protection to regulate the sector.[10]
So the experts are meeting, the risks are real, the outcome is uncertain, and the big dog in the fight to stay alive, politics, is, as usual, divisive and dysfunctional. In response to President Joe Biden launching his re-election bid, the Republican National Committee published its first attack advertisement featuring images and audio “built entirely” by artificial intelligence.[11]
The Democratic National Committee wanted to demonstrate the threat that deepfake videos, clips created with artificial intelligence that can make people appear to do or say things they never did, posed to the 2020 election. So the committee came up with a novel solution: it had experts make one, with its own chair as the victim.[12] Politicians care more about winning and holding office than they do about anything else.
“Making politicians aware of a societal problem or risk — even an existential one — doesn’t mean they’ll actually do anything about it. For example, we’ve known about the harms posed by climate change for decades but have only recently begun to take meaningful steps to mitigate it. The existential risk posed by nuclear weapons isn’t likely to be solved by the government — they’re the ones with the nukes in the first place. Same with the risk of bio-engineered pandemics. The problem isn’t that our elected leaders don’t know these risks exist. It’s that until they’re properly incentivized, they won’t actually deal with it. Artificial intelligence is the next evolution of the internet and of technology, but here in the U.S., we haven’t even regulated Internet 2.0 yet. There are still no national laws protecting consumer data privacy. Section 230 of the [Communications] Decency Act, which allows and incentivizes platforms like Facebook, Instagram, YouTube, and Twitter to promote toxic content no matter how harmful, is alive and kicking, despite President Biden and former President Trump each calling for its repeal or reform. Politicians rarely do anything unless it’s in their clear political interest to do so. And if the leaders who signed the Center for AI Safety’s letter want action, they need more than a letter. They need a relentless campaign.”[13]
The risk of extinction will be studied, and either heightened or moderated, by science and engineering. That leaves the ethical questions in limbo. There are overarching ethical imperatives for everyone who is writing about the existential risk of extinction, and the context in which those imperatives are written is critical. “For decades, artificial intelligence, or AI, was the engine of high-level STEM research. . . Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing. . . With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.”[14]
Ethicists have identified common ethical challenges in AI. The Council of Europe, under the heading “Common Ethical Challenges in AI – Human Rights and Biomedicine,” lists six: inconclusive evidence, inscrutable evidence, misguided evidence, unfair outcomes, transformative effects, and traceability.[15]
Ethics is a two-way street: benefits and failings. On the upside are the benefits of AI. “When we speak of ethical issues of AI, there tends to be an implicit assumption that we are speaking of morally bad things. And, of course, most of the AI debate revolves around such morally problematic outcomes that need to be addressed. However, it is worth highlighting that AI promises numerous benefits. As noted earlier, many AI policy documents focus on the economic benefits of AI that are expected to arise from higher levels of efficiency and productivity. These are ethical values insofar as they promise higher levels of wealth and well-being that will allow people to live better lives and can thus be conducive to or even necessary for human flourishing. It is worth pointing out that this implies certain levels of distribution of wealth and certain assumptions about the role of society and the state in redistributing wealth in ethically acceptable manners which should be made explicit. The EU’s High-Level Expert Group on AI (2019: 4) makes this very clear when it states: ‘AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.’”[16]
The Brookings Institution said, “The world is seeing extraordinary advances in artificial intelligence. There are new applications in finance, defense, health care, criminal justice, and education, among other areas. Algorithms are improving spell-checkers, voice recognition systems, ad targeting, and fraud detection. Yet at the same time, there is concern regarding the ethical values embedded within AI and the extent to which algorithms respect basic human values. Ethicists worry about a lack of transparency, poor accountability, unfairness, and bias in these automated tools. . . As they push the boundaries of innovation, technology companies increasingly are becoming digital sovereigns that set the rules of the road, the nature of the code, and their corporate practices and terms of service. In the course of writing software, their coders make countless decisions that affect the way algorithms operate and make decisions.”[17]
The Markkula Center for Applied Ethics identified sixteen challenges and opportunities presented by AI: “1. Technical Safety 2. Transparency and Privacy 3. Beneficial Use & Capacity for Good 4. Malicious Use & Capacity for Evil 5. Bias in Data, Training Sets, etc. 6. Unemployment / Lack of Purpose & Meaning 7. Growing Socio-Economic Inequality 8. Environmental Effects 9. Automating Ethics 10. Moral Deskilling & Debility 11. AI Consciousness, Personhood, and ‘Robot Rights’ 12. AGI and Superintelligence 13. Dependency on AI 14. AI-powered Addiction 15. Isolation and Loneliness 16. Effects on the Human Spirit.”[18]
The Markkula Center’s conclusion is an ethical parameter worthy of every writer’s attention. “New technologies are always created for the sake of something good—and AI offers us amazing new abilities to help people and make the world a better place. But in order to make the world a better place we need to choose to do that, in accord with ethics. Through the concerted effort of many individuals and organizations, we can hope that AI technology will help us to make a better world.”[19]
In writer’s parlance, think before you write. In medical parlance, do no harm. And as lawyers say, write within the rule of law.
[1] Nation & World, Albuquerque Journal, May 31, 2023, Page A2.
[2] https://www.vox.com/2015/2/19/8069533/end-of-the-world
[3] https://www.linkedin.com/pulse/artificial-intelligence-ai-destroy-humans-umair-khalid
[4] Ibid.
[5] https://www.nytimes.com/2023/06/10/technology/ai-humanity.html
[6] https://www.safe.ai/
[7] https://en.wikipedia.org/wiki/Statement_on_AI_risk_of_extinction
[8] Ibid.
[9] “Statement on AI Risk,” Center for AI Safety, www.safe.ai, retrieved May 30, 2023; Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” The New York Times, May 30, 2023.
[10] https://rollcall.com/2023/05/31/experts-say-ai-poses-extinction-level-of-risk-as-congress-seeks-legislative-response/
[11] https://www.thenationalnews.com/world/us-news/2023/04/25/republicans-issue-anti-biden-advert-made-entirely-with-ai/
[12] https://www.cnn.com/2019/08/09/tech/deepfake-tom-perez-dnc-defcon/index.html
[13] https://thehill.com/opinion/technology/4038971-a-campaign-plan-to-put-ai-regulation-in-the-political-zeitgeist/
[14] https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
[15] https://www.coe.int/en/web/bioethics/common-ethical-challenges-in-ai
[16] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7968615/
[17] https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/
[18] https://www.scu.edu/ethics/all-about-ethics/artificial-intelligence-and-ethics-sixteen-challenges-and-opportunities/
[19] Ibid.
I am an author and a part-time lawyer with a focus on ethics and professional discipline. I teach creative writing and ethics to law students at Arizona State University.