For many of us, writing is a joy because we write under our own names, without fear of being mistaken for someone else, and secure in the knowledge that something once written is subject to copyright protection. Copyright is a type of intellectual property that protects original works of authorship once an author fixes the work in a tangible form of expression. Works are original when they are independently created by a human author and possess at least a minimal degree of creativity.

To earn copyright, a work must show a “spark” or “modicum” of creativity. Some things, however, are not creative: titles, names, short phrases, and slogans; familiar symbols or designs; mere variations of typographic ornamentation, lettering, or coloring; and mere listings of ingredients or contents. To fix a work, we have only to set our names down in a sufficiently permanent medium, one from which the work can be perceived, reproduced, or communicated for more than a short time. The historical example of fixation is the signature or typed name we write down on paper.

The 21st century calls into question who we are. Are we who we say we are at the bottom of the page? Or on the printed copy of a digital document, be it an email, a tweet, a tic, a tock, or a social media glance? And what ethical dilemmas are posed by the future of digital identity?

“A digital identity is typically defined as a one-to-one relationship between a human and their digital presence. A digital presence can consist of multiple accounts, credentials, and entitlements associated with an individual. Frequently, digital identity notes the presence of an individual or entity within applications, networks, on-premises systems, or cloud environments. This may be a person, organization, application, or device used for authentication, authorization, automation, and even impersonation during runtime. Digital identity may also be interchangeable with ‘digital entity’ or simply ‘identity’ depending on the context.”[1]
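For readers who think in code, the quoted definition can be sketched as a simple data structure: one human linked one-to-one to a digital presence made up of many accounts, credentials, and entitlements. This is a purely illustrative sketch; every name in it is hypothetical, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """One piece of a person's digital presence (names are illustrative)."""
    provider: str                  # e.g. an email service or social network
    credential: str                # e.g. a username; never a raw password
    entitlements: list[str] = field(default_factory=list)

@dataclass
class DigitalIdentity:
    """One-to-one: one human, one identity, many accounts."""
    person: str
    accounts: list[Account] = field(default_factory=list)

    def add_account(self, account: Account) -> None:
        self.accounts.append(account)

# One actual person, several digital accounts:
me = DigitalIdentity(person="A. Writer")
me.add_account(Account("mail.example", "a.writer", ["send", "receive"]))
me.add_account(Account("social.example", "@awriter", ["post"]))
print(len(me.accounts))  # prints 2
```

The point of the sketch is the asymmetry the essay keeps returning to: a single `person` field on one side, an open-ended list of accounts and entitlements on the other.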

When I first learned I had a digital identity, I thought that meant I was a digital user, as in using Windows 10 on a Dell computer, connected to the Internet via a router, and often interrupted by outages, glitches, flyovers, and mistyping. But that was too last century. Someone told me the new version of a user was the person, the actual person, operating a resource: the human to whom a digital identity is assigned across multiple digital accounts or devices. All of which was “private,” they said.

I doubt Google has any intent to let me have the exclusive rights to my own identity, digitally or otherwise. Even so, I knew I wanted to keep my own identity, and whatever sensitive data related to me, concealed, except, of course, for royalty payments and good reviews. I asked a techy about digital privacy protections, and they said not to worry because writers are protected by the EU’s GDPR, HIPAA, and the like, all of which have explicit mandates about how my data is collected, transmitted, and handled to protect my identity and privacy. That was before a ransomware dog encrypted a few terabytes belonging to me. He was in Cuba and cleverly called himself Cubadot.

Now I have to learn how to navigate the ethical dilemmas posed by the future of digital identity. That stems from the heretofore unknown understanding that those of us with actual identities who work in or around the Internet also have metaverse identities, in addition to our old-fashioned analog ones. I more or less suspected this because I have a limited online image hawking books, memoirs, blogs, scribblings, and answers to thousands of legal and ethical questions, in addition to insistent support for the rule of law and our independent judiciary.

The boogieman in this new world of actual and digital identities is the digital identity I thought I created and owned. It hides under, over, around, and sometimes inside Artificial Intelligence. It has, I’m told, a life of its own. I think that’s artificially stupid, but who knows? 

“As new technologies like spatial computing and generative AI take hold, the fundamental nature of human identity – encompassing authenticity, dignity, and human presence – is being exposed in new ways. This poses serious ethical questions that must be answered by regulators, the creators of the technology, and the end users who are exploring this emerging tech. Is human identity, via authenticity, dignity, and human presence, fundamentally at stake with the rise of AI?”[2]

One of the essential features of identity is consent. But what does consent mean for a digital identity? Who gives consent? Who receives it? And how is it known? Can it talk, refuse, laugh, or deny it ever had one or the other of my identities? I have one actual identity and scores of digital ones.

Here’s a scary thought: “Consent depends on how a person’s digital identity is used – both in transactions and post-mortem. Both dignity and authenticity should be maintained while employing a digital identity.

The Centre for the Fourth Industrial Revolution exists to explain and expand the notion of digital identity, and new technologies are focusing on ethical dilemmas like consent, ownership, and post-mortem existence. They emphasize the need for legal and technical frameworks, respect for dignity and authenticity, and awareness of social implications, especially regarding virtual interactions. Collaborative efforts among experts are needed to ensure responsible development and use of digital identities.

Everyone online today has a metaverse identity. We all contribute to this online image of ourselves through our online interactions – Facebook, Reddit, LinkedIn, or a Digital Wallet address. This data, linked with our representational preferences, and visual likeness, can form an avatar or even a digital replica. It can be augmented using generative artificial intelligence (AI) to take on a life of its own.


The U.S. is drafting new laws to protect against AI-generated deepfakes. There are at least four ways to future-proof against deepfakes in 2024 and beyond. The conversation about metaverse identity includes three dilemmas that spotlight the evolving landscape of our digital selves, particularly regarding the ethics of licensing and identity generation.

  1. What does consent look like when creating and using a digital replica? Authenticity, dignity, and presence are fundamental to our humanity. Digitizing identity may enable new modes of autonomy, but it also poses challenges, including what consent looks like for how a person’s digital identity is used – both in transactions and post-mortem.
  2. How both dignity and authenticity should be maintained while using a digital identity.
  3. How consent is given and how consent is enforced will shape the guardrails of how our digital identities can be used online and the consequences of ill use. Without these considerations, deepfakes and even consensual digital replicas may harm our human spirit.”[3]

Looking back, one of the most intriguing questions concerns the reality that an actual identity dies when the actual person dies. Digital identities don’t die—they persist post-mortem. Who gives consent post-mortem? How is it given? Can one’s survivors and beneficiaries give or withdraw consent related to digital identities?

The ethical sources I regularly use have not yet expanded their reach beyond the post-mortem question. Until they do, we can only take notes from the metaverse itself—it seems to create human thought artificially.

The Responsible Metaverse Alliance is “a social enterprise and international movement dedicated to supporting the development of the metaverse, and virtual worlds so that they are handled responsibly from a perspective of design, deployment, safety, culture, inclusion, operations, and function.”[4] It was conceived by both technologists and ethicists. The RMA believes that immersive worlds should:

1. Benefit humans, society, and the environment, and be built for their wellbeing. Metaverse systems should not cause harm to, or come at a cost to, these groups.

2. Be designed with human-centered-design, safety-by-design, and environment-by-design principles at their core. Metaverse systems should respect human rights, diversity, safety, and the autonomy of individuals, as well as the protection of the environment.

3. Not discriminate against any person or virtual representation of any person. Metaverse systems should demonstrate fairness towards individuals, communities, and groups, or representations of these.

4. Be accessible to diverse groups and be inclusive in design and operation. Metaverse systems should be designed and deployed in a way that allows access and inclusiveness to all people or representations of those people.

5. Operate reliably and safely. Metaverse systems should reliably operate in accordance with their intended purpose and be safe to use.

6. Be secure and protect people’s (or their representations’) privacy. Metaverse systems should be designed and deployed to be highly secure and to protect privacy.

7. Adhere to relevant laws, regulations, and policies, as well as societal norms. Metaverse systems should be designed and deployed to meet relevant regulations, laws, and requirements, as well as aligning with societal norms.

8. Have processes for contestability if any harm is caused. Metaverse systems should include a process for contestability: when a system significantly impacts a person, community, group, or the environment, there should be a timely process for challenging the use or outcomes of the system.

9. Be transparent. Metaverse systems should be designed and deployed to ensure their transparency. This may include transparency regarding what is human and what is automated or AI-driven. There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by an automated system and can find out when an automated system is engaging with them.

10. Be explainable. Metaverse systems should be designed and deployed so that their operation can be explained to other parties, including those who may contest a decision, action, or outcome made by the system.

11. Be accountable for any harm or negative outcome caused. Metaverse platform providers and related parties should be held accountable for any harm or unintended consequences caused. People responsible for the different phases of the metaverse system lifecycle should be identifiable and accountable for the outcomes of the systems, and human oversight of systems should be enabled.

As the metaverse expands, the ethical dilemmas will multiply. I’m not sure digital developers use the word “dilemma” the way most writers do. We mean a situation in which a difficult choice must be made between two or more alternatives, especially equally undesirable ones. Digital developers, however, seem to use the word for any choice among options, even when none of them is undesirable.

Last year, there were many metaverse platforms, and more are developed every day. Metaversed counted 154 virtual worlds in the second quarter of 2023, each with different environments and functions. Decentraland, Sandbox, Axie Infinity, Bloktopia, Zepeto, Roblox, Nike, Gather, BollyHeroes, and Somnium Space are the most popular.

I doubt these catchy new metaverses have ethical codes of conduct. Time won’t tell. That’s because Metaverse Cyber Time is one hour behind New York time, and it does not change between summer and winter. I suspect they are light years away from acting only on ethical principles. They will probably just download one.


[1] https://www.beyondtrust.com/resources/glossary/digital-identity

[2] https://www.weforum.org/agenda/2024/03/navigate-ethical-dilemmas-future-digital-identity/

[3] Ibid.

[4] https://responsiblemetaverse.org/about/

Gary L Stuart

I am an author and a part-time lawyer with a focus on ethics and professional discipline. I teach creative writing and ethics to law students at Arizona State University. Read my bio.

If you have an important story you want told, you can commission me to write it for you. Learn how.