It’s common knowledge that regulatory frameworks are a work in progress for the Metaverse. With Web 3.0 dawning, regulators and business executives are scratching their heads regarding how to ensure compliance.
Consequently, there is a danger that when entering the Metaverse, companies could inadvertently fall foul of data privacy, financial, or other regulations. That’s why it’s best to err on the side of caution and operate within existing boundaries when launching a Metaverse product or service.
In this article, we’ll cover some of the most important considerations. While not an exhaustive list, it will give you a good jumping-off point to plan a sustainable Metaverse strategy.
Data Privacy

Just as with the internet, Metaverse applications will span multiple jurisdictions. However, to create an interoperable, multifaceted ecosystem where crossing geographical boundaries is commonplace, companies must become more aware of the various data privacy laws that apply.
While today, companies and government organizations often restrict content based on IP addresses, the Metaverse will likely not have the same restrictions. At the moment, it is relatively easy to pinpoint your customer base and, as such, comply with the various regulations that govern data use.
For example, suppose you are a U.S.-based organization. In that case, you may restrict content to U.S. audiences and thereby help ensure compliance with the California Consumer Privacy Act (CCPA) and related regulations. In the Metaverse, however, you’ll need to make certain all bases are covered. To that end, consider adopting data governance software that accounts for the major existing data privacy regulations, such as the EU’s General Data Protection Regulation (GDPR) and Brazil’s General Law for the Protection of Personal Data (LGPD).
NFTs and Other Digital Tokens
A great debate has been in progress since the ERC-20 token standard launched on Ethereum. These tokens, with their simple smart-contract support, kick-started a boom in alt-coins, the next step in the evolution of cryptocurrencies.
Since then, a battle has been raging between regulators, like the Securities and Exchange Commission (SEC) in the U.S., and crypto companies about whether or not these tokens should be classed as securities. There have been winners and losers, but one thing has remained constant: the case for utility.
On paper, a utility token isn’t a security and, therefore, shouldn’t be regulated and restricted in the same way securities like stocks and bonds are. NFTs have followed the utility trend because they have clear use cases outside a transaction. Still, there is a growing push by regulators to classify them as securities because, as SEC Commissioner Hester Peirce puts it, “Given the breadth of the NFT landscape, certain pieces of it might fall within our jurisdiction.”
So, what does this mean for organizations that want to enter the Metaverse on the back of an NFT project? Ultimately, one of the primary objectives must be hiring counsel to determine whether the intended NFT or other token constitutes a security. And if it does, follow the relevant guidelines to be certain you operate within the law.
Online Safety

Various Web 2.0 companies have fallen foul of regulators when it comes to protecting users’ rights regarding online abuse and exposure to distressing content. There is a danger that this exposure could be far more profound in the Metaverse.
As such, Metaverse companies are responsible for ensuring their users’ safety, and they are uniquely positioned to do so: the Metaverse is being built with knowledge of what went wrong before. With this in mind, building a framework for online safety is a core principle.
As an organization, one of the best ways to operate responsibly in the absence of unified online safety regulations is to follow the advice and guidance of existing geographically defined laws. Some of the most significant include Australia’s Online Safety Act 2021, the U.K.’s emerging Online Safety Bill, and the proposed Kids Online Safety Act in the U.S.
Artificial Intelligence (AI)
Perhaps the most challenging area to prepare for is the impact of AI. While AI technologies and subsequent regulations to govern them are very much in their infancy, there is no doubt that they will emerge. So, it’s best to prepare for the inevitable.
AI is complicated because it straddles many areas, from data privacy to human rights to advertising standards. However, the EU has already set in motion its Artificial Intelligence Act, which, like the GDPR, will cover any company doing business with EU citizens.
The act focuses on AI’s ability to manipulate users and cause mental or physical harm, the exploitation of vulnerable groups, and the use of biometric data. What’s more, the White House has just revealed its blueprint for an AI Bill of Rights. A work in progress, the blueprint promises to focus on protecting personally identifiable information (PII) and limiting surveillance.
Although these regulations are not yet in force, companies would do well to explore them and take appropriate steps to adhere to them. Just as data privacy laws have spread globally, AI laws will undoubtedly follow the same pattern.
This article originally appeared in Kieron Allen’s My Metaverse Minute newsletter, which is sent every Sunday.