There has been a lot of buzz about generative artificial intelligence (AI), ChatGPT, and other language models in the last few months. There is both eager anticipation and palpable fear that these automated systems will eventually replace the need for human intelligence and human creativity.
I decided to learn more about generative AI systems, including how they function, their advantages, and their limitations, with a particular focus on comparing what they do to human intelligence. The question of whether machines are replacing human creativity has no definitive answer, but from my perspective, machines can never match the creativity of humans. There are also many concerns about whether AI will replace human intelligence. In this analysis, I will address these concerns and aim to clarify some common misconceptions.
Concerns About Generative AI
A variety of industries have raised concerns over generative AI use cases and outcomes. For instance, experts in the field of data have voiced concerns about the accuracy and consistency of these learning models, noting the potential for misuse in the form of misleading information, trolling, plagiarism, and other forms of online dishonesty. Recently, the Italian data protection authority cited privacy concerns with the model and temporarily banned the use of ChatGPT in Italy.
Similar worries exist in many other government bodies and sectors, such as education. In early January of this year, New York City Public Schools banned ChatGPT due to concerns about students cheating, explaining that the tool doesn’t help in “building critical thinking and problem-solving skills.”
Organizations such as JPMorgan banned the use of ChatGPT among their employees over data security concerns. Many other companies are still evaluating claims of data security weaknesses and are considering taking similar action in the future.
The organizations in these examples made their decisions after an initial evaluation of the tool, and I believe they weighed a range of factors. It will be helpful, then, to consider how generative AI works and how these concerns can be addressed by adding human intelligence into the mix.

The Function of Generative AI
With the concerns noted above in mind, let’s review the inner workings of generative AI, specifically ChatGPT and how it returns results to humans. Generative AI is a form of AI that generates new material in response to a prompt. It may produce visuals, audio, or text. Although the results may seem creative, they are actually the product of the algorithms and training data behind the model.
Generative AI is built primarily on two families of models: Generative Adversarial Networks (GANs), which are most often used to generate images, and Transformer-based models, such as the GPT series behind ChatGPT, which generate text. These models are trained on enormous amounts of data, so their output is effectively a recombination of information drawn from many different sources.
Although AI can create content based on previously learned patterns and relationships, it does not truly comprehend the content’s meaning or purpose. I think of generative AI tools, especially ChatGPT, as “prompt tools”: you provide input through a series of prompts to get a tailored response. By refining your prompts, you can get close to your ideal result, but you can’t be fully confident that it’s accurate without human intelligence to evaluate the output.
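To make that “prompt tool” workflow concrete, here is a minimal sketch of iterative prompting with a human reviewer in the loop. It assumes the OpenAI Python client (whose interface varies by library version); the model name and prompts are illustrative placeholders rather than recommendations.

```python
# A person reviews each draft and supplies follow-up prompts until it is acceptable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [
    {"role": "user",
     "content": "Draft a short product announcement for our new analytics dashboard."}
]

while True:
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    draft = response.choices[0].message.content
    print("\n--- Draft ---\n" + draft)

    feedback = input("\nPress Enter to accept, or type a follow-up prompt to refine: ").strip()
    if not feedback:
        break  # a person, not the model, decides the draft is good enough
    # Keep the conversation history so the next draft builds on this one.
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})
```

The important design choice is that the loop only exits when the human reviewer, not the model, judges the output acceptable.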
Companies are worried about how they can protect the privacy and security of data when the results are based on public data retrieved from many sources. Therefore, human intelligence is needed to ensure that the data being retrieved and stored is safe and retains its integrity. There is no doubt that human oversight can address these concerns and make the results more reliable and accurate.
The Role of Human Creativity
Human creativity, on the other hand, is a unique trait that emerges from our own capabilities, including our memories and our imaginations. Human creation carries an originality that cannot be duplicated by artificial intelligence. Although AI is advancing to better understand context, no machine can ever replicate the human mind or the thoughts, ideas, and speech it produces.
Furthermore, the emotional depth and authenticity that are hallmarks of human creativity are often missing in AI-generated material. Each person’s distinct viewpoints, life experiences, and cultural upbringing shape their creative output. However, the scope and quality of material generated by AI systems are constrained by the training data and the underlying algorithms. The human element, which is what makes content produced by humans so engaging, is missing.
Also, keep in mind that the purpose of AI-generated material is not to replace human intelligence or creativity but to supplement and enhance them. Generative AI aims to facilitate the creation of material while reducing the workload placed on human creators. For example, images produced by AI can serve as inspiration for artists, while text generated by AI can spark new ideas for writers. AI is not being used to supplant humans, but rather to help them be more productive and creative.
Using Generative AI for Business
Generative AI produces adequate results for routine tasks that have existed for years and remain largely unchanged.
Expecting generative AI to come up with a truly original idea is unrealistic, as it can only recombine what is in the dataset it was trained on, which may be as much as two years out of date. Although the training data will become more current over time, users can’t depend on a tool that doesn’t consistently provide up-to-date information in a world where technology moves this fast, and relying on outdated or inaccurate data can never lead to innovation.
However, if these models are integrated with business applications that feed them the correct input – which may still involve human intervention – then they can return the correct output, because the business application constrains the model and tailors its responses. For example, ChatGPT’s integration with business applications like Microsoft Dynamics 365 makes it more grounded and useful for business users.
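Below is a rough sketch of that integration pattern: the business application looks up vetted, current data and places it in the prompt so the model’s answer is constrained to those facts. The fetch_customer_record function, field names, and prompts are hypothetical stand-ins for a real business-application API, not an actual Dynamics 365 interface, and the OpenAI client call carries the same assumptions as the earlier example.

```python
from openai import OpenAI

client = OpenAI()

def fetch_customer_record(customer_id: str) -> dict:
    # Hypothetical placeholder for a lookup against the business application's own API.
    return {"name": "Contoso Ltd.", "open_tickets": 2, "renewal_date": "2023-11-30"}

def draft_renewal_email(customer_id: str) -> str:
    record = fetch_customer_record(customer_id)
    # The application, not the model, decides which facts are in scope.
    context = (
        f"Customer: {record['name']}\n"
        f"Open support tickets: {record['open_tickets']}\n"
        f"Contract renewal date: {record['renewal_date']}\n"
    )
    messages = [
        {"role": "system",
         "content": "Use only the facts provided. If a fact is missing, say so instead of guessing."},
        {"role": "user",
         "content": context + "\nDraft a brief renewal reminder email for this customer."},
    ]
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content

print(draft_renewal_email("CUST-001"))
```

Because the application supplies the facts, the response stays current and tailored even though the model’s own training data may be out of date.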
Final Thoughts
Before generative AI model outputs are released, guardrails and a human validation step will help ensure accuracy and prevent misinformation. We don’t want to mislead anyone, especially children, who uses these tools. Adding this human layer helps ensure that future leaders continue to receive accurate information.
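As one illustration of that final check, here is a minimal sketch of a guardrail plus human sign-off gate. The blocked-terms list and review prompt are hypothetical placeholders; a production pipeline would rely on proper moderation, fact-checking, and policy tooling rather than a simple keyword filter.

```python
# Hypothetical policy list; a real deployment would use dedicated moderation tooling.
BLOCKED_TERMS = {"guaranteed cure", "risk-free investment"}

def passes_guardrails(text: str) -> bool:
    # Automated first pass: reject drafts containing disallowed claims.
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def release(draft: str) -> bool:
    """Return True only if the draft clears the automated check and a human approves it."""
    if not passes_guardrails(draft):
        print("Rejected by automated guardrail.")
        return False
    print("--- Draft for review ---\n" + draft)
    return input("Approve for publication? (y/n): ").strip().lower() == "y"
```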
The incorporation of human intelligence into generative AI will propel this technology to the next level. Humans are an essential element in the future of AI. Not only will this reduce the risk of information being interpreted incorrectly, but it will also pave the way for every industry to finally begin trusting and optimizing AI technologies.