The Limitations of Generative AI: Why Humans Must Stay in Control

Key Takeaways
- Generative AI does not understand or know anything; it only learns, recognises and applies patterns.
- Generative AI models are prone to errors and struggle with situations they haven't been trained on.
- Over-reliance on Generative AI can stifle innovation, as it generates similar content and lacks originality.
- AI-generated search summaries are often unreliable, requiring human verification.
- Image-generating AI frequently makes mistakes, which can help identify fake visuals.
- AI should be used as a tool, not as a replacement for human decision-making and oversight.
- Aspiedent's autism profiling relies on human expertise that Generative AI cannot replicate: just as it cannot grasp the meaning behind words, it cannot grasp the complexities of cognitive diversity. If you want an accurate, human-led assessment, please contact us.
Right now, Generative Artificial Intelligence (AI) is all the rage. From chatbots like ChatGPT to AI-driven business solutions, the technology has rapidly expanded into various applications. Governments are investing in AI training programs, and businesses are eager to integrate AI into their workflows. Even the UK’s Prime Minister, Keir Starmer, has outlined plans to position the UK as a leader in AI development.
But when one of the goals is for the “public sector to spend less time doing admin and more time delivering the services working people rely on”, we start to wonder if those behind all this really understand the technology.
Amid all the hype, it's crucial to be clear about what AI really is, and what it isn't.
There are two main types of AI: traditional Symbolic AI and the newer Generative AI, which is based on machine learning. Symbolic AI works by applying hand-crafted rules to data to produce answers, whereas Generative AI, such as ChatGPT or image-generation systems, learns from training data and example answers to create its own rules and patterns. After training, you provide it with questions and it generates answers. In general, Symbolic AI can tell you how it reached its answer, whereas this is not possible with Generative AI. While Generative AI has made waves in various industries, it's crucial to understand its limitations and the role humans must still play in overseeing its use.
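To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (a toy example of ours, not any real production system): a tiny rule-based "Symbolic AI" that can explain its answer, alongside a bigram "pattern learner" that, in the spirit of Generative AI, produces fluent-looking output but has no explanation to offer.

```python
from collections import defaultdict
import random

# --- Symbolic AI in miniature: hand-crafted rules, with a trace ---
# (purely illustrative rules, not anyone's real system)
RULES = [
    ("rain", "take an umbrella", "matched rule: forecast mentions 'rain'"),
    ("snow", "wear boots", "matched rule: forecast mentions 'snow'"),
]

def symbolic_advice(forecast):
    """Apply hand-crafted rules; the answer comes with its reasoning."""
    for keyword, advice, explanation in RULES:
        if keyword in forecast.lower():
            return advice, explanation
    return "no advice", "no rule matched"

# --- Generative AI in miniature: learn patterns from data ---
def train_bigrams(text):
    """Count which word follows which: pure pattern statistics, no meaning."""
    counts = defaultdict(list)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current].append(nxt)
    return counts

def generate(counts, start, length=8):
    """Continue a sentence by replaying learned patterns."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:  # a situation it was never trained on: it cannot cope
            break
        word = random.choice(followers)  # chosen from past patterns, nothing more
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(symbolic_advice("Heavy rain expected today"))  # advice plus the rule behind it
print(generate(model, "the"))  # fluent-looking output, but it cannot say why
```

The toy pattern learner even shows the brittleness discussed below: ask it to continue from a word it has never seen, and it simply stops, with no way to explain itself.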
Our director, Dr Elizabeth Guest, completed a PhD in Artificial Intelligence whose results remained state of the art for over two decades. As the founder of Minds in Depth and Aspiedent, she has applied her research and development expertise to create a framework for supporting individuals affected by conditions such as autism, ADHD, and dyspraxia. Her innovative approach helps parents, individuals, and businesses understand and implement effective support strategies through advanced profiling methods and comprehensive training.
Dr Guest shares her thoughts:
AI is not intelligent.
Despite its name, Generative AI understands nothing; it knows nothing. All it does is learn and apply patterns. Behind these developments is largely very clever engineering combined with massive computing power, not actual comprehension.
This distinction is important because it highlights AI's limitations. Generative AI is prone to errors, especially when it encounters situations it hasn't been trained on. If the data changes or an unexpected scenario arises, the AI will not be able to cope. This is why AI should never be left unsupervised in decision-making roles. Remember Apple's attempt to use AI to summarise news articles? Nobody checked whether the summaries were correct.
Handing over important jobs to AI, especially those that rely on an understanding of language, is a recipe for disaster.
Over-reliance on AI is a risk and will stifle innovation.
We have first-hand experience of this. Because what Aspiedent and Minds in Depth do is new, and the systems have not been trained on our paradigm, we've found that AI tools like ChatGPT are not useful for writing blog posts: they don't understand our specialised work. They can't even provide transcriptions for our videos; we have to use older technology that relies purely on speech sounds, and it gets the transcription hilariously wrong in places!
There is much propaganda claiming that if you don't use Generative AI in your work, you will be left behind. But what if it is those who do their own work who stand out, precisely because AI-generated content is all so similar? Perhaps using AI-generated output as a guide to what everyone else already has, and then creating something different, will pay dividends.
Large language models now crop up in search engines too.
Google now provides an AI summary at the top of its search results, and there is now an AI-infused search engine, Togoda, which attempts to organise search results so you can more easily find what you are looking for. Our experience is that many AI-generated summaries lack credibility and are so unreliable that we are better off ignoring them and looking through the articles the search has brought up. But something that organises results instead of just listing them could be genuinely useful, especially if you are looking for something a bit obscure. Even if the system doesn't get it quite right, a poor result could still be helpful.
Image-generating systems are also unreliable.
Look closely and you will see anomalies in hands and/or faces. People in the background can have missing, broken, or extra limbs. These errors stem from the system's lack of understanding. We should probably be thankful for this, because it means it will hopefully remain possible to identify fake images and videos (for those who know what to look for)!
Conclusion: humans must stay in control
As long as these Generative AI systems are used as tools, and not to replace people, they can be useful in some circumstances. As soon as they are expected to act autonomously and make decisions, we are in trouble. They should never replace human oversight, especially in areas that require critical thinking, creativity, or ethical decision-making.
As Generative AI continues to evolve, we must be cautious about how we integrate it into daily life. Used responsibly, Generative AI has the potential to enhance efficiency and productivity. But if we hand over too much control, we risk creating a world where “the computer says no” without any human intervention to challenge or correct it.
The key is to evaluate its output critically and to check that any information you didn't already know is correct. That way, you can make use of the benefits of Generative AI while safeguarding against its potential pitfalls.
AI and Autism Profiling?
At Aspiedent, we know that true understanding comes from human expertise, not AI-generated patterns and rules. Just as Generative AI (currently) has no mechanism for grasping the meaning behind words, it cannot comprehend the complexities of cognitive diversity. This does not mean that computers will never understand meaning, at least to some extent: PAT.ai is implementing a Symbolic AI system that models the meaning behind words using a semantic model, and they are getting good results without the errors that Generative AI is prone to.
This is why our autism profiling services are built on a real understanding of the underlying issues and of how these combine to produce both an individual's troubling surface symptoms and their strengths.
While we are looking to build Artificial Intelligence tools to aid the process, we do not envisage that it can be made fully automatic, because of the need to fully understand each individual's context. When it comes to identifying and supporting individual needs, human insight is irreplaceable.
If you are looking for an accurate, personalised autism profile, please contact us.