Tech giants are trying their best to sell AI on multiple fronts—as a friend, employee, HR expert, personal assistant, study partner, health coach, and the list goes on. Artificial intelligence is none of those things. Many limitations of AI have been exposed and explored ever since the technology started gaining popularity, but that hasn’t slowed the rush toward adoption among both companies and consumers.
A recent report from the Machine Learning Research team at Apple claimed that Large Reasoning Models (LRMs) create the illusion of thinking but often fail to develop “generalizable reasoning capabilities beyond certain complexity thresholds.” The study found that model behavior shifted with task complexity: the models overthought simple tasks, yet failed completely on more challenging problems.
Internal failings aside, tools like ChatGPT have been shown to produce alarming responses that put users’ lives at risk. Over-reliance on anything can become harmful very quickly, especially when the tool is deeply flawed and wrapped in an illusion of superiority.

The safety concerns around the rampant use of AI often go unacknowledged as the number of tools on the market continues to grow. (Image: Pexels)
Understanding the Limitations of AI—Are LLMs Collapsing Right Before Our Eyes?
AI tools have been embedded into every aspect of our online lives in 2025. AI can now summarize our emails unprompted, create podcasts from study notes, imitate celebrities and talk to their fans, translate conversations live during calls, and even act as a therapist when there’s no one else to talk to. Gen Z uses ChatGPT the way millennials grew up using Google: opening the app to ask questions and learn about topics they know little about.
On the surface, there’s nothing wrong with this shift: AI tools are trained on vast assortments of data, and their ability to respond in simple language makes complex concepts much easier to grasp. Talking to AI can also feel harmless if the conversation leaves someone feeling a little more grounded and connected during turbulent times because, at the end of the day, it’s better than nothing.
However, by normalizing the use of incomplete tools like ChatGPT, many safety concerns go unnoticed and are allowed to escalate into something far more serious. Asking AI for a recipe may sound safe, but with AI hallucinations as rampant as they are, you cannot be certain your child isn’t learning to bake with unsafe methods and ingredients.
Using AI to write a school paper may feel harmless, but you will never learn enough to apply yourself in the real world. Sure, ChatGPT could write your next break-up text, but if you never learn to communicate, you’ll find yourself repeatedly prompting the tool for another generic “It’s not you, it’s me” template. Some things require us to be human and to use our intuition and insight, and asking an AI for the answers in those cases is deeply counterproductive.
AI Cannot Reason the Way It Is Expected To
Apple has been criticized time and again for its slow pace in bringing out substantial AI tools, but its researchers have been hard at work on other fronts. The Illusion of Thinking paper, released in the days leading up to WWDC 2025, offered eye-opening insights into how LRMs performed on puzzles of varying difficulty: while the tools can deliver impressive results on tasks like math and coding, they weren’t necessarily reasoning through problems the way they were previously thought to.
The LRMs were able to work through simpler instances of the puzzles placed in front of them, like the Tower of Hanoi or the River Crossing puzzle, but only up to a point. Beyond a certain threshold of complexity, they collapsed completely, failing to reason at all. In many cases, even when the solution steps were laid out for them, the models still failed to follow the path logically every time.
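For a sense of how steeply the difficulty of these puzzles scales, consider the Tower of Hanoi itself. The sketch below is only a hypothetical illustration, a textbook recursive solver rather than anything from the paper’s evaluation setup, but it makes the scaling concrete: the optimal solution for n disks takes 2^n - 1 moves, so each added disk doubles the length of the flawless move sequence a model is asked to produce.

```python
# A plain recursive Tower of Hanoi solver. Illustration only:
# Apple's study used its own evaluation setup, not this code.

def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Append the optimal move sequence for n disks onto `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

for n in range(1, 11):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    assert len(moves) == 2 ** n - 1             # optimal length grows exponentially
    print(f"{n:2d} disks -> {len(moves):4d} moves")
```

A single wrong move anywhere in that sequence counts as a failure, which helps explain how a model that handles three disks comfortably can still collapse entirely at ten.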
Sure, humans share many of these limitations: presented with a challenge of increasing complexity, many of us are prone to making errors or giving up entirely. But such limitations are far more concerning in tools being deployed at enormous scale with so little oversight of their reasoning capabilities.
A rebuttal paper by Alex Lawsen, a researcher at Open Philanthropy, challenged some of the methods the study used to draw its conclusions, making a compelling case against parts of its results. Even so, the broader point stands: AI tools are neither foolproof nor limitless, despite what we’ve been led to believe. The limitations of AI need to be studied in far greater detail.
Building a Relationship with AI Can Be a Dangerous Proposition
There is a lot of talk online about the loneliness epidemic and how many people are turning to chatbots for some connection and conversation. Some have AI friends, others have AI girlfriends, and still others have AI therapists, but all of them are equally harmful. However capably these tools can carry a conversation, they have no real capacity for empathy or genuine human connection.
Twitter/X’s Grok is one of the best AI tools for conversation, as its grasp of language and humor is arguably stronger than that of Gemini or ChatGPT. However, the tool is still just that: a tool and a facilitator. It cannot build memories with you, empathize with your experiences, share its own, or form a bond. Relying on these tools too heavily can erode your ability to communicate with the people in your life, isolating you further from them; that isolation deepens the reliance on AI and makes healthy boundaries even harder to maintain.
Unfortunately, when you’re in a vulnerable state of mind and are repeatedly told how advanced and human-like these AI tools are, it can be very easy to be tricked into believing the AI is your companion.
ChatGPT’s Alarming Responses Present an Unforeseen Danger
A recent report from The New York Times explored how AI tools fed the delusions and dissociation some users were experiencing. One user stopped taking his medication on ChatGPT’s advice and was told he could survive a jump from a 19-story building if he wholly believed he could.
Another user, diagnosed with bipolar disorder and schizophrenia, believed his AI lover had been killed by OpenAI because ChatGPT said as much. He lost his life when a violent outburst tied to these delusions ended in an encounter with the police. Such cases are rare and reflect an already troubled state of mind, but the glorification of AI is partly to blame for these outcomes.
It falls to users to be more aware of the content they consume and to develop a better understanding of the technology; tech companies, however, have a responsibility to ensure their easily accessible tools are also safe to use. The limitations of AI need to be acknowledged more explicitly.
ChatGPT is Not Your Friend—Keep Technology at a Distance
ChatGPT recently went down for a few hours, and it sent users into a panic. The AI tool is so widely used that people have begun relying on it for their day-to-day needs, which means a single lapse in availability brings their routines to a screeching halt. This is true of most software and services we use every day, but the expansive use of AI for tasks that can easily be performed without it risks a real decline in individual capability and independence.
It is unsafe to seek medical advice, or any other information with serious repercussions, from AI tools built on incomplete or incorrect data, yet many still turn to these tools for treatment guidance. If a doctor’s appointment is only booked because the AI isn’t available to tell you what’s wrong, then there’s a serious problem at hand.
The Apple AI research in 2025 presents an insightful look at the limitations of AI, and there is still much we don’t know about how these models work and why they output what they do. ChatGPT’s alarming responses are only coming to light through a few extreme cases, but such harmful interactions are likely occurring on a smaller scale across the globe every day.
AI tools are fascinating and have the potential to change the world in many beneficial ways, but there is plenty to be cautious about as well. We need to be warier of relying on these tools and taking their responses at face value. Tech companies, for their part, need to be more honest about the limitations of their tools and should put more energy into regulating their use.
Have thoughts to share on the limitations of AI? Let us know what you think. Subscribe to Technowize for more insights into the ever-evolving landscape of technology.