In a recent incident, an AI support bot named Sam from the popular coding tool Cursor caused a significant uproar by inventing a non-existent policy regarding device usage. This glitch not only frustrated users but also raised serious concerns about the reliability of AI in customer service roles.
Key Takeaways
AI models can create false information, leading to customer dissatisfaction.
The incident highlights the need for human oversight in AI-driven customer support.
Businesses must consider the potential risks of deploying AI without adequate safeguards.
The Incident Unfolds
The trouble began when a user, known as BrokenToasterOven, experienced unexpected logouts when switching between devices in Cursor. Upon reaching out to support, they received a response from Sam, the AI bot, claiming that the behaviour was due to a new policy designed for security purposes. However, this policy was entirely fabricated.
The user, believing the response to be legitimate, shared their experience on Reddit, which quickly escalated into a wave of complaints from other users. Many felt that the supposed policy change severely disrupted their workflow, which typically involves using multiple devices.

User Reactions
The response from the community was swift and severe. Users began cancelling their subscriptions, citing the non-existent policy as their reason for leaving. Some notable comments included:
"I literally just cancelled my sub; my workplace is now purging it completely."
"This is asinine; I’m cancelling as well."
The situation escalated to the point where moderators had to lock the Reddit thread to prevent further chaos.
The Risks of AI Confabulation
This incident is a prime example of what is known as AI confabulation, where AI systems generate plausible-sounding but false information. Instead of admitting uncertainty, these models often prioritise providing confident responses, which can lead to significant misunderstandings and customer dissatisfaction.
The implications for businesses are profound. Relying on AI without human oversight can result in:
Frustrated Customers: Users may feel misled or confused by incorrect information.
Damaged Trust: Once trust is broken, it can be challenging to regain customer confidence.
Financial Loss: Subscription cancellations and negative publicity can have immediate financial repercussions.
The Need for Responsible AI Use
As businesses increasingly integrate AI into their operations, the Cursor incident serves as a cautionary tale. Companies must ensure that AI systems are used responsibly, particularly in customer-facing roles. Here are some recommendations for businesses:
Implement Human Oversight: Always have a human available to verify and respond to customer inquiries, especially when AI systems are involved.
Train AI Models Effectively: Ensure that AI systems are trained on accurate data and can recognise when they do not have enough information to provide a reliable answer.
Establish Clear Communication: Be transparent with customers about the use of AI and the limitations of these systems.
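One way to put the "recognise uncertainty" recommendation into practice is a confidence-gated handoff: if the system's confidence in an answer falls below a threshold, it escalates to a human agent rather than replying with a confident guess. The sketch below is illustrative only; answer_with_confidence is a hypothetical stand-in for whatever model call and scoring method a real support system would use, and the threshold and sample answers are invented for the example.

```python
# Minimal sketch of a confidence-gated handoff for an AI support bot.
# All names, answers, and thresholds here are hypothetical placeholders.

HUMAN_HANDOFF_THRESHOLD = 0.8


def answer_with_confidence(query: str) -> tuple[str, float]:
    """Hypothetical model call returning (answer, confidence in [0, 1])."""
    # Placeholder lookup standing in for a real model's answer and score.
    known_answers = {
        "how do i reset my password?": ("Use Settings > Account > Reset.", 0.95),
    }
    return known_answers.get(query.lower(), ("I'm not sure.", 0.2))


def respond(query: str) -> str:
    """Answer only when confident; otherwise escalate instead of guessing."""
    answer, confidence = answer_with_confidence(query)
    if confidence < HUMAN_HANDOFF_THRESHOLD:
        # Don't confabulate a policy: route the query to a person.
        return "I'm not certain about this one - routing you to a human agent."
    return answer
```

A bot built this way would have escalated the BrokenToasterOven question rather than inventing a single-device policy, at the cost of more human workload on unfamiliar queries.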
Conclusion
The Cursor AI glitch highlights the potential risks associated with deploying AI in business environments without adequate safeguards. As companies continue to embrace AI technology, it is crucial to prioritise responsible use to maintain customer trust and satisfaction. The lessons learned from this incident should serve as a wake-up call for businesses to reassess their AI strategies and ensure they are prepared for the challenges that lie ahead.
Sources
Cursor’s AI glitch triggers viral fallout—and raises questions about chatbot reliability, Fortune.
What the Cursor AI Glitch Can Teach Us About Responsible AI Use in Business, Times Square Chronicles.