Strawberries, Air Canada, and Lawyers: How You Can Deploy AI Without Costly Pitfalls
The Quality Coach® is excited to feature Dr. Johnathan Mell in this week’s blog. Dr. Mell will be joining The Quality Coach® on April 16 for our Spring HRLF Lunch & Learn. This is a can’t-miss event for all business and HR professionals as we dive into AI and its effect on work. We hope you enjoy this week’s blog!
Since OpenAI launched ChatGPT in late 2022, business leaders have scrambled to keep up with a market evolving at breakneck speed. While investors may not fully understand AI’s capabilities, they recognize its potential to create more agile and competitive businesses. As a result, many executives are pushing AI integration from the top down, eager to showcase innovation in annual reports.
At the same time, AI's accessibility has empowered employees—both in-office and remote—to enhance productivity independently. However, most workers lack specialized AI training, and the rapid, often haphazard deployment of these tools has led to costly missteps. The intersection of executive mandates and grassroots AI adoption has created risks that businesses must navigate carefully.
HR’s Hidden AI Challenge
HR professionals, in particular, should tread carefully. With some companies pushing return-to-office policies, remote workers—who often rely on AI tools for efficiency—may resist mandates that reduce their ability to use these productivity boosters. If AI policies are implemented too rigidly, businesses risk alienating their most skilled and efficient employees.
Security and legal concerns further complicate AI adoption. What happens when proprietary company data is entered into a third-party AI tool like ChatGPT? Who owns the copyright for AI-generated content? These questions remain unresolved, making organizations hesitant to fully embrace AI while also fearing they’ll fall behind if they don’t.
The benefits of AI are significant, but only when integrated strategically. Below are three real-world examples of AI missteps that serve as cautionary tales.
1. Air Canada’s Chatbot Blunder
In early 2024, Air Canada deployed an AI chatbot to handle customer inquiries—seemingly a smart move. Customer support is expensive, and chatbots can automate simple requests, reducing operational costs.
However, AI’s weaknesses quickly became apparent. A customer seeking bereavement fare assistance was misinformed by the chatbot, which incorrectly stated they could apply for the discount up to 90 days after travel. This contradicted Air Canada’s official policy, leading the customer to sue in small claims court—and win.
The legal implications were clear: companies cannot hide behind AI-generated misinformation. Courts increasingly hold businesses accountable for AI-driven errors, and Air Canada was fortunate the damages were minor. This case highlights the risks of deploying AI without rigorous oversight, particularly in customer service scenarios where accuracy is critical.
2. A Lawyer’s AI Shortcut Ends in Sanctions
Legal professionals are traditionally slow to adopt new technology, but AI's rise has forced rapid adaptation. In November 2024, a lawyer faced sanctions after relying on an AI tool to generate case law citations for a court filing. The problem? The AI fabricated entirely fictional cases.
The failure to verify the AI-generated content led to court-imposed penalties and reputational damage. This incident underscores a broader issue: while AI can accelerate workflows, it still requires human oversight. In high-stakes fields like law, finance, and healthcare, unverified AI-generated content can have severe consequences.
Businesses must balance AI-driven efficiency gains with robust quality control. A failure to do so, as this case illustrates, can result in financial and professional repercussions.
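One practical safeguard is to make verification mechanical: before a draft goes out, every citation it contains must appear on a list a human has actually checked against a real legal database. Here is a minimal Python sketch of that idea; the case names and the simplified citation pattern are illustrative placeholders, not real authorities or a production-grade parser.

```python
import re

# Citations a human reviewer has located in a real legal database and read.
# These names are illustrative placeholders, not real cases.
VERIFIED_CITATIONS = {"smith v. jones", "doe v. acme"}

# Deliberately simplified pattern: single-word party names joined by "v."
CITATION_PATTERN = re.compile(r"[A-Z][a-z]+ v\. [A-Z][a-z]+")

def unverified_citations(draft: str) -> list[str]:
    """Return every citation in the draft that no human has signed off on."""
    found = CITATION_PATTERN.findall(draft)
    return [c for c in found if c.lower() not in VERIFIED_CITATIONS]

draft = "As held in Smith v. Jones and Poe v. Fabricated, the motion should be granted."
problems = unverified_citations(draft)
if problems:
    print("Do not file. Unverified citations:", problems)
```

The point is not the regex but the gate: an AI-drafted filing simply cannot move forward until a person has traced each authority to its source.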
3. AI’s Struggle with Basic Counting
A more technical yet equally revealing failure of AI surfaced in mid-2024: models struggled with simple counting tasks. One widely reported example involved AI consistently miscounting the number of “R’s” in the word strawberry—often insisting there were only two instead of three.
This seemingly trivial flaw has deeper implications. Large language models process text as multi-character tokens rather than individual letters, so character-level tasks like counting sit largely outside what they directly “see.” If AI struggles with elementary tasks, how can it be trusted for complex decision-making? Its miscounting highlights its limitations, especially in domains requiring precision, such as coding, legal analysis, and financial modeling.
Even OpenAI acknowledged the issue in a tongue-in-cheek move, reportedly code-naming one of its models "Strawberry" as a nod to this persistent flaw. The lesson for businesses, however, is serious: AI tools, no matter how advanced they seem, have inherent weaknesses. Companies must train employees to recognize AI’s limitations rather than assuming it always provides accurate answers.
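For precision tasks like this, the remedy is to route the question to ordinary code rather than to the model, or at least to check the model’s answer against a deterministic computation. A minimal Python sketch (the `ai_answer` value below is a hypothetical model response, not output from any particular product):

```python
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter, case-insensitively."""
    return text.lower().count(letter.lower())

ai_answer = 2  # hypothetical chatbot reply to "How many r's in strawberry?"
actual = count_letter("strawberry", "r")  # deterministic ground truth: 3

if ai_answer != actual:
    print(f"AI said {ai_answer}, but 'strawberry' has {actual} r's. Trust the code.")
```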
The Path Forward: AI with Strategy and Training
Each of these examples—Air Canada’s chatbot misstep, the lawyer’s AI-generated false citations, and the counting failure—illustrates the pitfalls of uncritical AI adoption. Businesses eager to integrate AI should take the following steps to avoid similar mistakes:
1. Human Oversight is Non-Negotiable
AI should assist, not replace, critical decision-making processes. Employees must verify AI-generated content, especially in legal, financial, or customer-facing roles.
2. Train Employees, Not Just AI
Workers must understand AI’s capabilities and limitations. Training should focus on best practices for using AI effectively while mitigating risks.
3. Establish Clear AI Policies
Companies must create guidelines for AI use, addressing issues like data privacy, accuracy verification, and responsibility for AI-generated content.
4. Test AI Before Deploying Publicly
AI models should be rigorously tested in controlled environments before being integrated into customer interactions or mission-critical workflows (see the sketch after this list).
5. Embrace AI, but with Caution
AI is not a magic bullet. While it can drive efficiency, businesses that rush into AI adoption without proper safeguards risk legal, financial, and reputational damage.
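What might such testing look like in practice? The Python sketch below runs a chatbot against questions with known policy answers and flags any response that contradicts them. The `ask_chatbot` function is a hypothetical stand-in (here it returns a canned, deliberately wrong answer so the harness has something to catch); in real use it would call your vendor’s test endpoint, and the question list would come from your official policy documents.

```python
# Each entry: a question, a phrase the answer must contain, and a phrase
# that would contradict official policy if it appeared.
POLICY_CHECKS = [
    ("Can I apply for a bereavement fare after my trip?",
     "before travel", "after travel"),
    ("Are non-refundable tickets eligible for a full refund?",
     "not eligible", "full refund is available"),
]

def ask_chatbot(question: str) -> str:
    """Hypothetical stand-in for the chatbot's test endpoint.

    Returns a canned, deliberately wrong answer so the harness has
    something to catch; replace with a real API call in practice.
    """
    return "You may request the bereavement discount up to 90 days after travel."

def run_policy_checks() -> list[str]:
    """Return a description of every answer that drifts from policy."""
    failures = []
    for question, required, forbidden in POLICY_CHECKS:
        answer = ask_chatbot(question).lower()
        if required not in answer or forbidden in answer:
            failures.append(f"{question!r} -> {answer!r}")
    return failures

if __name__ == "__main__":
    for failure in run_policy_checks():
        print("POLICY DRIFT:", failure)
    # A clean run (no output) is a precondition for public deployment.
```

Run before every release, a harness like this would have caught the Air Canada chatbot’s bereavement-fare error before a customer ever saw it.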
Next month, on April 16, 2025, I will be speaking at The Quality Coach® Human Resources Leadership Forum at Liberty Hall Culture Center in Washington, MO, on AI deployment strategies, focusing on practical steps to integrate AI effectively while avoiding common pitfalls. AI is here to stay, but responsible implementation is the key to leveraging its benefits without falling into costly traps.
Final Thoughts
AI offers unprecedented opportunities, but only for organizations that use it wisely. Companies must balance enthusiasm with diligence, ensuring AI enhances operations rather than creating liabilities. By learning from these real-world failures, business leaders can deploy AI more strategically—gaining competitive advantages while avoiding costly setbacks.
(Images omitted: examples of AI-generated images, in which rendered text is often garbled.)
About Dr. Johnathan Mell
Dr. Mell is a professor, researcher, and technology innovator specializing in artificial intelligence, human behavior, and engineering. He leads the ScionAI Lab, which focuses on making AI systems more human-like, understandable, and practical for real-world applications.
He has advised industry leaders on trustworthy and ethical AI adoption, helping organizations navigate the challenges of AI implementation without falling for hype. He also serves as a fractional VP of Software for Custom Technologies, a St. Louis-based manufacturing and engineering firm.
TQC is a proud provider of SHRM Professional Development Credits (PDCs) for SHRM-CP® or SHRM-SCP® recertification activities. TQC's HRLF events can help you maintain your SHRM certification, and attendance at our luncheon earns 1.25 PDCs!