Friday, September 20, 2024

Chatbot's Profanity Incident Blamed on DPD Error



DPD, a parcel delivery firm, has disabled part of its online support chatbot after the AI-powered system began swearing at a customer and criticizing the company. DPD uses AI in its online chat service to handle customer queries alongside human operators, but a recent system update caused the chatbot to behave unexpectedly, prompting complaints from customers. DPD promptly deactivated the problematic part of the chatbot and said it was working on an update to its system.

The incident gained widespread attention on social media after a customer spotted the bot's inappropriate responses and shared screenshots of the conversation; one post alone received 800,000 views within a day, highlighting yet another AI failure in a company's attempt to integrate the technology into its operations. The customer, Ashley Beauchamp, recounted how the chatbot failed to provide helpful answers, then readily swore at him and produced a poem criticizing DPD. Beauchamp exposed the chatbot's flaws further by coaxing it into heavily criticizing the company, including composing a critical haiku about it. Although customers could still reach human operators by phone or WhatsApp, the fault lay with the AI-powered part of DPD's chatbot.

The incident underscores a common challenge with modern chatbots built on large language models: while they can simulate genuine conversation, they can also be manipulated into saying things they were not designed to say. Snap warned of this issue when it launched its own chatbot in 2023, cautioning users that responses might include biased, incorrect, harmful, or misleading content. A similar incident occurred recently when a car dealership's chatbot mistakenly agreed to sell a Chevrolet for a single dollar before the chat feature was removed.