Thursday 19 September 2024

What happens when you suspect that AI is spreading false information about you?

Imagine a scenario where you are at home with your family when suddenly your phone starts buzzing with messages from people you know. They are warning you about something they have seen about you on social media. It's a terrible feeling.

In my case, I received a screenshot that supposedly came from Elon Musk's chatbot Grok, but I couldn't confirm its authenticity. The screenshot placed me on a list of the worst spreaders of disinformation on Twitter, alongside prominent US conspiracy theorists. As a journalist, this was definitely not the kind of top 10 list I wanted to be a part of.

Since Grok is not accessible in the UK, I asked other AI chatbots, ChatGPT and Google's Bard, to create the same list using the same prompt. Both refused. Bard even claimed it would be "irresponsible" to fulfil my request.

As someone who has extensively covered AI and regulation, I am well aware of the concerns people have about how quickly our laws are keeping up with this rapidly evolving and highly disruptive technology. Experts around the world agree that humans should always have the ability to challenge the actions of AI. Increasingly, AI tools are not only generating content about us but also making decisions that affect our lives.

The UK currently has no dedicated AI regulation; the government believes these issues should be handled by existing regulators. Determined to set the record straight, I reached out to X, the platform behind Grok. I received no response, which is not uncommon for media inquiries.

I then approached two UK regulators. The Information Commissioner's Office, responsible for data protection, suggested I contact Ofcom, which enforces the Online Safety Act. However, Ofcom informed me that the list did not fall under the act because it did not involve criminal activity. Taking action would require civil proceedings, which meant I would need a lawyer.
While a few legal cases related to AI are ongoing worldwide, there is still no clear precedent. In the US, radio presenter Mark Walters is suing OpenAI, the creator of ChatGPT, after the chatbot falsely accused him of charity fraud. Similarly, an Australian mayor threatened legal action after the same chatbot inaccurately claimed he had been convicted of bribery. In reality, he was the whistleblower in the case; the AI tool had misinterpreted the data. That dispute was eventually settled.

I consulted two lawyers who specialise in AI matters. The first declined to assist me. The second informed me that I was venturing into uncharted territory in terms of defamation law in England and Wales. She acknowledged that what had happened to me could be considered defamation, since I was identifiable and the list had been published. However, the burden of proof would lie with me: I would have to demonstrate that the content had caused harm, establishing that being accused of spreading misinformation had negative consequences for me as a journalist.

I was frustrated by my inability to find out how I had ended up on that list or who had seen it. It was particularly aggravating that I couldn't access Grok myself. Grok has a "fun mode" that can produce provocative responses, so I wondered whether it was intentionally winding me up. AI chatbots are known to "hallucinate", meaning they sometimes generate false information, baffling even their creators. These chatbots carry disclaimers warning users that their output may not be reliable and that responses may not be consistent.

To get to the bottom of the situation, I consulted my colleagues in BBC Verify, a team of journalists dedicated to verifying information and sources. They investigated and concluded that the original screenshot accusing me of spreading misinformation may itself have been fabricated. The irony was not lost on me.
My experience showed me the challenges we face as AI plays an increasingly prominent role in our lives. Regulators must ensure there is a straightforward process for humans to challenge AI's actions. If AI is spreading false information about you, where do you even begin? I thought I knew the answer, but I discovered that it remains a difficult path to navigate.