Expert warns 'we should not rely on AI too much' as hallucinations persist
OpenAI CEO Sam Altman recently warned that while AI tools are powerful, they can generate false or misleading information, and users shouldn't trust them blindly.
Picture: Pixabay.com
702's Gugs Mhlungu spoke to Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk and author of African Artificial Intelligence.
Listen to their conversation in the audio clip below.
It's rare for a tech CEO to openly caution users about the limitations of their own product.
According to recent reports, Sam Altman, co-founder and CEO of OpenAI (the company behind the viral chatbot ChatGPT), warned against over-relying on AI tools, noting that their output isn't always factual or accurate.
Altman is quoted as saying during a recent episode of the OpenAI podcast that "people have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much."
"...when we talk about AI hallucinating, we basically mean... AI is generating output that is false, fabricated, nonsensical, despite it appearing plausible. So grammatically it appears correct, sounds correct, but doesn't make sense."
- Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk
Nasila says that in the past couple of months there have been numerous examples of AI hallucinations causing real-world problems for companies, from lawyers citing fake cases to chatbots offering impossible deals.
"...some of these tools are just trained to give output without necessarily being tuned to provide context. So, just generating output word by word based on predictions does not necessarily make sense. Also, some of the tools, there's evidence that they could be manipulated, where people attacked a system and interfered with models and predictions, or even data was contaminated."
- Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk
"Not everything today can be handled by AI...studies have shown that up to only 50 something percent of code can be confidently automated with AI. Everything else is still so complex that it requires human beings to handle or human oversight."
- Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk
"I think most importantly, we need to realise these are tools, and we should not rely on them too much."
- Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk
Despite the warnings, not all users share this concern.
A listener of the 702 Weekend Breakfast show, who happens to be an AI trainer, says he trusts ChatGPT more than traditional search engines because it’s trained for accuracy.
Nasila, however, stressed that users should remain cautious.
"It's not conscious thinking. Someone could say it's mathematically conscious because it's generating output similar to what human beings would have done. But it's not human conscious."
- Dr Mark Nasila, Chief Data and Analytics Officer at First National Bank Risk