Trust and verifAI: Navigating the World of Generative AI
Who is that actor again? Where I used to fire up IMDb, now I ask ChatGPT. It's faster and the answers read well. But I also realized I was taking those answers at face value, letting ChatGPT define the truth for me. That raises an important point: ChatGPT is not infallible. Sometimes the underlying source is wrong, and sometimes ChatGPT's rendering of that source is flawed.
In an era where generative AI tools are becoming integral to our daily lives, it is crucial to reflect on how we process the information they provide. As we increasingly rely on AI for quick insights, it becomes ever more important to distinguish between data, information, and insight, and to keep access to the underlying source data.
From Data to Insight: A Necessary Distinction
Data is the foundation: raw facts and figures, the unprocessed building blocks of knowledge. Information is what we get by organizing and interpreting data, giving it context and meaning. Insight goes a step further—it is the actionable knowledge we gain from analyzing information in the context of our specific needs or questions.
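A small sketch can make the distinction concrete. The readings, cities, and threshold below are invented for illustration; the point is the three layers, not the numbers:

```python
# Data: raw, uninterpreted readings (city, temperature in degrees C).
readings = [("Amsterdam", 19.0), ("Amsterdam", 23.0),
            ("Seville", 41.0), ("Seville", 39.0)]

# Information: the data organized and given context, here an average per city.
by_city = {}
for city, temp in readings:
    by_city.setdefault(city, []).append(temp)
averages = {city: sum(temps) / len(temps) for city, temps in by_city.items()}

# Insight: an actionable conclusion drawn from the information for a specific
# question, e.g. "where do we need a heat protocol?" (threshold chosen arbitrarily).
heat_risk = [city for city, avg in averages.items() if avg > 35.0]

print(averages)   # {'Amsterdam': 21.0, 'Seville': 40.0}
print(heat_risk)  # ['Seville']
```

The same raw numbers support many different insights; which one you reach depends on the question you bring to them, which is exactly the step that requires judgment.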
When using tools like Google, we are presented with a plethora of information sources. It is up to us to filter these, assess their credibility, and extract meaningful insights. This process requires critical thinking and judgment.
ChatGPT and Direct Insights
Generative AI like ChatGPT takes a different approach by providing what appears to be direct insights. Instead of leading users to various sources of information, it synthesizes data and delivers conclusions directly. While this can save time and streamline decision-making, it raises a critical question: On what basis are these insights generated?
Understanding the sources and data underlying AI-generated insights is essential. Unlike traditional search engines, where you can review and verify multiple sources, the inner workings and sources of AI models can be opaque. This opacity can lead users to accept AI-generated insights without sufficient scrutiny, potentially resulting in misinformation or misinterpretation.
The Risk of Misleading Conclusions
There have been notable instances where AI-generated conclusions were inaccurate or misleading. In one widely shared example, a model read 9.11, the version number of a piece of software, as a date and drew a connection to the terrorist attacks of September 11. AI models have also produced incorrect historical facts and misattributed quotes, leading to false narratives being accepted as truth. In another case, a model suggested a medical diagnosis that, on review, was not supported by clinical evidence; acting on it could have had serious consequences.
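The version-number mix-up is easy to reproduce outside any AI model, because the token 9.11 genuinely is ambiguous. A short sketch (the comparison strings are illustrative) shows the same characters parsing three different ways:

```python
from datetime import datetime

token = "9.11"

# As a decimal number: 9.11 is *smaller* than 9.9.
print(float(token) < float("9.9"))                           # True

# As a software version: 9.11 comes *after* 9.9.
print(tuple(map(int, token.split("."))) >
      tuple(map(int, "9.9".split("."))))                     # True, (9, 11) > (9, 9)

# As a month.day date: September 11.
print(datetime.strptime(token, "%m.%d").strftime("%B %d"))   # September 11
```

Without explicit context, a model has to guess which reading was intended, and guessing wrong is how a version number turns into a reference to 9/11.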
Such examples highlight the importance of maintaining healthy skepticism towards AI-generated insights and underscore the need for verification, especially when decisions based on these insights have significant consequences.
The Necessity of Verification
As AI becomes integrated into more software solutions, from customer service chatbots to decision-making systems, the need for transparency and traceability of data sources grows. Organizations and institutions must be able to trace the origin of the data supporting AI conclusions to ensure accuracy and reliability.
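What that traceability can look like in practice is a response that carries its supporting sources along with the answer, so a human (or another system) can check the claim against the records it was derived from. The sketch below is a minimal illustration of the idea, not any particular vendor's API; the knowledge base, record ids, and retrieval logic are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    text: str                                    # the conclusion shown to the user
    sources: list = field(default_factory=list)  # ids of the records it rests on

# A stand-in for the database that feeds an AI agent.
KNOWLEDGE_BASE = {
    "doc-17": "Release 9.11 of the scheduler shipped on 2024-03-02.",
    "doc-42": "Release 9.9 of the scheduler shipped on 2023-11-20.",
}

def answer_with_sources(question: str) -> SourcedAnswer:
    # Toy retrieval: keep records that share a word with the question.
    words = set(question.lower().split())
    hits = [doc_id for doc_id, text in KNOWLEDGE_BASE.items()
            if words & set(text.lower().split())]
    # The synthesis step is faked here; in a real system a model would draft
    # this text. What matters is that the record ids travel with the answer.
    return SourcedAnswer(text="Release 9.11 is the newer one.", sources=sorted(hits))

result = answer_with_sources("Which release is newer, 9.11 or 9.9?")
print(result.text)     # Release 9.11 is the newer one.
print(result.sources)  # ['doc-17', 'doc-42'] -> the records to verify against
```

An answer that arrives without its sources can only be believed or disbelieved; an answer that arrives with them can be checked.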
In a world increasingly dependent on AI, "trust but verify" should be the guiding principle. While we embrace the efficiency and possibilities of AI, it is crucial to remain vigilant about the foundations on which insights are based. Understanding and questioning these foundations will enable us to use AI responsibly, ensuring it serves as a tool for genuine insight rather than a source of potential misinformation.
At Tjep's, we use AI in a way that ensures the source is always accessible. We provide easy access to databases and instructions that feed AI agents, allowing you to maintain control while innovating.