The Trust and Truth Dilemma
Alison Taylor
Ethical AI Champion
Join Alison Taylor as she explores AI's impact on reality, from viral photos like the Pope in a puffer jacket to deepfakes of politicians. Understand AI's boundaries and discover tools to verify information in the age of AI.
Sign up today
Sign up for Santander Open Academy to unlock your potential with our free, expert-led learning platform.
The Trust and Truth Dilemma
12 mins 7 secs
Key learning objectives:
Understand the limitations of artificial intelligence
Identify examples where artificial intelligence has gone wrong
Outline tools and resources to use when verifying AI output
Overview:
Understanding the limitations of AI is crucial as it increasingly impacts our lives. Alison Taylor, a business ethics professor, discusses how AI-generated misinformation, like deepfakes, can dangerously affect society and politics. She emphasises the importance of critical thinking, fact-checking, and knowing when to trust AI. This step guides learners in navigating AI's pitfalls, using tools to detect misinformation, and maintaining control over technology's influence. Always question AI outputs and stay sceptical to future-proof your career.
Our psychological biases cause us to rigorously defend our own image while being less diligent in verifying the accuracy of information shared by others. This bias stems from a natural instinct for self-preservation and the desire to maintain credibility. However, when it comes to information about others, we are prone to accepting it at face value without proper verification. This can lead to the spread of misinformation, as we do not apply the same rigorous standards to external information.
What are some limitations of AI, particularly Large Language Models (LLMs)?
AI, including Large Language Models (LLMs) like ChatGPT, can give the illusion of human-like communication but fundamentally lacks genuine understanding, lived experience, and common sense. LLMs are designed to predict the next word in a sentence based on vast amounts of training data, which can lead to "hallucinations"—instances where the AI generates incorrect or entirely fabricated information. These models do not distinguish between truth and fiction, and their output is only as reliable as the data they were trained on. Recognising these limitations is crucial: it helps you avoid relying on AI for critical tasks that demand factual accuracy, and it prepares you for the potential pitfalls when AI goes wrong.
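The "predict the next word" idea can be sketched with a toy bigram counter. This is an illustration only, vastly simpler than a real LLM, but it shows the key point: the model picks whatever word most often followed the current one in its training text, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): learn which word most often
# follows each word in a tiny, made-up training text.
training_text = "the cat sat on the mat the cat ate the fish"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower seen in training data,
    # regardless of whether the continuation is true or sensible.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it followed "the" most often
```

Because the prediction is purely statistical, a prompt the model has never seen simply returns nothing useful here; a real LLM, by contrast, will still generate a fluent-sounding continuation, which is exactly how confident-sounding hallucinations arise.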
When should AI be used, and what precautions should be taken?
AI can be beneficial for creative and exploratory tasks where absolute accuracy is not critical. For example, generating interesting images, videos, or creative text can be a suitable use of AI. However, for serious work that requires precise and factual information, it is essential to double-check AI-generated content against credible sources. Tools like DeepFake-o-meter, Snopes and GPTZero can help verify and fact-check AI-generated content. These precautions ensure that you are using AI responsibly and mitigating the risks of misinformation.
The opinions and viewpoints expressed in this video are those of the creator and do not necessarily reflect the views of any affiliated organisations.