Throughout this programme, we have explored how data, algorithms, and neural networks come together to create intelligent systems. But intelligence without responsibility can cause real harm. In this final lesson, we examine the human side of AI - the biases it inherits, the ethical dilemmas it raises, and what we can all do about it.
AI bias occurs when a system produces results that are systematically unfair to certain groups of people. The AI is not deliberately prejudiced - it simply reflects the patterns in its training data and the assumptions of its designers.
Amazon's Hiring Tool (2018)
Amazon built an AI to screen job applications. It was trained on CVs submitted over the previous ten years - a period when the tech industry was overwhelmingly male. The AI learned to penalise CVs that contained the word "women's" (as in "women's chess club") and downgraded graduates of all-women's universities. Amazon scrapped the tool.
Facial Recognition Failures
Research by Joy Buolamwini at MIT found that commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to just 0.8% for lighter-skinned men. The training data simply did not represent all faces equally.
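Findings like these come from auditing a system's error rate separately for each demographic group. The sketch below shows the idea in miniature - it is an illustrative toy, not the study's actual methodology, and the data is invented to echo the published disparity:

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute the error rate separately for each demographic group.

    `results` is a list of (group, correct) pairs, where `correct`
    is True if the system classified that example correctly.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented data echoing the disparity reported in the study.
results = (
    [("darker-skinned women", False)] * 35 + [("darker-skinned women", True)] * 65
    + [("lighter-skinned men", False)] * 1 + [("lighter-skinned men", True)] * 99
)

for group, rate in error_rates_by_group(results).items():
    print(f"{group}: {rate:.0%} error rate")
```

An overall error rate across the whole test set would hide this gap entirely, which is why testing per group matters.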
Why did Amazon's AI hiring tool discriminate against women?
Bias can enter an AI system at every stage: in the historical data it is trained on, in how that data is sampled and labelled, in the features its designers choose, and in how the system is used once deployed.
AI does not create bias from thin air. It amplifies the biases already present in human decisions, historical records, and societal structures. The data is a mirror - and sometimes we do not like what it reflects.
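To make this concrete, here is a deliberately oversimplified sketch of how a pattern like the one in Amazon's tool can emerge. This is not Amazon's system - it is a toy word-scorer trained on a handful of invented historical decisions, which is already enough to make the word "women's" score negatively:

```python
from collections import Counter

# Invented historical screening decisions from a male-dominated era.
hired = [
    "software engineer chess club captain",
    "software engineer rugby team",
    "data analyst chess club",
]
rejected = [
    "software engineer women's chess club captain",
    "data analyst women's coding society",
]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears on hired CVs.

    A negative score means the word appeared mostly on rejected CVs,
    so a naive screener learns to penalise it - regardless of merit.
    """
    hired_counts = Counter(word for cv in hired for word in cv.split())
    rejected_counts = Counter(word for cv in rejected for word in cv.split())
    return {
        word: hired_counts[word] / len(hired) - rejected_counts[word] / len(rejected)
        for word in set(hired_counts) | set(rejected_counts)
    }

scores = word_scores(hired, rejected)
for word in ["engineer", "chess", "women's"]:
    print(f"{word}: {scores[word]:+.2f}")
```

The scorer has no concept of gender; it simply learned that a word correlated with past rejections - which is exactly how historical bias gets baked into a model.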
AI can now generate realistic fake videos, images, and audio - known as deepfakes. While the technology has creative uses (film effects, accessibility tools), it also poses serious risks: political disinformation, fraud and impersonation, and a broader erosion of trust in what we see and hear.
Detecting deepfakes is becoming an arms race. As generation tools improve, so must detection tools - but they are always playing catch-up.
In 2019, criminals used AI-generated voice cloning to impersonate a CEO and trick an employee into transferring £220,000. The voice was so convincing that the employee never suspected it was fake.
If you saw a video of a world leader declaring war, how would you verify whether it was real? What tools or sources would you trust? In a world of deepfakes, critical thinking about media becomes a survival skill.
AI automates tasks that were previously done by humans. This creates both opportunities and challenges: some work becomes faster, cheaper, and safer, while other roles shrink or change shape.

Tasks at risk of automation tend to be routine, repetitive, and predictable - work that follows clear rules or relies on spotting patterns in data.

Tasks less likely to be automated tend to involve creativity, empathy, complex judgement, or physical work in messy, unpredictable environments.
The key distinction is between automating tasks and replacing jobs. Most jobs are collections of many tasks - AI tends to automate some tasks within a role rather than eliminating the role entirely.
Which type of work is LEAST likely to be fully automated by AI?
AI systems are hungry for data, and that hunger raises significant privacy questions: how much is collected, whether people have genuinely consented, how long it is kept, and what the system can infer about us that we never chose to share.
The tension is real: more data generally makes AI better, but collecting more data can violate individual privacy.
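One widely recommended safeguard is data minimisation - collecting and keeping only what the task genuinely needs. The sketch below is a hypothetical illustration with invented field names, not a complete privacy solution:

```python
# Suppose our model only needs these fields to make its prediction.
REQUIRED_FIELDS = {"age_band", "postcode_district", "purchase_history"}

def minimise(record: dict) -> dict:
    """Strip a raw user record down to the fields the model needs.

    Anything not in REQUIRED_FIELDS (name, phone number, exact
    address...) is never stored, so it can never leak or be misused.
    """
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

raw = {
    "name": "Alice Example",
    "phone": "07700 900123",
    "age_band": "25-34",
    "postcode_district": "M1",
    "purchase_history": ["book", "laptop"],
}

print(minimise(raw))
# {'age_band': '25-34', 'postcode_district': 'M1', 'purchase_history': ['book', 'laptop']}
```

Minimisation does not resolve the tension, but it shrinks the harm a breach or misuse can cause.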
Researchers demonstrated that AI could predict a person's sexual orientation from a photo with higher accuracy than humans - raising profound questions about privacy, consent, and the limits of what AI should be allowed to infer.
Leading organisations have converged on a set of principles for building AI responsibly:
Fairness
AI should treat all people equitably. Models should be tested across different demographics to ensure no group is disadvantaged.
Transparency
People affected by AI decisions deserve to understand how those decisions are made. Black-box models should be accompanied by explanations.
Accountability
There must be clear ownership when AI causes harm. "The algorithm did it" is not an acceptable defence.
Privacy
AI systems must respect data protection laws and individual rights. Data collection should be minimised to what is truly necessary.
Safety and Reliability
AI should be tested rigorously before deployment, especially in high-stakes domains like healthcare, criminal justice, and finance.
If an AI system denies someone a loan, who is responsible - the developer who built the model, the bank that deployed it, or the data that trained it? Accountability in AI is one of the hardest questions we face.
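To make the transparency principle less abstract, here is a minimal, hypothetical sketch of one common explanation technique: breaking a decision down into per-feature contributions. For a linear scoring model this breakdown is exact; real credit models and explanation tools are far more sophisticated, and the feature names and weights below are invented:

```python
# Invented weights for a toy linear loan-scoring model:
# score = sum(weight * value) + BIAS; approve if score >= 0.
WEIGHTS = {"income_band": 0.6, "years_at_address": 0.3, "missed_payments": -1.2}
BIAS = -0.5

def explain_decision(applicant: dict) -> None:
    """Print the decision and how much each feature pushed it either way."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values()) + BIAS
    print("decision:", "approved" if score >= 0 else "declined", f"(score {score:+.2f})")
    for feature, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {contribution:+.2f}")

explain_decision({"income_band": 2, "years_at_address": 1, "missed_payments": 1})
```

Even this crude breakdown gives the applicant something to contest - it shows that missed payments, not income, drove the rejection - which is a first step towards meaningful accountability.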
Which responsible AI principle states that people should understand how AI decisions are made?
You do not need to be an AI engineer to make a difference. Here is how you can contribute to more responsible AI: question automated decisions that affect you, ask how your data is collected and used, verify media before you share it, and support calls for transparency and accountability in the systems around you.
Technology is not neutral. The choices made by the people who build, deploy, and regulate AI shape the world we all live in. Your awareness and your voice matter.
Congratulations - you have completed Level 2: Foundations! You now understand how data, algorithms, neural networks, training, and ethics come together in the world of AI. The next step is to get hands-on.