AI pillar · Module 4 of 6
The real risks of AI
AI can go wrong in a lot of ways. Some risks are obvious, others are subtle. Understanding them helps you use AI responsibly and prepare for problems.
4.1 Bias: The pattern-matching problem
Remember: AI learns patterns from data. If the data is biased, the AI will be biased. This isn’t a bug—it’s how the system works.
- Hiring algorithms trained on historical data learned to favour candidates similar to past hires—often white men in tech roles.
- Facial recognition systems performed worse on darker-skinned faces because training data was predominantly lighter-skinned.
- Healthcare algorithms used healthcare spending as a proxy for health needs, disadvantaging Black patients who historically had less access to care.
The fix isn’t easy. You need diverse data, careful testing across groups, and ongoing monitoring. But first, you need to acknowledge the problem exists.
4.2 Safety and security concerns
- Hallucinations. LLMs confidently generate false information. A lawyer cited fake cases from ChatGPT in court. Medical advice from AI can be dangerously wrong.
- Prompt injection. Attackers can manipulate AI systems by hiding instructions in inputs. Your customer service bot could be tricked into ignoring its rules.
- Data leakage. AI systems can memorise and regurgitate training data, including sensitive information. They can also leak information from conversations.
- Overreliance. People trust AI too much. Automation complacency is real—when humans stop checking AI outputs, errors slip through.
The bigger picture risks
Near-term concerns
- Job displacement and economic disruption
- Deepfakes and misinformation
- Privacy erosion through surveillance
- Concentration of power in few companies
- Environmental impact (training is energy-intensive)
Long-term debates
- AI safety and alignment (will AI do what we want?)
- Autonomous weapons
- Economic inequality acceleration
- Existential risk (highly debated)
💡 The responsible approach
You don’t need to solve all AI ethics. But you do need to think about the risks in your specific context. What could go wrong? Who could be harmed? How would you know? What would you do about it?
Free resources to go deeper
- Course (free): Ethics of AI (edX) — University-level ethics exploration
- Reading: ProPublica: Machine Bias — The investigation that exposed algorithmic bias in criminal justice
- Tool: IBM AI Fairness 360 — Free toolkit for detecting bias in ML models