AI pillar · Reference

AI terminology glossary

Key terms you’ll encounter in AI discussions. Bookmark this for reference.

← Back to AI Fundamentals Training

Core concepts

Artificial intelligence (AI)
Computer systems that perform tasks normally requiring human intelligence. Umbrella term covering many approaches.
Machine learning (ML)
AI systems that learn patterns from data rather than following explicit programming rules. Most modern AI is ML.
Deep learning
ML using artificial neural networks with multiple layers. Powers image recognition, speech, and language models.
Neural network
Computing system inspired by biological brains. Layers of interconnected nodes that process information.
Large language model (LLM)
AI trained on massive text datasets to predict and generate language. GPT-4, Claude, and Gemini are LLMs.
Generative AI
AI that creates new content—text, images, code, audio—rather than just classifying or predicting.

Technical terms

Training data
The dataset used to teach an AI model. Quality and biases in training data directly affect model behaviour.
Parameters
The adjustable values within a model. GPT-4 reportedly has over 1 trillion parameters.
Fine-tuning
Additional training on a specialised dataset to adapt a general model for specific tasks.
Prompt
The input text given to an AI model. “Prompt engineering” is the practice of crafting effective prompts.
Inference
Using a trained model to make predictions on new data. Distinct from training.
Token
The basic unit of text for LLMs—roughly a word or word-part. Models have token limits.
Embedding
Numerical representation of text, images, or other data that captures semantic meaning.
RAG (Retrieval-Augmented Generation)
Technique that enhances AI outputs by retrieving relevant information from external sources.

Risk and safety terms

Hallucination
When an AI generates false or fabricated information presented as fact. A fundamental LLM limitation.
Bias
Systematic errors in AI outputs that reflect biases in training data or model design.
Alignment
Ensuring AI systems behave according to human intentions and values. A major research challenge.
Prompt injection
Attack where malicious inputs manipulate AI behaviour by overriding system instructions.
Model drift
Degradation in model performance over time as real-world data diverges from training data.
Explainability (XAI)
The ability to understand and explain how an AI reached its outputs. Critical for high-stakes decisions.

Governance terms

AI governance
Frameworks for responsible development and deployment of AI systems within organisations.
Model card
Documentation describing a model’s intended use, limitations, training data, and evaluation results.
Risk-based approach
Regulatory strategy that applies requirements based on the potential harm of AI systems (e.g., EU AI Act).
Human-in-the-loop
System design requiring human oversight or approval before AI outputs are acted upon.
Conformity assessment
Process to evaluate whether an AI system meets regulatory requirements before deployment.