Infrastructure pillar · Module 2 of 6

Data centres and hardware

Ever wondered where “the cloud” actually lives? Spoiler: it’s in massive buildings full of computers, with enough power to run a small town and enough cooling to keep it all from melting. Let’s take a tour.

2.1 What’s inside a data centre?

Picture a warehouse. Now fill it with rows of tall cabinets (we call them “racks”), each stuffed with computers. Add industrial air conditioning, backup generators, and serious security. That’s a data centre.

  • Servers. The computers that do the work. They look like pizza boxes stacked on shelves. Some handle web requests, others crunch numbers, others store data. Modern servers are incredibly powerful—one rack might have more computing power than your entire company had 20 years ago.
  • Storage systems. Banks of hard drives or solid-state drives, often with clever tricks to protect against failures. If one drive dies (and they do), the data is safe on others.
  • Networking gear. Switches and routers that connect everything together. Data centres have their own internal networks that can move data at mind-boggling speeds.
  • Power systems. Multiple electricity feeds, massive batteries (UPS systems) for instant backup, and diesel generators for longer outages. Good data centres can run for days without utility power.
  • Cooling. Servers generate serious heat. Data centres use industrial air conditioning, careful airflow design, and sometimes even liquid cooling. After the IT equipment itself, cooling is usually the biggest slice of the electricity bill.

2.2 The tier system: how reliable is reliable?

The Uptime Institute created a rating system that’s become the industry standard:

Tier I & II: Basic

Single path for power and cooling (Tier II adds some redundant components, but still only one path). Planned maintenance usually means downtime. Fine for non-critical stuff. Think: a company’s internal test environment.

Tier III: Concurrently maintainable

Redundant components so you can do maintenance without downtime. Multiple paths, but only one active. This is where most serious business systems live.

Tier IV: Fault tolerant

Multiple active paths for everything. Can survive any single equipment failure without blinking. Expensive to build. Think: banking systems, major cloud providers.

Reality check

Higher tier = more expensive. Not everything needs Tier IV. The smart move is matching the tier to what you’re running. Your public website? Maybe Tier III. Your disaster recovery backup? Tier II might be fine.

Key numbers to know

  • PUE (Power Usage Effectiveness). Total facility power divided by IT power. PUE of 2.0 means half your electricity goes to cooling and overhead. PUE of 1.2 is excellent—most power goes to actual computing. The industry average is around 1.5-1.6.
  • Availability percentages. “Five nines” (99.999%) means about 5 minutes of downtime per year. “Three nines” (99.9%) means about 8.7 hours per year. Big difference!
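
If you want to sanity-check those figures, here is a minimal Python sketch that turns the PUE formula and the availability percentages into concrete numbers. The 1,500 kW / 1,000 kW facility is a made-up example, not a measurement from any real data centre.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Allowed downtime (minutes per year) for a given availability percentage."""
    minutes_per_year = 365.25 * 24 * 60  # ~525,960 minutes
    return minutes_per_year * (1 - availability_pct / 100)

# Hypothetical facility: 1,500 kW total draw, 1,000 kW of IT load.
print(f"PUE: {pue(1500, 1000):.2f}")                                   # 1.50 -> roughly the industry average
print(f"Five nines:  {downtime_minutes_per_year(99.999):.1f} min/yr")  # ~5.3 minutes
print(f"Three nines: {downtime_minutes_per_year(99.9) / 60:.1f} hrs/yr")  # ~8.8 hours
```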

💡 The key insight

Data centres are engineering marvels designed around one goal: keep the computers running no matter what. Every design decision—from where they’re built to how the cables are organised—serves reliability. When you use cloud services, you’re benefiting from billions of dollars of data centre engineering.

Free resources to learn more