
DevOps & IaC Certification Prep

Domain guides, practice questions, and free lab resources for DevOps and infrastructure-as-code certifications — HashiCorp Terraform Associate, Docker Certified Associate, Splunk Core/Power User, AWS DevOps Pro, eJPT, and PNPT.

57 questions · 1 hour · HashiCorp

Terraform Associate (003)

The Terraform Associate validates your ability to define, provision, and manage infrastructure as code using HashiCorp Terraform. It is the most widely sought IaC certification in cloud and DevOps hiring, appearing in job descriptions for DevOps engineers, cloud engineers, SREs, and platform engineers. The exam tests conceptual understanding and CLI proficiency — no hands-on lab component, but scenarios require working knowledge. Recommended: 6+ months of Terraform usage in a cloud environment before sitting.

Objective 1

Understand Infrastructure as Code (IaC) Concepts

The shift from manual/imperative provisioning to declarative IaC. Benefits: version control, repeatability, self-documentation, auditability, and drift detection. Terraform's position in the IaC landscape vs Ansible (configuration management), CloudFormation (AWS-native), Pulumi (code-first). Understanding desired state vs current state: Terraform computes the difference (plan) and applies changes to reach desired state.

Objective 2

Understand Terraform's Purpose

Terraform is multi-cloud and multi-provider: AWS, Azure, GCP, Kubernetes, GitHub, Datadog, Vault, and 3,000+ providers. A provider is the plugin that communicates with each platform's API. Terraform manages the full infrastructure lifecycle: init → plan → apply → destroy. State is the source of truth — Terraform records what it last created or modified. Remote state with backends (S3, Terraform Cloud, Azure Blob) enables team collaboration and state locking.

Objective 3

Understand Terraform Basics

HCL (HashiCorp Configuration Language) syntax: resources, data sources, variables (input), outputs, locals, modules. Resource syntax: resource "aws_instance" "web" { ... }. Variable types: string, number, bool, list, map, object, tuple, any. Variable precedence (highest to lowest): CLI flags → .tfvars file → environment variables (TF_VAR_) → defaults. terraform.tfvars and *.auto.tfvars are automatically loaded.
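A minimal sketch tying these pieces together (the resource and variable names and the AMI ID are illustrative, not from any real environment):

```hcl
# Input variable with a type and a default. The default has the lowest
# precedence — TF_VAR_instance_type, terraform.tfvars, or -var on the
# CLI all override it.
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

locals {
  common_tags = { Project = "demo" }    # local value for reuse
}

# Resource block: "aws_instance" is the type, "web" the local name
resource "aws_instance" "web" {
  ami           = "ami-0abc1234"        # illustrative placeholder
  instance_type = var.instance_type
  tags          = local.common_tags
}

output "public_ip" {
  value = aws_instance.web.public_ip    # exposed after apply
}
```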

Objective 4

Use the Terraform CLI

Core workflow: terraform init (downloads providers and modules), terraform validate (syntax check), terraform plan (preview changes — always review before apply), terraform apply (execute; use -auto-approve for automation), terraform destroy (remove all managed resources). Other essential commands: terraform fmt (auto-format HCL), terraform show (human-readable state), terraform state list / state show / state mv / state rm (state manipulation), terraform import (bring existing resources under Terraform management), terraform workspace (manage named workspaces).
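The core workflow above, as a command sequence (the flags shown are common usage, not the only options):

```shell
terraform init                 # download providers/modules, configure backend
terraform fmt -recursive       # auto-format all HCL in the tree
terraform validate             # syntax and internal consistency check
terraform plan -out=tfplan     # preview changes; save the plan for apply
terraform apply tfplan         # execute exactly the saved plan
terraform state list           # inspect resources tracked in state
terraform destroy              # tear down everything this config manages
```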

Objective 5

Interact with Terraform Modules

Modules encapsulate reusable infrastructure code. Calling a module: module "vpc" { source = "./modules/vpc" ... }. Sources: local path, Terraform Registry, GitHub, S3 bucket. The Terraform Registry (registry.terraform.io) is the public module library — it hosts community and verified provider modules. Versioning: always pin module versions in production (version = "~> 3.0"). Module inputs are variables; module outputs expose values to the calling module.
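A sketch of a Registry module call with a pinned version (terraform-aws-modules/vpc/aws is a real, widely used module; the name and CIDR values here are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"   # Terraform Registry source
  version = "~> 5.0"                          # pin: any 5.x, never 6.0

  name = "demo-vpc"                           # module input variable
  cidr = "10.0.0.0/16"
}

# Consume a module output in the calling configuration
output "vpc_id" {
  value = module.vpc.vpc_id
}
```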

Objective 6

Navigate Terraform Workflow

State locking: prevents concurrent applies from corrupting state. Remote backends (S3 + DynamoDB for AWS, Azure Blob, GCS, Terraform Cloud) provide locking automatically. terraform refresh (superseded by terraform plan -refresh-only in modern versions) updates state to match actual infrastructure. Drift: when infrastructure diverges from state/config. terraform plan detects drift and shows the changes required to reconcile. Lifecycle meta-arguments: create_before_destroy, prevent_destroy (causes any plan that would destroy the resource to error — protects critical resources), ignore_changes (prevents Terraform reacting to out-of-band changes to specific attributes).
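A sketch of the lifecycle meta-arguments on a critical resource (the resource type and elided arguments are illustrative):

```hcl
resource "aws_db_instance" "prod" {
  # ... engine, sizing, and credential arguments elided ...

  lifecycle {
    prevent_destroy       = true    # any plan that would destroy this errors out
    create_before_destroy = true    # replacement is built before the old one is removed
    ignore_changes        = [tags]  # out-of-band tag edits won't trigger a diff
  }
}
```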

Objective 7

Implement and Maintain State

State stores resource IDs and attributes — it is the mapping between config and real-world resources. Never edit state files manually. Sensitive values in state: state is not encrypted by default in local backends — use remote backends with encryption at rest. terraform state mv: rename/move a resource in state without destroying/recreating it. terraform state rm: remove a resource from state management without destroying it. Backend migration: terraform init -migrate-state.
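Common state operations, sketched (resource addresses and the instance ID are illustrative):

```shell
# Rename a resource in state without touching the real infrastructure
terraform state mv aws_instance.web aws_instance.frontend

# Stop managing a resource (it keeps running in the cloud)
terraform state rm aws_instance.legacy

# Adopt an existing, hand-created resource into state
terraform import aws_instance.adopted i-0123456789abcdef0

# Move state to a newly configured backend
terraform init -migrate-state
```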

Objective 8

Read, Generate, and Modify Configuration

Built-in functions: length(), concat(), merge(), lookup(), toset(), flatten(), file(), base64encode(), jsonencode(). Dynamic blocks for creating repeated nested blocks. for_each and count for resource iteration. for expressions for transforming collections. depends_on: explicit dependency when implicit dependency cannot be detected. Data sources: read existing infrastructure not managed by this Terraform config.
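A sketch combining a dynamic block with a for expression (the security-group layout and port list are illustrative):

```hcl
variable "ingress_ports" {
  type    = set(number)
  default = [22, 80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # dynamic block: generates one ingress block per port in the set
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

# for expression: transform a list into a map
locals {
  envs = { for e in ["dev", "prod"] : e => upper(e) }
}
```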

Objective 9

Understand Terraform Cloud Capabilities

Terraform Cloud free tier: remote state, remote execution (no need for local provider credentials), team collaboration, cost estimation. Workspaces in Terraform Cloud represent separate environments (dev/staging/prod) with separate state and variables. Sentinel: policy-as-code framework for enforcing compliance rules before apply (paid tier). Private Registry: host proprietary modules. Variable sets: share common variables (e.g., cloud credentials) across multiple workspaces without duplication.

55 questions · 90 minutes · Docker Inc.

Docker Certified Associate (DCA)

The DCA validates professional-level Docker and container ecosystem skills. It is vendor-administered and tests both conceptual knowledge and scenario-based practical skills. Relevant for DevOps engineers, platform engineers, SREs, and developers who deploy containerised applications. Docker's market dominance makes container skills universally expected — the DCA provides formal certification of those skills.

Domain 1 · 25%

Orchestration

Docker Swarm: initialising a swarm (docker swarm init), adding nodes (manager and worker nodes), deploying services (docker service create), scaling (docker service scale), rolling updates, global vs replicated services. Swarm vs Kubernetes: Swarm is simpler and built into Docker; Kubernetes has broader ecosystem support. Stack deployment with docker-compose files: docker stack deploy -c compose.yml myapp. Constraints, labels, and placement preferences for service scheduling.

Domain 2 · 20%

Image Creation, Management and Registry

Dockerfile best practices: use official base images, minimise layers, order instructions from least to most frequently changed (to exploit the build cache), use multi-stage builds to reduce final image size, avoid running as root. Key instructions: FROM, RUN, COPY (preferred over ADD), WORKDIR, ENV, EXPOSE, CMD vs ENTRYPOINT (ENTRYPOINT always runs; CMD supplies default arguments and can be overridden). Docker Hub: public registry; private repos require authentication. Image tagging: image:tag; latest is just a tag name — it is not automatically the most recently pushed image. docker image prune and docker system prune for cleanup.
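A minimal multi-stage Dockerfile sketch for a statically compiled Go binary (module layout and versions are illustrative):

```dockerfile
# Stage 1: build with the full toolchain (~800 MB image, discarded later)
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: minimal runtime — only the static binary ships
FROM scratch
COPY --from=builder /app /app
USER 1000                      # numeric UID: don't run as root
ENTRYPOINT ["/app"]
```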

Domain 3 · 15%

Installation and Configuration

Docker Engine components: dockerd (daemon), docker CLI, containerd, runc. Storage drivers: overlay2 (default on modern Linux, preferred), devicemapper (legacy). Logging drivers: json-file (default), syslog, journald, fluentd. Runtime configuration: /etc/docker/daemon.json. Docker context: switch between Docker endpoints without changing environment. Rootless Docker: run Docker without root privileges for improved security. Docker Desktop on macOS/Windows uses a lightweight Linux VM.

Domain 4 · 15%

Networking

Network drivers: bridge (default for standalone containers — private internal network), host (container shares host network namespace — no isolation), overlay (multi-host network in Swarm — uses VXLAN), macvlan (assign MAC address to container; appears as physical device), none (no networking). Published ports: -p host_port:container_port. Container DNS: containers on the same user-defined bridge network resolve each other by container name. docker network inspect: examine connected containers and IP assignments.
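Container-name DNS on a user-defined bridge, sketched (network, container names, and the Postgres password are placeholders):

```shell
# User-defined bridge: containers resolve each other by name
docker network create appnet
docker run -d --name db  --network appnet \
  -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name web --network appnet -p 8080:80 nginx:alpine
# "web" can now reach the database at hostname "db"

docker network inspect appnet   # see connected containers and IP assignments
```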

Domain 5 · 15%

Security

User namespaces: map container root to non-root on the host. Seccomp profiles: restrict system calls available to containers. AppArmor/SELinux: MAC policies for container processes. Content trust (Docker Content Trust / Notary): cryptographically sign and verify images. Read-only containers: --read-only flag mounts the container filesystem read-only — write to tmpfs for temporary data. Capabilities: containers run with a reduced set by default; --cap-drop ALL --cap-add NET_BIND_SERVICE enforces least privilege. Never use --privileged unless absolutely required.
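The security flags above combined into one hedged docker run sketch (myapp:1.0 is a placeholder image):

```shell
# Hardened launch: read-only rootfs with a tmpfs scratch dir, all
# capabilities dropped except binding low ports, non-root user, and
# no setuid-based privilege escalation.
docker run -d \
  --read-only --tmpfs /tmp \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --user 1000:1000 \
  --security-opt no-new-privileges \
  myapp:1.0
```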

Domain 6 · 10%

Storage and Volumes

Volume types: volumes (managed by Docker, stored in /var/lib/docker/volumes — preferred for persistence), bind mounts (maps a host path into the container — suitable for development), tmpfs mounts (in-memory, discarded on container stop — for secrets/temp data). docker volume create, docker volume inspect, docker volume rm. Volume backup: docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/myvolume.tar.gz /data. Named volumes survive container removal; anonymous volumes do not by default.

Splunk Core Certified User → Power User → Enterprise Admin

Splunk Certifications

Splunk is the dominant SIEM and log analysis platform in enterprise security operations centres. Splunk certifications are highly valued for SOC analysts (Tier 1–3), threat hunters, and security engineers. The three-tier certification path progresses from core search skills to advanced SPL and dashboard building to enterprise administration. Splunk provides official free training and a free cloud sandbox — there is no reason to rely on third-party materials for these certs.

Level 1 · ~60 questions

Splunk Core Certified User

Entry-level. Covers: Splunk interface navigation, basic search (SPL keywords, field selection, time ranges), saving and sharing searches, creating basic reports, alerts, and dashboards. Key SPL: index=, source=, sourcetype=, field extraction (rex command), stats commands (count, avg, sum), table, sort, dedup, eval. Recommended preparation: Splunk's free Intro to Splunk eLearning (4 hours) from splunk.com/training.

Level 2 · ~65 questions

Splunk Core Certified Power User

Intermediate. Adds: advanced field extraction with regex (rex, erex), knowledge objects (field aliases, calculated fields, lookups, tags, event types), advanced SPL (transaction, join, append, eventstats, streamstats, foreach), data models and pivot, advanced visualisations (choropleth maps, heatmaps, single-value with sparklines), and scheduled reports. Focus area: Knowledge Objects — many candidates underestimate how thoroughly the exam tests these.
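A sketch of Power User-level SPL chaining several of these commands (the index, field names, and the app_owners lookup are illustrative):

```spl
index=web sourcetype=access_combined
| stats count AS hits, avg(response_time) AS avg_rt BY uri_path
| eventstats sum(hits) AS total_hits
| eval pct=round(hits/total_hits*100, 2)
| lookup app_owners uri_path OUTPUT owner
| sort -hits
```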

Level 3 · ~60 questions

Splunk Enterprise Certified Admin

Architecture and administration. Covers: Splunk components (search head, indexer, forwarder types — heavy vs universal), distributed deployment, index management (buckets: hot/warm/cold/frozen, retention), user authentication (LDAP integration, role-based access, capabilities), data inputs (network inputs, file/directory monitoring, scripted inputs, modular inputs), index clustering, and search head clustering. Performance tuning: summary indexing, data model acceleration, report acceleration.

eLearnSecurity · Hands-on lab exam · 48 hours

eJPT — eLearnSecurity Junior Penetration Tester

The eJPT (now administered by INE Security) is the most widely recommended entry-level practical offensive security certification. Unlike CEH (multiple choice only), the eJPT requires candidates to compromise machines in a real network during a 48-hour lab exam. It is affordable, beginner-accessible, and structured for those transitioning from CompTIA Security+ into offensive security. Ideal first step before attempting OSCP or the TCM Security PNPT.

Section 1

Assessment Methodologies

Penetration testing phases: reconnaissance, scanning, exploitation, post-exploitation, and reporting. Information gathering: passive (OSINT — Whois, Google dorking, Shodan, Recon-ng, theHarvester) vs active (Nmap, banner grabbing). Vulnerability scanning with OpenVAS and Nessus Essentials. Footprinting and enumeration — what information is needed before exploitation. Legal and ethical framework — rules of engagement, scope definition.

Section 2

Host and Network Auditing

Nmap core usage: -sS (SYN scan), -sV (version detection), -O (OS detection), -A (aggressive — combines OS detection, version detection, default scripts, and traceroute), -p- (all ports), -oN / -oX (output formats), --script (NSE script execution). Network protocols: TCP/IP, DNS, HTTP, SMB, FTP, SSH, RDP — understanding what each service does and which vulnerabilities are commonly associated with it. Wireshark basics: reading packet captures, filtering (http, tcp.port==80, ip.addr==), following TCP streams.
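A typical Nmap sequence, sketched against an illustrative lab target (10.10.1.5):

```shell
# Full port sweep first, then targeted service/version detection
# with default NSE scripts on whatever came back open.
nmap -sS -p- -oN full-ports.txt 10.10.1.5
nmap -sV -sC -p 22,80,445 -oN services.txt 10.10.1.5

# NSE: enumerate SMB shares and users on a Windows host
nmap --script smb-enum-shares,smb-enum-users -p 445 10.10.1.5
```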

Section 3

Host and Network Penetration Testing

Exploitation with Metasploit Framework: search, use, show options, set RHOSTS, run/exploit. Meterpreter commands: sysinfo, getuid, hashdump, shell, upload/download, portfwd (port forwarding for pivoting). Manual exploitation: identifying CVEs, downloading and modifying public exploits. Post-exploitation basics: privilege escalation (Linux: SUID binaries, sudo -l, cron jobs; Windows: token impersonation, unquoted service paths), lateral movement concepts.
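The Linux privilege-escalation checks above, as a quick triage sketch (run on the compromised host):

```shell
sudo -l                                  # commands runnable as root?
find / -perm -4000 -type f 2>/dev/null   # SUID binaries worth checking
cat /etc/crontab                         # root cron jobs with weak perms?
ls -la /etc/cron.d/
id && uname -a                           # user context and kernel version
```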

Section 4

Web Application Penetration Testing

OWASP Top 10 overview. SQL injection: manual testing (' OR '1'='1), sqlmap for automated exploitation. XSS: reflected, stored, DOM-based — impact and basic payloads. Directory traversal and file inclusion (LFI/RFI). Burp Suite Community (free): proxy setup, intercept, repeater, intruder (rate-limited in free tier). Basic authentication bypass techniques. Web server technology fingerprinting (WhatWeb, curl headers, Wappalyzer).

eJPT exam strategy

TCM Security · 5-day practical exam · report required

PNPT — Practical Network Penetration Tester

The PNPT is issued by TCM Security and is the most practically focused mid-level penetration testing certification available. It is frequently cited alongside OSCP as a hiring signal for junior penetration tester roles. The PNPT requires candidates to compromise an Active Directory environment over 5 days and write a professional penetration testing report — making it unique in requiring BOTH technical exploitation AND written communication skills. All preparation materials from TCM Security are free via the TCM Security Academy free tier.

Section 1

Practical Ethical Hacking

Full internal network penetration testing methodology. Reconnaissance: passive and active. Scanning and enumeration: Nmap, Netdiscover, enum4linux, smbclient, RPC enumeration. Exploitation: EternalBlue (MS17-010), Responder (LLMNR/NBT-NS poisoning), NTLMv2 hash capture, Pass-the-Hash, Kerberoasting, AS-REP Roasting. Post-exploitation and persistence. Pivoting with Metasploit and SSH tunnelling. Report writing: executive summary, technical findings (CVSS scoring), remediation recommendations.

Section 2

Active Directory Attacks

Active Directory is tested in depth — most enterprise environments use it and most PNPT exam environments involve AD compromise. Key concepts: domain structure (domains, forests, trusts), authentication protocols (NTLM, Kerberos TGT/TGS), privilege escalation within AD. Attack techniques: LLMNR poisoning (Responder), SMB relay, IPv6 attacks (mitm6), BloodHound enumeration, Pass-the-Hash, Pass-the-Ticket, Golden Ticket/Silver Ticket, DCSync, PrintNightmare. Tools: BloodHound (free, GitHub), Responder (free, GitHub), CrackMapExec, Impacket suite.

Section 3

Web Application & External Attacks

External reconnaissance: subdomain enumeration (Subfinder, Amass), certificate transparency (crt.sh), GitHub/GitLab secret hunting (truffleHog, GitLeaks). Web application testing: OWASP Top 10 in practice, Burp Suite Community for manual testing. Bug bounty methodology and scoping. OSINT for external penetration testing. Report writing for external assessments — how to communicate external attack surface to a non-technical audience.

PNPT vs OSCP — what to choose

Choose PNPT if:

Choose OSCP if:

Interactive · Timed · Fully explained

Interactive Practice Exam — DevOps & IaC Certs

A 20-question, 30-minute scenario-based practice test covering Terraform state management, Docker multi-stage builds, Kubernetes workloads, CI/CD pipelines, Splunk SPL, and offensive security concepts (eJPT/PNPT). Each item includes a detailed explanation with authoritative references. Progress auto-saves; you can pause and resume later.


Additional practice · Terraform, Docker, Splunk, eJPT, PNPT

Additional Practice Questions — DevOps & IaC Certs

These questions cover scenario-based IaC reasoning, container configuration, SIEM SPL, and offensive security concepts. For Terraform and Docker, the exam tests application of knowledge in context — not just definition recall.

1. (Terraform) You run terraform plan and observe the output shows a resource will be replaced (destroyed and recreated) rather than updated in-place. What is the MOST likely reason for this?

  • A) The provider version was upgraded
  • B) A change was made to an argument that is immutable after resource creation (e.g., changing an EC2 instance's AMI ID or an RDS instance's engine version)
  • C) The Terraform state file was deleted
  • D) The resource uses count instead of for_each
Answer: B. When you change an argument that cannot be modified in-place on an already-created resource (an immutable attribute), the cloud provider's API does not support updating that field on an existing resource, so Terraform must destroy the existing resource and create a new one with the updated attributes. Examples: changing an EC2 instance's AMI, changing an RDS cluster's engine type, changing an S3 bucket's name. The plan output shows -/+ (destroy then create) and identifies which attribute forced the replacement. To guard critical resources against unexpected replacement, use the lifecycle { prevent_destroy = true } meta-argument. Always review plan output before applying.

2. (Terraform) Your team is collaborating on Terraform code in a shared CI/CD pipeline. Two engineers run terraform apply simultaneously. What problem can occur and how does Terraform prevent it?

  • A) Nothing — Terraform applies are idempotent so concurrent runs are safe
  • B) State corruption — two concurrent applies both read the same state, compute different plans, and overwrite each other's state. Remote backends with state locking (e.g., S3 + DynamoDB, Terraform Cloud) prevent this by acquiring a lock before plan and releasing it after apply
  • C) Duplicate resources will be created, but Terraform will detect and clean them up on the next apply
  • D) The second apply will fail with a provider authentication error
Answer: B — state locking. State locking is critical for team Terraform workflows. Without locking, two concurrent applies both read the current state, both compute a plan from that snapshot, and then both write a new state — the second write overwrites the first, losing track of resources the first apply created. This produces state corruption and potentially orphaned cloud resources. State locking mechanisms: Terraform Cloud acquires a lock automatically; the AWS S3 backend requires a DynamoDB table for locking (dynamodb_table = "terraform-lock"). If a previous apply was killed mid-run, the lock may remain — use terraform force-unlock <LOCK_ID> to release it after verifying no apply is running. Local state has no locking — never use local state for team workflows.

3. (Docker) You need to build a Docker image for a Go application. The compiled binary is only 8 MB. You want the final production image to be as small as possible with no build tools. Which Dockerfile pattern achieves this?

  • A) Use FROM golang:1.21 as the base image and copy the source code in
  • B) Use a multi-stage build: FROM golang:1.21 AS builder to compile, then FROM scratch (or FROM alpine:3.19) and COPY --from=builder /app/binary /binary
  • C) Use docker build --compress to reduce the image size automatically
  • D) Use FROM ubuntu:latest and install Go via apt
Answer: B — multi-stage build. Multi-stage builds are the canonical pattern for minimal production images. Stage 1 (builder): use the full golang:1.21 image (~800 MB) to compile the binary. Stage 2 (final): start from scratch (completely empty — 0 MB) or alpine:3.19 (~7 MB, if you need a shell or SSL certs) and COPY --from=builder only the compiled binary. Final image size: ~8–15 MB instead of ~800 MB. No build tools, no compiler, no package manager in the production image — dramatically reduced attack surface. The scratch base requires a statically compiled binary (CGO_ENABLED=0 in Go). For images that need CA certificates for HTTPS, use alpine and apk add ca-certificates.

4. (Docker) A container has been configured with --network host. What is the security implication?

  • A) The container cannot make outbound network connections
  • B) The container bypasses Docker's network namespace isolation — it shares the host's network stack directly, can bind to any host port, and can observe all host network traffic
  • C) The container is isolated in its own private bridge network
  • D) The container uses the host's DNS but has a separate IP address
Answer: B. The host network driver removes the container's network namespace entirely. The container process uses the host's network stack as if it were running directly on the host — it shares the host IP, can bind to privileged ports (<1024) without NET_BIND_SERVICE, sees all network interfaces, and can sniff unencrypted host traffic. This eliminates the network-layer security boundary between container and host. --network host is sometimes used for performance (it eliminates NAT overhead) or for containers that manage host networking (e.g., monitoring agents). For production application containers, use the default bridge or a user-defined bridge network — never host mode without explicit justification. This is a common DCA exam scenario in the security domain.

5. (Splunk) A SOC analyst needs to find all failed SSH login attempts to Linux servers in the last 24 hours, display the top 10 source IP addresses by count, and alert if any single source IP exceeds 50 attempts. Which Splunk query finds the top source IPs?

  • A) index=linux sourcetype=syslog "Failed password" | top limit=10 src_ip
  • B) index=linux sourcetype=syslog "Failed password" | stats count by src_ip | sort -count | head 10
  • C) index=linux "SSH" | count src_ip | limit 10
  • D) Both A and B are correct
Answer: D — both are valid SPL approaches. Option A uses the top command, designed for exactly this use case: it returns the top N values of a field by count (with count and percent columns) — simple and concise. Option B chains stats count by src_ip → sort -count (descending) → head 10; more verbose, but it gives more control over output columns and is the pattern used in complex queries with multiple aggregations. The Splunk exam tests both patterns — know when to use top (quick ranked frequency) vs stats + sort (flexible, composable). For the alert threshold, appending | where count > 50 after the stats filters to only the IPs exceeding the threshold.

6. (eJPT / PNPT) During an internal network penetration test, you run Responder on the network and observe an NTLMv2 hash for user "jsmith@corp.local". You crack the hash and obtain the plaintext password "Winter2024!". The domain controller is at 10.10.1.1 and is running SMB on port 445. What is the MOST efficient next step to test for lateral movement?

  • A) Use CrackMapExec to spray the credentials across the entire subnet: crackmapexec smb 10.10.1.0/24 -u jsmith -p 'Winter2024!'
  • B) Log into the domain controller using RDP with jsmith's credentials
  • C) Run Mimikatz on the domain controller to dump all hashes
  • D) Submit the finding as medium severity and proceed to the next host
Answer: A — credential spraying with CrackMapExec. CrackMapExec (CME) is the standard tool for testing compromised credentials across an entire subnet via SMB/RDP/LDAP/WinRM. The command crackmapexec smb 10.10.1.0/24 -u jsmith -p 'Winter2024!' tests jsmith's credentials against all discovered Windows hosts at once; hosts where the account has admin access are marked (Pwn3d!). This efficiently maps the blast radius of the compromised account. Direct RDP to the DC (B) skips discovery of other accessible hosts and may be noisy. Mimikatz (C) requires a session on the target first — it is a post-exploitation tool, not a lateral-movement discovery tool. Stopping at "medium severity" (D) would be premature for a credential that may grant domain admin access. Reference: CrackMapExec (free, GitHub).

7. (Terraform) You want to create 5 identically configured EC2 instances with unique names (web-0 through web-4). Which Terraform configuration is CORRECT?

  • A) Use count = 5 and reference count.index in the name tag: Name = "web-${count.index}"
  • B) Copy-paste the resource block 5 times with unique resource names
  • C) Use a for loop in the provider configuration
  • D) Create a module and call it 5 times
Answer: A — count and count.index. Setting count = 5 creates 5 instances of the resource; count.index is an integer from 0 to count−1, available within the resource block during iteration. Access individual instances as aws_instance.web[0], aws_instance.web[2], etc. Prefer for_each over count when your instances have distinct identities (e.g., a set of unique hostnames) — for_each uses string keys, which are more stable than integer indices when items are removed from the middle of a list. The Terraform exam specifically tests the difference: count for ordered lists; for_each for maps/sets.
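The pattern from the answer, written out as valid HCL (the AMI ID is a placeholder):

```hcl
resource "aws_instance" "web" {
  count         = 5
  ami           = "ami-abc123"      # illustrative placeholder
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index}"     # yields web-0 … web-4
  }
}
```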

8. (Docker / Container Security) Which of the following container configurations represents the WORST security practice?

  • A) docker run --read-only --tmpfs /tmp myapp:1.0
  • B) docker run --cap-drop ALL --cap-add NET_BIND_SERVICE myapp:1.0
  • C) docker run --privileged myapp:1.0
  • D) docker run --user 1000:1000 myapp:1.0
Answer: C — --privileged. The --privileged flag gives the container access to ALL Linux capabilities, disables seccomp and AppArmor profiles, mounts the host's device filesystem, and allows the container to load kernel modules — effectively giving the container root access to the host. Any container-escape vulnerability in a --privileged container immediately grants full host root access. It should almost never be used in production. Option A (read-only filesystem + tmpfs for /tmp) is a security best practice. Option B (drop all capabilities, then add only what's needed) is the principle of least privilege applied to Linux capabilities. Option D (run as non-root UID 1000) prevents the container process from running as root — also a best practice. The DCA exam heavily tests container security — know these flags and their security implications.

9. (PNPT / Active Directory) During an internal pentest, you discover the network uses LLMNR (Link-Local Multicast Name Resolution) and NBT-NS. Which attack does this enable and what does it achieve?

  • A) DNS poisoning — you redirect all DNS traffic through your machine
  • B) LLMNR/NBT-NS poisoning with Responder — when a machine fails to resolve a hostname via DNS, it broadcasts an LLMNR/NBT-NS query; you respond claiming to be the target host and capture the NTLMv2 authentication hash the victim's machine sends for authentication
  • C) Kerberoasting — you request Kerberos service tickets for offline cracking
  • D) Pass-the-Hash — you reuse captured NTLM hashes without cracking them
Answer: B — LLMNR/NBT-NS poisoning. LLMNR and NBT-NS are fallback name-resolution protocols. When a Windows machine fails to resolve a hostname via DNS, it sends an LLMNR multicast broadcast: "Can anyone tell me the IP of \\FileServer1?" Any machine on the network — including an attacker's machine running Responder — can answer: "I am \\FileServer1!" The victim then attempts to authenticate to the attacker's fake server using NTLM, sending an NTLMv2 challenge-response hash, which Responder captures automatically. The hash can then be 1) cracked offline with Hashcat (targeting common passwords), or 2) relayed in an SMB relay attack to authenticate against other hosts without cracking. This is often the first foothold in an Active Directory assessment. Remediation: disable LLMNR and NBT-NS via Group Policy. References: Responder (free, GitHub); TCM Security — LLMNR Poisoning explained (free article).

10. (Terraform) You need to pass a database password to a Terraform resource without hardcoding it in your configuration files. Which approach is MOST secure?

  • A) Store the password in a terraform.tfvars file and commit it to Git with a .gitignore entry
  • B) Hardcode it in the resource block — it is encrypted in the state file
  • C) Use a secrets manager integration: retrieve the secret from AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault using a data source, and reference it in the resource without storing it in the config or state in plaintext
  • D) Pass it as an environment variable: export TF_VAR_db_password="mypassword" and reference it as a variable
Answer: C — secrets manager integration. Terraform state stores resource attributes in plaintext by default — marking a variable sensitive = true only redacts it from plans and outputs; the actual value IS stored in the state file. The most secure pattern keeps the secret out of your configuration and version control entirely: use a data source (data "aws_secretsmanager_secret_version" "db_password" { ... }) and pass data.aws_secretsmanager_secret_version.db_password.secret_string to the resource. Note that values read through a data source are still written to state, so an encrypted remote backend with strict access controls remains essential. Option D (environment variables) is acceptable and keeps secrets out of code, but the value still appears in state. Option A (.gitignore) is dangerous — accidentally committing secrets to Git is a leading cause of credential exposure. Option B is incorrect — Terraform state is NOT encrypted by default.
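A hedged sketch of the data-source pattern (the secret name is illustrative; fetched values are still recorded in state, so encrypted remote state remains essential):

```hcl
# Fetch the secret at plan time instead of hardcoding it in config
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password"     # illustrative secret name
}

resource "aws_db_instance" "main" {
  # ... engine and sizing arguments elided ...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```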
Case study · IaC misconfiguration in production

Real-World Walkthrough: Capital One 2019 — Cloud IaC Misconfiguration & SSRF

The Capital One data breach of 2019 exposed the personal information of over 100 million customers. The attacker was a former cloud engineer. The attack vector — a misconfigured WAF and an exposed instance metadata service — is directly testable in Terraform Associate (infrastructure misconfiguration), AWS Security Specialty (SSRF and IAM), and PNPT/eJPT (SSRF technique) exams.

What happened

  • The environment: Capital One ran a financial services application on AWS. A Web Application Firewall (WAF) was deployed to filter malicious requests. The application servers ran on EC2 instances with IAM roles — credentials were automatically provided via the instance metadata service (IMDS) at 169.254.169.254.
  • The misconfiguration: The WAF was misconfigured to allow SSRF (Server-Side Request Forgery) — requests from the application server to internal resources were not blocked. Specifically, the WAF allowed outbound connections from the web application to 169.254.169.254.
  • The attack: The attacker discovered the SSRF vulnerability, sent a crafted request to the web application, and the application's server-side request to 169.254.169.254/latest/meta-data/iam/security-credentials/ returned the temporary IAM credentials for the EC2 instance role. With those credentials, the attacker accessed over 700 S3 folders containing customer data.
  • Discovery: An outside tipster alerted Capital One in July 2019 that breach data had been posted on GitHub. The attacker was arrested; Capital One paid an $80 million regulatory fine and a $190 million class-action settlement.
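The SSRF chain above can be sketched with the metadata-service request pattern (169.254.169.254 is the real IMDSv1 endpoint; the vulnerable URL parameter and application hostname are hypothetical):

```shell
# Attacker supplies an internal URL to a parameter the server fetches:
#   https://app.example.com/proxy?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
# The server's own request first returns the role name, then a second
# request returns the temporary IAM keys for that role:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
# Remediation: require IMDSv2 (a session-token PUT before any GET),
# so simple GET-based SSRF can no longer read credentials.
```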

Map to DevOps & IaC cert domains

Free & reputable only · Verified links

Helpful Materials — DevOps & IaC Certifications

All resources below are free or free-tier. The best preparation for Terraform is building real infrastructure; for Docker, running containers locally; for offensive certs, completing free lab platforms. Prioritise doing over reading.

Terraform — free resources

Docker — free resources

Splunk — free resources

eJPT / PNPT — free resources

Communities (free)

Quick reference · Memorise before exam day

DevOps & IaC Cheatsheet

High-frequency CLI commands, HCL patterns, and Docker flags. Terraform exam frequently tests command flags; Docker exam tests network driver names and security flags.

Terraform CLI — core workflow

Terraform HCL patterns

Docker — key commands

Docker network drivers

Splunk SPL essentials

Study tools · Active recall · DevOps / IaC

Flashcards & Term-Matching Game

Active recall beats passive reading for long-term retention. Use the flashcards to drill definitions and the matching game to reinforce connections between concepts. Shuffle to mix domains and reset to start fresh. Keyboard navigation supported on flashcards.

Flashcard Deck — Key Terms


Term-Matching Game

Click a term on the left, then click its matching definition on the right. Correct pairs lock in green; wrong pairs flash red. Complete all pairs to advance to the next round.


Speed Round — True or False

You have 10 seconds per statement. Answer TRUE or FALSE before the timer runs out. Build a combo multiplier for consecutive correct answers and beat your session high score.


Fill in the Blank

Read the clue and type the missing term. One typo is forgiven for longer answers. Use the hint button if you're stuck — but it costs half the question's points.


Domain Sprint — Categorise the Term

A term appears — click the correct exam domain it belongs to. Correct selections score 100 pts; wrong selections deduct 25 pts. Master domain knowledge before exam day.
