
Serverless Briefing — AWS Lambda Adds Container Image Support

AWS enabled Lambda functions to run from OCI-compatible container images up to 10 GB, aligning serverless packaging with Docker tooling while preserving managed scaling, observability, and security controls.


Executive briefing: AWS Lambda gained Open Container Initiative (OCI) image support, letting teams package functions as container images of up to 10 GB stored in Amazon Elastic Container Registry (ECR). The release aligns Lambda with prevailing container practices, preserves managed scaling and pay-per-invoke economics, and extends official base images with runtime interface clients for Node.js, Python, Java, .NET, and Go.

Organizations can now standardize serverless and container delivery pipelines, reuse hardened base images, and include sizeable machine learning libraries, media codecs, or specialized system utilities without repackaging to ZIP archives. Lambda still handles multi-AZ availability, automatic scaling, logging to CloudWatch, X-Ray tracing, and optional Lambda extensions, while Amazon maintains the host infrastructure and isolation model.

What changed and why it matters

  • OCI-compatible packaging. Teams build and push images with Docker or cloud-native buildpacks, then reference the image URI when creating Lambda functions. The feature supports both Linux x86_64 and Arm64 architectures and can run images up to 10 GB, removing the 250 MB unzipped deployment-package ceiling.
  • Familiar tooling. Existing CI/CD pipelines that already lint, scan, and sign container images can now produce Lambda artifacts, reducing duplicate packaging logic across microservices and functions.
  • Runtime coverage. AWS publishes base images and runtime interface clients (RICs) that implement the Lambda Runtime API, enabling fast bootstrapping for managed languages and custom runtimes.
  • Operational continuity. Observability, concurrency limits, provisioned concurrency, and IAM controls behave the same as ZIP-based functions, preserving operational guardrails.

Packaging workflow

The container path mirrors standard Docker practices while honoring Lambda constraints:

  1. Choose a base image. Start from an AWS-provided base (e.g., amazon/aws-lambda-nodejs:14, amazon/aws-lambda-python:3.10, amazon/aws-lambda-provided) or a thin distro such as alpine plus the Lambda Runtime Interface Emulator (RIE) for local testing.
  2. Copy application code and dependencies. Use multi-stage builds to keep the final image lean. Include compiled artifacts, shared libraries, or large ML models that previously exceeded ZIP limits. The Lambda handler is invoked via the RIC (bootstrap for custom runtimes or language-specific entrypoints).
  3. Optimize the image. Strip build tools, cache directories, and unused locales. Set CMD or ENTRYPOINT to the runtime interface client so Lambda can invoke the handler without wrapper scripts.
  4. Publish to Amazon ECR. Push the image to a private repository in the same AWS account and Region as the function, or share cross-account with resource-based policies. Image provenance can be enforced by signing images with AWS Signer and verifying signatures in the deployment pipeline.
  5. Create or update the Lambda function. Reference the ECR image URI, specify the architecture, memory, and timeout, and configure environment variables and VPC settings as needed. Note that Lambda layers cannot be attached to image-based functions; bake shared helper content into the image instead.
  6. Test locally. Use the Lambda Runtime Interface Emulator or sam local start-api to simulate invocation before deploying to production stages.
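A minimal Dockerfile following these steps might look like the sketch below. The Python version, `requirements.txt`, `app.py`, and the `app.handler` name are assumptions for illustration; the AWS-provided base image already sets the runtime interface client as its entrypoint.

```dockerfile
# Sketch only: app.py, requirements.txt, and the handler name are hypothetical.
FROM public.ecr.aws/lambda/python:3.10

# Install dependencies into the task root so the runtime can import them.
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy function code.
COPY app.py ${LAMBDA_TASK_ROOT}

# The base image's ENTRYPOINT is the runtime interface client;
# CMD names the handler as module.function.
CMD ["app.handler"]
```

From here the usual flow applies: `docker build`, push the tagged image to ECR, then create the function with `aws lambda create-function --package-type Image` referencing the image URI.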

Teams already using Kubernetes or Amazon ECS can align Dockerfiles across workloads, reducing drift between long-running containers and event-driven functions. Because Lambda copies the image to an encrypted internal registry, keep image sizes reasonable to curb cold-start data transfer.

Performance and security considerations

  • Cold starts. Larger images add download time on first invocation of a fresh execution environment. Minimize layers, compress assets, and prefer provisioned concurrency for latency-sensitive APIs.
  • Startup profile. Place initialization code in the global scope to benefit from execution environment reuse. Defer heavyweight model loading until required or cache artifacts on the /tmp volume across invocations.
  • Image hygiene. Apply OS and language runtime updates regularly. Use continuous image scanning (Amazon ECR Enhanced Scanning, AWS Inspector, or third-party tools) and signing to enforce supply-chain integrity.
  • IAM least privilege. CI/CD roles should have scoped ecr:BatchCheckLayerAvailability, ecr:PutImage, and lambda:UpdateFunctionCode permissions. Execution roles need pull-only access to the specific ECR repository.
  • Network behavior. Lambda pulls and caches the image when the function is created or updated, not on each invocation, so image pulls do not traverse your VPC. VPC-attached functions whose code calls ECR, S3, or other AWS APIs at runtime should use VPC endpoints to avoid NAT bottlenecks.
  • Observability. CloudWatch Logs and X-Ray continue to operate as with ZIP packages. You can also bundle or attach Lambda extensions for log forwarding, custom metrics, or secrets retrieval.
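The startup guidance above can be sketched as a handler that performs one-time initialization at global scope and lazily caches a (hypothetical) model artifact on the /tmp volume; the file name, payload, and `TABLE_NAME` variable are illustrative assumptions:

```python
import os

# Global scope runs once per execution environment (cold start)
# and is reused across warm invocations.
CONFIG = {"table": os.environ.get("TABLE_NAME", "demo")}

_MODEL_PATH = "/tmp/model.bin"  # /tmp persists across warm invokes
_model = None


def _load_model():
    """Lazily load a hypothetical model, caching the artifact in /tmp."""
    global _model
    if _model is None:
        if not os.path.exists(_MODEL_PATH):
            # In a real function this might download from S3; faked here.
            with open(_MODEL_PATH, "wb") as f:
                f.write(b"example-weights")
        with open(_MODEL_PATH, "rb") as f:
            _model = f.read()
    return _model


def handler(event, context):
    # Heavyweight loading is deferred until first use, then cached.
    model = _load_model()
    return {"table": CONFIG["table"], "model_bytes": len(model)}
```

Deferring the load keeps cold starts short for invocations that never touch the model, while warm invokes pay nothing after the first call.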

Migration steps

  1. Inventory candidates. Identify functions constrained by the 250 MB unzipped package limit, complex native dependencies, or language runtimes not covered by managed Lambda versions.
  2. Design the Dockerfile. Mirror the existing handler behavior, ensure the correct working directory, and keep the Lambda runtime interface client as the container entrypoint. Validate the architecture (x86_64 or Arm64) matches performance and cost goals.
  3. Port infrastructure-as-code. Update AWS CloudFormation, AWS CDK, Serverless Framework, or Terraform definitions to reference ImageUri instead of ZIP artifacts. Include ECR repository creation and lifecycle policies.
  4. Update pipelines. Add image build, scan, sign, and push stages. Preserve existing unit, integration, and load tests, and gate promotion on both functional and security results.
  5. Run validation. Exercise canary or alias-based deployments, monitor p95 latency, and verify that concurrency scaling meets requirements with the new image size. Compare cold-start metrics to ZIP baselines.
  6. Document rollback paths. Keep the previous ZIP package available for rapid rollback if image performance regresses. Use versioned ECR tags to simplify promotion or reversal.
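The infrastructure-as-code change in step 3 amounts to swapping the ZIP artifact reference for an image URI. A CloudFormation sketch is below; the resource names, account ID, repository, role ARN, and tag are placeholders. Note that `Runtime` and `Handler` are omitted for image-based functions.

```yaml
# Illustrative CloudFormation fragment; names, ARNs, and the image URI are placeholders.
Resources:
  OrdersFunction:
    Type: AWS::Lambda::Function
    Properties:
      PackageType: Image          # no Runtime/Handler for image-based functions
      Code:
        ImageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:v1.2.0
      Role: arn:aws:iam::123456789012:role/orders-lambda-exec   # placeholder execution role
      Architectures: [x86_64]
      MemorySize: 1024
      Timeout: 30
```

Pinning a versioned tag such as `v1.2.0` (rather than `latest`) keeps promotion and rollback a one-line change, per step 6.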

Operational governance

Because Lambda still abstracts the underlying fleet, platform teams should codify budgets and guardrails for image usage. Apply service control policies to restrict which accounts can publish public images, require image scanning before deployment, and set ECR lifecycle rules to remove stale tags. Observability baselines should track image size trends, init duration, and error rates after each rollout. For regulated workloads, document how the container approach preserves Lambda’s sandbox isolation, per-function IAM roles, and encryption in transit and at rest.
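As a concrete example of the least-privilege guardrails described above, a scoped pull policy for the principal that creates or updates image-based functions might look like the fragment below; the Region, account ID, and repository name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PullFromOneRepositoryOnly",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/orders"
    },
    {
      "Sid": "EcrAuth",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}
```

Scoping the first statement to a single repository ARN prevents a compromised pipeline role from pulling unrelated images in the account.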

Container image support also improves portability for hybrid or disaster recovery scenarios: the same image can run on Lambda, Amazon ECS on Fargate, or Kubernetes with minor configuration differences, enabling consistent workload definitions across environments.

Follow-up: AWS subsequently added Arm64 base images, configurable ephemeral storage up to 10 GB, and integrations with Lambda SnapStart for Java (where compatible), making container packaging a mainstream path for enterprise serverless teams.

