The AWS Landing Zone Checklist Nobody Gives You Before a SOC 2 Audit
Most SOC 2 guides focus on policies, procedures, and organizational controls. That's important, but when your auditor sits down and starts asking about your AWS environment, they're looking at infrastructure. Specifically, they want to see evidence that your cloud controls are actually implemented — not just documented in a policy PDF nobody reads.
This is the checklist we use when building SOC 2-ready landing zones. It's not exhaustive for the entire SOC 2 framework, but it covers the AWS infrastructure controls that come up in virtually every audit.
Organization and account structure
Your auditor wants to see separation of concerns. A single AWS account with production, staging, and dev workloads sharing the same IAM policies is a finding waiting to happen.
- Multi-account structure with AWS Organizations — at minimum: Management, Security/Tooling, Log Archive, and Workload accounts
- Organizational Units that reflect your environment boundaries (Production, Non-Production, Security, Sandbox)
- Service Control Policies attached at the OU level, not individual accounts — this proves you're enforcing boundaries systematically, not ad hoc
The OU structure matters because your auditor will ask "how do you prevent a developer in staging from accessing production data?" If the answer involves trusting people to follow a wiki page, that's not a control.
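To make "enforced by policy" concrete, here's a minimal sketch of a region-guardrail SCP of the kind you'd attach at the OU level. The region list and the global-service exemptions are assumptions — adjust them to your actual footprint.

```python
import json

# Hypothetical SCP: deny all actions outside approved regions, with
# carve-outs for global services that only exist in us-east-1.
# Regions and the NotAction list are placeholders for illustration.
REGION_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*", "organizations:*", "route53:*",
                "cloudfront:*", "support:*", "budgets:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}

print(json.dumps(REGION_GUARDRAIL, indent=2))
```

Attached to the Production OU, this applies to every account placed under it — including accounts created later — which is exactly the systematic enforcement the auditor is probing for.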
Logging — the non-negotiable
If there's one area where auditors have zero tolerance, it's logging. You need to prove that every meaningful action in your environment is recorded, stored immutably, and retained for the required period.
- CloudTrail organization trail — enabled across all accounts, all regions, with management and data events
- CloudTrail logs shipped to a dedicated Log Archive account that workload accounts cannot modify or delete
- AWS Config enabled in every account and region — recording all resource types
- VPC Flow Logs enabled on all VPCs
- S3 access logging on all buckets (especially any bucket holding customer data)
- ALB/ELB access logs enabled
- API Gateway access logging with a standard log format
The Log Archive account is critical. It should have an SCP that prevents anyone — including admins — from deleting or modifying log buckets. Your auditor will specifically ask who has access to delete logs. The answer should be "nobody, it's enforced by policy."
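A sketch of what that enforcement looks like as an SCP on the Log Archive account's OU. Bucket names here are placeholders, and the action list is a starting point, not an exhaustive one.

```python
import json

# Hypothetical SCP for the Log Archive OU: nobody, including account
# admins, can delete log objects or weaken the bucket configuration.
# The bucket ARN is a placeholder.
LOG_PROTECTION = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectLogArchive",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteBucket",
                "s3:PutBucketPolicy",
                "s3:PutLifecycleConfiguration",
            ],
            "Resource": [
                "arn:aws:s3:::example-org-cloudtrail-logs",
                "arn:aws:s3:::example-org-cloudtrail-logs/*",
            ],
        }
    ],
}

print(json.dumps(LOG_PROTECTION, indent=2))
```

Because SCPs apply even to the account root user, this gives you the "nobody, it's enforced by policy" answer with evidence to back it up.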
Threat detection and monitoring
Having logs is step one. Your auditor also wants to see that you're actively monitoring for threats, not just storing data nobody looks at.
- GuardDuty enabled across all accounts — delegated to your Security Tooling account
- Security Hub enabled with at least the AWS Foundational Security Best Practices standard and the CIS AWS Foundations Benchmark
- Macie enabled if you're handling sensitive data (PII, PHI, financial records)
Security Hub is particularly useful for audits because it gives you a compliance score against known frameworks. When your auditor asks "how do you continuously assess your security posture," pointing at a Security Hub dashboard with FSBP and CIS scores is a strong answer.
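For reference, enabling those two standards programmatically means building a subscription request for Security Hub's `batch_enable_standards` API. The ARN formats below follow the documented pattern but should be verified against your region before use.

```python
# Sketch of the payload boto3's securityhub.batch_enable_standards()
# expects. ARN formats are assumptions -- verify for your region.
def standards_requests(region: str) -> list:
    fsbp = (
        f"arn:aws:securityhub:{region}::standards/"
        "aws-foundational-security-best-practices/v/1.0.0"
    )
    cis = (
        "arn:aws:securityhub:::ruleset/"
        "cis-aws-foundations-benchmark/v/1.2.0"
    )
    return [{"StandardsArn": fsbp}, {"StandardsArn": cis}]

# client.batch_enable_standards(
#     StandardsSubscriptionRequests=standards_requests("eu-west-1"))
print(standards_requests("eu-west-1"))
```

In practice you'd run this (or the Terraform/CloudFormation equivalent) from the delegated Security Tooling account so the standards apply organization-wide.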
Evidence collection
This is where most teams scramble before an audit. You need to produce evidence that your controls have been operating effectively over the audit period — not just that they exist today.
- AWS Audit Manager with SOC 2 assessment framework active — this automatically collects evidence from Config, CloudTrail, and Security Hub
- Evidence stored in a dedicated S3 bucket in the Log Archive account
- Config rules that map to SOC 2 Trust Services Criteria — at minimum: S3 public access checks, encryption checks, IAM password policy, root account MFA
Audit Manager isn't magic — it doesn't make you compliant. But it does automate the evidence collection that would otherwise take your team weeks of manual screenshot gathering. We set it up on every landing zone we build, even if the client isn't planning a SOC 2 audit yet. The evidence accumulates over time, and when they do decide to pursue certification, they have months of historical data ready.
Encryption
Short section because it's straightforward, but auditors always check:
- S3 default encryption enabled on all buckets (SSE-KMS preferred over SSE-S3 for customer data)
- EBS default encryption enabled in every region
- RDS encryption at rest enabled
- KMS keys with appropriate key policies — not using AWS-managed keys for sensitive workloads
- TLS 1.2+ enforced on all endpoints
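For the S3 item, a sketch of the SSE-KMS default-encryption payload that boto3's `s3.put_bucket_encryption` accepts. The key ARN and bucket name are placeholders.

```python
# Sketch: default-encryption configuration for put_bucket_encryption.
# The KMS key ARN is a placeholder.
def default_encryption(kms_key_arn: str) -> dict:
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                # Bucket Keys reduce KMS request costs on busy buckets
                "BucketKeyEnabled": True,
            }
        ]
    }

cfg = default_encryption("arn:aws:kms:eu-west-1:111122223333:key/example")
# s3.put_bucket_encryption(Bucket="my-bucket",
#                          ServerSideEncryptionConfiguration=cfg)
print(cfg)
```

Pair this with the `S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED` Config rule so drift is detected automatically, not just set once.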
Cost controls (yes, really)
SOC 2 includes Availability as one of its Trust Services Criteria, and runaway costs that lead to service disruption are an availability concern. Auditors increasingly ask about cost governance.
- AWS Cost Anomaly Detection enabled
- Budget alerts configured with appropriate thresholds
- Health event notifications routed to your ops team
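"Appropriate thresholds" can be as simple as a monthly cost budget that alerts at 80% of actual spend. A sketch of the AWS Budgets `create_budget` parameters, with the amount, threshold, and email as placeholders:

```python
# Sketch of the parameters boto3's budgets.create_budget() expects:
# a monthly cost budget that emails when actual spend crosses 80%.
# Name, amount, and address are placeholders.
def budget_with_alert(name: str, monthly_usd: str, email: str) -> dict:
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetType": "COST",
            "TimeUnit": "MONTHLY",
            "BudgetLimit": {"Amount": monthly_usd, "Unit": "USD"},
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": email}
                ],
            }
        ],
    }

params = budget_with_alert("workload-prod", "1000", "ops@example.com")
# budgets.create_budget(AccountId="111122223333", **params)
```

Route the notifications to a monitored channel, not an individual's inbox; the auditor will ask who actually sees the alerts.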
The controls your auditor won't ask about but should
If you're running AI workloads — especially anything using Bedrock — there are controls that aren't in the standard SOC 2 framework yet but that forward-thinking auditors are starting to ask about:
- Bedrock model invocation logging — every prompt and response recorded
- AI services opt-out policies at the organization level
- Data classification for any data flowing into AI models
These aren't required for SOC 2 today, but they demonstrate maturity. And if you're also pursuing HIPAA or handling regulated data, Bedrock logging is effectively mandatory.
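Bedrock invocation logging is a single account-level configuration. A sketch of the logging config that boto3's `bedrock.put_model_invocation_logging_configuration` accepts, with the bucket name and prefix as placeholders; point it at the Log Archive account for the same immutability guarantees as everything else.

```python
# Sketch of the loggingConfig payload for
# bedrock.put_model_invocation_logging_configuration().
# Bucket name and key prefix are placeholders.
LOGGING_CONFIG = {
    "s3Config": {
        "bucketName": "example-org-bedrock-invocation-logs",
        "keyPrefix": "bedrock/",
    },
    # Capture prompts/responses across all modalities
    "textDataDeliveryEnabled": True,
    "imageDataDeliveryEnabled": True,
    "embeddingDataDeliveryEnabled": True,
}

# bedrock.put_model_invocation_logging_configuration(
#     loggingConfig=LOGGING_CONFIG)
print(LOGGING_CONFIG)
```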
The pattern
If you look at this list, there's a pattern: everything is deployed at the organization level, enforced by policy (not by trust), logged to an immutable destination, and monitored continuously. That's the difference between a landing zone that passes audits because it's secure and one that was built only to pass an audit. The first one keeps working after the auditor leaves.
Building toward SOC 2? We can get your landing zone audit-ready.
Get in touch