SAP-C02 Practice Questions with Detailed Answers and Explanations

10 realistic SAP-C02 practice questions with detailed answer explanations covering all 4 exam domains. Sharpen your AWS Solutions Architect Professional skills.

By Sailor Team, April 17, 2026

The AWS Certified Solutions Architect - Professional (SAP-C02) exam tests your ability to design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS. Unlike the associate-level exam, SAP-C02 questions present complex, multi-layered scenarios that require you to weigh trade-offs between competing architectural approaches.

Below are 10 realistic scenario-based practice questions that mirror the depth and style of the actual exam. Each question is followed by a detailed explanation that walks through the reasoning — not just the correct answer, but why the other options fall short.

Before you start, review the key differences between SAP-C02 and SAA-C03 if you’re still deciding which certification to pursue.

How to Use These Practice Questions

Don’t just read them. For each question:

  1. Read the scenario carefully — identify the core requirement and any constraints (cost, latency, compliance, availability).
  2. Eliminate obviously wrong answers first — narrow it down to two plausible options.
  3. Choose based on the specific constraint — the correct answer always addresses the stated requirement more precisely.
  4. Read the explanation even if you got it right — understanding the reasoning strengthens your exam intuition.

SAP-C02 Exam Domains Overview

The questions below are mapped to the four exam domains:

Domain | Weight | Focus Areas
Domain 1: Design Solutions for Organizational Complexity | 26% | Multi-account strategies, cross-account access, hybrid networking
Domain 2: Design for New Solutions | 29% | Compute, storage, database, and application architecture choices
Domain 3: Continuous Improvement for Existing Solutions | 25% | Cost optimization, performance tuning, migration strategies
Domain 4: Accelerate Workload Migration and Modernization | 20% | Migration planning, application modernization, data migration

Question 1 — Multi-Account Strategy (Domain 1)

A global financial services company operates 45 AWS accounts across three business units. The security team needs centralized visibility into compliance status across all accounts, the ability to enforce guardrails that prevent any account from disabling CloudTrail logging, and automated remediation when non-compliant resources are detected. The solution must work with minimal ongoing operational effort.

Which combination of services addresses all requirements?

A. AWS Organizations with SCPs to prevent CloudTrail deletion, AWS Config aggregator with conformance packs, and AWS Systems Manager Automation for remediation

B. AWS Control Tower with mandatory guardrails, Amazon CloudWatch Events with Lambda functions for compliance monitoring, and manual remediation workflows

C. AWS Organizations with SCPs, individual AWS Config rules deployed per account using CloudFormation StackSets, and SNS notifications for manual review

D. AWS Security Hub with multi-account aggregation, AWS Config rules, and Amazon EventBridge with Systems Manager Automation runbooks for remediation

Answer: A

Explanation: This question requires centralized compliance, enforcement, and automated remediation with minimal operational overhead.

  • Option A is correct because SCPs provide preventive controls (blocking CloudTrail deletion at the organizational level), AWS Config aggregator with conformance packs provides centralized compliance visibility across all accounts from a single delegated administrator account, and Systems Manager Automation enables automated remediation without custom code.
  • Option B is weak because CloudWatch Events with Lambda requires custom code for compliance monitoring, which increases operational burden. Manual remediation directly contradicts the “minimal effort” requirement.
  • Option C fails on the automated remediation requirement — SNS notifications with manual review is not automated.
  • Option D is plausible but Security Hub alone doesn’t enforce preventive guardrails. It provides detection but not prevention. Without SCPs, accounts could still disable CloudTrail.

Key takeaway: When a question mentions both prevention and detection, you need a combination of preventive controls (SCPs) and detective controls (Config). AWS Config conformance packs are the lowest-effort way to deploy compliance rules at scale.
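To make the preventive half of that takeaway concrete, here is a minimal sketch of an SCP that blocks CloudTrail tampering, created and attached with boto3. The policy content, policy name, and root ID are illustrative assumptions, not details from the question.

```python
import json

import boto3

# Hypothetical SCP denying the API calls used to disable CloudTrail.
# Statement Sid and policy name are illustrative.
DENY_CLOUDTRAIL_TAMPERING = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailTampering",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}

org = boto3.client("organizations")

# Create the SCP once, from the management account...
policy = org.create_policy(
    Name="deny-cloudtrail-tampering",
    Description="Prevent any member account from disabling CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(DENY_CLOUDTRAIL_TAMPERING),
)

# ...then attach it at the organization root so it covers all 45 accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # hypothetical root ID
)
```

Because the Deny is evaluated before any IAM Allow in member accounts, even an account administrator cannot stop logging, which is exactly the preventive property Option A relies on.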


Question 2 — Hybrid DNS Architecture (Domain 1)

A company is migrating workloads from on-premises to AWS over 18 months. During the migration period, applications in both environments need to resolve each other’s DNS names. The on-premises DNS uses an internal domain corp.internal. AWS workloads use a Route 53 private hosted zone for aws.corp.internal. The connection between environments is an AWS Direct Connect link. The solution must minimize DNS query latency and avoid managing DNS servers on EC2.

Which approach meets these requirements?

A. Create Route 53 Resolver inbound endpoints for on-premises to resolve AWS DNS, and outbound endpoints with forwarding rules to send corp.internal queries to the on-premises DNS servers

B. Deploy a BIND DNS server on EC2 in AWS that conditionally forwards queries between environments

C. Configure the on-premises DNS to forward aws.corp.internal to the VPC DNS resolver at the VPC+2 address directly over Direct Connect

D. Use Route 53 public hosted zones for both domains with split-view DNS

Answer: A

Explanation: This is a classic hybrid DNS resolution pattern.

  • Option A is correct. Route 53 Resolver inbound endpoints allow on-premises DNS servers to forward queries for aws.corp.internal to AWS for resolution. Outbound endpoints with forwarding rules allow AWS resources to resolve corp.internal by forwarding those queries to the on-premises DNS servers. This is fully managed — no EC2 DNS servers required.
  • Option B works technically but violates the constraint of not managing DNS servers on EC2. BIND on EC2 creates operational overhead.
  • Option C is incorrect because the VPC+2 DNS address is not directly accessible from on-premises over Direct Connect. You need Route 53 Resolver endpoints to expose DNS resolution to on-premises networks.
  • Option D is wrong because these are internal domains — public hosted zones would expose internal names and don’t solve the private resolution requirement.

Key takeaway: Route 53 Resolver endpoints are the managed solution for hybrid DNS. Inbound = on-premises resolves AWS names. Outbound = AWS resolves on-premises names.
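As a sketch of the outbound half of this pattern, the boto3 calls below create an outbound Resolver endpoint and a forwarding rule for corp.internal. Subnet IDs, security group IDs, the on-premises DNS IP, and the VPC ID are all placeholder assumptions.

```python
import boto3

resolver = boto3.client("route53resolver")

# Outbound endpoint: the exit point AWS uses to forward queries on-premises.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-example",
    Name="to-on-prem-dns",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder
    IpAddresses=[
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},         # two subnets for availability
    ],
)

# Forwarding rule: send every corp.internal query to the on-prem DNS servers.
rule = resolver.create_resolver_rule(
    CreatorRequestId="rule-corp-internal-example",
    Name="forward-corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "10.0.0.53", "Port": 53}],  # on-prem DNS (placeholder)
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC whose workloads need on-prem resolution.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",  # placeholder
)
```

The inbound direction is the mirror image: an INBOUND endpoint gives the on-premises DNS servers IP addresses inside the VPC to which they conditionally forward aws.corp.internal.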


Question 3 — High-Availability Database Design (Domain 2)

A SaaS company runs a customer-facing application that requires a relational database with the following characteristics: sub-10ms read latency for a global user base, automatic failover with less than 30 seconds of downtime, support for up to 500,000 read requests per second, and the ability to scale read capacity across multiple AWS Regions. Cost optimization is a secondary concern.

Which database solution meets these requirements?

A. Amazon RDS Multi-AZ with cross-Region read replicas in each target Region

B. Amazon Aurora Global Database with Aurora replicas in each Region

C. Amazon DynamoDB global tables with DynamoDB Accelerator (DAX)

D. Amazon Aurora Multi-AZ with ElastiCache Redis in each Region for read caching

Answer: B

Explanation: The question specifies relational database, global users, sub-10ms reads, fast failover, and massive read throughput.

  • Option B is correct. Aurora Global Database replicates data across Regions with typical lag under 1 second. It supports up to 15 Aurora Replicas per Region (each capable of handling significant read throughput), in-Region failover to a replica typically completes in under 30 seconds, and promoting a secondary Region is also fast with managed failover. This addresses every requirement.
  • Option A is weaker because RDS cross-Region read replicas have higher replication lag and failover is not as fast or automated as Aurora Global Database.
  • Option C fails the relational database requirement. DynamoDB is NoSQL.
  • Option D doesn’t provide cross-Region read scaling natively. ElastiCache adds complexity and doesn’t solve the multi-Region requirement — cache invalidation across Regions introduces consistency challenges.

Key takeaway: Aurora Global Database is the go-to answer for multi-Region relational workloads requiring fast failover and low-latency reads globally.
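A minimal boto3 sketch of standing up that topology: promote an existing cluster into a Global Database, then add a secondary Region with a reader instance. Cluster identifiers, the account ID, Regions, and the instance class are assumptions for illustration.

```python
import boto3

# Promote an existing Aurora MySQL cluster into a Global Database.
# All identifiers and ARNs below are hypothetical.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="catalog-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:catalog-primary"
    ),
)

# In a secondary Region, create a reader cluster that joins the global
# cluster, then add Aurora Replica instances to serve local reads.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="catalog-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="catalog-global",
)
rds_secondary.create_db_instance(
    DBInstanceIdentifier="catalog-eu-reader-1",
    DBInstanceClass="db.r6g.2xlarge",  # sized for the read workload (assumption)
    DBClusterIdentifier="catalog-eu",
    Engine="aurora-mysql",
)
```

Repeating the secondary-Region block per target Region gives each user population a local read endpoint, which is what delivers the sub-10ms reads the question asks for.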


Question 4 — Event-Driven Architecture (Domain 2)

A retail company processes orders through a monolithic application. They want to refactor the order processing into an event-driven microservices architecture. Requirements include: each order event must be processed exactly once, multiple downstream services need to react to the same event independently, the system must handle traffic spikes of 10x during sales events, and failed processing must be retried automatically without blocking other events.

Which architecture best meets these requirements?

A. Amazon SQS FIFO queue with multiple consumers reading from the same queue

B. Amazon SNS topic fanning out to separate SQS queues per microservice, with dead-letter queues for failed messages

C. Amazon Kinesis Data Streams with enhanced fan-out consumers and Lambda integration

D. Amazon EventBridge event bus with rules routing to Lambda functions and SQS dead-letter queues

Answer: B

Explanation: The key requirements are: fan-out to multiple independent consumers, exactly-once processing semantics, spike handling, and automatic retry with isolation.

  • Option B is correct. SNS-to-SQS fan-out is the standard AWS pattern for one-to-many event distribution. Each microservice gets its own SQS queue, so processing failures in one service don’t affect others. SQS provides at-least-once delivery, and with idempotent consumers (a standard microservices pattern), this achieves effective exactly-once processing. Dead-letter queues capture failed messages for investigation. SQS scales automatically to handle traffic spikes.
  • Option A fails because a single SQS queue with multiple consumers means each message is processed by only one consumer — it doesn’t support fan-out.
  • Option C works for streaming use cases but Kinesis requires capacity planning (shard management) for 10x spikes, and exactly-once processing is harder to implement.
  • Option D is viable but EventBridge has lower throughput limits compared to SNS/SQS and is better suited for event routing based on content rather than simple fan-out.

Key takeaway: SNS + SQS fan-out is the foundational pattern for event-driven architectures requiring independent processing by multiple consumers.
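The wiring for this fan-out can be sketched with boto3 as below. Topic, queue, and service names are illustrative; the queue policy that lets SNS deliver into each queue is the step most often forgotten.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

for service in ("billing", "inventory", "shipping"):  # hypothetical services
    # One queue per microservice, each with its own dead-letter queue, so a
    # poison message in one consumer never blocks the others.
    dlq_url = sqs.create_queue(QueueName=f"{service}-orders-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    queue_url = sqs.create_queue(
        QueueName=f"{service}-orders",
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            )
        },
    )["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Queue policy allowing this specific topic to send messages in.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={
            "Policy": json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "sns.amazonaws.com"},
                    "Action": "sqs:SendMessage",
                    "Resource": queue_arn,
                    "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
                }],
            })
        },
    )

    # Subscribe the queue; raw delivery skips the SNS JSON envelope.
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )
```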


Question 5 — Data Migration Strategy (Domain 4)

A company needs to migrate 80 TB of data from an on-premises NAS to Amazon S3. They have a 1 Gbps Direct Connect link, but it’s already utilized at 60% capacity for production traffic. The migration must be completed within 2 weeks. Data must be encrypted in transit and at rest. After migration, the on-premises NAS will be decommissioned.

Which migration approach meets the timeline and constraints?

A. Use AWS DataSync over the existing Direct Connect connection, scheduling transfers during off-peak hours

B. Order an AWS Snowball Edge device, load the data on-premises, and ship it to AWS

C. Set up S3 Transfer Acceleration with multipart uploads over the internet

D. Create a VPN connection alongside Direct Connect and transfer using the AWS CLI with S3 copy commands

Answer: B

Explanation: Let’s do the math. The available bandwidth on the Direct Connect link is 40% of 1 Gbps = 400 Mbps. At 400 Mbps continuously, transferring 80 TB would take approximately 18.5 days — that exceeds the 2-week window, and that assumes 24/7 utilization of the remaining bandwidth, which is unrealistic.

  • Option B is correct. Snowball Edge can hold up to 80 TB of usable storage, handles encryption in transit and at rest natively, and the typical turnaround (shipping, loading, return shipping, ingestion) fits within 2 weeks. It doesn’t consume any of the existing network bandwidth.
  • Option A fails the timeline because DataSync would use the already-constrained Direct Connect link. Even with off-peak scheduling, the math doesn’t work.
  • Option C would be even slower. S3 Transfer Acceleration improves internet-based transfers but doesn’t address the fundamental bandwidth limitation, and it would go over the internet rather than using the private connection.
  • Option D doesn’t solve anything — adding a VPN doesn’t increase total bandwidth, and the AWS CLI approach is less efficient than DataSync for large transfers.

Key takeaway: When network bandwidth cannot support the migration timeline, Snowball devices are the answer. Always calculate the transfer time: Data size / Available bandwidth = Time required.
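The bandwidth arithmetic behind this takeaway is worth automating. A few lines of Python reproduce the 18.5-day figure; note the decimal units and the best-case assumption of 100% link utilization, so real transfers are slower still.

```python
def transfer_days(data_tb: float, bandwidth_mbps: float) -> float:
    """Days to move data_tb terabytes at bandwidth_mbps megabits/second.

    Uses decimal units (1 TB = 8,000,000 megabits) and assumes the link
    runs at full rate 24/7, which is the best case.
    """
    megabits = data_tb * 8_000_000        # TB -> megabits
    seconds = megabits / bandwidth_mbps   # ideal transfer time
    return seconds / 86_400               # seconds -> days


# 80 TB over the 400 Mbps left on the 1 Gbps Direct Connect link:
print(f"{transfer_days(80, 400):.1f} days")  # -> 18.5 days, past the 2-week window
```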


Question 6 — Cost Optimization (Domain 3)

A media company runs a video transcoding pipeline using a fleet of c5.4xlarge EC2 instances in an Auto Scaling group. The workload is consistent on weekdays (8 AM to 8 PM) with minimal usage on nights and weekends. Transcoding jobs can tolerate interruption and will automatically resume from the last checkpoint. Average monthly spend is $45,000. The company wants to reduce compute costs by at least 50%.

Which strategy achieves the target cost reduction?

A. Purchase 1-year Reserved Instances for the weekday peak capacity

B. Replace the On-Demand fleet with Spot Instances using a diversified instance type strategy across multiple Availability Zones

C. Use a combination of a smaller Reserved Instance or Savings Plan baseline with Spot Instances for additional capacity, and schedule scaling to zero during nights and weekends

D. Migrate the workload to AWS Lambda with container image support

Answer: C

Explanation: The workload is interruptible (checkpoint/resume), has predictable patterns, and needs 50%+ cost reduction.

  • Option C is correct. A hybrid approach maximizes savings: a small Savings Plan or Reserved Instance commitment covers the minimum baseline at ~60% discount, Spot Instances handle the variable weekday load at ~70-90% discount, and scheduled scaling eliminates spend during off-hours entirely. This combination can easily exceed 50% savings.
  • Option A provides only ~30-40% savings and wastes money during nights and weekends when Reserved Instances sit idle.
  • Option B is close but lacks the baseline stability. Running 100% on Spot creates availability risk — even with diversification, major spot market shifts could leave the pipeline with insufficient capacity.
  • Option D is impractical. Video transcoding is a long-running, compute-intensive workload that doesn’t fit Lambda’s execution model well (15-minute timeout, limited CPU control).

Key takeaway: For predictable, interruptible workloads, the optimal cost strategy combines Savings Plans for baseline + Spot for variable + scheduled scaling for known idle periods.
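The scheduled-scaling piece, which eliminates off-hours spend entirely, is just a pair of recurring Auto Scaling actions. A boto3 sketch follows; the ASG name and capacity numbers are hypothetical, and a production setup would also account for the cron expressions being evaluated in UTC by default.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up at 8 AM on weekdays (cron fields: minute hour day month weekday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="transcode-fleet",   # hypothetical ASG name
    ScheduledActionName="weekday-morning-scale-up",
    Recurrence="0 8 * * 1-5",
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=20,
)

# Scale to zero at 8 PM on weekdays; weekends stay at zero automatically
# because no action ever scales back up before Monday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="transcode-fleet",
    ScheduledActionName="weekday-evening-scale-down",
    Recurrence="0 20 * * 1-5",
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)
```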


Question 7 — Application Modernization (Domain 4)

A company runs a legacy .NET application on Windows Server VMs on-premises. The application uses a Microsoft SQL Server database. The CTO wants to modernize the application to reduce infrastructure management while minimizing code changes. The application must continue running on .NET and SQL Server. The team has limited experience with containers.

Which modernization approach best meets the requirements?

A. Rehost the application on EC2 Windows instances and migrate the database to RDS for SQL Server

B. Containerize the application using Windows containers on Amazon ECS with AWS Fargate and migrate the database to RDS for SQL Server

C. Refactor the application to .NET Core, deploy on AWS App Runner, and migrate to Amazon Aurora PostgreSQL

D. Deploy the application using AWS Elastic Beanstalk for .NET with a Multi-AZ RDS SQL Server database

Answer: D

Explanation: The constraints are: minimize code changes, continue using .NET and SQL Server, reduce infrastructure management, and the team has limited container experience.

  • Option D is correct. Elastic Beanstalk supports .NET applications natively on Windows, handles infrastructure provisioning, scaling, load balancing, and health monitoring automatically, and requires minimal code changes. RDS for SQL Server reduces database management overhead. This path minimizes both code changes and infrastructure management without requiring container expertise.
  • Option A reduces infrastructure management for the database but still requires managing EC2 instances (patching, scaling, monitoring). It’s a lift-and-shift, not modernization.
  • Option B introduces containers, which the team isn’t experienced with. Windows containers add complexity, and Fargate support for Windows containers has limitations. This contradicts the “limited container experience” constraint.
  • Option C requires significant code refactoring (.NET to .NET Core) and database migration (SQL Server to Aurora PostgreSQL). This violates “minimize code changes.”

Key takeaway: Elastic Beanstalk is often the best answer when the question emphasizes reducing operational overhead with minimal code changes and the team lacks container expertise. It’s the “managed PaaS” step between EC2 and full containerization.
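For a feel of how little setup the Beanstalk path requires, here is a boto3 sketch that creates an application and a load-balanced Windows environment. The application and environment names are hypothetical, and the solution stack is looked up at runtime rather than hard-coded, since the versioned stack names change over time.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Pick a current Windows/IIS platform rather than hard-coding a versioned
# solution stack name.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
windows_stack = next(s for s in stacks if "Windows Server" in s)

eb.create_application(ApplicationName="legacy-dotnet-app")  # hypothetical

eb.create_environment(
    ApplicationName="legacy-dotnet-app",
    EnvironmentName="legacy-dotnet-prod",
    SolutionStackName=windows_stack,
    OptionSettings=[
        {   # Load-balanced so Beanstalk manages the ALB and Auto Scaling.
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced",
        },
    ],
)
```

From there, deployments are just new application versions; the team never touches the underlying EC2 instances, which is the operational win the question is looking for.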


Question 8 — Security and Compliance (Domain 1)

A healthcare company stores patient data in S3 buckets across multiple AWS accounts. Regulatory requirements mandate that all patient data must be encrypted with keys the company controls and can rotate on demand, access to patient data must be logged and auditable, no patient data can be stored in the us-west-1 Region, and a security administrator must be able to revoke access to all patient data within minutes.

Which combination of controls satisfies all requirements?

A. S3 SSE-S3 encryption, CloudTrail data events, SCP to deny S3 actions in us-west-1, and IAM policy updates for access revocation

B. S3 SSE-KMS with customer-managed keys (CMKs), CloudTrail data events, SCP to deny all actions in us-west-1, and KMS key policy modification for instant access revocation

C. S3 client-side encryption with keys stored in AWS Secrets Manager, S3 server access logging, SCP to deny S3 actions in us-west-1, and bucket policy updates for revocation

D. S3 SSE-KMS with AWS-managed keys, AWS Config rules for compliance, SCP to deny S3 actions in us-west-1, and S3 Object Lock for data protection

Answer: B

Explanation: Each requirement maps to a specific control:

  • Customer-controlled keys with on-demand rotation = SSE-KMS with customer-managed CMKs (not SSE-S3 or AWS-managed keys).
  • Auditable access logging = CloudTrail data events for S3 (captures every API call, including who accessed which object).
  • Region restriction = SCP denying all actions in us-west-1 (not just S3 — broader is better for compliance).
  • Rapid access revocation = Modifying the KMS key policy to deny access. Since all data is encrypted with the CMK, revoking access to the key immediately renders all data inaccessible — this takes effect within minutes.

  • Option A fails because SSE-S3 uses keys that Amazon S3 owns and manages; the company controls neither the keys nor their rotation.
  • Option C introduces client-side encryption complexity, and S3 server access logs are less comprehensive than CloudTrail data events.
  • Option D uses AWS-managed KMS keys, not customer-managed keys, and Object Lock protects against deletion rather than addressing the stated requirements.

Key takeaway: KMS key policy revocation is the fastest way to revoke access to encrypted data across an entire organization. It’s a common SAP-C02 pattern for “emergency access revocation” scenarios.
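A sketch of the revocation mechanism the takeaway describes: replacing the key policy with one in which only the security administrator role retains access. The key ID, account ID, and role name are placeholders.

```python
import json

import boto3

kms = boto3.client("kms")

# Emergency lockdown policy: only the security admin role may administer
# the key, and no other principal may use it. ARNs are hypothetical.
LOCKDOWN_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityAdminOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/SecurityAdmin"
            },
            "Action": "kms:*",
            "Resource": "*",
        }
        # No other Allow statements: every other principal loses Decrypt,
        # so every object encrypted under this key becomes unreadable.
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
    PolicyName="default",  # a KMS key has exactly one policy, named "default"
    Policy=json.dumps(LOCKDOWN_POLICY),
)
```

Keeping at least one Allow statement for the security administrator is essential; a policy with no usable principal would lock everyone out of the key, including the team trying to restore access later.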


Question 9 — Performance Optimization (Domain 3)

An e-commerce platform serves product catalog pages with an average response time of 800ms. The application runs on EC2 behind an ALB, queries an Aurora MySQL database, and serves images from S3. Analysis shows: 500ms is spent on database queries (mostly repeated catalog lookups), 200ms on application processing, and 100ms on image loading. The target is sub-200ms response time for 95% of requests.

Which optimizations, applied together, bring response times below the target?

A. Add Aurora read replicas and enable S3 Transfer Acceleration

B. Add an ElastiCache Redis cluster for catalog query caching and serve images through CloudFront

C. Migrate to DynamoDB with DAX for caching and use Lambda@Edge for image processing

D. Upgrade to larger EC2 instances and enable Aurora Serverless for automatic scaling

Answer: B

Explanation: Breaking down the 800ms: 500ms (DB) + 200ms (app) + 100ms (images). The target is sub-200ms.

  • Option B is correct. ElastiCache Redis caching for repeated catalog queries eliminates most of the 500ms database time (cache hits return in <1ms). CloudFront caches and serves images from edge locations, reducing the 100ms image loading to near-zero for cached content. Combined: ~0ms (cached DB) + 200ms (app processing) + ~0ms (cached images) = ~200ms. With subsequent optimizations to application logic or connection pooling, this achieves the target.
  • Option A doesn’t solve the problem. Read replicas reduce load on the primary but don’t significantly reduce individual query latency for repeated lookups. S3 Transfer Acceleration is for uploads, not downloads.
  • Option C is over-engineered. Migrating to DynamoDB requires rewriting the data layer, and Lambda@Edge adds complexity without addressing the core latency sources.
  • Option D throws money at the problem without addressing the root cause. Larger instances won’t fix inefficient repeated queries, and Aurora Serverless is about capacity scaling, not query performance.

Key takeaway: Caching is almost always the answer when the question describes repeated read-heavy queries as the latency bottleneck. Match the caching layer to the problem: ElastiCache for database queries, CloudFront for static content delivery.
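The ElastiCache half of the fix is the classic cache-aside pattern. Here is a sketch in Python using the redis client; the cluster endpoint, TTL, and the database query function are assumptions standing in for the application's real data layer.

```python
import json

import redis  # pip install redis

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

CATALOG_TTL_SECONDS = 300  # tolerate 5 minutes of staleness (assumption)


def get_product(product_id: str, query_db) -> dict:
    """Cache-aside lookup: try Redis first, fall back to Aurora on a miss.

    query_db stands in for whatever function runs the slow (~500ms)
    catalog query; it is not part of the question.
    """
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # sub-millisecond on a hit

    product = query_db(product_id)  # pay the database cost only on a miss
    cache.set(key, json.dumps(product), ex=CATALOG_TTL_SECONDS)
    return product
```

Because catalog lookups are repeated, the hit rate is high and nearly all of the 500ms database latency disappears after warm-up, which is the arithmetic behind Option B.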


Question 10 — Disaster Recovery (Domain 2)

A financial trading platform requires a disaster recovery solution with an RPO of 1 minute and an RTO of 15 minutes. The application uses Aurora MySQL, ElastiCache Redis, and a fleet of EC2 instances behind an NLB. The primary Region is us-east-1 and the DR Region is us-west-2. Cost must be optimized while meeting the stated RPO/RTO targets.

Which DR strategy meets these requirements?

A. Pilot light: Aurora Global Database with a secondary Region, pre-configured AMIs, and CloudFormation templates to launch infrastructure in us-west-2 on demand

B. Warm standby: Aurora Global Database, a scaled-down replica of the EC2 fleet running in us-west-2, ElastiCache Global Datastore, and Route 53 health checks with automated failover

C. Multi-site active-active: Full production capacity in both Regions with Route 53 weighted routing

D. Backup and restore: Automated Aurora snapshots copied to us-west-2, AMI copies, and a runbook for manual restoration

Answer: B

Explanation: RPO of 1 minute and RTO of 15 minutes places this squarely in the warm standby category.

  • Option B is correct. Aurora Global Database provides cross-Region replication with sub-second lag (meeting 1-minute RPO). ElastiCache Global Datastore replicates the Redis cache to the DR Region. A scaled-down EC2 fleet in us-west-2 is already running and can be scaled up quickly. Route 53 health checks detect the failure and route traffic automatically. This combination achieves 15-minute RTO while keeping costs lower than full active-active.
  • Option A (pilot light) might not meet the 15-minute RTO because EC2 instances need to be launched from scratch during failover. Boot time, initialization, and health checks could exceed 15 minutes for a complex application.
  • Option C meets the requirements but is significantly more expensive than needed. Running full production capacity in two Regions doubles infrastructure costs — the question asks to optimize cost.
  • Option D can’t meet either target. Restoring from snapshots takes much longer than 15 minutes, and snapshot frequency of 1 minute is not achievable.

Key takeaway: Match the DR strategy to the RPO/RTO requirements. Backup & restore (hours), Pilot light (tens of minutes), Warm standby (minutes), Multi-site active-active (near-zero). Always pick the least expensive option that meets the stated targets.
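The Route 53 piece of the warm standby, failover routing backed by a health check, looks roughly like the boto3 sketch below. The hosted zone ID, domain names, and NLB DNS names are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Health check against the primary Region's endpoint (placeholder values).
health_check_id = route53.create_health_check(
    CallerReference="trading-primary-hc-example",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.trading.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

# PRIMARY record fails over to SECONDARY when the health check goes red.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "trading.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": health_check_id,
                    "ResourceRecords": [
                        {"Value": "primary-nlb.us-east-1.example.com"}
                    ],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "trading.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "standby-us-west-2",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "standby-nlb.us-west-2.example.com"}
                    ],
                },
            },
        ]
    },
)
```

The low TTL matters: it bounds how long clients keep resolving to the failed Region, which feeds directly into the 15-minute RTO budget.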


Question Analysis Strategies for the SAP-C02 Exam

After working through these practice questions, here are the analytical strategies that apply broadly across the exam:

Read for Constraints, Not Just Requirements

Every SAP-C02 question contains constraints that eliminate options. Common constraints include:

  • “Minimal operational overhead” — eliminates self-managed solutions
  • “Minimize code changes” — eliminates refactoring-heavy options
  • “Cost-optimized” — eliminates over-provisioned solutions
  • “Within 2 weeks” — forces you to do the math on bandwidth/time

Identify the Architecture Pattern

Most questions test a known AWS architecture pattern. Once you recognize the pattern, the answer becomes clearer:

  • Multi-Region failover = Aurora Global Database + Route 53
  • Event fan-out = SNS + SQS
  • Hybrid DNS = Route 53 Resolver endpoints
  • Emergency access revocation = KMS key policy
  • Large data migration with limited bandwidth = Snow Family

Eliminate, Then Choose

On the real exam, you’ll rarely be 100% certain. The strategy is:

  1. Eliminate options that violate stated constraints
  2. Between remaining options, choose the one that addresses the most specific requirement
  3. When in doubt, prefer the AWS-managed, scalable, and operationally simpler option

Watch for “Almost Right” Answers

AWS designs distractors that are correct in a different context. For example, DynamoDB with DAX is an excellent caching solution — but not when the question specifies a relational database. Always match the answer to the specific scenario.

Frequently Asked Questions

How many questions are on the SAP-C02 exam?

The SAP-C02 exam contains 75 questions to be completed in 180 minutes. Not all questions are scored — some are unscored pilot questions used by AWS for future exams, but you won’t know which ones they are.

What score do I need to pass the SAP-C02?

You need a scaled score of 750 out of 1000. This does not correspond directly to a percentage of questions answered correctly, because AWS converts raw results to a scaled score. The model is also compensatory: you need to pass the exam overall, not each domain individually.

Are these practice questions the same as the real exam?

No. These are original questions designed to match the format, complexity, and domain coverage of the SAP-C02. The actual exam questions are proprietary to AWS. These practice questions help you develop the analytical skills needed for the real exam.

How many practice questions should I do before taking the SAP-C02?

Most candidates who pass on their first attempt report completing between 300 and 500 practice questions across multiple question sets. The goal is not memorization but developing pattern recognition for architectural trade-offs. Check out our guide to passing SAP-C02 on your first attempt for a complete study strategy.

What’s the best way to review incorrect practice questions?

Create a “wrong answers” document. For each missed question, write down: what you chose and why, what the correct answer was and why, and the specific AWS concept you misunderstood. Review this document before your exam. Patterns in your mistakes reveal your knowledge gaps.

How does the SAP-C02 question format differ from the SAA-C03?

SAP-C02 questions are longer, present more complex scenarios with multiple interacting services, and often have two or more plausible answers. The associate exam tends to have clearer “right” answers, while the professional exam requires weighing trade-offs. For a full comparison, read our SAP-C02 vs SAA-C03 differences guide.

Should I study all four domains equally?

Not necessarily. Domain 1 (Organizational Complexity, 26%) and Domain 2 (New Solutions, 29%) together account for 55% of the exam. If your study time is limited, prioritize these domains while maintaining adequate coverage of Domains 3 and 4.

How do I know when I’m ready for the real SAP-C02 exam?

A reliable indicator is consistently scoring 80% or above on timed practice exams. If you want to validate your readiness, check out our SAP-C02 exam difficulty and pass rate analysis for benchmarks.

Conclusion

These 10 practice questions represent the depth and complexity you’ll face on the SAP-C02 exam. The key to success is not memorizing answers but developing the ability to analyze scenarios, identify constraints, and match them to the right AWS architectural patterns.

To build that skill, you need extensive practice with high-quality, scenario-based questions that mimic the real exam. Sailor.sh’s SAP-C02 mock exams provide hundreds of practice questions with detailed explanations, timed exam simulations, and domain-level performance tracking to help you identify and close your knowledge gaps before exam day.
