Free Sample — 15 Practice Questions
Preview 15 of 516 real practice questions from the AWS SAP-C02 study guide.
Question 10
A company wants to migrate its website to AWS. The website uses containers that are deployed in an on-premises, self-managed Kubernetes cluster. All data for the website is stored in an on-premises PostgreSQL database.
The company has decided to migrate the on-premises Kubernetes cluster to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster will use EKS managed node groups with a static number of nodes. The company will also migrate the on-premises database to an Amazon RDS for PostgreSQL database.
A solutions architect needs to estimate the total cost of ownership (TCO) for this workload before the migration.
Which solution will provide the required TCO information?
A. Request access to Migration Evaluator. Run the Migration Evaluator Collector and import the data. Configure a scenario. Export a Quick Insights report from Migration Evaluator.
B. Launch AWS Database Migration Service (AWS DMS) for the on-premises database. Generate an assessment report. Create an estimate in AWS Pricing Calculator for the costs of the EKS migration.
C. Initialize AWS Application Migration Service. Add the on-premises servers as source servers. Launch a test instance. Output a TCO report from Application Migration Service.
D. Access the AWS Cloud Economics Center webpage to assess the AWS Cloud Value Framework. Create an AWS Cost and Usage report from the Cloud Value Framework.
Show Answer
Correct Answer: A
Explanation:
Migration Evaluator is specifically designed to estimate total cost of ownership for migrating on‑premises workloads to AWS. It collects data from the existing environment, models target AWS services such as Amazon EKS and Amazon RDS, allows scenario configuration, and produces Quick Insights and detailed TCO reports. The other options focus on migration execution or generic cost concepts rather than comprehensive pre‑migration TCO analysis.
Question 22
A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The quality assurance (QA) department needs to launch a large number of short-lived environments to test the application. The application environments are currently launched by the manager of the department using an AWS CloudFormation template. To launch the stack, the manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The manager wants to allow testers to launch their own environments, but does not want to grant broad permissions to each user.
Which setup will achieve these goals?
A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the manager’s role and add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product with the existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and S3 APIs, with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk permissions only. Train users to launch Elastic Beanstalk environments with the Elastic Beanstalk CLI, passing the existing role to the environment as a service role.
Show Answer
Correct Answer: B
Explanation:
AWS Service Catalog is designed for controlled self-service provisioning. By creating a Service Catalog product from the CloudFormation template and applying a launch constraint that uses the existing privileged role, testers can launch environments without being granted direct EC2, Auto Scaling, or CloudFormation permissions. Users only need Service Catalog access, satisfying the requirement to avoid broad permissions while enabling many short-lived, standardized environments.
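To make the permission boundary concrete, here is a sketch of the kind of IAM policy the QA testers would receive. This is not an official AWS managed policy; the action list is illustrative, drawn from the Service Catalog end-user actions. The launch constraint's role, not this policy, supplies the CloudFormation, EC2, and Auto Scaling permissions at provisioning time.

```python
import json

# Hedged sketch of a QA-tester policy: only Service Catalog end-user
# actions, so testers never hold direct EC2/Auto Scaling/CloudFormation
# permissions themselves.
qa_tester_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "servicecatalog:DescribeProduct",
                "servicecatalog:DescribeProvisioningParameters",
                "servicecatalog:ListLaunchPaths",
                "servicecatalog:ProvisionProduct",
                "servicecatalog:SearchProducts",
                "servicecatalog:TerminateProvisionedProduct",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(qa_tester_policy, indent=2))
```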
Question 3
A company is running a large containerized workload in the AWS Cloud. The workload consists of approximately 100 different services. The company uses Amazon Elastic Container Service (Amazon ECS) to orchestrate the workload.
Recently the company’s development team started using AWS Fargate instead of Amazon EC2 instances in the ECS cluster. In the past, the workload has come close to running the maximum number of EC2 instances that are available in the account.
The company is worried that the workload could reach the maximum number of ECS tasks that are allowed. A solutions architect must implement a solution that will notify the development team when Fargate reaches 80% of the maximum number of tasks.
What should the solutions architect do to meet this requirement?
A. Use Amazon CloudWatch to monitor the Sample Count statistic for each service in the ECS cluster. Set an alarm for when the math expression sample count/SERVICE_QUOTA(service)*100 is greater than 80. Notify the development team by using Amazon Simple Notification Service (Amazon SNS).
B. Use Amazon CloudWatch to monitor service quotas that are published under the AWS/Usage metric namespace. Set an alarm for when the math expression metric/SERVICE_QUOTA(metric)*100 is greater than 80. Notify the development team by using Amazon Simple Notification Service (Amazon SNS).
C. Create an AWS Lambda function to poll detailed metrics from the ECS cluster. When the number of running Fargate tasks is greater than 80, invoke Amazon Simple Email Service (Amazon SES) to notify the development team.
D. Create an AWS Config rule to evaluate whether the Fargate SERVICE_QUOTA is greater than 80. Use Amazon Simple Email Service (Amazon SES) to notify the development team when the AWS Config rule is not compliant.
Show Answer
Correct Answer: B
Explanation:
Amazon CloudWatch publishes service usage and quota metrics under the AWS/Usage namespace, including ECS Fargate task usage. By creating a CloudWatch alarm that compares the current usage metric to the corresponding service quota (using a math expression) and triggering at 80%, the company can be proactively notified. Using Amazon SNS for notifications is the standard, fully managed approach and directly meets the requirement without custom code.
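A sketch of the alarm definition this answer describes, shaped like the keyword arguments a boto3 `cloudwatch.put_metric_alarm(**alarm)` call would take. The alarm name, SNS topic ARN, and the exact Fargate usage dimensions are assumptions for illustration; the key idea is the `SERVICE_QUOTA()` math expression compared against 80.

```python
# Hedged sketch: alarm on Fargate usage as a percentage of its quota.
# Dimension values under AWS/Usage vary by quota and should be confirmed
# in the CloudWatch console before use.
alarm = {
    "AlarmName": "fargate-task-quota-80",
    "Metrics": [
        {
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "Fargate"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Resource", "Value": "OnDemand"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 60,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {
            # SERVICE_QUOTA() returns the quota that matches the usage metric,
            # so this expression is "percent of quota consumed".
            "Id": "pct",
            "Expression": "usage/SERVICE_QUOTA(usage)*100",
            "ReturnData": True,
        },
    ],
    "Threshold": 80,
    "ComparisonOperator": "GreaterThanThreshold",
    "EvaluationPeriods": 1,
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:notify-dev-team"],
}
```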
Question 46
A company needs to use an AWS Transfer Family SFTP-enabled server with an Amazon S3 bucket to receive updates from a third-party data supplier. The data is encrypted with Pretty Good Privacy (PGP) encryption. The company needs a solution that will automatically decrypt the data after the company receives the data.
A solutions architect will use a Transfer Family managed workflow. The company has created an IAM service role by using an IAM policy that allows access to AWS Secrets Manager and the S3 bucket. The role’s trust relationship allows the transfer.amazonaws.com service to assume the role.
What should the solutions architect do next to complete the solution for automatic decryption?
A. Store the PGP public key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
B. Store the PGP private key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP encryption parameters in the exception handler. Associate the workflow with the SFTP user.
C. Store the PGP private key in Secrets Manager. Add a nominal step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the nominal step. Associate the workflow with the Transfer Family server.
D. Store the PGP public key in Secrets Manager. Add an exception-handling step in the Transfer Family managed workflow to decrypt files. Configure PGP decryption parameters in the exception handler. Associate the workflow with the SFTP user.
Show Answer
Correct Answer: C
Explanation:
To automatically decrypt PGP-encrypted files, the workflow must use the PGP private key, which is required for decryption. AWS Transfer Family managed workflows support decryption as a nominal (normal) workflow step, not as an exception handler. The private key should be securely stored in AWS Secrets Manager and referenced by the workflow. Finally, the workflow must be associated with the Transfer Family server so that files are decrypted automatically upon upload.
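A sketch of the workflow definition, shaped like the parameters of the Transfer Family `CreateWorkflow` API. The bucket name and step name are placeholders; the point is that the decrypt step sits in the nominal `Steps` list (run on every upload), not in `OnExceptionSteps`.

```python
# Hedged sketch of a Transfer Family managed workflow with a nominal
# PGP decrypt step. The PGP private key is stored separately in
# Secrets Manager and referenced by the server's workflow role.
workflow = {
    "Description": "Decrypt PGP-encrypted uploads from the data supplier",
    "Steps": [  # nominal steps, executed for every successful upload
        {
            "Type": "DECRYPT",
            "DecryptStepDetails": {
                "Name": "pgp-decrypt",
                "Type": "PGP",
                "SourceFileLocation": "${original.file}",
                "OverwriteExisting": "TRUE",
                "DestinationFileLocation": {
                    "S3FileLocation": {
                        "Bucket": "decrypted-uploads",      # placeholder
                        "Key": "incoming/",
                    }
                },
            },
        }
    ],
}
```

The workflow is then attached to the Transfer Family server (via its `WorkflowDetails`), so every file the SFTP server receives is decrypted automatically.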
Question 32
A company requires that all internal application connectivity use private IP addresses. To facilitate this policy, a solutions architect has created interface endpoints to connect to public AWS services. Upon testing, the solutions architect notices that the service names are resolving to public IP addresses and that internal services cannot connect to the interface endpoints.
Which step should the solutions architect take to resolve this issue?
A. Update the subnet route table with a route to the interface endpoint.
B. Enable the private DNS option on the VPC attributes.
C. Configure the security group on the interface endpoint to allow connectivity to the AWS services.
D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application.
Show Answer
Correct Answer: B
Explanation:
Interface endpoints rely on AWS PrivateLink private DNS to override the public service DNS names and resolve them to the private IP addresses of the endpoint ENIs. Because the service names are still resolving to public IPs, private DNS is not enabled. Enabling the private DNS option on the VPC (and for the endpoint) ensures standard AWS service names resolve to the endpoint’s private IPs, allowing internal services to connect using private addresses.
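A sketch of the settings involved, shaped after the EC2 `ModifyVpcAttribute` and `ModifyVpcEndpoint` APIs; the VPC and endpoint IDs are placeholders. Private DNS on the endpoint requires both VPC DNS attributes to be enabled first.

```python
# Hedged sketch: ModifyVpcAttribute accepts one attribute per call,
# hence two entries for the same VPC.
vpc_attribute_calls = [
    {"VpcId": "vpc-0123456789abcdef0", "EnableDnsSupport": {"Value": True}},
    {"VpcId": "vpc-0123456789abcdef0", "EnableDnsHostnames": {"Value": True}},
]

endpoint_update = {
    "VpcEndpointId": "vpce-0123456789abcdef0",
    # With private DNS enabled, the default service name (for example
    # sqs.us-east-1.amazonaws.com) resolves inside the VPC to the
    # endpoint ENIs' private IP addresses.
    "PrivateDnsEnabled": True,
}
```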
Question 13
A company is using AWS to develop and manage its production web application. The application includes an Amazon API Gateway HTTP API that invokes an AWS Lambda function. The Lambda function processes and then stores data in a database.
The company wants to implement user authorization for the web application in an integrated way. The company already uses a third-party identity provider that issues OAuth tokens for the company’s other applications.
Which solution will meet these requirements?
A. Integrate the company’s third-party identity provider with API Gateway. Configure an API Gateway Lambda authorizer to validate tokens from the identity provider. Require the Lambda authorizer on all API routes. Update the web application to get tokens from the identity provider and include the tokens in the Authorization header when calling the API Gateway HTTP API.
B. Integrate the company's third-party identity provider with AWS Directory Service. Configure Directory Service as an API Gateway authorizer to validate tokens from the identity provider. Require the Directory Service authorizer on all API routes. Configure AWS IAM Identity Center as a SAML 2.0 identity provider. Configure the web application as a custom SAML 2.0 application.
C. Integrate the company’s third-party identity provider with AWS IAM Identity Center. Configure API Gateway to use IAM Identity Center for zero-configuration authentication and authorization. Update the web application to retrieve AWS Security Token Service (AWS STS) tokens from IAM Identity Center and include the tokens in the Authorization header when calling the API Gateway HTTP API.
D. Integrate the company’s third-party identity provider with AWS IAM Identity Center. Configure IAM users with permissions to call the API Gateway HTTP API. Update the web application to extract request parameters from the IAM users and include the parameters in the Authorization header when calling the API Gateway HTTP API.
Show Answer
Correct Answer: A
Explanation:
The company already uses a third-party OAuth identity provider and wants integrated user authorization for an API Gateway HTTP API. API Gateway Lambda authorizers are designed to validate external OAuth/JWT tokens and integrate directly with third-party identity providers. The web app can obtain tokens from the existing provider and pass them in the Authorization header, while the Lambda authorizer enforces authorization on all routes. The other options introduce unnecessary services or do not align with OAuth-based user authorization for API Gateway.
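A minimal sketch of an HTTP API Lambda authorizer using the simple response format (payload format version 2.0). The token check below is a stand-in stub; a real implementation would verify the OAuth JWT's signature, issuer, audience, and expiry against the third-party provider's JWKS.

```python
# Hedged sketch of a Lambda authorizer for an API Gateway HTTP API.
# validate_token is a placeholder for real JWT/JWKS validation.

def validate_token(token: str) -> bool:
    # Stub only: a production authorizer verifies the token's signature
    # and claims against the identity provider's published keys.
    return token == "valid-example-token"


def handler(event, context=None):
    # HTTP APIs lowercase header names in the 2.0 event payload.
    auth_header = event.get("headers", {}).get("authorization", "")
    token = auth_header.removeprefix("Bearer ").strip()
    # Simple-response format: API Gateway allows or denies the request
    # based on this single boolean.
    return {"isAuthorized": validate_token(token)}
```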
Question 36
A company has an application that uses AWS Key Management Service (AWS KMS) to encrypt and decrypt data. The application stores data in an Amazon S3 bucket in an AWS Region. Company security policies require the data to be encrypted before the data is placed into the S3 bucket. The application must decrypt the data when the application reads files from the S3 bucket.
The company replicates the S3 bucket to other Regions. A solutions architect must design a solution so that the application can encrypt and decrypt data across Regions. The application must use the same key to decrypt the data in each Region.
Which solution will meet these requirements?
A. Create a KMS multi-Region primary key. Use the KMS multi-Region primary key to create a KMS multi-Region replica key in each additional Region where the application is running. Update the application code to use the specific replica key in each Region.
B. Create a new customer managed KMS key in each additional Region where the application is running. Update the application code to use the specific KMS key in each Region.
C. Use AWS Private Certificate Authority to create a new certificate authority (CA) in the primary Region. Issue a new private certificate from the CA for the application’s website URL. Share the CA with the additional Regions by using AWS Resource Access Manager (AWS RAM). Update the application code to use the shared CA certificates in each Region.
D. Use AWS Systems Manager Parameter Store to create a parameter in each additional Region where the application is running. Export the key material from the KMS key in the primary Region. Store the key material in the parameter in each Region. Update the application code to use the key data from the parameter in each Region.
Show Answer
Correct Answer: A
Explanation:
AWS KMS multi-Region keys are designed for exactly this use case. A multi-Region primary key can be replicated to other Regions as multi-Region replica keys that share the same key material. This allows data encrypted in one Region to be decrypted in another Region using the corresponding replica key. Updating the application to use the local replica key in each Region meets the requirement to encrypt before storing in S3 and to decrypt across Regions with the same key. The other options either use different keys per Region, do not use KMS for data encryption, or require exporting key material, which is not allowed by KMS.
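A sketch of the two KMS call parameter sets this pattern uses. The key ID is a placeholder in the `mrk-` format that all multi-Region keys share; `ReplicateKey` is issued in the primary key's Region with the target Region as a parameter.

```python
# Hedged sketch: kms.create_key(**create_key_params) in the primary Region,
# then kms.replicate_key(**replicate_key_params) for each additional Region.
create_key_params = {
    "MultiRegion": True,
    "Description": "Primary multi-Region key for cross-Region S3 data",
}

replicate_key_params = {
    # Multi-Region key IDs begin with "mrk-" and are identical across
    # Regions, which is what lets each Region decrypt the same ciphertext.
    "KeyId": "mrk-1234abcd12ab34cd56ef1234567890ab",  # placeholder ID
    "ReplicaRegion": "eu-west-1",
}
```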
Question 37
A global ecommerce company has many data centers around the world. With the growth of its stored data, the company needs to set up a solution to provide scalable storage for legacy on-premises file applications. The company must be able to take point-in-time copies of volumes by using AWS Backup and must retain low-latency access to frequently accessed data. The company also needs to have storage volumes that can be mounted as Internet Small Computer System Interface (iSCSI) devices from the company’s on-premises application servers.
Which solution will meet these requirements?
A. Provision an AWS Storage Gateway tape gateway. Configure the tape gateway to store data in an Amazon S3 bucket. Deploy AWS Backup to take point-in-time copies of the volumes.
B. Provision an Amazon FSx File Gateway and an Amazon S3 File Gateway. Deploy AWS Backup to take point-in-time copies of the data.
C. Provision an AWS Storage Gateway volume gateway in cache mode. Back up the on-premises Storage Gateway volumes with AWS Backup.
D. Provision an AWS Storage Gateway file gateway in cache mode. Deploy AWS Backup to take point-in-time copies of the volumes.
Show Answer
Correct Answer: C
Explanation:
The requirements include iSCSI-mounted volumes for on-premises servers, low-latency access to frequently accessed data, scalable cloud-backed storage, and point-in-time backups using AWS Backup. AWS Storage Gateway volume gateway in cache mode provides iSCSI block storage to on-premises applications, keeps frequently accessed data cached locally for low latency, stores the full dataset durably in AWS, and integrates directly with AWS Backup for point-in-time volume backups. Other options either do not support iSCSI volumes (file or tape gateways) or do not fit the legacy block-storage use case.
Question 42
A company hosts its primary API on AWS by using an Amazon API Gateway API and AWS Lambda functions that contain the logic for the API methods. The company’s internal applications use the API for core functionality and business logic. The company’s customers use the API to access data from their accounts. Several customers also have access to a legacy API that is running on a single standalone Amazon EC2 instance.
The company wants to increase the security for these APIs to better prevent denial of service (DoS) attacks, check for vulnerabilities, and guard against common exploits.
What should a solutions architect do to meet these requirements?
A. Use AWS WAF to protect both APIs. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
B. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze both APIs. Configure Amazon GuardDuty to block malicious attempts to access the APIs.
C. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to analyze the legacy API. Configure Amazon GuardDuty to monitor for malicious attempts to access the APIs.
D. Use AWS WAF to protect the API Gateway API. Configure Amazon Inspector to protect the legacy API. Configure Amazon GuardDuty to block malicious attempts to access the APIs.
Show Answer
Correct Answer: C
Explanation:
AWS WAF can natively protect Amazon API Gateway to mitigate common exploits and DoS-style attacks. The legacy API runs on a standalone EC2 instance, which cannot be directly protected by WAF without an ALB or CloudFront, but it can be assessed for vulnerabilities using Amazon Inspector. Amazon GuardDuty provides threat detection and monitoring across AWS accounts and workloads but does not block traffic, making monitoring (not blocking) the correct usage. Therefore, option C correctly applies each service according to its capabilities.
Question 34
A company has an application that stores user-uploaded videos in an Amazon S3 bucket that uses S3 Standard storage. Users access the videos frequently in the first 180 days after the videos are uploaded. Access after 180 days is rare. Named users and anonymous users access the videos.
Most of the videos are more than 100 MB in size. Users often have poor internet connectivity when they upload videos, resulting in failed uploads. The company uses multipart uploads for the videos.
A solutions architect needs to optimize the S3 costs of the application.
Which combination of actions will meet these requirements? (Choose two.)
A. Configure the S3 bucket to be a Requester Pays bucket.
B. Use S3 Transfer Acceleration to upload the videos to the S3 bucket.
C. Create an S3 Lifecycle configuration to expire incomplete multipart uploads 7 days after initiation.
D. Create an S3 Lifecycle configuration to transition objects to S3 Glacier Instant Retrieval after 1 day.
E. Create an S3 Lifecycle configuration to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days.
Show Answer
Correct Answer: C, E
Explanation:
The goal is to optimize S3 storage costs. Incomplete multipart uploads consume storage and are common with large files and unreliable connectivity, so expiring them after 7 days reduces unnecessary costs (C). Videos are frequently accessed for the first 180 days and rarely afterward, making a lifecycle transition from S3 Standard to S3 Standard-IA after 180 days the most cost-effective choice while retaining immediate access (E). Other options either add cost or do not primarily address storage cost optimization.
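The two chosen actions combine into a single lifecycle configuration. A sketch, shaped like the payload of S3's `PutBucketLifecycleConfiguration` API; the rule IDs are illustrative.

```python
# Hedged sketch of the lifecycle configuration implementing answers C and E.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "abort-stalled-multipart-uploads",
            "Status": "Enabled",
            "Filter": {},  # empty filter = apply bucket-wide
            # Parts of uploads that never complete (poor connectivity)
            # are deleted after 7 days so they stop accruing charges.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {
            "ID": "transition-old-videos-to-ia",
            "Status": "Enabled",
            "Filter": {},
            # After the 180-day frequent-access window, move objects to
            # Standard-IA: cheaper storage, still millisecond access.
            "Transitions": [{"Days": 180, "StorageClass": "STANDARD_IA"}],
        },
    ]
}
```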
Question 5
A travel company built a web application that uses Amazon Simple Email Service (Amazon SES) to send email notifications to users. The company needs to enable logging to help troubleshoot email delivery issues. The company also needs the ability to do searches that are based on recipient, subject, and time sent.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Create an Amazon SES configuration set with Amazon Data Firehose as the destination. Choose to send logs to an Amazon S3 bucket.
B. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs.
C. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent.
D. Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group.
E. Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.
Show Answer
Correct Answer: A, C
Explanation:
Amazon SES email delivery and event details (recipient, subject, timestamps, delivery status) are captured by using an SES configuration set with an event destination such as Amazon Kinesis Data Firehose, which can deliver the logs to Amazon S3. Once the logs are stored in S3, Amazon Athena can query them using SQL to search by recipient, subject, and time sent. CloudTrail logs API calls only, and CloudWatch event publishing does not provide the required searchable email-level details for this use case.
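Once the Firehose-delivered event records land in S3, a query along these lines would cover the required searches. The table name `ses_events` is an assumption; the `mail.*` paths follow the SES event publishing record layout.

```python
# Hypothetical Athena (Presto SQL) query over SES event JSON in S3.
# mail.destination is an array, so it is unnested to one row per recipient.
athena_query = """
SELECT recipient,
       mail.commonHeaders.subject AS subject,
       mail.timestamp AS time_sent
FROM ses_events
CROSS JOIN UNNEST(mail.destination) AS t(recipient)
WHERE mail.timestamp BETWEEN '2024-06-01T00:00:00Z' AND '2024-06-30T23:59:59Z'
"""

print(athena_query)
```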
Question 48
A company is migrating infrastructure for its massive multiplayer game to AWS. The game’s application features a leaderboard where players can see rankings in real time. The leaderboard requires microsecond reads and single-digit-millisecond write latencies. The datasets are single-digit terabytes in size and must be available to accept writes in less than a minute if a primary node failure occurs.
The company needs a solution in which data can persist for further analytical processing through a data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon ElastiCache for Redis cluster with cluster mode enabled. Configure the application to interact with the primary node.
B. Create an Amazon RDS database with a read replica. Configure the application to point writes to the writer endpoint. Configure the application to point reads to the reader endpoint.
C. Create an Amazon MemoryDB for Redis cluster in Multi-AZ mode. Configure the application to interact with the primary node.
D. Create multiple Redis nodes on Amazon EC2 instances that are spread across multiple Availability Zones. Configure backups to Amazon S3.
Show Answer
Correct Answer: C
Explanation:
The leaderboard requires microsecond read latency, single-digit millisecond writes, automatic failover in under a minute, durability for later analytics, and minimal operational overhead. Amazon MemoryDB for Redis is purpose-built for this: it is a fully managed, Redis-compatible, in-memory database that delivers microsecond reads, fast writes, Multi-AZ durability with automatic failover, and persistent storage via transaction logs that can feed analytics pipelines. ElastiCache for Redis lacks the same level of durability, RDS cannot meet the latency requirements, and self-managed Redis on EC2 has significantly higher operational overhead.
Question 26
A company plans to migrate a legacy on-premises application to AWS. The application is a Java web application that runs on Apache Tomcat with a PostgreSQL database.
The company does not have access to the source code but can deploy the application's Java Archive (JAR) files. The application has increased traffic at the end of each month.
Which solution will meet these requirements with the LEAST operational overhead?
A. Launch Amazon EC2 instances in multiple Availability Zones. Deploy Tomcat and PostgreSQL to all the instances by using Amazon Elastic File System (Amazon EFS) mount points. Use AWS Step Functions to deploy additional EC2 instances to scale for increased traffic.
B. Provision Amazon Elastic Kubernetes Service (Amazon EKS) in an Auto Scaling group across multiple AWS Regions. Deploy Tomcat and PostgreSQL in the container images. Use a Network Load Balancer to scale for increased traffic.
C. Refactor the Java application into Python-based containers. Use AWS Lambda functions for the application logic. Store application data in Amazon DynamoDB global tables. Use AWS Storage Gateway and Lambda concurrency to scale for increased traffic.
D. Use AWS Elastic Beanstalk to deploy the Tomcat servers with auto scaling in multiple Availability Zones. Store application data in an Amazon RDS for PostgreSQL database. Deploy Amazon CloudFront and an Application Load Balancer to scale for increased traffic.
Show Answer
Correct Answer: D
Explanation:
Elastic Beanstalk provides the lowest operational overhead by abstracting infrastructure management while directly supporting Java/Tomcat deployments using JAR files. It handles provisioning, load balancing, health checks, and auto scaling across multiple Availability Zones automatically. Using Amazon RDS for PostgreSQL offloads database administration tasks such as backups, patching, and high availability. This approach fits a lift-and-shift migration with no source code access and easily scales to handle predictable end-of-month traffic spikes.
Question 38
A company is using AWS CloudFormation as its deployment tool for all applications. It stages all application binaries and templates within Amazon S3 buckets with versioning enabled. Developers have access to an Amazon EC2 instance that hosts the integrated development environment (IDE). The developers download the application binaries from Amazon S3 to the EC2 instance, make changes, and upload the binaries to an S3 bucket after running the unit tests locally. The developers want to improve the existing deployment mechanism and implement CI/CD using AWS CodePipeline.
The developers have the following requirements:
• Use AWS CodeCommit for source control.
• Automate unit testing and security scanning.
• Alert the developers when unit tests fail.
• Turn application features on and off, and customize deployment dynamically as part of CI/CD.
• Have the lead developer provide approval before deploying an application.
Which solution will meet these requirements?
A. Use AWS CodeBuild to run unit tests and security scans. Use an Amazon EventBridge rule to send Amazon SNS alerts to the developers when unit tests fail. Write AWS Cloud Development Kit (AWS CDK) constructs for different solution features, and use a manifest file to turn features on and off in the AWS CDK application. Use a manual approval stage in the pipeline to allow the lead developer to approve applications.
B. Use AWS Lambda to run unit tests and security scans. Use Lambda in a subsequent stage in the pipeline to send Amazon SNS alerts to the developers when unit tests fail. Write AWS Amplify plugins for different solution features and utilize user prompts to turn features on and off. Use Amazon SES in the pipeline to allow the lead developer to approve applications.
C. Use Jenkins to run unit tests and security scans. Use an Amazon EventBridge rule in the pipeline to send Amazon SES alerts to the developers when unit tests fail. Use AWS CloudFormation nested stacks for different solution features and parameters to turn features on and off. Use AWS Lambda in the pipeline to allow the lead developer to approve applications.
D. Use AWS CodeDeploy to run unit tests and security scans. Use an Amazon CloudWatch alarm in the pipeline to send Amazon SNS alerts to the developers when unit tests fail. Use Docker images for different solution features and the AWS CLI to turn features on and off. Use a manual approval stage in the pipeline to allow the lead developer to approve applications.
Show Answer
Correct Answer: A
Explanation:
Option A uses AWS-native CI/CD services that align exactly with the requirements. AWS CodeCommit provides source control, and AWS CodeBuild is designed to run automated unit tests and security scans. Amazon EventBridge with SNS can notify developers when tests fail. AWS CDK supports dynamic, code-driven infrastructure and feature toggling (for example, via context or manifest files). AWS CodePipeline supports a manual approval stage, allowing the lead developer to approve deployments. The other options rely on non-native tools, misuse services (such as Lambda or CodeDeploy for testing), or provide less appropriate mechanisms for approvals and feature management.
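A sketch of the manifest-driven feature toggling that option A describes. The manifest keys are illustrative; in a real CDK app the same decision would typically read context values (for example from cdk.json) and skip instantiating the constructs for disabled features.

```python
import json

# Hedged sketch: a feature manifest decides which constructs the CDK app
# synthesizes. Feature names here are placeholders.
manifest = json.loads('{"features": {"leaderboard": true, "beta_search": false}}')


def enabled_features(manifest: dict) -> list:
    """Return the feature names the deployment should include."""
    return [name for name, on in manifest["features"].items() if on]


# The CDK app would loop over enabled_features(manifest) and instantiate
# only the matching constructs before synthesis.
print(enabled_features(manifest))
```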
Question 8
A company is building an application to collect and transmit sensor data from a factory. The application will use AWS IoT Core to send data from hundreds of devices to an Amazon S3 data lake. The company must enrich the data before loading the data into Amazon S3.
The application will transmit the sensor data every 5 seconds. New sensor data must be available in Amazon S3 less than 30 minutes after the application collects the data. No other applications are processing the sensor data from AWS IoT Core.
Which solution will meet these requirements MOST cost-effectively?
A. Create a topic in AWS IoT Core to ingest the sensor data. Create an AWS Lambda function to enrich the data and to write the data to Amazon S3. Configure an AWS IoT rule action to invoke the Lambda function.
B. Use AWS IoT Core Basic Ingest to ingest the sensor data. Configure an AWS IoT rule action to write the data to Amazon Kinesis Data Firehose. Set the Kinesis Data Firehose buffering interval to 900 seconds. Use Kinesis Data Firehose to invoke an AWS Lambda function to enrich the data. Configure Kinesis Data Firehose to deliver the data to Amazon S3.
C. Create a topic in AWS IoT Core to ingest the sensor data. Configure an AWS IoT rule action to send the data to an Amazon Timestream table. Create an AWS Lambda function to read the data from Timestream. Configure the Lambda function to enrich the data and to write the data to Amazon S3.
D. Use AWS IoT Core Basic Ingest to ingest the sensor data. Configure an AWS IoT rule action to write the data to Amazon Kinesis Data Streams. Create a consumer AWS Lambda function to process the data from Kinesis Data Streams and to enrich the data. Call the S3 PutObject API operation from the Lambda function to write the data to Amazon S3.
Show Answer
Correct Answer: B
Explanation:
The most cost-effective solution is to use AWS IoT Core Basic Ingest with Amazon Kinesis Data Firehose and Lambda for enrichment. Basic Ingest reduces IoT messaging costs when no other applications consume the data. Kinesis Data Firehose is fully managed, automatically scales, batches records, and minimizes S3 PUT requests and Lambda invocations, which significantly lowers cost compared to invoking Lambda per message. A 900-second buffering interval still ensures data arrives in S3 well within the 30-minute requirement. Other options either trigger Lambda too frequently (higher cost) or introduce unnecessary services such as Timestream or Kinesis Data Streams.
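A sketch of the two configuration pieces option B wires together; the rule name, stream name, and role ARN are placeholders. With Basic Ingest, devices publish straight to the rule topic (`$aws/rules/SensorIngestRule`), bypassing the message broker and its per-message charge.

```python
# Hedged sketch of the IoT rule (CreateTopicRule-style shape) and the
# Firehose buffering hints; all names/ARNs are illustrative.
iot_rule = {
    "ruleName": "SensorIngestRule",
    "topicRulePayload": {
        "sql": "SELECT * FROM 'sensors/ingest'",
        "actions": [{
            "firehose": {
                "deliveryStreamName": "sensor-delivery-stream",
                "roleArn": "arn:aws:iam::111122223333:role/iot-to-firehose",
                "separator": "\n",  # newline-delimited records in S3
            }
        }],
    },
}

# 900-second (15-minute) batches leave ample margin inside the
# 30-minute data-freshness requirement while minimizing Lambda
# invocations and S3 PUT requests.
buffering_hints = {"IntervalInSeconds": 900, "SizeInMBs": 64}
```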