A) Remove some EC2 instances to increase the utilization of remaining instances.
B) Increase the Amazon Elastic Block Store (Amazon EBS) capacity of instances with less CPU utilization.
C) Modify the Auto Scaling group scaling policy to scale in and out based on a higher CPU utilization metric.
D) Create a new launch configuration that uses smaller instance types. Update the existing Auto Scaling group.
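A minimal boto3 sketch of the approach in option D, using a launch template in place of the older launch configuration API (the group name, template name, AMI ID, and instance type are all hypothetical):

    import boto3

    ec2 = boto3.client("ec2")
    asg = boto3.client("autoscaling")

    # Create a launch template that specifies a smaller instance type.
    ec2.create_launch_template(
        LaunchTemplateName="web-tier-small",  # hypothetical name
        LaunchTemplateData={
            "InstanceType": "t3.small",          # smaller, cheaper type
            "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
        },
    )

    # Point the existing Auto Scaling group at the new template.
    asg.update_auto_scaling_group(
        AutoScalingGroupName="web-tier-asg",  # hypothetical group
        LaunchTemplate={"LaunchTemplateName": "web-tier-small", "Version": "$Latest"},
    )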
Multiple Choice
A) Add a target tracking scaling policy with a short cooldown period.
B) Change the Auto Scaling group launch configuration to use a larger instance type.
C) Change the Auto Scaling group to use six servers across three Availability Zones.
D) Change the Auto Scaling group to use eight servers across two Availability Zones.
Multiple Choice
A) Increase the minimum capacity for the Auto Scaling group.
B) Increase the maximum capacity for the Auto Scaling group.
C) Configure scheduled scaling to scale up to the desired compute level.
D) Change the scaling policy to add more EC2 instances during each scaling operation.
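A minimal boto3 sketch of the scheduled scaling in option C; the group name, schedule, and capacities are hypothetical:

    import boto3

    asg = boto3.client("autoscaling")

    # Raise capacity ahead of a known traffic spike on a recurring schedule.
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="batch-asg",  # hypothetical group
        ScheduledActionName="friday-scale-up",
        Recurrence="0 22 * * 5",           # cron in UTC: Fridays at 22:00
        MinSize=10,
        MaxSize=20,
        DesiredCapacity=10,
    )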
Multiple Choice
A) Enable the versioning and MFA Delete features on the S3 bucket.
B) Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
C) Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates.
D) Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.
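For option A, versioning and MFA Delete are enabled in a single call, and MFA Delete can only be set using the root user's MFA device. A minimal boto3 sketch (the bucket name, MFA serial, and token code are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # MFA Delete requires the root account's MFA device serial plus a current code.
    s3.put_bucket_versioning(
        Bucket="audit-logs-bucket",  # hypothetical bucket
        MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )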
Multiple Choice
A) Amazon Aurora
B) Amazon DynamoDB
C) Amazon RDS
D) Amazon Redshift
Multiple Choice
A) Amazon EFS
B) Amazon FSx
C) Amazon S3
D) AWS Storage Gateway
Multiple Choice
A) Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B) Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C) Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D) Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
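For option C, the Lambda function sits behind a Kinesis Data Streams event source mapping, strips the sensitive fields, and writes the cleaned record to DynamoDB. A minimal sketch (the field names and table name are hypothetical):

    import base64
    import json

    import boto3

    SENSITIVE_FIELDS = {"card_number", "cvv"}  # hypothetical sensitive fields
    table = boto3.resource("dynamodb").Table("transactions")  # hypothetical table

    def handler(event, context):
        # Kinesis delivers records base64-encoded under event["Records"].
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            for field in SENSITIVE_FIELDS:
                payload.pop(field, None)  # remove sensitive data before storage
            table.put_item(Item=payload)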
Multiple Choice
A) Configure storage Auto Scaling on the RDS for Oracle instance.
B) Migrate the database to Amazon Aurora to use Auto Scaling storage.
C) Configure an alarm on the RDS for Oracle instance for low free storage space.
D) Configure the Auto Scaling group to use the average CPU as the scaling metric.
E) Configure the Auto Scaling group to use the average free memory as the scaling metric.
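Options A and C can be sketched together with boto3: MaxAllocatedStorage turns on storage autoscaling, and a CloudWatch alarm watches FreeStorageSpace (which RDS reports in bytes). The instance name and thresholds are hypothetical:

    import boto3

    rds = boto3.client("rds")
    cloudwatch = boto3.client("cloudwatch")

    # Option A: set a storage ceiling, which enables storage autoscaling.
    rds.modify_db_instance(
        DBInstanceIdentifier="oracle-prod",  # hypothetical instance
        MaxAllocatedStorage=1000,            # autoscale storage up to 1,000 GiB
        ApplyImmediately=True,
    )

    # Option C: alarm when free storage drops below 10 GiB.
    cloudwatch.put_metric_alarm(
        AlarmName="oracle-prod-low-storage",
        Namespace="AWS/RDS",
        MetricName="FreeStorageSpace",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "oracle-prod"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10 * 1024**3,              # bytes
        ComparisonOperator="LessThanThreshold",
    )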
Multiple Choice
A) Use an Amazon Timestream database.
B) Use an Amazon Neptune database in a Multi-AZ design.
C) Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design.
D) Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1) storage.
Multiple Choice
A) Amazon DynamoDB
B) Amazon RDS for MySQL
C) MySQL-compatible Amazon Aurora Serverless
D) MySQL deployed on Amazon EC2 in an Auto Scaling group
Multiple Choice
A) Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation.
B) Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager.
C) Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
D) Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.
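Options B and C differ mainly in who supplies the rotation function; Secrets Manager provides managed rotation Lambdas for RDS for MySQL, so scheduling rotation takes a single call. A minimal boto3 sketch (the secret name and Lambda ARN are hypothetical):

    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Attach a rotation Lambda and a 30-day schedule to an existing secret.
    secretsmanager.rotate_secret(
        SecretId="prod/mysql/app-user",  # hypothetical secret
        RotationLambdaARN=(
            "arn:aws:lambda:us-east-1:111122223333:"
            "function:SecretsManagerMySQLRotation"  # hypothetical ARN
        ),
        RotationRules={"AutomaticallyAfterDays": 30},
    )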
Multiple Choice
A) Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS.
B) Use Amazon EC2 instances to migrate and operate the database servers.
C) Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon DynamoDB.
D) Use an AWS Snowball Edge Storage Optimized device to migrate the data from Oracle to Amazon Aurora.
Multiple Choice
A) Amazon S3 for cold data storage
B) Amazon Elastic File System (Amazon EFS) for cold data storage
C) Amazon S3 for high-performance parallel storage
D) Amazon FSx for Lustre for high-performance parallel storage
E) Amazon FSx for Windows File Server for high-performance parallel storage
Multiple Choice
A) Add a set of VPNs between the Management and Production VPCs.
B) Add a second virtual private gateway and attach it to the Management VPC.
C) Add a second set of VPNs to the Management VPC from a second customer gateway device.
D) Add a second VPC peering connection between the Management VPC and the Production VPC.
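For option D, a VPC peering connection is requested from one side and accepted on the other; routes to the peer CIDR must still be added to each VPC's route tables. A minimal boto3 sketch with hypothetical VPC IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Request peering from the Management VPC to the Production VPC.
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-0aaa1111bbbb2222c",      # hypothetical Management VPC
        PeerVpcId="vpc-0ddd3333eeee4444f",  # hypothetical Production VPC
    )
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accept from the owner of the Production VPC (same account here).
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)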
Multiple Choice
A) Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.
B) Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.
C) Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.
D) Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue.
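Option D is the classic SQS decoupling pattern: the producer enqueues, and the consumer long-polls, processes, and deletes each message. A minimal boto3 sketch with a hypothetical queue name:

    import json

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="microservice-handoff")["QueueUrl"]

    # Microservice 1: publish a unit of work.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": 42}))

    # Microservice 2: long-poll, process, then delete.
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        work = json.loads(message["Body"])
        # ... process the work item ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])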
Multiple Choice
A) Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database.
B) Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
C) Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database.
D) Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
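For options A and B, a test copy of an Aurora cluster is restored from a snapshot as a new cluster, and at least one instance must then be added to it. A minimal boto3 sketch with hypothetical identifiers:

    import boto3

    rds = boto3.client("rds")

    # Restore the cluster (storage layer) from an existing snapshot.
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier="test-cluster",         # hypothetical
        SnapshotIdentifier="prod-cluster-snap-01",  # hypothetical
        Engine="aurora-mysql",
    )

    # A restored Aurora cluster has no instances until one is created.
    rds.create_db_instance(
        DBInstanceIdentifier="test-instance-1",
        DBClusterIdentifier="test-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )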
Multiple Choice
A) Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
B) Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled.
C) Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts.
D) Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account.
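Option C's SCP can be expressed as a deny on the CloudTrail write actions and attached to the developers' organizational unit. A minimal boto3 sketch (the OU ID is hypothetical):

    import json

    import boto3

    scp_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": [
                    "cloudtrail:StopLogging",
                    "cloudtrail:DeleteTrail",
                    "cloudtrail:UpdateTrail",
                ],
                "Resource": "*",
            }
        ],
    }

    organizations = boto3.client("organizations")
    policy = organizations.create_policy(
        Name="DenyCloudTrailChanges",
        Description="Prevent developer accounts from disabling CloudTrail",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )
    organizations.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-ab12-developers",  # hypothetical developer OU
    )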
Multiple Choice
A) Amazon CloudFront and Amazon S3
B) AWS Lambda and Amazon DynamoDB
C) Application Load Balancer with Amazon EC2 Auto Scaling
D) Amazon Route 53 with internal Application Load Balancers
Multiple Choice
A) Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
B) Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
C) Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
D) Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.
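Option D's policy tracks a custom CloudWatch metric, since queue age is not one of the predefined target-tracking metrics. A minimal boto3 sketch (the group, queue, and target value are hypothetical):

    import boto3

    asg = boto3.client("autoscaling")

    # Scale so the oldest queued message stays under roughly five minutes old.
    asg.put_scaling_policy(
        AutoScalingGroupName="image-workers",  # hypothetical group
        PolicyName="sqs-oldest-message-age",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/SQS",
                "MetricName": "ApproximateAgeOfOldestMessage",
                "Dimensions": [{"Name": "QueueName", "Value": "image-jobs"}],
                "Statistic": "Maximum",
            },
            "TargetValue": 300.0,  # seconds
        },
    )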
Multiple Choice
A) Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
B) Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
C) Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
D) Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
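Option C's service-level scaling registers the ECS service's desired count with Application Auto Scaling and tracks average CPU. A minimal boto3 sketch (the cluster and service names are hypothetical):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    resource_id = "service/prod-cluster/web-service"  # hypothetical cluster/service

    # Make the service's DesiredCount scalable between 2 and 10 tasks.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Track the service's average CPU utilization toward 70%.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "TargetValue": 70.0,
        },
    )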