
BDS-C00 Online Practice Questions and Answers

Question 4

A customer has an Amazon S3 bucket. Objects are uploaded simultaneously by a cluster of servers from multiple streams of data. The customer maintains a catalog of objects uploaded in Amazon S3 using an Amazon DynamoDB table. This catalog has the following fields: StreamName, TimeStamp, and ServerName, from which ObjectName can be obtained.

The customer needs to define the catalog to support querying for a given stream or server within a defined time range.

Which DynamoDB table scheme is most efficient to support these queries?

A. Define a Primary Key with ServerName as Partition Key and TimeStamp as Sort Key. Do NOT define a Local Secondary Index or Global Secondary Index.

B. Define a Primary Key with StreamName as Partition Key and TimeStamp followed by ServerName as Sort Key. Define a Global Secondary Index with ServerName as partition key and TimeStamp followed by StreamName.

C. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with StreamName as Partition Key. Define a Global Secondary Index with TimeStamp as Partition Key.

D. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with TimeStamp as Partition Key. Define a Global Secondary Index with StreamName as Partition Key and TimeStamp as Sort Key.
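Scheme B above can be sketched as DynamoDB `CreateTable` parameters (a minimal sketch in boto3-style request syntax; the concatenated sort-key attribute names `TimeStampServerName` and `TimeStampStreamName` are illustrative assumptions, not from the question):

```python
# Sketch of scheme B: base table keyed for per-stream time-range queries,
# GSI keyed for per-server time-range queries. The application would write
# the composite sort keys as concatenated strings (e.g. "2024-05-09T10:00Z#web-01").
catalog_table = {
    "TableName": "ObjectCatalog",
    "AttributeDefinitions": [
        {"AttributeName": "StreamName", "AttributeType": "S"},
        {"AttributeName": "TimeStampServerName", "AttributeType": "S"},
        {"AttributeName": "ServerName", "AttributeType": "S"},
        {"AttributeName": "TimeStampStreamName", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "StreamName", "KeyType": "HASH"},
        {"AttributeName": "TimeStampServerName", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "ServerIndex",
            "KeySchema": [
                {"AttributeName": "ServerName", "KeyType": "HASH"},
                {"AttributeName": "TimeStampStreamName", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

A time-range query for one stream would then use a key condition such as `StreamName = :s AND TimeStampServerName BETWEEN :start AND :end`, and the symmetric query per server would hit `ServerIndex`.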

Question 5

An online photo album app has a key design feature to support multiple screens (e.g., desktop, mobile phone, and tablet) with high-quality displays. Multiple versions of the image must be saved in different resolutions and layouts.

The image-processing Java program takes an average of five seconds per upload, depending on the image size and format. Each image upload captures the following image metadata: user, album, photo label, upload timestamp.

The app should support the following requirements:

1. Hundreds of user image uploads per second
2. Maximum image upload size of 10 MB
3. Maximum image metadata size of 1 KB
4. Image displayed in optimized resolution on all supported screens no later than one minute after image upload

Which strategy should be used to meet these requirements?

A. Write images and metadata to Amazon Kinesis. Use a Kinesis Client Library (KCL) application to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB.

B. Write image and metadata to Amazon RDS with a BLOB data type. Use AWS Data Pipeline to run the image processing and save the image output to Amazon S3 and metadata to the app repository DB.

C. Upload the image with metadata to Amazon S3, use an AWS Lambda function to run the image processing, and save the processed images to Amazon S3 and the metadata to the app repository DB.

D. Write image and metadata to Amazon Kinesis. Use Amazon Elastic MapReduce (EMR) with Spark Streaming to run the image processing and save the processed images to Amazon S3 and the metadata to the app repository DB.
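As a concrete illustration of the S3-plus-Lambda strategy in option C, the handler below sketches the event flow from an S3 upload notification; `process_image`, the resolution list, and the output key layout are hypothetical stand-ins for the real image-processing step, not AWS APIs:

```python
# Minimal sketch, assuming an S3 event-notification trigger on the upload
# bucket. The real resizing (the ~5 s Java step described in the question)
# and the metadata-store write are placeholders.
RESOLUTIONS = ["desktop", "mobile", "tablet"]

def process_image(bucket, key, resolution):
    # Placeholder: here it only computes where the resized copy would land.
    return f"processed/{resolution}/{key}"

def handler(event, context=None):
    outputs = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        for res in RESOLUTIONS:
            outputs.append(process_image(bucket, key, res))
    return outputs
```

Invoked with a fake event for `album1/cat.jpg`, the handler would fan out one output key per supported resolution, which is how the one-minute display requirement can be met independently of upload volume.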

Question 6

An online retailer is using Amazon DynamoDB to store data related to customer transactions. The items in the table contain several string attributes describing the transaction as well as a JSON attribute containing the shopping cart and other details corresponding to the transaction. Average item size is 250 KB, most of which is associated with the JSON attribute. The average customer generates ~3 GB of data per month.

Customers access the table to display their transaction history and review transaction details as needed. Ninety percent of the queries against the table are executed when building the transaction history view, with the other 10% retrieving transaction details. The table is partitioned on CustomerID and sorted on transaction date.

The client has very high read capacity provisioned for the table and experiences very even utilization, but complains about the cost of Amazon DynamoDB compared to other NoSQL solutions.

Which strategy will reduce the cost associated with the client's read queries while not degrading quality?

A. Modify all database calls to use eventually consistent reads and advise customers that transaction history may be one second out-of-date.

B. Change the primary table to partition on TransactionID, create a GSI partitioned on customer and sorted on date, project small attributes into the GSI, and then query the GSI for summary data and the primary table for JSON details.

C. Vertically partition the table, store base attributes on the primary table, and create a foreign key reference to a secondary table containing the JSON data. Query the primary table for summary data and the secondary table for JSON details.

D. Create an LSI sorted on date, project the JSON attribute into the index, and then query the primary table for summary data and the LSI for JSON details.
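Option B's two read paths can be sketched as DynamoDB request parameters (boto3-style low-level syntax; the table, index, attribute, and key values are illustrative assumptions):

```python
# Sketch of option B's read split: the slim GSI serves the 90% history
# queries cheaply, and the base table serves the 10% full-item lookups.
history_query = {           # transaction-history view from the projected GSI
    "TableName": "Transactions",
    "IndexName": "CustomerDateIndex",
    "KeyConditionExpression": "CustomerID = :cid",
    "ExpressionAttributeValues": {":cid": {"S": "C-1001"}},
    "ScanIndexForward": False,   # newest transactions first
}
detail_get = {              # full ~250 KB item, including the JSON attribute
    "TableName": "Transactions",
    "Key": {"TransactionID": {"S": "T-42"}},
}
```

The cost saving comes from the GSI projecting only the small attributes, so history reads consume read capacity proportional to a few hundred bytes per item instead of 250 KB.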

Question 7

An organization uses a custom MapReduce application to build monthly reports based on many small data files in an Amazon S3 bucket. The data is submitted from various business units on a frequent but unpredictable schedule. As the dataset continues to grow, it becomes increasingly difficult to process all of the data in one day. The organization has scaled up its Amazon EMR cluster, but other optimizations could improve performance.

The organization needs to improve performance with minimal changes to existing processes and applications.

What action should the organization take?

A. Use Amazon S3 Event Notifications and AWS Lambda to create a quick search file index in DynamoDB.

B. Add Spark to the Amazon EMR cluster and utilize Resilient Distributed Datasets in-memory.

C. Use Amazon S3 Event Notifications and AWS Lambda to index each file into an Amazon Elasticsearch Service cluster.

D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.

E. Have business units submit data via Amazon Kinesis Firehose to aggregate data hourly into Amazon S3.
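Option D's aggregation step can be sketched as an Amazon EMR step definition that invokes S3DistCp to combine many small files into fewer large ones (bucket names, the `--groupBy` regex, and the target size are placeholders):

```python
# Sketch of an EMR step running S3DistCp via command-runner.jar. Files whose
# keys match the same --groupBy capture group are concatenated into a single
# output file of roughly --targetSize MiB, which suits MapReduce far better
# than many small objects.
s3distcp_step = {
    "Name": "Aggregate small files",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": [
            "s3-dist-cp",
            "--src", "s3://example-input-bucket/raw/",
            "--dest", "s3://example-input-bucket/aggregated/",
            "--groupBy", ".*/(\\w+)/.*\\.json",
            "--targetSize", "128",
        ],
    },
}
```

A dict like this would be passed to an EMR `AddJobFlowSteps` call by the scheduled AWS Data Pipeline process; the existing reporting application then reads the aggregated prefix with no code changes.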

Question 8

A company is centralizing a large number of unencrypted small files from multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.

Which method meets the requirements?

A. Compare the S3 Etags from the source and destination objects.

B. Call the S3 CompareObjects API for the source and destination objects.

C. Place a HEAD request against the source and destination objects comparing SIG v4 headers.

D. Compare the size of the source and destination objects.
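On the ETag approach in option A: for small objects uploaded in a single PUT, the S3 ETag is the hex MD5 digest of the object body, so equal ETags imply equal content (objects uploaded via multipart upload carry a composite `md5-of-md5s-N` ETag instead and would need different handling). A minimal sketch:

```python
import hashlib

def single_put_etag(body: bytes) -> str:
    # ETag of a non-multipart S3 object: hex MD5 of the bytes.
    return hashlib.md5(body).hexdigest()

def same_content(etag_src: str, etag_dst: str) -> bool:
    # S3 returns ETags wrapped in double quotes; strip before comparing.
    return etag_src.strip('"') == etag_dst.strip('"')
```

Comparing sizes (option D) would miss same-length files with different bytes, which is why the checksum-backed ETag comparison is the stronger verification.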

Question 9

Customers have recently been complaining that your web application has randomly stopped responding. During a deep dive of your logs, the team has discovered a major bug in your Java web application. This bug is causing a memory leak that eventually causes the application to crash.

Your web application runs on Amazon EC2 and was built with AWS CloudFormation. Which techniques should you use to help detect these problems faster, as well as help eliminate the server's unresponsiveness?

Choose 2 answers

A. Update your AWS CloudFormation configuration and enable a CustomResource that uses cfn-signal to detect memory leaks

B. Update your CloudWatch metric granularity config for all Amazon EC2 memory metrics to support five-second granularity. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory becomes too large

C. Update your AWS CloudFormation configuration to take advantage of Auto Scaling groups. Configure an Auto Scaling group policy to trigger off your custom CloudWatch metrics

D. Create a custom CloudWatch metric to which you push your JVM memory usage. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory usage becomes too large

E. Update your AWS CloudFormation configuration to take advantage of the CloudWatch Metrics Agent. Configure the agent to monitor memory usage and trigger an Amazon SNS alarm
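The custom-metric-plus-alarm pattern in option D can be sketched as CloudWatch `PutMetricData` and `PutMetricAlarm` request parameters; the namespace, threshold, and SNS topic ARN are placeholders:

```python
# Sketch: the application periodically pushes JVM heap usage as a custom
# metric (EC2 provides no memory metric natively), and an alarm on that
# metric pages the team via SNS before the leak crashes the process.
metric_data = {
    "Namespace": "WebApp/JVM",
    "MetricData": [{
        "MetricName": "HeapUsedBytes",
        "Value": 1_572_864_000,   # would come from the JVM at runtime
        "Unit": "Bytes",
    }],
}
alarm = {
    "AlarmName": "jvm-heap-too-large",
    "Namespace": "WebApp/JVM",
    "MetricName": "HeapUsedBytes",
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 1.5e9,           # bytes; tune to the instance's memory
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:page-oncall"],
}
```

These dicts map onto `cloudwatch.put_metric_data(**metric_data)` and `cloudwatch.put_metric_alarm(**alarm)` in boto3; pairing the alarm with an Auto Scaling replacement policy (option C) is what removes the unresponsive servers.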

Question 10

A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring. How can the user achieve this?

A. Update the Launch config with CLI to set InstanceMonitoringDisabled = false

B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring

C. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true

D. Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group
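Because launch configurations are immutable once created, option D's approach amounts to creating a replacement configuration with detailed monitoring enabled and pointing the Auto Scaling group at it, sketched here as request parameters (the AMI ID, instance type, and names are placeholders):

```python
# Sketch: create_launch_configuration(**new_launch_config) followed by
# update_auto_scaling_group(**asg_update) in boto3. New instances launched
# by the group then report 1-minute detailed metrics.
new_launch_config = {
    "LaunchConfigurationName": "web-lc-detailed-v2",
    "ImageId": "ami-0abcdef1234567890",
    "InstanceType": "m5.large",
    "InstanceMonitoring": {"Enabled": True},  # detailed (1-minute) monitoring
}
asg_update = {
    "AutoScalingGroupName": "web-asg",
    "LaunchConfigurationName": "web-lc-detailed-v2",
}
```

Existing instances keep basic monitoring until they are replaced, so a rolling replacement completes the switchover.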

Question 11

A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory.

Which combination of the following will meet these requirements?

Choose 2 answers

A. AWS Directory Service AD connector

B. AWS Directory Service Simple AD

C. AWS Identity and Access Management groups

D. AWS Identity and Access Management roles

E. AWS Identity and Access Management users
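One common way the IAM-role half of this setup is wired up is a role trust policy that lets directory users assume the role through federated sign-in. A hedged sketch, with the account ID and provider name as placeholders:

```python
import json

# Sketch of a trust policy for a federated console-access role. The SAML
# provider "CorpAD" stands in for whatever identity provider fronts the
# corporate directory; group-to-role mapping happens on the IdP side.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": "arn:aws:iam::123456789012:saml-provider/CorpAD"
        },
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {
            "StringEquals": {
                "SAML:aud": "https://signin.aws.amazon.com/saml"
            }
        },
    }],
}
trust_policy_json = json.dumps(trust_policy, indent=2)
```

Developers never get long-lived IAM users this way; they receive temporary role credentials, which is what the federation mandate requires.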

Question 12

Which of the following requires a custom CloudWatch metric to monitor?

A. Memory utilization of an EC2 instance

B. CPU utilization of an EC2 instance

C. Disk usage activity of an EC2 instance

D. Data transfer of an EC2 instance

Question 13

What's an ECU?

A. Extended Cluster User.

B. None of these.

C. Elastic Computer Usage.

D. Elastic Compute Unit.

Exam Code: BDS-C00
Exam Name: AWS Certified Big Data - Specialty (BDS-C00)
Last Update: May 09, 2024
Questions: 264 Q&As
