Google Certified Associate Cloud Engineer Practice Test 8
Question 1 of 65
1. Question
You want to find a list of regions and the prebuilt images offered by Google Compute Engine. Which commands should you execute to retrieve this information?
Explanation
gcloud regions list and gcloud images list. is not right.
The correct command to list compute regions is gcloud compute regions list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
The correct command to list compute images is gcloud compute images list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list
gcloud compute regions list and gcloud images list. is not right.
The correct command to list compute images is gcloud compute images list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list
gcloud regions list and gcloud compute images list. is not right.
The correct command to list compute regions is gcloud compute regions list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
gcloud compute regions list and gcloud compute images list. is the right answer.
Both commands correctly retrieve the regions and prebuilt images offered by Google Compute Engine.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list
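As a quick illustration, here is a minimal sketch of running both commands; the --filter and --format flags are standard gcloud list options, and the debian-11 image family used for narrowing the output is just an assumed example value.
$ gcloud compute regions list
# Narrow the image list to one public image family and show a few columns
$ gcloud compute images list --filter="family:debian-11" --format="table(name, family, status)"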
Question 2 of 65
2. Question
You want to find out when users were added to Cloud Spanner Identity and Access Management (IAM) roles on your Google Cloud Platform (GCP) project. What should you do in the GCP Console?
Explanation
Go to the Stackdriver Monitoring console and review information for Cloud Spanner. is not right.
Monitoring collects metrics, events, and metadata from Google Cloud and lets you generate insights via dashboards, charts, and alerts. It can't tell you when a role was granted to a user.
Ref: https://cloud.google.com/monitoring/docs
Open the IAM & admin console to review IAM policies for Cloud Spanner roles. is not right.
IAM policies show the current role bindings, but not the timestamps of when they were granted. https://cloud.google.com/iam/docs/overview
Open the Cloud Spanner console to review configurations. is not right.
You manage Cloud Spanner instances in this console, but you can't check when a role was granted to a user.
Ref: https://cloud.google.com/spanner/docs/quickstart-console
Go to the Stackdriver Logging console, review admin activity logs, and filter them for Cloud Spanner IAM roles. is the right answer.
Admin Activity audit logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Cloud Identity and Access Management permissions. Admin Activity audit logs are always written; you can't configure or disable them. There is no charge for your Admin Activity audit logs.
Ref: https://cloud.google.com/logging/docs/audit#admin-activity
A sample Admin Activity audit log entry for such a change contains, among other things, the following payload:
{
action: ADD
role: roles/spanner.admin
member: user:[email protected]
}
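The same check can also be scripted instead of done in the console. The sketch below uses gcloud logging read with a filter on the Admin Activity log; the serviceName and methodName values shown are assumptions about how Spanner IAM changes appear in your project's audit logs and should be adjusted after inspecting a real entry.
$ gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="spanner.googleapis.com" AND protoPayload.methodName:"SetIamPolicy"' \
    --limit=10 --format=json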
Question 3 of 65
3. Question
You want to ingest and analyze large volumes of stream data from sensors in real-time, matching the high speeds of IoT data to track normal and abnormal behavior. You want to run it through a data processing pipeline and store the results. Finally, you want to enable customers to build dashboards and drive analytics on their data in real-time. What services should you use for this task?
Explanation
You want to ingest large volumes of streaming data at high speeds. So you need to use Cloud Pub/Sub. Cloud Pub/Sub provides a simple and reliable staging location for your event data on its journey towards processing, storage, and analysis. Cloud Pub/Sub is serverless and you can ingest events at any scale.
Ref: https://cloud.google.com/pubsub
Next, you want to analyze this data. Cloud Dataflow is a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing. Dataflow enables fast, simplified streaming data pipeline development with lower data latency.
Ref: https://cloud.google.com/dataflow
Next, you want to store these results. BigQuery is an ideal place to store these results as BigQuery supports the querying of streaming data in real-time. This assists in real-time predictive analytics.
Ref: https://cloud.google.com/bigquery
Therefore the correct answer is Cloud Pub/Sub, Cloud Dataflow, and BigQuery.
Here's more information from the Google docs about the stream analytics use case. Google recommends using Dataflow along with Pub/Sub and BigQuery. https://cloud.google.com/dataflow#section-6
Google's stream analytics makes data more organized, useful, and accessible from the instant it's generated. Built on Dataflow along with Pub/Sub and BigQuery, our streaming solution provisions the resources you need to ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights. This abstracted provisioning reduces complexity and makes stream analytics accessible to both data analysts and data engineers.
and https://cloud.google.com/solutions/stream-analytics
Ingest, process, and analyze event streams in real time. Stream analytics from Google Cloud makes data more organized, useful, and accessible from the instant it's generated. Built on the autoscaling infrastructure of Pub/Sub, Dataflow, and BigQuery, our streaming solution provisions the resources you need to ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights.
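To make the pipeline concrete, here is a rough sketch wiring the three services together with the Google-provided Pub/Sub-to-BigQuery Dataflow template. The topic, project, dataset, table, and region names are placeholders, and the template path and parameter names should be verified against the current Dataflow template documentation before use.
$ gcloud pubsub topics create sensor-events
# (assumes the BigQuery dataset and output table already exist with a matching schema)
$ gcloud dataflow jobs run sensor-pipeline \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --parameters=inputTopic=projects/my-project/topics/sensor-events,outputTableSpec=my-project:iot.sensor_events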
Question 4 of 65
4. Question
You want to list all the compute instances in zones us-central1-b and europe-west1-d. Which of the commands below should you run to retrieve this information?
Explanation
gcloud compute instances get --filter="zone:( us-central1-b europe-west1-d )". is not right.
gcloud compute instances command does not support get action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances
gcloud compute instances get --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results. is not right.
gcloud compute instances command does not support get action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances
gcloud compute instances list --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results. is not right.
The first command retrieves compute instances from us-central1-b and the second command retrieves compute instances from europe-west1-d. The output from the two statements can be combined to create a full list of instances from us-central1-b and europe-west1-d, however, this is not efficient as it is a manual activity. Moreover, gcloud already provides the ability to list and filter on multiple zones in a single command.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d )". is the right answer.
gcloud compute instances list lists Google Compute Engine instances. The output includes internal as well as external IP addresses. The filter expression --filter="zone:( us-central1-b europe-west1-d )" is used to filter instances from zones us-central1-b and europe-west1-d.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Here's a sample output of the command.
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-cluster-1-default-pool-8c599c87-16g9 us-central1-a n1-standard-1 10.128.0.8 35.184.212.227 RUNNING
gke-cluster-1-default-pool-8c599c87-36xh us-central1-b n1-standard-1 10.129.0.2 34.68.254.220 RUNNING
gke-cluster-1-default-pool-8c599c87-lprq us-central1-c n1-standard-1 10.130.0.13 35.224.96.151 RUNNING
$ gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d )"
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-cluster-1-default-pool-8c599c87-36xh us-central1-b n1-standard-1 10.129.0.2 34.68.254.220 RUNNING
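As a further illustration (output columns chosen arbitrarily), the same filtered listing can be trimmed with the standard --format flag:
$ gcloud compute instances list \
    --filter="zone:( us-central1-b europe-west1-d )" \
    --format="table(name, zone, status)"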
Question 5 of 65
5. Question
You want to list all the internal and external IP addresses of all compute instances. Which of the commands below should you run to retrieve this information?
Explanation
gcloud compute instances list-ip. is not right.
The gcloud compute instances command group doesn't support a list-ip action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
gcloud compute networks list-ip. is not right.
The gcloud compute networks command group doesn't support a list-ip action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/networks/list
gcloud compute networks list. is not right.
gcloud compute networks list doesn't list instance IP addresses; it lists Google Compute Engine networks (i.e. VPCs).
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/networks/list
Here's a sample output of the command.
$ gcloud compute networks list
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
default AUTO REGIONAL
test-vpc CUSTOM REGIONAL
gcloud compute instances list. is the right answer.
gcloud compute instances list lists Google Compute Engine instances. The output includes internal as well as external IP addresses.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Here's a sample output of the command.
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-cluster-1-default-pool-8c599c87-16g9 us-central1-a n1-standard-1 10.128.0.8 35.184.212.227 RUNNING
gke-cluster-1-default-pool-8c599c87-36xh us-central1-a n1-standard-1 10.128.0.6 34.68.254.220 RUNNING
gke-cluster-1-default-pool-8c599c87-lprq us-central1-a n1-standard-1 10.128.0.7 35.224.96.151 RUNNING
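If you only want the IP columns, a minimal sketch using gcloud's --format projections is shown below; the field paths are taken from the instance resource (networkInterfaces and accessConfigs) and the labels are arbitrary.
$ gcloud compute instances list \
    --format="table(name, networkInterfaces[0].networkIP:label=INTERNAL_IP, networkInterfaces[0].accessConfigs[0].natIP:label=EXTERNAL_IP)"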
Question 6 of 65
6. Question
You want to migrate an application from Google App Engine Standard to Google App Engine Flex. Your application is currently serving live traffic and you want to ensure everything is working in Google App Engine Flex before migrating all traffic. You want to minimize effort and ensure the availability of service. What should you do?
Explanation
1. Set env: flex in app.yaml
2. gcloud app deploy --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions migrate [NEW_VERSION]. is not right.
Executing gcloud app deploy --version=[NEW_VERSION] without --no-promote would deploy the new version and immediately promote it to serve traffic. We don't want this version to receive traffic before we have validated it.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
1. Set env: app-engine-flex in app.yaml
2. gcloud app deploy --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions start [NEW_VERSION] is not right.
env: app-engine-flex is an invalid setting. The correct syntax for using the flexible environment is env: flex. Also, executing gcloud app deploy --version=[NEW_VERSION] without --no-promote would deploy the new version and immediately promote it to serve traffic. We don't want this version to receive traffic before we have validated it.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
1. Set env: app-engine-flex in app.yaml
2. gcloud app deploy --no-promote --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions start [NEW_VERSION] is not right.
env: app-engine-flex is an invalid setting. The correct syntax for using the flex engine is env: flex.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
1. Set env: flex in app.yaml
2. gcloud app deploy --no-promote --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions migrate [NEW_VERSION] is the right answer.
These steps together achieve the end goal while satisfying our requirements. Setting env: flex in app.yaml and executing gcloud app deploy --no-promote --version=[NEW_VERSION] deploys a new version to the App Engine flexible environment without routing any traffic to it. We can then validate this version before migrating live traffic to it by running gcloud app versions migrate [NEW_VERSION].
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
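Putting the steps together, a minimal sketch looks like the following; the version name is a placeholder and the app.yaml change is shown as a comment.
# In app.yaml, switch the runtime to the flexible environment:
#   env: flex
$ gcloud app deploy --no-promote --version=v2-flex
# Validate the new version via its version-specific URL before moving traffic
$ gcloud app versions migrate v2-flex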
Question 7 of 65
7. Question
You want to persist logs for 10 years to comply with regulatory requirements. You want to follow Google recommended practices. Which Google Cloud Storage class should you use?
Explanation
In April 2019, Google introduced a new storage class: Archive. The Archive storage class is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Google previously recommended the Coldline storage class for this kind of requirement, but the guidance has since been updated: Coldline Storage is ideal for data you plan to read or modify at most once a quarter, while data kept entirely for backup or archiving purposes is better served by Archive Storage, which offers the lowest storage costs. Archive Storage is therefore the right choice for persisting logs for 10 years.
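For illustration, a sketch of provisioning such a bucket with gsutil follows; the bucket name and location are placeholders, and the retention-period syntax (10y) should be checked against the current gsutil retention documentation.
$ gsutil mb -c archive -l us-central1 gs://example-compliance-logs/
# Optionally lock objects against deletion for 10 years with a bucket retention policy
$ gsutil retention set 10y gs://example-compliance-logs/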
Question 8 of 65
8. Question
You want to reduce storage costs for infrequently accessed data. The data will still be accessed approximately once a month, and data older than 2 years is no longer needed. What should you do to reduce storage costs? (Select 2)
Explanation
Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 2 years. is not right.
Data older than 2 years is not needed so there is no point in transitioning the data to Coldline. The data needs to be deleted.
Set an Object Lifecycle Management policy to change the storage class to Archive for data older than 2 years. is not right.
Data older than 2 years is not needed so there is no point in transitioning the data to Archive. The data needs to be deleted.
Store infrequently accessed data in a Multi-Regional bucket. is not right.
While infrequently accessed data can be stored in a Multi-Regional bucket, Google Cloud Storage offers several other storage classes that are aimed specifically at infrequently accessed data and cost less. Multi-Region buckets are primarily used for achieving geo-redundancy.
Ref: https://cloud.google.com/storage/docs/locations
Set an Object Lifecycle Management policy to delete data older than 2 years. is the right answer.
Since you don't need data older than 2 years, deleting such data is the right approach. You can set a lifecycle policy to automatically delete objects older than 2 years. The policy applies to current as well as future objects and doesn't need any human intervention.
Ref: https://cloud.google.com/storage/docs/lifecycle
Store infrequently accessed data in a Nearline bucket. is also a right answer.
Nearline Storage is a low-cost, highly durable storage service for storing infrequently accessed data. Nearline Storage is ideal for data you plan to read or modify on average once per month or less.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
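A minimal sketch of both parts is shown below; the bucket name and location are placeholders, and the lifecycle rule uses 730 days to approximate 2 years.
$ gsutil mb -c nearline -l us-central1 gs://example-infrequent-data/
$ cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 730}}
  ]
}
EOF
$ gsutil lifecycle set lifecycle.json gs://example-infrequent-data/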
Question 9 of 65
9. Question
You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache and need an additional 2 GB of memory for the rest of the processes. You want to minimize costs. How should you run this reverse proxy?
Explanation
Requirements
1. latency sensitive
2. 30 GB in-memory cache
3. 2 GB for rest of processes
4. Cost-effective
Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB. is not right.
Fetching data from disk is slower than fetching it from memory. Our requirements call for a 30-GB in-memory cache for a latency-sensitive website, and a Compute Engine instance with a persistent disk can't provide an in-memory cache.
Run it on Compute Engine and choose a custom instance type with 6 vCPUs and 32 GB of memory. is not right.
While this option provides 32 GB of memory, part of it is used by the operating system and the reverse proxy process, leaving us with less than 32 GB, which does not satisfy our requirements. In addition, the reverse proxy consumes almost no CPU, so having 6 vCPUs is a waste of resources and money.
Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes. is not right.
Without going into the feasibility of this option, let's assume for now that it is possible. It is still quite expensive: at the time of writing, the compute cost alone for an n1-standard-32 instance is $1.52 per hour in the Iowa region.
Ref: https://cloud.google.com/compute/all-pricing
In comparison, Cloud Memorystore costs $0.023 per GB-hour, which works out to about $0.736 per hour for 32 GB. Ref: https://cloud.google.com/memorystore
Create a Cloud Memorystore for Redis instance with 32-GB capacity. is the right answer.
This is the only option that fits the requirements. Cloud Memorystore is a fully managed in-memory data store service for Redis built on scalable, secure, and highly available infrastructure managed by Google. Use Memorystore to build application caches that provide sub-millisecond data access.
Ref: https://cloud.google.com/memorystore
Memorystore for Redis instances are priced per GB-hour, and you can scale capacity as needed. You can also specify an eviction (maxmemory) policy to control how Redis behaves when the in-memory cache is full.
Ref: https://cloud.google.com/memorystore/docs/reference/redis-configs#maxmemory_policies
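A rough sketch of creating such an instance is shown below; the instance name, region, Redis version, and eviction policy are illustrative assumptions rather than required values.
$ gcloud redis instances create caching-proxy \
    --size=32 \
    --region=us-central1 \
    --redis-version=redis_6_x \
    --redis-config=maxmemory-policy=allkeys-lru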
Question 10 of 65
10. Question
You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery. What should you do?
Explanation
Requirements
1. Cost effective
2. Relational Data
3. Small set of data
4. One location
5. Point in time recovery
Select Cloud Spanner. Set up your instance with 2 nodes. is not right.
Cloud Spanner is a massively scalable, fully managed, relational database service for regional and global application data. Cloud Spanner is expensive compared to Cloud SQL. We have a small set of data and want to be cost-effective, so Cloud Spanner doesn't fit these requirements. Furthermore, at the time of writing, Cloud Spanner did not offer a point-in-time recovery feature.
Ref: https://cloud.google.com/spanner
Select Cloud Spanner. Set up your instance as multi-regional. is not right.
Cloud Spanner is a massively scalable, fully managed, relational database service for regional and global application data. Cloud Spanner is expensive compared to Cloud SQL. We don't have a requirement for more than one geographic location, we have only a small set of data, and we want to be cost-effective, so Cloud Spanner doesn't fit these requirements. Furthermore, at the time of writing, Cloud Spanner did not offer a point-in-time recovery feature.
Ref: https://cloud.google.com/spanner
Select Cloud SQL (MySQL). Select the create failover replicas option. is not right.
Cloud SQL can easily handle small sets of relational data and is cost-effective compared to Cloud Spanner. However, failover replicas provide high availability, not point-in-time recovery, so our requirement to support point-in-time recovery is not met.
Ref: https://cloud.google.com/sql/docs/mysql
Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected. is the right answer.
Cloud SQL can easily handle small sets of relational data and is cost-effective compared to Cloud Spanner. By enabling binary logging, we enable point-in-time recovery, which fits our requirement.
You must enable binary logging to use point-in-time recovery. Point-in-time recovery helps you recover an instance to a specific point in time. For example, if an error causes a loss of data, you can recover a database to its state before the error occurred.
Ref: https://cloud.google.com/sql/docs/mysql/backup-recovery/backups#tips-pitr
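For illustration, a sketch of creating a Cloud SQL for MySQL instance with automated backups and binary logging enabled follows; the instance name, tier, and region are placeholders, and automated backups are required for binary logging.
$ gcloud sql instances create small-ops-db \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-1 \
    --region=us-central1 \
    --backup-start-time=03:00 \
    --enable-bin-log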
Question 11 of 65
11. Question
You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?
Explanation
Our requirements are one region, archival after 30 days, and data that is accessed annually.
Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage. is not right.
When used in a multi-region, Standard Storage is appropriate for storing data that is accessed around the world, such as serving website content, streaming videos, executing interactive workloads, or serving data supporting mobile and gaming applications. This goes against our requirement of one region; moreover, it is more expensive than standard Regional storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage. is not right.
When used in a multi-region, Standard Storage is appropriate for storing data that is accessed around the world, such as serving website content, streaming videos, executing interactive workloads, or serving data supporting mobile and gaming applications. This is against our requirement of one region; moreover, it is more expensive than standard Regional Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage. is not right.
While selecting Regional Storage is the right choice, archiving to Nearline Storage is not optimal. We only need to access the data annually, whereas Nearline Storage is ideal for data you plan to read or modify on average once per month or less. Nearline Storage is also more expensive than Coldline Storage, which is a better fit for our requirements.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage. is the right answer.
Regional Storage is the right fit for our requirements (one geographic region), and archiving to Coldline Storage is the most cost-efficient solution. Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data, ideal for data you plan to read or modify at most once a quarter. Since we only need to access the data annually, which is well within that limit, and we want the most cost-efficient option, we should select Coldline Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
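For illustration, such a lifecycle rule can be applied with gsutil. The bucket name below is hypothetical, and the rule is saved locally as lifecycle.json.
lifecycle.json:
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 30}
      }
    ]
  }
}
gsutil lifecycle set lifecycle.json gs://my-archive-bucket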
Question 12 of 65
12. Question
You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?
Correct
Requirements
1. We need to enable Cloud Pub/Sub API
2. Get our application to use the service account.
Grant the App Engine default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub. is not right.
APIs are not automatically enabled on the first connection to the service (Cloud Pub/Sub in this scenario). APIs can be enabled through Google Cloud Console, gcloud command-line and REST API. See https://cloud.google.com/service-usage/docs/enable-disable for more information.
Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it. is not right.
There is no such thing as automatic enablement of the APIs when the service (Cloud Pub/Sub in this scenario) is accessed. APIs can be enabled through Google Cloud Console, gcloud command-line and REST API. See https://cloud.google.com/service-usage/docs/enable-disable for more information.
Use the Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed. is not right.
There is no such thing as automatic enablement of APIs when the application is deployed or when the service (Cloud Pub/Sub in this scenario) is accessed. APIs can be enabled through the Google Cloud Console, the gcloud command-line tool, and the REST API. See https://cloud.google.com/service-usage/docs/enable-disable for more information.
Enable the Cloud Pub/Sub API in the API Library on the GCP Console. is the right answer.
For most operational use cases, the simplest way to enable and disable services is to use the Google Cloud Console. If you need to create scripts, you can also use the gcloud command-line interface. If you need to program against the Service Usage API, Google recommends using one of the provided client libraries.
Ref: https://cloud.google.com/service-usage/docs/enable-disable
Secondly, after you create an App Engine application, the App Engine default service account is created and used as the identity of the App Engine service. The App Engine default service account is associated with your Cloud project and executes tasks on behalf of your apps running in App Engine. By default, the App Engine default service account has the Editor role in the project, so it already has the permissions needed to publish and consume Cloud Pub/Sub messages.
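For completeness, although the Console is the answer here, the API can also be enabled from the command line in the project that hosts the App Engine app:
gcloud services enable pubsub.googleapis.com
# verify the API is now enabled
gcloud services list --enabled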
Our requirement here is to serve content from two backends while following Google-recommended practices.
Let's look at each of the options.
1. Configure http://www.my-new-gcp-ace-website.com as a CNAME pointing to storage.googleapis.com
2. Create a HTTPS Load Balancer in front of the MIG
3. In the app folder of your Cloud Storage Bucket, add a file called redirect containing the address of the load balancer. is not right.
We can create a CNAME record for http://www.my-new-gcp-ace-website.com pointing to storage.googleapis.com; however, the Cloud Storage bucket does not support routing requests to a load balancer based on routing information in a file in the app folder, so this option doesn't work.
1. Deploy HAProxy in a second MIG and configure it to route /app/ to the first MIG and /static/ to your Cloud Storage bucket.
2. Create a network Load Balancer in front of the HAProxy MIG
3. Configure http://www.my-new-gcp-ace-website.com as an A record pointing to the address of the load balancer is not right.
This could possibly work, but we want to follow Google-recommended practices, and there is no reason to deploy and manage HAProxy when a Google product (the HTTPS Load Balancer with URL maps) does the same with minimal configuration.
1. Create a HTTPS Load Balancer in front of the MIG
2. In Cloud DNS in the example.com zone, create a TXT record for _app_._routes_.www.my-new-gcp-ace-website.com containing the address of the load balancer.
3. Create another TXT record for _static_._routes_.www.my-new-gcp-ace-website.com containing the URL of your Cloud Storage bucket. is not right.
TXT records are used to verify domain ownership and can hold arbitrary text, but DNS providers don't use the text in TXT records for routing.
Ref: https://cloud.google.com/dns/records
Ref: https://support.google.com/cloudidentity/answer/183895?hl=en
1. Create a HTTPS Load Balancer
2. Create a backend service associated with the MIG and route /app/ to the backend service
3. Create a backend bucket associated with your Cloud Storage Bucket, and route /static/ to the backend bucket
4. Configure http://www.my-new-gcp-ace-website.com as an A record pointing to the address of the load balancer. is the right answer.
Since we need to send requests to multiple backends, Cloud DNS alone can't help us. We need the Cloud HTTPS Load Balancer: its URL map (path-based routing) distributes traffic to backends based on the request path.
Ref: https://cloud.google.com/load-balancing/docs/url-map
The HTTPS Load Balancer can be configured to send all requests on the /app/ path to the MIG and all requests on the /static/ path to the bucket.
Ref (adding a MIG as a backend service): https://cloud.google.com/load-balancing/docs/backend-service#backend_services_and_autoscaled_managed_instance_groups
Ref (adding a backend bucket): https://cloud.google.com/load-balancing/docs/https/adding-backend-buckets-to-load-balancers
The load balancer has a public IP address, but we want to access the site at http://www.my-new-gcp-ace-website.com, so we configure that hostname as an A record pointing to the load balancer's IP address in our DNS provider. So this option is the right answer.
Ref: https://cloud.google.com/dns/records.
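A rough sketch of how this routing could be wired up with gcloud; the resource names (app-backend-service, static-assets-bucket, web-map) are hypothetical and assume the MIG-backed backend service and the Cloud Storage bucket already exist.
# backend bucket fronting the Cloud Storage bucket
gcloud compute backend-buckets create static-backend \
--gcs-bucket-name=static-assets-bucket
# URL map with path-based routing: /app/* to the MIG backend service, /static/* to the bucket
gcloud compute url-maps create web-map \
--default-service=app-backend-service
gcloud compute url-maps add-path-matcher web-map \
--path-matcher-name=web-paths \
--default-service=app-backend-service \
--path-rules="/app/*=app-backend-service" \
--backend-bucket-path-rules="/static/*=static-backend"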
Question 14 of 65
14. Question
You want to use Google Cloud Storage to host a static website on http://www.example.com for your staff. You created a bucket example-static-website and uploaded index.html and css files to it. You turned on static website hosting on the bucket and set up a CNAME record on http://www.example.com to point to c.storage.googleapis.com. You access the static website by navigating to http://www.example.com in the browser but your index page is not displayed. What should you do?
Correct
In example.com zone, modify the CNAME record to c.storage.googleapis.com/example-static-website. is not right.
CNAME records cannot contain paths. There is nothing wrong with the current CNAME record.
In example.com zone, delete the existing CNAME record and set up an A record instead to point to c.storage.googleapis.com. is not right.
A records cannot point to hostnames; they must contain IP addresses.
Reload the Cloud Storage static website server to load the objects. is not right.
There is no such thing as a Cloud Storage static website server. All infrastructure that underpins the static websites is handled by Google Cloud Platform.
Delete the existing bucket, create a new bucket with the name http://www.example.com and upload the html/css files. is the right answer.
We need to create a bucket whose name matches the CNAME record created for the domain. For example, if you added a CNAME record pointing http://www.example.com to c.storage.googleapis.com., then create a bucket with the name www.example.com. A CNAME record is a type of DNS record. It directs traffic that requests a URL from your domain to the resources you want to serve, in this case, objects in your Cloud Storage bucket. For http://www.example.com, the CNAME record might contain the following information:
NAME               TYPE     DATA
www.example.com    CNAME    c.storage.googleapis.com.
Ref: https://cloud.google.com/storage/docs/hosting-static-website
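For illustration, assuming domain ownership has already been verified as described in the referenced guide, the fix could look like this with gsutil (file paths are placeholders):
gsutil mb gs://www.example.com
gsutil cp index.html gs://www.example.com
gsutil cp -r css gs://www.example.com
# serve index.html as the main page of the static website
gsutil web set -m index.html gs://www.example.com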
Question 15 of 65
15. Question
You want to verify the IAM users and roles assigned within a GCP project named my-project. What should you do?
Correct
Requirement: verify users (i.e. IAM members) and roles.
Run gcloud iam roles list. Review the output section. is not right.
gcloud iam roles list lists the roles but does not list the users (i.e. IAM members)
Run gcloud iam service-accounts list. Review the output section. is not right.
gcloud iam service-accounts list lists the service accounts, which are one type of IAM member, but it ignores other members that are not service accounts, e.g. users in a G Suite domain or Google groups.
Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status. is not right.
This allows us to review the roles but not the users.
Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles. is the right answer.
This is the only option that lets us view roles as well as users (members).
Ref: https://cloud.google.com/iam/docs/overview
A member can be a Google Account (for end-users), a service account (for apps and virtual machines), a Google group, or a G Suite or Cloud Identity domain that can access a resource. The identity of a member is an email address associated with a user, service account, or Google group; or a domain name associated with G Suite or Cloud Identity domains
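For completeness, although the Console is the answer here, the same members and roles can also be inspected from the command line:
gcloud projects get-iam-policy my-project
# or, flattened into a role/member table
gcloud projects get-iam-policy my-project \
--flatten="bindings[].members" \
--format="table(bindings.role, bindings.members)"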
Question 16 of 65
16. Question
Your company collects and stores CCTV footage videos in raw format in Google Cloud Storage. Within the first 30 days, the footage is processed regularly for detecting patterns such as threat/object/face detection and suspicious behavior detection. You want to minimize the cost of storing all the data in Google Cloud. How should you store the videos?
Correct
Footage is processed regularly within the first 30 days and is rarely used after that. So we need to store the videos for the first 30 days in a storage class that allows them to be retrieved for processing at no (or low) cost, and then transition the videos to cheaper storage after 30 days.
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to transition to Nearline Storage. is not right.
Transitioning the data to Nearline Storage is a good idea as Nearline Storage costs less than standard storage, is highly durable for storing infrequently accessed data and a better choice than Standard Storage in scenarios where slightly lower availability is an acceptable trade-off for lower at-rest storage costs. Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
However, we rarely need to access the data after 30 days, and there are storage classes cheaper than Nearline Storage, so it is not the most suitable option.
Ref: https://cloud.google.com/storage/pricing#storage-pricing
Use Google Cloud Regional Storage for the first 30 days, and then move videos to Google Persistent Disk. is not right.
Persistent disk pricing is almost double that of the Standard storage class in Cloud Storage. In addition, a persistent disk can only be accessed when attached to another service such as Compute Engine or GKE, making this option very expensive.
Ref: https://cloud.google.com/storage/pricing#storage-pricing
Ref: https://cloud.google.com/compute/disks-image-pricing#persistentdisk
Use Google Cloud Nearline Storage for the first 30 days, and use lifecycle rules to transition to Coldline Storage. is not right.
Nearline storage class is suitable for storing infrequently accessed data and has costs associated with retrieval. Since the footage is processed regularly within the first 30 days, data retrieval costs may far outweigh the savings made by using nearline storage over standard storage class.
Ref: https://cloud.google.com/storage/docs/storage-classes#nearline
Ref: https://cloud.google.com/storage/pricing#archival-pricing
Use Google Cloud Regional Storage for the first 30 days, and use lifecycle rules to transition to Coldline Storage. is the right answer.
We initially save the videos in Regional Storage (Standard), which has no retrieval charges, so we do not pay for accessing the data during the first 30 days, when the videos are accessed frequently; we only pay the standard storage costs. After 30 days, we transition the CCTV footage to Coldline Storage, a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs. Coldline Storage is also cheaper than Nearline Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#standard
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Question 17 of 65
17. Question
Your company has a Google Cloud Platform project that uses BigQuery for data warehousing. Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?
Correct
1. Create an IAM entry for each data scientists user account.
2. Assign the BigQuery dataViewer user role to the group. is not right.
dataViewer provides permissions to read the dataset's metadata, list tables in the dataset, and read data and metadata from the dataset's tables. But it does not provide permissions to run jobs, including queries, within the project.
Ref: https://cloud.google.com/bigquery/docs/access-control
1. Create a dedicated Google group in Cloud Identity.
2. Add each data scientists user account to the group.
3. Assign the BigQuery dataViewer user role to the group. is not right.
dataViewer provides permissions to read the dataset's metadata, list tables in the dataset, and read data and metadata from the dataset's tables. But it does not provide permissions to run jobs, including queries, within the project.
Ref: https://cloud.google.com/bigquery/docs/access-control
1. Create an IAM entry for each data scientists user account.
2. Assign the BigQuery jobUser role to the group. is not right.
jobUser is the right role. It provides permissions to run jobs, including queries, within the project. But given that our data science team changes frequently, we do not want to create and remove individual IAM entries every time the team changes. Instead, we should use groups so that provisioning and de-provisioning is as simple as adding the user to, or removing them from, the group. Google Groups are a convenient way to apply an access policy to a collection of users.
Ref: https://cloud.google.com/bigquery/docs/access-control
Ref: https://cloud.google.com/iam/docs/overview#google_group
1. Create a dedicated Google group in Cloud Identity.
2. Add each data scientists user account to the group.
3. Assign the BigQuery jobUser role to the group. is the right answer.
This is the only option that follows Google recommended practices and meets our requirements. jobUser is the right role. It provides permissions to run jobs, including queries, within the project.
We grant the group the necessary role so that whenever a user joins or leaves the team, access to run BigQuery jobs is granted or revoked simply by adding them to, or removing them from, the group. Google Groups are a convenient way to apply an access policy to a collection of users.
Ref: https://cloud.google.com/bigquery/docs/access-control
Ref: https://cloud.google.com/iam/docs/overview#google_group
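For illustration, once the group exists in Cloud Identity, granting the role is a single project-level binding; the project ID and group address below are hypothetical.
gcloud projects add-iam-policy-binding my-gcp-project \
--member="group:data-science-team@example.com" \
--role="roles/bigquery.jobUser"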
Question 18 of 65
18. Question
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?
Correct
The key to answering this question is unstructured data.
Upload the data to BigQuery using the bq command line tool. is not right.
The bq load command is used to load data into BigQuery from a local data source, i.e. a local file, but the data has to be in a structured format.
bq --location=LOCATION load \
--source_format=FORMAT \
PROJECT_ID:DATASET.TABLE \
PATH_TO_SOURCE \
SCHEMA
where
schema: a valid schema. The schema can be a local JSON file, or it can be typed inline as part of the command. You can also use the --autodetect flag instead of supplying a schema definition.
Ref: https://cloud.google.com/bigquery/docs/loading-data-local#bq
Upload the data into Cloud SQL using the import function in the console. is not right.
Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. As it is a relational database, it is designed for structured data and is not a fit for unstructured data.
Ref: https://cloud.google.com/sql
Upload the data into Cloud Spanner using the import function in the console. is not right.
Cloud Spanner is the first scalable, enterprise-grade, globally distributed, and strongly consistent database service built for the cloud, designed to combine the benefits of relational database structure with non-relational horizontal scale. Although Google describes Cloud Spanner as offering the best of the relational and non-relational worlds, it is still a relational, structured-data service and is not a fit for unstructured data.
Ref: https://cloud.google.com/spanner
Upload the data to Cloud Storage using the gsutil command line tool. is the right answer.
Cloud Storage imposes no such restrictions; you can store large quantities of unstructured data in different file formats. Cloud Storage provides globally unified, scalable, and highly durable object storage for developers and enterprises. In addition, Dataflow can query Cloud Storage filesets.
Ref: https://cloud.google.com/dataflow/docs/guides/sql/data-sources-destinations#querying-gcs-filesets
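For illustration, the upload itself is a single gsutil command; the local path and bucket name below are hypothetical, and the -m flag parallelizes the copy when there are many files.
gsutil -m cp -r ./unstructured-data gs://my-dataflow-staging-bucket/raw/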
Question 19 of 65
19. Question
Your company has a number of GCP projects that are managed by the respective project teams. The combined expenditure of all GCP projects has exceeded your operational expenditure budget. At a review meeting, it was agreed that your finance team should be able to set budgets and view the current charges for all projects in the organization, but not view the project resources, and that your developers should be able to see the Google Cloud Platform billing charges for only their own projects as well as view resources within the project. You want to follow Google-recommended practices to set up IAM roles and permissions. What should you do?
Add the finance team to the default IAM Owner role. Add the developers to a custom role that allows them to see their own spend only. is not right.
Granting your finance team the primitive Owner role gives them permissions to manage roles and permissions for a project, which they could subsequently use to assign themselves permissions to view or edit resources in all projects. This is against our requirements. Also, while you can write a custom role that lets developers view their own project spend, they would still be missing permissions to view project resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the developers and finance managers to the Viewer role for the Project. is not right.
Granting your finance team the Project Viewer role lets them view resources in all projects and does not let them set budgets; both are against our requirements.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the finance team to the Viewer role on all projects. Add the developers to the Security Reviewer role for each of the billing accounts. is not right.
Granting your finance team the Project Viewer role lets them view resources in all projects, which is against our requirements. Also, the Security Reviewer role enables the developers to view custom roles but does not let them view the projects' costs or project resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
Add the finance team to the Billing Account Administrator role for each of the billing accounts that they need to manage. Add the developers to the Viewer role for the Project. is the right answer.
The Billing Account Administrator role is an owner role for a billing account. It provides permissions to manage payment instruments, configure billing exports, view cost information, set budgets, link and unlink projects, and manage other user roles on the billing account.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
The Project Viewer role provides permissions for read-only actions that do not affect state, such as viewing (but not modifying) existing resources or data, including viewing the billing charges for the project.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
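As a sketch of how these grants could be scripted with gcloud (the billing account ID, project ID, and group email addresses below are placeholder assumptions):
# Finance team: Billing Account Administrator on the billing account.
gcloud beta billing accounts add-iam-policy-binding 0A1B2C-3D4E5F-6A7B8C \
    --member="group:finance-team@example.com" \
    --role="roles/billing.admin"
# Developers: Viewer on their own project.
gcloud projects add-iam-policy-binding example-dev-project \
    --member="group:dev-team@example.com" \
    --role="roles/viewer"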
Question 20 of 65
20. Question
Your company has a third-party single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your company has users in Cloud Identity and requires them to authenticate using your company's SSO provider. What should you do?
In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps. is not right.
The question states that you want to use the company's existing identity provider for SSO, not Google. Moreover, your users are in Cloud Identity and not in a G Suite domain, so they don't have G Suite Gmail accounts and therefore cannot sign in through Google.
Ref: https://cloud.google.com/identity/solutions/enable-sso
Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps. is not right.
OAuth 2.0 credentials are needed for an OAuth 2.0 flow, not a SAML flow. See https://oauth.net/2/ for more information about OAuth 2.0, which is a popular protocol for SSO: when you sign in to a third-party website using Facebook, Twitter, or Google, it uses OAuth 2.0 behind the scenes.
Ref: https://cloud.google.com/identity/solutions/enable-sso
Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications. is not right.
As above, OAuth 2.0 credentials are needed for an OAuth 2.0 flow, not a SAML flow.
Ref: https://cloud.google.com/identity/solutions/enable-sso
In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider. is the right answer.
This is the only possible option. You configure applications (service providers) to accept SAML assertions from the company's existing identity provider, and users in Cloud Identity can sign in to various applications through the third-party single sign-on (SSO) identity provider. It is important to note that user authentication occurs in the third-party IdP, so the absence of a Gmail login is not an issue for signing in.
Ref: https://cloud.google.com/identity/solutions/enable-sso
If you have a third-party IdP, you can still configure SSO for third-party apps in the Cloud Identity catalog. User authentication occurs in the third-party IdP, and Cloud Identity manages the cloud apps.
To use Cloud Identity for SSO, your users need Cloud Identity accounts. They sign in through your third-party IdP or using a password on their Cloud Identity accounts.
Question 21 of 65
21. Question
Your company has an App Engine application that needs to store stateful data in a proper storage service. Your data is non-relational data. You do not expect the database size to grow beyond 10 GB and you need to have the ability to scale down to zero to avoid unnecessary costs. Which storage service should you use?
Cloud SQL. is not right.
Cloud SQL is not suitable for non-relational data. Cloud SQL is a fully managed database service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud Platform.
Ref: https://cloud.google.com/sql/docs
Cloud Dataproc. is not right.
Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simple, cost-efficient way. It is not a database.
Ref: https://cloud.google.com/dataproc
Cloud Bigtable. is not right.
Bigtable is a petabyte-scale, massively scalable, fully managed NoSQL database service for large analytical and operational workloads. Cloud Bigtable is overkill for our database, which is just 10 GB. Also, Cloud Bigtable can't be scaled down to zero, as there is always a cost for the nodes and SSD/HDD storage.
Ref: https://cloud.google.com/bigtable
Cloud Datastore. is the right answer.
Cloud Datastore is a highly scalable NoSQL database. Cloud Datastore scales seamlessly and automatically with your data, allowing applications to maintain high performance as they receive more traffic, and it automatically scales back down when traffic reduces.
Ref: https://cloud.google.com/datastore/
Question 22 of 65
22. Question
Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice and you would like to do this as soon as possible. What should you do?
Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset. is not right.
Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such as usage and cost estimate data) automatically throughout the day to a BigQuery dataset that you specify. You can then access your Cloud Billing data from BigQuery for detailed analysis, or use a tool like Google Data Studio to visualize it. Exporting billing data from both GCP organizations into a single BigQuery dataset gives you a single view of the billing information, but it doesn't result in a consolidated invoice, which is our requirement.
Migrate the acquired company's projects into your company's GCP organization. Link the migrated projects to your company's billing account. is not right.
While the result is what we need, migrating projects from the acquired company into your company's GCP organization is not straightforward and takes a lot of time. Our requirements state we would like to do this as soon as possible, but this option isn't quick.
Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account. is not right.
While the result is what we need, migrating projects from both organizations into a single new organization is not straightforward and takes a lot of time. Our requirements state we would like to do this as soon as possible, but this option isn't quick.
Link the acquired company's projects to your company's billing account. is the right answer.
This option is the quickest way to achieve our end requirement of having all GCP billing on a single invoice. Linking the acquired company's projects to your company's billing account can be done very quickly and can be scripted using gcloud.
Ref: https://cloud.google.com/sdk/gcloud/reference/beta/billing/projects/link
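As a rough sketch (the project IDs and billing account ID below are placeholders), the linking could be scripted as follows:
# Link a single acquired project to your company's billing account.
gcloud beta billing projects link acquired-project-1 --billing-account=0A1B2C-3D4E5F-6A7B8C
# Repeat for every project listed in a file of acquired project IDs.
while read project; do
  gcloud beta billing projects link "$project" --billing-account=0A1B2C-3D4E5F-6A7B8C
done < acquired-projects.txt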
Question 23 of 65
23. Question
Your company has chosen to go serverless to enable developers to focus on writing code without worrying about infrastructure. You have been asked to identify a GCP Serverless service that does not limit your developers to specific runtimes. In addition, some of the applications need WebSockets support. What should you suggest?
App Engine Standard. is not right.
Google App Engine Standard offers a limited number of runtimes (Java, Node.js, Python, Go, PHP, and Ruby) and at the same time doesn't offer support for WebSockets.
Ref: https://cloud.google.com/appengine/docs/standard
Cloud Functions. is not right.
Like Google App Engine Standard, Cloud Functions offers a limited number of runtimes (Node.js, Python, Go, and Java) and doesn't offer support for WebSockets.
Ref: https://cloud.google.com/blog/products/application-development/your-favorite-runtimes-now-generally-available-on-cloud-functions
Cloud Run. is not right.
Cloud Run lets you run stateless containers in a fully managed environment. As this is container-based, we are not limited to specific runtimes; developers can write code using their favorite languages (Go, Python, Java, C#, PHP, Ruby, Node.js, Shell, and others). However, Cloud Run does not support WebSockets.
Ref: https://cloud.google.com/run
Cloud Run for Anthos. is the right answer.
Cloud Run for Anthos leverages Kubernetes and serverless together by using Cloud Run integrated with Anthos. As this is container-based, we are not limited to specific runtimes; developers can write code using their favorite languages (Go, Python, Java, C#, PHP, Ruby, Node.js, Shell, and others). Cloud Run for Anthos is the only serverless GCP offering that supports WebSockets.
Ref: https://cloud.google.com/serverless-options
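A minimal deployment sketch for Cloud Run for Anthos (the service name, image, cluster name, and zone below are assumptions):
# Deploy a container image to Cloud Run for Anthos on an existing GKE cluster.
gcloud run deploy websocket-service \
    --image=gcr.io/example-project/websocket-app \
    --platform=gke \
    --cluster=example-cluster \
    --cluster-location=us-central1-a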
Question 24 of 65
24. Question
Your company has migrated most of the data center VMs to Google Compute Engine. The remaining VMs in the data center host legacy applications that are due to be decommissioned soon, and your company has decided to retain them in the data center. Due to a change in the business operational model, you need to introduce changes to one of the legacy applications to read files from Google Cloud Storage. However, your data center does not have access to the internet, and your company doesn't want to invest in setting up internet access as the data center is due to be turned off soon. Your data center has a Partner Interconnect to GCP. You wish to route traffic from your data center to Google Cloud Storage through the Partner Interconnect. What should you do?
While Google APIs are accessible on *.googleapis.com, to restrict Private Google Access within a service perimeter to only VPC Service Controls-supported Google APIs and services, hosts must send their requests to the restricted.googleapis.com domain name instead of *.googleapis.com. The restricted.googleapis.com domain resolves to a VIP (virtual IP address) range 199.36.153.4/30. This IP address range is not announced to the Internet. If you require access to other Google APIs and services that aren't supported by VPC Service Controls, you can use 199.36.153.8/30 (private.googleapis.com). However, Google recommends that you use restricted.googleapis.com, which integrates with VPC Service Controls and mitigates data exfiltration risks. In either case, VPC Service Controls service perimeters are always enforced on APIs and services that support VPC Service Controls.
Ref: https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity
This rules out the two options that map storage.cloud.google.com to restricted.googleapis.com.
The main differences between the remaining two options are
1. Static route in the VPC network.
2. Public/Private zone.
According to Google's guide on setting up private connectivity, in order to configure a route to restricted.googleapis.com within the VPC, we need to create a static route whose destination is 199.36.153.4/30 and whose next hop is the default internet gateway.
So, the right answer is:
1. In the on-premises DNS configuration, map *.googleapis.com to restricted.googleapis.com, which resolves to 199.36.153.4/30.
2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the Cloud VPN tunnel.
3. Add a custom static route to the VPC network to direct traffic with the destination 199.36.153.4/30 to the default internet gateway.
4. Create a Cloud DNS managed private zone for *.googleapis.com that maps to 199.36.153.4/30 and authorize the zone for use by the VPC network.
In the following example, the on-premises network is connected to a VPC network through a Cloud VPN tunnel. Traffic from on-premises hosts to Google APIs travels through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent through a route that uses the default internet gateway as its next hop. The next hop allows traffic to leave the VPC network and be delivered to restricted.googleapis.com (199.36.153.4/30).
- The on-premises DNS configuration maps *.googleapis.com requests to restricted.googleapis.com, which resolves to 199.36.153.4/30.
- Cloud Router has been configured to advertise the 199.36.153.4/30 IP address range through the Cloud VPN tunnel by using a custom route advertisement. Traffic going to Google APIs is routed through the tunnel to the VPC network.
- A custom static route was added to the VPC network that directs traffic with the destination 199.36.153.4/30 to the default internet gateway (as the next hop). Google then routes traffic to the appropriate API or service.
If you create a Cloud DNS managed private zone for *.googleapis.com that maps to 199.36.153.4/30 and authorize that zone for use by your VPC network, requests to anything in the googleapis.com domain are sent to the IP addresses used by restricted.googleapis.com.
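The static route and private DNS zone from the steps above could be created with gcloud roughly as follows (the network name, zone name, and record TTL are assumptions):
# Custom static route sending the restricted VIP range to the default internet gateway.
gcloud compute routes create restricted-googleapis \
    --network=onprem-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway
# Private zone so googleapis.com names resolve to the restricted VIP inside the VPC.
gcloud dns managed-zones create googleapis-private \
    --visibility=private \
    --dns-name=googleapis.com. \
    --networks=onprem-vpc \
    --description="Map googleapis.com to restricted.googleapis.com"
gcloud dns record-sets transaction start --zone=googleapis-private
gcloud dns record-sets transaction add --zone=googleapis-private \
    --name=restricted.googleapis.com. --type=A --ttl=300 \
    199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7
gcloud dns record-sets transaction add --zone=googleapis-private \
    --name="*.googleapis.com." --type=CNAME --ttl=300 restricted.googleapis.com.
gcloud dns record-sets transaction execute --zone=googleapis-private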
Question 25 of 65
25. Question
Your company hosts a number of applications in Google Cloud and requires that log messages from all applications be archived for 10 years to comply with local regulatory requirements. Which approach should you use?
Grant the security team access to the logs in each Project. is not right.
Granting the security team access to the logs in each Project doesn't guarantee log retention. If the security team has to come up with a manual process to copy all the log files into another archival store, the ongoing operational costs can be huge.
1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver. is not right.
In Stackdriver, application logs are retained by default for just 30 days after which they are purged.
Ref: https://cloud.google.com/logging/quotas
While it is possible to configure a custom retention period of 10 years, storing logs in Stackdriver is very expensive compared to Cloud Storage. Stackdriver charges $0.01 per GB per month, whereas Cloud Storage Coldline Storage costs $0.007 per GB per month (30% cheaper) and Cloud Storage Archive Storage costs $0.004 per GB per month (60% cheaper than Stackdriver).
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing
The difference between the remaining two options is whether we store the logs in BigQuery or Google Cloud Storage.
1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver
3. Export logs to BigQuery. is not right.
While enabling the Stackdriver Logging API and having the applications send logs to Stackdriver is a good start, exporting and storing logs in BigQuery is fairly expensive. In BigQuery, active storage costs $0.02 per GB per month and long-term storage costs $0.01 per GB per month. In comparison, Google Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing
1. Enable Stackdriver Logging API
2. Configure web applications to send logs to Stackdriver
3. Export logs to Google Cloud Storage. is the right answer.
Google Cloud Storage offers several storage classes such as Nearline Storage ($0.01 per GB per month), Coldline Storage ($0.007 per GB per month), and Archive Storage ($0.004 per GB per month), which are significantly cheaper than the storage options in the alternatives above.
Ref: https://cloud.google.com/storage/pricing
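A minimal sketch of the export step (the bucket name, storage class, and log filter below are assumptions):
# Create an archival bucket using the Archive storage class.
gsutil mb -c archive -l us-central1 gs://example-log-archive
# Route matching log entries to the bucket for long-term retention.
gcloud logging sinks create archive-sink \
    storage.googleapis.com/example-log-archive \
    --log-filter='resource.type="gae_app" OR resource.type="gce_instance"'
Note that the sink's writer identity also needs permission to write objects to the bucket.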
Question 26 of 65
26. Question
Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you notice that query costs for BigQuery are very high, and you need to control costs. Which two methods should you use? (Choose two.)
Once your data is loaded into BigQuery, you are charged for storing it. Storage pricing is based on the amount of data stored in your tables when it is uncompressed. BigQuery doesn't charge for query execution based on the output of the query (i.e. bytes returned) but on the number of bytes processed (also referred to as bytes read or bytes scanned) in order to arrive at the output of the query. You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Google Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. You are charged for the bytes scanned even if your query itself doesn't return any data.
Ref: https://cloud.google.com/bigquery/pricing
Split the users from business units to multiple projects. is not right.
The number of bytes scanned is not expected to go down by splitting the users into multiple projects, so this wouldn't reduce or control costs.
Ref: https://cloud.google.com/bigquery/pricing
Split your BigQuery data warehouse into multiple data warehouses for each business unit. is not right.
The number of bytes scanned is not expected to go down by splitting the BigQuery warehouse into multiple warehouses, so this wouldn't reduce or control costs either.
Ref: https://cloud.google.com/bigquery/pricing
Create separate copies of your BigQuery data warehouse for each business unit. is not right.
Creating separate copies of the BigQuery data warehouse for each business unit is going to increase your costs. Not only is this not expected to reduce the bytes scanned, it is also going to increase the storage costs, as we are now storing multiple copies of the same data.
Ref: https://cloud.google.com/bigquery/pricing
Apply a user- or project-level custom query quota for BigQuery data warehouse. is the right answer.
BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. You can set various limits to control costs, such as Concurrent rate limit for interactive queries, Concurrent rate limit for interactive queries against Bigtable external data sources, Concurrent rate limit for legacy SQL queries that contain UDFs, Cross-region federated querying, Daily query size limit, etc.
Ref: https://cloud.google.com/bigquery/quotas
Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each Project. is the right answer.
This pricing option is best for customers who desire cost predictability. Flat-rate customers purchase dedicated resources for query processing and are not charged for individual queries. BigQuery offers flat-rate pricing for customers who prefer a stable cost for queries rather than paying the on-demand price per TB of data processed. You can choose to use flat-rate pricing using BigQuery Reservations. When you enroll in flat-rate pricing, you purchase slot commitments: dedicated query processing capacity, measured in BigQuery slots. Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up slots, and you will not be charged additional fees.
Ref: https://cloud.google.com/bigquery/pricing#flat_rate_pricing
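In addition to quotas and flat-rate pricing, individual queries can be capped; a minimal sketch with the bq tool (the byte limit, dataset, and table names are placeholders):
# Fail any query that would process more than roughly 1 GB.
bq query --use_legacy_sql=false --maximum_bytes_billed=1000000000 \
    'SELECT order_id, total FROM example_dataset.orders WHERE order_date = "2020-01-01"'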
Incorrect
Once your data is loaded into BigQuery, you are charged for storing it. Storage pricing is based on the amount of data stored in your tables when it is uncompressed. BigQuery doesnt charge for the query execution based on the output of the query (i.e. bytes returned) but on the number of bytes processed (also referred to as bytes read or bytes scanned) in order to arrive at the output of the query. You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Google Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. You are charged for the bytes scanned even if your query itself doesnt return any data.
Ref: https://cloud.google.com/bigquery/pricing
Split the users from business units to multiple projects. is not right.
The bytes scanned is not expected to go down by splitting the users into multiple projects so this wouldnt reduce/control the costs.
Ref: https://cloud.google.com/bigquery/pricing
Split your BigQuery data warehouse into multiple data warehouses for each business unit. is not right.
The bytes scanned is not expected to go down by splitting the BigQuery warehouse into two so this wouldnt reduce/control the costs either.
Ref: https://cloud.google.com/bigquery/pricing
Create separate copies of your BigQuery data warehouse for each business unit. is not right.
Creating separate copies of the BigQuery data warehouse for each business unit is going to increase your costs. Not only is this expected to reduce the bytes scanned, but this is also going to increase the storage costs as we are now storing double the amount of data.
Ref: https://cloud.google.com/bigquery/pricing
Apply a user- or project-level custom query quota for BigQuery data warehouse. is the right answer.
BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. You can set various limits to control costs such as Concurrent rate limit for interactive queries, Concurrent rate limit for interactive queries against Bigtable external data sources, Concurrent rate limit for legacy SQL queries that contain UDFs, Cross-region federated querying, Daily query size limit, etc. https://cloud.google.com/bigquery/quotas
Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each Project. is the right answer.
This pricing option is best for customers who desire cost predictability. Flat-rate customers purchase dedicated resources for query processing and are not charged for individual queries. BigQuery offers flat-rate pricing for customers who prefer a stable cost for queries rather than paying the on-demand price per TB of data processed. You can choose to use flat-rate pricing using BigQuery Reservations. When you enroll in flat-rate pricing, you purchase slot commitments dedicated query processing capacity, measured in BigQuery slots. Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up slots, and you will not be charged additional fees.
Ref: https://cloud.google.com/bigquery/pricing#flat_rate_pricing
Unattempted
Once your data is loaded into BigQuery, you are charged for storing it. Storage pricing is based on the amount of data stored in your tables when it is uncompressed. BigQuery doesnt charge for the query execution based on the output of the query (i.e. bytes returned) but on the number of bytes processed (also referred to as bytes read or bytes scanned) in order to arrive at the output of the query. You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Google Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. You are charged for the bytes scanned even if your query itself doesnt return any data.
Ref: https://cloud.google.com/bigquery/pricing
Split the users from business units to multiple projects. is not right.
The bytes scanned is not expected to go down by splitting the users into multiple projects so this wouldnt reduce/control the costs.
Ref: https://cloud.google.com/bigquery/pricing
Split your BigQuery data warehouse into multiple data warehouses for each business unit. is not right.
The bytes scanned is not expected to go down by splitting the BigQuery warehouse into two so this wouldnt reduce/control the costs either.
Ref: https://cloud.google.com/bigquery/pricing
Create separate copies of your BigQuery data warehouse for each business unit. is not right.
Creating separate copies of the BigQuery data warehouse for each business unit is going to increase your costs. Not only is this expected to reduce the bytes scanned, but this is also going to increase the storage costs as we are now storing double the amount of data.
Ref: https://cloud.google.com/bigquery/pricing
Apply a user- or project-level custom query quota for BigQuery data warehouse. is the right answer.
BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. You can set various limits to control costs such as Concurrent rate limit for interactive queries, Concurrent rate limit for interactive queries against Bigtable external data sources, Concurrent rate limit for legacy SQL queries that contain UDFs, Cross-region federated querying, Daily query size limit, etc. https://cloud.google.com/bigquery/quotas
Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each Project. is the right answer.
This pricing option is best for customers who desire cost predictability. Flat-rate customers purchase dedicated resources for query processing and are not charged for individual queries. BigQuery offers flat-rate pricing for customers who prefer a stable cost for queries rather than paying the on-demand price per TB of data processed. You can choose to use flat-rate pricing using BigQuery Reservations. When you enroll in flat-rate pricing, you purchase slot commitments dedicated query processing capacity, measured in BigQuery slots. Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up slots, and you will not be charged additional fees.
Ref: https://cloud.google.com/bigquery/pricing#flat_rate_pricing
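To make the flat-rate answer concrete, the sketch below shows how slots might be purchased, pooled into a reservation, and assigned to a business unit's project using the bq command-line tool. The project IDs, slot count, and reservation name are placeholders, and the exact bq flags may vary by version, so treat this as a rough illustration rather than a definitive recipe.
# Purchase a slot commitment in a central administration project (placeholder names)
bq mk --project_id=admin-project --location=US --capacity_commitment --plan=FLEX --slots=500
# Pool the committed slots into a reservation
bq mk --project_id=admin-project --location=US --reservation --slots=500 bu_reservation
# Assign one business unit's project to that reservation for query jobs
bq mk --project_id=admin-project --location=US --reservation_assignment --reservation_id=admin-project:US.bu_reservation --job_type=QUERY --assignee_type=PROJECT --assignee_id=marketing-project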
Question 27 of 65
27. Question
Your company is moving all corporate applications to Google Cloud Platform. The security team wants detailed visibility of all GCP projects in the organization. You provision the Google Cloud Resource Manager and set up yourself as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
Correct
The security team needs detailed visibility of all GCP projects in the organization so they should be able to view all the projects in the organization as well as view all resources within these projects.
Grant roles/resourcemanager.organizationViewer and roles/owner. is not right.
roles/resourcemanager.organizationViewer role provides permissions to see the organization in the Cloud Console without having access to view all resources in the organization.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationAdmin and roles/browser. is not right.
roles/resourcemanager.organizationAdmin provides access to administer all resources belonging to the organization. This doesn't follow the least-privilege principle: our security team needs detailed visibility, i.e. read-only access, but should not be able to administer resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/owner, roles/networkmanagement.admin. is not right.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
roles/networkmanagement.admin provides full access to Cloud Network Management resources.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationViewer and roles/viewer. is the right answer.
roles/viewer provides permissions to view existing resources or data.
roles/resourcemanager.organizationViewer provides access to view an organization.
With these two roles, the security team can view the organization, including all of its projects and folders, as well as all resources within those projects.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Incorrect
The security team needs detailed visibility of all GCP projects in the organization so they should be able to view all the projects in the organization as well as view all resources within these projects.
Grant roles/resourcemanager.organizationViewer and roles/owner. is not right.
roles/resourcemanager.organizationViewer role provides permissions to see the organization in the Cloud Console without having access to view all resources in the organization.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationAdmin and roles/browser. is not right.
roles/resourcemanager.organizationAdmin provides access to administer all resources belonging to the organization. This doesn't follow the least-privilege principle: our security team needs detailed visibility, i.e. read-only access, but should not be able to administer resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/owner, roles/networkmanagement.admin. is not right.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
roles/networkmanagement.admin provides full access to Cloud Network Management resources.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationViewer and roles/viewer. is the right answer.
roles/viewer provides permissions to view existing resources or data.
roles/resourcemanager.organizationViewer provides access to view an organization.
With these two roles, the security team can view the organization, including all of its projects and folders, as well as all resources within those projects.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Unattempted
The security team needs detailed visibility of all GCP projects in the organization so they should be able to view all the projects in the organization as well as view all resources within these projects.
Grant roles/resourcemanager.organizationViewer and roles/owner. is not right.
roles/resourcemanager.organizationViewer role provides permissions to see the organization in the Cloud Console without having access to view all resources in the organization.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationAdmin and roles/browser. is not right.
roles/resourcemanager.organizationAdmin provides access to administer all resources belonging to the organization. This doesn't follow the least-privilege principle: our security team needs detailed visibility, i.e. read-only access, but should not be able to administer resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/owner, roles/networkmanagement.admin. is not right.
roles/owner provides permissions to manage roles and permissions for a project and all resources within the project and set up billing for a project.
roles/networkmanagement.admin provides full access to Cloud Network Management resources.
Neither of these roles gives the security team detailed visibility of all projects in the organization.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
Grant roles/resourcemanager.organizationViewer and roles/viewer. is the right answer.
roles/viewer provides permissions to view existing resources or data.
roles/resourcemanager.organizationViewer provides access to view an organization.
With these two roles, the security team can view the organization, including all of its projects and folders, as well as all resources within those projects.
Ref: https://cloud.google.com/resource-manager/docs/access-control-org
Ref: https://cloud.google.com/iam/docs/understanding-roles#organization-policy-roles
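For illustration, both roles can be granted to the security team at the organization level with gcloud; the organization ID and group address below are placeholders:
gcloud organizations add-iam-policy-binding 123456789012 --member="group:security-team@example.com" --role="roles/resourcemanager.organizationViewer"
gcloud organizations add-iam-policy-binding 123456789012 --member="group:security-team@example.com" --role="roles/viewer"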
Question 28 of 65
28. Question
Your company is moving from an on-premises environment to Google Cloud Platform (GCP). You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to GCP quickly and with minimal support effort. What should you do?
Correct
1. Build an instruction guide to install Cassandra on GCP.
2. Make the instruction guide accessible to your developers. is not right.
There is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to write an installation guide and ask your developers to follow it.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Use the snapshot to create instances for your developers. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Upload the snapshot to Cloud Storage and make it accessible to your developers.
3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Advise your developers to go to Cloud Marketplace.
2. Ask the developers to launch a Cassandra image for their development work. is the right answer.
You can deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. Not only do you get a unified bill for all GCP services, but your developers can also create Cassandra clusters on Google Cloud in minutes and build applications with Cassandra as a database-as-a-service, without the operational overhead of managing Cassandra. Each deployment gets its own separate set of VM instances (at the time of writing, 3 VM instances, each with 4 vCPUs, 26 GB of memory (n1-highmem-4), and a 10 GB boot disk), which are isolated from the VM instances of other Cassandra deployments.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
Incorrect
1. Build an instruction guide to install Cassandra on GCP.
2. Make the instruction guide accessible to your developers. is not right.
There is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to write an installation guide and ask your developers to follow it.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Use the snapshot to create instances for your developers. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Upload the snapshot to Cloud Storage and make it accessible to your developers.
3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Advise your developers to go to Cloud Marketplace.
2. Ask the developers to launch a Cassandra image for their development work. is the right answer.
You can deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. Not only do you get a unified bill for all GCP services, but your developers can also create Cassandra clusters on Google Cloud in minutes and build applications with Cassandra as a database-as-a-service, without the operational overhead of managing Cassandra. Each deployment gets its own separate set of VM instances (at the time of writing, 3 VM instances, each with 4 vCPUs, 26 GB of memory (n1-highmem-4), and a 10 GB boot disk), which are isolated from the VM instances of other Cassandra deployments.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
Unattempted
1. Build an instruction guide to install Cassandra on GCP.
2. Make the instruction guide accessible to your developers. is not right.
There is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to write an installation guide and ask your developers to follow it.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Use the snapshot to create instances for your developers. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Build a Cassandra Compute Engine instance and take a snapshot of it.
2. Upload the snapshot to Cloud Storage and make it accessible to your developers.
3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves. is not right.
Like above, there is a very simple and straightforward way to deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. You don't need to do this in a convoluted way.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
1. Advise your developers to go to Cloud Marketplace.
2. Ask the developers to launch a Cassandra image for their development work. is the right answer.
You can deploy Cassandra as a Service, called Astra, on the Google Cloud Marketplace. Not only do you get a unified bill for all GCP services, but your developers can also create Cassandra clusters on Google Cloud in minutes and build applications with Cassandra as a database-as-a-service, without the operational overhead of managing Cassandra. Each deployment gets its own separate set of VM instances (at the time of writing, 3 VM instances, each with 4 vCPUs, 26 GB of memory (n1-highmem-4), and a 10 GB boot disk), which are isolated from the VM instances of other Cassandra deployments.
Ref: https://cloud.google.com/blog/products/databases/open-source-cassandra-now-managed-on-google-cloud
Ref: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/cassandra?filter=price:free&filter=category:database&id=25ca0967-cd8e-419e-b554-fe32e87f04be&pli=1
Question 29 of 65
29. Question
Your Company is planning to migrate all Java web applications to Google App Engine. However, you still want to continue using your on-premise database. How can you set up the app engine to communicate with your on-premise database while minimizing effort?
Correct
Setup the application using App Engine Standard environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud VPN to connect to an on-premise database. is not right.
Cloud VPN is indeed what you need to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
However, since we don't have a requirement to run Docker containers and App Engine Standard already supports the requirements of our existing web applications, we should avoid using App Engine Flexible. Converting to a container model involves effort, and we want to minimize effort.
Setup the application using App Engine Standard environment with Cloud VPN to connect to an on-premise database. is the right answer.
You need Cloud VPN to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
And since we don't have a requirement to run Docker containers and App Engine Standard's Java runtime already supports our existing web applications, we should use App Engine Standard.
Incorrect
Setup the application using App Engine Standard environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud VPN to connect to an on-premise database. is not right.
Cloud VPN is indeed what you need to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
However, since we don't have a requirement to run Docker containers and App Engine Standard already supports the requirements of our existing web applications, we should avoid using App Engine Flexible. Converting to a container model involves effort, and we want to minimize effort.
Setup the application using App Engine Standard environment with Cloud VPN to connect to an on-premise database. is the right answer.
You need Cloud VPN to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
And since we don't have a requirement to run Docker containers and App Engine Standard's Java runtime already supports our existing web applications, we should use App Engine Standard.
Unattempted
Setup the application using App Engine Standard environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud Router to connect to an on-premise database. is not right.
Cloud Router by itself is not sufficient to connect a VPC to an on-premise network. Cloud Router enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and on-premises networks by using Border Gateway Protocol (BGP).
Ref: https://cloud.google.com/router
Setup the application using App Engine Flexible environment with Cloud VPN to connect to an on-premise database. is not right.
Cloud VPN is indeed what you need to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
However, since we don't have a requirement to run Docker containers and App Engine Standard already supports the requirements of our existing web applications, we should avoid using App Engine Flexible. Converting to a container model involves effort, and we want to minimize effort.
Setup the application using App Engine Standard environment with Cloud VPN to connect to an on-premise database. is the right answer.
You need Cloud VPN to connect a VPC to an on-premise network.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
And since we don't have a requirement to run Docker containers and App Engine Standard's Java runtime already supports our existing web applications, we should use App Engine Standard.
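As a rough sketch of how the winning option is typically wired up: App Engine Standard reaches resources that sit behind Cloud VPN through a Serverless VPC Access connector, which is then referenced from the service's app.yaml via the vpc_access_connector element. The connector name, region, and IP range below are placeholders, and the Cloud VPN tunnel to the on-premise network is assumed to be configured separately:
# Create a Serverless VPC Access connector in the same region as the App Engine app (placeholder names)
gcloud compute networks vpc-access connectors create my-connector --region=us-central1 --network=default --range=10.8.0.0/28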
Question 30 of 65
30. Question
Your company owns a web application that lets users post travel stories. You began noticing errors in logs for a specific Deployment. The deployment is responsible for translating a post from one language to another. You've narrowed the issue down to a specific container named msg-translator-22 that is throwing the errors. You are unable to reproduce the error in any other environment, and none of the other containers serving the deployment have this issue. You would like to connect to this container to figure out the root cause. What steps would allow you to run commands against msg-translator-22?
Correct
Use the kubectl run command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl exec -it /bin/bash command to run a shell on that container. is not right.
While kubectl exec is used to execute a command in a container, the command above doesn't quite work because we haven't passed it the name of the container.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Use the kubectl exec -it msg-translator-22 /bin/bash command to run a shell on that container. is the right answer.
kubectl exec is used to execute a command in a container. We pass the container name msg-translator-22 so kubectl exec knows which container to connect to. And we pass the command /bin/bash to it, so it starts a shell on the container and we can then run custom commands and identify the root cause of the issue.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Incorrect
Use the kubectl run command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl exec -it /bin/bash command to run a shell on that container. is not right.
While kubectl exec is used to execute a command in a container, the command above doesn't quite work because we haven't passed it the name of the container.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Use the kubectl exec -it msg-translator-22 /bin/bash command to run a shell on that container. is the right answer.
kubectl exec is used to execute a command in a container. We pass the container name msg-translator-22 so kubectl exec knows which container to connect to. And we pass the command /bin/bash to it, so it starts a shell on the container and we can then run custom commands and identify the root cause of the issue.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Unattempted
Use the kubectl run command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage the created container(s). It is not possible to use kubectl run to connect to an existing container. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl exec -it /bin/bash command to run a shell on that container. is not right.
While kubectl exec is used to execute a command in a container, the command above doesn't quite work because we haven't passed it the name of the container.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Use the kubectl exec -it msg-translator-22 /bin/bash command to run a shell on that container. is the right answer.
kubectl exec is used to execute a command in a container. We pass the container name msg-translator-22 so kubectl exec knows which container to connect to. And we pass the command /bin/bash to it, so it starts a shell on the container and we can then run custom commands and identify the root cause of the issue.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
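For reference, the full command looks roughly like the line below; newer kubectl versions expect a -- separator before the command, and -c [CONTAINER_NAME] can be added if the pod runs more than one container:
kubectl exec -it msg-translator-22 -- /bin/bash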
Question 31 of 65
31. Question
Your company plans to store sensitive PII data in a cloud storage bucket. Your compliance department doesn't like encrypting sensitive PII data with Google-managed keys and has asked you to ensure the new objects uploaded to this bucket are encrypted by customer-managed encryption keys. What should you do? (Select Three)
Correct
In the bucket advanced settings, select the Customer-supplied key and then select a Cloud KMS encryption key. is not right.
The customer-supplied key is not an option when selecting the encryption method in the console.
Use gsutil with encryption-key=[ENCRYPTION_KEY] when uploading objects to the bucket. is not right.
gsutil doesn't accept an encryption-key flag. gsutil can be set up to use an encryption key by modifying the boto configuration or by specifying a top-level -o flag, but neither of these is included in this option.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys
In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
Our compliance department wants us to use customer-managed encryption keys. We can select the Customer-managed option and provide a Cloud KMS encryption key to encrypt objects with the customer-managed key. This fits our requirements.
Use gsutil with -o GSUtil:encryption_key=[KEY_RESOURCE] when uploading objects to the bucket. is the right answer.
We can have gsutil use an encryption key by using the -o top-level flag: -o GSUtil:encryption_key=[KEY_RESOURCE].
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
Modify .boto configuration to include encryption_key = [KEY_RESOURCE] when uploading objects to bucket. is the right answer.
As an alternative to the -o top-level flag, gsutil can also use an encryption key if .boto configuration is modified to specify the encryption key.
encryption_key = [KEY_RESOURCE]
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
Incorrect
In the bucket advanced settings, select the Customer-supplied key and then select a Cloud KMS encryption key. is not right.
The customer-supplied key is not an option when selecting the encryption method in the console.
Use gsutil with encryption-key=[ENCRYPTION_KEY] when uploading objects to the bucket. is not right.
gsutil doesn't accept an encryption-key flag. gsutil can be set up to use an encryption key by modifying the boto configuration or by specifying a top-level -o flag, but neither of these is included in this option.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys
In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
Our compliance department wants us to use customer-managed encryption keys. We can select the Customer-managed option and provide a Cloud KMS encryption key to encrypt objects with the customer-managed key. This fits our requirements.
Use gsutil with -o GSUtil:encryption_key=[KEY_RESOURCE] when uploading objects to the bucket. is the right answer.
We can have gsutil use an encryption key by using the -o top-level flag: -o GSUtil:encryption_key=[KEY_RESOURCE].
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
Modify .boto configuration to include encryption_key = [KEY_RESOURCE] when uploading objects to bucket. is the right answer.
As an alternative to the -o top-level flag, gsutil can also use an encryption key if .boto configuration is modified to specify the encryption key.
encryption_key = [KEY_RESOURCE]
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
Unattempted
In the bucket advanced settings, select the Customer-supplied key and then select a Cloud KMS encryption key. is not right.
The customer-supplied key is not an option when selecting the encryption method in the console.
Use gsutil with encryption-key=[ENCRYPTION_KEY] when uploading objects to the bucket. is not right.
gsutil doesn't accept an encryption-key flag. gsutil can be set up to use an encryption key by modifying the boto configuration or by specifying a top-level -o flag, but neither of these is included in this option.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys
In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
Our compliance department wants us to use customer-managed encryption keys. We can select the Customer-managed option and provide a Cloud KMS encryption key to encrypt objects with the customer-managed key. This fits our requirements.
Use gsutil with -o GSUtil:encryption_key=[KEY_RESOURCE] when uploading objects to the bucket. is the right answer.
We can have gsutil use an encryption key by using the -o top-level flag: -o GSUtil:encryption_key=[KEY_RESOURCE].
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
Modify .boto configuration to include encryption_key = [KEY_RESOURCE] when uploading objects to bucket. is the right answer.
As an alternative to the -o top-level flag, gsutil can also use an encryption key if .boto configuration is modified to specify the encryption key.
encryption_key = [KEY_RESOURCE]
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
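For illustration, the two gsutil approaches look roughly as follows; the file, bucket, and Cloud KMS key resource names are placeholders:
gsutil -o "GSUtil:encryption_key=projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key" cp pii-records.csv gs://my-pii-bucket/
Alternatively, add encryption_key = projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key under the [GSUtil] section of the .boto configuration file and run gsutil cp as usual.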
Question 32 of 65
32. Question
Your company plans to store sensitive PII data in a cloud storage bucket. Your compliance department has asked you to ensure the objects in this bucket are encrypted by customer-managed encryption keys. What should you do?
Correct
In the bucket advanced settings, select Customer-supplied key and then select a Cloud KMS encryption key. is not right.
Customer-Supplied key is not an option when selecting the encryption method in the console. Moreover, we want to use customer managed encryption keys and not customer supplied encryption keys. This does not fit our requirements.
In the bucket advanced settings, select Google-managed key and then select a Cloud KMS encryption key. is not right.
While Google-managed key is an option when selecting the encryption method in console, we want to use customer managed encryption keys and not Google Managed encryption keys. This does not fit our requirements.
Recreate the bucket to use a Customer-managed key. Encryption can only be specified at the time of bucket creation. is not right.
Bucket encryption can be changed at any time. The bucket doesn't have to be recreated to change its encryption.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
In the bucket advanced settings, select Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
This option correctly selects the Customer-managed key and then the Cloud KMS key to use, which satisfies our requirement.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
Incorrect
In the bucket advanced settings, select Customer-supplied key and then select a Cloud KMS encryption key. is not right.
Customer-Supplied key is not an option when selecting the encryption method in the console. Moreover, we want to use customer managed encryption keys and not customer supplied encryption keys. This does not fit our requirements.
In the bucket advanced settings, select Google-managed key and then select a Cloud KMS encryption key. is not right.
While Google-managed key is an option when selecting the encryption method in console, we want to use customer managed encryption keys and not Google Managed encryption keys. This does not fit our requirements.
Recreate the bucket to use a Customer-managed key. Encryption can only be specified at the time of bucket creation. is not right.
Bucket encryption can be changed at any time. The bucket doesn't have to be recreated to change its encryption.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
In the bucket advanced settings, select Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
This option correctly selects the Customer-managed key and then the Cloud KMS key to use, which satisfies our requirement.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
Unattempted
In the bucket advanced settings, select Customer-supplied key and then select a Cloud KMS encryption key. is not right.
Customer-Supplied key is not an option when selecting the encryption method in the console. Moreover, we want to use customer managed encryption keys and not customer supplied encryption keys. This does not fit our requirements.
In the bucket advanced settings, select Google-managed key and then select a Cloud KMS encryption key. is not right.
While Google-managed key is an option when selecting the encryption method in console, we want to use customer managed encryption keys and not Google Managed encryption keys. This does not fit our requirements.
Recreate the bucket to use a Customer-managed key. Encryption can only be specified at the time of bucket creation. is not right.
Bucket encryption can be changed at any time. The bucket doesn't have to be recreated to change its encryption.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
In the bucket advanced settings, select Customer-managed key and then select a Cloud KMS encryption key. is the right answer.
This option correctly selects the Customer-managed key and then the Cloud KMS key to use, which satisfies our requirement.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
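The same default customer-managed key can also be set from the command line with gsutil's kms command; a sketch with placeholder bucket and key names:
gsutil kms encryption -k projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key gs://my-pii-bucket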
Question 33 of 65
33. Question
Your company procured a license for a third-party cloud-based document signing system for the procurement team. All members of the procurement team need to sign in with the same service account. Your security team prohibits sharing service account passwords. You have been asked to recommend a solution that lets the procurement team login as the service account in the document signing system but without the team knowing the service account password. What should you do?
Correct
Ask the third-party provider to enable SAML for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support SAML. Moreover, you can't set the credentials in a SAML integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Ask the third-party provider to enable OAuth 2.0 for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support OAuth 2.0. Moreover, you can't set the credentials in an OAuth 2.0 integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Have a single person from the procurement team access document signing system with the service account credentials. is not right.
While this would limit how widely the password is shared, it goes against our requirement that all members of the procurement team be able to sign in, and it creates a single-person dependency.
Register the application as a password vaulted app and set the credentials to the service account credentials. is the right answer.
As a G Suite or Cloud Identity administrator, the password vaulted apps service enables you to manage access to some of the apps that don't support federation and that are available to users on the User Dashboard. The password vaulted apps service saves login credential sets for applications and assigns those credential sets to users through group association. When a user has access to one of these applications through a group, they can sign in to the application through the User Dashboard, or they can sign in directly from the specific application. This functionality is possible by leveraging Chrome or Firefox extensions/plugins. When adding an app to the password vaulted apps service, you can search and choose from the available web-based applications in the app library, or you can add a custom app. You can then manage usernames and passwords safely while providing users in your organization with quick one-click access to all of the apps they already use.
Ref: https://support.google.com/cloudidentity/answer/9178974?hl=en
Incorrect
Ask the third-party provider to enable SAML for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support SAML. Moreover, you can't set the credentials in a SAML integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Ask the third-party provider to enable OAuth 2.0 for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support OAuth 2.0. Moreover, you can't set the credentials in an OAuth 2.0 integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Have a single person from the procurement team access document signing system with the service account credentials. is not right.
While this would limit how widely the password is shared, it goes against our requirement that all members of the procurement team be able to sign in, and it creates a single-person dependency.
Register the application as a password vaulted app and set the credentials to the service account credentials. is the right answer.
As a G Suite or Cloud Identity administrator, the password vaulted apps service enables you to manage access to some of the apps that don't support federation and that are available to users on the User Dashboard. The password vaulted apps service saves login credential sets for applications and assigns those credential sets to users through group association. When a user has access to one of these applications through a group, they can sign in to the application through the User Dashboard, or they can sign in directly from the specific application. This functionality is possible by leveraging Chrome or Firefox extensions/plugins. When adding an app to the password vaulted apps service, you can search and choose from the available web-based applications in the app library, or you can add a custom app. You can then manage usernames and passwords safely while providing users in your organization with quick one-click access to all of the apps they already use.
Ref: https://support.google.com/cloudidentity/answer/9178974?hl=en
Unattempted
Ask the third-party provider to enable SAML for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support SAML. Moreover, you can't set the credentials in a SAML integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Ask the third-party provider to enable OAuth 2.0 for the application and set the credentials to always use service account credentials. is not right.
The application may or may not support OAuth 2.0. Moreover, you can't set the credentials in an OAuth 2.0 integration to always use a particular account. The authentication is carried out by an IdP such as G Suite or a third-party identity provider, and since users are prohibited from sharing the service account credentials, they wouldn't be able to sign in through the IdP with the service account credentials.
Have a single person from the procurement team access document signing system with the service account credentials. is not right.
While this would limit how widely the password is shared, it goes against our requirement that all members of the procurement team be able to sign in, and it creates a single-person dependency.
Register the application as a password vaulted app and set the credentials to the service account credentials. is the right answer.
As a G Suite or Cloud Identity administrator, the password vaulted apps service enables you to manage access to some of the apps that don't support federation and that are available to users on the User Dashboard. The password vaulted apps service saves login credential sets for applications and assigns those credential sets to users through group association. When a user has access to one of these applications through a group, they can sign in to the application through the User Dashboard, or they can sign in directly from the specific application. This functionality is possible by leveraging Chrome or Firefox extensions/plugins. When adding an app to the password vaulted apps service, you can search and choose from the available web-based applications in the app library, or you can add a custom app. You can then manage usernames and passwords safely while providing users in your organization with quick one-click access to all of the apps they already use.
Ref: https://support.google.com/cloudidentity/answer/9178974?hl=en
Question 34 of 65
34. Question
Your company publishes large files on an Apache web server that runs on a Compute Engine instance. The Apache web server is not the only application running in the project. You want to receive an email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud Platform (GCP). What should you do?
Correct
Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
This budget alert is defined for the whole project, which means it includes all costs, not just the egress network costs, and that goes against our requirements; it also covers costs across all applications in the project, not just the Compute Engine instance running the Apache web server. While it is possible to set the budget scope to include the product (i.e. Google Compute Engine) and a label that uniquely identifies the specific Compute Engine instance, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
Like above, but worse, as this budget alert includes costs from all projects linked to the billing account. And like above, while it is possible to scope an alert down to project/product/labels, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is not right.
You can't arrive at the exact egress costs with this approach. You can configure Apache logs to include the response object size.
Ref: https://httpd.apache.org/docs/1.3/logs.html#common
You can then do what this option says to arrive at the combined size of all the responses, but this is not 100% accurate as it does not include header sizes. Even if we assume the header size is insignificant compared to the large files published on the Apache web server, the question asks us to do this the Google way, i.e. as measured by Google Cloud Platform (GCP). GCP does not look at the response sizes in the Apache log files to determine the egress costs. GCP's egress pricing takes into consideration the source and the destination (the source is the region that hosts the Compute Engine instance running the Apache web server; the destination is the destination region of the packet). The egress cost is different for different destinations, as shown in this pricing reference.
Ref: https://cloud.google.com/vpc/network-pricing#internet_egress
The Apache logs do not give you the destination information, and without this information you can't accurately calculate the egress costs.
Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is the right answer.
This is the only option that satisfies our requirement. We do it the Google way by (re)using the billing data that GCP itself uses. We scope the costs down to just the egress network costs for the Apache web server. Finally, we schedule this to run hourly and send an email if the costs exceed 100 dollars.
Incorrect
Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
This budget alert is defined for the whole project, which means it includes all costs, not just the egress network costs, and that goes against our requirements; it also covers costs across all applications in the project, not just the Compute Engine instance running the Apache web server. While it is possible to set the budget scope to include the product (i.e. Google Compute Engine) and a label that uniquely identifies the specific Compute Engine instance, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
Like above, but worse, as this budget alert includes costs from all projects linked to the billing account. And like above, while it is possible to scope an alert down to project/product/labels, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is not right.
You can't arrive at the exact egress costs with this approach. You can configure Apache logs to include the response object size.
Ref: https://httpd.apache.org/docs/1.3/logs.html#common
You can then do what this option says to arrive at the combined size of all the responses, but this is not 100% accurate as it does not include header sizes. Even if we assume the header size is insignificant compared to the large files published on the Apache web server, the question asks us to do this the Google way, i.e. as measured by Google Cloud Platform (GCP). GCP does not look at the response sizes in the Apache log files to determine the egress costs. GCP's egress pricing takes into consideration the source and the destination (the source is the region that hosts the Compute Engine instance running the Apache web server; the destination is the destination region of the packet). The egress cost is different for different destinations, as shown in this pricing reference.
Ref: https://cloud.google.com/vpc/network-pricing#internet_egress
The Apache logs do not give you the destination information, and without this information you can't accurately calculate the egress costs.
Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is the right answer.
This is the only option that satisfies our requirement. We do it the Google way by (re)using the billing data that GCP itself uses. We scope the costs down to just the egress network costs for the Apache web server. Finally, we schedule this to run hourly and send an email if the costs exceed 100 dollars.
Unattempted
Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
This budget alert is defined for the whole project, which means it includes all costs, not just the egress network costs, and that goes against our requirements; it also covers costs across all applications in the project, not just the Compute Engine instance running the Apache web server. While it is possible to set the budget scope to include the product (i.e. Google Compute Engine) and a label that uniquely identifies the specific Compute Engine instance, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of email. is not right.
Like above, but worse, as this budget alert includes costs from all projects linked to the billing account. And like above, while it is possible to scope an alert down to project/product/labels, the option doesn't mention this.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is not right.
You can't arrive at the exact egress costs with this approach. You can configure Apache logs to include the response object size.
Ref: https://httpd.apache.org/docs/1.3/logs.html#common
You can then do what this option says to arrive at the combined size of all the responses, but this is not 100% accurate as it does not include header sizes. Even if we assume the header size is insignificant compared to the large files published on the Apache web server, the question asks us to do this the Google way, i.e. as measured by Google Cloud Platform (GCP). GCP does not look at the response sizes in the Apache log files to determine the egress costs. GCP's egress pricing takes into consideration the source and the destination (the source is the region that hosts the Compute Engine instance running the Apache web server; the destination is the destination region of the packet). The egress cost is different for different destinations, as shown in this pricing reference.
Ref: https://cloud.google.com/vpc/network-pricing#internet_egress
The Apache logs do not give you the destination information, and without this information you can't accurately calculate the egress costs.
Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly. is the right answer.
This is the only option that satisfies our requirement. We do it the Google way by (re)using the billing data that GCP itself uses. We scope the costs down to just the egress network costs for the Apache web server. Finally, we schedule this to run hourly and send an email if the costs exceed 100 dollars.
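As a sketch of the kind of query the Cloud Function could run against the billing export, assuming an export table and project ID named as below (placeholders) and that the relevant egress SKUs can be matched on the word egress:
bq query --use_legacy_sql=false 'SELECT SUM(cost) AS egress_cost FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX` WHERE invoice.month = FORMAT_DATE("%Y%m", CURRENT_DATE()) AND project.id = "apache-project" AND service.description = "Compute Engine" AND LOWER(sku.description) LIKE "%egress%"'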
Question 35 of 65
35. Question
Your company recently migrated all infrastructure to Google Cloud Platform (GCP) and you want to use Google Cloud Build to build all container images. You want to store the build logs in Google Cloud Storage. You also have a requirement to push the images to Google Container Registry. You wrote a cloud build YAML configuration file with the following contents.
How should you execute Cloud build to satisfy these requirements?
Correct
Execute gcloud builds push config=[CONFIG_FILE_PATH] [SOURCE]. is not right.
The gcloud builds command group does not support a push operation. The correct command to build images and push them to Google Container Registry (GCR) is gcloud builds submit.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Execute gcloud builds run config=[CONFIG_FILE_PATH] gcs-log-dir=[GCS_LOG_DIR] [SOURCE]. is not right.
The gcloud builds command group does not support a run operation. The correct command to build images and push them to Google Container Registry (GCR) is gcloud builds submit.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Execute gcloud builds submit config=[CONFIG_FILE_PATH] [SOURCE]. is not right.
This command correctly builds the container image and pushes the image to GCR (Google Container Registry), but it doesn't upload the build logs to Google Cloud Storage, which is one of our requirements.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Ref: https://cloud.google.com/cloud-build/docs/building/build-containers
Execute gcloud builds submit config=[CONFIG_FILE_PATH] gcs-log-dir=[GCS_LOG_DIR] [SOURCE]. is the right answer.
This command correctly builds the container image, pushes the image to GCR (Google Container Registry) and uploads the build logs to Google Cloud Storage.
The --config flag specifies the YAML or JSON file to use as the build configuration file.
The --gcs-log-dir flag specifies the directory in Google Cloud Storage to hold the build logs.
[SOURCE] is the location of the source to build. The location can be a directory on a local disk or a gzipped archive file (.tar.gz) in Google Cloud Storage.
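As a minimal sketch (the config file name, log bucket, and source directory are placeholders), the full command could look like this:
gcloud builds submit --config=cloudbuild.yaml --gcs-log-dir=gs://my-build-logs/ .
Here the trailing dot is the local source directory, and gs://my-build-logs/ is an existing Cloud Storage location you designate for the build logs.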
Question 36 of 65
36. Question
Your company runs a very successful web platform and has accumulated 3 petabytes of customer activity data in a sharded MySQL database located in your data center. Due to storage limitations in your on-premises data center, your company has decided to move this data to GCP. The data must be available all through the day. Your business analysts, who have experience of using a SQL interface, have asked for a seamless transition. How should you store the data so that availability is ensured while optimizing the ease of analysis for the business analysts?
Correct
Import data into Google Cloud SQL. is not right.
Cloud SQL is a fully-managed relational database service. It supports MySQL, so the migration of data from your data center to the cloud can be straightforward, but Google Cloud SQL cannot handle petabyte-scale data; the current second-generation instances limit storage to approximately 30 TB.
Ref: https://cloud.google.com/sql#overview
Ref: https://cloud.google.com/sql/docs/quotas
Import flat files into Google Cloud Storage. is not right.
Cloud Storage is a service for storing objects in Google Cloud. You store objects in containers called buckets. You could export the MySQL data into files and import them into Google Cloud Storage, but it doesn't offer a SQL interface to run queries/reports.
Ref: https://cloud.google.com/storage/docs/introduction
Import data into Google Cloud Datastore. is not right.
Your business analysts are already familiar with a SQL interface, so we need a service that supports SQL. However, Cloud Datastore is a NoSQL document database. Cloud Datastore doesn't support SQL (it supports GQL, which is similar to SQL but not identical).
Ref: https://cloud.google.com/datastore/docs/reference/gql_reference
Ref: https://cloud.google.com/datastore/docs/concepts/overview
Import data into Google BigQuery. is the right answer.
BigQuery is a petabyte-scale, serverless, highly scalable, and cost-effective cloud data warehouse that offers blazing-fast query speeds with zero operational overhead. BigQuery supports a standard SQL dialect that is ANSI:2011 compliant, which reduces the impact and enables a seamless transition for your business analysts.
Ref: https://cloud.google.com/bigquery
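As an illustrative sketch (the bucket, dataset, and table names are placeholders, and CSV exports with schema auto-detection are an assumption about the export format), you could stage the MySQL exports in Cloud Storage and load them with the bq CLI:
bq mk --dataset my-project:customer_activity
bq load --source_format=CSV --autodetect my-project:customer_activity.events gs://my-export-bucket/events_*.csv
The analysts can then query the loaded tables with standard SQL.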
Question 37 of 65
37. Question
Your company set up a complex organizational structure on Google Cloud Platform. The structure includes hundreds of folders and projects. Only a few team members should be able to view the hierarchical structure. You need to assign minimum permissions to these team members and you want to follow Google recommended practices. What should you do?
Correct
Add the users to roles/iam.roleViewer role. is not right.
roles/iam.roleViewer provides read access to all custom roles in the project and doesn't satisfy our requirement of viewing the organization hierarchy.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Add the users to a group and add this group to roles/iam.roleViewer role. is not right.
roles/iam.roleViewer provides read access to all custom roles in the project and doesn't satisfy our requirement of viewing the organization hierarchy.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Add the users to roles/browser role. is not right.
roles/browser provides read access to browse the hierarchy for a project, including the folder, organization, and Cloud IAM policy. This role doesn't include permission to view resources in the project. Although this is the role we need, we want to follow Google recommended practices, which means we should add the users to a group and grant the role to the group rather than granting the role to each user individually.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Add the users to a group and add this group to roles/browser role. is the right answer.
roles/browser grants read access to browse the hierarchy for a project, including the folder, organization, and Cloud IAM policy. This role doesn't include permission to view resources in the project. You follow Google recommended practices by adding the users to a group and the group to the role. Groups are a convenient way to apply an access policy to a collection of users. You can grant and change access controls for a whole group at once instead of granting or changing access controls one at a time for individual users or service accounts. You can also easily add members to and remove members from a Google group instead of updating a Cloud IAM policy to add or remove users.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Ref: https://cloud.google.com/iam/docs/overview
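As a minimal sketch (the organization ID and group address are placeholders), the binding could be granted at the organization level like this:
gcloud organizations add-iam-policy-binding 123456789012 --member="group:org-hierarchy-viewers@example.com" --role="roles/browser"
Team members are then added to or removed from the Google group rather than from the IAM policy itself.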
Question 38 of 65
38. Question
Your company stores customer PII data in Cloud Storage buckets. A subset of this data is regularly imported into a BigQuery dataset to carry out analytics. You want to make sure the access to this bucket is strictly controlled. Your analytics team needs read access on the bucket so that they can import data in BigQuery. Your operations team needs read/write access to both the bucket and BigQuery dataset to add Customer PII data of new customers on an ongoing basis. Your Data Vigilance officers need Administrator access to the Storage bucket and BigQuery dataset. You want to follow Google recommended practices. What should you do?
Correct
At the Organization level, add your Data Vigilance officers user accounts to the Owner role, add your operations team user accounts to the Editor role, and add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Granting these primitive roles at the organization level gives the users permissions on all resources in all projects under the organization, which violates the security principle of least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles
At the Project level, add your Data Vigilance officers user accounts to the Owner role, add your operations team user accounts to the Editor role, and add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Granting these primitive roles at the project level gives the users permissions on all resources in the project, which violates the security principle of least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Create 3 custom IAM roles with appropriate permissions for the access levels needed for Cloud Storage and BigQuery. Add your users to the appropriate roles. is not right.
While this has the intended outcome, it is not very efficient, particularly when there are predefined roles that can be used. Secondly, if Google adds or modifies permissions for these services in the future, we would have to update our custom roles to reflect the modifications. This results in operational overhead and increases costs.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control
Use the appropriate predefined IAM roles for each of the access levels needed for Cloud Storage and BigQuery. Add your users to those roles for each of the services. is the right answer.
For Cloud Storage, Google provides predefined roles such as roles/storage.objectViewer, roles/storage.objectAdmin, and roles/storage.admin that match the access levels we need.
Similarly, for BigQuery, Google provides the roles roles/bigquery.dataViewer, roles/bigquery.dataOwner, and roles/bigquery.admin that match the access levels we need.
We can assign these predefined IAM roles to the respective users. Should Google add or modify permissions for these services in the future, we don't need to modify the roles above as Google does this for us, which helps future-proof our solution.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control
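As an illustrative sketch (the group addresses, bucket, project, and the particular predefined roles are placeholders/assumptions; choose the predefined roles that match your exact access levels), the bindings could look like this:
gsutil iam ch group:analytics-team@example.com:objectViewer gs://customer-pii-bucket
gsutil iam ch group:operations-team@example.com:objectAdmin gs://customer-pii-bucket
gcloud projects add-iam-policy-binding my-project --member="group:operations-team@example.com" --role="roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding my-project --member="group:data-vigilance@example.com" --role="roles/bigquery.admin"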
Question 39 of 65
39. Question
Your company stores sensitive PII data in a cloud storage bucket. The objects are currently encrypted by Google-managed keys. Your compliance department has asked you to ensure all current and future objects in this bucket are encrypted by customer-managed encryption keys. You want to minimize effort. What should you do?
Correct
1. In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key.
2. Existing objects encrypted by Google-managed keys can still be decrypted by the new Customer-managed key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all new objects use this key, existing objects are still encrypted by the Google-managed key. This doesn't satisfy our compliance requirements. Moreover, the customer-managed key can't decrypt objects that were encrypted with Google-managed keys.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
1. In the bucket advanced settings, select the customer-supplied key and then select a Cloud KMS encryption key.
2. Delete all existing objects and upload them again so they use the new customer-supplied key for encryption. is not right.
The customer-supplied key is not an option when selecting the encryption method in the console. Moreover, we want to use customer-managed encryption keys and not customer-supplied encryption keys. This does not fit our requirements.
1. Rewrite all existing objects using gsutil rewrite to encrypt them with the new Customer-managed key.
2. In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all new objects use this key, rewriting the existing objects before changing the bucket encryption would result in the objects being encrypted by the encryption method in use at that point, which is still the Google-managed key.
1. In the bucket advanced settings, select the Customer-managed key and then select a Cloud KMS encryption key.
2. Rewrite all existing objects using gsutil rewrite to encrypt them with the new Customer-managed key. is the right answer.
Changing the bucket encryption to use the Customer-managed key ensures all new objects use this key. Once the bucket encryption has been changed to the Customer-managed key, rewriting all existing objects using gsutil rewrite results in the objects being encrypted by the new Customer-managed key. This is the only option that satisfies our requirements.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
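As a minimal sketch (the project, key ring, key, and bucket names are placeholders), the two steps could be carried out with gsutil like this:
gsutil kms encryption -k projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key gs://customer-pii-bucket
gsutil -o "GSUtil:encryption_key=projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key" rewrite -k -r gs://customer-pii-bucket
The first command sets the bucket's default Cloud KMS key for new objects; the second re-encrypts the existing objects with that key (the -o option supplies the key to the rewrite operation in place of editing the .boto configuration).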
Question 40 of 65
40. Question
Your company uses a legacy application that still relies on the legacy LDAP protocol to authenticate. Your company plans to migrate this application to the cloud and is looking for a cost-effective solution while minimizing any developer effort. What should you do?
Correct
Modify the legacy application to use SAML and ask users to sign in through Gmail. is not right.
Modifying a legacy application to use SAML can be quite challenging. In any case, this is a time-consuming and error-prone task, and is very expensive.
Modify the legacy application to use OAuth 2.0 and ask users to sign in through Gmail. is not right.
Modifying a legacy application to use OAuth 2.0 can be quite challenging. In any case, this is a time-consuming and error-prone task, and is very expensive.
Synchronize data within your LDAP server with Google Cloud Directory Sync. is not right.
This can be done, but it isn't going to help with the legacy LDAP protocol authentication unless the application is modified to work with either Cloud Identity or G Suite. And your company is looking for a cost-effective solution while minimizing developer effort, so this isn't suitable.
Use secure LDAP to authenticate the legacy application and ask users to sign in through Gmail. is the right answer.
Secure LDAP enables authentication, authorization, and user/group lookups for LDAP-based apps and IT infrastructure. Secure LDAP uses the same user directory for both SaaS and LDAP-based applications, so people can use the same Cloud Identity credentials they use to log in to services like G Suite and other SaaS apps as they do to log in to traditional applications. Applications and IT infrastructure that use LDAP can simply be configured to use Cloud Identity's secure LDAP service instead of an existing legacy identity system, so end users don't have to change how they access their apps.
Ref: https://cloud.google.com/blog/products/identity-security/cloud-identity-now-provides-access-to-traditional-apps-with-secure-ldap
Question 41 of 65
41. Question
Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task. What should you do?
Correct
Go to Data Catalog and search for employee_ssn in the search box. is the right answer.
Data Catalog is a fully managed and scalable metadata management service that empowers organizations to quickly discover, understand, and manage all their data. It offers a simple and easy-to-use search interface for data discovery, a flexible and powerful cataloging system for capturing both technical and business metadata, and a strong security and compliance foundation with Cloud Data Loss Prevention (DLP) and Cloud Identity and Access Management (IAM) integrations. The service automatically ingests technical metadata for BigQuery and Cloud Pub/Sub and allows customers to capture business metadata in schematized format via tags, custom APIs, and the UI, offering a simple and efficient way to catalog their data assets. You can perform a search for data assets from the Data Catalog home page in the Google Cloud Console.
See https://cloud.google.com/data-catalog/docs/how-to/search for example.
All other options are manual, error-prone, time-consuming, and should be avoided.
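As an illustrative sketch, in the Data Catalog search box you can use the qualified predicate column:employee_ssn to restrict results to tables containing that column. An equivalent gcloud invocation might look like the following (the project-scoping flag is an assumption; check gcloud data-catalog search --help for the exact flags available in your SDK version):
gcloud data-catalog search "column:employee_ssn" --include-project-ids=project-1,project-2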
Question 42 of 65
42. Question
Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google recommended practices. Which storage option should you use?
Correct
The ideal answer to this would have been Archive Storage but that is not one of the options.
Archive Storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Your data is available within milliseconds, not hours or days.
Ref: https://cloud.google.com/storage/docs/storage-classes#archive
In the absence of Archive Storage, the next best option for storing backups is Coldline Storage.
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs. Coldline Storage is ideal for data you plan to read or modify at most once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
Although Nearline, Regional and Multi-Regional can also be used to store the backups, they are expensive in comparison and Google recommends we use Coldline for backups.
More information about Nearline: https://cloud.google.com/storage/docs/storage-classes#nearline
More information about Standard/Regional: https://cloud.google.com/storage/docs/storage-classes#standard
More information about Standard/Multi-Regional: https://cloud.google.com/storage/docs/storage-classes#standard
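As a minimal sketch (the bucket name, location, and backup file path are placeholders), a Coldline bucket for the backups could be created and used like this:
gsutil mb -c coldline -l us-east1 gs://my-dr-backups
gsutil cp /backups/app-backup-2020-05-01.tar.gz gs://my-dr-backups/
Alternatively, a lifecycle rule on an existing bucket can transition older backup objects to Coldline.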
Question 43 of 65
43. Question
Your company wants to move 200 TB of your website clickstream logs from your on-premises data center to Google Cloud Platform. These logs need to be retained in GCP for compliance requirements. Your business analysts also want to run analytics on these logs to understand user click behavior on your website. Which of the below would enable you to meet these requirements? (Select Two)
Correct
Load logs into Google Cloud SQL. is not right.
Cloud SQL is a fully-managed relational database service. Storing logs in Google Cloud SQL is very expensive, and Cloud SQL doesn't help us with analytics. Moreover, Google Cloud Platform offers several storage classes in Google Cloud Storage that are better suited to storing logs at a much lower cost.
Ref: https://cloud.google.com/sql/docs
Ref: https://cloud.google.com/sql/pricing#sql-storage-networking-prices
Ref: https://cloud.google.com/storage/pricing
Import logs into Google Stackdriver. is not right.
You can push custom logs to Stackdriver and set custom retention periods to store the logs for longer durations. However, Stackdriver doesn't help us with analytics. You could create a sink and export the data into BigQuery for analytics, but that is more work. Moreover, Google Cloud Platform offers several storage classes in Google Cloud Storage that are better suited to storing logs at a much lower cost.
Ref: https://cloud.google.com/logging
Ref: https://cloud.google.com/storage/pricing
Insert logs into Google Cloud Bigtable. is not right.
Cloud Bigtable is a petabyte-scale, fully managed NoSQL database service for large analytical and operational workloads. Cloud Bigtable cannot run analytics by itself, although when combined with other services that ingest, process, analyze, and present data, it can help drive analytics. So this option is not right.
Ref: https://cloud.google.com/bigtable/
Upload log files into Google Cloud Storage. is one of the right answers.
Google Cloud Platform offers several storage classes in Google Cloud Storage that are suitable for storing/archiving logs at a reasonable cost.
GCP recommends you use:
1. The Standard storage class if you need to access objects frequently.
2. The Nearline storage class if you access objects infrequently, i.e. about once a month.
3. The Coldline storage class if you access objects even less frequently, e.g. once a quarter.
4. The Archive storage class for log archival.
Ref: https://cloud.google.com/storage/docs/storage-classes
Load logs into Google BigQuery. is one of the right answers.
By loading logs into Google BigQuery, you can securely run and share analytical insights in your organization with a few clicks. BigQuery's high-speed streaming insertion API provides a powerful foundation for real-time analytics, making your latest business data immediately available for analysis.
Ref: https://cloud.google.com/bigquery#marketing-analytics
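As an illustrative sketch (the local path, bucket, dataset, and table names are placeholders, and CSV logs with schema auto-detection are an assumption about the log format), the two steps could look like this:
gsutil -m cp -r /data/clickstream-logs gs://clickstream-archive/
bq load --source_format=CSV --autodetect clickstream_dataset.click_events gs://clickstream-archive/clickstream-logs/*.csv
The first command archives the raw logs in Cloud Storage for retention; the second loads them into BigQuery so the analysts can query user click behavior with standard SQL.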
Question 44 of 65
44. Question
Your company wants to move all documents from a secure internal NAS drive to a Google Cloud Storage (GCS) bucket. The data contains personally identifiable information (PII) and sensitive customer information. Your company tax auditors need access to some of these documents. What security strategy would you recommend on GCS?
Correct
Use signed URLs to generate time-bound access to objects. is not right.
When dealing with sensitive customer information such as PII, using signed URLs is not a great idea, as anyone with access to the URL has access to the PII data. Signed URLs provide time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account. With PII data, we want to be sure who has access, and signed URLs don't guarantee that.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls
Grant IAM read-only access to users, and use default ACLs on the bucket. is not right.
We do not need to grant all IAM users read-only access to this sensitive data. Only the users who need access to the sensitive/PII data should be granted access to it.
Create randomized bucket and object names. Enable public access, but only provide specific file URLs to people who do not have Google accounts and need access. is not right.
Enabling public access to the buckets and objects makes them visible to everyone. There are a number of scanning tools on the market whose sole purpose is to identify buckets/objects that can be reached publicly. Should one of these tools be used by a bad actor to find our public buckets/objects, it would result in a security breach.
Grant no Google Cloud Identity and Access Management (Cloud IAM) roles to users, and use granular ACLs on the bucket. is the right answer.
We start with no explicit access for any of the IAM users, and the bucket ACLs then control which users can access which objects. This is the most secure way of ensuring that only the people who require access to the bucket are provided with access: we block everyone from accessing the bucket and explicitly provide access to specific users through ACLs.
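As a minimal sketch (the auditor address, bucket, and object names are placeholders, and this assumes uniform bucket-level access is disabled on the bucket, since ACLs are only available with fine-grained access control), read access to a specific document could be granted like this:
gsutil acl ch -u tax.auditor@example.com:R gs://customer-documents-bucket/2019/tax-summary.pdf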
Question 45 of 65
45. Question
Your company's infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google Cloud must be able to directly communicate to the workloads on-premises using a private IP range. What should you do?
Correct
In Google Cloud, configure the VPC as a host for Shared VPC. is not right.
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. This in no way helps us connect to our on-premises network.
Ref: https://cloud.google.com/vpc/docs/shared-vpc
In Google Cloud, configure the VPC for VPC Network Peering. is not right.
Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization. VPC Network Peering enables you to connect VPC networks so that workloads in different VPC networks can communicate internally. Traffic stays within Google's network and doesn't traverse the public internet. This doesn't help us connect to our on-premises network.
Ref: https://cloud.google.com/vpc/docs/vpc-peering
Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both as proxy servers using their public IP addresses. is not right.
Bastion hosts provide an external-facing point of entry into a network containing private network instances. They are primarily for end users, who connect through the bastion host to reach instances that do not have external IP addresses; they do not provide private-IP connectivity between on-premises workloads and cloud workloads.
Ref: https://cloud.google.com/compute/docs/instances/connecting-advanced
Set up Cloud VPN between the infrastructure on-premises and Google Cloud. is the right answer.
Cloud VPN securely connects your on-premises network to your Google Cloud (GCP) Virtual Private Cloud (VPC) network through an IPsec VPN connection.
Ref: https://cloud.google.com/vpn/docs/concepts/overview
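A minimal Classic VPN sketch, assuming a default network in us-central1 (names, IP ranges, and the shared secret are placeholders; the UDP 500/4500 forwarding rules and firewall rules are omitted for brevity):
# Gateway, reserved IP, and ESP forwarding rule on the GCP side
gcloud compute target-vpn-gateways create on-prem-gw --network=default --region=us-central1
gcloud compute addresses create vpn-static-ip --region=us-central1
gcloud compute forwarding-rules create fr-esp --region=us-central1 --ip-protocol=ESP --address=vpn-static-ip --target-vpn-gateway=on-prem-gw
# IPsec tunnel to the on-premises VPN device, plus a route for the on-premises range
gcloud compute vpn-tunnels create tunnel-to-onprem --region=us-central1 --peer-address=203.0.113.10 --shared-secret=[SHARED_SECRET] --target-vpn-gateway=on-prem-gw --local-traffic-selector=0.0.0.0/0 --remote-traffic-selector=0.0.0.0/0
gcloud compute routes create route-to-onprem --network=default --destination-range=192.168.0.0/16 --next-hop-vpn-tunnel=tunnel-to-onprem --next-hop-vpn-tunnel-region=us-central1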
Question 46 of 65
46. Question
Your company's test suite is a custom C++ application that runs tests each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to Google Cloud Platform. Your company wants to reduce the amount of time it takes to fully test a change to the system while changing the tests as little as possible. Your project manager has asked you to suggest suitable services in Google Cloud and you want to follow Google-recommended practices. What should you do?
Correct
Use Google Compute Engine unmanaged instance groups with a Network Load Balancer. is not right.
An unmanaged instance group is a collection of virtual machines (VMs) that reside in a single zone, VPC network, and subnet. An unmanaged instance group is useful for grouping together VMs that require individual configuration settings or tuning. Unmanaged instance group does not autoscale, so it does not help reduce the amount of time it takes to fully test a change to the system.
Ref: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances
Use Google App Engine and Google Stackdriver for logging. is not right.
App Engine supports many popular languages like Java, PHP, Node.js, Python, C#, .Net, Ruby, and Go. However, C++ isn't supported by App Engine.
Ref: https://cloud.google.com/appengine
Use Google Cloud Dataproc and run Apache Hadoop jobs to process each test. is not right.
Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way. While Dataproc is very efficient at processing ETL and Big Data pipelines, it is not suitable for running a custom C++ test suite each day.
Ref: https://cloud.google.com/dataproc
Use Google Compute Engine managed instance groups and autoscaling. is the right answer.
A managed instance group (MIG) contains identical virtual machine (VM) instances that are based on an instance template. MIGs support auto-healing, load balancing, autoscaling, and auto-updating. Managed instance groups offer auto-scaling capabilities that let you automatically add or delete instances from a managed instance group based on increases or decreases in load. Autoscaling helps your apps gracefully handle increases in traffic and reduce costs when the need for resources is lower. Autoscaling works by adding more instances to your instance group when there is more load (upscaling), and deleting instances when the need for instances is lowered (downscaling).
Ref: https://cloud.google.com/compute/docs/autoscaler/
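A hedged sketch of the setup (machine type, image, zone, group size, and autoscaling thresholds are illustrative, not prescribed by the question):
# Template describing the test-runner VMs
gcloud compute instance-templates create test-runner-template --machine-type=n1-standard-8 --image-family=debian-11 --image-project=debian-cloud
# Managed instance group with CPU-based autoscaling
gcloud compute instance-groups managed create test-runner-mig --zone=us-central1-a --template=test-runner-template --size=2
gcloud compute instance-groups managed set-autoscaling test-runner-mig --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=20 --target-cpu-utilization=0.75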
Question 47 of 65
47. Question
Your compliance team has requested that all audit logs be stored for 10 years and that external auditors be given access to view them. You want to follow Google-recommended practices. What should you do? (Choose two)
Correct
Create an account for auditors to have view access to Stackdriver Logging. is not right.
While it is possible to configure a custom retention period of 10 years in Stackdriver Logging, storing logs in Stackdriver is expensive compared to Cloud Storage. Stackdriver charges $0.01 per GB per month, whereas Cloud Storage Coldline Storage costs $0.007 per GB per month (30% cheaper) and Cloud Storage Archive Storage costs $0.004 per GB per month (60% cheaper than Stackdriver).
Ref: https://cloud.google.com/logging/docs/storage#pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to BigQuery via an export sink. is not right.
Storing logs in BigQuery is expensive. In BigQuery, Active storage costs $0.02 per GB per month and Long-term storage costs $0.01 per GB per month. In comparison, Google Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/bigquery/pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to Cloud Filestore via a Pub/Sub export sink. is not right.
Storing logs in Cloud Filestore is expensive. In Cloud Filestore, Standard tier pricing costs $0.20 per GB per month and Premium tier pricing costs $0.30 per GB per month. In comparison, Google Cloud Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/filestore/pricing
Ref: https://cloud.google.com/storage/pricing
Export audit logs to Cloud Storage via an export sink. is the right answer.
Among all the storage solutions offered by Google Cloud Platform, Cloud Storage offers the best pricing for long-term storage of logs. Google Cloud Storage offers several storage classes such as Nearline Storage ($0.01 per GB per month), Coldline Storage ($0.007 per GB per month), and Archive Storage ($0.004 per GB per month), which are significantly cheaper than the storage options covered by the other answers above.
Ref: https://cloud.google.com/storage/pricing
Generate a signed URL to the Stackdriver export destination for auditors to access. is the right answer.
In Google Cloud Storage, you can generate a signed URL to provide limited permission and time to make a request. Anyone who possesses it can use the signed URL to perform specified actions, such as reading an object, within a specified period of time.
In our scenario, we do not need to create accounts for our auditors to provide access to logs in Cloud Storage. Instead, we can generate signed URLs for them, which are time-bound and let them access/download the log files.
Ref: https://cloud.google.com/storage/docs/access-control/signed-urls
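As a rough sketch (bucket name, log filter, and object path are hypothetical), the export sink and the auditor access could look like this:
# Archive-class bucket for the 10-year retention
gsutil mb -c archive -l us gs://example-audit-log-archive
# Export sink for audit logs (the sink's writer identity still needs objectCreator on the bucket)
gcloud logging sinks create audit-log-sink storage.googleapis.com/example-audit-log-archive --log-filter='logName:"cloudaudit.googleapis.com"'
# Time-bound signed URL for an auditor (7 days is the maximum with V4 signing)
gsutil signurl -d 7d /path/to/sa-key.json gs://example-audit-log-archive/[OBJECT_PATH]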
Question 48 of 65
48. Question
Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is accessed only by their users using a primary key. The table schema is shown below.
CREATE TABLE Persons {
person_id INT64 NOT NULL, // sequential number based on number of registrations
account_creation_date DATE, // system date
birthdate DATE, // customer birthdate
firstname STRING (255), // first name
lastname STRING (255), // last name
profile_picture BYTES (255) // profile picture
} PRIMARY KEY (person_id)
You want to resolve the issue. What should you do?
Correct
Change the primary key to not have monotonically increasing values. is the right answer.
You should be careful when choosing a primary key to not accidentally create hotspots in your database. One cause of hotspots is having a column whose value monotonically increases as the first key part because this results in all inserts occurring at the end of your keyspace. This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that will end up doing all the work.
Ref: https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots
All other options make no sense. The problem is with the monotonically increasing values in the primary key, and removing profile_picture or adding a secondary index isn't going to alleviate the problem.
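As an illustrative sketch only (not the only valid fix; a bit-reversed sequence or a hash prefix would also work), the schema could use an application-generated UUID string instead of a sequential number as the primary key:
CREATE TABLE Persons (
person_id STRING(36) NOT NULL, -- e.g. a UUIDv4 generated by the application instead of a sequential number
account_creation_date DATE,
birthdate DATE,
firstname STRING(255),
lastname STRING(255),
profile_picture BYTES(255)
) PRIMARY KEY (person_id);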
Question 49 of 65
49. Question
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?
Correct
Create a new Compute Engine instance and install Jenkins through the command line interface. is not right.
While this can be done, it involves a lot more work than launching the Jenkins solution from GCP Marketplace.
Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image. is not right.
While this can be done, it involves a lot more work than launching the Jenkins solution from GCP Marketplace.
Download and deploy the Jenkins Java WAR to App Engine Standard. is not right.
While this is possible, we need to ensure App Engine is enabled, we then need to download the Java project/WAR, and run gcloud app deploy to set up a Jenkins server. This involves more steps than spinning up an instance from GCP Marketplace.
Ref: https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp
Ref: https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine
Use GCP Marketplace to launch the Jenkins solution. is the right answer.
The simplest way to launch a Jenkins server is from GCP Marketplace. GCP Marketplace has a number of solutions available for Jenkins: https://console.cloud.google.com/marketplace/browse?q=jenkins. All you need to do is spin up an instance from a suitable Marketplace solution and you have a Jenkins server in a few minutes with just a few clicks.
Question 50 of 65
50. Question
Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project. What should you do?
Correct
Add the group for the finance team to roles/billing.user role. is not right.
This role has very restricted permissions, so you can grant it broadly, typically in combination with Project Creator. These two roles allow a user to create new projects linked to the billing account on which the role is granted.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
Add the group for the finance team to roles/billing.admin role. is not right.
This role is an owner role for a billing account. Use it to manage payment instruments, configure billing exports, view cost information, link and unlink projects, and manage other user roles on the billing account.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
Add the group for the finance team to roles/billing.projectManager role. is not right.
This role allows a user to attach the project to the billing account, but does not grant any rights over resources. Project Owners can use this role to allow someone else to manage the billing for the project without granting them resource access.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
Add the group for the finance team to roles/billing.viewer role. is the right answer.
Billing Account Viewer access would usually be granted to finance teams; it provides access to spending information but does not confer the right to link or unlink projects or otherwise manage the properties of the billing account.
Ref: https://cloud.google.com/billing/docs/how-to/billing-access
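A hedged example of the grant (the billing account ID and group address are placeholders; on older SDK versions this command may sit under gcloud beta):
gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X --member=group:finance-team@example.com --role=roles/billing.viewer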
Question 51 of 65
51. Question
Your managed instance group raised an alert stating that it has failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?
Correct
Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template. is the right answer.
The likely causes are:
1. A persistent disk already exists with the same name as the VM instance.
2. The disks.autoDelete option is set to false.
3. The instance template might be invalid.
Therefore, we need to ensure that the instance template is valid, disks.autoDelete is turned on, and that no existing persistent disks share names with the VM instances.
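For example (group, template, and disk names below are illustrative), the failure can be diagnosed and cleaned up with:
# Surface the errors the MIG reports for failed instance creations
gcloud compute instance-groups managed list-errors example-mig --zone=us-central1-a
# Inspect the instance template for invalid configuration
gcloud compute instance-templates describe example-template
# Find and remove orphaned persistent disks that clash with instance names
gcloud compute disks list --filter="name~'^example-mig'"
gcloud compute disks delete example-mig-stale-disk --zone=us-central1-a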
Question 52 of 65
52. Question
Your management has asked an external auditor to review all the resources in a specific project. The security team has enabled the Organization Policy called Domain Restricted Sharing on the organization node by specifying only your Cloud Identity domain. You want the auditor to only be able to view, but not modify, the resources in that project. What should you do?
Correct
Ask the auditor for their Google account, and give them the Viewer role on the project. is not right.
Since the auditor's account is not part of your company's Cloud Identity domain, the auditor cannot access resources in your GCP projects. The Domain restriction constraint can be used in organization policies to limit resource sharing based on domain. This constraint allows you to restrict the set of identities that are allowed to be used in Cloud Identity and Access Management policies. In this scenario, since we are restricting based on the Cloud Identity domain, only the users in the Cloud Identity domain can access GCP services. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
Ask the auditor for their Google account, and give them the Security Reviewer role on the project. is not right.
Since the auditor's account is not part of your company's Cloud Identity domain, the auditor cannot access resources in your GCP projects. The Domain restriction constraint can be used in organization policies to limit resource sharing based on domain. This constraint allows you to restrict the set of identities that are allowed to be used in Cloud Identity and Access Management policies. In this scenario, since we are restricting based on the Cloud Identity domain, only the users in the Cloud Identity domain can access GCP services. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project. is not right.
Creating a temporary account for the auditor in your Cloud Identity domain is the right approach, as this makes the auditor part of the Cloud Identity domain and the organization policy in place lets the auditor access resources. However, the role granted here is not suitable; it provides permissions to list all resources and Cloud IAM policies. Note that list permissions only allow you to list resources but not view their details; get permissions are needed to view the resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#iam-roles
Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project. is the right answer.
The primitive viewer role provides permissions for read-only actions that do not affect the state, such as viewing (but not modifying) existing resources or data. This fits our requirements.
In addition, adding the auditor to Cloud Identity ensures that the Organization Policy for Domain Restricted Sharing doesn't block them from accessing resources.
Ref: https://cloud.google.com/iam/docs/understanding-roles#primitive_role_definitions
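Once the temporary Cloud Identity account exists, the grant itself is a single binding (project ID and account name are hypothetical):
gcloud projects add-iam-policy-binding example-audit-project --member=user:temp-auditor@example.com --role=roles/viewer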
Question 53 of 65
53. Question
Your operations team has configured a lifecycle management rule on a bucket. The bucket is multi-regional and has versioning enabled. Which of the following statements accurately reflects the lifecycle config below?
{
  "rule": [
    {
      "action": {
        "type": "Delete"
      },
      "condition": {
        "age": 60,
        "isLive": false
      }
    },
    {
      "action": {
        "type": "SetStorageClass",
        "storageClass": "COLDLINE"
      },
      "condition": {
        "age": 366,
        "matchesStorageClass": "MULTI_REGIONAL"
      }
    }
  ]
}
Correct
Archive objects older than 60 days and move objects to Coldline Storage after 366 days if the storage class is Multi-regional. is not right.
The action has "type": "Delete", which means we want to delete, not archive.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
Delete objects older than 60 days and move objects to Coldline Storage after 366 days if the storage class is Multi-regional. is not right.
We want to delete objects as indicated by the action; however, we don't want to delete all objects older than 60 days. We only want to delete archived objects, as indicated by the "isLive": false condition.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
Move objects to Coldline Storage after 366 days if the storage class is Multi-regional. First rule has no effect on the bucket. is not right.
The first rule certainly has an effect. It deletes archived objects older than 60 days.
Delete archived objects older than 60 days and move objects to Coldline Storage after 366 days if the storage class is Multi-regional. is the right answer.
The first part of the rule: the action has "type": "Delete", which means we want to delete. The "isLive": false condition means we are looking for objects that are not live, i.e., archived (non-current) object versions. Together, this means we want to delete archived objects older than 60 days. Note that if an object is deleted, it cannot be undeleted. Take care in setting up your lifecycle rules so that you do not cause more data to be deleted than you intend.
Ref: https://cloud.google.com/storage/docs/managing-lifecycles
The second part of the rule: the action indicates we want to set the storage class to Coldline. The condition is true if the existing storage class is Multi-regional and the age of the object is 366 days or over, so such objects are moved to Coldline Storage.
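For reference, a lifecycle configuration like the one above is applied and inspected with gsutil (bucket name hypothetical):
gsutil lifecycle set lifecycle-config.json gs://example-versioned-bucket
gsutil lifecycle get gs://example-versioned-bucket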
Question 54 of 65
54. Question
Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum role for projects. What should you do?
Correct
Add the user to roles/iam.roleAdmin role. is not right.
roles/iam.roleAdmin is an administrator role that provides access to all custom roles in the project. This doesn't include the permissions needed to manage service accounts.
Ref: https://cloud.google.com/iam/docs/understanding-roles#roles-roles
Add the user to roles/iam.securityAdmin role. is not right.
roles/iam.securityAdmin role is a Security admin role, with permissions to get and set any IAM policy. This role is too broad, i.e., it includes too many permissions and goes against the principle of least privilege. Moreover, although this role provides iam.serviceAccounts.get/list, it doesn't provide the iam.serviceAccounts.create, iam.serviceAccounts.delete, and iam.serviceAccounts.update permissions that are needed for managing service accounts.
Ref: https://cloud.google.com/iam/docs/understanding-roles#iam-roles
Add the user to roles/iam.serviceAccountUser role. is not right.
roles/iam.serviceAccountUser is a service Account User role which is used for running operations as the service account. This role does not provide the permissions iam.serviceAccounts.create, iam.serviceAccounts.delete, iam.serviceAccounts.update, iam.serviceAccounts.get/list which are required for managing service accounts.
Ref: https://cloud.google.com/iam/docs/understanding-roles#service-accounts-roles
Add the user to roles/iam.serviceAccountAdmin role. is the right answer.
roles/iam.serviceAccountAdmin is a Service Account Admin role that lets you create and manage service accounts. It grants all the required permissions for managing service accounts (iam.serviceAccounts.create, iam.serviceAccounts.delete, iam.serviceAccounts.update, iam.serviceAccounts.get/list, etc.) and therefore fits our requirements.
Ref: https://cloud.google.com/iam/docs/understanding-roles#service-accounts-roles
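A minimal sketch of the grant and of what it then allows (project, user, and service account names are placeholders):
gcloud projects add-iam-policy-binding example-project --member=user:sa-manager@example.com --role=roles/iam.serviceAccountAdmin
# The grantee can now create and manage service accounts, for example:
gcloud iam service-accounts create build-agent --display-name="Build agent" --project=example-project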
Question 55 of 65
55. Question
Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices. What should you do?
Correct
Add your SREs to roles/iam.roleAdmin role. is not right.
roles/iam.roleAdmin provides access to all custom roles in the project. This doesn't fit our requirement of enabling SREs to approve access requests.
Add your SREs to a group and then add this group to roles/iam.roleAdmin role. is not right.
roles/iam.roleAdmin provides access to all custom roles in the project. This doesn't fit our requirement of enabling SREs to approve access requests.
Add your SREs to roles/accessapproval.approver role. is not right.
roles/accessapproval.approver is the Access Approval Approver role, which provides the ability to view or act on access approval requests and view the configuration. Although this is the role we need, granting it to individual users does not follow Google-recommended practices: the role should be granted to a group, and the users added to that group.
Ref: https://cloud.google.com/iam/docs/understanding-roles#access-approval-roles
Ref: https://cloud.google.com/iam/docs/overview
Add your SREs to a group and then add this group to roles/accessapproval.approver role. is the right answer.
roles/accessapproval.approver is the Access Approval Approver role, which provides the ability to view or act on access approval requests and view the configuration, and granting it to a group that contains the SREs follows Google-recommended practices. Groups are a convenient way to apply an access policy to a collection of users: you can grant and change access controls for a whole group at once instead of doing so one user or service account at a time, and you can easily add members to or remove members from a Google group instead of updating a Cloud IAM policy to add or remove users.
Ref: https://cloud.google.com/iam/docs/understanding-roles#access-approval-roles
Ref: https://cloud.google.com/iam/docs/overview
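A minimal sketch of granting the role to a group rather than to individual users (project ID and group address are placeholders):

gcloud projects add-iam-policy-binding my-project \
    --member="group:sre-team@example.com" \
    --role="roles/accessapproval.approver"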
Question 56 of 65
56. Question
A company wants to build an application that stores images in a Cloud Storage bucket and generates thumbnails as well as resized versions of the images. They want to use a Google-managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service. Which GCP service would you suggest?
Correct
Cloud Functions. is the right answer.
Cloud Functions is Google Cloud's event-driven serverless compute platform. It automatically scales based on load, requires no additional configuration, and you pay only for the resources used.
While the other options (Google Compute Engine, Google Kubernetes Engine, and Google App Engine) support autoscaling, it has to be configured explicitly for the expected load and is not as effortless as the automatic scale-up and scale-down to zero offered by Cloud Functions.
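As a sketch only (the function name, runtime, and bucket name are placeholders, not part of the question), a function triggered by uploads to the images bucket could be deployed like this; it scales automatically, down to zero when idle:

# Deploy an event-driven function that runs whenever an object is uploaded to the bucket
gcloud functions deploy generateThumbnail \
    --runtime=nodejs16 \
    --entry-point=generateThumbnail \
    --trigger-bucket=my-images-bucket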
Question 57 of 65
57. Question
A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-restartable jobs. You want to minimize cost. What should you do?
Correct
Enable node auto-provisioning on the GKE cluster. is not right.
Node auto-provisioning automatically manages a set of node pools on the user's behalf. Without node auto-provisioning, GKE considers starting new nodes only from the set of user-created node pools; with it, new node pools can be created and deleted automatically. This does not address the GPU requirement.
Enabling Vertical Pod Autoscaling is not right either. Vertical Pod Autoscaling (VPA) frees you from having to think about what values to specify for a container's CPU and memory requests: the autoscaler can recommend values for CPU and memory requests and limits, or update them automatically. This doesn't help with the GPU requirement. Moreover, due to Kubernetes limitations, the only way to modify the resource requests of a running Pod is to recreate the Pod, which would kill the non-restartable jobs.
Using preemptible VMs is not right either. Preemptible VMs in GKE clusters or node pools are meant for batch or fault-tolerant jobs that are less sensitive to their ephemeral, non-guaranteed nature. Our jobs are long-running and non-restartable, so preemptible VMs aren't suitable.
Adding a node pool of GPU-enabled instances with autoscaling is the right answer. A node pool is a group of nodes within a cluster that all have the same configuration. Since the requirement is GPUs, we create a GPU-enabled node pool and have the scientists' workloads scheduled onto it. To minimize cost, start with one node and scale up as needed; note that scale-down must take running jobs into account, otherwise it may terminate a non-restartable job.
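A minimal sketch of such a node pool (cluster name, zone, GPU type, and size limits are placeholders; GPU drivers still need to be installed on the nodes, for example via Google's driver-installation DaemonSet):

gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --num-nodes=1 \
    --enable-autoscaling --min-nodes=1 --max-nodes=3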
Question 58 of 65
58. Question
An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed any sensitive customer information after their termination. What should you do?
Correct
View the Admin Activity log in Stackdriver. Search for the service account associated with the user. is not right.
Admin Activity audit logs do not contain log entries for reading resource data; they contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources, so they cannot show whether data was read.
Viewing the System Event logs is not right either. System Event audit logs also do not contain log entries for reading resource data; they contain log entries for Google Cloud administrative actions that modify the configuration of resources, and they are generated by Google systems rather than driven by direct user action.
Reviewing the Data Access audit logs and searching for the employee's account is the right answer. Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data, so they show whether the employee accessed customer data after termination.
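One way to query these logs from the command line, as a sketch only (project ID, user email, and the time window are placeholders):

# Read Data Access audit log entries generated by the former employee's account
gcloud logging read \
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="former-employee@example.com"' \
    --project=my-project \
    --freshness=30d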
Question 59 of 65
59. Question
An engineer from your team accidentally deployed several new versions of a NodeJS application on Google App Engine Standard. You are concerned the new versions are serving traffic. You have been asked to produce a list of all the versions of the application that are receiving traffic as well as the percent traffic split between them. What should you do?
Correct
gcloud app versions list. is not right.
This command lists all the versions of all services that are currently deployed to App Engine. While this list includes all versions that are receiving traffic, it also includes versions that are not receiving traffic.
gcloud app versions list --hide-no-traffic. is the right answer.
This command lists just the versions that are receiving traffic (together with their traffic split) by hiding versions that do not receive traffic. It is the only option that fits our requirements.
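For example:

# Lists every deployed version, including those with a 0.00 traffic split
gcloud app versions list

# Lists only the versions that are receiving traffic, along with their traffic split
gcloud app versions list --hide-no-traffic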
Question 60 of 65
60. Question
An intern joined your team recently and needs access to Google Compute Engine in your sandbox project to explore various settings and spin up compute instances to test features. You have been asked to facilitate this. How should you give your intern access to Compute Engine without granting more permissions than necessary?
Correct
Create a shared VPC to enable the intern access Compute resources. is not right.
Creating a shared VPC is not sufficient to grant the intern access to compute resources. Shared VPCs are primarily used by organizations to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network.
Granting the Project Editor role is not right. It grants all viewer permissions plus permissions for actions that modify state, such as changing existing resources. While this would let the intern explore Compute Engine settings and spin up instances, it grants far more than is needed; the intern could modify any resource in the project.
Granting the Compute Engine Admin role is not right. It grants full control of all Compute Engine resources, including networks, load balancing, service accounts, etc., which is again more than is needed.
Granting the Compute Engine Instance Admin role is the right answer. It grants full control of Compute Engine instances, instance groups, disks, snapshots, and images, and provides read access to all Compute Engine networking resources. This provides just the permissions the intern requires.
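A minimal sketch of granting this role (project ID and intern's email are placeholders; roles/compute.instanceAdmin.v1 is the predefined Compute Instance Admin (v1) role):

gcloud projects add-iam-policy-binding sandbox-project \
    --member="user:intern@example.com" \
    --role="roles/compute.instanceAdmin.v1"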
Question 61 of 65
61. Question
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes made in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do?
Correct
Create custom Google Stackdriver alerts and send them in an email to the auditor. is not right.
Stackdriver Alerting gives timely awareness to problems in your cloud applications so you can resolve them quickly. Sending alerts to your auditor is of little use during an audit.
Ref: https://cloud.google.com/monitoring/alerts
Use Cloud Functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view. is not right.
Using Cloud Functions to transfer log entries to Google Cloud SQL is expensive compared with the audit logs export feature, which exports logs to various destinations with minimal configuration.
Auditors spend a lot of time reviewing log messages, and you want to expedite the audit process, so you want to make it as easy as possible for the auditor to extract the required information from the logs. Between the two remaining options, the only difference is the log export sink destination: one exports to a Google Cloud Storage (GCS) bucket, the other to BigQuery. Querying information out of files in a bucket is much harder than querying a BigQuery dataset, where extracting just the required information in the required format is as simple as running a job or a set of jobs. By enabling the auditor to run queries in BigQuery, you streamline the log extraction process and the auditor can review the logs much more quickly. Therefore, Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor is the right answer. Note that you need to configure log sinks before you can receive any logs; you can't retroactively export logs that were written before the sink was created.
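A rough sketch of such an export (project ID, dataset, sink name, and the log filter are placeholders; the sink's writer identity also needs to be granted access to the dataset):

# Dataset that will hold the exported audit logs
bq mk --dataset my-project:audit_logs

# Sink that exports Admin Activity audit log entries (which include IAM policy changes) to BigQuery
gcloud logging sinks create iam-audit-sink \
    bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
    --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'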
Question 62 of 65
62. Question
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on the Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance. What should you do?
Correct
Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance. is not right.
Reuse of a single SSH key pair by all employees is a very bad security practice, as it makes it impossible to determine who accessed a given instance.
Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance. is not right.
For the same reason: a single SSH key pair shared by all employees cannot be traced back to an individual user.
Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance. is not right.
While this can be done, it is not operationally efficient. If a user leaves the company, you have to remove their SSH key from every instance where it has been added (there is no single place to remove it); similarly, when a user joins, you have to add their key to all the instances. This is tedious and not operationally efficient.
Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the compute.osAdminLogin role to the Google group corresponding to this team. is the right answer.
By letting users manage their own SSH key pair (and its rotation), you remove the operational burden of managing SSH keys for individual users. Granting compute.osAdminLogin gives the group administrator permissions (as opposed to compute.osLogin, which does not). Finally, provisioning and de-provisioning is as simple as adding or removing a user from the group. OS Login lets you use Compute Engine IAM roles to efficiently manage SSH access to Linux instances and is an alternative to manually managing instance access by adding and removing SSH keys in metadata. Before you can manage instance access using IAM roles, you must enable the OS Login feature by setting a metadata key-value pair in your project or instance metadata: enable-oslogin=TRUE. After you enable OS Login on one or more instances in your project, those instances accept connections only from user accounts that have the necessary IAM roles in your project or organization. There are two predefined roles:
- roles/compute.osLogin, which does not grant administrator permissions
- roles/compute.osAdminLogin, which grants administrator permissions
At any point, to revoke a user's access to instances that use OS Login, remove the IAM roles from that user account.
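A minimal sketch of the setup described above (project ID and group address are placeholders):

# Enable OS Login for all instances in the project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Grant the operations team administrator-level SSH access via their group
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"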
Question 63 of 65
63. Question
For service discovery, you need to associate each of the Compute Engine instances in your VPC with an internal DNS record in a custom zone. You want to follow Google-recommended practices. What should you do?
Correct
Our requirements here are: 1. internal DNS records, and 2. a custom zone.
Create your Compute Engine instances with custom hostnames. is not right.
Custom hostnames do not put the records in a custom zone.
Deploy the BIND DNS server in the VPC, and create a Cloud DNS forwarding zone to forward the DNS requests to BIND. Create records for each instance in the BIND DNS server. is not right.
This might be possible, but it is not something Google recommends. Cloud DNS already offers these features, so there is no point installing and managing a custom DNS server to do the same job.
Create a new VPC, block all external traffic with a firewall rule and create 2 Cloud DNS zones: a first zone in the new VPC and a second zone in the main VPC that forwards requests to the first Cloud DNS zone. Create records for each instance in the first zone. is not right.
This setup doesn't make sense; moreover, the two VPCs can't communicate without VPC peering.
Ref: https://cloud.google.com/dns/docs/overview#concepts
Create a Cloud DNS zone, set its visibility to private and associate it with your VPC. Create records for each instance in that zone. is the right answer.
This is exactly what you should do when you want internal DNS records in a custom zone: Cloud DNS offers private zones and internal DNS names.
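A minimal sketch of the private zone and a record for one instance (zone name, DNS name, network, and IP address are placeholders):

# Create a private zone visible only to the chosen VPC network
gcloud dns managed-zones create internal-zone \
    --dns-name="internal.example.com." \
    --description="Private zone for service discovery" \
    --visibility=private \
    --networks=my-vpc

# Add an A record for one of the instances in that zone
gcloud dns record-sets create app1.internal.example.com. \
    --zone=internal-zone --type=A --ttl=300 --rrdatas=10.128.0.5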
Question 64 of 65
64. Question
You want to create two compute instances: one in europe-west2-a and another in europe-west2-b. The default compute/zone property in your active gcloud configuration is set to europe-west2-a. What should you do? (Select 2)
Correct
gcloud compute instances create instance1
gcloud compute instances create instance2. is not right.
The default compute/zone property is set to europe-west2-a in the current gcloud configuration, so executing these two commands would create both compute instances in the default zone, europe-west2-a, which doesn't satisfy our requirement.
gcloud compute instances create instance1
gcloud config set zone europe-west2-b
gcloud compute instances create instance2. is not right.
The approach is right but the syntax is wrong. The syntax for this command is gcloud config set SECTION/PROPERTY VALUE; if SECTION is missing, it defaults to core. We are therefore effectively trying to run gcloud config set core/zone europe-west2-b, but the core section doesn't have a property called zone, so the command fails.
gcloud compute instances create instance1
gcloud configuration set compute/zone europe-west2-b
gcloud compute instances create instance2. is not right.
As above, the approach is right but the syntax is wrong. You want to set the default compute/zone property in the gcloud configuration to europe-west2-b, but that is done with gcloud config set, not gcloud configuration set.
gcloud compute instances create instance1
gcloud config set compute/zone europe-west2-b
gcloud compute instances create instance2. is the right answer.
The default compute/zone property is europe-west2-a in the current gcloud configuration, so the first gcloud compute instances create command creates instance1 in europe-west2-a. Executing gcloud config set compute/zone europe-west2-b then changes the default compute/zone property to europe-west2-b, so the second gcloud compute instances create command creates instance2 in europe-west2-b, which is what we want.
gcloud compute instances create instance1
gcloud compute instances create instance2 --zone=europe-west2-b. is the right answer.
The default compute/zone property is europe-west2-a in the current gcloud configuration, so the first command creates instance1 in europe-west2-a. The second command passes the zone explicitly with the --zone flag, so instance2 is created in europe-west2-b instead of the default zone from the active configuration.
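For reference, the two correct sequences look like this (assuming the active configuration's default zone is europe-west2-a):

# instance1 lands in the configuration's default zone, europe-west2-a
gcloud compute instances create instance1

# Option 1: change the default zone, then create the second instance
gcloud config set compute/zone europe-west2-b
gcloud compute instances create instance2

# Option 2: keep the default zone and pass --zone explicitly instead
gcloud compute instances create instance2 --zone=europe-west2-b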
Question 65 of 65
65. Question
In Regional Storage buckets with object versioning enabled, what is the effect of deleting the live version of an object, and of deleting a noncurrent version of an object?
Correct
1. The live version becomes a noncurrent version. 2. The noncurrent version is deleted permanently. is the right answer.
In buckets with object versioning enabled, deleting the live version of an object creates a noncurrent version, while deleting a noncurrent version deletes that version permanently.
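For example, with gsutil (bucket, object name, and generation number are placeholders):

# List all versions (generations) of the object, including noncurrent ones
gsutil ls -a gs://my-bucket/image.png

# Deleting the live version leaves a noncurrent version behind
gsutil rm gs://my-bucket/image.png

# Deleting a specific noncurrent generation removes it permanently
gsutil rm gs://my-bucket/image.png#1612345678901234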