Google Certified Associate Cloud Engineer Practice Tests
Question 1 of 70
You are creating a Kubernetes Engine cluster to deploy multiple pods inside the cluster. All container logs must be stored in BigQuery for later analysis. You want to follow Google-recommended practices. Which two approaches can you take?
Correct answers are A & E.
Option A is correct as creating the cluster with the Stackdriver Logging option enables all container logs to be stored in Stackdriver Logging.
Option E is correct as Stackdriver Logging supports exporting logs to BigQuery by creating sinks.
Refer to the GCP documentation – Kubernetes Logging.
Option B is wrong as creating a cluster with the Stackdriver Monitoring option enables monitoring metrics to be gathered, but it has nothing to do with logging.
Option C is wrong as even though you could develop a Kubernetes add-on that sends logs to BigQuery, this is not a Google-recommended practice.
Option D is wrong as this is not a Google-recommended practice.
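As a sketch of the two recommended steps (the cluster name, project, dataset, and sink name below are placeholders, not taken from the question), the cluster can be created with Stackdriver Kubernetes logging enabled and a sink can then export container logs to BigQuery:
gcloud container clusters create demo-cluster --enable-stackdriver-kubernetes
gcloud logging sinks create container-logs-to-bq bigquery.googleapis.com/projects/my-project/datasets/container_logs --log-filter='resource.type="k8s_container"'
After the sink is created, grant its writer identity the BigQuery Data Editor role on the dataset so the export can write to it.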
Question 2 of 70
Your company has a mission-critical application that serves users globally. You need to select a transactional and relational data storage system for this application. Which two products should you choose?
Correct answers are B & C.
Option B is correct because Cloud SQL is a relational and transactional database in the list.
Option C is correct because Spanner is a relational and transactional database in the list.
Refer to the GCP documentation – Storage Options.
Option A is wrong as BigQuery is not a transactional system.
Option D is wrong as Cloud Bigtable provides transactional support, but it’s not relational.
Option E is wrong as Datastore is not a relational data storage system.
Question 3 of 70
You want to find out who in your organization has Owner access to a project called “my-project”. What should you do?
Correct answer is B as this shows you the Owners of the project.
Option A is wrong as it gives the organization-wide owners, but you are interested in the project owners, which could be different.
Options C and D are wrong as those commands list grantable roles for a resource but do not return who holds a specific role.
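For reference, one way to list a project’s Owners from the command line (a sketch of the general approach, not necessarily the exact wording of Option B) is to inspect the project’s IAM policy:
gcloud projects get-iam-policy my-project --flatten="bindings[].members" --filter="bindings.role:roles/owner" --format="table(bindings.members)"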
Question 4 of 70
You need to verify the assigned permissions in a custom IAM role. What should you do?
Correct answer is A as this is the correct console area to view the permissions assigned to a custom role in a particular project.
Refer to the GCP documentation – IAM Custom Roles.
Option B is wrong as gcloud init will not provide the information required.
Options C and D are wrong as these are not the correct areas to view this information.
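If you prefer the command line, a rough equivalent (the role ID and project ID below are hypothetical) is to describe the custom role, which prints its includedPermissions:
gcloud iam roles describe myCustomRole --project=my-project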
Question 5 of 70
You have an App Engine application serving as your front-end. It’s going to publish messages to Pub/Sub. The Pub/Sub API hasn’t been enabled yet. What is the fastest way to enable the API?
Correct answer is B as the simplest way to enable an API for the project is to use the GCP Console.
Refer to the GCP documentation – Enable/Disable APIs.
The simplest way to enable an API for your project is to use the GCP Console, though you can also enable an API using gcloud or using the Service Usage API. You can find out more about these options in the Service Usage API docs.
To enable an API for your project using the console:
1. Go to the GCP Console API Library.
2. From the projects list, select a project or create a new one.
3. In the API Library, select the API you want to enable. If you need help finding the API, use the search field and/or the filters.
4. On the API page, click ENABLE.
Option A is wrong as granting the Pub/Sub Admin role does not provide access to enable the API.
Enabling an API requires the following two Cloud Identity and Access Management permissions:
1. The servicemanagement.services.bind permission on the service to enable. This permission is present for all users for public services. For private services, you must share the service with the user who needs to enable it.
2. The serviceusage.services.enable permission on the project to enable the service on. This permission is present in the Editor role as well as in the Service Usage Admin role.
Option C is wrong as all applications need the API to be enabled before they can use it.
Option D is wrong as the API is not enabled and it needs to be enabled.
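The documentation quoted above notes that gcloud can also enable an API; for reference, the equivalent command for Pub/Sub would be:
gcloud services enable pubsub.googleapis.com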
Question 6 of 70
Your team is working on designing an IoT solution. There are thousands of devices that need to send periodic time series data for processing. Which services should be used to ingest and store the data?
Correct answer is D as Pub/Sub is ideal for ingestion and Bigtable for time series data storage.
Refer to the GCP documentation – IoT Overview.
Ingestion
Google Cloud Pub/Sub provides a globally durable message ingestion service. By creating topics for streams or channels, you can enable different components of your application to subscribe to specific streams of data without needing to construct subscriber-specific channels on each device. Cloud Pub/Sub also natively connects to other Cloud Platform services, helping you to connect ingestion, data pipelines, and storage systems.
Cloud Pub/Sub can act like a shock absorber and rate leveller for both incoming data streams and application architecture changes. Many devices have limited ability to store and retry sending telemetry data. Cloud Pub/Sub scales to handle data spikes that can occur when swarms of devices respond to events in the physical world, and buffers these spikes to help isolate them from applications monitoring the data.
Time Series dashboards with Cloud Bigtable
Certain types of data need to be quickly sliceable along known indexes and dimensions for updating core visualizations and user interfaces. Cloud Bigtable provides a low-latency and high-throughput database for NoSQL data. Cloud Bigtable provides a good place to drive heavily used visualizations and queries, where the questions are already well understood and you need to absorb or serve at high volumes.
Compared to BigQuery, Cloud Bigtable works better for queries that act on rows or groups of consecutive rows, because Cloud Bigtable stores data by using a row-based format. Compared to Cloud Bigtable, BigQuery is a better choice for queries that require data aggregation.
Option A is wrong as Datastore is not an ideal solution for time series IoT data storage.
Options B & C are wrong as Dataproc is not an ideal ingestion service for an IoT solution. Also, its storage is HDFS-based.
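A minimal provisioning sketch for this pattern (the topic, instance, and cluster names below are examples, not from the question) might look like:
gcloud pubsub topics create device-telemetry
gcloud bigtable instances create iot-timeseries --display-name="IoT time series" --cluster-config=id=iot-cluster,zone=us-central1-b,nodes=3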
Question 7 of 70
Your development team has asked you to set up an external TCP load balancer with SSL offload. Which load balancer should you use?
Correct answer is A as SSL Proxy supports TCP traffic with the ability to offload SSL.
Refer to the GCP documentation – Choosing a Load Balancer.
Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S) load balancing is recommended instead.
SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends.
Options B & D are wrong as they are recommended for HTTP or HTTPS traffic only.
Option C is wrong as TCP proxy does not support SSL offload.
Question 8 of 70
Your company wants to host confidential documents in Cloud Storage. Due to compliance requirements, there is a need for the data to be highly available and resilient even in case of a regional outage. Which storage classes help meet the requirement?
Correct answers are C & E as the Multi-Regional and Coldline storage classes provide multi-region, geo-redundant deployment, which can sustain a regional failure.
Refer to the GCP documentation – Cloud Storage Classes.
Multi-Regional Storage is geo-redundant.
The geo-redundancy of Coldline Storage data is determined by the type of location in which it is stored: Coldline Storage data stored in multi-regional locations is redundant across multiple regions, providing higher availability than Coldline Storage data stored in regional locations.
Data that is geo-redundant is stored redundantly in at least two separate geographic places separated by at least 100 miles. Objects stored in multi-regional locations are geo-redundant, regardless of their storage class.
Geo-redundancy occurs asynchronously, but all Cloud Storage data is redundant within at least one geographic place as soon as you upload it.
Geo-redundancy ensures maximum availability of your data, even in the event of large-scale disruptions, such as natural disasters. For a dual-regional location, geo-redundancy is achieved using two specific regional locations. For other multi-regional locations, geo-redundancy is achieved using any combination of data centers within the specified multi-region, which may include data centers that are not explicitly available as regional locations.
Options A & D are wrong as they do not exist.
Option B is wrong as the Regional storage class is not geo-redundant; data is stored in a narrow geographic region and redundancy is across availability zones.
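As a sketch (the bucket names are placeholders), a geo-redundant bucket is created by choosing a multi-region location such as US together with the desired storage class:
gsutil mb -c multi_regional -l us gs://confidential-docs-example/
gsutil mb -c coldline -l us gs://confidential-docs-archive-example/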
Question 9 of 70
Your manager needs you to test out the latest version of MS-SQL on a Windows instance. You’ve created the VM and need to connect into the instance. What steps should you follow to connect to the instance?
Correct answer is A as connecting to a Windows instance involves installing an RDP client; GCP does not provide one, so it needs to be installed. You then generate a Windows instance password to connect to the instance.
Refer to the GCP documentation – Connecting to Windows Instances.
Option B is wrong as the GCP Console does not offer direct RDP connectivity.
Option C is wrong as a separate Windows password needs to be generated; the Google Cloud username and password cannot be used.
Option D is wrong as you cannot connect to a Windows instance using SSH.
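A hedged example of generating that Windows password from the command line (the instance name, zone, and user are placeholders):
gcloud compute reset-windows-password mssql-test-vm --zone=us-central1-a --user=jane
The command returns a username and a new password that you then enter in your RDP client.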
Question 10 of 70
You need to create a new development Kubernetes cluster with 3 nodes. The cluster will be named project-1-cluster. Which of the following truncated commands will create a cluster?
Correct answer is A as a Kubernetes cluster can be created only with the gcloud command, using the cluster name and the --num-nodes parameter.
Refer to the GCP documentation – Kubernetes Create Cluster.
gcloud container clusters create my-regional-cluster --num-nodes 2 --region us-west1
Options B & C are wrong as kubectl cannot be used to create a Kubernetes cluster.
Option D is wrong as the bare 3 argument is invalid; the node count must follow a parameter such as --num-nodes.
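Applying this to the scenario in the question, a hedged version of the full command (using the cluster name given in the question) would be:
gcloud container clusters create project-1-cluster --num-nodes 3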
Question 11 of 70
Your security team wants to be able to audit network traffic inside of your network. What’s the best way to ensure they have access to the data they need?
Correct answer is B as VPC Flow Logs track the network flows and need to be enabled.
Refer to the GCP documentation – VPC Flow Logs.
VPC Flow Logs record a sample of network flows sent from and received by VM instances. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
Flow logs are aggregated by connection, at 5-second intervals, from Compute Engine VMs and exported in real time. By subscribing to Cloud Pub/Sub, you can analyze flow logs using real-time streaming APIs.
Option A is wrong as VPC Flow Logs need to be enabled and are disabled by default.
Option C is wrong as there is no such thing as VPC Network logs.
Option D is wrong as there is no firewall capture filter.
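A hedged example of enabling flow logs on an existing subnet (the subnet name and region are placeholders):
gcloud compute networks subnets update my-subnet --region=us-central1 --enable-flow-logs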
Question 12 of 70
You have a Cloud Storage bucket that needs to host static web assets with a dozen HTML pages, a few JavaScript files, and some CSS. How do you make the bucket public?
Correct answer is D as the bucket can be shared by granting Storage Object Viewer access to allUsers.
Refer to the GCP documentation – Cloud Storage Sharing Files.
You can either make all files in your bucket publicly accessible, or you can set individual objects to be accessible through your website. Generally, making all files in your bucket accessible is easier and faster.
To make all files accessible, follow the Cloud Storage guide for making groups of objects publicly readable.
To make individual files accessible, follow the Cloud Storage guide for making individual objects publicly readable.
If you choose to control the accessibility of individual files, you can set the default object ACL for your bucket so that subsequent files uploaded to your bucket are shared by default.
Use the gsutil acl ch command, replacing [VALUES_IN_BRACKETS] with the appropriate values:
gsutil acl ch -u AllUsers:R gs://[BUCKET_NAME]/[OBJECT_NAME]
Option A is wrong as there is no make public option with GCP Console.
Option B is wrong as access needs to be provided to allUsers to make it public and there is no allAuthenticatedUsers option.
Option C is wrong as there is no make public option with gsutil command.
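For completeness, although there is no literal “make public” flag, the same result can be achieved for a whole bucket with a bucket-level IAM binding (the bucket name is a placeholder):
gsutil iam ch allUsers:objectViewer gs://my-static-site-bucket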
Question 13 of 70
You’ve created a new Compute Engine instance in zone us-central1-b. When you try to attach the GPU that your data engineers requested, you get an error. What is the most likely cause of the error?
Correct answer is D as GPU availability varies from region to region and zone to zone; a GPU available in one region/zone is not guaranteed to be available in another region/zone.
Refer to the GCP documentation – GPUs.
Option A is wrong as access scope for compute engine does not control GPU attachment with the Compute Engine.
Option B is wrong as GPUs can be attached to any OS and machine type.
Option C is wrong as access scope for compute engine does not control GPU attachment with the Compute Engine.
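To check which GPU types are actually offered in a given zone (the zone below is simply the one from the question), one can run:
gcloud compute accelerator-types list --filter="zone:us-central1-b"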
Question 14 of 70
Your data team is working on some new machine learning models. They’re generating several files per day that they want to store in a regional bucket. They mostly focus on the files from the last week; however, they want to keep all the files just to be safe, and older files would be referred to only about once a month if needed. With the fewest steps possible, what’s the best way to lower the storage costs?
Correct answer is D as the files are actively used for a week and then accessed only about once a month, so Nearline storage is an ideal class to save cost. The transition of the objects can be handled easily using Object Lifecycle Management.
Refer to the GCP documentation – Cloud Storage Lifecycle Management.
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. Here are some example use cases:
Downgrade the storage class of objects older than 365 days to Coldline Storage.
Delete objects created before January 1, 2013.
Keep only the 3 most recent versions of each object in a bucket with versioning enabled.
Option C is wrong as the files are needed once a month, so Coldline storage would not be a cost-effective option.
Options A & B are wrong as the transition can be handled easily using Object Lifecycle management.
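A hedged sketch of such a rule (the file and bucket names are placeholders) transitions objects to Nearline once they are 7 days old:
{"lifecycle": {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"}, "condition": {"age": 7}}]}}
Save this as lifecycle.json and apply it with:
gsutil lifecycle set lifecycle.json gs://ml-models-bucket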
Question 15 of 70
Your company wants to set up a virtual private cloud network. They want to configure a single subnet within the VPC with the maximum range available. Which CIDR block would you choose?
Correct answer is B as you can assign the standard private CIDR blocks (192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8), or their subsets, as the IP address range of a subnet, and 10.0.0.0/8 provides the largest range.
CIDR block – number of available private IPs:
192.168.0.0/16 – 65,532
172.16.0.0/12 – 1,048,572
10.0.0.0/8 – 16,777,212
Refer to the GCP documentation – VPC Subnet IP Ranges.
Option A is wrong as it is not an allowed RFC 1918 CIDR range.
Options C & D are wrong as they provide fewer private IPs than 10.0.0.0/8.
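A hedged sketch of creating a custom-mode VPC with one such large subnet (the network, subnet, and region names are placeholders):
gcloud compute networks create corp-vpc --subnet-mode=custom
gcloud compute networks subnets create corp-subnet --network=corp-vpc --region=us-central1 --range=10.0.0.0/8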
Question 16 of 70
You’ve been tasked with getting all of your team’s public SSH keys onto a specific bastion host instance of a particular project. You’ve collected them all. With the fewest steps possible, what is the simplest way to get the keys deployed?
Correct answer is A as instance-specific SSH keys can give users access to the specific bastion host. The keys can be added or removed using the instance metadata.
Refer to the GCP documentation – Instance-level SSH Keys.
Instance-level public SSH keys give users access to a specific Linux instance. Users with instance-level public SSH keys can access a Linux instance even if it blocks project-wide public SSH keys.
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=[LIST_PATH]
Option B is wrong as gcloud compute project-info sets project-wide metadata, which provides access to all the instances within the project.
Option C is wrong as gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address. It can be used to SSH to the instance.
Option D is wrong as there is no user interface to upload the keys.
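As a sketch of the metadata format (the file name, usernames, and instance name are placeholders), the collected keys go into a file with one USERNAME:KEY entry per line, which is then applied to the bastion host:
alice:ssh-rsa AAAA… alice
bob:ssh-rsa AAAA… bob
gcloud compute instances add-metadata bastion-host --metadata-from-file ssh-keys=team_keys.txt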
Question 17 of 70
You’re migrating an on-premises application to Google Cloud. The application uses a component that requires a licensing server. The license server has the IP address 10.28.0.10. You want to deploy the application without making any changes to the code or configuration. How should you go about deploying the application?
Correct answer is A as the IP is internal and can be reserved as a static internal IP address, which takes it out of the allocation pool and prevents it from being assigned to another resource.
Refer to the GCP documentation – Compute Network Addresses.
In Compute Engine, each VM instance can have multiple network interfaces. Each interface can have one external IP address, one primary internal IP address, and one or more secondary internal IP addresses. Forwarding rules can have external IP addresses for external load balancing or internal addresses for internal load balancing.
Static internal IPs provide the ability to reserve internal IP addresses from the private RFC 1918 IP range configured in the subnet, then assign those reserved internal addresses to resources as needed. Reserving an internal IP address takes that address out of the dynamic allocation pool and prevents it from being used for automatic allocations. Reserving static internal IP addresses requires specific IAM permissions so that only authorized users can reserve a static internal IP address.
With the ability to reserve static internal IP addresses, you can always use the same IP address for the same resource even if you have to delete and recreate the resource.
Option C is wrong as Ephemeral internal IP addresses remain attached to a VM instance only until the VM is stopped and restarted or the instance is terminated. If an instance is stopped, any ephemeral internal IP addresses assigned to the instance are released back into the network pool. When a stopped instance is started again, a new ephemeral internal IP address is assigned to the instance.
Options B & D are wrong as the IP address is an RFC 1918 address and needs to be an internal static IP address.
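A hedged example of reserving that specific address (the reservation name, region, and subnet are placeholders; the IP comes from the question):
gcloud compute addresses create license-server-ip --region=us-central1 --subnet=default --addresses=10.28.0.10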
Question 18 of 70
You’ve set up and tested several custom roles in your development project. What is the fastest way to create the same roles for your new production project?
Correct answer is D as the Cloud SDK command gcloud iam roles copy can be used to copy the roles to a different organization or project.
Refer to the GCP documentation – Cloud SDK IAM Copy Role.
gcloud iam roles copy – create a role from an existing role
--dest-organization=DEST_ORGANIZATION (the organization of the destination role)
--dest-project=DEST_PROJECT (the project of the destination role)
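A hedged example for this scenario (the role ID and project IDs below are placeholders):
gcloud iam roles copy --source=customAuditor --source-project=dev-project --destination=customAuditor --dest-project=prod-project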
Question 19 of 70
You have been tasked to grant access to sensitive files to external auditors for a limited time period of 4 hours only. The files should strictly not be available after 4 hours. Adhering to Google best practices, how would you efficiently share the file?
Correct answer is C as the file can be stored in Cloud Storage, and signed URLs can be used to quickly and securely share the files with the third party.
Refer to the GCP documentation – Cloud Storage Signed URLs.
Signed URLs provide a way to give time-limited read or write access to anyone in possession of the URL, regardless of whether they have a Google account
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. Anyone who knows the URL can access the resource until the URL expires. You specify the expiration time in the query string to be signed.
Options A & B are wrong as they are not quick solutions; they require manual effort to host the files, share them, and then shut the access down.
Option D is wrong as allUsers is not a secure way to share data, and the data would be marked public.
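A hedged example of generating a 4-hour signed URL with gsutil (the service-account key file, bucket, and object names are placeholders):
gsutil signurl -d 4h service-account-key.json gs://audit-files-bucket/sensitive-report.pdf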
Question 20 of 70
A member of the finance team informed you that one of the projects is using the old billing account. What steps should you take to resolve the problem?
Correct answer is B as, to change the billing account, you select the project and change its linked billing account.
Refer to the GCP documentation – Change Billing Account.
To change the billing account for an existing project, you must be an owner on the project and a billing administrator on the destination billing account.
To change the billing account:
1. Go to the Google Cloud Platform Console.
2. Open the console left side menu and select Billing.
3. If you have more than one billing account, you’ll be prompted to select Go to linked billing account to manage the current project’s billing.
4. Under Projects linked to this billing account, locate the name of the project that you want to change billing for, and then click the menu next to it.
5. Select Change billing account, then choose the desired destination billing account.
6. Click Set account.
Option A is wrong as billing account cannot be changed from Project page.
Option C is wrong as the project need not be deleted.
Option D is wrong as Google support does not handle such changes; it is the user's responsibility.
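As an alternative to the console steps above, the billing account can also be changed from the command line; a minimal sketch, assuming hypothetical project and billing account IDs:
gcloud beta billing projects link my-project --billing-account=0X0X0X-0X0X0X-0X0X0X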
Question 21 of 70
21. Question
Your billing department has asked you to help them track spending against a specific billing account. They’ve indicated that they prefer to use Excel to create their reports so that they don’t need to learn new tools. Which export option would work best for them?
Correct
Correct answer is D as Cloud Billing allows export of the billing data as flat files in CSV and JSON format. As the billing department wants to use Excel to create their reports, CSV would be an ideal option.
Refer GCP documentation – Cloud Billing Export Billing Data
To access a detailed breakdown of your charges, you can export your daily usage and cost estimates automatically to a CSV or JSON file stored in a Google Cloud Storage bucket you specify. You can then access the data via the Cloud Storage API, CLI tool, or Google Cloud Platform Console.
Usage data is labeled with the project number and resource type. You use ACLs on your Cloud Storage bucket to control who can access this data.
Options A, B, & C are wrong as they do not support Excel directly and would need conversions.
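Once the file export is configured, the finance team can simply download the generated CSV files from the configured bucket; a minimal sketch, assuming a hypothetical bucket and report prefix:
gsutil cp gs://my-billing-export-bucket/billing-report-*.csv .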
Question 22 of 70
22. Question
A company wants to setup a template for deploying resources. They want the provisioning to be dynamic with the specifications in configuration files. Which of the following service would be ideal for this requirement?
Correct
Correct answer is B as Deployment Manager provide Infrastructure as a Code capability.
Refer GCP documentation – Deployment Manager
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using yaml. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments.
Option A is wrong as Cloud Composer is a fully managed workflow orchestration service that empowers you to author, schedule, and monitor pipelines that span across clouds and on-premises data centers.
Option C is wrong as Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more.
Option D is wrong as Cloud Deployer is not a valid service.
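For illustration, a deployment is created from a configuration file with a single command; a minimal sketch, assuming a hypothetical deployment name and a config.yaml written in Deployment Manager's declarative YAML format:
gcloud deployment-manager deployments create my-deployment --config config.yaml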
Question 23 of 70
23. Question
Your project manager wants to delegate the responsibility to upload objects to Cloud Storage buckets to his team members. Considering the principle of least privilege, which role should you assign to the team members?
Correct
Correct answer is C as roles/storage.objectCreator allows users to create objects. Does not give permission to view, delete, or overwrite objects.
Refer GCP documentation – Cloud Storage IAM Roles
Option B is wrong as the roles/storage.objectViewer role only grants permission to view objects, not to create them.
Options A & D are wrong as they provide more privileges than required.
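For illustration, the role can be granted at the bucket level with gsutil; a minimal sketch, assuming a hypothetical user and bucket:
gsutil iam ch user:team-member@example.com:objectCreator gs://example-bucket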
Question 24 of 70
24. Question
Your company needs to create a new Kubernetes Cluster on Google Cloud Platform. As a security requirement, they want to upgrade the nodes to the latest stable version of Kubernetes with no manual intervention. How should the Kubernetes cluster be configured?
Correct
Correct answer is C as the Kubernetes cluster can be configured for node auto-upgrades to update the nodes to the latest stable version of Kubernetes.
Refer GCP documentation – Kubernetes Auto Upgrades
Node auto-upgrades help you keep the nodes in your cluster up to date with the latest stable version of Kubernetes. Auto-Upgrades use the same update mechanism as manual node upgrades.
Some benefits of using auto-upgrades:
Lower management overhead: You don’t have to manually track and update to the latest version of Kubernetes.
Better security: Sometimes new binaries are released to fix a security issue. With auto-upgrades, GKE automatically ensures that security updates are applied and kept up to date.
Ease of use: Provides a simple way to keep your nodes up to date with the latest Kubernetes features.
Node pools with auto-upgrades enabled are automatically scheduled for upgrades when a new stable Kubernetes version becomes available. When the upgrade is performed, nodes are drained and re-created to match the current cluster master version. Modifications on the boot disk of a node VM do not persist across node re-creations. To preserve modifications across node re-creation, use a DaemonSet.
Option A is wrong as this would not take into account any latest updates.
Option B is wrong as auto repairing helps in keeping nodes healthy and does not handle upgrades.
Option D is wrong as it is a manual effort and not feasible.
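For illustration, node auto-upgrades can be enabled at cluster creation or on an existing node pool; a minimal sketch, assuming hypothetical cluster, node pool, and zone names:
gcloud container clusters create my-cluster --zone us-central1-a --enable-autoupgrade
gcloud container node-pools update default-pool --cluster my-cluster --zone us-central1-a --enable-autoupgrade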
Question 25 of 70
25. Question
You have created an App engine application in the us-central region. However, you found out the network team has configured all the VPN connections in the asia-east2 region, which are not possible to move. How can you change the location efficiently?
Correct
Correct answer is D as App Engine is a regional resource; the application needs to be redeployed to the other region.
Refer GCP documentation – App Engine locations
App Engine is regional, which means the infrastructure that runs your apps is located in a specific region and is managed by Google to be redundantly available across all the zones within that region.
Meeting your latency, availability, or durability requirements are primary factors for selecting the region where your apps are run. You can generally select the region nearest to your app’s users but you should consider the location of the other GCP products and services that are used by your app. Using services across multiple locations can affect your app’s latency as well as pricing
You cannot change an app’s region after you set it.
Options A, B & C are wrong as once the region is set for an App Engine app, it cannot be modified.
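For illustration, redeploying to the desired region means creating the App Engine application there (in a new project, since the region of an existing app cannot be changed) and deploying the code again; a minimal sketch, assuming the target region asia-east2:
gcloud app create --region=asia-east2
gcloud app deploy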
Question 26 of 70
26. Question
Your team needs to set up a MongoDB instance as quickly as possible. You don’t know how to install it and what configuration files are needed. What’s the best way to get it up-and-running quickly?
Correct
Correct answer is C as Cloud Launcher provides out-of-the-box deployments that are completely transparent to you and can be done in no time.
Refer GCP documentation – Cloud Launcher
GCP Marketplace offers ready-to-go development stacks, solutions, and services to accelerate development. So you spend less time installing and more time developing.
Deploy production-grade solutions in a few clicks
Single bill for all your GCP and 3rd party services
Manage solutions using Deployment Manager
Notifications when a security update is available
Direct access to partner support
Option A is wrong as Cloud Memorystore is Redis-compatible and not an alternative to MongoDB.
Option B is wrong as hosting on the compute engine is still a manual step and would require time.
Option D is wrong as Deployment Manager would take time to build and deploy.
Question 27 of 70
27. Question
Your company wants to set up Production and Test environments. They want to use different subnets, and the key requirement is that the VMs must be able to communicate with each other using internal IPs with no additional routes configured. How can the solution be designed?
Correct
Correct answer is B as the VMs need to be able to communicate using private IPs, so they should be hosted in the same VPC. The subnets can be in any region; however, they should have non-overlapping CIDR ranges.
Refer GCP documentation – VPC Intra VPC reqs
The system-generated subnet routes define the paths for sending traffic among instances within the network using internal (private) IP addresses. For one instance to be able to communicate with another, appropriate firewall rules must also be configured because every network has an implied deny firewall rule for ingress traffic.
Option A is wrong as CIDR range cannot overlap.
Options C & D are wrong as VMs in subnets in different VPCs cannot communicate with each other using private IPs.
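For illustration, a single custom-mode VPC with two non-overlapping subnets could be created as follows; a minimal sketch, assuming hypothetical network, subnet, and CIDR values:
gcloud compute networks create prod-test-vpc --subnet-mode=custom
gcloud compute networks subnets create prod-subnet --network=prod-test-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create test-subnet --network=prod-test-vpc --region=europe-west1 --range=10.0.2.0/24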
Question 28 of 70
28. Question
Your company is hosting their static website on Cloud Storage. You have implemented a change to add PDF files to the website. However, when the user clicks on the PDF file link it downloads the PDF instead of opening it within the browser. What would you change to fix the issue?
Correct
Correct answer is B as the browser needs the correct content-type to be able to interpret and render the file correctly. The content-type can be set on object metadata and should be set to application/pdf.
Refer GCP documentation – Cloud Storage Object Metadata
Content-Type
The most commonly set metadata is Content-Type (also known as MIME type), which allows browsers to render the object properly. All objects have a value specified in their Content-Type metadata, but this value does not have to match the underlying type of the object. For example, if the Content-Type is not specified by the uploader and cannot be determined, it is set to application/octet-stream or application/x-www-form-urlencoded, depending on how you uploaded the object.
Option A is wrong as the content type needs to be set to application/pdf.
Options C & D are wrong as the metadata should be set on the objects and not on the bucket.
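For illustration, the Content-Type can be set on already uploaded objects with gsutil setmeta; a minimal sketch, assuming a hypothetical website bucket:
gsutil setmeta -h "Content-Type:application/pdf" gs://www-example-com/docs/*.pdf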
Question 29 of 70
29. Question
You are currently running an application on a machine type with 2 vCPUs and 4 GB RAM. However, recently there have been plenty of memory problems. How can you increase the memory available to the application with minimal downtime?
Correct
Correct answer is C as live migration would help migrate the instance to a machine type with higher memory with minimal to no downtime.
Refer GCP documentation – Live Migration
Compute Engine offers live migration to keep your virtual machine instances running even when a host system event occurs, such as a software or hardware update. Compute Engine live migrates your running instances to another host in the same zone rather than requiring your VMs to be rebooted. This allows Google to perform maintenance that is integral to keeping infrastructure protected and reliable without interrupting any of your VMs.
Live migration keeps your instances running during:
Regular infrastructure maintenance and upgrades.
Network and power grid maintenance in the data centers.
Failed hardware such as memory, CPU, network interface cards, disks, power, and so on. This is done on a best-effort basis; if hardware fails completely or otherwise prevents live migration, the VM crashes and restarts automatically and a hostError is logged.
Host OS and BIOS upgrades.
Security-related updates, with the need to respond quickly.
System configuration changes, including changing the size of the host root partition, for storage of the host image and packages.
Live migration does not change any attributes or properties of the VM itself. The live migration process just transfers a running VM from one host machine to another host machine within the same zone. All VM properties and attributes remain unchanged, including internal and external IP addresses, instance metadata, block storage data and volumes, OS and application state, network settings, network connections, and so on.
Options A & B are wrong as the memory cannot be increased for an instance from console or command line
Option D is wrong as the migration needs to be done to an instance type with higher memory, not higher CPU.
Question 30 of 70
30. Question
Your billing department has asked you to help them track spending against a specific billing account. They’ve indicated that they prefer SQL querying to create their reports so that they don’t need to learn new tools. The data should be as up to date as possible. Which export option would work best for them?
Correct
Correct answer is B as Billing data can be automatically exported to BigQuery and BigQuery provides the SQL interface for the billing department to query the data.
Refer GCP documentation – Cloud Billing Export BigQuery
Tools for monitoring, analyzing and optimizing cost have become an important part of managing development. Billing export to BigQuery enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset you specify. You can then access your billing data from BigQuery. You can also use this export method to export data to a JSON file.
Options A & D are wrong as they would need manual exporting and loading of the data into Cloud SQL.
Option C is wrong as Billing does not export to Cloud SQL
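For illustration, once the export is enabled, the billing department can query the data with standard SQL; a minimal sketch, assuming the standard billing export schema and a hypothetical dataset and table name:
bq query --use_legacy_sql=false 'SELECT service.description AS service, ROUND(SUM(cost), 2) AS total_cost FROM billing_dataset.gcp_billing_export_v1_XXXXXX GROUP BY service ORDER BY total_cost DESC'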
Question 31 of 70
31. Question
Your company hosts multiple applications on Compute Engine instances. They want the instances to be resilient to any Host maintenance activities performed on the instance. How would you configure the instances?
Correct
Correct answer is C as onHostMaintenance availability policy determines how the instance reacts to the host maintenance events.
Refer GCP documentation – Instance Scheduling Options
A VM instance’s availability policy determines how it behaves when an event occurs that requires Google to move your VM to a different host machine. For example, you can choose to keep your VM instances running while Compute Engine live migrates them to another host or you can choose to terminate your instances instead. You can update an instance’s availability policy at any time to control how you want your VM instances to behave.
You can change an instance’s availability policy by configuring the following two settings:
The VM instance’s maintenance behavior, which determines whether the instance is live migrated or terminated when there is a maintenance event.
The instance’s restart behavior, which determines whether the instance automatically restarts if it crashes or gets terminated.
The default maintenance behavior for instances is to live migrate, but you can change the behavior to terminate your instance during maintenance events instead.
Configure an instance’s maintenance behavior and automatic restart setting using the onHostMaintenance and automaticRestart properties. All instances are configured with default values unless you explicitly specify otherwise.
onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.
[Default] migrate, which causes Compute Engine to live migrate an instance when there is a maintenance event.
terminate, which terminates an instance instead of migrating it.
automaticRestart: Determines the behavior when an instance crashes or is terminated by the system.
[Default] true, so Compute Engine restarts an instance if the instance crashes or is terminated.
false, so Compute Engine does not restart an instance if the instance crashes or is terminated.
Options A & B are wrong as automaticRestart does not apply to host maintenance events.
Option D is wrong as onHostMaintenance needs to be set to migrate the instance, since termination would lead to loss of the instance.
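For illustration, the availability policy of an existing instance can be updated with set-scheduling; a minimal sketch, assuming a hypothetical instance name and zone:
gcloud compute instances set-scheduling my-instance --zone us-central1-a --maintenance-policy MIGRATE --restart-on-failure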
Question 32 of 70
32. Question
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take? (Choose two answers)
Correct
Correct answers are A & E as Google Cloud Storage provides a long-term archival option and BigQuery provides analytics capabilities.
Option B is wrong as Cloud SQL is a relational database that does not support the capacity required, and it is not suitable for long-term archival storage.
Option C is wrong as Stackdriver is a monitoring, logging, alerting and debugging tool. It is not ideal for long term retention of data and does not provide analytics capabilities.
Option D is wrong as Bigtable is a NoSQL solution and can be used for analytics. However it is ideal for data with low latency access and is expensive.
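For illustration, log files archived in Cloud Storage can later be loaded into BigQuery for analysis; a minimal sketch, assuming hypothetical bucket, dataset, and table names and CSV-formatted logs:
bq load --source_format=CSV --autodetect my_dataset.archived_logs gs://my-archive-bucket/logs/*.csv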
Question 33 of 70
33. Question
Your company wants to reduce cost on infrequently accessed data by moving it to the cloud. The data will still be accessed approximately once a month to refresh historical charts. In addition, data older than 5 years needs to be archived for 5 years for compliance reasons. How should you store and manage the data?
Correct
Correct answer is D as the access pattern fits Nearline storage class requirements and Nearline is a more cost-effective storage approach than Multi-Regional. The object lifecycle management policy to move data to Coldline is ideal for archival.
Refer GCP documentation – Cloud Storage – Storage Classes
Options A & B are wrong as the Multi-Regional storage class is not an ideal storage option for infrequently accessed data.
Option C is wrong as the data is required for compliance, so it cannot be deleted and instead needs to be moved to Coldline storage.
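For illustration, the archival rule can be applied with gsutil; a minimal sketch, assuming a hypothetical bucket and a lifecycle.json that moves objects older than roughly 5 years (1825 days) to Coldline:
{"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 1825}}]}
gsutil lifecycle set lifecycle.json gs://my-archive-bucket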
Question 34 of 70
34. Question
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface. How should you store the data to optimize it for ease of analysis?
Correct
Correct answer is A as BigQuery is the only one of these Google products that supports a SQL interface and has a high enough SLA (99.9%) to make it readily available.
Option B is wrong as Cloud SQL cannot support multi-petabyte data; the storage limit for Cloud SQL is 10 TB.
Option C is wrong as Cloud Storage does not provide SQL interface.
Option D is wrong as Datastore does not provide a SQL interface and is a NoSQL solution.
Question 35 of 70
35. Question
You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?
Correct
Correct answer is A as the kubernetes cluster can be resized using the gcloud command.
Refer GCP documentation – Resizing Kubernetes Cluster
gcloud container clusters resize [CLUSTER_NAME] --node-pool [POOL_NAME] --size [SIZE]
Option B is wrong as the Kubernetes cluster cannot be resized using the kubectl command.
Options C & D are wrong as the underlying managed instance groups should not be changed manually; resizing should be done through GKE.
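For illustration, a concrete resize might look as follows; a minimal sketch, assuming hypothetical cluster, node pool, and zone names (note that newer gcloud releases use --num-nodes in place of --size):
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 4 --zone us-central1-a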
Question 36 of 70
36. Question
What is the command for creating a storage bucket that has once per month access and is named ‘archive_bucket’?
Correct
Correct answer is C as the data needs to be accessed on a monthly basis, so Nearline is an ideal storage class. Also, gsutil mb needs the -c parameter to pass the storage class.
Refer GCP documentation – Storage Classes
Nearline – Data you do not expect to access frequently (i.e., no more than once per month). Ideal for back-up and serving long-tail multimedia content.
Option A is wrong as rm is the wrong parameter and removes the data.
Option B is wrong as coldline is not suited for data that needs monthly access.
Option D is wrong as by default, gsutil would create a regional bucket.
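For illustration, a minimal sketch of the intended command, with the bucket name taken from the question and the class passed with -c:
gsutil mb -c nearline gs://archive_bucket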
Question 37 of 70
37. Question
You need to take streaming data from thousands of Internet of Things (IoT) devices, ingest it, run it through a processing pipeline, and store it for analysis. You want to run SQL queries against your data for analysis. What services in which order should you use for this task?
Correct
Correct answer is C as, given the need to ingest, transform, and store the data, Cloud Pub/Sub, Cloud Dataflow, and BigQuery form the ideal stack to handle the IoT data.
Refer GCP documentation – IoT
Google Cloud Pub/Sub provides a globally durable message ingestion service. By creating topics for streams or channels, you can enable different components of your application to subscribe to specific streams of data without needing to construct subscriber-specific channels on each device. Cloud Pub/Sub also natively connects to other Cloud Platform services, helping you to connect ingestion, data pipelines, and storage systems.
Google Cloud Dataflow provides the open Apache Beam programming model as a managed service for processing data in multiple ways, including batch operations, extract-transform-load (ETL) patterns, and continuous, streaming computation. Cloud Dataflow can be particularly useful for managing the high-volume data processing pipelines required for IoT scenarios. Cloud Dataflow is also designed to integrate seamlessly with the other Cloud Platform services you choose for your pipeline.
Google BigQuery provides a fully managed data warehouse with a familiar SQL interface, so you can store your IoT data alongside any of your other enterprise analytics and logs. The performance and cost of BigQuery means you might keep your valuable data longer, instead of deleting it just to save disk space.
Sample architecture – Mobile Gaming Analysis Telemetry: processing game client and game server events in real time.
Option A is wrong as the stack is correct, however the order is not correct.
Option B is wrong as Dataproc is not an ideal tool for analysis. Cloud Dataproc is a fast, easy-to-use, fully-managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
Option D is wrong as App Engine is not an ideal ingestion tool to handle IoT data.
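For illustration, a minimal version of this pipeline can be wired up with a Pub/Sub topic and a Google-provided Dataflow template; a sketch, assuming the PubSub_to_BigQuery template and hypothetical project, topic, and table names:
gcloud pubsub topics create iot-events
gcloud dataflow jobs run iot-pipeline --gcs-location gs://dataflow-templates/latest/PubSub_to_BigQuery --parameters inputTopic=projects/my-project/topics/iot-events,outputTableSpec=my-project:iot.events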
Question 38 of 70
38. Question
Your application has a large international audience and runs stateless virtual machines within a managed instance group across multiple locations. One feature of the application lets users upload files and share them with other users. Files must be available for 30 days; after that, they are removed from the system entirely. Which storage solution should you choose?
Correct
Correct answer is B as the key storage requirements are global availability, lifecycle management, and the ability to share files. Cloud Storage is an ideal choice as it can be configured as multi-regional, can apply lifecycle management rules to automatically delete files after 30 days, and lets you share objects with other users (see the sample lifecycle configuration below).
Option A is wrong as Datastore is a NoSQL solution and not ideal for storing unstructured files.
Option C is wrong as SSD disks are an ephemeral storage option for virtual machines.
Option D is wrong as disks are regional and not an ideal storage option for content that needs to be shared.
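As a minimal sketch (the bucket name is a placeholder), a lifecycle rule deleting objects older than 30 days can be written to a JSON file and applied with gsutil:
lifecycle.json:
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 30}}]}
gsutil lifecycle set lifecycle.json gs://shared-uploads-example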
Question 39 of 70
39. Question
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations. Which database type should you use?
Correct
Correct answer is B as a NoSQL solution like Bigtable or Datastore is ideal for storing a sensor ID and several different discrete items of information, and the data can later be combined with the account owner and office location information for analysis. Datastore can also be configured to store data in multi-region locations (a sample Bigtable write is sketched below).
Refer GCP documentation – Storage Options
Option A is wrong as a flat file is not an ideal storage option; it does not scale.
Option C is wrong as a relational database like Cloud SQL is not an ideal solution for schemaless data.
Option D is wrong as blob storage like Cloud Storage is not an ideal solution for storing and analyzing schemaless data or joining it with other sources.
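As an illustrative sketch only (the project, instance, table, column family, and row-key naming are all assumptions), sensor readings could be written to Bigtable with the cbt tool:
cbt -project=my-project -instance=sensors createtable motion-events
cbt -project=my-project -instance=sensors createfamily motion-events status
cbt -project=my-project -instance=sensors set motion-events room-0042#2019-01-01T00:00:00 status:occupied=1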
Question 40 of 70
40. Question
You have data stored in a Cloud Storage dataset and also in a BigQuery dataset. You need to secure the data and provide 3 different types of access levels for your Google Cloud Platform users: administrator, read/write, and read-only. You want to follow Google-recommended practices. What should you do?
Correct
Correct answer is D as the Google best practice is to use predefined roles over legacy primitive roles and custom roles. Predefined roles grant fine-grained access control per service (sample role bindings are sketched after this explanation).
Refer GCP documentation – IAM Overview
Primitive roles: The roles historically available in the Google Cloud Platform Console will continue to work. These are the Owner, Editor, and Viewer roles.
Predefined roles: Predefined roles are the Cloud IAM roles that give finer-grained access control than the primitive roles. For example, the predefined role Pub/Sub Publisher (roles/pubsub.publisher) provides access to only publish messages to a Cloud Pub/Sub topic.
Custom roles: Roles that you create to tailor permissions to the needs of your organization when predefined roles don’t meet your needs.
What is the difference between primitive roles and predefined roles?
Primitive roles are the legacy Owner, Editor, and Viewer roles. IAM provides predefined roles, which enable more granular access than the primitive roles. Grant predefined roles to identities when possible, so you only give the least amount of access necessary to access your resources.
When would I use primitive roles?
Use primitive roles in the following scenarios:
When the GCP service does not provide a predefined role. See the predefined roles table for a list of all available predefined roles.
When you want to grant broader permissions for a project. This often happens when you’re granting permissions in development or test environments.
When you need to allow a member to modify permissions for a project, you’ll want to grant them the owner role because only owners have the permission to grant access to other users for projects.
When you work in a small team where the team members don’t need granular permissions.
Option A is wrong as you should use custom roles only if predefined roles are not available.
Options B & C are wrong as Google does not recommend using primitive roles, which do not allow fine-grained access control. Also, primitive roles are applied at the project or service resource level.
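A rough sketch of such bindings (the user emails are placeholders, and the exact predefined roles should be chosen to match your access matrix; roles/bigquery.admin, roles/bigquery.dataEditor, roles/bigquery.dataViewer and the corresponding roles/storage.* roles are the usual candidates):
gcloud projects add-iam-policy-binding my-project --member=user:admin@example.com --role=roles/bigquery.admin
gcloud projects add-iam-policy-binding my-project --member=user:writer@example.com --role=roles/bigquery.dataEditor
gcloud projects add-iam-policy-binding my-project --member=user:reader@example.com --role=roles/bigquery.dataViewer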
Question 41 of 70
41. Question
You have created a Kubernetes deployment, called Deployment-A, with 3 replicas on your cluster. Another deployment, called Deployment-B, needs access to Deployment-A. You cannot expose Deployment-A outside of the cluster. What should you do?
Correct
Correct answer is D as this exposes the service on a cluster-internal IP address. Choosing this method makes the service reachable only from within the cluster.
Refer GCP documentation – Kubernetes Networking
Option A is wrong as this exposes Deployment A over the public internet.
Option B is wrong as LoadBalancer will expose the service publicly.
Option C is wrong as this exposes the service externally using a cloud provider’s load balancer, and Ingress works with NodePort services, not LoadBalancer.
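A minimal sketch of exposing Deployment-A internally (the service name and port numbers are assumptions, since the actual manifest isn’t shown):
kubectl expose deployment deployment-a --name=deployment-a-internal --type=ClusterIP --port=80 --target-port=8080
Deployment-B can then reach it at http://deployment-a-internal on port 80 via cluster DNS (same namespace assumed).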
Question 42 of 70
42. Question
You want to create a new role for your colleagues that will apply to all current and future projects created in your organization. The role should have the permissions of the BigQuery Job User and Cloud Bigtable User roles. You want to follow Google’s recommended practices. How should you create the new role?
Correct
Correct answer is D as this creates a new role with the combined permissions at the organization level (a sample command is sketched below).
Option A is wrong as this does not create a new role.
Option B is wrong as gcloud cannot promote a role to org level.
Option C is wrong as it’s recommended to define the role at the organization level. Also, the role would not apply to new projects.
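A rough sketch of the command (the role ID, organization ID, and permission list are illustrative; in practice you would include the permissions contained in roles/bigquery.jobUser and roles/bigtable.user):
gcloud iam roles create bqJobAndBigtableUser --organization=123456789012 --title="BigQuery Job and Bigtable User" --permissions=bigquery.jobs.create,bigtable.tables.get,bigtable.tables.readRows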
Question 43 of 70
43. Question
Your team uses a third-party monitoring solution. They’ve asked you to deploy it to all nodes in your Kubernetes Engine Cluster. What’s the best way to do that?
Correct
Correct answer is C as a DaemonSet deploys applications or tools that need to run on all the nodes (see the sample manifest below).
Refer GCP documentation – Kubernetes Engine Daemon Set
Like other workload objects, DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed.
DaemonSets use a Pod template, which contains a specification for its Pods. The Pod specification determines how each Pod should look: what applications should run inside its containers, which volumes it should mount, its labels and selectors, and more.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
For example, you could have DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
Option A is wrong as it is not a viable option.
Option B is wrong as a StatefulSet is useful for maintaining state. StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that GKE maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod is maintained in persistent disk storage associated with the StatefulSet.
Option D is wrong as Deployment manager does not control Pods.
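A minimal DaemonSet sketch to apply with kubectl apply -f (the agent image and labels are placeholders for the third-party monitoring tool):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:1.0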
Question 44 of 70
44. Question
You’re attempting to deploy a new instance that uses the CentOS 7 image family. You can’t recall the exact name of the family. Which command could you use to determine the family names?
Correct
Correct answer is D as family names are image attributes (a sample listing command is sketched below).
Refer GCP documentation – Cloud SDK Compute Images List & Image Families
Image families simplify the process of managing images in your project by grouping related images together and making it easy to roll forward and roll back between specific image versions. An image family always points to the latest version of an image that is not deprecated. Most public images are grouped into image families. For example, the debian-9 image family in the debian-cloud project always points to the most recent Debian 9 image.
You can add your own images to an image family when you create a custom image. The image family points to the most recent image that you added to that family. Because the image family never points to a deprecated image, rolling the image family back to a previous image version is as simple as deprecating the most recent image in that family.
Options A, B & C are wrong as they do not help retrieve the image family.
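For example (the --filter expression is just one way to narrow the output; public CentOS images live in the centos-cloud project, which gcloud compute images list includes by default):
gcloud compute images list --filter="family~centos"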
Question 45 of 70
45. Question
Your security team has asked you to present them some numbers based on the logs that are exported to BigQuery. Due to the team structure, your manager has asked you to determine how much the query will cost. What’s the best way to determine the cost?
Correct
Correct answer is C as the --dry_run option can be used to price your queries before they are actually run. The dry run returns the number of bytes the query would read, which can then be used with the Pricing Calculator to estimate the query cost (see the sample command below).
Refer GCP documentation – BigQuery Best Practices
Price your queries before running them
Best practice: Before running queries, preview them to estimate costs.
Queries are billed according to the number of bytes read. To estimate costs before running a query use:
The query validator in the GCP Console or the classic web UI
The --dry_run flag in the CLI
The dryRun parameter when submitting a query job using the API
The Google Cloud Platform Pricing Calculator
Options A, B & D are wrong as they are not valid options.
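A quick sketch of a dry run from the CLI (the project, dataset, and table names are placeholders); the output reports how many bytes the query would process:
bq query --use_legacy_sql=false --dry_run 'SELECT severity, COUNT(*) FROM `my-project.security_logs.exported_logs` GROUP BY severity'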
Question 46 of 70
46. Question
Your development team has asked you to set up a load balancer with SSL termination. The website will use the HTTPS protocol. Which load balancer should you use?
Correct
Correct answer is D as the HTTPS load balancer handles HTTPS traffic and terminates SSL at the load balancer (a sample setup is sketched below).
Refer GCP documentation – Choosing Load Balancer
An HTTPS load balancer has the same basic structure as an HTTP load balancer (described above), but differs in the following ways:
An HTTPS load balancer uses a target HTTPS proxy instead of a target HTTP proxy.
An HTTPS load balancer requires at least one signed SSL certificate installed on the target HTTPS proxy for the load balancer. You can use Google-managed or self-managed SSL certificates.
The client SSL session terminates at the load balancer.
HTTPS load balancers support the QUIC transport layer protocol.
Option A is wrong as SSL proxy is not recommended for HTTPS traffic.
Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S) load balancing is recommended instead.
SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends.
Option B is wrong as an HTTP load balancer does not support SSL termination.
Option C is wrong as TCP proxy does not support SSL offload and is not recommended for HTTP(S) traffic.
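A rough sketch of the SSL-termination pieces (all resource names are placeholders, and the URL map and backend service are assumed to exist already):
gcloud compute ssl-certificates create www-cert --certificate=www.crt --private-key=www.key
gcloud compute target-https-proxies create www-https-proxy --url-map=www-url-map --ssl-certificates=www-cert
gcloud compute forwarding-rules create www-https-rule --global --target-https-proxy=www-https-proxy --ports=443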
Question 47 of 70
47. Question
You’ve created a bucket to store some data archives for compliance. The data isn’t likely to need to be viewed. However, you need to store it for at least 7 years. What is the best default storage class?
Correct
Correct answer is B as Coldline storage is an ideal solution for archival of infrequently accessed data at low cost.
Refer GCP documentation – Cloud Storage Classes
Google Cloud Storage Coldline is a very-low-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Unlike other “cold” storage services, your data is available within milliseconds, not hours or days.
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold Data Storage – Infrequently accessed data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery – In the event of a disaster recovery event, recovery time is key. Cloud Storage provides low latency access to data stored as Coldline Storage.
The geo-redundancy of Coldline Storage data is determined by the type of location in which it is stored: Coldline Storage data stored in multi-regional locations is redundant across multiple regions, providing higher availability than Coldline Storage data stored in regional locations.
Options A, C & D are wrong as they are not suited for archival data.
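For illustration (the bucket name and location are placeholders), a bucket can be created with Coldline as its default storage class using gsutil:
gsutil mb -c coldline -l us-central1 gs://example-compliance-archive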
Question 48 of 70
48. Question
You have installed SQL Server on a Windows instance. You want to connect to the instance. What steps should you follow to connect to it with the fewest steps?
Correct
Correct answer is D as connecting to a Windows instance requires an RDP client, which GCP does not provide and which must be installed separately. Generate a Windows instance password to connect to the instance, and use RDP port 3389 (see the sample password-reset command below).
Refer GCP documentation – Windows Connecting to Instance
Options A & B are wrong as you need an external RDP client and cannot connect directly from the GCP console.
Options B & C are wrong as port 22 is for SSH, not RDP.
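A quick sketch of generating the Windows password from the CLI (the instance name, zone, and username are placeholders):
gcloud compute reset-windows-password sql-server-instance --zone=us-central1-a --user=app_admin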
Question 49 of 70
49. Question
Your team has been working on building a web application. The plan is to deploy to Kubernetes. You currently have a Dockerfile that works locally. How can you get the application deployed to Kubernetes?
Correct
Correct answer is D as the correct steps are to build the container image, push it to Google Container Registry, and deploy the image to Kubernetes with kubectl (a sample sequence is sketched below).
Refer GCP documentation – Kubernetes Engine Deploy
To package and deploy your application on GKE, you must:
1. Package your app into a Docker image
2. Run the container locally on your machine (optional)
3. Upload the image to a registry
4. Create a container cluster
5. Deploy your app to the cluster
6. Expose your app to the Internet
7. Scale up your deployment
8. Deploy a new version of your app
Option A is wrong as kubectl cannot convert the Dockerfile to deployment.
Option B is wrong as Cloud Storage is not Docker image repository.
Option C is wrong as kubectl cannot push a Dockerfile to Kubernetes, and doing so would not result in a deployment.
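A rough sketch of those steps (the project ID, image name, and ports are placeholders; gcloud auth configure-docker is one way to let docker push to gcr.io):
docker build -t gcr.io/my-project/web-app:v1 .
gcloud auth configure-docker
docker push gcr.io/my-project/web-app:v1
kubectl create deployment web-app --image=gcr.io/my-project/web-app:v1
kubectl expose deployment web-app --type=LoadBalancer --port=80 --target-port=8080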
Question 50 of 70
50. Question
You’ve created the code for a Cloud Function that will respond to HTTP triggers and return some data in JSON format. You have the code locally; it’s tested and working. Which command can you use to create the function inside Google Cloud?
Correct
Correct answer is A as the code can be deployed using the gcloud functions deploy command (an example is sketched below).
Refer GCP documentation – Cloud Functions Deploy
Deployments work by uploading an archive containing your function’s source code to a Google Cloud Storage bucket. You can deploy Cloud Functions from your local machine or from your GitHub or Bitbucket source repository (via Cloud Source Repositories).
Using the gcloud command-line tool, deploy your function from the directory containing your function code with the gcloud functions deploy command:
gcloud functions deploy NAME --runtime RUNTIME TRIGGER [FLAGS ...]
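For example, a sketch for an HTTP-triggered function (the function name, runtime, and entry point are placeholders):
gcloud functions deploy json-api --runtime=nodejs10 --trigger-http --entry-point=handleRequest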
Question 51 of 70
51. Question
Your data team is working on some new machine learning models. They are generating several output files per day that they want to store in a regional bucket. They focus on the output files from the last month. The output files older than a month need to be cleaned up. With the fewest steps possible, what’s the best way to implement the solution?
Correct
Correct answer is B as files that are no longer needed can simply be deleted rather than kept in storage. The cleanup can be handled automatically using Object Lifecycle Management.
Refer GCP documentation – Cloud Storage Lifecycle Management
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. Here are some example use cases:
Downgrade the storage class of objects older than 365 days to Coldline Storage.
Delete objects created before January 1, 2013.
Keep only the 3 most recent versions of each object in a bucket with versioning enabled.
Option A is wrong as the files are no longer needed and can simply be deleted.
Options C & D are wrong as the transition can be handled easily using Object Lifecycle Management.
Question 52 of 70
52. Question
You’ve been tasked with getting only the operations team’s public SSH keys onto a specific bastion host instance in a particular project. Currently, project-wide access has already been granted to all the instances within the project. With the fewest steps possible, how do you block or override the project-level access on the bastion host?
Correct
Correct answer is A as project-wide SSH key access can be blocked by setting the instance metadata --metadata block-project-ssh-keys=TRUE (see the sample commands below).
Refer GCP documentation – Compute Block Project Keys
If you need your instance to ignore project-wide public SSH keys and use only the instance-level keys, you can block project-wide public SSH keys from the instance. This will only allow users whose public SSH key is stored in instance-level metadata to access the instance. If you want your instance to use both project-wide and instance-level public SSH keys, set the instance metadata to allow project-wide SSH keys. This will allow any user whose public SSH key is stored in project-wide or instance-level metadata to access the instance.
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata block-project-ssh-keys=TRUE
Option B is wrong as the --metadata block-project-ssh-keys parameter needs to be set to TRUE.
Option C is wrong as the command needs to be executed at the instance level.
Option D is wrong as project-wide SSH key access can be blocked.
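A sketch of the companion step that adds the operations team’s keys to instance-level metadata (the instance name and file name are placeholders; ops_keys.txt would contain one USERNAME:ssh-rsa ... entry per key):
gcloud compute instances add-metadata bastion-host --metadata-from-file ssh-keys=ops_keys.txt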
Question 53 of 70
53. Question
You’re migrating an on-premises application to Google Cloud. The application uses a component that requires a licensing server. The license server has the IP address 10.28.0.10. You want to deploy the application without making any changes to the code or configuration. How should you go about deploying the application?
Correct
Correct answer is D as only the CIDR range 10.28.0.0/28 would include the 10.28.0.10 address. It provides 16 IP addresses, i.e., 10.28.0.0 to 10.28.0.15 (see the worked calculation below).
Option A is wrong as the 10.28.0.0/31 CIDR range provides 2 IP addresses, i.e., 10.28.0.0 to 10.28.0.1.
Option B is wrong as the 10.28.0.0/30 CIDR range provides 4 IP addresses, i.e., 10.28.0.0 to 10.28.0.3.
Option C is wrong as the 10.28.0.0/29 CIDR range provides 8 IP addresses, i.e., 10.28.0.0 to 10.28.0.7.
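As a quick check of the arithmetic behind these ranges:
Hosts in a /28 = 2^(32-28) = 2^4 = 16 → 10.28.0.0 through 10.28.0.15 (includes 10.28.0.10)
Hosts in a /29 = 2^(32-29) = 2^3 = 8 → 10.28.0.0 through 10.28.0.7 (excludes 10.28.0.10)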
Question 54 of 70
54. Question
While looking at your application’s source code in your private Github repo, you’ve noticed that a service account key has been committed to git. What steps should you take next?
Correct
Correct answer is C as all traces of the key need to be removed from the repository history and the key file added to .gitignore (a sample cleanup is sketched below).
Option A is wrong as deleting project does not remove the keys from Git.
Option B is wrong as it is bad practice to store keys in Git, even in a private repo.
Option D is wrong as Google Cloud support cannot help.
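One possible cleanup sketch (the key file name is a placeholder, and git filter-repo is just one of several history-rewriting tools; git filter-branch or BFG can achieve the same result):
git filter-repo --invert-paths --path service-account-key.json
echo "service-account-key.json" >> .gitignore
git commit -am "Ignore service account keys"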
Question 55 of 70
55. Question
You need to help a developer install the App Engine Go extensions. However, you’ve forgotten the exact name of the component. Which command could you run to show all of the available options?
Correct
Correct answer is D as gcloud components list provides the list of components along with their installation status (see the example below).
Refer GCP documentation – Cloud SDK Components List
gcloud components list – list the status of all Cloud SDK components
This command lists all the available components in the Cloud SDK. For each component, the command lists the following information:
Status on your local workstation: not installed, installed (and up to date), and update available (installed, but not up to date)
Name of the component (a description)
ID of the component (used to refer to the component in other [gcloud components] commands)
Size of the component
In addition, if the --show-versions flag is specified, the command lists the currently installed version (if any) and the latest available version of each individual component.
Options A & C are wrong as config helps view and edit Cloud SDK properties; it does not provide component details.
Option B is wrong as it is not a valid command.
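For example, listing the components and then installing the Go extensions by their component ID (app-engine-go is the ID shown in the listing):
gcloud components list
gcloud components install app-engine-go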
Question 56 of 70
56. Question
Your finance team is working with the engineering team to try and determine your spending for each service by day and month across all projects used by the billing account. What is the easiest and most flexible way to aggregate and analyze the data?
Correct
Correct answer is B as the billing data can be exported to BigQuery, where daily and monthly queries can calculate spending per service (a sample query is sketched below).
Refer GCP documentation – Cloud Billing Export to BigQuery
Tools for monitoring, analyzing and optimizing cost have become an important part of managing development. Billing export to BigQuery enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset you specify. You can then access your billing data from BigQuery. You can also use this export method to export data to a JSON file.
Options A & C are wrong as they are not as easy or flexible.
Option D is wrong as there are no built-in reports.
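A rough sketch of such a query, assuming the standard billing export schema (the project, dataset, and export table names are placeholders):
bq query --use_legacy_sql=false '
SELECT service.description AS service, DATE(usage_start_time) AS usage_day, SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY service, usage_day
ORDER BY usage_day, total_cost DESC'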
Question 57 of 70
57. Question
A company wants to deploy their application using Deployment Manager. However, they want to understand how the changes will affect resources before implementing the update. How can the company achieve this?
Correct
Correct answer is C as Deployment Manager provides a preview feature to check which resources would be created (see the commands below).
Refer GCP documentation – Deployment Manager Preview
After you have written a configuration file, you can preview the configuration before you create a deployment. Previewing a configuration lets you see the resources that Deployment Manager would create but does not actually instantiate any actual resources. The Deployment Manager service previews the configuration by:
1. Expanding the full configuration, including any templates.
2. Creating a deployment and “shell” resources.
You can preview your configuration by using the preview query parameter when making an insert() request.
gcloud deployment-manager deployments create example-deployment --config configuration-file.yaml --preview
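After reviewing the preview, the same deployment can be committed or abandoned; a sketch using the deployment name above (running update with no new configuration deploys the previewed configuration, while cancel-preview discards it):
gcloud deployment-manager deployments update example-deployment
gcloud deployment-manager deployments cancel-preview example-deployment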
Question 58 of 70
58. Question
Your company needs to create a new Kubernetes Cluster on Google Cloud Platform. They want the nodes to be configured for resiliency and high availability with no manual intervention. How should the Kubernetes cluster be configured?
Correct
Correct answer is C as resiliency and high availability can be increased using the node auto-repair feature, which allows Kubernetes Engine to replace unhealthy nodes.
Refer GCP documentation – Kubernetes Auto-Repairing
GKE’s node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node.
Option A is wrong as this cannot be implemented for the Kubernetes cluster.
Option B is wrong as auto-upgrades are to upgrade the node version to the latest stable Kubernetes version.
Option D is wrong as there is no auto-healing feature.
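For reference, node auto-repair can be enabled when the cluster (or a node pool) is created. A minimal sketch with a hypothetical cluster name and zone:
# Create a cluster whose default node pool has auto-repair enabled
gcloud container clusters create example-cluster --zone us-central1-a --enable-autorepair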
Question 59 of 70
59. Question
You have created an App Engine application in the development environment. The testing for the application has been successful. You want to move the application to the production environment. How can you deploy the application with minimal steps?
Correct
Correct answer is B as gcloud app deploy allows the --project parameter to be passed to override the project that the App Engine application is deployed to.
Refer GCP documentation – Cloud SDK
--project=PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --format='text(core.project)' and can be set using gcloud config set project PROJECT_ID. Overrides the default core/project property value for this command invocation.
Option A is wrong as it is a two-step process, although it is a valid solution.
Option C is wrong as there is no option to clone an App Engine application to a different project.
Option D is wrong as app.yaml does not control the project the application is deployed to.
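A minimal sketch of the single-step deployment, assuming a hypothetical production project ID:
# Deploy the tested application directly to the production project
gcloud app deploy app.yaml --project=my-production-project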
Question 60 of 70
60. Question
Your company hosts multiple applications on Compute Engine instances. They want the instances to be resilient to any instance crashes or system termination. How would you configure the instances?
Correct
Correct answer is A as the automaticRestart availability policy determines how the instance reacts to crashes and system termination, and should be set to true to restart the instance.
Refer GCP documentation – Instance Scheduling Options
A VM instance’s availability policy determines how it behaves when an event occurs that requires Google to move your VM to a different host machine. For example, you can choose to keep your VM instances running while Compute Engine live migrates them to another host or you can choose to terminate your instances instead. You can update an instance’s availability policy at any time to control how you want your VM instances to behave.
You can change an instance’s availability policy by configuring the following two settings:
The VM instance’s maintenance behavior, which determines whether the instance is live migrated or terminated when there is a maintenance event.
The instance’s restart behavior, which determines whether the instance automatically restarts if it crashes or gets terminated.
The default maintenance behavior for instances is to live migrate, but you can change the behavior to terminate your instance during maintenance events instead.
Configure an instance’s maintenance behavior and automatic restart setting using the onHostMaintenance and automaticRestart properties. All instances are configured with default values unless you explicitly specify otherwise.
onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.
[Default] migrate, which causes Compute Engine to live migrate an instance when there is a maintenance event.
terminate, which terminates an instance instead of migrating it.
automaticRestart: Determines the behavior when an instance crashes or is terminated by the system.
[Default] true, so Compute Engine restarts an instance if the instance crashes or is terminated.
false, so Compute Engine does not restart an instance if the instance crashes or is terminated.
Option B is wrong as automaticRestart availability policy should be set to true.
Options C & D are wrong as the onHostMaintenance does not apply to crashes or system termination.
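The same availability policy can be set from the command line. A minimal sketch, assuming a hypothetical instance name and zone:
# Ensure the instance restarts automatically after a crash or system termination
gcloud compute instances set-scheduling example-instance --zone us-central1-a \
    --restart-on-failure --maintenance-policy MIGRATE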
Question 61 of 70
61. Question
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?
Correct
Correct answer is B as Stackdriver monitoring metrics can be exported to BigQuery or Google Cloud Storage. However, as the need is for future analysis, BigQuery is the better option.
Refer GCP documentation – Stackdriver
Stackdriver Logging provides you with the ability to filter, search, and view logs from your cloud and open source application services. Allows you to define metrics based on log contents that are incorporated into dashboards and alerts. Enables you to export logs to BigQuery, Google Cloud Storage, and Pub/Sub.
Option A is wrong as project logs are maintained in Stackdriver and it has limited data retention capability.
Option C is wrong as Stackdriver cannot retain data for 5 years. Refer Stackdriver data retention
Option D is wrong as Google Cloud Storage does not provide analytics capability.
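An export sink to BigQuery can be created with the Cloud SDK. A minimal sketch, assuming hypothetical project, dataset, sink and filter values:
# Create a dataset to hold the exported entries, then route matching logs to it
bq mk --dataset my-project:longterm_metrics
gcloud logging sinks create metrics-sink \
    bigquery.googleapis.com/projects/my-project/datasets/longterm_metrics \
    --log-filter='resource.type="gae_app"'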
Question 62 of 70
62. Question
A recent software update to a static e-commerce website running on Google Cloud has caused the website to crash for several hours. The CTO decides that all critical changes must now have a back-out/roll-back plan. The website is deployed on Cloud Storage and critical changes are frequent. Which action should you take to implement the back-out/roll-back plan?
Correct
Correct answer is B as this is a seamless way to ensure the last known good version of the static content is always available.
Option A is wrong as this copy process is unreliable and makes it tricky to keep things in sync; it also doesn't provide a way to roll back once a bad version of the data has been written to the copy.
Option C is wrong as this would add a great deal of overhead to the process and would cause conflicts in association between different Deployment Manager deployments, which could lead to unexpected behavior if an old version is changed.
Option D is wrong as this approach doesn't scale well; there is a lot of management work involved.
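Assuming option B refers to enabling object versioning on the bucket (an assumption based on the wording above), a minimal sketch with a hypothetical bucket name and generation number:
# Keep overwritten or deleted objects as noncurrent versions
gsutil versioning set on gs://example-static-site
# List all versions of an object and restore a known good one after a bad release
gsutil ls -a gs://example-static-site/index.html
gsutil cp gs://example-static-site/index.html#1556120000000000 gs://example-static-site/index.html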
Question 63 of 70
63. Question
A user wants to install a tool on the Cloud Shell. The tool should be available across sessions. Where should the user install the tool?
Correct
Correct answer is D as only the $HOME directory is persisted across sessions.
Refer GCP documentation – Cloud Shell
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance. This storage is on a per-user basis and is available across projects. Unlike the instance itself, this storage does not time out on inactivity. All files you store in your home directory, including installed software, scripts and user configuration files like .bashrc and .vimrc, persist between sessions. Your $HOME directory is private to you and cannot be accessed by other users.
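For example, a tool can be installed under $HOME so that it survives across sessions. A minimal sketch with a hypothetical binary name and download URL:
mkdir -p $HOME/bin
curl -L -o $HOME/bin/mytool https://example.com/releases/mytool   # hypothetical URL
chmod +x $HOME/bin/mytool
echo 'export PATH="$HOME/bin:$PATH"' >> $HOME/.bashrc   # .bashrc also lives in $HOME and persists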
Question 64 of 70
64. Question
Your company has hosted their critical application on Compute Engine managed instance groups. They want the instances to be configured for resiliency and high availability with no manual intervention. How should the managed instance group be configured?
Correct
Correct answer is D as Managed Instance Groups provide the autohealing feature, which performs a health check, and if the application is not responding, the instance is automatically recreated.
Refer GCP documentation – Managed Instance Groups
Autohealing — You can also set up an autohealing policy that relies on an application-based health check, which periodically verifies that your application is responding as expected on each of the MIG’s instances. If an application is not responding on an instance, that instance is automatically recreated. Checking that an application responds is more precise than simply verifying that an instance is up and running.
Managed instance groups maintain high availability of your applications by proactively keeping your instances available, which means in RUNNING state. A managed instance group will automatically recreate an instance that is not RUNNING. However, relying only on instance state may not be sufficient. You may want to recreate instances when an application freezes, crashes, or runs out of memory.
Application-based autohealing improves application availability by relying on a health checking signal that detects application-specific issues such as freezing, crashing, or overloading. If a health check determines that an application has failed on an instance, the group automatically recreates that instance.
Options A & C are wrong as these features are not available.
Option B is wrong as auto-updating helps deploy new versions of software to instances in a managed instance group. The rollout of an update happens automatically based on your specifications: you can control the speed and scope of the update rollout in order to minimize disruptions to your application. You can optionally perform partial rollouts which allows for canary testing.
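Autohealing is configured by attaching a health check to the managed instance group. A minimal sketch with hypothetical names (flag names may vary slightly between gcloud releases):
# Health check that verifies the application itself, not just the VM
gcloud compute health-checks create http example-app-check --port 80 --request-path /healthz
# Attach it to the MIG with a startup grace period of 300 seconds
gcloud compute instance-groups managed update example-mig --zone us-central1-a \
    --health-check example-app-check --initial-delay 300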
Question 65 of 70
65. Question
Your company has deployed their application on managed instance groups, which is served through a network load balancer. They want to enable health checks for the instances. How do you configure the health checks?
Correct
Correct answer is B as Network Load Balancer does not support TCP health checks and hence HTTP health checks need to be performed. You can run a basic web server on each instance for health checks.
Refer GCP documentation – Network Load Balancer Health Checks
Health checks ensure that Compute Engine forwards new connections only to instances that are up and ready to receive them. Compute Engine sends health check requests to each instance at the specified frequency; once an instance exceeds its allowed number of health check failures, it is no longer considered an eligible instance for receiving new traffic. Existing connections will not be actively terminated which allows instances to shut down gracefully and to close TCP connections.
The health checker continues to query unhealthy instances, and returns an instance to the pool when the specified number of successful checks is met. If all instances are marked as UNHEALTHY, the load balancer directs new traffic to all existing instances.
Network Load Balancing relies on legacy HTTP Health checks for determining instance health. Even if your service does not use HTTP, you’ll need to at least run a basic web server on each instance that the health check system can query.
Option A is wrong as the traffic is not secured, HTTPS health checks are not needed.
Option C is wrong as Network Load Balancer does not support TCP health checks.
Option D is wrong as instances do not need to send any traffic to Network Load Balancer.
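A legacy HTTP health check can be created and attached to the target pool behind the network load balancer. A minimal sketch with hypothetical names:
# Legacy HTTP health check used by Network Load Balancing
gcloud compute http-health-checks create basic-check --port 80 --request-path /
# Attach it to the target pool serving the instances
gcloud compute target-pools add-health-checks example-pool --region us-central1 \
    --http-health-check basic-check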
Question 66 of 70
66. Question
You need to deploy an update to an application in Google App Engine. The update is risky, but it can only be tested in a live environment. What is the best way to introduce the update to minimize risk?
Correct
Correct answer is C as deploying a new version without assigning it as the default version will not create downtime for the application. Using traffic splitting allows for easily redirecting a small amount of traffic to the new version and can also be quickly reverted without application downtime.
Refer GCP documentation – App Engine Splitting Traffic
Traffic migration smoothly switches request routing, gradually moving traffic from the versions currently receiving traffic to one or more versions that you specify.
Traffic splitting distributes a percentage of traffic to versions of your application. You can split traffic to move 100% of traffic to a single version or to route percentages of traffic to multiple versions. Splitting traffic to two or more versions allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features.
Option A is wrong as deploying the application version as default requires moving all traffic to the new version. This could impact all users and disable the service.
Option B is wrong as this is not a recommended practice and it impacts user experience.
Option D is wrong as App Engine services are intended for hosting different service logic. Using different services would require manual configuration of the consumers of services to be aware of the deployment process and manage from the consumer side who is accessing which service.
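A typical flow is to deploy the risky version without promoting it and then shift a small slice of traffic. A minimal sketch, assuming hypothetical version IDs v1 (current) and v2 (new):
# Deploy the new version but keep the current default serving all traffic
gcloud app deploy --version v2 --no-promote
# Send 5% of traffic to v2 for live testing
gcloud app services set-traffic default --splits v1=0.95,v2=0.05
# Roll back instantly if the update misbehaves
gcloud app services set-traffic default --splits v1=1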
Question 67 of 70
67. Question
Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?
Correct
Correct answer is B as Budget Alerts allow you to configure thresholds, and if they are crossed, alerts are automatically triggered.
Refer GCP documentation – Billing Budgets Alerts
To help you with project planning and controlling costs, you can set a budget alert. Setting a budget alert lets you track how your spend is growing toward a particular amount.
You can apply budget alerts to either a billing account or a project, and you can set the budget alert at a specific amount or match it to the previous month’s spend. The alerts will be sent to billing administrators and billing account users when spending exceeds a percentage of your budget.
Option A is wrong as a linked card does not alert. The charges would still increase as per the usage.
Option C is wrong as App Engine does not have budget settings.
Option D is wrong as the solution would not trigger automatic alerts, and the checks would not be immediate either.
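Budgets with threshold alerts can also be created from the command line. A minimal sketch, assuming a hypothetical billing account ID (older gcloud releases expose this under gcloud beta billing budgets):
gcloud billing budgets create --billing-account=000000-AAAAAA-BBBBBB \
    --display-name="monthly-project-budget" --budget-amount=1000USD \
    --threshold-rule=percent=0.5 --threshold-rule=percent=0.9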
Question 68 of 70
68. Question
Your company plans to archive data to Cloud Storage, which would be needed only in case of any compliance issues, or Audits. What is the command for creating the storage bucket with rare access and named ‘archive_bucket’?
Correct
Correct answer is B as the data would be rarely accessed, so Coldline is an ideal storage class. Also, gsutil needs the -c parameter to pass the storage class.
Refer GCP documentation – Storage Classes
Coldline – Data you expect to access infrequently (i.e., no more than once per year). Typically this is for disaster recovery, or data that is archived and may or may not be needed at some future time
Option A is wrong as rm is the wrong parameter and removes the data.
Option C is wrong as Nearline is not suited for data that needs rare access.
Option D is wrong as by default, gsutil would create a regional bucket.
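For reference, a bucket with the Coldline storage class can be created as follows (a sketch; the exact option wording may differ):
gsutil mb -c coldline gs://archive_bucket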
Question 69 of 70
69. Question
An application that relies on Cloud SQL to read infrequently changing data is predicted to grow dramatically. How can you increase capacity for more read-only clients?
Correct
Correct answer is D as read replicas can help handle the read traffic, reducing the load on the primary database.
Refer GCP documentation – Cloud SQL Replication Options
Cloud SQL provides the ability to replicate a master instance to one or more read replicas. A read replica is a copy of the master that reflects changes to the master instance in almost real time.
Option A is wrong as high availability is for failover and not for performance.
Option B is wrong as external replica is not recommended for scaling as it needs to be maintained and the network established for replication.
Option C is wrong as backups are more to restore the database in case of any outage.
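A read replica can be added with a single command. A minimal sketch with hypothetical instance names:
# Create a read replica of the existing primary to absorb read-only traffic
gcloud sql instances create example-replica --master-instance-name=example-primary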
Question 70 of 70
70. Question
You’ve been asked to add a new IAM member and grant them access to run some queries on BigQuery. Considering Google recommended best practices and the principle of least privilege, how would you assign the access?
Correct
Correct answer is D as the user would need the roles/bigquery.dataViewer and roles/bigquery.jobUser roles to access and query the BigQuery tables in line with least privilege. As per Google best practices, it is recommended to use predefined roles and to create groups to control access for multiple users with the same responsibility.
Refer GCP documentation – IAM Best Practices
Use Cloud IAM to apply the security principle of least privilege, so you grant only the necessary access to your resources.
We recommend collecting users with the same responsibilities into groups and assigning Cloud IAM roles to the groups rather than to individual users. For example, you can create a “data scientist” group and assign appropriate roles to enable interaction with BigQuery and Cloud Storage. When a new data scientist joins your team, you can simply add them to the group and they will inherit the defined permissions.
Options A & B are wrong as the predefined roles can be assigned directly and there is no need to create custom roles.
Option C is wrong as it is recommended to create groups instead of using individual users.
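The two predefined roles could be granted to a group as follows. A minimal sketch with a hypothetical project ID and group address:
gcloud projects add-iam-policy-binding example-project \
    --member="group:bq-analysts@example.com" --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding example-project \
    --member="group:bq-analysts@example.com" --role="roles/bigquery.jobUser"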