Projects are holders of resources, akin to Accounts in AWS. While AWS accounts are MUCH more than simple resource containers, this is still the best way to visualize the correspondence between AWS and GCP.

The trust boundary within a GCP Project is implicit — resources within a project automatically have access to each other, and they are denied access to resources in other projects by default.

Hence, a storage bucket and a compute instance within the same project don’t need any special access configuration. They can access each other without any policy wrangling.
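
For example (a minimal sketch, assuming the instance runs under its default service account with the default access scopes, and using a placeholder bucket name), an instance can list and read a bucket in its own project without any extra IAM bindings:

# Run from an SSH session on an instance in the SAME project as the bucket --
# no additional policy is needed to list or read objects.
gsutil ls gs://my-project-bucket
gsutil cp gs://my-project-bucket/app-config.json .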

Granular Resource Access within a Project — Setting access control at the Resource level

It’s great to have this type of implicit trust between resources. However, users often need to be RESTRICTED from accessing resources.

The principle of least privilege says — only grant as much access as is needed.

This post shows how this principle may be applied to Compute Engine instances.

Select your user from IAM and assign the following two roles. (At the very least, you would assign the ‘Service Account User’ role to the IAM user. This lets the user use built-in service accounts, which are used to access GCP services.)

  • Add two roles — the Compute Viewer role and the Service Account User role (a role in GCP is defined as a set of permissions).
  • This, as per the principle of least privilege, allows the user to view all instances. But, as we will show below, the user will only be granted log-on (SSH) access to a single instance. (A gcloud equivalent of these two grants is sketched after this list.)
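
A rough gcloud equivalent of the two project-level grants above (a sketch only — the user email and project ID are placeholders):

# Project-level grants: view compute resources, and act as (use) service accounts
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
  --member=user:jane@example.com \
  --role=roles/compute.viewer

gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
  --member=user:jane@example.com \
  --role=roles/iam.serviceAccountUser

Granting Service Account User at the project level is the simplest mapping of the console step; a tighter setup would grant it only on the specific service account the instance runs as.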

To grant access to a specific instance, choose the instance (note that what we are doing now is at the RESOURCE level; what we did before was at the project IAM level)

  • On the instance, “Add Members”
  • Assign the “Compute Instance Admin” role to the user. This will allow the user SSH access to the instance (see the command-line sketch after this list).
  • However, if the user tries to access any other instance, SSH access will be denied. They can still SEE the instance — as they have the Compute Viewer role.
  • This is basically the principle of least privilege at work. The user is allowed access ONLY to what she needs and nothing more.
  • The same example can be applied to restrict / control access to Disks, Storage Buckets, Images, Snapshots, etc.
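
If you prefer the command line, the resource-level grant above can be sketched roughly as follows (the instance name, zone and user are placeholders; roles/compute.instanceAdmin.v1 is the role ID that corresponds to ‘Compute Instance Admin (v1)’):

# Resource-level grant: this user may administer (and SSH into) ONLY this instance
gcloud compute instances add-iam-policy-binding my-instance \
  --zone=us-central1-a \
  --member=user:jane@example.com \
  --role=roles/compute.instanceAdmin.v1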

What about cross-project access?

If you think of AWS accounts, how does a resource in Account A access a resource in Account B? The answer is cross-account roles.

In a similar vein, a role binding is needed in GCP as well to provide cross-project access. A service account in project A is granted access (Compute Viewer, in this example) to compute resources in project B.

# Grant project A's service account viewer access to compute resources in project B
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID_B \
  --member=serviceAccount:${GCP_PROJECT_ID_A}@appspot.gserviceaccount.com \
  --role=roles/compute.viewer
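
The member above is project A’s App Engine default service account; any service account owned by project A could be used the same way. A quick way to verify the binding (a sketch, run while authenticated as that service account) is to list project B’s instances from project A:

# Should succeed once the compute.viewer binding on project B is in place
gcloud compute instances list --project=$GCP_PROJECT_ID_B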

Summary

A GCP project is like an AWS account, in that it is the holder of resources and comes with built-in IAM. While IAM is defined at the ORG level in GCP, it is still useful to think of it at the project level.

Access from resource A to resource B within a project is implicit. However, if a user’s access to resources within a project needs to be restricted, you would typically do a TWO-STEP configuration:

  1. The first is a GRANT at the IAM level (the user is granted viewer access, for example), and then
  2. a GRANT / DENY at the resource level (this resource is ALLOWED to be accessed by this user; all other resources are IMPLICITLY denied).

Granular access is at the heart of the principle of least privilege, and this post shows how to use the Google Cloud project boundary to accomplish it. Along with GCP projects, once you grasp service accounts, you can get rolling on GCP development.

Anuj holds professional certifications in Google Cloud, AWS as well as certifications in Docker and App Performance Tools such as New Relic. He specializes in Cloud Security, Data Encryption and Container Technologies.
