
The Complete Guide to Google Cloud Console — Projects, OAuth, APIs & Real-World Automation

Notion · 23 min read · Technology · Tutorial · Cloud


If you have ever wanted to programmatically interact with any Google service — upload a video to YouTube, read your Gmail, manage files on Google Drive, pull events from Google Calendar — you need to go through Google Cloud Console. It is the gateway, the control panel, the bouncer at the door. And once you understand how it works, it unlocks an enormous amount of automation potential.

This guide covers everything from scratch. We will use a real-world automation pipeline as a running example throughout: a system that automatically generates a daily politics podcast, uploads audio to Google Drive, publishes videos to YouTube, and sends newsletter emails via Gmail — all orchestrated by n8n workflows running on a home server.

Let us get into it.



What Is Google Cloud Console?


In plain language, Google Cloud Console is your control panel for Google APIs. It lives at console.cloud.google.com and it is where you go any time you want to build something that talks to Google services programmatically.


Think of it this way: Google has hundreds of services — YouTube, Drive, Gmail, Calendar, Maps, Sheets, and many more. Each of these services has an API (Application Programming Interface) that lets code interact with it. But Google does not just let anyone hit those APIs without oversight. You need to:

  1. Create a project to organize your work
  2. Enable the specific APIs you want to use
  3. Set up credentials so Google knows who is making requests
  4. Configure permissions so users can authorize your app

Google Cloud Console is where all four of those things happen.

When do you need it? Any time you want to automate something involving a Google service. Some real-world examples:

  • Uploading YouTube videos automatically from a script
  • Managing Google Drive files (upload, download, organize) via code
  • Sending emails through Gmail without opening a browser
  • Reading and creating Google Calendar events programmatically
  • Pulling data from Google Sheets into a dashboard
  • Using Google Maps or Geocoding APIs in an app

In our case, we needed it for all of the above. Our daily politics newsletter pipeline touches YouTube, Drive, Gmail, and Calendar — and every single one of those integrations required a trip through Google Cloud Console first.

Creating a Project

Why Projects Exist

A Google Cloud project is a container. It groups together all the APIs, credentials, billing, and quotas for a specific piece of work. Think of it like a folder for a client or a specific automation system.

Why not just have one big project for everything? Isolation. If you have a YouTube automation project and a separate Google Sheets dashboard project, you want them to have:

  • Separate API quotas — so one project hitting rate limits does not affect the other
  • Separate credentials — so revoking access to one does not break the other
  • Separate billing — so you can track costs independently
  • Separate team access — so collaborators on one project cannot see the other

How to Create One

  1. Go to console.cloud.google.com
  2. Click the project dropdown at the top of the page (it might say "Select a project" or show your current project name)
  3. Click New Project in the top right of the dialog
  4. Give it a name
  5. Optionally select an organization (if you have a Google Workspace account)
  6. Click Create

That is it. You now have a project. It takes about 10 seconds to provision.

Naming Conventions That Actually Help

Google lets you name projects almost anything, but good names save you headaches later. Some patterns that work well:

  • Purpose-first: youtube-automation, politics-newsletter, portfolio-website
  • Account-specific: youtube-automation-politics-channel (helpful when you have multiple channels)
  • Environment-based: newsletter-prod, newsletter-dev

Avoid generic names like "My Project" or "Test" — you will end up with five of those and no idea which is which.

Real example from our setup: We created a project called Youtube-Automation-Politics-Channel specifically for automating YouTube uploads and Google Drive hosting for our politics podcast. The name tells us exactly what it does and which channel it is tied to, even months later.


Enabling APIs


What "Enabling an API" Means

Creating a project does not automatically give you access to every Google API. You have to explicitly turn on each API you want to use. This is a deliberate design choice — it keeps things secure and makes quota management cleaner.

Enabling an API is like flipping a switch. Before it is enabled, any request to that API from your project will fail with an error. After it is enabled, requests go through (assuming you have valid credentials).

How to Enable an API

  1. In your project, go to APIs & Services then Library (or search "API Library" in the top search bar)
  2. Search for the API you need (e.g., "YouTube Data API")
  3. Click on it
  4. Click Enable

Done. It takes effect immediately.

Common APIs You Will Likely Need

  • YouTube Data API v3 — upload videos, manage playlists, read channel and video data
  • Google Drive API — upload, download, and organize Drive files
  • Gmail API — send and read email programmatically
  • Google Calendar API — read and create events
  • Google Sheets API — read and write spreadsheet data

Real Scenario: The 30-Second Fix

Here is something that will happen to you at least once: you write your code, set up your credentials, everything looks right — and you get an error like:

googleapiclient.errors.HttpError: <HttpError 403 "Access Not Configured. YouTube Data API v3 has not been used in project 123456 before or it is disabled.">

The fix is literally 30 seconds. Go to API Library, search for the API, click Enable. We hit this exact issue when our podcast upload script suddenly could not access Google Drive — turns out we had enabled YouTube Data API but forgot to enable the Drive API in the same project. A 30-second fix for what felt like a mysterious failure.

How Quotas Work

Each API has its own quota limits, and they vary wildly:

  • YouTube Data API v3: 10,000 units per day. A video upload costs 1,600 units. So you can upload about 6 videos per day on the free tier. Reading data costs much less (1-5 units per request).
  • Google Drive API: Extremely generous. 1 billion queries per day. For personal automation, you will never hit this.
  • Gmail API: 250 quota units per second per user. Also very generous for typical use.
  • Google Sheets API: 300 requests per minute per project. Fine for most automation, but can be tight for heavy read/write loops.

You can monitor your quota usage in the Console under APIs & Services then Dashboard. Each enabled API shows its traffic, errors, and quota consumption.
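
The YouTube numbers above boil down to simple arithmetic, which is worth doing before you design a pipeline. A small sketch — the unit costs are the commonly documented defaults, but verify them on your project's Quotas page, since Google can change them:

```python
# Rough daily-quota budgeting for the YouTube Data API v3.
# Unit costs below are the commonly documented defaults; check your
# project's Quotas page, since Google can change them.
DAILY_QUOTA = 10_000
COSTS = {
    "video_upload": 1_600,
    "read_request": 1,  # most list/read calls cost 1-5 units
}

def max_operations(operation: str, daily_quota: int = DAILY_QUOTA) -> int:
    """How many of one operation fit in a day's quota."""
    return daily_quota // COSTS[operation]

def remaining_after(uploads: int, reads: int = 0) -> int:
    """Units left after a day's planned work (may go negative)."""
    used = uploads * COSTS["video_upload"] + reads * COSTS["read_request"]
    return DAILY_QUOTA - used

print(max_operations("video_upload"))  # 6 uploads per day
print(remaining_after(uploads=1))      # 8400 units to spare after one upload
```

For a one-upload-per-day pipeline like ours, this shows there is plenty of headroom left for reads and retries.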

OAuth Consent Screen

This is the part that confuses most people the first time. Let us break it down.

What It Is

The OAuth consent screen is the permission dialog that users see when your app asks to access their Google data. You know those screens that say "App X wants to access your Google Drive" with a list of permissions? That is the consent screen, and you configure it in Google Cloud Console.

Even if you are the only user of your app (which is common for personal automation), you still need to set this up.

Internal vs External

When you create an OAuth consent screen, Google asks you to choose:

  • Internal: Only users within your Google Workspace organization can use the app. If you have a company Google Workspace account, this is simpler — no verification needed.
  • External: Anyone with a Google account can use the app. This is what you choose for personal Gmail accounts (since personal accounts do not have an "organization").

For most personal automation projects, you will choose External.

Testing Mode vs Production Mode

Here is where it gets important. When your app type is External, it starts in Testing mode. This means:

  • Only users you explicitly add as "test users" can authorize the app
  • There is a cap of 100 test users
  • Authorization tokens for test users expire after 7 days (you will need to re-authorize periodically)
  • No Google verification is required

Production mode removes these restrictions but requires Google to verify your app — they review your privacy policy, homepage, and how you use the data. For personal automation, staying in Testing mode is usually fine.

Why Testing Mode Matters — A Real Scenario

This bit us in a real way. We had our YouTube automation working perfectly with one Google account (abishek.lakandri69@gmail.com). Then we needed to add Google Drive uploads using a different account (abisheklamichhane@gmail.com).

When we tried to authorize the second account, we got:

Access blocked: This app's request is invalid

The fix? The second account was not in the test users list. In Testing mode, Google will flat-out reject authorization attempts from any account that is not explicitly listed as a test user.

How to Add Test Users

  1. Go to APIs & Services then OAuth consent screen
  2. Scroll down to the Test users section
  3. Click Add Users
  4. Enter the email address of the Google account you want to authorize
  5. Click Save

That is it. Now that account can go through the OAuth flow and authorize your app.

When to Move to Production

You should consider moving to Production if:

  • You have more than 100 users who need to authorize
  • You are building a public-facing application
  • You are tired of tokens expiring every 7 days

For a personal automation pipeline? Testing mode is perfectly fine. Just remember to add every Google account you plan to use as a test user.

Creating OAuth Credentials

Types of Credentials

Google Cloud Console offers three main credential types. Knowing which one to use is half the battle:

1. API Key

  • Simplest option
  • Used for accessing public data only (no user data)
  • Example: reading public YouTube video metadata
  • Cannot upload videos, access Drive files, or read email
  • Just a string you pass in your request

2. OAuth Client ID

  • Used when your app needs to access user data with their permission
  • The user goes through a consent flow and grants access
  • You get tokens (access + refresh) that let you act on behalf of that user
  • This is what you use for: YouTube uploads, Drive management, Gmail sending, Calendar access
  • This is what we use for our automation pipeline

3. Service Account

  • Used for server-to-server communication with no user interaction
  • The service account is its own identity (it has its own email address)
  • Great for accessing Google Cloud resources, Cloud Storage, BigQuery
  • Cannot upload to a personal YouTube channel (YouTube requires user OAuth)
  • Can access Drive files that are shared with the service account's email

Creating an OAuth Client ID (Step by Step)

  1. Go to APIs & Services then Credentials
  2. Click Create Credentials then OAuth Client ID
  3. For Application type, choose Desktop app for scripts and local automation (this is what we use), or Web application for hosted apps with a server-side redirect
  4. Give it a name (e.g., "Politics Newsletter Automation")
  5. Click Create

You will see two values:

  • Client ID — a long string like 123456-abcdef.apps.googleusercontent.com
  • Client Secret — a shorter secret string

Download the JSON file (click the download icon). This file contains both values and is what your code will use to initiate the OAuth flow.

Diagram: OAuth 2.0 authorization flow for a desktop app — your script gets the auth URL, the user sees the consent screen, the auth code redirects to localhost, and tokens are exchanged and saved.

Redirect URIs

For desktop apps, the redirect URI is typically http://localhost (or a specific port like http://localhost:8080). This is where Google sends the user after they authorize your app.

How it works:

  1. Your script opens a browser to Google's authorization page
  2. The user logs in and grants permission
  3. Google redirects to http://localhost:PORT with an authorization code
  4. Your script catches that code and exchanges it for tokens

For desktop apps, Google usually configures this automatically. If you see a redirect_uri_mismatch error, you need to add the correct redirect URI in the Console under your OAuth client's settings.
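
Step 3 delivers the authorization code as a query parameter on that localhost redirect. A minimal standard-library sketch of pulling it out — the URL below is illustrative, not a real code:

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(callback_url: str) -> str:
    """Pull the one-time authorization code out of Google's redirect.

    After the user approves the consent screen, Google sends the browser
    to something like http://localhost:8080/?code=...&scope=...
    """
    query = parse_qs(urlparse(callback_url).query)
    if "error" in query:
        # e.g. error=access_denied when the user clicks "Cancel"
        raise RuntimeError(f"Authorization failed: {query['error'][0]}")
    return query["code"][0]

# Illustrative redirect URL; parse_qs percent-decodes the code value.
print(extract_auth_code("http://localhost:8080/?code=4%2F0AX-demo&scope=youtube.upload"))
# 4/0AX-demo
```

The Google client libraries run a tiny local server to do exactly this; you only write it yourself if you are building the flow by hand.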

Scopes — What They Control

Scopes define exactly what your app can do with the user's data. When the user sees the consent screen, the scopes determine what permissions are listed. Some common ones (all under https://www.googleapis.com/auth/):

  • youtube.upload — upload videos to a YouTube channel
  • youtube.readonly — read channel and video data
  • drive — full access to all Drive files
  • drive.file — access only files your app created
  • gmail.send — send email, nothing else
  • calendar.readonly — read Calendar events

Best practice: Request the minimum scopes you need. Use drive.file instead of drive if you only need to access files your app created. Use gmail.send instead of full gmail if you only need to send.

Important: If you later need to add a new scope, you must have the user go through the entire authorization flow again. Adding a scope to your code is not enough — the user needs to explicitly grant the new permission.
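
One way to catch that mismatch before it bites: compare the scopes your saved tokens were granted against what the code now requires. A sketch — the scope sets here are illustrative:

```python
def missing_scopes(granted: set[str], required: set[str]) -> set[str]:
    """Scopes the saved token lacks; an empty set means no re-auth needed."""
    return required - granted

# Illustrative: tokens were granted only the YouTube upload scope,
# but the code has since grown a Drive requirement.
granted = {"https://www.googleapis.com/auth/youtube.upload"}
required = {
    "https://www.googleapis.com/auth/youtube.upload",
    "https://www.googleapis.com/auth/drive.file",
}

gap = missing_scopes(granted, required)
if gap:
    print(f"Re-run the consent flow; missing scopes: {sorted(gap)}")
```

Running this check at startup turns a confusing mid-pipeline permission error into an explicit "re-authorize now" message.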


Multi-Account Scenarios

Why You Might Need Multiple Accounts

In the real world, things get messy. Here are scenarios where you end up needing multiple Google accounts in a single automation pipeline:

  • Account termination: Your original YouTube account got terminated, so you created a new channel on a different account
  • Ownership separation: Your YouTube channel is on a personal account, but your organization's Drive is on a Workspace account
  • Permission boundaries: One account has admin access to a shared Drive, another owns the YouTube channel
  • Risk management: You do not want to put all your eggs in one basket

How OAuth Works Across Accounts

Here is the key insight: your OAuth Client ID (the app) is separate from the user who authorizes it. One OAuth app can be authorized by multiple different Google accounts. Each authorization produces its own set of tokens.

So the flow looks like:

  1. You create ONE OAuth app in your Google Cloud project
  2. User A authorizes it and you save their tokens to tokens-user-a.json
  3. User B authorizes it and you save their tokens to tokens-user-b.json
  4. Your code loads the right token file depending on which account it needs for each task
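
In code, step 4's per-task token selection can be a plain lookup. A minimal standard-library sketch — the file names and JSON layout are illustrative, not a fixed format:

```python
import json
from pathlib import Path

# Which saved credentials each service should use. File names are
# illustrative -- match whatever your authorization step wrote out.
TOKEN_FILES = {
    "youtube": Path(".youtube-tokens.json"),  # channel-owner account
    "drive": Path(".drive-tokens.json"),      # file-hosting account
}

def load_tokens(service: str) -> dict:
    """Load the saved token payload for the account that handles `service`."""
    path = TOKEN_FILES[service]
    if not path.exists():
        raise FileNotFoundError(
            f"No tokens for {service!r}; run the authorization flow first."
        )
    return json.loads(path.read_text())

# Usage: pass load_tokens("youtube") to the YouTube client,
# load_tokens("drive") to the Drive client.
```

The important property is that nothing about the OAuth app itself changes between accounts; only the token file differs.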

Real Scenario: Our Multi-Account Setup

In our politics newsletter pipeline:

  • YouTube uploads use abishek.lakandri69@gmail.com — this is the account that owns the YouTube channel
  • Google Drive uploads use abisheklamichhane@gmail.com — this is where the podcast audio files are hosted
  • Both accounts authorized the same OAuth app
  • Both accounts are listed as test users in the OAuth consent screen
  • Separate token files on the server: .youtube-tokens.json and .drive-tokens.json

The automation script knows which token file to load for each operation. YouTube upload code loads the YouTube tokens, Drive upload code loads the Drive tokens. Same app, different users, different tokens.

Diagram: multi-account OAuth, one app with multiple users — the same OAuth Client ID is authorized by different Google accounts, with separate token files for YouTube and Drive.

Setting It Up

  1. Add both email addresses as test users in the OAuth consent screen
  2. Run the authorization flow once for each account, saving tokens to different files
  3. In your code, load the appropriate token file for each operation
  4. Make sure the scopes match what each account needs (YouTube account needs youtube.upload, Drive account needs drive or drive.file)

Token Management

Access Tokens vs Refresh Tokens

When a user authorizes your app, you get two tokens:

  • Access Token: Short-lived (usually 1 hour). This is what you actually send with API requests. Think of it as a session key.
  • Refresh Token: Long-lived (months to years). When the access token expires, you use the refresh token to get a new access token without making the user go through the consent flow again. This is why you save the token file — it contains the refresh token, which is the valuable piece.

Auto-Refresh Pattern

Most Google API client libraries handle token refresh automatically. The typical pattern:

1. Load tokens from file
2. Check if access token is expired
3. If expired, use refresh token to get new access token
4. Save updated tokens back to file
5. Make API request with valid access token

The Google client libraries for Python, Node.js, Go, and others all do this transparently. You just need to make sure the token file is writable so the updated tokens can be saved.
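
For intuition, the check-and-save steps above can be sketched with the standard library. The expires_at field is illustrative — a Unix timestamp your code would store when it receives the token — and real client libraries do all of this for you:

```python
import json, time
from pathlib import Path

def needs_refresh(tokens: dict, skew_seconds: int = 60) -> bool:
    """True if the access token is expired or about to expire.

    `expires_at` is an illustrative field: a Unix timestamp stored when
    the token was received (now + the server's expires_in). The skew
    refreshes a little early so requests never race the expiry.
    """
    return time.time() >= tokens.get("expires_at", 0) - skew_seconds

def save_tokens(tokens: dict, path: Path) -> None:
    """Persist refreshed tokens so the next run does not refresh again."""
    path.write_text(json.dumps(tokens, indent=2))

stale = {"access_token": "old", "expires_at": time.time() - 10}
print(needs_refresh(stale))  # True -- time to use the refresh token
```

The save step is the one people forget, which is exactly the refresh-loop gotcha covered later in this guide.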

When Refresh Tokens Break

Refresh tokens are not immortal. They can stop working if:

  • The user changes their Google password — all refresh tokens are revoked
  • The user manually revokes access — via Google Account settings under "Third-party apps with account access"
  • You change the scopes in your app — the old tokens do not cover the new scopes
  • The refresh token has not been used in 6 months — Google may expire it
  • Your app is in Testing mode — tokens expire after 7 days
  • Google detects suspicious activity — rare but possible

When a refresh token breaks, you will see an invalid_grant error. The fix is always the same: run the authorization flow again to get new tokens.

PKCE for Desktop Apps

Modern OAuth for desktop apps uses PKCE (Proof Key for Code Exchange). This is a security enhancement that prevents authorization code interception attacks. The good news: most client libraries handle PKCE automatically. You do not need to implement it yourself.
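
For the curious, the handshake the libraries perform is small: a random verifier, its SHA-256 challenge, both base64url-encoded without padding, per RFC 7636's S256 method:

```python
import base64, hashlib, secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge.

    The challenge goes in the authorization URL; the verifier is sent
    later with the token exchange, proving both came from the same app
    even if the authorization code was intercepted.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43 characters, within RFC 7636's 43-128 range
```

Again: the Google client libraries do this for you; the sketch is only to demystify what "handled automatically" means.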

Storing Tokens on Servers

If your automation runs on a server (like our n8n setup on a Proxmox LXC container), you need to:

  1. Run the initial authorization on a machine with a browser — the OAuth flow opens a browser window
  2. Copy the resulting token file to the server
  3. Set appropriate file permissions — chmod 600 tokens.json — so only the service user can read them
  4. Make sure the token file path is not in a git repo or publicly accessible directory

Some automation tools (like n8n) have built-in OAuth handling that manages this for you through their credential system.
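
Step 3's chmod can also be done at write time, so the file never exists with loose permissions even for a moment. A sketch — the path is illustrative:

```python
import json, os, stat

def write_tokens_securely(tokens: dict, path: str) -> None:
    """Create the token file owner-read/write only (mode 600).

    Passing the mode to os.open at creation avoids a window where the
    file exists with default permissions before a later chmod tightens it.
    (The mode applies when the file is created, not to a pre-existing file.)
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(tokens, f)

write_tokens_securely({"refresh_token": "example"}, "/tmp/tokens.json")
print(oct(stat.S_IMODE(os.stat("/tmp/tokens.json").st_mode)))  # typically 0o600
```

This complements, rather than replaces, the chmod/chown commands shown in the security section — use whichever fits where the file is created.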

Service Accounts vs OAuth — When to Use Which

This distinction trips up a lot of people, so let us make it crystal clear.

Service Accounts

  • Act as their own identity (they have their own email: my-service@project-id.iam.gserviceaccount.com)
  • No user interaction needed — authenticate with a key file
  • Perfect for: accessing Google Cloud services (Cloud Storage, BigQuery, Pub/Sub), accessing shared resources
  • Cannot upload to a personal YouTube channel (YouTube requires a real user's OAuth consent)
  • Can access Google Drive files IF those files are shared with the service account's email

OAuth Client ID

  • Acts on behalf of a real user
  • Requires the user to go through a consent flow (at least once)
  • Perfect for: anything that needs to access a specific user's data
  • Required for: YouTube uploads, Gmail access, Calendar management

Diagram: Service Account vs OAuth Client ID decision flowchart — does this need to act as a specific user? YES leads to OAuth Client ID, NO leads to Service Account.

Decision Framework

Ask yourself: Does this need to act as a specific user?

  • Yes (upload to my YouTube channel, send email from my Gmail, access my Drive): Use OAuth Client ID
  • No (process files in Cloud Storage, query BigQuery, call a Google Cloud AI API): Use Service Account

In our automation pipeline, we use OAuth for everything because all our operations are tied to specific user accounts — uploading to a specific YouTube channel, accessing a specific person's Drive, sending from a specific Gmail.

Quotas and Billing

The Free Tier Is Generous

Here is something that surprises a lot of people: for most personal automation, Google Cloud is completely free. You do not need to set up billing for basic API usage.

API-Specific Quotas

YouTube Data API v3

  • 10,000 quota units per day
  • Video upload: 1,600 units (so about 6 uploads per day)
  • Read operations: 1-5 units each
  • For our daily podcast (1 upload per day), we use about 16% of the daily quota

Google Drive API

  • 1,000,000,000 (one billion) queries per day
  • For personal use, this is essentially unlimited
  • File uploads are counted separately, but the limits are also very high

Gmail API

  • 250 quota units per second per user
  • Daily sending limit: 500 emails for free Gmail, 2,000 for Workspace
  • For our daily newsletter (1 email per day), this is more than enough

Google Sheets API

  • 300 read requests per minute per project
  • 300 write requests per minute per project
  • Can be limiting for heavy automation — batch your requests

Monitoring Usage

To check your quota usage:

  1. Go to APIs & Services then Dashboard
  2. Click on the specific API
  3. Look at the Quotas tab
  4. You will see graphs showing usage over time and remaining quota

What Happens When You Hit Limits

When you exceed a quota:

  • Requests return HTTP 429 (Too Many Requests) or 403 (Quota Exceeded)
  • The error message usually tells you which quota you hit
  • Daily quotas reset at midnight Pacific Time
  • You can request quota increases in the Console (may require billing setup)

For most personal automation, you will never hit these limits. But if you are doing something intensive (bulk video uploads, mass email sends, heavy Sheets automation), keep an eye on the dashboard.
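
When a burst does trip a per-minute limit, the standard remedy is retry with exponential backoff. A generic standard-library sketch — RateLimitError is a stand-in for whatever 429/403 error your client library actually raises:

```python
import random, time

class RateLimitError(Exception):
    """Stand-in for the 429/403 quota error your client library raises."""

def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # 1x, 2x, 4x ... the base delay, plus jitter so parallel
            # workers do not all retry at the same instant
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

calls = {"n": 0}
def flaky_upload():
    """Demo callable that succeeds on the third try."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "uploaded"

print(with_backoff(flaky_upload, base_delay=0.1))  # uploaded
```

Backoff helps with per-minute limits; a blown daily quota, by contrast, only resets at midnight Pacific Time, so retrying is pointless there.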

Common Gotchas and Troubleshooting

Here are the errors you will inevitably encounter, what they mean, and how to fix them:

"Access blocked: This app's request is invalid"

Cause: The Google account trying to authorize is not in the test users list, and your app is in Testing mode.

Fix: Go to OAuth consent screen, add the email as a test user.

"API not enabled" or "has not been used in project"

Cause: You forgot to enable the API in your project.

Fix: Go to API Library, search for the API, click Enable. Takes 30 seconds.

"invalid_grant"

Cause: Your refresh token is no longer valid. Could be due to password change, revoked access, expired testing mode token, or scope changes.

Fix: Delete the token file and run the authorization flow again.

"insufficient authentication scopes"

Cause: Your token was created with scopes that do not cover the operation you are trying to perform. For example, you authorized with youtube.readonly but are trying to upload.

Fix: Update your code to request the correct scopes, delete the old token file, and re-authorize. The user needs to explicitly grant the new permissions.

"redirect_uri_mismatch"

Cause: The redirect URI your code is using does not match any of the redirect URIs configured in the Cloud Console for your OAuth client.

Fix: Go to Credentials, edit your OAuth Client ID, and add the correct redirect URI (usually http://localhost or http://localhost:PORT).

Token Refresh Loops

Cause: Your code refreshes the token but does not save it back to the file. Next request loads the old (expired) token and refreshes again. Wastes time and can hit rate limits.

Fix: Make sure your token refresh logic saves the updated tokens to disk.

Scope Confusion

Cause: Adding a new scope to your code does not automatically grant that scope. Scopes are locked in at authorization time.

Fix: Any time you add a new scope, you need to:

  1. Update your code with the new scope
  2. Delete the existing token file
  3. Run the authorization flow again
  4. The consent screen will now show the new permission
  5. The user grants it
  6. New tokens are saved with the expanded scopes

Security Best Practices

Never Commit Tokens to Git

This cannot be overstated. Your token files (and client secret JSON) contain credentials that give access to Google accounts. If they end up in a public GitHub repo, anyone can use them.

Add these to your .gitignore immediately:

*.tokens.json
client_secret*.json
token.json
credentials.json

Use Environment Variables for Sensitive Values

Instead of hardcoding Client IDs and secrets in your code:

CLIENT_ID=your-client-id
CLIENT_SECRET=your-secret
TOKEN_FILE=/secure/path/tokens.json

Load these from environment variables or a .env file (also gitignored).
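
A minimal sketch of that pattern with the standard library — the variable names match the example above, and the fail-loudly helper is just one reasonable convention:

```python
import os

def require_env(name: str) -> str:
    """Read a required setting, failing loudly instead of running half-configured."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Secrets stay out of the source tree; they come from the shell or a
# gitignored .env file loaded before the script starts.
CLIENT_ID = os.environ.get("CLIENT_ID", "")
CLIENT_SECRET = os.environ.get("CLIENT_SECRET", "")
TOKEN_FILE = os.environ.get("TOKEN_FILE", "tokens.json")
```

Using require_env for the truly mandatory values turns a silent misconfiguration into an immediate, readable startup error.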

Minimum Scopes

Always request the narrowest scopes possible:

  • Need to upload files to Drive? Use drive.file (only accesses files your app created) instead of drive (accesses everything)
  • Need to send emails? Use gmail.send instead of full gmail access
  • Need to read Calendar? Use calendar.readonly instead of calendar

If your token is compromised, narrower scopes limit the damage.

Review Third-Party Access Regularly

Go to myaccount.google.com/permissions periodically and review what apps have access to your Google account. Revoke anything you no longer use.

Rotate Tokens Periodically

Even though refresh tokens are long-lived, it is good practice to:

  • Re-authorize every few months
  • Delete old token files
  • Monitor for invalid_grant errors as a signal that something changed

File Permissions on Servers

If tokens live on a server:

chmod 600 /path/to/tokens.json
chown service-user:service-user /path/to/tokens.json

Only the service user should be able to read the token file.


Putting It All Together — Our Real Pipeline

To make all of this concrete, here is how Google Cloud Console fits into our actual automation:

The System: A daily politics newsletter with podcast audio, generated by AI, uploaded to multiple Google services, and delivered to subscribers.

Diagram: real-world pipeline for the daily politics newsletter automation — cron trigger to AI generation to Google Drive upload to YouTube publish to Gmail newsletter, all orchestrated by n8n.

Google Cloud Setup:

  1. One Google Cloud project: Youtube-Automation-Politics-Channel
  2. APIs enabled: YouTube Data API v3, Google Drive API, Gmail API
  3. OAuth consent screen: External, Testing mode, two test users added
  4. One OAuth Client ID (Desktop app type)
  5. Two sets of tokens: one for the YouTube account, one for the Drive account

Daily Flow:

  1. n8n workflow triggers at 5 AM
  2. AI generates newsletter content and podcast audio
  3. Audio file uploads to Google Drive (using Drive account tokens)
  4. Video publishes to YouTube (using YouTube account tokens)
  5. Newsletter email sends via Gmail API
  6. All API calls stay well within free tier quotas

What Went Into Setting This Up:

  • 15 minutes creating the project and enabling APIs
  • 10 minutes configuring the OAuth consent screen and adding test users
  • 5 minutes creating OAuth credentials
  • 20 minutes running authorization flows for both accounts
  • 0 dollars in Google Cloud charges

The initial setup takes under an hour, and then it just works. The only maintenance is re-authorizing when tokens expire (every 7 days in Testing mode) or when Google passwords change.

Quick Reference Cheat Sheet

  • "Access blocked: This app's request is invalid" — add the account as a test user on the OAuth consent screen
  • "Access Not Configured" / "API not enabled" — enable the API in the API Library
  • invalid_grant — delete the token file and run the authorization flow again
  • "insufficient authentication scopes" — request the right scopes, delete the token file, re-authorize
  • redirect_uri_mismatch — add the correct redirect URI to your OAuth client
  • Testing mode limits — tokens expire every 7 days; only listed test users can authorize


Final Thoughts

Google Cloud Console looks intimidating at first because it is designed for enterprises managing massive cloud infrastructure. But for personal automation? You only need a tiny fraction of what it offers. Create a project, enable your APIs, set up OAuth, get your tokens, and you are off to the races.

The biggest lesson from building our automation pipeline: most problems come from the initial setup, not the ongoing usage. Once your project is configured, APIs are enabled, test users are added, and tokens are generated — everything just works. The Console becomes something you visit once in a while to check quotas or add a new test user.

Start small. Pick one Google service you want to automate. Walk through the setup. Once you have done it once, every subsequent API integration follows the exact same pattern. And that pattern — project, API, consent screen, credentials, tokens — is now something you know inside and out.

