

GCP Cloud Provider

The GCP (Google Cloud Platform) cloud provider is a production-ready implementation that enables Zenoo Hub to run on Google Cloud infrastructure. It provides seamless integration with core GCP services including Cloud Firestore for storage, Secret Manager for configuration and secrets management, and Cloud Monitoring for metrics publishing. The GCP provider is designed to be highly scalable, secure, and cost-effective, leveraging Google Cloud’s global infrastructure and managed services. It supports Application Default Credentials (ADC) for authentication, making it easy to deploy on GCE, GKE, or Cloud Run without managing service account keys.

Supported GCP Services

| Service | Purpose | Implementation Module |
|---|---|---|
| Cloud Firestore (Native Mode) | Component storage, API key lookups, sharable tokens | gcp-stores |
| Secret Manager | Component configuration, API key secrets | gcp-secrets |
| Cloud Monitoring | Metrics publishing with batching | gcp-metrics |
| IAM | Authentication and authorization | All modules |

Prerequisites

Before using the GCP cloud provider, ensure you have:
  • GCP Project with billing enabled
  • gcloud CLI installed and authenticated (gcloud auth login)
  • Required APIs enabled (see Quick Start below)
  • Service Account with appropriate IAM roles (see IAM Permissions)
  • Spring Boot 3.x application (Hub requires Spring Boot 3.3.11 or later)
  • Java 21 runtime

Quick Start

Get started with the GCP provider in under 10 minutes.

1. Add Dependency

Add the GCP Spring Boot starter to your build.gradle:
dependencies {
    implementation project(':cloud-provider-gcp:gcp-spring-boot-starter')
}
Or if using the published artifact:
dependencies {
    implementation 'com.zenoo.hub:gcp-spring-boot-starter:VERSION'
}

2. Enable Required APIs

Enable the necessary GCP APIs for your project:
# Set your project ID
export PROJECT_ID="your-gcp-project-id"

# Enable APIs
gcloud services enable firestore.googleapis.com --project=${PROJECT_ID}
gcloud services enable secretmanager.googleapis.com --project=${PROJECT_ID}
gcloud services enable monitoring.googleapis.com --project=${PROJECT_ID}

3. Configure Application

Create or update application.yml with minimal GCP configuration:
spring:
  profiles:
    active: gcp

hub:
  cloud:
    provider:
      type: gcp
  gcp:
    projectId: ${GCP_PROJECT_ID}
    # credentialsLocation: /path/to/key.json  # Optional - uses ADC if omitted
For local development, authenticate with ADC:
gcloud auth application-default login

4. Start Application

./gradlew bootRun
The Hub will automatically:
  • Connect to Firestore and create necessary collections
  • Initialize Secret Manager for configuration storage
  • Start publishing metrics to Cloud Monitoring

Cloud Firestore Storage

The GCP provider uses Cloud Firestore in Native Mode for storing components, API key mappings, and sharable tokens. Firestore provides real-time synchronization, automatic scaling, and strong consistency guarantees.

Important: Native Mode Required

Critical: The Hub requires Firestore in Native Mode, not Datastore Mode. If you’re creating a new Firestore database, ensure you select Native Mode. The two modes are not compatible and cannot be changed after creation.

Collections Schema

The GCP provider automatically creates and manages three Firestore collections:

Components Collection ({prefix}-components)

Stores component definitions with versioning support.

Document ID Pattern: {componentName}_{revision}

Fields:
  • componentName (string) - Component identifier
  • revision (number) - Version number (1, 2, 3, …)
  • definition (string) - Component DSL definition (Groovy code)
  • metadata (map) - Component metadata
  • dependencies (array) - List of dependency component names
  • connectors (array) - List of connector names
  • createdAt (timestamp) - Creation timestamp
  • updatedAt (timestamp) - Last update timestamp
LATEST Pointer: The provider maintains a special document {componentName}_LATEST that points to the current active revision. This enables efficient retrieval of the latest version without querying all revisions.

Example Documents:
onboarding-workflow_1    # Revision 1
onboarding-workflow_2    # Revision 2
onboarding-workflow_3    # Revision 3
onboarding-workflow_LATEST   # Points to revision 3
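Given that schema, resolving the active component is a two-step lookup; a minimal sketch of the document-ID arithmetic (the revision value is illustrative):

```shell
# Sketch only: resolve the active revision through the LATEST pointer.
component="onboarding-workflow"

# Step 1: read "${component}_LATEST"; per the schema above it identifies the
# current revision -- assume it returned revision 3.
revision=3

# Step 2: the active document ID follows the {componentName}_{revision} pattern.
doc_id="${component}_${revision}"
echo "${doc_id}"
# → onboarding-workflow_3
```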

API Keys Collection ({prefix}-api-keys)

Maps component names to their corresponding Secret Manager secret names for API key lookups.

Document ID: {componentName}

Fields:
  • component (string) - Component name
  • secretName (string) - Secret Manager secret name
  • bidirectional (boolean) - Supports reverse lookup
Purpose: Enables efficient bidirectional mapping between components and API keys stored in Secret Manager.

Example Document:
{
  "component": "payment-gateway",
  "secretName": "zenoo-hub/api-key/stripe-api-key",
  "bidirectional": true
}

Sharables Collection ({prefix}-sharables)

Stores temporary sharable tokens with automatic TTL-based cleanup.

Document ID: {token-uuid}

Fields:
  • token (string) - Token identifier
  • payload (string) - Base64-encoded payload
  • expiresAt (timestamp) - Expiration timestamp
  • expired (boolean) - Expiration flag
  • reusable (boolean) - Whether token can be used multiple times
  • ttl (duration) - Time-to-live (triggers automatic deletion)
TTL Feature: Firestore automatically deletes expired documents based on the ttl field, eliminating the need for manual cleanup.

Example Document:
{
  "token": "a7f8d9e2-3c4b-5a6e-7d8f-9e0a1b2c3d4e",
  "payload": "eyJkYXRhIjoidmFsdWUifQ==",
  "expiresAt": "2026-02-03T12:00:00Z",
  "expired": false,
  "reusable": false,
  "ttl": "2026-02-03T12:00:00Z"
}
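The payload field is plain Base64; decoding the example document’s payload with coreutils base64 (a local sanity check, not a Hub API):

```shell
# Decode the Base64 payload from the example sharable document above.
payload="eyJkYXRhIjoidmFsdWUifQ=="
decoded=$(printf '%s' "${payload}" | base64 -d)
echo "${decoded}"
# → {"data":"value"}
```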

Features

  • Atomic LATEST Pointer Updates - Uses Firestore transactions to ensure consistency
  • Component Versioning - Complete revision history for all components
  • Bidirectional API Key Lookup - Fast component-to-secret and secret-to-component mapping
  • Automatic TTL Cleanup - Firestore deletes expired sharables automatically
  • Composite Indexes - Auto-created indexes for efficient queries (if enabled)
  • Strong Consistency - Firestore provides strong consistency for all reads
  • Real-time Updates - Native support for real-time listeners (not used by default)

Configuration

Configure Firestore behavior with these properties:
hub:
  gcp:
    firestore:
      database: "(default)"           # Firestore database name
      prefix: "zenoo-hub"             # Collection name prefix
      createIndexes: true             # Auto-create composite indexes
      ttlEnabled: true                # Enable TTL for sharables
      ttlField: "ttl"                 # TTL field name
      retryStrategy:
        requestTimeout: 500ms         # Per-request timeout
        maxRetries: 10                # Retry attempts
        backoff: 100ms                # Exponential backoff base
Notes:
  • prefix - All collection names are prefixed with this value (e.g., zenoo-hub-components)
  • createIndexes - Set to true for automatic composite index creation; set to false for manual management
  • ttlEnabled - Must be true for automatic sharable cleanup
  • retryStrategy - Configures exponential backoff for transient errors
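The docs give only the backoff base, so the exact schedule is an assumption; assuming the delay doubles per attempt, the first retries with backoff: 100ms would look like this:

```shell
# Sketch: exponential backoff delays from a 100ms base, assuming the delay
# doubles on each attempt (the doubling factor is an assumption).
base_ms=100
delay=${base_ms}
for attempt in 1 2 3 4; do
  echo "attempt ${attempt}: ${delay}ms"
  delay=$(( delay * 2 ))
done
# → attempt 1: 100ms, attempt 2: 200ms, attempt 3: 400ms, attempt 4: 800ms
```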

Secret Manager

Google Cloud Secret Manager provides secure, centralized storage for component configuration and API keys. The GCP provider uses Secret Manager for all sensitive data, leveraging automatic encryption at rest and fine-grained IAM access control.

Component Configuration Secrets

Component configuration is stored as versioned secrets in Secret Manager.

Naming Convention:
{prefix}/component-config/{key}/{version}
Example:
zenoo-hub/component-config/onboarding-workflow/1.0.0
zenoo-hub/component-config/payment-gateway/2.1.3
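The convention is plain string concatenation; a quick sketch (component and version values illustrative):

```shell
# Build a component-config secret name following the convention above.
prefix="zenoo-hub"
key="onboarding-workflow"
version="1.0.0"
secret_name="${prefix}/component-config/${key}/${version}"
echo "${secret_name}"
# → zenoo-hub/component-config/onboarding-workflow/1.0.0
```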
Features:
  • Version-labeled Secrets - Stores semantic version labels (version-1.0.0) on Secret resource
  • Automatic Version Management - New versions created automatically on updates
  • Caching - Caffeine-based cache reduces Secret Manager API calls
  • Batch Operations - Supports bulk secret loading on startup
Version Management: Secret Manager doesn’t natively support semantic versioning labels on secret versions. The GCP provider works around this by:
  1. Storing version labels (version-{semantic}) on the Secret resource metadata
  2. Mapping semantic versions to Secret Manager version numbers
  3. Managing version limits automatically (deletes oldest versions when limit reached)
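The label-to-version mapping can be pictured as a simple lookup; a hypothetical sketch (the version-{semantic} label format comes from the text above, the data is illustrative):

```shell
# Hypothetical mapping from semantic-version labels to Secret Manager version
# numbers; the real mapping lives on the Secret resource metadata.
resolve_version() {
  case "$1" in
    version-1.0.0) echo 1 ;;
    version-1.1.0) echo 2 ;;
    version-2.0.0) echo 3 ;;
    *) echo "unknown" >&2; return 1 ;;
  esac
}
resolve_version version-2.0.0
# → 3
```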

API Key Secrets

API keys are stored as JSON-encoded secrets.

Naming Convention:
{prefix}/api-key/{keyName}
Example:
zenoo-hub/api-key/stripe-api-key
zenoo-hub/api-key/sendgrid-api-key
Secret Structure:
{
  "keyName": "stripe-api-key",
  "secret": "sk_live_4eC39HqLyjWDarjtT1zdp7dc",
  "permissions": ["read", "write"],
  "metadata": {
    "environment": "production",
    "createdBy": "admin@example.com"
  }
}
Features:
  • Bidirectional lookup via Firestore API Keys collection
  • JSON serialization/deserialization
  • Automatic caching with configurable TTL
  • Permission metadata support

Configuration

Configure Secret Manager behavior:
hub:
  gcp:
    secrets:
      prefix: "zenoo-hub"             # Secret name prefix
      cacheSize: 128                  # Caffeine cache max entries
      cacheExpiry: 30m                # Cache TTL
      versionsLimit: 18               # Max versions per secret
      forceDelete: true               # Immediate deletion (vs 30-day recovery)
      retryStrategy:
        requestTimeout: 2s            # Handles eventual consistency
        maxRetries: 5                 # Retry attempts
        backoff: 200ms                # Exponential backoff base
Notes:
  • cacheSize - Increase for applications with many configuration keys
  • cacheExpiry - Balance between freshness and API costs
  • versionsLimit - Old versions are automatically deleted when limit is reached
  • forceDelete: false - Recommended for production (enables 30-day recovery window)
  • requestTimeout - 2s recommended to handle Secret Manager eventual consistency

Configuration Reference

Complete reference for all GCP provider configuration properties.

Core GCP Configuration

| Property | Type | Default | Required | Description |
|---|---|---|---|---|
| hub.cloud.provider.type | string | - | Yes | Must be gcp to enable GCP provider |
| hub.gcp.projectId | string | - | Yes | GCP project ID |
| hub.gcp.credentialsLocation | path | ADC | No | Path to service account key JSON file |
| hub.gcp.enabled | boolean | true | No | Enable/disable GCP provider |

Firestore Configuration

| Property | Type | Default | Required | Description |
|---|---|---|---|---|
| hub.gcp.firestore.database | string | "(default)" | No | Firestore database name |
| hub.gcp.firestore.prefix | string | "zenoo-hub" | No | Collection name prefix |
| hub.gcp.firestore.createIndexes | boolean | true | No | Auto-create composite indexes |
| hub.gcp.firestore.ttlEnabled | boolean | true | No | Enable TTL for sharables |
| hub.gcp.firestore.ttlField | string | "ttl" | No | TTL field name |
| hub.gcp.firestore.retryStrategy.requestTimeout | duration | 500ms | No | Per-request timeout |
| hub.gcp.firestore.retryStrategy.maxRetries | integer | 10 | No | Max retry attempts |
| hub.gcp.firestore.retryStrategy.backoff | duration | 100ms | No | Exponential backoff base |

Secret Manager Configuration

| Property | Type | Default | Required | Description |
|---|---|---|---|---|
| hub.gcp.secrets.prefix | string | "zenoo-hub" | No | Secret name prefix |
| hub.gcp.secrets.cacheSize | integer | 128 | No | Caffeine cache max entries |
| hub.gcp.secrets.cacheExpiry | duration | 30m | No | Cache TTL |
| hub.gcp.secrets.versionsLimit | integer | 18 | No | Max versions per secret |
| hub.gcp.secrets.forceDelete | boolean | true | No | Immediate delete (vs 30-day recovery) |
| hub.gcp.secrets.retryStrategy.requestTimeout | duration | 2s | No | Per-request timeout |
| hub.gcp.secrets.retryStrategy.maxRetries | integer | 5 | No | Max retry attempts |
| hub.gcp.secrets.retryStrategy.backoff | duration | 200ms | No | Exponential backoff base |

Cloud Monitoring Configuration

| Property | Type | Default | Required | Description |
|---|---|---|---|---|
| hub.gcp.metrics.enabled | boolean | true | No | Enable metrics publishing |
| hub.gcp.metrics.prefix | string | "hub" | No | Metric name prefix |
| hub.gcp.metrics.batchSize | integer | 200 | No | Max time series per batch (GCP limit: 200) |
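Since Cloud Monitoring caps each request at 200 time series, the number of requests per publishing cycle is a ceiling division; a quick sketch (the series count is illustrative):

```shell
# Requests needed to publish N time series with batchSize=200 (ceiling division).
series=450
batch_size=200
requests=$(( (series + batch_size - 1) / batch_size ))
echo "${requests}"
# → 3
```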

Complete Configuration Examples

Minimal Development Configuration

# Uses Application Default Credentials (ADC) and all defaults
hub:
  cloud:
    provider:
      type: gcp
  gcp:
    projectId: dev-project-123

Production Configuration

# Full production configuration with service account key
hub:
  cloud:
    provider:
      type: gcp
  gcp:
    projectId: prod-project-456
    credentialsLocation: /run/secrets/gcp-sa-key.json

    firestore:
      prefix: "hub-prod"
      createIndexes: true
      ttlEnabled: true
      retryStrategy:
        maxRetries: 15
        requestTimeout: 1s

    secrets:
      prefix: "hub-prod"
      cacheSize: 256
      cacheExpiry: 1h
      versionsLimit: 30
      forceDelete: false        # Enable 30-day recovery for production

    metrics:
      enabled: true
      prefix: "hub-prod"
      batchSize: 200

High Performance Configuration

# Optimized for high throughput
hub:
  gcp:
    projectId: perf-test-789

    firestore:
      prefix: "hub-perf"
      retryStrategy:
        maxRetries: 20
        backoff: 50ms           # Faster retries

    secrets:
      cacheSize: 512            # Larger cache
      cacheExpiry: 2h           # Longer TTL
      versionsLimit: 50

    metrics:
      batchSize: 200            # Max batch size

Multi-Environment Configuration

# Development
spring:
  config:
    activate:
      on-profile: dev
hub:
  gcp:
    projectId: hub-dev
    firestore:
      prefix: "hub-dev"
    secrets:
      prefix: "hub-dev"
      forceDelete: true         # No recovery needed
    metrics:
      enabled: false            # Reduce costs

---
# Staging
spring:
  config:
    activate:
      on-profile: staging
hub:
  gcp:
    projectId: hub-staging
    firestore:
      prefix: "hub-staging"
    secrets:
      prefix: "hub-staging"
    metrics:
      enabled: true

---
# Production
spring:
  config:
    activate:
      on-profile: prod
hub:
  gcp:
    projectId: hub-prod
    firestore:
      prefix: "hub-prod"
    secrets:
      prefix: "hub-prod"
      forceDelete: false        # Enable recovery
    metrics:
      enabled: true

IAM Permissions

The GCP provider requires specific IAM permissions to access Firestore, Secret Manager, and Cloud Monitoring. This section details the minimum required permissions and recommended IAM roles.

Required Permissions

Firestore Access

datastore.entities.get
datastore.entities.create
datastore.entities.update
datastore.entities.delete
datastore.entities.list
datastore.indexes.create      # Only if createIndexes=true

Secret Manager Access

secretmanager.secrets.create
secretmanager.secrets.get
secretmanager.secrets.update
secretmanager.secrets.delete
secretmanager.versions.access
secretmanager.versions.add
secretmanager.versions.destroy

Cloud Monitoring

monitoring.timeSeries.create

Predefined Roles (Minimum Required)

The simplest approach is to use Google’s predefined IAM roles:
# Set variables
export PROJECT_ID="your-gcp-project"
export SA_EMAIL="hub-cloud-provider@${PROJECT_ID}.iam.gserviceaccount.com"

# Firestore access
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/datastore.user"

# Secret Manager access
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/secretmanager.admin"

# Cloud Monitoring access
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/monitoring.metricWriter"

Custom Role (Least Privilege Approach)

For production environments, create a custom role with minimal permissions:
# Create custom role with exact permissions needed
gcloud iam roles create HubCloudProvider \
  --project=${PROJECT_ID} \
  --title="Hub Cloud Provider" \
  --description="Minimum permissions for Zenoo Hub on GCP" \
  --permissions="\
datastore.entities.get,\
datastore.entities.create,\
datastore.entities.update,\
datastore.entities.delete,\
datastore.entities.list,\
secretmanager.secrets.create,\
secretmanager.secrets.get,\
secretmanager.secrets.update,\
secretmanager.versions.access,\
secretmanager.versions.add,\
monitoring.timeSeries.create" \
  --stage=GA

# Bind custom role to service account
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="projects/${PROJECT_ID}/roles/HubCloudProvider"
Note: If hub.gcp.firestore.createIndexes=true, add datastore.indexes.create permission.

Service Account Setup

Create Service Account

# Create service account for Hub
gcloud iam service-accounts create hub-cloud-provider \
  --display-name="Hub Cloud Provider" \
  --description="Service account for Zenoo Hub application" \
  --project=${PROJECT_ID}

# Get service account email
SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Hub Cloud Provider" \
  --format="value(email)" \
  --project=${PROJECT_ID})

echo "Service Account Email: ${SA_EMAIL}"

Grant Permissions

# Use predefined roles (easiest)
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/datastore.user"

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/secretmanager.admin"

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/monitoring.metricWriter"

Authentication Options

Option 1: Application Default Credentials (Recommended for GCP)

When running on GCP infrastructure (GCE, GKE, Cloud Run), attach the service account to the compute resource:
# GCE VM
gcloud compute instances set-service-account INSTANCE_NAME \
  --service-account=${SA_EMAIL} \
  --scopes=cloud-platform \
  --zone=ZONE

# GKE (use Workload Identity - see GKE documentation)
# Cloud Run (specify service account in deployment)
No credentialsLocation configuration needed - ADC automatically detects the attached service account.

Option 2: Service Account Key File (For non-GCP deployments)
# Create service account key
gcloud iam service-accounts keys create hub-sa-key.json \
  --iam-account=${SA_EMAIL} \
  --project=${PROJECT_ID}

# Securely store the key file
# Configure Hub to use the key
In application.yml:
hub:
  gcp:
    credentialsLocation: /secure/path/to/hub-sa-key.json
Security Warning: Service account keys are long-lived credentials. Prefer ADC (Option 1) or Workload Identity on GKE. If using keys:
  • Store keys securely (e.g., Google Cloud Secret Manager, Kubernetes secrets)
  • Rotate keys regularly (every 90 days recommended)
  • Never commit keys to source control

Authentication Methods

The GCP provider supports three authentication methods, in order of preference.

1. Application Default Credentials (ADC)

Application Default Credentials (ADC) is Google’s recommended authentication mechanism. It automatically discovers credentials from the environment in this order:
  1. GOOGLE_APPLICATION_CREDENTIALS environment variable (points to key file)
  2. User credentials from gcloud auth application-default login
  3. Service account attached to GCE VM, GKE pod, or Cloud Run instance
  4. Default service account from Compute Engine metadata service
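That discovery order can be pictured as a simple cascade; a rough sketch only (this approximates, but is not, the SDK’s actual logic; names and paths are illustrative):

```shell
# Approximate the ADC lookup cascade described above (sketch only).
adc_source() {
  creds_env="$1"      # value of GOOGLE_APPLICATION_CREDENTIALS
  user_adc_file="$2"  # gcloud user ADC file path
  if [ -n "${creds_env}" ]; then
    echo "key-file"
  elif [ -f "${user_adc_file}" ]; then
    echo "user-adc"
  else
    echo "metadata-server"
  fi
}
adc_source "/path/to/key.json" ""
# → key-file
```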
Configuration: None required - just omit hub.gcp.credentialsLocation

Local Development:
gcloud auth application-default login
GCP Deployment:
# Attach service account to VM
gcloud compute instances set-service-account VM_NAME \
  --service-account=SA_EMAIL \
  --zone=ZONE
Advantages:
  • No credential files to manage
  • Automatic credential refresh
  • Follows Google Cloud best practices
  • Works seamlessly on GCP infrastructure

2. Service Account Key File

Explicitly specify a service account key file path. Configuration:
hub:
  gcp:
    credentialsLocation: /path/to/service-account-key.json
Or via environment variable:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
Use Cases:
  • Running Hub outside GCP (on-premises, other clouds)
  • Testing with specific service account
  • Development environments without gcloud CLI
Disadvantages:
  • Manual key management and rotation required
  • Security risk if key is compromised
  • Must securely distribute keys to all instances

3. GCE/GKE Metadata Service (Automatic)

When running on Google Cloud compute resources with an attached service account, ADC automatically uses the metadata service.

Configuration: None required

Requirements:
  • Service account attached to compute resource
  • Compute resource has cloud-platform scope (or specific scopes)
Verification:
# On GCE VM, check attached service account
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
GKE Workload Identity: For GKE, use Workload Identity for enhanced security:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hub-service-account
  annotations:
    iam.gke.io/gcp-service-account: SA_EMAIL

---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: hub-service-account
      containers:
      - name: hub
        image: hub:latest

Performance Tuning

Optimize GCP provider performance for your workload.

Firestore Optimization

Batch Operations: The GCP provider uses batch operations for bulk component updates. Firestore supports up to 500 operations per batch.

Composite Indexes: Enable automatic index creation for complex queries:
hub:
  gcp:
    firestore:
      createIndexes: true
Or create indexes manually via Firebase Console or firestore.indexes.json.

Retry Strategy: Tune retry behavior for your latency requirements:
hub:
  gcp:
    firestore:
      retryStrategy:
        maxRetries: 20          # More retries for unstable networks
        backoff: 50ms           # Faster retries (increases load)
        requestTimeout: 1s      # Longer timeout for large documents
Connection Pooling: Firestore SDK manages connection pooling automatically. No configuration needed.

Secret Manager Optimization

Cache Configuration: Reduce Secret Manager API calls by tuning the cache:
hub:
  gcp:
    secrets:
      cacheSize: 512          # Increase if you have many config keys
      cacheExpiry: 2h         # Longer TTL reduces API calls
Cache Hit Rate Monitoring: Monitor cache effectiveness:
curl http://localhost:5080/actuator/metrics/hub.cache.hit.rate
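The reported rate is hits / (hits + misses); a quick sanity check of the arithmetic (counts illustrative):

```shell
# Cache hit rate = hits / (hits + misses), with illustrative counts.
hits=950
misses=50
rate=$(awk -v h="${hits}" -v m="${misses}" 'BEGIN { printf "%.2f", h / (h + m) }')
echo "${rate}"
# → 0.95
```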
Target 95%+ cache hit rate for optimal performance.

Batch Secret Loading: Load all secrets at startup to populate cache:
hub:
  gcp:
    secrets:
      preloadOnStartup: true    # Load all secrets into cache
Version Limit: Keeping more versions increases storage costs but enables rollback to earlier configuration versions:
hub:
  gcp:
    secrets:
      versionsLimit: 30         # Keep 30 versions (default: 18)
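Assuming oldest-first deletion (as described in the Secret Manager section), pruning at the limit amounts to destroying the excess oldest versions; a sketch with illustrative counts:

```shell
# Sketch: versions destroyed once the count exceeds versionsLimit,
# assuming the provider prunes oldest-first.
total_versions=21
limit=18
excess=$(( total_versions - limit ))
i=1
while [ "${i}" -le "${excess}" ]; do
  echo "destroy version ${i}"
  i=$(( i + 1 ))
done
# → destroys versions 1, 2, 3 (the oldest), keeping the newest 18
```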

Cloud Monitoring Optimization

Batch Size: Configure batch size based on metric volume:
hub:
  gcp:
    metrics:
      batchSize: 200            # Max allowed by Cloud Monitoring
Selective Metrics: Disable metrics in non-production environments:
spring:
  config:
    activate:
      on-profile: dev
hub:
  gcp:
    metrics:
      enabled: false            # Reduces costs and API calls

Security Best Practices

Secure your GCP provider deployment.

Service Account Security

1. Use Least Privilege IAM Roles

Create custom roles with minimum required permissions (see IAM Permissions).

2. Rotate Service Account Keys

If using key files, rotate every 90 days:
# Create new key
gcloud iam service-accounts keys create new-key.json \
  --iam-account=SA_EMAIL

# Update application configuration
# Delete old key
gcloud iam service-accounts keys delete OLD_KEY_ID \
  --iam-account=SA_EMAIL
3. Prefer Workload Identity on GKE

Use GKE Workload Identity instead of service account keys for enhanced security and automatic credential rotation.

4. Separate Service Accounts per Environment

Use different service accounts for dev, staging, and production:
hub-dev@project.iam.gserviceaccount.com
hub-staging@project.iam.gserviceaccount.com
hub-prod@project.iam.gserviceaccount.com

Secret Management Security

1. Enable 30-Day Recovery for Production
hub:
  gcp:
    secrets:
      forceDelete: false        # Enables 30-day recovery window
2. Enable Cloud Audit Logs

Monitor secret access:
gcloud logging read "protoPayload.serviceName=secretmanager.googleapis.com" \
  --limit=50 \
  --format=json
3. Implement Secret Rotation Policies

Rotate API keys and sensitive configuration regularly:
  • Database passwords: Every 90 days
  • API keys: Every 180 days
  • Encryption keys: Yearly
4. Use Secret Manager Replication

For multi-region deployments:
gcloud secrets create SECRET_NAME \
  --replication-policy=automatic \
  --project=PROJECT_ID

Network Security

1. VPC Service Controls

Restrict Firestore access to specific VPC networks:
# Create service perimeter
gcloud access-context-manager perimeters create hub-perimeter \
  --title="Hub Service Perimeter" \
  --resources=projects/PROJECT_NUMBER \
  --restricted-services=firestore.googleapis.com,secretmanager.googleapis.com
2. Private Google Access

Enable Private Google Access for GCE instances without external IPs:
gcloud compute networks subnets update SUBNET_NAME \
  --enable-private-ip-google-access \
  --region=REGION
3. Cloud NAT for Egress Traffic

Use Cloud NAT for controlled egress:
gcloud compute routers create hub-router \
  --network=NETWORK \
  --region=REGION

gcloud compute routers nats create hub-nat \
  --router=hub-router \
  --nat-all-subnet-ip-ranges \
  --region=REGION

Firestore Security

1. Database-Level IAM

Grant permissions at database level, not collection level:
gcloud firestore databases add-iam-policy-binding "(default)" \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/datastore.user"
2. Enable Audit Logging

Track Firestore operations by enabling Data Access audit logs for the Firestore API in the project’s IAM audit configuration. To review which principals currently hold Firestore access:
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/datastore.user"
3. Regular Security Reviews
  • Review service account permissions quarterly
  • Audit Firestore security rules (if using)
  • Check for unused service accounts

Monitoring and Metrics

The GCP provider publishes metrics to Cloud Monitoring for observability.

Cloud Monitoring Integration

The GcpMetricPublisher automatically publishes application metrics to Cloud Monitoring with:
  • Custom dimensions: componentName, operation, status
  • Batch publishing: Up to 200 time series per request
  • Level-based filtering: INFO, ERROR, TRACE
Configuration:
hub:
  gcp:
    metrics:
      enabled: true
      prefix: "hub-prod"
      batchSize: 200

Key Metrics to Monitor

Application Metrics

Published by Hub to Cloud Monitoring:
hub.component.store.operations         # Firestore component operations
hub.component.store.latency            # Firestore operation latency
hub.secret.operations                  # Secret Manager operations
hub.secret.latency                     # Secret Manager operation latency
hub.cache.hit.rate                     # Secret cache hit rate
hub.cache.miss.rate                    # Secret cache miss rate
hub.kafka.consumer.lag                 # Kafka consumer lag (if using Kafka)
View in Cloud Console:
# List custom metrics
gcloud monitoring metrics-descriptors list \
  --filter="type:custom.googleapis.com/hub*" \
  --project=PROJECT_ID

GCP Service Metrics

Native GCP service metrics available in Cloud Monitoring: Firestore:
firestore.googleapis.com/document/read_count
firestore.googleapis.com/document/write_count
firestore.googleapis.com/document/delete_count
firestore.googleapis.com/api/request_count
firestore.googleapis.com/api/request_latencies
Secret Manager:
secretmanager.googleapis.com/api/request_count
secretmanager.googleapis.com/secret/version/access_count
Cloud Monitoring:
monitoring.googleapis.com/api/request_count
monitoring.googleapis.com/collection/write_time_series_count

Setting Up Alerts

Create alert policies for critical metrics:

High Firestore Latency Alert

# Create notification channel (email example)
CHANNEL_ID=$(gcloud alpha monitoring channels create \
  --display-name="Hub Alerts" \
  --type=email \
  --channel-labels=email_address=alerts@example.com \
  --format="value(name)")

# Create alert policy
gcloud alpha monitoring policies create \
  --notification-channels=${CHANNEL_ID} \
  --display-name="High Firestore Latency" \
  --condition-display-name="Read latency > 500ms" \
  --condition-threshold-value=500 \
  --condition-threshold-duration=60s \
  --condition-expression='
    metric.type="firestore.googleapis.com/api/request_latencies"
    resource.type="firestore_database"'

Low Cache Hit Rate Alert

gcloud alpha monitoring policies create \
  --notification-channels=${CHANNEL_ID} \
  --display-name="Low Secret Cache Hit Rate" \
  --condition-display-name="Hit rate < 80%" \
  --condition-threshold-value=0.8 \
  --condition-threshold-duration=300s \
  --condition-expression='
    metric.type="custom.googleapis.com/hub/cache/hit.rate"'

Secret Manager Error Rate Alert

gcloud alpha monitoring policies create \
  --notification-channels=${CHANNEL_ID} \
  --display-name="High Secret Manager Error Rate" \
  --condition-display-name="Error rate > 5%" \
  --condition-threshold-value=0.05 \
  --condition-threshold-duration=60s \
  --condition-expression='
    metric.type="secretmanager.googleapis.com/api/request_count"
    metric.label.status="error"'

Dashboards

Create custom dashboards in Cloud Monitoring Console:
  1. Navigate to Monitoring > Dashboards
  2. Click Create Dashboard
  3. Add charts for key metrics:
    • Firestore read/write operations
    • Secret Manager access count
    • Cache hit rate
    • Application latency

Troubleshooting

Common issues and solutions when using the GCP provider.

Authentication Issues

Problem: PermissionDeniedException

com.google.api.gax.rpc.PermissionDeniedException:
IAM permission 'datastore.entities.get' denied on resource
Cause: Service account lacks required IAM roles.

Solution:
# Check current IAM roles
gcloud projects get-iam-policy ${PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${SA_EMAIL}"

# Grant missing roles
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/datastore.user"

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/secretmanager.admin"

Problem: Application Default Credentials Not Found

com.google.auth.oauth2.DefaultCredentialsProvider:
Unable to detect the Application Default Credentials
Cause: No credentials found in environment.

Solution: For local development:
gcloud auth application-default login
For production, attach service account to compute resource or set:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json

Firestore Issues

Problem: Cloud Firestore API Not Enabled

com.google.api.gax.rpc.FailedPreconditionException:
The Cloud Firestore API is not enabled for project
Cause: Firestore API not enabled.

Solution:
gcloud services enable firestore.googleapis.com --project=${PROJECT_ID}

Problem: ComponentNotFoundException

com.zenoo.hub.storage.exception.ComponentNotFoundException:
Component 'onboarding-workflow' not found
Cause: Firestore collection or document missing, or wrong prefix configuration.

Solution:
  1. Check collection prefix matches configuration:
hub:
  gcp:
    firestore:
      prefix: "zenoo-hub"    # Must match Firestore collections
  2. Verify component was stored successfully (check application logs for write errors)
  3. Check Firestore Console for data:
https://console.cloud.google.com/firestore/data
  4. List collections programmatically:
gcloud firestore operations list --database="(default)"

Problem: Firestore in Datastore Mode

java.lang.IllegalStateException:
Firestore must be in Native Mode, but detected Datastore Mode
Cause: Firestore database is in Datastore Mode, which is incompatible.

Solution: Firestore modes cannot be changed after creation. You must:
  1. Create a new GCP project
  2. Enable Firestore in Native Mode
  3. Migrate data from old project (if needed)
Prevention: Always select Native Mode when creating Firestore:
gcloud firestore databases create \
  --location=REGION \
  --type=firestore-native

Secret Manager Issues

Problem: Secret Not Found

com.google.api.gax.rpc.NotFoundException:
Secret 'projects/PROJECT_ID/secrets/zenoo-hub/api-key/my-key' not found
Cause: Secret doesn’t exist or wrong prefix configuration. Solution:
  1. List existing secrets:
gcloud secrets list --project=${PROJECT_ID} --filter="name:zenoo-hub/*"
  2. Verify prefix configuration matches Secret Manager naming:
hub:
  gcp:
    secrets:
      prefix: "zenoo-hub"    # Must match secret name prefix
  3. Create missing secret:
echo -n '{"keyName":"my-key","secret":"value"}' | \
  gcloud secrets create zenoo-hub/api-key/my-key \
    --data-file=- \
    --project=${PROJECT_ID}

Problem: High Cache Miss Rate

Symptoms:
  • High Secret Manager API usage
  • Increased latency
  • hub.cache.miss.rate metric > 20%
Cause: Cache too small or expiry too short for access patterns. Solution: Increase cache size and TTL:
hub:
  gcp:
    secrets:
      cacheSize: 512        # Increase from default 128
      cacheExpiry: 2h       # Increase from default 30m
Monitor cache metrics:
curl http://localhost:5080/actuator/metrics/hub.cache.hit.rate
curl http://localhost:5080/actuator/metrics/hub.cache.miss.rate

Performance Issues

Problem: High Firestore Latency

Symptoms:
  • Slow component loads
  • firestore.googleapis.com/api/request_latencies > 500ms
Cause: Missing composite indexes for queries. Solution:
  1. Check index status:
gcloud firestore indexes composite list \
  --database="(default)" \
  --project=${PROJECT_ID}
  2. Enable automatic index creation:
hub:
  gcp:
    firestore:
      createIndexes: true
  3. Or create indexes manually via the Firebase Console or firestore.indexes.json

Problem: Secret Manager Rate Limiting

Symptoms:
com.google.api.gax.rpc.ResourceExhaustedException:
Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute'
Cause: Too many Secret Manager API calls. Solution:
  1. Increase cache TTL to reduce API calls:
hub:
  gcp:
    secrets:
      cacheExpiry: 2h       # Longer cache reduces API calls
  2. Preload secrets at startup:
hub:
  gcp:
    secrets:
      preloadOnStartup: true
  3. Request quota increase (if legitimately needed):
# Navigate to Cloud Console > IAM & Admin > Quotas
# Filter for "Secret Manager API"
# Request increase

Network Issues

Problem: Connection Timeouts to GCP APIs

Symptoms:
com.google.api.gax.rpc.DeadlineExceededException:
Deadline exceeded after 10 seconds
Cause: Network configuration, firewall rules, or regional outage. Solution:
  1. Test connectivity:
curl -I https://firestore.googleapis.com
curl -I https://secretmanager.googleapis.com
curl -I https://monitoring.googleapis.com
  2. Check VPC firewall rules:
gcloud compute firewall-rules list --project=${PROJECT_ID}
  3. Verify Private Google Access (if using private IPs):
gcloud compute networks subnets describe SUBNET_NAME \
  --region=REGION \
  --format="get(privateIpGoogleAccess)"
  4. Check GCP Status Dashboard:
https://status.cloud.google.com/
  5. Increase request timeouts (temporary workaround):
hub:
  gcp:
    firestore:
      retryStrategy:
        requestTimeout: 10s    # Increase from default 500ms
    secrets:
      retryStrategy:
        requestTimeout: 5s     # Increase from default 2s

Testing

Test your GCP provider integration locally and in CI/CD.

Local Testing with Firestore Emulator

Use the Firestore emulator for local development and testing without incurring costs or needing network access. 1. Start Firestore Emulator:
# Install gcloud emulators component (if not already installed)
gcloud components install cloud-firestore-emulator

# Start emulator
gcloud emulators firestore start --host-port=localhost:8080
2. Configure Application: Set the emulator environment variable before starting your application:
export FIRESTORE_EMULATOR_HOST=localhost:8080
The Firestore SDK detects this environment variable automatically; no Hub configuration changes are needed.
3. Run Tests:
./gradlew test
Benefits:
  • No GCP credentials needed
  • Fast local testing
  • No costs
  • Isolated test environment
Limitations:
  • Emulator doesn’t support all Firestore features (e.g., TTL)
  • No Secret Manager or Cloud Monitoring emulators

Integration Testing with Real GCP

For comprehensive integration testing, use a dedicated test GCP project. 1. Create Test Project:
gcloud projects create hub-integration-test --name="Hub Integration Test"
gcloud config set project hub-integration-test

# Enable APIs
gcloud services enable firestore.googleapis.com
gcloud services enable secretmanager.googleapis.com
gcloud services enable monitoring.googleapis.com
2. Test Service Account: Create a service account with limited permissions for testing:
gcloud iam service-accounts create hub-test \
  --display-name="Hub Integration Test"

# Grant minimum required roles
gcloud projects add-iam-policy-binding hub-integration-test \
  --member="serviceAccount:hub-test@hub-integration-test.iam.gserviceaccount.com" \
  --role="roles/datastore.user"

gcloud projects add-iam-policy-binding hub-integration-test \
  --member="serviceAccount:hub-test@hub-integration-test.iam.gserviceaccount.com" \
  --role="roles/secretmanager.admin"

# Create key for CI/CD
gcloud iam service-accounts keys create hub-test-key.json \
  --iam-account=hub-test@hub-integration-test.iam.gserviceaccount.com
3. Test Configuration:
# application-integration-test.yml
spring:
  profiles: integration-test

hub:
  cloud:
    provider:
      type: gcp
  gcp:
    projectId: hub-integration-test
    credentialsLocation: ${GOOGLE_APPLICATION_CREDENTIALS}
    firestore:
      prefix: "test-hub"
      createIndexes: false    # Manual index management in tests
    secrets:
      prefix: "test-hub"
      cacheExpiry: 1m         # Shorter for testing
      forceDelete: true       # Immediate deletion
    metrics:
      enabled: false          # Reduce API calls
4. Test Cleanup: Always clean up test resources after tests:
@AfterEach
void cleanup() {
    // Delete test components
    componentStore.deleteAll().block();

    // Delete test secrets
    secretStorage.deleteAll().block();
}
Or use Firestore emulator for unit tests and real GCP only for end-to-end tests.

CI/CD Integration

GitHub Actions Example:
name: Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Set up JDK 21
      uses: actions/setup-java@v3
      with:
        java-version: '21'

    - name: Authenticate to GCP
      uses: google-github-actions/auth@v1
      with:
        credentials_json: '${{ secrets.GCP_TEST_SA_KEY }}'

    - name: Run Integration Tests
      env:
        GCP_PROJECT_ID: hub-integration-test
      run: |
        ./gradlew integrationTest \
          -Dspring.profiles.active=integration-test

    - name: Cleanup Test Resources
      if: always()
      run: |
        # Bulk-delete the test collections; --quiet skips the confirmation prompt
        gcloud firestore bulk-delete --project=hub-integration-test \
          --collection-ids=test-hub-components,test-hub-api-keys,test-hub-sharables \
          --quiet

Cost Optimization

Optimize GCP costs for your Hub deployment.

Firestore Costs

Firestore charges for:
  • Document reads - $0.06 per 100,000 documents
  • Document writes - $0.18 per 100,000 documents
  • Document deletes - $0.02 per 100,000 documents
  • Storage - $0.18 per GB/month
  • Network egress - Varies by region
Optimization Strategies:
  1. Enable TTL for Sharables:
Firestore automatically deletes expired documents, saving delete operation costs:
hub:
  gcp:
    firestore:
      ttlEnabled: true
  2. Use LATEST Pointers:
Retrieve latest component version without querying all revisions (saves read operations).
  3. Minimize Write Operations:
  • Only update components when actually changed
  • Batch multiple updates together
  4. Storage Cleanup:
Delete old component revisions:
// Custom cleanup job (example): keep only the 5 most recent revisions
componentStore.deleteOldRevisions(componentName, 5);
  5. Regional Selection:
Choose regions with lower costs (e.g., us-central1 vs asia-northeast1).
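The LATEST-pointer strategy above can be sketched in plain Java. This is a hedged illustration, not the actual Hub API: `LatestPointerSketch`, its method names, and the map-backed "collections" are stand-ins. The idea is that instead of reading every revision document and sorting to find the newest (N reads), a single pointer entry per component records the current revision, so a load costs exactly two reads.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the LATEST-pointer pattern (names are hypothetical).
public class LatestPointerSketch {
    // Stand-in for Firestore revision documents: {component -> {revision -> body}}
    static final Map<String, Map<String, String>> revisions = new HashMap<>();
    // Stand-in for the LATEST pointer document: {component -> current revision id}
    static final Map<String, String> latest = new HashMap<>();

    static void save(String component, String revision, String body) {
        revisions.computeIfAbsent(component, k -> new HashMap<>()).put(revision, body);
        latest.put(component, revision); // pointer updated in the same (batched) write
    }

    // Two reads: the pointer, then the one revision document it names.
    static String loadLatest(String component) {
        return revisions.get(component).get(latest.get(component));
    }
}
```

Keeping the pointer update in the same write batch as the revision write ensures readers never observe a pointer to a missing revision.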

Secret Manager Costs

Secret Manager charges for:
  • Active secret versions - $0.06 per secret version per month
  • Access operations - $0.03 per 10,000 access operations
Optimization Strategies:
  1. Version Limits:
Limit number of versions to reduce storage costs:
hub:
  gcp:
    secrets:
      versionsLimit: 10       # Fewer versions = lower costs
  2. Cache Aggressively:
Reduce access operation costs with longer cache TTL:
hub:
  gcp:
    secrets:
      cacheSize: 512
      cacheExpiry: 4h         # Longer cache = fewer API calls
  3. Force Delete in Development:
Immediate deletion avoids 30-day retention costs:
spring:
  profiles: dev
hub:
  gcp:
    secrets:
      forceDelete: true       # No 30-day recovery window
  4. Preload at Startup:
Single bulk load is cheaper than many individual accesses:
hub:
  gcp:
    secrets:
      preloadOnStartup: true

Cloud Monitoring Costs

Cloud Monitoring charges for:
  • Ingestion - $0.2580 per MB for custom metrics (first 150 MB/month free)
  • API calls - First 1 million free, then $0.01 per 1,000 calls
Optimization Strategies:
  1. Disable in Non-Production:
spring:
  profiles: dev
hub:
  gcp:
    metrics:
      enabled: false
  2. Batch Metrics:
Maximize batch size to reduce API calls:
hub:
  gcp:
    metrics:
      batchSize: 200          # Max allowed
  3. Selective Metrics:
Only publish critical metrics (custom implementation).
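A selective-metrics filter could look like the sketch below. This is an assumption-laden illustration: the class, method, and metric names are hypothetical, not part of the Hub API. The point is simply that an allow-list applied before the Cloud Monitoring batch publisher means only the allow-listed series incur ingestion costs.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: drop non-critical metrics before publishing.
// CRITICAL contains hypothetical metric names for demonstration.
public class SelectiveMetricFilter {
    static final Set<String> CRITICAL = Set.of(
            "hub.cache.miss.rate",
            "hub.component.load.errors");

    // Returns only the metrics worth paying to ingest.
    static List<String> selectForPublish(List<String> metricNames) {
        return metricNames.stream()
                .filter(CRITICAL::contains)
                .collect(Collectors.toList());
    }
}
```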

Multi-Region Considerations

Cost vs Availability Tradeoff:
  • Single Region: Lowest cost, sufficient for most use cases
  • Multi-Region (Automatic): Higher cost, better availability and latency
Recommendation: Start with single region, move to multi-region only if needed for SLA requirements.

Migration

Migrate to GCP provider from other storage backends.

From AWS to GCP

Strategy: Dual-write pattern for zero-downtime migration. Phase 1: Dual Write
  1. Enable both AWS and GCP providers simultaneously
  2. Write to both backends
  3. Read from AWS (existing)
Phase 2: Verification
  1. Verify data consistency between AWS and GCP
  2. Run parallel production traffic (10% to GCP)
  3. Monitor for errors
Phase 3: Cutover
  1. Switch reads to GCP
  2. Continue dual writes for rollback capability
  3. Monitor for 24-48 hours
Phase 4: Cleanup
  1. Disable AWS writes
  2. Remove AWS provider dependency
  3. Archive AWS data
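Phase 1 (dual write) can be sketched as a thin wrapper over two stores. This is a hedged, self-contained illustration: the `Store` interface and map-backed stores stand in for the real Hub component stores, and none of the names below are the actual API. Writes fan out to both backends while reads continue to hit the existing AWS side until cutover.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative dual-write wrapper for a zero-downtime migration (hypothetical API).
public class DualWriteSketch {
    interface Store {
        void save(String name, String body);
        String load(String name);
    }

    static Store mapStore(Map<String, String> backing) {
        return new Store() {
            public void save(String name, String body) { backing.put(name, body); }
            public String load(String name) { return backing.get(name); }
        };
    }

    static class DualWriteStore implements Store {
        final Store readSide; // AWS during Phases 1-2
        final Store shadow;   // GCP: written to, not yet read from
        DualWriteStore(Store readSide, Store shadow) {
            this.readSide = readSide;
            this.shadow = shadow;
        }
        public void save(String name, String body) {
            readSide.save(name, body);
            shadow.save(name, body); // in production, log shadow failures rather than fail the request
        }
        public String load(String name) {
            return readSide.load(name); // Phase 3 cutover switches this to the shadow store
        }
    }

    // Demo: after one save, both backends hold the value.
    static String demo() {
        Map<String, String> aws = new HashMap<>();
        Map<String, String> gcp = new HashMap<>();
        Store store = new DualWriteStore(mapStore(aws), mapStore(gcp));
        store.save("onboarding-workflow", "v1");
        return aws.get("onboarding-workflow") + "/" + gcp.get("onboarding-workflow");
    }
}
```

Because reads never touch the shadow store until cutover, a GCP-side problem during Phases 1-2 degrades to a logged write error rather than a user-facing failure.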

From Local Provider to GCP

Strategy: Export and import data. 1. Export from Local Provider:
// Pseudo-code for export: stream components from the local store into GCP
localComponentStore.findAll()
    .flatMap(gcpComponentStore::save)
    .blockLast();    // wait for all saves to complete
2. Update Configuration:
hub:
  cloud:
    provider:
      type: gcp    # Changed from 'local'
  gcp:
    projectId: your-project
3. Restart Application 4. Verify Data: Check Firestore Console for migrated data. Considerations:
  • Local provider has no persistence - export during runtime
  • Sharable tokens are temporary - may not need migration
  • Test import in dev environment first

See Also