# Cloud Provider Migration Guide
This guide helps you migrate from older Zenoo Hub versions to the new cloud provider architecture. The good news: your existing configuration continues to work with zero changes required.
## Overview of Changes

### What Changed
**Architecture:**

- AWS-specific code moved from `backend` to the `cloud-provider-aws` module
- New abstraction layer (`cloud-provider-api`) for provider independence
- Domain models moved to the `hub-domain` module
- Backend now uses adapters to access cloud providers

**What This Means:**

- Cleaner code organization
- Support for multiple cloud providers
- Better testability
- Easier maintenance
### What Didn't Change

- **Configuration**: Old `hub.aws.*` properties still work
- **Functionality**: All features work exactly the same
- **Data**: No migration of DynamoDB tables or secrets needed
- **API**: No changes to the Hub Client API or Admin API
## Backward Compatibility Guarantee
The new architecture maintains 100% backward compatibility:
```yaml
# Old configuration - STILL WORKS
hub:
  aws:
    region: us-east-1
    dynamodb:
      prefix: my-hub
      endpoint: http://localhost:4566
    secrets:
      prefix: my-hub
```
Nothing breaks. Your existing deployments will continue running without any configuration changes.
## Migration Strategies

### Strategy 1: No Migration (Recommended for Most)
**When to Use:**

- You're happy with the current configuration
- No immediate need for multi-cloud support
- You want to minimize changes
**Action Required:** None.

Your configuration continues to work as-is. The Hub automatically uses the AWS provider when it detects `hub.aws.*` properties.
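The detection rule above can be pictured with a short Python sketch. This is illustrative logic only — `select_provider` is a hypothetical helper, not the Hub's actual source:

```python
def select_provider(config: dict) -> str:
    """Illustrate the selection rule: an explicit
    hub.cloud.provider.type wins; otherwise the presence of
    hub.aws.* properties implies the AWS provider."""
    hub = config.get("hub", {})
    explicit = hub.get("cloud", {}).get("provider", {}).get("type")
    if explicit:
        return explicit
    if "aws" in hub:  # legacy hub.aws.* properties detected
        return "aws"
    raise ValueError("no cloud provider configured")

# A legacy configuration still selects AWS:
legacy = {"hub": {"aws": {"region": "us-east-1"}}}
print(select_provider(legacy))  # aws
```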
### Strategy 2: Adopt New Configuration Style
**When to Use:**

- Starting a new deployment
- Want to be explicit about provider selection
- Planning for future multi-cloud support
**Migration Steps:**

1. Add explicit provider selection:

   ```yaml
   hub:
     cloud:
       provider:
         type: aws  # Explicitly choose AWS
   ```

2. Keep your existing AWS configuration:

   ```yaml
   hub:
     aws:
       region: us-east-1
       dynamodb:
         prefix: my-hub
       secrets:
         prefix: my-hub
   ```

3. Test and deploy.
**Benefits:**

- Future-proof configuration
- Clearer intent
- Easier to switch providers later
### Strategy 3: Full Migration with Reorganization
**When to Use:**

- Major version upgrade
- Infrastructure refresh
- Opportunity to clean up configuration
**Migration Steps:**

1. Review and back up your current configuration:

   ```bash
   # Backup current configuration
   cp application.yml application.yml.backup
   ```
2. Reorganize the configuration with the new structure:

   **Before:**

   ```yaml
   hub:
     aws:
       accessKey: ${AWS_ACCESS_KEY_ID}
       secretKey: ${AWS_SECRET_ACCESS_KEY}
       region: us-east-1
       dynamodb:
         prefix: old-hub
         throughput:
           read: 100
           write: 50
   ```

   **After:**

   ```yaml
   hub:
     cloud:
       provider:
         type: aws
       component:
         operationTimeout: 30s
       sharable:
         operationTimeout: 10s
         defaultTtl: 24h
     aws:
       # Use IAM role instead of access keys
       region: us-east-1
       dynamodb:
         prefix: new-hub  # Optional: new prefix
         createTables: true
         retryStrategy:
           requestTimeout: 500ms
           maxRetries: 10
           backoff: 100ms
         tags:
           Environment: production
           CostCenter: engineering
       secrets:
         prefix: new-hub
         cache: true
         cacheTtl: 300s
         tags:
           Environment: production
   ```
3. Update dependencies in `build.gradle` (if building from source): no changes needed; dependencies are managed automatically.

4. Test in a non-production environment:

   ```bash
   # Use LocalStack for testing
   ./gradlew :backend-integration-tests:test
   ```

5. Deploy to production.
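For simple cases, the reorganization in step 2 can be scripted. A hypothetical sketch (`migrate_config` is not a shipped tool) that adds the explicit provider type and drops static credentials in favor of IAM roles:

```python
import copy

def migrate_config(old: dict) -> dict:
    """Hypothetical Strategy 3 helper: add an explicit
    hub.cloud.provider.type and drop static credentials in favor
    of IAM roles; everything else under hub.aws is preserved."""
    new = copy.deepcopy(old)
    hub = new.setdefault("hub", {})
    hub.setdefault("cloud", {}).setdefault("provider", {})["type"] = "aws"
    aws = hub.get("aws", {})
    aws.pop("accessKey", None)  # prefer IAM roles over access keys
    aws.pop("secretKey", None)
    return new

# Illustrative old-style configuration:
old = {"hub": {"aws": {"accessKey": "AKIA...", "secretKey": "...",
                       "region": "us-east-1",
                       "dynamodb": {"prefix": "old-hub"}}}}
print(migrate_config(old)["hub"]["cloud"]["provider"]["type"])  # aws
```

Anything beyond key moves — new prefixes, retry tuning, tags — still deserves a manual review against the "After" example above.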
## Step-by-Step Migration

### Prerequisites

### Phase 1: Preparation (No Downtime)
1. **Inventory Current Resources:**

   ```bash
   # List current DynamoDB tables
   aws dynamodb list-tables | grep "your-prefix"

   # List current secrets
   aws secretsmanager list-secrets | grep "your-prefix"
   ```
2. **Document Current Configuration:**

   Create a configuration inventory:
   - Table prefixes
   - Secret prefixes
   - IAM roles/permissions
   - Multi-region setup (if any)
   - Tags
3. **Review Dependencies:**

   Check whether any custom code depends on the old package names:

   ```bash
   # Search for old imports (should find none in modern versions)
   grep -r "com.zenoo.hub.aws.dynamodb" src/
   grep -r "com.zenoo.hub.aws.secrets" src/
   ```
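Since `aws dynamodb list-tables` emits JSON, a small script gives a more precise prefix inventory than `grep`. A sketch (helper name and sample table names are illustrative):

```python
import json

def tables_with_prefix(list_tables_json: str, prefix: str) -> list:
    """Filter the JSON output of `aws dynamodb list-tables`
    (a document with a TableNames array) by table-name prefix."""
    names = json.loads(list_tables_json)["TableNames"]
    return [name for name in names if name.startswith(prefix)]

# Example CLI output:
raw = '{"TableNames": ["my-hub-components", "my-hub-configs", "other-app"]}'
print(tables_with_prefix(raw, "my-hub"))  # ['my-hub-components', 'my-hub-configs']
```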
### Phase 2: Testing (Non-Production)
1. **Update Configuration File:**

   Add the new provider configuration to your test environment:

   ```yaml
   # application-test.yml
   hub:
     cloud:
       provider:
         type: aws
     aws:
       region: us-east-1
       dynamodb:
         prefix: test-hub  # Use test prefix
         endpoint: http://localhost:4566  # LocalStack
         createTables: true
       secrets:
         prefix: test-hub
         endpoint: http://localhost:4566
   ```
2. **Run Tests:**

   ```bash
   # Start LocalStack
   docker run -d -p 4566:4566 localstack/localstack

   # Run integration tests
   ./gradlew :backend-integration-tests:test

   # Check test results
   cat backend-integration-tests/build/reports/tests/test/index.html
   ```
3. **Verify Functionality:**

   Test the key operations: component reads and writes, configuration lookups, and secret access.

4. **Performance Testing:**

   Compare performance with the old version:

   ```bash
   # Run performance tests
   ./gradlew :backend:performanceTest

   # Check metrics
   ```
### Phase 3: Staging Deployment (Limited Downtime)
1. **Deploy to Staging:**

   ```bash
   # Update staging configuration
   kubectl apply -f k8s/staging/configmap.yaml

   # Deploy new version
   kubectl rollout restart deployment/hub-backend -n staging
   ```
2. **Monitor Logs:**

   ```bash
   # Watch startup logs
   kubectl logs -f deployment/hub-backend -n staging
   ```

   Look for:
   - "AWS Cloud Provider auto-configuration enabled"
   - "AWS DynamoDB stores auto-configuration enabled"
   - "AWS Secrets Manager auto-configuration enabled"
   - No errors related to the cloud provider
3. **Smoke Testing:**

   Run smoke tests against staging:

   ```bash
   # Component operations
   curl -X POST https://staging-hub.example.com/api/components/test-component

   # Configuration operations
   curl https://staging-hub.example.com/api/config/test-config

   # API key validation
   curl -H "X-API-Key: test-key" https://staging-hub.example.com/api/execute/test-workflow
   ```
4. **Soak Test:**

   Run staging under load for 24-48 hours:
   - Monitor error rates
   - Check DynamoDB metrics
   - Verify Secrets Manager API calls
   - Watch for memory leaks or performance degradation
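During the soak test, the error-rate trend can be tracked by sampling logs. A minimal illustration (the helper and log format are assumptions; this does not replace proper CloudWatch alarms):

```python
def error_rate(log_lines: list) -> float:
    """Fraction of sampled log lines at ERROR level."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors / len(log_lines)

# Sample of staging log lines:
sample = [
    "2024-01-01T00:00:00 INFO  request handled",
    "2024-01-01T00:00:01 ERROR DynamoDB timeout",
    "2024-01-01T00:00:02 INFO  request handled",
    "2024-01-01T00:00:03 INFO  request handled",
]
print(error_rate(sample))  # 0.25
```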
### Phase 4: Production Deployment

#### Option A: Rolling Deployment (No Downtime)
1. **Prepare:**

   Ensure backward compatibility by keeping the old configuration format:

   ```yaml
   hub:
     aws:  # Keep old format
       region: us-east-1
       dynamodb:
         prefix: prod-hub
   ```
2. **Deploy:**

   ```bash
   # Rolling update
   kubectl set image deployment/hub-backend \
     hub-backend=your-registry/hub-backend:new-version

   # Monitor rollout
   kubectl rollout status deployment/hub-backend
   ```
3. **Verify:**

   ```bash
   # Check all pods are healthy
   kubectl get pods -l app=hub-backend

   # Sample logs from new pods
   kubectl logs -l app=hub-backend --tail=100 | grep "cloud provider"
   ```
#### Option B: Blue-Green Deployment (Minimal Downtime)
1. **Deploy Green Environment:**

   ```bash
   # Create green deployment
   kubectl apply -f k8s/prod/hub-backend-green.yaml

   # Wait for healthy
   kubectl wait --for=condition=ready pod -l version=green
   ```

2. **Smoke Test Green:**

   ```bash
   # Internal smoke tests
   kubectl exec -it pod/green-test -- curl localhost:8080/actuator/health
   ```

3. **Switch Traffic:**

   ```bash
   # Update service to point to green
   kubectl patch service hub-backend -p '{"spec":{"selector":{"version":"green"}}}'
   ```

4. **Monitor:**

   ```bash
   # Watch for errors
   kubectl logs -f -l version=green | grep -i error
   ```

5. **Rollback if Needed:**

   ```bash
   # Quick rollback to blue
   kubectl patch service hub-backend -p '{"spec":{"selector":{"version":"blue"}}}'
   ```
### Phase 5: Verification & Cleanup

1. **Production Verification:**

   Confirm that the application starts cleanly and serves traffic.

2. **Monitor Key Metrics:**

   CloudWatch metrics to watch:
   - DynamoDB.Operation.Latency
   - DynamoDB.ThrottledRequests
   - SecretsManager.CacheHitRate
   - Application.ErrorRate
3. **Gradual Traffic Increase:**

   If using a canary deployment, increase traffic gradually (10% -> 25% -> 50% -> 100%):

   ```bash
   kubectl patch virtualservice hub-backend --type merge -p '{"spec":{"http":[{"route":[{"destination":{"host":"hub-backend","subset":"new"},"weight":10}]}]}}'
   ```
4. **Cleanup Old Deployment:**

   After 24-48 hours of stable operation:

   ```bash
   # Remove old deployment
   kubectl delete deployment hub-backend-old

   # Clean up old configuration
   kubectl delete configmap hub-backend-config-old
   ```
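If you script the canary progression from step 3, building the patch payload programmatically avoids shell-quoting mistakes. A sketch; unlike the single-route patch above, it assumes a complementary `old` subset receives the remaining traffic:

```python
import json

def canary_patch(host: str, new_subset: str, weight: int) -> str:
    """Build an Istio VirtualService merge patch sending `weight`
    percent of traffic to the new subset and the remainder to an
    assumed `old` subset."""
    patch = {"spec": {"http": [{"route": [
        {"destination": {"host": host, "subset": new_subset},
         "weight": weight},
        {"destination": {"host": host, "subset": "old"},
         "weight": 100 - weight},
    ]}]}}
    return json.dumps(patch)

# One kubectl invocation per canary step: 10% -> 25% -> 50% -> 100%
for weight in (10, 25, 50, 100):
    print("kubectl patch virtualservice hub-backend --type merge -p "
          f"'{canary_patch('hub-backend', 'new', weight)}'")
```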
## Rollback Procedures

### If Issues Arise
**Immediate Rollback:**

```bash
# Kubernetes
kubectl rollout undo deployment/hub-backend

# Docker
docker service update --rollback hub-backend

# Verify
kubectl get pods
kubectl logs -l app=hub-backend --tail=50
```
**Configuration Rollback:**

```bash
# Restore old configuration
kubectl apply -f configmap-backup.yaml
kubectl rollout restart deployment/hub-backend
```
### Common Issues and Solutions

**Issue: Tables not found**

Solution:

```yaml
# Ensure createTables is enabled
hub:
  aws:
    dynamodb:
      createTables: true
```
**Issue: Secrets not accessible**

Solution:
- Verify IAM permissions
- Check the secret name format
- Confirm the region is correct
**Issue: Performance degradation**

Solution:
- Enable secrets caching
- Check DynamoDB capacity
- Review the retry configuration
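When reviewing the retry configuration, it helps to keep the shape of the retry loop in mind. A sketch of how `maxRetries` and `backoff` interact, assuming exponential backoff (the Hub's actual policy may differ):

```python
import time

def with_retries(operation, max_retries=5, backoff=0.1):
    """Run `operation`, retrying up to max_retries times with
    exponential backoff starting at `backoff` seconds."""
    delay = backoff
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted, surface the error
            time.sleep(delay)
            delay *= 2  # exponential backoff

attempts = []
def flaky():
    """Fails twice, then succeeds - simulating transient timeouts."""
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("simulated DynamoDB timeout")
    return "ok"

print(with_retries(flaky, backoff=0.001))  # ok
```

A higher `maxRetries` masks more transient failures but lengthens worst-case latency, which is why the tuning step below lowers it once metrics justify doing so.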
## Testing Checklist

Before migrating to production:

### Functional Tests

### Non-Functional Tests

### Infrastructure Tests

### Post-Migration
## Monitoring

Set up these CloudWatch alarms:

```yaml
alarms:
  - name: HighDynamoDBLatency
    metric: DynamoDB.Operation.Latency
    threshold: 1000ms
  - name: DynamoDBThrottling
    metric: DynamoDB.ThrottledRequests
    threshold: 10
  - name: SecretsManagerErrors
    metric: SecretsManager.ErrorCount
    threshold: 5
```
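Threshold and timeout values throughout this guide use duration strings (`500ms`, `300s`, `24h`). A small parser (a hypothetical helper, not part of the Hub) makes it easy to sanity-check them numerically before deployment:

```python
import re

_UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}

def parse_duration(value: str) -> float:
    """Convert a duration string such as '500ms', '300s' or '24h'
    into seconds, so thresholds can be compared numerically."""
    match = re.fullmatch(r"(\d+)(ms|s|m|h)", value.strip())
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    number, unit = match.groups()
    return int(number) * _UNITS[unit]

print(parse_duration("1000ms"))  # 1.0
print(parse_duration("24h"))     # 86400
```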
## Optimization

After stabilization:

1. **Enable Secrets Caching:**

   ```yaml
   hub:
     aws:
       secrets:
         cache: true
         cacheTtl: 600s  # 10 minutes
   ```
2. **Tune DynamoDB:**

   ```yaml
   hub:
     aws:
       dynamodb:
         retryStrategy:
           requestTimeout: 300ms  # Adjust based on metrics
           maxRetries: 5
   ```
3. **Add Tags for Cost Tracking:**

   ```yaml
   hub:
     aws:
       dynamodb:
         tags:
           CostCenter: engineering
           Environment: production
   ```
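The effect of `cache` and `cacheTtl` from step 1 can be illustrated with a minimal TTL cache: repeated reads inside the TTL window skip the Secrets Manager call. This is a sketch of the idea, not the Hub's implementation:

```python
import time

class TtlCache:
    """Minimal TTL cache: reads within the TTL window are served
    locally instead of calling Secrets Manager again."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # name -> (value, fetched_at)

    def get(self, name, fetch):
        entry = self._store.get(name)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # cache hit: no API call
        value = fetch(name)  # cache miss: call the backend
        self._store[name] = (value, time.monotonic())
        return value

calls = []
cache = TtlCache(ttl_seconds=600)
fetch = lambda name: calls.append(name) or f"secret:{name}"
cache.get("db-password", fetch)
cache.get("db-password", fetch)  # second read served from cache
print(len(calls))  # 1
```

The trade-off: a longer `cacheTtl` means fewer API calls (and lower cost) but a longer window during which a rotated secret is stale.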
## Getting Help

If you encounter issues during migration:

1. Check the logs:

   ```bash
   kubectl logs -f deployment/hub-backend | grep -i "cloud provider"
   ```

2. Review the configuration:

   ```bash
   kubectl describe configmap hub-backend-config
   ```

3. Verify AWS resources:

   ```bash
   aws dynamodb list-tables
   aws secretsmanager list-secrets
   ```

4. Consult the documentation.
## Summary
The cloud provider migration is designed to be zero-downtime and backward compatible. For most users:
- No action required
- Configuration continues to work
- No data migration needed
- All features work the same
For users who want to adopt the new configuration style, follow the step-by-step guide above and test thoroughly in non-production environments first.
## See Also