Kubernetes
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Since all Gluesync components are shipped as Docker images, the entire system can run in a Kubernetes cluster with production-grade scalability and reliability.
This guide covers deploying Gluesync on various Kubernetes environments, from local development clusters to cloud-managed services like AWS EKS, Google GKE, and Azure AKS.
Overview
A complete Gluesync deployment on Kubernetes includes:
- Traefik Ingress Controller for routing HTTPS traffic and providing reverse proxy capabilities
- Gluesync core hub (HTTPS on port 1717) for orchestrating data synchronization
- Chronos module for scheduling and task automation
- Source and target agents for connecting to databases
- Grafana monitoring dashboard (accessible at /grafana)
- Prometheus for metrics collection (accessible at /prometheus)
- Portainer for Kubernetes cluster management and container monitoring (accessible at port 9000)
Prerequisites
Before deploying Gluesync on Kubernetes, ensure you have the following tools installed:
Installing kubectl
Kubectl is the command-line tool used to interact with a Kubernetes cluster. It is essential for managing and troubleshooting Kubernetes resources, enabling you to deploy applications, inspect resources, scale workloads, and more.
Follow the installation guide to install kubectl on your machine.
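To confirm the installation succeeded, print the client version (any recent version works):
# Verify the kubectl installation
kubectl version --client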
Installing Helm
Helm is the package manager for Kubernetes that simplifies application deployment and management. This guide uses Helm charts to deploy Gluesync.
To install Helm, follow the steps for your operating system at the Helm installation website.
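Once installed, you can verify Helm the same way:
# Verify the Helm installation
helm version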
Setting up a Kubernetes cluster
You need access to a Kubernetes cluster. Choose one of the following options:
Local development with minikube
Minikube is a lightweight Kubernetes cluster for local development and testing.
To set up minikube on your machine, follow the instructions at the minikube installation page, making sure to select the correct architecture for your machine.
Once installed, start the minikube cluster:
minikube start
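Before moving on, it can help to confirm that the cluster is up and that kubectl is pointed at it:
# Check cluster health
minikube status
# Confirm kubectl can reach the cluster
kubectl get nodes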
AWS EKS
For production deployments on AWS, use Amazon Elastic Kubernetes Service (EKS). Ensure you have:
- AWS CLI configured with appropriate credentials
- kubectl configured to connect to your EKS cluster (one way to do this is sketched below)
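As a sketch, a common way to point kubectl at an existing EKS cluster is via the AWS CLI; the cluster name and region below are placeholders for your environment:
# Update your kubeconfig for the EKS cluster (adjust name and region)
aws eks update-kubeconfig --name <your_cluster_name> --region <your_cluster_region>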
Google GKE
For Google Cloud Platform deployments, use Google Kubernetes Engine (GKE).
Install the gcloud SDK by following the steps on the gcloud installation page or use the brew command-line utility on macOS:
brew install google-cloud-sdk
Initialize the SDK (you will be prompted for login details and project information):
gcloud init
Install the GKE authentication plugin and kubectl component:
gcloud components install gke-gcloud-auth-plugin
gcloud components install kubectl
Configure kubectl to connect to your GKE cluster by clicking the "Connect" button on your Google Cloud GKE page, which provides a command similar to:
gcloud container clusters get-credentials <your_cluster_name> --region <your_cluster_region> --project <your_project_name>
Quick start
The Gluesync Helm chart includes deployment scripts for quick installation across different environments. Choose the script that matches your deployment target:
Local development
For Docker Desktop, kind, minikube, or other local Kubernetes clusters, use the deploy-local.sh script:
./deploy-local.sh
This script:
- Creates a namespace (default: gluesync-dev)
- Installs Traefik Ingress Controller
- Installs Portainer for cluster management
- Deploys Gluesync with NodePort service type
- Waits for all pods to be ready
- Displays instructions for accessing services via port forwarding
After deployment, you need to use port forwarding to access services from localhost. See the port forwarding section for details.
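Before setting up port forwarding, you can confirm that the deployment is healthy:
# List all pods in the deployment namespace
kubectl get pods -n gluesync-dev
# Optionally block until every pod reports Ready
kubectl wait --for=condition=Ready pod --all -n gluesync-dev --timeout=300s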
AWS EKS
For production deployments on Amazon EKS, use the deploy-eks.sh script:
./deploy-eks.sh
This script:
- Creates a namespace (default: gluesync-prod)
- Installs Traefik Ingress Controller
- Installs Portainer for cluster management
- Deploys Gluesync with LoadBalancer service type
- Provisions an AWS Network Load Balancer (takes 2-3 minutes)
- Displays the LoadBalancer DNS name and access URLs
The script supports environment variables for customization:
# Use a custom namespace
NAMESPACE=my-namespace ./deploy-eks.sh
# Use a custom release name
RELEASE_NAME=my-gluesync ./deploy-eks.sh
# Deploy with internal load balancer
LB_SCHEME=internal ./deploy-eks.sh
After deployment, services are directly accessible via the LoadBalancer DNS without requiring port forwarding.
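If you need to retrieve the LoadBalancer DNS name again later, you can query the Traefik service directly. The service name below follows the naming used in the troubleshooting section and may differ depending on your release name:
# Print the NLB DNS name assigned to the Traefik service (service name may vary)
kubectl get svc -n gluesync-prod gluesync-prod-gluesync-traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'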
Google GKE
For deployments on Google Kubernetes Engine, use the deploy-gcp.sh script:
./deploy-gcp.sh
This script:
- Creates a namespace (default: gluesync-prod)
- Installs Traefik Ingress Controller
- Installs Portainer for cluster management
- Deploys Gluesync with LoadBalancer service type
- Provisions a GCP Load Balancer
- Displays the LoadBalancer IP address and access URLs
- Shows firewall configuration instructions
Environment variables for customization:
# Use a custom namespace
NAMESPACE=my-namespace ./deploy-gcp.sh
# Deploy with internal load balancer
LB_SCHEME=internal ./deploy-gcp.sh
Important: Ensure your GCP firewall rules allow traffic on the required ports. The script displays the necessary gcloud command to create the firewall rule:
gcloud compute firewall-rules create allow-gluesync \
--allow tcp:80,tcp:443,tcp:9443 \
--source-ranges=0.0.0.0/0 \
--description='Allow Gluesync ports'
Azure AKS
For deployments on Azure Kubernetes Service, use the deploy-azure.sh script:
./deploy-azure.sh
Similar to EKS and GKE deployments, this script installs Traefik Ingress Controller and Portainer for cluster management, then provisions an Azure Load Balancer and displays access URLs. Ensure your Network Security Group allows inbound traffic on the required ports.
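As a sketch, an inbound NSG rule for the required ports could be created with the Azure CLI; the resource group and NSG names below are placeholders for your environment:
# Allow inbound traffic on the Gluesync ports (adjust group and NSG names)
az network nsg rule create \
  --resource-group <your_resource_group> \
  --nsg-name <your_nsg_name> \
  --name allow-gluesync \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80 443 9443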
Deployment scripts comparison
| Script | Environment | Service type | Access method |
|---|---|---|---|
| deploy-local.sh | Local clusters (Docker Desktop, kind, minikube) | NodePort | Port forwarding required |
| deploy-eks.sh | AWS EKS | LoadBalancer | Direct via NLB DNS |
| deploy-gcp.sh | Google GKE | LoadBalancer | Direct via IP |
| deploy-azure.sh | Azure AKS | LoadBalancer | Direct via IP |
Port forwarding for local development
When deploying on local Kubernetes clusters, services are exposed via NodePort, which requires port forwarding to access them from localhost.
The port-forward.sh script automates this process:
./port-forward.sh
The script:
- Auto-detects the Gluesync namespace
- Finds the Traefik service automatically
- Forwards port 9443 without requiring sudo
- Optionally forwards ports 443 and 80 if run with sudo
- Forwards Portainer on port 9000
To forward all ports including privileged ports (443 and 80):
sudo ./port-forward.sh
Manual port forwarding:
If you prefer to forward ports manually:
# Forward only port 9443 (no sudo required)
kubectl port-forward -n gluesync-dev svc/gluesync-gluesync-traefik 9443:9443
# Forward Portainer (no sudo required)
kubectl port-forward -n gluesync-dev svc/portainer 9000:9000
# Forward all ports (requires sudo for ports < 1024)
sudo kubectl port-forward -n gluesync-dev svc/gluesync-gluesync-traefik 9443:9443 443:443 80:80
Why port forwarding is needed:
Local Kubernetes clusters (Docker Desktop, kind, minikube) don’t expose NodePort services to the host machine by default. Port forwarding creates a secure tunnel from localhost to the cluster service, allowing you to access services as if they were running locally.
Accessing services
The method for accessing Gluesync services depends on your deployment type.
Local deployments
After running the port forwarding script, access services at:
- Core hub HTTPS: https://localhost:9443/
- Core hub UI: https://localhost:9443/ui
- Grafana: https://localhost:9443/grafana
- Prometheus: https://localhost:9443/prometheus
- Chronos: https://localhost:9443/chronos
- Portainer: http://localhost:9000/
- Via ingress (HTTPS): https://localhost:443/ (requires sudo port forwarding)
- Via ingress (HTTP): http://localhost:80/ (requires sudo port forwarding)
Cloud deployments (EKS/GKE/AKS)
Services are directly accessible via the LoadBalancer address displayed after deployment:
- Core hub HTTPS: https://YOUR-LB-ADDRESS:9443/
- Core hub UI: https://YOUR-LB-ADDRESS:9443/ui
- Grafana: https://YOUR-LB-ADDRESS:9443/grafana
- Prometheus: https://YOUR-LB-ADDRESS:9443/prometheus
- Chronos: https://YOUR-LB-ADDRESS:9443/chronos
- Via ingress: https://YOUR-LB-ADDRESS/
Replace YOUR-LB-ADDRESS with the DNS name (EKS) or IP address (GKE/AKS) shown by the deployment script.
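A quick way to confirm the endpoint responds is a curl request; the -k flag skips certificate verification, which is needed because the deployment ships with self-signed certificates by default:
# Check that the core hub answers over HTTPS (self-signed certificate, hence -k)
curl -k https://YOUR-LB-ADDRESS:9443/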
SSL/TLS certificates
The deployment uses self-signed certificates by default. You will see certificate warnings in your browser, which is expected and safe for development and testing environments. Accept the warnings to proceed.
Traefik is configured to handle TLS termination and accepts self-signed certificates from backend services:
- Supports TLS 1.2 and TLS 1.3 protocols
- Provides automatic HTTPS routing
- Can be configured with Let’s Encrypt for automatic certificate management
For production deployments, you should replace the self-signed certificates with certificates from a trusted Certificate Authority or configure Let’s Encrypt integration.
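As a sketch, one common approach is to store the CA-issued certificate in a Kubernetes TLS secret; the secret name below is illustrative, and how your chart consumes it depends on your values configuration:
# Create a TLS secret from a CA-issued certificate and key (secret name is illustrative)
kubectl create secret tls gluesync-tls \
  --cert=tls.crt --key=tls.key \
  -n gluesync-prod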
Service ports
Gluesync exposes the following ports:
| Port | Service | Protocol | Description |
|---|---|---|---|
| 80 | HTTP | TCP | Standard HTTP ingress |
| 443 | HTTPS | TCP | Standard HTTPS ingress |
| 9000 | Portainer | TCP | Portainer web UI for Kubernetes cluster management |
| 9443 | Core hub and all services | TCP | Direct HTTPS access to core hub, Grafana, Prometheus, and Chronos via Traefik routing |
| 30900 | Portainer (NodePort) | TCP | Portainer NodePort access for local development |
Manual deployment
If you prefer manual control over the deployment, you can use Helm directly instead of the deployment scripts.
Local development
helm install gluesync . \
--namespace gluesync-dev \
--create-namespace \
--set service.type=NodePort
AWS EKS
helm install gluesync . \
--namespace gluesync-prod \
--create-namespace \
--set service.type=LoadBalancer \
--set service.loadBalancerScheme=internet-facing
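Helm also handles upgrades and removal of a manual installation:
# Apply configuration changes while keeping previously set values
helm upgrade gluesync . --namespace gluesync-prod --reuse-values
# Check the release status
helm status gluesync --namespace gluesync-prod
# Remove the release entirely
helm uninstall gluesync --namespace gluesync-prod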
Configuration
Service type configuration
The service type determines how services are exposed. Edit values.yaml or use --set flags:
service:
# For local dev: NodePort
# For cloud: LoadBalancer
type: NodePort # or LoadBalancer
# AWS LoadBalancer options (only for EKS)
loadBalancerScheme: internet-facing # or "internal"
# loadBalancerIP: "1.2.3.4" # Optional: specify static IP
AWS EKS-specific configuration
Network Load Balancer annotations
When using LoadBalancer type on EKS, the service automatically configures an AWS Network Load Balancer:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
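To confirm the annotations were applied to the provisioned service, you can inspect it with kubectl; the service name below follows the naming used in the troubleshooting section:
# Print the annotations on the Traefik service (service name may vary)
kubectl get svc -n gluesync-prod gluesync-prod-gluesync-traefik \
  -o jsonpath='{.metadata.annotations}'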
Volume management
Gluesync components use Kubernetes volumes for storing persistent data. The volume configuration matches the Docker Compose setup to ensure consistency.
Grafana volumes
Grafana uses two volumes:
- A provisioning volume mounted at /etc/grafana/provisioning, containing datasource and dashboard definitions
- A data volume mounted at /var/lib/grafana, holding dashboards and Grafana's internal database
Storage options
Development configuration (default)
Uses emptyDir volumes:
- Pros: Simple, no setup required
- Cons: Data lost when pod restarts
- Use case: Local development, testing
volumes:
- name: grafana-data
emptyDir: {}
Production configuration (recommended)
Use PersistentVolumeClaims for data persistence:
Benefits:
- Data survives pod restarts
- Supports backup and restore
- Better for production use
Create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-data
namespace: gluesync-prod
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
Update the deployment to use the PVC:
volumes:
- name: grafana-data
persistentVolumeClaim:
claimName: grafana-data
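After applying both manifests, verify that the claim is bound to a volume before restarting Grafana:
# Confirm the PVC status is Bound
kubectl get pvc -n gluesync-prod grafana-data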
Verifying volumes
Check mounted volumes:
# View volume mounts
kubectl describe pod -n gluesync-dev -l app.kubernetes.io/name=gluesync-grafana | grep -A 5 "Mounts:"
# Check provisioning content
kubectl exec -n gluesync-dev deployment/gluesync-gluesync-grafana -- ls -la /etc/grafana/provisioning/
# Check data directory
kubectl exec -n gluesync-dev deployment/gluesync-gluesync-grafana -- ls -la /var/lib/grafana/
Understanding Helm and Kubernetes concepts
Helm charts
The Gluesync deployment uses Helm charts to manage Kubernetes resources. A Helm chart is a package that contains all the resource definitions needed to run an application on Kubernetes.
To create a basic Helm chart:
helm create <your_chart_name>
This creates a folder structure (shown here for a chart named mychart):
mychart
├── Chart.yaml
├── charts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── service.yaml
│ └── ...
└── values.yaml
The templates/ directory contains YAML definitions for Kubernetes resources. Helm processes these files through a Go template engine, allowing dynamic configuration through the values.yaml file.
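You can inspect the output of the template engine without installing anything, which is useful for debugging values overrides; for example, from the chart directory:
# Render the chart templates locally and print the resulting manifests
helm template gluesync . --set service.type=NodePort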
For more examples and details about Helm charts, follow the documentation on the Helm website.
Kubernetes resources
Services
Services provide stable networking and load balancing for Pods. They enable communication between components and with external clients.
Example of a Gluesync core hub service:
apiVersion: v1
kind: Service
metadata:
name: {{ include "gluesync.fullname" . }}-core-hub
labels:
app.kubernetes.io/name: {{ include "gluesync.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
type: {{ .Values.service.type }}
ports:
- name: port-1717
protocol: TCP
port: 1717
targetPort: 1717
selector:
app.kubernetes.io/name: {{ include "gluesync.name" . }}-core-hub
app.kubernetes.io/instance: {{ .Release.Name }}
Pods and deployments
Pods are the smallest deployable units in Kubernetes. A Pod represents one or more containers that share the same execution environment and are tightly coupled.
Deployments manage the lifecycle of Pods and ReplicaSets, enabling rolling updates, scaling, and self-healing.
Example deployment for the core hub:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "gluesync.fullname" . }}-core-hub
labels:
app.kubernetes.io/name: {{ include "gluesync.name" . }}-core-hub
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "gluesync.name" . }}-core-hub
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "gluesync.name" . }}-core-hub
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: gluesync-core-hub
image: "{{ .Values.gluesyncCoreHub.image }}"
ports:
- containerPort: 1717
protocol: TCP
volumeMounts:
- name: gluesync-license
mountPath: /opt/gluesync/data/gs-license.dat
subPath: gs-license.dat
- name: gluesync-core-hub
mountPath: /opt/gluesync/database
- name: gluesync-bootstrap
mountPath: /opt/gluesync/data/bootstrap-core-hub.json
subPath: bootstrap-core-hub.json
volumes:
- name: gluesync-license
configMap:
name: gluesync-license
- name: gluesync-bootstrap
configMap:
name: gluesync-bootstrap
- name: gluesync-core-hub
persistentVolumeClaim:
claimName: gluesync-core-hub
Selectors and labels
Selectors define how Kubernetes resources match or associate with Pods based on their labels. They ensure that Services route traffic to the correct Pods and that Deployments manage the appropriate ReplicaSets.
In the examples above, notice how:
- The Service selector matches the labels in the Deployment template (see the query after this list)
- All resources use consistent labeling for proper association
- Volume mounts specify how containers access volumes defined at the Pod level
- Volumes and volume mounts are matched by name
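As an illustration of selectors in action, the following query lists the Pods the core hub Service would route to. The label value assumes the gluesync.name template helper renders to gluesync, so adjust it to your chart's output:
# List Pods carrying the core hub labels (label value is an assumption)
kubectl get pods -n gluesync-dev -l app.kubernetes.io/name=gluesync-core-hub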
Troubleshooting
View logs
# Traefik Ingress Controller
kubectl logs -n gluesync-dev deployment/gluesync-gluesync-traefik -f
# Core Hub
kubectl logs -n gluesync-dev statefulset/gluesync-gluesync-core-hub -f
# Source Agent
kubectl logs -n gluesync-dev statefulset/gluesync-gluesync-source -f
# Target Agent
kubectl logs -n gluesync-dev statefulset/gluesync-gluesync-target -f
Port forwarding issues
If port forwarding fails:
- Check if ports are already in use: lsof -i :9443
- Ensure the service is running: kubectl get pods -n gluesync-dev
- Try different ports: kubectl port-forward … 19443:9443
LoadBalancer pending on cloud deployments
Problem: Service stuck in pending state
kubectl describe svc -n gluesync-prod gluesync-prod-gluesync-traefik
Common causes:
- AWS: Load Balancer Controller not installed, insufficient IAM permissions, or subnet tags missing
- GCP: Firewall rules not configured
- Azure: Network Security Group rules missing
Solution for AWS: Ensure subnets are tagged (a CLI sketch follows the list):
- kubernetes.io/cluster/<cluster-name> = shared
- kubernetes.io/role/elb = 1 (public subnets)
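As a sketch, the tags can be applied with the AWS CLI; the subnet ID and cluster name below are placeholders for your environment:
# Tag a public subnet so the Load Balancer Controller can discover it (placeholders throughout)
aws ec2 create-tags --resources <your_subnet_id> \
  --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared \
         Key=kubernetes.io/role/elb,Value=1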
Additional resources
For a complete example of a Helm chart for Gluesync components, see the Gluesync Helm template repository.