Installing Infrahub
This guide provides step-by-step instructions for installing Infrahub Community and Enterprise editions. The installation methods covered here are for non-resilient deployment architectures suitable for development, testing, and single-node production environments.
For resilient, high-availability deployments, refer to the high availability architecture documentation.
Prerequisites
- Ensure your system meets the hardware requirements before installing Infrahub
- Each installation method has additional prerequisites listed in their respective sections
Allocating more CPU cores to the Neo4j database only improves performance on Infrahub Enterprise, which leverages parallel query execution.
Community
Infrahub Community is deployed as a container-based architecture and can be installed using several methods.
- Docker Compose via curl
- Local development (git clone)
- Kubernetes with Helm
Using curl and Docker Compose
To quickly spin up the latest Infrahub locally, retrieve the Docker Compose file from infrahub.opsmill.io. You can also specify a specific version or the develop branch in the URL.
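For example, to pin a release or track the develop branch (the version number below is only illustrative):
curl https://infrahub.opsmill.io/1.3.5 | docker compose -p infrahub -f - up -d
curl https://infrahub.opsmill.io/develop | docker compose -p infrahub -f - up -d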
Prerequisites
- Docker (version 24.x minimum)
- Docker Compose
Start an Infrahub environment
On macOS:
curl https://infrahub.opsmill.io | docker compose -p infrahub -f - up -d
On Linux:
curl https://infrahub.opsmill.io | sudo docker compose -p infrahub -f - up -d
After running the command, you should see Docker downloading the necessary images and starting the containers.
Verify that Infrahub is running by accessing the web interface or checking container status:
docker ps | grep infrahub
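If the containers are up, the web interface should be reachable in your browser. Assuming the default port mapping of the Compose project (port 8000), you can also probe the health endpoint used by the load balancer examples later in this guide:
curl http://localhost:8000/api/health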
Stop and remove an Infrahub environment
On macOS:
curl https://infrahub.opsmill.io | docker compose -p infrahub -f - down -v
On Linux:
curl https://infrahub.opsmill.io | sudo docker compose -p infrahub -f - down -v
Cloning the repository
The recommended method for running Infrahub for development uses the Docker Compose files included with the project, combined with helper commands defined in invoke.
This method is suitable for local development and demo environments. It is not recommended for production deployments.
Prerequisites
- Docker and Docker Compose
- Git
- Python with Poetry (used in the steps below to install dependencies and run the invoke helper commands)
Step 1: clone the repository
Create the base directory for the Infrahub installation. For this guide, we'll use /opt/infrahub.
cd /opt/
Usage of the /opt/infrahub directory is merely a suggestion. You can use any directory on your system, especially for development or demo purposes, for example:
mkdir -p ~/source/
cd ~/source/
Clone the Infrahub repository into an infrahub subdirectory:
git clone --recursive https://github.com/opsmill/infrahub.git
The git clone command should generate output similar to the following:
Cloning into 'infrahub'...
remote: Enumerating objects: 1312, done.
remote: Counting objects: 100% (1312/1312), done.
remote: Compressing objects: 100% (1150/1150), done.
remote: Total 1312 (delta 187), reused 691 (delta 104), pack-reused 0
Receiving objects: 100% (1312/1312), 33.37 MiB | 14.46 MiB/s, done.
Resolving deltas: 100% (187/187), done.
Step 2: install dependencies
Navigate to the cloned Infrahub directory:
cd infrahub
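If Poetry is not already installed on your system, one common way to install it is through pipx (assuming pipx is available):
pipx install poetry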
Install the poetry dependencies by running:
poetry install
You should see Poetry installing the required dependencies. When complete, you'll be returned to the command prompt without errors.
Step 3: start Infrahub
Start and initialize Infrahub:
poetry run invoke demo.start
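Once the command completes, Infrahub and its dependencies are running locally. The project ships additional invoke helper tasks; you can list them with the command below, and there is typically a matching task to stop the environment (assumption: demo.stop):
poetry run invoke --list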
Using Helm and Kubernetes
It's possible to deploy Infrahub on Kubernetes using Helm charts. This method is suitable for production deployments and provides a more resilient architecture.
- Infrahub Helm Chart: https://github.com/opsmill/infrahub-helm/tree/stable/charts/infrahub
- ArtifactHub: https://artifacthub.io/packages/helm/infrahub/infrahub
Prerequisites
- A Kubernetes cluster
- Helm installed on your system
Production deployment requirements
The following are required for production deployments using Helm:
- Data persistence for the database must be enabled
- Multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed; you can use the affinity variable to define the affinity policy for the pods
- S3 storage should be configured for the Infrahub API Server; it is required if you have multiple replicas
We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) for production. They are present to ease deployment on non-production environments.
Step 1: fill in the values file
Create a values.yml file with the following configuration:
infrahubServer:
replicas: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- infrahub-server
topologyKey: topology.kubernetes.io/zone
persistence:
enabled: false
ingress:
enabled: true
infrahubServer:
env:
INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: INFO
INFRAHUB_PRODUCTION: "true"
INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: xxxx
AWS_SECRET_ACCESS_KEY: xxxx
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: https://s3
infrahubTaskWorker:
replicas: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- infrahub-task-worker
topologyKey: topology.kubernetes.io/zone
neo4j:
services:
admin:
enabled: true
volumes:
data:
mode: dynamic
dynamic:
storageClassName: premium-rwo
requests:
storage: 100Gi
Be sure to replace the placeholder values with your actual values.
Step 2: install the chart
Install using a local chart:
helm install infrahub -f values.yml path/to/infrahub/chart
Or install using the OpsMill registry:
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub
Verify the installation by checking that all pods are running:
kubectl get pods -l app=infrahub
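To see how Infrahub is exposed, you can also list the services and any ingress created by the chart, assuming it applies the same app=infrahub label used above (adjust the selector if your release uses different labels):
kubectl get svc,ingress -l app=infrahub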
Enterprise
Infrahub Enterprise is based on the Community version, with several enhancements for:
- Enterprise features
- High availability
- Better performance
- Security hardening (Docker image, etc.)
Infrahub Enterprise can be deployed using the same methods as Infrahub Community.
- Docker Compose via curl
- Kubernetes with Helm
Using curl and Docker Compose
To quickly spin up the latest Infrahub Enterprise locally, retrieve the Docker Compose file from infrahub.opsmill.io/enterprise. You can also specify a specific version in the URL, for example https://infrahub.opsmill.io/enterprise/1.3.5.
You can also specify a sizing preset in the URL. This automatically configures the replica count for each component according to your sizing plan:
- https://infrahub.opsmill.io/enterprise?size=small (requires 8 GB of RAM)
- https://infrahub.opsmill.io/enterprise/1.3.5?size=medium (requires 24 GB of RAM)
- https://infrahub.opsmill.io/enterprise?size=large (requires 64 GB of RAM)
Prerequisites
- Docker (version 24.x minimum)
- Docker Compose
Start an Infrahub Enterprise environment
On macOS:
curl https://infrahub.opsmill.io/enterprise | docker compose -p infrahub -f - up -d
On Linux:
curl https://infrahub.opsmill.io/enterprise | sudo docker compose -p infrahub -f - up -d
Stop and remove an Infrahub Enterprise environment
On macOS:
curl https://infrahub.opsmill.io/enterprise | docker compose -p infrahub -f - down -v
On Linux:
curl https://infrahub.opsmill.io/enterprise | sudo docker compose -p infrahub -f - down -v
Using Helm and Kubernetes
The Enterprise Helm chart is based on the original Infrahub chart and uses it as a Helm dependency. Most configuration related to Infrahub goes inside the infrahub top-level key.
Production deployment requirements
The following are required for production deployments using Helm:
- Data persistence for the database must be enabled
- Multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed; you can use the affinity variable to define the affinity policy for the pods
- S3 storage should be configured for the Infrahub API Server; it is required if you have multiple replicas
We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) for production. They are present to ease deployment on non-production environments.
Prerequisites
- A Kubernetes cluster
- Helm installed on your system
Step 1: fill in the values file
Create a values.yml file with the following configuration:
infrahub:
infrahubServer:
replicas: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- infrahub-server
topologyKey: topology.kubernetes.io/zone
persistence:
enabled: false
ingress:
enabled: true
infrahubServer:
env:
INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: INFO
INFRAHUB_PRODUCTION: "true"
INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: xxxx
AWS_SECRET_ACCESS_KEY: xxxx
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: https://s3
infrahubTaskWorker:
replicas: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- infrahub-task-worker
topologyKey: topology.kubernetes.io/zone
neo4j:
services:
admin:
enabled: true
volumes:
data:
mode: dynamic
dynamic:
storageClassName: premium-rwo
requests:
storage: 100Gi
Be sure to replace the placeholder values with your actual values.
You can also use one of the configuration preset values files. These automatically configure the replica count for each component according to your sizing plan. They are available here:
- Small size config preset (requires 8 GB of RAM): https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.small.yaml
- Medium size config preset (requires 24 GB of RAM): https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.medium.yaml
- Large size config preset (requires 64 GB of RAM): https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.large.yaml
Step 2: install the chart
Install using a local chart:
helm install infrahub -f values.yml path/to/infrahub-enterprise/chart
Or install using the OpsMill registry:
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise
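If you downloaded one of the sizing presets listed above, you can layer it beneath your own overrides; later -f files take precedence, so your values.yml overrides the preset (values.small.yaml here is an assumption, use whichever preset you saved locally):
helm install infrahub -f values.small.yaml -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise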
Verify the installation by checking that all pods are running:
kubectl get pods -l app=infrahub
High availability deployment examples
The following examples demonstrate how to deploy Infrahub in a highly available configuration using Kubernetes. These deployments provide resilience, scalability, and are suitable for production environments.
These examples are for reference purposes and may require customization for your specific environment. Ensure you understand the requirements and dependencies before deploying to production.
Using Terraform
Example HA deployment using Terraform on a 3-node Kubernetes cluster
terraform {
required_providers {
kubectl = {
source = "alekc/kubectl"
version = "2.1.3"
}
}
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
}
provider "kubectl" {
config_path = "~/.kube/config"
}
locals {
target_namespace = "infrahub"
infrahub_version = "1.2.5"
}
### Infrahub
resource "helm_release" "infrahub_ha" {
depends_on = [helm_release.taskmanager_ha, helm_release.cache_ha, helm_release.messagequeue_ha, helm_release.database_ha, helm_release.objectstore_ha]
name = "infrahub"
chart = "oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise"
version = "3.3.5"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
infrahub:
global:
infrahubTag: ${local.infrahub_version}
infrahubServer:
replicas: 3
persistence:
enabled: false
infrahubServer:
env:
INFRAHUB_DB_ADDRESS: infrahub-headless
INFRAHUB_DB_PROTOCOL: neo4j # required for client-side routing
INFRAHUB_BROKER_ADDRESS: messagequeue-rabbitmq
INFRAHUB_CACHE_ADDRESS: redis-sentinel-proxy
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_WORKFLOW_ADDRESS: prefect-server
INFRAHUB_WORKFLOW_PORT: 4200
PREFECT_API_URL: "http://prefect-server:4200/api"
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: admin
AWS_SECRET_ACCESS_KEY: password
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: objectstore-minio:9000
AWS_S3_USE_SSL: "false"
INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: INFO
INFRAHUB_PRODUCTION: "false"
INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
INFRAHUB_GIT_REPOSITORIES_DIRECTORY: "/opt/infrahub/git"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
service: infrahub-server
topologyKey: kubernetes.io/hostname
infrahubTaskWorker:
replicas: 3
infrahubTaskWorker:
env:
INFRAHUB_DB_ADDRESS: infrahub-headless
INFRAHUB_DB_PROTOCOL: neo4j # required for client-side routing
INFRAHUB_BROKER_ADDRESS: messagequeue-rabbitmq
INFRAHUB_CACHE_ADDRESS: redis-sentinel-proxy
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_WORKFLOW_ADDRESS: prefect-server
INFRAHUB_WORKFLOW_PORT: 4200
PREFECT_API_URL: "http://prefect-server:4200/api"
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: admin
AWS_SECRET_ACCESS_KEY: password
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: objectstore-minio:9000
AWS_S3_USE_SSL: "false"
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: DEBUG
INFRAHUB_PRODUCTION: "false"
INFRAHUB_API_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_TIMEOUT: "60"
INFRAHUB_GIT_REPOSITORIES_DIRECTORY: "/opt/infrahub/git"
PREFECT_WORKER_QUERY_SECONDS: 3
PREFECT_AGENT_QUERY_INTERVAL: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
service: infrahub-task-worker
topologyKey: kubernetes.io/hostname
redis:
enabled: false
neo4j:
enabled: false
rabbitmq:
enabled: false
prefect-server:
enabled: false
EOT
]
}
#### Infrahub dependencies
resource "helm_release" "database_ha_service" {
depends_on = [helm_release.database_ha]
name = "database-service"
chart = "neo4j-headless-service"
repository = "https://helm.neo4j.com/neo4j/"
version = "5.20.0"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
neo4j:
name: "infrahub"
EOT
]
}
resource "helm_release" "database_ha" {
count = 3
name = "database-${count.index}"
chart = "neo4j"
repository = "https://helm.neo4j.com/neo4j/"
version = "5.20.0"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
neo4j:
name: "infrahub"
minimumClusterSize: 3
resources:
cpu: "4"
memory: "8Gi"
password: "admin"
edition: "enterprise"
acceptLicenseAgreement: "yes"
config:
dbms.security.auth_minimum_password_length: "4"
dbms.security.procedures.unrestricted: apoc.*
logInitialPassword: false
volumes:
data:
mode: "defaultStorageClass"
defaultStorageClass:
accessModes:
- ReadWriteOnce
requests:
storage: 10Gi
services:
neo4j:
enabled: false
EOT
]
}
resource "helm_release" "messagequeue_ha" {
name = "messagequeue"
chart = "oci://registry-1.docker.io/bitnamicharts/rabbitmq"
version = "14.4.1"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
replicaCount: 3
auth:
username: infrahub
password: infrahub
metrics:
enabled: true
startupProbe:
enabled: true
podAntiAffinityPreset: hard
EOT
]
}
resource "helm_release" "objectstore_ha" {
name = "objectstore"
chart = "oci://registry-1.docker.io/bitnamicharts/minio"
version = "15.0.5"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
mode: distributed
statefulset:
replicaCount: 3
drivesPerNode: 2
auth:
rootUser: admin
rootPassword: password
provisioning:
enabled: true
buckets:
- name: infrahub-data
podAntiAffinityPreset: hard
EOT
]
}
#### Task manager
# Workaround since Prefect Helm chart does not use StatefulSets and multiple pod initialization causes issue with concurrent DB init
# StatefulSet would solve this issue because it creates pods sequentially
# Workaround by installing the Helm chart with one replica, and then scale up to 3 replicas
resource "null_resource" "scale_up_taskmanager" {
depends_on = [helm_release.infrahub_ha]
provisioner "local-exec" {
command = "kubectl scale -n ${local.target_namespace} deployment/prefect-server --replicas=3"
}
}
resource "helm_release" "taskmanager_ha" {
depends_on = [helm_release.cache_ha, kubectl_manifest.taskmanagerdb_ha]
name = "taskmanager"
chart = "prefect-server"
repository = "https://prefecthq.github.io/prefect-helm"
version = "2025.2.21193831"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
global:
prefect:
image:
repository: registry.opsmill.io/opsmill/infrahub-enterprise
prefectTag: ${local.infrahub_version}
server:
replicaCount: 1
command:
- /usr/bin/tini
- -g
- --
args:
- uvicorn
- --host
- "0.0.0.0"
- --port
- "4200"
- --factory
- infrahub.prefect_server.app:create_infrahub_prefect
env:
- name: PREFECT_UI_SERVE_BASE
value: /
- name: PREFECT_MESSAGING_BROKER
value: prefect_redis.messaging
- name: PREFECT_MESSAGING_CACHE
value: prefect_redis.messaging
- name: PREFECT_REDIS_MESSAGING_HOST
value: redis-sentinel-proxy
- name: PREFECT_REDIS_MESSAGING_DB
value: "1"
secret:
create: true
name: ""
username: "prefect"
password: "prefect"
host: "taskmanagerdb-rw"
port: "5432"
database: "prefect"
serviceAccount:
create: false
postgresql:
enabled: false
EOT
]
}
resource "kubernetes_service_v1" "redis_sentinel_proxy_svc" {
depends_on = [helm_release.cache_ha]
metadata {
name = "redis-sentinel-proxy"
namespace = local.target_namespace
labels = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
spec {
type = "ClusterIP"
port {
port = 6379
target_port = "redis"
name = "cache-ha"
}
selector = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
}
resource "kubernetes_deployment_v1" "redis_sentinel_proxy_deployment" {
depends_on = [helm_release.cache_ha]
metadata {
name = "redis-sentinel-proxy"
namespace = local.target_namespace
labels = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
spec {
replicas = 2
selector {
match_labels = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
template {
metadata {
labels = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
spec {
affinity {
pod_anti_affinity {
required_during_scheduling_ignored_during_execution {
label_selector {
match_labels = {
"app.kubernetes.io/name" = "redis-sentinel-proxy"
}
}
topology_key = "kubernetes.io/hostname"
}
}
}
container {
name = "redis-sentinel-proxy"
image = "patrickdk/redis-sentinel-proxy:v1.2"
args = [
"-master",
"mymaster",
"-listen",
":6379",
"-sentinel",
"cache:26379",
]
port {
container_port = 6379
name = "redis"
}
}
}
}
}
}
resource "helm_release" "cache_ha" {
name = "cache"
chart = "redis"
repository = "https://charts.bitnami.com/bitnami"
version = "19.5.2"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
nameOverride: cache
architecture: replication
auth:
enabled: false
master:
podAntiAffinityPreset: hard
persistence:
enabled: true
service:
ports:
redis: 6379
replicas:
replicaCount: 3
podAntiAffinityPreset: hard
sentinel:
enabled: true
EOT
]
}
resource "kubectl_manifest" "taskmanagerdb_ha" {
depends_on = [helm_release.taskmanagerdb_ha_operator, kubernetes_secret_v1.db_secret]
yaml_body = <<EOT
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: taskmanagerdb
namespace: ${local.target_namespace}
spec:
instances: 3
storage:
size: 10Gi
postgresql:
pg_hba:
- host all all 10.0.0.0/8 md5
bootstrap:
initdb:
database: prefect
owner: prefect
secret:
name: prefect-user
EOT
}
resource "kubernetes_secret_v1" "db_secret" {
depends_on = [helm_release.taskmanagerdb_ha_operator]
metadata {
name = "prefect-user"
namespace = local.target_namespace
}
data = {
username = "prefect"
password = "prefect"
}
}
resource "helm_release" "taskmanagerdb_ha_operator" {
name = "cnpg"
chart = "cloudnative-pg"
repository = "https://cloudnative-pg.github.io/charts"
version = "0.23.2"
create_namespace = true
namespace = local.target_namespace
values = [
<<EOT
EOT
]
}
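To deploy this example, run the standard Terraform workflow from the directory containing the configuration:
terraform init
terraform plan
terraform apply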
Using Helm
Example HA deployment using Helm and kubectl
Create redis-values.yaml for the Redis cache:
---
nameOverride: cache
architecture: replication
auth:
enabled: false
master:
podAntiAffinityPreset: hard
persistence:
enabled: true
service:
ports:
redis: 6379
replicas:
replicaCount: 3
podAntiAffinityPreset: hard
sentinel:
enabled: true
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install cache bitnami/redis --version 19.5.2 --namespace infrahub --create-namespace --values redis-values.yaml
Create redis-sentinel-proxy-deploy.yaml for the Redis Sentinel proxy Deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-sentinel-proxy
namespace: infrahub
labels:
app.kubernetes.io/name: redis-sentinel-proxy
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: redis-sentinel-proxy
template:
metadata:
labels:
app.kubernetes.io/name: redis-sentinel-proxy
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/name: redis-sentinel-proxy
topologyKey: kubernetes.io/hostname
containers:
- name: redis-sentinel-proxy
image: patrickdk/redis-sentinel-proxy:v1.2
args:
- "-master"
- "mymaster"
- "-listen"
- ":6379"
- "-sentinel"
- "cache:26379"
ports:
- containerPort: 6379
name: redis
Create redis-sentinel-proxy-svc.yaml for the Redis Sentinel proxy Service:
---
apiVersion: v1
kind: Service
metadata:
name: redis-sentinel-proxy
namespace: infrahub
labels:
app.kubernetes.io/name: redis-sentinel-proxy
spec:
type: ClusterIP
ports:
- port: 6379
targetPort: redis
name: cache-ha
selector:
app.kubernetes.io/name: redis-sentinel-proxy
kubectl apply -f redis-sentinel-proxy-deploy.yaml --namespace infrahub
kubectl apply -f redis-sentinel-proxy-svc.yaml --namespace infrahub
Create rabbitmq-values.yaml for the RabbitMQ message queue:
---
replicaCount: 3
auth:
username: infrahub
password: infrahub
metrics:
enabled: true
startupProbe:
enabled: true
podAntiAffinityPreset: hard
helm install messagequeue oci://registry-1.docker.io/bitnamicharts/rabbitmq --version 14.4.1 --namespace infrahub --create-namespace --values rabbitmq-values.yaml
Create neo4j-values.yaml for the Neo4j database cluster members:
---
neo4j:
name: "infrahub"
minimumClusterSize: 3
resources:
cpu: "4"
memory: "8Gi"
password: "admin"
edition: "enterprise"
acceptLicenseAgreement: "yes"
config:
dbms.security.auth_minimum_password_length: "4"
dbms.security.procedures.unrestricted: apoc.*
logInitialPassword: false
volumes:
data:
mode: "defaultStorageClass"
defaultStorageClass:
accessModes:
- ReadWriteOnce
requests:
storage: 10Gi
services:
neo4j:
enabled: false
helm repo add neo4j https://helm.neo4j.com/neo4j/
helm repo update
helm install database-0 neo4j/neo4j --version 5.20.0 --namespace infrahub --values neo4j-values.yaml
helm install database-1 neo4j/neo4j --version 5.20.0 --namespace infrahub --values neo4j-values.yaml
helm install database-2 neo4j/neo4j --version 5.20.0 --namespace infrahub --values neo4j-values.yaml
Once all instances are running, create service-values.yaml and install the headless service:
---
neo4j:
name: "infrahub"
helm install database-service neo4j/neo4j-headless-service --version 5.20.0 --namespace infrahub --values service-values.yaml
Create minio-values.yaml for the MinIO object store:
---
mode: distributed
statefulset:
replicaCount: 3
drivesPerNode: 2
auth:
rootUser: admin
rootPassword: password
provisioning:
enabled: true
buckets:
- name: infrahub-data
podAntiAffinityPreset: hard
helm install objectstore oci://registry-1.docker.io/bitnamicharts/minio --version 15.0.5 --namespace infrahub --create-namespace --values minio-values.yaml
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg cnpg/cloudnative-pg --version 0.23.2 --namespace infrahub --create-namespace
kubectl create secret generic prefect-user --namespace infrahub --from-literal=username=prefect --from-literal=password=prefect
Create taskmanagerdb.yaml with the CloudNativePG cluster for the task manager database:
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: taskmanagerdb
namespace: infrahub
spec:
instances: 3
storage:
size: 10Gi
postgresql:
pg_hba:
- host all all 10.0.0.0/8 md5
bootstrap:
initdb:
database: prefect
owner: prefect
secret:
name: prefect-user
kubectl apply -f taskmanagerdb.yaml --namespace infrahub
Create prefect-values.yaml for the Prefect task manager:
---
global:
prefect:
image:
repository: registry.opsmill.io/opsmill/infrahub-enterprise
prefectTag: 1.2.5
server:
replicaCount: 1
command:
- /usr/bin/tini
- -g
- --
args:
- uvicorn
- --host
- "0.0.0.0"
- --port
- "4200"
- --factory
- infrahub.prefect_server.app:create_infrahub_prefect
env:
- name: PREFECT_UI_SERVE_BASE
value: /
- name: PREFECT_MESSAGING_BROKER
value: prefect_redis.messaging
- name: PREFECT_MESSAGING_CACHE
value: prefect_redis.messaging
- name: PREFECT_REDIS_MESSAGING_HOST
value: redis-sentinel-proxy
- name: PREFECT_REDIS_MESSAGING_DB
value: "1"
secret:
create: true
name: ""
username: "prefect"
password: "prefect"
host: "taskmanagerdb-rw"
port: "5432"
database: "prefect"
serviceAccount:
create: false
postgresql:
enabled: false
helm install taskmanager prefect-server --repo https://prefecthq.github.io/prefect-helm --version 2025.2.21193831 --namespace infrahub --create-namespace --values prefect-values.yaml
Deploy the Prefect Helm chart with a single replica initially to prevent concurrent database initialization issues. Wait for the first pod to become ready and complete database initialization. Once initialized, scale up to your desired number of replicas for high availability.
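One way to wait for that first replica to finish initializing before scaling, assuming the chart's default Deployment name of prefect-server (as used in the scale command below):
kubectl rollout status deployment/prefect-server --namespace infrahub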
kubectl scale deployment/prefect-server --replicas=3 --namespace infrahub
Create infrahub-values.yaml for the Infrahub Enterprise chart:
---
infrahub:
global:
infrahubTag: 1.2.5
infrahubServer:
replicas: 3
persistence:
enabled: false
infrahubServer:
env:
INFRAHUB_DB_ADDRESS: infrahub-headless
INFRAHUB_DB_PROTOCOL: neo4j # required for client-side routing
INFRAHUB_BROKER_ADDRESS: messagequeue-rabbitmq
INFRAHUB_CACHE_ADDRESS: redis-sentinel-proxy
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_WORKFLOW_ADDRESS: prefect-server
INFRAHUB_WORKFLOW_PORT: 4200
PREFECT_API_URL: "http://prefect-server:4200/api"
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: admin
AWS_SECRET_ACCESS_KEY: password
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: objectstore-minio:9000
AWS_S3_USE_SSL: "false"
INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: INFO
INFRAHUB_PRODUCTION: "false"
INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
INFRAHUB_GIT_REPOSITORIES_DIRECTORY: "/opt/infrahub/git"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
service: infrahub-server
topologyKey: kubernetes.io/hostname
infrahubTaskWorker:
replicas: 3
infrahubTaskWorker:
env:
INFRAHUB_DB_ADDRESS: infrahub-headless
INFRAHUB_DB_PROTOCOL: neo4j # required for client-side routing
INFRAHUB_BROKER_ADDRESS: messagequeue-rabbitmq
INFRAHUB_CACHE_ADDRESS: redis-sentinel-proxy
INFRAHUB_CACHE_PORT: 6379
INFRAHUB_WORKFLOW_ADDRESS: prefect-server
INFRAHUB_WORKFLOW_PORT: 4200
PREFECT_API_URL: "http://prefect-server:4200/api"
INFRAHUB_STORAGE_DRIVER: s3
AWS_ACCESS_KEY_ID: admin
AWS_SECRET_ACCESS_KEY: password
AWS_S3_BUCKET_NAME: infrahub-data
AWS_S3_ENDPOINT_URL: objectstore-minio:9000
AWS_S3_USE_SSL: "false"
INFRAHUB_DB_TYPE: neo4j
INFRAHUB_LOG_LEVEL: DEBUG
INFRAHUB_PRODUCTION: "false"
INFRAHUB_API_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
INFRAHUB_TIMEOUT: "60"
INFRAHUB_GIT_REPOSITORIES_DIRECTORY: "/opt/infrahub/git"
PREFECT_WORKER_QUERY_SECONDS: 3
PREFECT_AGENT_QUERY_INTERVAL: 3
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
service: infrahub-task-worker
topologyKey: kubernetes.io/hostname
redis:
enabled: false
neo4j:
enabled: false
rabbitmq:
enabled: false
prefect-server:
enabled: false
helm install infrahub oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise --version 3.3.5 --namespace infrahub --create-namespace --values infrahub-values.yaml
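Once the chart is deployed, verify that the cache, database, message queue, object store, task manager, and Infrahub pods are all running:
kubectl get pods --namespace infrahub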
Load balancer configuration
For high availability deployments, a load balancer is useful to distribute traffic across multiple Infrahub API server instances. The load balancer must be configured to properly handle both HTTP/HTTPS and WebSocket connections.
HAProxy configuration example
global
maxconn 4096
log stdout local0
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
option httplog
# Frontend for Infrahub API
frontend infrahub_frontend
bind *:80
bind *:443 ssl crt /etc/ssl/certs/infrahub.pem
# Redirect HTTP to HTTPS
redirect scheme https if !{ ssl_fc }
# ACL for WebSocket connections
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket_path path_beg /ws /graphql-ws
# Use WebSocket backend for WS connections
use_backend infrahub_websocket if is_websocket || is_websocket_path
# Default to HTTP backend
default_backend infrahub_http
# Backend for regular HTTP/HTTPS traffic
backend infrahub_http
balance roundrobin
option httpchk GET /api/health
http-check expect status 200
# Sticky sessions for GraphQL subscriptions
cookie SERVERID insert indirect nocache
server infrahub-api-1 10.0.1.10:8000 check cookie api1
server infrahub-api-2 10.0.1.11:8000 check cookie api2
server infrahub-api-3 10.0.1.12:8000 check cookie api3
# Backend for WebSocket connections
backend infrahub_websocket
balance source
option http-server-close
option forceclose
# WebSocket specific health check
option httpchk GET /api/health
http-check expect status 200
server infrahub-api-1 10.0.1.10:8000 check
server infrahub-api-2 10.0.1.11:8000 check
server infrahub-api-3 10.0.1.12:8000 check
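Before reloading HAProxy, you can validate the configuration file (assuming the standard configuration path):
haproxy -c -f /etc/haproxy/haproxy.cfg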
NGINX configuration example
upstream infrahub_backend {
# IP hash for session affinity
ip_hash;
server 10.0.1.10:8000 max_fails=3 fail_timeout=30s;
server 10.0.1.11:8000 max_fails=3 fail_timeout=30s;
server 10.0.1.12:8000 max_fails=3 fail_timeout=30s;
}
server {
listen 80;
server_name infrahub.example.com;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name infrahub.example.com;
ssl_certificate /etc/ssl/certs/infrahub.pem;
ssl_certificate_key /etc/ssl/private/infrahub.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Client body size for large GraphQL queries
client_max_body_size 10M;
# Timeouts for long-running operations
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
# Health check endpoint
location /health {
access_log off;
proxy_pass http://infrahub_backend/api/health;
}
# WebSocket support for GraphQL subscriptions
location ~ ^/(ws|graphql-ws) {
proxy_pass http://infrahub_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Disable buffering for WebSocket
proxy_buffering off;
}
# Regular API traffic
location / {
proxy_pass http://infrahub_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Enable keepalive
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
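Test the configuration and reload NGINX to apply it:
sudo nginx -t
sudo nginx -s reload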
Related resources
- Database backup and restore - Learn how to backup and restore your Infrahub database
- High availability architecture - Understand resilient deployment architectures
- Local demo environment - Explore the demo environment configuration
- Hardware requirements - Review system requirements