Install Infrahub Enterprise
Infrahub Enterprise is based on the Community version, with several enhancements for:
- Enterprise features
- High availability
- Better performance
- Security hardening (Docker image, etc.)
Infrahub Enterprise can be deployed using the same methods as Infrahub Community.
- Docker Compose via curl
- Kubernetes with Helm
- Bare metal
Using curl and Docker Compose
To quickly spin up the latest Infrahub Enterprise locally, retrieve the Docker Compose file from infrahub.opsmill.io/enterprise.
You can also pin a specific version in the URL path, and select a sizing preset with the size query parameter. A preset automatically configures the replica count for each component according to your sizing plan:
- https://infrahub.opsmill.io/enterprise?size=small (requires 16 GB of RAM)
- https://infrahub.opsmill.io/enterprise/1.3.5?size=medium (requires 32 GB of RAM)
- https://infrahub.opsmill.io/enterprise/stable?size=large (requires 64 GB of RAM)
| Size | Total required memory | API workers | Task workers | Task manager API workers | Task manager background workers | DB heap size | DB page cache size |
|---|---|---|---|---|---|---|---|
| small | 16 GB | 4 | 2 | 1 | 1 | 8G | 1G |
| medium | 32 GB | 4 | 4 | 2 | 2 | 24G | 4G |
| medium-data | 32 GB | 4 | 2 | 1 | 1 | 24G | 4G |
| large | 64 GB | 4 | 8 | 4 | 2 | 31G | 16G |
| large-data | 64 GB | 4 | 2 | 1 | 1 | 31G | 16G |
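The version and size options compose into a single download URL: an optional version path segment followed by an optional size query parameter. A small sketch, using a helper function of our own (not part of Infrahub):

```shell
# Build an Infrahub Enterprise download URL from an optional version
# and sizing preset. The helper name and variables are ours.
build_url() {
  version="$1"   # e.g. "1.3.5", "stable", or "" for latest
  size="$2"      # e.g. "small", "medium", or "" for the default
  url="https://infrahub.opsmill.io/enterprise"
  [ -n "$version" ] && url="$url/$version"
  [ -n "$size" ] && url="$url?size=$size"
  echo "$url"
}

build_url "" small        # → https://infrahub.opsmill.io/enterprise?size=small
build_url stable large    # → https://infrahub.opsmill.io/enterprise/stable?size=large
```

The resulting URL can be passed straight to curl as in the commands below.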
Prerequisites
- Docker (version 24.x minimum)
- Docker Compose
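As a quick preflight, the installed Docker version can be checked against the 24.x minimum. A hypothetical helper, assuming you feed it the version number reported by docker --version (the parsing logic is ours):

```shell
# Check a Docker version string such as "24.0.7" against the 24.x
# minimum. The helper name and parsing are ours, not an official tool.
docker_version_ok() {
  major="${1%%.*}"        # keep only the leading major version
  [ "$major" -ge 24 ]
}

if docker_version_ok "24.0.7"; then
  echo "Docker is recent enough"
fi
```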
Start an Infrahub Enterprise environment
macOS:

```shell
curl https://infrahub.opsmill.io/enterprise > docker-compose.yml
docker compose -p infrahub up -d
```

Linux:

```shell
curl https://infrahub.opsmill.io/enterprise > docker-compose.yml
sudo docker compose -p infrahub up -d
```
Stop and remove an Infrahub Enterprise environment
macOS:

```shell
curl https://infrahub.opsmill.io/enterprise > docker-compose.yml
docker compose -p infrahub down -v
```

Linux:

```shell
curl https://infrahub.opsmill.io/enterprise > docker-compose.yml
sudo docker compose -p infrahub down -v
```

Note that the -v flag also removes the volumes, permanently deleting all Infrahub data; omit it to stop and remove the containers while keeping the data.
Enable observability
To deploy Infrahub with a built-in observability stack (Grafana, Prometheus, Loki, Alloy), add the ?observability=true query parameter to the URL. This can be combined with existing parameters like size:
macOS:

```shell
curl "https://infrahub.opsmill.io/enterprise?observability=true" > docker-compose.yml
docker compose -p infrahub up -d
```

Linux:

```shell
curl "https://infrahub.opsmill.io/enterprise?observability=true" > docker-compose.yml
sudo docker compose -p infrahub up -d
```
Combined with a sizing preset:

```shell
curl "https://infrahub.opsmill.io/enterprise?size=small&observability=true" > docker-compose.yml
```
Once running, Grafana is accessible at http://localhost:3500 with default credentials admin / admin.
For instructions on upgrading an existing observability stack, see the Upgrade guide.
Infrahub can export OpenTelemetry traces so you can follow a single request as it moves across the API server, task workers, and the database. Traces are sent to the bundled Tempo instance and surfaced in Grafana under the Tempo data source.
To enable tracing, add the following to a .env file alongside the compose file:
```
INFRAHUB_TRACE_ENABLE=true
INFRAHUB_TRACE_EXPORTER_TYPE=otlp
INFRAHUB_TRACE_EXPORTER_PROTOCOL=grpc
INFRAHUB_TRACE_EXPORTER_ENDPOINT=http://infrahub-tempo:4317
INFRAHUB_TRACE_INSECURE=true
```
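Putting this together, the settings can be written next to docker-compose.yml and the stack recreated so Compose picks them up. A sketch (the apply step is commented out so the snippet runs without a live stack):

```shell
# Write the tracing settings into .env beside docker-compose.yml.
cat > .env <<'EOF'
INFRAHUB_TRACE_ENABLE=true
INFRAHUB_TRACE_EXPORTER_TYPE=otlp
INFRAHUB_TRACE_EXPORTER_PROTOCOL=grpc
INFRAHUB_TRACE_EXPORTER_ENDPOINT=http://infrahub-tempo:4317
INFRAHUB_TRACE_INSECURE=true
EOF

# Recreate the services so the variables take effect:
# docker compose -p infrahub up -d

grep -c '^INFRAHUB_TRACE' .env   # → 5
```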
Using Helm and Kubernetes
The Enterprise Helm chart is based on the original Infrahub chart and uses it as a Helm dependency.
Most configuration related to Infrahub goes inside the infrahub top-level key.
Production deployment requirements
The following are required for production deployments using Helm:
- Data persistence for the database must be enabled
- Multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed; you can use the affinity variable to define the affinity policy for the pods
- S3 storage should be configured for the Infrahub API Server; it is required if you have multiple replicas
We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) in production; they are provided only to ease deployment in non-production environments.
Prerequisites
- A Kubernetes cluster
- Helm installed on your system
By default, the Helm chart disables storage persistence on all components (Neo4j, RabbitMQ, Redis) to enable quicker deployments using emptyDir storage and reduce requirements for test installations.
However, if pods get rescheduled on the Kubernetes cluster for any reason, all data will be lost. This can happen unexpectedly due to node maintenance, resource pressure, or cluster updates.
For long-running tests or lab environments, you should either:
- Enable storage persistence for the Neo4j database at minimum (but also on the other components if possible)
- Use Docker Compose instead, which provides more predictable behavior for non-production environments
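For the first option, a minimal sketch of enabling dynamic persistence for the Neo4j data volume alone, assuming the Neo4j dependency is configured under the infrahub key as in the Step 1 values file (the storageClassName is an example; use a class available in your cluster):

```yaml
# Sketch: persist only the Neo4j data volume; other components keep
# using ephemeral emptyDir storage.
infrahub:
  neo4j:
    volumes:
      data:
        mode: dynamic
        dynamic:
          storageClassName: standard   # example; pick your cluster's class
          requests:
            storage: 50Gi
```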
Step 1: Fill in the values file
Create a values.yml file with the following configuration:
```yaml
infrahub:
  infrahubServer:
    replicas: 3
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: service
                  operator: In
                  values:
                    - infrahub-server
            topologyKey: topology.kubernetes.io/zone
    persistence:
      enabled: false
    ingress:
      enabled: true
    env:
      INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
      INFRAHUB_CACHE_PORT: 6379
      INFRAHUB_DB_TYPE: neo4j
      INFRAHUB_LOG_LEVEL: INFO
      INFRAHUB_PRODUCTION: "true"
      INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
      INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
      INFRAHUB_STORAGE_DRIVER: s3
      AWS_ACCESS_KEY_ID: xxxx
      AWS_SECRET_ACCESS_KEY: xxxx
      AWS_S3_BUCKET_NAME: infrahub-data
      AWS_S3_ENDPOINT_URL: https://s3
  infrahubTaskWorker:
    replicas: 3
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: service
                  operator: In
                  values:
                    - infrahub-task-worker
            topologyKey: topology.kubernetes.io/zone
  neo4j:
    services:
      admin:
        enabled: true
    volumes:
      data:
        mode: dynamic
        dynamic:
          storageClassName: premium-rwo
          requests:
            storage: 100Gi
```
Be sure to replace the placeholder values with your actual values. See the configuration reference for details on all available environment variables.
Starting with version 4.0.0 of the Helm chart, you can use flavored Helm charts that come pre-configured with sizing presets. These charts automatically configure replica counts and resource allocations for each component according to your sizing plan:
| Flavor | Chart version example | Required memory |
|---|---|---|
| small | 4.0.0-small | 16 GB |
| medium | 4.0.0-medium | 32 GB |
| medium-data | 4.0.0-medium-data | 32 GB |
| large | 4.0.0-large | 64 GB |
| large-data | 4.0.0-large-data | 64 GB |
You can review the configuration values for each sizing preset here:
- Small: https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.small.yaml
- Medium: https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.medium.yaml
- Medium (data): https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.medium-data.yaml
- Large: https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.large.yaml
- Large (data): https://github.com/opsmill/infrahub-helm/blob/stable/charts/infrahub-enterprise/values.large-data.yaml

When using a flavored chart (sizing preset), you must also configure the Redis host for the Prefect background services. Add the following to your values.yml:
```yaml
infrahub:
  prefect-server:
    backgroundServices:
      messaging:
        redis:
          host: "infrahub-cache-master"
```
The host value should match your Redis service name. If you deployed Redis using the bundled dependency, this defaults to infrahub-cache-master. This configuration is only required when using sizing presets — the base chart (for example, 4.0.0) does not require it.
Migrating from chart versions before 4.0.0
If you are upgrading from a Helm chart version earlier than 4.0.0, update your values.yml file:
1. Remove old configuration preset values: delete any environment variables and settings that were previously copied from sizing presets (for example, INFRAHUB_CACHE_ADDRESS, PREFECT_REDIS_MESSAGING_HOST, replica counts, resource limits). The flavored charts now handle these automatically.
2. Add the Redis host configuration: add the following to your values.yml file:
```yaml
infrahub:
  prefect-server:
    backgroundServices:
      messaging:
        redis:
          # Use the value previously set in INFRAHUB_CACHE_ADDRESS
          # or PREFECT_REDIS_MESSAGING_HOST
          host: "infrahub-cache-master"
```
Step 2: Install the chart
Install using a flavored chart from the OpsMill registry (recommended for production):
```shell
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise --version 4.0.0-medium
```
Or install the base chart and provide your own sizing configuration:
```shell
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise --version 4.0.0
```
Install using a local chart:
```shell
helm install infrahub -f values.yml path/to/infrahub-enterprise/chart
```
Verify the installation by checking that all pods are running:
```shell
kubectl get pods -l app=infrahub
```
Bare metal deployment​
Infrahub Enterprise supports bare metal installations for customers who require direct hardware deployment without containerization.
To learn more about bare metal deployment options and requirements, reach out to the OpsMill team:
- Discord: discord.gg/opsmill
- Email: [email protected]
- Schedule a meeting: cal.com/team/opsmill/meet