Install Infrahub Community
Infrahub Community is deployed as a container-based architecture and can be installed using several methods:
- Docker compose via curl
- Local development (git clone)
- Kubernetes with Helm
Using curl and Docker Compose
To quickly spin up the latest Infrahub locally, retrieve the Docker Compose file from infrahub.opsmill.io.
You can also request a specific version or the develop branch via the URL.
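As a sketch of how that might look, the snippet below builds the download URL from a version variable. The URL pattern and the version number are assumptions inferred from the note above; substitute a real release tag:

```shell
# Pin a specific release or track the develop branch by appending it to the
# download URL (pattern assumed; "1.0.0" is illustrative, not a real pin):
VERSION="develop"                     # or a release tag such as "1.0.0"
URL="https://infrahub.opsmill.io/${VERSION}"
echo "$URL"
# curl "$URL" > docker-compose.yml   # then start as shown below
```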
Prerequisites
- Docker (version 24.x minimum)
- Docker Compose
Start an Infrahub environment

macOS:

```shell
curl https://infrahub.opsmill.io > docker-compose.yml
docker compose -p infrahub up -d
```

Linux:

```shell
curl https://infrahub.opsmill.io > docker-compose.yml
sudo docker compose -p infrahub up -d
```
After running the command, you should see Docker downloading the necessary images and starting the containers.
Verify that Infrahub is running by accessing the web interface or checking the container status:

```shell
docker ps | grep infrahub
```
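A small readiness check can also be scripted against the web interface. This is a sketch that assumes Infrahub is published on the default port 8000; adjust the URL if you changed the port mapping:

```shell
# Print "up" if the Infrahub web interface answers, "down" otherwise
# (assumes the default http://localhost:8000 endpoint).
check_infrahub() {
  if curl -fsS -o /dev/null --max-time 5 "$1"; then
    echo "up"
  else
    echo "down"
  fi
}

check_infrahub "http://localhost:8000"
```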
Stop and remove an Infrahub environment

macOS:

```shell
curl https://infrahub.opsmill.io > docker-compose.yml
docker compose -p infrahub down -v
```

Linux:

```shell
curl https://infrahub.opsmill.io > docker-compose.yml
sudo docker compose -p infrahub down -v
```
Cloning the repository
The recommended method for running Infrahub for development uses the Docker Compose files included with the project combined with helper commands defined in invoke.
This method is suitable for local development and demo environments. It is not recommended for production deployments.
Prerequisites

- Git
- Docker and Docker Compose
- uv (used below to install the Python dependencies and run invoke)

Step 1: clone the repository
Create a base directory to hold the Infrahub repository. The directory used here is merely a suggestion; you can use any directory on your system, especially for development or demo purposes. For this guide, we'll use ~/source/:

```shell
mkdir -p ~/source/
cd ~/source/
```
Clone the Infrahub repository:

```shell
git clone --recursive https://github.com/opsmill/infrahub.git
```

The git clone command should generate output similar to the following:

```
Cloning into 'infrahub'...
remote: Enumerating objects: 1312, done.
remote: Counting objects: 100% (1312/1312), done.
remote: Compressing objects: 100% (1150/1150), done.
remote: Total 1312 (delta 187), reused 691 (delta 104), pack-reused 0
Receiving objects: 100% (1312/1312), 33.37 MiB | 14.46 MiB/s, done.
Resolving deltas: 100% (187/187), done.
```
Step 2: install dependencies
Navigate to the cloned Infrahub directory:

```shell
cd infrahub
```

Install the Python dependencies by running:

```shell
uv sync --all-groups
```
You should see uv installing the required dependencies. When complete, you'll be returned to the command prompt without errors.
Step 3: start Infrahub
Start and initialize Infrahub:

```shell
uv run invoke demo.start
```
Using Helm and Kubernetes
It's possible to deploy Infrahub on Kubernetes using Helm charts. This method is suitable for production deployments and provides a more resilient architecture.
- Infrahub Helm Chart: https://github.com/opsmill/infrahub-helm/tree/stable/charts/infrahub
- ArtifactHub: https://artifacthub.io/packages/helm/infrahub/infrahub

Prerequisites
- A Kubernetes cluster
- Helm installed on your system
By default, the Helm chart disables storage persistence on all components (Neo4j, RabbitMQ, Redis) to enable quicker deployments using emptyDir storage and reduce requirements for test installations.
However, if pods get rescheduled on the Kubernetes cluster for any reason, all data will be lost. This can happen unexpectedly due to node maintenance, resource pressure, or cluster updates.
For long-running tests or lab environments, you should either:
- Enable storage persistence for the Neo4j database at minimum (but also on the other components if possible)
- Use Docker Compose instead, which provides more predictable behavior for non-production environments
Production deployment requirements
The following are required for production deployments using Helm:
- Data persistence for the database must be enabled
- Multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed; you can use the affinity variable to define the affinity policy for the pods
- S3 storage should be configured for the Infrahub API Server; it is required if you have multiple replicas
We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) for production. They are present to ease deployment on non-production environments.
Step 1: fill in the values file
Create a values.yml file with the following configuration:

```yaml
infrahubServer:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-server
          topologyKey: topology.kubernetes.io/zone
  persistence:
    enabled: false
  ingress:
    enabled: true
  infrahubServer:
    env:
      INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
      INFRAHUB_CACHE_PORT: 6379
      INFRAHUB_DB_TYPE: neo4j
      INFRAHUB_LOG_LEVEL: INFO
      INFRAHUB_PRODUCTION: "true"
      INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
      INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
      INFRAHUB_STORAGE_DRIVER: s3
      AWS_ACCESS_KEY_ID: xxxx
      AWS_SECRET_ACCESS_KEY: xxxx
      AWS_S3_BUCKET_NAME: infrahub-data
      AWS_S3_ENDPOINT_URL: https://s3

infrahubTaskWorker:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-task-worker
          topologyKey: topology.kubernetes.io/zone

neo4j:
  services:
    admin:
      enabled: true
  volumes:
    data:
      mode: dynamic
      dynamic:
        storageClassName: premium-rwo
        requests:
          storage: 100Gi
```
Be sure to replace the placeholder values with your actual values. See the configuration reference for details on all available environment variables.
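For the admin token and secret key in particular, one simple way to generate your own random values is Python's uuid module. This is a sketch; it assumes any sufficiently random opaque string is accepted for these settings:

```shell
# Generate a random UUID to use in place of the placeholder token/secret
# values shown in the example values.yml above.
python3 -c "import uuid; print(uuid.uuid4())"
```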
Step 2: install the chart
Install using a local chart:

```shell
helm install infrahub -f values.yml path/to/infrahub/chart
```

Or install using the OpsMill registry:

```shell
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub
```

Verify the installation by checking that all pods are running:

```shell
kubectl get pods -l app=infrahub
```