
Installing Infrahub

Infrahub has a container-based architecture and can be deployed for testing or production use in a number of different ways:

Hardware requirements

Please ensure the systems on which you want to install Infrahub meet the hardware requirements.
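If you want a quick look at the available resources on a Linux host before installing, the following standard commands report CPU, memory, and disk capacity (adjust for macOS):

# Number of CPU cores
nproc
# Total and available memory
free -h
# Free disk space on the root filesystem
df -h /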

Quick start via curl

To quickly spin up Infrahub locally:

curl https://infrahub.opsmill.io | sudo docker compose -f - up -d

Alternative examples:

curl https://infrahub.opsmill.io/develop | sudo docker compose -f - up -d
curl https://infrahub.opsmill.io/0.15.2 | sudo docker compose -f - up -d
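To confirm that the environment came up, you can list the running Compose projects and probe the web interface; this sketch assumes Infrahub's default HTTP port of 8000:

# List running Docker Compose projects
sudo docker compose ls
# Probe the web interface; expect an HTTP 200 once the server is ready
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000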

To spin down and remove an Infrahub environment:

curl https://infrahub.opsmill.io | sudo docker compose -f - down -v

From Git repository

Create the base directory for the Infrahub installation. For this guide, we'll use /opt/infrahub.

sudo mkdir -p /opt/infrahub/
cd /opt/infrahub/
warning

Depending on your system configuration, you might have to give other users write permissions to the /opt/infrahub directory.

Usage of the /opt/infrahub directory is merely a suggestion. You can use any directory on your system, especially for development or demo purposes.
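For example, to hand ownership of the directory to your own (non-root) user, assuming that user will manage the installation:

# Give the current user ownership of the installation directory
sudo chown -R "$USER" /opt/infrahub/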

mkdir -p ~/source/infrahub/
cd ~/source/infrahub/

Next, clone the Infrahub GitHub repository into the current directory.

git clone --recursive --depth 1 https://github.com/opsmill/infrahub.git
note

The command above uses a "shallow clone" to retrieve only the most recent commit. If you need the entire history, omit the --depth 1 argument.
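If you start with a shallow clone and later need the full history, Git can fetch it in place:

# Convert the shallow clone into a full clone with complete history
git fetch --unshallow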

The git clone command should generate output similar to the following:

Cloning into 'infrahub'...
remote: Enumerating objects: 1312, done.
remote: Counting objects: 100% (1312/1312), done.
remote: Compressing objects: 100% (1150/1150), done.
remote: Total 1312 (delta 187), reused 691 (delta 104), pack-reused 0
Receiving objects: 100% (1312/1312), 33.37 MiB | 14.46 MiB/s, done.
Resolving deltas: 100% (187/187), done.

Docker Compose

The recommended way to run Infrahub is to use the Docker Compose files included with the project combined with the helper commands defined in invoke.

The prerequisites for this type of deployment are Invoke and Docker.

Invoke

On macOS, Python is installed by default, so you should be able to install invoke directly. Invoke works best when installed in the main Python environment, but you can also install it in a virtual environment if you prefer. To install invoke and toml, run the following command:

pip3 install invoke toml
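If you would rather not touch the main Python environment, a virtual environment works too; a minimal sketch:

# Create and activate a virtual environment, then install the tools into it
python3 -m venv .venv
source .venv/bin/activate
pip3 install invoke toml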

Docker

To install Docker, follow the official instructions on the Docker website for your platform.
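You can verify that both the Docker engine and the Compose plugin are available before continuing:

# Check the installed Docker engine and Compose plugin versions
docker --version
docker compose version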

Once Docker and invoke are installed, you can start and initialize the Infrahub demo environment with the following commands:

cd infrahub
invoke demo.start demo.load-infra-schema demo.load-infra-data
Check the documentation of the demo environment for more information: /topics/local-demo-environment
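The repository defines further demo tasks beyond the ones above; invoke itself can list everything available in your checkout:

# List all invoke tasks defined by the repository
invoke --list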

GitHub Codespaces

The Infrahub GitHub repository is designed to launch an instance via GitHub Codespaces. We have two devcontainer configurations:

  • infrahub: a deployment running without any schema or data pre-installed
  • infrahub-demo: a deployment running the demo environment
note

The default devcontainer .devcontainer/devcontainer.json launches Infrahub with no schema or data. If you want to launch a deployment with the demo schema and data, you will need to choose the alternate Dev container configuration in the GitHub Codespaces creation options.

Infrahub devcontainer file: https://github.com/opsmill/infrahub/tree/stable/.devcontainer/devcontainer.json
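As an alternative to the web UI, a codespace can also be created from the GitHub CLI; this sketch assumes gh is installed and authenticated, and uses the default (no data) devcontainer path shown above:

# Create a codespace on the Infrahub repository with the default devcontainer
gh codespace create \
  --repo opsmill/infrahub \
  --devcontainer-path .devcontainer/devcontainer.json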

K8s with Helm charts

A first version of our Kubernetes Helm chart is available in our repository.

Infrahub Helm chart: https://github.com/opsmill/infrahub/tree/stable/helm

The following are required for production deployments using Helm:

  • data persistence must be enabled (except for the Infrahub API Server if using S3 storage)
  • multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed: you can make use of the affinity variable to define the affinity policy for the pods
  • shared storage should be available for use by the Task workers (through a StorageClass that supports RWX access)
  • S3 storage should be configured for the Infrahub API Server
warning

We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis, NFS) for production. They are present to ease deployment on non-production environments.

You can use the following example values file (values.yml):

global:
  infrahubTag: stable
  imagePullPolicy: Always

infrahubServer:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-server
          topologyKey: topology.kubernetes.io/zone
  persistence:
    enabled: false
  ingress:
    enabled: true
  infrahubServer:
    env:
      INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
      INFRAHUB_CACHE_PORT: 6379
      INFRAHUB_CONFIG: /config/infrahub.toml
      INFRAHUB_DB_TYPE: neo4j
      INFRAHUB_LOG_LEVEL: INFO
      INFRAHUB_PRODUCTION: "true"
      INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
      INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
      INFRAHUB_STORAGE_DRIVER: s3
      AWS_ACCESS_KEY_ID: xxxx
      AWS_SECRET_ACCESS_KEY: xxxx
      AWS_S3_BUCKET_NAME: infrahub-data
      AWS_S3_ENDPOINT_URL: https://s3

infrahubGit:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-git
          topologyKey: topology.kubernetes.io/zone
  persistence:
    enabled: true
    storageClassName: standard-rwx # using GCP Filestore

neo4j:
  services:
    admin:
      enabled: true
  volumes:
    data:
      mode: dynamic
      dynamic:
        storageClassName: premium-rwo
        requests:
          storage: 100Gi

nfs-server-provisioner:
  enabled: false

Install the chart with your values file:

helm install infrahub -f values.yml path/to/infrahub/chart

You can also install the chart using the OpsMill registry.

helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub
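To review the chart's default values before installing, or to pin a specific chart version, the standard Helm flags apply (the version number below is illustrative):

# Show the default values shipped with the chart
helm show values oci://registry.opsmill.io/opsmill/chart/infrahub

# Pin a specific chart version at install time
helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub --version 0.1.0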