Installation & Setup

Everything needed to run the AI/DC solution locally is included in the repository. The environment runs in Docker; you interact with it through the infrahubctl CLI and the Infrahub web UI.

Requirements

  • Python 3.11+ (3.12 recommended)
  • Docker and Docker Compose (v2)
  • uv — Python package manager
  • Git — for cloning the repository

infrahubctl is included

The infrahubctl CLI is installed automatically as part of the infrahub-sdk dependency. No separate installation is needed.

Getting started

Cloning the repository

First, clone the repository and change into it:

git clone https://github.com/opsmill/infrahub-solution-ai-dc.git
cd infrahub-solution-ai-dc

Install dependencies

uv sync --all-packages

This installs all Python dependencies including infrahub-sdk (which provides infrahubctl), invoke (task runner), and the solution's own infrahub-solution-ai-dc package.
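Once the sync completes, infrahubctl is available inside the project environment; running it through uv avoids having to activate the virtual environment manually:

```shell
# Run the CLI from the project's virtual environment
uv run infrabubctl --help 2>/dev/null || uv run infrahubctl --help
```

(The first invocation is a guard against a typo and can be ignored; `uv run infrahubctl --help` is the command you want.)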

Configure environment variables

export INFRAHUB_USERNAME="admin"
export INFRAHUB_PASSWORD="infrahub"

INFRAHUB_USERNAME and INFRAHUB_PASSWORD are used by infrahubctl to authenticate with the Infrahub API. An optional VERSION variable sets the Infrahub image tag. Export these variables in every shell you work from, or define them in a file and source it.

Alternatively, you can use direnv and define the environment variables in a .envrc file at the root of the project.
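With direnv, a minimal .envrc might look like the following (the VERSION line is an optional, illustrative pin; use whichever tag you actually want):

```shell
# .envrc: loaded automatically by direnv when you cd into the project
export INFRAHUB_USERNAME="admin"
export INFRAHUB_PASSWORD="infrahub"
# Optional: pin the Infrahub image tag used by the compose file
# export VERSION="stable"
```

Run `direnv allow` once in the project root to approve the file.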

Start Infrahub

inv start

This downloads the base docker-compose.yml from https://infrahub.opsmill.io (if not already present), builds a custom Docker image, then runs docker compose up -d. The custom image extends the standard Infrahub image with the infrahub-solution-ai-dc Python package — a shared library (in src/infrahub_solution_ai_dc/) that the Generators and Transforms use for common logic such as IP addressing, cabling, and interface sorting. Without it, Generators running inside the task workers would not have access to that shared code. The override file (docker-compose.override.yml) ensures the custom image is used in place of the standard one.
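To check that everything came up, you can list the compose services and tail the server log. These are standard Docker Compose commands, run from the repository root:

```shell
# List running services and their published ports
docker compose ps

# Follow the Infrahub server log while it finishes booting
docker compose logs -f infrahub-server
```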

Services started:

| Service | Description | Port |
| --- | --- | --- |
| infrahub-server | Infrahub API and web UI | 8000 |
| task-manager | Prefect API server for workflow orchestration | 4200 |
| task-worker | Infrahub task workers (2 replicas) | |
| database | Neo4j graph database | 7474, 7687 |
| cache | Redis | |
| message-queue | RabbitMQ | |
| task-manager-db | PostgreSQL for Prefect | |

Verify the environment

Open http://localhost:8000 in a browser and log in with username admin, password infrahub.
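If you want a scriptable readiness check instead of the browser, a simple curl loop against the UI port works. This is only a sketch: it verifies that something is answering on port 8000, not that login succeeds:

```shell
# Poll until the Infrahub server responds (up to ~60 seconds)
for _ in $(seq 1 30); do
  curl -fsSL -o /dev/null http://localhost:8000 && { echo "Infrahub is up"; break; }
  sleep 2
done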

Load data

inv load

This runs four steps in sequence:

  1. infrahubctl schema load schemas — loads the 5 schema files defining the data model
  2. infrahubctl menu load menus/ — loads UI menu configuration
  3. infrahubctl object load objects/ — loads all design objects (groups, manufacturers, device types, IPAM, profiles, device templates, fabrics, pods, racks)
  4. infrahubctl object load repository.yml — registers the Git repository with Infrahub

Repository sync

After inv load, Infrahub imports the repository and reads .infrahub.yml to register Generators, Transforms, queries, and artifact definitions. Verify with infrahubctl repository list. Generators cannot run until this completes.
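To watch the sync progress from the shell, you can simply re-run the list command on an interval (the exact status wording depends on your Infrahub version):

```shell
# Re-run the repository listing every 5 seconds until it shows as synced
watch -n 5 infrahubctl repository list
```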

Load trigger rules (optional)

cp objects/20_triggers.yml.save triggers.yml
infrahubctl object load triggers.yml

This creates CoreGeneratorAction and CoreNodeTriggerRule objects for the automatic modular Generator execution. The trigger file uses a .save extension so it is excluded from the initial inv load — triggers must be loaded after the repository has synced, because they reference Generator definitions that need to exist first.

Load triggers after repository sync

If loaded before Generator definitions are imported, the trigger rules will fail to resolve their action references.

Repository structure

The repository is self-contained — everything below is included. You do not need to build any of these from scratch.

Schemas

| File | Contents |
| --- | --- |
| schemas/logical_design.yml | NetworkFabric and NetworkPod — the design hierarchy with Generator signaling attributes (checksum, amount_of_spines) |
| schemas/physical_location.yml | LocationHall and LocationRack — physical locations with Generator target support |
| schemas/device.yml | NetworkDevice, NetworkInterface, and NetworkLink — devices with computed attributes (interface index, description) |
| schemas/ipam.yml | IpamIPPrefix with role-based allocation and IpamIPAddress |
| schemas/generator.yml | GeneratorTarget generic enabling trigger-based modular Generator execution via checksum attribute |

Generators

| File | Responsibility |
| --- | --- |
| generators/generate_fabric.py | FabricGenerator — IP pool allocation, super spine creation, writes checksums to child Pods |
| generators/generate_pod.py | PodGenerator — validates the fabric is complete, spine creation, spine-to-super-spine cabling, writes checksums to child Racks |
| generators/generate_rack.py | RackGenerator — validates the pod is complete, leaf creation, leaf-to-spine cabling |

Each Generator has a paired .gql query file in the same directory. The .infrahub.yml file in the repository root wires Generator definitions to their queries and target groups.

Transforms and artifacts

| File | Purpose |
| --- | --- |
| transforms/startup_config.gql / templates/startup_config.j2 | Jinja2 Transform producing a startup configuration artifact per device |
| transforms/cabling_plan.py / .gql | Python Transform producing a CSV cabling plan artifact per fabric |
| transforms/computed_interface_description.py / .gql | Python Transform applied as a computed attribute on each interface |

Demo data (object files)

Loaded in numbered order by infrahubctl object load objects/:

| File | Contents |
| --- | --- |
| objects/01_groups.yml | CoreStandardGroup objects: halls, racks, fabrics, pods, devices |
| objects/02_manufacturer.yml | Equipment manufacturers |
| objects/03_device_type.yml | Device types for leaf, spine, and super spine switches |
| objects/04_ipam.yml | IP supernet (10.0.0.0/8) and FabricSupernetPool |
| objects/05_profiles.yml | Interface role profiles (MTU, role assignments) |
| objects/06_device_template.yml | 9 device templates defining interface layouts per device role |
| objects/10_fabric.yml | Fabric and Pod objects: Fabric-A (6 super spines, 3 pods), Fabric-B (4 super spines, 3 pods, Dell equipment) |
| objects/11_rack.yml | 16 Rack objects across both fabrics with leaf counts and template assignments |
| objects/20_triggers.yml.save | Trigger rules (.save extension — loaded separately after repository sync) |

Infrastructure

| File | Purpose |
| --- | --- |
| Dockerfile | Builds a custom Infrahub image with the infrahub-solution-ai-dc Python package installed |
| docker-compose.override.yml | Replaces the standard Infrahub image with the custom build; mounts src/ for live code changes |
| repository.yml | CoreRepository object pointing to /upstream (the mounted repository) |
| tasks.py | Invoke task definitions for environment management |
| .infrahub.yml | Central configuration wiring Generators, Transforms, queries, and artifacts |

Invoke tasks

| Task | Command | Description |
| --- | --- | --- |
| Start | inv start | Download compose file (if needed) and start all services |
| Stop | inv stop | Stop containers, remove networks (data preserved) |
| Destroy | inv destroy | Stop containers, remove networks and volumes (full reset) |
| Restart | inv restart | Restart all services (or inv restart --component=<name> for one) |
| Load | inv load | Load schemas, menus, objects, and register repository |
| Load schema | inv load-schema | Load schema files only |
| Load menu | inv load-menu | Load menu definitions only |
| Build | inv build | Build the custom Docker image locally |
| Test | inv test | Run pytest test suite |
| Format | inv format | Run Ruff formatter and auto-fix linting issues |

Troubleshooting

Services fail to start

  • Check Docker is running: docker info
  • Check port conflicts: Infrahub uses ports 8000, 4200, 7474, 7687
  • View logs: docker compose logs infrahub-server or docker compose logs task-worker
  • Full reset: inv destroy && inv start
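To find out which process is holding a conflicting port, standard socket tools help. Shown here for port 8000; repeat for 4200, 7474, and 7687 as needed:

```shell
# macOS / Linux with lsof: show the listener on port 8000
lsof -nP -iTCP:8000 -sTCP:LISTEN

# Linux alternative using ss
ss -ltnp 'sport = :8000'
```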

Repository not syncing

  • Verify with infrahubctl repository list
  • The repository mounts from /upstream inside the container — check docker compose logs task-worker for import errors
  • Schema mismatches between what .infrahub.yml references and what is loaded can block sync