Example Monitoring Setup

This section gives an example of how Canton can be run inside a connected network of Docker containers. We then go on to show how network activity can be monitored.

Canton Setup

Here we go through the details of how Docker Compose can be configured to spin up the Docker container network shown below. Please see the Compose documentation for detailed information concerning the structure of the configuration files.

One feature of Compose is that it allows the overall configuration to be split across a number of files. Below we look at each of the configuration files in turn and then show how to bring them all together into a running network.

../../_images/basic-canton-setup.svg

Network Configuration

This compose file defines the network that will be used to connect all the running containers.

etc/network-docker-compose.yml
 # Create with `docker network create monitoring`

 version: "3.8"

 networks:
   default:
     name: monitoring
     external: true

In the Docker files below we expose a port wherever a container provides a service, to allow external connection and visibility for demonstration purposes.

This would be entirely inappropriate for a production environment, where only the minimum number of ports should be exposed and connections should be secured via SSL and other hardening measures.

Postgres Setup

We use only a single Postgres container but create a database for the domain along with Canton and index databases for each participant. We do this by mounting postgres-init.sql into the Postgres initialization directory. Note that in a production environment passwords must not be inlined inside config files.

etc/postgres-docker-compose.yml
 services:
   postgres:
     image: postgres:11
     hostname: postgres
     container_name: postgres
     environment:
       - POSTGRES_USER=pguser
       - POSTGRES_PASSWORD=pgpass
     volumes:
       - ../etc/postgres-init.sql:/docker-entrypoint-initdb.d/init.sql
     expose:
       - "5432"
     ports:
       - "5432:5432"
etc/postgres-init.sql
 create database canton1db;
 create database index1db;

 create database domain0db;

 create database canton2db;
 create database index2db;

Domain Setup

We run the domain with the --log-profile container option, which writes plain text to standard out at debug level.

etc/domain0-docker-compose.yml
 services:
   domain0:
     image: monitoring:latest
     # image: digitalasset-canton-enterprise-docker.jfrog.io/digitalasset/canton-enterprise:2.4.0
     container_name: domain0
     hostname: domain0
     volumes:
       - ../etc/domain0.conf:/canton/etc/domain0.conf
     command: daemon --log-profile container --config etc/domain0.conf
     expose:
       - "10018"
       - "10019"
     ports:
       - "10018:10018"
       - "10019:10019"
etc/domain0.conf
 canton {
   domains {
     domain0 {
       storage {
         type = postgres
         config {
           dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
           properties = {
             databaseName = "domain0db"
             serverName = "postgres"
             portNumber = "5432"
             user = pguser
             password = pgpass
           }
         }
       }
       public-api {
         port = 10018
         address = "0.0.0.0"
       }
       admin-api {
         port = 10019
         address = "0.0.0.0"
       }
     }
   }
 }
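The database password is inlined above only to keep the example self-contained. In a production setup it could instead be supplied at container start through an environment variable, using HOCON substitution (a sketch; CANTON_DB_PASSWORD is an illustrative variable name, not part of the files above):

```hocon
# In the storage properties block, read the password from the
# environment rather than hard-coding it:
password = ${?CANTON_DB_PASSWORD}
```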

Participant Setup

The participant container has two files mapped into it on container creation. The .conf file gives details of the domain and database locations. By default, participants do not connect to remote domains, so to make this happen a bootstrap script is provided.

etc/participant1-docker-compose.yml
 services:
   participant1:
     image: monitoring:latest
     container_name: participant1
     hostname: participant1
     volumes:
       - ../etc/participant1.conf:/canton/etc/participant1.conf
       - ../etc/participant1.bootstrap:/canton/etc/participant1.bootstrap
     command: daemon --log-profile container --config etc/participant1.conf --bootstrap etc/participant1.bootstrap
     expose:
       - "10011"
       - "10012"
     ports:
       - "10011:10011"
       - "10012:10012"
etc/participant1.bootstrap
 participant1.domains.connect(domain0.defaultDomainConnection)
etc/participant1.conf
 canton {
   participants {
     participant1 {
       storage {
         type = postgres
         config {
           dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
           properties = {
             databaseName = "canton1db"
             serverName = "postgres"
             portNumber = "5432"
             user = pguser
             password = pgpass
           }
         }
         ledger-api-jdbc-url = "jdbc:postgresql://postgres:5432/index1db?user=pguser&password=pgpass"
       }
       ledger-api {
         port = 10011
         address = "0.0.0.0"
       }
       admin-api {
         port = 10012
         address = "0.0.0.0"
       }
     }
   }
   remote-domains.domain0 {
     public-api {
       address="domain0"
       port = 10018
     }
     admin-api {
       address = "domain0"
       port = 10019
     }
   }
 }

The setup for participant2 is identical, apart from the names and ports, which are changed accordingly.

etc/participant2-docker-compose.yml
 services:
   participant2:
     image: digitalasset-canton-enterprise-docker.jfrog.io/digitalasset/canton-enterprise:2.4.0
     container_name: participant2
     hostname: participant2
     volumes:
       - ../etc/participant2.conf:/canton/etc/participant2.conf
       - ../etc/participant2.bootstrap:/canton/etc/participant2.bootstrap
     command: daemon --log-profile container --config etc/participant2.conf --bootstrap etc/participant2.bootstrap
     expose:
       - "10021"
       - "10022"
     ports:
       - "10021:10021"
       - "10022:10022"
etc/participant2.bootstrap
 participant2.domains.connect(domain0.defaultDomainConnection)
etc/participant2.conf
 canton {
   participants {
     participant2 {
       storage {
         type = postgres
         config {
           dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
           properties = {
             databaseName = "canton2db"
             serverName = "postgres"
             portNumber = "5432"
             user = pguser
             password = pgpass
           }
         }
         ledger-api-jdbc-url = "jdbc:postgresql://postgres:5432/index2db?user=pguser&password=pgpass"
       }
       ledger-api {
         port = 10021
         address = "0.0.0.0"
       }
       admin-api {
         port = 10022
         address = "0.0.0.0"
       }
     }
   }
   remote-domains.domain0 {
     public-api {
       address="domain0"
       port = 10018
     }
     admin-api {
       address = "domain0"
       port = 10019
     }
   }
 }

Dependencies

There are startup dependencies between the Docker containers; for example, the domain needs to be running before the participants and, in turn, the database needs to be running before the domain.

etc/dependency-docker-compose.yml
 services:
   domain0:
     depends_on:
       - postgres

   participant1:
     depends_on:
       - domain0

   participant2:
     depends_on:
       - domain0
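Note that depends_on only orders container start-up; it does not wait for Postgres to actually accept connections, so the dependent nodes may retry their database connections while Postgres initializes. If stricter ordering is wanted, a health check can be added. The fragment below is a sketch using pg_isready and the condition form of depends_on supported by Docker Compose v2; it is not part of the files above:

```yaml
services:
  postgres:
    # Mark the container healthy only once Postgres accepts connections.
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pguser"]
      interval: 5s
      timeout: 3s
      retries: 5

  domain0:
    # Start the domain only after the database reports healthy.
    depends_on:
      postgres:
        condition: service_healthy
```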

Docker Images

The docker images used above need to be pulled down prior to starting the network.

  • digitalasset-canton-enterprise-docker.jfrog.io/digitalasset/canton-enterprise:2.4.0
  • postgres:11

Running Docker Compose

Running docker compose with all the compose files shown above makes for quite a long command line. For this reason a helper script, dc.sh, is used.

dc.sh
 #!/bin/bash

 if [ $# -eq 0 ]; then
     echo "Usage: $0 <docker compose command>"
     echo "Use '$0 up --force-recreate --renew-anon-volumes' to re-create network"
     exit 1
 fi

 set -x

 docker compose \
     -p monitoring \
     -f etc/network-docker-compose.yml \
     -f etc/postgres-docker-compose.yml \
     -f etc/domain0-docker-compose.yml \
     -f etc/participant1-docker-compose.yml \
     -f etc/participant2-docker-compose.yml \
     -f etc/dependency-docker-compose.yml \
     "$@"
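One possible hardening of dc.sh is to fail fast when a compose file cannot be found, for instance because the script was started from the wrong directory. The helper below is a sketch; check_compose_files is a hypothetical function, not part of the script above:

```shell
# Verify that every compose file passed as an argument exists before
# handing the list to docker compose. Prints the missing files to
# stderr and returns non-zero if any are absent.
check_compose_files() {
  local missing=0
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "missing compose file: $f" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

dc.sh could then call check_compose_files etc/*-docker-compose.yml before invoking docker compose.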

Useful commands

./dc.sh up -d       # Spins up the network and runs it in the background

./dc.sh ps          # Shows the running containers

./dc.sh stop        # Stops the containers

./dc.sh start       # Starts the containers

./dc.sh down        # Stops and tears down the network, removing any created containers

Connecting to Nodes

To interact with the running network, the Canton console can be used with a remote configuration. For example:

bin/canton -c etc/remote-participant1.conf
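Once the console is attached, a couple of commands can confirm that the deployment is healthy. This is a sketch using the same console command style as the bootstrap scripts above; the exact commands available depend on the Canton version:

```scala
// Round-trip a ping from participant1 through the domain back to itself,
// confirming both the participant and domain0 are up and connected.
participant1.health.ping(participant1)

// List the domains the participant is currently connected to;
// this should include domain0.
participant1.domains.list_connected()
```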

Remote configurations

etc/remote-domain0.conf
 canton.remote-domains.domain0 {
   admin-api {
     address="0.0.0.0"
     port="10019"
   }
   public-api {
     address="0.0.0.0"
     port="10018"
   }
 }
etc/remote-participant1.conf
 canton {

   features.enable-testing-commands = yes  // Needed for ledger-api

   remote-participants.participant1 {
     ledger-api {
       address="0.0.0.0"
       port="10011"
     }
     admin-api {
       address="0.0.0.0"
       port="10012"
     }
   }
 }
etc/remote-participant2.conf
 canton {

   features.enable-testing-commands = yes  // Needed for ledger-api

   remote-participants.participant2 {
     ledger-api {
       address="0.0.0.0"
       port="10021"
     }
     admin-api {
       address="0.0.0.0"
       port="10022"
     }
   }

 }

Getting Started

Using the scripts above it is possible to follow the examples provided in the Getting Started guide.

Monitoring

To view the log output from any of the containers, the docker logs command can be run, for example:

docker logs participant1