Upgrade to a New Release

This section covers the processes to upgrade Canton participant nodes and sync domains. Upgrading Daml applications is covered elsewhere.

As elaborated in the versioning guide, new features, improvements and fixes are released regularly. To benefit from these changes, the Canton-based system must be upgraded.

There are two key aspects that need to be addressed when upgrading a system:

Canton is a distributed system, where no single operator controls all nodes. Therefore, we must support nodes being upgraded individually, providing a safe upgrade mechanism that requires a minimal number of synchronized actions within a network.

A Canton binary supports multiple protocol versions, and new protocol versions are introduced in a backward-compatible way with a new binary (see version table). Therefore, any upgrade of a protocol used in a distributed Canton network is done by individually upgrading all binaries and subsequently changing the protocol version used among the nodes to the desired one.

The following is a general guide. Before upgrading to a specific version, please check the individual notes for each version.

This guide also assumes that the upgrade is a minor or a patch release. Major release upgrades might differ and will be covered separately if necessary.

Warning

Upgrading requires care and preparation.
  • Please back up your data before any upgrade.
  • Please test your upgrade thoroughly before attempting to upgrade your production system.

Upgrade Canton Binary

A Canton node consists of one or more processes, where each process is defined by

  • A Java Virtual Machine application running a versioned JAR of Canton.
  • A set of configuration files describing the node that is being run.
  • An optional bootstrap script passed via --bootstrap, which runs on startup.
  • A database (with a specific schema), holding the data of the node.

To upgrade the node,

  1. Replace the Canton binary (which contains the Canton JAR).
  2. Test that the configuration files can still be parsed by the new process.
  3. Test that the bootstrap script you are using is still working.
  4. Upgrade the database schema.

Generally, all changes to configuration files should be backward compatible and therefore unaffected by the upgrade process. In rare cases, a minor change to the configuration file might be necessary to support the upgrade. Sometimes, fixing a substantial bug might require a minor breaking change to the API. The same applies to Canton scripts.

The schema in the database is versioned and managed using Flyway. Detecting and applying changes is done by Canton using that library. Understanding this background can be helpful to troubleshoot issues.
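
Since Flyway records each applied migration in a history table, you can inspect the migration state directly on the database when troubleshooting. A minimal sketch for Postgres, assuming Flyway's default history table name:

-- List applied schema migrations (Flyway's default history table)
SELECT installed_rank, version, description, success
FROM flyway_schema_history
ORDER BY installed_rank;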

Preparation

First, please download the new Canton binary that you want to upgrade to and store it on the test system where you plan to test the upgrade process.

Then, obtain a recent backup of the node's database and deploy it to a database server of your choice, so that you can test the upgrade process without affecting your production system. While we extensively test the upgrade process ourselves, we cannot exclude the possibility that you are using the system in a way we did not anticipate. Testing is cumbersome, but breaking a production system is worse.

If you are upgrading a participant, then we suggest that you also use an in-memory sync domain which you can tear down after you have tested that the upgrade of the participant is working. You might do that by adding a simple sync domain definition as a configuration mixin to your participant configuration.
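
For example, a minimal configuration mixin along these lines defines an in-memory test sync domain (the node name and ports are illustrative placeholders; adjust them to your environment):

canton.domains.testdomain {
  storage.type = memory
  public-api.port = 5018
  admin-api.port = 5019
}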

Generally, if you are running a high-availability setup, please take all nodes offline before performing an upgrade. If the update requires a database migration (check the release notes), avoid running older and newer binaries in a replicated setup, as the two binaries might expect a different database layout.

You can upgrade the binaries of a microservice-based sync domain in any order, as long as you upgrade the binaries of nodes accessing the same database at the same time. For example, you could upgrade the binary of a replicated mediator node on one weekend and an active-active database sequencer on another weekend.

Back Up Your Database

Before you upgrade the database and binary, please ensure that you have backed up your data, such that you can roll back to the previous version in case of an issue. You can back up your data by cloning it. In Postgres, the command is:

CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;

When doing this, change the database name and user name in the above command to match your setup.

Test your Configuration

Test that the configuration still works:

./bin/canton -v -c storage-for-upgrade-testing.conf -c mynode.conf --manual-start

Here, the files storage-for-upgrade-testing.conf and mynode.conf need to be adjusted to match your case.

If Canton starts and shows the command prompt of the console, then the configuration was parsed successfully.

The command line option --manual-start prevents the node from starting up automatically, as we first need to migrate the database.

Migrating the Database

Canton does not perform a database migration automatically. Migrations need to be forced. If you start a node that requires a database migration, you will observe the following Flyway error:

@ participant.start()
ERROR com.digitalasset.canton.integration.EnterpriseEnvironmentDefinition$$anon$3 - failed to initialize participant: There are 7 pending migrations to get to database schema version 8. Currently on version 1.1. Please run `participant.db.migrate` to apply pending migrations
  Command LocalParticipantReference.start invoked from cmd10000002.sc:1

The database schema definitions are versioned and hashed. This error reports the current database schema version and the number of pending migrations. The check runs at startup, so if the node starts, the migration was successful.

We can now force the migration to a new schema using:

@ participant.db.migrate()

You can also configure migrations to be applied automatically. Note that the database user account the node uses must then have permission to change the database schema. How long the migration takes depends on the version of the binary (see migration notes), the size of the database, and the performance of the database server.
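
If you opt for automatic migrations, you can use the “migrate and start” mode described in the version-specific notes below. As a sketch, with the node path being a placeholder:

canton.participants.myparticipant.storage.parameters.migrate-and-start = yes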

We recommend cleaning up your database before you start your node. On Postgres, run

VACUUM FULL;

Otherwise, the restart may take a long time while the database is cleaning itself up.

Subsequently, you can successfully start the node

@ participant.start()

Please note that the procedure remains the same for all other types of nodes, with a participant node used here as an example.

Test Your Upgrade

Once your node is up and running, you can test it by running a ping. If you are testing the upgrade of your participant node, then you might want to connect to the test sync domain

@ testdomain.start()
@ participant.domains.connect_local(testdomain)

If you did the actual upgrade of the production instance, then you would just reconnect to the current sync domain before running the ping:

@ participant.domains.reconnect_all()

You can check that the sync domain is up and running using

@ participant.domains.list_connected()
res6: Seq[ListConnectedDomainsResult] = Vector(
  ListConnectedDomainsResult(
    domainAlias = Domain 'testdomain',
    domainId = testdomain::1220a502cc47...,
    healthy = true
  )
)

Finally, you can ping the participant to see if the system is operational

@ participant.health.ping(participant)
res7: Duration = 424 milliseconds

The ping command creates two contracts between the admin parties, then exercises and archives them, providing an end-to-end test of ledger functionality.

Version Specific Notes

Upgrade to Release 2.9

Protocol versions

The recommended protocol version is 5 (see here for more information about protocol versions).

Version 2.9 does not offer support for protocol versions 3 and 4. If your sync domain is running one of these protocol versions, you need to perform the upgrade described below.

Protocol version should be set explicitly

Until now, sync domains were configured to pick the latest protocol version by default. Since the protocol version is an important parameter of a sync domain, having this value set behind the scenes caused unwanted behavior.

You must now specify the protocol version for your sync domain:

myDomain {
    init.domain-parameters.protocol-version = 5
}

For a domain manager:

domainManager {
    init.domain-parameters.protocol-version = 5
}

Deactivated sync domain data cleanup

Version 2.9 adds a new repair command participant.repair.purge_deactivated_domain to delete data of a defunct domain.

Using this command is recommended for removing any remaining but unnecessary data from previous sync domains that were migrated using the participant.repair.migrate_domain command.

Note that the migrate_domain command in 2.9 now automatically removes such data, but only for the sync domain on which it has been invoked.

Paging in Party Management

By default, the ListKnownParties method on the PartyManagementService now returns at most 10,000 parties, to avoid memory issues in participants that know more than 10,000 parties.

The next_page_token can be used to request the next page (see request and response).
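
As an illustrative sketch of the paging loop, assuming grpcurl, gRPC reflection, and a Ledger API port of 5011 (all placeholders for your setup):

# First page: request up to 1000 parties
grpcurl -plaintext -d '{"page_size": 1000}' localhost:5011 com.daml.ledger.api.v1.admin.PartyManagementService/ListKnownParties
# Next page: pass the returned next_page_token as page_token
grpcurl -plaintext -d '{"page_size": 1000, "page_token": "<token>"}' localhost:5011 com.daml.ledger.api.v1.admin.PartyManagementService/ListKnownParties

Repeat until next_page_token comes back empty.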

Upgrade to Release 2.8

Version 2.8 extends the database schema. If you use the “migrate and start” feature, the database schema is updated automatically. Otherwise, perform the manual database migration steps outlined above.

Protocol versions 3 and 4 are deprecated

Protocol versions 3 and 4 are now marked as deprecated and will be removed in 2.9. Protocol version 5 should be preferred for any new deployment.

Configuration changes

KMS wrapper-key configuration value: The configuration value for the KMS wrapper-key now accepts a simple string. Update your configuration as follows:

crypto.private-key-store.encryption.wrapper-key-id = { str = "..."} # version 2.7
crypto.private-key-store.encryption.wrapper-key-id = "..." # version 2.8

Indexer Schema Migration and Cache Weight Configuration: Remove the following configuration lines related to the indexer and Ledger API server schema migration and cache weight:

participants.participant.parameters.ledger-api-server-parameters.indexer.schema-migration-attempt-backoff
participants.participant.parameters.ledger-api-server-parameters.indexer.schema-migration-attempts
participants.participant.ledger-api.max-event-cache-weight
participants.participant.ledger-api.max-contract-cache-weight

SQL Batching Parameter: The expert mode SQL batching parameter has been moved. Generally, we recommend not changing this parameter unless advised by support.

canton.participants.participant.parameters.stores.max-items-in-sql-clause # version 2.7
canton.participants.participant.parameters.batching.max-items-in-sql-clause # version 2.8

Breaking console commands

Key Management Commands: The owner_to_key_mappings.rotate_key command was changed to avoid unwanted key rotations. It now expects a node reference to perform additional checks.

Sync domain filtering in testing commands: To improve consistency and code safety, some testing console commands now expect an optional sync domain alias (rather than a plain sync domain alias). For example, the following call needs to be rewritten:

participant.testing.event_search("da") # version 2.7
participant.testing.event_search(Some("da")) # version 2.8

The impacted console commands are participant.testing.event_search and participant.testing.transaction_search.

Packaging

We have reverted the packaging change introduced in version 2.7.0; the Bouncy Castle JAR is again bundled within the Canton JAR. However, users running Canton with an Oracle JRE must explicitly add the Bouncy Castle library to the classpath:

java -cp bcprov-jdk15on-1.70.jar:canton-with-drivers-2.8.0-all.jar com.digitalasset.canton.CantonEnterpriseApp

Breaking Error Code

The error code SEQUENCER_DELIVER_ERROR is superseded by two new error codes: SEQUENCER_SUBMISSION_REQUEST_MALFORMED and SEQUENCER_SUBMISSION_REQUEST_REFUSED. Update your client applications code accordingly.

Deprecations

SequencerConnection.addConnection is deprecated. Use SequencerConnection.addEndpoints instead.

Upgrade to Release 2.7

Version 2.7 slightly extends the database schema. Therefore, you will have to perform the database migration steps. Alternatively, you can enable the new “migrate and start” mode in Canton, which triggers an automatic update of the database schema when a new minor version is deployed. This mode can be enabled by setting the appropriate storage parameter:

canton.X.Y.storage.parameters.migrate-and-start = yes

To benefit from the new security features in protocol version 5, you must upgrade the sync domain accordingly.

Activation of unsupported features

In order to activate unsupported features, you now need to explicitly enable dev-version-support on the sync domain (in addition to the non-standard config flag). More information can be found in the documentation.
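
As an illustrative sketch only (the exact configuration keys may differ between versions; consult the documentation for yours), the combination could look like this:

canton.parameters.non-standard-config = yes
canton.domains.mydomain.init.domain-parameters.dev-version-support = yes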

Breaking changes around console commands

Key rotation: The command keys.secret.rotate_wrapper_key now returns a different error code: an INVALID_WRAPPER_KEY_ID error has been replaced by an INVALID_KMS_KEY_ID error.

Adding sequencer connections: The configuration of the sequencer client has been updated to accommodate multiple sequencers and their endpoints: the method addConnection has been renamed to addEndpoints to better reflect the fact that it modifies an endpoint for the sequencer.

Hence, the command to add a new sequencer connection to the mediator would be changed to:

mediator1.sequencer_connection.modifyConnections(
    _.addEndpoints(SequencerAlias.Default, connection)
)

Unique contract key deprecation

The unique-contract-keys parameters for both participant nodes and sync domains are now marked as deprecated. As of this release, the meaning and default value (true) remain unchanged. However, contract key uniqueness will not be available in the next major version, which features multi-sync-domain connectivity. If you are already setting this key to false explicitly (preview), this behavior will become the default once the configuration key is removed. If you don't explicitly set this value to false, you are encouraged to evaluate evolving your existing applications and services so that they do not rely on this feature. You can read more on the topic in the documentation.
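
For reference, explicitly opting out looks roughly as follows (a sketch; node names are placeholders, and the exact configuration paths may vary by version):

canton.participants.myparticipant.parameters.unique-contract-keys = false
canton.domains.mydomain.init.domain-parameters.unique-contract-keys = false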

Causality tracking

An obsolete early access feature to enable causality tracking, related to preview multi-sync-domain, was removed. If you enabled it, you need to remove the following config lines, as they will not compile anymore:

participants.participant.init.parameters.unsafe-enable-causality-tracking = true
participants.participant.parameters.enable-causality-tracking = true

Besu and Fabric drivers

In order to allow for independent updates of the different components, we have moved the drivers into a separate jar, which needs to be loaded into a separate classpath. As a result, deployments that use Fabric or Besu need to additionally download the jar and place it in the appropriate directory. Please consult the installation documentation on how to obtain this additional jar.

Removal of deploy_sequencer_contract

The command deploy_sequencer_contract has been removed and replaced with a deployment through the genesis block in the examples. The deploy_sequencer_contract command, while convenient, is ill-suited for any production environment and can cause more harm than good. The deployment of a sequencing contract should only happen once on the blockchain; including the deployment in the bootstrapping script would redeploy it each time bootstrapping is done.

Ledger API error codes

The error codes and metadata of gRPC errors returned as part of failed command interpretation from the Ledger API have been updated to include more information. Previously, most errors from the Daml engine would be given as either GenericInterpretationError or InvalidArgumentInterpretationError. They now all have their own codes and encode relevant information in the gRPC Status metadata. Specific error changes are as follows:

  • GenericInterpretationError (Code: DAML_INTERPRETATION_ERROR) with gRPC status FAILED_PRECONDITION is now split into:

    • DisclosedContractKeyHashingError (Code: DISCLOSED_CONTRACT_KEY_HASHING_ERROR) with gRPC status FAILED_PRECONDITION
    • UnhandledException (Code: UNHANDLED_EXCEPTION) with gRPC status FAILED_PRECONDITION
    • InterpretationUserError (Code: INTERPRETATION_USER_ERROR) with gRPC status FAILED_PRECONDITION
    • TemplatePreconditionViolated (Code: TEMPLATE_PRECONDITION_VIOLATED) with gRPC status INVALID_ARGUMENT

  • InvalidArgumentInterpretationError (Code: DAML_INTERPRETER_INVALID_ARGUMENT) with gRPC status INVALID_ARGUMENT is now split into:

    • CreateEmptyContractKeyMaintainers (Code: CREATE_EMPTY_CONTRACT_KEY_MAINTAINERS) with gRPC status INVALID_ARGUMENT
    • FetchEmptyContractKeyMaintainers (Code: FETCH_EMPTY_CONTRACT_KEY_MAINTAINERS) with gRPC status INVALID_ARGUMENT
    • WronglyTypedContract (Code: WRONGLY_TYPED_CONTRACT) with gRPC status FAILED_PRECONDITION
    • ContractDoesNotImplementInterface (Code: CONTRACT_DOES_NOT_IMPLEMENT_INTERFACE) with gRPC status INVALID_ARGUMENT
    • ContractDoesNotImplementRequiringInterface (Code: CONTRACT_DOES_NOT_IMPLEMENT_REQUIRING_INTERFACE) with gRPC status INVALID_ARGUMENT
    • NonComparableValues (Code: NON_COMPARABLE_VALUES) with gRPC status INVALID_ARGUMENT
    • ContractIdInContractKey (Code: CONTRACT_ID_IN_CONTRACT_KEY) with gRPC status INVALID_ARGUMENT
    • ContractIdComparability (Code: CONTRACT_ID_COMPARABILITY) with gRPC status INVALID_ARGUMENT
    • InterpretationDevError (Code: INTERPRETATION_DEV_ERROR) with gRPC status FAILED_PRECONDITION

  • The ContractKeyNotVisible error (previously encapsulated by GenericInterpretationError) is now transformed into a ContractKeyNotFound to avoid information leaking.

Upgrade to Release 2.6

Version 2.6 changes the database schema used. Therefore, you must perform the database migration steps. Depending on the size of the database, this operation can take many hours. Vacuuming your database before starting your nodes helps avoid long startup times. Otherwise, the participant node can refuse to start due to extremely long initial database response times.

Upgrade to Release 2.5

Version 2.5 will slightly extend the database schema used. Therefore, you will have to perform the database migration steps.

Some configuration arguments have changed. While rewrite rules are in place for backward compatibility, we recommend that you test your configuration before upgrading and update the settings to avoid using deprecated flags.

IMPORTANT: Existing sync domains and sync domain managers need to be reconfigured to keep working. Before attempting the binary upgrade, it is important that you configure the currently used protocol version explicitly:

canton.domains.mydomain.init.domain-parameters.protocol-version = 3

Nodes now persist the static sync domain parameters used during initialization. Version 2.5 is the last version that requires this explicit configuration setting during upgrading.

If you started the sync domain node accidentally before changing your configuration, your participants won’t be able to reconnect to the sync domain, as they will fail with a message like:

DOMAIN_PARAMETERS_CHANGED(9,d5dfa5ce): The sync domain parameters have changed

To recover from this, you need to force a reset of the stored static sync domain parameters using:

canton.domains.mydomain.init.domain-parameters.protocol-version = 3
canton.domains.mydomain.init.domain-parameters.reset-stored-static-config = yes

To benefit from protocol version 4, you will have to upgrade the sync domain accordingly.

Upgrade to Release 2.4

Version 2.4 will slightly extend the database schema used. Therefore, you will have to perform the database migration steps.

There have been a few consistency improvements to some console commands. In particular, we have renamed a few of the arguments and changed some of their types. As we have included automatic conversion and the change only affects special arguments (mainly timeouts), your script should still work. However, we recommend that you test your scripts for compilation issues. Please check the detailed release notes on the specific changes and their impact.

There was no change to the protocol. Participants and sync domains running 2.3 can also run 2.4, as both versions use the same protocol version.

Upgrade to Release 2.3

Version 2.3 will slightly extend the database schema used. Therefore, you will have to perform the database migration steps.

Furthermore, the Canton binary with version 2.3 introduces a new protocol version 3 and deprecates the previous protocol version 2. To keep a node that uses protocol version 2 operational, you need to turn on support for the deprecated protocol version.

On the participant, you need to turn on support for deprecated protocols explicitly:

canton.participants.myparticipant.parameters.minimum-protocol-version = 2.0.0

The default settings have changed to use protocol 3, while existing sync domains run protocol 2. Therefore, if you upgrade the binary on sync domains and sync domain manager nodes, you need to explicitly set the protocol version as follows:

canton.domains.mydomain.init.domain-parameters.protocol-version = 2.0.0

You cannot upgrade the protocol of a deployed sync domain! You need to keep it running with the existing protocol. Please follow the protocol upgrade guide to learn how to introduce a new protocol version.

Change the Canton Protocol Version

The Canton protocol is defined by the semantics and the wire format used by the nodes to communicate with each other. To process transactions, all nodes must be able to understand and speak the same protocol.

Therefore, a new protocol can be introduced only once all nodes have been upgraded to a binary that supports the new version.

Upgrade the Synchronization Domain to a new Protocol Version

A sync domain is tied to a protocol version. This protocol version is configured when the sync domain is initialized and cannot be changed afterward. Therefore, you cannot upgrade the protocol version of a sync domain. Instead, you deploy a new sync domain side by side with the old sync domain process.

This applies to all sync domain services, be it sequencer, mediator, or topology manager.

With that, the protocol upgrade process boils down to:

  1. Deploy a new sync domain with the new protocol version

    Deploy a new sync domain and ensure that the new sync domain is using the desired protocol version.

    Also make sure to use different databases (or at least different schemas in the same database) for the sync domain services (domain node, mediator, sequencer node, and topology manager), channel names, smart contract addresses, etc.

    The new sync domain must be completely separate, but you can reuse your DLT backend as long as you use different sequencer contract addresses or Fabric channels.

  2. Carry out a hard sync domain migration

    Instruct the participants individually using the hard sync domain migration to use the new sync domain.

Note

Currently, the sync domain ID cannot be preserved during upgrades.

Note

To use the same database with different schemas for the old and the new sync domains, set the currentSchema either in the JDBC URL or as a parameter in storage.config.properties.
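
As a sketch, assuming Postgres and placeholder database and schema names:

canton.domains.newdomain.storage.config.properties {
  serverName = "localhost"
  databaseName = "shareddb"
  currentSchema = "newdomain"
}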

Hard Synchronization Domain Migration

Warning

Ensure that you have appropriate backups in place and have tested this procedure before applying it to your production system.

A hard sync domain migration is performed using the respective migration command.

You must enable this command using a special config switch:

canton.features.enable-repair-commands = yes

Assuming that you have several participants all connected to a sync domain named olddomain, ensure that there are no pending transactions. You can do that by either controlling your applications, or by setting the resource limits to 0 on all participants:

@ participant.resources.set_resource_limits(ResourceLimits(Some(0), Some(0)))

This rejects any new command and finishes processing the pending commands. Once you are sure that your participant node is idle, disconnect the participant node from the old sync domain:

@ participant.domains.disconnect("olddomain")

Test that the participant is disconnected from the sync domain by checking the list of active connections:

@ participant.domains.list_connected()
res3: Seq[ListConnectedDomainsResult] = Vector()

This is a good time to perform a backup of the database before proceeding:

CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;

Warning

The following steps modify the participant’s data storage. Without a database backup for your participant, a potential recovery becomes significantly more difficult.

Next, we want to run the migration step. For this, we need to run the repair.migrate_domain command. The command expects two input arguments: the alias of the source sync domain and a sync domain connection configuration for the new sync domain.

In order to build a sync domain connection config, we can just type

@ val config = DomainConnectionConfig("newdomain", GrpcSequencerConnection.tryCreate("https://127.0.0.1:5018"))
config : DomainConnectionConfig = DomainConnectionConfig(
  domain = Domain 'newdomain',
  sequencerConnections = Sequencer 'DefaultSequencer' -> GrpcSequencerConnection(
    endpoints = https://127.0.0.1:5018,
    transportSecurity = true
..

where the URL should point to the correct sync domain. If you are testing the upgrade process locally in a single Canton process using a target sync domain named newdomain (which is what we are doing in this example), you can grab the connection details using

@ val config = DomainConnectionConfig("newdomain", newdomain.sequencerConnection)
config : DomainConnectionConfig = DomainConnectionConfig(
  domain = Domain 'newdomain',
  sequencerConnections = Sequencer 'DefaultSequencer' -> GrpcSequencerConnection(
    endpoints = http://127.0.0.1:30094,
    transportSecurity = false
..

Now, using this configuration object, we can trigger the hard sync domain connection migration using

@ participant.repair.migrate_domain("olddomain", config)

This command registers the new sync domain and re-associates the contracts tied to olddomain with the new sync domain. In addition, some data specific to the old sync domain is automatically deleted.

Once all participants have performed the migration, they can reconnect to the sync domain:

@ participant.domains.reconnect_all()

Now, the new sync domain should be connected:

@ participant.domains.list_connected()
res8: Seq[ListConnectedDomainsResult] = Vector(
  ListConnectedDomainsResult(
    domainAlias = Domain 'newdomain',
    domainId = newdomain::1220746f987d...,
    healthy = true
  )
)

As we’ve previously set the resource limits to 0, we need to reset them:

@ participant.resources.set_resource_limits(ResourceLimits(None, None))

Finally, we can test that the participant can process a transaction by running a ping on the new sync domain:

@ participant.health.ping(participant)
res10: Duration = 449 milliseconds

Note

Currently, the hard migration is the only supported way to migrate a production system. This is because unique contract keys are restricted to a single sync domain.

While the sync domain migration command is mainly used for upgrading, it can also be used to recover contracts associated with a broken sync domain.

After the upgrade, the participants may report a mismatch between commitments during the first commitment exchange, as they might have performed the migration at slightly different times. The warning should eventually stop once all participants are back up and connected.

Expected Performance

Performance-wise, we can note the following: when we migrate contracts, we write directly into the respective event logs. This means that on the source sync domain, we insert a transfer-out, while we write a transfer-in and the contract into the target sync domain. Writing this information is substantially faster than any kind of transaction processing (several thousand migrations per second on a single CPU/16-core test server). However, with very large datasets, the process can still take quite some time. Therefore, we advise you to measure the time the migration takes during the upgrade test to understand the necessary downtime required for the migration.

Furthermore, upon reconnecting, the participant needs to recompute the new set of commitments. This can take a while for large numbers of contracts.

One-Step Migration

The one-step migration covers a binary upgrade from Canton version 2.3, or any subsequent minor version, up to this minor release version. Additionally, it changes the protocol version supported by these prior releases to a protocol version supported by this minor release version (see also the protocol version table).

Note

There is no need for a one-step migration when upgrading from one release version to another, as long as both release versions support the protocol version that your current sync domain uses. Follow the steps for upgrading the Canton binary instead.

Warning

Every upgrade requires specific and thorough testing before applying it to a particular production environment. Even though the one-step migration process has been tested through automatic and manual tests, and its steps are known to work, additional measures and configuration may be required to address the peculiarities of your environment.

One-Step Migration Recipe for 2.9

General recipe to migrate from a sync domain running on a 2.3, 2.4, 2.5, 2.6, 2.7 or 2.8 release and protocol version 3 or 4 to a new sync domain running on the 2.9 release and protocol version 5:

Note

Although version 2.9 supports other protocol versions, it is recommended to use protocol version 5.

  1. Start a new sync domain running protocol version 5
  2. Halt activity on the old sync domain
  3. Wait for pending transactions to complete or time out
  4. Back up the current sync domain, including participants
  5. Participants: Upgrade the binary to 2.9
  6. Participants: Enable repair commands
  7. All nodes: Test the configuration, taking into account the additional change described below if your old sync domain runs protocol version 3
  8. All nodes: Apply the DB migrations
  9. Participants: Connect to the new sync domain, then disconnect from all sync domains (see the console sketch after this list)
  10. Participants: Invoke command repair.migrate_domain
  11. Participants: Reconnect to the new sync domain
  12. Decommission the old domain
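
A minimal console sketch of steps 9 to 11 for a single participant, reusing the commands shown earlier (the aliases, URL, and participant reference are placeholders for your setup):

@ val config = DomainConnectionConfig("newdomain", GrpcSequencerConnection.tryCreate("https://127.0.0.1:5018"))
@ participant.domains.connect(config)                     // step 9: connect to the new sync domain
@ participant.domains.disconnect("newdomain")             // ...then disconnect from all sync domains
@ participant.domains.disconnect("olddomain")
@ participant.repair.migrate_domain("olddomain", config)  // step 10: migrate the contracts
@ participant.domains.reconnect_all()                     // step 11: reconnect to the new sync domain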

Halt activity on the current sync domain

For a sync domain running protocol version 4 or above, set the dynamic sync domain parameter maxRatePerParticipant to 0; this halts command submissions for every connected participant.
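
A sketch of this, assuming the console in your version exposes the dynamic parameter setter on the sync domain's service administration group (mydomain is a placeholder):

@ mydomain.service.set_max_rate_per_participant(0)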

Otherwise, use

participant.resources.set_resource_limits(ResourceLimits(Some(0), Some(0)))

to set the resource limits to 0 on each participant.

Test the configuration - Additional change if you were running protocol version 3

When migrating from protocol version 3, which uses an unauthenticated contract ID scheme, you may need to explicitly allow unauthenticated contract IDs on the participants:

canton.participants.<nodeName>.parameters.allow-for-unauthenticated-contract-ids=true

Please adjust <nodeName> to match your case.