Operational Processes

Managing domain entities

Adding new sequencers to a distributed domain

For non-database-based sequencers such as Ethereum or Fabric sequencers, you can either initialize them as part of the regular distributed domain bootstrapping process or dynamically add a new sequencer at a later point as follows:

domainManager1.setup.onboard_new_sequencer(
  initialSequencer = sequencer1,
  newSequencer = sequencer2,
)

Similarly to initializing a distributed domain with separate consoles, dynamically onboarding new sequencers (supported by Fabric and Ethereum sequencers) can be achieved in separate consoles as follows:

// Second sequencer's console: write signing key to file
{
  secondSequencer.keys.secret
    .generate_signing_key(s"${secondSequencer.name}-signing")
    .writeToFile(file1)
}

// Domain manager's console: write domain params and current topology
{
  domainManager1.service.get_static_domain_parameters.writeToFile(paramsFile)

  val sequencerSigningKey = SigningPublicKey.tryReadFromFile(file1)

  domainManager1.setup.helper.authorizeKey(
    sequencerSigningKey,
    s"${secondSequencer.name}-signing",
    sequencerId,
  )

  domainManager1.setup.helper.waitForKeyAuthorizationToBeSequenced(
    sequencerId,
    sequencerSigningKey,
  )

  domainManager1.topology.all
    .list(domainId.filterString)
    .collectOfType[TopologyChangeOp.Positive]
    .writeToFile(file1)
}

// Initial sequencer's console: read topology and write snapshot to file
{
  val topologySnapshotPositive =
    StoredTopologyTransactions
      .tryReadFromFile(file1)
      .collectOfType[TopologyChangeOp.Positive]

  val sequencingTimestamp = topologySnapshotPositive.lastChangeTimestamp.getOrElse(
    sys.error("topology snapshot is empty")
  )

  sequencer.sequencer.snapshot(sequencingTimestamp).writeToFile(file2)
}

// Second sequencer's console: read topology, snapshot and domain params
{
  val topologySnapshotPositive =
    StoredTopologyTransactions
      .tryReadFromFile(file1)
      .collectOfType[TopologyChangeOp.Positive]

  val state = SequencerSnapshot.tryReadFromFile(file2)

  val domainParameters = StaticDomainParameters.tryReadFromFile(paramsFile)

  secondSequencer.initialization
    .initialize_from_snapshot(
      domainId,
      topologySnapshotPositive,
      state,
      domainParameters,
    )
    .publicKey

  secondSequencer.health.initialized() shouldBe true

}

Change Sequencer Connection

You can change the sequencer being used by the domain manager node or by a mediator node after bootstrapping. To do this for either type of node:

val conn1 = sequencer1.sequencerConnection
mediator1.sequencer_connection.get() shouldBe Some(conn1)

val conn2 = sequencer2.sequencerConnection
mediator1.sequencer_connection.set(conn2)
mediator1.sequencer_connection.get() shouldBe Some(conn2)
participant1.health.ping(participant2, timeout = 30.seconds)

mediator1.sequencer_connection.modify(_.addConnection(conn1))

mediator1.sequencer_connection.get() shouldBe Some(
  SequencerConnection.merge(Seq(conn2, conn1)).value
)
participant1.health.ping(participant2, timeout = 30.seconds)

Dynamic domain parameters

In addition to the parameters that are specified in the configuration, some parameters can be changed at runtime (i.e., while the domain is running); these are called dynamic domain parameters.

A participant can get the current parameters on a domain it is connected to using the following command:

participant.topology.domain_parameters_changes.get_latest(mydomain.id)

A domain operator can update some of the parameters as follows:

mydomain.service.update_dynamic_parameters(_.copy(
  participantResponseTimeout = TimeoutDuration.ofSeconds(10)
))

Importing existing Contracts

You may have existing contracts, parties, and DARs in other Daml Participant Nodes (such as the Daml sandbox) that you want to import into your Canton-based participant node. To address this need, you can extract contracts and associated parties via the ledger api, modify contracts, parties, and Daml archives (DARs) as needed, and upload the data to Canton using the Canton console.

You can also import existing contracts from Canton itself; this is useful as part of Canton upgrades across major versions with incompatible internal storage.

importing ledger contracts from other Daml Participant Nodes or instances of Canton based on previous major versions

Preparation

As contracts (1) “belong to” parties and (2) are instances of Daml templates defined in Daml Archives (DARs), importing contracts to Canton also requires creating corresponding parties and uploading DARs.

  • Contracts are often interdependent, requiring care to honor dependencies so that the set of imported contracts is internally consistent. This requires particular attention if you choose to modify contracts prior to their import.
  • Additionally, use of divulgence in the original ledger has likely introduced non-obvious dependencies that may impede exercising contract choices after import. As a result, such divulged contracts need to be re-divulged as part of the import (by exercising existing choices, or, if there are no side-effect-free choices that re-divulge the necessary contracts, by extending your Daml models with new choices).
  • Party ids have a stricter format on Canton than on non-Canton ledgers, ending with a required “fingerprint” suffix, so at a minimum, you will need to “remap” party ids (see the sketch after this list).
  • Canton contract keys do not have to be unique, so if your Daml models rely on uniqueness, consider extending the models using these strategies or limit your Canton Participants to connect to a single Canton domain with unique contract key semantics.
  • Canton does not support implicit party creation, so be sure to create all needed parties explicitly.
  • In addition you could choose to spread contracts, parties, and DARs across multiple Canton Participants.
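
For instance, here is a minimal sketch of such a remapping, using the same console commands as the walkthrough later in this section; the participant reference “participant1” and the plain party name “Alice” are purely illustrative.

// Canton has no implicit party creation, so allocate the party explicitly on a participant.
participant1.ledger_api.parties.allocate("Alice", "Alice")

// Look up the resulting Canton party id, which ends with the required fingerprint suffix
// (e.g. "Alice::1220..."), and record the mapping from the plain id to the Canton id.
val aliceOnCanton =
  participant1.parties.list(filterParty = "Alice" + SafeSimpleString.delimiter).map(_.party).head
val plainToCantonParty: Map[String, String] = Map("Alice" -> aliceOnCanton.toLf)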

With the above requirements in mind, you are ready to plan and execute the following three step process:

  1. Download parties and contracts from the existing Daml Participant Node and locate the DAR files that the contracts are based on.
  2. Modify the parties and contracts (at a minimum, assigning Canton-conformant party ids).
  3. Provision Canton Participants along with at least one Canton Domain. Upload DARs, create parties, and finally add the contracts to the Canton participants. Then connect the participants to the domain(s).

Importing an actual Ledger

To follow along with this guide, ensure you have installed and unpacked the Canton release bundle and run the following commands from the “canton-X.Y.Z” directory to set up the initial topology.

export CANTON=`pwd`
export CONF="$CANTON/examples/03-advanced-configuration"
export IMPORT="$CANTON/examples/07-repair"
bin/canton \
  -c $IMPORT/participant1.conf,$IMPORT/participant2.conf,$IMPORT/participant3.conf,$IMPORT/participant4.conf \
  -c $IMPORT/domain-export-ledger.conf,$IMPORT/domain-import-ledger.conf \
  -c $CONF/storage/h2.conf,$IMPORT/enable-preview-commands.conf \
  --bootstrap $IMPORT/import-ledger-init.canton

This sets up an “exportLedger” with a set of parties consisting of painters, house owners, and banks along with a handful of paint offer contracts and IOUs.

Define the following helper functions, which are useful for extracting parties and contracts via the ledger api:

def queryActiveContractsFromDamlLedger(
    hostname: String,
    port: Port,
    tls: Option[TlsClientConfig],
    token: Option[String] = None,
)(implicit consoleEnvironment: ConsoleEnvironment): Seq[CreatedEvent] = {

  // Helper to query the ledger api using the specified command.
  def queryLedgerApi[Svc <: AbstractStub[Svc], Result](
      command: GrpcAdminCommand[_, _, Result]
  ): Either[String, Result] =
    consoleEnvironment.grpcAdminCommandRunner
      .runCommand("sourceLedger", command, ClientConfig(hostname, port, tls), token)
      .toEither

  (for {
    // Identify all the parties on the ledger and narrow down the list to local parties.
    allParties <- queryLedgerApi(LedgerApiCommands.PartyManagementService.ListKnownParties())
    localParties = allParties.collect {
      case PartyDetails(party, _, isLocal) if isLocal => LfPartyId.assertFromString(party)
    }

    // Query the ActiveContractsService for the actual contracts
    acs <- queryLedgerApi(
      LedgerApiCommands.AcsService.GetActiveContracts(localParties.toSet)
    )
  } yield acs.map(_.event)).valueOr(err =>
    throw new IllegalStateException(s"Failed to query parties, ledger id, or acs: $err")
  )
}

def removeCantonSpecifics(acs: Seq[CreatedEvent]): Seq[CreatedEvent] = {
  def stripPartyIdSuffix(suffixedPartyId: String): String =
    suffixedPartyId.split(SafeSimpleString.delimiter).head

  acs.map { event =>
    ValueRemapper.convertEvent(identity, stripPartyIdSuffix)(event)
  }
}

def lookUpPartyId(participant: ParticipantReference, party: String): PartyId =
  participant.parties.list(filterParty = party + SafeSimpleString.delimiter).map(_.party).head

As the first step, export the active contract set (ACS). To illustrate how to import data from non-Canton ledgers, strip the Canton-specifics by making the party ids generic (stripping the Canton-specific suffix).

val acs =
  queryActiveContractsFromDamlLedger(
    exportLedger.config.ledgerApi.address,
    exportLedger.config.ledgerApi.port,
    exportLedger.config.ledgerApi.tls.map(_.clientConfig),
  )

val acsExported = removeCantonSpecifics(acs).toList

The second step involves preparing the Canton participants and domain by uploading DARs and creating parties. Here we choose to place the house owners, painters, and banks on different participants.

placing contracts on all the correct Canton Participants

Also modify the events to be based on the newly created party ids.

// Decide on which canton participants to host which parties along with their contracts.
// We place house owners, painters, and banks on separate participants.
val participants = Seq(participant1, participant2, participant3)
val partyAssignments =
  Seq(participant1 -> houseOwners, participant2 -> painters, participant3 -> banks)

// Connect to domain prior to uploading dars and parties.
participants.foreach { participant =>
  participant.domains.connect_local(importLedgerDomain)
  participant.dars.upload(darPath)
}

// Create canton party ids and remember mapping of plain to canton party ids.
val toCantonParty: Map[String, String] =
  partyAssignments.flatMap { case (participant, parties) =>
    val partyMappingOnParticipant = parties.map { party =>
      participant.ledger_api.parties.allocate(party, party)
      party -> lookUpPartyId(participant, party).toLf
    }
    partyMappingOnParticipant
  }.toMap

// Create traffic on all participants so that the repair commands will pick an identity snapshot that is aware of
// all party allocations
participants.foreach { participant =>
  participant.health.ping(participant, workflowId = importLedgerDomain.name)
}

// Switch the ACS to be based on canton party ids.
val acsToImportToCanton =
  acsExported.map(ValueRemapper.convertEvent(identity, toCantonParty(_)))

As the third step, perform the actual import to each participant filtering the contracts based on the location of contract stakeholders and witnesses.

// Disconnect from domain temporarily to allow import to be performed.
participants.foreach(_.domains.disconnect(importLedgerDomain.name))

// Pick a ledger create time according to the domain's clock.
val ledgerCreateTime =
  consoleEnvironment.environment.domains
    .getRunning(importLedgerDomain.name)
    .get
    .clock
    .now
    .toInstant

// Filter active contracts based on participant parties and upload.
partyAssignments.foreach { case (participant, rawParties) =>
  val parties = rawParties.map(toCantonParty(_))
  val participantAcs = acsToImportToCanton
    .collect {
      case event
          if event.signatories.intersect(parties).nonEmpty
            || event.observers.intersect(parties).nonEmpty
            || event.witnessParties.intersect(parties).nonEmpty =>
        val wrappedCreatedEvent = WrappedCreatedEvent(event)

        SerializableContractWithWitnesses(
          utils
            .contract_data_to_instance(wrappedCreatedEvent.toContractData, ledgerCreateTime),
          Set.empty,
        )
    }

  participant.repair.add(importLedgerDomain.name, participantAcs, ignoreAlreadyAdded = false)
}

def verifyActiveContractCounts() = {
  Map[LocalParticipantReference, (Boolean, Boolean)](
    participant1 -> ((true, true)),
    participant2 -> ((true, false)),
    participant3 -> ((false, true)),
  ).foreach { case (participant, (hostsPaintOfferStakeholder, hostsIouStakeholder)) =>
    val expectedCounts =
      (houseOwners.map { houseOwner =>
        houseOwner.toPartyId(participant) ->
          ((if (hostsPaintOfferStakeholder) paintOffersPerHouseOwner else 0)
            + (if (hostsIouStakeholder) 1 else 0))
      }
        ++ painters.map { painter =>
          painter.toPartyId(participant) -> (if (hostsPaintOfferStakeholder)
                                               paintOffersPerPainter
                                             else 0)
        }
        ++ banks.map { bank =>
          bank.toPartyId(participant) -> (if (hostsIouStakeholder) iousPerBank else 0)
        }).toMap[PartyId, Int]

    assertAcsCounts((participant, expectedCounts))
  }
}

/*
  If the test fails because of an Errors.MismatchError.NoSharedContracts error, it may be worth
  extending the scope of the suppressing logger.
 */
loggerFactory.assertLogsUnorderedOptional(
  {
    // Finally reconnect to the domain.
    participants.foreach(_.domains.reconnect(importLedgerDomain.name))
  }
  // The optional log-entry expectations passed to the suppressing logger are omitted from this excerpt.
)

To demonstrate that the imported ledger works, let’s have each of the house owners accept one of the painters’ offers to paint their house.

def yesYouMayPaintMyHouse(
    houseOwner: PartyId,
    painter: PartyId,
    participant: ParticipantReference,
): Unit = {
  val iou = participant.ledger_api.acs.await[Iou.Iou](houseOwner, Iou.Iou)
  val bank = iou.value.payer
  val paintProposal = participant.ledger_api.acs
    .await[Paint.OfferToPaintHouseByPainter](
      houseOwner,
      Paint.OfferToPaintHouseByPainter,
      pp => pp.value.painter == painter.toPrim && pp.value.bank == bank,
    )
  val cmd = paintProposal.contractId
    .exerciseAcceptByOwner(houseOwner.toPrim, iou.contractId)
    .command
  val _ = clue(
    s"$houseOwner accepts paint proposal by $painter financing through ${bank.toString}"
  )(participant.ledger_api.commands.submit(Seq(houseOwner), Seq(cmd)))
}

// Have each house owner accept one of the paint offers to illustrate use of the imported ledger.
houseOwners.zip(painters).foreach { case (houseOwner, painter) =>
  yesYouMayPaintMyHouse(
    lookUpPartyId(participant1, houseOwner),
    lookUpPartyId(participant1, painter),
    participant1,
  )
}

// Illustrate that accepting the paint offers has resulted in new PaintHouse contracts.
{
  val paintHouseContracts = painters.map { painter =>
    participant2.ledger_api.acs
      .await[Paint.PaintHouse](lookUpPartyId(participant2, painter), Paint.PaintHouse)
  }
  assert(paintHouseContracts.size == 4)
  paintHouseContracts
}

This guide has demonstrated how to import data from non-Canton Daml Participant Nodes or from a Canton Participant of a lower major version as part of a Canton upgrade.

Backup and Restore

It is recommended that your database is frequently backed up so that the data can be restored in case of a disaster.

In the case of a restore, a participant can replay missing data from the domain, provided the domain’s backup is more recent than the participant’s. It is important that the participant’s backup is not more recent than the domain’s, as that would constitute a ledger fork. Therefore, if you back up both the participant and the domain, always back up the participant database before the domain’s.

In case of a domain restore from a backup, if a participant is ahead of the domain, the participant will refuse to connect to the domain and you must either:

  • restore the participant’s state to a backup taken before the domain’s disaster,
  • or roll out a new domain as a repair strategy in order to recover from the lost domain

We recommend that in production, a domain should be run with offsite synchronous replication to assure the most crucial data is always safely backed up and as up-to-date as possible.

Postgres Example

If you are using Postgres to persist the participant or domain node data, you can create a backup to a file and restore it using Postgres’s utility commands pg_dump and pg_restore, as shown below:

Backing up Postgres database to a file:

pg_dump -U <user> -h <host> -p <port> -w -F tar -f <fileName> <dbName>

Restoring Postgres database data from a file:

pg_restore -U <user> -h <host> -p <port> -w -d <dbName> <fileName>

Although the approach shown above works for small deployments, it is not recommended for larger deployments. For those, we suggest looking into incremental backups; refer to your database vendor’s documentation, for example the PostgreSQL chapter on continuous archiving and point-in-time recovery.

Database Failover

A database backup allows you to recover the ledger up to the point when the last backup was created. However, any command accepted after creation of the backup may be lost in case of a disaster. Therefore, restoring a backup will likely result in data loss.

If such data loss is unacceptable, you need to run Canton against a replicated database. If the data in one replica gets lost, the database can still fail over to another replica without any data loss. For detailed instructions on how to set up a replicated database and how to perform failovers, we refer to the database system documentation, e.g. the high availability documentation of PostgreSQL.

It is strongly recommended to configure replication as synchronous. That means, the database should report a database transaction as successfully committed only after it has been persisted to all database replicas. In PostgreSQL, this corresponds to the setting synchronous_commit = on. If you do not follow this recommendation, you may observe data loss and/or a corrupt state after a database failover.

For PostgreSQL, Canton strives to validate the database replication configuration and fail with an error if a misconfiguration is detected. However, this validation is of a best-effort nature, so it may fail to detect an incorrect replication configuration. For Oracle, no attempt is made to validate the database configuration. Overall, you should not rely on Canton detecting mistakes in the database configuration.

Ledger Pruning

Pruning the ledger frees up storage space by deleting state no longer needed by participants, domain sequencers, and mediators. It also serves as a mechanism to help implement right-to-forget mandates such as GDPR.

The following commands allow you to prune events and inactive contracts up to a specified time from the various components:

  • Prune participants via the prune command, specifying a “ledger offset” obtained by passing a timestamp to “get_offset_by_time”.
  • Prune domain sequencers and mediators via their respective prune_at commands.
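
For reference, here is a minimal sketch of these calls in isolation, using the same console commands as the fuller script below; the node references “participant1” and “mydomain” and the 30-day retention period are illustrative.

import java.time.Duration
import com.digitalasset.canton.data.CantonTimestamp

// Prune everything older than an illustrative 30-day retention period.
val pruneUpToIncluding = CantonTimestamp.now().minus(Duration.ofDays(30))

// Participant: translate the timestamp into a ledger offset, then prune up to that offset.
participant1.pruning
  .get_offset_by_time(pruneUpToIncluding.toInstant)
  .foreach(participant1.pruning.prune)

// Domain: the sequencer and mediator are pruned directly at a timestamp.
mydomain.sequencer.pruning.prune_at(pruneUpToIncluding)
mydomain.mediator.prune_at(pruneUpToIncluding)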

The pruning operations impact the “regular” workload (lowering throughput during pruning by as much as 50% in our test environments), so depending on your requirements it might make sense to schedule pruning at off-peak times or during maintenance windows such as after taking database backups.

The following canton console code illustrates best practices such as:

  • The pruning commands used in the script will not delete any data that is still required for command processing. (E.g. it will only delete sequencer data that all clients of the sequencer have already read or acknowledged.) If the given timestamp is too high, the commands will fail.
  • Error handling ensures that pruning errors raise an alert. Catching the CommandFailure exception also ensures that a problem encountered while pruning one component still lets pruning of the other components proceed, allowing the corresponding storage to be freed up.
  • Pruning one node at a time rather than all nodes in parallel somewhat limits the impact on concurrently executing workload. If you configure pruning to run during a maintenance window with no concurrent workload, and as long as the database backend has sufficient capacity, you may prune participants and domains in parallel.

import com.digitalasset.canton.console.{CommandFailure, ParticipantReference}
import com.digitalasset.canton.data.CantonTimestamp

def pruneAllNodes(pruneUpToIncluding: CantonTimestamp): Unit = {
  // If pruning a particular component fails, alert the user, but proceed pruning other components.
  // Therefore prune failures in one component still allow other components to be pruned
  // minimizing the chance of running out of overall storage space.
  def alertOnErrorButMoveOn(
      component: String,
      ts: CantonTimestamp,
      invokePruning: CantonTimestamp => Unit,
  ): Unit =
    try {
      invokePruning(ts)
    } catch {
      case _: CommandFailure =>
        logger.warn(
          s"Error pruning ${component} up to ${ts}. See previous log error for details. Moving on..."
        )
    }

  // Helper to prune a participant by time for consistency with domain prune signatures
  def pruneParticipantAt(p: ParticipantReference)(pruneUpToIncluding: CantonTimestamp): Unit = {
    val pruneUpToOffset = p.pruning.get_offset_by_time(pruneUpToIncluding.toInstant)
    pruneUpToOffset match {
      case Some(offset) => p.pruning.prune(offset)
      case None => logger.info(s"Nothing to prune up to ${pruneUpToIncluding}")
    }
  }

  val participantsToPrune = participants.all
  val domainsToPrune = domains.all

  // Prune all nodes one after the other rather than in parallel to limit the impact on concurrent workload.
  participantsToPrune.foreach(participant =>
    alertOnErrorButMoveOn(participant.name, pruneUpToIncluding, pruneParticipantAt(participant))
  )

  domainsToPrune.foreach { domain =>
    alertOnErrorButMoveOn(
      s"${domain.name} sequencer",
      pruneUpToIncluding,
      domain.sequencer.pruning.prune_at,
    )
    alertOnErrorButMoveOn(
      s"${domain.name} mediator",
      pruneUpToIncluding,
      domain.mediator.prune_at,
    )
  }
}

Invoke pruning from within your scheduling environment, specifying the ledger data retention period like so:

import java.time.Duration
val retainMostRecent = Duration.ofDays(30)
pruneAllNodes(CantonTimestamp.now().minus(retainMostRecent))
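
If no external scheduler is at hand, one option is to trigger the call from a long-running console process itself. The sketch below is one possible way of doing so with a plain JVM scheduled executor; it is not a Canton feature, and in production an external scheduler (cron or similar) is usually the better fit.

import java.time.Duration
import java.util.concurrent.{Executors, TimeUnit}

val retainMostRecent = Duration.ofDays(30)
val scheduler = Executors.newSingleThreadScheduledExecutor()

// Run pruneAllNodes once per day, keeping the most recent 30 days of ledger data.
scheduler.scheduleAtFixedRate(
  () => pruneAllNodes(CantonTimestamp.now().minus(retainMostRecent)),
  /* initialDelay = */ 0L,
  /* period = */ 1L,
  TimeUnit.DAYS
)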

Pruning Ledgers in Test Environments

While it is a best practice for test environments to match production configurations, testing pruning involves challenges related to the amount of retained data:

  • Test environments may not have the same amount of storage space to hold data volumes present in production.
  • It may be impractical to wait long enough until test environments have accrued data to expected production retention times that are often measured in months.

As a result, you may choose to prune test environments more aggressively. When using databases other than Oracle with a lower retention time, use the same code as when pruning production. On Oracle, however, you may observe performance degradation when pruning the majority of the ledger data in one go. In such cases, breaking up pruning invocations into multiple chunks likely speeds up pruning:

// An example test environment configuration in which hardly any data is retained.
val pruningFrequency = Duration.ofDays(1)
val retainMostRecent = Duration.ofMinutes(20)
val pruningStartedAt = CantonTimestamp.now()
val isOracle = true

// Deleting the majority of rows from an Oracle table has been observed to
// take a long time. Avoid non-linear performance degradation by breaking up one prune call into
// several calls with progressively more recent pruning timestamps.
if (isOracle && retainMostRecent.compareTo(pruningFrequency) < 0) {
  val numChunks = 8L
  val delta = pruningFrequency.minus(retainMostRecent).dividedBy(numChunks)
  for (chunk <- 1L to numChunks) yield {
    val chunkRetentionTimestamp = pruningFrequency.minus(delta.multipliedBy(chunk))
    pruneAllNodes(pruningStartedAt.minus(chunkRetentionTimestamp))
  }
}

pruneAllNodes(pruningStartedAt.minus(retainMostRecent))

Repairing Participants

Canton enables interoperability of distributed participants and domains. Particularly in distributed settings without trust assumptions, faults in one part of the system should ideally produce minimal irrecoverable damage to other parts. For example if a domain is irreparably lost, the participants previously connected to that domain need to recover and be empowered to continue their workflows on a new domain.

This guide will illustrate how to replace a lost domain with a new domain providing business continuity to affected participants.

Recovering from a Lost Domain

Note

Please note that this section describes a preview feature, because using multiple domains is itself only a preview feature.

Suppose that a set of participants have been conducting workflows via a domain that runs into trouble. In fact, consider that the domain has gotten into such a disastrous state that it is beyond repair, for example:

  • The domain has experienced data loss and is unable to be restored from backups or the backups are missing crucial recent history.
  • The domain data is found to be corrupt causing participants to lose trust in the domain as a mediator.

Next the participant operators each examine their local state, and upon coordinating conclude that their participants’ active contracts are “mostly the same”. This domain-recovery repair demo illustrates how the participants can

  • coordinate to agree on a set of contracts to use moving forward, serving as a new consistent state,
  • copy over the agreed-upon set of contracts to a brand new domain,
  • “fail over” to the new domain,
  • and finally continue running workflows on the new domain having recovered from the permanent loss of the old domain.

Repairing an actual Topology

To follow along with this guide, ensure you have installed and unpacked the Canton release bundle and run the following commands from the “canton-X.Y.Z” directory to set up the initial topology.

export CANTON=`pwd`
export CONF="$CANTON/examples/03-advanced-configuration"
export REPAIR="$CANTON/examples/07-repair"
bin/canton \
  -c $REPAIR/participant1.conf,$REPAIR/participant2.conf,$REPAIR/domain-repair-lost.conf,$REPAIR/domain-repair-new.conf \
  -c $CONF/storage/h2.conf,$REPAIR/enable-preview-commands.conf \
  --bootstrap $REPAIR/domain-repair-init.canton

To simplify the demonstration, this not only sets up the starting topology of

  • two participants, “participant1” and “participant2”, along with
  • one domain “lostDomain” that is about to become permanently unavailable leaving “participant1” and “participant2” unable to continue executing workflows,

but also already includes the ingredients needed to recover:

  • The setup includes “newDomain” that we will rely on as a replacement domain, and
  • we already enable the “enable-preview-commands” configuration needed to make available the “repair.change_domain” command.

In practice you would only add the new domain once you have the need to recover from domain loss and also only then enable the repair commands.

We simulate “lostDomain” permanently disappearing by stopping the domain and never bringing it up again, to emphasize the point that the participants no longer have access to any state from “lostDomain”. We also disconnect “participant1” and “participant2” from “lostDomain” to reflect that the participants have “given up” on the domain and recognize the need for a replacement for business continuity. The fact that we disconnect the participants “at the same time” is somewhat artificial, as in practice the participants might have lost connectivity to the domain at different times (more on reconciling contracts below).

lostDomain.stop()
Seq(participant1, participant2).foreach { p =>
  p.domains.disconnect(lostDomain.name)
  // Also let the participant know not to attempt to reconnect to lostDomain
  p.domains.modify(lostDomain.name, _.copy(manualConnect = true))
}
"lostDomain" has become unavailable and neither participant can connect anymore

Even though the domain is “the node that has broken”, recovering entails repairing the participants using the “newDomain” already set up. As of now, participant repairs have to be performed in an offline fashion, requiring the participants being repaired to be disconnected from the new domain. However, we temporarily connect to the domain to let the topology state initialize, and disconnect only once the parties can be used on the new domain.

Seq(participant1, participant2).foreach(_.domains.connect_local(newDomain))

// Wait for topology state to appear before disconnecting again.
clue("newDomain initialization timed out") {
  eventually()(
    (
      participant1.domains.active(newDomain.name),
      participant2.domains.active(newDomain.name),
    ) shouldBe (true, true)
  )
}
// Run a few transactions on the new domain so that the topology state chosen by the repair commands
// really is the active one that we've seen
participant1.health.ping(participant2, workflowId = newDomain.name)

Seq(participant1, participant2).foreach(_.domains.disconnect(newDomain.name))

With the participants connected neither to “lostDomain” nor “newDomain”, each participant can

  • locally look up the active contracts assigned to the lost domain using the “testing.pcs_search” command made available via the “features.enable-testing-commands” configuration,
  • and invoke “repair.change_domain” (enabled via the “features.enable-preview-commands” configuration) in order to “move” the contracts to the new domain.

// Extract participant contracts from "lostDomain".
val contracts1 =
  participant1.testing.pcs_search(lostDomain.name, filterTemplate = "^Iou", activeSet = true)
val contracts2 =
  participant2.testing.pcs_search(lostDomain.name, filterTemplate = "^Iou", activeSet = true)

// Ensure that shared contracts match.
val Seq(sharedContracts1, sharedContracts2) = Seq(contracts1, contracts2).map(
  _.filter { case (_isActive, contract) =>
    contract.metadata.stakeholders.contains(Alice.toLf) &&
      contract.metadata.stakeholders.contains(Bob.toLf)
  }.toSet
)

clue("checking if contracts match") {
  sharedContracts1 shouldBe sharedContracts2
}

// Finally change the contracts from "lostDomain" to "newDomain"
participant1.repair.change_domain(
  contracts1.map(_._2.contractId),
  lostDomain.name,
  newDomain.name,
)
participant2.repair.change_domain(
  contracts2.map(_._2.contractId),
  lostDomain.name,
  newDomain.name,
  skipInactive = false,
)

Note

The code snippet above includes a check that the contracts shared among the participants match (as determined by each participant: “sharedContracts1” by “participant1” and “sharedContracts2” by “participant2”). Should the contracts not match (as could happen if the participants had lost connectivity to the domain at different times), this check fails, prompting the participant operators to reach an agreement on the set of contracts. The agreed-upon set of active contracts may for example be

  • the intersection of the active contracts among the participants
  • or perhaps the union (for which the operators can use the “repair.add” command to create the contracts missing from one participant; see the sketch after this note).

Also note that both the repair commands and the “testing.pcs_search” command are currently “preview” features, and therefore their names may change.
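
As an illustration of the “union” strategy, the following sketch reuses the “repair.add” command from the import walkthrough earlier in this section to add, on “participant2”, the shared contracts that only “participant1” still knows about; the symmetric call for “participant1” is analogous. Whether you add the missing contracts to the lost domain (so that the subsequent “repair.change_domain” call moves them along with the rest) or directly to the new domain depends on your recovery procedure, so treat this as a starting point rather than a recipe.

// Sketch only: contracts that participant1 considers shared but participant2 does not know about.
val missingOnParticipant2 = sharedContracts1 -- sharedContracts2

// Add them to participant2 so that both participants agree on the union of shared contracts.
participant2.repair.add(
  lostDomain.name,
  missingOnParticipant2.toSeq.map { case (_isActive, contract) =>
    SerializableContractWithWitnesses(contract, Set.empty)
  },
  ignoreAlreadyAdded = true,
)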

Once each participant has associated the contracts with “newDomain”, let’s have them reconnect, and we should be able to confirm that the new domain is able to execute workflows from where the lost domain disappeared.

Seq(participant1, participant2).foreach(_.domains.reconnect(newDomain.name))

// Look up a couple of contracts moved from lostDomain
val Seq(iouAlice, iouBob) = Seq(participant1 -> Alice, participant2 -> Bob).map {
  case (participant, party) =>
    participant.ledger_api.acs.await[Iou.Iou](party, Iou.Iou, _.value.owner == party.toPrim)
}

// Ensure that we can create new contracts
Seq(participant1 -> ((Alice, Bob)), participant2 -> ((Bob, Alice))).foreach {
  case (participant, (payer, owner)) =>
    participant.ledger_api.commands.submit_flat(
      Seq(payer),
      Seq(
        Iou
          .Iou(
            payer.toPrim,
            owner.toPrim,
            Iou.Amount(value = 200, currency = "USD"),
            List.empty,
          )
          .create
          .command
      ),
    )
}

// Even better: Confirm that we can exercise choices on the moved contracts
Seq(participant2 -> ((Bob, iouBob)), participant1 -> ((Alice, iouAlice))).foreach {
  case (participant, (owner, iou)) =>
    participant.ledger_api.commands
      .submit_flat(Seq(owner), Seq(iou.contractId.exerciseCall(owner.toPrim).command))
}
"newDomain" has replaced "lostDomain"

In practice, we would now be in a position to remove the “lostDomain” from both participants and to disable the repair commands again to prevent accidental use of these “dangerously powerful” tools.

This guide has demonstrated how participants can recover from a domain that has been permanently lost or has somehow become irreparably corrupted.

Removing Packages and DARs

A package is a unit of compiled Daml code corresponding to one Daml project. A DAR is a collection of packages including a main package (corresponding to a Daml project) and all other packages from the dependencies of this Daml project.

Canton supports removal of both packages and DARs that are no longer in use. Removing unused packages and DARs has the following advantages:

  • Freeing up storage
  • Preventing accidental use of the old package / DAR
  • Reducing the number of packages / DARs that are trusted and may potentially have to be audited

Note that package and DAR removal is still under active development. The behaviour described in this documentation may change in the future. Package and DAR removal is still a preview feature and should not be used in production.

Certain conditions must be met in order to remove packages or DARs. These conditions are designed to prevent removal of packages or DARs that are currently in use. The rest of this page describes the requirements.

Removing DARs

The following checks are performed before a DAR can be removed:

  • The main package of the DAR must be unused – there should be no active contract from this package
  • All package dependencies of the DAR should either be unused or contained in another of the participant node’s uploaded DARs. Canton uses this restriction to ensure that the package dependencies of the DAR don’t become “stranded” if they’re in use.
  • The main package of the DAR should not be vetted. If it is vetted, Canton will try to automatically revoke the vetting for the main package of the DAR, but this automatic vetting revocation will only succeed if the main package vetting originates from a standard dars.upload. Even if the automatic revocation fails, you can always manually revoke the package vetting.

The following tutorial shows how to remove a DAR with the Canton console. The first step is to upload a DAR so that we have one to remove. Additionally, store the packages that are present before the DAR is uploaded, as these can be used to double-check that DAR removal reverts to a clean state.

@ val packagesBefore = participant1.packages.list().map(_.packageId).toSet
packagesBefore : Set[String] = HashSet(
  "86828b9843465f419db1ef8a8ee741d1eef645df02375ebf509cdc8c3ddd16cb",
  "cc348d369011362a5190fe96dd1f0dfbc697fdfd10e382b9e9666f0da05961b7",
  "6839a6d3d430c569b2425e9391717b44ca324b88ba621d597778811b2d05031d",
  "99a2705ed38c1c26cbb8fe7acf36bbf626668e167a33335de932599219e0a235",
  "e22bce619ae24ca3b8e6519281cb5a33b64b3190cc763248b4c3f9ad5087a92c",
  "d58cf9939847921b2aab78eaa7b427dc4c649d25e6bee3c749ace4c3f52f5c97",
  "6c2c0667393c5f92f1885163068cd31800d2264eb088eb6fc740e11241b2bf06",
  "8a7806365bbd98d88b4c13832ebfa305f6abaeaf32cfa2b7dd25c4fa489b79fb",
  "c1f1f00558799eec139fb4f4c76f95fb52fa1837a5dd29600baa1c8ed1bdccfd",
..
@ val darHash = participant1.dars.upload("dars/CantonExamples.dar")
darHash : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"

If the DAR hash is unknown, it can be found using dars.list:

@ val darHash_ = participant1.dars.list().filter(_.name == "CantonExamples").head.hash
darHash_ : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"

The DAR can then be removed with the following command:

@ participant1.dars.remove(darHash)

Note that, right now, DAR removal will only remove the main packages associated with the DAR:

@ val packageIds = participant1.packages.list().filter(_.sourceDescription == "CantonExamples").map(_.packageId)
packageIds : Seq[String] = Vector(
  "86828b9843465f419db1ef8a8ee741d1eef645df02375ebf509cdc8c3ddd16cb",
  "9aa3e6c519a690dd659d33c6a5463913f57ad0cde76e3f8c605fe775b284b681",
  "cc348d369011362a5190fe96dd1f0dfbc697fdfd10e382b9e9666f0da05961b7",
  "e491352788e56ca4603acc411ffe1a49fefd76ed8b163af86cf5ee5f4c38645b",
  "cb0552debf219cc909f51cbb5c3b41e9981d39f8f645b1f35e2ef5be2e0b858a",
  "38e6274601b21d7202bb995bc5ec147decda5a01b68d57dda422425038772af7",
  "99a2705ed38c1c26cbb8fe7acf36bbf626668e167a33335de932599219e0a235",
  "940d9ffdac4d55e44181ec30343703a8b1341a6a21f0e1bf48956e1652de1d98",
  "f20de1e4e37b92280264c08bf15eca0be0bc5babd7a7b5e574997f154c00cb78",
..

It’s possible to remove each of these manually, using package removal. There is a complication here that packages needed for admin workflows (e.g. the Ping command) cannot be removed, so these are skipped.

@ packageIds.filter(id => ! packagesBefore.contains(id)).foreach(id => participant1.packages.remove(id))

The following command verifies that all the packages have been removed.

@ val packages = participant1.packages.list().map(_.packageId).toSet
packages : Set[String] = HashSet(
  "86828b9843465f419db1ef8a8ee741d1eef645df02375ebf509cdc8c3ddd16cb",
  "cc348d369011362a5190fe96dd1f0dfbc697fdfd10e382b9e9666f0da05961b7",
  "6839a6d3d430c569b2425e9391717b44ca324b88ba621d597778811b2d05031d",
  "99a2705ed38c1c26cbb8fe7acf36bbf626668e167a33335de932599219e0a235",
  "e22bce619ae24ca3b8e6519281cb5a33b64b3190cc763248b4c3f9ad5087a92c",
  "d58cf9939847921b2aab78eaa7b427dc4c649d25e6bee3c749ace4c3f52f5c97",
  "6c2c0667393c5f92f1885163068cd31800d2264eb088eb6fc740e11241b2bf06",
  "8a7806365bbd98d88b4c13832ebfa305f6abaeaf32cfa2b7dd25c4fa489b79fb",
  "c1f1f00558799eec139fb4f4c76f95fb52fa1837a5dd29600baa1c8ed1bdccfd",
..
@ assert(packages == packagesBefore)

The following sections explain what happens when the DAR removal operation goes wrong, for various reasons.

Main package of the DAR is in use

The first step to illustrate this is to upload a DAR and create a contract using the main package of the DAR:

@ val darHash = participant1.dars.upload("dars/CantonExamples.dar")
darHash : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"
@ val packageId = participant1.packages.find("Iou").head.packageId
packageId : String = "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33"
@ participant1.domains.connect_local(mydomain)
@ val createIouCmd = ledger_api_utils.create(packageId,"Iou","Iou",Map("payer" -> participant1.adminParty,"owner" -> participant1.adminParty,"amount" -> Map("value" -> 100.0, "currency" -> "EUR"),"viewers" -> List()))
..
@ participant1.ledger_api.commands.submit(Seq(participant1.adminParty), Seq(createIouCmd))
res13: com.daml.ledger.api.v1.transaction.TransactionTree = TransactionTree(
  transactionId = "12205f07e0b254432e443d93381d6e27e8935defb7c3358bdc3226f482eb814a249b",
  commandId = "40354c59-66b3-43d9-aea4-9d36d3a2a65b",
  workflowId = "",
  effectiveAt = Some(
..

Now that a contract exists using the main package of the DAR, a subsequent DAR removal operation will fail:

@ participant1.dars.remove(darHash)
ERROR com.digitalasset.canton.integration.EnterpriseEnvironmentDefinition$$anon$3 - Request failed for participant1.
  GrpcRequestRefusedByServer: FAILED_PRECONDITION/PACKAGE_OR_DAR_REMOVAL_ERROR(9,5c5a39fe): The DAR DarDescriptor(SHA-256:48728cb9404b...,CantonExamples) cannot be removed because its main package 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33 is in-use by contract ContractId(0013d3bb6ba80e0126b8462b9ff3c10dd1fc0a180466504ac5b875d804b93b98ffca00122010ac200ab262817f73c467e4b627d43a1a03c3425ee05ae9883dc862725aca72)
on domain mydomain::1220acd3d079....
  Request: RemoveDar(122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854)
  CorrelationId: 5c5a39fe88913fd04b30365b2b5e9eac
  Context: Map(participant -> participant1, test -> PackageDarRemovalDocumentationIntegrationTest, pkg -> 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33)
  Command ParticipantAdministration$dars$.remove invoked from cmd10000037.sc:1

In order to remove the DAR, we must archive this contract. Note that the contract ID for this contract can also be found in the error message above.

@ val iou = participant1.ledger_api.acs.find_generic(participant1.adminParty, _.templateId == "Iou.Iou")
iou : com.digitalasset.canton.admin.api.client.commands.LedgerApiTypeWrappers.WrappedCreatedEvent = WrappedCreatedEvent(
  event = CreatedEvent(
    eventId = "#12205f07e0b254432e443d93381d6e27e8935defb7c3358bdc3226f482eb814a249b:0",
    contractId = "0013d3bb6ba80e0126b8462b9ff3c10dd1fc0a180466504ac5b875d804b93b98ffca00122010ac200ab262817f73c467e4b627d43a1a03c3425ee05ae9883dc862725aca72",
    templateId = Some(
      value = Identifier(
        packageId = "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33",
        moduleName = "Iou",
        entityName = "Iou"
      )
..
@ val archiveIouCmd = ledger_api_utils.exercise("Archive", Map.empty, iou.event)
..
@ participant1.ledger_api.commands.submit(Seq(participant1.adminParty), Seq(archiveIouCmd))
res16: com.daml.ledger.api.v1.transaction.TransactionTree = TransactionTree(
  transactionId = "1220b3a1ebb365a60e5692d1d471a22248e44470903c311c18d19cacef4e4f55752f",
  commandId = "f333d3dc-707c-4768-a8ef-dd1a2048390a",
  workflowId = "",
  effectiveAt = Some(
..

The DAR removal operation will now succeed.

@ participant1.dars.remove(darHash)

Main package of the DAR can’t be automatically removed

Similarly, DAR removal may fail because the vetting for the DAR’s main package can’t be automatically revoked. To illustrate this, upload the DAR without automatic vetting and subsequently vet all the packages manually.

@ val darHash = participant1.dars.upload("dars/CantonExamples.dar", vetAllPackages = false)
darHash : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"
@ import com.daml.lf.data.Ref.IdString.PackageId
@ val packageIds = participant1.packages.list().filter(_.sourceDescription == "CantonExamples").map(_.packageId).map(PackageId.assertFromString)
packageIds : Seq[PackageId] = Vector(
  "86828b9843465f419db1ef8a8ee741d1eef645df02375ebf509cdc8c3ddd16cb",
  "9aa3e6c519a690dd659d33c6a5463913f57ad0cde76e3f8c605fe775b284b681",
..
@ participant1.topology.vetted_packages.authorize(TopologyChangeOp.Add, participant1.id, packageIds)
res21: com.google.protobuf.ByteString = <ByteString@7110341b size=2182 contents="\n\203\021\n\263\016\n\260\016\n\255\016\022 VRng30WxG12gFycpqATSpXi0zwr8G7imJ...">

The DAR removal operation will now fail:

@ participant1.dars.remove(darHash)
ERROR com.digitalasset.canton.integration.EnterpriseEnvironmentDefinition$$anon$3 - Request failed for participant1.
  GrpcRequestRefusedByServer: FAILED_PRECONDITION/PACKAGE_OR_DAR_REMOVAL_ERROR(9,9e6ab87b): An error was encountered whilst trying to unvet the DAR DarDescriptor(SHA-256:48728cb9404b...,CantonExamples) with main package 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33 for DAR removal. Details: IdentityManagerParentError(Mapping(VettedPackages(
  participant = participant1::1220f2c5fafa...,
  packages = Seq(
    3bb0e7e515a2...,
    8016b5e8e840...,
    cb0552debf21...,
    3f4deaf145a1...,
    86828b984346...,
    f20de1e4e37b...,
    76bf0fd12bd9...,
    38e6274601b2...,
    d58cf...
  Request: RemoveDar(122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854)
  CorrelationId: 9e6ab87bd830d61a8a603125eda5c7a6
  Context: Map(participant -> participant1, test -> PackageDarRemovalDocumentationIntegrationTest)
  Command ParticipantAdministration$dars$.remove invoked from cmd10000057.sc:1

The DAR can be successfully removed after manually revoking the vetting for the main package:

@ participant1.topology.vetted_packages.authorize(TopologyChangeOp.Remove, participant1.id, packageIds, force = true)
res22: com.google.protobuf.ByteString = <ByteString@5ef3abe9 size=2184 contents="\n\205\021\n\265\016\n\262\016\n\257\016\b\001\022 VRng30WxG12gFycpqATSpXi0zwr8G7i...">
@ participant1.dars.remove(darHash)

Note that a force flag is needed to revoke the package vetting; throughout this tutorial, force will be used whenever a package vetting is being removed. See topology.vetted_packages.authorize for more detail.

Removing Packages

Canton also supports removing individual packages, giving the user more fine-grained control over the system. A package can be removed if it satisfies the following two requirements:

  • The package must be unused. This means that there shouldn’t be an active contract corresponding to the package.
  • The package must not be vetted. This means there shouldn’t be an active vetting transaction corresponding to the package.

The following tutorial shows how to remove a package using the Canton console. The first step is to upload and identify the package ID for the package to be removed.

@ val darHash = participant1.dars.upload("dars/CantonExamples.dar")
darHash : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"
@ val packageId = participant1.packages.find("Iou").head.packageId
packageId : String = "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33"

Package removal will initially fail as, by default, uploading the DAR will add a vetting transaction for the package:

@ participant1.packages.remove(packageId)
ERROR com.digitalasset.canton.integration.EnterpriseEnvironmentDefinition$$anon$3 - Request failed for participant1.
  GrpcRequestRefusedByServer: FAILED_PRECONDITION/PACKAGE_OR_DAR_REMOVAL_ERROR(9,a105a7de): Package 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33 is currently vetted and available to use.
  Request: RemovePackage(3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33,false)
  CorrelationId: a105a7de3496c25a594ae750e44a8b0f
  Context: Map(participant -> participant1, test -> PackageDarRemovalDocumentationIntegrationTest)
  Command ParticipantAdministration$packages$.remove invoked from cmd10000068.sc:1

The vetting transaction must be manually revoked:

@ val packageIds = participant1.topology.vetted_packages.list().map(_.item.packageIds).filter(_.contains(packageId)).head
packageIds : Seq[com.digitalasset.canton.package.LfPackageId] = Vector(
  "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33",
  "8016b5e8e840ccc9bf15ba9a1a768cde2082f8913a313b23f42a72d9b2e36fe1",
..
@ participant1.topology.vetted_packages.authorize(TopologyChangeOp.Remove, participant1.id, packageIds, force = true)
res27: com.google.protobuf.ByteString = <ByteString@2917c736 size=2184 contents="\n\205\021\n\265\016\n\262\016\n\257\016\b\001\022 vzwL8wv4uDa8Y3msBs0BSAyeMZfRKZO...">

And then the package can be removed:

@ participant1.packages.remove(packageId)

Package is in use

The operations above will fail if the package is in use. To illustrate this, first re-upload the package (uploading the associated DAR will work):

@ val darHash = participant1.dars.upload("dars/CantonExamples.dar")
darHash : String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"

Then create a contract using the package:

@ val createIouCmd = ledger_api_utils.create(packageId,"Iou","Iou",Map("payer" -> participant1.adminParty,"owner" -> participant1.adminParty,"amount" -> Map("value" -> 100.0, "currency" -> "EUR"),"viewers" -> List()))
createIouCmd : com.daml.ledger.api.v1.commands.Command = Command(
  command = Create(
    value = CreateCommand(
      templateId = Some(
        value = Identifier(
..
@ participant1.ledger_api.commands.submit(Seq(participant1.adminParty), Seq(createIouCmd))
res31: com.daml.ledger.api.v1.transaction.TransactionTree = TransactionTree(
  transactionId = "12205f23edf63cd78395087d36e188a63e11dfea6cd6f1b5a6970bee585e67e347bc",
  commandId = "376d46f6-0b03-481d-b157-c1b260a70811",
  workflowId = "",
  effectiveAt = Some(
    value = Timestamp(
      seconds = 1657558963L,
      nanos = 563111000,
      unknownFields = UnknownFieldSet(fields = Map())
    )
..

In this situation, the package cannot be removed:

@ participant1.packages.remove(packageId)
ERROR com.digitalasset.canton.integration.EnterpriseEnvironmentDefinition$$anon$3 - Request failed for participant1.
  GrpcRequestRefusedByServer: FAILED_PRECONDITION/PACKAGE_OR_DAR_REMOVAL_ERROR(9,01af54dc): Package 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33 is currently in-use by contract ContractId(000e1a26ec5571ad8e1b8e4a34cf472ad1382a83a8d4642f5cf2b1c731f44a0413ca001220a9c446879a9c3035d9f25aef9d0a4e1a8a255f3ddc03cf093cbb48bd047b4102) on domain mydomain::1220acd3d079.... It may also be in-use by other contracts.
  Request: RemovePackage(3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33,false)
  CorrelationId: 01af54dc3a34c3d4a495e9740ffc9f53
  Context: HashMap(participant -> participant1, test -> PackageDarRemovalDocumentationIntegrationTest, domain -> mydomain::1220acd3d079..., pkg -> 3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33, contract -> ContractId(000e1a26ec5571ad8e1b8e4a34cf472ad1382a83a8d4642f5cf2b1c731f44a0413ca001220a9c446879a9c3035d9f25aef9d0a4e1a8a255f3ddc03cf093cbb48bd047b4102))
  Command ParticipantAdministration$packages$.remove invoked from cmd10000084.sc:1

To remove the package, first archive the contract:

@ val iou = participant1.ledger_api.acs.find_generic(participant1.adminParty, _.templateId == "Iou.Iou")
iou : com.digitalasset.canton.admin.api.client.commands.LedgerApiTypeWrappers.WrappedCreatedEvent = WrappedCreatedEvent(
  event = CreatedEvent(
    eventId = "#12205f23edf63cd78395087d36e188a63e11dfea6cd6f1b5a6970bee585e67e347bc:0",
    contractId = "000e1a26ec5571ad8e1b8e4a34cf472ad1382a83a8d4642f5cf2b1c731f44a0413ca001220a9c446879a9c3035d9f25aef9d0a4e1a8a255f3ddc03cf093cbb48bd047b4102",
    templateId = Some(
      value = Identifier(
        packageId = "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33",
        moduleName = "Iou",
        entityName = "Iou"
      )
..
@ val archiveIouCmd = ledger_api_utils.exercise("Archive", Map.empty, iou.event)
archiveIouCmd : com.daml.ledger.api.v1.commands.Command = Command(
  command = Exercise(
    value = ExerciseCommand(
      templateId = Some(
        value = Identifier(
          packageId = "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33",
          moduleName = "Iou",
          entityName = "Iou"
        )
      ),
..
@ participant1.ledger_api.commands.submit(Seq(participant1.adminParty), Seq(archiveIouCmd))
res34: com.daml.ledger.api.v1.transaction.TransactionTree = TransactionTree(
  transactionId = "1220caa8fcf52b7e84601535f29ce58b2202e686d0d5b082d34ccb1a167adb3b3201",
  commandId = "3d036748-f4bb-4945-9364-773b476a3aad",
  workflowId = "",
  effectiveAt = Some(
    value = Timestamp(
      seconds = 1657558964L,
      nanos = 146489000,
      unknownFields = UnknownFieldSet(fields = Map())
    )
  ),
  offset = "00000000000000000c",
..

Then revoke the package vetting transaction:

@ val packageIds = participant1.topology.vetted_packages.list().map(_.item.packageIds).filter(_.contains(packageId)).head
packageIds : Seq[com.digitalasset.canton.package.LfPackageId] = Vector(
  "3bb0e7e515a2e791b03a5f7e17ca8199419cce8a9606bc9b498a207c94ea6f33",
  "8016b5e8e840ccc9bf15ba9a1a768cde2082f8913a313b23f42a72d9b2e36fe1",
..
@ participant1.topology.vetted_packages.authorize(TopologyChangeOp.Remove, participant1.id, packageIds, force = true)
res36: com.google.protobuf.ByteString = <ByteString@281f4c65 size=2184 contents="\n\205\021\n\265\016\n\262\016\n\257\016\b\001\022 CVgRnVUP8hG1YlvSLd8RGuiv6dgUGYu...">

The package removal operation should now succeed.

@ participant1.packages.remove(packageId)

Force-removing packages

Packages can also be forcibly removed, even if the conditions above are not satisfied. This is done by setting the force flag to true.

To experiment with this, first re-upload the DAR so the package becomes available again:

@ participant1.dars.upload("dars/CantonExamples.dar")
res38: String = "122048728cb9404bb61a1264e946465e28ed9f3cd9902853d85e04d1cd51afa82854"

Then force-remove the package:

@ participant1.packages.remove(packageId, force = true)

Please note that this is a dangerous operation. Forced removal of packages should be avoided whenever possible.