Packages

package util

Package Members

  1. package pekkostreams
  2. package retry

Type Members

  1. trait BatchAggregator[A, B] extends AnyRef

    This batch aggregator exposes a BatchAggregator.run method that allows for batching scala.concurrent.Future computations, defined by a BatchAggregator.Processor.

    Note: it is required that getter and batchGetter do not throw an exception. If they do, the number of in-flight requests could fail to be decremented which would result in degraded performance or even prevent calls to the getters.
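
    As a rough sketch of the idea (the real BatchAggregator.Processor interface in this package may differ in names and signatures), a processor supplies a single getter and a batch getter, and the aggregator coalesces concurrent requests into batch invocations:

    import scala.concurrent.{ExecutionContext, Future}

    // Hypothetical processor shape, for illustration only.
    trait SimpleProcessor[A, B] {
      def getter(key: A): Future[B]                 // used while few requests are in flight
      def batchGetter(keys: Seq[A]): Future[Seq[B]] // used to serve many queued requests at once
    }

    final class SquareProcessor(implicit ec: ExecutionContext) extends SimpleProcessor[Int, Int] {
      def getter(key: Int): Future[Int] = Future(key * key)
      def batchGetter(keys: Seq[Int]): Future[Seq[Int]] = Future(keys.map(k => k * k))
    }

    // A BatchAggregator built from such a processor would route aggregator.run(3)
    // through getter or batchGetter depending on the current load. Per the note above,
    // neither getter nor batchGetter may throw.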

  2. class BatchAggregatorImpl[A, B] extends BatchAggregator[A, B]
  3. final case class ByteString190(str: ByteString)(name: Option[String] = None) extends LengthLimitedByteString with Product with Serializable
  4. final case class ByteString256(str: ByteString)(name: Option[String] = None) extends LengthLimitedByteString with Product with Serializable
  5. final case class ByteString4096(str: ByteString)(name: Option[String] = None) extends LengthLimitedByteString with Product with Serializable
  6. final case class ByteString6144(str: ByteString)(name: Option[String] = None) extends LengthLimitedByteString with Product with Serializable
  7. sealed abstract class Checked[+A, +N, +R] extends Product with Serializable

    A monad for aborting and non-aborting errors. Non-aborting errors are accumulated in a cats.data.Chain until the first aborting error is hit. You can think of com.digitalasset.canton.util.Checked as an extension of Either to also support errors that should not cause the computation to abort.

    A: Type of aborting errors
    N: Type of non-aborting errors
    R: Result type of the monad
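
    The following is a minimal standalone model of these semantics, not the actual Checked implementation: non-aborting errors of type N accumulate in a Chain until an aborting error of type A short-circuits the computation.

    import cats.data.Chain

    sealed trait MiniChecked[+A, +N, +R] {
      // Sequencing keeps previously accumulated non-aborting errors in both branches.
      def flatMap[AA >: A, NN >: N, S](f: R => MiniChecked[AA, NN, S]): MiniChecked[AA, NN, S] =
        this match {
          case MiniAbort(a, ns) => MiniAbort(a, ns) // aborting error: stop, keep earlier warnings
          case MiniResult(ns, r) =>
            f(r) match {
              case MiniAbort(a, ms)  => MiniAbort(a, ns ++ ms)
              case MiniResult(ms, s) => MiniResult(ns ++ ms, s)
            }
        }
    }
    final case class MiniAbort[+A, +N](abort: A, earlier: Chain[N]) extends MiniChecked[A, N, Nothing]
    final case class MiniResult[+N, +R](warnings: Chain[N], result: R) extends MiniChecked[Nothing, N, R]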

  8. final case class CheckedT[F[_], A, N, R](value: F[Checked[A, N, R]]) extends Product with Serializable

    Monad Transformer for Checked, allowing the effect of a monad F to be combined with the aborting and non-aborting failure effect of Checked. Similar to cats.data.EitherT.

    Annotations
    @FutureTransformer(transformedTypeArgumentPosition = 0)
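
    Grounded in the constructor above, a CheckedT value is built by wrapping an effectful Checked (the DbError, Warning, and Row types here are hypothetical placeholders):

    // def lookup(id: Int): Future[Checked[DbError, Warning, Row]] = ...
    // val t: CheckedT[Future, DbError, Warning, Row] = CheckedT(lookup(42))
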
  9. trait CheckedTInstances extends CheckedTInstances1
  10. trait CheckedTInstances1 extends CheckedTInstances2
  11. trait CheckedTInstances2 extends AnyRef
  12. final case class Ctx[+Context, +Value](context: Context, value: Value, telemetryContext: TelemetryContext = NoOpTelemetryContext) extends Product with Serializable

    Ctx wraps a value with some contextual information.

  13. type ErrorLoggingLazyVal[T] = LazyValWithContext[T, ErrorLoggingContext]
  14. class FlushFuture extends HasFlushFuture

    Stand-alone implementation of HasFlushFuture

  15. trait HasFlushFuture extends NamedLogging

    Provides a single flush scala.concurrent.Future that runs asynchronously. Tasks can be chained onto the flush future, although they will not run sequentially.

  16. trait HasReadWriteLock extends AnyRef
  17. final class LazyValWithContext[T, Context] extends AnyRef

    "Implements" a lazy val field whose initialization expression can refer to implicit context information of type Context.

    "Implements" a lazy val field whose initialization expression can refer to implicit context information of type Context. The "val" is initialized upon the first call to get, using the context information supplied for this call, like a lazy val.

    Instead of a plain lazy val field without context

    class C { lazy val f: T = initializer }
    
    use the following code to pass in a Context:
    class C {
      private[this] val _f: LazyValWithContext[T, Context] = new LazyValWithContext[T, Context](context => initializer)
      def f(implicit context: Context): T = _f.get
    }
    

    This class implements the same scheme as how the Scala 2.13 compiler implements lazy vals, as explained on https://docs.scala-lang.org/sips/improved-lazy-val-initialization.html (version V1) along with its caveats.

    See also

    TracedLazyVal To be used when the initializer wants to log something using the logger of the surrounding class

    ErrorLoggingLazyVal To be used when the initializer wants to log errors using the logger of the caller

  18. trait LazyValWithContextCompanion[Context] extends AnyRef
  19. sealed trait LengthLimitedByteString extends NoCopy

    This trait wraps a ByteString that is limited to a certain maximum length. Classes implementing this trait expose create and tryCreate methods to safely (and non-safely) construct such a ByteString.

    The canonical use case is ensuring that we don't encrypt more data than the underlying crypto algorithm can: for example, Rsa2048OaepSha256 can only encrypt 190 bytes at a time.
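
    A hedged sketch of construction, assuming the wrapped type is com.google.protobuf.ByteString; the doc above only guarantees that companions expose create and tryCreate, so the exact signatures are assumptions:

    import com.google.protobuf.ByteString

    val payload: ByteString = ByteString.copyFromUtf8("fits in 190 bytes")

    // Safe variant: presumably reports an error instead of throwing when payload exceeds 190 bytes.
    // val checked = ByteString190.create(payload)

    // Unsafe variant: presumably throws when payload exceeds 190 bytes.
    // val limited: ByteString190 = ByteString190.tryCreate(payload)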

  20. trait LengthLimitedByteStringCompanion[A <: LengthLimitedByteString] extends AnyRef

    Trait that implements methods commonly needed in the companion object of a LengthLimitedByteString.

  21. final case class LengthLimitedByteStringVar(str: ByteString, maxLength: Int)(name: Option[String] = None) extends LengthLimitedByteString with Product with Serializable
  22. class MessageRecorder extends FlagCloseable with NamedLogging

    Persists data for replay tests.

  23. type NamedLoggingLazyVal[T] = LazyValWithContext[T, NamedLoggingContext]
  24. trait NoCopy extends AnyRef

    Prevents auto-generation of the copy method in a case class. Case classes with private constructors typically shouldn't have a copy method.

  25. class NoOpBatchAggregator[A, B] extends BatchAggregator[A, B]
  26. final case class OrderedBucketMergeConfig[Name, +Config](threshold: PositiveInt, sources: NonEmpty[Map[Name, Config]]) extends Product with Serializable

    threshold: The threshold of equivalent elements to reach before it can be emitted.
    sources: The configurations to be used with OrderedBucketMergeHubOps.makeSource to create a source.
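
    A sketch of building such a config, e.g. threshold T = F + 1 = 2 over three named sources (the PositiveInt and NonEmpty factory calls are assumptions about those utility types, and MyConfig and the cfg values are hypothetical):

    // val config: OrderedBucketMergeConfig[String, MyConfig] =
    //   OrderedBucketMergeConfig(
    //     threshold = PositiveInt.tryCreate(2),
    //     sources = NonEmpty(Map, "seq1" -> cfg1, "seq2" -> cfg2, "seq3" -> cfg3),
    //   )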

  27. class OrderedBucketMergeHub[Name, A, Config, Offset, M] extends GraphStageWithMaterializedValue[FlowShape[OrderedBucketMergeConfig[Name, Config], Output[Name, (Config, Option[M]), A, Offset]], Future[Done]] with NamedLogging

    A custom Pekko org.apache.pekko.stream.stage.GraphStage that merges several ordered source streams into one based on those sources reaching a threshold for equivalent elements.

    The ordered sources produce elements with totally ordered offsets. For a given threshold t, whenever t different sources have produced equivalent elements for an offset that is higher than the previous offset, the OrderedBucketMergeHub emits the map of all these equivalent elements as the next com.digitalasset.canton.util.OrderedBucketMergeHub.OutputElement to downstream. Elements from the other ordered sources with lower or equal offset that have not yet reached the threshold are dropped.

    Every correct ordered source should produce the same sequence of offsets. Faulty sources can produce any sequence of elements as they like. The threshold should be set to F+1 where at most F sources are assumed to be faulty, and at least 2F+1 ordered sources should be configured. This ensures that the F faulty ordered sources cannot corrupt the stream nor block it.

    If this assumption is violated, the OrderedBucketMergeHub may deadlock, as it only looks at the next element of each ordered source (this avoids unbounded buffering and therefore ensures that downstream backpressure reaches the ordered sources). For example, given a threshold of 2 with three ordered sources, two of which are faulty, suppose the first elements of the sources have offsets 1, 2, 3, and that the first ordered source's second element has offset 3 and is equivalent to the third ordered source's first element. Then, by the above definition of merging, the stage could emit the elements with offset 3 and discard those with offsets 1 and 2. However, this is not yet implemented; the stream just does not emit anything. Nor are such deadlocks detected right now.

    This is because, in an asynchronous system, there typically are ordered sources that have not yet delivered their next element and possibly never will within useful time, say because they have crashed (which is not considered a fault). In the above example, suppose that the second ordered source had not emitted the element with offset 2. Then it is unknown whether the element with offset 1 should be emitted or not, because we do not know which ordered sources are correct. Suppose we decided to drop the elements with offset 1 from a correct ordered source and emit the ones with offset 3 instead. Then the second (delayed, but correct) ordered source could still send an equivalent element with offset 1, and the decision to drop offset 1 would turn out to have been wrong in hindsight.

    The OrderedBucketMergeHub manages the ordered sources. Their configurations and the threshold arrive through the OrderedBucketMergeHub's input stream as an OrderedBucketMergeConfig. As soon as a new OrderedBucketMergeConfig is available, the OrderedBucketMergeHub changes the ordered sources as necessary:

    - Ordered sources are identified by their Name.
    - Existing ordered sources whose name does not appear in the new configuration are stopped.
    - If a new configuration contains a new name for an ordered source, a new ordered source is created using ops.
    - If the configuration of an ordered source changes, the previous source is stopped and a new one with the new configuration is created.

    The OrderedBucketMergeHub emits com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events to downstream:

    - com.digitalasset.canton.util.OrderedBucketMergeHub.NewConfiguration signals the new configuration in place.
    - com.digitalasset.canton.util.OrderedBucketMergeHub.ActiveSourceTerminated signals that an ordered source has completed or aborted with an error before it was stopped.

    Since configuration changes are consumed eagerly, the OrderedBucketMergeHub buffers these com.digitalasset.canton.util.OrderedBucketMergeHub.ControlOutput events if downstream is not consuming them fast enough. The stream of configuration changes should therefore be slower than downstream; otherwise, the buffer will grow unboundedly and lead to java.lang.OutOfMemoryErrors eventually.

    When the configuration stream completes or aborts, all ordered sources are stopped and the output stream completes.

    An ordered source is stopped by pulling its org.apache.pekko.stream.KillSwitch and dropping all elements until the source completes or aborts. In particular, the ordered source is not just simply cancelled upon a configuration change or when the configuration stream completes. This allows for properly synchronizing the completion of the OrderedBucketMergeHub with the internal computations happening in the ordered sources. To that end, the OrderedBucketMergeHub materializes to a scala.concurrent.Future that completes when all created ordered sources, as well as their corresponding futures, have completed.

    If downstream cancels, the OrderedBucketMergeHub cancels all sources and the input port, without draining them. Therefore, the materialized scala.concurrent.Future may or may not complete, depending on the shape of the ordered sources. For example, if the ordered sources' futures are created with a plain org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination, it will complete because org.apache.pekko.stream.scaladsl.FlowOpsMat.watchTermination completes immediately when it sees a cancellation. Therefore, it is better to avoid downstream cancellations altogether.

    Rationale for the merging logic:

    This graph stage is meant to merge the streams of sequenced events from several sequencers on a client node. The operator configures N sequencer connections and specifies a threshold T. Suppose the operator assumes that at most F nodes out of N are faulty. So we need F < T for safety. For liveness, the operator wants to tolerate as many crashes of correct sequencer nodes as feasible. Let C be the number of tolerated crashes. Then T <= N - C - F because faulty sequencers may not deliver any messages. For a fixed F, T = F + 1 is optimal as we can then tolerate C = N - 2F - 1 crashed sequencer nodes.

    In other words, if the operator wants to tolerate up to F faults and up to C crashes, then it should set T = F + 1 and configure N = 2F + C + 1 different sequencer connections.
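
    For example, to tolerate F = 2 faulty sequencers and C = 2 crashes, set T = F + 1 = 3 and configure N = 2F + C + 1 = 7 sequencer connections; the threshold of 3 is then still reachable by the N - C - F = 7 - 2 - 2 = 3 remaining correct, live sequencers.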

    If more than C sequencers have crashed, then the faulty sequencers can make the client deadlock. The client cannot detect this under the asynchrony assumption.

    Moreover, the client cannot distinguish whether a sequencer node is actively malicious or just accidentally faulty. In particular, if several sequencer nodes deliver inequivalent events, we currently silently drop them. TODO(#14365) Design and implement an alert mechanism

  28. trait OrderedBucketMergeHubOps[Name, A, Config, Offset, +M] extends AnyRef
  29. class RateLimiter extends AnyRef

    Utility class that allows clients to keep track of a rate limit.

    The decay rate limiter keeps track of the current rate and allows temporary bursts, at the risk of overloading the system too quickly.

    Clients need to tell an instance whenever they intend to start a new task. The instance will inform the client whether the task can be executed while still meeting the rate limit.

    Guarantees:

    • Maximum burst size: if checkAndUpdateRate is called n times in parallel, at most max(1, maxTasksPerSecond * maxBurstFactor) calls may return true.
    • Average rate: if checkAndUpdateRate is called at a rate of at least maxTasksPerSecond during n seconds, then the number of calls that return true divided by n is roughly maxTasksPerSecond.
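
    A hedged usage sketch (the guarantees above name checkAndUpdateRate and imply a Boolean result; the zero-argument signature is an assumption):

    // Run the task only if it fits within the configured rate.
    def submit(limiter: RateLimiter, task: () => Unit): Boolean =
      if (limiter.checkAndUpdateRate()) { task(); true } // admitted under the rate limit
      else false                                         // rejected: drop or retry later
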
  30. trait ShowUtil extends ShowSyntax
  31. class SimpleExecutionQueue extends PrettyPrinting with NamedLogging with FlagCloseableAsync

    Functions executed with this class will only run when all previous calls have completed executing. This can be used when async code should not be run concurrently.

    The default semantics is that a task is executed only if all previous tasks have completed successfully, i.e., they neither failed nor were aborted due to shutdown.

    If the queue is shut down, the tasks' execution is aborted due to shutdown too.
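
    A hypothetical usage sketch (the method name and signature below are assumptions, not the actual API):

    // val queue: SimpleExecutionQueue = ...
    // val first  = queue.execute(writeToDb(1), "write 1")
    // val second = queue.execute(writeToDb(2), "write 2") // starts only after "write 1" succeeds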

  32. class SingleUseCell[A] extends AnyRef

    This class provides a mutable container for a single value of type A. The value may be put at most once. A SingleUseCell therefore provides the following immutability guarantee: the value of a cell cannot change; once it has been put there, it will remain in the cell.
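
    A sketch with assumed method names (putIfAbsent, get); the real API may differ:

    // val cell = new SingleUseCell[String]
    // cell.putIfAbsent("first")   // the cell was empty: the value is stored
    // cell.putIfAbsent("second")  // no effect: a value may be put at most once
    // cell.get                    // Some("first"), and this will never change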

  33. class SnapshottableList[A] extends AnyRef

    A mutable list to which elements can be prepended and where snapshots can be taken atomically. Both operations are constant-time. Thread safe.
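
    A sketch with assumed method names (prepend, snapshot); the real API may differ:

    // val list = new SnapshottableList[Int]
    // list.prepend(1)
    // val snap = list.snapshot // immutable snapshot: later prepends do not affect snap
    // list.prepend(2)          // snap still contains only 1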

  34. trait Thereafter[F[_]] extends AnyRef

    Typeclass for computations with an operation that can run a side effect after the computation has finished.

    The typeclass abstracts the following patterns so that it can be used for types other than scala.concurrent.Future.

    future.transform { result => val () = body(result); result } // synchronous body
    future.transformWith { result => body(result).transform(_ => result) } // asynchronous body

    Usage:

    import com.digitalasset.canton.util.Thereafter.syntax.*
    
    myAsyncComputation.thereafter(result => ...)
    

    F: The computation's type functor.

  35. type TracedLazyVal[T] = LazyValWithContext[T, TraceContext]
  36. final case class UByte(signed: Byte) extends NoCopy with Product with Serializable

Value Members

  1. val ErrorLoggingLazyVal: LazyValWithContextCompanion[ErrorLoggingContext]
  2. val NamedLoggingLazyVal: LazyValWithContextCompanion[NamedLoggingContext]
  3. val TracedLazyVal: LazyValWithContextCompanion[TraceContext]
  4. object BatchAggregator
  5. object BinaryFileUtil

    Write and read byte strings to files.

  6. object ByteString190 extends LengthLimitedByteStringCompanion[ByteString190] with Serializable
  7. object ByteString256 extends LengthLimitedByteStringCompanion[ByteString256] with Serializable
  8. object ByteString4096 extends LengthLimitedByteStringCompanion[ByteString4096] with Serializable
  9. object ByteString6144 extends LengthLimitedByteStringCompanion[ByteString6144] with Serializable
  10. object ByteStringUtil
  11. object ChainUtil

    Provides utility functions for the cats implementation of a Chain. This is a data structure similar to a List, with constant-time prepend and append. Note that Chain has a performance hit when pattern matching, as there is no constant-time uncons operation.

    Documentation on the cats Chain: https://typelevel.org/cats/datatypes/chain.html.

  12. object Checked extends Serializable
  13. object CheckedT extends CheckedTInstances with Serializable
  14. object Ctx extends Serializable
  15. object DamlPackageLoader

    Wrapper that retrieves parsed packages from a DAR file consumable by the Daml interpreter.

  16. object DelayUtil extends NamedLogging

    Utility to create futures that succeed after a given delay.

    Inspired by the odelay library, but with a restricted interface to avoid hazardous effects that could be caused by the use of a global executor service.

    TODO(i4245): Replace all usages by Clock.

  17. object EitherTUtil

    Utility functions for the cats cats.data.EitherT monad transformer. Documentation: https://typelevel.org/cats/datatypes/eithert.html.

  18. object EitherUtil
  19. object ErrorUtil
  20. object FutureInstances
  21. object FutureUtil
  22. object HasFlushFuture
  23. object HexString

    Conversion functions to and from hex strings.

  24. object IdUtil

    Contains instances for the Id functor.

  25. object IterableUtil
  26. object LengthLimitedByteString
  27. object LengthLimitedByteStringVar extends Serializable
  28. object LfTransactionUtil

    Helper functions to work with com.digitalasset.daml.lf.transaction.GenTransaction. Using these helper functions provides a buffer against upstream changes.

  29. object LoggerUtil
  30. object MapsUtil
  31. object MessageRecorder
  32. object MonadUtil
  33. object OptionUtil
  34. object OptionUtils
  35. object OrderedBucketMergeHub
  36. object OrderedBucketMergeHubOps
  37. object PathUtils
  38. object PekkoUtil extends HasLoggerName
  39. object PriorityBlockingQueueUtil
  40. object RangeUtil
  41. object ResourceUtil

    Utility code for doing proper resource management. A lot of it is based on https://medium.com/@dkomanov/scala-try-with-resources-735baad0fd7d

  42. object SeqUtil
  43. object SetsUtil
  44. object ShowUtil extends ShowUtil

    Utility class for clients who want to make use of pretty printing. Import this as follows:

    import com.digitalasset.canton.util.ShowUtil._
    
    In some cases, an import at the top of the file will not make the show interpolator available. To work around this, add the import inside the class that uses it.

    To enforce pretty printing, the show interpolator should be used for creating strings. That is, show"$myComplexObject" will result in a compile error if pretty printing is not implemented for myComplexObject. In contrast, s"$myComplexObject" will fall back to the default (non-pretty) toString implementation if pretty printing is not implemented for myComplexObject. Even if pretty printing is implemented for the type of myComplexObject, s"$myComplexObject" will not use it.
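
    For illustration (party is a value of a hypothetical type assumed to have a Pretty instance):

    // import com.digitalasset.canton.util.ShowUtil._
    // show"$party" // compiles only if a Pretty instance exists; uses pretty printing
    // s"$party"    // always compiles; silently falls back to toString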

  45. object SimpleExecutionQueue
  46. object SnapshottableList
  47. object StackTraceUtil
  48. object TextFileUtil
  49. object Thereafter
  50. object TrieMapUtil
  51. object TryUtil
  52. object UByte extends Serializable
  53. object VersionUtil
