object PekkoUtil extends HasLoggerName
Type Members
- class CombinedKillSwitch extends KillSwitch
Combines two kill switches into one
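For illustration, a minimal sketch of combining the kill switches of two independently materialized streams. The package com.digitalasset.canton.util and the two-argument constructor shape are assumptions based on the class description, not a confirmed signature.

```scala
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.{KillSwitch, KillSwitches, UniqueKillSwitch}
import org.apache.pekko.stream.scaladsl.{Keep, Sink, Source}
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val system: ActorSystem = ActorSystem("combined-kill-switch-demo")

// Two independent streams, each materializing its own kill switch.
val ks1: UniqueKillSwitch =
  Source.tick(1.second, 1.second, "a").viaMat(KillSwitches.single)(Keep.right).to(Sink.ignore).run()
val ks2: UniqueKillSwitch =
  Source.tick(1.second, 1.second, "b").viaMat(KillSwitches.single)(Keep.right).to(Sink.ignore).run()

// Assumed constructor shape: the two underlying switches are passed directly.
val both: KillSwitch = new PekkoUtil.CombinedKillSwitch(ks1, ks2)
both.shutdown() // stops both streams with a single call
```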
- trait RetrySourcePolicy[S, -A] extends AnyRef
Defines the policy of when restartSource should restart the source, and the state from which the source should be restarted.
- final case class WithKillSwitch[+A](value: A)(killSwitch: KillSwitch) extends Product with Serializable
Container class for adding a org.apache.pekko.stream.KillSwitch to a single value. Two containers are equal if their contained values are equal.
(Equality ignores the org.apache.pekko.stream.KillSwitches because it is usually not very meaningful. The org.apache.pekko.stream.KillSwitch is therefore in the second argument list.)
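A small illustration of the documented equality contract, assuming WithKillSwitch is imported from com.digitalasset.canton.util.PekkoUtil:

```scala
import org.apache.pekko.stream.KillSwitches
import com.digitalasset.canton.util.PekkoUtil.WithKillSwitch // assumed import path

val ks1 = KillSwitches.shared("first")
val ks2 = KillSwitches.shared("second")

// Equality compares only the contained values; the kill switches are ignored.
assert(WithKillSwitch(42)(ks1) == WithKillSwitch(42)(ks2))
assert(WithKillSwitch(42)(ks1) != WithKillSwitch(43)(ks1))
assert(WithKillSwitch(42)(ks1).value == 42)
```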
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native() @IntrinsicCandidate()
- def createActorSystem(namePrefix: String)(implicit ec: ExecutionContext): ActorSystem
Create an actor system using the existing execution context ec.
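A minimal usage sketch; the package com.digitalasset.canton.util and the system name are assumptions for illustration only.

```scala
import scala.concurrent.ExecutionContext
import org.apache.pekko.actor.ActorSystem
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val ec: ExecutionContext = ExecutionContext.global
// Builds the actor system on top of the existing execution context `ec`.
val system: ActorSystem = PekkoUtil.createActorSystem("canton-demo")
```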
- def createExecutionSequencerFactory(namePrefix: String, logger: Logger)(implicit actorSystem: ActorSystem): ExecutionSequencerFactory
Create a new execution sequencer factory (mainly used to create a ledger client) with the existing actor system actorSystem.
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- implicit def errorLoggingContextFromNamedLoggingContext(implicit loggingContext: NamedLoggingContext): ErrorLoggingContext
Convert a com.digitalasset.canton.logging.NamedLoggingContext into an com.digitalasset.canton.logging.ErrorLoggingContext to fix the logger name to the current class name.
- Attributes
- protected
- Definition Classes
- HasLoggerName
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def injectKillSwitch[A, Mat](graph: FlowOpsMat[A, Mat])(killSwitch: (Mat) => KillSwitch): ReprMat[WithKillSwitch[A], Mat]
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- implicit def loggerNameFromThisClass: LoggerNameFromClass
- Attributes
- protected
- Definition Classes
- HasLoggerName
- def mapAsyncAndDrainUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[B]
Version of mapAsyncUS that discards the com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdowns.
Completes when upstream completes and all futures have been completed and all elements have been emitted.
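A hedged sketch of the difference to mapAsyncUS: aborted results are drained away instead of being emitted as UnlessShutdown values. The PekkoUtil package, the FutureUnlessShutdown constructors, and the ??? placeholder for the logging context are assumptions.

```scala
import org.apache.pekko.stream.scaladsl.Source
import com.digitalasset.canton.lifecycle.FutureUnlessShutdown
import com.digitalasset.canton.logging.NamedLoggingContext
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val loggingContext: NamedLoggingContext = ??? // supplied by the enclosing component

val in = Source(1 to 4)

// Once an element aborts due to shutdown, nothing further is emitted downstream:
// the aborted results are discarded rather than surfacing as UnlessShutdown values.
val out = PekkoUtil.mapAsyncAndDrainUS(in, parallelism = 1) { i =>
  if (i == 3) FutureUnlessShutdown.abortedDueToShutdown // assumed constructor name
  else FutureUnlessShutdown.pure(i.toString)
}
```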
- def mapAsyncUS[A, Mat, B](graph: FlowOps[A, Mat], parallelism: Int)(f: (A) => FutureUnlessShutdown[B])(implicit loggingContext: NamedLoggingContext): Repr[UnlessShutdown[B]]
Version of org.apache.pekko.stream.scaladsl.FlowOps.mapAsync for a com.digitalasset.canton.lifecycle.FutureUnlessShutdown. If f returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown on one element of source, then the returned source returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown for all subsequent elements as well.
If parallelism is one, ensures that f is called sequentially for each element of source and that f is not invoked on later stream elements if f returns com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown for an earlier element. If parallelism is greater than one, f may be invoked on later stream elements even though an earlier invocation results in f returning com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown.
Emits when the Future returned by the provided function finishes for the next element in sequence
Backpressures when the number of futures reaches the configured parallelism and the downstream backpressures or the first future is not completed
Completes when upstream completes and all futures have been completed and all elements have been emitted, including those for which the future did not run due to an earlier com.digitalasset.canton.lifecycle.UnlessShutdown.AbortedDueToShutdown.
Cancels when downstream cancels
- parallelism
The parallelism level. Must be at least 1.
- Exceptions thrown
java.lang.IllegalArgumentException if parallelism is not positive.
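A hedged sketch of the sequential (parallelism = 1) behaviour described above. The PekkoUtil package, the FutureUnlessShutdown constructors, and the ??? placeholder are assumptions; Outcome refers to the success case of UnlessShutdown.

```scala
import org.apache.pekko.stream.scaladsl.Source
import com.digitalasset.canton.lifecycle.FutureUnlessShutdown
import com.digitalasset.canton.logging.NamedLoggingContext
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val loggingContext: NamedLoggingContext = ??? // supplied by the enclosing component

val in = Source(1 to 4)

// With parallelism = 1, f is not called again after the first abort; the stream keeps
// emitting, but every element from the abort onwards is AbortedDueToShutdown, roughly:
//   Outcome(10), Outcome(20), AbortedDueToShutdown, AbortedDueToShutdown
val out = PekkoUtil.mapAsyncUS(in, parallelism = 1) { i =>
  if (i == 3) FutureUnlessShutdown.abortedDueToShutdown // assumed constructor name
  else FutureUnlessShutdown.pure(i * 10)
}
```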
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- def remember[A, Mat](graph: FlowOps[A, Mat], memory: NonNegativeInt): Repr[NonEmpty[Seq[A]]]
Remembers the last memory many elements that have already been emitted previously. Passes those remembered elements downstream with each new element. The current element is the com.daml.nonempty.NonEmptyCollInstances.NEPreservingOps.last1 of the sequence.
remember differs from org.apache.pekko.stream.scaladsl.FlowOps.sliding in that remember emits elements immediately when the given source emits, whereas org.apache.pekko.stream.scaladsl.FlowOps.sliding emits only after the source has emitted enough elements to fill the window.
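A short sketch of the windowing behaviour. The PekkoUtil package and the NonNegativeInt.tryCreate constructor from Canton's config.RequireTypes are assumptions.

```scala
import org.apache.pekko.stream.scaladsl.Source
import com.digitalasset.canton.config.RequireTypes.NonNegativeInt // assumed location
import com.digitalasset.canton.util.PekkoUtil // assumed package

val in = Source(1 to 4)

// With memory = 2, each emitted NonEmpty[Seq[Int]] ends in the current element,
// preceded by at most two previously emitted elements; roughly:
//   Seq(1), Seq(1, 2), Seq(1, 2, 3), Seq(2, 3, 4)
val out = PekkoUtil.remember(in, NonNegativeInt.tryCreate(2))
```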
- def restartSource[S, A](name: String, initial: S, mkSource: (S) => Source[A, (KillSwitch, Future[Done])], policy: RetrySourcePolicy[S, A])(implicit arg0: Pretty[S], loggingContext: NamedLoggingContext, materializer: Materializer): Source[WithKillSwitch[A], (KillSwitch, Future[Done])]
Creates a source from mkSource from the initial state. Whenever this source terminates, policy determines whether another source shall be constructed (after a given delay) from a possibly new state. The returned source concatenates the output of all the constructed sources in order. At most one constructed source is active at any given point in time.
Failures in the constructed sources are passed to the policy, but do not make it downstream. The policy is responsible for properly logging these errors if necessary.
- returns
The concatenation of all constructed sources. This source is NOT a blueprint and MUST therefore be materialized at most once. Its materialized value provides a kill switch to stop retrying. Only the org.apache.pekko.stream.KillSwitch.shutdown method should be used; the synchronization may not work correctly with org.apache.pekko.stream.KillSwitch.abort. The kill switch does not short-circuit the already constructed sources though. Downstream should not cancel; use the kill switch instead. The materialized scala.concurrent.Future can be used to synchronize on the computations for restarts: if the source is stopped with the kill switch, the future completes after the computations have finished.
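A hedged sketch of wiring restartSource to a resumable subscription. The PekkoUtil package, the availability of a Pretty[Long] instance, the mkSubscription helper, and the ??? placeholders are assumptions; the retry policy is left abstract because its interface is given by RetrySourcePolicy above.

```scala
import scala.concurrent.Future
import org.apache.pekko.Done
import org.apache.pekko.stream.{KillSwitch, KillSwitches, Materializer}
import org.apache.pekko.stream.scaladsl.{Keep, Sink, Source}
import com.digitalasset.canton.logging.NamedLoggingContext
import com.digitalasset.canton.logging.pretty.Pretty
import com.digitalasset.canton.util.PekkoUtil // assumed package
import com.digitalasset.canton.util.PekkoUtil.RetrySourcePolicy

implicit val materializer: Materializer = ???
implicit val loggingContext: NamedLoggingContext = ???
implicit val prettyLong: Pretty[Long] = ??? // e.g. from Canton's Pretty instances

// Hypothetical helper: each (re)start reads from the offset chosen by the policy.
def mkSubscription(offset: Long): Source[Long, (KillSwitch, Future[Done])] =
  Source
    .fromIterator(() => Iterator.from(offset.toInt).map(_.toLong))
    .viaMat(KillSwitches.single)(Keep.right)
    .watchTermination()(Keep.both)

val policy: RetrySourcePolicy[Long, Long] = ??? // decides whether and from where to restart

val (killSwitch, doneF) =
  PekkoUtil
    .restartSource("ledger-subscription", 0L, mkSubscription, policy)
    .toMat(Sink.foreach(elem => println(elem.value)))(Keep.left)
    .run()

// Only shutdown() should be used; doneF completes once the restart bookkeeping has finished.
killSwitch.shutdown()
```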
- def runSupervised[T](reporter: (Throwable) => Unit, graph: RunnableGraph[T], debugLogging: Boolean = false)(implicit mat: Materializer): T
Utility function to run the graph supervised and stop on an unhandled exception.
By default, a Pekko flow will discard exceptions. Use this method to avoid discarding exceptions.
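A minimal sketch, assuming PekkoUtil lives in com.digitalasset.canton.util: the division by zero at i == 5 is handed to the reporter and stops the stream instead of being silently dropped.

```scala
import scala.concurrent.Future
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.Materializer
import org.apache.pekko.stream.scaladsl.{Keep, RunnableGraph, Sink, Source}
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val system: ActorSystem = ActorSystem("run-supervised-demo")
implicit val materializer: Materializer = Materializer(system)

val graph: RunnableGraph[Future[Int]] =
  Source(1 to 10)
    .map(i => 100 / (i - 5)) // throws ArithmeticException at i == 5
    .toMat(Sink.fold(0)(_ + _))(Keep.right)

// The exception is passed to the reporter and the stream is stopped,
// rather than the failing element being discarded.
val sumF: Future[Int] = PekkoUtil.runSupervised(t => println(s"stream failed: $t"), graph)
```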
- def statefulMapAsync[Out, Mat, S, T](graph: FlowOps[Out, Mat], initial: S)(f: (S, Out) => Future[(S, T)])(implicit loggingContext: NamedLoggingContext): Repr[T]
A version of org.apache.pekko.stream.scaladsl.FlowOps.mapAsync that additionally allows to pass state of type S between every subsequent element. Unlike org.apache.pekko.stream.scaladsl.FlowOps.statefulMapConcat, the state is passed explicitly. Must not be run with the supervision strategies org.apache.pekko.stream.Supervision.Restart nor org.apache.pekko.stream.Supervision.Resume.
- def statefulMapAsyncUS[Out, Mat, S, T](graph: FlowOps[Out, Mat], initial: S)(f: (S, Out) => FutureUnlessShutdown[(S, T)])(implicit loggingContext: NamedLoggingContext): Repr[UnlessShutdown[T]]
Combines mapAsyncUS with statefulMapAsync.
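A hedged sketch of threading explicit state through statefulMapAsync, here a running sum. The PekkoUtil package and the ??? placeholder for the logging context are assumptions.

```scala
import scala.concurrent.Future
import org.apache.pekko.stream.scaladsl.Source
import com.digitalasset.canton.logging.NamedLoggingContext
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val loggingContext: NamedLoggingContext = ??? // supplied by the enclosing component

val in = Source(1 to 4)

// The state S (the running sum) is passed from one element to the next;
// each element emits the sum including itself: 1, 3, 6, 10.
val runningSums = PekkoUtil.statefulMapAsync(in, initial = 0) { (sum, elem) =>
  Future.successful((sum + elem, sum + elem))
}
```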
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def takeUntilThenDrain[A, Mat](graph: FlowOps[WithKillSwitch[A], Mat], condition: (A) => Boolean): Repr[WithKillSwitch[A]]
Passes through all elements of the source until and including the first element that satisfies the condition. Thereafter pulls the kill switch of the first such element and drops all remaining elements of the source.
Emits when upstream emits and all previously emitted elements do not meet the condition.
Backpressures when downstream backpressures
Completes when upstream completes
Cancels when downstream cancels
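A small sketch, assuming the usual import paths for PekkoUtil and WithKillSwitch:

```scala
import org.apache.pekko.stream.KillSwitches
import org.apache.pekko.stream.scaladsl.Source
import com.digitalasset.canton.util.PekkoUtil // assumed package
import com.digitalasset.canton.util.PekkoUtil.WithKillSwitch

val ks = KillSwitches.shared("upstream")
val in = Source(1 to 10).map(i => WithKillSwitch(i)(ks))

// Emits 1, 2, 3 (including the first element satisfying the condition), then pulls the
// kill switch carried by that element and silently drains the rest of the source.
val out = PekkoUtil.takeUntilThenDrain(in, (i: Int) => i >= 3)
```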
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def withUniqueKillSwitch[A, Mat, Mat2](graph: FlowOpsMat[A, Mat])(mat: (Mat, UniqueKillSwitch) => Mat2): ReprMat[WithKillSwitch[A], Mat2]
Adds an org.apache.pekko.stream.KillSwitches.single stage into the stream after the given source and injects the created kill switch into the stream.
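A hedged sketch (PekkoUtil package assumed): the materialized-value combiner keeps the injected UniqueKillSwitch so the stream can be stopped from the outside, while every emitted element also carries the same switch inside its WithKillSwitch wrapper (e.g. for takeUntilThenDrain).

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.UniqueKillSwitch
import org.apache.pekko.stream.scaladsl.{Keep, Sink, Source}
import com.digitalasset.canton.util.PekkoUtil // assumed package

implicit val system: ActorSystem = ActorSystem("with-unique-kill-switch-demo")

val in = Source(1 to 100)

// Keep only the kill switch created by the injected KillSwitches.single stage.
val withSwitch = PekkoUtil.withUniqueKillSwitch(in)((_, killSwitch) => killSwitch)

val (killSwitch: UniqueKillSwitch, doneF) =
  withSwitch
    .toMat(Sink.foreach(elem => println(elem.value)))(Keep.both)
    .run()

killSwitch.shutdown() // stops the stream; doneF then completes
```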
- object RetrySourcePolicy
- object WithKillSwitch extends Serializable
- object syntax