Packages

package kernel

Source
package.scala

Package Members

  1. package instances
  2. package syntax
  3. package testkit

Type Members

  1. trait Async[F[_]] extends AsyncPlatform[F] with Sync[F] with Temporal[F]

    A typeclass that encodes the notion of suspending asynchronous side effects in the F[_] context

An asynchronous task is one whose results are computed somewhere else (e.g. by a scala.concurrent.Future running on some other thread pool). We await the results of that execution by giving it a callback to be invoked with the result.

That computation may fail, hence the callback is of type Either[Throwable, A] => Unit. This awaiting is semantic only: no threads are blocked, the current fiber is simply descheduled until the callback completes.

This leads us directly to the simplest asynchronous FFI:

    def async_[A](k: (Either[Throwable, A] => Unit) => Unit): F[A]

async_(k) is semantically blocked until the callback is invoked.

async_ is somewhat constrained, however. We can't perform any F[_] effects in the process of registering the callback, and we also can't register a finalizer to, for example, cancel the asynchronous task in the event that the fiber running async_ is canceled.

This leads us directly to the more general asynchronous FFI:

    def async[A](k: (Either[Throwable, A] => Unit) => F[Option[F[Unit]]]): F[A]

As evidenced by the type signature, k may perform F[_] effects and it returns an Option[F[Unit]], which is an optional finalizer to be run in the event that the fiber running async(k) is canceled.
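
For illustration, here is a minimal sketch of wrapping a hypothetical callback-based client with both FFIs. The names Client, scheduleCall and cancelCall are invented for this example and are not part of cats-effect:

import cats.effect.kernel.Async
import cats.syntax.all._

trait Client[A] {
  def scheduleCall(cb: Either[Throwable, A] => Unit): Unit // hypothetical callback API
  def cancelCall(): Unit                                    // hypothetical cancelation hook
}

// simple FFI: no F[_] effects during registration, no finalizer
def viaAsync_[F[_], A](client: Client[A])(implicit F: Async[F]): F[A] =
  F.async_(cb => client.scheduleCall(cb))

// general FFI: registration is suspended in F and yields an optional finalizer
// that cancels the underlying call if the fiber running this effect is canceled
def viaAsync[F[_], A](client: Client[A])(implicit F: Async[F]): F[A] =
  F.async { cb =>
    F.delay(client.scheduleCall(cb)).as(F.delay(client.cancelCall()).some)
  }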

  2. sealed trait CancelScope extends Product with Serializable
  3. trait Clock[F[_]] extends ClockPlatform[F] with Serializable

    A typeclass which encodes various notions of time. Analogous to some of the time functions exposed by java.lang.System.
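
For illustration, a minimal sketch of measuring elapsed time with the monotonic clock; the timed helper below is hypothetical and requires a Monad[F] in addition to Clock[F]:

import scala.concurrent.duration.FiniteDuration
import cats.Monad
import cats.effect.kernel.Clock
import cats.syntax.all._

def timed[F[_]: Monad: Clock, A](fa: F[A]): F[(FiniteDuration, A)] =
  for {
    start <- Clock[F].monotonic // monotonic time before running fa
    a     <- fa
    end   <- Clock[F].monotonic // monotonic time after running fa
  } yield (end - start, a)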

  4. type Concurrent[F[_]] = GenConcurrent[F, Throwable]
  5. trait Cont[F[_], K, R] extends Serializable

    This construction supports Async.cont

    trait Async[F[_]] {
      ...
    
  def cont[K, R](body: Cont[F, K, R]): F[R]
    }

    It's a low level operation meant for implementors, end users should use async, start or Deferred instead, depending on the use case.

    It can be understood as providing an operation to resume an F asynchronously, of type Either[Throwable, A] => Unit, and an (interruptible) operation to semantically block until resumption, of type F[A]. We will refer to the former as resume, and the latter as get.

    These two operations capture the essence of fiber blocking, and can be used to build async, which in turn can be used to build Fiber, start, Deferred and so on.

Refer to the default implementation of Async[F].async for an example of usage.

    The reason for the shape of the Cont construction in Async[F].cont, as opposed to simply:

    trait Async[F[_]] {
      ...
    
      def cont[A]: F[(Either[Throwable, A] => Unit, F[A])]
    }

    is that it's not safe to use concurrent operations such as get.start.

The Cont encoding therefore simulates higher-rank polymorphism to ensure that you cannot call start on get, but only use operations up to MonadCancel (flatMap, onCancel, uncancelable, etc.).

    If you are an implementor, and you have an implementation of async but not cont, you can override Async[F].async with your implementation, and use Async.defaultCont to implement Async[F].cont.
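
For illustration, a sketch of a Cont body (not production code) that resumes immediately with a constant value; resume and get correspond to the operations described above:

import cats.~>
import cats.effect.kernel.{Cont, MonadCancel}

def constCont[F[_]]: Cont[F, Int, Int] =
  new Cont[F, Int, Int] {
    def apply[G[_]](implicit
        G: MonadCancel[G, Throwable]): (Either[Throwable, Int] => Unit, G[Int], F ~> G) => G[Int] = {
      (resume, get, lift) =>
        resume(Right(42)) // resume: complete the continuation with a value
        get               // get: semantically block until that value is available
    }
  }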

  6. abstract class Deferred[F[_], A] extends DeferredSource[F, A] with DeferredSink[F, A]

    A purely functional synchronization primitive which represents a single value which may not yet be available.

    When created, a Deferred is empty. It can then be completed exactly once, and never be made empty again.

    get on an empty Deferred will block until the Deferred is completed. get on a completed Deferred will always immediately return its content.

    complete(a) on an empty Deferred will set it to a, notify any and all readers currently blocked on a call to get, and return true. complete(a) on a Deferred that has already been completed will not modify its content, and return false.

    Albeit simple, Deferred can be used in conjunction with Ref to build complex concurrent behaviour and data structures like queues and semaphores.

    Finally, the blocking mentioned above is semantic only, no actual threads are blocked by the implementation.
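
For illustration, a minimal sketch (specialized to IO for concreteness) in which one fiber waits on a Deferred while another completes it:

import cats.effect.IO
import cats.effect.kernel.Deferred

val program: IO[Int] =
  for {
    d      <- Deferred[IO, Int]
    waiter <- d.get.start           // get semantically blocks the spawned fiber
    _      <- d.complete(42)        // wakes up the waiter; returns true on first completion
    result <- waiter.joinWithNever  // the waiter's result, i.e. 42
  } yield result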

  7. trait DeferredSink[F[_], A] extends Serializable
  8. trait DeferredSource[F[_], A] extends Serializable
  9. trait Fiber[F[_], E, A] extends Serializable

    A datatype that represents a handle to a fiber and allows for waiting and cancelation against that fiber.

    See also

    GenSpawn documentation for more detailed information on the concurrency of fibers.

  10. trait GenConcurrent[F[_], E] extends GenSpawn[F, E]
  11. trait GenSpawn[F[_], E] extends MonadCancel[F, E] with Unique[F]

    A typeclass that characterizes monads which support spawning and racing of fibers. GenSpawn extends the capabilities of MonadCancel, so an instance of this typeclass must also provide a lawful instance for MonadCancel.

    This documentation builds upon concepts introduced in the MonadCancel documentation.

    Concurrency

    GenSpawn introduces a notion of concurrency that enables fibers to safely interact with each other via three special functions. start spawns a fiber that executes concurrently with the spawning fiber. join semantically blocks the joining fiber until the joinee fiber terminates, after which the Outcome of the joinee is returned. cancel requests a fiber to abnormally terminate, and semantically blocks the canceller until the cancellee has completed finalization.

    Just like threads, fibers can execute concurrently with respect to each other. This means that the effects of independent fibers may be interleaved nondeterministically. This mode of concurrency reaps benefits for modular program design; fibers that are described separately can execute simultaneously without requiring programmers to explicitly yield back to the runtime system.

    The interleaving of effects is illustrated in the following program:

    for {
      fa <- (println("A1") *> println("A2")).start
      fb <- (println("B1") *> println("B2")).start
    } yield ()

    In this program, two fibers A and B are spawned concurrently. There are six possible executions, each of which exhibits a different ordering of effects. The observed output of each execution is shown below:

    1. A1, A2, B1, B2
    2. A1, B1, A2, B2
    3. A1, B1, B2, A2
    4. B1, B2, A1, A2
    5. B1, A1, B2, A2
    6. B1, A1, A2, B2

    Notice how every execution preserves sequential consistency of the effects within each fiber: A1 always prints before A2, and B1 always prints before B2. However, there are no guarantees around how the effects of both fibers will be ordered with respect to each other; it is entirely nondeterministic.
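
As mentioned above, join returns the Outcome of the joinee. The following sketch (specialized to IO for concreteness) spawns a fiber, waits for it, and inspects its Outcome:

import cats.effect.IO
import cats.effect.kernel.Outcome

val joined: IO[String] =
  IO.pure(42).start.flatMap { fiber =>
    fiber.join.map {
      case Outcome.Succeeded(_) => "succeeded"
      case Outcome.Errored(e)   => s"errored: $e"
      case Outcome.Canceled()   => "canceled"
    }
  }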

    Cancelation

    MonadCancel introduces a simple means of cancelation, particularly self-cancelation, where a fiber can request the abnormal termination of its own execution. This is achieved by calling canceled.

GenSpawn expands on the cancelation model described by MonadCancel by introducing a means of external cancelation. With external cancelation, a fiber can request the abnormal termination of another fiber by calling Fiber#cancel.

    The cancelation model dictates that external cancelation behaves identically to self-cancelation. To guarantee consistent behavior between the two, the following semantics are shared:

    1. Masking: if a fiber is canceled while it is masked, cancelation is suppressed until it reaches a completely unmasked state. See MonadCancel documentation for more details.
    2. Backpressure: cancel semantically blocks all callers until finalization is complete.
    3. Idempotency: once a fiber's cancelation has been requested, subsequent cancelations have no effect on cancelation status.
4. Terminal: Cancelation of a fiber that has already terminated returns immediately.

    External cancelation contrasts with self-cancelation in one aspect: the former may require synchronization between multiple threads to communicate a cancelation request. As a result, cancelation may not be immediately observed by a fiber. Implementations are free to decide how and when this synchronization takes place.

    Cancelation safety

    A function or effect is considered to be cancelation-safe if it can be run in the absence of masking without violating effectful lifecycles or leaking resources. These functions require extra attention and care from users to ensure safe usage.

    start and racePair are both considered to be cancelation-unsafe effects because they return a Fiber, which is a resource that has a lifecycle.

    // Start a fiber that continuously prints "A".
    // After 10 seconds, cancel the fiber.
    F.start(F.delay(println("A")).foreverM).flatMap { fiber =>
      F.sleep(10.seconds) *> fiber.cancel
    }

    In the above example, imagine the spawning fiber is canceled after it starts the printing fiber, but before the latter is canceled. In this situation, the printing fiber is not canceled and will continue executing forever, contending with other fibers for system resources. Fiber leaks like this typically happen because some fiber that holds a reference to a child fiber is canceled before the child terminates; like threads, fibers will not automatically be cleaned up.

    Resource leaks like this are unfavorable when writing applications. In the case of start and racePair, it is recommended not to use these methods; instead, use background and race respectively.

    The following example depicts a safer version of the start example above:

// Starts a fiber that continuously prints "A".
    // After 10 seconds, the resource scope exits so the fiber is canceled.
    F.background(F.delay(println("A")).foreverM).use { _ =>
      F.sleep(10.seconds)
    }

    Scheduling

    Fibers are commonly referred to as lightweight threads or green threads. This alludes to the nature by which fibers are scheduled by runtime systems: many fibers are multiplexed onto one or more native threads.

    For applications running on the JVM, the scheduler typically manages a thread pool onto which fibers are scheduled. These fibers are executed simultaneously by the threads in the pool, achieving both concurrency and parallelism. For applications running on JavaScript platforms, all compute is restricted to a single worker thread, so multiple fibers must share that worker thread (dictated by fairness properties), achieving concurrency without parallelism.

    cede is a special function that interacts directly with the underlying scheduler. It is a means of cooperative multitasking by which a fiber signals to the runtime system that it intends to pause execution and resume at some later time at the discretion of the scheduler. This is in contrast to preemptive multitasking, in which threads of control are forcibly yielded after a well-defined time slice.

    Preemptive and cooperative multitasking are both features of runtime systems that influence the fairness and throughput properties of an application. These modes of scheduling are not necessarily mutually exclusive: a runtime system may incorporate a blend of the two, where fibers can explicitly yield back to the scheduler, but the runtime preempts a fiber if it has not yielded for some time.
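
For illustration, a sketch (specialized to IO for concreteness) of a CPU-bound loop that cooperatively yields via cede every so often so that other fibers get a chance to run; the helper and the yield interval are arbitrary:

import cats.effect.IO

def sumTo(n: Long): IO[Long] = {
  def go(i: Long, acc: Long): IO[Long] =
    if (i > n) IO.pure(acc)
    else {
      val next = IO.defer(go(i + 1, acc + i)) // defer keeps the recursion stack-safe
      if (i % 100000 == 0) IO.cede >> next else next
    }
  go(1L, 0L)
}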

    For more details on schedulers, see the following resources:

    1. https://gist.github.com/djspiewak/3ac3f3f55a780e8ab6fa2ca87160ca40
    2. https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c
  12. trait GenTemporal[F[_], E] extends GenConcurrent[F, E] with Clock[F]

    A typeclass that encodes the notion of suspending fibers for a given duration. Analogous to Thread.sleep but is only fiber blocking rather than blocking an underlying OS pthread.
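
For illustration, a minimal sketch of delaying an action with sleep; only the calling fiber is suspended while sleeping:

import scala.concurrent.duration._
import cats.effect.kernel.GenTemporal
import cats.syntax.all._

def delayed[F[_], E, A](fa: F[A])(implicit F: GenTemporal[F, E]): F[A] =
  F.sleep(1.second) *> fa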

  13. trait MonadCancel[F[_], E] extends MonadError[F, E]

    A typeclass that characterizes monads which support safe cancelation, masking, and finalization. MonadCancel extends the capabilities of cats.MonadError, so an instance of this typeclass must also provide a lawful instance for cats.MonadError.

    Fibers

    A fiber is a sequence of effects which are bound together by flatMap. The execution of a fiber of an effect F[E, A] terminates with one of three outcomes, which are encoded by the datatype Outcome:

    1. Outcome.Succeeded: indicates success with a value of type A
    2. Outcome.Errored: indicates failure with a value of type E
    3. Outcome.Canceled: indicates abnormal termination

    Additionally, a fiber may never produce an outcome, in which case it is said to be non-terminating.

    Cancelation

    Cancelation refers to the act of requesting that the execution of a fiber be abnormally terminated. MonadCancel exposes a means of self-cancelation, with which a fiber can request that its own execution be terminated. Self-cancelation is achieved via canceled.

    Cancelation is vaguely similar to the short-circuiting behavior introduced by cats.MonadError, but there are several key differences:

    1. Cancelation is effective; if it is observed it must be respected, and it cannot be reversed. In contrast, handleError exposes the ability to catch and recover from errors, and then proceed with normal execution.
2. Cancelation can be masked via MonadCancel#uncancelable. Masking is discussed in the next section.
    3. GenSpawn introduces external cancelation, another cancelation mechanism by which fibers can be canceled by external parties.

    Masking

    Masking allows a fiber to suppress cancelation for a period of time, which is achieved via uncancelable. If a fiber is canceled while it is masked, the cancelation is suppressed for as long as the fiber remains masked. Once the fiber reaches a completely unmasked state, it responds to the cancelation.

    While a fiber is masked, it may optionally unmask by "polling", rendering itself cancelable again.

    F.uncancelable { poll =>
      // can only observe cancelation within `fb`
      fa *> poll(fb) *> fc
    }

    These semantics allow users to precisely mark what regions of code are cancelable within a larger code block.

    Cancelation Boundaries

A boundary corresponds to an iteration of the internal runloop. In general they are introduced by any of the combinators from the cats/cats-effect hierarchy (map, flatMap, handleErrorWith, attempt, etc.).

    A cancelation boundary is a boundary where the cancelation status of a fiber may be checked and hence cancelation observed. Note that in general you cannot guarantee that cancelation will be observed at a given boundary. However, in the absence of masking it will be observed eventually.

With a small number of exceptions covered below, all boundaries are cancelable boundaries, i.e. cancelation may be observed before the invocation of any combinator.

    fa
      .flatMap(f)
      .handleErrorWith(g)
      .map(h)

    If the fiber above is canceled then the cancelation status may be checked and the execution terminated between any of the combinators.

    As noted above, there are some boundaries which are not cancelable boundaries:

    1. Any boundary inside uncancelable and not inside poll. This is the definition of masking as above.
    F.uncancelable( _ =>
      fa
        .flatMap(f)
        .handleErrorWith(g)
        .map(h)
    )

    None of the boundaries above are cancelation boundaries as cancelation is masked.

    2. The boundary after uncancelable

    F.uncancelable(poll => foo(poll)).flatMap(f)

    It is guaranteed that we will not observe cancelation after uncancelable and hence flatMap(f) will be invoked. This is necessary for uncancelable to compose. Consider for example Resource#allocated

    def allocated[B >: A](implicit F: MonadCancel[F, Throwable]): F[(B, F[Unit])]

which returns a tuple of the resource and a finalizer which needs to be invoked to clean up once the resource is no longer needed. The implementation of allocated can make sure it is safe by appropriate use of uncancelable. However, if it were possible to observe cancelation on the boundary directly after allocated then we would have a leak, as the caller would be unable to ensure that the finalizer is invoked. In other words, the safety of allocated and the safety of f would not guarantee the safety of the composition allocated.flatMap(f).

    This does however mean that we violate the functor law that fa.map(identity) <-> fa as

    F.uncancelable(_ => fa).onCancel(fin)  <-!-> F.uncancelable(_ => fa).map(identity).onCancel(fin)

    as cancelation may be observed before the onCancel on the RHS. The justification is that cancelation is a hint rather than a mandate and so enshrining its behaviour in laws will always be awkward. Given this, it is better to pick a semantic that allows safe composition of regions.

    3. The boundary after poll

    F.uncancelable(poll => poll(fa).flatMap(f))

If fa completes successfully then cancelation may not be observed after poll but before flatMap. The reasoning is similar to above: if fa has successfully produced a value, then the caller should have the opportunity to observe the value and ensure finalizers are in place, etc.

    Finalization

    Finalization refers to the act of running finalizers in response to a cancelation. Finalizers are those effects whose evaluation is guaranteed in the event of cancelation. After a fiber has completed finalization, it terminates with an outcome of Canceled.

    Finalizers can be registered to a fiber for the duration of some effect via onCancel. If a fiber is canceled while running that effect, the registered finalizer is guaranteed to be run before terminating.
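
For illustration, a minimal sketch (specialized to IO for concreteness): the finalizer registered via onCancel runs only if the sleep is canceled while it is in progress:

import scala.concurrent.duration._
import cats.effect.IO

val guarded: IO[Unit] =
  IO.sleep(10.seconds).onCancel(IO.println("canceled while sleeping"))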

    Bracket pattern

    The aforementioned concepts work together to unlock a powerful pattern for safely interacting with effectful lifecycles: the bracket pattern. This is analogous to the try-with-resources/finally construct in Java.

    A lifecycle refers to a pair of actions, which are called the acquisition action and the release action respectively. The relationship between these two actions is that if the former completes successfully, then the latter is guaranteed to be run eventually, even in the presence of errors and cancelation. While the lifecycle is active, other work can be performed, but this invariant is always respected.

    The bracket pattern is an invaluable tool for safely handling resource lifecycles. Imagine an application that opens network connections to a database server to do work. If a task in the application is canceled while it holds an open database connection, the connection would never be released or returned to a pool, causing a resource leak.

    To illustrate the compositional nature of MonadCancel and its combinators, the implementation of bracket is shown below:

    def bracket[A, B](acquire: F[A])(use: A => F[B])(release: A => F[Unit]): F[B] =
      uncancelable { poll =>
        flatMap(acquire) { a =>
          val finalized = onCancel(poll(use(a)), release(a).uncancelable)
          val handled = onError(finalized) { case e => void(attempt(release(a).uncancelable)) }
          flatMap(handled)(b => as(attempt(release(a).uncancelable), b))
        }
      }

    See bracketCase and bracketFull for other variants of the bracket pattern. If more specialized behavior is necessary, it is recommended to use uncancelable and onCancel directly.
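
For illustration, a usage sketch of bracket (specialized to IO for concreteness) that guarantees a reader is closed whether the body succeeds, fails, or is canceled:

import java.io.{BufferedReader, FileReader}
import cats.effect.IO

def firstLine(path: String): IO[String] =
  IO.blocking(new BufferedReader(new FileReader(path)))
    .bracket(in => IO.blocking(in.readLine()))(in => IO.blocking(in.close()))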

  14. type MonadCancelThrow[F[_]] = MonadCancel[F, Throwable]
  15. sealed trait Outcome[F[_], E, A] extends Product with Serializable

    Represents the result of the execution of a fiber. It may terminate in one of 3 states:

    1. Succeeded(fa) The fiber completed with a value.

    A commonly asked question is why this wraps a value of type F[A] rather than one of type A. This is to support monad transformers. Consider

    val oc: OutcomeIO[Int] =
      for {
        fiber <- Spawn[OptionT[IO, *]].start(OptionT.none[IO, Int])
        oc <- fiber.join
      } yield oc

    If the fiber succeeds then there is no value of type Int to be wrapped in Succeeded, hence Succeeded contains a value of type OptionT[IO, Int] instead.

    In general you can assume that binding on the value of type F[A] contained in Succeeded does not perform further effects. In the case of IO that means that the outcome has been constructed as Outcome.Succeeded(IO.pure(result)).

    2. Errored(e) The fiber exited with an error.

    3. Canceled() The fiber was canceled, either externally or self-canceled via MonadCancel[F]#canceled.

  16. type ParallelF[F[_], A] = T[F, A]
  17. trait Poll[F[_]] extends ~>[F, F]
  18. abstract class Ref[F[_], A] extends RefSource[F, A] with RefSink[F, A]

    A thread-safe, concurrent mutable reference.

    Provides safe concurrent access and modification of its content, but no functionality for synchronisation, which is instead handled by Deferred. For this reason, a Ref is always initialised to a value.

    The default implementation is nonblocking and lightweight, consisting essentially of a purely functional wrapper over an AtomicReference. Consequently it must not be used to store mutable data as AtomicReference#compareAndSet and friends are dependent upon object reference equality.

    See also cats.effect.std.AtomicCell class from cats-effect-std for an alternative that ensures exclusive access and effectual updates.

    If your contents are an immutable Map[K, V], and all your operations are per-key, consider using cats.effect.std.MapRef.
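
For illustration, a minimal sketch (specialized to IO for concreteness) of a concurrent counter backed by Ref:

import cats.effect.IO
import cats.effect.kernel.Ref

val counted: IO[Int] =
  for {
    ref <- Ref[IO].of(0)     // initialise the reference to 0
    _   <- ref.update(_ + 1) // atomic, lock-free update
    _   <- ref.update(_ + 1)
    n   <- ref.get
  } yield n // 2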

  19. trait RefSink[F[_], A] extends Serializable
  20. trait RefSource[F[_], A] extends Serializable
  21. sealed abstract class Resource[F[_], +A] extends Serializable

    Resource is a data structure which encodes the idea of executing an action which has an associated finalizer that needs to be run when the action completes.

    Examples include scarce resources like files, which need to be closed after use, or concurrent abstractions like locks, which need to be released after having been acquired.

    There are several constructors to allocate a resource, the most common is make:

    def open(file: File): Resource[IO, BufferedReader] = {
      val openFile = IO(new BufferedReader(new FileReader(file)))
      Resource.make(acquire = openFile)(release = f => IO(f.close))
    }

    and several methods to consume a resource, the most common is use:

    def readFile(file: BufferedReader): IO[Content]
    
    open(file1).use(readFile)

    Finalisation (in this case file closure) happens when the action passed to use terminates. Therefore, the code above is _not_ equivalent to:

    open(file1).use(IO.pure).flatMap(readFile)

which will instead result in an error, since the file gets closed after pure, meaning that readFile will then fail.

    Also note that a _new_ resource is allocated every time use is called, so the following code opens and closes the resource twice:

    val file: Resource[IO, File]
    file.use(read) >> file.use(read)

    If you want sharing, pass the result of allocating the resource around, and call use once.

    file.use { file => read(file) >> read(file) }

    The acquire and release actions passed to make are not interruptible, and release will run when the action passed to use succeeds, fails, or is interrupted. You can use makeCase to specify a different release logic depending on each of the three outcomes above.

It is also possible to specify an interruptible acquire through makeFull, but be warned that this is an advanced concurrency operation, which requires some care.

    Resource usage nests:

    open(file1).use { in1 =>
      open(file2).use { in2 =>
        readFiles(in1, in2)
      }
    }

    However, it is more idiomatic to compose multiple resources together before use, exploiting the fact that Resource forms a Monad, and therefore that resources can be nested through flatMap. Nested resources are released in reverse order of acquisition. Outer resources are released even if an inner use or release fails.

    def mkResource(s: String) = {
  val acquire = IO(println(s"Acquiring $s")) *> IO.pure(s)
  def release(s: String) = IO(println(s"Releasing $s"))
      Resource.make(acquire)(release)
    }
    
    val r = for {
      outer <- mkResource("outer")
    
      inner <- mkResource("inner")
    } yield (outer, inner)
    
    r.use { case (a, b) =>
      IO(println(s"Using $$a and $$b"))
    }

    On evaluation the above prints:

    Acquiring outer
    Acquiring inner
    Using outer and inner
    Releasing inner
    Releasing outer

    A Resource can also lift arbitrary actions that don't require finalisation through eval. Actions passed to eval preserve their interruptibility.

    Finally, Resource partakes in other abstractions such as MonadError, Parallel, and Monoid, so make sure to explore those instances as well as the other methods not covered here.

    Resource is encoded as a data structure, an ADT, described by the following node types:

    Normally users don't need to care about these node types, unless conversions from Resource into something else is needed (e.g. conversion from Resource into a streaming data type), in which case they can be interpreted through pattern matching.

Type parameters

F: the effect type in which the resource is allocated and released
A: the type of resource

  22. type Spawn[F[_]] = GenSpawn[F, Throwable]
  23. trait Sync[F[_]] extends MonadCancel[F, Throwable] with Clock[F] with Unique[F] with Defer[F]

    A typeclass that encodes the notion of suspending synchronous side effects in the F[_] context
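
For illustration, two minimal sketches: delay suspends a cheap side effect, while blocking marks one that may block the underlying thread:

import cats.effect.kernel.Sync

def currentNanos[F[_]](implicit F: Sync[F]): F[Long] =
  F.delay(System.nanoTime())

def readLine[F[_]](implicit F: Sync[F]): F[String] =
  F.blocking(scala.io.StdIn.readLine())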

  24. type Temporal[F[_]] = GenTemporal[F, Throwable]
  25. trait Unique[F[_]] extends Serializable

Value Members

  1. val Concurrent: GenConcurrent.type
  2. val MonadCancelThrow: MonadCancel.type
  3. val ParallelF: kernel.Par.ParallelF.type
  4. val Spawn: GenSpawn.type
  5. val Temporal: GenTemporal.type
  6. object Async extends Serializable
  7. object CancelScope extends Serializable
  8. object Clock extends Serializable
  9. object Deferred extends Serializable
  10. object DeferredSink extends Serializable
  11. object DeferredSource extends Serializable
  12. object GenConcurrent extends Serializable
  13. object GenSpawn extends Serializable
  14. object GenTemporal extends Serializable
  15. object MonadCancel extends Serializable
  16. object Outcome extends LowPriorityImplicits with Serializable
  17. object Par
  18. object Ref extends Serializable
  19. object RefSink extends Serializable
  20. object RefSource extends Serializable
  21. object Resource extends ResourceFOInstances0 with ResourceHOInstances0 with ResourcePlatform
  22. object Sync extends Serializable
  23. object Unique extends Serializable
  24. object implicits extends AllSyntax with AllInstances
