Testkit

Table of contents

  1. Overview
  2. Installation
  3. Choosing a Layer
  4. Basic Usage
    1. Creating a Test Layer
    2. Creating a Provider Directly
  5. Managing Flags
    1. Setting Flags
    2. Replacing All Flags
    3. Removing Flags
  6. Tracking Evaluations
    1. Check If Flag Was Evaluated
    2. Count Evaluations
    3. Get All Evaluations
    4. Clear Evaluation History
  7. Provider Status
    1. Managing Status
    2. Emitting Events
  8. Behavior Controls
    1. Imperative API
    2. TestAspect API
  9. Testing Patterns
    1. Simple Flag Testing
    2. Testing Multiple Scenarios
    3. Verifying Flag Usage
    4. Testing Context Propagation
    5. Using Transactions for Override Testing
    6. Testing Async Initialization
    7. Simulating Real Async Init
  10. Test Isolation
    1. Automatic Isolation
  11. Best Practices
    1. Use Descriptive Flag Names
    2. Create Test Fixtures
    3. Verify Expected Evaluations
    4. Test Edge Cases

Overview

The testkit module provides TestFeatureProvider, an in-memory OpenFeature provider designed for testing. It allows you to:

  • Pre-configure flag values
  • Dynamically update flags during tests
  • Track which flags were evaluated
  • Verify evaluation counts and contexts

The TestFeatureProvider implements the OpenFeature FeatureProvider interface, so it works seamlessly with the ZIO OpenFeature layer system.


Installation

The artifact is published to Maven Central. Add it to your build.sbt:

libraryDependencies += "io.github.etacassiopeia" %% "zio-openfeature-testkit" % "<version>" % Test

Choosing a Layer

  • layer(flags): starts Ready. Use for most tests; flags work immediately.
  • scopedLayer(flags): starts Ready. Same as layer, but with a self-contained scope.
  • asyncLayer(flags): starts NotReady. Use to test startup/initialization behavior; requires a manual setStatus call.
  • asyncReadyLayer(flags, delay): starts NotReady, then transitions to Ready after the delay. Simulates real async init without manual status management.

Rule of thumb: Use layer unless you specifically need to test how your code handles a provider that isn’t ready yet.
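A quick sketch of the rule of thumb (the flag names here are illustrative):

```scala
import zio.*
import zio.openfeature.testkit.*

// Most tests: provider starts Ready, so flags resolve immediately
val readyLayer = TestFeatureProvider.layer(Map("feature" -> true))

// Startup-behavior tests: provider stays NotReady until setStatus is called
val notReadyLayer = TestFeatureProvider.asyncLayer(Map("feature" -> true))

// Simulated async init: NotReady at first, auto-transitions to Ready
val autoReadyLayer = TestFeatureProvider.asyncReadyLayer(
  Map("feature" -> true),
  initDelay = 100.millis
)
```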


Basic Usage

Creating a Test Layer

The simplest way to use the testkit is with TestFeatureProvider.layer:

import zio.*
import zio.test.*
import zio.openfeature.*
import zio.openfeature.testkit.*

// Create layer with initial flags
val testLayer = TestFeatureProvider.layer(Map(
  "feature-a" -> true,
  "feature-b" -> "variant-1",
  "max-items" -> 100
))

// Use in tests
val test = for
  result <- FeatureFlags.boolean("feature-a", false)
yield assertTrue(result == true)

test.provide(Scope.default >>> testLayer)

Creating a Provider Directly

For more control, create the provider directly:

for
  provider <- TestFeatureProvider.make(Map(
    "feature" -> true,
    "variant" -> "control"
  ))
  // Use provider methods directly
  _ <- provider.setFlag("new-flag", "value")
yield ()

Managing Flags

Setting Flags

for
  provider <- TestFeatureProvider.make(Map.empty)
  _        <- provider.setFlag("new-flag", true)
  _        <- provider.setFlag("count", 42)
  _        <- provider.setFlag("name", "test")
yield ()

Replacing All Flags

provider.setFlags(Map(
  "flag-1" -> true,
  "flag-2" -> "value"
))
// Previous flags are removed

Removing Flags

// Remove single flag
provider.removeFlag("flag-to-remove")

// Clear all flags
provider.clearFlags

Tracking Evaluations

Check If Flag Was Evaluated

for
  provider <- TestFeatureProvider.make(Map("feature" -> true))
  layer     = TestFeatureProvider.layerFrom(provider)
  _        <- FeatureFlags.boolean("feature", false).provide(Scope.default >>> layer)
  was      <- provider.wasEvaluated("feature")
  wasNot   <- provider.wasEvaluated("other-flag")
yield assertTrue(was) && assertTrue(!wasNot)

Count Evaluations

for
  provider <- TestFeatureProvider.make(Map("feature" -> true))
  layer     = TestFeatureProvider.layerFrom(provider)
  _        <- FeatureFlags.boolean("feature", false).provide(Scope.default >>> layer)
  _        <- FeatureFlags.boolean("feature", false).provide(Scope.default >>> layer)
  _        <- FeatureFlags.boolean("feature", false).provide(Scope.default >>> layer)
  count    <- provider.evaluationCount("feature")
yield assertTrue(count == 3)

Get All Evaluations

for
  provider <- TestFeatureProvider.make(Map("flag-a" -> true, "flag-b" -> "value"))
  layer     = TestFeatureProvider.layerFrom(provider)
  _        <- FeatureFlags.boolean("flag-a", false, EvaluationContext("user-1"))
               .provide(Scope.default >>> layer)
  _        <- FeatureFlags.string("flag-b", "", EvaluationContext("user-2"))
               .provide(Scope.default >>> layer)
  evals    <- provider.getEvaluations
yield
  // evals is List[(String, dev.openfeature.sdk.EvaluationContext)]
  // The context is the OpenFeature SDK's EvaluationContext (after conversion)
  assertTrue(evals.length == 2)

Clear Evaluation History

provider.clearEvaluations

Provider Status

Managing Status

When using TestFeatureProvider.layer, the provider starts in Ready status. You can change the status for testing different scenarios:

for
  provider <- ZIO.service[TestFeatureProvider]
  initial  <- provider.status                    // Ready (after layer creation)
  _        <- provider.setStatus(ProviderStatus.Error)
  error    <- provider.status
  _        <- provider.setStatus(ProviderStatus.Stale)
  stale    <- provider.status
yield
  assertTrue(initial == ProviderStatus.Ready) &&
  assertTrue(error == ProviderStatus.Error) &&
  assertTrue(stale == ProviderStatus.Stale)

The setStatus method updates both the ZIO status and the underlying OpenFeature provider state.

Emitting Events

// Simple event
provider.emitEvent(ProviderEvent.ConfigurationChanged(
  Set("flag-1", "flag-2"),
  provider.metadata
))

// Event with metadata
provider.emitEvent(ProviderEvent.ConfigurationChanged(
  Set("flag-1"),
  provider.metadata,
  FlagMetadata.fromStrings("source" -> "webhook")
))

Behavior Controls

Simulate real-world failure modes like slow responses, intermittent failures, and specific error types. Useful for testing timeouts, circuit breakers, and fallback logic.

Imperative API

for
  tp <- ZIO.service[TestFeatureProvider]
  // Simulate network latency
  _  <- tp.setDelay(200.millis)
  // Make all evaluations fail
  _  <- tp.setFailing(true)
  // Simulate specific error types
  _  <- tp.setErrorMode(TestFeatureProvider.ErrorMode.FlagNotFound)
  // Simulate flaky service (30% failure rate)
  _  <- tp.setFailureProbability(0.3)
  // Reset everything
  _  <- tp.clearBehavior
yield ()

Available error modes: FlagNotFound, ParseError, TypeMismatch, ProviderNotReady, General.

Provider exceptions are caught by the Java SDK and returned as default-valued resolutions with error codes. Use booleanDetails (or other *Details methods) to inspect the error code:

for
  _          <- tp.setErrorMode(TestFeatureProvider.ErrorMode.FlagNotFound)
  resolution <- FeatureFlags.booleanDetails("flag", default = false)
  // resolution.errorCode == Some(ErrorCode.FlagNotFound)
  // resolution.value == false (the default)
yield resolution
The one exception is ErrorMode.ProviderNotReady: instead of producing a default-valued resolution, it propagates as a ZIO-level FeatureFlagError.ProviderNotReady failure.

TestAspect API

For cleaner test setup/teardown, use ZIO test aspects. Behavior is set before the test and cleaned up after:

test("handles slow provider") {
  for
    result <- FeatureFlags.boolean("flag", false).timeout(100.millis)
  yield assertTrue(result.isEmpty)
} @@ TestFeatureProvider.withDelay(500.millis)

test("handles provider failures") {
  for
    resolution <- FeatureFlags.booleanDetails("flag", default = false)
  yield assertTrue(resolution.errorCode.isDefined)
} @@ TestFeatureProvider.withFailures

Available aspects:

  • TestFeatureProvider.withDelay(d): adds a delay of d before each evaluation
  • TestFeatureProvider.withFailures: every evaluation fails with a general error
  • TestFeatureProvider.withErrorMode(mode): every evaluation fails with the given error mode
  • TestFeatureProvider.withFailureProbability(p): each evaluation fails randomly with probability p (0.0 to 1.0)

Aspects require TestFeatureProvider in the environment. Apply .provide(testLayer) at the suite level when using aspects on individual tests.
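Putting this together, a suite can share one test layer while each test applies its own behavior aspect. A sketch (the spec name and flag name are illustrative):

```scala
import zio.*
import zio.test.*
import zio.openfeature.*
import zio.openfeature.testkit.*

object BehaviorControlSpec extends ZIOSpecDefault:
  // One layer for the whole suite; aspects tweak behavior per test
  val testLayer = Scope.default >>> TestFeatureProvider.layer(Map("flag" -> true))

  def spec = suite("behavior controls")(
    test("evaluation times out under injected latency") {
      for result <- FeatureFlags.boolean("flag", false).timeout(100.millis)
      yield assertTrue(result.isEmpty)
    } @@ TestFeatureProvider.withDelay(500.millis),

    test("evaluation reports an error code under failure mode") {
      for resolution <- FeatureFlags.booleanDetails("flag", default = false)
      yield assertTrue(resolution.errorCode.isDefined)
    } @@ TestFeatureProvider.withFailures
  ).provide(testLayer)
```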


Testing Patterns

Simple Flag Testing

import zio.test.*
import zio.openfeature.*
import zio.openfeature.testkit.*

object MyServiceSpec extends ZIOSpecDefault:
  def spec = suite("MyService")(
    test("shows premium content for premium users") {
      val testLayer = TestFeatureProvider.layer(Map(
        "premium-content" -> true
      ))

      for
        result <- MyService.getContent("user-123")
      yield assertTrue(result.hasPremiumContent)
    }.provide(
      MyService.live,
      Scope.default >>> testLayer
    )
  )

Testing Multiple Scenarios

def testWithFlags[R, E, A](flags: Map[String, Any])(
  test: ZIO[R & FeatureFlags, E, A]
): ZIO[R, E, A] =
  // provideSome keeps the remaining environment R intact
  test.provideSome[R](Scope.default >>> TestFeatureProvider.layer(flags))

suite("Feature variations")(
  test("enabled") {
    testWithFlags(Map("feature" -> true)) {
      for result <- myLogic yield assertTrue(result.featureEnabled)
    }
  },
  test("disabled") {
    testWithFlags(Map("feature" -> false)) {
      for result <- myLogic yield assertTrue(!result.featureEnabled)
    }
  }
)

Verifying Flag Usage

test("service evaluates expected flags") {
  for
    provider <- TestFeatureProvider.make(Map(
      "feature-a" -> true,
      "feature-b" -> "variant"
    ))
    layer     = TestFeatureProvider.layerFrom(provider)
    _        <- MyService.doSomething.provide(Scope.default >>> layer)
    wasA     <- provider.wasEvaluated("feature-a")
    wasB     <- provider.wasEvaluated("feature-b")
    wasC     <- provider.wasEvaluated("feature-c")
  yield
    assertTrue(wasA) &&
    assertTrue(wasB) &&
    assertTrue(!wasC)  // Should not evaluate feature-c
}

Testing Context Propagation

The getEvaluations method returns OpenFeature SDK contexts (after conversion from ZIO contexts). You can verify that context attributes were correctly propagated:

test("context is passed to provider") {
  val ctx = EvaluationContext("user-123")
    .withAttribute("plan", "premium")

  for
    provider <- TestFeatureProvider.make(Map("feature" -> true))
    layer     = TestFeatureProvider.layerFrom(provider)
    _        <- FeatureFlags.boolean("feature", false, ctx)
                 .provide(Scope.default >>> layer)
    evals    <- provider.getEvaluations
    (_, sdkCtx) = evals.head
  yield
    // sdkCtx is dev.openfeature.sdk.EvaluationContext (Java SDK type)
    assertTrue(sdkCtx.getTargetingKey == "user-123") &&
    assertTrue(sdkCtx.getValue("plan") != null)
}

Using Transactions for Override Testing

Combine testkit with transactions for fine-grained control:

test("feature logic with overrides") {
  val baseLayer = TestFeatureProvider.layer(Map(
    "feature-a" -> true,
    "feature-b" -> false
  ))

  // Test with base values
  val baseTest = for
    a <- FeatureFlags.boolean("feature-a", false)
    b <- FeatureFlags.boolean("feature-b", false)
  yield assertTrue(a == true) && assertTrue(b == false)

  // Test with overrides
  val overrideTest = FeatureFlags.transaction(Map("feature-b" -> true)) {
    for
      a <- FeatureFlags.boolean("feature-a", false)
      b <- FeatureFlags.boolean("feature-b", false)
    yield assertTrue(a == true) && assertTrue(b == true)
  }

  (baseTest *> overrideTest.map(_.result)).provide(Scope.default >>> baseLayer)
}

Testing Async Initialization

Use TestFeatureProvider.asyncLayer to test how your code handles a provider that isn’t ready yet:

test("service handles provider not ready") {
  for
    result <- MyService.getFeature.either
  yield assertTrue(result.isLeft)  // Fails with ProviderNotReady
}.provide(Scope.default >>> TestFeatureProvider.asyncLayer(Map("feature" -> true)))

test("service works after provider becomes ready") {
  for
    tp     <- ZIO.service[TestFeatureProvider]
    _      <- tp.setStatus(ProviderStatus.Ready)
    result <- MyService.getFeature
  yield assertTrue(result == true)
}.provide(Scope.default >>> TestFeatureProvider.asyncLayer(Map("feature" -> true)))

The asyncLayer creates a provider that starts in NotReady state. Call setStatus(ProviderStatus.Ready) to simulate the provider becoming ready. This is useful for testing graceful degradation and startup behavior.

Simulating Real Async Init

If you don’t need to test the NotReady state directly, use asyncReadyLayer which auto-transitions to Ready after a configurable delay:

test("service works with async provider") {
  for
    _      <- ZIO.sleep(200.millis) // Wait for auto-init
    result <- MyService.getFeature
  yield assertTrue(result == true)
}.provide(Scope.default >>> TestFeatureProvider.asyncReadyLayer(
  Map("feature" -> true),
  initDelay = 100.millis
))

This simulates a real provider (e.g., Optimizely connecting to its server) without requiring manual setStatus calls in every test.


Test Isolation

Automatic Isolation

TestFeatureProvider.layer, asyncLayer, and layerFrom each create an isolated OpenFeatureAPI instance with its own provider repository and event support. This means tests using these layers can run in parallel without cross-test contamination — no extra configuration needed.

// These tests run in parallel safely — each gets its own isolated API instance
test("test 1") {
  for result <- FeatureFlags.boolean("flag", false)
  yield assertTrue(result == true)
}.provide(Scope.default >>> TestFeatureProvider.layer(Map("flag" -> true)))

test("test 2") {
  for result <- FeatureFlags.boolean("flag", false)
  yield assertTrue(result == false)
}.provide(Scope.default >>> TestFeatureProvider.layer(Map("flag" -> false)))

If you need to access both the provider and the FeatureFlags service (e.g. to track evaluations or emit events), use layerFrom:

test("tracks evaluations") {
  for
    provider <- TestFeatureProvider.make(Map("flag" -> true))
    layer     = TestFeatureProvider.layerFrom(provider)
    _        <- FeatureFlags.boolean("flag", false).provide(Scope.default >>> layer)
    was      <- provider.wasEvaluated("flag")
  yield assertTrue(was)
}

Note: The public factory methods (FeatureFlags.fromProvider, fromMultiProvider, etc.) use the global OpenFeatureAPI singleton and are not isolated. If you test with these directly, use @@ TestAspect.sequential to prevent conflicts.
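A sketch of that workaround, assuming fromProvider accepts any FeatureProvider and yields a FeatureFlags layer:

```scala
// These tests go through the global OpenFeatureAPI singleton, so force
// them to run one at a time to avoid provider-registration conflicts.
suite("global singleton tests")(
  test("reads a flag through the global API") {
    for
      provider <- TestFeatureProvider.make(Map("flag" -> true))
      result   <- FeatureFlags.boolean("flag", false)
                    .provide(Scope.default >>> FeatureFlags.fromProvider(provider))
    yield assertTrue(result)
  }
) @@ TestAspect.sequential
```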


Best Practices

1. Use Descriptive Flag Names

val testLayer = TestFeatureProvider.layer(Map(
  "premium-feature-enabled" -> true,
  "max-upload-size-mb" -> 100,
  "checkout-variant" -> "new"
))

2. Create Test Fixtures

object TestFixtures:
  val premiumUser = TestFeatureProvider.layer(Map(
    "premium" -> true,
    "max-items" -> 1000
  ))

  val freeUser = TestFeatureProvider.layer(Map(
    "premium" -> false,
    "max-items" -> 10
  ))

// Usage
test("premium user behavior") {
  myTest.provide(Scope.default >>> TestFixtures.premiumUser)
}

3. Verify Expected Evaluations

Use wasEvaluated for cleaner flag usage assertions:

test("service only evaluates necessary flags") {
  for
    provider <- TestFeatureProvider.make(Map(
      "needed-flag" -> true,
      "unneeded-flag" -> true
    ))
    layer        = TestFeatureProvider.layerFrom(provider)
    _           <- myService.provide(Scope.default >>> layer)
    wasNeeded   <- provider.wasEvaluated("needed-flag")
    wasUnneeded <- provider.wasEvaluated("unneeded-flag")
  yield
    assertTrue(wasNeeded) &&
    assertTrue(!wasUnneeded)
}

4. Test Edge Cases

suite("edge cases")(
  test("handles missing flag") {
    val layer = TestFeatureProvider.layer(Map.empty)

    FeatureFlags.boolean("missing", false)
      .map(result => assertTrue(result == false))
      .provide(Scope.default >>> layer)
  },
  test("handles type mismatch") {
    val layer = TestFeatureProvider.layer(Map("flag" -> "string"))

    FeatureFlags.boolean("flag", false)
      .map(result => assertTrue(result == false))  // Uses default
      .provide(Scope.default >>> layer)
  }
)

Copyright © 2026 Mohsen Zainalpour. Distributed under the Apache 2.0 license.