
Counterfactual

uncertainty_flow.counterfactual

Counterfactual explanations for uncertainty reduction.

UncertaintyExplainer

Explain uncertainty by finding minimal feature changes to reduce intervals.

Answers "what would need to change about this input for us to be more confident?" by searching for counterfactual examples that achieve target reduction in prediction interval width with minimal feature perturbations.

Parameters

model : BaseUncertaintyModel
    Fitted uncertainty model with a predict() method
confidence : float, default=0.9
    Confidence level for prediction intervals
method : {"auto", "evolutionary", "gradient"}, default="auto"
    Search strategy:
    - "auto": automatically choose based on model type
    - "evolutionary": genetic algorithm (tree-based models)
    - "gradient": gradient-based (differentiable models)
random_state : int, optional
    Random seed for reproducibility

Examples

>>> import polars as pl
>>> from uncertainty_flow.models import QuantileForestForecaster
>>> from uncertainty_flow.counterfactual import UncertaintyExplainer
>>>
>>> # Train model
>>> model = QuantileForestForecaster(targets="demand", horizon=7)
>>> model.fit(train_data)
>>>
>>> # Explain uncertainty for a prediction
>>> explainer = UncertaintyExplainer(model, random_state=42)
>>> result = explainer.explain_uncertainty(
...     X_test.head(1),
...     target_reduction=0.5,
...     feature_bounds={"temperature": (0, 40), "humidity": (0, 100)}
... )
>>>
>>> # View counterfactual explanation; shows what features to change
>>> # to reduce interval width by 50%
>>> print(result.to_polars())

Notes

Counterfactual explanations identify actionable interventions to reduce prediction uncertainty. For example:
- "If we measure temperature more precisely, demand forecast uncertainty would decrease by 40%"
- "Adding a promotion flag feature would halve our inventory uncertainty"

The search minimizes both:
1. Prediction interval width (uncertainty reduction)
2. Feature perturbation magnitude (minimal change principle)
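The two objectives above can be combined into a single scalar score. The sketch below is illustrative, not the library's actual scoring code; `counterfactual_score` and its `l1_weight` knob are hypothetical names, assuming the primary term is the shortfall against the target reduction and the secondary term is an L1 penalty on feature changes:

```python
import numpy as np

def counterfactual_score(
    original_width: float,
    new_width: float,
    changes: np.ndarray,
    target_reduction: float = 0.5,
    l1_weight: float = 0.1,
) -> float:
    """Lower is better: penalize missing the target width reduction
    and the magnitude of the feature perturbation."""
    # Proportional reduction actually achieved by the candidate
    achieved = 1.0 - new_width / original_width
    # Primary objective: shortfall relative to the target reduction
    shortfall = max(target_reduction - achieved, 0.0)
    # Secondary objective: minimal-change principle via an L1 penalty
    sparsity_penalty = l1_weight * float(np.abs(changes).sum())
    return shortfall + sparsity_penalty
```

A candidate that hits the target with no feature changes scores 0; both falling short of the target and moving many features push the score up, which is what lets a single search rank candidates on both criteria at once.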

Source code in uncertainty_flow/counterfactual/explainer.py
class UncertaintyExplainer:
    """
    Explain uncertainty by finding minimal feature changes to reduce intervals.

    Answers "what would need to change about this input for us to be more confident?"
    by searching for counterfactual examples that achieve target reduction in
    prediction interval width with minimal feature perturbations.

    Parameters
    ----------
    model : BaseUncertaintyModel
        Fitted uncertainty model with predict() method
    confidence : float, default=0.9
        Confidence level for prediction intervals
    method : {"auto", "evolutionary", "gradient"}, default="auto"
        Search strategy:
        - "auto": Automatically choose based on model type
        - "evolutionary": Genetic algorithm (tree-based models)
        - "gradient": Gradient-based (differentiable models)
    random_state : int, optional
        Random seed for reproducibility

    Examples
    --------
    >>> import polars as pl
    >>> from uncertainty_flow.models import QuantileForestForecaster
    >>> from uncertainty_flow.counterfactual import UncertaintyExplainer
    >>>
    >>> # Train model
    >>> model = QuantileForestForecaster(targets="demand", horizon=7)
    >>> model.fit(train_data)
    >>>
    >>> # Explain uncertainty for a prediction
    >>> explainer = UncertaintyExplainer(model, random_state=42)
    >>> result = explainer.explain_uncertainty(
    ...     X_test.head(1),
    ...     target_reduction=0.5,
    ...     feature_bounds={"temperature": (0, 40), "humidity": (0, 100)}
    ... )
    >>>
    >>> # View counterfactual explanation
    >>> print(result.to_polars())
    >>> # Shows what features to change to reduce interval width by 50%

    Notes
    -----
    Counterfactual explanations identify actionable interventions to reduce
    prediction uncertainty. For example:
    - "If we measure temperature more precisely, demand forecast uncertainty
       would decrease by 40%"
    - "Adding a promotion flag feature would halve our inventory uncertainty"

    The search minimizes both:
    1. Prediction interval width (uncertainty reduction)
    2. Feature perturbation magnitude (minimal change principle)
    """

    def __init__(
        self,
        model: "BaseUncertaintyModel",
        confidence: float = 0.9,
        method: str = "auto",
        random_state: int | None = None,
    ):
        self.model = model
        self.confidence = confidence
        self.method = method
        self.random_state = random_state

        # Initialize searcher based on method
        self._searcher = self._init_searcher()

    def _init_searcher(self) -> EvolutionarySearch | GradientSearch:
        """Initialize appropriate search strategy."""
        if self.method == "auto":
            # Auto-detect based on model type
            if self._is_differentiable_model():
                return GradientSearch(
                    self.model,
                    confidence=self.confidence,
                    random_state=self.random_state,
                )
            else:
                return EvolutionarySearch(
                    self.model,
                    confidence=self.confidence,
                    random_state=self.random_state,
                )
        elif self.method == "gradient":
            return GradientSearch(
                self.model,
                confidence=self.confidence,
                random_state=self.random_state,
            )
        elif self.method == "evolutionary":
            return EvolutionarySearch(
                self.model,
                confidence=self.confidence,
                random_state=self.random_state,
            )
        else:
            raise ValueError(
                f"Invalid method: {self.method}. Must be 'auto', 'evolutionary', or 'gradient'"
            )

    def _is_differentiable_model(self) -> bool:
        """Check if model is differentiable (neural network)."""
        # Check for PyTorch models
        model_name = self.model.__class__.__name__
        if "Torch" in model_name or "Deep" in model_name:
            return True

        # Check for model attribute indicating differentiability
        if hasattr(self.model, "model") and hasattr(self.model.model, "parameters"):
            return True

        return False

    def explain_uncertainty(
        self,
        data: pl.DataFrame,
        target_reduction: float = 0.5,
        feature_bounds: dict[str, tuple[float, float]] | None = None,
        fixed_features: list[str] | None = None,
        **search_kwargs,
    ) -> SearchResult:
        """
        Find counterfactual that reduces prediction interval width.

        Searches for minimal feature changes that achieve the target reduction
        in prediction interval width.

        Args:
            data: Feature DataFrame (typically single row)
            target_reduction: Target proportional reduction in interval width (0-1)
                - 0.5 = reduce interval width by 50%
                - 0.1 = reduce interval width by 10%
            feature_bounds: Optional bounds for each feature (min, max)
                - Ensures counterfactual values stay within realistic ranges
            fixed_features: Features that should not be modified
                - Useful when only certain features can be intervened upon
            **search_kwargs: Additional arguments passed to search strategy
                - For evolutionary: population_size, n_generations, mutation_rate, etc.
                - For gradient: learning_rate, n_iterations, l1_penalty, etc.

        Returns
        -------
        SearchResult
            Counterfactual explanation with:
            - counterfactual: Counterfactual feature values
            - original: Original feature values
            - changes: Per-feature changes (counterfactual - original)
            - interval_width_reduction: Achieved proportional reduction
            - original_width: Original interval width
            - new_width: Counterfactual interval width

        Raises
        ------
        InvalidDataError
            If data is empty or has more than one row

        Examples
        --------
        >>> # Find changes to halve interval width
        >>> result = explainer.explain_uncertainty(X_test.head(1), target_reduction=0.5)
        >>>
        >>> # Find changes with custom feature bounds
        >>> result = explainer.explain_uncertainty(
        ...     X_test.head(1),
        ...     target_reduction=0.3,
        ...     feature_bounds={"price": (0, 100), "promotion": (0, 1)}
        ... )
        >>>
        >>> # Find changes while keeping certain features fixed
        >>> result = explainer.explain_uncertainty(
        ...     X_test.head(1),
        ...     target_reduction=0.4,
        ...     fixed_features=["date", "category"]
        ... )

        Notes
        -----
        The search balances two objectives:
        1. Reducing prediction interval width (uncertainty reduction)
        2. Minimizing feature perturbations (minimal change principle)

        This multi-objective optimization is handled by combining:
        - Primary objective: Width reduction (achieve target)
        - Secondary objective: L1/L2 penalties on feature changes

        For tree-based models (evolutionary search), this uses a genetic
        algorithm with tournament selection, crossover, and mutation.

        For differentiable models (gradient search), this uses gradient
        descent with L1/L2 regularization.
        """
        from ..utils.exceptions import InvalidDataError

        if data.height == 0:
            raise InvalidDataError("Cannot explain uncertainty on empty DataFrame")

        if data.height > 1:
            raise InvalidDataError(
                "explain_uncertainty expects exactly one row. "
                "Use explain_batch() for multiple samples."
            )

        return self._searcher.search(
            data,
            target_reduction=target_reduction,
            feature_bounds=feature_bounds,
            fixed_features=fixed_features,
            **search_kwargs,
        )

    def explain_batch(
        self,
        data: pl.DataFrame,
        target_reduction: float = 0.5,
        feature_bounds: dict[str, tuple[float, float]] | None = None,
        fixed_features: list[str] | None = None,
        **search_kwargs,
    ) -> list[SearchResult]:
        """
        Generate counterfactual explanations for multiple samples.

        Args:
            data: Feature DataFrame with multiple rows
            target_reduction: Target proportional reduction in interval width
            feature_bounds: Optional bounds for each feature
            fixed_features: Features that should not be modified
            **search_kwargs: Additional arguments for search strategy

        Returns
        -------
        list[SearchResult]
            List of counterfactual explanations, one per input row

        Examples
        --------
        >>> results = explainer.explain_batch(X_test.head(10), target_reduction=0.4)
        >>> for i, result in enumerate(results):
        ...     print(f"Sample {i}: {result.interval_width_reduction:.1%} reduction")
        """
        results = []
        for i in range(data.height):
            row = data[i]
            result = self._searcher.search(
                row,
                target_reduction=target_reduction,
                feature_bounds=feature_bounds,
                fixed_features=fixed_features,
                **search_kwargs,
            )
            results.append(result)

        return results

    def compare_features(
        self,
        data: pl.DataFrame,
        features: list[str],
        target_reduction: float = 0.5,
        feature_bounds: dict[str, tuple[float, float]] | None = None,
    ) -> pl.DataFrame:
        """
        Compare impact of modifying individual features on uncertainty.

        For each feature, finds counterfactual with only that feature modifiable
        (all others fixed). This identifies which features are most effective
        at reducing uncertainty.

        Args:
            data: Feature DataFrame (single row)
            features: List of features to compare
            target_reduction: Target reduction for each feature search
            feature_bounds: Bounds for feature modifications

        Returns
        -------
        pl.DataFrame
            Comparison with columns:
                - feature: Feature name
                - width_reduction: Achieved proportional reduction
                - change_magnitude: Absolute change in feature value
                - effectiveness: Reduction per unit change

        Examples
        --------
        >>> comparison = explainer.compare_features(
        ...     X_test.head(1),
        ...     features=["temperature", "humidity", "pressure"],
        ...     target_reduction=0.3
        ... )
        >>> print(comparison.sort("effectiveness", descending=True))
        """
        from ..utils.exceptions import InvalidDataError

        if data.height != 1:
            raise InvalidDataError("compare_features requires single-row DataFrame")

        results = []

        for feature in features:
            # Fix all features except this one
            other_features = [f for f in data.columns if f != feature]

            try:
                result = self._searcher.search(
                    data,
                    target_reduction=target_reduction,
                    feature_bounds=feature_bounds,
                    fixed_features=other_features,
                )

                change_magnitude = abs(result.changes.get(feature, 0))
                effectiveness = (
                    result.interval_width_reduction / (change_magnitude + 1e-8)
                    if change_magnitude > 0
                    else 0
                )

                results.append(
                    {
                        "feature": feature,
                        "width_reduction": result.interval_width_reduction,
                        "change_magnitude": change_magnitude,
                        "effectiveness": effectiveness,
                    }
                )
            except (ValueError, TypeError, RuntimeError):
                # Feature search failed; record a zero-effect row
                results.append(
                    {
                        "feature": feature,
                        "width_reduction": 0.0,
                        "change_magnitude": 0.0,
                        "effectiveness": 0.0,
                    }
                )

        return pl.DataFrame(results).sort("effectiveness", descending=True)

    def summary(self) -> dict[str, Any]:
        """
        Return summary of the explainer configuration.

        Returns
        -------
        dict
            Configuration summary with keys:
                - confidence: Confidence level for intervals
                - method: Search strategy used
                - random_state: Random seed
                - model_type: Type of underlying model
        """
        return {
            "confidence": self.confidence,
            "method": self.method,
            "random_state": self.random_state,
            "model_type": self.model.__class__.__name__,
        }

explain_uncertainty(data, target_reduction=0.5, feature_bounds=None, fixed_features=None, **search_kwargs)

Find counterfactual that reduces prediction interval width.

Searches for minimal feature changes that achieve the target reduction in prediction interval width.

Parameters:

data : DataFrame, required
    Feature DataFrame (typically single row)
target_reduction : float, default=0.5
    Target proportional reduction in interval width (0-1):
    - 0.5 = reduce interval width by 50%
    - 0.1 = reduce interval width by 10%
feature_bounds : dict[str, tuple[float, float]] | None, default=None
    Optional (min, max) bounds for each feature; ensures counterfactual values stay within realistic ranges
fixed_features : list[str] | None, default=None
    Features that should not be modified; useful when only certain features can be intervened upon
**search_kwargs
    Additional arguments passed to the search strategy:
    - evolutionary: population_size, n_generations, mutation_rate, etc.
    - gradient: learning_rate, n_iterations, l1_penalty, etc.
Returns

SearchResult
    Counterfactual explanation with:
    - counterfactual: counterfactual feature values
    - original: original feature values
    - changes: per-feature changes (counterfactual - original)
    - interval_width_reduction: achieved proportional reduction
    - original_width: original interval width
    - new_width: counterfactual interval width

Raises

InvalidDataError
    If data is empty or has more than one row

Examples
>>> # Find changes to halve interval width
>>> result = explainer.explain_uncertainty(X_test.head(1), target_reduction=0.5)
>>>
>>> # Find changes with custom feature bounds
>>> result = explainer.explain_uncertainty(
...     X_test.head(1),
...     target_reduction=0.3,
...     feature_bounds={"price": (0, 100), "promotion": (0, 1)}
... )
>>>
>>> # Find changes while keeping certain features fixed
>>> result = explainer.explain_uncertainty(
...     X_test.head(1),
...     target_reduction=0.4,
...     fixed_features=["date", "category"]
... )

Notes

The search balances two objectives:
1. Reducing prediction interval width (uncertainty reduction)
2. Minimizing feature perturbations (minimal change principle)

This multi-objective optimization is handled by combining:
- Primary objective: width reduction (achieve the target)
- Secondary objective: L1/L2 penalties on feature changes

For tree-based models (evolutionary search), this uses a genetic algorithm with tournament selection, crossover, and mutation.

For differentiable models (gradient search), this uses gradient descent with L1/L2 regularization.
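The gradient strategy can be sketched as a plain descent loop. This is a simplified stand-in, not the library's implementation: `interval_width` is a hypothetical callable mapping features to interval width, and the finite-difference gradient stands in for autograd on a differentiable model:

```python
import numpy as np

def gradient_counterfactual(
    x0: np.ndarray,
    interval_width,              # callable: feature vector -> interval width
    target_reduction: float = 0.5,
    learning_rate: float = 0.05,
    l1_penalty: float = 0.01,
    n_iterations: int = 500,
    eps: float = 1e-4,
) -> np.ndarray:
    """Descend on interval width with an L1 pull back toward x0."""
    x = x0.astype(float)
    target_width = (1.0 - target_reduction) * interval_width(x0)
    for _ in range(n_iterations):
        if interval_width(x) <= target_width:
            break  # target reduction achieved
        # Central finite-difference gradient of the interval width
        grad = np.zeros_like(x)
        for j in range(x.size):
            step = np.zeros_like(x)
            step[j] = eps
            grad[j] = (interval_width(x + step) - interval_width(x - step)) / (2 * eps)
        # Descend on width; the L1 term keeps the perturbation sparse and small
        x -= learning_rate * (grad + l1_penalty * np.sign(x - x0))
    return x
```

The `l1_penalty * np.sign(x - x0)` term is the minimal-change principle in gradient form: every step pays a fixed cost per feature being moved, so features that do not help shrink the interval drift back to their original values.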


explain_batch(data, target_reduction=0.5, feature_bounds=None, fixed_features=None, **search_kwargs)

Generate counterfactual explanations for multiple samples.

Parameters:

data : DataFrame, required
    Feature DataFrame with multiple rows
target_reduction : float, default=0.5
    Target proportional reduction in interval width
feature_bounds : dict[str, tuple[float, float]] | None, default=None
    Optional bounds for each feature
fixed_features : list[str] | None, default=None
    Features that should not be modified
**search_kwargs
    Additional arguments for the search strategy
Returns

list[SearchResult]
    List of counterfactual explanations, one per input row

Examples

>>> results = explainer.explain_batch(X_test.head(10), target_reduction=0.4)
>>> for i, result in enumerate(results):
...     print(f"Sample {i}: {result.interval_width_reduction:.1%} reduction")


compare_features(data, features, target_reduction=0.5, feature_bounds=None)

Compare impact of modifying individual features on uncertainty.

For each feature, finds counterfactual with only that feature modifiable (all others fixed). This identifies which features are most effective at reducing uncertainty.

Parameters:

data : DataFrame, required
    Feature DataFrame (single row)
features : list[str], required
    List of features to compare
target_reduction : float, default=0.5
    Target reduction for each feature search
feature_bounds : dict[str, tuple[float, float]] | None, default=None
    Bounds for feature modifications
Returns

pl.DataFrame
    Comparison with columns:
    - feature: feature name
    - width_reduction: achieved proportional reduction
    - change_magnitude: absolute change in feature value
    - effectiveness: reduction per unit change

Examples

>>> comparison = explainer.compare_features(
...     X_test.head(1),
...     features=["temperature", "humidity", "pressure"],
...     target_reduction=0.3
... )
>>> print(comparison.sort("effectiveness", descending=True))


summary()

Return summary of the explainer configuration.

Returns

dict
    Configuration summary with keys:
    - confidence: confidence level for intervals
    - method: search strategy used
    - random_state: random seed
    - model_type: type of the underlying model
