
Function to optimize the features of a mlr3::Learner. The function internally creates a FSelectInstanceSingleCrit or FSelectInstanceMultiCrit which describes the feature selection problem. It executes the feature selection with the FSelector (method) and returns the result with the fselect instance ($result). The ArchiveFSelect ($archive) stores all evaluated feature sets and performance scores.


fselect(
  method,
  task,
  learner,
  resampling,
  measures = NULL,
  term_evals = NULL,
  term_time = NULL,
  terminator = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  callbacks = list(),
  ...
)



method
(character(1) | FSelector)
Key to retrieve the fselector from the mlr_fselectors dictionary or an FSelector object.


task
(mlr3::Task)
Task to operate on.


learner
(mlr3::Learner)
Learner to optimize the feature subset for.


resampling
(mlr3::Resampling)
Resampling that is used to evaluate the performance of the feature subsets. Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits. Already instantiated resamplings are kept unchanged.


measures
(mlr3::Measure or list of mlr3::Measure)
A single measure creates a FSelectInstanceSingleCrit and multiple measures a FSelectInstanceMultiCrit. If NULL, the default measure is used.


term_evals
(integer(1))
Number of allowed evaluations.


term_time
(integer(1))
Maximum allowed time in seconds.


terminator
(Terminator)
Stopping criterion of the feature selection.


store_benchmark_result
(logical(1))
Store benchmark result in archive?


store_models
(logical(1))
Store models in benchmark result?


check_values
(logical(1))
Check the parameters before the evaluation and the results for validity?


callbacks
(list of CallbackFSelect)
List of callbacks.


...
(named list())
Named arguments to be set as parameters of the fselector.


The mlr3::Task, mlr3::Learner, mlr3::Resampling, mlr3::Measure and Terminator are used to construct a FSelectInstanceSingleCrit. If multiple performance Measures are supplied, a FSelectInstanceMultiCrit is created. The parameters term_evals and term_time are shortcuts to create a Terminator. If both parameters are passed, a TerminatorCombo is constructed. For other Terminators, pass one with terminator. If no termination criterion is needed, set term_evals, term_time and terminator to NULL.
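As a sketch of the two ways to specify termination (assuming the mlr3 and mlr3fselect packages are attached; the specific task, learner and budget values are illustrative):

```r
library(mlr3)
library(mlr3fselect)

# Shortcut: passing both term_evals and term_time
# constructs a TerminatorCombo internally
instance = fselect(
  method = "random_search",
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 10,
  term_time = 60
)

# Explicit: pass any Terminator object via `terminator` instead
instance = fselect(
  method = "random_search",
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("run_time", secs = 60)
)
```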



For analyzing the feature selection results, it is recommended to pass the archive to as.data.table(). The returned data table is joined with the benchmark result, which adds the mlr3::ResampleResult for each feature set.

The archive provides various getters (e.g. $learners()) to ease access. All getters extract by position (i) or unique hash (uhash). For a complete list of all getters see the methods section.
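For illustration, a few archive accessors (a sketch; assumes a finished fselect() run stored in `instance`):

```r
# Convert the archive to a data.table for analysis;
# the benchmark result is joined in automatically
results = as.data.table(instance$archive)

# Access the fitted learners of the first evaluation by position ...
instance$archive$learners(i = 1)

# ... or retrieve the corresponding resample result
instance$archive$resample_result(i = 1)
```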

The benchmark result ($benchmark_result) allows scoring the feature sets again on a different measure. Alternatively, measures can be supplied to as.data.table().
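Both rescoring routes can be sketched as follows (assuming a finished run in `instance`; the extra measures are illustrative):

```r
# Score the stored resample results on an additional measure
instance$archive$benchmark_result$score(msr("classif.acc"))

# Or supply extra measures when converting the archive
as.data.table(instance$archive, measures = msrs(c("classif.acc", "classif.fpr")))
```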


# Feature selection on the Pima Indians Diabetes data set
task = tsk("pima")
learner = lrn("classif.rpart")

# Run feature selection
instance = fselect(
  method = "random_search",
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 4)

# Subset task to optimized feature set
task$select(instance$result_feature_names)

# Train the learner with optimal feature set on the full data set
learner$train(task)

# Inspect all evaluated configurations
as.data.table(instance$archive)
#>      age glucose insulin  mass pedigree pregnant pressure triceps classif.ce
#> 1: FALSE   FALSE   FALSE FALSE    FALSE     TRUE    FALSE   FALSE  0.3320312
#> 2: FALSE   FALSE    TRUE FALSE    FALSE    FALSE    FALSE    TRUE  0.3203125
#> 3:  TRUE    TRUE   FALSE FALSE    FALSE    FALSE     TRUE   FALSE  0.2304688
#> 4:  TRUE    TRUE    TRUE  TRUE     TRUE     TRUE     TRUE    TRUE  0.2578125
#>    runtime_learners           timestamp batch_nr warnings errors
#> 1:            0.012 2023-01-27 13:40:34        1        0      0
#> 2:            0.012 2023-01-27 13:40:34        2        0      0
#> 3:            0.014 2023-01-27 13:40:34        3        0      0
#> 4:            0.015 2023-01-27 13:40:34        4        0      0
#>         resample_result
#> 1: <ResampleResult[21]>
#> 2: <ResampleResult[21]>
#> 3: <ResampleResult[21]>
#> 4: <ResampleResult[21]>