Function for Feature Selection
Function to optimize the features of a mlr3::Learner.
The function internally creates a FSelectInstanceSingleCrit or FSelectInstanceMultiCrit which describes the feature selection problem.
It executes the feature selection with the FSelector (fselector) and returns the result with the fselect instance ($result). The ArchiveFSelect ($archive) stores all evaluated feature subsets and their performance scores.
fselect(
  fselector,
  task,
  learner,
  resampling,
  measures = NULL,
  term_evals = NULL,
  term_time = NULL,
  terminator = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  callbacks = list()
)
task (mlr3::Task)
Task to operate on.

learner (mlr3::Learner)
Learner to optimize the feature subset for.

resampling (mlr3::Resampling)
Resampling that is used to evaluate the performance of the feature subsets. Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits. Already instantiated resamplings are kept unchanged.

term_evals (integer(1))
Number of allowed evaluations. Ignored if terminator is passed.

term_time (integer(1))
Maximum allowed time in seconds. Ignored if terminator is passed.

terminator (Terminator)
Stop criterion of the feature selection.

store_benchmark_result (logical(1))
Store benchmark result in archive?

store_models (logical(1))
Store models in benchmark result?

check_values (logical(1))
Check the parameters before the evaluation and the results for validity?

callbacks (list of CallbackFSelect)
List of callbacks.
The mlr3::Task, mlr3::Learner, mlr3::Resampling, mlr3::Measure and Terminator are used to construct a FSelectInstanceSingleCrit.
If multiple performance Measures are supplied, a FSelectInstanceMultiCrit is created.
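A roughly equivalent lower-level workflow is sketched below; it assumes mlr3fselect's FSelectInstanceSingleCrit constructor and bbotk's trm() helper, and with multiple measures a FSelectInstanceMultiCrit would be constructed analogously:

library(mlr3fselect)

# Describe the feature selection problem explicitly (arguments mirror fselect())
instance = FSelectInstanceSingleCrit$new(
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 4)
)

# Run the optimization algorithm on the instance and inspect the result
fselector = fs("random_search")
fselector$optimize(instance)
instance$result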
term_evals and term_time are shortcuts to create a Terminator.
If both parameters are passed, a TerminatorCombo is constructed.
For other Terminators, pass one with terminator.
If no termination criterion is needed, set term_evals, term_time and terminator to NULL.
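The following sketch illustrates the two ways of setting the stop criterion; the terminator IDs used with trm() ("evals", "run_time") are assumed to be available from bbotk:

# Shortcuts: both term_evals and term_time are set, so a TerminatorCombo is built
instance = fselect(
  fselector = fs("random_search"),
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 20,
  term_time = 60
)

# Passing a Terminator directly instead, e.g. stop after 30 seconds
instance = fselect(
  fselector = fs("random_search"),
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("run_time", secs = 30)
)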
There are several sections about feature selection in the mlr3book.
Getting started with wrapper feature selection.
The gallery features a collection of case studies and demos about optimization.
For analyzing the feature selection results, it is recommended to pass the archive to as.data.table().
The returned data table is joined with the benchmark result, which adds the mlr3::ResampleResult for each feature set.
The archive provides various getters (e.g. $learners()) to ease the access.
All getters extract by position (i) or unique hash (uhash).
For a complete list of all getters see the methods section.
The benchmark result ($benchmark_result) allows scoring the feature sets again on a different measure.
Alternatively, measures can be supplied to as.data.table().
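A short sketch of such an analysis, assuming the instance created in the example below and the mlr3 measure IDs classif.fpr, classif.fnr and classif.acc:

# Join the archive with the benchmark result and score additional measures
as.data.table(instance$archive, measures = msrs(c("classif.fpr", "classif.fnr")))

# Re-score the stored resample results on a different measure
instance$archive$benchmark_result$score(msr("classif.acc"))

# Getters extract by position (i) or unique hash (uhash),
# e.g. the learner evaluated in the first batch
instance$archive$learners(i = 1)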
# Feature selection on the Pima Indians Diabetes data set
task = tsk("pima")
learner = lrn("classif.rpart")

# Run feature selection
instance = fselect(
  fselector = fs("random_search"),
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 4
)

# Subset task to optimized feature set
task$select(instance$result_feature_set)

# Train the learner with optimal feature set on the full data set
learner$train(task)

# Inspect all evaluated configurations
as.data.table(instance$archive)
#>      age glucose insulin  mass pedigree pregnant pressure triceps classif.ce
#> 1: FALSE   FALSE   FALSE  TRUE     TRUE    FALSE    FALSE   FALSE  0.3515625
#> 2: FALSE   FALSE   FALSE FALSE    FALSE     TRUE    FALSE   FALSE  0.3359375
#> 3: FALSE   FALSE   FALSE FALSE    FALSE     TRUE    FALSE   FALSE  0.3359375
#> 4:  TRUE    TRUE   FALSE FALSE     TRUE     TRUE     TRUE   FALSE  0.2890625
#>    runtime_learners           timestamp batch_nr warnings errors
#> 1:            0.012 2023-03-02 12:42:23        1        0      0
#> 2:            0.013 2023-03-02 12:42:23        2        0      0
#> 3:            0.011 2023-03-02 12:42:23        3        0      0
#> 4:            0.013 2023-03-02 12:42:23        4        0      0
#>                                   features  resample_result
#> 1:                            mass,pedigree <ResampleResult>
#> 2:                                 pregnant <ResampleResult>
#> 3:                                 pregnant <ResampleResult>
#> 4:   age,glucose,pedigree,pregnant,pressure <ResampleResult>