Usage

fsi(
  task,
  learner,
  resampling,
  measures = NULL,
  terminator,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  callbacks = list()
)

Arguments

task

(mlr3::Task)
Task to operate on.

learner

(mlr3::Learner)
Learner to optimize the feature subset for.

resampling

(mlr3::Resampling)
Resampling that is used to evaluate the performance of the feature subsets. Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits. Already instantiated resamplings are kept unchanged.
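A minimal sketch of the instantiation behavior described above (assuming mlr3 is installed; the task choice is illustrative):

```r
library(mlr3)

task = tsk("penguins")

# An uninstantiated resampling is instantiated by fsi() during
# construction, so every feature subset sees the same data splits
resampling = rsmp("cv", folds = 3)

# Instantiating manually beforehand has the same effect; fsi() keeps
# an already instantiated resampling unchanged
resampling$instantiate(task)
resampling$is_instantiated
```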

measures

(mlr3::Measure or list of mlr3::Measure)
A single measure creates a FSelectInstanceSingleCrit; a list of measures creates a FSelectInstanceMultiCrit. If NULL, the default measure for the task type is used (see the Default Measures section).
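For example, passing a list of measures yields a multi-criteria instance (a sketch; the particular measure combination is illustrative, and mlr3fselect must be loaded):

```r
library(mlr3)
library(mlr3fselect)

# Two measures create a FSelectInstanceMultiCrit instead of a
# FSelectInstanceSingleCrit
instance = fsi(
  task = tsk("penguins"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("cv", folds = 3),
  measures = msrs(c("classif.ce", "classif.acc")),
  terminator = trm("evals", n_evals = 4)
)
```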

terminator

(Terminator)
Stopping criterion of the feature selection.
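Terminators from bbotk can stop the search on different conditions; a sketch of two common choices (the budget values are illustrative):

```r
library(mlr3fselect)

# Stop after a fixed number of evaluated feature subsets
term_evals = trm("evals", n_evals = 20)

# Stop after a fixed wall-clock budget in seconds
term_time = trm("run_time", secs = 60)
```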

store_benchmark_result

(logical(1))
Store benchmark result in archive?

store_models

(logical(1))
Store models in benchmark result?

check_values

(logical(1))
Check the parameters for validity before evaluation and check the results for validity?

callbacks

(list of CallbackFSelect)
List of callbacks.

Resources

There are several sections about feature selection in the mlr3book.

The gallery features a collection of case studies and demos about optimization.

Default Measures

If no measure is passed, the default measure is used. The default measure depends on the task type.

Task          Default Measure   Package
"classif"     "classif.ce"      mlr3
"regr"        "regr.mse"        mlr3
"surv"        "surv.cindex"     mlr3proba
"dens"        "dens.logloss"    mlr3proba
"classif_st"  "classif.ce"      mlr3spatial
"regr_st"     "regr.mse"        mlr3spatial
"clust"       "clust.dunn"      mlr3cluster

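A sketch of relying on the default measure (here classification error for a classification task; assumes mlr3fselect is loaded):

```r
library(mlr3)
library(mlr3fselect)

# With measures = NULL (the default), the task type's default measure
# is used -- classif.ce for a classification task
instance = fsi(
  task = tsk("penguins"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  terminator = trm("evals", n_evals = 2)
)
```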
Examples

# Feature selection on Palmer Penguins data set
# \donttest{

task = tsk("penguins")
learner = lrn("classif.rpart")

# Construct feature selection instance
instance = fsi(
  task = task,
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 4)
)

# Choose optimization algorithm
fselector = fs("random_search", batch_size = 2)

# Run feature selection
fselector$optimize(instance)
#>    bill_depth bill_length body_mass flipper_length island  sex year
#> 1:       TRUE        TRUE      TRUE           TRUE   TRUE TRUE TRUE
#>                                                          features classif.ce
#> 1: bill_depth,bill_length,body_mass,flipper_length,island,sex,... 0.06399695

# Subset task to optimal feature set
task$select(instance$result_feature_set)

# Train the learner with optimal feature set on the full data set
learner$train(task)

# Inspect all evaluated sets
as.data.table(instance$archive)
#>    bill_depth bill_length body_mass flipper_length island   sex year classif.ce
#> 1:      FALSE       FALSE      TRUE          FALSE   TRUE FALSE TRUE 0.22946860
#> 2:       TRUE       FALSE      TRUE           TRUE   TRUE  TRUE TRUE 0.15692855
#> 3:       TRUE        TRUE     FALSE          FALSE  FALSE  TRUE TRUE 0.07564200
#> 4:       TRUE        TRUE      TRUE           TRUE   TRUE  TRUE TRUE 0.06399695
#>    runtime_learners           timestamp batch_nr warnings errors
#> 1:            0.021 2023-03-21 15:16:07        1        0      0
#> 2:            0.021 2023-03-21 15:16:07        1        0      0
#> 3:            0.022 2023-03-21 15:16:08        2        0      0
#> 4:            0.023 2023-03-21 15:16:08        2        0      0
#>                                                          features
#> 1:                                          body_mass,island,year
#> 2:            bill_depth,body_mass,flipper_length,island,sex,year
#> 3:                                bill_depth,bill_length,sex,year
#> 4: bill_depth,bill_length,body_mass,flipper_length,island,sex,...
#>         resample_result
#> 1: <ResampleResult[21]>
#> 2: <ResampleResult[21]>
#> 3: <ResampleResult[21]>
#> 4: <ResampleResult[21]>
# }