The AutoFSelector wraps a mlr3::Learner and augments it with automatic feature selection.
The auto_fselector() function creates an AutoFSelector object.
Details
The AutoFSelector is a mlr3::Learner which wraps another mlr3::Learner and performs the following steps during $train():
The wrapped (inner) learner is trained on feature subsets via resampling. The feature selection can be specified by providing a FSelector, a bbotk::Terminator, a mlr3::Resampling and a mlr3::Measure.
A final model is fit on the complete training data with the best-found feature subset.
During $predict(), the AutoFSelector simply calls the predict method of the wrapped (inner) learner.
Resources
There are several sections about feature selection in the mlr3book.
Estimate Model Performance with nested resampling.
The gallery features a collection of case studies and demos about optimization.
Nested Resampling
Nested resampling can be performed by passing an AutoFSelector object to mlr3::resample() or mlr3::benchmark().
To access the inner resampling results, set store_fselect_instance = TRUE and execute mlr3::resample() or mlr3::benchmark() with store_models = TRUE (see examples).
The mlr3::Resampling passed to the AutoFSelector is meant to be the inner resampling, operating on the training set of an arbitrary outer resampling.
For this reason, it is not feasible to pass an instantiated mlr3::Resampling here.
Super class
mlr3::Learner
-> AutoFSelector
Public fields
instance_args
(list())
All arguments from construction to create the FSelectInstanceBatchSingleCrit.

fselector
(FSelector)
Optimization algorithm.
Active bindings
archive
(ArchiveBatchFSelect)
Returns the archive of the FSelectInstanceBatchSingleCrit.

learner
(mlr3::Learner)
Trained learner.

fselect_instance
(FSelectInstanceBatchSingleCrit)
Internally created feature selection instance with all intermediate results.

fselect_result
(data.table::data.table)
Short-cut to $result from the FSelectInstanceBatchSingleCrit.

predict_type
(character(1))
Stores the currently active predict type, e.g. "response". Must be an element of $predict_types.

hash
(character(1))
Hash (unique identifier) for this object.

phash
(character(1))
Hash (unique identifier) for this partial object, excluding some components which are varied systematically during tuning (parameter values) or feature selection (feature names).
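After training, these bindings can be inspected directly. A brief sketch, assuming an AutoFSelector named afs that has already been trained as in the Examples section:

```r
# inspect a trained AutoFSelector (assumes `afs` was created with
# auto_fselector() and trained, as in the Examples section)
afs$learner                 # final model fit on the best feature subset
afs$fselect_result          # best feature subset and its score
as.data.table(afs$archive)  # all evaluated feature subsets as a data.table
```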
Methods
Method new()
Creates a new instance of this R6 class.
Usage
AutoFSelector$new(
fselector,
learner,
resampling,
measure = NULL,
terminator,
store_fselect_instance = TRUE,
store_benchmark_result = TRUE,
store_models = FALSE,
check_values = FALSE,
callbacks = NULL,
ties_method = "least_features",
id = NULL
)
Arguments
fselector
(FSelector)
Optimization algorithm.

learner
(mlr3::Learner)
Learner to optimize the feature subset for.

resampling
(mlr3::Resampling)
Resampling that is used to evaluate the performance of the feature subsets. Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits. Already instantiated resamplings are kept unchanged.

measure
(mlr3::Measure)
Measure to optimize. If NULL, the default measure is used.

terminator
(bbotk::Terminator)
Stop criterion of the feature selection.

store_fselect_instance
(logical(1))
If TRUE (default), stores the internally created FSelectInstanceBatchSingleCrit with all intermediate results in slot $fselect_instance. Is set to TRUE if store_models = TRUE.

store_benchmark_result
(logical(1))
Store benchmark result in archive?

store_models
(logical(1))
Store models in benchmark result?

check_values
(logical(1))
Check the parameters before the evaluation and the results for validity?

callbacks
(list of CallbackBatchFSelect)
List of callbacks.

ties_method
(character(1))
The method to break ties when selecting sets while optimizing and when selecting the best set. Can be "least_features" or "random". The option "least_features" (default) selects the feature set with the least features. If there are multiple best feature sets with the same number of features, one is selected randomly. The "random" method returns a random feature set from the best feature sets. Ignored if multiple measures are used.

id
(character(1))
Identifier for the new instance.
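For completeness, a minimal sketch of direct construction via $new(); the auto_fselector() shortcut used in the Examples below wraps the same arguments:

```r
library(mlr3)
library(mlr3fselect)

# direct construction of an AutoFSelector; equivalent to the
# auto_fselector() shortcut with term_evals = 4
afs = AutoFSelector$new(
  fselector = fs("random_search"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 4)
)
```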
Method base_learner()
Extracts the base learner from nested learner objects like GraphLearner
in mlr3pipelines.
If recursive = 0, the (tuned) learner is returned.
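A short sketch, assuming a trained AutoFSelector named afs as in the Examples section:

```r
# extract the inner learner from the AutoFSelector wrapper
afs$base_learner()               # unwraps nested learner objects recursively
afs$base_learner(recursive = 0)  # returns the learner one level down
```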
Method selected_features()
The selected features of the final model. These features are selected internally by the learner.
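A one-line sketch, again assuming a trained AutoFSelector named afs as in the Examples section:

```r
# features used by the final model, as reported by the inner learner
afs$selected_features()
```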
Examples
# Automatic Feature Selection
# \donttest{
# split to train and external set
task = tsk("penguins")
split = partition(task, ratio = 0.8)
# create auto fselector
afs = auto_fselector(
fselector = fs("random_search"),
learner = lrn("classif.rpart"),
resampling = rsmp("holdout"),
measure = msr("classif.ce"),
term_evals = 4)
# optimize feature subset and fit final model
afs$train(task, row_ids = split$train)
# predict with final model
afs$predict(task, row_ids = split$test)
#> <PredictionClassif> for 69 observations:
#> row_ids truth response
#> 1 Adelie Adelie
#> 2 Adelie Adelie
#> 9 Adelie Adelie
#> --- --- ---
#> 318 Chinstrap Chinstrap
#> 334 Chinstrap Chinstrap
#> 338 Chinstrap Chinstrap
# show result
afs$fselect_result
#> bill_depth bill_length body_mass flipper_length island sex year
#> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: FALSE TRUE FALSE FALSE TRUE TRUE TRUE
#> features n_features classif.ce
#> <list> <int> <num>
#> 1: bill_length,island,sex,year 4 0.06521739
# model slot contains trained learner and fselect instance
afs$model
#> $learner
#> <LearnerClassifRpart:classif.rpart>: Classification Tree
#> * Model: rpart
#> * Parameters: xval=0
#> * Packages: mlr3, rpart
#> * Predict Types: [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: importance, missings, multiclass, selected_features,
#> twoclass, weights
#>
#> $features
#> [1] "bill_length" "island" "sex" "year"
#>
#> $fselect_instance
#> <FSelectInstanceBatchSingleCrit>
#> * State: Optimized
#> * Objective: <ObjectiveFSelectBatch:classif.rpart_on_penguins>
#> * Terminator: <TerminatorEvals>
#> * Result:
#> bill_depth bill_length body_mass flipper_length island sex year
#> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: FALSE TRUE FALSE FALSE TRUE TRUE TRUE
#> classif.ce
#> <num>
#> 1: 0.06521739
#> * Archive:
#> bill_depth bill_length body_mass flipper_length island sex year
#> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 2: FALSE TRUE FALSE FALSE TRUE TRUE TRUE
#> 3: FALSE FALSE FALSE TRUE FALSE FALSE FALSE
#> 4: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 5: TRUE TRUE TRUE TRUE FALSE TRUE TRUE
#> 6: FALSE TRUE TRUE TRUE FALSE FALSE TRUE
#> 7: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 8: TRUE TRUE FALSE FALSE FALSE FALSE TRUE
#> 9: TRUE FALSE FALSE FALSE FALSE FALSE FALSE
#> 10: TRUE FALSE TRUE FALSE FALSE TRUE FALSE
#> classif.ce
#> <num>
#> 1: 0.09782609
#> 2: 0.06521739
#> 3: 0.25000000
#> 4: 0.09782609
#> 5: 0.09782609
#> 6: 0.09782609
#> 7: 0.09782609
#> 8: 0.07608696
#> 9: 0.29347826
#> 10: 0.20652174
#>
# shortcut trained learner
afs$learner
#> <LearnerClassifRpart:classif.rpart>: Classification Tree
#> * Model: rpart
#> * Parameters: xval=0
#> * Packages: mlr3, rpart
#> * Predict Types: [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: importance, missings, multiclass, selected_features,
#> twoclass, weights
# shortcut fselect instance
afs$fselect_instance
#> <FSelectInstanceBatchSingleCrit>
#> * State: Optimized
#> * Objective: <ObjectiveFSelectBatch:classif.rpart_on_penguins>
#> * Terminator: <TerminatorEvals>
#> * Result:
#> bill_depth bill_length body_mass flipper_length island sex year
#> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: FALSE TRUE FALSE FALSE TRUE TRUE TRUE
#> classif.ce
#> <num>
#> 1: 0.06521739
#> * Archive:
#> bill_depth bill_length body_mass flipper_length island sex year
#> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 2: FALSE TRUE FALSE FALSE TRUE TRUE TRUE
#> 3: FALSE FALSE FALSE TRUE FALSE FALSE FALSE
#> 4: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 5: TRUE TRUE TRUE TRUE FALSE TRUE TRUE
#> 6: FALSE TRUE TRUE TRUE FALSE FALSE TRUE
#> 7: TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> 8: TRUE TRUE FALSE FALSE FALSE FALSE TRUE
#> 9: TRUE FALSE FALSE FALSE FALSE FALSE FALSE
#> 10: TRUE FALSE TRUE FALSE FALSE TRUE FALSE
#> classif.ce
#> <num>
#> 1: 0.09782609
#> 2: 0.06521739
#> 3: 0.25000000
#> 4: 0.09782609
#> 5: 0.09782609
#> 6: 0.09782609
#> 7: 0.09782609
#> 8: 0.07608696
#> 9: 0.29347826
#> 10: 0.20652174
# Nested Resampling
afs = auto_fselector(
fselector = fs("random_search"),
learner = lrn("classif.rpart"),
resampling = rsmp("holdout"),
measure = msr("classif.ce"),
term_evals = 4)
resampling_outer = rsmp("cv", folds = 3)
rr = resample(task, afs, resampling_outer, store_models = TRUE)
# retrieve inner feature selection results.
extract_inner_fselect_results(rr)
#> iteration bill_depth bill_length body_mass flipper_length island sex
#> <int> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl> <lgcl>
#> 1: 1 FALSE TRUE FALSE TRUE FALSE FALSE
#> 2: 2 FALSE TRUE TRUE TRUE FALSE FALSE
#> 3: 3 TRUE TRUE FALSE FALSE TRUE TRUE
#> year classif.ce features n_features
#> <lgcl> <num> <list> <int>
#> 1: FALSE 0.03947368 bill_length,flipper_length 2
#> 2: TRUE 0.06493506 bill_length,body_mass,flipper_length,year 4
#> 3: FALSE 0.09210526 bill_depth,bill_length,island,sex 4
#> task_id learner_id resampling_id
#> <char> <char> <char>
#> 1: penguins classif.rpart.fselector cv
#> 2: penguins classif.rpart.fselector cv
#> 3: penguins classif.rpart.fselector cv
# performance scores estimated on the outer resampling
rr$score()
#> task_id learner_id resampling_id iteration classif.ce
#> <char> <char> <char> <int> <num>
#> 1: penguins classif.rpart.fselector cv 1 0.06086957
#> 2: penguins classif.rpart.fselector cv 2 0.05217391
#> 3: penguins classif.rpart.fselector cv 3 0.07894737
#> Hidden columns: task, learner, resampling, prediction_test
# unbiased performance of the final model trained on the full data set
rr$aggregate()
#> classif.ce
#> 0.06399695
# }