Title: Model-Based Optimization for 'mlr3' Through 'mlrMBO'
Description: The 'mlrMBO' package can ordinarily not be used for optimization within 'mlr3', because of incompatibilities of their respective class systems. 'mlrintermbo' offers a compatibility interface that provides 'mlrMBO' as an 'mlr3tuning' 'Tuner' object, for tuning of machine learning algorithms within 'mlr3', as well as a 'bbotk' 'Optimizer' object for optimization of general objective functions using the 'bbotk' black box optimization framework. The control parameters of 'mlrMBO' are faithfully reproduced as a 'paradox' 'ParamSet'.
Authors: Martin Binder [aut, cre]
Maintainer: Martin Binder <[email protected]>
License: LGPL-3
Version: 0.5.1-2
Built: 2024-11-23 06:05:45 UTC
Source: https://github.com/mb706/mlrintermbo
Creates the default mlrMBO surrogate learners as an mlr3::Learner. This imitates the behaviour of mlrMBO when no learner argument is given to mbo() / initSMBO().
makeMlr3Surrogate(is.numeric = TRUE, is.noisy = TRUE, has.dependencies = !is.numeric)
is.numeric: (logical(1)) Whether the search space is entirely numeric. If TRUE, a DiceKriging surrogate is created; otherwise a random forest ('ranger') surrogate is used. Default TRUE.

is.noisy: (logical(1)) Whether to configure the surrogate for a noisy objective function. Default TRUE.

has.dependencies: (logical(1)) Whether the search space contains dependencies between parameters, in which case imputation is added to the surrogate pipeline. Default is !is.numeric.
# DiceKriging Learner:
makeMlr3Surrogate()

# mlr3pipelines Graph: imputation %>>% 'ranger' (randomForest):
makeMlr3Surrogate(is.numeric = FALSE)

# just the 'ranger' Learner:
makeMlr3Surrogate(is.numeric = FALSE, has.dependencies = FALSE)
bbotk's OptimInstanceMultiCrit class. Re-exported since bbotk will change the name.

bbotk's OptimInstanceSingleCrit class. Re-exported since bbotk will change the name.
mlrMBO tuning object.
mlrMBO must not be loaded directly into R when using mlr3, for various reasons. TunerInterMBO and OptimizerInterMBO take care that this does not happen.
R6::R6Class object inheriting from Tuner (mlr3tuning package) or Optimizer (bbotk package).

To optimize an objective function (using the bbotk package), use the OptimizerInterMBO object, ideally obtained through the bbotk::opt() function: opt("intermbo").
To tune a machine learning method represented by an mlr3::Learner object, use the TunerInterMBO object, ideally obtained through mlr3tuning::tnr(): tnr("intermbo").
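As a minimal sketch of the tuning path described above (the task, learner, resampling, and parameter range here are illustrative choices from standard mlr3, not prescribed by this package):

```r
library("mlr3")
library("mlr3tuning")
library("paradox")

# tune the complexity parameter of a decision tree on the iris task
learner <- lrn("classif.rpart")
search_space <- ps(cp = p_dbl(lower = 1e-4, upper = 0.1))

instance <- TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 10)
)

# mlrMBO as the tuner, via the compatibility interface
tuner <- tnr("intermbo")
tuner$optimize(instance)

instance$result
```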
Both have the following optional arguments:

n.objectives :: integer(1)
Number of objectives to optimize. Default is 1 for ordinary ("single-objective") optimization, but it can be greater than 1 for multi-objective optimization. See mlrMBO::setMBOControlMultiObj() for details on multi-objective optimization in mlrMBO.
on.surrogate.error :: character(1)
What to do when fitting or predicting the surrogate model fails. One of "stop" (throw an error), "warn", and "quiet" (ignore and propose a random point). The surrogate model may fail occasionally, for example when the initial design is too small or when the objective function returns constant values. In practice this is usually safe to ignore for single iterations (i.e. "warn" or "quiet"), but be aware that MBO effectively degrades to random search when the surrogate model fails in every iteration.
The ParamSet of the optimizer / tuner reflects the possible configuration options of mlrMBO. The control parameters map directly to the arguments of mlrMBO::makeMBOControl(), mlrMBO::setMBOControlInfill(), mlrMBO::setMBOControlMultiObj(), mlrMBO::setMBOControlMultiPoint(), and mlrMBO::setMBOControlTermination().
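A short sketch of inspecting and setting these control parameters (listing parameter IDs with $ids() is standard paradox ParamSet API; final.method is the control parameter that the example below in this page also uses):

```r
library("bbotk")

optser <- opt("intermbo")

# list the control parameters reproduced from mlrMBO
head(optser$param_set$ids())

# 'final.method' maps to the corresponding makeMBOControl() argument
optser$param_set$values$final.method <- "best.predicted"
```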
library("paradox")
library("bbotk")

# silly example function: minimize x^2 for -1 < x < 1
domain <- ps(x = p_dbl(lower = -1, upper = 1))
codomain <- ps(y = p_dbl(tags = "minimize"))
objective <- ObjectiveRFun$new(function(xs) list(y = xs$x^2), domain, codomain)

# initialize instance
instance <- OptimInstanceSingleCrit$new(objective, domain, trm("evals", n_evals = 6))

# use intermbo optimizer
#
# Also warn on surrogate model errors
# (this is the default and can be omitted)
optser <- opt("intermbo", on.surrogate.error = "warn")

# optimizer has hyperparameters from mlrMBO
optser$param_set$values$final.method <- "best.predicted"

# optimization happens here.
optser$optimize(instance)

instance$result
mlr3tuning's TuningInstanceMultiCrit class. Re-exported since mlr3tuning will change the name.

mlr3tuning's TuningInstanceSingleCrit class. Re-exported since mlr3tuning will change the name.