Available algorithms
AdaBoostClassifier
- class optunaz.config.optconfig.AdaBoostClassifier(name, parameters)[source]
AdaBoost Classifier.
An AdaBoost classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.
- class Parameters(n_estimators=AdaBoostClassifier.Parameters.AdaBoostClassifierParametersNEstimators(low=3, high=100), learning_rate=AdaBoostClassifier.Parameters.AdaBoostClassifierParametersLearningRate(low=1.0, high=1.0))[source]
- Parameters:
n_estimators (AdaBoostClassifierParametersNEstimators) – The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early. - title: n_estimators
learning_rate (AdaBoostClassifierParametersLearningRate) – Weight applied to each classifier at each boosting iteration. A higher learning rate increases the contribution of each classifier. There is a trade-off between the learning_rate and n_estimators parameters. - title: learning_rate
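The nested range classes in the signature above define the Optuna search space. A minimal configuration sketch based on that signature (the name value mirrors the class name, following the default documented for XGBRegressor below, and is an assumption here):

    from optunaz.config.optconfig import AdaBoostClassifier

    # Search n_estimators in [3, 100]; the learning_rate range is widened from
    # its degenerate documented default of [1.0, 1.0] so there is room to tune.
    params = AdaBoostClassifier.Parameters(
        n_estimators=AdaBoostClassifier.Parameters.AdaBoostClassifierParametersNEstimators(
            low=3, high=100
        ),
        learning_rate=AdaBoostClassifier.Parameters.AdaBoostClassifierParametersLearningRate(
            low=0.1, high=1.0
        ),
    )
    algo = AdaBoostClassifier(name="AdaBoostClassifier", parameters=params)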
Lasso
- class optunaz.config.optconfig.Lasso(name, parameters)[source]
Lasso regression.
Lasso is a Linear Model trained with L1 prior as regularizer.
The Lasso is a linear model that estimates sparse coefficients. It tends to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features upon which the given solution is dependent.
- class Parameters(alpha=Lasso.Parameters.LassoParametersAlpha(low=0.0, high=2.0))[source]
- Parameters:
alpha (LassoParametersAlpha) – Constant that multiplies the L1 term, controlling regularization strength. alpha must be a non-negative float i.e. in [0, inf). When alpha = 0, the objective is equivalent to ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Instead, you should use the LinearRegression object. - title: Alpha
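A configuration sketch following the same pattern; the lower bound is kept above zero because, as noted above, alpha = 0 is not advised with the Lasso solver (the name string is an assumption):

    from optunaz.config.optconfig import Lasso

    params = Lasso.Parameters(
        alpha=Lasso.Parameters.LassoParametersAlpha(low=0.01, high=2.0)
    )
    algo = Lasso(name="Lasso", parameters=params)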
KNeighborsClassifier
- class optunaz.config.optconfig.KNeighborsClassifier(name, parameters)[source]
KNeighborsClassifier.
Classifier implementing the k-nearest neighbors vote.
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples is a user-defined constant for k-nearest neighbor learning. Despite its simplicity, nearest neighbors is successful in a large number of classification problems.
- class Parameters(n_neighbors=KNeighborsClassifier.Parameters.KNeighborsClassifierParametersN_Neighbors(low=1, high=10), weights, metric)[source]
- Parameters:
n_neighbors (KNeighborsClassifierParametersN_Neighbors) – Number of neighbors to use by default for kneighbors queries. - title: N Neighbors
weights (List) – Weight function used in prediction. - title: Weights
metric (List) – Metric to use for distance computation. The default of “minkowski” results in the standard Euclidean distance. - title: Metric
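Here weights and metric are categorical lists rather than numeric ranges. A sketch using commonly used scikit-learn option names; the specific strings accepted by this config are an assumption:

    from optunaz.config.optconfig import KNeighborsClassifier

    params = KNeighborsClassifier.Parameters(
        n_neighbors=KNeighborsClassifier.Parameters.KNeighborsClassifierParametersN_Neighbors(
            low=1, high=10
        ),
        weights=["uniform", "distance"],  # assumed option names (scikit-learn's)
        metric=["minkowski"],             # the default named in the description above
    )
    algo = KNeighborsClassifier(name="KNeighborsClassifier", parameters=params)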
KNeighborsRegressor
- class optunaz.config.optconfig.KNeighborsRegressor(name, parameters)[source]
KNeighborsRegressor.
Regression based on the k-nearest neighbors of each query point.
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the target from these. The number of samples is a user-defined constant for k-nearest neighbor learning. Despite its simplicity, nearest neighbors is successful in a large number of regression problems.
- class Parameters(n_neighbors=KNeighborsRegressor.Parameters.KNeighborsRegressorParametersN_Neighbors(low=1, high=10), weights, metric)[source]
- Parameters:
n_neighbors (KNeighborsRegressorParametersN_Neighbors) – Number of neighbors to use by default for kneighbors queries. - title: N Neighbors
weights (List) – Weight function used in prediction. - title: Weights
metric (List) – Metric to use for distance computation. The default of “minkowski” results in the standard Euclidean distance. - title: Metric
LogisticRegression
- class optunaz.config.optconfig.LogisticRegression(name, parameters)[source]
Logistic Regression classifier.
Logistic regression, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.
- class Parameters(solver, C=LogisticRegression.Parameters.LogisticRegressionParametersParameterC(low=1.0, high=1.0))[source]
- Parameters:
solver (List) – List of solvers to try. Note that the fast convergence of ‘sag’ and ‘saga’ is only guaranteed on features with approximately the same scale; you can preprocess the data with a scaler. - title: Solver
C (LogisticRegressionParametersParameterC) – Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. - title: C
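A sketch combining a categorical solver list with a numeric C range; the solver strings are scikit-learn's and are an assumption for this config, and the C range is widened from the degenerate documented default of [1.0, 1.0]:

    from optunaz.config.optconfig import LogisticRegression

    params = LogisticRegression.Parameters(
        solver=["lbfgs", "saga"],  # assumed: scikit-learn solver names
        C=LogisticRegression.Parameters.LogisticRegressionParametersParameterC(
            low=0.01, high=100.0
        ),
    )
    algo = LogisticRegression(name="LogisticRegression", parameters=params)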
PLSRegression
- class optunaz.config.optconfig.PLSRegression(name, parameters)[source]
PLS regression (Cross decomposition using partial least squares).
PLS is a form of regularized linear regression where the number of components controls the strength of the regularization.
Cross decomposition algorithms find the fundamental relations between two matrices (X and Y). They are latent variable approaches to modeling the covariance structures in these two spaces. They will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. In other words, PLS projects both X and Y into a lower-dimensional subspace such that the covariance between transformed(X) and transformed(Y) is maximal.
- class Parameters(n_components=PLSRegression.Parameters.NComponents(low=2, high=5))[source]
- Parameters:
n_components (NComponents) – Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)]. - title: n_components
RandomForestClassifier
- class optunaz.config.optconfig.RandomForestClassifier(name, parameters)[source]
Random Forest classifier.
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
- class Parameters(max_depth=RandomForestClassifier.Parameters.RandomForestClassifierParametersMaxDepth(low=2, high=32), n_estimators=RandomForestClassifier.Parameters.RandomForestClassifierParametersNEstimators(low=10, high=250), max_features)[source]
- Parameters:
max_depth (RandomForestClassifierParametersMaxDepth) – The maximum depth of the tree. - title: max_depth
n_estimators (RandomForestClassifierParametersNEstimators) – The number of trees in the forest. - title: n_estimators
max_features (List) – The number of features to consider when looking for the best split: - If “auto”, then max_features=n_features. - If “sqrt”, then max_features=sqrt(n_features). - If “log2”, then max_features=log2(n_features). - title: max_features
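A sketch using the documented default ranges plus the categorical max_features choices listed above (the name string is an assumption):

    from optunaz.config.optconfig import RandomForestClassifier

    params = RandomForestClassifier.Parameters(
        max_depth=RandomForestClassifier.Parameters.RandomForestClassifierParametersMaxDepth(
            low=2, high=32
        ),
        n_estimators=RandomForestClassifier.Parameters.RandomForestClassifierParametersNEstimators(
            low=10, high=250
        ),
        max_features=["auto", "sqrt", "log2"],
    )
    algo = RandomForestClassifier(name="RandomForestClassifier", parameters=params)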
RandomForestRegressor
- class optunaz.config.optconfig.RandomForestRegressor(name, parameters)[source]
Random Forest regression.
A random forest is a meta estimator that fits a number of decision tree regressors on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
- class Parameters(max_depth=RandomForestRegressor.Parameters.RandomForestRegressorParametersMaxDepth(low=2, high=32), n_estimators=RandomForestRegressor.Parameters.RandomForestRegressorParametersNEstimators(low=10, high=250), max_features)[source]
- Parameters:
max_depth (RandomForestRegressorParametersMaxDepth) – The maximum depth of the tree. - title: max_depth
n_estimators (RandomForestRegressorParametersNEstimators) – The number of trees in the forest. - title: n_estimators
max_features (List) – The number of features to consider when looking for the best split: - If “auto”, then max_features=n_features. - If “sqrt”, then max_features=sqrt(n_features). - If “log2”, then max_features=log2(n_features). - title: max_features
Ridge
- class optunaz.config.optconfig.Ridge(name, parameters)[source]
Ridge Regression (Linear least squares with l2 regularization).
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization.
SVC
- class optunaz.config.optconfig.SVC(name, parameters)[source]
SVC classifier (C-Support Vector Classification).
The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples.
- class Parameters(C=SVC.Parameters.SVCParametersParameterC(low=1e-10, high=100.0), gamma=SVC.Parameters.Gamma(low=0.0001, high=100.0))[source]
- Parameters:
C (SVCParametersParameterC) – Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. - title: C
gamma (Gamma) – Kernel coefficient. - title: gamma
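A sketch using the documented default bounds. Since C spans twelve orders of magnitude, the optimiser presumably samples it on a log scale, though that is not stated here (the name string is an assumption):

    from optunaz.config.optconfig import SVC

    params = SVC.Parameters(
        C=SVC.Parameters.SVCParametersParameterC(low=1e-10, high=100.0),
        gamma=SVC.Parameters.Gamma(low=1e-4, high=100.0),
    )
    algo = SVC(name="SVC", parameters=params)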
SVR
- class optunaz.config.optconfig.SVR(name, parameters)[source]
SVR regression (Epsilon-Support Vector Regression).
The implementation is based on libsvm. The fit time complexity is more than quadratic with the number of samples, which makes it hard to scale to datasets with more than a couple of tens of thousands of samples.
- class Parameters(C=SVR.Parameters.SVRParametersParameterC(low=1e-10, high=100.0), gamma=SVR.Parameters.SVRParametersGamma(low=0.0001, high=100.0))[source]
- Parameters:
C (SVRParametersParameterC) – Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. - title: C
gamma (SVRParametersGamma) – Kernel coefficient. - title: gamma
XGBRegressor
- class optunaz.config.optconfig.XGBRegressor(parameters=XGBRegressor.Parameters(max_depth=XGBRegressor.Parameters.MaxDepth(low=2, high=32), n_estimators=XGBRegressor.Parameters.NEstimators(low=10, high=250), learning_rate=XGBRegressor.Parameters.LearningRate(low=0.1, high=0.1)), name='XGBRegressor')[source]
XGBoost regression (gradient boosting trees algorithm).
XGBoost stands for “Extreme Gradient Boosting”, where the term “Gradient Boosting” originates from the paper Greedy Function Approximation: A Gradient Boosting Machine, by Friedman.
- class Parameters(max_depth=XGBRegressor.Parameters.MaxDepth(low=2, high=32), n_estimators=XGBRegressor.Parameters.NEstimators(low=10, high=250), learning_rate=XGBRegressor.Parameters.LearningRate(low=0.1, high=0.1))[source]
- Parameters:
max_depth (MaxDepth) – Maximum tree depth for base learners. - title: max_depth
n_estimators (NEstimators) – Number of gradient boosted trees. Equivalent to the number of boosting rounds. - title: n_estimators
learning_rate (LearningRate) – Weight applied to each tree at each boosting iteration. A higher learning rate increases the contribution of each tree. There is a trade-off between the learning_rate and n_estimators parameters. - title: learning_rate
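Unlike the entries above, the full defaults for XGBRegressor are documented in its signature, including name='XGBRegressor', so a bare instantiation is valid. A sketch that only widens the degenerate learning_rate default of [0.1, 0.1]:

    from optunaz.config.optconfig import XGBRegressor

    algo = XGBRegressor(
        parameters=XGBRegressor.Parameters(
            learning_rate=XGBRegressor.Parameters.LearningRate(low=0.01, high=0.3)
        )
    )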
PRFClassifier
- class optunaz.config.optconfig.PRFClassifier(name, parameters)[source]
PRF (Probabilistic Random Forest).
PRF can be seen as a hybrid between regression and classification algorithms. Similar to regression algorithms, PRF takes as input real-valued probabilities, usually from a Probabilistic Threshold Representation (PTR). However, similar to classification algorithms, it predicts the probability of belonging to the active or inactive class.
- class Parameters(use_py_gini=1, use_py_leafs=1, max_depth=PRFClassifier.Parameters.PRFClassifierParametersMaxDepth(low=2, high=32), n_estimators=PRFClassifier.Parameters.PRFClassifierParametersNEstimators(low=10, high=250), max_features, min_py_sum_leaf=PRFClassifier.Parameters.PRFClassifierParametersMinPySumLeaf(low=1, high=5))[source]
- Parameters:
use_py_gini (int) – The probability of y is used in GINI when this is True - minimum: 0, maximum: 1, title: Use pY GINI
use_py_leafs (int) – The probability of y is used in leaves when this is True - minimum: 0, maximum: 1, title: Use pY leafs
max_depth (PRFClassifierParametersMaxDepth) – The maximum depth of the tree. - title: max_depth
n_estimators (PRFClassifierParametersNEstimators) – The number of trees in the forest. - title: n_estimators
max_features (List) – The number of features to consider when looking for the best split: - If “auto”, then max_features=sqrt(n_features). - If “sqrt”, then max_features=sqrt(n_features). - If “log2”, then max_features=log2(n_features). - title: max_features
min_py_sum_leaf (PRFClassifierParametersMinPySumLeaf) – This parameter allows tree pruning when the propagation probability is small, thus reducing computation time. It defines the probability threshold, p_th, described in the selective propagation scheme of the original publication, “Probabilistic Random Forest: A machine learning algorithm for noisy datasets”. - title: min_py_sum_leaf
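A sketch mixing the integer flags, numeric ranges, and the categorical max_features list (the max_features string and name are assumptions):

    from optunaz.config.optconfig import PRFClassifier

    params = PRFClassifier.Parameters(
        use_py_gini=1,   # use the probability of y in the GINI criterion
        use_py_leafs=1,  # use the probability of y in the leaves
        max_depth=PRFClassifier.Parameters.PRFClassifierParametersMaxDepth(low=2, high=32),
        n_estimators=PRFClassifier.Parameters.PRFClassifierParametersNEstimators(low=10, high=250),
        max_features=["auto"],
        min_py_sum_leaf=PRFClassifier.Parameters.PRFClassifierParametersMinPySumLeaf(low=1, high=5),
    )
    algo = PRFClassifier(name="PRFClassifier", parameters=params)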
ChemPropRegressor
- class optunaz.config.optconfig.ChemPropRegressor(name, parameters)[source]
Chemprop Regressor
Chemprop is an open-source package for training deep learning models for molecular property prediction. ChemProp trains two networks: a Directed Message Passing Neural Network (D-MPNN) to encode a graph representation of molecules, and a Feed Forward Neural Network (FFNN), a standard multi-layer perceptron trained to predict the target property from the D-MPNN encoding. It was first presented in the paper “Analyzing Learned Molecular Representations for Property Prediction”. This implementation uses Optuna to optimise parameters instead of Hyperopt (as in the original implementation of ChemProp).
- class Parameters(ensemble_size=1, epochs=30, activation, aggregation, aggregation_norm=ChemPropRegressor.Parameters.ChemPropParametersAggregation_Norm(low=1, high=200, q=1), batch_size=ChemPropRegressor.Parameters.ChemPropParametersBatch_Size(low=5, high=200, q=5), depth=ChemPropRegressor.Parameters.ChemPropParametersDepth(low=2, high=6, q=1), dropout=ChemPropRegressor.Parameters.ChemPropParametersDropout(low=0.0, high=0.4, q=0.04), features_generator, ffn_hidden_size=ChemPropRegressor.Parameters.ChemPropParametersFFN_Hidden_Size(low=300, high=2400, q=100), ffn_num_layers=ChemPropRegressor.Parameters.ChemPropParametersFFN_Num_Layers(low=1, high=3, q=1), final_lr_ratio_exp=ChemPropRegressor.Parameters.ChemPropParametersFinal_Lr_Ratio_Exp(low=-4, high=0), hidden_size=ChemPropRegressor.Parameters.ChemPropParametersHidden_Size(low=300, high=2400, q=100), init_lr_ratio_exp=ChemPropRegressor.Parameters.ChemPropParametersInit_Lr_Ratio_Exp(low=-4, high=0), max_lr_exp=ChemPropRegressor.Parameters.ChemPropParametersMax_Lr_Exp(low=-6, high=-2), warmup_epochs_ratio=ChemPropRegressor.Parameters.ChemPropParametersWarmup_Epochs_Ratio(low=0.1, high=0.1, q=0.1))[source]
- Parameters:
ensemble_size (int) – Number of ensembles with different weight initialisation (provides uncertainty) - minimum: 1, maximum: 5, title: Ensemble size
epochs (int) – Number of epochs to run (increasing this will increase run time) - minimum: 4, maximum: 400, title: Epochs
activation (List) – Activation function applied to the output of the weighted sum of inputs. - title: activation
aggregation (List) – Aggregation scheme for atomic vectors into molecular vectors. - title: aggregation
aggregation_norm (ChemPropParametersAggregation_Norm) – For norm aggregation, number by which to divide summed up atomic features. - title: aggregation_norm
batch_size (ChemPropParametersBatch_Size) – How many samples per batch to load. - title: batch_size
depth (ChemPropParametersDepth) – Number of message passing steps (the distance of neighboring atoms visible when modelling). - title: depth
dropout (ChemPropParametersDropout) – Dropout probability. During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. This has proven to be an effective technique for regularization and for preventing the co-adaptation of neurons. - title: dropout
features_generator (List) – Method of generating additional features. - title: features_generator
ffn_hidden_size (ChemPropParametersFFN_Hidden_Size) – Dimensionality of hidden layers in the FFN. - title: ffn_hidden_size
ffn_num_layers (ChemPropParametersFFN_Num_Layers) – Number of layers in the FFN after D-MPNN encoding. - title: ffn_num_layers
final_lr_ratio_exp (ChemPropParametersFinal_Lr_Ratio_Exp) – The exponential for the final learning rate. - title: final_lr_ratio_exp
hidden_size (ChemPropParametersHidden_Size) – Size of the hidden bond message vectors in the D-MPNN. - title: hidden_size
init_lr_ratio_exp (ChemPropParametersInit_Lr_Ratio_Exp) – The exponential for the learning rate ratio. - title: init_lr_ratio_exp
max_lr_exp (ChemPropParametersMax_Lr_Exp) – The exponential for the maximum learning rate. - title: max_lr_exp
warmup_epochs_ratio (ChemPropParametersWarmup_Epochs_Ratio) – Ratio for the number of epochs during which learning rate increases linearly from init_lr to max_lr. Afterwards, learning rate decreases exponentially from max_lr to final_lr. - title: warmup_epochs_ratio
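ChemProp ranges carry a step size q alongside low/high. A sketch overriding two ranges and supplying the three fields that lack documented defaults; the specific strings for activation, aggregation, and features_generator are assumptions:

    from optunaz.config.optconfig import ChemPropRegressor

    params = ChemPropRegressor.Parameters(
        activation=["ReLU"],          # assumed option name
        aggregation=["mean"],         # assumed option name
        features_generator=["none"],  # assumed option name
        depth=ChemPropRegressor.Parameters.ChemPropParametersDepth(low=2, high=6, q=1),
        dropout=ChemPropRegressor.Parameters.ChemPropParametersDropout(
            low=0.0, high=0.4, q=0.04
        ),
    )
    algo = ChemPropRegressor(name="ChemPropRegressor", parameters=params)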
ChemPropClassifier
- class optunaz.config.optconfig.ChemPropClassifier(name, parameters)[source]
ChemProp classifier without Hyperopt
Chemprop is an open-source package for training deep learning models for molecular property prediction. ChemProp trains two networks: a Directed Message Passing Neural Network (D-MPNN) to encode a graph representation of molecules, and a Feed Forward Neural Network (FFNN), a standard multi-layer perceptron trained to predict the target property from the D-MPNN encoding. It was first presented in the paper “Analyzing Learned Molecular Representations for Property Prediction”. This implementation uses Optuna to optimise parameters instead of Hyperopt (as in the original implementation of ChemProp).
- class Parameters(ensemble_size=1, epochs=30, activation, aggregation, aggregation_norm=ChemPropClassifier.Parameters.ChemPropParametersAggregation_Norm(low=1, high=200, q=1), batch_size=ChemPropClassifier.Parameters.ChemPropParametersBatch_Size(low=5, high=200, q=5), depth=ChemPropClassifier.Parameters.ChemPropParametersDepth(low=2, high=6, q=1), dropout=ChemPropClassifier.Parameters.ChemPropParametersDropout(low=0.0, high=0.4, q=0.04), features_generator, ffn_hidden_size=ChemPropClassifier.Parameters.ChemPropParametersFFN_Hidden_Size(low=300, high=2400, q=100), ffn_num_layers=ChemPropClassifier.Parameters.ChemPropParametersFFN_Num_Layers(low=1, high=3, q=1), final_lr_ratio_exp=ChemPropClassifier.Parameters.ChemPropParametersFinal_Lr_Ratio_Exp(low=-4, high=0), hidden_size=ChemPropClassifier.Parameters.ChemPropParametersHidden_Size(low=300, high=2400, q=100), init_lr_ratio_exp=ChemPropClassifier.Parameters.ChemPropParametersInit_Lr_Ratio_Exp(low=-4, high=0), max_lr_exp=ChemPropClassifier.Parameters.ChemPropParametersMax_Lr_Exp(low=-6, high=-2), warmup_epochs_ratio=ChemPropClassifier.Parameters.ChemPropParametersWarmup_Epochs_Ratio(low=0.1, high=0.1, q=0.1))[source]
- Parameters:
ensemble_size (int) – Number of ensembles with different weight initialisation (provides uncertainty) - minimum: 1, maximum: 5, title: Ensemble size
epochs (int) – Number of epochs to run (increasing this will increase run time) - minimum: 4, maximum: 400, title: Epochs
activation (List) – Activation function applied to the output of the weighted sum of inputs. - title: activation
aggregation (List) – Aggregation scheme for atomic vectors into molecular vectors. - title: aggregation
aggregation_norm (ChemPropParametersAggregation_Norm) – For norm aggregation, number by which to divide summed up atomic features. - title: aggregation_norm
batch_size (ChemPropParametersBatch_Size) – How many samples per batch to load. - title: batch_size
depth (ChemPropParametersDepth) – Number of message passing steps (the distance of neighboring atoms visible when modelling). - title: depth
dropout (ChemPropParametersDropout) – Dropout probability. During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. This has proven to be an effective technique for regularization and for preventing the co-adaptation of neurons. - title: dropout
features_generator (List) – Method of generating additional features. - title: features_generator
ffn_hidden_size (ChemPropParametersFFN_Hidden_Size) – Dimensionality of hidden layers in the FFN. - title: ffn_hidden_size
ffn_num_layers (ChemPropParametersFFN_Num_Layers) – Number of layers in the FFN after D-MPNN encoding. - title: ffn_num_layers
final_lr_ratio_exp (ChemPropParametersFinal_Lr_Ratio_Exp) – The exponential for the final learning rate. - title: final_lr_ratio_exp
hidden_size (ChemPropParametersHidden_Size) – Size of the hidden bond message vectors in the D-MPNN. - title: hidden_size
init_lr_ratio_exp (ChemPropParametersInit_Lr_Ratio_Exp) – The exponential for the learning rate ratio. - title: init_lr_ratio_exp
max_lr_exp (ChemPropParametersMax_Lr_Exp) – The exponential for the maximum learning rate. - title: max_lr_exp
warmup_epochs_ratio (ChemPropParametersWarmup_Epochs_Ratio) – Ratio for the number of epochs during which learning rate increases linearly from init_lr to max_lr. Afterwards, learning rate decreases exponentially from max_lr to final_lr. - title: warmup_epochs_ratio
ChemPropHyperoptClassifier
- class optunaz.config.optconfig.ChemPropHyperoptClassifier(name, parameters)[source]
Chemprop classifier
Chemprop is an open-source package for training deep learning models for molecular property prediction. ChemProp trains two networks: a Directed Message Passing Neural Network (D-MPNN) to encode a graph representation of molecules, and a Feed Forward Neural Network (FFNN), a standard multi-layer perceptron trained to predict the target property from the D-MPNN encoding. It was first presented in the paper “Analyzing Learned Molecular Representations for Property Prediction”. This implementation uses Hyperopt to optimise network parameters within each trial, allowing Optuna to trial more complex hyperparameters, such as feature generation and side information weighting. NB: this implementation can also be used to build quick/simple ChemProp models using sensible defaults from the authors; to do this, run ChemProp with num_iters=1.
- class Parameters(ensemble_size=1, epochs=30, num_iters=1, features_generator, search_parameter_level)[source]
- Parameters:
ensemble_size (int) – Number of ensembles with different weight initialisation (provides uncertainty) - minimum: 1, maximum: 5, title: Ensemble size
epochs (int) – Number of epochs to run (increasing this will increase run time) - minimum: 4, maximum: 400, title: Epochs
num_iters (int) – Dictates the number of (Hyperopt) trials ChemProp will run - minimum: 1, maximum: 50, title: Number of HyperOpt iterations
features_generator (List) – Method of generating additional features. - title: features_generator
search_parameter_level (List) – Defines the complexity of the search space used by Hyperopt (larger=more complex). - title: search_parameter_level
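As described above, num_iters=1 yields a quick model with the authors' sensible defaults. A sketch of that configuration; the features_generator and search_parameter_level strings are assumptions:

    from optunaz.config.optconfig import ChemPropHyperoptClassifier

    params = ChemPropHyperoptClassifier.Parameters(
        ensemble_size=1,
        epochs=30,
        num_iters=1,                      # a single Hyperopt trial: quick/simple model
        features_generator=["none"],      # assumed option name
        search_parameter_level=["auto"],  # assumed option name
    )
    algo = ChemPropHyperoptClassifier(name="ChemPropHyperoptClassifier", parameters=params)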
ChemPropHyperoptRegressor
- class optunaz.config.optconfig.ChemPropHyperoptRegressor(name, parameters)[source]
Chemprop regressor
Chemprop is an open-source package for training deep learning models for molecular property prediction. ChemProp trains two networks: a Directed Message Passing Neural Network (D-MPNN) to encode a graph representation of molecules, and a Feed Forward Neural Network (FFNN), a standard multi-layer perceptron trained to predict the target property from the D-MPNN encoding. It was first presented in the paper “Analyzing Learned Molecular Representations for Property Prediction”. This implementation uses Hyperopt to optimise network parameters within each trial, allowing Optuna to trial more complex hyperparameters, such as feature generation and side information weighting. NB: this implementation can also be used to build quick/simple ChemProp models using sensible defaults from the authors; to do this, run ChemProp with num_iters=1.
- class Parameters(ensemble_size=1, epochs=30, num_iters=1, features_generator, search_parameter_level)[source]
- Parameters:
ensemble_size (int) – Number of ensembles with different weight initialisation (provides uncertainty) - minimum: 1, maximum: 5, title: Ensemble size
epochs (int) – Number of epochs to run (increasing this will increase run time) - minimum: 4, maximum: 400, title: Epochs
num_iters (int) – Dictates the number of (Hyperopt) trials ChemProp will run - minimum: 1, maximum: 50, title: Number of HyperOpt iterations
features_generator (List) – Method of generating additional features. - title: features_generator
search_parameter_level (List) – Defines the complexity of the search space used by Hyperopt (larger=more complex). - title: search_parameter_level
ChemPropRegressorPretrained
- class optunaz.config.optconfig.ChemPropRegressorPretrained(name, parameters)[source]
ChemProp regressor built from a pretrained model.
Pretraining can be carried out by supplying a previously trained QSARtuna ChemProp .pkl model.
- class Parameters(epochs=ChemPropRegressorPretrained.Parameters.ChemPropParametersEpochs(low=4, high=30, q=1), frzn, pretrained_model=None)[source]
- Parameters:
epochs (ChemPropParametersEpochs) – Number of epochs to fine-tune the pretrained model on new data. - title: epochs
frzn (List) – Decide which layers of the MPNN or FFN to freeze during transfer learning. - title: Frozen layers
pretrained_model (str) – Path to a pretrained QSARtuna pkl model - title: Pretrained Model
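A fine-tuning sketch based on the signature above; the path is a placeholder and the frzn option string is an assumption:

    from optunaz.config.optconfig import ChemPropRegressorPretrained

    params = ChemPropRegressorPretrained.Parameters(
        epochs=ChemPropRegressorPretrained.Parameters.ChemPropParametersEpochs(
            low=4, high=30, q=1
        ),
        frzn=["none"],  # assumed: freeze no layers during transfer learning
        pretrained_model="path/to/pretrained_qsartuna_model.pkl",  # placeholder path
    )
    algo = ChemPropRegressorPretrained(
        name="ChemPropRegressorPretrained", parameters=params
    )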
CalibratedClassifierCVWithVA
- class optunaz.config.optconfig.CalibratedClassifierCVWithVA(name, parameters)[source]
Calibrated Classifier.
Probability calibration with isotonic regression, logistic regression, or VennABERS.
This class uses cross-validation (cv) to both estimate the parameters of a classifier and subsequently calibrate a classifier. With default ensemble=True, for each cv split it fits a copy of the base estimator to the training subset, and calibrates it using the testing subset. For prediction, predicted probabilities are averaged across these individual calibrated classifiers. When ensemble=False, cv is used to obtain unbiased predictions which are then used for calibration. For prediction, the base estimator, trained using all the data, is used. VennABERS offers uncertainty prediction based on p0 vs. p1 discordance.
- class Parameters(estimator=typing.Union[optunaz.config.optconfig.AdaBoostClassifier, optunaz.config.optconfig.KNeighborsClassifier, optunaz.config.optconfig.LogisticRegression, optunaz.config.optconfig.RandomForestClassifier, optunaz.config.optconfig.SVC, optunaz.config.optconfig.ChemPropClassifier, optunaz.config.optconfig.ChemPropHyperoptClassifier, optunaz.config.optconfig.CustomClassificationModel], ensemble=<CalibratedClassifierCVEnsemble.TRUE: 'True'>, method=<CalibratedClassifierCVMethod.ISOTONIC: 'isotonic'>, n_folds=2)[source]
- Parameters:
estimator (Union) – Base estimator to use for calibration. - title: Estimator
ensemble (Union) – Whether to fit a copy of the base estimator for each cv split (True), or to use cv only to obtain the unbiased predictions used for calibration (False). - title: ensemble
method (Union) – Calibration method used to obtain calibrated predictions. - title: method
n_folds (int) – Number of cv folds to obtain calibration data - minimum: 2, maximum: 5, title: Number of Cross validation folds (splits)
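A sketch wrapping a base classifier config; ensemble and method keep the enum defaults shown in the signature ('True' and 'isotonic'), and the base estimator's max_features value is an assumption:

    from optunaz.config.optconfig import (
        CalibratedClassifierCVWithVA,
        RandomForestClassifier,
    )

    base = RandomForestClassifier(
        name="RandomForestClassifier",
        parameters=RandomForestClassifier.Parameters(max_features=["auto"]),
    )
    algo = CalibratedClassifierCVWithVA(
        name="CalibratedClassifierCVWithVA",
        parameters=CalibratedClassifierCVWithVA.Parameters(
            estimator=base,
            n_folds=5,  # 2..5 cv folds are allowed for the calibration data
        ),
    )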
Mapie
- class optunaz.config.optconfig.Mapie(name, parameters)[source]
MAPIE - Model Agnostic Prediction Interval Estimator
MAPIE allows you to estimate prediction intervals for regression models. Prediction intervals output by MAPIE encompass both aleatoric and epistemic uncertainties and are backed by strong theoretical guarantees thanks to conformal prediction methods.
- class Parameters(estimator=typing.Union[optunaz.config.optconfig.Lasso, optunaz.config.optconfig.PLSRegression, optunaz.config.optconfig.RandomForestRegressor, optunaz.config.optconfig.KNeighborsRegressor, optunaz.config.optconfig.Ridge, optunaz.config.optconfig.SVR, optunaz.config.optconfig.XGBRegressor, optunaz.config.optconfig.PRFClassifier], mapie_alpha=0.05)[source]
- Parameters:
estimator (Union) – Base estimator to use. - title: Estimator
mapie_alpha (float) – Alpha used to generate uncertainty estimates - minimum: 0.01, maximum: 0.99, title: Uncertainty alpha
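A sketch wrapping a base regressor; mapie_alpha=0.05 targets 95% prediction intervals (the base estimator's max_features value is an assumption):

    from optunaz.config.optconfig import Mapie, RandomForestRegressor

    base = RandomForestRegressor(
        name="RandomForestRegressor",
        parameters=RandomForestRegressor.Parameters(max_features=["auto"]),
    )
    algo = Mapie(
        name="Mapie",
        parameters=Mapie.Parameters(estimator=base, mapie_alpha=0.05),
    )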