How do I use Hyperopt for distributed hyperparameter optimization?

In machine learning, finding the most appropriate model and the hyperparameters that let it fit the data well is a critical part of the overall modeling process. Various hyperparameter optimizers such as BayesianOptimization, GPyOpt, and Hyperopt are available for this task. In this article we are going to discuss the Hyperopt optimization package in Python, which can be used for hyperparameter optimization based on the Bayesian optimization technique. The main points that we will discuss here are listed below.

Table of Contents

  1. What is Hyperopt?
  2. Easy implementation of Hyperopt
  3. Model selection with hyperopt

Let’s start by understanding what hyperopt is.

What is Hyperopt?

Hyperopt is a tool for hyperparameter optimization. It helps find the best value of a scalar-valued, possibly stochastic objective function over a set of possible arguments. One of the main differences between Hyperopt and other optimizers is that many optimizers assume the input vectors are drawn from an ordinary vector space, whereas Hyperopt lets us describe the search space in a much more expressive way. We can encode more information about the domain on which the function is defined and about where we believe the best values lie, which allows Hyperopt’s algorithms to search more efficiently.

We can use different packages from the Hyperopt project for different purposes. The list of packages is as follows:

  • Hyperopt: Distributed Asynchronous Hyperparameter Optimization in Python.
  • Hyperopt-sklearn: Hyperparameter optimization for Sklearn models.
  • Hyperopt-convnet: Convolutional computer vision architectures that can be optimized by Hyperopt.
  • Hyperopt-nnet: Hyperparameter optimization for neural networks.
  • Hyperopt-gpsmbo: Gaussian process optimization algorithm for Hyperopt.

In this article we will discuss how we can use Hyperopt to perform hyperparameter optimization. Let’s start with the calling conventions that define the communication between Hyperopt, the search space, and an objective function.

With the following line of code we can install Hyperopt.

!pip install hyperopt

Since I’m using Google Colab for this article, Hyperopt comes preinstalled and ready to use. Let’s start with a simple implementation.

Easy implementation of Hyperopt

With the following lines of code we can define a search space.

from hyperopt import hp

# Define a search space: a single parameter 'x', uniform over [-10, 10]
space = hp.uniform('x', -10, 10)

With the code snippet above, we have defined a search space for a parameter labelled 'x' that is bounded between -10 and 10.

We have now defined a space in which the optimization algorithm can search for a point that is optimal for our objective function. Let’s see how to run the search in the simplest possible way.

from hyperopt import fmin, tpe

# Minimize x ** 2 over the search space defined above with the TPE algorithm
best = fmin(
    fn=lambda x: x ** 2,  # objective function to minimize
    space=space,
    algo=tpe.suggest,     # Tree-structured Parzen Estimator
    max_evals=100,        # number of evaluations to run
)
print(best)

As the code above shows, very little is needed: just an objective function and a number of evaluations. fmin returns a dictionary with the best floating-point value found for x, which should be close to 0 since we are minimizing x ** 2.

The above example is the simplest way to find an optimal value for an objective function. To make the process more transparent, Hyperopt provides a Trials object for storing statistics and diagnostic information, and the objective function can return a nested dictionary instead of a bare loss value. Two important keys of this dictionary are:

  • status: Reports the state of the evaluation, either 'ok' (the process completed successfully) or 'fail' (the evaluation failed or the function turned out to be undefined).
  • loss: The floating-point value that fmin attempts to minimize.

There are also several optional keys that can be used, such as:

  • attachments
  • loss_variance
  • true_loss
  • true_loss_variance

Details on how to use these keys and the Trials object, including how to store and display diagnostic information, can be found in the Hyperopt documentation; to keep this article compact, we will only sketch the convention below.
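
A minimal sketch, assuming the standard hyperopt API; the x_value key is a hypothetical extra diagnostic added purely for illustration:

from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

def objective(x):
    # Returning a dictionary instead of a bare float lets us store
    # extra diagnostics alongside the loss in the Trials object.
    return {
        'loss': x ** 2,       # the value fmin will minimize
        'status': STATUS_OK,  # marks the evaluation as successful
        'x_value': x,         # hypothetical extra diagnostic
    }

trials = Trials()  # records the result of every evaluation
best = fmin(
    fn=objective,
    space=hp.uniform('x', -10, 10),
    algo=tpe.suggest,
    max_evals=50,
    trials=trials,
)
print(best)
print(trials.best_trial['result'])  # full result dictionary of the best trial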

Since our main motive here is to do hyperparameter optimization with this tool, we’ll see an approach to this in the next section. Before we do, we need to know the parameter expressions for defining a search space that can be used with Hyperopt’s optimization algorithms. Some of these expressions are listed below.

  • hp.choice(label, options): Returns one of the options, which should be a list or a tuple.
  • hp.randint(label, upper): Returns a random integer in the range [0, upper).
  • hp.uniform(label, low, high): Returns a value uniformly distributed between low and high.
  • hp.quniform(label, low, high, q): Returns a value like round(uniform(low, high) / q) * q.
  • hp.loguniform(label, low, high): Returns a value drawn according to exp(uniform(low, high)), so that the logarithm of the return value is uniformly distributed.
  • hp.qloguniform(label, low, high, q): Returns a value like round(exp(uniform(low, high)) / q) * q.
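
These expressions can also be combined into nested structures such as dictionaries to describe richer search spaces. Below is an illustrative sketch; the labels and ranges are hypothetical.

from hyperopt import hp

# A composite search space mixing several expression types;
# the parameter names and ranges here are purely illustrative.
space = {
    'classifier': hp.choice('classifier', ['svm', 'random_forest']),
    'C': hp.loguniform('C', -5, 5),                            # log-scale float
    'n_estimators': hp.quniform('n_estimators', 10, 200, 10),  # quantized value
}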

With the expressions above we can create a search space for our objective function. Now we can turn to the implementation of a simple modeling task in which we carry out hyperparameter optimization with hyperopt-sklearn.

Note: hyperopt-sklearn is a package built on top of the Hyperopt tool, with which we can carry out model selection over the machine learning algorithms of scikit-learn.

Model selection with hyperopt

In this article we will use hyperopt-sklearn to perform classification model selection on the iris dataset. This dataset is included in the sklearn library, so we import it from there. Let’s start by installing and importing the necessary libraries. We only need to install hyperopt-sklearn, which is possible with the following command.

!pip install git+https://github.com/hyperopt/hyperopt-sklearn

Now we can use the library. For this implementation we also need the scikit-learn, NumPy, and pandas libraries.

Import libraries

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import hyperopt.tpe
import hpsklearn
import hpsklearn.demo_support

Import the data

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['species_name'] = pd.Categorical.from_codes(iris.target, iris.target_names)
df

Splitting the data

y = df['species_name']
X = df.drop(['species_name'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

Define the estimator with HyperoptEstimator

estimator = hpsklearn.HyperoptEstimator(
    preprocessing=hpsklearn.components.any_preprocessing('pp'),
    classifier=hpsklearn.components.any_classifier('clf'),
    algo=hyperopt.tpe.suggest,
    trial_timeout=15.0, # seconds
    max_evals=20,
    )
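
The simplest way to run the search is to call fit directly, which performs the whole loop of sampling, training, and evaluating internally. A minimal sketch, assuming fit and score accept the same data we pass to fit_iter below:

# Simplest usage (instead of the demo loop below): fit() runs the
# entire search internally and retains the best model it finds.
estimator.fit(X_train, y_train)
print(estimator.score(X_test, y_test))  # mean accuracy of the best model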

Here, however, we perform model selection iteratively, trying one model at a time so that the progress of the search can be plotted:

# Demo version of estimator.fit()
fit_iterator = estimator.fit_iter(X_train, y_train)
fit_iterator.__next__()
plot_helper = hpsklearn.demo_support.PlotHelper(estimator,
                                                mintodate_ylim=(-.01, .10))
while len(estimator.trials.trials) < estimator.max_evals:
    fit_iterator.send(1) # -- try one more model
    plot_helper.post_iter()
plot_helper.post_loop()

Train the best model on the whole training data

estimator.retrain_best_model_on_full_data(X_train, y_train)

Now we can print the results of the model selection process and evaluate the best model on the test data.

print('Best preprocessing pipeline:')
for pp in estimator._best_preprocs:
    print(pp)
print('\n')
print('Best classifier:\n', estimator._best_learner)
test_predictions = estimator.predict(X_test)
acc_in_percent = 100 * np.mean(test_predictions == y_test)
print('\n')
print('Prediction accuracy in generalization is %.1f%%' % acc_in_percent)

Here in the output we can see the results: the best preprocessing steps, the best classifier with its parameters, and the prediction accuracy of the model on the test set.

Last words

Here in the article we presented the Hyperopt tool for hyperparameter optimization. We discussed some of the functions of this tool and successfully implemented an example of model selection with hyperopt-sklearn, the package Hyperopt provides for models from the scikit-learn library.
