What is special about DeepMind’s Sonnet library for building neural networks?

Sonnet is an open-source DeepMind library for building neural networks on top of TensorFlow 2. Sonnet shares many similarities with existing neural network libraries, but it also has unique features tailored to research needs. In this article, we will discuss the uniqueness that Sonnet offers over libraries such as Keras or scikit-learn and try to get to know Sonnet with a small implementation. Below are the topics to be covered.

Contents

  1. About Sonnet
  2. What does Sonnet offer?
  3. Implementation of MLP offered by Sonnet

Let’s understand the Sonnet library offered by DeepMind.

About Sonnet

DeepMind’s Sonnet is a TensorFlow-based platform for building neural networks. The framework offers a higher level of abstraction for the construction of neural networks on top of TensorFlow’s computation graph.

Sonnet is a high-level programming approach to building neural networks with TensorFlow. More specifically, Sonnet allows you to create Python objects that represent neural network components, which can then be integrated into a TensorFlow graph.

The basic idea of Sonnet is the module. Sonnet’s modules are self-contained neural network components, such as layers or whole models, that can be integrated multiple times into a data-flow graph. This technique abstracts away low-level TensorFlow functionality such as session creation and variable sharing. Modules can be coupled in any way, and Sonnet allows developers to create their own modules with a simple programming approach.
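
To make the module idea concrete, below is a minimal sketch of a custom module. The Scale class and its single parameter are our own illustration, not part of the Sonnet API; what it shows is that a module creates its variables once, tracks them, and reuses them on every call.

import sonnet as snt
import tensorflow as tf

class Scale(snt.Module):
  """Toy module that multiplies its input by a learned scalar."""

  def __init__(self, name=None):
    super().__init__(name=name)
    self.w = tf.Variable(2.0, name="w")  # created once, shared across calls

  def __call__(self, x):
    return x * self.w

scale = Scale()
print(scale(tf.constant([1.0, 2.0])))  # tf.Tensor([2. 4.], ...)
print(scale.trainable_variables)       # the module tracks its own variables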

Sonnet’s high-level programming structures, module-based setup, and connection isolation are undeniable advantages. However, I believe that some of the framework’s most significant benefits lie beneath the surface.


What does Sonnet offer?

  • Applications with multiple neural networks: Implementing solutions that combine several neural networks, such as multi-layer architectures or adversarial networks, directly in TensorFlow is a nightmare. With Sonnet’s module programming approach, individual neural networks can be implemented separately and then merged to create higher-level networks.
  • Train neural networks: Sonnet facilitates neural network training by focusing on specific modules.
  • Testing: Sonnet’s high-level programming style makes it easy to automate neural network testing using popular frameworks.
  • Extensibility: Developers can easily extend Sonnet by creating new modules. You can even determine how the TensorFlow graph is constructed for this module.
  • Composability: Imagine accessing a vast ecosystem of pre-built and pre-trained neural network modules that can be dynamically combined to form higher-level networks. Sonnet is undoubtedly a step in that direction; a short sketch of module composition follows this list.
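
As a small sketch of composability, the snippet below chains modules into a higher-level network with snt.Sequential (it reuses the snt and tf imports from the sketch above; the layer sizes and batch shape are arbitrary choices of ours):

composed = snt.Sequential([
    snt.Linear(128),   # a pre-built Sonnet module
    tf.nn.relu,        # plain TensorFlow ops compose too
    snt.Linear(10),    # another module, reused as-is
])
logits = composed(tf.random.normal([8, 784]))  # a batch of 8 flattened inputs
print(logits.shape)  # (8, 10)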

Implementation of MLP offered by Sonnet

MLP stands for Multi-Layer Perceptron, a feed-forward neural network commonly used as a classifier. In this article, let’s create an MLP classifier using Sonnet modules.

Install the Sonnet library

!pip install dm-sonnet tensorflow-datasets tqdm

Import necessary libraries.

import sonnet as snt
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
from tqdm import tqdm

For this article, we use the famous MNIST dataset of handwritten digits, with 60,000 training examples and 10,000 test examples. The digits have been size-normalized and centered in fixed-size images.

batch_size = 200

def process_batch(images, labels):
  # Drop the trailing channel dimension and rescale pixels from
  # [0, 255] to the range [-1, 1].
  images = tf.squeeze(images, axis=[-1])
  images = tf.cast(images, dtype=tf.float32)
  images = ((images / 255.) - .5) * 2.
  return images, labels

def mnist(split):
  dataset = tfds.load("mnist", split=split, as_supervised=True)
  dataset = dataset.map(process_batch)
  dataset = dataset.batch(batch_size)
  dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
  dataset = dataset.cache()
  return dataset

train = mnist("train").shuffle(10)
test = mnist("test")

Let’s take a look at the test data set.

images, _ = next(iter(test))
plt.imshow(images[1])

Create an MLP classifier

class SampleMLP(snt.Module):

  def __init__(self):
    super().__init__()
    self.flatten = snt.Flatten()
    self.hidden1 = snt.Linear(1024, name="hidden1")
    self.hidden2 = snt.Linear(1024, name="hidden2")
    self.logits = snt.Linear(10, name="logits")

  def __call__(self, images):
    # Flatten the 28x28 images, pass them through two ReLU hidden
    # layers, and return unnormalized class scores (logits).
    output = self.flatten(images)
    output = tf.nn.relu(self.hidden1(output))
    output = tf.nn.relu(self.hidden2(output))
    output = self.logits(output)
    return output

Here the linear Sonnet module ‘snt.Linear’ is used, built on top of ‘snt.Module’, a lightweight container for variables. The linear modules created in the ‘__init__’ method are applied to the dataset simply by calling the object, which invokes ‘__call__’.
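
As a small illustrative check (the module name "demo" and the shapes are our own choices), snt.Linear builds its weight and bias variables lazily, on the first call, once the input size is known:

lin = snt.Linear(4, name="demo")
_ = lin(tf.ones([1, 3]))  # variables are created on this first call
print({v.name: tuple(v.shape) for v in lin.variables})
# e.g. {'demo/w:0': (3, 4), 'demo/b:0': (4,)}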

Define a base model and apply it to the data

mlp_testin = SampleMLP()

images, labels = next(iter(test))
logits = mlp_testin(images)

# Compare the untrained model's prediction for the first test image
# with its true label.
prediction = tf.argmax(logits[0]).numpy()
observed = labels[0].numpy()
print("Predicted value: {} actual value: {}".format(prediction, observed))
plt.imshow(images[0])

It can be observed that the untrained base model does not perform well on the data: the actual value is 2 but the predicted value is 0. The model therefore has to be trained.

Tune the model

num_images = 60000
num_epochs = 10

tune_er = snt.optimizers.SGD(learning_rate=0.1)

def step(images, labels):
  """Performs one optimizer step on a single mini-batch."""
  with tf.GradientTape() as tape:
    logits = mlp_testin(images)
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                          labels=labels)
    loss = tf.reduce_mean(loss)

  params = mlp_testin.trainable_variables
  grads = tape.gradient(loss, params)
  tune_er.apply(grads, params)
  return loss

# tqdm shows a progress bar over all training steps.
for images, labels in tqdm(train.repeat(num_epochs),
                           total=(num_images // batch_size) * num_epochs):
  loss = step(images, labels)

print("\n\nFinal loss: {}".format(loss.numpy()))

Evaluate the model performance

# Count correct predictions over the whole test set.
total = 0
positive_pred = 0
for images, labels in test:
  predictions = tf.argmax(mlp_testin(images), axis=1)
  positive_pred += tf.math.count_nonzero(tf.equal(predictions, labels))
  total += images.shape[0]

print("Got %d/%d (%.02f%%) correct predictions" % (positive_pred, total, positive_pred / total * 100.))

Visualize the right and wrong predictions with a small helper function.

def test_samples(correct, rows, cols):
  """Plots a grid of test digits that the model classified
  correctly (correct=True) or incorrectly (correct=False)."""
  n = 0
  fig, ax = plt.subplots(rows, cols)
  ax = ax.flatten()
  for images, labels in test:
    predictions = tf.argmax(mlp_testin(images), axis=1)
    eq = tf.equal(predictions, labels)
    for i, x in enumerate(eq):
      if x.numpy() == correct:
        label = labels[i].numpy()
        prediction = predictions[i].numpy()
        image = images[i]
        ax[n].imshow(image)
        ax[n].set_title("Prediction:{}\nActual:{}".format(prediction, label))
        n += 1
        if n == (rows * cols):
          break
    if n == (rows * cols):
      break

test_samples(correct=True, rows=2, cols=5)
test_samples(correct=False, rows=2, cols=5)

Conclusion

Sonnet offers a straightforward yet powerful programming approach based on a single notion: the module. Modules can contain references to parameters, other modules, and methods that process user input. With this handy article, we have understood the uniqueness of Sonnet and how to implement an MLP model with it.
