What are Optimizers in TensorFlow?

An optimizer is a function or algorithm that adjusts the attributes of a neural network, such as its weights and learning rate, in order to reduce the overall loss. By tuning these parameters, it minimizes the loss function (the model's error) and helps the model reach better accuracy faster.
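
To make this concrete, here is a minimal sketch (assuming TensorFlow 2.x and the tf.keras.optimizers API) of an optimizer nudging a single weight toward the minimum of a toy loss; the variable w and the loss (w - 3)^2 are purely illustrative choices:

```python
import tensorflow as tf

# Toy setup: one trainable weight and the loss (w - 3)^2, minimized at w = 3.
w = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(25):
    with tf.GradientTape() as tape:
        loss = (w - 3.0) ** 2               # loss function (the error)
    grad = tape.gradient(loss, w)           # dL/dw
    optimizer.apply_gradients([(grad, w)])  # update: w <- w - lr * grad

print(w.numpy())  # close to 3.0: the optimizer has driven the loss near zero
```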

There are a few terms that you should be familiar with; the sketch after this list shows how they fit together in a typical training call.

  • Epoch – The number of times the algorithm runs on the whole training dataset.
  • Sample – A single row of a dataset.
  • Batch – The number of samples processed before the model parameters are updated.
  • Learning rate – A parameter that controls how much the model weights are adjusted at each update.
  • Cost Function/Loss Function – A function that measures the model's error, i.e., the difference between the predicted value and the actual value.
  • Weights/Bias – The learnable parameters of a model that control the strength of the signal between two neurons.
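
As a sketch of how these terms fit together (the two-layer model and the random data here are hypothetical), a typical Keras training call sets the learning rate on the optimizer, the loss function in compile(), and the epoch and batch sizes in fit():

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: 1000 samples (rows), 20 features each, binary labels.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # learning rate
    loss="binary_crossentropy",                               # loss function
    metrics=["accuracy"],
)

# 10 epochs over the whole dataset; weights and biases are updated
# after every batch of 32 samples.
model.fit(x, y, epochs=10, batch_size=32)
```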

Optimizers in TensorFlow

Optimizer is the base class in TensorFlow that all concrete optimizers extend; it is initialized with the model's parameters, but no tensor is passed to it. The basic optimizer provided by TensorFlow is:

  • tf.train.Optimizer – TensorFlow version 1.x
  • tf.compat.v1.train.Optimizer – TensorFlow version 2.x

This class is never used directly; instead, one of its subclasses is instantiated.
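
For example (a sketch, assuming TensorFlow 2.x, where the 1.x API lives under tf.compat.v1), you would instantiate a concrete subclass such as GradientDescentOptimizer rather than Optimizer itself:

```python
import tensorflow as tf

# The base Optimizer class is never constructed directly;
# a concrete subclass is.

# 1.x-style API (available under tf.compat.v1 in TensorFlow 2.x):
opt_v1 = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01)

# 2.x-style API: the tf.keras.optimizers subclasses are the usual choice.
opt_v2 = tf.keras.optimizers.SGD(learning_rate=0.01)
```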

TensorFlow Keras Optimizer Classes

TensorFlow predominantly supports nine optimizer classes, all derived from its base class (Optimizer); the snippet after this list shows how to instantiate each one.

  • Gradient Descent
  • SGD
  • AdaGrad
  • RMSprop
  • Adadelta
  • Adam
  • AdaMax
  • NAdam
  • FTRL
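
As a quick reference (again a sketch assuming TensorFlow 2.x), each of these can be instantiated from tf.keras.optimizers; note that plain gradient descent and SGD share the SGD class, and the learning rates below are simply common choices, not requirements:

```python
import tensorflow as tf

# Instantiating each optimizer listed above via its tf.keras class.
# SGD with momentum=0 (the default) is vanilla gradient descent.
optimizers = {
    "Gradient Descent": tf.keras.optimizers.SGD(learning_rate=0.01),
    "SGD (momentum)":   tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "AdaGrad":          tf.keras.optimizers.Adagrad(learning_rate=0.01),
    "RMSprop":          tf.keras.optimizers.RMSprop(learning_rate=0.001),
    "Adadelta":         tf.keras.optimizers.Adadelta(learning_rate=1.0),
    "Adam":             tf.keras.optimizers.Adam(learning_rate=0.001),
    "AdaMax":           tf.keras.optimizers.Adamax(learning_rate=0.002),
    "NAdam":            tf.keras.optimizers.Nadam(learning_rate=0.002),
    "FTRL":             tf.keras.optimizers.Ftrl(learning_rate=0.001),
}

for name, opt in optimizers.items():
    print(name, "->", opt.__class__.__name__)
```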