Basin Hopping Examples

Multimodal Optimization With Local Optima
The Ackley function is an example of an objective function that has a single global optimum and many local optima in which a local search might get stuck.

As such, a global optimization technique is required. It is a two-dimensional objective function that has a global optimum at [0, 0], which evaluates to 0.0.
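
The value at the optimum is easy to verify by hand: at x = y = 0, the first exponential term is -20 * exp(0) = -20 and the second is exp(0.5 * (1 + 1)) = e, so f(0, 0) = -20 - e + e + 20 = 0.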

The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optimum and the many local optima.

# ackley multimodal function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# objective function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
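
If the 3D surface is hard to read on screen, the same x, y, and results grids can be rendered as a two-dimensional filled contour plot instead; a minimal alternative sketch:

# create a filled contour plot of the same grid with the jet color scheme
pyplot.contourf(x, y, results, levels=50, cmap='jet')
pyplot.show()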

Running the example creates the surface plot of the Ackley function showing the vast number of local optima.

3D Surface Plot of the Ackley Multimodal Function

We can apply the basin hopping algorithm to the Ackley objective function.

In this case, we will start the search using a random point drawn from the input domain between -5 and 5.

# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)

We will use a step size of 0.5, 200 iterations, and the default local search algorithm. This configuration was chosen after a little trial and error.

# perform the basin hopping search
result = basinhopping(objective, pt, stepsize=0.5, niter=200)
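
By default, basinhopping runs scipy.optimize.minimize as the local search (BFGS for an unconstrained problem like this one). If a different local minimizer is preferred, the minimizer_kwargs dictionary is forwarded to minimize; a minimal sketch swapping in L-BFGS-B:

# perform the basin hopping search with an explicit local minimizer
result = basinhopping(objective, pt, stepsize=0.5, niter=200, minimizer_kwargs={'method': 'L-BFGS-B'})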

After the search is complete, it will report the status of the search and the total number of function evaluations performed, as well as the best result found and its evaluation.

# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))

Tying this together, the complete example of applying basin hopping to the Ackley objective function is listed below.

# basin hopping global optimization for the ackley multimodal objective function
from scipy.optimize import basinhopping
from numpy.random import rand
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the basin hopping search
result = basinhopping(objective, pt, stepsize=0.5, niter=200)
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))
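
The final re-evaluation of the objective at result['x'] is just for illustration; the OptimizeResult returned by basinhopping already carries the best score and the iteration count, so the same summary can be read directly:

# read the best score and iteration count from the result object
print('Best: f(%s) = %.5f after %d iterations' % (result['x'], result['fun'], result['nit']))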

Running the example executes the optimization, then reports the results.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
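
The stochastic behavior comes both from the random starting point and from the random perturbations between hops. Assuming a SciPy version that supports it, the seed argument fixes the perturbation sequence for a repeatable search; a minimal sketch:

# fix the random perturbations for a repeatable search
result = basinhopping(objective, pt, stepsize=0.5, niter=200, seed=1)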

In this case, we can see that the algorithm located the optimum with inputs very close to zero and an objective function evaluation that is practically zero.

We can see that 200 iterations of the algorithm resulted in 86,020 function evaluations. This is far more than one evaluation per iteration because each basin hopping iteration runs a complete local search, and every step of those inner searches counts toward the total.

Status: ['requested number of basinhopping iterations completed successfully']
Total Evaluations: 86020
Solution: f([ 5.29778873e-10 -2.29022817e-10]) = 0.00000
