In this section, we will look at an example of using the differential evolution algorithm on a challenging objective function.
The Ackley function is an example of an objective function that has a single global optimum and multiple local optima in which a local search might get stuck.
As such, a global optimization technique is required. It is a two-dimensional objective function that has a global optimum at [0,0], which evaluates to 0.0.
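For reference, the two-dimensional Ackley function is defined as:

f(x, y) = -20 * exp(-0.2 * sqrt(0.5 * (x^2 + y^2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

At the origin, the square root term is zero and both cosines equal one, so the expression reduces to -20 - e + e + 20 = 0, which confirms the global optimum of 0.0 at [0,0].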
The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optimum and the many local optima.
# ackley multimodal function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
	return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates the surface plot of the Ackley function showing the vast number of local optima.
3D Surface Plot of the Ackley Multimodal Function
We can apply the differential evolution algorithm to the Ackley objective function.
First, we can define the bounds of the search space as the limits of the function in each dimension.
…
# define the bounds on the search
bounds = [[r_min, r_max], [r_min, r_max]]
We can then apply the search by specifying the name of the objective function and the bounds of the search. In this case, we will use the default hyperparameters.
…
# perform the differential evolution search
result = differential_evolution(objective, bounds)
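If the defaults are not appropriate for your problem, the differential_evolution() function also exposes the algorithm's hyperparameters directly, such as the population size, mutation and recombination rates, and the strategy name. A minimal sketch is below; the values shown are illustrative rather than tuned.

...
# perform the search with explicit hyperparameters (illustrative values, not tuned)
result = differential_evolution(objective, bounds, strategy='best1bin', popsize=20, mutation=(0.5, 1.0), recombination=0.7, tol=0.01, maxiter=1000)
...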
After the search is complete, it will report the status of the search and the total number of function evaluations performed, as well as the best solution found and its evaluation.
…
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))
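Note that the result object is a SciPy OptimizeResult, which supports attribute access in addition to dictionary-style access, so the same fields can also be read as follows.

...
# equivalent attribute-style access to the best solution and its evaluation
print('Solution: f(%s) = %.5f' % (result.x, result.fun))
...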
Tying this together, the complete example of applying differential evolution to the Ackley objective function is listed below.
# differential evolution global optimization for the ackley multimodal objective function
from scipy.optimize import differential_evolution
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi

# objective function
def objective(v):
	x, y = v
	return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# define the bounds on the search
bounds = [[r_min, r_max], [r_min, r_max]]
# perform the differential evolution search
result = differential_evolution(objective, bounds)
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))
Running the example executes the optimization, then reports the results.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the algorithm located the optimum with inputs equal to zero and an objective function evaluation that is equal to zero.
We can see that a total of 3,063 function evaluations were performed.
Status : Optimization terminated successfully.
Total Evaluations: 3063
Solution: f([0. 0.]) = 0.00000
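If you need repeatable runs, the differential_evolution() function accepts a seed argument for the pseudorandom number generator; the value 1 below is arbitrary.

...
# fix the pseudorandom number generator seed for a repeatable search
result = differential_evolution(objective, bounds, seed=1)
...

Note also that, by default, the function polishes the best solution found with the L-BFGS-B local search (the polish argument defaults to True), which helps explain the precision of the final result.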