Local Search With SciPy

Local search, or local function optimization, refers to algorithms that seek the input to a function that results in the minimum or maximum output, where the function or constrained region being searched is assumed to have a single optimum, i.e. it is unimodal.

The function that is being optimized may or may not be convex, and may have one or more input variables.

A local search optimization may be applied directly to optimize a function if the function is believed or known to be unimodal; otherwise, the local search algorithm may be applied to fine-tune the result of a global search algorithm.

The SciPy library provides local search via the minimize() function.

The minimize() function takes as input the name of the objective function that is being minimized and the initial point from which to start the search and returns an OptimizeResult that summarizes the success or failure of the search and the details of the solution if found.

# minimize an objective function
result = minimize(objective, point)

Additional information about the objective function can be provided if known, such as the bounds on the input variables, a function for computing the first derivative of the function (gradient or Jacobian matrix), a function for computing the second derivative of the function (Hessian matrix), and any constraints on the inputs.
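For instance, under the assumption of a simple sum-of-squares objective like the one used later in this tutorial, the sketch below passes one (min, max) pair per input variable via the “bounds” argument and an analytical gradient via the “jac” argument; the bound values and starting point are arbitrary choices for illustration.

# sketch: providing bounds and a gradient (Jacobian) to minimize()
from numpy import asarray
from scipy.optimize import minimize

# assumed sum-of-squares objective
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# analytical first derivative (gradient) of the objective
def gradient(x):
	return asarray([2.0 * x[0], 2.0 * x[1]])

# one (min, max) pair per input variable
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
# arbitrary starting point within the bounds
result = minimize(objective, asarray([2.0, 3.0]), method='L-BFGS-B', jac=gradient, bounds=bounds)
print(result['x'])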

Importantly, the function provides the “method” argument that allows the specific optimization algorithm used in the local search to be specified.

A suite of popular local search algorithms is available, such as:

Nelder-Mead Algorithm (method='Nelder-Mead').
Newton’s Method (method='Newton-CG').
Powell’s Method (method='Powell').
BFGS Algorithm and extensions (method='BFGS').
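To give a sense of how the “method” argument changes the search, the brief sketch below applies the derivative-free Nelder-Mead algorithm to a simple convex objective; the objective and the fixed starting point are assumptions chosen for illustration only.

# sketch: selecting the Nelder-Mead algorithm via the "method" argument
from scipy.optimize import minimize

# assumed convex objective
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# assumed fixed starting point
pt = [1.5, -0.5]
# direct search that uses no gradient information
result = minimize(objective, pt, method='Nelder-Mead')
print(result['x'], result['nfev'])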
The example below demonstrates how to minimize a two-dimensional convex function using the L-BFGS-B local search algorithm.

# l-bfgs-b algorithm local optimization of a convex function
from scipy.optimize import minimize
from numpy.random import rand

# objective function
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the l-bfgs-b algorithm search
result = minimize(objective, pt, method='L-BFGS-B')
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))

Running the example performs the optimization and reports the success or failure of the search, the number of function evaluations performed, and the input that resulted in the optimum of the function.

Status : b’CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL’
Total Evaluations: 9
Solution: f([3.38059583e-07 3.70089258e-07]) = 0.00000

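The status message above is specific to the L-BFGS-B implementation; the “success” flag on the returned OptimizeResult offers a more portable check, as in the sketch below, which repeats the same kind of search with an assumed fixed starting point purely for illustration.

# sketch: checking the outcome of the search programmatically
from scipy.optimize import minimize

# assumed convex objective
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# assumed fixed starting point
result = minimize(objective, [2.0, 3.0], method='L-BFGS-B')
# the success flag avoids parsing the solver-specific message
if result['success']:
	print('Converged after %d evaluations' % result['nfev'])
else:
	print('Search failed: %s' % result['message'])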