The Nelder-Mead optimization algorithm can be used in Python via the SciPy minimize() function.

This function requires that the “method” argument be set to “nelder-mead” to use the Nelder-Mead algorithm. It takes the objective function to be minimized and an initial point for the search.

…

# perform the search

result = minimize(objective, pt, method='nelder-mead')


The result is an OptimizeResult object that contains information about the outcome of the optimization, accessible via keys.

For example, the “success” boolean indicates whether the search was completed successfully or not, the “message” provides a human-readable message about the success or failure of the search, and the “nfev” key indicates the number of function evaluations that were performed.

Importantly, the “x” key holds the input values for the optimum found by the search, if successful.
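As a side note, an OptimizeResult also supports attribute-style access, so result.x and result['x'] refer to the same value; a minimal sketch (using a simple one-dimensional objective for illustration):

```python
# demonstrate key- and attribute-style access on an OptimizeResult
from scipy.optimize import minimize

# simple one-dimensional convex objective
def objective(x):
	return x[0] ** 2.0

result = minimize(objective, [1.0], method='nelder-mead')
# the same fields are available via keys and via attributes
print(result['success'] == result.success)
print(result['nfev'] == result.nfev)
```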

…

# summarize the result

print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
print('Solution: %s' % result['x'])


We can demonstrate the Nelder-Mead optimization algorithm on a well-behaved function to show that it can quickly and efficiently find the optimum without using any derivative information from the function.

In this case, we will use the x^2 function in two dimensions, defined on the range -5.0 to 5.0, with the known optimum at [0.0, 0.0].

We can define the objective() function below.

# objective function

def objective(x):
	return x[0]**2.0 + x[1]**2.0
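Since the optimum is known, we can sanity-check this objective directly: it should evaluate to exactly 0.0 at [0.0, 0.0] and grow as we move away from the origin. The test points below are arbitrary, for illustration only.

```python
# sanity-check the objective at and around the known optimum
def objective(x):
	return x[0]**2.0 + x[1]**2.0

print(objective([0.0, 0.0]))  # 0.0 at the optimum
print(objective([1.0, 2.0]))  # 5.0 away from the optimum
```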


We will use a random point in the defined domain as a starting point for the search.

…

# define range for input

r_min, r_max = -5.0, 5.0

# define the starting point as a random sample from the domain

pt = r_min + rand(2) * (r_max - r_min)


The search can then be performed. We use the default maximum number of iterations, set via the “maxiter” option, which defaults to N*200, where N is the number of input variables; with two variables, this gives 400 iterations.

…

# perform the search

result = minimize(objective, pt, method='nelder-mead')
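If a different budget or tolerance is needed, these defaults can be overridden through the “options” argument of minimize(); the specific values below are arbitrary, for illustration only.

```python
# run Nelder-Mead with explicit limits instead of the defaults
from scipy.optimize import minimize

# objective function
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# maxiter/maxfev cap iterations and function evaluations;
# xatol/fatol are absolute convergence tolerances on x and f(x)
opts = {'maxiter': 500, 'maxfev': 500, 'xatol': 1e-6, 'fatol': 1e-6}
result = minimize(objective, [2.0, -3.0], method='nelder-mead', options=opts)
print('Converged: %s after %d evaluations' % (result['success'], result['nfev']))
```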


After the search is finished, we will report the total function evaluations used to find the optimum and the success message of the search, which we expect to be positive in this case.

…

# summarize the result

print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])


Finally, we will retrieve the input values for the located optimum, evaluate them using the objective function, and report both in a human-readable manner.

…

# evaluate solution

solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))


Tying this together, the complete example of using the Nelder-Mead optimization algorithm on a simple convex objective function is listed below.

# nelder-mead optimization of a convex function
from scipy.optimize import minimize
from numpy.random import rand

# objective function
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the search
result = minimize(objective, pt, method='nelder-mead')
# summarize the result
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
# evaluate solution
solution = result['x']
evaluation = objective(solution)
print('Solution: f(%s) = %.5f' % (solution, evaluation))


Running the example executes the optimization, then reports the results.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the search was successful, as we expected, and was completed after 88 function evaluations.

We can see that the optimum was located with inputs very close to [0.0, 0.0], which evaluate to the minimum objective value of 0.0.

Status: Optimization terminated successfully.

Total Evaluations: 88

Solution: f([ 2.25680716e-05 -3.87021351e-05]) = 0.00000
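Because the starting point is random, the evaluation count will vary from run to run. As the note above suggests, it is worth repeating the search a few times; a minimal sketch that does so and reports the average (the five repeats are an arbitrary choice):

```python
# repeat the Nelder-Mead search from random starting points
from scipy.optimize import minimize
from numpy.random import rand

# objective function
def objective(x):
	return x[0]**2.0 + x[1]**2.0

# define range for input
r_min, r_max = -5.0, 5.0
counts = []
for _ in range(5):
	# draw a fresh random starting point and run the search
	pt = r_min + rand(2) * (r_max - r_min)
	result = minimize(objective, pt, method='nelder-mead')
	counts.append(result['nfev'])
print('Average evaluations over 5 runs: %.1f' % (sum(counts) / len(counts)))
```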