We can perform a line search manually in Python using the line_search() function from the scipy.optimize module.

It supports both univariate and multivariate optimization problems.

This function takes the objective function and the gradient of the objective function, as well as the current position in the search space and the direction in which to move.

As such, you must know the first derivative of your objective function. You must also have some idea of where to start the search and how far to search. Recall that you can perform the search multiple times with different directions (sign and magnitude).

…

result = line_search(objective, gradient, point, direction)
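For illustration, here is a minimal, self-contained sketch of such a call. The objective and gradient functions, the starting point, and the direction below are all hypothetical examples chosen for this sketch, not values from the text.

```python
from numpy import asarray
from scipy.optimize import line_search

# hypothetical objective function: f(x) = x^2 (convex, minimum at x = 0)
def objective(x):
    return x[0] ** 2.0

# gradient of the objective: f'(x) = 2x
def gradient(x):
    return asarray([2.0 * x[0]])

# hypothetical starting point and a descent direction (sign and magnitude chosen by us)
point = asarray([-4.0])
direction = asarray([1.0])

# perform the line search from the point along the direction
result = line_search(objective, gradient, point, direction)
print(result)
```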

The function returns a tuple of six elements, including the scale factor for the direction (called alpha) and the number of function evaluations performed, among other values.

The first element in the result tuple contains the alpha. If the search fails to converge, the alpha will have the value None.

…

# retrieve the alpha value found as part of the line search
alpha = result[0]
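Because the search can fail, it is good practice to guard against a None alpha before using it. A minimal sketch, again with a hypothetical objective, gradient, starting point, and direction:

```python
from numpy import asarray
from scipy.optimize import line_search

# hypothetical convex objective and its gradient (for illustration only)
def objective(x):
    return x[0] ** 2.0

def gradient(x):
    return asarray([2.0 * x[0]])

point = asarray([-4.0])
direction = asarray([1.0])

result = line_search(objective, gradient, point, direction)

# retrieve the alpha value, guarding against a failed search
alpha = result[0]
if alpha is None:
    print('Line search failed to converge')
else:
    print('alpha: %.3f' % alpha)
```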

The alpha, starting point, and direction can be used to construct the endpoint of a single line search.

…

# construct the end point of a line search
end = point + alpha * direction
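Putting these pieces together, the sketch below (using the same hypothetical objective, gradient, point, and direction as above) constructs the endpoint and evaluates the objective there; for a descent direction, the endpoint should have a lower objective value than the starting point.

```python
from numpy import asarray
from scipy.optimize import line_search

# hypothetical convex objective and its gradient (for illustration only)
def objective(x):
    return x[0] ** 2.0

def gradient(x):
    return asarray([2.0 * x[0]])

point = asarray([-4.0])
direction = asarray([1.0])

result = line_search(objective, gradient, point, direction)
alpha = result[0]

# construct the end point of the line search and evaluate the objective there
end = point + alpha * direction
print('end point: %s, objective: %.3f' % (end, objective(end)))
```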

For optimization problems with more than one input variable, i.e. multivariate optimization, the line_search() function will return a single alpha value for all dimensions.

This means the function assumes that the optimum is equidistant from the starting point in all dimensions, which is a significant limitation.
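The shared-alpha behavior can be seen in a two-variable sketch. The objective, gradient, point, and direction below are hypothetical examples; note that the single alpha scales every component of the direction vector.

```python
from numpy import asarray
from scipy.optimize import line_search

# hypothetical two-variable objective: f(x, y) = x^2 + y^2
def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

# gradient of the objective: [2x, 2y]
def gradient(x):
    return asarray([2.0 * x[0], 2.0 * x[1]])

# hypothetical starting point and a descent direction in two dimensions
point = asarray([-2.0, 3.0])
direction = asarray([1.0, -1.0])

result = line_search(objective, gradient, point, direction)
alpha = result[0]

# the single alpha scales all dimensions of the direction vector
end = point + alpha * direction
print('alpha: %s, end: %s' % (alpha, end))
```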