Grid search is also referred to as grid sampling or full factorial sampling.

Grid search involves generating uniform grid inputs for an objective function. In one dimension, these are inputs evenly spaced along a line. In two dimensions, this is a lattice of evenly spaced points across the surface, and so on for higher dimensions.

The full factorial sampling plan places a grid of evenly spaced points over the search space. This approach is easy to implement, does not rely on randomness, and covers the space, but it uses a large number of points.

— Page 235, Algorithms for Optimization, 2019.

Like random search, a grid search can be particularly effective on problems where domain expertise is typically used to influence the selection of specific optimization algorithms. The grid can help to quickly identify areas of a search space that may deserve more attention.

The grid of samples is typically uniform, although this does not have to be the case. For example, a log-10 scale could be used with a uniform spacing, allowing sampling to be performed across orders of magnitude.
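As a minimal sketch of this idea, NumPy's logspace function spaces points uniformly in the exponent, so a small number of samples can cover several orders of magnitude; the bounds used here are illustrative.

```python
# sketch: a log-10 scaled grid with uniform spacing in the exponent
from numpy import logspace

# seven points covering 10^-3 to 10^3, one per order of magnitude
sample = logspace(-3, 3, num=7)
print(sample)
```

Each adjacent pair of points differs by a factor of ten, whereas a uniform linear grid over the same range would need millions of points to resolve the smallest scale.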

The downside is that the coarseness of the grid may step over whole regions of the search space where good solutions reside, a problem that gets worse as the number of inputs (dimensions of the search space) to the problem increases.
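The growth is easy to quantify: the total number of grid points is the points per variable raised to the number of dimensions. A short sketch, using the 101 points per variable implied by a spacing of 0.1 over [-5, 5]:

```python
# sketch: grid size grows exponentially with dimensionality
points_per_dim = 101  # e.g. spacing of 0.1 over [-5, 5]
for dims in range(1, 5):
    total = points_per_dim ** dims
    print('%d dimensions: %d points' % (dims, total))
```

By four dimensions the grid already exceeds one hundred million evaluations, which is why grid search is rarely practical beyond a handful of variables.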

A grid of samples can be generated by choosing the uniform separation of points, then enumerating each variable in turn and incrementing each variable by the chosen separation.
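The same enumeration can be written without nested loops using itertools.product, which yields every combination of per-variable values; this is a sketch of the idea rather than the listing used below.

```python
# sketch: enumerating a grid via itertools.product instead of nested loops
from itertools import product
from numpy import arange

r_min, r_max, step = -5.0, 5.0, 0.1
axis = arange(r_min, r_max + step, step)
# one grid point per pair of axis values
sample = [list(point) for point in product(axis, axis)]
print(len(sample))
```

This generalizes to more variables by passing additional axes to product, without adding a loop per dimension.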

The example below defines a simple two-dimensional minimization objective function, then generates and evaluates a grid sample with a spacing of 0.1 for both input variables. The input with the best performance is then reported.

# example of grid search for function optimization
from numpy import arange

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a grid sample from the domain
sample = list()
step = 0.1
for x in arange(r_min, r_max+step, step):
    for y in arange(r_min, r_max+step, step):
        sample.append([x, y])
# evaluate the sample
sample_eval = [objective(x, y) for x, y in sample]
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f,%.5f) = %.5f' % (sample[best_ix][0], sample[best_ix][1], sample_eval[best_ix]))

Running the example generates a grid of input values, which are then evaluated. The best performing point is then identified and reported.

Note: Unlike random search, grid search is deterministic, so this example will report the same result each time it is run; only minor differences in numerical precision across platforms are possible.

In this case, we can see that the search locates the optimum exactly, because the grid happens to include the point (0, 0).

Best: f(-0.00000,-0.00000) = 0.00000

We can update the example to plot the objective function and show the sample and best result. The complete example is listed below.

# example of grid search for function optimization with plot
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a grid sample from the domain
sample = list()
step = 0.5
for x in arange(r_min, r_max+step, step):
    for y in arange(r_min, r_max+step, step):
        sample.append([x, y])
# evaluate the sample
sample_eval = [objective(x, y) for x, y in sample]
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f,%.5f) = %.5f' % (sample[best_ix][0], sample[best_ix][1], sample_eval[best_ix]))
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a filled contour plot
pyplot.contourf(x, y, results, levels=50, cmap='jet')
# plot the sample as black circles
pyplot.plot([x for x, _ in sample], [y for _, y in sample], '.', color='black')
# draw the best result as a white star
pyplot.plot(sample[best_ix][0], sample[best_ix][1], '*', color='white')
# show the plot
pyplot.show()

Running the example again generates the grid sample and reports the best result.

Best: f(0.00000,0.00000) = 0.00000

A contour plot is then created showing the shape of the objective function, the grid sample as black dots, and a white star for the best result located from the sample.

Note that some of the black dots at the edge of the domain appear to fall partly outside the plot; this is just an artifact of how the dots are drawn (the markers are centered on the sample points and clipped at the axes limits).
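If the clipped edge markers are a distraction, matplotlib's clip_on keyword lets markers draw past the axes limits; a minimal sketch of the idea, using illustrative corner points rather than the full grid sample:

```python
# sketch: drawing markers without clipping at the axes limits
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
from matplotlib import pyplot

pyplot.xlim(-5.0, 5.0)
pyplot.ylim(-5.0, 5.0)
# corner markers would normally be half-clipped at the plot boundary
line, = pyplot.plot([-5.0, 5.0], [-5.0, 5.0], '.', color='black', clip_on=False)
```

Passing clip_on=False in the plotting example above would render the edge dots in full.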