Random Search for Function Optimization

Random search is also referred to as random optimization or random sampling.

Random search involves generating and evaluating random inputs to the objective function. It is effective because it does not assume anything about the structure of the objective function. This can be beneficial for problems where there is a lot of domain expertise that might otherwise influence or bias the optimization strategy, allowing non-intuitive solutions to be discovered.

… random sampling, which simply draws m random samples over the design space using a pseudorandom number generator. To generate a random sample x, we can sample each variable independently from a distribution.

— Page 236, Algorithms for Optimization, 2019.

Random search may also be the best strategy for highly complex problems with noisy or non-smooth (discontinuous) areas of the search space that can cause difficulties for algorithms that depend on reliable gradients.

We can generate a random sample from a domain using a pseudorandom number generator. Each variable requires a well-defined bound or range, and a uniformly random value can be sampled from the range and then evaluated.
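
For example, with more than one input variable, each variable can be sampled independently and uniformly within its own bounds. The snippet below is a minimal sketch of this idea for a hypothetical two-variable problem; the bounds of [-5, 5] for each variable are arbitrary.

# sketch: sample one random point for a hypothetical two-variable problem
from numpy.random import rand

# arbitrary bounds (min, max) for each input variable
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
# sample each variable independently and uniformly within its bounds
point = [lo + rand() * (hi - lo) for lo, hi in bounds]
print(point)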

Generating random samples is computationally trivial and does not take up much memory; therefore, it may be efficient to generate a large sample of inputs, then evaluate them. Each sample is independent, so samples can be evaluated in parallel if needed to accelerate the process, as in the sketch below.
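
For example, the evaluation step could be spread across worker processes with Python's standard multiprocessing module. The snippet below is a minimal sketch of this idea using the same objective function as the example that follows; in practice, parallel evaluation only pays off when the objective function is expensive to compute.

# sketch: evaluate a random sample in parallel across worker processes
from multiprocessing import Pool
from numpy.random import rand

# objective function
def objective(x):
    return x**2.0

if __name__ == '__main__':
    # define range for input
    r_min, r_max = -5.0, 5.0
    # generate a random sample from the domain
    sample = r_min + rand(100) * (r_max - r_min)
    # evaluate each sample in a separate worker process
    with Pool() as pool:
        sample_eval = pool.map(objective, sample)
    # report the best (smallest) evaluation found
    print(min(sample_eval))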

The example below defines a simple one-dimensional minimization objective function, then generates and evaluates a random sample of 100 inputs. The best-performing input is then reported.

# example of random search for function optimization
from numpy.random import rand

# objective function
def objective(x):
    return x**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a random sample from the domain
sample = r_min + rand(100) * (r_max - r_min)
# evaluate the sample
sample_eval = objective(sample)
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))
Running the example generates a random sample of input values, which are then evaluated. The best-performing point is then identified and reported.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the result is very close to the optimal input of 0.0.

Best: f(-0.01762) = 0.00031
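
As an aside, the loop used to locate the best solution could equally be written with NumPy's argmin() function, which returns the index of the smallest value in an array. A minimal sketch, assuming the sample and sample_eval arrays from the listing above:

# locate the best solution with argmin instead of an explicit loop
from numpy import argmin
best_ix = argmin(sample_eval)
print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))
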
We can update the example to plot the objective function and show the sample and best result. The complete example is listed below.

# example of random search for function optimization with plot
from numpy import arange
from numpy.random import rand
from matplotlib import pyplot

# objective function
def objective(x):
    return x**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# generate a random sample from the domain
sample = r_min + rand(100) * (r_max - r_min)
# evaluate the sample
sample_eval = objective(sample)
# locate the best solution
best_ix = 0
for i in range(len(sample)):
    if sample_eval[i] < sample_eval[best_ix]:
        best_ix = i
# summarize best solution
print('Best: f(%.5f) = %.5f' % (sample[best_ix], sample_eval[best_ix]))
# sample input range uniformly at 0.1 increments
inputs = arange(r_min, r_max, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
# plot the sample
pyplot.scatter(sample, sample_eval)
# draw a vertical line at the best input
pyplot.axvline(x=sample[best_ix], ls='--', color='red')
# show the plot
pyplot.show()
Running the example again generates the random sample and reports the best result.

Best: f(0.01934) = 0.00037
A line plot is then created showing the shape of the objective function, the random sample, and a red vertical line marking the best result found in the sample.