Let’s look at how the algorithm works when applied to the logical OR operator.

The variable definition can be found in the following code listing.

It specifies the input vector size (ISIZE), the weight vector size (ISIZE+1, to account for the bias weight), a low learning rate, the maximum number of iterations, and the types for the input and weight vectors.
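The original listing isn’t reproduced here, but based on that description it might look like the following sketch; ISIZE comes from the text, while the other names and values are assumptions:

```c
#define ISIZE          2           /* input vector size: two OR operands  */
#define WSIZE          (ISIZE+1)   /* weight vector size, +1 for the bias */
#define LEARNING_RATE  0.1         /* low learning rate (assumed value)   */
#define MAX_ITERATIONS 100         /* safety cap on training passes       */

typedef int    ivector[ISIZE];     /* input vector type  */
typedef double wvector[WSIZE];     /* weight vector type */
```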

My network initialization is shown in the following code listing.

In this function, I initialize each weight in the weight vector to a random floating-point number between 0 and 1 after seeding the random number generator.
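A routine of this shape would match that description; the global weight vector and the function name are assumptions, not the original listing:

```c
#include <stdlib.h>
#include <time.h>

#define ISIZE 2
#define WSIZE (ISIZE+1)

double weights[WSIZE];

/* Seed the random number generator with the current time, then set
 * each weight to a random floating-point number in [0, 1]. */
void initialize(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < WSIZE; i++) {
        weights[i] = (double)rand() / (double)RAND_MAX;
    }
}
```

Seeding with the clock means each run starts from different weights, so the number of training iterations varies from run to run.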

The network’s execution is demonstrated in the following code example.

The training vector is passed to the feedforward function, which uses it to calculate the neuron’s output (per the equation shown in Figure 4).

Finally, I use the step activation function to get the result.
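Putting those two steps together, a feedforward function might be sketched as below: a weighted sum of the inputs plus the bias weight, followed by the step activation. The names and the exact threshold convention are assumptions:

```c
#define ISIZE 2
#define WSIZE (ISIZE+1)

double weights[WSIZE];

/* Weighted sum of the inputs plus the bias weight (whose input is
 * fixed at 1), followed by the step activation function. */
int feedforward(const int inputs[ISIZE])
{
    double sum = weights[ISIZE];               /* bias contribution */
    for (int i = 0; i < ISIZE; i++) {
        sum += weights[i] * (double)inputs[i];
    }
    return (sum > 0.0) ? 1 : 0;                /* step activation */
}
```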

The following code listing shows the last function, train.

I iterate over the training set in this function, applying each test pattern to the network (via feedforward) and calculating an error from the result.

I then update each of the three weights by the learning rate times the error times the corresponding input (with the bias input fixed at 1).

When no more errors are identified, the process ends.
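That loop could be sketched as follows. The OR training set and the weight-update rule follow the description above, but the names and listing structure are assumptions; the feedforward function is repeated so the sketch is self-contained:

```c
#define ISIZE          2
#define WSIZE          (ISIZE+1)
#define LEARNING_RATE  0.1
#define MAX_ITERATIONS 100

double weights[WSIZE];

static const int test_set[4][ISIZE] = { {0,0}, {0,1}, {1,0}, {1,1} };
static const int desired[4]         = {  0,     1,     1,     1    };  /* OR */

/* Weighted sum plus bias, then step activation (as sketched earlier). */
int feedforward(const int inputs[ISIZE])
{
    double sum = weights[ISIZE];
    for (int i = 0; i < ISIZE; i++) sum += weights[i] * (double)inputs[i];
    return (sum > 0.0) ? 1 : 0;
}

/* Apply each pattern, compute error = desired - actual, and adjust
 * every weight by LEARNING_RATE * error * input (the bias input is
 * fixed at 1).  Stop once a full pass produces no errors. */
void train(void)
{
    int pass = 0, errors;
    do {
        errors = 0;
        for (int p = 0; p < 4; p++) {
            int err = desired[p] - feedforward(test_set[p]);
            if (err != 0) {
                errors++;
                for (int i = 0; i < ISIZE; i++)
                    weights[i] += LEARNING_RATE * err * test_set[p][i];
                weights[ISIZE] += LEARNING_RATE * err;
            }
        }
    } while (errors > 0 && ++pass < MAX_ITERATIONS);
}
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this loop terminates; MAX_ITERATIONS is only a safety cap.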

Finally, sample output for this simple example may be found in the following listing.

In this case, learning the OR operation needed three iterations of training (the value in parentheses is the desired output).

The final weights, including the bias, are also displayed.

Perceptron learning can be implemented in about 65 lines of C.