Like many other supervised learning algorithms, Perceptron learning follows a straightforward flow, but the way the network is updated differs.
The overall supervised flow is depicted in the diagram below.
I start by setting up my network (defining its topology and initial weights).
Then, after applying a training vector to the network, I iterate by adjusting the weights of my neural network based on the error (actual versus expected output) so that this input is categorized appropriately in the future.
After that, I apply a stop condition (no more errors are found, or a set number of training iterations has been reached).
I evaluate the network with unseen test examples (to determine how well it generalizes to unfamiliar input) and then deploy it into its target application once this procedure is complete.
This is the general flow of perceptron learning.

At the start, my network’s weights are set to small random numbers.
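As a minimal sketch of this initialization step (the function name and weight range are my own illustrative choices), the weights could be drawn like this:

```python
import random

def init_weights(num_inputs, low=-0.5, high=0.5):
    """Return random initial weights: one per input, plus one for the bias."""
    # The range [-0.5, 0.5] is an illustrative choice, not a requirement.
    return [random.uniform(low, high) for _ in range(num_inputs + 1)]
```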

I then iterate over my training set repeatedly until there are no more errors.

The term “applying a training vector” refers to presenting a training vector to the network and then executing the network (feeding that vector forward to yield an output value).
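A sketch of this feed-forward step for a single perceptron, assuming the last weight acts as the bias (the function name is mine):

```python
def feed_forward(weights, inputs):
    """Run the perceptron: weighted sum plus bias, then a step activation."""
    activation = weights[-1]  # bias weight, with an implicit input of 1.0
    for w, x in zip(weights[:-1], inputs):
        activation += w * x
    # Step function: fire (1) if the weighted sum reaches the threshold.
    return 1 if activation >= 0.0 else 0
```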

This output is subtracted from the desired output (the difference is called the error).

With a low learning rate, I use this error to adjust each weight based on the contribution of its input. In other words, the error is multiplied by the input (associated with the given weight) and by a small learning rate, and the weight is adjusted accordingly.
This procedure is repeated until no more errors occur.
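Putting the steps together, the whole procedure might look like the sketch below. All names, the learning rate, and the epoch cap are my own assumptions; logical AND is used as the training set because it is linearly separable, so the perceptron is guaranteed to converge on it.

```python
import random

def feed_forward(weights, inputs):
    activation = weights[-1]  # bias weight, implicit input of 1.0
    for w, x in zip(weights[:-1], inputs):
        activation += w * x
    return 1 if activation >= 0.0 else 0

def train(dataset, rate=0.1, max_epochs=100):
    num_inputs = len(dataset[0][0])
    # Random initial weights, one per input plus a bias weight.
    weights = [random.uniform(-0.5, 0.5) for _ in range(num_inputs + 1)]
    for _ in range(max_epochs):  # stop condition: iteration cap
        errors = 0
        for inputs, expected in dataset:
            actual = feed_forward(weights, inputs)
            error = expected - actual
            if error != 0:
                errors += 1
                for i, x in enumerate(inputs):
                    weights[i] += rate * error * x
                weights[-1] += rate * error
        if errors == 0:  # stop condition: no more errors
            break
    return weights

# Illustrative dataset: logical AND, which is linearly separable.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = train(and_data)
```

After training, `feed_forward(weights, ...)` classifies all four AND inputs correctly.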