Propagating the computations of all neurons within all layers, moving from left to right: this starts with feeding your feature vector(s)/tensors into the input layer and ends with the final prediction generated by the output layer. Forward-pass computations occur during training, in order to evaluate the objective/loss function under the current network parameter settings at each iteration, and during inference (prediction after training), when the network is applied to new/unseen data.
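A minimal sketch of this left-to-right propagation for a hypothetical two-layer network (the layer sizes, ReLU activation, and random weights here are illustrative assumptions, not a prescribed architecture):

```python
import numpy as np

def relu(z):
    # Elementwise nonlinearity applied at the hidden layer
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
# Hypothetical parameters: input dim 4 -> hidden dim 3 -> output dim 1
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    # Left to right: input layer -> hidden layer -> output layer
    h = relu(x @ W1 + b1)   # hidden-layer activations
    y_hat = h @ W2 + b2     # final prediction from the output layer
    return y_hat

x = rng.normal(size=(1, 4))   # one feature vector
y_hat = forward(x)            # shape (1, 1): a single prediction
```

The same `forward` function is reused both during training (to evaluate the loss) and at inference time on unseen inputs.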
Known as back-propagation, or “backprop”, this step is executed during training to compute the gradient of the objective/loss function with respect to the network’s parameters, so that they can be updated during a single iteration of some form of gradient descent (Adam, RMSProp, etc.). The name comes from the fact that, when a neural network is viewed as a computation graph, the procedure starts by computing objective/loss derivatives at the output layer and propagates them back towards the input layer (effectively, the chain rule from calculus in action), computing derivatives for, and making updates to, all parameters in all layers.
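The forward pass, backward pass, and parameter update can be sketched together for a hypothetical two-layer ReLU network trained with mean-squared error and plain gradient descent (the network shape, data, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical tiny network and data, just to show the mechanics
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
x = rng.normal(size=(5, 4))   # batch of 5 feature vectors
y = rng.normal(size=(5, 1))   # targets
lr = 0.05
losses = []

for step in range(200):
    # Forward pass: evaluate the loss under the current parameters
    z1 = x @ W1 + b1
    h = np.maximum(0.0, z1)            # ReLU
    y_hat = h @ W2 + b2
    losses.append(np.mean((y_hat - y) ** 2))

    # Backward pass: chain rule, from the output layer back to the input
    d_yhat = 2 * (y_hat - y) / len(y)  # dLoss/d(y_hat)
    dW2 = h.T @ d_yhat                 # gradient for output-layer weights
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                # propagate back through output layer
    d_z1 = d_h * (z1 > 0)              # back through the ReLU
    dW1 = x.T @ d_z1                   # gradient for hidden-layer weights
    db1 = d_z1.sum(axis=0)

    # Gradient-descent update (plain SGD; Adam/RMSProp would rescale these)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Note how every derivative after `d_yhat` reuses a derivative computed closer to the output layer; that reuse is exactly the right-to-left sweep the name describes.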