
Backward Propagation

For the backward phase (figure 7), each neuron j in the output layer computes the error between its actual output value o_j, known from the forward phase, and the desired target value t_j:

δ_j := (t_j − o_j) · f'(a_j) = (t_j − o_j) · o_j · (1 − o_j)
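
As an illustration, here is a minimal NumPy sketch of this step; the function and array names are assumptions, not part of the original text. It computes δ for all output neurons at once, using the sigmoid identity f'(a_j) = o_j · (1 − o_j):

    import numpy as np

    def output_delta(o_out, target):
        # delta_j = (t_j - o_j) * f'(a_j); for a sigmoid, f'(a_j) = o_j * (1 - o_j)
        return (target - o_out) * o_out * (1.0 - o_out)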

The error δj is propagated backwards to the previous hidden layer.

Each neuron i in a hidden layer calculates an error δ'_i that is in turn propagated backwards to the preceding layer. For this, the corresponding column of the weight matrix is used:

δ'_i := (∑_j w_ji · δ_j) · f'(a_i) = (∑_j w_ji · δ_j) · o_i · (1 − o_i)
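
Continuing the sketch above (again with illustrative names), the hidden-layer errors can be computed for all neurons at once by multiplying with the transposed weight matrix, so that column i of W supplies the weights w_ji:

    def hidden_delta(W, delta_out, o_hidden):
        # W has shape (n_out, n_hidden); column i holds the weights w_ji
        # leaving hidden neuron i, so W.T @ delta_out is sum_j w_ji * delta_j
        return (W.T @ delta_out) * o_hidden * (1.0 - o_hidden)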

To minimize the error, the weights of the projective edges of neuron i and the bias values in the receptive layer have to be adjusted. The old values are increased by:

Δw_ji = η · δ_j · o_i

Δθ_j = η · δ_j
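
A sketch of the update step under the same assumptions: the outer product of δ and the receptive-layer outputs yields all Δw_ji at once, and the bias change is simply η · δ_j:

    def update_layer(W, theta, delta, o_prev, eta=1.0):
        # W[j, i] += eta * delta[j] * o_prev[i]   (Δw_ji = η · δ_j · o_i)
        W += eta * np.outer(delta, o_prev)
        # theta[j] += eta * delta[j]              (Δθ_j = η · δ_j)
        theta += eta * delta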

η is the learning rate; its value is chosen empirically, here η ≈ 1.

The back-propagation algorithm minimizes the error by the method of gradient descent, where η is the length of each step.
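
Putting the three sketches together, one gradient-descent step for a network with a single hidden layer might look as follows (W1, W2, th1, th2 and the helper functions are the hypothetical names introduced above):

    def backward_step(x, o_hid, o_out, target, W1, th1, W2, th2, eta=1.0):
        d_out = output_delta(o_out, target)       # error at the output layer
        d_hid = hidden_delta(W2, d_out, o_hid)    # error propagated backwards
        update_layer(W2, th2, d_out, o_hid, eta)  # adjust output-layer weights
        update_layer(W1, th1, d_hid, x, eta)      # adjust hidden-layer weights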


