
Modified back-propagation

Figure 8: Modified back-propagation with neuron splitting

Finding the optimal configuration of a feedforward multilayer perceptron, that is, the sizes of its input, hidden and output layers, is very difficult. Too many hidden neurons produce a network that cannot extract the underlying function rule and takes longer to train; too few hidden neurons make it impossible to reach a given error bound. The input and output layers themselves are determined by the problem, namely by the function to be approximated.

The modified back-propagation algorithm [9] improves the quality of a network by monotonic net incrementation. Training starts with a network containing only a few hidden neurons. While learning the training set, badly trained neurons are split periodically, and the old neuron's weights are distributed at random between the two new neurons (cf. Figure 8). This continues until a maximum number of neurons in a hidden layer is reached. Training with the modified back-propagation algorithm reaches a better error minimum in a shorter time.
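The following is a minimal sketch of the splitting step, assuming a single hidden layer stored as two weight matrices; since the text only states that the old weights are "distributed at random", the split below (a noisy copy of the incoming weights and a random partition of the outgoing weights) is one plausible interpretation, not the authors' implementation.

import numpy as np

def split_neuron(W1, W2, idx, rng=np.random.default_rng()):
    """Split hidden neuron `idx` into two, growing the hidden layer by one unit.

    W1: input-to-hidden weights, shape (n_hidden, n_in)
    W2: hidden-to-output weights, shape (n_out, n_hidden)
    idx: index of the badly trained neuron (assumed to be chosen beforehand)
    """
    # Incoming weights: duplicate the old row and add small noise to break symmetry.
    incoming = W1[idx]
    noise = rng.normal(scale=0.01, size=incoming.shape)
    W1_new = np.vstack([W1, incoming + noise])
    W1_new[idx] = incoming - noise

    # Outgoing weights: distribute each old weight at random between the two neurons
    # so that the two new contributions sum to the original one.
    outgoing = W2[:, idx]
    frac = rng.uniform(size=outgoing.shape)
    W2_new = np.hstack([W2, (outgoing * (1.0 - frac))[:, None]])
    W2_new[:, idx] = outgoing * frac

    return W1_new, W2_new

In use, a splitting criterion (for example, the accumulated error attributed to each hidden neuron) would select idx; after each split the enlarged network is trained further with standard back-propagation until the next split or until the maximum hidden-layer size is reached.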


