
Batch Learning

For parallel batch learning the training set is divided among several identical copies of the net, which each learn their part of the set in parallel [6]. The weight corrections computed by the copies are summed and applied globally to all nets after each epoch.
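The following minimal sketch simulates this scheme sequentially in Python/NumPy. The single-layer sigmoid net, the squared-error correction rule, and names such as n_workers and learning_rate are assumptions made for illustration and are not taken from [6]; only the overall pattern (divide the set, sum the corrections, update all copies identically) reflects the text above.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weight_correction(w, x_chunk, y_chunk, learning_rate):
    # Summed per-pattern gradient-descent corrections over one chunk
    # of the training set (assumed single-layer sigmoid net).
    out = sigmoid(x_chunk @ w)
    delta = (y_chunk - out) * out * (1.0 - out)
    return learning_rate * (x_chunk.T @ delta)

# Toy training set; every copy of the net starts from identical weights.
x = rng.normal(size=(120, 5))
y = (x.sum(axis=1) > 0).astype(float)
w = rng.normal(scale=0.1, size=5)

n_workers = 4
learning_rate = 0.05
x_chunks = np.array_split(x, n_workers)
y_chunks = np.array_split(y, n_workers)

for epoch in range(100):
    # Each copy processes its part of the training set independently ...
    local_corrections = [
        weight_correction(w, xc, yc, learning_rate)
        for xc, yc in zip(x_chunks, y_chunks)
    ]
    # ... then the corrections are summed and applied globally, so all
    # copies begin the next epoch with identical weights.
    w += np.sum(local_corrections, axis=0)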

Communication is only necessary to calculate the global sum of the weight corrections after each epoch. In addition, a global broadcast has to be performed after the master node has calculated the random numbers for the new weights following a splitting step, but this happens very rarely.
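As a sketch of this communication pattern, the two operations could be expressed with MPI (here via mpi4py); the choice of MPI is an assumption, since the text does not name a message-passing library, and compute_local_correction is a hypothetical helper standing in for the local part of an epoch.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def train_epoch(weights, local_patterns, compute_local_correction):
    # Each node computes the weight corrections for its part of the
    # training set; the epoch-wise global sum is the only regular
    # communication.
    local = compute_local_correction(weights, local_patterns)
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)
    return weights + total

def redistribute_new_weights(weights):
    # Rare case: after splitting, the master node (rank 0) draws the
    # random numbers for the new weights and broadcasts them to all
    # other copies of the net.
    comm.Bcast(weights, root=0)
    return weights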

Batch learning differs from on-line training in convergence speed and in the quality of the approximation. There are training problems for which the batch learning algorithm is more suitable than on-line training, and vice versa.


