Implementation




We use PVM 3.3.4 and XPVM 1.0.3 on a cluster of Sun Sparc workstations. Recently we have also run PVM/PARIX 1.0.1 by Parsytec on our system of T800 Transputers. For batch learning the sequential algorithm runs on every PVM node. Because the computation nodes differ in performance, load balancing is necessary: a low-performance workstation must receive a smaller part of the training set than a powerful one.
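The proportional split of the training set can be sketched as follows. This is an illustrative sketch, not the actual implementation: the function name and the relative node speeds are assumptions, and in practice the speeds would come from benchmarking the PVM nodes.

```python
def partition_training_set(num_patterns, node_speeds):
    """Split a training set across heterogeneous nodes in proportion
    to each node's relative speed (hypothetical benchmark values).

    Returns the number of training patterns assigned to each node.
    """
    total_speed = sum(node_speeds)
    # ideal (fractional) share for each node
    shares = [num_patterns * s / total_speed for s in node_speeds]
    counts = [int(x) for x in shares]
    # hand out the leftover patterns to the nodes with the
    # largest fractional remainders, so the sum is exact
    leftover = num_patterns - sum(counts)
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - counts[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        counts[i] += 1
    return counts
```

For example, with one workstation twice as fast as two others, `partition_training_set(100, [1.0, 2.0, 1.0])` assigns the fast node half of the patterns.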

Because on-line training places high demands on communication, we are implementing its parallelization on our Transputer system under the PARIX runtime environment. The following time measurements show the superior communication performance of PARIX.
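The reason on-line training is so sensitive to communication performance can be illustrated with a simple cost model: on-line learning synchronizes the weight changes after every training pattern, whereas batch learning synchronizes only once per epoch, so per-message latency dominates the on-line case. The model below is a rough sketch; the parameter values and the 4-byte float size are assumptions, not measured figures from our systems.

```python
def comm_time_per_epoch(num_patterns, num_weights,
                        latency, bandwidth, mode):
    """Rough per-epoch communication cost (illustrative model only).

    Each synchronization exchanges the full weight-change vector.
    latency   -- per-message startup time in seconds
    bandwidth -- transfer rate in bytes per second
    mode      -- "online" (sync after every pattern) or "batch"
                 (sync once per epoch)
    """
    bytes_per_sync = num_weights * 4  # 4-byte floats
    time_per_sync = latency + bytes_per_sync / bandwidth
    syncs = num_patterns if mode == "online" else 1
    return syncs * time_per_sync
```

With, say, 1000 patterns, the on-line variant pays the message startup latency 1000 times per epoch, which is why a low-latency environment such as PARIX matters far more there than for batch learning.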



Frank M. Thiesing
Mon Dec 19 16:19:41 MET 1994