Results




For testing the algorithms we use the chaotic series generated by the VERHULST process (the logistic map x(t+1) = a * x(t) * (1 - x(t))) [7].
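As a minimal sketch of how such a test series can be generated: the VERHULST process is the logistic map, which behaves chaotically for parameter values near a = 4. The parameter a = 4.0 and the start value x0 = 0.1 below are assumptions for illustration; the values actually used in the paper are not stated here.

```python
def verhulst_series(n, a=4.0, x0=0.1):
    """Return n values of the logistic map x(t+1) = a * x(t) * (1 - x(t)).

    a=4.0 and x0=0.1 are illustrative choices in the chaotic regime,
    not the paper's actual parameters.
    """
    series = []
    x = x0
    for _ in range(n):
        x = a * x * (1.0 - x)
        series.append(x)
    return series

sample = verhulst_series(10)
```

For x0 in (0, 1) and a = 4 the iterates stay in [0, 1], which makes the series convenient as a bounded training signal for a network with sigmoid outputs.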

 
Figure 5: Comparison of training results with and without neuron splitting

The configuration of the FMP net is described by a string that contains topology information:

<layers>::<input>:<hidden1>(<max1>): ... :<output>

<layers> denotes the number of layers excluding the input layer. The remaining values give the number of neurons in each layer at the start of training; the value in parentheses is the maximum size a hidden layer can reach through neuron splitting with the modified back-propagation.
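The topology string above can be read mechanically; a small sketch of such a parser follows. The function name and the returned structure are illustrative, not part of the paper.

```python
import re

def parse_topology(spec):
    """Parse a '<layers>::<input>:<hidden1>(<max1>):...:<output>' string.

    Returns (number_of_layers_without_input, list_of_layer_sizes),
    where each layer size is a pair (initial_neurons, max_neurons);
    layers without a parenthesized maximum cannot grow by splitting.
    """
    layers_part, rest = spec.split("::")
    n_layers = int(layers_part)
    sizes = []
    for field in rest.split(":"):
        m = re.fullmatch(r"(\d+)(?:\((\d+)\))?", field)
        start = int(m.group(1))
        maximum = int(m.group(2)) if m.group(2) else start
        sizes.append((start, maximum))
    return n_layers, sizes

# The net from Figure 5: 2 trainable layers, 3 input neurons,
# one hidden layer starting at 6 neurons (growable to 24), 1 output.
layers, sizes = parse_topology("2::3:6(24):1")
```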

The results in Figure 5 show that the modified back-propagation with the net 2::3:6(24):1 approximates the VERHULST process and predicts its values best. The training set consists of the ``original'' values to the left of the vertical line; to the right of it, the quality of the forecast can be seen for the different nets.

For the time measurements we trained this net with batch learning. The training times and speedups for different numbers of nodes are presented in the following tables.

Instance 1 has a training set of 100,000 items and instance 2 one of 10,000 items; both are trained for 100 epochs. The results are very promising but can certainly be improved further.
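The speedup figures in such tables are conventionally derived from the measured times as speedup = T(1)/T(p) and efficiency = speedup/p. A small sketch of that computation follows; the timing values used are placeholders, not the paper's measurements.

```python
def speedup_table(times):
    """Compute (speedup, efficiency) per node count from measured times.

    times: dict mapping number_of_nodes -> training_time_seconds,
    which must include an entry for 1 node as the sequential baseline.
    """
    t1 = times[1]
    return {p: (t1 / tp, t1 / (tp * p)) for p, tp in sorted(times.items())}

# placeholder timings for illustration only
example = speedup_table({1: 400.0, 2: 210.0, 4: 115.0})
```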



Frank M. Thiesing
Mon Dec 19 16:19:41 MET 1994