Efficient mapping of backpropagation algorithm onto a network of workstations
V Sudhakar, CSR Murthy - IEEE Transactions on Systems, Man …, 1998 - ieeexplore.ieee.org
In this paper, we present an efficient technique for mapping a backpropagation (BP) learning algorithm for multilayered neural networks onto a network of workstations (NOW's). We adopt a vertical partitioning scheme, where each layer in the neural network is divided into p disjoint partitions, and map each partition onto an independent workstation in a network of p workstations. We present a fully distributed version of the BP algorithm and also its speedup analysis. We compare the performance of our algorithm with a recent work involving the vertical partitioning approach for mapping the BP algorithm onto a distributed memory multiprocessor. Our results on SUN 3/50 NOW's show that we are able to achieve better speedups by using only two communication sets and also by avoiding some redundancy in the weights computation for one training cycle of the algorithm.
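The vertical partitioning scheme described above can be sketched in a small simulation. This is a hypothetical illustration, not the authors' implementation: each of p simulated workstations owns a disjoint column slice of every layer's weight matrix, computes the activations for its own neuron partition, and the slices are then exchanged (an all-gather, one of the communication steps) so that every workstation holds the full layer output before the next layer is processed. Layer sizes, the number of partitions, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes for a small multilayered network (illustrative values).
layers = [8, 6, 4]
p = 2  # number of simulated workstations

# One weight matrix per layer; worker k owns the columns feeding its
# disjoint slice of that layer's neurons (vertical partitioning).
weights = [rng.standard_normal((n_in, n_out))
           for n_in, n_out in zip(layers, layers[1:])]
parts = [np.array_split(np.arange(n_out), p) for n_out in layers[1:]]

def forward(x):
    """Vertically partitioned forward pass: each worker computes only
    its own neuron slice, then the slices are gathered so every worker
    sees the full layer activations before the next layer."""
    a = x
    for W, part in zip(weights, parts):
        slices = []
        for k in range(p):  # loop iteration k plays the role of workstation k
            cols = part[k]
            slices.append(np.tanh(a @ W[:, cols]))  # worker k's partition
        a = np.concatenate(slices)  # communication step: all-gather of slices
    return a

x = rng.standard_normal(layers[0])
dist = forward(x)

# Sanity check: the partitioned pass matches an unpartitioned forward pass.
ref = x
for W in weights:
    ref = np.tanh(ref @ W)
assert np.allclose(dist, ref)
```

The same column-partitioned structure carries over to the backward pass, where each worker updates only the weights it owns; the paper's contribution lies in limiting each training cycle to two communication sets and avoiding redundant weight computation.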