
A Fully Parallel and Scalable Implementation of a Hopfield Neural Network on the SHARC-NET Supercomputer

Abstract

Artificial neural networks (ANNs) are an established area of artificial intelligence (AI) and computer science. ANNs have been applied to a wide range of research and industrial projects. However, despite many years of ANN research, the typical implementation remains a single-threaded programming model. This paper presents a fully parallel implementation of a Hopfield Neural Network on a supercomputer. The goal of this project is to develop a core learning unit capable of scaling across a large number of nodes in a supercomputer. Furthermore, we integrate techniques that minimize dependencies on any particular topology, making it easier to port the implementation to other supercomputing environments. Ideally, other SHARC-NET users will extend these ideas and conduct research using the tools developed in this project. This paper outlines the issues associated with developing this artificial neural network on SHARC-NET, the benefits of such work, the difficulties encountered, and future directions.
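
The abstract does not reproduce the update rule itself, so the following is a minimal illustrative sketch in Python of a standard Hopfield network (Hebbian weight construction plus synchronous recall). It is not the authors' SHARC-NET implementation; the function names and the serial, NumPy-based structure are assumptions made for illustration only. In a parallel version of the kind described, each compute node would typically own a block of rows of the weight matrix, compute its slice of the matrix-vector product, and exchange updated neuron states with the other nodes.

    import numpy as np

    def train_hopfield(patterns):
        """Build a Hopfield weight matrix from bipolar (+1/-1) patterns
        using the standard Hebbian outer-product rule."""
        patterns = np.asarray(patterns, dtype=float)
        n = patterns.shape[1]
        W = patterns.T @ patterns / n   # sum of outer products, scaled by network size
        np.fill_diagonal(W, 0.0)        # no self-connections
        return W

    def recall(W, state, max_iters=100):
        """Synchronously update all neurons until the state stops changing.
        In a data-parallel version, each process would own a block of rows
        of W, compute its slice of W @ state, and exchange updated states."""
        state = np.asarray(state, dtype=float)
        for _ in range(max_iters):
            new_state = np.sign(W @ state)
            new_state[new_state == 0] = 1.0   # break ties toward +1
            if np.array_equal(new_state, state):
                break
            state = new_state
        return state

    if __name__ == "__main__":
        # Store one pattern and recover it from a corrupted copy.
        stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
        W = train_hopfield(stored)
        noisy = stored[0].copy()
        noisy[:2] *= -1                   # flip two bits
        print(recall(W, noisy))           # expected to match stored[0]

Note that fully synchronous updates can oscillate between two states, which is why the sketch caps the number of iterations; asynchronous (one-neuron-at-a-time) updates are guaranteed to converge to a fixed point of the network's energy function.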

Authors

Sykes ER; Mirkovic A

Pagination

pp. 103-109

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 2005

DOI

10.1109/hpcs.2005.6

Name of conference

19th International Symposium on High Performance Computing Systems and Applications (HPCS'05)