Abstract
Understanding, and then replicating, the computing paradigm(s) used in the brain's neocortex is a computer architecture research problem of unquestionable practical and scientific importance, but one that will require an unconventional approach. Unconventional because it begins with the end product – a biological computing engine possessing remarkable capabilities and operating efficiencies – and then tries to reconstruct, or reverse-architect, the underlying computational paradigm(s). When considered as a whole, the task is daunting. However, within a meta-architecture framework, a roadmap for reverse-architecting the neocortex can be reduced to exploring natural layers of abstraction. This talk lays out a potential roadmap, based on several years of study and experimentation, in a methodical, bottom-up manner.
The first important milestone along the road is the development of feedforward, biologically plausible neural networks capable of unsupervised, continual learning and implementable with high energy efficiency. To achieve this first milestone, the initial task is the crucial abstraction from biological electronics (as revealed through observation and experimentation) to mathematically based computational primitives. Here, neuroscience researchers have made significant progress over the past two decades, and their research provides a firm foundation for this first reverse-architecting step. The next step moves from plausible primitives to functional building blocks that may be combined to realize the milestone neural network. At this level, for a number of practical reasons, there is much less experimental support, and the problem becomes more challenging. This also means that the research space is virtually wide open, with many opportunities for innovation. Furthermore, after the first milestone is reached and the roadmap going forward becomes a little clearer, reverse-architecting the higher-level cognitive functions promises to be at the leading edge of computer architecture research for decades to come.
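To make the notion of a mathematically based computational primitive concrete, consider (as an illustration, not drawn from the talk itself) the leaky integrate-and-fire neuron, one widely used abstraction of biological neuron dynamics. The minimal Python sketch below shows how the membrane equation reduces to a simple update rule; the model choice and all parameter values are assumptions for illustration only.

```python
# Illustrative sketch: the leaky integrate-and-fire (LIF) neuron, a common
# mathematical abstraction of biological neuron dynamics. All parameter
# values are hypothetical, chosen only for demonstration.

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=0.0,
             v_thresh=1.0, v_reset=0.0):
    """One Euler step of the LIF membrane equation:
        tau * dv/dt = -(v - v_rest) + input_current
    Returns the updated membrane potential and whether a spike occurred."""
    v = v + (dt / tau) * (-(v - v_rest) + input_current)
    if v >= v_thresh:
        return v_reset, True   # spike: reset the membrane potential
    return v, False

# Drive the neuron with a constant input and record its spike times.
v, spike_times = 0.0, []
for t in range(200):
    v, spiked = lif_step(v, input_current=1.5)
    if spiked:
        spike_times.append(t)
print(spike_times)
```

In a spiking-network setting, functional building blocks of the kind described above would be composed from primitives such as this, with a learning rule (for example, spike-timing-dependent plasticity) layered on top; those composition and learning choices are exactly where the open research space lies.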
Biography
James E. Smith is Adjunct Professor at Carnegie Mellon University Silicon Valley and Professor Emeritus at the University of Wisconsin-Madison. He attended the University of Illinois, receiving his PhD in 1976. He then joined the faculty of the University of Wisconsin-Madison, teaching and conducting research – first in fault-tolerant computing, then in computer architecture. Over the years, he has also worked in industry (Control Data, ACA, Cray Research, Google, and Intel) on a variety of computer research and development projects. Prof. Smith made a number of early contributions to the development of superscalar processors, including basic mechanisms for dynamic branch prediction and for implementing precise traps. He has also studied vector processor architectures and worked on the development of innovative microarchitecture paradigms. He received the 1999 ACM/IEEE Eckert-Mauchly Award for these contributions. For the past seven years, he has been developing neuron-based computing paradigms at home along the Clark Fork near Missoula, Montana.