Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach in computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are growing in popularity. Our work in this area focuses on two key challenges: minimizing off-chip data transfer and maximizing the utilization of the computation units. In this talk, I will present an overview of my research on understanding and improving the efficiency of server systems, and dive deeper into our recent results on FPGA-based accelerators for machine learning.
Mike Ferdman is an Assistant Professor of Computer Science at Stony Brook University, where he co-directs the Computer Architecture Stony Brook (COMPAS) Lab. His research interests are in the area of computer architecture, with particular emphasis on the server computing stack. His current projects center on FPGA accelerators for machine learning, emerging memory technologies, and speculative microarchitectural techniques. Mike received a BS in Computer Science, and a BS, MS, and PhD in Electrical and Computer Engineering, all from Carnegie Mellon University.