University of Massachusetts Amherst

Xia and Yang Publish Paper in Nature Machine Intelligence

Joshua Yang and Qiangfei Xia

Qiangfei Xia and Joshua Yang of the Electrical and Computer Engineering (ECE) Department lead a 17-person research team that has published a paper titled "Long short-term memory networks in memristor crossbar arrays" in Nature Machine Intelligence, a new Nature research journal launching in January 2019 that covers a wide range of topics in machine learning, robotics, and artificial intelligence; some accepted papers are being posted online in early December. The paper demonstrates that memristor crossbar arrays can overcome the computing bottlenecks of traditional long short-term memory (LSTM) networks, which underpin crucial applications such as data prediction, natural language understanding, machine translation, speech recognition, and video surveillance.

An LSTM is a building unit (or block) for the layers of a recurrent neural network, used most prominently in speech recognition and natural language processing. The authors define a memristor as a two-terminal “memory resistor” that performs computation via physical laws at the same location where information is stored. This feature eliminates the data transfer between memory and processor that traditional computers require, leading to much better speed and energy efficiency.
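For readers who want the mechanics, below is a minimal sketch of a single LSTM time step, written in Python with NumPy. It is illustrative only; the names and shapes are assumptions, not taken from the paper. The forget, input, and output gates decide what history to erase, what new information to store, and what to expose to the next layer.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # W: (4H, D), U: (4H, H), b: (4H,) hold the stacked weights for
        # the forget (f), input (i), candidate (g), and output (o) gates.
        z = W @ x + U @ h_prev + b
        f, i, g, o = np.split(z, 4)
        f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
        c = f * c_prev + i * np.tanh(g)   # forget old state, write new
        h = o * np.tanh(c)                # gated view of the cell state
        return h, c

Every matrix multiplication above acts on stored weights, and that multiply-where-the-data-lives operation is precisely what a memristor crossbar can perform in analog hardware.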

As the Nature Machine Intelligence paper asserts, “Recent breakthroughs in recurrent deep neural networks with LSTM units have led to major advances in artificial intelligence.”

However, as the authors continue, state-of-the-art LSTM models, with their significantly increased complexity and large number of parameters, face a computing bottleneck caused by both limited memory capacity and limited data-communication bandwidth.

As the Nature Machine Intelligence paper explains, a solution to this LSTM blockage “can be implemented with a memristor crossbar array, which has a small circuit footprint, can store a large number of parameters, and offers in-memory computing capability that contributes to circumventing the ‘von Neumann bottleneck’ [see below]. We illustrate the capability of our crossbar system as a core component in solving real-world problems…and show that memristor LSTM is a promising low-power and low-latency hardware platform for edge inference.”

The von Neumann bottleneck refers to the limited computing throughput and energy efficiency in a computer built with von Neumann architecture, in which the data processing and memory units are physically separated with a single common bus in between. Both terms are named after John von Neumann, a 20th century mathematician, scientist, and computer science pioneer who in 1945 proposed the computer architecture which is still the basis for our digital computers today. He was also involved in the Manhattan Project.

As background to this bottleneck issue, the Nature Machine Intelligence paper goes on to explain that the recent success of artificial intelligence largely results from the advances of deep neural networks with various architectures, among which the LSTM network is an important one. By enabling the learning process to remember or forget the history of observations, LSTM-based recurrent neural networks are responsible for recent achievements in analyzing data for a variety of applications.

However, as the authors observe, “When implemented in conventional digital hardware, LSTM networks have complicated structures and hence drawbacks for inference latency and power consumption. These issues become more prominent as more applications involve the processing of temporal data near the source in the era of the Internet of Things. Although there has been an increased level of efforts in designing novel architectures to accelerate LSTM-based neural networks, low parallelism and limited bandwidth between computing and memory units are still outstanding issues. It is therefore imperative to seek an alternative computing paradigm for LSTM networks.” 

One good alternative, say the authors, is the memristor crossbar array. Built into a crossbar architecture, memristors have been successfully employed in feed-forward, fully-connected neural networks that showed significant advantages in power consumption and inference latency over CMOS-based counterparts.
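In a crossbar, those network weights live as device conductances: applying input voltages along the rows produces, by Ohm's and Kirchhoff's laws, column currents proportional to the stored vector-matrix product, so the multiplication happens where the weights sit. Here is a highly idealized sketch of that behavior in NumPy; it ignores device noise, nonlinearity, and wire resistance, and all names are illustrative assumptions rather than the paper's implementation.

    import numpy as np

    def crossbar_vmm(voltages, conductances):
        # Each column current is the sum of row_voltage * cell_conductance
        # down that column (Kirchhoff's current law): i = G^T v.
        return conductances.T @ voltages

    def encode_weights(W, g_max):
        # Physical conductances cannot be negative, so a signed weight is
        # commonly stored as the difference of two positive devices.
        scale = g_max / np.max(np.abs(W))
        Wt = W.T * scale          # crossbar rows = inputs, columns = outputs
        return np.clip(Wt, 0, None), np.clip(-Wt, 0, None)

    # Usage: the current difference is proportional to W @ v.
    W = np.array([[0.5, -1.0], [2.0, 0.25]])
    v = np.array([0.3, -0.7])
    g_pos, g_neg = encode_weights(W, g_max=1e-4)
    y = crossbar_vmm(v, g_pos) - crossbar_vmm(v, g_neg)

Because the whole array computes in one step, the operation needs no weight fetches at all, which is the source of the power and latency advantages the authors report.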

“The memristor crossbar implementation of an LSTM,” say the authors, “to the best of our knowledge, has yet to be demonstrated, primarily because of the relative scarcity of large memristor arrays. In this work, we demonstrate our experimental implementation of a core part of LSTM networks in memristor crossbar arrays.”

As a demonstration, the authors applied the memristor-based LSTM to predicting the number of airline passengers based on data from past years and to recognizing a person by his or her gait. Such recognition is important for identifying a person when the face is camouflaged or facial recognition is technically difficult.
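As a rough illustration of the first task, the toy loop below drives the lstm_step function from the earlier sketch over a stand-in time series and reads a next-value prediction off the final hidden state. The weights are untrained placeholders and the data is synthetic, not the airline dataset the authors used.

    import numpy as np

    # Reuses the illustrative lstm_step defined in the earlier sketch.
    H, D = 16, 1                          # hidden size, input size (assumed)
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (4 * H, D))    # untrained placeholder weights
    U = rng.normal(0, 0.1, (4 * H, H))
    b = np.zeros(4 * H)
    w_out = rng.normal(0, 0.1, H)         # linear readout to one value

    series = np.sin(np.linspace(0, 6, 24))    # stand-in for monthly counts
    h, c = np.zeros(H), np.zeros(H)
    for x_t in series:                        # fold the history into (h, c)
        h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
    prediction = w_out @ h                    # next-value estimate (untrained)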

As the authors conclude, “This work shows that the LSTM networks built in memristor crossbar arrays represent a promising alternative computing paradigm with high-speed energy efficiency.”

The Nature Machine Intelligence paper represents a collaboration among 17 researchers from the UMass Amherst ECE department, Hewlett Packard Labs in Palo Alto, California, and the Air Force Research Laboratory Information Directorate in Rome, New York. (January 2019)