CEA-LETI and Stanford Target Edge-AI Apps with Breakthrough NVM Memory Cell

Researchers at CEA-Leti and Stanford University have developed the world’s first circuit integrating multiple-bit non-volatile memory (NVM) technology, Resistive RAM (RRAM), with silicon computing units, along with new memory-resiliency features. Storing 2.3 bits per cell gives the chip 2.3 times the capacity of existing single-bit-per-cell RRAM. Target applications include energy-efficient smart-sensor nodes that support artificial intelligence on the Internet of Things, or “edge AI”.

The proof-of-concept chip has been validated for a wide variety of applications (machine learning, control, security). Designed by a Stanford team led by Professors Subhasish Mitra and H.-S. Philip Wong and realized in CEA-Leti’s cleanroom in Grenoble, France, the chip monolithically integrates two heterogeneous technologies: 18 kilobytes (KB) of on-chip RRAM built on top of commercial 130nm silicon CMOS, which hosts a 16-bit general-purpose microcontroller core with 8KB of SRAM.

The new chip delivers 10 times better energy efficiency, at similar speed, than standard embedded flash, thanks to its low operation energy and its ultra-fast, energy-efficient transitions between on and off modes. Those transitions matter because, to save energy, smart-sensor nodes must turn themselves off; non-volatility, which lets memory retain data when power is off, is therefore becoming an essential on-chip memory characteristic for edge nodes. The 2.3-bits/cell RRAM design also enables higher memory density (dense NVM integration), which yields better application results: for example, 2.3x better neural network inference accuracy compared with an equivalent 1-bit/cell memory.
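As a rough illustration of where the 2.3-bits/cell figure comes from (the article gives no per-cell details), the minimal Python sketch below assumes each cell can be programmed to about five distinguishable resistance levels, since log2(5) ≈ 2.32, and compares the resulting array capacity with a 1-bit/cell baseline. The array size and the helper names `bits_per_cell` and `array_capacity_bits` are purely illustrative, not taken from the paper.

```python
import math

# Minimal sketch (not from the paper): how multi-level RRAM cells raise capacity.

def bits_per_cell(levels: int) -> float:
    """Information capacity of one cell with `levels` distinguishable states."""
    return math.log2(levels)

def array_capacity_bits(num_cells: int, levels: int) -> float:
    """Total capacity of an array of identical multi-level cells."""
    return num_cells * bits_per_cell(levels)

if __name__ == "__main__":
    cells = 64 * 1024                               # hypothetical array size
    single_level = array_capacity_bits(cells, 2)    # 1 bit/cell baseline
    multi_level = array_capacity_bits(cells, 5)     # log2(5) ≈ 2.32 bits/cell
    print(f"1-bit/cell capacity   : {single_level / 8 / 1024:.1f} KB")
    print(f"~2.3-bit/cell capacity: {multi_level / 8 / 1024:.1f} KB")
    print(f"Density gain          : {multi_level / single_level:.2f}x")
```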

The technology was presented on Feb. 19 at the International Solid-State Circuits Conference (ISSCC) 2019 in San Francisco, in a paper titled “A 43pJ/Cycle Non-Volatile Microcontroller with 4.7μs Shutdown/Wake-up Integrating 2.3-bit/Cell Resistive RAM and Resilience Techniques”.

NVM technologies such as RRAM, however, suffer from write failures, which have a catastrophic impact at the application level and significantly diminish the memory’s usefulness. The CEA-Leti and Stanford team created a new technique called ENDURER that overcomes this major challenge, giving the chip a 10-year functional lifetime when continuously running inference on the Modified National Institute of Standards and Technology (MNIST) database, for example.
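The article does not describe how ENDURER works internally, so the sketch below is not the ENDURER technique. It only illustrates a generic write-verify-and-retry loop of the kind commonly used to mitigate NVM write failures; the failure probability, retry limit, and the functions `raw_write` and `resilient_write` are arbitrary assumptions for the simulation.

```python
import random

# Illustrative only: generic write-verify-retry mitigation for flaky NVM writes.
WRITE_FAILURE_PROB = 1e-3   # assumed per-attempt failure probability
MAX_RETRIES = 3             # assumed retry budget per write

def raw_write(address: int, value: int) -> int:
    """Simulated RRAM write that occasionally stores a corrupted value."""
    return value if random.random() > WRITE_FAILURE_PROB else value ^ 0x1

def resilient_write(memory: dict, address: int, value: int) -> bool:
    """Write, read back, and retry until the value verifies or retries run out."""
    for _ in range(MAX_RETRIES):
        memory[address] = raw_write(address, value)
        if memory[address] == value:   # read-back verification
            return True
    return False  # a real system could remap this address to a spare location

if __name__ == "__main__":
    mem: dict[int, int] = {}
    failures = sum(not resilient_write(mem, a, a & 0xFF) for a in range(10_000))
    print(f"unrecovered writes out of 10,000: {failures}")
```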

“The Stanford/CEA-Leti team demonstrated a complete chip that stores multiple bits per on-chip RRAM cell and correctly processes the stored information, in contrast to previous demonstrations that used standalone RRAM or just a few cells in an RRAM array,” said Thomas Ernst, Leti’s chief scientist for silicon components and technologies. “This multi-bit storage improves the accuracy of neural network inference, a vital component of AI.”

Mitra said the chip demonstrates several industry firsts for RRAM technology. These include new algorithms that achieve multiple bits-per-cell RRAM at the full memory level, new techniques that exploit RRAM features as well as application characteristics to demonstrate the effectiveness of multiple bits-per-cell RRAM at the computing system level, and new resilience techniques that achieve a useful lifetime for RRAM-based computing systems.

“This is only possible with a unique team with end-to-end expertise across technology, circuits, architecture, and applications,” he said. “The Stanford SystemX Alliance and the Carnot Chair of Excellence in NanoSystems at CEA-Leti enabled such a unique collaboration.”
