Discovering faster matrix multiplication algorithms with reinforcement learning
August, 2022
Abstract
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory.
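The core CIM primitive the abstract alludes to is an analogue matrix-vector multiplication carried out inside the RRAM crossbar: weights are stored as device conductances, inputs are applied as row voltages, and the column currents sum the products in place. Below is a minimal NumPy sketch of that idea under common simplifying assumptions (a differential conductance pair per weight, ideal linear devices, optional Gaussian read noise); the function names, conductance range, and noise model are illustrative, not taken from the paper.

```python
import numpy as np

def weights_to_conductance(w, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto differential conductance pairs (G+, G-).

    Each weight is encoded as the difference of two non-negative device
    conductances, a common differential scheme for analogue CIM.
    (Assumed encoding, not the specific scheme used in the paper.)
    """
    scale = (g_max - g_min) / np.abs(w).max()
    g_pos = g_min + scale * np.clip(w, 0, None)
    g_neg = g_min + scale * np.clip(-w, 0, None)
    return g_pos, g_neg, scale

def crossbar_mvm(x, g_pos, g_neg, scale, read_noise_sigma=0.0, rng=None):
    """Analogue matrix-vector multiply on an idealized crossbar.

    Inputs are applied as row voltages; by Ohm's law each cell passes a
    current V * G, and Kirchhoff's current law sums the currents on every
    column (bit line). Optional Gaussian read noise stands in for device
    variability.
    """
    rng = np.random.default_rng() if rng is None else rng
    i_pos = x @ g_pos          # column currents through the G+ array
    i_neg = x @ g_neg          # column currents through the G- array
    i_out = i_pos - i_neg      # differential sensing cancels the G offset
    if read_noise_sigma > 0:
        i_out = i_out + rng.normal(0.0, read_noise_sigma, i_out.shape)
    return i_out / scale       # convert currents back to weight units

# Usage: compare the analogue result against a digital reference.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8))   # weight matrix stored in the array
x = rng.standard_normal(16)        # input activation vector
g_pos, g_neg, scale = weights_to_conductance(w)
y_analogue = crossbar_mvm(x, g_pos, g_neg, scale,
                          read_noise_sigma=1e-7, rng=rng)
y_digital = x @ w
print(np.max(np.abs(y_analogue - y_digital)))
```

Because the multiply-accumulate happens where the weights reside, no weight data has to cross a memory bus, which is the source of the energy savings the abstract claims.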