TY - GEN
T1 - High-Speed VGSOT-MRAM Design for Non-Volatile Cache Memories
AU - Guo, Xianzeng
AU - Wang, Chao
AU - Xiang, Luman
AU - Wang, Zhaohao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Spin-orbit torque magnetic random-access memory (SOT-MRAM) is a promising candidate for next-generation memory systems, particularly for cache applications, owing to its ultra-fast write speed and high endurance. However, SOT-MRAM confronts challenges of large bit-cell area and high write current. Voltage-gated SOT-MRAM (VGSOT-MRAM) mitigates these issues through the voltage-controlled magnetic anisotropy (VCMA) mechanism, reducing the write current and enabling a high-density device structure, but at the cost of slower read speed due to high device resistance. To address the read speed issue, we propose the local read bit-line (LRBL) scheme, which decreases the load capacitance of the read path and can reduce the read latency by 55.0% with minimal area overhead. Additionally, an efficient parallel-discharge-serial-sensing (PDSS) scheme is proposed to optimize the sequential read operations in cache, achieving up to 84.8% latency reduction during cache pre-fetch operations. Furthermore, implementing an appropriate error checking and correction (ECC) algorithm can further diminish the total read latency by 26.4%.
AB - Spin-orbit torque magnetic random-access memory (SOT-MRAM) is a promising candidate for next-generation memory systems, particularly for cache applications, owing to its ultra-fast write speed and high endurance. However, SOT-MRAM confronts challenges of large bit-cell area and high write current. Voltage-gated SOT-MRAM (VGSOT-MRAM) mitigates these issues through the voltage-controlled magnetic anisotropy (VCMA) mechanism, reducing the write current and enabling a high-density device structure, but at the cost of slower read speed due to high device resistance. To address the read speed issue, we propose the local read bit-line (LRBL) scheme, which decreases the load capacitance of the read path and can reduce the read latency by 55.0% with minimal area overhead. Additionally, an efficient parallel-discharge-serial-sensing (PDSS) scheme is proposed to optimize the sequential read operations in cache, achieving up to 84.8% latency reduction during cache pre-fetch operations. Furthermore, implementing an appropriate error checking and correction (ECC) algorithm can further diminish the total read latency by 26.4%.
KW - Cache
KW - Energy Efficient
KW - High Read Performance
KW - High Speed
KW - VGSOT-MRAM
UR - https://www.scopus.com/pages/publications/105010588866
U2 - 10.1109/ISCAS56072.2025.11044202
DO - 10.1109/ISCAS56072.2025.11044202
M3 - Conference contribution
AN - SCOPUS:105010588866
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
BT - ISCAS 2025 - IEEE International Symposium on Circuits and Systems, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE International Symposium on Circuits and Systems, ISCAS 2025
Y2 - 25 May 2025 through 28 May 2025
ER -