Research Spotlight: “Investigating the Impact of Dzyaloshinskii–Moriya Interaction and Current Pulse Shape on Critical Current Density and Write Energy of SOT-MRAMs”

A team from Georgia Tech, led by PhD student Md Nahid Haque Shazon and advised by Dr. Azad Naeemi, explored how to make SOT-MRAM faster and more energy-efficient. By tuning the Dzyaloshinskii–Moriya interaction and shaping the write-current pulse, in particular using descending triangular pulses, they achieved significant write-energy savings and faster switching. The project won the People’s Choice Award for […]
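
As a rough intuition for why pulse shape matters, the minimal sketch below compares the Joule write energy, E = R·∫I(t)² dt, of a conventional rectangular pulse against a descending-triangle pulse with the same peak current and duration. It is not the authors' micromagnetic model, and all device numbers are hypothetical placeholders; whether the reduced drive still switches the free layer is what the full DMI and pulse-shape study addresses, not this sketch.

```python
# Minimal sketch (not the authors' model): Joule write energy of a rectangular
# pulse vs a descending-triangle pulse with the same peak current and duration.
# All numbers are hypothetical placeholders, not values from the paper.
import numpy as np

R_sot   = 1_000.0   # SOT-channel resistance (ohms), hypothetical
I_peak  = 150e-6    # peak write current (A), hypothetical
t_pulse = 2e-9      # pulse duration (s), hypothetical

t = np.linspace(0.0, t_pulse, 10_001)

i_rect = np.full_like(t, I_peak)        # conventional rectangular pulse
i_tri  = I_peak * (1.0 - t / t_pulse)   # descending triangle: ramps down to zero

# Write energy E = R * integral of I(t)^2 dt
e_rect = R_sot * np.trapz(i_rect**2, t)
e_tri  = R_sot * np.trapz(i_tri**2, t)

print(f"rectangular pulse energy: {e_rect * 1e15:.1f} fJ")
print(f"descending triangle:      {e_tri * 1e15:.1f} fJ "
      f"({100 * (1 - e_tri / e_rect):.0f}% lower at the same peak current)")
```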

Research Spotlight: Torch2Chip with Jian Meng (Cornell Tech)

Advancements in AI model compression and hardware acceleration are essential for enabling efficient, high-performance deep learning systems. Torch2Chip is a customizable deep neural network compression and deployment toolkit designed to bridge the gap between AI algorithms and prototype hardware accelerators. Developed by Jian Meng, a Ph.D. candidate at Cornell Tech under the guidance of Professor […]
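
To make the compression side concrete, here is a minimal, generic per-tensor symmetric INT8 weight quantizer of the kind such toolkits export to integer hardware. It is not Torch2Chip's actual API; the function names and the 8-bit setting are assumptions for illustration.

```python
# Illustrative only: generic per-tensor symmetric INT8 post-training quantization.
# This is NOT Torch2Chip's API; names and the 8-bit choice are assumptions.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights to int8 values plus one scale, so hardware can use integer MACs."""
    scale = float(np.max(np.abs(w))) / 127.0            # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)          # a toy weight matrix
q, s = quantize_int8(w)
err = float(np.mean(np.abs(w - dequantize(q, s))))
print(f"int8 storage: {q.nbytes} B vs fp32 {w.nbytes} B; mean |error| = {err:.4f}")
```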

Fusion3D: Research Spotlight

Fusion3D, developed by CoCoSys scholars Sixu Li, Yang (Katie) Zhao, Chaojian Li, Zhifan Ye, and other members of PI Prof. Yingyan (Celine) Lin’s lab at Georgia Tech, presents an end-to-end acceleration framework for real-time 3D intelligence. By jointly optimizing computation and data movement across the algorithm, architecture, and system-integration levels, this approach significantly improves […]
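
A back-of-envelope way to see why data movement matters as much as raw compute is a roofline-style estimate like the sketch below. The hardware and workload numbers are hypothetical placeholders, not Fusion3D's measurements.

```python
# Roofline-style estimate: is a 3D-rendering-style kernel limited by compute
# or by off-chip data movement? All numbers are hypothetical placeholders.
flops          = 2e11   # arithmetic operations per frame, assumed
bytes_moved    = 8e9    # off-chip traffic per frame, assumed
peak_flops     = 1e13   # accelerator peak throughput (FLOP/s), assumed
dram_bandwidth = 2e11   # off-chip bandwidth (bytes/s), assumed

t_compute = flops / peak_flops            # time if only compute mattered
t_memory  = bytes_moved / dram_bandwidth  # time if only data movement mattered

print(f"compute-bound estimate: {t_compute * 1e3:.0f} ms")
print(f"memory-bound estimate:  {t_memory * 1e3:.0f} ms")
print("frame time is limited by:",
      "data movement" if t_memory > t_compute else "compute")
```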

Research Spotlight: HISIM with Pragnya Nalla, UM

Pragnya Nalla is a third-year Electrical and Computer Engineering student at the University of Minnesota, Twin Cities, working under the guidance of Prof. Yu (Kevin) Cao on HW/SW co-design for heterogeneous integration. Together with Zhenyu Wang (TSMC) and other collaborators, she has contributed to the development of HISIM, a cutting-edge tool for fast, accurate, and […]

Data4AIGChip with Yongan (Luke) Zhang, Georgia Tech

Data4AIGChip, developed by Yongan (Luke) Zhang at Georgia Tech, leverages advanced AI models and automated data generation to enhance hardware design capabilities, making the process more accessible to developers of varying expertise levels. By fine-tuning large language models with specialized datasets, this innovative approach democratizes hardware design, enabling the creation of customized, efficient hardware solutions […]
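
For a sense of what fine-tuning with specialized datasets can look like in practice, the sketch below packages one automatically generated specification-to-HDL pair as an instruction-tuning record in JSON Lines form. The field names and the example module are assumptions for illustration, not Data4AIGChip's actual data format.

```python
# Illustrative only: one way to store (spec, HDL) pairs as instruction-tuning
# records for an LLM. Field names and the example module are assumptions,
# not Data4AIGChip's actual schema.
import json

record = {
    "instruction": "Write synthesizable Verilog for a parameterizable N-bit "
                   "up-counter with synchronous reset.",
    "response": (
        "module counter #(parameter N = 8) (\n"
        "  input  wire         clk,\n"
        "  input  wire         rst,\n"
        "  output reg  [N-1:0] count\n"
        ");\n"
        "  always @(posedge clk)\n"
        "    if (rst) count <= {N{1'b0}};\n"
        "    else     count <= count + 1'b1;\n"
        "endmodule\n"
    ),
}

# Fine-tuning datasets are commonly stored as JSON Lines, one record per line.
with open("hw_finetune.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```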