A cargo truck carrying Samsung Electronics' high-bandwidth memory 4 chips is parked at an undisclosed Samsung plant in this handout photo released Thursday. Courtesy of Samsung Electronics
Samsung Electronics announced Thursday that it has begun mass production and shipping of high-bandwidth memory 4 (HBM4), marking the world’s first delivery of the advanced chip for artificial intelligence (AI) accelerators to customers.
For over a year, the company’s chipmaking Device Solutions division had refrained from officially announcing updates on its HBM line, as the predecessor HBM3E struggled in the market due to yield issues. With the latest announcement, Samsung appears to be signaling confidence in its technological edge and in the stability of its mass production system.
Samsung Electronics’ HBM4 chips use its 1c process, the sixth-generation 10-nanometer-class DRAM technology, for the DRAM cell dies, and a 4-nanometer foundry process for the base die.
While its main rival SK hynix is using the previous-generation 1b process for HBM4 to prioritize stability, Samsung said it pursued the more advanced process from the design phase to secure top-tier performance, and achieved stable yields without any redesigns.
“Instead of taking the conventional path of using existing proven designs, Samsung took the leap and adopted the most advanced nodes like the 1c DRAM and 4-nanometer logic process for HBM4,” said Hwang Sang-joon, head of memory development at Samsung Electronics.
“By leveraging our process competitiveness and design optimization, we are able to secure substantial performance headroom, enabling us to satisfy our customers’ escalating demands for higher performance, when they need it.”
Samsung Electronics' HBM4 / Courtesy of Samsung Electronics
Based on these technologies, Samsung’s HBM4 chips achieved data processing speeds of up to 11.7 gigabits per second (Gbps) per pin, exceeding the Joint Electron Device Engineering Council (JEDEC) standard of 8 Gbps. This represents a 1.22-fold increase over HBM3E’s maximum pin speed of 9.6 Gbps.
The company said HBM4 performance can be further enhanced up to 13 Gbps, effectively mitigating the data bottlenecks that come with scaling AI models. Total memory bandwidth per stack rises 2.7-fold over HBM3E, to a maximum of 3.3 terabytes per second.
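The figures above follow from simple arithmetic: per-stack bandwidth is pin speed multiplied by the number of I/O pins. A minimal sketch, assuming the interface widths defined in the JEDEC specs (2048 pins per HBM4 stack, 1024 for HBM3E; the widths are not stated in the article itself):

```python
# Reproduce the article's bandwidth figures from pin speed x interface width.
# Assumption: 2048 I/O pins per HBM4 stack and 1024 for HBM3E (JEDEC spec values).

def stack_bandwidth_tbps(pin_speed_gbps: float, io_pins: int) -> float:
    """Total bandwidth per stack in terabytes per second."""
    return pin_speed_gbps * io_pins / 8 / 1000  # bits -> bytes, Gb -> TB

hbm3e = stack_bandwidth_tbps(9.6, 1024)   # HBM3E at its 9.6 Gbps max pin speed
hbm4 = stack_bandwidth_tbps(13.0, 2048)   # HBM4 at the enhanced 13 Gbps

print(round(hbm4, 2))          # ~3.33 TB/s, matching the article's 3.3 TB/s
print(round(hbm4 / hbm3e, 1))  # 2.7x uplift over HBM3E, as stated
```

The same arithmetic confirms the 1.22-fold pin-speed claim: 11.7 / 9.6 ≈ 1.22.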
Source: Korea Times News