SK hynix's small outline compression attached memory module 2 (SOCAMM2) 192GB / Courtesy of SK hynix
SK hynix said Monday it has begun mass production of its small outline compression attached memory module 2 (SOCAMM2) 192GB, a next‑generation chip for artificial intelligence (AI) servers produced with its sixth-generation 10‑nanometer‑class (1c) process.
SOCAMM2 is a server memory module that leverages low-power double data rate (LPDDR) memory chips commonly used for smartphones, aimed at cutting power consumption to roughly one-third of conventional server modules.
Designed specifically for AI servers, it uses a thin, high-density form factor, improving signal integrity and making it easier to swap or upgrade.
SOCAMM2 is gaining attention among AI data center operators as power efficiency becomes increasingly important for managing total cost of ownership.
While commonly used AI memory such as high-bandwidth memory (HBM) is mounted within the package of logic chips such as graphics processing units (GPUs) or central processing units (CPUs), SOCAMM2 is typically placed next to the logic chips on the system board. In this setup, HBM supports computing acceleration, while SOCAMM2 improves overall system-level power efficiency by complementing conventional DDR-based memory modules.
Notably, SK hynix uses the 1c process to manufacture the LPDDR5X memory for the SOCAMM2.
In the road map spanning the 1a, 1b and 1c process generations, 1c is considered one of the most advanced nodes currently available, delivering both performance gains and improved power efficiency. Industry officials said DDR5 built on the 1c process is known to offer about 11 percent faster speeds and more than 9 percent better power efficiency compared with 1b-based DDR5.
“With its 1c process, the SOCAMM2 delivers more than twice the bandwidth of conventional RDIMMs while improving energy efficiency by over 75 percent, making it a solution tuned for high-performance AI workloads,” the company said.
The company noted that the product has been optimized for Nvidia’s Vera Rubin, a next-generation AI computing platform.
Source: Korea Times News