In a significant development signaling the next leap in high-bandwidth memory technology, SK Hynix has officially announced plans to commence mass production of its 12-high (12Hi) HBM4 memory in October 2025. This strategic timeline aligns with the anticipated release of NVIDIA’s next-generation GPU architecture, codenamed Rubin, underscoring the deepening collaboration between memory makers and GPU vendors.

The HBM4 (High Bandwidth Memory, generation 4) standard is expected to deliver unprecedented data throughput and power efficiency, catering to the ever-growing requirements of AI training, high-performance computing (HPC), and advanced graphics workloads. SK Hynix’s 12Hi stack, comprising twelve vertically stacked DRAM dies, combines performance gains with a compact physical footprint, making it well suited for advanced AI accelerators and next-generation GPUs.
Key Features of SK Hynix’s 12Hi HBM4 Technology
The 12Hi HBM4 memory represents a significant jump over previous HBM3E offerings. Some of the anticipated features and advantages include:
- Increased Bandwidth: Per-stack bandwidth is expected to roughly double that of current HBM3E modules, which top out at around 1.2 TB/s, largely by widening the memory interface.
- Lower Power Consumption: Improved power efficiency will be achieved through smaller process nodes and advanced stacking techniques, making them suitable for energy-conscious data centers.
- Thermal Optimization: The 12Hi stacking method enhances thermal management capabilities, crucial for high-density AI workloads.
- Higher Capacity: Each HBM4 module might offer up to 48 GB per stack (twelve 32Gb dies), enabling large-scale AI models and simulations to run with fewer memory stacks.
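The bandwidth and capacity figures above follow from simple arithmetic on interface width, per-pin data rate, and die density. The sketch below illustrates that arithmetic; the specific widths and pin rates used are illustrative assumptions, not confirmed SK Hynix specifications.

```python
# Back-of-envelope HBM per-stack bandwidth and capacity.
# Interface widths and pin rates here are illustrative assumptions.

def stack_bandwidth_tbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s: bus width x per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000  # Gb/s -> GB/s -> TB/s

def stack_capacity_gb(die_density_gbit: int, num_dies: int) -> int:
    """Stack capacity in GB: die density (Gbit) x number of dies / 8."""
    return die_density_gbit * num_dies // 8

# HBM3E-class example: 1024-bit interface at ~9.6 Gb/s per pin -> ~1.2 TB/s
print(stack_bandwidth_tbs(1024, 9.6))

# Hypothetical HBM4-class example: 2048-bit interface at 8 Gb/s per pin -> ~2 TB/s
print(stack_bandwidth_tbs(2048, 8.0))

# A 12-high stack of 32 Gbit dies -> 48 GB
print(stack_capacity_gb(32, 12))
```

The takeaway is that a generational bandwidth jump can come from interface width as much as from per-pin speed, which is why packaging and stacking technology figure so prominently in HBM4 production.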
According to industry insiders, SK Hynix is preparing its advanced fabrication facilities in South Korea and is working closely with packaging partners to meet the technical challenges of producing and stacking 12 DRAM dies in a single module. Extensive testing and yield management efforts are underway to ensure reliable volume production by the announced target date.
NVIDIA’s Rubin Architecture: A Strategic Partner
The mass production timeline coincides with the projected release of NVIDIA’s Rubin architecture, which is expected to succeed the Blackwell GPUs launching in 2024. Rubin GPUs will likely serve as the backbone for NVIDIA’s advanced AI servers and cloud-based computing platforms, such as the DGX and HGX lines. Analysts predict that Rubin’s architectural refinements, coupled with HBM4’s raw memory bandwidth, will redefine the landscape of generative AI and simulation platforms.

NVIDIA’s decision to adopt HBM4 highlights both the technical merit and critical necessity of new memory technologies to handle the exploding complexity in large language models (LLMs), video processing applications, and AI inference frameworks. SK Hynix and NVIDIA have collaborated closely on prior GPU generations, and this new move emphasizes a continued and deepening partnership.
Strategic Implications for the Semiconductor Industry
SK Hynix’s HBM4 production ambitions underscore several strategic trends in the broader semiconductor ecosystem:
- Increased Spend on AI Infrastructure: Demand for high-performance AI compute solutions is accelerating investments in both GPU and memory production capabilities.
- Supply Chain Realignments: The timing of HBM4 production has ripple effects across packaging, thermal solutions, and high-density printed circuit board (PCB) providers.
- Shift Toward Vertical Integration: Companies like SK Hynix are investing in cross-functional capabilities, from fabrication to substrate packaging, to control yields, performance, and cost.
Market observers expect other memory vendors, including Samsung and Micron, to reveal their own HBM4 roadmaps shortly. However, insiders suggest SK Hynix’s early move to HBM4 12Hi could position the company as the performance leader in the high-bandwidth memory segment well into 2026.
Looking Forward
While October 2025 is still over a year away, the race toward scalable, efficient, and ultra-fast memory is accelerating. As AI workloads continue to grow in complexity and scale, technological progress in supporting components like high-bandwidth memory will be a determining factor in the next wave of compute systems. SK Hynix’s commitment to deliver 12Hi HBM4 modules at scale marks a pivotal milestone not only for the company but also for the entire semiconductor and AI computing ecosystem.
With test samples expected to reach partners in early 2025, and volume shipments commencing just months later, SK Hynix is poised to play a central role in powering the next generation of AI and HPC platforms, starting with NVIDIA’s Rubin architecture and likely expanding to a broader range of accelerated computing applications in the years to follow.