How the rise of AI is reshaping Samsung and SK hynix
![Samsung Electronics Vice Chairman and head of semiconductor division Jun Young-hyun delivers a speech at the equipment installation ceremony for the next-generation semiconductor research & development complex held at Yongin, Gyeonggi, on Nov. 18, 2024. [SAMSUNG ELECTRONICS]](https://koreajoongangdaily.joins.com/data/photo/2025/09/09/5ebb5a50-f45a-444a-9e27-a3c20435a4e8.jpg)
Samsung Electronics Vice Chairman and head of semiconductor division Jun Young-hyun delivers a speech at the equipment installation ceremony for the next-generation semiconductor research & development complex held at Yongin, Gyeonggi, on Nov. 18, 2024. [SAMSUNG ELECTRONICS]
[CHIP REPORT ③]
Korea’s semiconductor industry is undergoing a dramatic shake-up fueled by the explosive rise of generative AI. Market leaders are losing ground while once-overlooked underdogs are gaining momentum. This Chip Report series unpacks the forces driving this shift and explores how the industry’s new hierarchy is likely to take shape in the years ahead.
When Samsung Electronics' Vice Chairman and head of semiconductor division Jun Young-hyun assumed leadership in May of last year, he shook the room with a blunt metaphor. “Right now, Samsung’s semiconductor business is like an herbivorous dinosaur,” he told executives. The comparison to a massive creature — too large, too slow to adapt and ultimately doomed to extinction — quickly spread through the ranks.
Jun has invoked his “dinosaur theory” repeatedly, raising it five or six times at official meetings during his fourteen months in office. His point has remained constant: the larger the organization, the more likely it is to miss creeping problems or resist change until it is too late to recover. To make the lesson tangible, he even decorated his office with dinosaur models and used them to warn visiting executives about the cost of squandering Samsung’s window of opportunity.
Last year, following the company’s third-quarter earnings release, Jun went further, issuing what he called a “letter of repentance.” He admitted to causing concern regarding Samsung’s core technological competitiveness and the company’s long-term prospects and pledged to expose and debate problems openly in order to fix them. Since his arrival, Samsung’s research and development spending has risen significantly. At the March shareholder meeting, he promised to keep adding research staff and wafers for development, stressing that architectural shifts — including packaging — were now at the heart of competition.
“We need to secure competitive elemental technologies quickly to provide total solutions in both memory and logic packaging,” he said.
The urgency was clear. For the past four to five years, the semiconductor industry has been searching for the “next Nvidia,” yet Nvidia itself has continued to dominate. From AI training and inference chips to high-performance servers and "AI PCs," the company has held its grip on the market. For Korean chipmakers, the more realistic ambition is not to replace Nvidia but to discover the “next HBM,” the high bandwidth memory that has become the backbone of AI computing.
SK hynix's leadership under Kwak Noh-jung
At SK hynix, the transformation has been just as profound. Kwak Noh-jung, who joined Hyundai Electronics in 1994 and spent most of his career in manufacturing and technology, began to emerge as a next-generation leader in 2021, when he and Noh Jong-won were simultaneously promoted to president and appointed registered directors of the board, with Kwak overseeing development and production and Noh managing business operations. Although the arrangement was billed as dual-track leadership, the spotlight fell on Noh for his rapid ascent to president in just five years.
Their paths soon diverged. In March 2022, Kwak became co-CEO with Park Jung-ho, and by the end of 2023, he was the sole CEO. Noh, meanwhile, was reassigned in 2023 to co-lead Solidigm, the NAND flash and SSD subsidiary acquired from Intel.
![SK hynix CEO Kwak Noh-jung gives a presentation of next-generation AI memory at the SK AI Summit 2024, held at southern Seoul's Coex on Nov. 4, 2024. [SK HYNIX]](https://koreajoongangdaily.joins.com/data/photo/2025/09/09/7cbbc6e9-22c0-49bc-bba2-7a93ff16c8cd.jpg)
SK hynix CEO Kwak Noh-jung gives a presentation of next-generation AI memory at the SK AI Summit 2024, held at southern Seoul's Coex on Nov. 4, 2024. [SK HYNIX]
With the vice chairman era of Park Sung-wook and Park Jung-ho behind it, SK hynix entered a new phase under Kwak — the first leader in years to run the company without a vice chairman above him. Kwak is recognized for his collaborative approach, one that emphasizes sharing authority and leveraging collective expertise over unilateral decision-making.
That approach was evident in SK hynix’s sweeping restructuring last December. The company reorganized under five divisions reporting directly to the CEO: AI infrastructure, development, production, future technology research and the revived Corporate Center. The return of the Corporate Center was particularly striking. First created in 2012 after SK acquired Hynix, it had been dormant for years. Now reinstated, it oversees nearly every aspect of corporate support — strategy, finance, corporate culture, procurement, investor relations, communications, government relations and human resources — making it a giant control tower that combines the traditional responsibilities of chief financial officer, chief strategy officer, chief communications officer and chief human resources officer.
At the helm is Song Hyun-jong, who was promoted to president in 2023. Song had stepped back from organizational leadership in recent years but was brought back to lead the Corporate Center. He is remembered internally as one of the few who supported large-scale investment in HBM equipment back in 2018, when the business was still small and skepticism was widespread. His stance — that Nvidia’s demands justified bold investment — was vindicated only years later, after HBM sales took off. He has also played a major role in SK hynix’s operations strategy in China.
The five-division structure is designed for complexity. SK hynix must maintain its leadership in HBM, manage sensitive operations in China amid geopolitical strain, pursue next-generation research and development and maintain financial discipline. The revived Corporate Center is intended to balance those pressures.

Socamm after HBM
If HBM is the star of today’s memory market, Socamm is emerging as its promising neighbor. At Semicon Korea 2025, when a reporter mentioned Socamm in passing, SK hynix’s Kwak seized the opportunity to highlight change. He noted that just as processors have diversified into CPUs, tensor processing units and neural processing units depending on performance and applications, dynamic random access memory (DRAM), too, would branch into new roles. At the following month’s shareholder meeting, when pressed on what would succeed HBM, his answer was unambiguous: “We are preparing a range of solutions, including CXL [Compute Express Link], LPCAMM2 [low-power compression attached memory modules], Socamm and PIM [processing-in-memory].”
![SK hynix's System on Chip Attached Memory Module (Socamm) [SK HYNIX]](https://koreajoongangdaily.joins.com/data/photo/2025/09/09/230c4c0f-6dff-432c-a5d2-6baea1c98d1e.jpg)
SK hynix's System on Chip Attached Memory Module (Socamm) [SK HYNIX]
Socamm is built by stacking LPDDR DRAM. The module is only one-third the size of a conventional server RDIMM, yet it delivers roughly 2.5 times the processing speed while consuming less power — an ideal profile for high-performance AI devices. Although sometimes referred to as the “second HBM,” its function is distinct. HBM sits next to GPUs to handle massive parallel operations, whereas Socamm is positioned beside CPUs to improve logic efficiency. HBM is tightly packaged together with GPUs, while Socamm is detachable, reducing packaging costs and allowing capacity to be adjusted as needed.
Manufacturing is also simpler: HBM requires drilling and through-silicon via connections, but Socamm uses wire bonding. In essence, if HBM is like widening a two-lane road into sixteen, Socamm is like adding extra lanes only when they are required.
Nvidia is expected to become Socamm’s largest customer, as it already is for HBM. The company has announced plans to equip all of its upcoming AI accelerators with Socamm and has even revealed that future "AI PCs" will also carry the module. In a notable twist, Nvidia initially assigned Socamm development not to SK hynix, but to Samsung Electronics and Micron Technology. While SK hynix dominates HBM today, Nvidia pursued a two-vendor strategy from the outset for Socamm. For Samsung and Micron, the technology represents a chance at redemption after falling behind in HBM, while SK hynix is equally determined not to miss the opportunity.
Recent announcements have underscored the intensity of the race. Micron revealed that it had begun the mass production of Socamm while Samsung disclosed that it had begun work on a next-generation version. At first glance, this suggested Micron had won Nvidia’s business and Samsung was scrambling to catch up. The reality is more nuanced. Nvidia originally intended to incorporate Socamm into its “Blackwell Ultra” GB300, set for release this year, but board development complications forced the company to postpone adoption until the next-generation “Rubin.” As a result, the Socamm modules from Samsung and Micron already certified for the GB300 suddenly had no immediate demand.
Micron’s mass-production announcement and Samsung’s pivot to next-generation development were therefore natural moves, and industry insiders expect this pattern of rolling announcements to continue as all three memory giants fight for position.
CXL: The Airbnb of servers
If Socamm is a new kind of memory module, CXL is a new kind of architecture. Built on PCIe, it allows CPUs, GPUs, accelerators and memory to communicate directly, reducing redundant data transfers and increasing efficiency. Unlike Nvidia’s proprietary NVLink, which connects GPUs to GPUs, CXL supports high-speed communication across CPUs, GPUs and memory and even memory-to-memory transfers.
![Samsung Electronics' 128GB DRAM supporting Compute Express Link (CXL) 2.0 [SAMSUNG ELECTRONICS]](https://koreajoongangdaily.joins.com/data/photo/2025/09/09/44d927b7-c5b5-4405-b227-f3bde4b74641.jpg)
Samsung Electronics' 128GB DRAM supporting Compute Express Link (CXL) 2.0 [SAMSUNG ELECTRONICS]
The feature drawing the most attention is memory pooling. Traditionally, each server can only use its own memory. With CXL, servers can share memory dynamically. It is like borrowing a neighbor’s empty house when guests arrive: One server can tap into another’s idle memory during peak demand. This “Airbnb of servers” could significantly lower total data center costs, a critical issue for hyperscalers like Google, Meta, Microsoft and Alibaba that are spending heavily on GPUs but still struggling to profit from AI services.
The CXL consortium includes nearly every major player in the industry: Intel, AMD, Arm, Nvidia, Samsung, SK hynix and the world’s largest cloud companies.
In Korea, startup Xcena has drawn attention with its CXL 3.0 memory, built using Samsung’s four-nanometer process. Unlike the expansion-oriented solutions of larger rivals, Xcena’s product integrates thousands of RISC-V cores into the controller, enabling significant in-memory computation. CEO Kim Jin-young explains that memory utilization in data centers is currently only in the mid-thirty percent range. With CXL, he argues, the same tasks could be completed with one-tenth as many servers, while also enabling memory recycling as older modules are redeployed in new configurations.
Adoption may not be far off. Intel is expected to launch the first server CPUs supporting CXL 3.0 in the second half of next year. At that point, industry insiders expect the market to truly take off. For Samsung, SK hynix and their rivals, the stakes are clear: The race is about not only supplying memory for Nvidia’s accelerators, but also shaping the very architecture of computing in the AI era.
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
BY SHIM SEO-HYUN, LEE GA-RAM, PARK HAE-LEE, YI WOO-LIM [lee.jaelim@joongang.co.kr]