South Korean Chip Startup Panmnesia Raises $12.5M at an $81.4M Valuation
Panmnesia, a South Korean chip startup, has secured $12.5 million in seed funding, boosting its valuation to $81.4 million. The round, led by Daekyo Investment, also drew investments from SL Investment, Smilegate Investment, GNTech Venture Capital, Time Works Investment, Yuanta Investment, and Quantum Ventures Korea.
The company develops intellectual property for Compute Express Link (CXL), an open interconnect standard that aims to transform large data centers by enabling devices such as AI accelerator chips, processors, and memory to be pooled and shared.
Memory bottlenecks have long challenged data center operators running AI workloads, according to the company. Panmnesia says its CXL technology addresses the issue by pooling devices to improve efficiency.
“We do not have a standard interface, so it’s very difficult to accommodate all the different AI accelerators in the data center,” Panmnesia CEO Myoungsoo Jung told Reuters.
Prominent chip design software companies Synopsys and Cadence Design Systems have also expressed interest in Panmnesia’s technology.
Panmnesia’s funding arrives as the South Korean government pursues an ambitious plan to bolster the domestic AI chip industry, earmarking $800 million over the next five years to raise the share of Korean AI chips in domestic data centers from virtually zero to 80% by 2030.
Panmnesia’s AI Acceleration System
In April, Panmnesia revealed an advanced AI acceleration system named TrainingCXL that leverages CXL technology to address the escalating demands of memory-intensive AI applications such as ChatGPT.
The company says TrainingCXL delivers large memory capacity, scalable performance, and significantly shorter training times for AI applications.
According to Jung, the system’s memory capacity can scale up to 4 PB (petabytes), and it cuts training execution time by a factor of 5.3 compared with conventional systems built on the PCIe interconnect.
As major tech giants like Meta and Microsoft grapple with the need to expand the size of their AI models to stay competitive, the demand for computing systems capable of handling large-scale AI has been skyrocketing.