[News] Samsung breaks through memory bottleneck, announces HBM-PIM results

OP: kyle5241 (kyle)   2023-08-31 23:19:01
https://tinyurl.com/ywfs2xpy
Samsung breaks through memory bottleneck, announces HBM-PIM and LPDDR-PIM research results
https://tinyurl.com/ymux929s
Samsung breaks through memory bottleneck; announces research results on
HBM-PIM and LPDDR-PIM
At the 2023 Hot Chips forum, alongside Intel's announcement of its data
center chip products, a recent report from Korea's TheElec pointed out that
Samsung Electronics has announced research results on high bandwidth memory
(HBM) processing-in-memory (PIM) and low power DDR (LPDDR) PIM as part of
its push into the AI sector.
Samsung announced results for HBM processing-in-memory (HBM-PIM) and low-power DRAM processing-in-memory.
Previously, Samsung and AMD began a collaboration on PIM technology. Samsung
equipped HBM-PIM memory on AMD's commercial GPU accelerator card, the
MI-100. According to Samsung's research results, applying HBM-PIM to
generative AI more than doubles the accelerator's performance and power
efficiency compared to existing HBM.
Samsung previously began collaborating with AMD on processing-in-memory (PIM), integrating HBM-PIM into AMD's MI-100 GPU. According to Samsung's research, at the same power consumption, HBM-PIM can double compute performance.
To solve the memory bottleneck that has emerged in the AI semiconductor
sector in recent years, next-gen memory technologies like HBM-PIM have
received significant attention. HBM-PIM performs computation within the
memory itself through PIM technology, which simplifies data movement and
thereby enhances performance and power efficiency.
To address the memory bottleneck in AI computing, processing-in-memory has drawn considerable attention. HBM-PIM executes computation directly inside the memory, simplifying the data-transfer steps and thus improving performance and power efficiency.
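To see why computing inside the memory cuts data movement, here is a toy back-of-the-envelope sketch in Python. The element count, data width, and reduction-style workload are illustrative assumptions, not figures from Samsung's presentation:

```python
# Toy estimate of bus traffic for summing a 1M-element fp16 array,
# comparing a conventional path (all data crosses the memory bus to
# the processor) against a PIM path (the reduction runs inside the
# DRAM and only the result crosses the bus). Numbers are illustrative.

N = 1_000_000          # elements in the operand (assumed)
ELEM_BYTES = 2         # fp16 element size

# Conventional path: every element is read across the memory bus.
conventional_traffic = N * ELEM_BYTES

# PIM path: the sum is computed in-memory; only one scalar comes back.
pim_traffic = ELEM_BYTES

print(conventional_traffic // pim_traffic)  # → 1000000
```

For a pure reduction like this, traffic shrinks by a factor of N; real workloads mix operations, so actual savings are smaller, which is consistent with the roughly 2x figures reported above.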
Furthermore, to verify the Mixture of Experts (MoE) model, Samsung used 96
HBM-PIM-equipped MI-100 units to build an HBM-PIM cluster. On the MoE model,
the HBM-PIM accelerator doubled the performance and tripled the power
efficiency compared to HBM.
On the MoE model, Samsung's HBM-PIM demonstrated 2x the performance and a 3x improvement in power efficiency.
Industry sources explained that memory development has been slower than
advancements in AI accelerator technology. To alleviate this memory
bottleneck, it is necessary to expand the application of next-gen
semiconductors like HBM-PIM. Additionally, in areas like large language
models (LLMs), much of the data is frequently reused, so HBM-PIM
computation can also reduce data movement.
Memory speeds have improved far more slowly than AI accelerators have advanced, so adopting HBM-PIM is necessary to relieve the memory bottleneck. Moreover, for AI models such as LLMs, much of the data is reused repeatedly, so using HBM-PIM can greatly reduce data movement.
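The claim that LLM workloads are limited by memory rather than compute can be checked with a rough roofline estimate. Single-token decoding is essentially one matrix-vector product per layer, so each weight byte supports only about one floating-point operation. The matrix shape and the MI100-class peak-throughput and bandwidth figures below are ballpark assumptions for illustration only:

```python
# Rough roofline check for single-token LLM decoding (a GEMV per layer):
# arithmetic intensity is ~1 FLOP per byte of weights streamed, far below
# the ridge point of a modern GPU, so the step is bandwidth-bound. This
# is why extra in-memory bandwidth from PIM helps.

n, m = 8192, 8192                 # hypothetical weight matrix shape
flops = 2 * n * m                 # one multiply-add per weight
bytes_read = n * m * 2            # fp16 weights streamed once
intensity = flops / bytes_read    # FLOPs per byte

peak_flops = 184e12               # ~184 TFLOPS fp16 (MI100-class, assumed)
peak_bw = 1.2e12                  # ~1.2 TB/s HBM2 bandwidth (assumed)
ridge = peak_flops / peak_bw      # intensity needed to be compute-bound

print(intensity)                  # → 1.0
print(intensity < ridge)          # → True: decoding is bandwidth-bound
```

With an arithmetic intensity of ~1 FLOP/byte against a ridge point above 100, the GPU's compute units sit mostly idle waiting on memory, so moving the computation into the memory attacks the actual limiting resource.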
On the other hand, Samsung also introduced "LPDDR-PIM," which combines
mobile DRAM with PIM to enable processing and computing directly within edge
devices. Notably, because LPDDR-PIM is designed for edge devices, it offers
lower bandwidth (102.4GB/s) but saves 72% of power compared to conventional
DRAM.
Separately, for edge computing devices, Samsung combined mobile DRAM with PIM to create LPDDR-PIM, usable inside edge devices; its bandwidth is lower, but power consumption drops by 72%.
Previously, Samsung revealed its AI memory plans during its 2Q23 earnings
call. It not only mentioned that the HBM3 supply was undergoing customer
verification but also stated that it's actively developing new edge AI memory
products and PIM technology. Looking ahead, both HBM-PIM and LPDDR-PIM are
still some time away from commercialization. Compared to existing HBMs, PIMs
are quite expensive.
Samsung previously said its HBM3 is undergoing customer verification, and that it is also developing new edge-computing AI memory products and PIM technology. For now, HBM-PIM and LPDDR-PIM are still some distance from commercialization, mainly because PIM remains very expensive.
The Hot Chips forum is a prominent academic event in the semiconductor
industry. It's typically held in late August. Apart from Samsung, other major
companies like SK Hynix, Intel, AMD, and Nvidia also participated in this
event.
Thoughts:
If PIM pans out, advanced packaging won't be needed at all...
Author: darkangel119 (星星的眷族)   2023-08-31 23:53:00
The Samsung dynasty rises again
