The Future in Chips at Hot Chips
Sep 02, 2024
While today’s semiconductors can perform amazing feats in processing vast amounts of data, scientists remain on a never-ending quest to develop chips that break even more barriers in speed and the ability to handle challenging computational tasks. At the Hot Chips Conference, held August 25 through 27 at Stanford University, some of the latest innovations in chip design and architecture will be on display.
Not surprisingly, industry-leading companies such as Intel, AMD, and Nvidia will be well-represented in the presentations at this year’s conference. But there are also smaller, lesser-known (for now) startups showing promising, leading-edge products and technologies that could define the future of fields such as AI and machine learning.
Here is a snapshot of interesting sessions and demonstrations at this year’s Hot Chips Conference.
- Startup Flow Computing is demonstrating technology comprising a parallel processing unit and compiler that it says can speed up CPU code a hundredfold. The technology is based on research begun at the University of Eastern Finland and continued at the state-owned VTT Technical Research Centre. The approach could resolve CPU performance bottlenecks that persist with other processing approaches.
- Silicon Valley startup Enfabrica is demonstrating what it terms the industry’s first multi-GPU SuperNIC chip. According to the company, the chip will substantially increase bandwidth to GPUs, raising network performance for generative AI applications. It supports multi-port 800 Gigabit Ethernet, PCIe Gen 5, and CXL 2.0+ interfaces. Enfabrica developed a patented chip architecture to relieve some of the bottlenecks plaguing traditional chip architectures.
- Fabless semiconductor company EdgeCortix will show its next-generation Sakura-II accelerator for edge AI processing. According to the company, the processor offers benefits such as greater DRAM bandwidth, real-time data streaming, complex model handling, and on-chip power management.
- With the need for robust memory increasing, memory supplier SK Hynix is discussing an AI-specific computing memory solution in a session titled “From AiM Device to Heterogeneous AiMX-xPU System for Comprehensive LLM Inference.”
- In the chiplet sector, Intel is demonstrating a 4-terabit/sec optical compute interconnect for XPU connectivity. Intel will also present a session on its upcoming Lunar Lake processors for AI PCs.
- Tesla will discuss AI networks in a session titled “DOJO: An Exa-Scale Lossy AI Network using the Tesla Transport Protocol over Ethernet (TTPoE).”