Mesh Tiling Holds Key to Speeding Time-to-Market for SoCs in Generative AI Apps
Nov 02, 2024
More companies are seeking to employ system-on-chip (SoC) designs for future Generative AI products. To help meet these needs, network-on-chip (NoC) IP company Arteris has expanded its tiling capabilities and extended its mesh topology support to help design engineers speed time-to-market for Artificial Intelligence (AI) and Machine Learning (ML) compute in SoC designs. According to the company, the new functionality enables design teams to scale compute performance by more than 10 times while meeting project schedules and power, performance, and area (PPA) goals.
The rapid growth of generative AI has design engineers seeking ways to optimize the performance and functionality of SoC designs. Arteris employs patented network-on-chip IP to allow both cache-coherent and non-coherent interconnects to work together. The company’s NoC tiling with mesh topology enables engineers to scale the chip architecture into manageable, repeatable units. The topology allows SoC architects to create modular, scalable designs by replicating soft tiles across the chip. Each soft tile represents a self-contained functional unit, enabling faster integration, verification and optimization.
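The idea of replicating a self-contained tile across a mesh can be sketched in a few lines of code. The sketch below is purely conceptual: the names `SoftTile` and `build_mesh` are illustrative inventions, not part of any Arteris product or API. It shows the core property the article describes, that one verified tile design is stamped out into a grid, with each tile linking only to its nearest mesh neighbors.

```python
from dataclasses import dataclass

# Conceptual sketch only: SoftTile and build_mesh are hypothetical names,
# not an Arteris interface.

@dataclass(frozen=True)
class SoftTile:
    """A self-contained functional unit replicated across the mesh."""
    row: int
    col: int

    def neighbors(self, rows: int, cols: int):
        """Mesh topology: each tile links to its N/S/E/W neighbors only."""
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = self.row + dr, self.col + dc
            if 0 <= r < rows and 0 <= c < cols:
                yield (r, c)

def build_mesh(rows: int, cols: int):
    """Replicate one verified tile design into a rows x cols mesh."""
    return [SoftTile(r, c) for r in range(rows) for c in range(cols)]

mesh = build_mesh(4, 4)  # 16 identical tiles
# Each bidirectional link is counted from both ends, so divide by 2.
links = sum(len(list(t.neighbors(4, 4))) for t in mesh) // 2
print(len(mesh), links)  # 16 tiles, 24 mesh links
```

Because every tile is identical, integration and verification effort is paid once per tile design rather than once per instance, which is the scalability benefit the article attributes to soft tiles.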
Tiling coupled with mesh topologies within Arteris' flagship NoC IP products, FlexNoC and Ncore, is transformative for the ever-growing inclusion of AI compute in most SoCs. By combining tiling and mesh topologies, design engineers can reduce auxiliary processing unit (XPU) sub-system design time and overall SoC connectivity execution time by up to 50% versus manually integrated, non-tiled designs. According to Arteris, the tiling technology enables groups of NoCs to be turned off when not needed, reducing power usage. The topology allows the SoC power controller to dynamically enable processing elements depending on the workload, increasing overall power efficiency by 20%.
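The power-gating behavior described above can be illustrated with a toy model. This is a hedged sketch under simplifying assumptions, not Arteris' implementation: `PowerController` and its methods are hypothetical names, and real power gating involves retention, wake latency, and traffic draining that this model ignores. It only shows the scheduling idea, namely that tile groups a workload does not touch stay gated off.

```python
# Illustrative sketch: a toy SoC power controller that enables only the
# NoC tile groups a workload actually uses. All names are hypothetical.

class PowerController:
    def __init__(self, groups):
        # Each group is a named set of tile ids that can be gated together.
        self.groups = groups
        self.enabled = {name: False for name in groups}

    def schedule(self, workload_tiles):
        """Enable only groups whose tiles the workload touches."""
        for name, tiles in self.groups.items():
            self.enabled[name] = bool(tiles & workload_tiles)
        return {name for name, on in self.enabled.items() if on}

ctrl = PowerController({
    "quad0": {0, 1, 2, 3},
    "quad1": {4, 5, 6, 7},
})
active = ctrl.schedule({1, 2})  # a small inference job on two tiles
print(active)  # {'quad0'} -- quad1 stays gated off, saving power
```

The power saving comes from the unused group never being woken: in this toy run, a job touching tiles 1 and 2 powers only `quad0` while `quad1` remains dark.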
The first iteration of NoC tiling organizes Network Interface Units (NIUs) into modular, repeatable blocks, improving scalability, efficiency, and reliability in SoC designs. The result is larger, more advanced AI compute that supports fast-growing, sophisticated AI workloads for vision, Machine Learning (ML) models, Deep Learning (DL), Natural Language Processing (NLP) including Large Language Models (LLMs), and Generative AI, for both training and inference, including at the edge.
Arteris’ FlexNoC and Ncore NoC IP products, which offer expanded AI support via tiling and extended mesh topology capabilities, are now available to early-access customers and partners.