
SiFive Blog

The latest insights and in-depth technical analysis from RISC-V experts

January 13, 2020

Part 1: Fast Access to Accelerators: Enabling Optimized Data Transfer with RISC-V

This is the first in a series of blog posts about domain-specific accelerators (DSAs), which are becoming increasingly common in systems-on-chip (SoCs). A DSA provides higher performance per watt than a general-purpose processor by optimizing for the specialized function it implements. Examples of DSAs include compression/decompression units, random number generators, and network packet processors. A DSA is typically connected to the core complex through a standard IO interconnect, such as an AXI bus.

[Figure: IO Caching]

Unfortunately, data transfers between DSAs and the core complex, which are often performance-critical, can be inefficient in traditional SoCs. Figure 1 shows that the datapath between a DSA and the core complex in a traditional SoC traverses multiple interconnects and bridges. This increases the latency between the cores and a DSA to hundreds of cycles, making fine-grain interaction between cores and a DSA difficult.

RISC-V offers a unique opportunity to optimize such fine-grain communication between cores and DSAs. For example, as Figure 2 shows, a DSA can export a per-core DSA cache sitting next to a RISC-V core. The RISC-V core can poll for status changes in the DSA cache, reducing the latency of interaction between the core and the DSA to tens of cycles, while the DSA pushes status updates into the DSA cache through a sideband network. Others [1, 2] have argued for similar mechanisms.
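To make the polling pattern concrete, here is a minimal C sketch of a core spinning on a cacheable, memory-mapped status word that the DSA keeps up to date through its sideband path. The addresses, register layout, and names below are hypothetical illustrations, not part of any SiFive or DSA register map.

```c
#include <stdint.h>

/*
 * Minimal polling sketch. DSA_STATUS_ADDR and DSA_STATUS_DONE are
 * hypothetical: they stand in for a per-core, cacheable status word that
 * the DSA updates through its sideband network.
 */
#define DSA_STATUS_ADDR  0x60000000UL
#define DSA_STATUS_DONE  0x1u

/* Read the status word; because it is mapped cacheable and kept current by
 * the DSA, repeated polls hit the nearby DSA cache in tens of cycles instead
 * of crossing the IO interconnect on every read. */
static inline uint32_t dsa_read_status(void)
{
    volatile uint32_t *status = (volatile uint32_t *)DSA_STATUS_ADDR;
    return *status;
}

/* Spin until the DSA reports that the submitted work item is complete. */
void dsa_wait_done(void)
{
    while ((dsa_read_status() & DSA_STATUS_DONE) == 0) {
        /* A real driver might insert a pause/backoff hint here. */
    }
}
```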

The DSA cache can further improve core-DSA interaction performance by prefetching data from the DSA and merging small IO writes into bigger chunks. A custom RISC-V core could even integrate some of these mechanisms directly into the core pipeline.
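The write-merging idea can likewise be sketched in software. The example below accumulates small command writes in a local buffer and pushes them to the DSA as one larger transfer; the chunk size, write window, and doorbell register are assumptions for illustration, and a real DSA cache could perform this merging transparently in hardware.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch of merging small IO writes into one larger chunk before handing
 * them to the DSA. CHUNK_BYTES, the chunk window, and the doorbell register
 * are illustrative assumptions, not a real device interface.
 */
#define CHUNK_BYTES       64u
#define DSA_CHUNK_ADDR    0x60000100UL  /* hypothetical write window */
#define DSA_DOORBELL_ADDR 0x60000140UL  /* hypothetical doorbell     */

static uint8_t  chunk[CHUNK_BYTES];
static uint32_t chunk_fill;

/* Push the accumulated chunk to the DSA as one transfer and ring the
 * doorbell with the number of valid bytes. */
static void dsa_flush_chunk(void)
{
    volatile uint8_t  *win  = (volatile uint8_t  *)DSA_CHUNK_ADDR;
    volatile uint32_t *bell = (volatile uint32_t *)DSA_DOORBELL_ADDR;

    for (uint32_t i = 0; i < chunk_fill; i++)
        win[i] = chunk[i];      /* one bigger transfer instead of many tiny ones */
    *bell = chunk_fill;
    chunk_fill = 0;
}

/* Queue a small write (len must not exceed CHUNK_BYTES); it is sent only
 * once a full chunk has accumulated. Call dsa_flush_chunk() to drain any
 * partially filled chunk. */
void dsa_post_small_write(const void *data, uint32_t len)
{
    if (chunk_fill + len > CHUNK_BYTES)
        dsa_flush_chunk();
    memcpy(&chunk[chunk_fill], data, len);
    chunk_fill += len;
}
```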

[1] J. McCalpin, "Notes on Cached Access to Memory-Mapped IO Regions," https://sites.utexas.edu/jdm4372/2013/05/29/notes-on-cached-access-to-memory-mapped-io-regions

[2] Mukherjee et al., "Coherent Network Interfaces for Fine-Grain Communication," Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996.

For more details about SiFive's standard cores, or to customize and build domain-specific RISC-V cores, please visit sifive.com/risc-v-core-ip

Read More Insights from the RISC-V Experts

Building the Future of AI on Intelligent Accelerators
Blog Post
The Accelerator Control Unit (ACU) is a popular use case for our new Intelligence products; learn why in this in-depth look.
Enabling AI Innovation at the Far Edge
Blog Post
Much of the industry's attention today goes to hardware that pushes data-center AI performance ever higher. At Hot Chips 2025, demand for greater hyperscale compute performance dominated the agenda, and large, powerful chips took center stage.
The Perfect Solution for Local AI
Blog Post
In recent years, AI has been the focus of the technology industry. With the rapid growth of RISC-V, SiFive has taken a leading position through our Intelligence IP family, offering a scalable compute platform based on a single ISA with the ability to be customized for specific AI workloads.