FPGA accelerators are gaining increasing attention in both cloud and edge computing because of their hardware flexibility, high computational throughput, and low power consumption. However, the design flow of FPGAs often requires specific knowledge of the underlying hardware, which hinders their wide adoption by application developers. Therefore, the virtualization of FPGAs becomes extremely important to create a useful abstraction of the hardware suitable for application developers. Such abstraction also enables the sharing of FPGA resources among multiple users and accelerator applications, which is important because, traditionally, FPGAs have been mostly used in single-user, single-embedded-application scenarios. There are many works in the field of FPGA virtualization covering different aspects and targeting different application areas. In this survey, we review the system architectures used in the literature for FPGA virtualization. In addition, we identify the primary objectives of FPGA virtualization, based on which we summarize the techniques for realizing it. This survey helps researchers to efficiently learn about FPGA virtualization research by providing a comprehensive review of the existing literature.

Field-programmable gate arrays (FPGAs) are gaining increasing attention in both cloud and edge computing because of their hardware flexibility, superior computational throughput, and low energy consumption. Recently, commercial cloud services, including Amazon and Microsoft, have been employing FPGAs. In contrast with CPUs and GPUs, which are widely deployed in the cloud, FPGAs have several unique features that render them synergistic accelerators for both cloud and edge computing.

First, unlike CPUs and GPUs, which are optimized for the batch processing of memory data, FPGAs are inherently efficient at processing streaming data from inputs/outputs (I/Os) at the network edge. With abundant registers and configurable I/O resources, a streaming architecture can be implemented on an FPGA to process data streams directly from I/Os in a pipelined fashion. The pipeline registers allow efficient data movement among processing elements (PEs) without involving memory access, resulting in significantly improved throughput and reduced latency.

Second, unlike CPUs and GPUs, which have a fixed architecture, FPGAs can adapt their architecture to best fit any algorithm's characteristics thanks to their hardware flexibility. Specifically, the hardware resources on an FPGA can be dynamically reconfigured to compose both spatial and temporal (pipeline) parallelism at a fine granularity and on a massive scale. As a result, FPGAs can provide consistently high computational throughput for accelerating both high-concurrency and high-dependency algorithms, serving a much broader range of cloud and edge applications.

Third, FPGA devices consume an order of magnitude less power than CPUs and GPUs and are up to two orders of magnitude more energy-efficient, especially for processing streaming data or executing high-dependency tasks. Such merits lead to improved thermal stability as well as reduced cooling and energy costs, which is critically needed for both cloud and edge computing.

Even though FPGAs offer great benefits over CPUs and GPUs, these benefits come with design and usability trade-offs. Conventionally, FPGA application development requires the use of a hardware description language (HDL) and knowledge of the low-level details of FPGA hardware.
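To make the streaming, pipelined processing style described above concrete, here is a minimal sketch in C of the kind of kernel that high-level synthesis (HLS) tools map onto an FPGA pipeline. The function name and the 3-tap moving-sum computation are illustrative choices, not from the survey; the HLS `PIPELINE` directive is shown as a comment, since it is a tool hint and the code compiles as plain C. The key point is the shift register: each sample moves between "stages" through registers, with no memory round-trip between processing elements.

```c
#include <stddef.h>

/* Illustrative 3-tap moving-sum kernel in an HLS-friendly style.
 * The shift[] array models the pipeline registers of a streaming
 * architecture: data flows from register to register each iteration,
 * so no external memory access is needed between "stages". */
void moving_sum3(const int *in, int *out, size_t n) {
    int shift[3] = {0, 0, 0};
    for (size_t i = 0; i < n; i++) {
        /* #pragma HLS PIPELINE II=1
         * (HLS directive hint: accept one new sample per clock cycle) */
        shift[2] = shift[1];   /* oldest sample advances        */
        shift[1] = shift[0];
        shift[0] = in[i];      /* new sample enters the pipeline */
        out[i] = shift[0] + shift[1] + shift[2];
    }
}
```

On an FPGA, a synthesis tool would unroll this dataflow spatially, so that the shift, the adds, and the output write all happen concurrently for successive samples — the fine-grained pipeline parallelism the text refers to.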