Using FPGA to Survive the Death of Moore’s Law, Part 2

Growth in processor performance has slowed dramatically over the past decade, leading organizations to search for new ways to manage their data load.

Daniel Proch

November 20, 2019


In the first installment of this article, we looked at the decline of Moore’s Law and what it means for companies trying to keep up with today’s huge volumes of data. Cloud providers, for instance, saw that decline early on and experimented with a variety of technologies to help accelerate workloads. They found that an older technology, the field-programmable gate array (FPGA), could be used for this purpose. We conclude, then, by looking at FPGAs and how this older technology has learned new tricks to help organizations manage their data.

FPGA Changes the Game

Growth in processor performance has slowed dramatically, leading organizations to search for new ways to manage their data load. FPGAs are one such solution. FPGAs have been around since the mid-1980s and have typically been used as an intermediate step in the design of application-specific integrated circuit (ASIC) semiconductor chips.

FPGAs are designed with the same kinds of tools and hardware description languages, such as VHDL and Verilog, that are used to design ASICs. Their great advantage is that an FPGA can be rewritten or reconfigured with a new design on the fly. The downside is that an FPGA implementation is larger and consumes more power than an equivalent ASIC.
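
To give a flavor of what FPGA development looks like today, below is a minimal sketch written for a high-level synthesis (HLS) flow, an increasingly common alternative to hand-written HDL that compiles C++ into FPGA logic. The function, array size, and pragma style are illustrative assumptions modeled on Xilinx’s Vitis HLS conventions, not a production design.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical HLS kernel: adds two fixed-size vectors element by element.
// Under an HLS toolchain (assumption: Xilinx Vitis HLS), the pragma asks
// the compiler to pipeline the loop so one addition completes per clock
// cycle; an ordinary C++ compiler simply ignores the unknown pragma.
void vadd(const std::uint32_t a[1024], const std::uint32_t b[1024],
          std::uint32_t out[1024]) {
    for (int i = 0; i < 1024; ++i) {
#pragma HLS PIPELINE II = 1
        out[i] = a[i] + b[i];
    }
}

// Host-side test harness so the sketch also runs as plain software.
int main() {
    std::uint32_t a[1024], b[1024], out[1024];
    for (int i = 0; i < 1024; ++i) { a[i] = i; b[i] = 2 * i; }
    vadd(a, b, out);
    std::cout << "out[10] = " << out[10] << "\n";  // prints out[10] = 30
    return 0;
}
```

The same source can be simulated as ordinary software, synthesized to an FPGA bitstream, and, if volumes ever justify it, carried forward toward an ASIC, which is exactly the flexibility described above.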

Then an interesting development occurred. Typically, once a technology hits the market, competition leads vendors to find less expensive ways to produce it, and the price goes down. In the case of ASICs, however, the up-front cost of designing and fabricating a chip at each new process node kept climbing, and it became harder and harder to justify producing them.

During this period, FPGAs became more efficient and cost-competitive. It therefore made sense to stop at the FPGA stage and release the product based on an FPGA design. Today, FPGAs are widely used in a broad range of industries, especially in networking and cybersecurity equipment, where they perform specific hardware-accelerated tasks.

As these developments coalesced, engineers at Microsoft Azure got the idea to try FPGA-based SmartNICs in standard servers to offload compute- and data-intensive tasks from the CPU to the FPGA. Today, these FPGA-based SmartNICs are used broadly throughout Microsoft Azure’s data centers, supporting services like Bing and Microsoft 365.

FPGAs proved so valuable as a means of hardware acceleration that in 2015 Intel acquired Altera, the second-largest producer of FPGA chips and development software, for $16.7 billion. Since then, several cloud companies have added FPGA technology to their service offerings, including AWS, Alibaba, Tencent, and Baidu.

FPGAs’ Many Attractions

FPGAs have many qualities to recommend them. They are versatile and powerful while also being efficient and cost-effective, and they can be applied to virtually any processing task. An FPGA can implement massively parallel data paths, but it can just as easily implement deep pipelines, dataflow engines, or even soft processor cores.
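
As a concrete illustration of that architectural freedom, the sketch below (again assuming a Vitis HLS-style flow, with the pragma as an illustrative assumption) shows how a single directive turns a sequential loop into eight parallel adders; swapping it for a pipeline directive would instead build one adder that accepts a new input every clock cycle.

```cpp
#include <cstdint>

// Illustrative only: the same C++ loop can be mapped to very different
// hardware by an HLS compiler. UNROLL replicates the adder eight times so
// the additions are performed in parallel, trading chip area for speed.
std::uint32_t sum8(const std::uint32_t in[8]) {
    std::uint32_t acc = 0;
    for (int i = 0; i < 8; ++i) {
#pragma HLS UNROLL
        acc += in[i];
    }
    return acc;
}
```

Which mapping is right depends entirely on the application, and that is precisely the point: on an FPGA, the processing architecture is a design choice, not a fixed property of the chip.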

Another notable feature is that details such as data path widths and register lengths can be tailored to the exact needs of the application. Indeed, when designing a solution on an FPGA, it is best to have a specific use case and application in mind in order to truly exploit the power of the FPGA.
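
Here is a brief sketch of what that tailoring can look like, assuming the arbitrary-precision integer types (ap_int.h) that ship with Xilinx HLS tools; the 18- and 48-bit widths are illustrative assumptions, chosen because they match the hardened DSP blocks on many FPGAs.

```cpp
#include <ap_int.h>  // arbitrary-precision types from Xilinx HLS tools

// Hypothetical multiply-accumulate step with application-specific widths:
// 18-bit samples and coefficients feed a 48-bit accumulator, so no logic
// is spent on bits the application never uses. A general-purpose CPU
// would burn full 32- or 64-bit registers on the same arithmetic.
ap_uint<48> mac(ap_uint<18> sample, ap_uint<18> coeff, ap_uint<48> acc) {
    return acc + ap_uint<48>(sample) * ap_uint<48>(coeff);
}
```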

FPGAs also span a wide range of sizes and power envelopes: compare the smallest FPGAs, which can run image processing on drones, to the extremely large FPGAs used for machine learning and artificial intelligence. Across that range, FPGAs generally provide very good performance per watt. FPGA-based SmartNICs, for example, can process up to 200 Gbps of data without exceeding the power budget of a standard server PCIe slot.
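
As a rough back-of-the-envelope check, a standard PCIe slot supplies at most 75 W, so 200 Gbps within that envelope works out to roughly 200 / 75 ≈ 2.7 Gbps of processed traffic per watt.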

Because they are reconfigurable, organizations can use FPGAs to create very efficient solutions that do just what is required, when it is required. One of the drawbacks of generic multi-processor solutions is the cost overhead that comes with their universal nature: a general-purpose processor can do many things well at the same time but will always struggle to compete with a processor designed to accelerate one specific task.

As with any chip technology, per-unit cost drops dramatically with volume, and FPGAs are no exception. A vast array of FPGA options is on the market, and you can find the right model at the right price point to fit your application needs. FPGAs are widely used today as an alternative to ASIC chips, providing a volume base and competitive pricing that is only set to improve over the coming years.

What’s Next

The end of the regular doubling in processing power that Gordon Moore’s prediction once delivered has caused organizations dependent on compute power to scramble for new ways to manage their data. We all must reconfigure our assumptions about what constitutes high-performance computing architectures, programming languages, and solution design. Turing Award winners John Hennessy and David Patterson even refer to this as the start of a new “golden age” in computer and software architecture innovation. Hardware acceleration, in the meantime, is one of the more exciting and flexible options for increasing server performance.

About the Author

Daniel Proch

Daniel Proch is Vice President of Product Management at Napatech.
