Yesterday I talked about an FPGA hobby board I had started playing with. As I was explaining it to my son, the question came up: how fast is it?
Let’s start with the Arduino. It runs with a clock speed of 16 megahertz. It does one operation per clock, so its rough performance is (converting to giga values) 0.016×10^9 operations per second.
For a desktop PC, we tend to measure speed as the gigahertz rating of the CPU times the number of cores. For a really hot machine I put together for work, that is 4.6 gigahertz times 8 cores, for a rough speed index of 4.6×10^9 clocks/second x 8 operations/clock = 36.8×10^9 operations/second.
GPUs (graphics processing units) are the video processors added to a PC to let it better play games. GPUs are also the current method of achieving higher performance: many of the fastest supercomputers are now based on GPUs, with the CPU providing support services. A GPU operation is not the same as a CPU operation, but I’m looking for direction here, not specificity.
A GPU is pretty fast. One can run at 750 megahertz with 2688 processing units, which gives it a figure of merit of 750×10^6 clocks/second x 2688 operations/clock = 2016×10^9 operations/second. This is the highest-performance GPU I could find as of today. It is not cheap, and it is not simple to program.
The little FPGA board, the Mojo, suitable as a companion for a $20 Arduino, looks pretty good in comparison. It has 9152 logic cells, and the clock speed that comes with the board is 50 megahertz. That figure of merit is 9152 operations/clock x 50×10^6 clocks/second = 457.6×10^9 operations/second. This makes it over 10 times faster than the fast PC, and about one fifth the speed of the fastest GPU, but some 28,000 times the speed of the Arduino.
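If you want to play with these back-of-envelope numbers yourself, the whole comparison is one multiplication per device. Here is a short sketch that recomputes the figures of merit above (all clock rates and unit counts are the ones quoted in this post; remember these are rough indexes, not benchmarks):

```python
def ops_per_second(clock_hz, ops_per_clock):
    """Figure of merit: clock rate times operations completed per clock."""
    return clock_hz * ops_per_clock

# Numbers quoted in the post:
arduino = ops_per_second(16e6, 1)      # 16 MHz, 1 operation per clock
pc      = ops_per_second(4.6e9, 8)     # 4.6 GHz, 8 cores
gpu     = ops_per_second(750e6, 2688)  # 750 MHz, 2688 processing units
fpga    = ops_per_second(50e6, 9152)   # 50 MHz, 9152 logic cells

print(f"FPGA vs PC:      {fpga / pc:.1f}x")        # a bit over 12x
print(f"FPGA vs GPU:     {fpga / gpu:.2f}x")       # roughly one fifth
print(f"FPGA vs Arduino: {fpga / arduino:,.0f}x")  # about 28,600x
```

The exact ratios come out slightly higher than the round numbers in the text (12.4x the PC, 28,600x the Arduino), which is why I rounded down to "over 10 times" and "some 28,000 times."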
How does this look on a performance per dollar basis?
Pretty good for a hobby board. Imagine what processing you can do.
P.S. I wrote this originally for my home-town STEM site, www.waylandstem.org.