Building a 300-node Raspberry Pi supercomputer

Commodity hardware makes massive 100,000-node clusters possible, because, after all, commodity hardware is “cheap” — if you’re Google. What if you want a lot of cycles but don’t have a few million dollars to spend? Think Raspberry Pi.

Poor people have compute needs, too, especially if they are chronically underfunded academics. That’s what drove Larry Page and Sergey Brin to cobble together PC motherboards for several iterations of Google clusters.


But when you start talking about hundreds of PCs, the prices start adding up to hundreds of thousands, even millions, of dollars. That’s the problem a group of computer scientists tackled in “Affordable and Energy-Efficient Cloud Computing Clusters: The Bolzano Raspberry Pi Cloud Cluster Experiment” by Pekka Abrahamsson, Sven Helmer, Nattakarn Phaphoom, Lorenzo Nicolodi, Nick Preda, Lorenzo Miori, Matteo Angriman, Juha Rikkilä, Xiaofeng Wang, Karim Hamily, and Sara Bugoloni, all of the Free University of Bozen-Bolzano, in Bolzano, Italy.


Is a 300-node Raspberry Pi cluster a supercomputer? Probably not. The Top 500 supercomputers all cost millions.

But bragging rights? Priceless.

The paper uses the original Raspberry Pi (RPi), but today you can get the RPi 3, which is roughly 4x faster, for $35. The original was roughly the power of a 300MHz Pentium II, which might have been the processor in the early Google clusters. Bottom line: you can do real work on this class of machine.


Network. They used a star configuration, in which a number of low-cost fan-out switches connect to a single interconnect switch. Each RPi connects to a port on one of the fan-out switches.
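The paper doesn’t give port counts, but the star topology makes the switch math easy. A minimal sketch, assuming 48-port fan-out switches with one port on each reserved as the uplink to the central interconnect switch (both figures are assumptions for illustration):

```shell
# How many fan-out switches does a 300-node star need?
NODES=300
PORTS=48
USABLE=$((PORTS - 1))                           # one port per switch uplinks to the core
SWITCHES=$(( (NODES + USABLE - 1) / USABLE ))   # ceiling division
echo "$SWITCHES fan-out switches"               # 7 switches cover 300 nodes
```

Seven 48-port switches give 329 usable ports, leaving headroom to hot-add nodes without recabling.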

Storage. The SD cards on the RPi are too slow for writes, so the team attached a QNAP 4-drive NAS array and used a logical volume manager to provide volumes to sub-clusters. Each SD card holds the OS, so every RPi can boot from local storage.
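Carving shared storage into per-sub-cluster volumes is standard LVM work. A hypothetical sketch of the idea, not the team’s actual commands; the device name and volume sizes are assumptions:

```shell
# Assumed: the QNAP NAS is exposed to the head node as /dev/sdb.
pvcreate /dev/sdb                         # register the NAS-backed device with LVM
vgcreate rpi_cloud /dev/sdb               # one volume group for the whole cluster
lvcreate -L 100G -n subcluster01 rpi_cloud   # one logical volume per sub-cluster
lvcreate -L 100G -n subcluster02 rpi_cloud
mkfs.ext4 /dev/rpi_cloud/subcluster01     # format, then export to that sub-cluster's nodes
```

The win is flexibility: volumes can be grown with `lvextend` as a sub-cluster’s workload changes, without repartitioning the NAS.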

Mounting. Design students at the Free University of Bolzano devised a rack system with individual holders for each RPi. The holders are clear so status lights are visible, and individual RPis can be added or removed while the system is running. The racks are stackable.

Power. Three hundred individual power supplies would be unwieldy, so they repurposed old PC power supply units, one per rack unit. Yes, if a PSU failed, 24 nodes would go down.
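The numbers work out because the nodes sip power. A rough budget, assuming the original Model B’s peak draw of about 3.5W per node (my figure, not the paper’s) and the article’s 24 nodes per rack unit:

```shell
# Per-rack power budget, in tenths of a watt to keep shell arithmetic integer.
WATTS_X10=35                       # assumed ~3.5 W peak per original-model RPi
NODES=24                           # nodes per rack unit, per the article
TOTAL=$((WATTS_X10 * NODES / 10))
echo "~${TOTAL} W per rack unit"   # ~84 W, trivial for even an aging PC PSU
```

An old 300W PSU loafs along at that load, which is why a pile of retired supplies was good enough.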

Software. They used Debian 7 as a base, with just enough functionality for an individual RPi to boot and then request its current configuration from the master RPi.
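The boot-then-fetch pattern keeps the SD cards generic: every card carries the same minimal image, and per-node state lives on the master. A hypothetical sketch of such a boot-time script; the master’s hostname, the URL path, and the config location are all assumptions, not details from the paper:

```shell
#!/bin/sh
# Run at boot after the network is up: ask the master RPi for this
# node's current configuration, keyed by hostname.
MASTER=master.rpi-cloud.local            # assumed master hostname
curl -fsS "http://$MASTER/config/$(hostname)" -o /etc/cluster.conf \
  || echo "master unreachable; keeping last known config" >&2
```

Because the fetch falls back to the previous config, a node that boots while the master is down still comes up in a sane state.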