Raspberry Pi Cluster

I’ve recently completed construction of a 40-node computing cluster based on the Raspberry Pi single-board computer. Below is a quick overview video showing the finished product.

And here’s another one showing a few of the basics about how the case works.

In the practical sense, this is a supercomputer which has been scaled down to the point where the entire system is about as fast as a nice desktop system. Most of the resources available to individual nodes have been proportionally scaled. I believe this will make it an ideal testbed for distributed software.

[Photos: cluster left side (angled), right side, and front]

My goals for this project were as follows:

  • Build a model supercomputer that structurally mimics a modern supercomputer.
  • All hardware required for the cluster to operate is housed in a case no larger than a full tower.
  • Parts that are likely to fail should be easy to replace.
  • It should be space-efficient, energy-efficient, economically-efficient, and well-constructed.
  • Ideally, it should be visually pleasing.

I feel I have met these goals with my design.

Here are the specifications of the final system:

  • 40 Broadcom BCM2835 cores @ 700 MHz
  • 20 GB total distributed RAM
  • 5 TB disk storage – upgradeable to 12 TB
  • ~440 GB flash storage
  • Internal 10/100 network connects individual nodes
  • Internal wireless N network can be configured as an access point or bridge.
  • External ports: four 10/100 LAN and one gigabit LAN (internal network), one router uplink
  • Case has a mostly toolless design, facilitating easy hot-swapping of parts
  • Outer dimensions: 9.9″ x 15.5″ x 21.8″.
  • Approximate system cost of $3,000. (The first one cost slightly more.)
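The aggregate RAM figure follows directly from the per-node hardware. A quick sanity check (assuming each node is a 512 MB Raspberry Pi Model B; the per-node RAM is an assumption here, not quoted in the list above):

```python
# Sanity check of the "20 GB total distributed RAM" figure.
# Assumes 40 Raspberry Pi Model B nodes with 512 MB RAM each
# (an assumption, not a figure from the spec list itself).
nodes = 40
ram_per_node_mb = 512

total_ram_gb = nodes * ram_per_node_mb / 1024
print(total_ram_gb)  # 20.0
```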

Why Build It?

I needed a computing cluster that I could use for testing distributed software. Since I don’t have free access to a traditional supercomputer, I decided to build my own. Originally, I planned to create it as part of my MSCE thesis, but ended up with a different project for that. As a consequence, I pursued this as a personal project instead.

Since I was making a significant investment in this, I wanted it to be something I would be proud to show people for the next several years.

Design and Build Process

Early in the planning process, I decided that no more than one expensive piece of specialty equipment should be required for construction. This maximizes the chance that any given maker will have access to the required equipment. As a member of Dallas Makerspace, I have access to a CNC laser cutter through my membership, which I used extensively during the build process. A laser cutter with at least an 18″x24″ bed would be required to recreate this cluster case.

Complete plans for this project can be found through the following links:

  1. What You Need
  2. Circuit Boards
  3. Power Cards
  4. Raspberry Pi Cards
  5. Router Card
  6. Ethernet Switch Cards
  7. Case Central Structure
  8. Case Side Panels
  9. Fan and Filter Harness
  10. Ethernet Jack Mounts
  11. Hard Drive Array
  12. Ethernet Cables
  13. Power Cables
  14. Final Assembly
  15. Moving Forward

Acquired Skills

During this project, I acquired/improved the following skills:

  • Process design: Learned to better organize design tasks for an efficient design process
  • Working with constraints: Learned to better work with strict design constraints
    • Manufacture of product requires access to only one specialty tool – a CNC laser cutter with a 2’x2′ bed
    • End product dimensions were limited to the size of a full tower case
  • Prototyping: Gained additional experience in masking/etching printed circuits
    • Tried 2 new masking techniques
    • Made my own etchant for the first time
  • Equipment: Gained a great deal of skill in the operation of a CNC laser cutter
    • I had no prior experience. Now I’m one of the top resources for laser cutter knowledge at Dallas Makerspace.

What’s Next?

Now that the hardware is finished, I’ll be installing some common software packages for distributed computing, in order to evaluate their potential and train myself on them. This will include as many of the packages that run on top of Apache Mesos as possible (e.g. MPI and Hadoop). In the future, I will be writing my own distributed applications, which may include my own cluster management software and some form of reality simulation engine.
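As a hypothetical illustration (not the author's code) of the kind of data-parallel workload a cluster like this targets, here is a minimal sketch of splitting a job into one chunk per node before a framework such as MPI or Hadoop distributes the chunks; the `partition` helper and its parameters are invented for this example:

```python
# Hypothetical sketch: split a list of work items into one chunk per
# cluster node. A framework like MPI or Hadoop would then ship each
# chunk to its node; only the splitting step is shown here.

def partition(work_items, num_nodes=40):
    """Deal work_items round-robin into num_nodes near-equal chunks."""
    chunks = [[] for _ in range(num_nodes)]
    for i, item in enumerate(work_items):
        chunks[i % num_nodes].append(item)
    return chunks

chunks = partition(list(range(1000)))
print(len(chunks), len(chunks[0]))  # 40 25
```

Round-robin dealing keeps chunk sizes within one item of each other even when the total doesn't divide evenly by the node count.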


76 thoughts on “Raspberry Pi Cluster”

  1. jasonrubik

    Awesome job!

    It’s way more versatile than a simple 40-core computer. And it might be more expensive based only on FLOPS, but it is way cooler than anything around!

    As for the reality simulation engine, why not try a molecular manufacturing-based nanorobot design environment?

    1. David Guill


      Thanks for the suggestion; that sounds like a cool idea. I think I’ll probably be starting out closer to the macro level, though.

  2. Pingback: Raspberry Pi 40-Node Computing Cluster @Raspberry_Pi #piday #RaspberryPi « adafruit industries blog

  3. Pingback: Raspberry Pi Cluster | The Elite World of the Raspberry PI

  4. Paul S


    That’s absolutely brilliant! I’m really impressed by how you made it actually look amazing too.
    I wonder if there would be a market to sell them?

    I’m currently building one, though only with 3 Pis, but this is a project I could do in the future.

    Thanks for sharing,
    Paul S

    1. David Guill


      I don’t know how much of a market there would be for selling them – I’d have to charge a lot considering the amount of labor involved. That’s part of the reason I made the plans available, so others could use them.

      Good luck with your 3-Pi cluster. What software are you planning to install on it?

      If you do take on this project in the future, I would love to hear about it.

  5. Amar

    Because there are benefits to having a physical cluster test bed as opposed to a virtual cluster test bed, this is pretty cool.

  6. Pingback: Raspberry Pi Cluster | Like Magic Appears! | Do...

  7. Pingback: Servers! | Harry Jiang

  8. Vijayakumar Subburaj

    Experiment and market it as a “High frequency trading” system, if possible.

  9. ser

    How are the HDDs connected to the cluster?
    Do you have one or more dedicated control nodes?

  10. Pingback: rndm(mod) » 40-Node Raspi Cluster

  11. Pingback: 40-Node Raspi Cluster - RaspberryPiBoards

  12. Pingback: 40-Node Raspi Cluster - | Noticias de seguridad informática, ¿qué es la seguridad informática?

  13. Tom Hargrave

    A very good project. You could get even more efficiency by using 3.3V buck converters instead of 5V buck converters and feeding the 3.3V in through the GPIO port, bypassing the Pi’s inefficient on-board series regulator.

    1. David Guill

      When I designed this system, I wanted to be able to swap out the boards for faster boards later. 5V power connections (usually via USB) have become so ubiquitous that I expect it to be a de facto standard for at least 5 more years. I could be wrong, of course, but I felt it was the best way to go.

  14. Pingback: » Blog Archive » Cluster de 40 Nodos con Raspberry Pi

  15. Pingback: Tutorial para instalar un cluster de 40 nodos con Raspberry pi | Cyberhades

  16. Dejan Lekic

    It is impressive indeed.
    I only wonder how much easier it would be to connect a few Parallella boards (say 10) together and build a cluster out of them. Also, how much more powerful such a cluster would be…

    1. David Guill

      It’s possible it would be easier if not for the fact that I placed my first Raspberry Pi order in 7/2012. Parallella didn’t begin their Kickstarter campaign until 9/2012. Also, the $35 price tag of the Pi made it more appealing to build a larger cluster of them, and I wanted more than 14 nodes with the budget I was working with.

    2. William

      It would make a lot more sense to use Parallella; with 16 cores you are already halfway to making a supercomputer. Although if you wait a few years, they apparently will have 64 cores, and up to 1,000 cores by 2020, according to the website.

      Apparently it’s possible to line up 64 boards with 64 cores each, so when this happens we will have over 4,000 cores working away, which is more like an actual supercomputer in terms of core count when compared to the mini RasPi supercomputers we can create.

      Waiting for Parallella is the problem; 2020 for the real results sucks big time.

      1. David Guill

        I really like what Parallella is doing, but I didn’t use their board for a few reasons:

        1. It’s more expensive than the Raspberry Pi.
        2. I don’t feel their custom language offers a practical benefit over OpenCL unless it becomes more accepted as a standard. (Granted, their board does have an OpenCL driver.)
        3. I’d already bought several Raspberry Pi model B boards for my cluster before Parallella was kickstarted. (I learned about Parallella from their Kickstarter.)

        But if I hadn’t cared more about node count than core count during my planning phase, I might have given more consideration to Parallella and could have potentially switched over to it before I’d bought very many Pis. Granted, if core count had been a stronger consideration, I’d probably be doing a lot more GPGPU-type stuff already.

  17. Pingback: 40-Raspi impreuna | arduino.ro

  18. Pingback: Проект 40-Node Raspi Cluster — кластер из Raspberry Pi | Лучший моддинг сайт

  19. Pingback: 树莓派热点回顾第7期 - 极客范 - GeekFan.net

  20. Pingback: NewsSprocket | 40-node Raspberry Pi cluster hides behind a rainbow of cables

  21. Pingback: 40-node Raspberry Pi cluster | GeekLány

  22. Pingback: Raspberry Pi supercomputer met maar liefst 40 Pi's » Mancave

  23. Pingback: 40組Raspberry Pi組成的Cluster電腦 | 一路往前走!2.0

  24. alejandro

    Hello, how are you?
    I wanted to thank you for publishing this project!
    I also want to build one, so please wish me luck.

    I’ll keep following you; see you soon. :)

    1. David Guill

      You’re welcome.

      If you’re going to attempt building it, keep an eye on my writeup for updates. There will be a few as I discover issues with the design and work out how to solve them.

  25. Pingback: Super computing with 40 Raspberry Pi’s | Simon Hall

  26. Pingback: Linux Video of the Week: 40-Node Raspberry Pi Supercomputer | Linux-Support.com

  27. Victor

    You need to post some benchmarks to compare it to commercial systems. True, this Raspberry setup has the added cost of all the cables and unused components in each Pi, but commercial systems have the added costs of their business infrastructure. Now that you have a $3,000 system, what is the projected potential of a $30,000 system?

  28. nixmd

    I had the same idea for a while. Now I see someone has made it, and it’s beautiful!
    How much power does it consume? Do you think the power used by this cluster, in comparison to an x86 supercomputer with the same specs, is reasonable and effective?

    1. David Guill

      Unfortunately, I haven’t been able to give this project the attention it deserves for a couple months. A post with benchmarks and data on power consumption is overdue. All I can say is that I’ll post it when I have it.
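For what it’s worth, a rough back-of-the-envelope estimate of the power question above can be made from typical per-board figures (hypothetical numbers, not measurements from this build):

```python
# Rough estimate only: assumes ~3.5 W per Raspberry Pi Model B under
# load, a typical figure and NOT a measurement from this build.
pis = 40
watts_per_pi = 3.5

pi_power_w = pis * watts_per_pi
print(pi_power_w)  # 140.0 -- for the Pis alone, before drives,
                   # switches, fans, and PSU losses
```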

  29. Pingback: Beautiful #RaspberryPi cluster | Raspberry PiPod

  30. Pingback: Un supercalculateur de 40 Raspberry pi - The Raspberry | The Raspberry

  31. Carson

    I am starting a web hosting company; would this 40-node Raspi cluster be good for that?

    1. David Guill

      Truthfully, I don’t think it’s so good for typical commercial web hosting applications. Generally, web hosts will sell cheaper plans where a fast system is shared, which is generally accomplished by configuring potentially hundreds of vhosts per server. One of the advantages of doing it that way is that the same hardware can be used for hundreds of economy plans or fewer deluxe hosting plans. A Pi cluster, such as mine, would be a more expensive way to support those economy plans and would generally not be suitable for deluxe hosting accounts.

      The one exception would be an instance where the customer wants to test something that requires a distributed system with many nodes. A cluster such as mine could make sense for that. But you would probably have hundreds of physical servers before you would have enough need to justify building one.

  32. Carson

    Also, could you sell just the case itself? I am thinking of building one, and that would make things a lot easier.

    1. David Guill

      Near the end of the design/build process, I started asking myself what I would charge to build one of these. I decided that I shouldn’t say that I won’t sell one, because that could potentially mean turning down a serious offer. However, my price for the case alone (without cables, circuit boards, or other components) would likely be about $4,500, due to the amount of manual labor involved. I do not expect anyone to jump at that price and I’ll be fine with it if no one ever does.

  33. Albert Chew

    It brought a big smile and a big yearning to my heart! Thank you for sharing a very ‘professional-looking’ creation. In my survey of Raspberry Pi cluster setups, this is now the number one design – best of the best, 2012–2014. It has real commercial potential.

    Kuala Lumpur,

  34. Marius

    Hello David,
    While reading about your 40 RaspPi cluster I was thinking about a more compact version that can be rack-mounted. It will require some low level networking magic (an area I’m lacking experience in) but if you’d like to brainstorm about it please email me.

    Kind regards, and kudos on the awesome accomplishment!

    1. David Guill


      All I can tell you is that I’m no expert at low-level networking magic either. For my own projects, I intend to let the TCP and UDP protocols do a lot of the low-level work for me. But I wish you the best of luck with it if you try to build a rack-mountable version. If you do it, you might want to consider the DIMM version of the Pi.

    2. Mirko

      Your supercomputer is great. So are your tutorials and links, which help anyone who wants to build one. Congratulations.

  35. Sam Gleske

    Hi, first off, your project is amazing. Second, how do you plan to provision the operating systems and software in your environment? I hope you plan to use automation. If you like, you may consult with me on any questions you might have through email (at no cost), because I’d like to see your project continue to evolve.

    First off, you should consider a PXE environment for installing all of the operating systems automatically. You can set up initial installation through the network, which would be the most practical. You could use Cobbler as a solution for that.


    Once you have your environment configured, you should make use of configuration management so that rebuilding nodes is easy in the case of a failure. You can also use configuration management to push out software you’ve written to all nodes in the cluster (or select nodes if you wish) with almost no effort. There are several solutions for configuration management.

    CFEngine, SaltStack, Ansible, Chef, and Puppet are among the most popular solutions. There’s a more comprehensive list you can consider in a Wikipedia article.


    I realize there will be challenges regarding the architecture for those systems because the Pi is ARM, but most of the solutions I’ve mentioned can be built from source.

    Keep in touch.

    1. David Guill



      I’ve been educating myself on a few different configuration managers, including Puppet, BCFG2, and CFEngine. I can’t say I consider one of them to be an overall winner yet.

      I’ll have a look at Cobbler when I have the opportunity.

    2. David Guill


      Last time I was able to fire it up and spend some good time with it (which I’ve not been able to do as much as I want lately, unfortunately), I was trying to choose a configuration management package. I’ve spent a little time with BCFG2, Puppet, and CFEngine. I can’t say I’ve really decided yet which I like best.

      I’ll have a look at Cobbler when I get the chance to fire it up again.

  36. Eric

    Hi. Which OS did you install on each board? What software do you use to administer and manage the boards? What about the load balancing software? Thanks!

    1. David Guill

      So far, I’ve experimented with BCFG2, Puppet, and CFEngine for configuration management.

      The load balancing part can be done through software like Mesos, MPI, or HTCondor for distributing processing load. There are a number of others just for distributing databases. I’ve spent some time getting a few of them onto the system, but so far I haven’t actually experimented with using them. I’ll get to that eventually.

  37. Roie Black

    Hi David,

    I have wanted to build just such a machine for some time, and I may well take on building one of these down here in Austin (at ATXHackerspace). I learned assembly language on a Cray-1 and directed a Cray-2 site in Albuquerque as part of my past life in the USAF. I am now teaching a course in computer architecture at Austin Community College. This would be a cool machine for student projects in that class! I would like to see if we can arrange a meeting to see the machine on one of my visits to DPRG. Are you aware of anyone actually trying to build one from your instructions?

    1. David Guill

      I need to take it back to Dallas Makerspace for another visit at some point. However, I’m presently under orders not to lift anything so heavy for the next 3 months after having a tendon repaired in my left ankle. We can try to arrange something in late December or after, assuming I’m as far along as expected with my recovery.

      From comments I’ve received on here, I believe a few people have started the process of trying to build one. I’m not yet aware of anyone getting very far into the process. But it’s possible there’s someone out there who is 95% finished and hasn’t told anyone else about it.

      If you do build one, keep in mind that the current iteration has a cooling issue. (This is mentioned in a few places in the instructions.) The current build stays cool enough if the left side panel is off and the filters aren’t installed, so that’s typically how I run mine. I have some potential fixes in mind that would preserve nearly the entire design and parts list, but they aren’t so easy to try on mine as they would be on one made by the instructions I posted, due to some parts of mine being glued that I don’t recommend gluing in. I should eventually have a fix posted, but I can’t make any guarantees about when. Until then, running it with the left side off works fine – it’s just not a very elegant fix.

  38. Pingback: Clusters of single-board computers | Bananas Everywhere

  39. Pingback: 120 Raspberry Pi avec écran chez resin.io | Framboise 314, le Raspberry Pi à la sauce française….

  40. SeanVN

    That was interesting. I’d like to see someone running real computations on an ARM board cluster. I guess you were trying for the cheapest possible cluster to try distributed computing on. Most ARM boards have 2 cores and 1 GB RAM for about $50, but 4 cores and 2 GB RAM are becoming more common. So for the cost of a Xeon Phi you could have, say, 100 ARM boards, giving 400 cores and 200 GB RAM. I think for a lot of real-world computation (e.g. engineering) that would work far better. It would be far easier to program and much more likely to be used. In terms of obsolescence, such a cluster would still be worth running 5+ years from now as well.
    I have stopped coding in assembly language because I found the latest version of Oracle Java to be so fast (on a 64-bit Linux PC). The latest HotSpot just-in-time compiler is really effective. I don’t know if Oracle Java for ARM has reached that level yet; if so, then it would be faster to develop distributed code in Java than in C or assembly language for sure.

  41. Pingback: Intel Developer Forum 2014

  42. Pingback: 40-Node Raspi Cluster | Hackaday

  43. Pingback: Top 10 Raspberry Pi Creations

  44. Jason

    Nice work, David, but I think you may have dropped the ball regarding cooling. It’s hard to tell from your pics (haven’t watched the video) exactly how your airflow works. From what I can tell, those are intake fans? Also, it looks like you have holes beneath those fans; is this correct? If that’s the case, then what is probably happening is that at least some of your airflow is going in and then right back out again. At the same time, that case is likely to have hot pockets all over the top area and in all the corners, where the air is just stagnating and not moving.

    You also have to take into account positive vs. negative pressure airflow. I’m assuming all those fans are intake, which means you are running a positive pressure case. This means air enters via the fans and exits mostly along the path of least resistance, but also through every little crack and crevice in the case. This is good for cleanliness over time, as with proper filtering you suck in very little dust, but it does not perform as well as negative pressure, and it tends to create pockets of hot air that do not move. You’d be better off with a negative pressure case (more exhaust, less intake), as it would suck in air from every little crevice and create just enough airflow in those hot areas to cycle the air. It is always more efficient to remove hot air than it is to add cold air.

    Also, air should move from one side of the case to the other: suck air in from the bottom and front, and blow air out the top and back. This creates constant airflow around all your parts, from one end of the case to the other. Your case appears to be just blowing air in at the components (correct me if I’m wrong). Unfortunately, that is a poor way to cool components. You need to adjust your fans and any exhaust vents so that air comes in the bottom of one side and goes out the top of the other. Front to back seems to be most popular with gaming rigs, but you can do it from side to side as well, like yours; you just need proper intake and exhaust. Hope this helps.

    1. David Guill

      Regarding your comment that I “may have dropped the ball regarding cooling”, please keep in mind that this is presently still hobby work for me. I have stated on this site that there’s a cooling issue with the current iteration of the design. I would have liked to release a fix for it quickly, but practical concerns related to securing an income have forced me to postpone a fix. I will get to it.

      I disagree with you regarding a lot of your suggestions, especially the one about exhaust fans being better. I’d rather not get too deeply involved in this topic, but I’ll touch on it briefly and discuss some of my current thoughts about the design and my plan for improving the cooling.

      From a fluid dynamics perspective, it is easier for a fan to push a compressible fluid than to pull it. As a consequence, airflow is slightly better with a positive pressure design. Along with my attempt to filter incoming air, this is why I chose intake fans on the side and why I put them on the outside of the filters. Additionally, the PSU has an exhaust fan. Presently, that’s by far the dominant exhaust port on this design. I was hoping the push/pull arrangement formed by the four intake fans and the PSU fan would be enough, but it obviously isn’t. But I still think it’s far better than if I’d aimed them all outward.

      Also, regarding “Your case appears to be just blowing air in at the components (correct me if I’m wrong). Unfortunately, that is a poor way to cool components.” I feel you’re incorrect. “Just blowing air in at the components” is an excellent way to cool them. This is why so many effective approaches to cooling components do little more than blow air at them.

      There are some ventilation holes on the current backplate design, but not nearly enough. So the first thing I want to do is to open up the backplate by adding more ventilation holes. I feel this will allow more direct flow from the fans to the Pis. I state in my assembly instructions not to glue in the backplate – I stated this because I learned the hard way. So, in order to experiment with this change, I’ll not only need to remove most of the wiring and Pis, I’ll also need to break the welded plastic bonds holding my backplate in place. It’s not a task I intend to take on until I have at least a few days with nothing on my schedule, since I may need to do some very careful work to remove broken pieces of the old backplate.

      Regarding your suggestion that I cut more ventilation holes – I’ve considered cutting more ventilation holes on the side opposite the fans. I feel this is where it would make the most sense to add them. But I want to do this only after opening up the backplate with more holes. Aesthetically, I like that side best without holes, but I think it could look very attractive with a custom graphic cut into it.

