
Pimp Your S1070

The Problem

You want to provide access to Fermi-class GPUs without having a suitable machine to host them, or you have a Tesla S1070 node but don't know what to do with it since it is the "odd one out". This is the situation on the Temple HPC cluster, for example: we have (very nice and powerful) GPU nodes with 4x Tesla C2050 GPUs in them, but having people reserve an entire node just for testing code seems very impractical. It would be much better to have suitable GPUs in the login node(s); however, to host GPUs there you have to purchase very specific hardware and expensive M2050 GPUs. One might think that inserting a very low-power GPU would do as well, but the power budget that can be provided through the PCI-e bus of many cluster nodes is very limited. In the case of our Dell R610 nodes it is about 25W, which is too little for even the lowliest of the available GPUs with a Fermi-based architecture.

The Idea

A Tesla S1070 is, for the most part, just an extension of the PCI-e bus that connects M1060 GPUs to another cluster node via a host adapter card. So it is conceivable that it would not work exclusively with M1060 GPUs and that one could replace them with newer ones. The nice thing about the S1070 node is that it has its own power supply, so the problem of supplying sufficient power to the GPUs is solved, and the host adapter cards are certified for regular cluster nodes, so that is less of a concern, too. The remaining worry is cooling. Tesla M1060 GPUs are passively cooled, and the S1070 case is set up to provide a strong airflow "through" them. Under such conditions it would of course be risky to use a desktop GPU that is optimized for a less confined environment, but for development and quick tests one does not need the fastest and most perfect GPU, just something reasonable with not too high a power consumption, like a GeForce GTX 550 Ti. This card needs only one 6-pin power cable and is very affordable, but with 1/4 the number of GPU cores it is not a complete dud either.
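
Whether a substitute card like this really gives you the Fermi-class environment you want to develop against is easy to verify once it is installed. Below is a minimal device-query sketch using the CUDA runtime API (compile with nvcc); the 2.0 cutoff is simply the Fermi compute capability baseline, and the particular properties printed are just an illustrative selection, not anything specific to this setup.

```
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA-capable devices found\n");
        return 1;
    }

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);

        /* Fermi cards report compute capability 2.x, so anything
           with major version >= 2 is good enough for Fermi development */
        int fermi_ok = (prop.major >= 2);

        printf("device %d: %s, compute capability %d.%d, %d multiprocessors -> %s\n",
               i, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount,
               fermi_ok ? "OK for Fermi development" : "too old");
    }
    return 0;
}
```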

Does it Fit?

The next question is whether one can actually place the cards inside the node, and the answer is "yes, almost". Since the cards sit inside the node, one has to remove the faceplate around the DVI connectors. This has the added benefit of improving the airflow through the card. The GeForce cards have different dimensions than the M1060 cards, so they don't line up exactly with the mounting points, but one can still fit them in very nicely. There is only one point of conflict, where the plastic casing of the GeForce touches a small threaded bar that connects the mounting rods to the case. This needs to be cut away, e.g. with a Dremel tool (see picture on the right). The DVI ports also make for a tight fit in places, but with a little bit of care one can insert the GeForce cards so that they sit tightly and straight in the two PCI-e slots closest to the front of the case.

Does it Work?

Now for the really big question: does it work? Surprisingly, yes! There is one gotcha, though: all slots have to be occupied, so a couple of M1060s have to remain in the case. To avoid any confusion, one can use the nvidia-smi tool to set those cards to "compute prohibited" mode, so that nobody accidentally runs jobs on them.
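
The exact nvidia-smi syntax for setting the compute mode depends on the driver version that is installed, so check its help output on your system. The effect, however, is visible from the CUDA runtime, so users can tell at a glance which devices are meant for them. The sketch below is a small, hypothetical helper (not part of any standard tool) that lists all devices and flags the ones left in "compute prohibited" mode; only the cudaGetDeviceProperties call and the cudaComputeModeProhibited constant are standard CUDA, the rest is illustrative.

```
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);

        /* the leftover M1060s were switched to "compute prohibited"
           with nvidia-smi, so they show up here as unusable for kernels */
        const char *status =
            (prop.computeMode == cudaComputeModeProhibited)
            ? "compute prohibited (leftover M1060, do not use)"
            : "available for compute";

        printf("device %d: %s -- %s\n", i, prop.name, status);
    }
    return 0;
}
```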

More pictures

Nvidia Tesla S1070 Modding

Here are a few more pictures (sorry for the bad quality, but the camera in my cellphone is not exactly stellar, and the cellphone is pretty old) showing the setup from a few additional angles and perspectives.