Building a deep learning computer is no different from building a regular computer. There are only 8 components to a build: the GPU, the CPU, Storage, Memory, a CPU Cooler (or a fan), a Motherboard, Power, and a Case. The first 4 are the most important. When training, data flows from storage to memory to the GPU, while the CPU helps along the way (loading and manipulating batches, etc.); the sketch after the checklist below makes this flow concrete. So you want to make sure:
- Your CPU has enough PCIe lanes to support all your GPUs
- Your GPU is fast enough and has enough memory to fit the model and a batch of data
- Memory is DDR4 and big enough to hold most of your datasets uncompressed
- Storage is M.2 PCIe and big enough
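
To make the storage → memory → GPU flow concrete, here is a minimal PyTorch sketch (my own illustration, not from Jeff's post; the random tensors, batch size, and worker count are placeholders): the DataLoader's CPU workers prepare pinned batches in RAM, and the GPU consumes them.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Placeholder tensors standing in for a dataset read off the NVMe SSD.
    features = torch.randn(512, 3, 224, 224)
    labels = torch.randint(0, 10, (512,))
    dataset = TensorDataset(features, labels)

    # CPU worker processes assemble batches in RAM (storage -> memory);
    # pin_memory=True speeds up the later copy from RAM to the GPU.
    loader = DataLoader(dataset, batch_size=64, num_workers=4, pin_memory=True)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for x, y in loader:
        # RAM -> GPU transfer; the GPU then does the heavy lifting.
        x = x.to(device, non_blocking=True)
        y = y.to(device, non_blocking=True)
        # ... model forward pass, loss, and backward pass would go here ...
        break

if __name__ == "__main__":
    main()
```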
Following Jeff Chen’s blog post, I purchased the following parts:
- GPU: NVIDIA GTX 1080 Ti Founders Edition (11GB / 3584 CUDA cores)
- CPU: AMD Threadripper 1950X (16 cores / 32 threads)
- Storage: Samsung 970 EVO 1TB NVMe M.2 Internal SSD
- Memory (RAM): CORSAIR Vengeance RGB 64GB DDR4 3000 (4×16GB)
- CPU Cooler: Fractal Design Celsius S24
- Motherboard: MSI X399 Gaming Pro Carbon AC
- Power Supply: EVGA SuperNOVA 1600 P2
- Case: Lian-Li PC-O11 Air
- Fan (optional): Cooler Master 80mm Case Fan
Assembling the computer took about 10 hours. At the time, Jeff had not put up a video of how he built his computer, so I had to Google and piece together various videos on how to install each part. I did an external build first, placing the CPU, the memory, the storage, and the GPU directly onto the motherboard and testing each part to make sure it was working properly. Then I removed the GPU, mounted the motherboard in the case, and installed the GPU, the CPU cooler, and an extra fan inside the case. In hindsight, this probably added a few extra hours to my total build time, but since this was my first build I wanted to take it slow and not damage any of the parts!
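
For that "make sure each part works" step, a quick software check after first boot is also handy. Here's a small Python sketch (my own, not from Jeff's post; it assumes PyTorch is installed and a Linux-style filesystem) that confirms the OS actually sees the GPU, the CPU threads, the RAM, and the SSD:

```python
import os
import shutil
import torch

# GPU: confirm CUDA sees the 1080 Ti and report its ~11GB of VRAM.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device found -- check the driver install and PCIe seating.")

# CPU: the 1950X should report 32 logical threads.
print(f"Logical CPU threads: {os.cpu_count()}")

# RAM (Linux-only sysconf call): should come out close to 64 GB.
ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
print(f"RAM: {ram_bytes / 1024**3:.0f} GB")

# Storage: free space on whatever drive is mounted at '/' (adjust the path if needed).
total, _, free = shutil.disk_usage("/")
print(f"Disk: {free / 1024**3:.0f} GB free of {total / 1024**3:.0f} GB")
```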