This took way, way longer than I expected, but finally seeing the 2 GPUs running makes me really happy right now 🙂
So here we are at the beginning, with the hardware layer mostly sorted. But why build a homelab? What even IS a homelab?
Why build a Homelab?
With my career interest, and my actual job, in the “cloud” (GCP, AWS, Azure, etc.), a homelab offers a fantastic platform for learning, experimenting, and even running personal projects on something that is, at least in concept, very similar to what goes on in big data centers.
Especially for tasks involving machine learning and artificial intelligence, a good graphics card (GPU) is a real quality-of-life upgrade. The whole setup was designed around the ‘mighty’ Tesla M40 GPU, with its unique advantages and trade-offs.
The Setup
- Intel Xeon E5-2683 (16 cores, 32 threads)
- NVIDIA GTX 1660 Super
- NVIDIA Tesla M40 (24GB VRAM)
- 32 GB of DDR4 RAM
And yes, I had to watercool the Tesla M40 with an NZXT Kraken G12, because it’s a server graphics card and comes with no cooling of its own when stuffed into a classic ATX computer case. So why a Tesla M40?
The Allure of the Tesla M40:
The Tesla M40, released in 2015 and now mostly retired from production environments, was a powerhouse in its time, boasting 3072 CUDA cores and 24GB of VRAM. It remains a compelling choice for homelabs for several reasons:
- Cost-effectiveness: Compared to newer high-end GPUs, the M40 can be found at a fraction of the price on the secondary market. I got mine for ~$200 SGD, while an RTX 3090 with the same 24GB of VRAM still goes for ~$2K.
- Compute power and memory: It still delivers solid performance for many workloads, and its 24GB of VRAM matches an RTX 3090 and even exceeds newer cards like the RTX 4080 (16GB)!
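A quick way to confirm that both cards (and the M40’s full 24GB) are actually visible to the driver is `nvidia-smi --query-gpu=name,memory.total --format=csv`. Below is a small sketch that parses that CSV output; the sample string is illustrative output for this build (the exact names and MiB figures nvidia-smi reports may differ slightly), not a live query:

```python
# Illustrative nvidia-smi CSV output for this build; real output may
# differ slightly in names and reported MiB.
sample = """\
name, memory.total [MiB]
NVIDIA GeForce GTX 1660 SUPER, 6144 MiB
Tesla M40 24GB, 23040 MiB
"""

def parse_gpus(csv_text):
    """Return a list of (gpu_name, total_memory_mib) tuples."""
    rows = csv_text.strip().splitlines()[1:]  # skip the header row
    gpus = []
    for row in rows:
        name, mem = (field.strip() for field in row.split(", "))
        gpus.append((name, int(mem.split()[0])))  # "23040 MiB" -> 23040
    return gpus

for name, mem_mib in parse_gpus(sample):
    print(f"{name}: {mem_mib / 1024:.1f} GiB")
```

On the real machine you would feed it the output of the `nvidia-smi` command above instead of the hard-coded sample.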