Executive Summary
Advanced Micro Devices (AMD) is the "Second Source" in the most important supply chain on earth. Nvidia has an effective monopoly on AI training chips, and with it enormous pricing power. The hyperscalers (Microsoft, Meta, Google) hate this: they need a credible competitor to keep Nvidia honest. AMD's MI300 accelerator is the only merchant-silicon alternative shipping at scale. The thesis is not that AMD beats Nvidia, but that it captures 20% of a market AMD itself projects at $400B by 2027 simply by showing up.
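The scale of that claim is easy to sanity-check. A minimal sketch of the implied math, where the $400B TAM comes from the thesis above and the share and margin inputs are purely illustrative round numbers, not forecasts:

```python
# Back-of-envelope: what a 20% share of the accelerator market implies.
# TAM and share are from the thesis; the gross margin is a hypothetical input.
tam = 400e9          # assumed annual AI accelerator market, USD
share = 0.20         # thesis: AMD captures 20% "by showing up"
gross_margin = 0.50  # illustrative data-center GPU gross margin

revenue = tam * share
gross_profit = revenue * gross_margin

print(f"Implied revenue:      ${revenue / 1e9:.0f}B")      # $80B
print(f"Implied gross profit: ${gross_profit / 1e9:.0f}B") # $40B
```

An implied $80B would be several multiples of AMD's total company revenue today, which is why even partial success on this thesis is material.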
1. The MI300 Ramp
The MI300 is the fastest-ramping product in AMD's history: by the company's own account, it reached $1 billion in cumulative revenue faster than any prior AMD product.
- Memory Advantage: AMD packs more HBM (High Bandwidth Memory) onto its accelerators than Nvidia does: the MI300X carries 192 GB of HBM3 versus 80 GB on the H100. For inference (running a trained model), memory capacity and bandwidth matter more than raw compute. This is AMD's wedge.
- ROCm: AMD's software stack (ROCm) was historically terrible compared to CUDA. It is now "good enough" for the common case: PyTorch ships with official ROCm support, so mainstream models run without CUDA-specific code.
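The memory argument above can be made concrete with rough footprint math. A sketch assuming an illustrative 70B-parameter model in 16-bit weights (the 192 GB and 80 GB capacities are the published MI300X and H100 SXM figures; everything else here, including ignoring the KV cache, is a simplification):

```python
# Rough memory-footprint math for serving a large model.
# Weights alone: parameters * bytes per parameter (fp16/bf16 = 2 bytes).
params = 70e9            # illustrative 70B-parameter model
bytes_per_param = 2      # 16-bit weights
weights_gb = params * bytes_per_param / 1e9  # 140 GB of weights

hbm = {"MI300X": 192, "H100 SXM": 80}  # GB of HBM per GPU
for gpu, capacity in hbm.items():
    gpus_needed = -(-weights_gb // capacity)  # ceiling division
    print(f"{gpu}: {gpus_needed:.0f} GPU(s) just to hold the weights")
```

The 140 GB of weights fits on a single MI300X but needs two H100s before any KV cache is even allocated; fewer GPUs per model instance is the wedge.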
2. x86 CPU Share
While everyone watches AI, AMD is quietly stealing server market share from Intel.
- EPYC: AMD's server CPUs are more power-efficient than Intel's Xeons, and in a power-constrained data center, performance-per-watt is the metric that matters most. AMD is now near 30% server share, up from roughly zero before EPYC launched in 2017.
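The power-constraint point is just arithmetic. A toy sketch with purely hypothetical TDP and throughput numbers (these are illustrative inputs, not measured specs for any real part):

```python
# In a power-capped facility, aggregate throughput is set by watts per socket,
# not by per-socket performance alone. All figures are hypothetical.
budget_watts = 1_000_000  # a 1 MW power budget allocated to CPUs

cpus = {
    "CPU A": {"tdp": 360, "perf": 100},  # perf = arbitrary throughput units
    "CPU B": {"tdp": 300, "perf": 100},  # identical perf, lower power draw
}
for name, c in cpus.items():
    sockets = budget_watts // c["tdp"]   # how many sockets fit the budget
    total_perf = sockets * c["perf"]
    print(f"{name}: {sockets} sockets, {total_perf} total throughput units")
```

With identical per-socket performance, the part drawing 300 W instead of 360 W delivers roughly 20% more aggregate throughput from the same facility. That ratio, not per-socket benchmarks, is the purchase criterion the thesis says EPYC wins on.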
Risks to the Thesis
- Software Gap: Nvidia's CUDA is a moat nearly two decades in the making. If developers refuse to optimize for ROCm, AMD hardware sits idle.
- Custom Silicon: Google (TPU), Amazon (Trainium), and Microsoft (Maia) are building their own chips to replace Nvidia. They might skip AMD entirely.
Conclusion
AMD is the "Beta" play on AI. It is higher risk than Nvidia but offers potentially higher returns if it executes on the "Second Source" narrative.