On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest computer chip ever made and the fastest AI-specific processor in the industry.
We discuss:
- The advantages of using large chips for AI work.
- Cerebras Systems’ process for building chips optimized for AI.
- Why traditional GPUs aren’t optimal for AI workloads.
- Why efficiently distributing compute is a significant challenge in AI work.
- How much faster Cerebras Systems’ machines are than other processors on the market.
- Why some ML-specific chip companies fail, and what Cerebras does differently.
- Unique challenges for chip makers and hardware companies.
- Cooling and heat-transfer techniques for Cerebras machines.
- How Cerebras approaches building chips that will fit the needs of customers for years to come.
- Why the strategic vision for what data to collect for ML needs more discussion.
Resources:
Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/
Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/
Cerebras Systems | Website - https://www.cerebras.net/
Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#OCR #DeepLearning #AI #Modeling #ML