Elon Musk has announced that his AI startup, xAI, will need 100,000 specialized chips to train and run the next iteration of its conversational AI, Grok. To make the chatbot smarter, Musk has told investors of plans to link these chips into a single, colossal supercomputer, which he calls a “gigafactory of compute.”
In a May presentation to investors, Musk said he intends to have the supercomputer running by fall 2025 and has taken personal responsibility for delivering it on time. Once finished, the interconnected network of Nvidia’s flagship H100 graphics processing units would be at least four times larger than the biggest GPU clusters in existence today, such as those built by Meta Platforms to train its AI models.
The project underscores Musk's commitment to advancing AI through xAI. By building a supercomputer of unprecedented scale, he aims to position the startup at the forefront of AI development, challenging leaders such as OpenAI and Google in a rapidly evolving sector.