Tesla CEO Signals Push Toward World’s Highest-Volume AI Chips as Dojo3 Returns and AI5 Claims Spark Debate
Tesla CEO Elon Musk is again leaning into a long-running ambition: turning Tesla into a major custom-silicon player by scaling one chip family across vehicles and data-center compute. In a new post on X, Musk said that now that the AI5 chip design is in good shape, Tesla will restart work on Dojo3, adding that the effort targets what he calls the highest-volume chips in the world.
Now that the AI5 chip design is in good shape, Tesla will restart work on Dojo3.
— Elon Musk (@elonmusk) January 18, 2026
If you’re interested in working on what will be the highest volume chips in the world, send a note to AI_Chips@Tesla.com with 3 bullet points on the toughest technical problems you’ve solved.
That message matters because it reverses the momentum of last year’s reporting that Tesla had stepped back from Dojo. Reuters previously reported that Tesla disbanded its Dojo supercomputer team and reassigned workers amid a strategic shift toward external compute partnerships, following staff departures and internal restructuring. Musk’s latest post suggests the internal compute track is no longer paused, at least not at the Dojo3 layer. That reframes Tesla’s roadmap as dual-track: buy leading external accelerators when needed, but keep building an internal silicon and training stack to lower long-term unit economics for Full Self-Driving, robotics, and future AI services.
Musk also outlined a faster iteration cadence for Tesla’s custom chips in a separate post on X, claiming Tesla plans to scale the lineup toward AI9 on a nine-month rhythm, a pace that would be unusually aggressive for any silicon program once verification, automotive safety constraints, and software stability are factored in. This is where the strategy looks clear even if execution risk remains high: Tesla wants a tight product loop that reuses platform building blocks so it can iterate like a software company, but on hardware that must survive real-world safety requirements.
The performance claims are the other headline. In a third post on X, Musk said Tesla is targeting Hopper-class performance with a single-chip AI5 configuration, and that a dual-die version rivals Blackwell, while also implying the cost structure is extremely favorable. If Tesla can approach that level of throughput per dollar at automotive scale, the upside is obvious: millions of deployed endpoints become a volume engine that traditional data-center chip vendors cannot easily replicate.
The industry reality, however, is that silicon is not a vibes business. Designing a chip is hard, but proving it correct, safe, manufacturable, thermally stable, and supported by a mature toolchain is where timelines slip. Even observers bullish on Tesla’s vertical integration note that sustaining a nine-month cadence typically requires evolutionary changes rather than clean-sheet designs, with heavy reuse across the architecture, memory systems, and software stack.
From a market-impact perspective, Tesla’s real differentiator would not be claiming parity with NVIDIA on a chart. It would be delivering a coherent, scalable ecosystem in which the same silicon philosophy supports car autonomy, Optimus-class robotics, and Dojo-style training clusters, while reducing supply-chain exposure and cost per unit of compute over time. That is the strategic bet being signaled in these posts, and Dojo3’s return to the narrative suggests Tesla wants more control over the full pipeline from data to training to deployment.
Do you see Tesla’s AI5 and Dojo3 push as a credible path to a true in-house compute advantage, or as ambitious messaging that will still depend heavily on NVIDIA-scale compute for years?
