Google Cloud Games Boss Claims Roughly 9 Out of 10 Developers Are Already Using AI, But Many Are Not Telling Players
Google Cloud’s games division head Jack Buser has made another major statement on artificial intelligence in game development, claiming that AI-powered tools are already deeply embedded across the industry, even if many studios are not openly discussing it with players. In a recent MobileGamer.Biz interview, Buser said that “roughly 9 out of 10” game developers surveyed by Google around Gamescom 2025 confirmed they were already using AI-powered tools in their development workflows.
The statement follows Buser’s previous argument that the video game industry should embrace AI in the same way Tony Stark embraces Iron Man’s suit, positioning the technology as an accelerator rather than a replacement for human creativity. This time, however, his comments went further by suggesting that many of the games players already enjoy were created with AI assistance, whether players know it or not.
"When there are technological revolutions in this industry, oftentimes you have a reaction from the player that's like, hold on, I know what my favourite games are, and I'm worried about change. Am I going to like the games of the future? Because I sure like the games I'm playing now. And I totally get that reaction."

Jack Buser
Buser framed player resistance as a familiar response to major technological shifts in gaming. According to him, players may be worried that AI will change the creative identity of future titles, but he argues that AI is already part of the production pipeline behind games that have shipped and are currently being played.
"I think what players don't realize is that their favourite games right now were already built with AI. Those games have shipped. We did a survey around Gamescom last summer with studios all over the world. Roughly nine out of 10 game developers told us, yeah, we're using it."

Jack Buser
Buser also addressed the gap between Google’s claimed figure and other industry surveys, which have reported lower adoption numbers, often closer to 40% or 50%. His explanation is that the difference comes down to disclosure, not actual usage. In his view, many developers and studios are already using these tools but are unwilling to confirm it publicly for fear of player backlash, reputational damage, or the broader controversy around generative AI in creative work.
"That gap is basically the developers' willingness to tell you whether the fact of the matter is that it's being used."

Jack Buser
One of the studios Buser cited was Capcom, which he described as a major user of Google Cloud’s AI tools. However, Capcom should not be placed in the same category as companies quietly avoiding disclosure. The publisher has already told investors publicly that it plans to use generative AI to improve efficiency and productivity in development areas such as graphics, sound, and programming, while also stating that it does not intend to use AI-generated assets or materials in final products. Reports on Capcom’s position also note the distinction between internal efficiency tools and player-facing game content.
Buser described one potential use case involving large-scale world creation, where developers need to generate, review, and organize massive amounts of concept material during pre-production. He said studios are using tools such as Gemini and Nano Banana to rapidly generate large volumes of ideas, then using Gemini again to curate those ideas so art directors can focus creative resources on higher-value areas such as main characters, major enemies, core scenes, and important objects.
"What they're doing is they're using Nano Banana and Gemini to rapidly generate just countless ideas, and then they're talking to Gemini to actually go through those ideas and curate them."

Jack Buser
While this sounds practical from a production management perspective, Buser’s framing is also open to criticism. The idea that developers are being slowed down by individual pebbles, blades of grass, or repeated art reviews does not fully reflect how modern AAA art pipelines already operate. Large studios have long used procedural generation, asset libraries, photogrammetry, modular environment kits, outsourcing pipelines, and specialized tools for vegetation, rocks, terrain details, and background assets. Presenting generative AI as the first meaningful solution to these problems risks underselling the craft and technical skill already present in game art production.
The broader question is not whether AI tools are being used. At this point, it is increasingly clear that they are. The more important questions are where they are being used, whether they are replacing creative labor or supporting it, and whether players deserve clearer disclosure when generative AI contributes to a product they are buying. The 2026 GDC State of the Game Industry discussion around AI shows that the topic remains highly divisive, with reported studio adoption sitting at a significant level while developer sentiment remains strongly split. Some reports highlighted that 52% of developers believe generative AI is having a negative impact on the industry, while usage varies heavily depending on job role and department.
Buser’s comments also arrive at a time when the business side of gaming is searching for ways to reduce ballooning production costs. AAA development cycles can now stretch across five, six, or seven years, with budgets that put immense pressure on publishers to deliver massive commercial hits. Some executives pitch AI as a way to reduce repetitive work, speed up iteration, and allow studios to take more creative risks. However, that promise remains largely unproven at scale, especially when the industry’s instability is also tied to poor production planning, over-expansion, layoffs, unrealistic revenue expectations, live-service saturation, and the rising cost of global development.
This is where Buser’s argument becomes more complicated. AI may help with brainstorming, internal asset exploration, code assistance, localization support, marketing material, testing workflows, and production organization. But claiming that AI is already solving the game industry’s cost crisis is a much larger statement. The technology may improve certain workflows, but it does not automatically fix leadership issues, creative bottlenecks, over-scoped projects, market saturation, or the financial pressure that has pushed so many studios into layoffs and restructuring.
The player trust issue also remains unresolved. If a studio uses AI for internal ideation only, many players may be more accepting. If AI-generated material appears in final art, writing, voices, cosmetics, or monetized content, the reaction can be much stronger. The distinction matters, and this is where transparency becomes a strategic necessity rather than a public relations burden.
For now, Buser’s claim adds more fuel to one of the most important debates in modern game development. AI is no longer a distant future concept for the games industry. It is already inside the pipeline. The real battle now is over disclosure, creative control, labor impact, and whether the technology will genuinely help developers make better games or simply become another corporate tool for cutting costs while asking fewer people to do more.
What do you think? Should studios clearly disclose when AI tools are used during development, even if the final game assets are still made by human artists?
