Level-5 CEO Akihiro Hino Warns Anti-AI Stigma Could Slow Game Innovation, Calls GenAI a Tool That Must Be Used Responsibly
Generative AI continues to sit at the center of one of the most emotionally charged debates in modern game development, especially as awards conversations and community backlash increasingly shape how studios communicate their production pipelines. In that climate, Level-5 chief executive officer Akihiro Hino has weighed in publicly, arguing that turning AI use into a moral panic risks throttling broader digital progress rather than protecting creativity.
In a statement posted on his official social channel, Hino says AI enables time savings that cannot be dismissed, and he warns that if the industry builds the impression that using AI is evil, it could seriously hinder the progress of modern digital technology. He frames the technology as a capability amplifier that can reshape production timelines and development economics, not as a replacement for creative leadership or human accountability.
Today, unusually, I'm writing about something other than a game I'm making.

[On the uproar surrounding AI] …

— Akihiro Hino (@AkihiroHino) December 26, 2025
Hino also uses the post to address what he describes as a misunderstanding tied to Level-5. He rejects claims that the studio is letting generative AI do all of its programming, explaining that the rumor appears to have stemmed from commentary about an unreleased title themed around AI, in which he raised an example of deliberately letting AI handle programming for that specific concept. In his telling, the anecdote was offered as a thought experiment about where the industry could be headed, then repeated without context until it hardened into a claim about Level-5's current production reality.
From there, Hino's position is blunt and operational. He argues that AI has the potential to upend the common sense of game production, compressing the timeline for the kind of large-scale projects players expect and shifting from a world where big releases can take 5 to 10 years toward a meaningfully faster cadence. The promise he is selling is not convenience for executives but throughput for players: more games, more iteration, and more opportunities for studios to take creative swings without being locked into decade-long pipelines.
He also tackles the ethical layer directly, separating misuse from the technology itself. In a practical analogy, he compares AI to any tool that can be turned to good or harmful ends, acknowledging that AI can produce plagiarized content if misused but arguing that, used properly, it can further enrich the creative world. The core message is governance over fear: the industry should aim for clear rules, transparent standards, and responsible use cases rather than broad condemnation that treats all AI usage as inherently illegitimate.
For developers, publishers, and platform holders, the business implication is obvious. Public sentiment is becoming a production constraint. Even when AI is used for narrow efficiency gains such as internal prototyping, automation of repetitive tasks, or assisting with tooling, studios now have to consider perception risk alongside technical risk. Hino is effectively calling for a reframing that protects innovation velocity while still leaving room for enforcement against plagiarism, rights violations, and deceptive practices. The industry needs a playbook that can scale, not a stigma that freezes experimentation.
Should studios be required to disclose exactly how AI is used in development, or is it enough to enforce anti-plagiarism rules and judge the final game on its quality?
