A Privacy Policy Change Has Genshin Impact Players Concerned Voice Chat Data Could Train AI Models Unless They Opt Out

Genshin Impact players are raising fresh concerns about generative AI and consent after spotting wording changes in HoYoverse's privacy policy that appear to broaden how player input can be used for model training. The discussion accelerated after a now-removed Reddit post was echoed by Bluesky user Cevian, who highlighted a change, dated January 14, 2026, on the official policy page and pointed out that a voice-chat-specific clause had been removed.

At the center of the debate is a shift in how the policy communicates voice-based data handling versus broader user-generated input. The removed section previously described voice chat as a real-time communication feature and explained that voice communication data could be processed for operational purposes, such as providing the communication service itself, maintaining security and stability, moderating content, and complying with applicable laws and regulations. In other words, it read like a traditional safety-and-operations clause rather than a model-training clause.

With that voice chat subsection removed, players are focusing on a more general statement in the policy, under the section explaining why personal data is collected and processed. The updated wording states that user-generated input such as chat data may be used to train and improve the model used to provide services, and that players can opt out of model training at any time in service-related settings without affecting existing gameplay. The newer phrasing no longer explicitly separates text chat from voice communication, and that ambiguity appears to be driving the strongest interpretations among players, especially because the opt-out is framed as something the user must proactively set.

It is important to separate perceived risk from confirmed behavior. Based on the language as presented, the policy does not explicitly state that voice chat is being used to train a generative AI model, and the earlier, removed clause was primarily oriented around moderation and security. However, player trust is extremely sensitive right now, and the industry has learned that even the appearance of on-by-default data use can trigger backlash, especially when the data in question could include voice.

That is why this situation is resonating beyond Genshin Impact's community. Players increasingly measure privacy and AI policies through a consent-first lens, where the expected standard is clear disclosure, clear scoping of which data types are included, and a true opt-in for anything that could be interpreted as training. When a policy update reads like it expands model-training permissions while leaving ambiguity around voice, the conversation moves fast, even before any official clarification is issued.

We have reached out to HoYoverse for comment. A direct statement clarifying whether voice chat is included or excluded, how the opt-out is presented in game, and what the model-training scope actually covers would likely defuse most of the concern. Until then, this is a classic trust-and-communication challenge: players are reacting to ambiguity, not necessarily to a confirmed change in practice.


Do you think game studios should require opt-in for any AI model training that involves player communications, or is a clear opt-out setting enough if it is explained transparently in game?

Angel Morales

Founder and lead writer at Duck-IT Tech News, dedicated to delivering the latest news, reviews, and insights in the world of technology, gaming, and AI. With experience in the tech and business sectors, he combines a deep passion for technology with a talent for clear and engaging writing.
