This past week, Nvidia unveiled its new graphics upscaling technology, DLSS 5, including a feature that gives in-game character models AI makeovers. The models’ drastically different appearances, reminiscent of the “yassified” style popular in cheap mobile games, drew a public backlash, not just out of revulsion at how they looked, but because the feature alters work that game developers labored over, without their input.
Gamers are rebelling against the use of generative AI in the games they play, especially when it isn’t disclosed. That makes the technology tricky to use, whether behind the scenes to whip up code and art during development or in player-facing features, like generating nonplayer character dialogue in real time in response to a player’s choices.
Back in January, planners for the Game Developers Conference released their annual state of the games industry report for 2026, in which 52% of respondents reported that generative AI was used at their company, though only 36% said they use it as part of their jobs; some say it’s optional, at least for now. They mostly use the technology for research and brainstorming (81% of respondents), writing emails and scheduling (47%), or code assistance (47%), among other tasks. But developers themselves are increasingly skeptical of generative AI, with 52% responding that it’s bad for the industry, up from 30% last year.
By the time the show rolled around in the middle of March, uncertainty around generative AI permeated GDC, held in San Francisco. Like most years, the professional convention was a nexus for members of the games industry to share lessons, make deals and forecast the next year in gaming. But as I walked around the halls of GDC 2026, I saw a stark juxtaposition: a handful of smaller games proudly using generative AI, and relative silence from the rest of the industry.
It’s still very early days for the use of gen AI in video games. At prior GDCs, I’d seen primitive gen AI-powered NPCs running on Nvidia tech and Microsoft explaining how its Copilot tech would provide in-game tips and advice, but neither of these player-facing applications has really arrived in any big game in 2026, or even debuted at the margins. If there were a killer gen AI application that made its use essential in production or in gameplay, we’d have probably heard about it. Or as Chris Hays, lead services programmer at id Software, put it, gen AI isn’t nearly as transformative as the true paradigm-shifting tech we’ve seen before.
“People weren’t begging people to use the web when it came out. If [generative AI] was really as revolutionary as the web, people would be using it,” Hays said.
I sat down with Hays, who is also a lead organizer at id Software’s Big Friendly Union, and Sherveen Uduwana, treasurer of the United Videogame Workers union (which made its public debut at last year’s GDC 2025), to chat about the state of the games industry, including how much generative AI developers are actually using. Between the two of them, the consensus was: not much. And in the cases they’d heard of where it was used in development, humans had to step in and fix the AI-created errors.
“I’m skeptical, even for the studios that say, ‘We’re implementing AI into the process.’ We’re not seeing the number of revisions that are happening after these AI-generated content, where essentially a worker is going in and fixing all these mistakes to the point that it possibly could have been done without the AI in the first place,” Uduwana said.
Amusingly, Hays said, freelancers he’s talked to have loved the AI push — as they’re hired to come in and fix AI’s mistakes.
At Google’s booth in the Moscone Center’s West Hall, the company showed in-house demos of how Gemini can be used in games. In this example, players type conversational responses to NPCs to progress the game.
Who’s actually using gen AI in their games?
I’d chatted with Hays and Uduwana at the Communication Workers of America booth on the GDC show floor beneath the Moscone Center’s North Hall. (Disclosure: One of the CWA’s member unions, the NewsGuild, represents editorial workers at CNET. Until recently, I was a member.) A few hundred feet away, I walked into the Google booth, where the tech giant was showing off ways that its Gemini gen AI-powered assistant could be used in games — including some that were set to launch.
Google’s sizable space held a handful of internally built demos showcasing how one could use Gemini in a game. They were pretty rudimentary. In one example of gen AI-powered NPC conversations, a Google employee demonstrated how players could talk their way, ChatGPT-style, through a village and order a drink at a tavern. I got hands-on with another demo, walking around a server farm shooting robots while an assistant kept up a constant flow of commentary, Zelda fairy-style, on my performance, even healing me if I took too much damage, like a reactive easy mode.
But next to these were actual games purportedly coming out soon. I saw one, Colony by Parallel Studios, a strategy game for phones aiming for a release in the next three months that lets players oversee and defend a settlement on a distant world. As Game Director Andrew Veen told me, Colony uses Gemini-powered large language models in two ways. First, to let players solve in-game challenges with suggestions that the AI judges: To thaw a frozen power core, for instance, players have tried bombs, flamethrowers, napalm and even pickaxes (all of which have worked).
One of the upcoming games in Google’s booth, Parallel Studios’ Colony, is a mobile game that uses Gemini to let players insert their own creative solutions to game challenges as well as Google tech to translate 2D images to in-game 3D items.
Second, Colony uses a Gemini workflow that starts with Nano Banana generating 2D images of objects, then puts them through the Google-owned Atlas tech to convert them into 3D models within the game. Currently, players can create helmets for their characters this way, but the plan is to eventually expand into armor, furniture and vehicles, like Animal Crossing in the far-flung future meets Fallout Shelter, Veen explained. Converting an image into a 3D item you can equip on a character takes about two and a half minutes on Gemini’s servers, but since Colony is an “idle” mobile game where base-building progress happens in real time, that delay is built into the mechanics.
Veen added that Gemini has also sped up Parallel Studios developers’ workflows; they use it to help write code and get feedback on designs. The studio started work on Colony nearly a year ago and went it alone for the first eight months, but partnering with Google and adopting its AI tech let the team do more in the most recent three months than in the eight before. Combined with Atlas, the tools have shown Veen and his team that “we can build a game that we otherwise wouldn’t be able to.”
“I don’t think we get here without Gemini,” Veen said.
It’s worth noting that, aside from Google, no other major company was showing off gen AI integrations, not even Microsoft, which was trumpeting its Copilot for Gaming initiative at last year’s GDC. Despite a block of sponsored Xbox panels, the company’s big news was that developer kits for its next console, codenamed Project Helix, would start going out in 2027.
To be fair, GDC’s main draw is looking backward, with most of its programming being panels of developers discussing lessons learned in the last year of game development. The highest-profile of these covered major games released in 2025 like Clair Obscur: Expedition 33, Silent Hill f and ARC Raiders. Most are smaller discussions split across different disciplines, such as audio, graphics or narrative. They’re also a mix of official programs vetted by GDC parent company GSMA and sponsored ones that individual companies paid to host. The vast majority of AI-related panels were the latter, reinforcing that gen AI hadn’t landed in last year’s games in any big way.
Unleashed Games founder Irena Pereira speaks to attendees about using generative AI in the ideation process during game development.
But there were some illuminating panels that I sat in on, featuring developers from smaller studios that had been experimenting with gen AI in their pipelines. In one, Irena Pereira, product development specialist and founder of Unleashed Games, explained how gen AI can help with the “blank page problem”: generating, say, 500 crappy ideas so a developer can build the lone promising one out into a proper quest, item, character or story beat.
“[You’re] creating those compelling stories that really should be coming from you, but they can start in a more automated popcorn kind of zone that is brought to you by AI, and then at the end of the day, you finish it as a human,” Pereira said.
What’s clear is the restraint: Gen AI may be used in preproduction or organization, but not for anything that ends up in the final product, Pereira said.
That’s probably wise, considering the hair trigger gamers are on for anything AI-related. They pounced on Baldur’s Gate 3 creator Larian last December after CEO Swen Vincke brought up using gen AI to ideate its next Divinity game, to the point that the studio confirmed in January that it would abandon some generative tools to ensure it can trace the provenance of the art ending up in the final game. But in the same Reddit AMA in which Larian answered public questions, the studio also acknowledged experimenting with other machine learning tools to “reduce the ‘mechanical legwork’” and speed up game development.
That and other incidents have fueled backlash from players whenever they hear about any AI use in making games. For developers, it’s more nuanced. David “Rez” Graham, AI programmer and lead developer of The Sims 4, who hosts AI roundtables at GDC for developers to share ideas, explained to me over email that the industry is against generated assets like art ending up in games, but that engineers have started to try out code assistance and generation tools like Claude Code or Codex.
The difference, which Graham discussed in his Human Cost of Generative AI panel at GDC this year, is one of intent: Art generators like Midjourney are designed to replace artists, while most current code-generating tools are intended to assist and accelerate engineers’ work, he said. Claude Code and Codex are useless unless you know what you’re doing. To wit, Graham had Claude audit one of his projects, around 2,000 lines of code, to look for bugs; of the 12 it reported, only two were real issues, while the other 10 were false positives. If he’d let the gen AI tool apply the suggested fixes for those false positives, it would have created 10 new bugs, he said.
“You still need significant programmer oversight, so the tools act more like an accelerator,” Graham said. “As long as that remains true, I think engineering will continue to embrace them.”
The convention floor of GDC 2026, below the ground floor of the Moscone Center.
Reading the tea leaves: Gen AI in 2026 games… and beyond
In previous years of GDC, the halls of Moscone Center were draped in advertisements from gaming companies riding the latest wave of tech grifts, from blockchain to NFTs to web3. Now it’s generative AI, and though the ads were less garishly slathered over the convention center this year, it’s hard to shake the technology’s association with those trendy waves of yesteryear.
Yet generative AI seems to have more potential than those other technologies did, even if its applications are nowhere close to widespread. Unlike the others, gen AI is being treated cautiously.
Another panel I sat in, sponsored by AI audio acting company Lingotion, was titled “How to Build or Use Generative AI That Is Legally Compliant, Safe, and Ethically Sustainable.” Though obviously pitching the company’s services, the presenter carefully explained that the only way to ethically clone an actor’s voice to generate lines for a game is to properly license all data from them directly and be clear about its purpose for generative AI — then share revenue with them.
Gen AI applications in gaming are still piecemeal. As in years past, I visited Nvidia’s hotel room demo to get a peek at its tech behind closed doors, though it happened a week before the company’s controversial reveal of DLSS 5 (which wasn’t present). What I saw were less radical tech progressions, like last year’s more modest DLSS 4.5 that reduced screen menu issues when upscaling graphics and offered better ray tracing in new games like Resident Evil Requiem.
At Nvidia’s GDC 2026 showcase, one computer demonstrated an application of Nvidia Ace generative AI that empowered an in-game advisor (seen in the top left corner of the screen) to provide advice tailored to players’ situations in the strategy game Total War: Pharaoh.
There was also a demonstration of Nvidia Ace, the company’s suite of gen AI developer tools, specifically using the tech to power an advisor that helps players in Creative Assembly’s Total War: Pharaoh. While Total War games typically offer generic strategy advice, this advisor makes recommendations based on the player’s situation. In the demo, an Nvidia employee typed questions that the gen AI-powered in-game assistant answered, like why a nearby province rebelled, but it wouldn’t share information outside the player’s knowledge, like intel beyond the fog of war.
Whether in-game or in development, gen AI tools aren’t mainstream in gaming, at least not yet. We’re starting to see some use cases at the fringes of game development, but they’re still far from being embraced by the world’s biggest game companies.
Despite years of holding AI roundtables at GDC and working directly on AI applications in gaming, Graham is hesitant to make predictions about the future. Things are moving too fast, and the gaming industry doesn’t know how to tackle gen AI’s big issues: stolen work used as training data, the environmental impact, the economic impact (like the RAM shortage), the labor impact and more. Considering the intense investment in the technology with little financial return, Graham compared this moment to the dot-com boom and bust, and he expects a similar wipeout of AI companies; when the dust settles, we’ll see the final form of AI in gaming. Perhaps then, the US, the EU and other countries will set AI regulations, Graham theorized.
But in the short term, Graham expects more companies to try to integrate gen AI into their games. He noted that more games using the technology have already been released, pointing specifically to Whispers From the Star, released last August, which is extremely upfront about using AI to power dynamic conversations between the main character, a female astronaut, and the player, who talks her through surviving a crash landing on an alien planet.
“It has a ‘Very Positive’ rating on Steam, so it’s clear that players aren’t against gen AI as a whole, just when it’s used in place of art,” Graham said.
On the convention floor, the Communication Workers of America hosted a booth for curious games industry members to come discuss union options.
For union leaders and game developers Hays and Uduwana, the reasons gen AI is still used only in smaller games, and not by the biggest names in gaming, are mainly twofold: The tools aren’t refined enough yet, and developers like themselves resist using technologies that would threaten their fellow workers’ employment.
“I know anytime we even have any discussions about AI, it’s like, it should never do something that you couldn’t do yourself,” Hays said. “If it’s not an accelerant for you, then you’re not using a tool. You’re just having something that’s replacing someone’s job.”
Hays acknowledged that Microsoft, which owns his studio id Software, is a big backer of AI, but so far, the tech giant has only said it wanted gen AI tools that accelerate work productivity. The Big Friendly Union is taking Microsoft at its word.
“We’re not against movement forward. We’re against things that are immoral, that take jobs, that are bad for the environment, that are bad for people,” Hays said. “And if there are wins, then it would be OK. But there haven’t been, which is why there’s not a lot of movement.”
Gen AI’s inability to rival what developers can make is a testament to their competence and skill, Uduwana said. Hopefully, he said, this makes clear that the thousands of hours of labor that go into making games deliver an attention to detail that players notice, find compelling and respond to emotionally.
It’s not hard to see how that positive reaction to conventionally made games is linked to players’ negative reactions whenever they discover gen AI wasn’t disclosed in the creation of new games. Sometimes, the truth comes out when gamers realize that crudely made visual assets or text were generated by AI. Even if it turns out that the materials were minimal or left in by accident, as with last year’s game The Alters, players still feel betrayed and mistrustful of other parts of the game.
“I do think that people who play games are smart about what they’re consuming, and that they see the impact that generative AI has, and how it’s leading to less quality control in the franchises that they are really excited about,” Uduwana said. “And I think that those anxieties are things that there’s common ground between the workers and the people playing the game.”
To these seasoned game developers, there’s a pretty simple truth: It’s not being used in big games yet for good reason.
“There are plenty of studios that are pushing AI. They’re not the ones that are doing well,” Hays said. “Everyone sees it, and the players are rejecting it. So long as we want to be successful, we’re not going to be using [AI] tools.”