A gamer uses a computer powered by an Nvidia Corp. chip at the Gamescom video games trade fair in Cologne, Germany, on Wednesday, Aug. 23, 2023. Gamescom runs until Sunday, Aug. 27. Photographer: Alex Kraus/Bloomberg via Getty Images
It’s not just human life that will be remade by the rapid advance in generative artificial intelligence. NPCs (non-playable characters), the figures who populate game worlds but have to date largely run on limited scripts — think the proprietor of the store you enter — are being tested as one of the first core aspects of gaming where AI can improve gameplay and immersion. A recent partnership between Microsoft Xbox and Inworld AI is a prime example.
Better dialogue is just the first step. “We’re creating the tech that allows NPCs to evolve beyond predefined roles, adapt to player behavior, learn from interactions, and contribute to a living, breathing game world,” said Kylan Gibbs, chief product officer and co-founder of Inworld AI. “AI NPCs are not just a technological leap. They’re a paradigm shift for player engagement.”
It’s also a big opportunity for the gaming companies and game developers. Shifting from scripted dialogue to dynamic player-driven narratives will increase immersion in a way that drives replayability, retention, and revenue.
The interplay between powerful chips and gaming has for years been part of Nvidia’s success story, but there is now a clear sense in the gaming industry that, after some initial uncertainty, AI in games is only beginning to take off.
“All developers are interested in how artificial intelligence can impact [the] game development process,” John Spitzer, vice president of developer and performance technology at Nvidia, recently told CNBC, and he cited powering non-playable characters as a key test case.
Technological limits and possibilities have always shaped the gaming worlds developers can create. The technology behind AI NPCs, Gibbs said, will become a catalyst for a new era of storytelling, creative expression, and innovative gameplay. But much of what is to come will be “games we have yet to imagine,” he said.
Bing Gordon, an Inworld advisor and former chief creative officer at Electronic Arts, said the biggest advancements in gaming in recent decades have been through improvements in visual fidelity and graphics. Gordon, who is now chief product officer at venture capital firm Kleiner Perkins and serves on the board of gaming company Take-Two Interactive, believes AI will remake the world of both the gamer and game designer.
“AI will enable truly immersive worlds and sophisticated narratives that put players at the center of the fantasy,” Gordon said. “Moreover, AI that influences fundamental game mechanics has the potential to increase engagement and draw players deeper into your game.”
The first big opportunity for gen AI may be in gaming production. “That’s where we expect to see a major impact first,” said Anders Christofferson, a partner within Bain & Company’s media & entertainment practice.
In other professional tasks, such as creating presentations using software like PowerPoint and first drafts of speeches, gen AI is already doing days of work in minutes. Initial storyboard design and NPC dialogue creation are made for gen AI, and that will free up developer time to focus on the more immersive and creative parts of game making, Christofferson said.
Creating unpredictable worlds
A recent Bain study noted that AI is already taking on some tasks, including preproduction and the planning of game content. Soon it will play a larger role in developing characters, dialogue, and environments. Gaming executives, Bain’s research shows, expect AI to manage more than half of game development within five years to a decade. This may not lead to lower production costs — blockbuster games can run up total development costs of $1 billion — but AI will allow games to be delivered more quickly, and with enhanced quality.
Ultimately, the proliferation of gen AI should allow the development process of games to include the average gamer in content creation. This means that more games will offer what Christofferson calls a “create mode” allowing for increased user-generated content — Gibbs referred to it as “player-driven narratives.”
The current human talent shortage, a labor issue that exists across the software engineering space, isn’t something AI will solve in the short term. But it may free developers up to put more time into creative tasks and learn how best to use the new technology as they experiment. A recent CNBC study found that across the labor force, 72% of workers who use AI say it makes them more productive, consistent with research Microsoft has conducted on the impact of its Copilot AI in the workplace.
“GenAI is very nascent in gaming and the emerging landscape of players, services, etc. is very dynamic – changing by the day,” Christofferson said. “As with any emerging technologies, we expect lots of learning to take place regarding GenAI over the next few years.”
Given how much change is taking place in gaming, it may simply be too difficult to forecast AI’s scale at the moment, says Julian Togelius, associate professor of computer science and engineering at New York University. He summed up the current state of AI implementation as a “medium-size deal.”
“In the game development process, generative AI is already in use by lots of people. Programmers use Copilot and ChatGPT to help them write code, concept artists experiment with Stable Diffusion and Midjourney, and so on,” said Togelius. “There is also a big interest in automated game testing and other forms of AI-augmented QA,” he added.
The Microsoft and Inworld partnership will test two of the key AI implications in the video game industry: design-time and assistance with narrative generation. If a game has thousands of NPCs in it, having AI generate individual backstories for each of them can save enormous development time — and having generative AI working while players interact with NPCs could also enhance gameplay.
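The design-time use case can be sketched in code. The snippet below is a hypothetical illustration, not Inworld’s or Microsoft’s actual pipeline: `generate_backstory` stands in for a call to a language model (here a deterministic template so the sketch runs offline), and the NPC fields are invented for the example.

```python
# Hypothetical sketch: batch-generating NPC backstories at design time.
# generate_backstory stands in for an LLM API call; the template below
# is a deterministic stand-in so the example runs without a service.

from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    role: str
    region: str
    backstory: str = ""

def generate_backstory(npc: NPC) -> str:
    # The prompt a real pipeline would send to the model:
    prompt = (f"Write a short backstory for {npc.name}, "
              f"a {npc.role} from {npc.region}.")
    # Stand-in for the model's response:
    return f"{npc.name} grew up in {npc.region} and became a {npc.role}."

def populate(npcs: list[NPC]) -> list[NPC]:
    # One pass over the roster; thousands of NPCs would be batched
    # the same way, replacing hand-authoring each biography.
    for npc in npcs:
        npc.backstory = generate_backstory(npc)
    return npcs

roster = populate([
    NPC("Mara", "blacksmith", "the Iron Hills"),
    NPC("Tobin", "shopkeeper", "Port Vale"),
])
for npc in roster:
    print(npc.backstory)
```

Because this runs once during development rather than live during play, hallucinations can be caught in review before the game ships — which is why the article’s sources expect this use to land first.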
The latter will be trickier to achieve, Togelius said. “I think this is much harder to get right, partly because of the well-known hallucination issues of LLMs, and partly because games are not designed for this,” he said.
Hallucinations occur when large language models (LLMs) generate responses untethered from context or fact — fluent, grammatical text about things that don’t exist or have no relation to the situation at hand. “Video games are designed for predictable, hand-crafted NPCs that don’t veer off script and start talking about things that don’t exist in the game world,” Togelius said.
Traditionally, NPCs behave in predictable ways hand-authored by a designer or design team. Predictability, in fact, is a core tenet of video game design. Open-ended games are thrilling because of their sense of infinite possibility, but they function reliably only because great control and predictability are built into them. Unpredictability is new territory for games, and could be a barrier to wider adoption of AI. Working out this balance will be key to moving forward.
“I think we are going to see modern AI in more and more places in games and game development very soon,” Togelius said. “And we will need new designs that work with the strengths and weaknesses of generative AI.”