C-Suite Perspectives On AI: Christoffer Holmgård Of modl.ai On Where to Use AI and Where to Rely Only on Humans
An Interview With Kieran Powell
Having Data Available and Identifying a Human Driver: If you’re relying on data for an AI tool, make sure that data will actually be available to train from, and identify a human driver who operates the data collection and understands it well. An example from our business is helping game studios train AI Player Bots for their games. We lean heavily on learning from members of the team or their audience playing the game, allowing the bots to copy what the actual human players did. But this also means that we and the developer need to think about how those training games were played and whether they exhibit the behavior they want to see in the finished game. What you put in determines what you get out.
As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing modl.ai CEO Christoffer Holmgård.
Christoffer Holmgård is the co-founder and CEO of modl.ai. He holds a Ph.D. in Artificial Intelligence and Procedural Content Generation.
Holmgård has a stellar track record as a game analyst, artificial intelligence researcher, and game developer. He founded his first game development company, Duck and Cover Games ApS, in 2008. In 2011, Holmgård joined the award-winning game studio Die Gute Fabrik as a Partner and Managing Director, becoming the Chair of the Board of Directors in 2020. During his tenure, the studio secured more than 22 industry nominations and awards, including the GDC Innovation Award and the IndieCade Grand Jury Award.
Holmgård holds a BA in Psychology from the University of Copenhagen and an MSc in Media Technology from The IT University of Copenhagen. In 2015, he earned a Ph.D. in Artificial Intelligence and Procedural Content Generation from The IT University of Copenhagen and later completed a postdoctoral position in Game Engineering at New York University.
Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
I have had a lifelong relationship with video games since 1989, when I inherited the Amstrad CPC 464 from my older brother. At around 14 years old, I sent my first invoice to a customer who happened to be a friend. I was interested in learning about contemporary computer kits but did not have the pocket money to fund my interest. To solve this problem, I managed to get registered as a computer hardware vendor with a wholesaler. I ordered components from them and built gaming systems for my friends at wholesale prices, inadvertently undercutting the local computer shops.
After graduating from high school, I convinced myself that video games did not hold prospects for a proper career. So, I decided to study psychology instead. It turned out that psychology held ample opportunities for exploring human experience and behavior. I soon found myself working mainly with cognitive/experimental psychology and science, psychometrics, and even simulations and learning, all of which are key components of video game development. After completing my bachelor’s degree, I realized that I could pursue a Master’s degree in video game development. I left psychology with the idea that it was possible to make a living developing video games.
Before starting modl.ai, I was a co-founder and Managing Director of the indie game studio Die Gute Fabrik (German for The Good Factory), which has published games such as Saltsea Chronicles, Mutazione, Sportsfriends, Johann Sebastian Joust, and others.
It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
Early in my career, before modl.ai, I learned the hard way what can go wrong when a game isn’t tested properly. We built a video game with an institution here in Denmark that assists children whose parents are going through a divorce. The children take on a cat character and go on a journey where they help someone else in that situation, which helps the kids de-center and make sense of their emotions.
The game was especially timely during autumn vacations, when divorce filings peak, and that led to an opportunity to have the game covered on live TV by a national news publication. With nearly 20 percent of the Danish population watching the channel, we spent days testing the game to make sure it could handle that many people using it at once. We put in a lot of work to make sure it worked properly, but we didn’t have the capacity to simulate millions of users playing the game. Instead, we bought the biggest server we could and took all the precautions available to us.
Then the show went live and, of course, the game failed to load during the 10 to 20 seconds it was on screen. With the URL on display, there were simply too many people trying to use it at once. We had tried to prepare for every situation, but the experience taught me that you need to understand the parameters of what you can achieve, especially in a live situation. At the same time, we didn’t have the tools to properly simulate that load, which further instilled in me how important it is for developers to have the tools they need to be successful.
Are you working on any exciting new projects now? How do you think that will help people?
Overall, game developers have very limited time for learning new tools and integrating new tech into their processes. To help solve this, we are working on internal research at modl.ai to preempt that whole question and do away with the need to integrate, essentially making our products usable without installing anything. We are very keen to focus on this because developers who want to use automated testing features and processes in their organization would then be free to do so without investing in setting them up and running them.
We are also working on getting Large Language Models (LLMs) to watch games being played and identify whether there is a bug or glitch that needs to be fixed. This is especially useful for QA testers, who need tools with that close attention to detail to catch bugs; they can rely on automated processes to catch small details in case something is missed. We can even take that more strenuous work away from QA so that they can focus on features and control instead.
Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?
At modl.ai, our focus is integrating AI into other people’s business operations, and achieving that balance with human-centric roles is the biggest challenge. When we discuss the process with customers, people are keen to see concrete benefits because nobody wants to be the first company to adopt a new process. That requires building a lot of trust around these automation processes, and taking the time to consider how they will impact the people at the organization who are already doing this work.
Achieving that balance requires more holistic decision-making that considers how these automation processes can open up new opportunities for more human-centric work. The automated tasks can then focus on what you don’t need a human operator to do.
Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?
From my experience, I tend to focus on modeling the player experience at the emotional and cognitive level, responding to what the player needs at a given moment. By keeping that human perspective, we can better understand how people are reacting to the game and whether it is providing the experience we intended. In this way, video games are another art form meant to elicit an emotional experience that we want people to respond to and enjoy.
How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?
Automation always leads to growth, even for those who are involved in that automation. We believe this only provides more opportunity to scale the kinds of games that developers are creating, which can help create new roles for those using and complementing these automated processes. When it comes to data usage, we’re not collecting any data about people individually. Instead, we’re primarily interested in what you do as you interact with the game, which gives us a basis for making better decisions and providing a better game experience.
For anyone using AI, the most important thing is to be transparent about the fact that AI is being used. With that open communication, I would venture to say that 99% of people would be fine with it, since it’s actually contributing to giving everyone a better experience with the game later.
Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?
One great example of this is our work with Good Games Studios. In one of their games, every new level the game designer made had to be played 10–15 times to understand how many moves the level took and how difficult it was. Typically, the game designer does this manually because a person needs to estimate how difficult the level is. Having a bot play the level for him saved time: he could get the information quickly while still making adjustments to the design himself. In this way, AI becomes part of the creative process by speeding up the creative work for the designer. It allowed him to get through 50 levels a day, which is 16 percent faster than doing everything manually.
Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?
1 . Human Intentionality and the Need for AI: When considering automated processes, first ask whether the process is really something that should be automated. Fundamentally, AI is just another tool to help you accomplish a task. Keeping human intentionality in mind will help you use AI correctly, where it needs to be used. Without considering all your options, companies can overinvest in a complex solution, so it’s important to know which tools to use for a given situation. For instance, at Good Games Studios it was critical to retain the human designer’s intent for the level while automating the playing of the level, as that freed up more time for the core task: inventing and iterating on new designs.
2 . Having Data Available and Identifying a Human Driver: If you’re relying on data for an AI tool, make sure that data will actually be available to train from, and identify a human driver who operates the data collection and understands it well. An example from our business is helping game studios train AI Player Bots for their games. We lean heavily on learning from members of the team or their audience playing the game, allowing the bots to copy what the actual human players did. But this also means that we and the developer need to think about how those training games were played and whether they exhibit the behavior they want to see in the finished game. What you put in determines what you get out.
3 . Establishing an Element of Trust: When implementing a new process that uses AI, you need to demonstrate that the developers and QA teams are going to get the end result that they are looking for. As a company, you also need to ensure that you are instilling that transparency with your customers so teams still have creative freedom in their jobs. We continuously talk to our customers and users to make sure we are targeting real needs in their daily workflows rather than providing applications of AI that really do not move the needle for them. For automation to be useful and trusted, it should help someone achieve their goals faster.
4 . Maintaining Human Connection and Authenticity: In the gaming industry, authenticity and transparency are highly valued to show that real people are involved behind the game and that they are making real connections with people across the globe. Keeping people in every stage of game development is critical for keeping human connections alive. For us, it is about helping the human creators reach their vision for the game’s design faster than they could otherwise.
5 . Creating a Roadmap for the Future: When you are using AI or people to implement a new process, employees and stakeholders need to see the company’s vision to understand where they will have the opportunity to grow. Similarly, customers need game studios to be transparent about when AI is being used and why. If you include the people who will eventually use the tools in the process, you will likely get the right outcomes from process change with AI, just like with any other tool.
Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?
When it comes to game development, human intervention will always be important for creating a fun and enjoyable game experience. AI technology will continue to grow and improve, but it will also continue to serve as a tool for QA testers and developers to realize their own visions.

Many new applications of AI for game development are on the near horizon, and I think you can largely divide them into efficiency tools and experience tools. In this interview, I’ve covered at length how we’re building efficiency tools at modl.ai from a QA perspective. Longer term, I think these tools will expand to include error identification or even correction. It’s likely that in the future a QA bot will be able to not only find a problem in the game but also suggest a solution in the game’s source code, or even submit the fix itself and re-test the game with only intermittent human supervision. There’s also a lot of work happening, and accelerating, in using AI for content generation, whether for art, audio, or coding assistance.

On the experience side, we’re heavily invested in making multiplayer experiences faster to build and more enjoyable. Currently, we focus on the core game playing itself at modl.ai, but I believe there’s a promising future where responsive bots or agents that learn from humans and play with them in deep ways emerge to create new experiences for players. It is early days for this kind of work, but the experiments being made now might significantly change what it feels like to play an online video game five years from now. As we continue to learn how AI can make our work more efficient and of higher quality, we can only set our sights even higher for the future and make better games.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂
- The most pressing problem for humanity that I can imagine us realistically solving as a species is access to unlimited, free, green energy for everyone, which would be transformative. I think fusion energy represents a realistic technology for providing this in time. I have been interested in this problem since I was a kid, so much so that I once spent my 8th grade work placement with a nuclear fusion research group, though my work life ended up going in a different direction. A global public/private push into research and engineering in this space would be amazing. Things seem to have picked up speed in recent years, but there’s still so much more that could be done to support that work.
- Closer to my own day-to-day work and wheelhouse, I think gaming at large has suffered from toxicity and harsh tones between gamers, and also in the relationship between audiences and developers. Fans who care deeply about their favorite games and franchises may become frustrated over missed expectations, whether reasonable or not, resulting in hurtful actions against the developers working on those titles. Every developer I have ever met would work their hardest to ship the best possible game and fulfill their creative vision, so I wish the players of games had greater understanding of the effort and love that developers pour into their creations. I think this is improving: developers have put effort into bringing players and their communities into the development process, which helps a lot in building that understanding, but there is still room for growth and improvement in this part of gaming.
How can our readers further follow your work online?
You can check out our website at modl.ai for updates on our work.
This was very inspiring. Thank you so much for joining us!
About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the technology, B2B, retail, and financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC, and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.
C-Suite Perspectives On AI: Christoffer Holmgård Of modl.ai was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.