AMA with Mark Riedl

A discussion of AI, storytelling, and games!
by Hidden Door

Happy Thursday! For this week's AMA (ask-me-anything), we were privileged to welcome Dr. Mark Riedl, a Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center. He led a wide-ranging discussion covering computational creativity, safe AI systems, and more — check it out!

This is part of a series of chats with team members and friends with backgrounds in games, literary publishing, AI/ML, and more. Be sure to join our Discord for the latest events and updates!

Could you introduce yourself, and tell us a bit about your work?

Hello! I'm Mark Riedl. I'm a professor at Georgia Tech and I work on AI for storytelling and computer games. I also work on "serious" stuff like AI explainability and AI Safety.

In a way, these are all related. I am interested in AI that enhances the human experience. To do that, we need to build AI systems that understand and model the human experience and use that model on behalf of the user, whether to entertain, educate, and so on.

What drew you to artificial intelligence?

I got into artificial intelligence at a time when it was seen as a curiosity — a class in college that one could take but that didn't seem to have a lot of practical use. I actually got into AI through HCI (human-computer interaction). I worked briefly at a company with psychologists who were trying to make software more usable. I wanted to make software more usable too, and one way of doing that is to make it learn from and anticipate the user.

From there I started looking at computer games, because they were one of the applications that had artificial intelligence controlling bots. I wanted to know if we could make the bots smarter. That then got me into storytelling. If we could let users do whatever they wanted in a game, could we also start dynamically changing the storyline?

What about AI and storytelling got you interested in that intersection?

When we look at traditional computer games, the storyline is usually hard-coded. In fact, most of the important plot points in computer game stories are told through cut-scenes or other pre-scripted scenes. That means that the player's agency is quite limited. They are sort of along for the ride. If we let the user do anything they want, they might do something that was bad for the story (kill the bad guy in the first scene?). But maybe that is okay if we could just create a new storyline that was just as good. Okay, so how does one generate a story? Well, it turns out storytelling is one of those things that is really easy for humans — we all do it every day, whether relating something around the dinner table or entertaining a kid — but really hard for computers.

Storytelling is hard for computers because it requires so many things like commonsense reasoning and understanding sociocultural rules and emotions and causality. All these turn out to be first-class hard problems in AI.

So in a sense, storytelling and AI go hand-in-hand. Thinking about storytelling lets me think about hard AI problems that are not unique to games. And sometimes it allows me to propose new solutions.

Are you aware of any research on international AI storytelling, e.g. not using English, or using differences in language to improve model training? For instance, there are often sayings that are difficult to translate cleanly, or different methods of storytelling across cultures.

Quite a bit of storytelling AI research is, unfortunately, in English and built on Western ideals of storytelling. There is some in Chinese and a fair amount coming out of India. There are probably two issues going on here. One is the language of presentation: English vs. other languages. The other is cultural differences in how stories are structured. This is where I think the big differences will be. Stories can be separated from their presentational language, and in my opinion the interesting thing is not what language we tell the story in but how we structure the content of a plot (and whether that is culturally appropriate).

What's your favorite research you've done at your lab or elsewhere?

My favorite research: that is a really hard question to answer. The one I had the most fun on was Weird A.I. Yankovic. This was a bit outside my usual wheelhouse, but I really wanted to put together an AI system that would generate parody lyrics for existing songs. I did it as a hobby project and thought it wouldn't work. But it ended up working better than I expected. It is a system I actually go back to and use for my own personal enjoyment.

So much of research is about taking baby steps in a narrow direction, but getting something that works well enough to use for real is rare. At least in my experience.

Outside of my lab, I am quite obsessed with text-to-image generators right now, like DALL-E and Stable Diffusion. They are the closest thing I have seen to something that people might actually use to augment their creative practices. Not quite there yet. Not quite useful beyond funny memes, but intriguing.

On that note, do you have any advice for a layperson who wants to try out using AI or NLP in some way for their projects?

It's a good question. Let's see what I can do. If you have some proficiency in programming, then there are a lot of off-the-shelf tools to run language generators. You can run GPT-2 and some of the bigger models with just a few lines of code. I won't shill for the companies that make the APIs, but you can search for them. That means you can try them out. The larger models are very flexible. All are horrible at creating coherent plot lines. If you want to do anything too fancy, then you are going to need to know how to get into the guts of a neural network.
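For the curious, here is a minimal sketch of what those "few lines of code" can look like, using the open-source Hugging Face transformers library; the prompt and sampling settings are our own illustrative choices, not a specific recommendation from Mark:

```python
# A minimal sketch: generating text with GPT-2 via the Hugging Face
# `transformers` library (one of the off-the-shelf tools alluded to above).
# Install first: pip install transformers torch
from transformers import pipeline

# Downloads the (small, free) GPT-2 model on first use.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The dragon looked at the knight and said,",
    max_new_tokens=40,  # how much new text to generate
    do_sample=True,     # sample rather than always picking the likeliest word
    temperature=0.9,    # higher values give more surprising output
)
print(result[0]["generated_text"])
```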

Is there any interesting research for synthesizing reasonable dialogue — like from an NPC in a game — that's close to "being usable" today?

There are a number of really strong neural network language generators that can be used for dialogue in computer games. Putting together fluent sentences is no longer the bottleneck. But the reason we aren't seeing much (any?) of this in games right now is because of something called "controllability". We really don't ever know what a neural language generator is going to say, and that makes game developers really nervous, because the generator can make mistakes and ruin the gameplay experience. Or say something toxic or misogynist or racist. So until we have better controllability of neural dialogue generators, so we can instruct them on what they are and aren't allowed to say, practical applications will be limited.

Controllability is not a solved problem. There are a number of people working on it. One way is to use training data to make neural networks better at receiving and following instructions. That has helped a lot but it cannot provide any guarantees. Some companies put a classifier between the output of a language generator and the final dialogue delivered to the user that catches some (but not all) toxic and unwanted material. For games, one might want to go farther than toxicity and specify topics that are off-limits because they will spoil the game or make a bad guy more sympathetic (or vice versa) but it is really hard to quantify that. To get a bit into my own work, I am trying to guess how a dialogue or a story will unfold in the future and then figure out if this is undesirable or not. Kind of like looking ahead and thinking, this dialogue has some probability of bringing up unwanted topics.
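As a rough illustration of the "classifier between the generator and the final dialogue" approach Mark describes, here is a hedged sketch; the specific model names, threshold, and fallback line are our own illustrative assumptions, not anything from his work:

```python
# A sketch of post-hoc filtering: generate NPC dialogue, then screen it
# with a toxicity classifier before showing it to the player.
# The model names below are illustrative assumptions, not endorsements.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# Any classifier trained to flag toxic text works here;
# "unitary/toxic-bert" is one publicly available example.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

FALLBACK_LINE = "Hmm. Let's talk about something else, traveler."

def safe_npc_reply(prompt: str, threshold: float = 0.5) -> str:
    out = generator(prompt, max_new_tokens=30, do_sample=True)
    # Keep only the newly generated continuation, not the prompt itself.
    reply = out[0]["generated_text"][len(prompt):].strip()
    # As noted above, a classifier catches some, but not all, unwanted output.
    verdict = toxicity(reply)[0]
    if verdict["label"].lower() == "toxic" and verdict["score"] >= threshold:
        return FALLBACK_LINE
    return reply

print(safe_npc_reply("Villager: Welcome to our town!"))
```

Note that this only screens for toxicity; as Mark points out, filtering for game-specific concerns like plot spoilers is much harder to quantify.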

Has there been any significant AI policy news related to gaming technology?

Not that I am aware of. Game companies are very conservative about applying new AI technologies, so they have largely avoided areas that would create controversy. I can add that companies are really worried about offensive content being generated and getting in front of kids, so one can imagine why they are cautious, and why AI in games is quite likely to cause some sort of AI policy problem in the future.

How does one address ethics and safety in AI games?

I think when people talk about safety and ethics, they are really talking about bias in AI systems: bias toward undesirable things like racism, misogyny, and other sorts of toxic output. Bias can be addressed in two ways. One is removing it from training data before machine learning models are trained. The other is catching it after it is generated but before it is shown to the user/player. In that sense, the issues are the same as for any AI dialogue system.

Games would have additional challenges involving the behavior of bots, such as how they respond to actions by players of different races and sexes. This gets into something called "normativity" — what are the societal and cultural norms around behavior. In the real world, we have social expectations about how we choose to do things, like go to a restaurant, but also about how to be polite, that go beyond just words. Learning what is normative is really hard because we have a lot of text data but we don't have a lot of data about everyday behavior. The stuff that is socioculturally normative is often done by habit, and we don't talk about or record that kind of mundane behavior.

The way I've been addressing this is by trying to learn sociocultural normative behavior "scripts" from stories. Stories have a lot of tiny examples of normativity that we might be able to mine. I've been able to show we can influence the behavior of bots in simple environments to, for example, be helpful/altruistic while still trying to play the game.

What are some current projects that you're working on that might interest our community?

I have a bunch of projects going on. In the interest of time, I'll just summarize a few. I am working on AI story generators that are goal-driven. Most story generators based on neural networks tend to meander and never get to any sort of point. We are trying to teach neural networks to head toward some sort of conclusion ("happily ever after", "bad guy in jail", whatever). I am also doing a lot of work that is not game-related, on explainability: that is, how can we tell users what is going on inside an AI system when it does something like answer a question or control a robot or a bot in a virtual environment. And then we are doing a lot to learn sociocultural "rules of society" from stories and then teach agents to act more human-like.

You mentioned your current hobby is image generators; what's your favorite result from them, if you have time?

Oh gosh. Well, I'm a huge fan of the book Gideon the Ninth (read it!). So I tried using AI image generators to create some fan art.

AI fan art of Gideon the Ninth

Awesome! Would you have any resources or links at this time where our community could find out more about your current work?

I think I am supposed to say my website, but I haven't updated it in a while. Yikes, I need to remember to do that. I talk about my work, and other work from around the world that I think is neat, on Twitter. And also make fun of bad press about AI.

And that's all for our Fourth Community AMA! Thank you so much for joining us this afternoon to discuss AI, your research, and the intersection between storytelling and ML!

Thanks for all the really great and deep questions. Had a blast geeking out with you all.
