The Playground
by Scott Neal Reilly
Written: 1994-1996
Language: Hap, Common Lisp
Platform: Mach/Unix
Opening Text:
The recess bell has just rung and it’s time to really start working. Math, English, and Social Studies are nothing compared to the harsh competition out on the playground. Try to collect baseball cards of players you like by trading with the other kids on the playground. Who knows, this may even be your big chance to get that Willie Mays card you’ve been trying to get for so long! Have fun!
On a spring day in 1990, in a tiny studio theater at Carnegie Mellon—the first university in the United States to offer a degree in drama—a most unusual performance took place. The seats were empty except for a handful of computer science researchers, and the only audience member was on the stage. She stood amidst a minimalist set representing a bus station along with a small troupe of improv actors wearing headsets. She’d been told she was taking part in an experiment in “interactive drama,” but her only guidance was to try to buy a bus ticket: she had no script, and neither did the actors with their headsets. Not even the director, whispering to the actors through an off-stage microphone, had the lines. All he had was a graph of possible outcomes, and a straightforward goal: whatever the woman on stage did, keep the story around her compelling.
The play was an experiment by an ambitious campus research group that called themselves the Oz project. They were hoping to solve what they saw as a glaring problem in the still-teething world of computer games. While the software behind the bestsellers then on shelves was capable of all kinds of interesting simulations—of 3D spaces, of realistic lighting, of airplane aerodynamics, of the growth of virtual cities—one aspect still depressingly static was their stories. Scholar Espen Aarseth would later write of the difference between a simulated door in a video game—one the player can open, move through, perhaps lock and unlock or use for cover—and a merely fictional door: a painted-on texture, leading nowhere, not understood as a door by the system or able to be operated as one by the player. While simulated and fictional doors might look the same at first glance, a merely fictional door can’t take part in the make-believe world of a video game except as set dressing. You can’t play with it. By the 1990s many games featured increasingly elaborate stories—some laden with video clips and famous actors—but those stories were not, in any meaningful way, simulated. Like the stories in other media, they were merely fictional, painted on: unable to change in response to the player’s actions except perhaps through brute-force authoring of alternatives.
But as computing power continued to increase, some researchers had begun to wonder whether you could teach a computer enough about stories and characters and dramatic arcs—the same way flight sims encoded wisdom about lift and drag and throttles—that the system could tell a story that could be playful: reacting to the player, reshaping itself to whatever they tried to do while still telling the same core tale the author had devised. “One approach is to see them [the player and the system] in a kind of two-player game, such as chess,” wrote Joseph Bates, the Oz project’s founder:
The director and user are taking turns, the user acting as a free agent in the world, the director looking down from above and very gently pushing the elements of the world in various ways. The director is constantly trying to maximize the chances of a pleasing overall experience, no matter what the user does along the way. ...The director wins if the complete history of the world is consistent with the creator’s aesthetic goals, thereby (presumably) pleasing the user.
“Our idea is to gently guide the user’s experience so it conforms in some way to an artistic destiny,” another Oz researcher posted on the rec.arts.int-fiction newsgroup in 1991, “while at the same time allowing the user complete freedom of action.” And while new technologies like virtual reality seemed imminent, the easiest medium in which to prototype such a system was text games: projects like LambdaMOO had already demonstrated how compelling and complex a text-based virtual world could become. The Oz project, at least at first, would focus on “developing technology for high quality interactive fiction.”
In the experimental play, the actors and director were standing in for components of a yet-unbuilt computer program. The director represented a future algorithmic “Playwright” who could re-plot a scenario in real-time based on the player’s actions by feeding new instructions to the actors in their headsets. The actors in turn stood in for NPCs who would have some autonomy of their own, but could also pivot on a whim to adapt to the Playwright’s new instructions. The details of the scenario, involving a troubled bus station customer who begins to turn violent, weren’t really important: the point was to gather data on the strategies the Playwright and actors invented to keep the plot moving, and how the lone audience member experienced a performance tailored just for her. The goal was a first step toward learning how to “create a medium beyond ‘static stories’... [of] constructed yet unpredictable worlds.”
The project had evolved out of a line of academic thinking traced back to two 1986 dissertations, both by women, which would later become the foundations of modern digital game studies. Mary Ann Buckles’ “Interactive Fiction: The Computer Storygame ‘Adventure’” had been one of the first book-length scholarly works to seriously study interactive fiction as a new and unique medium, breaking down how Crowther and Woods’ genre-defining game both connected to earlier literary traditions and functioned as something legitimately new. Brenda Laurel’s “Toward the Design of a Computer-Based Interactive Fantasy System” went a step further, positing a theory of how truly interactive and responsive stories might function. Inspired by Aristotle’s Poetics and live theatre traditions, Laurel defined an “interactive drama” as “a first-person experience within a fantasy world, in which the user may create, enact, and observe a character whose choices and actions affect the course of events just as they might in a play.” Contemporary adventure games were a start, but entirely different from the truly interactive experience Laurel envisioned, driven by a digital Playwright trying its best to marry a human player’s improvisations with a human author’s story.
Laurel had first developed these ideas at an extraordinary meeting of minds arranged by computing pioneer Alan Kay, who in the 1960s and ’70s had helped popularize many foundational concepts of modern computing, from windowed interfaces to word processing. In the early eighties, Atari had hired Kay as their chief scientist, and he promptly set up a think-tank and research lab to explore the future of games and entertainment technology. “Alan’s strategy was simple,” remembers Laurel, part of Kay’s cohort:
create the richest possible environment and plop creative people into it, and something wonderful is bound to be the result. Atari in 1981-82 was the perfect place for such a grand experiment—with revenues in excess of a billion dollars, the company was in a position to build a “dream lab” for creating the future of high-tech consumer products.
Members of the group had wide leeway to devise their own experimental projects, requisitioning equipment and hiring contractors as needed. Surviving memos are filled with truly heady concepts: one from Laurel wondered “Is there a video game we could imagine that a human and a dolphin could play together?” In those days the sky seemed to be the limit for the future of computer-enabled play, and the dreams were big. Kay believed that “in order to do good research, one needs a ‘grand idea’—a vision of something that might exist, far in the future and beyond our abilities to imagine it fully.” Laurel’s grand idea (after the dolphins, perhaps, got little traction) was a pitch for an ambitious game prototype that would immerse a player in an environment created by wraparound screens and simulated sound and lighting effects, maybe even smells. Inspired by the immersive nursery playroom in the Ray Bradbury story “The Veldt,” she imagined a rich multimedia experience where a concealed human Playwright would manipulate the projected images and deploy live actors to adapt an ongoing story to the player’s actions. Laurel got as far as convincing Bradbury himself to serve as the system’s first Playwright for a scenario based on his classic Something Wicked This Way Comes. But it was not to be. Atari, hit hard by the videogame crash and suddenly more interested in immediate than distant futures, disbanded Kay’s research group by early 1984.
Laurel carried her work over into a PhD dissertation, which would in turn inspire another brilliant mind: Joseph Bates, a child prodigy who’d enrolled in college at age 13 and graduated before he could vote. Fifteen years later, he’d started a research group at Carnegie Mellon devoted to inventing the future of interactive stories. Like Kay and Laurel, Bates was a big dreamer: “you have to be able to invent new, strange stuff, and you have to be able to throw out most of it,” he once noted. His initial missives for the Oz project were beyond ambitious: in an early critique of Laurel’s thesis, he wrote that her ideas of dynamic storytelling were too limiting because
in these situations [an example from Laurel of an interactive Hamlet] the user never ceases to be himself, in Hamlet’s position. What if the user is Hamlet... what if the user’s mind is manipulated by the system to try to make the user think/feel like Hamlet, not just experience Hamlet’s objective experiences?
Lacking any means of realizing this rather revisionist take on the nature of storytelling itself, Bates and his group hoped as a first step to at least build something like Laurel’s hypothetical interactive drama system. They deconstructed Infocom games like Deadline to understand how they created the illusion of dynamic plot, and staged a variant of Laurel’s unrealized theatre experiment with a human Playwright directing live actors, enlisting the help of Margaret Kelso, a professor in the CMU drama department. While small in scale, the experiment was nothing like what other researchers into games or A.I. were doing. It was “new, strange stuff” indeed.
Bates and a handful of graduate students soon began work on an interactive narrative engine called Oz, written in Common Lisp—then the language of choice for anything connected to artificial intelligence. Initially the work was divided into “six sets of questions or problems:”
how to simulate the physical world [to “provide just enough of a physical reality to let authors construct interesting characters and stories”], how to simulate the minds of characters, how to design the user interface, how to build a working theory of drama, how to design the world-building environment, and how to facilitate artistic use of the system.
As more grad students came aboard the exciting project, each was set to work on one facet of the problem, many designing and prototyping new components of the overall Oz architecture. Most modules were named after characters from L. Frank Baum’s famous books. An engine to generate natural language descriptions of a simulated world was called Glinda; a parser to understand a wider range of natural language input was dubbed the Gump; a core framework for agents—characters—to operate within a virtual world was named Tok, after Dorothy’s mechanical companion Tik-Tok who “Thinks, Speaks, Acts, and Does Everything but Live.”
One CMU grad student, Scott Neal Reilly, focused on the problem of giving Tok characters more realistic behaviors and social understanding through a program called Em, for emotion (and Dorothy’s aunt). “The goal of building believable agents is inherently an artistic one,” Reilly wrote:
Traditional AI goals of creating competence and building models of human cognition are only tangentially related because creating believability is not the same as creating intelligence or realism. Therefore, the tools that have been designed for those tasks are not appropriate.
“Believable” agents, Reilly thought, would not make perfect plans and execute them in the most efficient way possible. Their plans might be ill-advised, and they ought to endearingly fail many times before succeeding, if they ever did, in ways designed to reveal their character and make players relate to them, cheer them, or despise them. Like many Oz students, Reilly had looked for inspiration outside computer science research, studying writers and Disney animators for the techniques they’d used to bring memorable characters to life. “Artists know how to create believable characters,” he wrote, but “AI researchers know how to create autonomous agents.” The problem was finding a way to bring those disparate worlds together, to find a path “somewhere between the arts and artificial intelligence”: somewhere, you might say, over the rainbow.
In the course of his dissertation work Reilly would build several small simulations to test Em. One of these, The Playground, cast you as a school kid with the goal of trading baseball cards with your peers: Melvin, a friendly Star Trek nut, and Sluggo, a not-too-bright bully. The action played out in a small model world that seemed on the surface much like a traditional interactive fiction:
You are in the playground. The sand box, the jungle gym and the tree house are in the playground. Sluggo is in the tree house. Sluggo is holding a Willie Mays trading card, a Jose Canseco trading card and a Catfish Hunter trading card. Melvin is in the sand box. Melvin is holding a Tom Seaver trading card, a Mickey Mantle trading card and a Reggie Jackson trading card. Melvin is wearing his eye glasses. You are holding a Babe Ruth trading card, a Ted Williams trading card and a Henry Aaron trading card.

PLAYER> get in the sandbox

You go into the sand box. Sluggo spits. Melvin is now smiling. Melvin is speaking to you. Melvin’s voice says ``Salutations, Vulcan ambassador! The Klingon high command has sent me in search of baseball cards.''.

PLAYER> melvin: What cards do the Klingon high command want?

You are speaking to Melvin. Player’s voice says ``What cards do the Klingon high command want?''. Sluggo smokes. Melvin is speaking to you. Melvin’s voice says ``The Klingon Emperor wants to know if you would be willing to part with Babe Ruth for Reggie Jackson?''.

PLAYER> Melvin: No way!

You are speaking to Melvin. Player’s voice says ``No way!''. Sluggo spits. Melvin is speaking to you. Melvin’s voice says ``I wouldn't be hasty if I were you. Reggie Jackson for Babe Ruth is a trade any Ferengi would be proud of.''.

PLAYER> Melvin: How about Ruth for Jackson and Mantle?
While Playground featured only a simple parser and text narration system—the player’s lines in the example above are mostly recognized through basic keyword matches, and the stilted output comes from a system that simply reports the status of the underlying simulation with no attempts at artifice—the code driving character behavior was far more complex than any commercial text game had shipped with. Most games create NPC behavior with some equivalent of a list of if-then statements accounting for specific foreseen eventualities: something like “If player offers Melvin a trade evaluated as good, then say Melvin smiles and accepts the trade.” But the Oz framework with Reilly’s extensions broke this process down into many more steps, each of which had its own possibility to influence the outcome.
For example: an earlier Oz prototype designed as a test bed for Tok and Em had simulated a cat named Lyotard, who would actually perceive things about the world through specific senses and use those impressions to update an internal model of reality. Lyotard might remember where he had last eaten food, for instance, and return there when hungry even if the player had since moved his tins of sardines. The tactile sensation of a comfy chair might cause an emotion of contentedness that could change Lyotard’s reactions to events like a human walking into the room, or cause him to develop an attachment to fuzzy objects. Mistreating the virtual cat could make him develop long-term emotions of hatred towards you that would in turn affect his actions in your presence. Lyotard made decisions about what to do—whether to allow an unfamiliar hand to pet him, for instance, or bite it—based on a constantly shifting bank of sensory inputs, emotional states, and memories. While the results might not have seemed much different from a well-implemented cat in a traditional text adventure (like, say, the one in Graham Nelson’s Curses), the behind-the-scenes systems were laying the groundwork for worlds with truly emergent characters who could believably respond to unexpected events. They were characters transitioning from merely fictional to meaningfully simulated.
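That loop—sense, remember, and act on beliefs rather than on the world itself—is simple to sketch. The fragment below is a minimal illustration in Common Lisp (the Oz project’s implementation language), not actual Oz code; every name in it is invented for this example:

(defstruct agent
  ;; What the agent believes about the world, e.g. where food was
  ;; last seen. Beliefs can go stale: they change only when the
  ;; agent's own senses update them.
  (beliefs (make-hash-table :test #'equal))
  ;; Active feelings accrued from experience, e.g. ((:hatred . 0.8)).
  (emotions '()))

(defun perceive (agent thing location)
  "Record where AGENT last sensed THING."
  (setf (gethash thing (agent-beliefs agent)) location))

(defun recall (agent thing)
  "Answer from memory, which may no longer match the real world."
  (gethash thing (agent-beliefs agent)))

;; If the player moves the sardines after the cat last saw them,
;; (recall cat :sardines) still returns the old spot -- so a hungry
;; Lyotard-like agent pads off to where he believes the food is.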
Characters in Tok used a three-stage cycle of sense, think, act to plan their behaviors, doing so in the context of goals they wished to achieve. Lyotard’s goals might be taking a nap, or eating food when hungry. But Reilly hoped to introduce more complex emotional and social reasoning into the think step that could handle human NPCs with more complex goals and drives than a house cat’s. He began to extend Em to support more advanced emotional reasoning, using a language called Hap (also an Oz invention) to write reactive planner rules that defined how a character’s simulated emotions might influence the formation and performance of goals. For instance, Hap code to define a trigger for a frustration emotion—when an attempt to take steps toward completing a plan fails—might look like this:
(sequential-production update-frustration ()
(demon em-update-frustration-demon
;; LHS
;; Fire when a failed behavior has been put in
;; the $plan-failures slot and the importance of the
;; behavior is greater than 0
(and (match $plan-failures
(list-containing ?plan))
(match (call importance ?plan) ?intensity)
(> ?intensity 0)
;; Create an emotion structure. Set the variable
;; ?emotion-structure to the structure
(match (make frustration-emotion
actor self
cause ?plan
frustration-production ?intensity)
?emotion-structure))
;; RHS
;; Store the structure
(mental-act
(call add-emotion
(slot emotion-type-hierarchy $em)
$$emotion-structure
`frustration))
;; Remove the behavior from the $plan-failures slot
(mental-act
(setf $plan-failures
(remove $$plan $plan-failures)))))
Reilly created a library of a few dozen emotions that could each be defined as a consequence of interactions between an agent’s goals and the model world. Fear, for instance, was the emotion “when a goal is considered to be likely to fail and it is important to the agent that the goal not fail.” Various events might cause happiness, such as “A goal succeeds that the agent hoped would succeed.” Resentment was felt “when an agent dislikes another agent who is happy.” Each emotion had an intensity and a rate of decay (hatred would linger much longer than disappointment) and could also be attached to the person or event that had caused it.
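A data structure for such an emotion almost writes itself. The sketch below is speculative—the slot names and numbers are mine, not Reilly’s—but it captures the shape he describes: a type, an intensity, a decay rate, and a cause:

(defstruct emotion
  kind              ; e.g. :fear, :happiness, :resentment
  (intensity 1.0)   ; how strongly the emotion is currently felt
  (decay 0.1)       ; per-tick fade: hatred would use a far smaller
                    ; decay rate than disappointment
  cause)            ; the agent or event that triggered it

(defun decay-emotions (emotions)
  "Fade each emotion a little and drop any that have died out."
  (dolist (e emotions)
    (decf (emotion-intensity e) (emotion-decay e)))
  (remove-if-not (lambda (e) (plusp (emotion-intensity e))) emotions))

;; A grudge that lingers for a long time:
;; (make-emotion :kind :hatred :intensity 0.9 :decay 0.01 :cause 'player)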
Interacting with the world, then, each agent would accrue a set of active emotions which might alter their future behavior. While some behaviors were general, Reilly noted that most would be character-specific: in a traditional game the writer’s prose would do most of the work of defining character, but an Oz game would lean much more on the way its characters behaved and reacted. In The Playground, for instance, Trekkie Melvin feels joy when the player uses Star Trek lingo in their interactions with him; enjoys the social interaction of trading cards more than the specifics of making a good trade; and feels fear when bully Sluggo gets too close. Emotions, in turn, could cause character-specific behaviors: Melvin will use geeky ways of phrasing things if he’s happy, but might abandon a trade with a new goal of running away if he gets too scared. Melvin gets sad and shy if insulted, but Sluggo gets angry:
You are speaking to Melvin. Player’s voice says ``What do you want for Mantle?''. Sluggo smokes a cigarette. Melvin is speaking to you. Melvin’s voice says ``The aliens told me to offer you Mickey Mantle in return for Babe Ruth.''.

PLAYER> Sluggo: Hey dork, get a life!

You are speaking to Sluggo. Player’s voice says ``Hey dork, get a life!''. Sluggo is now red. Sluggo is now scowling. Sluggo is now tense. Sluggo goes into the sand box. Melvin is now pale. Melvin is now bug-eyed. Melvin is now trembling. Melvin is speaking to you. Melvin’s voice says ``Why don’t we finish this later...''. Melvin gets on the jungle gym.
Melvin’s goals, emotions, and behaviors in The Playground differ from Sluggo’s in noticeable ways, which helps paint the two as unique characters. One of Melvin’s goals is making friends, which he knows social interaction helps him achieve; and he also values the novelty of new baseball cards in his collection. Together, these two motivations might cause him to make a trade he knows isn’t optimal, especially if he likes you, because he hopes it will help him make a new friend. Sluggo, by contrast, only cares about having good cards in his hand: he gains no satisfaction from the patter leading up to a trade or the act of trading itself, resulting in different kinds of performances (like getting annoyed if the player is slow to complete a trade). Characters could even be programmed with their own unique ways of translating sensory inputs into internal models of the world, or of understanding the player. Melvin, for instance, can mentally juggle more complex trades than Sluggo. If you try offering Sluggo a trade involving more than two cards, he gets “angry, distressed, and reproachful towards the player for making him feel stupid.”
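One way to picture this difference is as per-character appraisal functions applied to the same proposed trade. The weights and failure behavior in the following sketch are illustrative guesses, not code from the thesis:

(defun melvin-appraise (card-value novelty likes-player-p)
  "Melvin weighs friendship and novelty alongside raw card value."
  (+ (* 0.4 card-value)
     (* 0.4 novelty)
     (if likes-player-p 0.5 0.0)))

(defun sluggo-appraise (card-value n-cards)
  "Sluggo cares only about card value, and an offer involving more
than two cards overwhelms him, yielding emotions instead of a score."
  (if (> n-cards 2)
      (values nil '(:anger :distress :reproach))
      (values card-value '())))

;; The same three-card offer that Melvin can happily evaluate sends
;; Sluggo into anger and distress at being made to feel stupid.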
The Playground was explicitly a prototype, never meant to be a polished experience either for players or outside creators, and neither it nor its source code was ever publicly released. It was one of a series of Oz prototypes designed to rapidly iterate on the team’s design and technology questions, moving toward Bates’s ambitious vision of a future, five or ten years down the road, when the logic behind those purely textual worlds could begin driving immersive virtual realities. It was a foundational assumption that interactive drama and reactive character technology would have to be the true underpinnings of any such systems, not the more prosaic concerns of head tracking and high-res rendering: “We see this focus on [VR] interface as something like studying celluloid instead of cinema, paper instead of novels, cathode ray tubes instead of television.” Bates hoped that after his band of technologists had built a successful interactive drama engine, tools would follow for writers and artists to tell stories with it. He envisioned libraries of reusable character behaviors or reasoning logic which could be built up over time, “similar perhaps to the backlots of Hollywood studios.” He imagined tools for both rapid prototyping and fine-grained polish, tools for crafting VR conversations with believable characters who could understand speech and improvise their own. It seemed quite plausible that the reactive characters of Star Trek’s holodeck, debuting in the pilot episode of The Next Generation in 1987, might be only a decade or so away.
But that future still has yet to arrive, and interactive drama in the form imagined by Laurel, Bates, and the Oz project team seems in many ways barely closer than it did thirty years ago. There are likely many reasons why. The simplest explanation is that games with static stories—presented in cut scenes, in between bouts of combat—remained profitable despite their “merely fictional” structure. Upsetting the status quo presented a risk: if dynamic stories had the potential to be emergently better, they might also be emergently worse. While many Oz prototypes worked and produced intriguing results, they required authors comfortable with Lisp and logic programming, both notoriously difficult to teach. Even the experts had a hard time working with systems as interlinked and complex as the Oz modules. And while the idea of text-only worlds as high-tech prototypes was still conceivable in the early nineties, with commercial text games only a few years gone from shelves, they would increasingly seem old-fashioned enough to discourage influencers or investors from taking them seriously in the years to come. Time spent on systems not visible in screenshots or game trailers, no matter how visionary the ideas behind them, became hard to justify.
Like Tik-Tok himself, the Oz project would eventually wind down as fewer and fewer people remained to keep turning the key. Bates and several graduating students left CMU in 1997 to found a company called Zoesis to commercialize Oz technology. It survived for some time on spec work and prototypes for the early, more experimental web, but never attained the kind of sustainable traction that comes with widespread adoption. Perhaps the most famous descendant of Oz technology would be the 2005 game Façade, co-created by Andrew Stern and Michael Mateas (the last Oz graduate student). In the twenty-minute interactive drama, the player visits friends Trip and Grace, a married couple in the midst of a quarrel, and takes part in a character-driven story that can end in many ways based on the player’s moment-to-moment performance. The culmination of years of authoring and programming effort, it remains one of the most intriguing glimpses of what truly dynamic and responsive stories might be like; but the immense effort and expertise required to create it have rarely been brought together since.
When virtual reality for the masses finally did arrive in the 2010s, there would be no ready-made interactive storytelling engine to drive it. The rainbow bridge “between the arts and artificial intelligence” can seem at times as illusory as ever: stories in the bestselling games remain firmly fictional, not simulated. And yet it isn’t true that there’s been no progress: it’s just come in dozens of isolated steps instead of one grand unifying vision. From the elaborately simulated world inhabited by the bearded “agents” of Dwarf Fortress, to dynamic systems personalizing Middle-Earth’s enemies or a shooter’s difficulty, to newer experiments in reactive character with memories and emotions, to increasingly accurate speech recognition and believable speech synthesis, to the rise of author-friendly tools for generative text: many of the problem areas once named after characters from the Land of Oz keep cropping up in unexpected and disparate places. Flung by some hidden Playwright to faraway lands, their ultimate destinies, artistic and otherwise, remain undecided.
Whether they will ever return to Oz, or somewhere like it, to tell a story together once again will have to be a tale for another time.
Next week: the groundbreaking hypertext that blurred the boundaries between page and screen, creator and creation.
The Playground was deeply tied to the Oz project ecosystem and never publicly released, so sadly cannot be played today; but you can read a trace of a full playthrough online. Transcripts and code excerpts come from Scott Reilly’s PhD thesis. Brenda Laurel’s dissertation and later book Computers as Theatre were other major sources, along with various Oz project research papers: the two dissertations are the source of unattributed quotations from Reilly and Laurel respectively. The books Warlocks and Warpdrive: Contemporary Fantasy Entertainments with Interactive and Virtual Environments and Colors of a Different Horse: Rethinking Creative Writing Theory and Pedagogy (whew) were both useful for descriptions of the CMU theatre experiment. Noah Wardrip-Fruin’s book Expressive Processing provides a great overview of other significant academic work into interactive story and characters.