Long ago, when Bill Clinton was president and reality television hadn't yet destroyed the American psyche, I spent a summer term in an intro to world politics class at a school that had not yet officially admitted me, with my stoner roommate and the non-talkative girl I had a crush on at the time and a professor who, I believe, has since become a sorcerer of some sort. I took four classes with this prof, slowly making my way up to an A--I think the A- I got for a 92.25% was sending me a message--but the first thing I remember from my short time as a poli-sci major was realism, and its Dark Knight Returns cousin, neorealism. After the collapse of political idealism in the epic clusterfuck of World War I--the bloodiest, most hideous folly of human cruelty and stupidity until the next one a couple decades later--political realism sure seemed like a pretty sound theory, and it held sway nicely through the Cold War. Look it up; I don't really plan to explain it here, but if you pay attention to international politics, it's a concept with which you're familiar. And if you're a cynical dude, it's obvious and intuitive. But I am not a cynical dude, all evidence to the contrary--I prefer hyperskeptic idealist, myself--and the view of human nature that political realism always seems to be coupled with, even if it's not strictly part of the theory, has always rubbed me the wrong way. It's a view that's constitutive of the doctrine of total depravity, and therefore has its tendrils in much modern Protestant thought, as well as every condescending lecture about "the real world" you ever received from a parent, teacher, or court-appointed psychiatrist. Put simply, I find it rather unfathomable that people spend so much time and effort thinking about right and wrong if they honestly believe that we're all a bunch of bastards who couldn't choose right if we wanted to, which we don't. In practice, total depravity is commonly deployed as a descriptor of everyone else's behavior, but is rarely (in my experience) argued coherently.
So, walking home one day, I stopped at one of those lovely sidewalk book sales that periodically dot the Cambridge landscape, and picked up a copy of Reinhold Niebuhr's Moral Man and Immoral Society. It's a hell of a title, aside from the fact that it seems to have been so named in an effort to market to me, personally; it's a marketing practice reminiscent of Become Who You Were Born to Be, which I believe was designed to appeal to the tastes of Aragorn, son of Arathorn. Anyway, Niebuhr neatly dovetails political realism and original sin in a way that makes considerably more sense (to me, at least) than either does separately. To wit: though morality is neither wholly rational nor wholly social, the keeping of the Christian moral law--which is short, and you should know it by now--does require a rational mind, the ability to see ourselves and our neighbors as equivalent even when our sensory perceptions and emotional reactions plainly insist otherwise. And yes, Virginia, small groups can behave morally, with effort and forethought. However, the world does not consist of small groups, but of a series of nested groups of varying size, and groups are not rational. Niebuhr suggests a kind of law of diminishing returns for rationality among groups: at a sufficiently large scale, groups are incapable of acting in any interest other than their own. (There are a lot of "ifs" in here, of course, in that one could argue that all manner of moral behavior operates from self-interest, if one happens to be a psychological egoist, but I digress.)
Whatever the virtues of Niebuhr's theory--it's fascinating and thought-provoking, right or wrong--videogame design does seem to reflect a similar perspective. When we talk about moral choice in games, we almost exclusively do so in terms of individuals: Fable, The Sims, GTA, etc. What would even make sense as a "moral decision" in, say, SimCity?
I'm not sure if this is something new design ideas could surmount; the serious games movement has pointed in that direction with its public policy angle (public policy can be interpreted as a means to moral action by groups, but Niebuhr has some thoughts on that as well), but it does seem that we have an awfully hard time conceiving of morality outside of small groups, or imagining anything outside of self-interest for larger ones. The philosophers, of course, have gleefully attempted to reduce all morality to one or the other, to varying degrees of success. And perhaps it is the job of systems-thinking to help us learn to think both morally and collectively.
This, he mused, he must think hard upon.
Friday, December 5, 2008
Character Sheet
Back in the day, I had the good fortune to be friends with a bunch of nerds. These nerds, taken collectively, connected me to most of the various nerd tribes, but there was a particular preponderance of Tolkien among their respective schools of nerditude. 2000-2003 were good years to be a Tolkien nerd, thanks to the efforts of various New Zealanders, and it was comparatively easy to pick up my slack in that area. (Full disclosure: I still have not read Lord of the Rings. I read The Hobbit and Fellowship of the Ring, but got caught up in thesis prep about 60 pages into The Two Towers. So my thoughts on Tolkien aren't exactly authoritative.)
My nerd lineage starts with videogames and spreads out from there. I've never seen the pre-Special Edition Star Wars, and therefore never saw any of them until I was 15 years old. Nonetheless, at 15, I developed a near-encyclopedic knowledge of the universe through games and a wonkish interest in Joseph Campbell. Similarly, I've never played Dungeons & Dragons, but picked up the basics of the ruleset through adaptations, and the elements that spread throughout the RPG genre. So, as I watched the Rings movies, I'd see a lot of things I recognized from various RPGs, many of them Japanese, and my Tolkien nerd friends would smugly assert that, of course, everything in the fantasy genre has a straight line back to Tolkien.
I thought this was a little odd at the time, in that even I knew that Gary Gygax and company had at least one other major influence, Robert E. Howard, in establishing the D&D universe. From what I've read since then, it turns out that Gygax wasn't a big fan of Tolkien--he liked the American pulps, mostly--and the references to LOTR in D&D mostly amount to marketing ploys. More to the point, however, what makes D&D important has very little to do with evocative world-making. I don't know if Gygax's rule system was the first or even the most effective of its time, but it seems to me that the relevant thing about D&D as it relates to videogames and simulation in general is that it devised a system for measuring human behavior through the narrativized interaction of random and non-random statistics.
Case in point: a D&D character is built from six base statistics: Strength ("the muscle and physical power of your character"), Dexterity ("agility, reflexes and balance"), Constitution ("the health and stamina of your character"), Intelligence ("how well your character learns and reasons"), Wisdom ("willpower, common sense, perception and intuition"), and Charisma ("force of personality, persuasiveness, ability to lead, and physical attractiveness").
Ok, that's all well and good, but what do they do? Narrative niceties aside, the issue is how they tie into actual gameplay. (I'll be referring to the NWN ruleset here, so, y'know, take heed.) Strength covers carrying capacity, melee weapon damage, and the "discipline" skill, which resists various combat maneuvers. Dexterity covers bow damage and dodging, as well as hiding, sneaking, lock picking, parrying, pickpocketing, and setting traps. Constitution covers HP (i.e. how much damage one can soak up and survive), as well as concentration and the barbarian's "rage" ability. Wisdom allows characters to ask NPCs more insightful questions and get better information, covers divine magic for clerics, druids, paladins and rangers, enhances monks' dodging abilities, and contributes to healing, listening, and looking. Intelligence covers the acquisition of new skills (general learning speed), as well as arcane magic for wizards and the disable trap, lore, search, and spellcraft (counter-magic) skills. Charisma covers arcane magic for bards and sorcerers and contributes to animal empathy, singing, persuasion, taunting, and using magic devices.
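If it helps to see that mapping laid out as data rather than prose, here's a minimal sketch in Python. The stat-to-skill assignments are just the ones listed above; the modifier formula--(score - 10) // 2--is the standard d20 convention rather than anything quoted from the NWN manual, and the sample character is entirely made up.

ABILITY_GOVERNS = {
    "strength":     ["carrying capacity", "melee damage", "discipline"],
    "dexterity":    ["bow damage", "dodge", "hide", "move silently",
                     "open lock", "parry", "pick pocket", "set trap"],
    "constitution": ["hit points", "concentration", "barbarian rage"],
    "intelligence": ["skill points", "arcane magic (wizard)", "disable trap",
                     "lore", "search", "spellcraft"],
    "wisdom":       ["dialogue insight", "divine magic", "monk dodging",
                     "heal", "listen", "spot"],
    "charisma":     ["arcane magic (bard/sorcerer)", "animal empathy",
                     "perform", "persuade", "taunt", "use magic device"],
}

def modifier(score: int) -> int:
    """Standard d20 ability modifier: 10-11 is average, every 2 points is +/-1."""
    return (score - 10) // 2

# A made-up character sheet, just to exercise the mapping.
scores = {"strength": 14, "dexterity": 12, "constitution": 13,
          "intelligence": 10, "wisdom": 8, "charisma": 15}
for stat, score in scores.items():
    print(f"{stat:>12}: {score:2d} ({modifier(score):+d}) -> {ABILITY_GOVERNS[stat][:3]}")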
So we're now in a bit deeper; certainly better than the bare-bones "physical/mental/other" trinity from which most RPG rulesets are, ahem, divined. And we've covered a good many things a hypothetical person could do, with a semi-coherent system for what skills govern which actions.
What interests me most at this point is the treatment of the mind: at first glance, two of the main stats, intelligence and wisdom, seem to cover this category. The manual notes that high intelligence and low wisdom make for something of an idiot savant, while high wisdom and low intelligence make for a kind of non-specific street smarts. After all, "wisdom" comprises a fairly wide array of concepts--willpower, common sense, perception and intuition--so it's a pretty heavy stat at the narrative level. (Notably, Arcanum goes to the trouble of breaking it into "willpower" and "perception.") So we have two stats standing in for "mind." Except...charisma? That's more "interpersonal" than smart, so maybe that's a third category. And once we're into threes, hoo boy. One could alternatively divvy up the stats into physical, mental and spatial/temporal, or internal, external and liminal: strength and constitution for the objective, visible world, intelligence and charisma for mind and speech (speech being, in this projection, a manifestation of the inner self), dexterity and wisdom for the relation of the world to the self. That these pairings seem to oppose each other--constitution makes enemies' strength less effective, wisdom counteracts dexterity skills--gives this model some legs.
One more thing about these stats, which have (of course) evolved considerably over many iterations of D&D: saving throws. Certain attacks, curses, etc. can be turned aside by fortitude (constitution), will (wisdom), or reflex (dexterity) saves. In the 4th edition ruleset, all six stats contribute to saving throws, essentially pairing off the starting six: strength and constitution, dexterity and intelligence, wisdom and charisma. And this pairing also makes a kind of sense, which raises a new question: do any of these stats really work in isolation? I mean, in the universe we actually inhabit?
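If you want to see that pairing do something, here's a quick sketch. The "take the better of the two paired modifiers" rule, and the 10-plus-half-level base, are how I remember 4th edition building its fortitude/reflex/will numbers (it recasts them as static defenses), so treat the output as illustrative rather than authoritative.

def mod(score: int) -> int:
    return (score - 10) // 2

def defenses(level: int, str_, con, dex, int_, wis, cha):
    # Each defense takes the better of its two paired ability modifiers.
    base = 10 + level // 2
    return {
        "fortitude": base + max(mod(str_), mod(con)),
        "reflex":    base + max(mod(dex), mod(int_)),
        "will":      base + max(mod(wis), mod(cha)),
    }

print(defenses(level=4, str_=16, con=12, dex=14, int_=10, wis=11, cha=8))
# -> {'fortitude': 15, 'reflex': 14, 'will': 12}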
In practice, it's difficult to build muscle without also improving endurance and general cardiovascular health. Dexterity is a matter for people with more medical knowledge than I, but there's certainly a fairly significant physical component. Similarly, how well can one actually think with an unhealthy body? If the brain itself--a physical organ that runs on oxygen and regulates an unfathomably complex machine via electrochemical signals--doesn't problematize mind/body dualism, perhaps the ubiquity of anti-depressants in modern American society will. And shouldn't the "willpower" part of wisdom affect all of these?
Working from this concept, one could easily split the starting six into primary and secondary groups, deriving the secondary stats from combinations of the primary ones. Wisdom and dexterity are pretty convincing as the building blocks of charisma, at least from where I'm standing--dexterity is already associated, metaphorically, with wit and mental processes, and a general comfort with and awareness of one's company and surroundings is always the part of interpersonal relations at which I suck.
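As a doodle, and nothing more: here's what a derived charisma might look like under that split. The averaging rule is arbitrary--any combination that rewards both components would make the same point--and none of this corresponds to any published ruleset.

def derived_charisma(wisdom: int, dexterity: int) -> int:
    # Round the average up so an odd total doesn't penalize the pair.
    return (wisdom + dexterity + 1) // 2

print(derived_charisma(wisdom=16, dexterity=9))   # 13
print(derived_charisma(wisdom=10, dexterity=10))  # 10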
There are a theoretically infinite number of these kinds of models that can be produced, of course, even with a relatively small number of variables, just by rearranging the relationships between them. And each of these models would no doubt be consistent with some aspects of observed or imagined reality and not others. RPGs aren't my favorite genre to play, but they're definitely my favorite genre to think about, for the same reason I find my liberal-arts-major knowledge of science so useful in my everyday walkin' around time: it gives you a new way to look at your regular, boring-ass life.
Wednesday, November 19, 2008
The first of many posts on Mortal Kombat
So, yesterday I picked up Mortal Kombat vs. DC Universe, because, come on. The first ad was a movie of Sub-Zero fighting Batman. Beating each other up in mid-air. I rest my case.
Anyway. The good: it's an old-school MK game. Not like Deadly Alliance, Deception or Armageddon. Not even like MK4 or the MK3 clusterfuck. It feels like MK2. Well, no. It feels like a version of MK2 designed by Capcom or Namco, a team more competent than ambitious. It's MK3 and 4 with all the gimmicks well implemented and balanced. It's simpler than any of the last-generation MKs, easy to pick up, and a blast to play. The DC characters are well designed and narratively consistent (aside from the obvious Superman Problem), the graphics are gorgeous, and--let me say it again--Batman can fight Sub-Zero.
The story mode pretty much plays the MK universe as a superhero comic, in a way that actually helps iron out some of the narrative oddities of Boon and Tobias' little world. The comic association also makes it easier to swallow that MK has, over eight "main" games and two spinoffs, played incredibly fast and loose with its own continuity. The complete failure of MK's narrative is a multi-level train wreck I refer to as "The Mortal Kombat problem," a case study in how not to run a fighting game franchise, and it could easily fill several blog posts. And it will. Because, seriously, have you seen my output on this thing lately? It's not like the D&D, God of War and Jane Austen posts are writing themselves. If I don't pick up the pace soon I'm going to have to outsource this thing to grad students in unpaid internships, and even my supply of those has been threatened by recent events.
There isn't much bad to be had, aside from the lack of big New Features. Then again, after the run button, vs. screen "kodes," clumsy weapons and the most truly boring use of 3D movement yet seen in a fighting game, I hope we can agree that the MK team ought to have stopped bothering with New Features somewhere around 1993.
The one thing I will say in this post: this whole fatality business. Fatalities, as a gameplay device, only make sense in the context of arcades. They should have been radically revamped as soon as MK went console-only. And while that hasn't happened yet, a curious thing happened with MK vs. DC. Due to DC's stranglehold over portrayals of their IP, there were initially rumors that the game would have no fatalities, or that the DC characters would have no fatalities. Fine; whatever. It's not like anyone ever actually dies in either universe, anyway. They settled on giving fatalities to the DC villains and the whole MK cast, but not the heroes. Instead, the heroes have..."heroic brutalities."
Read that again.
Heroic. Brutality.
I appreciate the effort, but really, guys, you need to hire a lit major. Someone who could tell you that, unless we're openly embracing fascism, those concepts are antithetical.
While we're on the subject of violence, a parting shot--supposedly the Joker's famed gun fatality has been edited for the American release, but remains in the European release. I haven't tested it myself, but here's what amuses me. In America, it was edited so the game could get a Teen rating, and avoid legal hoopla. In the UK, where people beating the crap out of each other is, in and of itself, enough to be considered graphic violence, it got by with a 16+, which apparently presents no such hurdles.
So, in short, America gets the less violent version of the game because we are more tolerant of violent media.
Go figure.
(PS--I like that Liu Kang has something resembling a Chinese accent, but why does Kitana sound like the actress actually recorded her dialogue while wearing a mask?)
Wednesday, October 15, 2008
Ok, so this thing's been dead for a while.
We're working on that. I was, um, abducted by aliens. They've been helping me figure out this MTEL thing.
At any rate, I haven't abandoned this thing, so rest easy, all of you who are reading this. Which, based on demographic data, consists of you, whoever is reading this sentence. Right now, the person reading this sentence is me, but eventually it'll probably be someone else.
Incidentally, while I am typing this right now, I will likely not be when you're reading it. So try to use your best judgment on this and related matters.
Sunday, May 25, 2008
More shit about rules and fiction.
So, last night I was depressed, and my partner and I rented Alien vs. Predator: Requiem. These two things may not seem like they complement each other to most people, but we are a strange breed. The clerks were split on whether this was a good choice, but agreed that it worked fine as a retaliation for my recently having been forced to watch A Chorus Line. The retaliation theory doesn't really work, in that she enjoyed the first AvP more than I did, partially because she thought the lead predator was adorable. So, he said, there's that.
The new one is, well, what you see is what you get. Aliens blown apart by bullets. Aliens ripped apart by glaives. Humans decapitated by shoulder cannons. But AvP isn't really ideal for narrative media anyway; I have some of the old comics somewhere, and perhaps I'm wrong and the whole predator homeworld thing added something really vital to the mythos, something greater than the sum of its parts. But as far as I can see, it's piggybacking on the established tropes of two sci-fi/horror series of wildly varying respectability. All it adds is fighting. The appeal of AvP is more kinetic than narrative. Which means that it probably ought to have been a videogame in the first place.
There've been quite a few AvP games over the years, of course. The one for the Jaguar has been largely forgotten, which is a shame, because it represents an era of gaming culture that is, frankly, hilarious. The later PC release is the one most of us remember, I suspect, and aside from somewhat clunky multiplayer, it was a thoroughly brilliant FPS. When everyone talks about System Shock, I bring up AvP. What I remember about the game is mostly its use of darkness, and how each species relied on different methods to cope with it, but my fondest memories involve the weapons. The presence of three species allowed the developers to build two contradictory weapon sets into the human and predator armories. Most importantly, these weapons were narratively consistent with what we knew from the films: the shoulder cannon fires in a straight line, and aliens are too fast for it, so that's for humans. The prox pistol lobs a ball of energy that safely and quickly kills aliens with its splash damage, but if your aim is off and the initial shot hits one directly, it explodes and bathes you in acid. Its range is too short to be useful against anything that's not running toward you, so that's for aliens. And the invisibility, conveniently, fails when coupled with any of the weapons designed for aliens, who don't need to see you anyway.
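For the systems-minded, that elegance boils down to a couple of constraints per weapon. Here's a rough sketch; the field names and the exact cloaking rule are my paraphrase of the paragraph above, not anything pulled from the game's actual data.

from dataclasses import dataclass

@dataclass
class Weapon:
    name: str
    good_against: str       # which species the weapon is actually meant for
    cloak_compatible: bool  # does predator invisibility still work while wielding it?

shoulder_cannon = Weapon("shoulder cannon", good_against="human", cloak_compatible=True)
prox_pistol     = Weapon("prox pistol",     good_against="alien", cloak_compatible=False)

def can_cloak(weapon: Weapon) -> bool:
    # Invisibility fails with the anti-alien kit--which is fine, since aliens
    # don't hunt by sight anyway.
    return weapon.cloak_compatible

print(can_cloak(shoulder_cannon), can_cloak(prox_pistol))  # True False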
The sequel, the aptly titled AvP2, improved on the first in virtually every way. Still, it took me a long time to warm up to it, because it was narratively inconsistent with both the films and the previous game. The shoulder cannon became a tracking weapon for fast-moving prey, and became incompatible with invisibility. The new weapons, like the netgun, seemed awfully human-like, complete with ammo pickups. In fact, the line between human and predator seemed to be getting blurrier.
On further play, I realized what I had been missing. Despite the increased emphasis on story in the three single-player campaigns, the game had been optimized for multiplayer, specifically a class-based multiplayer that divided all three species into four specialized classes. Every piece of weaponry, every alien mutation, now made perfect sense from a design perspective, and had the single-player campaign used a similar structure--back in the days of dial-up, I can be forgiven for reliably playing the single-player campaign first--I would have grasped the reason for the changes immediately, and acknowledged that they did, in fact, lead to a better game. But even for a better game, the narrative inconsistency might have been a little tough to swallow. Different people value texts for different reasons, of course; does this mean I'm a narratologist?
Tuesday, May 20, 2008
One, two, three
Let's start with three.
One is a point, two is a line, three is a shape. The Greeks were big on three and its multiples, at least partially for that reason, and this might be why we in Western civ have such a rough time not thinking in terms of threes. It certainly seems intuitive, from the perspective of the reality our language constructs: thesis, antithesis, synthesis. Once you have a thing, it's an intuitive leap to its opposite or absence, and an intuitive leap from there to the integration of the two. We see other arrangements as well: paper-rock-scissors is another one, appearing in Eternal Darkness as flesh-mind-spirit, the warrior, the alchemist and the wizard. Father, Son and Holy Spirit is the big one, of course, the one we can't unthink even if we want to. Metropolis refashions the trinity as head, hands and heart--suggesting a relationship between the three largely consistent with C.S. Lewis' description of the trinity itself. That which begets, that which is begotten, and the love between them, an animating force that is, itself, a person. The mind that conceives, the word that is spoken into being, and the breath that constitutes the connection between the two.
How does this relate to videogames? Mostly via a simple assertion: binaries are fucking boring. It's easier to make interesting relationships between three signs, characters, or factions than between two. And consequently, most modern RPGs and many adventure games are based around three primary stats or paths. Generally, it breaks down in terms of physical, mental/magical, and...other. Often it's agility, which is associated with stealth and thievery; agility essentially being an intuitive connection between mind and body that automates certain precise processes. The venerable Kingdom of Loathing uses "moxie" to much the same ends. Eternal Darkness splits mental/magical into two categories. Diablo begins with three playable characters, each based around strength, intelligence or agility; Phantasy Star Online reproduced the meme and split each class into three characters, divided among three races that related to each other as the classes did. In multiplayer games, a fourth entity sometimes appears in the form of the healer--most MMORPGs these days seem to be built around the interaction of a melee fighter, a non-specific ranger/thief support fighter, a healer and a nuker. Dungeons & Dragons, I'm told, had the cleric before the rogue, but then, the cleric in D&D isn't much like the clerisy in any other RPG. And we're talking about three for now.
In RPGs that allow variable morality, it's generally a secondary stat, one that changes as a result of your decisions rather than leveling up. Arcanum uses a single good/bad axis, like the hilariously simple Jedi Knight and the highly confusing Darkwatch. Occasionally, these simple systems are used for things that aren't quite moral in nature, but function for the player much the same way, such as the professionalism meter in Reservoir Dogs or the trust bars in Splinter Cell: Double Agent. D&D did something a bit more complex by adding a lawful/chaotic axis perpendicular to the good/evil one, but its application in games is a bit odd, as it was really designed for tabletop games with actual humans improvising shit and then fighting about it.
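For concreteness, here's roughly what that secondary-stat version looks like as code: two axes that drift with decisions rather than with level-ups. The thresholds and step sizes are invented for illustration; this is not how D&D or Arcanum actually implements alignment.

class Alignment:
    def __init__(self):
        self.good = 0.0     # -1.0 (evil) .. +1.0 (good)
        self.lawful = 0.0   # -1.0 (chaotic) .. +1.0 (lawful)

    def record(self, good_delta: float = 0.0, lawful_delta: float = 0.0):
        # Decisions nudge the axes; nothing here is tied to experience points.
        self.good = max(-1.0, min(1.0, self.good + good_delta))
        self.lawful = max(-1.0, min(1.0, self.lawful + lawful_delta))

    def label(self) -> str:
        g = "good" if self.good > 0.33 else "evil" if self.good < -0.33 else "neutral"
        l = "lawful" if self.lawful > 0.33 else "chaotic" if self.lawful < -0.33 else "neutral"
        return "true neutral" if (g, l) == ("neutral", "neutral") else f"{l} {g}"

a = Alignment()
a.record(good_delta=0.4)     # e.g. spared a prisoner
a.record(lawful_delta=-0.5)  # e.g. broke a contract to do it
print(a.label())             # "chaotic good"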
So, what if we looked at morality instead as a primary stat, the heart that mediates between head and hands, the breath of life between the mind and the word...the spirit, the soul, that which is third? How would morality function as an ability stat?
There are a few options. As grace, favor of the gods, etc., morality could function as a luck stat. But a quick look at how the world functions shows this to be a fairly stupid and untenable idea. Besides, most good stories require at least a little bit of bad things happening to good people. Conversely, it could function as a kind of anti-luck, a demonic shit-magnet, but that would have to be offset with some positive to make it make sense. Protection from certain kinds of evil is a possibility, as is immunity to certain effects, such as fear or supernatural curses. Experience growth would be interesting, associating moral living with the life force. Virtue ethics might provide a useful template for ideas, as might the Christian cardinal/theological virtues. All of these, of course, hinge on free will.
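To pick just one of those options and make it concrete: morality as a quiet experience-growth multiplier plus immunity to fear effects might look like the sketch below. The scale, percentages, and threshold are invented; it's a design doodle, not an existing ruleset.

def xp_awarded(base_xp: int, morality: int) -> int:
    # Up to +50% experience at maximum morality (assuming a 0..20 scale).
    return int(base_xp * (1 + 0.5 * morality / 20))

def resists_fear(morality: int) -> bool:
    return morality >= 15  # arbitrary threshold for "immune to fear"

print(xp_awarded(100, morality=20), resists_fear(morality=16))  # 150 True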
So we borrow a page from Kant, and to some extent, Zoroaster, and associate the morality stat with free will. What does this mean? Well, first of all, it sets up morality as a matter of presence vs. absence; morality opposing amorality, not immorality. A character with a low morality stat is, functionally, an animal, operating largely on stimulus-response, i.e. the avatar spends some of its time on autopilot. This character pursues self-interest--its rationality is debatable, and might be linked with the mind stat--and thus might or might not be thought of as an egoist, but, y'know, moving on. At any rate, this sad avatar of low morals is ruled by avarice and fear. (An aside: I think lust ought to be here, but that's a very hypothetical area I'll not deal with in this post, because any game that purports to be about morality, sex, and violence is going to need--need--to deal with rape. And not superficially.) (S)he identifies all opposing players as enemies, and can't converse or exchange items with them. Teams and clans, therefore, cannot be joined; those of low morals are doomed to solo. This connects our oversimplified, somewhat childish, yet still kind of interesting morality signifier with the realm of the interpersonal. More to the point, it penalizes mindless (automated) aggression, and makes not doing things as important as doing them--more, since not doing things is effectively a reward for increased abilities. (This principle will need to be applied at a few layers, but, whatever.)
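Reduced to its two gating rules, the sketch looks something like this: below some morality threshold the avatar can't converse, trade, or join a team, and it only intermittently obeys the player at all. The threshold and the autopilot odds are, of course, made up.

import random

MORALITY_THRESHOLD = 5  # below this, the character is amoral, not immoral

def can_socialize(morality: int) -> bool:
    """Conversation, trading, and team/clan membership all gate on this."""
    return morality >= MORALITY_THRESHOLD

def player_controls_this_turn(morality: int, rng=random.random) -> bool:
    """Low morality means the avatar sometimes acts on stimulus-response
    (attacks the nearest target, say) instead of obeying the player."""
    if morality >= MORALITY_THRESHOLD:
        return True
    autopilot_chance = (MORALITY_THRESHOLD - morality) / MORALITY_THRESHOLD
    return rng() > autopilot_chance

print(can_socialize(8), can_socialize(2))  # True False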
As for immorality, the perversion of substantive good, well, there are a couple of paths for that, fodder for future posts. In the meantime, what does the ruleset I've vaguely outlined above say? That the evil are fearsome, and more powerful individually than the good, but their power is limited and redounds upon itself through their lack of self-control. Finally, this ruleset would seem to give griefers their own class, although one wonders if they'd prefer to play as moral characters so as to fuck up other players more effectively.
Thursday, April 24, 2008
More fun with criticism
Well. This week has absolutely sucked. When did I last update this thing?
Oh, right. Anyway.
The dirty little secret about the videogame medium is that it'd probably be more accurate to say the videogame media. Granted, no universally agreed-upon definition for medium/media exists--I seem to recall a good, functional one about communication technologies and the social protocols that surround them--and given that HoMM5 runs on the same physical hardware as Blogger, the way we think about them and use them certainly has to come into it. But the trouble with coming up with a definition of "videogame" is that the commonalities between, say, Resident Evil and Second Life are not all that much stronger than the connection between Resident Evil and, well, Blogger. Part of why it's important to be able to identify different texts as belonging to different media is that it allows for the construction of critical theories appropriate to the medium in general.
I wonder if Eagleton's tripartite division in pre-structuralist lit theory might be useful in helping us see some of these distinctions. Authorial intent is not a sexy concept, of course, and it's unclear where some of my own perspectives fall...to what extent does it make sense to say that a game "says" or "does" something? Are we talking about the author? Generally not. If we do talk about the author, it's usually because someone fucked up. Part of what makes mediocre games so compelling as objects of study is looking at the pieces and being unable to resist coming up with explanations for how they were supposed to fit together before the dev team ran out of time or money. Other games, like Black & White and Frasca's still-fictional Strikeman, more or less demand to be looked at in terms of authorial intent, at least in terms of vision. Even emergent systems would seem to have a vague intent of their own, if only an intent to allow players to play with these rules over here but not those over there. But art doesn't generally work out the way we plan, and the engine we see is the product of several different intentional actors, along with mistakes, quick fixes and changes in direction, even before our perspectives as players come into play.
That said, as we go into the realm of multiplayer and user-generated content, reception theory does seem like the most promising fit. There's certainly a lot to be said about authorial intent in Second Life, just as fan cultures have done some interesting things with the thoroughly authorial and linear Resident Evil, but in general, certain theoretical approaches will work better for some genres than others, and pinning some of those down might be more important than coming to a complete understanding of what videogames are.
After all, the bar is pretty low here. According to Eagleton, nobody knows what the hell Literature is anymore.
Sunday, April 6, 2008
A few words on reception theory.
Terry Eagleton is very funny. That's quite an accomplishment in his field. Most people can't write funny literary theory; it's rare enough to be able to write intelligible literary theory. And if you're outside that particular tribe and wonder what literary theory is all about, he lays it out for you in Literary Theory: An Introduction: literary theory is an ongoing argument about literature, an academic field created by the Victorians to compensate for the Anglican church's waning ability to control the masses. (Marxist critics have a knack for providing such inspiring explanations for historical processes.) In describing reception theory, Eagleton suggests that it can be seen as part of an ongoing process:
Indeed, one might very roughly periodize the history of modern literary theory in three stages: a preoccupation with the author (Romanticism and the nineteenth century); an exclusive concern with the text (New Criticism); and a marked shift of attention to the reader over recent years. The reader has always been the most underprivileged of this trio--strangely, since without him or her there would be no literary texts at all. Literary texts do not exist on bookshelves: they are processes of signification materialized only in the practice of reading. For literature to happen, the reader is quite as vital as the author.

As someone who has spent the last ten years or so thinking of himself primarily as a writer, all empirical evidence to the contrary, I've never been entirely keen on this primacy-of-the-reader thing. First, it's hard to say the text doesn't exist because nobody's reading it; at the very least, the author read it, probably several times, sometimes before it got written down. "Reading" is not the only practice of signification that goes into writing, and I urge you to read and critique a blank sheet of paper sometime should you doubt this. The author is, of course, not a reader per se, because he (in this case, being me, the author is nominally male) has always "read" more than the archetypal reader. When I look at an old story of mine, I can't read the story the way a stranger can, because I can't un-remember the paratexts: when I read that story, I can't help but read the sentences I deleted, the scenes I decided not to write, the in-jokes I snuck into the exposition, or the books I was reading when I came up with the idea. But then, a friend of mine reading the story will have an experience not quite like either mine or the archetypal reader's, so it might be a matter of degree. That said, the reader is clearly an important part of the process. In literature, that is. Eagleton continues:
What is involved in the act of reading? Let me take, almost literally at random, the first two sentences of a novel: "'What did you make of the new couple?' The Hanemas, Piet and Angela, were undressing." (John Updike, Couples.) What are we to make of this? We are puzzled for a moment, perhaps, by an apparent lack of connection between the two sentences, until we grasp that what is at work here is the literary convention by which we may attribute a piece of direct speech to a character even if the text does not explicitly do this itself. We gather that some character, probably Piet or Angela Hanema, makes the opening statement; but why do we presume this? The sentence in quotation marks may not be spoken at all: it may be a thought, or a question which someone else has asked, or a kind of epigraph placed at the opening of the novel. Perhaps it is addressed to Piet and Angela Hanema by somebody else, or by a sudden voice from the sky. One reason why the latter solution seems unlikely is that the question is a little colloquial for a voice from the sky, and we might know that Updike is in general a realist writer who does not usually go in for such devices; but a writer's texts do not necessarily form a consistent whole and it may be unwise to lean on this assumption too heavily. It is unlikely on realist grounds that the question is asked by a chorus of people speaking in unison, and slightly unlikely that it is asked by somebody other than Piet or Angela Hanema, since we learn the next moment that they are undressing, perhaps speculate that they are a married couple, and know that married couples, in our suburb of Birmingham at least, do not make a practice of undressing together before third parties, whatever they might do individually.

What Eagleton describes here is the struggle to grok a rule system: to learn the underlying structures of the universe in order to piece together a useful, predictive understanding from incomplete information. It's about determining relationships, deciding which signs are relevant to which other signs, which narrative elements are epiphenomenal and which have deeper roots. Often this process relies (as it does in Eagleton's reading of Updike) on genre conventions, which are neither strictly textual nor the work of any particular author, but do form their own kind of tradition. Tradition is a loaded word in literary circles, one that's led to such unpleasantness as elitism, anti-Semitism and The Waste Land, but it's worth wondering where we'd be as gamers without our own little tradition. Would anyone be able to make the slightest bit of sense out of Twilight Princess if it had been released one year after the original Legend of Zelda? Eagleton goes on:
We have probably already made a whole set of inferences as we read these sentences. We may infer, for example, that the "couple" referred to is a man and woman, though there is nothing so far to tell us that they are not two women or tiger cubs. We assume that whoever poses the question cannot mind-read, as then there would be no need to ask. We may suspect that the questioner values the judgment of the addressee, though there is not sufficient context as yet for us to judge that the question is not taunting or aggressive. The phrase "The Hanemas," we imagine, is probably in grammatical apposition to the phrase "Piet and Angela," to indicate that this is their surname, which provides a significant piece of evidence for their being married. But we cannot rule out the possibility that there is some group of people called the Hanemas in addition to Piet and Angela, perhaps a whole tribe of them, and that they are all undressing together in some immense hall. The fact that Piet and Angela may share the same surname does not confirm that they are husband and wife: they may be a particularly liberated or incestuous brother and sister, father and daughter or mother and son. We have assumed, however, that they are undressing in sight of each other, whereas nothing has yet told us that the question is not shouted from one bedroom or beach-hut to another. Perhaps Piet and Angela Hanema are small children, though the relative sophistication of the question makes this unlikely. Most readers will by now probably have assumed that Piet and Angela Hanema are a married couple undressing together in their bedroom after some event, perhaps a party, at which a new married couple was present, but none of this is actually said.
More on New Criticism, authorial issues, and the problem of intent later. For now, this post is already long enough for me. I need a break.
Wednesday, March 26, 2008
A Drug Against War?
I've been writing about Columbine, on and off, for almost a decade now, more than a third of my total lifespan. It's consistently depressing, but strangely compelling as a topic. I finished an article on Bully a while back, a text in which it's difficult to avoid comparisons to Columbine if only in terms of the pre-release controversy (the text itself has a lot more to do with Lord of the Flies than any "factual" youth violence narratives), and in the interest of expanding on that, it seemed high time to take a look at the (briefly) infamous Super Columbine Massacre RPG. I haven't finished it, and in fact seem to have developed a rather pronounced mental block against playing it that can't be explained purely in terms of my utter addiction to HoMM5. All I can say with any degree of certainty is that it's rather not what I was expecting.
As the game opens, you (and your avatar, Eric Harris) run through the morning of April 20, 1999, moving through contemporary pop-culture references (Doom! Luvox! KMFDM! ...Marilyn Manson?) and hitting the occasional flashback. While I haven't checked into the specifics, the game appears to be built from a kit derived from Final Fantasy IV, or II for the yabanjin among us, and the engine goes a long way towards contextualizing the gameplay. I have to wonder if maybe the game has nothing to do with Columbine at all, and only uses the sensational real-world shooting as a device to parody the tropes of Final Fantasy and JRPGs in general. The long, trauma=drama cut-scenes, the emo whining, the easy, pointless battles...
...which brings us to the actual shooting. The battles are set up like in an RPG, a genre we don't think of as violent despite the fact that most RPGs produce body counts pure action games couldn't match. What other genre actually encourages players to wander aimlessly and kill everything they come across for hours and hours with no overt narrative motivation for doing so? That the "fights" against the unarmed students and teachers are so easy is, perhaps, part of the point, and I found myself habitually trying to maximize efficiency with the weapons and "armor" for the two characters, minimizing the expenditure of ammunition (which here functions as MP generally does in RPGs) and health items. In killing my way to character level 12--counting the two flashbacks that gave three levels to one kid each--I killed far more people than the actual Harris and Klebold. Having sufficiently explored the map (since I wasn't really planning on playing this thing more than once), I headed back to the point in the library where I'd earlier received the suicide prompt, and my two characters shot themselves.
I rather expected this to be the end of the game, but after a long and maudlin memorial sequence, a quotation from Dante's Inferno came up, and I found myself controlling Klebold in Hell. Now armed with only a pistol, he walked around long enough to be attacked by former humans and former human sergeants before an imp--yes, the furry, spiky, fireball-happy kind--killed him.
I'm not sure there's another strategy to be used here. It seems unlikely I can avoid that many of them. And building to level 12 wasn't nearly enough for this kind of fight. So the best I can guess is that I'm going to need to grind like hell during the actual, historical rampage shooting portion of the game so I'll be adequately prepared for the fighting I'm going to have to do in hell.
For me, the fact that it prompted me to write that last sentence is the most remarkable thing about the game. If I get nothing else out of the game, that's a sort of accomplishment in and of itself.
Sunday, March 23, 2008
Guns, Germs, and Steel: Ethics and Genre Shift
I was a big graphic adventure fan back in the day, and it never fails to bother me when I read that the genre is dead. It is more or less dead, of course, at least in its undiluted form, but then, graphic adventures are so filmic that much videogame theory practically defines them out of the category of "games." But this post isn't about graphic adventures, so much as their immediate descendant, the "survival horror" genre.
Inaugurated by Resident Evil, or Alone in the Dark if you're like that, survival horror basically jury-rigged some very basic action mechanics into the graphic adventure, a genre stressing narrativity (in the sense of both storyline and evocative architecture and aesthetics), observation, and basic logic. Assuming we take Resident Evil as the starting point, and I feel we would have good reason to do so--it's a choice between Milla Jovovich and Tara Reid, after all--we see a fairly simplistic graphic adventure combined with a pretty crappy action game. The puzzles are mostly pure item-swapping, and the action...well, you point and fire until the bad guys hit the ground. The Director's Cut (and the sequels) go one further by actually aiming the gun for you, eliminating the time-consuming "point" part of the process. So what made the game compelling? In "Hands-On Horror," Tanya Krzywinska suggests that horror-themed games (including but not limited to survival horror) derive their appeal from the tension between control and lack of control, and that this binary between free will and determinism, active gameplay and cut-scene, manifests at narrative and ludic levels. It's a wonderful idea, with implications far beyond the specific genre/milieu on which she was writing, but this post isn't about that either.
Rather, this is about what players are called upon to do in Resident Evil. The puzzles are simple, relying more on having the right item than any real thought process. The combat is simple, relying more on having enough ammunition (and the right weapon) than any particular combat strategy. So where's the player's main role? What takes up the majority of their time and energy?
1. The player must acquire items to open doors and generally move along.
2. To acquire items, the player must search rooms.
3. To search rooms, the player must either avoid or kill zombies (and other assorted baddies).
4. To kill zombies, the player must fire bullets, gradually exhausting their supply.
5. To acquire new bullets, the player must search rooms. See step 3.
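For the code-minded, that loop is tight enough to sketch in a few lines. This is my toy model, not anything from Capcom; the probabilities and ammo costs are invented, and the only point is that every activity feeds back into searching rooms, and searching rooms burns bullets.

```python
import random

# A crude sketch of the Resident Evil loop described in the list above.
# Entirely hypothetical numbers; the point is only that progress, searching,
# and shooting are locked into one cycle.

def play(rooms=10, ammo=15, keys_needed=3):
    keys = 0
    for room in range(rooms):
        # Steps 3/4: each room may hold a zombie you must kill or avoid.
        if random.random() < 0.6:
            if ammo >= 2:
                ammo -= 2            # killing costs bullets
            else:
                continue             # no ammo: avoid the room, find nothing
        # Step 2: searching the room may yield a key item or more bullets.
        loot = random.choice(["key", "ammo", "nothing"])
        if loot == "key":
            keys += 1
        elif loot == "ammo":
            ammo += 4
        if keys >= keys_needed:      # Step 1: items open the way forward
            return f"escaped with {ammo} bullets to spare"
    return "still wandering the mansion"

print(play())
```

Run it a few times and the whole genre's economy is visible: the player's real job is managing the bullet budget that the loop quietly drains.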
Onimusha, upon its release, was described not inaccurately as Resident Evil with swords. It's not a trivial distinction; if the protagonist's main weapon is a sword, ammo conservation goes out the window as a play mechanic. So, the ethics change from "pick your shots" to "kill everything you see." Kills reward the player with more than short-term safety in Onimusha, providing health, magic, weapon upgrades, and...keys. The most important items in Resident Evil, the ones that most directly allow you to progress the story and win the game, are sometimes earned in Onimusha not by judicious study of the environment, but by the simple acquisition of kills. At a narrative level, it's worth noting that the bad guys in Onimusha are conscious, evil beings rather than animals and brain-dead victims of a pharmaceutical accident. The argument for leaving a demon alive and going about your day is weaker than a similar argument would be for a zombie who used to be a lab tech. The Resident Evil hybrid of action and graphic adventure is tweaked in favor of action.
Devil May Cry continues the demon theme, as well as the "kill everything that moves" ethic, but adds a new element: style. One of the determinants of how many red orbs the player receives, and therefore how quickly and effectively the player can upgrade their character, is their ability to rack up style ratings against their numerous and creepy opponents. These style ratings necessitate keeping a constant stream of damage going for as long as possible, and thus discourage powerful, disjointed hit-and-run tactics in favor of fluid, aesthetically pleasing sword combos and juggling--the kind of thing we used to see in fighting games, back when fighting games mattered. The ethics in Onimusha demand that you kill things, but only Devil May Cry requires you to look cool while you're doing it. The puzzles are even simpler than in Onimusha, and the combat is much more frequent and requires more thought. The "action with a touch of graphic adventure" formula of Onimusha now becomes a straight-out action game, with elements of fighting games starting to trickle in.
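As far as I know, Capcom has never published the actual style formula, so take this as a guess at the incentive structure rather than a description of it: a meter that climbs while the damage keeps flowing, drains when you stop, and multiplies the orb payout. The thresholds, decay rate, grade names, and multipliers below are all mine.

```python
# A toy style meter in the spirit of Devil May Cry's ranking system.
# Every number and grade name here is invented; only the incentive
# structure is the point.

GRADES = ["D", "C", "B", "A", "S"]

class StyleMeter:
    def __init__(self):
        self.heat = 0.0

    def tick(self, damage_dealt, seconds_idle):
        # Continuous damage raises the meter; standing around drains it.
        self.heat = max(0.0, self.heat + damage_dealt - 10.0 * seconds_idle)

    def grade(self):
        return GRADES[min(int(self.heat // 50), len(GRADES) - 1)]

    def orb_payout(self, base_orbs):
        # A higher grade multiplies the reward, so cautious hit-and-run
        # play (big hits, long pauses) earns less than fluid combos.
        return base_orbs * (1 + GRADES.index(self.grade()))

meter = StyleMeter()
meter.tick(damage_dealt=120, seconds_idle=0)   # a sustained combo
print(meter.grade(), meter.orb_payout(base_orbs=10))   # B 30
meter.tick(damage_dealt=0, seconds_idle=8)     # backing off to play it safe
print(meter.grade(), meter.orb_payout(base_orbs=10))   # D 10
```

The design choice worth noticing is that the reward isn't for killing, it's for how you kill: safety and patience are literally taxed.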
God of War, though born of a different developer, further articulates the nascent fighting game elements of Devil May Cry. The style rating is replaced with a more traditional (and precise) combo system, and the easy blocking and rolling allows skilled players to string together ridiculously long combos to rack up red orbs for (wait for it) weapon and skill upgrades. Moreover, God of War actually brings back the "fatality" concept from Mortal Kombat, giving most enemies a specific, cinematic death, accessed by button/analog combinations irrelevant to normal gameplay, that allows players either to maximize red orb earnings or refill health or magic. It's perhaps not coincidental that among God of War's imitators was Mortal Kombat: Shaolin Monks, a spinoff that reenacts the Mortal Kombat II storyline in a way that actually makes some narrative sense.
So, in these titles, we see a genre shift from graphic adventure through hack-and-slash action to a new adventure/fighting game hybrid, accomplished through minor shifts in gameplay ethics.
Wednesday, March 19, 2008
Ideology link
I posted a brief bit about Terry Eagleton's Ideology: An Introduction and what it can tell us about game design over at the Valuable Games blog. Link goes here.
Sunday, March 9, 2008
The damnable time-suck of HoMM5
I haven't posted to this thing in a while, but fortunately, nobody's actually reading yet. (With apologies to the exceptions. Hi, mom!) I've been slogging through an article about Heroes of Might and Magic V (HoMM5) and the war in Iraq, and for a fairly short article, it's taking a while to get done.
The problem with writing such an article is that it requires you to play HoMM5, which is both ferociously addictive and dangerously time-consuming. In Everything Bad Is Good for You, Steven Johnson wrote of the struggle-reward cycle that underlies most videogames, but turn-based strategy literalizes that principle into "press 'end turn' button for terror; wait ten seconds for joy." Since the actual article won't see print here for a while (and hopefully never), here are some loose thoughts that have come up along the way about the principles encoded into the fictional and simulated world, bearing in mind that "fictional" and "simulated" are not actually dependent on one another.
1. Hierarchy. This isn't a war for the common folk. One tier 7 troop will make more difference on the battlefield than a hundred tier 1 troops, and the hero--the field general who faces no direct danger and has a largely symbolic diegetic role--is often more important than all the troops under his command.
2. Tribalism. Troop morale is determined by a variety of factors, some intuitive and some, um, not. Among the most prevalent, and in practice the most important, is an internalized taboo against, for lack of a better term, race-mixing. Factions differ on the level of species as well as culture, but since here in the really real world we have no language to deal with the problem of multiple sentient, humanoid species, we tend to use the word "race." It's a problematic term--I suspect that humans and zombies have far more convincing reasons to despise each other than, say, Sunni and Shi'a--but there it is. Troop morale drops if they're placed under the command of a hero whose race/species differs from their own, or if the troops contain warriors from more than one faction. Should both happen, should a stack of demon troops find themselves serving in an otherwise all-elf army under the command of an elven hero, the morale penalty often renders that stack basically inoperative. (I've sketched these rules out below, after the list.) So, if you're going into a tough battle, be careful about social liberalism. In the single-player campaign, two of the five main characters are narratively outsiders to their faction, but are treated (and coded) as being natives. One's a hero, one's definitively not, but it's interesting that they're so prominent, given the ludic rules on mixing.
3. Militarism. Every aspect of gameplay is geared toward the war effort. It's hard to imagine how it could be otherwise, given the genre. But it's interesting that the towns from which the player builds armies and researches technologies (a loaded term in this case) are narratively identified as actual towns, with, presumably, non-combatant citizens. (Otherwise, who's doing all this work? Can the two heroes who visit in a given week really keep that tavern in business by themselves?) We never see them, and don't know who they are or what they look like. Presumably their labor produces the gold the town offers up each day, but the only time we see mention of taxes is in the Haven faction's "peasant" unit bio, and they somehow manage to pay taxes while on the road, far away from their fields.
4. Multiple, mutually exclusive perspectives. Most strategy games let you play as more than one side, with the moral equivalence this often suggests and the strategic equivalence the genre demands. Generally, this amounts to having several different campaigns, each centered on the experience of one faction. HoMM5 has six campaigns (out of the box), which can only be played in order. Unlike in, say, Command & Conquer, in which the two campaigns loosely overlap, the six campaigns play out chronologically in a consistent universe, the result being that you're constantly forced to deal with the consequences of problems you caused for yourself while playing as another faction. It takes some of the verve out of the big victories when you realize that, one cut-scene later, it will retroactively have been a big defeat, which is one kind of identity confusion videogames do very well.
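Since I promised a sketch: here's roughly how I understand the morale rules in item 2, written out as a function. The penalty values are placeholders pulled from my head, not the game's actual numbers; only the structure of the taboo is the point.

```python
# A rough reconstruction of HoMM5's tribalism rules, as described in item 2
# above. The base value and penalties are placeholders; only the structure
# (hero/troop race mismatch, mixed-faction armies, and their combination)
# reflects what I've seen in play.

def stack_morale(stack_faction, hero_faction, army_factions, base=1):
    morale = base
    if stack_faction != hero_faction:
        morale -= 2            # serving under a foreign hero
    if len(set(army_factions)) > 1:
        morale -= 1            # sharing the ranks with another race
    return morale

# A demon stack in an otherwise all-elf army under an elven hero:
# both penalties apply, and the stack is more or less useless.
print(stack_morale("Inferno", "Sylvan", ["Sylvan", "Sylvan", "Inferno"]))  # -2
# The same stack at home, in a pure Inferno army:
print(stack_morale("Inferno", "Inferno", ["Inferno", "Inferno"]))          # 1
```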
Wednesday, February 27, 2008
Tall Tales and Paidia
Given the popularity of the genre in American film and folklore, it's really quite bizarre that we see so few Western-themed videogames. Granted, I can't think of any off the top of my head that have been especially successful, but it's hard to say if that's because gamers are for some reason particularly uninterested in the Western milieu or because so few of them get made in the first place. Neversoft's Gun is a notable exception, squeezing most of the standard Western motifs into the free-roaming adventure mold established and popularized by Grand Theft Auto. The pacing, however, is a bit different; in GTA games, while the story missions do generally follow each other chronologically, there isn't a great sense of urgency connecting them, and it doesn't interfere with the experience to wander around and explore the world for a while.
Gun spins a fairly tight tale, and one that doesn't especially lend itself to taking breaks. The narrative proceeds with a sense of urgency that the rule system doesn't bear out--there really is no penalty if you decide to wander around doing odd jobs and mining for gold while a friend of yours is being held and tortured by the bad guys. I enjoy the side missions a great deal, and tend to do them as soon as they become available; this play style tends to make the story a bit disjointed.
This might not, in fact, be an accident. The gameplay structure of Gun is cyclical: story missions open up side missions and new weapons. Side missions boost character stats and allow the player to earn money. Money is spent on upgrades. So playing straight through the story missions consecutively allows no time for stat growth or upgrades. I assume this would make the game unbearably hard, and there are key points in which the game actually reminds you that things are going to get harder soon, and you'd better raise your stats. These key points generally arrive at less time-sensitive moments than the "kidnapped comrade" scenario I mentioned above, and it's possible that this is how the designers intended/expected players to progress: hours of concentrated play on story missions followed by hours of concentrated play on side missions. At any rate, in practice the gameplay structure allows players to change the difficulty of the game to an unusual degree.
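If you want to see how lopsided that gets, here's a caricature of the cycle in code. Every number is invented, and "power" and "difficulty" are stand-ins for a pile of stats and upgrades I'm not going to model, but it captures why front-loading the side missions flattens the curve.

```python
# A caricature of Gun's cyclical structure: story missions unlock side
# missions, side missions pay out stats and cash, cash buys upgrades.
# All numbers are invented; what matters is how front-loading side missions
# changes the effective difficulty of each story mission.

def run(side_missions_per_chapter):
    power, difficulty = 10, 10
    for chapter in range(1, 6):
        difficulty += 6                            # the story escalates
        power += 4 * side_missions_per_chapter     # stats, cash, upgrades
        margin = power - difficulty
        print(f"Chapter {chapter}: hero power {power} vs difficulty "
              f"{difficulty} -> {'trivial' if margin > 5 else 'tense'}")

print("Ludus purist (story missions only):")
run(side_missions_per_chapter=0)
print("Paidia completist (every odd job, immediately):")
run(side_missions_per_chapter=3)
```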
In addition to screwing with the dramatic tension of the story, doing all the side missions as soon as they come makes the game, well, pretty damn easy. By the second half of the game, your character is practically bulletproof, and most of the bad guys go down if you look at them funny. You're not just heroic, you're bloody invincible.
Which might be the point. Someone who prioritizes the ludus game exclusively would find a brutal, dramatic game with a harsh difficulty curve emphasizing the usual FPS bag of tricks, such as traps, ranging, proper weapon selection, stealth kills, etc. Not exactly a realistic story--the Western is not especially well-known for "realistic"--but not out of step with, say, modern film narratives. Conversely, someone who goes off the path into paidia as much as possible ends up with something more closely resembling a tall tale: not only does the hero beat the bad guys, he does so without a great deal of difficulty; he gets shot hundreds of times and lives to tell the tale; he hunts better with a bow and arrow than the best Indian hunter; he does the jobs the sheriff and federal marshal couldn't; he's the best horseman in the Pony Express, the best gambler, the best prospector. It's common enough for the sheer hyperbolic weight of the protagonist's heroic accomplishments to swamp the main storyline in a game, but in a milieu that's always half-folktale anyway, it seems strangely appropriate. Accident or not, the tension between ludus and paidia exerts its own pull on the game narrative, resulting in a storyline that's as pleasantly flexible as those concerning any of our "real" Western demigods.
Monday, February 18, 2008
A few words on feedback.
My name is J.C. Denton.
Well, no, it isn't. I am Peter Rauch playing Ion Storm's Deus Ex, and even diegetically—that is, even from the perspective of the game's internal world—J.C. Denton is a codename. As Denton, I am infiltrating the ruins of the Statue of Liberty, which have been occupied by a terrorist group called the NSF. (The Statue of Liberty is in ruins because a different terrorist group has blown it up several years earlier. This attack on a major American landmark has allowed the government to launch a global war against vaguely defined “terrorists” and clamp down on civil liberties in general. There is much to be said about this plotline; I will say only that it sure seemed like a fun escapist fantasy back in 2000.)
My diegetic brother, Paul Denton, is assisting me on this mission. He reminds me that I am serving in a police capacity, not as a soldier, and encourages me to minimize bloodshed. I am armed with a 9mm semiautomatic and a short-range stun-gun, and Paul allows me to choose a third weapon. If I select the non-lethal tranquilizer crossbow, Paul is pleased; if I instead opt for the sniper rifle, he is concerned, asking that I remember that I'll be shooting at human beings, not targets.
Every character, in fact, seems concerned with my attitude toward the casual application of lethal force. Only Paul seems opposed to it. Indeed, if I kill too few people, and gain the admiration of Paul, my other comrades will doubt my commitment to the mission. Two opposing viewpoints on the morality of my killing are clearly established. Taking actions that satisfy either viewpoint will please some and displease others. My own beliefs concerning the morality of violence color the proceedings, of course, and I therefore consider one path preferable to the other. However, from my perspective as a player, and not as a character in the world of Deus Ex, the two viewpoints are distinguished differently. From a purely practical standpoint, completing any given part of the game with a high body count is much easier than doing so with a low one.
Deus Ex has only three non-lethal weapons, and they all require more skill to use effectively than their lethal counterparts. As the game goes on and my foes become more difficult, this skill difference becomes greater, and one might expect that the treatments of lethal and non-lethal violence would become more disparate.
This is not what happens. At the end of what could be considered the game's “first act,” Paul reveals that he has been working for the NSF all along. It is never made clear if he opposed the gratuitous killing of NSF agents because they were human beings, or because he was secretly on their side. This plot development could be read as an endorsement of the “mercy equals betrayal” attitude espoused by J.C.'s more bloodthirsty comrades. From this point on, while the game itself continues to make distinctions between “dead” and “unconscious,” the characters in the game do not. Characters drugged into unconsciousness are treated by other characters as being dead. At this point, combat functions much like any FPS: if something attacks you, empty as much of its blood as possible onto the floor.
In Deus Ex, the reasons we do not generally engage in wanton homicide in the “real” world generally do not apply. Beyond some vaguely-realistic faces and voices, the NPCs in Deus Ex are not very much like human beings. Whether he leaves them conscious, unconscious, or dead, J.C. rarely encounters any specific enemy more than once. The gun-toting NPCs are, on one level, problems to be solved, and it so happens that the sniper rifle is much more effective for solving problems than the crossbow. So why would anyone want to use the crossbow?
One reason, of course, is precisely that the crossbow is less effective. Non-lethal weapons require more skill, but developing and displaying skill is one of the things that makes videogames enjoyable. Variety is another reason, as players tend to seek out multiple ways to play a given scenario. Players who apply a role-playing element to the game might opt for non-lethal tactics because they wish to impute their own morality to J.C. For this last reason to function, however, another more fundamental reason must already be in place. Why would players want to minimize NSF casualties in the face of greater difficulty?
Because the game will notice if they do.
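I have no idea how Deus Ex actually tracks this under the hood, but the bookkeeping required for a game to "notice" is almost embarrassingly cheap, something like the sketch below. The counters are the whole trick; the dialogue lines are invented, in the spirit of the characters rather than quoted from the game.

```python
# A sketch of the bookkeeping behind "the game will notice." This is not
# Ion Storm's code; it only shows how little it takes to keep score of
# lethal versus non-lethal choices and feed the tally back through dialogue.

class MissionLog:
    def __init__(self):
        self.kills = 0
        self.knockouts = 0

    def record(self, lethal):
        if lethal:
            self.kills += 1
        else:
            self.knockouts += 1

    def debrief(self):
        # Two opposed viewpoints, keyed to the same statistic.
        # (Invented lines, not quotations from the game.)
        if self.kills > self.knockouts:
            return ("Paul: 'They were human beings, J.C.'",
                    "Anna: 'Good work. No hesitation.'")
        return ("Paul: 'You showed restraint. Thank you.'",
                "Anna: 'You're too soft for this job.'")

log = MissionLog()
for lethal in [False, False, True]:   # two knockouts, one kill
    log.record(lethal)
print(*log.debrief(), sep="\n")
```

The feedback doesn't have to change the rules of combat at all; a counter and a couple of conditional lines are enough to make the player feel watched.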
Friday, February 15, 2008
The Sarah Problem and the gendering of genre
Hat tip to August J. Pollack for finding this. It's tempting to think of this sort of thing as mere ham-handed marketing with nothing more than the profit motive behind it, a desperate attempt to bring women into a consumer group perceived to be hostile to said women via tired stereotypes. It's easy because, well, it's largely true, but to dwell on that would be to miss the fact that this kind of thinking is surprisingly pervasive at all levels of the gaming community, from the players to the press, and even, to some extent, to the academics.
Richard Cobbett covered this territory more effectively than I ever could with "Writing A 'Girls In Games' Article", an essay that ought to be required reading for anyone attempting to discuss gender and games. Girl Gamer seems to flow from several lines of thought critiqued by Cobbett, specifically points 3, 4, 8 and 9, with the greatest emphasis on point 4.
Thing is, the idea that women, when expressed as an arithmetic mean, prefer certain genres, modes and features was not arbitrarily pulled from the ether. Statistically, it has some support, and even for those of us who feel that the American faith in statistics is more often religious than scientific in nature, that support is hard to ignore. But even at their best, statistics are only empirical, prone to methodological error, and are not, in and of themselves, predictive. (That's where "theory" comes in. Creationists beware.) Group identities are useful things, but they are ultimately fictions. I like fiction; fiction can be compelling and useful, and you don't have to be a mystic to understand that things that exist subjectively can and often do affect things that exist objectively. To riff a bit on a quote from a dead conservative/libertarian humorist whose name I cannot, at this moment, find, women are only available in units of one. Out here in the really real world, they're not actually a hive mind.
Which brings me to the Sarah Problem. Sarah is not an average or a composite, but an actual human being, made mostly of water, and capable of reflecting on her own existence. While I haven't verified it directly, her name, physical appearance, and the image she projects suggest that she has two X chromosomes. She is, in short, a woman. And the rules we apply to women in the context of their relationship to videogames do not seem to apply to her. She's not big into The Sims or casual games. She isn't turned off by brutal violence or highly sexualized female avatars. (And yes, sports fans, she's straight. That should save a couple of comment writers a minute or two.) She bought her PS2 before I bought mine, and nearly every time I get into a bloodbath like Devil May Cry, God of War or Resistance, she's already bought, played, and usually beaten it.
This would seem to make her something of a statistical outlier, but I can't sign on to the assumption, implicit in many discussions of gender and videogames, that this makes her experience as a gamer or as a woman less valid. Because, well, she exists. She's a friend of mine. And her experience ought to be part of the discussion. Individual experiences matter. In addition, in a large enough sample group--say, people who play videogames--the outliers can constitute rather large groups, and sometimes the exceptions to the rule are among the most interesting and important.
The need to create a "feminine space" in videogames, however worthwhile that goal might be, has led to an irritating phenomenon I refer to as the gendering of genre: Halo is for boys, The Sims is for girls. Boys like speed, competition and violence, girls like story, personalization and collaboration. And, if you were to take a poll, that's certainly true for some of them. But things like story, personalization and collaboration are important in and of themselves, not because they might be marginally more likely to appeal to women. We're seeing a great expansion in paidia play in nearly all videogame genres now, thanks to a combination of market demands and the new creative options opened up by advancing technology. The development of new genres is a good thing, period. Will these new genres help developers and publishers expand their consumer base? Who cares? Electronic Arts' bottom line really isn't my problem.
The problem with gendering these aspects of gameplay is that actual flesh-and-blood women do occasionally fall on the "masculine" side of the spectrum, and this creates a conflict in our construction of the topic. If violent, ludus-heavy action games are masculine, then Sarah is something of a ludic transvestite. Whereas before she might have been thought of as unfeminine for playing videogames, now she can look forward to being thought of as unfeminine for playing the "wrong" videogames.
More to the point, treating Sarah's play experience as being "masculine," in the sense of being equivalent to the experience of a male playing the same games, collapses her into a group to which she does not belong. That meatsuit she wears influences her consciousness, her sense of identity, and the way she's treated by others, just as my (marginally different) meatsuit does for me. Her experience might very well be different from that of the "usual" gamer, and for research purposes that seems like it might be kind of important. Yes, it's always interesting to think about what games non-gamers might like to play, and a lot of those non-gamers happen to be women. But if there is some social good to be gained from having more women playing videogames (a question I'll not attempt here), it seems like women who already play and like games in multiple well-established, commercially successful genres would be worth listening to as well.
In Arcanum, female avatars are given a +1 bonus to Constitution and a -1 penalty to Strength. Even in a world of elves and dwarves, default status is issued to human males--white ones, judging by the available character portraits. The discussion of the "female problem" in the videogame industry does not have to function along similar lines. So let's all keep a firm grip on our undergraduate understanding of the difference between "sex" and "gender," and remember that we don't know much about biology, culture and causality, and that demographic data that's true right now might not be for very long. To pretend otherwise, to reify what might be fairly arbitrary taste issues, would be stupid.
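For what it's worth, the rule I'm complaining about amounts to a couple of lines in a character builder, something like the sketch below. I'm reconstructing the modifiers from memory, so treat the exact values as an assumption rather than documentation.

```python
# Roughly what a sex-based stat modifier amounts to in a character builder.
# The base stats and deltas mirror my memory of Arcanum's human defaults
# (+1 Constitution, -1 Strength for female characters); treat them as an
# assumption, not documentation.

BASE_STATS = {"ST": 8, "CN": 8, "DX": 8, "IN": 8}
SEX_MODIFIERS = {
    "male": {},                      # "default status": no adjustments
    "female": {"ST": -1, "CN": +1},
}

def build_character(sex):
    stats = dict(BASE_STATS)
    for stat, delta in SEX_MODIFIERS.get(sex, {}).items():
        stats[stat] += delta
    return stats

print(build_character("male"))
print(build_character("female"))
```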
Monday, February 11, 2008
The Torture Game
More recycled content, technically the second half of the last post. If you're not going to skip this, read that one first.
Four recent, commercial games have directly dealt with the issue of torture: The Punisher, State of Emergency 2, The Godfather and Reservoir Dogs. This list is not exhaustive, but these titles demonstrate some of the ways torture has been approached in existing games. Of these four titles, The Punisher is the most explicit, and is the central subject of my investigation. As such it receives the most attention, but all four offer useful insight on the subject.
The Punisher, it must be noted, is not merely a game, but part of a multimedia franchise. Originating as a villain in an issue of Spider-Man, the character known as Frank Castle—alias The Punisher—has been a persistent figure in the Marvel Comics universe for thirty years. Volition's videogame adaptation of The Punisher was released in 2004 to coincide with the theatrical release of the film of the same name. Both the film and game adaptations drew heavily on the work of Garth Ennis, who had recently revitalized interest in the character among comic readers. Ennis' particular take on The Punisher is substantially more complex than the simple-minded vigilante previous writers had crafted, and the Punisher videogame is so thoroughly steeped in the work of Ennis that it cannot be read in isolation from that work. Panels from Ennis' books provide a substantial part of the game's reward system, and serve as indexes, pointing to the larger narrative of which the game is a part. That narrative guides the game mechanics, and the game's ethical framework compels the player to kill in a variety of ways, none of which should be unfamiliar, symbolically or mechanically, to any action game enthusiast. What is comparatively new is The Punisher's treatment of torture.
The Punisher's so-called “torture engine” is a mini-game of sorts. Frank puts his victims in a dangerous, frightening and/or painful situation that is not immediately lethal, and he must keep them sufficiently intimidated without killing them. The controls vary with every method of torture, but all rely on subtle manipulation of an analog stick. At first glance, torture appears to function as an interrogation technique. Certain characters possess special information that can only be extracted through torture. However, this information is never essential to Frank's mission, but only supplementary: a skilled player can easily get by without it. Moreover, very few characters have any useful information to be extracted, yet nearly all can be tortured. In spite of torture's lack of value for interrogatory purposes, it is nevertheless a crucial play mechanic, and players cannot easily avoid engaging in it.
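If it helps to see the shape of that mechanic, here's a toy sketch of the "keep them terrified, don't kill them" loop in Python. To be clear, none of these names or numbers come from Volition's code; the fear and health variables, the decay rates, all of it is my own guess at how such a system might be tuned.

```python
# A toy sketch of a "keep them terrified, don't kill them" loop.
# All names and numbers are my own assumptions, not Volition's.

def torture_step(stick_pressure, fear, health):
    """One tick of the mini-game. stick_pressure runs 0.0 to 1.0."""
    fear = max(0.0, fear - 0.05)             # the victim calms down if ignored
    fear = min(1.0, fear + stick_pressure * 0.15)
    health -= stick_pressure * 0.08          # push too hard for too long and they die
    return fear, health

fear, health = 0.5, 1.0
for tick in range(30):
    fear, health = torture_step(0.6, fear, health)   # steady, moderate pressure
    if health <= 0:
        print("victim died: the mini-game is failed")
        break
    if fear >= 0.9:
        print("victim breaks on tick", tick)
        break
```

With steady, moderate pressure the victim "breaks" well before dying; mash the stick and the health term wins the race, which is exactly the failure state the game punishes.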
The Punisher is not an open-ended play-space like Second Life, and players are not expected to do things merely because they can. Rather, the game encourages torture (makes it "ethical") by connecting it to two incentives: the acquisition of points, and the unlocking of hidden content. Points feed directly back into the gameplay experience, as players exchange them for skill and weapon upgrades. Scripted, location-sensitive tortures provide the largest point bonuses, but any enemy character within grabbing distance can be exploited for this purpose, and an execution is never as profitable, in terms of points, as an execution preceded by torture. In addition to the points, torture will randomly cause Frank to have flashbacks. These flashbacks are presented to the player as a panel of comic art from Ennis' Punisher stories accompanied by an appropriate voice sample; for example, an image of Frank holding a dead family member juxtaposed with a terrified criminal screaming “I have a family!” These flashbacks, once unlocked in the main game, can be viewed from the title menu, and contribute to overall completion of the game, much like the side-quests in the recent Grand Theft Auto games. For the player, the reward for the (frequently challenging) act of torture is non-diegetic. Points have no meaning at the narrative level, and it's unclear why Frank would want to suffer flashbacks to painful moments in his life. Thus, in terms of the game's internal world, it would be tempting to refer back to George Orwell's 1984: “The purpose of torture is torture.” More accurately, though, the purpose of torture, in The Punisher, is a “bonus round” of sorts, a chance for the player to demonstrate skill in exchange for points. If torture is a “mini-game,” it is easy enough to “fail” by accidentally killing the victim. The player loses points for killing a victim in the course of torture, even though he or she would gain points for killing the same person in a more conventional fashion. The game takes no notice of whether the victim has given Frank whatever information they have. The rules are simply that killing is rewarded, torture is rewarded, but accidental killing during torture is punished. These are the ethics of torture in The Punisher, and they make sense at a purely mechanical level. At a narrative level, they are internally inconsistent, and thus the narrative and ethics cannot be integrated into a moral argument about torture.
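Reduced to its incentive structure, the scoring logic looks something like the snippet below. The point values are placeholders (I don't have the real table); only the ordering reflects the incentives described above: torture-then-kill beats plain killing, and a death during torture costs you.

```python
# Placeholder point values; only the ordering reflects the game's incentives.
KILL = 100                 # a conventional execution
TORTURE_BONUS = 150        # tortured first? bigger payout
ACCIDENT = -200            # the victim dies mid-torture: penalized

def score(event):
    return {
        "execution": KILL,
        "torture_then_execution": KILL + TORTURE_BONUS,
        "death_during_torture": ACCIDENT,
    }[event]

assert score("torture_then_execution") > score("execution") > score("death_during_torture")
```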
State of Emergency 2 is the little-known sequel to the controversial State of Emergency, which places players in violent street combat against a fascistic corporate dictatorship. The original game incorporates contemporary political debates about globalization into its narrative, but squanders its potential for legitimate discourse through simple-minded play mechanics.
The sequel adopts a more linear, story-based approach to revolution that includes a mini-game in which players interrogate suspects. The interrogator is “Spanky,” a former gang member and Hispanic stereotype, and the interrogation consists of repeatedly punching a captive. In terms of play mechanics, interrogation is a timing game, in which players must hold the proper button and release it at the proper time—release the button too early and Spanky will not punch hard enough to cause sufficient pain, release the button too late and Spanky will punch too hard and kill the captive. In contrast to the calculated brutality of the torture seen in The Punisher, the State of Emergency torture scenes are somewhat cartoonish. The famously graphic violence of the original State of Emergency, which allows players to blast non-player characters (NPCs) apart with explosives and then use the charred body parts as weapons, has been toned down considerably in the sequel, and one wonders why torture was included at all if gratuitous violence were a concern. As it stands, the torture scenes are among the least violent and disturbing action scenes in the game.
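Mechanically it's a hold-and-release window, something like the sketch below. The window boundaries are invented; the point is that "too weak" and "too hard" are the only failure states, and "the captive is a person" never enters into it.

```python
# Hypothetical thresholds for a hold-and-release interrogation punch.
def punch_outcome(hold_seconds, window=(0.8, 1.4)):
    low, high = window
    if hold_seconds < low:
        return "too weak: no information"
    if hold_seconds > high:
        return "too hard: the captive dies"
    return "just right: the captive talks"

for hold in (0.5, 1.0, 1.8):
    print(hold, "->", punch_outcome(hold))
```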
The Godfather is the high-profile videogame adaptation of the world described in the Mario Puzo novel and Francis Ford Coppola films. Though not explicitly mirroring the plot of the novel or films—the protagonist is a new character not found in either—the ubiquity of The Godfather in popular culture makes it unlikely that players will come to the game unfamiliar with the Corleone dynasty. As with The Punisher, the game narrative must be read in context of the larger text of which it is a part.
Intimidation is a major factor in the gameplay of The Godfather. The most common use of intimidation is against shopkeepers, to encourage them to hand over protection money. Unlike the previous examples, the player need not resort to physical pain for this purpose, although the game allows a great deal of realistic physical violence. If a shopkeeper is being particularly stubborn in his refusal to pay, smashing his cash register might be more effective than choking him or shooting him in the kneecap. Simply placing someone in your gunsights for several seconds will often do the trick. Consistent with the gangster ethics detailed in the novel and films, the game engine generally rewards players for finding ways to intimidate without resorting to direct bodily harm.
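If I had to guess at the underlying economy, it would look something like this, with every number invented and nothing here claiming to be EA's actual code; the only thing I'm asserting is that fear generated without bodily harm pays better.

```python
# My guess at the shape of The Godfather's intimidation economy, not EA's code.
TACTICS = {
    # tactic: (fear generated, involves bodily harm?)
    "aim_weapon":     (0.3, False),
    "smash_register": (0.5, False),
    "choke":          (0.4, True),
    "kneecap":        (0.7, True),
}

def protection_payout(tactic, base_cash=100):
    fear, harmed = TACTICS[tactic]
    multiplier = 1.5 if not harmed else 1.0   # restraint pays better
    return int(base_cash * fear * multiplier)

for tactic in TACTICS:
    print(tactic, protection_payout(tactic))
```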
Finally, Reservoir Dogs is the videogame adaptation of the 1992 Quentin Tarantino film of the same name. Similar to The Godfather, torture is used not for interrogation, but rather for intimidation. Though the game gives players the option of blasting their way through all obstacles, earning a “Psychopath” rating in the process, the more cerebral “Professional” track requires a more measured use of violence, both threatened and enacted. Taking human shields, and therefore threatening hostages with lethal violence, is sufficient to disarm security guards, but will result in a standoff with actual police. Police will also drop their weapons, however, if the player pistol-whips the hostage in front of them—but even this is ineffective against large numbers of police. When surrounded, players who have charged up the avatar's “adrenaline” can perform a “signature” move, beating the hostage into unconsciousness and likely disfiguring him or her in the process.
These “signature” moves are unique to each character, from Mr. Blue's cigar to Mr. Blonde's trademark straight razor, though the most brutal violence happens off-screen. A “signature” move will make every cop in the vicinity lay down their weapons in surrender. The game's ethics, in this case, cannot possibly be developed into a moral argument, simply because they make no sense whatsoever at the narrative level. Beating and disfiguring a civilian should, logically, make the character more likely to be shot by police, not less. In addition, unconscious hostages drop to the ground and cannot be picked up. Thus, by performing a “signature” move, the protagonist reveals to the police that he is violent, unpredictable and dangerous, while simultaneously releasing his human shield. The torture techniques described by Mr. White in the film, or enacted by Mr. Blonde, would have made some degree of sense in terms of the narrative, but the torture found in the game, while superficially similar, does not.
In all these games, some common elements exist. First, the games' ethics, which compel the player to torture, are not explicitly out of sync with the protagonists' motivations. From the protagonists' perspective, torture is justified by the moral “gray area” of the situations in which they find themselves, be it organized crime, insurrection, or vigilantism. We are given no reason to believe that the protagonists themselves believe torture to be immoral, at least under the given circumstances. It is worth noting that three of the games I've discussed, The Punisher, The Godfather and Reservoir Dogs, are adaptations of existing works, and each inherits a nuanced morality of violence from the worlds' origins in film, novels and comic books. The player is not called upon to accept or reject the protagonist's actions as moral, and the circumstances in which the protagonists find themselves are defined as extraordinary and largely unrelated to “real life.”
Second, the morality of torturing an innocent is never addressed. The Punisher cannot torture an innocent person who was simply in the wrong place at the wrong time, because these people do not exist in the game. (Innocents exist, but they are clearly marked, and the player cannot make Frank torture them.) In The Godfather and Reservoir Dogs, the player is an anti-hero at best, but there are no judgments on when it is moral to torture, just when it is ethical in terms of gameplay.
Third, when torture is applied for the purpose of interrogation, it is universally effective. The tortured party will invariably “crack,” given the right circumstances. When they do, they will invariably give the protagonist correct information.
Fourth, the actions of the player have no long-term effect on the overall “war effort.” It is hard to imagine how they could, given the genres in which these games take place. The mafia and the fascist thugs of the games in question are not in a position to become more brutal due to the avatar's actions.
Fifth, the experience of having intentionally inflicted pain on a defenseless human being has no long-term effect on the mental health of the protagonist. Again, this is to be expected, since the modeling of avatars' mental states is still very rare in videogames. (Silicon Knights, of Blood Omen fame, made some progress on this front with Eternal Darkness, albeit in a less "serious" supernatural fashion.)
These games clearly demonstrate that videogame designers have developed the conceptual tools necessary to model the act of torture, but not its consequences. By carefully integrating the rule system and narrative, and by explicitly addressing those elements found lacking in the games I've described, it is possible to design videogames that make coherent moral arguments about, and more specifically against, torture in a way that would not be possible in any other medium. I here propose a model for such a game.
The best genre for such a game would be a single-player strategy game that alternates between macro-management and micro-management, similar to Microprose's X-COM: UFO Defense. Time will need to be somewhat fluid in the game, which would suggest a turn-based approach, but there's no reason parts of the game couldn't be designed for real-time strategy. The player commands a military unit in occupied territory, under constant threat of attack from local guerrilla forces. To prepare for or prevent these attacks, the player must gather information, make arrests, interrogate suspects, and use the new information to coordinate attacks or make more arrests. Like X-COM, gameplay will be cyclical in nature, and will end when either the guerrillas successfully wipe out the player's unit, or when public support for the guerrillas wanes and order is restored. These are only end conditions, however—it might be necessary, depending on the argument the designers seek to make, for true, non-diegetic victory to be independent of military success. Most importantly, the morality espoused in the narrative must be consistent with the ethics of gameplay.
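To make that loop concrete, here's a bare-bones sketch of the campaign cycle and its two end conditions. Everything here, the variable names, the drift rates, is placeholder; it's just the X-COM-style skeleton.

```python
import random

# A bare-bones campaign loop with the two end conditions described above.
# All numbers are placeholders.
def run_campaign(support=0.5, unit_strength=1.0, turns=200, seed=0):
    rng = random.Random(seed)
    for turn in range(turns):
        # intelligence, arrests, and interrogations are abstracted into a drift
        support += rng.uniform(-0.02, 0.02)
        unit_strength -= max(0.0, support - 0.5) * 0.05   # a popular insurgency bleeds the unit
        if unit_strength <= 0:
            return turn, "the guerrillas wipe out the player's unit"
        if support <= 0:
            return turn, "support for the guerrillas wanes; order is restored"
    return turns, "stalemate: the occupation grinds on"

print(run_campaign())
```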
As the game begins, players are given some initial intelligence from a variety of sources concerning planned attacks, and suggesting suspects. Players must then travel to a given location and attempt to arrest a suspect, using a minimum of force. After all, killing a suspect before he can make himself useful is a failure at both military and moral levels. Assuming the suspect can be arrested and returned to base successfully, the interrogation phase begins.
The interrogation process is the most significant portion of the game. Consequently, the game rules must acknowledge the issues ignored by the games I've discussed. The rule system, after all, will determine the ethics of gameplay, compelling gamers to play in a certain way, and the narrative cannot be allowed to disconnect from these ethics. Thus, characters must express differing opinions on the morality of torture in general. Establishing the opinions of NPCs can be handled in a number of ways, and designers need not resort to overlong cutscenes, but they will need, at the very least, well-written dialogue that is both semi-random and likely to be encountered by players. In addition, the game must include the possibility of bad intelligence, and it must be possible, even likely, for players to make false arrests. Whether or not the suspects actually know anything, many will lie and give false information as the torture becomes increasingly brutal; conversely, some will protest their innocence through any level of torture, and some will simply say nothing.
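The suspect model is where most of this lives. A sketch, with made-up probabilities; the only property that matters is that brutality makes false information more likely, not less, regardless of guilt.

```python
import random

# A suspect model sketch. The probabilities are invented; the design point is
# that more brutality means more false information, whether or not the suspect
# actually knows anything.
class Suspect:
    def __init__(self, guilty, knows_something, rng):
        self.guilty = guilty
        self.knows_something = knows_something
        self.rng = rng

    def respond(self, brutality):
        """brutality runs from 0.0 (questioning) to 1.0 (severe torture)."""
        if self.knows_something and self.rng.random() < 0.3 + brutality * 0.3:
            return "true information"
        if self.rng.random() < brutality * 0.8:
            return "false information"      # says anything to make it stop
        if not self.guilty and self.rng.random() < 0.5:
            return "protests innocence"
        return "says nothing"

rng = random.Random(1)
innocent = Suspect(guilty=False, knows_something=False, rng=rng)
print([innocent.respond(b) for b in (0.1, 0.5, 0.9)])
```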
Players will be allowed to detain suspects for as long as they choose, torture them in any way provided by the game designers, and execute them at will. All of these actions must directly affect the rest of the game. The guerrillas might gain popular support, and become more numerous and better armed, depending on who the player arrests, how the suspects are treated, and whether they are released, detained indefinitely, or executed. In addition, as a result of the player's actions, suspects could become increasingly less likely to allow themselves to be arrested, opting instead to shoot it out with the player's troops or blow themselves up to evade capture.
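The feedback into the wider war could be as simple as a support update after every release, detention, or execution. The weights below are mine, not a claim about the right numbers; what matters is the sign of each term.

```python
# Sketch of the feedback loop: how one suspect's treatment moves popular
# support for the guerrillas. Weights are illustrative only.
def update_support(support, innocent, tortured, outcome):
    if tortured:
        support += 0.03
    if innocent and outcome in ("detained", "executed"):
        support += 0.08          # grievances are a recruiting tool
    if outcome == "released" and not tortured:
        support -= 0.02          # restraint slowly drains their base
    return min(1.0, max(0.0, support))

print(update_support(0.5, innocent=True, tortured=True, outcome="executed"))  # 0.61
```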
In addition to the effects of the player's torture on the effectiveness of the mission, there must also be consequences to the torturer. This can best be accomplished by having a single interrogation specialist character with greater narrative depth than most other characters: in the context of the interrogation sequences, the specialist is the protagonist. While much of the game's dialogue can be semi-random, the interrogation specialist must have more tightly scripted dialogue, and more of it. If the game is to have a narrator of any kind, the interrogation specialist is the logical choice. As torture becomes more frequent and more brutal, the specialist will become increasingly unhinged. Torture will become more difficult to accomplish, as the protagonist increasingly “ignores” the player's controller input, increasing the number of so-called “accidents.” As the protagonist moves from torture as a means to an end to torture as an end unto itself, he will become less effective at extracting information. The less brutal methods of interrogation will cease to be available to players. Eventually, it will become impossible for players to do anything with suspects except brutally torture and kill them, and doing so will only hasten the victory of the guerrillas.
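One way to implement that slide, sketched below: a single "trauma" value (my term, purely hypothetical) that both corrupts the player's input and prunes the gentler options off the menu.

```python
import random

# A sketch of consequences for the torturer: a hypothetical trauma value that
# corrupts controller input and removes the less brutal interrogation methods.
def apply_input(intended_pressure, trauma, rng):
    if rng.random() < trauma:
        # the specialist "ignores" the player and pushes harder than intended
        return min(1.0, intended_pressure + rng.uniform(0.2, 0.5))
    return intended_pressure

def available_methods(trauma):
    methods = ["conversation", "threats", "beating", "severe torture"]
    cutoff = int(trauma * (len(methods) - 1))   # gentler methods disappear first
    return methods[cutoff:]

rng = random.Random(2)
for trauma in (0.0, 0.4, 0.9):
    print(trauma, available_methods(trauma), round(apply_input(0.3, trauma, rng), 2))
```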
These are the basics of the game, the elements common to any meaningful argument against torture. From there, three specific arguments can be made. The specific mechanics of the game, such as the probabilities of arresting an innocent person or extracting false confessions, will be dependent on the designers' intended argument. The first is a rather Machiavellian claim that torture is an effective tool for a counter-insurgency, but must be used sparingly, so the benefits of useful information outweigh the costs of increased enemy resistance and deaths of innocent victims. This argument defines what is good as what wins the war, and treats torture as an evil to be engaged in only for a greater good. For this argument, torture must make the game easier to complete; refraining from torture as much as possible must bring a greater difficulty and a greater reward. Nonetheless, the only win condition is military victory, and no moral rule is more important than that.
The second argument is that torture is simply counter-productive. For this argument, the variables must be set so the costs of torture are overwhelmingly larger than any possible benefits. Consequently, it must be impossible to complete the mission using torture as a strategy, and victory must be easiest when the player repudiates torture entirely. Again, this argument ties morality with military victory, and the most moral solution is also the most practical. This argument could also be made satirically by separating the win condition from military victory, and rewarding the player in non-diegetic ways for continuing to torture even as it dehumanizes the protagonist, kills innocent people, and allows the guerrillas to take over the country. The world will be decisively worse than when the player began the game, the mission will have failed miserably, but the player will be assured, through a high score or bonus content, that they've done the right thing. The sheer absurdity of such a game would be a powerful argument against torture.
The third argument differs from the first two by designing the game's ethics to serve an anti-torture morality completely divorced from military victory. The mission may succeed or fail, but such success is not taken into consideration in terms of the player's reward. Rather, the game must encourage players to torture by offering powerful short-term benefits, and reward them for resisting the temptation, both with non-diegetic rewards such as points and unlocked content, and a well-constructed narrative that makes it clear that, win or lose, soldiers who refrain from crimes against humanity can at least look themselves in the mirror with their sanity intact.
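In engine terms, the three arguments are mostly the same simulation with different knobs. Something like the following, where every number is a placeholder and only the relative weights, plus the choice of win condition, carry the argument.

```python
# The three arguments as tuning parameters on one simulation. Placeholder values;
# only the relative weights and the win condition matter.
ARGUMENTS = {
    # torture_benefit: intel gained per act; torture_cost: support handed to the
    # guerrillas; win_is_military: does the scored "win" track military victory?
    "machiavellian":         dict(torture_benefit=0.6, torture_cost=0.2, win_is_military=True),
    "counterproductive":     dict(torture_benefit=0.1, torture_cost=0.8, win_is_military=True),
    "morality_over_victory": dict(torture_benefit=0.6, torture_cost=0.4, win_is_military=False),
}

for name, knobs in ARGUMENTS.items():
    print(name, knobs)
```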
These are, as I like to say, loose thoughts. I can't design this stuff, and don't know if it would work, assuming we can all agree on what constitutes "working" in this context. But it's an interesting possibility, and an interesting way to think about this kind of debate.
(Gameplay) Ethics: A Primer
And now for something completely different. This is recycled content, having appeared first in a conference paper and later in my master's thesis, available here. If you've already read it, you'll be pretty bored. This post lays out my ideas about ethical gameplay, a concept to which I'll be returning and, with luck, improving.
In “Simulation versus Narrative,” Gonzalo Frasca posits the possibility of meaningful argument in simulation games. Drawing on the topic of a worker's strike, famously explored in literature and film in Emile Zola's Germinal and Ken Loach's Bread and Roses, Frasca describes a hypothetical real-time strategy game called Strikeman. What Strikeman offers that is unique to the videogame form is a story composed not only of the author's singular vision, but also of the activity of the player, the effect of random and pseudo-random events, and the specific limits and probabilities encoded into the simulation by the author. The form of the story would constantly change, but because simulations are inherently iterative, with the internal logic of the world becoming apparent to the player only through repeated play, patterns would emerge over time. In these patterns, Frasca argues, is the author's thesis: a viewpoint argued about the events being simulated. Behind the viewpoint in question are the author's implicit beliefs about the subject at hand, the worldview on which the argument rests.
James Paul Gee argues that videogames' ability to model worldviews, or “cultural models,” allows players to articulate and challenge their own unexamined assumptions about the world. In “Cultural Models: Do You Want to Be the Blue Sonic or the Dark Sonic?,” Gee examines a variety of war-themed games, from the superheroic Return to Castle Wolfenstein to the darkly realistic Operation Flashpoint to the explicitly political Under Ash. Under Ash, an action game in which the player takes on the role of a Palestinian fighting against Israeli soldiers and settlers, hints at an unrealized potential of the videogame medium: the ability to argue for the validity of a moral viewpoint.
A vital distinction must be made between morals and ethics. Many dictionaries consider them to be synonymous, but in common usage, at least in American English, the two words can have a variety of subtly different meanings. My definitions are provisional, and while they bear some similarities to existing popular definitions, they are specifically tailored to be applied to the interpretation of videogames. I am not suggesting that “real-world” morals and ethics function the way I describe here, but only that they do so in the context of the videogame medium.
I define ethics as a discourse concerning what is correct and what is incorrect. What is ethical is dependent on a specific activity, determined entirely by an explicit, constructed system of rules, and cannot be questioned by the participants. I define morals as a discourse concerning what is right and what is wrong. Morality, unlike ethics, is not tied to a specific activity, but can be applied over multiple activities, and possibly all experience. Moral rules enjoy considerably more variance than ethical rules: because they are wider in scope, they are more nuanced, and subject to interpretation.
Ethical frameworks, while they might attempt to model moral behavior—as in the examples of ethical codes for doctors or lawyers—need not have any connection to morality at all. In chess, that players should try to capture their opponents' pieces is an ethical rule, not a moral one. It has no relevance to the world outside chess. This rule is also not subject to interpretation or argument. It is simply, factually, true. A player who makes no effort to capture the opponent's pieces is not playing chess. The same cannot be said of moral rules like “love your neighbor as yourself,” Jesus' formulation of the “golden rule,” nor can it be said of “act only in accordance with that maxim through which you can at the same time will that it become a universal law,” Kant's categorical imperative. These rules concern the very act of being human, but one does not cease to be human if he or she rejects or violates them. They are much less specific than the rule concerning the capturing of pieces in chess, and open to many more interpretations. No, these definitions are not, generally, what English-speaking people mean when they say "moral" or "ethical," though they are built in part from conversational usage. I'm told Tracy Flick had some interesting thoughts on the correct distinction. That said, a lot of people seem to disagree with my terminology here. Without arguing that further, I'll just add that I'm using these terms strictly in the context of videogames, and make no claim here about ethics or morals proper.
Morals and ethics exist independently of each other, and while they must each be internally consistent, it is possible for the two to explicitly contradict one another. Law is an ethical system that is constantly revised to prevent such conflicts. Torture, for example, is illegal under international law. Assuming one accepts the existence of international law, the legality of torture is not open to debate. The morality of torture, however, is fundamentally unconnected to its legality. Torture is not less moral now than it was before the Geneva Convention. Conversely, it would not become more moral if the U.N. were to repudiate the Geneva Convention tomorrow.
Any game that has a “win condition” has an ethical framework. This applies to all games, not just videogames. First and foremost, these games are possessed of an overriding ethical imperative: win. If the game has a win condition, a player who does not try to win is not playing the game. As Johan Huizinga notes in Homo Ludens, a player who does not try to win faces greater censure from society than a player who cheats in order to win. One interpretation of Huizinga's claim is that a player who cheats breaks only those rules concerning the means of play, whereas the player who throws the game violates the goals of play. The goal constitutes what players must do, while the rules offer only clarification on how the goal is to be accomplished—what actions are allowed, and what actions are not. A strategy or technique that helps a player win, while not explicitly violating any of the rules, is always ethical in terms of the game in question. The ethical framework comprises both goal and means, and although the former is more fundamental to the game than the latter, they are both necessary for a game to function. With an established goal, the game's rules, which determine how the game can be played, give rise to the ethics, which determine how it should be played.
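For what it's worth, the claim can be formalized in a few lines: an ethical framework is nothing more than a set of rules plus a goal, and anything permitted that serves the goal is "ethical" in this internal sense. A toy version, with chess standing in and the predicates obviously simplified:

```python
from dataclasses import dataclass
from typing import Callable

# A toy formalization: an ethical framework is just rules plus a goal.
@dataclass
class EthicalFramework:
    allowed: Callable[[str], bool]    # the rules: is this action permitted?
    helps_win: Callable[[str], bool]  # the goal: does this action serve victory?

    def ethical(self, action: str) -> bool:
        # morality is never consulted; permitted and goal-serving is enough
        return self.allowed(action) and self.helps_win(action)

chess = EthicalFramework(
    allowed=lambda a: a != "move rook diagonally",
    helps_win=lambda a: a in ("capture a piece", "develop a piece"),
)
print(chess.ethical("capture a piece"))          # True
print(chess.ethical("move rook diagonally"))     # False
```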
I use the term “ethical” to denote imperatives that are dependent on the accepting of a role, as in the specific ethics of a given profession, and also in terms of play in general—playing a videogame ethically could be seen as the player's agreement to play the role allotted to her by the designers. Some degree of freedom is present, of course; were such freedom absent, it would not be play. However, just as an actor may be allowed to improvise, but must ultimately play his role to the author's conclusion, the player must play “in character” to play the game. If the player does not accept this role, she is not playing the game, but rather playing a game with a game. This activity of “metaplay” (not to be confused with the paratextual "metagame" of fan cultures), in which the player designates goals unrelated or contrary to the game's internal ethics, has a wide variety of forms. Metaplay, at least in single-player games (where there are no social expectations of ethical play), is not “cheating” in the sense that the word is used in everyday speech. It simply means that the player in question is not, strictly speaking, playing the game.
In addition to the ethical frameworks inherent in any games, videogames can potentially add an unprecedented level of narrativity. This narrativity is achieved by mapping recognizable symbols onto the rule system. This mapping process allows for the suspension of disbelief necessary to involve the player emotionally in the gameworld.
The interaction of these symbols gives videogames the potential for rich narratives. However, if the narrative is not sufficiently integrated with the rule system, it will appear arbitrary, and fundamentally disconnected from the experience of play. This disconnect between narrative and rule systems is one of the central problems for the potential of videogames as a storytelling medium, forcing a distinction between authorial narrative (the story written by the designers) and emergent narrative (the story enacted by the players). However, even in the most non-linear games with the greatest potential for emergent narrative, the rule system and choice of symbols are selected by the designers, and as such the players' freedom of interpretation is inherently limited. In videogames, the author might be dead, as was famously suggested by Roland Barthes, but she is still the author, and she must not be confused with the reader. To make the transition from ethical imperatives to moral arguments, the designers must fully embrace authorial status.
Moral arguments can easily be attributed to texts in traditional narrative forms such as literature and film, but in videogames, a narrative thesis unconnected to the game rules creates a disjointed experience. Without a connection to the ethics, the gameplay and the narrative will operate independently of one another, as is often the case in games that rely extensively on “cut-scenes,” which are essentially short film sequences that interrupt active gameplay. Moral imperatives can exist in a game only when the ethics can be interpreted and applied to the “real” world in which the player resides, and this can only be achieved by connecting internal ethics to the external world through narrative. Most, if not all, of the game rules must be connected to recognizable symbols, and those symbols must have referents in reality.
Rules and a win condition are all that is necessary for an ethical framework, because ethics point inward to a specific activity. Conversely, because morality must gesture outward to the world at large, it cannot consist only of abstract symbols. For a game to have a moral argument, it must have an ethical framework, a narrative that can be connected in some way to what we speciously refer to as “real life,” and a careful integration of the two. Specifically, the moral argument of the narrative must be connected to the win condition. It might be necessary, in making distinctions between what is right and what is expedient, to develop some new ideas as to what constitutes “winning.” This will require a somewhat nuanced perspective on the avatar.
The avatar, in most games, is more than an extension of the player into the gameworld. Rather, the avatar is simultaneously an extension of the player and a different character that is not the player. I refer to this different character as the protagonist. Since the protagonist has only diegetic information, his or her motivation for interaction in the world must be entirely diegetic. The player, who has access to the game's non-diegetic information, will have additional goals, often involving tasks with no narrative meaning, such as scoring points or unlocking content. Narratives, even videogame narratives, have a logic of their own, and even when the narrative fails to emotionally invest the player in the story, it can usually be assumed that the protagonist is quite involved. The narrative, even when viewed by players as epiphenomenal, is the entirety of the protagonist's reality.
In the interest of symmetry, this post concludes here.
In “Simulation versus Narrative,” Gonzalo Frasca posits the possibility of meaningful argument in simulation games. Drawing on the topic of a worker's strike, famously explored in literature and film in Emile Zola's Germinal and Ken Loach's Bread and Roses, Frasca describes a hypothetical real-time strategy game called Strikeman. What Strikeman offers that is unique to the videogame form is a story comprised of not only the author's singular vision, but also the activity of the player, the effect of random and pseudo-random events, and the specific limits and probabilities encoded into the simulation by the author. The form of the story would constantly change, but because simulations are inherently iterative, the internal logic of the world becoming apparent to the player only through repeated play, patterns would emerge over time. In these patterns, Frasca argues, is the author's thesis: a viewpoint being argued about the events being simulated. Behind the viewpoint in question are the author's implicit beliefs about the subject at hand, the worldview on which the argument rests.
James Paul Gee argues that videogames' ability to model worldviews, or “cultural models,” allows players to articulate and challenge their own unexamined assumptions about the world. In “Cultural Models: Do You Want to Be the Blue Sonic or the Dark Sonic?,” Gee examines a variety of war-themed games, from the superheroic Return to Castle Wolfenstein to the darkly realistic Operation Flashpoint to the explicitly political Under Ash. Under Ash, an action game in which the player takes on the role of a Palestinian fighting against Israeli soldiers and settlers, hints at an unrealized potential of the videogame medium: the ability to argue for the validity of a moral viewpoint.
A vital distinction must be made between morals and ethics. Many dictionaries consider them to be synonymous, but in common usage, at least in American English, the two words can have a variety of subtly different meanings. My definitions are provisional, and while they bear some similarities to existing popular definitions, they are specifically tailored to be applied to the interpretation of videogames. I am not suggesting that “real-world” morals and ethics function the way I describe here, but only that they do so in the context of the videogame medium.
I define ethics as a discourse concerning what is correct and what is incorrect. What is ethical is dependent on a specific activity, determined entirely by an explicit, constructed system of rules, and cannot be questioned by the participants. I define morals as a discourse concerning what is right and what is wrong. Morality, unlike ethics, is not tied to a specific activity, but can be applied over multiple activities, and possibly all experience. Moral rules enjoy considerably more variance than ethical rules: because they are wider in scope, they are more nuanced, and subject to interpretation.
Ethical frameworks, while they might attempt to model moral behavior—as in the examples of ethical codes for doctors or lawyers—need not have any connection to morality at all. In chess, that players should try to capture their opponents' pieces is an ethical rule, not a moral one. It has no relevance to the world outside chess. This rule is also not subject to interpretation or argument. It is simply, factually, true. A player that makes no effort to capture the opponent's pieces is not playing chess. The same cannot be said of moral rules like “love your neighbor as yourself,” Jesus' formulation of the “golden rule,” nor can it be said of “act only in accordance with that maxim through which you can at the same time will that it become a universal law,” Kant's categorical imperative. These rules concern the very act of being human, but one does not cease to be human if he or she rejects or violates them. They are much less specific than the rule concerning the capturing of pieces in chess, and open to many more interpretations. No, these definitions are generally not English-speaking people mean when they say "moral" or "ethical," though they are built in part from conversational usage. I'm told Tracy Flick had some interesting thoughts on the correct distinction. That said, a lot of people seem to disagree with my terminology here. Without arguing that further, I'll just add that I'm using these terms in terms of videogames, and here make no claim about ethics or morals proper.
Morals and ethics exist independently of each other, and while they must each be internally consistent, it is possible for the two to explicitly contradict one another. Law is an ethical system that is constantly revised to prevent such conflicts. Torture, for example, is illegal under international law. Assuming one accepts the existence of international law, the legality of torture is not open to debate. The morality of torture, however, is fundamentally unconnected to its legality. Torture is not less moral now than it was before the Geneva Convention. Conversely, it would not become more moral if the U.N. were to repudiate the Geneva Convention tomorrow.
Any game that has a “win condition” has an ethical framework. This applies to all games, not just videogames. First and foremost, these games are possessed of an overriding ethical imperative: win. If the game has a win condition, a player who does not try to win is not playing the game. As Johan Huizinga notes in Homo Ludens, a player who does not try to win faces greater censure from society than a player who cheats in order to win. One interpretation of Huizinga's claim is that a player who cheats breaks only those rules concerning the means of play, whereas the player who throws the game violates the goals of play. The goal constitutes what players must do, while the rules offer only clarification on how the goal is to be accomplished—what actions are allowed, and what actions are not. A strategy or technique that helps a player win, while not explicitly violating any of the rules, is always ethical in terms of the game in question. The ethical framework comprises both goal and means, and although the former is more fundamental to the game than the latter, they are both necessary for a game to function. With an established goal, the game's rules, which determine how the game can be played, give rise to the ethics, which determine how it should be played.
I use the term “ethical” to denote imperatives that are dependent on the acceptance of a role, as in the specific ethics of a given profession, and also in terms of play in general--playing a videogame ethically could be seen as the player's agreement to play the role allotted to her by the designers. Some degree of freedom is present, of course; were such freedom absent, it would not be play. However, just as an actor may be allowed to improvise, but must ultimately play his role to the author's conclusion, the player must play “in character” to play the game. If the player does not accept this role, she is not playing the game, but rather playing a game with a game. This activity of “metaplay” (not to be confused with the paratextual "metagame" of fan cultures), in which the player designates goals unrelated or contrary to the game's internal ethics, has a wide variety of forms. Metaplay, at least in single-player games (where there are no social expectations of ethical play), is not “cheating” in the sense that the word is used in everyday speech. It simply means that the player in question is not, strictly speaking, playing the game.
In addition to the ethical frameworks inherent in any game, videogames can potentially add an unprecedented level of narrativity. This narrativity is achieved by mapping recognizable symbols onto the rule system. This mapping process allows for the suspension of disbelief necessary to involve the player emotionally in the gameworld.
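A toy illustration of what I mean by "mapping symbols onto the rule system" (again, my own sketch, not anyone's actual implementation): the rule below is identical in both cases; only the labels change, and it is the labels that invite the player's emotional investment.

```python
# Illustrative sketch: the same rule wearing two different symbolic skins.
def resolve(attacker_power: int, defender_total: int) -> int:
    # The rule itself is pure arithmetic, indifferent to what it "means."
    return max(defender_total - attacker_power, 0)

# Skin 1: bare abstraction -- counters and tokens.
token_value = resolve(attacker_power=3, defender_total=10)

# Skin 2: recognizable symbols -- a wounded soldier losing hit points.
soldier_hp = resolve(attacker_power=3, defender_total=10)

assert token_value == soldier_hp == 7  # identical rules; only the narrative mapping differs
```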
The interaction of these symbols gives videogames the potential for rich narratives. However, if the narrative is not sufficiently integrated with the rule system, it will appear arbitrary, and fundamentally disconnected from the experience of play. This disconnect between narrative and rule systems is one of the central problems for the potential of videogames as a storytelling medium, forcing a distinction between authorial narrative (the story written by the designers) and emergent narrative (the story enacted by the players). However, even in the most non-linear games with the greatest potential for emergent narrative, the rule system and choice of symbols are selected by the designers, and as such the players' freedom of interpretation is inherently limited. In videogames, the author might be dead, as was famously suggested by Roland Barthes, but she is still the author, and she must not be confused with the reader. To make the transition from ethical imperatives to moral arguments, the designers must fully embrace authorial status.
Moral arguments can easily be attributed to texts in traditional narrative forms such as literature and film, but in videogames, a narrative thesis unconnected to the game rules creates a disjointed experience. Without a connection to the ethics, the gameplay and the narrative will operate independently of one another, as is often the case in games that rely extensively on “cut-scenes,” which are essentially short film sequences that interrupt active gameplay. Moral imperatives can exist in a game only when the ethics can be interpreted and applied to the “real” world in which the player resides, and this can only be achieved by connecting internal ethics to the external world through narrative. Most, if not all, of the game rules must be connected to recognizable symbols, and those symbols must have referents in reality.
Rules and a win condition are all that is necessary for an ethical framework, because ethics point inward to a specific activity. Conversely, because morality must gesture outward to the world at large, it cannot consist only of abstract symbols. For a game to have a moral argument, it must have an ethical framework, a narrative that can be connected in some way to what we speciously refer to as “real life,” and a careful integration of the two. Specifically, the moral argument of the narrative must be connected to the win condition. It might be necessary, in making distinctions between what is right and what is expedient, to develop some new ideas as to what constitutes “winning.” This will require a somewhat nuanced perspective on the avatar.
The avatar, in most games, is more than an extension of the player into the gameworld. Rather, the avatar is simultaneously an extension of the player and a different character that is not the player. I refer to this different character as the protagonist. Since the protagonist has only diegetic information, his or her motivation for interaction in the world must be entirely diegetic. The player, who has access to the game's non-diegetic information, will have additional goals, often involving tasks with no narrative meaning, such as scoring points or unlocking content. Narratives, even videogame narratives, have a logic of their own, and even when the narrative fails to emotionally invest the player in the story, it can usually be assumed that the protagonist is quite involved. The narrative, even when viewed by players as epiphenomenal, is the entirety of the protagonist's reality.
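One way to picture that split (a hypothetical sketch, not a claim about how any engine actually models it): the avatar carries two sets of goals at once, the protagonist's diegetic ones and the player's non-diegetic ones, and "winning" can be defined against either list.

```python
# Illustrative sketch: one avatar, two overlapping agendas.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Avatar:
    # Diegetic goals: everything the protagonist knows and wants inside the story.
    protagonist_goals: List[str] = field(default_factory=list)
    # Non-diegetic goals: the player's tasks with no narrative meaning.
    player_goals: List[str] = field(default_factory=list)


hero = Avatar(
    protagonist_goals=["rescue the hostages", "survive the night"],
    player_goals=["reach 100% completion", "unlock the bonus costume"],
)

# The protagonist's reality is exhausted by the diegetic list; the player's is not.
print(hero.protagonist_goals)
print(hero.player_goals)
```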
In the interest of symmetry, this post concludes here.