Voluntary Work

For those who choose to go into academia, Jane McGonigal’s observation that people actually prefer work to entertainment may not be surprising. After all, we’ve chosen to enter a discipline in which we make our own work. For McGonigal, “entertainment” means specifically passive entertainment, like watching a movie or television show, rather than more active forms (like games). She says, “The research proves what gamers already know: within the limits of our own endurance, we would rather work hard than be entertained. Perhaps that’s why gamers spend less time watching television than anyone else on the planet” (McGonigal 33). In other words, we want to be engaged by something. Television and movies – for the most part – don’t challenge us. They allow us to stare at something while the twists and turns of the plot are displayed to us without requiring any effort on our part. Sometimes they provide enough of a challenge that we can try to “figure them out,” but we wouldn’t have to if we didn’t want to; eventually (maybe at the end of the film or episode, or maybe, as in Lost, at the end of the series), they will answer our questions.

But games don’t do that. Yes, they will provide you with an ending, a resolution, but they won’t get there on their own. You have to put effort into the game in order to progress the narrative – you have to participate in the ludonarrative (the gameplay) in order to make the narrative move along. Sometimes, the ludonarrative is interwoven into the narrative (as in games like Mass Effect), and the narrative itself changes based on your decisions as a player. Sometimes (as in Gears of War) the narrative line is set, but you have to act in order to access the next portion. Either way, you have to do something in order to receive the next piece of information.

And when you choose to play a game – any game, whether videogame, boardgame, or sport – you are deliberately making your life more difficult. On purpose. And you have done so because you want your life to be more difficult: “Playing a game is the voluntary attempt to overcome unnecessary obstacles” (Suits 38). McGonigal suggests that we want games when our lives aren’t satisfying enough, when we don’t have enough obstacles. I would argue that we are more satisfied the more obstacles we can overcome (or at least have hope of overcoming). After all, some of us have careers that are nothing but deeply satisfying, self-created challenges, and we still choose to play games – perhaps because with games, there are no stakes. If I fail at Braid, nothing happens. No punishment is forthcoming. I can try and try and fail and fail and maybe I will succeed, but I can do so on my own time, my own way, and without fear of judgment.

So perhaps playing a game is the voluntary attempt to overcome unnecessary obstacles without fear. And being able to overcome obstacles without fear lets us tackle the necessary obstacles we encounter, even when we’re afraid.

Fiero!

One of the reasons why we play games is that they make us feel good – especially when we win. Athletics – sports, specifically – are the epitome of this feeling, as they integrate physical exertion and adrenaline with the positive sensations of winning (or even of playing well, whether you win or not, although pretty much everyone would agree that winning is better).

However, one of the other aspects of this joy is the fact that we want to win a good game; we don’t want to completely destroy our opponents on the sporting field, online, or at the game table. We want to be challenged, to drag out the battle, to feel the positive stress that accompanies the uncertainty of victory or defeat. Jane McGonigal refers to this sensation as “fiero.”

Fiero is what we feel after we triumph over adversity. You know it when you feel it – and when you see it. That’s because we almost all express fiero in exactly the same way: we throw our arms over our head and yell.

The fact that virtually all humans physically express fiero in the same way is a sure sign that it’s related to some of our most primal emotions…. Fiero, according to researchers at the Center for Interdisciplinary Brain Sciences Research at Stanford, is the emotion that first created a desire to leave the cave and conquer the world. It’s a craving for challenges that we can overcome, battles we can win, and dangers we can vanquish. (McGonigal 33)

McGonigal’s anecdote about the universal tendency to “throw our arms over our head and yell” conjures up, no doubt, any number of instances in professional sports, at family game nights, and in other situations in which we’ve seen or done this ourselves. A universal feeling expressed in a universal way. But that’s beside the point (although it does have many interesting anthropological and psychological implications).

Outside of gaming, we experience fiero in any context that involves “winning” – political campaigns, social movements, major projects, and even the successful production of a product. Fiero is a part of our everyday lives, in varying degrees. But it’s important to remember that fiero is a part of what motivates us to become followers, as well as what motivates leaders in the first place. We want to be a part of the winning team, whether as a participant or a fan.

Work or Play

The design of the work flow is key here: the game constantly challenging you to try something just a little bit more difficult than what you’ve just accomplished. These microincreases in challenge are just big enough to keep sparking your interest and motivation – but never big enough to create anxiety or an ability gap. (McGonigal 57)

The fundamental issue here, I believe, lies in the fact that McGonigal is proposing to use game mechanics as a means of improving work life. While this approach has a lot of potential, it also ignores one significant component of (at least some) work: there may not be any of the incremental steps to which McGonigal refers. In some industries, these steps might well exist, but in other fields and occupations, the work being done does not have steps – the employee is either making things up as s/he goes along (for any one of a number of reasons) or is endlessly repeating the same task.

Certainly, there are ways of implementing this incrementalism in many workplaces, and perhaps that is the thinking behind “sales competitions,” but such competitions often breed dissatisfaction among employees. There are other cases in which such increments must be self-imposed, and self-imposed goals are never quite as satisfying to check off as those given to us (especially if we are willing to accept them). We can deliberately manipulate our own checklists to make them easier to accomplish, the result of which is usually a latent dissatisfaction with our progress, because we know we could have (and probably should have) done more or better.

However, incremental difficulty progression is often hard to enact when one’s tasks do not get more difficult, just more onerous. Certainly, there is a sense of accomplishment in the aggregation of completed tasks, but that sort of accomplishment is not the same as McGonigal’s “satisfying work.”
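
To make the mechanic McGonigal describes a little more concrete, here is a minimal sketch of an adaptive-difficulty loop (my own illustration, not her implementation; all names and numbers are invented), in which each success produces a microincrease in challenge and each failure eases the challenge back toward the player’s ability:

    import random

    def play_round(ability, difficulty):
        """Succeed with higher probability when ability exceeds difficulty."""
        return random.random() < 1.0 / (1.0 + pow(2, difficulty - ability))

    ability, difficulty = 5.0, 4.5
    for round_number in range(20):
        if play_round(ability, difficulty):
            ability += 0.15                                    # practice slowly raises ability
            difficulty = min(difficulty + 0.2, ability + 0.5)  # microincrease, capped just above ability
        else:
            difficulty = max(difficulty - 0.1, ability - 0.5)  # ease off, but stay near ability
        print(round_number, round(difficulty - ability, 2))    # the challenge-ability gap stays small

The point of the cap is that the challenge always sits just ahead of (or just behind) ability, which is the condition the quotation above describes and which, as I argue here, many jobs simply cannot reproduce.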

It seems to me that this model would serve education better than it would the workplace (for many workers, at least). After all, the purpose of the educational system is to increase student aptitude, and therefore the incremental increase in difficulty not only makes sense, but is already what education is designed to do (although it admittedly often fails to accomplish this). This is not to say that education should be conducted solely through games (although I do think games have a place in the educational system), but, rather, that game systems can be made to work for the process of education.

Utilitarianism

The term “utility” keeps coming up in my readings and research, in both leadership studies and gaming. David Hume mentions “utility” in “Of the First Principles of Government.” Garry Wills mentions it in Certain Trumpets: The Nature of Leadership. And it comes up again in Morton Davis’ Game Theory as the function that can override a player’s ability to behave rationally – that is, to play to his or her best possible advantage.

When we think about utility, we don’t usually think about emotions or ideologies, but that is precisely what Davis aligns with “utility.” The player’s utility function is that which overrides rational gameplay behavior; a utility function is a belief or ideology that is more important to the player than winning. For some, this will be honesty. For others, maintaining a persona or perpetuating a belief system. For Robin Hood, it is “robbing the rich to feed the poor.”
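
As a rough illustration of how such a utility might be modeled (my own sketch, not Davis’s notation; the action names, weights, and values are invented), the player maximizes not progress toward winning alone but a combination of winning and an ideological commitment:

    def utility(action, ideology_weight=3.0):
        """Total utility = value toward winning + value of honoring an ideology."""
        ideology_value = 1.0 if action["honors_ideology"] else 0.0
        return action["win_value"] + ideology_weight * ideology_value

    # Robin Hood's choice: keeping the gold is the better "winning" move,
    # but robbing the rich to feed the poor is what he actually values.
    actions = [
        {"name": "keep the gold",             "win_value": 3.0, "honors_ideology": False},
        {"name": "give the gold to the poor", "win_value": 1.0, "honors_ideology": True},
    ]

    best = max(actions, key=utility)
    print(best["name"])  # "give the gold to the poor": the ideology outweighs winning

The weight on the ideology term is what makes honesty, a persona, or “robbing the rich” matter more to the player than the winning move.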

Some games present us with “false” utilities: rules that we are not supposed to break and for which we are punished if we do, whether with “death,” a restart, or “negative” experience points (I’m thinking of BioWare’s Paragon/Renegade or Friend/Rival systems). In No One Lives Forever, the player dies if they shoot an innocent monkey – an odd utility, admittedly, but one emblematic of a desire to reinforce the larger utility of not killing civilians/innocents.
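
Here is a sketch of how such an enforced utility might look in code (hypothetical names and values, loosely in the spirit of a mission-failure rule and a Paragon/Renegade-style meter, not the actual implementations of those games):

    class MoralityState:
        """Tracks a hypothetical alignment meter and a hard mission-failure rule."""

        def __init__(self):
            self.paragon = 0
            self.renegade = 0
            self.mission_failed = False

        def register_kill(self, target):
            if target == "innocent":
                # Hard rule: killing an innocent ends the mission outright.
                self.mission_failed = True
            elif target == "surrendering_enemy":
                # Soft rule: allowed, but recorded as "renegade" behavior.
                self.renegade += 5
            else:
                # Fighting only legitimate targets nudges the other meter.
                self.paragon += 1

    state = MoralityState()
    state.register_kill("surrendering_enemy")
    state.register_kill("innocent")
    print(state.renegade, state.mission_failed)  # 5 True

The hard rule enforces the utility absolutely; the meters simply make a pattern of choices visible over time.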

The thing about utilities is that they force not only single behaviors within a game system, but a whole pattern of behaviors. Players of games with “good/evil” dichotomies will often have a “good” character (with a “moral” utility) and a “bad” one (with an “immoral” utility). What is most interesting about this is the fact that gameplayers willingly change their utilities in games, even if those utilities do not reflect their own “real” utilities. And that is one of the things I appreciate most about games – they ask us to reevaluate the reasoning behind our utilities by presenting us with the option to temporarily change them and then showing us the kinds of consequences that might result from having a different utility. Are they simplistic utilities? Of course. But, then again, most temporary fantasies are.

Digital Exodus

Having recently finished Jane McGonigal’s (2011) Reality Is Broken, I’ve begun noticing a new trend in the study of “virtual worlds” popping up all over the place. This past fall, I was privileged to hear Edward Castronova’s plenary talk at a conference on ethics and video games, and – more recently – I have heard from several students that they are interested in studying virtual spaces. While games and virtual spaces do not always coincide (there are games that don’t really take place in virtual space, and there are certainly virtual spaces that do not require gameplay), it seems that the phenomena of gaming and virtual space – the “Exodus to the Virtual World,” in Castronova’s terms – are being linked by scholars to a problem in “reality.”

McGonigal suggests that

The fact that so many people of all ages, all over the world, are choosing to spend so much time in game worlds is a sign of something important, a truth that we urgently need to recognize.

     The truth is this: in today’s society, computer and video games are fulfilling genuine human needs that the real world is currently unable to satisfy. Games are providing rewards that reality is not. They are teaching and inspiring and engaging us in ways that reality is not. They are bringing us together in ways that reality is not.

     And unless something dramatic happens to reverse the resulting exodus, we’re fast on our way to becoming a society in which a substantial portion of our population devotes its greatest efforts to playing games, creates its best memories in game environments, and experiences its biggest successes in game worlds. (4)

In other words, people are playing more games and joining virtual communities because the world in which they live – the “real” world – isn’t offering them the kind of lives they want to live. As McGonigal says, “Reality, compared to games, is broken” (3).

In Leadership Studies, we have long heard about a “crisis of leadership,” an academic, social, and political call to action demanding not only new leadership, but a new understanding of what that leadership might or should be. It seems that McGonigal is getting at something similar here. What we need is a “new reality” to replace the one that’s broken.

McGonigal raises several interesting points about introducing gaming into the real world, including SuperBetter (a game to help people deal with or recover from chronic injury or illness), FreeRice (a really spiffy game that increases your vocabulary while feeding the hungry), and FoldIt (an even spiffier game in which humans demonstrate the ability to solve complex protein-folding problems that stump even supercomputers). These games are great examples of the ways in which gaming can produce legitimate and lasting benefits to “reality.” There’s some great stuff there that I’ll come back to in the future. However, one of the things she doesn’t address is the value added by pure gaming itself.

She does point out the things we gain from games that are missing in our “real” lives – reward systems, clear goals, and the important factor of choice – but what I want to get at here is the fact that games are autotelic and imaginative. We choose to play games because they aren’t reality – they are fantasy. Yes, games can make our “real” lives better, but they are still interacting with those real lives. Sometimes, we want fantasy.

I’m not talking about escapism. I’m talking about the use of fantasy to explore things that do impact our realities, and this is where Leadership Studies comes in (yes, I’m getting there). Games – especially more recent plot-driven shooters – put the player in a position of leadership. Gears of War, Call of Duty, Mass Effect, and many others ask us as players to adopt the role of a leader – to make decisions and take actions as the leader of a group. We have to consider the consequences of our ergodic decisions (those that might influence the outcome of the game or even the scene), which – in some cases – could mean the difference between life and death for an NPC (non-player character). And in good games, we care if the NPCs die. We think carefully about whether an NPC will approve of our actions, whether they will help us accomplish our goals, and whether our choices will result in “good” or “bad” outcomes. In essence, games give us a fantasy trial-run for our leadership skills.

But even if we will never find ourselves in situations remotely comparable to those in games (such as in an alternate medieval-style universe), we will be in, or witness, situations for which these games prepare us. Perhaps the preparation isn’t perfect. Perhaps games can never match the complexity of reality, but they do require us to think about reality in a way that benefits not only us as players but also those around us with whom we interact. Is this why we game? No. But it is part of why games were created, and part of what makes them important to us as a society.

Moral Currency

Recently, I’ve been looking a lot more at game theory, leadership, and how the two relate in the context of videogame criticism, as well as “real life.” In terms of ethics, the idea of “moral currency” seems particularly relevant to a discussion of a non-zero-sum n-person game system; in essence, it comes into play when what you’re “playing” for isn’t a monetary reward but a morally coded one. Ian Bogost, in Persuasive Games (111), suggests that we already consider ourselves members of a moral game with debits and credits:

In Lakoff’s view, we conceptualize well-being as wealth. Changes to our well-being are thus akin to gains and losses. Lakoff characterizes this metaphorical understanding of morality in terms of financial transactions. Individuals and societies alike have “moral debts” and “moral credits” that must sum to zero. Moral accounting implies the need for reciprocation and retribution; good actions must be rewarded and harmful ones must be punished. That punishment might include restitution, which can in turn take many forms, from contrition to prison. When we speak of criminals who have completed their sentences, we often say that they have “paid their debt to society.” In a moral system of this type, “the moral books must be balanced.”

In terms of game theory, this idea bears further scrutiny. In an ostensibly moral society, we think of ourselves as being fundamentally moral, or “in the moral black,” which – I would argue – affords us the ability to spend our “moral credits” on minor infractions: speeding, taking a pen from work, little white lies, and so on. Some things, however – murder, larceny, fraud – are simply too “expensive”; their cost is prohibitive. Except that in a society that values moral currency the same way it values monetary currency, people with more of one somehow seem justified in spending more of the other (our celebrities are able to spend moral currency the same way they do monetary currency… and often do so at the same time).

While Bogost, following Lakoff, suggests that our moral debits and credits must sum to zero, that does not seem to be the case. Rather, we expect a certain level of moral positivity – it isn’t enough to be amoral in a society that fundamentally reflects a certain amount of Christian moral ethos. We need to remain more positive than negative in order to maintain an acceptable moral appearance, just as we need to remain financially solvent: in moral as in monetary wealth, our desire is always to stay in the positive. It is also the case that – in theory – we need not exchange moral currency with anyone in order to gain or lose it: the game is non-zero-sum. And while a universal rise in moral currency might “reset” our perception of morality (just as a universal rise in monetary wealth “resets” the poverty line), the goal is ultimately a continued universal increase rather than a zero-sum balance of positives and negatives. The real question is whether that is possible in either sphere.
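
To make the contrast explicit, here is a small sketch of the two accounting models (my own illustration, with invented names and amounts): a strictly zero-sum transfer, in which every credit is someone else’s debit, alongside non-zero-sum acts that raise or lower the societal total without any offsetting entry:

    from collections import defaultdict

    ledger = defaultdict(float)   # each person's running moral balance

    def zero_sum_transfer(debtor, creditor, amount):
        """Lakoff-style reciprocation: one account's debit is another's credit,
        so the total across society never changes."""
        ledger[debtor] -= amount
        ledger[creditor] += amount

    def non_zero_sum_act(person, amount):
        """A good (or bad) act that changes one balance with no offsetting
        entry anywhere else, so the societal total can rise or fall."""
        ledger[person] += amount

    zero_sum_transfer("criminal", "society", 10)   # "paying a debt to society"
    non_zero_sum_act("volunteer", 5)               # credit gained without anyone losing it
    non_zero_sum_act("celebrity", -3)              # a minor infraction "spent" from surplus
    print(sum(ledger.values()))                    # 2.0, not 0: the moral books need not balance

If the moral game really worked like the first function, the books would always balance, as Lakoff’s metaphor requires; the second function is the non-zero-sum reading I am arguing for here, in which a universal increase is at least conceivable.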