Apologies for the long-winded title; it’s actually quite hard to find a subject that gets right to the point. This isn’t about triggering a particular emotion in gamers — not directly, at least. It’s also not about how ‘emotional’ gaming can be — we already know that playing games can be an intense experience, one that can evoke a massive gamut of emotions.
This entry’s about your avatar — your character, the model that represents you — and the emotions that it can, or as the case may be, cannot display.
Emotions have long played a vital role in communication and human interaction. We smile and raise our shoulders a little when we’re happy; we frown and slump when we’re sad — these emotional keys are a form of communication in their own right: body language!
Beyond subtle muscle shifts we also have emotive reactions that we’re less aware of: we blush when we’re embarrassed or caught lying; we raise our voice in anger or petulance. Most important, though, are the muscle groups of the face: the flaring or contraction of the lips and eyes, the furrowing or raising of the brow — each of these actions, or reactions, is ‘programmed in’ genetically and almost impossible to alter. It’s these same minute movements that we’re (often unconsciously) reading in the face of whoever we’re talking to. It’s these tiny twitches in someone else’s face or body language that can trigger our own involuntary responses: that momentary curl of the lip might be all the indication you need to run away quickly.
This ‘hunt for emotion’ as we communicate with other people is so ingrained that online communication has always felt a little… distant. Internet veterans are cautious, aware that without body language their words can easily be misconstrued. Newbies often blunder, forgetting that no one can see the ironic smile on their face. There’s a reason emoticons :-), *asterisks*, CAPSLOCK and _underscores_ exist: to convey emotion! It’s clunky and slow compared to body language or facial expressions but it’s the best that we have.
Why, twenty years after the first text-based world, are we still communicating with such basic tools? Some early games like LegendMUD had ways to inflect mood into your conversation through expansion of the verb sets (‘say adverbs’), but since then… nothing. In graphical virtual worlds a couple of games have tried to incorporate moods (notably Star Wars: Galaxies and EverQuest 2), but these were still primarily low-tech, text-driven implementations: toggles like /angry, /sad and /afraid, or simple parsing of exclamations and questions.
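A mood toggle of the kind described above is trivial to implement, which only underlines how little engineering effort it represents. Here is a minimal sketch (class and method names are illustrative, not from any actual game):

```python
class Avatar:
    """Minimal sketch of a slash-command mood toggle, in the spirit of
    the /angry, /sad, /afraid commands mentioned above."""

    MOODS = {"angry", "sad", "afraid", "happy"}

    def __init__(self):
        self.mood = "neutral"

    def handle_command(self, line: str) -> bool:
        """Return True if the line was a recognised mood toggle."""
        if line.startswith("/"):
            mood = line[1:].strip().lower()
            if mood in self.MOODS:
                # Toggling the same mood twice resets to neutral.
                self.mood = "neutral" if self.mood == mood else mood
                return True
        return False
```

The point is not that this is hard — it clearly isn’t — but that a toggle like this carries none of the involuntary, continuous signal that real body language does.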
Why are we still running around in virtual worlds with emotionless, gormless avatars? In single-player games expressive characters are almost the state of the art, the bleeding edge! ‘More realistic than ever before!’ the developers cry. And what makes those games feel more realistic? Interaction with the game world: physics and realistic NPCs — or, in the case of virtual worlds, other player avatars. You only need to look at the success of LittleBigPlanet: a very simple platformer with oodles of delicious detail, bucketloads of charm and a surprisingly diverse emotion system.
For a market segment that generates almost all of its appeal (and revenue) from the immersive quality of virtual worlds, it’s amazing that there isn’t yet a virtual world with the power to model emotions through facial expressions and body poses. You could even go one step further than the toggle system and parse complex emotions like sadness, apprehension and lust out of chat. Then there’s the character state itself: in battle your avatar would grimace upon being hit; a healer would smile upon saving a party member.
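The chat-parsing idea can be sketched with simple heuristics. A real system would presumably use proper sentiment analysis, but even a keyword-and-punctuation pass — all names and word lists below are hypothetical — shows the shape of it:

```python
import re

# Hypothetical keyword heuristics mapping chat vocabulary to moods.
MOOD_KEYWORDS = {
    "sad": {"sad", "sorry", "miss", "alas"},
    "angry": {"angry", "furious", "hate"},
    "happy": {"happy", "great", "yay", "woot"},
}

def infer_expression(message: str) -> str:
    """Guess an avatar expression from a single chat line."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    for mood, keys in MOOD_KEYWORDS.items():
        if words & keys:
            return mood
    # Fall back on punctuation and capitalisation cues.
    if message.endswith("!") or message.isupper():
        return "excited"
    if message.endswith("?"):
        return "quizzical"
    return "neutral"
```

Feed each chat line through a function like this and blend the result with character state (taking damage, landing a heal) and you already have a far richer signal than a handful of manual toggles.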
Are we simply being held back by World of Warcraft’s ancient graphics engine? Surely it’s time for realistic, immersive emotions in virtual worlds.