Communicating the Botox Problem of Virtual Reality

Virtual Reality has a problem which no one can see, and that’s a little weird for a technology built around sight.

Over the last year I’ve spent almost 50 days in Virtual Reality (VR). Immersing myself in online lectures, escaping the loneliness of lockdown and occasionally working with clients whose social anxiety is so extreme they dare not turn on their webcam to talk via Google Meet, Zoom or Skype has been a joy. It’s even allowed me to take to virtual TED Talk stages to practise my public speaking while everyone else was cooped up at home. I’m now an advocate of the technology, and its therapeutic benefits for treating mental illness and improving education have already been shown in numerous case studies. However, as a communication skills coach, I have some concerns about how virtual reality and the mechanised ‘Metaverse’ could damage the way we communicate, and it’s not something I’ve noticed the big tech companies talking about: the lack of micro-expressions.

Virtual Reality?

If you know what virtual reality is, you can skip this section. If you’ve not heard of VR, it’s essentially a pair of special mobile-phone-like screens held a few centimeters before your eyes which, when combined, display a 3D image.

Thanks to clever engineering there’s little worry of eye damage, and the headset can trick your brain into thinking you’re in the environment shown before you. Couple this with a set of controllers which let you move around in that environment and you can have a particularly convincing experience of escapism, education or excitement. It’s not perfect, but for gaming, recreation and practically anything else you can imagine it’s a fantastic resource.

Image Source: The Verge - https://www.theverge.com/2019/5/28/18639084/valve-index-steamvr-headset-knuckles-controllers-preview

Virtual reality is regularly used in enterprise too. Car manufacturers are using it to examine prototypes before production, surgeons are using it to see inside virtual cadavers, and standing on a mock TED Talk stage in VR for the first, second and third time can be a lot scarier than you think, especially when combined with the uncanny-valley effect of a 2D audience:

Image Source: Virtual Speech

Subconscious subtle signals

Micro-expressions, on the other hand, are best imagined as tiny, involuntary flashes of emotion. They are the minute bodily and facial movements which occur whenever our subconscious responds to a stimulus. Poker players train for years to hide, and to read, these subtle ‘tells’.

Micro-expressions are so powerful that if you bump into someone you know, both of you will realise within a split second whether you are happy to see each other, because your micro-expressions respond quicker than your conscious mind can manage them. Restaurant staff have long known that a mirror behind the bar tends to subdue the perturbed customer who can see their own angry face, call center staff know customers are much more aggressive over the telephone than they would ever be in person, and those who have had a bad haircut often find it fiendishly challenging to convince the barber they did a good job. All this over a few near-imperceptible movements.

Despite being small, micro-expressions shouldn’t be underestimated. Both nature and nurture have taught our subconscious how to interpret the meaning behind these subtle signals. Anyone who has ever been in a tense situation will probably talk about how they “saw he had a look in his eye”, how something “just didn’t seem right” during a negotiation or how a “gut feeling” saved them from danger. In these instances, it was often the micro-expressions observed by the subconscious which gave away the other party’s true intentions. Others describe micro-expressions when they say a lover “had a twinkle in their eye”, that “looking at them made you feel at ease” or that “their smile lit up the whole room”. In each example, something almost undefinable caused either emotional upset or encouragement. You might not have been able to see it, but your subconscious could.

But without micro-expressions, communicating with another person can seem a little unsettling. It’s why some people are scared of talking over the phone. It’s why some animations fall into uncanny-valley territory. It’s also why people have struggled with Zoom fatigue over lockdown: the overemphasis on appearing ‘presentable’ and stoic, and the constant belief that we must maintain eye contact with the speaker or webcam, means we are consciously suppressing those natural expressions. Imagine then what problems the lack of micro-expressions could cause in VR.

Virtual Reality is a bit like Botox

I’ve never had Botox. I quite like my face as it is. But Botox paralyses facial muscles. That’s not a problem, it’s what it’s supposed to do. The paralysis removes wrinkles and supposedly restores a little bit of youth to those who use it. But as written by Jessie Cole in The Guardian:

“…an often unconsidered side effect of Botox is the intended paralysis of facial muscles means the user loses those brief, involuntary facial expressions, or micro-expressions which reveal our unconscious feelings of anger, happiness, disgust, embarrassment or pride. In a sense, communicating with someone who’s had Botox is like communicating with a static image – much of the body language involved is silenced. Considering that body language, mostly consisting of facial expressions, makes up at least half of any message being communicated, this is a significant loss.

…but this facial paralysis also inhibits the ability of the Botoxed to mimic the facial expressions of others, which is critical in the formation of empathy. Facial micro-mimicry is the major way we understand others’ emotions. If you are wincing in pain I immediately do a micro-wince, which sends a message to my brain about what you are experiencing. By experiencing it myself I understand what you are going through. This suggests that not only do I find my Botoxed friends hard to read, but they are also hindered in their capacity to read me. An unfortunate feedback cycle. The possible implications of this are frightening.”

The paralysis of micro-expressions can be devastating. Studies have found that the children of parents injected with Botox struggle to develop empathy in their early life, because their parents’ faces can’t show it. People who go too far with plastic surgery paralyse their faces to the extent that they fall into the uncanny valley, making conversation with them uncomfortable, and their own children show ‘still face syndrome’ rather than natural expressions.

Virtual Reality is liable to cause the same problem, because the technology we currently have doesn’t let its users show micro-expressions, which could be detrimental when we communicate with our ‘virtual’ staff, customers or colleagues.

To better understand why, we need to look at the current state of the avatars used to represent people in the virtual world.

Avatars and Emoticons:

Virtual reality avatars come in many styles. They range from the semi-realistic designs made by software company Wolf3D:

To the Japanese anime designs commonly found in the most popular VR social platform, VRChat:

To the Zuckerberg-lookalike Rubenoid style found on AltspaceVR:

To the almost realistic 3D scans offered by Spatial:

When choosing an avatar for private use, the choice is yours. Some may prefer a more realistic style, others something more cartoony, and some might even forgo human designs altogether to play the part of a robot, animal or imaginary creature of their choice. Personal choice is always going to be popular, and there’s a booming avatar-creation industry among fans looking for a unique design.

But as VR avatars lack the ability to display micro-expressions, the subconscious is unable to measure the true intentions behind the speaker’s words or actions. Cross-referencing past expressions with those currently displayed by a basic avatar gives the subconscious almost nothing to work with compared with the hundreds of stimuli available when seeing a person in person.

People in VR also tend to accidentally interrupt each other when speaking, because they are unable to see when their conversational partner has finished talking or is simply pausing to collect their thoughts. Even for long-term users of VR this can become annoying (if not grudgingly accepted as a drawback of the current technology). The use of larger body language tends to help, but with many corporate avatars offering nothing more than a head and a floating pair of hands, that crutch is not always available.

Imagine then how a first time user of VR is going to feel talking to a static face, being regularly interrupted and left unable to experience those reassuring moments of mirror neurons firing when sharing empathy with another person.

How emotions are currently expressed in VR:

Users of VR will also know that the most unsettling avatars are those which are either realistic or static. Hearing a voice speak from an unmoving, or mis-moving, face sets off alarm bells deep within the psyche. There’s a visceral response to seeing these avatars, which is why they are immensely unpopular on virtual platforms.

To avoid this, emotions in VR are usually coded to provide a set number of responses, displayed at either the click of a button or the movement of the user’s hands. Making a thumbs-up sign usually causes the face to smile, a finger gun might signal annoyance, the V-for-victory sign sadness, and the rock-and-roll horns an expression of excitement. For habitual users of VR this rudimentary form of sign language often bleeds into daily life, with users accidentally finding themselves making the aforementioned gestures whilst having non-virtual conversations.
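At heart, this kind of system is a simple lookup table from hand gesture to canned facial expression. Here is a minimal sketch of the idea; the gesture names, the particular mapping and the neutral fallback are my own illustrative assumptions, not the API of any real platform:

```python
# Hypothetical gesture-to-expression table, loosely modelled on the
# examples described above (thumbs up -> smile, finger gun -> annoyance, etc.)
GESTURE_TO_EXPRESSION = {
    "thumbs_up": "smile",
    "finger_gun": "annoyance",
    "v_sign": "sadness",
    "rock_horns": "excitement",
}

def expression_for(gesture: str) -> str:
    """Return the canned expression for a recognised hand gesture,
    falling back to a neutral face for anything unmapped."""
    return GESTURE_TO_EXPRESSION.get(gesture, "neutral")

print(expression_for("thumbs_up"))  # smile
print(expression_for("open_palm"))  # neutral
```

The fixed table is exactly what limits expressiveness: any feeling outside the handful of coded responses collapses to the same neutral face, which is the problem the rest of this article describes.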

More recently, technology which reads facial gestures and uses eye tracking has been growing in popularity. The example below, created by Vive, removes the need for memorised finger-semaphore, albeit in a rather uncanny-valley fashion.

But again, while looking at this example you might be able to glean a sense of the emotion people are trying to convey, it’s far from natural. For some, it can even be unnerving.

A potential pool of problems:

A lack of micro-expressions in a VR meeting at your workplace could cause contention. It could be something as simple as a glitch: we all know the muscular feeling of pulling a smile, but what happens if your software fails and shows an angry face while you’re being kind? You’d have no idea what your face was showing.

What happens if a staff member is unable to read the tone of your voice and mistakes your hesitant “yes”, which would usually be accompanied by a nervous glance, for a wholehearted “yes”, because they can’t see your expressions?

Because you can’t see the expressions of the actors in a radio play or audiobook, they need to exaggerate their emotions and voices to convince you of what’s happening to them. Without this emphasis the performance sounds wooden, hollow and lacking in life. The same might happen in the virtual office. Can you imagine how tiring it would be to be exuberant with your emotions all day long just to get the point across? Even the master performers at Disney parks need regular breaks.

What happens if you’re talking to someone and forget to follow company code by showing the smiley emoji mid-conversation? Can you imagine having someone scrutinise your expressions all day long? An automated system can, and companies might use that as grounds to fire you.

What happens if you press the wrong button and show the wrong emotion? Three strikes and you’re out?

What happens if you have to speak to someone who struggles to understand your words? Micro-expressions and gestures are especially powerful in helping convey crucial information.

How do you give a sideways glance or raise an eyebrow in VR?

What happens if an administrator robs you of certain ‘unwanted’ emotions or even your voice during a virtual meeting?

A more sinister problem on the horizon:

Fixing the issue of missing micro-expressions is a technical problem; with enough time and resources, it can be achieved. But those of the technofundamentalist creed (the belief that every problem can be solved by technological advancement) don’t always consider the larger picture: what happens if your real-life image isn’t up to company code?

Indian call centers have long suppressed staff identities by demanding the use of ‘neutral accents’. Other companies have demanded international staff use Western names. If Siddharth has been forced to announce himself as Steve and Kjersti made to call herself Christine, companies will surely do the same with how their employees look in the virtual space.

Minorities are likely to suffer first. You need only look at how most call centers tend to use the same stock photography of a light-skinned, brown-haired, blue-eyed representative which seldom reflects the people they actually employ.

If you’re a dark-skinned person with dreadlocks, what’s stopping a racist company that uses VR as a customer service tool from demanding you display a more ‘presentable’ avatar, such as one with a different skin colour and a different haircut? There’s no law against this.

What happens if you’re a redhead, but marketing says customers prefer speaking to blondes? Not many companies would dare demand their staff dye their hair. (Although the sad reality is that this sort of discrimination already exists in the real world; American employers are legally allowed to discriminate against potential employees based on their hairstyles.)

How would you feel being forced to look down at a pair of hands drastically different from your own skin tone, the excuse being that it is more ‘pleasing’ to customers?

Voice changers are also becoming more realistic. How would you feel if HR demanded you use a vocal synthesizer to play the part of the opposite gender on your shift?

What happens if the company demands the use of facial tracking, or pupil measurement software to measure your expressions on the job?

In the VR space, capitulating to such demands would only further entrench inequalities and biases, making it more difficult for the world to accept people as they are.

Steps towards a brave new world:

The solution to the issue of missing micro-expressions is simple – add them. Doing so ensures users of VR are better understood.

However, unlike our Botox analogy, serious ethical discussions need to be had by experts on the looming corporatisation of virtual representation, because those who see virtual-reality individuality as a ‘problem’ are likely to be the loudest when demanding conformity and pushing for the removal of the right to self-expression.

Copyright © Richard Di Britannia 2020 | Britannia Voices | All rights reserved | [email protected] |
Designed by Ben Gosler