Siri and the Stewardess Smile – a Human-machine Introspection

May 17, 2019 | by: point

Łukasz Krol from the College of Europe and Polaris gave us insight into how we perceive AI, and talked about the relationship between humans and digital assistants such as Siri, Alexa, etc.

Photos by: Vanja Čerimagić

Łukasz introduced humanity's interaction with technology by referring to air travel in the 1970s, when flying was a huge and glamorous affair. Aircrews ended up assuming the personality of quasi-moms to spoiled businessmen. That wasn't the aircrews' own personality; they weren't happy or angry or sad or ecstatic. Instead, they took on a corporate personality, which entailed eclipsing their own feelings in order to give customers what they wanted.

The same kind of projection is happening with AI and digital assistants. Polaris reminded everyone that our image of Siri, Alexa and other digital assistants is shaped not by the way we interact with them but by their media representation and the way they are marketed. These are products, they said, and the marketing around them makes us believe there is some kind of personality behind them. There really isn't: they are designed to appeal to us as much as possible, to be affirming, and to react exactly the way we want.

Polaris cited research by the late Stanford communication professor Clifford Nass, who found that people tend to perceive female voices as helping us solve our problems by ourselves, while they view male voices as authority figures who tell us the answers to our problems.

When AI digital assistants are designed, a distinct set of expectations is superimposed on them depending on their "gender". Humans are racist, sexist and discriminatory in many ways, and since algorithms know only the things we "teach" them, they adopt this bias. This is a problem because bias is notoriously hard to measure.

However, Łukasz posed the question: what if digital assistants could teach us about our bias and actually measure it?

Teaching digital assistants to measure bias could help us on the path to eradicating it.
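The talk did not spell out how such measurement would work, but for a concrete idea of what "measuring bias" can mean in practice, here is a minimal sketch of one published approach: the Word Embedding Association Test (WEAT) of Caliskan et al. (2017), which scores how much more strongly one set of words (e.g. professions) is associated with one set of attributes (e.g. male pronouns) than with another. The tiny three-dimensional vectors below are made-up stand-ins for a real embedding model, exaggerated so the effect is visible.

```python
# A toy WEAT-style bias measurement (not the speakers' method).
# The embeddings here are hypothetical; in practice you would load
# vectors from a trained model such as word2vec or GloVe.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    # How much closer word w sits to attribute set A than to attribute set B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Positive: target set X leans toward A and Y toward B; near zero: no bias.
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Hypothetical 3-d embeddings, exaggerated for demonstration.
emb = {
    "engineer": np.array([0.9, 0.1, 0.2]), "nurse": np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]), "she":   np.array([0.2, 0.8, 0.2]),
}

# Prints a large positive score: this toy model associates
# "engineer" with "he" and "nurse" with "she".
print(weat_effect_size(X=["engineer"], Y=["nurse"], A=["he"], B=["she"], emb=emb))
```

The appeal of a score like this is exactly what the talk hinted at: once bias is expressed as a number rather than an impression, we can track it, compare models, and check whether interventions actually reduce it.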

Furthermore, Polaris pointed out that it is important to be aware that, in contrast to their agreeable design, these devices are agents of corporate surveillance, recording and processing data without users' awareness or agency.

Watch the full video: