Imagine the most charming person walking into the room and saying ‘Hi!’ With that simple greeting, they can convince you that everything is going to be alright, completely gaining your trust. The opposite is also possible: you might dislike someone from the moment they greet you. Now, a team of researchers has analyzed what makes our intonation come across as trustworthy or likable.
We don’t often think about it consciously, but a big part of our communication is not what we say, but how we say it. We draw a great deal of information, and make many decisions, based on people’s tone. According to recent research, we form mental images of how people sound, just as we form mental images of what they look like, and much of what we think about their personalities stems from the acoustic quality of their voice.
Now, for the first time, researchers have managed to visually model these mental representations and compare them across individuals.
To do this, they first developed a computer program for voice manipulation called CLEESE. CLEESE takes a real-life recording of someone saying a word and generates thousands of alternative pronunciations of that word, each unique in its own way. The researchers then had participants listen to these different pronunciations and analyzed their reactions. (CLEESE is freely available as open-source software.)
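The underlying technique is reverse correlation, named in the study’s title. Purely to illustrate the logic, here is a minimal Python sketch; it is not CLEESE’s actual API, and the segment count, pitch spread, and simulated listener are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 10_000   # number of random stimuli; real experiments use hundreds per listener
N_SEGMENTS = 6      # the word is cut into short time segments (assumed count)
PITCH_SD = 70       # spread of the random pitch shifts, in cents (assumed value)

# Each stimulus is a random pitch contour: one pitch shift (in cents)
# applied to each successive segment of the same recording.
contours = rng.normal(0.0, PITCH_SD, size=(N_TRIALS, N_SEGMENTS))

# Stand-in for a participant: a simulated listener who calls a pronunciation
# "trustworthy" whenever the pitch ends higher than it started.
def sounds_trustworthy(contour: np.ndarray) -> bool:
    return contour[-1] > contour[0]

choices = np.array([sounds_trustworthy(c) for c in contours])

# Reverse correlation: estimate the listener's mental prototype as the mean
# contour of the accepted stimuli minus the mean contour of the rejected ones.
kernel = contours[choices].mean(axis=0) - contours[~choices].mean(axis=0)
print("Estimated 'trustworthy' pitch contour (cents):", kernel.round(1))
```

Averaged over many random trials, the estimated contour comes out low at the start of the word and high at the end: the rising-pitch prototype the study reports for trustworthiness.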
They found that in order to inspire trust, the pitch must rise quickly at the end of the word, whereas to sound determined, you should speak with a descending pitch. For instance, a French speaker should pronounce bonjour (French for “hello”) with an emphasis on the second syllable. Here are a few examples:
- A trustworthy Bonjour (audio clip in the original article)
- A determined Bonjour (audio clip in the original article)
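To make the two contours concrete, here is a small, self-contained sketch that synthesizes plain sine tones following a rising and a falling pitch glide. This is only a toy stand-in: the study used resynthesized speech rather than tones, and the frequencies and duration below are arbitrary assumptions.

```python
import wave
import numpy as np

SR = 22_050  # sample rate, Hz

def tone_with_glide(f0_start: float, f0_end: float, duration: float = 0.6) -> np.ndarray:
    """Synthesize a sine tone whose pitch glides linearly from f0_start to f0_end (Hz)."""
    n = int(SR * duration)
    f0 = np.linspace(f0_start, f0_end, n)      # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(f0) / SR     # integrate frequency to get phase
    return (0.3 * np.sin(phase) * 32767).astype(np.int16)

def save_wav(path: str, samples: np.ndarray) -> None:
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)       # 16-bit samples
        f.setframerate(SR)
        f.writeframes(samples.tobytes())

# A rising final pitch mimics the "trustworthy" pattern; a falling one, "determined".
save_wav("trustworthy.wav", tone_with_glide(120, 160))
save_wav("determined.wav", tone_with_glide(160, 120))
```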
The study could not only teach us a few things about our tone; ultimately, it could teach us a lot about emotions. Using this software, the team cracked the “code” we use to interpret others, which could help us understand why some people are unable to use this code (for instance, some autistic individuals).
However, for all its scientific value, this study is also somewhat unnerving: scientists are taking something deeply rooted in human nature (understanding and reacting to intonation) and quantifying it, separating it into reproducible parts. It’s easy to imagine the findings being applied to make an AI assistant like Alexa seem more trustworthy, manipulating us through artificial tone modulations, and that is an unsettling thought.
In the meantime, the researchers say the system could be used to study how words are interpreted by stroke survivors, since strokes often affect how people perceive intonation.
Journal Reference: Emmanuel Ponsot, Juan José Burred, Pascal Belin & Jean-Julien Aucouturier. Cracking the social code of speech prosody using reverse correlation. PNAS, March 26, 2018. DOI: 10.1073/pnas.1716090115.