
What does an AI ‘hear’?


  • Monica Porteanu FRSA
  • Fellowship
  • Science
  • Technology

Continuing the #RSAtech line of enquiry into Tech and Society, Monica Porteanu FRSA is inviting Fellows to join the conversation about the role of voice in our interaction with machines.

Voice-based user interfaces are increasingly present in our daily lives. These systems rely on voice as a key identifier and biometric, sometimes drawing from aggregated data sets to facilitate decision-making based on interpretations of our emotional state, gender, ethnicity and more. Voice as an indicator of identity and intention is already a fraught part of interaction among humans in light of bias and difference. How then does this play out in our voice-based interaction with machines? What exactly are these systems listening to and concluding about our voices?

Do they understand the context in which the person speaks? Do these conclusions vary if a voice differs from the norm? Is the person aware of how the machine interprets them? Can the person do anything about that, should they wish to? Who or what can access those interpretations, and for what purpose? Is there an opportunity to give people back agency over the journey of their voice through technology and its affective interpretations?

As with so many contemporary systems, it is hard to know much as a single user. But together, we can begin to compare notes and form an idea. This is why we are inviting RSA Fellows to participate in a research study as part of a collective inquiry to probe the current state of affective analysis of voice by consumer devices.

In our first study, we are testing several audio artificial intelligence (AI) algorithms for sentiment interpretation. With you, we seek to answer three research questions (a sketch of what such an algorithm typically looks like follows the list):

  • How does an algorithm ‘hear’ and interpret emotion from our voice?
  • How does an algorithm differentiate distinct emotions ‘heard’ in someone’s voice?
  • Would a user agree with how the algorithm ‘hears’ and interprets what it ‘hears’?
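To ground these questions, here is a minimal sketch of one common pipeline for 'hearing' emotion in a voice: summarise a recording as acoustic features such as pitch, loudness and spectral shape, then pass them to a trained classifier. Everything here is an illustrative assumption; the feature set, the emotion labels and the placeholder model are not the algorithms under test in the study.

```python
# Illustrative sketch only: one common way a system "hears" emotion.
# The features, labels and classifier are assumptions, not the study's
# actual algorithms.

import numpy as np
import librosa  # widely used audio-analysis library

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

def extract_features(path: str) -> np.ndarray:
    """Summarise one recording as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)                 # mono audio at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape/timbre
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)        # pitch contour
    rms = librosa.feature.rms(y=y)                       # loudness
    # Average over time so every clip yields a same-sized vector.
    return np.concatenate([mfcc.mean(axis=1), [f0.mean()], [rms.mean()]])

def interpret(path: str, model) -> dict:
    """Return per-emotion scores for one recording.

    `model` stands in for any trained classifier with a scikit-learn-style
    predict_proba method; training it is out of scope for this sketch.
    """
    scores = model.predict_proba(extract_features(path).reshape(1, -1))[0]
    return dict(zip(EMOTIONS, scores))
```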

Our eventual aim is to develop a tool for users to evaluate their own voice in light of how a system stands to interpret it; a sort of 'mirror' that shows how they will be heard. This particularly interests us given the growing number of systems that make decisions based on affective interpretation; because the stakes in such cases are high, users will want to know how to retain agency in their interactions.
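In code terms, such a 'mirror' would simply surface the per-emotion scores back to the speaker. A hypothetical use of the sketch above (the recording name and `trained_model` are assumptions):

```python
# Hypothetical use of the sketch above: show a speaker how a trained
# model would "hear" their recording. `trained_model` is assumed to exist.
for emotion, score in interpret("my_clip.wav", trained_model).items():
    print(f"{emotion}: {score:.0%}")  # e.g. "sad: 61%"
```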

In our first step towards this goal, we are looking for participants who would agree to provide at least eight brief voice recordings over several weeks in response to a series of prompts. The study will end with a 30-minute one-to-one interview to discuss findings. Your data privacy, confidentiality and protection are ensured under IRB #21610, approved by the Office for the Protection of Research Subjects at the University of Illinois at Urbana-Champaign.

If interested, please contact Monica Porteanu via monicap2@illinois.edu. Feel free to forward this invitation to others in your networks who might be interested.


Monica Porteanu is a PhD candidate in Informatics at the University of Illinois at Urbana-Champaign.
