Interspeech 2021

Communication and interaction, multimodality

A Psychology-Driven Computational Analysis of Political Interviews
(3-minute introduction)

Darren Cook (University of Liverpool, UK), Miri Zilka (University of Sussex, UK), Simon Maskell (University of Liverpool, UK), Laurence Alison (University of Liverpool, UK)

Speech Emotion Recognition based on Attention Weight Correction Using Word-level Confidence Measure
(3-minute introduction)

Jennifer Santoso (University of Tsukuba, Japan), Takeshi Yamada (University of Tsukuba, Japan), Shoji Makino (University of Tsukuba, Japan), Kenkichi Ishizuka (Revcomm, Japan), Takekatsu Hiramura (Revcomm, Japan)

Speech Emotion Recognition based on Attention Weight Correction Using Word-level Confidence Measure
(longer introduction)

Jennifer Santoso (University of Tsukuba, Japan), Takeshi Yamada (University of Tsukuba, Japan), Shoji Makino (University of Tsukuba, Japan), Kenkichi Ishizuka (Revcomm, Japan), Takekatsu Hiramura (Revcomm, Japan)

Effects of voice type and task on L2 learners’ awareness of pronunciation errors
(3-minute introduction)

Alif Silpachai (Iowa State University, USA), Ivana Rehman (Iowa State University, USA), Taylor Anne Barriuso (Iowa State University, USA), John Levis (Iowa State University, USA), Evgeny Chukharev-Hudilainen (Iowa State University, USA), Guanlong Zhao (Texas A&M University, USA), Ricardo Gutierrez-Osuna (Texas A&M University, USA)

Lexical Entrainment and Intra-Speaker Variability in Cooperative Dialogues
(3-minute introduction)

Alla Menshikova (Saint Petersburg State University, Russia), Daniil Kocharov (Saint Petersburg State University, Russia), Tatiana Kachkovskaia (Saint Petersburg State University, Russia)

Detecting Alzheimer's Disease using Interactional and Acoustic features from spontaneous speech
(3-minute introduction)

Shamila Nasreen (Queen Mary University of London, UK), Julian Hough (Queen Mary University of London, UK), Matthew Purver (Queen Mary University of London, UK)

Investigating the interplay between affective, phonatory and motoric subsystems in Autism Spectrum Disorder using an audiovisual dialogue agent
(3-minute introduction)

Hardik Kothare (Modality.AI, USA), Vikram Ramanarayanan (Modality.AI, USA), Oliver Roesler (Modality.AI, USA), Michael Neumann (Modality.AI, USA), Jackson Liscombe (Modality.AI, USA), William Burke (Modality.AI, USA), Andrew Cornish (Modality.AI, USA), Doug Habberstad (Modality.AI, USA), Alaa Sakallah (University of California at San Francisco, USA), Sara Markuson (University of California at San Francisco, USA), Seemran Kansara (University of California at San Francisco, USA), Afik Faerman (University of California at San Francisco, USA), Yasmine Bensidi-Slimane (University of California at San Francisco, USA), Laura Fry (University of California at San Francisco, USA), Saige Portera (University of California at San Francisco, USA), David Suendermann-Oeft (Modality.AI, USA), David Pautler (Modality.AI, USA), Carly Demopoulos (University of California at San Francisco, USA)

Analysis of eye gaze reasons and gaze aversions during three-party conversations
(3-minute introduction)

Carlos Toshinori Ishi (RIKEN, Japan), Taiken Shintani (RIKEN, Japan)