Uncovering the acoustic cues of COVID-19 infection
Sriram Ganapathy (Indian Institute of Science, Bangalore)
Abstract The investigation of acoustic biomarkers of respiratory diseases has gained societal and public health relevance following the onset of the COVID-19 pandemic. Efforts in the pre-pandemic period focused on developing smartphone-friendly diagnostic tools for the detection of chronic pulmonary diseases, tuberculosis, and asthmatic conditions using cough sounds. During the past two years, research works of varying scales have been undertaken by the speech and signal processing community to analyze the acoustic symptoms of COVID-19. The motivation for developing acoustic-based tools for COVID-19 diagnostics arises from key limitations of the current gold standard in COVID-19 testing, reverse transcription polymerase chain reaction (RT-PCR) testing, namely its cost, turnaround time, and safety concerns. In this talk, I will survey the major efforts undertaken by groups across the world in i) developing data resources of acoustic signals for COVID-19 diagnostics, and ii) designing models and learning algorithms for tool development. The landscape of data resources ranges from controlled hospital recordings to crowdsourced smartphone-based data. While the primary signal modality recorded is cough data, the impact of COVID-19 on other modalities, such as breathing, speech, and symptom data, is also studied. I will also discuss the considerations in designing data representations and machine learning models for COVID-19 detection from acoustic data. Pointers to open-source data resources and tools will be highlighted, with the aim of encouraging budding researchers to pursue this important direction. The talk will conclude with remarks on the progress made by our group's project, Coswara, where a multi-modal combination of information from several modalities shows the potential to surpass the regulatory requirements for a rapid acoustic-based point-of-care testing (POCT) tool.
Bio Sriram Ganapathy is a faculty member in the Department of Electrical Engineering, Indian Institute of Science, Bangalore, where he heads the Learning and Extraction of Acoustic Patterns (LEAP) lab. Prior to joining the Indian Institute of Science, he was a research staff member at the IBM Watson Research Center, Yorktown Heights, USA. He received his Doctor of Philosophy from the Center for Language and Speech Processing, Johns Hopkins University. He obtained his Bachelor of Technology from the College of Engineering, Trivandrum, India, and his Master of Engineering from the Indian Institute of Science, Bangalore. He has also worked as a Research Assistant at the Idiap Research Institute, Switzerland. At the LEAP lab, his research interests include signal processing, digital health, machine learning methodologies for speech analytics, and auditory neuroscience. He is a subject editor for the Speech Communication journal, a member of ISCA, and a senior member of the IEEE. He is the recipient of young scientist awards from the Department of Science and Technology (DST), India, the Department of Atomic Energy (DAE), India, and the Pratiksha Trust, Indian Institute of Science, Bangalore. Over the past 10 years, he has published more than 100 peer-reviewed journal and conference papers in the areas of deep learning and speech/audio processing.