Introduction to Professor Shrikanth (Shri) Narayanan and His Lab at University of Southern California (USC)
1. Could you briefly introduce yourself (and your University/Lab)?
I am a University Professor and the Niki & C. L. Max Nikias Chair in Engineering at the University of Southern California (USC). I hold appointments as Professor of Electrical and Computer Engineering, Computer Science, Linguistics, Psychology, Neuroscience, Otolaryngology-Head and Neck Surgery, and Pediatrics, as Research Director of the Information Sciences Institute, and as Director of the Ming Hsieh Institute. Prior to USC I was with AT&T Bell Labs and AT&T Research. My Signal Analysis and Interpretation Laboratory (SAIL) at USC focuses on developing engineering approaches to understand the human condition and on creating machine intelligence technologies that can support and enhance human experiences.
2. What have been your most significant research contributions up to now?
My career, starting in industry (AT&T Bell Labs/AT&T Research, 1995-2000) and continuing in academia (USC, 2000-present), has focused on creating novel interdisciplinary approaches to human-centered sensing, signal and information processing, and computational modeling, inspired by societal applications centered on human communication, interaction, and behavior. Using an engineering lens, it has focused both on scientifically illuminating human speech and multimodal affective behavior, and on creating technologies for domains ranging from defense and security to health, consumer services, and the media arts. My contributions include not only publications but also curated datasets and tools used widely worldwide, leadership in technical projects supported by various US government agencies and industry collaborations, and 18 granted US patents (leading to 3 startups).
My research contributions span several areas: human speech science, on topics related to measuring, analyzing, and modeling speech production, articulatory acoustics, and speech prosody; audio, speech, and language processing, developing computational methods and algorithms for automatic speech and speaker recognition, speech translation, speech synthesis, and spoken dialog and conversational systems; behavioral signal processing; affective computing; multimodal signal processing and machine learning; computational media intelligence; music processing; and biomedical signal and image processing, including novel real-time magnetic resonance imaging and instrumentation technologies for human health and wellbeing.
The following five example publications, spanning the last 25 years, highlight some of my research contributions. The paper Acoustics of children’s speech: Developmental changes of temporal and spectral parameters (J. Acoust. Soc. Am., 1999) provides a scientific account of speech patterning in children as they grow. The paper Creating Conversational Interfaces for Children (IEEE Transactions on Speech and Audio Processing, 2002), a 2005 IEEE Signal Processing Society Best Journal Paper Award recipient, is foundational work on speech technology for children that inspired many applications, including technology-assisted methods for early literacy assessment and diagnostics/treatment support in Autism Spectrum Disorder. The paper An approach to real-time magnetic resonance imaging for speech production (J. Acoust. Soc. Am., 2004) reports the first development and demonstration of a magnetic resonance imaging approach for real-time visualization and measurement of human vocal production, with impact on research and education in speech communication and on clinical applications (e.g., head and neck cancer, sleep apnea, neurological disorders). The work on affective computing, Toward detecting emotions in spoken dialogs (IEEE Transactions on Speech and Audio Processing, 2005), a 2010 IEEE Signal Processing Society Best Paper Award winner, proposed novel algorithms for estimating perceived emotions from various facets of speech and spoken language expression, and demonstrated their utility in a real spoken dialog application for customer service. The work also led to a patent and the launch of a startup (Behavioral Signals, Inc.).
The article Behavioral Signal Processing: Deriving Human Behavioral Informatics from Speech and Language (Proceedings of the IEEE, 2013) provided the foundational framing of the emerging area of behavioral signal processing, which attempts to map behavioral observations to human mental-state constructs that can support decision making by experts and the creation of tools for scientific discovery and application across many domains, notably mental health. My research has also led to inventions and their commercialization, with broad impact on conversational technologies for speech-based services and information retrieval in the cloud and on mobile devices.
A recent highlight is my contribution to creating technological approaches in support of diversity and inclusion, illuminating representations and portrayals of people in the media. For example, we demonstrated the disparity in the portrayal of women in film and television: women are heard and seen for only about a third of the time on screen. In partnership with Google, through automated analysis of gender representation in advertisements, we showed that content with gender parity received 30% more user views. With the United Nations/ITU, my lab deployed a system to track and measure progress towards gender equality targets at their events. Most recently, we have developed an AI-based script analysis tool, “Spell Check for Bias,” to support inclusive content production; it is being adopted by the entertainment media industry.
3. What problems in your research field deserve more attention (or what problems will you like to solve) in the next few years, and why?
The proliferation of converging human-centered technologies in sensing, signal processing, and machine learning promises the creation and deployment of applications impacting all aspects of human society, including health, education, the environment, and security. Given the inherently rich diversity and variability within and across people, and in how they communicate, interact, and behave, it is critical to create inclusive technologies that are robust, fair, safe, and representative of all. This is reflected in the mission of the Signal Analysis and Interpretation Lab (SAIL) I founded and direct at USC: “creating technologies to understand the human condition and to support and enhance human capabilities and experiences.”
4. What advice would you like to give to the young generation of researchers/engineers?
There are tremendous opportunities for the new generation of researchers to make an impact on society. My advice is nothing new: be open-minded in approaching and solving problems; collaborate with collegiality and respect for ideas from all sources, including what may be deemed “non-traditional” in engineering; be persistent, knowing that setbacks in research are common (more a feature than a bug!); and, above all, derive joy from doing something that is personally meaningful and fulfilling.