The Next Step in AI: Multimodal Perception | Louis-Philippe Morency | TEDxCMU

Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on their interlocutor’s speech, gestures and facial expressions. These multimodal behaviors are a reflection of our emotional and psychological state. Louis-Philippe Morency presents new artificial intelligence research that leverages this multimodal dance to help mental health professionals make diagnoses or treatment decisions.
Professor Morency is a tenure-track faculty member at the CMU Language Technologies Institute, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). His research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at