Red Hen Lab is a distributed consortium of researchers in multimodal communication, with participants all over the world. They range from senior professors at major research universities and senior developers in technology corporations to junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code in natural language processing, audio parsing, computer vision, and joint multimodal analysis.

Red Hen’s multimodal communication research involves locating, identifying, and characterizing auditory and visual elements in videos and pictures. They may provide annotated clips or images and pose the challenge of developing machine learning tools to find additional instances in a much larger dataset. Some examples are gestures, eye movements, and tone of voice. They favor projects that combine more than one modality and target a clear communicative function, such as floor-holding techniques. Once a feature has been successfully identified in Red Hen’s full dataset of several hundred thousand hours of news videos, cognitive linguists, communication scholars, and political scientists can use this information to study higher-level phenomena in language, culture, and politics, and to develop a better understanding of the full spectrum of human communication. The dataset is recorded in a large number of languages, giving Red Hen a global perspective.

For GSoC 2018, Red Hen invited student proposals for components of a unified multimodal processing pipeline, which aims to extract information from text, audio, and video and to develop integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).
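
As a rough illustration of what "integrative cross-modal feature detection" could mean in practice, the sketch below fuses hypothetical per-modality detector outputs (text, audio, video) into a single cross-modal annotation for a clip. All class names, labels, scores, and thresholds are invented for illustration and are not drawn from Red Hen's actual pipeline or codebase.

```python
# Minimal sketch of cross-modal cue fusion; all names and values are
# hypothetical and do not reflect Red Hen's real tools.
from dataclasses import dataclass

@dataclass
class ModalityCue:
    modality: str  # "text", "audio", or "video"
    label: str     # e.g. "floor-holding"
    score: float   # detector confidence in [0, 1]
    start: float   # seconds from clip start
    end: float     # seconds from clip start

def fuse_cues(cues, label, min_modalities=2, threshold=0.5):
    """Report a cross-modal detection when cues for the same label
    exceed the threshold in at least `min_modalities` modalities."""
    hits = [c for c in cues if c.label == label and c.score >= threshold]
    modalities = {c.modality for c in hits}
    if len(modalities) < min_modalities:
        return None
    return {
        "label": label,
        "modalities": sorted(modalities),
        "start": min(c.start for c in hits),
        "end": max(c.end for c in hits),
        "score": sum(c.score for c in hits) / len(hits),
    }

if __name__ == "__main__":
    cues = [
        ModalityCue("audio", "floor-holding", 0.72, 12.0, 14.5),  # filled pause
        ModalityCue("video", "floor-holding", 0.64, 11.8, 14.0),  # sustained gesture
        ModalityCue("text",  "floor-holding", 0.31, 12.1, 13.0),  # weak lexical cue
    ]
    print(fuse_cues(cues, "floor-holding"))
```

In this toy example the audio and video cues agree, so the function emits a single "floor-holding" annotation spanning their combined time range; the weak text cue is ignored. Real pipeline components would of course work from the media itself rather than pre-scored cues.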
