Computer Vision and Multimodal Communication Lab

The “Computer Vision and Multimodal Communication” lab is a collaborative initiative led by Cuihua (Cindy) Shen at the University of California, Davis, Yilang Peng at the University of Georgia, and Yingdan Lu at Northwestern University. We use state-of-the-art computational social science methods, including computer vision, video analysis, and natural language processing, to make sense of the vast volume of visual and multimodal information in today’s media landscape. We study how visual media spread and what effects they have across diverse contexts, on topics ranging from visual misinformation and the role of visuals in mobilizing social movements to the implications of AI-generated media. If you are interested in collaborating with us, please don’t hesitate to contact us.


Key Areas of Research

The Dissemination Patterns of Visual Misinformation

  • How does the visual presentation of misinformation (images, videos, infographics, memes, and so on) affect how far, how fast, and how effectively it spreads? (A sketch of simple spread metrics follows this list.)
  • Are there specific contexts, content features, or emotional elements that make misinformation more likely to spread?
  • Does the acceptance and sharing of visual misinformation depend on factors such as the audience’s age, gender, media literacy, and cultural background?
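
Questions about speed and reach lend themselves to simple diffusion metrics computed from platform share logs. The sketch below is a minimal illustration in Python, assuming a hypothetical pandas table of reshares with post_id, user_id, and shared_at columns; the column names and the one-hour window are placeholders rather than features of any specific study.

    import pandas as pd

    # Hypothetical share log: one row per reshare of a visual post.
    # Columns (assumed for illustration): post_id, user_id, shared_at.
    shares = pd.DataFrame({
        "post_id": [1, 1, 1, 2, 2],
        "user_id": [10, 11, 12, 20, 21],
        "shared_at": pd.to_datetime([
            "2023-05-01 09:00", "2023-05-01 09:30", "2023-05-01 12:00",
            "2023-05-02 08:00", "2023-05-03 08:00",
        ]),
    })

    def spread_metrics(df: pd.DataFrame) -> pd.DataFrame:
        """Per-post reach (unique sharers) and speed (shares in the first hour)."""
        grouped = df.groupby("post_id")
        reach = grouped["user_id"].nunique().rename("reach")
        first_seen = grouped["shared_at"].transform("min")
        within_hour = df[df["shared_at"] <= first_seen + pd.Timedelta(hours=1)]
        speed = within_hour.groupby("post_id").size().rename("shares_first_hour")
        return pd.concat([reach, speed], axis=1).fillna(0)

    print(spread_metrics(shares))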

The Use and Impact of Visual Information in Social Movements

  • Contemporary social media plays a significant role in driving social movements. How is visual information used in social movements, and how does it influence the audience compared to textual information?
  • Can visual information evoke anger, create resonance, or prompt calls to action?
  • Do different types of visual information (photos, videos, illustrations, cartoons, and so on) differ in how they convey information and influence audiences?
  • How does visual information shape the image and reputation of organizations within social movements?
  • How does visual information interact with textual information to create a more powerful dissemination effect?

AI-Generated Images and Misinformation

  • What role do AI-generated images play in the spread of misinformation?
  • How can we detect and identify misinformation that uses AI-generated images?
  • Are there effective technological means to distinguish real images from generated ones? (A minimal detection sketch follows this list.)
  • How can users differentiate between real and generated images?
  • Does awareness of AI-generated images affect users’ trust in news organizations and institutions?
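
On the detection side, one common starting point is an off-the-shelf image classifier fine-tuned on labeled real and AI-generated images. The sketch below assumes PyTorch and torchvision and uses a placeholder image path; it shows only the skeleton of such a detector, with an untrained classification head, and is not a method from our published work.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Pretrained ResNet-18 backbone with a new two-class head ([real, generated]).
    # The head is untrained here; in practice it would be fine-tuned on a
    # labeled dataset of real and AI-generated images.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def score_image(path: str) -> float:
        """Return the model's probability that the image is AI-generated."""
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)  # add a batch dimension
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)
        return probs[0, 1].item()  # probability of the "generated" class

    # Example call with a placeholder path:
    # print(score_image("example.jpg"))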

Selected Publications

  • Peng, Y., Lock, I., & Salah, A. A. (in press). Automated visual analysis for the study of social media effects: Opportunities, approaches, and challenges. Communication Methods and Measures.
  • Lu, Y., & Shen, C. (2023). Unpacking multimodal fact-checking: Features and engagement of fact-checking videos on Chinese TikTok (Douyin). Social Media + Society, 9(1). [LINK]
  • Peng, Y., Lu, Y., & Shen, C. (2023). An agenda for studying credibility perceptions of visual misinformation. Political Communication. [LINK]
  • Sharma, M., & Peng, Y. (2023). How visual aesthetics and calorie density predict food image popularity on Instagram: A computer vision analysis. Health Communication. [LINK]