Description
In key communication contexts such as political communication, science communication, health communication, and advertising, visual media have unique and powerful effects. Our research has revealed the features, categories, and persuasive power of visual media in contexts such as social media protests, climate change communication, and brand promotion.
Featured Publication
Convergence or divergence? A cross-platform analysis of climate change visual content categories, features, and social media engagement on Twitter and Instagram. Public Relations Review. PDF
Advocacy organizations increasingly leverage social media and visuals to communicate complex climate issues. Examining an extensive dataset of visual posts collected from five organization accounts on two multimodal social media platforms, Twitter and Instagram, we conducted a cross-platform comparison of visual content categories and visual features related to climate change. Through deep-learning-based unsupervised image clustering, we discovered that visual content on both platforms could be broadly classified into five categories: infographics/captioned images, nature landscape/wildlife, climate activism, technology, and data visualization. However, these categories were not equally represented on each platform: Instagram featured more nature landscape/wildlife content, while Twitter showed more infographics/captioned images and data visualization. Through computational visual analysis, we found that the two platforms also differed significantly in the overall use of warm and cool colors, brightness, colorfulness, visual complexity, and the presence of faces. Additionally, we identified platform-specific patterns of engagement associated with these categories and features. Given the urgency of addressing climate change, these findings can guide climate advocacy organizations in developing strategies tailored to each platform’s specific characteristics for maximum effectiveness. This study highlights the value of computational methods for efficiently uncovering meaningful themes in extensive visual data and quantifying aesthetic features in strategic communication.
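To give a flavor of the kind of computational visual analysis described above, the sketch below quantifies two of the aesthetic features mentioned, brightness and colorfulness, the latter via the widely used Hasler and Süsstrunk (2003) metric. This is a minimal illustration of the general approach, not the study’s actual pipeline, and the image path is a placeholder.

```python
# Minimal sketch of extracting aesthetic image features, in the spirit of the
# computational visual analysis described above. Not the authors' actual code;
# "post_image.jpg" is a placeholder path.
import numpy as np
from PIL import Image

def brightness(img: np.ndarray) -> float:
    """Mean pixel luminance (0-255), using ITU-R BT.601 channel weights."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def colorfulness(img: np.ndarray) -> float:
    """Hasler & Süsstrunk (2003) colorfulness metric on RGB opponent axes."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return float(std_root + 0.3 * mean_root)

img = np.asarray(Image.open("post_image.jpg").convert("RGB"))
print(f"brightness={brightness(img):.1f}, colorfulness={colorfulness(img):.1f}")
```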
Publications
Qian, S., Lu, Y., Peng, Y., Shen, C., & Xu, H. (2024). Convergence or divergence? A cross-platform analysis of climate change visual content categories, features, and social media engagement on Twitter and Instagram. Public Relations Review. PDF
Lu, Y., & Peng, Y. (2024). The mobilizing power of visual media across cycles of social movements. Political Communication. PDF
Peng, Y., Wen, T. J., & Yang, J. (2023). A computer vision methodology to predict brand personality from image features. Journal of Advertising. PDF
Peng, Y., Lock, I., & Ali Salah, A. (2024). Automated visual analysis for the study of social media effects: Opportunities, approaches, and challenges. Communication Methods and Measures. PDF
Sharma, M., & Peng, Y. (2023). How visual aesthetics and calorie density predict food image popularity on Instagram: A computer vision analysis. Health Communication. PDF
Peng, Y., Lu, Y., & Shen, C. (2023). An agenda for studying credibility perceptions of visual misinformation. Political Communication, 40(2), 225-237. PDF
Lu, Y., & Shen, C. (2023). Unpacking multimodal fact-checking: Features and engagement of fact-checking videos on Chinese TikTok (Douyin). Social Media + Society, 9(1). PDF
Qian, S., Shen, C., & Zhang, J. (2022). Fighting cheapfakes: Using a digital media literacy intervention to motivate reverse search of out-of-context visual misinformation. Journal of Computer-Mediated Communication, 28(1). PDF
Description
The rise of AI-generated images (AIGIs) poses a significant threat to information integrity, given their striking resemblance to real-world images and their potential to misinform the public. Our ongoing research advances researchers’ and policymakers’ understanding of the features, perception, and effects of photorealistic AIGIs.
Featured Publication
Crafting synthetic realities: Examining visual realism and misinformation potential of photorealistic AI-generated images. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–12. LINK
Advances in generative models have produced Artificial Intelligence-Generated Images (AIGIs) nearly indistinguishable from real photographs. Leveraging 30,824 AIGIs collected from Instagram and X (Twitter), this study combines quantitative and qualitative content analysis of a sample of 4,335 AIGIs to examine the photorealism of AIGIs across content, human, aesthetic, and AI production features related to perceived realism. We found that photorealistic AIGIs often depicted human figures, especially celebrities and politicians, with notable surrealism. Aesthetic professionalism was evident in staging and professional lighting. Only a small number of AIGIs showed clear signs of generative AI, and AI production flaws were rare but varied. Our findings offer critical insights for understanding visual misinformation, mitigating the potential risks of photorealistic AIGIs, and improving the responsible use of AIGIs.
Description
This research area examines how artificial intelligence can be harnessed to advance social science. AI enables researchers to measure complex content attributes such as sentiment, credibility, or visual framing at greater scale. It also supports hypothesis discovery by uncovering patterns and relationships in data that traditional methods might overlook. We investigate how AI can automate elements of the research process, from coding and annotation to theory building and testing. We also study the divergence between human and AI evaluations, including how human and machine coders differ in their judgments.
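As a minimal illustration of comparing human and machine coders, the sketch below computes Cohen’s kappa, a chance-corrected agreement statistic, between human and model annotations. The labels are invented for illustration and do not come from any specific study.

```python
# Hypothetical sketch: quantifying human-AI coder divergence with Cohen's kappa.
# The sentiment labels below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

human_labels = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "neu"]
model_labels = ["pos", "neg", "pos", "pos", "neu", "neu", "neg", "neu"]

kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level

# The confusion matrix shows *where* the two coders diverge, not just how much.
print(confusion_matrix(human_labels, model_labels, labels=["pos", "neu", "neg"]))
```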
Featured Research
Peng, Y., Qian, S., Lu, Y., & Shen, C. Large Language Model-Informed Feature Discovery Improves Prediction and Interpretation of Credibility Perceptions of Visual Content. PREPRINT.
In today’s visually dominated social media landscape, predicting the perceived credibility of visual content and understanding what drives human judgment are crucial for countering misinformation. However, these tasks are challenging due to the diversity and richness of visual features. We introduce a Large Language Model (LLM)-informed feature discovery framework that leverages multimodal LLMs, such as GPT-4o, to evaluate content credibility and explain its reasoning. We extract and quantify interpretable features using targeted prompts and integrate them into machine learning models to improve credibility predictions. We tested this approach on 4,191 visual social media posts across eight topics in science, health, and politics, using credibility ratings from 5,355 crowdsourced workers. Our method outperformed zero-shot GPT-based predictions by 13% in R², and revealed key features like information concreteness and image format. We discuss the implications for misinformation mitigation, visual credibility, and the role of LLMs in social science.
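To make the shape of such a pipeline concrete, here is a minimal sketch of the general idea: prompt an LLM to score a handful of interpretable features for each post, then feed those scores into a standard regression model evaluated with R². The prompt, feature names, toy posts, and ratings below are hypothetical illustrations, not the preprint’s actual prompts, features, or code.

```python
# Hypothetical sketch of LLM-informed feature discovery: LLM-scored interpretable
# features feed a downstream regression that predicts credibility ratings.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment.
import json
import numpy as np
from openai import OpenAI
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

client = OpenAI()
FEATURES = ["information_concreteness", "source_cues", "visual_professionalism"]

def score_features(post_text: str) -> list[float]:
    """Ask the LLM to rate each interpretable feature on a 1-7 scale (illustrative prompt)."""
    prompt = (
        f"Rate this social media post on {', '.join(FEATURES)}, each on a 1-7 scale. "
        f'Reply with a JSON object mapping each feature name to its score.\n\nPost: "{post_text}"'
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # forces parseable JSON output
    )
    scores = json.loads(resp.choices[0].message.content)
    return [float(scores[f]) for f in FEATURES]

# Invented toy data so the sketch runs end to end; real use would involve
# thousands of posts, held-out evaluation, and image inputs for multimodal features.
posts = [
    "BREAKING: 5G towers spread illness, share before this gets deleted!!!",
    "New randomized trial reports 94% vaccine efficacy (study linked below).",
]
ratings = [2.1, 5.6]  # mean crowdsourced credibility ratings on a 1-7 scale

X = np.array([score_features(p) for p in posts])
y = np.array(ratings)
model = Ridge().fit(X, y)
print(f"In-sample R^2: {r2_score(y, model.predict(X)):.2f}")
```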