Visual Sentiment Analysis

Sentiment Analysis of Urban Images by Using Semantic and Deep Features

Project Goal

This project focuses on understanding the sentiment of urban images shared by users on social networks.

PerceptSent

Data

PerceptSent Dataset: Composed of 500 images from Flickr, Instagram, and NYC311. A pool of five evaluators labeled each image as positive, slightly positive, neutral, slightly negative, or negative. Besides the sentiment opinion for each image, the dataset contains metadata about each evaluator (age, gender, socio-economic strata, education, and psychological hints), their perceptions of the scene (such as the presence of nature, violence, or lack of maintenance), as well as independent scene objects annotated in the images.
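A single annotation in this dataset bundles a sentiment opinion with evaluator metadata and scene perceptions. The sketch below illustrates one way such a record might be structured; the field names and types are assumptions for illustration, not the dataset's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class PerceptSentRecord:
    """Illustrative layout for one PerceptSent annotation.

    Field names are hypothetical; consult the released dataset
    for the real schema and value sets.
    """
    image_id: str
    sentiment: str  # positive, slightly positive, neutral, slightly negative, or negative
    evaluator_age: int
    evaluator_gender: str
    evaluator_education: str
    perceptions: list = field(default_factory=list)    # e.g. ["nature", "lack of maintenance"]
    scene_objects: list = field(default_factory=list)  # independently annotated objects


record = PerceptSentRecord(
    image_id="flickr_000123",
    sentiment="slightly positive",
    evaluator_age=29,
    evaluator_gender="female",
    evaluator_education="undergraduate",
    perceptions=["nature"],
    scene_objects=["tree", "bench"],
)
```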

Code

The ConvNet architectures explored in this paper and the trained models are available here.

OutdoorSent

Data

OutdoorSent Dataset: Composed of 1,950 outdoor images from Flickr. Each image was evaluated by five different volunteers and then labeled as Positive, Neutral, or Negative.

Code

The ConvNet architectures explored in this paper, trained models, and semantic attributes are available.

Examples of use can be found here.

Team

Researchers

Leyza Baldo Dorini

Rodrigo Minetto

Thiago H. Silva

Myriam Delgado


Students

André L. Zanelatto

Cesar Rafael Lopes

Lucas Nogueira

Thiago Mildemberger

Wyverson Bonasoli de Oliveira

Acknowledgements 

Grant CNPq-URBCOMP (#403260/2016-7). 

Grant Fapesp-GoodWeb (#2023/00148-0).

CNPq (grants 314699/2020-1 and 310998/2020-4).

NVIDIA Corporation for the donation of the Titan Xp GPU used in this research.

Research agencies: CAPES, CNPq, and FAPESP.