The Web, and in particular Social Media Platforms, from microblogging services (e.g. Twitter, Weibo) to comments on news articles, are hubs of human activity and experience. The amount of natural data available to the public, researchers, industry, and policy makers cannot be compared to that of any other period in human history. Most of this data is still expressed in natural language (although images, e.g. memes, are becoming strong competitors), making it amenable to automatic processing and analysis by means of language technologies. Existing systems, although good, are in most cases not able to process the natural language of the Web in all of its forms; new challenges arise (how can this massive amount of data be interpreted with minimal human intervention, without simplifying its complexity and without overgeneralizing? how can the Web be made a non-toxic place where people can freely share information?), and new systems need to be developed.