VC7

What problems has Facebook experienced when trying to address the problem of fake news? How is it using human beings and technology to address the problem? Investigate the firm's current attempts to curtail negative content online - Harm Test

Facebook has experienced a growing volume of fake news as the platform itself has grown. Moderating that content is difficult given the sheer number of users and the inconsistency involved in judging the authenticity of news. User behavior compounds the problem: content often continues to spread quickly even after it has been flagged or reported. Facebook's current approach combines machine learning algorithms, human fact-checkers, and user reporting. The algorithms automatically flag potentially false content, which allows accuracy to be assessed faster and at far greater scale. Independent fact-checking organizations then review flagged stories, further reducing the share of fake news on the platform. Community guidelines and content policies round out these efforts by setting expectations for users and helping keep content accurate.
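
To make the hybrid workflow above concrete, here is a minimal sketch in Python of how posts might be triaged among automated flagging, human fact-checkers, and user reports. The thresholds, the scoring function, and the keyword heuristic are illustrative assumptions for this example, not Facebook's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumed values, not Facebook's.
AUTO_FLAG_THRESHOLD = 0.9   # model is confident the post is false
REVIEW_THRESHOLD = 0.6      # uncertain: route to human fact-checkers
REPORTS_ESCALATION = 5      # user reports that force a human review

@dataclass
class Post:
    text: str
    report_count: int = 0  # how many users have reported the post

def predict_falsehood_score(post: Post) -> float:
    """Stand-in for a trained classifier that scores a post from 0 to 1.

    A real system would use a learned model; this toy version just
    counts sensational keywords so the example runs end to end.
    """
    keywords = ("miracle cure", "they don't want you to know", "100% proven")
    hits = sum(kw in post.text.lower() for kw in keywords)
    return min(1.0, 0.4 * hits)

def triage(post: Post) -> str:
    """Route a post: auto-flag, send to human review, or take no action."""
    score = predict_falsehood_score(post)
    if score >= AUTO_FLAG_THRESHOLD:
        return "auto-flagged"              # the machine acts alone at scale
    if score >= REVIEW_THRESHOLD or post.report_count >= REPORTS_ESCALATION:
        return "sent to fact-checkers"     # humans make the final call
    return "no action"

if __name__ == "__main__":
    post = Post("Miracle cure they don't want you to know about, 100% proven!")
    print(triage(post))  # -> auto-flagged (all three keywords match)
```

The design point the sketch captures is the division of labor described above: high-confidence cases are handled automatically at scale, while borderline scores and heavily reported posts escalate to the slower but more accurate human fact-checkers.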

Facebook currently addresses negative content such as hate speech, harmful material, and general misinformation through human moderation, community reporting, and machine learning. Machine learning provides much broader content coverage, especially with the introduction and growth of artificial intelligence. Human moderators, while slower and operating at a smaller scale, can make more in-depth decisions when reviewing potential violations of community standards. Among the ethical tests available, the harm test would aid the evaluation of content by uncovering material that promotes violence, misinformation, or discrimination. Applying it across the platform would help Facebook identify content that fails the harm test and restrict it more aggressively going forward. Its current methods are effective, but as the platform continues to expand and add new capabilities, its policies will need to be updated to keep content regulated.
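
As a rough illustration of how the harm test could be operationalized, the sketch below screens content against harm categories and restricts anything that fails. The category names, keyword markers, and the moderate() helper are simplified assumptions for demonstration; a production system would rely on trained classifiers per category rather than keyword matching.

```python
# Simplified harm-test screen. Categories and markers are illustrative
# assumptions chosen to mirror the three harms named above.
HARM_CATEGORIES = {
    "violence": ("attack", "kill", "hurt them"),
    "misinformation": ("hoax", "fake cure", "rigged"),
    "discrimination": ("those people", "don't belong here"),
}

def harm_test(text: str) -> list[str]:
    """Return the harm categories a piece of content fails, if any."""
    lowered = text.lower()
    return [
        category
        for category, markers in HARM_CATEGORIES.items()
        if any(marker in lowered for marker in markers)
    ]

def moderate(text: str) -> str:
    """Restrict content that fails the harm test; otherwise allow it."""
    failures = harm_test(text)
    if failures:
        return f"restricted (failed harm test: {', '.join(failures)})"
    return "allowed"

if __name__ == "__main__":
    print(moderate("This vaccine is a hoax, spread the word!"))
    # -> restricted (failed harm test: misinformation)
    print(moderate("Enjoying a sunny day at the park."))
    # -> allowed
```

The value of framing moderation this way is that each restriction decision carries an explicit reason (which harm category failed), which supports the more restrictive and better-documented policies the paragraph above argues Facebook will need as it expands.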