Manipulated content differs from most types of disinformation because there are generally some elements of truth to it. Content becomes manipulated when a picture is altered or edited to fit whatever story is being pushed. The photo to the left uses a common media trick: taking a photo from a specific angle to make the gathering around the politician or reporter appear much larger than it really is (WPSU - Penn State Public Media).
Media deception like the picture above, taken from coverage of Theresa May, is an extremely common practice in media coverage. Another form of manipulated content is a photo edited so heavily that it looks completely different from the original. With editing software such as Photoshop being easy to access and use, it is easy to be fooled on social media by an edited post, such as the picture on the right, which took a photo of Hong Kong politician Junius Ho and changed it entirely (National Library Singapore).
A large-scale example of manipulated content came from Facebook itself. In an unethical study, "Facebook manipulated the content seen by more than 600,000 users in an attempt to see if they could affect their emotional state. They basically skewed the number of positive or negative items on random users’ news feeds and then analyzed these people’s future postings. The result? Facebook can manipulate your emotions" (Birkett, 2022).
"The information presented by AI-powered information architectures is not neutral; it is personalized on the basis of the data collected from users. This means that two people who enter the same term into a search engine will probably be shown different results. That can be helpful if, for example, we want to look up a restaurant and the search engine displays hits in our neighborhood at the top of the list, rather than a restaurant with the same name on the other side of the world. But if we are shown news or political content solely on the basis of our preferences, we risk finding ourselves in a filter bubble where we are no longer exposed to any other opinions.
The research team sees false and misleading information as another challenge for people online. Videos and posts propagating conspiracy theories and unverified rumors can spread rapidly through social media, causing real harm. For example, people may decide not to get vaccinated due to misinformation about vaccines, putting themselves and others at risk" (UK Parliament).
Like the Facebook example above, much of what we see on social media platforms is intended to make us feel a certain way. The best advice for avoiding manipulated content is not to quit social media outright, but rather to be less reliant on it as a source of information (Birkett, 2020). Don't blindly trust every post online; before you formulate an opinion on a topic, be sure to do some research of your own and gather your own information.
The examples above involving edited imagery reinforce the lesson that a fair amount of skepticism is advisable when using the internet. While it may be hard to identify an edited photo, one of the best ways to check whether a photo is legitimate is to run a reverse Google image search. If for whatever reason a photo seems a bit off, or looks like a "clickbait" post, it probably is (National Library Singapore).
To summarize, the best way to avoid and identify manipulated content is to remain somewhat skeptical of what you are seeing. Ask yourself questions, do your own research, and never blindly trust any single source online to have the answers you are looking for. As with all types of misinformation, consult a diverse spread of sources to ensure that the information you are using is legitimate (UK Parliament).