Predictive Text

"Education is not about finding the right direction to go with the following."

I have read statements like this mostly on social media (Facebook, YouTube) and in magazines, though some academic articles (particularly in education) exemplify similar rhetoric. Normally, if I were to share such a sentiment, I would provide more context and an affirmative alternative position rather than simply a negative one. The statement above is actually the shortest piece of auto-generated text I was content with, as I could at least somewhat recognize my voice within it. Longer auto-generated passages descended into circular sentences that were verbose and confusing, and although I have a tendency towards verbosity, the lack of grammatical and conceptual organization did not sound like how I typically present my ideas. Nevertheless, though the algorithms and artificial intelligence platforms we live with today could still be categorized as relatively simple (Alexa still struggles to turn my living room light on if I don't phrase the request the right way), they are continually becoming more sophisticated, and it is conceivable that such synthetically mediated writing, or "artificial text," might become reality within our lifetimes.
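For readers curious how such auto-generated text can arise at all, a minimal sketch is an n-gram (Markov chain) predictor: it records which words tend to follow each short prefix in a body of text, then extends a seed phrase by sampling from those observed continuations. This is only an illustrative toy, not the actual model behind any commercial predictive-text feature; the function names and the two-word prefix length are my own choices.

```python
import random
from collections import defaultdict

def build_model(text, n=2):
    """Map each n-word prefix to the list of words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])].append(words[i + n])
    return model

def generate(model, seed, length=12):
    """Extend a seed tuple of n words by repeatedly sampling a likely next word."""
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:
            break  # dead end: this prefix never occurred in the source text
        out.append(random.choice(choices))
    return " ".join(out)
```

Because the model only ever echoes word sequences it has already seen, its output tends toward exactly the circular, locally plausible but globally incoherent prose described above.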

An example of such artificial text was explored in the "Be Right Back" episode of Netflix's Black Mirror, which I highly recommend, where a user's text data is analyzed by artificial intelligence so that it can simulate that person's typical text-based discourse with another person even after they have died. The implication for the "various arenas" of society is that the true origin of any creative work becomes questionable: how much of an exemplary essay was truly created by the author, and how much was mediated by an algorithm leading the user down a particular rhetorical rabbit hole? Yet true creativity is defined by departure from the predictable, and in this sense, though the bland writings of most users may proliferate in our technologically enhanced society, the works of truly creative authors and makers will stand out all the more clearly against the technologically typical works of the masses.

What concerns me most is not the saturation of the market with bland artificial texts; rather, I am more concerned with how unlike me my auto-generated text was, and with how larger organizations such as employers, law enforcement, or government agencies might similarly misjudge me or other users were they to have access to the same kind of data. As O'Neil (2017) describes, algorithms that include too broad a swath of data (failing, for example, to distinguish violent crimes from petty ones) falsely diagnose areas as high risk; what if less morally scrupulous agencies were to use these tools to identify potential opposition or values they opposed? Unless these systems are so well designed that they can appreciate the subtlest nuances of human dialogue, we risk falling into a surveillance state where the most basic expressions of dissent could lead to one's classification as an extremist or "high-risk" individual. Worse still, the more these algorithms strive to hold our attention, the more likely they are to expose us to increasingly extreme content; in essence, the same platforms that radicalize us to keep us engaged could also be used by other private or public agencies to condemn us.
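The broad-swath problem O'Neil describes can be made concrete with a toy comparison. This is my own invented illustration, not O'Neil's actual data or model: the incident lists, offense names, and severity weights are all hypothetical. A score that simply counts incidents ranks a neighborhood with many petty offenses as far riskier than one with fewer but more serious offenses; distinguishing severity reverses the ranking.

```python
# Hypothetical incident logs for two areas: (offense, severity weight).
# All entries and weights are invented for illustration only.
area_a = [("vandalism", 1)] * 40 + [("assault", 10)] * 2  # many petty offenses
area_b = [("vandalism", 1)] * 5 + [("assault", 10)] * 6   # fewer, more serious

def naive_risk(incidents):
    """Broad-swath score: every incident counts the same."""
    return len(incidents)

def weighted_risk(incidents):
    """Score that distinguishes petty from violent offenses."""
    return sum(weight for _, weight in incidents)

# naive_risk ranks area A far "riskier" (42 vs 11),
# while weighted_risk reverses the ranking (60 vs 65).
```

The point is not the particular numbers but that the choice of what to lump together is itself a policy decision hidden inside the algorithm.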

To me, the only solution seems to be increased regulation; ordinary humans cannot be expected to resist super-intelligences armed with such vast knowledge of human behaviour to manipulate us with, and we have already seen how profoundly these systems can influence our society when used simply for political advertisement. Though I won't go so far as to say that such development should be banned outright, as this is neither realistic nor preferable (a more profound understanding of human nature does have benefits to offer for our self-understanding, IF we can all have access to our data), we ought to have strong regulations governing when and how our data can be used by the forces that govern us. In essence, it's time for "algorithmic negligence" to become part of our societal lexicon so that the freedoms of speech and assembly don't have to leave it.

References

O’Neil, C. (2017, April 6). Justice in the age of big data. Ideas.TED.com. Retrieved June 18, 2019, from https://ideas.ted.com/justice-in-the-age-of-big-data/