This task is reminiscent of the Twine task: a sort of choose-your-own-adventure, where each word choice leads to a different stem or option. I chose the stem “Every time I think about our future” because I was interested to see what would come up: would the suggestions be specific to me, or generic to society at large?
Every time I think about our future…
we, I, plans
The three choices above each lead to a different “door,” similar to a “choose your own adventure.”
It did not take long using predictive text to recognize patterns; before I had written about 100 words, I was already seeing commonalities. The first thing I noticed about the predictive word choices is that they are mostly simple, easy words and phrases. I’m not sure how else to describe them, so I have provided a few examples below. I would consider them high-frequency words in text, and they feature a lot of pronouns. This makes sense to me, as I understand this program to be general in nature. It would not make sense for words specific to me to come up, such as: Every time I think about our future I can see… puppy/baby/Christmas/hockey. I think I would be slightly freaked out if those options, important to me, came up. Early in the writing, I began to notice a pattern. For example, below are some more of the predictive text choices:
Next, day, same
I, and, of
School, home, dinner (probably the most specific I encountered during this exercise)
We, I, then
Went, got, had
Anything, me, to
In context, this is what it looked like to me: Every time I think about our future I can see that the same time of my day:
And, haha, is (my choices).
Interestingly, only “is” would make sense in the sentence. I am surprised by some of these predictive word choices. Their simplicity allows the sentences to carry on, but I am not confident that they make much sense. I chose “is” at this stage because I wanted to finish my sentence (Every time I think about our future I can see that the same time of my day is going well) and see what would happen at the start of a new sentence. I wanted to start a new sentence because my opening sentence really does not make much sense. My next sentence is “I guess that’s why it is really good.” Again, the predictive word choices are common, high-frequency words. At this point, I was tempted to restart and choose a different “door” right from my first sentence starter. I wonder how my microblog story would change if I changed my beginning sentence stem from “I” to “we” or “plans.” I decided against this, as I felt I had to be loyal (in a sense) to the pattern I had started to create.
My next sentence was interesting: my first emoji choice came up, so I had to choose it:
I think (thinking emoji)
Is, was, gonna
None of the next three choices made sense, so I ended the sentence with “I think.”
There were a few sets of words presented multiple times as sentence starters, following a period:
Ya, I, and thanks
Don’t, guess, and think
Midway through my microblog, I decided I wanted to try to manipulate the pattern (in a way), or at least figure it out, instead of just pressing random buttons. Random choices would lead to nonsense; I experienced that with my first sentence.
I had already used “I think” and “I don’t,” so I wanted to see where “I guess” would lead. “I guess...” led to the longest predictive-text word: tomorrow. But, interestingly, that led me back to “ya, I, and thanks” as sentence starters. A second emoji came up for me, so I took this as an opportunity to end the sentence (without a period) so I could continue with different predictive word choices. I considered the school emoji a period.
The types of high-frequency words generated by this platform are generic for a reason. The microblog statement I’ve created would not be seen in blogs, novels, and definitely not academic articles (unless it was used as an example in a study). I thought this question was interesting, so I asked myself: where would I see this or similar patterns? This may be a bit of a stretch, but what comes to mind is that this type of statement might come from someone who uses speech-to-text or dictation software. Those tools tend to lean on high-frequency words as well; proper nouns or slang may be replaced with a more common word, as the software is trying to be predictive. It makes sense. This thought is reminiscent of Task 3: voice to text.
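To see why function words dominate, here is a rough illustration (not the phone’s actual algorithm, and the sample text is invented): counting word frequencies in even a short stretch of casual text shows pronouns and other small connecting words rising straight to the top.

```python
# Count word frequencies in a small made-up sample of casual texting
# language, to illustrate why suggestions skew toward pronouns and
# other high-frequency "function words."
from collections import Counter

sample = (
    "i think we should go to the dinner and then i guess we can go home "
    "i don't know if i wanna go but i think it is going to be fun"
)

counts = Counter(sample.split())
print(counts.most_common(3))
```

Even in this tiny sample, “i” and “go” outnumber every content word, which mirrors the “we, I, plans” style suggestions above.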
This text does not sound like me at all. That is not surprising, because it is not personal. There is also no punctuation besides periods: no commas, dashes, parentheses, exclamation points, or question marks. Interestingly, the software knows I am sending a text. A text is an engaging conversation between two people, so I am curious why question marks do not come up, or predictive words that build toward a question. Another surprise is the word “y’all”: why did this come up? Have I used this word before? The word “y’all” would go against my high-frequency, common-word argument, although this may be a one-off. Another interesting set of predictive words was “gonna” and “wanna.” Are these high-frequency words? I would consider them slang (ish) in the English language. I consciously make an effort with my grade 6/7 students to say “going to,” because “gonna” is an informal contraction. This has made me think that informal words are more frequent in text, and possibly in speech. We would not see these words in formal writing settings such as a novel (unless it was conducive to a character).
In O’Neil’s article, “How can we stop algorithms telling lies?” she says, “to train an algorithm you need to provide historical data as well as a definition of success.” This quote stuck with me, and it makes me wonder about “training” my phone’s predictive text. If I used the predictive text model more often, would the algorithm change? Would the high-frequency words give way to words and phrases more specific to me?
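One way to picture “historical data plus a definition of success” is a minimal next-word predictor. This is only a sketch under simplifying assumptions (real keyboards use far more sophisticated models, and the `history` messages here are invented): it counts which word follows which in past messages, and “success” is simply suggesting the most frequent followers.

```python
# A minimal sketch of training next-word prediction on "historical data":
# tally, for each word, the words that have followed it, then suggest the
# most frequent followers.
from collections import Counter, defaultdict

history = [
    "i think we should go",
    "i guess that is fine",
    "i think it is good",
    "we can go to school",
]

# bigram counts: followers[word] maps each next-word to how often it occurred
followers = defaultdict(Counter)
for message in history:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def suggest(word, n=3):
    """Return up to n of the most frequent words seen after `word`."""
    return [w for w, _ in followers[word].most_common(n)]

print(suggest("i"))  # suggestions reflect whatever the history contained
```

In this toy model, adding more of my own messages to `history` would shift the suggestions toward my vocabulary, which is exactly the question about whether the phone would adapt to me over time.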
This was my first attempt at using predictive text. I’ve always seen the options on my phone but never used them. I wonder if anyone else feels the same? I also don’t generally use shortcuts (with words or phrases) in my writing. This may or may not have come from being an English major. Not sure!
The entire topic of algorithms is very interesting. I mentioned in a previous post my reflections and learnings from watching “The Great Hack” and how surprising it was to me. I am guilty of being a bit naive on the subject, and O’Neil’s article affirmed this for me. Algorithms stretch from predictive text (a choice between ya, I, and thanks...) all the way to much larger systems used for predicting crime. As I was reading, I would whisper to myself, “hmm,” or “no way.” In her article she explains the PredPol model, an algorithm developed to predict crimes geographically. Interestingly, what they found was that although it is targeted geographically, allowing for more police presence in high-crime areas, it actually ends up targeting individuals. This is because the data feeds a predictive model (just as in the text message platform). The police are drawn to the neighbourhoods where they are needed more, so patrolling increases; however, this also amounts to an increase in down time. This means the police are in these areas and may “happen to see” some crimes occur that they would not otherwise have been present to witness. The connection I’ve seen is that the algorithm is not perfect. In the case of PredPol, “The result is that we criminalize poverty, believing all the while that our tools are not only scientific but fair. And computers, for all of their advances in language and logic, still struggle mightily with concepts, like beauty, friendship, and yes, fairness.”
Additionally, on Episode 140, “Machine Bias,” of the You Are Not So Smart podcast, they discuss predictive patterns in relation to gender. The example given is a sentence about a nurse, which the predictive text follows with the pronoun “she.” A.I. expert Damian Williams claims that bias is simply a fact: the future is predicted based on the past, and therefore biases creep in. In my predictive text message, I did not come across bias. Given the two examples discussed above, it makes sense to me that algorithms can be critiqued as unethical with regard to bias. This falls under “data ethics,” something to continue to research and learn about in the future.