My generated sentence:
This sentence didn't really sound much like me (at least what I think I sound like in my own head). Grammatically, most of the predictive suggestions were articles, prepositions, pronouns, and common verbs. It sounded like a vague teaching philosophy one might see on a generic teaching Twitter account or Pinterest post. Nothing substantive, because it didn't have proper nouns to choose from, and it didn't offer much variety of verb choice once you got going on a string. The algorithm can't predict your thoughts or write in a revealing and creative manner. It can only say that after a verb, one is most likely to use a conjunction or adverb, and offer up "and," "so," or "or."
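Apple doesn't publish how the iOS keyboard actually ranks its suggestions, so purely as an illustration of the behaviour I noticed, here is a minimal sketch of a bigram (word-pair) frequency model; the tiny corpus and the function names are my own invented stand-ins, not anything from Apple.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which across a training corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word, k=3):
    """Offer the k most frequent followers of the previous word."""
    return [word for word, _ in counts[prev_word.lower()].most_common(k)]

# Hypothetical three-line corpus; a real keyboard trains on far more text.
corpus = [
    "we learn and we grow",
    "we learn so we teach",
    "we learn or we forget",
]
model = train_bigrams(corpus)
print(suggest(model, "learn"))  # ['and', 'so', 'or'] -- generic connectives
```

Even this toy version shows the pattern: with only frequency to go on, the "safest" suggestions after a verb are exactly those bland connectives.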
Those generic one-liner pearls of wisdom on education Twitter or Pinterest do often seem to me to boil down to exactly that: one-liners that sound positive, but likely have no connection to the realities of one's varied, dynamic, and unique classroom and teaching situations.
I also noticed it did not offer up any profanity. I composed my sentence using Twitter, as it has a built-in 280-character limit. It was obvious the predictive text algorithm was not using machine learning based on my own written patterns in Twitter, as I'm fairly certain my disappointment with the current "where we are going as a society in Alberta" question has been expressed with non-scholarly language. I'm not sure if predictive text resides in each app differently as part of the app's code, or if it is a global function of iOS whose rules come from Apple's algorithms. After composing my sentence, I tried to recreate it using Messages, but didn't get the same suggestions, and couldn't make the exact same sentence unless I cheated on some words. I'm not sure, without further research, why that was: app-specific predictive text, or something else.
Perhaps the difficulty in creating a predictive-text sentence about an opinion-based discussion is that one's opinion can't be cultivated from the stem. I use Gmail's predicted responses ALL the time. I get an email, Gmail reads it, and it gives me three response choices based on the ending question or statement in the email; at least that seems to be how it works, just based on observation. It's really good. I feel like I choose one of the suggested responses often; I'd actually be curious to know how often. However, these are only good for one-liner responses. "Will you be at the meeting?" "Yes, I'll be there."
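Google hasn't made the internals of its suggested replies public (it's described as a learned neural model), so the following is only a toy sketch of the behaviour I observe: key a few canned one-liners off the closing question. All patterns and replies here are invented stand-ins, not Gmail's real system.

```python
import re

# Hypothetical canned replies keyed on the closing question -- invented
# stand-ins, not Gmail's actual (neural) model.
CANNED = [
    (r"\bwill you be\b",    ["Yes, I'll be there.", "No, I can't make it.", "I'll let you know."]),
    (r"\bcan you send\b",   ["Sure, sending it now.", "Yes, I'll send it today.", "Sorry, I don't have it."]),
    (r"\bdoes that work\b", ["Yes, that works.", "That works for me.", "No, that doesn't work."]),
]

def suggest_replies(email_body, k=3):
    """Match the last sentence of the email against known question patterns."""
    sentences = re.split(r"[.!?]\s*", email_body.strip())
    last = next((s for s in reversed(sentences) if s), "")
    for pattern, replies in CANNED:
        if re.search(pattern, last.lower()):
            return replies[:k]
    return []  # anything longer than a one-liner falls through to the human

print(suggest_replies("Hi Katie, quick check: will you be at the meeting?"))
```

Even this crude version captures the failure mode: anything longer than a one-liner hits no pattern, and the reply falls back to the human.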
When I get an inquiry about a student's performance, sometimes I can use a generated start to the email, "Thank you for your inquiry." But all the text after that has to be created by me. It has to be personalized.

It reminds me of when you call a customer service line now and get the computerized agent who asks what you need help with. If you have an issue more complicated than "hours," "locations," or "balance inquiry," you have to yell "speak to a representative," though sometimes only after going in circles first. Very frustrating. I often wonder how the short time saved in not answering a call about "hours" translates into better business than the customers who may leave the service out of frustration with horrible service. I quit Airmiles because of how their computerized call centre worked, and changed my credit card to a cashback card so I'd never have to call Airmiles again. I suppose they have data to show whether it improves or worsens productivity/performance overall, but it seems hard to believe that some of these automated systems don't turn customers off.

Where it's concerning is when it's used in a service you can't say no to or quit. I can switch banks or drop a loyalty program, but I can't quit government services. Canada Revenue, for example. Their system for getting into your account each year to Netfile is terrible. It's so secure, I have trouble verifying my own identity. I'd totally quit it if that were an option, but it's not efficient to file another way, so it's a frustration I put up with every February. It reminds me a bit of the Inyerface game. I have gone around in circles of "verify through a partner institution" many times. I get super frustrated, yet I would consider myself a digital native with a strong grasp of the English language and of how websites work. I feel a lot of empathy for users with a first language other than English or French, or people who are not digital natives - the elderly, for example. And I think the difficulty of the system leaves them more susceptible to being scammed or preyed upon. My grandmother, for example, would be totally afraid of using the CRA site. She would ask someone else to do it for her, and hand over all her personal info. She was more afraid of the computer program than of a person. It left her dependent on another, who hopefully was trustworthy, but might not be.

I wonder about scammers using algorithms. Can a phone scammer generate a list of phone numbers likely to be associated with the elderly? If Facebook can find them and send them "customized" ads, can scammers target them with customized scams? The elderly at least are apprehensive, I think (based on visiting my grandparents, their friends, etc.). But our students have a very false sense of trust in the Internet and technology. My students are not one bit worried about what they post, who they give their info to, how much they share, etc. That's not their fault. Some have had a social media presence since they were in the womb and debuted on Facebook as a sonogram image. They believe that as long as they don't fall victim to online pedophiles, everything else is fine. Even when discussing how social media posts might affect future hireability, I have been told, "Miss, if they judged us all on our Snaps, then no one would get hired." And since who is using algorithms, and how they are being used, is opaque, as discussed in one of the podcasts/videos, I don't even know if their argument might be true. Are Canadian universities mining applicants' data to determine acceptance?
Are Canadian employers? Are our insurance companies? Without more openness, it's almost impossible for the ordinary citizen to know. If you are concerned, are you paranoid? And if you aren't concerned, are you being naive? I suppose that is where we hope we have ethical and unbiased research happening, and ethical, fair journalistic reporting to inform us of that research. But my real sentence for "As a society we are" would very much have gone in the direction of "increasingly unable to trust in the integrity and fairness of our media." Are we just at the mercy of whoever has the biggest budget for targeted information influence? I'm glad the tone of the (second, I think) podcast ended on a positive, hopeful note, as I'm naturally a little more cynical, and would drift toward the "it's very scary" side when thinking about these issues.
Cheers,
Katie
Question: In what textual products have you read statements like the one you generated? In blogs? Academic articles? Magazines? Novels? How are these generated statements different from how you would normally express yourself and/or your opinions on the matter you wrote about? Did the statement you generated speak in your "voice"—did it sound like you? Why or why not? Reflect further on the use of algorithms in public writing spaces and the implications this might have in various arenas (politics, academia, business, education, etc.).