August 31, 2024
Cybersecurity is a complex issue, and many experts struggle to create a cohesive and accurate definition of the term. The Department of Homeland Security describes cybersecurity as “the activity or process, ability or capability, or state whereby information and communications systems and the information contained therein are protected from and/or defended against damage, unauthorized use or modification, or exploitation” (Craigen et al. 15). Another definition—by the International Telecommunication Union—describes cybersecurity as “the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyberenvironment and organization and user’s assets” (Veale 2). Both definitions attest to the breadth of the issue, but it is nearly impossible to condense every aspect of cybersecurity into a single definition. Personally, I would define cybersecurity simply: the ways individuals or corporations can protect their cyber property. However, the issue is not cybersecurity itself, but rather cybersecurity threats: anything that breaches cyberspace or preexisting cybersecurity measures.
In regard to cybersecurity threats, most threats fall into a few main categories: malware, social engineering, network attacks, data breaches, insider threats, and advanced persistent threats. The scope of cybersecurity encompasses both these types of threats and the prevention or management of threats, as described in the aforementioned definition by the International Telecommunication Union. The fundamental tenets of cybersecurity are often summarized as the CIA triad: confidentiality, integrity, and availability. If unauthorized users access data (confidentiality), data or processing is tampered with (integrity), or the system becomes unusable (availability), a cybersecurity threat may have occurred.
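To make the integrity tenet concrete, here is a minimal sketch of a file integrity check using a SHA-256 hash; the file path and the timing of the baseline are hypothetical examples, and real systems typically rely on dedicated monitoring tools rather than a script like this.

```python
# A minimal sketch of an integrity check, one leg of the CIA triad.
# The file path is a hypothetical example.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record a baseline while the file is known to be trustworthy...
baseline = sha256_of_file("config/settings.ini")

# ...and compare against it later; any change signals an integrity violation.
if sha256_of_file("config/settings.ini") != baseline:
    print("Integrity violation: the file changed since the baseline was recorded.")
```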
I have personally had to deal with the effects of cybersecurity threats a few times. Most often, I have been a victim of password attacks. For many of my websites and accounts, I have one go-to password that is easy for me to remember yet would be hard for an average individual to guess. However, a few years ago, a website that I had an account with experienced a data breach in which thousands of emails and passwords were leaked. Since many of my accounts used the same email and password, I had a few scares where my accounts were nearly breached. Thankfully, I use two-factor authentication often, so I would receive notices that unauthorized access had been attempted on my accounts and would immediately change my password. While in high school, I also received phishing texts from various sources with messages containing links; I tended to ignore these due to the mindset that if I truly owed money somewhere, that information would be relayed to my parents first, not to me, the teenager.
One of the best measures to address the growing challenge of cybersecurity and to defend against cybersecurity threats is to provide clear and simple education about the subject that is available and taught to the public; many cybersecurity threats rely on people being uneducated in such matters. When it truly becomes common practice not to click on links from unknown senders or download files from unknown websites, because the possible dangers of doing so are widely understood, cybersecurity threats as they are known currently will decrease. The next way to address cybersecurity threats is prevention, which includes anything from installing specific software to using multi-factor authentication for accounts. Each of these prevention strategies adds another protective layer between an individual’s cyber property and a potential hacker, slowing down or completely stopping private information from being stolen or damaged.
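One of those layers, the rotating codes produced by authenticator apps, is built on a simple time-based one-time password (TOTP) calculation. The sketch below illustrates the idea using a hypothetical shared secret; real accounts rely on an authenticator app or hardware key rather than a script like this.

```python
# A minimal sketch of the time-based one-time password (TOTP) math behind many
# multi-factor authentication apps (RFC 6238). The shared secret below is a
# hypothetical example value, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current one-time code from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                  # 30-second time step
    message = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    mac = hmac.new(key, message, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# Both the website and the user's authenticator derive the same short-lived code.
print(totp("JBSWY3DPEHPK3PXP"))
```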
While cybersecurity initially sounds like a daunting issue, the idea can be broken down into types of threats and types of management, all of which deal with access to information in cyberspace. When education on this topic is readily available, people can be confident in responding to cybersecurity threats correctly and keeping their information safe. While it is still a complex issue, cybersecurity is attainable for individuals who complete basic prevention measures and are educated on how to manage threats. Cybersecurity threats can have lasting negative impacts, so it is essential to advocate for widespread education, knowledge, and training on this issue.
Works Cited
Craigen, Dan, et al. “Defining Cybersecurity.” Technology Innovation Management Review, Oct. 2014, https://doi.org/10.22215/timreview/835.
Veale, Michael, and Ian Brown. “Cybersecurity.” Internet Policy Review, 17 Dec. 2020, https://doi.org/10.14763/2020.4.1533.
September 13, 2024
Media literacy and information literacy are two different and important parts of becoming a strong digital citizen, yet they are skills that many people lack. Media literacy includes being able to “decode, evaluate, analyze and produce both print and electronic media” (Koltay 212). In fact, for adults and children alike, “the fundamental objective of media literacy is a critical autonomy relationship to all media” (212). Media literacy includes interpreting messages by recognizing the bias and privilege that the communicator may have, and a “baseline of media literacy [helps] to understand and evaluate the information that you are receiving” (Williams). Information literacy differs in that “information literate people are able to recognize when information is needed… [and the field] emphasizes critical thinking, meta-cognitive, and procedural knowledge used to locate information in specific domains, fields, and contexts” (Koltay 215). In short, information literacy deals with the credibility and reliability of a source.
When discussing media and information literacy, it is important to understand the issues that make these literacies necessary. Specifically in news, there are many cases of either misinformation or disinformation. Whether intentional or unintentional, both can be harmful if users do not view their media sources with the critical lens that the principles of media and information literacy provide. Another facet of the problem is that some employees purposefully post inaccurate information to drive up engagement and earn promotions. Furthermore, it is important to note that many members of younger generations have grown up in a completely digital world, meaning these individuals have been navigating the digital world and its mixed messages for almost the entirety of their lives. However, children may not take the education they have been surrounded with and apply it to their digital lives.
In my own experience, I have received an extensive education on being a good digital citizen who is media and information literate. Before college, my school emphasized being a critical thinker and taking pieces of media with a grain of salt, thoroughly considering who wrote the piece and the agenda they may be pushing through their rhetoric. One of my first memories of this concept in my education was being taught as an elementary student to use Wikipedia wisely; this meant understanding that anyone could change the information regardless of credentials, but that the resource could still be useful as a network of other related resources. At TCU, I have continued to be challenged in this way through classes that give me strategies for evaluating the things that I read, the videos I watch, and the information that I take in from social media sites and the internet so that I can be a balanced individual.
To address the challenge, it is important that media and information literacy education is readily available for all users. In fact, this education should be even more ingrained in the school system and could be an incentivized program for adults who are not in school; it could also be useful if big corporations, like Apple or the New York Times, offered educational information to their users, even if it was through advertisements on their sites. Additionally, people who work in the media field should have salaried jobs, so that they do not have to compromise their professional ethics by spreading misinformation in order to make a living wage.
Media and information literacy are two fields that emphasize skills for navigating the digital world, through decoding messages and evaluating when information is missing, yet they are skills that many people lack. Some companies may be abusing this lack of knowledge to push a narrative, while others may share misinformation to encourage engagement. Because the younger generations have spent most of their lives alongside the digital world, it is important to make sure there are safeguards in place, like offering accessible education or paying writers a salary that is not based on post engagement. Unprecedented digital times call for the public to educate themselves to keep themselves and their families protected from misinformation.
Works Cited
Koltay, Tibor. “The Media and the Literacies: Media Literacy, Information Literacy, Digital Literacy.” Media, Culture & Society, 2011.
Williams, L. Joy. Media Literacy Podcast. Podcast, 2024.
October 9, 2024
Social media is something that many people across the globe use daily. One scholar refers to social media as “a set of online tools open for public membership that support idea sharing, creating and editing content, and building relationships through interaction and collaboration,” and notes that “no research to our knowledge exists that addresses how and in what context social media can be used for open innovation across the entire innovation funnel” (Mount and Martinez 126, 125). Other experts claim that social media is revolutionary because it is “transforming how individuals, communities, and organizations create, share, and consume information from each other and from firms,” but that an “increasing number of incidents” clouds any positive reputation social media has (Baccarella et al. 431). Personally, I believe the issue with social media is that it is developing at such a fast rate that it is difficult to hold organizations or social media companies accountable for their actions in cyberspace.
When discussing social media, it is important to understand the dichotomy between its light side and its dark side. Social media has many positive features: it allows users to distribute content, communicate with one another, form communities and connections through groups, and understand and express their own identities. Additionally, social media is a space where innovation can occur. However, social media also has a dangerous dark side. It perpetuates inappropriate distribution of content, stalking through location tracking or monitoring, aggressive misinformation or disinformation, exploitation, abuse and intimidation, bias and harmful stereotypes, and shaming and defamation. It is also important to pay attention to the fact that a few brands have created a monopoly in the industry, making accountability difficult and leaving little room for new, more fairly run companies to form.
In my own experience, I have seen the negative impacts of social media on friends around me. I have witnessed first-hand the dangers related to location stalking and the ways that people misuse that information. On a frequent basis, I see flagrant displays of intentional misinformation and disinformation on social media, which leads to fearmongering, propaganda, and aggressive online discourse. Additionally, I have seen classmates and peers fall victim to the trends of overconsumption presented through social media. However, amid all of these issues, I, like many others, still use these platforms because there are few other options. I have built a large network that I would be unable to recreate on smaller, more ethical platforms because of the monopoly that mainstream social media companies hold over the industry.
In order to address the challenge, one of the easiest measures would be to implement more education. Many people fall victim to the challenges of social media simply because they are uneducated about the potential problems and risks. This could look like mandatory disclosure statements on platforms before first use, segments on daily broadcast news, or an increase in research and documentaries about the effects of social media. Along with more education, there also need to be stronger safety regulations to keep users and creators safe, both physically and virtually. These safety regulations could be implemented and enforced by an independent safety organization or written into legal codes. Currently, the field is governed by informal expectations and vague guidelines instead of clear, enforceable rules.
Social media is a tool, but like many tools, it comes with a plethora of issues and problems. First, for every bright and positive moment of social media, there is a dark moment lurking in the shadows; however, only the positive aspects are ever showcased to the public. Many people in my own life have fallen prey to the vicious tactics employed on social media platforms, yet many feel stuck because large corporations hold a monopoly over an industry that feels essential to daily life. However, through public education and stronger safety regulations, these problems with social media can hopefully be addressed, so that future generations can truly view social media as a tool instead of an issue.
Works Cited
Baccarella, Christian V., et al. “Social Media? It’s Serious! Understanding the Dark Side of Social Media.” European Management Journal, vol. 36, no. 4, 2018, pp. 431–38, https://doi.org/10.1016/j.emj.2018.07.002.
Mount, Matthew, and Marian Garcia Martinez. “Social Media: A Tool for Open Innovation.” California Management Review, vol. 56, no. 4, 2014, pp. 124–43, https://doi.org/10.1525/cmr.2014.56.4.124.
October 18, 2024
Generative AI is changing the game for the professional world in many positive ways, but also in negative ones. “Generative AI is a distinct set of techniques within the larger AI field that generate something new: images, text, even video;” however, there are problems and “unresolved questions around permission to use existing images to train generative models and the ownership of the resulting images” (Stratis). While traditional artificial intelligence is rule-based and built to solve problems or recognize patterns, generative AI focuses on creation. Generative AI is also changing more quickly than previous models and “can digest more complex inputs, and can produce humanlike output, making Generative AI more versatile and scalable than prior innovations in AI and machine learning” (Eisfeldt). Personally, it seems as if generative AI is so innovative that it is difficult to make sure there are proper preventative measures in place to protect the public, creators, and organizations from any potential harms of AI.
There are two main types of generative AI platforms widely recognized across the field: image generators and text generators, otherwise known as large language models. Image generators create an image based on a text prompt; prompt engineering comes into play here, as the clearer and more specific the prompt is, the more closely the image matches the wishes of the requester. Requesters can ask for specific images created or edited in the style of images found in the platform’s underlying data or in an image uploaded by the user. Text models are used in similar ways, as they respond to user-generated prompts, except they produce text or language instead of images. However, there are also some issues within these generators, such as hallucinations and jailbreaking. Hallucinations occur when generative AI platforms confidently give users incorrect answers to their questions or requests. Jailbreaking is a specific, harmful type of prompt engineering in which users attempt, and sometimes succeed, in bypassing a generative AI platform’s restrictions to create inappropriate results or responses.
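As one illustration of how a text generator is typically used, here is a minimal sketch of sending a prompt to a hosted large language model through the OpenAI Python client. The model name and the reliance on an OPENAI_API_KEY environment variable are assumptions for this example, and because of hallucinations, the returned text should still be checked against a trusted source.

```python
# A minimal sketch of prompting a text generator (large language model).
# The model name is a hypothetical choice, and the client reads the API key
# from the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model for illustration
    messages=[
        # A clearer, more specific prompt generally produces a more useful answer.
        {"role": "user",
         "content": "Summarize the main points of this chapter, and say so if you are unsure."},
    ],
)

# The model's reply; hallucinations mean this text still needs human verification.
print(response.choices[0].message.content)
```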
Personally, I do not have much experience with artificial intelligence, whether discriminative AI models or generative AI models. Occasionally, I have used platforms like ChatGPT to help with brainstorming ideas or summarizing books; however, every time I have used ChatGPT, I have experienced hallucinations. For instance, I asked the platform to list the main points of each chapter before a discussion in one of my literature classes, but the results were confidently incorrect (which I knew because I had actually read the chapter and knew the correct names of the characters involved). On the other hand, generative AI platforms have been useful for me in non-academic settings, such as asking a platform to create a meal from ingredients I have in my fridge or helping me brainstorm ideas for a friend’s birthday banner design. I also frequently watch my peers and colleagues use generative AI platforms during class discussions to form questions to ask the class.
The most difficult and unresolved issues regarding generative AI platforms seem to be the ethical ones. One solution to prevent copyright and ethics issues would be to create an authoritative way to ensure that online art (whether image or text) is given a standardized credit identifying whether it was created by an artist or an AI platform. Of course, this “stamp of approval” would have to be something that future generative AI platforms would be unable to replicate on their image or text creations; a sketch of how such a stamp might work follows below. Additionally, in order to best prepare generative AI platforms to deal with jailbreaking attempts, it could be beneficial to test the platforms with red-team style testers who purposefully use jailbreaking techniques to ask for inappropriate content, followed by software engineers teaching the platforms that those prompts are also inappropriate and should not be fulfilled.
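One way such a non-replicable credit could work is with a digital signature held by a trusted registry: the registry signs a fingerprint of the work, and anyone can verify the signature, but no platform can forge it. The following is a minimal sketch assuming the cryptography library and a hypothetical registry keypair; it is an illustration of the idea, not a description of any existing provenance standard.

```python
# A minimal sketch of a cryptographic "stamp of approval" for a piece of media.
# The registry keypair and the artwork bytes are hypothetical examples.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

registry_key = Ed25519PrivateKey.generate()       # held only by the trusted registry
registry_public_key = registry_key.public_key()   # published so anyone can verify

artwork = b"bytes of an image or text file submitted by a human artist"
fingerprint = hashlib.sha256(artwork).digest()    # unique fingerprint of the work
credit_stamp = registry_key.sign(fingerprint)     # the credit no platform can forge

# Later, anyone can check that the registry really vouched for this exact work.
try:
    registry_public_key.verify(credit_stamp, fingerprint)
    print("Credit verified: this work carries a valid artist/AI registration.")
except InvalidSignature:
    print("Credit invalid: the stamp does not match this work.")
```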
Overall, generative AI differs from traditional discriminative AI platforms in that it uses its underlying data to create new content, most commonly through image generators and large language models (text generators). The technology is so innovative that it is difficult to prevent issues before they arise. While I am not a personal expert in generative AI, I have experimented with text generators like ChatGPT. In order to respond to the ethical issues, it is necessary to protect human creators by implementing a standard credit for artist and AI work, and to protect human users by training AI platforms to recognize maliciously engineered prompts through deliberate testing. Generative AI is changing the world, so it is important to stay up to date on the issues and potential solutions.
Works Cited
Eisfeldt, Andrea L., et al. Generative AI and Firm Values. National Bureau of Economic Research, 2023.
Stratis, Kyle. What Is Generative AI? 1st ed., O’Reilly Media, 2023.