Accountability
🕛 Suggested Time: Total 60 minutes
Learning Objectives
Participants will be able to define accountability in the context of digital technology.
Participants will be able to analyze challenges related to accountability in digital technology by connecting them to real-world examples.
Participants will be able to propose solutions for holding stakeholders accountable for technology misuse.
Agenda
Understanding Accountability from an Ethical Perspective.
Understanding Accountability from a Legal Perspective.
Tech Bias Test Activity.
Understanding Accountability from a Psychological Perspective.
Accountability in Tech in Action Discussion.
Take Actionable Steps.
Accountability in Responsible Tech refers to the idea that:
“Individuals and organizations developing, deploying, or using technology are responsible for its actions and consequences. This includes being able to explain and justify decisions made by the technology, as well as being prepared to address any negative impacts that might arise. It's about ensuring that someone is held accountable when technology makes a mistake, harms someone, or fails to meet its intended goals.”
———————————— EU Artificial Intelligence Act
As we navigate the digital world, the need for accountability has never been more urgent. According to the World Economic Forum’s 2024 Global Risks Report, misinformation and disinformation, particularly those driven by AI, ranked as the second-highest global risk, just behind extreme weather.
At the same time, the Federal Trade Commission has raised alarms over Big Tech’s growing data overreach. Companies now collect vast amounts of personal data, including users’ location, biometric details, and browsing behaviors. What’s more concerning is the lack of transparency—policies are often hidden, vague, or difficult to understand. These practices enable aggressive data monetization, where personal information is sold or shared for targeted advertising. Such systems also come with significant security risks, from data breaches to identity theft. Together, these practices reveal a critical need for both personal and institutional accountability in shaping a safer and more ethical digital environment.
Accountability doesn’t emerge out of thin air. It’s shaped, supported, and sustained by the ethical standards we agree to live by. Across professions and communities, people have come together to formalize codes of ethics: shared understandings that define what responsible behavior looks like, provide guardrails before harm occurs, and help establish trust. These codes don’t just set expectations; they remind us that our actions affect others, and that with responsibility comes the need to be answerable for our choices.
We can see this clearly in well-established examples. Nurses recite the Florence Nightingale Pledge, committing to the care and dignity of every patient. Journalists follow a Code of Ethics that calls them to report truthfully and act with independence and integrity. Even science fiction offers a glimpse of ethical foresight—the first law of robotics famously declares that no robot should harm a human being. These examples, though drawn from different worlds, share a common thread: they aim to protect people and ensure that those in positions of power are held to a clear, meaningful standard.
Violating an established code of ethics isn’t just a personal misstep. It can carry serious consequences. In many professions, ethical guidelines are more than ideals; they are binding commitments. When these standards are broken, individuals may face disciplinary actions, loss of professional licenses, or even legal repercussions. For example, healthcare professionals who breach patient confidentiality or act without informed consent can be suspended or sued. Civil engineers who cut corners in safety regulations may be held liable for structural failures. Certified Public Accountants (CPAs) who falsify records can face legal prosecution, while lawyers who violate their duty to clients risk disbarment. These examples remind us that accountability isn’t optional. It is woven into the trust society places in those who serve the public.
A significant challenge:
“There has been a lack of equivalent ethical standards for emerging fields, such as those involving digital technologies, which are only now beginning to develop comprehensive codes of conduct.”
———————————— National Society of Professional Engineers
Quick Discussion
Should software developers and tech companies be held to the same ethical standards as doctors and lawyers?
What are the pros and cons of holding them to such standards?
Unlike traditional professions that operate under clear legal and ethical codes, the tech world lacks a universally accepted rulebook. There is no global framework guiding what ethical technology should look like. Organizations like UNIDO have called for translating global considerations into concrete industrial standards, but such efforts remain aspirational. Voluntary guidelines from bodies like UNESCO and the OECD do exist—emphasizing ideals like safety, fairness, and trustworthiness—but they are not legally binding. As a result, accountability becomes fragmented. What one region or organization labels as “unethical tech” might be perfectly acceptable elsewhere. This inconsistency allows companies to cherry-pick which guidelines to follow, often opting for those that align with their interests rather than those that uphold broader ethical standards. In this uncertain legal landscape, the burden of responsibility often falls on individuals and communities to demand clarity and push for stronger, enforceable norms.
In today’s digital landscape, a handful of powerful corporations, such as Google, Amazon, Meta, and Microsoft, hold extraordinary influence over how we access and interact with technology. These companies dominate the digital marketplace and often operate with minimal external oversight. With few binding regulations to answer to, they are largely free to prioritize profit over responsibility. This has raised serious concerns about fairness and accountability. A Congressional Report on competition in the digital marketplace warns that these dominant platforms may use their power “to destroy competition, exploit other businesses, harm consumers, and impede disruptive innovation.” A small number of players set the rules of the game and benefit the most from them. The public interest can easily be sidelined. True accountability, then, must include not just ethical ideals, but real mechanisms that ensure no company is above the systems meant to keep them in check.
“Move fast and break things” – and face few consequences
The slogan “move fast and break things” has shaped the culture of the tech industry for years. But when systems fail or cause harm, who is left to deal with the fallout? In far too many cases, there are no meaningful legal consequences for unethical or irresponsible uses of technology. Companies often operate without being answerable to anyone, not even the communities affected by their tools and platforms. The Digital Rights Indicators reveal a troubling gap in accountability across the digital ecosystem.
Consider a few examples. In 2018, a self-driving Uber vehicle was involved in a fatal accident. Despite the tragedy, the corporation faced no formal liability. Facebook’s role in amplifying hate speech during the Myanmar crisis contributed to real-world violence, yet no executive was held responsible. Amazon’s experimental recruiting tool was found to systematically discriminate against women. The project was eventually abandoned, but not before quietly revealing how biased systems can shape real lives. These moments are not outliers. They point to a recurring pattern where harm is absorbed by the public, while those with the power to prevent it remain shielded.
Activity: Tech Bias Test
You will be given two color sheets.
Red = Intentional bias.
Green = Accidental bias.
Examples of Tech Bias Discussed
1. Facial Recognition and the Gender Shades Study: Researcher Joy Buolamwini’s "Gender Shades" study highlighted significant biases in commercial facial recognition systems. The study found that these systems had higher error rates for darker-skinned and female faces than for lighter-skinned and male faces, underscoring the need for more inclusive training data and testing. (See: “Study finds gender and skin-type bias in commercial artificial-intelligence systems.”)
2. Mortgage Algorithms Discriminating Against Minorities: A 2021 investigation revealed that algorithmic underwriting engines in the mortgage industry were disproportionately denying loans to Black applicants compared to white applicants with similar financial backgrounds, perpetuating existing racial disparities in homeownership and access to credit. (See: “A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied.”)
3. AI-Generated Images Reinforcing Stereotypes: Studies have shown that AI image generators can perpetuate gender and racial biases. For instance, generative AI models have been found to depict CEOs predominantly as white males, reflecting and reinforcing societal stereotypes present in the training data. (See: “Major AI Bias Examples: Tackling Ageism, Sexism, Racism, and More.”)
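To make the idea of a “bias test” concrete, here is a minimal, hypothetical sketch in Python. The records, subgroup labels, and numbers below are made up for illustration and do not come from the studies above; the point is simply to show the core move behind audits like Gender Shades: measure a system’s error rate separately for each demographic subgroup and compare the gaps.

```python
# Illustrative bias audit sketch (hypothetical data and labels).
# It compares a classifier's error rate across demographic subgroups:
# a large gap between groups is the kind of disparity an audit flags.

from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_label, predicted_label)
records = [
    ("darker-skinned female",  "female", "male"),
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
]

totals = defaultdict(int)   # examples seen per subgroup
errors = defaultdict(int)   # misclassifications per subgroup

for group, truth, prediction in records:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in sorted(totals):
    error_rate = errors[group] / totals[group]
    print(f"{group:25s} error rate: {error_rate:.0%}")
```

In practice, audits of this kind use large benchmark datasets and report several metrics, but the underlying comparison is the same: evaluate performance per subgroup and ask who bears the errors.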
The Illusion of Neutrality
One of the most persistent myths in the digital age is the idea that algorithms are neutral. This belief shapes how many people trust technology. Algorithms are often seen as purely logical, emotionless systems that simply process data and return results. Because of this, many assume that they are inherently fair. The mindset mirrors a familiar phrase—just as some claim that "guns don’t kill, people do," a similar logic is applied to technology: "algorithms don’t discriminate, humans do." This perception creates a dangerous gap in accountability. It allows companies to shift blame away from their systems and say, "it’s the algorithm’s fault," rather than taking responsibility for the outcomes those systems produce. But behind every algorithm is a series of choices—made by people, shaped by values, and driven by goals. Believing in the illusion of neutrality can prevent us from asking hard questions and demanding better answers.
Despite popular belief, technology is not neutral. Every system we interact with is shaped by human choices, social values, and historical patterns. Tools that appear objective on the surface often reflect the assumptions of those who create them. In fact, technologies are socially constructed. The data that fuels them and the decisions embedded in their design are influenced by cultural norms, economic priorities, and unequal power dynamics. As one scholar puts it, technologies “are neither objective nor neutral but outcomes of human deliberation and power struggles.” Reports such as the one from the National Institute of Standards and Technology have echoed this concern, showing how even statistical models carry built-in biases. These biases are not always intentional, but their effects are very real. Whether by accident or design, digital systems can reinforce inequality and legitimize existing power structures. When we talk about fairness in tech, we must look deeper to understand the layers of bias that shape what is built and how it is used.
Discussion: Accountability in Tech in Action
Suggested Time: 5 minutes
Turn to the person next to you:
1. In an ideal world, what does accountability look like to you?
2. Is it simply fixing what's wrong with the world now, or are there other things for us to consider?
Understanding the problems is only the beginning. The next step is asking what we can do about them. Real change requires both collective momentum and individual courage. Whether through large-scale advocacy or small personal shifts, every effort matters. One of the most powerful things you can do is stay informed. Follow trustworthy news outlets, read critical voices in tech, and pay attention to emerging debates in the field. Resources like Tech Policy Press and The Office of Critical and Emerging Technology News can help you navigate these issues with insight and clarity.
Beyond staying informed, consider getting involved. Many organizations are actively working to hold the tech industry accountable and to build more equitable systems.
Whether you choose to advocate, volunteer, or simply share what you’ve learned, taking action sends a clear message: responsible technology is not just a technical issue—it is a social commitment.
Suggested Time: 5 minutes
Key Takeaways
Technology is Not Neutral.
Accountability is a Shared Responsibility.
Challenges Exist, but Solutions Are Possible.
Responsibility requires work and can be inconvenient, but it is necessary in the fight for our own autonomy, freedom, and rights.