Tech companies face increasing criticism for behaviors that many consider unethical, ranging from how they design their products to the clients and partners they choose. Some organizations, such as the Markkula Center for Applied Ethics, have developed ethical frameworks, toolkits, and checklists to help organizations build ethics into their culture and processes, while some tech companies have announced their own ethical design principles. Which, if any, of these frameworks, toolkits, and principles actually result in more ethical decision-making and products, both in academic pedagogy and in industry? And what do we even mean by “ethics,” “bias,” and “fairness” in the first place? In this workshop, we will bring together industry, academia, and social advocacy groups to develop concrete definitions for these terms, compare and contrast the various tools, gather practical examples from organizations that have actually implemented them, and identify the best ways to evaluate their successes and failures.
The last few years have seen a flurry of headlines about companies making ethically questionable decisions, including how tech companies conduct research with their users (e.g., Facebook’s social contagion experiment) and how they design their products (e.g., creating addictive experiences). More recently, employees have spoken out about the products they are building (e.g., Google employees pushing to end the government contract Project Maven) and about who their companies do business with (e.g., Salesforce employees concerned about U.S. Customs and Border Protection as a customer).
Technology, especially artificial intelligence, is impacting human rights at an unprecedented scale. Without government regulation, we are relying on technology companies to choose to act in society’s best interests.
In recognition of this need, many consumer advocacy groups and academic institutions are publishing ethical frameworks, toolkits, and checklists for companies to follow in order to make more socially beneficial decisions (e.g., the Markkula Center, IFTF and Omidyar Network). And some tech companies are now publishing their own ethical principles and resources (e.g., Google, IBM, SAP).
It has been asked whether companies adopting these frameworks and principles are simply “ethics-washing” or “ethics shopping” in order to avoid regulation. How effective are these tools at creating more ethical cultures, teams, individuals, or products and services? How are success and failure measured? And how can success be measured objectively when terms like “ethics,” “bias,” and “fairness” are used interchangeably, without clear agreement on their meaning?
For organizations or teams that want to work more ethically, how do they know which tool to choose? At the time of writing this proposal, the authors count more than 20 separate tools.
When examining whether new frameworks, models, or training can change an organization’s culture and people’s behaviors, it is useful to draw lessons from the past. Over the past decade, many companies have implemented unconscious bias training and similar diversity and inclusion trainings with little measurable success. Causes of failure include lack of engagement throughout the company, lack of social accountability, and poor measurement of impact.
The motivation for this workshop is to evaluate existing tools for building ethical cultures and developing ethical products, in order to identify which are effective and which are not.