Call For Papers: Special Issue on AI Fairness, Trust and Ethics


Special Issue Editors:
Lionel P. Robert Jr., University of Michigan
Gaurav Bansal, University of Wisconsin-Green Bay
Nigel Melville, University of Michigan
Tom Stafford, Louisiana Tech University 


Submission Deadline: Full papers due February 15, 2020
AI is rapidly changing every aspect of our society, from how we conduct business to how we socialize and exercise. AI has amplified our productivity as well as our biases. John Giannandrea, who leads AI at Google, recently lamented in the MIT Technology Review that the dangers posed by the ability of AI systems to learn human prejudices were far greater than those posed by killer robots. This is problematic because AI systems are making millions of decisions every minute, many of which are invisible to users and incomprehensible even to their designers. Their opaqueness is a significant cause for worry and leaves many questions unanswered.

Fairness, Trust and Ethics are at the core of many of the issues underlying the implications of AI. Fairness is undermined when managers rely blindly on “objective” AI outputs to “augment” or replace their decision making. Managers often ignore the limitations of their assumptions and the relevance of the data used to train and test AI models, resulting in biased decisions that are hard to detect or appeal. Trust is undercut when AI is used to render false or misleading images of individuals saying or doing things that are simply not true. These false images are making it difficult for society to trust what it sees or hears. Ethical challenges arise when decisions made by AI lead to further inequalities in society. Examples include displaced workers and shortages of affordable housing as rental apartments and housing units are diverted to higher-paying Airbnb short-term vacationers.

Despite its potentially transformative effects, research on AI in the Information Systems field is still scarce, and as a result, our knowledge of the impacts of AI is still far from conclusive. Yet it is very important, from both business and technical perspectives, that we examine issues of fairness, trust and ethics with AI. This examination is critical because these issues lie at the heart of addressing the new challenges facing the development and use of AI throughout our society. This is especially true given the rapid increase in applications of AI in an ever-increasing number of new areas. In all, AI has the potential to disrupt and dramatically change the interactions between humans and technologies.

This Special Issue on AI Fairness, Trust and Ethics calls for research that can unpack the potential, challenges, impacts, and theoretical implications of AI. We welcome research from different perspectives regardless of approach or methodology. Submissions with novel theoretical implications that span disciplines are strongly encouraged. We seek submissions that can improve our understanding of the impacts of AI on organizations and our broader society.

Potential topics include (but are not limited to):
  • Defining fair, ethical and trustworthy AI
  • Antecedents and consequences of fair, ethical and trustworthy AI
  • Designing, implementing and deploying fair, ethical and trustworthy AI
  • Theories of fair, ethical and trustworthy AI
  • Policy and governance for fair, ethical and trustworthy AI
  • Appropriate and inappropriate applications of AI
  • Legal responsibilities for decisions made by AI  
  • AI biases
  • AI algorithm transparency and how to improve it
  • The dark side of AI 
  • AI equality vs AI equity 
  • Implications of unfair, unethical and untrustworthy AI

Key Dates:
Optional one page abstract submissions: Oct 1, 2019
Selected abstracts invited for poster presentations at Pre-ICIS 2019 SIGHCI workshop on Dec 15, 2019
First round submissions: Feb 15, 2020
First round decisions: April 15, 2020
Second round submissions: July 15, 2020
Second round decisions to authors: Sep 15, 2020
Third and final round submissions: November 1, 2020
Final decisions to authors: November 15, 2020
Targeted publication date: December 31, 2020


To submit a manuscript, read the "Information for Authors" and "THCI Policy" pages, then go to http://mc.manuscriptcentral.com/thci.


Contact:
All questions about submissions should be emailed to: AIS-THCI-AI-FTE-SI-requests@umich.edu.


Editorial Board
Christoph Lütge, Technical University of Munich
Adriane Randolph, Kennesaw State University
Lauren Goggins, University of Maryland
Anna Sidorova, University of North Texas
Sangseok You, HEC Paris
More to come…