Dissertation

ABSTRACT

A DYNAMIC DELPHI SYSTEM TO SUPPORT DECISION MAKING BY LARGE GROUPS OF CRISIS MANAGEMENT EXPERTS

by

Connie Marie White

Increasingly, extreme events demand large groups of experts distributed in both location and time to communicate, plan, and coordinate actions (Danielsson and Ohlsson, 1999; Skertchly and Skertchly, 2001; Turoff, White and Plotnick, 2008; Harrald, 2009). When responding to extreme events, very often there are both alternative actions that might be considered and far more requests for actions than can be executed immediately (Kontogiannis and Kossiavelou, 1999; Kowalski-Trakofler and Vaught, 2003). The relative desirability of each option for action could be a collaborative expression of a significant number of emergency managers and experts trying to manage the most desirable alternatives at any given time, in real time (Danielsson and Ohlsson, 1999; Skertchly and Skertchly, 2001; White, Turoff and Van de Walle, 2007). This same decision process could be used for a number of tasks but will be designed for distributed dynamic decision making during time critical phases of extreme events (White, Hiltz and Turoff, 2008). This is because our proposed system is specially designed to save time, remove ambiguities, and decrease uncertainty, all major challenges described in the literature on decision making during the volatile, time critical phases of emergency response (Rodriguez, 1997; McLennan, Holgate and Wearing, 2003).

A DYNAMIC DELPHI SYSTEM TO SUPPORT DECISION MAKING BY

LARGE GROUPS OF CRISIS MANAGEMENT EXPERTS

by

Connie White

A Dissertation

Submitted to the Faculty of

New Jersey Institute of Technology

In Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy in Information Systems

Department of Information Systems

January 2010

Copyright © 2010 by Connie Marie White

ALL RIGHTS RESERVED

APPROVAL PAGE

A DYNAMIC DELPHI SYSTEM TO SUPPORT DECISION MAKING BY LARGE GROUPS OF CRISIS MANAGEMENT EXPERTS

Connie Marie White

<insert Signed form>

BIOGRAPHICAL SKETCH

Author: Connie Marie White

Degree: Doctor of Philosophy

Date: January 2010

Date of Birth: March 15, 1964

Place of Birth: Gainesville, Florida, USA

Undergraduate and Graduate Education

· Doctor of Philosophy in Information Systems

New Jersey Institute of Technology, Newark, NJ, 2010

· Master of Science in Teaching Mathematics

Loyola University, New Orleans, LA, 1996

· Bachelor of General Studies

Southeastern Louisiana University, LA, 1993

Major: Information Systems

Presentations and Publications:

Murray Turoff, Connie White and Linda Plotnick. Book Chapter: Real Time Decision Making. Dynamic Emergency Response Management for Large Scale Decision Making in Extreme Hazardous Events, CRC Press, release date 2010.

Connie White and Linda Plotnick. A Framework to Identify Best Practices: Social Media and Web 2.0 Technologies in the Emergency Domain. International Journal of Information Systems for Crisis Response And Management (IJISCRAM). Volume II, Issue I. IGI Publishers, January Issue, 2010.

Guest Editor, Linda Plotnick and Connie White. A Social Media Tsunami: The Approaching Wave. Online Social Networks to Support Community Resilience Through Collaborative Web 2.0 Technologies. International Journal of Information Systems for Crisis Response And Management (IJISCRAM). Volume II, Issue I. IGI Publishers, January Issue, 2010.

Linda Plotnick, Connie White, and Maria Plummer. The Design of a Social Networking Site for Emergency Management: One Stop Shop. Americas Conference on Information Systems (AMCIS), San Francisco.

Connie White, Linda Plotnick, Jane Kushma, Starr Roxanne Hiltz, and Murray Turoff. An Online Social Network for Emergency Management. Information Systems for Crisis Response and Management (ISCRAM), Sweden.

Connie White and Murray Turoff, The Potential for Social Networks in Emergency Management. International Association of Emergency Managers. IAEM Bulletin, February Special Edition.

Connie White, Dynamic Delphi. Methodologies for Identifying and Ranking Sustainable Transport Practices In Urban Regions. Research Report to Transport Canada Project, Barry Wellar, Principal Investigator.

Connie White, Linda Plotnick, Jane Kushma, Starr Roxanne Hiltz and Murray Turoff. An Online Social Network for Emergency Management. International Conference on Information Systems, ICIS, Paris, France, Pre-ICIS Seventh Workshop on e-Business (WeB 2008).

Murray Turoff, Starr Roxanne Hiltz, Connie White, Linda Plotnick, Art Hendela and Xiang Yao. The Past as the Future in Emergency Preparedness and Management, The International Journal of Information Systems for Crisis Response and Management (IJISCRAM), Vol. 1, Issue 1, December 2008.

Connie White, Linda Plotnick, Ronja Addams-Moring, Murray Turoff and Starr Roxanne Hiltz. Leveraging A Wiki to Enhance Collaboration in the Emergency Domain. 41st Hawaii International Conference on System Sciences (HICSS).

Connie White, Starr Roxanne Hiltz, and Murray Turoff. United We Respond: One Community, One Voice, Information Systems for Crisis Response and Management, ISCRAM, 2008 Washington, DC.

Connie White, Murray Turoff, and Bartel Van de Walle. A Dynamic Delphi Process Utilizing a Modified Thurstone Scaling Method: Collaborative Judgment in Emergency Response. Proceedings of the 4th Annual Conference on Information Systems for Crisis Response and Management (ISCRAM), Delft, Netherlands.

Connie White, Linda Plotnick, Murray Turoff and Starr Roxanne Hiltz. A Dynamic Voting Wiki Model. Americas Conference on Information Systems (AMCIS), Keystone, Colorado.

Connie White, Starr Roxanne Hiltz and Murray Turoff. Finding the Voice of a Virtual Community of Practice. International Conference on Information Systems, Quebec, Pre-ICIS Sixth Workshop on e-Business (WeB 2007).

Connie White and Murray Turoff. A Dynamic Delphi System. The Network Nation and Beyond. A Festschrift in Honor of Starr Roxanne Hiltz and Murray Turoff, New Jersey Institute of Technology. Invited for journal submission, Jan. 2008.

Murray Turoff, Connie White, and Linda Plotnick. Dynamic Emergency Response Management For Large Scale Extreme Events. International Conference on Information Systems, Pre-ICIS SIG DSS 2007 Workshop.

Linda Plotnick, Liz Avery Gomez, Connie White and Murray Turoff. A Dynamic Thurstonian Method Utilized for Interpretation of Alert Levels and Public Perception. Proceedings of the 4th Annual ISCRAM, Delft, Netherlands.

I would like to dedicate this doctoral effort to the memory of my father, Gordon Hughes White for providing me with the strength and work ethic to conquer life’s difficult challenges.

This is also dedicated to my children Brigitte, Thomas and Hunter, for providing me with the incentive I needed to drive me through the toughest days. Mommy is trying to make the world a better place.

I would like to also dedicate this effort to Sean Roberts and thank him for providing me with a place that had the solitude required for such a level of dedication and focus needed to get through this time-consuming effort.

This doctoral effort is also dedicated to the women in my family: my mother Jo Ann White and grandmothers, Ollie Dee Daniels and Berta Lee White. I want to thank them for providing me with examples of strong women who never shied away from any of life’s challenges and for taking on roles that let me know that I could do anything.

Mostly, this doctoral effort is dedicated to my mother, Jo Ann Mills White. She knew the importance of education and it is because of her encouragement, steadfast support and love that I was able to complete this academic endeavor. I love you Mom!

ACKNOWLEDGMENT

Several people contributed to this PhD effort and I would like to express my appreciation for their tireless effort and dedication in helping me complete this journey. I am especially grateful to my advisor Murray Turoff and would like to thank him for sharing his brilliant ideas, providing me with guidance, and for opening the world of emergency management to me. I would also like to give special thanks to Roxanne Hiltz for taking me under her wing and taking the strides as an educator to push me beyond any and all expectations, for her patient and vigilant aid in my academic endeavors and for being such a dedicated teacher.

I would like to thank my committee members, Bartel Van de Walle, Jack Harrald, Mike Chumer and Julian Scher for their guidance and expertise. I am most appreciative to my colleague and good friend Linda Plotnick, the connections made through the ISCRAM nation and to the Sahana Disaster Management Software community – in particular, Chamindra de Silva, Reza Mohammadi and Fran Boon. I am appreciative to my good friends and Turovian support group members Xiang Yao and Art Hendela.

During this doctoral effort, three catastrophic events occurred: Hurricane Katrina in 2005, the Indian Ocean tsunami in 2004 and the terrorist attacks of September 11, 2001. Murray Turoff and these events forever changed who I am and what I will do with my life.

CHAPTER 1

INTRODUCTION

1.1 Abstract

Increasingly, extreme events demand large groups of experts distributed in both location and time to communicate, plan, and coordinate actions (Danielsson and Ohlsson, 1999; Skertchly and Skertchly, 2001; Turoff, White and Plotnick, 2008; Harrald, 2009). When responding to extreme events, very often there are both alternative actions that might be considered, and far more requests for actions than can be executed immediately (Kontogiannis and Kossiavelou, 1999; Kowalski-Trakofler and Vaught, 2003). The relative desirability of each option for action could be a collaborative expression of a significant number of emergency managers and experts trying to manage the most desirable alternatives at any given time, in real time (Danielsson and Ohlsson, 1999; Skertchly and Skertchly, 2001; White, Turoff and Van de Walle, 2007). This same decision process could be used for a number of tasks but will be designed for distributed dynamic decision making during time critical phases of extreme events (White, Hiltz and Turoff, 2008). This is because the proposed system is specially designed to save time, remove ambiguities, and decrease uncertainty, all of which are major challenges described in the literature on decision making during the volatile, time critical phases of emergency response (Rodriguez, 1997; McLennan, Holgate and Wearing, 2003).

1.2 Introduction

A prototype of an information system to support decision-making tasks was created and studied, particularly for the disaster management community of practice. The system will be developed for large groups of emergency domain-related experts working collaboratively on solving complex problems. These problems persist over time, are wicked, and are often time critical. Often, decisions must be made without all of the required information (under uncertainty) and among a subset of the group's members, since all members may not be present or have relevant contributions when a specific decision must be made. Some parts of this problem have been explored and solutions exist; other parts are subproblems that remain open and require further research.

This research proposes a novel approach in which feedback showing areas of disagreement can stimulate discussion and potentially generate more options, or more creative ones, that might not have occurred to the single individual in charge of that particular emergency event. The contribution is in how uncertainty is handled through the use of experts voting, using a modified version of Thurstone's Law of Comparative Judgment. Voting is used as the input to reflect the expert's opinion at any moment in time. Voting is dynamic, in that anyone can vote, change their vote, or opt not to vote, for any reason: an expert may want to wait for more information, or may not feel knowledgeable in a relevant domain. Dynamic voting is one of many requirements that build flexibility into the overall proposed system.
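To make these voting mechanics concrete, the sketch below shows one conventional way to turn paired-comparison votes into an interval scale under Thurstone's Case V assumptions. It is a minimal Python illustration, not the dissertation's modified method or the Delphi Decision Maker code; the function name, the clipping of proportions to [0.01, 0.99], and the skipping of unjudged pairs are assumptions of the sketch.

    from statistics import NormalDist

    def thurstone_scale(votes, options):
        # votes: iterable of (winner, loser) pairs from paired comparisons.
        # Incomplete voting is tolerated: each option is scaled only over
        # the pairs that were actually judged.
        wins = {(a, b): 0 for a in options for b in options if a != b}
        for winner, loser in votes:
            wins[(winner, loser)] += 1
        inv = NormalDist().inv_cdf  # inverse standard normal CDF
        scores = {}
        for a in options:
            zs = []
            for b in options:
                if a == b or wins[(a, b)] + wins[(b, a)] == 0:
                    continue  # skip self-pairs and unjudged pairs
                n = wins[(a, b)] + wins[(b, a)]
                p = min(max(wins[(a, b)] / n, 0.01), 0.99)  # keep z finite
                zs.append(inv(p))
            scores[a] = sum(zs) / len(zs) if zs else 0.0
        return scores  # higher value = more preferred option

Recomputing such a scale whenever a vote is cast, changed, or withdrawn is one way the dynamic character of the voting could be realized, since the group's current ranking is always derivable from the current vote set.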

One of the strengths of the proposed system is that it is problem independent, so any group, anywhere, in any language, can use it for problem sets involving the selection of a timely and beneficial set of solution options. On a broader scale, this sort of collaborative decision making process could greatly benefit the large online social networks that are developing quickly for all sorts of groups (White, Plotnick, Kushma, Hiltz, and Turoff, 2009). The decision making process being proposed could take social networks to the next level, leveraging their subgroups of knowledgeable individuals to collaborate and recommend meaningful solution options to a given problem (White, Turoff and Van de Walle, 2007). This particular system could also be used as a recommender system to build a reservoir of findings for an evolving collaborative knowledge base (White, Turoff and Hiltz, 2008). An example of such an effort is emerging for the ISCRAM (Information Systems for Crisis Response and Management) community (www.iscram.org/live, 2008).

1.3 Design Science

The Dynamic Delphi system is being constructed, evaluated and implemented using a set of guidelines proposed for design-science research in information systems (Hevner, March, Park and Ram, 2004). Design science is a problem-solving process and was used as a guide to build the prototype. A set of seven guidelines specifies the requirements for effective design-science research. Hevner and his coauthors describe the process:

“That is, design-science research requires the creation of an innovative, purposeful artifact (Guideline 1) for a specified problem domain (Guideline 2). Because the artifact is "purposeful," it must yield utility for the specified problem. Hence, thorough evaluation of the artifact is crucial (Guideline 3). Novelty is similarly crucial since the artifact must be "innovative," solving a heretofore unsolved problem or solving a known problem in a more effective or efficient manner (Guideline 4). In this way, design-science research is differentiated from the practice of design. The artifact itself must be rigorously defined, formally represented, coherent, and internally consistent (Guideline 5). The process by which it is created, and often the artifact itself, incorporates or enables a search process whereby a problem space is constructed and a mechanism posed or enacted to find an effective solution (Guideline 6). Finally, the results of the design-science research must be communicated effectively (Guideline 7) both to a technical audience (researchers who will extend them and practitioners who will implement them) and to a managerial audience (researchers who will study them in context and practitioners who will decide if they should be implemented within their organizations).”

(Hevner et al., 2004, p. 11).

1.3.1 Guideline 1: Creation of an Artifact

According to Hevner et al., the first guideline addresses design as an artifact: "design-science research must produce a viable artifact in the form of a construct, a model, a method, or an instantiation" (Hevner et al., 2004, p. 83). Further, Hevner's group defines IT artifacts as "constructs (vocabulary and symbols), models (abstractions and representations), methods (algorithms and practices), and instantiations (implemented and prototype systems)" (2004, p. 77). In the Dynamic Delphi system effort, models, methods and instantiations are all considered IT artifacts. Chapter 7 explains how the system was developed and presents information on the prototype. A viable artifact has been produced: a prototype system has been created, tested, and implemented.

1.3.2 Guideline 2: Problem Relevance

The second guideline describes problem relevance: "the objective of design-science research is to develop technology-based solutions to important and relevant business problems" (Hevner et al., 2004, p. 83). The Dynamic Delphi system can address a number of important problems in which decision makers evaluate a number of alternatives and select among them; its use in the emergency domain to help manage large scale disasters is deemed important by the emergency management community. Better decision making strategies are needed to manage these wicked types of problems effectively. This system is designed to help solve a critical problem in the emergency domain using state-of-the-art technology, as the next chapter on crisis management decision making will show. A technology-based solution for online large scale crisis management decision making was created. The next chapter covers the characteristics of the problems of this organization and domain; the system was developed specifically to reduce the problems posed by this environment.

1.3.3 Guideline 3: Scientifically Sound Foundation

The third guideline describes the importance of rigorous evaluation methods for the design. Specifically, "The utility, quality, and efficacy of a design artifact must be rigorously demonstrated via well-executed evaluation methods" (Hevner et al., 2004, p. 83). This is an ongoing research effort built upon the work of other doctoral students under the same advisor (Li, 2003; Wang, 2003; Cho, 2004), whose results were deemed contributions and published not only as dissertations but also in numerous peer-reviewed publications. The tools have been designed on a strong scientific foundation: Thurstone's Law of Comparative Judgment, paired comparisons, and the Delphi technique are well documented and supported in the literature from the scientific community. The literature review chapters at the beginning of this dissertation cover these theoretical concepts and apply them to support the foundation of the methods used and the design strategies used to implement those methods. This system is built on a solid foundation of theory.

1.3.4 Guideline 4: Contribution

The fourth guideline addresses the importance of the work being a contribution to the academic world: "Effective design-science research must provide clear and verifiable contributions in the areas of the design artifact, design foundations, and/or design methodologies" (Hevner et al., 2004, p. 83). The need for this system is great, as is the role it would serve for crisis managers; that the system is offered for free makes it that much more valuable. At the present time, no available system satisfies the requirements of the Delphi Decision Maker. Such a system could help improve disaster management and bring online decision making to another level for those in crisis management. March and Smith propose that building an innovative and creative system is, in and of itself, enough to be considered a contribution to the research community: "Building the first of virtually any set of constructs, model, method, or instantiation is deemed to be research, provided the artifact has utility for an important task. The research contribution lies in the novelty of the artifact and in the persuasiveness of the claims that it is effective. Actually, performance evaluation is not required at this stage" (March and Smith, 1995, p. 260). The system was built and is now offered as a module on the SahanaPy Disaster Management System. To date, no other system has been found that manages this problem as efficiently.

1.3.5 Guideline 5: Rigor in Evaluation Methods

The fifth guideline focuses on research rigor and states that "Design-science research relies upon the application of rigorous methods in both the construction and evaluation of the design artifact" (Hevner et al., 2004). All of the methods and models used in the Dynamic Delphi system have been tested rigorously. The system, the Delphi Decision Maker, Version 1.0, has been tested by different groups using different problems, and a field study was conducted in which a group tested the system. The research process was critiqued by experts, and an overarching approach was developed and tested to properly guide the project and test the system. Proof of concept, descriptive statistics and content analysis were used to evaluate and measure the design and the system, and Cronbach's Alpha was used to test for reliability. Only statistically sound methods were used to evaluate the data. Several chapters in this dissertation cover various aspects of the evaluation methods used throughout this effort.
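For reference, Cronbach's Alpha follows directly from its standard formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals). The Python sketch below is a generic illustration of that formula, not the evaluation code used in this research; the orientation of the ratings matrix (items by respondents) is an assumption of the example.

    def cronbach_alpha(items):
        # items[i][j] = score given by respondent j on item i
        k, n = len(items), len(items[0])
        def pvar(xs):  # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)
        totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
        item_variance = sum(pvar(items[i]) for i in range(k))
        return (k / (k - 1)) * (1 - item_variance / pvar(totals))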

1.3.6 Guideline 6: Iterative Design Process

The sixth guideline relates to design as a search process. It states that "the search for an effective artifact requires utilizing available means to reach desired ends while satisfying laws in the problem environment" (Hevner et al., 2004, p. 83). In other words, the development process is iterative: each area needs to be explored so that the most efficient and feasible method can be identified during the design and implementation stages. The first version of the software system was built and tested, identifying problem areas. A set of functional requirements has been written for the next version of this software. Strategies have been devised and a methodology developed describing the system's design, the conceptual ideas that derive from it, and how it is to be implemented. These modifications, along with the problems they will solve, are written up in a chapter titled "Design Science and The Delphi Decision Maker, Version 2.0." That version is presently under development, and new features will be demonstrated at the defense.

1.3.7 Guideline 7: Publish the Work

The last guideline addresses the importance of publishing the work in both the academic and practitioner communities. Some academic publications have already come from this work, and future publications are expected for both audiences; both conference articles and journal submissions are in progress. Results from this work can be submitted to a variety of scientific venues such as Psychometrika, the Journal of Homeland Security and Emergency Management, and Management Information Systems Quarterly (MISQ). Further work remains to be conducted and published, as no prior work was found in this area over a four-year search. It is also important to publish where practitioners are more likely to read it: the system needs exposure to further its development, especially given that it is free. Articles will be submitted to a variety of practitioner-driven publications such as the IAEM (International Association of Emergency Managers) Bulletin and Emergency Management Magazine.

CHAPTER 2

DECISION MAKING DURING EXTREME EVENTS IN

EMERGENCY MANAGEMENT

2.1 Abstract

This chapter explores the literature on decision making, particularly the needs within the emergency domain. The second guideline of design science describes problem relevance: "the objective of design-science research is to develop technology-based solutions to important and relevant business problems" (Hevner et al., 2004, p. 83). A system was built to aid crisis managers during time critical situations and/or extreme events. The main objective of this chapter is to review, from the emergency literature, the actual problems emergency managers (EMs), as decision makers, face when confronting extreme events. Common themes are repeated throughout the literature concerning time, how EMs manage or mismanage information, problems of uncertainty, and stress. The literature indicates how extreme events pose a set of problems dissimilar from smaller, recurring events. The information discovered in this review indicates that the needs of EMs are the same as those outlined in this overall research effort. This chapter does not address any information system problem per se, but identifies the needs strictly from the EM perspective.

2.2 Introduction

The purpose of this chapter is to review the major problem areas that EMs must consider when making decisions in response to extreme events. "An emergency is by definition a unique and unpredictable event, and it is seldom possible, even in retrospect, to assess what the outcome of an emergency response would have been if alternative measures had been followed" (Danielsson and Ohlsson, 1999, p. 92).

The problems are unambiguous and recurring in the literature. Clausewitz offers a cohesive observation outlining these problematic areas:

“A commander must continually face situations involving uncertainties, questionable or incomplete data or several possible alternatives. As the primary decision maker, he, with the assistance of his staff, must not only decide what to do and how to do it, but he must also recognize if and when he must make a decision” (Clausewitz, 1976, p. 383).

This is the common theme found throughout this review of the emergency domain literature concerning decision making in extreme events. This review is important because the needs of the EM must be identified from the literature of that particular domain: the results of studies confirming the task type, needs and considerations of the practitioners themselves must be observed to show how the Dynamic Delphi System can support and manage those problems in a way the user perceives as beneficial. Some of the major issues discovered are presented below.

Stress is an understandable response for EMs, who must make life-and-death decisions, especially in tragedies where triage may force a choice between groups of people (Kowalski-Trakofler, Vaught and Scharf, 2003).

Another source of stress arises when decisions must be made under severe time constraints (Rodriguez, 1997; Kowalski-Trakofler et al., 2003). DMs have to forecast and make predictions given the uncertainty in expectations of future events (Rodriguez, 1997). Time is precious, and accurate decisions must be made along a timeline, at particular points over the duration of the event, as a disaster evolves (Brehmer, 1987; Danielsson and Ohlsson, 1999). "The operational commander continually faces an uncertain environment" (Rodriguez, 1997, p. 5).

Critical judgments must be made where large amounts of information are available for consideration, creating information overload. To make matters worse, this information can be wrong or incomplete (Kowalski-Trakofler et al., 2003), or sufficient time may be lacking to gain the perfect and complete information needed before the decision is made (Rodriguez, 1997). "In dealing with the uncertainty of a continually changing environment, the decision maker must achieve a trade-off between the cost of action and the risk of non-action" (Kowalski-Trakofler and Vaught, 2003, p. 283). Sometimes these decisions rest on the decision maker's (DM's) assumptions and intuition when information is not attainable (Rodriguez, 1997).

Small events occur frequently, and catastrophic events occur rarely (Hyndman and Hyndman, 2006). Protocols or heuristics can be used for the smaller, frequently occurring emergencies. However, management has little or no prior experience with larger events, in which national boundaries are ignored and the demand for resources far exceeds supply. Research reveals that extreme events have different characteristics from smaller disasters (Skertchly and Skertchly, 2001). The overwhelming nature of these extreme events, combined with the limits of human mental capacity to manage a large set of ongoing problems at any one time, calls for a dynamic approach to decision making that fits the task. A major problem lies in a decision maker's ability to effectively manage all of the ongoing events simultaneously during an extreme event (Danielsson and Ohlsson, 1999; Kerstholt, 1996).

One person is in charge of making the final decision for action, but this is a collaborative effort of numerous stakeholders sharing numerous overlapping tasks. "As complexity increases, it becomes impossible for a single individual with the limited information processing capacity to gain control" (Danielsson and Ohlsson, 1999, p. 93). A dynamic decision making approach is much needed, due to the inherent chaos characteristic of extreme events (Danielsson and Ohlsson, 1999). Extreme events need to be managed with structure plus the flexibility to improvise or adapt where necessary, to achieve agility (Harrald, 2009).

In the remainder of this chapter, these facets are elaborated, probing deeper into the needs of emergency managers. First, how extreme events differ from small emergencies and must be approached as a different task type is covered. Second, extreme events are shown to be a wicked problem; these characteristics are laid out and matched to extreme events, and good versus bad characteristics of EM decision making from the literature are listed. Third, types of bias specific to emergency situations and decision making are covered. Next, literature findings concerning time, stress and information overload are provided, and methods describing how EMs presently handle information are discussed and related to other research concepts already explored in this effort. Finally, research indicating how feedback and expert intuition are used to manage uncertainty is examined.

2.3 Extreme Events

Large scale extreme events are not like small emergencies. Small emergencies occur regularly, and most decisions are rule based due to experience with the event type (Rasmussen, 1983); this is referred to as procedural expertise (Adams and Ericsson, 2000). When a small emergency occurs, the EM may not even be notified, because firefighters, police and emergency medical attendants already know how to proceed (Danielsson and Ohlsson, 1999). Extreme events, on the other hand, present a different set of characteristics due to their problem type and task structure (Campbell, 1999; Mitchell, 1999; McLennan et al., 2003).

In large-scale operations, the cognitive demands on the EM are severe (Danielsson and Ohlsson, 1999). Team coordination strategies evolve from explicit coordination under low workload conditions to implicit coordination as workload increases. Large-scale emergency operations imply distributed decision making, in that decisions are disseminated among many stakeholders, no single one of whom has complete knowledge of the current situation (Danielsson and Ohlsson, 1999; Mitchell, 1999; Kowalski-Trakofler and Vaught, 2003).

Wicked Problems

Extreme events possess characteristics, problem types and task structures that are categorized as wicked. Wicked problems are volatile and of a very dynamic nature, with considerable uncertainty and ambiguity (Horn, 2005). Wicked problems are ongoing and have no stopping rule (Rittel and Webber, 1973; Digh, 2000). They are never resolved and change over time (Conklin, 1998). Wicked problems are "solved," per se, when they are no longer of interest to the stakeholders, when resources are depleted, or when the political agenda changes (Rittel and Webber, 1973). Many stakeholders with multiple value conflicts repeatedly redefine what the problem is, reconsider what the causal factors are, and hold multiple views of how to approach and, hopefully, deal with the problem (Rittel and Webber, 1973; Conklin, 1998; Digh, 2000). Getting and maintaining agreement among the stakeholders is most difficult because each has their own perception, and thus opinion, of what is best (Rittel and Webber, 1973).

Extreme events possess the characteristics of those found within the definitions of wicked problems. “Each dysfunctional event has its own unique characteristics, impacts, and legacies” (Skertchly and Skertchly, 2001, p. 23). For example, catastrophic disasters have the following attributes and dimensions many of which are the same as those described in wicked problems:

· *They don’t have any rules.

· Often, emergency services are insufficient to cope with the demands given the limited amount of available resources.

· Vital resources are damaged and nonfunctional.

· *Procedures for dealing with the situation are inadequate.

· *No solutions for resolution exist on a short-term basis.

· *Events continue to escalate.

· *Serious differences of opinion arise about how things should be managed.

· The government of the day and the bureaucracy becomes seriously involved.

· The public takes an armchair position and is fed by the media.

· *The number of authorities and officials involved is growing.

· *Sometimes simply trying to identify which of the emergency services and investigative bodies is doing what results in complete chaos.

· The need to know who is in charge is urgent (Campbell, 1999, p. 52).

*Asterisked items are characteristic of wicked problems.

EM tasks differ from control task types in that no two events are the same, so different decision processes must be implemented. The interacting variables are many, and the domain is ill defined and at times unknown (Danielsson and Ohlsson, 1999). An EM cannot project future decisions with any degree of accuracy, due to all of the variables involved and all of the different scenarios that can arise, given the great amount of uncertainty and the lack of experience of the unknown (Newport, 1996).

2.4 Decision Making in Emergency Management

Decision tasks are perceived as difficult by the EM when issues involving life saving operations, such as evacuations or triage, have the potential for devastating results if not conducted accurately (Danielsson and Ohlsson, 1999).

Studies show an EM's most difficult aspects of work are:

· Lack of routine and practice–refers to the infrequency of major accidents, making it difficult to gain experience of command and control proper.

· Communicational shortcomings

o Information overload is salient during the initial phase of an emergency response and is seen as especially severe if no staff members are available to perform communication duties.

o Technical equipment inadequacy

o Lack of skills in handling communication equipment

· Feelings of isolation–lack of peers with whom to discuss common problems (Danielsson and Ohlsson, 1999, p. 94).

Other psychological processes are associated with decisions made by EMs. Effective decision makers must take many environmental factors into consideration, understanding that these are complex, dynamic, time-pressured, high-stakes, multi-person task environments (McLennan et al., 2003).

Some hazard conceptualization and management problems, adapted from Mitchell (1999), are presented:

· *Lack of agreement about definition and identification of problems

· *Lack of awareness of natural and unnatural (human-made) hazards

· *Lack of future forecasting capabilities

· *Misperception or misjudgment of risks associated with hazards

· Deliberate misrepresentation of hazards and risks

· *Lack of awareness of appropriate responses

· *Lack of expertise to make use of responses

· Lack of money or resources to pay for responses

· *Lack of coordination among institutions and organizations

· Lack of attention to relationship between ‘disasters’ and ‘development’

· Failure to treat hazards as a contextual problem whose components require simultaneous attention

· Lack of access by affected populations to decision making

· Lack of public confidence in scientific knowledge

· Lack of capable and enlightened political leadership

· *Conflicting goals among populations at risk

· *Fluctuating salience of hazards

· Public opposition by negatively affected individuals and groups.

*Asterisked items are wicked characteristics.

Many of these are also characteristic of the wicked problem types defined earlier and share characteristics with those of extreme events (Rittel and Webber, 1973; Campbell, 1999).

2.4.1 Time

“Time lost is always a disadvantage that is bound in some way to weaken he who loses it” (Clausewitz, 1976, p. 383).

Time is a critical factor that further complicates the decision making process. In extreme events, an EM must consider an enormous number of factors quickly (Kowalski-Trakofler et al., 2003). Decisions must be made, sometimes forced by time constraints. “The faster a decision has to be made, the less time the information processing system has to convert or gather enough accurate information to convert assumptions to facts” (Rodriguez, 1997, pp. 7-8). This means that decisions are made under uncertainty and without full consideration. An EM must weigh delaying the decision against the negative consequences that may occur while waiting for more requested information (Kowalski-Trakofler and Vaught, 2003). Once time has passed, alternative actions are no longer possible; perhaps the best decision has been bypassed, leaving only less optimal options from which to choose.

Kowalski-Trakofler and Vaught conducted a study of good decision making characteristics under life threatening situations. They found that, during any phase of the decision making process, a set of factors could significantly impact one’s ability to deal with complex problems under time critical situations. These factors are:

· Psychomotor skills, knowledge and attitude

· Information quality and completeness

· Stress–generated both by the problem at hand and any existing background problem

· The complexity of elements that must be attended (2003, p. 285).

One research finding indicates that performance can be maintained under time pressure if the communication changes from explicit to implicit (Serfaty and Entin, 1993). They found that “Implicit coordination patterns, anticipatory behavior, and redirection of the team communication strategy are evident under conditions of increased time-pressure. The authors conclude that effective changes in communication patterns may involve updating team members, regularly anticipating the needs of others by offering unrequested information, minimizing interruptions, and articulating plans at a high level in order to allow flexibility in the role of front-line emergency responders” (Serfaty and Entin, 1993).

2.4.2 Stress

Stress is defined as “a process by which certain work demands evoke an appraisal process in which perceived demands exceed resources and result in undesirable physiological, emotional, cognitive and social changes” (Salas, Driskell, and Hughes, 1996, p. 6).

Information during an emergency can be a source of stress in many ways (Kowalski-Trakofler et al., 2003). First, due to technical malfunctions or simply poor implementation, initial warnings can be ambiguous, creating a greater need for clarity; the situation is then interpreted differently by different people, leading to divergent responses. Another stressor arises from information mismanagement: when people do not fully understand what is going on, or when stakeholders disagree about the situation, the right information is not gathered, which wastes time and causes more stress and aggravation. Other stressors come from poor leadership: weak leadership contributes to worse decisions, or no decisions at all, and can result in confusion. Last, when technology or other apparatus fails, people are left without information and without the ability to keep current with response efforts, adding still more stress (Kowalski-Trakofler et al., 2003).

Stress is a major factor in decision making, especially during life critical situations (Kowalski-Trakofler et al., 2003). One of the primary stressors is the lack of information immediately after the event, during the early phase of the emergency response, where it concerns determining the scale and characteristics of the damage (Danielsson and Ohlsson, 1999).

A major problem occurs when people make decisions under stress, which leads to poor decision making: research shows that not all feasible choices are considered, and a decision is likely to be made prematurely (Keinan, Friedland, and Ben-Porath, 1987). This is problematic because, no matter how experienced a DM may be, they will be confronted with situations they have not experienced previously (Harrald, 2009). All of the influential information that time allows should therefore be considered in order to make the most appropriate decision.

2.4.3 Information Overload

Good incident commanders function as if they have a good practical understanding of the limitations of their own information processing system and the corresponding limitations of others (McLennan et al., 2003). In particular, they operate in such a manner that (a) their effective working memory capacity is not exceeded, (b) they monitor and regulate their emotions and arousal level, and (c) they communicate with subordinates in ways that take into account subordinates' working memory capacity limitations. The foundation of their ability to manage their own information load effectively seems to be prior learning from past experience.

Studies show that during an emergency, information quality varies on three dimensions: reliability, availability and relevance (Danielsson and Ohlsson, 1999). The decision to use information at any given time, and the weight given to it, is based on these dimensions.
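As a purely hypothetical illustration, incoming information could be scored on each of the three dimensions and the scores combined into a single weight; the multiplicative combination in the Python sketch below is an assumption of the example, not a model from the cited study.

    def information_weight(reliability, availability, relevance):
        # Each dimension scored in [0, 1]; an item weak on any one
        # dimension receives a low overall weight.
        return reliability * availability * relevance

    # An unconfirmed but current, highly relevant field report...
    print(information_weight(0.4, 1.0, 0.9))   # 0.36
    # ...outweighs a confirmed but stale, marginally relevant one.
    print(information_weight(0.9, 0.5, 0.3))   # 0.135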

Bias

Many forms of bias exist in decision making, but emergency management has a set associated with disastrous leadership. Research indicates that this stems from a lack of self awareness, a normal reaction in information processing. Table 2.1 lists the bias types along with brief descriptions derived from Adams and Ericsson (2000).

Table 2.1 Bias in Emergency Management Decision Making

Muddling Through

A large amount of information must be considered in a very small amount of time. Time to fully explore all alternatives is lacking; moreover, stress has a tendency to make DMs focus narrowly on the list of available alternatives. Studies found that good DMs focus only on the most feasible and reliable solutions and eliminate the nonessential information (Kowalski-Trakofler et al., 2003). This does not compromise the DM's ability to make good decisions but rather simplifies the process, allowing them to focus on the critical issues.

This same approach was validated by other research studying the decision processes of good DMs (McLennan et al., 2003). The study indicated that all of the information was scanned, but attention was given only on a 'need to know' basis and only to the relevant factors.

This decision making strategy is described by Charles Lindblom, who refers to it as Muddling Through (Lindblom, 1959; Lindblom, 1979). It employs methods that help a DM focus on the most relevant subgroup, given a list of alternatives from which to choose for any given task. Muddling through a problem guides decision makers to direct their focus toward selecting incremental changes.

2.4.4 Uncertainty

The demands on emergency management are described by the Catastrophic Incident Annex to the National Response Plan (NRP; DHS 2004): “A detailed and credible common operating picture may not be achievable for 24 to 48 hours (or longer). As a result, response activities must begin without the benefit of a detailed or complete situation and critical needs assessment” (Harrald, 2006, p. 258). Due to the nature of an extreme event, many judgments must be made with information that is often ambiguous, wrong or incomplete (Kowalski-Trakofler et al., 2003). The operational activities involve “hierarchical teams of trained individuals, using specialized equipment, whose efforts must be coordinated via command, control, and communication processes to achieve specified objectives under conditions of threat, uncertainty, and limited resources, both human and material” (McLennan et al., 2003, p. 2).

Not only are present decisions made under uncertain information; forecasting future events also poses a challenge, due to the uncertainty in those events as they play out over the duration of the extreme event (Rodriguez, 1997). “To make decisions about an uncertain future, the commander must make many assumptions. Intuitive thinking is an important skill in the ability to make a sound assumption” (Rodriguez, 1997, p. 1). This is where experts use intuition to fill gaps in information needs.

Feedback

Timely and reliable feedback is one means of helping DMs make good judgments. One type of uncertainty comes from the lack of feedback or reported information in the initial assessment from affected areas. Particularly frustrating to EMs is the lack of feedback when the next decision cannot be made without information yet to be acquired, especially when the damage cannot be visualized (Danielsson and Ohlsson, 1999). This can have detrimental effects on the outcome of the event, because DM performance is diminished.

Expert Intuition

Assumptions are used by DMs to fill in gaps where uncertainty exists (Rodriguez, 1997). Intuition plays a large role in filling these gaps and can serve well when exercised by those with experience. “For experienced commanders, intuition fills in the decision making processes where imperfect information leaves off” (Battle Command, 1994, p. 25).

A study conducted on a large group of top executives supports the concept that intuition is used to guide critical decision making. The situations and environments in which intuition was most used and most helpful were found to be those where:

    • A high level of uncertainty exists

    • The event has little previous precedent

    • Variables are often not scientifically predictable

    • “Facts are limited”

    • Facts do not clearly point the way to go

    • Time is limited and the pressure is to be right

    • Several plausible alternative solutions are available to choose from, with good arguments for each (Agor, 1986, p. 18)

When considering the issue of analytical versus intuition judgment, the National Institute for Occupational Safety and Health (NIOSH) reported:

“The point here is that research which focuses on judgment must include scrutiny not only of decisions that are made, but also of real-world variables that influence them. The quality of any decision may have little or no direct relationship to the eventual outcome of its execution in a given situation. This is because a decision-maker is constrained not only by the stress of the situation or personal knowledge and attitudes, but also because he or she can only weigh information that is available” (Kowalski-Trakofler et al., 2003, p. 286).

Normal decision making techniques do not suffice in situations as complex as extreme events. The characteristics of such situations were identified as:

    • Novelty—the officer had never encountered such a situation before,

    • Opacity—needed information was not available,

    • Resource inadequacy—the resources currently available were not sufficient to permit an optimal response (McLennan et al., 2003, p. 3).

The EM continually faces an uncertain environment. There is insufficient time for the EM to get all of the correct information needed, and this must be weighed against the need to make a decision at a particular time, so he/she must rely on assumptions and intuition. Intuition helps the DM make decisions faster and more accurately, contributing to initiative and agility (Rodriguez, 1997).

2.5 Conclusion

Decision making by emergency managers in extreme events has problem areas that need support in order to minimize the disastrous effects that can cripple the outcome and recovery efforts. This chapter reviewed the research literature specifically from the emergency domain, not from the information systems perspective. A system was designed to support the needs of emergency management. The problem areas are time, stress, information overload, bias, and delayed feedback. As required by design science, this system will be tested against the needs of the users, but limitations are such that, until the system is used in the real world, it will not be known whether it works as intended. This system is part of the Sahana Disaster Management System, which is deployed by countries all over the world. It is hoped that, given this exposure, the system will ultimately be tested so that the developers can support the needs of real world users.

CHAPTER 3

LITERATURE REVIEW: DECISION SUPPORT SYSTEMS

3.1 Abstract

This chapter reviews Decision Support Systems, exploring areas of design which influence the outcome. A non-exhaustive literature review is conducted to determine the areas that should receive focus when building a system of this nature. Design science research requires that the design of the software be based on solid, scientifically supported work. This chapter reviews Group Support Systems and explores the options available for modeling the system based on the needs of the organization for which it is being designed. Group Support Systems come in many varieties, depending on the users' proximity, group size and the task to be undertaken. Information overload is analyzed: how it happens, how users deal with it, and methods to decrease it. Catalysts for the group process, in the form of facilitators and moderators, are reviewed, with different variations fitting different needs. Complexity, in terms of the number of dimensions considered in a problem, is covered. Many of the topics vital to the development of a decision support system are explored; this is a very large area and is only briefly covered here.

3.2 Introduction

There can never be enough emphasis on making the best decision possible and continuing to improve it (Simon, 1960; Weick and Sutcliffe, 2001; Eom, 2001). The ramifications of decisions can be never-ending: they set the stage from which to work and from which the next decision is made (Weick and Sutcliffe, 2001; Eom, 2001; White et al., 2007a). Building a system to support decision making includes considerations to support all of its phases (Hiltz and Turoff, 1978; Sprague and Carlson, 1982; Turoff et al., 1993; Eom, 2001; Turoff et al., 2002; White et al., 2007b). Present day decision support systems are used for 'what if' scenario generation and evaluation, and also for generating and analyzing means-to-an-end, 'goal-seeking' strategies in the design and selection phases (Eom, 2001). To support any group, there must be a strong foundation supporting both communication and coordination processes (Turoff et al., 1993).

3.2.1 Acronyms

There are many acronyms concerning the use of technology in the creation of decision support systems (DSS). Many of the systems have overlapping definitions, and this can cause confusion to the point where multiple systems are erroneously referenced as one. There are computer-supported cooperative work (CSCW), group support systems (GSS), computer mediated communication systems (CMC), collaboration support systems (CSS), conferencing systems (CS), electronic meeting systems (EMS) and group decision support systems (GDSS), just to name a few (Hiltz et al., 1996; Eom, 2001; White et al., 2008). These are not interchangeable, but are specialized based on particular sets of requirements (Hiltz et al., 1996). For example, GDSS focus on individuals working together in decision making and problem solving, while CSCW focuses on improving communication between group members (Hiltz et al., 1996; Eom, 2001).

3.2.2 Decision Support Systems Components

Decision Support Systems can initially be categorized into two areas, separating the human from the system. Although these two entities interact, each must first be understood in terms of what it is capable of and what its limitations are (Simon, 1969; Hiltz and Turoff, 1978). The system component can be further broken down into two sub-categories, one representing how the data will be handled, and the other the methods and models the system will use (Sprague and Carlson, 1982). All DSS have a common framework referred to as the Dialog, Data and Modeling (DDM) paradigm, which states that all decision support systems are based on the following:

1. Dialog exchanged between the users of the system,

2. The Data utilized and managed in the system and

3. The Model which supports the analytical work.

Sprague and Carlson (1982) created a model, presented in Figure 3.1, which reflects the architectural support of the DDM paradigm for a DSS. Each system is modified and specialized according to the type of activity the users will perform. This task-technology best-fit claim is also supported by others (Hiltz et al., 1996; Zigurs, 1998).

Figure 3.1 Architectural support of the DDM.

3.2.3 Database Management Systems

Figure 3.1 illustrates the three components connected through a database management system (DBMS). The dialog component connects the user to the system for interaction; the data component stores the data through input, archiving or other means; and the model component holds the algorithms and problem-solving analysis tools required by the users of the system (Sprague and Carlson, 1982). Database management systems come in all shapes, forms and sizes: there are free and open-source systems available with no support, and very sophisticated, expensive systems with a great deal of support, ranging from small on-site jobs to very large distributed jobs (White et al., 2008a). The DBMS is the backbone of the information provided to the end user. Queries are made against the data to form information, which can feed into other programs with the algorithms necessary to provide various analyses that further aid the decision maker (Gehani, 2007; Rob and Coronel, 2007). Providing a well designed interface between the human and the computer is crucial (Hiltz and Turoff, 1978). Major consideration should be given to the availability of input/output devices, peripheral hardware, the presentation of materials in the form of reports, and many other issues which go beyond the scope of this chapter.
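As a purely illustrative aside, the DDM decomposition can be pictured as three cooperating components behind a single interface. The minimal Python sketch below is hypothetical; its class and method names are assumptions of the example and correspond to nothing in the dissertation's prototype or in Sahana.

    class Dialog:
        # Dialog component: the interactive interface between user and system.
        def show(self, report):
            print(report)

    class Data:
        # Data component: storage and retrieval, standing in for the DBMS.
        def __init__(self):
            self._store = {}
        def put(self, key, value):
            self._store[key] = value
        def get(self, key):
            return self._store.get(key)

    class Model:
        # Model component: the analytical routines applied to the data.
        def rank(self, options, score):
            return sorted(options, key=score, reverse=True)

    class DSS:
        # Wires the three DDM components together, as in Figure 3.1.
        def __init__(self):
            self.dialog, self.data, self.model = Dialog(), Data(), Model()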

3.2.4 Decision Support Systems Defined

Decision Support Systems (DSS) are defined by many authors, but all of the descriptions, in part or in full, share common characteristics: a procedure by which an individual or group uses technology to support decision making (Alter, 1980; Bonczek et al., 1981; Keen and Scott-Morton, 1978; Sprague and Carlson, 1982). Primarily, the system is supposed to support the user, not replace them. The system uses data and models in order to aid in making more informed and/or better decisions. A DSS can support various types of problems, be they semi-structured, ill-structured, tame or wicked (Rittel and Webber, 1973; Keen and Scott-Morton, 1978; Bonczek et al., 1981; Sprague and Carlson, 1982). Last, in a DSS the focus of the decision process is on effectiveness rather than efficiency.

3.2.5 Wisdom and the Human Factor

The DSS is a tool which should aid individuals in making the best decision on their own and aid in the creation of collective intelligence when a group is interacting (Hiltz and Turoff, 1978). A decision support system can be many things; however, it cannot replace the human component where judgment must be exercised. The function of a human decision maker as a component of a DSS is not to enter data to build a database, but to exercise judgment, or intuition, throughout the entire decision making process (Eom, 2001). For both individuals and groups, the DSS should ultimately provide the basis for a group to reach beyond their own knowledge and advocate their judgment in the form of wisdom. “Wisdom goes beyond knowledge because it allows comparisons (judgments) with regard to know-what and know-why.” “It is a long way from data to wisdom” (Zeleny, 1987, p. 60).

In the remainder of this chapter, the design contingencies for a DSS, namely group size, proximity, and problem type, are identified. Next, the requirements of users are discussed and roles are defined. Membership rights are then defined and elaborated, specifying who receives which rights according to the role they play. Many types of DSS are described, depending on the nature of the group working together, their problem types, and their proximity to one another. Studies of group activity coordinators, such as facilitators and moderators, are reviewed, and their effects on decision making outcomes are compared. Last, some methodologies are discussed in which particular types of group problems and task types are identified. The chapter concludes by drawing together some existing problems that need improvement.

3.3 Design Contingencies

The objective behind a DSS is to act as an interface for a decision maker, giving them the proper set of tools, based on appropriate models and database systems, all in an effort to assist in making an effective decision (Turoff and Hiltz, 1982). According to DeSanctis and Gallupe (1987), there are three primary contingencies that should be considered when designing a DSS: the size of the group, individual proximity, and the type of task before the group.

3.3.1 Size Matters

Systems are built to support individuals or groups (Turoff and Hiltz, 1982). Decision making groups are defined as “two or more people who are jointly responsible for detecting a problem, elaborating on the nature of the problem, generating possible solutions, evaluating potential solutions, or formulating strategies for implementing solutions. The members of a group may or may not be located in the same physical location, but they are aware of one another and perceive themselves to be a part of the group which is making the decision” (DeSanctis and Gallupe, 1987).

These group sizes can range from a few people co-authoring a paper, to a medium-sized group such as a typical class (30), to very large groups of people interacting globally (White et al., 2008a). Information overload is a concern for individuals and groups. When large groups work together, tools for filtering or skimming should be implemented (Hiltz and Turoff, 1985).

When working with groups and considering group estimation processes, there are some tautologies which are significant (Dalkey, 1967); a brief numeric sketch of the second tautology follows the list:

    1. The total amount of information available to a group is at least as great as that available to any member;

    2. The median response to a numerical estimate is at least as good as that of one half of the respondents;

    3. The amount of misinformation available to the group is at least as great as that available to any member;

    4. The number of approaches (or informal models) for arriving at an estimate is at least as great for the group as it is for any one member.
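To make the second tautology concrete: by the definition of the median, at least half of the individual estimates lie on the far side of the median from the true value, so their errors are at least as large as the group's. The Python sketch below, using hypothetical panel estimates, illustrates this arithmetic.

    import statistics

    # A minimal numeric sketch (hypothetical data) of tautology 2: the
    # group's median estimate is at least as close to the true value as
    # the estimates of at least half of the respondents.
    def median_vs_individuals(estimates, truth):
        group_estimate = statistics.median(estimates)
        group_error = abs(group_estimate - truth)
        # Fraction of respondents whose error is no smaller than the group's.
        no_better = sum(1 for e in estimates if abs(e - truth) >= group_error)
        return group_estimate, group_error, no_better / len(estimates)

    # A hypothetical panel estimating a quantity whose true value is 100.
    print(median_vs_individuals([70, 85, 95, 110, 130, 160], truth=100))
    # -> (102.5, 2.5, 1.0): here no respondent beat the group median.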

3.3.2 Proximity

Groups of individuals working together can be collocated, fully distributed, or partially distributed, where some members are collocated while others are distant.

Face-to-Face

Groups who work together in one geographic location are referred to as face-to-face (F2F). By contrast, Huang and Ocker (2006) define ‘distributed’ as “a collection of geographically, organizationally and/or time dispersed individuals who work interdependently, primarily using information and communication technologies, in order to accomplish a shared task or goal.”

Partially Distributed Teams

In partially distributed teams (PDTs), some subgroups are collocated but distant from other subgroups (Huang and Ocker, 2006). A PDT is defined as “a virtual team, in which some sub-groups are collocated, yet the subgroups are dispersed from each other, and communication between them is primarily by electronic media” (Plotnick et al., 2008). Given the proper set of tools in a computer mediated communication system, groups and subgroups can work as effectively, if not better, using electronic technology (Turoff and Hiltz, 1982).

Online Communities

While most traditional groups have an F2F element, more organizations are moving to an online format, where a community forms and utilizes CMC to conduct its daily business. There are many definitions of online communities, three of which are presented here. Hagel and Armstrong (1997) define a virtual community as a computer-mediated space where there is an integration of content and communication, with an emphasis on member-generated content. Lin (2003) defines a virtual community as “the gathering of people with common interests to share information and coordinate their work via information technologies, specifically for transaction, interest, fantasy, and relationships.” Preece (2000) further specifies an online community as consisting of:

· “People, who interact socially as they strive to satisfy their own needs or perform special roles, such as leading or moderating.

· A shared purpose, such as an interest, need, information exchange, or service that provides a reason for the community.

· Policies, in the form of tacit assumptions, rituals, protocols, rules and laws that guide people’s interactions.

· Computer systems, to support and mediate social interaction and facilitate a sense of togetherness.” (Preece, 2000, p.10)

Although DeSanctis and Gallupe (1987) strongly believe that the aforementioned contingencies are important, they specifically designated group size and proximity as the most critical factors upon which to design a group decision support system (GDSS). Figure 3.2 describes the taxonomy they developed to further describe the different settings for a GDSS.

Figure 3.2 Different settings for GDSS.

The most static of the settings calls for a Decision Room. This is basically a computer/technology enhanced room for a group of collocated decision makers. It is normally a U-shaped meeting room where each member can see the other members, and where both individual and group support systems are in place. The Legislative Session setting is described as a Decision Room, but much larger. More rules are in place for the discussion facilitation, and a moderator may need to be present to ‘rule’ over the procedural process. The Local Area Decision Network consists of a smaller group of people who are geographically dispersed, be it down the hall or across the country. The last setting is described as a Computer Mediated Conference. This supports large groups of individuals working together in a distributed mode (DeSanctis and Gallupe, 1987).

3.3.3 Problem Types

There are different types of problems that can be handled and supported by a DSS. Due to the basic nature of how humans think and make decisions, these problems are unstructured (Simon, 1960). Problem types for DSS range from semi-structured to wicked (Simon, 1960; Rittel and Webber, 1973; DeSanctis and Gallupe, 1987). It is the problem type and the level (or lack) of structure which largely define the functionality requirements of the system (Simon, 1960; Hiltz et al., 1996; Zigurs, 1998; Eom, 2001). Different types of problems require different approaches.

Some problem types are:

· Structured – a problem that has a discrete process by which it can be solved. This can be programmed as an automated task in an information system (Simon, 1969). There is no need for a complex decision support system, as the variables are known beforehand and a simple step-by-step design, such as that of an enterprise resource system, can be used.

· Semi-structured (Simon, 1976).

· Ill-structured (Simon, 1976).

· Wicked (Rittel and Webber, 1973) – not defined as a problem with malicious intent, but one where there is no clarity or consensus on definitions, goals, and the like. The facts may be based on a belief system, which makes statements neither right nor wrong.

McGrath (1984) integrated numerous task types and categorized them into a ‘circumplex model’ which defined the tasks by what the group had to accomplish during their meeting. Three tasks are identified: 1) the generation of ideas and actions, 2) choosing from those alternatives, and 3) negotiation among alternatives.

Structured Problems

Structured problems are inflexible and have workable, discrete solutions that can be automated. However, when the environment changes, these automated processes can produce the wrong solutions (Turoff and Hiltz, 1982).

Semi-structured Problems

Semi-structured problems are “expressed in terms of problems that can be organized” (Keen and Scott-Morton, page 861). Many problems fall under this category and must be dealt with by decision makers within organizations. These groups tend to be dispersed geographically as well as organizationally, and must use systems that support the needs of their members (Turoff and Hiltz, 1982).

Ill-structured Problems

Decision makers may have to interact with others in order to better understand the needs of an ill-structured problem (Turoff and Hiltz, 1982). To help groups of decision makers understand a problem and come up with effective solutions, a GDSS needs to use a combination of technologies to aid in communication, computing, and decision support (DeSanctis and Gallupe, 1987). Larger groups generate a better analysis of a complex problem and have a better chance of turning an ill-structured problem into a semi-structured one (Turoff and Hiltz, 1982). Ill-structured problems are often defined as, and grouped together with, ‘messy’ or ‘wicked’ problems (Horn, 2001).

Tame Problems

A tame problem is by no means a structured problem, but does have a well-defined problem statement which has a solution that can be objectively evaluated as right or wrong.

By definition, a tame problem has a well-defined and stable problem statement; a definite stopping point; a solution that can be objectively evaluated as being right or wrong; solutions that can be tried and abandoned; and belongs to a class of similar problems that can be solved in a similar manner (Digh, 2000).

Wicked Problems

By contrast, wicked problems, like affordable housing, disparities in healthcare, and institutional racism, are ill-defined and ambiguous, can be associated with strong moral, political, and professional issues, and are constrained by political, economic, cultural, and ideological values (Digh, 2000; Horn, 2005). Wicked problems are volatile and of a very dynamic nature, with considerable uncertainty and ambiguity (Horn, 2005).

There are many causes of wicked problems, which are coupled with other issues and constraints, all of which evolve and unfold over time (Rittel and Webber, 1973; Horn, 2001). Each wicked problem is unique, with no definitive formulation. Wicked problems are not well understood until they are experienced, and only when potential solutions are formulated does one understand the problem more (Conklin and Weil, 1998; Conklin, 2005). The problem is complex and recursive in nature, for when you understand one problem, it only leads to questions that lie in defining other problems (Horn, 2001). Since wicked problems are unique and have not been experienced before, a well-defined solution set is neither feasible nor available (Rittel and Webber, 1973).

Since the problem cannot be defined, it is difficult to tell when it is resolved, since judgment is based not on right or wrong, but on good or bad, which is subjective and depends on the perspective of the stakeholder (Rittel and Webber, 1973; Digh, 2000; Horn, 2001; Conklin, 2005). Formulating the problem and the solution are essentially the same thing. Each attempt at creating a solution changes the understanding of the problem. Every wicked problem can be considered a symptom of another problem. Solutions to wicked problems are not true-or-false but good-or-bad. There is no immediate and no ultimate test of a solution to a wicked problem. Every implemented solution to a wicked problem has consequences. Solutions to wicked problems generate waves of consequences, and it is impossible to know how all of the consequences will eventually play out (Rittel and Webber, 1973).

Wicked problems are ongoing and have no stopping rule (Rittel and Webber, 1973; Digh, 2000). They are never resolved, but change over time (Conklin, 1998). Wicked problems are ‘solved,’ per se, when they no longer hold interest for the stakeholders, when resources are depleted, or when the political agenda changes (Rittel and Webber, 1973). There are many stakeholders with multiple value conflicts who repeatedly redefine what the problem is, reconsider what the causal factors are, and hold multiple views of how to approach and, hopefully, deal with the problem (Rittel and Webber, 1973; Conklin, 1998; Digh, 2000). Getting and maintaining agreement amongst the stakeholders is most difficult, as each has their own opinion of what is best (Rittel and Webber, 1973). As with ill-structured problems, wicked problems are best managed if they can be turned into tame or structured problems (Rittel and Webber, 1973; Turoff and Hiltz, 1982).

3.4 Varieties of Decision Support Systems

Decision support systems are designed to meet the needs of the users. Some considerations are based on the number of users and their proximity to one another.

3.4.1 Computer Mediated Communication Systems (CMC)

Computer Mediated Communication (CMC) systems is a general term describing computer systems used for communication between people in a variety of ways, such as e-mail, web pages, and forums. They can be used synchronously or asynchronously, and in face-to-face meetings or in a distance format (DeSanctis and Gallupe, 1985; Hiltz et al., 1996).

3.4.2 Single User Decision Support Systems

When a single user is using a system for decision support, the system is used for information gathering and inquiry-seeking needs (Churchman, 1971). Information is presented in graphs, tables, or any other means of filtering information into a manageable amount, so as to not suffer from information overload (Hiltz and Turoff, 1985). Software can act as a part of the solution to information overload. The individual must learn to use the system by selecting their own criteria for filtering information, and must develop the social skills necessary to work effectively with other group members (Hiltz and Turoff, 1985).

As the workforce increasingly operates on a global level, a trend has developed to use DSS not only for single-user decision making, but also to network organization members together into distributed decision making (DDM) systems (Eom, 2001). These single users are connected in such a way as to be able to work independently of one another on a sequential joint production (Hiltz and Turoff, 1985; Rathwell and Burns, 1985).

3.4.3 Group Decision Support Systems (GDSS)

A Group Decision Support System is defined as 'an interactive computer-based system which facilitates solution of unstructured problems by a set of decision makers working together as a group' (DeSanctis and Gallupe, 1985, pg. 3). Group Decision Support Systems (GDSS) are characterized as “a set of software, hardware, and language components and procedures that support a group of people engaged in a decision-related meeting” (Huber, 1984). The proper combination of tools or structures can help a group be more efficient and get better outcomes or make better decisions (Turoff and Hiltz, 1982, DeSanctis and Gallupe, 1985). There is a greater chance for equal participation and numerous ways to interpret data such that the human will understand the information better (Hiltz, 1996).

One integrated system should be used for group software, such that individuals use one set of technologies (Turoff et al., 1993). Group support systems have a primary goal of helping groups be more productive (de Vreede, 2001). Group support systems offer a wide variety of tools that both individuals and groups can use to make better decisions, structure activities, and improve communications by supplying the proper technology (Huber, 1984; DeSanctis and Gallupe, 1985; Hiltz, 1996; de Vreede, 2001). Especially with large groups of individuals working together, the end result is a collective intelligence, where the group is smarter than any one individual (Hiltz and Turoff, 1985; Turoff et al., 2002).

Levels of GDSS

Where it concerns members exchanging information, DeSanctis and Gallupe (1987) defined three levels/approaches for group support. Level 1 concerns communication mediums. It focuses on removing communication barriers and facilitating the information that is exchanged between group members. Level 2 encompasses the incorporation of modeling technologies into the interpretation of data, as well as methodologies to aid in the reduction of ‘noise’, decreasing information overload and aiding in decision making where it involves uncertainty. Level 3 includes supporting more intelligent, dynamic decision making, where rules enforcing patterns of information exchange are imposed by expert advisors during the meeting. For example, Robert’s Rules of Order could be implemented in a Level 3 GDSS, where the system could even participate in the building or selecting of rules in the decision making process.
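As a concrete (and purely hypothetical) illustration of a Level 3 rule, the Python sketch below enforces one convention in the spirit of Robert's Rules of Order: a motion must be seconded before debate is in order, and the system itself blocks out-of-order contributions.

    # A minimal, hypothetical Level 3 rule: the system enforces a
    # pattern of information exchange rather than a human chair.
    class Motion:
        def __init__(self, text):
            self.text = text
            self.state = "proposed"

        def second(self):
            # A proposed motion may be seconded by another member.
            if self.state == "proposed":
                self.state = "seconded"

        def debate(self, member, comment):
            # The system blocks debate until the motion is seconded.
            if self.state != "seconded":
                raise PermissionError("Out of order: the motion has not been seconded.")
            print(f"{member}: {comment}")

    m = Motion("Adopt the revised evacuation plan")
    m.second()
    m.debate("member_3", "I support the motion because ...")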

3.4.4 Distributed Group Support System

Distributed Group Support Systems embed GDSS type tools and procedures within a Computer-Mediated Communication (CMC) system to support collaborative work among dispersed groups of people. "Distributed" has several dimensions: temporal, spatial and technological (Hiltz, et. al, 1996).

Communication support can be provided by a DGSS in five different ways (Turoff et al., 1993). One way is by offering numerous alternative communication channels to support different types of information, be it textually based, graphically based, or structured data. Second, rules and protocols can be implemented along with user roles, so that structured interactions and membership permissions can enforce rules that improve the group’s performance. Third, a DGSS should aid both the individual and the group with information seeking while reducing information overload, through a variety of functionalities such as information gathering, filtering, and feedback. Sophisticated decision making methods are a fourth way that these systems can support communication needs. Fifth, a DGSS can help synchronize the entire communication process in any phase of decision making. In addition, Turoff notes that a group memory should be maintained for future use and/or to help in any other phase (Turoff et al., 1993).

3.4.5 Social Decision Support Systems (SDSS)

The primary objective of a social decision support system is to facilitate the gathering of multiple points of view from varying perspectives and to create a working body of knowledge (Turoff et al., 2002).

SDSSs are designed given a specific set of governing criteria and objectives (Turoff et al., 2002). Structures are based on the issues which need to be discussed, the options that are available from which to choose, the comments that individual group members may wish to contribute, and the relationships among the three (Turoff et al., 2002).

3.5 Information Overload

If there is too much or too little of a workload, human performance deteriorates (Rouse, 1975; Hiltz and Turoff, 1985). Information overload is defined as “information presented at a rate too fast for a person to process” (Sheridan and Ferrell, 1974). Computer mediated systems should provide the necessary tools to help users filter and organize data, helping them to screen large amounts of incoming information (Hiltz and Turoff, 1985). Novice users will have to put time into learning a system, whereas experienced users will manage their information better, since they have already devised ways of filtering (Hiltz and Turoff, 1985). There are several ways to overcome information overload using a computer. First, human roles can restrict functionalities to particular categories of membership, thus reducing the information to a minimum. Second, protocols can be implemented directing the group what to do and when. Third, knowledge structures can be used to aid in decision making (Turoff et al., 1991).

3.5.1 How It Happens

A correlation has been found between user experience with a system and the management skills that reduce information overload (Hiltz and Turoff, 1985). Studies conducted with users of management information systems show that intermediate users suffer the most from information overload (Dickson et al., 1977).

3.5.2 What Occurs From It

There are many negative ways that humans deal with information overload, all of which are conducive to poor decision making. Hiltz and Turoff offer a list (1985, page 682) of ways individuals may cope:

1. Fail to respond to certain inputs,

2. Respond less accurately than they would otherwise,

3. Respond incorrectly,

4. Store inputs and then respond to them as time permitted,

5. Systematically ignore (i.e., filter) some features of the input,

6. Recode the inputs in a more compact or effective form, or

7. Quit (in extreme cases) (Sheridan and Ferrell, 1974).

The primary way individuals cope with too much information is by filtering and omitting or ignoring it (Miller, 1962).

3.5.3 How to Decrease It

There are a number of ways to decrease information overload. For example, communications can be regulated by social pressures. In addition, the design of a system should support user experience in learning to use the system efficiently with the given tools, and should provide numerous options for personal preference in the filtering of information (Hiltz and Turoff, 1985).

Data Management

There are two basic processes for increasing the organizational efficiency of information systems: message routing and message summarizing (Huber, 1982). System notifications that information has already been read, or conversely that new information has arrived, also aid in the reduction of overload. More communication options, giving the end user the freedom to filter and skim, also greatly reduce the incoming data to a manageable set of information (Hiltz and Turoff, 1985).
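As a minimal sketch of these two processes, the hypothetical Python example below routes each incoming message only to the roles subscribed to its topic and summarizes an overflowing inbox; the role names, topics, and summary rule are illustrative assumptions, not drawn from Huber (1982).

    from collections import defaultdict

    # Hypothetical subscriptions: each role sees only its own topics.
    subscriptions = {"logistics": {"supply", "transport"}, "medical": {"triage"}}

    def route(messages):
        # Message routing: deliver each (topic, text) pair only to
        # the roles that subscribe to that topic.
        inboxes = defaultdict(list)
        for topic, text in messages:
            for role, topics in subscriptions.items():
                if topic in topics:
                    inboxes[role].append(text)
        return inboxes

    def summarize(texts, limit=2):
        # Message summarizing: show the first few items and collapse
        # the remainder into a count the user can skim past.
        if len(texts) <= limit:
            return texts
        return texts[:limit] + [f"... and {len(texts) - limit} more"]

    inboxes = route([("supply", "Water low at shelter 3"),
                     ("triage", "ER at capacity"),
                     ("transport", "Route 9 closed"),
                     ("supply", "Generators arriving tonight")])
    print({role: summarize(msgs) for role, msgs in inboxes.items()})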

Social Influence

There are social pressures that can aid in the reduction of information (Hiltz and Turoff, 1985). First, restrictions or limitations can be placed on the length of comments allowed. Second, the group can be allowed to create its own norms as the membership increases or decreases, reflecting the views of the group (Hiltz and Turoff, 1985). Pen names and anonymity, along with individual member rights to view, post, or edit information, can also reduce information overload (Hiltz and Turoff, 1985).

Conference Systems

Conference systems can aid in the management of large amounts of data. A conference system allows discussions to occur and, used along with a message system, can greatly reduce the flow of unwanted messages (Hiltz and Turoff, 1985).

System Induced Measures

There are automated methods that can be employed in a system, such as ‘self-destruct’ dates, to aid in eliminating unwanted data, which in turn reduces the information to sift through (Hiltz and Turoff, 1985). Systems can also reduce information overload by implementing database methods that help sort, search, and find strings of information in text-based documentation (Hiltz and Turoff, 1985).

3.6 Process Catalyst

A person or group of persons can fill or implement process catalyst roles using the computer mediated communication system. These catalysts are referred to as facilitators, coordinators, moderators, leaders, chauffeurs, or user-driven systems (Hiltz, Johnson and Turoff, 1982; Hiltz, 1984; Hiltz and Turoff, 1985; DeSanctis and Gallupe, 1987; Bostrom et al., 1993; Hiltz, 1996; Limayem, 2006). Some of these roles, or parts of the catalyst functionalities, can be automated (de Vreede, 2001; Limayem, 2006).

These aforementioned roles can edit, move, delete, or prompt discussions (amongst other functionalities), trigger voting among particular subgroups, or request actions, taking on the powers defined by the role (Hiltz and Turoff, 1985; White et al., 2007a).

3.6.1 Leader

Leadership and process are key variables that influence decision making amongst a small group of individuals in a traditional face-to-face environment (Hiltz, 1996). Structured processes, such as brainstorming (Osborn, 1957) or the Nominal Group Technique (Van de Ven and Delbecq, 1971), can aid rational decision making and enhance both process and outcomes. These same variables are not as effective in an asynchronous distributed environment (Hiltz, 1996). Using the group as the decision maker proves better than any single individual (Condorcet, 1976; Hiltz and Turoff, 1993).

3.6.2 The Facilitator

Human facilitation is where an individual intervenes using special privileges provided by the role in the software. This individual aids in the group’s decision making by carrying out a set of activities throughout the process (before, during, and after) (Bostrom et al., 1993; Limayem, 2006). Human facilitators can be key in improving group performance (Reagan-Cirincione, 1994) and can help groups accomplish their task (Hiltz, Johnson and Turoff, 1982). Conference leaders, called coordinators, prove useful in enforcing protocols that aid navigation (Hiltz, Johnson and Turoff, 1982). Groups may need help in using a system, and the facilitator can act as an interface (DeSanctis and Gallupe, 1987). This technical driver is also referred to as a chauffeur (Dickson, 1989).

Whether to have a designated facilitator, human or automated, is a difficult question in a community using a computer mediated communication system. Whitworth (1999) describes the interaction method, which requires a facilitator to motivate the people during the entire decision making process. Other types of group interaction, such as Robert’s Rules of Order, require a monitor or parliamentarian to keep the meeting on track and enforce the rules (Pitt, 2005). Hiltz and Turoff (1996) believe that, no matter the complexity or organization, there will always be a need for a facilitator or moderator. They state that in a Delphi system in particular, human intervention is barely required, due to the structure of the methodology. If someone is used, the software powers or special privileges that such an individual needs are:

· Being able to freeze a given list when it is felt there are sufficient entries to halt contributions, so as to focus energies on evaluation of the items entered to that point in time.

· Being able to edit entries to eliminate or minimize duplications of resolutions or arguments.

· Being able to call for final voting on a given item or set of items.

· Being able to modify linkages between items when appropriate.

· Reviewing data on participation so as to encourage participation via private messages.

While a lot of material in an online Delphi can be delivered directly to the group, the specific decisions on this still need to be made by the person or team in control of the Delphi process. In Computer Mediated Communications, the activity level and actions of a conference moderator can be quite critical to the success of an asynchronous conference and specific guidelines for moderators can be found in the literature (Hiltz, 1984).

Groups, especially in the virtual setting, need a facilitator to introduce agendas or trigger a decision making process (Tarmizi, 2006). A facilitator is a key role in building online communities: ‘One of the techniques that can help to sustain a community of practice (CoP) is the introduction of a facilitator, since a facilitator can play a crucial role in addressing the challenges of establishing and nurturing a community’ (Tarmizi, 2006). One of the most important aspects of the policies and procedures that govern an online community is the set of roles and responsibilities that are used (Fontaine, 2001). In particular, leadership in the form of facilitators or moderators is crucial. Another crucial aspect is the norms and practices of interaction that support trust and build community spirit and cohesiveness.

3.6.3 Moderator

Problems can arise when group consensus is unattainable, and a moderator may need to intervene. A facilitator can then change to a moderator, as long as it is a third-party, unbiased member of the group who is there to help navigate the group towards a satisfactory consensus (White, Hiltz and Turoff, 2007). Another important point is presented by Voss (2003), who states that “for an e-discourse it is important to be documented at a shared place so that everyone can access the same version at anytime from anywhere.”

Cristal (2006) notes that, over time, once the community members are interacting actively, less organizational intervention is needed, for the group (community) will then sustain itself.

Motivation

Catalysts can be used as motivators. One of the largest barriers to overcome in creating online communities is getting members to participate in the initial stages of the community's creation. Members have a higher satisfaction rate when they collaborate as a community (Kimble, Li, and Barlow, 2000). Exposing and defining shared interests plays an important part in motivating individuals to come to the site initially and eventually participate. It is the domain of information which should be one of the driving forces behind the motivation to participate, along with offering something to the members that they cannot find elsewhere (Wenger, 2003). A cooperative relationship among members is “a way to increase sense of community, which in turn could lead to increases in participation” (Tarmizi, 2006). It is the community itself that must discover and be the driving force behind this motivation.

Anonymity can be taken too far (Turoff and Hiltz, 1996). It is important that experts know that they are working with other credible people, to make the effort worthwhile. This is a key factor in motivating participants to interact. Anonymity can still be in place while, at the same time, users know the group of people they are working with. For example, a list of people’s names and affiliations could be given, so that users know they are working with this identified group of people, while still using pen names and the like. Also, people can have the ability to identify themselves if they so choose. Last, anonymity can be used selectively, as can the identification of group participants. For example, the experts may want to use their real names during the arguments, detailing where the information for an argument comes from, but then have the voting be anonymous.

3.6.4 Chauffeur

A “chauffeur” is described as a person who helps groups through the technical aspects of a system. Dickson et al. (1989) propose that this is beneficial if a group uses a system very rarely. The chauffeur is defined as an unbiased individual, external to the decision making of the group, who simply ‘drives’ the group through the process (Dickson et al., 1989).

3.6.5 Automated

As stated earlier in this chapter, structured problems can be automated. However, even dynamic structures can have automated functionalities that can, for example, reflect a vote (White et al., 2007a). Automated facilitation can implement a protocol into a system, executing a structured decision making process (Limayem and DeSanctis, 2000; Limayem, 2006). Based on models of human thinking, automated facilitators aid the group in reaching their outcomes by providing guidance during decision making (Limayem, 2006). The automation of some facilitation tasks has also proven to have positive effects (de Vreede, 2001).

3.7 Methodologies

Decisions can be made by individuals or groups. The decision making can have a single dimension or be multidimensional (Bard and Sousk, 1990; Iz and Gardiner, 2005). Many issues can be measured during decision making, such as decision outputs, changes in the decision process, changes in managers' concepts of the decision situation, procedural changes, cost/benefit analysis, service measures, and managers' assessment of the system's value (Keen and Scott-Morton, 1978; Eom, 2001).

3.7.1 Unidimensional Data

When working with unidimensional data, items can be ranked from one extreme to an opposing extreme, as occurs with ranked data (Thurstone, 1927a). Paired comparisons have been used in a number of ways to support both individual and group decision making (Thurstone, 1927a, 1927b). They have also been used for weight assignment and preference scores (Saaty, 1980; Thurstone, 1927a; Ichikawa, 1980; Kok, 1986; Kok and Lootsma, 1985). Paired comparisons can assign weights to unidimensional data, or they can compare each item as it stands relative to the others (Thurstone, 1927b; Iz and Gardiner, 2005; White et al., 2008b). Paired comparisons are good to use with larger sets of data because they give a more accurate reflection of a user’s opinion than if the user were to rank each item by eye (Miranda, 2001).

It has been suggested to use the Analytic Hierarchy Process (AHP) with results from paired comparisons for cooperative groups (Saaty, 1988; Iz and Gardiner, 2005). AHP uses paired comparisons, along with the arithmetic mean, to calculate weights and preference scores, or it can be used to place alternatives into a cardinal ranking (Saaty, 1988; Bard and Sousk, 1990; Iz and Gardiner, 2005).
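To illustrate how such a cardinal ranking can be derived from paired comparisons, the Python sketch below computes priority weights from a single hypothetical reciprocal judgment matrix; the row geometric mean is used here as a common approximation to Saaty's principal-eigenvector weights, rather than the exact eigenvector computation.

    import numpy as np

    # A minimal AHP sketch under stated assumptions: the 3x3 reciprocal
    # matrix is hypothetical; A[i][j] encodes how strongly option i is
    # preferred to option j on Saaty's 1-9 scale.
    A = np.array([
        [1.0,   3.0, 5.0],
        [1/3.0, 1.0, 2.0],
        [1/5.0, 0.5, 1.0],
    ])

    # Row geometric means, normalized to sum to 1, approximate the
    # principal-eigenvector priority weights.
    row_geometric_means = A.prod(axis=1) ** (1.0 / A.shape[0])
    weights = row_geometric_means / row_geometric_means.sum()
    print(weights.round(3))  # ~[0.648, 0.230, 0.122]: a cardinal ranking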

3.7.2 Multicriteria

Multiple criteria decision making (MCDM) involves numerous options with differing weights and various stakeholders with subjective needs. There are numerous techniques that use multiple criteria in their decision making process (Iz and Gardiner, 2005). One of the more popular methods is Multi-Attribute Utility Theory (MAUT) (Nakayama, 1979; Iz and Gardiner, 2005). Another technique is Multiobjective Mathematical Programming (MMP), which deals with complex problems involving numerous decision variables that have constraints (Mavrotas, 2008). AHP is also used in MCDM as a way to rank order alternatives (Saaty, 1988).
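At its simplest, MAUT scores each alternative a with a weighted additive utility, U(a) = sum of w_i * u_i(a) over the criteria i. The sketch below is a minimal illustration; the criteria, weights, and per-criterion utilities are hypothetical.

    # A minimal MAUT sketch: rank alternatives by weighted additive utility.
    # Criterion weights sum to 1; per-criterion utilities are assumed to be
    # normalized to [0, 1]. All numbers here are hypothetical.
    weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}

    alternatives = {
        "plan_A": {"cost": 0.8, "speed": 0.4, "risk": 0.9},
        "plan_B": {"cost": 0.5, "speed": 0.9, "risk": 0.6},
    }

    def utility(scores):
        # U(a) = sum over criteria of weight * utility.
        return sum(weights[c] * scores[c] for c in weights)

    ranked = sorted(alternatives, key=lambda a: utility(alternatives[a]),
                    reverse=True)
    print(ranked)  # ['plan_A', 'plan_B']: plan_A has the higher utility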

3.7.3 Measuring Decision Support

DeSanctis and Gallupe (1987) offer numerous ways to measure decision support. One way is to measure the outcomes of meetings against a set of predetermined goals and objectives, or even to measure the outcome that was expected against what actually occurred. Another way is to survey the users to find out how satisfied they were with the decision making that took place, or whether the group members are willing to work together again (DeSanctis and Gallupe, 1987).

3.8 Conclusion

It has been demonstrated that there are many issues to take into consideration when designing a decision support system. Group size, task type, and proximity are major considerations; however, communication structures must also be provided to support the particular needs of the group. The approach to each problem type needs to be catered to its environmental needs, and some processes need some form of catalyst for discussions. Some issues of particular interest are:

· How to best create a system for a very large group of users;

· How to manage the roles these users fill and the privileges that will be given to them on the system;

· How to best manage the increasing mass of information presented to the user;

· Which model best reflects the group’s opinion;

· How to best implement human or automated facilitative roles;

· How to best present the information to the user for fast and accurate decision making needs.

Therefore, it is believed that as a critical part of this research endeavor, the system must be tailored to support the communication needs of the users.

CHAPTER 4

LITERATURE REVIEW: DELPHI

4.1 Abstract

This chapter presents a literature review of the Delphi technique. The Delphi technique is interpreted in many different ways, reflecting the numerous ways it can be implemented. It is one of the fundamental techniques implemented in the system, and it further supports Design Science Research in fulfilling the third guideline, which states that the design must be based on scientifically supported research. The objective of this chapter is to identify the characteristics of the technique which, directly or indirectly, affect the decision making process and its outputs. Reasons and rationalizations for selecting the method are found in the strengths outlined here. Limitations are also examined. A critical component of the Delphi technique is the requirement that experts serve as the participants. The importance of this is described, and key issues are discussed. Implementing the Delphi technique as part of the model for this decision making system will benefit the decision makers who use the system. This strategy offers them an organized way to approach decision making, given that the decision maker surrounds themselves with experts and uses their input in the evaluation process.

4.2 Introduction

Delphi is characterized by a set of procedures which elicit opinions from experts in order to reach a group consensus concerning a particular subject (Dalkey, 1967; Helmer, 1967; Hardy, 2004). In particular, the use of experts in decision making is beneficial when information is lacking (Dalkey, 1969). It is intuition and other less understood mental processes that make experts excellent decision makers for problems dealing with uncertainty. Linstone and Turoff (1975) extended the original idea by elaborating on how the complexity of the problem being addressed can require larger groups of experts who are heterogeneous in nature. This was followed by Hiltz and Turoff (1985) further defining how these complex problems could be handled by larger groups of experts.

The problem needs to be structured to allow subgroups to better focus on the areas they are most confident about. A mesh network of these subgroups could dynamically address each problem. This is best described by Hiltz and Turoff:

“Delphi is a communication structure aimed at producing detailed critical examination and discussion, not at forcing a quick compromise. Certainly quantification is a property, but only to serve the goal of quickly identifying agreement and disagreement in order to focus attention” (1996, p 2).

In 1967, two researchers at the RAND Corporation, Olaf Helmer and Norman Dalkey, published work describing a means by which groups could make better decisions, which they referred to as Delphi (Dalkey, 1967; Helmer, 1967; Fisher, 1978; Linstone and Turoff, 1975; Hiltz and Turoff, 1996; Baker, 2006). Dalkey conducted research on the accuracy of groups considering their interactions. He found that the averaged opinion of a group of individuals was more accurate than the consensus the same group would reach in a traditional face-to-face meeting (Fisher, 1978).

After running some Delphi experiments, “Dalkey discovered that

1. Opinions tend to have a large range in the initial round but that in succeeding rounds there is a pronounced convergence of opinion and;

2. Where the accuracy of response can be checked, the accuracy of the group response increases with iteration” (Fisher, 1978, p 65).

This was most beneficial, as it demonstrated that groups of experts may produce better decisions, which are more accurate and reliable. Although this was beneficial, there were many limitations. For example, questions from experts were limited in each round, so although an expert might have five questions, they were only allowed two per round, and only on given rounds. As with any new methodology, there was much room for improvement.

Hiltz and Turoff imply that collective intelligence is at the core of Delphi, emerging from the collaboration between a group of individuals who have a synergistic effect. The idea is that the group will be at least as smart as the smartest individual, and moreover, that the group will reflect a collective intelligence greater than any one group member could have offered.

In the remainder of this chapter, the characteristics of Delphi are explored. In particular, focus is given to the three primary characteristics: 1) anonymity, 2) feedback, and 3) the statistical group response. Next, a review of Delphi processes is covered. Problems with various aspects of Delphi are examined, followed by some solutions and alternatives. Areas which may be subject to bias are identified. Next, what defines an expert is explored. This is followed by a related issue: how having improper subjects participate in Delphi experiments affects end results. Basically, if one has non-experts participating in a research project that requires experts, then what good are the results? Last, some experiments are reviewed and analyzed.

4.3 Characteristics of Delphi

Delphi means many different things to many different people. One of the originators, Norman Dalkey, identified three core characteristics of Delphi. His claim, along with Olaf Helmer’s, was that combining user anonymity with rounds of controlled feedback, consisting of information and a statistical group response, would have a synergistic effect on decision making, with the potential for a group of experts to build a reservoir of knowledge over time.

4.3.1 Anonymity

One of the most recognized characteristics of the Delphi method is anonymity (Turoff and Hiltz, 1996). There are many reasons for implementing anonymity in most aspects of decision making, but primarily it is to reduce the biasing influences one is subject to when making a decision (Dalkey, 1967). Different levels of anonymity can be implemented, from pen names all the way to total anonymity, for different levels of disclosure fulfill different needs (Hiltz and Turoff, 1978; Bezilla, 1978).

Bezilla (1978) described some reasons for anonymity and stated how it could promote interaction, objectivity and problem solving:

· “Interaction for the free exchange of ideas or reporting of matters without the threat of disclosure of the same to peers or even to the collectors or compilers of the data; that is, anonymity can remove any threat that the privacy of personal data will be compromised.

· Objectivity through the masking of identity can serve to suppress distracting sensory cuing or ad hominem fallacies so that matter being reported or discussed can be considered on its intrinsic merits without regard to personal origin or aspects of origin.

· Problem solving for the total subordination of the individual ego to the group task. Presumably anonymity can be used to suppress individual considerations that might hinder the group’s progress in a mission, e.g., one would not have to worry about peer relations, advancement of unpopular ideas, risk ridicule, etc.” (Bezilla, 1978, page 7).

Later, when considering the conduct of Delphi processes on computer mediated communication systems, Turoff and Hiltz offered a host of reasons for anonymity. For example,

· “Individuals should not have to commit themselves to initial expressions of an idea that may not turn out to be suitable.

· If an idea turns out to be unsuitable, no one loses face from having been the individual to introduce it.

· Persons of high status are reluctant to produce questionable ideas even when there is some chance they might turn out to be valuable.

· Committing one's name to a concept makes it harder to reject it or change one's mind about it.

· Votes are more frequently changed when the identity of a given voter is not available to the group.

· The consideration of an idea or concept may be biased by who introduced it.

· When ideas are introduced within a group where severe conflicts exist in either "interests" or "values," the consideration of an idea may be biased by knowing that it is produced by someone with whom the individual agrees or disagrees.

· The high social status of an individual contributor may influence others in the group to accept the given concept or idea.

· Conversely, lower status individuals may not introduce ideas, for fear that the ideas will be rejected outright” (1996, page 6).

There are times when total anonymity may be desired, for example, during a vote. However, there are other times when a vote may need to be validated by knowing who voted which way. There are occasions when the users may want to be in a social environment using either their real or pen names in order to be identified. Further, there may be situations where a user may want to have multiple pen names as well as the ability to participate anonymously. “The advantage of pen names over total anonymity is that participants can address a reply to a pen name, whereas you cannot send a message to anonymous” (Hiltz and Turoff, 1978, p 95). It is critical that members of the group believe they are working with other credible experts from the field; anonymity may not be desirable at that point (Turoff and Hiltz, 1996). One way to use anonymity and still allow members to know ‘the group’ with which they are interacting is to have the identities of the group members named, although members could use pen names. This way, members would know the group they were working with but would not know the individuals behind the pen names. Another factor that may relax the need for anonymity is that the more history a group develops as a social group, the more flexibility they will require where it concerns levels of anonymity (Turoff and Hiltz, 1996).

Olaf Helmer held that Delphi was about experts having an anonymous debate (1967). Research shows that experts debating anonymously, along with feedback, will give more accurate opinions (Helmer, 1967, Hiltz and Turoff, 1978).

4.3.2 Feedback

Feedback is defined in a variety of forms and is measured on different dimensions. For example, interest in a topic can be said to be defined by how much feedback is given within a certain time frame, i.e., those who respond faster find the topic of interest. Feedback can be a stimulus; it can increase learning, can be controlled, and can be provided in a number of formats:

· Responses,

· Graphic visuals, and

· Summarized information (Turoff et al., 2002).

There is positive and negative feedback, each with consequences for decision making and for the bias it can create (Tversky and Kahneman, 1974). Also, feedback can be described by a set of analysis tools operating on two levels: on the individual level, and to aid the individual on the group level.

In Delphi processes, feedback may be given in the form of summarized information in an interactive manner known as rounds. Feedback may also be given in a statistical form showing the interquartile range, so that individuals will know where they stand on an argument with respect to other group members. Feedback is given in a controlled manner. Dalkey (1967) defines controlled feedback as a noise reduction device. Noise is any information that is not conducive to productive decision making; it could be group members arguing about trivial matters, or bad information. One must consider the bias that may be reflected in the summarized data and in the filtering that was conducted to reduce the noise in the data.

Dalkey (1969) stated that allowing participants to ask questions during the rounds allowed variability in the diversity of the questions put forth. This also allowed for subgroups of experts to be more articulate about, and responsive to, the information required.

Feedback comes in many forms, e.g., tables, text, and graphics. It is important to determine which presentations are most beneficial to the decision makers. The implementation of graphics has proven useful, as graphics are a quick way for experts to take in information and determine its meaning (Hiltz and Turoff, 1996).

Another powerful force behind feedback, especially given computer mediated communication systems and the Internet, is how quickly information can be updated (Hiltz and Turoff, 1996; White et al., 2007b). The power that real-time data can give a group of decision makers working on a time critical problem can make the difference between life and death, be it for humans or corporations (White, Plotnick, Aadams-Moring, Turoff and Hiltz, 2008a; White, Hiltz, and Turoff, 2008b).

4.3.3 Statistical Group Response

Norman Dalkey wrote that the two prior characteristics, anonymity and feedback, along with a statistical group consensus, are what make Delphi different from the rest of the decision making strategies being offered (Dalkey, 1967).

However, the application of such a mathematical technique will not produce the qualitative model that represents the collective judgment of all the experts involved. It is that model which is important to understanding the projection and what actions can be taken to influence changes in the trend or in understanding the variation in the projection of the trend (Hiltz and Turoff, 1996, Turoff, et. al, 2002).

Most of the original Delphi experiments used the median, from which the ‘statistical’ group response was derived. As stated previously, the interquartile range would be sent back to the experts every round, indicating the middle 50% of the opinions of the group.

One statistical group response method is the multidimensional scaling approach. However, multiple dimensions can confound the results of one variable with the results of another. Hiltz and Turoff (1996) proposed an alternative way of implementing an older method, Thurstone’s Law of Comparative Judgment (Turoff et al., 2002). Although paired comparison methods can vary in how end results are calculated, Thurstone uses a unit normal table as a way to average out the distribution (Thurstone, 1927). This method is presently being tested for its fit to the given situation. However, many more studies need to be conducted using Thurstone’s method to validate it, as no great number of subjects has been used in recent times (Li et al., 2003; Plotnick et al., 2007). Moreover, even Thurstone’s method can be complicated and can be implemented by various means, as there are five special cases identified (Thurstone, 1927), all handled differently.
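For illustration, the Python sketch below implements one common reading of Thurstone's Case V: each paired-comparison proportion is mapped to a unit-normal z-score, and the z-scores are averaged per item to yield interval scale values. The proportions are hypothetical, and the modified version used in this research may differ in its details.

    from statistics import NormalDist

    # A minimal sketch of Thurstone's Law of Comparative Judgment (Case V).
    # P[i][j] is a hypothetical proportion of judges preferring item i over
    # item j (proportions of exactly 0 or 1 must be trimmed in practice,
    # since their z-scores are infinite).
    inv_cdf = NormalDist().inv_cdf

    P = [
        [0.50, 0.70, 0.90],   # item A versus A, B, C
        [0.30, 0.50, 0.75],   # item B
        [0.10, 0.25, 0.50],   # item C
    ]

    n = len(P)
    # Map each proportion to a unit-normal z-score, then average per item.
    scale = [sum(inv_cdf(P[i][j]) for j in range(n)) / n for i in range(n)]
    print([round(s, 2) for s in scale])
    # -> [0.6, 0.05, -0.65]: an interval scale, higher = more preferred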

So, even the basic core of Delphi, anonymity, feedback, and a statistical group response, holds a multitude of paths from which to choose. These three variables alone make it impossible to standardize any single Delphi method (Hardy, 2004). In fact, it could be this flexibility that hinders the acceptance of any Delphi software developed, as it is easy to see how no two Delphi systems will provide the same results given the same scenario, i.e., no replications can be made against other experiments validating the methodology. However, the baby should not be thrown out with the bath water; rather, stringent explanations of the methods used should be offered to researchers, so that the experiments can be validated by various means.

4.3.4 Why A Delphi Technique?

The Delphi technique is being used in the design of the Dynamic Delphi System for its core characteristics of discussion, anonymity, and feedback. These characteristics, and why they were selected, are presented in this chapter. First, the discussion aspect, being asynchronous and ongoing, fulfills several needs:

· As a measuring device. The word count from the discussion is used as a means to relate changes in votes to the amount of discussion and to observe how the two correlate. Will a greater amount of discussion cause some participants to be persuaded enough to change their votes?

· As a means to decrease uncertainty. It is through the discussion that uncertainty is decreased. Dalkey (1969) conducted research which indicated that in times of uncertainty, deferring to expertise is best. Experts, through intuition and insight, can make educated guesses that will be more accurate when there is uncertainty in a particular problem area. Part of the overall contribution of this research effort lies in how the design of the Dynamic Delphi System manages uncertainty. This is because in crisis management, uncertainty is high, but decisions still must be made. Anonymity is a noted characteristic, but given information overload and the counterproductive heuristics used to manage it, Delphi does not bring in more information; rather, it further utilizes the information already inherent to the group due to the qualifications of the experts.

· A final reason the Delphi technique was used in this design was to further reduce information overload and minimize uncertainty through the use of visual feedback. For this effort, an interval scale calculated by a modified version of Thurstone’s Law of Comparative Judgment is used.

Voting is used as a means for the experts to express their opinions. Immediate feedback displaying the results of real time voting provides the experts with the group’s opinion, which can further stimulate discussion. Seeing their own opinion compared with the group’s lets experts know whether they are in agreement with the overall view; this will either increase or decrease the discussion, which in turn stimulates changes in votes until the problem is no longer addressed.

4.4 Delphi Processes

The Delphi process is defined as “the procedures consist of obtaining individual answers to preformulated questions either by questionnaire or some other formal communication technique; iterating the questionnaire one or more times where the information feedback between rounds is carefully controlled by the exercise manager; taking as the group response a statistical aggregate of the final answers” (Dalkey, 1969, pg 6).

In the traditional Delphi process, all contributions first go to the exercise coordinator, who then integrates all of the information and gives it as feedback to the participants (Hiltz and Turoff, 1996). However, a coordinator would have to be objective in this endeavor, as there would be a tendency to interject bias into the decision making through the coordinator's selective feedback and interpretation of the input materials. In computerized Delphi processes, this mediating intervention is not required, as a more collaborative approach is taken and all of the information proposed by group members is evaluated by the group members themselves (White et al., 2007b). This has many added benefits, one of which is that the data is not interpreted or filtered. Another benefit comes from the collective intelligence that will emerge from the process being distributed in this manner (Linstone and Turoff, 1970; Hiltz and Turoff, 1978; Hiltz and Turoff, 1998; Turoff et al., 2002; Li et al., 2003; White et al., 2007).

Early Delphi Processes

Delphi processes are conducted in a number of ways. A key protocol in the process is defined as a round. This is where discrete steps of instructions are encompassed in a cycle and the process is repeated with some stopping rule. Some processes have more rounds than others and it is alleged that with enough rounds, the best choice would eventually surface (Helmer, 1967). These rounds are given as a means of replicating the interactions and information exchange that would occur in a discussion or debate. Descriptions of the original Delphi process are presented followed by a detailed example.

In November of 1967, Helmer offered the simplest description in his publication, Systematic Use of Expert Opinion:

1. “Have each member of the panel independently write down his own estimate;

2. Reveal the set of estimates but without identifying which was made by whom;

3. Debate openly the pros and cons of various estimates;

4. Then have each person once more independently write down his own (possibly revised) estimate, and accept the median of these as the group’s decision (page 9).”

Published a month prior to this, in October, was a description of the process given by Dalkey (1967), where he elaborates a bit more on how both input and feedback are given; a brief sketch of the round-summary computation follows the list.

1. “A typical exercise is initiated by a questionnaire which requests estimates of a set of numerical quantities e.g., dates at which technological possibilities will be realized, or probabilities of realization by given dates, levels of performance, and the like.

2. The results of the first round will be summarized, e.g., as the median and inter-quartile range of the responses, and fed back with a request to revise the first estimates where appropriate.

3. On succeeding rounds, those individuals whose answers deviate markedly from the median are requested to justify their estimates.

4. These justifications are summarized, fed back and counter-arguments elicited.

5. The counter-arguments are in turn fed back and additional reappraisals collected.”

Helmer offers a description in which the group interactions are elaborated upon. These details are important because they demonstrate how Delphi tried to integrate discussion into the process. Helmer’s explanation was that in Delphi there was an anonymous debate going on amongst experts. He explains further that:

“In the 2nd round, if the revised answer falls outside the interquartile range, he is required to state briefly why he thought that the event would occur that much earlier than the majority seemed to think. Justifying relatively extreme opinions on the respondents typically has the effect shown in the illustration; those without strong convictions tended to move their estimates closer to the median, while those who felt they had a good argument for a deviant opinion tended to retain their original estimate and defend it.

3rd round, respondents were given a summary of the reasons for the extreme positions, and asked to revise their estimates, given the information. If a respondent’s revised answer is outside the new interquartile range, he was now required to state why he was unpersuaded by the opposing argument.

4th round, counter arguments are presented to the group again, giving rise to one last chance for estimating the date of occurrence. Sometimes, when no convergence toward a narrow interval of values takes place, opinions are seen to polarize around two distinct values, so that two schools of thought regarding a particular issue seem to emerge. This may be an indication that opinions are based on different sets of data or on different interpretations of the same data (Helmer, 1967).”

Later research confirmed that most disagreements were due to uncertainty in the interpretation of the information. Hence, there was often real consensus; ambiguity in diction made disagreements appear to exist, only for them to be clarified through discussion (Hiltz and Turoff 1978, Turoff and Hiltz 1996, White et al 2007).

In 1978, Fisher, et. al, published a study detailing how the process was conducted, giving further examples of how the problem is integrated into the process. They also describe details not presented in either of the aforementioned descriptions. Here is the process, given along with an experiment:

1. “Draw up a list of experts whose opinions we think would be valuable, and mail each expert a questionnaire explaining the nature of the study and asking each person to generate a list of important developments likely to occur in the field of library automation during the next 25 years.

2. From the first-round responses, we would consolidate items and eliminate those suggested items we considered irrelevant, in order to draw up a list of probable developments. We would then mail the second questionnaire and request each respondent to indicate the date when each listed development is likely to be implemented.

3. From the second-round responses, we would calculate the median and the interquartile range (the interval containing the middle 50 percent of responses) for each item and return this information to each respondent asking him/her to consider the second-round response, in light of this new statistical information, and to move either to the interquartile range or to briefly state reasons for remaining outside the range.

4. From the third-round responses, we would supply respondents with new consensus data and a summary of minority opinions and ask the respondents for a final revision of their responses.

5. The final result would be a list of possible future developments in library automation, a consensus (the median and the interquartile range) of the date when each development is likely to be implemented, and a list of dissenting opinions. With this final report, we would hopefully have useful information for long-range planning.”

Last, further insight was given into the process by Turoff and Hiltz in 1996. These insights were derived by examining the exchanges between the participants. Information needed to be given to the participants not only to show where an individual stood on a subject, but also to show where they stood relative to the rest of the group, since one individual may have greater expertise in an area or be completely ignorant of it. Further insight was also sought to detect hidden factors that may bias a group in making a decision, or to explain the results as a consequence of the relationships between group members, where, for example, half of the group could be Republicans and the other half Democrats. Once identified, these sorts of biases can explain how the results were influenced.

Early Delphi exercises tended to focus on homogeneous groups, which resulted in numerous variations. For example, the Policy Delphi was designed to discover conflicting views about the possible resolutions of policy issues. Further, the summary of the results was sometimes displayed by what type of experts expressed what sort of view relative to other subgroups of experts.

These processes show how many other agendas can be integrated into the Delphi process. This is no trivial set of events: the simple set of published procedures and the set actually used during an experiment can be quite different. It’s the insight that can be gained from this process that deserves special attention. Recently, these processes and rounds have been dismantled and replaced with asynchronous interaction (White, et. al, 2007b). This takes the forced rounds of discussion and makes them a more ‘natural fit’ to how humans think. This is delved into further in a following section.

4.5 Problems with Delphi

The problems with Delphi are numerous and diversified, ranging from the process itself to the statistical methods used, the sampling methods, expert selection, the evaluation of predictions, and the interface design of computer-based Delphis (Sackman 1975, Fisher 1978, Turoff 1999). Given the same situation, it is unlikely that results will be consistent from one methodology to the next. Some of these problems stem from the lack of standards upon which other experiments or replicated studies can be built. This makes it unreliable to compare the results of one study against the results of another if they are not conducted with the exact same methodology.

4.5.1 Experiments

During this literature review, it was discovered that there is a lack of detail in the descriptions of Delphi experiments. Delphi methodologies need to be compared in a discrete manner. For example, it may be best to look at specific parts, such as the sampling methods used in the selection of expert participants for Delphi studies, and then evaluate how each of those methods was implemented and how it shaped the end result. Information may be missing for a thorough case-by-case analysis, but this approach would still break a Delphi study down into characteristics upon which a strong evaluation could be made.

4.5.2 Subjects

Another critique of Delphi is that the subjects used in the experiments are normally students and not experts (Dalkey, 1967, Sackman 1975, Baker, et al 2006). This is true for the initial study on Version 1.0 of the Delphi Decision Maker. Dalkey described how it would be difficult enough to get even a small group of experts to participate; finding a large enough population of experts who can take time from their jobs and who will agree to participate is unlikely. A group of emergency managers has volunteered to test the Version 2.0 release, but we are looking to secure funding so that further research can be conducted in which the experts are paid for their effort. Another problem is the logistics of conducting a Delphi experiment with experts, given scheduling constraints and the collection of data. However, this is what is required. These logistical problems can be overcome by implementing a Delphi exercise asynchronously and online using a CMC system (Hiltz and Turoff, 1996; White, et. al, 2007b). This has been accomplished with the Pilot System, the Delphi Decision Maker, Version 1.0.

4.5.3 Consensus

Consensus is defined and achieved using a variety of methodologies in Delphi studies, and it can be described in a number of ways. Hardy, et al (2004) presented a few questions for consideration.

1) What is the criterion for determining consensus?

2) What is the criterion for assigning importance?

3) How are the items that meet these consensus and importance criteria interpreted?

Questions arise when contemplating the way consensus is defined, as that definition is reflected in the outcome of the experiment. From this literature review, it is deduced that particular methodologies are implemented to best fit the problem situation, given the considerations that must be taken into account. However, comparing the end results of Delphi studies appears to be problematic due to the lack of consistency in methodology.

4.6 Bias

Bias can be difficult to minimize in Delphi studies, as it can be present in so many aspects of the method. The sample population of experts can be biased: when groups work together in a Delphi fashion, the members may be from the same organization (Baker et. al, 2006). This means that they have been exposed to the same information, immersed in the same corporate culture, and given the same propaganda. The sample pool should come from a large group that is selected to represent all geographical areas and subspecialties (Hardy, et. al, 2004). The group of experts may also be under a moral influence, where the decisions are a matter of principle and thus color the incoming information (Helmer, 1967).

One of the most recognized characteristics of Delphi is anonymity. Anonymity minimizes bias. It promotes an open environment for honest opinions, where individuals’ inhibitions are lifted and ideas flourish (Turoff and Hiltz 1996, Hardy et al 2004). This is further explored in the chapter on Bias.

4.7 Experts

The Concise Oxford English Dictionary defines an expert as “a person who is very knowledgeable about or skillful in a particular area” (Concise Oxford English Dictionary, 2003). A person can be considered an expert by virtue of their position within an organization, by a group of people, or by enough other people deeming them so. Experts have been defined as ‘informed individuals’, ‘specialists in the field’ or ‘someone who has knowledge about a specific subject’ (Keeney 2001). Experience in the field can make one considered an expert (Baker, et al 2006). Baker states that experts are defined by having knowledge, but Keeney argues that mere knowledge does not qualify as expertise.

Experts are the reason Delphi is so powerful in situations where information is lacking and intuition must be drawn upon. Helmer said that it was intuition that played such a large role, filling the gaps between what was known and what was unknown with expert judgment.

At the core of Delphi is the group of experts who make up and use the system. It’s the old ‘input - output’ principle: what goes into the system is what will come out. This makes the process of selecting an expert a critical step, yet it is very loosely defined in research (Keeney, 2001, Baker et al 2006). For a Delphi system to work properly, experts must be selected based on some agreed-upon set of criteria that the research community will respect. Problems with Delphi research studies can be found from the start, in the sampling of the group for participation. Given the difficulty of defining expertise, combined with the feasibility of finding a large group of experts from which to draw a random smaller group, success is very unlikely (Baker, et al 2006). Judging expertise, or defining it, can be the core problem in this stage. Who is an expert, and how is this determined? What justifies someone as being an expert? Solutions to this problem have been identified and elaborated upon by Baker 2006, Sackman 1975 and Keeney 2001.

In the research literature, more detailed information on defining expertise and what constitutes an expert needs to be given: enough detail to justify the definition and then list the criteria and processes under which the selection is made, so that others can review it (Baker et. al, 2006). From this literature review, it is observed that, given the success of an experiment, the same method of selection could be utilized in future studies and prove a valid means of judgment. It could then be further critiqued and refined by other researchers until there is an accepted set of criteria which the research community could use as a robust methodology.

4.7.1 Weights

One method for determining a group decision is to give experts weights corresponding to their varying expertise (Helmer, 1967). An expert may be average in some areas and specialized in others; where an expert’s knowledge is specialized, or their experience is vast and varied for a unique set of circumstances, they hold more weight in those areas. If some items are deemed more important than others, they can be given more weight, which contributes to the best possible decision being made (White et. al, 2007a). Other group members could vote on members’ trustworthiness, and that could be an additional factor in increasing the weight of an expert’s opinion (Helmer, 1967). This is along the same lines as a social bookmarking system, where users go to see where experts go online to get their information (Benbunan-Fich, 2008). It follows the logic that some people’s influence is so great that others want to know what choices those experts made.
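As an illustration only, and not a procedure drawn from Helmer or from this system, such weighting might be sketched in Python as follows; the function name and the 1-5 rating scale are hypothetical:

    # Hypothetical sketch: aggregate expert ratings of one option,
    # weighting each vote in proportion to the expert's expertise weight.
    def weighted_group_opinion(votes, weights):
        """votes[k] is expert k's rating; weights[k] reflects expertise."""
        total_weight = sum(weights)
        if total_weight == 0:
            raise ValueError("at least one expert must carry weight")
        return sum(v * w for v, w in zip(votes, weights)) / total_weight

    # Three experts rate an option on a 1-5 scale; the specialist's vote
    # (weight 3) counts three times as much as each generalist's.
    print(weighted_group_opinion([4, 2, 5], [1, 1, 3]))  # -> 4.2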

4.7.2 Self-Rating

Self-rating research was conducted in Delphi studies to determine if better decisions could be made by subgroups of experts. Comparing those who rated themselves as knowing little against those who rated themselves as knowing a lot, results showed that more accurate information could be gained by implementing the proper set of self-rating techniques (Helmer, 1967, Dalkey, et. al, 1969, Turoff and Hiltz, 1996). Hence, those who indicated that they knew more on a particular subject gave better results.

One situation to consider is how to reflect the way an expert feels about their opinion/expertise in a given situation. The protocol for this structure would have to be modified to fit the situation, for there are different ways for experts to rate themselves. Dalkey and his fellow researchers (1969) described numerous kinds of self-rating methods, including:

1. “Ranking the questions in the order of the respondent’s judgment as to his competence to answer them;

2. Furnishing an absolute rating of the respondent’s confidence in his answer;

3. Estimating a relative self-confidence with respect to some reference group’.

An example of a set of instructions asking experts to rate themselves is given by Helmer (1967):

    • First, you are asked to rate the questions with respect to the amount of knowledge you feel you have concerning the answer.

    • Do this as follows: before giving any answers, look over the first ten questions, and find the one that you feel you know the most about.

    • Give this question a 5 rating in the box;

    • Then find the one you feel you know the least about, and give it a rating of 1.

    • Rate all of the other questions relative to these two, using a scale of 1-5.

This type of self-rating is relative and gives the expert a scale on which to rate how much they feel they know about a particular subject. It has proven an especially promising method where areas of higher uncertainty exist (Helmer 1967).

Self-rating was found to have a further use: with large groups, these ratings could be used to create specialized subgroups to tackle particular subtopics of expertise. This is beneficial because complex problems need a large group of knowledgeable experts for decision making. These subgroups could work on problems, creating a mesh network of intermingling expertise for more complex problems. This makes it such that experts can work on numerous subproblems and give particular areas a greater level of concentrated expertise (Dalkey 1969, Turoff, 2007).
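A minimal sketch of how self-ratings might be used to form such subgroups, assuming a 1-5 rating scale and an arbitrary inclusion threshold (neither is prescribed by the sources cited above):

    # Hypothetical sketch: place experts into subtopic subgroups wherever
    # their 1-5 self-rating meets a chosen threshold.
    def form_subgroups(self_ratings, threshold=4):
        """self_ratings maps expert -> {subtopic: self-rating, 1-5}."""
        subgroups = {}
        for expert, ratings in self_ratings.items():
            for subtopic, rating in ratings.items():
                if rating >= threshold:
                    subgroups.setdefault(subtopic, []).append(expert)
        return subgroups

    ratings = {"ann": {"logistics": 5, "medical": 2},
               "bob": {"logistics": 3, "medical": 4}}
    print(form_subgroups(ratings))  # {'logistics': ['ann'], 'medical': ['bob']}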

4.8 Delphi Studies and Experiments

Conducting experiments using the Delphi method is unpredictable at best, according to some researchers (Hardy et al 2004). Research indicates that there are discrepancies in the way experiments are conducted, and that a lack of reporting on design and analytical methods, together with a failure to elaborate on statistical tests or sampling methods, makes it difficult to replicate prior studies or aid first-time experimenters (Sackman 1975, Fisher 1978, Turoff and Hiltz 1996, Hardy et al 2004). Fisher protests that the “failure of the Delphi method to incorporate such elements as standard statistical tests, accepted sampling procedures and replications, leaves the method suspect as a reliable scientific method of prediction” (1978, p. 68).

Helmer blames other research problems and outcomes on the lack of experts available for experiments. He states, “there is always some doubt as to whether favorable results obtained under such conditions are transferable to the case of high-level specialists for whom the Delphi technique is intended” (Helmer, 1967, page 5). Another view, from Baker (2006), is that some experts will be more likely to participate than others, which implies bias; but this is true for any survey sample.

Turoff and Hiltz claim that there is confusion on the part of human-based Delphi experiments. They state that the problems are due to “a basic lack of knowledge by many people working in these areas as to what was learned in the studies of the Delphi Method about how to properly employ these techniques and their impact on the communication process” (Turoff and Hiltz, 1996, page 2). This leads to many studies covering the same problems and regenerating the same results, merely packaged differently.

A few research groups have published work in attempts to remedy this situation. Hardy, et al (2004) in particular dedicated a paper to how to use the Delphi technique, demonstrating how it could be a valid means of extracting expert knowledge and the benefits it could hold in the medical/clinical area. They paid particular attention to detailing all aspects of the experiment in an attempt to set an example. In particular, they found the areas of concern to be:

· Group Composition

· Participant Motivation

· Problem Exploration

· Consensus

· Feedback

This attempt to standardize the Delphi technique targets areas of the research and its administration. However, Hiltz and Turoff took a social perspective of the system and found that more focus should be on the functionality offered to the users. They presented the following list, which describes a set of tools that are the focus of the development of the system (Turoff 1991, Hiltz 1986, 1990):

· Provide each member with new items that they have not yet seen.

· Tally the votes and make the vote distribution viewable when sufficient votes are accumulated.

· Organize a pro list and a con list of arguments about any resolution.

· Allow the individual to view lists of arguments according to the results of the different voting scales (e.g., most valid to least valid arguments).

· Allow the individual to compare opposing arguments.

· Provide status information on how many respondents have dealt with a given resolution or list of arguments.

The design of surveying methods is a concern of researchers. Good designs, along with proper analytical methods, are applicable to the Delphi technique (Hiltz and Turoff, 1996). Hardy’s group made sure to list the importance of the survey items in their experiment so that others could critique their work on Delphi (Hardy et al 2004). However, they claimed that other areas, such as what defines consensus, have yet to be developed, so there are no hard rules for selecting which empirical methods can be applied.

Fisher applied numerous statistical analyses on his data. Some examples of the analysis run were:

· “Rank order listing by mean scores

· Standard deviations and median scores

· Frequency distribution of mean scores

· Semi-interquartile ranges for each item

· Percentage of responses changed between rounds

· Direction of changes in relation to the median

· Rank order ratings of items by educators and non-educators and by males and females

· And percentage of scores changed by educators and non-educators and by males and females” (1978, p 67).

Actually, the original Delphi studies gave quite a bit of statistical data. However, details of the experiments in their entirety weren’t given.

Once some standards are put in place, or once researchers understand how to interpret the results, Delphi will be regarded as more reliable (Linstone and Turoff, 1975, Hardy, et. al, 2004).

4.8.1 Groups

Delphi procedures were designed and are used in a way that eliminates problems prevalent in face-to-face (F2F) meetings (Dalkey 1967, Helmer 1967). This is why one of the primary characteristics of Delphi, anonymity, came into existence. Other procedures are applied to counter additional problems that exist in typical F2F meetings. Some of the problems with groups are listed:

1. The influence of the dominant individual:

a. Loudest voice

b. Greatest authority (boss)

c. Who talks the most

2. An unwillingness to abandon publicly expressed opinions;

3. The bandwagon effect of majority opinion, groupthink;

4. Noise: irrelevant or redundant material:

a. Gossip amongst group members

b. Old rivals’ recurring arguments that derail talks

c. Members’ frivolous contributions due to not being prepared

d. Digressing conversations that have nothing to do with the key topic

5. Group pressure that puts a premium on compromise (Dalkey 1967, Helmer 1967, Turoff 1996).

The size of the group can influence the study. Larger groups of experts have difficulty working together in an F2F environment (Turoff, 1996). The traditional Delphi studies were conducted primarily as questionnaires sent through the mail, which made it possible for larger groups of experts to participate (Helmer, 1967). Larger groups are desired for many reasons. Primarily, the reasoning goes back to Arrow’s theorem: as the size of the group decreases, so too does the accuracy of the group reflection, even with self-rating (Dalkey 1969).

Large Groups of Experts

The Delphi technique may have greater potential when working with a large heterogeneous group of experts drawing input from a litany of academic and professional areas (Turoff, 2007). Dalkey proposed that a larger group be used and referred to it as the Advice Community (1967). The military, industry and government consist of large groups of experts who provide information, predictions, and analyses to aid in the formation of policy and in the making of decisions (Dalkey, 1967).

The Delphi technique is a structured way of collecting and aggregating informed collective judgment on specific questions and issues from a potentially large group of global experts, at lower cost and with more flexibility in time (Hardy, et al 2004). Complex problems are comprised of many smaller problems requiring differing cognitive abilities and problem-solving skills. The chance that these needs are well matched by the group is greatest when there is a larger pool of experts from which to choose (Turoff 1996). Much expert opinion is required when information is scarce, and the likelihood of gaining more knowledge increases as the size of the group increases (Dalkey, 1967).

Other researchers believe the opposite is true: that Delphi studies should be small, consisting of no more than 20 participants (Baker, et al 2006). This is very difficult to achieve with a large heterogeneous sample. However, if complex problems are broken down into their smallest manageable subcomponents, smaller groups may be used and may be best. Studies show that when smaller groups of more highly qualified experts work together, they get more accurate results than experts who are not highly specialized in that problem area (Dalkey, 1969).

Subgroups of Experts

The Advice Community Dalkey refers to is a great wealth of information stored within larger heterogeneous groups: “in-house” advisors, external or outsourced consultants from academia and/or other industries, and any other group that appears pertinent to the problem facing the decision maker (Dalkey, 1967). Helmer elaborated on this concept by discussing the model of a hierarchical panel structure. He described these panels as subgroups of experts working collaboratively in decision making. Using a divide-and-conquer approach, the problem would be broken down into its components; panels consisting of subgroups of experts would then work bottom-up, building their way back up the decision ladder (Helmer, 1967). In addition, these multiple panels might better reflect the stakeholders’ interests (Hardy, et al 2004).

These subgroups could be identified in a number of ways. One is through the experts themselves, by allowing them to rate their own level of expertise; this can be done in a number of ways (refer to the section on Self-Rating for a more in-depth discussion). It has been shown that these subgroups of experts can give a more accurate decision than the general expert population (Dalkey, 1969). Studies at RAND Corporation confirmed that subgroups of experts could be more accurate through self-rating, but only if the following held true:

    1. The difference in average self rating between the subgroups should be substantial;

    2. The size of the subgroups should be substantial for both the higher and lower self rating subgroups.

Hiltz and Turoff (1996) describe how the computer aided in supporting a multiple-group environment, especially for larger populations of groups working together on a common goal. In 1967, Helmer toyed with the idea of distributed groups and what he referred to as ‘grandiose future applications’, which are now commonplace with the Internet. The power of such an idea lies behind collaborative judgment in emergency response, where a large group of emergency professionals and administrators work together globally through computer-mediated communication systems to better manage a large-scale disaster, creating a global collective intelligence (White, et al 2007a, Turoff, et al 2007).

4.8.2 Rounds

Originally, rounds were used in Delphi studies with one goal in mind: consensus. Researchers traditionally use surveying methods in rounds to structure the formal process (Hiltz and Turoff 1996, Hardy et al 2004, Keeney 2001).

The rounds provide a structure which considers three factors for convergence (Dalkey, 1967):

1. Social pressure

2. Rethinking the problem

3. Transfer of information during feedback.

One observation is that although Delphi studies have used techniques to remove bias, they have also implemented techniques, such as social pressure to converge on consensus, which interject bias back into the problem. Different studies use different numbers of rounds, usually three or four. The number of rounds reflects an attempt to recreate the information flow of a discussion: the group needs enough rounds to make arguments, counter-argue, and then rebut, which ends up taking four rounds. Helmer stated that, given enough rounds, any problem could reach consensus (1967).

Since Delphi techniques began to be conducted in computer-based environments, more dynamic solutions have been offered. These advances eliminate the need for the structure of rounds, thanks to the ability to have asynchronous communication. Human intervention is no longer needed, continuous feedback is feasible, and people can interact with any part of the process at any time from the comfort of their homes (Turoff and Hiltz 1999, White, et. al, 2007b).

4.8.3 Anytime Voting

Delphi was offered a new opportunity with the advent of computer-mediated communication systems, especially asynchronous environments (Hiltz and Turoff, 1978; Rice and Associates, 1984; Turoff, 1989; Turoff, 1991, Turoff 1996). These give the experts the advantage of time to mull over a given problem, allowing better analytical skills to be utilized (Hardy et. al, 2004). Back in 1967, Helmer saw this potential when he wrote: “it is easy to imagine that for important decisions simultaneous consultation with experts in geographically distinct locations via a nation-wide or even a world-wide computer network may become a matter of routine” (page 8).

Turoff recognized this need, describing how paper-and-pencil Delphis were restricted to a top-down/bottom-up dichotomy rather than parallelism, and how there should be a way for the expert to tackle whichever part of the problem they felt naturally compelled to address at any moment in time (1996). Helmer likewise stated that, given enough anonymous debate, the problem area may resolve and lead to true consensus (Helmer, 1967). The equivalent claim here is that asynchronous interaction raises the likelihood that areas of disagreement will be tracked down and eliminated, thus creating a true consensus.

The implementation of asynchronous communication has a most profound effect on the process and should be the single most important aspect under which Delphi processes are designed (Turoff, 1996). Further, Turoff elaborates and states that asynchronous interaction has two properties:

1. A person may choose to participate in the group communication process when they feel they want to.

2. A person may choose to contribute to that aspect of the problem to which they feel best able to contribute.

This best reflects how the human mind works and is most conducive to the best results (White, et al 2007b, White, et. al, 2008b).

4.8.4 Uncertainty

Uncertainty arises in two different situations here: 1) when information is missing, and 2) when interpretations differ.

4.8.5 Missing Data

Delphi is inherently a desirable technique to use when information is missing, uncertainty exists in the decision being made, and intuition must be drawn upon by members of the team to help fill in the gaps (Baker et al 2006, Hardy, et al 2004, Fisher, 1978, Helmer 1966). This is the reason for using highly qualified judges, i.e., experts, for decision making. It is not so much the explicit knowledge that comes from the collaborative efforts of this defined group, but the tacit knowledge that can better fill these gaps, utilizing collective intelligence when decisions must be made with little to go on (White, et. al, 2008b).

Interpretation

The other area of uncertainty of concern in Delphi techniques, or any group communication process, is how different members of the group may interpret or misinterpret the problem, the vocabulary used, or any other communications (Baker et al 2006, Hardy et al 2004, McDonnell 1996, Hiltz and Turoff, 1996; White, et. al, 2007a). This can surface as disagreement between group members. It’s through discussion that understanding increases and ambiguity decreases. Once these lexicon issues are agreed upon, agreement and consensus are more likely to occur (White, et. al, 2007a).

This issue was studied in a pilot where a group used a voting Delphi method designed to reflect how the members were interpreting both the problems and the solutions. The pilot confirmed that through discussion, ambiguity lessened, and the group performed better than a group that did not have to agree on such issues (White et. al, 2007).

4.9 Conclusions

One of the most significant research issues was, and remains, that Delphi systems are primarily tested with subjects who are not experts in a field, and therefore results are difficult to evaluate properly. Also, Delphi systems are still early in their development when it comes to leveraging the power of the Internet, which could open a knowledge bank usable by experts unbound by geographic location. Utilizing Delphi and experts’ knowledge on a global level could change the way certain events, like catastrophic events, are handled. To tap this knowledge, because it is not neatly formalized but distributed in the minds of many people, it is necessary to develop methods, of which the Delphi technique is one, for collecting the opinions of individual experts and combining them into judgments that have operational utility to policy makers (Helmer, 1967).

Although the Delphi technique has been around for quite some time, numerous research issues remain. A range of objectives was outlined in 1996 by Hiltz and Turoff detailing characteristics that remain unimplemented in systems. These are as follows (Hiltz and Turoff, 1996, p. 12):

· Improve the understanding of the participants through analysis of subjective judgments to produce a clear presentation of the range of views and considerations.

· Detect hidden disagreements and judgmental biases that should be exposed for further clarification.

· Detect missing information or cases of ambiguity in interpretation by different participants.

· Allow the examination of very complex situations that can only be summarized by analysis procedures.

· Detect patterns of information and of sub-group positions.

· Detect critical items that need to be focused upon.

Primarily, the research issue is: can the Delphi technique generate the best group opinion given a particular set of methods implemented along the way? Which methods are best to implement, and how should they be evaluated? Which forms of feedback will give both the individual and the group the best information upon which to make the best decision? A Delphi technique can be implemented for a variety of uses. The focus of this effort is to answer these questions with crisis management in mind.

Norman Dalkey wrote this 40 years ago, and it still holds true today: “There is a very large field waiting for the plough” (page 10).

CHAPTER 5

LITERATURE REVIEW: CURRENT ONLINE OPINION BASED DECISION MAKING

5.1 Abstract

In this chapter, systems that use voting or some structured method to rate group opinion are covered, including an overview of some recommendation systems and other opinion-based information sites. The objective of this chapter is to relay how voting and other polling tools are being used by larger populations. This is not meant to be an exhaustive presentation of what is in use, but rather a brief synopsis of how online voting is used by the masses in everyday life. It shows how information can be used to help others make decisions through voting, rating, or other opinion-based polling methods.

5.2 Introduction

Voting, rating, ranking and other tools are used online for a variety of reasons. Large groups feed information into these systems and also use them as a feedback mechanism on which to base decisions. This is presented to show how online systems are presently used to self-generate information for others. These are systems the author is familiar with and uses frequently for just these purposes.

First, simple voting is covered, showing how quick and easy responses are used throughout society on a number of online sites. Next, more complicated decision making is supported by these same systems, where information can be more precise at a drill-down level, giving different types of opinion-based feedback to the user. Feedback can come not only on a good or service, but also on the provider. Companies are coming up with new ways to utilize user-generated information. However, the potential impact of a serious decision-making tool based on voting and ranking is not part of this everyday use. This chapter concludes with some future implications of voting tools and ranking systems.

5.3 Simple Decision Making through Voting

Simple decisions can be made using various voting tools available online. User opinions are requested on an array of issues, from insurance companies to almost anything one can think of, using systems like epinions.com. Recommendation systems take advantage of the information that can be supplied by large groups, which is then used by individuals or businesses.

5.3.1 User Opinion Polls

User opinion polls are seen in a variety of places. Universities, newspapers, or anyone else can use a simple opinion poll to gather information on a number of issues. A company called SodaHead (http://www.sodahead.com/about-us/) provides users with a Polling Widget which can be embedded into anyone’s online space.

Polling Widget

Take the discussion to your site! Many celebrities, bloggers and media companies use SodaHead's customized poll widget to better engage their audience. Customize the look and feel of your poll, create questions using images, video and MP3s, promote user engagement and get instant feedback by embedding a poll on your online space. Check out who's using our poll already

Figure 5.1 Creating simple opinion polls.

This makes gathering this sort of information easy for anyone to implement and use.

5.3.2 Recommendation Systems

Recommendation systems are presently used by newspapers, movie rental stores, and other businesses, and by sites like Amazon that rate products to provide customers with information. For some sites, like epinions.com, collaborative judgment is the basis of the site. People can rank events, people, opinions, etc. to create a list indicating the most popular opinion. ‘Top Videos’ are the end result of those news items that have gained the most attention, like the video series found on MSNBC (www.msnbc.com). This site creates two groups, offering selections based on both Top Videos and Most Viewed, where one is based on user feedback through rating and the other is gathered by computing which videos are viewed most frequently. This is a valuable form of feedback.

Newspapers

Simple voting can be used as a recommendation system, as demonstrated in the next figure from a local newspaper, The Destin Log, distributed in Destin, Florida, USA.

Figure 5.2 Simple recommendations for news article.

Movie Rental: Netflix

Netflix bases a lot of its information on user feedback. Given that the commodity is movies, this is a good system for providing customers with information from other customers. It’s opinion-based, so a company like Netflix has nothing to lose if a movie gets a low recommendation.

Epinions.com

Epinions.com bases its entire knowledge base on the user. As the name of the site indicates, it offers people’s opinions on any tangible or intangible product. Epinions.com offers information on a product or service, has a 5-star rating system, allows customers to enter their reviews, allows customers to compare like items against each other, and even conducts a price-comparison evaluation showing where the customer can find the lowest price.

Figure 5.3 Epinions.com information.

Epinions.com offers a lot of information but still only uses ranking for simple feedback.

5.3.3 Rating Systems for Discussion Feedback

Simple voting or ranking can be used in discussion forums as a means of feedback to both authors (those who post) as well as to other users of the system. Some course management systems offer this as a way to show students what others think about what they post as well as to indicate areas of interest.

Figure 5.4 Ranking systems for discussion feedback.

5.4 More Complex Decision Making through Voting

Amazon and Netflix both use recommendation systems to make suggestions to customers based on the collaborative efforts of other customers. One way to provide information is through a star rating system. Rated from 1 star (most undesirable) to 5 stars (most desirable), the star rating gives a visual interpretation, with stars broken into portions so that a rating can be, for example, 3.5. Both systems combine the star rating with the number of participants who have voted. Amazon keeps the information current by also showing only those participants’ ratings from within the last year. Amazon will also let customers know if they are purchasing goods from a new business, which is of particular use in the used book market. All of this combined gives a business a reputation that a new customer can use to gain confidence that they are purchasing from a reputable dealer: let the buyer beware.

5.4.1 Amazon Example

An example is presented here demonstrating both the functionality of a recommendation system and decision making, first in the selection of a good, then in the choice of business from which to purchase it. Given a search for a book, a list of possibilities comes up for the user to pick from. For example, “The Origin of Man” was searched on Amazon.com; Figure 5.5 shows the results.

Figure 5.5 Ranking and opinion based decision support system.

This provides the user with information on which books are available and how many relevant search results came up, and then information on the book (or any good), as demonstrated in Figure 5.5. Each entry has the title, author, prices new and used, and whether copies are in stock. A star rating is used at this stage as an overall recommendation, along with the number of people who rated the book. This is a drill-down information system: for more help in the decision-making process, a lot of extra information on each item in the list is provided once it is clicked.

5.4.2 Determining Activity Level

Amazon further provides a domain-specific community bulletin board with forums where customers can post and reply to discussions on any question, opinion, or insight they may have. To keep the most current information at the forefront, Amazon ranks the discussion list by most recent post, giving the number of replies along with the discussion topic and title, as demonstrated in Figure 5.9.

Active discussions in related forums

Figure 5.9 List of discussions: information provided for activity level.

The dropdown box on the left acts as a tool tip, showing the most recent discussions, i.e., the last three. This lets people know how active the topic is, which relates directly to how ‘hot’ a topic presently is.

5.5 Summary

Voting online is becoming more common, and people voting, and thus making recommendations, contribute in many situations to a collective intelligence. Some examples are found at online shopping sites such as Amazon.com or on newspapers’ sites. However, the potential of such voting and ranking systems remains untapped given the large populations that gather online. This is the basis of this ongoing research effort: collective intelligence can be achieved when the Dynamic Delphi System is used with the masses, especially when the group is comprised of experts.

CHAPTER 6

THEORETICAL FOUNDATIONS

6.1 Abstract

This chapter covers the theoretical foundations of Thurstone’s Law of Comparative Judgment (TLCJ) and how it is being modified for this research effort. This satisfies one of the requirements for Design Science Research (Hevner, et. al, 2004): the third guideline states that the system and the methods used in its design must be built on a scientifically sound foundation. Thurstone’s work has been well tested in the research community for over 80 years (Thurstone, 1927a). The objective of this chapter is to demonstrate how this scaling technique will be used to bring together many experts’ opinions and calculate one interval scale indicating a single group opinion, using incomplete datasets and factoring in uncertainty about the future outcome of the vote given various types of voting options. Once calculated, the scale can be used to determine multiple outcomes depending on the needs of the decision maker. Areas of agreement or disagreement can be determined, as well as clustering indicating equivalences. This research effort demonstrated that the modified version of Thurstone’s Law of Comparative Judgment is a sound method that can be used in a dynamic situation given changes in users, information and votes.

6.2 Introduction

How do you measure the mind? Discriminal differences are the subjective or psychological dimensions that can be mediated as a discriminal process measurable on the psychological continuum (Torgerson, 1958). Human judgment can be measured where stimuli are presented and there is some preference of one item over another. For example, items could be judged on importance or any preference, such as water being needed more than paint. If a person is asked to judge a set of items presented in pairs given some stimulus, they may be inconsistent in their preference for the same item over time. However, given enough judgments comparing that one item with every other item, a normal distribution forms. For each stimulus level measured, there is a different distribution.

Figure 6.1 Distributions of the psychological continuum of discriminal processes associated with four stimuli (Torgerson, 1958, p. 157).

The contribution of this work is a modification of TLCJ. Thurstone’s work assumes completeness of data, i.e., all items are judged and everyone has made all required judgments. This work considers wicked problems, where not all information is complete, and uses a process to account for the uncertainty present during decision making.

6.3 Thurstone’s Law of Comparative Judgment

Thurstone’s Law of Comparative Judgment (TLCJ) measures a person’s preference for one item i over any other item j based on a single stimulus, measuring the discriminal dispersion between the two on a psychological continuum. This is conducted using paired comparisons, where every item in a pair (i,j) in a set is compared to every other item in the set, producing a total of n(n-1)/2 comparisons (Thurstone, 1927).

Torgerson provides an example of this process using three matrices: matrix F (frequency), matrix P (probability) and matrix X (unit normal deviate). The first matrix, F, is a frequency count of the items selected in each pair by every individual in the group. The table presented next shows a frequency count for a user group where N = 5. The items along the 1st row, A, B, C, and D, are preferred over the corresponding items in the 1st column of an M x M matrix, with zeros placed along the diagonal. Given the number of users, N = 5, each corresponding pair should sum to N, i.e. (i,j) + (j,i) = 5:

Table 6.1 Matrix F Frequency Count of User Input

These frequencies are converted to probabilities for matrix P, where each (i,j) + (j,i) = 100%:

Table 6.2 Matrix P Frequencies Converted to Probabilities

In the final matrix, X, these percentages are replaced by the unit normal deviates corresponding to the cumulative proportions.

Table 6.3 Matrix X Cumulative Normal Distribution Function

Values greater than 50% become positive in this transformation of matrix P to X, and values less than 50% become negative. Any values of 100% or 0% in matrix P are given a value of 0 in matrix X, because the corresponding x values from the unit normal deviate table are unboundedly large (Torgerson, 1958).
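To make the three-matrix procedure concrete, the following Python sketch computes scale values from an invented frequency matrix for N = 5 judges, using the common shortcut of averaging each item’s unit normal deviates; it illustrates Torgerson’s description and is not code from the Delphi Decision Maker:

    # Illustrative sketch of Torgerson's F -> P -> X procedure.
    # f[i][j] = number of judges (out of N = 5) preferring item i to item j;
    # the counts are invented, and f[i][j] + f[j][i] = N for each pair.
    from statistics import NormalDist

    N = 5
    items = ["A", "B", "C", "D"]
    f = [[0, 3, 4, 5],
         [2, 0, 3, 4],
         [1, 2, 0, 3],
         [0, 1, 2, 0]]

    z = NormalDist().inv_cdf  # unit normal deviate of a proportion

    def x_value(count):
        p = count / N
        if p in (0.0, 1.0):   # 0% and 100% entries are zeroed, as in the
            return 0.0        # text, since their deviates are unbounded
        return z(p)

    # An item's scale value is the mean of its row of matrix X.
    for name, row in zip(items, f):
        scale = sum(x_value(c) for c in row) / len(items)
        print(name, round(scale, 3))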

Thurstone’s scale is an extension of Thorndike’s work, which provides added insight into ranked lists of items. Given that A is preferred over B 75% of the time and B is preferred over C 85% of the time, “how much greater than the distance AB is the distance BC? Thorndike solved this problem by assuming that the difference in distances is proportional to the difference in the unit normal deviates corresponding to the two proportions” (Torgerson, 1958, p. 155).

Reducing the Comparisons

Comparing all items available in a given set is not a feasible task, especially when the number of items is high. Guilford suggests that not all items should be compared, continuing that:

“There is nothing sacrosanct about pairing each stimulus with every other one in the series. To do so probably does tend to emphasize the unity of the continuum in question in the minds of the judges. And yet some stimuli in long series are so far apart psychologically that the proportions of judgments approach 1.00; hence the differences are so unreliable as to be useless for the computation of the scale values. (if 100%, then matrix X gets a 0 value) Therefore, not every stimulus is a good standard with which to compare all the stimuli of the series. It is often a proper procedure to select from all the stimuli, a limited number to become the standards for the scale” (Guilford, 1936, p. 235).

Thorndike suggested obtaining ‘paired comparisons between neighboring pairs of stimuli’ (Guilford, 1936, p. 236). Where paired information is missing, gaps can be filled using transitivity. There must then be a consistency check confirming that no cyclic triad exists among the items. A cyclic triad occurs when A > B, B > C and C > A; it must be eliminated to achieve consistency (Hill, 1953).

6.4 Modified Thurstone Process Calculations

The change in probability used to measure uncertainty is also well specified. It is a clear heuristic that might be influenced by how questions are asked in order to infer whether a person plans to vote: if a person plans to vote, they are counted into the total for the uncertainty calculation.

The calculations remain the same as Thurstone’s. The difference in this model is that only the number who have actually voted on a paired comparison is used, which is not necessarily the same number as for any other comparison.

At least 3 to 5 people must vote on a comparison before computations can be made, due to a small-numbers effect. The exact threshold could further depend on how many items and how many total potential voters exist.

The uncertainty heuristic is also very logical. The starting value for any pair of items is a probability of .5, as this is the point of highest entropy, where there is no information available to make a judgment. This means the two items start at the same point on the scale. Once some people vote and the probability moves to a value higher or lower than .5 for the given i and j, there is a separation. People who have not voted are assumed to be moving in the opposite direction; they are treated as voting against the current trend, which is the most pessimistic position one can take. However, if those opposite votes would move the probability past the original .5 value, it is returned to .5, the point of highest entropy, for the uncertainty calculation.

The program will be evaluated further once developed, as the fine points can be modified while observing the boundary conditions. The system must ensure that a probability never gets higher than .999 or lower than .001, so that there is no overflow anywhere.

6.4.1 Uncertainty of Votes

How people are asked the question, so that it can be inferred who might still vote or might change their vote, is the only real uncertainty in the aforementioned process; once there is a delta N for the current N voters on any single comparison, there is no uncertainty in what can be done with it. The result for any item is the distance between its current position on the scale and the position that pushes it closer to the other item in the probability comparison. The two scales need to be normalized to the same total interval for this to work, as the uncertainty scale will be shorter than the original one. There are different ways to do the normalization, and these will be tested once the system is developed, to see which, if any, is better. One is simply to adjust the length from the lowest to the highest item to be the same for the uncertainty scale.

6.4.2 Probability Calculation

To make it easy to have a boundary condition for starting the voting calculation, a Hegelian approach is proposed: there are two virtual voters who completely disagree with one another, so on every preference they vote opposite to one another. Therefore, all probabilities start equal to 1/2 = .5, and as real people vote,

p(i>j) = (1 + n(i>j)) / (2 + n(i>j) + n(j>i))

A heuristic is used for both the initial voting and the uncertainty measure. Here nij is the number who have voted for the ith item to be preferred to the jth item.

Note that nij + nji is the total that have voted on the given preference.

The probability of i being preferred to j is

Pij = (1 + nij) / (2 + nij + nji)

Note that Pij + Pji = 1, and if nij + nji = 0 then Pij = 1/2.

For nij + nji = 1, the only possible outcomes are 1/3 and 2/3.

More generally, for Nij = nij + nji (= Nji), the possible values of Pij are:

Nij = 2: 1/4, 2/4, 3/4

Nij = 3: 1/5, 2/5, 3/5, 4/5

Nij = 4: 1/6, 2/6, 3/6, 4/6, 5/6

Nij = 5: 1/7, 2/7, 3/7, 4/7, 5/7, 6/7

and so on.
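A direct transcription of this starting rule into Python (a sketch; the function name is hypothetical):

    # Base probability with the two virtual opposing voters built in:
    # P(i > j) = (1 + nij) / (2 + nij + nji)
    def pij(n_ij, n_ji):
        return (1 + n_ij) / (2 + n_ij + n_ji)

    print(pij(0, 0))  # 0.5 -- no real votes, highest entropy
    print(pij(1, 0))  # 2/3 -- one vote allows only 1/3 or 2/3
    print(pij(2, 1))  # 3/5 -- matching the enumeration above for Nij = 3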

For the modified P values for uncertainty, the following can be done, which does not require asking questions to find out voting intentions. For now,

Let q = Maximum of (nij + nji) over all paired comparisons

PMODij = (1+ nij) / (2 + q) for all Pij > .5

If PMODij is < .5, we set it equal to .5.

The non-voters drive the probability back only to the zero point, .5; pairs still at .5 remain there. Round-off error must be considered in the tolerance used when comparing values to .5. After the above procedure, the values for the reverse direction are PMODji = 1 - PMODij.
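The following Python sketch applies the modified values exactly as described above, including the floor at .5, the complement rule PMODji = 1 - PMODij, and a round-off tolerance; the function name and the tolerance value are assumptions:

    # Sketch of the uncertainty-modified probability. q is the maximum
    # number of real votes cast on any single comparison; non-voters are
    # assumed to oppose the current trend, but can only drive a winning
    # probability back to the .5 midpoint, never past it.
    TOL = 1e-9  # tolerance for round-off when comparing values to .5

    def pmod(n_ij, n_ji, q):
        p = (1 + n_ij) / (2 + n_ij + n_ji)
        if p > 0.5 + TOL:
            return max((1 + n_ij) / (2 + q), 0.5)
        if p < 0.5 - TOL:
            return 1 - pmod(n_ji, n_ij, q)  # complement rule
        return 0.5                          # pairs at .5 remain there

    print(pmod(4, 1, 5))   # ~0.714 -- this pair carries the maximum votes
    print(pmod(4, 1, 10))  # 0.5 -- enough non-voters could still reverse it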

This procedure should eliminate any boundary problems. In a future version, where thousands are voting on a continuous list and identities aren’t checked, exponential smoothing can be applied to the calculation of the new values of nij. This weighs the most recent votes as more relevant to the present result, since votes can change in accordance with the current ‘state’ of the problem. The problem is dynamic in relation to its state and can change over time as the environment changes or as the merits of an argument change.
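A hypothetical sketch of such smoothing, under the assumption of a simple per-period decay factor (the text specifies neither the smoothing constant nor the update rule):

    # Hypothetical sketch: older votes decay by a factor lam each period,
    # so the smoothed count nij is dominated by the most recent votes.
    def smooth_count(n_ij, new_votes, lam=0.9):
        """n_ij: current smoothed count; new_votes: votes this period."""
        return lam * n_ij + new_votes

    n = 0.0
    for votes in [5, 5, 0, 0]:  # support that fades after two periods
        n = smooth_count(n, votes)
    print(round(n, 1))          # 7.7 -- the early votes have decayed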

6.5 Consistency in Modified Vote

If a user enters a vote for the first time on a decision problem, the information is updated as described earlier: the vote is simply added to the existing population and the previously described calculations are made. However, if a user changes their opinion, another procedure is called, and the system checks for consistency in item rank given the new vote.

If by some chance the new selection interferes with the ranking of another item such that the user’s list is inconsistent (a cyclic triad), this is brought to the attention of the user. This helps them make the best decision and reminds them of past considerations.
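As a sketch of what this consistency check might look like (the preference data structure is assumed, not taken from the system’s source), cyclic triads can be detected by scanning triples of the user’s stated preferences:

    from itertools import permutations

    # Hypothetical sketch: prefers[(a, b)] is True when the user ranked
    # item a above item b. A cyclic triad a > b, b > c, c > a makes the
    # ranking inconsistent and should be flagged back to the user.
    def find_cyclic_triads(items, prefers):
        triads = []
        for a, b, c in permutations(items, 3):
            if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
                triads.append((a, b, c))
        return triads

    prefs = {("water", "food"): True, ("food", "shelter"): True,
             ("shelter", "water"): True}
    print(find_cyclic_triads(["water", "food", "shelter"], prefs))
    # prints each rotation of the cycle water > food > shelter > water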

CHAPTER 7

SYSTEM DESIGN AND FIELD STUDY DEVELOPMENT

7.1 Introduction

This chapter covers the details of how the project was developed and administered. The software system is the product of a geographically distributed group of disaster management professionals using decision tools embedded in the Sahana platform, as described. Much has changed from the initial proposal, as reality and feasibility issues forced the researcher to search out different avenues to reach the goals outlined in the proposal. A website was created to organize all of the information and link the user to the tasks that needed to be performed. The original idea for the tutorial was implemented along with a video version. The tools were created, and an explanation along with a figure is provided for each: Voting, Scale Feedback, Discussion Forum and Item Management.

A very relaxed version of protocol analysis was used. More formal efforts will be put forth in the next version of the Delphi Decision Maker. The population of testers is described indicating how the system was tested during the Rapid Agile Development process.

A team was created to develop the system, outsourced through Sahana. It is important to note that everything used to develop any part of this system (tutorial, survey, sites, languages) is free and/or open source software. These are newer developments, and the researcher was very fortunate in discovering many of them, as such systems were not available during past efforts by other doctoral students to develop this research project.

7.2 Delphi Decision Maker Website

A website was created for managing the tasks and information for this research effort (http://sites.google.com/site/delphidecisionmaker/). Figure 7.1 provides a picture of the web site that was developed to manage the overall effort of the field study. Since the study was designed to be conducted online and asynchronously within a designated 3-day time frame, without any face-to-face help or instruction from the researcher, it was important that users know what to do and in what order.

Figure 7.1 Homepage for the research study.

The site was created using Google Sites. The researcher was familiar with the software, and it worked well for this need. Video was embedded so that users could watch the tutorial in that window, and links were provided to direct participants to each task. This worked very well for managing the project.

7.3 Tutorials

Three tutorials were created. The first introduces and directs the subject to the overall project, the expectations, and the order in which tasks are to be accomplished. The second covers the theoretical foundations behind the methods chosen for the system. It is important that users understand how the system works; this can contribute to the system's acceptance, because certain things on this system are not the norm. For example, conducting a paired comparison task as a means to create a ranking is foreign to most. A third tutorial demonstrates the system itself and how to use it, covering the functionalities that correspond with the theoretical explanations provided in the second tutorial. All of the links provided on the homepage are presented in a set order.

Tutorials were created using a few products: Jing, Camtasia, Google Sites, Word, and PowerPoint. Microsoft PowerPoint was used to introduce the theoretical foundation behind the methods chosen (Appendix D), but this was then simplified into a screen-capture presentation with video (http://www.screencast.com/users/connie.m.white/folders/Jing/media/9aac94b4-9bed-49f4-bbfd-aaaf1540299a). Jing by TechSmith, version 2.1.9181 (www.jingproject.com), was used for interactive screen capturing with audio for all of the tutorials, including those covering use of the system. Five-minute videos were presented, as this was the maximum length allowed by the free version. Jing also provides the user with a link, hosting the videos for free and uploading them to an online site where anyone can view them. A 30-day trial version of Camtasia was used to convert the video format so that the videos could be uploaded and viewed through YouTube (http://www.youtube.com/watch?v=rXJeHLuNdFE&feature=player_embedded). This format was blurry; the Jing format could maximize the view and still retain high-quality resolution. Google Sites (sites.google.com) was used to create the homepage for the entire project. From the homepage, links were provided to the instructions, video tutorials, surveys, and the Delphi Decision Maker system.

Videos were used in the hope that video communicates this kind of information more easily than a text-based training manual. At present, only two video hosting services can be used to embed video into a Google Site, one of which is YouTube (www.youtube.com). However, not everyone has video capability or sound, so a text-based tutorial demonstrating the tools, with corresponding figures, was also offered. This tutorial is provided in the Appendix.

7.4 Project Testers

Various populations were used in the development of this system, website, surveys, and tutorials. Survey questions were reviewed and modified according to input from the PhD committee members. The researcher's friends and colleagues served as testers for different parts of the project. Survey questions, videos, and the website were first critiqued by these experts in the area; members of the committee reviewed them along with friends and colleagues. Next, other students (not from where the study would take place) were asked to use the system and provide feedback at various stages of development. The researcher teaches and remains close to many past students from prior universities where she was employed. These students are emergency management practitioners and are good at providing honest, constructive feedback, which is crucial: it is important to know whether the system, surveys, videos, and sites are doing what they are designed to do.

7.5 Sahana’s Contribution

There is software, such as WebEOC, developed to help manage emergencies and crises. This and other commercial software is expensive and requires costly hardware to accommodate the system's needs. It is good software if a group can afford it; however, funding such a system is not feasible for many emergency management entities in the United States, let alone in other countries around the world. No event demonstrated this better than the Indian Ocean Tsunami of 2004, which left at least 226,000 dead and $7.5 billion in damages. A group of software developers came together in a humanitarian effort and quickly designed and implemented a Free and Open Source Software (FOSS) solution to meet the basic needs of the people affected by these overwhelming extreme events. They named the software Sahana, after the Sinhala (Sri Lankan) word for relief.

From this historic event, a free system became available that could be built on top of and modified as needed, free to anyone in the world. It was also designed to run with minimal hardware support. Sahana is a disaster management system consisting of a core set of modules identified as needed during the Indian Ocean Tsunami response. The problems identified in the aftermath of disasters where technology could help were:

· The trauma caused by waiting to be found or find the next of kin.

· Coordinating all aid groups and helping them to operate effectively as one.

· Managing the multitude of requests from the affected region and matching them effectively to the pledges of assistance.

· Tracking the location of all temporary shelters, camps, etc. (Prutsalis, 2009)

These are requirements from the recovery period of a disaster. The modules created to support the needs of the problems identified are:

· An Organization Registry. This module maintains data, such as contacts and services, for:

o Groups

o Organizations

o Volunteers responding to the disaster.

· A Missing Persons and Disaster Victim Registry. This module helps track and find:

o Missing

o Deceased

o Injured

o Displaced people

o Families

· A Request Management module. This module tracks all requests and helps match:

o Pledges for support

o Aid

o Supplies to Fulfillment

· A Shelter Registry. This module tracks data on all temporary shelters set up following the disaster.

Sahana has been used over the years and continues to develop to meet the needs of its users. A list of Sahana deployments is provided to demonstrate its present use and, by extension, its potential use.

· Asian Tsunami in Sri Lanka – 2005

· Kashmir Earthquake in Pakistan – 2005

· Landslide disaster in the Philippines – 2005

· Yogyakarta Earthquake, Indonesia – 2006

· Cyclone Sidr in Bangladesh – 2007

· Coastal Storm Plan in New York City – 2007

· Ica Earthquake, Peru – 2007

· Sarvodaya (NGO), Sri Lanka – 2008

· Bihar Floods, India – 2008

· Chengdu, Sichuan Province Earthquake, China – 2008

· National Disaster Management Center & Ministry of Resettlement & Disaster Relief Services, Sri Lanka – 2009

· Bethesda Hospitals Emergency Preparedness Partnership, Maryland – 2009

· National Disaster Coordinating Council in Philippines – 2009

· National Coordinating Agency for Disaster Management (BNPB) in Indonesia - 2009

One limitation is that much of Sahana's primary activity relates to recovery operations, so there is no real chance in that environment to test the utility of the system for first responder activities. This limitation does not, however, detract from Sahana's importance to the severe problems of delivering and coordinating support of all sorts in major international disaster events.

There are two versions of the Sahana Disaster Management System, written in two languages: PHP, from the original build, and Python (Web2Py), on a newer rebuild. Because both support web services, the two installs can interoperate with a little work. It is not, however, possible to run a single install with some modules in PHP and some in Python.

Without the support of the Sahana Disaster Management System development community, this dissertation might never have been written. Although the researcher was asked to develop the Delphi Decision Maker as a module for Sahana, there were obstacles to overcome, as in any software development project of this magnitude. It was soon realized that, although the Sahana community was extremely helpful, there were many complexities involved and time was short. The researcher therefore reached out to the Sahana developer community and asked to hire a programmer. Having someone who already had experience with the language and the Sahana system helped expedite system development, and this was key to the success of the project. The outsourced developer, the Sahana site mentor, and this researcher worked intensely for one month to produce this first version of the system. It is important to note that a shared workspace was provided for all three team members, as each was located on a different continent.

7.6 Rapid Agile Software Development

This system was developed using the methods defined as Rapid Agile Software (RAS) development. A good description of RAS is provided by Wikipedia (en.wikipedia.org/wiki/Agile_software_development, November, 2009).

“Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated.

Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-organization and accountability, a set of engineering best practices that allow for rapid delivery of high-quality software, and a business approach that aligns development with customer needs and company goals. Conceptual foundations of this framework are found in modern approaches to operations management and analysis, such as lean manufacturing, soft systems methodology, speech act theory (network of conversations approach), and Six Sigma.”

This researcher was provided with a wiki space where functional requirements were presented to help the developer understand the needs and requirements of the system: http://trac.sahanapy.org/wiki/BluePrintDecisionMaking (Appendix G). For over a week, this global team of three edited and modified these requirements until there was an agreed-upon interpretation. Next, technical requirements were written up, providing an entity relationship diagram and mathematical formulas (Appendix H). Although it was a lot of work for everyone, the process was seamless. Added benefits came from the interaction between team members: although the researcher was the designer of the system, the programmer came up with excellent solutions to tricky problems. For example, in the email exchange presented next, a new design idea is posed and implemented, something neither the designer nor anyone else had come up with before.

On Wed, Oct 7, 2009 at 8:42 AM,

Programmer wrote:

…. I will prepare the forum and fix the problem tonight and will make the two other scales ready for Saturday. I should add a caption to the scale table to show how many users have voted from how many registered users.

Designer wrote:

that's excellent!

Programmer wrote:

The scale table's min and max colors are calculated from min and max scale points, which is misleading sometimes; for example when there are just two options, one of them is always green and the other is always orange/yellow :P Do I change this and use the absolute theoretically possible minimum and maximum values for them? In this case, sometimes there would be empty rows in top and/or bottom of the table.

Designer wrote:

you are so right, it would be misleading - thank you for the observation because it's about human judgment and perception of information from a visual scale = the visual interpretation of data is a serious thing! i think it's better to have a few empty cells above or below, i like your idea.

there are a couple of usability issues - you have the word Results for the Scale - although the scale is a result ;) can you use the word Scale or Total?

i'll play around with the system in a bit - and think it will be a matter of moving some buttons (triggers) - it is looking so good. everyone was impressed with your scale idea too <programmer> - i look forward to seeing the modified version. hey - how can i find out what the running totals are right now so that i can check calculation? i think you mentioned you had given me admin privs - would i just look at a data file? or is there an easy way to have that information come up? - whatever is easiest for you -

The team members quickly built a history of interaction, and discussion occurred whenever there was any ambiguity between two members on some concept.

7.7 Tools Implementation for The Delphi Decision Maker

7.7.1 Voting

Paired comparison came out as expected. Each pair is presented on one line, and alternating colors were used for each row to help the user identify which items were paired together. Figure 7.2 shows what the user interface looked like. The description of the problem is listed at the top of the page, but more importantly, the criteria for comparison are listed right above the paired comparisons.

Figure 7.2 Paired comparison voting.

7.7.2 The Scale

The scale turned out better than expected. The Description and Criteria were listed at the top of the page for easy reference. Solution items appeared in one column, with their Thurstone calculations posted in the column next to them. Defining the intervals was a bit difficult, but a two-color bipolar scale, in which the colors grow stronger as they approach the opposing ends, proved to be a great addition for the visualization and interpretation of the data.

Figure 7.3 Visual scale for feedback.
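A minimal sketch of one way such a two-color bipolar scale can be computed (the specific colors and the linear interpolation are illustrative assumptions, not the module's code): values are normalized over the scale's theoretical extremes rather than the observed minimum and maximum, for the reason raised in the development exchange in Section 7.6.

def bipolar_color(value, lo, hi):
    """Map a scale value to an RGB tuple on a two-color bipolar scale.
    lo and hi are the theoretical extremes of the scale, not the observed
    min and max (which mislead when only a few options exist). Colors
    strengthen toward the ends and fade to neutral white at the midpoint."""
    mid = (lo + hi) / 2.0
    half = (hi - lo) / 2.0 or 1.0                   # avoid division by zero
    t = max(-1.0, min(1.0, (value - mid) / half))   # normalized to -1 .. +1
    fade = int(255 * (1 - abs(t)))
    if t >= 0:
        return (fade, 255, fade)                    # toward saturated green
    return (255, fade, fade)                        # toward saturated red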

7.7.3 The Discussion Forum

There are a few common discussion forum types in online formats at present. Comment-style dialog and nested formatting were both considered for the implementation of the Discussion functionality to be provided to the experts.

Initially, comment boxes were implemented due to time constraints; they were simply easier to integrate into the working system. Comment boxes were considered a method to support discussion. For example, when reading an article in an online newspaper such as www.nola.com, users can comment on any article they desire. The same mechanism lets users comment on www.YouTube.com videos, where an email address is basically the only information required. These comment boxes are most popularly associated with 'blogs.' Users usually have the option to comment anonymously, although the administrator always has the ability to see who is making which comment. This could be dangerous for some, since a comment could result in violent retaliation if the commenter's identity were exposed.

It was soon discovered that comment boxes were not sufficient to support the needs of the group discussion. There were many options that needed to be discussed and no way to link a comment to a particular option. The other major difficulty was that there was no way to link discussion topics into discussion threads. This was supported by comments made in the initial testing phase of the system; Figure 7.4 shows the forum discussion where testers of the system were already making negative comments. The forum was modified to support nested discussion.

Figure 7.4 Discussion forum post.

The Discussion Forum needs to support the ability of users, experts in the field, to make arguments. These arguments need a structure that links the available options to a discussion thread, which means a post/reply capability must be offered.

7.7.4 List of Options

The Summary tool offers the users different functionalities.

1. The user can view the present list of options that are being considered by the group.

2. The user can edit one of these options to either correct or modify.

a. The item can be deleted using edit; all information on votes for that item is lost with this option.

3. The user can Add a New Item to the list.

4. From this screen (the option also appears in the menu), the user can Vote. See Figure 7.5.

Figure 7.5 List option view and creation tool.

Adding an item is handled efficiently so that no already-voted pairs must be compared again: only the new comparisons are generated, pairing the new option with each of the existing items. The user's prior votes are all brought before the subject along with the new comparisons, and the new pairs are dispersed rather than presented one after another; a sketch of this behavior follows.
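A minimal sketch of this behavior (hypothetical names, not the module's code): only the pairs involving the new option are generated, and the ballot is shuffled so the new pairs are dispersed among the previously voted ones.

import random

def pairs_for_new_item(existing_items, new_item, voted_pairs):
    """Return the ballot a returning voter sees after new_item is added:
    their previously voted pairs plus one new pair per existing item,
    shuffled so the new comparisons are dispersed through the list."""
    new_pairs = [(new_item, other) for other in existing_items]
    ballot = list(voted_pairs) + new_pairs
    random.shuffle(ballot)       # disperse the new pairs
    return ballot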

7.8 Protocol Analysis

The development of this system followed Rapid Agile Development, in which many various tasks were being developed, tested, and corrected simultaneously; parallel processing was going on, with no one particular starting or stopping point. Interactions were intense and consistently ongoing among the developer, the designer, the main site mentor for Sahana, and the testers of the system, and input was obtained from other interested parties.

Protocol analysis was conducted online. Interactions between the researcher and participants were conducted using Skype (www.skype.com), a free voice-over-Internet-protocol (VoIP) technology that works very well; users need high-speed Internet connectivity. Both participants and researcher had to create accounts with Skype; all already had accounts and a prior history of interacting using Skype. Audio and video capabilities were used. The participant was asked to perform each of the tasks listed below, using the Dessert Problem loaded into the Delphi Decision Maker. This was done in a series of ways. For example, the researcher would frequently call a colleague, and they would get on the system at the same time while talking. The researcher would then ask the tester to perform an action. If there were problems, they would be addressed, and the same tester would be asked to review the process again. Once a tool initially passed, other testers would be emailed and asked to perform the task; email was used for these interactions. Some testers would make comments in the Discussion Forum provided with the testers' problem. At the end of development, the link for the homepage was sent to experts in the field who had had no affiliation with the study so far, and the researcher would ask, 'Do you know what to do?' and 'After viewing the tutorial, do you know how to use the system?' The same process was used with the tutorials.

Testing Tasks

The tasks were tested by protocol analysis once the system was developed. Instructions were given in a variety of ways, as explained earlier, and the testers were asked to accomplish the following tasks:

1. Watch the video tutorial.

2. Log in.

3. View the options.

4. Add an option.

5. Look at the present scale.

6. What does this mean to you?

7. Read the discussion.

8. Vote on items.

a. What do you think of paired comparisons?

b. If the logic check rejects the vote, note the response.

9. Reply to another comment in the discussion.

10. Change votes on items.

11. View the scale.

a. What does the information on the scale mean to you?

12. Enter a comment into the forum.

13. Vote on items again.

14. View the scale.

15. Observations?

16. Exit from the system.

Feedback was provided, modifications were made based on that feedback, and another video would be created. This was the cycle used for testing the system, and all of these parts were being tested at different times by different people. By the end, the system was easily used by the testers. This worked well, and no further formal protocol analysis was conducted.

7.9 Conclusion

The final result, the Delphi Decision Maker Version 1.0, came out almost as expected. Superior free software helped the research effort in organizing and distributing many of the tasks that needed to be accomplished during this project. An alternative decision had to be made concerning development, and it proved to have an excellent outcome. RAS worked well, and the system was delivered only one week and one day late. Certain functions could not be implemented due to time constraints, but the user interface turned out as planned and the functionalities were integrated quickly and successfully.

CHAPTER 8

THE PILOT

8.1 Introduction

A pilot study of the Delphi Decision Maker was conducted with a group of 14 graduate students from a 'Current Issues in Homeland Security' class this researcher was teaching at a university, in a department of emergency management. These students made a good test group in many ways. First, they were a nontraditional group of older students who were also practitioners in the field. Second, the students were considered experts in the problem domain. This chapter covers the Pilot and, at the same time, reviews a system very similar to the one reported in this research effort; this latter part should be considered an update to Chapter 5 on online decision making. The problem identified by Homeland Security, and the system on which it was conducted, were modified and used for the Pilot Study.

8.1.1 Students as Experts

These students were considered experts in this area for a few reasons. First, they had been placed in groups representing each category being considered in a review. They were assigned the task of conducting a literature review to discover the most pressing issues concerning their assigned category and writing a report, as if they were the advisory committee in that particular area, for the Secretary of Homeland Security. Next, the students participated in two phases, II and III, of the Quadrennial Homeland Security Review (QHSR) hosted by the Department of Homeland Security (DHS); descriptions of these phases are presented in the next section. For the students' midterm exam, they were assigned to comb through the final-phase information and produce a summary outlining the issues and solutions the group came up with. Another unexpected benefit was that the QHSR was conducted on a somewhat similar type of online system, which both implemented a Delphi technique and conducted a problem-solving technique; this is covered in this chapter.

8.1.2 Problem Domain, Quadrennial Homeland Security Review

Upon the creation of the Department of Homeland Security, Congress decreed that a quadrennial review be conducted, analyzing current affairs and realigning problem areas with goals and mission statements. In this unprecedented move, Secretary Janet Napolitano reached out to stakeholders across the board to collaborate and generate a prioritized list of concerns and issues. Over 20,000 people interacted on an online system consisting of three phases. Information from the DHS site (www.homelandsecuritydialogue.org) explains:

Thank you for participating in this groundbreaking effort. From July 16th through October 4th, more than 20,000 stakeholders from all 50 U.S. states and the District of Columbia participated in this series of dialogues to help inform the development of this important review, which will inform Homeland Security policy for the next four years.

Though the National Dialogue has concluded, the three dialogues in this series remain archived online in their entirety here:

Dialogue 1: An initial forum of participant ideas on the goals and objectives developed by DHS study groups across six topic areas.

Dialogue 2: A deeper discussion into how best to prioritize and achieve the proposed goals and objectives.

Dialogue 3: A review of the final products of each study group with participant feedback and identification of next steps.

A Delphi technique was used: members could be anonymous, were given feedback, and, at the end of each round (phase), information was aggregated and presented to the user at the beginning of the next phase. This aggregation was performed by humans. Users were also allowed to post ideas, rank ideas using a 5-star method, see a tally of the ranks, and discuss any issues they wanted.

8.2 The Delphi Decision Maker’s QHSR

Students were asked to use the Delphi Decision Maker on the QHSR problem, but the problem was modified to ask a question about preference: 'Which category of homeland security issues should be given top priority of money, manpower, or focus?' The list of categories was seeded onto the Option List by the researcher. This approach was used by Thurstone in a study in which people were asked to consider which crime was worse, with the crimes listed for them to choose from (Thurstone, 1927b). The students were somewhat more computer savvy than normal, as the course, and indeed their entire degree, was taught in an online format. This was unique to the campus, which was otherwise a traditional brick-and-mortar university.

8.2.1 Pilot Study Announcement

The students were given the regular invitation with the link to the web site that covered all necessary information and materials but, in addition, were given a set of instructions directing them to the QHSR.

Hi Everyone, I am finishing up my PhD - and have built a decision making system for crisis management. A couple hundred students will use this soon, but i'm asking this group to test the system by performing the quad review again - but this is with a different twist - which area should get the most focus, money, man power, etc. Below is the generic invitation that will be sent out. The link will take you to a site where you will need to Watch a 5 minute video - on all of this, you will not hear about the QHSR - this problem is specifically for you. Given you are practitioners and grad students, you will be a good group for this. Also, given your experience with the QHSR, you will be able to provide more valuable feedback. Let's have the problem open anytime from now - Monday. Your interactions on this problem will qualify you for the Lotto drawing. ok, here's the generic invite!

Thank you for your interest in participating in the Delphi Decision Maker field study. Participants who complete the entire study will be eligible to be entered into a $100.00 lottery*.

The Delphi Decision Maker is an online system with tools created to help a group generate ideas, create solutions and prioritize the solutions. This study will go from Thursday 8:00am – Monday 10:00pm. Participation in this study will take no more than 1 – 2 hours in total. Participants are expected to login daily and contribute to the ongoing discussion as well as perform the task (vote, comment, add items as directed by the instructions on the project site). A 5 minute video tutorial is available as a teaching tool. The project homepage can be located at: http://sites.google.com/site/delphidecisionmaker/

*drawing and distribution Nov. 1

8.2.2 Creating a New Problem

The researcher created the problem and seeded it with the categories as written by DHS (www.dhs.gov). This allowed the researcher to test the ease with which the group would be able to use the system as intended with the tools provided. The Active Problems page is presented next as a figure.

Figure 8.1 New problem added for QHSR.

8.2.3 The Quadrennial Homeland Security Review Categories

The items were added by the researcher and taken directly from the QHSR site. The categories, and how they were presented, are displayed in Figure 8.2, which came from the DHS website, www.homelandsecuritydialogue.org.

Figure 8.2 Homeland Security topic categories.

8.2.4 Seeding the Option List

The Option List was seeded with these categories from the QHSR in the Delphi Decision Maker. Figure 8.3 provides a view of what the Pilot Study participants saw: the Description, Criteria, and List of Items.

Figure 8.3 Focus areas for quadrennial review seeded.

8.2.5 Voting on Paired Comparisons

From these 6 focus areas came 15 comparisons for the voting pairs, as confirmed by plugging 6 into the formula n(n-1)/2; see the sketch following Figure 8.4. Figure 8.4 provides a view of what participants saw when they selected voting: the Description, the Criteria, and all of the paired comparisons.

Figure 8.4 Paired comparisons for the six focus areas.
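As a quick check of this count (a minimal illustration using Python's standard library; the category names below are abbreviated stand-ins for the QHSR focus areas):

from itertools import combinations

# Abbreviated stand-ins for the six QHSR focus areas (illustrative only)
options = ["Counterterrorism", "Borders", "Immigration",
           "Disaster Response", "Risk Management", "Unifying DHS"]
pairs = list(combinations(options, 2))
n = len(options)
assert len(pairs) == n * (n - 1) // 2 == 15  # 6 options yield 15 voting pairs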

8.2.6 Initialization of the Scale

When new items are posted to the Option List, they are placed at 0.0 on the scale in the order in which they were added. So, although they appear ranked here, it is important to view the number given to each item to understand that they are all actually at the same place on the scale. This could pose problems later, where items with the same number are listed one after another, visually misleading the participants. The initial feedback Scale is presented in Figure 8.5, and a sketch of this initialization follows.

Figure 8.5 Initialization of new items on scale.
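A minimal sketch of this initialization (illustrative names, not the module's code): new options enter at 0.0, and because the display sorts stably by scale value, tied items keep their insertion order and can look ranked when they are not.

def scale_rows(items):
    """items: dict of option name -> scale value, in insertion order.
    New options carry the value 0.0. Python's sort is stable, so tied
    items keep insertion order and can appear ranked on the display
    even though they share the same point on the scale."""
    return sorted(items.items(), key=lambda kv: -kv[1])

print(scale_rows({"Cake": 0.0, "Pie": 0.0, "Fruit Salad": 0.0}))
# [('Cake', 0.0), ('Pie', 0.0), ('Fruit Salad', 0.0)] -- a visual tie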

8.3 Pilot Testing the Delphi Decision Maker Software System

The Pilot Study group was to rank which of the 6 areas of focus given by Homeland Security should get the most support, manpower, money, etc. Table 8.1 displays the initial opinions of the group. Participants generally had no problems logging onto the system; however, one participant never managed to, and did not know why they could not vote or read the discussion, even though this was covered in the tutorial.

8.3.1 Voting Tool Utilized

Table 8.1 shows the scale produced by Thurstone's calculations, along with the uncertainty given by two additional columns of numbers providing the best and worst outcomes given the subjects who had not yet voted. This was intended as an indicator of how the vote could sway. The numbers on the scale reflect how strongly the group selects one option over another. There was no group specification in this version of the software, so every person who registered on the system was counted in the uncertainty calculation. This was misleading, and it was quickly identified that group membership would be a requirement for the next version of the software. Group and individual permissions are needed for many reasons; in this case, if uncertainty is calculated from those who have yet to vote, and from the possible outcomes of their votes, the group must have very strict membership guidelines. For example, voting could be used for Congressional matters, where the votes carry weight and determine the outcome of a given proposal.

Table 8.1 Early Voting Outcome
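A minimal sketch of how such best/worst columns can be derived from the Pij formula of Chapter 6 (hypothetical names; 'registered' is assumed to be the number of eligible voters): each bound lets every remaining registered user vote the same way on the pair.

def vote_bounds(n_ij, n_ji, registered):
    """Best and worst possible Pij if every registered user who has not
    yet voted on this pair eventually votes (worst: all choose j; best:
    all choose i). Meaningful only when 'registered' is restricted to
    the problem's actual group membership, as discussed above."""
    remaining = registered - (n_ij + n_ji)
    worst = (1 + n_ij) / (2 + registered)
    best = (1 + n_ij + remaining) / (2 + registered)
    return worst, best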

At this point, enough information was provided that others could argue if they disagreed with the present outcome. This was indeed the case: users used the discussion forum and began placing their arguments there.

8.3.2 Discussion Tool Utilized

Statements discussing these sorts of issues are presented. One participant gave their opinion:

“If I am reading the results correctly, I agree that the number one area of focus so far is "Securing our Borders." We as a nation will have to dedicate resources to accomplish the goal of homeland security.

I was surprised to see "Smart and Tough Immigration Laws" ranking last in the survey. I guess there are a lot of Americans who feel that this is an area that does not need much focus, or simply is a lost cause.”

The Pilot group was used to using a forum for discussion, as this is what they had been doing for two months in the researcher's course. So this was natural for them, and it demonstrated the ease with which the discussion forum could be used. This was interpreted to mean the system was easy to use for its intended purpose.

It did not take long before arguments were put forth and voting and re-voting occurred, producing a new ranking. 'Securing our Boarders' (sic; 'boarders' should be 'borders') fell to the next-to-last position in the final results after discussion ensued. Arguments were made by participants such as:

“i feel that many of these options encompass each other. If we secure our borders, the tough immigration laws may have preceded the securing. Define securing. Securing to me may mean something different to you. Securing is not necessarily putting a fence around the country. It could simply be better tracking activities and departure dates of visitors. Many of the choices encompass other choices.”

8.3.3 Item List Tool Utilized

A participant added an option, 'Interagency and Intergovernmental Information Sharing Capacity.' Although the students were not supposed to add items, this was a good addition, and it demonstrated that adding an item to the list was easy to do. This tool will be better tested in the Field Study, as that problem has the group create the list, whereas this one was seeded.

8.3.4 Scales Utilized

The scales were used as feedback as intended. This is supported by the discussion. For example, one participant said,

“I noticed the results this morning that we now have a new number one. It looks like now the focus should be on Homeland Security National Risk Management.

An area in this category that I feel warrants further analysis and am hopeful will make it across the desk of someone with authority in the Department of Homeland Security is the need for security education within the schools and lives of US citizens. To manage a risk factor, Americans need to be educated on what constitutes a risk factor. Most citizens are only exposed to what the media shows and tells them. They receive a one-sided view instead of receiving all of the elements needed to make an informed decision.”

The scale is intended to let users know whether they agree or disagree with the group, and it may trigger argumentation. This was the case in this Pilot Study. Argumentation ensued, which is at the heart of this system: supporting groups when they disagree. Other students honed in and agreed or disagreed. There was a history of interaction among the participants, which may have contributed to the ease with which their discussions occurred. More students commented back to the aforementioned post. They said,

“Actually, I agree with (student’s prior post) it would seem like the number one priority would be Homeland Security National Risk Management. This would point out the vulnerable areas that DHS should focus their resources into. I also believe that the second area of focus should be disaster response. The DHS via FEMA is the lead agency for coordination and response activities. This certainly needs to remain a top priority. I also beleive that the new category of Interagency & Intergovernment Information Sharing Capacity is actually a subset of Conterterrorism & Domestic Security Management.”

and

“Smart and tough immigration laws I think would be ranked last because it is becoming a divisive issue and "smart and tough" legislation usually happens when issues are pretty clear cut or there is consensus. Besides good legislation can take time, where seuring the boarders can be done quicker.”

However, more attention needs to be given to interpreting the results of the scale, and information needs to be provided to users on this. A couple of comments confirmed this observation:

“I do not understand the Scale of Results. I will examine it more closely.”

This comment not only remarks about the scale, but also identifies the problem area that will benefit most from using this system,

“I am a little confused as to how this voting is going to be helpful. I guess a real time situation with the need for immediate decisions would do this program more justice.”

This group cared about the results and this shows in how they utilized the tools. These final results are provided in Table 8.2.

Table 8.2 Final Voting Outcome

Much discussion ensued about the areas, and this directly affected the voting, such that new results emerged.

8.4 Conclusion

The Pilot Study confirmed that the system was ready to be tested. The tools were used, and no usability problems surfaced. The site remained available for the entire duration, supporting the tools as intended, and no interruption problems occurred. The uncertainty calculations grew larger as more participants joined the system; at this point, there were 51 people registered and 8 voters. For this type of uncertainty to have any meaning, only the problem's group members should be counted in the calculation. This was not the case: anybody who registered on the system was counted in the final uncertainty calculations, rendering them useless for this problem.

CHAPTER 9

METHODOLOGY

9.1 Introduction

This research project consisted of many steps the participants needed to go through from start to finish. The project was conducted over the Internet, and the system was developed for online use. A great effort was made to make the project easy to understand from the beginning of the study to the end. Participants were solicited through email with an invitation containing a link to the research project management site. On this homepage, titled The Delphi Decision Maker (http://sites.google.com/site/delphidecisionmaker/), instructions were provided in a set, chronological order. This was tested for ease of use by sending the link to a few testers and asking them by email whether they understood what was supposed to be done just by looking at the web site. A 5-minute video was created as an introduction to the project, using the screen video capture software Jing (www.jing.com); it provided a description to the participants, showed them how to use the site, and clicked through all of the options to demonstrate what was required next. Both text-based and video tutorials were provided describing how to use the software system. After this, a survey asked participants about the effectiveness of the tutorial. Next, a sample problem was provided where participants could practice their newly learned skills in a non-stressful situation: the Dessert Selection Problem. Then the participants were led to the actual study problem, the Campus Threat Assessment. Finally, participants were asked to fill out the post survey. In the introduction video, the importance of completing this last part was emphasized. Not all who started the project finished, but all who started the final survey completed it.

9.2 The Delphi Decision Maker Home Site

A website was created to organize the information needed to complete the study, as it consisted of many steps. The study site was distributed to a group of faculty members from the university where the study was conducted, along with a brief note introducing the study to potential participants. Those faculty members distributed the announcement to their students using an internal university electronic mail system; these mails were sent out, staggered, over a three-day period. The researcher also reached out to other faculty in her department in hopes of further distribution, and the email asked that the notice be forwarded to any other interested university personnel, in hopes of a snowball effect. A figure of the homepage is provided next (http://sites.google.com/site/delphidecisionmaker/).

Figure 9.1 The Delphi Decision Maker project homepage.

The site was created in Google Sites (sites.google.com), as it was free, easy to use, and produced a quality site. The rest of the information was presented on this site. The site was sent to a variety of information systems experts over the span of a week, and edits were made until they agreed that it was very clear what needed to be done. Upon final creation, the site was sent to another information systems expert who was not familiar with the project at all; based on this feedback, the site was deemed ready.

9.3 "Watch This Video First": Research Participants

To help participants know what to do upon coming to the site, ambiguities were lessened by using very direct language. A video was created using live screen capture, in which each step in the process was verbally explained and visually supported by matching actions on screen. All of the instructions were provided in this video, showing the participants what was required. Quite a bit of effort was required of the participants, so, in hopes of encouraging participation through to the end of the project, a one-hundred-dollar lottery drawing was offered.

Prior to its use, this video was sent to a variety of information systems experts who critiqued it. Based on their feedback, modifications were made; a new version was created, tested again, and agreed to be clear in its instructions. The video was then tested with others not from the information systems domain and was found equally understandable. The video provides the potential participant with detailed instructions on everything that needs to be done and on what the software should look like at each part of the project. The survey software was demonstrated, including the need to click the "agree" check box on the consent form, and then the software system designed in this effort was demonstrated.

9.4 Pre Survey and Consent Form

A pre-survey with a consent form was created using QuestionPro.com. This application creates online surveys and generates summary statistics suitable for descriptive analysis. It also provides an organized way to aggregate text-based responses and conduct content coding for qualitative analysis. A pre-survey was given to the study group. First, from the developer side, there was a need to identify which browser each participant was using, as there are many and not all work the same. There was also research interest in knowing whether, and how much, experience the group members had with group support systems and, for those with such experience, how many had conducted decision-making tasks online as a distributed group of individuals. General demographics were collected: the age of the users, gender, and their proximity to campus, since the threat assessment was geographically related. Last, the subjects were asked about their interest in, and education about, emergency management. The researcher was faculty in an emergency management department and used her graduate students as participants, in hopes of receiving input from those in the domain for which the system is ultimately designed: crisis managers. The survey was created online, and the study participants were provided with a link directing them to it.

Fifty-three people started the project by taking the pre-survey, and 47 completed it, giving a completion rate of 88.68%. Almost half of the participants, 45.45%, used Internet Explorer as the browser to access the Delphi Decision Maker; Mozilla was used 34% of the time, and other browsers included Google Chrome and Safari. When asked whether they had any history of using a group decision support system, 61% responded that they had never used one. Most others had very little exposure, and only 2 of the 44 participants who answered this question claimed to have a lot of experience. The proportion with no history of use grew to 82% when participants were asked whether they had used online decision-making software, although all had some online experience and 89% replied that they were frequent users of the Internet, using it multiple times per day.

The researcher’s students, all of whom are enrolled in an emergency management program, were asked to participate in the study. Given this system is designed for crisis managers, some relevant questions were posed. All participants rated their knowledge in emergency management as average or above and over half rated their interest in emergency management as high.

All participants were affiliated with a single university, although many were taking courses remotely and did not live on or near the campus; some did not even live in the same state as the university. 66% of the participants were male, and 90% were students. 16% of those surveyed were traditional students under the age of 24. Most, however, were nontraditional students: 30% were in the 25–35 range, 35% in the 36–50 range, 16% in the 51–60 range, and one was in the 61–75 range.

Given that the task question used in the study is for students to conduct a threat assessment for a campus, it is important to record the residency of the study participants.

Table 9.1 Demographics: Residency of the Study Group

Please indicate your residency.

Answer                               Count    Percent
I live on campus                         3      7.14%
I commute to campus                      7     16.67%
I never come to campus (distance)       32     76.19%
Total                                   42       100%

The frequency analysis shows that most of the participants, 76%, were distance students. This would make them less likely to know about, or be concerned about, campus threats, as they would not be familiar with the potential threats. Some students did mention cyber security concerns, though.

9.5 Video and Text-Based Tutorials

Two tutorials were created to teach new users how the system works. A screen capture software product, Jing (www.jing.com), was used to create a tutorial believed to be an easier way for people to learn the system. This tutorial was created and then distributed to information systems experts to critique. Three experts provided feedback, and the tutorial was modified accordingly. One expert also observed that not everyone would have sound, so a text-based version of the tutorial might be useful; it was upon this suggestion that the text-based tutorial was created. This was reviewed by the experts, then by others, and was found adequate for these purposes. The text-based tutorial is found at the end of this dissertation as an appendix.

Both tutorials presented the tools created in this research effort. Some of the tools take a different approach to accomplishing tasks than what people are used to. Paired comparisons were demonstrated, and a brief description of Thurstone's scale was presented. The list generation tool was also covered, and the forum was demonstrated.

A survey was created and participants were asked to provide constructive feedback. This survey can be found as an appendix at the end of this dissertation.

9.6 Tutorial Survey

The tutorial survey provided good feedback. Thirty-nine participants started the survey, but only thirty-two completed it, giving an 82% completion rate. Twenty-four of these people used the tutorial; of those, 88% watched the video format while 12% used the text-based version. Most preferred video, but many desired both. Views on the tutorials varied, but most found them good; see Table 9.2 for details.

Table 9.2 Opinions Critiquing the Value of the Tutorial

The tutorial provided for The Delphi Decision Maker was:

Answer           Count    Percent
Poor                 0      0.00%
Below Average        2      6.25%
Average              7     21.88%
Good                19     59.38%
Excellent            4     12.50%
Total               32       100%

Participants were asked whether the tutorials made sense or were confusing. A mean score of 2.125 on a 7-point semantic differential indicated that the tutorial's instructions did make sense, and participants' results confirmed that enough information was provided. However, respondents desired to know more about the models used to provide information to the group, and one quarter of the participants thought more needed to be covered in the tutorial. One limitation to note is that it is difficult to know whether a tutorial was good until one goes to use the system, so a question was added to the post-survey asking about this specific issue: What improvements can be made in the tutorial to make the system easier to use initially?

Most people found the tutorials provided the information needed. For example, one participant stated, "I found the tutorial informative enough. I was able to get into the system and begin using it immediately without any problems." However, many were confused by the logic check error message and thought it needed to be covered in the tutorial. A couple of comments referring to this are, "explain what the loops are when you try to vote and it gives you that message" and "Explain what it means, if after completing the survey, it says that you have a loop in your answers." This was not foreseen to cause the problems it did in the study. The logic check enforces integrity in the user's selections in the paired comparisons when voting. It was noticed that while the number of items stays relatively small, fewer than 5, there was no problem; but as the list size increases, so does the number of comparisons, which directly affects the logic check.
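The scaling problem these comments point to can be made concrete: comparisons grow as n(n-1)/2, while the triads the logic check must keep acyclic grow as n(n-1)(n-2)/6. A quick illustration:

from math import comb

for n in (3, 5, 8, 12):
    print(f"{n:>2} items: {comb(n, 2):>2} comparisons, "
          f"{comb(n, 3):>3} triads to keep consistent")
#  3 items:  3 comparisons,   1 triads to keep consistent
# 12 items: 66 comparisons, 220 triads to keep consistent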

For the next version, The Delphi Decision Maker Version 2.0, more explanations will be provided based on the feedback about this tutorial. One subject asked for a shorter version of the tutorial, as 5 minutes was perceived to be too long; others wanted a longer explanation and a longer tutorial. The users of this system have a diverse set of backgrounds, so various versions of the tutorial will be considered to meet the needs of each expert, at whatever depth they would or would not like.

9.7 Conclusion

This methodology chapter provides the basis for the study results and analysis. Information was provided showing how the study was conducted, along with details on the study subject population. The studies themselves, as well as their results, are presented in their own chapters.

CHAPTER 10

THE RESEARCH STUDY

10.1 Introduction

The objective of this chapter is to document the study conducted on the Delphi Decision Maker, Version 1.0, in chronological order and as a discrete process. The software had been tested by people throughout development, and a pilot study had been run. Major problems surfaced with votes going through: the voter count was decreased because participants could not get their votes recorded when they could not get through the logic check. Many modifications and upgrades will be made in The Delphi Decision Maker 2.0, where methods are being proposed to overcome the problems identified in the design.

10.2 Notification for Participation

An email was sent to faculty to be announced to their classes. The announcement for soliciting students is presented next.

“Thank you for your interest in participating in the Delphi Decision Maker field study. Participants who complete the entire study will be eligible to be entered into a $100.00 lottery*.

The Delphi Decision Maker is an online system with tools created to help a group generate ideas, create solutions and prioritize the solutions. This study will go from Thursday 8:00am – Monday 10:00pm. Participation in this study will take no more than 1 – 2 hours in total. Participants are expected to login daily and contribute to the ongoing discussion as well as perform the task (vote, comment, add items as directed by the instructions on the project site). A 5 minute video tutorial is available as a teaching tool. The project homepage can be located at: http://sites.google.com/site/delphidecisionmaker/

*drawing and distribution Nov. 1”

Due to development delays, the invitation went out a day later than expected. It went to the designated group of faculty who had agreed to distribute the announcement, as well as to 50 of the researcher's own students. Recipients were also asked to pass the invitation on to other university staff, faculty, and students in hopes of a snowball effect. The announcement went out from the researcher to the faculty at 7:30 a.m. Central Standard Time.

10.3 Initialization of Problem

Another group had started a pilot on the system two days prior to the study. These were students of the researcher and were also included in the research study; see the Pilot Study chapter for more details. Combining the Pilot group with the people who had been asked to test the system, there were 20 users at the beginning of the research study. Unfortunately, there were no group functions in place to restrict membership to one Active Problem area versus another; rather, anyone who logged in was accounted for and hence factored into the calculations provided to the user. This is explained further in this chapter.

The Pilot Study group consisted of emergency managers. They gave the Field Study a good start by providing numerous options, seeding the list on the first day with 6 possible threats. The researcher kept data on when people registered on the system: the date, time, number of registered users, items added to the Option List, and number of voters. Table 10.1 provides this information.

Table 10.1 Field Study Participation Data

From the onset, 6 options were added to the list; oddly, no other options were added until the next day. Severe Weather was listed first and was confirmed as the most likely threat to campus by all who participated. This was odd because there were numerous threats to the campus: a biochemical stockpile of weapons from WWII was only miles away at the Anniston Army Depot, and there was a closed but still-operating fort in the area that housed many government entities, such as Federal Emergency Management Agency and Homeland Security training, special forces training, a police academy, a bomb-sniffing canine unit, emergency management training, and National Guard training. In addition, several businesses in the area do other military-related jobs, such as building tanks for the ongoing war on terrorism. So it was odd that 'Terrorism' was never added to the list, especially given that some of the students were in a Homeland Security class.

Results from the Scale feature are presented throughout this chapter to show how the information changed.

Table 10.2 Saturday, October 17, 8:00 PM Results

Total number of users: 33

Users who have voted so far: 7

Soon H1N1, the name given to an ongoing pandemic flu, was entered, and it rose near the top. New items entered in the Option List are placed at the 0.0 point on the scale; options could have been placed either at the bottom of the entire list or at the zero point. A lighthearted Dessert Problem was used in the study as a place for participants to exercise their newfound skills after reviewing the tutorial. The following Dessert Problem figure shows how new items added to the Option List appeared to the user on the scale.

Figure 10.1 New dessert items added to the scale, placed at 0.0.

No real discussion surfaced debating H1N1, even though it was spreading throughout the campus, community, and state. Participants noted the threat from it through both discussion and voting.

Severe weather is a common occurrence in this region of Alabama. Many in the area had already had H1N1, including the researcher, but it still was not considered a higher threat than Severe Weather: it rose quickly but did not top severe weather, and subjects continued to comment that Severe Weather was the biggest threat. The campus used in the threat assessment is located in the foothills of the Appalachian Mountains in the northern part of Alabama in the United States. Straight-line winds are commonplace and very dangerous in this region. Combined with the biochemical stockpile, which accounts for 7.2% of the world's known supply, this means sirens go off almost weekly during many months of the year. The threat is not so much the severe weather itself as the consequences should containment of the Anniston Army Depot stockpile fail and release its contents into the air, directly threatening the local population. Straight-line winds can bring torrential downpours of rain; large hail and high winds are common; and tornadic activity can spin off, causing heavy damage.

This system is meant to be used by experts. The discussion area and the ability to change the wording of an item in the Option List were never used to their full potential. Experts would most probably identify the aforementioned threats and debate the severity and potential of each; arguments, or intellectual brawls, could ensue in which information would be exchanged. This system is built to support such needs. It is critical to select the correct problem type for the study group in order to test a system like the Delphi Decision Maker. Although various problem types were administered, none pulled from the users the needs that would optimize the use of such a system.

By the following afternoon, October 18, Table 10.3 shows that a few more participants had registered with the system and a couple more had voted. This changed the numbers, but not the order, of the top three options. However, the order of the remaining options changed a good bit.

Table 10.3 Items Moving Closer In Rank

Total number of users: 36

Users who have voted so far: 9

Although the order remained the same, the scale shows that the second and third options were considered more of a threat than previously displayed. Although agreement remained, Severe Weather was not judged by the group to be as far ahead of the other threats as before. Tornadoes should also have been further explored: they are a direct result of Severe Weather and do not occur without it. However, they are not as likely to occur as general Severe Weather, which Wikipedia defines as "any dangerous meteorological or hydro-meteorological phenomena, of varying duration, with risk of causing major damage, serious social disruption and loss of human life" (http://en.wikipedia.org/wiki/Severe_weather). Severe Weather can thus be just high winds, a hail storm, an ice storm, and so on, which makes it more likely to occur.

Table 10.4 New Items Entered and Set at 0.0

Total number of users: 44

Users who have voted so far: 13

By Monday evening at 6:40 PM, the results had changed a bit, but due to the logic check, many participants were not able to get their votes accepted. What is concerning is that the system forced people to satisfy a logic check in addition to voting on paired comparisons; the two together are more than most people want to contend with. This researcher personally found that it felt like playing a game to make the system accept one's selections, which runs counter to capturing one's true preferences.

Table 10.5 Monday, 6:40 PM Results

Total number of users: 48

Users who have voted so far: 14

The study ended Monday evening, October 19, 2009, at 10 PM. The next table shows the final ranking of results.

Table 10.6 Tuesday, Final Results

Total number of users: 51

Users who have voted so far: 18

As the final table confirms, the group agreed that Severe Weather was the biggest threat; only H1N1 and Fire switched places on the scale. The study was brought to a close Monday at 10 PM; however, a few more people participated on Tuesday. Tuesday evening, the surveys were closed and the results were gathered. As the table indicates, there were 51 registered users. This number was used in the uncertainty calculation, which produced faulty results because there were no restrictions on group membership: while the chart shows 51 users, only about 35 of them were using the system for this particular problem. The other users were testers of the system.

The frequency matrix in Table 10.7 is provided so that Thurstone's scale calculations can be verified against the paired-comparison input.

Table 10.7 Frequency Count for Final Scale Results
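To make the computation concrete, the following is a minimal sketch of Thurstone's Case V scaling applied to a paired-comparison frequency matrix of the kind shown in Table 10.7. The small example matrix and option names are illustrative assumptions, not the system's actual data or code.

# A minimal sketch of Thurstone's Case V scaling from a paired-comparison
# frequency matrix. The matrix F and option names below are hypothetical.
import numpy as np
from scipy.stats import norm

def thurstone_case_v(F):
    """F[i][j] = number of voters preferring option i over option j."""
    F = np.asarray(F, dtype=float)
    with np.errstate(invalid="ignore"):
        P = F / (F + F.T)              # proportion preferring i over j
    np.fill_diagonal(P, 0.5)           # an option ties with itself
    P = np.clip(P, 0.01, 0.99)         # avoid infinite z-scores at 0 or 1
    Z = norm.ppf(P)                    # unit normal deviates
    scale = Z.mean(axis=1)             # Case V: row means of the deviates
    return scale - scale.min()         # anchor the lowest option at 0.0

# Hypothetical 3-option example (e.g., Severe Weather, H1N1, Fire):
F = [[0, 14, 16],
     [4,  0, 11],
     [2,  7,  0]]
print(thurstone_case_v(F))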

10.4 Conclusion

There were many issues observed over the duration of the study. Problem areas were identified and will be corrected in the next version, the Delphi Decision Maker 2.0. Overall the study went well and expectations were met, though many areas will require improvement, particularly as the system is presently being used by Sahana. Future studies are planned, and the lessons learned here will be applied. This is design science, and as such, the next version will address the problem areas identified here. One unexpected finding was how important it is for the problem to be of a particular type, and for the group to be seriously vested in the results, given that people are trying to prioritize a list of options for some problem. It goes further than this: depending on the goal of the prioritization, the system will be utilized differently. For example, a scale of severity serves a different purpose when it is used to develop a scale than when the top selections receive resources. The interest of the stakeholder or group member will therefore differ as they use the system. This is not a trivial problem, and these dimensions should be considered when using this system.

CHAPTER 11

RELIABILITY MEASURES

11.1 Introduction

In this section, reliability measures are presented for the survey questions that were used to address the research questions posed in this dissertation. This applies to some, but not all, of the research questions in this study. First, each research question is stated, followed by a chart of the survey questions and their associated survey question identification numbers. Next, a table of simple statistics is presented for each survey question used to analyze the research question. The survey questions were structured as semantic differentials with a scale from 1 to 7, where 1 was high agreement and 7 was high disagreement. Cronbach's Alpha is used to measure the internal consistency of the item scales in the survey used to address each research question. A Cronbach's alpha of .7 or over is generally considered sufficient to show internal consistency (reliability) of the scale (Hair et al., 2006). Although these scales are not used for testing hypotheses in the initial studies, if they are reliable, they can be used in subsequent studies.
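For concreteness, here is a minimal sketch of how a standardized Cronbach's alpha of the kind reported in this chapter can be computed, assuming responses are stored as a respondents-by-items array. The function and sample data are illustrative; the dissertation's values were produced by a statistical package.

# A minimal sketch of the standardized Cronbach's alpha computation.
# The sample responses are hypothetical, not the study's actual data.
import numpy as np

def cronbach_alpha_standardized(X):
    """X: 2-D array, rows = respondents, columns = survey items (1-7 scale)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]                       # number of items in the scale
    R = np.corrcoef(X, rowvar=False)     # item intercorrelation matrix
    r_bar = (R.sum() - k) / (k * (k - 1))   # mean off-diagonal correlation
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# Hypothetical responses from 5 participants on 3 items:
X = [[1, 2, 1], [3, 3, 2], [2, 2, 3], [5, 4, 4], [4, 5, 5]]
print(round(cronbach_alpha_standardized(X), 3))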

11.2 Reliability of Scales

11.2.1 Research Question 1

RQ1 asked, Is it possible to create a web-based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions, expose disagreements, and reach decisions more quickly than they could have without such a system?

Next, the survey questions used to measure research question 1 are presented in Table 11.1.

Table 11.1 Survey Question Number with Associated Question

A table of Simple Statistics is presented next for this group of survey questions. Thirty students finished the survey. A scale from 1 to 7 using semantic differentials was used where 1 supported high agreement and 7 supported high disagreement. The means with standard deviations are presented in Table 11.2.

Table 11.2 Statistics for Survey Questions for Research Question 1

8 Variables:

q20 q23 q24 q26 q32 q36 q33 q1

The Research Question 1 survey questions had a standardized alpha of 0.919480, which indicates high reliability for the scale. Table 11.2 displays both the raw and standardized calculations. The total mean was 2.5 with a standard deviation of 0.3.

Next, research question 2 is presented. Research question 2 is composed of two parts, each of which asks a different question. Both parts are based on the Technology Acceptance Model: the first asks about Perceived Usefulness and the second about Usability. The first part of RQ2 has a separate calculation based on Cronbach's Alpha. In this part, the scales of two questions, q3 and q29, had to be reversed due to their semantic differential polarity.

11.2.2 Research Question 2

RQ2 asked, What features or functions are most useful for this objective? What is required to make the design of the system understandable and useable?

Table 11.3 presents the survey questions that were used along with the question number they were given on the questionnaire.

Table 11.3 Survey Questions used to Measure RQ2

Simple statistics were run on these seven variables and the results are in Table 11.4.

Table 11.4 Simple Statistics for Survey Questions Measuring Research Question 2

7 Variables:

q3r q4 q5 q6 q7 q29r q31

Table 11.5 Cronbach’s Alpha on RQ2

Two of the questions, q3 and q29, had to have their scale values reversed, because the positive and negative poles of their scales ran opposite to those of the other questions; this put all items in the same order so that a correct alpha could be calculated. Cronbach's Alpha was calculated on this group, and a result of 0.828269 indicates adequate reliability. The total mean was 3.1 with a standard deviation of 0.7.
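The reversal itself is simple; the following minimal sketch shows the mapping used before computing alpha, with hypothetical scores.

# A minimal sketch of reversing a 7-point semantic differential item so its
# polarity matches the other items. Scores shown are hypothetical.
def reverse_item(score, scale_max=7):
    """Map 1 -> 7, 2 -> 6, ..., 7 -> 1 on a 1..scale_max scale."""
    return (scale_max + 1) - score

q3_scores = [1, 4, 7, 2]
q3r = [reverse_item(s) for s in q3_scores]   # [7, 4, 1, 6]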

11.2.3 Research Question 4

RQ4 asked, Can the system be used for planning and other tasks that would make it a regularly used system, eliminating the need for special training for use in emergencies?

The set of questions used to answer this question is presented in the next table.

Table 11.6 Survey Questions and Identification Numbers

Simple statistics are provided on these three variables in Table 11.7.

Table 11.7 Simple Statistics for Survey Questions Measuring Research Question 4

3 Variables:

q9 q10 q11

Table 11.8 Cronbach’s Alpha for RQ4

Although only three variables were considered, a result of 0.878225 indicates that these questions were measuring the same construct. The total mean was 2.5 with a standard deviation of 0.4.

11.2.4 Research Question 10

RQ10 is considered next as it was also analyzed using surveying methods. It asks: Will using a visual scale based on Thurstone’s Law of Comparative Judgment help the expert make a more informed or confident judgment?

Four questions were created to answer this question. They are provided in the next table.

Table 11.9 Survey Questions and Identification for RQ10

Simple statistics are presented for these four questions. A scale reversal had to be conducted on q34.

Table 11.10 Simple Statistics on Survey Questions for RQ10

4 Variables:

q22 q25 q34r q35

Table 11.11 Cronbach’s Alpha for RQ10

Cronbach's Alpha was calculated, and a result of 0.767301 indicates that the questions are asking about the same factor. The total mean was 3.7 with a standard deviation of 1.3.

11.3 Conclusion

Cronbach's Alpha indicates that, for each research question considered, the survey questions used to answer it were, as a group, measuring the same factor. Although this cannot directly prove that the survey questions measure what the research questions ask, it does confirm at the very least that the items in each group are related and plausibly address the research question they were designed to answer.

CHAPTER 12

STUDY RESULTS

12.1 Introduction

This chapter describes the field study and analyzes the data by use of the technique of triangulation. The results of the data analyses give insights that suggest answers to the research questions. The objective is to analyze the information gained from the study and address each research question by analyzing the data relevant to the research question. A variety of methods are used to answer the research questions: proof of concept, descriptive statistics, and qualitative analysis. The system was viewed positively by study participants and thought to be an important tool that is needed for emergency response. In fulfilling the requirements of design science, further modifications are being made to the system in order to maximize its ability to support the needs of the targeted users.

12.2 Constructs Defined for Evaluation

The research questions have themes which are further abstracted into constructs from the literature for various types of systems. An online survey tool called QuestionPro (www.questionpro.com) was used to create both pre- and post-study surveys to collect data from the participants. Wherever possible, previously validated survey items were used or modified for use. The constructs examined are Usefulness, Confidence, and Satisfaction, as well as constructs newly examined in this dissertation: Scale Use, Discussion Use, and Voting Use. This gives a good foundation in which existing survey questions that have been deemed reliable can be used. The questions are semantic differential items. Because the sample size (N=31) was too small, factor analysis could not be used to assess construct validity (Hair et al., 2006).

12.2.1 Usefulness: The items for this construct measure if the user perceives the system as beneficial in helping them accomplish a task. Does this system help the user do something they cannot otherwise do without the system? Or does this help them do something better or faster? The Technology Acceptance Model (TAM) indicates that perceptions of usefulness can lead to behavioral intention to use a system (Venkatesh and Davis, 2000).

12.2.2 Ease of Use: This construct is designed to help determine if the user finds the system easy to use and intuitive. This is very important because the user interface must reflect the thoughts of the user in a logical manner, letting the user quickly identify what to do next and where everything is. Especially given that this system's intended users are in the crisis management field, it must be easy to use and manage (Venkatesh and Davis, 2000).

12.2.3 Confidence: It is important that someone using the system believes in the calculations giving the final output. Confidence will be tested as it relates to the user’s confidence in the results from the feedback. Do they think the results accurately reflect the group? Do the individual results reflect the user’s opinion? Do the users feel confident that using this system is better than not using the system?

12.2.4 Satisfaction: In the surveys, participants were asked to rate their satisfaction with the system. This is to determine if the system is perceived to have value. Does the system help users do tasks better? Are the users happy with the outcome of the entire process, and do they like the process itself?

12.2.5 Scale Use: This construct is designed to test if a user finds the scale useful for its intended purpose. The scale is meant to steer the group by indicating areas of agreement and disagreement through the visualization of data presented on it. Further, the scale is key in showing the user how uncertainty can change the outcome. Sometimes uncertainty changes the final rank; in other instances it makes no difference at all. This was shown to be true by examining the possible outcomes under these situations. Therefore, in some instances it may be worth the added effort to persuade a group, and in others it may not.

12.2.6 Forum Use: This construct is to help determine if the forum is being used and if it’s being used, why. The forum may be used when a participant determines from the scale, that their opinion differs from the group. The forum can be used to persuade others using arguments and pointers to other information backing up those arguments. The forum may also be used for one to confirm, modify or contribute to another post or for some other reason that may be uncovered in data analysis.

12.2.7 Vote Use: This construct is designed to determine the motivations of the users to vote. Voting is meant to represent a participant's opinion and to change on an ongoing basis as that opinion changes. Will voting be used to reflect user opinion? Will the participants actively change their votes when their opinion changes? Perhaps participants will only bother to vote on those solutions that concern them.

12.3 Research Questions and Results

12.3.1 Research Question 1

Research question one was the overarching question in this dissertation research. It asks: if a system is built offering tools to quickly direct a group of experts through the decision making process, will they use it? The system design and implementation were driven by theory (Thurstone's Law of Comparative Judgment). Tools were developed based on these principles and created specifically to:

· Support item generation for building a list of options,

· Allow dynamic voting to reflect one’s opinion,

· Trigger discussion amongst participants,

· Provide a visual scale presenting a single, unified group feedback.

Would crisis managers find these tools useful and beneficial enough to use them?

Evaluation of answers to the first research question draws on the Usability and Confidence constructs.

The first research question asks:

RQ1: Is it possible to create a web-based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions, expose disagreements, and reach decisions more quickly than they could have without such a system?

Results of data analysis support a positive response to this question. Data were analyzed using three methods: Proof of Concept, Open Ended Survey Responses and Descriptive Statistics.

Proof of Concept

First, the system has been built and used not only by testers, but also by experts from the field of emergency management. Proof of concept supports an affirmative response to RQ1. The software is called the Delphi Decision Maker, and it presently exists as a module of the Sahana Disaster Management System. A live demonstration of the software is available at http://demo.sahanapy.org, where all of the modules are available for interested people to explore. A figure of the main page of the system is provided next.

Figure 12.1 SahanaPy homepage with Delphi Decision Maker module.

Participant Open Ended Text Feedback

Analysis of the comments made by participants in the research study also supports an answer of yes to RQ1. In the open-ended questions of the post-study survey, participants made only favorable comments concerning the potential usefulness of this system; no negative remarks were made.

A participant remarked, "I liked it…add a few more topics and you'll have a major site on your hands". Another stated, "Think it is good system. Feel it could benefit in many different ways." These statements indicate that the users thought the system could be used for different task and decision types. One participant made a statement which pinpointed the initial goals behind the design effort. S/he stated,

“I guess a real time situation with the need for immediate decisions would do this program more justice.”

This is exactly the decision making environment the system was designed to support, so this participant observation confirms the design effort.

Descriptive Statistics

Survey questions were developed to gather data to answer RQ1; in this analysis, survey questions are denoted as Q#. Participants were given a questionnaire after the exercise was over, and 31 responded. For each research question, the associated survey questions are analyzed using simple statistics. Frequency analysis and descriptive statistics are used to see the distribution of answers, and the mean is then used to determine the level of support. Questions were structured as semantic differentials using a 7-point scale. The following categorizations of the responses will be used:

Table 12.1 Interpretation of Scale Values
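As a concrete illustration of how the frequency analyses below are read, here is a minimal sketch of the categorization implied by this chapter's interpretation (ratings 1 – 3 read as supporting the positive pole, 4 as neutral, 5 – 7 as supporting the negative pole); the exact category labels in Table 12.1 are assumptions here.

# A minimal sketch of the response categorization used in the analysis.
# Cutoffs are assumed from the chapter's own reading of the 7-point scale.
def interpret(rating):
    if 1 <= rating <= 3:
        return "positive (agrees with left pole)"
    if rating == 4:
        return "neutral"
    return "negative (agrees with right pole)"

counts = {1: 8, 2: 6, 3: 9, 4: 5, 5: 2, 6: 0, 7: 1}    # Q30 frequencies
positive = sum(c for r, c in counts.items() if r <= 3)
print(positive / sum(counts.values()))                   # ~0.74 for Q30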

Continuing the analysis of the first research question (if a system is built offering tools to quickly direct a group of experts through the decision making process, will they use it?), a set of survey questions was developed. Each question is covered in turn, the first of which was Question #30 (Q30) on the questionnaire:

Q30 - Given the problem, the number of participants, discussions that went on and the evaluation of that information, this system took us a (shorter/longer) amount of time to reach a decision than we could have made without such a system.

Table 12.2 Survey Statistics for Survey Question Number 30

Frequency Analysis

Answer  | 1 Shorter | 2      | 3      | 4      | 5     | 6     | 7 Longer | Total
Count   | 8         | 6      | 9      | 5      | 2     | 0     | 1        | 31
Percent | 25.81%    | 19.35% | 29.03% | 16.13% | 6.45% | 0.00% | 3.23%    | 100%

The mean was 2.710, interpreted as a positive response: the participants believed that this process occurred more quickly than it would have without the system. Viewing the frequency analysis in Table 12.2, 74% of the participants selected 1 – 3, demonstrating that the majority of the group viewed this positively. Hence survey question Q30 supports the claim that, under the same set of circumstances, using this system provided a quicker way to perform the task than not using it. It should be noted that fewer than 20% of those surveyed in the pre-study survey reported any history of using a group decision support system. This is an indicator that software systems supporting group decision making are presently neither common nor widely used in day-to-day work. This gives the response more meaning here, as most participants had not experienced anything other than traditional face-to-face interactions.

Q26 - I believe that I was able to evaluate the information (better/worse) than normal given there was an online forum.

Table 12.3 Survey Statistics for Survey Question Number 26

Frequency Analysis

Answer  | 1 Better | 2      | 3      | 4      | 5     | 6     | 7 Worse | Total
Count   | 8        | 11     | 5      | 5      | 0     | 1     | 1       | 31
Percent | 25.81%   | 35.48% | 16.13% | 16.13% | 0.00% | 3.23% | 3.23%   | 100%

77% of the participants gave a rating of 1 – 3, agreeing that they could evaluate information better than normal given there was an online forum. Another 16% provided a rating of 4, which is considered neutral, leaving only 6% in disagreement. This is fairly strong evidence that the online format was better. Further, a mean of 2.516 indicates that the participants felt they were able to evaluate the information better than normal given there was an online forum. This supports RQ1 in that the research question specifically addresses the user's ability to evaluate the information on the Delphi Decision Maker.

Q23 - It was (easier/more difficult) for me to share and express my opinion using this system than I could have done, (under the same set of circumstances) not using the system.

Table 12.4 Survey Statistics for Survey Question Number 23

Frequency Analysis

Answer  | 1 Easier | 2      | 3      | 4     | 5     | 6     | 7 More Difficult | Total
Count   | 10       | 6      | 8      | 3     | 3     | 0     | 1                | 31
Percent | 32.26%   | 19.35% | 25.81% | 9.68% | 9.68% | 0.00% | 3.23%            | 100%

A mean of 2.581 indicates that, given the same set of circumstances, people found it easier to share and express their opinions using this system. The frequency analysis agrees: 77% of the participants gave a rating of 1 – 3, showing that most were in agreement.

Q36 - I am confident that a group __can/can’t______ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions.

Table 12.5 Survey Statistics for Survey Question Number 36

Frequency Analysis

Answer  | 1 Can  | 2      | 3      | 4     | 5     | 6     | 7 Can Not | Total
Count   | 15     | 7      | 4      | 3     | 1     | 1     | 0         | 31
Percent | 48.39% | 22.58% | 12.90% | 9.68% | 3.23% | 3.23% | 0.00%     | 100%

48% of the participants rated this a 1, indicating strong agreement, and over 70% rated it a 1 or 2. Combining the 1 – 3 ratings, 83% agreed that this will work in an online environment, while 6.5% disagreed. Given that ratings of 6 or 7 were rare on all of the previous questions as well, one may interpret these as perhaps coming from the same few people whose views have been consistently negative throughout this analysis. A mean of 2.065 indicates that people were confident that a web based system could enable dispersed groups of people to share information, evaluate information, and share opinions. This question brought together three of the other independent aspects of the system.

Given that the three aforementioned survey questions (Q26, Q23, and Q36) address the same issues of evaluating information and sharing opinions, a mean of the means was calculated: (2.516 + 2.581 + 2.065) / 3 ≈ 2.387, further supporting RQ1.

Q24 - I feel confident that a group ___can/can’t______ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to expose disagreements, and reach decisions.

Table 12.6 Survey Statistics for Survey Question Number 24

Frequency Analysis

Answer  | 1 Can  | 2      | 3      | 4     | 5     | 6     | 7 Can Not | Total
Count   | 13     | 7      | 4      | 3     | 2     | 2     | 0         | 31
Percent | 41.94% | 22.58% | 12.90% | 9.68% | 6.45% | 6.45% | 0.00%     | 100%

Almost 42% ranked this question with a 1 for high agreement, which is a bit surprising, and 77% rated it 1 – 3 in agreement. A mean of 2.355 indicates that participants believed the web based system could bring dispersed groups of experts together and help expose disagreements and reach decisions. This is very important because it is at the heart of the study: exposing disagreement so that the experts will focus on the areas of disagreement, bypassing agreement and saving time.

Q20 - I am confident that a group __can/can’t______ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to reach decisions more quickly than they could have without such a system.

Table 12.7 Survey Statistics for Survey Question Number 20

Frequency Analysis

Answer  | 1 Can  | 2      | 3      | 4     | 5     | 6     | 7 Can Not | Total
Count   | 13     | 8      | 4      | 3     | 2     | 1     | 0         | 31
Percent | 41.94% | 25.81% | 12.90% | 9.68% | 6.45% | 3.23% | 0.00%     | 100%

Again, strong consensus was shown, with almost 42% of the participants giving this question a 1 for strong agreement; about 81% in total gave a 1 – 3 response, showing that the great majority agreed.

With a highly positive mean of 2.226, the response to this question supports RQ1 by indicating that a group can use a web based system, and that it can enable dispersed groups of knowledgeable individuals to reach decisions more quickly than they could have without such a system.

Using triangulation, all three approaches used for analyzing RQ1 (proof of concept, survey responses, and descriptive statistics) support an affirmative answer to RQ1: it is possible to create a web-based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions, expose disagreements, and reach decisions more quickly than they could have without such a system.

12.3.2 Research Question 2

The second research question has two parts, each of which will be analyzed separately using different methods. Descriptive statistics provide the means of the survey responses for the interpretation of results. Qualitative analysis through content coding of responses to open-ended survey questions is used to further analyze the data and ascertain the perceptions of the study participants.

These survey questions are based on the TAM model, addressing Usefulness and Ease of Use. TAM posits that if users perceive that a system will benefit them and help them perform their duties more efficiently, they will more than likely use it. Ease of use matters as well: if a system is found to be too complex, a user may reject it.

Part I

RQ2a: What features or functions are most useful for this objective?

Tools were developed for particular tasks, but also designed to work together seamlessly to satisfy the needs of the users. Four major task areas were identified and created:

1. A List Generation Tool

2. A Dynamic Voting Tool

3. Thurstone Scale Development with Uncertainty Measure

4. Discussion Forum

Descriptive statistics are used to understand the responses of the participants to survey questions pertaining to the usability of the tools. To help answer the second research question and determine if the tools provided were perceived as useful, a series of survey questions was created. These questions were measured using a 7-point semantic differential scale. The categories were divided as follows.

The interpretation of scale values is the same as presented in Table 12.1.

All of the questions' mean scores indicate that the participants found the tools useful.

Q32 – I consider the options list tool (extremely useful/not useful) had a mean of 2.7.

Table 12.8 Survey Statistics for Survey Question Number 32

Frequency Analysis

Answer  | 1 Extremely Useful | 2      | 3      | 4      | 5     | 6     | 7 Not Useful at All | Total
Count   | 8                  | 9      | 4      | 6      | 0     | 2     | 1                   | 30
Percent | 26.67%             | 30.00% | 13.33% | 20.00% | 0.00% | 6.67% | 3.33%               | 100%

Although 20% were neutral in their answer, the overall response here was positive: 70% of the participants rated it 1 – 3, showing that the great majority agreed, while 10% did not find the Options List Tool useful.

The participants liked having the ability to add items to the option list. One participant said, "There was a spot to add a category for a voter. This was good!" However, open-ended text feedback indicated that explanations of what each option means needed to be provided. For example, one person stated, "…what were the definitions of some of the choices! My definition could easily have been different than the intended definition. Some of the choices encompassed each other, intertwined."

Although there was an editing feature allowing items to be changed, no participant took advantage of it. This feature exists for just such occasions, so that ambiguities can be eliminated, and it will need to be explored more in the next design phase. The tutorial will demonstrate this feature, and users will be told specifically that it is available. This, along with a way to define options, will be implemented in the Delphi Decision Maker 2.0.

People were asked if they found the scale to be of any use.

Q31 – I found the scale to be (extremely useful/not useful)

Table 12.9 Survey Statistics for Survey Question Number 31

Frequency Analysis

Answer  | 1 Extremely Useful | 2      | 3     | 4      | 5     | 6      | 7 Not Useful at All | Total
Count   | 7                  | 11     | 2     | 6      | 1     | 3      | 0                   | 30
Percent | 23.33%             | 36.67% | 6.67% | 20.00% | 3.33% | 10.00% | 0.00%               | 100%

Coincidentally, 20% of the participants were again neutral. Overall, however, the group found the Scale feedback useful, as 66% were in agreement with a rating of 1 – 3. None strongly disagreed, and 10% weakly disagreed; perhaps this is the same 10% that found the Options List Tool not so useful.

A mean score of 2.733 indicates that people did find the scale useful, but further analysis through content coding identified areas needing improvement. Participants provided comments that supported the mean score: "It was easy to see what the group thought was important by numbers." But the same respondent stated that it was easier to tell what people were thinking from the discussion forum: "Better by comments. You could see/understand better why someone may have voted the way they did." The two features give the user different kinds of feedback with different uses: the scale shows where the group stands as one on the rank, while the discussion forum is the place for arguments, for supplying additional information, and for trying to persuade potential votes one way or the other.

Many comments asked for further clarification of what the scales meant: "More of an explanation of the results" and "many disparate variables to decision-making may make difficult to define uncertainty and provide participants with the plethora of variables that should be considered. This, of course, will affect informed decision-making and may be hindered if these are not fully communicated and easily understood in an online forum." This last comment raised a very important point: uncertainty is defined by the problem and cannot be generalized. In this system, uncertainty was calculated from the number of voters. Each vote is considered to hold weight, and there are only so many voters; this is how Congress works, where votes make the decisions and entire organizations exist whose goal is to win a vote. However, due to an unforeseen problem, many people's votes could not get through the logic check, so the uncertainty was very high, making use of such information in the system impossible. This really demonstrated that uncertainty should be a calculation that fits the problem and its environment.
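As an illustration of this voter-count notion of uncertainty, here is a minimal sketch. The formula is an assumption for exposition: the dissertation states that uncertainty was calculated from the number of voters, but the exact formula used by the system is not reproduced here.

# A minimal sketch of voter-based uncertainty: the fewer of the eligible
# voters who have actually voted, the less certain the group result.
# The linear formula below is an illustrative assumption only.
def uncertainty(votes_cast, eligible_voters):
    if eligible_voters == 0:
        return 1.0
    return 1.0 - (votes_cast / eligible_voters)

print(uncertainty(18, 51))   # ~0.65: high, since only 18 of 51 users voted

Participants were asked directly if they thought the voting tool was useful.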

Q27 – I consider the voting tool (extremely useful/not useful).

Table 12.10 Survey Statistics for Survey Question Number 27

Frequency Analysis

Answer  | 1 Extremely Useful | 2      | 3      | 4     | 5      | 6      | 7 Not Useful | Total
Count   | 10                 | 9      | 4      | 1     | 3      | 3      | 0            | 30
Percent | 33.33%             | 30.00% | 13.33% | 3.33% | 10.00% | 10.00% | 0.00%        | 100%

The Voting Tool was found extremely useful by a third of the participants, and 76% gave a rating of 1 – 3, indicating that the great majority found the tool useful; a mean score of 2.567 further supports this. Where the Scale and Options List each had 10% of the participants finding them not very useful, 20% found voting not useful. There were problems with the voting tool, and this may have biased perceptions of how useful voting really was. Also, because there was no real disagreement, people did not need to change their votes, which may have made them perceive the tool as less useful.

Q28 – I consider the Discussion Forum tool (extremely useful/not useful).

Table 12.11 Survey Statistics for Survey Question Number 28

Frequency Analysis

Answer  | 1 Extremely Useful | 2      | 3      | 4      | 5     | 6     | 7 Not Useful | Total
Count   | 12                 | 6      | 6      | 3      | 1     | 2     | 0            | 30
Percent | 40.00%             | 20.00% | 20.00% | 10.00% | 3.33% | 6.67% | 0.00%        | 100%

40% of the participants found the Discussion Forum extremely useful, and 80% provided at least a 1 – 3 rating, showing that the great majority found this tool useful. A mean score of 2.367 indicates that the participants found the forum very useful. Combining the means of the four tools (2.7, 2.733, 2.567, and 2.367) gives a mean of the means of 2.591, indicating that the tools were found useful as a set.

Further qualitative analysis through content coding confirmed these positive descriptive statistics. A participant commented, "Overall, I would find this a useful tool to use, Great Job!" Other statements supported this view: "I think that this is a very good tool and will have a lot of uses in various fields." Next, it needed to be determined whether the design of the system was understandable and easy to use.

Part II

The research question was asked:

RQ2b: What is required to make the design of the system understandable and useable?

Protocol analysis, qualitative analysis through content coding, and descriptive statistics are used to analyze data pertaining to this question. The means from the descriptive statistics are interpreted per question, because the scales used different semantic anchors and did not run in a consistent low-to-high sequence; this was done to avoid yea-saying.

This researcher modeled the system on another design that was very easy to use and had the same needs for human interaction and quick accessibility: Web2Py (www.web2py.com), the web framework in which the software system was also developed. Problems were found, and solutions are being implemented in version 2.0 of the software; these solutions can be found in the chapter on the Delphi Decision Maker 2.0.

As noted, the system was modeled after Web2Py, an easy to use, free and open source framework that manages a complex environment (e.g., database, code control, graphical user interface). Next, it needed to be determined whether, given the chance, people would want to use this system for prioritizing a list of items. So the question was posed:

Q9 – If given the chance, I would (use/not use) this system for prioritizing alternative solutions to a problem or situation.

Table 12.12 Survey Statistics for Survey Question Number 9

Frequency Analysis

Answer  | 1 Use  | 2      | 3      | 4      | 5     | 6     | 7 Not Use | Total
Count   | 8      | 8      | 5      | 3      | 2     | 1     | 3         | 30
Percent | 26.67% | 26.67% | 16.67% | 10.00% | 6.67% | 3.33% | 10.00%    | 100%

Frequency analysis shows that 70% of the participants, a clear majority, confirmed that they would use this system for ranking alternatives; 10% were neutral and 10% said they would not use it for such a task. There were problems with the system that were frustrating: as the number of items increased, voting became difficult and time consuming. Once modifications are made to the system, there is a better chance users will want to use it. A mean score of 2.933 suggests that people would use the system for this sort of task. A big issue to overcome was making sure the system was easy to use, so the following survey question was posed:

Q4 – The Delphi Decision Maker system as a whole was (easy/difficult) to use.

Table 12.13 Survey Statistics for Survey Question Number 4

Frequency Analysis

Answer  | 1 Easy | 2      | 3      | 4      | 5     | 6     | 7 Difficult | Total
Count   | 9      | 10     | 4      | 5      | 1     | 1     | 0           | 30
Percent | 30.00% | 33.33% | 13.33% | 16.67% | 3.33% | 3.33% | 0.00%       | 100%

30% of the participants found the system extremely easy to use, followed by 33% finding it easy to use. Combining the 1 – 3 scores from the frequency analysis shows that ~77% found the system as a whole easy to use. A rather large percentage, 16.67%, was neutral, but none found it extremely difficult. A mean score of 2.400 indicates that people did find the system easy to use. This result was further supported by qualitative analysis through content coding, with comments such as, "I thought the system was extremely easy to use" and "I thought the system was easy to use from the start. All someone has to do is pay attention." Although a system may be easy to use, this does not necessarily mean that the design is good, so it was asked:

Q5 – The design of the system was (good/bad).

Table 12.14 Survey Statistics for Survey Question Number 5

Frequency Analysis

Answer  | 1 Good | 2      | 3      | 4      | 5     | 6     | 7 Bad | Total
Count   | 11     | 6      | 7      | 6      | 1     | 0     | 0     | 31
Percent | 35.48% | 19.35% | 22.58% | 19.35% | 3.23% | 0.00% | 0.00% | 100%

Most participants found the design of the system good: combining the 1 – 3 ratings shows that over 77% thought so. A sizable 19.35% were neutral, which may be interpreted as 'not bad, not good', but only one person gave a weak rating indicating a poor design. Consistently through these questions, at least two users have seemed very unhappy with the system. A mean of 2.355 supports the conclusion that the participants found the design of the system good. The next question was asked with the rating reversed, such that 'high disagreement' is ranked 1 and 'high agreement' is ranked 7.

Q3 – The tools provided by The Delphi Decision Maker software were of (no use/great value).

Table 12.15 Survey Statistics for Survey Question Number 3

Frequency Analysis

Answer  | 1 No Use | 2     | 3      | 4      | 5      | 6      | 7 Great Value | Total
Count   | 1        | 0     | 5      | 8      | 9      | 5      | 3             | 31
Percent | 3.23%    | 0.00% | 16.13% | 25.81% | 29.03% | 16.13% | 9.68%         | 100%

Overall, it appears that most of the participants found the tools provided by the system to be of value: ~55% gave a rating of 5 – 7, which is positive for this reversed item. The frequency of the votes indicates that users found the system useful, but not as useful as the researcher had hoped. This could be due to the question wording and the study population. For this system to be appreciated and perform optimally, problems need to involve more stakeholder investment, where the outcomes carry important meaning and the users will debate the available options. That is when the tools will be leveraged for their combined ability to help crisis managers make real time decisions. This question will be asked again after version 2.0 of the software is launched and tested by crisis managers with a problem that is likely to produce disagreement. This question had a mean of 4.645, where great value was 7; reversed (8 - 4.645), this corresponds to 3.355, which is on the positive side of the scale's midpoint and indicates that the participants found the system of value.

Q6 – Voting was a (good/bad) way to reflect what I was thinking.

Table 12.16 Survey Statistics for Survey Question Number 6

Frequency Analysis

Answer  | 1 Good | 2      | 3      | 4      | 5     | 6     | 7 Bad | Total
Count   | 8      | 8      | 4      | 6      | 3     | 2     | 0     | 31
Percent | 25.81% | 25.81% | 12.90% | 19.35% | 9.68% | 6.45% | 0.00% | 100%

Positive results came from the participants: over 25% found voting a very good way to reflect what they were thinking. This is important because it is such an unusual way for people to accomplish this sort of task. Combining the 1 – 3 ratings gave 64.52%, confirming that the majority were in agreement. Over 19% were undecided and 16% weakly disagreed, but on a positive note, nobody strongly disagreed. A mean score of 2.806 indicates that voting was a good way to reflect what someone was thinking. This is a newer way for people to express their thinking, and this was a positive first step toward accepting a different way of doing things. However, it was not without concern. One participant commented, "Voting was an ok way to say how I was feeling." Another user was curious about the logic and erroneous entries and actually tested the ranking method. He commented, "When voting, I went down the line to see how it would configure conflicting information. It instantly gave me an error message stating there was a loop in my answer. (Glad to see there is a checks and balance on this ;)) But thinking of those that are haphazardly entering their vote, is there a way to program this to show where the "loop" is?"

Many comments supported this last request, and this particular problem is being addressed as part of the Version 2.0 Delphi Decision Maker. Suggestions were made to "Explain the loop issue, also have the system remember what you submitted in case you forget something" and to provide "A deeper explanation of how the results work, as well as explanations about what error messages mean." Both suggestions are being further explored in the next design phase.
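To illustrate the kind of logic check involved, and the participant's request to show where the loop is, here is a minimal sketch that detects a preference loop (cycle) in a set of pairwise votes using depth-first search. The vote data and function are hypothetical, not the system's actual implementation.

# A minimal sketch of a logic check over pairwise votes: find a cycle in the
# preference graph and report the items involved. Data are hypothetical.
def find_preference_loop(pairs):
    """pairs: list of (winner, loser) votes. Returns a cycle path or None."""
    graph = {}
    for winner, loser in pairs:
        graph.setdefault(winner, []).append(loser)

    def dfs(node, path, visiting):
        if node in visiting:                   # node already on this path: loop
            return path[path.index(node):] + [node]
        visiting.add(node)
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node], visiting)
            if cycle:
                return cycle
        visiting.discard(node)
        return None

    for start in graph:
        cycle = dfs(start, [], set())
        if cycle:
            return cycle
    return None

votes = [("Severe Weather", "Fire"), ("Fire", "H1N1"), ("H1N1", "Severe Weather")]
print(find_preference_loop(votes))
# ['Severe Weather', 'Fire', 'H1N1', 'Severe Weather']

Next, the participants were asked how they felt about having uncertainty represented.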

Q7 – Having the uncertainty represented was a (useful/not useful) function to have.

Table 12.17 Survey Statistics for Survey Question Number 7

Frequency Analysis

Answer  | 1 Useful | 2      | 3      | 4      | 5      | 6      | 7 Not at all useful | Total
Count   | 7        | 4      | 3      | 8      | 5      | 3      | 0                   | 30
Percent | 23.33%   | 13.33% | 10.00% | 26.67% | 16.67% | 10.00% | 0.00%               | 100%

Given that the uncertainty calculation provided to the participants was not of much use, it was surprising that these results came out as positive as they did. No one rated it 'Not at all useful', and the largest share of responses was the neutral rating of 4. This will be retested in the next version of the Delphi Decision Maker. 46% of the participants agreed that the uncertainty function was a good tool to have, while about 27% leaned toward finding it of little use. As stated before, particular tools are more likely to be perceived as beneficial when the problem environment actually calls for them; there is more on this in the conclusion.

Uncertainty was found somewhat useful, with a mean of 3.300. However, the problem given to the participants did not call for the use of uncertainty: everyone agreed on the top choice. This drove the researcher to look more closely at the other problems that arose during the study. Consequences attached to options and stakeholder investment would drive users to argue their opinions more, provided there is disagreement. It was further noted that uncertainty comes in many forms, some of which are relevant only to a particular problem type in a particular setting, and that uncertainty should be studied more in the future. Uncertainty could lie in the number of people who can vote but have not, in the probability that an event will occur, or in a population at risk where subpopulations are deemed more at risk than others, as in the ongoing H1N1 Pandemic Flu. To be an effective and useful tool, uncertainty needs to be problem type dependent at the very least, and requires much more research.

Q8 – I found the discussion forum a (necessary/unnecessary) function for the decision making.

Table 12.18 Survey Statistics for Survey Question Number 8

Frequency Analysis

Answer  | 1 Necessary | 2      | 3      | 4      | 5     | 6     | 7 Unnecessary | Total
Count   | 10          | 4      | 8      | 4      | 3     | 1     | 1             | 31
Percent | 32.26%      | 12.90% | 25.81% | 12.90% | 9.68% | 3.23% | 3.23%         | 100%

The great majority found the Discussion Tool useful even though there was no real argument. Frequency analysis shows that the largest share, 32.26%, found it highly necessary, and combining the 1 – 3 ratings produced a majority confirmation of 70.97%. Only 13% found it mildly unnecessary, and one participant found it totally unnecessary. Again, the researcher believes there will be stronger agreement in this area once the system is modified and then used with the correct group and problem type.

A mean score of 2.77 indicates that participants found the discussion forum necessary for decision making. Not many comments were made addressing this issue, but the ones that were made were markedly diverse.

Comments ranged from one end of the spectrum to the other. One person didn’t think forums were necessary by stating, “I think the discussion part could be left off. It is easy to get bogged down in all the comments. The questions are straight forward so, in my opinion, this is not a needed aspect of the program.” Another really ‘loved’ having the forum. They stated, “I loved the Discussion Forum! I guess the days of flip charts are gone forever.”

12.3.3 Research Question 3

The third research question refers more to the target market this system was designed for: crisis managers. The first study was used to determine fundamental issues concerning the system, its use and usability, and whether the tools did what they were designed to do. The second version will be tested on emergency-domain individuals as well as groups who work together in the emergency environment. Studies on the Delphi Decision Maker Version 2.0 will answer the two questions posed next:

RQ3: Will emergency management experts use such a system? What do they see as the advantages and disadvantages?

This will be answered in future research.

12.3.4 Research Question 4

Although the participants worked on a single problem of a serious nature, threat assessment, they were asked whether such a system could be used in other areas that require the generation of ideas and the prioritizing of a list of options. This also focused on the perceived usefulness of the system. The research question posed was:

RQ4: Can the system be used for planning and other tasks that would make it a regularly used system, eliminating need for special training for use in emergencies?

Descriptive statistics were used to answer this question. Semantic differential questions were created using a 7-point scale testing for usability and satisfaction. The interpretation of scale values is the same as presented in Table 12.1.

First, the participants were questioned about the functionality of the system and asked whether they would use it for ranking items specifically. With this in mind, the following survey question was asked:

Q9 - If given the chance, I would (use/not use) this system for prioritizing alternative solutions to a problem or situation.

Table 12.19 Survey Statistics for Survey Question Number 9

Frequency Analysis

Answer  | 1 Use  | 2      | 3      | 4      | 5     | 6     | 7 Not Use | Total
Count   | 8      | 8      | 5      | 3      | 2     | 1     | 3         | 30
Percent | 26.67% | 26.67% | 16.67% | 10.00% | 6.67% | 3.33% | 10.00%    | 100%

A majority of 70% agreed that they would use this system for ranking under other conditions. This is very positive because if people will use the system during non-emergency times, they will be familiar with it and better able to use it during emergency situations (Turoff et al., 2004). However, 10% stated that they would not use this system for ranking options, another 10% were doubtful, and 10% were neutral. A mean score of 2.933 suggests that, given the chance, people would use this system to prioritize alternative options for a problem, indicating that the participants found the system useful.

Next, there was a need to know if users would find this system good for other types of problems where ranking alternatives is required among a group. This is important because the system will be more successful for its intended purpose if the users are familiar with it: if they use it on a day-to-day basis for similar task types, they will know how to use it and can take full advantage of it during emergent times (Turoff et al., 2004). It was asked:

Q10 - This system is (good/not good) to use for other problems where ranking alternatives is desired among a group.

Table 12.20 Survey Statistics for Survey Question Number 10

Frequency Analysis

Answer  | 1 Good | 2      | 3      | 4      | 5     | 6     | 7 Not Good | Total
Count   | 14     | 4      | 7      | 5      | 1     | 0     | 0          | 31
Percent | 45.16% | 12.90% | 22.58% | 16.13% | 3.23% | 0.00% | 0.00%      | 100%

A very high percentage agreed that this system would be good to use for other ranking types of problems. Combining the 1 – 3 ratings gave a high majority of 80% of the participants saying that they would use the system for ranking alternatives, and a mean score of 2.194 indicates that the system is seen as valuable where ranking alternatives is desired among a group. This result supports the conjecture that the users found the system very useful. Last, it needed to be determined whether the participants were satisfied using the system for the task. Participants were asked to rate the following semantic differential question:

Q11 - I am (very/not at all) satisfied with using this system to support a large group generating and prioritizing a list of alternatives.

Table 12.21 Survey Statistics for Survey Question Number 11

Frequency Analysis

Answer  | 1 Very | 2      | 3      | 4     | 5      | 6     | 7 Not at all | Total
Count   | 11     | 7      | 6      | 1     | 3      | 0     | 1            | 29
Percent | 37.93% | 24.14% | 20.69% | 3.45% | 10.34% | 0.00% | 3.45%        | 100%

Again, the largest percentage of ratings was a 1, with almost 38% of the participants stating that they were very satisfied using the system to rank items in a large group. Frequency analysis shows that, combining the 1 – 3 ratings, a high percentage of ~83% were satisfied. A mean score of 2.379 indicates that the participants were very satisfied using the system to support a large group in generating and prioritizing a list of options. A mean of the means for questions Q9, Q10, and Q11 was calculated: (2.933 + 2.194 + 2.379) / 3 ≈ 2.502. With this, there was support to answer yes to RQ4: the system can be used for planning and other tasks that would make it a regularly used system, eliminating the need for special training for use in emergencies. This question will be asked again in studies conducted on the next version of the software, where crisis managers will make up the subject population.

12.3.5 Research Question 5

Other mechanisms should be implemented in the system to help balance the voting in recognition of expertise. Weights could be given to experts, or the opposite: participants could 'defer to expertise' by lowering their own weight.

RQ5: How should votes be weighted to reflect their importance over time in a prolonged reoccurring problem?

This question will be answered in future research, as no such feature was attempted this early in the development process. An expert domain module needs to be developed, and this researcher offers a design for future development. Weights will have to be examined carefully, and rating systems, both by oneself and by others, should be explored. This issue matters because plain voting would not be beneficial when only one or two true experts in the group are qualified to answer a particular question with valid options. This is a very important and interesting topic, and considerations are being made for the future development of such a module within the Sahana Disaster Management System.
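As a sketch of how such weighting might look, the following illustrates an expertise-weighted tally. The weights, names, and tallying rule are purely hypothetical, since no such module exists in this version of the system.

# A minimal sketch of the expertise-weighted voting idea discussed above.
# All data and the weighting rule are hypothetical illustrations.
from collections import defaultdict

def weighted_tally(votes, weights):
    """votes: list of (voter, option); weights: voter -> expertise weight."""
    totals = defaultdict(float)
    for voter, option in votes:
        totals[option] += weights.get(voter, 1.0)   # default weight of 1
    return dict(totals)

votes = [("alice", "Severe Weather"), ("bob", "H1N1"), ("carol", "Severe Weather")]
weights = {"alice": 2.0,    # hypothetical domain expert
           "bob": 1.0,
           "carol": 0.5}    # 'defers to expertise' by lowering her own weight
print(weighted_tally(votes, weights))   # {'Severe Weather': 2.5, 'H1N1': 1.0}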

12.3.6 Research Question 6

This research question was addressed using qualitative analysis through content coding and descriptive statistics, along with proof of concept. The changes in rank were analyzed along with the number of postings in each time frame. Content analysis was used to determine whether users increased their discussion of particular proposed solutions. This research question asked:

RQ6: Can voting be used to determine the level of interest (activity) of a given topic?

In this field study, voting could not be used to determine the level of interest in any given topic. Analysis suggests this was mainly because there was full agreement on the problem: the participants were asked to conduct a threat assessment of potential dangers to a university campus, and Severe Weather was the top selection throughout the entire experiment, as the scale results demonstrated throughout the study.

Qualitative analysis using content coding was applied. Of the 30 comments made in the forum discussion, only confirmations of agreement were made, indicating that all found 'Severe Weather' to be the top threat. Since this did not spawn any debate or discussion, no further determination could be made that the activity level of a topic can be indicated by the level of voting: if a user's opinion agrees with the others, there is no change in the vote. Further, just because someone agrees with the group and changes no votes does not mean that the person holds no interest in the topic. Therefore, this study indicates that it is not always possible to use voting to determine the level of interest in a topic; further study is required to ascertain those cases, if any, for which voting can serve as a proxy for user interest. One discussion did ensue that was unrelated to the top votes or to any vote changes: a topic of general public interest. This was confirmed in a statement made by a participant who is also a police officer. The debate was whether or not a single shooter should be considered homicide. She titled her topic 'sensationalism in the news' and stated, "Some events do not seem to be as prevalent but get more attention in the media because they are sensationalized, like shootings and homicides. Severe weather is more likely to take place in most geographical areas." Her statement backs up the reasoning for the discussion activity: it is a topic of interest on any campus.

12.3.7 Research Question 7

An important and novel part of the system is how users' opinions are represented. Voting on paired comparisons is a new way for people to express what they are thinking; people are not accustomed to using paired comparisons to produce a ranked list of items. It was important to determine whether people would use the methods implemented to accomplish this goal. The research question asked was:

RQ7: Will experts use voting to reflect their opinion changes over time?

To answer this question, three survey questions were created and posed. First, did people use voting at all? This was important because, if they were not using the voting tool at all, they certainly were not using it to reflect changes in their opinions. The first question asked was:

Q12 - I voted on items. Yes/No

94% of the participants surveyed responded that they did vote on items. Next, since voting was forced on all available pairs, it needed to be determined whether participants went through all of the available pairs. Participants were asked:

Q13 - When voting, I voted on all of the items available? Yes/No

83% of participants surveyed responded that they did vote on all pairs, meaning they used the voting tool as intended. This is important to know because it directly affects the final ranked list of options and, further, determines whether the scale reflects the users' selections. It is very important to note that of the 36 participants in the field study, almost half could not complete a vote due to a logic check that validated the consistency of the user's sequence of answers. So, participants voted, but may not have been able to vote again to reflect changes in their opinions. This problem will be fixed in the next version of the system. The last survey question addressed the research question directly:

Q14 - I used voting to reflect any change of opinion I had. Yes/No

Although the participants agreed that Severe Weather posed the greatest threat to campus, H1N1 Pandemic Flu was only added to the Option List halfway through the experiment. Once this item was brought to everyone's attention, it began to be selected over others and rose up the scale, but not over Severe Weather.

Given that 66% of the participants used voting to reflect their opinions, and from observing how H1N1 rose near the top of the scale after its introduction, it is believed that experts will use voting to reflect changes in their option lists. Hence, this further supports that voting can be used to reflect an expert's opinion.

12.3.8 Research Question 8

This question assumes that there is disagreement amongst group members about the problem. However, no disagreement occurred in this study.

RQ8: Will the discussion from disagreements lead to more new options being proposed?

This question could not be answered using the data from this study, as there was no disagreement. It will be asked again in future research, where the problems posed will lend themselves more to disagreement. For example, if competing stakeholders desire the same resources, disagreements are more likely to occur given their vested interests. In the Campus Threat Assessment problem, however, all agreed that 'Severe Weather' was the greatest threat, so there was no disagreement.

12.3.9 Research Question 9

A very important part of future research will be focused on answering the following research question:

RQ9: How can voting be used to aid experts in ‘muddling through’ an initial set of items to create a subset which the experts determine to be the most important items for the group to work with?

This feature was not implemented in this version but will be a very important part of the Delphi Decision Maker Version 2.0. If anything, the problems encountered as the list of options grew indicate the importance of people being able to select subgroups of items they find of particular interest.

12.3.10 Research Question 10

Feedback is very important to the group, and the scale was provided as a visual interpretation of the data from a single group calculation. The following research question was posed:

RQ10: Will using a visual scale based on Thurstone’s Law of Comparative Judgment help the expert make a more informed or confident judgment?

Uncertainty, to be analyzed further after this study, can help decision makers on particular types of questions, but the types of information that would decrease uncertainty are problem dependent. For example, forecasting future events involves a set of probabilities for various considerations. The problem used in this study did not really fit the need for the scale to be used under disagreement or uncertainty; it simply served as an indicator to let the participants know the group decision, with which they agreed. The descriptive statistics support that using Thurstone's scale does help the user make a more informed judgment. This will be asked again in future studies where the problem will exercise this functionality and where it may be perceived as a more useful tool. The following survey questions were asked:

Q22 - I think that my decision was (more/less) informed because of the knowledge provided by knowing the various outcomes given the uncertainty.

Table 12.22 Survey Statistics for Survey Question Number 22

Frequency Analysis

Answer     1 More    2         3         4         5        6        7 Less    Total
Count      6         4         7         8         1        1        3         30
Percent    20.00%    13.33%    23.33%    26.67%    3.33%    3.33%    10.00%    100%

Many participants (26.67%) were neutral on this, as the uncertainty really did not provide much information. Given the problem with voting, the researcher was surprised at how much positive feedback this received. Combining the positive ratings of 1 – 3 showed that 56.66% of the participants thought their decisions were more informed. Combining the negative ratings of 5 – 7 gave 16.66%, which is not bad given the problems encountered. However, when vote counts carry weight on a decision, this feature could be more useful. Also, the information will need to have a meaningful interpretation for the end user. These issues will be taken up in the Delphi Decision Maker, Version 2.0. A mean score of 3.3 is interpreted as positive, although the frequency analysis provides better insight into the outcomes. Next, there was a need to know whether users utilized the scale for its intended purpose.

Q25 - I (frequently/never) used the scale for feedback.

Table 12.23 Survey Statistics for Survey Question Number 25

Frequency Analysis

Answer     1 Frequently   2        3         4         5         6        7 Never   Total
Count      7              2        5         6         7         3        1         31
Percent    22.58%         6.45%    16.13%    19.35%    22.58%    9.68%    3.23%     100%

The majority of the participants frequently used the scale for feedback, and the most common single rating was 1. Combining the 1 – 3 frequency ratings, 45.16% of the group members used it frequently. One participant reported never using the scale, which calls that participant's other responses into question: how could somebody use this system and not use the scale? However, it cannot be ignored that 32% of the participants did not use the scale much for feedback. Again, this could be attributed to the problem and population; there may not have been much interest in the outcomes after a while, since most agreed and the scale order remained the same. The group mean on this question was 3.5. This question will be asked again of the users of the next version. It is possible that a stakeholder with more vested in a problem and its outcome will use the scale more to see whether the group agrees with them. Just as a football fan watches the score when two great teams are tied at halftime, it is believed a stakeholder will watch the scale during the decision making process when interest is high. Next, it needed to be determined whether the uncertainty scale had any influence on the decision maker's behavior. The answers to the following question were reversed, rated negative at 1 and positive at 7.

Q34 - The information on uncertainty provided by the scale made (no/a big) difference in my decision.

Table 12.24 Survey Statistics for Survey Question Number 34

Frequency Analysis

Answer     1 No      2         3         4         5        6        7 A Big   Total
Count      10        5         6         5         2        1        0         29
Percent    34.48%    17.24%    20.69%    17.24%    6.90%    3.45%    0.00%     100%

There was a strong negative reply to this question: the uncertainty did not do much to influence a participant's opinion. This outcome is not surprising, given that the uncertainty affected only the possible outcome, and the outcome did not affect the participants one way or the other. They had no vested interest in the outcome because there was no consequence whether or not the group agreed with them as individuals. Also, this question was subjective, and as such, uncertainty did not provide any additional insight or understanding. An uncertainty calculation based on those who had not yet voted did not meet the needs of the problem; if anything, the uncertainty lay in the lack of general knowledge of the geographic area. Although the mean was 2.552, once reversed it was 4.45, which is interpreted as neutral. The interpretation is that the users possibly did not know what to make of the information. However, this outcome is neither positive nor negative, but still in question. This will be asked again in the next study, planned for the Delphi Decision Maker 2.0. Next, the subjects were questioned about the information provided by the scales and how it added confidence to their decision making.

Q35 - I was (very/not) confident in the decisions I made due to the information provided on the scales.

Table 12.25 Survey Statistics for Survey Question Number 35

Frequency Analysis

Answer     1 Very    2         3         4         5         6        7 Not at all   Total
Count      9         7         4         4         3         2        1              30
Percent    30.00%    23.33%    13.33%    13.33%    10.00%    6.67%    3.33%          100%

Participants were surveyed, and the responses to Q22, Q25, Q34, and Q35 provided a mean of the means of 3.0495. This indicates an affirmative answer to RQ10: using a visual scale based on Thurstone's Law of Comparative Judgment will help the expert make a more informed or confident judgment.

12.4 Conclusion

As a result of this study and the analysis of the data it provided, the overarching question can be answered: not only is it possible to create a web-based system that enables dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions, expose disagreements, and reach decisions more quickly than they could without such a system, but such a system was found to be useful and likely to be used in the future. Possible problem areas were known going into the studies, but further insight into these and other problem areas was brought to light, and modifications will be made to better fit the system for its intended use: helping crisis managers optimize their time during decision making tasks.

One major area that will require more attention is the need to help experts with uncertainty when it exists. Uncertainty can exist in many forms, and these forms are problem dependent. Further exploration of matching uncertainty calculations to the particular needs of a group, given its decision making task, is required. This system is built to be leveraged during response efforts when information is scarce but decisions must be made under time critical constraints. Decisions must be made whether or not all of the information is in and whether or not all of the group members required to make the decision are present. The latter was tested, and modifications will be made so that group permissions are instantiated. However, further research is required to further aid experts. Dalkey's studies support the idea that a group of experts working together can, through their collective insight, fill information gaps useful for problem solving and decision making. This will have to be explored further once experts are using the system and methods are created to test such situations. Dalkey's studies were based on forecasting, where the answers were known in advance. This differs greatly from the decision making tasks of emergency management in extreme events, where the problems are wicked and there is no right or wrong answer, only better or worse. One of the major conclusions, in this researcher's opinion, is that the emergency domain is no trivial problem area: there are no easy answers where uncertainty can be generalized and calculated. The problems are wide and varied, so there is no single problem type and therefore no single uncertainty calculation; the calculation must be matched to the task, the environment, and its rules.

CHAPTER 13

THE CONTRIBUTIONS AND LIMITATIONS OF THIS WORK

13.1 Contributions

The major contribution of this work is the development of a system that can potentially change the way decisions are made in emergent and extreme events. No existing system organizes the material efficiently enough to support the decision making needs of crisis managers and experts. This decision making tool has been offered as part of a free and open source disaster management system that is used globally and is deployed during most major extreme events occurring outside the United States. The potential for this decision making module to be leveraged further is great, and it will hopefully have a strong influence on how decisions are made wherever a group ranks a list of alternative solutions. However, this is only one of many uses of the tool. Under Design Science methodology, creating a working system that has been used successfully is itself a contribution.

One of the major contributions of this work was to identify that uncertainty calculations are important and help the user make a more informed decision. However, one of the major findings of this endeavor was that even when ranking a list of solutions and options, the uncertainty cannot be identified until the problem is known. A series of tests will be conducted as future research to identify and develop a set of possible uncertainty calculations from which users can select. If various ways to calculate uncertainty are provided, users can then match the type of uncertainty calculation(s) that best fits their needs given the particulars of their situation.

This thesis effort explored the use of uncertainty and established a mathematical heuristic for calculating a new type of uncertainty measure based upon those who have not voted yet but may still do so as a result of the discussion and the emergence of new information. This calculation also took into consideration the addition of new items and how they were entered into the calculation and ranked. This made the voting take into account not only who was voting and when, but also when new items were entered and ranked.

Based upon the above mathematical process, a new computer-mediated communication process has been designed, appropriately titled "The Delphi Decision Maker." All the phases or rounds of a Delphi process become one iterative round in which each participant may propose new options, vote on existing options, change their vote, or comment on any option at any time. There are no restrictions on when such actions occur. The process terminates when the subjects no longer need the tools, either because the problem no longer exists or because some form of consensus has been reached.

The hypothesis that voting in such a dynamic process becomes the measure of disagreement that causes discussion to focus on the options in dispute can be quantified. It can be proven or disproven by the collected voting data synchronized with the data on discussion: results can be tied to options exhibiting disagreement and then compared to those options exhibiting consensus.

By conducting a prototype field trial with students majoring in emergency management, there was an opportunity to obtain feedback on how useful they felt it was as a learning exercise in their field. Since some have work experience, expressed in years of experience, inferences can be made about the potential of this method for learning and training. This field trial should serve to provide insight into any major usability problems encountered beyond the prior individual protocol studies.

The second prototype is currently being developed and will be studied with a group of volunteers from the emergency preparedness and management community, who will report their years of experience and who will be solicited through current mailing lists and contacts in the community. They will be asked to deal with a real planning problem, such as proposing and evaluating mitigation options for a particular future disaster in a specific location. Based upon their experience, they will be surveyed on their reactions to the system and specifically asked to list the emergency management issues and situations to which they feel this system can be applied.

13.2 Limitations

The greatest limitation is that the system cannot be tested in a real-time environment to determine its feasibility for this mode of operation. Hopefully it has many other applications in emergency management that will make it useful and, in the course of events, give those familiar with it the opportunity to choose to use it in that environment. The second limitation is that a practice exercise is needed to ensure that everyone in the group comes to understand the meaning of the scaling results and how their actions can influence the results and the uncertainties involved. Too many problems could occur in a true emergency, and the system would have to be further tested for its robustness and its ability to satisfy all the needs of such an event. Another limitation is that this system was tested on a collection of individual trial users and not on a single group with a common objective and mission. Much of the Version 2.0 research effort will be oriented toward using real problem solving groups.

CHAPTER 14

SUMMARY AND DISCUSSION

14.1 Introduction

The objective of this chapter is to provide a summary of this research effort. A software system was designed and developed to support large groups of crisis experts making decisions in time critical extreme events. The overarching question that drove this research effort was: Is it possible to create a web-based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions, expose disagreements, and reach decisions more quickly than they could have without such a system?

Built as a module for the existing Sahana Disaster Management System, this addition, later named The Delphi Decision Maker, was created and implemented. A series of tests was run on the prototype, including individuals as system testers, a smaller group used as a pilot study, and a larger set of participants in a field study. Analysis was conducted by triangulation: proof of concept, quantitative analysis (simple statistics) of the surveys, and qualitative (descriptive) statistics from content coding of the forum discussions. Reliability measures were used to test the survey questions. A prototype of the system was built and implemented, thus satisfying proof of concept. Results from the participants taking the post-study questionnaire also provided positive support for the overarching question. Further, comments from the participants provided more positive feedback that it was possible to create such a system. Hence, there is support for the main research question.

In order for this system to be successful, in that crisis experts would want to use it, other research questions asked: What features or functions would be most useful for this objective? And what would be required to make the design of the system understandable and usable? A basic set of four tools {Voting, List Creation, Feedback Scale, Discussion Forum} was created to help guide a group through the decision making process. The design of the system mimicked that of another software program, Web2Py, whose design was considered easy to use. There was positive support for these research questions. Participants were asked whether they found the tools useful, both individually and together, and those surveyed confirmed that the tools were useful. Research findings also supported that the design was easy to use. However, in both areas there was room for improvement.

In emergency management, especially given time critical situations, it is best if an emergency management information system is used for day-to-day tasks, or at least on regular occasions. If people frequently use a system, they will be more apt to use it during an emergency, where duress and other factors may place undue pressure on users and their ability to make effective decisions (Turoff et al., 2004). Participants were tested to see whether they perceived that the system and the function it served, generating and creating a ranked list, could be useful in general. The following question was posed: Can the system be used for planning and other tasks that would make it a regularly used system, eliminating the need for special training for use in emergencies? Survey questions were created to test this; reliability measures confirmed that the set did in fact test the same thing. Participants were surveyed, providing positive feedback and statistical results supporting that this system could be used for other, non-emergency tasks. Further, through content coding, various comments supported these questions with agreement.

Thurstone's Law of Comparative Judgment influenced the design of the system. A scale based on this method provides more information than a simple ranked list, in which items are spaced equally. Thurstone's method shows how much closer an item A is to an item B than that same item A is to another item C; this gives the user more information because it shows how much more one item is desired over another. Participants were tested to see whether they perceived this as additional information that helped them make better decisions. The research question posed was: Will using a visual scale based on Thurstone's Law of Comparative Judgment help the expert make a more informed or confident judgment? Participants were surveyed using a set of questions designed to measure this. Reliability measures confirmed that all questions were asking the same thing. The results found that participants did believe they were making more informed or confident decisions.
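To make the scaling concrete, the following is a minimal sketch of the classic Thurstone Case V computation, not this system's exact implementation: each item's scale value is estimated as the mean of the inverse-normal transforms of the proportions of voters preferring it in each pairing. The proportions below are invented for illustration.

    from statistics import NormalDist

    def thurstone_scale(prop):
        """prop[i][j] = proportion of voters preferring item i over item j.
        Returns one interval-scale value per item (Case V estimate)."""
        z = NormalDist().inv_cdf  # standard normal quantile function
        n = len(prop)
        scores = []
        for i in range(n):
            # Clamp proportions away from 0 and 1 so inv_cdf stays finite.
            zs = [z(min(max(prop[i][j], 0.01), 0.99))
                  for j in range(n) if j != i]
            scores.append(sum(zs) / len(zs))
        return scores

    # Illustrative data: 90% prefer A over B, 80% prefer A over C,
    # and 60% prefer B over C.
    p = [[0.5, 0.9, 0.8],
         [0.1, 0.5, 0.6],
         [0.2, 0.4, 0.5]]
    print(thurstone_scale(p))  # A sits well above B; B only slightly above C

Unlike a plain rank order, the resulting values preserve the distances between items, which is exactly the extra information the visual scale displays.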

Thurstone's Law of Comparative Judgment also influenced the design of how people rank a list of items: by paired comparisons. Participant behavior was used to answer the next research question: Will experts use voting to reflect their opinion changes over time? A set of questions was designed to answer this. First, it needed to be known whether the user used the voting tool at all. Voting by paired comparisons is not a natural or familiar way for people to create ranked lists, so it was important to determine whether users would accept this form of ranking. Most participants voted and, further, used voting to reflect any changes of opinion they had.

However, as described at length in prior chapters, this feature was not properly tested, as there was a high level of agreement from the start in the main trial. Further, it was determined that the task type and stakeholder involvement would influence the usefulness of such tools. If a user has a vested interest in the outcome of the ranking, where its results carry real repercussions, the user will attend more closely to the information provided because of that interest and the ramifications of the end results.

This is the basis behind the model presented next, which shows how the tools may be influenced by task type, group proximity, and group size.

14.2 Limitations

A limitation of the study was that a group of students was used to test the system. Second only to the hope of winning $100, extra credit was the motivator for many, as they were in one of the researcher's classes. The Delphi Decision Maker was designed for a particular group of people: experts. Not being able to use crisis experts was a limitation, and the problem task did not meet the demands the system was built to address. The system was also developed for time-critical situations, an element that cannot be tested given the guidelines for research on human subjects. So the toolkit presented was tested at a basic level, which served other purposes but did not test the system for what it was developed to tackle.

14.3 Contributions Summary

The primary contribution of this work is that a free and open source module has been developed and implemented to meet the unique characteristics of extreme events and to support the decision making needs of the crisis experts trying to manage them. The system was found to be useful for day-to-day tasks of the same nature, which would keep users familiar with it and thus more successful at using it during time critical, emergent situations. In addition, the system can be used as a learning exercise focused on other situations. Another major contribution is the development of a modified version of Thurstone's Law of Comparative Judgment to calculate uncertainty with missing data. This gives users further information from which their insight and intuition may be leveraged, decreasing gaps where no information or knowledge exists. This will be implemented in the next version, which is described next.

14.4 Future Research Plans

This research effort is guided by the seven guidelines of Design Science according to Hevner, March, Park, and Ram (2004). The sixth guideline is that the software should be tested to determine whether it satisfies the needs of the organization, and modified after testing to better fit the characteristics of the group for which it is designed. Presented next is a model developed to guide the next iteration of studies. Modifications to the system are listed, and the functional requirements for the Delphi Decision Maker, Version 2.0 are described.

14.4.1 Hypothesized Model of Relationships amongst Constructs

Figure 14.1 Model of constructs.

The constructs that underlie the evaluation effort are: Usefulness, Ease of Use, Confidence in the Results, Satisfaction, Usage, and the major functions of the Discussion Forum, the linear Thurstone scale, and the Voting process. The constructs were created to measure the system against the research questions. The tools were measured as constructs and were found to be useful. However, there is a direct relation between a basic set of group support tools and the context in which they are used that influences the Usefulness of this system: it is influenced by the task the group is working on and by the group's proximity and size. These characteristics will be investigated further, as testing continues, for their positive or negative influences on the usage of the system. Another influence on the construct Usefulness is the accuracy the experts find in the information provided by the Scale and Voting tools. This is very important, as the expert needs to be confident that those tools provide an accurate reflection of what they and the group are thinking. Participants found both that using paired comparisons for voting was an accurate way to reflect what they were thinking and that the scale provided an accurate reflection of the result. The users' confidence in these results increased the chances of the system being used.

A system that is easy to use is more likely to be used. This system was found easy to use: users were able to accomplish tasks with minimal effort. This, too, makes the system more likely to be used. This model will be tested in the next round of studies, after the next version (2.0) is developed and implemented.

14.4.2 Major Problem and Design Areas Identified

The system was found to work well when the number of items in the Option List was low; there were no complications for up to six items. As the number of items on a list grew, problems surfaced and were linked to one another. There was a direct correlation between the number of items and problems with voting, because a logic check in the paired comparisons ensures there are no cyclic triads: cases where some option A is selected over B, then B is selected over C, but then C is selected over A. Such a logic check does not appear anywhere in the prior literature and is being explored as a means to help experts maintain consistency of judgment during the trying times of crisis management. In the studies, users found it very difficult to be consistent. This was not a stress problem but was interpreted as simply having too many items under consideration to remember the sequence. This will be addressed in two ways: first, users will be able to select a subgroup of items; second, the items that fail the logic check will be brought before the user, who will have the option to clarify further or submit as initially selected. In Version 1.0 of the Delphi Decision Maker, users were forced to compare all items. Given that the number of comparisons is N(N-1)/2, the number of comparisons becomes unreasonable even for lists as small as five or more items.
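As an illustration of the consistency problem, a minimal sketch of a cyclic-triad check follows; the data structure is hypothetical, not the system's actual representation. It scans every triple of items for a preference cycle in either orientation.

    from itertools import combinations

    def _is_cycle(x, y, z, beats):
        # True when x beats y, y beats z, and z beats x.
        return beats.get((x, y)) and beats.get((y, z)) and beats.get((z, x))

    def cyclic_triads(items, beats):
        """beats[(a, b)] is True when the voter picked a over b.
        Returns every triad that violates transitivity."""
        return [(a, b, c) for a, b, c in combinations(items, 3)
                if _is_cycle(a, b, c, beats) or _is_cycle(a, c, b, beats)]

    # A over B, B over C, but C over A -- an intransitive vote.
    votes = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
    print(cyclic_triads(["A", "B", "C"], votes))  # [('A', 'B', 'C')]

With forced voting on all N(N-1)/2 pairs, 5 items already require 10 comparisons and 10 items require 45, which is why remembering one's own earlier choices became so difficult.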

Since people could not get through the logic check, their votes could not go through. And since uncertainty was based on members who had not yet voted, this problem snowballed: as votes failed to go through, the uncertainty grew to the point where its range equaled the results of those who had already voted. This rendered the uncertainty useless; it was barely useful initially and will need major analysis, as discussed in the conclusion of the chapter on the Field Study.

Transaction Log

For further studies, it was determined that a variety of transactions needs to be gathered and maintained. This will help researchers determine which tools or actions trigger other tool usage or actions. A transaction log will support the study of how the system is used, by which users, and why. This can help determine whether new items are created when disagreement is high, and reveal other relationships within the available toolset. Each 'Active Problem' will need its own transaction log that records only the actions of the members of that particular group. Particular information needs to be recorded:

    • When someone votes - for each item - (not a change in vote but just if it's voted on initially)

    • When someone changes a vote (for Each item available)

    • When someone posts or replies to a discussion

    • When someone adds an item in the Options List

    • When someone views the Scale

    • When someone views the Option List

    • And time-date for each user,

      • Initial vote on an item

      • Change in vote

      • Post/Reply

      • Add item

      • View item list and range or item number

All integer fields need sorting ability so that analysis is easier and more structured. The log should also be available in a format that can easily be imported into statistical software and spreadsheet programs, i.e., a simple table. To maintain anonymity of personal data, each instance can be identified by a primary key.

Monitoring functions such as group formulation, edits of material, the addition of new members, etc., need to be part of the transaction log, as do other roles that can make some contribution.
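A minimal sketch of what one log record might look like is given below; the field names are illustrative, not a specification of the Sahana schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class LogEntry:
        """One row of a per-problem transaction log."""
        problem_id: int            # the 'Active Problem' the group is working on
        user_key: int              # anonymized primary key, never personal data
        action: str                # e.g., 'vote', 'vote_change', 'post',
                                   # 'add_item', 'view_scale', 'view_option_list'
        item_id: Optional[int] = None  # the option involved, if any
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    log = []
    log.append(LogEntry(problem_id=1, user_key=42, action="vote", item_id=7))

Flat records like this export directly to a simple table for statistical software, as called for above.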

Model

A model of the system is presented next. It shows some preliminary views on how these tools may be used together. One of the underlying features of The Delphi Decision Maker is that the scale will be used as feedback to help groups focus on areas of disagreement, thus saving them time. By saving data in a transaction log, the information needed to discover such patterns of use will be available.

Figure 14.2 Model of possible tool use.

This model describes one possible way in which quantification (the scale) could be used to help focus groups on areas of disagreement. However, until a transaction log is created, such patterns of use cannot be confirmed.

Miscellaneous Usability Changes

These changes are small but make a big difference. They will be implemented in Version 2.0.

1. The bottom of the discussion screen needs a 'go to top' link AND options to vote, view the scale, etc. (the same menu as at the top, duplicated at the bottom to make access easier and quicker).

2. After a user votes and clicks Submit, a confirmation that the vote went through needs to be shown to the user, along with a temporary visual scale of their results.

Item Selection for Option List Creation

This tool requires modification so that larger lists of Options can be managed effectively and efficiently. Letting a user select a subset of a larger list will reduce the time it takes the crisis manager to create a ranked list using voting to reflect what the expert is thinking. This was in the original Version 1.0 proposal but, due to lack of time, could not be implemented. This design would be a good fit for the task. This sort of item selection is already used in other software systems such as Moodle and SAS, Statistical Analysis Software.

Figure 14.3 Item Selection of Sub-List

This 'muddling through' is supported in the research as making decision making more efficient and effective by allowing experts the flexibility to select only what they want to consider. Version 1.0 had a forced vote where all pairs of items had to be voted on; otherwise, an error would occur and the vote would not go through. This was problematic, as there were too many paired comparisons, which caused further problems in getting through the logic check. That will be described later, with solutions posed.

Items will be more dynamic and informative. The modifications for this will be:

· When a User adds an item to the Option List, they can give a description to define or explain it better. This is to lessen ambiguity between individuals and the options they pose.

· When a User adds an item to the Option List, a corresponding Discussion Forum will be created automatically. This Forum will automatically be given the same name as the Option item created.

· The version 1.0 Discussion Forum in place will be removed.

Voting

This tool gave this research effort the most trouble and needs modification if the system is to be used successfully in the real world. Having so many voting pairs caused great problems, given the logic check in place for cyclic triads. The researcher has worked to modify the system to reduce the number of paired comparisons in two ways. First, allowing smaller sub-lists to be selected for ranking will reduce the number of items to compare. Second, each item will be paired with another item such that the fewest required comparisons are made. On a user's initial vote, for example, if there are four items A, B, C and D, the following comparisons will be made: (A,B), (B,C), (C,D), (D,A). From this information, transitivity will be used to fill the remaining cells. Next, if an item E is added to the list, a binary search algorithm will be used to place the new item into the existing ranked list, if and only if the user wants to accept the item for consideration; otherwise, the user can ignore the suggested item altogether.
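A minimal sketch of the binary-search placement described above follows; `prefers` stands in for the paired-comparison question put to the user and is an illustrative interface, not the system's actual one.

    def insert_by_binary_search(ranked, new_item, prefers):
        """Place new_item into an already ranked list (best first) using
        O(log N) paired comparisons instead of N. prefers(a, b) returns
        True when the user picks a over b."""
        lo, hi = 0, len(ranked)
        while lo < hi:
            mid = (lo + hi) // 2
            if prefers(new_item, ranked[mid]):
                hi = mid          # new_item outranks the middle item
            else:
                lo = mid + 1
        ranked.insert(lo, new_item)
        return ranked

    # A fixed ordering stands in for the user's answers.
    order = ["Severe Weather", "H1N1 Pandemic Flu", "Fire", "Theft"]
    prefers = lambda a, b: order.index(a) < order.index(b)
    print(insert_by_binary_search(["Severe Weather", "Fire", "Theft"],
                                  "H1N1 Pandemic Flu", prefers))
    # ['Severe Weather', 'H1N1 Pandemic Flu', 'Fire', 'Theft']

Because the user is asked only about the midpoint of the remaining range, adding a tenth item costs at most four comparisons instead of nine, and since the remaining relations are inferred from list position, the insertion cannot record a cyclic triad.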

Logic Check Modifications

When a user votes and does not pass the logic check, a screen will be returned telling him or her of the error. The screen will return the user's selections, and the items failing the logic check will be pointed out, for example bolded in red with an asterisk so that the user can quickly see the problem. At this point, the user can either:

1. Make other selections and click Re-Submit.

2. Not make any changes and click Submit Anyway.

Discussion Forum

The Discussion Forum will be improved by giving the user more flexibility and more alternatives in the functionalities available.

Discussion Forum Improvements

· A User will be able to Edit posts – anywhere in the thread.

· A User will be able to Delete own posts if no thread exists

· A User will be able to Insert pictures

· A User will be able to Upload Attachments

· A User will be able to make an Anonymous post

· A User will be able to Post or Reply using their Pen Name

· When a User reads a post, it will be marked as ‘read.’ This will be accomplished by having new posts in bold letters and ‘Read’ posts NOT in bold.

· A User can click a box next to any message/comment/reply item and Mark as ‘Read’

· A User can have the option to ‘Mark All Read’ where they did NOT have to check the box next to the post.

· We would like hypertext ability where, if a post in Option A's Discussion Forum mentions Option X, 'Option X' automatically becomes a keyword linking to Option X's Discussion Forum. This would hold true for any Option's forum: whenever an option on the list is referenced by name, a link is automatically created from that keyword to the option's discussion forum (a sketch of this auto-linking follows below).
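A minimal sketch of such auto-linking is shown below, assuming a simple mapping from option names to forum URLs; both the names and the URL scheme are invented for illustration.

    import re

    def autolink_options(post_html, option_forums):
        """Wrap any mention of a known Option name in a link to that
        Option's Discussion Forum. A production version would also need
        to skip text that is already inside markup."""
        for name, url in option_forums.items():
            pattern = re.compile(r"\b%s\b" % re.escape(name), re.IGNORECASE)
            post_html = pattern.sub(
                '<a href="%s">%s</a>' % (url, name), post_html)
        return post_html

    forums = {"Severe Weather": "/delphi/forum/1", "H1N1": "/delphi/forum/5"}
    print(autolink_options("I rank H1N1 just below severe weather.", forums))
    # I rank <a href="/delphi/forum/5">H1N1</a> just below
    # <a href="/delphi/forum/1">Severe Weather</a>.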

Moodle is a Free and Open Source Software system for course management. Although Moodle is written in PHP, the developer is working to port its discussion forum structure to Python and implement it for this effort.

Forums will be available for every single item in the Option List and can also be started for new topics.

User Permission

User permissions need to be implemented so that groups have more control over membership. This is important for a variety of reasons. There could be a need for a closed group of members who are representatives and hold a vote for or against some policy. Permissions will also allow more structure in large groups, such that some people can vote while others can contribute ideas but not vote, or whatever combination the given problem requires. User permissions can be seen in the user's profile. New options will be:

· Pen Names – Pen Names will be added so that users can be anonymous but still have an identity.

· Profile Picture – pictures can be added so that the user has a visual representation along with a name. This is important in online environments so that when these people meet face to face, they recognize one another. Avatars also make users feel they know a person better, because they are a richer form of representation and identification than a text-based name.

· Profile Email – a primary email should be given so that experts can be mailed by the moderator.

· Profile Update Information – the user will be able to update their information under their profile.

· Description – A user will be able to add information on their domain expertise, listing the areas in which they are considered to have expertise.

Group Permissions and Classifications

Many problem instances were created during the studies, but each reflected the total number of users of the system rather than the group targeted for that specific problem. A solution is posed for these related problems in which group permissions will be expanded. Different permission levels are desired for the system.

1. Guest –guests will be able to view the Active Problems List, Scale, and Item List.

2. Guest - will NOT be able to vote.

3. Guest - can NOT view or contribute to the Discussion Forum.

4. Contributors – Contributors inherit the permissions of the Guest.

5. Contributors – can see and post/reply in the Discussion Forum.

6. Contributors – can add items to the Item List.

7. Contributors – can NOT vote.

8. Participant – inherits both the Guest and Contributor privileges. However, the Participant can Vote.

9. Moderator – A moderator inherits all available permissions.

10. Moderator – can create a new problem

11. Moderator – can edit or delete any item as an option.

12. Moderator - can set up groups by allowing or inviting individuals to the group.

13. Moderator – can email all participants.

14. Moderator – can designate members in a group able to review and accept another person who wishes to join the group.

15. Any member of the system in any role can be an observer in a given group and request a participant or contributor role.

16. Groups need to be able to select their members and/or accept membership requests. A group must be able to accept or reject individuals who request to join, and each group needs a designated set of individuals working on a single problem area. This will have to be tested after it is designed to see whether it fits the needs of the organizations.
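The inheritance among these permission levels can be expressed very compactly; the sketch below uses illustrative permission names, not the Sahana implementation.

    # Each role inherits the permissions of the roles below it.
    PERMISSIONS = {"guest": {"view_problems", "view_scale", "view_items"}}
    PERMISSIONS["contributor"] = PERMISSIONS["guest"] | {
        "view_forum", "post_reply", "add_item"}
    PERMISSIONS["participant"] = PERMISSIONS["contributor"] | {"vote"}
    PERMISSIONS["moderator"] = PERMISSIONS["participant"] | {
        "create_problem", "edit_item", "delete_item",
        "manage_group", "email_participants"}

    def can(role, action):
        return action in PERMISSIONS.get(role, set())

    assert not can("contributor", "vote")  # contributors may post but not vote
    assert can("participant", "vote")
    assert can("moderator", "create_problem")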

14.5 Uncertainty Calculations in Problems

As has been stated throughout these observations of the studies, support for types of uncertainty needs to be explored. For Version 2.0, the Voting Uncertainty feature will be implemented. Given that groups will be closed and membership privileges will define which members are allowed to vote, it is hoped that in Version 2.0 the information from the calculation will benefit the user as intended.

Uncertainty comes in many forms, and there is no one overall calculation applicable to all problems. The uncertainty cannot be identified until the problem is identified. Many variations of uncertainty representation should be explored, in hopes that a set of choices can be offered to the members of a given group. Some examples are:

· Votes and voter uncertainty can come into play when each member gets to add weight to a decision using a Vote.

· Uncertainty can be calculated by forecasting events that are likely to occur.

· Uncertainty can be in resource availability.

· Uncertainty can be calculated from a subset of ranked items.

· Uncertainty can be created from Risk Assessment.

Until the problem is identified, the uncertainty cannot be identified and therefore cannot be calculated. So, each problem should be explored for the types of uncertainty that will be of some benefit in the decision making process of the expert. Eventually, a set of uncertainty calculations should be available to the expert where the group can choose to use the ones that are beneficial for a designated problem type.

Voting and Uncertainty

Voting was used as the uncertainty calculation in Version 1.0. Uncertainty was calculated from the participants in the trial who had not yet voted on an item. Although this did not work well for the study, it could easily work for the right group under the right problem type. For example, the House recently passed the Health Care Reform Bill, where votes meant everything. The number of members who had not voted yet was used to calculate a measure of uncertainty by assuming that all those not yet voting could vote one preference, or the opposite one, to give high and low values for the final result. This did not work well, a consequence of attempting a quick estimate instead of the originally specified uncertainty measure. This type of uncertainty calculation, although in need of interpretation and fine tuning, could be used for a situation such as a bill being brought before the House of Representatives or the Senate. Figure 14.4 offers an example of a real vote from the Health Care Reform Act; a sketch of the high/low calculation follows the figure.

Figure 14.4 Voting Used as Uncertainty Problem Type
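A minimal sketch of that high/low calculation follows; the tallies are invented for illustration and use a 435-seat chamber as in the House example.

    def vote_bounds(yes, no, eligible):
        """Best- and worst-case totals for a yes/no vote: assume every
        member who has not voted yet breaks the same way, first all
        'yes' (high), then all 'no' (low)."""
        outstanding = eligible - yes - no
        return yes, yes + outstanding   # (low, high)

    low, high = vote_bounds(yes=210, no=190, eligible=435)  # illustrative
    needed = 435 // 2 + 1                 # simple majority
    print(low, high)                      # 210 245
    print(low >= needed, high >= needed)  # False True -> still uncertain

The result is decided only when low and high fall on the same side of the threshold; until then, the gap between them is exactly the uncertainty contributed by the members yet to vote.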

Many problems were created on The Delphi Decision Maker during the study. Analysis of these individual problems showed particular areas where uncertainty could occur, but each was different, and no single uncertainty calculation satisfied all of them. Until the problem is identified, the uncertainty cannot be identified. Each of the problem areas is explored next.

1. Resource Allocation Problem. This problem was created during the development of the system for use between the designer, developer, and testers. It was based loosely on the aftermath of Hurricane Katrina and had been used in a paper published by the researcher and some committee members. Although not part of the study, this problem seemed to draw interest from testers and participants alike. As a real-world problem under a time critical situation, it would test the system and perhaps have the tools utilized and leveraged as intended: there would be a basis for arguments, stakeholders would have a vested interest in the outcome, and the decisions could be life or death.

2. Quadrennial Homeland Security Review. This problem is based on past uses of Thurstone's Law of Comparative Judgment on which we ran prior pilots; see the research proposal. Here the Option List is already created and the participants vote on those items only. This problem used a group of experts who were knowledgeable about the information before them. It differed in that no ideas were to be added to the Option List; rather, the goal was a ranking of the most important ongoing topics. Good debates occurred, and agreement between participants was used to judge the information and the ongoing debate. The groups had a bit of bias in one category over another, as they had submitted group projects presenting the most compelling arguments behind their own self-selected topic areas. However, nothing was implemented; there was no consequence to the results.

3. Campus Threat Assessment. This is the problem that was used for the study. Although it was a good problem, it held no consequences: there was no budget tied to implementation and no outcome to be had. A problem requires some compelling reason to want outcomes, based on some need or requirement.

4. Dessert Problem. Although this was created as a sandbox for participants to play in and use their newly found tools, the problem also raised an interesting issue: some problems are based on things like taste, and although people can be persuaded for a variety of reasons, if something tastes good or bad, there is no persuading someone to taste it differently. However, other considerations may make a participant willing to give something up for the greater good of the group; for example, diabetics should be considered, and given the obesity problem, low fat desserts should matter, although that is doubtful. All of these issues should be considered, as some things, like belief systems or taste, cannot be argued, though they can perhaps be negotiated. For example, if a problem asked participants to 'list and rank the best religions,' it is doubtful that people could be persuaded to support anything but their own belief.

5. Sahana Framework Problem. This is a problem for the Sahana development community. For a while now, various frameworks have been proposed for the system to use, but these discussions are unorganized and supported only by an email list. There is no way to know what the group thinks, so the site mentor of this research effort asked that this problem be added to The Delphi Decision Maker. The system will support a problem like this well, as there are only a few alternatives under consideration, and this is a group of experts who know how to interact with one another in a post/reply format. One framework will be selected over the others, so argumentation could be used. However, this problem raises an issue that the Resource Allocation problem also raises: accommodations need to be made to help decision makers by providing a list of factors that directly relate to the options being explored. For example, in the Resource Allocation problem, the number of generators, how much gas they use, and how long they need gas should be visible to group members. These pieces of information relate directly to the decision making and support a more informed decision by the participant.

The system worked when there was a small number of items on the Option List. Voting was not confusing, and the forum could support the few topics. However, as the number of items increases, the system does not manage the complexity well.

So, questions should be asked:

1) Will a solution be implemented?

2) Does the end result affect the stakeholder?

3) What is the end result going to be used for? What is the goal?

4) What is the uncertainty to be considered for the problems?

5) What type of problem is this? Forecasting? Ranking? Clustering?

14.6 Future Features for The Delphi Decision Maker 3.0

There are other features that should be developed to further support the efforts of the Delphi Decision Maker.

14.6.1 Expert Domain Group Identification

When a New Problem is entered into The Delphi Decision Maker, there needs to be a way for the groups of experts to be formed. Groups could be formed using a number of alternative methods.

1. Database of Experts. First, a database of experts should be built. Information on level of expertise, domains of expertise, location and availability could all be attributes. This would give users the ability to search and retrieve experts and allow programs to be written to automatically identify and mine experts.

2. Moderator Determined. This is where the moderator or person in charge of ‘forming a subgroup’ of experts selects participants and reaches out to the experts for their contributions.

3. Expert's Surfing. Experts could have methods that bring situations where they may be able to provide expertise to their attention, for example through RSS feeds from the Sahana community or other sources. This way the expert reaches out to the problem; when the moderator is in charge, the moderator reaches out to the expert. Using a variety of methods for group formation may prove more beneficial. However, there may be times when a closed group method is preferred, due to issues such as the decision making power of authority.

4. Automatic Expert Mining. Other programs could be written automating lists of possible experts who could contribute to a particular problem. Experts could be registered to particular domains and upon selecting a series of related domains, a program could automatically contact and confirm membership participation.

These systems could use multiple methods of communication to quickly contact experts and confirm whether they are willing, ready, and available. Expert lists could be updated in real time so that experts are deployed most efficiently.
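As a sketch of the database-of-experts method (method 1 above), a registry query might look like the following; the field names and sample entries are invented for illustration.

    # Hypothetical expert registry with domain and availability attributes.
    experts = [
        {"name": "Expert A", "domains": {"flood", "logistics"},
         "available": True},
        {"name": "Expert B", "domains": {"pandemic", "flood"},
         "available": False},
    ]

    def find_experts(registry, domain):
        """Return the available experts registered for a given domain."""
        return [e["name"] for e in registry
                if domain in e["domains"] and e["available"]]

    print(find_experts(experts, "flood"))  # ['Expert A']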

14.6.2 Dynamic Discussion Forum

Although standard forums could satisfy the discussion aspect of the system, a more versatile format would be much better suited to the complexities and information needs of the experts using this system. If forums could act more like wikis, terms could be linked and interlinked and new information better managed; this would be a beneficial technology to implement. However, much research needs to be conducted in this area. Other information from the Sahana system also needs to be available and integrated into the discussion area, so that real-time information is present and information exchange is seamless.

14.6.3 LIVE Information Aggregation Sites

Information needs to be collected in a way that is most useful to the expert’s needs, not overloading the user but supporting the user’s needs. Although search engines can retrieve bits and pieces of information, tagging information and other sorts of ways of aggregating data to a collective space should be a consideration.

Information on a particular subject can be spread across forums and other relevant places. This information is chronological by nature, due to the timeline of actions and decision making. However, information should also be directed, by humans, automatically, or both, so that experts have a direct link to a comprehensive but organized representation of the information on a given option, problem, or situation.

An example of a live site is provided by Information Systems for Crisis Response and Management, www.iscram.org/live. On the ISCRAM site, group members collectively tag information, and the site receives and organizes that information in real time, making it a useful place to find information during an event. Real-time information may come in from various people using a variety of methods; information is streamed to the site, viewable by anyone, and others have space to contribute documentation. This is a media rich environment that supports both outgoing and incoming information and activity. It is this sort of 'portal' that could be effective in managing the particular needs of the experts and decision makers.

Figure 14.5 Information aggregated on ISCRAM site.

14.6.4 Integration of Sahana Modules

Many other upgrades will be made in Version 3.0 so that information from other modules in the Sahana system can be integrated with the Delphi Decision Maker. If experts and crisis managers can get the real-time information they need directly from the same system, decision making will be even more efficient and effective. This would save time and, since the information would be real time, decisions would not be outdated by the time they reach those on the ground.

14.7 Future Research Questions

A set of research questions that were left unanswered from the Version 1.0 system will drive the study efforts where the Version 2.0 system will be implemented and tested. They are:

1. Will emergency management experts use such a system? What do they see as the advantages and disadvantages?

2. Will the discussion from disagreements lead to more new options being proposed?

3. How can voting be used to aid experts in ‘muddling through’ an initial set of items to create a subset which the experts determine to be the most important items for the group to work with?

Ways of answering these questions are under development. These core questions will help identify if the system is getting closer to its intended use.

14.8 Future Studies

Future studies are planned in which groups of crisis experts will be used. A request was posted to the International Association of Emergency Managers mailing list. From this post, individual volunteers as well as two emergency-affiliated groups responded and are available. It is in these studies that the true design of the system will be tested. To help the researcher better understand the needs of the users, studies are in the works in which the researcher would be part of a RELIEF effort holding the following series of evaluations:

· Sahana Testing at Camp Roberts (in the works)

        • 6 months Tabletop

        • 9 months Functional

        • 12 months Full Scale

It is very difficult to test the system using contrived problems or scenarios. Testing the system in its intended environment will give researchers a more realistic picture of the situations in which the system can best be used and where further modifications can be identified to better fit the needs of the users. Once the system is deployed with the Sahana Disaster Management System, real cases can be evaluated. Studying the system from a behavioral perspective may further aid the research effort to support users' needs: it is important to study how people use the system, and new information can be derived from such studies, as users may apply the system to situations not previously considered by the researchers.

14.9 Conclusion

Overall, this research effort went well. The system proved useful at its core, but many improvements need to be implemented before The Delphi Decision Maker can support the emergency domain community in the most critical times. A new model has been developed to guide the next iteration of studies, and research questions have been developed to help measure the success of the design. Future studies are planned in which the new version of the system will be tested again and further modified according to the needs identified through analysis. The last guideline of Design Science Research is that the work should be published; many publications are planned from this work, addressing the variety of issues that surfaced. Hopefully, the system will be studied as part of a larger effort in which its use is implemented in the field; this would shift the work from Design Science Research to Action Research. It is only through real use of this system that its real contribution can be acknowledged. Only when extreme events are better managed, with the death toll minimized and a more efficient recovery actualized, will the system be fully realized.

APPENDIX A

CONSENT FORM AND PRE STUDY SURVEY

This appendix covers the information presented in the consent form, as well as summary statistics for every survey question in the initial Pre Study questionnaire. This information was formatted and produced using QuestionPro software.

Hello JSU:

You are invited to participate in our pre-study survey for the Delphi Decision Maker Software System research effort. In this survey, approximately 200 people will be asked to complete a questionnaire covering demographics and background questions about the types of systems you have used. It will take approximately 5 minutes to complete the questionnaire.

Thank you very much for your interest in participating in this research to evaluate a state-of-the-art group support system that facilitates decision making during time-critical situations. The software, referred to as The Delphi Decision Maker, guides a group through a discussion in which the members generate and prioritize solutions or options for a given problem.

“Delphi is a communication structure aimed at producing detailed critical examination and discussion, not at forcing a quick compromise. Certainly quantification is a property, but only to serve the goal of quickly identifying agreement and disagreement in order to focus attention” (Hiltz and Turoff, 1996, p. 2).

We implement Delphi characteristics by:

* allowing anonymous participation;

* allowing users to vote on and rank their own list, and to change their vote, rank, or item list at any time;

* showing users a scale that indicates where the group stands on the current ranking of items;

* allowing users to discuss and argue ideas.

Your participation in the field trials with a group of people using this system is very important to improve its design.

We are looking forward to:

Your reflections on the above objectives as a result of your experience with the system;

Your comments on using the system;

Your suggestions to improve the system.

----------------------------------------------------------

Consent Form

Your participation in this study is completely voluntary. There are no foreseeable risks associated with this project. However, if you feel uncomfortable answering any questions, you can withdraw from the survey at any point. It is very important for us to learn your opinions.

Your survey responses will be strictly confidential and data from this research will be reported only in the aggregate. Your information will be coded and will remain confidential. If you have questions at any time about the survey or the procedures, you may contact Connie White by email at the email address: connie.m.white@gmail.com

Thank you very much for your time and support!

Connie White, IEP Faculty

By checking this box and clicking the "Continue" button, I acknowledge that I have read and agree to the contents of the above consent form.

I Agree

Please contact connie.m.white@gmail.com if you have any questions regarding this survey.

Which Internet browser will you use while using the Delphi Decision Maker?

I have used a group support system.

Have you ever used an online decision support system?

Yes

No

How would you rank your level of experience using online systems?

How often do you use the Internet?

I would rate my knowledge of emergency management as

I would rate my interest in emergency management as:

Please identify your role at JSU.

Please indicate your age range:

Please indicate your gender:

Female

Male

Please indicate your residency.

I live on campus

I commute to campus

I never come to campus (distance)

Do you take online classes?

Never

Sometimes

Full Online Program

APPENDIX B

TUTORIAL

This appendix presents the tutorial that was produced as an aid for users of the pilot system in this research effort.

Delphi Decision Maker Tutorial

Welcome to the Delphi Decision Maker. This system helps large groups of crisis experts generate and prioritize alternative solutions to problems, producing a single group ranking of the alternatives based on the input of the individuals.

Delphi Decision Maker Homepage

The first screen to appear allows the user to access the system by clicking ‘Delphi Decision Maker.’

Figure 1: Delphi Decision Maker Homepage

There are two types of permissions:

1. Guest - can only view the active problems and descriptions.

2. Registered User - the user must register and login to use the tools.

Active Problems

Once logged in as a registered user, the Active Problems page comes up.

Figure x.2 Active Problem Page

At this point, a registered user can read the descriptions of the ongoing problems, and then can select any of them to interact with. We will select the Dessert Selection for Campus Event – Practice Exercise.

Menu of Tools Available

The Delphi Decision Maker has four tools that are used together to support group decision making:

· Summary – presents a list of options created. New items are added here.

· Voting – paired comparisons are put before the user. One item of each pair must be selected over the other, and all pairs must be considered at first. Later, any vote can be changed simply by going to Vote and clicking the preferred option.

· Scale of Results – a visual interpretation of data is offered as feedback. This is calculated from the individual input creating a single group calculation.

o Uncertainty – both the best-case and worst-case scenarios are provided alongside the group’s opinion. These show the user where an option would stand if everyone who has not yet voted were to vote in its favor (best case) or against it (worst case). Sometimes this uncertainty can change the possible outcome; at other times it makes no difference at all. It is evaluated on an option-by-option basis.

· Discussion – each problem has its own discussion area where arguments and such discussions or opinions may be listed.

As demonstrated in the next figure, these items are available in the menu.

Figure x.3 Tools on Menu Bar

Summary List of Items and Add New Items

Once a user selects one of the Active Problems, the current options are listed. This is also where the user can add new options as solutions to the problem.

Figure x.4 List of Items, Add New Items

Vote

In order for users to have their voice heard, they must vote, and change their vote whenever their opinion changes; this contributes to the overall Scale of Results. For each pair, one item must be selected as preferred over the other by clicking it.

Click the Update button upon completion.

Figure x.5 Voting on Paired Comparisons

Scale of Results

The Scale of Results provides the user with feedback. The scale indicates where the group stands on the solutions posed so far at that particular point in time. The results are real-time feedback.

1. Scale of Group Opinion – the first scale is a direct reflection of the group members who have voted so far.

2. Worst Case Scale – this scale shows where an item would end up if no other members voted in its favor. However, it is crucial to remember that new members are joining all of the time, so the uncertainty can change with every additional member.

3. Best Case Scale – this scale shows the outcome for an option if every member who had not yet voted were to vote in its favor; this gives the best possible position on the scale for that option. This is an option-by-option feature, as each option has its own uncertainty.

Figure x.6 Scale of Results – Group Opinion and Uncertainty

Discussion

If a user would like to argue their opinion or offer information to the group, a forum is offered for discussion. A threaded discussion is provided with Reply options for nested discussions.

Figure x.7 Discussion Forum

A user can use the form at the bottom of the Discussion forum to enter a new discussion; a title space is followed by a text box. The user’s name, the date, and the time are provided with each post. Users can also click ‘Reply’ to another user’s comment. This is a good way to discuss particular areas together.

Figure New Discussion or Reply to Existing

Exiting From System

To log out of the system, the user should mouse over the login area; this brings up options to log out, edit the profile, or change the password.

Figure x.8 Exiting The System

Thank you for using the Delphi Decision Maker!

APPENDIX C

TUTORIAL SURVEY

This appendix covers all Summary Statistics for every survey question that was part of the tutorial questionnaire. This information was formatted and produced using QuestionPro software.

Now that you've completed The Delphi Decision Maker Tutorial, please take 3 - 5 minutes to provide some very important feedback for us by taking this survey. It is from your feedback that improvements will be made so that others can teach themselves how to use the system and its tools as they were developed. Thank you for sharing your experience; this will make the overall system more successful.

Thank you - click Continue to proceed.

Connie White, IEP Faculty

I used the _____ tutorial. (All subsequent answers will assume this selection.)

Text based

Video

The tutorial provided for The Delphi Decision Maker was

Do you prefer the video or the text based tutorial?

Given the tutorial you took, instructions

The tutorial was __________________ in its instructions.

There were ______________ details provided in the instructions.

The feedback scale was ______________ to understand.

I wanted to ____________ the models used and reasons for design.

What else does the tutorial need to cover?

Nothing, everything is covered

Else, Provide comments for suggestions.

Do you have any other suggestions?

No, Thank you.

Else, Provide comments for suggestions.

What other improvements can you suggest to make the tutorial better?

None, it was perfect

Else, Provide comments for suggestions.

Email Address needed for Lotto drawing!

APPENDIX D

REPORT ON STATISTICAL ANALYSIS

This appendix covers all Summary Statistics for every survey question that was part of the post Delphi questionnaire. This information was formatted and produced using QuestionPro software. This questionnaire was distributed after the designated time period was over and the users had finished using the system.

This system was ___________ to use.

Frequency Analysis

Answer    1 Easy    2        3        4        5        6        7 Difficult    Total
Count     10        3        9        4        2        2        1              31
Percent   32.26%    9.68%    29.03%   12.90%   6.45%    6.45%    3.23%          100%

The tools provided by The Delphi Decision Maker software were of

Frequency Analysis

Answer    1 No Use    2        3        4        5        6        7 Great Value    Total
Count     1           0        5        8        9        5        3                31
Percent   3.23%       0.00%    16.13%   25.81%   29.03%   16.13%   9.68%            100%

The Delphi Decision Maker system as a whole was _________ to use.

Frequency Analysis

Answer    1 Easy    2        3        4        5        6        7 Difficult    Total
Count     9         10       4        5        1        1        0              30
Percent   30.00%    33.33%   13.33%   16.67%   3.33%    3.33%    0.00%          100%

The design of the system was

Frequency Analysis

Answer    1 Good    2        3        4        5        6        7 Bad    Total
Count     11        6        7        6        1        0        0        31
Percent   35.48%    19.35%   22.58%   19.35%   3.23%    0.00%    0.00%    100%

Voting was a _________ way to reflect what I was thinking.

Frequency Analysis

Answer    1 Good    2        3        4        5        6        7 Bad    Total
Count     8         8        4        6        3        2        0        31
Percent   25.81%    25.81%   12.90%   19.35%   9.68%    6.45%    0.00%    100%

Having the uncertainty represented was a ______________ function to have.

Frequency Analysis

Answer    1 Useful    2        3        4        5        6        7 Not at all useful    Total
Count     7           4        3        8        5        3        0                      30
Percent   23.33%      13.33%   10.00%   26.67%   16.67%   10.00%   0.00%                  100%

I found the Discussion Forum a _________ function for the decision making process.

Frequency Analysis

Answer    1 Necessary    2        3        4        5        6        7 Unnecessary    Total
Count     10             4        8        4        3        1        1                31
Percent   32.26%         12.90%   25.81%   12.90%   9.68%    3.23%    3.23%            100%

If given the chance, I would ______ this system for prioritizing alternative solutions to a problem or situation.

Frequency Analysis

Answer    1 Use     2        3        4        5        6        7 Not Use    Total
Count     8         8        5        3        2        1        3            30
Percent   26.67%    26.67%   16.67%   10.00%   6.67%    3.33%    10.00%       100%

This system is ___________ to use for other problems where ranking alternatives is desired among a group.

Frequency Analysis

Answer    1 Good    2        3        4        5        6        7 Not Good    Total
Count     14        4        7        5        1        0        0             31
Percent   45.16%    12.90%   22.58%   16.13%   3.23%    0.00%    0.00%         100%

I am _________ satisfied with using this system to support a large group generating and prioritizing a list of alternatives.

Frequency Analysis

Answer    1 Very    2        3        4        5        6        7 Not at all    Total
Count     11        7        6        1        3        0        1               29
Percent   37.93%    24.14%   20.69%   3.45%    10.34%   0.00%    3.45%           100%

I voted on the items.

Frequency Analysis

Answer    Yes       No       Total
Count     29        2        31
Percent   93.55%    6.45%    100%

When voting, I voted on all of the items available.

Frequency Analysis

Answer    Yes       No        Didn't Vote on Items    Total
Count     24        4         1                       29
Percent   82.76%    13.79%    3.45%                   100%

I used voting to reflect any change of opinion I had.

Frequency Analysis

Answer    Yes       No        Total
Count     20        10        30
Percent   66.67%    33.33%    100%

Did you try to vote on only a subgroup of options?

Frequency Analysis

Answer    Yes       No        Total
Count     7         23        30
Percent   23.33%    76.67%    100%

I agree __________ with the results of the top considerations that came from the scale at the end of the study.

Frequency Analysis

Answer    1 Exactly    2        3        4        5        6        7 Ambiguously    Total
Count     2            4        10       8        4        2        1                31
Percent   6.45%        12.90%   32.26%   25.81%   12.90%   6.45%    3.23%            100%

The Delphi Decision Maker final scale result (did, did not) mimic what I was thinking.

Frequency Analysis

Answer    1 Did     2        3        4        5        6        7 Did Not    Total
Count     3         4        9        4        3        4        4            31
Percent   9.68%     12.90%   29.03%   12.90%   9.68%    12.90%   12.90%       100%

It was __________ to see what the group thought was important.

Frequency Analysis

Answer    1 Easy    2        3        4        5        6        7 Difficult    Total
Count     17        5        3        3        2        1        0              31
Percent   54.84%    16.13%   9.68%    9.68%    6.45%    3.23%    0.00%          100%

I was ________ confident in this system because I was allowed to see what the ongoing results were.

Frequency Analysis

Answer    1 More    2        3        4        5        6        7 Less    Total
Count     8         5        10       6        1        0        1         31
Percent   25.81%    16.13%   32.26%   19.35%   3.23%    0.00%    3.23%     100%

I am confident that a group ________ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to reach decisions more quickly than they could have without such a system.

Frequency Analysis

Answer    1 Can     2        3        4        5        6        7 Can Not    Total
Count     13        8        4        3        2        1        0            31
Percent   41.94%    25.81%   12.90%   9.68%    6.45%    3.23%    0.00%        100%

I felt as though my opinion was (Very, Not at all) accurately reflected because I could vote only on those solutions I agreed on.

Frequency Analysis

Answer    1 Very    2        3        4        5        6        7 Not at all    Total
Count     9         5        7        8        1        1        0               31
Percent   29.03%    16.13%   22.58%   25.81%   3.23%    3.23%    0.00%           100%

I think that my decision was ________ informed because of the knowledge provided by knowing the various outcomes given the uncertainty.

Frequency Analysis

Answer    1 More    2        3        4        5        6        7 Less    Total
Count     6         4        7        8        1        1        3         30
Percent   20.00%    13.33%   23.33%   26.67%   3.33%    3.33%    10.00%    100%

It was _______ for me to share and express my opinion using this system than it would have been, under the same set of circumstances, without the system.

Frequency Analysis

Answer    1 Easier    2        3        4        5        6        7 More Difficult    Total
Count     10          6        8        3        3        0        1                   31
Percent   32.26%      19.35%   25.81%   9.68%    9.68%    0.00%    3.23%               100%

I feel confident that a group _________ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to expose disagreements, and reach decisions.

Frequency Analysis

Answer    1 Can     2        3        4        5        6        7 Can Not    Total
Count     13        7        4        3        2        2        0            31
Percent   41.94%    22.58%   12.90%   9.68%    6.45%    6.45%    0.00%        100%

I ___________ used the scale for feedback.

Frequency Analysis

Answer    1 Frequently    2        3        4        5        6        7 Never    Total
Count     7               2        5        6        7        3        1          31
Percent   22.58%          6.45%    16.13%   19.35%   22.58%   9.68%    3.23%      100%

I believe that I was able to evaluate the information ______ than normal given there was an online forum.

Frequency Analysis

Answer    1 Better    2        3        4        5        6        7 Worse    Total
Count     8           11       5        5        0        1        1          31
Percent   25.81%      35.48%   16.13%   16.13%   0.00%    3.23%    3.23%      100%

I consider the Voting tool _________.

Frequency Analysis

Answer    1 Extremely Useful    2        3        4        5        6        7 Not Useful    Total
Count     10                    9        4        1        3        3        0               30
Percent   33.33%                30.00%   13.33%   3.33%    10.00%   10.00%   0.00%           100%

I consider the Discussion Forum tool _________.

Frequency Analysis

Answer    1 Extremely Useful    2        3        4        5        6        7 Not Useful    Total
Count     12                    6        6        3        1        2        0               30
Percent   40.00%                20.00%   20.00%   10.00%   3.33%    6.67%    0.00%           100%

If given the opportunity, I would use The Delphi Decision Maker software _________ at work.

Frequency Analysis

Answer    1 None    2        3        4        5        6        7 Extensively    Total
Count     6         2        4        6        9        2        1                30
Percent   20.00%    6.67%    13.33%   20.00%   30.00%   6.67%    3.33%            100%

Given the problem, the number of participants, the discussions that went on, and the evaluation of that information, this system took us a ________ amount of time to reach a decision than it would have taken without such a system.

Frequency Analysis

Answer    1 Shorter    2        3        4        5        6        7 Longer    Total
Count     8            6        9        5        2        0        1           31
Percent   25.81%       19.35%   29.03%   16.13%   6.45%    0.00%    3.23%       100%

I found the scale to be ____________.

Frequency Analysis

Answer    1 Extremely Useful    2        3        4        5        6        7 Not Useful at All    Total
Count     7                     11       2        6        1        3        0                      30
Percent   23.33%                36.67%   6.67%    20.00%   3.33%    10.00%   0.00%                  100%

I consider the Options (List Creation) tool __________.

Frequency Analysis

Answer    1 Extremely Useful    2        3        4        5        6        7 Not Useful at All    Total
Count     8                     9        4        6        0        2        1                      30
Percent   26.67%                30.00%   13.33%   20.00%   0.00%    6.67%    3.33%                  100%

Going from one functionality to another, voting to feedback for example, was _________ on the Delphi Decision Maker.

Frequency Analysis

Answer    1 Simple    2        3        4        5        6        7 Complex    Total
Count     12          7        5        4        1        0        2            31
Percent   38.71%      22.58%   16.13%   12.90%   3.23%    0.00%    6.45%        100%

The information on uncertainty provided by the scale made ______________ difference in my decision.

Frequency Analysis

Answer    1 No      2        3        4        5        6        7 A Big    Total
Count     10        5        6        5        2        1        0          29
Percent   34.48%    17.24%   20.69%   17.24%   6.90%    3.45%    0.00%      100%

I was _________________ confident in the decisions I made due to the information provided on the scales.

Frequency Analysis

Answer    1 Very    2        3        4        5        6        7 Not at all    Total
Count     9         7        4        4        3        2        1               30
Percent   30.00%    23.33%   13.33%   13.33%   10.00%   6.67%    3.33%           100%

I am confident that a group ________ use a web based system that will enable dispersed groups of experts or knowledgeable individuals to share and evaluate information and opinions.

Frequency Analysis

Answer    1 Can     2        3        4        5        6        7 Can Not    Total
Count     15        7        4        3        1        1        0            31
Percent   48.39%    22.58%   12.90%   9.68%    3.23%    3.23%    0.00%        100%

APPENDIX E

SAHANA BLUEPRINT WIKI FUNCTIONAL REQUIREMENTS

The Delphi Decision Maker - Version 1.0

Functional Specification

Overview

The Delphi Decision Maker module helps groups create a ranked list.

More specifically, it is designed to support the decision making of large groups of crisis management experts. It guides experts to generate, debate, and explore alternative solutions, quickly producing a real-time ranked list of alternatives that reflects the group's opinion at any point in time. The system also accounts for uncertainty.

Summary

Increasingly, extreme events demand large groups of experts distributed in both location and time to communicate, plan, and coordinate actions. When responding to extreme events, very often there are both alternative actions that might be considered and far more requests for actions than can be executed immediately. The relative desirability of each option for action could be a collaborative expression of a significant number of emergency managers and experts trying to manage the most desirable alternatives at any given time, in real time. This same decision process could be used for a number of tasks but will be designed for distributed dynamic decision making during time critical phases of extreme events. This is because our proposed system is specially designed to save time, remove ambiguities, and decrease uncertainty, major challenges described in the literature on time critical decision making during the volatile time critical phases of emergency.

Scenarios

Scenario 1: Transport Canadian Project (real case)

The problem the group wanted to address was finding a method for identifying and ranking sustainable transport practices in urban regions, and for making decisions about identifying, adopting, or implementing those practices. Below is the two-page description of what the system can do and how users can apply it to their needs.

The Delphi Decision Maker is an online decision support system available to users anywhere there is Web connectivity. It is designed to support large groups of professionals engaged in urgent, distributed, dynamic decision and option analysis activities.

This process is designed to handle real-world problems, and can be used where distributed subgroups and individuals determine the options and analyze them to solve a complex problem or emergency. As a further important feature, by virtue of being dynamic the Delphi approach provides a real-time mechanism to support continuous planning operations, whereby many individuals add intelligence and new input to the updating of plans, or deal with new products, cost overruns, and other events.

The central idea behind a Delphi process is that the collective opinion of a group of professionals is more accurate and informed than their separate opinions. That is, the group approach produces “collective intelligence”, and is a means for a number of professionals to interact in such a way that: 1) They can offer a feasible and analyzed list of options from which a decision maker can select the mix that satisfies the current problem; and, 2) They can better understand why some options were less satisfactory than those chosen. A dynamic Delphi process can be used to help a group of professionals identify, evaluate and select an optimal-ranked list of options.

This particular method uses voting to identify areas of agreement and disagreement. Exposing areas of disagreement informs the group where they may need to focus their discussion input. On the other hand, letting a group know that they agree on an issue informs participants in a timely manner that consensus has been reached, and directs them to concentrate their effort on the next item on the agenda. While it is always important to respect the participants’ time, it is even more important to do so in this environment because of the urgent aspect that could underlie the problem under consideration.

In a Delphi with heterogeneous professionals, participants are asked to vote only on what they feel confident about, or to wait until more information on uncertainties is provided by experts in the relevant field. Participants are informed of how many participants have voted on a given item, as well as the degree to which more votes are expected in the future and how that could affect the results.

This Delphi process is dynamic for the following reasons:

    • An expert can participate in any phase of the decision-making process at any time, that is: 1) problem identification, 2) information gathering, 3) solution generation, 4) evaluation.

    • This feature allows individuals to have discussions in forums where they can present information and debate issues as replies to specific options. Because the dialogue is text-based, others can read and benefit from the content.

    • An expert can participate online at any time during the day or night, given an Internet connection and web browser.

    • This feature helps professionals dedicate thoughts and ideas to the discourse as they arise after having time to think about a problem. This means they can choose a time to participate that is convenient for them. However, when some issues call for face-to-face meetings, the Dynamic Delphi approach can be used in preparation for, during, and/or after the group meeting as the meeting agenda or summary instrument.

    • There is real-time feedback of both the professionals’ individual opinions and of the group’s opinion.

    • Experts can vote, change their votes or withhold their votes for some reason.

    • The merits of the situation can change, or new information can sway opinions. Either way, the vote mimics the real-time opinion of the expert and, hence, the group.

    • Not all members of the group have to interact in order for a decision process to continue.

    • There may be cases where some of the participants cannot be present, or they may feel they do not have the expertise to engage in a specific option.

    • Uncertainty as to the status of the current vote (How final are the votes?) is calculated, and produced as feedback to the participants.

    • The system requests individual comparisons of options for preferences, and converts this rank-order information to an interval scale where distance represents the degree of preference between options.

This method has been modified to handle incomplete data with respect to participation in voting.
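One plausible reading of that modification, sketched below in Python (an illustrative assumption, not the production algorithm): each pairwise proportion is computed over only the votes actually cast on that pair, so options a participant skipped do not distort the result.

    import numpy as np

    def pairwise_proportions(freq):
        """Proportion matrix from possibly incomplete paired-comparison votes.

        freq[i][j] = number of votes for option i over option j. Each
        proportion is taken over only the votes actually cast on that
        pair, so items a participant skipped do not distort the result;
        pairs nobody has voted on stay at 0.
        """
        F = np.asarray(freq, dtype=float)
        totals = F + F.T  # votes cast on each (i, j) pair, in either direction
        return np.divide(F, totals, out=np.zeros_like(F), where=totals > 0)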

As an example for this Transport Canada project, a mission to examine options for implementing sustainable transport practices could entail the need for professionals in 20 to 30 different professional fields, including expertise regarding insurance and liability matters which restrict or prohibit using privately owned vehicles for collective uses such as car-pooling, ride-sharing, etc.

When it comes to taking options and discussing them on a relative basis in order to have the information to choose which options should go forward to implementation, the dynamic Delphi will be the most useful way to involve a large, heterogeneous group necessary to uncover all the potential bottlenecks and concerns that must be addressed and reconciled.

Non Goals

This version 1.0 will not support the following features:

· users creating and posing problems.

· users outside the voting group posing solutions –

o i.e., the various permissions needed to handle all of the users' roles.

· integration with Sahana; this will be accomplished in Version 2.0.

Definitions

Version 1.0

will be used to refer to the first build of the system. It tests how all of the functionality plays together and whether, together, this improves group decision-making ability in an online asynchronous environment. This is the build due in October.

Subject

will be used synonymously with Users and Experts. The system is created for Experts, who are Users of the system; Users will be Subjects in the study conducted on Version 1.0.

List

a List is a problem created by someone with the appropriate permission. This is not required for Phase I, due in October: only one discussion will be going on, and it is presented to a user group in which everyone has the same permissions.

Item

an Item is a solution offered to a problem (List). Users need to be able to add an Item to the List in Phase I.

User Stories

Pilot Problem on Version 1.0 – Faculty, staff, and students are going to conduct a threat assessment for their campus. These users can interact at any time and from anywhere there is an Internet connection and a browser. The user can:

1. Propose a new option to consider, which is added to a List.

2. Comment on any option proposed in a discussion forum – plain text, linear blog; comments should be editable (for 30 minutes after creation or something like that).

3. Vote on a relative comparison of any two options; all options should be available to choose from on one screen (a figure of the interface solution will be provided). Once a subgroup is selected, each item needs to be compared with the other items as paired comparisons. (Users can be forced to complete all available pairs so that the calculation is easier.)

4. View the votes to find both disagreements and agreements using the scale.

5. Change their votes. There is no need to maintain an individual's old vote, but a 'quality control check' needs to be run to test for any cyclic triad where A is selected over B, B is selected over C, and C is selected over A; an error message asks the User to correct it (a sketch of such a check follows this list).

6. View a linear interval scale for the options based upon current votes as calculated from the Unit Normal Table.

7. View a modified interval scale based upon the missing votes for any options, showing the degree of uncertainty in the relative position of an item in the scale resulting from those who have already voted. One scale shows the worst case scenario (lower boundary) and another scale shows the best case (upper boundary).
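User story 5 calls for a quality-control check on intransitive votes. Below is a minimal sketch of such a check in Python; the function name and data layout are hypothetical, chosen only to illustrate the cyclic-triad test, and are not the module's actual code.

    from itertools import combinations

    def find_cyclic_triads(prefs):
        """Find intransitive triads in one user's paired-comparison votes.

        prefs maps an ordered pair (a, b) to True when the user picked
        a over b. Returns each triad where a > b, b > c, and c > a.
        """
        items = {x for pair in prefs for x in pair}

        def beats(a, b):
            # True if a was chosen over b in this user's votes.
            return prefs.get((a, b), False)

        triads = []
        for a, b, c in combinations(sorted(items), 3):
            # Check both orientations of a possible 3-cycle.
            if beats(a, b) and beats(b, c) and beats(c, a):
                triads.append((a, b, c))
            elif beats(b, a) and beats(c, b) and beats(a, c):
                triads.append((c, b, a))
        return triads

    # Example: A > B and B > C, but C > A -- an inconsistent vote that
    # should trigger an error message asking the user to correct it.
    votes = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
    print(find_cyclic_triads(votes))  # [('A', 'B', 'C')]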

So, three scales are required for Phase I:

1) the present calculation based on those who have voted;

2) a scale of the best case scenario (calculation in the technical requirements);

3) a scale giving the worst case scenario for non-voting members.

· Calculation for vote/revote: the user's initial vote is kept as the present vote; any new vote is compared with the present vote and the present vote is updated.

· Each item is voted upon with paired comparisons. This produces a frequency count, which is converted to a percentage, which is matched with the corresponding value in a Unit Normal Table, which finally produces a rank.

· A calculation for uncertainty is done: everyone who can vote minus those who did.

So there are three scales:

· one from the Thurstone calculation (for which code is available for the unit normal conversion table), which takes many individual votes and calculates one group opinion.

· one like this, but including uncertainty (those who did not vote), showing the best case outcome (if all remaining votes go in favor of that item).

· and a third showing the opposing side (uncertainty): where the items would stand given that they receive no more votes.

Delphi Decision Maker Flowchart

Screen by Screen Specification

Design

The following diagram shows how important it is for functionalities to be available with as few 'clicks' as possible; this saves time and gets experts to the information they need as quickly as possible. It shows the 5 primary functionalities required of the system.

The primary interface design mimics that of Web2py; this design can be seen in more detail in the original research proposal (link provided under Background Reading below). Figure 10.6 Figure of Integrated Functionality.

-/(+) means that each list below can be expanded or collapsed.

This layout corresponds with the design above.

FIGURE - main page to work from

FIGURE Item Selection for creating subgroup of items to vote on

FIGURE Registration of Users

· A PenName is needed to hide identity – part of the Delphi characteristics.

FIGURE how paired comparisons of items could be presented.

FIGURE Scales Visualized for Feedback

Background Reading

A Dynamic Delphi Process Utilizing a Modified Thurstone Scaling Method: Collaborative Judgment in Emergency Response

* http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.107.1633

The Formal Research Proposal

* http://sites.google.com/site/conniemwhite/Home/research-proposal

Need to request access:

· http://sites.google.com/site/conniemwhite/Home/google-summer-of-code-proposal

Attachments

· DDM-Flowchart.jpg (12.3 KB) - added by connie 7 weeks ago.

· homeUI.jpg (77.6 KB) - added by connie 7 weeks ago.

· dm3.jpg (57.0 KB) - added by connie 7 weeks ago.

· dm6.jpg (46.9 KB) - added by connie 7 weeks ago.

· dm5.jpg (36.6 KB) - added by connie 7 weeks ago.

· pairCompUI.jpg (63.8 KB) - added by connie 7 weeks ago.

· ItemSelectionUI.jpg (45.1 KB) - added by connie 7 weeks ago.

· RegistrationUI.jpg (41.7 KB) - added by connie 7 weeks ago.

APPENDIX F

SAHANA BLUEPRINT WIKI TECHNICAL REQUIREMENT

The Delphi Decision Maker - Version 1.0

Technical Specifications

Version 1.0 ER Diagram for the Database Design

Matrix Calculations for Thurstone's Law of Comparative Judgment

There are three calculations, normally performed in an M x M matrix with all of the options across the first row and first column. The matrix is read as: the item in a row is selected over the item in the column it is compared against. For every paired comparison, say (3, 2), a frequency count is maintained. Every user, at some point, selects either option (3) or option (2) when the pair is presented. To keep it simple, say we have 10 users, 6 of whom prefer option 3 over option 2.

<insert frequency matrix>

Step 1: Get the frequency count of votes for each item pair. This reads: item (3) is selected over item (2) 60% of the time. Although it is useful for an individual user to see how they voted, the key point of this system is the group calculation. It is from this group calculation that a scale is made that everyone can see; this way, users know what the group's opinion on the problem is. This is meant to help them argue for what they think is best if they see that the group is not voting the way they would overall.

Step 2: Change totals to percentages. A running total, the frequency count, is kept for each pair; those counts are then simply converted into percentages. (The numbers will obviously be very different once 200 - 2000 people are using the system, but the process is kept simple here to explain it.) The percentage matrix at this point would be:

<insert percentage matrix>

Step 3: Find the equivalent value in the unit normal table. Each number in each cell is matched with its equivalent in the unit normal table; for example, 60% is looked up and replaced with the corresponding unit normal value. It is really a mapping problem: each percentage is converted to the equivalent unit normal table value, carried out to the fourth decimal place:

<insert final unit normal table matrix>

The columns are then totaled (each column represents an item option), and these summed values, placed in order, give the ranking. In practice, rather than producing an ordered list, each value (item option) is placed on the scale, which shows the user where the group stands on the overall list. That is the Thurstone scale of comparative judgment.

Since there is supposed to be a very large number of users, there is no need to maintain every person's vote change in the frequency matrix. If a user changes a vote, the new vote is simply added on top of the existing frequency count; the theory is that the large number of voters and votes will make a particular item surface to the top, so there is no need to subtract a vote when a user changes their mind. Regarding user story #4 (every user should be able to see the votes of other users; how should the user interface look?):

Users look at the scale, where Thurstone's final calculation is represented as a single group vote. This is the beauty of the system: individuals all give their voice on a problem, but there is one running group vote that drives everything else (arguments in the forum to persuade others, then revoting when individuals change their minds).

Remember that uncertainty is part of the other two scales; one shows the best case if all registered voters who have not yet voted were to vote in favor of an option.

difference = total voters - those who voted on that item

best case = (those who voted on an item + difference)/total voters (scale 2)

worst case = (those who voted on an item)/total voters (scale 3)

The difference between these and the regular calculation (scale 1) is that in scale 1 only active voters are considered: the total is simply the number of voters who actively voted on something (not everyone who could), because users do not have to vote on every item, only those items (the subgroup) they select.
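Putting the pieces together, here is a minimal sketch in Python of the three-scale calculation described above. It uses SciPy's inverse normal CDF in place of the unit normal table lookup; the names and matrix orientation are illustrative assumptions following the worked matrices in Section 9.3 below, not the module's actual Web2py source.

    import numpy as np
    from scipy.stats import norm

    def thurstone_scale(freq, n_voters):
        """Group scale values from a paired-comparison frequency matrix.

        freq[i][j] = times option i was chosen over option j by the
        n_voters who voted. Proportions (matrix P) are mapped through the
        inverse normal CDF, standing in for the unit normal table; 0% and
        100% cells are set to 0, following Torgerson's convention cited
        below. Column sums of matrix X give one value per option, as in
        Table x.3.
        """
        F = np.asarray(freq, dtype=float)
        P = F / n_voters                       # matrix P
        X = np.zeros_like(P)                   # matrix X
        off_diag = ~np.eye(len(P), dtype=bool)
        ok = off_diag & (P > 0.0) & (P < 1.0)  # drop 0%/100% cells
        X[ok] = norm.ppf(P[ok])
        return X.sum(axis=0)                   # scale 1: the group opinion

    def case_bounds(voted, total_registered):
        """Scales 2 and 3: the uncertainty envelope for one option.

        Implements the formulas above: the best case assumes every
        registered voter who has not yet voted supports the option; the
        worst case assumes the option receives no further support.
        """
        difference = total_registered - voted
        best = (voted + difference) / total_registered   # scale 2
        worst = voted / total_registered                 # scale 3
        return best, worst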

9.3 Thurstone’s Law of Comparative Judgment

Thurstone’s Law of Comparative Judgment (TLCJ) measures a person’s preference for one item i over any other item j based on a single stimulus, measuring the discriminal dispersion between the two on a psychological continuum. This is conducted using paired comparisons, where every item i in a pair (i, j) in a set is compared to every other item in the set, producing a total of n(n-1)/2 comparisons (Thurstone, 1927).

Torgerson provides an example of this process using three matrices: matrix F (frequency), matrix P (probability), and matrix X (unit normal deviate). The first matrix, F, is a frequency count of the items selected in each pair by every individual in the group. The table presented next shows the frequency counts for a user group where N = 5. The items along the first row, A, B, C, and D, are preferred over the corresponding items in the first column of an M x M matrix, with zeros placed along the diagonal. Given the number of users, N = 5, each corresponding pair should sum to N, i.e., (i,j) + (j,i) = 5:

     A     B     C     D
A    ----  1     2     1
B    4     ----  3     4
C    3     2     ----  5
D    4     1     1     ----

Table x.1 Matrix F: Frequency Count of User Input

These frequencies are converted to probabilities for matrix P, where each (i,j) + (j,i) = 100%:

     A     B     C     D
A    ----  .20   .40   .20
B    .80   ----  .60   .80
C    .60   .40   ----  1.00
D    .80   .20   .00   ----
Sum  2.2   .80   1.0   2.0

Table x.2 Matrix P: Frequencies Converted to Probabilities

In the final matrix, X, these proportions are replaced by the unit normal deviates corresponding to the cumulative proportions.

     A     B      C     D
A    ----  -.85   -.26  -.85
B    .84   ----   .25   .84
C    .25   -.26   ----  0
D    .84   -.85   0     ----
Sum  1.93  -1.96  -.01  -.01

Table x.3 Matrix X: Cumulative Normal Distribution Function

Values greater than 50% become positive in this transformation of matrix P to X, and values less than 50% become negative. Any values of 100% or 0% in matrix P are given a value of 0 in matrix X, because the corresponding x values from the unit normal deviate table are unboundedly large (Torgerson, 1958).

Thurstone’s scale is an extension of Thorndike’s work, which provides added insight into ranked lists of items. Given that A is preferred over B 75% of the time and B is preferred over C 85% of the time, “how much greater than the distance AB is the distance BC? Thorndike solved this problem by assuming that the difference in distances is proportional to the difference in the unit normal deviates corresponding to the two proportions” (Torgerson, 1958, p. 155).
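As a worked illustration of that assumption (using standard unit normal deviate values):

    z_{AB} = \Phi^{-1}(0.75) \approx 0.674, \qquad z_{BC} = \Phi^{-1}(0.85) \approx 1.036

so the distance BC is roughly 1.036 / 0.674, or about 1.54 times the distance AB, even though the two proportions differ by only ten percentage points.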

Attachments

· DDM-V1.jpg (39.4 KB) - added by connie 7 weeks ago.


REFERENCE LIST

1. Adams, R. J. & Ericsson, A. E. (2000). Introduction to the cognitive processes of expert pilots. Journal of Human Performance in Extreme Environments, 5(1), 44-62.

2. Agor, W.H., The Logic of Intuitive Decision Making (New York: Quorum, 1986), 18.

3. Alter, S. Decision Support Systems: Current Practice and Continuing Challenges - 1980 - Addison Wesley Publishing Company.

4. Amazon.com, www.amazon.com, retrieved on March 18, 2009.

5. Arrow, K. Social Choice and Individual Values, Cowles Commission Monograph 12, New York: John Wiley & Sons, 1951. Benbunan-Fich, R. Del.i.cious. ICIS eWeb Workshop, Quebec, 2008.

6. Baker, J., Lovell, K., and Harris, N. How expert are the experts? An exploration of the concept of ‘expert’ within Delphi panel techniques. NurseResearcher, 2006, 14, 1. pp 59 – 70.

7. Bard, J.F., and S.F. Sousk, "A Tradeoff Analysis for Rough Terrain Cargo Handlers Using the AHP: An Example of Group Decision Making," IEEE Transactions on Engineering Management 33(3), 222-227, 1990.

8. Baumgart, L, Bass, E., Philips, B. and Kloesel, K. Emergency Management Decision-Making During Severe Weather, Weather and Forecasting, In Press, 2008.

9. Bard, J. and Sousk, S. A Tradeoff Analysis for Rough Terrain Cargo Handlers Using the AHP: An Example of Group Decision Making. IEEE Transactions on Engineering Management, Vol 37, No. 3, August, 1990.

10. Battle Command. Leadership and Decision Making for War and Operations Other than War. Fort Leavenworth, Kansas: Battle Command Battle Laboratory, 22 April 1994.

11. Beck, M.P., and B.W. Lin, "Some Heuristics for the Consensus Ranking Problem," Computers and Operations Research 10(1), 1-7, 1983.

12. Benbunan-Fich, R. and Koufaris, M. Understanding the Sustainability of Social Bookmarking Systems. International Conference on Information Systems, Quebec, Pre-ICIS Sixth Workshop on e-Business (WeB 2007).

13. Benbunan-Fich, R. and Koufaris, M. Motivations and Contribution Behavior in Social Bookmarking Systems: An Empirical Investigation. Electronic Markets – The International Journal, 2008.

14. Berg, Sven. Condorcet's jury theorem, dependency among jurors, Social Choice and Welfare, Springer, Berlin. Vol. 10, No. 1, 1993.

15. Bezilla, R. Selected Aspects of Privacy, Confidentiality and Anonymity in Computerized Conferencing. RR#11, Computerized Conferencing and Communication Center, 1978. (referenced 12/12/2007 http://archives.njit.edu/vol01/cccc-materials/njit-cccc-rr-011/njit-cccc-rr-011.pdf)

16. Blackboard Course Management System, retrieved March 20, 2009.

17. Bonczek, R. Foundations of Decision Support Systems, CW Holsapple, AB Whinston - 1981 - Academic Press.

18. Bostrom, R.P., Anson, R., Clawson, V.K. (1993), 'Group Facilitation and Group Support Systems'. Group Support Systems: New Perspectives, Macmillan, 146-148.

19. Bottom, William, Ladha, Krishna and Miller, Gary. Propagation of Individual Bias through Group Judgment: Error in the Treatment of Asymmetrically Informative Signals. The Journal of Risk and Uncertainty, 25:2; 147-163, 2002.

20. Bower, Bruce. Simple Minds, Smart Choices. ScienceNewsOnline. Volume 155, No. 22, May 29, 1999. www.sciencenews.org/sn_arc99/5_29_99/bob2.htm (referenced 11/28/2007)

21. Buchanan, J., “A Two-phase Interactive Solution Method for Multiple Objective Programming Problems," IEEE Transactions on Systems, Man and Cybernetics 21(4), 743-749, 1991.

22. Campbell, D.J. Task Complexity: A Review and Analysis, Academy of Management Review, Vol. 13, No. 1, pp. 40-52.

23. Campbell, R. 1999, ‘Controlling Crisis Chaos’, Journal of Emergency Management Australia, Vol. 14, No. 3, pp. 51-54.

24. Cambridge Definition: referenced 12/02/2007 http://dictionary.cambridge.org/define.asp?key=7255&dict=CALD

25. Cho, Hee-Kyung, The impacts of Delphi communication structure on small and medium sized asynchronous virtual teams., Doctoral Dissertation, NJIT, 2004. Library.njit.edu

26. Churchman, C.W. The Design of Inquiry Systems. New York: Basic Books, 1971.

27. Clausewitz, Carl Von, On War. Princeton, New Jersey: Princeton University Press 1984.

28. Clawson, V.K., Bostrom, R.P. (1996), ' Research Driven Facilitation Training for Computer Supported Environments'. Group Decision and Negotiation, No. 1, 7-29.

29. Clawson, V.K., Bostrom, R.P. AND ANSON, R. (1993). 'The Role of the Facilitator in Computer-Supported Meetings.' Small Group Research, 24(4), 547-565.

30. Condorcet, Marie Jean Antoine Nicolas de Caritat, Marquis de. 1976. "Essay on the Application of Mathematics to the Theory of Decision Making." In Keith M. Baker (ed.), Condorcet: Selected Writings. Indianapolis: Bobbs-Merrill.

31. Conklin, J. and Weil, W. Wicked Problems: Naming the Pain in Organizations. 3M Reading Room Research Center, 2005. (retrieved 3/31/2008 http://www.leanconstruction.org/pdf/wicked.pdf)

32. Conklin, J. Dialogue Mapping: Building Shared Understanding of Wicked Problems. Wiley, November 18, 2005, ISBN: 0470017686.

33. Cook, W.D., and M. Kress, "Ordinal Ranking with Intensity of Preference," Management Science 31(1), 26-32, 1985.

34. Cook, W.D., and L.M. Seiford. "Priority Ranking and Consensus Formation," Management Science 24(16), 1721-1732, (1978).

35. Cowan, Nelson. The Magical Number 4 in Short-Term Memory: A reconsideration of Mental Storage Capacity. Behavioral and Brain Sciences 24 (1), 2001.

36. Dalkey, N. Delphi. The RAND Corporation, Second Symposium on Long-Range Forecasting and Planning, Almagordo, New Mexico, October 11-12, 1967.

37. Dalkey, N., Brown, B. and Cochran, S. The Delphi Method, III: Use of Self Ratings to Improve Group Estimates. United States Air Force Project Rand, November 1969.

38. Danielsson, M. and Ohlsson, K. Decision Making in Emergency Management: A Survey Study. International Journal of Cognitive Ergonomics, 1999, 3(2), 91-99.

39. DeSanctis , G., Gallupe, R.B. (1987), 'A Foundation for the study of Group Decision Support Systems'. Management Science, 33(5), 589-609.

40. Dickson, G.W., Senn, J.A., and Chervany, N.L. Research in management information systems: The Minnesota experiments. Manage. Sci. 23, 9 (Mar. 1977), 973-921.

41. Dickson, G.W., Lee, J.E., Robinson, L., and eat, R. Observations on GDSS Interaction: Chauffeured, Facilitated, and User-Driven Systems, 22nd HICSS, 1989, Maui, HI, pp. 337-343.

42. Dickson, G., Limayem, M., Lee Partridge J., DeSanctis , G. (1996), 'Facilitating Computer Supported Meetings: A Cumulative Analysis In A Multiple Criteria Task Environment'. Group Decision and Negotiation, 5(1 ), 51-72.

43. Digh, P., Global Literacies: Lessons on Business Leadership and National Cultures (Simon & Schuster 2000).

44. Dictionary.com referenced 12/02/2007 (http://dictionary.reference.com/browse/bias)

45. Drucker, F.D., Hammond, J., Keeney, R., Raiffa, H., and Hayashi, A. Harvard Business Review on Decision Making, Harvard Business School Press (May 2001).

46. Eden, C. (1990). 'The Unfolding Nature of Group Decision Support. In C. Eden, J. Radford (Eds.), Tackling Strategic Problems-The Role of Group Decision Support. Sage.

47. Eom, Sean. Decision Support Systems, International Encyclopedia of Business and Management, 2nd Ed, Edited by Malcolm Warner, International Thomson Business Publishing Co, London, London. England, 2001.

48. Epinion.com, retrieved March 20, 2009.

49. Ericsson, K. A., & Staszewski, J. J. (1989). Skilled memory and expertise: Mechanisms of exceptional performance. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 235-267). Hillsdale, NJ: Lawrence Erlbaum.

50. Fisher, R.G. The Delphi Method: A Description, The Journal of Academic Librarianship, vol 4, no 2, p 64-70, 1978.

51. Fjermestad, J., Hiltz, S.R. (1998). An Assessment of Group Support Systems Experimental Research: Methodology and Results, Journal of Management Information Systems, 15 (3), 7-149.

52. Fontaine, M. Keeping Communities of Practice Afloat: Understanding and fostering roles in communities. Knowledge Management Review, 4, 4, Sept/Oct 2001.

53. Gehani, N. The Database Book Principles & Practice Using MySQL. Silicon Press, 2007.

54. George, J., Dennis, A., Nunamaker, J. (1992), 'An Experimental Investigation of Facilitation in an EMS Decision Room'. Group Decision and Negotiation, No. 1, 57-70.

55. Gigerenzer, Gerd and Todd, Peter and the ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

56. Google Summer of Code, http://socghop.appspot.com/program/home/google/gsoc2009, (retrieved March 27, 2009).

57. Griffith, T., Fuller M., Northcraft G. (1998), 'Facilitator Influence in Group Support Systems'. Information Systems Research, 9( 1 ), 20-36.

58. Hall, W.A., and Y.Y. Haimes, "The Surrogate Worth Trade-Off Method with Multiple Decision-Makers." In M, Zeleny, eds., Multiple Criteria Decision Making. Berlin: Springer-Verlag, 1976.

59. Hamburg, Morris. Statistical Analysis for Decision Making. The Harbrace Series in Business and Economics, 1970.

60. Hardy, D., O’Brien, A.P., Gaskin, C., O’Brien A.J., Morrison-Ngatai, E., Skews, G., Ryan, T. and McNulty, N. Practical application of the Delphi technique in bicultural mental health nursing study in New Zealand, Methodological Issues In Nursing Research, Journal of Advanced Nursing, 46(1), 95-109. Blackwell Publishing, 2004.

61. Harrald, J. Agility and Discipline: Critical Success Factors for Disaster Response. Annals, AAPSS, 604, March 2006.

62. Harrald, J. Achieving Agility in Disaster Management. International Journal of Information Systems and Crisis Management, Volume I, Issue I, 2009.

63. Helmer, Olaf. Systematic Use of Expert Opinion. The RAND Corporation, Santa Monica, California. Paper was prepared for AFAG Board Meeting, November 1967.

64. Heron, J. (1989). The Facilitator's Handbook, Kogan Page.

65. Hevner, A., March, S., Park, J., and Ram, S. Design Science in Information

Systems Research. MIS Quarterly Vol. 28 No.1, pp. 75 – 105, March 2004.

66. Hill, R.J. A Note on the Inconsistency in Paired Comparison Judgments. American Sociological Review 18:418-440, Oct. 1953.

67. Hiltz, Starr Roxanne and Turoff, Murray. The Network Nation. MIT Press, 1978.

68. Hiltz. S.R.. Johnson, K.. and Turoff. M. The effects of formal human leadership and computer-generated decision aids on problem solving via computer: A controlled experiment. Res. Rep. 18. Computerized Conferencing and Communications Center, New Jersey Institute of Technology, Newark, 1982.

69. Hiltz, S.R., Online Communities: A Case Study of the Office of the Future, Ablex Press, 1984.

70. Hiltz, S.R. and Turoff, M., Structuring Computer-Mediated Communication Systems to Avoid Information Overload. CACM, July 1985, Volume 28, Number 7.

71. Hiltz, S.R., Dufner, D., Fjermestad, J., Kim, Y., Ocker, R., Rana, A., and Turoff, M. Distributed Group Support Systems: Theory Development and Experimentation, Book chapter for: Olsen, B.M., Smith, J.B. and Malone, T., eds., Coordination Theory and Collaboration Technology, Hillsdale NJ: Lawrence Erlbaum Associates, 1996.

72. Horn, R.E. Knowledge mapping for complex social messes. A speech to the Packard Foundation Conference on Knowledge Management (http://www.stanford.edu/~rhorn/SpchPackard.html), 2001.

73. Huang, H. and Ocker, R.L. Preliminary Insights into the In-Group/Out-Group Effect in Partially Distributed Teams: An Analysis of Participant Reflections, SIGMIS-CPR’06, April 13–15, 2006, Claremont, California, USA.

74. Huber. G. Organizational information systems: Determinants of their performance and behavior. Manage. Sci. 28, 2 (Feb. 1982), 138-153.

75. Huber, G.P. (1984). "Issues in the Design of Group Decision Support Systems," MIS Quarterly 8(3), 195-204.

76. Ichikawa, A., ed. (1980). Theory and Method for Multi-Objective Decision, Soc, of Instru. & Control Engr. (in Japanese).

77. Islei, G., and G. Lockett. (1991), "Group Decision Making: Suppositions and Practice." Socio-Economic Planning Sciences 25(1), 67-81.

78. Isermann, H. (1984). Investment and Financial Planning in a General Partnerships. In M. Grauer and A.P. Wierbicki, eds., Interactive Decision Analysis. Berlin: Springer-Verlag.

79. Iz, Peri H. and Gardiner, Lorraine R. Analysis of Multiple Criteria Decision Support Systems for Cooperative Groups. Group Decision and Negotiation, 2:61-79 (1993) Kluwer Academic Publishers.

80. Keen, P. and Morton, M. Decision support systems: an organizational perspective, 1978 - Addison-Wesley.

81. Keinan, G., Friedland, N. and Ben-Porath, Y. (1987) .Decision-making under stress: Scanning of alternatives under physical threat., Acta Psychologica, Elsevier Science Publishers B.V., North Holland, Vol. 64, pp.219-228.

82. Kerstholt, J. Dynamic Decision Making. TNO Human Factors Netherlands, 1996.

83. King, D. (1993) 'Intelligent support systems: art, augmentation, and agents', in R.H. Sprague, Jr and H.J. Watson

84. (eds), Decision Support Systems: Putting Theory into Practice, 3rd Ed., Englewood Cliffs, NJ: Prentice Hall.

85. Kok, M. (1986). The Interface with Decision Makers and Some Experimental Results in Interactive Multiple Objective Programming Methods, European Journal of Operational Research 26(1), 96-107.

86. Kok, M., and F.A. Lootsma. (1985). Pairwise-Comparison Methods in Multiple Objective Programming with Applications in a Long-Term Energy-Planning Model, European Journal of Operational Research 22(1), 44-45.

87. Kontogiannis, T. and Kossiavelou, Z., Stress and team performance: principles and challenges for intelligent decision aids, Safety Science, December, Vol.33, Issue 3, pp. 103 -128, 1999.

88. Korhonen, P., and J. Wallenius. (1990). "Supporting Individuals in Group Decision Making," Theory and Decision 28(3), 313-329.

89. Kowalski-Trakofler, K., Vaught, C. and Scharf, T. Judgment and decision making under stress: an overview for emergency managers. Int. J. Emergency Management, Vol. 1, No. 3, pp. 278-289, 2003.

90. Li, Z., Design and Evaluation of a Voting Tool in a Collaborative Environment, PhD dissertation, 2003, IS Dept of NJIT. http://www.library.njit.edu/etd/index.cfm

91. Liang, G., and M. Wang. (1991). "A Fuzzy Multi-Criteria Decision-Making Method for Facility Site Location," International Journal of Production Research 29(11), 2313-2330.

92. Limayem, M., Lee-Partridge, J., Dickson. G., DeSanctis , G. (1993). 'Enhancing GDSS Effectiveness: Automated versus Human Facilitation', Proceedings of the 26th Annual Hawaii International Conference on System Sciences. IEEE [27] Society Press. Los Alamitos. CA.

93. Limayem, M. and DeSanctus, G. Providing Decisional Guidance for Multicriteria Decision Making in Groups, Information Systems Research, Vol. 22, No. 4, pp. 386-401.

94. Limayem, M. Human Versus Automated Facilitation in the GSS Context. The DATA BASE for Advances in Information Systems – Spring-Summer 2006, Vol. 37, Nos 2&3.

95. Lin, F. and Hsueh, C. Knowledge Map Creation and Maintenance for Virtual Communities of Practice. Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS’03).

96. Lindblom, Charles. The Science of “Muddling Through.” Public Administrative Review, Vol. 19, Spring 1959, pp. 78 - 88.

97. Lindblom, Charles. Still Muddling, Not Yet Through. Public Administration Review, Vol. 39, No. 6 (Nov. - Dec., 1979), pp. 517-526

98. Linstone, H. and Turoff, M., eds. (1975) The Delphi Method: Techniques and Applications, Addison Wesley Advanced Book Program. (An online version can be accessed via http://is.njit.edu/turoff)

99. Lootsma, F.A. (1988). "Numerical Scaling of Human Judgement in Pairwise-Comparison Methods for Fuzzy Multicriteria Decision Analysis." In: G. Mitra, eds., Mathematical Models.for Decision Support. Berlin: Springer-Verlag.

100. March, S.T. and Smith, G.F. Design and natural science research on information technology. Decision Support Systems 15 (1995) 251-266.

101. Matthews, Lerow, Reece, Wendy, and Burggraf, Linda. Estimating production potentials: Expert bias in applied decision making. Department of Energy (DOE) Information Bridge: DOE Scientific and Technical Information. Engineering Psychology & Cognitive Ergonomics, October 28-30, 1998.

102. Programming problems using GAMS: Effective implementation of the ε-constraint method. Retrieved April 18, 2008, from http://www.gams.com/~erwin/book/lp.pdf.

103. McLennan, J., Holgate, A., and Wearing, A. Human Information Processing Aspects of Effective Emergency Incident Management Decision Making. Human Factors of Decision Making in Complex Systems, Dunblane, Scotland, September 2003.

104. McGrath, J.E., Groups: Interaction and Performance, Prentice Hall, Englewood Cliffs, NJ, 1984.

105. McNurlin, Barbara C. and Sprague, Ralph H. Jr., Information Systems Management In Practice, 7th ed. Pearson Prentice Hall, 2006.

106. Merriam-Webster Online Dictionary, definition of "bias". Retrieved December 2, 2007, from http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=bias

107. Miller, J.G. Information input overload. In Proceedings of the Conference on Self-Organizing Systems, M.C. Yovits, G.T. Jacobi, and G.D. Goldstein, Eds. Spartan Books, Washington, 1962.

108. Miller, George A. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. The Psychological Review, 1956, Vol. 63, pp. 81-97.

109. Miranda, E. Improving Subjective Estimates Using Paired Comparisons. IEEE Software, Jan.-Feb. 2001.

110. Mitchell, J.K. ed. 1999, Crucibles of Hazard: Mega-Cities and Disasters in Transition, United Nations University Press, Tokyo.

111. Moore, Omar Khayyam. Divination – A New Perspective. American Anthropologist, 59, 1957.

112. Murphy, Priscilla. Affiliation Bias and Expert Disagreement in Framing the Nicotine Addiction Debate. Science, Technology, & Human Values, Vol. 26 No. 3, Summer, 2001, 278-299, 2001 Sage Publications.

113. Naval War College, Joint Military Operations Department. Operational Decision Making. United States Naval War College instructional publication NWC 4108, 1996.

114. Nakayama, H., et al. (1979). "Methodology for Group Decision Support with an Application to Assessment of Residential Environment," IEEE Transactions on Systems, Man, and Cybernetics 9(9), 477-485.

115. Netflix, Netflix.com, retrieved on March 18, 2009.

116. Niederman, F., Beise, C.M., and Beranek, P.M. (1996), 'Issues and Concerns about Computer-Supported Meetings: The Facilitator's Perspective', MIS Quarterly, 20(1), 1-22.

117. Nutt, Paul. Why Decisions Fail. Berrett-Koehler Publishers, 1st edition, 2002.

118. Pitt, J., Kamara, L., Sergot, M. and Artikis, A. Formalization of a Voting Protocol for Virtual Organizations. AAMAS'05, July 25-29, 2005, Utrecht, The Netherlands.

119. Plotnick, Linda and Turoff, M. (2009) Mitigating Threat Rigidity in Crisis, in Information Systems for Emergency Management (Bartel van de Walle, Murray Turoff, and Starr Roxanne Hiltz, eds), a volume in the Advances in Management Information Systems monograph series (Editor in Chief: Vladimir Zwass).

120. Plotnick, L., Gomez, E.A., White, C. and Turoff, M. A Dynamic Thurstonian Method Utilized for Interpretation of Alert Levels and Public Perception. Proceedings of the 4th Annual ISCRAM, Delft, The Netherlands, 2007.

121. Plotnick, L., Ocker, R., Hiltz, S.R., and Rosson, M.B. Leadership Roles and Communication Issues in Partially Distributed Emergency Response Software Development Teams: A Pilot Study. HICSS, 2008.

122. Preece, J. (2000). Online Communities: Designing Usability, Supporting Sociability. Chichester, UK: John Wiley & Sons.

123. Rana, Ajaz R., Turoff, Murray and Hiltz, Starr Roxanne. Task and Technology Interaction (TTI): A Theory of Technological Support for Group Tasks, HICSS, 1997.

124. Rathwell, M.A. and Burns, A. (1985) 'Information systems support for group planning and decision making activities', MIS Quarterly 9 (3): 254–71.

125. Reagan-Cirincione, P. Improving the Accuracy of Group Judgment: A Process Intervention Combining Group Facilitation, Social Judgment Analysis, and Information Technology. Organizational Behavior and Human Decision Processes, Vol. 58, No. 2, 1994, pp. 246-270.

126. Rittel, H., and M. Webber; "Dilemmas in a General Theory of Planning" pp 155-169, Policy Sciences, Vol. 4, Elsevier Scientific Publishing Company, Inc., Amsterdam, 1973.

127. Rob, P. and Coronel, C. Database Systems: Design, Implementation, and Management, 7th Ed., Thomson Publisher, 2007.

128. Robertson, S. Voter-Centered Design: Toward A Voter Decision Support System, ACM Transactions on Computer-Human Interaction, Vol. 12, No. 2, June 2005, Pages 263-292.

129. Rodriguez, David M. Dominating Time in the Operational Decision Making Process. Final Report, Naval War College, Newport, RI, June 1997.

130. Rosson, M.B. and Carroll, J.M. Usability Engineering: Scenario-Based Development of Human-Computer Interaction, Morgan Kaufmann Publishers, 2002.

131. Rouse, W.B. Design of man-computer interfaces for on-line interactive systems. Proc. IEEE, Vol. 63, No. 6 (June 1975), pp. 847-857.

132. Saaty, T.L. (1988). Decision Making for Leaders. Pittsburgh, PA: RWS Publications.

133. Sackman, H. 1975. Delphi Critique. Lexington, Mass.: D.C. Heath.

134. Sahana Project, Free and Open Source Disaster Management System, http://www.sahana.lk/ , (retrieved March 27, 2009).

135. Salas, E., Driskell, J.E. and Hughes, S. (1996) 'The study of stress and human performance', in J.E. Driskell and E. Salas (Eds.), Stress and Human Performance, Lawrence Erlbaum Associates, New Jersey, pp. 1-45.

136. Sheridan, T.B. and Ferrell, W.R. Man-Machine Systems: Information, Control, and Decision Models of Human Performance. MIT Press, Cambridge, Mass., 1974.

137. Simon, H.A. (1960) The New Science of Management Decision, New York: Harper & Row.

138. Simon, Herbert. The Sciences of the Artificial. The MIT Press, 1st edition, 1969.

139. Simon, H.A. 'The Structure of Ill-Structured Problems.' Artificial Intelligence, 4 (1973), 181-201.

140. Simon, Herbert and Associates. Decision Making and Problem Solving, Research Briefings 1986: Report of the Research Briefing Panel on Decision Making and Problem Solving by the National Academy of Sciences. Published by National Academy Press, Washington, DC.

141. Skertchly, A. and Skertchly, K. Catastrophe management: coping with totally unexpected extreme disasters. The Australian Journal of Emergency Management, Volume 16, Issue 1, Autumn 2001.

142. Soanes, C. and Stevenson, A., Concise Oxford English Dictionary, 2003.

143. Sodahead, http://www.sodahead.com/about-us/, Retrieved on March 18, 2009.

144. Sprague, R. H. and E. D. Carlson (1982). Building effective decision support systems. Englewood Cliffs, N.J., Prentice-Hall.

145. Steuer, R.E., and E.U. Choo. (1983). "An Interactive Weighted Tchebycheff Procedure for Multiple Objective Programming," Mathematical Programming 26(1), 326-344.

146. Stevens, C.H. Many-to-many communication. CISR Working Paper No. 72, Center for Information Systems Research, MIT, Cambridge, Mass., 1981.

147. Tarmizi, H., de Vreede, G.J. and Zigurs, Ilze. Identifying Challenges for Facilitation in Communities of Practice. 39th HICSS, 2006.

148. Tabarrok, A. Arrow’s Impossibility Theorem. Department of Economics, George Mason University. Retrieved on January 17, 2008. (http://mason.gmu.edu/~atabarro/arrowstheorem.pdf).

149. The Destin Log Newspaper, http://www.thedestinlog.com/news/survey_8044___article.html/win_florida.html, Retrieved March 18, 2009.

150. Thurstone, L.L. A Law of Comparative Judgment. Psychological Review, 34, pp. 273-286, 1927a.

151. Thurstone, L.L. The Method of Paired Comparisons for Social Values, Journal of Abnormal and Social Psychology, 21, 384-400, 1927b.

152. Todd, Peter and Gigerenzer, Gerd. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

153. Turoff, M. and Hiltz, S.R. National Library of Medicine Paper, 2008.

154. Turoff, M. and Hiltz, S.R. Computer Support for Group Versus Individual Decisions. IEEE Transactions on Communications, Vol. 30, No. 1, January 1982, pp. 82-91.

155. Turoff, M. Computer Mediated Communication Requirements for Group Support. Journal of Organizational Computing, Vol. 1, No. 1, 1991, pp. 85-113.

156. Turoff, M., Rao, U., and Hiltz, S.R. (1991) Collaborative Hypertext in Computer Mediated Communications. Reprinted from Proceedings of HICSS, Vol. IV, Hawaii, January 8-11, 1991.

157. Turoff, M., Hiltz, S.R., Bahgat, A.N.F, Rana, A.R. Distributed Group Support Systems, MIS Quarterly, Vol. 17, No. 4 (Dec., 1993), pp. 399-417.

158. Turoff, M. and Hiltz, S.R. Computer Based Delphi Processes. Invited book chapter in Michael Adler and Erio Ziglio (eds.), Gazing Into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health, London, Jessica Kingsley Publishers, 1996.

159. Turoff, M. Hiltz, S.R. Bieber, M., and Rana, Ajaz. Collaborative Discourse Structures in Computer Mediated Group Communications, HICSS, 1999.

160. Turoff, M. Overheads Set 3: Design of Interactive Systems (1998). Retrieved December 5, 2007, from http://web.njit.edu/~turoff/coursenotes/CIS732/732index.html

161. Turoff, M. Overheads Set 2: Management of Information Systems (2000). Retrieved December 5, 2007, from http://web.njit.edu/~turoff/coursenotes/IS679/679index.html

162. Turoff, M., Hiltz, S.R., Cho, H., Li, Z. and Wang, Y. Social Decision Support Systems (SDSS). Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS 2002).

163. Turoff, M. Chumer, M., Van de Walle, B. and Yao, X. The Design of a Dynamic Emergency Response Management Information System, Journal of Information Technology Theory and Applications (2004).

164. Turoff, M., White, C. and Plotnick, L. Dynamic Emergency Response Management for Large Scale Extreme Events. International Conference on Information Systems, Pre-ICIS SIG DSS 2007 Workshop.

165. Tversky, Amos and Kahneman, Daniel. Judgment under Uncertainty: Heuristics and Biases. Science, New Series, Vol. 185, No. 4157. (Sep. 27, 1974), pp. 1124-1131.

166. Vetschera, R. (1991). "Integrating Databases and Preference Evaluations in Group Decision Support: A Feedback-Oriented Approach," Decision Support Systems 7(1), 67-77.

167. Voss, A., and Schafer, A. Discourse Knowledge Management in Communities of Practice. Proceedings of the 14th International Workshop on Database and Expert Systems Applications (DEXA’03).

168. Vreede, G.J. De (2001). A Field Study into the Organizational Application of GSS, Journal of Information Technology Cases & Applications, 2(4).

169. Vreede, G.J. De, Niederman, F. and Paarlberg, I. Measuring Participants' Perception on Facilitation in Group Support Systems Meetings. Proceedings of the 2001 ACM SIGCPR Conference on Computer Personnel Research, San Diego, California, pp. 173-181, 2001. ISBN 1-58113-363-4.

170. Wang, Yuanqiong. Design and Evaluation of a List Gathering Tool in a Web-Based Collaborative Environment. Doctoral Dissertation, New Jersey Institute of Technology, 2003. library.njit.edu

171. Webster’s New World Medical Dictionary, 2nd edition. Wiley Publishing, Inc., January 2003. ISBN: 0-7645-2461-5.

172. Weick, K.E. and Sutcliffe, K. Managing the Unexpected: Assuring High Performance in an Age of Complexity. Jossey-Bass, John Wiley & Sons, 2001.

173. Wenger, E., McDermott, R. and Snyder, W. Cultivating Communities of Practice. Harvard Business School Press, Boston, Mass. 2002.

174. Wenger, E., White, N., Smith, J.D., and Rowe, Kim. Technology for Communities. CEFRIO, 2005.

175. White, C., Hiltz, S.R., and Turoff, M. United We Stand: One Community, One Response. ISCRAM 2008b.

176. White, C., Plotnick, L., Addams-Moring, R., Turoff, M. and Hiltz, S.R. Leveraging a Wiki to Enhance Virtual Collaboration in the Emergency Domain. HICSS, 2008a.

177. White, Connie, Hiltz, Starr Roxanne and Turoff, Murray. Finding the Voice of a Virtual Community of Practice. International Conference on Information Systems, Quebec, Pre-ICIS Sixth Workshop on e-Business (WeB 2007).

178. White, C., Turoff, M. and Van de Walle, Bartel. A Dynamic Delphi Process Utilizing a Modified Thurstonian Scaling Process: Collaborative Judgment in Emergency Response. ISCRAM, 2007a, The Netherlands.

179. White, C., Plotnick, L., Turoff, M., and Hiltz, S.R. A Dynamic Voting Wiki Model. Americas Conference on Information Systems (AMCIS), Colorado, 2007b.

180. Whitworth, B. and McQueen, R.J. Voting before discussing: Computer voting as social communication, Proceedings of the 32nd HICSS, 1999.

181. Wilensky, H.L. Organizational Intelligence: Knowledge and Policy in Government and Industry. Basic Books, New York, 1967.

182. Yao, Xiang and Turoff, Murray. Using Task Structure to Improve Collaborative Scenario Creation. Information Systems for Crisis Response and Management (ISCRAM), 2005.

183. Zeleny, M. (1987) 'Management support systems: towards integrated knowledge management', Human Systems Management 7 (1): 59–70.

184. Zigurs, I., and Buckland, B. A Theory of Task-Technology Fit and Group Support Systems Effectiveness, MIS Quarterly, Vol. 22, No. 2, pp. 313-334, 1998.

185. Zigurs, Ilze, Buckland, Bonnie, Connolly, James and Wilson, Vance. A Test of Task-Technology Fit Theory for Group Support Systems. The DATA BASE for Advances in Information Systems, Summer-Fall 1999, Vol. 30.