Collaborative Decision Analysis

Literature Review of Collaborative Decision Analysis

State of the Art Paper

Connie White

New Jersey Institute of Technology (NJIT)

University Heights

connie.m.white@gmail.com

Spring 2008

SOTA Committee

Advisor: Murray Turoff

Committee Member: Starr Roxanne Hiltz

Committee Member: Bartel Van de Walle

“There are no more promising or important targets for basic scientific research than understanding how human minds, with and without the help of computers, solve problems and make decisions effectively, and improving our problem-solving and decision-making capabilities.”

Herbert Simon

BRIEF CONTENTS

CHAPTER 1: INTRODUCTION

CHAPTER 2: DECISION MAKING

CHAPTER 3: BIAS

CHAPTER 4: DECISION SUPPORT SYSTEMS

CHAPTER 5: VOTING

CHAPTER 6: DELPHI

CHAPTER 7: WHAT WE KNOW AND WHAT WE DON’T KNOW

LIST OF TABLES AND FIGURES

ACRONYMS AND ABBREVIATIONS

REFERENCES

TABLE OF CONTENTS

CHAPTER 1

INTRODUCTION

1.1 Objectives

1.2 Overview

CHAPTER 2

Decision Making

2.1 Abstract

2.2 Introduction

2.3 Differences

2.4 Real World Problems

2.5 Groups

2.6 Task Types

2.7 Heuristics

2.8 Discussions

2.9 Conclusions

CHAPTER 3

BIAS

3.1 Abstract

3.2 Introduction

3.3 Definitions

3.4 Bias Through Framing

3.5 Bias in Individuals and Groups Making Decisions

3.6 Bias in Bounded Rationality and Uncertainty

3.7 Information Overload and Bias

3.8 Bias in Experiments

3.8.1 Systematic Bias

3.8.2 Experimenter’s Bias

3.8.3 Response Bias

3.9 Conclusions

CHAPTER 4

DECISION SUPPORT SYSTEMS

4.1 Abstract

4.2 Introduction

4.2.1 Acronyms

4.2.2 Decision Support System Components

4.2.3 Database Management System

4.2.4 Decision Support Systems Defined

4.2.5 Wisdom and the Human Component

4.3 Design Contingencies

4.3.1 Size Matters

4.3.2 Proximity

4.3.2.1 Face-to-Face (F2F)

4.3.2.2 Partially Distributed Groups (PDTs)

4.3.2.3 Virtual

4.3.2.4 Online Communities

4.3.3 Problem Types

4.3.3.1 Structured

4.3.3.2 Semi-structured

4.3.3.3 Ill-structured

4.3.3.4 Tame

4.3.3.5 Wicked

4.4 Varieties of DSS

4.4.1 Computer Mediated Communication Systems (CMC)

4.4.2 Single User Decision Support Systems

4.4.3 Group Decision Support Systems (GDSS)

4.4.3.1 Levels of GDSS

4.4.4 Distributed Group Support System (DGSS)

4.4.5 Social Decision Support Systems (SDSS)

4.5 Information Overload

4.5.1 How It Happens

4.5.2 What Occurs From It

4.5.3 How To Decrease It

4.5.3.1 Data Management

4.5.3.2 Social Influence

4.5.3.3 Conferencing Systems

4.5.3.4 System Induced Measures

4.6 Process Catalyst

4.6.1 Leader

4.6.2 Facilitator

4.6.3 Moderator

4.6.3.1 Motivation

4.6.4 Chauffeur

4.6.5 Automated

4.7 Methodologies

4.7.1 Unidimensional Data

4.7.2 Multi-criteria

4.7.3 Measuring Decision Support

4.8 Conclusions

CHAPTER 5

VOTING

5.1 Abstract

5.2 Introduction

5.2.1 Voting for Decision Making

5.2.2 Definitions

5.3 Voting Methods

5.3.1 Plurality

5.3.2 Plurality with Elimination

5.3.3 Borda Count

5.3.4 Pairwise Comparison

5.4 Fairness Criterion

5.4.1 Majority

5.4.2 Condorcet

5.4.3 Monotonicity

5.4.4 Independence of Irrelevant Alternatives

5.5 Arrow’s Impossibility Theorem

5.6 Protocol

5.6.1 Robert’s Rules of Order

5.7 Roles

5.8 When to Vote

5.9 Bias in Voting

5.10 Uncertainty

5.11 Anonymity in Voting

5.12 Conclusions

CHAPTER 6

DELPHI

6.1 Abstract

6.2 Introduction

6.3 Characteristics of Delphi

6.3.1 Anonymity

6.3.2 Feedback

6.3.3 Statistical Group Response

6.4 Delphi Processes

6.4.1 Early Delphi Processes

6.5 Problems with Delphi

6.5.1 Experiments

6.5.2 Subjects

6.5.3 Consensus

6.6 Bias

6.7 Experts

6.7.1 Weights

6.7.2 Self-Rating

6.8 Experiments

6.8.1 Groups

6.8.1.1 Large Groups of Experts

6.8.1.2 Subgroups of Experts

6.8.2 Rounds

6.8.3 Asynchronous

6.8.4 Uncertainty

6.8.4.1 Missing Data

6.8.4.2 Interpretation

6.9 Conclusions

CHAPTER 7

WHAT WE KNOW AND WHAT WE DON’T KNOW

LIST OF FIGURES AND TABLES

FIGURES

2.1 Dynamic Decision Making

2.2 Support Levels and Dimensions

4.1 Architectural Support of the DDM

4.2 Different Settings for GDSS

5.1 Factors Influencing Opinions

TABLES

2.1 Group Dynamics

2.2 Task Contingencies

2.3 Validation Contingencies

2.4 Coordination Contingencies

ACRONYMS AND ABBREVIATIONS

AHP Analytic Hierarchy Process

CMC Computer Mediated Communication System

CSCW Computer-Supported Cooperative Work

CS Conferencing Systems

CSS Collaborative Support Systems

DDM Dialog, Data and Modeling

DGSS Distributed Group Support Systems

DM Decision Makers

DSS Decision Support System

EMS Electronic Meeting Systems

GDSS Group Decision Support System

GSS Group Support Systems

IBIS Issue-Based Information System

ISCRAM Information Systems for Crisis Response and Management

MAUT Multi-Attribute Utility Theory

MCDM Multi-Criteria Decision Making

MMP Multiobjective Mathematical Programming

SOTA State-of-the-Art Paper

CHAPTER 1

INTRODUCTION

Do you think that I know something you don't know?

What do you want from me?

If I don't promise you the answer would you go?

What do you want from me?

- Pink Floyd

1.1 Objectives

The purpose of this State-of-the-Art paper (SOTA) is to examine the literature relevant to the study of collaboration in decision making. In particular, the focus is on large groups of experts working together through computer mediated communication systems (CMC) to produce group opinions. These individuals are not collocated; they are distributed throughout the world, working together as virtual teams on particular parts of a larger problem. The idea behind this strategy is that individuals can work on the subproblems concerning their particular areas of expertise, dividing the problem into its atomic parts.

1.2 Overview

Difficulties arise when large groups of experts work together using collaborative decision making software systems. In particular, how these individuals work together in a decision making process is analyzed. Individual experts form their opinions independently of one another, but the outcome is a collaborative effort reflected by a single group opinion calculated from multiple inputs. Experts may or may not have knowledge in a given area, and so may or may not be able to contribute solutions or input to a problem; these issues are explored. This is no trivial task, as many issues arise during these efforts. Individuals can be biased toward a particular interest, uncertain in their decision making, or have no opinion on a given domain due to their lack of knowledge in that particular area.

Many issues arise during the decision making process; these have been identified and are described in a variety of ways in the literature. In the second chapter of this SOTA, processes of decision making are explored as they are defined, in multiple ways, in the literature. Group dynamics and task types are identified, and the heuristics used in the cognitive process of dealing with information overload are presented. Chapter 3 explores the numerous ways in which bias can affect decision making throughout the entire process, for both individuals and groups. Bias is identified in how problems are framed, as a result of information overload, and in experimental research. Individuals suffer from bias, as do groups. Biases can enter at many stages of the experimental process, and some of these are identified. In Chapter 4, decision support systems are explored and analyzed for their ability to aid individuals in the process. Chapter 5 focuses on how the group opinion is formulated using voting methods: different voting processes are examined, protocols are reviewed, and Arrow’s Impossibility Theorem is covered. Chapter 6 defines the Delphi technique, its processes, and how it has been used in the past; what constitutes an expert and problems with experimental outcomes are explored. This SOTA ends by posing the questions raised by the issues identified in the literature review.

CHAPTER 2

DECISION MAKING

2.1 Abstract

Individuals make decisions, and groups of individuals work together to solve problems and make decisions. Groups can reach consensus or fall into conflict, and problems can be addressed according to the type of task being considered. Decision making is a complex topic with many avenues to pursue; to keep within the scope of this paper, only particular areas of decision making are addressed, for this intriguing area could never feasibly be covered completely in one writing. Herbert Simon divides this type of group interaction into two areas: he categorizes the activities of “fixing agendas, setting goals and designing actions” as problem solving, and “evaluating and choosing” as decision making. Throughout this paper, I will assume both to be part of the decision making process and use the terms interchangeably. Problem solving can be bombarded by the enormous amount of information that may be relevant to a given situation. For effective decision making outcomes, the group dynamics and the information need to be managed in a way that is conducive to the best result, given the way people work together and how they think. Humans have numerous, often misguided, ways of processing massive amounts of information. Due to time, information and other bounding factors, an exhaustive search of all available solutions is not possible. The search must be restricted so that only the best information is considered and non-pertinent information is discarded.

2.2 Introduction

There are many definitions of decision making to be found in the literature (Simon, 1969; Hiltz and Turoff, 1978; Turoff, 1996, 1998; Whitworth and McQueen, 1999; Drucker, 2001; Nutt, 2002; White, et al., 2007). Although each version uses its own particular language in outlining the steps that define the process of decision making, there is a common set of functionalities. They are all more or less defined as follows:

1. Set the agenda; recognize the problem;

2. Classify/define/represent/assess the problem;

3. Develop a list of solutions; find alternatives; list boundaries;

4. Choose a solution;

5. Provide some form of feedback (in some versions); evaluate; test the solution against the end result.

Various researchers’ approaches to this list can differ greatly, as can the outcome and even how the process is managed. For example, one task in any decision problem is to draw up a list of the possible actions available. Some believe that considerable attention should be paid to this task of ‘list generation,’ because the choices of action will be limited to those contained within the list (Simon, 1969). One theory holds that it is important to identify a list of all possible solutions, one that exhausts the possibilities (Lindley, 1971). Others argue that this is not an optimal approach, for reasons described as bounded rationality (Simon, 1969). The same basic idea underlies ‘muddling through,’ in which a feasible subset of possibilities is suggested as the only realistic way to select an optimal solution (Lindblom, 1959, 1979): incremental changes are made to the present situation, fine tuning and improving its imperfections, rather than implementing opposing alternatives as knee-jerk reactions to bad situations.

Simon characterizes the intelligence gathering phase, which he refers to as ‘finding alternatives,’ as holding the key to how a problem is to be handled, through the level of understanding achieved: the more deeply one understands the problem, the more efficient the means to the solution will be (Simon, 1969). A better understanding of the problem can be derived from discussions between team members. Increased discussion lessens ambiguity and conflict, increasing consensus (Turoff and Hiltz, 1978; White, et al., 2007a).

Many other factors can influence how individuals or groups interact to make more optimal decisions. Certain features have been found to be conducive to improving the decision making process. Implemented in a system, these features can increase participation amongst team members and can offer asynchronous interaction, generating an environment that will produce collective intelligence (Hiltz and Turoff, 1975). These are listed as:

1. “Anonymity;

2. Independent generation of ideas or judgments, by assuring that all participants have an opportunity to think and record their ideas or judgments before receiving the ideas of others;

3. Specification of modes of communication for some or all the communication i.e. the use of written communications;

4. Mechanisms for assuring equality of opportunity to participate;

5. Appointed facilitators to assure the flow of communications in the prestructured manner (rather than reliance on informal leadership from within the group itself);

6. Specification of allowable subjects of and forms of communications (example: voting or discussion segregated by time periods);

7. Some sort of organized feedback to the group of the input of each member and the aggregate ‘group decision’ that is emerging;

8. Specification of allowable ‘who-to-whom’ patterns of communication (no private communications)” (Hiltz and Turoff, 1978, pp. 282-283).

2.3 Differences

One variable that plays out differently from one methodology and researcher to the next is the manner in which the decision process is allowed to proceed. For example, some of the steps are to be fulfilled at particular times in a discrete, synchronous manner, while others are dynamic and offer an anytime/anywhere asynchronous environment (Simon, 1969; Hiltz and Turoff, 1978; Turoff, 1996, 1998; Whitworth and McQueen, 1999; Drucker, 2001; Nutt, 2002; White, et al., 2007b).

In 2007, White, Plotnick, Turoff and Hiltz extended these phases to reflect those of a social decision support system (SDSS) (2007a). Figure 2.1 demonstrates how the phases all point at one another and are interconnected. This is because the process is asynchronous and adheres to the other characteristics of Delphi techniques and SDSS: the phases are not static but dynamic, so that anybody can participate in any phase, and the decision has the flexibility to revisit any phase at any time.

Figure 2.1 Dynamic Decision Making

The top phase, Problem Definition, is where the facilitator triggers the process by putting an agenda before the group. Next is the Intelligence Gathering process, where individuals start to research topics within the problem domain. Members of the group present information they have discovered and post it to a common area for all to view. The next process is Argumentation. This is a forum discussion area where knowledge exchange is greater due to communication amongst team members. If there is a disagreement on an issue between members, arguments for and against can be debated in a forum for all to observe. Idea Generation is where people start posting any solutions they may have to offer the group. It is from this area that all ideas are ‘put on the table.’ These can be viewed by all, at which point they may either spark more debate or, if time dictates, lead to a decision among the available ideas in the Implementation process. This is by no means a final process, as the directed lines indicate. Choices do not always work out, and other selections must then be made from the existing pool. This dynamic model meets the needs of the group when asynchronous interaction is a requirement for decision makers (White, Hiltz and Turoff, 2007c).
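The interconnected phases described above can be sketched as a small directed graph. The following is a minimal Python sketch; the phase names come from the text, while the fully connected adjacency structure is an assumption based on the description that the phases all point at one another and any phase can be revisited at any time:

```python
# Hypothetical sketch of the dynamic decision-making model: every
# phase is reachable from every other, so a group can revisit any
# phase at any time (asynchronous, non-linear interaction).

PHASES = [
    "Problem Definition",
    "Intelligence Gathering",
    "Argumentation",
    "Idea Generation",
    "Implementation",
]

# Fully connected directed graph: each phase points at all the others.
TRANSITIONS = {p: [q for q in PHASES if q != p] for p in PHASES}

def can_move(current: str, target: str) -> bool:
    """Any distinct phase may be (re)visited from any other phase."""
    return target in TRANSITIONS.get(current, [])
```

For example, `can_move("Implementation", "Idea Generation")` is true, reflecting that a choice that does not work out sends the group back to the existing pool of ideas.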

Brian Whitworth and Robert McQueen offer a higher macro-level breakdown where they describe their process in periods:

1. “Intelligence. A period of idea generation when the problem is defined and relevant ideas and information are brought out in the open;

2. Design. A period of analysis, where alternatives are identified and arguments presented;

3. Choice. The final stage where one or more decisions are made, with the intention to implement them” (Whitworth and McQueen, 1999).

So, in conclusion, there is a set of core functionalities common to all decision making processes. They are simply modified to fit the environment in which they are set, in order to optimize the end result.

Although there is a core set of commonalities, decision making using one methodology can differ greatly from another. One example is how individual opinions are expressed and how group consensus is calculated. One way to represent a group opinion is to distribute a vote on a certain issue to the members. Even within decision making models that use voting for the group opinion, there are differences that can make an enormous difference in the outcome. One argument is that a vote should be cast by the group before a discussion occurs, i.e., the Voting Before Discussion (VBD) method (Whitworth and McQueen, 1999). The logic is that if everyone is in agreement, there is no need to go any further, and the group should move on to another problem, saving time. It is further argued that this, in turn, reduces interpersonal conflict, which strengthens the case for a group consensus. Votes may occur in different places within the process for other methods, being used for different reasons and having different effects. For example, Delphi studies have an initial vote so that the experts can display their initial opinions on a particular situation. And to further complicate what seemed a simple process defined as decision making, even Delphi studies can differ from one to the next in how voting is implemented and how calculations are made. This issue will be explored further in the chapter on voting.
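The Voting Before Discussion logic can be sketched in a few lines of Python. This is an illustrative sketch only: the 0.8 agreement threshold and the plurality tally are hypothetical choices of mine, not taken from Whitworth and McQueen:

```python
# Hypothetical sketch of Voting Before Discussion (VBD): poll the group
# first; if enough members already agree, skip discussion and move on.
from collections import Counter

def vbd_round(votes, agreement=0.8):
    """votes: one choice per member.
    Returns (group_choice, needs_discussion)."""
    tally = Counter(votes)
    choice, count = tally.most_common(1)[0]
    # If a large enough share already agrees, discussion is skipped.
    needs_discussion = (count / len(votes)) < agreement
    return choice, needs_discussion
```

With the assumed threshold, `vbd_round(["A", "A", "A", "A", "B"])` returns `("A", False)`: four of five members agree, so the group would move on without discussion, whereas a more divided ballot would flag the issue for debate first.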

Another difference between decision models is how ‘decisions’ are measured. In 2002, Paul Nutt reported a 20-year study of over 400 upper-management decisions in medium and large companies, aimed at discovering the best practices to follow for a decision to be implemented successfully. These decisions were measured in terms of decision value, by measuring the impact, merit and satisfaction of the decision; development time and decision use were also measured. From his studies, five decision making phases were identified:

1. Collect information to understand the claims calling for action;

2. Establish a direction that indicates the desired result;

3. Mount a systematic search for ideas;

4. Evaluate these ideas with the direction in mind;

5. Manage social and political barriers that can block the preferred course of action during implementation.

Again it is clear that these processes are the same as the aforementioned core set; they are simply targeted towards policy or strategic decision analysis and making. This is what is done for all decision making processes: they are implemented and modified to best suit a group, the problem being solved, and any technology that may be utilized (Zigurs and Buckland, 1998; Poole, et al., 1999).

2.4 Real World Problems

A large amount of work in the decision sciences focuses on static, complete information. This is not a reflection of how organizations exist and compete, especially given how IT has changed how we work and interact. Real world groups encounter dynamic situations where needed information may not exist and the present state of affairs has never been experienced in exactly this form, meaning there is a new set of circumstances not considered before. These sorts of problems are ill-structured “because ambiguous goals and shifting problem formulations are typical characteristics of problems of design” (Simon, et al., 1986, p. 14).

Defining the processes needed for a group to work together is determined primarily by the requirements identified for the communication needs of the users, both as individuals and as interacting group members. Perhaps the tasks and user roles can be used to determine the needs of the group, with support then offered for those needs.

2.5 Groups

Groups are characterized by many dynamics. According to McNurlin and Sprague, five group dynamics should be mentioned: 1) Group Membership, 2) Interaction, 3) Hierarchy, 4) Location and 5) Time. Table 2.1 lists the various types of groups along with their descriptions, borrowed from the authors (McNurlin and Sprague, 2006).

Table 2.1 Group Dynamics

Group membership is dictated by the rules of admittance. If the group is open, anyone can join; if the group is closed, only select members are allowed to participate. This has a crucial impact on online communities, where you must be a member in order to gain access. Once admitted, there are further roles that define what a member can do (Turoff and Hiltz, 2003; Li, et al., 2002; White, et al., 2007b). On one level, it is these roles that define members and give them their power (Robertson, 2005). Power comes in the form of rights: the right to vote, the right to post or reply. This works hand in hand with the dynamic McNurlin and Sprague outline as hierarchy. Traditionally, most organizations had a structure based on centralization and top-down management, but there is a trend in collaboration toward more decentralized management, where peers working together are on an equal playing field as far as their membership powers are concerned (Wikinomics, 2006; White, et al., 2008a). The idea of decentralization and a horizontal management structure is conducive to the desired output, where the strengths of the individuals pull together in the form of collective intelligence (Hiltz and Turoff, 1978).

The group dynamics of Interaction, Location and Time work hand in hand, especially in an online format. Interaction can be tightly coupled, where each member depends to some degree on another group member’s work, or loosely coupled, where the work of individuals is not so directly connected or dependent. It is the advantageous interactions made possible by asynchronous communication, however, that matter most: time, location and interaction can all be more flexible, given that anyone can interact anytime from almost anywhere (Hiltz and Turoff, 1978).

2.6 Task Types

When individuals or groups use a computer for decision making, certain communication technologies need to be available to support the needs of the users (Hiltz and Turoff, 1978). One approach to this has been by defining what the task type is, then proceeding to add some criteria for support given a certain defined task type.

Ilze Zigurs, et al. (1999) conducted a study of group support systems that gave certain task types unique profiles. In particular, problems were classified and corresponding types of technology needs were identified. Although not all held true to form, the study did support a task-technology fit analysis. The tasks the group’s individuals were working on were examined to critique whether the correct technology was in place to support each type of task with which group performance is associated (Zigurs, et al., 1999). “A group’s task is defined as the behavior requirements for accomplishing stated goals, via some process, using given information” (Campbell, 1988; Zigurs & Buckland, 1998).

Zigurs classified task types based on the following:

1. Outcome Multiplicity: there is more than one acceptable outcome to the problem; a set of solutions is available.

2. Solution Scheme Multiplicity: there is more than one way to accomplish the task.

3. Conflicting Interdependence: not just any subset of selections can be made, due to conflicts of interest between selections made together.

4. Solution Scheme-Outcome Uncertainty: there is uncertainty in the probability of reaching the goal with a particular solution.
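These four attributes can be encoded as a simple task profile. The sketch below is hypothetical: the `TaskProfile` structure, the example task, and the rule that a task with more attributes present is more complex are illustrative assumptions of mine, not Zigurs and Buckland's published categorization:

```python
# Hypothetical profile over the four task attributes listed above.
# Assumption for illustration: the more attributes present, the more
# complex the task, and the richer the technology support it needs.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    outcome_multiplicity: bool
    solution_scheme_multiplicity: bool
    conflicting_interdependence: bool
    solution_scheme_outcome_uncertainty: bool

    def complexity(self) -> int:
        """Count of complexity dimensions present (0..4)."""
        return sum([self.outcome_multiplicity,
                    self.solution_scheme_multiplicity,
                    self.conflicting_interdependence,
                    self.solution_scheme_outcome_uncertainty])

# A negotiation-style task: several acceptable outcomes, competing
# interests, and uncertainty about whether a solution will succeed.
negotiation = TaskProfile(True, False, True, True)
```

Under this encoding, `negotiation.complexity()` is 3, while a simple task with a single known outcome and method scores 0, suggesting it needs far less technology support.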

One must consider the users and the work they will be producing in order to support their needs. A critical analysis should be conducted of the group’s task, in which the requirements are identified and specified. The contributions of this research were outlined as follows:

1. “Operationalization of a theory-based task typology via a coding schema for group tasks;

2. Operationalization of a theory-based GSS typology via a characterization of GSS on a common set of dimensions; and

3. A test of the theory of task-technology fit using the operationalizations on a representative set of published GSS experiments” (Zigurs, et al, 1999, pg. 35).

Tasks are further defined as (Hiltz and Turoff, 1993, Turoff, 2000):

1. Deductive

2. Inductive

3. Relative

4. Negotiated

5. Conflictual

6. Pragmatic

Turoff further clarifies these conditions according to the type of problem being solved, based on complexity: a problem may be well-structured, semi-structured, unstructured or wicked (2000). These problem structures are based on their elements, relations and external environment, each of which is identified as known, uncertain, unknown or ambiguous. It is the unique combination of these variables that determines the type of problem being solved, which is then fitted to the overall task and thus to the approach to the problem; within each task classification, each problem type can be addressed.

Rana, et al. (1997) describe three dimensions along which to identify support for tasks. This Task and Technology Interaction (TTI) framework comprises dimensions of task complexity, group process validation, and task contingencies concerning coordination (Rana, Turoff and Hiltz, 1997). Borrowed from the authors are the three tables that lay out the information considered along these dimensions.

Table 2.2 Task Contingencies

Table 2.3 Validation Contingencies

Table 2.4 Coordination Contingencies

Rana, Turoff and Hiltz (1997) identified support levels, one for the individual, a second for processes and a third for meta processes, along with their functionalities along each dimension. Tasks fall within this functional paradigm across the three dimensions, which further explain and correspond to their needs, and in turn determine other architectural information that advises complementary needs and requirements.

Figure 2.2 Support Levels and Dimensions

An objective is to “characterize and empirically validate the differences in group interaction process that would be expected among task distinctions that can be conceptualized along three dimensions of the task classification framework” (Rana, et al., 1997). The TTI offers a structure on which other GSS studies can be based to measure the fit between task and technology.

As a concluding statement concerning tasks, it appears that once the task type is discovered, the engineering of the system can proceed. The task type defines the needs of the system, which in turn defines the needs of the user. So, to define the task type is to define the type of system that will be needed in order to enhance productivity in groups.

2.7 Heuristics

Voter decision making creates a high demand on cognitive resources (Robertson, 2005). To deal with this problem, voters use information-seeking and decision-making heuristics to ease the cognitive tasks. In this view, giving high weight to a position checklist, a candidate’s appearance, or an endorsement is simply a way to shortcut a more complex cognitive process. Lau et al. proposed that heuristics guide information-seeking behavior, and they provide evidence that the use of heuristics in political decision making is correlated with retention of important information and even with making more informed choices.
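The position-checklist shortcut described above can be sketched as a simple weighted score; the issues, weights and candidates below are entirely hypothetical examples of mine:

```python
# Hypothetical voter heuristic: instead of researching every candidate
# in depth, score each one against a weighted checklist of positions.

def checklist_score(candidate_positions, voter_weights):
    """Sum the weights of the issues on which the candidate agrees
    with the voter. A higher score is a better heuristic match."""
    return sum(weight for issue, weight in voter_weights.items()
               if candidate_positions.get(issue))

# Weights reflect how much each issue matters to this voter.
voter = {"education": 3, "taxes": 2, "environment": 1}
candidate_a = {"education": True, "taxes": False, "environment": True}
candidate_b = {"education": False, "taxes": True, "environment": True}
```

Here candidate_a scores 4 against this voter and candidate_b scores 3, so the heuristic favors candidate_a without any deeper deliberation, which is exactly the cognitive shortcut the literature describes.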

2.8 Discussions

When is it beneficial for a group to have a discussion? Turoff and Hiltz (2002) indicated that through group discussion, ambiguity lessens and understanding increases. Discussion increases the domain of knowledge within the group but, at the same time, increases the chance of disagreement and reduces groupthink (Whitworth, et al., 1999). How is this different in a face-to-face setting versus an online setting where the members are not collocated? Whitworth, et al. (1999) describe the benefits, indicating that online participation using a voting technique lessens the non-productivity that results from group dynamics when there is conflict. They also demonstrated that there is more focus on the materials being presented and their merit, rather than on intergroup conflict that can get things off track, or on conversations that digress and lead astray, wasting time; these are further problems with face-to-face meetings.

2.9 Conclusions

We have seen many different suggestions for problem solving, decision analysis and decision making structures.

Many of these have both unique steps and/or overlapping similarities. What is clear is that one cannot separate a detailed view from the type of problem, task, or decision issue and the presence or absence of certain properties:

1. Degree of innovation needed;

2. Types of constraints (e.g., social problems);

3. The nature of evidence required;

4. The interdisciplinary nature, etc.

Decision making is a complex problem, given the many variables that have to be taken into consideration when evaluating which process best reflects a group’s opinion on any given issue. How does one evaluate the criteria behind the meaning of ‘best reflects’? How is ‘best’ defined? This may depend on the given situation. For example, in some cases the group could define the criteria on which to base its definition of ‘best.’ In others, groups of experts may be external to a problem per se, but may have abstracted enough scenarios to justify a model on which to base judgment after an event has played out. In yet other cases, the problems are wicked and can never be systematically evaluated. It is in these sorts of cases and events that decision making is at present challenged, and group collaboration in particular needs to be further explored. Therefore, we believe that as part of the research in this area, an adequate problem solving structure has to be tailored to the nature of the problem and the environment in which it occurs.

CHAPTER 3

BIAS IN DECISION MAKING

Just the facts ma’am, just the facts.

Sgt. Joe Friday, Dragnet

3.1 Abstract

This section reviews bias on the part of individuals and groups in decision making. The literature is reviewed, bias is defined, and issues are outlined demonstrating the negative effects bias can have and why it should be eliminated from decision making, so that better decisions are made that more closely reflect the desired goals. The problem with bias in decision making is that it can ignore the best criteria presented and skew information, leading to decisions based on information that should be disregarded and thus to poor end results. Methods and strategies are explored that show how best to eliminate bias and strengthen group performance.

3.2 Introduction

Individuals can have bias in decision making, which can skew an outcome based on these personal influences. Personal bias can take the form of race, ethnicity, religion, political party, family name, or predispositions based on past experiences, i.e., internal influences. It can also be based on ethics, department affiliation, group affiliation, geographic region, and nationality, i.e., external influences. Recently, a British teacher working in Sudan allowed her students to name a teddy bear Mohammad. She was charged with and convicted of blasphemy by the government. People judged the incident with great bias, and beliefs were mixed on whether the punishment fit the crime: on one side of the issue, calls for the teacher’s death were made; on the other, the opinion was that this was a gross overreaction to a cultural faux pas. When bias enters into judgment, how does this affect the correctness of the decision to be made?

“These are differences that cannot be corrected or changed easily or by applications of a greater philosophy or belief unless both parties accept it” [1]. Some bias can be controlled; some cannot. The functionalities available to the group members may eliminate some forms of bias, but other forms, such as the ones described here, are based on beliefs, interests and other factors that are inherent to individuals. To judge favorably those who have beliefs, desires and interests similar to our own is a natural occurrence (Simon, et al, 1986).

In the remainder of this chapter, I will cover various definitions of bias and then show the types of bias likely to be encountered during this research. Everything from the experimenter’s bias, to the way in which data is collected, disseminated and framed, to the method of choosing the right group, to bias in decision making due to missing or incomplete data will be introduced. Last, I explore systematic bias in decision making and how to deal with it appropriately during statistical analysis.

3.3 Definitions of Bias

Bias is defined in a number of ways by a variety of references, most of which share the common element of ‘prejudice’ with a negative connotation.

Herbert Simon believes that we don’t know how we make up our minds under uncertainty. He writes, “In many cases, they can predict how they will behave, but the reasons people give for their choices can often be shown to be rationalizations and not closely related to their real motives. At times, the observation of a new problem can be the stimulus where decisions stir past experiences which may reside deep within the psyche and surface only when a set of extenuating circumstances occur which triggered the memory” (Simon, 1986). Next, I will cover a few ways in which bias is reflected in decision making and problem solving.

3.4 Bias Through Framing

Information can be perceived in a particular manner if it is framed as positive or negative or skewed in some way. Tversky and Kahneman referred to this as a judgmental heuristic called availability: “biases due to the retrievability of instances” (1974). This is an example of how bias produces error in decision making: there is a high level of association with the selection being made even though the choices are equally likely, and this skews the information, making you more likely to pay attention to what you know and to ignore the other given information. During a trial, evidence can be presented by experts from the exact same field, but the information will be presented to the jurors in a biased fashion depending on the side, prosecution or defense, that the expert represents. Which one is true? Can both be true, or can the truth be sought from both? How best can the jurors make a correct decision based on the evidence before them? It is well researched that information can be framed negatively or positively, which can influence or distort the information and have a direct effect on the outcome of the decision (Mathews, et al, 1998; Murphy, 2001).

Experts as well as laymen are influenced by information framing (Ericsson and Staszewski, 1989). Mathews, et al conducted a study demonstrating that both experts’ and non-experts’ forecasts are biased favorably when the information is framed in a positive manner. The same information presented in a negative light gave different results. It was further shown that positive information is processed differently cognitively, and forecasts were overestimated given positive information versus the equivalent negative information (Mathews, et al, 1998), although the experts were found to be more accurate in their judgments. They concluded that framing should be taken into consideration when experts are presented with the task of forecasting.

3.5 Bias in Individuals and Groups Making Decisions

Individual bias can be minimized, and greater accuracy achieved, when larger groups participate in the decision making effort, according to Condorcet’s Jury Theorem (Berg, 1993; Condorcet, 1976; Hiltz and Turoff, 1978). This is one of the primary reasons that groups make key decisions for policies, laws, verdicts and other important judgments where a best decision must be made for the greater good, or where the criteria pertinent to the situation need to be evaluated critically and without bias by a group of representatives. However, it has been noticed that error from bias can still occur, given that these groups may be trained in the same manner, work for the same people, or have other similar characteristics which taint their overall group opinion (Bottom, et al, 2002). People who are trained the same way are likely to vote the same way.
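Condorcet’s result can be illustrated with a short computation: assuming each of n independent voters is correct with probability p > 0.5, the probability that a simple majority is correct rises toward 1 as n grows. The following sketch is illustrative only; the function name and the value p = 0.6 are our own assumptions, not taken from the cited sources.

```python
from math import comb

def majority_correct_prob(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, reaches the correct decision.
    Assumes n is odd so that ties cannot occur."""
    k_min = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# With individual accuracy p = 0.6, group accuracy grows with group size:
for n in (1, 11, 51, 101):
    print(n, round(majority_correct_prob(n, 0.6), 3))
```

Note that the theorem cuts both ways: if shared training or other correlated bias pushes individual accuracy below 0.5, majority accuracy degrades as the group grows, which is consistent with the caveat of Bottom, et al (2002).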

People demonstrate bias by thinking that if something worked before, it will work again. If problems are static, this may be true. However, dynamic situations call for different decisions to be made, and the environment may have changed. Some decisions are made based on the fear of change and of the unknown. People resist change, as do many organizations. Sometimes there may be risk in implementing a given selection. Risk can be a factor which promotes bias, in that the organization finds comfort in its routine and past methods. There are cycles of some solutions being in fashion, which promote bias in choice based on ‘what is in’ at the time. Other organizations are biased toward using their own creations versus something else that may be a better fit (Turoff, 2000).

3.6 Bias in Bounded Rationality and Uncertainty

Decision making has its problems with information overload among flawed or biased individuals who work together. Uncertainty in the data, missing data and unknowns add to the complexity of how bias is handled. Bounded rationality states that people can only process so much information, due to the limits of the brain’s capacity to acquire, maintain and process data. Simon turns to the Subjective Expected Utility (SEU) model for answers, but this is not a model meant for a dynamic environment with imperfect information.

Other forms of bias come from bounded rationality, where information may be limited, time is of the essence, and all alternatives cannot feasibly be explored (Gigerenzer, 1999). Participating individuals may also be ignorant of the subject matter, not knowing all the alternatives, and thus select a choice on the basis of recollection. Bias is inherent in each individual, but in a group, especially as the group grows in numbers, it is diluted; the decision makers not only overcome the bias through a canceling-out effect, but actually reach a better overall decision than any one group member could have provided (Condorcet, 1976; Ericsson, et al, 1989; Turoff and Hiltz, 1978).

Individuals handle this uncertainty through a multitude of heuristics that prove to be somewhat error prone. Tversky and Kahneman (1974) describe bias that persists even when the statistical data tells the human otherwise: certain heuristics of the mind override information, good or bad. For example, someone may judge a person to be a librarian because the person shares a set of characteristics with the stereotype of a librarian. Given the added information that 70% of the sample population are engineers, the human will still use the heuristic of categorization and ignore the other information. This is also an example of where humans think differently than algorithms such as Bayes’ theorem. Bayes would have factored in the 70% chance of the person being an engineer, where the human ignored that information and stayed with the categorization (Tversky and Kahneman, 1974).

Bayes’ theorem has been used when prior probabilities are large and other information has been gathered from a database. Working with a priori knowledge, decisions about the future can be based on the past. Another favorable reason to use Bayes is that decisions can be traced along a tree structure, where the paths offer the decision makers an explanation for the end result. It is this ‘explaining away’ of the decision that is attractive to humans. Although this is a useful way to implement a reasoning algorithm in a computer, it is not the way all humans think and should not be used for group decision making: “…individuals do not always process information in a manner consistent with Bayesian models” (Bottom, et al, 2002, pg. 154).
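The engineer/librarian example can be made concrete with Bayes’ theorem. The 70% base rate comes from Tversky and Kahneman’s example; the specific likelihood values below (how strongly the description suggests each profession) are assumptions chosen purely for illustration.

```python
def posterior_engineer(prior_engineer, p_desc_given_eng, p_desc_given_lib):
    """Bayes' theorem: P(engineer | description).
    Likelihoods are hypothetical, chosen for illustration."""
    prior_librarian = 1 - prior_engineer
    evidence = (p_desc_given_eng * prior_engineer
                + p_desc_given_lib * prior_librarian)
    return p_desc_given_eng * prior_engineer / evidence

# A description that sounds 4x more like a librarian than an engineer,
# combined with the 70% engineer base rate:
post = posterior_engineer(0.70, 0.10, 0.40)
print(round(post, 3))  # prints 0.368
```

With a 50/50 prior the same description would yield only a 0.20 probability of ‘engineer’; the 70% base rate nearly doubles the posterior. The representativeness heuristic ignores this adjustment entirely and answers ‘librarian.’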

An observation was made revealing how more primitive societies make decisions. The methodology reflected collective wisdom combined with randomness to generate new alternatives. The natives’ method was a nonrandom randomness, in which markings in a bone gave elders a path from which more information could be deduced. It is argued that humans are not good random generators and tend to make the same mistakes repeatedly. So it can be said that when decisions are made ‘off the beaten path,’ the other available choices may also be better (Moore, 1957). There is a bias in dealing with the unknown that lies deep within the psyche of the elders; together with the randomness of the process, new ideas come from old memories from which to interpret a successful alternative.

Bottom, et al demonstrated that when partially informed individuals worked together, they performed at an impressive level of accuracy as a group. A majority rule method was used, and outcomes were good when groups worked on simpler problems. However, on more complex problems, this approach did not seem to remove the individual bias, which was considered more threatening to these outcomes. This research also showed that optimal heuristics such as Bayes’ theorem worked in some areas but did “leave the individual susceptible to certain, predictable and sometimes quite profound forms of bias” (Bottom, et al, 2002, pg. 149).

3.7 Information Overload and Bias

When there is too much information to be processed, individuals make decisions which suffer from systematic bias. Basically, people can be presented with so much information that they are no longer capable of making the correct choice, but revert to what they know or select what they recognize (Tversky and Kahneman, 1974). Gigerenzer, et al, argue that statistical models of reasoning and information presentation are not best at aiding human decision making. They argue that other, natural formats are better for understanding and solving problems where the interpretation of data is concerned.

Information overload can be reduced when subjects are allowed to select and focus on a subgroup of choices (Lindblom, 1959; Simon, et al, 1986). The subgroup may be perceived as the more relevant information from which to choose, reducing the time it would take to make an exhaustive search of all available information and paths in evaluating every possibility. A system with this capability gives a group much-needed flexibility to select a subgroup of issues from a larger list (Lindblom, 1959). Narrowing the selection reduces the load on the person. This would in turn reduce the number of error-prone heuristics, of the kinds mentioned previously, employed in the judgments being made by the individuals.

3.8 Uncertainty and Bias in Heuristics Used

When decisions are made under uncertainty in the data from which to ascertain a final conclusive answer, heuristics are used. These can lead to major errors in decision making. Tversky and Kahneman (1974) claim that people making decisions under uncertainty use poor heuristics and ignore factors, leading to major errors. As best stated by Herbert Simon (1986), “Some of the general heuristics, or rules of thumb, that people use in making judgments have been compiled – heuristics that produce biases toward classifying situations according to their representativeness, or toward judging frequencies according to the availability of examples in memory, or toward interpretations warped by the way in which a problem has been framed.”

3.9 Bias in Experiments

Bias can appear even during the experiment. There is experimenter’s bias, whereby a scientist hopes for results favorable to their assumptions; the information may then be evaluated in a biased manner and therefore not be precise and true. Even the subjects in the study can react differently than they would in reality, due to the expectations placed on them as individuals in the research project (Simon, et al, 1986). Bias can come in the form of survey questions, where methods must be in place to avoid confounding factors such as ‘yea-saying,’ which can be countered by alternating the framing of questions between negative and positive. Anonymity can aid in removing bias when an individual is working with a group. If the members of the group are anonymous to one another, a truer opinion can form from the individuals, and hence from the overall group, as to how they truly feel about some agenda. Anonymity removes fear of repercussion for a conflicting opinion, it removes preconceived ideas that may attach to a member, and it removes peer pressure to vote a certain way in conformity.

3.10 Systematic Bias

Systematic bias arises from inaccurate measurements. It is best described by Hamburg (1970):

“Among the causes of bias are faulty design of a questionnaire such as misleading or ambiguous questions, systematic mistakes in planning and carrying out the collection and processing of the data, nonresponse and refusals by respondents to provide information and too great a discrepancy between the sampling frame and the target universe.”

A major problem with systematic bias is that it is not corrected by increasing the sample size. It can, however, be detected by using an alternative means of measurement, whose results clue researchers in to where to look for the inconsistency.
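The point that larger samples reduce random error but not systematic bias can be shown with a small simulation. All numbers here (true value, bias, and noise level) are purely illustrative assumptions:

```python
import random

random.seed(42)
TRUE_VALUE = 100.0
BIAS = 5.0  # a hypothetical constant offset, e.g. from a misleading question

def biased_sample_mean(n):
    """Mean of n noisy measurements that all share the same systematic bias."""
    return sum(TRUE_VALUE + BIAS + random.gauss(0, 10) for _ in range(n)) / n

for n in (10, 1000, 100000):
    err = biased_sample_mean(n) - TRUE_VALUE
    print(f"n={n:6d}  error={err:5.2f}")
# Random error shrinks roughly as 1/sqrt(n), but the total error converges
# to the systematic bias (about 5.0), not to zero.
```

This is why Hamburg’s remedy is a different measurement instrument rather than more respondents: no amount of additional data drawn through the same flawed questionnaire removes the offset.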

3.11 Experimenter’s Bias

Experimenter’s bias occurs when the researcher has some sort of influence on the outcome due to his or her expectations of the results. Any such influence on the part of the researcher, be it conscious or subconscious, must be eliminated as much as possible so as not to affect the results. At conferences, there is a double-blind review for most peer-reviewed article submissions. This has two reviewers read and rate an article based on some criteria, with neither reviewer knowing the identity of the author(s). This removes any bias, good or bad, that may come with knowing who wrote the article, and provides rigor for better conferences. A double-blind review is one way to eliminate experimenter’s bias.

3.12 Response Bias

Response bias was addressed by Hiltz and Turoff in the way surveys were distributed to subjects. They claimed that anonymity was shown to increase response rates, and used this result to support anonymity in a computerized conferencing setting, stating that “we find indications that, since CC responses would be entered in private, uninfluenced by a desire not to offend an interviewer, and can be made anonymously, for “sensitive” issues at least, more truthful and fuller accounts may be elicited than by face-to-face or telephone interviewing” (Hiltz and Turoff, 1978, pg 264). Giving subjects anonymity in their responses yields a truer set of data for the researcher and is another way to minimize bias in decision making.

3.13 Conclusion

We have demonstrated that bias can take a number of forms, arising from individuals, from groups, from the system, and from how the information is framed. Strategies have been laid out demonstrating what can be done to reduce bias where it is rational to believe it can be reduced. The goal is the best possible decision making among team members, according to whatever it is that defines ‘best.’ Groups should be offered a system which best reflects their opinions and reduces bias as much as humanly possible. In this research effort, there will be a mission to identify and eliminate bias wherever possible. In all steps of the research process, attempts will be made to eliminate bias so that the optimum judgment can occur given the information at hand.

CHAPTER 4

DECISION SUPPORT SYSTEMS

It is a long way from data to wisdom.

Zeleny

4.1 Abstract

This chapter reviews decision support systems, exploring areas of design which influence the outcome. A non-exhaustive literature review is conducted to determine the areas which should receive focus when one is given the task of building a system of this nature. Group support systems come in many varieties depending on the users’ proximity, group size and the task to be undertaken. Information overload is analyzed: how it happens, how users deal with it, and methods to decrease it. Catalysts for the group process, in the form of facilitators, moderators and the like, are reviewed, as different variations fit different needs. Complexity, in terms of the number of dimensions considered in a problem, is covered. We hope to cover many of the topics that are vital in the development of a decision support system; this is a very large area and is only briefly covered here.

4.2 Introduction

There can never be enough emphasis on making the best decision possible and continuing to improve it (Simon, 1960, Weick and Sutcliffe, 2001, Eom, 2001). The ramifications of decisions can be never-ending and they set the stage from which to work and from which the next decision is to be made (Weick and Sutcliffe, 2001, Eom, 2001, White, et al, 2007a). Building a system to support decision making includes considerations to support all of the phases for it (Hiltz and Turoff, 1978, Sprague and Carlson 1982, Turoff, et al, 1993, Eom, 2001, Turoff, et al, 2002, White, et al, 2007b). Present day decision support systems are used for scenario ‘what if’ generation and evaluation and also for generating and analyzing a means to an end type of ‘goal-seeking’ strategy into the phases of design and selection (Eom, 2001). To support any group there needs to be a strong foundation to support both communication and coordination processes (Turoff, et al, 1993).

4.2.1 Acronyms

There are many acronyms where it concerns the use of technology in the creation of decision support systems (DSS). Many of the systems have overlapping definitions and this can cause confusion to the point where multiple systems are erroneously referenced as one. There’s computer-supported cooperative work (CSCW), group support systems (GSS), computer mediated communication systems (CMC), collaboration support systems (CSS), conferencing systems (CS), electronic meeting systems (EMS) and group decision support systems (GDSS), just to name a few (Hiltz, et al 1996, Eom, 2001, White, et al, 2008). These are not interchangeable, but are specialized based on a particular set of requirements (Hiltz, et al, 1996). For example, GDSS have focused on individuals working together in decision making and problem solving, while CSCW focuses on improving communication between group members (Hiltz, et al, 1996, Eom, 2001).

4.2.2 Decision Support Systems Components

Decision support systems can initially be categorized into two areas, separating the human from the system. Although these two entities interact, each must first be understood in terms of its capabilities and its limitations (Simon, 1969; Hiltz and Turoff, 1978). The system component can then be further broken down into two sub-categories, one representing how the data will be handled, the other the methods and models that will be used by the system (Sprague and Carlson, 1982). All DSS share a common framework referred to as the Dialog, Data and Modeling (DDM) paradigm. This states that all decision support systems are based on the following:

1) Dialog exchanged between the users of the system,

2) The Data utilized and managed in the system and

3) The Model which supports the analytical work.

A model was designed by the authors and is presented in Figure 4.1 (Sprague and Carlson, 1982). This model reflects the architectural support of the DDM paradigm for a DSS. Each system is modified and specialized according to the type of activity that will be performed by the users. This task-technology, best-fit claim, is also supported by others (Hiltz, et al, 1996, Zigurs 1998).
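The DDM decomposition can be sketched as a minimal object structure. This is our own illustrative sketch; the class and method names are assumptions, not part of Sprague and Carlson’s specification, and the “model” here is deliberately trivial.

```python
class DataComponent:
    """Stores and retrieves the data used by the DSS (stands in for a DBMS)."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class ModelComponent:
    """Holds the analytical tools; here, a trivial averaging model."""
    def evaluate(self, values):
        return sum(values) / len(values)

class DialogComponent:
    """Mediates between the user and the data and model components."""
    def __init__(self, data, model):
        self.data, self.model = data, model
    def ask(self, key):
        return self.model.evaluate(self.data.get(key))

# Wiring the three components together:
data = DataComponent()
data.put("forecasts", [90, 110, 100])
dss = DialogComponent(data, ModelComponent())
print(dss.ask("forecasts"))  # prints 100.0
```

The point of the separation is that each component can be specialized to the task at hand (a different DBMS, a different analytical model, a different user interface) without redesigning the other two, which is consistent with the task-technology fit claim above.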

Figure 4.1 Architectural Support of the DDM

4.2.3 Database Management Systems

This model shows the three components connected using a database management system (DBMS). The components are defined as: the dialog component, which connects the user to the system for interactive purposes; the data component, which stores the data through input, archiving or some other means; and the model component, which has the algorithms and problem-solving analysis tools required by the users of the system (Sprague and Carlson, 1982). Database management systems come in all shapes, forms and sizes. There are free and open-source systems available with no support, and very sophisticated, expensive systems with a great deal of support, ranging from small on-site jobs to very large distributed jobs (White, et al, 2008a). The DBMS is the backbone of the information provided to the end user. Queries are made against the data to form information. This information can feed into other programs with the algorithms necessary to provide various analyses that further aid the decision maker (Gehani, 2007; Rob and Coronel, 2007). Providing a well designed interface between the human and the computer is crucial (Hiltz and Turoff, 1978). Great consideration should be given to the availability of input/output devices, peripheral hardware, the presentation of materials in the form of reports, and many other issues which go beyond the scope of this paper.

4.2.4 Decision Support Systems Defined

As demonstrated, a multitude of topics are used to define decision support systems. However, all of the definitions, either in part or in full, share the common characteristic of a procedure by which people use technology to support decision making (Alter, 1980; Bonczek, et al, 1981; Keen and Scott-Morton, 1978; Sprague and Carlson, 1982). Primarily, the system is supposed to support the user, not replace them. The system uses data and models to aid in making more informed and/or better decisions. The DSS can support various types of problems, be they semi-structured, ill-structured, tame or wicked (Rittel and Webber, 1973; Keen and Scott-Morton, 1978; Bonczek, et al, 1981; Sprague and Carlson, 1982). Last, in a DSS the focus in the decision process is on effectiveness rather than efficiency.

4.2.5 Wisdom and the Human Factor

The DSS is a tool which should aid individuals in making the best decision on their own, and aid in the creation of collective intelligence when a group is interacting (Hiltz and Turoff, 1978). A decision support system can be many things; however, it cannot replace the human component where judgment needs to be exercised. The function of a human decision maker as a component of a DSS is not to enter data to build a database, but to exercise judgment, or intuition, throughout the entire decision making process (Eom, 2001). For both individuals and groups, the DSS should ultimately provide the basis for a group to reach beyond their own knowledge and elevate their judgment into wisdom. Wisdom goes beyond knowledge because it allows comparisons (judgments) with regard to know-what and know-why: “It is a long way from data to wisdom” (Zeleny, 1987, page 60).

In the remainder of this chapter, we will first identify the design contingencies for DSS: group size, proximity and problem type. Next, the requirements of users will be discussed and roles will be defined. Membership rights are then defined, elaborating who gets which rights according to the role they play. Many types of DSS will be described, depending on the nature of the group working together, their problem types and their proximity to one another. Studies concerning group activity coordinators, in the form of facilitators and moderators, are investigated, comparing their effects on the outcomes of the decision making. Last, some methodologies are discussed in which particular types of group problems and task types are identified. The chapter ends by drawing together some concerns about existing problems that need improvement.

4.3 Design Contingencies

The objective behind a DSS is to act as an interface for a decision maker, giving them the proper set of tools, based on appropriate models and database systems, all in an effort to assist in making an effective decision (Turoff and Hiltz, 1982). According to DeSanctis and Gallupe (1987), there are three primary contingencies that should be considered when designing a DSS: the size of the group, individual proximity, and the type of task before the group.

4.3.1 Size Matters

Systems are built to support individuals or groups (Turoff and Hiltz, 1982). Decision making groups are defined as “two or more people who are jointly responsible for detecting a problem, elaborating on the nature of the problem, generating possible solutions, evaluating potential solutions, or formulating strategies for implementing solutions. The members of a group may or may not be located in the same physical location, but they are aware of one another and perceive themselves to be a part of the group which is making the decision” (DeSanctis and Gallupe, 1987).

These group sizes can range from a few people co-authoring a paper, to a medium size group of people such as the typical size for a class (30), to very large groups of people interacting globally (White, et al, 2008a). Information overload is a concern for individuals and groups. When large groups work together, tools for filtering or skimming should be implemented (Hiltz and Turoff, 1985).

When working with groups and considering group estimation processes, there are some significant tautologies (Dalkey, 1967):

1. The total amount of information available to a group is at least as great as that available to any member;

2. The median response to a numerical estimate is at least as good as that of one half of the respondents;

3. The amount of misinformation available to the group is at least as great as that available to any member;

4. The number of approaches (or informal models) for arriving at an estimate is at least as great for the group as it is for any one member.
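Tautology 2 can be checked directly: the median’s absolute error never exceeds the error of at least half of the individual estimates, because at least half of the estimates lie on the far side of the median from any true value. The example below is illustrative; the estimates and true value are invented for demonstration.

```python
import statistics

def median_beats_half(estimates, true_value):
    """True if the median's error is no worse than the error of at
    least half of the individual estimates (Dalkey's second tautology)."""
    med_err = abs(statistics.median(estimates) - true_value)
    no_worse_count = sum(1 for e in estimates
                         if abs(e - true_value) >= med_err)
    return no_worse_count >= len(estimates) / 2

# A group estimating a quantity whose true value is 50:
estimates = [30, 45, 48, 55, 80]
print(statistics.median(estimates))      # prints 48
print(median_beats_half(estimates, 50))  # prints True
```

This holds for any set of estimates and any true value, which is what makes it a tautology rather than an empirical claim; it is one statistical rationale for the median feedback used in Delphi processes.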

4.3.2 Proximity

Groups of individuals working together can either be collocated, fully distributed, or partially distributed, where some members are collocated while others are distant.

4.3.2.1 Face-to-Face

Groups who work together in one geographic location are referred to as face-to-face (F2F). By contrast, Huang and Ocker (2006) define a distributed group as “a collection of geographically, organizationally and/or time dispersed individuals who work interdependently, primarily using information and communication technologies, in order to accomplish a shared task or goal.”

4.3.2.2 Partially Distributed Teams

In partially distributed teams (PDTs), the subgroups are collocated but distant from other subgroups (Huang and Ocker, 2006). PDTs are defined as “a virtual team, in which some sub-groups are collocated, yet the subgroups are dispersed from each other, and communication between them is primarily by electronic media” (Plotnick, et al, 2008). Given the proper set of tools in a computer mediated communication system, groups and subgroups can work as effectively, if not better, using electronic technology (Turoff and Hiltz, 1982).

4.3.2.4 Online Communities

Where most traditional groups have a F2F element, more organizations are moving to an online format where a community forms and utilizes CMC to conduct their daily business. There are many definitions of online communities, three of which are presented here. Hagel and Armstrong (1997) define a virtual community as a computer-mediated space where there is an integration of content and communication with an emphasis on member-generated content. Lin (2003) defines a virtual community as “the gathering of people with common interests to share information and coordinate their work via information technologies, specifically for transaction, interest, fantasy, and relationships.” Preece further specifies an online community as consisting of:

· People, who interact socially as they strive to satisfy their own needs or perform special roles, such as leading or moderating.

· A shared purpose, such as an interest, need, information exchange, or service that provides a reason for the community.

· Policies, in the form of tacit assumptions, rituals, protocols, rules and laws that guide people’s interactions.

· Computer systems, to support and mediate social interaction and facilitate a sense of togetherness. (Preece 2000, p.10)

Although DeSanctis and Gallupe (1987) strongly believe that the aforementioned contingencies are important, they specifically designated group size and proximity to be the most critical factors upon which to design a group decision support system (GDSS). Figure 4.2 describes the taxonomy they came up with to further describe the different settings for a GDSS.

Figure 4.2 Different Settings for GDSS

The most static of the settings calls for a Decision Room. This is basically a computer/technology-enhanced room for a group of collocated decision makers, normally a U-shaped meeting room where each member can see the others, and where both individual and group support systems are in place. The Legislative Session setting is described as a Decision Room, but much larger; more rules are in place for discussion facilitation, and a moderator may need to be present to ‘rule’ over the procedural process. The Local Area Decision Network consists of a smaller group of people who are geographically dispersed, be it down the hall or across the country. The last setting is described as a Computer Mediated Conference, which supports large groups of individuals working together in a distributed mode (DeSanctis and Gallupe, 1987).

4.3.3 Problem Types

There are different types of problems that can be handled and supported by a DSS. Due to the basic nature of how humans think and make decisions, these problems are unstructured (Simon, 1960). Problem types for DSS range from semi-structured to wicked (Simon, 1960, Rittel and Webber, 1973, DeSanctis and Gallupe, 1987). It’s the problem type and level of (or lack of) structure which can greatly define the functionality requirements needed by the system (Simon, 1960, Hiltz, et al, 1996, Zigurs, 1998, Eom, 2001). Different types of problems require a different approach.

Some problem types are:

· Structured – a problem that has a discrete process for solving it. This can be programmed as an automated task in an information system (Simon, 1969). There is no need for a complex decision support system, as the variables are known beforehand and a simple step-by-step design, such as that of an enterprise resource system, can be structured.

· Semi-structured (DeSanctis and Gallupe, 1987).

· Ill-structured (DeSanctis and Gallupe, 1987).

· Wicked (Rittel and Webber, 1973) – not defined as a problem with malicious intent, but one where there isn’t clarity or consensus on definitions, goals and the like. The facts may be based on a belief system, which makes statements neither right nor wrong.

McGrath (1984) integrated numerous task types into a ‘circumplex model’ which categorized tasks by what the group had to accomplish during its meeting. Three tasks are identified: 1) the generation of ideas and actions, 2) choosing from among those alternatives, and 3) negotiation among alternatives.

4.3.3.1 Structured Problems

Structured problems are inflexible and have workable, discrete solutions that can be automated. However, when the environment changes, these automated procedures can produce the wrong solutions (Turoff and Hiltz, 1982).

4.3.3.2 Semi-structured Problems

Semi-structured problems are “expressed in terms of problems that can be organized” (Keen and Scott Morton, p. 861). Many problems fall into this category and must be dealt with by decision makers within organizations. These groups tend to be dispersed both geographically and organizationally, and must use systems that support their members' needs (Turoff and Hiltz, 1982).

4.3.3.3 Ill-structured Problems

Decision makers may have to interact with others in order to better understand the needs of an ill-structured problem (Turoff and Hiltz, 1982). To help groups of decision makers understand a problem and devise effective solutions, a GDSS needs a combination of technologies to aid communication, computing, and decision support (DeSanctis and Gallupe, 1987). Larger groups generate a better analysis of a complex problem and have a better chance of turning an ill-structured problem into a semi-structured one (Turoff and Hiltz, 1982). Ill-structured problems are defined as, and grouped together with, ‘messy’ or ‘wicked’ problems (Horn, 2001).

4.3.3.4 Tame Problems

A tame problem is by no means a structured problem, but it does have a well-defined problem statement whose solution can be objectively evaluated as right or wrong.

By definition, a tame problem has a well-defined and stable problem statement; a definite stopping point; a solution that can be objectively evaluated as being right or wrong; solutions that can be tried and abandoned; and belongs to a class of similar problems that can be solved in a similar manner (Digh, 2000).

4.3.3.5 Wicked Problems

By contrast, wicked problems such as affordable housing, disparities in healthcare, and institutional racism are ill-defined and ambiguous; they can be associated with strong moral, political, and professional issues, and are constrained by political, economic, cultural, and ideological values (Digh, 2000; Horn, 2005). Wicked problems are volatile and of a very dynamic nature, with considerable uncertainty and ambiguity (Horn, 2005).

Wicked problems have many causes, coupled with other issues and constraints, all of which evolve and unfold over time (Rittel and Webber, 1973; Horn, 2001). Each wicked problem is unique, with no definitive formulation. Wicked problems are not well understood until they are experienced; only when potential solutions are formulated does one understand the problem better (Conklin and Weil, 1998; Conklin, 2005). The problem is complex and recursive in nature, for understanding one problem only leads to questions that define other problems (Horn, 2001). Since wicked problems are unique and have not been experienced before, a well-defined solution set is neither feasible nor available (Rittel and Webber, 1973).

Since the problem cannot be fully defined, it is difficult to tell when it is resolved: judgment is based not on right or wrong but on good or bad, which is subjective and depends on the stakeholder's perspective (Rittel and Webber, 1973; Digh, 2000; Horn, 2001; Conklin, 2005). Formulating the problem and the solution are essentially the same thing, and each attempt at creating a solution changes the understanding of the problem. Every wicked problem can be considered a symptom of another problem. Solutions to wicked problems are not true-or-false but good-or-bad, and there is no immediate and no ultimate test of a solution. Every implemented solution has consequences; solutions generate waves of consequences, and it is impossible to know how all of them will eventually play out (Rittel and Webber, 1973).

Wicked problems are ongoing and have no stopping rule (Rittel and Webber, 1973; Digh, 2000). They are never resolved, but change over time (Conklin, 1998). Wicked problems are ‘solved’, per se, when they no longer hold interest for the stakeholders, when resources are depleted, or when the political agenda changes (Rittel and Webber, 1973). There are many stakeholders with multiple value conflicts, who repeatedly redefine what the problem is, reconsider what the causal factors are, and hold multiple views of how to approach and, hopefully, deal with the problem (Rittel and Webber, 1973; Conklin, 1998; Digh, 2000). Getting and maintaining agreement among the stakeholders is most difficult, as each has their own opinion of what is best (Rittel and Webber, 1973). As with ill-structured problems, wicked problems are best managed if they can be turned into tame or structured problems (Rittel and Webber, 1973; Turoff and Hiltz, 1982).

4.4 Varieties of Decision Support Systems

Decision support systems are designed to meet the needs of the users. Some considerations are based on the number of users and their proximity to one another.

4.4.1 Computer Mediated Communication Systems (CMC)

Computer-Mediated Communication (CMC) is a general term for computer systems used for communication between people in a variety of ways, such as e-mail, web pages, and forums. They can be used synchronously or asynchronously, in face-to-face meetings or at a distance (DeSanctis and Gallupe, 1985; Hiltz et al., 1996).

4.4.2 Single User Decision Support Systems

When a single user employs a system for decision support, the system serves information-gathering and inquiry needs (Churchman, 1971). Information is presented in graphs, tables, or other forms that filter it into a manageable amount, so the user does not suffer from information overload (Hiltz and Turoff, 1985). Software can be part of the solution to information overload. The individual must learn to use the system by selecting their own criteria for filtering information, and must develop the social skills necessary to work effectively with other group members (Hiltz and Turoff, 1985).

As the workforce increasingly operates at a global level, a trend has developed to use DSS not only for single-user decision making but also to network organization members into distributed decision making (DDM) systems (Eom, 2001). These single users are connected in such a way that they can work independently of one another on a sequential joint production (Hiltz and Turoff, 1985; Rathwell and Burns, 1985).

4.4.3 Group Decision Support Systems (GDSS)

A Group Decision Support System is defined as “an interactive computer-based system which facilitates solution of unstructured problems by a set of decision makers working together as a group” (DeSanctis and Gallupe, 1985, p. 3), and characterized as “a set of software, hardware, and language components and procedures that support a group of people engaged in a decision-related meeting” (Huber, 1984). The proper combination of tools or structures can help a group be more efficient, achieve better outcomes, and make better decisions (Turoff and Hiltz, 1982; DeSanctis and Gallupe, 1985). There is a greater chance of equal participation, and numerous ways to present data so that people understand the information better (Hiltz, 1996).

One integrated system should be used for group software, so that individuals use a single set of technologies (Turoff et al., 1993). Group support systems have the primary goal of helping groups be more productive (de Vreede, 2001), and offer a wide variety of tools that both individuals and groups can use to make better decisions, structure activities, and improve communication (Huber, 1984; DeSanctis and Gallupe, 1985; Hiltz, 1996; de Vreede, 2001). Especially with large groups of individuals working together, the end result is a collective intelligence, where the group is smarter than any one individual (Hiltz and Turoff, 1985; Turoff et al., 2002).

4.4.3.1 Levels of GDSS

Where it concerns members exchanging information, DeSanctis and Gallupe (1987) defined three levels, or approaches, for group support. Level 1 concerns communication media: it focuses on removing communication barriers and facilitating the exchange of information between group members. Level 2 incorporates modeling technologies into the interpretation of data, along with methodologies to reduce ‘noise’, decrease information overload, and aid decision making under uncertainty. Level 3 supports more intelligent, dynamic decision making, where rules enforcing patterns of information exchange are imposed by expert advisors during the meeting. For example, Robert's Rules of Order could be implemented in a Level 3 GDSS, where the system could even participate in building or selecting the rules of the decision-making process.

4.4.4 Distributed Group Support System

"Distributed Group Support Systems" embed GDSS type tools and procedures within a Computer-Mediated Communication (CMC) system to support collaborative work among dispersed groups of people. "Distributed" has several dimensions: temporal, spatial and technological (Hiltz, et al, 1996).

A DGSS can provide communication support in five different ways (Turoff et al., 1993). First, it can offer numerous alternative communication channels to support different types of information, whether text, graphics, or structured data. Second, rules and protocols can be implemented along with user roles, so that structured interactions and membership permissions improve the group's performance. Third, the system should aid both the individual and the group in information seeking while reducing information overload, through functionalities such as information gathering, filtering, and feedback. Fourth, sophisticated decision-making methods can support the group's communication needs. Fifth, a DGSS can help synchronize the entire communication process in any phase of decision making. In addition, Turoff et al. (1993) note that a group memory should be maintained for future use and to support other phases.

4.4.5 Social Decision Support Systems (SDSS)

The primary objective of a social decision support system is to facilitate the gathering of multiple points of view from varying perspectives and to create a working body of knowledge (Turoff et al., 2002).

SDSSs are designed given a specific set of governing criteria and objectives (Turoff et al., 2002). Structures are based on the issues that need to be discussed, the options available to choose from, the comments that individual group members may wish to contribute, and the relationships among the three (Turoff et al., 2002).

4.5 Information Overload

If there is too much or too little workload, human performance deteriorates (Rouse, 1975; Hiltz and Turoff, 1985). Information overload is defined as “information presented at a rate too fast for a person to process” (Sheridan and Ferrell, 1974). Computer-mediated systems should provide the tools to help users filter and organize data, helping them screen large amounts of incoming information (Hiltz and Turoff, 1985). Novice users will have to put time into learning a system, whereas experienced users manage their information better because they have already devised ways of filtering (Hiltz and Turoff, 1985). There are several ways to overcome information overload using a computer. First, human roles can group functionalities into particular categories of membership, restricting the information to a minimum. Second, protocols can be implemented directing the group what to do and when; third, knowledge structures can be used to aid decision making (Turoff et al., 1991).

4.5.1 How It Happens

A correlation has been found between user experience with a system and the management skills that reduce information overload (Hiltz and Turoff, 1985). Studies of management information system users show that intermediate users suffer the most from information overload (Dickson et al., 1977).

4.5.2 What Occurs From It

There are many negative ways humans deal with information overload, all conducive to poor decision making. Hiltz and Turoff (1985, p. 682) offer a list of ways individuals may cope:

1. Fail to respond to certain inputs,

2. Respond less accurately than they would otherwise,

3. Respond incorrectly,

4. Store inputs and then respond to them as time permitted,

5. Systematically ignore (i.e., filter) some features of the input,

6. Recode the inputs in a more compact or effective form, or

7. Quit (in extreme cases) (Sheridan and Ferrell, 1974).

The primary way individuals cope with too much information is by filtering and omitting or ignoring it (Miller, 1962).

4.5.3 How to Decrease It

There are a number of ways to decrease information overload. For example, communications can be regulated by social pressures. In addition, systems should be designed so that users can learn to use the given tools efficiently and have numerous options for personal preferences in filtering information (Hiltz and Turoff, 1985). Information overload is discussed throughout this SOTA, which should be consulted for other ways of reducing it.

4.5.3.1 Data Management

There are two basic processes for increasing the organizational efficiency of information systems: message routing and message summarizing (Huber, 1982). System notifications that information has already been read, or that new information has arrived, also aid in the reduction. More communication options, giving the end user the freedom to filter and skim, likewise reduce the incoming data to a manageable set of information (Hiltz and Turoff, 1985).

4.5.3.2 Social Influence

Social pressures can aid in the reduction of information (Hiltz and Turoff, 1985). First, restrictions or limitations can be placed on the length of comments allowed. Second, the group can be allowed to create its own norms as the membership grows or shrinks, to reflect the views of the group (Hiltz and Turoff, 1985). Pen names and anonymity, along with individual member rights to view, post, or edit information, can also reduce information overload (Hiltz and Turoff, 1985).

4.5.3.3 Conferencing Systems

Conference systems can aid in the management of large amounts of data by allowing discussions to occur. Such a system, along with a message system, can greatly reduce the flow of unwanted messages (Hiltz and Turoff, 1985).

4.5.3.4 System Induced Measures

There are automated methods that a system can employ, such as ‘self-destruct’ dates that eliminate unwanted data and thereby reduce the information to sift through (Hiltz and Turoff, 1985). Systems can also reduce information overload through database methods that help sort, search, and find strings in text-based documentation (Hiltz and Turoff, 1985).
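The ‘self-destruct’ date mechanism described above can be sketched simply: each item carries an expiration date, and expired items are dropped before presentation, shrinking what the user must sift through. The field names and dates below are illustrative assumptions, not from any cited system.

```python
# A minimal sketch of a "self-destruct" date filter: items whose
# expiration date has passed are removed from view, reducing
# information overload. Field names here are hypothetical.
from datetime import date

def active_items(items, today):
    """Keep only items whose expiration date has not yet passed."""
    return [it for it in items if it["expires"] >= today]

inbox = [
    {"text": "vote closes Friday", "expires": date(2008, 3, 7)},
    {"text": "standing policy",    "expires": date(2099, 1, 1)},
]
# As of April 2008, only the unexpired item remains visible.
visible = active_items(inbox, today=date(2008, 4, 1))
```

The same predicate could run as a periodic purge on the database side rather than at display time; either placement realizes the reduction Hiltz and Turoff describe.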

4.6 Process Catalyst

A person or group of persons can fill or implement process-catalyst roles within a computer-mediated communication system. These catalysts are referred to as facilitators, coordinators, moderators, leaders, or chauffeurs, or the system may be user driven (Hiltz, Johnson and Turoff, 1982; Hiltz, 1984; Hiltz and Turoff, 1985; DeSanctis and Gallupe, 1987; Bostrom et al., 1993; Hiltz, 1996; Limayem, 2006). Some of these roles, or parts of their catalyst functionality, can be automated (de Vreede, 2001; Limayem, 2006).

These roles can edit, move, delete, or prompt discussions (among other functionalities), trigger voting among particular subgroups, or request actions, according to the powers defined by the role (Hiltz and Turoff, 1985; White et al., 2007a).

4.6.1 Leader

Leadership and process are key variables that influence decision making in small groups in a traditional face-to-face environment (Hiltz, 1996). Structured processes such as brainstorming (Osborn, 1957) or the Nominal Group Technique (Van de Ven and Delbecq, 1971) can aid rational decision making and enhance both process and outcomes. These same variables are not as effective in an asynchronous distributed environment (Hiltz, 1996). Using the group as the decision maker proves better than any single individual (Condorcet, 1976; Hiltz and Turoff, 1993).

4.6.2 The Facilitator

In human facilitation, an individual intervenes using special privileges provided by the role in the software. This individual aids the group's decision making by carrying out a set of activities before, during, and after the process (Bostrom et al., 1993; Limayem, 2006). Human facilitators can be key to improving group performance (Reagan-Cirincione, 1994) and can help groups accomplish their task (Hiltz, Johnson and Turoff, 1982). Conference leaders, called coordinators, prove useful in enforcing protocols that aid navigation (Hiltz, Johnson and Turoff, 1982). Groups may need help in using a system, and the facilitator can serve as an interface (DeSanctis and Gallupe, 1987); this technical driver is also referred to as a chauffeur (Dickson et al., 1989).

Whether to have a designated facilitator, human or automated, is a difficult question for a community using a computer-mediated communication system. Whitworth (1999) describes the interaction method, which requires a facilitator to motivate people throughout the entire decision-making process. Other types of group interaction, such as Robert's Rules of Order, require a monitor or parliamentarian to keep the meeting on track and enforce the rules (Pitt, 2005). Hiltz and Turoff (1996) believe that, no matter the complexity of the organization, there will be a need for a facilitator or moderator. They state that in a Delphi system in particular, little human intervention is required, owing to the structure of the methodology. If someone is used, the software powers or special privileges such an individual needs are:

· Being able to freeze a given list when it is felt there are sufficient entries to halt contributions, so as to focus energies on evaluation of the items entered to that point in time.

· Being able to edit entries to eliminate or minimize duplications of resolutions or arguments.

· Being able to call for final voting on a given item or set of items.

· Being able to modify linkages between items when appropriate.

· Reviewing data on participation so as to encourage participation via private messages.

While a lot of material in an online Delphi can be delivered directly to the group, the specific decisions on this still need to be made by the person or team in control of the Delphi process. In Computer Mediated Communications, the activity level and actions of a conference moderator can be quite critical to the success of an asynchronous conference and specific guidelines for moderators can be found in the literature (Hiltz, 1984).

Groups, especially in virtual settings, need a facilitator to introduce agendas or trigger a decision-making process (Tarmizi, 2006). The facilitator is a key role in building an online community: one technique that can help sustain a community of practice (CoP) is the introduction of a facilitator, ‘since a facilitator can play a crucial role in addressing the challenges of establishing and nurturing a community’ (Tarmizi, 2006). One of the most important aspects of the policies and procedures that govern an online community is the set of roles and responsibilities used (Fontaine, 2001). In particular, leadership in the form of facilitators or moderators is crucial, as are the norms and practices of interaction that support trust and build community spirit and cohesiveness.

4.6.3 Moderator

Problems can arise when group consensus is unattainable, and a moderator may need to intervene. A facilitator can shift into the moderator role as long as it is an unbiased third party who is there to help navigate the group toward a satisfactory consensus (White, Hiltz and Turoff, 2007). Voss (2003) raises another important point: “for an e-discourse it is important to be documented at a shared place so that every one can access the same version at any time from anywhere.”

Cristal (2006) notes that once community members are interacting actively, less organizational intervention is needed over time, for the group (community) will then exist on its own devices.

4.6.3.1 Motivation

Catalysts can be used as motivators. One of the largest barriers to overcome in creating online communities is getting members to participate in the initial stages. Members report higher satisfaction when they collaborate as a community (Kimble, Li, and Barlow, 2000). Exposing and defining members' interests plays an important part in motivating individuals to come to the site initially and then to participate; the domain of information should be one of the driving forces behind the motivation to participate, offering members something they cannot find elsewhere (Wenger, 2003). Tarmizi (2006) points to “a cooperative relationship among members as a way to increase sense of community, which in turn could lead to increases in participation.” It is the community itself that must discover and be the driving force behind this motivation.

Anonymity can be taken too far (Turoff and Hiltz, 1996). It is important that experts know they are working with other credible people, to make the effort worthwhile; this is a factor in motivating participants to interact. Anonymity can still be in place while users know the group of people they are working with. For example, a list of names and affiliations could be provided so that users know they are working with an identified group of people, while still having them use pen names and the like. People can also be given the ability to identify themselves if they so choose. Finally, anonymity, like the identification of participants, can be applied selectively: the experts may want to use their real names during arguments, detailing where the supporting information comes from, but keep the voting anonymous.

4.6.4 Chauffeur

A “chauffeur” is a person who helps a group through the technical aspects of a system. Dickson et al. (1989) propose that this is beneficial when a group uses a system only rarely. The chauffeur is defined as an unbiased individual, external to the group's decision making, who simply ‘drives’ the group through the process (Dickson et al., 1989).

4.6.5 Automated

As stated earlier in this chapter, structured problems can be automated. However, even dynamic structures can have automated functionalities that can, for example, reflect a vote (White et al., 2007a). Automated facilitation can build protocol into a system, executing a structured decision-making process (Limayem and DeSanctis, 2000; Limayem, 2006). Based on models of human thinking, automated facilitators aid the group in reaching its outcomes by providing guidance during decision making (Limayem, 2006). The automation of some facilitation tasks has also proven to have positive effects (de Vreede, 2001).

4.7 Methodologies

Decisions can be made by individuals or groups, and may involve a single dimension or multiple dimensions (Bard and Sousk, 1990; Iz and Gardiner, 2005). Many aspects of decision making can be measured, such as decision outputs, changes in the decision process, changes in managers' concepts of the decision situation, procedural changes, cost/benefit analyses, service measures, and managers' assessments of the system's value (Keen and Scott Morton, 1978; Eom, 2001).

4.7.1 Unidimensional Data

When working with unidimensional data, items can be ranked along a single continuum from one extreme to the opposing one, as in ranked data (Thurstone, 1927a). Paired comparisons have been used in a number of ways to support both individual and group decision making (Thurstone, 1927a, 1927b), including weight assignment and the computation of preference scores (Thurstone, 1927a; Saaty, 1980; Ichikawa, 1980; Kok and Lootsma, 1985; Kok, 1986). Paired comparisons can assign weights to unidimensional data, or they can compare items as each stands relative to the others (Thurstone, 1927b; Iz and Gardiner, 2005; White et al., 2008b). Paired comparisons work well with larger sets of data because they reflect a user's opinion more accurately than ranking each item by eye (Miranda, 2001).

AHP has been suggested for use with paired-comparison results in cooperative groups (Saaty, 1988; Iz and Gardiner, 2005). AHP uses paired comparisons along with the arithmetic mean to calculate weights and preference scores, or to place alternatives into a cardinal ranking (Saaty, 1988; Bard and Sousk, 1990; Iz and Gardiner, 2005).
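The arithmetic-mean weight calculation mentioned above can be sketched as follows. This is the common column-normalization approximation of AHP priorities (Saaty's exact method uses the principal eigenvector of the matrix); the comparison values are illustrative, not drawn from any cited study.

```python
# Sketch of AHP-style priority weights from a reciprocal pairwise-
# comparison matrix, using column normalization and the arithmetic
# mean of each row (an approximation of the eigenvector method).

def ahp_weights(matrix):
    """Approximate priority weights from a reciprocal pairwise matrix."""
    n = len(matrix)
    # Normalize each column so it sums to 1.
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    # Average each row of the normalized matrix to obtain the weights.
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical 3x3 reciprocal matrix: alternative A is judged 3 times
# as preferable as B and 5 times as preferable as C, and so on.
pairwise = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(pairwise)
```

The resulting weights sum to one and place A above B above C, turning the pairwise judgments into a cardinal ranking of the alternatives.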

4.7.2 Multicriteria

Multiple-criteria decision making (MCDM) involves numerous options with differing weights and various stakeholders with subjective needs. Numerous techniques incorporate multiple criteria into their decision-making method (Iz and Gardiner, 2005). One of the more popular is Multi-Attribute Utility Theory (MAUT) (Nakayama, 1979; Iz and Gardiner, 2005). Another is Multiobjective Mathematical Programming (MMP), which deals with complex problems involving numerous constrained decision variables (Mavrotas, 2008). AHP is also used in MCDM as a way to rank order alternatives (Saaty, 1988).
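One common MAUT formulation is additive: each alternative's overall utility is the weighted sum of its single-attribute utilities. The sketch below illustrates this form; the criteria, weights, and scores are hypothetical assumptions, not values from the cited literature.

```python
# Minimal additive multi-attribute utility sketch: overall utility is
# the sum of weight_i * utility_i across criteria, and alternatives
# are ranked by that score.

def maut_score(scores, weights):
    """Weighted additive utility of one alternative."""
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical criteria: cost, reliability, ease of use (weights sum to 1).
weights = [0.5, 0.3, 0.2]
alternatives = {
    "System A": [0.6, 0.9, 0.7],   # single-attribute utilities in [0, 1]
    "System B": [0.8, 0.5, 0.9],
}
ranked = sorted(alternatives,
                key=lambda a: maut_score(alternatives[a], weights),
                reverse=True)
```

Here System B edges out System A (0.73 vs. 0.71) because the heavily weighted cost criterion favors it, showing how the weights encode stakeholder priorities.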

4.7.3 Measuring Decision Support

DeSanctis and Gallupe (1987) offer numerous ways to measure decision support. One is to measure meeting outcomes against a set of predetermined goals and objectives, or to compare what was expected to occur with what actually occurred. Another is to survey users about how satisfied they were with the decision making that took place, or whether group members are willing to work together again (DeSanctis and Gallupe, 1987).

4.8 Conclusion

We have shown that there are many issues to consider when designing a decision support system. Group size, task type, and proximity are major considerations; however, communication structures must also be provided to support the particular needs of the group. Each problem type needs an approach catered to its environment, and some processes need some form of catalyst for discussion. Issues of particular interest are:

· How to best create a system for a very large group of users;

· How to manage the roles these users fill and the privileges that will be given to them on the system;

· How to best manage the increasing mass of information presented to the user;

· Which model best reflects the group’s opinion;

· How to best implement human or automated facilitative roles;

· How to best present the information to the user for fast and accurate decision making needs.

Therefore, we believe that, as a critical part of this research endeavor, the system must be tailored to support the communication needs of the users.

CHAPTER 5

VOTING

“If you choose not to decide, you still have made a choice.”

Rush, “Freewill”

5.1 Abstract

This chapter covers voting and how it can be used for group decision making. The objective is to show how voting can support this effort, demonstrating its pros and cons while outlining the areas where it is best applied. I also show how different voting methods can serve different parts of the decision-making process, fulfilling users' needs with a multitude of voting methodologies. For example, rather than implementing only one voting method, several methods can be used together, synchronously or asynchronously, to cater to more complex environments. Past research has demonstrated voting to be a productive tool that reflects the group's opinion on an issue. The chapter concludes by demonstrating the need for the type of voting that best fits the given task, where the population's preference is based on an agreed-upon voting method.

5.2 Introduction

Robertson (2005) defines voting broadly: “Voting in its broadest context is considered to be an ongoing activity that involves information gathering, attitude development, opinion management, collaboration, periodic consideration of specific candidates and issues, decision making in elections, and tracking.” All of these activities can be supported by technology, even if the inner complexities of how the mind forms opinions, beliefs, and interests cannot be interpreted. Hiltz and Turoff (1978) further describe how votes that lie at the outer limits of the group mean can be a useful tool for understanding the reasoning behind individual voters' opinions.

Voting can serve as a unified visual interpretation to guide groups in decision making. There are many different voting methods, each with criteria that further define the outcome of the vote, and different methods can be used for the different needs a group may encounter during ongoing decision-making cycles. Voting is used for both rational and social decision making (Whitworth, 1999), and numerous complexities within users may influence the direction in which people vote (Robertson, 2005).

Another dynamic is that when a group votes has implications in itself (Whitworth, 1999). Voting has been used in social decision making and has proven useful as a tool to reflect a group opinion (Turoff et al., 2002; Li, 2003; White et al., 2007b); it has also been demonstrated to work on complex problems lacking structure (Whitworth, 1999). However, voting has not reached the potential or recognition it could have as an integrated part of group decision making, through input, output, feedback, and as a stimulus for discussion, which helps reduce many negative side effects.

I will briefly explain how voting could support the following electoral processes identified by Robertson (2005):

· Information gathering. Social bookmarking systems are one way in which people can use voting as a tool to identify useful information.

· Deliberation. Voting serves as a visual feedback mechanism that further stimulates discussion where areas of disagreement exist.

· Decision making. Choosing from a list or ranking a list in an order is at the heart of voting and is what it is used for initially.

· Evaluation. Reflections on decisions made are nothing more than further lists built upon some set of criteria; in this case, voting serves as a tool for feedback on a given outcome.

Voting can give members a range of values from which to choose, which affects how the end results are tallied. These calculations need to reflect and satisfy both individuals and groups in their decision making. The list below may seem to reflect no more than a Likert scale; however, it also offers alternatives that do not appear in such a scale at all. Although not exhaustive, the selections presented demonstrate the range of feedback that can be considered to best reflect a voter's preference (Pitt et al., 2005; Turoff et al., 2007):

1. Strongly Agree;

2. Agree;

3. Slightly Agree;

4. In the Middle;

5. Slightly Disagree;

6. Disagree;

7. Strongly Disagree;

8. Abstain, No Vote;

9. Don’t Understand;

10. Waiting for More Information;

11. Not Being Knowledgeable;

12. Would only vote for if this were the only choice available;

13. Would not vote for under any circumstance[2].
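A scale like the one above mixes substantive positions with responses that signal uncertainty, so a tally routine has to treat the two differently. The sketch below is one hedged illustration of this idea; the function names, the numeric coding of the agreement levels, and the grouping of the non-position responses are all this reviewer's assumptions, not part of any cited system.

```python
# Illustrative encoding (assumed, not from the source): substantive
# positions map to a signed 7-point scale; the remaining options carry
# information but no position and are counted separately.
AGREEMENT = {
    "Strongly Agree": 3, "Agree": 2, "Slightly Agree": 1,
    "In the Middle": 0,
    "Slightly Disagree": -1, "Disagree": -2, "Strongly Disagree": -3,
}
NON_POSITIONS = {
    "Abstain", "Don't Understand",
    "Waiting for More Information", "Not Knowledgeable",
}

def summarize(ballots):
    """Mean agreement over substantive votes, plus a count of held votes."""
    positions = [AGREEMENT[b] for b in ballots if b in AGREEMENT]
    held = sum(1 for b in ballots if b in NON_POSITIONS)
    mean = sum(positions) / len(positions) if positions else None
    return mean, held

ballots = ["Agree", "Strongly Agree", "Waiting for More Information", "Disagree"]
print(summarize(ballots))  # (1.0, 1)
```

Separating the two kinds of response lets the group see both where opinion lies and how many members are withholding judgment.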

It is the other selections offered that can aid a group in making the best decision, since a multitude of information-gathering instruments can assist users. These tools can help by covering the spectrum from uncertainty due to a lack of information to information overload (Hiltz and Turoff, 1978; Li, 2003). Other alternatives should be offered to best reflect the expert’s state of mind. For example, “it should be clear to the respondents that they do not have to respond to every question, but can decide to take a "no judgment" view” (Turoff and Hiltz, 1996, p. 4).

In this section, voting methods are described and fairness criteria are explored. Next, I discuss the conditions of Arrow’s Impossibility Theorem. Voting protocols are then examined, with Robert’s Rules of Order as an example. Voters’ roles are defined, along with the powers a vote confers and the ways members are allowed to apply a vote. Next, the benefits and problems associated with discussion are explored, describing how voting can directly affect them either way. Problems with bias in voting are examined and ways to reduce it are presented, followed by decision making under uncertainty. Finally, anonymity and how it changes multiple attributes of voting is presented.

5.3 Voting Methods

There are quite a few voting methods, of which a few are described here. Different types of voting are used throughout society in everything from elections to choosing talent, ranging from very small groups considering simple situations to very large groups addressing very complex problems. The ways in which the end results are tallied vary from one method to another and can produce drastically different outcomes. Votes can also be weighted, which can greatly influence the outcome.

5.3.1 Plurality Method

The first method presented is the Plurality method. In this simple method, voters rank the items from first to last, and the item with the highest number of first-place votes wins.
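As a minimal sketch, the plurality tally just described might be implemented as follows; the function name and the ranked-list ballot representation are illustrative assumptions.

```python
from collections import Counter

def plurality_winner(ballots):
    """Each ballot is a ranking (first to last); only the first choice counts."""
    tally = Counter(ballot[0] for ballot in ballots)
    return tally.most_common(1)[0][0]

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(plurality_winner(ballots))  # A
```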

5.3.2 Plurality with Elimination

An extension of this method is Plurality with Elimination. This voting method consists of rounds, where each round eliminates the item with the fewest first-place votes. The rotation continues until only two items remain, and one of them is selected by the majority criterion. Note that this final majority may still be split in composition, with half of the votes coming from one source and the other half from another.
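The round-by-round elimination described above can be sketched as follows. This is an illustrative implementation under assumed ranked-list ballots; tie-breaking among lowest-scoring items is left to Python's `min` and would need an explicit policy in practice.

```python
from collections import Counter

def plurality_with_elimination(ballots):
    """Repeatedly drop the item with the fewest first-place votes
    until only one remains."""
    remaining = {item for ballot in ballots for item in ballot}
    while len(remaining) > 1:
        # Each ballot's first choice among the items still in the race.
        tally = Counter(next(c for c in ballot if c in remaining)
                        for ballot in ballots)
        for item in remaining:          # items with zero first-place votes
            tally.setdefault(item, 0)
        loser = min(tally, key=tally.get)
        remaining.discard(loser)
    return remaining.pop()

ballots = [["A", "B", "C"]] * 2 + [["B", "C", "A"]] + [["C", "B", "A"]] * 2
print(plurality_with_elimination(ballots))  # C
```

In the example, B is eliminated first (one first-place vote), after which C holds a 3-to-2 majority over A.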

5.3.3 Borda Count Method

The Borda Count Method is also used to rank items or to select a single item. It works as a surveying method in which every position, from least to most preferred, is given a corresponding number, so the scale can be read as intervals. Every item on a ballot thus receives a weight, and these weights are summed to produce a group ranking.
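The weighting-and-summing step can be sketched as below, under the common convention that with n items, a first-place position earns n-1 points and last place earns 0; the function name and ballot format are illustrative assumptions.

```python
def borda_ranking(ballots):
    """Each ranked ballot awards n-1 points to 1st place, n-2 to 2nd,
    ..., 0 to last; items are ranked by total points."""
    n = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for position, item in enumerate(ballot):
            scores[item] = scores.get(item, 0) + (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

print(borda_ranking([["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]))
# ['B', 'A', 'C']
```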

5.3.4 Pairwise Comparisons

Pairwise comparison is another method by which group judgments can be made, and there are numerous variations in how the tallies can be computed. The one utilized in this research effort comes from Thurstone’s Law of Comparative Judgment (Thurstone, 1927). Every item in the set is matched against every other item on a pairwise basis, and one item of each pair is selected over the other based on some unidimensional criterion. Ties can be treated differently: Thurstone’s law dictates that no ties are allowed, while other variants split the value, giving each item half the weight. Thus the selected item in a pair gets a point, and if a tie occurs, both items receive half a point. The tallies for each pair are converted to proportions summing to 100%, each proportion is mapped to a corresponding value from a Unit Normal Table, and these values are summed per item to produce a final ranking.
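A simplified sketch of this tallying scheme follows, using Python's `statistics.NormalDist` in place of a printed unit normal table. It is only an approximation of Thurstone's scaling: it assumes complete paired-comparison counts with no ties, clamps proportions of 0 or 1 (whose z-values are infinite), and ranks items by their mean z-score. All names are illustrative assumptions.

```python
from itertools import combinations
from statistics import NormalDist

def thurstone_scale(judgments, items):
    """judgments[(x, y)] = number of judges preferring x over y.
    Rank items by the mean z-score of the proportions preferring them."""
    z = NormalDist().inv_cdf
    scores = {item: [] for item in items}
    for x, y in combinations(items, 2):
        wins_x, wins_y = judgments[(x, y)], judgments[(y, x)]
        p = wins_x / (wins_x + wins_y)      # proportion preferring x over y
        p = min(max(p, 0.01), 0.99)         # guard: z(0) and z(1) are infinite
        scores[x].append(z(p))
        scores[y].append(z(1 - p))
    means = {item: sum(v) / len(v) for item, v in scores.items()}
    return sorted(items, key=means.get, reverse=True)

judgments = {("A", "B"): 8, ("B", "A"): 2, ("A", "C"): 7, ("C", "A"): 3,
             ("B", "C"): 6, ("C", "B"): 4}
print(thurstone_scale(judgments, ["A", "B", "C"]))  # ['A', 'B', 'C']
```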

5.4 Fairness Criteria

Voting can be used to reflect a group’s opinion. There are numerous types of voting and criteria by which they are judged. Among the criteria by which a voting method is deemed fair are the Majority Criterion, the Condorcet Criterion, the Monotonicity Criterion, and the Independence of Irrelevant Alternatives Criterion. Each specifies rules a method must satisfy for the criterion to hold true to its definition. The four differ in whether and how rounds may be implemented and in how the calculations are made.

5.4.1 The Majority Criterion

The Majority Criterion states that if a candidate receives a majority of the first-place votes, that candidate should win. This is determined in a single voting procedure: the size of the margin between candidates does not matter, whether it is one vote or a million, the end result is the same.

5.4.2 Condorcet’s Criterion

Condorcet’s Criterion is a bit different. It states that if a candidate wins every head-to-head pairwise comparison against each of the other candidates, that candidate should win. These procedural variations give different outcomes: given the same set of items and the same choices made by a group of experts, a majority vote could have winner A while the Condorcet winner is B.
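The divergence described above can be made concrete with a small sketch (an illustrative implementation under assumed ranked-list ballots): one candidate tops the most ballots, yet a different candidate wins every head-to-head comparison.

```python
def condorcet_winner(ballots):
    """Return the item that beats every other item head-to-head, or None.
    Each ballot is a ranking; a lower index means more preferred."""
    items = set(ballots[0])
    for x in items:
        beats_all = all(
            sum(b.index(x) < b.index(y) for b in ballots) * 2 > len(ballots)
            for y in items if y != x)
        if beats_all:
            return x
    return None

# A is the plurality winner (4 of 9 first-place votes), yet B beats both
# A (5-4) and C (7-2) head-to-head, so B is the Condorcet winner.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(condorcet_winner(ballots))  # B
```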

5.4.3 Monotonicity

Another criterion is Monotonicity. It states that if an item wins a vote, and in a revote some voters rank that item higher while changing nothing else, the item should still win: a change in the winner’s favor must never disrupt the previously calculated result.

5.4.4 Independence of Irrelevant Alternatives

Last, the Independence of Irrelevant Alternatives criterion is covered. It states that if the set of items that produced a particular outcome is modified by adding or removing a non-winning alternative, this change should not affect the outcome among the remaining items.

5.5 Arrow’s Impossibility Theorem

A system of social choice, better known as voting, aggregates individual preference orderings into a group preference ordering. Arrow’s argument was that a reliable, truly representative voting system should possess certain desirable properties, all of which can never be satisfied simultaneously. The axioms include: 1) Universal Domain, 2) Completeness and Transitivity, 3) Positive Association, 4) Independence of Irrelevant Alternatives, 5) Non-Imposition, and 6) Non-Dictatorship (Tabarrok, 2005). Since all of these cannot be satisfied simultaneously, something must be sacrificed. Which one, and are there circumstances that can nonetheless maintain a robust system?

Arrow defined three areas in which restrictions should be made during voting: 1) the input, 2) the output, and 3) how the final calculation is made. Arrow’s theorem essentially states that no voting system can guarantee a consistent outcome when more than two items are being considered, and further that a coherent group decision cannot always be determined from individual input (Arrow, 1951). “Furthermore, standard averaging approaches can lead to inconsistencies in group judgments” (Hiltz and Turoff, 1996, p. 13).

The axioms which place restrictions on inputs are:

1. Universal Domain: All individually rational preference orderings are allowed as input to the voting system. One cannot assume that all individuals have exactly the same preferences; if they did, there would be unanimous consensus and no need for a voting system.

2. Completeness and Transitivity: There must be consistency for any given input to have a reliable output.

a. Without completeness, the voting system fails because it cannot answer our questions given the votes cast.

b. Without transitivity, the voting system fails by answering our questions ambiguously. Transitivity requires that output for the same set of input be consistent. Checks should be in place to notify individuals of inconsistencies in their choices. An example is provided by Hill, who defines a ‘circular triad’ (Hill, 1953), characterized as an error: a situation where an individual prefers X to Y, Y to Z, but then Z to X. The output of the voting system must be helpful, giving a logical message that informs the end user of such an error.
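The consistency check described above can be sketched as a small detector for Hill's circular triads; the representation of preferences as a dictionary of ordered pairs is an illustrative assumption.

```python
from itertools import combinations, permutations

def circular_triads(prefers):
    """prefers[(x, y)] is True when x is chosen over y.
    Return every triple {x, y, z} with x > y, y > z, but z > x
    (Hill's 'circular triad')."""
    items = {i for pair in prefers for i in pair}
    triads = []
    for trio in combinations(sorted(items), 3):
        for x, y, z in permutations(trio):
            if prefers.get((x, y)) and prefers.get((y, z)) and prefers.get((z, x)):
                triads.append(set(trio))
                break
    return triads

prefs = {("X", "Y"): True, ("Y", "Z"): True, ("Z", "X"): True}
print(len(circular_triads(prefs)))  # 1
```

A voting system could run such a check on each member's pairwise choices and prompt the member to resolve any cycle it finds.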

The following are defined by Arrow as axioms which restrict the ways in which individual preferences are transformed into social preferences.

3. Positive Association: If X is preferred to Y in a vote, and X is then raised higher in some individual preferences, the social ordering of X and Y should remain the same. Hence, even if X is raised higher in preference, it should remain preferred to Y. Positive association states that the social ranking of X over Y should stay in place so long as no individuals have reduced their ranking of X.

4. Independence of Irrelevant Alternatives: If X is preferred to Y, then any other alternative should have no effect on the relative order of the two items X and Y. Although other alternatives may be distributed throughout the item list, even intermittently between the two, whichever item is preferred to the other should remain so, all things considered.

5. Non-Imposition: The voters’ preferences should be reflected in the outcome. An election whose result is rigged or falsified by malicious means would therefore be invalid.

6. Non-Dictatorship: The outcome should be determined by more than one person. For example, if someone declares themselves President and the other stakeholders do not get to vote, the result is not valid.

Although the latter two axioms seem a bit trivial, they must still be identified and written down, for there is always someone lurking in the background waiting to manipulate a social vote. Violations of axiom 5 have led to riots and mass killings in many nations. How can one know that a vote is fair and represents the true beliefs of a population? Many checks and balances must be put in place. This is crucial, for a vote can place great power in the hands of a given individual or group.

5.6 Protocol

Protocol is simply the order in which things are supposed to occur. It can greatly affect how a vote is determined when complexities or changes arise. Sometimes strict protocols or rules of order are imposed in order to properly recognize individuals and to give the voting process structure. In face-to-face (F2F) meetings this can be a necessity, and as a group’s size increases, the need for structure grows. In online communities these protocol structures can be somewhat relaxed, although they may still have to be followed. For example, Robert’s Rules of Order are a common protocol for meetings, structured so that members can be heard fairly and equally and issues can be discussed in a timely manner. This is primarily to keep group members on schedule, given that they have only so much time in which to conduct the meeting and carry out the items on an agenda.

5.6.1 Robert’s Rules of Order

Robert’s Rules of Order, Newly Revised (RONR) are a popular set of rules for group meetings, quite commonly used by both large and small groups. Pitt outlines the formalities of conducting a vote according to RONR as follows:

· “A member proposes a motion

· Another member seconds the motion

· The members debate the motion

· The chair calls for those in favour to cast their vote

· The chair calls for those against the motion to cast their vote

· The motion is carried, or not, according to the standing rules of the assembly” (Pitt, 2005).

It is important to remember that, given the asynchronous nature of online group meetings, many of the restrictions and formalities of RONR are not required, because constraints of time and geographic proximity no longer apply. The reasons for implementing RONR are different in online settings. For example, the group may be allowed to interject thoughts or ideas into any part of the system, at any time and from anywhere, which greatly frees members from the restrictions imposed in a F2F meeting. The opportunity for everyone to be heard is greatly strengthened in an online format with such dynamics (Turoff, et al, 2002; White, et al, 2007b).

5.7 Roles

Roles are extremely important given a set of strict voting protocols such as RONR. Roles identify what needs should be supported for a given user in a voting system. However, roles tend to give people unequal levels of power, from one member’s vote counting more than another’s to some roles having input where others do not. Yet it is these restrictive criteria that help mirror the decision-making process the group of voters wants reflected, and roles can give structure to a more complicated social interactive system. There is also the need to give members concurrent roles to cover the various needs a user may have, even within a single group. “It is also becoming quite evident from existing systems that individuals need multiple identities for the multiple roles they may play, even within a single group” (Hiltz and Turoff, 1978, p. 335). Voters can be identified or anonymous: a voter may want to be anonymous during a heated debate, or, on the other hand, may want their opinion known amongst the population, to have their say. Again, these issues are contingent on the meeting format, be it F2F or online, as well as the particular situation and environment in which the issue is conducted.

Pitt describes how the roles may be identified in an agent based community where the various powers given to members are identified as:

1. “Voter: Those agents who are empowered to vote.

2. Proposer: Those agents who are empowered to propose motions.

3. Seconder: Those agents who are empowered to second motions.

4. Chair: Those agents who are qualified to conduct the procedure; one of whom at any time, will be designated to be actual chair, and thereby empowered to conduct the procedure.

5. Monitor: Those members who are to be informed of the actions of others, in particular the votes cast and the decisions reached (Pitt, 2005).”

This last role of monitor takes on various forms in an online community; it can have various meanings in an online setting and is further described in the following chapter. Turoff, et al (2003), in their description of a Social Decision Support System (SDSS), present a collegial arrangement in which most voters hold the same role, indicating that more and better ideas could be generated from a group of experts.

5.8 When to Vote

A question one may consider is: when should the vote be taken? Should it be taken once, twice, or should it remain open? And when should people be allowed to see the tallies: at the beginning, at the end, or as a running total? Hiltz and Turoff offer some reasons for not releasing results too early during a voting session:

“Since it may be desirable to hold certain types of contributions until the group is at a point in the deliberations where they are ready to deal with them. Also, information such as voting results should not be provided until a sufficient number of votes about an item have been accumulated.” (Turoff and Hiltz, 1996, p. 7)

In most group support systems, voting is used as the final step of a selection process (Whitworth, et al, 1999). Historically, political election results are withheld until a designated time. Some Delphi processes have the group members vote at the beginning, in what is called round 1, for an initial opinion, then a few more times during the forecasting process (Dalkey, 1967). Other Delphi processes use voting to identify areas of agreement and disagreement (Turoff and Hiltz, 1996). When a group votes and how they vote are issues that should be further explored in general. A few ideas are presented here to demonstrate how unique voting methodologies can be created and used for particular kinds of group needs and consequences.

Research shows that an initial vote on an issue can be beneficial: if there is strong group consensus, time is saved by not proceeding through the remaining steps of the protocol merely to finish the formality. Steps of the decision model may be skipped and the next challenge encountered more quickly (Hiltz and Turoff, 1978; Whitworth, et al, 1999).

Whitworth, et al proposed that a voting before discussing (VBD) method is best when working with groups that have a high level of conflict, on the grounds that discussion would otherwise serve only as a vehicle for antagonism among group members and would actually decrease consensus. It is the conflict-ridden group that benefits most from VBD, because it reduces the chance of members continuing to debate where areas of agreement already exist (Whitworth, et al, 1999). This idea was brought to light in Delphi processes by Dalkey in 1967, who states, “The second presumption was that the effect of the undesirable social interactions could be meliorated by imposing a specific format for the discussions” (p. 6). Saving time is another benefit of testing for group consensus early: the areas of agreement are bypassed and the group can focus on its disagreements (Turoff, et al, 2002). Voting can be used as a visual feedback system to prompt further discussion on areas of disagreement (Linstone and Turoff, 1975), and can stimulate further asynchronous discussion in a virtual setting given real-time CMC systems with access to the Internet (White, et al, 2007b).

5.9 Bias in Voting

Bias in voting can exist in a number of forms. Bias is a deep complexity in humans, and of its many forms, some can be removed by a system and some cannot. Robertson presents a model demonstrating the factors which influence our opinions.

Figure 5.1 Factors Influencing Opinions

He suggests that voting systems should encompass a litany of tools to support users’ particular needs so that they can make the best decision. In this model, subjective bias is accommodated: in any model with a social element, a human is naturally going to be subjective in some of the selections they make. The model suggests that “participation in social networks is construed by some researchers to be a way of reducing the costs of information gathering by spreading it out among connected individuals” (Robertson, 2005). The diversity of resources these individuals have, or lack, further diversifies the perspectives on incoming material and thus lessens bias with respect to the objectivity of the information’s source. Social bookmarking systems are one such information-gathering tool, in which voting reflects others’ opinions and judgments about the authenticity of the information being presented. Another way to look at this is that having all group members collectively aid in the information-gathering process leads to a collective intelligence, where the group together proves more intelligent than any single member (Hiltz and Turoff, 1978).

5.10 Uncertainty

Uncertainty can be reflected in a number of ways in voting. The veracity of the information we do have must be determined; uncertainty can also come from the unknowns in a problem, or from contending with information overload and the heuristics used to deal with it. Aldrich (1993) states that “voters wish to reduce uncertainty by gaining information, but information search has a cost.” We can only process so much information at a time, so people balance the cost of information gathering against uncertainty about whatever choices they are making. Information processing is limited not only by cognitive capacity but also by bounded rationality (Simon, 1969).

In this author’s research in particular, uncertainty in the vote can be even more complex, with particular characteristics that must be dealt with mathematically. For example, consider the following options:

· A member wants to hold his vote;

· A member may get to rate their level of confidence on a given issue;

· The age of a vote in a long time frame;

· The change of a voter’s selection;

· Not all of the group members voted.
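One hedged sketch of handling such characteristics mathematically is given below: held votes contribute nothing, each vote is scaled by the member's stated confidence, and older votes decay over time. Every name, field, and weighting choice here (including the exponential decay and the one-week half-life) is this reviewer's illustrative assumption, not a method from the cited literature.

```python
import math
import time

def weighted_tally(votes, half_life=7 * 24 * 3600, now=None):
    """votes: list of dicts with 'value' (+1, -1, or None for a held vote),
    'confidence' in [0, 1], and 'cast_at' (a timestamp). Each vote's weight
    is confidence times an exponential age decay; held votes and members
    who did not vote contribute nothing to the tally."""
    now = time.time() if now is None else now
    total = 0.0
    for v in votes:
        if v["value"] is None:              # vote held / member abstained
            continue
        age = now - v["cast_at"]
        decay = math.exp(-math.log(2) * age / half_life)  # halves per half_life
        total += v["value"] * v["confidence"] * decay
    return total

votes = [
    {"value": +1, "confidence": 0.9, "cast_at": 0},
    {"value": -1, "confidence": 0.5, "cast_at": 0},
    {"value": None, "confidence": 0.0, "cast_at": 0},   # held vote
]
print(round(weighted_tally(votes, now=0), 2))  # 0.4
```

A scheme like this makes the listed complications explicit parameters of the tally rather than informal caveats around it.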

One person’s uncertainty may be another person’s assumption. Assumptions the group has voted valid can be set aside, while the ‘maybe’ assumptions are further explored because of the evident uncertainty among the group members. A distinction must be made between items that are ‘maybe’ due to true uncertainty and those that are ‘maybe’ due to a wide difference of opinion among the experts (Hiltz and Turoff, 1996).

5.11 Anonymity in Voting

“Pen names, anonymity, and voting represent extensions of the communication process that have definite psychological overtones. The use or incorporation of these features may be very dependent on the nature of the application” (Hiltz and Turoff, 1978, p. 335). Anonymity in voting is so strongly felt that it is a right protected under the law. This is to reduce undue influence. Given that so much voting is now accomplished using computer technology, a voter’s privacy is a major concern (Robertson et al, 2005).

Some decisions place such a large responsibility on the members that it is desirable for identities to be known (Hiltz and Turoff, 1976). For example, voters want to know which way their representatives vote; this lets people match what an elected official said they would do, and what they believe in, against how their actions measured up.

Anonymity of identity amongst group members in decision making is beneficial: communication is straightforward, personality differences are cast aside, there is a lower chance of any member dominating the forum, and ideas are presented clearly, without prejudice and based more on merit (Whitworth, 1999).

5.12 Conclusion

Various voting methods have been presented, demonstrating how different one method can be from the next: given a particular set of inputs, each method can produce a different output. This further shows that the voting method selected needs to fit the problem type and, more so, satisfy the members who are using it. Considerations of what defines a fair vote were examined; this area requires little further exploration, as there is broad agreement on it. Arrow’s Impossibility Theorem was explored, along with the conditions under which this research would encounter it. Robert’s Rules of Order show how protocol can help manage larger groups successfully through an agenda. Particular group types and situations can benefit from voting at the beginning of an issue, while other situations call for voting later in the allowable time; so, considerations of ‘when’ to vote should be taken into account to ensure an efficient process. We identified where bias may exist and demonstrated methods which can be implemented to reduce it. Uncertainty in voting is of particular interest in this researcher’s efforts: an extensive, though not exhaustive, search reveals no real-time means of reducing or eliminating uncertainty during decision making. This is at the heart of this research, which explores the potential to reduce uncertainty through mass collaborative efforts, creating a collective intelligence that will produce the best available solution to any given problem.

CHAPTER 6

DELPHI

“There is a very large field waiting for the plough”

Norman Dalkey, 1967, p10

6.1 Abstract

The topic of this chapter is the Delphi method. The objective is to review literature concerning the components of Delphi systems. A brief review of Delphi processes is provided, along with Delphi criticisms, troubles with bias, what defines an expert, and issues in the experiments conducted. This chapter also covers recent research experiments, evaluating the pros and cons of the methods used and analyzing other common areas.

6.2 Introduction

Delphi is characterized by a set of procedures, which elicits opinions from experts in order to reach a group consensus concerning a particular subject (Dalkey 1967, Helmer 1967, Hardy 2004). In particular, the use of experts in decision making is beneficial when information is lacking (Dalkey 1969). It is intuition and other less understood mental processes that make experts excellent decision makers in problems dealing with uncertainty. Linstone and Turoff extended the original idea by elaborating on how the complexity of the problem being addressed can require larger groups of experts who are heterogeneous in nature. Then Hiltz and Turoff (1985) further defined how these complex problems could be handled by larger groups of experts. The problem needs to be structured to allow subgroups to better focus on the areas they are most confident about. A mesh network of these subgroups could dynamically address each problem. Best described by Hiltz and Turoff,

“Delphi is a communication structure aimed at producing detailed critical examination and discussion, not at forcing a quick compromise. Certainly quantification is a property, but only to serve the goal of quickly identifying agreement and disagreement in order to focus attention” (1996, p 2).

In 1967, two researchers at the RAND Corporation, Olaf Helmer and Norman Dalkey, published work describing a means by which groups could make better decisions, which they referred to as Delphi (Dalkey 1967, Helmer 1967, Fisher 1978, Linstone and Turoff 1975, Hiltz and Turoff 1996, Baker 2006). Dalkey conducted research on the accuracy of groups in light of their interactions. He found that the averaged opinion of a group of individuals was more accurate than the consensus the same group would reach about how to solve the problem in a traditional face-to-face meeting (Fisher 1978).

After running some Delphi experiments, “Dalkey discovered that

1. Opinions tend to have a large range in the initial round but that in succeeding rounds there is a pronounced convergence of opinion and;

2. Where the accuracy of response can be checked, the accuracy of the group response increases with iteration” (Fisher, 1978, p 65).

This was most beneficial, as it demonstrated that groups of experts may produce better decisions that are more accurate and reliable. Still, there were many limitations. For example, questions from experts were limited per round: although an expert might have five questions, they were allowed only two per round, and only on given rounds. As with any new methodology, there was much room for improvement.

Hiltz and Turoff imply that collective intelligence is at the core of Delphi, emerging from the collaboration between a group of individuals who have a synergetic effect. The idea is that the group will be at least as smart as the smartest individual, but more so, that the group will reflect a collective intelligence greater than any one group member could have offered.

In the remainder of this chapter, characteristics of Delphi will be explored. In particular, focus will be given to the three primary characteristics identified as 1) anonymity, 2) feedback and 3) statistical group response. Next, a review of the Delphi processes will be covered. Problems with various aspects of Delphi are examined followed by some solutions and alternatives. Areas which may be subject to bias are identified. Next, what defines an expert is explored. This is followed by another related issue where having improper subjects participate in Delphi experiments affects end results. Basically, if one is having non-experts participate in a research project that requires experts, then what good are the results? Last, some experiments will be reviewed and analyzed.

6.3 Characteristics of Delphi

Delphi is many different things to many different people. One of the originators, Norman Dalkey, identified three core characteristics of Delphi. His claim, along with Olaf Helmer’s, was that combining user anonymity with rounds of controlled feedback, consisting of information and a statistical group response, would have a synergistic effect on decision making, with the potential to let a group of experts build a reservoir of knowledge over time.

6.3.1 Anonymity

One of the most recognized characteristics of the Delphi method is anonymity (Turoff and Hiltz, 1996). There are many reasons for implementing anonymity in most aspects of decision making, but primarily it reduces bias in the influences on a decision (Dalkey, 1967). Different levels of anonymity can be implemented, from pen names all the way to total anonymity, for different levels of disclosure fulfill different needs (Hiltz and Turoff, 1978; Bezilla, 1978).

Bezilla described some reasons for anonymity and stated how it could promote interaction, objectivity and problem solving:

· “Interaction for the free exchange of ideas or reporting of matters without the threat of disclosure of the same to peers or even to the collectors or compilers of the data; that is, anonymity can remove any threat that the privacy of personal data will be compromised.

· Objectivity through the masking of identity can serve to suppress distracting sensory cuing or ad hominem fallacies so that matter being reported or discussed can be considered on its intrinsic merits without regard to personal origin or aspects of origin

· Problem Solving for the total subordination of the individual ego to the group task. Presumably anonymity can be used to suppress individual considerations that might hinder the group’s progress in a mission, e.g., one would not have to worry about peer relations, advancement of unpopular ideas, risk ridicule, etc.” (Bezilla, 1978, page 7).

Later when considering conducting Delphi processes on computer mediated communication systems, Turoff and Hiltz, offered a host of reasons for anonymity. For example,

· “Individuals should not have to commit themselves to initial expressions of an idea that may not turn out to be suitable.

· If an idea turns out to be unsuitable, no one loses face from having been the individual to introduce it.

· Persons of high status are reluctant to produce questionable ideas even when there is some chance they might turn out to be valuable.

· Committing one's name to a concept makes it harder to reject it or change one's mind about it.

· Votes are more frequently changed when the identity of a given voter is not available to the group.

· The consideration of an idea or concept may be biased by who introduced it.

· When ideas are introduced within a group where severe conflicts exist in either "interests" or "values," the consideration of an idea may be biased by knowing that it is produced by someone with whom the individual agrees or disagrees.

· The high social status of an individual contributor may influence others in the group to accept the given concept or idea.

· Conversely, lower status individuals may not introduce ideas, for fear that the ideas will be rejected outright” (1996, page 6).

There are times when total anonymity may be desired, for example during a vote. At other times, however, a vote may need to be validated by knowing who voted which way. There are occasions when users may want to be in a social environment, identified by either their real names or pen names. Further, there may be situations where a user may want multiple pen names as well as the ability to participate anonymously. “The advantage of pen names over total anonymity is that participants can address a reply to a pen name, whereas you cannot send a message to anonymous” (Hiltz and Turoff, 1978, p 95). It is critical that members of the group believe they are working with other credible experts from the field; anonymity may not be desirable at that point (Turoff and Hiltz, 1996). One way to combine anonymity with letting members know ‘the group’ with which they are interacting is to name the identities of the group members while allowing them to use pen names. Members would then know the group they were working with but would not know which individual was behind each pen name. Another factor that may relax the need for anonymity is that the more history a group develops as a social group, the more flexibility it will require concerning levels of anonymity (Turoff and Hiltz, 1996).

Olaf Helmer held that Delphi was about experts having an anonymous debate (1967). Research shows that experts debating anonymously, along with feedback, will give more accurate opinions (Helmer, 1967, Hiltz and Turoff, 1978).

6.3.2 Feedback

Feedback is defined in a variety of forms and is measured along different dimensions. For example, interest in a topic can be gauged by how much feedback is given within a certain time frame; those who respond faster find the topic of greater interest. Feedback can be a stimulus; it can increase learning, can be controlled, and can be provided in a number of formats: 1) responses, 2) graphic visuals, and 3) summarized information (Turoff, et al, 2002). Feedback can be positive or negative, with consequences for decision making and the bias it can create (Tversky and Kahneman, 1974). Finally, feedback can be described as a set of analysis tools operating on two levels: at the individual level, and to aid the individual at the group level.

In Delphi processes, feedback may be given in the form of summarized information in an interactive manner known as rounds. Feedback may also be given in statistical form, showing the interquartile range so that individuals know where they stand on an argument with respect to the other group members. Feedback is given in a controlled manner: Dalkey (1967) defines controlled feedback as a noise reduction device. Noise is any information that is not conducive to productive decision making; it could be group members arguing about trivial matters, or bad information. One must consider the bias that may be reflected in the summarized data and in the filtering that was conducted to reduce the noise.
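As a minimal sketch (not taken from any of the cited studies), the statistical feedback described above can be computed as the group median together with the interquartile range containing the middle 50% of opinions. The panel's estimates below are invented for illustration:

```python
import statistics

def round_feedback(estimates):
    """Summarize one Delphi round as the median and the interquartile range."""
    q1, med, q3 = statistics.quantiles(estimates, n=4)  # the three quartile cut points
    return {"median": med, "iqr": (q1, q3)}

# Hypothetical panel of nine experts estimating a date (year of occurrence):
estimates = [1995, 1998, 2000, 2001, 2002, 2004, 2005, 2010, 2025]
print(round_feedback(estimates))
```

In a real exercise, this summary (rather than the raw responses) would be returned to the panel at the start of the next round.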

Dalkey (1969) stated that allowing participants to ask questions during the rounds allowed variability in the diversity of the questions put forth. This also allowed for subgroups of experts to be more articulate about, and responsive to, the information required.

Feedback comes in many forms, e.g., tables, text and graphics. It is important to determine which presentations are most beneficial to the decision makers. Graphics have proven useful, as they are a quick way for experts to take in information and determine its meaning (Hiltz and Turoff, 1996).

Another powerful force behind feedback, especially given computer mediated communication systems and the Internet, is how quickly information can be updated (Hiltz and Turoff 1996, White, et al, 2007b). The power that real-time data can give a group of decision makers working on a time-critical problem can make the difference between life and death, be it for humans or corporations (White, Plotnick, Aadams-Moring, Turoff and Hiltz, 2008a, White, Hiltz, and Turoff, 2008b).

6.3.3 Statistical Group Response

Norman Dalkey wrote that the two prior characteristics, anonymity and feedback, along with a statistical group consensus, are what make Delphi different from the rest of the decision making strategies being offered (Dalkey, 1967).

However, the application of such a mathematical technique will not produce the qualitative model that represents the collective judgment of all the experts involved. It is that model which is important to understanding the projection and what actions can be taken to influence changes in the trend or in understanding the variation in the projection of the trend (Hiltz and Turoff, 1996, Turoff, et al, 2002).

Most of the original Delphi experiments used the median as their ‘statistical’ group response. As stated previously, the interquartile range would be sent back to the experts every round, indicating the middle 50% of the group’s opinions.

One statistical group response method is the multi-dimensional scaling approach; however, multiple dimensions can confound the results of one variable with those of another. Hiltz and Turoff (1996) proposed an alternative implementation of an older method, Thurstone’s Law of Comparative Judgment (Turoff, et al, 2002). Although paired comparisons can vary in how they are calculated into end results, Thurstone uses a unit normal table as a way to average out the distribution (Thurstone, 1927). This method is presently being tested for its fit to the given situation; however, many more studies need to be conducted using Thurstone to validate it, as no large number of subjects has been used in more recent times (Li, et al, 2003, Plotnick et al, 2007). Even Thurstone’s method can be complicated and can be implemented in various ways, as there are five special cases identified (Thurstone, 1927), each handled differently.
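As a rough illustration of the general approach, not of any specific published implementation, the simplest of Thurstone's special cases (Case V, which assumes equal and uncorrelated discriminal dispersions) converts pairwise preference proportions into interval scale values through the unit normal distribution. The proportion matrix below is hypothetical:

```python
from statistics import NormalDist

def thurstone_case_v(p):
    """Return interval scale values from a matrix of pairwise proportions.

    p[i][j] is the observed proportion of judges preferring item i over item j.
    Each proportion is mapped to a unit-normal deviate (the 'unit normal table'
    step), and an item's scale value is the average of its deviates.
    """
    nd = NormalDist()
    n = len(p)
    scale = []
    for i in range(n):
        z = [nd.inv_cdf(p[i][j]) for j in range(n) if j != i]  # skip the diagonal
        scale.append(sum(z) / len(z))
    return scale

# Three hypothetical items; the row item beats the column item with the given proportion.
p = [[0.5, 0.7, 0.9],
     [0.3, 0.5, 0.8],
     [0.1, 0.2, 0.5]]
print(thurstone_case_v(p))  # item 0 scales highest, item 2 lowest
```

The remaining four special cases relax these assumptions and require correspondingly more elaborate calculations.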

So even the basic core of Delphi, anonymity, feedback and a statistical group response, holds a multitude of paths to follow. These three variables alone make it impossible to standardize any one Delphi method (Hardy, 2004). It could be this very flexibility that hinders the acceptance of any Delphi software developed: it is easy to see how no two Delphi systems will produce the same results given the same scenario, i.e., no replications can be made against other experiments to validate the methodology. However, the baby should not be thrown out with the bath water; rather, stringent explanations of the methods used should be offered to researchers so that the experiments can be validated by various means.

6.4 Delphi Processes

The Delphi process is defined as “the procedures consist of obtaining individual answers to preformulated questions either by questionnaire or some other formal communication technique; iterating the questionnaire one or more times where the information feedback between rounds is carefully controlled by the exercise manager; taking as the group response a statistical aggregate of the final answers” (Dalkey, 1969, pg 6).

In the traditional Delphi processes, all contributions first go to the exercise coordinator, who then integrates all of the information and gives it as feedback to the participants (Hiltz and Turoff, 1996). A coordinator would have to be objective in this endeavor, as there would be a tendency to interject bias into the decision making through the coordinator’s selective feedback and interpretation of the input materials. In computerized Delphi processes, this mediating intervention is not required: a more collaborative approach is taken, and all of the information proposed by group members is evaluated by the group members themselves (White, et al, 2007b). This has many added benefits, one of which is that the data are not interpreted or filtered. Another benefit comes from the collective intelligence that accrues from the process being distributed in this manner (Linstone and Turoff, 1970, Hiltz and Turoff, 1978, Hiltz and Turoff, 1998, Turoff et al, 2002, Li, et al 2003, White et al, 2007).

6.4.1 Early Delphi Processes

Delphi processes are conducted in a number of ways. A key protocol in the process is the round: a cycle of discrete steps that is repeated under some stopping rule. Some processes have more rounds than others, and it is alleged that with enough rounds, the best choice would eventually surface (Helmer, 1967). These rounds are a means of replicating the interactions and information exchange that would occur in a discussion or debate. Descriptions of the original Delphi process are presented, followed by a detailed example.

In November of 1967, Helmer offers the simplest description in his publication, Systematic Use of Expert Opinion:

1. “Have each member of the panel independently write down his own estimate;

2. Reveal the set of estimates but without identifying which was made by whom;

3. Debate openly the pros and cons of various estimates;

4. Then have each person once more independently write down his own (possibly revised) estimate, and accept the median of these as the group’s decision (page 9).”

Published a month prior to this, in October, was a description of the process by Dalkey (1967), in which he elaborates a bit more on where both input and feedback are given.

1. “A typical exercise is initiated by a questionnaire which requests estimates of a set of numerical quantities e.g., dates at which technological possibilities will be realized, or probabilities of realization by given dates, levels of performance, and the like.

2. The results of the first round will be summarized, e.g., as the median and inter-quartile range of the responses, and fed back with a request to revise the first estimates where appropriate.

3. On succeeding rounds, those individuals whose answers deviate markedly from the median are requested to justify their estimates.

4. These justifications are summarized, fed back and counter-arguments elicited.

5. The counter-arguments are in turn fed back and additional reappraisals collected.”

Helmer offers a description in which the group interactions are elaborated upon. These details are important, as they demonstrate how Delphi tried to integrate discussion into the process. Helmer’s explanation was that in Delphi there is an anonymous debate going on among experts. He explains further that:

“In the 2nd round, if the revised answer falls outside the interquartile range, he is required to state briefly why he thought that the event would occur that much earlier than the majority seemed to think. Justifying relatively extreme opinions on the respondents typically has the effect shown in the illustration; those without strong convictions tended to move their estimates closer to the median, while those who felt they had a good argument for a deviant opinion tended to retain their original estimate and defend it.

3rd round, respondents were given a summary of the reasons for the extreme positions, and asked to revise their estimates, given the information. If a respondent’s revised answer is outside the new interquartile range, he was now required to state why he was unpersuaded by the opposing argument.

4th round, counter arguments are presented to the group again, giving rise to one last chance for estimating the date of occurrence. Sometimes, when no convergence toward a narrow interval of values takes place, opinions are seen to polarize around two distinct values, so that two schools of thought regarding a particular issue seem to emerge. This may be an indication that opinions are based on different sets of data or on different interpretations of the same data (Helmer, 1967).”
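The round mechanics Helmer describes, in which respondents whose revised estimates fall outside the interquartile range must justify their positions, can be sketched as follows. The respondent names and estimates here are hypothetical:

```python
import statistics

def outliers_to_justify(estimates):
    """Return the respondents whose estimates lie outside the interquartile range.

    In Helmer's account, these are the panelists asked to state briefly why
    they hold a position deviating from the majority.
    """
    values = list(estimates.values())
    q1, _, q3 = statistics.quantiles(values, n=4)
    return sorted(name for name, est in estimates.items()
                  if est < q1 or est > q3)

# Hypothetical second-round estimates of a date of occurrence:
round_2 = {"A": 1995, "B": 2000, "C": 2001, "D": 2003, "E": 2030}
print(outliers_to_justify(round_2))
```

In the next round, the summarized justifications from these respondents would be fed back to the whole panel along with the updated quartiles.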

Later research confirmed that most disagreements were due to uncertainty in the interpretation of the information. Hence, there was really consensus; through ambiguity in the diction, disagreements appeared to exist, only to be resolved through discussion (Hiltz and Turoff 1978, Turoff and Hiltz 1996, White et al 2007).

In 1978, Fisher, et al, published a study detailing how the process was conducted, giving further examples of how the problem is integrated into the process. They also describe details not presented in either of the aforementioned descriptions. Here is the process, given along with an experiment:

1. “Draw up a list of experts whose opinions we think would be valuable, and mail each expert a questionnaire explaining the nature of the study and asking each person to generate a list of important developments likely to occur in the field of library automation during the next 25 years.

2. From the first-round responses, we would consolidate items and eliminate those suggested items we considered irrelevant, in order to draw up a list of probable developments. We would then mail the second questionnaire and request each respondent to indicate the date when each listed development is likely to be implemented.

3. From the second-round responses, we would calculate the median and the interquartile range (the interval containing the middle 50 percent of responses) for each item and return this information to each respondent asking him/her to consider the second-round response, in light of this new statistical information, and to move either to the interquartile range or to briefly state reasons for remaining outside the range.

4. From the third-round responses, we would supply respondents with new consensus data and a summary of minority opinions and ask the respondents for a final revision of their responses.

5. The final result would be a list of possible future developments in library automation, a consensus (the median and the interquartile range) of the date when each development is likely to be implemented, and a list of dissenting opinions. With this final report, we would hopefully have useful information for long-range planning.”

Last, further insight into the process was given by Turoff and Hiltz (1996). These insights were derived by examining the exchanges between the participants. Information needed to be given to the participants not only to show where an individual stood on a subject, but also to show where they stood relative to the rest of the group. One individual may have great expertise in an area or be completely ignorant of it. Further insight was also sought to determine hidden factors that may bias a group in making a decision, or to explain the results as a consequence of the relationships between group members; for example, half of the group could be Republicans and the other half Democrats. Once identified, these sorts of biases can explain how the results were influenced.

Early Delphi exercises tended to focus on homogeneous groups; later applications produced numerous variations. For example, the Policy Delphi was designed to discover conflicting views about the possible resolutions of policy issues. Further, the summary of the results was sometimes displayed by what type of experts expressed what sort of view relative to other subgroups of experts.

These processes show how many other agendas can be integrated into the Delphi process. This is no trivial set of events: the simple set of procedures as written and the set actually used during an experiment can be quite different. It is the insight that can be gained from this process that deserves special attention. Recently, these processes and rounds have been dismantled and replaced with asynchronous interaction (White, et al, 2007b). This takes the forced rounds of discussion and makes them a more ‘natural fit’ to how humans think. This will be delved into further in a following section.

6.5 Problems with Delphi

The problems with Delphi are numerous and diversified, ranging from the process itself, the statistical methods used, sampling methods, expert selection, and the evaluation of predictions, to the interface design on which computer-based Delphis are run (Sackman 1975, Fisher 1978, Turoff 1999). Given the same situation, it is unlikely that results will be consistent from one methodology to the next. Some of these problems stem from the lack of standards upon which other experiments or replicated studies can be built. This makes it unreliable to compare the results of one study against those of another if they were not conducted with the exact same methodology.

6.5.1 Experiments

During this literature review, it was discovered that there is a lack of detail in the descriptions of Delphi experiments. Delphi methodologies need to be compared in a discrete manner. For example, it may be best to look at specific parts, such as the sampling methods used in selecting expert participants for Delphi studies, and then evaluate how each of those methods was implemented and how it arrived at its end result. Information may be missing for a thorough case-by-case analysis, but this approach would still break Delphi down into characteristics upon which a strong evaluation could be made.

6.5.2 Subjects

Another critique of Delphi is that the subjects used in the experiments are normally students, not experts (Dalkey, 1967, Sackman 1975, Baker, et al 2006). Dalkey described how it would be difficult enough to get a small group of experts to participate; finding a large enough population of experts who can take time from their jobs and who will agree to participate is unlikely. Another problem is the logistics of conducting a Delphi experiment on experts, given scheduling and the collection of data. Yet this is what is required. These logistical problems can be overcome by implementing a Delphi exercise asynchronously and online using a CMC system (White, et al, 2007b).

6.5.3 Consensus

Consensus is defined and achieved using a variety of methodologies in Delphi studies. Consensus can be described a number of ways. Hardy, et al (2004) presented a few questions for consideration.

1. What is the criterion for determining consensus?

2. What is the criterion for assigning importance?

3. How are the items that meet these consensus and importance criteria interpreted?

Questions arise when contemplating the way consensus is defined, as it is reflected in the outcome of the experiment. From this literature review, it is deduced that particular methodologies are implemented given the considerations that must be taken into account to best fit the problem situation. However, comparing end results of Delphi studies appears to be problematic due to the lack of consistency in methodology.

6.6 Bias

Bias can be difficult to minimize in Delphi studies, as it can be present in so many aspects of the method. The sample population of experts can be biased. When groups work together in a Delphi fashion, the members may be from the same organization (Baker et al, 2006). This means that they have been exposed to the same information, have been in the same corporate culture and have received the same propaganda. The sample pool should come from a large group that is selected to represent all geographical areas and subspecialties (Hardy, et al, 2004). The group of experts may also be under a moral influence, where the decisions are a matter of principle and thus color the incoming information (Helmer, 1967).

One of the most recognized characteristics of Delphi is anonymity. Anonymity minimizes bias. It promotes an open environment for honest opinions, where individual’s inhibitions should be lifted and ideas flourish (Turoff and Hiltz 1996, Hardy et al 2004). This is further explored in the chapter on Bias.

6.7 Experts

The Concise Oxford English Dictionary defines an expert as “a person who is very knowledgeable about or skillful in a particular area” (2003). A person can be considered an expert by their position within an organization, by a group of people, or by enough other people deeming them so. Experts are defined as ‘informed individuals’, ‘specialists in the field’ or ‘someone who has knowledge about a specific subject’ (Keeney 2001). Experience in the field can make one considered an expert (Baker, et al 2006). Baker states that experts are defined by having knowledge, but Keeney argues that mere knowledge does not qualify as expertise.

Experts are the reason Delphi is so powerful in situations where information is lacking and intuition must be drawn upon. Helmer held that intuition plays a large role, as expert judgment fills in the unknowns from what is known.

At the core of Delphi is the group of experts who make up and use the system. It is the old ‘input-output’ principle: what goes into the system is what will come out. This makes the process of selecting an expert a critical step, yet it is very loosely defined in research (Keeney, 2001, Baker et al 2006). For a Delphi system to work properly, experts must be selected based on some agreed upon set of criteria that the research community will respect. Problems with Delphi research studies can be found from the start, in the sampling of the group for participation. Given the difficulty of defining expertise, finding a large group of experts from which to draw a random smaller group is very unlikely (Baker, et al 2006). Judging expertise, or defining it, can be the core problem at this stage. Who is an expert, and how is this determined? What qualifies someone as an expert? Solutions to this problem have been identified and elaborated upon by Baker 2006, Sackman 1975 and Keeney 2001.

In the research literature, more detailed information needs to be given on defining expertise and what constitutes an expert: enough detail to justify the definition, followed by the criteria and processes under which the selection is made, so that others can review it (Baker et al, 2006). From this literature review, it is observed that, given a successful experiment, the same method of selection could then be used in future studies and prove a valid means of judgment. It could be further critiqued and refined by other researchers until there is an accepted set of criteria the research community could use as a robust methodology.

6.7.1 Weights

One method for determining a group decision is to give experts weights corresponding to their varying expertise (Helmer, 1967). An expert may be average in some areas and specialized in others; where an expert’s knowledge is specialized, or their experience vast and varied for a unique set of circumstances, they will hold more weight in those areas. If some items are deemed more important than others, they too can be given more weight, which contributes to the best possible decision being made (White et al, 2007a). Other group members could vote on members’ trustworthiness, and that could be an additional factor in increasing the weight of an expert’s opinion (Helmer, 1967). This is along the same lines as a social bookmarking system, where users go to see where experts go online to get their information (Raquel 2007). It follows the logic that some people’s influence is so great that others want to know what choices the experts made. This is also a form of bias, and one reason that anonymity is used, so that all opinions will be considered equally no matter the source.
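A minimal sketch of the weighting idea above: each expert's estimate contributes in proportion to a weight reflecting assessed expertise. The weights and estimates below are invented, and a weighted median could equally be substituted for the weighted mean:

```python
def weighted_group_estimate(opinions):
    """Weighted mean of (estimate, weight) pairs.

    Higher weights (e.g., from self-rating or peer trust votes) pull the
    group response toward the more expert opinions.
    """
    total_weight = sum(w for _, w in opinions)
    return sum(est * w for est, w in opinions) / total_weight

# (estimate, expertise weight) for four hypothetical experts;
# the outlier estimate of 30 carries a low weight.
opinions = [(10, 1), (12, 3), (11, 2), (30, 0.5)]
print(weighted_group_estimate(opinions))
```

Note how the low-weight outlier shifts the result far less than it would shift an unweighted mean.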

6.7.2 Self-Rating

Self-rating research was conducted in Delphi studies to determine if better decisions could be made by subgroups of experts. Comparing those who did not rate themselves as knowledgeable with those who rated themselves as knowing a lot, results showed that more accurate information could be gained by implementing the proper set of self-rating techniques (Helmer 1967, Dalkey et al 1969, Turoff and Hiltz 1996). Hence, those who indicated that they knew more about a particular subject gave better results.

One consideration is how to reflect the way an expert feels about their opinion or expertise in a given situation. The protocol for this structure would have to be modified to fit the situation, for there are different ways for experts to rate themselves. Dalkey and his fellow researchers (1969) described numerous kinds of self-rating methods, including:

1. ‘Ranking the questions in the order of the respondent’s judgment as to his competence to answer them;

2. Furnishing an absolute time of the respondent’s confidence in his answer;

3. Estimating a relative self-confidence with respect to some reference group’.

An example of a set of instructions asking experts to rate themselves is given by Dalkey:

· First, you are asked to rate the questions with respect to the amount of knowledge you feel you have concerning the answer.

· Do this as follows: before giving any answers, look over the first ten questions, and find the one that you feel you know the most about.

· Give this question a 5 rating in the box;

· Then find the one you feel you know the least about, and give it a rating of 1.

· Rate all of the other questions relative to these two, using a scale of 1-5.

This type of self-rating is relative and gives the expert a scale on which to rate how much they feel they know about a particular subject. It has proven an especially promising method where higher areas of uncertainty exist (Helmer 1967).

Self-rating was found to have a further use: with large groups, these ratings could be used to create specialized subgroups to tackle particular subtopics of expertise. This is beneficial, as complex problems need a large group of knowledgeable experts for decision making. These subgroups could work on problems, creating a mesh network of intermingling expertise for attacking more complex problems. Experts can thus work on numerous subproblems, giving particular areas a greater level of concentrated expertise (Dalkey 1969, Turoff, 2007).
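The use of self-ratings to form a more knowledgeable subgroup per question, as described above, might be sketched as follows. The 1-5 scale follows Dalkey's instructions, while the responses and the cutoff of 4 are hypothetical:

```python
import statistics

def subgroup_median(responses, min_rating=4):
    """Median estimate of respondents whose self-rating meets the cutoff.

    The premise from the self-rating studies is that the subgroup of high
    self-raters tends to produce the more accurate group response.
    """
    subgroup = [est for est, rating in responses if rating >= min_rating]
    return statistics.median(subgroup) if subgroup else None

# (estimate, self-rating 1-5) pairs for one hypothetical question:
responses = [(100, 5), (105, 4), (180, 1), (95, 5), (150, 2)]
print(subgroup_median(responses))
```

Repeating this per question lets different subgroups of the panel carry the questions they know best.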

6.8 Experiments

According to some researchers, conducting experiments with the Delphi method is dicey at best (Hardy et al 2004). Research indicates that there are discrepancies in the way experiments are conducted, and a lack of reporting of the design and analytical methods used, with a failure to elaborate on the statistical tests or sampling methods; this makes it difficult to replicate prior studies or to aid first-time experimenters (Sackman 1975, Fisher 1978, Turoff and Hiltz 1996, Hardy et al 2004). Fisher protests that the “failure of the Delphi method to incorporate such elements as standard statistical tests, accepted sampling procedures and replications, leaves the method suspect as a reliable scientific method of prediction” (1978, p68).

Helmer blames other research problems and outcomes on the lack of experts available for experiments. He states, “there is always some doubt as to whether favorable results obtained under such conditions are transferable to the case of high-level specialists for whom the Delphi technique is intended” (Helmer, 1967, page 5). Another view, from Baker (2006), is that some experts will be more likely to participate than others, which implies bias; but this is true for any survey sample.

Turoff and Hiltz claim that there is confusion in human-based Delphi experiments. They state that the problems are due to “a basic lack of knowledge by many people working in these areas as to what was learned in the studies of the Delphi Method about how to properly employ these techniques and their impact on the communication process” (Turoff and Hiltz, 1996, page 2). This leads to many studies covering the same problems and regenerating the same results, merely packaged differently.

A few research groups have published work attempting to remedy this situation. Hardy, et al (2004) in particular dedicated a paper to how to use the Delphi technique, demonstrating how it could be a valid means of extracting expert knowledge and the benefits it could hold in the medical/clinical area. They paid particular attention to detailing all aspects of the experiment in an attempt to set an example. In particular, they found the areas of concern to be:

· Group Composition

· Participant Motivation

· Problem Exploration

· Consensus

· Feedback

This attempt to standardize the Delphi technique targets the research and its administration. Hiltz and Turoff, however, took a social perspective on the system and found that more focus should be placed on the functionality offered to the users. They presented the following list, which describes a set of tools that are the focus of the development of the system (Turoff 1991, Hiltz 1986, 1990):

· Provide each member with new items that they have not yet seen.

· Tally the votes and make the vote distribution viewable when sufficient votes are accumulated.

· Organize a pro list and a con list of arguments about any resolution.

· Allow the individual to view lists of arguments according to the results of the different voting scales (e.g., most valid to least valid arguments).

· Allow the individual to compare opposing arguments.

· Provide status information on how many respondents have dealt with a given resolution or list of arguments.

The design of surveying methods is a concern of researchers. Good designs, along with proper analytical methods, are applicable to the Delphi technique (Hiltz and Turoff, 1996). Hardy’s group made sure to list the importance of the survey items in their experiment so that others could critique their Delphi work (Hardy et al 2004). However, they claimed that other areas, such as what defines consensus, have yet to be developed, so there are no hard rules for selecting which empirical methods can be applied.

Fisher applied numerous statistical analyses to his data. Some examples of the analyses run were:

· “Rank order listing by mean scores

· Standard deviations and median scores

· Frequency distribution of mean scores

· Semi-interquartile ranges for each item

· Percentage of responses changed between rounds

· Direction of changes in relation to the median

· Rank order ratings of items by educators and non-educators and by males and females

· And percentage of scores changed by educators and non-educators and by males and females” (1978, p 67).

Actually, the original Delphi studies provided quite a bit of statistical data. However, details of the experiments in their entirety were not given.
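Two of the round-to-round statistics in Fisher's list, the percentage of responses changed between rounds and the direction of changes relative to the median, can be sketched as follows. The data here are invented for illustration:

```python
import statistics

def change_stats(round_a, round_b):
    """Compare paired responses from two successive rounds.

    Returns the percentage of responses that changed, and how many of
    those changes moved toward the earlier round's median.
    """
    med = statistics.median(round_a)
    changed = [(a, b) for a, b in zip(round_a, round_b) if a != b]
    toward = sum(1 for a, b in changed if abs(b - med) < abs(a - med))
    pct_changed = 100.0 * len(changed) / len(round_a)
    return pct_changed, toward

# Hypothetical paired responses (same five panelists, two rounds):
round_1 = [1990, 2000, 2005, 2010, 2040]
round_2 = [1995, 2000, 2005, 2008, 2020]
print(change_stats(round_1, round_2))
```

A high proportion of changes moving toward the median is the convergence pattern the early studies looked for across rounds.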

Once some standards are put in place, or once practitioners better understand how to interpret the results, Delphi will be regarded as more reliable (Linstone and Turoff, 1975, Hardy, et al, 2004).

6.8.1 Groups

Delphi procedures were designed and are used in a way that eliminates problems that are prevalent in face-to-face (F2F) meetings (Dalkey 1967, Helmer 1967). This is actually why one of the primary characteristics of Delphi, anonymity, came into existence. However, there are other procedures applied in order to counter other problems that exist in typical F2F meetings. Some of the problems with groups are listed:

1. The influence of the dominant individual:

a. Loudest voice

b. Greatest authority (boss)

c. Who talks the most

2. An unwillingness to abandon publicly expressed opinions;

3. The bandwagon effect of majority opinion, groupthink;

4. Noise: irrelevant or redundant material:

a. Gossip amongst group members

b. Old rivals’ recurring arguments that derail talks

c. Members’ frivolous contributions due to being unprepared

d. Digressing conversations that have nothing to do with the key topic

5. Group pressure that puts a premium on compromise (Dalkey 1967, Helmer 1967, Turoff 1996).

The size of the group can influence the study. Larger groups of experts have difficulty working together in an F2F environment (Turoff 1996). The traditional Delphi studies were conducted primarily as questionnaires sent through the mail, which allowed larger groups of experts to participate (Helmer, 1967). Larger groups are desirable for many reasons, and the reasoning goes back to Arrow's theorem: as the size of the group decreases, so does the accuracy of the group's collective judgment, even with self rating (Dalkey 1969).
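
Dalkey's size-accuracy point can be illustrated with a small simulation (all numbers hypothetical): when each expert's estimate is modeled as the true value plus individual noise, the median of a larger panel lands closer to the truth, on average, than the median of a smaller one.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0

def median_error(group_size, trials=2000):
    # Average absolute error of the panel median over many simulated panels.
    errors = []
    for _ in range(trials):
        # Each expert's estimate = true value + individual noise.
        estimates = [random.gauss(TRUE_VALUE, 20) for _ in range(group_size)]
        errors.append(abs(statistics.median(estimates) - TRUE_VALUE))
    return statistics.mean(errors)

for size in (5, 15, 45):
    print(size, round(median_error(size), 2))
```

The printed error shrinks as the panel grows, mirroring the claim that accuracy falls as group size falls.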

6.8.1.1 Large Groups of Experts

The Delphi technique may have greater potential when working with a large heterogeneous group of experts drawn from many academic and professional areas (Turoff, 2007). Dalkey proposed that a larger group be used and referred to it as the Advice Community (1967). The military, industry, and government all consist of large groups of experts who provide information, predictions, and analyses to aid in the formation of policy and in the making of decisions (Dalkey, 1967).

The Delphi technique is a structured way of collecting and aggregating informed judgment from a potentially large group of global experts on specific questions and issues, at lower cost and with greater flexibility in time (Hardy et al 2004). Complex problems are comprised of many smaller problems requiring differing cognitive abilities and problem-solving skills. The chance that the group covers these needs well is greatest when there is a larger pool of experts from which to choose (Turoff 1996). Expert opinion is most needed when information is scarce, and the likelihood of gaining more knowledge increases with the size of the group (Dalkey, 1967).

Other researchers believe the opposite, that Delphi studies should be small, consisting of no more than 20 participants (Baker et al 2006). This is difficult to achieve with a large heterogeneous sample. However, if a complex problem is broken down into its smallest manageable subcomponents, smaller groups may be used and may be best. Studies show that when smaller groups of more highly qualified experts work together, they produce more accurate results than experts who are not highly specialized in that problem area (Dalkey, 1969).

6.8.1.2 Subgroups of Experts

The Advice Community Dalkey refers to is a wealth of information stored within larger heterogeneous groups: "in-house" advisors, external or outsourced consultants from academia and other industries, and any other group pertinent to the problem facing the decision maker (Dalkey, 1967). Helmer elaborated on this concept with the model of a hierarchical panel structure, describing panels as subgroups of experts working collaboratively in decision making. Using a divide-and-conquer approach, the problem would be broken down into its components; panels of expert subgroups would then work bottom-up, building their way back up the decision ladder (Helmer, 1967). In addition, multiple panels might better reflect the stakeholders' interests (Hardy et al 2004).

These subgroups could be identified in a number of ways, one being through the experts themselves by allowing them to rate their own level of expertise (refer to the section on Self Rating for a more in-depth discussion). It has been shown that these subgroups of experts can give a more accurate decision than the general expert population (Dalkey, 1969). Studies at the RAND Corporation confirmed that subgroups of experts identified by self rating could be more accurate, but only if the following held true:

1. The difference in average self rating between the subgroups should be substantial;

2. The size of the subgroups should be substantial for both the higher and lower self rating subgroups.
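
The two RAND conditions can be read as a simple screening rule. The sketch below (hypothetical panel data and thresholds, not RAND's procedure) keeps the high self-rating subgroup's median only when both conditions hold, and otherwise falls back to the whole panel:

```python
import statistics

# Hypothetical (self_rating on a 1-10 scale, quantity estimate) pairs.
panel = [(9, 104), (8, 98), (9, 101), (3, 140), (2, 60), (4, 125),
         (8, 97), (2, 150), (9, 103), (3, 70)]

CUTOFF = 7      # self-rating threshold separating the subgroups (assumed)
MIN_SIZE = 3    # condition 2: both subgroups must be substantial (assumed)
MIN_GAP = 2.0   # condition 1: required gap in average self-rating (assumed)

high = [p for p in panel if p[0] >= CUTOFF]
low = [p for p in panel if p[0] < CUTOFF]

gap = statistics.mean(r for r, _ in high) - statistics.mean(r for r, _ in low)
if len(high) >= MIN_SIZE and len(low) >= MIN_SIZE and gap >= MIN_GAP:
    # Both conditions met: trust the high self-rating subgroup.
    group_estimate = statistics.median(e for _, e in high)
else:
    # Otherwise use the full panel's median.
    group_estimate = statistics.median(e for _, e in panel)
print(group_estimate)
```

The cutoff and thresholds are illustrative; RAND expressed both conditions qualitatively ("substantial") rather than as fixed numbers.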

Hiltz and Turoff (1996) describe how the computer aids in supporting a multiple-group environment, especially among larger populations of groups working toward a common goal. In 1967, Helmer toyed with the idea of distributed groups and what he referred to as 'grandiose future applications', which are now commonplace with the Internet. The power of such an idea lies behind collaborative judgment in emergency response, where a large group of emergency professionals and administrators work together globally through computer-mediated communication systems to better manage a large-scale disaster, creating a global collective intelligence (White et al 2007a; Turoff et al 2007).

6.8.2 Rounds

Originally, rounds were used in Delphi studies with one goal in mind: consensus. Researchers traditionally use survey methods in rounds to structure the formal process (Hiltz and Turoff 1996, Hardy et al 2004, Keeney 2001).

The rounds provide a structure that brings three factors to bear on convergence (Dalkey, 1967):

1. Social pressure

2. Rethinking the problem

3. Transfer of information during feedback.

One observation is that although Delphi studies use techniques to remove bias, they also employ techniques, such as social pressure toward consensus, that interject bias back into the process. Different studies use different numbers of rounds, usually three or four. The number of rounds reflects an attempt to recreate the information flow of a discussion: participants need enough rounds to make arguments, counter-argue, and rebut, which typically takes four rounds. Helmer stated that, given enough rounds, any problem could reach consensus (1967).
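
In a computer-based Delphi the fixed round count can be replaced by a stopping rule. One possibility (a sketch, with an assumed threshold, not a rule from the literature) is to end the process once the fraction of participants who change their answers between rounds falls below a threshold:

```python
# Stop once fewer than 15% of participants change their answer between rounds.
def stable(prev_round, this_round, threshold=0.15):
    changed = sum(1 for a, b in zip(prev_round, this_round) if a != b)
    return changed / len(this_round) <= threshold

# Hypothetical answers from eight participants over successive rounds.
rounds = [
    [3, 5, 2, 4, 4, 1, 5, 3],
    [4, 4, 3, 4, 4, 2, 4, 3],   # many shifts after feedback
    [4, 4, 4, 4, 4, 2, 4, 3],   # one shift: below threshold, so stop here
    [4, 4, 4, 4, 4, 2, 4, 3],
]

for n in range(1, len(rounds)):
    if stable(rounds[n - 1], rounds[n]):
        print("stop after round", n + 1)
        break
```

A stability rule of this kind lets the number of rounds follow the group's actual behavior instead of a preset count of three or four.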

Now that Delphi techniques can be applied in computer-based environments, more dynamic solutions are available. These advances eliminate the need for the structure imposed by rounds, because communication can be asynchronous. Human intervention is no longer needed, continuous feedback is feasible, and people can interact with any part of the process at any time from the comfort of their homes (Turoff and Hiltz 1999; White et al 2007b).

6.8.3 Asynchronous

Delphi was offered a new opportunity with the advent of computer-mediated communication systems, especially asynchronous environments (Hiltz and Turoff, 1978; Rice and Associates, 1984; Turoff, 1989; Turoff, 1991; Turoff, 1996). These give experts the advantage of time to mull over a given problem, allowing better use of their analytical skills (Hardy et al, 2004). Back in 1967, Helmer saw this potential when he wrote, "it is easy to imagine that for important decisions simultaneous consultation with experts in geographically distinct locations via a nation-wide or even a world-wide computer network may become a matter of routine" (page 8).

Turoff recognized this need, describing how paper-and-pencil Delphis were restricted to a top-down/bottom-up dichotomy rather than parallelism, and arguing that experts should be able to tackle whatever part of the problem they felt naturally compelled to address at any moment (1996). Helmer likewise stated that, given enough anonymous debate, the problem area may resolve and lead to true consensus. The equivalent claim for asynchronous interaction is that it raises the likelihood that areas of disagreement will be tracked down and eliminated, thus creating a true consensus (Helmer, 1967).

The implementation of asynchronous communication has a profound effect on the process and should be the single most important consideration in designing Delphi processes (Turoff, 1996). Turoff further elaborates that asynchronous interaction has two properties:

1. A person may choose to participate in the group communication process when they feel they want to.

2. A person may choose to contribute to that aspect of the problem to which they feel best able to contribute.

This best reflects how the human mind works and is most conducive to good results (White et al 2007b; White et al 2008b).

6.8.4 Uncertainty

Uncertainty arises in two different situations here: 1) when information is missing, and 2) when interpretations differ.

6.8.4.1 Missing Data

Delphi is inherently a desirable technique when information is missing, uncertainty surrounds the decision being made, and members of the team must use intuition to help fill in the gaps (Baker et al 2006, Hardy et al 2004, Fisher 1978, Helmer 1966). This is the reason for using highly qualified judges, i.e. experts, for decision making. It is not so much the explicit knowledge that comes from the collaborative efforts of this group, but the tacit knowledge, harnessed through collective intelligence, that can fill these gaps when decisions must be made with little to go on (White et al 2008b).

6.8.4.2 Interpretation

The other area of uncertainty of concern in Delphi techniques, or any group communication process, is how different members of the group may interpret or misinterpret the problem, the vocabulary used, or any other communications (Baker et al 2006, Hardy et al 2004, McDonnell 1996, White et al 2007a). This can surface as disagreement between group members. It is through discussion that understanding increases and ambiguity decreases. Once these lexicon issues are settled, agreement and consensus are more likely to occur (White et al 2007a).

This issue was studied in a pilot in which a group used a voting Delphi method to check that members were interpreting both the problems and the solutions consistently. The pilot confirmed that through discussion ambiguity lessened, and the group performed better than a group that did not have to agree on such issues (White et al 2007).

6.9 Conclusions

One of the most significant research issues was, and remains, that Delphi systems are tested primarily with subjects who are not experts in a field, so results are difficult to evaluate properly. Delphi systems are also early in exploiting the leveraging power of the Internet, which could open a knowledge bank usable by experts regardless of geographic location. Utilizing Delphi and experts' knowledge on a global level could change the way certain events, such as catastrophes, are handled. Because this knowledge is not neatly formalized but distributed in the minds of many people, it is necessary to develop methods, of which the Delphi technique is one, for collecting the opinions of individual experts and combining them into judgments that have operational utility to policy makers (Helmer, 1967).

Although the Delphi technique has been around for quite some time, numerous research issues remain. In 1996, Hiltz and Turoff outlined a range of objectives, detailing characteristics that still have not been implemented in systems. These are as follows (Hiltz and Turoff, 1996, p. 12):

· Improve the understanding of the participants through analysis of subjective judgments to produce a clear presentation of the range of views and considerations.

· Detect hidden disagreements and judgmental biases that should be exposed for further clarification.

· Detect missing information or cases of ambiguity in interpretation by different participants.

· Allow the examination of very complex situations that can only be summarized by analysis procedures.

· Detect patterns of information and of sub-group positions.

· Detect critical items that need to be focused upon.

The primary research issue, I believe, is this: can the Delphi technique generate the best group opinion given a particular set of methods implemented along the way? Which methods are best to implement, and how should they be evaluated? Which forms of feedback will give both the individual and the group the best information upon which to make the best decision?

Norman Dalkey wrote the following 40 years ago, and it still holds true today: "There is a very large field waiting for the plough" (1967, page 10).

CHAPTER 7

WHAT WE KNOW AND WHAT WE DON’T KNOW

7.1 Introduction

A great deal of research has been dedicated to group decision making. The Internet and computer technology have created alternative ways for people to interact, and research shows a host of advantages where there used to be problems. The Internet allows information to be transferred more quickly and efficiently, and groups can interact asynchronously and from different geographic locations. Web 2.0 technologies, such as social networks and wikis, are used for mass collaboration. If there were this sort of mass collaboration among experts, wouldn't it make for a better collective intelligence? This question lies at the heart of this student's research effort. The many subproblems and issues involved are explored further in this chapter.

Appendix of Delphi Studies Summary

References

1. Alter, S. Decision Support Systems: Current Practice and Continuing Challenges - 1980 - Addison Wesley Publishing Company.

2. Arrow, K. Social Choice and Individual Values, Cowles Commission Monograph 12, New York: John Wiley & Sons, 1951.

3. Baker, J., Lovell, K., and Harris, N. How expert are the experts? An exploration of the concept of ‘expert’ within Delphi panel techniques. NurseResearcher, 2006, 14, 1. pp 59 – 70.

4. Bard, J.F., and Sousk, S.F. A Tradeoff Analysis for Rough Terrain Cargo Handlers Using the AHP: An Example of Group Decision Making. IEEE Transactions on Engineering Management, Vol. 37, No. 3, August 1990.

6. Beck, M.P., and B.W. Lin, "Some Heuristics for the Consensus Ranking Problem," Computers and Operations Research 10(1), 1-7, 1983.

7. Benbunan-Fich, R. and Koufaris, M. Understanding the Sustainability of Social Bookmarking Systems. International Conference on Information Systems, Quebec, Pre-ICIS Sixth Workshop on e-Business (WeB 2007).

8. Benbunan-Fich, R. and Koufaris, M. Motivations and Contribution Behavior in Social Bookmarking Systems: An Empirical Investigation. Electronic Markets – The International Journal, 2008.

9. Berg, Sven. Condorcet's jury theorem, dependency among jurors, Social Choice and Welfare, Springer, Berlin. Vol. 10, No. 1, 1993.

10. Bezilla, R. Selected Aspects of Privacy, Confidentiality and Anonymity in Computerized Conferencing. RR#11, Computerized Conferencing and Communication Center, 1978. (referenced 12/12/2007 http://archives.njit.edu/vol01/cccc-materials/njit-cccc-rr-011/njit-cccc-rr-011.pdf)

11. Bonczek, R. Foundations of Decision Support Systems, CW Holsapple, AB Whinston - 1981 - Academic Press.

12. Bostrom, R.P., Anson, R., Clawson, V.K. (1993), 'Group Facilitation and Group Support Systems'. Group Support Systems: New Perspectives, Macmillan, 146-148.

13. Bottom, Willam, Ladha, Drishna and Miller, Gary. Propagation of Individual Bias through Group Judgment: Error in the Treatment of Asymmetrically Informative Signals. The Journal of Risk and Uncertainty, 25:2; 147-163, 2002.

14. Bower, Bruce. Simple Minds, Smart Choices. ScienceNewsOnline. Volume 155, No. 22, May 29, 1999. www.sciencenews.org/sn_arc99/5_29_99/bob2.htm (referenced 11/28/2007)

15. Buchanan, J., “A Two-phase Interactive Solution Method for Multiple Objective Programming Problems," IEEE Transactions on Systems, Man and Cybernetics 21(4), 743-749, 1991.

16. Campbell, D.J. Task Complexity: A Review and Analysis. Academy of Management Review, Vol. 13, No. 1, pp. 40-52.

17. Cambridge Definition: referenced 12/02/2007 http://dictionary.cambridge.org/define.asp?key=7255&dict=CALD

18. Churchman, C.W. The Design of Inquiry Systems. New York: Basic Books, 1971.

19. Clawson, V.K., Bostrom, R.P. (1996), ' Research Driven Facilitation Training for Computer Supported Environments'. Group Decision and Negotiation, No. 1, 7-29.

20. Clawson, V.K., Bostrom, R.P. AND ANSON, R. (1993). 'The Role of the Facilitator in Computer-Supported Meetings.' Small Group Research, 24(4), 547-565.

21. Condorcet, Marie Jean Antoine Nicolas de Caritat, Marquis de. 1976. "Essay on the Application of Mathematics to the Theory of Decision Making." In Keith M. Baker (ed.), Condorcet: Selected Writings. Indianapolis: Bobbs-Merrill.

22. Conklin, J. and Weil, W. Wicked Problems: Naming the Pain in Organizations. 3M Reading Room Research Center, 2005. (retrieved 3/31/2008 http://www.leanconstruction.org/pdf/wicked.pdf)

23. Conklin, J. Dialogue Mapping: Building Shared Understanding of Wicked Problems. Wiley, November 18, 2005, ISBN: 0470017686.

24. Cook, W.D., and M. Kress, "Ordinal Ranking with Intensity of Preference," Management Science 31(1), 26-32, 1985.

25. Cook, W.D., and L.M. Seiford. "Priority Ranking and Consensus Formation," Management Science 24(16), 1721-1732, (1978).

26. Dalkey, N. Delphi. The RAND Corporation, Second Symposium on Long-Range Forecasting and Planning, Alamogordo, New Mexico, October 11-12, 1967.

27. Dalkey, N., Brown, B. and Cochran, S. The Delphi Method, III: Use of Self Ratings to Improve Group Estimates. United States Air Force Project Rand, November 1969.

28. DeSanctis , G., Gallupe, R.B. (1987), 'A Foundation for the study of Group Decision Support Systems'. Management Science, 33(5), 589-609.

29. Dickson, G.W., Senn, J.A., and Chervany, N.L. Research in management information systems: The Minnesota experiments. Manage. Sci. 23, 9 (Mar. 1977), 973-921.

30. Dickson, G.W., Lee, J.E., Robinson, L., and Heath, R. Observations on GDSS Interaction: Chauffeured, Facilitated, and User-Driven Systems. 22nd HICSS, 1989, Maui, HI, pp. 337-343.

31. Dickson, G., Limayem, M., Lee Partridge J., DeSanctis , G. (1996), 'Facilitating Computer Supported Meetings: A Cumulative Analysis In A Multiple Criteria Task Environment'. Group Decision and Negotiation, 5(1 ), 51-72.

32. Digh, P., Global Literacies: Lessons on Business Leadership and National Cultures (Simon & Schuster 2000).

33. Dictionary.com referenced 12/02/2007 (http://dictionary.reference.com/browse/bias)

34. Drucker, P.F., Hammond, J., Keeney, R., Raiffa, H., and Hayashi, A. Harvard Business Review on Decision Making. Harvard Business School Press, May 2001.

35. Eden, C. (1990). 'The Unfolding Nature of Group Decision Support. In C. Eden, J. Radford (Eds.), Tackling Strategic Problems-The Role of Group Decision Support. Sage.

36. Eom, Sean. Decision Support Systems, International Encyclopedia of Business and Management, 2nd Ed, Edited by Malcolm Warner, International Thomson Business Publishing Co, London, London. England, 2001.

37. Ericsson, K. A., & Staszewski, J. J. (1989). Skilled memory and expertise: Mechanisms of exceptional performance. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 235-267). Hillsdale, NJ: Lawrence Erlbaum.

38. Fisher, R.G. The Delphi Method: A Description, The Journal of Academic Librarianship, vol 4, no 2, p 64-70, 1978.

39. Fjermestad, J., Hiltz, S.R. (1998). An Assessment of Group Support Systems Experimental Research: Methodology and Results, Journal of Management Information Systems, 15 (3), 7-149.

40. Fontanie, M. Keeping Communities of Practice Afloat: Understanding and fostering roles in communities. Knowledge Management Review, 4, 4, Sept/Oct 2001.

41. Gehani, N. The Database Book Principles & Practice Using MySQL. Silicon Press, 2007.

42. George, J., Dennis, A., Nunamaker, J. (1992), 'An Experimental Investigation of Facilitation in an EMS Decision Room'. Group Decision and Negotiation, No. 1, 57-70.

43. Gigerenzer, Gerd and Todd, Peter and the ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

44. Griffith, T., Fuller M., Northcraft G. (1998), 'Facilitator Influence in Group Support Systems'. Information Systems Research, 9( 1 ), 20-36.

45. Hall, W.A., and Y.Y. Haimes, "The Surrogate Worth Trade-Off Method with Multiple Decision-Makers." In M, Zeleny, eds., Multiple Criteria Decision Making. Berlin: Springer-Verlag, 1976.

46. Hamburg, Morris. Statistical Analysis for Decision Making. The Harbrace Series in Business and Economics, 1970.

47. Hardy, D., O’Brien, A.P., Gaskin, C., O’Brien A.J., Morrison-Ngatai, E., Skews, G., Ryan, T. and McNulty, N. Practical application of the Delphi technique in bicultural mental health nursing study in New Zealand, Methodological Issues In Nursing Research, Journal of Advanced Nursing, 46(1), 95-109. Blackwell Publishing, 2004.

48. Helmer, Olaf. Systematic Use of Expert Opinion. The RAND Corporation, Santa Monica, California. Paper was prepared for AFAG Board Meeting, November 1967.

49. Heron, J. (1989). The Facilitator's Handbook, Kogan Page.

50. Hill, R.J. A Note on the Inconsistency in Paired Comparison Judgments. American Sociological Review 18:418-440, Oct. 1953.

51. Hiltz, Starr Roxanne and Turoff, Murray. The Network Nation. MIT Press, 1978.

52. Hiltz, S.R., Johnson, K., and Turoff, M. The effects of formal human leadership and computer-generated decision aids on problem solving via computer: A controlled experiment. Res. Rep. 18, Computerized Conferencing and Communications Center, New Jersey Institute of Technology, Newark, 1982.

53. Hiltz, S.R., Online Communities: A Case Study of the Office of the Future, Ablex Press, 1984.

54. Hiltz, S.R. and Turoff, M. , Structuring Computer-Mediated Communication Systems to Avoid Information Overload. CACM, July 1985, Volume 28, Number 7.

55. Hiltz, S.R., Dufner, D., Fjermestad, J., Kim, Y., Ocker, R., Rana, A., and Turoff, M. Distributed Group Support Systems: Theory Development and Experimentation, Book chapter for: Olsen, B.M., Smith, J.B. and Malone, T., eds., Coordination Theory and Collaboration Technology, Hillsdale NJ: Lawrence Erlbaum Associates, 1996.

56. Horn, R.E. Knowledge mapping for complex social messes. A speech to the Packard Foundation Conference on Knowledge Management (http://www.stanford.edu/~rhorn/SpchPackard.html), 2001.

57. Huang, H. and Ocker, R.L. Preliminary Insights into the In-Group/Out-Group Effect in Partially Distributed Teams: An Analysis of Participant Reflections, SIGMIS-CPR’06, April 13–15, 2006, Claremont, California, USA.

58. Huber. G. Organizational information systems: Determinants of their performance and behavior. Manage. Sci. 28, 2 (Feb. 1982), 138-153.

59. Huber, G.P. (1984). "Issues in the Design of Group Decision Support Systems," MIS Quarterly 8(3), 195-204.

60. Ichikawa, A., ed. (1980). Theory and Method for Multi-Objective Decision, Soc, of Instru. & Control Engr. (in Japanese).

61. Islei, G., and G. Lockett. (1991). "Group Decision Making: Suppositions and Practice." Socio-Economic Planning Sciences 25(1), 67-81.

62. Isermann, H. (1984). Investment and Financial Planning in a General Partnerships. In M. Grauer and A.P. Wierbicki, eds., Interactive Decision Analysis. Berlin: Springer-Verlag.

63. Iz, Peri H. and Gardiner, Lorraine R. Analysis of Multiple Criteria Decision Support Systems for Cooperative Groups. Group Decision and Negotiation, 2:61-79 (1993) Kluwer Academic Publishers.

64. Keen, P. and Morton, M. Decision support systems: an organizational perspective, 1978 - Addison-Wesley.

65. King, D. (1993) 'Intelligent support systems: art, augmentation, and agents', in R.H. Sprague, Jr and H.J. Watson (eds), Decision Support Systems: Putting Theory into Practice, 3rd edn, Englewood Cliffs, NJ: Prentice Hall.

67. Kok, M. (1986). The Interface with Decision Makers and Some Experimental Results in Interactive Multiple Objective Programming Methods, European Journal of Operational Research 26(1), 96-107.

68. Kok, M., and F.A. Lootsma. (1985). Pairwise-Comparison Methods in Multiple Objective Programming with Applications in a Long-Term Energy-Planning Model, European Journal of Operational Research 22(1), 44-45.

69. Korhonen, P., and J. Wallenius. (1990). "Supporting Individuals in Group Decision Making," Theory and Decision 28(3), 313-329.

70. Li, Z., Design and Evaluation of a Voting Tool in a Collaborative Environment, PhD dissertation, 2003, IS Dept of NJIT. http://www.library.njit.edu/etd/index.cfm

71. Liang, G., and M. Wang. (1991). "A Fuzzy Multi-Criteria Decision-Making Method for Facility Site Location," International Journal of Production Research 29(11), 2313-2330.

72. Limayem, M., Lee-Partridge, J., Dickson. G., DeSanctis , G. (1993). 'Enhancing GDSS Effectiveness: Automated versus Human Facilitation', Proceedings of the 26th Annual Hawaii International Conference on System Sciences. IEEE [27] Society Press. Los Alamitos. CA.

73. Limayem, M. and DeSanctis, G. Providing Decisional Guidance for Multicriteria Decision Making in Groups. Information Systems Research, Vol. 22, No. 4, pp. 386-401.

74. Limayem, M. Human Versus Automated Facilitation in the GSS Context. The DATA BASE for Advances in Information Systems – Spring-Summer 2006, Vol. 37, Nos 2&3.

75. Lin, F. and Hsueh, C. Knowledge Map Creation and Maintenance for Virtual Communities of Practice. Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS’03).

76. Lindblom, Charles. The Science of “Muddling Through.” Public Administrative Review, Vol. 19, Spring 1959, pp. 78 - 88.

77. Lindblom, Charles. Still Muddling, Not Yet Through. Public Administration Review, Vol. 39, No. 6 (Nov. - Dec., 1979), pp. 517-526

78. Linstone, H. and Turoff, M. (eds.) (1975) The Delphi Method: Techniques and Applications, Addison Wesley Advanced Book Program. (Online version can be accessed via http://is.njit.edu/turoff)

79. Lootsma, F.A. (1988). "Numerical Scaling of Human Judgement in Pairwise-Comparison Methods for Fuzzy Multicriteria Decision Analysis." In: G. Mitra, eds., Mathematical Models.for DecisionSupport. Berlin: Springer-Verlag.

80. Matthews, Lerow, Reece, Wendy, and Burggraf, Linda. Estimating production potentials: Expert bias in applied decision making. Department of Energy's (DOE) Information Bridge: DOE Scientific and Technical Information. Engineering Psychology & Cognitive Ergonomics. October 28, 1998 – October 30, 1998.

81. Mavrotas, G., Generation of efficient solutions in Multiobjective Mathematical Programming problems using GAMS. Effective implementation of the ε-constraint method. Retrieved April 18, 2008(http://www.gams.com/~erwin/book/lp.pdf).

82. McGrath, J.E., Groups: Interaction and Performance, Prentice Hall, Englewood Cliffs, NJ, 1984.

83. McNurlin, Barbara C. and Sprague, Ralph H. Jr., Information Systems Management In Practice, 7th ed. Pearson Prentice Hall, 2006.

84. Merriam-Webster Definition:retrieved 12/02/2007 http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=bias

85. Miller, CA. Information input overload. In Proceedings of the Conference on Self-Organizing Systems, M.C. Yovits. G.T. Jacobi. and G.D. Goldstein, Eds. Spartan Books, Washington, 1962.

86. Miranda, E. Improving Subjective Estimates Using Paired Comparisons. IEEE Software, Jan-Feb., 2001.

87. Moore, Omar Khayyam. Divination – A New Perspective. American Anthropologist, 59, 1957.

88. Murphy, Priscilla. Affiliation Bias and Expert Disagreement in Framing the Nicotine Addiction Debate. Science, Technology, & Human Values, Vol. 26 No. 3, Summer 2001 278-299, 2001 Sage Publications.

89. Nakayama, H., et al. (1979). "Methodology for Group Decision Support with an Application to Assessment of Residential Environment," IEEE Transactions on Systems, Man, and Cybernetics 9(9), 477-485.

90. Niederman, F., Beise, C.M., Beranek, P.M. (1996), 'Issues and Concerns about Computer-Supported Meetings: The Facilitator's Perspective'. MIS Quarterly, 20(1), 1-22.

91. Nutt, Paul. Why Decisions Fail. Berrett-Koehler Publishers; 1 edition (July 15, 2002).

92. Pitt, J, Kamara, L. Sergot, M and Artikis, A. Formalization of a Voting Protocol for Virtual Organizations. AAMAS05, July 25-29, 2005, Utrecht, Netherlands.

93. Plotnick, L. Gomez, E.A, White, C. and Turoff, M. A Dynamic Thurstonian Method Utilized for Interpretation of Alert Levels and Public Perception. Proceedings of the 4th Annual ISCRAM, Delft, Netherlands, 2007.

94. Plotnick, L., Ocker, R., Hiltz, S.R., and Rossen, M.B. Leadership Roles and Communication Issues in Partially Distributed Emergency Response Software Development Teams: A Pilot Study. HICSS, 2008.

95. Preece, J. (2000). Online Communities: Designing Usability, Supporting Sociability. Chichester, UK: John Wiley & Sons.

96. Rana, Ajaz, R., Turoff, Murray and Hiltz, Starr Roxanne. Task and Technology Interaction (TTI): A Theory of Technological Support for Group Tasks, HICSS 1997.

97. Rathwell, M.A. and Burns, A. (1985) 'Information systems support for group planning and decision making activities', MIS Quarterly 9 (3): 254–71.

99. Reagan-Cirincione, P. Improving the Accuracy of Group Judgment: A Process Intervention Combining Group Facilitation Social Judgment Analysis and Information Technology, Organizational Behavior and Human Decision Processes, Vol. 58, No. 2, pp. 246-270.

100. Rittel, H., and M. Webber; "Dilemmas in a General Theory of Planning" pp 155-169, Policy Sciences, Vol. 4, Elsevier Scientific Publishing Company, Inc., Amsterdam, 1973.

101. Rob, P. and Coronel, C. Database Systems Design, Implementation, and Management, 7th Ed., Thomson Publisher, 2007.

102. Robertson, S. Voter-Centered Design: Toward A Voter Decision Support System, ACM Transactions on Computer-Human Interaction, Vol. 12, No. 2, June 2005, Pages 263-292.

103. Rosson, M.B. and Carroll, J.M. Usability Engineering: Scenario-Based Development of Human-Computer Interaction, Morgan Kaufmann Publishers, 2002.

104. Rouse, W.B. Design of man-computer interfaces for on-line interactive systems. Proc. IEEE 63, 6 (June 1975). 1347-857.

105. Sackman, H. 1975. Delphi Critique. Lexington, Mass.: D.C. Heath.

106. Saaty, T.L. (1988). Decision Making for Leaders. Pittsburgh, PA: RWS Publications.

107. Sheridan, T.B., and Ferrell, W.R. Man-Machine Systems: Information, Control, and Decision Models of Human Performance. MIT Press, Cambridge, Mass., 1974.

108. Simon, H.A. (1960) The New Science of Management Decision, New York: Harper & Row.

109. Simon, Herbert. Sciences of the Artificial. The MIT Press; 1st edition (1969).

110. Simon, Herbert and Associates. Decision Making and Problem Solving, Research Briefings 1986: Report of the Research Briefing Panel on Decision Making and Problem Solving by the National Academy of Sciences. Published by National Academy Press, Washington, DC.

111. Soanes, C. and Stevenson, A., Concise Oxford English Dictionary, 2003.

112. Sprague, R. H. and E. D. Carlson (1982). Building effective decision support systems. Englewood Cliffs, N.J., Prentice-Hall.

113. Steuer, R.E., and E.U. Choo. (1983). "An Interactive Weighted Tchebycheff Procedure for Multiple Objective Programming," Mathematical Programming 26(1), 326-344.

114. Stevens, C.H. Many-to-many communication. CISR 72. Center for Information Systems Research, MIT, Cambridge, Mass., 1981.

115. Tarmizi, H., de Vreede, G.J. and Zigurs, Ilze. Identifying Challenges for Facilitation in Communities of Practice. 39th HICSS, 2006.

116. Tabarrok, A. Arrow’s Impossibility Theorem. Department of Economics, George Mason University. Retrieved on January 17, 2008. (http://mason.gmu.edu/~atabarro/arrowstheorem.pdf)

117. Thurstone, L. L. A Law of Comparative Judgment. Psychological Review, 34, pp.273–286, 1927a.

118. Thurstone, L.L. The Method of Paired Comparisons for Social Values, Journal of Abnormal and Social Psychology, 21, 384-400, 1927b.

119. Todd, P.M. and Gigerenzer, G. Simple Heuristics That Make Us Smart. Oxford University Press, 1999.

120. Turoff, M. and Hiltz, S.R. Computer Support for Group Versus Individual Decisions. IEEE Transactions on Communications, Vol. 30, No. 1 (Jan. 1982), pp. 82-91.

121. Turoff, M., Computer Mediated Communication Requirements for Group Support, Organizational Computing, (1:1), 1991, 85-113.

122. Turoff, M., Rao, U., and Hiltz, S.R. (1991). Collaborative Hypertext in Computer Mediated Communications. Reprinted from HICSS, Vol. IV, Hawaii, January 8-11, 1991.

123. Turoff, M., Hiltz, S.R., Bahgat, A.N.F. and Rana, A.R. Distributed Group Support Systems. MIS Quarterly, Vol. 17, No. 4 (Dec. 1993), pp. 399-417.

124. Turoff, M. and Hiltz, S.R. Computer Based Delphi Processes. Invited Book Chapter for Michael Adler and Erio Ziglio, editors., Gazing Into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health, London, Kingsley Publishers, 1996.

125. Turoff, M., Hiltz, S.R., Bieber, M., and Rana, A. Collaborative Discourse Structures in Computer Mediated Group Communications. HICSS, 1999.

126. Turoff, M. Overheads Set 3: Design of Interactive Systems. (1998). Retrieved 12/05/2007 from http://web.njit.edu/~turoff/coursenotes/CIS732/732index.html

127. Turoff, M. Overheads Set 2: Management of Information Systems. (2000). Retrieved 12/05/2007 from http://web.njit.edu/~turoff/coursenotes/IS679/679index.html

128. Turoff, M., Hiltz, S.R., Cho, H., Li, Z. and Wang, Y. Social Decision Support Systems (SDSS). Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS 2002).

129. Turoff, M., Chumer, M., Van de Walle, B. and Yao, X. The Design of a Dynamic Emergency Response Management Information System. Journal of Information Technology Theory and Applications (2004).

130. Turoff, M., White, C. and Plotnick, L. Dynamic Emergency Response Management for Large Scale Extreme Events. International Conference on Information Systems, Pre-ICIS SIG DSS 2007 Workshop.

131. Tversky, A. and Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974), pp. 1124-1131.

132. Vetschera, R. (1991). "Integrating Databases and Preference Evaluations in Group Decision Support: A Feedback-Oriented Approach," Decision Support Systems 7(1), 67-77.

133. Voss, A., and Schafer, A. Discourse Knowledge Management in Communities of Practice. Proceedings of the 14th International Workshop on Database and Expert Systems Applications (DEXA’03).

134. Vreede, G.J. De (2001). A Field Study into the Organizational Application of GSS, Journal of Information Technology Cases & Applications, 2(4).

135. Vreede, G.J. De, Niederman, F. and Paarlberg, I. Measuring Participants' Perception on Facilitation in Group Support Systems Meetings. Proceedings of the 2001 ACM SIGCPR Conference on Computer Personnel Research, San Diego, California, pp. 173-181, 2001. ISBN 1-58113-363-4.

136. Webster’s New World Medical Library, 2nd edition (January, 2003) Wiley Publishing, Inc.; ISBN: 0-7645-2461-5.

137. Weick, K.E. and Sutcliffe, K. Managing the Unexpected: Assuring High Performance in an Age of Complexity. Jossey-Bass, John Wiley & Sons, 2001.

138. Wenger, E., McDermott, R. and Snyder, W. Cultivating Communities of Practice. Harvard Business School Press, Boston, Mass. 2002.

139. Wenger, E., White, N., Smith, J.D., and Rowe, K. Technology for Communities. CEFRIO, 2005.

140. White, C., Hiltz, S.R., and Turoff, M. United We Stand: One Community, One Response. ISCRAM 2008b.

141. White, C., Plotnick, L., Aadams-Moring, R., Turoff, M. and Hiltz, S.R. Leveraging a Wiki to Enhance Virtual Collaboration in the Emergency Domain. HICSS, 2008a.

142. White, C., Hiltz, S.R. and Turoff, M. Finding the Voice of a Virtual Community of Practice. International Conference on Information Systems, Quebec, Pre-ICIS Sixth Workshop on e-Business (WeB 2007).

143. White, C., Turoff, M. and Van de Walle, B. A Dynamic Delphi Process Utilizing a Modified Thurstonian Scaling Process: Collaborative Judgment in Emergency Response. ISCRAM, 2007a, Netherlands.

144. White, C., Plotnick, L., Turoff, M., and Hiltz, S.R. A Dynamic Voting Wiki Model. Americas Conference on Information Systems (AMCIS), Colorado, 2007b.

145. Whitworth, B. and McQueen, R.J. Voting before discussing: Computer voting as social communication, Proceedings of the 32nd HICSS, 1999.

146. Wilensky, H.L. Organizational Intelligence: Knowledge and Policy in Government and Industry. Basic Books, New York, 1967.

147. Yao, X. and Turoff, M. Using Task Structure to Improve Collaborative Scenario Creation. Information Systems for Crisis Response and Management (ISCRAM), 2005.

148. Zeleny, M. (1987) 'Management support systems: towards integrated knowledge management', Human Systems Management 7 (1): 59–70.

149. Zigurs, I. and Buckland, B. A Theory of Task-Technology Fit and Group Support Systems Effectiveness. MIS Quarterly, Vol. 22, No. 2, pp. 313-334, 1998.

150. Zigurs, I., Buckland, B., Connolly, J. and Wilson, V. A Test of Task-Technology Fit Theory for Group Support Systems. The DATA BASE for Advances in Information Systems, Vol. 30, No. 3-4, Summer-Fall 1999.

[2] Heard on the Imus radio show, http://www.wabcradio.com/article.asp?id=122648, Monday, December 10, 2007.