Survey FAQs

In this section, we answer questions that questionnaire designers frequently ask but that have not been addressed in other sections of our text, because:

  • the topic didn’t fit well in the other chapters
  • the answer to the question applies across multiple sections, or
  • we simply didn’t have room for the material.

While our answers are terse, we have attempted to provide links to useful and current sources.

1. Am I developing a “survey” or a “questionnaire?” When does it matter what I call it?

Let’s consider the similarities and differences between questionnaires and surveys. A questionnaire, as defined by the Visual Thesaurus (http://www.visualthesaurus.com/), is a “form containing a set of questions, submitted to people to gain statistical information” or “a printed document with spaces in which to write.” Unlike the word “questionnaire,” which is a noun, the word “survey” is both a noun and a verb, which suggests that it can also be a process. The noun “survey” is defined as a “detailed critical inspection,” a “short descriptive summary,” or “the art of looking or seeing or observing” events, while the verb “survey” means to “consider in a comprehensive way” or to “look over carefully, or inspect.”

During classroom discussions, students tend to describe surveys as collecting factual, verifiable information (age, addresses, etc.), while questionnaires collect subjective information (expectations, beliefs). In describing the collection of preference information, they typically say that surveys collect these data in a quantitative format (ratings and rankings) while questionnaires collect these data in a qualitative format (open-ended questions). Eventually the discussion leads to the conclusion that questionnaires (the noun) are tools used in surveys (the noun and the verb). In the interest of simplicity, we use the words questionnaire and survey interchangeably both in our book (Moroney & Cameron, 2018) and on this website.

However, within academic circles there are differences between the terms “questionnaire” and “survey,” and this is not a distinction without a difference. When performing literature searches, the choice of term affects the thoroughness of your search. From a numeric perspective, a Google search for “questionnaires on driver behavior” yielded 1,590,000 hits, while a Google search for “surveys on driver behavior” yielded 711,000 hits. From a content perspective, there was some overlap between the searches, but many sources appeared in only one search and not in the other. We found a similar pattern when we searched other databases (e.g., PsycNET). Apparently, some journals and organizations prefer the word “questionnaire” while others prefer the word “survey.” Since search engines examine the title and abstract, the authors’ preference for a term is also a factor, because they write both the title and the abstract. Therefore, we recommend that separate searches be performed using the terms “survey” and “questionnaire,” unless the search engine accepts “AND/OR” instructions to search on both terms simultaneously. The small investment in time may keep you from becoming one of the individuals who can only say, “I wish I had known about that questionnaire/survey when I was designing mine.”

2. What are some good sources of questionnaire related terms and definitions?

We don’t expect readers to be familiar with the domain of questionnaire/survey design. However, when reading articles in this domain, you may encounter new terms that have specific meanings. Often, simply searching for the word or term in a browser will provide an answer. If you need to search further for a definition, the following links are helpful:

3. What guidance can you provide on selecting a computer-based questionnaire design tool?

As in selecting any tool, the first question to ask is: “What are your requirements?” We provide a document entitled “Considerations when Selecting a Questionnaire Design Tool” to help you get started, but the full answer is more complicated.

One major ethical consideration often raised by Institutional Review Boards is the security of responses. Specifically, if the information is sensitive, where are the questionnaires stored, and how are the data secured? We note that many organizations are concerned about asking questions related to topics such as strategic goals, long-term planning and funding, income, and 360° performance evaluations. Student evaluations of faculty are also sensitive, and many organizations prefer questionnaires that are stored entirely on their corporate website.

In general, we have been pleased with the capabilities of SurveyMonkey (https://www.surveymonkey.com/) and Google Forms (https://www.google.com/forms/about/#start). Both are very user-friendly. The latter is free and has considerable functionality. SurveyMonkey, like many other commercially available design tools, has a free version with reduced capabilities and a for-fee version with additional capabilities.

In addition to the instructional material provided by most of the tools listed above, many of the tools have their own FAQ sections or user blogs. If you have a particular problem, we encourage you to examine the tool’s FAQ section, consult the user blogs, or contact the tool’s developer.

We close with this caveat: the market for questionnaire generation tools is highly dynamic. Not only are there turnovers and mergers within and among the various organizations (http://www.websm.org/r/9/198/), but the tools’ capabilities change regularly, and these updates are announced to potential users.

4. How can I increase the response rate?

Chapter 1 of Internet, phone, mail, and mixed-mode surveys: the tailored design method by Dillman, Smyth, & Christian (2014) provides an excellent discussion of factors that influence response rate; indeed, they entitle their chapter “Turbulent Times for Survey Methodology.” Our perspective follows below.


Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Hoboken NJ: John Wiley & Sons.

4.1 Don't focus on increasing response rate; focus on the more important task of contacting representative respondents

Think of your task as reaching potential customers. You must provide them with a reason to “buy what you are selling” and respond to your invitation to participate. Just because you have achieved the sample size needed for your survey doesn’t mean that you can stop data collection. You need to ascertain that your sample is representative of your population. In academic settings we have found an overrepresentation of females from the School of Education and an underrepresentation of males from the School of Engineering. Among professional organizations, we have noted higher response rates among Fellows than among affiliates. Since your intent is to generalize the findings to your population, your sample must represent that population. We suggest that you develop a series of tables for the demographics of interest. Thus, you might have tables for gender, age, education, income, etc. A hypothetical example based on the 2010 US Census is provided below:

Race (Self-Identified)                        | % of population | % of respondents
White alone                                   | 72.4            | 90
Black or African American                     | 12.6            | 6
Asian                                         | 4.8             | 2
Native Americans and Alaska Natives           | 0.9             | 1
Native Hawaiians and Other Pacific Islanders  | 0.2             | 0
Two or more races                             | 2.9             | 0.5
Some other race                               | 6.2             | 0.5
Total                                         | 100             | 100

In this example, it is obvious that “Whites alone” are over-represented, while all the other groups are under-represented.
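If your respondent data are in electronic form, this comparison is easy to automate. The Python sketch below (not taken from Moroney & Cameron, 2018; the respondent counts are hypothetical) restates the benchmark percentages from the table above and applies a chi-square goodness-of-fit test to flag a sample whose demographic mix departs from the population.

# Minimal sketch: compare sample demographics against a population benchmark.
# The benchmark percentages restate the 2010 Census example above; the
# respondent counts are hypothetical and should be replaced with your own.
from scipy.stats import chisquare

population_pct = {  # % of population (benchmark)
    "White alone": 72.4, "Black or African American": 12.6, "Asian": 4.8,
    "Native Americans and Alaska Natives": 0.9,
    "Native Hawaiians and Other Pacific Islanders": 0.2,
    "Two or more races": 2.9, "Some other race": 6.2,
}
respondent_counts = {  # raw counts from your sample (hypothetical)
    "White alone": 450, "Black or African American": 30, "Asian": 10,
    "Native Americans and Alaska Natives": 5,
    "Native Hawaiians and Other Pacific Islanders": 0,
    "Two or more races": 3, "Some other race": 2,
}

n = sum(respondent_counts.values())
observed = [respondent_counts[k] for k in population_pct]
expected = [population_pct[k] / 100 * n for k in population_pct]

# Report over/under-representation category by category.
for k in population_pct:
    sample_pct = 100 * respondent_counts[k] / n
    print(f"{k:45s} population {population_pct[k]:5.1f}%  sample {sample_pct:5.1f}%")

# Chi-square goodness of fit: a small p-value suggests the sample's
# demographic mix differs from the benchmark. (The approximation is weak
# when expected counts are very small, as they are for rare categories here.)
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.4f}")

With the hypothetical counts shown, the test flags the sample as unrepresentative; in practice you would repeat the comparison for each demographic table of interest (gender, age, education, income, etc.).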

4.2 Ask the right people multiple times on multiple channels

Dillman, Smyth, and Christian (2014, Chapter 1) recommend the use of multiple and different (mixed-mode) data collection as a strategy to increase response rate. They propose a cost-effective strategy of collecting as many responses as possible using the least expensive mode, then using the next least expensive mode to collect additional responses. The most expensive response collection strategy is reserved for the most recalcitrant respondents (see Moroney & Cameron, 2018, Section 3.3). Thus, you might start by using a web-push invitation to participate, or use postal mail to request a web response, and withhold alternative response modes (paper, phone interviews, onsite visits) until later in the data collection process.

A mixed mode approach can also be used to improve coverage. Indeed, it may be the only possible approach when the same contact information (email address, mailing address, or phone number) is not available for all potential respondents.

Be aware that some individuals may be more willing to respond in one mode than another. For example, rather than participating in a telephone interview or responding on a smart phone, they might prefer responding to the same questions on a website or completing a paper-and-pencil questionnaire. We recommend providing your potential respondents with multiple paths for responding, if they did not respond to your first invitation or your follow-on attempts to contact them. In some cases, the solution may be as simple as asking the respondents to select their preferred administration method and then making that available to them.

Dillman (2017) examined response patterns for studies administered using different modes between 2007 and 2012. He reported that when given a choice between responding via the web or by mail, 80% of the respondents opted to respond by mail. When only a web response was offered, the response rate was only 60%; however, token cash incentives sent with the initial request significantly improved the web and overall response rates (incentives are discussed in FAQ 4.6). It should be noted that since Dillman’s study, the number of questionnaires responded to on smart phones has been increasing, which provides both opportunities and difficulties for questionnaire designers.

Finally, realize that an effectively designed survey is the result of many decisions that interact to facilitate the collection of data from the desired population. Multiple factors, including timing of initial contact, contact mode, personalization, use of appropriate incentives, topic salience, and administration mode, all contribute to the respondent’s willingness to donate their time.


Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons.
Dillman, D. A. (2017). The promise and challenge of pushing respondents to the web in mixed mode surveys. Survey Methodology, Statistics Canada, Catalogue No. 12-001-X, Vol. 43, No. 1. Retrieved from http://www.statcan.gc.ca/pub/12-001-x/2017001/article/14836-eng.htm
Moroney, W. F., & Cameron, J. A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Santa Monica, CA: Human Factors and Ergonomics Society.

4.3 Accommodate your potential respondents

Email is still the best way to contact millennials, who are most likely to respond on a cell phone (Naragon, 2015). Seniors may prefer a paper copy, but provide them with an option to respond on the web. Allow your potential respondents to select how they wish to reply.

  • Make your surveys as short as possible.
  • Tell the respondents how long you estimate the survey will take to complete.
  • Be attentive to cultural nuances; see Sherwin (2016) and Gao (2016). (FYI: SurveyMonkey offers questions in multiple languages in its question bank; see https://help.surveymonkey.com/articles/en_US/kb/Custom-Question-Bank/.)
  • Design surveys that can be completed on multiple platforms or modes. If a survey is particularly long, encourage respondents to switch from a smart phone to a laptop. The US Census Bureau (2007) provides guidelines for designing questionnaires for administration in different modes: https://www.census.gov/srd/mode-guidelines.pdf
  • Allow your respondents to start a survey and then return to it if they cannot complete it in one sitting.
  • A less desirable alternative is to record the responses and allow the respondent to exit wherever they wish.

Naragon, K. (2015). Subject: Email, we just can't get enough. Adobe News. Retrieved from https://theblog.adobe.com/email/
Gao, G. (May 11, 2016). The challenges of polling Asian Americans. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2016/05/11/the-challenges-of-polling-asian-americans/
Sherwin, K. (June 26, 2016). Cultural nuances impact user experience: Why we test with international audiences. Retrieved from https://www.nngroup.com/articles/cultural-nuances/


4.4 Utilize crowdsourcing to increase response rate, obtain results faster, and potentially reach a larger population

Crowdsourcing provides rapid access to a large population of potential respondents. Simply stated, crowdsourcing is a process for hiring respondents to complete your questionnaire. You define the demographics/characteristics of your desired respondents, and the crowdsourcing tool searches for individuals who meet your requirements. Possible sources and related links include:

Sources

Chandler, J., & Shapiro, D. (2016). Conducting clinical research using crowdsourced convenience samples. Annual Review of Clinical Psychology, 12, 53-81.
Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23(3), 184-188.
Van Bavel, J. J., & Rand, D. G. (2013). Restocking Our Subject Pools. APS Observer, 26(9). Retrieved from: https://www.psychologicalscience.org/observer/restocking-our-subject-pools

4.5 Minimize respondent burden

Follow the procedures recommended in Chapters 5 and 6 of Moroney & Cameron (2018) and:

  • Minimize survey length (desired: ~10 min; maximum: 20-30 min for most samples)
  • Minimize the number of required responses. Most questionnaire design tools allow you to use an * to indicate questions to which a response is required. Required responses can often be limited to demographics.
  • Contact respondents on the platform and at a time that is most appropriate/convenient for them.
  • On web-based surveys, allow respondents to click into your survey directly or to copy and paste URLs and/or passwords.
  • Liu & Inchausti (2017) reported that including the first question in the invitation email increases response rates.


Liu, M., & Inchausti, N. (2017). Improving survey response rates: The effect of embedded questions in web survey email invitations. Survey Practice, 10(1).
Moroney, W. F., & Cameron, J. A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Santa Monica, CA: Human Factors and Ergonomics Society.

4.6 Utilize methods which increase the likelihood of response

Edwards, Roberts, Clarke, et al. (2009) searched 14 electronic databases to identify methods that impacted the response rate of postal and electronic questionnaires. They examined ~625 qualified studies and identified 110 different approaches intended to increase responses to postal questionnaires and 27 approaches used for electronic questionnaires.

We summarize their major findings in the table below and, based on the pooled odds ratios, classify the methods as most effective, more effective, and effective. Note that since their emphasis was on the overall response proportion, they did not examine the impact of factors such as readability. Also note that because the studies reviewed were completed prior to February 2008, there were more studies on postal surveys than on electronic surveys, and the methods were not evaluated on smartphones. If you are seeking a method to increase responses, we recommend that you consult their very informative 468-page report for detailed information.

Using a photo of the researcher in an email invitation increased the response rate by approximately 200%, based on two studies by the same author with only 213 respondents. Although this is an outlier finding, it suggests the value of personalizing the invitation to potential respondents. If you opt to use a photo in the email invitation, you should consider the time it takes to download the photo. A pilot study assessing respondents’ willingness to wait for the download can help you determine the required image quality.

PERCENT INCREASE IN RESPONSE RATE*

POSTAL QUESTIONNAIRES

Most Effective (more than 50% increase)

  • Monetary incentives
  • Recorded delivery
  • A teaser on the envelope suggesting that opening it would be beneficial to the recipient
  • A well-thought-out, meaningful questionnaire topic

More Effective (26-50% increase)

  • Pre-notification
  • Follow-up contact
  • Unconditional incentives
  • Shorter questionnaires
  • Providing a second copy of the questionnaire when following up
  • Mentioning an obligation to respond
  • Assurance of confidentiality
  • Mentioning university sponsorship

Effective (up to 25% increase)

  • Nonmonetary incentives
  • Personalized questionnaires
  • Handwritten addresses
  • Stamped return envelopes as opposed to franked return envelopes
  • Using first-class mail

ELECTRONIC QUESTIONNAIRES

Most Effective (~200% increase)

  • Using a photo in the email invitation

More Effective (34-50% increase)

  • Nonmonetary incentives
  • Shorter electronic questionnaires
  • Including a statement that “others had responded”
  • An interesting questionnaire topic. It may be helpful to think of your questionnaire topic as “click bait,” but don’t bait and switch (see FAQ 6: How do I write subject lines in emails which will encourage participation?)

Effective (up to 33% increase)

  • Using a lottery with immediate notification of win or loss
  • Offering to share survey results
  • Use of a white background
  • Personalizing the questionnaire
  • Using a simple header on the questionnaire
  • Specifying a deadline

PERCENT DECREASE IN RESPONSE RATE

POSTAL QUESTIONNAIRES

  • Questions of a sensitive nature (6% decrease)

ELECTRONIC QUESTIONNAIRES

  • Using the word “Survey” in the subject line (19% decrease)
  • Using a male signature in the emailed invitation to participate (45% decrease)

*Not all methods were evaluated for both postal and electronic based questionnaires.


Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Wentz, R., Kwan, I., Cooper, R., Felix, L. M., & Pratap, S. (2009). Methods to increase response to postal and electronic questionnaires. Retrieved from: https://www.ncbi.nlm.nih.gov/pubmed/19588449

5. What is the optimal number of items for a survey?

A more appropriate question would ask about time to complete rather than the number of items, but the simple answer is “shorter is better” (Dillman, Smyth, & Christian, 2014, Ch. 1), at least from the respondent’s perspective. Our general guidance is that respondents should be able to complete pop-up questions within seconds (consider Netflix’s thumbs-up, Amazon’s five stars, or 2-3 questions sent by SMS) and certainly within 3 minutes; 10-20 minutes is probably the upper limit that can be imposed on respondents. If your questionnaire takes longer than 10-20 minutes, consider using incentives or paying the respondents. Respondents are more likely to complete shorter questionnaires, and the quality of their responses will be higher.

Cape (2015) lists ten ways to make questionnaires shorter, in particular: challenging the need for each question (what will you do with the answer?) and asking fewer questions of more people. If your survey is particularly long, he recommends breaking the questionnaire into modules and then asking the respondents whether they wish to continue to the next module; interestingly, they usually do. Since a questionnaire is a conversation, he also suggests providing “intrinsic motivations of autonomy, value, relatedness and competence” at appropriate transition points in the questionnaire.

Additionally, gamification can be used to increase variety; consider creating scenarios around the questions or framing questions as guessing challenges. SurveyMonkey provides insightful advice based on its examination of 26,000 surveys. It concluded that surveys with more questions were less likely to be completed. Specifically: “... a 10-question survey has an 89% completion rate, while a 40-question survey has an average completion rate that’s a full 10 percentage points lower.” Note that length and complexity increase completion time. Therefore, design your questionnaire to minimize respondent fatigue. A survey should be as short as possible while still meeting your needs, as defined in your Questionnaire Development Form (QDF; Chapter 4 of Moroney & Cameron, 2018). This does not preclude providing a text box at the end of the questionnaire inviting additional comments from the respondent.


Cape, P. (May 28, 2015). 10 ways to make questionnaires shorter. Retrieved from https://www.surveysampling.com/site/assets/files/1495/ten-ways-to-make-questionnaires-shorter.pdf
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Hoboken NJ: John Wiley & Sons.
Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Human Factors and Ergonomics Society.

6. How do I write subject lines in emails which will encourage participation?

To encourage participation, the respondent must receive the email, open the email, and be convinced to respond. The second step, opening the email, is a function of the information contained in the “From” field and the subject line. The “From” field should contain a known, professional email sender name and address (e.g., Mary.Jones@HumanResources.abcUniv.edu). Ideally, the sender, or his or her name/organization, should be known to the recipient. However, avoid using the sender’s/organization’s name if it creates negative affect or could bias the respondent. Since you are attempting to establish a conversation with the recipient, we recommend that you not change email addresses or points of contact when corresponding with the recipient. The same email address should be used in all correspondence with the recipient.

A carefully selected subject line requires substantial thought but has significant payoff. In some cases, the questionnaire designer has the ability to develop a meaningful, perhaps innovative, subject line. People open emails if the topic is important to them, salient, or raises their curiosity (think of the click-on “opportunities” that vie for your attention). In this case we can learn from hackers (http://www.zdnet.com/article/how-hackers-can-make-virtually-any-person-click-on-a-dangerous-link/). The subject line must have the correct content and be presented in the correct context.

We find that asking a representative sample of potential respondents to develop subject lines can provide new insights. Since questionnaire designers are not respondents, we used a small focus group composed of potential respondents to develop candidate subject lines. Then we had an independent sample of potential respondents identify and rank the three subject lines to which they would be most likely to respond. We have also used this approach in determining appropriate incentives.

Be judicious in using the words “survey” or “questionnaire” in the subject line. Respondent reactions can range from “Oh, no, not another one” to “Yes, I’d like to give them a piece of my mind.” In general, if you think the respondents’ reactions will be positive, inform them that this is a survey; otherwise, work around the issue.

It is also important that the email not be flagged as spam. Since spam filters work in a variety of ways, you might want to talk with your IT department or your Internet Service Provider to determine if there are any problems. It is also good policy to include your own email address in all distribution lists; this allows you to confirm exactly when the email was sent and that it at least cleared the spam filter used by your organization. Since some spam filters are triggered by words such as free, cash, prize, win, etc., we recommend that you not use these and similar words. Avoid using marketing language such as “our exciting new feature” or “breathtaking opportunity.” After you have selected your subject line, be sure to test it on a variety of platforms and browsers, which use different spam filters. Also, avoid using HTML messages, which are often flagged as spam.

Since a questionnaire is a conversation, we recommend:

  • Make subject lines both professional and informative, and tweak them to encourage the recipient’s participation.
  • Be sure the language is respondent-centric, not questionnaire-designer-centric (Nielsen/Norman Group, 2013). If the potential respondents are professionals or specialists, then using technical terms may be appropriate.
  • When developing your subject line, you may wish to review Sections 2.4.4 and 4.6.3 of Moroney and Cameron (2018), which describe components of social exchange theory.
  • For hints on effective email subject lines, consider the approach of marketing and election campaign professionals. As expected, their subject lines are tested and evaluated before they are used. Subject lines have included: invitations (“Dinner?” or “Dinner!”), calls to action (“Go vote!”), reactions to news (“What did you think of last night’s debate?”), merchandise announcements (“Shop the new holiday collection”), event announcements (“Hear from XXXXX in (location)”), and general campaign messages (“You shouldn’t miss this.”) (Detrow, 2015).
  • Linking your subject line to a current event is a very desirable strategy.
  • Use personalization in the subject line (“you have the answers”) or scarcity (“you only have 3 more days”). Consider providing the respondent with a potential opportunity: “Share your story about your xxxx” or “Don’t miss your chance to …”

Additionally, be terse and minimize the number of characters in your subject line. Many of the recipients will be reading your message on smartphones, which have character limits (iPhone: 35 characters in portrait view and 80 in landscape view; Android: 33 characters in portrait view and 72 in landscape view). Loranger & Nielsen (2017) provide additional guidance on using well-written, short text fragments, presented out of context, that can nudge recipients toward the desired action. Remember, your competition is tough: more than 205 billion emails are sent daily.
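As a quick self-check before sending, you can test a draft subject line against the preview limits quoted above. The short Python sketch below simply restates those character limits; they are illustrative figures from this FAQ, not an authoritative device specification.

# Minimal sketch: check a draft subject line against smartphone preview limits.
PREVIEW_LIMITS = {
    "iPhone portrait": 35, "iPhone landscape": 80,
    "Android portrait": 33, "Android landscape": 72,
}

def check_subject(subject):
    """Show how the subject line would be truncated in each preview width."""
    for device, limit in PREVIEW_LIMITS.items():
        shown = subject if len(subject) <= limit else subject[:limit - 1] + "…"
        flag = "OK " if len(subject) <= limit else "CUT"
        print(f"{flag} {device:18s} ({limit} chars): {shown}")

# Hypothetical subject line: fits landscape views but is cut in portrait views.
check_subject("Share your story about commuting on Route 48 – 3 days left")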

An innovative approach was recently developed by Liu & Inchausti (2017), from Facebook and SurveyMonkey respectively. They included or excluded the first question of the survey in an invitation sent to 8,876 customers. They reported a significantly higher completion rate (4.7%) among recipients of invitations containing the first question. While the first question on any questionnaire should be nonthreatening and preferably intriguing, including it in the invitation starts the questionnaire completion process.


Detrow, S. (December 15, 2015). Bill wants to meet you: Why political fundraising emails work. Retrieved from http://www.npr.org/2015/12/15/459704216/bill-wants-to-meet-you-why-political-fundraising-emails-work
Liu, M., & Inchausti, N. (2017). Improving survey response rates: The effect of embedded questions in web survey email invitations. Survey Practice, 10(1).
Loranger, H. & Nielsen, J. (January 29, 2017). Microcontent: A few small words have a mega impact on business. https://www.nngroup.com/articles/microcontent-how-to-write-headlines-page-titles-and-subject-lines/
Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Human Factors and Ergonomics Society.
Nielsen/Norman Group. (September 28, 2013). User-centric vs. maker-centric language: 3 essential guidelines. Retrieved from https://www.nngroup.com/articles/user-centric-language/

7. Should I use a progress indicator/bar?

The simple answer is “No.” There is no real consensus regarding the impact on dropout rate (Villar, Callegaro, & Yang, 2013). Indeed, these authors reported that in studies providing a small incentive, the presence of a constant progress indicator is associated with an increased dropout rate; apparently respondents are evaluating their return on investment. Dillman, Smyth, and Christian (2014) suggest that progress indicators may be somewhat effective in short studies where people progress quickly; however, in longer studies they may be more discouraging than encouraging. They are also problematic in web-based studies that utilize branching or grids. If you feel that a progress indicator is necessary, Dillman, Smyth, and Christian (2014) recommend using an unobtrusive display, which informs the respondent that this is “question XX out of YY.”


Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. Hoboken, NJ: John Wiley & Sons.
Villar, A., Callegaro, M., & Yang, Y. (2013). Where am I? A meta-analysis of experiments on the effects of progress indicators for web surveys. Social Science Computer Review, 31(6), 744-762. Retrieved from http://ssc.sagepub.com/content/31/6/744

8. When using open-ended responses, how can I encourage the respondent to provide more detail?

Text boxes in most web-based surveys expand to accommodate the respondent’s reply, but that capability is often not apparent to respondents. A simple solution is to provide a larger (taller) text box, which indicates that a more detailed response is expected. Providing this affordance has been shown to yield significantly more words in the response (see Section 2.4.3 of Moroney and Cameron, 2018; Linegang and Moroney, 2012).

If you are certain that your respondents will be involved in the process and are committed to providing you with the requested information, there is nothing wrong with the open-ended questions in the exhibit below.

Two Sample Open-Ended Questions

  1. What single characteristic of the XYZ system is most likely to lead to errors? Explain why you selected that characteristic.
  2. If you could change ONE feature of the system, what feature would you change? Explain why you selected that feature.

If you are not certain about the respondents’ commitment, consider the alternative presented in the exhibit below. This alternative separates each question presented in the exhibit above into its two components: identifying the characteristic or feature, and explaining why that characteristic or feature was selected.

Alternate Open-Ended Question Structure that May Provide More Data

1. What single characteristic of the XYZ system is most likely to lead to errors?

1.1 Please tell us why you selected that characteristic.

2. If you could change ONE feature of the system, what feature would you change?

2.1 I would change that feature because:

We have noticed that some individuals skip questions that ask for additional details or require the respondent to justify their position. When the question is presented in two parts, respondents are more likely to at least identify the characteristic or feature that they would change. Some respondents may not explain why, but at least you know what they would have changed if they could. Some will explain why and, if your respondents are really cooperative, you might continue with a third inquiry and ask them “How would you change that feature?”

Overall, open-ended questions are a good starting point that allows the designer to bound the areas of concern and understand the range of alternative responses. This format is very useful when seeking additional comments and insights from respondents. While it is very appropriate during the early phases of questionnaire design, it should not be overused, because it creates a data reduction burden for the questionnaire analyst.


Linegang, M. P., & Moroney, W. F. (2012). Effects of cover letter subject line and open-ended question response area on responding to an internet survey. Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 56, No. 1, pp. 1268-1262). Santa Monica, CA: Human Factors and Ergonomics Society.

9. How can I use a filter question to qualify respondents?

Sometimes asking the right question (see Section 5.1.2 of our book, Moroney & Cameron, 2018) means asking a filter question. Filter questions can be used to determine whether the respondent is qualified to answer your questions.

Rather than making assumptions about the depth of a respondent’s knowledge, it may be wise to test their level of knowledge and know for sure. In a 2004 Harris Interactive survey (no longer retrievable) of 2,704 members of the Cardiovascular Nurses Association, 85 percent of the respondents reported that they were “somewhat or very knowledgeable” about cholesterol. In an effort to verify this claim, the survey subsequently tested the respondents’ knowledge of HDL (good) and LDL (bad) cholesterol. Fifty-nine percent of the respondents could not differentiate between the effects of HDL and LDL. Similarly, to ascertain that respondents were actually veterans, Lynn (2014) required participants to arrange military insignia by rank. As a questionnaire designer, you need to decide whether a qualifying filter question can be based on the respondents’ reported knowledge or whether their claim needs to be verified.

As discussed in Chapter 2 of Moroney and Cameron (2018), a questionnaire is a conversation, and respondents usually believe that they are expected to have answers to the questions presented to them. Indeed, Schwarz (1999) reported that about 30 percent of a representative sample will offer an opinion on fictitious issues. Bishop, Tuchfarber, & Oldendick (1986) asked respondents about one of three fictional acts or bills: the “Public Affairs Act of 1975,” the “Agricultural Trade Act,” or the “Monetary Control Bill.” When respondents were not provided with a “don’t know” option, twenty to fifty percent of the respondents provided an opinion, as opposed to stating that they didn’t know, were unaware of the act, or had no opinion. The authors noted that the less knowledgeable a person was about a topic, the more likely they were to provide a response. This illustrates the strength of demand characteristics in a survey interview. Similar results were obtained in 2013; see http://www.huffingtonpost.com/2013/04/11/survey-questions-fiction_n_2994363.html. Inserting a fictitious issue may allow you to screen out individuals who are not well informed about your topic, or at least to treat the responses of these individuals differently than you would the responses of more knowledgeable respondents. We strongly recommend that filter questions be used if you are using samples of convenience or recruiting respondents through crowdsourcing, social media, or commercial “suppliers” such as M-Turk (Chandler & Shapiro, 2016) or SurveyMonkey Audience (https://www.surveymonkey.com/mp/audience/). Since the demand characteristics are so strong, you may need to prescreen as unobtrusively as possible. A minimal sketch of how such a screen might be applied to response data follows.
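The Python sketch below (with hypothetical field names and answer options; it is not drawn from the cited studies) illustrates one way to apply a fictitious-issue screen after data collection: respondents who claim an opinion on the nonexistent act are flagged so their remaining answers can be excluded or analyzed separately.

# Minimal sketch: flag respondents who offered an opinion on a fictitious issue.
# Field names and answer wordings are hypothetical.
responses = [
    {"id": 1, "public_affairs_act_1975": "Strongly support", "q1": 4},
    {"id": 2, "public_affairs_act_1975": "Don't know",       "q1": 5},
    {"id": 3, "public_affairs_act_1975": "Oppose",           "q1": 2},
]

PASSING_ANSWERS = {"Don't know", "No opinion", "Never heard of it"}

def passes_filter(resp):
    """True if the respondent did NOT claim an opinion on the fictitious act."""
    return resp["public_affairs_act_1975"] in PASSING_ANSWERS

screened_in = [r for r in responses if passes_filter(r)]
screened_out = [r for r in responses if not passes_filter(r)]

print(f"{len(screened_in)} respondents retained, "
      f"{len(screened_out)} flagged for separate analysis")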


Bishop, G. F., Tuchfarber, A. J., & Oldendick, R. W. (1986). Opinions on fictitious issues: The pressure to answer survey questions. Public Opinion Quarterly, 50(2), 240-250.
Chandler, J., & Shapiro, D. (2016). Conducting clinical research using crowdsourced convenience samples. Annual Review of Clinical Psychology, 12, 53-81.
Lynn, B. M. D. (2014). Shared sense of purpose and well-being among veterans and non-veterans (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3690322).
Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Human Factors and Ergonomics Society.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105. Retrieved from https://dornsife.usc.edu/assets/sites/780/docs/99_ap_schwarz_self-reports.pdf


10. What else should I read to learn more about questionnaire design and development?

Handbooks (How to style)

Textbooks (with How to guidance)

  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). New York: John Wiley. Also see: Dillman et al., 2000 and 2009.
  • Groves, R. M., Fowler Jr., F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology. New York: John Wiley & Sons.
  • Marsden, P. V., & Wright, J. D. (2010). Handbook of survey research. Bingley, UK: Emerald Group Publishing Limited.

Test and Evaluation

  • Fowler, F. J. (1995). Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications.
  • Kreuter, F. (Ed.). (2013). Improving surveys with paradata: Analytic uses of process information. New York: John Wiley & Sons.
  • Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., & Singer, E. (2004). Methods for testing and evaluating survey questionnaires. New York: John Wiley & Sons.

Interviewing

  • Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage Publications.

Questionnaire as Conversation

  • Schwarz, N. (2014). Cognition and communication: Judgmental biases, research methods, and the logic of conversation. New York: Psychology Press.

Psychological Emphasis

  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge, UK: Cambridge University Press.

Statistics (Nonparametric)

  • Siegel, S., & Castellan, N. J., Jr. (1988). Nonparametric statistics for the behavioral sciences (2nd Ed.). New York: McGraw-Hill.
  • Sheskin, D. J. (1997). Handbook of parametric and nonparametric statistical procedures. Boca Raton, FL: CRC Press.

Scale Development

  • Anastasi, A., & Urbina, S. (1997). Psychological testing. Upper Saddle River, NJ: Prentice Hall.
  • DeVellis, R. F. (2016). Scale development: Theory and applications: Vol. 26. Applied social research methods series (4th ed.). Thousand Oaks, CA: Sage Publications.
  • Johnson, R. L., & Morgan, G. B. (2016). Survey scales: A guide to development, analysis and reporting. New York: The Guilford Press.

Web based Emphasis

  • Callegaro, M., Manfreda, K. L., & Vehovar, V. (2015). Web survey methodology. Thousand Oaks, CA: Sage Publications. Also see: http://www.websm.org
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Web questionnaires and implementation (Chapter 9) in Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). New York: John Wiley. Also see: Dillman et al., 2000 and 2009.
  • Tourangeau, R., Conrad, F. G., & Couper, M. P. (2013). The Science of Web Surveys. New York: Oxford University Press.

Professional Organizations and Groups

Links to sources of Pre-existing Questions and questionnaires

11. Can I use different administration methods to collect data and increase response rates?

Dillman, Smyth, and Christian (2014, Chapter 1) recommend the use of multiple and different (mixed-mode) data collection as a strategy to increase response rate. They propose a cost-effective strategy of collecting as many responses as possible using the least expensive mode, then using the next least expensive mode to collect additional responses. The most expensive response collection strategy is reserved for the most recalcitrant respondents (see Section 3.3 of Moroney and Cameron, 2018). Thus, you might start by using a web-push invitation to participate, or use postal mail to request a web response, and withhold alternative response modes (paper, phone interviews, onsite visits) until later in the data collection process. Their methodology has been used by the U.S. Census Bureau and other governmental data collection agencies.

A mixed mode approach can also be used to improve coverage. Indeed, it may be the only possible approach when the same contact information (email address, mailing address, or phone number) is not available for all potential respondents.

Be aware that some individuals may be more willing to respond in one mode than another. For example, rather than participating in a telephone interview or responding on a smart phone, they might prefer responding to the same questions on a website or completing a paper-and-pencil questionnaire. We recommend providing your potential respondents with multiple paths for responding, if they did not respond to your first invitation or your follow-on attempts to contact them. In some cases, the solution may be as simple as asking the respondents to select their preferred administration method and then making that available to them.

Dillman (2017) examined response patterns for studies administered using different modes between 2007 and 2012. He reported that when given a choice between responding via the web or by mail, 80% of the respondents opted to respond by mail. When only a web response was offered, the response rate was only 60%; however, token cash incentives sent with the initial request significantly improved the web and overall response rates (incentives are discussed elsewhere in this FAQ section). As would be expected, the number of questionnaires responded to on smart phones continues to increase.

Finally, realize that an effectively designed survey is the result of many decisions that interact to facilitate the collection of data from the desired population. Multiple factors, including timing of initial contact, contact mode, personalization, use of appropriate incentives, topic salience, and administration mode, all contribute to the respondent’s willingness to donate their time. Hopefully, you are becoming aware of some of the other facets of the “iceberg” shown in Moroney and Cameron, 2018, Exhibit 1.1.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York, NY: John Wiley & Sons.

Dillman, D.A. (2017). The promise and challenge of pushing respondents to the web in mixed mode surveys. Survey Methodology, Statistics Canada, Catalogue No. 12‑001‑X, Vol. 43, No. 1. Retrieved from: http://www.statcan.gc.ca/pub/12-001-x/2017001/article/14836-eng.htm

Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Santa Monica, CA: Human Factors and Ergonomics Society.

12. What should I consider before using social media to contact potential respondents?

As one would expect, social media has been used successfully to contact potential respondents. Facebook was used to identify and contact a unique population that had engaged in socially undesirable behavior: smoking cigarettes while pregnant (Fraser, Murphy, Hsieh, Guillory, Kim, & Savitz, 2016). Within five days, the researchers completed 108 interviews with members of this ultra-rare population. Ramo, Rodriguez, Chavez, Sommer, & Prochaska (2014) describe using Facebook to recruit young adult smokers for a cessation trial. Using Facebook is a good strategy when snowball sampling (see Moroney and Cameron, 2018, Section 3.3.4) is appropriate (Brickman Bhutta, 2012).

To recruit respondents who are knowledgeable in particular areas, consider professional blogs and groups such as LinkedIn. Niche networks can provide more focused access to members of specific populations (science, education, medicine, finance, other businesses, trading networks, etc.). If your topic requires a more general population, consider social networking services such as Twitter, Flickr, Snapchat, YouTube, and Vimeo.

No matter which social media platform you consider, we encourage you to read Brickman Bhutta’s (2012) discussion of the advantages and risks of using social media for collecting data and recruiting respondents. Because of the large number of potential respondents available on social media, you must be careful when specifying your population of interest. We suggest that you also use well-designed filters to ascertain that the respondents’ demographics meet your requirements. As you know, some people on social media are not really who they purport to be; in addition, if you are offering a token/payment for participation, you may attract bogus respondents. For details on screening questions and the use of incentives, see those topics in this FAQ section. SurveyMonkey provides additional guidance on how to engage potential respondents using social media; see https://www.surveymonkey.com/mp/social-media-surveys/

Brickman Bhutta, C. (2012). Not by the book: Facebook as a sampling frame. Sociological Methods & Research, 41(1), 57-88.

Fraser, A. M., Murphy, J., Hsieh, P., Guillory, J., Kim, A. & Savitz, D. (2016). Needles in a haystack: Recruiting for study of pregnancy and e-cigarettes using Facebook. Unpublished presentation made at Midwest Association for Public Opinion Research.

Ramo, D. E., Rodriguez, T. M., Chavez, K., Sommer, M. J., & Prochaska, J. J. (2014). Facebook recruitment of young adult smokers for a cessation trial: methods, metrics, and lessons learned. Internet Interventions, 1(2), 58-64.

13. Which social media should I consider using?

Social networks serve unique populations, so you should choose judiciously. Blogs and LinkedIn groups tend to support discussions on areas of common interest (professional, medical, scientific, etc.). Pinterest, Messenger, Instagram, and WhatsApp can also be used as resources. Facebook, with 1,500,000,000 monthly active users, is the largest social network and, like Twitter, allows members to exchange messages on more personal topics, including social issues, healthcare, and quality of service. Searches on YouTube and Vimeo can provide you with information on product usability, human error, design defects, accidents, etc. For a listing of several hundred social media networking sites, go to: https://en.wikipedia.org/wiki/List_of_social_networking_websites

Consider this unique use of social media: in a 2015 study, Eichstaedt et al. demonstrated that the psychological language used on Twitter was a better predictor of a county’s level of heart disease mortality than a model that “combined 10 common demographic, socioeconomic, and health risk factors, including smoking, diabetes, hypertension, and obesity” (Abstract).

Identifying and “listening to” social media users discussing topics of interest to you can provide insights that may answer your questions, allow you to formulate better questions, and help you develop more informed response options. Smith, Rainie, Shneiderman, and Himelboim (2014) used a social media network analysis tool, based on link analysis, to identify patterns in Tweets that occurred within different types of groups (polarizing crowds, tight crowds, brand clusters, community clusters, broadcast networks, and support networks). By examining the topics discussed on these various platforms, you can identify conversations that you would like to monitor, individuals or groups that could provide information pertinent to your questionnaire, and informed and motivated individuals who would qualify as respondents.

Mitchell and Guskin (2013) describe the Twitter population as younger, more mobile, and better educated than the population in general. Brickman Bhutta (2012) provides insights into the advantages and disadvantages of using Facebook as a source of information. Markoff (2016) cautions that, in the era of false news, a large number of Tweets may have been sent by chatbots, which is another reason to be judicious when using social media data.

Brickman Bhutta, C. (2012). Not by the book: Facebook as a sampling frame. Sociological Methods & Research, 41(1), 57-88. Retrieved from: http://www.thearda.com/workingpapers/download/Not%20by%20the%20Book%20-%20Bhutta.doc

Eichstaedt, J. C., Schwartz, H. A., Kern, M. L., Park, G., Labarthe, D. R., Merchant, R. M., … & Seligman, M. E. (2015). Psychological language on Twitter predicts county-level heart disease mortality. Psychological Science, 26(2), 159-169. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4433545/

Markoff, J. (November 17, 2016). Automated pro-Trump bots overwhelmed pro-Clinton messages, researchers say. Retrieved from https://www.nytimes.com/2016/11/18/technology/automated-pro-trump-bots-overwhelmed-pro-clinton-messages-researchers-say.html

Mitchell, A. & Guskin, E. (November 4, 2013). Twitter news consumers: Young, mobile and educated. Retrieved from http://www.journalism.org/2013/11/04/twitter-news-consumers-young-mobile-and-educated/

Smith, M. A., Rainie, L., Shneiderman, B., & Himelboim, I. (2014). Mapping Twitter topic networks: From polarized crowds to community clusters. Pew Research Center, 20. Retrieved from http://www.pewinternet.org/2014/02/20/mapping-twitter-topic-networks-from-polarized-crowds-to-community-clusters/

14. How can I assess reading level?

This topic is briefly discussed in Section 7.4.2 of Moroney and Cameron (2018). The Flesch-Kincaid Grade Level (Kincaid, Fishburne, Rogers, & Chissom, 1975) provides a readability rating for a passage of text. Scores are based on the US grade system (i.e., a score of 8.0 means the text is appropriate for eighth graders and above). A variety of online tools (e.g., http://www.readabilityformulas.com/) as well as Microsoft Word’s “Spelling & Grammar” function provide this score; see https://support.office.com/en-us/article/Test-your-document-s-readability-75b4969e-e70a-4777-7dd3-f7fc3c7b3fd2. These tests will not process an entire questionnaire at once, but you can use Word on sections such as your invitation to participate, instructions within the questionnaire, etc. Note that when using MS Word, readability scores are provided only after the spelling and grammar checks have been completed. For a quick overview of readability tests, see: https://en.wikipedia.org/wiki/Readability#Popular_readability_formulas
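If you prefer to script the check yourself, the published Flesch formulas are straightforward to implement. The Python sketch below uses a rough vowel-group heuristic to count syllables, so its scores will only approximate those produced by Word or the online calculators; the sample invitation text is hypothetical.

# Minimal sketch of the published Flesch formulas:
#   Reading Ease  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
#   Grade Level   = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (at least one).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

# Hypothetical invitation text to illustrate the output.
ease, grade = readability(
    "We invite you to share your experience with our new scheduling tool. "
    "The survey takes about ten minutes."
)
print(f"Flesch Reading Ease: {ease:.1f}  Flesch-Kincaid Grade Level: {grade:.1f}")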

Questionnaire designers should tailor the reading level of their questionnaire to that of their population of interest. The average reading level of US adults is between the sixth and eighth grade, but literacy varies across subpopulations. If your target audience understands plain English well, you could write at the 8th to 9th grade level (a score between 60 and 70 on the Flesch Reading Ease test).

Readers should note that neither of these two approaches to readability includes frequency of word usage in its calculations. Except when dealing with a professional or unique population, we recommend using more frequently used words. In addition, make complex sentences more readable by: 1) reducing the number of words in a sentence, and 2) using words with fewer syllables.

Kincaid, J. P., Fishburne Jr, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel.

Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Santa Monica, CA: Human Factors and Ergonomics Society.

15. You briefly describe computer-based questionnaire evaluation tools (QAS, QUAID, and the SQP) in Exhibit 7.4; can you expand on these tools?

The Question Appraisal System (QAS; Willis & Lessler, 1999) uses a series of checklists to help questionnaire designers self-diagnose problems in questionnaire construction. Each checklist focuses on a potential source of error in the question and evaluates the quality of question instructions, clarity of question intent, appropriateness of any assumptions in the question, likelihood that a respondent will know or remember the information requested, whether or not a question topic is sensitive, and the quality of response options. The checklist (available at: https://www.cdc.gov/healthyyouth/evaluation/pdf/brief15.pdf) also includes a comprehensive guide to evaluating each item in the checklist.

The Question Understanding Aid (QUAID; Graesser, Cai, Louwerse, & Daniel, 2006) is an online tool developed specifically for identifying potential comprehension problems in survey questions. Rather than providing a single score representing a question’s complexity, the QUAID enumerates potential problems originating in both the question stem and the response options. These problems can include: terms in the question that may be unfamiliar to respondents, terms in the question that are vague or imprecise, complex grammar in the question, and question wording that places a high load on respondents’ short-term memory.

The tool (available at: http://quaid.cohmetrix.com/) asks users to input: 1) the question stem (labelled “Question:”), 2) instructions for interviewers (labeled “Context:”; this field is optional and used only for interviewer-administered surveys), and 3) response options (labeled “Answer:”; also optional). The tool outputs problems it finds in each of the categories mentioned above.

The Survey Quality Predictor (SQP; Saris & Gallhofer, 2007), like the QUAID, is an online tool developed specifically to evaluate the quality of survey questions. However, the SQP is unique in that it compares each user-entered survey question to a database of nearly 4,000 questions whose quality has been evaluated using several Multitrait-Multimethod (MTMM) experiments. Based on this comparison, the tool outputs a quality score (a number between 0 and 1) that reflects the predicted reliability and validity of the user-entered question. Higher scores represent higher quality questions. The tool can also provide quality predictions specific to the country where the question will be asked.

The SQP (available at: http://sqp.upf.edu/) first asks users to input the question stem and response options. Next, the user is asked to self-code numerous details about the question. Examples of these characteristics of interest include: whether the response scale is presented horizontally or vertically, whether or not a neutral category is present, and how many questions precede this one in a survey. Once the score is calculated, the tool suggests potential ways the questionnaire designer might improve the question. Although this tool is the most time-consuming of those outlined in this section, the evaluation provided by the SQP is also the most robust. We recommend that first-time users watch a series of training videos (available at: http://tinyurl.com/hol54bp) that orient you to the tool.

As an added benefit, the SQP allows users to search and view questions that have been submitted by others. This database contains nearly 70,000 quality-rated questions in a variety of languages, and is an excellent resource for questionnaire designers.

Thanks to Jerry Timbrook, a PhD student specializing in survey methods within the Sociology Department at the University of Nebraska-Lincoln, for the above material and his contribution to Section 7.4.2 of Moroney and Cameron (2018).

Graesser, A. C., Cai, Z., Louwerse, M. M., & Daniel, F. (2006). Question Understanding Aid (QUAID): A web facility that tests question comprehensibility. Public Opinion Quarterly, 70(1), 3-22.

Moroney, W.F. & Cameron, J.A. (2018). Questionnaire design: How to ask the right questions of the right people at the right time to get the information you need. Santa Monica, CA: Human Factors and Ergonomics Society.

Saris, W. E., & Gallhofer, I. N. (2007). Design, evaluation, and analysis of questionnaires for survey research. Hoboken, NJ: John Wiley & Sons.

Willis, G. B., & Lessler, J. T. (1999). Question appraisal system QAS-99. Rockville, MD: Research Triangle Institute. Retrieved from http://bit.ly/2IcT9T1

© 2019 William F. Moroney