Competency: Constructs an instrument, establishes its validity, validates the constructed research instrument with experts, conducts reliability testing, administers a survey to target respondents, and describes the intervention (if applicable) and the respondents.
Definition of Survey Research
Many immediately think of survey research the moment they hear or read the expression "non-experimental research." This is because survey research is the most widely used non-experimental design in the fields of Sociology, Psychology, and the Humanities. Inquiries and investigations also take place in this type of non-experimental research, but in terms of the types and analysis of data, survey research follows a standard that applies to the social sciences (Schreiber 2011).
Survey research is a method of research that aims to discover what a large number of people think and feel about certain sociological issues. The data collected from these people, who serve as "representatives or informants," explain or describe society's thoughts, attitudes, and feelings toward such issues. Although survey research is a very old research technique that dates back to the period of the ancient Egyptian rulers, many still consider it a very popular means of social inquiry (Babbie 2013, p. 383).
The extensive use of survey research is proven by the fact that more than one-third of published research online in Sociology, Psychology, and the Humanities was done through survey research. Usually used by researchers to study issues affecting a large population, survey research requires data-gathering techniques, such as interviews, questionnaires, online surveys, and telephone interviews, that primarily consider the size of the group being studied (Schutt 2013). Here, the researcher selects a sample of respondents from a small or large population and provides the chosen respondents with a formalized questionnaire.
Purposes of Survey Research
1. To obtain information about people’s opinions and feelings about an issue.
2. To identify the present condition, needs, or problems of people in a short span of time.
3. To seek answers to social problems.
4. To give school officials pointers on curricular offerings, guidance and counseling services, teacher evaluation, and so on.
Planning a Survey Research
The research design of survey research is similar to that of experimental research, except that, when it comes to the data collection method and instrument, survey research goes through the following phases:
1. Clear explanation of objectives
2. Formulation of research questions or hypotheses to predict relationships of variables
3. Determination of the exact kind of data referred to by the hypotheses or research questions
4. Specification of the population or group of people to which the findings will be applied
5. Finalization of the sampling method for selecting the participants
6. Identification of the method or instrument for collecting data; that is, whether it is a questionnaire on paper, by phone, via computer, or face-to-face
Strengths of Survey Research
Stressing the effectiveness and usefulness of survey research, Schutt (2013) gives the following advantages of survey research:
1. Versatility. It can tackle any issue affecting society.
2. Efficiency. It is not costly in terms of money and time, assuming there is an excellent communication or postal system.
3. Generality. It can get a good representation or sample of a large group of people.
4. Confidentiality. It is capable of safeguarding the privacy or anonymity of the respondents.
Here are the weak points of survey research that appear in several books about this type of non-experimental research:
1. It cannot provide sufficient evidence about the relationships of variables.
2. It cannot examine the significance of some issues affecting people’s social life.
3. It cannot get data reflecting the effects of the interconnectedness of environmental features on the research study.
4. It cannot consider people's naturalistic tendencies as the basis of human behaviour unless their ways or styles of living are related to their surroundings.
5. It cannot promote interpretive and creative thinking unless its formation of ideas results from scientific thinking.
6. It cannot have an effective application for all topics for research.
7. It cannot use a questioning or coding method that can accurately register differences among the participants’ responses.
8. It cannot give the main researcher the ability to control or manipulate the factors affecting the study.
9. It cannot account for real or actual happenings but can give ideas on respondents’ views, beliefs, concepts, and emotions.
Ethical Principles and Rules in Survey Research
You are in a higher education institution, a college or university, that always considers academic excellence its number one goal. Be academically competent by producing an excellent research paper that mirrors your intellectual abilities and your value system. Considering the importance of honesty and integrity in conducting research, keep in mind the following ethical principles and rules in producing an honest-to-goodness research paper (Ransome 2013; Corti 2014):
1. Respect whatever decision a person makes about your research work, for his or her participation in your study comes solely from his or her own decision-making powers.
2. Ensure that your study will be instrumental in elevating the living conditions of people around you or bringing about world progress.
3. Conduct your research work in a way that the respondents will be safe from any injury or damage that may arise from their physical and emotional involvement in the study.
4. Practice honesty and truthfulness in reporting about the results of your study.
5. Accept the reality that the nature, kind, and extent of responses to your questions depend solely on the respondent’s dispositions.
6. Decide properly which information should be made public and which should be kept confidential.
7. Stick to your promise of safeguarding the secrecy of some information you obtained from the respondents.
How to Determine the Validity and Reliability of the Instrument?
Validity and reliability are two important factors to consider when developing and testing any instrument (e.g., content assessment test or questionnaire) for use in a study. Attention to these considerations helps to ensure the quality of your measurement and the data collected for your study.
Understanding and Testing Validity
Validity refers to the degree to which an instrument accurately measures what it intends to measure. Three common types of validity for researchers and evaluators to consider are content, construct, and criterion validities.
§ Content validity indicates the extent to which items adequately measure or represent the content of the property or trait that the researcher wishes to measure. Subject matter expert review is often a good first step in instrument development to assess content validity in relation to the area or field you are studying.
§ Construct validity indicates the extent to which a measurement method accurately represents a construct (e.g., a latent variable or phenomenon that can't be measured directly, such as a person's attitude or belief) and produces an observation distinct from that which is produced by a measure of another construct. Common methods to assess construct validity include, but are not limited to, factor analysis, correlation tests, and item response theory models (including the Rasch model).
§ Criterion-related validity indicates the extent to which the instrument's scores correlate with an external criterion (i.e., usually another measurement from a different instrument) either at present (concurrent validity) or in the future (predictive validity). A common measurement of this type of validity is the correlation coefficient between the two measures, as illustrated in the sketch below.
Oftentimes, when developing, modifying, and interpreting the validity of a given instrument, rather than view or test each type of validity individually, researchers and evaluators test for evidence of several different forms of validity, collectively.
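As an illustration of the criterion-related correlation mentioned above, here is a minimal Python sketch. All scores and variable names are hypothetical, invented purely for demonstration; they do not come from any real study:

```python
# Minimal sketch: criterion-related validity as the correlation between
# scores on a new instrument and scores on an established criterion
# measure. All scores below are hypothetical illustration data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores for ten respondents on each measure
new_instrument = np.array([12, 15, 9, 20, 17, 11, 14, 18, 10, 16])
criterion_measure = np.array([30, 35, 24, 44, 40, 27, 33, 41, 25, 37])

r, p_value = pearsonr(new_instrument, criterion_measure)
print(f"Criterion-related validity coefficient: r = {r:.2f} (p = {p_value:.3f})")
```

If the criterion is measured at the same time as the new instrument, a high positive correlation is evidence of concurrent validity; correlating the instrument with a criterion measured later speaks to predictive validity.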
Understanding and Testing Reliability
Reliability refers to the degree to which an instrument yields consistent results. Common measures of reliability include internal consistency, test-retest, and inter-rater reliabilities.
§ Internal consistency reliability looks at the consistency of the scores of individual items on an instrument with the scores of a set of items, or subscale, which typically consists of several items measuring a single construct. Cronbach's alpha is one of the most common methods for checking internal consistency reliability. Group variability, score reliability, the number of items, sample size, and the difficulty level of the instrument can also affect the Cronbach's alpha value.
§ Test-retest measures the correlation between scores from one administration of an instrument to another, usually within an interval of two to three weeks. Unlike pre-post tests, no treatment occurs between the first and second administrations of the instrument when assessing test-retest reliability. A similar type of reliability, called alternate forms, involves using slightly different forms or versions of an instrument to see if different versions yield consistent results.
§ Inter-rater reliability checks the degree of agreement among raters (i.e., those completing items on an instrument). Common situations involving more than one rater occur when several people conduct classroom observations, use an observation protocol, or score an open-ended test using a rubric or other standard protocol. Kappa statistics, correlation coefficients, and intra-class correlation (ICC) coefficients are some of the commonly reported inter-rater reliability measures; a short computational sketch follows this list.
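Here is a minimal Python sketch of two of the reliability checks named above: test-retest correlation and Cohen's kappa for two raters. All scores are hypothetical illustration data:

```python
# Minimal sketch: test-retest reliability (correlation between two
# administrations of the same instrument) and inter-rater reliability
# (Cohen's kappa for two raters scoring the same responses).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Test-retest: the same eight respondents, two administrations
# roughly two to three weeks apart (hypothetical scores)
time1 = np.array([40, 35, 28, 44, 31, 38, 27, 42])
time2 = np.array([41, 33, 30, 45, 29, 37, 28, 40])
r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r:.2f}")

# Inter-rater: two raters scoring the same eight open-ended answers
# on a 1-4 rubric (hypothetical ratings)
rater_a = [3, 2, 4, 4, 1, 3, 2, 4]
rater_b = [3, 2, 4, 3, 1, 3, 2, 4]
print(f"Inter-rater reliability: Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```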
Developing a valid and reliable instrument usually requires multiple iterations of piloting and testing, which can be resource-intensive. Therefore, when available, I suggest using already established valid and reliable instruments, such as those published in peer-reviewed journal articles. However, even when using these instruments, you should re-check validity and reliability, using your study methods and your own participants' data, before running additional statistical analyses. This process will confirm that the instrument performs as intended in your study and with the population you are studying, even if your purpose and population seem identical to those for which the instrument was initially developed.
What is Cronbach’s alpha?
Cronbach’s alpha, α (or coefficient alpha), developed by Lee Cronbach in 1951, measures reliability or internal consistency. “Reliability” is how well a test measures what it should. For example, a company might give a job satisfaction survey to its employees. High reliability means it measures job satisfaction, while low reliability means it measures something else (or possibly nothing at all).
Cronbach's alpha tests whether multiple-question Likert scale surveys are reliable. These questions measure latent variables: hidden or unobservable variables such as a person's conscientiousness, neuroticism, or openness. These are very difficult to measure in real life. Cronbach's alpha will tell you whether the test you have designed accurately measures the variable of interest.
Cronbach's Alpha Formula
The formula for Cronbach's alpha is:

α = (N × c̄) / (v̄ + (N − 1) × c̄)

Where:
N = the number of items
c̄ = the average covariance between item pairs
v̄ = the average variance
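Although, as the next section notes, software normally computes alpha for you, the formula is straightforward to implement directly. Here is a minimal Python sketch of the formula above, applied to a small, hypothetical five-item response matrix:

```python
# Minimal sketch: Cronbach's alpha from the formula
# alpha = (N * c_bar) / (v_bar + (N - 1) * c_bar).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: one row per respondent, one column per item."""
    cov = np.cov(items, rowvar=False)        # item covariance matrix
    n = cov.shape[0]                         # N = number of items
    v_bar = np.trace(cov) / n                # average item variance
    # average covariance between item pairs (off-diagonal entries)
    c_bar = (cov.sum() - np.trace(cov)) / (n * (n - 1))
    return (n * c_bar) / (v_bar + (n - 1) * c_bar)

# Hypothetical Likert responses (rows = respondents, columns = q1..q5)
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 2, 3, 4],
    [4, 4, 4, 5, 4],
    [1, 2, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```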
SPSS Steps
While it's good to know the formula behind the concept, in reality you will rarely need to work it out by hand. You'll often calculate alpha in SPSS or similar software. In SPSS, the steps are:
Step 1: Click “Analyze,” then click “Scale” and then click “Reliability Analysis.”
Step 2: Transfer your variables (q1 to q5) into the "Items" box. The model should be set to "Alpha" by default.
Step 3: Click “Statistics” in the dialog box.
Step 4: Select "Item," "Scale," and "Scale if item deleted" in the "Descriptives for" box. Choose "Correlations" in the "Inter-Item" box.
Step 5: Click “Continue,” and then click “OK.”
Rule of Thumb for Reliability Result
A rule of thumb for interpreting alpha for dichotomous questions (i.e., questions with two possible answers) or Likert scale questions is:
α ≥ 0.9: excellent
0.8 ≤ α < 0.9: good
0.7 ≤ α < 0.8: acceptable
0.6 ≤ α < 0.7: questionable
0.5 ≤ α < 0.6: poor
α < 0.5: unacceptable
In general, a score of more than 0.7 is usually okay. However, some authors suggest higher values of 0.90 to 0.95.
Use the rules of thumb listed above with caution. A high value for alpha may mean that the items in the test are highly correlated. However, α is also sensitive to the number of items in a test: a larger number of items can result in a larger α, and a smaller number of items in a smaller α. If alpha is high, this may also mean there are redundant questions (i.e., questions asking the same thing). A low value for alpha may mean that there are not enough questions on the test; adding more relevant items can increase alpha. Poor interrelatedness between test questions can also cause low values, as can measuring more than one latent variable.
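The sensitivity of alpha to test length is easy to see numerically. Expressed in terms of the average inter-item correlation r̄, standardized alpha equals N × r̄ / (1 + (N − 1) × r̄). The short sketch below holds r̄ fixed and varies only the number of items N (the value 0.3 is an arbitrary assumption for illustration):

```python
# Minimal sketch: standardized alpha as a function of the number of
# items N, with the average inter-item correlation held fixed.
r_bar = 0.3  # modest average inter-item correlation (arbitrary assumption)
for n in (2, 5, 10, 20, 40):
    alpha = n * r_bar / (1 + (n - 1) * r_bar)
    print(f"N = {n:2d} items -> alpha = {alpha:.2f}")
# Alpha climbs from about 0.46 at N = 2 to about 0.94 at N = 40,
# even though the items are no more interrelated.
```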
Confusion often surrounds the causes of high and low alpha scores. This can result in tests being incorrectly discarded or wrongly labeled as untrustworthy. Psychometrics professor Mohsen Tavakol and medical education professor Reg Dennick suggest that improving your knowledge of internal consistency and unidimensionality will lead to the correct use of Cronbach's alpha:
Unidimensionality in Cronbach's alpha assumes that the questions measure only one latent variable or dimension. If you measure more than one dimension (either knowingly or unknowingly), the test result may be meaningless. You could break the test into parts, measuring a different latent variable or dimension with each part. If you are not sure whether your test is unidimensional, run a factor analysis to identify the dimensions in your test.
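One simple way to screen for unidimensionality before running a full factor analysis is to inspect the eigenvalues of the inter-item correlation matrix, the quantity factor analysis decomposes. The minimal Python sketch below reuses the same kind of hypothetical response matrix shown earlier:

```python
# Minimal sketch: eigenvalues of the inter-item correlation matrix as a
# quick unidimensionality screen. One dominant eigenvalue suggests a
# single underlying dimension; several eigenvalues above 1 suggest the
# test may be measuring more than one latent variable.
import numpy as np

responses = np.array([  # hypothetical Likert responses (rows = respondents)
    [4, 5, 4, 4, 5],
    [2, 3, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 2, 3, 4],
    [4, 4, 4, 5, 4],
    [1, 2, 2, 1, 2],
])
corr = np.corrcoef(responses, rowvar=False)             # inter-item correlations
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
print("Eigenvalues:", np.round(eigenvalues, 2))
```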