The learner demonstrates understanding of quantitative research designs, description of sample, instrument development, description of intervention (if applicable), data collection and analysis procedures such as survey, interview, and observation, and guidelines in writing research methodology.
The learner is able to describe adequately quantitative research designs, sample, instrument used, intervention (if applicable), data collection, and analysis procedures.
QUANTITATIVE RESEARCH DESIGN
Quantitative research is a research approach that emphasizes the collection and analysis of numerical data to make inferences, test hypotheses, and quantify relationships between variables. It is characterized by its focus on objective and systematic measurement, statistical analysis, and the use of structured research instruments. Quantitative research is often used to describe, explain, and predict phenomena in a precise and quantifiable manner.
Characteristics
Quantitative research relies on the measurement of variables using standardized instruments and procedures. These variables are often expressed in numerical terms.
Researchers formulate specific hypotheses or research questions and use statistical methods to test these hypotheses. The goal is to determine if there are statistically significant relationships or differences between variables.
Quantitative research often involves larger sample sizes to enhance the generalizability of findings to broader populations.
Data is typically collected through structured methods such as surveys, experiments, questionnaires, and standardized tests. This ensures consistency and comparability.
Data is analyzed using statistical techniques, including descriptive statistics (mean, median, mode), inferential statistics (t-tests, ANOVA, regression analysis), and various data visualization methods.
Quantitative research aims for findings that are generalizable to a larger population. Random or stratified sampling methods are often used to achieve this.
The researcher's presence and bias are minimized in quantitative research, as the focus is on objective measurement and analysis.
SAMPLING IN QUANTITATIVE RESEARCH
Sampling refers to the process of selecting a subset of individuals or items from a larger population or dataset for the purpose of studying and making inferences about the entire population or dataset. Sampling is a fundamental aspect of research design and is crucial for ensuring that research findings are valid, practical, and generalizable (applicable to the entire population). The logic followed in sampling is analogous to food tasting, where the spoonful (sample) used to check the flavor of the dish being cooked represents the taste of the entire dish (population). Hence, sampling is applied in statistical surveys such as election preference and poverty rate reports, where a small number of respondents from across the Philippines are surveyed to represent the perceptions of the entire Filipino population.
Key Terms:
Population refers to the entire group of individuals, items, or elements that the researcher is interested in studying. It can be a specific group of people, objects, data points, or any other defined set.
Target Population. The entire group that the researcher is interested in studying or making conclusions about. It represents the ideal or theoretical group, but researchers often work with the accessible population due to practical limitations. For example, in research targeting HUMSS students, the target population is all HUMSS students enrolled in all senior high schools in the Philippines, because they share the same characteristics regardless of academic institution.
Accessible Population. The subset of the target population that a researcher can easily access, recruit, or study. It is a practical group that fits within the constraints of the research project. For example, all HUMSS students enrolled in Bauan Technical High School, or in the Schools Division of Batangas Province.
Sample is the subset of the population that is selected for the actual research study. It is a smaller, manageable group that represents the larger population.
Sampling Frame is a list or source from which the sample will be drawn. It should ideally include all members of the population. However, it's important to note that the sampling frame might not always be identical to the actual population due to practical constraints.
Randomization is the process of introducing randomness or chance into the selection of the sample or the assignment of subjects to different experimental groups. Common techniques include the fishbowl or lottery method and computer-generated randomization.
Sample Size refers to the number of individuals or items included in a sample selected from a larger population. A larger sample generally provides more precise estimates, but it can also be more resource-intensive. Sample size calculations are often used to determine an appropriate sample size based on factors like desired confidence level and margin of error.
Margin of Error (MOE) is a measure of the uncertainty or variability in survey results due to random sampling. Common choices are 2%, 5%, or 8% (0.02, 0.05, or 0.08, respectively).
Confidence Level is a measure of the level of confidence or probability that the true population parameter falls within the margin of error. A higher confidence level, such as 99%, indicates a higher level of confidence but typically results in a larger margin of error.
Importance of Sampling
Sampling allows researchers to study a subset of individuals or items from a larger population. By drawing inferences from the sample, researchers aim to make conclusions about the entire population. This is especially important when studying large or inaccessible populations.
Conducting research on an entire population can be impractical, time-consuming, and costly. Sampling is a more efficient way to obtain data and make generalizations without the need to study every member of the population.
Sampling reduces the burden on data collection efforts, making it more manageable for researchers. This is particularly important when conducting surveys, interviews, or experiments.
In some cases, it is simply impossible to study an entire population. For example, it may be challenging to survey every resident of a large city, or it may be impractical to study every item on a production line. Sampling makes research projects more feasible.
When researchers can focus their efforts on a smaller, more manageable sample, they can often collect more detailed and accurate data, leading to higher data quality and more reliable results.
Properly conducted sampling allows researchers to generalize their findings from the sample to the larger population. This generalizability is a fundamental goal of scientific research.
Through random or systematic selection methods, sampling can help reduce bias in the selection of study participants or items. This is crucial for obtaining unbiased and representative results.
In many cases, a well-designed sample can provide results that are sufficiently precise for the research objectives. A larger sample further enhances precision.
Sampling can sometimes be more ethical than studying an entire population. For example, it may be considered unethical to expose an entire population to certain experiments or interventions.
Sampling allows researchers to test hypotheses and research questions, which may involve comparing groups, measuring associations, or testing the effectiveness of interventions. These analyses are often more manageable with a sample.
TYPES OF SAMPLING IN RESEARCH
Randomization
Randomization techniques are used in research and various activities to introduce an element of randomness or chance into a process, which helps reduce bias and ensure fairness. Here are some common randomization techniques, along with instructions on how to perform them:
Lottery or Fishbowl Technique. This technique involves drawing items or names from a container, often a fishbowl or hat, to make random selections.
Procedure:
1. Write down the items or names you want to randomize on separate pieces of paper, cards, or slips.
2. Place all the pieces of paper with the items or names into a container (e.g., a fishbowl).
3. Mix the pieces of paper in the container thoroughly.
4. Blindly select one piece of paper from the container to make your random selection.
Roulette or Wheel of Names (Online). Online tools and applications, such as "Wheel of Names" or random name picker websites, can be used to conduct random drawings.
Procedure:
1. Go to a reputable online random name picker tool or website.
2. Enter the names or items you want to randomize into the tool or website.
3. Use the "spin" or "pick a random name" button to obtain a random selection.
Computer-Generated Randomization (Spreadsheet Application like Microsoft Excel). Spreadsheet software like Microsoft Excel includes built-in randomization functions (e.g., RAND) to generate random numbers, which can be used for random selections.
Procedure (using Excel's RAND function):
1. In a cell, enter the formula "=RAND()" to generate a random number between 0 and 1.
2. Copy the formula to generate multiple random numbers.
3. Sort the list of random numbers in ascending order to obtain a random order of items or names.
4. Select items or names based on the sorted order of random numbers.
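As a complement to the spreadsheet procedure, the same sort-by-random-number idea can be sketched in Python. This is a minimal illustration; the respondent names below are placeholders, not data from an actual study.

```python
import random

def random_selection(names, k, seed=None):
    """Assign each name a random number, sort ascending, take the first k.
    Conceptually equivalent to drawing k slips from a well-mixed fishbowl."""
    rng = random.Random(seed)
    keyed = [(rng.random(), name) for name in names]
    keyed.sort()                          # ascending order of random numbers
    return [name for _, name in keyed[:k]]

respondents = ["Ana", "Ben", "Carla", "Dino", "Ella"]  # placeholder names
chosen = random_selection(respondents, 3)
```

Passing a fixed `seed` makes the draw reproducible, which is useful when documenting the randomization step in the methodology chapter.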
Sample Size Determination
Sample size determination is a critical step in research and survey design. It involves calculating the number of participants or items that should be included in a sample to ensure that the study produces reliable and meaningful results. The appropriate sample size depends on several factors, including the research objectives, desired level of confidence, margin of error, and characteristics of the population. Two common ways of determining the sample size are Slovin's Formula and the Raosoft sample size calculator.
Slovin's Formula is a straightforward method to estimate the sample size for a research study, particularly in situations where the population size is large and the goal is to achieve a reasonable level of precision. It is commonly used in survey research. The formula is as follows:

n = N / (1 + Ne²)

Where:
n is the sample size you want to determine.
N is the total population size.
e is the margin of error (expressed as a decimal, not a percentage).
To compute the sample size using Slovin's Formula, follow these steps:
Determine the Population Size (N): You need to know the total size of the population you are interested in. This should be the entire group you want to make inferences about.
Choose a Margin of Error (e): The margin of error is a measure of how much you want your sample estimate to vary from the true population value. It's usually expressed as a decimal, and the value depends on your desired level of confidence. For example, if you want a 5% margin of error, you would use e=0.05.
Apply Slovin's Formula: Plug the population size (N) and the margin of error (e) into the formula. This will give you the required sample size (n).
Round the Result: Depending on your practical needs and constraints, you may need to round the calculated sample size to the nearest whole number. In some cases, it's acceptable to have a fraction of a person or item in your sample, while in others, you may round up to ensure an adequate sample size.
An example to illustrate how to use Slovin's Formula:
Suppose you want to estimate the sample size for a survey of customer preferences in Batangas City with a population of 10,790 people (N) and you want a margin of error of 5% (e = 0.05). Using Slovin's Formula:

n = 10,790 / (1 + 10,790 × 0.05²) = 10,790 / 27.975 ≈ 385.7

Rounding up gives a required sample size of 386 respondents.
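Assuming the formula n = N / (1 + Ne²), the computation for this example can be sketched in Python:

```python
import math

def slovin(N, e):
    """Slovin's Formula: n = N / (1 + N * e**2)."""
    return N / (1 + N * e**2)

n = slovin(10_790, 0.05)    # raw value, roughly 385.7
sample_size = math.ceil(n)  # round up so the sample is not undersized
```

Rounding up (rather than to the nearest whole number) is the conservative choice, since it guarantees the margin of error is not exceeded.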
Steps in Proportional Sample Size Distribution (suppose 386 samples are to be distributed proportionally among seven groups [barangays] in Batangas City)
Determine the Total Population (N): Find the total population of Batangas City. This is the sum of the populations of all seven barangays.
Calculate the Total Sample Size (n): Use Slovin's Formula to calculate the total sample size for the entire city, using the total population (N) and the desired margin of error (e). In this case, the previous computation yielded 386.
Distribute the Total Sample Size Proportionally:
Divide the total sample size (n) among the seven groups (barangays) in Batangas City. The distribution should be proportional to the population of each group:

ni = (Ni / N) × n

Where:
ni is the sample size allocated to the i-th group.
Ni is the population of the i-th group.
N is the total population of Batangas City.
n is the total sample size for the city calculated in step 2.
Round the Allocated Sample Sizes: Round the calculated sample sizes for each group to the nearest whole number, as it's usually not practical to have fractional individuals in a sample.
Conduct the Sampling: Proceed to sample individuals or items from each of the seven groups according to the allocated sample sizes.
An example to illustrate the proportional distribution of sample size
Apply the formula to distribute the 386 samples proportionally among seven barangays comprising 10,790 individuals.
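A minimal Python sketch of the proportional allocation ni = (Ni / N) × n. The original example does not list the seven barangay populations, so the figures below are hypothetical values that sum to 10,790:

```python
def allocate(populations, total_sample):
    """Allocate total_sample proportionally: ni = round(Ni / N * n)."""
    N = sum(populations)
    return [round(Ni / N * total_sample) for Ni in populations]

barangay_pops = [2500, 2000, 1800, 1500, 1200, 1000, 790]  # hypothetical
shares = allocate(barangay_pops, 386)
# Note: rounding each share independently can make the total drift by one
# or two; check sum(shares) and adjust the largest group if needed.
```

With these hypothetical populations the rounded shares happen to sum to exactly 386, but in general a small manual adjustment after rounding may be required.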
DATA GATHERING INSTRUMENTS
Data gathering instrument is a tool, method, or device that is used to collect data or information from research participants or sources. These instruments are designed to systematically and consistently capture data for the purpose of answering research questions, testing hypotheses, or addressing the objectives of a study. Data gathering instruments can take various forms and are tailored to the specific research design and the type of data needed.
PHASES OF INSTRUMENT DEVELOPMENT
RESEARCH INSTRUMENT VALIDATION
Research instrument validation is the process of assessing the quality and accuracy of a research tool or instrument to ensure that it measures what it is intended to measure. There are several types of validation methods, each serving a specific purpose.
Content Validity. It assesses the extent to which the items in a research instrument represent the entire content domain of the construct it is intended to measure (alignment with the research questions or objectives). Experts in the field often review the instrument to ensure it comprehensively addresses the research objectives.
Example: A survey on student satisfaction with online learning might be validated by asking educational experts to confirm that the questions comprehensively cover areas such as course content, ease of navigation, instructor support, and technical reliability.
Construct Validity. Determines whether the instrument truly measures the theoretical construct it claims to measure. This often involves correlating the instrument with other measures known to assess the same construct.
Example: A test measuring "critical thinking skills" could be validated by comparing it with established critical thinking assessments to see if the new test produces similar results.
Criterion-Related Validity. Refers to how well one measure predicts an outcome based on another established measure (the criterion). It is divided into two subtypes:
a. Concurrent Validity: The instrument is compared with another valid measure taken at the same time.
b. Predictive Validity: The instrument is tested to see if it can predict future outcomes.
Example: A job aptitude test could be validated by comparing its scores with employees' current job performance (concurrent validity) or their future success in a role (predictive validity).
Face Validity. Refers to whether the instrument "appears" to measure what it is supposed to measure, as judged by individuals who are not experts (a subjective form of validation).
a. Language Validity. It checks the appropriateness of language and word use with respect to cultural sensitivity and translation issues.
b. Statistical Validation. It checks the correctness and alignment of the Likert-scale response anchors, the sample determination, the reliability test, and/or the statistical tools to be used in analyzing the data.
Example: A questionnaire on stress levels might have face validity if the questions seem reasonable to participants (e.g., asking about work pressure, family obligations) even if not thoroughly validated by experts.
RELIABILITY TESTING
Research instrument reliability tests assess the consistency and stability of a research instrument or measurement tool. Several types of reliability tests are commonly used, each serving a specific purpose. Here are some of the most common types, along with examples and interpretations of their scores or results:
Internal Consistency Reliability. Internal consistency reliability assesses the degree to which the items within an instrument consistently measure the same underlying construct. It is often measured using techniques like Cronbach's alpha.
Example: A researcher administers a 20-item questionnaire on job satisfaction to a group of employees.
Interpretation: Cronbach's alpha is calculated, and a value closer to 1.00 indicates higher internal consistency. A typical benchmark for acceptability is an alpha value of 0.70 or higher, but this can vary depending on the context.
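A minimal sketch of how Cronbach's alpha can be computed from raw item scores, using the standard formula α = k/(k−1) × (1 − Σ item variances / total-score variance). The response matrix below is illustrative, with sample (n−1) variances assumed throughout:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """scores: one row per respondent, one column per item."""
    k = len(scores[0])
    items = list(zip(*scores))            # transpose to per-item columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative data: 4 respondents answering a 3-item Likert scale
responses = [[4, 4, 5], [3, 3, 4], [5, 4, 5], [2, 2, 3]]
alpha = cronbach_alpha(responses)  # 0.975, above the 0.70 benchmark
```

In practice, alpha is usually obtained from statistical software (SPSS, jamovi, R) on the full pilot-test dataset; this sketch only shows what the software is computing.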
Test-Retest Reliability. This type of reliability assesses the consistency of the instrument over time. It involves administering the same instrument to the same participants on two separate occasions, with a reasonable time gap between the tests.
Example: A researcher administers a personality inventory to a group of participants and then re-administers the same inventory to the same participants two weeks later.
Interpretation: The scores from the two test administrations are correlated, and a high correlation coefficient (e.g., Pearson's r) indicates good test-retest reliability. A coefficient of 0.70 or higher is typically considered acceptable.
Parallel Forms Reliability. Also known as alternate or equivalent forms reliability, this method involves developing two different but equivalent versions of the instrument and administering them to the same participants.
Example: Two sets of math achievement tests are created, and each set is administered to the same group of students.
Interpretation: The scores from the two forms are correlated, and a high correlation coefficient indicates good parallel forms reliability. Similar to test-retest reliability, a coefficient of 0.70 or higher is typically considered acceptable.
Inter-Rater Reliability. Inter-rater reliability assesses the consistency of ratings or judgments made by different raters or observers. It is commonly used in observational research and content analysis.
Example: Two independent observers assess video recordings of classroom behavior using a predefined rating scale.
Interpretation: Cohen's kappa or intraclass correlation coefficients (ICC) are often used to assess inter-rater reliability. Higher coefficients indicate greater agreement between raters.
Split-Half Reliability. In split-half reliability, the instrument is divided into two halves, and the scores on one half are compared to the scores on the other half.
Example: A researcher divides a 20-item vocabulary test into two sets of 10 items each and compares the scores obtained on each set.
Interpretation: The correlation between the scores on the two halves is calculated. A high correlation coefficient indicates good split-half reliability.
STATISTICAL TOOLS FOR RELIABILITY TESTING
Reliability testing involves the use of various statistical tools and techniques to assess the consistency, stability, or repeatability of measurements made using a research instrument. The specific tool or technique chosen depends on the type of reliability being tested and the characteristics of the data.
Cronbach's Alpha (α):
Type of Reliability: Internal Consistency Reliability
Description: Cronbach's alpha is used to assess the internal consistency of an instrument, especially when it contains multiple items that measure the same construct. It quantifies the degree to which items in a scale are related to each other.
Interpretation: A high alpha value (typically 0.70 or higher) indicates good internal consistency.
Cohen's Kappa (κ):
Type of Reliability: Inter-Rater Reliability
Description: Cohen's kappa measures the level of agreement between two or more raters when assessing categorical data. It takes into account the agreement expected by chance.
Interpretation: A high kappa value (close to 1) indicates good inter-rater reliability.
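A minimal sketch of Cohen's kappa for two raters assigning categorical labels, using κ = (po − pe) / (1 − pe), where po is observed agreement and pe is chance agreement. The rating data are illustrative:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """r1, r2: parallel lists of category labels from two raters."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[cat] * c2[cat] for cat in c1) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative data: two observers coding four classroom episodes
rater1 = ["A", "A", "B", "B"]
rater2 = ["A", "A", "B", "A"]
kappa = cohens_kappa(rater1, rater2)  # 0.5 (moderate agreement)
```

Note how kappa (0.5) is lower than raw agreement (75%), because it discounts the agreement expected by chance.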
Split-Half Correlation:
Type of Reliability: Split-Half Reliability
Description: The split-half method involves dividing an instrument into two equal halves and then correlating the scores on one half with the scores on the other half.
Interpretation: A high correlation coefficient between the two halves indicates good split-half reliability.
Kuder-Richardson Formula 20 (KR-20) or KR-21:
Type of Reliability: Internal Consistency Reliability
Description: KR-20 and KR-21 are used to estimate the internal consistency of dichotomous (e.g., yes/no) items, such as those found in tests and questionnaires.
Interpretation: A high KR-20 or KR-21 value (typically 0.70 or higher) indicates good internal consistency.
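A minimal KR-20 sketch for dichotomous (1/0) item data, using KR-20 = k/(k−1) × (1 − Σpq / total-score variance). The data matrix is illustrative, and this version assumes the sample (n−1) variance of total scores; some textbooks use the population variance, which gives slightly different values:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def kr20(answers):
    """answers: one row per examinee, one column per item; 1 = correct."""
    k = len(answers[0])
    n = len(answers)
    items = list(zip(*answers))
    # p = proportion correct per item, q = 1 - p
    pq = sum((sum(col) / n) * (1 - sum(col) / n) for col in items)
    total_var = variance([sum(row) for row in answers])
    return k / (k - 1) * (1 - pq / total_var)

# Illustrative data: 5 students on a 4-item test
data = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
result = kr20(data)  # 0.864, above the 0.70 benchmark
```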
Pearson Correlation Coefficient (r):
Type of Reliability: Test-Retest Reliability
Description: Pearson's correlation coefficient assesses the linear relationship between two sets of data collected at two different time points. It measures the degree of association or correlation between measurements taken on the same individuals on two separate occasions.
Interpretation: A high positive correlation (close to +1) indicates good test-retest reliability.
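A minimal sketch of Pearson's r applied to two illustrative test administrations (test-retest), computed from the usual definition r = Σ(x−x̄)(y−ȳ) / (√Σ(x−x̄)² · √Σ(y−ȳ)²):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data: the same 5 participants tested two weeks apart
time1 = [10, 12, 14, 16, 18]
time2 = [11, 13, 13, 17, 19]
r = pearson_r(time1, time2)  # roughly 0.96, above the 0.70 benchmark
```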
Spearman Rank-Order Correlation (ρ or rho):
Type of Reliability: Test-Retest Reliability
Description: The Spearman correlation is a non-parametric measure that assesses the monotonic relationship between two sets of data. It is used when the data is ordinal or when the assumption of linearity is not met.
Interpretation: A high positive correlation indicates good test-retest reliability.
INTERVENTION IN RESEARCH
An intervention refers to a deliberate and planned action or treatment that is applied to one or more groups or individuals in a study to assess its impact or influence on a particular outcome or dependent variable. Interventions are commonly used in experimental and quasi-experimental research designs to investigate causal relationships between variables.
In quantitative research, interventions are crucial for testing hypotheses and drawing conclusions about cause-and-effect relationships. The careful design and execution of interventions help researchers investigate the impact of specific treatments or conditions on the variables of interest, contributing to the body of scientific knowledge in various fields.
Characteristics
The primary purpose of an intervention is to manipulate or change a specific independent variable to observe its effects on one or more dependent variables. Researchers use interventions to determine whether a particular treatment, program, or condition has a significant impact on the outcomes they are studying.
In experimental research, interventions are a fundamental component. Researchers randomly assign participants to different groups, with one group (the experimental group) receiving the intervention, and another group (the control group) not receiving the intervention or receiving a placebo. This allows researchers to compare the outcomes of the two groups to assess the intervention's effectiveness.
In some cases, researchers may use quasi-experimental designs where they cannot or should not assign participants randomly to groups. In such designs, interventions may still be applied to specific groups or individuals, but the assignment may be non-random. Researchers must carefully consider the potential biases in these cases.
Quantitative research relies on measuring variables and outcomes in a precise and systematic manner. Interventions help researchers collect data before and after the treatment to determine any changes that can be attributed to the intervention. This requires selecting appropriate measurement tools and statistical analysis techniques.
Interventions can take many forms, such as educational programs, medical treatments, policy changes, behavioral interventions, and more. For instance, in a study on the effectiveness of a new drug, the drug itself would be the intervention. In an educational study, a teaching method might be the intervention.
To enhance the internal validity of the study, researchers often control for other variables (known as control variables) that could potentially influence the outcome. This helps ensure that any observed effects are more likely to be due to the intervention itself rather than external factors.
PHASES OF INTERVENTION DEVELOPMENT
One example of an intervention in medical experimental research is the testing of a new vaccine.
Intervention: COVID-19 Vaccine Trials. In the context of the COVID-19 pandemic, the development of vaccines to prevent infection with the SARS-CoV-2 virus involved experimental research with clear interventions.
Development of the Vaccine. Researchers and pharmaceutical companies developed various COVID-19 vaccine candidates. These vaccines were designed to trigger an immune response in the human body to protect against the virus.
Pre-Clinical Testing. Before human trials, the vaccine candidates underwent pre-clinical testing in the laboratory and in animal models to assess their safety and efficacy.
Phase I Clinical Trials. The first phase of human clinical trials involved a small group of healthy volunteers. The intervention was the administration of the experimental vaccine to determine its safety, dosage, and ability to generate an immune response.
Phase II Clinical Trials. In this phase, a larger group of participants received the vaccine to further evaluate its safety and efficacy. Researchers closely monitored the intervention group's responses to the vaccine.
Phase III Clinical Trials. Phase III trials involved tens of thousands of participants and were conducted in multiple locations. Participants were randomized into two groups: one received the vaccine (experimental group), and the other received a placebo or a different vaccine (control group). The intervention in this phase was the administration of the COVID-19 vaccine.
Data Collection and Evaluation. Data on the incidence of COVID-19 cases and adverse effects were collected from both the experimental and control groups over several months. The goal was to assess whether the vaccine effectively prevented COVID-19 and to gather information about its safety.
Analysis and Regulatory Approval. Statistical analysis was conducted to determine the vaccine's efficacy and safety. The results were submitted to regulatory agencies (e.g., the FDA in the Philippines) for approval.
Rollout and Widespread Administration. Once the vaccine received regulatory approval, it was made available for widespread administration to the general population as an effective intervention to prevent COVID-19.
Other examples:
In experimental research, interventions are applied to manipulate an independent variable in order to investigate its effect on one or more dependent variables.
Medical Research:
Testing the efficacy of a new drug: The intervention involves administering the new drug to one group of patients (experimental group) while giving a placebo or an existing standard treatment to another group (control group).
Studying the impact of a specific medical procedure: Researchers may perform a medical procedure on one group of patients and compare the outcomes with a control group that does not receive the procedure.
Educational Research:
Evaluating the effectiveness of a teaching method: An intervention could involve implementing a new teaching approach (e.g., problem-based learning) in one group of students, while another group receives traditional instruction.
Assessing the impact of an educational program: An intervention may involve providing an educational program, such as a tutoring service, to a specific group of students to measure its effect on academic performance.
Behavioral Psychology:
Investigating the influence of a behavioral intervention: An intervention could involve implementing a behavior modification program to change specific behaviors in an experimental group and comparing the results with a control group not receiving the intervention.
Studying the impact of rewards or punishments: Researchers may apply different reinforcement strategies to assess their effect on behavior, such as offering rewards for desired behaviors in one group and no rewards in another.
Public Health:
Examining the effectiveness of a health intervention: Researchers may implement a public health campaign to promote vaccination in one community while leaving another community without the campaign to compare vaccination rates.
Testing the impact of a smoking cessation program: An intervention could involve offering a smoking cessation program to a group of smokers and measuring their success in quitting compared to a control group that doesn't receive the program.
Environmental Science:
Assessing the impact of an environmental policy change: Researchers may examine the effects of implementing stricter environmental regulations on pollution levels, comparing areas where the regulations are enforced with those where they are not.
Studying the effect of a conservation program: An intervention might involve introducing a conservation initiative in a specific region and monitoring changes in wildlife populations or habitat quality.
Social Work:
Investigating the impact of a social intervention program: Researchers could implement a program aimed at reducing substance abuse in a group of individuals and compare the results to a control group without the program.
Assessing the effectiveness of a therapy or counseling approach: Researchers may provide therapy or counseling services to individuals dealing with a specific issue and evaluate the outcomes compared to a control group that does not receive the intervention.
PLANNING FOR DATA COLLECTION
Key Considerations:
Research Objectives. Clearly define the research objectives and research questions. Ensure that the data gathering procedures are directly related to the research goals.
Construction and Validation of the Instrument. Design or select data collection instruments (e.g., questionnaires, surveys, experiments) that are reliable and valid. Pre-test and pilot the instruments to identify and rectify any issues.
Sampling Strategy. Define your target population and choose a sampling method (e.g., random sampling, stratified sampling) that will yield a representative sample. Calculate the required sample size to achieve statistical power.
Data Collection Environment. Ensure that the data collection environment is conducive to data collection. Minimize distractions and noise, especially for in-person data collection.
Data Collection Schedule. Create a data collection timeline with specific start and end dates. Allow for a buffer to accommodate unexpected delays.
Data Collection Procedures. Implement the defined data collection methods according to the research plan. Ensure that data collectors adhere to the instructions and procedures consistently.
Reporting and Documentation. Document the data collection process, including any deviations from the plan. Maintain detailed records for transparency and auditability.
Ethics and Consent. Adhere to ethical considerations, including obtaining informed consent from participants. Ensure that all data collection activities comply with ethical guidelines. Develop a system for secure data management, including data storage, coding, and backup procedures. Protect participants' privacy and confidentiality.
Quantitative Research Data Gathering Protocol
Develop, validate, and subject the data gathering instrument to pilot testing to check its reliability.
Schedule the time and location for data gathering.
Brief the respondents by explaining the purpose, rationale, rights, benefits, and consequences of their participation in the survey.
Emphasize the anonymity and confidentiality of their responses and participation. Ask them to sign the consent form and the data privacy notice before the actual survey. For electronic data collection, include a statement in the form that, by proceeding with the survey, respondents confirm that they have read, understood, and agreed to the terms and conditions of their participation.
Proceed with the actual data gathering.
Debrief the respondents and ask them about their concerns and questions before concluding the survey.
Record and tally the collected data.
Document the data gathering process while observing ethical considerations in reporting the process.
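The "record and tally" step above can be sketched with a simple frequency count. This is a minimal illustration, assuming hypothetical responses to a single Likert-scale survey item; the response labels and data are made up for the example.

```python
from collections import Counter

# Hypothetical responses to one Likert-scale item (illustrative data only).
responses = ["Agree", "Strongly Agree", "Agree", "Neutral",
             "Disagree", "Agree", "Strongly Agree", "Neutral"]

tally = Counter(responses)      # frequency of each response option
total = sum(tally.values())

# Print a tally table with counts and percentages, most frequent first.
for option, count in tally.most_common():
    print(f"{option:<15} {count:>3} {count / total:>7.1%}")
```

In practice the tally would feed directly into the descriptive statistics planned in the data analysis stage.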
Ethical Considerations in Data Gathering
Research ethics are a set of guidelines and principles that should be observed in the conduct of research. They are essential to ensure that the rights, well-being, and privacy of research participants are protected. Researchers must adhere to ethical principles and guidelines to maintain the integrity and credibility of their work.
Informed Consent. Researchers must obtain informed and voluntary consent from all participants before data collection. Participants should be fully aware of the study's purpose, procedures, risks, and benefits, and they should have the option to withdraw at any time without consequences.
Assent Form - Assent is the agreement of someone not able to give legal consent to participate in the study (e.g., minors below the legal age of consent).
Consent Form - Consent may only be given by individuals who have reached the legal age of consent (typically 18 years old), such as adult participants or the parents/guardians of minor participants.
Minimization of Harm. Researchers must minimize any potential physical, emotional, or psychological harm to participants. They should also provide appropriate resources and support in the event of any adverse effects.
Beneficence. Researchers should aim to maximize benefits and minimize harm to participants. The research should contribute to the greater good without imposing undue risks or burdens on participants.
Non-Discrimination. Researchers should avoid any form of discrimination or bias when selecting and treating participants. All individuals should be treated with fairness and respect, and their diversity should be acknowledged.
Anonymity and Confidentiality. Participants' identities should be protected, and data should be reported in such a way that individual participants cannot be identified. This includes using anonymous codes or pseudonyms and reporting data in aggregate form.
Confidentiality. This entails limiting data access to the researcher and the respondents only. De-identifying data, securing data storage, and ensuring that confidential information is not disclosed without consent are common practices.
Data Privacy. In compliance with the Data Privacy Act of the Philippines (RA 10173), researchers must take measures to safeguard data during collection, storage, and analysis. This includes collecting only the data that is needed and using secure, password-protected systems to prevent unauthorized access.
Briefing and Debriefing. In some research, particularly studies involving deception, researchers should provide participants with a debriefing session after data collection to clarify any misunderstandings and address concerns.
Voluntary Participation. Researchers must respect the autonomy of participants and their right to make informed decisions about their involvement in the research. This includes respecting their right to withdraw from the study at any time.
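The anonymity practice described above, replacing identities with anonymous codes, can be sketched as follows. This is a minimal illustration: the respondent names, fields, and code format (`R001`, `R002`, ...) are hypothetical, and the point is that the code book linking codes to names is kept separate and secured by the researcher.

```python
# De-identify survey records by replacing names with sequential respondent
# codes. Names and fields below are hypothetical examples.
records = [
    {"name": "Juan Dela Cruz", "score": 84},
    {"name": "Maria Santos", "score": 91},
]

code_book = {}      # stored apart from the data set, accessible to the researcher only
deidentified = []
for i, record in enumerate(records, start=1):
    code = f"R{i:03d}"                  # e.g., R001, R002, ...
    code_book[code] = record["name"]    # the only link back to an identity
    deidentified.append({"code": code, "score": record["score"]})

print(deidentified)
```

Reports would then cite only the codes or aggregate figures, never the code book.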
PLANNING FOR DATA ANALYSIS
Key Strategies for Planning Data Analysis in Quantitative Research
Clearly articulate your research objectives and research questions. Your data analysis plan should align with these objectives and questions.
Choose the most suitable statistical and data analysis techniques based on the type of data you have and the research objectives. Consider whether you need descriptive statistics, inferential statistics, regression analysis, or other specific methods.
Before analysis, clean and prepare the data. This includes dealing with missing data, outliers, and ensuring data is correctly formatted for analysis.
Outline the specific steps and procedures you will follow in the data analysis process. This may include the order of analyses, the statistical software to be used, and the criteria for determining significance.
If your research involves hypothesis testing, specify your null and alternative hypotheses, significance level, and the appropriate statistical tests. Ensure that the tests match the type of data and research design.
Verify that your sample size is sufficient to achieve the desired statistical power for your analyses.
Plan for analyzing interactions or moderation effects, especially if your research explores relationships between variables with moderating factors.
Determine how you will present your results. Create clear and informative tables, charts, and visualizations to communicate your findings effectively.
Consider involving a colleague or expert to review your data analysis plan to ensure accuracy and validity.
If necessary, consult with a statistician or data analysis expert to ensure that your data analysis plan is sound and rigorous.
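The strategies above on cleaning data, computing descriptive statistics, and testing hypotheses can be sketched together in a short example. The scores and groups below are hypothetical, and Welch's t statistic stands in for whatever test actually matches the study's design; interpreting it still requires comparing against the appropriate critical value or p-value.

```python
import math
import statistics

# Hypothetical test scores; None marks a missing response (illustrative data).
raw_scores = [78, 85, None, 90, 85, 72, None, 88, 85]

scores = [s for s in raw_scores if s is not None]   # handle missing data first

summary = {                                         # descriptive statistics
    "n": len(scores),
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
    "stdev": statistics.stdev(scores),
}
print(summary)

# Welch's t statistic for two independent groups (hypothetical data),
# e.g., an intervention group vs. a control group.
group_a = [82, 75, 90, 88, 79]
group_b = [70, 68, 77, 74, 73]
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
t = (mean_a - mean_b) / math.sqrt(var_a / len(group_a) + var_b / len(group_b))
print(round(t, 2))
```

In a real analysis, statistical software would also report the degrees of freedom and p-value used to decide significance at the chosen level.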
WRITING THE RESEARCH METHODOLOGY
In writing the research methodology chapter, describe the components discussed in this module in the order they were planned: the research design, the sample and sampling procedure, the research instrument and its validation, the intervention (if applicable), the data gathering procedure (including ethical considerations), and the data analysis plan. Write in enough detail that another researcher could replicate the study.
References:
Division Memorandum 279, s. 2022. Research Abstract Template and Informed Consent Form. Department of Education. City Schools Division of Dasmarinas. Retrieved on October 31, 2023 from https://bit.ly/40iNRtI
O'Gorman, K.D. & MacIntosh, R. (2015). "Chapter 4: Mapping Research Methods". In: O'Gorman, K.D. & MacIntosh, R. (eds.). Oxford: Goodfellow Publishers. http://dx.doi.org/10.23912/978-1-910158-51-7-2772
Illustrations retrieved from https://www.scribbr.com/methodology/sampling-methods/