Introduction
It is currently the middle of February, and I feel as though I am traveling across America and am about to hit the west coastline and put my feet in the warm sand. I have been able to learn so much along my journey, and while it initially made me sad that there is so little academic research on reparations for Native American tribes, I also feel excited that there are other people like me who care about this topic and have dedicated time and energy to researching and writing about it.
A major milestone in my AP Research project has been completing the full data collection phase across five academic databases: JSTOR, Project Muse, Google Scholar, EBSCO, and the Library of Congress. Additionally, I have been able to conduct initial analysis of this data on both a quantitative and qualitative scale. I designed a centralized spreadsheet to organize both kinds of data, with each row corresponding to a single academic source and each tab serving a different stage of analysis. This system allowed me to track search terms, total results, peer-reviewed results, relevance tests, and key quotes all in one place. I feel like I have made significant progress in my research; while I was initially moving slowly through the data collection process, I was able to complete it quickly and then spend time analyzing the data.
One of the biggest challenges I encountered was the imbalance in database results. Large databases like Google Scholar and EBSCO returned thousands of results for nearly every search term, even those that included the word "reparations." This would have made the close reading analysis I had planned unrealistic within the project's timeframe. On the opposite end, the Library of Congress returned zero results for search terms that included "reparations."
Another unexpected challenge I faced was figuring out how to analyze my data in a meaningful way, given my limitations as a high school researcher. I don’t have access to advanced statistical software, and my data didn’t come from experiments or surveys. These factors initially made me wonder if a meta-analysis was even possible to conduct. This was not at all a challenge I expected to run into.
To address the challenge surrounding the discrepancy in database results, I narrowed my qualitative analysis to JSTOR and Project Muse. This choice provided me with a manageable number of peer-reviewed sources while still maintaining academic credibility. This adjustment allowed me to move forward without sacrificing massive amounts of time or the integrity of my analysis.
In order to tackle the issue surrounding my quantitative analysis, I had to teach myself how meta-analytic methods can be adapted for history and social science research. After figuring this out, I learned how to calculate weighted peer-review percentages, variance, pooled standard deviation, and effect size using only a graphing calculator, standard statistical formulas, and the information in my spreadsheet. This process involved a lot of trial and error, double-checking formulas, and learning statistical tests I had never encountered before.
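The calculations described above can be sketched in a few lines of code. This is a minimal illustration of the standard formulas, not my actual spreadsheet workflow; the function names and the example numbers in the comments are hypothetical, used only to show how the pieces fit together.

```python
from math import sqrt

def weighted_percentage(peer_reviewed_counts, total_counts):
    # Weighted peer-review percentage: total peer-reviewed results
    # divided by total results across all search terms, so terms with
    # more hits count proportionally more than rare terms.
    return 100 * sum(peer_reviewed_counts) / sum(total_counts)

def pooled_sd(s1, s2, n1, n2):
    # Pooled standard deviation for two groups with sample standard
    # deviations s1, s2 and sample sizes n1, n2.
    return sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def cohens_d(mean1, mean2, s1, s2, n1, n2):
    # Standardized mean difference (Cohen's d): the gap between the
    # two group means expressed in pooled-standard-deviation units.
    return (mean1 - mean2) / pooled_sd(s1, s2, n1, n2)

# Hypothetical example: two search terms with 3 and 1 peer-reviewed
# hits out of 10 results each give a weighted percentage of 20%.
print(weighted_percentage([3, 1], [10, 10]))  # 20.0
```

By convention, a Cohen's d around 0.2 is considered small, 0.5 medium, and 0.8 or above large, which is why the effect size reported below counts as very large.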
One of the most significant findings so far comes from my quantitative meta-analysis comparing restoration-focused and reparations-focused search terms. This was the portion of my analysis I was most worried about because I had no idea what the results would be. It turned out that while restoration-related terms produced a higher volume of raw results in each database on average, reparations-focused terms had a much higher weighted peer-reviewed percentage: 31.01% compared to 17.53%. Using standardized mean difference (Cohen's d), I found an effect size of 1.55, which indicates a very large difference in scholarly concentration between the two conceptual groups. This suggests that although reparations literature is less common, it is more academically concentrated.
Reflection
So far throughout this research process, I've learned how to adapt complex methods like meta-analysis with limited resources and to teach myself statistical concepts without relying on advanced software. I've also become more confident in making methodological decisions, and I've learned to sit with uncertainty and let patterns emerge slowly rather than forcing my argument to fit my expectations. I thought many aspects of my data would turn out very differently than they did, so I had to be okay with switching perspectives and letting the data speak for itself. Looking ahead at the remainder of my research journey, I hope to continue to deepen my qualitative analysis and connect these patterns more clearly to questions of accountability, sovereignty, and historical responsibility.