Bibliography
Amorim, E., Cançado, M., & Veloso, A. (2018). Automated Essay Scoring in the Presence of Biased Ratings. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 229–237. https://doi.org/10.18653/v1/N18-1021
Artstein, R., & Poesio, M. (2008). Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4), 555–596. https://doi.org/10.1162/coli.07-034-R2
Conijn, R., Kahr, P., & Snijders, C. (2023). The Effects of Explanations in Automated Essay Scoring Systems on Student Trust and Motivation. Journal of Learning Analytics, 10(1), Article 1. https://doi.org/10.18608/jla.2023.7801
Congdon, P., & McQueen, J. (2000). The Stability of Rater Severity in Large-Scale Assessment Programs. Journal of Educational Measurement, 37, 163–178. https://doi.org/10.1111/j.1745-3984.2000.tb01081.x
Ilievski, I. (2015). Java End-to-End PDTB-Styled Discourse Parser. https://github.com/WING-NUS/pdtb-parser
Klebanov, B. B., & Madnani, N. (2022). Automated Essay Scoring. Springer International Publishing. https://doi.org/10.1007/978-3-031-02182-4
Lin, Z., Ng, H. T., & Kan, M.-Y. (2014). A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(2), 151–184. https://doi.org/10.1017/S1351324912000307
Manning, C., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S., & McClosky, D. (2014). The Stanford CoreNLP Natural Language Processing Toolkit. Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 55–60. https://doi.org/10.3115/v1/P14-5010
Marro, S., Cabrio, E., & Villata, S. (2022). Graph Embeddings for Argumentation Quality Assessment. Findings of the Association for Computational Linguistics: EMNLP 2022, 4154–4164. https://doi.org/10.18653/v1/2022.findings-emnlp.306
Mikolov, T., Chen, K., Corrado, G. S., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. International Conference on Learning Representations. https://doi.org/10.48550/arXiv.1301.3781
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, É. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12(85), 2825–2830.
Stab, C., & Gurevych, I. (2017). Parsing Argumentation Structures in Persuasive Essays. Computational Linguistics, 43(3), 619–659. https://doi.org/10.1162/COLI_a_00295
Wachsmuth, H., & Werner, T. (2020). Intrinsic Quality Assessment of Arguments. Proceedings of the 28th International Conference on Computational Linguistics, 6739–6745. https://doi.org/10.18653/v1/2020.coling-main.592
Wambsganss, T., Niklaus, C., Cetto, M., Söllner, M., Handschuh, S., & Leimeister, J. M. (2020). AL: An Adaptive Learning Support System for Argumentation Skills. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3313831.3376732