Highradius. (n.d.). Credit scoring models: Types and examples. Highradius. Retrieved from https://www.highradius.com/resources/Blog/credit-scoring-models-types-and-examples/
Varga, G. (2023, July 11). Understanding credit scoring for fintechs. Oscilar. Retrieved from https://oscilar.com/blog/credit-scoring-guide
Fengff1292. (n.d.). Credict score prediction by NGBoost. Kaggle. Retrieved July 18, 2024, from https://www.kaggle.com/code/fengff1292/credict-score-prediction-by-ngboost
Satpathy, S. (2020). SMOTE for imbalanced classification with Python. Analytics Vidhya. Retrieved from https://www.analyticsvidhya.com
Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
Train in Data. (n.d.). Overcoming class imbalance with SMOTE: How to tackle imbalanced datasets in machine learning. Retrieved from https://www.blog.trainindata.com/overcoming-class-imbalance-with-smote-how-to-tackle-imbalanced-datasets-in-machine-learning/
Bunkhumpornpat, C., Sinapiromsaran, K., & Lursinsap, C. (2009). Safe-level-SMOTE: Safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In Advances in knowledge discovery and data mining: 13th Pacific-Asia conference, PAKDD 2009 Bangkok, Thailand, April 27-30, 2009 proceedings 13 (pp. 475-482). Springer Berlin Heidelberg.
Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320.
Sharmasaravanan. (2023, August 25). Understanding stacking classifiers: A comprehensive guide. Medium. Retrieved from https://sharmasaravanan.medium.com/understanding-stacking-classifiers-a-comprehensive-guide-195bfab58e48
Nalepa, J., & Kawulok, M. (2019). Selecting training sets for support vector machines: A review. Artificial Intelligence Review, 52(2), 857-900.
Džeroski, S., & Ženko, B. (2004). Is combining classifiers with stacking better than selecting the best one? Machine Learning, 54, 255-273.
Nti, I. K., Nyarko-Boateng, O., & Aning, J. (2021). Performance of machine learning algorithms with different K values in K-fold cross-validation. International Journal of Information Technology and Computer Science, 13(6), 61-71.
Raschka, S. (2018). Model evaluation, model selection, and algorithm selection in machine learning. arXiv preprint arXiv:1811.12808.
Dhumne, S. (2023, March 4). What is lasso regression? Medium. Retrieved July 21, 2024, from https://medium.com/@shruti.dhumne/what-is-lasso-regression-bd44addc448c
Evidently AI. (n.d.). Accuracy vs. precision vs. recall in machine learning: What's the difference? Retrieved July 21, 2024, from https://www.evidentlyai.com/classification-metrics/accuracy-precision-recall
Iguazio. (n.d.). What is model accuracy in machine learning. Retrieved from https://www.iguazio.com/glossary/model-accuracy-in-ml/
Wijaya, C. Y. (2023, February 17). The limitation of accuracy score. Non-Brand Data. Retrieved from https://cornellius.substack.com/p/the-limitation-of-accuracy-score
Simic, M. (2022, February 25). Gradient boosting trees vs. random forests. Baeldung on Computer Science. Retrieved from https://www.baeldung.com/cs/gradient-boosting-trees-vs-random-forests