The stacking models require longer training times and more computational resources, especially when computationally intensive base classifiers such as XGBoost and LinearSVC are used.
Although SMOTE helps with class imbalance by giving the model more examples of what characterizes the minority class, it has weaknesses of its own, such as ignoring the majority class while generating its synthetic instances, which can place new samples inside regions occupied by the majority class.
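The weakness above follows directly from how SMOTE works: each synthetic point is interpolated between a minority sample and one of its k nearest minority-class neighbours, so the majority class never enters the computation. A minimal NumPy sketch of that interpolation step (the function name, `k` value, and toy data are illustrative, not the imbalanced-learn implementation):

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, rng=None):
    """Simplified SMOTE: interpolate between a minority sample and one of
    its k nearest minority-class neighbours. Only X_min is ever consulted;
    the majority class plays no role, which is the weakness noted above."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from the chosen point to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # random position on the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: five points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_new = smote_sketch(X_min, n_new=4, rng=0)
print(X_new.shape)  # (4, 2)
```

Because every synthetic point lies on a segment between two minority samples, it always falls inside the minority class's convex hull, even where that hull overlaps majority-class territory.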