Cardiovascular disease (CVD) remains a critical global health concern, demanding reliable and interpretable predictive models for early risk assessment. This study presents a large-scale analysis of the Heart Disease Health Indicators Dataset, developing a strategically weighted ensemble that combines tree-based methods (LightGBM, XGBoost) with a convolutional neural network (CNN) to predict CVD risk. The model was trained on a preprocessed dataset of 229,781 patients; the inherent class imbalance was managed through strategic weighting, and feature engineering expanded the original 22 features to 25. The final ensemble achieves a statistically significant improvement over the best individual model, with a test AUC of 0.8371 (p=0.003), and its high recall of 80.0% makes it particularly well suited for screening. To provide transparency and clinical interpretability, surrogate decision trees and SHapley Additive exPlanations (SHAP) are applied. By blending diverse learning architectures with these explainability techniques, the proposed model delivers both robust predictive performance and clinical transparency, making it a strong candidate for real-world deployment in public health screening.
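The weighted soft-voting scheme described above can be sketched as follows; the probabilities, ensemble weights, and decision threshold here are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

# Hypothetical predicted CVD probabilities from the three base models
# (stand-ins for LightGBM, XGBoost, and CNN outputs on a small test set).
p_lgbm = np.array([0.9, 0.2, 0.7, 0.4])
p_xgb = np.array([0.8, 0.3, 0.6, 0.5])
p_cnn = np.array([0.7, 0.1, 0.8, 0.3])
y_true = np.array([1, 0, 1, 0])  # 1 = CVD positive

# Strategic weighting of the base learners (weights are illustrative).
weights = np.array([0.4, 0.4, 0.2])
p_ens = weights[0] * p_lgbm + weights[1] * p_xgb + weights[2] * p_cnn

# Screening-oriented threshold: classify as positive, then measure recall.
y_pred = (p_ens >= 0.5).astype(int)
recall = y_pred[y_true == 1].sum() / (y_true == 1).sum()
print(round(recall, 2))  # → 1.0 on this toy data
```

In practice the weights would be tuned on a validation split to trade precision against the high recall the screening setting demands.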
Machine learning (ML) techniques are used extensively in medical image analysis and show great potential for disease diagnosis, such as detecting pneumonia from chest X-rays. However, the ‘black box’ nature of most ML models raises skepticism among medical personnel, as the lack of transparency hinders trust and rational decision-making in clinical settings. Although interpretable ML models offer transparency, they often sacrifice accuracy, creating a trade-off that limits the practical utility of machine learning in medical image analysis. This study aims to bridge this gap by developing a novel convolutional neural network (CNN) model and investigating its interpretability for pneumonia diagnosis on the RSNA Pneumonia Dataset of chest X-ray images. The proposed research will evaluate the interpretability of CNN-based pneumonia detection models through state-of-the-art interpretability analysis tools such as LIME and Grad-CAM. It will assess the accuracy-interpretability trade-off through quantitative and visual explanations and develop strategies to strike a balance between the two. This research is expected to improve pneumonia diagnosis by providing an advanced tool that offers interpretable visualizations, aiding physicians in their decision-making process and fostering trust in ML-based pneumonia detection systems.
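The core Grad-CAM computation the study relies on can be sketched in a few lines; the activation and gradient tensors below are synthetic stand-ins for what a trained CNN's final convolutional layer would produce for one X-ray:

```python
import numpy as np

# Synthetic activations A_k and gradients dY/dA_k from a CNN's last
# conv layer (8 feature maps of spatial size 7x7, values illustrative).
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))

# Grad-CAM channel weights: global-average-pool the gradients.
alpha = grads.mean(axis=(1, 2))  # shape (8,)

# Weighted sum of feature maps, followed by ReLU to keep only
# regions with a positive influence on the pneumonia class score.
cam = np.maximum((alpha[:, None, None] * acts).sum(axis=0), 0.0)

# Normalize to [0, 1] so the map can be overlaid on the X-ray.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (7, 7)
```

The resulting coarse heatmap is then upsampled to the input resolution, which is the visual explanation physicians would inspect alongside the model's prediction.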
In this research, we focused on the challenge of distinguishing between brass and copper, two visually similar metallic materials that often pose difficulties for traditional classification approaches. To build a reliable dataset, we collected original samples that included images of differently shaped brass and copper objects captured under multiple lighting conditions in order to account for variations in reflection, brightness, and texture. Alongside the visual data, we also recorded numerical measurements with a lux meter, capturing subtle differences in how the two metals respond to changes in light intensity. By combining these complementary modalities—visual features such as color gradients, surface patterns, and reflectivity, together with quantitative sensor-based readings—we created a more robust representation of each material. We then developed and evaluated a machine learning model capable of integrating these multimodal features, achieving accurate classification of brass and copper across a range of object shapes and lighting environments. The study demonstrates that incorporating both image-based and numerical data can significantly enhance recognition performance, offering insights into how multimodal approaches may be extended to broader problems in material identification, industrial inspection, and automated quality control.
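The multimodal fusion idea above can be illustrated with a minimal sketch: image-derived color features are concatenated with a lux-meter reading and classified by a nearest-centroid rule. All feature values are synthetic and chosen only to mimic the yellowish, more reflective appearance of brass versus the redder tone of copper; the actual model and measurements in the study differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_samples(mean_rgb, mean_lux, n=20):
    """Synthesize fused feature vectors: mean RGB values + lux reading."""
    rgb = rng.normal(mean_rgb, 5.0, size=(n, 3))
    lux = rng.normal(mean_lux, 2.0, size=(n, 1))
    return np.hstack([rgb, lux])

brass = make_samples([180, 150, 60], 95.0)   # yellowish, more reflective
copper = make_samples([184, 115, 77], 80.0)  # reddish, less reflective

centroids = {"brass": brass.mean(axis=0), "copper": copper.mean(axis=0)}

def classify(x):
    # Assign the label of the nearest class centroid in fused feature space.
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

sample = np.array([182, 118, 75, 81.0])  # copper-like color and reflectance
print(classify(sample))  # → "copper"
```

Even this toy rule separates the two metals once the lux reading is appended, which is the intuition behind combining sensor data with visual features.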