Title: Enhancing Sentiment Analysis with Local and Global Memory in Heterogeneous Graph Neural Networks
Brief Description: We present LGM-HGNN, a hybrid model that augments heterogeneous graph neural networks with hierarchical memory tracking through dynamic GRU gating. The model uses rich graph representations to capture inter-word and inter-aspect relationships, and employs a dual-level memory module—local memory for instance-level detail and global memory for corpus-level sentiment trends—that is dynamically updated and fused for more accurate sentiment tracking.
Journal: Knowledge-Based Systems [Q1 Journal]
Status: Accepted
Title: A Comparative Analysis of Classification Algorithms for Loan Eligibility on Imbalanced Data
Brief Description: A comprehensive evaluation of diverse classification algorithms—including linear models, distance-based methods, tree-based learners, boosting techniques, and probabilistic classifiers—is conducted to determine the most suitable approach for loan eligibility prediction.
Conference: International Conference on Quantum Photonics, Artificial Intelligence, and Networking (QPAIN 2025) [IEEE]
Status: Accepted
Title: MaskNet: Enhancing Crime Event Detection with Feature Masking and Dynamic Attention
Brief Description: This paper presents MaskNet, a novel technique that improves crime event detection by combining dynamic multi-head attention with feature masking in Transformer models. Traditional attention mechanisms frequently fail to prioritize key features, which degrades detection performance.
Conference: 2nd International Conference on Next-Generation Computing, IoT and Machine Learning (NCIM 2025) [IEEE]
Status: Accepted
Title: Explainable AI for Stroke Risk Assessment: Voting versus Stacking Classifiers
Brief Description: This research applies machine learning (ML) techniques to accurate stroke prediction, with emphasis on Voting and Stacking classifiers. Our study integrates feature significance analysis with Local Interpretable Model-agnostic Explanations (LIME) to enhance interpretability, moving beyond conventional accuracy metrics.
Journal: Journal of Artificial Intelligence (IJ-AI) [Q2 in AI Category]
Status: Accepted
Title: Co-AttenDWG: Co-Attentive Dimension-Wise Gating and Expert Fusion for Multi-Modal Offensive Content Detection
Brief Description: The Co-AttenDWG architecture redefines multi-modal learning by overcoming limitations inherent in static fusion techniques. Integrating dual-path encoding, co-attention with dimension-wise gating, and advanced expert fusion, it dynamically harnesses complementary textual and visual cues in a unified embedding space.
Journal: IEEE Transactions on Artificial Intelligence [Q1 Journal]
Status: Revision On-Going
Title: A Secure and Interpretable Federated Learning Framework for Diabetes Prediction with Blockchain-Enabled Security
Brief Description: This paper introduces the Secure Interpretable Federated Learning (SIFL) framework, designed to advance trustworthy AI in healthcare by combining federated optimization, blockchain-based verification, and model interpretability. In contrast to previous studies that focus on privacy, auditability, or explainability separately, SIFL offers an integrated, latency-aware solution suited for real-world, non-IID, and resource-constrained settings.
Journal: IEEE Transactions on Artificial Intelligence [Q1 Journal]
Status: Revision On-Going
Title: CrosGrpsABS: Cross-Attention over Syntactic and Semantic Graphs for Aspect-Based Sentiment Analysis in a Low-Resource Language
Brief Description: This work addresses the underexplored task of aspect-based sentiment analysis (ABSA) in Bengali, a low-resource language lacking annotated datasets and NLP tools. We propose CrosGrpsABS, a hybrid framework that combines transformer-based contextual embeddings with syntactic and semantic graphs via a bidirectional cross-attention mechanism.
Journal: Expert Systems with Applications [Q1 Journal]
Status: Peer Reviewing Stage
Title: A Hybrid Architecture with Separable Convolutions and Attention for Lung and Colon Cancer Detection
Brief Description: We propose a novel hybrid architecture, the Separable Convolution and Attention Mechanism (SCA-mechanism), to enhance the accuracy of cancer classification. The proposed model integrates separable convolutions with residual connections to facilitate efficient feature extraction while incorporating attention mechanisms to emphasize key regions indicative of malignancy.
Journal: Array [Q1 Journal]
Status: Revision On-Going
Title: Hierarchical Graph Attention Networks with BERT Embedding for Bengali Aspect-Based Sentiment Analysis
Brief Description: This work addresses the challenges of Aspect-Based Sentiment Analysis (ABSA) in Bengali, a low-resource language, by introducing a novel hierarchical graph-based model. Leveraging Graph Attention Networks (GATs) and BERT embeddings, complemented by a custom-designed Transformer block, our approach effectively captures the intricate relationships between aspect terms and sentiment expressions.
Journal: Egyptian Informatics Journal [Q1 Journal]
Status: Peer Reviewing Stage
Title: Evaluating Assistive Apps for Visually Impaired Users: Features, Design, and User Satisfaction
Brief Description: We propose a framework intended to serve as a valuable resource for future developers creating applications tailored to visually impaired individuals. Our findings highlight the importance of accessibility features and user-centric design, and suggest that future research should explore the integration of emerging technologies to further enhance app usability for visually impaired users.
Journal: Multimedia Tools and Applications [Q1 Journal]
Status: Revision On-Going
Title: Enhancing Transparency in Healthcare: An Explainable AI Framework for Multi-Disease Diagnosis
Brief Description: This paper proposes an Explainable Artificial Intelligence (XAI) framework aimed at enhancing transparency in multi-disease diagnosis by integrating explainability into both tabular and image-based data modalities. This dual-modality framework effectively addresses the transparency gap in AI-powered diagnostics by combining performance with interpretability. Our results demonstrate that integrating multiple XAI techniques across ML and DL models significantly improves clinical relevance, fosters clinician trust, and paves the way for ethical and accountable AI systems in healthcare.
Journal: Cognitive Computation [Q1 Journal]
Status: Revision On-Going
Title: Evaluating the Usability Based on Blended HCI Approach: Perspective of Bangladeshi e-Learning Web Platforms
Brief Description: We propose a data-driven blended HCI framework that combines user and expert perspectives to evaluate four popular Bangladeshi e-learning web platforms. We hope this framework will play a vital role in the future development of better user-engaged e-learning platforms. Additionally, we recommend a base model for developing future e-learning web applications that incorporates both perspectives.
Journal: IEEE Access
Status: On-Going