Recent advances in quantum technologies have accelerated research into computational models that surpass classical limitations. Quantum computing harnesses principles such as superposition and entanglement to solve problems that are infeasible for classical machines.
My research focuses on designing and analyzing quantum algorithms and investigating scalable quantum hardware systems. I study quantum circuit design, qubit behavior through Bloch sphere modeling, and surface-code-based error correction. This work aims to bridge theoretical quantum algorithm development with practical implementations on noisy intermediate-scale quantum (NISQ) devices.
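The Bloch sphere modeling mentioned above can be illustrated with a short sketch: any normalized single-qubit state maps to a point on the unit sphere via its Bloch vector. The function name `bloch_vector` and the example states below are illustrative choices, not from any particular library.

```python
import numpy as np

def bloch_vector(state):
    """Map a normalized single-qubit state [a, b] (amplitudes of |0> and |1>)
    to its Bloch vector (x, y, z) on the unit sphere."""
    a, b = state
    x = 2 * (np.conj(a) * b).real   # x = sin(theta) cos(phi)
    y = 2 * (np.conj(a) * b).imag   # y = sin(theta) sin(phi)
    z = abs(a) ** 2 - abs(b) ** 2   # z = cos(theta)
    return np.array([x, y, z])

# |0> sits at the north pole; the superposition |+> lies on the equator.
plus = np.array([1, 1]) / np.sqrt(2)     # |+> = (|0> + |1>)/sqrt(2)
print(bloch_vector(np.array([1, 0])))    # -> [0. 0. 1.]
print(bloch_vector(plus))                # -> [1. 0. 0.]
```

Tracking how gates rotate this vector is a standard way to reason about single-qubit circuit behavior before moving to multi-qubit, error-corrected settings.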
I explore fault tolerance mechanisms across various computing paradigms, focusing on how different systems ensure reliability in the presence of errors. In embedded systems, where resources are constrained, I study lightweight software-based techniques for protecting against soft errors. In classical computing, I examine fault recovery methods such as hardware redundancy and consensus protocols. In quantum computing, I analyze how logical qubits are stabilized using quantum error correction codes to manage the inherent fragility of physical qubits. My work aims to systematically understand and compare these distinct approaches to fault tolerance, bridging practical engineering with theoretical foundations.
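One lightweight software technique of the kind described for embedded systems is triple modular redundancy (TMR): execute a computation three times and take a majority vote, masking a single transient soft error. The sketch below is a minimal illustration; `faulty_square` and its bit-flip model are hypothetical stand-ins for a real workload and fault model.

```python
import random
from collections import Counter

def faulty_square(x, flip_prob=0.0):
    """Square x, occasionally flipping one low bit of the result to
    model a transient single-event upset (a soft error)."""
    r = x * x
    if random.random() < flip_prob:
        r ^= 1 << random.randrange(8)  # inject a single-bit flip
    return r

def tmr(compute, x):
    """Triple modular redundancy: run three independent replicas and
    return the majority result; None signals a detected but
    uncorrectable disagreement (no two replicas agree)."""
    votes = Counter(compute(x) for _ in range(3))
    value, count = votes.most_common(1)[0]
    return value if count >= 2 else None

# With no fault injection, all three replicas agree.
print(tmr(lambda v: faulty_square(v, 0.0), 7))  # -> 49
```

The same majority-vote idea recurs, in very different forms, in consensus protocols for classical distributed systems and in the syndrome-based decoding of quantum error correction codes, which is what makes a systematic comparison across these paradigms possible.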
I explore the integration of quantum computing with federated learning to enable secure and privacy-preserving distributed model training. In this paradigm, local devices collect and encode sensitive data into quantum states, ensuring that raw information never leaves the device. Each device performs local quantum training, leveraging quantum circuits to extract model updates while maintaining data confidentiality. These updates are then transmitted to a central server, where they are aggregated to form a global model without accessing any original data. My work examines how this approach combines the computational advantages of quantum systems with the collaborative benefits of federated learning, aiming to enhance security, efficiency, and scalability in next-generation AI systems.
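The server-side aggregation step described above can be sketched classically: the server combines parameter updates by weighted averaging (in the style of FedAvg) without ever seeing clients' raw data. This is a minimal sketch of the aggregation alone; the function name `fed_avg`, the client updates, and the dataset sizes are assumed for illustration, and the quantum encoding and local quantum training steps are abstracted away into the updates themselves.

```python
import numpy as np

def fed_avg(updates, weights):
    """Aggregate client model updates by weighted averaging.
    The server only receives parameter vectors, never raw client data."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize to a convex combination
    return sum(w * u for w, u in zip(weights, np.asarray(updates, dtype=float)))

# Three hypothetical clients report local updates; the server weights
# each contribution by that client's (assumed) dataset size.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
print(fed_avg(updates, sizes))  # -> [3.5 4.5]
```

In the quantum-federated setting the local updates would come from parameterized quantum circuits rather than classical training loops, but the privacy property rests on the same structure: only aggregated model parameters, not data, cross the device boundary.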