Dr. Rajesh Kumar
Assistant Professor, Department of Computer Science, Bucknell University
My research focuses on behavioral biometrics: the study of how patterns of human behavior—such as typing, movement, touch, and gaze—can be used to secure systems, verify authorship, and improve the reliability and accessibility of intelligent technologies. Over the course of this work, we have pursued a coherent research agenda centered on a practical and consequential question: how do behavioral security and AI systems perform under realistic, adversarial, and human-centered conditions?
This agenda has developed into three tightly connected lines of inquiry:
(1) establishing realistic adversarial threat models for behavioral biometric systems,
(2) extending behavioral biometrics beyond authentication to academic integrity and authorship verification in the era of large language models, and
(3) integrating behavioral signals into natural language processing and accessible human–computer interaction.
Independent external evaluations characterize these contributions as foundational rather than incremental, note their influence on evaluation practice in the field, and emphasize the consistency and upward trajectory of this research program, particularly given its development in a teaching-intensive, undergraduate-centered environment.
A central contribution of our work is the systematic study of behavioral biometric systems under realistic adversarial assumptions. Early work in this area often evaluated authentication systems in settings that did not adequately reflect how attackers adapt, imitate, or exploit population-level behavior.
Our research on gait-based authentication introduced one of the first reproducible frameworks for studying physical imitation attacks via treadmill-assisted spoofing. This work demonstrated that wearable gait systems can be compromised using external cues alone, without access to the victim’s data or devices. We subsequently extended this line of inquiry by adapting dictionary-style attacks, long used in password security, to inertial sensor data. These studies showed that population-derived motion patterns could be replayed to defeat gait authentication systems, reframing how such systems are evaluated and establishing benchmarks for adversarial testing.
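To make the attack setting concrete, the sketch below illustrates the basic shape of a dictionary-style attack on a template-based gait verifier: population-derived motion probes are replayed against every enrolled account. It is a minimal illustration with synthetic data and a placeholder distance-threshold verifier, not the published pipeline; the feature extraction and operating point are simplified assumptions.

```python
# Minimal sketch of a dictionary-style attack on a template-based gait verifier.
# All data is synthetic; the features, threshold, and verifier are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def gait_features(cycles):
    """Collapse a stack of accelerometer gait cycles into one feature vector (mean + std)."""
    return np.concatenate([cycles.mean(axis=0), cycles.std(axis=0)])

# Synthetic population: shared gait structure plus a per-user offset and per-cycle noise.
n_users, n_cycles, cycle_len = 50, 20, 64
base = np.sin(np.linspace(0, 2 * np.pi, cycle_len))
users = [base + 0.3 * rng.normal(size=cycle_len) + 0.1 * rng.normal(size=(n_cycles, cycle_len))
         for _ in range(n_users)]

# Enrolment: each user's template comes from their first ten cycles; a fixed distance
# threshold stands in for the verifier's operating point.
templates = [gait_features(u[:10]) for u in users]
threshold = 1.0

# "Dictionary": population-derived probes, here simply other users' held-out cycles,
# standing in for generic motion patterns mined from a large corpus.
dictionary = [gait_features(u[10:]) for u in users]

# Attack: replay every dictionary entry against every enrolled template (excluding the
# victim's own data) and count how many accounts at least one probe gets into.
broken = sum(
    any(np.linalg.norm(tpl - probe) < threshold
        for j, probe in enumerate(dictionary) if j != i)
    for i, tpl in enumerate(templates)
)
print(f"Accounts accepted by at least one dictionary probe: {broken}/{n_users}")
```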
A similar approach characterizes our work on touch-based and signature-based authentication. We showed that swipe-gesture and offline signature systems were often evaluated against weak or unrealistic attack models, leading to overstated performance claims. To address this gap, we developed generative adversarial frameworks that simulate both genuine users and informed impostors during training and evaluation. These methods exposed previously overlooked vulnerabilities and produced countermeasures that improved robustness while preserving usability.
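The skeleton below shows, in simplified form, how a generative model can be trained to synthesize informed-impostor probes for a swipe-gesture verifier. It is a generic GAN recipe over placeholder feature vectors, assumed here purely for illustration; the actual architectures, features, and training protocols in our work differ.

```python
# Illustrative GAN skeleton for synthesizing swipe-gesture feature vectors that act as
# "informed impostor" probes during evaluation. Generic recipe, not the published setup.
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM = 16, 8  # e.g. swipe duration, length, velocity and pressure statistics

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, FEAT_DIM))
discriminator = nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

genuine = torch.randn(512, FEAT_DIM) * 0.5 + 1.0  # placeholder for real swipe features

for step in range(500):
    real = genuine[torch.randint(len(genuine), (64,))]
    fake = generator(torch.randn(64, NOISE_DIM))

    # Discriminator: separate genuine swipes from synthesized forgeries.
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce forgeries the discriminator accepts as genuine.
    loss_g = bce(discriminator(generator(torch.randn(64, NOISE_DIM))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The trained generator then supplies informed-impostor probes for stress-testing a
# swipe authenticator, alongside conventional zero-effort impostors.
forged_probes = generator(torch.randn(100, NOISE_DIM)).detach()
```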
Across modalities—gait, touch, signatures, and motion sensors—this body of work follows a complete research cycle: identifying vulnerabilities, formalizing adversarial models, and validating principled defenses. External reviewers emphasize that these contributions established methodological and empirical standards that subsequent work in behavioral authentication now assumes.
More recently, our research has extended behavioral biometrics to academic integrity and authorship verification, motivated by the rapid adoption of generative language models. With content-based plagiarism detection becoming increasingly fragile, we introduced a shift in perspective: analyzing how text is produced rather than examining only the finished text.
Our keystroke-based approach to authorship verification models behavioral signals such as typing latency, revision dynamics, and fluency to distinguish human-authored text from AI-assisted or AI-generated writing. This work was validated across multiple writing conditions and languages and received the Best Paper Award at the IEEE International Joint Conference on Biometrics (2024). External evaluations describe this contribution as positioning the field for a new phase of authorship verification that moves beyond surface-level textual analysis.
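As a simplified illustration of the underlying idea, the sketch below derives a few process-level features from a keystroke log, such as inter-key latency statistics, revision rate, and long-pause rate, and fits a standard classifier over them. The event format, feature set, synthetic sessions, and model are illustrative assumptions rather than the published system.

```python
# Illustrative sketch: process-level features from a keystroke log plus a simple
# classifier. Event format, features, synthetic data, and model are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def keystroke_features(events):
    """events: list of (timestamp_seconds, key) tuples from one writing session."""
    times = np.array([t for t, _ in events])
    keys = [k for _, k in events]
    latencies = np.diff(times) if len(times) > 1 else np.array([0.0])
    return np.array([
        latencies.mean(), latencies.std(),                  # typing rhythm
        sum(k == "Backspace" for k in keys) / len(keys),    # revision rate
        (latencies > 2.0).mean(),                           # long-pause rate
    ])

def synth_session(assisted):
    """Toy keystroke log: 'assisted' sessions have long idle gaps and few revisions."""
    n = 200
    gaps = rng.exponential(1.5 if assisted else 0.25, size=n)
    keys = ["Backspace" if rng.random() < (0.02 if assisted else 0.08) else "a"
            for _ in range(n)]
    return list(zip(np.cumsum(gaps), keys))

sessions = [synth_session(a) for a in [True] * 50 + [False] * 50]
labels = np.array([0] * 50 + [1] * 50)        # 1 = unassisted human typing

X = np.stack([keystroke_features(s) for s in sessions])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```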
This line of work has expanded to multilingual settings, cognitive-context analysis, and ongoing extensions to code writing and low-resource languages. The broader contribution is not a single detector, but a privacy-conscious, hard-to-game behavioral framework for authorship verification that aligns with how humans actually produce text.
In parallel, we have applied behavioral biometrics to social and professional media integrity, developing methods to identify deceptive or automated accounts using behavioral production signals rather than invasive content inspection. Together, these projects extend behavioral security beyond authentication to address emerging integrity challenges in education and online platforms.
A third strand of our research explores how behavioral signals can be used not only to secure systems, but also to improve and humanize AI models.
In work on gaze-informed natural language processing, we demonstrated that incorporating real or synthetically generated gaze signals into language models improves performance on tasks such as sentiment and sarcasm detection. This work introduced a behaviorally grounded approach to attention modeling that aligns machine learning systems more closely with human reading behavior.
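One simple way to realize this idea is sketched below: each token embedding is scaled by a gaze-derived weight, here normalized per-token fixation durations, before pooling for classification. The dimensions, pooling scheme, and random placeholder fixations are assumptions for illustration, not the published architecture.

```python
# Minimal sketch: scale each token embedding by a gaze-derived weight before pooling.
# Dimensions, pooling scheme, and the random placeholder fixations are assumptions.
import torch
import torch.nn as nn

class GazeWeightedClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, token_ids, fixation_ms):
        # fixation_ms: per-token fixation durations, measured or predicted by a gaze model
        weights = fixation_ms / fixation_ms.sum(dim=-1, keepdim=True)       # attention-like prior
        pooled = (self.emb(token_ids) * weights.unsqueeze(-1)).sum(dim=1)   # gaze-weighted pooling
        return self.head(pooled)

model = GazeWeightedClassifier()
token_ids = torch.randint(0, 10_000, (4, 12))   # a batch of 4 sentences, 12 tokens each
fixation_ms = torch.rand(4, 12) * 300 + 50      # placeholder fixation durations in ms
logits = model(token_ids, fixation_ms)          # (4, 2) sentiment / sarcasm logits
```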
We have also developed behavior-based input systems aimed at accessibility, including a touchless typing framework that translates head and facial movements into text using sequence-to-sequence models. Developed with undergraduate collaborators, this project received national recognition and illustrates how behavioral modeling can expand access to computing for users with motor impairments.
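The schematic below conveys the overall structure of such a system: a recurrent encoder summarizes a sequence of head and facial movement features, and a decoder emits characters one step at a time. The feature dimensionality, character vocabulary, and architecture are simplified stand-ins for the actual framework.

```python
# Schematic encoder-decoder mapping a head/facial movement sequence to characters.
# Feature dimensions, vocabulary, and architecture are simplified stand-ins.
import torch
import torch.nn as nn

class TouchlessTyper(nn.Module):
    def __init__(self, motion_dim=8, hidden=128, n_chars=30):
        super().__init__()
        self.encoder = nn.GRU(motion_dim, hidden, batch_first=True)
        self.char_emb = nn.Embedding(n_chars, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, motion, char_in):
        # motion: (batch, time, motion_dim) head-pose / facial-landmark features
        # char_in: (batch, chars) previously emitted characters (teacher forcing)
        _, state = self.encoder(motion)                        # summarize the movement sequence
        dec_out, _ = self.decoder(self.char_emb(char_in), state)
        return self.out(dec_out)                               # per-step character logits

model = TouchlessTyper()
motion = torch.randn(2, 100, 8)              # two gestures, 100 frames each
char_in = torch.randint(0, 30, (2, 10))      # shifted target characters
logits = model(motion, char_in)              # (2, 10, 30)
```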
Across these projects, the unifying theme is the integration of behavioral evidence into AI systems to improve robustness, transparency, and inclusivity.
This body of work has resulted in 28+ peer-reviewed publications, including sustained publication in selective venues such as ACM CCS, IEEE IJCB, IEEE T-BIOM, and EACL, with multiple papers recognized among the top-reviewed submissions at their venues and a Best Paper Award (IJCB 2024). These contributions have accrued nearly 1,000 citations and have influenced follow-on work across behavioral biometrics, security, and trustworthy AI.
Our professional service includes four Best Reviewer Awards, extensive reviewing activity, and repeated invitations to serve on National Science Foundation review panels evaluating federal research proposals. External reviewers consistently note that this level of impact and visibility is particularly notable given a five-course teaching load and the absence of PhD or postdoctoral infrastructure, and that the scholarly record would meet or exceed tenure standards at comparable institutions.
This research program has been built largely through undergraduate mentorship, resulting in award-winning, publishable work and a durable pipeline of student researchers.
Looking forward, our research will continue to develop behaviorally grounded security and integrity frameworks for systems increasingly shaped by generative AI. Ongoing work extends authorship verification to new languages and modalities, revisits behavioral authentication under evolving threat models, and further integrates behavioral signals into trustworthy and accessible AI systems.
The overarching goal remains consistent: to design systems that are not only accurate under ideal conditions, but robust, transparent, and aligned with real human behavior.