Hybrid Optogenetic and Electrical Retinal Prosthesis for Vision Restoration
Abstract: Retinal prostheses that stimulate retinal ganglion cells hold promise for restoring vision in patients with photoreceptor degeneration, such as retinitis pigmentosa and age-related macular degeneration. Clinically approved retinal prostheses based on electrical stimulation are limited by the spread of electrical current, and the vision they restore does not reach even the acuity threshold that defines legal blindness. Optogenetic approaches offer greater spatial precision but are hindered by lower temporal fidelity and by safety concerns arising from the high light intensities required for stimulation. In this study, we present a novel hybrid approach that combines optogenetic and electrical stimulation of retinal ganglion cells. Using ex vivo retinal electrophysiology, including patch clamping and calcium imaging, we demonstrate that hybrid stimulation can lower optogenetic activation thresholds and enhance high-frequency responses without compromising spatial resolution. Our findings suggest that integrating hybrid stimulation into future retinal prostheses could improve the safety, efficiency, and resolution of vision restoration.
Integrated Multiscale Modeling of the Virtual Eye and Retina
Abstract: We introduce a novel, parameterized simulation platform designed for the comprehensive study of the human eyeball, retina, and optic nerve. This in silico framework holds significant potential for patient-specific tailoring and serves as an advanced tool in enhancing artificial vision solutions for individuals with visual impairments. The multiscale model is developed by integrating the capabilities of NEURON and COMSOL Multiphysics, enabling large-scale simulations of over 100,000 neurons while preserving robust biological plausibility. The finite element model intricately represents the anatomical details of a multi-layer human eye, including an optic nerve segment comprising myelinated fibers and surrounding tissues. The neural model encompasses a diverse array of retinal components such as photoreceptors, horizontal cells, bipolar cells, amacrine cells, and both midget and parasol retinal ganglion cells with their axonal projections, coupled with comprehensive network connectivity across various eccentricities. Model construction leverages data from electrophysiology, immunohistology, and optical coherence tomography imaging of healthy and degenerated human retinas. This simulation platform is poised to become an invaluable resource for engineers, scientists, and clinicians engaged in the design and optimization of retinal stimulation devices and drug delivery methods, with broad applications in both research and education.
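The abstract above describes coupling a finite element model of the eye (COMSOL Multiphysics) with multicompartment neuron models (NEURON). As a rough illustration of that general coupling idea, and not the authors' actual platform, the sketch below drives a simplified ganglion-cell-like neuron in NEURON's Python interface with a pre-computed extracellular potential that stands in for a field exported from a finite element solver. The geometry, the use of the built-in `hh` channels, the pulse shape, and the spatial falloff of the field are all assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' platform): applying an externally computed
# extracellular potential, e.g. exported from a finite element solver, to a
# simplified retinal-ganglion-cell-like model in NEURON.
import numpy as np
from neuron import h

h.load_file("stdrun.hoc")

# Simplified two-section cell: soma plus an axon stub (assumed geometry).
soma = h.Section(name="soma")
soma.L = soma.diam = 20                    # um
axon = h.Section(name="axon")
axon.L, axon.diam, axon.nseg = 500, 1, 51
axon.connect(soma)

for sec in (soma, axon):
    sec.insert("hh")                       # stand-in for RGC channel kinetics
    sec.insert("extracellular")            # exposes e_extracellular per segment

# Stand-in for FEM output: a 0.2 ms/phase biphasic extracellular pulse at
# t = 1 ms, decaying with distance from an electrode near the axon's origin.
tvec = h.Vector(np.arange(0, 5.001, 0.025))            # ms
t = tvec.as_numpy()
pulse = np.where((t >= 1.0) & (t < 1.2), -1.0,
         np.where((t >= 1.2) & (t < 1.4), 1.0, 0.0))

played = []
for seg in axon:
    scale = 50.0 / (1.0 + (seg.x * axon.L / 50.0) ** 2)  # mV, assumed falloff
    v_ext = h.Vector(pulse * scale)
    v_ext.play(seg._ref_e_extracellular, tvec, True)     # interpolate in time
    played.append(v_ext)                                  # keep references alive

# Record the somatic membrane potential and run.
t_rec = h.Vector().record(h._ref_t)
v_rec = h.Vector().record(soma(0.5)._ref_v)
h.finitialize(-65)
h.continuerun(5)
print(f"peak somatic Vm: {max(v_rec):.1f} mV")
```

In the full multiscale platform described in the abstract, the played waveforms would instead be sampled from the COMSOL field solution at each segment's location; the sketch only shows where such data would enter a NEURON simulation.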
Fractional Neural Sampling: A Shared Computational Mechanism of Biological and Artificial Neural Networks
Abstract: Brain activity unfolds across multiple spatial and temporal scales, giving rise to complex spatiotemporal dynamics. In this talk, I will introduce Fractional Neural Sampling (FNS), a theoretical framework that harnesses these dynamics to support flexible and efficient neural computation. I will first present how FNS emerges from a biophysically realistic neural circuit model, highlighting the critical role of heterogeneous connectivity in shaping complex activity patterns. I will then demonstrate how FNS provides a unified perspective on a range of brain functions, including visual attentional sampling. Finally, I will discuss how FNS enables efficient learning dynamics in deep neural networks used in AI.
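The abstract above centers on sampling-based computation driven by fractional (heavy-tailed, Lévy-flight-like) dynamics. As a generic toy, and not the speaker's FNS model, the sketch below compares random-walk Metropolis sampling of a bimodal distribution using light-tailed Gaussian steps versus heavy-tailed Cauchy steps (a Lévy-stable distribution with alpha = 1). The heavy-tailed sampler hops between the two modes far more often, conveying the intuition for how heavy-tailed dynamics can support flexible sampling, such as switching between competing attentional targets. The target density, step scales, and mode-switch count are illustrative assumptions.

```python
# Toy illustration only: heavy-tailed vs. Gaussian proposal steps in
# random-walk Metropolis sampling of a bimodal target density.
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Two well-separated modes, e.g. two competing hypotheses.
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def metropolis(step_fn, n=20000, x0=-4.0):
    x, samples = x0, np.empty(n)
    for i in range(n):
        prop = x + step_fn()                 # symmetric proposal
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x = prop                         # accept
        samples[i] = x
    return samples

gauss = metropolis(lambda: 0.5 * rng.standard_normal())    # light-tailed steps
cauchy = metropolis(lambda: 0.5 * rng.standard_cauchy())   # heavy-tailed steps

def mode_switches(s):
    signs = np.sign(s[np.abs(s) > 1.0])      # ignore samples near the saddle
    return int(np.sum(signs[1:] != signs[:-1]))

print("mode switches, Gaussian steps:", mode_switches(gauss))
print("mode switches, Cauchy steps:  ", mode_switches(cauchy))
```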
The Future of Sight: Nanochap’s Innovations in Artificial Vision
Abstract: In this talk, we will showcase Nanochap’s pioneering advancements in artificial vision, driven by cutting-edge hardware innovation. As a leader in intelligent sensing, Nanochap is redefining how machines see, understand, and interact with the world, enabling breakthroughs from lab to real-world deployment. We’ll dive into our latest ultra-low-power vision processors and high-performance neural accelerators, engineered to deliver unprecedented speed and efficiency for next-generation artificial vision applications. Learn how our hardware-software co-design approach tackles critical challenges in real-time processing, energy constraints, and scalable deployment across academic research and industrial solutions. Join us to explore how Nanochap’s technology is powering the future of sight, and discover what’s coming next in the evolution of vision.
Jian Liu
University of Birmingham
Bridging Realms: Integrating Artificial and Natural Vision for Future Innovations
Abstract: In this Symposium, we explore the cutting-edge convergence of biological and machine vision systems. As artificial intelligence advances, the lines between natural and synthetic vision blur, opening new possibilities for healthcare and neuroscience. This event brings together researchers to discuss breakthroughs in neural-inspired algorithms, sensor technologies, and brain-machine interfaces. By fostering interdisciplinary dialogue, we aim to accelerate innovations that unify artificial and natural vision, shaping a future where these systems collaborate seamlessly to enhance perception, cognition, and human-machine interaction.