I am a faculty member in the Department of Computer Science at American University in Washington, DC, where I head the Computational Material Perception Laboratory. Our lab's research spans human cognition, AI, and the development of AI tools for understanding human cognition and neuroscience. My expertise lies in human and machine vision, particularly in understanding how the human brain processes sensory inputs to make sense of the physical world. Currently, my lab works on various topics, including human perception, generative AI for graphics and visualization, multimodal perception in VR/AR, and natural language models for clinical trial estimation. I combine human psychophysics, crowd-sourcing, image synthesis and analysis, machine learning, computer vision, and AR/VR techniques to accomplish these goals. Our research is funded by the National Institutes of Health (NIH) and the National Science Foundation (NSF). Please refer to my bio page for more information on my background and current projects.
05/16-05/20/2025 The Xiao Lab will be at the Vision Sciences Society (VSS) Annual Meeting in St. Pete Beach. Bei is giving a talk in the symposium "25 years of seeing stuff: advances and challenges in material perception".
11/22/2024 Bei is presenting at the IEEE International Conference on Bioinformatics and Biomedicine (BIBM) in Lisbon on December 3, 2024.
10/14/2024 Our paper "CTP-LLM: Predicting Clinical Trial Phase Transition using Large-Language Models" is accepted by IEEE BIBM as a short paper. Congratulations to first author Michael Reinisch!
9/17/2024 Chenxi's paper "Probing the Link Between Vision and Language in Material Perception" is now published in PLOS Computational Biology. Congratulations, Chenxi!
8/23/2024 The Xiao Lab welcomes Vatanak (Vee) Lavan, who joins the lab as a PhD student. Vee received his BS in CS from Kean University in 2024.
8/6/2024 Chenxi Liao is presenting a poster, "Probing semantic and visual representations in material perception through psychophysics and unsupervised learning," at CCN 2024 on August 6 at MIT, Cambridge, MA, USA.
5/18/2024 Bei is giving a talk at the technology workshop "Unveiling the Potential of AI in Understanding Human Vision with Ethical Integration" at the Vision Sciences Society Annual Meeting.
4/25/2024 Congratulations to collaborating student Jianfeng He (Virginia Tech) on successfully defending his PhD thesis and receiving the best thesis award from Virginia Tech.
4/03/2024 Bei is giving a seminar talk at Stanford Vision Brunch on Zoom at 12:30pm US Eastern.
3/22/2024. Bei is giving a seminar talk at the SUNY College of Optometry SIVR Colloquia, New York City.
3/08/2024. Bei is awarded the honorific title Provost Associate Professor.
1/04/2024. American University is among the first to receive an NSF Accelerating Research Translation (ART) award ($5.7 million) for translating basic research into practice and policy. Bei leads one of the seed projects on clinical trial transition prediction.
The lab currently has an opening for a PhD fellowship in Neuroscience (application deadline Dec 1, 2024). The main topic of this PhD studentship is understanding perception of objects and scenes from images and videos using large-scale human psychophysics, deep learning, VR/AR, and volumetric capture methods. The ideal candidate should have a strong technical background and experience in at least one or two of the following areas: machine learning, applied math or statistics, computational modeling, image processing, psychophysics, or computer graphics. Solid programming skills in Python and MATLAB are a plus. Prospective graduate students should contact me directly and are required to apply to the Behavior, Cognition, & Neuroscience Graduate Program at AU. International students must pass the TOEFL exam. For details, please see Openings.
The lab is recruiting several NIH-sponsored master's and undergraduate students for independent studies/capstone projects applying VR/AR methods to human perception (Summer 2024 onward). The projects also include opportunities to learn volumetric capture, the Unity 3D game engine, machine learning, and programming in Python and JavaScript for human psychophysical experiments. Interested students should send a CV with detailed academic experience and a transcript, and schedule a meeting with me. Solid programming skills in one or two of the following languages are required: Python, C, C#, and/or JavaScript.
For all inquiries, please send me your detailed CV and a brief description of your motivation, research interests, and experience, and specify your programming and analytical skills.
I recommend that students interested in joining my lab as a PhD student or postdoc read a few books. I find myself reviewing them from time to time.
Foundations of Vision, Brian Wandell. 1995.
Theories of Visual Perception, Second Edition, Ian Gordon. 1997.
Introduction to Linear Algebra, Gilbert Strang. Fifth Edition. 2016.
Information Theory, Inference, and Learning Algorithms, David MacKay. 2003.
CSC 486/696 Deep Learning in Vision, Spring 2025
CSC 208 Introduction to Programming in Python II, Spring 2023 (New! We will use GitHub Classroom)
CSC 476 Introduction to Computer Vision, Spring, 2021
CSC 435 Web Programming, Spring, 2021.
CSC 469/696 Deep learning in computer vision, Fall, 2019
I am interested in human perception, generative AI, computer vision, and machine learning. Specifically, I study various aspects of the perception of object material properties (especially complex materials such as cloth, skin, liquid, wax, and stone): how humans perceive materials with multiple senses, how machine algorithms estimate material properties, and how to simulate realistic material appearance using graphics.
Current main projects in the lab include:
Automatic image and video generation and editing from GANs, NeRFs, and other models.
Machine inference of scene, object, and material properties of objects from images and videos.
Interaction and integration between tactile and visual perception of object properties in VR.
Techniques: human psychophysics, machine learning, generative AI, computer graphics (3D modeling, rendering, animation), haptics, VR/AR.
BIBM 2024 Program Committee
ECVP 2024 Abstract Review Committee
ACM SAP 2024 Program Committee
BMVC 2023 Programme Chair
BMVC 2022 Programme Chair
Liao, C., Sawayama, M., Xiao, B. (2024). Probing the Link Between Vision and Language in Material Perception. PLOS Computational Biology, October 3, 2024. PDF
We investigate the relationship between visual similarity judgments and verbal descriptions of materials from images. We use AI models to create a diverse array of plausible visual appearances of familiar and unfamiliar materials. Our findings reveal a moderate vision-language correlation within individual participants, yet a notable discrepancy persists between the two modalities. While verbal descriptions capture material qualities at a coarse categorical level, precise alignment between vision and language at the individual stimulus level is still lacking.
He, J., Zhang, X., Lei, S., Alhamadani, A., Xiao, B., Lu, C-T. (2023). CLUR: Uncertainty Estimation for Few-Shot Text Classification with Contrastive Learning. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), Research Track, Long Beach, USA, August 2023. PDF
This paper investigates Uncertainty Estimation for Few-shot Text Classification (UEFTC), an unexplored research area. Given limited samples, a UEFTC model predicts an uncertainty score for a classification result, which is the likelihood that the classification result is false. We propose Contrastive Learning from Uncertainty Relations (CLUR) to address UEFTC. CLUR can be trained with only one support sample for each class with the help of pseudo uncertainty scores. Also, we explore four model structures in CLUR to investigate the performance of three commonly used contrastive learning components in UEFTC.
Liao, C, Sawayama, M, Xiao, B. (2023) Unsupervised learning reveals interpretable latent representations for translucency perception. PLOS Computational Biology. Feb 8, 2023. PDF. Cover. Github. Data.
We have developed an image-computable model that can accurately predict human translucency judgments, a first-of-its-kind breakthrough. Using deep learning techniques, we trained a powerful image generation network (StyleGAN) to produce realistic translucent appearances from unlabeled photographs of objects. This allowed us to create a layer-wise latent representation that captures the statistical structure of images across multiple spatial scales. Interestingly, our results indicate that the middle layers of the latent space, which represent mid-to-low spatial scale features, are particularly adept at predicting human perception.
Liao, C, Sawayama, M, Xiao, B. (2022) Crystal or Jelly? Effect of Color on the Perception of Translucent Materials with Photographs of Real-world Objects. Journal of Vision. PDF. Supplementary. Github.
This paper studies the impact of color on the perception of translucent materials, such as food and wax, using real-world object photographs as stimuli. To accomplish this, we conducted three distinct tasks: binary classification, semantic attribute rating, and material category judgments. Our findings reveal that the removal of color leads to increased disagreement among observers across all three tasks. Additionally, when converting images to grayscale, we observed a tendency for participants to misjudge certain food images as non-food items (e.g., a chunk of jelly may be perceived as crystal).
He, J., Xiao, B., Zhang, X., Lei, S., Wang, S., Huang, Q., Lu, C-T. (2021). Reducing Noise Pixels and Metric Bias in Semantic Inpainting on Segmentation Map. Proceedings of IEEE ICCV 2021, Advances in Image Manipulation Workshop. PDF. Video.
We improve semantic inpainting on segmentation maps by considering the unique characteristics of segmentation maps in both the training and testing processes. We propose a novel data augmentation (DA) that exploits a characteristic of segmentation maps, allowing us to estimate the possible value ranges of pixels in the inpainted areas in advance. To improve the quality of inpainted shapes, we propose a novel metric, Semantic Similarity (Sem), to quantify the semantic divergence between generated and ground-truth target objects.
He, J., Zhang, X., Lei, S., Wang, S., Huang, Q., Lu, C-T., Xiao, B. (2020). Semantic Editing On Segmentation Map Via Multi-Expansion Loss. Neurocomputing. arXiv:2010.08128. PDF. GitHub.
We automatically edit segmentation label maps conditioned on semantic inputs by proposing MExGAN, which uses a novel Multi-Expansion (MEx) loss, implemented by adversarial losses on a series of expanded areas, to improve alignment in the boundary areas between the manipulated object and the surrounding scene. The method can also be applied to natural image inpainting.
Aston, S., Denisova, K., Hurlbert, A., Olkkonen, M., Pearce, B., Rudd, M., Werner, A., and Xiao, B. (2020). Exploring the Determinants of Color Perception Using #Thedress and Its Variants: The Role of Spatio-Chromatic Context, Chromatic Illumination, and Material–Light Interaction. Perception. PDF.
We show how spatial and temporal manipulations of light spectra affect people's perception of material colors and illustrate the variability in individual color perception.
He, J., Zhang, X., Lei, S., Chen, Z., Chen, F., Alhamadani, A., Xiao, B., and Lu, C-T. (2020). Towards More Accurate Uncertainty Estimation In Text Classification. EMNLP 2020. PDF. GitHub.
We propose MSD (mix-up, self-ensembling, and distinctiveness score), a model that improves the accuracy of DNN uncertainty estimates for text classification by reducing the effect of the overconfident winning score and considering the impact of different categories of uncertainty simultaneously. The method can be flexibly applied to several DNNs.
Xiao, B., Zhao, S., Gkioulekas, I., Bi, WY, and Bala, K. (2020). Effect of Geometric Sharpness on Translucent Material Perception. Journal of Vision, 20(10). PDF. GitHub.
Using simulated relief objects of translucent materials with varying shapes and optical properties under different illuminations, we find that the level of geometric sharpness significantly affects perceived translucency by the observers.
Bi, WY, Jin, P., Nienborg, H., and Xiao, B. (2019). Manipulating patterns of dynamic deformations alters the impression of cloth with varying stiffness. Journal of Vision, 19(18). PDF. Project page.
We isolated dynamic deformation using dot stimuli and found that directly manipulating the pattern of dynamic deformation with our proposed method can alter perceived stiffness.
Wijntjes, M., Xiao, B., and Volcic, R. (2019). Visual communication of how fabrics feel. Journal of Vision, February 2019. PDF.
We study which visual media (images versus videos) better convey haptic properties of fabrics and explore what psychophysical task is appropriate to address this issue.
Bi, WY, Newport, J., and Xiao, B. (2018). Interaction between static visual cues and force-feedback on the perception of mass of virtual objects. ACM Symposium on Applied Perception (SAP '18). PDF. Project Page.
We use a force-feedback device and a game engine to measure the effects of material appearance on the perceived mass of virtual objects. We find that static visual appearance influences perceived mass, and the effect is opposite to the classical "material-weight illusion".
Bi, WY, Jin, P., Nienborg, H., and Xiao, B. (2018). Estimating mechanical properties of cloth from videos using dense motion trajectories: human psychophysics and machine learning. Journal of Vision, 18(5), 12-12. PDF. Project Page.
We discover that long-range spatiotemporal information across video frames plays an important role in how humans estimate the bending stiffness of cloth from animations. A model based on features of dense motion trajectories can predict the human perceptual scale of bending stiffness of cloth.
Bi, WY, and Xiao, B. (2016). Perceptual constancy of mechanical properties of fabrics under variation of external force. ACM Symposium on Applied Perception (SAP 2016). PDF. Project Page.
We study how humans achieve perceptual constancy when estimating mechanical properties of cloth that vary under external forces. We discuss our results in the context of optical-flow statistics.
Xiao, B., Bi, W.Y., Jia, X-D, Wei, HH, and Adelson, E. (2016). Can you see what you feel? Color and folding properties affect visual-tactile material discrimination of fabrics. Journal of Vision. PDF. Project Page.
We use tactile perception as ground truth to measure visual material perception. Using fabrics as our stimuli, we measure how observers match what they see (photographs of fabric samples) with what they feel (physical fabric samples).
Heasly, B.S., Cottaris, N.P., Lichtman, D.P., Xiao, B., Brainard, D.H. (2014). RenderToolbox3: MATLAB tools that facilitate physically-based stimulus rendering for vision research. Journal of Vision, Vol. 14, 2. PDF. GitHub.
We describe and release RenderToolbox3, a set of MATLAB utilities, and prescribe a workflow that should be useful to researchers who want to employ computer graphics in the study of perception.
Xiao, B., Walter, B.W., Gkioulekas, I., Zickler, T., Adelson, E, and Bala, K. (2014). Looking against the light: how perception of translucency depends on lighting direction. Journal of Vision. 14(3): 17. PDF
We study the effects of lighting direction on perception of translucency. In particular, we explore the interaction of shape, illumination, and degree of translucency constancy in variation of lighting direction by including in our analysis the variations in translucent appearance that are induced by the shape of the scattering phase function.
Akkaynak, D., Treibitz, T., Xiao, B., Gurkan, U.A., Allen, J.J., Demirci, U., and Hanlon, R. (2014). Use of commercial off-the-shelf (COTS) digital cameras for scientific data acquisition and scene-specific color calibration. Journal of the Optical Society of America A (JOSA A), Vol. 31, Issue 2, pp. 312-321. PDF. Online. MATLAB code.
Bouman, K.L., Xiao, B., Battaglia, P., and Freeman, W.T. (2013). Estimating the Material Properties of Fabrics From Video. International Conference on Computer Vision (ICCV), 2013. PDF. Video. Datasets.
We develop a new computer vision algorithm, based on motion statistics extracted from videos, that can accurately estimate the mechanical properties of real cloth. We find that the model's predictions are highly correlated with human judgments.
Gkioulekas, I., Xiao, B., Zhao, S., Adelson, E.H., Zickler, T., and Bala, K. (2013). Understanding the Role of Phase Function in Translucent Appearance. ACM Transactions on Graphics (TOG), Volume 32, Issue 5. PDF. Supplemental Materials. Media coverage. This work was presented at SIGGRAPH 2013.
We generalize scattering phase function models, demonstrate an expanded translucent appearance space, and discover perceptually-meaningful translucency controls by analyzing thousands of images with computation and psychophysics.
Xiao, B., Hurst, B., MacIntyre, L., and Brainard, D.H. (2012). The Color Constancy of Three-Dimensional Objects. Journal of Vision, 12(4):6. PDF. Supplemental Materials.
We measure human color constancy of 3D objects in computer rendered complex 3D scenes. More specifically, we find there is an interaction between the test object's three-dimensional shape and spectral changes in the contextual scene.
Xiao, B. and Wade, A.R. (2010). Measurements of Long-range suppression in human opponent S-cone and achromatic luminance channels. Journal of Vision 10(13):10. PDF
We use a combination of neuroimaging data from source-imaged EEG and two different psychophysical measures of surround suppression to study contrast normalization in stimuli containing achromatic luminance and S-cone-isolating contrast.
Xiao, B. and Brainard, D.H.(2008). Surface gloss and color perception of 3D objects. Visual Neuroscience, 25:371-385. PDF Supplemental Materials.
We conduct two experiments examining how the color appearance of 3D objects is affected by changes in object material properties.
Xiao, B. and Brainard, D.H. (2006). Color Perception of 3D objects: constancy with respect to variation of surface gloss. Proceedings of ACM Symposium on Applied Perception in Graphics and Visualization (APGV06), 63-68. PDF
Brainard, D.H., Longere, P., Delahunt, P.B., Freeman, W.T., Kraft, J.M., and Xiao, B. (2006). Bayesian model of human color constancy. Journal of Vision, 6, 1277-1281. PDF
We develop a model of human color constancy which includes an explicit link between psychophysical data and illuminant estimates obtained via a Bayesian algorithm.
Xiao, B. (2020). Color Constancy. The 2nd Edition of Encyclopedia of Color Science and Technology (Springer). Editor, Renzo Shamey. PDF. Publisher Website.
Xiao, B. (2016). Color Constancy, Encyclopedia of Color Science and Technology (Springer). PDF. Publisher Website.
Xiao, B. (2009). Color Perception of 3D objects in Complex Scenes. Neuroscience Graduate Program, University of Pennsylvania, Philadelphia. Abstract PDF
Xiao, B. (2015). The Science Behind #the Dress. American University News Column. Full Article.
Bi, W., Kumar, G., Nienborg, H., and Xiao, B. (2019). Understanding Information Processing Mechanisms for Estimating Material Properties of Cloth in Deep Neural Networks. VSS 2019, St. Pete Beach, FL.
Bi, W, Nienborg, H, and Xiao, B. (2018). How does motion affect material perception of deformable objects? Computational Cognitive Neuroscience, CCN 2018. Philadelphia. PDF. Poster.
Xiao, B., Zhao, S., Gkioulekas, I., Bi, WY, and Bala, K. (2018). Does geometric sharpness affect translucent material perception? VSS 2018, St. Pete Beach. Poster.
Xiao, B. (2017). Seeing materials from movements: motion cues in perception of cloth in dynamic scenes. Symposium on "Beyond translation: Image deformation and dynamics in material and shape perception". ECVP 2017. Berlin. Talk.
Wijntjes, M., and Xiao, B. (2016). Visual communication of haptic material properties. VSS 2016, St. Pete Beach. Poster.
Bermudez, L., and Xiao, B. (2016). Estimating material properties of cloth from dynamic silhouettes. VSS 2016, St. Pete Beach, Florida. Poster.
Xiao, B., and Kistler, W. (2015). Perceptual dimensions of material properties of fabrics in dynamic scenes. VSS 2015, St. Pete Beach, Florida. Talk.
Xiao, B., and Kistler, W. (2014). Perceptual dimensions of material properties of moving fabrics. European Conference of Visual Perception (ECVP), Belgrade, Serbia. Poster.
Xiao, B., Walter, B., Gkioulekas, I., Adelson, E., Zickler, T., and Bala, K. (2014). Looking against the light: how perception of translucency depends on lighting direction and phase function. Vision Sciences Society Annual Meeting, St. Pete Beach, FL. Abstract. Talk Slides.
Xiao, B., Adelson, E. (2013). Multi-sensory understanding of material properties. Prism2, The Science of Light and Shade. Bordeaux, France.
Xiao, B., Jia, X.D., and Adelson, E. (2013). Can you see what you feel? Visual and tactile perception of fabrics. Vision Sciences Society Annual Meeting, Naples, FL. Poster.
Xiao, B., Gkioulekas, I., Dunn, A, Zhao, S. Adelson,E., Zickler, T. and Bala, K. (2012). Effects of shape and color on the perception of translucency. Vision Science Society Annual Meeting, Naples, FL. Talk Slides.
Xiao, B., Sharan, L., Rosenholtz, R., and Adelson, E. (2011). Speed of material vs. object recognition depends upon viewing conditions. OSA Fall Vision Meeting, Seattle, WA. Abstract. Slides.
Xiao, B., Wade, A.R. (2010). Interactions between S-cone and luminance signals in surround suppression. Vision Science Society Annual Meeting, Naples, FL. Abstract.
Xiao, B., Wade, A.R. (2009). Surround suppression between S-cone and luminance channels measured with psychophysics and source-localized EEG. OSA Fall Vision Meeting, Seattle, WA. Abstract.
Xiao, B., Brainard, D.H. (2009). Surface material properties and color constancy of 3D objects. Vision Science Society Annual Meeting, Naples, FL. Abstract.
All of my code is hosted on my GitHub account.
RenderToolbox (Version 4) is a MATLAB toolbox that drives the modeling software Blender and the rendering software Mitsuba and PBRT to create stimuli for vision research. I contributed to the first generation of this toolbox.
Trained mostly in classical music, I play the piano and the harpsichord. I had great fun playing a piano piece as part of the soundtrack of my colleague Krzysztof Pietroszek's melancholic short movie "Vera" and its volumetric version. Here is a complete recording of Tchaikovsky's The Seasons – June (Barcarolle).
I am always looking for opportunities to play chamber music with other musicians, especially other pianists (four hands), vocalists and string musicians.
I listen to a wide variety of music, but I am especially interested in new music by living composers, baroque, jazz, and world music.
Live concerts that I follow:
DC has vibrant concert series, and I especially like chamber music in small venues. My favorite contemporary classical/world music venues in town are:
Lisner Auditorium at GWU (world music, contemporary ensembles)
Phillips Collection (high-quality chamber music, mostly classical, not free)
Washington Performing Arts Society (mostly classical music, not free)
Freer/Sackler Gallery Concerts (world music, free)
Library of Congress concerts (high quality old and contemporary music, free)
9:30 Club (popular music, contemporary music)
Sixth & I concert series (American contemporary music and jazz, not free)
I like indie and classic cinema; my favorite genre is film noir. I haven't been able to keep up this hobby for a while, but I am always happy to join a movie night for an old black-and-white film.
I am always up for a board game break in the lab and at home. Some favorites: