I am on sabbatical from AU this year (2022-2023). I will return to classroom teaching in 2023.
I am a faculty member in the Department of Computer Science at American University in DC and head of the Computational Material Perception Laboratory. I am interested in building theoretical frameworks of complex natural phenomena, and for this reason I choose to study the visual system. My current research focuses on human and computer vision: how to apply principles of human cognition to improve robustness in artificial intelligence, and whether and how machine algorithms can inform and assist humans. Specifically, I study perception and reasoning about the material properties of objects in complex and dynamic scenes. I use a combination of human psychophysics, crowd-sourcing, image synthesis and analysis, machine learning, computer vision, and AR/VR techniques. Read more on my bio and project pages.
08/29/2022 Chenxi presented a poster, "Unsupervised learning of translucent material appearance using StyleGAN", at the Computational Cognitive Neuroscience Conference (CCN) in San Francisco.
06/29/2022 Bei is serving on the Program Committee of the 2022 British Machine Vision Conference (BMVC 2022). Abstract submission deadline: July 22nd. Full paper deadline: July 29th. All deadlines are 23:59 GMT.
06/07/2022 Our paper "Semantic Inpainting on Segmentation Map Via Multi-Expansion Loss" has been accepted for publication in Neurocomputing. Please find the PDF of the paper here and the code here. Congratulations to first author Jianfeng He.
5/24/2022 The lab is attending the third Capital Graphics Workshop, a local gathering of vision and graphics researchers in the DMV area, at George Washington University. Bei, Chenxi, Jesse, and Shengkai are all presenting!
5/13/2022 Chenxi and I are attending the Vision Sciences Society Annual Meeting in St. Pete Beach, Florida. Chenxi is giving a talk in the Color, Light and Material session on Sunday afternoon at 2:30pm.
2/9/2022 Our submission "A Perceptual Evaluation of the StyleGAN2-ADA Generated Images of Translucent Objects" has been accepted as an oral presentation at VSS 2022. Congratulations to first author Chenxi Liao.
2/9/2022 Our submission "Replaceability of two deep generative models trained with a pair of translucent objects with different geometries" has been accepted as a poster presentation at V-VSS. Congratulations to first author Masataka Sawayama.
12/17/2021 Our paper "Crystal or Jelly? Effect of Color on the Perception of Translucent Materials with Photographs of Real-world Objects" has been accepted for publication in the Journal of Vision. Congratulations to first author Chenxi Liao.
09/22/2021 We received a new NSF grant, "MRI: Acquisition of Volumetric Capture System for the Institute for IDEAS". This grant allows us to set up a new Volumetric Capture Studio at the Institute for IDEAS at AU and opens exciting research directions!
09/17/2021 Our paper "Reducing Noise Pixels and Metric Bias in Semantic Inpainting on Segmentation Map" has been accepted for publication in the ICCV 2021 Advances in Image Manipulation (AIM) workshop. Congratulations to first author Jianfeng He. Read the paper here.
08/02/2021 Bei Xiao is promoted to Associate Professor with Tenure, effective July 1st.
07/02/2021 Exciting news! Starting in July, we are holding a joint Machine Vision Journal Club with Prof. Leah Ding's lab. We read recent and classic papers in machine vision, human cognition, and machine learning in cybersecurity. You can find the Journal Club papers and schedule here. We plan to continue this into Fall 2022. If you are interested in joining us, please email Bei or Chenxi Liao.
The lab currently has an opening for a PhD fellowship (application deadline: Dec 1, 2022). The main topic of this PhD studentship is understanding material perception from images and videos using large-scale human psychophysics, deep learning, VR/AR, and volumetric capture methods. The ideal candidate should have a strong technical background and experience in at least one or two of the following: machine learning, applied math or statistics, computational modeling, image processing, psychophysics, or computer graphics. Solid programming skills in Python and MATLAB are a plus. Prospective graduate students should contact me directly and are required to apply to the Behavior, Cognition, & Neuroscience Graduate Program at AU. International students must pass the TOEFL exam.
CSC 476 Introduction to Computer Vision, Spring, 2021
CSC 435 Web Programming, Spring, 2021.
CSC 148 Introduction to Programming in Python, Fall, 2020
CSC 469/696 Deep learning in computer vision, Fall, 2019
I am interested in human perception, VR, multi-sensory perception, computer vision, and graphics. Specifically, I am interested in various aspects of the perception of object material properties (especially complex materials such as cloth, skin, liquid, wax, and stone): how humans perceive materials with multiple senses, how machine algorithms estimate material properties, and how to simulate realistic material appearance using graphics.
Current main projects in the lab include:
Automatic image and video generation and editing with GANs from scene graphs, text, and multimedia inputs.
Interaction and integration between tactile and visual perception of object properties in VR.
Machine inference of scene, object, and material properties of objects from images and videos.
Perception and rendering of translucent appearance.
Fields: Multi-sensory perception, computer inference of material properties and dynamic scenes, perception-driven computer graphics, computer vision.
Techniques: Human psychophysics, computer vision, computer graphics (3D modeling, rendering, animations), machine learning, haptics, VR.
Read more on project page.
Liao, C, Sawayama, M, Xiao, B. (2022) Crystal or Jelly? Effect of Color on the Perception of Translucent Materials with Photographs of Real-world Objects. Journal of Vision. PDF. Supplementary. Github.
We study the effect of color on the perception of translucent materials (e.g., food, wax) using photographs of real objects as stimuli in three tasks: binary classification, semantic attribute rating, and material category judgment. We discover that there is more disagreement among observers when color is removed in all three tasks. Converting to grayscale also affects how observers judge material categories for some images, such that observers tend to misjudge images of food as non-food (e.g., a chunk of jelly as crystal).
He, J., Xiao, B., Zhang, X., Lei, S., Wang, S., Huang, Q., Lu, C-T. (2021). Reducing Noise Pixels and Metric Bias in Semantic Inpainting on Segmentation Map. Proceedings of IEEE ICCV 2021, Advances in Image Manipulation Workshop. PDF. Video.
We improve semantic inpainting on segmentation maps (SISM) by considering the unique characteristics of segmentation maps in both the training and testing processes. We propose a novel data augmentation (DA) method that exploits a characteristic of segmentation maps, allowing us to estimate the possible value ranges of pixels in the inpainted areas in advance. To improve the quality of inpainted shapes, we propose a novel metric, Semantic Similarity (Sem), to quantify the semantic divergence between generated and ground-truth target objects.
We automatically edit segmentation label maps conditioned on semantic inputs by proposing MExGAN, which uses a novel Multi-Expansion (MEx) loss implemented by adversarial losses on a series of expanded areas to improve alignment in the boundary areas between the manipulated object and the surrounding scene. This method can also be applied to natural image inpainting.
Aston, S., Denisova, K., Hurlbert, A., Olkkonen, M., Pearce, B., Rudd, M., Werner, A., and Xiao, B. (2020). Exploring the Determinants of Color Perception Using #Thedress and Its Variants: The Role of Spatio-Chromatic Context, Chromatic Illumination, and Material–Light Interaction. Perception. PDF.
We show how spatial and temporal manipulations of light spectra affect people's perception of material colors and illustrate the variability in individual color perception.
We propose a model, MSD (mix-up, self-ensembling, and distinctiveness score), to improve the accuracy of uncertainty estimation for DNN text classification by reducing the effect of overconfident winning scores and considering the impact of different categories of uncertainty simultaneously. Our method can be flexibly applied to several DNNs.
Using simulated relief objects of translucent materials with varying shapes and optical properties under different illuminations, we find that the level of geometric sharpness significantly affects observers' perceived translucency.
We isolated dynamic deformation using dot stimuli and found that directly manipulating the pattern of dynamic deformation with our proposed method can alter perceived stiffness.
Wijntjes, M., Xiao, B., and Volcic, R. (2019). Visual communication of how fabrics feel. Journal of Vision, Feb, 2019. PDF.
We study which visual media (images versus videos) better convey haptic properties of fabrics and explore what psychophysical task is appropriate to address this issue.
Bi, WY., Newport, J., and Xiao, B. (2018). Interaction between static visual cues and force-feedback on the perception of mass of virtual objects. ACM Symposium on Applied Perception (SAP'18). PDF. Project Page.
We use a force-feedback device and a game engine to measure the effects of material appearance on the perceived mass of virtual objects. We find that static visual appearance influences perceived mass, and the effect is opposite to the classical "material-weight illusion".
Bi, WY., Jin, P., Nienborg, H., and Xiao, B. (2018). Estimating mechanical properties of cloth from videos using dense motion trajectories: human psychophysics and machine learning. Journal of Vision, 18(5), 12-12. PDF. Project Page.
We discover that long-range spatiotemporal information across video frames plays an important role in how humans estimate the bending stiffness of cloth from animations. A model based on features of dense motion trajectories can predict the human perceptual scale of cloth bending stiffness.
We study how humans achieve perceptual constancy when estimating mechanical properties of cloth varying under external forces. We discuss our results in the context of optical flow statistics.
Xiao, B., Bi, W.Y., Jia, X-D., Wei, H-H., and Adelson, E. (2016). Can you see what you feel? Color and folding properties affect visual-tactile material discrimination of fabrics. Journal of Vision. PDF. Project Page.
We use tactile perception as ground truth to measure visual material perception. Using fabrics as our stimuli, we measure how observers match what they see (photographs of fabric samples) with what they feel (physical fabric samples).
Heasly, B.S., Cottaris, N.P., Lichtman, D.P., Xiao, B., Brainard, D.H. (2014). RenderToolbox3: MATLAB tools that facilitate physically-based stimulus rendering for vision research. Journal of Vision, Vol. 14, 2. PDF. GitHub.
We describe and release RenderToolbox3, a set of MATLAB utilities, and prescribe a workflow that should be useful to researchers who want to employ computer graphics in the study of perception.
Xiao, B., Walter, B.W., Gkioulekas, I., Zickler, T., Adelson, E, and Bala, K. (2014). Looking against the light: how perception of translucency depends on lighting direction. Journal of Vision. 14(3): 17. PDF
We study the effects of lighting direction on perception of translucency. In particular, we explore the interaction of shape, illumination, and degree of translucency constancy in variation of lighting direction by including in our analysis the variations in translucent appearance that are induced by the shape of the scattering phase function.
Akkaynak, D., Treibitz, T., Xiao, B., Gurkan, U.A., Allen, J.J., Demirci, U., and Hanlon, R. (2014). Use of commercial off-the-shelf (COTS) digital cameras for scientific data acquisition and scene-specific color calibration. Journal of the Optical Society of America A (JOSA A), Vol. 31, Issue 2, pp. 312-321. PDF, Online, MATLAB code.
We develop a new computer vision algorithm based on motion statistics extracted from videos that can accurately estimate mechanical properties of real cloth. We find the model prediction is highly correlated with human judgements.
Gkioulekas, I., Xiao, B., Zhao, S., Adelson, E.H., Zickler, T., and Bala, K. (2013). Understanding the Role of Phase Function in Translucent Appearance. ACM Transactions on Graphics (TOG), Volume 32, Issue 5. PDF, Supplemental Materials, Media coverage. This work was presented at SIGGRAPH 2013.
We generalize scattering phase function models, demonstrate an expanded translucent appearance space, and discover perceptually-meaningful translucency controls by analyzing thousands of images with computation and psychophysics.
We measure human color constancy of 3D objects in computer rendered complex 3D scenes. More specifically, we find there is an interaction between the test object's three-dimensional shape and spectral changes in the contextual scene.
Xiao, B. and Wade, A.R. (2010). Measurements of long-range suppression in human opponent S-cone and achromatic luminance channels. Journal of Vision 10(13):10. PDF
We use a combination of neuroimaging data from source-imaged EEG and two different psychophysical measures of surround suppression to study contrast normalization in stimuli containing achromatic luminance and S-cone-isolating contrast.
Xiao, B. and Brainard, D.H. (2006). Color Perception of 3D objects: constancy with respect to variation of surface gloss. Proceedings of ACM Symposium on Applied Perception in Graphics and Visualization (APGV06), 63-68. PDF
Brainard, D.H., Longere, P., Delahunt, P.B., Freeman, W.T., Kraft, J.M., and Xiao, B. (2006). Bayesian model of human color constancy. Journal of Vision, 6, 1277-1281. PDF
We develop a model of human color constancy which includes an explicit link between psychophysical data and illuminant estimates obtained via a Bayesian algorithm.
Xiao, B. (2015). The Science Behind #the Dress. American University News Column. Full Article.
Peer-reviewed Conference Presentations
Bi, W., Kumar, G., Nienborg, H., and Xiao, B. (2019). Understanding Information Processing Mechanisms for Estimating Material Properties of Cloth in Deep Neural Networks. VSS 2019, St. Pete Beach, FL.
Xiao, B., Zhao, S., Gkioulekas, I., Bi, WY., and Bala, K. (2018). Does geometric sharpness affect the perception of translucent materials? VSS 2018. St. Pete Beach. Poster.
Xiao, B. (2017). Seeing materials from movements: motion cues in perception of cloth in dynamic scenes. Symposium on "Beyond translation: Image deformation and dynamics in material and shape perception". ECVP 2017. Berlin. Talk.
Wijntjes, M., and Xiao, B. (2016). Visual communications of haptic material properties. VSS 2016. St. Pete Beach. Poster.
Bermudez, L., and Xiao, B. (2016). Estimating material properties of cloth from dynamic silhouettes. VSS 2016, St. Pete Beach, Florida. Poster.
Xiao, B., and Kistler, W. (2015). Perceptual dimensions of material properties of fabrics in dynamic scenes. VSS 2015, St. Pete Beach, Florida. Talk.
Xiao, B., and Kistler, W. (2014). Perceptual dimensions of material properties of moving fabrics. European Conference on Visual Perception (ECVP), Belgrade, Serbia. Poster.
Xiao, B., Walter, B., Gkioulekas, I., Adelson, E., Zickler, T., and Bala, K. (2014). Looking against the light: how perception of translucency depends on lighting direction and phase function. Vision Sciences Society Annual Meeting, St. Pete Beach, FL. Abstract, Talk Slides.
Xiao, B., Adelson, E. (2013). Multi-sensory understanding of material properties. Prism2, The Science of Light and Shade. Bordeaux, France.
Xiao, B., Jia, X.D., and Adelson, E. (2013). Can you see what you feel? Visual and tactile perception of fabrics. Vision Sciences Society Annual Meeting, Naples, FL. Poster.
Xiao, B., Gkioulekas, I., Dunn, A., Zhao, S., Adelson, E., Zickler, T., and Bala, K. (2012). Effects of shape and color on the perception of translucency. Vision Sciences Society Annual Meeting, Naples, FL. Talk Slides.
Xiao, B., Wade, A.R. (2010). Interactions between S-cone and luminance signals in surround suppression. Vision Sciences Society Annual Meeting, Naples, FL. Abstract.
Xiao, B., Wade, A.R. (2009). Surround suppression between S-cone and luminance channels measured with psychophysics and source-localized EEG. OSA Fall Vision Meeting, Seattle, WA. Abstract.
Xiao, B., Brainard, D.H. (2009). Surface material properties and color constancy of 3D objects. Vision Sciences Society Annual Meeting, Naples, FL. Abstract.
I had great fun playing a piano piece as part of the soundtrack of my colleague Krzysztof Pietroszek's melancholic short movie "Vera" and its volumetric version. Here is a complete recording of Tchaikovsky's The Seasons - June (Barcarolle).
Trained mostly in classical music, I play the piano and the harpsichord. I am always looking for opportunities to play chamber music with other musicians, especially other pianists (four hands), vocalists and string musicians.
I listen to a wide variety of music, but I am especially interested in new music by living composers, Baroque, jazz, and world music.
DC has a vibrant concert scene. I especially like chamber music in small venues. My favorite contemporary classical/world music venues in town are:
Lisner Auditorium at GWU (world music, contemporary ensembles)
Phillips Collection (high-quality chamber music, mostly classical; not free)
Washington Performing Arts Society (mostly classical music; not free)
Freer/Sackler Gallery concerts (world music, free)
Library of Congress concerts (high-quality old and contemporary music, free)
9:30 Club (popular music, contemporary music)
Sixth & I concert series (American contemporary music and jazz; not free)
I am always up for a board game break in the lab and at home. Some favorites: