Research

Transfer Learning

Transfer learning leverages existing models or labeled data to solve a similar but different problem. My research focuses on one transfer learning paradigm: domain adaptation. Because of differences between domains, termed data bias or domain shift, machine learning models often do not generalize well from an existing domain to a novel unlabeled domain. Domain adaptation (DA) leverages knowledge from an abundant labeled source domain to learn an effective predictor for a target domain with few or no labels, while mitigating the domain shift problem. In my research, DA has been applied to many tasks, such as image recognition, segmentation, and regression.

Data shift/bias arises when transferring knowledge from an existing domain to another (e.g., learning from a product domain and recognizing objects in a clipart domain; training segmentation on the Cityscapes dataset and applying it to a real-world dataset; identifying plant species from one image set in another).
Domain adaptation aims to reduce the data shift and improve a model's performance in the new domain.
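To make the idea of reducing data shift concrete, here is a minimal sketch of one classic (non-adversarial) baseline, correlation alignment (CORAL), which re-colors source features so their second-order statistics match the target domain. This is an illustrative toy example on synthetic data, not the adversarial methods developed in my work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy domain shift: target features are a rescaled, shifted version
# of the source features.
Xs = rng.normal(0.0, 1.0, size=(200, 3))               # labeled source domain
Xt = 2.0 * rng.normal(0.0, 1.0, size=(200, 3)) + 5.0   # unlabeled target domain

def coral(Xs, Xt, eps=1e-6):
    """Correlation alignment: whiten source features, then re-color
    them with the target covariance and shift to the target mean."""
    Xs_c = Xs - Xs.mean(0)
    Xt_c = Xt - Xt.mean(0)
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt_c, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C):
        # Symmetric matrix square root via eigendecomposition (C is SPD).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

    A = np.linalg.inv(sqrtm(Cs)) @ sqrtm(Ct)
    return Xs_c @ A + Xt.mean(0)

Xs_aligned = coral(Xs, Xt)
# After alignment, source statistics match the target domain.
print(np.allclose(Xs_aligned.mean(0), Xt.mean(0), atol=1e-8))  # → True
```

A classifier trained on `Xs_aligned` (with the source labels) then sees features distributed like the target domain, which is the essence of feature-level domain adaptation.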




Youshan Zhang and Brian D. Davison. Adversarial Regression Learning for Bone Age Estimation. In the 27th International Conference on Information Processing in Medical Imaging (IPMI 2021). [paper][bibtex]








Youshan Zhang and Brian D. Davison. Adversarial Continuous Learning on Unsupervised Domain Adaptation. In the 25th International Conference on Pattern Recognition Workshops (ICPRW 2021). [more details][paper][bibtex]









Youshan Zhang and Brian D. Davison. Adversarial Reinforcement Learning for Unsupervised Domain Adaptation. In the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 635-644. [more details][paper][bibtex][talk]






Youshan Zhang and Brian D. Davison. Adversarial Consistent Learning on Partial Domain Adaptation of PlantCLEF 2020 Challenge. In Conference and Labs of the Evaluation Forum (CLEF) Working Notes, 2020. [more details][paper][bibtex]


Frequently used pre-trained models for transfer learning

t-SNE view of extracted features [more details]
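The t-SNE views above project high-dimensional features from pre-trained models into two dimensions. The sketch below shows the embedding step with scikit-learn; the random 64-D vectors are stand-ins for real CNN features (a hypothetical example, not the actual pipeline).

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-ins for features extracted by a pre-trained CNN backbone:
# two domains whose 64-D features form two loose clusters.
source_feats = rng.normal(0.0, 1.0, size=(50, 64))
target_feats = rng.normal(2.0, 1.0, size=(50, 64))
feats = np.vstack([source_feats, target_feats])

# Embed into 2-D for plotting; perplexity must be smaller than the
# number of samples.
emb = TSNE(n_components=2, perplexity=10, init="random",
           random_state=0).fit_transform(feats)

print(emb.shape)  # one 2-D point per feature vector: (100, 2)
```

Coloring the resulting 2-D points by domain (or class) reveals how well-separated, or how misaligned, the extracted features are.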


Generate more intermediate subspaces with geodesic sampling on Riemannian manifolds (GSM)

Domain adaptation with subspace learning [more details]

Shape deformation comparison of our proposed subspace sampling demons (SSD), GSM, and the previous geodesic sampling method (SGF)

Manifold Learning and Shape Analysis

Manifold Learning (ML) is an application of differential geometry to machine learning. High-dimensional data points are often assumed to lie on or near a low-dimensional nonlinear manifold. ML techniques aim to identify this underlying low-dimensional structure from a set of high-dimensional observations. ML has been widely used in data dimensionality reduction, shape analysis, face recognition, activity recognition, object detection and classification, and generative adversarial networks.
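A minimal, self-contained illustration of the manifold idea is the Isomap-style approximation of geodesic (on-manifold) distances: connect each point to its nearest neighbors and run shortest paths over the graph. This toy example on a synthetic spiral is a sketch of the general principle, not of the GSM/SSD methods above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points from a 1-D manifold (a spiral) embedded in 2-D.
t = np.sort(rng.uniform(0.0, 3 * np.pi, 120))
X = np.c_[t * np.cos(t), t * np.sin(t)]
n = len(X)

# Pairwise Euclidean (ambient) distances.
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

# Build a symmetric k-nearest-neighbor graph with Euclidean edge weights.
k = 6
G = np.full_like(D, np.inf)
for i in range(n):
    nbrs = np.argsort(D[i])[1:k + 1]
    G[i, nbrs] = D[i, nbrs]
    G[nbrs, i] = D[i, nbrs]
np.fill_diagonal(G, 0.0)

# Floyd-Warshall shortest paths approximate geodesic distances.
for m in range(n):
    G = np.minimum(G, G[:, [m]] + G[[m], :])

# The geodesic distance between the spiral's endpoints follows the
# curve, so it exceeds the straight-line ambient distance.
print(D[0, -1], G[0, -1])
```

This gap between ambient and geodesic distance is exactly why linear methods such as PCA mis-measure similarity on curved data, and why manifold-aware methods recover the intrinsic low-dimensional structure.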


Bayesian Geodesic Regression on Riemannian Manifolds (BGRM)

Mixture Probabilistic Principal Geodesic Analysis (MPPGA)

Manifold learning for data dimensionality reduction and shape analysis [more details]

Given an input age, our ShapeNet can predict the corresponding corpus callosum shape. [more details]

Other Applications

Electricity consumption prediction

Regressive Convolution Neural network and Support Vector Regression (RCNN-SVR) for electricity consumption prediction [more details]

NASNet-Large segmentation network

Automatic Head Over Coat Thickness Measure (Seagate Technology internship, Summer 2019)

Optical Coherence Microscopy (OCM) images


Fly heart segmentation (Summer 2018)


3D stitching of multiple cell OCM images and 3D rendering (Summer 2018)