Jiashi FENG

Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Sutardja Dai Hall, Berkeley, CA 94720

Email: jshfeng [AT] berkeley.edu

Bio [CV]

I am a postdoc researcher in the EECS department and ICSI at the University of California, Berkeley, working with Trevor Darrell. I obtained my Ph.D. degree from the National University of Singapore (NUS) in 2014, where I was fortunate to be advised by Shuicheng Yan and Huan Xu. Before that, I received my Bachelor's degree in Automation from the University of Science and Technology of China (USTC) in 2007, with the highest honor. I then pursued my Master's degree in brain science at the Institute of Automation, CAS from 2007 to 2009, advised by Tianzi Jiang. I was also fortunate to work with Shie Mannor as a visiting student at the wonderful Technion (Haifa, Israel) in 2014.

My current research interests span big contaminated data analysis, online robust learning, distributed robust learning, non-parametric learning, and their applications in object recognition (including object classification, detection and segmentation). I am also interested in the theoretical understanding of object recognition problems (sample complexity, learnability, the long-tail distribution of object categories, and so on), and in building object recognition systems from only a few (noisy) labeled samples. If you share these interests and would like to have a discussion, please drop me an email.

P.S. Feng is my surname, which is pronounced as [fəŋ].

Research [publication list by year] [publication list by topic] [research statement]

  • Big Contaminated Data Analysis
    The explosive growth of data in the era of big data presents great challenges to traditional machine learning techniques, since most of them are difficult to apply to large-scale, high-dimensional and dynamically changing data. Moreover, most current learning methods are fragile to noise explosion in the high-dimensional regime, to data contamination and to outliers, all of which are ubiquitous in realistic data. We propose distributed and online learning frameworks that robustly recover the structure of data to address these key challenges. These methods offer high efficiency, strong robustness, good scalability and theoretically guaranteed performance on big data, even in the presence of noise, contamination and adversarial outliers.

    • J. Feng, H. Xu and S. Mannor. Distributed Robust Learning. ArXiv 2014.
    • J. Feng, H. Xu, S. Mannor and S. Yan. Robust Logistic Regression and Classification. NIPS 2014.
    • J. Feng, H. Xu and S. Yan. Online Robust PCA via Stochastic Optimization. NIPS 2013.
    • J. Feng, H. Xu, S. Mannor and S. Yan. Online PCA for Contaminated Data. NIPS 2013.
    • J. Feng, H. Xu, and S. Yan. Robust PCA in High-dimension: A Deterministic Approach. ICML 2012.

    • Talk: Structure Learning from Big Contaminated Data. Google Research, Mountain View, CA, 2014.
    • Talk: Stochastic Robust PCA. Margic.SG, Singapore, 2014.
    • Talk: Deterministic High-dimensional Robust PCA. ICML, Edinburgh, 2012.
    • MATLAB Code: Online Robust Learning, Distributed Robust Learning, Stochastic Optimization for RPCA.
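    The online robust PCA work above processes samples one at a time instead of in batch. The following Python sketch illustrates the general idea only; the regularization weights, the inner alternating loop, and all variable names are my own illustrative choices, not the exact algorithm from the papers. Each incoming sample is split into a low-rank part and a sparse corruption via soft-thresholding, and the basis is refreshed from accumulated sufficient statistics.

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        # Elementwise shrinkage operator used for the l1 (sparse-corruption) term.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def online_robust_pca(X, rank=5, lam1=0.1, lam2=0.1, inner_iters=10):
        """Process samples (columns of X) one at a time, maintaining a low-rank
        basis L while absorbing sparse corruptions into e (illustrative sketch)."""
        d, n = X.shape
        rng = np.random.default_rng(0)
        L = rng.standard_normal((d, rank))
        A = np.zeros((rank, rank))   # accumulated r r^T statistics
        B = np.zeros((d, rank))      # accumulated (x - e) r^T statistics
        for t in range(n):
            x = X[:, t]
            e = np.zeros(d)
            for _ in range(inner_iters):
                # Ridge solve for the low-dimensional coefficients r.
                r = np.linalg.solve(L.T @ L + lam1 * np.eye(rank), L.T @ (x - e))
                # Shrink the residual to isolate sparse outliers.
                e = soft_threshold(x - L @ r, lam2)
            A += np.outer(r, r)
            B += np.outer(x - e, r)
            # Block-coordinate update of the basis, one pass over its columns.
            Atilde = A + lam1 * np.eye(rank)
            for j in range(rank):
                L[:, j] += (B[:, j] - L @ Atilde[:, j]) / Atilde[j, j]
        return L
    ```

    Because each step touches only one sample and fixed-size statistics, memory stays O(d * rank) regardless of how many samples stream in.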

  • Non-parametric Model Learning for Object Recognition
    Traditional learning-based object recognition requires a pre-defined model structure based on domain knowledge, and the model stays fixed when applied to different datasets/domains. However, a pre-defined model may fail to adapt to the properties of the data at hand, may not be discriminative, and/or may not generalize well. In this work, we propose a non-parametric framework that flexibly adapts to the complexity of the given data set and reliably discovers the inherent model structure of the data. We demonstrate that our framework is applicable to both object recognition and complex image retrieval tasks, even with few training examples.

    • J. Feng, S. Jegelka, S. Yan and T. Darrell. Learning Discriminative Scalable Dictionaries from Sample Relatedness. CVPR 2014.

    • Talk: Infinite Attributes Learning for Object Recognition. CVPR, Columbus, OH, 2014. ICSI, Berkeley, 2014.
    • Talk: Non-parametric Bayesian for Object Recognition. MURI workshop, MIT, Cambridge, MA, 2014.
    • MATLAB code: Asymptotic Indian Buffet Process for Attribute Learning
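    The Indian Buffet Process underlying the attribute-learning code above is what makes the model non-parametric: the number of latent attributes is not fixed in advance but grows with the data. A minimal Python sketch of the standard IBP generative process (parameter names and the concentration value are illustrative, not taken from the papers):

    ```python
    import numpy as np

    def sample_ibp(n_objects, alpha=2.0, seed=0):
        """Draw a binary object-by-attribute matrix Z from the Indian Buffet
        Process. Columns (attributes) are created on demand as objects arrive."""
        rng = np.random.default_rng(seed)
        Z = np.zeros((n_objects, 0), dtype=int)
        counts = np.zeros(0)  # how many objects already share each attribute
        for i in range(n_objects):
            # Existing attributes: object i takes attribute k w.p. counts[k] / (i + 1).
            take = rng.random(counts.size) < counts / (i + 1)
            Z[i, take] = 1
            counts[take] += 1
            # New attributes: Poisson(alpha / (i + 1)) fresh columns for object i.
            k_new = rng.poisson(alpha / (i + 1))
            if k_new > 0:
                new_cols = np.zeros((n_objects, k_new), dtype=int)
                new_cols[i, :] = 1
                Z = np.hstack([Z, new_cols])
                counts = np.hstack([counts, np.ones(k_new)])
        return Z
    ```

    The "rich get richer" term counts[k] / (i + 1) makes popular attributes shared across many objects, while the Poisson term keeps introducing new attributes at a rate that decays as more data arrive.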

  • Object Recognition with Structure Prior
    Realistic image and video data usually have a huge number of dimensions and large sample variance. Structure priors are an efficient and effective way to reduce the sample complexity of an object recognition system and enhance its performance. Such priors include the block structure in object segmentation and community detection, distribution priors, sub-group structure within the global feature, and informative continuous spatial structure. Through diverse applications, we demonstrate that integrating these simple yet useful structure priors into existing learning-based object recognition methods boosts performance and significantly reduces sample complexity.

    • J. Feng, X. Yuan, Z. Wang, H. Xu and S. Yan. Auto-grouped Sparse Representation for Visual Analysis. IEEE Trans. on Image Processing (TIP), 2014.
    • J. Feng, Z. Lin, H. Xu, and S. Yan. Robust Subspace Segmentation with Laplacian Constraint. CVPR 2014.
    • J. Feng, X. Yuan, Z. Wang, H. Xu and S. Yan. Auto-grouped Sparse Representation for Visual Analysis. ECCV 2012.
    • J. Feng, B. Ni, D. Xu and S. Yan. Histogram Contextualization. IEEE Trans. on Image Processing (TIP), 21(2): 778-788, 2012.
    • J. Feng, B. Ni, Q. Tian and S. Yan. Geometric Lp-norm Feature Pooling for Image Classification. CVPR 2011.

    • Talk: Robust Subspace Segmentation with Block-diagonal Prior. CVPR, Columbus, Ohio, 2014.
    • MATLAB code: Matrix Decomposition with Block-diagonal Prior.
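    One simple spatial structure prior from the list above is the pooling norm itself: the Lp-norm pooling in the CVPR 2011 paper exploits the fact that p controls a trade-off between average and max pooling. A minimal sketch, assuming a mean-normalized form (the spatial weighting and learned geometry from the actual paper are omitted):

    ```python
    import numpy as np

    def lp_pool(responses, p=3.0):
        """Lp-norm pooling over one spatial cell: recovers average pooling at
        p = 1 and approaches max pooling as p grows (illustrative sketch)."""
        r = np.abs(np.asarray(responses, dtype=float))
        return np.mean(r ** p) ** (1.0 / p)
    ```

    For example, pooling the cell [0, 0, 3] gives 1.0 at p = 1 (the average) and a value close to 3 at large p (the max), so a single parameter interpolates between the two classic pooling operators.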