Enforcing Population and Lifetime Sparsity

On this page you can find additional information (including the code) on the EPLS algorithm introduced in "Meta-parameter free unsupervised sparse feature learning".

Description:

In this paper, we propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm that exploits a new way of optimizing for sparsity. The algorithm sets a one-hot output target for each sample, while enforcing a strong form of lifetime sparsity to avoid dead outputs, and optimizes towards that target to learn the dictionary bases. Experiments on STL-10, CIFAR-10 and UCMerced show that the method achieves state-of-the-art performance and provides discriminative features that generalize well.
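The core idea above (one-hot targets per sample, with outputs forced to fire equally often over their lifetime) can be sketched as follows. This is a minimal illustrative sketch in Python, not the released MATLAB implementation; the function name, the inhibition term and the update rule are simplifications for exposition.

```python
import numpy as np

def epls_targets(H):
    """Illustrative sketch of EPLS-style target assignment.

    H: (n_samples, n_outputs) matrix of network outputs for one batch.
    Returns a one-hot target matrix of the same shape. Each sample gets
    a single active target (population sparsity), and outputs that have
    already been chosen often are inhibited before the argmax, pushing
    all outputs to be used and avoiding dead units (lifetime sparsity).
    """
    n, d = H.shape
    T = np.zeros_like(H)
    counts = np.zeros(d)  # how often each output has been selected so far
    for i in range(n):
        # penalize frequently selected outputs before taking the argmax
        k = np.argmax(H[i] - counts * H[i].max())
        T[i, k] = 1.0
        counts[k] += 1.0 / n  # inhibition grows with usage
    return T
```

The dictionary bases would then be updated by regressing the outputs towards these targets; the exact inhibition schedule and optimization step are defined in the paper.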

Demo:

The following image shows the bases learnt using the EPLS algorithm on randomly extracted patches of size 10x10x3 from unlabeled images of STL-10, for a single-layer network of 100 outputs. As shown in the image, the method learns common bases such as oriented edges/ridges in many directions and colors, as well as corner detectors, tri-banded colored filters and center-surrounds, among others.
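The random 10x10x3 patch extraction mentioned above can be sketched as follows. This is a hypothetical Python helper for illustration only (the release does this in MATLAB); the function name and array layout are assumptions.

```python
import numpy as np

def extract_random_patches(images, n_patches, size=10, seed=0):
    """Randomly crop square patches, mirroring the 10x10x3 patches
    used in the demo (illustrative helper, not the released code).

    images: (n_images, height, width, channels) array.
    Returns an (n_patches, size*size*channels) matrix, one flattened
    patch per row, ready for dictionary learning.
    """
    rng = np.random.default_rng(seed)
    n, h, w, c = images.shape
    patches = np.empty((n_patches, size * size * c))
    for p in range(n_patches):
        i = rng.integers(n)             # pick a source image
        y = rng.integers(h - size + 1)  # top-left corner, row
        x = rng.integers(w - size + 1)  # top-left corner, column
        patches[p] = images[i, y:y + size, x:x + size, :].ravel()
    return patches
```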

The following video shows how the dictionary bases are learnt; each frame shows the bases after one more training epoch.

Code:

You can download the code to run the EPLS algorithm. The code has been tested on 64-bit Windows, Linux and macOS machines. The zip file contains the complete set of .m and .mex files needed to run and test the EPLS algorithm.

To run the code, download the STL-10 dataset MATLAB files and copy the unlabeled.mat, train.mat and test.mat files to the EPLS/datasets/stl-10 folder. Then change the MATLAB working directory to EPLS/code and run demo.m.
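The setup steps above amount to the following folder layout (a sketch using the paths from the text; the copy command assumes you have already downloaded the .mat files):

```shell
# Recreate the expected folder layout from the text.
mkdir -p EPLS/datasets/stl-10 EPLS/code
# Copy the downloaded STL-10 MATLAB files into place
# (uncomment once the files are downloaded):
# cp unlabeled.mat train.mat test.mat EPLS/datasets/stl-10/
# Then, inside MATLAB: cd EPLS/code; demo
```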

By running demo.m, you should be able to reproduce the results of Table 1 of the paper on the STL-10 dataset: training EPLS (with randomly extracted patches of 10x10 pixel receptive field) and applying its natural encoding (logistic activation) when extracting features to train and test the L2 linear SVM. Note that results may differ slightly from those reported in the paper due to the random extraction of training patches.
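The natural encoding mentioned above is a logistic (sigmoid) activation on the linear response to the learnt bases. A minimal sketch, assuming a learnt dictionary `D` and bias `b` from EPLS training (the names here are illustrative, not the variables used in the released code):

```python
import numpy as np

def logistic_encode(X, D, b):
    """Sketch of EPLS's natural encoding: sigmoid of the linear
    response of the data to the learnt dictionary bases.

    X: (n_samples, patch_dim) data matrix.
    D: (patch_dim, n_outputs) learnt dictionary.
    b: bias (scalar or length-n_outputs vector).
    Returns features in (0, 1), one row per sample.
    """
    return 1.0 / (1.0 + np.exp(-(X @ D + b)))
```

These features are then fed to the L2 linear SVM for training and testing.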

By changing the values of the variables in script_params.m, you can train any single-layer network of your choice using EPLS.

Do not hesitate to contact me if you have any further questions or comments.

Matlab Code

BibTex:

@article{Romero14-tpami,
  author  = {Adriana Romero and Petia Radeva and Carlo Gatta},
  title   = {Meta-Parameter Free Unsupervised Sparse Feature Learning},
  journal = {{IEEE} Transactions on Pattern Analysis and Machine Intelligence},
  volume  = {37},
  number  = {8},
  pages   = {1716--1722},
  year    = {2015},
}

More on EPLS:

The properties of the EPLS algorithm have been exploited to train deep architectures in a greedy layer-wise fashion with applications to image parsing and remote sensing image/pixel classification.

  • A. Romero, C. Gatta and G. Camps-Valls. “Unsupervised Deep Feature Extraction for Remote Sensing Image Classification”. Accepted to IEEE Transactions on Geoscience and Remote Sensing, 2015.
  • A. Romero, C. Gatta and G. Camps-Valls. “Unsupervised Deep Feature Extraction of Hyperspectral Images”. IEEE Workshop on Hyperspectral Image and Signal Processing, WHISPERS, 2014.
  • C. Gatta, A. Romero and J. van de Weijer. “Unrolling Loopy Top-down Semantic Feedback in Convolutional Deep Networks”. Deep Vision workshop, CVPR, 2014.