References

[1] H. Zou and T. Hastie. Regularization and Variable Selection via the Elastic Net. JRSSB, 67(2): 301-320, 2005.

[2] G. Cecchi, I. Rish, R. Rao, and R. Garg. Prediction of Brain Activity Based on the Elastic Net Algorithm. In PBAIC Workshop at the Human Brain Mapping 2007 conference. An extended version is under submission.

[3] G. Chandalia and I. Rish. Blind Source Separation Approach to Performance Diagnosis and Dependency Discovery. To appear in the Internet Measurement Conference (IMC-07).

[4] A. Beygelzimer, J. Kephart and I. Rish. Evaluation of Optimization Methods for Network Bottleneck Diagnosis, in ICAC 2007.

[5] R. Tibshirani. Regression Shrinkage and Selection via the Lasso. J. Royal Statist. Soc. B, 58(1): 267-288, 1996.

[6] P. Zhao and B. Yu. Boosted Lasso. Technical Report, University of California, Berkeley, 2004.

[7] M. Park and T. Hastie. An L1 Regularization-path Algorithm for Generalized Linear Models. Stanford Technical Report; to appear in JRSSB.

[8] J. Zhu, S. Rosset, T. Hastie and R. Tibshirani. 1-Norm Support Vector Machines, NIPS 2003.

[9] A. Chan, N. Vasconcelos, and G. Lanckriet. Direct Convex Relaxations of Sparse SVM. ICML-07.

[10] Y. Li, A. Cichocki, S. Amari, S. Shishkin, J. Cao, and F. Gu. Sparse representation and its applications in blind source separation. NIPS-03.

[11] H. Zou, T. Hastie, and R. Tibshirani. Sparse Principal Component Analysis. JCGS 2006 15(2): 262-286.

[12] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A Direct Formulation for Sparse PCA Using Semidefinite Programming. NIPS-04.

[13] P. O. Hoyer. Non-negative Matrix Factorization with sparseness constraints. JMLR 5:1457-1469, 2004.

[14] H. Kim and H. Park. Sparse Non-negative Matrix Factorizations via Alternating Non-negativity-constrained Least Squares for Microarray Data Analysis. Bioinformatics, 23(12): 1495-1502, 2007.

[15] M. Wainwright, P. Ravikumar, and J. Lafferty. High-Dimensional Graphical Model Selection Using l1-Regularized Logistic Regression. NIPS-06.

[16] D. Donoho. Compressed Sensing. IEEE Trans. on Information Theory, 52(4): 1289-1306, 2006.

[17] E. Candès. Compressive Sampling. Proc. International Congress of Mathematicians, 3, pp. 1433-1452, Madrid, Spain, 2006.

[18] R. Baraniuk. A Lecture on Compressive Sensing. IEEE Signal Processing Magazine, July 2007.

[19] S. Ji and L. Carin. Bayesian Compressive Sensing and Projection Optimization. ICML 2007.

[20] T. Jebara. Multi-Task Feature and Kernel Selection for SVMs. International Conference on Machine Learning (ICML), July 2004.

[21] T. Jebara and T. Jaakkola. Feature Selection and Dualities in Maximum Entropy Discrimination. In Proc. 16th Conference on Uncertainty in Artificial Intelligence (UAI 2000), July 2000.