Home

26 July 2013 - Review for Saliency paper

1. Hausdorff distance between contours (see the sketch after this list)

2. COMPUTATION OF THE HAUSDORFF DISTANCE BETWEEN PLANE VECTOR POLYLINES - J.F. Hangouet

3. Computing the Hausdorff distance between sets of curves (master thesis Ludmila Scharf)

4. discrete image partitions proximity hausdorff

5. A Hausdorff Chirality Measure

6. Nearness in Digital Images and Proximity Spaces
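
For quick reference during the review: the symmetric Hausdorff distance between point sets A and B is H(A, B) = max(h(A, B), h(B, A)), where h(A, B) = max over a in A of min over b in B of ||a - b||. A minimal scipy sketch for contours sampled as point sets; the circle contours below are made-up examples:

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between two point sets (n x 2 arrays)."""
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

    # Toy contours: a unit circle and a scaled copy of it (made-up example).
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    print(hausdorff(circle, 1.1 * circle))  # roughly 0.1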

22 July 2013 Tabber

0. June 1 tabber

- http://en.wikipedia.org/wiki/Effect_size

- non metric hierarchical clustering

- Nonmetric Multidimensional Scaling and Hierarchical Clustering: Procedures for the Investigation of the Perception of Sports

- Nonmetric Multidimensional Scaling and Hierarchical Clustering

- Multidimensional scaling and clustering analysis

- (hierarchical) cluster analysis with non-standard distance

- ClusTree: The Hierarchical Clustering Analyzer (Version 1.0)

- Multi-scale clustering by building a robust and self correcting ultrametric topology on data points

- transforming a distance matrix into an ultrametric matrix (see the sketch after this list)

- Parisi matrix

- Large Random Matrices: Lectures on Macroscopic Asymptotics

- Random Hierarchical Matrices: Spectral Properties and Relation to Polymers on Disordered Trees
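
Related to the ultrametric items above: single-linkage clustering of a distance matrix yields its subdominant ultrametric via the cophenetic distances. A minimal scipy sketch, with placeholder random data:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, cophenet

    rng = np.random.default_rng(0)
    x = rng.standard_normal((10, 3))   # placeholder data points
    d = pdist(x)                       # condensed pairwise distances
    z = linkage(d, method="single")    # single-linkage dendrogram
    u = squareform(cophenet(z))        # cophenetic distances form an ultrametric matrix
    # u satisfies u[i, k] <= max(u[i, j], u[j, k]) for all i, j, k (the ultrametric inequality)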

00. May Uppsala Tabber

- http://en.wikipedia.org/wiki/Modifiable_areal_unit_problem, http://web.gps.caltech.edu/~drf/misc/airs/maup_summary.pdf

- Iterative Redistribution of Single Observations

- Graph-Based Hierarchical Conceptual Clustering

- clustering hierarchy lattice

- http://orange.biolab.si/docs/latest/widgets/rst/unsupervized/hierarchicalclustering/

- http://en.wikipedia.org/wiki/Birkhoff_interpolation

- http://en.wikipedia.org/wiki/Secret_sharing_using_the_Chinese_remainder_theorem

- A multi-attribute hierarchical threshold scheme

- multi attribute hierarchical clustering

- A general theory of classificatory sorting strategies - http://biocomparison.ucoz.ru/_ld/0/50_Lance_Willams_1.pdf

- SliceSort: Efficient Sorting of Hierarchical Data - http://www.comp.nus.edu.sg/~chancy/cikm12-slicesort.pdf

1. Real analysis lectures:

- http://freevideolectures.com/Course/3153/Real-Analysis#

- http://math.stackexchange.com/questions/312492/video-lectures-on-real-analysis

- http://mathoverflow.net/questions/54430/video-lectures-of-mathematics-courses-available-online-for-free


2. Hierarchical Interactive Image Segmentation using Irregular Pyramids

GBPR paper: http://www.prip.tuwien.ac.at/people/krw/more/papers/2011/GbR/Gerstmayer2011a.pdf

Masters thesis:

http://www.prip.tuwien.ac.at/staffpages/yll/bib/papers/thesis/Georg_Zankl_Msc2012.pdf

Group at TU Wien

3. Convex optimization and ordinal optimization

- Graphical models, message-passing algorithms, and convex optimization

- http://en.wikipedia.org/wiki/Ordinal_optimization

- convex optimization lattice theory

- The Construction of Huffman Codes is a Submodular (`Convex') Optimization Problem over a Lattice of Binary Trees (1996)

New update on tree-based regularization and sparsity-inducing priors (15 May 2013)

1. http://www.di.ens.fr/~jenatton/

Proximal Methods for Sparse Hierarchical Dictionary Learning http://www.di.ens.fr/~fbach/icml2010a.pdf

Proximal Methods for Hierarchical Sparse Coding http://jmlr.csail.mit.edu/papers/volume12/jenatton11a/jenatton11a.pdf (see the prox sketch after this list)

Multi-scale Mining of fMRI Data with Hierarchical Structured Sparsity http://hal.inria.fr/docs/00/64/94/05/PDF/sparse_hierarchical_fmri_mining_hal.pdf

Structured Sparsity-Inducing Norms: Statistical and Algorithmic Properties with Applications to Neuroimaging. PhD thesis, École Normale Supérieure de Cachan, 2011.

2. Tree Based Ensemble Models Regularization by Convex Optimization http://videolectures.net/nipsworkshops09_cornelusse_tbe/

3. Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity http://www.cs.cmu.edu/~sssykim/papers/2010_ICML_tlasso.pdf

Moreau-Yosida Regularization for Grouped Tree Structure Learning - http://books.nips.cc/papers/files/nips23/NIPS2010_0600.pdf

4. http://en.wikipedia.org/wiki/Gradient_boosting

5. Feature Selection via Regularized Trees http://arxiv.org/pdf/1201.1587.pdf

6. http://en.wikipedia.org/wiki/Augmented_Lagrangian_method

7. Learning with Structured Sparsity - Code paper and slides

8. Structured sparsity through convex optimization

9. Hierarchical Classification of Images by Sparse Approximation
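
Referenced from item 1 above: for tree-structured groups with ℓ2 penalties, Jenatton et al. (the JMLR paper linked above) show that the proximal operator of the sum of group norms can be computed with a single pass of group soft-thresholding from the leaves to the root. A minimal sketch under that assumption; the group structure, weights, and input vector below are made up for illustration:

    import numpy as np

    def group_soft_threshold(v, idx, thresh):
        """Block soft-thresholding of v on the coordinates in idx (in place)."""
        norm = np.linalg.norm(v[idx])
        if norm > thresh:
            v[idx] *= 1.0 - thresh / norm
        else:
            v[idx] = 0.0
        return v

    def prox_tree_l2(v, groups, lam):
        """Proximal operator of lam * sum_g w_g * ||v_g||_2 for tree-structured groups,
        computed with one pass of group soft-thresholding from the leaves to the root,
        i.e. every group is processed before any group that contains it."""
        v = np.array(v, dtype=float)
        for idx, w in groups:
            group_soft_threshold(v, np.asarray(idx), lam * w)
        return v

    # Hypothetical tree of groups on 4 variables: leaves first, root last; unit weights.
    groups = [([0], 1.0), ([1], 1.0), ([2, 3], 1.0), ([0, 1, 2, 3], 1.0)]
    print(prox_tree_l2([3.0, -0.5, 2.0, 1.0], groups, lam=0.5))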

Other updates (15 May search)

- Near Optimal Hierarchical Path-Finding

- Network Partitioning into Tree Hierarchies

- From Hierarchical Partitions to Hierarchical Covers: Optimal Fault-Tolerant Spanners for Doubling Metrics

- Signal Reconstruction using Sparse Tree Representations

- Nonlinear Shape Statistics in Mumford–Shah Based Segmentation

- http://en.wikipedia.org/wiki/Bregman_divergence

- Hierarchical Video Representation with Trajectory Binary Partition Tree

- 3D Knowledge-based Segmentation Using Sparse & Hierarchical Models: Contributions and Applications in Medical Imaging

- Multipath Sparse Coding Using Hierarchical Matching Pursuit

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Interesting Tools:

http://cs.stanford.edu/people/karpathy/researchlei/

Academic Papers Management and Discovery System

Recent Updates to classify 1

1. Solving Non-uniqueness in Agglomerative Hierarchical Clustering Using Multidendrograms - Alberto Fernandez and Sergio Gomez 2008

Abstract: In agglomerative hierarchical clustering, pair-group methods suffer from a problem of non-uniqueness when two or more distances between different clusters coincide during the amalgamation process. The traditional approach for solving this drawback has been to take any arbitrary criterion in order to break ties between distances, which results in different hierarchical classifications depending on the criterion followed. In this article we propose a variable-group algorithm that consists in grouping more than two clusters at the same time when ties occur. We give a tree representation for the results of the algorithm, which we call a multidendrogram, as well as a generalization of the Lance and Williams’ formula which enables the implementation of the algorithm in a recursive way.

Keywords: Agglomerative methods; Cluster analysis; Hierarchical classification; Lance and Williams’ formula; Ties in proximity.

Interesting points:

The non-uniqueness problem in agglomerative hierarchical clustering generates several hierarchical classifications from a unique set of tied proximity data. In such cases, selecting a unique classification can be misleading. This problem has traditionally been dealt with using distinct criteria, which mostly consist of selecting one out of the various resulting hierarchies. In this article we have proposed a variable-group algorithm for agglomerative hierarchical clustering that solves the ties-in-proximity problem. The output of this algorithm is a uniquely determined type of valued tree, which we call a multivalued tree, while graphically we represent it with a multidendrogram.

In addition, we have generalized the definition of distance between clusters for the most commonly used agglomerative hierarchical methods, in order to be able to compute them using the variable-group algorithm. We have also given the corresponding generalization of Lance and Williams' formula, which enables us to obtain agglomerative hierarchical classifications in a recursive way. Finally, we have shown the possible usefulness of our proposal with some results obtained using data from a real example.

Gathering up the main advantages of our new proposal, we can state the following points:

  • When there are no ties, the variable-group algorithm gives the same result as the pair-group one. The new algorithm always gives a uniquely determined solution.
  • In the multidendrogram representation of the results one can explicitly observe the occurrence of ties during the agglomerative process. Furthermore, the height of any fusion interval indicates the degree of heterogeneity inside the corresponding cluster.
  • When ties exist, the variable-group algorithm is computationally more efficient than enumerating all the possible solutions that follow from the various tie-breaking choices of the pair-group alternative.
  • The new proposal can also be computed in a recursive way using a generalization of Lance and Williams' formula.

Although ties need not be present in the initial proximity data, they may arise during the agglomeration process. For this reason, and given that the results of the variable-group algorithm coincide with those of the pair-group algorithm when there are no ties, we recommend using the variable-group option directly. With a single run one knows whether ties exist, and the corresponding solution is obtained at the same time.
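
For reference, the Lance and Williams formula the paper generalizes updates the distance from any cluster k to a newly merged cluster i ∪ j as d(k, ij) = a_i d(k, i) + a_j d(k, j) + b d(i, j) + g |d(k, i) - d(k, j)|. A minimal O(n^3) pair-group sketch for single linkage (a_i = a_j = 1/2, b = 0, g = -1/2); the arbitrary scan-order tie-breaking in the argmin is exactly the non-uniqueness the multidendrogram approach removes:

    import numpy as np

    def single_linkage_lw(d):
        """Naive O(n^3) pair-group single-linkage clustering on a square distance
        matrix d, using the Lance-Williams update. Ties between minimal distances
        are broken by scan order, which is the arbitrary choice that can make the
        resulting dendrogram non-unique."""
        d = d.astype(float).copy()
        active = list(range(d.shape[0]))
        merges = []
        while len(active) > 1:
            # closest pair among active clusters (first minimum wins on ties)
            i, j = min(((a, b) for ai, a in enumerate(active) for b in active[ai + 1:]),
                       key=lambda p: d[p[0], p[1]])
            merges.append((i, j, d[i, j]))
            # Lance-Williams update for single linkage; merged cluster reuses index i
            for k in active:
                if k != i and k != j:
                    new = 0.5 * d[k, i] + 0.5 * d[k, j] - 0.5 * abs(d[k, i] - d[k, j])
                    d[i, k] = d[k, i] = new
            active.remove(j)
        return merges

    # Toy distances with a tie: d(0,1) = d(1,2) = 1.
    d0 = np.array([[0., 1., 2.],
                   [1., 0., 1.],
                   [2., 1., 0.]])
    print(single_linkage_lw(d0))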

2. “Exploiting Planarity for Network Flow and Connectivity Problems” - Glencora Borradaile, PhD thesis, Brown University.

Abstract: By restricting the input to a problem, it often becomes possible to design more accurate or more efficient algorithms to solve that problem. In this thesis we restrict our attention to planar graphs and achieve both these goals. Planar graphs exhibit many structural and combinatorial properties that enable the design of good algorithms. These properties include: corresponding to every planar graph there is a dual planar graph; the edges dual to the complement of a spanning tree form a spanning tree of the dual graph; a set of edges is a cycle if and only if the dual edges form a cut; cycles can be said to enclose edges, faces and vertices in the planar embedding; and paths can be compared as to their relative embedding.

We capitalize on these properties to design (a) faster algorithms for polynomial-time solvable network flow problems and (b) algorithms with better approximation guarantees for NP-hard connectivity problems. We give a conceptually simple O(n log n)-time algorithm for finding the maximum st-flow in a directed planar graph, proving a theorem that was incorrectly claimed over a decade ago. We also show how to compute the minimum cut between all pairs of vertices on a common face of a planar graph in linear time. We give the first polynomial-time approximation schemes for the Steiner-tree and 2-edge-connected-subgraph problems in planar graphs; both problems are NP-hard in planar graphs and admit no PTAS in general graphs. Our schemes run in O(n log n) time.

3. Maximum flows and parametric shortest paths in planar graphs:

Abstract: We observe that the classical maximum flow problem in any directed planar graph G can be reformulated as a parametric shortest path problem in the oriented dual graph G*. This reformulation immediately suggests an algorithm to compute maximum flows, which runs in O(n log n) time. As we continuously increase the parameter, each change in the shortest path tree can be effected in O(log n) time using standard dynamic tree data structures, and the special structure of the parameterization implies that each directed edge enters the evolving shortest path tree at most once. The resulting maximum-flow algorithm is identical to the recent algorithm of Borradaile and Klein [J. ACM 2009], but our new formulation allows a simpler presentation and analysis. On the other hand, we demonstrate that for a similarly structured parametric shortest path problem on the torus, the shortest path tree can change Ω(n²) times in the worst case, suggesting that a different method may be required to efficiently compute maximum flows in higher-genus graphs.

4. Presentation: Monge Property and Max-Flows in Planar Graphs - ppt here:

If you are interested in max-flow computations in planar graphs, below is the talk I gave at WORKER 2013. It gives an overview of how to use shortest paths and the Monge property to obtain the following three results:

  • O(n log log n) time algorithm for undirected max-flow in planar graphs [1],
  • almost linear time algorithm for all-source all-sink max-flow in undirected planar graphs, i.e., it computes the Gomory-Hu tree [2],
  • almost linear time algorithm for single-source all-sink max-flow in directed planar graphs [3].

Interesting property: the Monge property - http://en.wikipedia.org/wiki/Monge_array
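
For reference, a matrix a is a Monge array when a[i, j] + a[i', j'] <= a[i, j'] + a[i', j] for all rows i < i' and columns j < j'; it is enough to check adjacent rows and columns. A small numpy sketch (the example matrix is only an illustration):

    import numpy as np

    def is_monge(a, tol=1e-12):
        """Monge condition: a[i, j] + a[i+1, j+1] <= a[i, j+1] + a[i+1, j] for all
        adjacent 2x2 submatrices, which is equivalent to the condition holding for
        every pair of rows i < i' and columns j < j'."""
        a = np.asarray(a, dtype=float)
        return bool(np.all(a[:-1, :-1] + a[1:, 1:] <= a[:-1, 1:] + a[1:, :-1] + tol))

    # Example: a[i, j] = (i - j)^2 is a Monge array; its negation is not.
    a = np.fromfunction(lambda i, j: (i - j) ** 2, (5, 5))
    print(is_monge(a), is_monge(-a))   # True False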

Recent Updates to classify 2

1. Matching pursuit using hierarchical approaches:

Matching Pursuit-Based Shape Representation and Recognition Using Scale-Space

http://infoscience.epfl.ch/record/101552/files/Mendels.pdf

Hierarchical Matching Pursuit for Image Classification: Architecture and Fast Algorithms

http://homes.cs.washington.edu/~lfb/paper/nips11.pdf

Tree-Based Pursuit: Algorithm and Properties

http://infoscience.epfl.ch/record/87340/files/jost.pdf

http://www.scholarpedia.org/article/Matching_pursuit

Attached: ppt on wavelets, matching pursuit, and best-basis approaches.
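
As background to the hierarchical and tree-based variants linked above, here is a minimal numpy sketch of plain matching pursuit over a fixed dictionary; the random dictionary and the two-atom signal are placeholders:

    import numpy as np

    def matching_pursuit(x, D, n_iter=10, tol=1e-6):
        """Plain matching pursuit: greedily pick the dictionary atom (column of D,
        assumed unit-norm) most correlated with the residual and subtract its
        contribution. Returns the coefficient vector and the final residual."""
        residual = x.astype(float).copy()
        coeffs = np.zeros(D.shape[1])
        for _ in range(n_iter):
            corr = D.T @ residual             # correlation of each atom with residual
            k = int(np.argmax(np.abs(corr)))  # best-matching atom
            if abs(corr[k]) < tol:
                break
            coeffs[k] += corr[k]
            residual -= corr[k] * D[:, k]
        return coeffs, residual

    # Toy example: random unit-norm dictionary, signal built from two atoms.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((32, 64))
    D /= np.linalg.norm(D, axis=0)
    x = 2.0 * D[:, 3] - 1.5 * D[:, 40]
    c, r = matching_pursuit(x, D, n_iter=50)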

2. CVPR 2012 Hierarchies work: http://www.cvpapers.com/cvpr2012.html

A Hierarchical Image Clustering Cosegmentation Framework - Edward Kim, Hongsheng Li, Xiaolei Huang

Given the knowledge that the same or similar objects appear in a set of images, our goal is to simultaneously segment that object from the set of images. To solve this problem, known as the cosegmentation problem, we present a method based upon hierarchical clustering. Our framework first eliminates intra-class heterogeneity in a dataset by clustering similar images together into smaller groups. Then, from each image, our method extracts multiple levels of segmentation and creates connections between regions (e.g. superpixel) across levels to establish intra-image multi-scale constraints. Next we take advantage of the information available from other images in our group. We design and present an efficient method to create inter-image relationships, e.g. connections between image regions from one image to all other images in an image cluster. Given the intra & inter-image connections, we perform a segmentation of the group of images into foreground and background regions. Finally, we compare our segmentation accuracy to several other state-of-the-art segmentation methods on standard datasets, and also demonstrate the robustness of our method on real world data.

Interesting parts:

An illustration of our constructed graph between two images. An image is hierarchically segmented into a number of layers, where the intra-image affinities, W, are defined between neighboring superpixels, weighted by the length of the shared border shown in red. The hierarchical constraints between layer segmentations are illustrated by the yellow connections. These connections are defined in our constraint matrix, C. The inter-image affinities, R, are made between images at their coarsest level of segmentation. A fully connected graph is considered and then the number of edges is trimmed, as illustrated in green. (Note: the yellow and green connections are visualizations, not the actual edges.)
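
To keep the bookkeeping straight, here is a toy scipy sketch of assembling intra-image affinities W, hierarchical constraints C, and inter-image affinities R into one block matrix. The sizes, densities, and values are entirely made up; this is not the paper's actual construction, only an illustration of the block structure:

    import scipy.sparse as sp

    # Hypothetical sizes: two images, each with 6 fine regions and 2 coarse regions.
    n_fine, n_coarse = 6, 2
    W1 = sp.random(n_fine, n_fine, density=0.3, random_state=0)     # intra-image affinities, image 1
    W2 = sp.random(n_fine, n_fine, density=0.3, random_state=1)     # intra-image affinities, image 2
    C1 = sp.random(n_coarse, n_fine, density=0.5, random_state=2)   # layer constraints, image 1
    C2 = sp.random(n_coarse, n_fine, density=0.5, random_state=3)   # layer constraints, image 2
    R = sp.random(n_coarse, n_coarse, density=1.0, random_state=4)  # inter-image affinities (coarsest level)

    # One block matrix over the nodes [fine1, coarse1, fine2, coarse2].
    A = sp.bmat([[W1,   C1.T, None, None],
                 [C1,   None, None, R   ],
                 [None, None, W2,   C2.T],
                 [None, R.T,  C2,   None]]).tocsr()
    print(A.shape)   # (16, 16)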

Learning 3D Object Template by Hierarchical Quantization of Geometry and Appearance Spaces

project code and data: http://www.stat.ucla.edu/~wzhu/CVPR12/

Abstract: This paper presents a method for learning 3D object templates from view labeled object images. The 3D template is defined in a joint appearance and geometry space composed of deformable planar part templates placed at different 3D positions and orientations. Appearance of each part template is represented by Gabor filters, which are hierarchically grouped into line segments and geometric shapes. AND-OR trees are further used to quantize the possible geometry and appearance of part templates, so that learning can be done on a sub-sampled discrete space. Using information gain as a criterion, the best 3D template can be searched through the AND-OR trees using one bottom-up pass and one top-down pass. Experiments on a new car dataset with diverse views show that the proposed method can learn meaningful 3D car templates, and give satisfactory detection and view estimation performance. Experiments are also performed on a public car dataset, which show comparable performance with recent methods.

Interesting part:

To learn 3D templates in this space, we use various measures to quantize and organize the space. The key idea is to use an AND-OR tree to link shared partial part-template compositions, so that many computations can be shared. With the AND-OR tree structure, the problem of learning the optimal 3D object template becomes a search problem, where dynamic programming can be used to find the composition with the most promising score.

This is a climbing energy! A similar optimization is happening, but on geometric primitive structures.
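
As a mental model of the search described above: AND nodes compose parts (their scores add up), OR nodes choose among alternative quantized templates (take the best), and evaluating the tree bottom-up gives the best composition. In the paper the structure is shared across templates (a DAG), which is where the real savings come from; the minimal sketch below only shows the AND/OR scoring rule on a made-up tree:

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Leaf:
        score: float

    @dataclass
    class And:            # all children are used; their scores add up
        children: List["Node"]

    @dataclass
    class Or:             # exactly one child is chosen; take the best one
        children: List["Node"]

    Node = Union[Leaf, And, Or]

    def best_score(node: Node) -> float:
        """Bottom-up evaluation of an AND-OR tree: AND nodes sum the best scores
        of their children, OR nodes take the maximum over their children."""
        if isinstance(node, Leaf):
            return node.score
        scores = [best_score(c) for c in node.children]
        return sum(scores) if isinstance(node, And) else max(scores)

    # Made-up template scores, just to show the mechanics.
    tree = Or([And([Leaf(1.0), Leaf(2.0)]),
               And([Leaf(0.5), Or([Leaf(3.0), Leaf(1.0)])])])
    print(best_score(tree))   # 3.5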

Learning Hierarchical Similarity Metrics - Nakul Verma, Dhruv Mahajan, Sundararajan Sellamanickam, Vinod Nair

Categories in multi-class data are often part of an underlying semantic taxonomy. Recent work in object classification has found interesting ways to use this taxonomy structure to develop better recognition algorithms. Here we propose a novel framework to learn similarity metrics using the class taxonomy. We show that a nearest neighbor classifier using the learned metrics gets improved performance over the best discriminative methods. Moreover, by incorporating the taxonomy, our learned metrics can also help in some taxonomy-specific applications. We show that the metrics can help determine the correct placement of a new category that was not part of the original taxonomy, and can provide effective classification amongst categories local to specific subtrees of the taxonomy.

3. Local constraints on combinatorial optimization and multi-resolution optimization: