Graph Convolutional Networks have been successful in addressing graph-based tasks such as semi-supervised node classification. Existing methods use a user-defined network structure, chosen by experimentation with a fixed number of layers, and employ a layer-wise propagation rule to obtain the node embeddings. Designing an automatic process to define a problem-dependent architecture for Graph Convolutional Networks can greatly reduce the computational complexity of the training process. We propose a method to automatically build compact and task-specific Graph Convolutional Networks. Experimental results on widely used publicly available datasets indicate that the proposed method outperforms related graph-based learning algorithms in terms of classification performance and network compactness.
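The layer-wise propagation rule mentioned above can be sketched on a toy example; the graph, features, and weights below are hypothetical illustrations, not the paper's experimental setup.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)   # ReLU activation

# toy 3-node path graph with 2-d node features and 4 output channels
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.standard_normal((3, 2)), rng.standard_normal((2, 4)))
print(h.shape)  # (3, 4)
```

Stacking such layers, with the number of layers chosen automatically rather than fixed in advance, is the design question the proposed method addresses.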
GESVM is a multi-class classification framework that incorporates geometric data relationships, described by both intrinsic and penalty graphs, into the multi-class Support Vector Machine. Direct solutions are derived for its optimization problem for both linear and non-linear multi-class classification. GESVM constitutes a general framework for maximum-margin multi-class classification exploiting geometric data relationships, which includes several SVM-based classification schemes as special cases.
GEELM is an extension of the Extreme Learning Machine algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria in the optimization process followed for calculating the network's output weights. It can naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces.
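A minimal sketch of the idea, assuming a graph-Laplacian regularizer on the hidden-layer representations; the data, graph, and regularization weights below are toy assumptions, and the paper's exact objective differs in details.

```python
import numpy as np

def graph_regularized_elm(H, T, L, c=0.01, lam=0.1):
    """Closed-form output weights B minimizing
    ||H B - T||^2 + c ||B||^2 + lam * tr(B^T H^T L H B),
    where L is a graph Laplacian built from a (toy) intrinsic graph."""
    d = H.shape[1]
    A = H.T @ H + c * np.eye(d) + lam * (H.T @ L @ H)
    return np.linalg.solve(A, H.T @ T)

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 4))            # hidden-layer outputs, 6 samples
T = np.eye(3)[[0, 0, 1, 1, 2, 2]]          # one-hot targets, 3 classes
W = np.zeros((6, 6))                       # toy intrinsic graph: same-class pairs
for i, j in [(0, 1), (2, 3), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
beta = graph_regularized_elm(H, T, L)
print(beta.shape)  # (4, 3)
```

Swapping the Laplacian for one derived from a penalty graph (with a negative sign on the regularizer) follows the same template.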
M(C)VELM is an algorithm for single-hidden layer feedforward neural network training motivated by the observation that the learning process of such networks can be considered a non-linear mapping of the training data to a high-dimensional feature space, followed by a data projection to a low-dimensional space where classification is performed by a linear classifier. To improve its performance, we extend the Extreme Learning Machine algorithm to exploit the training data dispersion (MVELM) or the class compactness (MCVELM) in its optimization process.
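The class-compactness variant can be sketched as penalizing the within-class scatter of the hidden representations; the data and the regularization weight below are hypothetical, and the paper's precise formulation may differ.

```python
import numpy as np

def mcv_elm(H, T, labels, c=0.1):
    """Output weights B minimizing ||H B - T||^2 + c * tr(B^T S_w B),
    where S_w is the within-class scatter of the hidden outputs."""
    d = H.shape[1]
    Sw = 1e-6 * np.eye(d)                  # small ridge for invertibility
    for cls in np.unique(labels):
        Hc = H[labels == cls]
        centered = Hc - Hc.mean(axis=0)
        Sw += centered.T @ centered        # accumulate per-class scatter
    return np.linalg.solve(H.T @ H + c * Sw, H.T @ T)

rng = np.random.default_rng(2)
H = rng.standard_normal((8, 5))            # hidden-layer outputs, 8 samples
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
T = np.eye(2)[labels]                      # one-hot targets
beta = mcv_elm(H, T, labels)
print(beta.shape)  # (5, 2)
```

Replacing S_w with the total scatter of the hidden outputs gives the dispersion-based (MVELM) variant of the same template.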
DropELM is an extension of the Extreme Learning Machine algorithm for single-hidden layer feedforward neural network training that incorporates Dropout and DropConnect regularization in its optimization process. Both types of regularization lead to the same solution for the network output weights calculation, which does not require computationally intensive iterative weight tuning.
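The closed-form character of such solutions can be illustrated with a marginalized-Dropout sketch: taking the expectation over Bernoulli masks on the hidden outputs turns Dropout into a data-dependent ridge penalty, so no iterative mask sampling is needed. The exact DropELM objective differs in details, and the data below are toy assumptions.

```python
import numpy as np

def dropout_elm(H, T, q=0.8, c=1e-2):
    """Output weights with Dropout marginalized out in expectation.

    For a Bernoulli keep-probability q on each hidden unit,
    E[(H M)^T (H M)] = q^2 * (H^T H) off the diagonal and q * diag(H^T H)
    on the diagonal, yielding a closed-form, data-dependent regularizer.
    """
    G = H.T @ H
    A = q**2 * G + q * (1 - q) * np.diag(np.diag(G)) + c * np.eye(H.shape[1])
    return np.linalg.solve(A, q * (H.T @ T))

rng = np.random.default_rng(3)
H = rng.standard_normal((10, 6))           # hidden-layer outputs, 10 samples
T = np.eye(2)[rng.integers(0, 2, size=10)] # one-hot targets, 2 classes
beta = dropout_elm(H, T)
print(beta.shape)  # (6, 2)
```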
SSMvELM is a semi-supervised multi-view classification algorithm extending the ELM algorithm by incorporating discrimination criteria in its optimization process. Proper regularization is also incorporated to exploit information appearing in both labeled and unlabeled samples. Multi-view classification is achieved by following an iterative optimization scheme that jointly optimizes the parameters for all views.
The list provided below may be incomplete. The complete list of papers related to this topic can be found in the lists of journal papers and conference papers.
N. Heidari and A. Iosifidis, “Progressive Convolutional Networks for Semi-Supervised Node Classification”, arXiv:2003.12277, 2020
A. Iosifidis and M. Gabbouj, “Multi-class Support Vector Machine Classifiers using Intrinsic and Penalty Graphs”, Pattern Recognition, vol. 55, pp. 231-246, 2016
A. Iosifidis, A. Tefas and I. Pitas, “Graph Embedded Extreme Learning Machine”, IEEE Transactions on Cybernetics, vol. 46, no. 1, pp. 311-324, 2016
A. Iosifidis, A. Tefas and I. Pitas, “DropELM: Fast Neural Network Regularization with Dropout and DropConnect”, Neurocomputing, vol. 162, pp. 57-66, 2015
A. Iosifidis, A. Tefas and I. Pitas, “Regularized Extreme Learning Machine for Multi-view Semi-supervised Action Recognition”, Neurocomputing, vol. 145, pp. 250-262, 2014
A. Iosifidis, A. Tefas and I. Pitas, “Minimum Class Variance Extreme Learning Machine for Human Action Recognition”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 11, pp. 1968-1979, 2013
A. Iosifidis, A. Tefas and I. Pitas, “Minimum Variance Extreme Learning Machine for Human Action Recognition”, IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, Italy, 2014