Neural-Kernel Machine

Scalable hybrid deep neural kernel networks [ESANN 2017, Special Session: Deep and kernel methods] [PDF]

This paper introduces a novel hybrid deep neural kernel framework. The proposed deep learning model combines a neural networks based architecture with a kernel based model. In particular, an explicit feature map, based on random Fourier features, is used to make the transition between the two architectures more straightforward, as well as to make the model scalable to large datasets by solving the optimization problem in the primal. Furthermore, the introduced framework serves as the first building block for the development of even deeper models and more advanced architectures. Experimental results show an improvement over shallow models and the standard non-hybrid neural networks architecture on several medium to large scale real-life datasets.
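
As a rough illustration of the idea only (this is not the authors' code; the use of PyTorch, the layer sizes and all identifiers are assumptions), the sketch below stacks a small neural layer, an explicit random Fourier feature map approximating an RBF kernel, and a linear output layer whose weights play the role of the primal variables and are trained with ordinary mini-batch gradient descent.

# Minimal sketch of the hybrid neural-kernel idea:
# neural layer -> explicit random Fourier feature map -> linear output (primal).
import torch
import torch.nn as nn

class RandomFourierFeatures(nn.Module):
    """Explicit map z(x) = sqrt(2/D) * cos(W x + b) approximating an RBF kernel."""
    def __init__(self, in_dim, n_features, sigma=1.0):
        super().__init__()
        # Frozen random projection: W ~ N(0, 1/sigma^2), b ~ Uniform(0, 2*pi)
        self.register_buffer("W", torch.randn(in_dim, n_features) / sigma)
        self.register_buffer("b", 2 * torch.pi * torch.rand(n_features))
        self.scale = (2.0 / n_features) ** 0.5

    def forward(self, x):
        return self.scale * torch.cos(x @ self.W + self.b)

class HybridNeuralKernelNet(nn.Module):
    """Neural layer, explicit feature map, then a linear layer trained in the primal."""
    def __init__(self, in_dim, hidden_dim, n_rff, n_classes):
        super().__init__()
        self.neural = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.feature_map = RandomFourierFeatures(hidden_dim, n_rff)
        self.out = nn.Linear(n_rff, n_classes)  # primal weights

    def forward(self, x):
        return self.out(self.feature_map(self.neural(x)))

# Usage: standard mini-batch training, no kernel matrix is ever formed.
model = HybridNeuralKernelNet(in_dim=20, hidden_dim=64, n_rff=256, n_classes=3)
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

Because the feature map is explicit, the kernel part reduces to a linear model on z(x), so the whole stack can be optimized end-to-end on large datasets without constructing an n-by-n kernel matrix, which is the sense in which the problem is solved in the primal.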

Deep Neural-Kernel Blocks [Neural Networks, 2019] [PDF]

This paper introduces novel deep architectures that use the hybrid neural-kernel core model as the first building block. The proposed models combine a neural networks based architecture with a kernel based model enriched with pooling layers. In particular, three kernel blocks with average, maxout and convolutional pooling layers are introduced and examined. We start with a simple merging layer which averages the outputs of the previous representation layers. The maxout layer, on the other hand, triggers competition among different representations of the input. Thanks to this pooling layer, not only is the dimensionality of the multi-scale representations reduced, but multiple sub-networks are also formed within the same model. In the same context, a pointwise convolutional layer is employed with the aim of projecting the multi-scale representations onto a new space. Experimental results show an improvement over the core deep hybrid model as well as over kernel based models on several real-life datasets.
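
As an illustrative sketch only (assumed PyTorch, assumed tensor layout; not the paper's implementation), the three pooling blocks can be pictured as follows: each takes a stack of multi-scale representations of the same input and collapses it into a single representation.

# Sketch of the three pooling blocks over multi-scale representations.
# reps has shape (batch, scales, d): one d-dimensional representation per scale.
import torch
import torch.nn as nn

class AveragePool(nn.Module):
    """Merging layer: element-wise average over the stacked representations."""
    def forward(self, reps):                 # (batch, scales, d)
        return reps.mean(dim=1)              # -> (batch, d)

class MaxoutPool(nn.Module):
    """Maxout: keeps, per dimension, the maximum over the competing representations."""
    def forward(self, reps):                 # (batch, scales, d)
        return reps.max(dim=1).values        # -> (batch, d)

class PointwiseConvPool(nn.Module):
    """Pointwise (1x1) convolution that mixes the scales into one projected representation."""
    def __init__(self, n_scales):
        super().__init__()
        self.conv = nn.Conv1d(n_scales, 1, kernel_size=1)  # mixes across the scale axis

    def forward(self, reps):                 # (batch, scales, d)
        return self.conv(reps).squeeze(1)    # -> (batch, d)

# Usage: three representations of dimension 128 for a batch of 32 inputs.
reps = torch.randn(32, 3, 128)
for pool in (AveragePool(), MaxoutPool(), PointwiseConvPool(n_scales=3)):
    print(pool(reps).shape)                  # each yields torch.Size([32, 128])

In this reading, the maxout variant is what makes the different representations compete element-wise and thereby forms sub-networks within the same model, while the pointwise convolution learns how to project the multi-scale representations onto a new space.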