On the approximation by single hidden layer feedforward neural networks with fixed weights

Namig J. Guliyev and Vugar E. Ismailov

Abstract.   The theory of neural networks has become increasingly popular in various areas of modern science. In many problems, authors use single hidden layer feedforward neural networks (SLFNs) with certain activation functions. The efficiency of this usage rests on the universal approximation property of such networks. Some authors have shown that SLFNs with limited weights still retain their ability to approximate any function with arbitrary precision. However, this phenomenon places no restriction on the number of neurons in the hidden layer: the larger this number, the higher the probability that the network gives precise results. In this note, we constructively prove that SLFNs with the fixed weight 1 and two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all multivariate functions.
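To make the architecture concrete, here is a minimal Python sketch of the network form described above, N(x) = c1·σ(x − θ1) + c2·σ(x − θ2), with the inner weight fixed at 1. Note that the paper constructs a special sigmoidal activation σ for which two neurons suffice; the standard logistic function below is only a placeholder for σ, and the threshold and coefficient values are hypothetical, so this shows the shape of the model rather than the authors' construction.

```python
import numpy as np

def logistic(t):
    # Placeholder sigmoid; the paper builds a special sigmoidal
    # activation for which two hidden neurons already suffice.
    return 1.0 / (1.0 + np.exp(-t))

def slfn(x, thetas, coeffs, sigma=logistic, w=1.0):
    """SLFN with a common fixed inner weight w (here w = 1):
    N(x) = sum_i c_i * sigma(w*x - theta_i)."""
    x = np.asarray(x, dtype=float)
    return sum(c * sigma(w * x - t) for c, t in zip(coeffs, thetas))

# Two hidden neurons, fixed weight 1; thetas/coeffs are illustrative values.
xs = np.linspace(-2.0, 2.0, 5)
print(slfn(xs, thetas=[-1.0, 1.0], coeffs=[2.0, -1.5]))
```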
