Ting Fan XIE, Fei Long CAO
In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). He also remarked: “Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem”. This paper answers the problem in the affirmative and proves that, for a bounded activation function σ on R that is continuous almost everywhere (a.e.), the collection of SLFNs is dense in C(K) if and only if σ is not constant a.e.
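For concreteness, the functions realized by an SLFN with s inputs and n hidden units take the standard form (the notation below is generic, not taken from the paper itself):
\[
N(x) \;=\; \sum_{i=1}^{n} c_i\, \sigma(w_i \cdot x + \theta_i), \qquad x \in \mathbb{R}^{s},\quad c_i,\, \theta_i \in \mathbb{R},\quad w_i \in \mathbb{R}^{s},
\]
and density in C(K) means that every continuous function on K can be uniformly approximated on K by networks of this form.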