On computing decision regions with neural nets

LK Li - J. Comput. Syst. Sci., 1991 - core.ac.uk
Recently, the capabilities, limitations, and applications of feedforward networks have been studied. One of the introductory papers is [4] by Lippmann. In that paper, on page 16, it is claimed that “No number of nodes, however, can separate the meshed class regions in Fig. 14 with a two-layer perceptron.” However, he has underestimated the ability of a two-layer feedforward network. The results of Hornik, Stinchcombe, and White [3] show that a two-layer perceptron can approximate any continuous function arbitrarily closely. An alternate proof of their result, as well as a basic result on three-layer networks, can be found in Blum and Li [1]. This note gives a proof of the separation of arbitrary disjoint compact regions by two-layer McCulloch-Pitts (Mc-P) networks based on the theorem given in [3].
First, we demonstrate how a two-layer Mc-P network separates the case in Fig. 14 of Lippmann [4]. Then, we extend our result to arbitrary compact decision regions via the theorem of Hornik, Stinchcombe, and White [3]. Finally, we give an example and discuss the limitations of two-layer nets with noncompact regions.
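The separation claim can be illustrated with a minimal hand-built example. The sketch below, which is illustrative and not taken from the paper, constructs a two-layer McCulloch-Pitts network (one hidden layer of threshold units plus a threshold output unit) that separates the XOR pattern, the simplest "meshed" arrangement that no single threshold unit can separate; the weights and thresholds are chosen by hand.

```python
# A minimal sketch (weights chosen by hand, not from the paper): a two-layer
# McCulloch-Pitts network separating the XOR ("meshed") pattern.

def step(x):
    """Heaviside threshold activation of a McCulloch-Pitts unit."""
    return 1 if x >= 0 else 0

def two_layer_net(x1, x2):
    # Hidden layer: one unit fires on OR of the inputs, the other on AND.
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: fires when OR is on but AND is off, i.e. exactly XOR.
    return step(h_or - h_and - 0.5)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", two_layer_net(x1, x2))
# -> (0,0) gives 0, (0,1) gives 1, (1,0) gives 1, (1,1) gives 0
```

The same idea scales up: with enough hidden threshold units, the hidden layer can carve the plane into half-plane indicators whose combination at the output unit distinguishes the two interlocking regions, which is the concrete shape of the two-layer separation argument.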