Which ML models are universal function approximators?

The universal approximation theorem states that a feed-forward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of R^n to arbitrary accuracy (provided some mild assumptions on the activation function are met).
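
As a minimal illustration (the target function, network size, and hyperparameters below are just an example, assuming scikit-learn is available), a single hidden layer of tanh units can already approximate a simple continuous target on an interval:

```python
# Sketch: a single-hidden-layer network approximating a 1-D continuous
# function, here f(x) = sin(x) on [-pi, pi]. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
y = np.sin(X).ravel()

# One hidden layer with a finite number of neurons and tanh activation.
net = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)

print("max absolute error:", np.max(np.abs(net.predict(X) - y)))
```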


Is there any other machine learning model (apart from any neural network model) that has been proved to be a universal function approximator (and that is potentially comparable to neural networks in terms of usefulness and applicability)? If so, can you provide a link to a research paper or book that shows the proof?

Answered by Abhinayan Verma

Support vector machines

In the paper A Note on the Universal Approximation Capability of Support Vector Machines (2002), B. Hammer and K. Gersmann investigate the universal function approximation capabilities of SVMs. More specifically, the authors show that SVMs with standard kernels (including Gaussian, polynomial, and several dot-product kernels) can approximate any measurable or continuous function up to any desired accuracy. Therefore, SVMs are universal function approximators.

Polynomials

It is also widely known that we can approximate any continuous function with polynomials (see the Stone-Weierstrass theorem). You can use polynomial regression to fit polynomials to your labeled data.
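
As a hedged sketch of both ideas (the target function, kernel settings, and polynomial degree are illustrative assumptions, not taken from the cited paper), you can approximate the same continuous target with an RBF-kernel SVM and with polynomial regression:

```python
# Sketch: approximating a continuous function on a compact interval with an
# RBF-kernel SVM (SVR) and with polynomial regression. Illustrative only.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

X = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
y = np.sin(3 * X).ravel()  # any continuous target on a compact interval

# SVM with a Gaussian (RBF) kernel; accuracy improves as C grows and epsilon shrinks.
svm = SVR(kernel="rbf", C=100.0, gamma=10.0, epsilon=0.001).fit(X, y)

# Polynomial regression; accuracy improves as the degree grows.
poly = make_pipeline(PolynomialFeatures(degree=15), LinearRegression()).fit(X, y)

print("SVM  max error:", np.max(np.abs(svm.predict(X) - y)))
print("poly max error:", np.max(np.abs(poly.predict(X) - y)))
```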


