Generating rectifiable measures through neural networks

Authors

Erwin Riegler, Alex Bühler, Yang Pan, and Helmut Bölcskei

Reference

61 pages, Dec. 2024, in preparation.


Abstract

We derive universal approximation results for the class of (countably) m-rectifiable measures. Specifically, we prove that m-rectifiable measures can be approximated, to arbitrarily small error in Wasserstein distance, as push-forwards of the one-dimensional Lebesgue measure on the unit interval under ReLU neural networks. Moreover, the weights in the networks under consideration are quantized and bounded, and the number of ReLU neural networks required to achieve a prescribed approximation error depends on the rectifiability parameter m, which can be significantly smaller than the ambient dimension, thereby improving on the results presented in Perekrestenko et al. We extend this result to countably m-rectifiable measures and show that the same behaviour obtains provided that, among other technical assumptions, the measure decays exponentially on the individual components of the countably m-rectifiable support set.
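For orientation, the two notions at the core of the abstract can be spelled out as follows. These are the standard definitions in notation of our own choosing; the symbols $\Phi$ and $\lambda$ and the choice of Wasserstein order are illustrative and need not match the paper's precise setup. Given a Borel-measurable map $\Phi \colon [0,1] \to \mathbb{R}^d$ (here realized by a ReLU neural network) and the Lebesgue measure $\lambda$ on $[0,1]$, the push-forward measure is
\[
  \Phi_{\#}\lambda(A) = \lambda\bigl(\Phi^{-1}(A)\bigr), \qquad A \subseteq \mathbb{R}^d \ \text{Borel},
\]
and, for a target measure $\mu$ on $\mathbb{R}^d$, the approximation error can be quantified by the Wasserstein-1 distance
\[
  W_1\bigl(\Phi_{\#}\lambda,\, \mu\bigr) = \inf_{\pi \in \Pi(\Phi_{\#}\lambda,\, \mu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \lVert x - y \rVert \, \mathrm{d}\pi(x,y),
\]
where $\Pi(\cdot,\cdot)$ denotes the set of couplings, i.e., probability measures on $\mathbb{R}^d \times \mathbb{R}^d$ with the indicated marginals.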



Copyright Notice: © 2024 E. Riegler, A. Bühler, Y. Pan, and H. Bölcskei.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.