The 35th Annual Shanks Lecture – Tuesday, May 16, 3:40pm, 103 Wilson Hall

Shanks Lecturer
Wolfgang Dahmen
University of South Carolina and RWTH Aachen University

 

High-Dimensional Approximation, Compositional Sparsity, and Deep Neural Networks

The need to recover or approximate functions of many variables is ubiquitous in application contexts such as machine learning, uncertainty quantification, and data assimilation.

In all these scenarios the so-called Curse of Dimensionality is an intrinsic obstruction and a long-standing theme in approximation theory. Roughly speaking, it expresses an exponential dependence of the “recovery cost” on the spatial dimension. It is well known that the ability to avoid the Curse depends both on the structure of the particular “model class” of functions one wishes to approximate and on the approximation system to be used.
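To make the exponential dependence concrete, consider the standard textbook setting (not specific to this talk) of approximating functions from the unit ball of $C^s([0,1]^d)$ with $n$ degrees of freedom. The best achievable worst-case error then behaves like

\[ E_n \;\asymp\; n^{-s/d}, \qquad\text{so that}\qquad n(\varepsilon) \;\asymp\; \varepsilon^{-d/s} \]

degrees of freedom are needed to guarantee accuracy $\varepsilon$; for fixed smoothness $s$, this recovery cost grows exponentially with the dimension $d$.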

In this talk we highlight recent results concerning the interplay between these two constituents. For small spatial dimensions, approximation complexity is essentially determined by the smoothness of the approximand, e.g., in terms of Sobolev or Besov regularity, which is effectively exploited by approximation systems that rely on spatial localization.

By contrast, in high dimensions, more global structural sparsity properties determine the ability to avoid the Curse, unless one imposes excessively high smoothness degrees. Inspired by the highly nonlinear structure of Deep Neural Networks (DNNs) and by the Kolmogorov-Arnold Superposition Theorem (recalled below), we focus in particular on a new notion of “tamed compositional sparsity” that leads to new types of model classes for high-dimensional approximation.

The relevance of such classes is perhaps best illustrated in the context of solution manifolds of parameter-dependent families of partial differential equations (PDEs). Specifically, the framework accommodates “inheritance theorems”: compositional sparsity of problem data (such as parameter-dependent coefficient fields) is inherited by the solutions. In particular, we briefly discuss transport equations, for which common model reduction concepts for effectively approximating the corresponding parameter-to-solution maps are well known to fail. Nevertheless, given compositionally sparse data, the corresponding solution manifolds can be shown to belong to compositional approximation classes whose manifold widths defy the Curse of Dimensionality. The corresponding concrete approximation rates, realized by DNNs, exhibit only a low algebraic dependence on the (large) parametric dimension. We conclude by briefly discussing the bearing of these findings on other problem types and ensuing research directions.
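For reference, the classical Kolmogorov-Arnold Superposition Theorem (stated here in its standard form; the precise variant underlying the talk's notion of compositional sparsity may differ) asserts that every continuous function $f$ on $[0,1]^d$ can be written as

\[ f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\Big( \sum_{p=1}^{d} \phi_{q,p}(x_p) \Big) \]

with continuous univariate outer functions $\Phi_q$ and inner functions $\phi_{q,p}$, i.e., as a shallow composition of sums of univariate maps, which is precisely the kind of compositional structure that deep networks generalize.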