On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds
We analyze the expressivity and learnability of solution functions of convex optimization and their multi-layer extensions. Our main results are: (1) the class of solution functions of linear programs (LPs) and quadratic programs (QPs) is a universal approximator for smooth model classes (or restricted Sobolev spaces), with a characterized rate-distortion; (2) a regression-error analysis for the setting in which the target function is accessible only through data observations; (3) compositional architectures that use optimization as a layer can exactly reconstruct basic functions of numerical analysis, which implies (4) a substantial rate-distortion reduction with a universal architecture; and (5) empirical covering-number bounds for LPs/QPs, as well as for a generic (possibly nonconvex) problem via tame geometry. To our knowledge, this is the first rigorous analysis of the approximation and learning-theoretic properties of solution functions, with implications for algorithm design and performance guarantees.
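To make the central object concrete, a minimal formalization follows (the notation here is illustrative, not necessarily the paper's): for a parametric QP with parameter $\theta$, the solution function maps $\theta$ to the optimizer,
\[
s(\theta) \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \;\tfrac{1}{2}\, x^\top Q(\theta)\, x + q(\theta)^\top x
\quad \text{s.t.} \quad A(\theta)\, x \le b(\theta),
\]
with $Q(\theta) \succeq 0$ so the problem is convex; the LP case corresponds to $Q(\theta) = 0$. The results above concern how rich the function class $\{\theta \mapsto s(\theta)\}$ is as an approximator, and how well it can be learned from observations of input-output pairs $(\theta, s(\theta))$.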