Learning Neural Networks under Input-Output Specifications
We study how to learn neural networks that certifiably satisfy input–output specifications. The key idea is to convexify the verification condition by abstracting nonlinear specifications and activation functions with quadratic constraints and by introducing a loop-transformation-based reparameterization, yielding a convex condition that can be enforced during training. The resulting inner approximation of the set of admissible parameters enables certification of the trained network. We validate the approach on reachability-style specifications over a range of input regions.
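As a small illustration of the quadratic-constraint abstraction mentioned above, consider the ReLU activation: since ReLU lies in the sector [0, 1], its output y = max(x, 0) satisfies the quadratic inequality y(y − x) ≤ 0 for every input x (in fact with equality). The sketch below, which is illustrative only and not the paper's algorithm, checks this constraint numerically:

```python
import numpy as np

# Illustrative sketch: ReLU lies in the sector [0, 1], so y = relu(x)
# satisfies the quadratic constraint y * (y - x) <= 0 for all real x.
# (For ReLU this holds with equality: either y = x or y = 0.)
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = np.maximum(x, 0.0)      # ReLU applied elementwise
qc = y * (y - x)            # sector-[0, 1] quadratic form

print(bool(np.all(qc <= 1e-12)))  # True: the constraint holds on every sample
```

Such pointwise quadratic constraints are what allow the nonlinear activation to be replaced by a convex (semidefinite-representable) relaxation in the verification condition.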