Diminishing Regret for Online Nonconvex Optimization
We introduce nonconvexity regret to evaluate local-search methods for online nonconvex optimization (ONO) against a global solver. We define the depth of a global minimum and show that memory and random exploration drive nonconvexity regret to zero when the objective's variability over time is small relative to this depth. We provide probabilistic regret bounds that depend on the evolution of the time-varying landscape, leveraging the notions of missing mass and 1-occupancy.
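The regret notion above compares a local-search learner against a per-round global solver. The following toy sketch illustrates the idea under illustrative assumptions: the objective family, the exploration probability, and the use of a dense grid as a stand-in global solver are all hypothetical choices, not the paper's algorithm. The learner warm-starts each round from its previous iterate (the "memory") and occasionally restarts at a uniformly random point (the "random exploration"); nonconvexity regret is the accumulated gap to the global minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-varying nonconvex objective f_t(x) = (x^2 - 1)^2 + b_t * x,
# where b_t is a slowly drifting linear tilt (hypothetical example).
def f(x, b):
    return (x**2 - 1.0)**2 + b * x

grid = np.linspace(-2.0, 2.0, 4001)  # dense grid as a proxy global solver

def local_search(x0, b, steps=50, lr=0.01):
    # Plain gradient descent started from the previous iterate.
    x = x0
    for _ in range(steps):
        grad = 4.0 * x * (x**2 - 1.0) + b
        x -= lr * grad
    return x

x = 1.0          # memory: each round warm-starts at the last iterate
regret = 0.0
T = 200
for t in range(T):
    b = 0.1 * np.sin(0.05 * t)      # small variability relative to the
                                    # depth of the global minimum
    if rng.random() < 0.05:         # random exploration with prob. 0.05
        x = rng.uniform(-2.0, 2.0)
    x = local_search(x, b)
    global_min = f(grid, b).min()   # benchmark: global solver's value
    regret += f(x, b) - global_min  # nonconvexity regret increment

print(regret / T)  # average per-round nonconvexity regret
```

Because the tilt `b_t` is small relative to the depth of the two basins, the warm-started local search tracks a near-global minimizer most rounds and the average regret stays small, matching the abstract's small-variability regime.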