A neural ordinary differential equation (neural ODE) is a machine learning model commonly described as a continuous-depth generalization of a residual network (ResNet) with a single residual block; conversely, the ResNet can be seen as the Euler discretization of the neural ODE. The two models are thus strongly related, in that the behavior of either can be regarded as an approximation of the behavior of the other. In this work, we establish a more formal relationship between these two models by bounding the approximation error between two such related models. The obtained error bound then allows us to use one of the models as a verification proxy for the other, without running the verification tools twice: if the reachable output set, expanded by the error bound, satisfies a safety property on one of the models, this safety property is then guaranteed to hold on the other model as well. This feature is fully reversible, and the initial safety verification can be run on either of the two models. This novel approach is illustrated on a numerical example of a fixed-point attractor system modeled as a neural ODE.
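To make the ResNet/neural-ODE correspondence concrete, the following sketch (with a hypothetical vector field `f` standing in for a trained network, not the paper's model) compares one Euler step of size 1, i.e., a single residual block, against a finely discretized integration of the same ODE:

```python
import numpy as np

# Hypothetical vector field f: in a neural ODE this would be a trained
# network; here a simple smooth map serves as a stand-in.
def f(x):
    return np.tanh(x) - 0.5 * x

def resnet_block(x):
    # A single residual block: one explicit Euler step of size 1.
    return x + f(x)

def neural_ode(x, steps=1000):
    # Integrate dx/dt = f(x) over t in [0, 1] with many small Euler steps,
    # approximating the continuous flow.
    h = 1.0 / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

x0 = np.array([1.0, -0.5])
err = np.linalg.norm(resnet_block(x0) - neural_ode(x0))
# The two outputs are close but not equal; bounding this gap over a whole
# input set is precisely the goal of the error-bounding problem below.
```

The gap `err` is the pointwise discretization error that the paper's bound controls uniformly over an input set.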
Remark 1
The case where \( f\) contains piecewise-affine activation functions such as ReLU can in principle be handled as well, since our approach only requires their derivatives to be bounded (but not necessarily continuous). For the sake of clarity of presentation, however (to avoid the case decompositions induced by each ReLU activation), this case is kept outside the scope of the present paper.
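To illustrate why bounded (though discontinuous) derivatives suffice in principle, note that the ReLU derivative is 0 or 1 wherever it exists; a quick numerical probe (illustrative only):

```python
import numpy as np

# ReLU and its almost-everywhere derivative: 0 for x < 0, 1 for x > 0.
relu = lambda x: np.maximum(x, 0.0)
drelu = lambda x: (x > 0).astype(float)

xs = np.linspace(-2.0, 2.0, 10001)
# The derivative jumps at 0 (discontinuous) but is globally bounded by 1,
# which is the only property the error-bounding argument needs.
bounded = bool(np.all(np.abs(drelu(xs)) <= 1.0))
```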
Problem 1 (Error Bounding)
Given an input set \( \mathcal{X}_{in}\subseteq\mathbb{R}^n\) , we want to over-approximate the set \( \mathcal{R}_\varepsilon(\mathcal{X}_{in})\) of errors between the ResNet (2) and neural ODE (1) models, defined as:
Problem 2 (Verification Proxy)
Given an input-output safety property defined by an input set \( \mathcal{X}_{in} \subseteq \mathbb{R}^n\) and a safe output set \( \mathcal{X}_{s} \subseteq \mathbb{R}^n\) , the verification problem consists of checking whether the reachable output set of a model is fully contained in the targeted safe set: \( \mathcal{R}(\mathcal{X}_{in})\subseteq\mathcal{X}_{s}\) . In this paper, we aim to verify this safety property on one model by relying only on the error set \( \mathcal{R}_\varepsilon(\mathcal{X}_{in})\) from Problem 1 and a reachability analysis of the other model.
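The proxy idea can be sketched numerically with hypothetical stand-in models and boxes (not the paper's guaranteed method): probe the pointwise errors by sampling, pad them into an error box, inflate the reachable box of one model by that box, and test inclusion in the safe set. Note that sampling only under-approximates \( \mathcal{R}_\varepsilon(\mathcal{X}_{in})\) , whereas Problem 1 requires a guaranteed over-approximation; the padding below merely mimics that.

```python
import numpy as np

# Illustrative stand-ins: f plays the role of a trained vector field,
# the ResNet map is one Euler step, the neural ODE is finely integrated.
def f(x):
    return np.tanh(x) - 0.5 * x

def resnet(x):
    return x + f(x)

def node_flow(x, steps=1000):
    h = 1.0 / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

# Sample pointwise errors over X_in = [-1, 1]^2 and pad into an error box.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(500, 2))
errors = np.array([resnet(x) - node_flow(x) for x in samples])
err_lo = errors.min(axis=0) - 0.05   # padded lower error bound
err_hi = errors.max(axis=0) + 0.05   # padded upper error bound

# Proxy check: inflate the reachable box of the verified model by the
# error box (interval Minkowski sum), then test inclusion in the safe set.
reach_lo, reach_hi = np.array([-0.8, -0.3]), np.array([0.8, 0.3])  # hypothetical reachable box
safe_lo, safe_hi = np.array([-1.0, -0.5]), np.array([1.0, 0.5])    # hypothetical safe set X_s
infl_lo, infl_hi = reach_lo + err_lo, reach_hi + err_hi
verified = bool(np.all(safe_lo <= infl_lo) and np.all(infl_hi <= safe_hi))
```

If `verified` is true, the safety property transfers to the other model without running reachability analysis on it a second time.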
Proposition 1 (Lagrange Remainder)
There exists \( t^*\in[0,t]\) such that
(5)
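For context, a standard instance of such a statement (given here as a plausible form, not necessarily the paper's exact equation (5)) is the first-order Taylor expansion with Lagrange remainder of the flow of \( \dot{x}=f(x)\) :

\[
x(t) = x(0) + t\,f\big(x(0)\big) + \frac{t^{2}}{2}\,\ddot{x}(t^{*}),
\qquad
\ddot{x}(t^{*}) = \frac{\partial f}{\partial x}\big(x(t^{*})\big)\,f\big(x(t^{*})\big).
\]

The first two terms are exactly one Euler step of size \( t\) , i.e., the ResNet block, so the remainder term is what the error bound of Problem 1 must control.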
Theorem 1
The set \( \Omega_\varepsilon(\mathcal{X}_{in})\) obtained after the second step described above solves Problem 1:
Theorem 2 (Soundness)
If either Algorithm 1 or Algorithm 2 returns Safe, then the safety property in the sense of Problem 2 holds [13].
[1] Safety verification for neural networks based on set-boundary analysis. In: International Symposium on Theoretical Aspects of Software Engineering, pp. 248--267. Springer (2023)