# Calibration in the context of the VVUQ principle

VVUQ is a well-known acronym standing for "Verification, Validation and Uncertainty Quantification". Within this framework, the calibration procedure of a model, sometimes also called "inverse problem" {cite}`Tarantola2005` or "data assimilation" {cite}`Asch2016` depending on the assumptions and the context, is an important step of uncertainty quantification. This step should not be confused with validation, even though both procedures rely on a comparison between reference data and model predictions. Their definitions, taken from {cite}`trucano2006calibration`, are given below.

**validation:**
: process of determining the degree to which a model is an accurate representation of the real world for its intended uses.

**calibration:**
: process of improving the agreement between model calculations and a chosen set of benchmarks by adjusting the parameters implemented in the model.

The underlying question of validation is "What is the confidence level that can be granted to the model given the difference seen between the predictions and physical reality?", while the underlying question of calibration is "Given the chosen model, which parameter value minimises the difference between a set of observations and its predictions, under the chosen statistical assumptions?".

It sometimes happens that a calibration problem admits an infinite number of equivalent solutions {cite}`Hansen1998`, for instance when the chosen model $f_\theta$ depends explicitly on a combination of two parameters. The simplest example is a model $f_\theta$ depending on two parameters only through their difference $\theta_1 - \theta_2$. In this particular case, every couple of parameters $(\theta_1, \theta_2)$ leading to the same difference $\theta_1 - \theta_2$ provides the exact same model prediction, which means that it is impossible to disentangle these solutions.
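The non-identifiability described above can be illustrated with a minimal sketch; the exponential model used here is a hypothetical choice for illustration, not one taken from the text:

```python
import numpy as np

def model(theta1, theta2, x):
    """Hypothetical model that depends on its parameters
    only through the difference theta1 - theta2."""
    return np.exp(-(theta1 - theta2) * x)

x = np.linspace(0.0, 1.0, 5)

# Two distinct parameter couples with the same difference ...
pred_a = model(3.0, 1.0, x)  # theta1 - theta2 = 2
pred_b = model(5.0, 3.0, x)  # theta1 - theta2 = 2

# ... produce exactly the same predictions, so no data set
# can disentangle (3, 1) from (5, 3).
print(np.allclose(pred_a, pred_b))  # True
```

Any calibration method applied to such a model can at best recover the difference $\theta_1 - \theta_2$, never the individual parameters.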
This issue, also known as parameter identifiability, is crucial, as one needs to consider how the chosen model is parameterised {cite}`walter1997identification`.

Defining a calibration analysis consists in several important steps:

- Specify the set of observations that will be used as reference;
- Specify the model that is supposed to adequately represent the real world;
- Define the parameters to be analysed, either by specifying *a priori* distributions or at least by setting a range; this step requires particular caution concerning identifiability;
- Choose the method used to calibrate the parameters;
- Choose the distance function used to quantify the discrepancy between the observations and the model predictions.
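The steps above can be sketched in a minimal least-squares setting; the linear model, the synthetic observations, and the choice of the sum of squared residuals as distance function are all illustrative assumptions, not prescriptions from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Observations used as reference (here, synthetic noisy data).
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * x_obs + 1.0 + 0.05 * rng.standard_normal(x_obs.size)

# 2. Model assumed to represent the real world: f_theta(x) = a*x + b.
# 3. Parameters to analyse: theta = (a, b), searched over all reals.
# 4. Calibration method: linear least squares.
# 5. Distance function: sum of squared residuals, minimised by lstsq.
design = np.column_stack([x_obs, np.ones_like(x_obs)])
theta_hat, *_ = np.linalg.lstsq(design, y_obs, rcond=None)

a_hat, b_hat = theta_hat
print(a_hat, b_hat)  # close to the true values (2.0, 1.0)
```

With a non-linear model, step 4 would typically be replaced by an iterative optimiser or a Bayesian method, but the decomposition into these five choices stays the same.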