11.6. Markov chain Monte Carlo approach

In a Bayesian framework, Markov chain Monte Carlo (MCMC) methods are a powerful calibration tool. They are especially valuable when the statistical model cannot be handled analytically, for instance when the prior distribution has a complex structure or the model is nonlinear. Unlike many classical approaches, MCMC does not require the assumption of Gaussian errors: it remains applicable even when the likelihood is non-Gaussian.

Rather than providing a single “best-fit” solution (as in minimisation techniques), MCMC generates a collection of parameter samples that represents the full posterior distribution (as ABC methods do). This comes at the cost of a potentially high computational demand, since long sampling chains may be required to reach convergence and reliable estimates. Users should therefore interpret the results as distributions and check convergence diagnostics before drawing conclusions (see [Bla17] for more details).
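One widely used convergence diagnostic is the Gelman-Rubin statistic (R-hat), which compares the variance between several independent chains to the variance within each chain. The sketch below is purely illustrative and independent of the library; the function name and setup are ours, not part of the TMCMC API:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for a list of equal-length 1-D sample chains.

    Values close to 1 suggest the chains sample the same distribution;
    values well above ~1.1 indicate that more sampling is needed.
    """
    arr = np.asarray(chains)              # shape (m chains, n samples)
    m, n = arr.shape
    chain_means = arr.mean(axis=1)
    B = n * chain_means.var(ddof=1)       # between-chain variance
    W = arr.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n     # pooled variance estimate
    return np.sqrt(var_hat / W)

# Four chains drawn from the same distribution: R-hat should be close to 1.
rng = np.random.default_rng(0)
chains = [rng.normal(0.0, 1.0, 5000) for _ in range(4)]
print(gelman_rubin(chains))
```

Chains that have not mixed (for instance, chains stuck around different modes) yield an R-hat well above 1, signalling that the samples should not yet be trusted.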

Using the TMCMC class can be summarised in a few key steps:

  1. Prepare the data and the model:

    • The parameters to be calibrated must be instances of classes inheriting from TStochasticAttribute;

    • Select the assessor type and construct the TMCMC object with the appropriate likelihood function (see Constructing the TMCMC object).

  2. Set the algorithm properties.

  3. Perform the estimation.

  4. Perform the post-processing.

  5. Analyse the results.
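The steps above can be mirrored with a plain random-walk Metropolis sampler. The following standalone Python sketch is not the TMCMC API: the data, model, and all names are illustrative assumptions, chosen to echo the linear-regression use case.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Prepare the data and the model: synthetic linear data y = a*x + b + noise.
a_true, b_true, sigma = 2.0, 1.0, 0.5
x = np.linspace(0.0, 1.0, 50)
y = a_true * x + b_true + rng.normal(0.0, sigma, x.size)

def log_posterior(theta):
    a, b = theta
    resid = y - (a * x + b)
    # Gaussian likelihood with known sigma; flat (improper) prior on (a, b).
    return -0.5 * np.sum(resid**2) / sigma**2

# 2. Set the algorithm properties: chain length, burn-in, proposal width.
n_steps, burn_in, step = 20_000, 5_000, 0.1

# 3. Perform the estimation: random-walk Metropolis sampling.
theta = np.array([0.0, 0.0])
log_p = log_posterior(theta)
samples = []
for _ in range(n_steps):
    proposal = theta + rng.normal(0.0, step, size=2)
    log_p_new = log_posterior(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:   # Metropolis acceptance
        theta, log_p = proposal, log_p_new
    samples.append(theta.copy())

# 4. Post-processing: discard the burn-in (thinning could be added similarly).
chain = np.array(samples)[burn_in:]

# 5. Analyse the results: posterior means and spreads for a and b.
print("a:", chain[:, 0].mean(), "+/-", chain[:, 0].std())
print("b:", chain[:, 1].mean(), "+/-", chain[:, 1].std())
```

The output is a set of posterior samples rather than a single point estimate, which is exactly the shift in interpretation described above: summaries such as means, standard deviations, or credible intervals are all read off the retained chain.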

Two examples are also provided in the use-case section (see Macro “calibrationMCMCFlowrate1D.py” and Macro “calibrationMCMCLinReg.py”).