11.4. Analytical linear Bayesian estimation

This method is algorithmically simple: it consists mainly of the analytical formulation of the posterior distribution under two assumptions: the problem can be considered linear, and the prior distributions are normal (or non-informative/flat, as discussed in [Bla17]).
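To make the analytical formulation concrete, here is a minimal NumPy sketch (not the URANIE API; all names are illustrative) of the conjugate update for a linear model \(y = X\theta + \varepsilon\) with \(\varepsilon \sim \mathcal{N}(0, \Sigma)\) and a normal prior \(\theta \sim \mathcal{N}(\mu_0, \Sigma_0)\):

```python
import numpy as np

def linear_bayes_posterior(X, y, Sigma, mu0, Sigma0):
    """Closed-form posterior mean and covariance of theta for a
    linear-Gaussian model (conjugate update).  Illustrative sketch only."""
    Sigma_inv = np.linalg.inv(Sigma)
    Sigma0_inv = np.linalg.inv(Sigma0)
    # Posterior covariance: (Sigma0^-1 + X^T Sigma^-1 X)^-1
    Sigma_post = np.linalg.inv(Sigma0_inv + X.T @ Sigma_inv @ X)
    # Posterior mean: prior and data contributions weighted by their precisions
    mu_post = Sigma_post @ (Sigma0_inv @ mu0 + X.T @ Sigma_inv @ y)
    return mu_post, Sigma_post
```

With a very wide (nearly flat) prior, the posterior mean reduces to the generalised least-squares estimate, which is the non-informative limit mentioned above.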

In practice, this technique follows the procedure described in Calibration classes, distance and likelihood functions, observations and model, with one important difference: the code or function passed to the constructor of the TLinearBayesian object is not strictly necessary. Since the parameter estimate is analytical, the main purpose of providing an assessor is to obtain both the a priori and a posteriori residual distributions.

The usage of the TLinearBayesian class can be summarised in a few key steps:

  1. Prepare the data and the model:

  2. Set the algorithm properties:

    • Provide the input covariance matrix, i.e., the reference observation covariance (denoted \(\Sigma\) in [Bla17]). This step is mandatory, as the covariance matrix is needed to compute the posterior distribution;

    • Specify the names of the regressors. This is also a key step, as a regressor can be an input variable but also any function of one or several input variables. This is discussed in Defining the TLinearBayesian properties;

    • Optionally, provide a transformation function. This is discussed in Transformation of the results.

  3. Perform the estimate and analyse the results:

    • Run the estimation process;

    • Extract the results and visualise them with the standard plotting tools (see Looking at the results).
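The steps above can be sketched end to end in plain NumPy (again, a hypothetical illustration, not the URANIE API). It shows in particular that a regressor may be a function of an input variable (here \(x\) and \(x^2\)), that the observation covariance \(\Sigma\) enters the posterior computation, and how the a priori and a posteriori residuals are obtained:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Prepare the data and the model: synthetic observations of a model
#    that is linear in its parameters, y = theta1 * x + theta2 * x^2.
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([x, x**2])            # regressors: x and x^2
theta_true = np.array([1.5, -0.7])
Sigma = 0.05**2 * np.eye(x.size)          # observation covariance (the Sigma of [Bla17])
y = X @ theta_true + rng.multivariate_normal(np.zeros(x.size), Sigma)

# 2. Set the algorithm properties: prior on the parameters.
mu0 = np.zeros(2)
Sigma0 = 10.0 * np.eye(2)

# 3. Perform the estimate: analytical conjugate linear-Gaussian update.
Sigma_post = np.linalg.inv(np.linalg.inv(Sigma0) + X.T @ np.linalg.inv(Sigma) @ X)
mu_post = Sigma_post @ (np.linalg.inv(Sigma0) @ mu0 + X.T @ np.linalg.inv(Sigma) @ y)

# Residuals under the prior and posterior parameter estimates; these are
# what an assessor function would be used for in the analytical case.
res_prior = y - X @ mu0
res_post = y - X @ mu_post
```

The a posteriori residuals should be markedly smaller than the a priori ones, which is exactly what the standard plotting tools are meant to visualise.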

An example is also provided in the use-case section (see Macro “calibrationLinBayesFlowrate1D.py”).