---
myst:
  substitutions:
    sentence1: ""
    sentence2: "as noted at the end of this section"
---

```{include} /../core/calibration/linear_bayesian.md
```

In the specific case of a linear model, one can write $f_\theta(\mathbf{x})=h^T(\mathbf{x})\theta$, where $h(\mathbf{x})$ is the regressor vector. This formulation can include a "hidden virtual" constant regressor $h_0(\mathbf{x}) = 1$, whose associated coefficient $\theta_0$ integrates a constant term into the regression (to describe a pedestal). Using the statistical approach introduced in [](#calibration_reminder), one can also define the covariance matrix of the residuals, written hereafter as $\Sigma={\rm diag}(\sigma^2_{\varepsilon_1},\ldots,\sigma^2_{\varepsilon_n})$. From there, one can construct the *design matrix* $H = [h(\mathbf{x}_1), \ldots, h(\mathbf{x}_n)]^T \in M_{n,p}(\mathbb{R})$, whose columns span the subspace onto which the model is projected. With a normal prior of the form $\theta \sim \mathcal{N}(m_\theta, \Sigma_\theta)$, the posterior is also normal, so it can be written $\pi(\theta|\mathbf{y}) \sim \mathcal{N}(m^{post}_\theta,\Sigma^{post}_\theta)$, where its parameters are expressed as

```{math}
:label: eq_linPostMeanCalib
m^{post}_{\theta} = \Big( \Sigma^{-1}_{\theta} + H^{T} \Sigma^{-1} H \Big)^{-1} \Big( m^{T}_{\theta} \Sigma^{-1}_{\theta} + \mathbf{y}^{T} \Sigma^{-1} H \Big)^{T}
```

and

```{math}
:label: eq_linPostStdCalib
\Sigma^{post}_{\theta} = \Big( \Sigma^{-1}_{\theta} + H^{T} \Sigma^{-1} H \Big)^{-1}
```

It is also possible, as introduced in [](#calibration_reminder_discussing_theoretical_bayesian_approach), to use a non-informative **prior** such as the Jeffreys prior: an improper flat prior ($\pi(\theta)\propto 1$) {cite}`bioche2015approximation`, whose posterior distribution (in the linear case) is also Gaussian.
For this prior, the posterior parameters are those obtained with a Gaussian prior, given in {eq}`eq_linPostMeanCalib` and {eq}`eq_linPostStdCalib`, in the limit where the prior precision $\Sigma^{-1}_\theta$ vanishes:

```{math}
:label: eq_linPostMeanStdCalibJef
m^{post}_{\theta} = \Big(H^{T} \Sigma^{-1} H \Big)^{-1} H^{T} \Sigma^{-1} \mathbf{y} \;\; {\rm and} \; \; \Sigma^{post}_{\theta} = \Big(H^{T} \Sigma^{-1} H \Big)^{-1}
```

This final form coincides with the classical result of linear regression within the weighted least squares approach {cite}`Fry2010`.

```{toctree}
linear_bayesian/prediction
```
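As a numerical sanity check, the posterior updates above can be evaluated directly. The sketch below uses a hypothetical quadratic calibration model with heteroscedastic noise (the data, noise levels, and prior values are illustrative assumptions, not taken from the text) and computes both the Gaussian-prior posterior and its Jeffreys-prior (weighted least squares) limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative only): quadratic model
# f_theta(x) = theta_0 + theta_1 x + theta_2 x^2, i.e. h(x) = (1, x, x^2),
# with heteroscedastic Gaussian residuals.
n, p = 20, 3
x = np.linspace(0.0, 1.0, n)
H = np.column_stack([np.ones(n), x, x**2])      # design matrix, shape (n, p)
theta_true = np.array([0.5, -1.0, 2.0])         # assumed "true" parameters
sigma_eps = 0.05 + 0.1 * x                      # residual standard deviations
Sigma = np.diag(sigma_eps**2)                   # residual covariance Sigma
y = H @ theta_true + rng.normal(0.0, sigma_eps)

# Gaussian prior N(m_theta, Sigma_theta) -- values chosen for illustration
m_theta = np.zeros(p)
Sigma_theta = 10.0 * np.eye(p)

Sigma_inv = np.linalg.inv(Sigma)
prior_prec = np.linalg.inv(Sigma_theta)

# Posterior covariance and mean (eq_linPostStdCalib and eq_linPostMeanCalib)
Sigma_post = np.linalg.inv(prior_prec + H.T @ Sigma_inv @ H)
m_post = Sigma_post @ (prior_prec @ m_theta + H.T @ Sigma_inv @ y)

# Jeffreys (improper flat) prior: drop the Sigma_theta terms
# (eq_linPostMeanStdCalibJef); this is the weighted least-squares estimator.
Sigma_post_jef = np.linalg.inv(H.T @ Sigma_inv @ H)
m_post_jef = Sigma_post_jef @ H.T @ Sigma_inv @ y
```

Letting the prior covariance grow ($\Sigma^{-1}_\theta \to 0$) makes `m_post` converge to `m_post_jef`, which illustrates the equivalence stated above.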