1. Glossary
Analysis of variance or ANOVA (Analyse de variance): decomposition of the variance into elementary pieces (also known as HDMR, Hoeffding’s decomposition, Sobol’s decomposition… cf. No hypothesis on the model).
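As a small illustration of this variance breakdown (the additive model and numbers below are invented for the example, not taken from the referenced chapter), each input of an additive model contributes its own piece of the total variance:

```python
import numpy as np

# Toy additive model Y = X1 + 2*X2 with independent standard normal inputs:
# the variance decomposes exactly as Var(Y) = Var(X1) + 4*Var(X2).
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = x1 + 2.0 * x2

total = y.var()
piece1 = x1.var()          # elementary piece due to X1
piece2 = (2.0 * x2).var()  # elementary piece due to X2

# First-order Sobol indices are the normalised pieces (analytically 1/5 and 4/5 here).
s1, s2 = piece1 / total, piece2 / total
print(s1, s2)
```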
Cumulative distribution function or CDF (Fonction de répartition): function of a real-valued random variable \(X\) which, once evaluated at \(x\), gives the probability that \(X\) will take a value less than or equal to \(x\) (cf. The probability distributions).
Kriging or Gaussian process (Krigeage ou processus gaussien): a family of interpolation methods that use information about the “spatial” correlation between observations to make predictions, with a confidence interval, at new locations (cf. The kriging method).
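A minimal kriging sketch (our own toy example with a fixed Gaussian kernel and invented 1-D data; the actual method is described in the chapter The kriging method), showing both the prediction and its variance at a new location:

```python
import numpy as np

# Gaussian ("RBF") correlation kernel with a fixed, hand-picked length scale.
def rbf(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_obs = np.array([0.0, 0.3, 0.6, 1.0])
y_obs = np.sin(2.0 * np.pi * x_obs)

K = rbf(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))  # jitter for numerical stability
x_new = np.array([0.45])
k_star = rbf(x_new, x_obs)

mean = k_star @ np.linalg.solve(K, y_obs)                        # prediction
var = rbf(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)  # its uncertainty
print(mean.item(), var.item())
```

The variance shrinks to zero at the observations themselves, which is what provides the confidence interval mentioned in the definition.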
Latin hypercube sampling or LHS (Échantillonnage par hypercube latin): sampling method that stratifies the probability space by dividing it into intervals of equal probability (cf. Introduction).
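A minimal sketch of the idea (our own helper, not a library function): each axis is cut into \(n\) equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per axis:

```python
import numpy as np

def lhs(n, d, rng=None):
    rng = np.random.default_rng(rng)
    # One point per stratum [i/n, (i+1)/n) on every axis...
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    # ...then shuffle each column so the axes are decoupled.
    for j in range(d):
        rng.shuffle(u[:, j])
    return u  # points in [0, 1)^d

design = lhs(8, 2)
# Each of the 8 strata is hit exactly once per dimension:
print(np.sort((design[:, 0] * 8).astype(int)))  # [0 1 2 3 4 5 6 7]
```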
Leave-one-out or LOO (validation croisée un contre tous): type of cross-validation in which a surrogate model is re-trained on the learning database with a single point removed, in order to obtain an estimate from this new model at that precise point (cf. Adapting the fitting strategy).
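The procedure can be sketched as follows (the polynomial surrogate and data are invented for the example): the model is refit \(n\) times, each time dropping one learning point and predicting it with the refitted model:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10)
y = np.sin(2.0 * np.pi * x)

loo_errors = []
for i in range(len(x)):
    keep = np.arange(len(x)) != i
    coeffs = np.polyfit(x[keep], y[keep], deg=3)  # refit without point i
    pred = np.polyval(coeffs, x[i])               # predict the held-out point
    loo_errors.append((pred - y[i]) ** 2)

print(np.mean(loo_errors))  # LOO mean squared error of the surrogate
```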
Likelihood (vraisemblance): the hypothetical probability that an event that has already occurred would yield a specific outcome. The concept differs from that of a probability in that a probability refers to the occurrence of future events, while a likelihood refers to past events with known outcomes [Wei].
Low discrepancy sequence (Suite à faible discrépance): sequence whose discrepancy is low, meaning the proportion of its points falling into an arbitrary set \(B\) is approximately proportional to the measure of \(B\) (cf. QMC method).
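For instance, the Halton sequence is a classic low discrepancy sequence; a short sketch of it (our own helper) reverses the base-\(b\) digits of the point index to fill \([0,1)\) very evenly:

```python
def halton(index, base):
    """Return the index-th point of the Halton sequence in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        index, digit = divmod(index, base)
        result += digit * f  # place each base-b digit after the radix point
        f /= base
    return result

# First points in base 2: 1/2, 1/4, 3/4, 1/8, 5/8, ...
points = [halton(i, 2) for i in range(1, 6)]
print(points)  # [0.5, 0.25, 0.75, 0.125, 0.625]
```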
Pareto front (front de Pareto): a set of nondominated solutions, each considered optimal in the sense that no objective can be improved without sacrificing at least one other objective (cf. The pareto concept in a nutshell).
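A minimal sketch of the nondominated filter (our own helper, assuming all objectives are minimised):

```python
def pareto_front(points):
    """Keep only the nondominated objective vectors of a list."""
    front = []
    for p in points:
        # p is dominated if some other point is <= everywhere and differs somewhere.
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(candidates))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

Here (3.0, 3.0) is dropped because (2.0, 2.0) improves both objectives at once.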
Pearson coefficient (Coefficient de Pearson): the linear correlation coefficient between two variables (cf. The monotone case).
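As a reminder of the definition (the data below are invented for the example), it is the covariance of the two samples normalised by the product of their standard deviations, so it lies in \([-1, 1]\):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 * x + 2.0  # perfectly linear relation, so the coefficient is 1

# Sample covariance divided by the product of sample standard deviations.
rho = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(rho)  # 1.0 up to rounding
```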
Principal component analysis or PCA (Analyse en composantes principales): the process of computing the principal components and using them to perform a change of basis on the data, sometimes keeping only the first few principal components and ignoring the rest (cf. Combining these aspects: performing PCA).
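A short sketch of this change of basis via the SVD (the data are invented for the example: a nearly one-dimensional point cloud, so the first component carries almost all of the variance):

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.standard_normal(200)
# Two strongly correlated columns plus a little noise.
data = np.column_stack([t, 2.0 * t + 0.05 * rng.standard_normal(200)])

centred = data - data.mean(axis=0)           # PCA works on centred data
_, s, vt = np.linalg.svd(centred, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)          # share of variance per component
scores = centred @ vt[0]                     # data expressed on the first axis only
print(explained[0])                          # close to 1 for this cloud
```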
Probability density function or PDF (Densité de probabilité): function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample (cf. The probability distributions).
Quantile (Quantile): the quantile \(x_p\), for a probability \(p \in [0,1]\), is the lowest value of a random variable \(X\) such that \(P\left\{X \le x_p \right\} = p\) (cf. The quantile computation).
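The empirical counterpart of this definition can be sketched as follows (the sample is invented for the example): sort the observations and take the smallest one whose empirical CDF reaches \(p\):

```python
import numpy as np

sample = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
p = 0.5

ordered = np.sort(sample)
# Smallest observed value x_p with at least a fraction p of the sample <= x_p.
x_p = ordered[int(np.ceil(p * len(sample))) - 1]

print(x_p)                          # 3.0
print((sample <= x_p).mean() >= p)  # True
```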
Screening method (méthode de criblage): process that isolates and identifies the most influential inputs of a model with the minimum number of steps and the least number of model evaluations (cf. Sensitivity analysis).
Simple random sampling or SRS (Échantillonnage simple aléatoire): independent generation of samples following the provided PDFs (cf. Introduction).
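A minimal sketch (the marginal distributions below are invented for the example): each input is simply drawn independently from its own marginal PDF, with no stratification:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
design = np.column_stack([
    rng.uniform(0.0, 1.0, n),   # first input ~ U(0, 1)
    rng.normal(10.0, 2.0, n),   # second input ~ N(10, 2^2)
])
print(design.shape)  # (5, 2)
```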
Sparse grids: numerical techniques used to represent, integrate or interpolate high-dimensional functions.