(sampler_stochastic_method_maximin_lhs_introduction)=
# Introduction

Considering the definition of an LHS sampling, introduced in [](#sampler_stochastic_method_introduction), it is clear that permuting one coordinate of two different points creates a new sampling. Looking at the x-coordinate (corresponding to a normal distribution) in {numref}`sampler_srs_lhs_8_points`, one could move the point lying in the second equi-probable interval into the sixth one, and the point lying in the sixth equi-probable interval into the second one, without changing the y-coordinates. The result of this permutation is a new sampling, with the interesting property of remaining an LHS sampling. A follow-up question is then: what is the difference between these two samplings, and would there be any reason to try many permutations? This is a very brief introduction to a dedicated field of research: the optimisation of a {{doe}} with respect to the goals of the ongoing analysis.

In {{uranie}}, a new kind of LHS sampling has recently been introduced, called maximin LHS, whose purpose is to maximise the minimal distance between any two points. The distance under consideration is the **mindist** criterion: let $D=[\mathbf{x}_1, \cdots, \mathbf{x}_N] \subset [0,1]^{d}$ be a {{doe}} with $N$ points. The mindist criterion is written as:

```{math}
:label: eq_mindist
\min_{i \neq j} ||\mathbf{x}_i-\mathbf{x}_j||_{2}
```

where $||.||_2$ is the Euclidean norm. The designs which maximise the mindist criterion are referred to as maximin LHS; generally speaking, any design with a large value of the mindist criterion is referred to as maximin LHS as well. It has been observed that the best designs in terms of maximising ({eq}`eq_mindist`) can be constructed by minimising its $L^{p}$ regularisation instead, written as

```{math}
\phi_p := \Big[ \sum_{1 \leq i < j \leq N} ||\mathbf{x}_i-\mathbf{x}_j||_{2}^{-p} \Big]^{1/p}
```

When $p \to \infty$, minimising $\phi_p$ becomes equivalent to maximising the mindist criterion.
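As a minimal numerical sketch of the ideas above, the snippet below builds a small LHS, swaps one coordinate between two points to show the design remains an LHS, and evaluates the mindist and $\phi_p$ criteria. The helper names (`lhs`, `is_lhs`, `mindist`, `phi_p`) are illustrative only and are not part of the {{uranie}} API:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

def lhs(n, d, rng):
    """Random LHS on [0,1]^d: one point per equi-probable stratum in each dimension."""
    u = rng.random((n, d))                                  # position inside each stratum
    perms = np.array([rng.permutation(n) for _ in range(d)]).T  # stratum index per point/dim
    return (perms + u) / n

def is_lhs(design):
    """LHS property: each of the n strata holds exactly one point in every dimension."""
    n = design.shape[0]
    strata = np.floor(design * n).astype(int)
    return all(sorted(strata[:, k]) == list(range(n)) for k in range(design.shape[1]))

def mindist(design):
    """Minimal Euclidean distance between two distinct points (the mindist criterion)."""
    return min(np.linalg.norm(a - b) for a, b in itertools.combinations(design, 2))

def phi_p(design, p=50):
    """Morris-Mitchell phi_p, the L^p regularisation of the mindist criterion."""
    dists = [np.linalg.norm(a - b) for a, b in itertools.combinations(design, 2)]
    return sum(dd ** (-p) for dd in dists) ** (1.0 / p)

X = lhs(8, 2, rng)

# Swap the x-coordinate of two points: the design is still an LHS ...
Y = X.copy()
Y[[1, 5], 0] = Y[[5, 1], 0]

# ... but its space-filling quality (mindist, phi_p) generally changes,
# which is precisely what a maximin-LHS search exploits.
```

A search for a maximin LHS would repeat such swaps, keeping those that decrease `phi_p` (or increase `mindist`).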