
The modular approach


Uranie (the version under discussion here being v4.9.0) is a software platform dedicated to performing studies on uncertainty propagation, sensitivity analysis, and surrogate model generation and calibration, based on ROOT (the corresponding version being v6.32.02).

As a result, Uranie benefits from numerous features of ROOT, among which:

- an interactive C++ interpreter (Cling), built on top of LLVM and Clang;
- a Python interface (PyROOT);
- access to SQL databases;
- many advanced data visualisation features;
- and much more...

In the following sections, the ROOT platform is briefly introduced, as well as the Python interface it brings once the Uranie classes are declared and known. The organisation of the Uranie platform is then presented at a broad scale, with pointers to the more detailed discussions found throughout this documentation.
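
As an illustration of this Python interface, the following minimal sketch loads some of the Uranie modules through PyROOT and creates an empty dataserver. The import path shown here assumes the URANIE Python package shipped with the platform; adapt it to your installation if needed.

    from URANIE import DataServer, Sampler, Launcher, Modeler, Sensitivity

    # Create an (empty for now) dataserver; Uranie classes follow the ROOT
    # naming conventions (TNamed), hence the (name, title) constructor.
    tds = DataServer.TDataServer("tds", "a first dataserver")
    print(tds.GetName())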

I. Uranie modules organisation

The platform consists of a set of so-called technical libraries, or modules (represented as green boxes in Figure I.1), each performing a specific task.

Figure I.1. Organisation of the Uranie modules (green boxes) in terms of inter-dependencies. The blue boxes represent the external dependencies (discussed later on).



In the rest of this section, each of the modules covered in this documentation is briefly described (its role and its main components). A more detailed description is given in the dedicated chapters, following the links indicated below.

I.1. Dataserver module

The DataServer library (cf Chapter II) is the core of the Uranie platform. It describes the central element of Uranie: the dataserver. This object contains all the necessary information about the variables of a problem (names, units, probability laws, data files, and so on) and makes it possible to perform the very basic statistical operations.
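
As a hedged sketch of how a dataserver is typically filled, the snippet below declares two random variables with their probability laws; the file name used with fileDataRead is purely illustrative.

    from URANIE import DataServer

    # Dataserver holding the problem variables
    tds = DataServer.TDataServer("tdsDemo", "demo dataserver")

    # Two uncertain inputs with their probability laws
    tds.addAttribute(DataServer.TNormalDistribution("x1", 0.0, 1.0))    # mean, sigma
    tds.addAttribute(DataServer.TUniformDistribution("x2", -1.0, 1.0))  # min, max

    # Alternatively, data can be loaded from an ASCII file (hypothetical name)
    # tds.fileDataRead("my_data.dat")
    # Basic statistics can then be computed, e.g. tds.computeStatistic("x1")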

I.2. Sampler module

The Sampler library (cf Chapter III) makes it possible to create designs-of-experiments from the dataserver attributes that are random variables. A large variety of designs-of-experiments is available, some of which are only meant to be called by more sophisticated methods.
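
For instance, a Latin Hypercube design can be generated on the stochastic attributes of a dataserver as sketched below; the class and method names follow the examples of Chapter III.

    from URANIE import DataServer, Sampler

    tds = DataServer.TDataServer("tdsSampling", "sampling demo")
    tds.addAttribute(DataServer.TNormalDistribution("x1", 0.0, 1.0))
    tds.addAttribute(DataServer.TUniformDistribution("x2", -1.0, 1.0))

    # Latin Hypercube Sampling design of 500 points, stored in the dataserver
    sam = Sampler.TSampling(tds, "lhs", 500)
    sam.generateSample()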

I.3. Launcher module

The Launcher and Relauncher libraries (cf Chapter IV and Chapter VIII) are technical modules that apply an analytic function (Python or C++), an external simulation code, or a combination of the aforementioned, to the content of a dataserver. The dataserver content can either result from a design-of-experiments generated with one of the sampler objects, or be loaded from an external source (ASCII file, SQL database, etc.). These modules provide different approaches to distributing the computations, locally or on clusters (fork, split memory, shared memory, ...).
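
The sketch below applies a simple analytic Python function to a design-of-experiments through the Launcher module. It is only a sketch: the exact calling convention expected by TLauncherFunction (how inputs are passed and outputs returned) is detailed in Chapter IV.

    from URANIE import DataServer, Sampler, Launcher

    tds = DataServer.TDataServer("tdsLauncher", "launcher demo")
    tds.addAttribute(DataServer.TUniformDistribution("x1", 0.0, 1.0))
    tds.addAttribute(DataServer.TUniformDistribution("x2", 0.0, 1.0))
    Sampler.TSampling(tds, "lhs", 100).generateSample()

    # Simple analytic model; the input/output handling should be checked
    # against the examples of Chapter IV.
    def mySum(x):
        return [x[0] + x[1]]

    tlf = Launcher.TLauncherFunction(tds, mySum, "x1:x2", "ysum")
    tlf.run()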

I.4. Modeler module

The Modeler library (cf Chapter V) allows the construction of a surrogate model (polynomial models, neural networks, ...) that links the outputs to the input factors in order to mimic, as closely as possible, the behaviour of the set of points provided to train the surrogate model.
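
As a minimal sketch, a linear-regression surrogate can be built on a training sample already stored in a dataserver; the file name below is purely illustrative, and other modelers (neural networks, ...) follow the same pattern, as described in Chapter V.

    from URANIE import DataServer, Modeler

    # Training sample with inputs x1, x2 and output y (hypothetical file)
    tds = DataServer.TDataServer("tdsModel", "training data")
    tds.fileDataRead("training_sample.dat")

    # Fit a linear regression surrogate y ~ x1, x2
    tlin = Modeler.TLinearRegression(tds, "x1:x2", "y")
    tlin.estimate()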

I.5. Sensitivity module

The Sensitivity library (cf Chapter VI) makes it possible to perform a sensitivity analysis of one of the output responses with respect to the input factors. The very basic concepts of sensitivity analysis are introduced in the introduction of that chapter, while they are discussed a bit more thoroughly in the methodological guide.
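
A typical Sobol' analysis on a simple analytic function can be sketched as follows; the constructor arguments (number of samples, input and output names) follow the examples of Chapter VI and may need adjusting.

    from URANIE import DataServer, Sensitivity

    tds = DataServer.TDataServer("tdsSA", "sensitivity demo")
    tds.addAttribute(DataServer.TUniformDistribution("x1", 0.0, 1.0))
    tds.addAttribute(DataServer.TUniformDistribution("x2", 0.0, 1.0))

    # Analytic model whose output sensitivity to x1 and x2 is studied
    def myModel(x):
        return [x[0] + 2.0 * x[1]]

    # Sobol' first-order and total indices estimated from 1000 samples
    tsobol = Sensitivity.TSobol(tds, myModel, 1000, "x1:x2", "y")
    tsobol.computeIndexes()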

I.6. Optimisation module

The Optimizer and Reoptimizer libraries (cf Chapter VII and Chapter IX) are dedicated to optimisation and model calibration. Model calibration consists in adjusting the "degrees of freedom" of a model so that future simulations will optimally fit an experimental database. Optimisation is a complex procedure, and several techniques are available to perform it, whether single-criterion or multi-criteria, with or without constraints.
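
To illustrate the notion of single-criterion constrained optimisation independently of the Uranie API, here is a small self-contained Python sketch; it relies on scipy for brevity and is not the Optimizer/Reoptimizer interface itself, which is described in Chapter VII and Chapter IX.

    import numpy as np
    from scipy.optimize import minimize

    # Single-criterion problem: minimise f(x) subject to x1 + x2 <= 2
    def f(x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    cons = ({"type": "ineq", "fun": lambda x: 2.0 - (x[0] + x[1])},)
    res = minimize(f, x0=[0.0, 0.0], constraints=cons)
    print("constrained optimum:", res.x)   # close to (0.5, 1.5)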

I.7. Optimisation with surrogate models module

The MetaModelOptim library (cf Chapter X) is dedicated to optimisation techniques that couple the generation of surrogate models (in particular kriging) with evolutionary algorithms in order to obtain an EGO-like approach.
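
The key ingredient of an EGO-like approach is an acquisition criterion, typically the expected improvement computed from the kriging prediction. The concept-level sketch below (plain Python, not the MetaModelOptim API) shows how it balances exploitation and exploration.

    import numpy as np
    from scipy.stats import norm

    # Expected Improvement for minimisation: mu and sigma are the kriging
    # mean and standard deviation at a candidate point, f_best the best
    # objective value observed so far.
    def expected_improvement(mu, sigma, f_best):
        sigma = max(sigma, 1e-12)              # guard against zero variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    print(expected_improvement(mu=0.8, sigma=0.3, f_best=1.0))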

I.8. Calibration module

The Calibration library (cf Chapter XI) is a dedicated module used to obtain the best estimates of some of the parameters of a specific model under consideration. It provides different techniques, each relying on its own hypotheses about the model, but all of these methods need data to perform the calibration.
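
Conceptually, calibration amounts to minimising a misfit between the model predictions and the experimental data. The hedged sketch below illustrates this with a one-parameter model and a least-squares criterion (plain Python, not the Calibration API, whose techniques are described in Chapter XI).

    import numpy as np
    from scipy.optimize import minimize

    # Experimental database (illustrative values)
    x_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    y_obs = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

    # Model with one degree of freedom theta to calibrate
    def model(theta, x):
        return theta * x + 1.0

    # Least-squares misfit between simulations and experiments
    def cost(theta):
        return np.sum((model(theta[0], x_obs) - y_obs) ** 2)

    res = minimize(cost, x0=[1.0])
    print("calibrated theta:", res.x[0])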

I.9. Reliability module

The Reliability library (cf Chapter XIII) is a module dedicated to low-probability events, i.e. the estimation of the probability of rare events.
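
As a reminder of why dedicated methods are needed, the concept-level sketch below estimates a failure probability by crude Monte Carlo on a hypothetical limit-state function; for truly rare events this brute-force approach becomes prohibitively expensive, which motivates the techniques of the Reliability module.

    import numpy as np

    rng = np.random.default_rng(2024)

    # Hypothetical limit-state function: "failure" when g(x) < 0
    def g(x):
        return 3.0 - (x[:, 0] + x[:, 1])

    # Crude Monte Carlo estimate of the failure probability
    x = rng.standard_normal((1_000_000, 2))
    p_fail = np.mean(g(x) < 0.0)
    print("estimated failure probability:", p_fail)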