As it is now, it is difficult for users to QC that the distributions they have chosen are as they would like them to be. I suggest adding a QC sheet to the generated design matrix with the following content (and perhaps more):
Statistical tables for each variable of what was actually sampled (min, P90, P50, mean, P10, max)
Estimated correlations between the variables from what was actually sampled
Plots of the histograms for each variable from what was actually sampled
The probability of each category of discrete variables from what was actually sampled
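The QC statistics listed above are straightforward to compute from the sampled design matrix with pandas. Below is a minimal sketch of what such a QC sheet could contain; the function names (`qc_summary`, `qc_correlation`, `qc_category_probs`) are hypothetical, not part of fmudesign, and P90/P10 are assumed to follow the oil-industry convention where P90 is the low value (the 10th percentile).

```python
import pandas as pd


def qc_summary(design: pd.DataFrame) -> pd.DataFrame:
    """Summary statistics for each numeric variable in a sampled design matrix.

    P90/P10 use the oil-industry convention: P90 is the value exceeded
    with 90 % probability, i.e. the 10th percentile.
    """
    num = design.select_dtypes("number")
    return pd.DataFrame({
        "min": num.min(),
        "P90": num.quantile(0.10),
        "P50": num.quantile(0.50),
        "mean": num.mean(),
        "P10": num.quantile(0.90),
        "max": num.max(),
    })


def qc_correlation(design: pd.DataFrame) -> pd.DataFrame:
    """Pairwise correlations between the numeric variables as sampled."""
    return design.select_dtypes("number").corr()


def qc_category_probs(design: pd.DataFrame, var: str) -> pd.Series:
    """Relative frequency of each category of a discrete variable."""
    return design[var].value_counts(normalize=True)
```

The resulting DataFrames could then be written to an extra sheet of the design-matrix workbook, e.g. with `pandas.ExcelWriter`.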
The tables should perhaps also compare against the input values. Users are usually not aware that the probabilities they specify are not reproduced exactly, and can even be off by quite a lot. This generates a lot of confusion, especially if a discrete variable is important for the result. Then it can matter that the sampled probabilities are (low, medium, high) = (0.25, 0.43, 0.32) instead of (0.3, 0.4, 0.3).
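The input-versus-sampled comparison could be a small table per discrete variable. A sketch, where `compare_category_probs` is a hypothetical helper and `target` holds the probabilities the user specified:

```python
import pandas as pd


def compare_category_probs(sampled: pd.Series, target: dict) -> pd.DataFrame:
    """Compare sampled category frequencies against the input probabilities.

    Returns a table with one row per category: the input probability,
    the observed sampled frequency, and their difference.
    """
    observed = sampled.value_counts(normalize=True)
    table = pd.DataFrame({
        "input": pd.Series(target),
        "sampled": observed,
    }).fillna(0.0)  # categories never sampled get frequency 0
    table["diff"] = table["sampled"] - table["input"]
    return table


# Example matching the scenario above: 100 realizations where the input
# probabilities (0.3, 0.4, 0.3) came out as (0.25, 0.43, 0.32).
drawn = pd.Series(["low"] * 25 + ["medium"] * 43 + ["high"] * 32)
print(compare_category_probs(drawn, {"low": 0.3, "medium": 0.4, "high": 0.3}))
```

A large `diff` column would flag exactly the kind of surprise described above before any ERT run is started.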
As I've understood it, the long-term plan is to have fmudesign as a part of ert. The QC functionality you suggest is probably also relevant for parameters drawn by GEN_KW. But for current use, I think all the parameter statistics mentioned are covered by webviz? Just that webviz uses parameters.txt as input instead of the generated design matrix, and of course there is no comparison to the input distribution parameters.
The process of defining and setting up a design matrix is often an iterative procedure. If you have to run ERT to see how the input variables were actually sampled, that requires a lot of runs. If you could see what you have actually sampled from the input distributions before you run ERT, you could save a lot of time. This is a strength of @risk. When using @risk to generate the design matrix (as I do), I can go through each variable and see what was actually sampled and what the resulting correlations became. If I am not happy with the results, I can change the input and rerun without having to start the heavy ERT machinery. Especially useful is seeing what the resulting P90, mean, and P10 actually are (and the probabilities).