The current evaluation process uses the configuration versions in the "configuration sets" folder. This allows RRE to run the evaluation against those versions and therefore make useful comparisons between them.
Another option could be (see issue #54) to version the evaluation process itself; that is: each time the evaluation is executed, it will be persisted (again, see #54) and versioned.
Subsequently, some external BI/Reporting tool could use that data to make comparisons between different executions (which in this case could be called "versions").
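For context, a configuration-sets layout with one folder per version might look something like this (the paths are only illustrative, not a prescribed structure):

```
configuration_sets/
├── v1.0/   <- one folder per configuration version under evaluation
├── v1.1/
└── v1.2/
```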
Hi. We'd like to send evaluation report information into Kibana - would this fit your description of an external BI/Reporting tool? We're happy to make this enhancement, but we're not sure it belongs in this issue (should I open another issue and eventually make a pull request for that?). Thanks!
Hi @binarymax,
I think #54 is the issue you're looking for. Ideally the target persistence model should be pluggable (in this case Elasticsearch, but in another scenario a customer wanted to have that data in a relational database).
Then yes, once the data is in ES you can query it using Kibana.
This issue is specifically about "versioning" the persisted data. I added it after #54, while trying to figure out how to compare different evaluation executions.
So, just to explain my reasoning: at the moment you have (for example) three folders 1.1, 1.2 and 1.3. Each time RRE executes, it evaluates all of them, so the versioning is implicit (i.e. the JSON output file contains, for each metric, three values).
If you

- leave the configuration folders as they are
- change the output model from the local filesystem to Elasticsearch

then versioning is not a problem, because the evaluation output always contains those three versions. In other words: each time you run RRE it will evaluate all versions.
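To make that "implicit versioning" concrete, a hypothetical output fragment could look like the sketch below; the field names and metric values are made up and are not the actual RRE schema:

```python
# Hypothetical sketch of an evaluation output with implicit versioning:
# one value per configuration version for each metric (illustrative only).
evaluation_output = {
    "query_group": "brand queries",
    "metrics": {
        "P@10": {"v1.1": 0.61, "v1.2": 0.68, "v1.3": 0.74},
        "NDCG@10": {"v1.1": 0.55, "v1.2": 0.63, "v1.3": 0.71},
    },
}

# Comparing versions is trivial because every run re-evaluates all of them.
for metric, per_version in evaluation_output["metrics"].items():
    best = max(per_version, key=per_version.get)
    print(f"{metric}: best version is {best} ({per_version[best]})")
```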
The reason behind this issue was a customer who didn't want to maintain the configuration sets: I had just one configuration folder with the latest version, so the RRE output didn't compare anything because only one version was in the JSON output.
So the idea was: for each execution, the output is persisted somewhere and tagged with a version (a timestamp, for example). So after n executions, you will have n comparable versions.
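A minimal sketch of that idea, assuming the evaluation JSON is indexed into Elasticsearch with a timestamp as the version tag (the index name and output path below are made-up examples, not something RRE defines):

```python
# Illustrative sketch only: tag each evaluation run with a version (here a
# timestamp) and persist it, e.g. to Elasticsearch, so that n runs give you
# n comparable "versions".
import json
from datetime import datetime, timezone

import requests


def persist_evaluation(output_path: str, es_url: str = "http://localhost:9200") -> None:
    # Load the evaluation output produced by this run (path is an example).
    with open(output_path) as f:
        evaluation = json.load(f)

    # The timestamp acts as the version tag for this execution.
    version = datetime.now(timezone.utc).isoformat()
    document = {"version": version, "evaluation": evaluation}

    # POST to Elasticsearch's document API (hypothetical index name).
    response = requests.post(f"{es_url}/rre-evaluations/_doc", json=document)
    response.raise_for_status()


persist_evaluation("target/rre/evaluation.json")
```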
Sorry for the digression: the short answer is #54 :D
Thanks for the quick reply! I was thinking of using the reporting plugin and adding a new output there to send the evaluation to Elasticsearch, which could then be queried by Kibana... but now I see your point about persistence (we shouldn't overwrite/repost the old evaluations that already exist). I will think about this a bit more and comment on #54 if needed. Thanks again.