
Dynamic Incremental Versioning #56

Open
agazzarini opened this issue Sep 16, 2018 · 3 comments

@agazzarini
Member

The current evaluation process uses the configuration versions in the "configuration sets" folder. This allows RRE to run the evaluation against those versions and therefore make useful comparisons between them.

Another option could be (see issue #54) to version the evaluation process itself; that is: each time the evaluation is executed, it will be persisted (again, see #54) and versioned.

Subsequently, some external BI/Reporting tool could use that data to compare different executions (which in this case could be called "versions").
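
To make the idea a bit more concrete, each persisted execution could carry something like the following (purely illustrative sketch, not an RRE class; all names are hypothetical):

```java
// Hypothetical shape of a single persisted, versioned evaluation execution.
// Names are illustrative only; nothing here comes from the RRE codebase.
import java.time.Instant;
import java.util.Map;

public class EvaluationExecution {

    private final String version;              // e.g. an incremental counter or a timestamp
    private final Instant executedAt;          // when this evaluation ran
    private final Map<String, Double> metrics; // metric name -> value for this execution

    public EvaluationExecution(String version, Instant executedAt, Map<String, Double> metrics) {
        this.version = version;
        this.executedAt = executedAt;
        this.metrics = metrics;
    }

    public String getVersion()              { return version; }
    public Instant getExecutedAt()          { return executedAt; }
    public Map<String, Double> getMetrics() { return metrics; }
}
```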

@agazzarini agazzarini added enhancement New feature or request task labels Sep 16, 2018
@agazzarini agazzarini added this to the 1.1 milestone Sep 16, 2018
@agazzarini agazzarini self-assigned this Sep 16, 2018
@binarymax
Contributor

Hi. We'd like to send evaluation report information into Kibana - would this fit your description of an external BI/Reporting tool? We're happy to make this enhancement, but we're not sure if it belongs in this issue (should I open another issue and eventually make a pull request for that?). Thanks!

@agazzarini
Member Author

Hi @binarymax,
I think #54 is the issue you're looking for. Ideally the target persistence model should be pluggable (in this case Elasticsearch, but in another scenario a customer wanted to have that data in a relational database).
Then yes, once the data is in ES you can query it using Kibana.
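
Roughly along these lines (just a sketch: the interface and class names are hypothetical, not the RRE API, and the Elasticsearch part assumes the official low-level REST client):

```java
// Sketch of a pluggable persistence layer. Each backend
// (local FS, Elasticsearch, RDBMS, ...) implements the same contract.
import java.io.IOException;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

interface EvaluationOutputWriter {
    void write(String version, String evaluationAsJson) throws IOException;
}

// Elasticsearch implementation: once the data is indexed here, Kibana can query it.
class ElasticsearchOutputWriter implements EvaluationOutputWriter {

    private final RestClient client =
            RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

    @Override
    public void write(String version, String evaluationAsJson) throws IOException {
        // One document per evaluation execution, keyed by its version tag.
        Request request = new Request("PUT", "/rre-evaluations/_doc/" + version);
        request.setJsonEntity(evaluationAsJson);
        client.performRequest(request);
    }
}
```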

This issue is specifically about "versioning" the persisted data. I added it after #54 while trying to figure out how to compare different evaluation executions.

So, just to explain my reasoning: at the moment you have, for example, three folders 1.1, 1.2 and 1.3. Each time RRE executes, it evaluates all of them, so the versioning is implicit (i.e. the JSON output file contains, for each metric, three values).

If you

  • leave the configuration folders
  • change the output model from the local FS to Elasticsearch

then versioning is not a problem, because the evaluation output always contains those three versions. In other words: each time you run RRE it will evaluate all versions.

The reason behind this issue was a customer who didn't want to maintain the configuration sets: there was just one configuration folder with the latest version, so the RRE output didn't compare anything because only one version was in the JSON output.

So the idea was: for each execution, the output is persisted somewhere and tagged with a version (a timestamp, for example). So after n executions, you will have n comparable versions.
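
For example (just a sketch, the tag format is arbitrary), the version tag could be derived from the execution timestamp:

```java
// Sketch only: derive a version tag from the execution timestamp,
// so that n executions produce n distinct, comparable versions.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class VersionTag {

    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss");

    public static String next() {
        return FORMAT.format(LocalDateTime.now()); // e.g. "20180916-104502"
    }
}
```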

Sorry for the digression: the short answer is #54 :D

@irwinmx-lng-con commented Nov 13, 2018

Thanks for the quick reply! I was thinking of using the reporting plugin and adding a new output there to send the evaluation to Elasticsearch, so that it can be queried from Kibana... but now I see your point about persistence (we shouldn't overwrite/repost the old evaluations that already exist). I will think about this a bit more and comment on #54 if needed. Thanks again.
