Assert protocol is propagated #3292
Conversation
for _, protocol := range cfg.PromConfig.GlobalConfig.ScrapeProtocols {
	scrapeProtocols = append(scrapeProtocols, monitoringv1.ScrapeProtocol(protocol))
}
prom.Spec.CommonPrometheusFields.ScrapeProtocols = scrapeProtocols
I don't think this is the correct thing to do. In my view, `GlobalConfig` should only affect the raw scrape configs, while the respective Prometheus fields (which only affect prometheus-operator CRs) should be configured separately. The ambiguity of how `GlobalConfig` should affect the prometheus-operator world is why I was reluctant to include it in the first place.
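For context, the `GlobalConfig` in question is the `global` block of the Prometheus configuration embedded in the collector's prometheus receiver. A sketch (the `scrape_protocols` field name comes from upstream Prometheus; the job and target values are illustrative):

```yaml
receivers:
  prometheus:
    config:
      global:
        scrape_protocols:
          - PrometheusProto
          - OpenMetricsText1.0.0
      scrape_configs:
        - job_name: example
          static_configs:
            - targets: ["localhost:9090"]
```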
I can see that; however, I think the source of confusion, and the reason for doing it this way, is that we don't have the luxury of having it automatically propagated to the Prometheus instance. That is, when a Prometheus instance sets the global config and is configured via the prometheus-operator, any scrape configs generated from the Prometheus CRDs will use the global config defined by that Prometheus instance.
As I was writing this, I started wondering whether this is necessary at all, given that it's the collector doing the scraping. If a user sets the global config on the prometheus receiver in the collector, shouldn't that be used when scraping a target, overriding the scrape_configs received from the TA?
I need to look into this, because if that's the case, this change would be unnecessary, right? Otherwise, I do think we need it.
The way I see it, we inhabit two separate worlds here:
1. The world of raw Prometheus configurations. A user can put a configuration in their prometheus receiver settings and expect the Target Allocator to use it. `scrape_configs` and `global_configs` apply here. This has nothing to do with Kubernetes per se, and works without the TA as well.
2. The world of Prometheus CRs in Kubernetes. This is specific to the Target Allocator and is configured via the OpenTelemetryCollector (and, in the near future, TargetAllocator) CR. Internally, this is done by passing the configuration to a `Prometheus` CR and using that to generate scrape configs from ServiceMonitors and the like.
My opinion is that world 2 should not be affected by configuration for world 1. If we want to set `scrapeProtocols` the same way we normally would on a `Prometheus` CR, then we should have a `scrapeProtocols` field on our CRs for this. See #1934 for reference.
Does that make sense?
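As a concrete sketch of that suggestion: on a prometheus-operator `Prometheus` CR, the setting lives directly in the spec (field names per the monitoringv1 API; the metadata and values are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  scrapeProtocols:
    - PrometheusProto
    - OpenMetricsText1.0.0
```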
Yeah, that makes sense to me, and it would probably be better in the long run anyway. I had to look through some more code to make sense of this, but I agree that this is probably the way to go.
Closing this in favor of the resolution of #1934.
Description:
This adds a new check asserting that we are indeed setting the protocol globally in the generated scrape config file.
Link to tracking Issue(s): n/a
Testing:
Documentation: