
Support transport options in exporter #380

Closed
wants to merge 15 commits into from

Conversation

ocervell
Contributor

@ocervell ocervell commented Nov 1, 2018

This PR adds support to pass options to the transport initialized in our exporters.

This will allow, for instance, to set the grace_period and max_batch_size for the BackgroundThread transport when initializing the exporter:

from opencensus.trace.exporters.stackdriver_exporter import StackdriverExporter
from opencensus.trace.exporters.transports.background_thread import BackgroundThreadTransport

exporter = StackdriverExporter(
    transport=BackgroundThreadTransport, 
    transport_config={'grace_period': 1, 'max_batch_size': 30})
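To illustrate the mechanism, here is a minimal sketch of how a `transport_config` dict could be expanded into the transport's constructor arguments. The class bodies and the `init_transport` helper below are stand-ins for illustration, not the exact code in this PR:

```python
# Hypothetical sketch of applying a transport_config dict.
# Class bodies and the init_transport helper are stand-ins,
# not the actual opencensus implementation.

class BackgroundThreadTransport(object):
    def __init__(self, exporter, grace_period=5, max_batch_size=300):
        self.exporter = exporter
        self.grace_period = grace_period
        self.max_batch_size = max_batch_size

def init_transport(exporter, transport, config=None):
    # Expand the config dict into keyword arguments for the transport
    # constructor; an unknown key raises TypeError.
    return transport(exporter, **(config or {}))

class StackdriverExporter(object):
    def __init__(self, transport=BackgroundThreadTransport,
                 transport_config=None):
        self.transport = init_transport(self, transport, transport_config)

exporter = StackdriverExporter(
    transport=BackgroundThreadTransport,
    transport_config={'grace_period': 1, 'max_batch_size': 30})
```

The exporter still receives a transport class, so existing call sites that pass only `transport=` keep working unchanged.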

self.file_name = file_name
self.transport = transport(self)
Member

I wonder why we're initializing the transport here instead of passing the exporter constructor a preconfigured transport. This may be a simpler solution to your problem.
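For comparison, the preconfigured-transport alternative suggested here might look like the following. This is a sketch under assumed names and signatures (the `bind` method in particular is hypothetical), not the current API:

```python
# Sketch of the alternative: configure the transport up front and hand
# the instance to the exporter. Names and signatures are hypothetical.

class BackgroundThreadTransport(object):
    def __init__(self, grace_period=5, max_batch_size=300):
        self.grace_period = grace_period
        self.max_batch_size = max_batch_size
        self.exporter = None

    def bind(self, exporter):
        # The exporter attaches itself once it is constructed.
        self.exporter = exporter
        return self

class StackdriverExporter(object):
    def __init__(self, transport):
        # Accept a ready-made transport instance instead of a class.
        self.transport = transport.bind(self)

transport = BackgroundThreadTransport(grace_period=1, max_batch_size=30)
exporter = StackdriverExporter(transport=transport)
```

This avoids the extra `transport_config` dict entirely, at the cost of changing the existing `transport=<class>` calling convention.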

Contributor

@c24t, I suggested this in #249 (pass an object instead), but adding a configuration parameter could be interesting for backwards compatibility.

Contributor Author

I agree with @felippe-mendonca: this is mostly to keep backwards compatibility while giving more options. Maybe we can think about a more global refactor later on.

@c24t
Member

c24t commented Nov 5, 2018

@ocervell it looks like this branch still has some lint and coverage failures.

To check for new lint errors quickly you can do:

flake8 $(git diff --name-only upstream/master)

And to see the detailed coverage report:

python -m pytest --cov=opencensus --cov-config=.coveragerc
python -m coverage html
open htmlcov/index.html

@ocervell
Contributor Author

ocervell commented Nov 10, 2018

Need to add a test for the init_transport function to get full coverage.

@felippe-mendonca
Contributor

> Need to add a test for the init_transport function to get full coverage.

@ocervell, I'd suggest only the modification below to the test_base_exporter.py file to test the init_transport function, since it's already tested indirectly by all the exporters' tests.

class TestBaseExporter(unittest.TestCase):
    def test_export_abstract(self):
        from opencensus.trace.exporters import base

        exporter = base.Exporter()
        trace = {}

        with self.assertRaises(NotImplementedError):
            exporter.export(trace)

    def test_emit_abstract(self):
        from opencensus.trace.exporters import base

        exporter = base.Exporter()
        trace = {}

        with self.assertRaises(NotImplementedError):
            exporter.emit(trace)

    def test_init_transport(self):
        from opencensus.trace.exporters import base

        # MockTransport accepts par1 but not par2, so expanding the
        # config into the constructor raises TypeError.
        with self.assertRaises(TypeError):
            base.init_transport(
                exporter=base.Exporter(),
                transport=MockTransport,
                config={
                    "par2": 0,
                })


class MockTransport(object):
    def __init__(self, exporter=None, par1=0):
        self.export_called = False
        self.exporter = exporter
        self.par1 = par1

    def export(self, trace):
        self.export_called = True

@davewalkexpel

davewalkexpel commented Jan 4, 2019

Hi, this is cool but would it be possible to make _WAIT_PERIOD configurable via the new transport_config dict? Or would that be out of scope and should I start a new PR?

My use case is that I want to send fewer write requests to Stackdriver so that we don't exceed our write quota. I can control the max batch size of the requests now, but the app will still send a write request every second even if there's only one span to emit (traces are being created continuously). If I could raise the wait period to something like five seconds I would really be in business. Thanks!
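To make the quota arithmetic concrete, here is a back-of-the-envelope sketch. The function name and the model (one write per flush interval when spans arrive continuously) are illustrative assumptions, not the actual transport behavior:

```python
# Rough request-rate arithmetic for a batching transport that flushes
# once every wait_period seconds while spans keep arriving.
# Illustrative only; not the actual opencensus transport.

def writes_per_minute(wait_period_seconds):
    # One write request per flush interval.
    return 60.0 / wait_period_seconds

print(writes_per_minute(1))  # default one-second flush: 60 writes/min
print(writes_per_minute(5))  # five-second flush: 12 writes/min
```

Under this model, raising the wait period from one to five seconds cuts the steady-state write rate by 5x, independent of batch size.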

@c24t
Member

c24t commented Jan 11, 2019

> would it be possible to make _WAIT_PERIOD configurable via the new transport_config dict?

It'd mean adding an arg to the AsyncTransport constructor, but there's at least as good an argument for making this configurable as the grace period (which already has an arg).

@felippe-mendonca
Contributor

@davewalkexpel, it seems reasonable to make _WAIT_TIME configurable for your use case too. I had a similar problem but solved it by changing only the batch size. Besides, adjusting _WAIT_TIME could also be interesting in cases where you have multiple components sending traces, which means multiple requests to the collector; in that case, increasing the interval and the batch size might reduce pressure on the collector. However, I think this feature fits better in another PR, and maybe the name should change from 'wait_time' to 'interval'. Or, going further, we could open a discussion about the AsyncTransport export strategy.

@reyang
Contributor

reyang commented May 24, 2019

This has been addressed by the configuration refactor work in #394.
The following mechanism has been added in #621:
https://github.com/census-instrumentation/opencensus-python/tree/master/opencensus/common/configuration

@reyang reyang closed this May 24, 2019
@reyang
Contributor

reyang commented May 24, 2019

There is a plan to refactor the exporter, worker, queue, and transport; refer to the conversations in #642.
