
feat: instrumentation-openai #469

Merged Dec 12, 2024: 21 commits into main from the trentm/instr-openai branch

Conversation

trentm (Member) commented Dec 11, 2024

This adds a @elastic/opentelemetry-instrumentation-openai instrumentation that has been in dev internally for a while.
The intention is to publish this from here, and then add this as a stock instrumentation in EDOT Node.js (aka @elastic/opentelemetry-node).
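
For context, a minimal sketch of how a consumer might wire this instrumentation into the OpenTelemetry Node SDK. This is not code from the PR: the OpenAIInstrumentation class name and the captureMessageContent option are assumed from later parts of this thread.

    // Hypothetical consumer setup (assumed API, see note above).
    import { NodeSDK } from '@opentelemetry/sdk-node';
    import { OpenAIInstrumentation } from '@elastic/opentelemetry-instrumentation-openai';

    const sdk = new NodeSDK({
      instrumentations: [new OpenAIInstrumentation({ captureMessageContent: true })],
    });
    sdk.start();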

…all size

Also add a package-lock as used by other packages in this monorepo.
…ence of higher version glob in top-level node_modules

    > @elastic/[email protected] compile
    > tsc -p .

    ../../node_modules/@types/glob/index.d.ts:29:42 - error TS2694: Namespace '".../node_modules/minimatch/dist/commonjs/index"' has no exported member 'IOptions'.

    29     interface IOptions extends minimatch.IOptions {
                                                ~~~~~~~~

    ../../node_modules/@types/glob/index.d.ts:74:30 - error TS2724: '".../node_modules/minimatch/dist/commonjs/index"' has no exported member named 'IMinimatch'. Did you mean 'Minimatch'?

    74         minimatch: minimatch.IMinimatch;
                                    ~~~~~~~~~~

    Found 2 errors.

I love TS.
@trentm trentm requested a review from david-luna December 11, 2024 01:00
@trentm trentm self-assigned this Dec 11, 2024
trentm (Member, Author) commented Dec 11, 2024

I'll have to come back to the test failures tomorrow.

@codefromthecrypt left a comment:

great to see this!

@david-luna (Member) left a comment:

the only blocker is the empty exports in build/src/instrumentation.d.ts
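
(A hypothetical way to catch empty exports in build/src/instrumentation.d.ts would be a consumer-style type check in CI; the snippet below is illustrative only and assumes the constructor takes an optional config.)

    // check-exports.ts: fails to compile if the built types export nothing.
    import { OpenAIInstrumentation } from '@elastic/opentelemetry-instrumentation-openai';

    const instr = new OpenAIInstrumentation(); // assumes an optional config arg
    console.log(instr instanceof OpenAIInstrumentation);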

'prefer-rest-params': 'off',
},
},
{
david-luna (Member) commented on the eslint config hunk above:

upstream has specific rules for *.test.ts files

{
  "files": ["test/**/*.ts"],
  "rules": {
    "no-empty": "off",
    "@typescript-eslint/ban-ts-ignore": "off",
    "@typescript-eslint/no-empty-function": "off",
    "@typescript-eslint/no-explicit-any": "off",
    "@typescript-eslint/no-unused-vars": "off",
    "@typescript-eslint/no-var-requires": "off",
    "@typescript-eslint/no-shadow": ["off"],
    "@typescript-eslint/no-floating-promises": ["off"],
    "@typescript-eslint/no-non-null-assertion": ["off"],
    "@typescript-eslint/explicit-module-boundary-types": ["off"]
  }
}

They seem to relax the requirements, so I guess we're good in case we want to contribute upstream.

trentm (Member, Author) replied:

Yah. I had noticed that. I wasn't too careful with exactly matching the upstream eslint config, but I figured we could cope with any minor linting differences later if/when considering upstreaming this.

(Resolved review thread on packages/instrumentation-openai/src/index.ts, now outdated.)
// Use `DEBUG=elastic-opentelemetry-instrumentation-openai ...` for debug output.
// Or use `node --env-file ./dev.env ...` in this repo.
import createDebug from 'debug';
const debug = createDebug('elastic-opentelemetry-instrumentation-openai');
david-luna (Member) commented on this:

I'm curious about this. Instrumentations in opentelemetry-js-contrib use the logger injected into the _diag private property to print debug messages. What are the advantages of using debug instead?

trentm (Member, Author) replied:

Pros:

  • This debug output can be enabled independently of whether the app has set up a DiagLogger. That's nice both for turning it on, and for being able to get just this debug output without everything that comes with OTEL_LOG_LEVEL=verbose (or whatever mechanism turns on diag output).
  • The output is a bit nicer to look at. E.g.:
% node --env-file ../openai.env --env-file ../dev.env -r ./telemetry.js use-chat
  elastic-opentelemetry-instrumentation-openai instrument [email protected] (isESM=false), config={ enabled: true, captureMessageContent: true } +0ms
  elastic-opentelemetry-instrumentation-openai OpenAI.Chat.Completions.create args: [
  elastic-opentelemetry-instrumentation-openai   {
  elastic-opentelemetry-instrumentation-openai     model: 'gpt-4o-mini',
  elastic-opentelemetry-instrumentation-openai     messages: [
  elastic-opentelemetry-instrumentation-openai       {
  elastic-opentelemetry-instrumentation-openai         role: 'user',
  elastic-opentelemetry-instrumentation-openai         content: 'Answer in up to 3 words: Which ocean contains the falkland islands?'
  elastic-opentelemetry-instrumentation-openai       }
  elastic-opentelemetry-instrumentation-openai     ]
  elastic-opentelemetry-instrumentation-openai   }
  elastic-opentelemetry-instrumentation-openai ] +2ms
  elastic-opentelemetry-instrumentation-openai OpenAI.Chat.Completions.create result: {
  elastic-opentelemetry-instrumentation-openai   id: 'chatcmpl-AdgifUy4rCMXzVceJoHLilm4cSaLJ',
  elastic-opentelemetry-instrumentation-openai   object: 'chat.completion',
  elastic-opentelemetry-instrumentation-openai   created: 1734022369,
  elastic-opentelemetry-instrumentation-openai   model: 'gpt-4o-mini-2024-07-18',
  elastic-opentelemetry-instrumentation-openai   choices: [
  elastic-opentelemetry-instrumentation-openai     {
  elastic-opentelemetry-instrumentation-openai       index: 0,
  elastic-opentelemetry-instrumentation-openai       message: {
  elastic-opentelemetry-instrumentation-openai         role: 'assistant',
  elastic-opentelemetry-instrumentation-openai         content: 'South Atlantic Ocean.',
  elastic-opentelemetry-instrumentation-openai         refusal: null
  elastic-opentelemetry-instrumentation-openai       },
  elastic-opentelemetry-instrumentation-openai       logprobs: null,
  elastic-opentelemetry-instrumentation-openai       finish_reason: 'stop'
  elastic-opentelemetry-instrumentation-openai     }
  elastic-opentelemetry-instrumentation-openai   ],
  elastic-opentelemetry-instrumentation-openai   usage: {
  elastic-opentelemetry-instrumentation-openai     prompt_tokens: 24,
  elastic-opentelemetry-instrumentation-openai     completion_tokens: 4,
  elastic-opentelemetry-instrumentation-openai     total_tokens: 28,
  elastic-opentelemetry-instrumentation-openai     prompt_tokens_details: { cached_tokens: 0, audio_tokens: 0 },
  elastic-opentelemetry-instrumentation-openai     completion_tokens_details: {
  elastic-opentelemetry-instrumentation-openai       reasoning_tokens: 0,
  elastic-opentelemetry-instrumentation-openai       audio_tokens: 0,
  elastic-opentelemetry-instrumentation-openai       accepted_prediction_tokens: 0,
  elastic-opentelemetry-instrumentation-openai       rejected_prediction_tokens: 0
  elastic-opentelemetry-instrumentation-openai     }
  elastic-opentelemetry-instrumentation-openai   },
  elastic-opentelemetry-instrumentation-openai   system_fingerprint: 'fp_6fc10e10eb'
  elastic-opentelemetry-instrumentation-openai } +663ms
South Atlantic Ocean.

That output is coloured. It includes some timing info (although that isn't important in this case).

Compare to this (a quick/incomplete switch to using diag):

Example diag logging output for the same command:
% OTEL_LOG_LEVEL=debug node --env-file openai.env --env-file dev.env -r ./examples/telemetry.js examples/use-chat.js
@opentelemetry/api: Registered a global for diag v1.9.0.
The 'logRecordProcessor' option is deprecated. Please use 'logRecordProcessors' instead.
@opentelemetry/api: Registered a global for trace v1.9.0.
@opentelemetry/api: Registered a global for context v1.9.0.
@opentelemetry/api: Registered a global for propagation v1.9.0.
@opentelemetry/api: Registered a global for metrics v1.9.0.
@opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'http' }
@opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'https' }
@elastic/opentelemetry-instrumentation-openai Applying instrumentation patch for module on require hook {
  module: 'openai',
  version: '4.76.1',
  baseDir: '/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/openai'
}
@elastic/opentelemetry-instrumentation-openai instrument openai@%s (isESM=%s), config=%o 4.76.1 false { enabled: true, captureMessageContent: true }
@elastic/opentelemetry-instrumentation-openai OpenAI.Chat.Completions.create args: %O [ { model: 'gpt-4o-mini', messages: [ [Object] ] } ]
EnvDetector found resource. Resource {
  _attributes: {},
  asyncAttributesPending: false,
  _syncAttributes: {},
  _asyncAttributesPromise: Promise {
    {},
    [Symbol(async_id_symbol)]: 45,
    [Symbol(trigger_async_id_symbol)]: 0
  }
}
ProcessDetector found resource. Resource {
  _attributes: {
    'process.pid': 12179,
    'process.executable.name': 'node',
    'process.executable.path': '/Users/trentm/.nvm/versions/node/v20.18.1/bin/node',
    'process.command_args': [
      '/Users/trentm/.nvm/versions/node/v20.18.1/bin/node',
      '--env-file',
      'openai.env',
      '--env-file',
      'dev.env',
      '-r',
      './examples/telemetry.js',
      '/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/examples/use-chat.js'
    ],
    'process.runtime.version': '20.18.1',
    'process.runtime.name': 'nodejs',
    'process.runtime.description': 'Node.js',
    'process.command': '/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/examples/use-chat.js',
    'process.owner': 'trentm'
  },
  asyncAttributesPending: false,
  _syncAttributes: {},
  _asyncAttributesPromise: Promise {
    {
      'process.pid': 12179,
      'process.executable.name': 'node',
      'process.executable.path': '/Users/trentm/.nvm/versions/node/v20.18.1/bin/node',
      'process.command_args': [Array],
      'process.runtime.version': '20.18.1',
      'process.runtime.name': 'nodejs',
      'process.runtime.description': 'Node.js',
      'process.command': '/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/examples/use-chat.js',
      'process.owner': 'trentm'
    },
    [Symbol(async_id_symbol)]: 46,
    [Symbol(trigger_async_id_symbol)]: 0
  }
}
@opentelemetry/instrumentation-http https instrumentation outgoingRequest
@opentelemetry/instrumentation-http http.ClientRequest return request
HostDetector found resource. Resource {
  _attributes: {
    'host.name': 'peach.local',
    'host.arch': 'arm64',
    'host.id': '8FEBBC33-D6DA-57FC-8EF0-1A9C14B919F8'
  },
  asyncAttributesPending: false,
  _syncAttributes: {},
  _asyncAttributesPromise: Promise {
    {
      'host.name': 'peach.local',
      'host.arch': 'arm64',
      'host.id': '8FEBBC33-D6DA-57FC-8EF0-1A9C14B919F8'
    },
    [Symbol(async_id_symbol)]: 92,
    [Symbol(trigger_async_id_symbol)]: 0
  }
}
@opentelemetry/instrumentation-http outgoingRequest on response()
@opentelemetry/instrumentation-http outgoingRequest on end()
@opentelemetry/instrumentation-http outgoingRequest on request close()
@elastic/opentelemetry-instrumentation-openai OpenAI.Chat.Completions.create result: %O {
  id: 'chatcmpl-AdgrJMXxLsv9BhH5UHXYPfbfewC48',
  object: 'chat.completion',
  created: 1734022905,
  model: 'gpt-4o-mini-2024-07-18',
  choices: [
    {
      index: 0,
      message: [Object],
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: {
    prompt_tokens: 24,
    completion_tokens: 4,
    total_tokens: 28,
    prompt_tokens_details: { cached_tokens: 0, audio_tokens: 0 },
    completion_tokens_details: {
      reasoning_tokens: 0,
      audio_tokens: 0,
      accepted_prediction_tokens: 0,
      rejected_prediction_tokens: 0
    }
  },
  system_fingerprint: 'fp_bba3c8e70b'
}
South Atlantic Ocean.
OTLPExportDelegate items to be sent {
  resource: Resource {
    _attributes: {
      'service.name': 'unknown_service:node',
      'telemetry.sdk.language': 'nodejs',
      'telemetry.sdk.name': 'opentelemetry',
      'telemetry.sdk.version': '1.29.0',
      'process.pid': 12179,
      'process.executable.name': 'node',
      'process.executable.path': '/Users/trentm/.nvm/versions/node/v20.18.1/bin/node',
      'process.command_args': [Array],
      'process.runtime.version': '20.18.1',
      'process.runtime.name': 'nodejs',
      'process.runtime.description': 'Node.js',
      'process.command': '/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/examples/use-chat.js',
      'process.owner': 'trentm',
      'host.name': 'peach.local',
      'host.arch': 'arm64',
      'host.id': '8FEBBC33-D6DA-57FC-8EF0-1A9C14B919F8'
    },
    asyncAttributesPending: false,
    _syncAttributes: {
      'service.name': 'unknown_service:node',
      'telemetry.sdk.language': 'nodejs',
      'telemetry.sdk.name': 'opentelemetry',
      'telemetry.sdk.version': '1.29.0'
    },
    _asyncAttributesPromise: Promise {
      [Object],
      [Symbol(async_id_symbol)]: 105,
      [Symbol(trigger_async_id_symbol)]: 0
    }
  },
  scopeMetrics: [
    { scope: [Object], metrics: [Array] },
    { scope: [Object], metrics: [Array] }
  ]
}
Instrumentation suppressed, returning Noop Span
@opentelemetry/instrumentation-http http instrumentation outgoingRequest
@opentelemetry/instrumentation-http http.ClientRequest return request
@opentelemetry/instrumentation-http outgoingRequest on response()
@opentelemetry/instrumentation-http outgoingRequest on end()
@opentelemetry/instrumentation-http outgoingRequest on request close()
OTLPExportDelegate Export succeeded but could not deserialize response - is the response specification compliant? RangeError: index out of range: 3 + 111 > 8
    at indexOutOfRange (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/protobufjs/src/reader.js:13:12)
    at BufferReader.skip (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/protobufjs/src/reader.js:343:19)
    at Reader.skipType (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/protobufjs/src/reader.js:369:18)
    at Reader.skipType (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/protobufjs/src/reader.js:373:22)
    at Function.decode (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/@opentelemetry/otlp-transformer/build/src/generated/root.js:4974:48)
    at Object.deserializeResponse (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/@opentelemetry/otlp-transformer/build/src/protobuf/serializers.js:50:36)
    at _promiseQueue.pushPromise._transport.send.then.resultCallback.code (/Users/trentm/el/elastic-otel-node5/packages/instrumentation-openai/node_modules/@opentelemetry/otlp-exporter-base/build/src/otlp-export-delegate.js:56:79)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) <Buffer 7b 22 6f 6b 22 3a 31 7d>
OTLPExportDelegate items to be sent [
  LogRecord {
    attributes: { 'event.name': 'gen_ai.user.message', 'gen_ai.system': 'openai' },
    totalAttributesCount: 2,
    _isReadonly: true,
    hrTime: [ 1734022905, 60000000 ],
    hrTimeObserved: [ 1734022905, 60000000 ],
    spanContext: {
      traceId: '803446bf610e99e9859cc4c9aaa3889c',
      spanId: 'ca233033158b13e9',
      traceFlags: 1,
      traceState: undefined
    },
    _severityNumber: 9,
    _severityText: undefined,
    _body: {
      role: 'user',
      content: 'Answer in up to 3 words: Which ocean contains the falkland islands?'
    },
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationScope: {
      name: '@elastic/opentelemetry-instrumentation-openai',
      version: '0.3.0',
      schemaUrl: undefined
    },
    _logRecordLimits: { attributeCountLimit: 128, attributeValueLengthLimit: Infinity }
  },
  LogRecord {
    attributes: { 'event.name': 'gen_ai.choice', 'gen_ai.system': 'openai' },
    totalAttributesCount: 2,
    _isReadonly: true,
    hrTime: [ 1734022905, 477000000 ],
    hrTimeObserved: [ 1734022905, 477000000 ],
    spanContext: {
      traceId: '803446bf610e99e9859cc4c9aaa3889c',
      spanId: 'ca233033158b13e9',
      traceFlags: 1,
      traceState: undefined
    },
    _severityNumber: 9,
    _severityText: undefined,
    _body: { finish_reason: 'stop', index: 0, message: [Object] },
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationScope: {
      name: '@elastic/opentelemetry-instrumentation-openai',
      version: '0.3.0',
      schemaUrl: undefined
    },
    _logRecordLimits: { attributeCountLimit: 128, attributeValueLengthLimit: Infinity }
  }
]
Instrumentation suppressed, returning Noop Span
@opentelemetry/instrumentation-http http instrumentation outgoingRequest
@opentelemetry/instrumentation-http http.ClientRequest return request
OTLPExportDelegate shutdown started
OTLPExportDelegate items to be sent [
  Span {
    attributes: {
      'http.url': 'https://api.openai.com/v1/chat/completions',
      'http.method': 'POST',
      'http.target': '/v1/chat/completions',
      'net.peer.name': 'api.openai.com',
      'http.host': 'api.openai.com:443',
      'http.user_agent': [Array],
      'net.peer.ip': '162.159.140.245',
      'net.peer.port': 443,
      'http.status_code': 200,
      'http.status_text': 'OK',
      'http.flavor': '1.1',
      'net.transport': 'ip_tcp'
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1734022905, 471753667 ],
    _ended: true,
    _duration: [ 0, 408753667 ],
    name: 'POST',
    _spanContext: {
      traceId: '803446bf610e99e9859cc4c9aaa3889c',
      spanId: '34a29e37741c01d4',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: 'ca233033158b13e9',
    kind: 2,
    _performanceStartTime: 330.519833,
    _performanceOffset: -0.537841796875,
    _startTimeProvided: false,
    startTime: [ 1734022905, 63000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: {
      name: '@opentelemetry/instrumentation-http',
      version: '0.56.0',
      schemaUrl: undefined
    },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  },
  Span {
    attributes: {
      'gen_ai.operation.name': 'chat',
      'gen_ai.request.model': 'gpt-4o-mini',
      'gen_ai.system': 'openai',
      'server.address': 'api.openai.com',
      'server.port': 443,
      'gen_ai.response.finish_reasons': [Array],
      'gen_ai.response.id': 'chatcmpl-AdgrJMXxLsv9BhH5UHXYPfbfewC48',
      'gen_ai.response.model': 'gpt-4o-mini-2024-07-18',
      'gen_ai.usage.input_tokens': 24,
      'gen_ai.usage.output_tokens': 4
    },
    links: [],
    events: [],
    _droppedAttributesCount: 0,
    _droppedEventsCount: 0,
    _droppedLinksCount: 0,
    status: { code: 0 },
    endTime: [ 1734022905, 476396000 ],
    _ended: true,
    _duration: [ 0, 417396000 ],
    name: 'chat gpt-4o-mini',
    _spanContext: {
      traceId: '803446bf610e99e9859cc4c9aaa3889c',
      spanId: 'ca233033158b13e9',
      traceFlags: 1,
      traceState: undefined
    },
    parentSpanId: undefined,
    kind: 2,
    _performanceStartTime: 327.086166,
    _performanceOffset: -1.104248046875,
    _startTimeProvided: false,
    startTime: [ 1734022905, 59000000 ],
    resource: Resource {
      _attributes: [Object],
      asyncAttributesPending: false,
      _syncAttributes: [Object],
      _asyncAttributesPromise: [Promise]
    },
    instrumentationLibrary: {
      name: '@elastic/opentelemetry-instrumentation-openai',
      version: '0.3.0',
      schemaUrl: undefined
    },
    _spanLimits: {
      attributeValueLengthLimit: Infinity,
      attributeCountLimit: 128,
      linkCountLimit: 128,
      eventCountLimit: 128,
      attributePerEventCountLimit: 128,
      attributePerLinkCountLimit: 128
    },
    _attributeValueLengthLimit: Infinity,
    _spanProcessor: MultiSpanProcessor { _spanProcessors: [Array] }
  }
]
Instrumentation suppressed, returning Noop Span
@opentelemetry/instrumentation-http http instrumentation outgoingRequest
@opentelemetry/instrumentation-http http.ClientRequest return request
@opentelemetry/instrumentation-http outgoingRequest on response()
@opentelemetry/instrumentation-http outgoingRequest on end()
@opentelemetry/instrumentation-http outgoingRequest on request close()

Cons:

  • It is "yet another debug output mechanism". Having both diag and this debug thing could be confusing.

I found it useful for initial dev. However, I'd be fine removing it if it's deemed to add confusion/complexity.
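
(To make the comparison concrete, a minimal side-by-side sketch of the two mechanisms; the debug namespace is the one from the snippet above, and diag is the standard logger from @opentelemetry/api:)

    import { diag } from '@opentelemetry/api';
    import createDebug from 'debug';

    const debug = createDebug('elastic-opentelemetry-instrumentation-openai');

    // debug: enabled per-namespace, independently of any DiagLogger:
    //   DEBUG=elastic-opentelemetry-instrumentation-openai node app.js
    debug('instrumenting openai, config=%o', { captureMessageContent: true });

    // diag: enabled for every OTel component once a DiagLogger is
    // registered, e.g. via OTEL_LOG_LEVEL=debug in the SDK setup.
    diag.debug('instrumenting openai');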

(Resolved review thread on packages/instrumentation-openai/src/types.ts, now outdated.)
* Start a span for this chat-completion API call. This also emits log events
* as appropriate for the request params.
*
* @param {import('openai').OpenAI.ChatCompletionCreateParams} params
david-luna (Member) commented on this:

If the type comes from the lib and you're not exporting it to the consumer, I think it's safe to:

import type {OpenAI} from 'openai';
...

  _startChatCompletionsSpan(
    params: OpenAI.ChatCompletionCreateParams,
    config: OpenAIInstrumentationConfig,
    baseURL: string | undefined
  )
...

david-luna (Member) followed up:

I've found it has its drawbacks:

  • the import leaks to the exported types and this may give versioning issues in the future
  • [email protected] has compilation errors (which go away with latest 4.x)

Perhaps having our own type with the properties we are interested in could be a way to be more TS-friendly.

trentm (Member, Author) replied:

My understanding is that if we want strong typing of the OpenAI params and responses being handled in the instrumentation, we'd have to define our own types (in src/internal-types.ts or wherever). We don't want a runtime dep on openai, and I'm not sure we even want a compile-time dep on the openai package.

These would be (mostly) a copy of the OpenAI types, frozen to a particular openai version.

I wasn't sure this was worth the added effort (and then maintenance?) of extracting those types from a particular version of openai. For example, the ChatCompletionCreateParams type is hundreds of lines, covering some deprecated edge cases and slight differences between the streaming and non-streaming variants.

https://gist.github.com/trentm/e46a8862c1c4221675320aad2b13636e#file-completions-d-ts-L716

Our version of the type "with the properties we are interested in" would be smaller than that, yes.

I guess I don't know where the line is here: In authoring type OpenAIChatCompletionCreateParams in our internal-types.ts, am I improving the instrumentation code, or am I taking time away from adding tests and other maintenance to avoid having one more any in the TypeScript code? :)
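
(To illustrate the internal-types.ts option being weighed here, a minimal hand-rolled subset. This sketch is hypothetical; the field names follow the OpenAI chat-completions requests shown in the debug output above.)

    // src/internal-types.ts (sketch): only the request fields the
    // instrumentation reads, frozen to a given openai version rather than
    // imported from the 'openai' package.
    export interface OpenAIChatCompletionMessageParam {
      role: string;
      content: unknown;
    }

    export interface OpenAIChatCompletionCreateParams {
      model: string;
      messages: OpenAIChatCompletionMessageParam[];
      stream?: boolean;
      // Tolerate request fields the instrumentation doesn't read.
      [key: string]: unknown;
    }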

david-luna (Member) replied:

Deciding where to draw "the line" is up to the instrumentation author, so I'm okay with a couple of anys.

(Two more resolved review threads on packages/instrumentation-openai/src/instrumentation.ts, now outdated.)
@trentm trentm requested a review from david-luna December 12, 2024 17:49
@david-luna (Member) left a comment:

👍

@trentm trentm merged commit dd25a80 into main Dec 12, 2024
17 checks passed
@trentm trentm deleted the trentm/instr-openai branch December 12, 2024 18:15