
chore(deps): bump the gha group across 1 directory with 19 updates#3791

Closed
dependabot[bot] wants to merge 1 commit into main from
dependabot/uv/packages/opentelemetry-instrumentation-llamaindex/gha-5e0447eaf2

Conversation


dependabot bot commented on behalf of github on Mar 11, 2026

Bumps the gha group with 16 updates in the /packages/opentelemetry-instrumentation-llamaindex directory:

| Package | From | To |
| --- | --- | --- |
| opentelemetry-api | 1.39.1 | 1.40.0 |
| opentelemetry-semantic-conventions-ai | 0.4.13 | 0.4.15 |
| llama-index | 0.14.12 | 0.14.16 |
| ruff | 0.14.11 | 0.15.5 |
| chromadb | 0.5.23 | 1.5.5 |
| llama-index-embeddings-openai | 0.5.1 | 0.5.2 |
| llama-index-llms-cohere | 0.6.1 | 0.7.1 |
| llama-index-llms-openai | 0.6.13 | 0.6.26 |
| llama-index-postprocessor-cohere-rerank | 0.5.1 | 0.7.0 |
| onnxruntime | 1.19.2 | 1.24.3 |
| openai | 1.109.1 | 2.26.0 |
| opentelemetry-instrumentation-chromadb | 0.50.1 | 0.53.0 |
| opentelemetry-instrumentation-cohere | 0.50.1 | 0.53.0 |
| opentelemetry-instrumentation-openai | 0.50.1 | 0.53.0 |
| pytest-asyncio | 0.23.8 | 1.3.0 |
| sqlalchemy | 2.0.45 | 2.0.48 |

Updates opentelemetry-api from 1.39.1 to 1.40.0

Changelog

Sourced from opentelemetry-api's changelog.

Version 1.40.0/0.61b0 (2026-03-04)

  • opentelemetry-sdk: deprecate LoggingHandler in favor of opentelemetry-instrumentation-logging, see opentelemetry-instrumentation-logging documentation (#4919)
  • opentelemetry-sdk: Clarify log processor error handling expectations in documentation (#4915)
  • bump semantic-conventions to v1.40.0 (#4941)
  • Add stale PR GitHub Action (#4926)
  • opentelemetry-sdk: Drop unused Jaeger exporter environment variables (exporter removed in 1.22.0) (#4918)
  • opentelemetry-sdk: Clarify timeout units in environment variable documentation (#4906)
  • opentelemetry-exporter-otlp-proto-grpc: Fix re-initialization of gRPC channel on UNAVAILABLE error (#4825)
  • opentelemetry-exporter-prometheus: Fix duplicate HELP/TYPE declarations for metrics with different label sets (#4868)
  • Allow loading all resource detectors by setting OTEL_EXPERIMENTAL_RESOURCE_DETECTORS to * (#4819)
  • opentelemetry-sdk: Fix the type hint of the _metrics_data property to allow None (#4837).
  • Regenerate opentelemetry-proto code with v1.9.0 release (#4840)
  • Add python 3.14 support (#4798)
  • Silence events API warnings for internal users (#4847)
  • opentelemetry-sdk: make it possible to override the default processors in the SDK configurator (#4806)
  • Prevent possible endless recursion from happening in SimpleLogRecordProcessor.on_emit, (#4799) and (#4867).
  • Implement span start/end metrics (#4880)
  • Add environment variable carriers to API (#4609)
  • Add experimental composable rule based sampler (#4882)
  • Make ConcurrentMultiSpanProcessor fork safe (#4862)
  • opentelemetry-exporter-otlp-proto-http: fix retry logic and error handling for connection failures in trace, metric, and log exporters (#4709)
  • opentelemetry-sdk: avoid RuntimeError during iteration of view instrument match dictionary in MetricReaderStorage.collect() (#4891)
  • Implement experimental TracerConfigurator (#4861)
  • opentelemetry-sdk: Fix instrument creation race condition (#4913)
  • bump semantic-conventions to v1.39.0 (#4914)

... (truncated)

Commits

Updates opentelemetry-instrumentation from 0.60b1 to 0.61b0

Release notes

Sourced from opentelemetry-instrumentation's releases.

opentelemetry-instrumentation-openai-v2 2.3b0

  • Fix AttributeError when handling LegacyAPIResponse (from with_raw_response) (#4017)
  • Add support for chat completions choice count and stop sequences span attributes (#4028)
  • Fix crash with streaming with_raw_response (#4033)
  • Bump to 1.30.0 semconv schema: gen_ai.request.seed instead of gen_ai.openai.request.seed (#4036)

opentelemetry-instrumentation-openai-v2 2.2b0

  • Fix service tier attribute names: use GEN_AI_OPENAI_REQUEST_SERVICE_TIER for request attributes and GEN_AI_OPENAI_RESPONSE_SERVICE_TIER for response attributes. (#3920)
  • Added support for OpenAI embeddings instrumentation (#3461)
  • Record prompt and completion events regardless of span sampling decision. (#3226)
  • Filter out attributes with the value of NotGiven instances (#3760)
  • Migrate off the deprecated events API to use the logs API (#3625)

opentelemetry-instrumentation-openai-agents-v2 0.1.0

  • Initial barebones package skeleton: minimal instrumentor stub, version module, and packaging metadata/entry point. (#3805)
  • Implement OpenAI Agents span processing aligned with GenAI semantic conventions. (#3817)
  • Input and output according to GenAI spec. (#3824)

opentelemetry-instrumentation-openai-v2 2.1b0

  • Coerce openai response_format to semconv format (#3073)
  • Add example to opentelemetry-instrumentation-openai-v2 (#3006)
  • Support for AsyncOpenAI/AsyncCompletions (#2984)
  • Add metrics (#3180)

opentelemetry-instrumentation-openai-v2 2.0b0

  • Use generic OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT environment variable to control if content of prompt, completion, and other messages is captured. (#2947)

  • Update OpenAI instrumentation to Semantic Conventions v1.28.0: add new attributes and switch prompts and completions to log-based events. (#2925)

  • Initial OpenAI instrumentation (#2759)

Changelog

Sourced from opentelemetry-instrumentation's changelog.

Version 1.40.0/0.61b0 (2026-03-04)

Added

  • Add Python 3.14 support (#4193)
  • opentelemetry-instrumentation-asgi: Add exemplars for http.server.request.duration and http.server.duration metrics (#3739)
  • opentelemetry-instrumentation-wsgi: Add exemplars for http.server.request.duration and http.server.duration metrics (#3739)
  • opentelemetry-instrumentation-aiohttp-client: add ability to capture custom headers (#3988)
  • opentelemetry-instrumentation-requests: add ability to capture custom headers (#3987)
  • opentelemetry-instrumentation-aiohttp-client: add typechecking for aiohttp-client instrumentation (#4006)
  • opentelemetry-instrumentation-flask: Add support for 3.1+ streaming responses (#3938)
  • opentelemetry-instrumentation-aiohttp-server: Support passing TracerProvider when instrumenting. (#3819)
  • opentelemetry-instrumentation-system-metrics: Add support for the OTEL_PYTHON_SYSTEM_METRICS_EXCLUDED_METRICS environment variable (#3959)
  • opentelemetry-instrumentation-httpx: add ability to capture custom headers (#4047)
  • opentelemetry-instrumentation-urllib3: add ability to capture custom headers (#4050)
  • opentelemetry-instrumentation-urllib: add ability to capture custom headers (#4051)
  • opentelemetry-instrumentation-confluent-kafka: Increase confluent-kafka upper bound to support newer versions (2.13.0) (#4099)
  • opentelemetry-instrumentation-aiohttp-server Implement new semantic convention opt-in migration (#3980)
  • opentelemetry-instrumentation-falcon: pass request attributes at span creation (#4119)
  • opentelemetry-instrumentation: add database stability attribute setters in _semconv utilities (#4108)
  • opentelemetry-instrumentation-aiohttp-server: pass request attributes at span creation (#4118)
  • opentelemetry-instrumentation-tornado: Implement new semantic convention opt-in migration (#3993)
  • opentelemetry-instrumentation-tornado: pass request attributes at span creation (#4140)
  • opentelemetry-instrumentation-pyramid Implement new semantic convention opt-in migration (#3982)
  • opentelemetry-instrumentation-tortoiseorm Add unit tests for Tortoise ORM instrumentation (#4141)
  • opentelemetry-instrumentation-pyramid: pass request attributes at span creation (#4139)
  • opentelemetry-instrumentation-logging: Move there the SDK LoggingHandler (#4210)

... (truncated)

Commits

Updates opentelemetry-semantic-conventions-ai from 0.4.13 to 0.4.15

Updates opentelemetry-semantic-conventions from 0.60b1 to 0.61b0

Changelog

Sourced from opentelemetry-semantic-conventions's changelog.

Version 1.40.0/0.61b0 (2026-03-04)

  • opentelemetry-sdk: deprecate LoggingHandler in favor of opentelemetry-instrumentation-logging, see opentelemetry-instrumentation-logging documentation (#4919)
  • opentelemetry-sdk: Clarify log processor error handling expectations in documentation (#4915)
  • bump semantic-conventions to v1.40.0 (#4941)
  • Add stale PR GitHub Action (#4926)
  • opentelemetry-sdk: Drop unused Jaeger exporter environment variables (exporter removed in 1.22.0) (#4918)
  • opentelemetry-sdk: Clarify timeout units in environment variable documentation (#4906)
  • opentelemetry-exporter-otlp-proto-grpc: Fix re-initialization of gRPC channel on UNAVAILABLE error (#4825)
  • opentelemetry-exporter-prometheus: Fix duplicate HELP/TYPE declarations for metrics with different label sets (#4868)
  • Allow loading all resource detectors by setting OTEL_EXPERIMENTAL_RESOURCE_DETECTORS to * (#4819)
  • opentelemetry-sdk: Fix the type hint of the _metrics_data property to allow None (#4837).
  • Regenerate opentelemetry-proto code with v1.9.0 release (#4840)
  • Add python 3.14 support (#4798)
  • Silence events API warnings for internal users (#4847)
  • opentelemetry-sdk: make it possible to override the default processors in the SDK configurator (#4806)
  • Prevent possible endless recursion from happening in SimpleLogRecordProcessor.on_emit, (#4799) and (#4867).
  • Implement span start/end metrics (#4880)
  • Add environment variable carriers to API (#4609)
  • Add experimental composable rule based sampler (#4882)
  • Make ConcurrentMultiSpanProcessor fork safe (#4862)
  • opentelemetry-exporter-otlp-proto-http: fix retry logic and error handling for connection failures in trace, metric, and log exporters (#4709)
  • opentelemetry-sdk: avoid RuntimeError during iteration of view instrument match dictionary in MetricReaderStorage.collect() (#4891)
  • Implement experimental TracerConfigurator (#4861)
  • opentelemetry-sdk: Fix instrument creation race condition (#4913)
  • bump semantic-conventions to v1.39.0 (#4914)

... (truncated)

Commits

Updates llama-index from 0.14.12 to 0.14.16

Release notes

Sourced from llama-index's releases.

v0.14.16

Release Notes

[2026-03-10]

llama-index-core [0.14.16]

  • Add token-bucket rate limiter for LLM and embedding API calls (#20712)
  • Fix/20706 chonkie init doc (#20713)
  • fix: pass tool_choice through FunctionCallingProgram (#20740)
  • feat: Multimodal LLMReranker (#20743)
  • feat: add optional embed_model to SemanticDoubleMergingSplitterNodeParser (#20748)
  • fix(core): preserve doc_id in legacy_json_to_doc (#20750)
  • fix: async retry backoff to avoid blocking event loop (#20764)
  • Fix additionalProperties in auto-generated KG schema models (#20768)
  • fix: respect db_schema when custom async_engine is provided (#20779)
  • fix(core): replace blocking run_async_tasks with asyncio.gather (#20795)
  • feat(rate_limiter): add SlidingWindowRateLimiter for strict per-minute caps (#20799)
  • fix(core): preserve docstore_strategy across pipeline runs when no vector store is attached (#20824)
  • Fix FunctionTool not respecting pydantic Field defaults (#20839)
  • Fix MarkdownElementNodeParser to extract code blocks (#20840)
  • security: add RestrictedUnpickler to SimpleObjectNodeMapping (CWE-502) (#20857)
  • feat: extend vector store metadata filters (#20861)
  • fix(react): pass system_prompt to ReActChatFormatter template (#20873)
  • refactor: deprecate asyncio_module in favour of get_asyncio_module (#20902)
  • fix(core): partial-failure handling in SubQuestionQueryEngine (#20905)
  • fix: add bounds check to prevent infinite loop in ChatMemoryBuffer.get() (#20914)
  • fix: ensure streaming flag reset on exception in CondenseQuestionChatEngine (#20915)
  • fix: pass through run id correctly (#20928)
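The token-bucket rate limiter mentioned in the first entry can be sketched generically (this is a minimal illustration of the technique, not llama-index's actual implementation):

```python
import time


class TokenBucket:
    """Minimal token-bucket limiter: refill at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self, tokens: float = 1.0) -> None:
        """Block until `tokens` are available, then consume them."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= tokens:
                self.tokens -= tokens
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((tokens - self.tokens) / self.rate)
```

Calls that exceed the configured rate simply block until enough tokens have refilled, which smooths bursts of LLM or embedding API calls into a steady rate.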

llama-index-embeddings-bedrock [0.7.4]

  • fix: raise ValueError when 'model' is passed instead of 'model_name' in BedrockEmbedding (#20836)

llama-index-embeddings-openai [0.5.2]

  • Respect Retry-After header in OpenAI retry decorator (#20813)
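Honoring a `Retry-After` header in a retry loop, as this fix describes, generally means preferring the server-supplied delay over computed backoff. A generic stdlib sketch (not llama-index's actual decorator):

```python
from typing import Optional


def backoff_delay(attempt: int, retry_after: Optional[str], base: float = 0.5) -> float:
    """Prefer the server's Retry-After (delta-seconds form) over exponential backoff."""
    if retry_after is not None:
        try:
            return max(0.0, float(retry_after))
        except ValueError:
            pass  # HTTP-date form of Retry-After is not handled in this sketch
    return base * (2 ** attempt)
```

A caller would sleep for `backoff_delay(attempt, response.headers.get("Retry-After"))` between attempts, falling back to exponential backoff when the header is absent or unparsable.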

llama-index-embeddings-upstage [0.5.1]

  • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

llama-index-graph-stores-neo4j [0.6.0]

  • Add Neo4j user agent (#20827)
  • feat(neo4j): add apoc_sample parameter for large database schema introspection (#20859)

llama-index-instrumentation [0.4.3]

  • otel instrumentation enhancements (#20816)

... (truncated)

Changelog

Sourced from llama-index's changelog.

llama-index-core [0.14.16]

  • Add token-bucket rate limiter for LLM and embedding API calls (#20712)
  • Fix/20706 chonkie init doc (#20713)
  • fix: pass tool_choice through FunctionCallingProgram (#20740)
  • feat: Multimodal LLMReranker (#20743)
  • feat: add optional embed_model to SemanticDoubleMergingSplitterNodeParser (#20748)
  • fix(core): preserve doc_id in legacy_json_to_doc (#20750)
  • fix: async retry backoff to avoid blocking event loop (#20764)
  • Fix additionalProperties in auto-generated KG schema models (#20768)
  • fix: respect db_schema when custom async_engine is provided (#20779)
  • fix(core): replace blocking run_async_tasks with asyncio.gather (#20795)
  • feat(rate_limiter): add SlidingWindowRateLimiter for strict per-minute caps (#20799)
  • fix(core): preserve docstore_strategy across pipeline runs when no vector store is attached (#20824)
  • Fix FunctionTool not respecting pydantic Field defaults (#20839)
  • Fix MarkdownElementNodeParser to extract code blocks (#20840)
  • security: add RestrictedUnpickler to SimpleObjectNodeMapping (CWE-502) (#20857)
  • feat: extend vector store metadata filters (#20861)
  • fix(react): pass system_prompt to ReActChatFormatter template (#20873)
  • refactor: deprecate asyncio_module in favour of get_asyncio_module (#20902)
  • fix(core): partial-failure handling in SubQuestionQueryEngine (#20905)
  • fix: add bounds check to prevent infinite loop in ChatMemoryBuffer.get() (#20914)
  • fix: ensure streaming flag reset on exception in CondenseQuestionChatEngine (#20915)
  • fix: pass through run id correctly (#20928)

llama-index-embeddings-bedrock [0.7.4]

  • fix: raise ValueError when 'model' is passed instead of 'model_name' in BedrockEmbedding (#20836)

llama-index-embeddings-openai [0.5.2]

  • Respect Retry-After header in OpenAI retry decorator (#20813)

llama-index-embeddings-upstage [0.5.1]

  • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

llama-index-graph-stores-neo4j [0.6.0]

  • Add Neo4j user agent (#20827)
  • feat(neo4j): add apoc_sample parameter for large database schema introspection (#20859)

llama-index-instrumentation [0.4.3]

  • otel instrumentation enhancements (#20816)

llama-index-llms-anthropic [0.10.11]

  • Add User-Agent header for Anthropic API calls (#20771)
  • fix: apply cache_control only to last block to respect Anthropic's 4-block limit (#20875)

... (truncated)

Commits
  • 9e2cf43 Release 0.14.16 (#20941)
  • c558825 update gemini embeddings (#20940)
  • e92dfc2 fix: replace mutable default arguments with None (#20938)
  • dee5e09 fix: pass through run id correctly (#20928)
  • 143ced4 perf(redis-chat-store): Use Pydantic directly for ChatMessage serialization &...
  • f25230a fix: ensure streaming flag reset on exception in CondenseQuestionChatEngine (...
  • 4828b1a fix: add bounds check to prevent infinite loop in ChatMemoryBuffer.get() (#20...
  • 1ea49bf fix(azure-inference): properly manage async client lifecycle to prevent unclo...
  • 3f5c422 feat(seltz): update Seltz integration to SDK 0.2.0 (#20906)
  • 6a8b91a fix(core): partial-failure handling in SubQuestionQueryEngine (#20905)
  • Additional commits viewable in compare view

Updates ruff from 0.14.11 to 0.15.5

Release notes

Sourced from ruff's releases.

0.15.5

Release Notes

Released on 2026-03-05.

Preview features

  • Discover Markdown files by default in preview mode (#23434)
  • [perflint] Extend PERF102 to comprehensions and generators (#23473)
  • [refurb] Fix FURB101 and FURB103 false positives when I/O variable is used later (#23542)
  • [ruff] Add fix for none-not-at-end-of-union (RUF036) (#22829)
  • [ruff] Fix false positive for re.split with empty string pattern (RUF055) (#23634)

Bug fixes

  • [fastapi] Handle callable class dependencies with __call__ method (FAST003) (#23553)
  • [pydocstyle] Fix numpy section ordering (D420) (#23685)
  • [pyflakes] Fix false positive for names shadowing re-exports (F811) (#23356)
  • [pyupgrade] Avoid inserting redundant None elements in UP045 (#23459)

Documentation

  • Document extension mapping for Markdown code formatting (#23574)
  • Update default Python version examples (#23605)

Other changes

  • Publish releases to Astral mirror (#23616)

Contributors

Install ruff 0.15.5

Install prebuilt binaries via shell script

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/ruff/releases/download/0.15.5/ruff-installer.sh | sh

Install prebuilt binaries via powershell script

... (truncated)

Changelog

Sourced from ruff's changelog.

0.15.5

Released on 2026-03-05.

Preview features

  • Discover Markdown files by default in preview mode (#23434)
  • [perflint] Extend PERF102 to comprehensions and generators (#23473)
  • [refurb] Fix FURB101 and FURB103 false positives when I/O variable is used later (#23542)
  • [ruff] Add fix for none-not-at-end-of-union (RUF036) (#22829)
  • [ruff] Fix false positive for re.split with empty string pattern (RUF055) (#23634)

Bug fixes

  • [fastapi] Handle callable class dependencies with __call__ method (FAST003) (#23553)
  • [pydocstyle] Fix numpy section ordering (D420) (#23685)
  • [pyflakes] Fix false positive for names shadowing re-exports (F811) (#23356)
  • [pyupgrade] Avoid inserting redundant None elements in UP045 (#23459)

Documentation

  • Document extension mapping for Markdown code formatting (#23574)
  • Update default Python version examples (#23605)

Other changes

  • Publish releases to Astral mirror (#23616)

Contributors

0.15.4

Released on 2026-02-26.

This is a follow-up release to 0.15.3 that resolves a panic when the new rule PLR1712 was enabled with any rule that analyzes definitions, such as many of the ANN or D rules.

Bug fixes

  • Fix panic on access to definitions after analyzing definitions (#23588)
  • [pyflakes] Suppress false positive in F821 for names used before del in stub files (#23550)

... (truncated)

Commits
  • 5e4a3d9 Bump 0.15.5 (#23743)
  • 69c23cc [ty] Render all changed diagnostics in conformance.py (#23613)
  • 4926bd5 [ty] Split deferred checks out of types/infer/builder.rs (#23740)
  • 9a70f5e Discover markdown files by default in preview mode (#23434)
  • 3dc78b0 [ty] Use HasOptionalDefinition for except handlers (#23739)
  • a6a5e8d [ty] Fix precedence of all selector in TOML configurations (#23723)
  • 2a5384b [ty] Make all selector case sensitive (#23713)
  • db77d7b [ty] Add a diagnostic if a TypeVar is used to specialize a ParamSpec, or ...
  • db28490 [ty] Override home directory in ty tests (#23724)
  • 5f0fd91 [ty] More type-variable default validation (#23639)
  • Additional commits viewable in compare view

Updates chromadb from 0.5.23 to 1.5.5

Release notes

Sourced from chromadb's releases.

1.5.5

Version: 1.5.5
Git ref: refs/tags/1.5.5
Build Date: 2026-03-10T09:30
PIP Package: chroma-1.5.5.tar.gz
Github Container Registry Image: :1.5.5
DockerHub Image: :1.5.5

What's Changed

Full Changelog: chroma-core/chroma@1.5.4...1.5.5

1.5.3

Version: 1.5.3
Git ref: refs/tags/1.5.3
Build Date: 2026-03-07T19:07
PIP Package: chroma-1.5.3.tar.gz
Github Container Registry Image: :1.5.3
DockerHub Image: :1.5.3

What's Changed

... (truncated)

Commits

Updates llama-index-embeddings-openai from 0.5.1 to 0.5.2

Updates llama-index-llms-cohere from 0.6.1 to 0.7.1

Updates llama-index-llms-openai from 0.6.13 to 0.6.26

Updates llama-index-postprocessor-cohere-rerank from 0.5.1 to 0.7.0

Updates onnxruntime from 1.19.2 to 1.24.3

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.24.3

This is a patch release for ONNX Runtime 1.24, containing bug fixes, security improvements, performance enhancements, and execution provider updates.

Security Fixes

  • Core: Fixed GatherCopyData integer truncation leading to heap out-of-bounds read/write. (#27444)
  • Core: Fixed RoiAlign heap out-of-bounds read via unchecked batch_indices. (#27543)
  • Core: Prevent heap OOB from maliciously crafted Lora Adapters. (#27518)
  • Core: Fixed out-of-bounds access for Resize operation. (#27419)

Bug Fixes

  • Core: Fixed GatherND division by zero when batch dimensions mismatch. (#27090)
  • Core: Fixed validation for external data paths for models loaded from bytes. (#27430)
  • Core: Fixed SkipLayerNorm fusion incorrectly applied when gamma/beta are not 1D. (#27459)
  • Core: Fixed double-free in TRT EP custom op domain Release functions. (#27471)
  • Core: Fixed QMoE CPU Operator. (#27360)
  • Core: Fixed MatmulNBits prepacking scales. (#27412)
  • Python: Fixed refcount bug in map input conversion that caused shutdown segfault. (#27413)
  • NuGet: Fixed DllImportResolver. (#27397)
  • NuGet: Added OrtEnv.DisableDllImportResolver to prevent fatal error on resolver conflict. (#27535)

Performance Improvements

  • Core: QMoE CPU performance update (up to 4x on 4-bit). (#27364)
  • Core: Fixed O(n²) model load time for TreeEnsemble with categorical feature chains. (#27391)

Execution Provider Updates

  • NvTensorRtRtx EP:
    • Avoid repetitive creation of fp4/fp8 native-custom-op domains. (#27192)
    • Added missing override specifiers to suppress warnings. (#27288)
    • DQ→MatMulNBits fusion transformer. (#27466)
  • WebGPU:
    • Used embedded WASM module in Blob URL workers when wasmBinary is provided. (#27318)
    • Fixed usage of wasmBinary together with a blob URL for .mjs. (#27411)
    • Removed the unhelpful "Unknown CPU vendor" warning. (#27399)
    • Allows new memory info name for WebGPU. (#27475)
  • MLAS:
    • Added DynamicQGemm function pointers and ukernel interface. (#27403)
    • Fixed error where bytes is not assigned for dynamic qgemm pack b size. (#27421)
  • VitisAI EP: Removed s_kernel_registry_vitisaiep.reset() in deinitialize_vitisai_ep(). (#27295)
  • Plugin EPs: Added "library_path" metadata entry to OrtEpDevice instances for plugin and provider bridge EPs. (#27522)

Build and Infrastructure

  • Pipelines:
    • Build Windows ARM64X binaries as part of packaging pipeline. (#27316)
    • Moved JAR testing pipelines to canonical pipeline template. (#27480)
  • Python: Enabled Python 3.14 CI and upgraded dependencies. (#27401)
  • Build: Suppressed spurious Array Out of Bounds warnings produced by GCC 14.2 compiler on Linux builds. (#27454)
  • Build: Fixed -Warray-bounds build error in MLAS on clang 17+. (#27499)
  • Telemetry: Added/Updated telemetry events. (#27356)
  • Config: Increased kMaxValueLength to 8192. (#27521)

... (truncated)

Commits

Bumps the gha group with 16 updates in the /packages/opentelemetry-instrumentation-llamaindex directory:

| Package | From | To |
| --- | --- | --- |
| [opentelemetry-api](https://github.com/open-telemetry/opentelemetry-python) | `1.39.1` | `1.40.0` |
| opentelemetry-semantic-conventions-ai | `0.4.13` | `0.4.15` |
| [llama-index](https://github.com/run-llama/llama_index) | `0.14.12` | `0.14.16` |
| [ruff](https://github.com/astral-sh/ruff) | `0.14.11` | `0.15.5` |
| [chromadb](https://github.com/chroma-core/chroma) | `0.5.23` | `1.5.5` |
| llama-index-embeddings-openai | `0.5.1` | `0.5.2` |
| llama-index-llms-cohere | `0.6.1` | `0.7.1` |
| llama-index-llms-openai | `0.6.13` | `0.6.26` |
| llama-index-postprocessor-cohere-rerank | `0.5.1` | `0.7.0` |
| [onnxruntime](https://github.com/microsoft/onnxruntime) | `1.19.2` | `1.24.3` |
| [openai](https://github.com/openai/openai-python) | `1.109.1` | `2.26.0` |
| [opentelemetry-instrumentation-chromadb](https://github.com/traceloop/openllmetry) | `0.50.1` | `0.53.0` |
| [opentelemetry-instrumentation-cohere](https://github.com/traceloop/openllmetry) | `0.50.1` | `0.53.0` |
| [opentelemetry-instrumentation-openai](https://github.com/traceloop/openllmetry) | `0.50.1` | `0.53.0` |
| [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) | `0.23.8` | `1.3.0` |
| [sqlalchemy](https://github.com/sqlalchemy/sqlalchemy) | `2.0.45` | `2.0.48` |
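Grouped updates like this are configured via `groups` in `.github/dependabot.yml`. A hedged sketch of what produces a single grouped PR for this directory, assuming the `uv` package ecosystem that the branch name suggests (the group name, directory, and schedule here are illustrative):

```yaml
version: 2
updates:
  - package-ecosystem: "uv"
    directory: "/packages/opentelemetry-instrumentation-llamaindex"
    schedule:
      interval: "weekly"
    groups:
      gha:
        patterns:
          - "*"   # bundle every dependency in this directory into one PR
```

With `patterns: ["*"]`, Dependabot opens one PR per group per directory instead of one PR per package, which is why 19 updates land in a single PR here.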



Updates `opentelemetry-api` from 1.39.1 to 1.40.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-python/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md)
- [Commits](open-telemetry/opentelemetry-python@v1.39.1...v1.40.0)

Updates `opentelemetry-instrumentation` from 0.60b1 to 0.61b0
- [Release notes](https://github.com/open-telemetry/opentelemetry-python-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-python-contrib/commits)

Updates `opentelemetry-semantic-conventions-ai` from 0.4.13 to 0.4.15

Updates `opentelemetry-semantic-conventions` from 0.60b1 to 0.61b0
- [Release notes](https://github.com/open-telemetry/opentelemetry-python/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-python/commits)

Updates `llama-index` from 0.14.12 to 0.14.16
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.14.12...v0.14.16)

Updates `ruff` from 0.14.11 to 0.15.5
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](astral-sh/ruff@0.14.11...0.15.5)

Updates `chromadb` from 0.5.23 to 1.5.5
- [Release notes](https://github.com/chroma-core/chroma/releases)
- [Changelog](https://github.com/chroma-core/chroma/blob/main/RELEASE_PROCESS.md)
- [Commits](chroma-core/chroma@0.5.23...1.5.5)

Updates `llama-index-embeddings-openai` from 0.5.1 to 0.5.2

Updates `llama-index-llms-cohere` from 0.6.1 to 0.7.1

Updates `llama-index-llms-openai` from 0.6.13 to 0.6.26

Updates `llama-index-postprocessor-cohere-rerank` from 0.5.1 to 0.7.0

Updates `onnxruntime` from 1.19.2 to 1.24.3
- [Release notes](https://github.com/microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/main/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.19.2...v1.24.3)

Updates `openai` from 1.109.1 to 2.26.0
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](openai/openai-python@v1.109.1...v2.26.0)

Updates `opentelemetry-instrumentation-chromadb` from 0.50.1 to 0.53.0
- [Release notes](https://github.com/traceloop/openllmetry/releases)
- [Changelog](https://github.com/traceloop/openllmetry/blob/main/CHANGELOG.md)
- [Commits](0.50.1...0.53.0)

Updates `opentelemetry-instrumentation-cohere` from 0.50.1 to 0.53.0
- [Release notes](https://github.com/traceloop/openllmetry/releases)
- [Changelog](https://github.com/traceloop/openllmetry/blob/main/CHANGELOG.md)
- [Commits](0.50.1...0.53.0)

Updates `opentelemetry-instrumentation-openai` from 0.50.1 to 0.53.0
- [Release notes](https://github.com/traceloop/openllmetry/releases)
- [Changelog](https://github.com/traceloop/openllmetry/blob/main/CHANGELOG.md)
- [Commits](0.50.1...0.53.0)

Updates `opentelemetry-sdk` from 1.39.1 to 1.40.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-python/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-python/blob/main/CHANGELOG.md)
- [Commits](open-telemetry/opentelemetry-python@v1.39.1...v1.40.0)

Updates `pytest-asyncio` from 0.23.8 to 1.3.0
- [Release notes](https://github.com/pytest-dev/pytest-asyncio/releases)
- [Commits](pytest-dev/pytest-asyncio@v0.23.8...v1.3.0)
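The pytest-asyncio 0.23 → 1.x jump is a major bump: 1.0 removed the long-deprecated `event_loop` fixture, so tests that overrode it need migrating (to `loop_scope` on the marker, for instance). A minimal test that should still pass under strict mode on both lines, as a sanity check:

```python
# Hedged sketch: explicit marking still works across the 0.23 -> 1.x bump;
# overrides of the removed `event_loop` fixture are what need migrating.
import asyncio

import pytest


@pytest.mark.asyncio
async def test_sleep_returns_none():
    assert await asyncio.sleep(0) is None
```

Running `pytest` on a suite after the bump will surface any remaining `event_loop` overrides as errors rather than deprecation warnings.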

Updates `sqlalchemy` from 2.0.45 to 2.0.48
- [Release notes](https://github.com/sqlalchemy/sqlalchemy/releases)
- [Changelog](https://github.com/sqlalchemy/sqlalchemy/blob/main/CHANGES.rst)
- [Commits](https://github.com/sqlalchemy/sqlalchemy/commits)

---
updated-dependencies:
- dependency-name: opentelemetry-api
  dependency-version: 1.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: opentelemetry-instrumentation
  dependency-version: 0.61b0
  dependency-type: direct:production
  dependency-group: gha
- dependency-name: opentelemetry-semantic-conventions-ai
  dependency-version: 0.4.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: gha
- dependency-name: opentelemetry-semantic-conventions
  dependency-version: 0.61b0
  dependency-type: direct:production
  dependency-group: gha
- dependency-name: llama-index
  dependency-version: 0.14.16
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: gha
- dependency-name: ruff
  dependency-version: 0.15.5
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: chromadb
  dependency-version: 1.5.5
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: gha
- dependency-name: llama-index-embeddings-openai
  dependency-version: 0.5.2
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: gha
- dependency-name: llama-index-llms-cohere
  dependency-version: 0.7.1
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: llama-index-llms-openai
  dependency-version: 0.6.26
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: gha
- dependency-name: llama-index-postprocessor-cohere-rerank
  dependency-version: 0.7.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: onnxruntime
  dependency-version: 1.24.3
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: openai
  dependency-version: 2.26.0
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: gha
- dependency-name: opentelemetry-instrumentation-chromadb
  dependency-version: 0.53.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: opentelemetry-instrumentation-cohere
  dependency-version: 0.53.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: opentelemetry-instrumentation-openai
  dependency-version: 0.53.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: opentelemetry-sdk
  dependency-version: 1.40.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: gha
- dependency-name: pytest-asyncio
  dependency-version: 1.3.0
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: gha
- dependency-name: sqlalchemy
  dependency-version: 2.0.48
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: gha
...

Signed-off-by: dependabot[bot] <support@github.com>
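The `update-type` fields in the metadata above classify each bump by plain semver comparison. A minimal stdlib sketch of that classification (plain `x.y.z` versions only; pre-release forms like `0.61b0` are out of scope here):

```python
def semver_update_type(old: str, new: str) -> str:
    """Classify a plain x.y.z version bump as 'major', 'minor', or 'patch'."""
    o, n = old.split("."), new.split(".")
    if o[0] != n[0]:
        return "major"
    if o[1] != n[1]:
        return "minor"
    return "patch"
```

Applied to this PR: `openai` 1.109.1 → 2.26.0 and `pytest-asyncio` 0.23.8 → 1.3.0 are majors and deserve the closest review; `sqlalchemy` 2.0.45 → 2.0.48 is a patch.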
dependabot bot added the dependencies label (Pull requests that update a dependency file) on Mar 11, 2026

dependabot bot commented on behalf of github on Mar 17, 2026

Looks like these dependencies are updatable in another way, so this is no longer needed.

dependabot bot closed this on Mar 17, 2026
dependabot bot deleted the dependabot/uv/packages/opentelemetry-instrumentation-llamaindex/gha-5e0447eaf2 branch on March 17, 2026 at 13:20
