feat: LiquidAI audio plugin for LiveKit Agents #4656
Conversation
📝 Walkthrough
Adds a new LiquidAI audio plugin package for LiveKit Agents: initializes and registers a LiquidAIPlugin at import, exposes STT/TTS and version, provides a module logger, and introduces packaging and README files.
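The "registers a LiquidAIPlugin at import" step follows the repository's Plugin System pattern. Below is a minimal sketch of what such an `__init__.py` typically looks like in other livekit-plugins packages; the constructor arguments and exported names are assumptions modeled on those plugins, not the exact code in this PR.

```python
# Hedged sketch of plugin registration at import time, modeled on other
# livekit-plugins packages; the exact fields in this PR may differ.
from livekit.agents import Plugin

from .log import logger
from .stt import STT
from .tts import TTS
from .version import __version__

__all__ = ["STT", "TTS", "__version__"]


class LiquidAIPlugin(Plugin):
    def __init__(self) -> None:
        # title, version, package, and logger are what the Plugin base class
        # receives in other plugins; treat this signature as an assumption.
        super().__init__(__name__, __version__, __package__, logger)


Plugin.register_plugin(LiquidAIPlugin())
```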
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Client
    participant STT as STT Engine
    participant API as LiquidAI API
    participant Audio as Audio Processing
    Client->>STT: recognize(audio_buffer)
    STT->>Audio: convert to WAV & base64
    STT->>API: streaming /chat/completions (system + audio)
    API-->>STT: stream text deltas
    STT->>STT: assemble transcript
    STT->>Client: SpeechEvent(FINAL_TRANSCRIPT)
```
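The "convert to WAV & base64" step above amounts to roughly the following sketch. The helper name and default parameters are assumptions, and the exact request schema the plugin sends to the LiquidAI server is not shown here.

```python
import base64
import io
import wave


def pcm_to_base64_wav(pcm: bytes, sample_rate: int = 16000, num_channels: int = 1) -> str:
    """Wrap raw int16 PCM in a WAV container and return it base64-encoded."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(num_channels)
        wav.setsampwidth(2)  # 2 bytes per sample for int16 audio
        wav.setframerate(sample_rate)
        wav.writeframes(pcm)
    return base64.b64encode(buf.getvalue()).decode("utf-8")
```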
```mermaid
sequenceDiagram
    participant Client
    participant TTS as TTS Engine
    participant API as LiquidAI API
    participant Audio as Audio Processing
    participant Emitter as AudioEmitter
    Client->>TTS: synthesize(text)
    TTS->>API: streaming /chat/completions (system + text)
    API-->>TTS: stream audio chunks (base64 float32)
    TTS->>Audio: base64 decode & float32→int16
    Audio->>Emitter: push PCM samples
    TTS->>Emitter: flush
    TTS->>Client: deliver ChunkedStream
```
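The "base64 decode & float32→int16" step above corresponds to something like this sketch. The chunk format (base64-encoded float32 PCM in [-1.0, 1.0]) is taken from the walkthrough; the plugin's actual conversion code may differ in detail.

```python
import base64

import numpy as np


def decode_audio_chunk(chunk_b64: str) -> bytes:
    """Decode base64 float32 PCM in [-1.0, 1.0] and return int16 little-endian bytes."""
    samples = np.frombuffer(base64.b64decode(chunk_b64), dtype=np.float32)
    clipped = np.clip(samples, -1.0, 1.0)
    return (clipped * 32767.0).astype(np.int16).tobytes()
```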
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested Reviewers
Poem
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📜 Recent review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🧰 Additional context used
📓 Path-based instructions (1): **/*.py — 📄 CodeRabbit inference engine (AGENTS.md)
🧠 Learnings (3): 📓 Common learnings · 📚 Learning: 2026-01-16T07:44:56.353Z · 📚 Learning: 2026-01-18T01:09:01.847Z
🧬 Code graph analysis (1): livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
🔇 Additional comments (5)
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py`:
- Around line 120-128: The model parameter in the streaming call to
self._client.chat.completions.create is an empty string, causing HTTP 400;
change the call to pass the instance's model property (self.model) so the
correct model identifier (e.g., "LFM2.5-Audio") is sent; update the invocation
of self._client.chat.completions.create to replace model="" with
model=self.model and ensure any tests/configs set self.model appropriately.
In `@livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py`:
- Around line 143-151: The chat completion call is using an empty model string,
which will be rejected; update the call to pass the correct LiquidAI model
identifier by replacing model="" with the TTS instance's configured model (use
self.tts.model, which is "LFM2.5-Audio") in the
self._client.chat.completions.create invocation so the request uses the proper
model name.
In `@livekit-plugins/livekit-plugins-liquidai/pyproject.toml`:
- Line 25: Update the dependency list in pyproject.toml to require a newer
OpenAI client and to declare httpx explicitly: replace the existing dependencies
array entry that includes "openai>=1.0.0" with a bumped version (for example
"openai>=1.107.2" or a stable "openai>=2.0.0") and add "httpx" (e.g.,
"httpx>=0.24.0") alongside existing items; note that stt.py and tts.py import
httpx directly so adding it as an explicit dependency ensures installations
include it.
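A minimal, hedged sketch of the model-parameter fix described above for both stt.py and tts.py: the streaming call should receive the configured model identifier rather than an empty string. The function and parameter names below are illustrative, not the plugin's exact code.

```python
import openai


async def stream_transcript(
    client: openai.AsyncOpenAI, model: str, messages: list[dict]
) -> str:
    """Collect streamed text deltas; `model` must be the configured id, never ""."""
    stream = await client.chat.completions.create(
        model=model,  # e.g. "LFM2.5-Audio", taken from the instance's configured model
        messages=messages,
        stream=True,
    )
    parts: list[str] = []
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)
```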
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/__init__.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/log.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/version.py
- livekit-plugins/livekit-plugins-liquidai/pyproject.toml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings
Files:
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/version.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/log.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/__init__.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
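For reference, here is a tiny function that satisfies the guidelines listed above (typed for mypy strict, within 100 columns, Python 3.9-compatible, Google-style docstring). It is illustrative only and not part of the PR.

```python
def scale_pcm(samples: list[float], gain: float = 1.0) -> list[float]:
    """Scale PCM samples by a gain factor.

    Args:
        samples: Audio samples in the range [-1.0, 1.0].
        gain: Multiplier applied to each sample.

    Returns:
        The scaled samples, clamped to [-1.0, 1.0].
    """
    return [max(-1.0, min(1.0, s * gain)) for s in samples]
```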
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Implement Model Interface Pattern for STT, TTS, LLM, and Realtime models with provider-agnostic interfaces, fallback adapters for resilience, and stream adapters for different streaming patterns
Learnt from: davidzhao
Repo: livekit/agents PR: 4548
File: livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py:77-81
Timestamp: 2026-01-18T01:09:01.847Z
Learning: In the OpenAI responses LLM (`livekit-plugins-openai/livekit/plugins/openai/responses/llm.py`), reasoning effort defaults are intentionally set lower than OpenAI's API defaults for voice interactions: "none" for gpt-5.1/gpt-5.2 and "minimal" for other reasoning-capable models like gpt-5, to avoid enabling reasoning by default in voice contexts.
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
📚 Learning: 2026-01-16T07:44:56.353Z
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
Applied to files:
livekit-plugins/livekit-plugins-liquidai/pyproject.toml
📚 Learning: 2026-01-16T07:44:56.353Z
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Implement Model Interface Pattern for STT, TTS, LLM, and Realtime models with provider-agnostic interfaces, fallback adapters for resilience, and stream adapters for different streaming patterns
Applied to files:
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
🧬 Code graph analysis (3)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/__init__.py (3)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (1)
  STT (28-156)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (1)
  TTS (36-114)
livekit-agents/livekit/agents/plugin.py (2)
  Plugin (13-56)
  register_plugin (31-36)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (4)
livekit-agents/livekit/agents/_exceptions.py (2)
  APIConnectionError (84-88)
  APITimeoutError (91-95)
livekit-agents/livekit/agents/types.py (1)
  APIConnectOptions (54-88)
livekit-agents/livekit/agents/utils/misc.py (1)
  is_given (25-26)
livekit-agents/livekit/agents/tts/tts.py (1)
  TTSCapabilities (47-51)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (3)
livekit-agents/livekit/agents/_exceptions.py (2)
  APIConnectionError (84-88)
  APITimeoutError (91-95)
livekit-agents/livekit/agents/stt/stt.py (4)
  SpeechEventType (32-49)
  STTCapabilities (78-84)
  SpeechEvent (70-74)
  SpeechData (53-61)
livekit-agents/livekit/agents/utils/misc.py (1)
  is_given (25-26)
🔇 Additional comments (15)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/version.py (1)
1-1: LGTM. Version constant is straightforward and in the right place.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/log.py (1)
1-3: LGTM. Logger setup is minimal and clear.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/__init__.py (2)
31-33: LGTM. Plugin initialization wires required metadata cleanly.
36-45: LGTM. Registration and doc-visibility pruning are straightforward.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (5)
17-25: LGTM. Defaults and options container are clear.
31-70: LGTM. Constructor sets capabilities and defaults cleanly.
80-89: LGTM. Option updates are guarded appropriately.
149-153: LGTM. SpeechEvent mapping is concise and correct.
155-156: LGTM. Client shutdown is handled.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (6)
21-34: LGTM. Defaults and options container are clean.
39-78: LGTM. Constructor wiring is clear.
88-97: LGTM. Option updates are straightforward.
99-111: LGTM. ChunkedStream construction is clean.
113-114: LGTM. Client shutdown is handled.
120-132: LGTM. Stream wrapper initialization is tidy.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@livekit-plugins/livekit-plugins-liquidai/README.md`:
- Line 5: Convert the bare URL in the README.md to a Markdown link by replacing
the plain URL "https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF" with a
link label and URL format, e.g. use a descriptive label like
[LFM2.5-Audio-1.5B-GGUF](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF)
so the linter warning is resolved and the link is more readable.
- Line 3: Update the README description string "LFM2.5-Audio family of SST from
LiquidAI." to use the correct abbreviation "STT" (i.e., change "SST" -> "STT");
search the README.md for any other occurrences of "SST" and replace them with
"STT" so the phrase "LFM2.5-Audio family of STT from LiquidAI" is consistent.
- Around line 15-20: The README's "Start audio server" section uses the external
binary llama-liquid-audio-server (and CKPT paths) without guidance; update the
docs to state that llama-liquid-audio-server is provided by LiquidAI, link to
the official repository or release page, add brief build/install steps (git
clone, build prerequisites like Rust/CMake/GCC or where to download prebuilt
binaries), and list system prerequisites (OS, GPU/CPU, CUDA/CUDNN or AVX
requirements, disk for model checkpoints) plus an example of placing model files
under the CKPT directory so the existing command will work.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
livekit-plugins/livekit-plugins-liquidai/README.md
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
📚 Learning: 2026-01-16T07:44:56.353Z
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
Applied to files:
livekit-plugins/livekit-plugins-liquidai/README.md
🪛 markdownlint-cli2 (0.20.0)
livekit-plugins/livekit-plugins-liquidai/README.md
[warning] 5-5: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (1)
livekit-plugins/livekit-plugins-liquidai/README.md (1)
7-11: LGTM! The installation instructions are clear and follow standard conventions.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py`:
- Around line 88-97: The voice option is stored but never applied; when
update_options receives voice (self._opts.voice) you must fold it into the
effective system prompt used for synthesis (self._opts.system_prompt) so the API
receives the voice directive. Modify update_options (and the request assembly
code that currently reads self._opts.system_prompt) to compute a combined prompt
(e.g., prepend or append a short "Voice: {voice}" instruction when
self._opts.voice is given) and use that combined prompt in the API request
payload instead of using system_prompt alone; ensure the same merging logic is
used wherever the request body is constructed so update_options(voice=...)
actually changes synthesis.
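A minimal sketch of the suggested merge, assuming a small options container like the plugin's; the field and helper names here follow the review's wording and are assumptions, not the plugin's exact code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class _TTSOptions:
    system_prompt: str
    voice: Optional[str] = None


def effective_system_prompt(opts: _TTSOptions) -> str:
    """Fold the voice directive into the prompt that is sent with every request."""
    if opts.voice:
        return f"{opts.system_prompt}\nVoice: {opts.voice}"
    return opts.system_prompt
```

Whatever builds the request body would then call effective_system_prompt(self._opts) instead of reading system_prompt directly, so update_options(voice=...) actually changes synthesis.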
🧹 Nitpick comments (3)
livekit-plugins/livekit-plugins-liquidai/README.md (3)
3-3: Be more specific about the model family. The phrase "Audio family" is vague. Since the plugin specifically supports the LFM2.5-Audio models (as indicated in the title and HuggingFace link), clarify this in the description.
📝 Suggested improvement
- Support for the Audio family of SST/TTS from LiquidAI.
+ Support for the LFM2.5-Audio family of SST/TTS from LiquidAI.
18-21: Improve command readability with line continuation. The server startup command is very long and difficult to read on a single line. Breaking it into multiple lines using bash line continuation would make it easier for users to understand the command structure and parameters.
♻️ Suggested improvement
```bash
export CKPT=/path/to/LFM2.5-Audio-1.5B-GGUF
./llama-liquid-audio-server \
  -m $CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf \
  -mm $CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf \
  -mv $CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf \
  --tts-speaker-file $CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf
```
1-21: Consider adding usage examples. The README would be more helpful with a brief usage example showing how to instantiate and use the STT and TTS classes in LiveKit Agents code. This would help users get started quickly after installation.
💡 Example structure
Consider adding a "## Usage" section like:
```python
from livekit import agents
from livekit.plugins import liquidai

# Speech-to-Text
stt = liquidai.STT()

# Text-to-Speech
tts = liquidai.TTS()
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📥 Commits
Reviewing files that changed from the base of the PR and between 7ac3560743b7c2b89c0e713139cbe16eb14bb932 and 810d1010e5cfc39a0e5b50277836ef541274b1e1.
📒 Files selected for processing (2)
- livekit-plugins/livekit-plugins-liquidai/README.md
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings
Files:
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
📚 Learning: 2026-01-18T01:09:01.847Z
Learnt from: davidzhao
Repo: livekit/agents PR: 4548
File: livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py:77-81
Timestamp: 2026-01-18T01:09:01.847Z
Learning: In the OpenAI responses LLM (livekit-plugins-openai/livekit/plugins/openai/responses/llm.py), reasoning effort defaults are intentionally set lower than OpenAI's API defaults for voice interactions: "none" for gpt-5.1/gpt-5.2 and "minimal" for other reasoning-capable models like gpt-5, to avoid enabling reasoning by default in voice contexts.
Applied to files:
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: unit-tests
🔇 Additional comments (5)
livekit-plugins/livekit-plugins-liquidai/README.md (1)
8-12: LGTM! The installation instructions are clear and follow standard Python package conventions.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (4)
21-34: Defaults/options setup looks good. Clear defaults and a simple options container keep the surface area clean.
99-114: synthesize/aclose wiring looks solid. Clean construction of the stream object and proper client shutdown.
120-132: ChunkedStream initialization is clean. State setup and base initialization are straightforward.
56-60: No changes needed. The streaming=False flag is correct. The TTSCapabilities.streaming flag indicates whether the TTS accepts streaming text input (incremental text chunks), not whether it outputs audio in chunks. LiquidAI's synthesize() requires the full text upfront and does not implement a stream() method for accepting text chunks, making streaming=False appropriate. This follows the standard pattern used by OpenAI, Azure, Google, and other non-streaming TTS implementations. The framework automatically wraps non-streaming TTS with StreamAdapter in the agent pipeline when needed.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py`:
- Around line 102-103: The code mutates instance state by assigning
self._opts.language = language when a per-call language override is provided;
instead compute a local effective_language variable (e.g., effective_language =
language if is_given(language) else self._opts.language) inside the method and
pass that to _transcription_to_speech_event (update
_transcription_to_speech_event signature to accept the effective_language) so
the instance's self._opts remains unchanged and per-call overrides only affect
the current call.
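A minimal sketch of the non-mutating override, using the NOT_GIVEN/is_given helpers from livekit-agents that the plugin already imports; the standalone function is illustrative, and in the plugin this logic would live inside the recognize implementation.

```python
from livekit.agents.types import NOT_GIVEN, NotGivenOr
from livekit.agents.utils import is_given


def resolve_language(default_language: str, language: NotGivenOr[str] = NOT_GIVEN) -> str:
    """Prefer the per-call override without mutating the stored options."""
    return language if is_given(language) else default_language
```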
🧹 Nitpick comments (1)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (1)
193-200: Consider explicitly handling asyncio.CancelledError for clarity. While asyncio.CancelledError is a BaseException subclass (not Exception) in Python 3.8+, explicitly re-raising it improves code clarity and future-proofs against potential changes:
♻️ Optional improvement
+import asyncio
 import base64
 import uuid
 from dataclasses import dataclass

         except openai.APITimeoutError:
             raise APIConnectionError() from None
+        except asyncio.CancelledError:
+            raise
         except openai.APIStatusError as e:
             logger.error(f"TTS API error: {e}")
             raise APIConnectionError() from e
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- livekit-plugins/livekit-plugins-liquidai/README.md
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/py.typed
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
- livekit-plugins/livekit-plugins-liquidai/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- livekit-plugins/livekit-plugins-liquidai/README.md
- livekit-plugins/livekit-plugins-liquidai/pyproject.toml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings
Files:
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Follow the Plugin System pattern where plugins in livekit-plugins/ are separate packages registered via the Plugin base class
📚 Learning: 2026-01-16T07:44:56.353Z
Learnt from: CR
Repo: livekit/agents PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-16T07:44:56.353Z
Learning: Implement Model Interface Pattern for STT, TTS, LLM, and Realtime models with provider-agnostic interfaces, fallback adapters for resilience, and stream adapters for different streaming patterns
Applied to files:
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
📚 Learning: 2026-01-18T01:09:01.847Z
Learnt from: davidzhao
Repo: livekit/agents PR: 4548
File: livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py:77-81
Timestamp: 2026-01-18T01:09:01.847Z
Learning: In the OpenAI responses LLM (`livekit-plugins-openai/livekit/plugins/openai/responses/llm.py`), reasoning effort defaults are intentionally set lower than OpenAI's API defaults for voice interactions: "none" for gpt-5.1/gpt-5.2 and "minimal" for other reasoning-capable models like gpt-5, to avoid enabling reasoning by default in voice contexts.
Applied to files:
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py
- livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py
🧬 Code graph analysis (1)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (4)
livekit-agents/livekit/agents/_exceptions.py (2)
  APIConnectionError (84-88)
  APITimeoutError (91-95)
livekit-agents/livekit/agents/types.py (1)
  APIConnectOptions (54-88)
livekit-agents/livekit/agents/utils/misc.py (1)
  is_given (25-26)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (4)
  model (76-77)
  provider (80-81)
  update_options (83-92)
  aclose (160-161)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: type-check (3.9)
- GitHub Check: type-check (3.13)
- GitHub Check: unit-tests
🔇 Additional comments (11)
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/stt.py (6)
1-23: LGTM! Imports are well-organized and constants are appropriately defined for a local LiquidAI server setup.
25-73: LGTM! The initialization is well-structured with appropriate timeout configurations for audio processing. The use of the NOT_GIVEN pattern for optional parameters aligns with the codebase conventions.
75-81: LGTM! Properties are correctly implemented.
83-92: LGTM! The update_options method follows the established pattern for runtime configuration updates.
154-158: LGTM! The helper method correctly constructs the speech event.
160-161: LGTM! Proper cleanup of the async client.
livekit-plugins/livekit-plugins-liquidai/livekit/plugins/liquidai/tts.py (5)
1-37: LGTM! Imports and constants are well-defined. The audio parameters (24kHz sample rate, mono channel) are appropriate for TTS output.
39-81: LGTM! Initialization is consistent with the STT implementation, with appropriate TTS-specific capabilities and audio parameters.
102-117: LGTM! The synthesize factory method and aclose cleanup are correctly implemented.
120-134: LGTM! ChunkedStream initialization properly stores the required dependencies for streaming.
136-191: LGTM! The streaming implementation correctly processes audio chunks, converting float32 PCM to int16 format. The defensive checks for chunk.choices and delta handle edge cases appropriately.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Added LiquidAI LFM2.5-Audio plugin for LiveKit Agents
Summary by CodeRabbit
New Features
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.