feat: add manage_profiler tool for CPU timing, GC alloc, and animation profiling #991

Open

zaferdace wants to merge 6 commits into CoplayDev:beta from zaferdace:feat/manage-profiler
Conversation


@zaferdace zaferdace commented Mar 26, 2026

Summary

Adds a new manage_profiler tool that exposes Unity's ProfilerRecorder API for CPU timing, script performance, GC allocation, and animation profiling data.

This fills a gap in the current toolset — manage_graphics covers rendering stats and memory totals, but there's no way to query script execution time, physics step duration, or per-frame GC allocations. manage_profiler complements manage_graphics with zero overlap.

Actions

| Action | Category | What it returns |
| --- | --- | --- |
| get_frame_timing | Internal | Main thread, render thread, CPU/GPU frame time (ms) |
| get_script_timing | Scripts | Update, FixedUpdate, LateUpdate execution time (ms) |
| get_physics_timing | Physics | Physics.Simulate, Physics2D.Simulate time (ms) |
| get_gc_alloc | Memory | GC allocation bytes and count per frame |
| get_animation_timing | Animation | Animator.Update time (ms) |

All actions are read-only. Each counter includes a _valid flag indicating whether the ProfilerRecorder returned valid data (some counters require Play Mode).
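Assuming the documented response shape (each counter value paired with a matching `_valid` flag), a caller can filter out counters that were not recorded, for example in Edit Mode. A minimal sketch; the sample response dict below is illustrative, not captured from Unity:

```python
def valid_counters(data: dict) -> dict:
    """Keep only *_ms values whose paired *_valid flag is True.

    Assumes the pairing convention described in this PR,
    e.g. "main_thread_ms" is guarded by "main_thread_valid".
    """
    out = {}
    for key, value in data.items():
        if not key.endswith("_ms"):
            continue
        if data.get(key[: -len("_ms")] + "_valid", False):
            out[key] = value
    return out


# Illustrative Edit Mode response: GPU timing unavailable.
sample = {
    "main_thread_ms": 4.2, "main_thread_valid": True,
    "gpu_frame_ms": 0.0, "gpu_frame_valid": False,
}
```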

Files

C# (6 files)

  • MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs — dispatcher
  • Operations/FrameTimingOps.cs
  • Operations/ScriptTimingOps.cs
  • Operations/PhysicsTimingOps.cs
  • Operations/GCAllocOps.cs
  • Operations/AnimationTimingOps.cs

Python (3 files)

  • Server/src/services/tools/manage_profiler.py — MCP tool
  • Server/src/cli/commands/profiler.py — CLI commands
  • Server/tests/test_manage_profiler.py — 20 tests

Test plan

  • All 20 Python tests pass (uv run pytest tests/test_manage_profiler.py -v)
  • Verify counter values in Play Mode on a Unity project
  • Verify graceful _valid: false in Edit Mode

🤖 Generated with Claude Code

Summary by Sourcery

Add a new core profiler tool and CLI commands to query Unity CPU, physics, GC allocation, and animation profiling metrics via MCP and the Unity editor.

New Features:

  • Introduce the manage_profiler MCP tool that exposes Unity Profiler counters for frame, script, physics, GC allocation, and animation timing.
  • Add profiler CLI commands to fetch specific profiling metrics from manage_profiler.

Tests:

  • Add a dedicated test suite for manage_profiler covering action validation, forwarding behavior, error handling, and tool registration.

Summary by CodeRabbit

  • New Features

    • In-editor profiler management exposing frame, script, physics, GC allocation, and animation timing metrics (five case-insensitive actions).
    • CLI profiler command group with subcommands to fetch those metrics.
    • Server-side tool endpoint to dispatch profiler requests to Unity.
  • Tests

    • Comprehensive tests for action validation, case-insensitivity, error handling, dispatch behavior, and response shapes.


sourcery-ai bot commented Mar 26, 2026

Reviewer's Guide

Introduces a new cross-layer manage_profiler tool wired from CLI → Python MCP service → Unity editor, exposing Unity ProfilerRecorder-based metrics for frame, script, physics, GC allocation, and animation timing, including validation flags and error handling.

Sequence diagram for manage_profiler end-to-end profiling call

sequenceDiagram
  actor User
  participant CLIProfiler as CLI_profiler_command
  participant MCPTool as MCP_manage_profiler_tool
  participant UnityTransport as Unity_transport
  participant UnityEditor as Unity_ManageProfiler

  User->>CLIProfiler: run `profiler frame-timing`
  CLIProfiler->>MCPTool: run_command(tool_name=manage_profiler, action=get_frame_timing)
  MCPTool->>MCPTool: validate action against PROFILER_ACTIONS
  MCPTool->>UnityTransport: send_with_unity_instance(async_send_command_with_retry, unity_instance, manage_profiler, {action: get_frame_timing})
  UnityTransport->>UnityEditor: HandleCommand({action: get_frame_timing})
  UnityEditor->>UnityEditor: new ToolParams(params)
  UnityEditor->>UnityEditor: FrameTimingOps.GetFrameTiming(params)
  UnityEditor->>UnityTransport: {success, message, data}
  UnityTransport->>MCPTool: result dict
  MCPTool->>CLIProfiler: result dict
  CLIProfiler->>User: formatted profiler output

Class diagram for new ManageProfiler and profiler operations

classDiagram
  class ManageProfiler {
    <<static>>
    +HandleCommand(params JObject) object
  }

  class FrameTimingOps {
    <<static>>
    -COUNTER_MAP : (string counterName, string jsonKey)[]
    +GetFrameTiming(params JObject) object
  }

  class ScriptTimingOps {
    <<static>>
    -COUNTER_MAP : (string counterName, string jsonKey)[]
    +GetScriptTiming(params JObject) object
  }

  class PhysicsTimingOps {
    <<static>>
    -COUNTER_MAP : (string counterName, string jsonKey)[]
    +GetPhysicsTiming(params JObject) object
  }

  class GCAllocOps {
    <<static>>
    +GetGCAlloc(params JObject) object
  }

  class AnimationTimingOps {
    <<static>>
    +GetAnimationTiming(params JObject) object
  }

  class ToolParams {
    +ToolParams(params JObject)
    +Get(key string) JToken
  }

  class ErrorResponse {
    +ErrorResponse(message string)
  }

  ManageProfiler ..> FrameTimingOps : uses
  ManageProfiler ..> ScriptTimingOps : uses
  ManageProfiler ..> PhysicsTimingOps : uses
  ManageProfiler ..> GCAllocOps : uses
  ManageProfiler ..> AnimationTimingOps : uses
  ManageProfiler ..> ToolParams : parses params
  ManageProfiler ..> ErrorResponse : error results

File-Level Changes

**Add Python MCP tool wrapper for Unity profiler operations and register it in the tool registry.**
(Server/src/services/tools/manage_profiler.py, Server/tests/test_manage_profiler.py)

  • Define PROFILER_ACTIONS and validate the requested action, including case normalization.
  • Implement async manage_profiler that resolves the Unity instance from context and forwards a minimal {action} payload via send_with_unity_instance/async_send_command_with_retry.
  • Return a uniform dict response, wrapping non-dict Unity responses into an error.
  • Annotate tool metadata (group, description, ToolAnnotations) and register via mcp_for_unity_tool.

**Expose profiler actions through a CLI profiler command group to invoke the MCP tool from the terminal.**
(Server/src/cli/commands/profiler.py)

  • Create the Click command group profiler with subcommands for frame-timing, script-timing, physics-timing, gc-alloc, and animation-timing.
  • Wire each subcommand to call run_command("manage_profiler", {"action": ...}) and format output via format_output.
  • Apply standard connection handling with handle_unity_errors and get_config.

**Implement the Unity-side manage_profiler dispatcher that routes actions to specific ProfilerRecorder-based operations.**
(MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs)

  • Create ManageProfiler.HandleCommand that validates params, lowercases the action, and switches across profiler operations, returning ErrorResponse for unknown actions or exceptions.
  • Ensure case-insensitive action handling and log failures via McpLog.Error.
  • Mark the C# tool with McpForUnityTool("manage_profiler", Group="core", AutoRegister=false).

**Add Unity ProfilerRecorder-backed operations for frame, script, physics, GC allocation, and animation metrics with validity flags.**
(MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs, ScriptTimingOps.cs, PhysicsTimingOps.cs, GCAllocOps.cs, AnimationTimingOps.cs)

  • FrameTimingOps: record main thread, render thread, CPU frame time, and GPU frame time from ProfilerCategory.Internal, convert nanoseconds to ms, and emit *_ms plus *_valid fields.
  • ScriptTimingOps: record BehaviourUpdate, FixedBehaviourUpdate, LateBehaviourUpdate from ProfilerCategory.Scripts and output *_ms plus *_valid fields.
  • PhysicsTimingOps: record Physics.Simulate and Physics2D.Simulate from ProfilerCategory.Physics to *_ms and *_valid fields.
  • GCAllocOps: record GC.Alloc and GC.Alloc.Count from ProfilerCategory.Memory, returning bytes/count and validity flags.
  • AnimationTimingOps: record Animator.Update from ProfilerCategory.Animation, exposing animator_update_ms and animator_update_valid.
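The validate-and-forward behavior described for the Python tool can be sketched in a few lines. The constant and tool names come from this PR's summaries; the transport callable is stubbed here for illustration, not the real send_with_unity_instance:

```python
import asyncio

# Action names taken from the PR's action table.
PROFILER_ACTIONS = {
    "get_frame_timing", "get_script_timing", "get_physics_timing",
    "get_gc_alloc", "get_animation_timing",
}


async def manage_profiler(send, unity_instance, action: str) -> dict:
    """Normalize/validate the action, then forward a minimal {action} payload."""
    action = (action or "").strip().lower()
    if action not in PROFILER_ACTIONS:
        return {
            "success": False,
            "message": f"Unknown action '{action}'. Valid: {sorted(PROFILER_ACTIONS)}",
        }
    response = await send(unity_instance, "manage_profiler", {"action": action})
    # Wrap non-dict Unity responses into a uniform error, as the summary describes.
    if not isinstance(response, dict):
        return {"success": False, "message": f"Unexpected response: {response!r}"}
    return response


async def fake_send(instance, tool, params):
    """Stand-in transport for the sketch; echoes what would be dispatched."""
    return {"success": True, "tool": tool, "params": params}
```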


coderabbitai bot commented Mar 26, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ffcfb6ae-68de-4e5b-86b3-9c5e32ba57ef

📥 Commits

Reviewing files that changed from the base of the PR and between 806fa0f and 8925c32.

📒 Files selected for processing (1)
  • MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs
✅ Files skipped from review due to trivial changes (1)
  • MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs

📝 Walkthrough

Walkthrough

Adds a Unity Editor command handler ManageProfiler.HandleCommand(JObject) and five profiler operation modules that sample Unity Profiler counters; exposes a server-side manage_profiler MCP tool, corresponding CLI subcommands, and tests validating action validation, dispatch, and response shapes.

Changes

**Editor Profiler Handler & Ops**
(MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs, MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs, ScriptTimingOps.cs, PhysicsTimingOps.cs, GCAllocOps.cs, AnimationTimingOps.cs)
Added public static ManageProfiler.HandleCommand(JObject) that validates an action param, routes to internal ops, and returns success/error objects. Each op uses ProfilerRecorder to sample counters (converting ns to ms where applicable), emits value + validity flags, and disposes recorders. Exceptions are caught and returned as error responses.

**Server Tool Integration**
(Server/src/services/tools/manage_profiler.py)
New MCP tool manage_profiler (registered via decorator) that normalizes/validates the action, retrieves a Unity instance from context, dispatches "manage_profiler" to Unity with {"action": <action>}, and returns or wraps the Unity response.

**CLI Commands**
(Server/src/cli/commands/profiler.py)
New Click profiler command group with subcommands frame-timing, script-timing, physics-timing, gc-alloc, and animation-timing; each loads config, calls the manage_profiler tool, and formats output; decorated with handle_unity_errors.

**Tests**
(Server/tests/test_manage_profiler.py)
New pytest suite verifying PROFILER_ACTIONS contents, case-insensitive action handling, invalid/empty action errors (no Unity calls), correct dispatch params for each action, handling of non-dict Unity responses, and tool registry metadata.
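The CLI-to-action wiring summarized above is a fixed mapping; a dependency-free sketch of it, with subcommand and action names from this PR and the dispatch callable standing in for the real run_command:

```python
# CLI subcommand -> manage_profiler action, as wired in profiler.py.
SUBCOMMANDS = {
    "frame-timing": "get_frame_timing",
    "script-timing": "get_script_timing",
    "physics-timing": "get_physics_timing",
    "gc-alloc": "get_gc_alloc",
    "animation-timing": "get_animation_timing",
}


def run_subcommand(name: str, run_command) -> dict:
    """Resolve a CLI subcommand and dispatch it to the manage_profiler tool."""
    if name not in SUBCOMMANDS:
        raise SystemExit(f"Unknown subcommand: {name}")
    return run_command("manage_profiler", {"action": SUBCOMMANDS[name]})
```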

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant CLI as CLI Command
    participant Server as manage_profiler Tool
    participant Unity as ManageProfiler Handler
    participant Recorder as ProfilerRecorder

    User->>CLI: profiler frame-timing
    CLI->>Server: invoke manage_profiler(action="frame-timing")
    Server->>Server: validate & normalize action
    Server->>Unity: send_command("manage_profiler", {action:"frame-timing"})
    Unity->>Unity: HandleCommand parses action
    Unity->>Unity: route to FrameTimingOps.GetFrameTiming
    FrameTimingOps->>Recorder: start ProfilerRecorder(s)
    Recorder-->>FrameTimingOps: return CurrentValue (ns) + validity
    FrameTimingOps->>FrameTimingOps: convert ns→ms and set validity flags
    FrameTimingOps-->>Unity: return {success,message,data}
    Unity-->>Server: forward response
    Server-->>CLI: format_output(result)
    CLI-->>User: display metrics
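The ns-to-ms conversion and validity flagging shown in the diagram are plain data transformation; a sketch, with field names following the *_ms/*_valid convention the ops use (the sample values below are made up):

```python
NS_PER_MS = 1_000_000  # ProfilerRecorder counters report nanoseconds


def to_timing_fields(name: str, value_ns: int, valid: bool) -> dict:
    """Emit the <name>_ms / <name>_valid pair for one recorder sample."""
    return {
        f"{name}_ms": (value_ns / NS_PER_MS) if valid else 0.0,
        f"{name}_valid": valid,
    }
```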

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • justinpbarnett

Poem

🐰 I hopped through frames and timing trees,
sampled scripts, physics, GC with ease.
Nanoseconds bowed, became milliseconds bright,
I twitched my nose and logged the light. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 30.30%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the PR title accurately and specifically summarizes the main change: adding a manage_profiler tool for profiling CPU timing, GC allocation, and animation metrics.
  • Description check ✅ Passed: the PR description comprehensively covers the change with type, detailed action table, file list, test plan, and additional context; all template sections are addressed.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • The physics-timing CLI help text mentions Physics.Processing and Physics.FetchResults, but the implementation records Physics.Simulate and Physics2D.Simulate; update the description or the counters so they match the actual behavior.
  • The list of valid actions is duplicated across PROFILER_ACTIONS (Python), the C# switch in ManageProfiler, and the CLI commands, which risks them getting out of sync; consider centralizing these action names (or at least deriving the error message and CLI wiring from a single shared source).
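Sourcery's second point (one canonical action list) can be addressed with a small single-source module on the Python side; a sketch, where the tuple contents come from this PR and the helper names are hypothetical:

```python
# Single source of truth for profiler action names.
PROFILER_ACTIONS = (
    "get_frame_timing",
    "get_script_timing",
    "get_physics_timing",
    "get_gc_alloc",
    "get_animation_timing",
)


def cli_name(action: str) -> str:
    """Derive the CLI subcommand name, e.g. get_frame_timing -> frame-timing."""
    return action.removeprefix("get_").replace("_", "-")


def unknown_action_message(action: str) -> str:
    """One error string shared by the MCP tool validation and the CLI wiring."""
    return f"Unknown action '{action}'. Valid actions: {', '.join(PROFILER_ACTIONS)}"
```

The C# switch would still need to agree with this list, but deriving the CLI commands and error messages from one tuple removes two of the three copies.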
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `physics-timing` CLI help text mentions `Physics.Processing` and `Physics.FetchResults`, but the implementation records `Physics.Simulate` and `Physics2D.Simulate`; update the description or the counters so they match the actual behavior.
- The list of valid actions is duplicated across `PROFILER_ACTIONS` (Python), the C# switch in `ManageProfiler`, and the CLI commands, which risks them getting out of sync; consider centralizing these action names (or at least deriving the error message and CLI wiring from a single shared source).

## Individual Comments

### Comment 1
<location path="Server/src/cli/commands/profiler.py" line_range="30-32" />
<code_context>
+    click.echo(format_output(result, config.format))
+
+
+@profiler.command("physics-timing")
+@handle_unity_errors
+def physics_timing():
+    """Get Physics.Processing and Physics.FetchResults time (ms)."""
+    config = get_config()
</code_context>
<issue_to_address>
**issue:** Physics CLI help text doesn’t match the actual profiler counters being used.

The help text references `Physics.Processing` and `Physics.FetchResults`, but `PhysicsTimingOps` actually uses `Physics.Simulate` and `Physics2D.Simulate`. Please either update this docstring to match the profiler counters in use or change the counters to match the current description so the CLI help aligns with Unity’s profiler markers.
</issue_to_address>

### Comment 2
<location path="Server/tests/test_manage_profiler.py" line_range="19-28" />
<code_context>
+# Fixtures
+# ---------------------------------------------------------------------------
+
+@pytest.fixture
+def mock_unity(monkeypatch):
+    """Patch Unity transport layer and return captured call dict."""
+    captured: dict[str, object] = {}
+
+    async def fake_send(send_fn, unity_instance, tool_name, params):
+        captured["unity_instance"] = unity_instance
+        captured["tool_name"] = tool_name
+        captured["params"] = params
+        return {"success": True, "message": "ok"}
+
+    monkeypatch.setattr(
+        "services.tools.manage_profiler.get_unity_instance_from_context",
+        AsyncMock(return_value="unity-instance-1"),
+    )
+    monkeypatch.setattr(
+        "services.tools.manage_profiler.send_with_unity_instance",
+        fake_send,
+    )
+    return captured
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test that asserts the correct Unity instance is forwarded to `send_with_unity_instance`

The `mock_unity` fixture already records `unity_instance`, but no test asserts its value. Consider adding a test (e.g. `test_uses_unity_instance_from_context`) that checks `mock_unity["unity_instance"] == "unity-instance-1"` to verify `manage_profiler` correctly passes the context-derived instance into the transport layer.

Suggested implementation:

```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock

import pytest

from services.tools.manage_profiler import (
    manage_profiler,
    PROFILER_ACTIONS,
)


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------


@pytest.fixture
def mock_unity(monkeypatch):
    """Patch Unity transport layer and return captured call dict."""
    captured: dict[str, object] = {}

    async def fake_send(send_fn, unity_instance, tool_name, params):
        captured["unity_instance"] = unity_instance
        captured["tool_name"] = tool_name
        captured["params"] = params
        return {"success": True, "message": "ok"}

    monkeypatch.setattr(
        "services.tools.manage_profiler.get_unity_instance_from_context",
        AsyncMock(return_value="unity-instance-1"),
    )
    monkeypatch.setattr(
        "services.tools.manage_profiler.send_with_unity_instance",
        fake_send,
    )
    return captured


# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------


def test_uses_unity_instance_from_context(mock_unity):
    """manage_profiler should forward the context-derived Unity instance to the transport layer."""
    result = asyncio.run(
        manage_profiler(SimpleNamespace(), action="get_frame_timing")
    )

    assert result["success"] is True
    assert mock_unity["unity_instance"] == "unity-instance-1"
```

To integrate cleanly with the existing test suite you should:
1. Match how other tests in this file invoke `manage_profiler` (e.g. positional vs keyword arguments and the action value used); `get_frame_timing` above is one of the actions defined in this PR.
2. If your project uses `pytest.mark.asyncio` for async tests instead of `asyncio.run`, change the test to `async def` with `@pytest.mark.asyncio` and await `manage_profiler(...)` directly.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
Server/tests/test_manage_profiler.py (2)

99-138: Duplicate forwarding tests can be consolidated.

Coverage is good, but this block duplicates the existing parametrized forwarding check. Consider collapsing into one parametrized payload test to reduce maintenance.

Refactor example:

```diff
-def test_get_frame_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_script_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_physics_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_gc_alloc_sends_correct_params(mock_unity):
-    ...
-def test_get_animation_timing_sends_correct_params(mock_unity):
-    ...
-
-def test_only_action_in_params(mock_unity):
-    ...
+@pytest.mark.parametrize("action_name", PROFILER_ACTIONS)
+def test_action_sends_only_canonical_action_param(mock_unity, action_name):
+    result = asyncio.run(
+        manage_profiler(SimpleNamespace(), action=action_name)
+    )
+    assert result["success"] is True
+    assert mock_unity["tool_name"] == "manage_profiler"
+    assert mock_unity["params"] == {"action": action_name}
```

Also applies to: 190-195
Also applies to: 190-195

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Server/tests/test_manage_profiler.py` around lines 99 - 138, Consolidate the
duplicated tests that call manage_profiler (functions like
test_get_frame_timing_sends_correct_params,
test_get_script_timing_sends_correct_params,
test_get_physics_timing_sends_correct_params,
test_get_gc_alloc_sends_correct_params,
test_get_animation_timing_sends_correct_params) into a single parametrized
pytest test: use pytest.mark.parametrize to supply the different action strings
and expected params, call asyncio.run(manage_profiler(SimpleNamespace(),
action=action)), and assert result["success"] and mock_unity["params"] ==
{"action": action}; update or remove the duplicate tests (also the similar block
referenced at lines ~190-195) so only the parametrized test remains.

202-209: Registry test is vulnerable to shared global state.

This depends on mutable global _tool_registry, so cross-test pollution can make it flaky. Prefer isolating/resetting registry state in a fixture before asserting exact cardinality.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Server/tests/test_manage_profiler.py` around lines 202 - 209, The test
depends on the mutable global _tool_registry and can be polluted by other tests;
modify test_tool_registered_with_core_group to isolate/reset registry state
before asserting cardinality by using monkeypatch to set
services.registry.tool_registry._tool_registry to a fresh list (or call a
provided registry reset/reinit function) then (re)register or import the
manage_profiler tool under test and assert len == 1 and group == "core"; ensure
the original _tool_registry is not relied on (or restore it if needed) so the
test no longer flakily depends on shared global state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 9-33: COUNTER_MAP currently uses the wrong profiler marker name
"LateBehaviourUpdate" causing recorder.Valid to be false in GetScriptTiming;
update the tuple in COUNTER_MAP for the late update entry to use the documented
marker "PreLateUpdate.ScriptRunBehaviourLateUpdate" (keep the json key
"late_update_ms" unchanged) so that the ProfilerRecorder started in
GetScriptTiming captures valid late-update timings.

In `@Server/tests/test_manage_profiler.py`:
- Around line 77-82: The test test_empty_action_returns_error should also assert
that the Unity client was not invoked to prevent side-effect regressions: after
calling manage_profiler(SimpleNamespace(), action=""), add an assertion that the
mock_unity (the fixture/mock passed into the test) did not receive any method
calls (e.g., assert mock_unity.method_calls == [] or use
mock_unity.assert_not_called()/assert_no_calls depending on the mock type) so
the test verifies both result["success"] is False and that manage_profiler did
not call into the Unity client.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 969bc378-474a-403d-b54d-d40acee3531e

📥 Commits

Reviewing files that changed from the base of the PR and between c348a3d and f42d268.

📒 Files selected for processing (9)
  • MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/AnimationTimingOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/GCAllocOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
  • Server/src/cli/commands/profiler.py
  • Server/src/services/tools/manage_profiler.py
  • Server/tests/test_manage_profiler.py

Comment on lines +77 to +82
def test_empty_action_returns_error(mock_unity):
result = asyncio.run(
manage_profiler(SimpleNamespace(), action="")
)
assert result["success"] is False


⚠️ Potential issue | 🟡 Minor

Strengthen empty-action guard assertions.

This test only checks success == False; it should also assert Unity is not called (as done in the unknown-action test) to prevent side-effect regressions.

Suggested test hardening
 def test_empty_action_returns_error(mock_unity):
     result = asyncio.run(
         manage_profiler(SimpleNamespace(), action="")
     )
     assert result["success"] is False
+    assert "tool_name" not in mock_unity
+    assert "Unknown action" in result["message"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Server/tests/test_manage_profiler.py` around lines 77 - 82, The test
test_empty_action_returns_error should also assert that the Unity client was not
invoked to prevent side-effect regressions: after calling
manage_profiler(SimpleNamespace(), action=""), add an assertion that the
mock_unity (the fixture/mock passed into the test) did not receive any method
calls (e.g., assert mock_unity.method_calls == [] or use
mock_unity.assert_not_called()/assert_no_calls depending on the mock type) so
the test verifies both result["success"] is False and that manage_profiler did
not call into the Unity client.

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs (1)

9-25: Consider removing dynamic key rewriting for _valid keys.

jsonKey.Replace("_ms", "_valid") is functional, but predefining the valid key in COUNTER_MAP avoids per-call string allocation and reduces key-coupling risk if naming changes later.

Refactor sketch
-        private static readonly (string counterName, string jsonKey)[] COUNTER_MAP = new[]
+        private static readonly (string counterName, string valueKey, string validKey)[] COUNTER_MAP = new[]
         {
-            ("BehaviourUpdate", "update_ms"),
-            ("FixedBehaviourUpdate", "fixed_update_ms"),
-            ("PreLateUpdate.ScriptRunBehaviourLateUpdate", "late_update_ms"),
+            ("BehaviourUpdate", "update_ms", "update_valid"),
+            ("FixedBehaviourUpdate", "fixed_update_ms", "fixed_update_valid"),
+            ("PreLateUpdate.ScriptRunBehaviourLateUpdate", "late_update_ms", "late_update_valid"),
         };
@@
-            foreach (var (counterName, jsonKey) in COUNTER_MAP)
+            foreach (var (counterName, valueKey, validKey) in COUNTER_MAP)
             {
                 using var recorder = ProfilerRecorder.StartNew(ProfilerCategory.Scripts, counterName);
-                data[jsonKey] = recorder.Valid ? recorder.CurrentValue / 1e6 : 0.0;
-                data[jsonKey.Replace("_ms", "_valid")] = recorder.Valid;
+                data[valueKey] = recorder.Valid ? recorder.CurrentValue / 1e6 : 0.0;
+                data[validKey] = recorder.Valid;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs` around lines
9 - 25, COUNTER_MAP currently contains tuples of (counterName, jsonKey) and
GetScriptTiming constructs the "_valid" key at runtime with
jsonKey.Replace("_ms", "_valid"); change COUNTER_MAP to store a third element
for the valid key (e.g. (counterName, jsonKey, validJsonKey)) and update
GetScriptTiming to use that third element instead of performing Replace; update
references in the foreach deconstruction (var (counterName, jsonKey,
validJsonKey)) and set data[validJsonKey] = recorder.Valid to avoid per-call
string allocations and decouple naming.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 9-25: COUNTER_MAP currently contains tuples of (counterName,
jsonKey) and GetScriptTiming constructs the "_valid" key at runtime with
jsonKey.Replace("_ms", "_valid"); change COUNTER_MAP to store a third element
for the valid key (e.g. (counterName, jsonKey, validJsonKey)) and update
GetScriptTiming to use that third element instead of performing Replace; update
references in the foreach deconstruction (var (counterName, jsonKey,
validJsonKey)) and set data[validJsonKey] = recorder.Valid to avoid per-call
string allocations and decouple naming.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 59896a51-10ab-4396-997f-c4f6feed8905

📥 Commits

Reviewing files that changed from the base of the PR and between f42d268 and 31fa37e.

📒 Files selected for processing (3)
  • MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
  • Server/src/cli/commands/profiler.py
  • Server/tests/test_manage_profiler.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • Server/src/cli/commands/profiler.py
  • Server/tests/test_manage_profiler.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs (1)

16-17: Optional: make unused parameter intent explicit.

@params is currently unused. Consider marking intent to avoid analyzer noise and improve readability.

♻️ Suggested tweak
 internal static object GetScriptTiming(JObject @params)
 {
+    _ = @params;
     var data = new Dictionary<string, object>();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs` around lines
16 - 17, GetScriptTiming has an unused parameter `@params`; make this intent
explicit to silence analyzers and improve readability by either renaming the
parameter to _params, prefixing it with an underscore, or adding a discard
assignment (e.g., _ = `@params`) at the top of the GetScriptTiming method so
callers remain compatible while signaling the parameter is intentionally unused.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs`:
- Around line 9-15: COUNTER_MAP in FrameTimingOps.cs uses non-official counter
labels causing ProfilerRecorder.Valid to be false; update the tuple first
elements in the COUNTER_MAP array (the counterName strings) to the exact Unity
Frame Timing API names: change "Main Thread" to "CPU Main Thread Frame Time",
"Render Thread" to "CPU Render Thread Frame Time", and "CPU Frame Time" to "CPU
Total Frame Time" (leave "GPU Frame Time" as-is) so the ProfilerRecorder lookups
in the FrameTimingOps logic will match the official ProfilerCategory.Internal
counters and return valid timings.

---

Nitpick comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 16-17: GetScriptTiming has an unused parameter `@params`; make this
intent explicit to silence analyzers and improve readability by either renaming
the parameter to _params, prefixing it with an underscore, or adding a discard
assignment (e.g., _ = `@params`) at the top of the GetScriptTiming method so
callers remain compatible while signaling the parameter is intentionally unused.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9540ebe4-ad5a-4ddc-bf9b-3a8c18543570

📥 Commits

Reviewing files that changed from the base of the PR and between 31fa37e and 806fa0f.

📒 Files selected for processing (3)
  • MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
  • MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
✅ Files skipped from review due to trivial changes (1)
  • MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
