feat: add manage_profiler tool for CPU timing, GC alloc, and animation profiling #991

zaferdace wants to merge 6 commits into CoplayDev:beta from
Conversation
…n profiling Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reviewer's Guide

Introduces a new cross-layer `manage_profiler` tool.

Sequence diagram for the manage_profiler end-to-end profiling call:

```mermaid
sequenceDiagram
    actor User
    participant CLIProfiler as CLI_profiler_command
    participant MCPTool as MCP_manage_profiler_tool
    participant UnityTransport as Unity_transport
    participant UnityEditor as Unity_ManageProfiler
    User->>CLIProfiler: run `profiler frame-timing`
    CLIProfiler->>MCPTool: run_command(tool_name=manage_profiler, action=get_frame_timing)
    MCPTool->>MCPTool: validate action against PROFILER_ACTIONS
    MCPTool->>UnityTransport: send_with_unity_instance(async_send_command_with_retry, unity_instance, manage_profiler, {action: get_frame_timing})
    UnityTransport->>UnityEditor: HandleCommand({action: get_frame_timing})
    UnityEditor->>UnityEditor: new ToolParams(params)
    UnityEditor->>UnityEditor: FrameTimingOps.GetFrameTiming(params)
    UnityEditor->>UnityTransport: {success, message, data}
    UnityTransport->>MCPTool: result dict
    MCPTool->>CLIProfiler: result dict
    CLIProfiler->>User: formatted profiler output
```
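The validate-then-forward step in this flow can be sketched minimally in Python. The action names come from this PR's summary, but the function signature, the transport callable, and the context handling below are simplified assumptions, not the real `manage_profiler` implementation.

```python
import asyncio

# Action names listed in the PR summary; the surrounding plumbing is a sketch.
PROFILER_ACTIONS = (
    "get_frame_timing",
    "get_script_timing",
    "get_physics_timing",
    "get_gc_alloc",
    "get_animation_timing",
)

async def manage_profiler(send, unity_instance, action: str) -> dict:
    # Validate before touching the transport so unknown actions never reach Unity.
    if action not in PROFILER_ACTIONS:
        return {"success": False, "message": f"Unknown action: {action!r}"}
    return await send(unity_instance, "manage_profiler", {"action": action})

async def fake_send(unity_instance, tool_name, params):
    # Stand-in for the real transport helper; echoes what it was given.
    return {"success": True, "message": "ok", "params": params}
```

Driving it with `asyncio.run` mirrors how the tests in this PR invoke the tool from synchronous test functions.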
Class diagram for the new ManageProfiler and profiler operations:

```mermaid
classDiagram
    class ManageProfiler {
        <<static>>
        +HandleCommand(params JObject) object
    }
    class FrameTimingOps {
        <<static>>
        -COUNTER_MAP : (string counterName, string jsonKey)[]
        +GetFrameTiming(params JObject) object
    }
    class ScriptTimingOps {
        <<static>>
        -COUNTER_MAP : (string counterName, string jsonKey)[]
        +GetScriptTiming(params JObject) object
    }
    class PhysicsTimingOps {
        <<static>>
        -COUNTER_MAP : (string counterName, string jsonKey)[]
        +GetPhysicsTiming(params JObject) object
    }
    class GCAllocOps {
        <<static>>
        +GetGCAlloc(params JObject) object
    }
    class AnimationTimingOps {
        <<static>>
        +GetAnimationTiming(params JObject) object
    }
    class ToolParams {
        +ToolParams(params JObject)
        +Get(key string) JToken
    }
    class ErrorResponse {
        +ErrorResponse(message string)
    }
    ManageProfiler ..> FrameTimingOps : uses
    ManageProfiler ..> ScriptTimingOps : uses
    ManageProfiler ..> PhysicsTimingOps : uses
    ManageProfiler ..> GCAllocOps : uses
    ManageProfiler ..> AnimationTimingOps : uses
    ManageProfiler ..> ToolParams : parses params
    ManageProfiler ..> ErrorResponse : error results
```
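The dispatch shape in the class diagram can be illustrated with a small Python mirror. The real handler is C#; the handler table and the placeholder payloads below are illustrative only.

```python
def handle_command(params: dict) -> dict:
    """Route an action to its operation, mirroring ManageProfiler.HandleCommand."""
    action = (params.get("action") or "").strip()
    handlers = {
        # Placeholder payloads; the real *Ops classes read ProfilerRecorder counters.
        "get_frame_timing": lambda: {"cpu_frame_ms": 0.0},
        "get_gc_alloc": lambda: {"gc_alloc_bytes": 0},
    }
    handler = handlers.get(action)
    if handler is None:
        # Mirrors the ErrorResponse path in the class diagram.
        return {"success": False, "message": f"Unknown action: {action!r}"}
    return {"success": True, "message": "ok", "data": handler()}
```

A table-based dispatch like this is one way to avoid a long switch, though the C# code in this PR uses a switch statement.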
File-Level Changes
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro | Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)

📝 Walkthrough

Adds a Unity Editor command handler.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI Command
    participant Server as manage_profiler Tool
    participant Unity as ManageProfiler Handler
    participant Recorder as ProfilerRecorder
    User->>CLI: profiler frame-timing
    CLI->>Server: invoke manage_profiler(action="frame-timing")
    Server->>Server: validate & normalize action
    Server->>Unity: send_command("manage_profiler", {action:"frame-timing"})
    Unity->>Unity: HandleCommand parses action
    Unity->>Unity: route to FrameTimingOps.GetFrameTiming
    FrameTimingOps->>Recorder: start ProfilerRecorder(s)
    Recorder-->>FrameTimingOps: return CurrentValue (ns) + validity
    FrameTimingOps->>FrameTimingOps: convert ns→ms and set validity flags
    FrameTimingOps-->>Unity: return {success,message,data}
    Unity-->>Server: forward response
    Server-->>CLI: format_output(result)
    CLI-->>User: display metrics
```
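The ns→ms conversion step above is simple but easy to get wrong next to validity handling. A minimal Python version of the same rule follows (the C# code in this PR divides `recorder.CurrentValue` by 1e6; the function name here is an assumption):

```python
NANOSECONDS_PER_MILLISECOND = 1_000_000

def counter_to_ms(current_value_ns: int, valid: bool) -> float:
    # ProfilerRecorder reports timing counters in nanoseconds;
    # an invalid recorder is reported as 0.0, with validity carried separately.
    return current_value_ns / NANOSECONDS_PER_MILLISECOND if valid else 0.0
```

Reporting 0.0 plus a separate validity flag, rather than omitting the key, keeps the response shape stable across Edit Mode and Play Mode.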
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Hey - I've found 2 issues, and left some high level feedback:

- The `physics-timing` CLI help text mentions `Physics.Processing` and `Physics.FetchResults`, but the implementation records `Physics.Simulate` and `Physics2D.Simulate`; update the description or the counters so they match the actual behavior.
- The list of valid actions is duplicated across `PROFILER_ACTIONS` (Python), the C# switch in `ManageProfiler`, and the CLI commands, which risks them getting out of sync; consider centralizing these action names (or at least deriving the error message and CLI wiring from a single shared source).
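One way to read the second suggestion: keep a single canonical tuple and derive both the CLI command names and the error text from it. A sketch follows; the `removeprefix`-based naming rule is an assumption inferred from the `frame-timing` and `physics-timing` commands shown elsewhere in this PR.

```python
# Single source of truth for profiler action names.
PROFILER_ACTIONS = (
    "get_frame_timing",
    "get_script_timing",
    "get_physics_timing",
    "get_gc_alloc",
    "get_animation_timing",
)

def cli_name(action: str) -> str:
    # Derive the CLI command name: "get_frame_timing" -> "frame-timing".
    return action.removeprefix("get_").replace("_", "-")

# Error message derived from the same tuple, so it can never drift.
UNKNOWN_ACTION_MESSAGE = "Unknown action. Valid actions: " + ", ".join(PROFILER_ACTIONS)
```

The C# switch would still need its own list, but the Python tool, CLI wiring, and error text would at least share one definition.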
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `physics-timing` CLI help text mentions `Physics.Processing` and `Physics.FetchResults`, but the implementation records `Physics.Simulate` and `Physics2D.Simulate`; update the description or the counters so they match the actual behavior.
- The list of valid actions is duplicated across `PROFILER_ACTIONS` (Python), the C# switch in `ManageProfiler`, and the CLI commands, which risks them getting out of sync; consider centralizing these action names (or at least deriving the error message and CLI wiring from a single shared source).
## Individual Comments
### Comment 1
<location path="Server/src/cli/commands/profiler.py" line_range="30-32" />
<code_context>
+ click.echo(format_output(result, config.format))
+
+
+@profiler.command("physics-timing")
+@handle_unity_errors
+def physics_timing():
+ """Get Physics.Processing and Physics.FetchResults time (ms)."""
+ config = get_config()
</code_context>
<issue_to_address>
**issue:** Physics CLI help text doesn’t match the actual profiler counters being used.
The help text references `Physics.Processing` and `Physics.FetchResults`, but `PhysicsTimingOps` actually uses `Physics.Simulate` and `Physics2D.Simulate`. Please either update this docstring to match the profiler counters in use or change the counters to match the current description so the CLI help aligns with Unity’s profiler markers.
</issue_to_address>
### Comment 2
<location path="Server/tests/test_manage_profiler.py" line_range="19-28" />
<code_context>
+# Fixtures
+# ---------------------------------------------------------------------------
+
+@pytest.fixture
+def mock_unity(monkeypatch):
+ """Patch Unity transport layer and return captured call dict."""
+ captured: dict[str, object] = {}
+
+ async def fake_send(send_fn, unity_instance, tool_name, params):
+ captured["unity_instance"] = unity_instance
+ captured["tool_name"] = tool_name
+ captured["params"] = params
+ return {"success": True, "message": "ok"}
+
+ monkeypatch.setattr(
+ "services.tools.manage_profiler.get_unity_instance_from_context",
+ AsyncMock(return_value="unity-instance-1"),
+ )
+ monkeypatch.setattr(
+ "services.tools.manage_profiler.send_with_unity_instance",
+ fake_send,
+ )
+ return captured
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test that asserts the correct Unity instance is forwarded to `send_with_unity_instance`
The `mock_unity` fixture already records `unity_instance`, but no test asserts its value. Consider adding a test (e.g. `test_uses_unity_instance_from_context`) that checks `mock_unity["unity_instance"] == "unity-instance-1"` to verify `manage_profiler` correctly passes the context-derived instance into the transport layer.
Suggested implementation:
```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock
import pytest
from services.tools.manage_profiler import (
manage_profiler,
PROFILER_ACTIONS,
)
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def mock_unity(monkeypatch):
"""Patch Unity transport layer and return captured call dict."""
captured: dict[str, object] = {}
async def fake_send(send_fn, unity_instance, tool_name, params):
captured["unity_instance"] = unity_instance
captured["tool_name"] = tool_name
captured["params"] = params
return {"success": True, "message": "ok"}
monkeypatch.setattr(
"services.tools.manage_profiler.get_unity_instance_from_context",
AsyncMock(return_value="unity-instance-1"),
)
monkeypatch.setattr(
"services.tools.manage_profiler.send_with_unity_instance",
fake_send,
)
return captured
# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------
def test_uses_unity_instance_from_context(mock_unity):
"""manage_profiler should forward the context-derived Unity instance to the transport layer."""
# NOTE: context setup may need to be aligned with existing tests in this file.
context = SimpleNamespace()
# Call manage_profiler in the same way other tests in this file do.
# Adjust arguments here if the signature differs.
asyncio.run(
manage_profiler(
context=context,
action=PROFILER_ACTIONS.STATUS if hasattr(PROFILER_ACTIONS, "STATUS") else None,
params={},
)
)
assert mock_unity["unity_instance"] == "unity-instance-1"
```
To integrate cleanly with the existing test suite you should:
1. Update the `manage_profiler` call in `test_uses_unity_instance_from_context` to match the real function signature and how other tests in this file invoke it (e.g. positional vs keyword arguments, `action` values, and parameter names).
2. If `PROFILER_ACTIONS` does not expose a `STATUS` member, replace that argument with the specific action value you want to exercise (for example the same one other tests use when they assert calls into `send_with_unity_instance`).
3. If your project uses `pytest.mark.asyncio` for async tests instead of `asyncio.run`, you may prefer to:
- Change `test_uses_unity_instance_from_context` to `@pytest.mark.asyncio` and `async def` and
- Await `manage_profiler(...)` directly instead of wrapping with `asyncio.run`.
</issue_to_address>
```python
@pytest.fixture
def mock_unity(monkeypatch):
    """Patch Unity transport layer and return captured call dict."""
    captured: dict[str, object] = {}

    async def fake_send(send_fn, unity_instance, tool_name, params):
        captured["unity_instance"] = unity_instance
        captured["tool_name"] = tool_name
        captured["params"] = params
        return {"success": True, "message": "ok"}
```
suggestion (testing): Add a test that asserts the correct Unity instance is forwarded to `send_with_unity_instance`

The `mock_unity` fixture already records `unity_instance`, but no test asserts its value. Consider adding a test (e.g. `test_uses_unity_instance_from_context`) that checks `mock_unity["unity_instance"] == "unity-instance-1"` to verify `manage_profiler` correctly passes the context-derived instance into the transport layer.
Actionable comments posted: 2
🧹 Nitpick comments (2)
Server/tests/test_manage_profiler.py (2)
99-138: Duplicate forwarding tests can be consolidated.

Coverage is good, but this block duplicates the existing parametrized forwarding check. Consider collapsing into one parametrized payload test to reduce maintenance.

Refactor example

```diff
-def test_get_frame_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_script_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_physics_timing_sends_correct_params(mock_unity):
-    ...
-def test_get_gc_alloc_sends_correct_params(mock_unity):
-    ...
-def test_get_animation_timing_sends_correct_params(mock_unity):
-    ...
-
-def test_only_action_in_params(mock_unity):
-    ...
+@pytest.mark.parametrize("action_name", PROFILER_ACTIONS)
+def test_action_sends_only_canonical_action_param(mock_unity, action_name):
+    result = asyncio.run(
+        manage_profiler(SimpleNamespace(), action=action_name)
+    )
+    assert result["success"] is True
+    assert mock_unity["tool_name"] == "manage_profiler"
+    assert mock_unity["params"] == {"action": action_name}
```

Also applies to: 190-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Server/tests/test_manage_profiler.py` around lines 99 - 138, Consolidate the duplicated tests that call manage_profiler (functions like test_get_frame_timing_sends_correct_params, test_get_script_timing_sends_correct_params, test_get_physics_timing_sends_correct_params, test_get_gc_alloc_sends_correct_params, test_get_animation_timing_sends_correct_params) into a single parametrized pytest test: use pytest.mark.parametrize to supply the different action strings and expected params, call asyncio.run(manage_profiler(SimpleNamespace(), action=action)), and assert result["success"] and mock_unity["params"] == {"action": action}; update or remove the duplicate tests (also the similar block referenced at lines ~190-195) so only the parametrized test remains.
202-209: Registry test is vulnerable to shared global state.

This depends on the mutable global `_tool_registry`, so cross-test pollution can make it flaky. Prefer isolating/resetting registry state in a fixture before asserting exact cardinality.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Server/tests/test_manage_profiler.py` around lines 202 - 209, The test depends on the mutable global _tool_registry and can be polluted by other tests; modify test_tool_registered_with_core_group to isolate/reset registry state before asserting cardinality by using monkeypatch to set services.registry.tool_registry._tool_registry to a fresh list (or call a provided registry reset/reinit function) then (re)register or import the manage_profiler tool under test and assert len == 1 and group == "core"; ensure the original _tool_registry is not relied on (or restore it if needed) so the test no longer flakily depends on shared global state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 9-33: COUNTER_MAP currently uses the wrong profiler marker name
"LateBehaviourUpdate" causing recorder.Valid to be false in GetScriptTiming;
update the tuple in COUNTER_MAP for the late update entry to use the documented
marker "PreLateUpdate.ScriptRunBehaviourLateUpdate" (keep the json key
"late_update_ms" unchanged) so that the ProfilerRecorder started in
GetScriptTiming captures valid late-update timings.
In `@Server/tests/test_manage_profiler.py`:
- Around line 77-82: The test test_empty_action_returns_error should also assert
that the Unity client was not invoked to prevent side-effect regressions: after
calling manage_profiler(SimpleNamespace(), action=""), add an assertion that the
mock_unity (the fixture/mock passed into the test) did not receive any method
calls (e.g., assert mock_unity.method_calls == [] or use
mock_unity.assert_not_called()/assert_no_calls depending on the mock type) so
the test verifies both result["success"] is False and that manage_profiler did
not call into the Unity client.
---
Nitpick comments:
In `@Server/tests/test_manage_profiler.py`:
- Around line 99-138: Consolidate the duplicated tests that call manage_profiler
(functions like test_get_frame_timing_sends_correct_params,
test_get_script_timing_sends_correct_params,
test_get_physics_timing_sends_correct_params,
test_get_gc_alloc_sends_correct_params,
test_get_animation_timing_sends_correct_params) into a single parametrized
pytest test: use pytest.mark.parametrize to supply the different action strings
and expected params, call asyncio.run(manage_profiler(SimpleNamespace(),
action=action)), and assert result["success"] and mock_unity["params"] ==
{"action": action}; update or remove the duplicate tests (also the similar block
referenced at lines ~190-195) so only the parametrized test remains.
- Around line 202-209: The test depends on the mutable global _tool_registry and
can be polluted by other tests; modify test_tool_registered_with_core_group to
isolate/reset registry state before asserting cardinality by using monkeypatch
to set services.registry.tool_registry._tool_registry to a fresh list (or call a
provided registry reset/reinit function) then (re)register or import the
manage_profiler tool under test and assert len == 1 and group == "core"; ensure
the original _tool_registry is not relied on (or restore it if needed) so the
test no longer flakily depends on shared global state.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 969bc378-474a-403d-b54d-d40acee3531e
📒 Files selected for processing (9)
- MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/AnimationTimingOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/GCAllocOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
- Server/src/cli/commands/profiler.py
- Server/src/services/tools/manage_profiler.py
- Server/tests/test_manage_profiler.py
MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs (outdated comment, resolved)
```python
def test_empty_action_returns_error(mock_unity):
    result = asyncio.run(
        manage_profiler(SimpleNamespace(), action="")
    )
    assert result["success"] is False
```
Strengthen empty-action guard assertions.
This test only checks `success == False`; it should also assert Unity is not called (as done in the unknown-action test) to prevent side-effect regressions.
Suggested test hardening
```diff
 def test_empty_action_returns_error(mock_unity):
     result = asyncio.run(
         manage_profiler(SimpleNamespace(), action="")
     )
     assert result["success"] is False
+    assert "tool_name" not in mock_unity
+    assert "Unknown action" in result["message"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Server/tests/test_manage_profiler.py` around lines 77 - 82, The test
test_empty_action_returns_error should also assert that the Unity client was not
invoked to prevent side-effect regressions: after calling
manage_profiler(SimpleNamespace(), action=""), add an assertion that the
mock_unity (the fixture/mock passed into the test) did not receive any method
calls (e.g., assert mock_unity.method_calls == [] or use
mock_unity.assert_not_called()/assert_no_calls depending on the mock type) so
the test verifies both result["success"] is False and that manage_profiler did
not call into the Unity client.
🧹 Nitpick comments (1)
MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs (1)
9-25: Consider removing dynamic key rewriting for `_valid` keys.

`jsonKey.Replace("_ms", "_valid")` is functional, but predefining the valid key in `COUNTER_MAP` avoids per-call string allocation and reduces key-coupling risk if naming changes later.

Refactor sketch

```diff
-private static readonly (string counterName, string jsonKey)[] COUNTER_MAP = new[]
+private static readonly (string counterName, string valueKey, string validKey)[] COUNTER_MAP = new[]
 {
-    ("BehaviourUpdate", "update_ms"),
-    ("FixedBehaviourUpdate", "fixed_update_ms"),
-    ("PreLateUpdate.ScriptRunBehaviourLateUpdate", "late_update_ms"),
+    ("BehaviourUpdate", "update_ms", "update_valid"),
+    ("FixedBehaviourUpdate", "fixed_update_ms", "fixed_update_valid"),
+    ("PreLateUpdate.ScriptRunBehaviourLateUpdate", "late_update_ms", "late_update_valid"),
 };
@@
-foreach (var (counterName, jsonKey) in COUNTER_MAP)
+foreach (var (counterName, valueKey, validKey) in COUNTER_MAP)
 {
     using var recorder = ProfilerRecorder.StartNew(ProfilerCategory.Scripts, counterName);
-    data[jsonKey] = recorder.Valid ? recorder.CurrentValue / 1e6 : 0.0;
-    data[jsonKey.Replace("_ms", "_valid")] = recorder.Valid;
+    data[valueKey] = recorder.Valid ? recorder.CurrentValue / 1e6 : 0.0;
+    data[validKey] = recorder.Valid;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs` around lines 9 - 25, COUNTER_MAP currently contains tuples of (counterName, jsonKey) and GetScriptTiming constructs the "_valid" key at runtime with jsonKey.Replace("_ms", "_valid"); change COUNTER_MAP to store a third element for the valid key (e.g. (counterName, jsonKey, validJsonKey)) and update GetScriptTiming to use that third element instead of performing Replace; update references in the foreach deconstruction (var (counterName, jsonKey, validJsonKey)) and set data[validJsonKey] = recorder.Valid to avoid per-call string allocations and decouple naming.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 9-25: COUNTER_MAP currently contains tuples of (counterName,
jsonKey) and GetScriptTiming constructs the "_valid" key at runtime with
jsonKey.Replace("_ms", "_valid"); change COUNTER_MAP to store a third element
for the valid key (e.g. (counterName, jsonKey, validJsonKey)) and update
GetScriptTiming to use that third element instead of performing Replace; update
references in the foreach deconstruction (var (counterName, jsonKey,
validJsonKey)) and set data[validJsonKey] = recorder.Valid to avoid per-call
string allocations and decouple naming.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 59896a51-10ab-4396-997f-c4f6feed8905
📒 Files selected for processing (3)
- MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
- Server/src/cli/commands/profiler.py
- Server/tests/test_manage_profiler.py
🚧 Files skipped from review as they are similar to previous changes (2)
- Server/src/cli/commands/profiler.py
- Server/tests/test_manage_profiler.py
Actionable comments posted: 1
🧹 Nitpick comments (1)
MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs (1)
16-17: Optional: make unused parameter intent explicit.

`@params` is currently unused. Consider marking intent to avoid analyzer noise and improve readability.

♻️ Suggested tweak

```diff
 internal static object GetScriptTiming(JObject @params)
 {
+    _ = @params;
     var data = new Dictionary<string, object>();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs` around lines 16 - 17, GetScriptTiming has an unused parameter `@params`; make this intent explicit to silence analyzers and improve readability by either renaming the parameter to _params, prefixing it with an underscore, or adding a discard assignment (e.g., _ = `@params`) at the top of the GetScriptTiming method so callers remain compatible while signaling the parameter is intentionally unused.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs`:
- Around line 9-15: COUNTER_MAP in FrameTimingOps.cs uses non-official counter
labels causing ProfilerRecorder.Valid to be false; update the tuple first
elements in the COUNTER_MAP array (the counterName strings) to the exact Unity
Frame Timing API names: change "Main Thread" to "CPU Main Thread Frame Time",
"Render Thread" to "CPU Render Thread Frame Time", and "CPU Frame Time" to "CPU
Total Frame Time" (leave "GPU Frame Time" as-is) so the ProfilerRecorder lookups
in the FrameTimingOps logic will match the official ProfilerCategory.Internal
counters and return valid timings.
---
Nitpick comments:
In `@MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs`:
- Around line 16-17: GetScriptTiming has an unused parameter `@params`; make this
intent explicit to silence analyzers and improve readability by either renaming
the parameter to _params, prefixing it with an underscore, or adding a discard
assignment (e.g., _ = `@params`) at the top of the GetScriptTiming method so
callers remain compatible while signaling the parameter is intentionally unused.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9540ebe4-ad5a-4ddc-bf9b-3a8c18543570
📒 Files selected for processing (3)
- MCPForUnity/Editor/Tools/Profiler/Operations/FrameTimingOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
- MCPForUnity/Editor/Tools/Profiler/Operations/ScriptTimingOps.cs
✅ Files skipped from review due to trivial changes (1)
- MCPForUnity/Editor/Tools/Profiler/Operations/PhysicsTimingOps.cs
Summary

Adds a new `manage_profiler` tool that exposes Unity's `ProfilerRecorder` API for CPU timing, script performance, GC allocation, and animation profiling data.

This fills a gap in the current toolset — `manage_graphics` covers rendering stats and memory totals, but there's no way to query script execution time, physics step duration, or per-frame GC allocations. `manage_profiler` complements `manage_graphics` with zero overlap.

Actions

- `get_frame_timing`
- `get_script_timing`
- `get_physics_timing`
- `get_gc_alloc`
- `get_animation_timing`

All actions are read-only. Each counter includes a `_valid` flag indicating whether the ProfilerRecorder returned valid data (some counters require Play Mode).

Files

C# (6 files)

- MCPForUnity/Editor/Tools/Profiler/ManageProfiler.cs — dispatcher
- Operations/FrameTimingOps.cs
- Operations/ScriptTimingOps.cs
- Operations/PhysicsTimingOps.cs
- Operations/GCAllocOps.cs
- Operations/AnimationTimingOps.cs

Python (3 files)

- Server/src/services/tools/manage_profiler.py — MCP tool
- Server/src/cli/commands/profiler.py — CLI commands
- Server/tests/test_manage_profiler.py — 20 tests

Test plan

- Tests pass (`uv run pytest tests/test_manage_profiler.py -v`)
- Counters report `_valid: false` in Edit Mode

🤖 Generated with Claude Code
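Given the `_valid` flags described above, a consumer can filter a response down to trustworthy counters. The `update_ms`/`update_valid` key pairing below follows the naming shown in the review's refactor diff; the sample payload itself is illustrative.

```python
def valid_metrics(data: dict) -> dict:
    # Keep only *_ms counters whose paired *_valid flag is true.
    return {
        key: value
        for key, value in data.items()
        if key.endswith("_ms") and data.get(key.replace("_ms", "_valid"))
    }

# Illustrative payload: update timing is valid, fixed-update is not (e.g. Edit Mode).
sample = {
    "update_ms": 1.25, "update_valid": True,
    "fixed_update_ms": 0.0, "fixed_update_valid": False,
}
```

Filtering on the paired flags means an Edit Mode response (all `_valid: false`) yields an empty metrics dict rather than misleading zeros.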
Summary by Sourcery
Add a new core profiler tool and CLI commands to query Unity CPU, physics, GC allocation, and animation profiling metrics via MCP and the Unity editor.
New Features:
Tests:
Summary by CodeRabbit
New Features
Tests