Releases: eyaltoledano/claude-task-master
[email protected]
Minor Changes
- #1427 `122c23a` Thanks @Crunchyman-ralph! - Added Gemini 3 Pro Preview to the supported Taskmaster AI providers:
  - Added to the Google providers
  - Added to the Gemini CLI providers

  Attention: Gemini 3 Pro is available for:
  - Google AI Ultra subscribers
  - Users who have access via a paid Gemini API key

  If you want to use the Gemini API key, make sure it is defined in your `.env` or `mcp.json` env variables:

  ```
  GEMINI_API_KEY=xxxx
  ```
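  If you run Task Master as an MCP server, the key belongs in the server's `env` block. A minimal sketch of `mcp.json`, assuming the usual `npx`-based server entry (your command and args may differ):

  ```json
  {
    "mcpServers": {
      "task-master-ai": {
        "command": "npx",
        "args": ["-y", "task-master-ai"],
        "env": {
          "GEMINI_API_KEY": "xxxx"
        }
      }
    }
  }
  ```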
- #1398 `e59c16c` Thanks @Crunchyman-ralph! - Claude Code provider now respects your global, project, and local Claude Code configuration files. When using the Claude Code AI provider, Task Master now automatically loads your Claude Code settings from:
  - Global config (`~/.claude/` directory) - your personal preferences across all projects
  - Project config (`.claude/` directory) - project-specific settings like CLAUDE.md instructions
  - Local config - workspace-specific overrides
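  As a rough map of where these files live (a sketch; exact file names follow Claude Code's own conventions):

  ```
  ~/.claude/CLAUDE.md           # global instructions, applied across all projects
  ~/.claude/settings.json       # global Claude Code settings
  .claude/settings.json         # project-specific settings
  .claude/settings.local.json   # local, workspace-specific overrides
  CLAUDE.md                     # project instructions at the repo root
  ```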
This means your CLAUDE.md files, custom instructions, and Claude Code settings will now be properly applied when Task Master uses Claude Code as an AI provider. Previously, these settings were being ignored.
What's improved:
- ✅ CLAUDE.md files are now automatically loaded and applied (global and local)
- ✅ Your custom Claude Code settings are respected
- ✅ Project-specific instructions work as expected
- ✅ No manual configuration needed - works out of the box
Patch Changes
- #1400 `c62cf84` Thanks @Crunchyman-ralph! - Fix subtasks not showing their parent task when displayed in the CLI (e.g. `tm show 10`)
- #1393 `da8ed6a` Thanks @bjcoombs! - Fix completion percentage and dependency resolution to treat cancelled tasks as complete. Cancelled tasks now correctly count toward project completion (e.g., 14 done + 1 cancelled = 100%, not 93%) and satisfy dependencies for dependent tasks, preventing permanent blocks.
- #1407 `0003b6f` Thanks @Crunchyman-ralph! - Fix complexity analysis prompt to ensure consistent JSON output format
- #1351 `37aee78` Thanks @bjcoombs! - fix: prioritize `.taskmaster` in parent directories over other project markers. When running task-master commands from subdirectories containing other project markers (like `.git`, `go.mod`, `package.json`), `findProjectRoot()` now correctly finds and uses `.taskmaster` directories in parent folders instead of stopping at the first generic project marker found.

  This enables multi-repo monorepo setups where a single `.taskmaster` at the root tracks work across multiple sub-repositories, as in the layout sketched below.
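  An illustrative layout (directory names are hypothetical):

  ```
  monorepo/
  ├── .taskmaster/        # found and used, even from deep inside a sub-repo
  ├── service-a/
  │   ├── .git/
  │   └── go.mod          # generic markers like these no longer stop the search
  └── service-b/
      └── package.json
  ```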
- #1406 `9079d04` Thanks @Crunchyman-ralph! - Fix MCP server compatibility with Cursor IDE's latest update by upgrading to fastmcp v3.20.1 with Zod v4 support
  - This resolves connection failures where the MCP server was unable to establish proper capability negotiation.
  - The issue typically surfaced with wording like: `Server does not support completions`
- #1382 `ac4328a` Thanks @JJVvV! - Added opt-in proxy support for all AI providers - respects `http_proxy`/`https_proxy` environment variables when enabled. When using Task Master in corporate or restricted network environments that require HTTP/HTTPS proxies, API calls to AI providers (OpenAI, Anthropic, Google, AWS Bedrock, etc.) would previously fail with ECONNRESET errors. This update adds seamless proxy support that can be enabled via environment variable or configuration file.

  How to enable:

  Proxy support is opt-in. Enable it using either method:

  Method 1: Environment Variable

  ```bash
  export TASKMASTER_ENABLE_PROXY=true
  export http_proxy=http://your-proxy:port
  export https_proxy=http://your-proxy:port
  export no_proxy=localhost,127.0.0.1  # Optional: bypass proxy for specific hosts

  # Then use Task Master normally
  task-master add-task "Create a new feature"
  ```

  Method 2: Configuration File

  Add to `.taskmaster/config.json`:

  ```json
  {
    "global": {
      "enableProxy": true
    }
  }
  ```

  Then set your proxy environment variables:

  ```bash
  export http_proxy=http://your-proxy:port
  export https_proxy=http://your-proxy:port
  ```

  Technical details:

  - Uses undici's `EnvHttpProxyAgent` for automatic proxy detection (sketched below)
  - Centralized implementation in `BaseAIProvider` for consistency across all providers
  - Supports all AI providers: OpenAI, Anthropic, Perplexity, Azure OpenAI, Google AI, Google Vertex AI, AWS Bedrock, and OpenAI-compatible providers
  - Opt-in design ensures users without proxy requirements are not affected
  - Priority: the `TASKMASTER_ENABLE_PROXY` environment variable overrides the `config.json` setting
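  For illustration, the undici mechanism boils down to roughly this (a sketch only, not Task Master's actual wiring, which lives in `BaseAIProvider`):

  ```ts
  import { EnvHttpProxyAgent, setGlobalDispatcher } from "undici";

  // EnvHttpProxyAgent reads http_proxy / https_proxy / no_proxy by itself,
  // so the only opt-in logic needed is the enable flag.
  if (process.env.TASKMASTER_ENABLE_PROXY === "true") {
    setGlobalDispatcher(new EnvHttpProxyAgent());
  }
  ```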
- #1408 `10ec025` Thanks @Crunchyman-ralph! - Add `--json` back to `task-master list` and `task-master show` for when using the commands with AI agents (less context)
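  For example:

  ```bash
  # Machine-readable output; handy when an agent needs less context
  task-master list --json
  task-master show 10 --json
  ```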
[email protected]
Patch Changes
- #1421 `e75946b` Thanks @Crunchyman-ralph! - Upgrade fastmcp dependency to solve `Server does not support completions (required for completion/complete)`
[email protected]
[email protected]
Minor Changes
- #1398 `e59c16c` Thanks @Crunchyman-ralph! - Claude Code provider now respects your global, project, and local Claude Code configuration files. When using the Claude Code AI provider, Task Master now automatically loads your Claude Code settings from:
  - Global config (`~/.claude/` directory) - your personal preferences across all projects
  - Project config (`.claude/` directory) - project-specific settings like CLAUDE.md instructions
  - Local config - workspace-specific overrides
This means your CLAUDE.md files, custom instructions, and Claude Code settings will now be properly applied when Task Master uses Claude Code as an AI provider. Previously, these settings were being ignored.
What's improved:
- ✅ CLAUDE.md files are now automatically loaded and applied (global and local)
- ✅ Your custom Claude Code settings are respected
- ✅ Project-specific instructions work as expected
- ✅ No manual configuration needed - works out of the box
Patch Changes
- #1400 `c62cf84` Thanks @Crunchyman-ralph! - Fix subtasks not showing their parent task when displayed in the CLI (e.g. `tm show 10`)
- #1393 `da8ed6a` Thanks @bjcoombs! - Fix completion percentage and dependency resolution to treat cancelled tasks as complete. Cancelled tasks now correctly count toward project completion (e.g., 14 done + 1 cancelled = 100%, not 93%) and satisfy dependencies for dependent tasks, preventing permanent blocks.
- #1407 `0003b6f` Thanks @Crunchyman-ralph! - Fix complexity analysis prompt to ensure consistent JSON output format
- #1351 `37aee78` Thanks @bjcoombs! - fix: prioritize `.taskmaster` in parent directories over other project markers. When running task-master commands from subdirectories containing other project markers (like `.git`, `go.mod`, `package.json`), `findProjectRoot()` now correctly finds and uses `.taskmaster` directories in parent folders instead of stopping at the first generic project marker found.

  This enables multi-repo monorepo setups where a single `.taskmaster` at the root tracks work across multiple sub-repositories.
- #1406 `9079d04` Thanks @Crunchyman-ralph! - Fix MCP server compatibility with Cursor IDE's latest update by upgrading to fastmcp v3.20.1 with Zod v4 support
  - This resolves connection failures where the MCP server was unable to establish proper capability negotiation.
  - The issue typically surfaced with wording like: `Server does not support completions`
- #1382 `ac4328a` Thanks @JJVvV! - Added opt-in proxy support for all AI providers - respects `http_proxy`/`https_proxy` environment variables when enabled. When using Task Master in corporate or restricted network environments that require HTTP/HTTPS proxies, API calls to AI providers (OpenAI, Anthropic, Google, AWS Bedrock, etc.) would previously fail with ECONNRESET errors. This update adds seamless proxy support that can be enabled via environment variable or configuration file.

  How to enable:

  Proxy support is opt-in. Enable it using either method:

  Method 1: Environment Variable

  ```bash
  export TASKMASTER_ENABLE_PROXY=true
  export http_proxy=http://your-proxy:port
  export https_proxy=http://your-proxy:port
  export no_proxy=localhost,127.0.0.1  # Optional: bypass proxy for specific hosts

  # Then use Task Master normally
  task-master add-task "Create a new feature"
  ```

  Method 2: Configuration File

  Add to `.taskmaster/config.json`:

  ```json
  {
    "global": {
      "enableProxy": true
    }
  }
  ```

  Then set your proxy environment variables:

  ```bash
  export http_proxy=http://your-proxy:port
  export https_proxy=http://your-proxy:port
  ```

  Technical details:

  - Uses undici's `EnvHttpProxyAgent` for automatic proxy detection
  - Centralized implementation in `BaseAIProvider` for consistency across all providers
  - Supports all AI providers: OpenAI, Anthropic, Perplexity, Azure OpenAI, Google AI, Google Vertex AI, AWS Bedrock, and OpenAI-compatible providers
  - Opt-in design ensures users without proxy requirements are not affected
  - Priority: the `TASKMASTER_ENABLE_PROXY` environment variable overrides the `config.json` setting
- #1408 `10ec025` Thanks @Crunchyman-ralph! - Add `--json` back to `task-master list` and `task-master show` for when using the commands with AI agents (less context)
[email protected]
Patch Changes
- #1377 `3c22875` Thanks @Crunchyman-ralph! - Fix parse-prd schema to accept responses from models that omit optional fields (like Z.ai/GLM). Changed the `metadata` field to use a union pattern with `.default(null)` for better structured-outputs compatibility.
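  The union-with-default pattern looks roughly like this (a sketch; the surrounding fields are assumptions, not Task Master's actual schema):

  ```ts
  import { z } from "zod";

  const taskSchema = z.object({
    title: z.string(),
    // Accept an object or an explicit null, and default to null when the
    // model omits the field entirely - keeps strict providers happy.
    metadata: z.union([z.record(z.string(), z.unknown()), z.null()]).default(null),
  });
  ```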
- #1377 `3c22875` Thanks @Crunchyman-ralph! - Fix AI response not showing price after its JSON was repaired
- #1377 `3c22875` Thanks @Crunchyman-ralph! - Enable structured outputs for Z.ai providers. Added `supportsStructuredOutputs: true` to use `json_schema` mode for more reliable JSON generation in operations like parse-prd.
[email protected]
Patch Changes
- #1370 `9c3b273` Thanks @Crunchyman-ralph! - Add support for the ZAI (GLM) Coding Plan subscription endpoint as a separate provider. Users can now select between two ZAI providers:
  - `zai`: Standard ZAI endpoint (https://api.z.ai/api/paas/v4/)
  - `zai-coding`: Coding Plan endpoint (https://api.z.ai/api/coding/paas/v4/)

  Both providers use the same model IDs (glm-4.6, glm-4.5) but route to different API endpoints based on your subscription. When running `tm models --setup`, you'll see both providers listed separately:
  - `zai / glm-4.6` - Standard endpoint
  - `zai-coding / glm-4.6` - Coding Plan endpoint
- #1371 `abf46b8` Thanks @Crunchyman-ralph! - Improved auto-update experience:
  - Updates now happen before your CLI command runs and automatically restart to execute your command with the new version.
  - No more manual restarts needed!
[email protected]
Minor Changes
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification.

  How to use:

  Configure your custom provider with the `models` command:

  ```bash
  task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
  ```
  Example:

  ```bash
  task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1

  # Or for an interactive view
  task-master models --setup
  ```

  Set your API key (if required by your provider) in `mcp.json`, your `.env` file, or in your env exports:

  ```bash
  OPENAI_COMPATIBLE_API_KEY="your-key-here"
  ```

  This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.
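  A quick way to sanity-check that an endpoint really speaks the OpenAI API (a sketch; adjust the URL and model to your setup):

  ```bash
  curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $OPENAI_COMPATIBLE_API_KEY" \
    -d '{"model": "llama-3-70b", "messages": [{"role": "user", "content": "ping"}]}'
  ```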
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models including glm-4.6 with massive 200K+ token context windows at competitive pricing.

  How to use:
  1. Get your Z.ai API key from https://z.ai/manage-apikey/apikey-list

  2. Set your API key in `.env`, `mcp.json`, or in env exports:

     ```bash
     ZAI_API_KEY="your-key-here"
     ```

  3. Configure Task Master to use GLM models:

     ```bash
     task-master models --set-main glm-4.6

     # Or for an interactive view
     task-master models --setup
     ```
  Available models:

  - `glm-4.6` - Latest model with 200K+ context, excellent for complex projects
  - `glm-4.5` - Previous generation, still highly capable
  - Additional GLM variants for different use cases: `glm-4.5-air`, `glm-4.5v`

  GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.

  How to use:
  1. Download and install LM Studio

  2. Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)

  3. Optional: add an API key to `mcp.json` or `.env` (`LMSTUDIO_API_KEY`)

  4. Go to the "Local Server" tab and click "Start Server"

  5. Configure Task Master:

     ```bash
     task-master models --set-main <model-name> --lmstudio
     ```

     Example:

     ```bash
     task-master models --set-main llama-3.2-3b --lmstudio
     ```
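  Before pointing Task Master at it, you can confirm the server is reachable; LM Studio serves an OpenAI-style API, by default on port 1234 (adjust if you changed it):

  ```bash
  curl http://localhost:1234/v1/models
  ```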
Patch Changes
- #1362 `3e70edf` Thanks @Crunchyman-ralph! - Improve parse-PRD schema for better LLM model compatibility
  - Fixes #1353
- #1358 `0c639bd` Thanks @Crunchyman-ralph! - Fix subtask ID display to show full compound notation. When displaying a subtask via `tm show 104.1`, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the original requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from `tasks.get()`, eliminating unsafe type coercions; see the sketch below.
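  A sketch of the discriminated-union shape (illustrative names, not the actual tm-core types):

  ```ts
  type TaskResult =
    | { kind: "task"; id: string; title: string }
    | { kind: "subtask"; id: string; parentId: string; title: string };

  function displayId(r: TaskResult): string {
    // Narrowing on `kind` yields the compound ID without type coercions.
    return r.kind === "subtask" ? `${r.parentId}.${r.id}` : r.id;
  }
  ```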
- #1339 `3b09b5d` Thanks @Crunchyman-ralph! - Fixed MCP server sometimes crashing when getting into the commit step of autopilot; autopilot now persists state consistently through the whole flow
- #1326 `9d5812b` Thanks @SharifMrCreed! - Improve Gemini CLI integration. When initializing Task Master with the `gemini` profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.
[email protected]
Minor Changes
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification.

  How to use:

  Configure your custom provider with the `models` command:

  ```bash
  task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
  ```
  Example:

  ```bash
  task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1

  # Or for an interactive view
  task-master models --setup
  ```

  Set your API key (if required by your provider) in `mcp.json`, your `.env` file, or in your env exports:

  ```bash
  OPENAI_COMPATIBLE_API_KEY="your-key-here"
  ```

  This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models including glm-4.6 with massive 200K+ token context windows at competitive pricing.

  How to use:
  1. Get your Z.ai API key from https://z.ai/manage-apikey/apikey-list

  2. Set your API key in `.env`, `mcp.json`, or in env exports:

     ```bash
     ZAI_API_KEY="your-key-here"
     ```

  3. Configure Task Master to use GLM models:

     ```bash
     task-master models --set-main glm-4.6

     # Or for an interactive view
     task-master models --setup
     ```
  Available models:

  - `glm-4.6` - Latest model with 200K+ context, excellent for complex projects
  - `glm-4.5` - Previous generation, still highly capable
  - Additional GLM variants for different use cases: `glm-4.5-air`, `glm-4.5v`

  GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.
- #1360 `819d5e1` Thanks @Crunchyman-ralph! - Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.

  How to use:
  1. Download and install LM Studio

  2. Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)

  3. Optional: add an API key to `mcp.json` or `.env` (`LMSTUDIO_API_KEY`)

  4. Go to the "Local Server" tab and click "Start Server"

  5. Configure Task Master:

     ```bash
     task-master models --set-main <model-name> --lmstudio
     ```

     Example:

     ```bash
     task-master models --set-main llama-3.2-3b --lmstudio
     ```
Patch Changes
- #1362 `3e70edf` Thanks @Crunchyman-ralph! - Improve parse-PRD schema for better LLM model compatibility
  - Fixes #1353
- #1358 `0c639bd` Thanks @Crunchyman-ralph! - Fix subtask ID display to show full compound notation. When displaying a subtask via `tm show 104.1`, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the original requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from `tasks.get()`, eliminating unsafe type coercions.
- #1339 `3b09b5d` Thanks @Crunchyman-ralph! - Fixed MCP server sometimes crashing when getting into the commit step of autopilot; autopilot now persists state consistently through the whole flow
- #1326 `9d5812b` Thanks @SharifMrCreed! - Improve Gemini CLI integration. When initializing Task Master with the `gemini` profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.
[email protected]
Patch Changes
- #1340 `d63a40c` Thanks @Crunchyman-ralph! - Improve session persistence reliability
[email protected]
Patch Changes
- #1305 `a98d96e` Thanks @bjcoombs! - Fix warning message box width to match dashboard box width for consistent UI alignment
- #1346 `25addf9` Thanks @Crunchyman-ralph! - Remove the file and complexity-report parameters from the get-tasks and get-task MCP tools. In an effort to reduce complexity and context bloat for AI coding agents, we simplified the parameters of these tools.