Find local AI model weights, VM/runtime stores, and LLM session files across desktop apps, shared caches, and project folders.
weightless is for the messy real world where Ollama, LM Studio, Hugging Face, Draw Things, Docker, Podman, Lima, Apple simulators, Claude, Codex, Copilot, Antigravity, OpenCode, and one-off repos all store heavy local files in different places. It gives you one interactive terminal UI plus a JSON mode for scripting and debugging.
- Scans provider-specific model stores, virtual machine/runtime stores, and LLM session stores by default, with an optional on-demand `disk-scan` for broader model folders
- Groups raw files into logical models so sharded packages show up as one row
- Shows size, provider, category, created date, and path in JSON
- Lets you drill from Summary into provider-specific artifacts
- Adds dedicated tabs for Models, Virtual Machines, and LLM Sessions
- Refreshes in place with `r`
- Emits machine-readable JSON
- Keeps provider detection easy to extend in `internal/providers/registry.go`
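As a hedged illustration of what a registry entry might look like — the actual types and names in `internal/providers/registry.go` are not shown in this README, so everything below is an assumption, and in the real repo this would live in `internal/providers` rather than `package main`:

```go
package main

import "fmt"

// Hypothetical shapes: illustrative guesses, not the real registry API.
// A Provider describes one artifact store weightless knows how to scan.
type Provider struct {
	Name     string   // provider key, e.g. "my-tool"
	Category string   // assumed category keys: "models", "vms", "sessions"
	Roots    []string // directories to scan, typically under $HOME
}

// Registry is the single list a new provider would be appended to.
var Registry = []Provider{
	{Name: "my-tool", Category: "models", Roots: []string{".my-tool/models"}},
}

func main() {
	for _, p := range Registry {
		fmt.Printf("%s (%s): %v\n", p.Name, p.Category, p.Roots)
	}
}
```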
Install script:

```sh
curl -fsSL https://raw.githubusercontent.com/needle-tools/weightless/main/install.sh | bash
```

Specific version:

```sh
curl -fsSL https://raw.githubusercontent.com/needle-tools/weightless/main/install.sh | bash -s -- -s 1.1.0
```

Or download a release archive directly from GitHub Releases.
Run it:

```sh
weightless
weightless --json
weightless --version
```

Common flags:
```sh
weightless --providers ollama,lm-studio,huggingface
weightless --providers docker,podman,lima,apple-simulators
weightless --providers claude,codex,cursor,opencode
weightless --roots ~/work/models,/Volumes/FastSSD/models
weightless --min-size-mb 8
```

Keys:

- `←` and `→` switch tabs
- `enter` or `space` drills into a provider from Summary
- `o` opens or reveals the selected item
- `r` refreshes the scan
- `esc` goes back from a drilled view
- `q` quits
Model coverage includes:
`ollama`, `lm-studio`, `anythingllm`, `draw-things`, `upscayl`, `huggingface`, `unsloth-studio`, `jan`, `gpt4all`, `vllm`, `node-llama-cpp`, `llama.cpp`, shared-cache attribution, `chrome-built-in-ai`, `nvidia`, `text-generation-webui`, `comfy`, `stable-diffusion-webui`, `invokeai`, and `disk-scan` (lazy, on demand from Summary).
Virtual machine and runtime coverage includes:
`docker`, `podman`, `lima`, `apple-simulators`, `apple-simulator-runtimes`, `android-emulator`, `claude-vm`, `codex-vm`, `utm`, and `vercel-sandbox`.
LLM session coverage includes:
`claude`, `codex`, `copilot`, `antigravity`, `opencode`, `cursor`, `windsurf`, `cline`, `roo-code`, `kilo-code`, `aider`, `gemini-cli`, and `qwen-code`.
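The `--json` output is meant for scripting; as a minimal Go sketch for consuming it (the struct fields are taken from the example shape shown below — treat this as illustrative, not an official client):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// report mirrors only the fields this sketch needs from `weightless --json`.
type report struct {
	Summary []struct {
		Provider  string `json:"provider"`
		Artifacts int    `json:"artifacts"`
		SizeHuman string `json:"size_human"`
	} `json:"summary"`
	TotalSizeHuman string `json:"total_size_human"`
}

func parseReport(raw []byte) (report, error) {
	var r report
	err := json.Unmarshal(raw, &r)
	return r, err
}

func main() {
	// In practice you would pipe `weightless --json` into this program;
	// a literal keeps the sketch self-contained.
	raw := []byte(`{"summary":[{"provider":"ollama","artifacts":2,"size_human":"7.1 GiB"}],"total_size_human":"104.2 GiB"}`)
	r, err := parseReport(raw)
	if err != nil {
		panic(err)
	}
	for _, s := range r.Summary {
		fmt.Printf("%s: %d artifacts, %s\n", s.Provider, s.Artifacts, s.SizeHuman)
	}
}
```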
Example shape:
```json
{
  "categories": [
    {
      "category": "models",
      "artifacts": 31,
      "size_bytes": 111883059200,
      "size_human": "104.2 GiB"
    }
  ],
  "summary": [
    {
      "provider": "ollama",
      "artifacts": 2,
      "complete_artifacts": 2,
      "incomplete_artifacts": 0,
      "size_bytes": 7630497504,
      "size_human": "7.1 GiB"
    }
  ],
  "artifacts": [
    {
      "category": "models",
      "name": "qwen3.5:9b",
      "model_name": "qwen3.5:9b",
      "status": "complete",
      "primary_provider": "ollama",
      "path": "/Users/you/.ollama/models/blobs/sha256-...",
      "timestamp": "2026-04-08T09:15:00+02:00",
      "file_count": 1,
      "all_paths": [
        "/Users/you/.ollama/models/blobs/sha256-..."
      ]
    }
  ],
  "total_artifacts": 31,
  "total_size_human": "104.2 GiB"
}
```

Build locally:

```sh
PATH=/opt/homebrew/bin:$PATH go build -o weightless .
```

Run from source:

```sh
go run .
```

Build from source with the installer:

```sh
./install.sh --build-from-source
```

This repo is set up to publish GitHub Releases directly.
One-time maintainer setup:

- Create the GitHub repo `needle-tools/weightless`.
- Push `main`.
Publish a release:
```sh
git push origin main
git tag v1.1.0
git push origin v1.1.0
```

That release flow will:
- run CI
- build macOS, Linux, and Windows binaries
- publish GitHub Release assets
- generate checksums
See CHANGELOG.md.