Commit 9963fec

gpu: mention Docker Model Runner (#23810)
## Description

Mention Docker Model Runner to test the GPU on Windows with NVIDIA platforms.

<img width="952" height="603" alt="Screenshot 2025-12-04 at 17 33 12" src="https://github.com/user-attachments/assets/92d68d8d-097e-4599-9e73-12121f495359" />

## Reviews

- [ ] Technical review
- [ ] Editorial review
- [ ] Product review

---------

Signed-off-by: Dorin Geman <[email protected]>
Co-authored-by: Allie Sadler <[email protected]>
1 parent a4b77bf

File tree

1 file changed (+21 −5 lines):

  • content/manuals/desktop/features/gpu.md
content/manuals/desktop/features/gpu.md

Lines changed: 21 additions & 5 deletions
````diff
@@ -63,16 +63,32 @@ GPU Device 0: "GeForce RTX 2060 with Max-Q Design" with compute capability 7.5
 = 2724.379 single-precision GFLOP/s at 20 flops per interaction
 ```
 
-## Run a real-world model: Llama2 with Ollama
+## Run a real-world model: SmolLM2 with Docker Model Runner
 
-Use the [official Ollama image](https://hub.docker.com/r/ollama/ollama) to run the Llama2 LLM with GPU acceleration:
+> [!NOTE]
+>
+> Docker Model Runner with vLLM for Windows with WSL2 is available starting with Docker Desktop 4.54.
+
+Use Docker Model Runner to run the SmolLM2 LLM with vLLM and GPU acceleration:
+
+```console
+$ docker model install-runner --backend vllm --gpu cuda
+```
+
+Check it's correctly installed:
 
 ```console
-$ docker run --gpus=all -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
+$ docker model status
+Docker Model Runner is running
+
+Status:
+llama.cpp: running llama.cpp version: c22473b
+vllm: running vllm version: 0.11.0
 ```
 
-Then start the model:
+Run the model:
 
 ```console
-$ docker exec -it ollama ollama run llama2
+$ docker model run ai/smollm2-vllm hi
+Hello! I'm sure everything goes smoothly here. How can I assist you today?
 ```
````
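Beyond the CLI steps added in this diff, Docker Model Runner also exposes an OpenAI-compatible HTTP API. The sketch below is illustrative, not part of this commit: it assumes TCP host access is enabled on port 12434 (the default documented port) and that the `ai/smollm2-vllm` model from the diff has been pulled; verify the endpoint against your Docker Desktop settings before relying on it:

```console
$ curl http://localhost:12434/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "ai/smollm2-vllm",
      "messages": [{"role": "user", "content": "hi"}]
    }'
```

Because the API follows the OpenAI chat-completions shape, existing OpenAI client libraries can be pointed at this base URL instead of talking to a remote service.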
