- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
- Serve models on OpenAI-compatible APIs for easy integration with existing apps
- Support for both llama.cpp and vLLM inference engines (vLLM currently supported on Linux x86_64/amd64 with NVIDIA GPUs only)
- Package GGUF and Safetensors files as OCI Artifacts and publish them to any Container Registry
- Run and interact with AI models directly from the command line or from the Docker Desktop GUI
- Manage local models and display logs
- Display prompt and response details
Docker Engine only:

- Linux CPU, NVIDIA, AMD and Vulkan
- NVIDIA drivers 575.57.08+

{{< /tab >}}
initial pull may take some time. After that, they're cached locally for faster
access. You can interact with the model using
[OpenAI-compatible APIs](api-reference.md).

Docker Model Runner supports both [llama.cpp](https://github.com/ggerganov/llama.cpp) and [vLLM](https://github.com/vllm-project/vllm) as inference engines, providing flexibility for different model formats and performance requirements. For more details, see the [Docker Model Runner repository](https://github.com/docker/model-runner).
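Since the served endpoints follow the OpenAI API shape, any OpenAI-style client code works against them. As a minimal sketch (assuming host-side TCP access is enabled on the default `localhost:12434` address, and using the hypothetical model name `ai/smollm2` as a stand-in for one you have pulled):

```python
import json
import urllib.request

# Assumed base URL: Model Runner with host-side TCP access on the default port.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to the chat completions endpoint and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

With a model already pulled, `chat("ai/smollm2", "Say hello in one word.")` would return the model's reply as a string; the same payload works with official OpenAI client libraries pointed at the Model Runner base URL.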
The Docker Model CLI currently lacks consistent support for specifying models by image digest. As a temporary workaround, you should refer to models by name instead of digest.

## Share feedback

Thanks for trying out Docker Model Runner. To report bugs or request features, [open an issue on GitHub](https://github.com/docker/model-runner/issues). You can also give feedback through the **Give feedback** link next to the **Enable Docker Model Runner** setting.