Hi,
I am trying to replicate the KAG example explained in the user guide (developer mode): https://openspg.github.io/v2/docs_en. I want to use the Ollama models I have running in the Docker container ollamaGPU instead of the DeepSeek or SiliconFlow models. I wrote the following configuration in the example_config.yaml file:
openie_llm: &openie_llm
  base_url: http://ollamaGPU:11434
  model: qwen2.5:3b
  type: openai

chat_llm: &chat_llm
  base_url: http://ollamaGPU:11434
  model: qwen2.5:3b
  type: openai

# ---------------- EMBEDDINGS ----------------
vectorize_model: &vectorize_model
  base_url: http://ollamaGPU:11434/v1
  api_key: ollama
  model: qwen3-embedding:8b
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model
When I execute the command: $ knext project create --config_path ./example_config.yaml
I get: Error: invalid llm config: {'base_url': 'http://ollamaGPU:11434', 'model': 'qwen2.5:3b', 'type': 'openai'}, for details: Connection error.
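One thing I notice in my own config: the vectorize_model entry points at Ollama's OpenAI-compatible /v1 path and sets an api_key, while the two LLM entries do not. If the openai type expects that endpoint shape, the LLM entries would presumably need to mirror it. This is only a guess on my part, not a confirmed fix:

```yaml
# Hypothetical fix (untested): use the OpenAI-compatible /v1 path and a
# placeholder api_key, mirroring the vectorize_model entry above.
openie_llm: &openie_llm
  base_url: http://ollamaGPU:11434/v1
  api_key: ollama   # Ollama ignores the key, but OpenAI-style clients may require one
  model: qwen2.5:3b
  type: openai
```

The chat_llm entry would get the same change.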
Do you know what the correct configuration for Ollama models is?
Note: in product mode the same URL worked correctly for all the models.
Thank you for your help!