Setup Ollama Connection
One of the easiest ways to host an open source model and connect it to Linguly is to host a model from the Ollama library in Coolify and connect it to your Linguly Core via an Ollama Connection.
How to?
1. Download and Run a Model Using Ollama
- Add `ollama-with-open-webui` from the Coolify resources, on the same Docker network as your Linguly service.
- Go to the Terminal and connect to the `ollama-api` container.
- Use the following command to download a desired model:

```bash
ollama pull llama3.2:3b
```
- To check the result you can use:

```bash
ollama list
```
- If you want to run and test the model in the terminal (see also the one-shot smoke test after this list):

```bash
ollama run llama3.2:3b
```
- You can later remove the Open WebUI by clicking on `Edit Compose File` and removing the section for the WebUI.
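If you only need a quick smoke test instead of an interactive session, `ollama run` also accepts a one-shot prompt. A minimal sketch, using the same model pulled above (any other installed model works the same way):

```bash
# One-shot test: prints the model's answer and exits (no interactive session)
ollama run llama3.2:3b "Say hello in German."

# Show which models are currently loaded into memory
ollama ps
```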
2. Connect to your Linguly Core
In order to make the connection, we need to add the Ollama service URL to the environment variables of our Linguly Core instance.
- Modify the Compose file of the Ollama service to add it to the same Docker network as your Linguly Core. For us it's the `coolify` network.
- The final Compose file should look like this:
```yaml
services:
  ollama-api:
    image: 'ollama/ollama:latest'
    volumes:
      - 'ollama:/root/.ollama'
    networks:
      - coolify
    healthcheck:
      test:
        - CMD
        - ollama
        - list
      interval: 5s
      timeout: 30s
      retries: 10

networks:
  coolify:
    external: true
```
- In your Linguly Core instance, go to `Environment Variables` and set `OLLAMA_URL` to `http://ollama-api:11434`.
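To confirm that Linguly Core can actually reach the Ollama service over the `coolify` network, you can run a quick check from the Linguly Core container's terminal in Coolify. This is only a connectivity sketch and assumes `curl` is available in that container; the endpoints are Ollama's standard REST API:

```bash
# List the models the Ollama service exposes
curl http://ollama-api:11434/api/tags

# Send a minimal, non-streaming generation request to the pulled model
curl http://ollama-api:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Say hello in German.", "stream": false}'
```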
For Local Tests
You can install Ollama locally from the official Ollama website and then pull a model in the same way.
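The local workflow mirrors the Coolify one; only the URL changes, since a local Ollama listens on `localhost:11434` by default. A minimal sketch, assuming your locally running Linguly Core reads the same `OLLAMA_URL` variable from the environment:

```bash
# Pull the model locally, exactly as in the Coolify terminal above
ollama pull llama3.2:3b

# Verify the local Ollama API is up
curl http://localhost:11434/api/tags

# Point a locally running Linguly Core at it (same variable as set in Coolify)
export OLLAMA_URL=http://localhost:11434
```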