readme.md: 0 additions & 13 deletions
@@ -38,19 +38,6 @@ No need to install Ollama manually, it will run in a container as
 part of the stack when running with the Linux profile: run `docker compose --profile linux up`.
 Make sure to set `OLLAMA_BASE_URL=http://llm:11434` in the `.env` file when using the Ollama docker container.
 
-If you run into issues where your Nvidia GPU is not used under Linux (despite using the profile), ensure that you have `nvidia-container-toolkit` installed,
-and add this to the `llm` service:
-
-```yaml
-deploy:
-  resources:
-    reservations:
-      devices:
-        - driver: nvidia
-          count: all
-          capabilities: [gpu]
-```
-
 **Windows**
 Not supported by Ollama, so Windows users need to generate an OpenAI API key and configure the stack to use `gpt-3.5` or `gpt-4` in the `.env` file.
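For context, the GPU reservation deleted in this diff would sit under the `llm` service in `docker-compose.yml`. A minimal sketch of what that service might look like with the reservation in place (the `llm` service name, the `linux` profile, and port 11434 come from the readme; the `ollama/ollama` image is Ollama's official Docker image, and the rest of the service definition is assumed):

```yaml
services:
  llm:
    image: ollama/ollama        # official Ollama image; rest of this service is illustrative
    profiles: ["linux"]         # started only with `docker compose --profile linux up`
    ports:
      - "11434:11434"           # Ollama's default API port, matching OLLAMA_BASE_URL
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia    # requires nvidia-container-toolkit on the host
              count: all
              capabilities: [gpu]
```

With this block removed (as in the diff), the container still runs, but only on the CPU.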