Mirror of https://github.com/ollama/ollama.git, synced 2026-04-21 08:15:42 +02:00
This workaround logic in llama.cpp causes crashes for users whose system memory is smaller than their VRAM.
11 KiB · Executable File