As we automatically enable flash attention for more models, there are likely some cases where we get it wrong. This allows setting OLLAMA_FLASH_ATTENTION=0 to explicitly disable it, even for models where it would otherwise be enabled automatically (see the sketch below).
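
As a rough sketch of how such an override could layer on top of automatic detection (the helper name and exact parsing here are illustrative assumptions, not ollama's actual implementation):

```go
package main

import (
	"os"
	"strconv"
)

// flashAttentionEnabled reports whether flash attention should be used.
// modelDefault is whatever the automatic per-model detection decided;
// if OLLAMA_FLASH_ATTENTION is set, it overrides that default in either
// direction. This is a hypothetical sketch, not ollama's real code.
func flashAttentionEnabled(modelDefault bool) bool {
	if v, ok := os.LookupEnv("OLLAMA_FLASH_ATTENTION"); ok {
		// ParseBool accepts "0"/"1", "true"/"false", etc.;
		// an unparseable value falls through to the default.
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return modelDefault
}
```

In practice the variable is set in the server's environment before startup, e.g. `OLLAMA_FLASH_ATTENTION=0 ollama serve`.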