Two reductions:
1. Drop the gguf_rename_tensor forwarder from gguf.h/gguf.cpp.
The rename-in-place trick it does (calling ggml_set_name on an embedded
ggml_tensor) can be done from outside gguf.cpp via:
char * p = const_cast<char *>(gguf_get_tensor_name(meta, id));
snprintf(p, GGML_MAX_NAME, "%s", new_name); // truncates and NUL-terminates
That pointer points into a mutable char[GGML_MAX_NAME] inside a std::vector
element; the const on the return type is API courtesy. Writing through it is
defined behavior and has no struct-layout dependency. (snprintf rather than
strncpy: strncpy does not guarantee a terminator when the new name fills
the buffer.)
2. Drop the src/CMakeLists.txt hunk that added llama-ollama-compat.cpp to
the llama target. Replace with a target_sources() call in Ollama's
llama/server/CMakeLists.txt after FetchContent_MakeAvailable. Our
compat files now stay in llama/compat/ and are never copied into the
fetched _deps/ tree.
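A minimal sketch of that hookup (target and path names assumed here, not checked against the actual llama/server/CMakeLists.txt):

```cmake
# Hypothetical: attach the compat source from Ollama's own tree once the
# fetched llama target exists; nothing is copied into the _deps/ tree.
FetchContent_MakeAvailable(llama_cpp)
target_sources(llama PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/../compat/llama-ollama-compat.cpp)
```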
Net patch now touches 3 files, 20 lines, all pure call-site insertions:
src/llama-model-loader.cpp +8 (include + translate + 2x should_skip)
src/llama-model.cpp +4 (include + apply_tensor_transforms)
tools/mtmd/clip.cpp +8 (include + translate_clip + maybe_load)
Verified: fresh build from scratch (rm -rf build && cmake configure)
runs PATCH_COMMAND cleanly, compiles, and ollama run gemma3 still works
end-to-end for text + vision.
The clip.cpp tensor-read loop was the fattest hook in the patch — it
duplicated the host-vs-device buffer dispatch around a call into the
compat layer. Move that dispatch into our code (maybe_load_tensor),
so the upstream patch is a single conditional call.
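A hedged sketch of the resulting shape (all types and names below are invented for illustration; the real clip.cpp and compat interfaces differ):

```cpp
#include <cstring>
#include <cstddef>

// Invented stand-in for this sketch: a destination that is either a plain
// host buffer or a device buffer reached through a setter callback.
struct tensor_dst {
    void * host_data;                                      // null => device
    void (*device_set)(void * dst, const void * src, size_t n);
    void * device_handle;
};

// All host-vs-device dispatch lives in the compat layer; the upstream read
// loop shrinks to a single conditional call.
bool maybe_load_tensor(tensor_dst & t, const void * src, size_t n) {
    if (src == nullptr) {
        return false;                      // not a compat tensor: upstream loads it
    }
    if (t.host_data) {
        std::memcpy(t.host_data, src, n);  // host buffer: direct copy
    } else {
        t.device_set(t.device_handle, src, n); // device buffer: backend upload
    }
    return true;
}
```

With this shape, the upstream loop's only edit is `if (maybe_load_tensor(...)) continue;`.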
Net: upstream patch drops from 48 lines across 6 files to 34 lines.
Every remaining edit is either a 1-line include, a 1-line function call,
or the gguf_rename_tensor shim (which accesses gguf_context internals
and has to live in gguf.cpp).
Verified end-to-end: text + vision both still correct after rebuild.
Previous handler only fired on vision-capable gemma3 (4B/12B/27B) because
its detection looked for `gemma3.mm.tokens_per_image` or embedded v.*/mm.*
tensors. The 1B blob has neither — but its old Ollama converter emitted:
- gemma3.rope.global.freq_base (upstream uses gemma3.rope.freq_base)
- gemma3.rope.local.freq_base (upstream uses gemma3.rope.freq_base_swa)
- tokenizer.ggml.add_{padding,unknown}_token
so llama.cpp would fall back to the default rope_freq_base=10000 and produce
visibly worse output.
Also inject rope.scaling.factor=8.0 / type=linear on 4B/12B/27B — those
variants ship with that scaling in their HF config to extend the native
~16k trained context to 131072. Without this KV, llama.cpp uses factor=1.0
and the positional embeddings are subtly off everywhere.
Detection now flips on any Ollama-specific marker. All three variants
verified end-to-end via `ollama run gemma3:{latest,1b,270m}`.
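The key translation amounts to a small rename table; a hedged sketch (key names are taken from the list above, but the real handler operates on gguf KV data inside llama-ollama-compat.cpp):

```cpp
#include <map>
#include <string>

// Sketch only: map old Ollama-converter key names to their upstream
// equivalents; anything unlisted passes through unchanged.
static const std::map<std::string, std::string> gemma3_kv_renames = {
    {"gemma3.rope.global.freq_base", "gemma3.rope.freq_base"},
    {"gemma3.rope.local.freq_base",  "gemma3.rope.freq_base_swa"},
};

std::string translate_kv(const std::string & key) {
    auto it = gemma3_kv_renames.find(key);
    return it == gemma3_kv_renames.end() ? key : it->second;
}
```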
FetchContent's PATCH_COMMAND runs after each update step — including on
incremental rebuilds. `git apply` fails when the patch is already applied,
which bricks the build until the dev wipes build/ entirely.
Fix by routing the apply through a small apply-patch.cmake helper that
checks `git apply --reverse --check` first. If the patch cleanly reverses,
it's already applied and we skip. Otherwise apply forward. Both branches
surface real errors (drift against upstream, missing patch file, etc.).
Verified: fresh configure+build applies the patch once; re-running the
same commands is a no-op with no errors.
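The helper's core logic, as a hedged CMake sketch (variable names assumed; the real apply-patch.cmake may differ):

```cmake
# Hypothetical sketch: if the patch reverses cleanly it is already applied,
# so skip; otherwise apply it forward and fail loudly on real errors.
execute_process(
    COMMAND git apply --reverse --check ${PATCH_FILE}
    WORKING_DIRECTORY ${SRC_DIR}
    RESULT_VARIABLE REVERSE_CHECK)
if(NOT REVERSE_CHECK EQUAL 0)
    execute_process(
        COMMAND git apply ${PATCH_FILE}
        WORKING_DIRECTORY ${SRC_DIR}
        RESULT_VARIABLE APPLY_RESULT)
    if(NOT APPLY_RESULT EQUAL 0)
        message(FATAL_ERROR "failed to apply ${PATCH_FILE}")
    endif()
endif()
```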
Two tiny Go-side changes that let the llama/compat shim take over gemma3:
1. llm/llama_server.go: when the GGUF has embedded v.* tensors and no
projector layer is declared, pass the model file itself as --mmproj.
The in-process compat layer translates the same file into both a
text-only view (for --model) and a clip-mmproj view (for --mmproj).
2. server/model_resolver.go: drop library/gemma3 from compatModelRedirects.
The compat layer handles it directly, so no dhiltgen/ republish is
needed. Other arches stay in the redirect list until they get their
own handler in llama/compat/llama-ollama-compat.cpp.
End-to-end verified: `ollama run gemma3` answers text and image prompts
against the existing library/gemma3 blob with no re-download.
Older Ollama builds ship GGUFs that diverge slightly from upstream llama.cpp
in arch names, KV keys, tensor names, and (for vision models) file layout
(text+vision in one monolithic file). This adds a self-contained compat
layer that translates those files in memory at load time, so
~/.ollama/models/blobs/* can be served by upstream llama-server with no
re-conversion and no re-download.
Structure:
llama/compat/
llama-ollama-compat.{h,cpp} — the shim (Ollama-owned, ~500 LOC)
upstream-edits.patch — ~48 lines of call-site hooks in 6 upstream files
compat.cmake — include()-able CMake fragment
README.md — what/why/how-to-regen
Integration: llama/server/CMakeLists.txt includes compat.cmake and passes
OLLAMA_LLAMA_CPP_COMPAT_PATCH_COMMAND to FetchContent_Declare via
PATCH_COMMAND. When OLLAMA_LLAMA_CPP_SOURCE is set (dev mode), the patch is
skipped so the developer's tree stays untouched.
Currently handles gemma3 (text + vision). Pattern is data-driven — adding
other archs is a new handle_<arch>() + one dispatch line. See README for
the per-arch checklist.
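The dispatch pattern can be sketched like this (handler signature and registration invented for illustration; the real shim's API differs):

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch of the per-arch pattern: one handler per architecture, one line
// to register it. The context type is a placeholder here.
using compat_handler = std::function<bool(void *)>;

static bool handle_gemma3(void *) {
    // would translate KVs / tensor names / file layout for gemma3
    return true;
}

static const std::map<std::string, compat_handler> handlers = {
    {"gemma3", handle_gemma3},  // adding an arch = new handler + this line
};

bool dispatch(const std::string & arch, void * ctx) {
    auto it = handlers.find(arch);
    return it != handlers.end() && it->second(ctx);
}
```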
Verified end-to-end: `llama-server --model BLOB --mmproj BLOB` with an
Ollama gemma3:latest blob answers both text prompts ("Paris") and vision
prompts (correct image descriptions).
Remove the vendored GGML and llama.cpp backend, CGO runner, Go model
implementations, and sample. llama-server (built from upstream llama.cpp via
FetchContent) is now the sole inference engine for GGUF-based models.
(Safetensor based models continue to run on the new MLX engine.) This allows
us to more rapidly pick up new capabilities and fixes from llama.cpp as they
come out.
On Windows this now requires recent AMD drivers that support ROCm v7, as
llama.cpp currently does not support building against v6.
If a long-running create is in progress and another ollama server starts
with the same model dir, the GC algorithm deletes the pending blobs and breaks the
create. This adds a 1h grace period to avoid deleting in-flight creation
operations.
Following up on #15560, this change now has e2b/e4b render differently
from 26b/31b.
For backwards compatibility, we take the existing renderer name `gemma4`
and make it do dynamic resolution based on the model name/size, but the
intended use is for the models to be republished with the renderer
variant specified explicitly: `gemma4-small` or `gemma4-large`.
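The resolution logic might look roughly like this (C++ sketch of the idea only; the real resolver is Go code keyed on model metadata, and the matching rule here is invented):

```cpp
#include <string>

// Hypothetical: pick the renderer variant from the model name when the
// generic "gemma4" renderer is requested.
std::string resolve_gemma4_renderer(const std::string & model) {
    if (model.find("e2b") != std::string::npos ||
        model.find("e4b") != std::string::npos) {
        return "gemma4-small";
    }
    return "gemma4-large";  // 26b/31b and anything unrecognized
}
```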
After the rotating buffer has wrapped (c.offset > c.maxSize) a subsequent
L>1 Update() went through a slice-to-[0, c.idx) path that discarded all
slots in [c.idx, Dim), losing the older-but-still-in-window tokens the
first Q of the new batch needs for its sliding-window attention.
Linearize the circular buffer to logical order in that wrapped case so
the existing trim + concat preserves the last (maxSize - 1) old tokens.
When the buffer has not yet wrapped (c.offset <= c.maxSize), slots
[c.idx, Dim) are grow padding or stale post-rewind data, so keep
dropping them.
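The linearization step is language-agnostic; a C++ sketch of the logic (the real code is Go and uses c.idx/c.offset/c.maxSize as described above, with buf.size() standing in for maxSize here):

```cpp
#include <vector>
#include <cstddef>

// Sketch: convert a ring buffer to logical (oldest-first) order.
std::vector<int> linearize(const std::vector<int> & buf,
                           size_t idx, size_t offset) {
    if (offset > buf.size()) {
        // Wrapped: the oldest live token sits at idx, so logical order is
        // [idx, end) followed by [0, idx).
        std::vector<int> out(buf.begin() + idx, buf.end());
        out.insert(out.end(), buf.begin(), buf.begin() + idx);
        return out;
    }
    // Not wrapped: slots [idx, end) are grow padding or stale post-rewind
    // data, so drop them.
    return std::vector<int>(buf.begin(), buf.begin() + idx);
}
```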
Converts SiLU/GELUApprox to compiled kernels and adds SwiGLU,
matching upstream mlx/mlx_lm's activations pattern. Routes llama,
qwen3, qwen3_5 (dense + MoE), and glm4_moe_lite MLP paths through
mlx.SwiGLU so each MLP invocation runs as one fused Metal/CUDA
kernel rather than a chain of per-op launches.
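For reference, the math being fused is just elementwise silu(gate) * up (with the gate/up projections already applied); this unfused C++ sketch shows only the arithmetic that the compiled kernel performs in one launch:

```cpp
#include <cmath>
#include <vector>

// silu(x) = x * sigmoid(x)
static float silu(float x) { return x / (1.0f + std::exp(-x)); }

// SwiGLU over already-projected activations: silu(gate) * up, elementwise.
std::vector<float> swiglu(const std::vector<float> & gate,
                          const std::vector<float> & up) {
    std::vector<float> out(gate.size());
    for (size_t i = 0; i < gate.size(); ++i) {
        out[i] = silu(gate[i]) * up[i];
    }
    return out;
}
```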
Wraps MLX's mlx_compile API so Go functions can be traced into fused
kernels. Contiguous elementwise chains collapse into a single
Metal/CUDA kernel instead of launching one per op.
Exposes Compile plus arity helpers (Compile1/2/3) that mirror Python's
@mx.compile decorator shape, lazily building the closure on first call
so package-level declarations work before the MLX dylib loads.
* gemma4: implement Gemma 4 model for MLX (text-only runtime)
* gemma4: two MoE + SWA prefill perf fixes
Two performance optimizations in the gemma4 forward pass
1. Memoize the sliding-window prefill mask across layers.
2. Softmax only over the selected experts in Router.Forward.
* review comments
Gemma 4 prompts differ by model size when thinking is disabled:
26b/31b emit an empty thought block, while e2b/e4b do not.
Before #15490, our shared Gemma 4 renderer effectively matched the
e2b behavior. #15490 changed it to always emit the empty thought block,
which regressed e2b/e4b nothink behavior and led to #15536 (and possibly others).
This change restores the previous shared behavior by removing the empty
trailing thought block. It also renames the checked-in upstream chat
templates so the e2b and 31b fixtures are tracked separately.
A follow-up will split Gemma 4 rendering by model size.
Fixes: #15536
For some versions of Xcode, cmake builds fail due to header problems when
cross-compiling during the generate phase. Since generate produces
arch-independent output, we can skip it when cross-compiling.
* mlx: add op wrappers for Conv2d, Pad, activations, trig, and masked SDPA
Add Conv2d, flexible Pad (with axes/mode), PadConstant, Maximum,
Minimum, Softplus, ReLU, GLU, Clamp, Sin, Cos, Clip,
ScaledDotProductAttentionMasked, and RoPEWithFreqs. Refactor
RoPEWithBase to delegate to RoPEWithFreqs.
* review comments
* mlx: fix ScaledDotProductAttentionMasked to consult the mask argument
Improve the MLX model creation pipeline with several model-agnostic changes:
- Rewrite supportsVision to use vision_config instead of architecture name
- Add supportsAudio for audio encoder detection
- Add alignment checking (isAligned) for quantization group sizes
- Support per-projection mixed quantization in MoE expert packing
- Record per-tensor quant metadata in safetensors blobs
- Parse per-tensor quant metadata at model load time
- Validate quantize output is non-empty before storing
- Fix pin/unpin cleanup in expert group quantization
- Promote v_proj/k_proj/down_proj to INT8 for INT4 base quant
- Add MetalIsAvailable() utility
- Skip audio encoder tensors from quantization
* gemma4: update renderer to match new jinja template
Google has updated their jinja template for gemma4, and so this change
gives us parity with the new template. The parsing also slightly changed
upstream, so we make a small change to our parser as well.
I've also corrected a few probably existing edge cases, especially
around type unions. The upstream output format is weird (a stringified
array), but in practice the models seem to understand it well.
* gemma4: special case simple `AnyOf`s
The upstream template doesn't handle `AnyOf`s, but since in the previous
commit we saw type unions work reasonably well, I'm now treating very
simple `AnyOf`s as type unions to help in cases where they might be used.
* fix lint
* gemma4: prefer empty instead of `None`
We can't currently distinguish between a result being not-present vs.
empty. The empty case seems more important (e.g., a legitimately empty
tool call).
* gemma4: be more careful for tool results with missing IDs
We were not setting the function index for several models that can
make parallel tool calls.
In the future we may want to add some sort of post-parse
hook to relieve the parsers of this duty.
Fixes: #15457
Update cloud and local model recommendations to match current
models.go: add qwen3.5:cloud and glm-5.1:cloud, replace glm-4.7-flash
with gemma4 and qwen3.5 as local options.
Add documentation for Hermes Agent by Nous Research, covering
installation, Ollama setup via custom endpoint, messaging configuration,
and recommended models.