Commit Graph

2 Commits

ParthSareen · 59928c536b · 2026-01-06 15:36:03 -08:00
x/cmd: add context-aware tool output truncation for LLM

Implement dual-limit tool output truncation to prevent context overflow:
- 4k tokens (~16k chars) for local models on local servers
- 10k tokens (~40k chars) for cloud models or remote servers

This preserves the limited context window of local models while still
allowing larger tool outputs for cloud services.
ParthSareen · 9383082070 · 2026-01-06 14:51:27 -08:00
x: add tests for tool disabling, auth error, and helper functions

- Add tests for OLLAMA_AGENT_DISABLE_WEBSEARCH/BASH env vars
- Add tests for the ErrWebSearchAuthRequired error type
- Add tests for isLocalModel, isLocalServer, and truncateToolOutputForLocalModel