mirror of
https://github.com/ollama/ollama.git
synced 2026-04-17 23:54:05 +02:00
launch: replace deprecated OPENAI_BASE_URL with config.toml profile for codex (#15041)
To use `codex` with Ollama, use the `--oss` flag:
```
codex --oss
```
### Changing Models

By default, codex uses the local `gpt-oss:20b` model. To use a specific model, pass the `-m` flag:
```
codex --oss -m gpt-oss:120b
```
### Cloud Models

To use a cloud model:
```
codex --oss -m gpt-oss:120b-cloud
```
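As the examples above show, the cloud variant of a model is the local tag with a `-cloud` suffix. A trivial helper to illustrate the naming convention (hypothetical, not part of any Ollama API):

```python
def cloud_variant(tag: str) -> str:
    """Append the -cloud suffix used for Ollama's cloud-hosted models.

    Illustrative helper only; the suffix convention is taken from the
    examples in this document (gpt-oss:120b -> gpt-oss:120b-cloud).
    """
    return tag if tag.endswith("-cloud") else f"{tag}-cloud"

print(cloud_variant("gpt-oss:120b"))  # gpt-oss:120b-cloud
```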
### Profile-based setup

To connect to ollama.com, create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.

For a persistent configuration, add an Ollama provider and profiles to `~/.codex/config.toml`:
```toml
[model_providers.ollama-launch]
name = "Ollama"
base_url = "http://localhost:11434/v1"
env_key = "OLLAMA_API_KEY"

[profiles.ollama-launch]
model = "gpt-oss:120b"
model_provider = "ollama-launch"

[profiles.ollama-cloud]
model = "gpt-oss:120b-cloud"
model_provider = "ollama-launch"
```
Run `codex` in a new terminal so the new settings are loaded. Then run:
```
codex --profile ollama-launch
codex --profile ollama-cloud
```