- Added `max_tokens` and `temperature` fields to `ProviderConfig`
- Custom REST providers now send `modelConfig` with `temperature` and `max_tokens`
- OpenAI-compatible providers now use the configured `max_tokens`/`temperature`
- Both formats fall back to defaults if not specified
- Bumped version to 0.2.9

This allows users to configure response length and randomness for all AI providers, including Custom REST providers, which require the `modelConfig` format.
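The fallback behavior described above can be sketched as optional fields with defaults. This is a minimal illustration, not the actual source: the default values (1024 tokens, temperature 0.7) and the `effective_*` helper names are assumptions for the example.

```rust
/// Hypothetical sketch of the ProviderConfig change. Only the
/// `max_tokens` and `temperature` field names come from the commit;
/// everything else (defaults, helper names) is assumed for illustration.
#[derive(Debug, Clone, Default)]
pub struct ProviderConfig {
    pub max_tokens: Option<u32>,
    pub temperature: Option<f32>,
}

impl ProviderConfig {
    /// Fall back to an assumed default when max_tokens is not specified.
    pub fn effective_max_tokens(&self) -> u32 {
        self.max_tokens.unwrap_or(1024)
    }

    /// Fall back to an assumed default when temperature is not specified.
    pub fn effective_temperature(&self) -> f32 {
        self.temperature.unwrap_or(0.7)
    }
}

fn main() {
    // User sets only temperature; max_tokens falls back to the default.
    let cfg = ProviderConfig {
        max_tokens: None,
        temperature: Some(0.2),
    };
    println!(
        "max_tokens={} temperature={}",
        cfg.effective_max_tokens(),
        cfg.effective_temperature()
    );
}
```

Both the Custom REST `modelConfig` payload and the OpenAI-compatible request body would then be built from these effective values, so each provider format sees the same resolved configuration.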