Some checks failed:

- Auto Tag / auto-tag (push): successful in 5s
- Test / rust-clippy (push): cancelled
- Test / rust-tests (push): cancelled
- Test / frontend-typecheck (push): cancelled
- Test / rust-fmt-check (push): cancelled
- Test / frontend-tests (push): cancelled
- Test / wiki-sync (push): cancelled
- Release / build-windows-amd64 (push): cancelled
- Release / build-macos-arm64 (push): cancelled
- Release / build-linux-arm64 (push): cancelled
- Release / build-linux-amd64 (push): cancelled
- Added `max_tokens` and `temperature` fields to `ProviderConfig`
- MSI GenAI now sends `modelConfig` with `temperature` and `max_tokens`
- OpenAI-compatible providers now use the configured `max_tokens`/`temperature`
- Both formats fall back to defaults if not specified
- Bumped version to 0.2.9

This allows users to configure response length and randomness for all AI providers, including MSI GenAI, which requires the `modelConfig` format.
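The fallback behavior described above can be sketched as follows. This is a minimal illustration, not the actual source: the field and method names (`effective_max_tokens`, the default constants) are assumptions, chosen only to show optional config values falling back to defaults when unset.

```rust
/// Hypothetical sketch of the new ProviderConfig fields; names and
/// default values are assumed, not taken from the real repository.
#[derive(Debug, Clone)]
pub struct ProviderConfig {
    /// Maximum tokens in the model's response; None falls back to a default.
    pub max_tokens: Option<u32>,
    /// Sampling temperature (randomness); None falls back to a default.
    pub temperature: Option<f64>,
}

// Assumed defaults used when the user leaves a field unspecified.
const DEFAULT_MAX_TOKENS: u32 = 1024;
const DEFAULT_TEMPERATURE: f64 = 0.7;

impl ProviderConfig {
    /// Resolve the configured token limit, falling back to the default.
    pub fn effective_max_tokens(&self) -> u32 {
        self.max_tokens.unwrap_or(DEFAULT_MAX_TOKENS)
    }

    /// Resolve the configured temperature, falling back to the default.
    pub fn effective_temperature(&self) -> f64 {
        self.temperature.unwrap_or(DEFAULT_TEMPERATURE)
    }
}

fn main() {
    // User set max_tokens but not temperature: only temperature falls back.
    let cfg = ProviderConfig { max_tokens: Some(2048), temperature: None };
    println!(
        "max_tokens={} temperature={}",
        cfg.effective_max_tokens(),
        cfg.effective_temperature()
    );
}
```

Both request formats (MSI GenAI's `modelConfig` and the OpenAI-compatible body) would then read from the same resolved values, so the fallback logic lives in one place.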
Files at this commit:

- .cargo/
- capabilities/
- gen/schemas/
- icons/
- src/
- target/
- build.rs
- Cargo.lock
- Cargo.toml
- tauri.conf.json