Some checks failed:
- Test / rust-fmt-check (pull_request) — successful in 1m9s
- Test / frontend-typecheck (pull_request) — successful in 1m17s
- Test / frontend-tests (pull_request) — successful in 1m22s
- Test / rust-clippy (pull_request) — successful in 4m19s
- Test / rust-tests (pull_request) — successful in 5m46s
- PR Review Automation / review (pull_request) — failing after 1m15s
Replace direct Ollama API calls with the liteLLM proxy at 172.0.0.29:11434, using qwen2.5-72b (a 72B model served via vLLM). Increase timeouts to 300s to accommodate larger-model inference. Reuse the existing OLLAMA_API_KEY secret for liteLLM auth. Also add a push-to-master trigger to test.yml so merges to master run the full CI suite (previously only pull_request events triggered it).
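The push-to-master trigger described above could look like this in test.yml (a sketch; the existing `pull_request` trigger and the branch name are assumed from the description):

```yaml
# test.yml — trigger section (sketch)
on:
  pull_request:          # existing trigger: runs on PRs
  push:
    branches:
      - master           # new: run the full CI suite on merges to master
```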
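As a sketch of the client-side change, the review job might assemble an OpenAI-compatible request against the liteLLM proxy like this. The endpoint, model name, timeout, and env var come from the description above; the helper function itself is hypothetical, not code from this PR.

```python
import os

# Values taken from the PR description; anything else is illustrative.
LITELLM_BASE = "http://172.0.0.29:11434"
MODEL = "qwen2.5-72b"
TIMEOUT_SECONDS = 300  # raised to 300s for 72B-model inference

def build_request(prompt: str, api_key: str) -> dict:
    """Assemble URL, headers, JSON body, and timeout for one chat call
    against liteLLM's OpenAI-compatible /v1/chat/completions route."""
    return {
        "url": f"{LITELLM_BASE}/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
        "timeout": TIMEOUT_SECONDS,
    }

# The OLLAMA_API_KEY secret is reused as the liteLLM bearer token.
req = build_request("Review this diff.", os.environ.get("OLLAMA_API_KEY", "test-key"))
```

The dict maps directly onto `requests.post(**req)` or an equivalent HTTP client call, keeping the timeout in one place.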
Workflow files:
- auto-tag.yml
- build-images.yml
- pr-review.yml
- test.yml