# AI Providers
TFTSR supports 6+ AI providers, including custom providers with flexible authentication and API formats. API keys are stored encrypted with AES-256-GCM.
## Provider Factory
`ai/provider.rs::create_provider(config)` dispatches on `config.name` to the matching implementation. Adding a provider requires implementing the `Provider` trait and adding a match arm.
```rust
pub trait Provider {
    async fn chat(&self, messages: Vec<Message>, config: &ProviderConfig) -> Result<ChatResponse>;
    fn name(&self) -> &str;
}
```
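The dispatch in `create_provider()` can be pictured as a match on the configured name. The following is a simplified, synchronous sketch: the concrete provider structs are illustrative, and the real trait's `chat` method (async) is omitted.

```rust
// Simplified sketch of create_provider's dispatch-on-name. The concrete
// struct names here are illustrative; chat() is omitted for brevity.
struct ProviderConfig {
    name: String,
}

trait Provider {
    fn name(&self) -> &str;
}

struct OpenAiProvider;
impl Provider for OpenAiProvider {
    fn name(&self) -> &str { "openai" }
}

struct OllamaProvider;
impl Provider for OllamaProvider {
    fn name(&self) -> &str { "ollama" }
}

// Dispatch on config.name; unknown names are an error rather than a fallback.
fn create_provider(config: &ProviderConfig) -> Result<Box<dyn Provider>, String> {
    match config.name.as_str() {
        "openai" => Ok(Box::new(OpenAiProvider)),
        "ollama" => Ok(Box::new(OllamaProvider)),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    let cfg = ProviderConfig { name: "ollama".into() };
    println!("{}", create_provider(&cfg).unwrap().name()); // prints "ollama"
}
```

Adding a provider is then a new struct implementing the trait plus one new match arm.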
## Supported Providers
### 1. OpenAI-Compatible
Covers: OpenAI, Azure OpenAI, LM Studio, vLLM, LiteLLM (AWS Bedrock), and any OpenAI-API-compatible endpoint.
| Field | Value |
|---|---|
| `config.name` | `"openai"` |
| Default URL | `https://api.openai.com/v1/chat/completions` |
| Auth | `Authorization: Bearer <api_key>` |
| Max tokens | 4096 |
Models: gpt-4o, gpt-4o-mini, gpt-4-turbo
Custom endpoint: Set `config.base_url` to any OpenAI-compatible API:
- LM Studio: `http://localhost:1234/v1`
- LiteLLM (AWS Bedrock): `http://localhost:8000/v1`; see LiteLLM + Bedrock Setup for the full configuration guide
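As an illustration of how a `base_url` override resolves, the client can join the base URL with the OpenAI-style path. This is an assumption about the URL join, not the exact client code:

```rust
// Sketch (assumed behavior): the final request URL is the configured
// base_url joined with the OpenAI-style chat-completions path.
fn chat_url(base_url: &str) -> String {
    format!("{}/chat/completions", base_url.trim_end_matches('/'))
}

fn main() {
    // e.g. an LM Studio endpoint
    println!("{}", chat_url("http://localhost:1234/v1"));
    // prints "http://localhost:1234/v1/chat/completions"
}
```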
### 2. Anthropic Claude
| Field | Value |
|---|---|
| `config.name` | `"anthropic"` |
| URL | `https://api.anthropic.com/v1/messages` |
| Auth | `x-api-key: <api_key>` + `anthropic-version: 2023-06-01` |
| Max tokens | 4096 |
Models: claude-sonnet-4-20250514, claude-haiku-4-20250414, claude-3-5-sonnet-20241022
### 3. Google Gemini
| Field | Value |
|---|---|
| `config.name` | `"gemini"` |
| URL | `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent` |
| Auth | API key as `?key=` query parameter |
| Max tokens | 4096 |
Models: gemini-2.0-flash, gemini-2.0-pro, gemini-1.5-pro, gemini-1.5-flash
### 4. Mistral AI
| Field | Value |
|---|---|
| `config.name` | `"mistral"` |
| Default URL | `https://api.mistral.ai/v1/chat/completions` |
| Auth | `Authorization: Bearer <api_key>` |
| Max tokens | 4096 |
Models: mistral-large-latest, mistral-medium-latest, mistral-small-latest, open-mistral-nemo
Uses OpenAI-compatible request/response format.
### 5. Ollama (Local / Offline)
| Field | Value |
|---|---|
| `config.name` | `"ollama"` |
| Default URL | `http://localhost:11434/api/chat` |
| Auth | None |
| Max tokens | No limit enforced |
Models: Any model pulled locally — llama3.1, llama3, mistral, codellama, phi3, etc.
Fully offline. Responses include eval_count / prompt_eval_count token stats.
Custom URL: Change the Ollama URL in Settings → AI Providers → Ollama (stored in settingsStore.ollama_url).
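The auth rows in the tables above differ per provider. A minimal sketch of how a key might be attached to a request (an illustrative helper, not the actual client code):

```rust
// Illustrative sketch: how each provider's API key is attached, per the
// tables above. Returns a (header name, value) pair, or None when the key
// goes elsewhere (Gemini: query parameter) or no auth is needed (Ollama).
fn auth_header(provider: &str, api_key: &str) -> Option<(String, String)> {
    match provider {
        "openai" | "mistral" => Some(("Authorization".to_string(), format!("Bearer {api_key}"))),
        // Anthropic also sends anthropic-version: 2023-06-01 alongside this.
        "anthropic" => Some(("x-api-key".to_string(), api_key.to_string())),
        "gemini" => None, // key is passed as the ?key= query parameter instead
        "ollama" => None, // local, no auth
        _ => None,
    }
}

fn main() {
    let (name, value) = auth_header("anthropic", "sk-test").expect("header expected");
    println!("{name}: {value}"); // prints "x-api-key: sk-test"
}
```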
## Domain System Prompts
Each triage conversation is pre-loaded with a domain-specific expert system prompt from `src/lib/domainPrompts.ts`.
| Domain | Key areas covered |
|---|---|
| Linux | systemd, filesystem, memory, networking, kernel, performance |
| Windows | Event Viewer, Active Directory, IIS, Group Policy, clustering |
| Network | DNS, firewalls, load balancers, BGP/OSPF, Layer 2, VPN |
| Kubernetes | Pod failures, service mesh, ingress, storage, Helm |
| Databases | Connection pools, slow queries, indexes, replication, MongoDB/Redis |
| Virtualization | vMotion, storage (VMFS), HA, snapshots; KVM/QEMU |
| Hardware | RAID, SMART data, ECC memory errors, thermal, BIOS/firmware |
| Observability | Prometheus/Grafana, ELK/OpenSearch, tracing, SLO/SLI burn rates |
The domain prompt is injected as the first system role message in every new conversation.
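That injection step can be sketched as follows (hypothetical helper; the real message type lives in the backend):

```rust
// Sketch: every new conversation starts with the domain prompt as the first
// system-role message, followed by the user's opening message.
#[derive(Debug)]
struct Message {
    role: String,
    content: String,
}

fn new_conversation(domain_prompt: &str, first_user_message: &str) -> Vec<Message> {
    vec![
        Message { role: "system".to_string(), content: domain_prompt.to_string() },
        Message { role: "user".to_string(), content: first_user_message.to_string() },
    ]
}

fn main() {
    let msgs = new_conversation(
        "You are a Linux triage expert (systemd, networking, kernel, ...)",
        "nginx.service keeps restarting",
    );
    println!("{} {}", msgs[0].role, msgs.len()); // prints "system 2"
}
```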
### 6. Custom Provider (MSI GenAI & Others)
**Status:** ✅ Implemented (v0.2.6)
Custom providers allow integration with non-OpenAI-compatible APIs. The application supports two API formats:
#### Format: OpenAI Compatible (Default)
Standard OpenAI /chat/completions endpoint with Bearer authentication.
| Field | Default Value |
|---|---|
| `api_format` | `"openai"` |
| `custom_endpoint_path` | `/chat/completions` |
| `custom_auth_header` | `Authorization` |
| `custom_auth_prefix` | `Bearer` |
**Use cases:**
- Self-hosted LLMs with OpenAI-compatible APIs
- Custom proxy services
- Enterprise gateways
#### Format: MSI GenAI
Motorola Solutions Internal GenAI Service — Enterprise AI platform with centralized cost tracking and model access.
| Field | Value |
|---|---|
| `config.provider_type` | `"custom"` |
| `config.api_format` | `"msi_genai"` |
| API URL | `https://genai-service.commandcentral.com/app-gateway` (prod)<br>`https://genai-service.stage.commandcentral.com/app-gateway` (stage) |
| Auth Header | `x-msi-genai-api-key` |
| Auth Prefix | (empty; no `Bearer` prefix) |
| Endpoint Path | (empty; the URL includes the full path `/api/v2/chat`) |
**Available Models:**
- `VertexGemini`: Gemini 2.0 Flash (Private/GCP)
- `Claude-Sonnet-4`: Claude Sonnet 4 (Public/Anthropic)
- `ChatGPT4o`: GPT-4o (Public/OpenAI)
- `ChatGPT-5_2-Chat`: GPT-4.5 (Public/OpenAI)
- See the GenAI API User Guide for the full model list
**Request Format:**

```json
{
  "model": "VertexGemini",
  "prompt": "User's latest message",
  "system": "Optional system prompt",
  "sessionId": "uuid-for-conversation-continuity",
  "userId": "user.name@motorolasolutions.com"
}
```
**Response Format:**

```json
{
  "status": true,
  "sessionId": "uuid",
  "msg": "AI response text",
  "initialPrompt": false
}
```
**Key Differences from OpenAI:**
- Single `prompt` string instead of a message array (the server manages history via `sessionId`)
- Response text arrives in the `msg` field instead of `choices[0].message.content`
- Session-based conversation continuity (no need to resend history)
- Cost tracking via the `userId` field (optional; defaults to the API key owner if omitted)
- Custom client header: `X-msi-genai-client: tftsr-devops-investigation`
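The first difference can be sketched with a hypothetical helper: only the latest user message becomes the `prompt`, since the server reconstructs prior turns from the `sessionId`.

```rust
// Sketch of the key MSI GenAI difference: instead of sending the whole
// message array, only the latest user-role message is sent as `prompt`;
// earlier turns live server-side under the sessionId.
struct Message {
    role: &'static str,
    content: &'static str,
}

fn latest_user_prompt(messages: &[Message]) -> Option<&'static str> {
    messages.iter().rev().find(|m| m.role == "user").map(|m| m.content)
}

fn main() {
    let history = [
        Message { role: "system", content: "domain prompt" },
        Message { role: "user", content: "first question" },
        Message { role: "assistant", content: "first answer" },
        Message { role: "user", content: "follow-up" },
    ];
    println!("{}", latest_user_prompt(&history).unwrap()); // prints "follow-up"
}
```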
**Configuration** (Settings → AI Providers → Add Provider):

```
Name: MSI GenAI
Type: Custom
API Format: MSI GenAI
API URL: https://genai-service.stage.commandcentral.com/app-gateway
Model: VertexGemini
API Key: (your MSI GenAI API key from the portal)
User ID: your.name@motorolasolutions.com (optional)
Endpoint Path: (leave empty)
Auth Header: x-msi-genai-api-key
Auth Prefix: (leave empty)
```
**Rate Limits:**
- $50/user/month (enforced server-side)
- Per-API-key quotas available
**Troubleshooting:**

| Error | Cause | Solution |
|---|---|---|
| 403 Forbidden | Invalid API key or insufficient permissions | Verify the key in the MSI GenAI portal; check model access |
| Missing `userId` field | Configuration not saved | Ensure the UI shows the User ID field when `api_format=msi_genai` |
| No conversation history | `sessionId` not persisted | Session ID is stored in `ProviderConfig.session_id` (currently per-provider, not per-conversation) |
**Implementation Details:**
- Backend: `src-tauri/src/ai/openai.rs::chat_msi_genai()`
- Schema: `src-tauri/src/state.rs::ProviderConfig` (added `user_id`, `api_format`, and the custom auth fields)
- Frontend: `src/pages/Settings/AIProviders.tsx` (conditional UI for MSI GenAI)
- CSP whitelist: `https://genai-service.stage.commandcentral.com` and the production domain
## Custom Provider Configuration Fields
All providers support the following optional configuration fields (v0.2.6+):
| Field | Type | Purpose | Default |
|---|---|---|---|
| `custom_endpoint_path` | `Option<String>` | Override endpoint path | `/chat/completions` |
| `custom_auth_header` | `Option<String>` | Custom auth header name | `Authorization` |
| `custom_auth_prefix` | `Option<String>` | Prefix before API key | `Bearer` |
| `api_format` | `Option<String>` | API format (`openai` or `msi_genai`) | `openai` |
| `session_id` | `Option<String>` | Session ID for stateful APIs | `None` |
| `user_id` | `Option<String>` | User ID for cost tracking (MSI GenAI) | `None` |
Backward Compatibility: All fields are optional and default to OpenAI-compatible behavior. Existing provider configurations are unaffected.
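The backward-compatible defaulting can be sketched like this (field names from the table above; the accessor names are illustrative):

```rust
// Sketch: the optional v0.2.6 fields fall back to OpenAI-compatible
// defaults, so a pre-v0.2.6 config (all fields None) behaves as before.
#[derive(Default)]
struct ProviderConfig {
    custom_endpoint_path: Option<String>,
    custom_auth_header: Option<String>,
    custom_auth_prefix: Option<String>,
    api_format: Option<String>,
}

impl ProviderConfig {
    fn endpoint_path(&self) -> &str {
        self.custom_endpoint_path.as_deref().unwrap_or("/chat/completions")
    }
    fn auth_header(&self) -> &str {
        self.custom_auth_header.as_deref().unwrap_or("Authorization")
    }
    fn auth_prefix(&self) -> &str {
        self.custom_auth_prefix.as_deref().unwrap_or("Bearer")
    }
    fn format(&self) -> &str {
        self.api_format.as_deref().unwrap_or("openai")
    }
}

fn main() {
    let legacy = ProviderConfig::default(); // pre-v0.2.6 config: all fields None
    println!("{} {} {}", legacy.format(), legacy.auth_header(), legacy.endpoint_path());
    // prints "openai Authorization /chat/completions"
}
```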
## Adding a New Provider
1. Create `src-tauri/src/ai/{name}.rs` implementing the `Provider` trait
2. Add a match arm in `ai/provider.rs::create_provider()`
3. Add the model list in `commands/ai.rs::list_providers()`
4. Add the TypeScript type in `src/lib/tauriCommands.ts`
5. Add a UI entry in `src/pages/Settings/AIProviders.tsx`