tftsr-devops_investigation/src-tauri/src/ai/openai.rs
Shaun Arman · 8839075805
feat: initial implementation of TFTSR IT Triage & RCA application
Implements Phases 1-8 of the TFTSR implementation plan.

Rust backend (Tauri 2.x, src-tauri/):
- Multi-provider AI: OpenAI-compatible, Anthropic, Gemini, Mistral, Ollama (the shared provider trait is sketched after this list)
- PII detection engine: 11 regex patterns with overlap resolution (one possible resolution policy sketched after this list)
- SQLCipher AES-256 encrypted database with 10 versioned migrations
- 28 Tauri IPC commands for triage, analysis, document, and system ops
- Ollama: hardware probe, model recommendations, pull/delete with events
- RCA and blameless post-mortem Markdown document generators
- PDF export via printpdf
- Audit log: SHA-256 hash recorded for every payload sent to an external service (hashing sketched after this list)
- Integration stubs for Confluence, ServiceNow, Azure DevOps (v0.2)
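The provider abstraction itself lives in crate::ai::provider, not in this file, but its shape can be reconstructed from the OpenAI implementation shown below. A minimal sketch: the Message fields and the Send + Sync bounds are assumptions; the rest follows from what the impl uses.

use async_trait::async_trait;

use crate::state::ProviderConfig;

// Assumed OpenAI-style role/content pair; Serialize is needed because the
// impl below embeds messages directly in a serde_json::json! request body.
#[derive(serde::Serialize)]
pub struct Message {
    pub role: String,
    pub content: String,
}

pub struct ProviderInfo {
    pub name: String,
    pub supports_streaming: bool,
    pub models: Vec<String>,
}

pub struct TokenUsage {
    pub prompt_tokens: u32,
    pub completion_tokens: u32,
    pub total_tokens: u32,
}

pub struct ChatResponse {
    pub content: String,
    pub model: String,
    pub usage: Option<TokenUsage>,
}

#[async_trait]
pub trait Provider: Send + Sync {
    fn name(&self) -> &str;
    fn info(&self) -> ProviderInfo;
    async fn chat(
        &self,
        messages: Vec<Message>,
        config: &ProviderConfig,
    ) -> anyhow::Result<ChatResponse>;
}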
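The PII engine's overlap-resolution policy is not shown in this file. A minimal sketch of one common policy (keep the longest match, tie-break on earliest start); the PiiMatch type and function name are hypothetical:

// Hypothetical match record; the real engine's types are not shown here.
#[derive(Clone)]
struct PiiMatch {
    start: usize,
    end: usize,
    kind: &'static str,
}

/// Drop any match that overlaps an already-accepted longer match.
fn resolve_overlaps(mut matches: Vec<PiiMatch>) -> Vec<PiiMatch> {
    // Sort longest-first; ties break on earlier start position.
    matches.sort_by(|a, b| {
        (b.end - b.start, a.start).cmp(&(a.end - a.start, b.start))
    });
    let mut kept: Vec<PiiMatch> = Vec::new();
    for m in matches {
        if kept.iter().all(|k| m.end <= k.start || m.start >= k.end) {
            kept.push(m);
        }
    }
    // Restore document order for downstream redaction.
    kept.sort_by_key(|m| m.start);
    kept
}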
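Hashing every outbound payload can be a one-liner with the sha2 crate. A minimal sketch; the helper name and where the hash gets persisted are assumptions:

use sha2::{Digest, Sha256};

/// Hex-encoded SHA-256 of an outbound payload, recorded before the send.
/// (Hypothetical helper; the real audit-table wiring lives elsewhere.)
fn audit_hash(payload: &[u8]) -> String {
    let digest = Sha256::digest(payload);
    digest.iter().map(|b| format!("{:02x}", b)).collect()
}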

Frontend (React 18 + TypeScript + Vite, src/):
- 9 pages: full triage workflow (NewIssue → LogUpload → Triage → Resolution → RCA → Postmortem → History) plus Settings
- 7 components: ChatWindow, TriageProgress, PiiDiffViewer, DocEditor, HardwareReport, ModelSelector, UI primitives
- 3 Zustand stores: session, settings (persisted), history
- Type-safe tauriCommands.ts matching Rust backend types exactly (see the Rust-side command sketch after this list)
- 8 IT domain system prompts (Linux, Windows, Network, K8s, DB, Virt, HW, Obs)
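Each entry in tauriCommands.ts corresponds to a #[tauri::command] function on the Rust side. A minimal sketch of what one of the 28 commands might look like; the command name and payload types are hypothetical:

use serde::{Deserialize, Serialize};

// Hypothetical request/response types; the real ones are defined by the
// 28 IPC commands mentioned above.
#[derive(Deserialize)]
pub struct NewIssueRequest {
    pub title: String,
    pub description: String,
}

#[derive(Serialize)]
pub struct NewIssueResponse {
    pub session_id: String,
}

#[tauri::command]
async fn create_issue(req: NewIssueRequest) -> Result<NewIssueResponse, String> {
    // A real implementation would persist to the SQLCipher database.
    Ok(NewIssueResponse { session_id: String::from("stub") })
}

Commands like this are registered via tauri::generate_handler! in the Tauri builder; the hand-written TS bindings then mirror these signatures one-to-one.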

DevOps:
- .woodpecker/test.yml: rustfmt, clippy, cargo test, tsc, vitest on every push
- .woodpecker/release.yml: linux/amd64 + linux/arm64 builds, Gogs release upload

Verified:
- cargo check: zero errors
- tsc --noEmit: zero errors
- vitest run: 13/13 unit tests passing

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-03-14 22:36:25 -05:00


use async_trait::async_trait;

use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
use crate::state::ProviderConfig;

/// Provider for OpenAI and OpenAI-compatible endpoints (any server
/// exposing the /chat/completions API shape).
pub struct OpenAiProvider;

#[async_trait]
impl Provider for OpenAiProvider {
    fn name(&self) -> &str {
        "openai"
    }

    fn info(&self) -> ProviderInfo {
        ProviderInfo {
            name: "OpenAI Compatible".to_string(),
            supports_streaming: true,
            models: vec![
                "gpt-4o".to_string(),
                "gpt-4o-mini".to_string(),
                "gpt-4-turbo".to_string(),
            ],
        }
    }

    async fn chat(
        &self,
        messages: Vec<Message>,
        config: &ProviderConfig,
    ) -> anyhow::Result<ChatResponse> {
        let client = reqwest::Client::new();
        // Tolerate configured base URLs with or without a trailing slash.
        let url = format!(
            "{}/chat/completions",
            config.api_url.trim_end_matches('/')
        );
        let body = serde_json::json!({
            "model": config.model,
            "messages": messages,
            "max_tokens": 4096,
        });
        // .json(&body) sets Content-Type: application/json, so no
        // explicit header is needed.
        let resp = client
            .post(&url)
            .header("Authorization", format!("Bearer {}", config.api_key))
            .json(&body)
            .send()
            .await?;
        // Surface non-2xx responses with the body text for diagnostics.
        if !resp.status().is_success() {
            let status = resp.status();
            let text = resp.text().await?;
            anyhow::bail!("OpenAI API error {}: {}", status, text);
        }
        let json: serde_json::Value = resp.json().await?;
        // The assistant reply lives at choices[0].message.content.
        let content = json["choices"][0]["message"]["content"]
            .as_str()
            .ok_or_else(|| anyhow::anyhow!("No content in response"))?
            .to_string();
        // Token usage is optional: `?` inside the closure yields None if
        // any field is missing, rather than failing the whole request.
        let usage = json.get("usage").and_then(|u| {
            Some(TokenUsage {
                prompt_tokens: u["prompt_tokens"].as_u64()? as u32,
                completion_tokens: u["completion_tokens"].as_u64()? as u32,
                total_tokens: u["total_tokens"].as_u64()? as u32,
            })
        });
        Ok(ChatResponse {
            content,
            model: config.model.clone(),
            usage,
        })
    }
}
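A minimal usage sketch for the provider above, assuming ProviderConfig carries at least the three fields the impl reads (api_url, api_key, model) and Message is an OpenAI-style role/content pair:

use crate::ai::openai::OpenAiProvider;
use crate::ai::provider::Provider;
use crate::ai::Message;
use crate::state::ProviderConfig;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Hypothetical construction; ProviderConfig may carry more fields.
    let config = ProviderConfig {
        api_url: "https://api.openai.com/v1".to_string(),
        api_key: std::env::var("OPENAI_API_KEY")?,
        model: "gpt-4o-mini".to_string(),
    };
    let provider = OpenAiProvider;
    let reply = provider
        .chat(
            vec![Message {
                role: "user".to_string(),
                content: "Summarize this kernel panic log.".to_string(),
            }],
            &config,
        )
        .await?;
    println!("{}", reply.content);
    Ok(())
}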