docs(architecture): add C4 diagrams, ADRs, and architecture overview

Comprehensive architecture documentation covering:

- docs/architecture/README.md: Full C4 model diagrams (system context,
  container, component), data flow sequences, security architecture,
  AI provider class diagram, CI/CD pipeline, and deployment diagrams.
  All diagrams use Mermaid for version-controlled diagram-as-code.

- docs/architecture/adrs/ADR-001: Tauri vs Electron decision rationale
- docs/architecture/adrs/ADR-002: SQLCipher encryption choices and
  cipher_page_size=16384 rationale for Apple Silicon
- docs/architecture/adrs/ADR-003: Provider trait + factory pattern
- docs/architecture/adrs/ADR-004: Regex + Aho-Corasick PII detection
- docs/architecture/adrs/ADR-005: Auto-generate encryption keys at
  runtime (documents the fix from PR #24)
- docs/architecture/adrs/ADR-006: Zustand state management rationale

- docs/wiki/Architecture.md: Updated module table (14 migrations, not
  10), corrected integrations description, updated startup sequence to
  reflect key auto-generation, added links to new ADR docs.

- README.md: Fixed stale database paths (tftsr → trcaa) and updated
  env var descriptions to reflect auto-generation behavior.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: Shaun Arman · 2026-04-07 09:15:55 -05:00
Parent: d294847210 · Commit: fdb4fc03b9
9 changed files with 1387 additions and 14 deletions


@@ -287,9 +287,9 @@ All data is stored locally in a SQLCipher-encrypted database at:
| OS | Path |
|---|---|
| Linux | `~/.local/share/trcaa/trcaa.db` |
| macOS | `~/Library/Application Support/trcaa/trcaa.db` |
| Windows | `%APPDATA%\trcaa\trcaa.db` |

Override with the `TFTSR_DATA_DIR` environment variable.
@@ -300,8 +300,8 @@ Override with the `TFTSR_DATA_DIR` environment variable.
| Variable | Default | Purpose |
|---|---|---|
| `TFTSR_DATA_DIR` | Platform data dir | Override database location |
| `TFTSR_DB_KEY` | _(auto-generated)_ | Database encryption key override — auto-generated at first launch if unset |
| `TFTSR_ENCRYPTION_KEY` | _(auto-generated)_ | Credential encryption key override — auto-generated at first launch if unset |
| `RUST_LOG` | `info` | Tracing log level (`debug`, `info`, `warn`, `error`) |

---

docs/architecture/README.md (new file)

@@ -0,0 +1,863 @@
# TRCAA Architecture Documentation
**Troubleshooting and RCA Assistant** — C4-model architecture documentation using Mermaid diagrams.
---
## Table of Contents
1. [System Context (C4 Level 1)](#system-context)
2. [Container Architecture (C4 Level 2)](#container-architecture)
3. [Component Architecture (C4 Level 3)](#component-architecture)
4. [Data Architecture](#data-architecture)
5. [Security Architecture](#security-architecture)
6. [AI Provider Architecture](#ai-provider-architecture)
7. [Integration Architecture](#integration-architecture)
8. [Deployment Architecture](#deployment-architecture)
9. [Key Data Flows](#key-data-flows)
10. [Architecture Decision Records](#architecture-decision-records)
---
## System Context
The system context diagram shows TRCAA in relation to its users and external systems.
```mermaid
C4Context
title System Context — Troubleshooting and RCA Assistant
Person(it_eng, "IT Engineer", "Diagnoses incidents and conducts root cause analysis")
System(trcaa, "TRCAA Desktop App", "Structured AI-backed assistant for IT troubleshooting, 5-whys RCA, and post-mortem documentation")
System_Ext(ollama, "Ollama (Local)", "Runs open-source LLMs locally (llama3, mistral, phi3)")
System_Ext(openai, "OpenAI API", "GPT-4o, GPT-4o-mini for cloud AI inference")
System_Ext(anthropic, "Anthropic API", "Claude 3.5 Sonnet, Claude Haiku")
System_Ext(gemini, "Google Gemini API", "Gemini Pro for cloud AI inference")
System_Ext(msi_genai, "MSI GenAI Gateway", "Enterprise AI gateway (commandcentral.com)")
System_Ext(confluence, "Confluence", "Atlassian wiki — publish RCA docs")
System_Ext(servicenow, "ServiceNow", "ITSM platform — create incident tickets")
System_Ext(ado, "Azure DevOps", "Work item tracking and collaboration")
Rel(it_eng, trcaa, "Uses", "Desktop app (Tauri WebView)")
Rel(trcaa, ollama, "AI inference", "HTTP/JSON (local)")
Rel(trcaa, openai, "AI inference", "HTTPS/REST")
Rel(trcaa, anthropic, "AI inference", "HTTPS/REST")
Rel(trcaa, gemini, "AI inference", "HTTPS/REST")
Rel(trcaa, msi_genai, "AI inference", "HTTPS/REST")
Rel(trcaa, confluence, "Publish RCA docs", "HTTPS/REST + OAuth2")
Rel(trcaa, servicenow, "Create incidents", "HTTPS/REST + OAuth2")
Rel(trcaa, ado, "Create work items", "HTTPS/REST + OAuth2")
```
---
## Container Architecture
TRCAA is a single-process Tauri 2 desktop application. The "containers" are logical boundaries within the process.
```mermaid
C4Container
title Container Architecture — TRCAA
Person(user, "IT Engineer")
System_Boundary(trcaa, "TRCAA Desktop Process") {
Container(webview, "React Frontend", "React 18 + TypeScript + Vite", "Renders UI via OS WebView (WebKit/WebView2). Manages ephemeral session state and persisted settings.")
Container(tauri_core, "Tauri Core / IPC Bridge", "Rust / Tauri 2", "Routes invoke() calls between WebView and backend command handlers. Enforces capability ACL.")
Container(rust_backend, "Rust Backend", "Rust / Tokio async", "Command handlers, AI provider clients, PII engine, document generation, integration clients, audit logging.")
ContainerDb(db, "SQLCipher Database", "SQLite + SQLCipher AES-256", "All persistent data: issues, logs, messages, audit trail, credentials, AI provider configs.")
ContainerDb(stronghold, "Stronghold Key Store", "tauri-plugin-stronghold", "Encrypted key-value store for symmetric key material.")
ContainerDb(local_fs, "Local Filesystem", "App data directory", "Redacted log files, .dbkey, .enckey, exported documents.")
}
System_Ext(ai_providers, "AI Providers", "OpenAI, Anthropic, Gemini, Mistral, Ollama")
System_Ext(integrations, "Integrations", "Confluence, ServiceNow, Azure DevOps")
Rel(user, webview, "Interacts with", "Mouse/keyboard via OS WebView")
Rel(webview, tauri_core, "IPC calls", "invoke() / Tauri JS bridge")
Rel(tauri_core, rust_backend, "Dispatches commands", "Rust function calls")
Rel(rust_backend, db, "Reads/writes", "rusqlite (sync, mutex-guarded)")
Rel(rust_backend, stronghold, "Reads/writes keys", "Plugin API")
Rel(rust_backend, local_fs, "Reads/writes files", "std::fs")
Rel(rust_backend, ai_providers, "AI inference", "reqwest async HTTP")
Rel(rust_backend, integrations, "API calls", "reqwest async HTTP + OAuth2")
```
---
## Component Architecture
### Backend Components
```mermaid
graph TD
subgraph "Tauri IPC Layer"
IPC[IPC Command Router\nlib.rs generate_handler!]
end
subgraph "Command Handlers (commands/)"
CMD_DB[db.rs\nIssue CRUD\nTimeline Events\n5-Whys Entries]
CMD_AI[ai.rs\nChat Message\nLog Analysis\nProvider Test]
CMD_ANALYSIS[analysis.rs\nLog Upload\nPII Detection\nRedaction Apply]
CMD_DOCS[docs.rs\nRCA Generation\nPostmortem Gen\nDocument Export]
CMD_INTEGRATIONS[integrations.rs\nConfluence\nServiceNow\nAzure DevOps\nOAuth Flow]
CMD_SYSTEM[system.rs\nSettings CRUD\nOllama Mgmt\nAI Provider Mgmt\nAudit Log]
end
subgraph "Domain Services"
AI[AI Layer\nai/provider.rs\nTrait + Factory]
PII[PII Engine\npii/detector.rs\n12 Pattern Detectors]
AUDIT[Audit Logger\naudit/log.rs\nHash-chained entries]
DOCS_GEN[Doc Generator\ndocs/rca.rs\ndocs/postmortem.rs]
end
subgraph "AI Providers (ai/)"
ANTHROPIC[anthropic.rs\nClaude API]
OPENAI[openai.rs\nOpenAI + Custom REST]
OLLAMA[ollama.rs\nLocal Models]
GEMINI[gemini.rs\nGoogle Gemini]
MISTRAL[mistral.rs\nMistral API]
end
subgraph "Integration Clients (integrations/)"
CONFLUENCE[confluence.rs\nconfluence_search.rs]
SERVICENOW[servicenow.rs\nservicenow_search.rs]
AZUREDEVOPS[azuredevops.rs\nazuredevops_search.rs]
AUTH[auth.rs\nAES-256-GCM\nToken Encryption]
WEBVIEW_AUTH[webview_auth.rs\nOAuth WebView\nCallback Server]
end
subgraph "Data Layer (db/)"
MIGRATIONS[migrations.rs\n14 Schema Versions]
MODELS[models.rs\nIssue / LogFile\nAiMessage / Document\nAuditEntry / Credential]
CONNECTION[connection.rs\nSQLCipher Connect\nKey Auto-gen\nPlain→Encrypted Migration]
end
IPC --> CMD_DB
IPC --> CMD_AI
IPC --> CMD_ANALYSIS
IPC --> CMD_DOCS
IPC --> CMD_INTEGRATIONS
IPC --> CMD_SYSTEM
CMD_AI --> AI
CMD_ANALYSIS --> PII
CMD_DOCS --> DOCS_GEN
CMD_INTEGRATIONS --> CONFLUENCE
CMD_INTEGRATIONS --> SERVICENOW
CMD_INTEGRATIONS --> AZUREDEVOPS
CMD_INTEGRATIONS --> AUTH
CMD_INTEGRATIONS --> WEBVIEW_AUTH
AI --> ANTHROPIC
AI --> OPENAI
AI --> OLLAMA
AI --> GEMINI
AI --> MISTRAL
CMD_DB --> MODELS
CMD_AI --> AUDIT
CMD_ANALYSIS --> AUDIT
MODELS --> MIGRATIONS
MIGRATIONS --> CONNECTION
style IPC fill:#4a90d9,color:#fff
style AI fill:#7b68ee,color:#fff
style PII fill:#e67e22,color:#fff
style AUDIT fill:#c0392b,color:#fff
```
### Frontend Components
```mermaid
graph TD
subgraph "React Application (src/)"
APP[App.tsx\nSidebar + Router\nTheme Provider]
end
subgraph "Pages (src/pages/)"
DASHBOARD[Dashboard\nStats + Quick Actions]
NEW_ISSUE[NewIssue\nCreate Form]
LOG_UPLOAD[LogUpload\nFile Upload + PII Review]
TRIAGE[Triage\n5-Whys AI Chat]
RESOLUTION[Resolution\nStep Tracking]
RCA[RCA\nDocument Editor]
POSTMORTEM[Postmortem\nDocument Editor]
HISTORY[History\nSearch + Filter]
SETTINGS[Settings\nProviders / Ollama\nIntegrations / Security]
end
subgraph "Components (src/components/)"
CHAT_WIN[ChatWindow\nStreaming Messages]
DOC_EDITOR[DocEditor\nMarkdown Editor]
PII_DIFF[PiiDiffViewer\nSide-by-side Diff]
HW_REPORT[HardwareReport\nSystem Specs]
MODEL_SEL[ModelSelector\nProvider Dropdown]
TRIAGE_PROG[TriageProgress\n5-Whys Steps]
end
subgraph "State (src/stores/)"
SESSION[sessionStore\nEphemeral — NOT persisted\nCurrentIssue / Messages\nPiiSpans / WhyLevel]
SETTINGS_STORE[settingsStore\nPersisted to localStorage\nTheme / ActiveProvider\nPiiPatterns]
HISTORY_STORE[historyStore\nCached issue list\nSearch results]
end
subgraph "IPC Layer (src/lib/)"
IPC["tauriCommands.ts\nTyped invoke() wrappers\nAll Tauri commands"]
PROMPTS[domainPrompts.ts\n8 Domain System Prompts]
end
APP --> DASHBOARD
APP --> TRIAGE
APP --> LOG_UPLOAD
APP --> HISTORY
APP --> SETTINGS
TRIAGE --> CHAT_WIN
TRIAGE --> TRIAGE_PROG
LOG_UPLOAD --> PII_DIFF
RCA --> DOC_EDITOR
POSTMORTEM --> DOC_EDITOR
SETTINGS --> HW_REPORT
SETTINGS --> MODEL_SEL
TRIAGE --> SESSION
TRIAGE --> SETTINGS_STORE
HISTORY --> HISTORY_STORE
SETTINGS --> SETTINGS_STORE
CHAT_WIN --> IPC
LOG_UPLOAD --> IPC
RCA --> IPC
SETTINGS --> IPC
IPC --> PROMPTS
style SESSION fill:#e74c3c,color:#fff
style SETTINGS_STORE fill:#27ae60,color:#fff
style IPC fill:#4a90d9,color:#fff
```
---
## Data Architecture
### Database Schema
```mermaid
erDiagram
issues {
TEXT id PK
TEXT title
TEXT description
TEXT severity
TEXT status
TEXT category
TEXT source
TEXT assigned_to
TEXT tags
TEXT created_at
TEXT updated_at
}
log_files {
TEXT id PK
TEXT issue_id FK
TEXT file_name
TEXT content_hash
TEXT mime_type
INTEGER size_bytes
INTEGER redacted
TEXT created_at
}
pii_spans {
TEXT id PK
TEXT log_file_id FK
INTEGER start_offset
INTEGER end_offset
TEXT original_value
TEXT replacement
TEXT pattern_type
INTEGER approved
}
ai_conversations {
TEXT id PK
TEXT issue_id FK
TEXT provider_name
TEXT model_name
TEXT created_at
}
ai_messages {
TEXT id PK
TEXT conversation_id FK
TEXT role
TEXT content
INTEGER token_count
TEXT created_at
}
resolution_steps {
TEXT id PK
TEXT issue_id FK
INTEGER step_order
TEXT question
TEXT answer
TEXT evidence
TEXT created_at
}
documents {
TEXT id PK
TEXT issue_id FK
TEXT doc_type
TEXT title
TEXT content_md
TEXT created_at
TEXT updated_at
}
audit_log {
TEXT id PK
TEXT action
TEXT entity_type
TEXT entity_id
TEXT prev_hash
TEXT entry_hash
TEXT details
TEXT created_at
}
credentials {
TEXT id PK
TEXT service UNIQUE
TEXT token_type
TEXT encrypted_token
TEXT token_hash
TEXT expires_at
TEXT created_at
}
integration_config {
TEXT id PK
TEXT service UNIQUE
TEXT base_url
TEXT username
TEXT project_name
TEXT space_key
INTEGER auto_create
}
ai_providers {
TEXT id PK
TEXT name UNIQUE
TEXT provider_type
TEXT api_url
TEXT encrypted_api_key
TEXT model
TEXT config_json
}
issues_fts {
TEXT rowid FK
TEXT title
TEXT description
}
issues ||--o{ log_files : "has"
issues ||--o{ ai_conversations : "has"
issues ||--o{ resolution_steps : "has"
issues ||--o{ documents : "has"
issues ||--|| issues_fts : "indexed by"
log_files ||--o{ pii_spans : "contains"
ai_conversations ||--o{ ai_messages : "contains"
```
### Data Flow — Issue Triage Lifecycle
```mermaid
sequenceDiagram
participant U as User
participant FE as React Frontend
participant IPC as Tauri IPC
participant BE as Rust Backend
participant PII as PII Engine
participant AI as AI Provider
participant DB as SQLCipher DB
U->>FE: Create new issue
FE->>IPC: create_issue(title, severity)
IPC->>BE: cmd::db::create_issue()
BE->>DB: INSERT INTO issues
DB-->>BE: Issue{id, ...}
BE-->>FE: Issue
U->>FE: Upload log file
FE->>IPC: upload_log_file(issue_id, path)
IPC->>BE: cmd::analysis::upload_log_file()
BE->>BE: Read file, SHA-256 hash
BE->>DB: INSERT INTO log_files
BE->>PII: detect(content)
PII-->>BE: Vec<PiiSpan>
BE->>DB: INSERT INTO pii_spans
BE-->>FE: {log_file, spans}
U->>FE: Approve redactions
FE->>IPC: apply_redactions(log_file_id, span_ids)
IPC->>BE: cmd::analysis::apply_redactions()
BE->>DB: UPDATE pii_spans SET approved=1
BE->>BE: Write .redacted file
BE->>DB: UPDATE log_files SET redacted=1
BE->>DB: INSERT INTO audit_log (hash-chained)
U->>FE: Start AI triage
FE->>IPC: analyze_logs(issue_id, ...)
IPC->>BE: cmd::ai::analyze_logs()
BE->>DB: SELECT redacted log content
BE->>AI: POST /chat/completions (redacted content)
AI-->>BE: {summary, findings, why1, severity}
BE->>DB: INSERT ai_messages
BE-->>FE: AnalysisResult
loop 5-Whys Iteration
U->>FE: Ask "Why?" question
FE->>IPC: chat_message(conversation_id, msg)
IPC->>BE: cmd::ai::chat_message()
BE->>DB: SELECT conversation history
BE->>AI: POST /chat/completions
AI-->>BE: Response with why level detection
BE->>DB: INSERT ai_messages
BE-->>FE: ChatResponse{content, why_level}
FE->>FE: Auto-advance why level (1→5)
end
U->>FE: Generate RCA
FE->>IPC: generate_rca(issue_id)
IPC->>BE: cmd::docs::generate_rca()
BE->>DB: SELECT issue + steps + conversations
BE->>BE: Build markdown template
BE->>DB: INSERT INTO documents
BE-->>FE: Document{content_md}
```
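The `apply_redactions` step in the sequence above can be sketched in a few lines. This is a minimal, std-only illustration; the struct fields and function shape are assumptions for the sketch, not the actual code in `db/models.rs` or `commands/analysis.rs`.

```rust
// Illustrative span model; field names mirror the pii_spans table.
struct PiiSpan {
    start_offset: usize,
    end_offset: usize,
    replacement: String,
    approved: bool,
}

/// Replace every approved span with its placeholder. Splicing right-to-left
/// keeps earlier byte offsets valid after each replacement.
fn apply_redactions(content: &str, mut spans: Vec<PiiSpan>) -> String {
    spans.sort_by(|a, b| b.start_offset.cmp(&a.start_offset));
    let mut out = content.to_string();
    for span in spans.iter().filter(|s| s.approved) {
        out.replace_range(span.start_offset..span.end_offset, &span.replacement);
    }
    out
}

fn main() {
    let log = "login from 10.0.0.5 by alice@example.com";
    let spans = vec![
        PiiSpan { start_offset: 11, end_offset: 19, replacement: "[IPV4]".into(), approved: true },
        PiiSpan { start_offset: 23, end_offset: 40, replacement: "[EMAIL]".into(), approved: true },
    ];
    assert_eq!(apply_redactions(log, spans), "login from [IPV4] by [EMAIL]");
}
```

Unapproved spans pass through untouched, which is what makes the user approval gate meaningful: only the reviewed set is ever spliced out.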
---
## Security Architecture
### Security Layers
```mermaid
graph TB
subgraph "Layer 1: Network Security"
CSP[Content Security Policy\nallow-list of external hosts]
TLS[TLS Enforcement\nreqwest HTTPS only]
CAP[Tauri Capability ACL\nLeast-privilege permissions]
end
subgraph "Layer 2: Data Encryption"
SQLCIPHER[SQLCipher AES-256\nFull database encryption\nPBKDF2-SHA512, 256k iterations]
AES_GCM[AES-256-GCM\nCredential token encryption\nUnique nonce per encrypt]
STRONGHOLD[Tauri Stronghold\nKey derivation + storage\nArgon2 password hashing]
end
subgraph "Layer 3: Key Management"
DB_KEY[.dbkey file\nPer-install random 256-bit key\nMode 0600 — owner only]
ENC_KEY[.enckey file\nPer-install random 256-bit key\nMode 0600 — owner only]
ENV_OVERRIDE[TFTSR_DB_KEY / TFTSR_ENCRYPTION_KEY\nOptional env var override]
end
subgraph "Layer 4: PII Protection"
PII_DETECT[12-Pattern PII Detector\nEmail / IP / Phone / SSN\nTokens / Passwords / MAC]
USER_APPROVE[User Approval Gate\nManual review before AI send]
AUDIT[Hash-chained Audit Log\nprev_hash → entry_hash\nTamper detection]
end
subgraph "Layer 5: Credential Storage"
TOKEN_HASH[Token Hash Storage\nSHA-256 hash in credentials table]
TOKEN_ENC[Token Encrypted Storage\nAES-256-GCM ciphertext]
NO_BROWSER[No Browser Storage\nAPI keys never in localStorage]
end
SQLCIPHER --> DB_KEY
AES_GCM --> ENC_KEY
DB_KEY --> ENV_OVERRIDE
ENC_KEY --> ENV_OVERRIDE
TOKEN_ENC --> AES_GCM
TOKEN_HASH --> AUDIT
style SQLCIPHER fill:#c0392b,color:#fff
style AES_GCM fill:#c0392b,color:#fff
style AUDIT fill:#e67e22,color:#fff
style PII_DETECT fill:#e67e22,color:#fff
style USER_APPROVE fill:#27ae60,color:#fff
```
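The hash-chained audit log in Layer 4 works by having each entry's hash cover the previous entry's hash, so editing any historical row invalidates every hash after it. A dependency-free sketch, assuming illustrative function names: the real `audit/log.rs` uses a cryptographic hash (SHA-256), for which std's `DefaultHasher` stands in here purely to keep the example self-contained.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Each entry hash covers the previous hash plus this entry's fields.
// NOTE: DefaultHasher is a stand-in; real tamper detection needs SHA-256.
fn entry_hash(prev_hash: &str, action: &str, details: &str) -> String {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    action.hash(&mut h);
    details.hash(&mut h);
    format!("{:016x}", h.finish())
}

/// Recompute each entry's hash from (prev_hash, action, details) and compare
/// against the stored entry_hash; tuple layout mirrors the audit_log columns.
fn verify_chain(entries: &[(String, String, String, String)]) -> bool {
    entries
        .iter()
        .all(|(prev, action, details, stored)| entry_hash(prev, action, details) == *stored)
}

fn main() {
    let h1 = entry_hash("GENESIS", "create_issue", "id=42");
    let h2 = entry_hash(&h1, "apply_redactions", "log=7");
    let chain: Vec<(String, String, String, String)> = vec![
        ("GENESIS".to_string(), "create_issue".to_string(), "id=42".to_string(), h1.clone()),
        (h1, "apply_redactions".to_string(), "log=7".to_string(), h2),
    ];
    assert!(verify_chain(&chain));
}
```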
### Authentication Flow — OAuth2 Integration
```mermaid
sequenceDiagram
participant U as User
participant FE as Frontend
participant BE as Rust Backend
participant WV as WebView Window
participant CB as Callback Server\n(warp, port 8765)
participant EXT as External Service\n(Confluence/ADO)
U->>FE: Click "Connect" for integration
FE->>BE: initiate_oauth(service)
BE->>BE: Generate PKCE code_verifier + code_challenge
BE->>CB: Start warp server (localhost:8765)
BE->>WV: Open auth URL in new WebView window
WV->>EXT: GET /oauth/authorize?code_challenge=...
EXT-->>WV: Login page
U->>WV: Enter credentials
WV->>EXT: POST credentials
EXT-->>WV: Redirect to localhost:8765/callback?code=xxx
WV->>CB: GET /callback?code=xxx
CB->>BE: Signal auth code received
BE->>EXT: POST /oauth/token (code + code_verifier)
EXT-->>BE: access_token + refresh_token
BE->>BE: encrypt_token(access_token)
BE->>DB: INSERT credentials (encrypted_token, token_hash)
BE->>DB: INSERT audit_log
BE-->>FE: OAuth complete
FE->>FE: Show "Connected" status
```
---
## AI Provider Architecture
### Provider Trait Pattern
```mermaid
classDiagram
class Provider {
<<trait>>
+name() String
+chat(messages, config) Future~ChatResponse~
+info() ProviderInfo
}
class AnthropicProvider {
-api_key: String
-model: String
+chat(messages, config)
+name() "anthropic"
}
class OpenAiProvider {
-api_url: String
-api_key: String
-model: String
-api_format: ApiFormat
+chat(messages, config)
+name() "openai"
}
class OllamaProvider {
-base_url: String
-model: String
+chat(messages, config)
+name() "ollama"
}
class GeminiProvider {
-api_key: String
-model: String
+chat(messages, config)
+name() "gemini"
}
class MistralProvider {
-api_key: String
-model: String
+chat(messages, config)
+name() "mistral"
}
class ProviderFactory {
+create_provider(config: ProviderConfig) Box~dyn Provider~
}
class ProviderConfig {
+name: String
+provider_type: String
+api_url: String
+api_key: String
+model: String
+max_tokens: Option~u32~
+temperature: Option~f64~
+custom_endpoint_path: Option~String~
+custom_auth_header: Option~String~
+custom_auth_prefix: Option~String~
+api_format: Option~String~
}
Provider <|.. AnthropicProvider
Provider <|.. OpenAiProvider
Provider <|.. OllamaProvider
Provider <|.. GeminiProvider
Provider <|.. MistralProvider
ProviderFactory --> Provider : creates
ProviderFactory --> ProviderConfig : consumes
```
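The factory dispatch in the class diagram can be sketched synchronously. The real trait is async (via `#[async_trait]`) and also carries `chat()` and `info()`; this stripped-down stand-in only shows how a runtime `provider_type` string becomes a `Box<dyn Provider>`.

```rust
// Reduced Provider trait: the real one adds async chat() and info().
trait Provider {
    fn name(&self) -> &'static str;
}

struct OllamaProvider;
impl Provider for OllamaProvider {
    fn name(&self) -> &'static str { "ollama" }
}

struct AnthropicProvider;
impl Provider for AnthropicProvider {
    fn name(&self) -> &'static str { "anthropic" }
}

// Runtime dispatch on the user-configured provider_type string, as in
// create_provider() in ai/provider.rs.
fn create_provider(provider_type: &str) -> Result<Box<dyn Provider>, String> {
    match provider_type {
        "ollama" => Ok(Box::new(OllamaProvider)),
        "anthropic" => Ok(Box::new(AnthropicProvider)),
        other => Err(format!("unknown provider type: {other}")),
    }
}

fn main() {
    assert_eq!(create_provider("ollama").unwrap().name(), "ollama");
    assert!(create_provider("cohere").is_err());
}
```

Because the return type is a trait object, every concrete provider can be stored in the same `AppState` slot and swapped without recompiling.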
### Tool Calling Flow (Azure DevOps)
```mermaid
sequenceDiagram
participant U as User
participant FE as Frontend
participant BE as Rust Backend
participant AI as AI Provider
participant ADO as Azure DevOps API
U->>FE: Chat message mentioning ADO work item
FE->>BE: chat_message(conversation_id, msg, provider_config)
BE->>BE: Inject get_available_tools() into request
BE->>AI: POST /chat/completions {messages, tools: [add_ado_comment]}
AI-->>BE: {tool_calls: [{function: "add_ado_comment", args: {work_item_id, comment_text}}]}
BE->>BE: Parse tool_calls from response
BE->>BE: Validate tool name matches registered tools
BE->>ADO: PATCH /wit/workitems/{id}?api-version=7.0 (add comment)
ADO-->>BE: 200 OK
BE->>BE: Format tool result message
BE->>AI: POST /chat/completions {messages, tool_result}
AI-->>BE: Final response to user
BE->>DB: INSERT ai_messages (tool call + result)
BE-->>FE: ChatResponse{content}
```
---
## Integration Architecture
```mermaid
graph LR
subgraph "Integration Layer (integrations/)"
AUTH[auth.rs\nToken Encryption\nOAuth + PKCE\nCookie Extraction]
subgraph "Confluence"
CF[confluence.rs\nPublish Documents\nSpace Management]
CF_SEARCH[confluence_search.rs\nContent Search\nPersistent WebView]
end
subgraph "ServiceNow"
SN[servicenow.rs\nCreate Incidents\nUpdate Records]
SN_SEARCH[servicenow_search.rs\nIncident Search\nKnowledge Base]
end
subgraph "Azure DevOps"
ADO["azuredevops.rs\nWork Items CRUD\nComments (AI tool)"]
ADO_SEARCH[azuredevops_search.rs\nWork Item Search\nPersistent WebView]
end
subgraph "Auth Infrastructure"
WV_AUTH[webview_auth.rs\nOAuth WebView\nLogin Flow]
CB_SERVER[callback_server.rs\nwarp HTTP Server\nlocalhost:8765]
NAT_COOKIES[native_cookies*.rs\nPlatform Cookie\nExtraction]
end
end
subgraph "External Services"
CF_EXT[Atlassian Confluence\nhttps://*.atlassian.net]
SN_EXT[ServiceNow\nhttps://*.service-now.com]
ADO_EXT[Azure DevOps\nhttps://dev.azure.com]
end
AUTH --> CF
AUTH --> SN
AUTH --> ADO
WV_AUTH --> CB_SERVER
WV_AUTH --> NAT_COOKIES
CF --> CF_EXT
CF_SEARCH --> CF_EXT
SN --> SN_EXT
SN_SEARCH --> SN_EXT
ADO --> ADO_EXT
ADO_SEARCH --> ADO_EXT
style AUTH fill:#c0392b,color:#fff
```
---
## Deployment Architecture
### CI/CD Pipeline
```mermaid
graph TB
subgraph "Source Control"
GOGS[Gogs / Gitea\ngogs.tftsr.com\nSarman Repository]
end
subgraph "CI/CD Triggers"
PR_TRIGGER[PR Opened/Updated\ntest.yml workflow]
MASTER_TRIGGER[Push to master\nauto-tag.yml workflow]
DOCKER_TRIGGER[.docker/ changes\nbuild-images.yml workflow]
end
subgraph "Test Runner — amd64-docker-runner"
RUSTFMT[1. rustfmt\nFormat Check]
CLIPPY[2. clippy\n-D warnings]
CARGO_TEST[3. cargo test\n64 Rust tests]
TSC[4. tsc --noEmit\nType Check]
VITEST[5. vitest run\n13 JS tests]
end
subgraph "Release Builders (Parallel)"
AMD64[linux/amd64\nDocker: trcaa-linux-amd64\n.deb .rpm .AppImage]
WINDOWS[windows/amd64\nDocker: trcaa-windows-cross\n.exe .msi]
ARM64[linux/arm64\narm64 native runner\n.deb .rpm .AppImage]
MACOS[macOS arm64\nnative macOS runner\n.app .dmg]
end
subgraph "Artifact Storage"
RELEASE[Gitea Release\nv0.x.x tags\nAll platform assets]
REGISTRY[Gitea Container Registry\n172.0.0.29:3000\nCI Docker images]
end
GOGS --> PR_TRIGGER
GOGS --> MASTER_TRIGGER
GOGS --> DOCKER_TRIGGER
PR_TRIGGER --> RUSTFMT
RUSTFMT --> CLIPPY
CLIPPY --> CARGO_TEST
CARGO_TEST --> TSC
TSC --> VITEST
MASTER_TRIGGER --> AMD64
MASTER_TRIGGER --> WINDOWS
MASTER_TRIGGER --> ARM64
MASTER_TRIGGER --> MACOS
AMD64 --> RELEASE
WINDOWS --> RELEASE
ARM64 --> RELEASE
MACOS --> RELEASE
DOCKER_TRIGGER --> REGISTRY
style VITEST fill:#27ae60,color:#fff
style RELEASE fill:#4a90d9,color:#fff
```
### Runtime Architecture (per Platform)
```mermaid
graph TB
subgraph "macOS Runtime"
MAC_PROC[trcaa process\nMach-O arm64 binary]
WEBKIT[WKWebView\nSafari WebKit engine]
MAC_DATA[~/Library/Application Support/trcaa/\n.dbkey mode 0600\n.enckey mode 0600\ntrcaa.db SQLCipher]
MAC_BUNDLE[Troubleshooting and RCA Assistant.app\n/Applications/]
end
subgraph "Linux Runtime"
LINUX_PROC[trcaa process\nELF amd64/arm64]
WEBKIT2[WebKitGTK WebView\nwebkit2gtk4.1]
LINUX_DATA[~/.local/share/trcaa/\n.dbkey .enckey\ntrcaa.db]
LINUX_PKG[.deb / .rpm / .AppImage]
end
subgraph "Windows Runtime"
WIN_PROC[trcaa.exe\nPE amd64]
WEBVIEW2[Microsoft WebView2\nChromium-based]
WIN_DATA[%APPDATA%\trcaa\\\n.dbkey .enckey\ntrcaa.db]
WIN_PKG[NSIS .exe / .msi]
end
MAC_BUNDLE --> MAC_PROC
MAC_PROC --> WEBKIT
MAC_PROC --> MAC_DATA
LINUX_PKG --> LINUX_PROC
LINUX_PROC --> WEBKIT2
LINUX_PROC --> LINUX_DATA
WIN_PKG --> WIN_PROC
WIN_PROC --> WEBVIEW2
WIN_PROC --> WIN_DATA
```
---
## Key Data Flows
### PII Detection and Redaction
```mermaid
flowchart TD
A[User uploads log file] --> B[Read file contents\nmax 50MB]
B --> C[Compute SHA-256 hash]
C --> D[Store metadata in log_files table]
D --> E[Run PII Detection Engine]
subgraph "PII Engine"
E --> F{12 Pattern Detectors}
F --> G[Email Regex]
F --> H[IPv4/IPv6 Regex]
F --> I[Bearer Token Regex]
F --> J[Password Regex]
F --> K[SSN / Phone / CC]
F --> L[MAC / Hostname]
G & H & I & J & K & L --> M[Collect all spans]
M --> N[Sort by start offset]
N --> O[Remove overlaps\nlongest span wins]
end
O --> P[Store pii_spans in DB\nwith UUID per span]
P --> Q[Return spans to UI]
Q --> R[PiiDiffViewer\nSide-by-side diff]
R --> S{User reviews}
S -->|Approve| T[apply_redactions\nMark spans approved]
S -->|Dismiss| U[Remove from approved set]
T --> V[Write .redacted log file\nreplace spans with placeholders]
V --> W[Update log_files.redacted = 1]
W --> X[Append to audit_log\nhash-chained entry]
X --> Y[Log now safe for AI send]
```
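The "remove overlaps, longest span wins" step above handles detectors that flag overlapping text (e.g. a hostname inside an email address). A std-only sketch with illustrative field names, not the actual code in `pii/detector.rs`:

```rust
#[derive(Clone, Copy)]
struct Span {
    start: usize,
    end: usize,
}

/// Sort by start offset, then keep the longer of any overlapping pair.
fn dedupe_spans(mut spans: Vec<Span>) -> Vec<Span> {
    // Sort by start; for equal starts, longest first.
    spans.sort_by(|a, b| a.start.cmp(&b.start).then(b.end.cmp(&a.end)));
    let mut kept: Vec<Span> = Vec::new();
    for s in spans {
        match kept.last_mut() {
            // Overlaps the previously kept span: longest span wins.
            Some(last) if s.start < last.end => {
                if s.end - s.start > last.end - last.start {
                    *last = s;
                }
            }
            _ => kept.push(s),
        }
    }
    kept
}

fn main() {
    let spans = vec![
        Span { start: 0, end: 5 },
        Span { start: 3, end: 10 },  // overlaps the first and is longer: wins
        Span { start: 12, end: 15 },
    ];
    let kept = dedupe_spans(spans);
    assert_eq!(kept.len(), 2);
    assert_eq!((kept[0].start, kept[0].end), (3, 10));
}
```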
### Encryption Key Lifecycle
```mermaid
flowchart TD
A[App Launch] --> B{TFTSR_DB_KEY env var set?}
B -->|Yes| C[Use env var key]
B -->|No| D{Release build?}
D -->|Debug| E[Use hardcoded dev key]
D -->|Release| F{.dbkey file exists?}
F -->|Yes| G[Load key from .dbkey]
F -->|No| H[Generate 32 random bytes\nhex-encode → 64 char key]
H --> I[Write to .dbkey\nmode 0600]
I --> J[Use generated key]
G --> K{Open database}
C --> K
E --> K
J --> K
K --> L{SQLCipher decrypt success?}
L -->|Yes| M[Run migrations\nDatabase ready]
L -->|No| N{File is plain SQLite?}
N -->|Yes| O[migrate_plain_to_encrypted\nCreate .db.plain-backup\nATTACH + sqlcipher_export]
N -->|No| P[Fatal error\nDatabase corrupt]
O --> M
style H fill:#27ae60,color:#fff
style O fill:#e67e22,color:#fff
style P fill:#c0392b,color:#fff
```
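The "generate 32 random bytes, hex-encode to a 64-char key" branch of the lifecycle reduces to a one-line encoder. In the real `connection.rs` the bytes come from an OS CSPRNG and the result is written to `.dbkey` with mode 0600; the fixed array below stands in for the random bytes so this sketch needs no external crate.

```rust
// Hex-encode a 256-bit key: 32 bytes -> 64 lowercase hex characters.
fn hex_encode_key(bytes: &[u8; 32]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let raw = [0x5a_u8; 32]; // stand-in for 32 CSPRNG bytes
    let key = hex_encode_key(&raw);
    assert_eq!(key.len(), 64); // 2 hex chars per byte
    assert!(key.starts_with("5a5a"));
}
```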
---
## Architecture Decision Records
See the [adrs/](./adrs/) directory for all Architecture Decision Records.
| ADR | Title | Status |
|-----|-------|--------|
| [ADR-001](./adrs/ADR-001-tauri-desktop-framework.md) | Tauri as Desktop Framework | Accepted |
| [ADR-002](./adrs/ADR-002-sqlcipher-encrypted-database.md) | SQLCipher for Encrypted Storage | Accepted |
| [ADR-003](./adrs/ADR-003-provider-trait-pattern.md) | Provider Trait Pattern for AI Backends | Accepted |
| [ADR-004](./adrs/ADR-004-pii-regex-aho-corasick.md) | Regex + Aho-Corasick for PII Detection | Accepted |
| [ADR-005](./adrs/ADR-005-auto-generate-encryption-keys.md) | Auto-generate Encryption Keys at Runtime | Accepted |
| [ADR-006](./adrs/ADR-006-zustand-state-management.md) | Zustand for Frontend State Management | Accepted |


@@ -0,0 +1,66 @@
# ADR-001: Tauri as Desktop Framework
**Status**: Accepted
**Date**: 2025-Q3
**Deciders**: sarman
---
## Context
A cross-platform desktop application is required for IT engineers who need:
- Fully offline operation (local AI via Ollama)
- Encrypted local data storage (sensitive incident details)
- Access to local filesystem (log files)
- No telemetry or cloud dependency for core functionality
- Distribution on Linux, macOS, and Windows
The main alternatives considered were **Electron**, **Flutter**, **Qt**, and a pure **web app**.
---
## Decision
Use **Tauri 2** with a **Rust backend** and **React/TypeScript frontend**.
---
## Rationale
| Criterion | Tauri 2 | Electron | Flutter | Web App |
|-----------|---------|----------|---------|---------|
| Binary size | ~8 MB | ~120+ MB | ~40 MB | N/A |
| Memory footprint | ~50 MB | ~200+ MB | ~100 MB | N/A |
| OS WebView | Yes (native) | No (bundled Chromium) | No | N/A |
| Rust backend | Yes (native perf) | No (Node.js) | No (Dart) | No |
| Filesystem access | Scoped ACL | Unrestricted by default | Limited | CORS-limited |
| Offline-first | Yes | Yes | Yes | No |
| SQLCipher integration | Via rusqlite | Via better-sqlite3 | Via plugin | No |
| Existing team skills | Rust + React | Node.js + React | Dart | TypeScript |
**Tauri's advantages for this use case:**
1. **Security model**: Capability-based ACL prevents frontend from making arbitrary system calls. The frontend can only call explicitly-declared commands.
2. **Performance**: Rust backend handles CPU-intensive work (PII regex scanning, PDF generation, SQLCipher operations) without Node.js overhead.
3. **Binary size**: Uses the OS-native WebView (WebKit on macOS/Linux, WebView2 on Windows) — no bundled browser engine.
4. **Stronghold plugin**: Built-in encrypted key-value store for credential management.
5. **IPC type safety**: `generate_handler![]` macro ensures all IPC commands are registered; `invoke()` on the frontend can be fully typed via `tauriCommands.ts`.
---
## Consequences
**Positive:**
- Small distributable (<20 MB .dmg vs 150+ MB Electron .dmg)
- Rust's memory safety prevents a class of security bugs
- Tauri's CSP enforcement and capability ACL provide defense-in-depth
- Native OS dialogs, file pickers, and notifications
**Negative:**
- WebKit/WebView2 inconsistencies require cross-browser testing
- Rust compile times are longer than Node.js (mitigated by Docker CI caching)
- Tauri 2 is relatively new — smaller ecosystem than Electron
- macOS builds require a macOS runner (no cross-compilation)
**Neutral:**
- React frontend works identically to a web app — no desktop-specific UI code needed
- TypeScript IPC wrappers (`tauriCommands.ts`) decouple frontend from Tauri details


@@ -0,0 +1,73 @@
# ADR-002: SQLCipher for Encrypted Storage
**Status**: Accepted
**Date**: 2025-Q3
**Deciders**: sarman
---
## Context
All incident data (titles, descriptions, log contents, AI conversations, resolution steps, RCA documents) must be stored locally and encrypted at rest. The application cannot rely on OS-level full-disk encryption being enabled.
Requirements:
- AES-256 encryption of the full database file
- Key derivation suitable for per-installation keys (not user passwords)
- No plaintext data accessible if the `.db` file is copied off-machine
- Rust-compatible SQLite bindings
---
## Decision
Use **SQLCipher** via `rusqlite` with the `bundled-sqlcipher-vendored-openssl` feature flag.
---
## Rationale
**Alternatives considered:**
| Option | Pros | Cons |
|--------|------|------|
| **SQLCipher** (chosen) | Transparent full-DB encryption, AES-256, PBKDF2 key derivation, vendored so no system dep | Larger binary; not standard SQLite |
| Plain SQLite | Simple, well-known | No encryption — ruled out |
| SQLite + file-level encryption | Flexible | No atomicity; complex implementation |
| LevelDB / RocksDB | Fast, encrypted options exist | No SQL, harder migration |
| `sled` (Rust-native) | Modern, async-friendly | No SQL, immature for complex schemas |
**SQLCipher specifics chosen:**
```
PRAGMA cipher_page_size = 16384; -- Matches 16KB kernel page (Apple Silicon)
PRAGMA kdf_iter = 256000; -- 256k PBKDF2 iterations
PRAGMA cipher_hmac_algorithm = HMAC_SHA512;
PRAGMA cipher_kdf_algorithm = PBKDF2_HMAC_SHA512;
```
The `cipher_page_size = 16384` setting is tuned for Apple Silicon (M-series), whose kernel uses 16 KB pages; the SQLCipher default of 4096 causes page-boundary issues there.
---
## Key Management
Per ADR-005, encryption keys are auto-generated at runtime:
- **Release builds**: Random 256-bit key generated at first launch, stored in `.dbkey` (mode 0600)
- **Debug builds**: Hardcoded dev key (`dev-key-change-in-prod`)
- **Override**: `TFTSR_DB_KEY` environment variable
---
## Consequences
**Positive:**
- Full database encryption transparent to all SQL queries
- Vendored OpenSSL means no system library dependency (important for portable AppImage/DMG)
- SHA-512 HMAC provides authenticated encryption (tampering detected)
**Negative:**
- `bundled-sqlcipher-vendored-openssl` significantly increases compile time and binary size
- Cannot use standard SQLite tooling to inspect database files (must use sqlcipher CLI)
- `cipher_page_size` mismatch between debug/release would corrupt databases — mitigated by auto-migration (ADR-005)
**Migration Handling:**
If a plain SQLite database is detected in a release build (e.g., developer switched from debug), `migrate_plain_to_encrypted()` automatically migrates using `ATTACH DATABASE` + `sqlcipher_export`. A `.db.plain-backup` file is created before migration.


@@ -0,0 +1,76 @@
# ADR-003: Provider Trait Pattern for AI Backends
**Status**: Accepted
**Date**: 2025-Q3
**Deciders**: sarman
---
## Context
The application must support multiple AI providers (OpenAI, Anthropic, Google Gemini, Mistral, Ollama) with different API formats, authentication methods, and response structures. Provider selection must be runtime-configurable by the user without recompiling.
Additionally, enterprise environments may need custom AI endpoints (e.g., MSI GenAI gateway at `genai-service.commandcentral.com`) that speak OpenAI-compatible APIs with custom auth headers.
---
## Decision
Use a **Rust trait object** (`Box<dyn Provider>`) with a **factory function** (`create_provider(config: ProviderConfig)`) that dispatches to concrete implementations at runtime.
---
## Rationale
**The `Provider` trait:**
```rust
#[async_trait]
pub trait Provider: Send + Sync {
fn name(&self) -> &str;
async fn chat(&self, messages: Vec<Message>, config: &ProviderConfig) -> Result<ChatResponse>;
fn info(&self) -> ProviderInfo;
}
```
**Why trait objects over generics:**
- Provider type is not known at compile time (user configures at runtime)
- `Box<dyn Provider>` allows storing different providers in the same `AppState`
- `#[async_trait]` enables async methods on trait objects (required for `reqwest`)
**`ProviderConfig` design:**
The config struct uses `Option<String>` fields for provider-specific settings:
```rust
pub struct ProviderConfig {
pub custom_endpoint_path: Option<String>,
pub custom_auth_header: Option<String>,
pub custom_auth_prefix: Option<String>,
pub api_format: Option<String>, // "openai" | "custom_rest"
}
```
This allows a single `OpenAiProvider` implementation to handle both standard OpenAI and arbitrary OpenAI-compatible endpoints — the user configures the auth header name and prefix to match their gateway.
---
## Adding a New Provider
1. Create `src-tauri/src/ai/<provider>.rs` implementing the `Provider` trait
2. Add a match arm in `create_provider()` in `provider.rs`
3. Register the provider type string in `ProviderConfig`
4. Add UI in `src/pages/Settings/AIProviders.tsx`
No changes to command handlers or IPC layer required.
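The factory dispatch behind those steps can be sketched as follows. This is a simplified, synchronous illustration only: the real trait is async via `#[async_trait]`, and the `provider_type` field name and provider structs here are assumptions for demonstration, not the actual definitions.

```rust
// Simplified sketch of runtime provider dispatch (assumed names, sync for brevity).
struct ProviderConfig {
    provider_type: String, // set by the user at runtime, e.g. "openai"
}

trait Provider {
    fn name(&self) -> &str;
}

struct OpenAiProvider;
impl Provider for OpenAiProvider {
    fn name(&self) -> &str { "openai" }
}

struct OllamaProvider;
impl Provider for OllamaProvider {
    fn name(&self) -> &str { "ollama" }
}

/// The concrete provider type is only known at runtime (user config),
/// so the factory returns a trait object rather than a generic.
fn create_provider(config: &ProviderConfig) -> Result<Box<dyn Provider>, String> {
    match config.provider_type.as_str() {
        "openai" => Ok(Box::new(OpenAiProvider)),
        "ollama" => Ok(Box::new(OllamaProvider)),
        other => Err(format!("unknown provider type: {other}")),
    }
}
```

Adding a provider is then exactly step 2 above: one new match arm, with everything downstream of the `Box<dyn Provider>` unchanged.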
---
## Consequences
**Positive:**
- New providers require zero changes outside `ai/`
- `ProviderConfig` is stored in the database — provider can be changed without app restart
- `test_provider_connection()` command works uniformly across all providers
- `list_providers()` returns capabilities dynamically (supports streaming, tool calling, etc.)
**Negative:**
- `dyn Provider` has a small vtable dispatch overhead (negligible for HTTP-bound operations)
- Each provider implementation must handle its own error types and response parsing
- Testing requires mocking at the `reqwest` level (via `mockito`)

# ADR-004: Regex + Aho-Corasick for PII Detection
**Status**: Accepted
**Date**: 2025-Q3
**Deciders**: sarman
---
## Context
Log files submitted for AI analysis may contain sensitive data: IP addresses, emails, bearer tokens, passwords, SSNs, credit card numbers, MAC addresses, phone numbers, and API keys. This data must be detected and redacted before any content leaves the machine via an AI API call.
Requirements:
- Fast scanning of files up to 50MB
- Multiple pattern types with different regex complexity
- Non-overlapping spans (longest match wins on overlap)
- User-controlled toggle per pattern type
- Byte-offset tracking for accurate replacement
---
## Decision
Use **Rust `regex` crate** for per-pattern matching combined with **`aho-corasick`** for multi-pattern string searching. Detection runs entirely in the Rust backend on the raw log content.
---
## Rationale
**Alternatives considered:**
| Option | Pros | Cons |
|--------|------|------|
| **regex + aho-corasick** (chosen) | Fast, Rust-native, no external deps, byte-offset accurate | Regex patterns need careful tuning; false positives possible |
| ML-based NER (spaCy, Presidio) | Higher recall for contextual PII | Requires Python runtime, large model files, not offline-friendly |
| Simple string matching | Extremely fast | Too many false negatives on varied formats |
| WASM-based detection | Runs in browser | Slower; log content in JS memory before Rust sees it |
**Implementation approach:**
1. **12 regex patterns** compiled once at startup via `lazy_static!`
2. Each pattern returns `(start, end, replacement)` tuples
3. All spans from all patterns collected into a flat `Vec<PiiSpan>`
4. Spans sorted by `start` offset
5. **Overlap resolution**: iterate through sorted spans, skip any span whose start is before the current end (greedy, longest match)
6. Spans stored in DB with UUID — referenced by `approved` flag when user confirms redaction
7. Redaction applies spans in **reverse order** to preserve byte offsets
**Why aho-corasick for some patterns:**
Literal string searches (e.g., `password=`, `api_key=`, `bearer `) are faster with Aho-Corasick multi-pattern matching than running individual regexes. The regex then validates the captured value portion.
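Steps 4, 5, and 7 above (sort, greedy overlap resolution, reverse-order replacement) can be sketched in plain Rust. The `PiiSpan` shape here is a simplified stand-in for the real struct (no UUID or `approved` flag):

```rust
/// A detected span: byte offsets into the original text plus its replacement.
struct PiiSpan {
    start: usize,
    end: usize,
    replacement: String,
}

/// Sort by start offset (longer span first on ties), then greedily drop any
/// span that begins before the previously accepted span ends.
fn resolve_overlaps(mut spans: Vec<PiiSpan>) -> Vec<PiiSpan> {
    spans.sort_by(|a, b| a.start.cmp(&b.start).then(b.end.cmp(&a.end)));
    let mut out: Vec<PiiSpan> = Vec::new();
    for s in spans {
        if out.last().map_or(true, |prev| s.start >= prev.end) {
            out.push(s);
        }
    }
    out
}

/// Apply spans in reverse order so earlier byte offsets remain valid
/// as replacements change the string length.
fn redact(text: &str, spans: &[PiiSpan]) -> String {
    let mut out = text.to_string();
    for s in spans.iter().rev() {
        out.replace_range(s.start..s.end, &s.replacement);
    }
    out
}
```

The reverse-order application in `redact` is what makes step 7 work: replacing from the end of the string first means no earlier span's offsets are shifted.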
---
## Patterns
| Pattern ID | Type | Example Match |
|------------|------|---------------|
| `url_credentials` | URL with embedded credentials | `https://user:pass@host` |
| `bearer_token` | Authorization headers | `Bearer eyJhbGc...` |
| `api_key` | API key assignments | `api_key=sk-abc123...` |
| `password` | Password assignments | `password=secret123` |
| `ssn` | Social Security Numbers | `123-45-6789` |
| `credit_card` | Credit card numbers | `4111 1111 1111 1111` |
| `email` | Email addresses | `user@example.com` |
| `mac_address` | MAC addresses | `AA:BB:CC:DD:EE:FF` |
| `ipv6` | IPv6 addresses | `2001:db8::1` |
| `ipv4` | IPv4 addresses | `192.168.1.1` |
| `phone` | Phone numbers | `+1 (555) 123-4567` |
| `hostname` | FQDNs | `db-prod.internal.example.com` |
---
## Consequences
**Positive:**
- No runtime dependencies — detection works fully offline
- 50MB file scanned in <500ms on modern hardware
- Patterns independently togglable via `pii_enabled_patterns` in settings
- Byte-accurate offsets enable precise redaction without re-parsing
**Negative:**
- Regex-based detection has false positives (e.g., version strings matching IPv4 patterns)
- User must review and approve — not fully automatic (mitigated by UX design)
- Pattern maintenance required as new credential formats emerge
- No contextual understanding (a password in a comment vs an active credential look identical)
**User safeguard:**
All redactions require user approval via `PiiDiffViewer` before the redacted log is written. The original is never sent to AI.

# ADR-005: Auto-generate Encryption Keys at Runtime
**Status**: Accepted
**Date**: 2026-04
**Deciders**: sarman
---
## Context
The application uses two encryption keys:
1. **Database key** (`TFTSR_DB_KEY`): SQLCipher AES-256 key for the full database
2. **Credential key** (`TFTSR_ENCRYPTION_KEY`): AES-256-GCM key for token/API key encryption
The original design required both to be set as environment variables in release builds. This caused:
- **Critical failure on Mac**: Fresh installs would crash at startup with "file is not a database" error
- **Silent failure on save**: Saving AI providers would fail with "TFTSR_ENCRYPTION_KEY must be set in release builds"
- **Developer friction**: Switching from `cargo tauri dev` (debug, plain SQLite) to a release build would crash because the existing plain database couldn't be opened as encrypted
---
## Decision
Auto-generate cryptographically secure 256-bit keys at first launch and persist them to the app data directory with restricted file permissions.
---
## Key Storage
| Key | File | Permissions | Location |
|-----|------|-------------|----------|
| Database | `.dbkey` | `0600` (owner r/w only) | `$TFTSR_DATA_DIR/` |
| Credentials | `.enckey` | `0600` (owner r/w only) | `$TFTSR_DATA_DIR/` |
**Platform data directories:**
- macOS: `~/Library/Application Support/trcaa/`
- Linux: `~/.local/share/trcaa/`
- Windows: `%APPDATA%\trcaa\`
---
## Key Resolution Order
For both keys:
1. Check environment variable (`TFTSR_DB_KEY` / `TFTSR_ENCRYPTION_KEY`) — use if set and non-empty
2. If debug build — use hardcoded dev key (never touches filesystem)
3. If `.dbkey` / `.enckey` exists and is non-empty — load from file
4. Otherwise — generate 32 random bytes via `OsRng`, hex-encode to 64-char string, write to file with `mode 0600`
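The resolution order can be sketched as a single function. This is illustrative, not the actual implementation: the generator closure stands in for the real `OsRng`-based hex generation, and the function name is an assumption.

```rust
use std::{env, fs, path::Path};

/// Resolve a key per the ADR-005 order: env var, then debug dev key,
/// then key file, then generate-and-persist. `generate` is injected so
/// the caller supplies the CSPRNG-backed hex generator.
fn resolve_key(
    env_var: &str,
    key_file: &Path,
    debug_build: bool,
    generate: impl Fn() -> String,
) -> std::io::Result<String> {
    // 1. Environment variable wins if set and non-empty
    if let Ok(k) = env::var(env_var) {
        if !k.is_empty() {
            return Ok(k);
        }
    }
    // 2. Debug builds use the fixed dev key and never touch the filesystem
    if debug_build {
        return Ok("dev-key-change-in-prod".to_string());
    }
    // 3. Existing non-empty key file
    if let Ok(k) = fs::read_to_string(key_file) {
        let k = k.trim().to_string();
        if !k.is_empty() {
            return Ok(k);
        }
    }
    // 4. Generate, persist with mode 0600, return
    let k = generate();
    fs::write(key_file, &k)?;
    #[cfg(unix)]
    {
        use std::os::unix::fs::PermissionsExt;
        fs::set_permissions(key_file, fs::Permissions::from_mode(0o600))?;
    }
    Ok(k)
}
```

Note that step 2 short-circuits before step 3, so a debug build never reads or writes `.dbkey` even if one exists on disk.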
---
## Plain-to-Encrypted Migration
When a release build encounters an existing plain SQLite database (written by a debug build), rather than crashing:
```
1. Detect plain SQLite via 16-byte header check ("SQLite format 3\0")
2. Copy database to .db.plain-backup
3. Open plain database
4. ATTACH encrypted database at temp path with new key
5. SELECT sqlcipher_export('encrypted') -- copies all tables, indexes, triggers
6. DETACH encrypted
7. rename(temp_encrypted, original_path)
8. Open encrypted database with key
```
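The header check in step 1 is a straightforward 16-byte comparison: an unencrypted SQLite file always begins with the magic string `"SQLite format 3\0"`, whereas a SQLCipher database starts with random salt. This sketch assumes only that documented magic value, not the real function's signature:

```rust
use std::{fs::File, io::Read, path::Path};

/// Returns true if the file starts with the plain-SQLite magic header.
fn is_plain_sqlite(path: &Path) -> std::io::Result<bool> {
    let mut f = File::open(path)?;
    let mut header = [0u8; 16];
    // A file shorter than 16 bytes cannot be a SQLite database at all
    if f.read_exact(&mut header).is_err() {
        return Ok(false);
    }
    Ok(&header == b"SQLite format 3\0")
}
```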
---
## Alternatives Considered
| Option | Pros | Cons |
|--------|------|------|
| **Auto-generate keys** (chosen) | Works out-of-the-box, no user config | Key file loss = data loss (acceptable: key + DB on same machine) |
| Require env vars (original) | Explicit — users know their key | Crashes on fresh install, poor UX |
| Derive from machine ID | No file to lose | Machine ID changes break DB on hardware changes |
| OS keychain | Most secure | Complex cross-platform implementation; adds dependency |
| Prompt user for password | User controls key | Poor UX for a tool; password complexity issues |
**Why not OS keychain:**
The `tauri-plugin-stronghold` already provides a keychain-like abstraction for credentials, but integrating SQLCipher key retrieval into Stronghold would create a chicken-and-egg problem: Stronghold itself needs to be initialized before the database that stores Stronghold's key material.
---
## Consequences
**Positive:**
- Zero-configuration installation — app works on first launch
- Developers can freely switch between debug and release builds
- Environment variable override still available for automated/enterprise deployments
- Key files are protected by Unix file permissions (`0600`)
**Negative:**
- If `.dbkey` or `.enckey` are deleted, the database and all stored credentials become permanently inaccessible
- Key files are not themselves encrypted — OS-level protection depends on filesystem permissions
- Not suitable for multi-user scenarios where different users need isolated key material (single-user desktop app — acceptable)
**Mitigation for key loss:**
Document clearly that backing up `$TFTSR_DATA_DIR` (including hidden files) preserves both the key files and the database. Losing the key files still means total data loss, even if the database file itself is intact.

# ADR-006: Zustand for Frontend State Management
**Status**: Accepted
**Date**: 2025-Q3
**Deciders**: sarman
---
## Context
The React frontend manages three distinct categories of state:
1. **Ephemeral session state**: Current issue, AI chat messages, PII spans, 5-whys progress — exists for the duration of one triage session, should not survive page reload
2. **Persisted settings**: Theme, active AI provider, PII pattern toggles — should survive app restart, stored locally
3. **Cached server data**: Issue history, search results — loaded from DB on demand, invalidated on changes
---
## Decision
Use **Zustand** for all three state categories, with selective persistence via `localStorage` for settings only.
---
## Rationale
**Alternatives considered:**
| Option | Pros | Cons |
|--------|------|------|
| **Zustand** (chosen) | Minimal boilerplate, built-in persist middleware, TypeScript-first | Smaller ecosystem than Redux |
| Redux Toolkit | Battle-tested, DevTools support | Verbose boilerplate for simple state |
| React Context | No dependency | Performance issues with frequent updates (chat messages) |
| Jotai | Atomic state, minimal | Less familiar pattern |
| TanStack Query | Excellent for async server state | Overkill for Tauri IPC (not HTTP) |
**Store architecture decisions:**
**`sessionStore`** — NOT persisted:
- Chat messages accumulate quickly; persisting would bloat localStorage
- Session is per-issue; loading a different issue should reset all session state
- `reset()` method called on navigation away from triage
**`settingsStore`** — Persisted to localStorage as `"tftsr-settings"`:
- Theme, active provider, PII pattern toggles — user preference, should survive restart
- AI providers themselves are NOT persisted here — only `active_provider` string
- Actual `ProviderConfig` (with encrypted API keys) lives in the backend DB, loaded via `load_ai_providers()`
**`historyStore`** — NOT persisted (server-cache pattern):
- Always loaded fresh from DB on History page mount
- Search results replaced on each query
- No stale-data risk
---
## Persistence Details
The settings store persists to localStorage:
```typescript
persist(
(set, get) => ({ ...storeImpl }),
{
name: 'tftsr-settings',
partialize: (state) => ({
theme: state.theme,
active_provider: state.active_provider,
pii_enabled_patterns: state.pii_enabled_patterns,
// NOTE: ai_providers excluded — stored in encrypted backend DB
})
}
)
```
**Why localStorage and not a Tauri store plugin:**
- Settings are non-sensitive (theme, provider name, pattern toggles)
- `tauri-plugin-store` would add IPC overhead for every settings read
- localStorage survives across WebView reloads without async overhead
---
## Consequences
**Positive:**
- Minimal boilerplate — stores are ~50 LOC each
- `zustand/middleware/persist` handles localStorage serialization
- Subscribing to partial state prevents unnecessary re-renders
- No Provider wrapping required — stores accessed via hooks anywhere
**Negative:**
- No Redux DevTools integration (Zustand has its own devtools but less mature)
- localStorage persistence means settings are WebView-profile-scoped (fine for single-user app)
- Manual cache invalidation in `historyStore` after issue create/delete

```
pub struct AppState {
    pub db: Arc<Mutex<rusqlite::Connection>>,
    pub settings: Arc<Mutex<AppSettings>>,
    pub app_data_dir: PathBuf, // ~/.local/share/trcaa on Linux
    pub integration_webviews: Arc<Mutex<HashMap<String, String>>>,
}
```
All command handlers receive `State<'_, AppState>` as a Tauri-injected parameter.
| `commands/analysis.rs` | Log file upload, PII detection, redaction |
| `commands/docs.rs` | RCA and post-mortem generation, document export |
| `commands/system.rs` | Ollama management, hardware probe, settings, audit log |
| `commands/integrations.rs` | Confluence / ServiceNow / ADO — OAuth2, WebView auth, tool calling |
| `ai/provider.rs` | `Provider` trait + `create_provider()` factory |
| `pii/detector.rs` | Multi-pattern PII scanner with overlap resolution |
| `db/migrations.rs` | Versioned schema (14 migrations tracked in `_migrations` table) |
| `db/models.rs` | All DB types — see `IssueDetail` note below |
| `docs/rca.rs` + `docs/postmortem.rs` | Markdown template builders |
| `audit/log.rs` | `write_audit_event()` — called before every external send |
Use `detail.issue.title`, **not** `detail.title`.
```
1. Initialize tracing (RUST_LOG controls level)
2. Determine data directory (state::get_app_data_dir() or TFTSR_DATA_DIR)
3. Auto-generate or load .dbkey / .enckey (mode 0600) — see ADR-005
4. Open / create SQLCipher encrypted database
   - If plain SQLite detected (debug→release upgrade): auto-migrate + backup
5. Run DB migrations (14 schema versions)
6. Create AppState (db + settings + app_data_dir + integration_webviews)
7. Register Tauri plugins (stronghold, dialog, fs, shell, http)
8. Register all IPC command handlers via generate_handler![]
9. Start WebView with React app
```
## Architecture Documentation
Full architecture documentation with C4 diagrams, data flow diagrams, and Architecture Decision Records (ADRs) is available in [`docs/architecture/`](../architecture/README.md):
| Document | Contents |
|----------|----------|
| [Architecture Overview](../architecture/README.md) | C4 diagrams, data flows, security model |
| [ADR-001](../architecture/adrs/ADR-001-tauri-desktop-framework.md) | Why Tauri over Electron |
| [ADR-002](../architecture/adrs/ADR-002-sqlcipher-encrypted-database.md) | SQLCipher encryption choices |
| [ADR-003](../architecture/adrs/ADR-003-provider-trait-pattern.md) | AI provider trait design |
| [ADR-004](../architecture/adrs/ADR-004-pii-regex-aho-corasick.md) | PII detection implementation |
| [ADR-005](../architecture/adrs/ADR-005-auto-generate-encryption-keys.md) | Key auto-generation design |
| [ADR-006](../architecture/adrs/ADR-006-zustand-state-management.md) | Frontend state management |
## Data Flow
```