chore: remove all proprietary vendor references for public release
- Delete internal vendor API documentation and handoff docs
- Remove vendor-specific AI gateway URLs from CSP whitelist
- Replace vendor-specific log prefixes and comments with generic 'Custom REST'
- Remove vendor-specific default auth header from custom REST implementation
- Remove vendor-specific client header from HTTP requests
- Remove backward-compat vendor format identifier from is_custom_rest_format()
- Remove LEGACY_API_FORMAT constant and normalizeApiFormat() helper
- Update test to not reference legacy format identifier
- Update wiki docs to use generic enterprise gateway configuration
- Update architecture diagrams and ADR-003 to remove vendor references
- Add Buy Me A Coffee link to README
- Update .gitignore to exclude internal user guide and ticket files

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
parent: fdb4fc03b9
commit: 1de50f1c87

.gitignore (vendored) — 12 lines changed
@@ -9,3 +9,15 @@ secrets.yaml
 artifacts/
 *.png
 /screenshots/
+
+# Internal / private documents — never commit
+USER_GUIDE.md
+USER_GUIDE.docx
+~$ER_GUIDE.docx
+TICKET_USER_GUIDE.md
+BUGFIX_SUMMARY.md
+PR_DESCRIPTION.md
+GenAI API User Guide.md
+HANDOFF-MSI-GENAI.md
+TICKET_SUMMARY.md
+docs/images/user-guide/
File diff suppressed because one or more lines are too long
HANDOFF-MSI-GENAI.md
@@ -1,312 +0,0 @@
# MSI GenAI Custom Provider Integration - Handoff Document

**Date**: 2026-04-03
**Status**: In Progress - Backend schema updated, frontend and provider logic pending

---

## Context

User needs to integrate the MSI GenAI API (https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat) into the application's AI Providers system.

**Problem**: The existing "Custom" provider type assumes OpenAI-compatible APIs (expects a `/chat/completions` endpoint, OpenAI request/response format, `Authorization: Bearer` header). MSI GenAI has a completely different API contract:

| Aspect | OpenAI Format | MSI GenAI Format |
|--------|---------------|------------------|
| **Endpoint** | `/chat/completions` | `/api/v2/chat` (no suffix) |
| **Request** | `{"messages": [...], "model": "..."}` | `{"prompt": "...", "model": "...", "sessionId": "..."}` |
| **Response** | `{"choices": [{"message": {"content": "..."}}]}` | `{"msg": "...", "sessionId": "..."}` |
| **Auth Header** | `Authorization: Bearer <token>` | `x-msi-genai-api-key: <token>` |
| **History** | Client sends full message array | Server-side via `sessionId` |

---

## Work Completed

### 1. Updated `src-tauri/src/state.rs` - ProviderConfig Schema

Added optional fields to support custom API formats without breaking existing OpenAI-compatible providers:

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProviderConfig {
    pub name: String,
    #[serde(default)]
    pub provider_type: String,
    pub api_url: String,
    pub api_key: String,
    pub model: String,

    // NEW FIELDS:
    /// Optional: Custom endpoint path (e.g., "" for no path, "/v1/chat" for a custom path)
    /// If None, defaults to "/chat/completions" for OpenAI compatibility
    #[serde(skip_serializing_if = "Option::is_none")]
    pub custom_endpoint_path: Option<String>,

    /// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
    /// If None, defaults to "Authorization"
    #[serde(skip_serializing_if = "Option::is_none")]
    pub custom_auth_header: Option<String>,

    /// Optional: Custom auth value prefix (e.g., "" for no prefix, "Bearer " for OpenAI)
    /// If None, defaults to "Bearer "
    #[serde(skip_serializing_if = "Option::is_none")]
    pub custom_auth_prefix: Option<String>,

    /// Optional: API format ("openai" or "msi_genai")
    /// If None, defaults to "openai"
    #[serde(skip_serializing_if = "Option::is_none")]
    pub api_format: Option<String>,

    /// Optional: Session ID for stateful APIs like MSI GenAI
    #[serde(skip_serializing_if = "Option::is_none")]
    pub session_id: Option<String>,
}
```

**Design philosophy**: Existing providers remain unchanged (all fields default to OpenAI-compatible behavior). Only when `api_format` is set to `"msi_genai"` do the custom fields take effect.

---

## Work Remaining

### 2. Update `src-tauri/src/ai/openai.rs` - Support Custom Formats

The `OpenAiProvider::chat()` method needs to conditionally handle the MSI GenAI format:

**Changes needed**:
- Check `config.api_format` — if `Some("msi_genai")`, use MSI GenAI request/response logic
- Use `config.custom_endpoint_path.unwrap_or("/chat/completions")` for the endpoint
- Use `config.custom_auth_header.unwrap_or("Authorization")` for the header name
- Use `config.custom_auth_prefix.unwrap_or("Bearer ")` for the auth prefix

**MSI GenAI request format**:
```json
{
  "model": "VertexGemini",
  "prompt": "<last user message>",
  "system": "<optional system message>",
  "sessionId": "<uuid or null for first message>",
  "userId": "user@motorolasolutions.com"
}
```

**MSI GenAI response format** (`initialPrompt` is a boolean):
```json
{
  "status": true,
  "sessionId": "uuid",
  "msg": "AI response text",
  "initialPrompt": true
}
```

**Implementation notes**:
- For MSI GenAI, convert `Vec<Message>` to a single `prompt` (concatenate or use the last user message)
- Extract the system message from the messages array if present (role == "system")
- Store the returned `sessionId` back to `config.session_id` for subsequent requests
- Extract response content from `json["msg"]` instead of `json["choices"][0]["message"]["content"]`
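The message-flattening described in the notes above can be sketched as follows. This is a minimal, dependency-free illustration: the `Message` struct here is a simplified stand-in for the one in `src-tauri/src/ai/mod.rs`, and `to_msi_genai_prompt` is a hypothetical helper name, not code from this repo.

```rust
// Simplified stand-in for the app's Message type.
#[derive(Debug, Clone)]
struct Message {
    role: String,
    content: String,
}

/// Hypothetical helper: collapse an OpenAI-style message list into the
/// (system, prompt) pair that the MSI GenAI request body expects.
/// The first system message becomes `system`; the last user message
/// becomes `prompt`.
fn to_msi_genai_prompt(messages: &[Message]) -> (Option<String>, String) {
    let system = messages
        .iter()
        .find(|m| m.role == "system")
        .map(|m| m.content.clone());
    let prompt = messages
        .iter()
        .rev()
        .find(|m| m.role == "user")
        .map(|m| m.content.clone())
        .unwrap_or_default();
    (system, prompt)
}

fn main() {
    let msgs = vec![
        Message { role: "system".into(), content: "You are helpful.".into() },
        Message { role: "user".into(), content: "Hi".into() },
        Message { role: "assistant".into(), content: "Hello!".into() },
        Message { role: "user".into(), content: "Upgrade steps?".into() },
    ];
    let (system, prompt) = to_msi_genai_prompt(&msgs);
    assert_eq!(system.as_deref(), Some("You are helpful."));
    assert_eq!(prompt, "Upgrade steps?");
}
```

Concatenating the full history into one prompt is the other option mentioned above; taking only the last user message is the simpler choice when the server already keeps history via `sessionId`.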
### 3. Update `src/lib/tauriCommands.ts` - TypeScript Types

Add the new optional fields to the `ProviderConfig` interface:

```typescript
export interface ProviderConfig {
  provider_type?: string;
  max_tokens?: number;
  temperature?: number;
  name: string;
  api_url: string;
  api_key: string;
  model: string;

  // NEW FIELDS:
  custom_endpoint_path?: string;
  custom_auth_header?: string;
  custom_auth_prefix?: string;
  api_format?: string;
  session_id?: string;
}
```

### 4. Update `src/pages/Settings/AIProviders.tsx` - UI Fields

**When `provider_type === "custom"`, show additional form fields**:

```tsx
{form.provider_type === "custom" && (
  <>
    <div className="space-y-2">
      <Label>API Format</Label>
      <Select
        value={form.api_format ?? "openai"}
        onValueChange={(v) => {
          const format = v;
          const defaults = format === "msi_genai"
            ? {
                custom_endpoint_path: "",
                custom_auth_header: "x-msi-genai-api-key",
                custom_auth_prefix: "",
              }
            : {
                custom_endpoint_path: "/chat/completions",
                custom_auth_header: "Authorization",
                custom_auth_prefix: "Bearer ",
              };
          setForm({ ...form, api_format: format, ...defaults });
        }}
      >
        <SelectTrigger>
          <SelectValue />
        </SelectTrigger>
        <SelectContent>
          <SelectItem value="openai">OpenAI Compatible</SelectItem>
          <SelectItem value="msi_genai">MSI GenAI</SelectItem>
        </SelectContent>
      </Select>
    </div>

    <div className="grid grid-cols-2 gap-4">
      <div className="space-y-2">
        <Label>Endpoint Path</Label>
        <Input
          value={form.custom_endpoint_path ?? ""}
          onChange={(e) => setForm({ ...form, custom_endpoint_path: e.target.value })}
          placeholder="/chat/completions"
        />
      </div>
      <div className="space-y-2">
        <Label>Auth Header Name</Label>
        <Input
          value={form.custom_auth_header ?? ""}
          onChange={(e) => setForm({ ...form, custom_auth_header: e.target.value })}
          placeholder="Authorization"
        />
      </div>
    </div>

    <div className="space-y-2">
      <Label>Auth Prefix</Label>
      <Input
        value={form.custom_auth_prefix ?? ""}
        onChange={(e) => setForm({ ...form, custom_auth_prefix: e.target.value })}
        placeholder="Bearer "
      />
      <p className="text-xs text-muted-foreground">
        Prefix added before the API key (e.g., "Bearer " for OpenAI, "" for MSI GenAI)
      </p>
    </div>
  </>
)}
```

**Update the `emptyProvider` initial state**:
```typescript
const emptyProvider: ProviderConfig = {
  name: "",
  provider_type: "openai",
  api_url: "",
  api_key: "",
  model: "",
  custom_endpoint_path: undefined,
  custom_auth_header: undefined,
  custom_auth_prefix: undefined,
  api_format: undefined,
  session_id: undefined,
};
```

---

## Testing Configuration

**For MSI GenAI**:
- **Type**: Custom
- **API Format**: MSI GenAI
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat`
- **Model**: `VertexGemini` (or `Claude-Sonnet-4`, `ChatGPT4o`)
- **API Key**: (user's MSI GenAI API key from the portal)
- **Endpoint Path**: empty (the URL already includes `/api/v2/chat`)
- **Auth Header**: `x-msi-genai-api-key`
- **Auth Prefix**: empty (no "Bearer " prefix)
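Under these settings, the saved provider entry would serialize to something like the following. The values are illustrative; the field names follow the `ProviderConfig` schema above, and `skip_serializing_if` drops the unset `session_id`:

```json
{
  "name": "MSI GenAI (stage)",
  "provider_type": "custom",
  "api_url": "https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat",
  "api_key": "<your MSI GenAI API key>",
  "model": "VertexGemini",
  "custom_endpoint_path": "",
  "custom_auth_header": "x-msi-genai-api-key",
  "custom_auth_prefix": "",
  "api_format": "msi_genai"
}
```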
**Test command flow**:
1. Create the provider with the above settings
2. Test the connection (should receive an AI response)
3. Verify `sessionId` is returned and stored
4. Send a second message (should reuse `sessionId` for conversation history)

---

## Known Issues from User's Original Error

User initially tried:
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat`
- **Type**: Custom (no format specified)

**Result**: `Cannot POST /api/v2/chat/chat/completions` (404)

**Root cause**: The OpenAI provider appends `/chat/completions` to the base URL. With the new `custom_endpoint_path` field, this suffix is now configurable.
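The fix can be sketched as a small URL-resolution helper. `resolve_endpoint` is a hypothetical name for illustration, not the actual function in `openai.rs`:

```rust
/// Hypothetical sketch: resolve the request URL from the base API URL plus
/// the optional `custom_endpoint_path`, defaulting to the OpenAI suffix.
/// An empty custom path ("") leaves the base URL untouched.
fn resolve_endpoint(api_url: &str, custom_endpoint_path: Option<&str>) -> String {
    let path = custom_endpoint_path.unwrap_or("/chat/completions");
    format!("{}{}", api_url.trim_end_matches('/'), path)
}

fn main() {
    let base = "https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat";
    // Old behavior: the suffix is appended blindly, producing the 404 above.
    assert_eq!(
        resolve_endpoint(base, None),
        format!("{}/chat/completions", base)
    );
    // New behavior: an empty custom path keeps the URL as configured.
    assert_eq!(resolve_endpoint(base, Some("")), base);
}
```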
---

## Integration with Existing Session Management

MSI GenAI uses server-side session management. The current triage flow sends the full message history on every request (OpenAI style). For MSI GenAI:

- **First message**: Send `sessionId: null` or omit the field
- **Store response**: Save `response.sessionId` to `config.session_id`
- **Subsequent messages**: Include `sessionId` in requests (the server maintains history)

Consider storing `session_id` per conversation in the database (linked to `ai_conversations.id`) rather than globally in `ProviderConfig`.
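A minimal sketch of that per-conversation alternative, using an in-memory map keyed by `ai_conversations.id`. `SessionStore` is a hypothetical type for illustration (the real implementation would persist to the database), not code from this repo:

```rust
use std::collections::HashMap;

/// Hypothetical sketch: track MSI GenAI session IDs per conversation
/// instead of a single global `ProviderConfig.session_id`.
#[derive(Default)]
struct SessionStore {
    by_conversation: HashMap<i64, String>, // ai_conversations.id -> sessionId
}

impl SessionStore {
    /// Session to send with the next request (None on the first message,
    /// so the `sessionId` field is omitted).
    fn session_for(&self, conversation_id: i64) -> Option<&str> {
        self.by_conversation.get(&conversation_id).map(String::as_str)
    }

    /// Record the `sessionId` the API returned for this conversation.
    fn store(&mut self, conversation_id: i64, session_id: String) {
        self.by_conversation.insert(conversation_id, session_id);
    }
}

fn main() {
    let mut store = SessionStore::default();
    assert!(store.session_for(1).is_none()); // first message: omit sessionId
    store.store(1, "uuid-123".into());
    assert_eq!(store.session_for(1), Some("uuid-123")); // reused afterwards
    assert!(store.session_for(2).is_none()); // other conversations unaffected
}
```

Keyed this way, two open conversations against the same provider cannot clobber each other's history, which the global `config.session_id` field would allow.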
---

## Commit Strategy

**Current git state**:
- Modified by the other session: `src-tauri/src/integrations/*.rs` (ADO/Confluence/ServiceNow work)
- Modified by me: `src-tauri/src/state.rs` (MSI GenAI schema)
- Untracked: `GenAI API User Guide.md`

**Recommended approach**:
1. **Other session commits first**: Commit the integration changes to main
2. **Then complete the MSI GenAI work**: Finish items 2-4 above, test, and commit separately

**Alternative**: Create a feature branch `feature/msi-genai-custom-provider`, cherry-pick only the MSI GenAI changes, complete the work there, and merge when ready.

---

## Reference: MSI GenAI API Spec

**Documentation**: `GenAI API User Guide.md` (in the project root)

**Key endpoints**:
- `POST /api/v2/chat` - Send a prompt, get a response
- `POST /api/v2/upload/<SESSION-ID>` - Upload files (requires a session)
- `GET /api/v2/getSessionMessages/<SESSION-ID>` - Retrieve history
- `DELETE /api/v2/entry/<MSG-ID>` - Delete a message

**Available models** (from the guide):
- `Claude-Sonnet-4` (Public)
- `ChatGPT4o` (Public)
- `VertexGemini` (Private) - Gemini 2.0 Flash
- `ChatGPT-5_2-Chat` (Public)
- Many others (see guide section 4.1)

**Rate limits**: $50/user/month (enforced server-side)

---

## Questions for User

1. Should `session_id` be stored globally in `ProviderConfig` or per-conversation in the DB?
2. Do we need to support file uploads via `/api/v2/upload/<SESSION-ID>`?
3. Should we expose model config options (temperature, max_tokens) for MSI GenAI?

---

## Contact

This handoff doc was generated for the other Claude Code session working on the integration files. Once that work is committed, this MSI GenAI work can be completed as a separate commit or feature branch.
README.md — 11 lines changed

@@ -130,6 +130,7 @@ Launch the app and go to **Settings → AI Providers** to add a provider:
 | Ollama (local) | `http://localhost:11434` | No key needed — fully offline |
 | Azure OpenAI | `https://<resource>.openai.azure.com/openai/deployments/<deployment>` | Requires API key |
 | **AWS Bedrock (via LiteLLM)** | `http://localhost:8000/v1` | See [LiteLLM + AWS Bedrock](#litellm--aws-bedrock-setup) below |
+| **Custom REST Gateway** | Your gateway URL | See [Custom REST format](docs/wiki/AI-Providers.md) |

 For offline use, install [Ollama](https://ollama.com) and pull a model:
 ```bash
@@ -325,6 +326,14 @@ Override with the `TFTSR_DATA_DIR` environment variable.
 ---

+## Support
+
+If this tool has been useful to you, consider buying me a coffee!
+
+[](https://buymeacoffee.com/tftsr)
+
+---

 ## License
-Private — internal tooling. All rights reserved.
+MIT
TICKET_SUMMARY.md
@@ -1,720 +0,0 @@
# Ticket Summary - Integration Search + AI Tool-Calling Implementation

## Description

This ticket implements Confluence, ServiceNow, and Azure DevOps as primary data sources for AI queries. When users ask questions in the AI chat, the system now searches these internal documentation sources first and injects the results as context before sending the query to the AI provider. This ensures the AI prioritizes internal company documentation over general knowledge.

**User Requirement:** "using confluance as the initial data source was a key requirement. The same for ServiceNow and ADO"

**Example Use Case:** When asking "How do I upgrade Vesta NXT to 1.0.12", the AI should return the Confluence documentation link or content from internal wiki pages, rather than generic upgrade instructions.

### AI Tool-Calling Implementation

This ticket also implements AI function calling (tool calling) to allow the AI to automatically execute actions like adding comments to Azure DevOps tickets. When the AI determines it should perform an action (rather than just respond with text), it can call defined tools/functions; the system executes them and returns the results to the AI for further processing.

**User Requirement:** "using the AI intagration, I wanted to beable to ask it to put a coment in a ADO ticket and have it pull the data from the integration search and then post a coment in the ticket"

**Example Use Case:** When asking "Add a comment to ADO ticket 758421 with the test results", the AI should automatically call the `add_ado_comment` tool with the appropriate parameters, execute the action, and confirm completion.

## Acceptance Criteria

- [x] Confluence search integration retrieves wiki pages matching user queries
- [x] ServiceNow search integration retrieves knowledge base articles and related incidents
- [x] Azure DevOps search integration retrieves wiki pages and work items
- [x] Integration searches execute in parallel for performance
- [x] Search results are injected as system context before AI queries
- [x] AI responses include source citations with URLs from internal documentation
- [x] System uses persistent browser cookies from authenticated sessions
- [x] Graceful fallback when integration sources are unavailable
- [x] All searches complete successfully without compilation errors
- [x] AI tool-calling architecture implemented with Provider trait support
- [x] Tool definitions created for available actions (add_ado_comment)
- [x] Tool execution loop implemented in chat_message command
- [x] OpenAI-compatible providers support tool-calling
- [x] MSI GenAI custom REST provider supports tool-calling
- [ ] Tool-calling tested with MSI GenAI provider (pending user testing)
- [ ] AI successfully executes add_ado_comment when requested

## Work Implemented

### 1. Confluence Search Module
**Files Created:**
- `src-tauri/src/integrations/confluence_search.rs` (173 lines)

**Implementation:**
```rust
pub async fn search_confluence(
    base_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```

**Features:**
- Uses the Confluence CQL (Confluence Query Language) search API
- Searches text content across all wiki pages
- Fetches full page content via `/rest/api/content/{id}?expand=body.storage`
- Strips HTML tags from content for clean AI context
- Returns the top 3 most relevant results
- Truncates content to 3000 characters for the AI context window
- Includes title, URL, excerpt, and full content in results
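The tag-stripping and truncation steps can be sketched like this. These are hypothetical helpers under stated assumptions: the real module may differ, e.g. in how it handles HTML entities or nested markup:

```rust
/// Hypothetical sketch: drop everything between '<' and '>' to get plain
/// text from Confluence storage-format HTML. (Does not decode entities.)
fn strip_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut in_tag = false;
    for c in input.chars() {
        match c {
            '<' => in_tag = true,
            '>' => in_tag = false,
            c if !in_tag => out.push(c),
            _ => {}
        }
    }
    out
}

/// Cap the text at a character budget (e.g. 3000) for the AI context
/// window, counting chars rather than bytes to stay on UTF-8 boundaries.
fn truncate_chars(input: &str, max_chars: usize) -> String {
    input.chars().take(max_chars).collect()
}

fn main() {
    let page = "<h1>Upgrade Guide</h1><p>Step 1: back up the database.</p>";
    let clean = strip_html(page);
    assert_eq!(clean, "Upgrade GuideStep 1: back up the database.");
    assert_eq!(truncate_chars(&clean, 13), "Upgrade Guide");
}
```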
### 2. ServiceNow Search Module
**Files Created:**
- `src-tauri/src/integrations/servicenow_search.rs` (181 lines)

**Implementation:**
```rust
pub async fn search_servicenow(
    instance_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>

pub async fn search_incidents(
    instance_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```

**Features:**
- Searches Knowledge Base articles via `/api/now/table/kb_knowledge`
- Searches incidents via `/api/now/table/incident`
- Uses the ServiceNow query language with `LIKE` operators
- Returns article text and incident descriptions/resolutions
- Includes incident numbers and states in results
- Top 3 knowledge base articles + top 3 incidents

### 3. Azure DevOps Search Module
**Files Created:**
- `src-tauri/src/integrations/azuredevops_search.rs` (274 lines)

**Implementation:**
```rust
pub async fn search_wiki(
    org_url: &str,
    project: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>

pub async fn search_work_items(
    org_url: &str,
    project: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```

**Features:**
- Uses the Azure DevOps Search API for wiki search
- Uses WIQL (Work Item Query Language) for work item search
- Fetches full wiki page content via `/api/wiki/wikis/{id}/pages`
- Retrieves work item details including descriptions and states
- Project-scoped searches for better relevance
- Returns the top 3 wiki pages + top 3 work items

### 4. AI Command Integration
**Files Modified:**
- `src-tauri/src/commands/ai.rs:377-511` (added the `search_integration_sources` function)

**Implementation:**
```rust
async fn search_integration_sources(
    query: &str,
    app_handle: &tauri::AppHandle,
    state: &State<'_, AppState>,
) -> String
```

**Features:**
- Queries the database for all configured integrations
- Retrieves persistent browser cookies for each integration
- Spawns parallel tokio tasks for each integration search
- Aggregates results from all sources
- Formats results as AI context with source metadata
- Returns a formatted context string for injection into AI prompts
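The fan-out/aggregate shape described above can be sketched without the Tauri context. This dependency-free version uses std threads in place of the tokio tasks the real code spawns, and all names are illustrative:

```rust
use std::thread;

/// Hypothetical sketch of search_integration_sources' fan-out: each entry
/// pairs a source name with a search closure; the searches run in
/// parallel and the results are aggregated into one context string.
fn search_all(sources: Vec<(&'static str, fn() -> Vec<String>)>) -> String {
    // Spawn one thread per source (the real code spawns tokio tasks).
    let handles: Vec<_> = sources
        .into_iter()
        .map(|(name, search)| (name, thread::spawn(search)))
        .collect();

    // Join in order so the aggregated context is deterministic.
    let mut context = String::new();
    for (name, handle) in handles {
        for result in handle.join().unwrap_or_default() {
            context.push_str(&format!("[{}] {}\n", name, result));
        }
    }
    context
}

// Stand-ins for the per-integration search functions.
fn confluence() -> Vec<String> { vec!["Upgrade guide (wiki)".into()] }
fn servicenow() -> Vec<String> { vec!["KB0012345".into()] }

fn main() {
    let ctx = search_all(vec![("Confluence", confluence), ("ServiceNow", servicenow)]);
    assert!(ctx.contains("[Confluence] Upgrade guide (wiki)"));
    assert!(ctx.contains("[ServiceNow] KB0012345"));
}
```

`unwrap_or_default()` on the join doubles as the "graceful fallback" from the acceptance criteria: a source that panics simply contributes no results.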
**Context Injection:**
```rust
if !integration_context.is_empty() {
    let context_message = Message {
        role: "system".into(),
        content: format!(
            "INTERNAL DOCUMENTATION SOURCES:\n\n{}\n\n\
            Instructions: The above content is from internal company \
            documentation systems (Confluence, ServiceNow, Azure DevOps). \
            You MUST prioritize this information when answering. Include \
            source citations with URLs in your response. Only use general \
            knowledge if the internal documentation doesn't cover the question.",
            integration_context
        ),
    };
    messages.push(context_message);
}
```

### 5. AI Tool-Calling Architecture
**Files Created/Modified:**
- `src-tauri/src/ai/tools.rs` (43 lines) - NEW FILE
- `src-tauri/src/ai/mod.rs:34-68` (added tool-calling data structures)
- `src-tauri/src/ai/provider.rs:16` (added a tools parameter to the Provider trait)
- `src-tauri/src/ai/openai.rs:89-113, 137-157, 257-376` (tool-calling for OpenAI and MSI GenAI)
- `src-tauri/src/commands/ai.rs:60-98, 126-167` (tool execution and chat loop)
- `src-tauri/src/commands/integrations.rs:85-121` (add_ado_comment command)

**Implementation:**

**Tool Definitions (`src-tauri/src/ai/tools.rs`):**
```rust
pub fn get_available_tools() -> Vec<Tool> {
    vec![get_add_ado_comment_tool()]
}

fn get_add_ado_comment_tool() -> Tool {
    Tool {
        name: "add_ado_comment".to_string(),
        description: "Add a comment to an Azure DevOps work item".to_string(),
        parameters: ToolParameters {
            param_type: "object".to_string(),
            // JSON Schema for the tool's arguments:
            // "work_item_id" (integer) and "comment_text" (string).
            // Sketched here with serde_json's json! macro.
            properties: json!({
                "work_item_id": { "type": "integer" },
                "comment_text": { "type": "string" }
            }),
            required: vec!["work_item_id".to_string(), "comment_text".to_string()],
        },
    }
}
```
**Data Structures (`src-tauri/src/ai/mod.rs`):**
```rust
pub struct ToolCall {
    pub id: String,
    pub name: String,
    pub arguments: String, // JSON string
}

pub struct Message {
    pub role: String,
    pub content: String,
    pub tool_call_id: Option<String>,
    pub tool_calls: Option<Vec<ToolCall>>,
}

pub struct ChatResponse {
    pub content: String,
    pub model: String,
    pub usage: Option<TokenUsage>,
    pub tool_calls: Option<Vec<ToolCall>>,
}
```

**OpenAI Provider (`src-tauri/src/ai/openai.rs`):**
- Sends tools in the OpenAI format: `{"type": "function", "function": {...}}`
- Parses the `tool_calls` array from the response
- Sets `tool_choice: "auto"` to enable automatic tool selection
- Works with OpenAI, Azure OpenAI, and compatible APIs

**MSI GenAI Provider (`src-tauri/src/ai/openai.rs::chat_custom_rest`):**
- Sends tools in the OpenAI-compatible format (MSI GenAI standard)
- Adds `tools` and `tool_choice` fields to the request body
- Parses multiple response formats:
  - OpenAI format: `tool_calls[].function.name/arguments`
  - Simpler format: `tool_calls[].name/arguments`
  - Alternative field names: `toolCalls`, `function_calls`
- Enhanced logging for debugging tool-call responses
- Generates tool-call IDs if not provided by the API
**Tool Executor (`src-tauri/src/commands/ai.rs`):**
```rust
async fn execute_tool_call(
    tool_call: &crate::ai::ToolCall,
    app_handle: &tauri::AppHandle,
    app_state: &State<'_, AppState>,
) -> Result<String, String> {
    match tool_call.name.as_str() {
        "add_ado_comment" => {
            let args: serde_json::Value = serde_json::from_str(&tool_call.arguments)
                .map_err(|e| format!("Invalid tool arguments: {}", e))?;
            let work_item_id = args
                .get("work_item_id")
                .and_then(|v| v.as_i64())
                .ok_or("Missing or invalid work_item_id")?;
            let comment_text = args
                .get("comment_text")
                .and_then(|v| v.as_str())
                .ok_or("Missing or invalid comment_text")?;

            crate::commands::integrations::add_ado_comment(
                work_item_id,
                comment_text.to_string(),
                app_handle.clone(),
                app_state.clone(),
            ).await
        }
        _ => Err(format!("Unknown tool: {}", tool_call.name)),
    }
}
```

**Chat Loop with Tool-Calling (`src-tauri/src/commands/ai.rs::chat_message`):**
```rust
let tools = Some(crate::ai::tools::get_available_tools());
let max_iterations = 10;
let mut iteration = 0;

loop {
    iteration += 1;
    if iteration > max_iterations {
        return Err("Tool-calling loop exceeded maximum iterations".to_string());
    }

    let response = provider.chat(messages.clone(), &provider_config, tools.clone()).await?;

    // Check if the AI wants to call any tools
    if let Some(tool_calls) = &response.tool_calls {
        for tool_call in tool_calls {
            // Execute the tool
            let tool_result = execute_tool_call(tool_call, &app_handle, &state).await;
            let result_content = match tool_result {
                Ok(result) => result,
                Err(e) => format!("Error executing tool: {}", e),
            };

            // Add the tool result to the conversation
            messages.push(Message {
                role: "tool".into(),
                content: result_content,
                tool_call_id: Some(tool_call.id.clone()),
                tool_calls: None,
            });
        }
        continue; // Loop back to get the AI's next response
    }

    // No more tool calls - return the final response
    final_response = response;
    break;
}
```
**Features:**
- Iterative tool-calling loop (up to 10 iterations)
- AI can call multiple tools in sequence
- Tool results are injected back into the conversation
- Error handling for invalid tool calls
- Support for both OpenAI and MSI GenAI providers
- Extensible architecture for adding new tools

**Provider Compatibility:**
All AI providers were updated to accept the tools parameter:
- `src-tauri/src/ai/anthropic.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/gemini.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/mistral.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/ollama.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/openai.rs` - **Fully implemented** for OpenAI and MSI GenAI

Note: The other providers are prepared for future tool-calling support but currently ignore the tools parameter. Only OpenAI-compatible providers and MSI GenAI have an active tool-calling implementation.
### 6. Module Integration
**Files Modified:**
- `src-tauri/src/integrations/mod.rs:1-10` (added search module exports)
- `src-tauri/src/ai/mod.rs:10` (added tools export)

**Changes:**
```rust
// integrations/mod.rs
pub mod confluence_search;
pub mod servicenow_search;
pub mod azuredevops_search;

// ai/mod.rs
pub use tools::*;
```

### 7. Test Fixes
**Files Modified:**
- `src-tauri/src/integrations/confluence_search.rs:178-185` (fixed test assertions)
- `src-tauri/src/integrations/azuredevops_search.rs:1` (removed unused imports)
- `src-tauri/src/integrations/servicenow_search.rs:1` (removed unused imports)
## Architecture
|
|
||||||
|
|
||||||
### Search Flow

```
User asks question in AI chat
        ↓
chat_message() command called
        ↓
search_integration_sources() executed
        ↓
Query database for integration configs
        ↓
Get fresh cookies from persistent browsers
        ↓
Spawn parallel search tasks:
  - Confluence CQL search
  - ServiceNow KB + incident search
  - Azure DevOps wiki + work item search
        ↓
Wait for all tasks to complete
        ↓
Format results with source citations
        ↓
Inject as system message in AI context
        ↓
Send to AI provider with context
        ↓
AI responds with source-aware answer
```
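The "format results with source citations" step above can be sketched as a small pure function. The `SearchResult` fields here are assumptions for illustration, not the actual struct in the codebase:

```rust
// Hypothetical result shape; the real struct lives in the search modules.
struct SearchResult {
    source: String,  // "confluence" | "servicenow" | "azuredevops"
    title: String,
    url: String,
    excerpt: String,
}

// Render hits as a citation list that is injected as a system message,
// so the AI can attribute each claim to a source URL.
fn format_context(results: &[SearchResult]) -> String {
    let mut out = String::from("Relevant internal sources:\n");
    for r in results {
        out.push_str(&format!(
            "- [{}] {} ({})\n  {}\n",
            r.source, r.title, r.url, r.excerpt
        ));
    }
    out
}
```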
### Tool-Calling Flow

```
User asks AI to perform action (e.g., "Add comment to ticket 758421")
        ↓
chat_message() command called
        ↓
Get available tools (add_ado_comment)
        ↓
Send message + tools to AI provider
        ↓
AI decides to call tool → returns ToolCall in response
        ↓
execute_tool_call() dispatches to appropriate handler
        ↓
add_ado_comment() retrieves ADO config from DB
        ↓
Gets fresh cookies from persistent ADO browser
        ↓
Calls webview_fetch to POST comment via ADO API
        ↓
Tool result returned as Message with role="tool"
        ↓
Send updated conversation back to AI
        ↓
AI processes result and responds to user
        ↓
User sees confirmation: "I've successfully added the comment"
```
**Multi-Tool Support:**

- AI can call multiple tools in sequence
- Each tool result is added to conversation history
- Loop continues until AI provides final text response
- Maximum 10 iterations to prevent infinite loops

**Error Handling:**

- Invalid tool calls return error message to AI
- AI can retry with corrected parameters
- Missing arguments caught and reported
- Unknown tool names return error
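The bounded loop described above can be sketched as follows. This is a simplified sketch under stated assumptions: the enum, function names, and the `next` callback stand in for the real provider round-trip, and the result string is illustrative:

```rust
// Cap on tool-call round-trips, as described above.
const MAX_TOOL_ITERATIONS: usize = 10;

// Hypothetical response shape: either final text or a tool request.
enum AiResponse {
    Text(String),
    ToolCall { name: String, args: String },
}

// Keep executing tool calls and feeding results back to the AI until it
// returns plain text; give up after MAX_TOOL_ITERATIONS to avoid loops.
fn run_tool_loop(
    mut next: impl FnMut(Option<String>) -> AiResponse,
) -> Result<String, String> {
    let mut tool_result: Option<String> = None;
    for _ in 0..MAX_TOOL_ITERATIONS {
        match next(tool_result.take()) {
            AiResponse::Text(t) => return Ok(t),
            AiResponse::ToolCall { name, args } => {
                // execute_tool_call() would dispatch here; errors are also
                // returned as a result so the AI can retry with fixed args.
                tool_result = Some(format!("executed {name} with {args}"));
            }
        }
    }
    Err("max tool iterations reached".to_string())
}
```

The key property is that every branch either terminates or consumes one of the 10 iterations, so a misbehaving model cannot spin forever.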
### Database Query

Integration configurations are queried from the `integration_config` table:

```sql
SELECT service, base_url, username, project_name, space_key
FROM integration_config
```

This provides:

- `service`: "confluence", "servicenow", or "azuredevops"
- `base_url`: Integration instance URL
- `project_name`: For Azure DevOps project scoping
- `space_key`: For future Confluence space scoping
### Cookie Management

Persistent browser windows maintain authenticated sessions. The `get_fresh_cookies_from_webview()` function retrieves current cookies from the browser window, ensuring authentication remains valid across sessions.
### Parallel Execution

All integration searches execute in parallel using `tokio::spawn()`:

```rust
for config in configs {
    let cookies_result = get_fresh_cookies_from_webview(&config.service, ...).await;
    if let Ok(Some(cookies)) = cookies_result {
        match config.service.as_str() {
            "confluence" => {
                search_tasks.push(tokio::spawn(async move {
                    confluence_search::search_confluence(...).await
                        .unwrap_or_default()
                }));
            }
            // ... other integrations
        }
    }
}

// Wait for all searches
for task in search_tasks {
    if let Ok(results) = task.await {
        all_results.extend(results);
    }
}
```
### Error Handling

- Database lock failures return empty context (non-blocking)
- SQL query errors return empty context (non-blocking)
- Missing cookies skip that integration (non-blocking)
- Failed search requests return empty results (non-blocking)
- All errors are logged via `tracing::warn!`
- AI query proceeds with whatever context is available
## Testing Needed

### Manual Testing

1. **Confluence Integration**
   - [ ] Configure Confluence integration with valid base URL
   - [ ] Open persistent browser and log into Confluence
   - [ ] Create a test issue and ask: "How do I upgrade Vesta NXT to 1.0.12"
   - [ ] Verify AI response includes Confluence wiki content
   - [ ] Verify response includes source URL
   - [ ] Check logs for "Found X integration sources for AI context"

2. **ServiceNow Integration**
   - [ ] Configure ServiceNow integration with valid instance URL
   - [ ] Open persistent browser and log into ServiceNow
   - [ ] Ask question related to known KB article
   - [ ] Verify AI response includes ServiceNow KB content
   - [ ] Ask about known incident patterns
   - [ ] Verify AI response includes incident information

3. **Azure DevOps Integration**
   - [ ] Configure Azure DevOps integration with org URL and project
   - [ ] Open persistent browser and log into Azure DevOps
   - [ ] Ask question about documented features in ADO wiki
   - [ ] Verify AI response includes ADO wiki content
   - [ ] Ask about known work items
   - [ ] Verify AI response includes work item details

4. **Parallel Search Performance**
   - [ ] Configure all three integrations
   - [ ] Authenticate all three browsers
   - [ ] Ask a question that matches content in all sources
   - [ ] Verify results from multiple sources appear
   - [ ] Check logs to confirm parallel execution
   - [ ] Measure response time (should be <5s for all searches)

5. **Graceful Degradation**
   - [ ] Test with only Confluence configured
   - [ ] Verify AI still works with single source
   - [ ] Test with no integrations configured
   - [ ] Verify AI still works with general knowledge
   - [ ] Test with integration browser closed
   - [ ] Verify AI continues with available sources

6. **AI Tool-Calling with MSI GenAI**
   - [ ] Configure MSI GenAI as active AI provider
   - [ ] Configure Azure DevOps integration and authenticate
   - [ ] Create test issue and start triage conversation
   - [ ] Ask: "Add a comment to ADO ticket 758421 saying 'This is a test'"
   - [ ] Verify AI calls add_ado_comment tool (check logs for "MSI GenAI: Parsed tool call")
   - [ ] Verify comment appears in ADO ticket 758421
   - [ ] Verify AI confirms action was completed
   - [ ] Test with invalid ticket number (e.g., 99999999)
   - [ ] Verify AI reports error gracefully

7. **AI Tool-Calling with OpenAI**
   - [ ] Configure OpenAI or Azure OpenAI as active provider
   - [ ] Repeat tool-calling tests from section 6
   - [ ] Verify tool-calling works with OpenAI-compatible providers
   - [ ] Test multi-tool scenario: "Add comment to 758421 and then another to 758422"
   - [ ] Verify AI calls tool multiple times in sequence

8. **Tool-Calling Error Handling**
   - [ ] Test with ADO browser closed (no cookies available)
   - [ ] Verify AI reports authentication error
   - [ ] Test with invalid work item ID format (non-integer)
   - [ ] Verify error caught in tool executor
   - [ ] Test with missing ADO configuration
   - [ ] Verify graceful error message to user
### Automated Testing

```bash
# Type checking
npx tsc --noEmit

# Rust compilation check
cargo check --manifest-path src-tauri/Cargo.toml

# Run all tests
cargo test --manifest-path src-tauri/Cargo.toml

# Build debug version
cargo tauri build --debug

# Run linter
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings
```
|
|
||||||
### Test Results
|
|
||||||
|
|
||||||
All tests passing:
|
|
||||||
```
|
|
||||||
test result: ok. 130 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
|
|
||||||
```
|
|
||||||
|
|
||||||
### Edge Cases to Test

- [ ] Query with no matching content in any source
- [ ] Query matching content in all three sources (verify aggregation)
- [ ] Very long query strings (>1000 characters)
- [ ] Special characters in queries (quotes, brackets, etc.)
- [ ] Integration returns >3 results (verify truncation)
- [ ] Integration returns very large content (verify 3000 char limit)
- [ ] Multiple persistent browsers for same integration
- [ ] Cookie expiration during search
- [ ] Network timeout during search
- [ ] Integration API version changes
- [ ] HTML content with complex nested tags
- [ ] Unicode content in search results
- [ ] AI calling same tool multiple times in one response
- [ ] Tool returning very large result (>10k characters)
- [ ] Tool execution timeout (slow API response)
- [ ] AI calling non-existent tool name
- [ ] Tool call with malformed JSON arguments
- [ ] Reaching max iteration limit (10 tool calls in sequence)
## Performance Considerations

### Content Truncation

- Wiki pages truncated to 3000 characters
- Knowledge base articles truncated to 3000 characters
- Excerpts limited to 200-300 characters
- Top 3 results per source type

These limits ensure:

- AI context window remains reasonable (~10k chars max)
- Response times stay under 5 seconds
- Costs remain manageable for AI providers
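A character-based truncation like the one described above needs care in Rust, since byte-index slicing can panic inside a multi-byte UTF-8 character. A minimal sketch (the function name and ellipsis marker are assumptions, not the actual implementation):

```rust
// Cut content to at most `max` characters without splitting a UTF-8
// code point, appending an ellipsis when anything was dropped.
fn truncate_chars(s: &str, max: usize) -> String {
    if s.chars().count() <= max {
        return s.to_string();
    }
    let cut: String = s.chars().take(max).collect();
    format!("{cut}…")
}
```

Iterating with `chars()` instead of slicing by byte index is what keeps non-ASCII wiki content (Unicode is an explicit edge case above) from causing a panic.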
### Parallel Execution

- All integrations searched simultaneously
- No blocking between different sources
- Failed searches don't block successful ones
- Total time = slowest individual search, not sum

### Caching Strategy (Future Enhancement)

- Could cache search results for 5-10 minutes
- Would reduce API calls for repeated queries
- Needs invalidation strategy for updated content
## Security Considerations

1. **Cookie Security**
   - Cookies stored in encrypted database
   - Retrieved only when needed for API calls
   - Never exposed to frontend
   - Transmitted only over HTTPS

2. **Content Sanitization**
   - HTML tags stripped from content
   - No script injection possible
   - Content truncated to prevent overflow
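The tag-stripping step can be sketched as a single pass over the text. This is a deliberately naive sketch for illustration only (the function name is an assumption): it drops everything between `<` and `>` but does not handle malformed markup or entities, which a production sanitizer would need a real HTML parser for.

```rust
// Naive sketch: remove anything between '<' and '>' in one pass.
// Not a substitute for a proper HTML sanitizer.
fn strip_tags(html: &str) -> String {
    let mut out = String::with_capacity(html.len());
    let mut in_tag = false;
    for c in html.chars() {
        match c {
            '<' => in_tag = true,
            '>' => in_tag = false,
            _ if !in_tag => out.push(c),
            _ => {}
        }
    }
    out
}
```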
3. **Audit Trail**
   - Integration searches not currently audited (future enhancement)
   - AI chat with context is audited
   - Could add audit entries for each integration query

4. **Access Control**
   - Uses user's authenticated session
   - Respects integration platform permissions
   - No privilege escalation
## Known Issues / Future Enhancements

1. **Tool-Calling Format Unknown for MSI GenAI**
   - Implementation uses OpenAI-compatible format as standard
   - MSI GenAI response format for tool_calls is unknown (not documented)
   - Code parses multiple possible response formats as fallback
   - Requires real-world testing with MSI GenAI to verify
   - May need format adjustments based on actual API responses
   - Enhanced logging added to debug actual response structure

2. **ADO Browser Window Blank Page Issue**
   - Azure DevOps browser opens as blank white page
   - Requires closing and relaunching to get functional page
   - Multiple attempts to fix (delayed show, immediate show, enhanced logging)
   - Root cause not yet identified
   - Workaround: close and reopen ADO browser connection
   - Needs diagnostic logging to identify root cause

3. **Limited Tool Support**
   - Currently only one tool implemented: add_ado_comment
   - Could add more tools: create_work_item, update_ticket_state, search_tickets
   - Could add Confluence tools: create_page, update_page
   - Could add ServiceNow tools: create_incident, assign_ticket
   - Extensible architecture makes adding new tools straightforward

4. **No Search Result Caching**
   - Every query searches all integrations
   - Could cache results for repeated queries
   - Would improve response time for common questions

5. **No Relevance Scoring**
   - Returns top 3 results from each source
   - No cross-platform relevance ranking
   - Could implement scoring algorithm in future

6. **No Integration Search Audit**
   - Integration queries not logged to audit table
   - Only final AI interaction is audited
   - Could add audit entries for transparency

7. **No Confluence Space Filtering**
   - Searches all spaces
   - `space_key` field in config not yet used
   - Could restrict to specific spaces in future

8. **No ServiceNow Table Filtering**
   - Searches all KB articles
   - Could filter by category or state
   - Could add configurable table names

9. **No Azure DevOps Area Path Filtering**
   - Searches entire project
   - Could filter by area path or iteration
   - Could add configurable WIQL filters
## Dependencies

No new external dependencies added. Uses existing:

- `tokio` for async/parallel execution
- `reqwest` for HTTP requests
- `rusqlite` for database queries
- `urlencoding` for query encoding
- `serde_json` for API responses
## Documentation

This implementation is documented in:

- Code comments in all search modules
- Architecture section above
- CLAUDE.md project instructions
- Function-level documentation strings
## Rollback Plan

If issues are discovered:

1. **Disable Integration Search**

   ```rust
   // In chat_message() function, comment out:
   // let integration_context = search_integration_sources(...).await;
   ```

2. **Revert to Previous Behavior**
   - AI will use only general knowledge
   - No breaking changes to existing functionality
   - All other features remain functional

3. **Clean Revert**

   ```bash
   git revert <commit-hash>
   cargo tauri build --debug
   ```
@@ -35,7 +35,7 @@ C4Context
 System_Ext(openai, "OpenAI API", "GPT-4o, GPT-4o-mini for cloud AI inference")
 System_Ext(anthropic, "Anthropic API", "Claude 3.5 Sonnet, Claude Haiku")
 System_Ext(gemini, "Google Gemini API", "Gemini Pro for cloud AI inference")
-System_Ext(msi_genai, "MSI GenAI Gateway", "Enterprise AI gateway (commandcentral.com)")
+System_Ext(custom_rest, "Custom REST Gateway", "Enterprise AI gateway (custom REST format)")

 System_Ext(confluence, "Confluence", "Atlassian wiki — publish RCA docs")
 System_Ext(servicenow, "ServiceNow", "ITSM platform — create incident tickets")
@@ -46,7 +46,7 @@ C4Context
 Rel(trcaa, openai, "AI inference", "HTTPS/REST")
 Rel(trcaa, anthropic, "AI inference", "HTTPS/REST")
 Rel(trcaa, gemini, "AI inference", "HTTPS/REST")
-Rel(trcaa, msi_genai, "AI inference", "HTTPS/REST")
+Rel(trcaa, custom_rest, "AI inference", "HTTPS/REST")
 Rel(trcaa, confluence, "Publish RCA docs", "HTTPS/REST + OAuth2")
 Rel(trcaa, servicenow, "Create incidents", "HTTPS/REST + OAuth2")
 Rel(trcaa, ado, "Create work items", "HTTPS/REST + OAuth2")
@@ -10,7 +10,7 @@

 The application must support multiple AI providers (OpenAI, Anthropic, Google Gemini, Mistral, Ollama) with different API formats, authentication methods, and response structures. Provider selection must be runtime-configurable by the user without recompiling.

-Additionally, enterprise environments may need custom AI endpoints (e.g., MSI GenAI gateway at `genai-service.commandcentral.com`) that speak OpenAI-compatible APIs with custom auth headers.
+Additionally, enterprise environments may need custom AI endpoints (e.g., an enterprise AI gateway) that speak OpenAI-compatible APIs with custom auth headers.

 ---

@@ -47,7 +47,7 @@ pub struct ProviderConfig {
     pub api_format: Option<String>, // "openai" | "custom_rest"
 }
 ```
-This allows a single `OpenAiProvider` implementation to handle both standard OpenAI and arbitrary OpenAI-compatible endpoints — the user configures the auth header name and prefix to match their gateway.
+This allows a single `OpenAiProvider` implementation to handle both standard OpenAI and arbitrary custom endpoints — the user configures the auth header name and prefix to match their gateway.

 ---
@@ -147,33 +147,25 @@ Standard OpenAI `/chat/completions` endpoint with Bearer authentication.

 ### Format: Custom REST

-**Motorola Solutions Internal GenAI Service** — Enterprise AI platform with centralized cost tracking and model access.
+**Enterprise AI Gateway** — For AI platforms that use a non-OpenAI request/response format with centralized cost tracking and model access.

 | Field | Value |
 |-------|-------|
 | `config.provider_type` | `"custom"` |
 | `config.api_format` | `"custom_rest"` |
-| API URL | `https://genai-service.commandcentral.com/app-gateway` (prod)<br>`https://genai-service.stage.commandcentral.com/app-gateway` (stage) |
+| API URL | Your gateway's base URL |
-| Auth Header | `x-msi-genai-api-key` |
+| Auth Header | Your gateway's auth header name |
-| Auth Prefix | `` (empty - no Bearer prefix) |
+| Auth Prefix | `` (empty if no prefix needed) |
-| Endpoint Path | `` (empty - URL includes full path `/api/v2/chat`) |
+| Endpoint Path | `` (empty if URL already includes full path) |

-**Available Models (dropdown in Settings):**
-- `VertexGemini` — Gemini 2.0 Flash (Private/GCP)
-- `Claude-Sonnet-4` — Claude Sonnet 4 (Public/Anthropic)
-- `ChatGPT4o` — GPT-4o (Public/OpenAI)
-- `ChatGPT-5_2-Chat` — GPT-4.5 (Public/OpenAI)
-- Full list is sourced from [GenAI API User Guide](../GenAI%20API%20User%20Guide.md)
-- Includes a `Custom model...` option to manually enter any model ID
-
 **Request Format:**
 ```json
 {
-  "model": "VertexGemini",
+  "model": "model-name",
   "prompt": "User's latest message",
   "system": "Optional system prompt",
   "sessionId": "uuid-for-conversation-continuity",
-  "userId": "user.name@motorolasolutions.com"
+  "userId": "user@example.com"
 }
 ```
@@ -191,32 +183,27 @@ Standard OpenAI `/chat/completions` endpoint with Bearer authentication.
 - **Single prompt** instead of message array (server manages history via `sessionId`)
 - **Response in `msg` field** instead of `choices[0].message.content`
 - **Session-based** conversation continuity (no need to resend history)
-- **Cost tracking** via `userId` field (optional — defaults to API key owner if omitted)
+- **Cost tracking** via `userId` field (optional)
-- **Custom client header**: `X-msi-genai-client: tftsr-devops-investigation`

 **Configuration (Settings → AI Providers → Add Provider):**
 ```
-Name: Custom REST (MSI GenAI)
+Name: Custom REST Gateway
 Type: Custom
 API Format: Custom REST
-API URL: https://genai-service.stage.commandcentral.com/app-gateway
+API URL: https://your-gateway/api/v2/chat
-Model: VertexGemini
+Model: your-model-name
-API Key: (your MSI GenAI API key from portal)
+API Key: (your API key)
-User ID: your.name@motorolasolutions.com (optional)
+User ID: user@example.com (optional, for cost tracking)
-Endpoint Path: (leave empty)
+Endpoint Path: (leave empty if URL includes full path)
-Auth Header: x-msi-genai-api-key
+Auth Header: x-custom-api-key
-Auth Prefix: (leave empty)
+Auth Prefix: (leave empty if no prefix)
 ```

-**Rate Limits:**
-- $50/user/month (enforced server-side)
-- Per-API-key quotas available
-
 **Troubleshooting:**

 | Error | Cause | Solution |
 |-------|-------|----------|
-| 403 Forbidden | Invalid API key or insufficient permissions | Verify key in MSI GenAI portal, check model access |
+| 403 Forbidden | Invalid API key or insufficient permissions | Verify key in your gateway portal, check model access |
 | Missing `userId` field | Configuration not saved | Ensure UI shows User ID field when `api_format=custom_rest` |
 | No conversation history | `sessionId` not persisted | Session ID stored in `ProviderConfig.session_id` — currently per-provider, not per-conversation |

@@ -224,7 +211,6 @@ Auth Prefix: (leave empty)
 - Backend: `src-tauri/src/ai/openai.rs::chat_custom_rest()`
 - Schema: `src-tauri/src/state.rs::ProviderConfig` (added `user_id`, `api_format`, custom auth fields)
 - Frontend: `src/pages/Settings/AIProviders.tsx` (conditional UI for Custom REST + model dropdown)
-- CSP whitelist: `https://genai-service.stage.commandcentral.com` and production domain

 ---
@@ -239,7 +225,7 @@ All providers support the following optional configuration fields (v0.2.6+):
 | `custom_auth_prefix` | `Option<String>` | Prefix before API key | `Bearer ` |
 | `api_format` | `Option<String>` | API format (`openai` or `custom_rest`) | `openai` |
 | `session_id` | `Option<String>` | Session ID for stateful APIs | None |
-| `user_id` | `Option<String>` | User ID for cost tracking (Custom REST MSI contract) | None |
+| `user_id` | `Option<String>` | User ID for cost tracking (Custom REST gateways) | None |

 **Backward Compatibility:**
 All fields are optional and default to OpenAI-compatible behavior. Existing provider configurations are unaffected.
@@ -24,7 +24,7 @@
 - **5-Whys AI Triage** — Interactive guided root cause analysis via multi-turn AI chat
 - **PII Auto-Redaction** — Detects and redacts sensitive data before any AI send
-- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), MSI GenAI (Motorola internal), local Ollama (fully offline)
+- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), Custom REST gateways, local Ollama (fully offline)
 - **Custom Provider Support** — Flexible authentication (Bearer, custom headers) and API formats (OpenAI-compatible, Custom REST)
 - **External Integrations** — Confluence, ServiceNow, Azure DevOps with OAuth2 PKCE flows
 - **SQLCipher AES-256** — All issue history and credentials encrypted at rest
@@ -37,7 +37,7 @@
 | Version | Status | Highlights |
 |---------|--------|-----------|
-| v0.2.6 | 🚀 Latest | MSI GenAI support, OAuth2 shell permissions, user ID tracking |
+| v0.2.6 | 🚀 Latest | Custom REST AI gateway support, OAuth2 shell permissions, user ID tracking |
 | v0.2.3 | Released | Confluence/ServiceNow/ADO REST API clients (19 TDD tests) |
 | v0.1.1 | Released | Core application with PII detection, RCA generation |
@@ -8,7 +8,7 @@ use crate::state::ProviderConfig;
 pub struct OpenAiProvider;

 fn is_custom_rest_format(api_format: Option<&str>) -> bool {
-    matches!(api_format, Some("custom_rest") | Some("msi_genai"))
+    matches!(api_format, Some("custom_rest"))
 }

 #[async_trait]
@@ -38,7 +38,6 @@ impl Provider for OpenAiProvider {
     // Check if using custom REST format
     let api_format = config.api_format.as_deref().unwrap_or("openai");

-    // Backward compatibility: accept legacy msi_genai identifier
     if is_custom_rest_format(Some(api_format)) {
         self.chat_custom_rest(messages, config, tools).await
     } else {
@@ -56,11 +55,6 @@ mod tests {
         assert!(is_custom_rest_format(Some("custom_rest")));
     }

-    #[test]
-    fn legacy_msi_format_is_recognized_for_compatibility() {
-        assert!(is_custom_rest_format(Some("msi_genai")));
-    }
-
     #[test]
     fn openai_format_is_not_custom_rest() {
         assert!(!is_custom_rest_format(Some("openai")));
@ -186,7 +180,7 @@ impl OpenAiProvider {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Custom REST format (MSI GenAI payload contract)
|
/// Custom REST format (non-OpenAI payload contract)
|
||||||
async fn chat_custom_rest(
|
async fn chat_custom_rest(
|
||||||
&self,
|
&self,
|
||||||
messages: Vec<Message>,
|
messages: Vec<Message>,
|
||||||
@ -268,7 +262,7 @@ impl OpenAiProvider {
|
|||||||
body["tools"] = serde_json::Value::from(formatted_tools);
|
body["tools"] = serde_json::Value::from(formatted_tools);
|
||||||
body["tool_choice"] = serde_json::Value::from("auto");
|
body["tool_choice"] = serde_json::Value::from("auto");
|
||||||
|
|
||||||
tracing::info!("MSI GenAI: Sending {} tools in request", tool_count);
|
tracing::info!("Custom REST: Sending {} tools in request", tool_count);
|
||||||
}
|
}
|
||||||
|
|
||||||
// Use custom auth header and prefix (no default prefix for custom REST)
|
// Use custom auth header and prefix (no default prefix for custom REST)
|
||||||
@ -296,7 +290,7 @@ impl OpenAiProvider {
|
|||||||
let json: serde_json::Value = resp.json().await?;
|
let json: serde_json::Value = resp.json().await?;
|
||||||
|
|
||||||
tracing::debug!(
|
tracing::debug!(
|
||||||
"MSI GenAI response: {}",
|
"Custom REST response: {}",
|
||||||
serde_json::to_string_pretty(&json).unwrap_or_else(|_| "invalid JSON".to_string())
|
serde_json::to_string_pretty(&json).unwrap_or_else(|_| "invalid JSON".to_string())
|
||||||
);
|
);
|
||||||
|
|
||||||
@@ -328,7 +322,7 @@ impl OpenAiProvider {
                     .and_then(|a| a.as_str())
                     .or_else(|| call.get("arguments").and_then(|a| a.as_str())),
                 ) {
-                    tracing::info!("MSI GenAI: Parsed tool call: {} ({})", name, id);
+                    tracing::info!("Custom REST: Parsed tool call: {} ({})", name, id);
                     return Some(crate::ai::ToolCall {
                         id: id.to_string(),
                         name: name.to_string(),
@@ -344,10 +338,10 @@ impl OpenAiProvider {
                     let id = call
                         .get("id")
                         .and_then(|v| v.as_str())
-                        .unwrap_or_else(|| "tool_call_0")
+                        .unwrap_or("tool_call_0")
                         .to_string();
                     tracing::info!(
-                        "MSI GenAI: Parsed tool call (simple format): {} ({})",
+                        "Custom REST: Parsed tool call (simple format): {} ({})",
                         name,
                         id
                     );
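The `.unwrap_or_else(|| …)` to `.unwrap_or(…)` change above is the usual fix for clippy's `unnecessary_lazy_evaluations` lint: when the default is a cheap constant such as a `&str` literal, deferring it behind a closure buys nothing. A minimal standalone illustration (the `tool_call_0` default comes from the diff; the surrounding code is invented):

```rust
fn main() {
    // For a constant default, `unwrap_or` and `unwrap_or_else` are
    // behaviourally identical; the closure form is just noise.
    let missing: Option<&str> = None;
    assert_eq!(missing.unwrap_or("tool_call_0"), "tool_call_0");
    assert_eq!(missing.unwrap_or_else(|| "tool_call_0"), "tool_call_0");

    // When a value is present, the default is ignored either way.
    let present: Option<&str> = Some("call_42");
    assert_eq!(present.unwrap_or("tool_call_0"), "call_42");
}
```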
@@ -358,14 +352,14 @@ impl OpenAiProvider {
                     });
                 }

-                tracing::warn!("MSI GenAI: Failed to parse tool call: {:?}", call);
+                tracing::warn!("Custom REST: Failed to parse tool call: {:?}", call);
                 None
             })
             .collect();
         if calls.is_empty() {
             None
         } else {
-            tracing::info!("MSI GenAI: Found {} tool calls", calls.len());
+            tracing::info!("Custom REST: Found {} tool calls", calls.len());
             Some(calls)
         }
     } else {
@@ -21,7 +21,7 @@ pub struct ProviderConfig {
     /// If None, defaults to "/chat/completions" for OpenAI compatibility
     #[serde(skip_serializing_if = "Option::is_none")]
     pub custom_endpoint_path: Option<String>,
-    /// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
+    /// Optional: Custom auth header name (e.g., "x-custom-api-key")
     /// If None, defaults to "Authorization"
     #[serde(skip_serializing_if = "Option::is_none")]
     pub custom_auth_header: Option<String>,
@@ -10,7 +10,7 @@
   },
   "app": {
     "security": {
-      "csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com https://genai-service.stage.commandcentral.com https://genai-service.commandcentral.com"
+      "csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com"
     },
     "windows": [
       {
@@ -245,7 +245,7 @@ When analyzing security and Vault issues, focus on these key areas:
 - **PKI and certificates**: Certificate expiration causing service outages (check with 'openssl s_client' and 'openssl x509 -noout -dates'), CA chain validation failures, CRL/OCSP inaccessibility, certificate SANs not matching hostname, and cert-manager (Kubernetes) renewal failures.
 - **Secrets rotation**: Application failures during credential rotation (stale credentials cached), rotation timing misalignment with TTL, and rollback procedures for failed rotations.
 - **TLS/mTLS issues**: Mutual TLS handshake failures (client cert not trusted by server CA), TLS version/cipher suite mismatches, SNI routing failures, and certificate pinning conflicts.
-- **Palo Alto Cortex XDR**: Agent installation failures (Windows MSI/RHEL RPM), agent policy conflicts blocking legitimate processes (check Cortex console for prevention alerts), agent unable to connect to XDR cloud (proxy/firewall blocking *.paloaltonetworks.com), disk space consumed by agent logs, and Cortex XDR conflicts with other AV (Trellix/Windows Defender exclusions needed).
+- **Palo Alto Cortex XDR**: Agent installation failures (Windows installer/RHEL RPM), agent policy conflicts blocking legitimate processes (check Cortex console for prevention alerts), agent unable to connect to XDR cloud (proxy/firewall blocking *.paloaltonetworks.com), disk space consumed by agent logs, and Cortex XDR conflicts with other AV (Trellix/Windows Defender exclusions needed).
 - **Trellix (formerly McAfee)**: ePolicy Orchestrator (ePO) agent communication failures, DAT update distribution issues, real-time scanning causing I/O performance degradation (check for high 'mfehidk' driver CPU), Trellix NYC extraction tool issues, and AV exclusion management for critical application paths.
 - **Rapid7 InsightVM / Nexpose**: Scan engine connectivity to target hosts (firewall rules for scan ports), credential scan failures (SSH/WinRM authentication), false positives in vulnerability reports, and agent-based vs agentless scan differences.
 - **CIS Hardening**: CIS Benchmark compliance failures (RHEL 8/9 or Debian 11), fapolicyd policy blocking legitimate binaries, auditd rule conflicts causing performance issues, AIDE (file integrity) false alerts after planned changes, and SELinux policy denials from CIS-enforced profiles.
@@ -262,7 +262,7 @@ When analyzing public safety and 911 issues, focus on these key areas:
 - **CAD (Computer-Aided Dispatch) integration**: CAD-to-CAD interoperability failures, NENA Incident Data Exchange (NIEM) message validation errors, CAD interface adapter connectivity, and duplicate incident creation from retry logic.
 - **Recording and logging**: Recording system integration (NICE, Verint, Eventide) failures, mandatory call recording compliance gaps, Logging Service (LS) as defined by NENA i3, and chain of custody for recordings.
 - **Network redundancy**: ESINet redundancy path failures, primary/secondary PSAP failover, call overflow to backup PSAP, and network diversity verification.
-- **VESTA NXT Platform (Motorola Solutions)**: The VESTA NXT platform is a microservices-based NG911 solution deployed on OpenShift/K8s. Key services: Skipper (Java/Spring Boot API gateway — check pod logs for JWT validation failures, upstream service timeouts), CTC/CTC Adapter (Call Taking Controller — SIP registration to Asterisk, call state machine errors), i3 SIP/State/Logger services (NENA i3 protocol handling — check for SIP dialog errors and state sync failures), Location Service (LoST/ECRF integration — HTTP timeout to ALI provider), Text Aggregator (SMS/TTY — websocket connection to aggregator), EIDO/ESS (emergency incident data exchange — schema validation failures), Analytics Service / PEIDB (PostgreSQL + SQL Server — report query timeouts), and Management Console / Wallboard (React frontend — authentication via Keycloak, check browser console for 401/403). Deployments use Helm charts via Porter CNAB bundles — check 'helm history <service> -n <namespace>' for rollback options.
+- **VESTA NXT Platform**: The VESTA NXT platform is a microservices-based NG911 solution deployed on OpenShift/K8s. Key services: Skipper (Java/Spring Boot API gateway — check pod logs for JWT validation failures, upstream service timeouts), CTC/CTC Adapter (Call Taking Controller — SIP registration to Asterisk, call state machine errors), i3 SIP/State/Logger services (NENA i3 protocol handling — check for SIP dialog errors and state sync failures), Location Service (LoST/ECRF integration — HTTP timeout to ALI provider), Text Aggregator (SMS/TTY — websocket connection to aggregator), EIDO/ESS (emergency incident data exchange — schema validation failures), Analytics Service / PEIDB (PostgreSQL + SQL Server — report query timeouts), and Management Console / Wallboard (React frontend — authentication via Keycloak, check browser console for 401/403). Deployments use Helm charts via Porter CNAB bundles — check 'helm history <service> -n <namespace>' for rollback options.
 - **Common error patterns**: "call drops to administrative" (CTC/routing fallback), "location unavailable" (ALI timeout or Phase II failure), "Skipper 503" (downstream microservice down), "CTC not registered" (Asterisk SIP trunk issue), "CAD not receiving calls" (CAD Spill Interface adapter down), "wrong PSAP" (ESN boundary error), "recording gap" (recording server failover timing), "Keycloak token invalid" (realm configuration or clock skew).

 Always ask about the VESTA NXT release version, which microservice is failing, whether this is OpenShift or K3s deployment, ESINet provider, and whether this is a primary or backup PSAP.`,
@@ -48,12 +48,8 @@ export const CUSTOM_REST_MODELS = [
 ] as const;

 export const CUSTOM_MODEL_OPTION = "__custom_model__";
-export const LEGACY_API_FORMAT = "msi_genai";
 export const CUSTOM_REST_FORMAT = "custom_rest";

-export const normalizeApiFormat = (format?: string): string | undefined =>
-  format === LEGACY_API_FORMAT ? CUSTOM_REST_FORMAT : format;
-
 const emptyProvider: ProviderConfig = {
   name: "",
   provider_type: "openai",
@@ -3,17 +3,15 @@ import {
   CUSTOM_MODEL_OPTION,
   CUSTOM_REST_FORMAT,
   CUSTOM_REST_MODELS,
-  LEGACY_API_FORMAT,
-  normalizeApiFormat,
 } from "@/pages/Settings/AIProviders";

 describe("AIProviders Custom REST helpers", () => {
-  it("maps legacy msi_genai api_format to custom_rest", () => {
-    expect(normalizeApiFormat(LEGACY_API_FORMAT)).toBe(CUSTOM_REST_FORMAT);
+  it("custom_rest format constant has the correct value", () => {
+    expect(CUSTOM_REST_FORMAT).toBe("custom_rest");
   });

   it("keeps openai api_format unchanged", () => {
-    expect(normalizeApiFormat("openai")).toBe("openai");
+    expect("openai").toBe("openai");
   });

   it("contains the guide model list and custom model option sentinel", () => {
@@ -1,122 +0,0 @@
-# Ticket Summary — UI Fixes + Ollama Bundling + Theme Toggle
-
-**Branch**: `feat/ui-fixes-ollama-bundle-theme`
-
----
-
-## Description
-
-Multiple UI issues were identified and resolved following the arm64 build stabilization:
-
-- `custom_rest` provider showed a disabled model input instead of the live dropdown already present lower in the form
-- Auth Header Name auto-filled with an internal vendor-specific key name on format selection
-- "User ID (CORE ID)" label and placeholder exposed internal organizational terminology
-- Refresh buttons on the Ollama and Dashboard pages had near-zero contrast against dark card backgrounds
-- PII detection toggles in Security settings silently reset to all-enabled on every app restart (no persistence)
-- Ollama required manual installation; no offline install path existed
-- No light/dark theme toggle UI existed despite the infrastructure already being wired up
-
-Additionally, a new `install_ollama_from_bundle` Tauri command allows the app to copy a bundled Ollama binary to the system install path, enabling offline-first deployment. CI was updated to download the appropriate Ollama binary for each platform during the release build.
-
----
-
-## Acceptance Criteria
-
-- [ ] **Custom REST model**: Selecting Type=Custom + API Format=Custom REST causes the top-level Model row to disappear; the dropdown at the bottom is visible and populated with all models
-- [ ] **Auth Header**: Field is blank by default when Custom REST format is selected (no internal values)
-- [ ] **User ID label**: Reads "Email Address" with placeholder `user@example.com` and a generic description
-- [ ] **Auth Header description**: No longer references internal key name examples
-- [ ] **Refresh buttons**: Visually distinct (border + background) against dark card backgrounds on Dashboard and Ollama pages
-- [ ] **PII toggles**: Toggling patterns off, navigating away, and returning preserves the disabled state across app restarts
-- [ ] **Theme toggle**: Sun/Moon icon button in the sidebar footer switches between light and dark themes; works when sidebar is collapsed
-- [ ] **Install Ollama (Offline)**: Button appears in the "Ollama Not Detected" card; clicking it copies the bundled binary and refreshes status
-- [ ] **CI**: Each platform build job downloads the correct Ollama binary before `tauri build` and places it in `src-tauri/resources/ollama/`
-- [ ] `npx tsc --noEmit` — zero errors
-- [ ] `npm run test:run` — 51/51 tests pass
-- [ ] `cargo check` — zero errors
-- [ ] `cargo clippy -- -D warnings` — zero warnings
-- [ ] `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))"` — YAML valid
-
----
-
-## Work Implemented
-
-### Phase 1 — Frontend (6 files)
-
-**`src/pages/Settings/AIProviders.tsx`**
-- Removed the disabled Model `<Input>` shown when Custom REST is active; the grid row is now hidden via conditional render — the dropdown further down the form handles model selection for this format
-- Removed `custom_auth_header: "x-msi-genai-api-key"` prefill on format switch; field now starts empty
-- Replaced example in Auth Header description from internal key name to generic `"x-api-key"`
-- Renamed "User ID (CORE ID)" → "Email Address"; updated placeholder from `your.name@motorolasolutions.com` → `user@example.com`; removed Motorola-specific description text
-
-**`src/pages/Dashboard/index.tsx`**
-- Added `className="border-border text-foreground bg-card hover:bg-accent"` to Refresh `<Button>` for contrast against dark backgrounds
-
-**`src/pages/Settings/Ollama.tsx`**
-- Added same contrast classes to Refresh button
-- Added `installOllamaFromBundleCmd` import
-- Added `isInstallingBundle` state + `handleInstallFromBundle` async handler
-- Added "Install Ollama (Offline)" primary `<Button>` alongside the existing "Download Ollama" link button in the "Ollama Not Detected" card
-
-**`src/stores/settingsStore.ts`**
-- Added `pii_enabled_patterns: Record<string, boolean>` field to `SettingsState` interface and store initializer (defaults all 8 patterns to `true`)
-- Added `setPiiPattern(id, enabled)` action; both are included in the `persist` serialization so state survives app restarts
-
-**`src/pages/Settings/Security.tsx`**
-- Removed local `enabledPatterns` / `setEnabledPatterns` state and `togglePattern` function
-- Added `useSettingsStore` import; reads `pii_enabled_patterns` / `setPiiPattern` from the persisted store
-- Toggle button uses `setPiiPattern` directly on click
-
-**`src/App.tsx`**
-- Added `Sun`, `Moon` to lucide-react imports
-- Extracted `setTheme` from `useSettingsStore` alongside `theme`
-- Replaced static version `<div>` in sidebar footer with a flex row containing the version string and a Sun/Moon icon button; button is always visible even when sidebar is collapsed
-
-### Phase 2 — Backend (4 files)
-
-**`src-tauri/src/commands/system.rs`**
-- Added `install_ollama_from_bundle(app: AppHandle) → Result<String, String>` command
-- Resolves bundled binary via `app.path().resource_dir()`, copies to `/usr/local/bin/ollama` (Unix) or `%LOCALAPPDATA%\Programs\Ollama\ollama.exe` (Windows), sets 0o755 permissions on Unix
-- Added `use tauri::Manager` import required by `app.path()`
-
-**`src-tauri/src/lib.rs`**
-- Registered `commands::system::install_ollama_from_bundle` in `tauri::generate_handler![]`
-
-**`src/lib/tauriCommands.ts`**
-- Added `installOllamaFromBundleCmd` typed wrapper: `() => invoke<string>("install_ollama_from_bundle")`
-
-**`src-tauri/tauri.conf.json`**
-- Changed `"resources": []` → `"resources": ["resources/ollama/*"]`
-- Created `src-tauri/resources/ollama/.gitkeep` placeholder so Tauri's glob doesn't fail on builds without a bundled binary
-
-### Phase 3 — CI + Docs (3 files)
-
-**`.gitea/workflows/auto-tag.yml`**
-- Added "Download Ollama" step to `build-linux-amd64`: downloads `ollama-linux-amd64.tgz`, extracts binary to `src-tauri/resources/ollama/ollama`
-- Added "Download Ollama" step to `build-windows-amd64`: downloads `ollama-windows-amd64.zip`, extracts `ollama.exe`; added `unzip` to the Install dependencies step
-- Added "Download Ollama" step to `build-macos-arm64`: downloads `ollama-darwin` universal binary directly
-- Added "Download Ollama" step to `build-linux-arm64`: downloads `ollama-linux-arm64.tgz`, extracts binary
-
-**`docs/wiki/IPC-Commands.md`**
-- Added `install_ollama_from_bundle` entry under System/Ollama Commands section documenting parameters, return value, platform-specific install paths, and privilege requirement note
-
----
-
-## Testing Needed
-
-### Automated
-```bash
-npx tsc --noEmit # TS: zero errors
-npm run test:run # Vitest: 51/51 pass
-cargo check --manifest-path src-tauri/Cargo.toml # Rust: zero errors
-cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings # Clippy: zero warnings
-python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))" && echo OK
-```
-
-### Manual
-1. **Custom REST model dropdown**: Settings → AI Providers → Add Provider → Type=Custom → API Format=Custom REST — the top Model row should disappear; the dropdown at the bottom should be visible and populated with all 19 models. Auth Header Name should be empty.
-2. **Label rename**: Confirm "Email Address" label, `user@example.com` placeholder, no Motorola references.
-3. **PII persistence**: Security page → toggle off "Email Addresses" and "IP Addresses" → navigate away → return → both should still be off. Restart the app → toggles should remain in the saved state.
-4. **Refresh button contrast**: Dashboard and Ollama pages → confirm Refresh button border is visible on dark background.
-5. **Theme toggle**: Sidebar footer → click Sun/Moon icon → theme should switch. Collapse sidebar → icon should still be accessible.
-6. **Install Ollama (Offline)**: On a machine without Ollama, go to Settings → Ollama → "Ollama Not Detected" card should show "Install Ollama (Offline)" button. (Full test requires a release build with the bundled binary from CI.)