Compare commits

...

54 Commits

Author SHA1 Message Date
c0d482ace7 chore: update CHANGELOG.md for v0.2.66 [skip ci] 2026-04-20 01:26:00 +00:00
5a12718566 Merge pull request 'fix(test): await async data in auditLog test' (#51) from fix/audit-log-test into master
Some checks failed
Auto Tag / autotag (push) Successful in 15s
Auto Tag / wiki-sync (push) Successful in 15s
Test / rust-fmt-check (push) Successful in 1m4s
Test / frontend-typecheck (push) Successful in 1m22s
Auto Tag / changelog (push) Successful in 53s
Test / frontend-tests (push) Successful in 1m29s
Test / rust-clippy (push) Successful in 8m5s
Test / rust-tests (push) Successful in 11m30s
Auto Tag / build-linux-amd64 (push) Successful in 16m13s
Auto Tag / build-linux-arm64 (push) Successful in 17m54s
Auto Tag / build-windows-amd64 (push) Successful in 18m51s
Auto Tag / build-macos-arm64 (push) Failing after 11m59s
2026-04-20 01:21:55 +00:00
Shaun Arman
4a0c7957ec fix(test): await async data in auditLog test to prevent race condition
Some checks failed
Test / rust-fmt-check (pull_request) Successful in 1m11s
Test / frontend-typecheck (pull_request) Successful in 1m23s
Test / frontend-tests (pull_request) Successful in 1m33s
PR Review Automation / review (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
2026-04-19 20:21:37 -05:00
12a76b4dd8 chore: update CHANGELOG.md for v0.2.66 [skip ci] 2026-04-20 00:47:35 +00:00
Shaun Arman
0e6fd09455 chore: retrigger auto-tag pipeline
Some checks failed
Auto Tag / autotag (push) Successful in 6s
Auto Tag / wiki-sync (push) Successful in 5s
Auto Tag / changelog (push) Successful in 51s
Test / rust-fmt-check (push) Successful in 1m10s
Test / frontend-typecheck (push) Successful in 1m28s
Test / frontend-tests (push) Failing after 1m38s
Auto Tag / build-macos-arm64 (push) Successful in 4m18s
Test / rust-clippy (push) Successful in 7m56s
Test / rust-tests (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Successful in 19m44s
Auto Tag / build-linux-arm64 (push) Successful in 22m7s
Auto Tag / build-windows-amd64 (push) Successful in 23m18s
2026-04-19 19:46:34 -05:00
Shaun Arman
b7f348bf34 chore: retrigger build pipeline 2026-04-19 19:42:39 -05:00
Shaun Arman
7234704636 chore: trigger build pipeline 2026-04-19 19:40:02 -05:00
06b0c10b17 Merge pull request 'docs: add v0.2.66 changelog entry' (#50) from chore/trigger-build-2 into master 2026-04-20 00:34:55 +00:00
Shaun Arman
ab231b6564 docs: add v0.2.66 changelog entry
Some checks failed
PR Review Automation / review (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
2026-04-19 19:33:52 -05:00
8b828fe4c3 Merge pull request 'docs: clarify changelog exclusion criteria' (#49) from chore/trigger-build into master
Reviewed-on: #49
2026-04-20 00:29:55 +00:00
Shaun Arman
27193c91e6 docs: clarify changelog exclusion criteria
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 1m5s
Test / frontend-typecheck (pull_request) Successful in 1m15s
Test / frontend-tests (pull_request) Successful in 1m22s
PR Review Automation / review (pull_request) Successful in 3m46s
Test / rust-clippy (pull_request) Successful in 4m17s
Test / rust-tests (pull_request) Successful in 5m29s
2026-04-19 19:20:57 -05:00
cb542d7f22 Merge pull request 'fix(ci): switch PR review to liteLLM + add push trigger to tests' (#46) from fix/litellm-pr-review into master
Reviewed-on: #46
2026-04-19 23:56:22 +00:00
Shaun Arman
d066e71eeb fix(ci): switch PR review from Ollama to liteLLM (qwen2.5-72b)
Some checks failed
Test / rust-fmt-check (pull_request) Successful in 1m9s
Test / frontend-typecheck (pull_request) Successful in 1m17s
Test / frontend-tests (pull_request) Successful in 1m22s
Test / rust-clippy (pull_request) Successful in 4m19s
Test / rust-tests (pull_request) Successful in 5m46s
PR Review Automation / review (pull_request) Failing after 1m15s
Replace direct Ollama API calls with liteLLM proxy at
172.0.0.29:11434 using qwen2.5-72b (72B VLLM model). Increase
timeouts to 300s for larger model inference. Reuses existing
OLLAMA_API_KEY secret for liteLLM auth.

Also add push-to-master trigger on test.yml so merges to master
run the full CI suite (previously only pull_request events triggered).
2026-04-19 18:41:54 -05:00
257b2fb9c5 Merge pull request 'feat: incident response methodology + UTC timeline tracking' (#45) from feat/incident-response-timeline into master
Reviewed-on: #45
2026-04-19 23:34:34 +00:00
Shaun Arman
d715ba0b25 docs: update wiki for timeline events and incident response methodology
Some checks failed
Test / rust-fmt-check (pull_request) Successful in 1m12s
Test / frontend-typecheck (pull_request) Successful in 1m17s
Test / frontend-tests (pull_request) Successful in 1m25s
PR Review Automation / review (pull_request) Failing after 2m45s
Test / rust-clippy (pull_request) Successful in 4m26s
Test / rust-tests (pull_request) Successful in 5m42s
- Database.md: document timeline_events table (migration 017), event
  types, dual-write strategy, correct migration count to 17
- IPC-Commands.md: document get_timeline_events, updated
  add_timeline_event with metadata, chat_message system_prompt param
- Architecture.md: document incident response methodology integration,
  5-phase framework, system prompt injection, correct migration count
2026-04-19 18:26:21 -05:00
Shaun Arman
8b0cbc3ce8 fix: harden timeline event input validation and atomic writes
Address security review findings:
- Validate event_type against whitelist of 7 known types (M-3)
- Validate metadata is valid JSON and under 10KB (M-2, M-4)
- Include metadata in audit log details (M-2)
- Wrap timeline insert + audit write + timestamp update in a
  SQLite transaction for atomicity (M-5)
- Fix TypeScript TimelineEvent interface: add issue_id, metadata
  fields and correct created_at type to string (L-3)
- Add timeline_events to IssueDetail TypeScript interface (L-4)
2026-04-19 18:25:53 -05:00
Shaun Arman
13c4969e31 feat: wire incident response methodology into AI and record triage events
Add INCIDENT_RESPONSE_FRAMEWORK to domainPrompts.ts and append it to
all 17 domain prompts via getDomainPrompt(). Add system_prompt param
to chat_message command so frontend can inject domain expertise. Record
UTC timeline events (triage_started, log_uploaded, why_level_advanced,
root_cause_identified, rca_generated, postmortem_generated,
document_exported) at key moments with non-blocking calls.

Update tauriCommands.ts with getTimelineEventsCmd, optional metadata on
addTimelineEventCmd, and systemPrompt on chatMessageCmd.

12 new frontend tests (9 domain prompts, 3 timeline events).
2026-04-19 18:13:47 -05:00
Shaun Arman
79a623dbb2 feat: populate RCA and postmortem docs with real timeline data
Add format_event_type() and calculate_duration() helpers to convert
raw timeline events into human-readable tables and metrics. RCA now
includes an Incident Timeline section and Incident Metrics (event
count, duration, time-to-root-cause). Postmortem replaces placeholder
timeline rows with real events, calculates impact duration, and
auto-populates What Went Well from evidence.

10 new Rust tests covering timeline rendering, duration calculation,
and event type formatting.
2026-04-19 18:13:30 -05:00
Shaun Arman
107fee8853 feat: add timeline_events table, model, and CRUD commands
- Add migration 017_create_timeline_events with indexes
- Update TimelineEvent struct with issue_id, metadata, UTC string timestamps
- Add TimelineEvent::new() constructor with UUIDv7
- Add timeline_events field to IssueDetail
- Rewrite add_timeline_event to write to new table + audit_log (dual-write)
- Add get_timeline_events command for ordered retrieval
- Update get_issue to load timeline_events
- Update delete_issue to clean up timeline_events
- Register get_timeline_events in generate_handler
- Add migration tests for table, indexes, and cascade delete
- Fix flaky derive_aes_key test (env var race condition in parallel tests)
2026-04-19 18:02:38 -05:00
6d105a70ad chore: update CHANGELOG.md for v0.2.66 [skip ci] 2026-04-15 02:11:31 +00:00
ca56b583c5 Merge pull request 'feat: implement dynamic versioning from Git tags' (#42) from fix/version-dynamic-build into master
All checks were successful
Auto Tag / autotag (push) Successful in 12s
Auto Tag / wiki-sync (push) Successful in 13s
Auto Tag / changelog (push) Successful in 41s
Auto Tag / build-linux-amd64 (push) Successful in 13m51s
Auto Tag / build-linux-arm64 (push) Successful in 15m41s
Auto Tag / build-windows-amd64 (push) Successful in 16m36s
Auto Tag / build-macos-arm64 (push) Successful in 2m22s
Reviewed-on: #42
2026-04-15 02:10:10 +00:00
Shaun Arman
8c35e91aef Merge branch 'master' into fix/version-dynamic-build
Some checks failed
Test / rust-fmt-check (pull_request) Successful in 1m8s
Test / frontend-typecheck (pull_request) Successful in 1m17s
Test / frontend-tests (pull_request) Successful in 1m23s
PR Review Automation / review (pull_request) Failing after 2m11s
Test / rust-clippy (pull_request) Successful in 6m11s
Test / rust-tests (pull_request) Successful in 9m7s
2026-04-14 21:09:11 -05:00
Shaun Arman
1055841b6f fix: remove invalid --locked flag from cargo commands and fix format string
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 1m3s
PR Review Automation / review (pull_request) Successful in 2m54s
Test / frontend-typecheck (pull_request) Successful in 1m14s
Test / frontend-tests (pull_request) Successful in 1m25s
Test / rust-clippy (pull_request) Successful in 8m1s
Test / rust-tests (pull_request) Successful in 10m11s
- Remove --locked flag from cargo fmt, clippy, and test commands in CI
- Update build.rs to use Rust 2021 direct variable interpolation in format strings
2026-04-14 20:50:47 -05:00
f38ca7e2fc chore: update CHANGELOG.md for v0.2.63 [skip ci] 2026-04-15 01:45:29 +00:00
a9956a16a4 Merge pull request 'feat(integrations): implement query expansion for semantic search' (#44) from feature/integration-search-expansion into master
Some checks failed
Auto Tag / autotag (push) Successful in 7s
Auto Tag / wiki-sync (push) Successful in 6s
Auto Tag / changelog (push) Successful in 43s
Auto Tag / build-linux-amd64 (push) Successful in 15m51s
Auto Tag / build-linux-arm64 (push) Successful in 18m51s
Auto Tag / build-windows-amd64 (push) Successful in 19m44s
Auto Tag / build-macos-arm64 (push) Has been cancelled
Reviewed-on: #44
2026-04-15 01:44:42 +00:00
Shaun Arman
bc50a78db7 fix: correct WIQL syntax and escape_wiql implementation
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 10s
Test / frontend-typecheck (pull_request) Successful in 1m11s
Test / frontend-tests (pull_request) Successful in 1m12s
PR Review Automation / review (pull_request) Successful in 3m6s
Test / rust-clippy (pull_request) Successful in 3m49s
Test / rust-tests (pull_request) Successful in 5m4s
- Replace CONTAINS with ~ operator (correct WIQL syntax for text matching)
- Remove escaping of ~, *, ? which are valid WIQL wildcards
- Update tests to reflect correct escape_wiql behavior
2026-04-14 20:38:21 -05:00
Shaun Arman
e6d1965342 security: address all issues from automated PR review
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 10s
Test / frontend-typecheck (pull_request) Successful in 1m9s
Test / frontend-tests (pull_request) Successful in 1m13s
PR Review Automation / review (pull_request) Successful in 2m58s
Test / rust-clippy (pull_request) Successful in 3m50s
Test / rust-tests (pull_request) Successful in 5m12s
- Add missing CQL escaping for &, |, +, - characters
- Improve escape_wiql() to escape more dangerous characters: ", \, (, ), ~, *, ?, ;, =
- Sanitize HTML in excerpts using strip_html_tags() to prevent XSS
- Add unit tests for escape_wiql, escape_cql, canonicalize_url functions
- Document expand_query() behavior (always returns at least original query)
- All tests pass (158/158), cargo fmt and clippy pass
2026-04-14 20:26:05 -05:00
Shaun Arman
708e1e9c18 security: fix query expansion issues from PR review
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 12s
Test / frontend-typecheck (pull_request) Successful in 1m11s
Test / frontend-tests (pull_request) Successful in 1m16s
PR Review Automation / review (pull_request) Successful in 3m0s
Test / rust-clippy (pull_request) Successful in 3m50s
Test / rust-tests (pull_request) Successful in 5m0s
- Use MAX_EXPANDED_QUERIES constant in confluence_search.rs instead of hardcoded 3
- Improve escape_wiql() to escape more dangerous characters: ", \, (, ), ~, *, ?, ;, =
- Fix logging to show expanded_query instead of search_url in confluence_search.rs

All tests pass (142/142), cargo fmt and clippy pass.
2026-04-14 20:07:59 -05:00
Shaun Arman
5b45c6c418 fix(integrations): security and correctness improvements
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 12s
Test / frontend-typecheck (pull_request) Successful in 1m18s
Test / frontend-tests (pull_request) Successful in 1m21s
Test / rust-clippy (pull_request) Successful in 3m56s
PR Review Automation / review (pull_request) Successful in 4m20s
Test / rust-tests (pull_request) Successful in 5m22s
- Add url canonicalization for deduplication (strip fragments/query params)
- Add WIQL injection escaping for Azure DevOps work item searches
- Add CQL injection escaping for Confluence searches
- Add MAX_EXPANDED_QUERIES constant for consistency
- Fix logging to show expanded_query instead of search_url
- Add input validation for empty queries
- Add url crate dependency for URL parsing

All 142 tests pass.
2026-04-14 19:55:32 -05:00
Shaun Arman
096068ed2b feat(integrations): implement query expansion for semantic search
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 12s
Test / frontend-typecheck (pull_request) Successful in 1m11s
Test / frontend-tests (pull_request) Successful in 1m15s
PR Review Automation / review (pull_request) Successful in 3m13s
Test / rust-clippy (pull_request) Successful in 3m45s
Test / rust-tests (pull_request) Successful in 5m9s
- Add query_expansion.rs module with product synonyms and keyword extraction
- Update confluence_search.rs to use expanded queries
- Update servicenow_search.rs to use expanded queries
- Update azuredevops_search.rs to use expanded queries
- Update webview_fetch.rs to use expanded queries
- Fix extract_keywords infinite loop bug for non-alphanumeric endings

All 142 tests pass.
2026-04-14 19:37:27 -05:00
Shaun Arman
9248811076 fix: add --locked to cargo commands and improve version update script
Some checks failed
Test / rust-fmt-check (pull_request) Failing after 1m11s
Test / frontend-typecheck (pull_request) Successful in 1m18s
Test / frontend-tests (pull_request) Successful in 1m21s
Test / rust-clippy (pull_request) Failing after 3m25s
PR Review Automation / review (pull_request) Successful in 3m37s
Test / rust-tests (pull_request) Successful in 5m9s
- Add --locked to fmt, clippy, and test commands in CI
- Remove updateCargoLock() and rely on cargo generate-lockfile
- Add .git directory existence check in update-version.mjs
- Use package.json as dynamic fallback instead of hardcoded 0.2.50
- Ensure execSync uses shell: false explicitly
2026-04-13 17:54:16 -05:00
Shaun Arman
007d0ee9d5 chore: fix version update implementation
All checks were successful
PR Review Automation / review (pull_request) Successful in 2m18s
- Replace npm ci with npm install in CI
- Remove --locked flag from cargo clippy/test
- Add cargo generate-lockfile after version update
- Update update-version.mjs with semver validation
- Add build.rs for Rust-level version injection
2026-04-13 16:34:48 -05:00
Shaun Arman
9e1a9b1d34 feat: implement dynamic versioning from Git tags
Some checks failed
Test / rust-clippy (pull_request) Failing after 15s
Test / rust-tests (pull_request) Failing after 19s
Test / rust-fmt-check (pull_request) Successful in 55s
Test / frontend-typecheck (pull_request) Successful in 1m22s
Test / frontend-tests (pull_request) Successful in 1m26s
PR Review Automation / review (pull_request) Successful in 2m57s
- Add build.rs to read version from git describe --tags
- Create update-version.mjs script to sync version across files
- Add get_app_version() command to Rust backend
- Update App.tsx to use custom version command
- Run version update in CI before Rust checks
2026-04-13 16:12:03 -05:00
cdb1dd1dad chore: update CHANGELOG.md for v0.2.55 [skip ci] 2026-04-13 21:09:47 +00:00
6dbe40ef03 chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 20:25:56 +00:00
Shaun Arman
75fc3ca67c fix: add Windows nsis target and update CHANGELOG to v0.2.61
All checks were successful
Auto Tag / autotag (push) Successful in 6s
Auto Tag / wiki-sync (push) Successful in 6s
Auto Tag / changelog (push) Successful in 43s
Auto Tag / build-macos-arm64 (push) Successful in 3m0s
Auto Tag / build-linux-amd64 (push) Successful in 11m29s
Auto Tag / build-linux-arm64 (push) Successful in 13m31s
Auto Tag / build-windows-amd64 (push) Successful in 14m10s
- Update CHANGELOG to include releases v0.2.54 through v0.2.61
- Add 'nsis' to bundle targets in tauri.conf.json for Windows builds
- This fixes Windows artifact upload failures by enabling .exe/.msi generation

The Windows build was failing because tauri.conf.json only had Linux bundle
targets (['deb', 'rpm']). Without nsis target, no Windows installers were
produced, causing the upload step to fail with 'No Windows amd64 artifacts
were found'.
2026-04-13 15:25:05 -05:00
fdae6d6e6d chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 19:58:25 +00:00
Shaun Arman
d78181e8c0 chore: trigger release with fix
Some checks failed
Auto Tag / autotag (push) Successful in 7s
Auto Tag / wiki-sync (push) Successful in 6s
Auto Tag / changelog (push) Successful in 43s
Auto Tag / build-macos-arm64 (push) Successful in 4m25s
Auto Tag / build-linux-amd64 (push) Successful in 11m27s
Auto Tag / build-linux-arm64 (push) Successful in 13m25s
Auto Tag / build-windows-amd64 (push) Failing after 13m38s
2026-04-13 14:57:35 -05:00
Shaun Arman
b4ff52108a fix: remove AppImage from upload artifact patterns
Some checks failed
Auto Tag / autotag (push) Successful in 6s
Auto Tag / wiki-sync (push) Successful in 6s
Auto Tag / changelog (push) Has been cancelled
Auto Tag / build-linux-arm64 (push) Has been cancelled
Auto Tag / build-windows-amd64 (push) Has been cancelled
Auto Tag / build-macos-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
2026-04-13 14:57:14 -05:00
29a68c07e9 chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 18:43:07 +00:00
Shaun Arman
40a2c25428 chore: trigger changelog update for AppImage removal
Some checks failed
Auto Tag / autotag (push) Successful in 9s
Auto Tag / wiki-sync (push) Successful in 8s
Auto Tag / changelog (push) Successful in 44s
Auto Tag / build-macos-arm64 (push) Successful in 3m8s
Auto Tag / build-linux-amd64 (push) Successful in 11m29s
Auto Tag / build-linux-arm64 (push) Successful in 13m28s
Auto Tag / build-windows-amd64 (push) Failing after 7m46s
2026-04-13 13:42:15 -05:00
Shaun Arman
62e3570a15 fix: remove AppImage bundling to fix linux-amd64 build
Some checks failed
Auto Tag / autotag (push) Successful in 6s
Auto Tag / wiki-sync (push) Successful in 6s
Build CI Docker Images / windows-cross (push) Successful in 7s
Build CI Docker Images / linux-arm64 (push) Successful in 6s
Auto Tag / changelog (push) Has been cancelled
Auto Tag / build-windows-amd64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Auto Tag / build-macos-arm64 (push) Has been cancelled
Auto Tag / build-linux-arm64 (push) Has been cancelled
Build CI Docker Images / linux-amd64 (push) Successful in 2m37s
- Remove appimage from bundle targets in tauri.conf.json
- Remove linuxdeploy from Dockerfile
- Update Dockerfile to remove fuse dependency (not needed)
2026-04-13 13:41:56 -05:00
41e5753de6 chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 18:18:07 +00:00
Shaun Arman
25201eaac1 chore: trigger changelog update for latest fixes
Some checks failed
Auto Tag / autotag (push) Successful in 5s
Auto Tag / wiki-sync (push) Successful in 5s
Auto Tag / changelog (push) Successful in 1m37s
Auto Tag / build-macos-arm64 (push) Successful in 2m21s
Auto Tag / build-linux-amd64 (push) Failing after 13m17s
Auto Tag / build-windows-amd64 (push) Successful in 15m20s
Auto Tag / build-linux-arm64 (push) Successful in 13m46s
2026-04-13 13:16:23 -05:00
618eb6b43d chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 18:07:19 +00:00
Shaun Arman
5084dca5e3 fix: add fuse dependency for AppImage support
Some checks failed
Auto Tag / autotag (push) Successful in 6s
Auto Tag / wiki-sync (push) Successful in 5s
Build CI Docker Images / windows-cross (push) Successful in 6s
Build CI Docker Images / linux-arm64 (push) Successful in 6s
Auto Tag / changelog (push) Successful in 37s
Build CI Docker Images / linux-amd64 (push) Successful in 1m56s
Auto Tag / build-macos-arm64 (push) Successful in 2m27s
Auto Tag / build-linux-amd64 (push) Has been cancelled
Auto Tag / build-windows-amd64 (push) Has been cancelled
Auto Tag / build-linux-arm64 (push) Has been cancelled
2026-04-13 13:06:33 -05:00
Shaun Arman
6cbdcaed21 refactor: revert to original Dockerfile without manual linuxdeploy installation
- CI handles linuxdeploy download and execution via npx tauri build
2026-04-13 13:06:33 -05:00
Shaun Arman
8298506435 refactor: remove custom linuxdeploy install per CI; CI uses tauri-downloaded version 2026-04-13 13:06:33 -05:00
412c5e70f0 chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 17:01:51 +00:00
05f87a7bff Merge pull request 'fix: add missing ai_providers columns and fix linux-amd64 build' (#41) from fix/ai-provider-migration-issue into master
Some checks failed
Auto Tag / autotag (push) Successful in 14s
Auto Tag / wiki-sync (push) Successful in 14s
Build CI Docker Images / windows-cross (push) Successful in 11s
Build CI Docker Images / linux-arm64 (push) Successful in 10s
Auto Tag / changelog (push) Successful in 54s
Auto Tag / build-macos-arm64 (push) Successful in 2m57s
Auto Tag / build-linux-amd64 (push) Failing after 13m36s
Auto Tag / build-linux-arm64 (push) Successful in 15m7s
Auto Tag / build-windows-amd64 (push) Successful in 15m35s
Build CI Docker Images / linux-amd64 (push) Failing after 7s
Reviewed-on: #41
2026-04-13 17:00:50 +00:00
Shaun Arman
8e1d43da43 fix: address critical AI review issues
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 28s
Test / frontend-typecheck (pull_request) Successful in 1m29s
Test / frontend-tests (pull_request) Successful in 1m31s
PR Review Automation / review (pull_request) Successful in 3m28s
Test / rust-clippy (pull_request) Successful in 4m29s
Test / rust-tests (pull_request) Successful in 5m42s
- Fix linuxdeploy AppImage extraction using --appimage-extract
- Remove 'has no column named' from duplicate column error handling
- Use strftime instead of datetime for created_at default format
2026-04-13 08:50:34 -05:00
Shaun Arman
2d7aac8413 fix: address AI review findings
All checks were successful
Test / rust-fmt-check (pull_request) Successful in 15s
Test / frontend-typecheck (pull_request) Successful in 1m21s
Test / frontend-tests (pull_request) Successful in 1m25s
PR Review Automation / review (pull_request) Successful in 3m32s
Test / rust-clippy (pull_request) Successful in 4m1s
Test / rust-tests (pull_request) Successful in 5m18s
- Add -L flag to curl for linuxdeploy redirects
- Split migration 015 into 015_add_use_datastore_upload and 016_add_created_at
- Use separate execute calls for ALTER TABLE statements
- Add idempotency test for migration 015
- Use bool type for use_datastore_upload instead of i64
2026-04-13 08:38:43 -05:00
Shaun Arman
84c69fbea8 fix: add missing ai_providers columns and fix linux-amd64 build
Some checks failed
Test / rust-fmt-check (pull_request) Successful in 15s
Test / rust-clippy (pull_request) Failing after 17s
Test / frontend-typecheck (pull_request) Successful in 1m23s
Test / frontend-tests (pull_request) Successful in 1m23s
PR Review Automation / review (pull_request) Successful in 3m16s
Test / rust-tests (pull_request) Successful in 4m19s
- Add migration 015 to add use_datastore_upload and created_at columns
- Handle column-already-exists errors gracefully
- Update Dockerfile to install linuxdeploy for AppImage bundling
- Add fuse dependency for AppImage support
2026-04-13 08:22:08 -05:00
9bc570774a chore: update CHANGELOG.md for v0.2.53 [skip ci] 2026-04-13 03:19:05 +00:00
38 changed files with 2652 additions and 776 deletions

View File

@@ -322,7 +322,7 @@ jobs:
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/x86_64-unknown-linux-gnu/release/bundle -type f \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \))
\( -name "*.deb" -o -name "*.rpm" \))
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Linux amd64 artifacts were found to upload."
exit 1

View File

@@ -43,13 +43,13 @@ jobs:
git diff origin/${{ github.base_ref }}..HEAD > /tmp/pr_diff.txt
echo "diff_size=$(wc -l < /tmp/pr_diff.txt | tr -d ' ')" >> $GITHUB_OUTPUT
- name: Analyze with Ollama
- name: Analyze with LLM
id: analyze
if: steps.diff.outputs.diff_size != '0'
shell: bash
env:
OLLAMA_URL: https://ollama-ui.tftsr.com/ollama/v1
OLLAMA_API_KEY: ${{ secrets.OLLAMA_API_KEY }}
LITELLM_URL: http://172.0.0.29:11434/v1
LITELLM_API_KEY: ${{ secrets.OLLAMA_API_KEY }}
PR_TITLE: ${{ github.event.pull_request.title }}
PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
@@ -62,32 +62,32 @@ jobs:
| grep -v -E '^[+-].*[A-Za-z0-9+/]{40,}={0,2}([^A-Za-z0-9+/=]|$)')
PROMPT="Analyze the following code changes for correctness, security issues, and best practices. PR Title: ${PR_TITLE}\n\nDiff:\n${DIFF_CONTENT}\n\nProvide a review with: 1) Summary, 2) Bugs/errors, 3) Security issues, 4) Best practices. Give specific comments with suggested fixes."
BODY=$(jq -cn \
--arg model "qwen3-coder-next:latest" \
--arg model "qwen2.5-72b" \
--arg content "$PROMPT" \
'{model: $model, messages: [{role: "user", content: $content}], stream: false}')
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] PR #${PR_NUMBER} - Calling Ollama API (${#BODY} bytes)..."
HTTP_CODE=$(curl -s --max-time 120 --connect-timeout 30 \
--retry 3 --retry-delay 5 --retry-connrefused --retry-max-time 120 \
-o /tmp/ollama_response.json -w "%{http_code}" \
-X POST "$OLLAMA_URL/chat/completions" \
-H "Authorization: Bearer $OLLAMA_API_KEY" \
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] PR #${PR_NUMBER} - Calling liteLLM API (${#BODY} bytes)..."
HTTP_CODE=$(curl -s --max-time 300 --connect-timeout 30 \
--retry 3 --retry-delay 10 --retry-connrefused --retry-max-time 300 \
-o /tmp/llm_response.json -w "%{http_code}" \
-X POST "$LITELLM_URL/chat/completions" \
-H "Authorization: Bearer $LITELLM_API_KEY" \
-H "Content-Type: application/json" \
-d "$BODY")
echo "HTTP status: $HTTP_CODE"
echo "Response file size: $(wc -c < /tmp/ollama_response.json) bytes"
echo "Response file size: $(wc -c < /tmp/llm_response.json) bytes"
if [ "$HTTP_CODE" != "200" ]; then
echo "ERROR: Ollama returned HTTP $HTTP_CODE"
cat /tmp/ollama_response.json
echo "ERROR: liteLLM returned HTTP $HTTP_CODE"
cat /tmp/llm_response.json
exit 1
fi
if ! jq empty /tmp/ollama_response.json 2>/dev/null; then
echo "ERROR: Invalid JSON response from Ollama"
cat /tmp/ollama_response.json
if ! jq empty /tmp/llm_response.json 2>/dev/null; then
echo "ERROR: Invalid JSON response from liteLLM"
cat /tmp/llm_response.json
exit 1
fi
REVIEW=$(jq -r '.choices[0].message.content // empty' /tmp/ollama_response.json)
REVIEW=$(jq -r '.choices[0].message.content // empty' /tmp/llm_response.json)
if [ -z "$REVIEW" ]; then
echo "ERROR: No content in Ollama response"
echo "ERROR: No content in liteLLM response"
exit 1
fi
echo "Review length: ${#REVIEW} chars"
@@ -109,11 +109,11 @@ jobs:
if [ -f "/tmp/pr_review.txt" ] && [ -s "/tmp/pr_review.txt" ]; then
REVIEW_BODY=$(head -c 65536 /tmp/pr_review.txt)
BODY=$(jq -n \
--arg body "🤖 Automated PR Review:\n\n${REVIEW_BODY}\n\n---\n*this is an automated review from Ollama*" \
--arg body "Automated PR Review (qwen2.5-72b via liteLLM):\n\n${REVIEW_BODY}\n\n---\n*automated code review*" \
'{body: $body, event: "COMMENT"}')
else
BODY=$(jq -n \
'{body: "⚠️ Automated PR Review could not be completed — Ollama analysis failed or produced no output.", event: "COMMENT"}')
'{body: "Automated PR Review could not be completed - LLM analysis failed or produced no output.", event: "COMMENT"}')
fi
HTTP_CODE=$(curl -s --max-time 30 --connect-timeout 10 \
-o /tmp/review_post_response.json -w "%{http_code}" \
@@ -131,4 +131,4 @@ jobs:
- name: Cleanup
if: always()
shell: bash
run: rm -f /tmp/pr_diff.txt /tmp/ollama_response.json /tmp/pr_review.txt /tmp/review_post_response.json
run: rm -f /tmp/pr_diff.txt /tmp/llm_response.json /tmp/pr_review.txt /tmp/review_post_response.json

test.yml
View File

@@ -1,6 +1,9 @@
name: Test
on:
push:
branches:
- master
pull_request:
jobs:
@@ -37,6 +40,11 @@ jobs:
key: ${{ runner.os }}-cargo-linux-amd64-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-linux-amd64-
- name: Install dependencies
run: npm install --legacy-peer-deps
- name: Update version from Git
run: node scripts/update-version.mjs
- run: cargo generate-lockfile --manifest-path src-tauri/Cargo.toml
- run: cargo fmt --manifest-path src-tauri/Cargo.toml --check
rust-clippy:
@@ -72,7 +80,7 @@ jobs:
key: ${{ runner.os }}-cargo-linux-amd64-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-linux-amd64-
- run: cargo clippy --locked --manifest-path src-tauri/Cargo.toml -- -D warnings
- run: cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings
rust-tests:
runs-on: ubuntu-latest
@@ -107,7 +115,7 @@ jobs:
key: ${{ runner.os }}-cargo-linux-amd64-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-linux-amd64-
- run: cargo test --locked --manifest-path src-tauri/Cargo.toml -- --test-threads=1
- run: cargo test --manifest-path src-tauri/Cargo.toml -- --test-threads=1
frontend-typecheck:
runs-on: ubuntu-latest

CHANGELOG.md
View File

@@ -6,6 +6,102 @@ CI, chore, and build changes are excluded.
## [Unreleased]
### Bug Fixes
- Harden timeline event input validation and atomic writes
### Documentation
- Update wiki for timeline events and incident response methodology
### Features
- Add timeline_events table, model, and CRUD commands
- Populate RCA and postmortem docs with real timeline data
- Wire incident response methodology into AI and record triage events
## [0.2.65] — 2026-04-15
### Bug Fixes
- Add --locked to cargo commands and improve version update script
- Remove invalid --locked flag from cargo commands and fix format string
- **integrations**: Security and correctness improvements
- Correct WIQL syntax and escape_wiql implementation
### Features
- Implement dynamic versioning from Git tags
- **integrations**: Implement query expansion for semantic search
### Security
- Fix query expansion issues from PR review
- Address all issues from automated PR review
## [0.2.63] — 2026-04-13
### Bug Fixes
- Add Windows nsis target and update CHANGELOG to v0.2.61
## [0.2.61] — 2026-04-13
### Bug Fixes
- Remove AppImage from upload artifact patterns
## [0.2.59] — 2026-04-13
### Bug Fixes
- Remove AppImage bundling to fix linux-amd64 build
## [0.2.57] — 2026-04-13
### Bug Fixes
- Add fuse dependency for AppImage support
### Refactoring
- Remove custom linuxdeploy install per CI; CI uses tauri-downloaded version
- Revert to original Dockerfile without manual linuxdeploy installation
## [0.2.56] — 2026-04-13
### Bug Fixes
- Add missing ai_providers columns and fix linux-amd64 build
- Address AI review findings
- Address critical AI review issues
## [0.2.55] — 2026-04-13
### Bug Fixes
- **ci**: Use Gitea file API to push CHANGELOG.md — eliminates non-fast-forward rejection
- **ci**: Harden CHANGELOG.md API push step per review
## [0.2.54] — 2026-04-13
### Bug Fixes
- **ci**: Correct git-cliff archive path in tar extraction
## [0.2.53] — 2026-04-13
### Features
- **ci**: Add automated changelog generation via git-cliff
## [0.2.52] — 2026-04-13
### Bug Fixes
- **ci**: Add APPIMAGE_EXTRACT_AND_RUN to build-linux-amd64
## [0.2.51] — 2026-04-13
### Bug Fixes
- **ci**: Address AI review — rustup idempotency and cargo --locked
- **ci**: Replace docker:24-cli with alpine + docker-cli in build-images
- **docker**: Add ca-certificates to arm64 base image step 1
- **ci**: Resolve test.yml failures — Cargo.lock, updated test assertions
- **ci**: Address second AI review — || true, ca-certs, cache@v4, key suffixes
### Documentation
- **docker**: Expand rebuild trigger comments to include OpenSSL and Tauri CLI
### Performance
- **ci**: Use pre-baked images and add cargo/npm caching
## [0.2.50] — 2026-04-12
### Bug Fixes
- Rename GITEA_TOKEN to TF_TOKEN to comply with naming restrictions
- Remove actions/checkout to avoid Node.js dependency
@@ -25,22 +121,10 @@ CI, chore, and build changes are excluded.
- Replace github.server_url with hardcoded gogs.tftsr.com for container access
- Revert to two-dot diff — three-dot requires merge base unavailable in shallow clone
- Harden pr-review workflow — secret redaction, log safety, auth header
- **ci**: Address AI review — rustup idempotency and cargo --locked
- **ci**: Replace docker:24-cli with alpine + docker-cli in build-images
- **docker**: Add ca-certificates to arm64 base image step 1
- **ci**: Resolve test.yml failures — Cargo.lock, updated test assertions
- **ci**: Address second AI review — || true, ca-certs, cache@v4, key suffixes
- **ci**: Add APPIMAGE_EXTRACT_AND_RUN to build-linux-amd64
### Documentation
- **docker**: Expand rebuild trigger comments to include OpenSSL and Tauri CLI
### Features
- Add automated PR review workflow with Ollama AI
### Performance
- **ci**: Use pre-baked images and add cargo/npm caching
## [0.2.49] — 2026-04-10
### Bug Fixes
@@ -104,62 +188,190 @@ CI, chore, and build changes are excluded.
## [0.2.40] — 2026-04-06
### Bug Fixes
- Add user_id support and OAuth shell permission (v0.2.6)
- Use Wiki secret for authenticated wiki sync (v0.2.8)
- Persist integration settings and implement persistent browser windows
- ARM64 build uses native target instead of cross-compile
- Resolve clippy uninlined_format_args in integrations and related modules
- Resolve clippy format-args failures and OpenSSL vendoring issue
- Resolve macOS bundle path after app rename
- **ci**: Make release artifacts reliable across platforms
- **ci**: Harden release asset uploads for reruns
- **ci**: Trigger release workflow from auto-tag pushes
- **ci**: Guarantee release jobs run after auto-tag
- **ci**: Use stable auto-tag job outputs for release fanout
- **ci**: Run post-tag release builds without job-output gating
- **ci**: Repair auto-tag workflow yaml so jobs trigger
- **ci**: Force explicit linux arm64 target for release artifacts
- **ci**: Run linux arm release natively and enforce arm artifacts
- **ci**: Remove explicit docker.sock mount — act_runner mounts it automatically
## [0.2.36] — 2026-04-06
### Features
- **ci**: Add persistent pre-baked Docker builder images
## [0.2.35] — 2026-04-06
### Bug Fixes
- **ci**: Skip Ollama download on macOS build — runner has no access to GitHub binary assets
- **ci**: Remove all Ollama bundle download steps — use UI download button instead
### Refactoring
- **ollama**: Remove download/install buttons — show plain install instructions only
## [0.2.34] — 2026-04-06
### Bug Fixes
- **security**: Add path canonicalization and actionable permission error in install_ollama_from_bundle
### Features
- **ui**: Fix model dropdown, auth prefill, PII persistence, theme toggle, and Ollama bundle
## [0.2.33] — 2026-04-05
### Features
- **rebrand**: Rename binary to trcaa and auto-generate DB key
## [0.2.32] — 2026-04-05
### Bug Fixes
- **ci**: Restrict arm64 bundles to deb,rpm — skip AppImage
## [0.2.31] — 2026-04-05
### Bug Fixes
- **ci**: Set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling
## [0.2.30] — 2026-04-05
### Bug Fixes
- **ci**: Add make to arm64 host tools for OpenSSL vendored build
## [0.2.28] — 2026-04-05
### Bug Fixes
- **ci**: Use POSIX dot instead of source in arm64 build step
## [0.2.27] — 2026-04-05
### Bug Fixes
- **ci**: Remove GITHUB_PATH append that was breaking arm64 install step
## [0.2.26] — 2026-04-05
### Bug Fixes
- **ci**: Switch build-linux-arm64 to Ubuntu 22.04 with ports mirror
### Documentation
- Update CI pipeline wiki and add ticket summary for arm64 fix
## [0.2.25] — 2026-04-05
### Bug Fixes
- **ci**: Rebuild apt sources with per-arch entries before arm64 cross-compile install
- **ci**: Add workflow_dispatch and concurrency guard to auto-tag
- **ci**: Replace heredoc with printf in arm64 install step
## [0.2.24] — 2026-04-05
### Bug Fixes
- **ci**: Fix arm64 cross-compile, drop cargo install tauri-cli, move wiki-sync
## [0.2.23] — 2026-04-05
### Bug Fixes
- **ci**: Unblock release jobs and namespace linux artifacts by arch
- **security**: Harden secret handling and audit integrity
- **pii**: Remove lookahead from hostname regex, fix fmt in analysis test
- **security**: Enforce PII redaction before AI log transmission
- **ci**: Unblock release jobs and namespace linux artifacts by arch
- **ci**: Fix arm64 cross-compile, drop cargo install tauri-cli, move wiki-sync
- **ci**: Rebuild apt sources with per-arch entries before arm64 cross-compile install
- **ci**: Add workflow_dispatch and concurrency guard to auto-tag
- **ci**: Replace heredoc with printf in arm64 install step
- **ci**: Switch build-linux-arm64 to Ubuntu 22.04 with ports mirror
- **ci**: Remove GITHUB_PATH append that was breaking arm64 install step
- **ci**: Use POSIX dot instead of source in arm64 build step
- **ci**: Add make to arm64 host tools for OpenSSL vendored build
- **ci**: Set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling
- **ci**: Restrict arm64 bundles to deb,rpm — skip AppImage
- **security**: Add path canonicalization and actionable permission error in install_ollama_from_bundle
- **ci**: Skip Ollama download on macOS build — runner has no access to GitHub binary assets
- **ci**: Remove all Ollama bundle download steps — use UI download button instead
- **ci**: Remove explicit docker.sock mount — act_runner mounts it automatically
## [0.2.22] — 2026-04-05
### Bug Fixes
- **ci**: Run linux arm release natively and enforce arm artifacts
## [0.2.21] — 2026-04-05
### Bug Fixes
- **ci**: Force explicit linux arm64 target for release artifacts
## [0.2.20] — 2026-04-05
### Refactoring
- **ci**: Remove standalone release workflow
## [0.2.19] — 2026-04-05
### Bug Fixes
- **ci**: Guarantee release jobs run after auto-tag
- **ci**: Use stable auto-tag job outputs for release fanout
- **ci**: Run post-tag release builds without job-output gating
- **ci**: Repair auto-tag workflow yaml so jobs trigger
## [0.2.18] — 2026-04-05
### Bug Fixes
- **ci**: Trigger release workflow from auto-tag pushes
## [0.2.17] — 2026-04-05
### Bug Fixes
- **ci**: Harden release asset uploads for reruns
## [0.2.16] — 2026-04-05
### Bug Fixes
- **ci**: Make release artifacts reliable across platforms
## [0.2.14] — 2026-04-04
### Bug Fixes
- Resolve macOS bundle path after app rename
## [0.2.13] — 2026-04-04
### Bug Fixes
- Resolve clippy uninlined_format_args in integrations and related modules
- Resolve clippy format-args failures and OpenSSL vendoring issue
### Features
- Add custom_rest provider mode and rebrand application name
## [0.2.12] — 2026-04-04
### Bug Fixes
- ARM64 build uses native target instead of cross-compile
## [0.2.11] — 2026-04-04
### Bug Fixes
- Persist integration settings and implement persistent browser windows
## [0.2.10] — 2026-04-03
### Features
- Complete webview cookie extraction implementation
## [0.2.9] — 2026-04-03
### Features
- Add multi-mode authentication for integrations (v0.2.10)
## [0.2.8] — 2026-04-03
### Features
- Add temperature and max_tokens support for Custom REST providers (v0.2.9)
## [0.2.7] — 2026-04-03
### Bug Fixes
- Use Wiki secret for authenticated wiki sync (v0.2.8)
### Documentation
- Update wiki for v0.2.6 - integrations and Custom REST provider
### Features
- Add automatic wiki sync to CI workflow (v0.2.7)
## [0.2.6] — 2026-04-03
### Bug Fixes
- Add user_id support and OAuth shell permission (v0.2.6)
## [0.2.5] — 2026-04-03
### Documentation
- Add Custom REST provider documentation
- Update wiki for v0.2.6 - integrations and Custom REST provider
- Update CI pipeline wiki and add ticket summary for arm64 fix
### Features
- Implement Confluence, ServiceNow, and Azure DevOps REST API clients
- Add Custom REST provider support
- Add automatic wiki sync to CI workflow (v0.2.7)
- Add temperature and max_tokens support for Custom REST providers (v0.2.9)
- Add multi-mode authentication for integrations (v0.2.10)
- Complete webview cookie extraction implementation
- Add custom_rest provider mode and rebrand application name
- **rebrand**: Rename binary to trcaa and auto-generate DB key
- **ui**: Fix model dropdown, auth prefill, PII persistence, theme toggle, and Ollama bundle
- **ci**: Add persistent pre-baked Docker builder images
### Refactoring
- **ci**: Remove standalone release workflow
- **ollama**: Remove download/install buttons — show plain install instructions only
## [0.2.4] — 2026-04-03
@@ -180,6 +392,33 @@ CI, chore, and build changes are excluded.
## [0.2.1] — 2026-04-03
### Bug Fixes
- Implement native DOCX export without pandoc dependency
### Features
- Add AI disclaimer modal before creating new issues
## [0.1.0] — 2026-04-03
### Bug Fixes
- Resolve all clippy lints (uninlined format args, range::contains, push_str single chars)
- Inline format args for Rust 1.88 clippy compatibility
- Retain GPU-VRAM-eligible models in recommender even when RAM is low
- Use alpine/git with explicit checkout for tag-based release builds
- Set CI=true for cargo tauri build — Woodpecker sets CI=woodpecker which Tauri CLI rejects
- Arm64 cross-compilation — add multiarch pkg-config sysroot setup
- Remove arm64 from release pipeline — webkit2gtk multiarch conflict on x86_64 host
- Write artifacts to workspace (shared between steps), not /artifacts/
- Upload step needs gogs_default network to reach Gogs API (host firewall blocks default bridge)
- Use bundled-sqlcipher-vendored-openssl for portable Windows cross-compilation
- Add make to windows build step (required by vendored OpenSSL)
- Replace empty icon placeholder files with real app icons
- Suppress MinGW auto-export to resolve Windows DLL ordinal overflow
- Use when: platform: for arm64 step routing (Woodpecker 0.15.4 compat)
- Remove unused tauri-plugin-cli causing startup crash
- Use $GITHUB_REF_NAME env var instead of ${{ github.ref_name }} expression
- Remove unused tauri-plugin-updater + SQLCipher 16KB page size
- Prevent WebKit/GTK system theme from overriding input text colors on Linux
- Set SQLCipher cipher_page_size BEFORE first database access
- Button text visibility, toggle contrast, create_issue IPC, ad-hoc codesign
- Dropdown text invisible on macOS + correct codesign order for DMG
- Add explicit text-foreground to SelectTrigger, SelectValue, and SelectItem
@@ -199,49 +438,6 @@ CI, chore, and build changes are excluded.
- Improve release artifact upload error handling
- Install jq in Linux/Windows build containers
- Improve download button visibility and add DOCX export
- Implement native DOCX export without pandoc dependency
### Documentation
- Add LiteLLM + AWS Bedrock integration guide
### Features
- Add macOS arm64 act_runner and release build job
- Auto-increment patch tag on every merge to master
- Inline file/screenshot attachment in triage chat
- Close issues, restore history, auto-save resolution steps
- Expand domains to 13 — add Telephony, Security/Vault, Public Safety, Application, Automation/CI-CD
- Add HPE, Dell, Identity domains + expand k8s/security/observability/VESTA NXT
- Add AI disclaimer modal before creating new issues
## [0.1.1] — 2026-03-30
### Bug Fixes
- Remove unused tauri-plugin-updater + SQLCipher 16KB page size
- Prevent WebKit/GTK system theme from overriding input text colors on Linux
- Set SQLCipher cipher_page_size BEFORE first database access
### Documentation
- Update README, wiki, and UI version to v0.1.1
## [0.1.0] — 2026-03-29
### Bug Fixes
- Resolve all clippy lints (uninlined format args, range::contains, push_str single chars)
- Inline format args for Rust 1.88 clippy compatibility
- Retain GPU-VRAM-eligible models in recommender even when RAM is low
- Use alpine/git with explicit checkout for tag-based release builds
- Set CI=true for cargo tauri build — Woodpecker sets CI=woodpecker which Tauri CLI rejects
- Arm64 cross-compilation — add multiarch pkg-config sysroot setup
- Remove arm64 from release pipeline — webkit2gtk multiarch conflict on x86_64 host
- Write artifacts to workspace (shared between steps), not /artifacts/
- Upload step needs gogs_default network to reach Gogs API (host firewall blocks default bridge)
- Use bundled-sqlcipher-vendored-openssl for portable Windows cross-compilation
- Add make to windows build step (required by vendored OpenSSL)
- Replace empty icon placeholder files with real app icons
- Suppress MinGW auto-export to resolve Windows DLL ordinal overflow
- Use when: platform: for arm64 step routing (Woodpecker 0.15.4 compat)
- Remove unused tauri-plugin-cli causing startup crash
- Use $GITHUB_REF_NAME env var instead of ${{ github.ref_name }} expression
### Documentation
- Update PLAN.md with accurate implementation status
@@ -251,17 +447,21 @@ CI, chore, and build changes are excluded.
- Update README and wiki for v0.1.0-alpha release
- Remove broken arm64 CI step, document Woodpecker 0.15.4 limitation
- Update README and wiki for Gitea Actions migration
- Update README, wiki, and UI version to v0.1.1
- Add LiteLLM + AWS Bedrock integration guide
### Features
- Initial implementation of TFTSR IT Triage & RCA application
- Add Windows amd64 cross-compile to release pipeline; add arm64 QEMU agent
- Add native linux/arm64 release build step
- Add macOS arm64 act_runner and release build job
- Auto-increment patch tag on every merge to master
- Inline file/screenshot attachment in triage chat
- Close issues, restore history, auto-save resolution steps
- Expand domains to 13 — add Telephony, Security/Vault, Public Safety, Application, Automation/CI-CD
- Add HPE, Dell, Identity domains + expand k8s/security/observability/VESTA NXT
### Security
- Rotate exposed token, redact from PLAN.md, add secret patterns to .gitignore
## [0.1.0-test] — 2026-03-15
### Features
- Initial implementation of TFTSR IT Triage & RCA application

Architecture.md
View File

@@ -50,7 +50,7 @@ All command handlers receive `State<'_, AppState>` as a Tauri-injected parameter
| `commands/integrations.rs` | Confluence / ServiceNow / ADO — v0.2 stubs |
| `ai/provider.rs` | `Provider` trait + `create_provider()` factory |
| `pii/detector.rs` | Multi-pattern PII scanner with overlap resolution |
| `db/migrations.rs` | Versioned schema (12 migrations in `_migrations` table) |
| `db/migrations.rs` | Versioned schema (17 migrations in `_migrations` table) |
| `db/models.rs` | All DB types — see `IssueDetail` note below |
| `docs/rca.rs` + `docs/postmortem.rs` | Markdown template builders |
| `audit/log.rs` | `write_audit_event()` — called before every external send |
@@ -176,6 +176,55 @@ pub struct IssueDetail {
Use `detail.issue.title`, **not** `detail.title`.
## Incident Response Methodology
The application integrates a comprehensive incident response framework via system prompt injection. The `INCIDENT_RESPONSE_FRAMEWORK` constant in `src/lib/domainPrompts.ts` is appended to all 17 domain-specific system prompts (Linux, Windows, Network, Kubernetes, Databases, Virtualization, Hardware, Observability, and others).
**5-Phase Framework:**
1. **Detection & Evidence Gathering** — Initial issue assessment, log collection, PII redaction
2. **Diagnosis & Hypothesis Testing** — AI-assisted analysis, pattern matching against known incidents
3. **Root Cause Analysis with 5-Whys** — Iterative questioning to identify underlying cause (steps 1-5)
4. **Resolution & Prevention** — Remediation planning and implementation
5. **Post-Incident Review** — Timeline-based blameless post-mortem and lessons learned
**System Prompt Injection:**
The `chat_message` command accepts an optional `system_prompt` parameter. If provided, it prepends domain expertise before the conversation history. If omitted, the framework selects the appropriate domain prompt based on the issue category. This allows (see the sketch after this list):
- **Specialized expertise**: Different frameworks for Linux vs. Kubernetes vs. Network incidents
- **Flexible override**: Users can inject custom system prompts for cross-domain problems
- **Consistent methodology**: All 17 domain prompts follow the same 5-phase incident response structure
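A minimal usage sketch of the override path, written against the `chatMessageCmd` wrapper documented in IPC-Commands.md below; the import paths and the `ProviderConfig` export here are assumptions for illustration:
```typescript
// Sketch only: overriding the automatic per-category prompt for a
// cross-domain incident. Import paths and the ProviderConfig export are
// assumptions; chatMessageCmd and INCIDENT_RESPONSE_FRAMEWORK are named
// in this changeset.
import { chatMessageCmd, type ProviderConfig } from './lib/tauriCommands';
import { INCIDENT_RESPONSE_FRAMEWORK } from './lib/domainPrompts';

async function askCrossDomain(issueId: string, message: string, provider: ProviderConfig) {
  // Combine custom expertise with the shared 5-phase framework, then pass it
  // as the optional fourth argument to bypass category-based prompt selection.
  const systemPrompt =
    'You are a senior engineer spanning network and Kubernetes domains.\n\n' +
    INCIDENT_RESPONSE_FRAMEWORK;
  return chatMessageCmd(issueId, message, provider, systemPrompt);
}
```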
**Timeline Event Recording:**
Timeline events are recorded non-blockingly at key triage moments (a fire-and-forget sketch follows the mapping below):
```
Issue Creation → triage_started
Log Upload → log_uploaded (metadata: file_name, file_size)
Why-Level Progression → why_level_advanced (metadata: from_level → to_level)
Root Cause Identified → root_cause_identified (metadata: root_cause, confidence)
RCA Generated → rca_generated (metadata: doc_id, section_count)
Postmortem Generated → postmortem_generated (metadata: doc_id, timeline_events_count)
Document Exported → document_exported (metadata: format, file_path)
```
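A fire-and-forget sketch of one such recording, assuming the `addTimelineEventCmd` wrapper documented in IPC-Commands.md (import path assumed):
```typescript
// Sketch: non-blocking timeline write. A failed write is logged and
// swallowed so it can never stall the triage flow.
import { addTimelineEventCmd } from './lib/tauriCommands';

function recordLogUploaded(issueId: string, fileName: string, fileSize: number): void {
  addTimelineEventCmd(issueId, 'log_uploaded', `Log file ${fileName} uploaded`, {
    file_name: fileName,
    file_size: fileSize,
  }).catch((err) => console.warn('timeline event not recorded:', err));
}
```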
**Document Generation:**
RCA and Postmortem generators now use real timeline event data instead of placeholders:
- **RCA**: Incorporates timeline to show detection-to-root-cause progression
- **Postmortem**: Uses full timeline to demonstrate the complete incident lifecycle and response effectiveness
Timeline events are stored in the `timeline_events` table (indexed by issue_id and created_at for fast retrieval) and dual-written to `audit_log` for security/compliance purposes.
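A sketch of that flow, using only commands documented in IPC-Commands.md (import path assumed):
```typescript
// Sketch: both generators read the recorded timeline internally, so fetching
// the events first is only for logging/inspection here.
import {
  generateRcaCmd,
  generatePostmortemCmd,
  getTimelineEventsCmd,
} from './lib/tauriCommands';

async function buildIncidentDocs(issueId: string) {
  const events = await getTimelineEventsCmd(issueId);
  console.log(`building documents from ${events.length} timeline events`);
  const rca = await generateRcaCmd(issueId);
  const postmortem = await generatePostmortemCmd(issueId);
  return { rca, postmortem };
}
```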
## Application Startup Sequence
```

Database.md
View File

@@ -2,7 +2,7 @@
## Overview
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 12 versioned migrations are tracked in the `_migrations` table.
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 17 versioned migrations are tracked in the `_migrations` table.
**DB file location:** `{app_data_dir}/tftsr.db`
@@ -38,7 +38,7 @@ pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
---
## Schema (11 Migrations)
## Schema (17 Migrations)
### 001 — issues
@@ -245,6 +245,51 @@ CREATE TABLE image_attachments (
- Basic auth (ServiceNow): Store encrypted password
- One credential per service (enforced by UNIQUE constraint)
### 017 — timeline_events (Incident Response Timeline)
```sql
CREATE TABLE timeline_events (
id TEXT PRIMARY KEY,
issue_id TEXT NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
event_type TEXT NOT NULL,
description TEXT NOT NULL,
metadata TEXT, -- JSON object with event-specific data
created_at TEXT NOT NULL
);
CREATE INDEX idx_timeline_events_issue ON timeline_events(issue_id);
CREATE INDEX idx_timeline_events_time ON timeline_events(created_at);
```
**Event Types:**
- `triage_started` — Incident response begins, initial issue properties recorded
- `log_uploaded` — Log file uploaded and analyzed
- `why_level_advanced` — 5-Whys entry completed, progression to next level
- `root_cause_identified` — Root cause determined from analysis
- `rca_generated` — Root Cause Analysis document created
- `postmortem_generated` — Post-mortem document created
- `document_exported` — Document exported to file (MD or PDF)
**Metadata Structure (JSON):**
```json
{
"triage_started": {"severity": "high", "category": "network"},
"log_uploaded": {"file_name": "app.log", "file_size": 2048576},
"why_level_advanced": {"from_level": 2, "to_level": 3, "question": "Why did the service timeout?"},
"root_cause_identified": {"root_cause": "DNS resolution failure", "confidence": 0.95},
"rca_generated": {"doc_id": "doc_abc123", "section_count": 7},
"postmortem_generated": {"doc_id": "doc_def456", "timeline_events_count": 12},
"document_exported": {"format": "pdf", "file_path": "/home/user/docs/rca.pdf"}
}
```
**Design Notes:**
- Timeline events are **queryable** (indexed by issue_id and created_at) for document generation
- Dual-write: Events recorded to both `timeline_events` and `audit_log` — timeline for chronological reporting, audit_log for security/compliance
- `created_at`: TEXT UTC timestamp (`YYYY-MM-DD HH:MM:SS`)
- Non-blocking writes: Timeline events recorded asynchronously at key triage moments
- Cascade delete from issues ensures cleanup
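For illustration only, the input rules from commit 8b0cbc3ce8 (event-type whitelist; metadata must be valid JSON under 10 KB), restated as a TypeScript predicate — the enforcing implementation lives in the Rust backend:
```typescript
// Illustration only: restates the backend validation rules in TypeScript.
const TIMELINE_EVENT_TYPES = new Set([
  'triage_started', 'log_uploaded', 'why_level_advanced',
  'root_cause_identified', 'rca_generated', 'postmortem_generated',
  'document_exported',
]);

function isValidTimelineInput(eventType: string, metadata?: string): boolean {
  if (!TIMELINE_EVENT_TYPES.has(eventType)) return false; // M-3: whitelist of 7 types
  if (metadata !== undefined) {
    if (new TextEncoder().encode(metadata).length > 10 * 1024) return false; // M-2/M-4: size cap
    try { JSON.parse(metadata); } catch { return false; } // M-2/M-4: must be valid JSON
  }
  return true;
}
```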
---
## Key Design Notes
@@ -289,4 +334,13 @@ pub struct AuditEntry {
pub user_id: String,
pub details: Option<String>,
}
pub struct TimelineEvent {
pub id: String,
pub issue_id: String,
pub event_type: String,
pub description: String,
pub metadata: Option<String>, // JSON
pub created_at: String,
}
```

IPC-Commands.md
View File

@@ -62,11 +62,27 @@ updateFiveWhyCmd(entryId: string, answer: string) → void
```
Sets or updates the answer for an existing 5-Whys entry.
### `get_timeline_events`
```typescript
getTimelineEventsCmd(issueId: string) → TimelineEvent[]
```
Retrieves all timeline events for an issue, ordered by `created_at` ascending. A usage sketch follows the interface definition below.
```typescript
interface TimelineEvent {
id: string;
issue_id: string;
event_type: string; // One of: triage_started, log_uploaded, why_level_advanced, etc.
description: string;
metadata?: Record<string, any>; // Event-specific JSON data
created_at: string; // UTC timestamp
}
```
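A usage sketch under the same assumptions (import path assumed; `created_at` is a UTC `YYYY-MM-DD HH:MM:SS` string per Database.md), deriving the duration metric the document generators report:
```typescript
// Sketch: total incident duration in minutes, from first to last event.
import { getTimelineEventsCmd } from './lib/tauriCommands';

async function incidentDurationMinutes(issueId: string): Promise<number | null> {
  const events = await getTimelineEventsCmd(issueId); // ordered by created_at ascending
  if (events.length < 2) return null;
  const toMs = (ts: string) => new Date(ts.replace(' ', 'T') + 'Z').getTime();
  const first = toMs(events[0].created_at);
  const last = toMs(events[events.length - 1].created_at);
  return Math.round((last - first) / 60_000);
}
```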
### `add_timeline_event`
```typescript
addTimelineEventCmd(issueId: string, eventType: string, description: string) → TimelineEvent
addTimelineEventCmd(issueId: string, eventType: string, description: string, metadata?: Record<string, any>) → TimelineEvent
```
Records a timestamped event in the issue timeline.
Records a timestamped event in the issue timeline. Dual-writes to both `timeline_events` (for document generation) and `audit_log` (for security audit trail).
---
@@ -137,9 +153,9 @@ Sends selected (redacted) log files to the AI provider with an analysis prompt.
### `chat_message`
```typescript
chatMessageCmd(issueId: string, message: string, providerConfig: ProviderConfig) → ChatResponse
chatMessageCmd(issueId: string, message: string, providerConfig: ProviderConfig, systemPrompt?: string) → ChatResponse
```
Sends a message in the ongoing triage conversation. Domain system prompt is injected automatically on first message. AI response is parsed for why-level indicators (1-5).
Sends a message in the ongoing triage conversation. Optional `systemPrompt` parameter allows prepending domain expertise before conversation history. If not provided, the domain-specific system prompt for the issue category is injected automatically on first message. AI response is parsed for why-level indicators (1-5).
### `list_providers`
```typescript
@@ -155,13 +171,13 @@
```typescript
generateRcaCmd(issueId: string) → Document
```
Builds an RCA Markdown document from the issue data, 5-Whys answers, and timeline.
Builds an RCA Markdown document from the issue data, 5-Whys answers, and timeline events. Uses real incident response timeline (log uploads, why-level progression, root cause identification) instead of placeholders.
### `generate_postmortem`
```typescript
generatePostmortemCmd(issueId: string) → Document
```
Builds a blameless post-mortem Markdown document.
Builds a blameless post-mortem Markdown document. Incorporates timeline events to show the full incident lifecycle: detection, diagnosis, resolution, and post-incident review phases.
### `update_document`
```typescript

package.json
View File

@@ -1,11 +1,12 @@
{
"name": "tftsr",
"private": true,
"version": "0.2.50",
"version": "0.2.62",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc && vite build",
"version:update": "node scripts/update-version.mjs",
"preview": "vite preview",
"tauri": "tauri",
"test": "vitest",

111
scripts/update-version.mjs Normal file
View File

@@ -0,0 +1,111 @@
#!/usr/bin/env node
import { execSync } from 'child_process';
import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const projectRoot = resolve(__dirname, '..');
/**
* Validate version is semver-compliant (X.Y.Z)
*/
function isValidSemver(version) {
return /^[0-9]+\.[0-9]+\.[0-9]+$/.test(version);
}
function validateGitRepo(root) {
if (!existsSync(resolve(root, '.git'))) {
throw new Error(`Not a Git repository: ${root}`);
}
}
function getVersionFromGit() {
validateGitRepo(projectRoot);
try {
const output = execSync('git describe --tags --abbrev=0', {
encoding: 'utf-8',
cwd: projectRoot,
shell: false
});
let version = output.trim();
// Remove v prefix
version = version.replace(/^v/, '');
// Validate it's a valid semver
if (!isValidSemver(version)) {
const pkgJsonVersion = getFallbackVersion();
console.warn(`Invalid version format "${version}" from git describe, using package.json fallback: ${pkgJsonVersion}`);
return pkgJsonVersion;
}
return version;
} catch (e) {
const pkgJsonVersion = getFallbackVersion();
console.warn(`Failed to get version from Git tags, using package.json fallback: ${pkgJsonVersion}`);
return pkgJsonVersion;
}
}
function getFallbackVersion() {
const pkgPath = resolve(projectRoot, 'package.json');
if (!existsSync(pkgPath)) {
return '0.2.50';
}
try {
const content = readFileSync(pkgPath, 'utf-8');
const json = JSON.parse(content);
return json.version || '0.2.50';
} catch {
return '0.2.50';
}
}
function updatePackageJson(version) {
const fullPath = resolve(projectRoot, 'package.json');
if (!existsSync(fullPath)) {
throw new Error(`File not found: ${fullPath}`);
}
const content = readFileSync(fullPath, 'utf-8');
const json = JSON.parse(content);
json.version = version;
// Write with 2-space indentation
writeFileSync(fullPath, JSON.stringify(json, null, 2) + '\n', 'utf-8');
console.log(`✓ Updated package.json to ${version}`);
}
function updateTOML(path, version) {
const fullPath = resolve(projectRoot, path);
if (!existsSync(fullPath)) {
throw new Error(`File not found: ${fullPath}`);
}
const content = readFileSync(fullPath, 'utf-8');
const lines = content.split('\n');
const output = [];
for (const line of lines) {
if (line.match(/^\s*version\s*=\s*"/)) {
output.push(`version = "${version}"`);
} else {
output.push(line);
}
}
writeFileSync(fullPath, output.join('\n') + '\n', 'utf-8');
console.log(`✓ Updated ${path} to ${version}`);
}
const version = getVersionFromGit();
console.log(`Setting version to: ${version}`);
updatePackageJson(version);
updateTOML('src-tauri/Cargo.toml', version);
updateTOML('src-tauri/tauri.conf.json', version);
console.log(`✓ All version fields updated to ${version}`);

3
src-tauri/Cargo.lock generated
View File

@ -6139,7 +6139,7 @@ dependencies = [
[[package]]
name = "trcaa"
version = "0.2.50"
version = "0.2.62"
dependencies = [
"aes-gcm",
"aho-corasick",
@ -6174,6 +6174,7 @@ dependencies = [
"tokio-test",
"tracing",
"tracing-subscriber",
"url",
"urlencoding",
"uuid",
"warp",

View File

@ -1,6 +1,6 @@
[package]
name = "trcaa"
version = "0.2.50"
version = "0.2.62"
edition = "2021"
[lib]
@ -44,6 +44,7 @@ lazy_static = "1.4"
warp = "0.3"
urlencoding = "2"
infer = "0.15"
url = "2.5.8"
[dev-dependencies]
tokio-test = "0.4"
@ -52,3 +53,7 @@ mockito = "1.2"
[profile.release]
opt-level = "s"
strip = true

View File

@ -1,3 +1,30 @@
fn main() {
let version = get_version_from_git();
println!("cargo:rustc-env=APP_VERSION={version}");
println!("cargo:rerun-if-changed=.git/refs/heads/master");
println!("cargo:rerun-if-changed=.git/refs/tags");
tauri_build::build()
}
fn get_version_from_git() -> String {
if let Ok(output) = std::process::Command::new("git")
.arg("describe")
.arg("--tags")
.arg("--abbrev=0")
.output()
{
if output.status.success() {
let version = String::from_utf8_lossy(&output.stdout)
.trim()
.trim_start_matches('v')
.to_string();
if !version.is_empty() {
return version;
}
}
}
"0.2.50".to_string()
}

View File

@ -165,6 +165,7 @@ pub async fn chat_message(
issue_id: String,
message: String,
provider_config: ProviderConfig,
system_prompt: Option<String>,
app_handle: tauri::AppHandle,
state: State<'_, AppState>,
) -> Result<ChatResponse, String> {
@ -232,7 +233,21 @@ pub async fn chat_message(
// Search integration sources for relevant context
let integration_context = search_integration_sources(&message, &app_handle, &state).await;
let mut messages = history;
let mut messages = Vec::new();
// Inject domain system prompt if provided
if let Some(ref prompt) = system_prompt {
if !prompt.is_empty() {
messages.push(Message {
role: "system".into(),
content: prompt.clone(),
tool_call_id: None,
tool_calls: None,
});
}
}
messages.extend(history);
// If we found integration content, add it to the conversation context
if !integration_context.is_empty() {

View File

@ -2,7 +2,7 @@ use tauri::State;
use crate::db::models::{
AiConversation, AiMessage, ImageAttachment, Issue, IssueDetail, IssueFilter, IssueSummary,
IssueUpdate, LogFile, ResolutionStep,
IssueUpdate, LogFile, ResolutionStep, TimelineEvent,
};
use crate::state::AppState;
@ -171,12 +171,35 @@ pub async fn get_issue(
.filter_map(|r| r.ok())
.collect();
// Load timeline events
let mut te_stmt = db
.prepare(
"SELECT id, issue_id, event_type, description, metadata, created_at \
FROM timeline_events WHERE issue_id = ?1 ORDER BY created_at ASC",
)
.map_err(|e| e.to_string())?;
let timeline_events: Vec<TimelineEvent> = te_stmt
.query_map([&issue_id], |row| {
Ok(TimelineEvent {
id: row.get(0)?,
issue_id: row.get(1)?,
event_type: row.get(2)?,
description: row.get(3)?,
metadata: row.get(4)?,
created_at: row.get(5)?,
})
})
.map_err(|e| e.to_string())?
.filter_map(|r| r.ok())
.collect();
Ok(IssueDetail {
issue,
log_files,
image_attachments,
resolution_steps,
conversations,
timeline_events,
})
}
@ -302,6 +325,11 @@ pub async fn delete_issue(issue_id: String, state: State<'_, AppState>) -> Resul
[&issue_id],
)
.map_err(|e| e.to_string())?;
db.execute(
"DELETE FROM timeline_events WHERE issue_id = ?1",
[&issue_id],
)
.map_err(|e| e.to_string())?;
db.execute("DELETE FROM issues WHERE id = ?1", [&issue_id])
.map_err(|e| e.to_string())?;
@ -505,37 +533,105 @@ pub async fn update_five_why(
Ok(())
}
const VALID_EVENT_TYPES: &[&str] = &[
"triage_started",
"log_uploaded",
"why_level_advanced",
"root_cause_identified",
"rca_generated",
"postmortem_generated",
"document_exported",
];
#[tauri::command]
pub async fn add_timeline_event(
issue_id: String,
event_type: String,
description: String,
metadata: Option<String>,
state: State<'_, AppState>,
) -> Result<(), String> {
// Use audit_log for timeline tracking
let db = state.db.lock().map_err(|e| e.to_string())?;
let entry = crate::db::models::AuditEntry::new(
event_type,
"issue".to_string(),
) -> Result<TimelineEvent, String> {
if !VALID_EVENT_TYPES.contains(&event_type.as_str()) {
return Err(format!("Invalid event_type: {event_type}"));
}
let meta = metadata.unwrap_or_else(|| "{}".to_string());
if meta.len() > 10240 {
return Err("metadata exceeds maximum size of 10KB".to_string());
}
serde_json::from_str::<serde_json::Value>(&meta)
.map_err(|_| "metadata must be valid JSON".to_string())?;
let event = TimelineEvent::new(
issue_id.clone(),
serde_json::json!({ "description": description }).to_string(),
event_type.clone(),
description.clone(),
meta,
);
let mut db = state.db.lock().map_err(|e| e.to_string())?;
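// Run the event insert, audit write, and issue-timestamp update in one transaction so they commit or roll back together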
let tx = db.transaction().map_err(|e| e.to_string())?;
tx.execute(
"INSERT INTO timeline_events (id, issue_id, event_type, description, metadata, created_at) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6)",
rusqlite::params![
event.id,
event.issue_id,
event.event_type,
event.description,
event.metadata,
event.created_at,
],
)
.map_err(|e| e.to_string())?;
crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
&tx,
&event_type,
"issue",
&issue_id,
&serde_json::json!({ "description": description, "metadata": event.metadata }).to_string(),
)
.map_err(|_| "Failed to write security audit entry".to_string())?;
// Update issue timestamp
let now = chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string();
db.execute(
tx.execute(
"UPDATE issues SET updated_at = ?1 WHERE id = ?2",
rusqlite::params![now, issue_id],
)
.map_err(|e| e.to_string())?;
Ok(())
tx.commit().map_err(|e| e.to_string())?;
Ok(event)
}
#[tauri::command]
pub async fn get_timeline_events(
issue_id: String,
state: State<'_, AppState>,
) -> Result<Vec<TimelineEvent>, String> {
let db = state.db.lock().map_err(|e| e.to_string())?;
let mut stmt = db
.prepare(
"SELECT id, issue_id, event_type, description, metadata, created_at \
FROM timeline_events WHERE issue_id = ?1 ORDER BY created_at ASC",
)
.map_err(|e| e.to_string())?;
let events = stmt
.query_map([&issue_id], |row| {
Ok(TimelineEvent {
id: row.get(0)?,
issue_id: row.get(1)?,
event_type: row.get(2)?,
description: row.get(3)?,
metadata: row.get(4)?,
created_at: row.get(5)?,
})
})
.map_err(|e| e.to_string())?
.filter_map(|r| r.ok())
.collect();
Ok(events)
}

View File

@ -4,6 +4,7 @@ use crate::ollama::{
OllamaStatus,
};
use crate::state::{AppSettings, AppState, ProviderConfig};
use std::env;
// --- Ollama commands ---
@ -275,3 +276,11 @@ pub async fn delete_ai_provider(
Ok(())
}
/// Get the application version from build-time environment
#[tauri::command]
pub async fn get_app_version() -> Result<String, String> {
env::var("APP_VERSION")
.or_else(|_| env::var("CARGO_PKG_VERSION"))
.map_err(|e| format!("Failed to get version: {e}"))
}

View File

@ -191,6 +191,28 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);",
),
(
"015_add_use_datastore_upload",
"ALTER TABLE ai_providers ADD COLUMN use_datastore_upload INTEGER DEFAULT 0",
),
(
"016_add_created_at",
"ALTER TABLE ai_providers ADD COLUMN created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%d %H:%M:%S', 'now'))",
),
(
"017_create_timeline_events",
"CREATE TABLE IF NOT EXISTS timeline_events (
id TEXT PRIMARY KEY,
issue_id TEXT NOT NULL,
event_type TEXT NOT NULL,
description TEXT NOT NULL DEFAULT '',
metadata TEXT NOT NULL DEFAULT '{}',
created_at TEXT NOT NULL,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
CREATE INDEX idx_timeline_events_issue ON timeline_events(issue_id);
CREATE INDEX idx_timeline_events_time ON timeline_events(created_at);",
),
];
for (name, sql) in migrations {
@ -201,10 +223,27 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
if !already_applied {
// FTS5 virtual table creation can be skipped if FTS5 is not compiled in
if let Err(e) = conn.execute_batch(sql) {
if name.contains("fts") {
// Also handle column-already-exists errors for migrations 015-016
if name.contains("fts") {
if let Err(e) = conn.execute_batch(sql) {
tracing::warn!("FTS5 not available, skipping: {e}");
} else {
}
} else if name.ends_with("_add_use_datastore_upload")
|| name.ends_with("_add_created_at")
{
// Use execute for ALTER TABLE (SQLite only allows one statement per command)
// Skip error if column already exists (SQLITE_ERROR with "duplicate column name")
if let Err(e) = conn.execute(sql, []) {
let err_str = e.to_string();
if err_str.contains("duplicate column name") {
tracing::info!("Column may already exist, skipping migration {name}: {e}");
} else {
return Err(e.into());
}
}
} else {
// Use execute_batch for other migrations (FTS5, CREATE TABLE, etc.)
if let Err(e) = conn.execute_batch(sql) {
return Err(e.into());
}
}
@ -560,4 +599,195 @@ mod tests {
assert_eq!(encrypted_key, "encrypted_key_123");
assert_eq!(model, "gpt-4o");
}
#[test]
fn test_add_missing_columns_to_existing_table() {
let conn = Connection::open_in_memory().unwrap();
// Simulate existing table without use_datastore_upload and created_at
conn.execute_batch(
"CREATE TABLE IF NOT EXISTS ai_providers (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
provider_type TEXT NOT NULL,
api_url TEXT NOT NULL,
encrypted_api_key TEXT NOT NULL,
model TEXT NOT NULL,
max_tokens INTEGER,
temperature REAL,
custom_endpoint_path TEXT,
custom_auth_header TEXT,
custom_auth_prefix TEXT,
api_format TEXT,
user_id TEXT,
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);",
)
.unwrap();
// Verify columns BEFORE migration
let mut stmt = conn.prepare("PRAGMA table_info(ai_providers)").unwrap();
let columns: Vec<String> = stmt
.query_map([], |row| row.get::<_, String>(1))
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert!(columns.contains(&"name".to_string()));
assert!(columns.contains(&"model".to_string()));
assert!(!columns.contains(&"use_datastore_upload".to_string()));
assert!(!columns.contains(&"created_at".to_string()));
// Run migrations (should apply 015 to add missing columns)
run_migrations(&conn).unwrap();
// Verify columns AFTER migration
let mut stmt = conn.prepare("PRAGMA table_info(ai_providers)").unwrap();
let columns: Vec<String> = stmt
.query_map([], |row| row.get::<_, String>(1))
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert!(columns.contains(&"name".to_string()));
assert!(columns.contains(&"model".to_string()));
assert!(columns.contains(&"use_datastore_upload".to_string()));
assert!(columns.contains(&"created_at".to_string()));
// Verify data integrity - existing rows should have default values
conn.execute(
"INSERT INTO ai_providers (id, name, provider_type, api_url, encrypted_api_key, model)
VALUES (?, ?, ?, ?, ?, ?)",
rusqlite::params![
"test-provider-2",
"Test Provider",
"openai",
"https://api.example.com",
"encrypted_key_456",
"gpt-3.5-turbo"
],
)
.unwrap();
let (name, use_datastore_upload, created_at): (String, bool, String) = conn
.query_row(
"SELECT name, use_datastore_upload, created_at FROM ai_providers WHERE name = ?1",
["Test Provider"],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
)
.unwrap();
assert_eq!(name, "Test Provider");
assert!(!use_datastore_upload);
assert!(!created_at.is_empty());
}
#[test]
fn test_idempotent_add_missing_columns() {
let conn = Connection::open_in_memory().unwrap();
// Create table with both columns already present (simulating prior migration run)
conn.execute_batch(
"CREATE TABLE IF NOT EXISTS ai_providers (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
provider_type TEXT NOT NULL,
api_url TEXT NOT NULL,
encrypted_api_key TEXT NOT NULL,
model TEXT NOT NULL,
max_tokens INTEGER,
temperature REAL,
custom_endpoint_path TEXT,
custom_auth_header TEXT,
custom_auth_prefix TEXT,
api_format TEXT,
user_id TEXT,
use_datastore_upload INTEGER DEFAULT 0,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);",
)
.unwrap();
// Should not fail even though columns already exist
run_migrations(&conn).unwrap();
}
#[test]
fn test_timeline_events_table_exists() {
let conn = setup_test_db();
let count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='timeline_events'",
[],
|r| r.get(0),
)
.unwrap();
assert_eq!(count, 1);
let mut stmt = conn.prepare("PRAGMA table_info(timeline_events)").unwrap();
let columns: Vec<String> = stmt
.query_map([], |row| row.get::<_, String>(1))
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert!(columns.contains(&"id".to_string()));
assert!(columns.contains(&"issue_id".to_string()));
assert!(columns.contains(&"event_type".to_string()));
assert!(columns.contains(&"description".to_string()));
assert!(columns.contains(&"metadata".to_string()));
assert!(columns.contains(&"created_at".to_string()));
}
#[test]
fn test_timeline_events_cascade_delete() {
let conn = setup_test_db();
conn.execute("PRAGMA foreign_keys = ON", []).unwrap();
let now = chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string();
conn.execute(
"INSERT INTO issues (id, title, created_at, updated_at) VALUES (?1, ?2, ?3, ?4)",
rusqlite::params!["issue-1", "Test Issue", now, now],
)
.unwrap();
conn.execute(
"INSERT INTO timeline_events (id, issue_id, event_type, description, metadata, created_at) VALUES (?1, ?2, ?3, ?4, ?5, ?6)",
rusqlite::params!["te-1", "issue-1", "triage_started", "Started triage", "{}", "2025-01-15 10:00:00 UTC"],
)
.unwrap();
// Verify event exists
let count: i64 = conn
.query_row("SELECT COUNT(*) FROM timeline_events", [], |r| r.get(0))
.unwrap();
assert_eq!(count, 1);
// Delete issue — cascade should remove timeline event
conn.execute("DELETE FROM issues WHERE id = 'issue-1'", [])
.unwrap();
let count: i64 = conn
.query_row("SELECT COUNT(*) FROM timeline_events", [], |r| r.get(0))
.unwrap();
assert_eq!(count, 0);
}
#[test]
fn test_timeline_events_indexes() {
let conn = setup_test_db();
let mut stmt = conn
.prepare(
"SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='timeline_events'",
)
.unwrap();
let indexes: Vec<String> = stmt
.query_map([], |row| row.get(0))
.unwrap()
.filter_map(|r| r.ok())
.collect();
assert!(indexes.contains(&"idx_timeline_events_issue".to_string()));
assert!(indexes.contains(&"idx_timeline_events_time".to_string()));
}
}

View File

@ -47,6 +47,7 @@ pub struct IssueDetail {
pub image_attachments: Vec<ImageAttachment>,
pub resolution_steps: Vec<ResolutionStep>,
pub conversations: Vec<AiConversation>,
pub timeline_events: Vec<TimelineEvent>,
}
/// Lightweight row returned by list/search commands.
@ -121,9 +122,31 @@ pub struct FiveWhyEntry {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TimelineEvent {
pub id: String,
pub issue_id: String,
pub event_type: String,
pub description: String,
pub created_at: i64,
pub metadata: String,
pub created_at: String,
}
impl TimelineEvent {
pub fn new(
issue_id: String,
event_type: String,
description: String,
metadata: String,
) -> Self {
TimelineEvent {
id: Uuid::now_v7().to_string(),
issue_id,
event_type,
description,
metadata,
created_at: chrono::Utc::now()
.format("%Y-%m-%d %H:%M:%S UTC")
.to_string(),
}
}
}
// ─── Log File ───────────────────────────────────────────────────────────────

View File

@ -1,4 +1,5 @@
use crate::db::models::IssueDetail;
use crate::docs::rca::{calculate_duration, format_event_type};
pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
let issue = &detail.issue;
@ -51,7 +52,16 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
// Impact
md.push_str("## Impact\n\n");
md.push_str("- **Duration:** _[How long did the incident last?]_\n");
if detail.timeline_events.len() >= 2 {
let first = &detail.timeline_events[0].created_at;
let last = &detail.timeline_events[detail.timeline_events.len() - 1].created_at;
md.push_str(&format!(
"- **Duration:** {}\n",
calculate_duration(first, last)
));
} else {
md.push_str("- **Duration:** _[How long did the incident last?]_\n");
}
md.push_str("- **Users Affected:** _[Number/percentage of affected users]_\n");
md.push_str("- **Revenue Impact:** _[Financial impact, if applicable]_\n");
md.push_str("- **SLA Impact:** _[Were any SLAs breached?]_\n\n");
@ -67,7 +77,19 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
if let Some(ref resolved) = issue.resolved_at {
md.push_str(&format!("| {resolved} | Issue resolved |\n"));
}
md.push_str("| _HH:MM_ | _[Add additional timeline events]_ |\n\n");
if detail.timeline_events.is_empty() {
md.push_str("| _HH:MM_ | _[Add additional timeline events]_ |\n");
} else {
for event in &detail.timeline_events {
md.push_str(&format!(
"| {} | {} - {} |\n",
event.created_at,
format_event_type(&event.event_type),
event.description
));
}
}
md.push('\n');
// Root Cause Analysis
md.push_str("## Root Cause Analysis\n\n");
@ -114,6 +136,19 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
// What Went Well
md.push_str("## What Went Well\n\n");
if !detail.resolution_steps.is_empty() {
md.push_str(&format!(
"- Systematic 5-whys analysis conducted ({} steps completed)\n",
detail.resolution_steps.len()
));
}
if detail
.timeline_events
.iter()
.any(|e| e.event_type == "root_cause_identified")
{
md.push_str("- Root cause was identified during triage\n");
}
md.push_str("- _[e.g., Quick detection through existing alerts]_\n");
md.push_str("- _[e.g., Effective cross-team collaboration]_\n");
md.push_str("- _[e.g., Smooth communication with stakeholders]_\n\n");
@ -158,7 +193,7 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
#[cfg(test)]
mod tests {
use super::*;
use crate::db::models::{Issue, IssueDetail, ResolutionStep};
use crate::db::models::{Issue, IssueDetail, ResolutionStep, TimelineEvent};
fn make_test_detail() -> IssueDetail {
IssueDetail {
@ -188,6 +223,7 @@ mod tests {
created_at: "2025-02-10 09:00:00".to_string(),
}],
conversations: vec![],
timeline_events: vec![],
}
}
@ -246,4 +282,76 @@ mod tests {
assert!(md.contains("| Priority | Action | Owner | Due Date | Status |"));
assert!(md.contains("| P0 |"));
}
#[test]
fn test_postmortem_timeline_with_real_events() {
let mut detail = make_test_detail();
detail.timeline_events = vec![
TimelineEvent {
id: "te-1".to_string(),
issue_id: "pm-456".to_string(),
event_type: "triage_started".to_string(),
description: "Triage initiated".to_string(),
metadata: "{}".to_string(),
created_at: "2025-02-10 08:05:00 UTC".to_string(),
},
TimelineEvent {
id: "te-2".to_string(),
issue_id: "pm-456".to_string(),
event_type: "root_cause_identified".to_string(),
description: "Certificate expiry confirmed".to_string(),
metadata: "{}".to_string(),
created_at: "2025-02-10 10:30:00 UTC".to_string(),
},
];
let md = generate_postmortem_markdown(&detail);
assert!(md.contains("## Timeline"));
assert!(md.contains("| 2025-02-10 08:05:00 UTC | Triage Started - Triage initiated |"));
assert!(md.contains(
"| 2025-02-10 10:30:00 UTC | Root Cause Identified - Certificate expiry confirmed |"
));
assert!(!md.contains("_[Add additional timeline events]_"));
}
#[test]
fn test_postmortem_impact_with_duration() {
let mut detail = make_test_detail();
detail.timeline_events = vec![
TimelineEvent {
id: "te-1".to_string(),
issue_id: "pm-456".to_string(),
event_type: "triage_started".to_string(),
description: "Triage initiated".to_string(),
metadata: "{}".to_string(),
created_at: "2025-02-10 08:00:00 UTC".to_string(),
},
TimelineEvent {
id: "te-2".to_string(),
issue_id: "pm-456".to_string(),
event_type: "root_cause_identified".to_string(),
description: "Found it".to_string(),
metadata: "{}".to_string(),
created_at: "2025-02-10 10:30:00 UTC".to_string(),
},
];
let md = generate_postmortem_markdown(&detail);
assert!(md.contains("**Duration:** 2h 30m"));
assert!(!md.contains("_[How long did the incident last?]_"));
}
#[test]
fn test_postmortem_what_went_well_with_steps() {
let mut detail = make_test_detail();
detail.timeline_events = vec![TimelineEvent {
id: "te-1".to_string(),
issue_id: "pm-456".to_string(),
event_type: "root_cause_identified".to_string(),
description: "Root cause found".to_string(),
metadata: "{}".to_string(),
created_at: "2025-02-10 10:00:00 UTC".to_string(),
}];
let md = generate_postmortem_markdown(&detail);
assert!(md.contains("Systematic 5-whys analysis conducted (1 steps completed)"));
assert!(md.contains("Root cause was identified during triage"));
}
}

View File

@ -1,5 +1,48 @@
use crate::db::models::IssueDetail;
pub fn format_event_type(event_type: &str) -> &str {
match event_type {
"triage_started" => "Triage Started",
"log_uploaded" => "Log File Uploaded",
"why_level_advanced" => "Why Level Advanced",
"root_cause_identified" => "Root Cause Identified",
"rca_generated" => "RCA Document Generated",
"postmortem_generated" => "Post-Mortem Generated",
"document_exported" => "Document Exported",
other => other,
}
}
pub fn calculate_duration(start: &str, end: &str) -> String {
let fmt = "%Y-%m-%d %H:%M:%S UTC";
let start_dt = match chrono::NaiveDateTime::parse_from_str(start, fmt) {
Ok(dt) => dt,
Err(_) => return "N/A".to_string(),
};
let end_dt = match chrono::NaiveDateTime::parse_from_str(end, fmt) {
Ok(dt) => dt,
Err(_) => return "N/A".to_string(),
};
let duration = end_dt.signed_duration_since(start_dt);
let total_minutes = duration.num_minutes();
if total_minutes < 0 {
return "N/A".to_string();
}
let days = total_minutes / (24 * 60);
let hours = (total_minutes % (24 * 60)) / 60;
let minutes = total_minutes % 60;
if days > 0 {
format!("{days}d {hours}h")
} else if hours > 0 {
format!("{hours}h {minutes}m")
} else {
format!("{minutes}m")
}
}
pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
let issue = &detail.issue;
@ -57,6 +100,52 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
md.push_str("\n\n");
}
// Incident Timeline
md.push_str("## Incident Timeline\n\n");
if detail.timeline_events.is_empty() {
md.push_str("_No timeline events recorded._\n\n");
} else {
md.push_str("| Time (UTC) | Event | Description |\n");
md.push_str("|------------|-------|-------------|\n");
for event in &detail.timeline_events {
md.push_str(&format!(
"| {} | {} | {} |\n",
event.created_at,
format_event_type(&event.event_type),
event.description
));
}
md.push('\n');
}
// Incident Metrics
md.push_str("## Incident Metrics\n\n");
md.push_str(&format!(
"- **Total Events:** {}\n",
detail.timeline_events.len()
));
if detail.timeline_events.len() >= 2 {
let first = &detail.timeline_events[0].created_at;
let last = &detail.timeline_events[detail.timeline_events.len() - 1].created_at;
md.push_str(&format!(
"- **Incident Duration:** {}\n",
calculate_duration(first, last)
));
} else {
md.push_str("- **Incident Duration:** N/A\n");
}
let root_cause_event = detail
.timeline_events
.iter()
.find(|e| e.event_type == "root_cause_identified");
if let (Some(first), Some(rc)) = (detail.timeline_events.first(), root_cause_event) {
md.push_str(&format!(
"- **Time to Root Cause:** {}\n",
calculate_duration(&first.created_at, &rc.created_at)
));
}
md.push('\n');
// 5 Whys Analysis
md.push_str("## 5 Whys Analysis\n\n");
if detail.resolution_steps.is_empty() {
@ -143,7 +232,7 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
#[cfg(test)]
mod tests {
use super::*;
use crate::db::models::{Issue, IssueDetail, LogFile, ResolutionStep};
use crate::db::models::{Issue, IssueDetail, LogFile, ResolutionStep, TimelineEvent};
fn make_test_detail() -> IssueDetail {
IssueDetail {
@ -194,6 +283,7 @@ mod tests {
},
],
conversations: vec![],
timeline_events: vec![],
}
}
@ -247,4 +337,135 @@ mod tests {
let md = generate_rca_markdown(&detail);
assert!(md.contains("Unassigned"));
}
#[test]
fn test_rca_timeline_section_with_events() {
let mut detail = make_test_detail();
detail.timeline_events = vec![
TimelineEvent {
id: "te-1".to_string(),
issue_id: "test-123".to_string(),
event_type: "triage_started".to_string(),
description: "Triage initiated by oncall".to_string(),
metadata: "{}".to_string(),
created_at: "2025-01-15 10:00:00 UTC".to_string(),
},
TimelineEvent {
id: "te-2".to_string(),
issue_id: "test-123".to_string(),
event_type: "log_uploaded".to_string(),
description: "app.log uploaded".to_string(),
metadata: "{}".to_string(),
created_at: "2025-01-15 10:30:00 UTC".to_string(),
},
TimelineEvent {
id: "te-3".to_string(),
issue_id: "test-123".to_string(),
event_type: "root_cause_identified".to_string(),
description: "Connection pool leak found".to_string(),
metadata: "{}".to_string(),
created_at: "2025-01-15 12:15:00 UTC".to_string(),
},
];
let md = generate_rca_markdown(&detail);
assert!(md.contains("## Incident Timeline"));
assert!(md.contains("| Time (UTC) | Event | Description |"));
assert!(md
.contains("| 2025-01-15 10:00:00 UTC | Triage Started | Triage initiated by oncall |"));
assert!(md.contains("| 2025-01-15 10:30:00 UTC | Log File Uploaded | app.log uploaded |"));
assert!(md.contains(
"| 2025-01-15 12:15:00 UTC | Root Cause Identified | Connection pool leak found |"
));
}
#[test]
fn test_rca_timeline_section_empty() {
let detail = make_test_detail();
let md = generate_rca_markdown(&detail);
assert!(md.contains("## Incident Timeline"));
assert!(md.contains("_No timeline events recorded._"));
}
#[test]
fn test_rca_metrics_section() {
let mut detail = make_test_detail();
detail.timeline_events = vec![
TimelineEvent {
id: "te-1".to_string(),
issue_id: "test-123".to_string(),
event_type: "triage_started".to_string(),
description: "Triage started".to_string(),
metadata: "{}".to_string(),
created_at: "2025-01-15 10:00:00 UTC".to_string(),
},
TimelineEvent {
id: "te-2".to_string(),
issue_id: "test-123".to_string(),
event_type: "root_cause_identified".to_string(),
description: "Root cause found".to_string(),
metadata: "{}".to_string(),
created_at: "2025-01-15 12:15:00 UTC".to_string(),
},
];
let md = generate_rca_markdown(&detail);
assert!(md.contains("## Incident Metrics"));
assert!(md.contains("**Total Events:** 2"));
assert!(md.contains("**Incident Duration:** 2h 15m"));
assert!(md.contains("**Time to Root Cause:** 2h 15m"));
}
#[test]
fn test_calculate_duration_hours_minutes() {
assert_eq!(
calculate_duration("2025-01-15 10:00:00 UTC", "2025-01-15 12:15:00 UTC"),
"2h 15m"
);
}
#[test]
fn test_calculate_duration_days() {
assert_eq!(
calculate_duration("2025-01-15 10:00:00 UTC", "2025-01-18 11:00:00 UTC"),
"3d 1h"
);
}
#[test]
fn test_calculate_duration_minutes_only() {
assert_eq!(
calculate_duration("2025-01-15 10:00:00 UTC", "2025-01-15 10:45:00 UTC"),
"45m"
);
}
#[test]
fn test_calculate_duration_invalid() {
assert_eq!(calculate_duration("bad-date", "also-bad"), "N/A");
}
#[test]
fn test_format_event_type_known() {
assert_eq!(format_event_type("triage_started"), "Triage Started");
assert_eq!(format_event_type("log_uploaded"), "Log File Uploaded");
assert_eq!(
format_event_type("why_level_advanced"),
"Why Level Advanced"
);
assert_eq!(
format_event_type("root_cause_identified"),
"Root Cause Identified"
);
assert_eq!(format_event_type("rca_generated"), "RCA Document Generated");
assert_eq!(
format_event_type("postmortem_generated"),
"Post-Mortem Generated"
);
assert_eq!(format_event_type("document_exported"), "Document Exported");
}
#[test]
fn test_format_event_type_unknown() {
assert_eq!(format_event_type("custom_event"), "custom_event");
assert_eq!(format_event_type(""), "");
}
}

View File

@ -629,11 +629,10 @@ mod tests {
#[test]
fn test_derive_aes_key_is_stable_for_same_input() {
std::env::set_var("TFTSR_ENCRYPTION_KEY", "stable-test-key");
let k1 = derive_aes_key().unwrap();
let k2 = derive_aes_key().unwrap();
// Use deterministic helper to avoid env var race conditions in parallel tests
let k1 = derive_aes_key_from_str("stable-test-key").unwrap();
let k2 = derive_aes_key_from_str("stable-test-key").unwrap();
assert_eq!(k1, k2);
std::env::remove_var("TFTSR_ENCRYPTION_KEY");
}
// Test helper functions that accept key directly (bypass env var)

View File

@ -1,4 +1,40 @@
use super::confluence_search::SearchResult;
use crate::integrations::query_expansion::expand_query;
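// Cap on how many expanded query variants are sent to the remote search API per call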
const MAX_EXPANDED_QUERIES: usize = 3;
fn escape_wiql(s: &str) -> String {
// Escape backslashes first so the escapes added below are not themselves re-escaped
s.replace('\\', "\\\\")
.replace('\'', "''")
.replace('"', "\\\"")
.replace('(', "\\(")
.replace(')', "\\)")
.replace(';', "\\;")
.replace('=', "\\=")
}
/// Basic HTML tag stripping to prevent XSS in excerpts
fn strip_html_tags(html: &str) -> String {
let mut result = String::new();
let mut in_tag = false;
for ch in html.chars() {
match ch {
'<' => in_tag = true,
'>' => in_tag = false,
_ if !in_tag => result.push(ch),
_ => {}
}
}
// Clean up whitespace
result
.split_whitespace()
.collect::<Vec<_>>()
.join(" ")
.trim()
.to_string()
}
/// Search Azure DevOps Wiki for content matching the query
pub async fn search_wiki(
@ -10,90 +46,94 @@ pub async fn search_wiki(
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use Azure DevOps Search API
let search_url = format!(
"{}/_apis/search/wikisearchresults?api-version=7.0",
org_url.trim_end_matches('/')
);
let expanded_queries = expand_query(query);
let search_body = serde_json::json!({
"searchText": query,
"$top": 5,
"filters": {
"ProjectFilters": [project]
let mut all_results = Vec::new();
for expanded_query in expanded_queries.iter().take(MAX_EXPANDED_QUERIES) {
// Use Azure DevOps Search API
let search_url = format!(
"{}/_apis/search/wikisearchresults?api-version=7.0",
org_url.trim_end_matches('/')
);
let search_body = serde_json::json!({
"searchText": expanded_query,
"$top": 5,
"filters": {
"ProjectFilters": [project]
}
});
tracing::info!("Searching Azure DevOps Wiki with query: {}", expanded_query);
let resp = client
.post(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&search_body)
.send()
.await
.map_err(|e| format!("Azure DevOps wiki search failed: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
tracing::warn!("Azure DevOps wiki search failed with status {status}: {text}");
continue;
}
});
tracing::info!("Searching Azure DevOps Wiki: {}", search_url);
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ADO wiki search response: {e}"))?;
let resp = client
.post(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&search_body)
.send()
.await
.map_err(|e| format!("Azure DevOps wiki search failed: {e}"))?;
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(MAX_EXPANDED_QUERIES) {
let title = item["fileName"].as_str().unwrap_or("Untitled").to_string();
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"Azure DevOps wiki search failed with status {status}: {text}"
));
}
let path = item["path"].as_str().unwrap_or("");
let url = format!(
"{}/_wiki/wikis/{}/{}",
org_url.trim_end_matches('/'),
project,
path
);
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ADO wiki search response: {e}"))?;
let excerpt = strip_html_tags(item["content"].as_str().unwrap_or(""))
.chars()
.take(300)
.collect::<String>();
let mut results = Vec::new();
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(3) {
let title = item["fileName"].as_str().unwrap_or("Untitled").to_string();
let path = item["path"].as_str().unwrap_or("");
let url = format!(
"{}/_wiki/wikis/{}/{}",
org_url.trim_end_matches('/'),
project,
path
);
let excerpt = item["content"]
.as_str()
.unwrap_or("")
.chars()
.take(300)
.collect::<String>();
// Fetch full wiki page content
let content = if let Some(wiki_id) = item["wiki"]["id"].as_str() {
if let Some(page_path) = item["path"].as_str() {
fetch_wiki_page(org_url, wiki_id, page_path, &cookie_header)
.await
.ok()
// Fetch full wiki page content
let content = if let Some(wiki_id) = item["wiki"]["id"].as_str() {
if let Some(page_path) = item["path"].as_str() {
fetch_wiki_page(org_url, wiki_id, page_path, &cookie_header)
.await
.ok()
} else {
None
}
} else {
None
}
} else {
None
};
};
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Azure DevOps".to_string(),
});
all_results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Azure DevOps".to_string(),
});
}
}
}
Ok(results)
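// dedup_by only removes adjacent duplicates, so sort by URL first to collapse hits repeated across expanded queries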
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
Ok(all_results)
}
/// Fetch full wiki page content
@ -151,55 +191,68 @@ pub async fn search_work_items(
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use WIQL (Work Item Query Language)
let wiql_url = format!(
"{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/')
);
let expanded_queries = expand_query(query);
let wiql_query = format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.State] FROM WorkItems WHERE [System.TeamProject] = '{project}' AND ([System.Title] CONTAINS '{query}' OR [System.Description] CONTAINS '{query}') ORDER BY [System.ChangedDate] DESC"
);
let mut all_results = Vec::new();
let wiql_body = serde_json::json!({
"query": wiql_query
});
for expanded_query in expanded_queries.iter().take(MAX_EXPANDED_QUERIES) {
// Use WIQL (Work Item Query Language)
let wiql_url = format!(
"{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/')
);
tracing::info!("Searching Azure DevOps work items");
let safe_query = escape_wiql(expanded_query);
let wiql_query = format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.State] FROM WorkItems WHERE [System.TeamProject] = '{project}' AND ([System.Title] ~ '{safe_query}' OR [System.Description] ~ '{safe_query}') ORDER BY [System.ChangedDate] DESC"
);
let resp = client
.post(&wiql_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&wiql_body)
.send()
.await
.map_err(|e| format!("ADO work item search failed: {e}"))?;
let wiql_body = serde_json::json!({
"query": wiql_query
});
if !resp.status().is_success() {
return Ok(Vec::new()); // Don't fail if work item search fails
}
tracing::info!(
"Searching Azure DevOps work items with query: {}",
expanded_query
);
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse work item response".to_string())?;
let resp = client
.post(&wiql_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&wiql_body)
.send()
.await
.map_err(|e| format!("ADO work item search failed: {e}"))?;
let mut results = Vec::new();
if !resp.status().is_success() {
continue; // Don't fail if work item search fails
}
if let Some(work_items) = json["workItems"].as_array() {
// Fetch details for top 3 work items
for item in work_items.iter().take(3) {
if let Some(id) = item["id"].as_i64() {
if let Ok(work_item) = fetch_work_item_details(org_url, id, &cookie_header).await {
results.push(work_item);
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse work item response".to_string())?;
if let Some(work_items) = json["workItems"].as_array() {
// Fetch details for top 3 work items
for item in work_items.iter().take(MAX_EXPANDED_QUERIES) {
if let Some(id) = item["id"].as_i64() {
if let Ok(work_item) =
fetch_work_item_details(org_url, id, &cookie_header).await
{
all_results.push(work_item);
}
}
}
}
}
Ok(results)
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
Ok(all_results)
}
/// Fetch work item details
@ -263,3 +316,53 @@ async fn fetch_work_item_details(
source: "Azure DevOps".to_string(),
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_escape_wiql_escapes_single_quotes() {
assert_eq!(escape_wiql("test'single"), "test''single");
}
#[test]
fn test_escape_wiql_escapes_double_quotes() {
assert_eq!(escape_wiql("test\"double"), "test\\\\\"double");
}
#[test]
fn test_escape_wiql_escapes_backslash() {
assert_eq!(escape_wiql("test\\backslash"), r#"test\\backslash"#);
}
#[test]
fn test_escape_wiql_escapes_parens() {
assert_eq!(escape_wiql("test(paren"), r#"test\(paren"#);
assert_eq!(escape_wiql("test)paren"), r#"test\)paren"#);
}
#[test]
fn test_escape_wiql_escapes_semicolon() {
assert_eq!(escape_wiql("test;semi"), r#"test\;semi"#);
}
#[test]
fn test_escape_wiql_escapes_equals() {
assert_eq!(escape_wiql("test=equal"), r#"test\=equal"#);
}
#[test]
fn test_escape_wiql_no_special_chars() {
assert_eq!(escape_wiql("simple query"), "simple query");
}
#[test]
fn test_strip_html_tags() {
let html = "<p>Hello <strong>world</strong>!</p>";
assert_eq!(strip_html_tags(html), "Hello world!");
let html2 = "<div><h1>Title</h1><p>Content</p></div>";
assert_eq!(strip_html_tags(html2), "TitleContent");
}
}

View File

@ -1,4 +1,9 @@
use serde::{Deserialize, Serialize};
use url::Url;
use super::query_expansion::expand_query;
const MAX_EXPANDED_QUERIES: usize = 3;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
@ -6,10 +11,36 @@ pub struct SearchResult {
pub url: String,
pub excerpt: String,
pub content: Option<String>,
pub source: String, // "confluence", "servicenow", "azuredevops"
pub source: String,
}
fn canonicalize_url(url: &str) -> String {
Url::parse(url)
.ok()
.map(|mut u| {
u.set_fragment(None);
u.set_query(None);
u.to_string()
})
.unwrap_or_else(|| url.to_string())
}
fn escape_cql(s: &str) -> String {
s.replace('"', "\\\"")
.replace(')', "\\)")
.replace('(', "\\(")
.replace('~', "\\~")
.replace('&', "\\&")
.replace('|', "\\|")
.replace('+', "\\+")
.replace('-', "\\-")
}
/// Search Confluence for content matching the query
///
/// This function expands the user query with related terms, synonyms, and variations
/// to improve search coverage across Confluence spaces.
pub async fn search_confluence(
base_url: &str,
query: &str,
@ -18,86 +49,89 @@ pub async fn search_confluence(
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use Confluence CQL search
let search_url = format!(
"{}/rest/api/search?cql=text~\"{}\"&limit=5",
base_url.trim_end_matches('/'),
urlencoding::encode(query)
);
let expanded_queries = expand_query(query);
tracing::info!("Searching Confluence: {}", search_url);
let mut all_results = Vec::new();
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Confluence search request failed: {e}"))?;
for expanded_query in expanded_queries.iter().take(MAX_EXPANDED_QUERIES) {
let safe_query = escape_cql(expanded_query);
let search_url = format!(
"{}/rest/api/search?cql=text~\"{}\"&limit=5",
base_url.trim_end_matches('/'),
urlencoding::encode(&safe_query)
);
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"Confluence search failed with status {status}: {text}"
));
}
tracing::info!(
"Searching Confluence with expanded query: {}",
expanded_query
);
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse Confluence search response: {e}"))?;
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Confluence search request failed: {e}"))?;
let mut results = Vec::new();
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
tracing::warn!("Confluence search failed with status {status}: {text}");
continue;
}
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(3) {
// Take top 3 results
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse Confluence search response: {e}"))?;
let id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(MAX_EXPANDED_QUERIES) {
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
// Build URL
let url = if let (Some(id_str), Some(space)) = (id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id_str
)
} else {
base_url.to_string()
};
let id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
// Get excerpt from search result
let excerpt = item["excerpt"]
.as_str()
.unwrap_or("")
.to_string()
.replace("<span class=\"highlight\">", "")
.replace("</span>", "");
let url = if let (Some(id_str), Some(space)) = (id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id_str
)
} else {
base_url.to_string()
};
// Fetch full page content
let content = if let Some(content_id) = id {
fetch_page_content(base_url, content_id, &cookie_header)
.await
.ok()
} else {
None
};
let excerpt = strip_html_tags(item["excerpt"].as_str().unwrap_or(""))
.chars()
.take(300)
.collect::<String>();
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Confluence".to_string(),
});
let content = if let Some(content_id) = id {
fetch_page_content(base_url, content_id, &cookie_header)
.await
.ok()
} else {
None
};
all_results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Confluence".to_string(),
});
}
}
}
Ok(results)
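// Compare canonicalized URLs (query string and fragment stripped) so the same page reached via different expanded queries collapses to one result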
all_results.sort_by(|a, b| canonicalize_url(&a.url).cmp(&canonicalize_url(&b.url)));
all_results.dedup_by(|a, b| canonicalize_url(&a.url) == canonicalize_url(&b.url));
Ok(all_results)
}
/// Fetch full content of a Confluence page
@ -185,4 +219,43 @@ mod tests {
let html2 = "<div><h1>Title</h1><p>Content</p></div>";
assert_eq!(strip_html_tags(html2), "TitleContent");
}
#[test]
fn test_escape_cql_escapes_special_chars() {
assert_eq!(escape_cql("test\"quote"), r#"test\"quote"#);
assert_eq!(escape_cql("test(paren"), r#"test\(paren"#);
assert_eq!(escape_cql("test)paren"), r#"test\)paren"#);
assert_eq!(escape_cql("test~tilde"), r#"test\~tilde"#);
assert_eq!(escape_cql("test&and"), r#"test\&and"#);
assert_eq!(escape_cql("test|or"), r#"test\|or"#);
assert_eq!(escape_cql("test+plus"), r#"test\+plus"#);
assert_eq!(escape_cql("test-minus"), r#"test\-minus"#);
}
#[test]
fn test_escape_cql_no_special_chars() {
assert_eq!(escape_cql("simple query"), "simple query");
}
#[test]
fn test_canonicalize_url_removes_fragment() {
assert_eq!(
canonicalize_url("https://example.com/page#section"),
"https://example.com/page"
);
}
#[test]
fn test_canonicalize_url_removes_query() {
assert_eq!(
canonicalize_url("https://example.com/page?param=value"),
"https://example.com/page"
);
}
#[test]
fn test_canonicalize_url_handles_malformed() {
// Malformed URLs fall back to original
assert_eq!(canonicalize_url("not a url"), "not a url");
}
}

View File

@ -4,6 +4,7 @@ pub mod azuredevops_search;
pub mod callback_server;
pub mod confluence;
pub mod confluence_search;
pub mod query_expansion;
pub mod servicenow;
pub mod servicenow_search;
pub mod webview_auth;

View File

@ -0,0 +1,290 @@
/// Query expansion module for integration search
///
/// This module provides functionality to expand user queries with related terms,
/// synonyms, and variations to improve search results across integrations like
/// Confluence, ServiceNow, and Azure DevOps.
use std::collections::HashSet;
/// Product name synonyms for common product variations
/// Maps common abbreviations/variants to their full names for search expansion
fn get_product_synonyms(query: &str) -> Vec<String> {
let mut synonyms = Vec::new();
let query_lower = query.to_lowercase();
// VESTA NXT related synonyms
if query_lower.contains("vesta") || query_lower.contains("vnxt") {
synonyms.extend(vec![
"VESTA NXT".to_string(),
"Vesta NXT".to_string(),
"VNXT".to_string(),
"vnxt".to_string(),
"Vesta".to_string(),
"vesta".to_string(),
"VNX".to_string(),
"vnx".to_string(),
]);
}
// Version number patterns (e.g., 1.0.12, 1.1.9)
if query.contains('.') {
// Extract version-like patterns and add variations
let version_parts: Vec<&str> = query.split('.').collect();
if version_parts.len() >= 2 {
// Add variations without dots
let version_no_dots = version_parts.join("");
synonyms.push(version_no_dots);
// Add partial versions
if version_parts.len() >= 2 {
synonyms.push(version_parts[0..2].join("."));
}
if version_parts.len() >= 3 {
synonyms.push(version_parts[0..3].join("."));
}
}
}
// Common upgrade-related terms
if query.to_lowercase().contains("upgrade") || query.to_lowercase().contains("update") {
synonyms.extend(vec![
"upgrade".to_string(),
"update".to_string(),
"migration".to_string(),
"patch".to_string(),
"version".to_string(),
"install".to_string(),
"installation".to_string(),
]);
}
// Remove duplicates and empty strings
synonyms.sort();
synonyms.dedup();
synonyms.retain(|s| !s.is_empty());
synonyms
}
/// Expand a search query with related terms for better search coverage
///
/// This function takes a user query and expands it with:
/// - Product name synonyms (e.g., "VNXT" -> "VESTA NXT", "Vesta NXT")
/// - Version number variations
/// - Related terms based on query content
///
/// # Arguments
/// * `query` - The original user query
///
/// # Returns
/// A vector of query strings to search, with the original query first
/// followed by expanded variations. Returns empty only if input is empty or
/// whitespace-only. Otherwise, always returns at least the original query.
pub fn expand_query(query: &str) -> Vec<String> {
if query.trim().is_empty() {
return Vec::new();
}
let mut expanded = vec![query.to_string()];
// Get product synonyms
let product_synonyms = get_product_synonyms(query);
expanded.extend(product_synonyms);
// Extract keywords from query for additional expansion
let keywords = extract_keywords(query);
// Add keyword variations
for keyword in keywords.iter().take(5) {
if !expanded.contains(keyword) {
expanded.push(keyword.clone());
}
}
// Add common related terms based on query content
let query_lower = query.to_lowercase();
if query_lower.contains("confluence") || query_lower.contains("documentation") {
expanded.push("docs".to_string());
expanded.push("manual".to_string());
expanded.push("guide".to_string());
}
if query_lower.contains("deploy") || query_lower.contains("deployment") {
expanded.push("deploy".to_string());
expanded.push("deployment".to_string());
expanded.push("release".to_string());
expanded.push("build".to_string());
}
if query_lower.contains("kubernetes") || query_lower.contains("k8s") {
expanded.push("kubernetes".to_string());
expanded.push("k8s".to_string());
expanded.push("pod".to_string());
expanded.push("container".to_string());
}
// Remove duplicates and empty strings while keeping the original query first,
// as the doc comment promises (callers take only the first few entries)
let original = expanded.remove(0);
expanded.sort();
expanded.dedup();
expanded.retain(|s| !s.is_empty() && *s != original);
expanded.insert(0, original);
expanded
}
/// Extract important keywords from a search query
///
/// This function removes stop words and extracts meaningful terms
/// for search expansion.
///
/// # Arguments
/// * `query` - The original user query
///
/// # Returns
/// A vector of extracted keywords
fn extract_keywords(query: &str) -> Vec<String> {
let stop_words: HashSet<&str> = [
"how", "do", "i", "the", "a", "an", "is", "are", "was", "were", "be", "been", "being",
"have", "has", "had", "having", "do", "does", "did", "doing", "will", "would", "should",
"could", "can", "may", "might", "must", "to", "from", "in", "on", "at", "by", "for",
"with", "about", "as", "of", "or", "and", "but", "not", "what", "when", "where", "which",
"who", "this", "that", "these", "those", "if", "then", "else", "for", "while", "until",
"against", "between", "into", "through", "during", "before", "after", "above", "below",
"up", "down", "out", "off", "over", "under", "again", "further", "then", "once", "here",
"there", "why", "where", "all", "any", "both", "each", "few", "more", "most", "other",
"some", "such", "no", "nor", "only", "own", "same", "so", "than", "too", "very", "can",
"just", "should", "now",
]
.into_iter()
.collect();
let mut keywords = Vec::new();
let mut remaining = query.to_string();
while !remaining.is_empty() {
// Skip leading whitespace
if remaining.starts_with(char::is_whitespace) {
remaining = remaining.trim_start().to_string();
continue;
}
// Try to extract version number (e.g., 1.0.12, 1.1.9)
if remaining.starts_with(|c: char| c.is_ascii_digit()) {
let mut end_pos = 0;
let mut dot_count = 0;
for (i, c) in remaining.chars().enumerate() {
if c.is_ascii_digit() {
end_pos = i + 1;
} else if c == '.' {
end_pos = i + 1;
dot_count += 1;
} else {
break;
}
}
// Only extract if we have at least 2 dots (e.g., 1.0.12)
if dot_count >= 2 && end_pos > 0 {
let version = remaining[..end_pos].to_string();
keywords.push(version.clone());
remaining = remaining[end_pos..].to_string();
continue;
}
}
// Find word boundary - split on whitespace or non-alphanumeric.
// char_indices yields byte offsets, so slicing stays valid for multi-byte characters.
let mut split_pos = remaining.len();
for (i, c) in remaining.char_indices() {
if c.is_whitespace() || !c.is_alphanumeric() {
split_pos = i;
break;
}
}
// If split_pos is 0, the string starts with a non-alphanumeric character
// Skip it (by its full UTF-8 width) and continue
if split_pos == 0 {
let skip = remaining.chars().next().map_or(1, char::len_utf8);
remaining = remaining[skip..].to_string();
continue;
}
let word = remaining[..split_pos].to_lowercase();
remaining = remaining[split_pos..].to_string();
// Skip empty words, single chars, and stop words
if word.is_empty() || word.len() < 2 || stop_words.contains(word.as_str()) {
continue;
}
// Add numeric words with 3+ digits
if word.chars().all(|c| c.is_ascii_digit()) && word.len() >= 3 {
keywords.push(word.clone());
continue;
}
// Add words with at least one alphabetic character
if word.chars().any(|c| c.is_alphabetic()) {
keywords.push(word.clone());
}
}
keywords.sort();
keywords.dedup();
keywords
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_expand_query_with_product_synonyms() {
let query = "upgrade vesta nxt to 1.1.9";
let expanded = expand_query(query);
// Should contain original query
assert!(expanded.contains(&query.to_string()));
// Should contain product synonyms
assert!(expanded
.iter()
.any(|s| s.contains("vnxt") || s.contains("vnxt")));
}
#[test]
fn test_expand_query_with_version_numbers() {
let query = "version 1.0.12";
let expanded = expand_query(query);
// Should contain original query
assert!(expanded.contains(&query.to_string()));
}
#[test]
fn test_extract_keywords() {
let query = "How do I upgrade VESTA NXT from 1.0.12 to 1.1.9?";
let keywords = extract_keywords(query);
assert!(keywords.contains(&"upgrade".to_string()));
assert!(keywords.contains(&"vesta".to_string()));
assert!(keywords.contains(&"nxt".to_string()));
assert!(keywords.contains(&"1.0.12".to_string()));
assert!(keywords.contains(&"1.1.9".to_string()));
}
#[test]
fn test_product_synonyms() {
let synonyms = get_product_synonyms("vesta nxt upgrade");
// Should contain VNXT synonym
assert!(synonyms
.iter()
.any(|s| s.contains("VNXT") || s.contains("vnxt")));
}
#[test]
fn test_empty_query() {
let expanded = expand_query("");
// expand_query returns an empty vector for empty or whitespace-only input
assert!(expanded.is_empty());
}
}

View File

@ -1,4 +1,7 @@
use super::confluence_search::SearchResult;
use crate::integrations::query_expansion::expand_query;
const MAX_EXPANDED_QUERIES: usize = 3;
/// Search ServiceNow Knowledge Base for content matching the query
pub async fn search_servicenow(
@ -9,82 +12,88 @@ pub async fn search_servicenow(
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Search Knowledge Base articles
let search_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=5",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
let expanded_queries = expand_query(query);
tracing::info!("Searching ServiceNow: {}", search_url);
let mut all_results = Vec::new();
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow search request failed: {e}"))?;
for expanded_query in expanded_queries.iter().take(MAX_EXPANDED_QUERIES) {
// Search Knowledge Base articles
let search_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=5",
instance_url.trim_end_matches('/'),
urlencoding::encode(expanded_query),
urlencoding::encode(expanded_query)
);
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"ServiceNow search failed with status {status}: {text}"
));
}
tracing::info!("Searching ServiceNow with query: {}", expanded_query);
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ServiceNow search response: {e}"))?;
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow search request failed: {e}"))?;
let mut results = Vec::new();
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
tracing::warn!("ServiceNow search failed with status {status}: {text}");
continue;
}
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter().take(3) {
// Take top 3 results
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ServiceNow search response: {e}"))?;
let sys_id = item["sys_id"].as_str().unwrap_or("").to_string();
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter().take(MAX_EXPANDED_QUERIES) {
// Take top 3 results
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let url = format!(
"{}/kb_view.do?sysparm_article={}",
instance_url.trim_end_matches('/'),
sys_id
);
let sys_id = item["sys_id"].as_str().unwrap_or("").to_string();
let excerpt = item["text"]
.as_str()
.unwrap_or("")
.chars()
.take(300)
.collect::<String>();
let url = format!(
"{}/kb_view.do?sysparm_article={}",
instance_url.trim_end_matches('/'),
sys_id
);
// Get full article content
let content = item["text"].as_str().map(|text| {
if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text.to_string()
}
});
let excerpt = item["text"]
.as_str()
.unwrap_or("")
.chars()
.take(300)
.collect::<String>();
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
// Get full article content
let content = item["text"].as_str().map(|text| {
if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text.to_string()
}
});
all_results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
}
}
}
Ok(results)
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
Ok(all_results)
}
/// Search ServiceNow Incidents for related issues
@ -96,68 +105,78 @@ pub async fn search_incidents(
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Search incidents
let search_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
let expanded_queries = expand_query(query);
tracing::info!("Searching ServiceNow incidents: {}", search_url);
let mut all_results = Vec::new();
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow incident search failed: {e}"))?;
for expanded_query in expanded_queries.iter().take(MAX_EXPANDED_QUERIES) {
// Search incidents
let search_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(expanded_query),
urlencoding::encode(expanded_query)
);
if !resp.status().is_success() {
return Ok(Vec::new()); // Don't fail if incident search fails
}
tracing::info!(
"Searching ServiceNow incidents with query: {}",
expanded_query
);
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse incident response".to_string())?;
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow incident search failed: {e}"))?;
let mut results = Vec::new();
if !resp.status().is_success() {
continue; // Don't fail if incident search fails
}
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter() {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse incident response".to_string())?;
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={}",
instance_url.trim_end_matches('/'),
sys_id
);
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter() {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let description = item["description"].as_str().unwrap_or("").to_string();
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={}",
instance_url.trim_end_matches('/'),
sys_id
);
let resolution = item["close_notes"].as_str().unwrap_or("").to_string();
let description = item["description"].as_str().unwrap_or("").to_string();
let content = format!("Description: {description}\nResolution: {resolution}");
let resolution = item["close_notes"].as_str().unwrap_or("").to_string();
let excerpt = content.chars().take(200).collect::<String>();
let content = format!("Description: {description}\nResolution: {resolution}");
results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
let excerpt = content.chars().take(200).collect::<String>();
all_results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
}
}
}
Ok(results)
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
Ok(all_results)
}
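
Both search_servicenow and search_incidents now share one shape: expand the query, fan out at most MAX_EXPANDED_QUERIES requests, accumulate into all_results, then sort and dedup by URL. The merge step is worth naming, because Vec::dedup_by only removes consecutive duplicates; a condensed sketch (the helper itself is not in the diff, it just names the pattern):

fn merge_results(mut all_results: Vec<SearchResult>) -> Vec<SearchResult> {
    // Sort by the dedup key first: dedup_by only collapses *adjacent*
    // duplicates, and sorting also makes the final ordering deterministic
    // across differently-ordered expanded queries.
    all_results.sort_by(|a, b| a.url.cmp(&b.url));
    all_results.dedup_by(|a, b| a.url == b.url);
    all_results
}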

View File

@ -6,6 +6,7 @@ use serde_json::Value;
use tauri::WebviewWindow;
use super::confluence_search::SearchResult;
use crate::integrations::query_expansion::expand_query;
/// Execute an HTTP request from within the webview context
/// This automatically includes all cookies (including HttpOnly) from the authenticated session
@ -123,106 +124,113 @@ pub async fn search_confluence_webview<R: tauri::Runtime>(
base_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords from the query for better search
// Remove common words and extract important terms
let keywords = extract_keywords(query);
let expanded_queries = expand_query(query);
// Build CQL query with OR logic for keywords
let cql = if keywords.len() > 1 {
// Multiple keywords - search for any of them
let keyword_conditions: Vec<String> =
keywords.iter().map(|k| format!("text ~ \"{k}\"")).collect();
keyword_conditions.join(" OR ")
} else if !keywords.is_empty() {
// Single keyword
let keyword = &keywords[0];
format!("text ~ \"{keyword}\"")
} else {
// Fallback to original query
format!("text ~ \"{query}\"")
};
let mut all_results = Vec::new();
let search_url = format!(
"{}/rest/api/search?cql={}&limit=10",
base_url.trim_end_matches('/'),
urlencoding::encode(&cql)
);
for expanded_query in expanded_queries.iter().take(3) {
// Extract keywords from the query for better search
// Remove common words and extract important terms
let keywords = extract_keywords(expanded_query);
tracing::info!("Executing Confluence search via webview with CQL: {}", cql);
// Build CQL query with OR logic for keywords
let cql = if keywords.len() > 1 {
// Multiple keywords - search for any of them
let keyword_conditions: Vec<String> =
keywords.iter().map(|k| format!("text ~ \"{k}\"")).collect();
keyword_conditions.join(" OR ")
} else if !keywords.is_empty() {
// Single keyword
let keyword = &keywords[0];
format!("text ~ \"{keyword}\"")
} else {
// Fallback to expanded query
format!("text ~ \"{expanded_query}\"")
};
let response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
let search_url = format!(
"{}/rest/api/search?cql={}&limit=10",
base_url.trim_end_matches('/'),
urlencoding::encode(&cql)
);
let mut results = Vec::new();
tracing::info!("Executing Confluence search via webview with CQL: {}", cql);
if let Some(results_array) = response.get("results").and_then(|v| v.as_array()) {
for item in results_array.iter().take(5) {
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
let content_id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
let response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
let url = if let (Some(id), Some(space)) = (content_id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id
)
} else {
base_url.to_string()
};
if let Some(results_array) = response.get("results").and_then(|v| v.as_array()) {
for item in results_array.iter().take(5) {
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
let content_id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
let excerpt = item["excerpt"]
.as_str()
.unwrap_or("")
.replace("<span class=\"highlight\">", "")
.replace("</span>", "");
let url = if let (Some(id), Some(space)) = (content_id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id
)
} else {
base_url.to_string()
};
// Fetch full page content
let content = if let Some(id) = content_id {
let content_url = format!(
"{}/rest/api/content/{id}?expand=body.storage",
base_url.trim_end_matches('/')
);
if let Ok(content_resp) =
fetch_from_webview(webview_window, &content_url, "GET", None).await
{
if let Some(body) = content_resp
.get("body")
.and_then(|b| b.get("storage"))
.and_then(|s| s.get("value"))
.and_then(|v| v.as_str())
let excerpt = item["excerpt"]
.as_str()
.unwrap_or("")
.replace("<span class=\"highlight\">", "")
.replace("</span>", "");
// Fetch full page content
let content = if let Some(id) = content_id {
let content_url = format!(
"{}/rest/api/content/{id}?expand=body.storage",
base_url.trim_end_matches('/')
);
if let Ok(content_resp) =
fetch_from_webview(webview_window, &content_url, "GET", None).await
{
let text = strip_html_simple(body);
Some(if text.len() > 3000 {
format!("{}...", &text[..3000])
if let Some(body) = content_resp
.get("body")
.and_then(|b| b.get("storage"))
.and_then(|s| s.get("value"))
.and_then(|v| v.as_str())
{
let text = strip_html_simple(body);
Some(if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text
})
} else {
text
})
None
}
} else {
None
}
} else {
None
}
} else {
None
};
};
results.push(SearchResult {
title,
url,
excerpt: excerpt.chars().take(300).collect(),
content,
source: "Confluence".to_string(),
});
all_results.push(SearchResult {
title,
url,
excerpt: excerpt.chars().take(300).collect(),
content,
source: "Confluence".to_string(),
});
}
}
}
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
tracing::info!(
"Confluence webview search returned {} results",
results.len()
all_results.len()
);
Ok(results)
Ok(all_results)
}
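
The CQL construction above collapses to a three-way case on the keyword list. Pulled out as a standalone helper for readability (the helper name is ours; the clause syntax is exactly what the loop builds):

fn build_cql(keywords: &[String], fallback: &str) -> String {
    match keywords {
        // No usable keywords: search the raw expanded query text.
        [] => format!("text ~ \"{fallback}\""),
        // Single keyword: one clause.
        [k] => format!("text ~ \"{k}\""),
        // Multiple keywords: match any of them.
        many => many
            .iter()
            .map(|k| format!("text ~ \"{k}\""))
            .collect::<Vec<_>>()
            .join(" OR "),
    }
}

For keywords ["vesta", "nxt"] this yields text ~ "vesta" OR text ~ "nxt", which is then percent-encoded into the /rest/api/search URL.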
/// Extract keywords from a search query
@ -296,92 +304,99 @@ pub async fn search_servicenow_webview<R: tauri::Runtime>(
instance_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
let mut results = Vec::new();
let expanded_queries = expand_query(query);
// Search knowledge base
let kb_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=3",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
let mut all_results = Vec::new();
tracing::info!("Executing ServiceNow KB search via webview");
for expanded_query in expanded_queries.iter().take(3) {
// Search knowledge base
let kb_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=3",
instance_url.trim_end_matches('/'),
urlencoding::encode(expanded_query),
urlencoding::encode(expanded_query)
);
if let Ok(kb_response) = fetch_from_webview(webview_window, &kb_url, "GET", None).await {
if let Some(kb_array) = kb_response.get("result").and_then(|v| v.as_array()) {
for item in kb_array {
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/kb_view.do?sysparm_article={sys_id}",
instance_url.trim_end_matches('/')
);
let text = item["text"].as_str().unwrap_or("");
let excerpt = text.chars().take(300).collect();
let content = Some(if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text.to_string()
});
tracing::info!("Executing ServiceNow KB search via webview with expanded query");
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
if let Ok(kb_response) = fetch_from_webview(webview_window, &kb_url, "GET", None).await {
if let Some(kb_array) = kb_response.get("result").and_then(|v| v.as_array()) {
for item in kb_array {
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/kb_view.do?sysparm_article={sys_id}",
instance_url.trim_end_matches('/')
);
let text = item["text"].as_str().unwrap_or("");
let excerpt = text.chars().take(300).collect();
let content = Some(if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text.to_string()
});
all_results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
}
}
}
// Search incidents
let inc_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(expanded_query),
urlencoding::encode(expanded_query)
);
if let Ok(inc_response) = fetch_from_webview(webview_window, &inc_url, "GET", None).await {
if let Some(inc_array) = inc_response.get("result").and_then(|v| v.as_array()) {
for item in inc_array {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={sys_id}",
instance_url.trim_end_matches('/')
);
let description = item["description"].as_str().unwrap_or("");
let resolution = item["close_notes"].as_str().unwrap_or("");
let content = format!("Description: {description}\nResolution: {resolution}");
let excerpt = content.chars().take(200).collect();
all_results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
}
}
}
}
// Search incidents
let inc_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
if let Ok(inc_response) = fetch_from_webview(webview_window, &inc_url, "GET", None).await {
if let Some(inc_array) = inc_response.get("result").and_then(|v| v.as_array()) {
for item in inc_array {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={sys_id}",
instance_url.trim_end_matches('/')
);
let description = item["description"].as_str().unwrap_or("");
let resolution = item["close_notes"].as_str().unwrap_or("");
let content = format!("Description: {description}\nResolution: {resolution}");
let excerpt = content.chars().take(200).collect();
results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
}
}
}
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
tracing::info!(
"ServiceNow webview search returned {} results",
results.len()
all_results.len()
);
Ok(results)
Ok(all_results)
}
/// Search Azure DevOps wiki using webview fetch
@ -391,82 +406,89 @@ pub async fn search_azuredevops_wiki_webview<R: tauri::Runtime>(
project: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords for better search
let keywords = extract_keywords(query);
let expanded_queries = expand_query(query);
let search_text = if !keywords.is_empty() {
keywords.join(" ")
} else {
query.to_string()
};
let mut all_results = Vec::new();
// Azure DevOps wiki search API
let search_url = format!(
"{}/{}/_apis/wiki/wikis?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
for expanded_query in expanded_queries.iter().take(3) {
// Extract keywords for better search
let keywords = extract_keywords(expanded_query);
tracing::info!(
"Executing Azure DevOps wiki search via webview for: {}",
search_text
);
let search_text = if !keywords.is_empty() {
keywords.join(" ")
} else {
expanded_query.clone()
};
// First, get list of wikis
let wikis_response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
// Azure DevOps wiki search API
let search_url = format!(
"{}/{}/_apis/wiki/wikis?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
let mut results = Vec::new();
tracing::info!(
"Executing Azure DevOps wiki search via webview for: {}",
search_text
);
if let Some(wikis_array) = wikis_response.get("value").and_then(|v| v.as_array()) {
// Search each wiki
for wiki in wikis_array.iter().take(3) {
let wiki_id = wiki["id"].as_str().unwrap_or("");
// First, get list of wikis
let wikis_response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
if wiki_id.is_empty() {
continue;
}
if let Some(wikis_array) = wikis_response.get("value").and_then(|v| v.as_array()) {
// Search each wiki
for wiki in wikis_array.iter().take(3) {
let wiki_id = wiki["id"].as_str().unwrap_or("");
// Search wiki pages
let pages_url = format!(
"{}/{}/_apis/wiki/wikis/{}/pages?recursionLevel=Full&includeContent=true&api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project),
urlencoding::encode(wiki_id)
);
if wiki_id.is_empty() {
continue;
}
if let Ok(pages_response) =
fetch_from_webview(webview_window, &pages_url, "GET", None).await
{
// Try to get "page" field, or use the response itself if it's the page object
if let Some(page) = pages_response.get("page") {
search_page_recursive(
page,
&search_text,
org_url,
project,
wiki_id,
&mut results,
);
} else {
// Response might be the page object itself
search_page_recursive(
&pages_response,
&search_text,
org_url,
project,
wiki_id,
&mut results,
);
// Search wiki pages
let pages_url = format!(
"{}/{}/_apis/wiki/wikis/{}/pages?recursionLevel=Full&includeContent=true&api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project),
urlencoding::encode(wiki_id)
);
if let Ok(pages_response) =
fetch_from_webview(webview_window, &pages_url, "GET", None).await
{
// Try to get "page" field, or use the response itself if it's the page object
if let Some(page) = pages_response.get("page") {
search_page_recursive(
page,
&search_text,
org_url,
project,
wiki_id,
&mut all_results,
);
} else {
// Response might be the page object itself
search_page_recursive(
&pages_response,
&search_text,
org_url,
project,
wiki_id,
&mut all_results,
);
}
}
}
}
}
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
tracing::info!(
"Azure DevOps wiki webview search returned {} results",
results.len()
all_results.len()
);
Ok(results)
Ok(all_results)
}
/// Recursively search through wiki pages for matching content
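
The body of search_page_recursive sits outside the changed hunks, so only its call shape is visible in this diff. A hypothetical sketch of such a walker, assuming the Azure DevOps pages payload exposes path, content, and subPages (the field names and URL shape are assumptions based on the wiki Pages API, not taken from this diff):

fn search_page_recursive(
    page: &serde_json::Value,
    search_text: &str,
    org_url: &str,
    project: &str,
    wiki_id: &str,
    results: &mut Vec<SearchResult>,
) {
    let content = page["content"].as_str().unwrap_or("");
    // Case-insensitive substring match against the keyword-joined search text.
    if content.to_lowercase().contains(&search_text.to_lowercase()) {
        let path = page["path"].as_str().unwrap_or("/");
        results.push(SearchResult {
            title: path.trim_start_matches('/').to_string(),
            url: format!(
                "{}/{}/_wiki/wikis/{}?pagePath={}",
                org_url.trim_end_matches('/'),
                project,
                wiki_id,
                urlencoding::encode(path)
            ),
            excerpt: content.chars().take(300).collect(),
            content: Some(content.chars().take(3000).collect()),
            source: "Azure DevOps".to_string(),
        });
    }
    // Recurse into child pages returned by recursionLevel=Full.
    if let Some(children) = page["subPages"].as_array() {
        for child in children {
            search_page_recursive(child, search_text, org_url, project, wiki_id, results);
        }
    }
}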
@ -544,115 +566,124 @@ pub async fn search_azuredevops_workitems_webview<R: tauri::Runtime>(
project: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords
let keywords = extract_keywords(query);
let expanded_queries = expand_query(query);
// Check if query contains a work item ID (pure number)
let work_item_id: Option<i64> = keywords
.iter()
.filter(|k| k.chars().all(|c| c.is_numeric()))
.filter_map(|k| k.parse::<i64>().ok())
.next();
let mut all_results = Vec::new();
// Build WIQL query
let wiql_query = if let Some(id) = work_item_id {
// Search by specific ID
format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
FROM WorkItems WHERE [System.Id] = {id}"
)
} else {
// Search by text in title/description
let search_terms = if !keywords.is_empty() {
keywords.join(" ")
for expanded_query in expanded_queries.iter().take(3) {
// Extract keywords
let keywords = extract_keywords(expanded_query);
// Check if query contains a work item ID (pure number)
let work_item_id: Option<i64> = keywords
.iter()
.filter(|k| k.chars().all(|c| c.is_numeric()))
.filter_map(|k| k.parse::<i64>().ok())
.next();
// Build WIQL query
let wiql_query = if let Some(id) = work_item_id {
// Search by specific ID
format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
FROM WorkItems WHERE [System.Id] = {id}"
)
} else {
query.to_string()
// Search by text in title/description
let search_terms = if !keywords.is_empty() {
keywords.join(" ")
} else {
expanded_query.clone()
};
// Use CONTAINS for text search (case-insensitive)
format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
FROM WorkItems WHERE [System.TeamProject] = '{project}' \
AND ([System.Title] CONTAINS '{search_terms}' OR [System.Description] CONTAINS '{search_terms}') \
ORDER BY [System.ChangedDate] DESC"
)
};
// Use CONTAINS for text search (case-insensitive)
format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
FROM WorkItems WHERE [System.TeamProject] = '{project}' \
AND ([System.Title] CONTAINS '{search_terms}' OR [System.Description] CONTAINS '{search_terms}') \
ORDER BY [System.ChangedDate] DESC"
)
};
let wiql_url = format!(
"{}/{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
let wiql_url = format!(
"{}/{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
let body = serde_json::json!({
"query": wiql_query
})
.to_string();
let body = serde_json::json!({
"query": wiql_query
})
.to_string();
tracing::info!("Executing Azure DevOps work item search via webview");
tracing::debug!("WIQL query: {}", wiql_query);
tracing::debug!("Request URL: {}", wiql_url);
tracing::info!("Executing Azure DevOps work item search via webview");
tracing::debug!("WIQL query: {}", wiql_query);
tracing::debug!("Request URL: {}", wiql_url);
let wiql_response =
fetch_from_webview(webview_window, &wiql_url, "POST", Some(&body)).await?;
let wiql_response = fetch_from_webview(webview_window, &wiql_url, "POST", Some(&body)).await?;
if let Some(work_items) = wiql_response.get("workItems").and_then(|v| v.as_array()) {
// Fetch details for first 5 work items
for item in work_items.iter().take(5) {
if let Some(id) = item.get("id").and_then(|i| i.as_i64()) {
let details_url = format!(
"{}/_apis/wit/workitems/{}?api-version=7.0",
org_url.trim_end_matches('/'),
id
);
let mut results = Vec::new();
if let Ok(details) =
fetch_from_webview(webview_window, &details_url, "GET", None).await
{
if let Some(fields) = details.get("fields") {
let title = fields
.get("System.Title")
.and_then(|t| t.as_str())
.unwrap_or("Untitled");
let work_item_type = fields
.get("System.WorkItemType")
.and_then(|t| t.as_str())
.unwrap_or("Item");
let description = fields
.get("System.Description")
.and_then(|d| d.as_str())
.unwrap_or("");
if let Some(work_items) = wiql_response.get("workItems").and_then(|v| v.as_array()) {
// Fetch details for first 5 work items
for item in work_items.iter().take(5) {
if let Some(id) = item.get("id").and_then(|i| i.as_i64()) {
let details_url = format!(
"{}/_apis/wit/workitems/{}?api-version=7.0",
org_url.trim_end_matches('/'),
id
);
let clean_description = strip_html_simple(description);
let excerpt = clean_description.chars().take(200).collect();
if let Ok(details) =
fetch_from_webview(webview_window, &details_url, "GET", None).await
{
if let Some(fields) = details.get("fields") {
let title = fields
.get("System.Title")
.and_then(|t| t.as_str())
.unwrap_or("Untitled");
let work_item_type = fields
.get("System.WorkItemType")
.and_then(|t| t.as_str())
.unwrap_or("Item");
let description = fields
.get("System.Description")
.and_then(|d| d.as_str())
.unwrap_or("");
let url =
format!("{}/_workitems/edit/{id}", org_url.trim_end_matches('/'));
let clean_description = strip_html_simple(description);
let excerpt = clean_description.chars().take(200).collect();
let full_content = if clean_description.len() > 3000 {
format!("{}...", &clean_description[..3000])
} else {
clean_description.clone()
};
let url = format!("{}/_workitems/edit/{id}", org_url.trim_end_matches('/'));
let full_content = if clean_description.len() > 3000 {
format!("{}...", &clean_description[..3000])
} else {
clean_description.clone()
};
results.push(SearchResult {
title: format!("{work_item_type} #{id}: {title}"),
url,
excerpt,
content: Some(full_content),
source: "Azure DevOps".to_string(),
});
all_results.push(SearchResult {
title: format!("{work_item_type} #{id}: {title}"),
url,
excerpt,
content: Some(full_content),
source: "Azure DevOps".to_string(),
});
}
}
}
}
}
}
all_results.sort_by(|a, b| a.url.cmp(&b.url));
all_results.dedup_by(|a, b| a.url == b.url);
tracing::info!(
"Azure DevOps work items webview search returned {} results",
results.len()
all_results.len()
);
Ok(results)
Ok(all_results)
}
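
One subtle branch above: a keyword made entirely of digits short-circuits the text search and becomes an ID lookup. Extracted as a helper to make that precedence explicit (the helper is a sketch; the filter/parse chain mirrors the diff):

fn detect_work_item_id(keywords: &[String]) -> Option<i64> {
    keywords
        .iter()
        // An empty string vacuously passes the all-digits check but then
        // fails to parse, so filter_map/next skips it, as in the original.
        .filter(|k| k.chars().all(|c| c.is_numeric()))
        .filter_map(|k| k.parse::<i64>().ok())
        .next()
}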
/// Add a comment to an Azure DevOps work item

View File

@ -69,6 +69,7 @@ pub fn run() {
commands::db::add_five_why,
commands::db::update_five_why,
commands::db::add_timeline_event,
commands::db::get_timeline_events,
// Analysis / PII
commands::analysis::upload_log_file,
commands::analysis::upload_log_file_by_content,
@ -120,6 +121,7 @@ pub fn run() {
commands::system::get_settings,
commands::system::update_settings,
commands::system::get_audit_log,
commands::system::get_app_version,
])
.run(tauri::generate_context!())
.expect("Error running Troubleshooting and RCA Assistant application");

View File

@ -6,7 +6,7 @@
"frontendDist": "../dist",
"devUrl": "http://localhost:1420",
"beforeDevCommand": "npm run dev",
"beforeBuildCommand": "npm run build"
"beforeBuildCommand": "npm run version:update && npm run build"
},
"app": {
"security": {
@ -26,7 +26,7 @@
},
"bundle": {
"active": true,
"targets": "all",
"targets": ["deb", "rpm", "nsis"],
"icon": [
"icons/32x32.png",
"icons/128x128.png",
@ -41,4 +41,7 @@
"shortDescription": "Troubleshooting and RCA Assistant",
"longDescription": "Structured AI-backed assistant for IT troubleshooting, 5-whys root cause analysis, and post-mortem documentation with offline Ollama support."
}
}
}

View File

@ -1,5 +1,4 @@
import React, { useState, useEffect } from "react";
import { getVersion } from "@tauri-apps/api/app";
import { Routes, Route, NavLink, useLocation } from "react-router-dom";
import {
Home,
@ -15,7 +14,7 @@ import {
Moon,
} from "lucide-react";
import { useSettingsStore } from "@/stores/settingsStore";
import { loadAiProvidersCmd, testProviderConnectionCmd } from "@/lib/tauriCommands";
import { getAppVersionCmd, loadAiProvidersCmd, testProviderConnectionCmd } from "@/lib/tauriCommands";
import Dashboard from "@/pages/Dashboard";
import NewIssue from "@/pages/NewIssue";
@ -50,7 +49,7 @@ export default function App() {
void useLocation();
useEffect(() => {
getVersion().then(setAppVersion).catch(() => {});
getAppVersionCmd().then(setAppVersion).catch(() => {});
}, []);
// Load providers and auto-test active provider on startup

View File

@ -331,6 +331,58 @@ When analyzing identity and access issues, focus on these key areas:
Always ask about the Keycloak version, realm configuration (external IdP vs local users vs LDAP), SSSD version and configured domains, and whether this is a first-time setup or a regression.`,
};
export const INCIDENT_RESPONSE_FRAMEWORK = `
---
## INCIDENT RESPONSE METHODOLOGY
Follow this structured framework for every triage conversation. Each phase must be completed with evidence before advancing.
### Phase 1: Detection & Evidence Gathering
- **Do NOT propose fixes** until the problem is fully understood
- Gather: error messages, timestamps, affected systems, scope of impact, recent changes
- Ask: "What changed? When did it start? Who/what is affected? What has been tried?"
- Record all evidence with UTC timestamps
- Establish a clear problem statement before proceeding
### Phase 2: Diagnosis & Hypothesis Testing
- Apply the scientific method: form hypotheses, test them with evidence
- **The 3-Fix Rule**: If you cannot confidently identify the root cause after 3 hypotheses, STOP and reassess your assumptions; you may be looking at the wrong system or the wrong layer
- Check the most common causes first (Occam's Razor): DNS, certificates, disk space, permissions, recent deployments
- Differentiate between symptoms and causes; treat causes, not symptoms
- Use binary search to narrow scope: which component, which layer, which change
### Phase 3: Root Cause Analysis with 5-Whys
- Each "Why" must be backed by evidence, not speculation
- If you cannot provide evidence for a "Why", state what investigation is needed to confirm
- Look for systemic issues, not just proximate causes
- The root cause should explain ALL observed symptoms, not just some
- Common root cause categories: configuration drift, capacity exhaustion, dependency failure, race condition, human error in process
### Phase 4: Resolution & Prevention
- **Immediate fix**: What stops the bleeding right now? (rollback, restart, failover)
- **Permanent fix**: What prevents recurrence? (code fix, config change, automation)
- **Runbook update**: Document the fix for future oncall engineers
- Verify the fix resolves ALL symptoms, not just the primary one
- Monitor for regression after applying the fix
### Phase 5: Post-Incident Review
- Calculate incident metrics: MTTD (detect), MTTA (acknowledge), MTTR (resolve)
- Conduct blameless post-mortem focused on systems and processes
- Identify action items with owners and due dates
- Categories: monitoring gaps, process improvements, technical debt, training needs
- Ask: "What would have prevented this? What would have detected it faster? What would have resolved it faster?"
### Communication Practices
- State your current phase explicitly (e.g., "We are in Phase 2: Diagnosis")
- Summarize findings at each phase transition
- Flag assumptions clearly: "ASSUMPTION: ..." vs "CONFIRMED: ..."
- When advancing the Why level, explicitly state the evidence chain
`;
export function getDomainPrompt(domainId: string): string {
return domainPrompts[domainId] ?? "";
const domainSpecific = domainPrompts[domainId] ?? "";
if (!domainSpecific) return "";
return domainSpecific + INCIDENT_RESPONSE_FRAMEWORK;
}
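
Phase 5 of the framework names MTTD, MTTA, and MTTR without spelling out the arithmetic. A hedged sketch, in Rust to match the backend code in this changeset; the timestamp fields and exact definitions (detect minus start, acknowledge minus detect, resolve minus detect) are assumptions, and the "mean" in each metric comes from averaging these per-incident durations across incidents:

use std::time::Duration;

// Illustrative per-incident timings; field names are assumptions.
struct Incident {
    started_at: u64,      // epoch seconds when impact began
    detected_at: u64,     // first alert or report
    acknowledged_at: u64, // responder engaged
    resolved_at: u64,     // impact ended
}

impl Incident {
    fn time_to_detect(&self) -> Duration {
        Duration::from_secs(self.detected_at.saturating_sub(self.started_at))
    }
    fn time_to_acknowledge(&self) -> Duration {
        Duration::from_secs(self.acknowledged_at.saturating_sub(self.detected_at))
    }
    fn time_to_resolve(&self) -> Duration {
        Duration::from_secs(self.resolved_at.saturating_sub(self.detected_at))
    }
}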

View File

@ -74,9 +74,11 @@ export interface FiveWhyEntry {
export interface TimelineEvent {
id: string;
issue_id: string;
event_type: string;
description: string;
created_at: number;
metadata: string;
created_at: string;
}
export interface AiConversation {
@ -104,6 +106,7 @@ export interface IssueDetail {
image_attachments: ImageAttachment[];
resolution_steps: ResolutionStep[];
conversations: AiConversation[];
timeline_events: TimelineEvent[];
}
export interface IssueSummary {
@ -268,8 +271,8 @@ export interface TriageMessage {
export const analyzeLogsCmd = (issueId: string, logFileIds: string[], providerConfig: ProviderConfig) =>
invoke<AnalysisResult>("analyze_logs", { issueId, logFileIds, providerConfig });
export const chatMessageCmd = (issueId: string, message: string, providerConfig: ProviderConfig) =>
invoke<ChatResponse>("chat_message", { issueId, message, providerConfig });
export const chatMessageCmd = (issueId: string, message: string, providerConfig: ProviderConfig, systemPrompt?: string) =>
invoke<ChatResponse>("chat_message", { issueId, message, providerConfig, systemPrompt: systemPrompt ?? null });
export const listProvidersCmd = () => invoke<ProviderInfo[]>("list_providers");
@ -361,8 +364,11 @@ export const addFiveWhyCmd = (
export const updateFiveWhyCmd = (entryId: string, answer: string) =>
invoke<void>("update_five_why", { entryId, answer });
export const addTimelineEventCmd = (issueId: string, eventType: string, description: string) =>
invoke<TimelineEvent>("add_timeline_event", { issueId, eventType, description });
export const addTimelineEventCmd = (issueId: string, eventType: string, description: string, metadata?: string) =>
invoke<TimelineEvent>("add_timeline_event", { issueId, eventType, description, metadata: metadata ?? null });
export const getTimelineEventsCmd = (issueId: string) =>
invoke<TimelineEvent[]>("get_timeline_events", { issueId });
// ─── Document commands ────────────────────────────────────────────────────────
@ -486,3 +492,8 @@ export const loadAiProvidersCmd = () =>
export const deleteAiProviderCmd = (name: string) =>
invoke<void>("delete_ai_provider", { name });
// ─── System / Version ─────────────────────────────────────────────────────────
export const getAppVersionCmd = () =>
invoke<string>("get_app_version");

View File

@ -5,7 +5,7 @@ import { DocEditor } from "@/components/DocEditor";
import { useSettingsStore } from "@/stores/settingsStore";
import {
generatePostmortemCmd,
addTimelineEventCmd,
updateDocumentCmd,
exportDocumentCmd,
type Document_,
@ -28,6 +28,7 @@ export default function Postmortem() {
const generated = await generatePostmortemCmd(id);
setDoc(generated);
setContent(generated.content_md);
addTimelineEventCmd(id, "postmortem_generated", "Post-mortem document generated").catch(() => {});
} catch (err) {
setError(String(err));
} finally {
@ -54,6 +55,7 @@ export default function Postmortem() {
try {
const path = await exportDocumentCmd(doc.id, doc.title, content, format, "");
setError(`Document exported to: ${path}`);
addTimelineEventCmd(id!, "document_exported", `Post-mortem exported as ${format}`).catch(() => {});
setTimeout(() => setError(null), 5000);
} catch (err) {
setError(`Export failed: ${String(err)}`);

View File

@ -8,6 +8,7 @@ import {
generateRcaCmd,
updateDocumentCmd,
exportDocumentCmd,
addTimelineEventCmd,
type Document_,
} from "@/lib/tauriCommands";
@ -29,6 +30,7 @@ export default function RCA() {
const generated = await generateRcaCmd(id);
setDoc(generated);
setContent(generated.content_md);
addTimelineEventCmd(id, "rca_generated", "RCA document generated").catch(() => {});
} catch (err) {
setError(String(err));
} finally {
@ -55,6 +57,7 @@ export default function RCA() {
try {
const path = await exportDocumentCmd(doc.id, doc.title, content, format, "");
setError(`Document exported to: ${path}`);
addTimelineEventCmd(id!, "document_exported", `RCA exported as ${format}`).catch(() => {});
setTimeout(() => setError(null), 5000);
} catch (err) {
setError(`Export failed: ${String(err)}`);

View File

@ -15,6 +15,7 @@ import {
updateIssueCmd,
addFiveWhyCmd,
} from "@/lib/tauriCommands";
import { getDomainPrompt } from "@/lib/domainPrompts";
import type { TriageMessage } from "@/lib/tauriCommands";
const CLOSE_PATTERNS = [
@ -167,7 +168,8 @@ export default function Triage() {
setPendingFiles([]);
try {
const response = await chatMessageCmd(id, aiMessage, provider);
const systemPrompt = currentIssue ? getDomainPrompt(currentIssue.category) : undefined;
const response = await chatMessageCmd(id, aiMessage, provider, systemPrompt);
const assistantMsg: TriageMessage = {
id: `asst-${Date.now()}`,
issue_id: id,

View File

@ -42,11 +42,8 @@ describe("Audit Log", () => {
it("displays audit entries", async () => {
render(<Security />);
// Wait for audit log to load
await screen.findByText("Audit Log");
// Check that the table has rows (header + data rows)
const table = screen.getByRole("table");
// Wait for table to appear after async audit data loads
const table = await screen.findByRole("table");
expect(table).toBeInTheDocument();
const rows = screen.getAllByRole("row");
@ -56,9 +53,7 @@ describe("Audit Log", () => {
it("provides way to view transmitted data details", async () => {
render(<Security />);
await screen.findByText("Audit Log");
// Should have View/Hide buttons for expanding details
// Wait for async data to load and render the table
const viewButtons = await screen.findAllByRole("button", { name: /View/i });
expect(viewButtons.length).toBeGreaterThan(0);
});
@ -66,14 +61,13 @@ describe("Audit Log", () => {
it("details column or button exists for viewing data", async () => {
render(<Security />);
await screen.findByText("Audit Log");
// Wait for async data to load and render the table
await screen.findByRole("table");
// The audit log should have a Details column header
const detailsHeader = screen.getByText("Details");
expect(detailsHeader).toBeInTheDocument();
// Should have view buttons
const viewButtons = await screen.findAllByRole("button", { name: /View/i });
const viewButtons = screen.getAllByRole("button", { name: /View/i });
expect(viewButtons.length).toBe(2); // One for each mock entry
});
});

View File

@ -0,0 +1,63 @@
import { describe, it, expect } from "vitest";
import { getDomainPrompt, DOMAINS, INCIDENT_RESPONSE_FRAMEWORK } from "@/lib/domainPrompts";
describe("Domain Prompts with Incident Response Framework", () => {
it("exports INCIDENT_RESPONSE_FRAMEWORK constant", () => {
expect(INCIDENT_RESPONSE_FRAMEWORK).toBeDefined();
expect(typeof INCIDENT_RESPONSE_FRAMEWORK).toBe("string");
expect(INCIDENT_RESPONSE_FRAMEWORK.length).toBeGreaterThan(100);
});
it("framework contains all 5 phases", () => {
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Phase 1: Detection & Evidence Gathering");
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Phase 2: Diagnosis & Hypothesis Testing");
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Phase 3: Root Cause Analysis with 5-Whys");
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Phase 4: Resolution & Prevention");
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Phase 5: Post-Incident Review");
});
it("framework contains the 3-Fix Rule", () => {
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("3-Fix Rule");
});
it("framework contains communication practices", () => {
expect(INCIDENT_RESPONSE_FRAMEWORK).toContain("Communication Practices");
});
it("all defined domains include incident response methodology", () => {
for (const domain of DOMAINS) {
const prompt = getDomainPrompt(domain.id);
if (prompt) {
expect(prompt).toContain("INCIDENT RESPONSE METHODOLOGY");
expect(prompt).toContain("Phase 1:");
expect(prompt).toContain("Phase 5:");
}
}
});
it("returns empty string for unknown domain", () => {
expect(getDomainPrompt("nonexistent_domain")).toBe("");
expect(getDomainPrompt("")).toBe("");
});
it("preserves existing Linux domain content", () => {
const prompt = getDomainPrompt("linux");
expect(prompt).toContain("senior Linux systems engineer");
expect(prompt).toContain("RHEL");
expect(prompt).toContain("INCIDENT RESPONSE METHODOLOGY");
});
it("preserves existing Kubernetes domain content", () => {
const prompt = getDomainPrompt("kubernetes");
expect(prompt).toContain("Kubernetes platform engineer");
expect(prompt).toContain("k3s");
expect(prompt).toContain("INCIDENT RESPONSE METHODOLOGY");
});
it("preserves existing Network domain content", () => {
const prompt = getDomainPrompt("network");
expect(prompt).toContain("network engineer");
expect(prompt).toContain("Fortigate");
expect(prompt).toContain("INCIDENT RESPONSE METHODOLOGY");
});
});

View File

@ -35,6 +35,7 @@ const mockIssueDetail = {
},
],
conversations: [],
timeline_events: [],
};
describe("Resolution Page", () => {

View File

@ -0,0 +1,54 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { invoke } from "@tauri-apps/api/core";
const mockInvoke = vi.mocked(invoke);
describe("Timeline Event Commands", () => {
beforeEach(() => {
mockInvoke.mockReset();
});
it("addTimelineEventCmd calls invoke with correct params", async () => {
const mockEvent = {
id: "te-1",
issue_id: "issue-1",
event_type: "triage_started",
description: "Started",
metadata: "{}",
created_at: "2025-01-15 10:00:00 UTC",
};
mockInvoke.mockResolvedValueOnce(mockEvent as never);
const { addTimelineEventCmd } = await import("@/lib/tauriCommands");
const result = await addTimelineEventCmd("issue-1", "triage_started", "Started");
expect(mockInvoke).toHaveBeenCalledWith("add_timeline_event", {
issueId: "issue-1",
eventType: "triage_started",
description: "Started",
metadata: null,
});
expect(result).toEqual(mockEvent);
});
it("addTimelineEventCmd passes metadata when provided", async () => {
mockInvoke.mockResolvedValueOnce({} as never);
const { addTimelineEventCmd } = await import("@/lib/tauriCommands");
await addTimelineEventCmd("issue-1", "log_uploaded", "File uploaded", '{"file":"app.log"}');
expect(mockInvoke).toHaveBeenCalledWith("add_timeline_event", {
issueId: "issue-1",
eventType: "log_uploaded",
description: "File uploaded",
metadata: '{"file":"app.log"}',
});
});
it("getTimelineEventsCmd calls invoke with correct params", async () => {
mockInvoke.mockResolvedValueOnce([] as never);
const { getTimelineEventsCmd } = await import("@/lib/tauriCommands");
const result = await getTimelineEventsCmd("issue-1");
expect(mockInvoke).toHaveBeenCalledWith("get_timeline_events", { issueId: "issue-1" });
expect(result).toEqual([]);
});
});