Compare commits

...

82 Commits

Author SHA1 Message Date
24c8293359 Merge pull request 'fix(db,auth): auto-generate encryption keys for release builds' (#24) from feature/ai-tool-calling-integration-search into master
All checks were successful
Auto Tag / autotag (push) Successful in 57s
Auto Tag / wiki-sync (push) Successful in 1m11s
Auto Tag / build-windows-amd64 (push) Successful in 14m52s
Auto Tag / build-linux-amd64 (push) Successful in 28m24s
Auto Tag / build-linux-arm64 (push) Successful in 29m22s
Auto Tag / build-macos-arm64 (push) Successful in 19m16s
Reviewed-on: #24
2026-04-06 23:25:59 +00:00
Shaun Arman
608a614c68 fix(lint): use inline format args in auth.rs
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m56s
Test / frontend-tests (pull_request) Successful in 1m54s
Test / rust-fmt-check (pull_request) Successful in 4m55s
Test / rust-clippy (pull_request) Successful in 21m34s
Test / rust-tests (pull_request) Successful in 22m46s
Fixes clippy::uninlined_format_args warnings by using inline
variable formatting (e.g., {e} instead of {}, e).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-06 17:58:08 -05:00
Shaun Arman
118d410050 fix(db,auth): auto-generate encryption keys for release builds
Some checks failed
Test / frontend-tests (pull_request) Successful in 1m48s
Test / frontend-typecheck (pull_request) Successful in 1m50s
Test / rust-fmt-check (pull_request) Successful in 4m44s
Test / rust-clippy (pull_request) Failing after 21m7s
Test / rust-tests (pull_request) Successful in 22m22s
Fixes the following issues preventing Mac release builds from working (a minimal sketch of the key generation and migration follows the list):

1. Database encryption key auto-generation: Release builds now
   auto-generate and persist the SQLCipher encryption key to
   ~/.../trcaa/.dbkey (mode 0600) instead of requiring the
   TFTSR_DB_KEY env var. This prevents 'file is not a database'
   errors when users don't set the env var.

2. Plain SQLite to encrypted migration: When a release build
   encounters a plain SQLite database (from a previous debug build),
   it now automatically migrates it to encrypted SQLCipher format
   using ATTACH DATABASE + sqlcipher_export. Creates a backup at
   .db.plain-backup before migration.

3. Credential encryption key auto-generation: Applied the same
   pattern to TFTSR_ENCRYPTION_KEY for encrypting AI provider API
   keys and integration tokens. Release builds now auto-generate
   and persist to ~/.../trcaa/.enckey (mode 0600) instead of
   failing with 'TFTSR_ENCRYPTION_KEY must be set'.

4. Refactored app data directory helper: Moved dirs_data_dir()
   from lib.rs to state.rs as get_app_data_dir() so it can be
   reused by both database and auth modules.
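
A minimal sketch of items 1-3, assuming rand + rusqlite and hypothetical function names; only the .dbkey filename, the 0600 mode, and the ATTACH DATABASE + sqlcipher_export steps come from this message, and sqlcipher_export exists only when rusqlite is built with its sqlcipher feature:

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt; // Unix-only, matches the 0600 note above
use std::path::Path;

use rand::RngCore;
use rusqlite::Connection;

/// Load <data_dir>/.dbkey, generating and persisting a key on first launch.
fn load_or_generate_db_key(data_dir: &Path) -> std::io::Result<String> {
    let key_path = data_dir.join(".dbkey");
    if key_path.exists() {
        return Ok(fs::read_to_string(&key_path)?.trim().to_string());
    }
    let mut bytes = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut bytes);
    let key: String = bytes.iter().map(|b| format!("{b:02x}")).collect();
    fs::write(&key_path, &key)?;
    // Owner-only permissions so other local users cannot read the key.
    fs::set_permissions(&key_path, fs::Permissions::from_mode(0o600))?;
    Ok(key)
}

/// Copy a plain SQLite database into an encrypted SQLCipher file.
fn migrate_plain_to_encrypted(plain: &Path, encrypted: &Path, key: &str) -> rusqlite::Result<()> {
    let conn = Connection::open(plain)?;
    conn.execute(
        "ATTACH DATABASE ?1 AS encrypted KEY ?2",
        rusqlite::params![encrypted.to_string_lossy().into_owned(), key],
    )?;
    // sqlcipher_export copies every table, index, and trigger across.
    conn.query_row("SELECT sqlcipher_export('encrypted')", [], |_row| Ok(()))?;
    conn.execute("DETACH DATABASE encrypted", [])?;
    Ok(())
}

fn main() {
    // Illustrative driver only; paths are placeholders.
    let dir = std::env::temp_dir();
    let key = load_or_generate_db_key(&dir).expect("key");
    migrate_plain_to_encrypted(&dir.join("plain.db"), &dir.join("enc.db"), &key).expect("migrate");
}
```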

Testing:
- All unit tests pass (db::connection::tests + integrations::auth::tests)
- Verified manual migration from plain to encrypted database
- No clippy warnings

Impact: Users installing the Mac release build will now have a
working app out-of-the-box without needing to set environment
variables. Developers switching from debug to release builds will
have their databases automatically migrated.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-06 17:21:31 -05:00
18ae96fdd2 Merge pull request 'feat(ai): add tool-calling and integration search as AI data source' (#23) from feature/ai-tool-calling-integration-search into master
All checks were successful
Auto Tag / autotag (push) Successful in 1m3s
Auto Tag / wiki-sync (push) Successful in 1m13s
Auto Tag / build-macos-arm64 (push) Successful in 4m35s
Auto Tag / build-windows-amd64 (push) Successful in 14m46s
Auto Tag / build-linux-amd64 (push) Successful in 28m41s
Auto Tag / build-linux-arm64 (push) Successful in 29m33s
Reviewed-on: #23
2026-04-06 21:19:02 +00:00
Shaun Arman
c1581251a6 fix(fmt): apply rustfmt formatting to webview_fetch.rs
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m58s
Test / frontend-tests (pull_request) Successful in 1m53s
Test / rust-fmt-check (pull_request) Successful in 4m52s
Test / rust-clippy (pull_request) Successful in 21m21s
Test / rust-tests (pull_request) Successful in 22m32s
Adjusted line breaks to match rustfmt conventions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-06 15:20:53 -05:00
Shaun Arman
5b385c3599 fix(lint): resolve all clippy warnings for CI compliance
Some checks failed
Test / frontend-tests (pull_request) Successful in 1m58s
Test / frontend-typecheck (pull_request) Successful in 2m1s
Test / rust-fmt-check (pull_request) Failing after 4m52s
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Fixed 42 clippy warnings across integration and command modules:
- unnecessary_lazy_evaluations: Changed unwrap_or_else to unwrap_or
- uninlined_format_args: Modernized format strings to use inline syntax
- needless_borrows_for_generic_args: Removed unnecessary borrows
- only_used_in_recursion: Prefixed unused recursive param with underscore

All files now pass cargo clippy -- -D warnings

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-06 15:14:19 -05:00
Shaun Arman
609f696add feat(ai): add tool-calling and integration search as AI data source
Some checks failed
Test / frontend-typecheck (pull_request) Successful in 1m44s
Test / frontend-tests (pull_request) Successful in 1m44s
Test / rust-fmt-check (pull_request) Successful in 5m10s
Test / rust-clippy (pull_request) Failing after 21m58s
Test / rust-tests (pull_request) Successful in 23m8s
This commit implements two major features:

1. Integration Search as Primary AI Data Source
   - Confluence, ServiceNow, and Azure DevOps searches execute before AI queries
   - Search results injected as system context for AI providers
   - Parallel search execution for performance (see the sketch after this list)
   - Webview-based fetch for HttpOnly cookie support
   - Persistent browser windows maintain authenticated sessions
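
A minimal sketch of the parallel fan-out, with stub search functions standing in for the real clients in the *_search.rs modules listed below:

```rust
// Illustrative stubs only; the real clients call Confluence, ServiceNow,
// and Azure DevOps over HTTP.
async fn search_confluence(query: &str) -> Vec<String> { vec![format!("confluence hit for {query}")] }
async fn search_servicenow(query: &str) -> Vec<String> { vec![format!("servicenow hit for {query}")] }
async fn search_ado(query: &str) -> Vec<String> { vec![format!("ado hit for {query}")] }

/// Run all three searches concurrently and flatten the results into the
/// system context handed to the AI provider.
async fn gather_context(query: &str) -> Vec<String> {
    let (mut hits, sn, ado) = tokio::join!(
        search_confluence(query),
        search_servicenow(query),
        search_ado(query)
    );
    hits.extend(sn);
    hits.extend(ado);
    hits
}

#[tokio::main]
async fn main() {
    for line in gather_context("disk full on node-7").await {
        println!("{line}");
    }
}
```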

2. AI Tool-Calling (Function Calling)
   - Allows AI to automatically execute functions during conversation
   - Implemented for OpenAI-compatible providers and MSI GenAI
   - Created add_ado_comment tool for updating Azure DevOps tickets
   - Iterative tool-calling loop supports multi-step workflows (sketched after this list)
   - Extensible architecture for adding new tools
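
A minimal sketch of that loop; every type and function here is a stand-in, not the project's actual API in src-tauri/src/ai/ and commands/ai.rs:

```rust
struct ToolCall {
    name: String,
    arguments: String, // JSON-encoded arguments from the model
}

struct Reply {
    content: String,
    tool_calls: Vec<ToolCall>,
}

fn call_provider(_transcript: &[String]) -> Reply {
    // Stand-in for one chat-completion round trip with the tools parameter set.
    Reply { content: "done".into(), tool_calls: Vec::new() }
}

fn execute_tool(name: &str, arguments: &str) -> String {
    // Stand-in dispatch; the commit's one concrete tool is add_ado_comment.
    format!("executed {name} with {arguments}")
}

/// Keep calling the model until it answers without requesting a tool,
/// feeding each tool result back into the transcript (bounded so a
/// misbehaving model cannot loop forever).
fn run_with_tools(mut transcript: Vec<String>, max_rounds: usize) -> String {
    for _ in 0..max_rounds {
        let reply = call_provider(&transcript);
        if reply.tool_calls.is_empty() {
            return reply.content;
        }
        for call in reply.tool_calls {
            let result = execute_tool(&call.name, &call.arguments);
            transcript.push(format!("tool {} returned: {result}", call.name));
        }
    }
    "tool-calling round limit reached".into()
}

fn main() {
    println!("{}", run_with_tools(vec!["user: update the ticket".into()], 5));
}
```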

Key Files:
- src-tauri/src/ai/tools.rs (NEW) - Tool definitions
- src-tauri/src/integrations/*_search.rs (NEW) - Integration search modules
- src-tauri/src/integrations/webview_fetch.rs (NEW) - HttpOnly cookie workaround
- src-tauri/src/commands/ai.rs - Tool execution and integration search
- src-tauri/src/ai/openai.rs - Tool-calling for OpenAI and MSI GenAI
- All providers updated with tools parameter support

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-06 13:36:45 -05:00
b4d8dfc154 Merge pull request 'fix(ci): remove explicit docker.sock mount — act_runner mounts it automatically' (#22) from fix/build-images-duplicate-socket into master
All checks were successful
Auto Tag / autotag (push) Successful in 52s
Auto Tag / wiki-sync (push) Successful in 59s
Auto Tag / build-windows-amd64 (push) Successful in 15m41s
Auto Tag / build-linux-amd64 (push) Successful in 29m29s
Auto Tag / build-linux-arm64 (push) Successful in 29m55s
Auto Tag / build-macos-arm64 (push) Successful in 5m5s
Reviewed-on: #22
2026-04-06 02:18:55 +00:00
Shaun Arman
b0c1167b20 fix(ci): remove explicit docker.sock mount — act_runner mounts it automatically
Some checks failed
Test / rust-fmt-check (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
2026-04-05 21:18:11 -05:00
7112fbc0c1 Merge pull request 'feat(ci): add persistent pre-baked Docker builder images' (#21) from feat/persistent-ci-builders into master
Some checks failed
Build CI Docker Images / linux-amd64 (push) Failing after 1s
Build CI Docker Images / windows-cross (push) Failing after 3s
Build CI Docker Images / linux-arm64 (push) Failing after 1s
Auto Tag / autotag (push) Successful in 54s
Auto Tag / wiki-sync (push) Successful in 55s
Auto Tag / build-windows-amd64 (push) Has been cancelled
Auto Tag / build-linux-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Auto Tag / build-macos-arm64 (push) Has been cancelled
Reviewed-on: #21
2026-04-06 02:15:36 +00:00
Shaun Arman
9b388e736d feat(ci): add persistent pre-baked Docker builder images
Some checks are pending
Test / rust-clippy (pull_request) Waiting to run
Test / rust-tests (pull_request) Waiting to run
Test / frontend-typecheck (pull_request) Waiting to run
Test / frontend-tests (pull_request) Waiting to run
Test / rust-fmt-check (pull_request) Successful in 3m57s
Add three Dockerfiles under .docker/ and a build-images.yml workflow that
pushes them to the local Gitea container registry (172.0.0.29:3000).

Each image pre-installs all system deps, Node.js 22, and the Rust cross-
compilation target so release builds can skip apt-get entirely:

  trcaa-linux-amd64:rust1.88-node22   — webkit2gtk, gtk3, all Tauri deps
  trcaa-windows-cross:rust1.88-node22 — mingw-w64, nsis, Windows target
  trcaa-linux-arm64:rust1.88-node22   — arm64 multiarch dev libs, Rust 1.88

build-images.yml triggers automatically when .docker/ changes on master
and supports workflow_dispatch for manual/first-time builds.

auto-tag.yml is NOT changed in this commit — switch it to use the new
images in the follow-up PR (after images are pushed to the registry).

One-time server setup required before first use:
  echo '{"insecure-registries":["172.0.0.29:3000"]}' \
    | sudo tee /etc/docker/daemon.json && sudo systemctl restart docker
2026-04-05 21:07:17 -05:00
ca9eec46d1 Merge pull request 'feat(ui): UI fixes, theme toggle, PII persistence, Ollama install instructions' (#20) from feat/ui-fixes-ollama-bundle-theme into master
Some checks failed
Auto Tag / autotag (push) Successful in 1m20s
Auto Tag / wiki-sync (push) Successful in 1m19s
Auto Tag / build-macos-arm64 (push) Successful in 6m24s
Auto Tag / build-linux-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Auto Tag / build-windows-amd64 (push) Has been cancelled
Reviewed-on: #20
2026-04-06 01:54:36 +00:00
Shaun Arman
72625d590b refactor(ollama): remove download/install buttons — show plain install instructions only
Some checks failed
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
2026-04-05 20:53:57 -05:00
Shaun Arman
1be4c48690 fix(ci): remove all Ollama bundle download steps — use UI download button instead 2026-04-05 20:53:57 -05:00
Shaun Arman
ff69cf6b11 fix(ci): skip Ollama download on macOS build — runner has no access to GitHub binary assets 2026-04-05 20:53:57 -05:00
2ba0eaf97f Merge pull request 'feat(ui): fix model dropdown, auth prefill, PII persistence, theme toggle, Ollama bundle' (#19) from feat/ui-fixes-ollama-bundle-theme into master
Some checks failed
Auto Tag / autotag (push) Successful in 54s
Auto Tag / wiki-sync (push) Successful in 55s
Auto Tag / build-macos-arm64 (push) Failing after 19s
Auto Tag / build-windows-amd64 (push) Failing after 7m36s
Auto Tag / build-linux-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Reviewed-on: #19
2026-04-06 01:12:34 +00:00
Shaun Arman
69b749bc62 style: apply cargo fmt to install_ollama_from_bundle
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m49s
Test / frontend-tests (pull_request) Successful in 1m46s
Test / rust-fmt-check (pull_request) Successful in 5m36s
Test / rust-clippy (pull_request) Successful in 27m7s
Test / rust-tests (pull_request) Successful in 28m12s
2026-04-05 19:41:59 -05:00
Shaun Arman
b4b9f2a477 fix(security): add path canonicalization and actionable permission error in install_ollama_from_bundle
Some checks failed
Test / rust-fmt-check (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
2026-04-05 19:34:47 -05:00
Shaun Arman
733a763c34 test(store): add PII pattern persistence tests for settingsStore
Some checks failed
Test / rust-fmt-check (pull_request) Failing after 6m10s
Test / frontend-typecheck (pull_request) Successful in 2m51s
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
2026-04-05 19:33:23 -05:00
Shaun Arman
5b6348c97e feat(ui): fix model dropdown, auth prefill, PII persistence, theme toggle, and Ollama bundle
Some checks failed
Test / frontend-typecheck (pull_request) Successful in 1m47s
Test / frontend-tests (pull_request) Successful in 1m51s
Test / rust-fmt-check (pull_request) Failing after 5m16s
Test / rust-clippy (pull_request) Successful in 34m2s
Test / rust-tests (pull_request) Successful in 35m13s
- AIProviders: hide the top model row when custom_rest is active (the dropdown lower in the form handles it);
  clear auth header prefill on format switch; rename User ID / CORE ID → Email Address
- Dashboard + Ollama: add border-border/bg-card classes to Refresh buttons for dark-bg contrast
- Security + settingsStore: wire PII toggle state to persisted Zustand store so pattern
  selections survive app restarts
- App: add Sun/Moon theme toggle button to sidebar footer (always visible when collapsed)
- system.rs: add install_ollama_from_bundle command (copies bundled binary to /usr/local/bin)
- auto-tag.yml: add Download Ollama step to all 4 platform build jobs with SHA256 verification
- tauri.conf.json: add resources/ollama/* to bundle resources
- docs: add install_ollama_from_bundle to IPC-Commands wiki

Security: CI download steps verify SHA256 against Ollama's published sha256sums.txt before bundling.
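
The checksum comparison itself is a shell step in auto-tag.yml; this Rust snippet (using the sha2 crate) only illustrates the kind of digest check performed:

```rust
use sha2::{Digest, Sha256};

fn sha256_hex(bytes: &[u8]) -> String {
    let mut hasher = Sha256::new();
    hasher.update(bytes);
    hasher.finalize().iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    // Self-contained check: the well-known digest of empty input stands in
    // for a line from Ollama's published sha256sums.txt.
    assert_eq!(
        sha256_hex(b""),
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    );
    println!("checksum matched, safe to bundle");
}
```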
2026-04-05 19:30:41 -05:00
12b98752ee Merge pull request 'feat(rebrand): rename binary to trcaa and auto-generate DB key' (#18) from feat/rebrand-binary-trcaa into master
All checks were successful
Auto Tag / autotag (push) Successful in 1m13s
Auto Tag / wiki-sync (push) Successful in 1m12s
Auto Tag / build-macos-arm64 (push) Successful in 6m26s
Auto Tag / build-windows-amd64 (push) Successful in 14m26s
Auto Tag / build-linux-amd64 (push) Successful in 27m54s
Auto Tag / build-linux-arm64 (push) Successful in 28m39s
Reviewed-on: #18
2026-04-05 23:17:05 +00:00
Shaun Arman
97ccf556c3 feat(rebrand): rename binary to trcaa and auto-generate DB key
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m49s
Test / frontend-tests (pull_request) Successful in 1m47s
Test / rust-fmt-check (pull_request) Successful in 4m51s
Test / rust-clippy (pull_request) Successful in 21m14s
Test / rust-tests (pull_request) Successful in 22m22s
- Rename Cargo package from 'tftsr' to 'trcaa' — installed command
  becomes 'trcaa' instead of 'tftsr'
- Update app data directories to ~/.local/share/trcaa (Linux),
  ~/Library/Application Support/trcaa (macOS), %APPDATA%/trcaa (Windows)
- Update bundle identifier to com.trcaa.app
- Auto-generate per-installation DB encryption key on first launch and
  persist to <data_dir>/.dbkey (mode 0600 on Unix) — removes the hard
  requirement for TFTSR_DB_KEY to be set before the app will start
2026-04-05 17:50:16 -05:00
e45e2e935f Merge pull request 'fix(ci): restrict arm64 bundles to deb,rpm — skip AppImage' (#17) from fix/arm64-skip-appimage into master
Some checks failed
Auto Tag / autotag (push) Successful in 1m35s
Auto Tag / wiki-sync (push) Successful in 1m11s
Auto Tag / build-windows-amd64 (push) Successful in 14m35s
Auto Tag / build-linux-amd64 (push) Successful in 27m35s
Auto Tag / build-linux-arm64 (push) Successful in 28m23s
Auto Tag / build-macos-arm64 (push) Failing after 16m26s
Reviewed-on: #17
2026-04-05 22:04:51 +00:00
Shaun Arman
813cff56b3 fix(ci): restrict arm64 bundles to deb,rpm — skip AppImage
Some checks failed
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
linuxdeploy-aarch64.AppImage cannot be reliably executed in a cross-
compile context (amd64 host, aarch64 target) even with QEMU binfmt
and APPIMAGE_EXTRACT_AND_RUN. The .deb and .rpm cover all major arm64
Linux distros. An arm64 AppImage can be added later via a native
arm64 build job if required.
2026-04-05 17:02:20 -05:00
3b1125d5d5 Merge pull request 'fix(ci): set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling' (#16) from fix/arm64-appimage-fuse into master
Some checks failed
Auto Tag / autotag (push) Successful in 54s
Auto Tag / wiki-sync (push) Successful in 1m11s
Auto Tag / build-macos-arm64 (push) Successful in 4m38s
Auto Tag / build-windows-amd64 (push) Successful in 14m44s
Auto Tag / build-linux-amd64 (push) Successful in 27m38s
Auto Tag / build-linux-arm64 (push) Failing after 18m13s
Reviewed-on: #16
2026-04-05 20:57:02 +00:00
Shaun Arman
fd9272a693 fix(ci): set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling
Some checks failed
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
linuxdeploy and its plugins are themselves AppImages. Inside a Docker
container FUSE is unavailable, so they cannot self-mount. Setting
APPIMAGE_EXTRACT_AND_RUN=1 causes them to extract to a temp directory
and run directly, bypassing the FUSE requirement.
2026-04-05 15:56:09 -05:00
9ccd78d497 Merge pull request 'fix(ci): add make to arm64 host tools for OpenSSL vendored build' (#15) from fix/arm64-missing-make into master
Some checks failed
Auto Tag / wiki-sync (push) Successful in 1m11s
Auto Tag / autotag (push) Successful in 1m14s
Auto Tag / build-windows-amd64 (push) Successful in 15m16s
Auto Tag / build-macos-arm64 (push) Successful in 8m1s
Auto Tag / build-linux-amd64 (push) Successful in 27m35s
Auto Tag / build-linux-arm64 (push) Failing after 28m23s
Reviewed-on: #15
2026-04-05 20:10:50 +00:00
Shaun Arman
b214ac7e6a fix(ci): add make to arm64 host tools for OpenSSL vendored build
Some checks failed
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
openssl-src compiles OpenSSL from source and requires make.
The old Debian image had it; it was not carried over to the
Ubuntu 22.04 host tools list.
2026-04-05 15:09:22 -05:00
2af23f8e95 Merge pull request 'fix(ci): use POSIX dot instead of source in arm64 build step' (#14) from fix/arm64-source-sh into master
Some checks failed
Auto Tag / autotag (push) Successful in 1m26s
Auto Tag / wiki-sync (push) Successful in 1m28s
Auto Tag / build-macos-arm64 (push) Successful in 6m15s
Auto Tag / build-windows-amd64 (push) Successful in 14m49s
Auto Tag / build-linux-arm64 (push) Failing after 22m1s
Auto Tag / build-linux-amd64 (push) Has been cancelled
Reviewed-on: #14
2026-04-05 19:42:49 +00:00
Shaun Arman
5991bf3f7f fix(ci): use POSIX dot instead of source in arm64 build step
Some checks failed
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
The act runner executes run: blocks with sh (dash), not bash.
'source' is a bash built-in; POSIX sh uses '.' instead.

Co-Authored-By: fix/arm64-source-sh <noreply@local>
2026-04-05 14:41:18 -05:00
289801f9f0 Merge pull request 'fix(ci): remove GITHUB_PATH append that was breaking arm64 install step' (#13) from fix/arm64-github-path into master
Some checks failed
Auto Tag / autotag (push) Successful in 50s
Auto Tag / wiki-sync (push) Successful in 52s
Auto Tag / build-windows-amd64 (push) Successful in 15m39s
Auto Tag / build-linux-amd64 (push) Successful in 27m28s
Auto Tag / build-linux-arm64 (push) Failing after 11m8s
Auto Tag / build-macos-arm64 (push) Has been cancelled
Reviewed-on: #13
2026-04-05 19:06:01 +00:00
Shaun Arman
3caaab8657 fix(ci): remove GITHUB_PATH append that was breaking arm64 install step
Some checks failed
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
$GITHUB_PATH is unset in this Gitea Actions environment, causing the
echo redirect to fail with a non-zero exit, which killed the Install
dependencies step before the Build step could run.

The append was unnecessary — the Build step already sources
$HOME/.cargo/env as its first line, which puts Cargo's bin dir in PATH.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 14:04:32 -05:00
4ca0da4aef Merge pull request 'fix(ci): switch build-linux-arm64 to Ubuntu 22.04 with ports mirror' (#12) from fix/yaml-heredoc-indent into master
Some checks failed
Auto Tag / autotag (push) Successful in 48s
Auto Tag / wiki-sync (push) Successful in 50s
Auto Tag / build-macos-arm64 (push) Successful in 5m53s
Auto Tag / build-linux-amd64 (push) Successful in 17m27s
Auto Tag / build-windows-amd64 (push) Successful in 13m2s
Auto Tag / build-linux-arm64 (push) Failing after 10m13s
Reviewed-on: #12
2026-04-05 18:15:16 +00:00
Shaun Arman
6fea24181d docs: update CI pipeline wiki and add ticket summary for arm64 fix
All checks were successful
Test / frontend-tests (pull_request) Successful in 1m42s
Test / frontend-typecheck (pull_request) Successful in 1m44s
Test / rust-fmt-check (pull_request) Successful in 4m31s
Test / rust-clippy (pull_request) Successful in 20m16s
Test / rust-tests (pull_request) Successful in 21m28s
Documents the Ubuntu 22.04 + ports.ubuntu.com approach for arm64
cross-compilation and adds a Known Issues entry explaining the Debian
single-mirror multiarch root cause that was replaced.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 12:51:30 -05:00
Shaun Arman
9bff15a960 fix(ci): switch build-linux-arm64 to Ubuntu 22.04 with ports mirror
The Debian single-mirror multiarch approach causes irreconcilable
apt dependency conflicts when both amd64 and arm64 point at the same
repo: the binary-all index is duplicated and certain -dev package pairs
lack Multi-Arch: same. This produces "held broken packages" regardless
of sources.list tweaks.

Ubuntu 22.04 routes arm64 through ports.ubuntu.com/ubuntu-ports, a
separate mirror from archive.ubuntu.com (amd64). This eliminates all
cross-arch index overlaps. Rust is installed via rustup since it is not
pre-installed in the Ubuntu base image. libayatana-appindicator3-dev
is dropped — no tray icon is used by this application.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 12:51:19 -05:00
d1f429d8e4 Merge pull request 'fix(ci): replace heredoc with printf in arm64 install step' (#11) from fix/yaml-heredoc-indent into master
Some checks failed
Auto Tag / autotag (push) Successful in 1m22s
Auto Tag / wiki-sync (push) Successful in 1m23s
Auto Tag / build-linux-arm64 (push) Failing after 3m38s
Auto Tag / build-macos-arm64 (push) Successful in 5m44s
Auto Tag / build-windows-amd64 (push) Successful in 14m17s
Auto Tag / build-linux-amd64 (push) Successful in 21m37s
Reviewed-on: #11
2026-04-05 17:12:11 +00:00
Shaun Arman
f1247520c7 fix(ci): replace heredoc with printf in arm64 install step
Some checks failed
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
YAML block scalars end when a line is found with less indentation than
the scalar's own indent level. The heredoc body was at column 0 while
the rest of the run: block was at column 10, causing Gitea's YAML parser
to reject the entire workflow file with:

  yaml: line 412: could not find expected ':'

This silently invalidated auto-tag.yml on every push to master since the
apt-sources commit was merged, which is why PR#9 and PR#10 merges produced
no action runs.

Fix: replace the heredoc with a printf that stays within the block scalar's
indentation so the YAML remains valid.
2026-04-05 12:11:12 -05:00
b9220ef04c Merge pull request 'fix(ci): add workflow_dispatch and concurrency guard to auto-tag' (#10) from fix/auto-tag-dispatch into master
Reviewed-on: #10
2026-04-05 17:06:09 +00:00
Shaun Arman
c8ead60607 fix(ci): add workflow_dispatch and concurrency guard to auto-tag
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m51s
Test / frontend-tests (pull_request) Successful in 1m50s
Test / rust-fmt-check (pull_request) Successful in 4m21s
Test / rust-clippy (pull_request) Successful in 20m14s
Test / rust-tests (pull_request) Successful in 21m25s
Gitea 1.22 silently drops a push event for a workflow when a run for that
same workflow+branch is already in progress. This caused the PR#9 merge to
master to produce no auto-tag run.

- workflow_dispatch: allows manual triggering via API when an event is dropped
- concurrency group (cancel-in-progress: false): causes Gitea to queue a second
  run rather than discard it when one is already active
2026-04-05 11:41:21 -05:00
c1c8fb726d Merge pull request 'fix(ci): rebuild apt sources with per-arch entries before arm64 cross-compile' (#9) from bug/build-failure into master
Reviewed-on: #9
2026-04-05 16:32:20 +00:00
Shaun Arman
9d9dcd1d9a fix(ci): rebuild apt sources with per-arch entries before arm64 cross-compile install
All checks were successful
Test / frontend-typecheck (pull_request) Successful in 1m11s
Test / frontend-tests (pull_request) Successful in 1m18s
Test / rust-fmt-check (pull_request) Successful in 4m55s
Test / rust-clippy (pull_request) Successful in 23m46s
Test / rust-tests (pull_request) Successful in 25m1s
rust:1.88-slim (Debian Bookworm) uses DEB822-format sources which have no arch
restriction. After dpkg --add-architecture arm64, apt tries to resolve deps for
both amd64 and arm64 simultaneously and hits 'held broken packages' conflicts on
shared -dev packages.

Fix: remove debian.sources and write a clean sources.list that pins amd64 repos
to [arch=amd64] and arm64 repos to [arch=arm64]. This gives apt a clear,
non-conflicting view of each architecture's package set.
2026-04-05 11:05:46 -05:00
350013e038 Merge pull request 'security/audit' (#8) from security/audit into master
Some checks failed
Auto Tag / autotag (push) Successful in 44s
Auto Tag / wiki-sync (push) Successful in 51s
Auto Tag / build-linux-arm64 (push) Failing after 3m22s
Auto Tag / build-windows-amd64 (push) Successful in 15m24s
Auto Tag / build-macos-arm64 (push) Failing after 13m25s
Auto Tag / build-linux-amd64 (push) Successful in 27m50s
Reviewed-on: #8
2026-04-05 15:56:26 +00:00
Shaun Arman
404614a8b3 fix(ci): fix arm64 cross-compile, drop cargo install tauri-cli, move wiki-sync
All checks were successful
Test / frontend-tests (pull_request) Successful in 1m43s
Test / frontend-typecheck (pull_request) Successful in 1m50s
Test / rust-fmt-check (pull_request) Successful in 4m23s
Test / rust-clippy (pull_request) Successful in 20m6s
Test / rust-tests (pull_request) Successful in 21m17s
build-linux-arm64: switch from QEMU-emulated linux-arm64 runner to cross-compile
on linux-amd64 using aarch64-linux-gnu toolchain. Removes the uname -m arch guard
that was causing the job to exit immediately (QEMU reports x86_64 as kernel arch),
and fixes the artifact path to the explicit target directory.

All build jobs: replace `cargo install tauri-cli --locked` with `npx tauri build`,
using the pre-compiled @tauri-apps/cli binary from devDependencies. Eliminates the
20-30 min Tauri CLI recompilation on every run.

wiki-sync: move from test.yml to auto-tag.yml. test.yml only fires on pull_request
events so the `if: github.ref == 'refs/heads/master'` guard was never true and the
wiki was never updated. auto-tag.yml triggers on push to master, so wiki sync now
runs on every merge.

Update releaseWorkflowCrossPlatformArtifacts.test.ts to match the new workflow.
2026-04-05 10:33:53 -05:00
95ccb8671b Merge branch 'master' into security/audit
Some checks failed
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / wiki-sync (pull_request) Has been cancelled
2026-04-05 15:10:21 +00:00
Shaun Arman
dc4bb8109d fix(security): enforce PII redaction before AI log transmission
Some checks failed
Test / frontend-typecheck (pull_request) Successful in 1m44s
Test / wiki-sync (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
analyze_logs() was reading the original log file from disk and sending its
full contents to external AI providers, completely bypassing the redaction
pipeline. The redacted flag in log_files and the .redacted file on disk were
written by apply_redactions() but never consulted on the read path.

Fix: query the redacted column alongside file_path. If the file has not been
redacted, return an error to the caller before any AI provider call is made.
When redacted, read from {path}.redacted instead of the original.

Adds redacted_path_for() helper and two unit tests covering the rejection
and happy-path cases.
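
A minimal sketch of the hardened read path; redacted_path_for and the log_files columns are named above, everything else is illustrative:

```rust
use std::path::{Path, PathBuf};

use rusqlite::Connection;

/// {path} -> {path}.redacted, the file written by apply_redactions().
fn redacted_path_for(original: &Path) -> PathBuf {
    let mut p = original.as_os_str().to_owned();
    p.push(".redacted");
    PathBuf::from(p)
}

/// Resolve the only path analyze_logs() is allowed to read.
fn readable_log_path(conn: &Connection, log_id: i64) -> Result<PathBuf, String> {
    let (path, redacted): (String, bool) = conn
        .query_row(
            "SELECT file_path, redacted FROM log_files WHERE id = ?1",
            [log_id],
            |row| Ok((row.get(0)?, row.get(1)?)),
        )
        .map_err(|e| e.to_string())?;
    if !redacted {
        // Fail before any AI provider call is made.
        return Err("log file has not been redacted yet".into());
    }
    Ok(redacted_path_for(Path::new(&path)))
}

fn main() {
    let conn = Connection::open_in_memory().expect("open");
    conn.execute_batch(
        "CREATE TABLE log_files (id INTEGER PRIMARY KEY, file_path TEXT, redacted INTEGER);
         INSERT INTO log_files VALUES (1, '/tmp/app.log', 1);",
    )
    .expect("seed");
    println!("{}", readable_log_path(&conn, 1).expect("path").display());
}
```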
2026-04-05 10:08:16 -05:00
Shaun Arman
3b51027dd8 fix(pii): remove lookahead from hostname regex, fix fmt in analysis test
All checks were successful
Test / wiki-sync (pull_request) Has been skipped
Test / frontend-tests (pull_request) Successful in 1m41s
Test / frontend-typecheck (pull_request) Successful in 1m50s
Test / rust-fmt-check (pull_request) Successful in 4m37s
Test / rust-clippy (pull_request) Successful in 21m57s
Test / rust-tests (pull_request) Successful in 23m8s
Rust's `regex` crate does not support lookaround assertions. The hostname
pattern `(?=.{1,253}\b)` caused a panic on every `PiiDetector::new()` call,
failing all four PII detector tests in CI (rust-fmt-check, rust-clippy,
rust-tests). Removed the lookahead; the remaining pattern correctly matches
valid FQDNs without the RFC 1035 length pre-check.

Also reformatted analysis.rs:253 to satisfy `rustfmt` (line break after `=`).

All 127 Rust tests pass and `cargo fmt --check` and `cargo clippy -- -D
warnings` are clean.
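
A minimal sketch of the failure and the fix; the hostname pattern shown is illustrative, not the detector's exact regex:

```rust
use regex::Regex;

fn main() {
    // Lookaround is rejected at pattern-compile time by the regex crate;
    // an unwrap() on that error is what turns it into a constructor panic.
    assert!(Regex::new(r"(?=.{1,253}\b)").is_err());

    // Lookahead-free FQDN pattern; the RFC 1035 length cap is enforced
    // imperatively instead of with a lookahead pre-check.
    let re = Regex::new(
        r"\b(?:[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,63}\b",
    )
    .unwrap();
    let candidate = "db01.internal.example.com";
    assert!(candidate.len() <= 253 && re.is_match(candidate));
}
```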
2026-04-05 09:59:19 -05:00
Shaun Arman
e117cb30c4 fix(security): harden secret handling and audit integrity
Some checks failed
Test / frontend-typecheck (pull_request) Successful in 1m59s
Test / wiki-sync (pull_request) Has been skipped
Test / frontend-tests (pull_request) Successful in 1m44s
Test / rust-fmt-check (pull_request) Failing after 4m23s
Test / rust-clippy (pull_request) Failing after 22m44s
Test / rust-tests (pull_request) Failing after 24m0s
Remove high-risk defaults and tighten data handling across auth, storage, IPC, provider calls, and capabilities so sensitive data is better protected by default. Also update README/wiki security guidance and add targeted tests for the new hardening behaviors.

Made-with: Cursor
2026-04-04 23:37:05 -05:00
Shaun Arman
fec9c77972 fix(ci): unblock release jobs and namespace linux artifacts by arch
Some checks failed
Auto Tag / autotag (push) Successful in 52s
Auto Tag / build-windows-amd64 (push) Successful in 18m0s
Auto Tag / build-linux-arm64 (push) Failing after 21m7s
Auto Tag / build-macos-arm64 (push) Failing after 14m8s
Auto Tag / build-linux-amd64 (push) Successful in 32m29s
Drop fragile job-condition gates that were blocking release jobs, and upload linux artifacts with arch-prefixed release asset names so amd64 and arm64 outputs can coexist even when bundle filenames are identical.

Made-with: Cursor
2026-04-04 23:19:40 -05:00
Shaun Arman
49ed727c79 fix(ci): unblock release jobs and namespace linux artifacts by arch
Drop fragile job-condition gates that were blocking release jobs, and upload linux artifacts with arch-prefixed release asset names so amd64 and arm64 outputs can coexist even when bundle filenames are identical.

Made-with: Cursor
2026-04-04 23:17:12 -05:00
Shaun Arman
6de7cfb104 fix(ci): run linux arm release natively and enforce arm artifacts
Some checks failed
Auto Tag / autotag (push) Successful in 50s
Auto Tag / build-macos-arm64 (push) Failing after 11m15s
Auto Tag / build-windows-amd64 (push) Successful in 18m15s
Auto Tag / build-linux-arm64 (push) Failing after 18m33s
Auto Tag / build-linux-amd64 (push) Successful in 29m19s
Avoid cross-compiling GTK/glib on the arm release job by building natively on ARM64 hosts, add an explicit architecture guard, and restrict uploads to arm64/aarch64 artifact filenames so amd64 outputs cannot be published as arm releases.

Made-with: Cursor
2026-04-04 22:46:23 -05:00
Shaun Arman
2bf5a03d8a fix(ci): force explicit linux arm64 target for release artifacts
Some checks failed
Auto Tag / autotag (push) Successful in 49s
Auto Tag / build-windows-amd64 (push) Successful in 18m14s
Auto Tag / build-linux-arm64 (push) Failing after 27m6s
Auto Tag / build-macos-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Build linux arm64 bundles with --target aarch64-unknown-linux-gnu and upload from the target-specific bundle path so arm64 releases cannot accidentally publish amd64 artifacts.

Made-with: Cursor
2026-04-04 22:15:02 -05:00
Shaun Arman
04c834c58e refactor(ci): remove standalone release workflow
All checks were successful
Auto Tag / autotag (push) Successful in 52s
Auto Tag / build-macos-arm64 (push) Successful in 5m59s
Auto Tag / build-windows-amd64 (push) Successful in 17m49s
Auto Tag / build-linux-amd64 (push) Successful in 34m58s
Auto Tag / build-linux-arm64 (push) Successful in 34m57s
Delete .gitea/workflows/release.yml and keep release orchestration in auto-tag.yml only, then update related workflow tests and docs to reference the unified pipeline.

Made-with: Cursor
2026-04-04 21:34:15 -05:00
Shaun Arman
8b60d616c3 fix(ci): repair auto-tag workflow yaml so jobs trigger
Some checks failed
Auto Tag / autotag (push) Successful in 44s
Auto Tag / build-linux-arm64 (push) Has been cancelled
Auto Tag / build-windows-amd64 (push) Has been cancelled
Auto Tag / build-macos-arm64 (push) Has been cancelled
Auto Tag / build-linux-amd64 (push) Has been cancelled
Replace heredoc-based Python error logging with single-line python invocations to keep YAML block indentation valid, restoring Gitea's ability to parse and trigger auto-tag plus downstream release build jobs.

Made-with: Cursor
2026-04-04 21:28:52 -05:00
Shaun Arman
0427d7808b fix(ci): run post-tag release builds without job-output gating
Remove auto-tag job output dependencies and conditional gates so release build jobs always run after autotag completes, resolving skipped fan-out caused by output/if evaluation issues in Gitea Actions.

Made-with: Cursor
2026-04-04 21:24:24 -05:00
Shaun Arman
af4a07cffa fix(ci): use stable auto-tag job outputs for release fanout
Rename the auto-tag job id to a non-hyphenated identifier and update needs/output references so dependent release jobs evaluate conditions correctly and reliably run after tagging.

Made-with: Cursor
2026-04-04 21:21:35 -05:00
Shaun Arman
aabd746d15 fix(ci): guarantee release jobs run after auto-tag
Run linux/windows/macos/arm release build and upload jobs in the auto-tag workflow with needs:auto-tag outputs so release execution no longer depends on a second tag-triggered workflow dispatch path.

Made-with: Cursor
2026-04-04 21:19:13 -05:00
Shaun Arman
2480fa339b fix(ci): trigger release workflow from auto-tag pushes
All checks were successful
Auto Tag / auto-tag (push) Successful in 45s
Switch auto-tag to create and push tags via git instead of the tag API so Gitea emits a real tag push event that reliably starts release builds. Document the trigger behavior and add a workflow regression test.

Made-with: Cursor
2026-04-04 21:14:41 -05:00
28f01ebca5 Merge pull request 'fix(ci): harden release asset uploads for reruns' (#7) from fix/release-upload-rerun-hardening into master
All checks were successful
Auto Tag / auto-tag (push) Successful in 19s
Reviewed-on: #7
2026-04-05 02:10:54 +00:00
Shaun Arman
81fff3aa32 fix(ci): harden release asset uploads for reruns
Some checks failed
Test / wiki-sync (pull_request) Has been skipped
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Make all release upload steps fail fast when expected artifacts are missing, replace existing same-name assets before uploading, and print HTTP/body details on upload failures so Linux/Windows publishing issues are diagnosable and reruns remain deterministic.

Made-with: Cursor
2026-04-04 21:09:03 -05:00
b75557fc6b Merge pull request 'fix(ci): stabilize release artifacts for windows and linux' (#6) from fix/release-windows-openssl-linux-assets into master
All checks were successful
Auto Tag / auto-tag (push) Successful in 5s
Release / build-windows-amd64 (push) Successful in 17m47s
Release / build-linux-amd64 (push) Successful in 35m26s
Release / build-linux-arm64 (push) Successful in 35m27s
Release / build-macos-arm64 (push) Successful in 11m43s
Reviewed-on: #6
2026-04-05 01:21:31 +00:00
Shaun Arman
493732724b fix(ci): make release artifacts reliable across platforms
All checks were successful
Test / wiki-sync (pull_request) Has been skipped
Test / frontend-tests (pull_request) Successful in 1m46s
Test / frontend-typecheck (pull_request) Successful in 1m48s
Test / rust-fmt-check (pull_request) Successful in 4m20s
Test / rust-clippy (pull_request) Successful in 20m7s
Test / rust-tests (pull_request) Successful in 21m20s
Override OpenSSL vendoring for the windows-gnu release build so cross-compiles no longer fail on pkg-config lookup, and fail fast when Linux release jobs produce no artifacts so incomplete releases are detected immediately.

Made-with: Cursor
2026-04-04 19:53:40 -05:00
f795ef62e8 Merge pull request 'ci: run test workflow only on pull requests' (#5) from fix/pr4-clean-replacement into master
Some checks failed
Auto Tag / auto-tag (push) Successful in 4s
Release / build-macos-arm64 (push) Successful in 5m32s
Release / build-windows-amd64 (push) Failing after 12m51s
Release / build-linux-amd64 (push) Successful in 34m13s
Release / build-linux-arm64 (push) Successful in 34m14s
Reviewed-on: #5
2026-04-05 00:14:07 +00:00
Shaun Arman
ff79e72605 ci: run test workflow only on pull requests
All checks were successful
Test / wiki-sync (pull_request) Has been skipped
Test / frontend-tests (pull_request) Successful in 1m46s
Test / frontend-typecheck (pull_request) Successful in 1m48s
Test / rust-fmt-check (pull_request) Successful in 4m35s
Test / rust-clippy (pull_request) Successful in 19m59s
Test / rust-tests (pull_request) Successful in 21m7s
Avoid duplicate Test workflow executions by removing push triggers and keeping pull_request validation as the single gate. Also fix remaining clippy format string violations in integration modules to keep rust-clippy passing.

Made-with: Cursor
2026-04-04 18:52:13 -05:00
55aaa5cf4a Merge pull request 'fix/skip-master-test-workflow' (#3) from fix/skip-master-test-workflow into master
Some checks failed
Auto Tag / auto-tag (push) Successful in 4s
Release / build-macos-arm64 (push) Successful in 4m46s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Successful in 31m33s
Release / build-linux-amd64 (push) Successful in 31m48s
Reviewed-on: #3
2026-04-04 21:48:47 +00:00
Shaun Arman
c2acf651fc ci: skip test workflow pushes on master
Some checks failed
Test / rust-clippy (pull_request) Failing after 11m53s
Test / frontend-typecheck (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / wiki-sync (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / rust-clippy (push) Has been cancelled
Test / rust-tests (push) Has been cancelled
Test / frontend-typecheck (push) Has been cancelled
Test / frontend-tests (push) Has been cancelled
Test / wiki-sync (push) Has been cancelled
Test / rust-fmt-check (push) Has been cancelled
Test / rust-fmt-check (pull_request) Successful in 1m30s
Avoid rerunning the full test workflow on direct master pushes while keeping pull request validation intact. Update the CI/CD wiki page to reflect the new trigger behavior.

Made-with: Cursor
2026-04-04 16:45:55 -05:00
Shaun Arman
44e6095bad fix: resolve macOS bundle path after app rename
Some checks failed
Test / rust-clippy (push) Failing after 12m12s
Test / frontend-typecheck (push) Has been cancelled
Test / frontend-tests (push) Has been cancelled
Test / wiki-sync (push) Has been cancelled
Test / rust-fmt-check (push) Successful in 1m27s
Test / rust-tests (push) Has been cancelled
Find the generated .app bundle dynamically in release CI so macOS packaging no longer depends on the legacy TFTSR.app name. Add a unit test to prevent regressions by asserting the old hardcoded path is not reintroduced.

Made-with: Cursor
2026-04-04 16:28:01 -05:00
c32be72ff9 Merge pull request 'fix: resolve clippy uninlined_format_args (CI run 178)' (#2) from fix/clippy-uninlined-format-args into master
Some checks failed
Test / rust-clippy (push) Successful in 17m9s
Auto Tag / auto-tag (push) Successful in 4s
Test / frontend-typecheck (push) Has been cancelled
Test / frontend-tests (push) Has been cancelled
Test / wiki-sync (push) Has been cancelled
Test / rust-tests (push) Has been cancelled
Test / rust-fmt-check (push) Successful in 2m13s
Release / build-linux-arm64 (push) Successful in 27m28s
Release / build-macos-arm64 (push) Failing after 5m58s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Reviewed-on: #2
2026-04-04 21:08:52 +00:00
Shaun Arman
fcf556ce5a feat: add custom_rest provider mode and rebrand application name
Some checks failed
Test / rust-clippy (push) Successful in 11m25s
Test / rust-tests (push) Successful in 12m9s
Test / frontend-typecheck (push) Successful in 1m39s
Test / frontend-tests (push) Successful in 1m25s
Test / wiki-sync (push) Has been skipped
Test / rust-clippy (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / wiki-sync (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Test / rust-fmt-check (push) Successful in 1m28s
Rename custom API format handling from msi_genai to custom_rest with backward compatibility, add guided model selection with custom entry in provider settings, and rebrand app naming to Troubleshooting and RCA Assistant across UI, metadata, and docs.

Made-with: Cursor
2026-04-04 15:35:58 -05:00
Shaun Arman
3b47e02e0b style: apply rustfmt output for clippy-related edits
Some checks failed
Test / rust-fmt-check (push) Successful in 1m41s
Test / rust-clippy (push) Successful in 11m42s
Test / rust-tests (push) Successful in 12m20s
Test / frontend-typecheck (push) Has been cancelled
Test / rust-fmt-check (pull_request) Successful in 1m29s
Test / frontend-tests (push) Waiting to run
Test / wiki-sync (push) Waiting to run
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / wiki-sync (pull_request) Has been cancelled
Apply canonical rustfmt formatting in files touched by the clippy format-args cleanup so cargo fmt --check passes consistently in CI.

Made-with: Cursor
2026-04-04 15:10:17 -05:00
Shaun Arman
bdf0be5fc9 fix: resolve clippy format-args failures and OpenSSL vendoring issue
Some checks failed
Test / rust-fmt-check (push) Failing after 1m31s
Test / rust-tests (push) Has been cancelled
Test / frontend-typecheck (push) Has been cancelled
Test / frontend-tests (push) Has been cancelled
Test / wiki-sync (push) Has been cancelled
Test / rust-clippy (push) Has been cancelled
Test / rust-clippy (pull_request) Has been cancelled
Test / rust-tests (pull_request) Has been cancelled
Test / frontend-typecheck (pull_request) Has been cancelled
Test / frontend-tests (pull_request) Has been cancelled
Test / wiki-sync (pull_request) Has been cancelled
Test / rust-fmt-check (pull_request) Has been cancelled
Inline format arguments across Rust modules to satisfy clippy -D warnings, and configure Cargo to prefer system OpenSSL so clippy builds do not fail on missing vendored Perl modules.

Made-with: Cursor
2026-04-04 15:05:13 -05:00
Shaun Arman
7bc23d22a2 fix: resolve clippy uninlined_format_args in integrations and related modules
Some checks failed
Test / rust-fmt-check (push) Successful in 1m25s
Test / frontend-typecheck (pull_request) Successful in 1m35s
Test / rust-clippy (push) Failing after 12m2s
Test / frontend-tests (pull_request) Successful in 1m23s
Test / wiki-sync (pull_request) Has been skipped
Test / rust-tests (push) Successful in 13m2s
Test / frontend-typecheck (push) Successful in 1m35s
Test / frontend-tests (push) Successful in 1m30s
Test / wiki-sync (push) Has been skipped
Test / rust-fmt-check (pull_request) Successful in 1m32s
Test / rust-clippy (pull_request) Failing after 12m10s
Test / rust-tests (pull_request) Successful in 13m22s
Replace format!("msg: {}", var) with format!("msg: {var}") across 8 files
to satisfy the uninlined_format_args lint (-D warnings) in CI run 178.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 12:27:26 -05:00
Shaun Arman
717a6e0c6a fix: ARM64 build uses native target instead of cross-compile
Some checks failed
Auto Tag / auto-tag (push) Successful in 5s
Test / rust-fmt-check (push) Successful in 2m11s
Release / build-macos-arm64 (push) Successful in 7m50s
Test / rust-clippy (push) Failing after 18m3s
Release / build-linux-arm64 (push) Successful in 29m2s
Test / rust-tests (push) Successful in 13m47s
Test / frontend-typecheck (push) Successful in 1m32s
Test / frontend-tests (push) Successful in 1m29s
Test / wiki-sync (push) Successful in 41s
Release / build-linux-amd64 (push) Successful in 21m36s
Release / build-windows-amd64 (push) Successful in 14m24s
The ARM64 build was failing because explicitly specifying
--target aarch64-unknown-linux-gnu on an ARM64 runner was
triggering cross-compilation logic.

Changes:
- Remove rustup target add (not needed for native build)
- Remove --target flag from cargo tauri build
- Update artifact path: target/aarch64-unknown-linux-gnu/release/bundle
  → target/release/bundle

This allows the native ARM64 toolchain to build without
attempting cross-compilation and avoids the pkg-config
cross-compilation configuration requirement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-04 09:59:56 -05:00
Shaun Arman
5fae3c79a7 fix: persist integration settings and implement persistent browser windows
Some checks failed
Test / rust-clippy (push) Waiting to run
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Test / frontend-tests (push) Waiting to run
Test / wiki-sync (push) Waiting to run
Auto Tag / auto-tag (push) Successful in 5s
Test / rust-fmt-check (push) Has been cancelled
Release / build-macos-arm64 (push) Successful in 4m47s
Release / build-linux-arm64 (push) Failing after 22m59s
Release / build-linux-amd64 (push) Successful in 28m35s
Release / build-windows-amd64 (push) Successful in 14m37s
## Integration Settings Persistence
- Add database commands to save/load integration configs (base_url, username, project_name, space_key)
- Frontend now loads configs from DB on mount and saves changes automatically
- Fixes issue where settings were lost on app restart

## Persistent Browser Window Architecture
- Integration browser windows now stay open for user browsing and authentication
- Extract fresh cookies before each API call to handle token rotation
- Track open windows in app state (integration_webviews HashMap; see the sketch below)
- Windows titled as "{Service} Browser (TFTSR)" for clarity
- Support easy navigation between app and browser windows (Cmd+Tab/Alt+Tab)
- Gracefully handle closed windows with automatic cleanup
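
A minimal sketch of the assumed state shape; only the integration_webviews name comes from this message:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct AppState {
    /// Integration id (e.g. "confluence", "servicenow", "ado") mapped to the
    /// label of its open browser window, so a closed window can be detected
    /// and cleaned up before reuse. Keys and value type are illustrative.
    integration_webviews: Mutex<HashMap<String, String>>,
}

fn main() {
    let state = AppState { integration_webviews: Mutex::new(HashMap::new()) };
    state
        .integration_webviews
        .lock()
        .unwrap()
        .insert("confluence".into(), "confluence-browser".into());
    println!("{:?}", state.integration_webviews.lock().unwrap().keys());
}
```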

## Bug Fixes
- Fix Rust formatting issues across 8 files
- Fix clippy warnings:
  - Use is_some_and() instead of map_or() in openai.rs
  - Use .to_string() instead of format!() in integrations.rs
- Add missing OptionalExtension import for .optional() method

## Tests
- Add test_integration_config_serialization
- Add test_webview_tracking
- Add test_token_auth_request_serialization
- All 6 integration tests passing

## Files Modified
- src-tauri/src/state.rs: Add integration_webviews tracking
- src-tauri/src/lib.rs: Register 3 new commands, initialize webviews HashMap
- src-tauri/src/commands/integrations.rs: Config persistence, fresh cookie extraction (+151 lines)
- src-tauri/src/integrations/webview_auth.rs: Persistent window behavior
- src/lib/tauriCommands.ts: TypeScript wrappers for new commands
- src/pages/Settings/Integrations.tsx: Load/save configs from DB

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-04 09:57:22 -05:00
Shaun Arman
e45e4277ea feat: complete webview cookie extraction implementation
Some checks failed
Auto Tag / auto-tag (push) Successful in 6s
Test / rust-fmt-check (push) Failing after 2m5s
Release / build-macos-arm64 (push) Successful in 6m35s
Test / rust-clippy (push) Failing after 18m2s
Release / build-linux-arm64 (push) Failing after 22m15s
Test / rust-tests (push) Successful in 12m46s
Test / frontend-typecheck (push) Successful in 1m36s
Test / frontend-tests (push) Successful in 1m26s
Test / wiki-sync (push) Successful in 47s
Release / build-linux-amd64 (push) Successful in 21m0s
Release / build-windows-amd64 (push) Successful in 14m42s
Implement working cookie extraction using Tauri's IPC event system:

**How it works:**
1. Opens embedded browser window for user to login
2. User completes authentication (including SSO)
3. User clicks "Complete Login" button in UI
4. JavaScript injected into webview extracts `document.cookie`
5. Parsed cookies emitted via Tauri event: `tftsr-cookies-extracted`
6. Rust listens for event and receives cookie data
7. Cookies encrypted and stored in database

**Technical implementation:**
- Uses `window.__TAURI__.event.emit()` from injected JavaScript
- Rust listens via `app_handle.listen()` with the Listener trait (sketched below)
- 10-second timeout with clear error messages
- Handles empty cookies and JavaScript errors gracefully
- Cross-platform compatible (no platform-specific APIs)

**Cookie limitations:**
- `document.cookie` only exposes non-HttpOnly cookies
- HttpOnly session cookies won't be captured via JavaScript
- For HttpOnly cookies, services must provide API tokens as fallback
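
A minimal sketch of the Rust listen side, assuming Tauri v2 (where AppHandle gets listen() from the Listener trait); the event name comes from this message, the payload handling is illustrative:

```rust
use tauri::Listener;

fn listen_for_cookies(app_handle: &tauri::AppHandle) {
    app_handle.listen("tftsr-cookies-extracted", |event| {
        // payload() is the JSON string the injected document.cookie
        // extraction script emitted via window.__TAURI__.event.emit().
        let raw = event.payload();
        println!("cookies payload: {raw}");
    });
}
```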

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-03 17:31:48 -05:00
Shaun Arman
c885f2cc8f feat: add multi-mode authentication for integrations (v0.2.10)
Some checks failed
Test / frontend-tests (push) Waiting to run
Test / wiki-sync (push) Waiting to run
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Auto Tag / auto-tag (push) Successful in 4s
Test / rust-fmt-check (push) Failing after 2m12s
Test / rust-clippy (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Implement three authentication methods for Confluence, ServiceNow, and Azure DevOps:

1. **OAuth2** - Traditional OAuth flow for enterprise SSO environments
2. **Embedded Browser** - Webview-based login that captures session cookies/tokens
   - Solves VPN constraints: users authenticate off-VPN via web UI
   - Extracted credentials work on-VPN for API calls
   - Based on confluence-publisher agent pattern
3. **Manual Token** - Direct API token/PAT input as fallback

**Changes:**
- Add webview_auth.rs module for embedded browser authentication
- Implement authenticate_with_webview and extract_cookies_from_webview commands
- Implement save_manual_token command with validation
- Add AuthMethod enum to support all three modes
- Add RadioGroup UI component for mode selection
- Complete rewrite of Integrations settings page with mode-specific UI
- Add secondary button variant for UI consistency
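
A minimal sketch of what the AuthMethod enum named above might look like; variant and wire names are guesses:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
enum AuthMethod {
    Oauth2,          // traditional OAuth2 flow for enterprise SSO
    EmbeddedBrowser, // webview login that captures cookies/tokens
    ManualToken,     // direct API token / PAT input as fallback
}

fn main() {
    // Round-trip through the wire format the frontend would send.
    let json = serde_json::to_string(&AuthMethod::EmbeddedBrowser).unwrap();
    assert_eq!(json, "\"embedded_browser\"");
    let back: AuthMethod = serde_json::from_str(&json).unwrap();
    println!("{back:?}");
}
```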

**VPN-friendly design:**
Users can authenticate via webview when off-VPN (web UI accessible), then use extracted cookies for API calls when on-VPN (API requires VPN). Addresses enterprise SSO limitations where OAuth app registration is blocked.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-03 17:26:09 -05:00
Shaun Arman
c243613cb6 feat: add temperature and max_tokens support for MSI GenAI (v0.2.9)
Some checks failed
Auto Tag / auto-tag (push) Successful in 5s
Test / rust-clippy (push) Has been cancelled
Test / rust-tests (push) Has been cancelled
Test / frontend-typecheck (push) Has been cancelled
Test / rust-fmt-check (push) Has been cancelled
Test / frontend-tests (push) Has been cancelled
Test / wiki-sync (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
- Added max_tokens and temperature fields to ProviderConfig
- MSI GenAI now sends modelConfig with temperature and max_tokens
- OpenAI-compatible providers now use configured max_tokens/temperature
- Both formats fall back to defaults if not specified
- Bumped version to 0.2.9

This allows users to configure response length and randomness for all
AI providers, including MSI GenAI which requires modelConfig format.
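
A minimal sketch of the assumed request shape; modelConfig, temperature, and max_tokens are named above, but the exact casing and nesting are guesses:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct ModelConfig {
    temperature: f32,
    max_tokens: u32,
}

#[derive(Serialize)]
struct MsiGenAiRequest {
    prompt: String, // illustrative; the real request carries more fields
    #[serde(rename = "modelConfig")]
    model_config: ModelConfig,
}

fn main() {
    let req = MsiGenAiRequest {
        prompt: "summarize the attached log".into(),
        model_config: ModelConfig {
            // Stand-in defaults applied when the user leaves the fields unset.
            temperature: 0.7,
            max_tokens: 1024,
        },
    };
    println!("{}", serde_json::to_string_pretty(&req).unwrap());
}
```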
2026-04-03 17:08:34 -05:00
Shaun Arman
bbd46c5322 fix: use Wiki secret for authenticated wiki sync (v0.2.8)
Some checks failed
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Test / frontend-tests (push) Waiting to run
Test / wiki-sync (push) Waiting to run
Auto Tag / auto-tag (push) Successful in 5s
Test / rust-fmt-check (push) Failing after 2m4s
Release / build-macos-arm64 (push) Successful in 9m3s
Test / rust-clippy (push) Has been cancelled
Release / build-linux-arm64 (push) Failing after 21m39s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
- Updated wiki-sync job to use secrets.Wiki for authentication
- Simplified clone/push logic with token-based auth
- Wiki push will now succeed with proper credentials
- Bumped version to 0.2.8

The workflow now uses the 'Wiki' secret created in Gitea Actions
to authenticate wiki repository pushes. This fixes the authentication
issue that was preventing automatic wiki synchronization.
2026-04-03 16:47:32 -05:00
Shaun Arman
8b457c4991 feat: add automatic wiki sync to CI workflow (v0.2.7)
Some checks are pending
Auto Tag / auto-tag (push) Waiting to run
Test / rust-fmt-check (push) Waiting to run
Test / rust-clippy (push) Waiting to run
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Test / frontend-tests (push) Waiting to run
Test / wiki-sync (push) Waiting to run
- Added wiki-sync job to .gitea/workflows/test.yml
- Runs only on pushes to master branch
- Automatically copies docs/wiki/*.md to Gogs wiki repository
- Supports token-based authentication via secrets.GITHUB_TOKEN
- Handles wiki initialization if repository doesn't exist
- Bumped version to 0.2.7

Wiki sync will now automatically update the Gogs wiki at
https://gogs.tftsr.com/sarman/tftsr-devops_investigation/wiki
whenever docs/wiki/ files are modified on master.
2026-04-03 16:42:37 -05:00
Shaun Arman
c0388f3579 docs: update wiki for v0.2.6 - integrations and MSI GenAI
Some checks are pending
Auto Tag / auto-tag (push) Waiting to run
Test / rust-fmt-check (push) Waiting to run
Test / rust-clippy (push) Waiting to run
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Test / frontend-tests (push) Waiting to run
Updated 5 wiki pages:

Home.md:
- Updated version to v0.2.6
- Added MSI GenAI and custom provider support to features
- Updated integration status from stubs to complete
- Updated release table with v0.2.3 and v0.2.6 highlights

Integrations.md:
- Complete rewrite: Changed from 'v0.2 stubs' to fully implemented
- Added detailed docs for Confluence REST API client (6 tests)
- Added detailed docs for ServiceNow REST API client (7 tests)
- Added detailed docs for Azure DevOps REST API client (6 tests)
- Documented OAuth2 PKCE flow implementation
- Added database schema for credentials and integration_config tables
- Added troubleshooting section with common OAuth/API errors

AI-Providers.md:
- Added section for Custom Provider (MSI GenAI)
- Documented MSI GenAI API format differences from OpenAI
- Added request/response format examples
- Added configuration instructions and troubleshooting
- Documented custom provider fields (api_format, custom_endpoint_path, etc)
- Added available MSI GenAI models list

IPC-Commands.md:
- Replaced 'v0.2 stubs' section with full implementation details
- Added OAuth2 commands (initiate_oauth, handle_oauth_callback)
- Added Confluence commands (5 functions)
- Added ServiceNow commands (5 functions)
- Added Azure DevOps commands (5 functions)
- Documented authentication storage with AES-256-GCM encryption
- Added common types (ConnectionResult, PublishResult, TicketResult)

Database.md:
- Updated migration count from 10 to 11
- Added migration 011: credentials and integration_config tables
- Documented AES-256-GCM encryption for OAuth tokens
- Added usage notes for OAuth2 vs basic auth storage
2026-04-03 16:39:49 -05:00
Shaun Arman
b4bf1d37cd fix: add user_id support and OAuth shell permission (v0.2.6)
Some checks failed
Test / rust-tests (push) Waiting to run
Test / frontend-typecheck (push) Waiting to run
Test / frontend-tests (push) Waiting to run
Auto Tag / auto-tag (push) Successful in 5s
Test / rust-fmt-check (push) Failing after 2m8s
Test / rust-clippy (push) Has been cancelled
Release / build-macos-arm64 (push) Successful in 11m8s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Fixes:
- Added shell:allow-open permission to fix OAuth integration flows
- Added user_id field to ProviderConfig for MSI GenAI CORE ID
- Added UI field for user_id when api_format is msi_genai
- Made userId optional in MSI GenAI requests (only sent if provided)
- Added X-msi-genai-client header to MSI GenAI requests
- Updated CSP to include MSI GenAI domains
- Bumped version to 0.2.6

This fixes:
- OAuth error: 'Command plugin:shell|open not allowed by ACL'
- Missing User ID field in MSI GenAI configuration UI
2026-04-03 16:34:00 -05:00
Shaun Arman
6759c38e2a docs: add MSI GenAI API reference and handoff documentation
Some checks failed
Auto Tag / auto-tag (push) Successful in 9s
Test / rust-fmt-check (push) Failing after 2m14s
Release / build-macos-arm64 (push) Successful in 9m48s
Test / rust-clippy (push) Failing after 18m4s
Release / build-linux-arm64 (push) Failing after 22m29s
Test / rust-tests (push) Successful in 12m57s
Test / frontend-typecheck (push) Successful in 1m35s
Test / frontend-tests (push) Successful in 1m29s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
- Added GenAI API User Guide.md with complete API specification
- Added HANDOFF-MSI-GENAI.md documenting custom provider implementation
- Includes API endpoints, request/response formats, available models, and rate limits
2026-04-03 15:45:52 -05:00
Shaun Arman
9d8bdd383c feat: add MSI GenAI custom provider support
- Extended ProviderConfig with optional custom fields for non-OpenAI APIs
- Added custom_endpoint_path, custom_auth_header, custom_auth_prefix fields
- Added api_format field to distinguish between OpenAI and MSI GenAI formats
- Added session_id field for stateful conversation APIs
- Implemented chat_msi_genai() method in OpenAI provider
- MSI GenAI uses different request format (prompt+sessionId) and response (msg field)
- Updated TypeScript types to match Rust schema
- Added UI controls in Settings/AIProviders for custom provider configuration
- API format selector auto-populates appropriate defaults (OpenAI vs MSI GenAI)
- Backward compatible: existing providers default to OpenAI format
2026-04-03 15:45:42 -05:00
86 changed files with 9793 additions and 1082 deletions

3
.cargo/config.toml Normal file
View File

@ -0,0 +1,3 @@
[env]
# Force use of system OpenSSL instead of vendored OpenSSL source builds.
OPENSSL_NO_VENDOR = "1"

24
.docker/Dockerfile.linux-amd64 Normal file
View File

@ -0,0 +1,24 @@
# Pre-baked builder for Linux amd64 Tauri releases.
# All system dependencies are installed once here; CI jobs skip apt-get entirely.
# Rebuild when: Rust toolchain version changes, webkit2gtk/gtk major version changes,
# or Node.js major version changes. Tag format: rust<VER>-node<VER>
FROM rust:1.88-slim
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
libwebkit2gtk-4.1-dev \
libssl-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev \
patchelf \
pkg-config \
curl \
perl \
jq \
git \
&& curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN rustup target add x86_64-unknown-linux-gnu

45
.docker/Dockerfile.linux-arm64 Normal file
View File

@ -0,0 +1,45 @@
# Pre-baked cross-compiler for Linux arm64 Tauri releases (runs on Linux amd64).
# Bakes in: amd64 cross-toolchain, arm64 multiarch dev libs, Node.js, and Rust.
# This image takes ~15 min to build but is only rebuilt when deps change.
# Rebuild when: Rust toolchain version, webkit2gtk/gtk major version, or Node.js changes.
# Tag format: rust<VER>-node<VER>
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive
# Step 1: amd64 host tools and cross-compiler
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
curl git gcc g++ make patchelf pkg-config perl jq \
gcc-aarch64-linux-gnu g++-aarch64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
# Step 2: Enable arm64 multiarch. Ubuntu uses ports.ubuntu.com for arm64 to avoid
# binary-all index conflicts with the amd64 archive.ubuntu.com mirror.
RUN dpkg --add-architecture arm64 \
&& sed -i 's|^deb http://archive.ubuntu.com|deb [arch=amd64] http://archive.ubuntu.com|g' /etc/apt/sources.list \
&& sed -i 's|^deb http://security.ubuntu.com|deb [arch=amd64] http://security.ubuntu.com|g' /etc/apt/sources.list \
&& printf '%s\n' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted universe multiverse' \
> /etc/apt/sources.list.d/arm64-ports.list \
&& apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
libwebkit2gtk-4.1-dev:arm64 \
libssl-dev:arm64 \
libgtk-3-dev:arm64 \
librsvg2-dev:arm64 \
&& rm -rf /var/lib/apt/lists/*
# Step 3: Node.js 22
RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
# Step 4: Rust 1.88 with arm64 cross-compilation target
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
--default-toolchain 1.88.0 --profile minimal --no-modify-path \
&& /root/.cargo/bin/rustup target add aarch64-unknown-linux-gnu
ENV PATH="/root/.cargo/bin:${PATH}"

20
.docker/Dockerfile.windows-cross Normal file
View File

@ -0,0 +1,20 @@
# Pre-baked cross-compiler for Windows amd64 Tauri releases (runs on Linux amd64).
# All MinGW and Node.js dependencies are installed once here; CI jobs skip apt-get entirely.
# Rebuild when: Rust toolchain version changes or Node.js major version changes.
# Tag format: rust<VER>-node<VER>
FROM rust:1.88-slim
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
mingw-w64 \
curl \
nsis \
perl \
make \
jq \
git \
&& curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN rustup target add x86_64-pc-windows-gnu

.gitea/workflows/auto-tag.yml
View File

@ -1,24 +1,32 @@
name: Auto Tag
# Runs on every merge to master — reads the latest semver tag, increments
# the patch version, and pushes a new tag (which triggers release.yml).
# the patch version, pushes a new tag, then runs release builds in this workflow.
# workflow_dispatch allows manual triggering when Gitea drops a push event.
on:
push:
branches:
- master
workflow_dispatch:
concurrency:
group: auto-tag-master
cancel-in-progress: false
jobs:
auto-tag:
autotag:
runs-on: linux-amd64
container:
image: alpine:latest
steps:
- name: Bump patch version and create tag
id: bump
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
apk add --no-cache curl jq
set -eu
apk add --no-cache curl jq git
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
@ -39,10 +47,471 @@ jobs:
echo "Latest tag: ${LATEST:-none} → Next: $NEXT"
# Create the new tag pointing at the commit that triggered this push
curl -sf -X POST "$API/tags" \
# Create and push the tag via git.
git init
git remote add origin "http://oauth2:${RELEASE_TOKEN}@172.0.0.29:3000/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
git config user.name "gitea-actions[bot]"
git config user.email "gitea-actions@local"
if git ls-remote --exit-code --tags origin "refs/tags/$NEXT" >/dev/null 2>&1; then
echo "Tag $NEXT already exists; skipping."
exit 0
fi
git tag -a "$NEXT" -m "Release $NEXT"
git push origin "refs/tags/$NEXT"
echo "Tag $NEXT pushed successfully"
wiki-sync:
runs-on: linux-amd64
container:
image: alpine:latest
steps:
- name: Install dependencies
run: apk add --no-cache git
- name: Checkout main repository
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Configure git
run: |
git config --global user.email "actions@gitea.local"
git config --global user.name "Gitea Actions"
git config --global credential.helper ''
- name: Clone and sync wiki
env:
WIKI_TOKEN: ${{ secrets.Wiki }}
run: |
cd /tmp
if [ -n "$WIKI_TOKEN" ]; then
WIKI_URL="http://${WIKI_TOKEN}@172.0.0.29:3000/sarman/tftsr-devops_investigation.wiki.git"
else
WIKI_URL="http://172.0.0.29:3000/sarman/tftsr-devops_investigation.wiki.git"
fi
if ! git clone "$WIKI_URL" wiki 2>/dev/null; then
echo "Wiki doesn't exist yet, creating initial structure..."
mkdir -p wiki
cd wiki
git init
git checkout -b master
echo "# Wiki" > Home.md
git add Home.md
git commit -m "Initial wiki commit"
git remote add origin "$WIKI_URL"
fi
cd /tmp/wiki
if [ -d "$GITHUB_WORKSPACE/docs/wiki" ]; then
cp -v "$GITHUB_WORKSPACE"/docs/wiki/*.md . 2>/dev/null || echo "No wiki files to copy"
fi
git add -A
if ! git diff --staged --quiet; then
git commit -m "docs: sync from docs/wiki/ at commit ${GITHUB_SHA:0:8}"
echo "Pushing to wiki..."
if git push origin master; then
echo "✓ Wiki successfully synced"
else
echo "⚠ Wiki push failed - check token permissions"
exit 1
fi
else
echo "No wiki changes to commit"
fi
build-linux-amd64:
needs: autotag
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-unknown-linux-gnu
CI=true npx tauri build --target x86_64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$NEXT\",\"message\":\"Release $NEXT\",\"target\":\"$GITHUB_SHA\"}"
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/x86_64-unknown-linux-gnu/release/bundle -type f \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \))
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Linux amd64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
UPLOAD_NAME="linux-amd64-$NAME"
echo "Uploading $UPLOAD_NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$UPLOAD_NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$UPLOAD_NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$UPLOAD_NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $UPLOAD_NAME"
else
echo "✗ Upload failed for $UPLOAD_NAME (HTTP $HTTP_CODE)"
python -c 'import pathlib,sys;print(pathlib.Path(sys.argv[1]).read_text(errors="replace")[:2000])' "$RESP_FILE"
exit 1
fi
done
echo "Tag $NEXT created successfully"
build-windows-amd64:
needs: autotag
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq mingw-w64 curl nsis perl make jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
env:
CC_x86_64_pc_windows_gnu: x86_64-w64-mingw32-gcc
CXX_x86_64_pc_windows_gnu: x86_64-w64-mingw32-g++
AR_x86_64_pc_windows_gnu: x86_64-w64-mingw32-ar
CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER: x86_64-w64-mingw32-gcc
OPENSSL_NO_VENDOR: "0"
OPENSSL_STATIC: "1"
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-pc-windows-gnu
CI=true npx tauri build --target x86_64-pc-windows-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/x86_64-pc-windows-gnu/release/bundle -type f \
\( -name "*.exe" -o -name "*.msi" \) 2>/dev/null)
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Windows amd64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
echo "Uploading $NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $NAME"
else
echo "✗ Upload failed for $NAME (HTTP $HTTP_CODE)"
python -c 'import pathlib,sys;print(pathlib.Path(sys.argv[1]).read_text(errors="replace")[:2000])' "$RESP_FILE"
exit 1
fi
done
build-macos-arm64:
needs: autotag
runs-on: macos-arm64
steps:
- name: Checkout
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build
env:
MACOSX_DEPLOYMENT_TARGET: "11.0"
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-apple-darwin
CI=true npx tauri build --target aarch64-apple-darwin --bundles app
APP=$(find src-tauri/target/aarch64-apple-darwin/release/bundle/macos -maxdepth 1 -type d -name "*.app" | head -n 1)
if [ -z "$APP" ]; then
echo "ERROR: Could not find macOS app bundle"
exit 1
fi
APP_NAME=$(basename "$APP" .app)
codesign --deep --force --sign - "$APP"
mkdir -p src-tauri/target/aarch64-apple-darwin/release/bundle/dmg
DMG=src-tauri/target/aarch64-apple-darwin/release/bundle/dmg/${APP_NAME}.dmg
hdiutil create -volname "$APP_NAME" -srcfolder "$APP" -ov -format UDZO "$DMG"
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/aarch64-apple-darwin/release/bundle -type f -name "*.dmg")
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No macOS arm64 DMG artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
echo "Uploading $NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $NAME"
else
echo "✗ Upload failed for $NAME (HTTP $HTTP_CODE)"
python -c 'import pathlib,sys;print(pathlib.Path(sys.argv[1]).read_text(errors="replace")[:2000])' "$RESP_FILE"
exit 1
fi
done
build-linux-arm64:
needs: autotag
runs-on: linux-amd64
container:
image: ubuntu:22.04
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
# Step 1: Host tools + cross-compiler (all amd64, no multiarch yet)
apt-get update -qq
apt-get install -y -qq curl git gcc g++ make patchelf pkg-config perl jq \
gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
# Step 2: Multiarch — Ubuntu uses ports.ubuntu.com for arm64,
# keeping it on a separate mirror from amd64 (archive.ubuntu.com).
# This avoids the binary-all index duplication and -dev package
# conflicts that plagued the Debian single-mirror approach.
dpkg --add-architecture arm64
sed -i 's|^deb http://archive.ubuntu.com|deb [arch=amd64] http://archive.ubuntu.com|g' /etc/apt/sources.list
sed -i 's|^deb http://security.ubuntu.com|deb [arch=amd64] http://security.ubuntu.com|g' /etc/apt/sources.list
printf '%s\n' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted universe multiverse' \
> /etc/apt/sources.list.d/arm64-ports.list
apt-get update -qq
# Step 3: ARM64 dev libs — libayatana omitted (no tray icon in this app)
apt-get install -y -qq \
libwebkit2gtk-4.1-dev:arm64 \
libssl-dev:arm64 \
libgtk-3-dev:arm64 \
librsvg2-dev:arm64
# Step 4: Node.js
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
# Step 5: Rust (not pre-installed in ubuntu:22.04)
# source "$HOME/.cargo/env" in the Build step handles PATH — no GITHUB_PATH needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
--default-toolchain 1.88.0 --profile minimal --no-modify-path
- name: Build
env:
CC_aarch64_unknown_linux_gnu: aarch64-linux-gnu-gcc
CXX_aarch64_unknown_linux_gnu: aarch64-linux-gnu-g++
AR_aarch64_unknown_linux_gnu: aarch64-linux-gnu-ar
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER: aarch64-linux-gnu-gcc
PKG_CONFIG_SYSROOT_DIR: /usr/aarch64-linux-gnu
PKG_CONFIG_PATH: /usr/lib/aarch64-linux-gnu/pkgconfig
PKG_CONFIG_ALLOW_CROSS: "1"
OPENSSL_NO_VENDOR: "0"
OPENSSL_STATIC: "1"
APPIMAGE_EXTRACT_AND_RUN: "1"
run: |
. "$HOME/.cargo/env"
npm ci --legacy-peer-deps
rustup target add aarch64-unknown-linux-gnu
CI=true npx tauri build --target aarch64-unknown-linux-gnu --bundles deb,rpm
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle -type f \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \))
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Linux arm64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
UPLOAD_NAME="linux-arm64-$NAME"
echo "Uploading $UPLOAD_NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$UPLOAD_NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$UPLOAD_NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$UPLOAD_NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $UPLOAD_NAME"
else
echo "✗ Upload failed for $UPLOAD_NAME (HTTP $HTTP_CODE)"
python -c 'import pathlib,sys;print(pathlib.Path(sys.argv[1]).read_text(errors="replace")[:2000])' "$RESP_FILE"
exit 1
fi
done

View File

@ -0,0 +1,104 @@
name: Build CI Docker Images
# Rebuilds the pre-baked builder images and pushes them to the local Gitea
# container registry (172.0.0.29:3000).
#
# WHEN TO RUN:
# - Automatically: whenever a Dockerfile under .docker/ changes on master.
# - Manually: via workflow_dispatch (e.g. first-time setup, forced rebuild).
#
# ONE-TIME SERVER PREREQUISITE (run once on 172.0.0.29 before first use):
# echo '{"insecure-registries":["172.0.0.29:3000"]}' \
# | sudo tee /etc/docker/daemon.json
# sudo systemctl restart docker
#
# Images produced:
# 172.0.0.29:3000/sarman/trcaa-linux-amd64:rust1.88-node22
# 172.0.0.29:3000/sarman/trcaa-windows-cross:rust1.88-node22
# 172.0.0.29:3000/sarman/trcaa-linux-arm64:rust1.88-node22
on:
push:
branches:
- master
paths:
- '.docker/**'
workflow_dispatch:
concurrency:
group: build-ci-images
cancel-in-progress: false
env:
REGISTRY: 172.0.0.29:3000
REGISTRY_USER: sarman
jobs:
linux-amd64:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push linux-amd64 builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22 \
-f .docker/Dockerfile.linux-amd64 .
docker push $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22"
windows-cross:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push windows-cross builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22 \
-f .docker/Dockerfile.windows-cross .
docker push $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22"
linux-arm64:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push linux-arm64 builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22 \
-f .docker/Dockerfile.linux-arm64 .
docker push $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22"

.gitea/workflows/release.yml
View File

@ -1,221 +0,0 @@
name: Release
on:
push:
tags:
- 'v*'
jobs:
build-linux-amd64:
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-unknown-linux-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target x86_64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/x86_64-unknown-linux-gnu/release/bundle \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \) | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done
build-windows-amd64:
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq mingw-w64 curl nsis perl make jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
env:
CC_x86_64_pc_windows_gnu: x86_64-w64-mingw32-gcc
CXX_x86_64_pc_windows_gnu: x86_64-w64-mingw32-g++
AR_x86_64_pc_windows_gnu: x86_64-w64-mingw32-ar
CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER: x86_64-w64-mingw32-gcc
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-pc-windows-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target x86_64-pc-windows-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/x86_64-pc-windows-gnu/release/bundle \
\( -name "*.exe" -o -name "*.msi" \) 2>/dev/null | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done
build-macos-arm64:
runs-on: macos-arm64
steps:
- name: Checkout
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Build
env:
MACOSX_DEPLOYMENT_TARGET: "11.0"
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-apple-darwin
cargo install tauri-cli --version "^2" --locked
# Build the .app bundle only (no DMG yet so we can sign before packaging)
CI=true cargo tauri build --target aarch64-apple-darwin --bundles app
APP=src-tauri/target/aarch64-apple-darwin/release/bundle/macos/TFTSR.app
# Ad-hoc sign: changes Gatekeeper error from "damaged" to "unidentified developer"
codesign --deep --force --sign - "$APP"
# Create DMG from the signed .app
mkdir -p src-tauri/target/aarch64-apple-darwin/release/bundle/dmg
DMG=src-tauri/target/aarch64-apple-darwin/release/bundle/dmg/TFTSR.dmg
hdiutil create -volname "TFTSR" -srcfolder "$APP" -ov -format UDZO "$DMG"
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
# Create release (idempotent)
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
# Get release ID
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
echo "Attempting to list recent releases..."
curl -sf "$API/releases" -H "Authorization: token $RELEASE_TOKEN" | jq -r '.[] | "\(.tag_name): \(.id)"' | head -5
exit 1
fi
echo "Release ID: $RELEASE_ID"
# Upload DMG
find src-tauri/target/aarch64-apple-darwin/release/bundle -name "*.dmg" | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done
build-linux-arm64:
runs-on: linux-arm64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
# Native ARM64 build (no cross-compilation needed)
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-unknown-linux-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target aarch64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \) | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done

.gitea/workflows/test.yml
View File

@ -1,9 +1,6 @@
name: Test
on:
push:
branches:
- '**'
pull_request:
jobs:
@ -14,10 +11,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: rustup component add rustfmt
- run: cargo fmt --manifest-path src-tauri/Cargo.toml --check
@ -29,10 +38,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: apt-get update -qq && apt-get install -y -qq libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev patchelf pkg-config perl
- run: rustup component add clippy
@ -45,10 +66,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: apt-get update -qq && apt-get install -y -qq libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev patchelf pkg-config perl
- run: cargo test --manifest-path src-tauri/Cargo.toml
@ -60,10 +93,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: npm ci --legacy-peer-deps
- run: npx tsc --noEmit
@ -75,10 +120,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: npm ci --legacy-peer-deps
- run: npm run test:run

489
GenAI API User Guide.md Normal file

File diff suppressed because one or more lines are too long

312
HANDOFF-MSI-GENAI.md Normal file
View File

@ -0,0 +1,312 @@
# MSI GenAI Custom Provider Integration - Handoff Document
**Date**: 2026-04-03
**Status**: In Progress - Backend schema updated, frontend and provider logic pending
---
## Context
User needs to integrate MSI GenAI API (https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat) into the application's AI Providers system.
**Problem**: The existing "Custom" provider type assumes OpenAI-compatible APIs (expects `/chat/completions` endpoint, OpenAI request/response format, `Authorization: Bearer` header). MSI GenAI has a completely different API contract:
| Aspect | OpenAI Format | MSI GenAI Format |
|--------|---------------|------------------|
| **Endpoint** | `/chat/completions` | `/api/v2/chat` (no suffix) |
| **Request** | `{"messages": [...], "model": "..."}` | `{"prompt": "...", "model": "...", "sessionId": "..."}` |
| **Response** | `{"choices": [{"message": {"content": "..."}}]}` | `{"msg": "...", "sessionId": "..."}` |
| **Auth Header** | `Authorization: Bearer <token>` | `x-msi-genai-api-key: <token>` |
| **History** | Client sends full message array | Server-side via `sessionId` |
---
## Work Completed
### 1. Updated `src-tauri/src/state.rs` - ProviderConfig Schema
Added optional fields to support custom API formats without breaking existing OpenAI-compatible providers:
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProviderConfig {
pub name: String,
#[serde(default)]
pub provider_type: String,
pub api_url: String,
pub api_key: String,
pub model: String,
// NEW FIELDS:
/// Optional: Custom endpoint path (e.g., "" for no path, "/v1/chat" for custom path)
/// If None, defaults to "/chat/completions" for OpenAI compatibility
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_endpoint_path: Option<String>,
/// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
/// If None, defaults to "Authorization"
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_header: Option<String>,
/// Optional: Custom auth value prefix (e.g., "" for no prefix, "Bearer " for OpenAI)
/// If None, defaults to "Bearer "
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_prefix: Option<String>,
/// Optional: API format ("openai" or "msi_genai")
/// If None, defaults to "openai"
#[serde(skip_serializing_if = "Option::is_none")]
pub api_format: Option<String>,
/// Optional: Session ID for stateful APIs like MSI GenAI
#[serde(skip_serializing_if = "Option::is_none")]
pub session_id: Option<String>,
}
```
**Design philosophy**: Existing providers remain unchanged (all fields default to OpenAI-compatible behavior). Only when `api_format` is set to `"msi_genai"` do the custom fields take effect.
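For instance, a provider record saved before this change still deserializes cleanly, since serde treats `Option` fields as implicitly optional (a minimal sketch assuming `serde_json` and the `ProviderConfig` above; the key value is a placeholder):
```rust
// Legacy provider JSON (none of the new fields present) still parses,
// and every new field lands as None, i.e. OpenAI-compatible behavior.
let legacy = r#"{"name":"OpenAI","provider_type":"openai",
    "api_url":"https://api.openai.com/v1","api_key":"sk-...","model":"gpt-4o"}"#;
let cfg: ProviderConfig = serde_json::from_str(legacy).expect("legacy config parses");
assert!(cfg.api_format.is_none());           // None => OpenAI format
assert!(cfg.custom_endpoint_path.is_none()); // None => "/chat/completions"
```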
---
## Work Remaining
### 2. Update `src-tauri/src/ai/openai.rs` - Support Custom Formats
The `OpenAiProvider::chat()` method needs to conditionally handle MSI GenAI format:
**Changes needed**:
- Check `config.api_format` — if `Some("msi_genai")`, use MSI GenAI request/response logic
- Use `config.custom_endpoint_path.unwrap_or("/chat/completions")` for endpoint
- Use `config.custom_auth_header.unwrap_or("Authorization")` for header name
- Use `config.custom_auth_prefix.unwrap_or("Bearer ")` for auth prefix
**MSI GenAI request format**:
```json
{
"model": "VertexGemini",
"prompt": "<last user message>",
"system": "<optional system message>",
"sessionId": "<uuid or null for first message>",
"userId": "user@motorolasolutions.com"
}
```
**MSI GenAI response format**:
```json
{
"status": true,
"sessionId": "uuid",
"msg": "AI response text",
"initialPrompt": true/false
}
```
**Implementation notes** (a sketch follows this list):
- For MSI GenAI, convert `Vec<Message>` to a single `prompt` (concatenate or use last user message)
- Extract system message from messages array if present (role == "system")
- Store returned `sessionId` back to `config.session_id` for subsequent requests
- Extract response content from `json["msg"]` instead of `json["choices"][0]["message"]["content"]`
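A minimal sketch of that branch, not the final implementation. It assumes `reqwest` (with the `json` feature) and `serde_json` are in scope, and that `Message` has `role`/`content` string fields; `userId` is omitted here and error handling is reduced to strings:
```rust
async fn chat_msi_genai(
    client: &reqwest::Client,
    config: &mut ProviderConfig,
    messages: &[Message],
) -> Result<String, String> {
    // MSI GenAI takes a single prompt, so use the last user message;
    // a system message (if any) is sent alongside it.
    let prompt = messages
        .iter()
        .rev()
        .find(|m| m.role == "user")
        .map(|m| m.content.clone())
        .unwrap_or_default();
    let system = messages
        .iter()
        .find(|m| m.role == "system")
        .map(|m| m.content.clone());

    let body = serde_json::json!({
        "model": config.model,
        "prompt": prompt,
        "system": system,                // serialized as null when absent
        "sessionId": config.session_id,  // null on the first message
    });

    // Resolve the custom fields, falling back to OpenAI-compatible defaults.
    let url = format!(
        "{}{}",
        config.api_url,
        config.custom_endpoint_path.as_deref().unwrap_or("/chat/completions")
    );
    let header = config.custom_auth_header.as_deref().unwrap_or("Authorization");
    let prefix = config.custom_auth_prefix.as_deref().unwrap_or("Bearer ");

    let resp: serde_json::Value = client
        .post(&url)
        .header(header, format!("{prefix}{}", config.api_key))
        .json(&body)
        .send()
        .await
        .map_err(|e| e.to_string())?
        .json()
        .await
        .map_err(|e| e.to_string())?;

    // Persist the server-side session for subsequent requests.
    if let Some(sid) = resp["sessionId"].as_str() {
        config.session_id = Some(sid.to_string());
    }

    resp["msg"]
        .as_str()
        .map(str::to_owned)
        .ok_or_else(|| "MSI GenAI response missing 'msg' field".to_string())
}
```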
### 3. Update `src/lib/tauriCommands.ts` - TypeScript Types
Add new optional fields to `ProviderConfig` interface:
```typescript
export interface ProviderConfig {
provider_type?: string;
max_tokens?: number;
temperature?: number;
name: string;
api_url: string;
api_key: string;
model: string;
// NEW FIELDS:
custom_endpoint_path?: string;
custom_auth_header?: string;
custom_auth_prefix?: string;
api_format?: string;
session_id?: string;
}
```
### 4. Update `src/pages/Settings/AIProviders.tsx` - UI Fields
**When `provider_type === "custom"`, show additional form fields**:
```tsx
{form.provider_type === "custom" && (
<>
<div className="space-y-2">
<Label>API Format</Label>
<Select
value={form.api_format ?? "openai"}
onValueChange={(v) => {
const format = v;
const defaults = format === "msi_genai"
? {
custom_endpoint_path: "",
custom_auth_header: "x-msi-genai-api-key",
custom_auth_prefix: "",
}
: {
custom_endpoint_path: "/chat/completions",
custom_auth_header: "Authorization",
custom_auth_prefix: "Bearer ",
};
setForm({ ...form, api_format: format, ...defaults });
}}
>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="openai">OpenAI Compatible</SelectItem>
<SelectItem value="msi_genai">MSI GenAI</SelectItem>
</SelectContent>
</Select>
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label>Endpoint Path</Label>
<Input
value={form.custom_endpoint_path ?? ""}
onChange={(e) => setForm({ ...form, custom_endpoint_path: e.target.value })}
placeholder="/chat/completions"
/>
</div>
<div className="space-y-2">
<Label>Auth Header Name</Label>
<Input
value={form.custom_auth_header ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_header: e.target.value })}
placeholder="Authorization"
/>
</div>
</div>
<div className="space-y-2">
<Label>Auth Prefix</Label>
<Input
value={form.custom_auth_prefix ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_prefix: e.target.value })}
placeholder="Bearer "
/>
<p className="text-xs text-muted-foreground">
Prefix added before API key (e.g., "Bearer " for OpenAI, "" for MSI GenAI)
</p>
</div>
</>
)}
```
**Update `emptyProvider` initial state**:
```typescript
const emptyProvider: ProviderConfig = {
name: "",
provider_type: "openai",
api_url: "",
api_key: "",
model: "",
custom_endpoint_path: undefined,
custom_auth_header: undefined,
custom_auth_prefix: undefined,
api_format: undefined,
session_id: undefined,
};
```
---
## Testing Configuration
**For MSI GenAI**:
- **Type**: Custom
- **API Format**: MSI GenAI
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway`
- **Model**: `VertexGemini` (or `Claude-Sonnet-4`, `ChatGPT4o`)
- **API Key**: (user's MSI GenAI API key from portal)
- **Endpoint Path**: `` (empty - URL already includes `/api/v2/chat`)
- **Auth Header**: `x-msi-genai-api-key`
- **Auth Prefix**: `` (empty - no "Bearer " prefix)
**Test command flow**:
1. Create provider with above settings
2. Test connection (should receive AI response)
3. Verify `sessionId` is returned and stored
4. Send second message (should reuse `sessionId` for conversation history)
---
## Known Issues from User's Original Error
User initially tried:
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat`
- **Type**: Custom (no format specified)
**Result**: `Cannot POST /api/v2/chat/chat/completions` (404)
**Root cause**: OpenAI provider appends `/chat/completions` to base URL. With the new `custom_endpoint_path` field, this is now configurable.
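Illustratively, with the schema fields above (a sketch, not the actual provider code):
```rust
// Sketch of the failure: the provider unconditionally appended the OpenAI path.
let url = format!("{}/chat/completions", config.api_url);
// With api_url = ".../app-gateway/api/v2/chat" this yields
// ".../api/v2/chat/chat/completions" → the 404 above.
// Setting custom_endpoint_path to "" (as MSI GenAI needs) removes the suffix.
```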
---
## Integration with Existing Session Management
MSI GenAI uses server-side session management. Current triage flow sends full message history on every request (OpenAI style). For MSI GenAI:
- **First message**: Send `sessionId: null` or omit field
- **Store response**: Save `response.sessionId` to `config.session_id`
- **Subsequent messages**: Include `sessionId` in requests (server maintains history)
Consider storing `session_id` per conversation in the database (link to `ai_conversations.id`) rather than globally in `ProviderConfig`.
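A sketch of the per-conversation variant, assuming a hypothetical `msi_session_id` column on `ai_conversations` (rusqlite is the app's database layer):
```rust
use rusqlite::{params, Connection};

// Hypothetical column: ALTER TABLE ai_conversations ADD COLUMN msi_session_id TEXT;
fn save_session_id(db: &Connection, conversation_id: i64, session_id: &str) -> rusqlite::Result<()> {
    db.execute(
        "UPDATE ai_conversations SET msi_session_id = ?1 WHERE id = ?2",
        params![session_id, conversation_id],
    )?;
    Ok(())
}

fn load_session_id(db: &Connection, conversation_id: i64) -> rusqlite::Result<Option<String>> {
    db.query_row(
        "SELECT msi_session_id FROM ai_conversations WHERE id = ?1",
        params![conversation_id],
        |row| row.get(0),
    )
}
```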
---
## Commit Strategy
**Current git state**:
- Modified by other session: `src-tauri/src/integrations/*.rs` (ADO/Confluence/ServiceNow work)
- Modified by me: `src-tauri/src/state.rs` (MSI GenAI schema)
- Untracked: `GenAI API User Guide.md`
**Recommended approach**:
1. **Other session commits first**: Commit integration changes to main
2. **Then complete MSI GenAI work**: Finish items 2-4 above, test, commit separately
**Alternative**: Create feature branch `feature/msi-genai-custom-provider`, cherry-pick only MSI GenAI changes, complete work there, merge when ready.
---
## Reference: MSI GenAI API Spec
**Documentation**: `GenAI API User Guide.md` (in project root)
**Key endpoints**:
- `POST /api/v2/chat` - Send prompt, get response
- `POST /api/v2/upload/<SESSION-ID>` - Upload files (requires session)
- `GET /api/v2/getSessionMessages/<SESSION-ID>` - Retrieve history
- `DELETE /api/v2/entry/<MSG-ID>` - Delete message
**Available models** (from guide):
- `Claude-Sonnet-4` (Public)
- `ChatGPT4o` (Public)
- `VertexGemini` (Private) - Gemini 2.0 Flash
- `ChatGPT-5_2-Chat` (Public)
- Many others (see guide section 4.1)
**Rate limits**: $50/user/month (enforced server-side)
---
## Questions for User
1. Should `session_id` be stored globally in `ProviderConfig` or per-conversation in DB?
2. Do we need to support file uploads via `/api/v2/upload/<SESSION-ID>`?
3. Should we expose model config options (temperature, max_tokens) for MSI GenAI?
---
## Contact
This handoff doc was generated for the other Claude Code session working on integration files. Once that work is committed, this MSI GenAI work can be completed as a separate commit or feature branch.

175
INTEGRATION_AUTH_GUIDE.md Normal file
View File

@ -0,0 +1,175 @@
# Integration Authentication Guide
## Overview
The TRCAA application supports three integration authentication methods, with automatic fallback between them:
1. **API Tokens** (Manual) - Recommended ✅
2. **OAuth 2.0** - Fully automated (when configured)
3. **Browser Cookies** - Partially working ⚠️
## Authentication Priority
When you ask an AI question, the system attempts authentication in this order:
```
1. Extract cookies from persistent browser window
↓ (if fails)
2. Use stored API token from database
↓ (if fails)
3. Skip that integration and log guidance
```
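A minimal sketch of this order (the helpers are hypothetical stand-ins for the app's real cookie and token plumbing; `tracing` is the app's logging layer):
```rust
enum IntegrationAuth {
    Cookies(String), // Cookie header value extracted from the browser window
    Token(String),   // Decrypted API token from the database
}

async fn resolve_auth(integration: &str) -> Option<IntegrationAuth> {
    // 1. Try the persistent browser window first (fails for HttpOnly cookies).
    if let Ok(cookies) = extract_browser_cookies(integration).await {
        return Some(IntegrationAuth::Cookies(cookies));
    }
    // 2. Fall back to the stored, encrypted API token.
    if let Ok(Some(token)) = load_stored_token(integration) {
        return Some(IntegrationAuth::Token(token));
    }
    // 3. Give up on this integration and log guidance for the user.
    tracing::warn!("Unable to search {integration} - no authentication available");
    None
}

// Hypothetical stubs so the sketch stands alone.
async fn extract_browser_cookies(_integration: &str) -> Result<String, String> {
    Err("HttpOnly cookies cannot be read from JavaScript".into())
}
fn load_stored_token(_integration: &str) -> Result<Option<String>, String> {
    Ok(None)
}
```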
## HttpOnly Cookie Limitation
**Problem**: Confluence, ServiceNow, and Azure DevOps use **HttpOnly cookies** for security. These cookies:
- ✅ Exist in the persistent browser window
- ✅ Are sent automatically by the browser
- ❌ **Cannot be extracted by JavaScript** (security feature)
- ❌ **Cannot be used in separate HTTP requests**
**Impact**: Cookie extraction via the persistent browser window **fails** for HttpOnly cookies, even though you're logged in.
## Recommended Solution: Use API Tokens
### Confluence Personal Access Token
1. Log into Confluence
2. Go to **Profile → Settings → Personal Access Tokens**
3. Click **Create token**
4. Copy the generated token
5. In TRCAA app:
- Go to **Settings → Integrations**
- Find your Confluence integration
- Click **"Save Manual Token"**
- Paste the token
- Token Type: `Bearer`
### ServiceNow API Key
1. Log into ServiceNow
2. Go to **System Security → Application Registry**
3. Click **New → OAuth API endpoint for external clients**
4. Configure and generate API key
5. In TRCAA app:
- Go to **Settings → Integrations**
- Find your ServiceNow integration
- Click **"Save Manual Token"**
- Paste the API key
### Azure DevOps Personal Access Token (PAT)
1. Log into Azure DevOps
2. Click **User Settings (top right) → Personal Access Tokens**
3. Click **New Token**
4. Scopes: Select **Read** for:
- Code (for wiki)
- Work Items (for work item search)
5. Click **Create** and copy the token
6. In TRCAA app:
- Go to **Settings → Integrations**
- Find your Azure DevOps integration
- Click **"Save Manual Token"**
- Paste the token
- Token Type: `Bearer`
## Verification
After adding API tokens, test the integration:
1. Open or create an issue
2. Go to Triage page
3. Ask a question like: "How do I upgrade Vesta NXT to 1.0.12"
4. Check the logs for:
```
INFO Using stored cookies for confluence (count: 1)
INFO Found X integration sources for AI context
```
If successful, the AI response should include:
- Content from internal documentation
- Source citations with URLs
- Links to Confluence/ServiceNow/Azure DevOps pages
## Troubleshooting
### No search results found
**Symptom**: AI gives generic answers instead of internal documentation
**Check logs for**:
```
WARN Unable to search confluence - no authentication available
```
**Solution**: Add an API token (see above)
### Cookie extraction timeout
**Symptom**: Logs show:
```
WARN Failed to extract cookies from confluence: Timeout extracting cookies
```
**Why**: HttpOnly cookies cannot be extracted via JavaScript
**Solution**: Use API tokens instead
### Integration not configured
**Symptom**: No integration searches at all
**Check**: Settings → Integrations - ensure integration is added with:
- Base URL configured
- Either browser window open OR API token saved
## Future Enhancements
### Native Cookie Extraction (Planned)
We plan to implement platform-specific native cookie extraction that can access HttpOnly cookies directly from the webview's cookie store:
- **macOS**: Use WKWebView's HTTPCookieStore (requires `cocoa`/`objc` crates)
- **Windows**: Use WebView2's cookie manager (requires `windows` crate)
- **Linux**: Use WebKitGTK cookie manager (requires `webkit2gtk` binding)
This will make the persistent browser approach fully automatic, even with HttpOnly cookies.
### Webview-Based Search (Experimental)
Another approach is to make search requests FROM within the authenticated webview using JavaScript fetch, which automatically includes HttpOnly cookies. This requires reliable IPC communication between JavaScript and Rust.
## Security Notes
### Token Storage
API tokens are:
- ✅ **Encrypted** using AES-256-GCM before storage
- ✅ **Hashed** (SHA-256) for audit logging
- ✅ Stored in encrypted SQLite database
- ✅ Never exposed to frontend JavaScript
### Cookie Storage (when working)
Extracted cookies are:
- ✅ Encrypted before database storage
- ✅ Only retrieved when making API requests
- ✅ Transmitted only over HTTPS
### Audit Trail
All integration authentication attempts are logged:
- Cookie extraction attempts
- Token usage
- Search requests
- Authentication failures
Check **Settings → Security → Audit Log** to review activity.
## Summary
**For reliable integration search NOW**: Use API tokens (Option 1)
**For automatic integration search LATER**: Native cookie extraction will be implemented in a future update
**Current workaround**: API tokens provide full functionality without browser dependency

README.md
View File

@ -1,4 +1,4 @@
# TFTSR — IT Triage & RCA Desktop Application
# Troubleshooting and RCA Assistant
A structured, AI-backed desktop tool for IT incident triage, 5-Whys root cause analysis, RCA document generation, and blameless post-mortems. Runs fully offline via Ollama local models, or connects to cloud AI providers.
@ -46,7 +46,7 @@ Built with **Tauri 2** (Rust + WebView), **React 18**, **TypeScript**, and **SQL
| UI | Tailwind CSS (custom shadcn-style components) |
| Database | rusqlite + `bundled-sqlcipher` (AES-256) |
| Secret storage | `tauri-plugin-stronghold` |
| State management | Zustand (persisted settings store) |
| State management | Zustand (persisted settings store with API key redaction) |
| AI providers | reqwest (async HTTP) |
| PII detection | regex + aho-corasick multi-pattern engine |
@ -166,7 +166,7 @@ To use Claude via AWS Bedrock (ideal for enterprise environments with existing A
nohup litellm --config ~/.litellm/config.yaml --port 8000 > ~/.litellm/litellm.log 2>&1 &
```
4. **Configure in TFTSR:**
4. **Configure in Troubleshooting and RCA Assistant:**
- Provider: **OpenAI** (OpenAI-compatible)
- Base URL: `http://localhost:8000/v1`
- API Key: `sk-your-secure-key` (from config)
@ -217,7 +217,7 @@ tftsr/
└── .gitea/
└── workflows/
├── test.yml # CI: rustfmt · clippy · cargo test · tsc · vitest (every push/PR)
└── release.yml # Release: linux/amd64 + windows/amd64 + linux/arm64 → Gitea release
└── auto-tag.yml # Auto tag + release: linux/amd64 + windows/amd64 + linux/arm64 + macOS
```
---
@ -251,7 +251,7 @@ The project uses **Gitea Actions** (act_runner v0.3.1) connected to the Gitea in
| Workflow | Trigger | Jobs |
|---|---|---|
| `.gitea/workflows/test.yml` | Every push / PR | rustfmt · clippy · cargo test (64) · tsc · vitest (13) |
| `.gitea/workflows/release.yml` | Tag `v*` or manual dispatch | Build linux/amd64 + windows/amd64 + linux/arm64 → upload to Gitea release |
| `.gitea/workflows/auto-tag.yml` | Push to `master` | Auto-tag, then build linux/amd64 + windows/amd64 + linux/arm64 + macOS and upload assets |
**Runners:**
@ -270,10 +270,10 @@ The project uses **Gitea Actions** (act_runner v0.3.1) connected to the Gitea in
| Concern | Implementation |
|---|---|
| API keys / tokens | `tauri-plugin-stronghold` encrypted vault |
| API keys / tokens | AES-256-GCM encrypted at rest (backend), not persisted in browser storage |
| Database at rest | SQLCipher AES-256; key derived via PBKDF2 |
| PII before AI send | Rust-side detection + mandatory user approval in UI |
| Audit trail | Every `ai_send` / `publish` event logged with SHA-256 hash |
| Audit trail | Hash-chained audit entries (`prev_hash` + `entry_hash`) for tamper evidence |
| Network | `reqwest` with TLS; HTTP blocked by Tauri capability config |
| Capabilities | Least-privilege: scoped fs access, no arbitrary shell by default |
| CSP | Strict CSP in `tauri.conf.json`; no inline scripts |
@ -300,7 +300,8 @@ Override with the `TFTSR_DATA_DIR` environment variable.
| Variable | Default | Purpose |
|---|---|---|
| `TFTSR_DATA_DIR` | Platform data dir | Override database location |
| `TFTSR_DB_KEY` | `dev-key-change-in-prod` | Database encryption key (release builds) |
| `TFTSR_DB_KEY` | _(none)_ | Database encryption key (required in release builds) |
| `TFTSR_ENCRYPTION_KEY` | _(none)_ | Credential encryption key (required in release builds) |
| `RUST_LOG` | `info` | Tracing log level (`debug`, `info`, `warn`, `error`) |
---

View File

@ -0,0 +1,254 @@
# Ticket Summary - Persistent Browser Windows for Integration Authentication
## Description
Implement persistent browser window sessions for integration authentication (Confluence, Azure DevOps, ServiceNow). Browser windows now persist across application restarts, eliminating the need to extract HttpOnly cookies via JavaScript (which fails due to browser security restrictions).
This follows a Playwright-style "piggyback" authentication approach where the browser window maintains its own internal cookie store, allowing the user to log in once and have the session persist indefinitely until they manually close the window.
## Acceptance Criteria
- [x] Integration browser windows persist to database when created
- [x] Browser windows are automatically restored on app startup
- [x] Cookies are maintained automatically by the browser's internal store (no JavaScript extraction of HttpOnly cookies)
- [x] Windows can be manually closed by the user, which removes them from persistence
- [x] Database migration creates `persistent_webviews` table
- [x] Window close events are handled to update database and in-memory tracking
## Work Implemented
### 1. Database Migration for Persistent Webviews
**Files Modified:**
- `src-tauri/src/db/migrations.rs:154-167`
**Changes:**
- Added migration `013_create_persistent_webviews` to create the `persistent_webviews` table
- Table schema includes:
- `id` (TEXT PRIMARY KEY)
- `service` (TEXT with CHECK constraint for 'confluence', 'servicenow', 'azuredevops')
- `webview_label` (TEXT - the Tauri window identifier)
- `base_url` (TEXT - the integration base URL)
- `last_active` (TEXT timestamp, defaults to now)
- `window_x`, `window_y`, `window_width`, `window_height` (INTEGER - for future window position persistence)
- UNIQUE constraint on `service` (one browser window per integration)
### 2. Webview Persistence on Creation
**Files Modified:**
- `src-tauri/src/commands/integrations.rs:531-591`
**Changes:**
- Modified `authenticate_with_webview` command to persist webview state to database after creation
- Stores service name, webview label, and base URL
- Logs persistence operation for debugging
- Sets up window close event handler to remove webview from tracking and database
- Event handler properly clones Arc fields for `'static` lifetime requirement
- Updated success message to inform user that window persists across restarts
### 3. Webview Restoration on App Startup
**Files Modified:**
- `src-tauri/src/commands/integrations.rs:793-865` - Added `restore_persistent_webviews` function
- `src-tauri/src/lib.rs:60-84` - Added `.setup()` hook to call restoration
**Changes:**
- Added `restore_persistent_webviews` async function that:
- Queries `persistent_webviews` table for all saved webviews
- Recreates each webview window by calling `authenticate_with_webview`
- Updates in-memory tracking map
- Removes from database if restoration fails
- Logs all operations for debugging
- Updated `lib.rs` to call restoration in `.setup()` hook:
- Clones Arc fields from `AppState` for `'static` lifetime
- Spawns async task to restore webviews
- Logs warnings if restoration fails
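A minimal sketch of what this startup hook looks like; `restore_persistent_webviews(&handle)` is shown here as a handle-taking wrapper for brevity (the real code clones individual Arc fields out of `AppState` before spawning):
```rust
// Sketch only: the actual lib.rs clones Arc fields for the 'static bound.
pub fn run() {
    tauri::Builder::default()
        .setup(|app| {
            let handle = app.handle().clone();
            tauri::async_runtime::spawn(async move {
                if let Err(e) = restore_persistent_webviews(&handle).await {
                    tracing::warn!("Failed to restore persistent webviews: {e}");
                }
            });
            Ok(())
        })
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```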
### 4. Window Close Event Handling
**Files Modified:**
- `src-tauri/src/commands/integrations.rs:559-591`
**Changes:**
- Added `on_window_event` listener to detect window close events
- On `CloseRequested` event:
- Spawns async task to clean up
- Removes service from in-memory `integration_webviews` map
- Deletes entry from `persistent_webviews` database table
- Logs all cleanup operations
- Properly handles Arc cloning to avoid lifetime issues in spawned task
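A sketch of the close handler under those constraints; `WebviewMap` and the in-task cleanup are stand-ins for the real `AppState` fields and DB helper:
```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::Mutex;

// Hypothetical shape of the in-memory tracking map: service -> window label.
type WebviewMap = Arc<Mutex<HashMap<String, String>>>;

// Remove a closed window from in-memory tracking; the matching DELETE from
// persistent_webviews runs in the same spawned task in the real code.
fn watch_close(window: &tauri::WebviewWindow, service: String, webviews: WebviewMap) {
    window.on_window_event(move |event| {
        if let tauri::WindowEvent::CloseRequested { .. } = event {
            let (webviews, service) = (webviews.clone(), service.clone());
            tauri::async_runtime::spawn(async move {
                webviews.lock().await.remove(&service);
                // DELETE FROM persistent_webviews WHERE service = ?1 (via DB handle)
            });
        }
    });
}
```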
### 5. Removed Auto-Close Behavior
**Files Modified:**
- `src-tauri/src/commands/integrations.rs:606-618`
**Changes:**
- Removed automatic window closing in `extract_cookies_from_webview`
- Windows now stay open after cookie extraction
- Updated success message to inform user that window persists for future use
### 6. Frontend UI Update - Removed "Complete Login" Button
**Files Modified:**
- `src/pages/Settings/Integrations.tsx:371-409` - Updated webview authentication UI
- `src/pages/Settings/Integrations.tsx:140-165` - Simplified `handleConnectWebview`
- `src/pages/Settings/Integrations.tsx:167-200` - Removed `handleCompleteWebviewLogin` function
- `src/pages/Settings/Integrations.tsx:16-26` - Removed unused `extractCookiesFromWebviewCmd` import
- `src/pages/Settings/Integrations.tsx:670-677` - Updated authentication method comparison text
**Changes:**
- Removed "Complete Login" button that tried to extract cookies via JavaScript
- Updated UI to show success message when browser opens, explaining persistence
- Removed confusing two-step flow (open browser → complete login)
- New flow: click "Open Browser" → log in → leave window open (that's it!)
- Updated description text to explain persistent window behavior
- Marks the integration as "connected" immediately when the browser opens
- Removed unused function and import for cookie extraction
### 7. Unused Import Cleanup
**Files Modified:**
- `src-tauri/src/integrations/webview_auth.rs:2`
- `src-tauri/src/lib.rs:13` - Added `use tauri::Manager;`
**Changes:**
- Removed unused `Listener` import from webview_auth.rs
- Added `Manager` trait import to lib.rs for `.state()` method
## Testing Needed
### Manual Testing
1. **Initial Browser Window Creation**
- [ ] Navigate to Settings > Integrations
- [ ] Configure a Confluence integration with base URL
- [ ] Click "Open Browser" button
- [ ] Verify browser window opens with Confluence login page
- [ ] Complete login in the browser window
- [ ] Verify window stays open after login
2. **Window Persistence Across Restarts**
- [ ] With Confluence browser window open, close the main application
- [ ] Relaunch the application
- [ ] Verify Confluence browser window is automatically restored
- [ ] Verify you are still logged in (cookies maintained)
- [ ] Navigate to different pages in Confluence to verify session works
3. **Manual Window Close**
- [ ] With browser window open, manually close it (X button)
- [ ] Restart the application
- [ ] Verify browser window does NOT reopen (removed from persistence)
4. **Database Verification**
- [ ] Open database: `sqlite3 ~/Library/Application\ Support/trcaa/data.db`
- [ ] Run: `SELECT * FROM persistent_webviews;`
- [ ] Verify entry exists when window is open
- [ ] Close window and verify entry is removed
5. **Multiple Integration Windows**
- [ ] Open browser window for Confluence
- [ ] Open browser window for Azure DevOps
- [ ] Restart application
- [ ] Verify both windows are restored
- [ ] Close one window
- [ ] Verify only one is removed from database
- [ ] Restart and verify remaining window still restores
6. **Cookie Persistence (No HttpOnly Extraction Needed)**
- [ ] Log into Confluence browser window
- [ ] Close main application
- [ ] Relaunch application
- [ ] Navigate to a Confluence page that requires authentication
- [ ] Verify you are still logged in (cookies maintained by browser)
### Automated Testing
```bash
# Type checking
npx tsc --noEmit
# Rust compilation
cargo check --manifest-path src-tauri/Cargo.toml
# Rust tests
cargo test --manifest-path src-tauri/Cargo.toml
# Rust linting
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings
```
### Edge Cases to Test
- Application crash while browser window is open (verify restoration on next launch)
- Database corruption (verify graceful handling of restore failures)
- Window already exists when trying to create duplicate (verify existing window is focused)
- Network connectivity lost during window restoration (verify error handling)
- Multiple rapid window open/close cycles (verify database consistency)
## Architecture Notes
### Design Decision: Persistent Windows vs Cookie Extraction
**Problem:** HttpOnly cookies cannot be accessed via JavaScript (`document.cookie`), which broke the original cookie extraction approach for Confluence and other services.
**Solution:** Instead of extracting cookies, keep the browser window alive across app restarts:
- Browser maintains its own internal cookie store (includes HttpOnly cookies)
- Cookies are automatically sent with all HTTP requests from the browser
- No need for JavaScript extraction or manual token management
- Matches Playwright's approach of persistent browser contexts
### Lifecycle Flow
1. **Window Creation:** User clicks "Open Browser" → `authenticate_with_webview` creates window → State saved to database
2. **App Running:** Window stays open, user can browse freely, cookies maintained by browser
3. **Window Close:** User closes window → Event handler removes from database and memory
4. **App Restart:** `restore_persistent_webviews` queries database → Recreates all windows → Windows resume with original cookies
### Database Schema
```sql
CREATE TABLE persistent_webviews (
    id TEXT PRIMARY KEY,
    service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
    webview_label TEXT NOT NULL,
    base_url TEXT NOT NULL,
    last_active TEXT NOT NULL DEFAULT (datetime('now')),
    window_x INTEGER,
    window_y INTEGER,
    window_width INTEGER,
    window_height INTEGER,
    UNIQUE(service)
);
```
### Future Enhancements
- [ ] Save and restore window position/size (columns already exist in schema)
- [ ] Add "last_active" timestamp updates on window focus events
- [ ] Implement "Close All Windows" command for cleanup
- [ ] Add visual indicator in main UI showing which integrations have active browser windows
- [ ] Implement session timeout logic (close windows after X days of inactivity)
## Related Files
- `src-tauri/src/db/migrations.rs` - Database schema migration
- `src-tauri/src/commands/integrations.rs` - Webview persistence and restoration logic
- `src-tauri/src/integrations/webview_auth.rs` - Browser window creation
- `src-tauri/src/lib.rs` - App startup hook for restoration
- `src-tauri/src/state.rs` - AppState structure with `integration_webviews` map
## Security Considerations
- Cookie storage remains in the browser's internal secure store (not extracted to database)
- Database only stores window metadata (service, label, URL)
- No credential information persisted beyond what the browser already maintains
- Audit log still tracks all integration API calls separately
## Migration Path
Users upgrading to this version will:
1. See new database migration `013_create_persistent_webviews` applied automatically
2. Existing integrations continue to work (migration is additive only)
3. First time opening a browser window will persist it for future sessions
4. No manual action required from users

View File

@ -1,134 +1,536 @@
# Ticket Summary - UI Fixes and Audit Log Enhancement
# Ticket Summary - Integration Search + AI Tool-Calling Implementation
## Description
This ticket addresses multiple UI and functionality issues reported in the tftsr-devops_investigation application:
This ticket implements Confluence, ServiceNow, and Azure DevOps as primary data sources for AI queries. When users ask questions in the AI chat, the system now searches these internal documentation sources first and injects the results as context before sending the query to the AI provider. This ensures the AI prioritizes internal company documentation over general knowledge.
1. **Download Icons Visibility**: Download icons (PDF, DOCX) in RCA and Post-Mortem pages were not visible in dark theme
2. **Export File System Error**: "Read-only file system (os error 30)" error when attempting to export documents
3. **History Search Button**: Search button not visible in the History page
4. **Domain Filtering**: Domain-only filtering not working in History page
5. **Audit Log Enhancement**: Audit log showed only internal IDs, lacking actual transmitted data for security auditing
**User Requirement:** "using confluance as the initial data source was a key requirement. The same for ServiceNow and ADO"
**Example Use Case:** When asking "How do I upgrade Vesta NXT to 1.0.12", the AI should return the Confluence documentation link or content from internal wiki pages, rather than generic upgrade instructions.
### AI Tool-Calling Implementation
This ticket also implements AI function calling (tool calling) to allow AI to automatically execute actions like adding comments to Azure DevOps tickets. When the AI determines it should perform an action (rather than just respond with text), it can call defined tools/functions and the system will execute them, returning results to the AI for further processing.
**User Requirement:** "using the AI intagration, I wanted to beable to ask it to put a coment in a ADO ticket and have it pull the data from the integration search and then post a coment in the ticket"
**Example Use Case:** When asking "Add a comment to ADO ticket 758421 with the test results", the AI should automatically call the `add_ado_comment` tool with the appropriate parameters, execute the action, and confirm completion.
## Acceptance Criteria
- [ ] Download icons are visible in both light and dark themes on RCA and Post-Mortem pages
- [ ] Documents can be exported successfully to Downloads directory without filesystem errors
- [ ] Search button is visible with proper styling in History page
- [ ] Domain filter works independently without requiring a search query
- [ ] Audit log displays full transmitted data including:
- AI chat messages with provider details, user message, and response preview
- Document generation with content preview and metadata
- All entries show properly formatted JSON with details
- [x] Confluence search integration retrieves wiki pages matching user queries
- [x] ServiceNow search integration retrieves knowledge base articles and related incidents
- [x] Azure DevOps search integration retrieves wiki pages and work items
- [x] Integration searches execute in parallel for performance
- [x] Search results are injected as system context before AI queries
- [x] AI responses include source citations with URLs from internal documentation
- [x] System uses persistent browser cookies from authenticated sessions
- [x] Graceful fallback when integration sources are unavailable
- [x] All searches complete successfully without compilation errors
- [x] AI tool-calling architecture implemented with Provider trait support
- [x] Tool definitions created for available actions (add_ado_comment)
- [x] Tool execution loop implemented in chat_message command
- [x] OpenAI-compatible providers support tool-calling
- [x] MSI GenAI custom REST provider supports tool-calling
- [ ] Tool-calling tested with MSI GenAI provider (pending user testing)
- [ ] AI successfully executes add_ado_comment when requested
## Work Implemented
### 1. Download Icons Visibility Fix
### 1. Confluence Search Module
**Files Created:**
- `src-tauri/src/integrations/confluence_search.rs` (173 lines)
**Implementation:**
```rust
pub async fn search_confluence(
    base_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```
**Features:**
- Uses Confluence CQL (Confluence Query Language) search API
- Searches text content across all wiki pages
- Fetches full page content via `/rest/api/content/{id}?expand=body.storage`
- Strips HTML tags from content for clean AI context
- Returns top 3 most relevant results
- Truncates content to 3000 characters for AI context window
- Includes title, URL, excerpt, and full content in results
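For illustration, the CQL request might be assembled roughly like this (hypothetical helper; the `Cookie` struct shape and error handling are assumptions, and the endpoint follows the standard Confluence REST API):
```rust
use reqwest::Client;

// Hypothetical Cookie shape; the real struct lives elsewhere in the codebase.
struct Cookie { name: String, value: String }

async fn cql_search(base_url: &str, query: &str, cookies: &[Cookie]) -> Result<serde_json::Value, String> {
    // Join cookies into a single Cookie header for the authenticated session.
    let cookie_header = cookies
        .iter()
        .map(|c| format!("{}={}", c.name, c.value))
        .collect::<Vec<_>>()
        .join("; ");
    let url = format!(
        "{}/rest/api/content/search?cql={}&limit=3",
        base_url.trim_end_matches('/'),
        urlencoding::encode(&format!("text ~ \"{query}\""))
    );
    let resp = Client::new()
        .get(&url)
        .header("Cookie", cookie_header)
        .send()
        .await
        .map_err(|e| e.to_string())?;
    resp.json().await.map_err(|e| e.to_string())
}
```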
### 2. ServiceNow Search Module
**Files Created:**
- `src-tauri/src/integrations/servicenow_search.rs` (181 lines)
**Implementation:**
```rust
pub async fn search_servicenow(
    instance_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>

pub async fn search_incidents(
    instance_url: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```
**Features:**
- Searches Knowledge Base articles via `/api/now/table/kb_knowledge`
- Searches incidents via `/api/now/table/incident`
- Uses ServiceNow query language with `LIKE` operators
- Returns article text and incident descriptions/resolutions
- Includes incident numbers and states in results
- Top 3 knowledge base articles + top 3 incidents
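A rough sketch of the KB query against the Table API (hypothetical helper; field names follow the standard `kb_knowledge` table, and the real module may filter on different fields):
```rust
// Query ServiceNow's Table API for KB articles matching the user's query.
async fn kb_search(instance_url: &str, query: &str, cookie_header: &str) -> Result<serde_json::Value, String> {
    let url = format!(
        "{}/api/now/table/kb_knowledge?sysparm_query=short_descriptionLIKE{}&sysparm_limit=3",
        instance_url.trim_end_matches('/'),
        urlencoding::encode(query)
    );
    reqwest::Client::new()
        .get(&url)
        .header("Cookie", cookie_header)
        .header("Accept", "application/json")
        .send()
        .await
        .map_err(|e| e.to_string())?
        .json()
        .await
        .map_err(|e| e.to_string())
}
```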
### 3. Azure DevOps Search Module
**Files Created:**
- `src-tauri/src/integrations/azuredevops_search.rs` (274 lines)
**Implementation:**
```rust
pub async fn search_wiki(
    org_url: &str,
    project: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>

pub async fn search_work_items(
    org_url: &str,
    project: &str,
    query: &str,
    cookies: &[Cookie],
) -> Result<Vec<SearchResult>, String>
```
**Features:**
- Uses Azure DevOps Search API for wiki search
- Uses WIQL (Work Item Query Language) for work item search
- Fetches full wiki page content via `/api/wiki/wikis/{id}/pages`
- Retrieves work item details including descriptions and states
- Project-scoped searches for better relevance
- Returns top 3 wiki pages + top 3 work items
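For reference, a WIQL query might be posted like this (hypothetical helper; the endpoint and `api-version` follow the public Azure DevOps REST docs, and cookie-based auth is assumed per the persistent-browser design):
```rust
// Run a WIQL work item search scoped to a project.
async fn wiql_search(org_url: &str, project: &str, term: &str, cookie_header: &str) -> Result<serde_json::Value, String> {
    let url = format!("{org_url}/{project}/_apis/wit/wiql?api-version=7.0");
    let body = serde_json::json!({
        "query": format!(
            "SELECT [System.Id], [System.Title] FROM WorkItems \
             WHERE [System.Title] CONTAINS '{term}' ORDER BY [System.ChangedDate] DESC"
        )
    });
    reqwest::Client::new()
        .post(&url)
        .header("Cookie", cookie_header)
        .json(&body)
        .send()
        .await
        .map_err(|e| e.to_string())?
        .json()
        .await
        .map_err(|e| e.to_string())
}
```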
### 4. AI Command Integration
**Files Modified:**
- `src/components/DocEditor.tsx:60-67`
- `src-tauri/src/commands/ai.rs:377-511` (Added `search_integration_sources` function)
**Implementation:**
```rust
async fn search_integration_sources(
    query: &str,
    app_handle: &tauri::AppHandle,
    state: &State<'_, AppState>,
) -> String
```
**Features:**
- Queries database for all configured integrations
- Retrieves persistent browser cookies for each integration
- Spawns parallel tokio tasks for each integration search
- Aggregates results from all sources
- Formats results as AI context with source metadata
- Returns formatted context string for injection into AI prompts
**Context Injection:**
```rust
if !integration_context.is_empty() {
    let context_message = Message {
        role: "system".into(),
        content: format!(
            "INTERNAL DOCUMENTATION SOURCES:\n\n{}\n\n\
             Instructions: The above content is from internal company \
             documentation systems (Confluence, ServiceNow, Azure DevOps). \
             You MUST prioritize this information when answering. Include \
             source citations with URLs in your response. Only use general \
             knowledge if the internal documentation doesn't cover the question.",
            integration_context
        ),
    };
    messages.push(context_message);
}
```
### 5. AI Tool-Calling Architecture
**Files Created/Modified:**
- `src-tauri/src/ai/tools.rs` (43 lines) - NEW FILE
- `src-tauri/src/ai/mod.rs:34-68` (Added tool-calling data structures)
- `src-tauri/src/ai/provider.rs:16` (Added tools parameter to Provider trait)
- `src-tauri/src/ai/openai.rs:89-113, 137-157, 257-376` (Tool-calling for OpenAI and MSI GenAI)
- `src-tauri/src/commands/ai.rs:60-98, 126-167` (Tool execution and chat loop)
- `src-tauri/src/commands/integrations.rs:85-121` (add_ado_comment command)
**Implementation:**
**Tool Definitions (`src-tauri/src/ai/tools.rs`):**
```rust
pub fn get_available_tools() -> Vec<Tool> {
    vec![get_add_ado_comment_tool()]
}

fn get_add_ado_comment_tool() -> Tool {
    Tool {
        name: "add_ado_comment".to_string(),
        description: "Add a comment to an Azure DevOps work item".to_string(),
        parameters: ToolParameters {
            param_type: "object".to_string(),
            // JSON-schema properties (abridged):
            //   work_item_id: integer (the work item to comment on)
            //   comment_text: string  (the comment body)
            properties: /* ... */,
            required: vec!["work_item_id", "comment_text"],
        },
    }
}
```
**Data Structures (`src-tauri/src/ai/mod.rs`):**
```rust
pub struct ToolCall {
    pub id: String,
    pub name: String,
    pub arguments: String, // JSON string
}

pub struct Message {
    pub role: String,
    pub content: String,
    pub tool_call_id: Option<String>,
    pub tool_calls: Option<Vec<ToolCall>>,
}

pub struct ChatResponse {
    pub content: String,
    pub model: String,
    pub usage: Option<TokenUsage>,
    pub tool_calls: Option<Vec<ToolCall>>,
}
```
**OpenAI Provider (`src-tauri/src/ai/openai.rs`):**
- Sends tools in OpenAI format: `{"type": "function", "function": {...}}`
- Parses `tool_calls` array from response
- Sets `tool_choice: "auto"` to enable automatic tool selection
- Works with OpenAI, Azure OpenAI, and compatible APIs
**MSI GenAI Provider (`src-tauri/src/ai/openai.rs::chat_custom_rest`):**
- Sends tools in OpenAI-compatible format (MSI GenAI standard)
- Adds `tools` and `tool_choice` fields to request body
- Parses multiple response formats:
- OpenAI format: `tool_calls[].function.name/arguments`
- Simpler format: `tool_calls[].name/arguments`
- Alternative field names: `toolCalls`, `function_calls`
- Enhanced logging for debugging tool call responses
- Generates tool call IDs if not provided by API
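For reference, the serialized `tools` array in an OpenAI-compatible request body looks roughly like this (sketch built with `serde_json::json!`; field layout follows the OpenAI function-calling spec):
```rust
// The tool definition above, as it appears on the wire.
let tools_payload = serde_json::json!([{
    "type": "function",
    "function": {
        "name": "add_ado_comment",
        "description": "Add a comment to an Azure DevOps work item",
        "parameters": {
            "type": "object",
            "properties": {
                "work_item_id": { "type": "integer" },
                "comment_text": { "type": "string" }
            },
            "required": ["work_item_id", "comment_text"]
        }
    }
}]);
// Sent alongside "tool_choice": "auto" in the request body.
```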
**Tool Executor (`src-tauri/src/commands/ai.rs`):**
```rust
async fn execute_tool_call(
    tool_call: &crate::ai::ToolCall,
    app_handle: &tauri::AppHandle,
    app_state: &State<'_, AppState>,
) -> Result<String, String> {
    match tool_call.name.as_str() {
        "add_ado_comment" => {
            let args: serde_json::Value = serde_json::from_str(&tool_call.arguments)
                .map_err(|e| format!("Invalid tool arguments: {e}"))?;
            let work_item_id = args
                .get("work_item_id")
                .and_then(|v| v.as_i64())
                .ok_or("Missing work_item_id")?;
            let comment_text = args
                .get("comment_text")
                .and_then(|v| v.as_str())
                .ok_or("Missing comment_text")?;
            crate::commands::integrations::add_ado_comment(
                work_item_id,
                comment_text.to_string(),
                app_handle.clone(),
                app_state.clone(),
            )
            .await
        }
        _ => Err(format!("Unknown tool: {}", tool_call.name)),
    }
}
```
**Chat Loop with Tool-Calling (`src-tauri/src/commands/ai.rs::chat_message`):**
```rust
let tools = Some(crate::ai::tools::get_available_tools());
let max_iterations = 10;
let mut iteration = 0;

loop {
    iteration += 1;
    if iteration > max_iterations {
        return Err("Tool-calling loop exceeded maximum iterations".to_string());
    }

    let response = provider.chat(messages.clone(), &provider_config, tools.clone()).await?;

    // Check if AI wants to call any tools
    if let Some(tool_calls) = &response.tool_calls {
        for tool_call in tool_calls {
            // Execute the tool
            let tool_result = execute_tool_call(tool_call, &app_handle, &state).await;
            let result_content = match tool_result {
                Ok(result) => result,
                Err(e) => format!("Error executing tool: {e}"),
            };
            // Add tool result to conversation
            messages.push(Message {
                role: "tool".into(),
                content: result_content,
                tool_call_id: Some(tool_call.id.clone()),
                tool_calls: None,
            });
        }
        continue; // Loop back to get AI's next response
    }

    // No more tool calls - return final response
    final_response = response;
    break;
}
```
**Features:**
- Iterative tool-calling loop (up to 10 iterations)
- AI can call multiple tools in sequence
- Tool results injected back into conversation
- Error handling for invalid tool calls
- Support for both OpenAI and MSI GenAI providers
- Extensible architecture for adding new tools
**Provider Compatibility:**
All AI providers updated to support tools parameter:
- `src-tauri/src/ai/anthropic.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/gemini.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/mistral.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/ollama.rs` - Added `_tools` parameter (not yet implemented)
- `src-tauri/src/ai/openai.rs` - **Fully implemented** for OpenAI and MSI GenAI
Note: Other providers are prepared for future tool-calling support but currently ignore the tools parameter. Only OpenAI-compatible providers and MSI GenAI have active tool-calling implementation.
### 7. Module Integration
**Files Modified:**
- `src-tauri/src/integrations/mod.rs:1-10` (Added search module exports)
- `src-tauri/src/ai/mod.rs:10` (Added tools export)
**Changes:**
- Added `text-foreground` class to Download icons for PDF and DOCX buttons
- Ensures icons inherit the current theme's foreground color for visibility
```rust
// integrations/mod.rs
pub mod confluence_search;
pub mod servicenow_search;
pub mod azuredevops_search;

// ai/mod.rs
pub use tools::*;
```
### 2. Export File System Error Fix
### 8. Test Fixes
**Files Modified:**
- `src-tauri/Cargo.toml:38` - Added `dirs = "5"` dependency
- `src-tauri/src/commands/docs.rs:127-170` - Rewrote `export_document` function
- `src/pages/RCA/index.tsx:53-60` - Updated error handling and user feedback
- `src/pages/Postmortem/index.tsx:52-59` - Updated error handling and user feedback
- `src-tauri/src/integrations/confluence_search.rs:178-185` (Fixed test assertions)
- `src-tauri/src/integrations/azuredevops_search.rs:1` (Removed unused imports)
- `src-tauri/src/integrations/servicenow_search.rs:1` (Removed unused imports)
**Changes:**
- Modified `export_document` to use Downloads directory by default instead of "."
- Falls back to `app_data_dir/exports` if Downloads directory unavailable
- Added proper directory creation with error handling
- Updated frontend to show success message with file path
- Empty `output_dir` parameter now triggers default behavior
### 3. Search Button Visibility Fix
**Files Modified:**
- `src/pages/History/index.tsx:124-127`
**Changes:**
- Changed button from `variant="outline"` to default variant
- Added Search icon to button for better visibility
- Button now has proper contrast in both themes
## Architecture
### Search Flow
```
User asks question in AI chat
chat_message() command called
search_integration_sources() executed
Query database for integration configs
Get fresh cookies from persistent browsers
Spawn parallel search tasks:
- Confluence CQL search
- ServiceNow KB + incident search
- Azure DevOps wiki + work item search
Wait for all tasks to complete
Format results with source citations
Inject as system message in AI context
Send to AI provider with context
AI responds with source-aware answer
```
### 4. Domain-Only Filtering Fix
**Files Modified:**
- `src-tauri/src/commands/db.rs:305-312`
**Changes:**
- Added missing `filter.domain` handling in `list_issues` function
- Domain filter now properly filters by `i.category` field
- Filter works independently of search query
### Tool-Calling Flow
```
User asks AI to perform action (e.g., "Add comment to ticket 758421")
chat_message() command called
Get available tools (add_ado_comment)
Send message + tools to AI provider
AI decides to call tool → returns ToolCall in response
execute_tool_call() dispatches to appropriate handler
add_ado_comment() retrieves ADO config from DB
Gets fresh cookies from persistent ADO browser
Calls webview_fetch to POST comment via ADO API
Tool result returned as Message with role="tool"
Send updated conversation back to AI
AI processes result and responds to user
User sees confirmation: "I've successfully added the comment"
```
### 5. Audit Log Enhancement
**Files Modified:**
- `src-tauri/src/commands/ai.rs:242-266` - Enhanced AI chat audit logging
- `src-tauri/src/commands/docs.rs:44-73` - Enhanced RCA generation audit logging
- `src-tauri/src/commands/docs.rs:90-119` - Enhanced postmortem generation audit logging
- `src/pages/Settings/Security.tsx:191-206` - Enhanced audit log display
**Changes:**
- AI chat audit now captures:
  - Provider name, model, and API URL
  - Full user message
  - Response preview (first 200 chars)
  - Token count
- Document generation audit now captures:
  - Issue ID and title
  - Document type and title
  - Content length and preview (first 300 chars)
- Security page now displays:
  - Pretty-printed JSON with proper formatting
  - Entry ID and entity type below the data
  - Better layout with whitespace handling
**Multi-Tool Support:**
- AI can call multiple tools in sequence
- Each tool result is added to conversation history
- Loop continues until AI provides final text response
- Maximum 10 iterations to prevent infinite loops
**Error Handling:**
- Invalid tool calls return error message to AI
- AI can retry with corrected parameters
- Missing arguments caught and reported
- Unknown tool names return error
### Database Query
Integration configurations are queried from the `integration_config` table:
```sql
SELECT service, base_url, username, project_name, space_key
FROM integration_config
```
This provides:
- `service`: "confluence", "servicenow", or "azuredevops"
- `base_url`: Integration instance URL
- `project_name`: For Azure DevOps project scoping
- `space_key`: For future Confluence space scoping
### Cookie Management
Persistent browser windows maintain authenticated sessions. The `get_fresh_cookies_from_webview()` function retrieves current cookies from the browser window, ensuring authentication remains valid across sessions.
### Parallel Execution
All integration searches execute in parallel using `tokio::spawn()`:
```rust
for config in configs {
    let cookies_result = get_fresh_cookies_from_webview(&config.service, ...).await;
    if let Ok(Some(cookies)) = cookies_result {
        match config.service.as_str() {
            "confluence" => {
                search_tasks.push(tokio::spawn(async move {
                    confluence_search::search_confluence(...).await
                        .unwrap_or_default()
                }));
            }
            // ... other integrations
        }
    }
}

// Wait for all searches
for task in search_tasks {
    if let Ok(results) = task.await {
        all_results.extend(results);
    }
}
```
### Error Handling
- Database lock failures return empty context (non-blocking)
- SQL query errors return empty context (non-blocking)
- Missing cookies skip that integration (non-blocking)
- Failed search requests return empty results (non-blocking)
- All errors are logged via `tracing::warn!`
- AI query proceeds with whatever context is available
## Testing Needed
### Manual Testing
1. **Download Icons Visibility**
- [ ] Open RCA page in light theme
- [ ] Verify PDF and DOCX download icons are visible
- [ ] Switch to dark theme
- [ ] Verify PDF and DOCX download icons are still visible
1. **Confluence Integration**
- [ ] Configure Confluence integration with valid base URL
- [ ] Open persistent browser and log into Confluence
- [ ] Create a test issue and ask: "How do I upgrade Vesta NXT to 1.0.12"
- [ ] Verify AI response includes Confluence wiki content
- [ ] Verify response includes source URL
- [ ] Check logs for "Found X integration sources for AI context"
2. **Export Functionality**
- [ ] Generate an RCA document
- [ ] Click "PDF" export button
- [ ] Verify file is created in Downloads directory
- [ ] Verify success message displays with file path
- [ ] Check file opens correctly
- [ ] Repeat for "MD" and "DOCX" formats
- [ ] Test on Post-Mortem page as well
2. **ServiceNow Integration**
- [ ] Configure ServiceNow integration with valid instance URL
- [ ] Open persistent browser and log into ServiceNow
- [ ] Ask question related to known KB article
- [ ] Verify AI response includes ServiceNow KB content
- [ ] Ask about known incident patterns
- [ ] Verify AI response includes incident information
3. **History Search Button**
- [ ] Navigate to History page
- [ ] Verify Search button is visible
- [ ] Verify button has search icon
- [ ] Test button in both light and dark themes
3. **Azure DevOps Integration**
- [ ] Configure Azure DevOps integration with org URL and project
- [ ] Open persistent browser and log into Azure DevOps
- [ ] Ask question about documented features in ADO wiki
- [ ] Verify AI response includes ADO wiki content
- [ ] Ask about known work items
- [ ] Verify AI response includes work item details
4. **Domain Filtering**
- [ ] Navigate to History page
- [ ] Select a domain from dropdown (e.g., "Linux")
- [ ] Do NOT enter any search text
- [ ] Verify issues are filtered by selected domain
- [ ] Change domain selection
- [ ] Verify filtering updates correctly
4. **Parallel Search Performance**
- [ ] Configure all three integrations
- [ ] Authenticate all three browsers
- [ ] Ask a question that matches content in all sources
- [ ] Verify results from multiple sources appear
- [ ] Check logs to confirm parallel execution
- [ ] Measure response time (should be <5s for all searches)
5. **Audit Log**
- [ ] Perform an AI chat interaction
- [ ] Navigate to Settings > Security > Audit Log
- [ ] Click "View" on a recent entry
- [ ] Verify transmitted data shows:
- Provider details
- User message
- Response preview
- [ ] Generate an RCA or Post-Mortem
- [ ] Check audit log for document generation entry
- [ ] Verify content preview and metadata are visible
5. **Graceful Degradation**
- [ ] Test with only Confluence configured
- [ ] Verify AI still works with single source
- [ ] Test with no integrations configured
- [ ] Verify AI still works with general knowledge
- [ ] Test with integration browser closed
- [ ] Verify AI continues with available sources
6. **AI Tool-Calling with MSI GenAI**
- [ ] Configure MSI GenAI as active AI provider
- [ ] Configure Azure DevOps integration and authenticate
- [ ] Create test issue and start triage conversation
- [ ] Ask: "Add a comment to ADO ticket 758421 saying 'This is a test'"
- [ ] Verify AI calls add_ado_comment tool (check logs for "MSI GenAI: Parsed tool call")
- [ ] Verify comment appears in ADO ticket 758421
- [ ] Verify AI confirms action was completed
- [ ] Test with invalid ticket number (e.g., 99999999)
- [ ] Verify AI reports error gracefully
7. **AI Tool-Calling with OpenAI**
- [ ] Configure OpenAI or Azure OpenAI as active provider
- [ ] Repeat tool-calling tests from section 6
- [ ] Verify tool-calling works with OpenAI-compatible providers
- [ ] Test multi-tool scenario: "Add comment to 758421 and then another to 758422"
- [ ] Verify AI calls tool multiple times in sequence
8. **Tool-Calling Error Handling**
- [ ] Test with ADO browser closed (no cookies available)
- [ ] Verify AI reports authentication error
- [ ] Test with invalid work item ID format (non-integer)
- [ ] Verify error caught in tool executor
- [ ] Test with missing ADO configuration
- [ ] Verify graceful error message to user
### Automated Testing
@ -136,20 +538,183 @@ This ticket addresses multiple UI and functionality issues reported in the tftsr
# Type checking
npx tsc --noEmit
# Rust compilation
# Rust compilation check
cargo check --manifest-path src-tauri/Cargo.toml
# Rust linting
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings
# Run all tests
cargo test --manifest-path src-tauri/Cargo.toml
# Frontend tests (if applicable)
npm run test:run
# Build debug version
cargo tauri build --debug
# Run linter
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings
```
### Test Results
All tests passing:
```
test result: ok. 130 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
### Edge Cases to Test
- Export when Downloads directory doesn't exist
- Export with very long document titles (special character handling)
- Domain filter with empty result set
- Audit log with very large payloads (>1000 chars)
- Audit log JSON parsing errors (malformed data)
- [ ] Query with no matching content in any source
- [ ] Query matching content in all three sources (verify aggregation)
- [ ] Very long query strings (>1000 characters)
- [ ] Special characters in queries (quotes, brackets, etc.)
- [ ] Integration returns >3 results (verify truncation)
- [ ] Integration returns very large content (verify 3000 char limit)
- [ ] Multiple persistent browsers for same integration
- [ ] Cookie expiration during search
- [ ] Network timeout during search
- [ ] Integration API version changes
- [ ] HTML content with complex nested tags
- [ ] Unicode content in search results
- [ ] AI calling same tool multiple times in one response
- [ ] Tool returning very large result (>10k characters)
- [ ] Tool execution timeout (slow API response)
- [ ] AI calling non-existent tool name
- [ ] Tool call with malformed JSON arguments
- [ ] Reaching max iteration limit (10 tool calls in sequence)
## Performance Considerations
### Content Truncation
- Wiki pages truncated to 3000 characters
- Knowledge base articles truncated to 3000 characters
- Excerpts limited to 200-300 characters
- Top 3 results per source type
These limits ensure:
- AI context window remains reasonable (~10k chars max)
- Response times stay under 5 seconds
- Costs remain manageable for AI providers
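A char-boundary-safe truncation helper of the kind these limits imply might look like this (hypothetical; the real modules may truncate differently):
```rust
/// Truncate a string to at most `max_chars` characters without splitting
/// a multi-byte UTF-8 character, appending an ellipsis when cut short.
fn truncate_for_context(s: &str, max_chars: usize) -> String {
    match s.char_indices().nth(max_chars) {
        Some((byte_idx, _)) => format!("{}…", &s[..byte_idx]),
        None => s.to_string(),
    }
}
```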
### Parallel Execution
- All integrations searched simultaneously
- No blocking between different sources
- Failed searches don't block successful ones
- Total time = slowest individual search, not sum
### Caching Strategy (Future Enhancement)
- Could cache search results for 5-10 minutes
- Would reduce API calls for repeated queries
- Needs invalidation strategy for updated content
## Security Considerations
1. **Cookie Security**
- Cookies stored in encrypted database
- Retrieved only when needed for API calls
- Never exposed to frontend
- Transmitted only over HTTPS
2. **Content Sanitization**
- HTML tags stripped from content
- No script injection possible
- Content truncated to prevent overflow
3. **Audit Trail**
- Integration searches not currently audited (future enhancement)
- AI chat with context is audited
- Could add audit entries for each integration query
4. **Access Control**
- Uses user's authenticated session
- Respects integration platform permissions
- No privilege escalation
## Known Issues / Future Enhancements
1. **Tool-Calling Format Unknown for MSI GenAI**
- Implementation uses OpenAI-compatible format as standard
- MSI GenAI response format for tool_calls is unknown (not documented)
- Code parses multiple possible response formats as fallback
- Requires real-world testing with MSI GenAI to verify
- May need format adjustments based on actual API responses
- Enhanced logging added to debug actual response structure
2. **ADO Browser Window Blank Page Issue**
- Azure DevOps browser opens as blank white page
- Requires closing and relaunching to get functional page
- Multiple attempts to fix (delayed show, immediate show, enhanced logging)
- Root cause not yet identified
- Workaround: Close and reopen ADO browser connection
- Needs diagnostic logging to identify root cause
3. **Limited Tool Support**
- Currently only one tool implemented: add_ado_comment
- Could add more tools: create_work_item, update_ticket_state, search_tickets
- Could add Confluence tools: create_page, update_page
- Could add ServiceNow tools: create_incident, assign_ticket
- Extensible architecture makes adding new tools straightforward
4. **No Search Result Caching**
- Every query searches all integrations
- Could cache results for repeated queries
- Would improve response time for common questions
5. **No Relevance Scoring**
- Returns top 3 results from each source
- No cross-platform relevance ranking
- Could implement scoring algorithm in future
6. **No Integration Search Audit**
- Integration queries not logged to audit table
- Only final AI interaction is audited
- Could add audit entries for transparency
7. **No Confluence Space Filtering**
- Searches all spaces
- `space_key` field in config not yet used
- Could restrict to specific spaces in future
8. **No ServiceNow Table Filtering**
- Searches all KB articles
- Could filter by category or state
- Could add configurable table names
9. **No Azure DevOps Area Path Filtering**
- Searches entire project
- Could filter by area path or iteration
- Could add configurable WIQL filters
## Dependencies
No new external dependencies added. Uses existing:
- `tokio` for async/parallel execution
- `reqwest` for HTTP requests
- `rusqlite` for database queries
- `urlencoding` for query encoding
- `serde_json` for API responses
## Documentation
This implementation is documented in:
- Code comments in all search modules
- Architecture section above
- CLAUDE.md project instructions
- Function-level documentation strings
## Rollback Plan
If issues are discovered:
1. **Disable Integration Search**
```rust
// In chat_message() function, comment out:
// let integration_context = search_integration_sources(...).await;
```
2. **Revert to Previous Behavior**
- AI will use only general knowledge
- No breaking changes to existing functionality
- All other features remain functional
3. **Clean Revert**
```bash
git revert <commit-hash>
cargo tauri build --debug
```

View File

@ -1,6 +1,6 @@
# AI Providers
TFTSR supports 5 AI providers, selectable per-session. API keys are stored in the Stronghold encrypted vault.
TFTSR supports 6+ AI providers, including custom providers with flexible authentication and API formats. API keys are stored encrypted with AES-256-GCM.
## Provider Factory
@ -55,13 +55,21 @@ Covers: OpenAI, Azure OpenAI, LM Studio, vLLM, **LiteLLM (AWS Bedrock)**, and an
|-------|-------|
| `config.name` | `"gemini"` |
| URL | `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent` |
| Auth | API key as `?key=` query parameter |
| Auth | `x-goog-api-key: <api_key>` header |
| Max tokens | 4096 |
**Models:** `gemini-2.0-flash`, `gemini-2.0-pro`, `gemini-1.5-pro`, `gemini-1.5-flash`
---
## Transport Security Notes
- Provider clients use TLS certificate verification via `reqwest`
- Provider calls are configured with explicit request timeouts to avoid indefinite hangs
- Credentials are sent in headers (not URL query strings)
---
### 4. Mistral AI
| Field | Value |
@ -113,6 +121,131 @@ The domain prompt is injected as the first `system` role message in every new co
---
## 6. Custom Provider (Custom REST & Others)
**Status:** ✅ **Implemented** (v0.2.6)
Custom providers allow integration with non-OpenAI-compatible APIs. The application supports two API formats:
### Format: OpenAI Compatible (Default)
Standard OpenAI `/chat/completions` endpoint with Bearer authentication.
| Field | Default Value |
|-------|--------------|
| `api_format` | `"openai"` |
| `custom_endpoint_path` | `/chat/completions` |
| `custom_auth_header` | `Authorization` |
| `custom_auth_prefix` | `Bearer ` |
**Use cases:**
- Self-hosted LLMs with OpenAI-compatible APIs
- Custom proxy services
- Enterprise gateways
---
### Format: Custom REST
**Motorola Solutions Internal GenAI Service** — Enterprise AI platform with centralized cost tracking and model access.
| Field | Value |
|-------|-------|
| `config.provider_type` | `"custom"` |
| `config.api_format` | `"custom_rest"` |
| API URL | `https://genai-service.commandcentral.com/app-gateway` (prod)<br>`https://genai-service.stage.commandcentral.com/app-gateway` (stage) |
| Auth Header | `x-msi-genai-api-key` |
| Auth Prefix | `` (empty - no Bearer prefix) |
| Endpoint Path | `` (empty - URL includes full path `/api/v2/chat`) |
**Available Models (dropdown in Settings):**
- `VertexGemini` — Gemini 2.0 Flash (Private/GCP)
- `Claude-Sonnet-4` — Claude Sonnet 4 (Public/Anthropic)
- `ChatGPT4o` — GPT-4o (Public/OpenAI)
- `ChatGPT-5_2-Chat` — GPT-4.5 (Public/OpenAI)
- Full list is sourced from [GenAI API User Guide](../GenAI%20API%20User%20Guide.md)
- Includes a `Custom model...` option to manually enter any model ID
**Request Format:**
```json
{
"model": "VertexGemini",
"prompt": "User's latest message",
"system": "Optional system prompt",
"sessionId": "uuid-for-conversation-continuity",
"userId": "user.name@motorolasolutions.com"
}
```
**Response Format:**
```json
{
"status": true,
"sessionId": "uuid",
"msg": "AI response text",
"initialPrompt": false
}
```
**Key Differences from OpenAI:**
- **Single prompt** instead of message array (server manages history via `sessionId`)
- **Response in `msg` field** instead of `choices[0].message.content`
- **Session-based** conversation continuity (no need to resend history)
- **Cost tracking** via `userId` field (optional — defaults to API key owner if omitted)
- **Custom client header**: `X-msi-genai-client: tftsr-devops-investigation`
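Putting those differences together, a request might be assembled like this (sketch only; assumes the full `/api/v2/chat` path is appended to the gateway URL, and the in-repo `chat_custom_rest` may differ):
```rust
// Minimal MSI GenAI chat call: single prompt, session-based history.
async fn msi_chat(api_key: &str, prompt: &str, session_id: &str) -> Result<String, reqwest::Error> {
    let body = serde_json::json!({
        "model": "VertexGemini",
        "prompt": prompt,
        "sessionId": session_id,                    // server-side history
        "userId": "user.name@motorolasolutions.com" // optional cost tracking
    });
    let resp: serde_json::Value = reqwest::Client::new()
        .post("https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat")
        .header("x-msi-genai-api-key", api_key) // no Bearer prefix
        .header("X-msi-genai-client", "tftsr-devops-investigation")
        .json(&body)
        .send()
        .await?
        .json()
        .await?;
    // Response text lives in `msg`, not `choices[0].message.content`.
    Ok(resp["msg"].as_str().unwrap_or_default().to_string())
}
```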
**Configuration (Settings → AI Providers → Add Provider):**
```
Name: Custom REST (MSI GenAI)
Type: Custom
API Format: Custom REST
API URL: https://genai-service.stage.commandcentral.com/app-gateway
Model: VertexGemini
API Key: (your MSI GenAI API key from portal)
User ID: your.name@motorolasolutions.com (optional)
Endpoint Path: (leave empty)
Auth Header: x-msi-genai-api-key
Auth Prefix: (leave empty)
```
**Rate Limits:**
- $50/user/month (enforced server-side)
- Per-API-key quotas available
**Troubleshooting:**
| Error | Cause | Solution |
|-------|-------|----------|
| 403 Forbidden | Invalid API key or insufficient permissions | Verify key in MSI GenAI portal, check model access |
| Missing `userId` field | Configuration not saved | Ensure UI shows User ID field when `api_format=custom_rest` |
| No conversation history | `sessionId` not persisted | Session ID stored in `ProviderConfig.session_id` — currently per-provider, not per-conversation |
**Implementation Details:**
- Backend: `src-tauri/src/ai/openai.rs::chat_custom_rest()`
- Schema: `src-tauri/src/state.rs::ProviderConfig` (added `user_id`, `api_format`, custom auth fields)
- Frontend: `src/pages/Settings/AIProviders.tsx` (conditional UI for Custom REST + model dropdown)
- CSP whitelist: `https://genai-service.stage.commandcentral.com` and production domain
---
## Custom Provider Configuration Fields
All providers support the following optional configuration fields (v0.2.6+):
| Field | Type | Purpose | Default |
|-------|------|---------|---------|
| `custom_endpoint_path` | `Option<String>` | Override endpoint path | `/chat/completions` |
| `custom_auth_header` | `Option<String>` | Custom auth header name | `Authorization` |
| `custom_auth_prefix` | `Option<String>` | Prefix before API key | `Bearer ` |
| `api_format` | `Option<String>` | API format (`openai` or `custom_rest`) | `openai` |
| `session_id` | `Option<String>` | Session ID for stateful APIs | None |
| `user_id` | `Option<String>` | User ID for cost tracking (Custom REST MSI contract) | None |
**Backward Compatibility:**
All fields are optional and default to OpenAI-compatible behavior. Existing provider configurations are unaffected.
---
## Adding a New Provider
1. Create `src-tauri/src/ai/{name}.rs` implementing the `Provider` trait

View File

@ -29,7 +29,7 @@ macOS runner runs jobs **directly on the host** (no Docker container) — macOS
## Test Pipeline (`.woodpecker/test.yml`)
**Triggers:** Every push and pull request to any branch.
**Triggers:** Pull requests only.
```
Pipeline steps:
@ -65,20 +65,28 @@ steps:
---
## Release Pipeline (`.gitea/workflows/release.yml`)
## Release Pipeline (`.gitea/workflows/auto-tag.yml`)
**Triggers:** Git tags matching `v*`
**Triggers:** Pushes to `master` (auto-tag), then release build/upload jobs run after `autotag`.
Auto tags are created by `.gitea/workflows/auto-tag.yml` using `git tag` + `git push`.
Release jobs are executed in the same workflow and depend on `autotag` completion.
```
Jobs (run in parallel):
build-linux-amd64 → cargo tauri build (x86_64-unknown-linux-gnu)
→ {.deb, .rpm, .AppImage} uploaded to Gitea release
→ fails fast if no Linux artifacts are produced
build-windows-amd64 → cargo tauri build (x86_64-pc-windows-gnu) via mingw-w64
→ {.exe, .msi} uploaded to Gitea release
→ fails fast if no Windows artifacts are produced
build-linux-arm64 → cargo tauri build (aarch64-unknown-linux-gnu)
build-linux-arm64 → Ubuntu 22.04 base (ports.ubuntu.com for arm64 packages)
→ cargo tauri build (aarch64-unknown-linux-gnu)
→ {.deb, .rpm, .AppImage} uploaded to Gitea release
→ fails fast if no Linux artifacts are produced
build-macos-arm64 → cargo tauri build (aarch64-apple-darwin) — runs on local Mac
→ {.dmg} uploaded to Gitea release
→ existing same-name assets are deleted before upload (rerun-safe)
→ unsigned; after install run: xattr -cr /Applications/TFTSR.app
```
@ -102,7 +110,7 @@ the repo directly within its commands (using `http://172.0.0.29:3000`, accessibl
the local machine) and uploads its artifacts inline. The `upload-release` step (amd64)
handles amd64 + windows artifacts only.
**Clone override (release.yml — amd64 workspace):**
**Clone override (auto-tag.yml — amd64 workspace):**
```yaml
clone:
@ -203,6 +211,18 @@ UPDATE protect_branch SET protected=true, require_pull_request=true WHERE repo_i
## Known Issues & Fixes
### Debian Multiarch Breaks arm64 Cross-Compile (`held broken packages`)
When using `rust:1.88-slim` (Debian Bookworm) with `dpkg --add-architecture arm64`, apt
resolves amd64 and arm64 simultaneously against the same mirror. The `binary-all` package
index is duplicated and certain `-dev` package pairs cannot be co-installed because they
don't declare `Multi-Arch: same`. This produces `E: Unable to correct problems, you have
held broken packages` and cannot be fixed by tweaking `sources.list` entries.
**Fix**: Use `ubuntu:22.04` as the container image. Ubuntu routes arm64 through
`ports.ubuntu.com/ubuntu-ports` — a separate mirror from `archive.ubuntu.com` (amd64).
There are no cross-arch index overlaps and the dependency resolver succeeds. Rust must be
installed manually via `rustup` since it is not pre-installed in the Ubuntu base image.
### Step Containers Cannot Reach `gitea_app`
Default Docker bridge containers cannot resolve `gitea_app` or reach `172.0.0.29:3000`
(host firewall). Fix: use `network_mode: gogs_default` in any step that needs Gitea

View File

@ -2,7 +2,7 @@
## Overview
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 10 versioned migrations are tracked in the `_migrations` table.
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 11 versioned migrations are tracked in the `_migrations` table.
**DB file location:** `{app_data_dir}/tftsr.db`
@ -38,7 +38,7 @@ pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
---
## Schema (10 Migrations)
## Schema (11 Migrations)
### 001 — issues
@ -181,6 +181,47 @@ CREATE VIRTUAL TABLE issues_fts USING fts5(
);
```
### 011 — credentials & integration_config (v0.2.3+)
**Integration credentials table:**
```sql
CREATE TABLE credentials (
    id TEXT PRIMARY KEY,
    service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
    token_hash TEXT NOT NULL,      -- SHA-256 hash for audit
    encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
    created_at TEXT NOT NULL,
    expires_at TEXT,
    UNIQUE(service)
);
```
**Integration configuration table:**
```sql
CREATE TABLE integration_config (
    id TEXT PRIMARY KEY,
    service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
    base_url TEXT NOT NULL,
    username TEXT,     -- ServiceNow only
    project_name TEXT, -- Azure DevOps only
    space_key TEXT,    -- Confluence only
    auto_create_enabled INTEGER NOT NULL DEFAULT 0,
    updated_at TEXT NOT NULL,
    UNIQUE(service)
);
```
**Encryption:**
- OAuth2 tokens encrypted with AES-256-GCM
- Key derived from `TFTSR_DB_KEY` environment variable
- Random 96-bit nonce per encryption
- Format: `base64(nonce || ciphertext || tag)`
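A minimal sketch of that scheme using the `aes-gcm` crate, assuming a 32-byte key already derived from the environment variable (the in-repo implementation may differ in error handling):
```rust
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm, Key,
};
use base64::{engine::general_purpose::STANDARD as B64, Engine};

fn encrypt_token(key_bytes: &[u8; 32], plaintext: &[u8]) -> Result<String, String> {
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(key_bytes));
    // Random 96-bit nonce per encryption
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    // `encrypt` appends the 16-byte GCM tag to the ciphertext
    let ciphertext = cipher
        .encrypt(&nonce, plaintext)
        .map_err(|e| e.to_string())?;
    // Stored format: base64(nonce || ciphertext || tag)
    let mut blob = nonce.to_vec();
    blob.extend_from_slice(&ciphertext);
    Ok(B64.encode(blob))
}
```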
**Usage:**
- OAuth2 flows (Confluence, Azure DevOps): Store encrypted bearer token
- Basic auth (ServiceNow): Store encrypted password
- One credential per service (enforced by UNIQUE constraint)
---
## Key Design Notes

View File

@ -35,7 +35,8 @@ npm install --legacy-peer-deps
| Variable | Default | Purpose |
|----------|---------|---------|
| `TFTSR_DATA_DIR` | Platform data dir | Override DB location |
| `TFTSR_DB_KEY` | `dev-key-change-in-prod` | DB encryption key (required in production) |
| `TFTSR_DB_KEY` | _(none)_ | DB encryption key (required in release builds) |
| `TFTSR_ENCRYPTION_KEY` | _(none)_ | Credential encryption key (required in release builds) |
| `RUST_LOG` | `info` | Tracing verbosity: `debug`, `info`, `warn`, `error` |
Application data is stored at:
@ -120,7 +121,7 @@ cargo tauri build
# Outputs: .deb, .rpm, .AppImage (Linux)
```
Release builds enable **SQLCipher AES-256** encryption. Set `TFTSR_DB_KEY` before building.
Release builds enforce secure key configuration. Set both `TFTSR_DB_KEY` and `TFTSR_ENCRYPTION_KEY` before building.
---

View File

@ -1,6 +1,6 @@
# TFTSR — IT Triage & RCA Desktop Application
# Troubleshooting and RCA Assistant
**TFTSR** is a secure desktop application for guided IT incident triage, root cause analysis (RCA), and post-mortem documentation. Built with Tauri 2.x (Rust + WebView) and React 18.
**Troubleshooting and RCA Assistant** is a secure desktop application for guided IT incident triage, root cause analysis (RCA), and post-mortem documentation. Built with Tauri 2.x (Rust + WebView) and React 18.
**CI:** ![build](http://172.0.0.29:3000/sarman/tftsr-devops_investigation/actions/workflows/test.yml/badge.svg) — rustfmt · clippy · 64 Rust tests · tsc · vitest — all green
@ -24,8 +24,10 @@
- **5-Whys AI Triage** — Interactive guided root cause analysis via multi-turn AI chat
- **PII Auto-Redaction** — Detects and redacts sensitive data before any AI send
- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), local Ollama (fully offline)
- **SQLCipher AES-256** — All issue history encrypted at rest
- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), MSI GenAI (Motorola internal), local Ollama (fully offline)
- **Custom Provider Support** — Flexible authentication (Bearer, custom headers) and API formats (OpenAI-compatible, Custom REST)
- **External Integrations** — Confluence, ServiceNow, Azure DevOps with OAuth2 PKCE flows
- **SQLCipher AES-256** — All issue history and credentials encrypted at rest
- **RCA + Post-Mortem Generation** — Auto-populated Markdown templates, exportable as MD/PDF
- **Ollama Management** — Hardware detection, model recommendations, in-app model management
- **Audit Trail** — Every external data send logged with SHA-256 hash
@ -33,9 +35,13 @@
## Releases
| Version | Status | Platforms |
| Version | Status | Highlights |
|---------|--------|-----------|
| v0.1.1 | 🚀 Released | linux/amd64 · linux/arm64 · windows/amd64 (.deb, .rpm, .AppImage, .exe, .msi) |
| v0.2.6 | 🚀 Latest | MSI GenAI support, OAuth2 shell permissions, user ID tracking |
| v0.2.3 | Released | Confluence/ServiceNow/ADO REST API clients (19 TDD tests) |
| v0.1.1 | Released | Core application with PII detection, RCA generation |
**Platforms:** linux/amd64 · linux/arm64 · windows/amd64 (.deb, .rpm, .AppImage, .exe, .msi)
Download from [Releases](https://gogs.tftsr.com/sarman/tftsr-devops_investigation/releases). All builds are produced natively (no QEMU emulation).
@ -45,7 +51,7 @@ Download from [Releases](https://gogs.tftsr.com/sarman/tftsr-devops_investigatio
|-------|--------|
| Phases 1–8 (Core application) | ✅ Complete |
| Phase 9 (History/Search) | 🔲 Pending |
| Phase 10 (Integrations) | 🕐 v0.2 stubs only |
| Phase 10 (Integrations) | ✅ Complete — Confluence, ServiceNow, Azure DevOps fully implemented with OAuth2 |
| Phase 11 (CI/CD) | ✅ Complete — Gitea Actions fully operational |
| Phase 12 (Release packaging) | ✅ linux/amd64 · linux/arm64 (native) · windows/amd64 |

View File

@ -220,15 +220,206 @@ Returns audit log entries. Filter by action, entity_type, date range.
---
## Integration Commands (v0.2 Stubs)
## Integration Commands
All 6 integration commands currently return `"not yet available"` errors.
> **Status:** **Fully Implemented** (v0.2.3+)
| Command | Purpose |
|---------|---------|
| `test_confluence_connection` | Verify Confluence credentials |
| `publish_to_confluence` | Publish RCA/postmortem to Confluence space |
| `test_servicenow_connection` | Verify ServiceNow credentials |
| `create_servicenow_incident` | Create incident from issue |
| `test_azuredevops_connection` | Verify Azure DevOps credentials |
| `create_azuredevops_workitem` | Create work item from issue |
All integration commands are production-ready with complete OAuth2/authentication flows.
### OAuth2 Commands
### `initiate_oauth`
```typescript
initiateOauthCmd(service: "confluence" | "servicenow" | "azuredevops") → OAuthInitResponse
```
Starts OAuth2 PKCE flow. Returns authorization URL and state key. Opens browser window for user authentication.
```typescript
interface OAuthInitResponse {
auth_url: string; // URL to open in browser
state: string; // State key for callback verification
}
```
**Flow:**
1. Generates PKCE challenge
2. Starts local callback server on `http://localhost:8765`
3. Opens authorization URL in browser
4. User authenticates with service
5. Service redirects to callback server
6. Callback server triggers `handle_oauth_callback`
### `handle_oauth_callback`
```typescript
handleOauthCallbackCmd(service: string, code: string, stateKey: string) → void
```
Exchanges authorization code for access token. Encrypts token with AES-256-GCM and stores in database.
### Confluence Commands
### `test_confluence_connection`
```typescript
testConfluenceConnectionCmd(baseUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies Confluence connection by calling `/rest/api/user/current`.
### `list_confluence_spaces`
```typescript
listConfluenceSpacesCmd(config: ConfluenceConfig) → Space[]
```
Lists all accessible Confluence spaces.
### `search_confluence_pages`
```typescript
searchConfluencePagesCmd(config: ConfluenceConfig, query: string, spaceKey?: string) → Page[]
```
Searches pages using CQL (Confluence Query Language). Optional space filter.
### `publish_to_confluence`
```typescript
publishToConfluenceCmd(config: ConfluenceConfig, spaceKey: string, title: string, contentHtml: string, parentPageId?: string) → PublishResult
```
Creates a new page in Confluence. Returns page ID and URL.
### `update_confluence_page`
```typescript
updateConfluencePageCmd(config: ConfluenceConfig, pageId: string, title: string, contentHtml: string, version: number) → PublishResult
```
Updates an existing page. Requires current version number.
### ServiceNow Commands
### `test_servicenow_connection`
```typescript
testServiceNowConnectionCmd(instanceUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies ServiceNow connection by querying incident table.
### `search_servicenow_incidents`
```typescript
searchServiceNowIncidentsCmd(config: ServiceNowConfig, query: string) → Incident[]
```
Searches incidents by short description. Returns up to 10 results.
### `create_servicenow_incident`
```typescript
createServiceNowIncidentCmd(config: ServiceNowConfig, shortDesc: string, description: string, urgency: string, impact: string) → TicketResult
```
Creates a new incident. Returns incident number and URL.
```typescript
interface TicketResult {
id: string; // sys_id (UUID)
ticket_number: string; // INC0010001
url: string; // Direct link to incident
}
```
### `get_servicenow_incident`
```typescript
getServiceNowIncidentCmd(config: ServiceNowConfig, incidentId: string) → Incident
```
Retrieves incident by sys_id or incident number (e.g., `INC0010001`).
### `update_servicenow_incident`
```typescript
updateServiceNowIncidentCmd(config: ServiceNowConfig, sysId: string, updates: Record<string, any>) → TicketResult
```
Updates incident fields. Uses JSON-PATCH format.
### Azure DevOps Commands
### `test_azuredevops_connection`
```typescript
testAzureDevOpsConnectionCmd(orgUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies Azure DevOps connection by querying project info.
### `search_azuredevops_workitems`
```typescript
searchAzureDevOpsWorkItemsCmd(config: AzureDevOpsConfig, query: string) → WorkItem[]
```
Searches work items using WIQL (Work Item Query Language).
### `create_azuredevops_workitem`
```typescript
createAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, title: string, description: string, workItemType: string, severity: string) → TicketResult
```
Creates a work item (Bug, Task, User Story). Returns work item ID and URL.
**Work Item Types:**
- `Bug` — Software defect
- `Task` — Work assignment
- `User Story` — Feature request
- `Issue` — Problem or blocker
- `Incident` — Production incident
### `get_azuredevops_workitem`
```typescript
getAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, workItemId: number) → WorkItem
```
Retrieves work item by ID.
### `update_azuredevops_workitem`
```typescript
updateAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, workItemId: number, updates: Record<string, any>) → TicketResult
```
Updates work item fields. Uses JSON-PATCH format.
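Both ADO update commands ultimately send a JSON-PATCH array. A sketch of that translation (the helper name is illustrative; the public ADO work-item API treats `add` as create-or-replace and expects `Content-Type: application/json-patch+json`):
```rust
use serde_json::{json, Map, Value};

/// Turn {"System.Title": "...", ...} into an ADO JSON-PATCH document.
/// Hypothetical helper; the real client may build this inline.
fn to_json_patch(updates: &Map<String, Value>) -> Value {
    let ops: Vec<Value> = updates
        .iter()
        .map(|(field, value)| {
            json!({
                "op": "add", // "add" creates or replaces the field in ADO
                "path": format!("/fields/{field}"),
                "value": value,
            })
        })
        .collect();
    Value::Array(ops)
}
```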
---
## Common Types
### `ConnectionResult`
```typescript
interface ConnectionResult {
success: boolean;
message: string;
}
```
### `PublishResult`
```typescript
interface PublishResult {
id: string; // Page ID or document ID
url: string; // Direct link to published content
}
```
### `TicketResult`
```typescript
interface TicketResult {
id: string; // sys_id or work item ID
ticket_number: string; // Human-readable number
url: string; // Direct link
}
```
---
## Authentication Storage
All integration credentials are stored in the `credentials` table:
```sql
CREATE TABLE credentials (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
token_hash TEXT NOT NULL, -- SHA-256 for audit
encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
created_at TEXT NOT NULL,
expires_at TEXT
);
```
**Encryption:**
- Algorithm: AES-256-GCM
- Key derivation: From `TFTSR_ENCRYPTION_KEY` environment variable
- Nonce: Random 96-bit per encryption
- Format: `base64(nonce || ciphertext || tag)`
**Token retrieval:**
```rust
// Backend: src-tauri/src/integrations/auth.rs
pub fn decrypt_token(encrypted: &str) -> Result<String, String>
```
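A minimal sketch of that decryption under the documented format (96-bit nonce prefix, SHA-256 key derivation per the Security page); the real helper's error handling may differ:
```rust
use aes_gcm::aead::{Aead, KeyInit};
use aes_gcm::{Aes256Gcm, Key, Nonce};
use base64::{engine::general_purpose::STANDARD, Engine as _};
use sha2::{Digest, Sha256};

pub fn decrypt_token(encrypted: &str) -> Result<String, String> {
    // Derive the fixed 32-byte AES key from the configured key material
    let key_material = std::env::var("TFTSR_ENCRYPTION_KEY")
        .map_err(|_| "TFTSR_ENCRYPTION_KEY not set".to_string())?;
    let key_bytes = Sha256::digest(key_material.as_bytes());
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(key_bytes.as_slice()));

    // Layout is base64(nonce || ciphertext || tag); the aead API treats
    // the trailing tag as part of the ciphertext
    let raw = STANDARD.decode(encrypted).map_err(|e| e.to_string())?;
    if raw.len() < 12 {
        return Err("ciphertext too short".into());
    }
    let (nonce, ciphertext) = raw.split_at(12); // 96-bit nonce
    let plain = cipher
        .decrypt(Nonce::from_slice(nonce), ciphertext)
        .map_err(|_| "decryption failed".to_string())?;
    String::from_utf8(plain).map_err(|e| e.to_string())
}
```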

View File

@ -1,97 +1,273 @@
# Integrations
> **Status: All integrations are v0.2 stubs.** They are implemented as placeholder commands that return `"not yet available"` errors. The authentication framework and command signatures are finalized, but the actual API calls are not yet implemented.
> **Status: ✅ Fully Implemented (v0.2.6)** — All three integrations (Confluence, ServiceNow, Azure DevOps) are production-ready with complete OAuth2/authentication flows and REST API clients.
---
## Confluence
**Purpose:** Publish RCA and post-mortem documents to a Confluence space.
**Purpose:** Publish RCA and post-mortem documents to Confluence spaces.
**Commands:**
- `test_confluence_connection(base_url, credentials)` — Verify credentials
- `publish_to_confluence(doc_id, space_key, parent_page_id?)` — Create/update page
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- Confluence REST API v2: `POST /wiki/rest/api/content`
- Auth: Basic auth (email + API token) or OAuth2
- Page format: Convert Markdown → Confluence storage format (XHTML-like)
### Features
- OAuth2 authentication with PKCE flow
- List accessible spaces
- Search pages by CQL query
- Create new pages with optional parent
- Update existing pages with version management
**Configuration (Settings → Integrations → Confluence):**
### API Client (`src-tauri/src/integrations/confluence.rs`)
**Functions:**
```rust
test_connection(config: &ConfluenceConfig) -> Result<ConnectionResult, String>
list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String>
search_pages(config: &ConfluenceConfig, query: &str, space_key: Option<&str>) -> Result<Vec<Page>, String>
publish_page(config: &ConfluenceConfig, space_key: &str, title: &str, content_html: &str, parent_page_id: Option<&str>) -> Result<PublishResult, String>
update_page(config: &ConfluenceConfig, page_id: &str, title: &str, content_html: &str, version: i32) -> Result<PublishResult, String>
```
Base URL: https://yourorg.atlassian.net
Email: user@example.com
API Token: (stored in Stronghold)
Space Key: PROJ
### Configuration (Settings → Integrations → Confluence)
```
Base URL: https://yourorg.atlassian.net
Authentication: OAuth2 (bearer token, encrypted at rest)
Default Space: PROJ
```
### Implementation Details
- **API**: Confluence REST API v1 (`/rest/api/`)
- **Auth**: OAuth2 bearer token (encrypted with AES-256-GCM)
- **Endpoints**:
- `GET /rest/api/user/current` — Test connection
- `GET /rest/api/space` — List spaces
- `GET /rest/api/content/search` — Search with CQL
- `POST /rest/api/content` — Create page
- `PUT /rest/api/content/{id}` — Update page
- **Page format**: Confluence Storage Format (XHTML)
- **TDD Tests**: 6 tests with mockito HTTP mocking
---
## ServiceNow
**Purpose:** Create incident records in ServiceNow from TFTSR issues.
**Purpose:** Create and manage incident records in ServiceNow.
**Commands:**
- `test_servicenow_connection(instance_url, credentials)` — Verify credentials
- `create_servicenow_incident(issue_id, config)` — Create incident
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- ServiceNow Table API: `POST /api/now/table/incident`
- Auth: Basic auth or OAuth2 bearer token
- Field mapping: TFTSR severity → ServiceNow priority (P1=Critical, P2=High, etc.)
### Features
- Basic authentication (username/password)
- Search incidents by description
- Create new incidents with urgency/impact
- Get incident by sys_id or number
- Update existing incidents
**Configuration:**
### API Client (`src-tauri/src/integrations/servicenow.rs`)
**Functions:**
```rust
test_connection(config: &ServiceNowConfig) -> Result<ConnectionResult, String>
search_incidents(config: &ServiceNowConfig, query: &str) -> Result<Vec<Incident>, String>
create_incident(config: &ServiceNowConfig, short_description: &str, description: &str, urgency: &str, impact: &str) -> Result<TicketResult, String>
get_incident(config: &ServiceNowConfig, incident_id: &str) -> Result<Incident, String>
update_incident(config: &ServiceNowConfig, sys_id: &str, updates: serde_json::Value) -> Result<TicketResult, String>
```
Instance URL: https://yourorg.service-now.com
Username: admin
Password: (stored in Stronghold)
### Configuration (Settings → Integrations → ServiceNow)
```
Instance URL: https://yourorg.service-now.com
Username: admin
Password: (encrypted with AES-256-GCM)
```
### Implementation Details
- **API**: ServiceNow Table API (`/api/now/table/incident`)
- **Auth**: HTTP Basic authentication
- **Severity mapping**: TFTSR P1-P4 → ServiceNow urgency/impact (1-3) — see the sketch after this list
- **Incident lookup**: Supports both sys_id (UUID) and incident number (INC0010001)
- **TDD Tests**: 7 tests with mockito HTTP mocking
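One plausible shape of that severity mapping, as a sketch (the exact pairs live in `servicenow.rs` and may differ):
```rust
/// Map TFTSR severity (P1-P4) onto ServiceNow urgency/impact (1 = highest).
/// Illustrative values only; not necessarily the shipped table.
fn severity_to_urgency_impact(severity: &str) -> (&'static str, &'static str) {
    match severity {
        "P1" => ("1", "1"), // critical outage
        "P2" => ("2", "1"),
        "P3" => ("2", "2"),
        _ => ("3", "3"), // P4 or unknown
    }
}
```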
---
## Azure DevOps
**Purpose:** Create work items (bugs/incidents) in Azure DevOps from TFTSR issues.
**Purpose:** Create and manage work items (bugs/tasks) in Azure DevOps.
**Commands:**
- `test_azuredevops_connection(org_url, credentials)` — Verify credentials
- `create_azuredevops_workitem(issue_id, project, config)` — Create work item
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- Azure DevOps REST API: `POST /{organization}/{project}/_apis/wit/workitems/${type}`
- Auth: Personal Access Token (PAT) via Basic auth header
- Work item type: Bug or Incident
### Features
- OAuth2 authentication with PKCE flow
- Search work items via WIQL queries
- Create work items (Bug, Task, User Story)
- Get work item details by ID
- Update work items with JSON-PATCH operations
**Configuration:**
### API Client (`src-tauri/src/integrations/azuredevops.rs`)
**Functions:**
```rust
test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionResult, String>
search_work_items(config: &AzureDevOpsConfig, query: &str) -> Result<Vec<WorkItem>, String>
create_work_item(config: &AzureDevOpsConfig, title: &str, description: &str, work_item_type: &str, severity: &str) -> Result<TicketResult, String>
get_work_item(config: &AzureDevOpsConfig, work_item_id: i64) -> Result<WorkItem, String>
update_work_item(config: &AzureDevOpsConfig, work_item_id: i64, updates: serde_json::Value) -> Result<TicketResult, String>
```
### Configuration (Settings → Integrations → Azure DevOps)
```
Organization URL: https://dev.azure.com/yourorg
Personal Access Token: (stored in Stronghold)
Authentication: OAuth2 (bearer token, encrypted at rest)
Project: MyProject
Work Item Type: Bug
```
### Implementation Details
- **API**: Azure DevOps REST API v7.0
- **Auth**: OAuth2 bearer token (encrypted with AES-256-GCM)
- **WIQL**: Work Item Query Language for advanced search (sketch below)
- **Work item types**: Bug, Task, User Story, Issue, Incident
- **Severity mapping**: Bug-specific field `Microsoft.VSTS.Common.Severity`
- **TDD Tests**: 6 tests with mockito HTTP mocking
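The search path boils down to POSTing a WIQL body to `_apis/wit/wiql`. A sketch, with the field list and escaping as assumptions about the shipped query:
```rust
use serde_json::{json, Value};

/// Build the WIQL request body for a title search.
/// Sent as POST {org}/{project}/_apis/wit/wiql?api-version=7.0
fn build_wiql_body(query: &str) -> Value {
    // WIQL is SQL-like; single quotes in user input must be doubled
    let escaped = query.replace('\'', "''");
    json!({
        "query": format!(
            "SELECT [System.Id], [System.Title], [System.State] \
             FROM WorkItems \
             WHERE [System.Title] CONTAINS '{escaped}' \
             ORDER BY [System.ChangedDate] DESC"
        )
    })
}
```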
---
## OAuth2 Authentication Flow
All integrations using OAuth2 (Confluence, Azure DevOps) follow the same flow:
1. **User clicks "Connect"** in Settings → Integrations
2. **Backend generates PKCE challenge** and stores code verifier (sketched after this list)
3. **Local callback server starts** on `http://localhost:8765`
4. **Browser opens** with OAuth authorization URL
5. **User authenticates** with service provider
6. **Service redirects** to `http://localhost:8765/callback?code=...`
7. **Callback server extracts code** and triggers token exchange
8. **Backend exchanges code for token** using PKCE verifier
9. **Token encrypted** with AES-256-GCM and stored in DB
10. **UI shows "Connected"** status
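Step 2 in sketch form, assuming the standard RFC 7636 S256 method (the function name is illustrative, not the exact `auth.rs` API):
```rust
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
use rand::RngCore;
use sha2::{Digest, Sha256};

/// Returns (code_verifier, code_challenge) for an S256 PKCE flow.
fn generate_pkce_pair() -> (String, String) {
    // 32 random bytes -> 43-char base64url verifier, within RFC 7636 bounds
    let mut bytes = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut bytes);
    let verifier = URL_SAFE_NO_PAD.encode(bytes);

    // challenge = BASE64URL(SHA256(verifier)); the challenge goes into the
    // authorize URL, the verifier is only revealed in the token exchange
    let challenge = URL_SAFE_NO_PAD.encode(Sha256::digest(verifier.as_bytes()));
    (verifier, challenge)
}
```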
**Implementation:**
- `src-tauri/src/integrations/auth.rs` — PKCE generation, token exchange, encryption
- `src-tauri/src/integrations/callback_server.rs` — Local HTTP server (warp)
- `src-tauri/src/commands/integrations.rs` — IPC command handlers
**Security:**
- Tokens encrypted at rest with AES-256-GCM (256-bit key)
- Key derived from environment variable `TFTSR_ENCRYPTION_KEY`
- PKCE prevents authorization code interception
- Callback server only accepts from `localhost`
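A sketch of the localhost-only callback route with warp; the real `callback_server.rs` hands the code to the token exchange rather than just echoing it:
```rust
use std::collections::HashMap;
use warp::Filter;

#[tokio::main]
async fn main() {
    // GET /callback?code=...&state=... arrives as a query map
    let callback = warp::path("callback")
        .and(warp::query::<HashMap<String, String>>())
        .map(|params: HashMap<String, String>| {
            let code = params.get("code").cloned().unwrap_or_default();
            // In the app this code is forwarded to handle_oauth_callback
            format!("Received authorization code ({} chars); you can close this tab.", code.len())
        });

    // Binding to 127.0.0.1 rather than 0.0.0.0 is what keeps the
    // server unreachable from other hosts
    warp::serve(callback).run(([127, 0, 0, 1], 8765)).await;
}
```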
---
## Database Schema
**Credentials Table (`migration 011`):**
```sql
CREATE TABLE credentials (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
token_hash TEXT NOT NULL, -- SHA-256 hash for audit
encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
created_at TEXT NOT NULL,
expires_at TEXT,
UNIQUE(service)
);
```
**Integration Config Table:**
```sql
CREATE TABLE integration_config (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
base_url TEXT NOT NULL,
username TEXT, -- ServiceNow only
project_name TEXT, -- Azure DevOps only
space_key TEXT, -- Confluence only
auto_create_enabled INTEGER NOT NULL DEFAULT 0,
updated_at TEXT NOT NULL,
UNIQUE(service)
);
```
---
## v0.2 Roadmap
## Testing
Integration implementation order (planned):
1. **Confluence** — Most commonly requested; Markdown-to-Confluence conversion library needed
2. **Azure DevOps** — Clean REST API, straightforward PAT auth
3. **ServiceNow** — More complex field mapping; may require customer-specific configuration
Each integration will also require:
- Audit log entry on every publish action
- PII check on document content before external publish
- Connection test UI in Settings → Integrations
All integrations have comprehensive test coverage:
```bash
# Run all integration tests
cargo test --manifest-path src-tauri/Cargo.toml --lib integrations

# Run specific integration tests
cargo test --manifest-path src-tauri/Cargo.toml confluence
cargo test --manifest-path src-tauri/Cargo.toml servicenow
cargo test --manifest-path src-tauri/Cargo.toml azuredevops
```
**Test statistics:**
- **Confluence**: 6 tests (connection, spaces, search, publish, update)
- **ServiceNow**: 7 tests (connection, search, create, get by sys_id, get by number, update)
- **Azure DevOps**: 6 tests (connection, WIQL search, create, get, update)
- **Total**: 19 integration tests (all passing)
**Test approach:**
- TDD methodology (tests written first)
- HTTP mocking with `mockito` crate
- No external API calls in tests
- All auth flows tested with mock responses
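The shape of those tests, sketched against mockito 1.x; `make_test_config` is a hypothetical helper standing in for however the real tests construct `ConfluenceConfig`:
```rust
#[tokio::test]
async fn test_connection_succeeds_against_mock() {
    // Local HTTP server; no real Confluence instance is contacted
    let mut server = mockito::Server::new_async().await;
    let mock = server
        .mock("GET", "/rest/api/user/current")
        .with_status(200)
        .with_header("content-type", "application/json")
        .with_body(r#"{"displayName":"Test User"}"#)
        .create_async()
        .await;

    // Point the client at the mock server instead of *.atlassian.net
    let config = make_test_config(&server.url());
    let result = test_connection(&config).await.unwrap();

    assert!(result.success);
    mock.assert_async().await; // the endpoint was actually hit
}
```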
---
## Adding an Integration
## CSP Configuration
1. Implement the logic in `src-tauri/src/integrations/{name}.rs`
2. Remove the stub `Err("not yet available")` return in `commands/integrations.rs`
3. Add the new API endpoint to the Tauri CSP `connect-src`
4. Add Stronghold secret key for the API credentials
5. Wire up the Settings UI in `src/pages/Settings/Integrations.tsx`
6. Add audit log call before the external API request
All integration domains are whitelisted in `src-tauri/tauri.conf.json`:
```json
"connect-src": "... https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com"
```
---
## Adding a New Integration
1. **Create API client**: `src-tauri/src/integrations/{name}.rs`
2. **Implement functions**: `test_connection()`, create/read/update operations
3. **Add TDD tests**: Use `mockito` for HTTP mocking
4. **Update migration**: Add service to `credentials` and `integration_config` CHECK constraints
5. **Add IPC commands**: `src-tauri/src/commands/integrations.rs`
6. **Update CSP**: Add API domains to `tauri.conf.json`
7. **Wire up UI**: `src/pages/Settings/Integrations.tsx`
8. **Update capabilities**: Add any required Tauri permissions
9. **Document**: Update this wiki page
---
## Troubleshooting
### OAuth "Command plugin:shell|open not allowed"
**Fix**: Add `"shell:allow-open"` to `src-tauri/capabilities/default.json`
### Token Exchange Fails
**Check**:
1. PKCE verifier matches challenge
2. Redirect URI exactly matches registered callback
3. Authorization code hasn't expired
4. Client ID/secret are correct
### ServiceNow 401 Unauthorized
**Check**:
1. Username/password are correct
2. User has API access enabled
3. Instance URL is correct (no trailing slash)
### Confluence API 404
**Check**:
1. Base URL format: `https://yourorg.atlassian.net` (no `/wiki/`)
2. Space key exists and user has access
3. OAuth token has required scopes (`read:confluence-content.all`, `write:confluence-content`)
### Azure DevOps 403 Forbidden
**Check**:
1. OAuth token has required scopes (`vso.work_write`)
2. User has permissions in the project
3. Project name is case-sensitive

View File

@ -10,7 +10,7 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
1. Upload log file
2. detect_pii(log_file_id)
→ Scans content with 13 regex patterns
→ Scans content with PII regex patterns (including hostname + expanded card brands)
→ Resolves overlapping matches (longest wins)
→ Returns Vec<PiiSpan> with byte offsets + replacements
@ -24,7 +24,7 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
5. Redacted text safe to send to AI
```
## Detection Patterns (13 Types)
## Detection Patterns
| Type | Replacement | Pattern notes |
|------|-------------|---------------|
@ -33,13 +33,13 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
| `ApiKey` | `[ApiKey]` | `api_key=`, `apikey=`, `access_token=` + 16+ char value |
| `Password` | `[Password]` | `password=`, `passwd=`, `pwd=` + non-whitespace value |
| `Ssn` | `[SSN]` | `\b\d{3}-\d{2}-\d{4}\b` |
| `CreditCard` | `[CreditCard]` | Visa/MC/Amex Luhn-format numbers |
| `CreditCard` | `[CreditCard]` | Visa/MC/Amex/Discover/JCB/Diners patterns |
| `Email` | `[Email]` | RFC-compliant email addresses |
| `MacAddress` | `[MAC]` | `XX:XX:XX:XX:XX:XX` and `XX-XX-XX-XX-XX-XX` |
| `Ipv6` | `[IPv6]` | Full and compressed IPv6 addresses |
| `Ipv4` | `[IPv4]` | Standard dotted-quad notation |
| `PhoneNumber` | `[Phone]` | US and international phone formats |
| `Hostname` | _(patterns.rs)_ | Configurable hostname patterns |
| `Hostname` | `[Hostname]` | FQDN/hostname detection for internal names |
| `UrlCredentials` | _(covered by UrlWithCredentials)_ | |
## Overlap Resolution
@ -71,7 +71,7 @@ pub struct PiiSpan {
pub pii_type: PiiType,
pub start: usize, // byte offset in original text
pub end: usize,
pub original_value: String,
pub original: String,
pub replacement: String, // e.g., "[IPv4]"
}
```
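Because spans carry byte offsets into the original text, applying them from the end keeps earlier offsets valid. A sketch (spans are assumed non-overlapping after resolution):
```rust
fn apply_redactions(text: &str, mut spans: Vec<PiiSpan>) -> String {
    // Sort by start offset, then replace right-to-left so earlier byte
    // offsets are not shifted by length changes
    spans.sort_by_key(|s| s.start);
    let mut out = text.to_string();
    for span in spans.iter().rev() {
        out.replace_range(span.start..span.end, &span.replacement);
    }
    out
}
```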
@ -111,3 +111,4 @@ write_audit_event(
- Only the redacted text is sent to AI providers
- The SHA-256 hash in the audit log allows integrity verification
- If redaction is skipped (no PII detected), the audit log still records the send
- Stored `pii_spans.original_value` metadata is cleared after redaction is finalized

View File

@ -18,20 +18,25 @@ Production builds use SQLCipher:
- **Cipher:** AES-256-CBC
- **KDF:** PBKDF2-HMAC-SHA512, 256,000 iterations
- **HMAC:** HMAC-SHA512
- **Page size:** 4096 bytes
- **Page size:** 16384 bytes
- **Key source:** `TFTSR_DB_KEY` environment variable
Debug builds use plain SQLite (no encryption) for developer convenience.
> ⚠️ **Never** use the default key (`dev-key-change-in-prod`) in a production environment.
Release builds now fail startup if `TFTSR_DB_KEY` is missing or empty.
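Roughly what connection setup has to issue, sketched with rusqlite (assumes a SQLCipher-enabled build; the real `db::connection` code may order or wrap these differently):
```rust
use rusqlite::Connection;

fn open_encrypted(path: &str) -> rusqlite::Result<Connection> {
    // Release builds refuse to start without this variable
    let key = std::env::var("TFTSR_DB_KEY").expect("TFTSR_DB_KEY must be set");
    let conn = Connection::open(path)?;
    // PRAGMA key must run before anything else touches the database
    conn.pragma_update(None, "key", &key)?;
    conn.pragma_update(None, "cipher_page_size", 16384)?;
    conn.pragma_update(None, "kdf_iter", 256_000)?;
    conn.pragma_update(None, "cipher_hmac_algorithm", "HMAC_SHA512")?;
    conn.pragma_update(None, "cipher_kdf_algorithm", "PBKDF2_HMAC_SHA512")?;
    // Forces key verification: a wrong key fails with "file is not a database"
    conn.query_row("SELECT count(*) FROM sqlite_master", [], |_| Ok(()))?;
    Ok(conn)
}
```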
---
## API Key Storage (Stronghold)
## Credential Encryption
AI provider API keys are stored in `tauri-plugin-stronghold` — an encrypted vault backed by the [IOTA Stronghold](https://github.com/iotaledger/stronghold.rs) library.
Integration tokens are encrypted with AES-256-GCM before persistence:
- **Key source:** `TFTSR_ENCRYPTION_KEY` (required in release builds)
- **Key derivation:** SHA-256 hash of key material to a fixed 32-byte AES key
- **Nonce:** Cryptographically secure random nonce per encryption
The vault is initialized with a password-derived key using Argon2. API keys are never written to disk in plaintext or to the SQLite database.
Release builds fail secure operations if `TFTSR_ENCRYPTION_KEY` is unset or empty.
The Stronghold plugin remains enabled and now uses a per-installation salt derived from the app data directory path hash instead of a fixed static salt.
---
@ -46,6 +51,7 @@ log file → detect_pii() → user approves spans → apply_redactions() → AI
- Original text **never leaves the machine**
- Only the redacted version is transmitted
- The SHA-256 hash of the redacted text is recorded in the audit log for integrity verification
- `pii_spans.original_value` is cleared after redaction to avoid retaining raw detected secrets in storage
- See [PII Detection](PII-Detection) for the full list of detected patterns
---
@ -66,6 +72,14 @@ write_audit_event(
The audit log is stored in the encrypted SQLite database. It cannot be deleted through the UI.
### Tamper Evidence
`audit_log` entries now include:
- `prev_hash` — hash of the previous audit entry
- `entry_hash` — SHA-256 hash of current entry payload + `prev_hash`
This creates a hash chain and makes post-hoc modification detectable.
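Link verification is then a linear walk; a sketch (full verification would also recompute each `entry_hash` from its payload):
```rust
use rusqlite::Connection;

/// Check that every entry's prev_hash matches the previous entry_hash.
fn verify_audit_chain(conn: &Connection) -> Result<(), String> {
    let mut stmt = conn
        .prepare("SELECT prev_hash, entry_hash FROM audit_log ORDER BY timestamp ASC, id ASC")
        .map_err(|e| e.to_string())?;
    let rows: Vec<(String, String)> = stmt
        .query_map([], |row| Ok((row.get(0)?, row.get(1)?)))
        .map_err(|e| e.to_string())?
        .collect::<Result<_, _>>()
        .map_err(|e| e.to_string())?;

    let mut expected_prev = String::new(); // the first entry has an empty prev_hash
    for (i, (prev, entry)) in rows.iter().enumerate() {
        if *prev != expected_prev {
            return Err(format!("audit chain broken at row {i}"));
        }
        expected_prev = entry.clone();
    }
    Ok(())
}
```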
**Audit entry fields:**
- `action` — what was done
- `entity_type` — type of record involved
@ -84,7 +98,7 @@ Defined in `src-tauri/capabilities/default.json`:
|--------|-------------------|
| `dialog` | `allow-open`, `allow-save` |
| `fs` | `read-text`, `write-text`, `read`, `write`, `mkdir` — scoped to app dir and temp |
| `shell` | `allow-execute` — for running system commands |
| `shell` | `allow-open` only |
| `http` | default — connect only to approved origins |
---
@ -109,7 +123,9 @@ HTTP is blocked by default. Only whitelisted HTTPS endpoints (and localhost for
## TLS
All outbound HTTP requests use `reqwest` with default TLS settings (TLS 1.2+ required). Certificate verification is enabled. No custom trust anchors are added.
All outbound HTTP requests use `reqwest` with certificate verification enabled and a request timeout configured for provider calls.
CI/CD currently uses internal `http://` endpoints for self-hosted Gitea release automation on a trusted LAN. Recommended hardening: migrate runners and API calls to HTTPS with internal certificates.
---
@ -120,3 +136,4 @@ All outbound HTTP requests use `reqwest` with default TLS settings (TLS 1.2+ req
- [ ] Does it store secrets? → Use Stronghold, not the SQLite DB
- [ ] Does it need filesystem access? → Scope the fs capability
- [ ] Does it need a new HTTP endpoint? → Add to CSP `connect-src`
- [ ] Does it add a new provider endpoint? → Avoid query-param secrets, use auth headers

View File

@ -4,7 +4,7 @@
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>TFTSR — IT Triage & RCA</title>
<title>Troubleshooting and RCA Assistant</title>
</head>
<body>
<div id="root"></div>

View File

@ -4,3 +4,8 @@
# error. The desktop binary links against rlib (static), so cdylib exports
# are unused at runtime.
rustflags = ["-C", "link-arg=-Wl,--exclude-all-symbols"]
[env]
# Use system OpenSSL instead of vendoring from source (which requires Perl modules
# unavailable on some environments and breaks clippy/check).
OPENSSL_NO_VENDOR = "1"

src-tauri/Cargo.lock generated
View File

@ -263,6 +263,12 @@ dependencies = [
"constant_time_eq 0.4.2",
]
[[package]]
name = "block"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0d8c1fef690941d3e7788d328517591fecc684c084084702d6ff1641e993699a"
[[package]]
name = "block-buffer"
version = "0.10.4"
@ -520,6 +526,36 @@ dependencies = [
"zeroize",
]
[[package]]
name = "cocoa"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6140449f97a6e97f9511815c5632d84c8aacf8ac271ad77c559218161a1373c"
dependencies = [
"bitflags 1.3.2",
"block",
"cocoa-foundation",
"core-foundation 0.9.4",
"core-graphics 0.23.2",
"foreign-types 0.5.0",
"libc",
"objc",
]
[[package]]
name = "cocoa-foundation"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c6234cbb2e4c785b456c0644748b1ac416dd045799740356f8363dfe00c93f7"
dependencies = [
"bitflags 1.3.2",
"block",
"core-foundation 0.9.4",
"core-graphics-types 0.1.3",
"libc",
"objc",
]
[[package]]
name = "color_quant"
version = "1.1.0"
@ -648,6 +684,19 @@ version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "core-graphics"
version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c07782be35f9e1140080c6b96f0d44b739e2278479f64e02fdab4e32dfd8b081"
dependencies = [
"bitflags 1.3.2",
"core-foundation 0.9.4",
"core-graphics-types 0.1.3",
"foreign-types 0.5.0",
"libc",
]
[[package]]
name = "core-graphics"
version = "0.25.0"
@ -656,11 +705,22 @@ checksum = "064badf302c3194842cf2c5d61f56cc88e54a759313879cdf03abdd27d0c3b97"
dependencies = [
"bitflags 2.11.0",
"core-foundation 0.10.1",
"core-graphics-types",
"core-graphics-types 0.2.0",
"foreign-types 0.5.0",
"libc",
]
[[package]]
name = "core-graphics-types"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "45390e6114f68f718cc7a830514a96f903cccd70d02a8f6d9f643ac4ba45afaf"
dependencies = [
"bitflags 1.3.2",
"core-foundation 0.9.4",
"libc",
]
[[package]]
name = "core-graphics-types"
version = "0.2.0"
@ -2832,6 +2892,15 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c41e0c4fef86961ac6d6f8a82609f55f31b05e4fce149ac5710e439df7619ba4"
[[package]]
name = "malloc_buf"
version = "0.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62bb907fe88d54d8d9ce32a3cceab4218ed2f6b7d35617cafe9adf84e43919cb"
dependencies = [
"libc",
]
[[package]]
name = "markup5ever"
version = "0.14.1"
@ -3147,6 +3216,15 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "objc"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "915b1b472bc21c53464d6c8461c9d3af805ba1ef837e1cac254428f4a77177b1"
dependencies = [
"malloc_buf",
]
[[package]]
name = "objc2"
version = "0.6.4"
@ -5252,7 +5330,7 @@ dependencies = [
"bitflags 2.11.0",
"block2",
"core-foundation 0.10.1",
"core-graphics",
"core-graphics 0.25.0",
"crossbeam-channel",
"dispatch2",
"dlopen2",
@ -5670,46 +5748,6 @@ dependencies = [
"utf-8",
]
[[package]]
name = "tftsr"
version = "0.1.0"
dependencies = [
"aes-gcm",
"aho-corasick",
"anyhow",
"async-trait",
"base64 0.22.1",
"chrono",
"dirs 5.0.1",
"docx-rs",
"futures",
"hex",
"lazy_static",
"mockito",
"printpdf",
"rand 0.8.5",
"regex",
"reqwest 0.12.28",
"rusqlite",
"serde",
"serde_json",
"sha2",
"tauri",
"tauri-build",
"tauri-plugin-dialog",
"tauri-plugin-fs",
"tauri-plugin-http",
"tauri-plugin-shell",
"tauri-plugin-stronghold",
"thiserror 1.0.69",
"tokio",
"tokio-test",
"tracing",
"tracing-subscriber",
"uuid",
"warp",
]
[[package]]
name = "thiserror"
version = "1.0.69"
@ -6167,6 +6205,49 @@ dependencies = [
"windows-sys 0.60.2",
]
[[package]]
name = "trcaa"
version = "0.1.0"
dependencies = [
"aes-gcm",
"aho-corasick",
"anyhow",
"async-trait",
"base64 0.22.1",
"chrono",
"cocoa",
"dirs 5.0.1",
"docx-rs",
"futures",
"hex",
"lazy_static",
"mockito",
"objc",
"printpdf",
"rand 0.8.5",
"regex",
"reqwest 0.12.28",
"rusqlite",
"serde",
"serde_json",
"sha2",
"tauri",
"tauri-build",
"tauri-plugin-dialog",
"tauri-plugin-fs",
"tauri-plugin-http",
"tauri-plugin-shell",
"tauri-plugin-stronghold",
"thiserror 1.0.69",
"tokio",
"tokio-test",
"tracing",
"tracing-subscriber",
"urlencoding",
"uuid",
"warp",
]
[[package]]
name = "try-lock"
version = "0.2.5"
@ -6344,6 +6425,12 @@ dependencies = [
"serde_derive",
]
[[package]]
name = "urlencoding"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "daf8dba3b7eb870caf1ddeed7bc9d2a049f3cfdfae7cb521b087cc33ae4c49da"
[[package]]
name = "urlpattern"
version = "0.3.0"

View File

@ -1,5 +1,5 @@
[package]
name = "tftsr"
name = "trcaa"
version = "0.1.0"
edition = "2021"
@ -42,6 +42,12 @@ aes-gcm = "0.10"
rand = "0.8"
lazy_static = "1.4"
warp = "0.3"
urlencoding = "2"
# Platform-specific dependencies for native cookie extraction
[target.'cfg(target_os = "macos")'.dependencies]
cocoa = "0.25"
objc = "0.2"
[dev-dependencies]
tokio-test = "0.4"

View File

@ -24,7 +24,7 @@
"fs:allow-temp-write-recursive",
"fs:scope-app-recursive",
"fs:scope-temp-recursive",
"shell:allow-execute",
"shell:allow-open",
"http:default"
]
}

View File

@ -1 +1 @@
{"default":{"identifier":"default","description":"Default capabilities for TFTSR — least-privilege","local":true,"windows":["main"],"permissions":["core:path:default","core:event:default","core:window:default","core:app:default","core:resources:default","core:menu:default","core:tray:default","dialog:allow-open","dialog:allow-save","fs:allow-read-text-file","fs:allow-write-text-file","fs:allow-read","fs:allow-write","fs:allow-mkdir","fs:allow-app-read-recursive","fs:allow-app-write-recursive","fs:allow-temp-read-recursive","fs:allow-temp-write-recursive","fs:scope-app-recursive","fs:scope-temp-recursive","shell:allow-execute","http:default"]}}
{"default":{"identifier":"default","description":"Default capabilities for TFTSR — least-privilege","local":true,"windows":["main"],"permissions":["core:path:default","core:event:default","core:window:default","core:app:default","core:resources:default","core:menu:default","core:tray:default","dialog:allow-open","dialog:allow-save","fs:allow-read-text-file","fs:allow-write-text-file","fs:allow-read","fs:allow-write","fs:allow-mkdir","fs:allow-app-read-recursive","fs:allow-app-write-recursive","fs:allow-temp-read-recursive","fs:allow-temp-write-recursive","fs:scope-app-recursive","fs:scope-temp-recursive","shell:allow-open","http:default"]}}

View File

@ -2324,24 +2324,6 @@
"Identifier": {
"description": "Permission identifier",
"oneOf": [
{
"description": "Allows reading the CLI matches\n#### This default permission set includes:\n\n- `allow-cli-matches`",
"type": "string",
"const": "cli:default",
"markdownDescription": "Allows reading the CLI matches\n#### This default permission set includes:\n\n- `allow-cli-matches`"
},
{
"description": "Enables the cli_matches command without any pre-configured scope.",
"type": "string",
"const": "cli:allow-cli-matches",
"markdownDescription": "Enables the cli_matches command without any pre-configured scope."
},
{
"description": "Denies the cli_matches command without any pre-configured scope.",
"type": "string",
"const": "cli:deny-cli-matches",
"markdownDescription": "Denies the cli_matches command without any pre-configured scope."
},
{
"description": "Default core plugins set.\n#### This default permission set includes:\n\n- `core:path:default`\n- `core:event:default`\n- `core:window:default`\n- `core:webview:default`\n- `core:app:default`\n- `core:image:default`\n- `core:resources:default`\n- `core:menu:default`\n- `core:tray:default`",
"type": "string",
@ -6373,60 +6355,6 @@
"type": "string",
"const": "stronghold:deny-save-store-record",
"markdownDescription": "Denies the save_store_record command without any pre-configured scope."
},
{
"description": "This permission set configures which kind of\nupdater functions are exposed to the frontend.\n\n#### Granted Permissions\n\nThe full workflow from checking for updates to installing them\nis enabled.\n\n\n#### This default permission set includes:\n\n- `allow-check`\n- `allow-download`\n- `allow-install`\n- `allow-download-and-install`",
"type": "string",
"const": "updater:default",
"markdownDescription": "This permission set configures which kind of\nupdater functions are exposed to the frontend.\n\n#### Granted Permissions\n\nThe full workflow from checking for updates to installing them\nis enabled.\n\n\n#### This default permission set includes:\n\n- `allow-check`\n- `allow-download`\n- `allow-install`\n- `allow-download-and-install`"
},
{
"description": "Enables the check command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-check",
"markdownDescription": "Enables the check command without any pre-configured scope."
},
{
"description": "Enables the download command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-download",
"markdownDescription": "Enables the download command without any pre-configured scope."
},
{
"description": "Enables the download_and_install command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-download-and-install",
"markdownDescription": "Enables the download_and_install command without any pre-configured scope."
},
{
"description": "Enables the install command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-install",
"markdownDescription": "Enables the install command without any pre-configured scope."
},
{
"description": "Denies the check command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-check",
"markdownDescription": "Denies the check command without any pre-configured scope."
},
{
"description": "Denies the download command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-download",
"markdownDescription": "Denies the download command without any pre-configured scope."
},
{
"description": "Denies the download_and_install command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-download-and-install",
"markdownDescription": "Denies the download_and_install command without any pre-configured scope."
},
{
"description": "Denies the install command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-install",
"markdownDescription": "Denies the install command without any pre-configured scope."
}
]
},

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -28,8 +29,11 @@ impl Provider for AnthropicProvider {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
_tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let url = format!(
"{}/v1/messages",
config
@ -112,6 +116,7 @@ impl Provider for AnthropicProvider {
content,
model,
usage,
tool_calls: None,
})
}
}

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -29,11 +30,14 @@ impl Provider for GeminiProvider {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
_tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let url = format!(
"https://generativelanguage.googleapis.com/v1beta/models/{}:generateContent?key={}",
config.model, config.api_key
"https://generativelanguage.googleapis.com/v1beta/models/{}:generateContent",
config.model
);
// Map OpenAI-style messages to Gemini format
@ -79,6 +83,7 @@ impl Provider for GeminiProvider {
let resp = client
.post(&url)
.header("Content-Type", "application/json")
.header("x-goog-api-key", &config.api_key)
.json(&body)
.send()
.await?;
@ -114,6 +119,7 @@ impl Provider for GeminiProvider {
content,
model: config.model.clone(),
usage,
tool_calls: None,
})
}
}

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -29,9 +30,12 @@ impl Provider for MistralProvider {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
_tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
// Mistral uses OpenAI-compatible format
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let base_url = if config.api_url.is_empty() {
"https://api.mistral.ai/v1".to_string()
} else {
@ -47,7 +51,10 @@ impl Provider for MistralProvider {
let resp = client
.post(&url)
.header("Authorization", format!("Bearer {}", config.api_key))
.header(
"Authorization",
format!("Bearer {api_key}", api_key = config.api_key),
)
.header("Content-Type", "application/json")
.json(&body)
.send()
@ -77,6 +84,7 @@ impl Provider for MistralProvider {
content,
model: config.model.clone(),
usage,
tool_calls: None,
})
}
}

View File

@ -4,15 +4,22 @@ pub mod mistral;
pub mod ollama;
pub mod openai;
pub mod provider;
pub mod tools;
pub use provider::*;
pub use tools::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Message {
pub role: String,
pub content: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_call_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_calls: Option<Vec<ToolCall>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -20,6 +27,44 @@ pub struct ChatResponse {
pub content: String,
pub model: String,
pub usage: Option<TokenUsage>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_calls: Option<Vec<ToolCall>>,
}
/// Represents a tool call made by the AI
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCall {
pub id: String,
pub name: String,
pub arguments: String, // JSON string
}
/// Tool definition that describes available functions to the AI
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Tool {
pub name: String,
pub description: String,
pub parameters: ToolParameters,
}
/// JSON Schema-style parameter definition for tools
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolParameters {
#[serde(rename = "type")]
pub param_type: String, // Usually "object"
pub properties: HashMap<String, ParameterProperty>,
pub required: Vec<String>,
}
/// Individual parameter property definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ParameterProperty {
#[serde(rename = "type")]
pub prop_type: String, // "string", "number", "integer", "boolean"
pub description: String,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(rename = "enum")]
pub enum_values: Option<Vec<String>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -30,8 +31,11 @@ impl Provider for OllamaProvider {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
_tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let base_url = if config.api_url.is_empty() {
"http://localhost:11434".to_string()
} else {
@ -96,6 +100,7 @@ impl Provider for OllamaProvider {
content,
model: config.model.clone(),
usage,
tool_calls: None,
})
}
}

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -6,6 +7,10 @@ use crate::state::ProviderConfig;
pub struct OpenAiProvider;
fn is_custom_rest_format(api_format: Option<&str>) -> bool {
matches!(api_format, Some("custom_rest") | Some("msi_genai"))
}
#[async_trait]
impl Provider for OpenAiProvider {
fn name(&self) -> &str {
@ -28,19 +33,104 @@ impl Provider for OpenAiProvider {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let url = format!("{}/chat/completions", config.api_url.trim_end_matches('/'));
// Check if using custom REST format
let api_format = config.api_format.as_deref().unwrap_or("openai");
let body = serde_json::json!({
// Backward compatibility: accept legacy msi_genai identifier
if is_custom_rest_format(Some(api_format)) {
self.chat_custom_rest(messages, config, tools).await
} else {
self.chat_openai(messages, config, tools).await
}
}
}
#[cfg(test)]
mod tests {
use super::is_custom_rest_format;
#[test]
fn custom_rest_format_is_recognized() {
assert!(is_custom_rest_format(Some("custom_rest")));
}
#[test]
fn legacy_msi_format_is_recognized_for_compatibility() {
assert!(is_custom_rest_format(Some("msi_genai")));
}
#[test]
fn openai_format_is_not_custom_rest() {
assert!(!is_custom_rest_format(Some("openai")));
assert!(!is_custom_rest_format(None));
}
}
impl OpenAiProvider {
/// OpenAI-compatible API format (default)
async fn chat_openai(
&self,
messages: Vec<Message>,
config: &ProviderConfig,
tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
// Use custom endpoint path if provided, otherwise default to /chat/completions
let endpoint_path = config
.custom_endpoint_path
.as_deref()
.unwrap_or("/chat/completions");
let api_url = config.api_url.trim_end_matches('/');
let url = format!("{api_url}{endpoint_path}");
let mut body = serde_json::json!({
"model": config.model,
"messages": messages,
"max_tokens": 4096,
});
// Add max_tokens if provided, otherwise use default 4096
body["max_tokens"] = serde_json::Value::from(config.max_tokens.unwrap_or(4096));
// Add temperature if provided
if let Some(temp) = config.temperature {
body["temperature"] = serde_json::Value::from(temp);
}
// Add tools if provided (OpenAI function calling format)
if let Some(tools_list) = tools {
let formatted_tools: Vec<serde_json::Value> = tools_list
.iter()
.map(|tool| {
serde_json::json!({
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": tool.parameters
}
})
})
.collect();
body["tools"] = serde_json::Value::from(formatted_tools);
body["tool_choice"] = serde_json::Value::from("auto");
}
// Use custom auth header and prefix if provided
let auth_header = config
.custom_auth_header
.as_deref()
.unwrap_or("Authorization");
let auth_prefix = config.custom_auth_prefix.as_deref().unwrap_or("Bearer ");
let auth_value = format!("{auth_prefix}{api_key}", api_key = config.api_key);
let resp = client
.post(&url)
.header("Authorization", format!("Bearer {}", config.api_key))
.header(auth_header, auth_value)
.header("Content-Type", "application/json")
.json(&body)
.send()
@ -53,10 +143,32 @@ impl Provider for OpenAiProvider {
}
let json: serde_json::Value = resp.json().await?;
let content = json["choices"][0]["message"]["content"]
.as_str()
.ok_or_else(|| anyhow::anyhow!("No content in response"))?
.to_string();
let message = &json["choices"][0]["message"];
let content = message["content"].as_str().unwrap_or("").to_string();
// Parse tool_calls if present
let tool_calls = message.get("tool_calls").and_then(|tc| {
if let Some(arr) = tc.as_array() {
let calls: Vec<crate::ai::ToolCall> = arr
.iter()
.filter_map(|call| {
Some(crate::ai::ToolCall {
id: call["id"].as_str()?.to_string(),
name: call["function"]["name"].as_str()?.to_string(),
arguments: call["function"]["arguments"].as_str()?.to_string(),
})
})
.collect();
if calls.is_empty() {
None
} else {
Some(calls)
}
} else {
None
}
});
let usage = json.get("usage").and_then(|u| {
Some(TokenUsage {
@ -70,6 +182,207 @@ impl Provider for OpenAiProvider {
content,
model: config.model.clone(),
usage,
tool_calls,
})
}
/// Custom REST format (MSI GenAI payload contract)
async fn chat_custom_rest(
&self,
messages: Vec<Message>,
config: &ProviderConfig,
tools: Option<Vec<crate::ai::Tool>>,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
// Use custom endpoint path, default to empty (API URL already includes /api/v2/chat)
let endpoint_path = config.custom_endpoint_path.as_deref().unwrap_or("");
let api_url = config.api_url.trim_end_matches('/');
let url = format!("{api_url}{endpoint_path}");
// Extract system message if present
let system_message = messages
.iter()
.find(|m| m.role == "system")
.map(|m| m.content.clone());
// Get last user message as prompt
let prompt = messages
.iter()
.rev()
.find(|m| m.role == "user")
.map(|m| m.content.clone())
.ok_or_else(|| anyhow::anyhow!("No user message found"))?;
// Build request body
let mut body = serde_json::json!({
"model": config.model,
"prompt": prompt,
});
// Add userId if provided (CORE ID email)
if let Some(user_id) = &config.user_id {
body["userId"] = serde_json::Value::String(user_id.clone());
}
// Add optional system message
if let Some(system) = system_message {
body["system"] = serde_json::Value::String(system);
}
// Add session ID if available (for conversation continuity)
if let Some(session_id) = &config.session_id {
body["sessionId"] = serde_json::Value::String(session_id.clone());
}
// Add modelConfig with temperature and max_tokens if provided
let mut model_config = serde_json::json!({});
if let Some(temp) = config.temperature {
model_config["temperature"] = serde_json::Value::from(temp);
}
if let Some(max_tokens) = config.max_tokens {
model_config["max_tokens"] = serde_json::Value::from(max_tokens);
}
if !model_config.is_null() && model_config.as_object().is_some_and(|obj| !obj.is_empty()) {
body["modelConfig"] = model_config;
}
// Add tools if provided (OpenAI-style format, most common standard)
if let Some(tools_list) = tools {
let formatted_tools: Vec<serde_json::Value> = tools_list
.iter()
.map(|tool| {
serde_json::json!({
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": tool.parameters
}
})
})
.collect();
let tool_count = formatted_tools.len();
body["tools"] = serde_json::Value::from(formatted_tools);
body["tool_choice"] = serde_json::Value::from("auto");
tracing::info!("MSI GenAI: Sending {} tools in request", tool_count);
}
// Use custom auth header and prefix (no default prefix for custom REST)
let auth_header = config
.custom_auth_header
.as_deref()
.unwrap_or("Authorization");
let auth_prefix = config.custom_auth_prefix.as_deref().unwrap_or("");
let auth_value = format!("{auth_prefix}{api_key}", api_key = config.api_key);
let resp = client
.post(&url)
.header(auth_header, auth_value)
.header("Content-Type", "application/json")
.json(&body)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await?;
anyhow::bail!("Custom REST API error {status}: {text}");
}
let json: serde_json::Value = resp.json().await?;
tracing::debug!(
"MSI GenAI response: {}",
serde_json::to_string_pretty(&json).unwrap_or_else(|_| "invalid JSON".to_string())
);
// Extract response content from "msg" field
let content = json["msg"]
.as_str()
.ok_or_else(|| anyhow::anyhow!("No 'msg' field in response"))?
.to_string();
// Parse tool_calls if present (check multiple possible field names)
let tool_calls = json
.get("tool_calls")
.or_else(|| json.get("toolCalls"))
.or_else(|| json.get("function_calls"))
.and_then(|tc| {
if let Some(arr) = tc.as_array() {
let calls: Vec<crate::ai::ToolCall> = arr
.iter()
.filter_map(|call| {
// Try OpenAI format first
if let (Some(id), Some(name), Some(args)) = (
call.get("id").and_then(|v| v.as_str()),
call.get("function")
.and_then(|f| f.get("name"))
.and_then(|n| n.as_str())
.or_else(|| call.get("name").and_then(|n| n.as_str())),
call.get("function")
.and_then(|f| f.get("arguments"))
.and_then(|a| a.as_str())
.or_else(|| call.get("arguments").and_then(|a| a.as_str())),
) {
tracing::info!("MSI GenAI: Parsed tool call: {} ({})", name, id);
return Some(crate::ai::ToolCall {
id: id.to_string(),
name: name.to_string(),
arguments: args.to_string(),
});
}
// Try simpler format
if let (Some(name), Some(args)) = (
call.get("name").and_then(|n| n.as_str()),
call.get("arguments").and_then(|a| a.as_str()),
) {
let id = call
.get("id")
.and_then(|v| v.as_str())
.unwrap_or("tool_call_0")
.to_string();
tracing::info!(
"MSI GenAI: Parsed tool call (simple format): {} ({})",
name,
id
);
return Some(crate::ai::ToolCall {
id,
name: name.to_string(),
arguments: args.to_string(),
});
}
tracing::warn!("MSI GenAI: Failed to parse tool call: {:?}", call);
None
})
.collect();
if calls.is_empty() {
None
} else {
tracing::info!("MSI GenAI: Found {} tool calls", calls.len());
Some(calls)
}
} else {
None
}
});
// Note: sessionId from response should be stored back to config.session_id
// This would require making config mutable or returning it as part of ChatResponse
// For now, the caller can extract it from the response if needed
// TODO: Consider adding session_id to ChatResponse struct
Ok(ChatResponse {
content,
model: config.model.clone(),
usage: None, // This custom REST contract doesn't provide token usage in response
tool_calls,
})
}
}

View File

@ -1,6 +1,6 @@
use async_trait::async_trait;
use crate::ai::{ChatResponse, Message, ProviderInfo};
use crate::ai::{ChatResponse, Message, ProviderInfo, Tool};
use crate::state::ProviderConfig;
#[async_trait]
@ -11,6 +11,7 @@ pub trait Provider: Send + Sync {
&self,
messages: Vec<Message>,
config: &ProviderConfig,
tools: Option<Vec<Tool>>,
) -> anyhow::Result<ChatResponse>;
}

src-tauri/src/ai/tools.rs Normal file
View File

@ -0,0 +1,41 @@
use crate::ai::{ParameterProperty, Tool, ToolParameters};
use std::collections::HashMap;
/// Get all available tools for AI function calling
pub fn get_available_tools() -> Vec<Tool> {
vec![get_add_ado_comment_tool()]
}
/// Tool definition for adding comments to Azure DevOps work items
fn get_add_ado_comment_tool() -> Tool {
let mut properties = HashMap::new();
properties.insert(
"work_item_id".to_string(),
ParameterProperty {
prop_type: "integer".to_string(),
description: "The Azure DevOps work item ID (ticket number) to add the comment to"
.to_string(),
enum_values: None,
},
);
properties.insert(
"comment_text".to_string(),
ParameterProperty {
prop_type: "string".to_string(),
description: "The text content of the comment to add to the work item".to_string(),
enum_values: None,
},
);
Tool {
name: "add_ado_comment".to_string(),
description: "Add a comment to an Azure DevOps work item (ticket). Use this when the user asks you to add a comment, update a ticket, or provide information to a ticket.".to_string(),
parameters: ToolParameters {
param_type: "object".to_string(),
properties,
required: vec!["work_item_id".to_string(), "comment_text".to_string()],
},
}
}

View File

@ -1,4 +1,20 @@
use crate::db::models::AuditEntry;
use sha2::{Digest, Sha256};
fn compute_entry_hash(entry: &AuditEntry, prev_hash: &str) -> String {
let payload = format!(
"{}|{}|{}|{}|{}|{}|{}|{}",
prev_hash,
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
);
format!("{:x}", Sha256::digest(payload.as_bytes()))
}
/// Write an audit event to the audit_log table.
pub fn write_audit_event(
@ -14,9 +30,16 @@ pub fn write_audit_event(
entity_id.to_string(),
details.to_string(),
);
let prev_hash: String = conn
.prepare(
"SELECT entry_hash FROM audit_log WHERE entry_hash <> '' ORDER BY timestamp DESC, id DESC LIMIT 1",
)?
.query_row([], |row| row.get(0))
.unwrap_or_default();
let entry_hash = compute_entry_hash(&entry, &prev_hash);
conn.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details, prev_hash, entry_hash) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)",
rusqlite::params![
entry.id,
entry.timestamp,
@ -25,6 +48,8 @@ pub fn write_audit_event(
entry.entity_id,
entry.user_id,
entry.details,
prev_hash,
entry_hash,
],
)?;
Ok(())
@ -44,7 +69,9 @@ mod tests {
entity_type TEXT NOT NULL DEFAULT '',
entity_id TEXT NOT NULL DEFAULT '',
user_id TEXT NOT NULL DEFAULT 'local',
details TEXT NOT NULL DEFAULT '{}'
details TEXT NOT NULL DEFAULT '{}',
prev_hash TEXT NOT NULL DEFAULT '',
entry_hash TEXT NOT NULL DEFAULT ''
);",
)
.unwrap();
@ -97,9 +124,9 @@ mod tests {
for i in 0..5 {
write_audit_event(
&conn,
&format!("action_{}", i),
&format!("action_{i}"),
"test",
&format!("id_{}", i),
&format!("id_{i}"),
"{}",
)
.unwrap();
@ -128,4 +155,26 @@ mod tests {
assert_eq!(ids.len(), 2);
assert_ne!(ids[0], ids[1]);
}
#[test]
fn test_write_audit_event_hash_chain_links_entries() {
let conn = setup_test_db();
write_audit_event(&conn, "first", "issue", "1", "{}").unwrap();
write_audit_event(&conn, "second", "issue", "2", "{}").unwrap();
let mut stmt = conn
.prepare("SELECT prev_hash, entry_hash FROM audit_log ORDER BY timestamp ASC, id ASC")
.unwrap();
let rows: Vec<(String, String)> = stmt
.query_map([], |row| Ok((row.get(0)?, row.get(1)?)))
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert_eq!(rows.len(), 2);
assert_eq!(rows[0].0, "");
assert!(!rows[0].1.is_empty());
assert_eq!(rows[1].0, rows[0].1);
assert!(!rows[1].1.is_empty());
}
}

View File

@ -1,4 +1,6 @@
use tauri::State;
use rusqlite::OptionalExtension;
use tauri::{Manager, State};
use tracing::warn;
use crate::ai::provider::create_provider;
use crate::ai::{AnalysisResult, ChatResponse, Message, ProviderInfo};
@ -12,22 +14,27 @@ pub async fn analyze_logs(
provider_config: ProviderConfig,
state: State<'_, AppState>,
) -> Result<AnalysisResult, String> {
// Load log file contents
// Load log file contents — only redacted files may be sent to an AI provider
let mut log_contents = String::new();
{
let db = state.db.lock().map_err(|e| e.to_string())?;
for file_id in &log_file_ids {
let mut stmt = db
.prepare("SELECT file_name, file_path FROM log_files WHERE id = ?1")
.prepare("SELECT file_name, file_path, redacted FROM log_files WHERE id = ?1")
.map_err(|e| e.to_string())?;
if let Ok((name, path)) = stmt.query_row([file_id], |row| {
Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
if let Ok((name, path, redacted)) = stmt.query_row([file_id], |row| {
Ok((
row.get::<_, String>(0)?,
row.get::<_, String>(1)?,
row.get::<_, i32>(2)? != 0,
))
}) {
let redacted_path = redacted_path_for(&name, &path, redacted)?;
log_contents.push_str(&format!("--- {name} ---\n"));
if let Ok(content) = std::fs::read_to_string(&path) {
if let Ok(content) = std::fs::read_to_string(&redacted_path) {
log_contents.push_str(&content);
} else {
log_contents.push_str("[Could not read file]\n");
log_contents.push_str("[Could not read redacted file]\n");
}
log_contents.push('\n');
}
@ -45,17 +52,24 @@ pub async fn analyze_logs(
FIRST_WHY: (initial why question for 5-whys analysis), \
SEVERITY: (critical/high/medium/low)"
.into(),
tool_call_id: None,
tool_calls: None,
},
Message {
role: "user".into(),
content: format!("Analyze logs for issue {issue_id}:\n\n{log_contents}"),
tool_call_id: None,
tool_calls: None,
},
];
let response = provider
.chat(messages, &provider_config)
.chat(messages, &provider_config, None)
.await
.map_err(|e| e.to_string())?;
.map_err(|e| {
warn!(error = %e, "ai analyze_logs provider request failed");
"AI analysis request failed".to_string()
})?;
let content = &response.content;
let summary = extract_section(content, "SUMMARY:").unwrap_or_else(|| {
@ -81,14 +95,14 @@ pub async fn analyze_logs(
serde_json::json!({ "log_file_ids": log_file_ids, "provider": provider_config.name })
.to_string(),
);
db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id, entry.timestamp, entry.action,
entry.entity_type, entry.entity_id, entry.user_id, entry.details
],
).map_err(|e| e.to_string())?;
crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
)
.map_err(|_| "Failed to write security audit entry".to_string())?;
}
Ok(AnalysisResult {
@ -99,6 +113,17 @@ pub async fn analyze_logs(
})
}
/// Returns the path to the `.redacted` file, or an error if the file has not been redacted.
fn redacted_path_for(name: &str, path: &str, redacted: bool) -> Result<String, String> {
if !redacted {
return Err(format!(
"Log file '{name}' has not been scanned and redacted. \
Run PII detection and apply redactions before sending to AI."
));
}
Ok(format!("{path}.redacted"))
}
fn extract_section(text: &str, header: &str) -> Option<String> {
let start = text.find(header)?;
let after = &text[start + header.len()..];
@ -140,6 +165,7 @@ pub async fn chat_message(
issue_id: String,
message: String,
provider_config: ProviderConfig,
app_handle: tauri::AppHandle,
state: State<'_, AppState>,
) -> Result<ChatResponse, String> {
// Find or create a conversation for this issue + provider
@ -192,22 +218,105 @@ pub async fn chat_message(
.unwrap_or_default();
drop(db);
raw.into_iter()
.map(|(role, content)| Message { role, content })
.map(|(role, content)| Message {
role,
content,
tool_call_id: None,
tool_calls: None,
})
.collect()
};
let provider = create_provider(&provider_config);
// Search integration sources for relevant context
let integration_context = search_integration_sources(&message, &app_handle, &state).await;
let mut messages = history;
// If we found integration content, add it to the conversation context
if !integration_context.is_empty() {
let context_message = Message {
role: "system".into(),
content: format!(
"INTERNAL DOCUMENTATION SOURCES:\n\n{integration_context}\n\n\
Instructions: The above content is from internal company documentation systems \
(Confluence, ServiceNow, Azure DevOps). \
\n\n**IMPORTANT**: First determine if this documentation is RELEVANT to the user's question:\
\n- If the documentation directly addresses the question → Use it and cite sources with URLs\
\n- If the documentation is tangentially related but doesn't answer the question → Briefly mention what internal docs exist, then provide a complete answer using general knowledge\
\n- If the documentation is completely unrelated → Ignore it and answer using general knowledge\
\n\nDo NOT force irrelevant internal documentation into your answer. The user needs accurate information, not forced citations."
),
tool_call_id: None,
tool_calls: None,
};
messages.push(context_message);
}
messages.push(Message {
role: "user".into(),
content: message.clone(),
tool_call_id: None,
tool_calls: None,
});
let response = provider
.chat(messages, &provider_config)
.await
.map_err(|e| e.to_string())?;
// Get available tools
let tools = Some(crate::ai::tools::get_available_tools());
// Tool-calling loop: keep calling until AI gives final answer
let final_response;
let max_iterations = 10; // Prevent infinite loops
let mut iteration = 0;
loop {
iteration += 1;
if iteration > max_iterations {
return Err("Tool-calling loop exceeded maximum iterations".to_string());
}
let response = provider
.chat(messages.clone(), &provider_config, tools.clone())
.await
.map_err(|e| {
let error_msg = format!("AI provider request failed: {e}");
warn!("{}", error_msg);
error_msg
})?;
// Check if AI wants to call tools
if let Some(tool_calls) = &response.tool_calls {
tracing::info!("AI requested {} tool call(s)", tool_calls.len());
// Execute each tool call
for tool_call in tool_calls {
tracing::info!("Executing tool: {}", tool_call.name);
let tool_result = execute_tool_call(tool_call, &app_handle, &state).await;
// Format result
let result_content = match tool_result {
Ok(result) => result,
Err(e) => format!("Error executing tool: {e}"),
};
// Add tool result as a message
messages.push(Message {
role: "tool".into(),
content: result_content,
tool_call_id: Some(tool_call.id.clone()),
tool_calls: None,
});
}
// Continue loop to get AI's next response
continue;
}
// No tool calls - this is the final answer
final_response = response;
break;
}
// Save both user message and response to DB
{
@ -216,7 +325,7 @@ pub async fn chat_message(
let asst_msg = AiMessage::new(
conversation_id,
"assistant".to_string(),
response.content.clone(),
final_response.content.clone(),
);
db.execute(
@ -245,10 +354,10 @@ pub async fn chat_message(
"model": provider_config.model,
"api_url": provider_config.api_url,
"user_message": user_msg.content,
"response_preview": if response.content.len() > 200 {
format!("{}...", &response.content[..200])
"response_preview": if final_response.content.len() > 200 {
format!("{preview}...", preview = &final_response.content[..200])
} else {
response.content.clone()
final_response.content.clone()
},
"token_count": user_msg.token_count,
});
@ -258,17 +367,18 @@ pub async fn chat_message(
issue_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id, entry.timestamp, entry.action,
entry.entity_type, entry.entity_id, entry.user_id, entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write ai_chat audit entry");
}
}
Ok(response)
Ok(final_response)
}
#[tauri::command]
@ -278,12 +388,19 @@ pub async fn test_provider_connection(
let provider = create_provider(&provider_config);
let messages = vec![Message {
role: "user".into(),
content: "Reply with exactly: TFTSR connection test successful.".into(),
content:
"Reply with exactly: Troubleshooting and RCA Assistant connection test successful."
.into(),
tool_call_id: None,
tool_calls: None,
}];
provider
.chat(messages, &provider_config)
.chat(messages, &provider_config, None)
.await
.map_err(|e| e.to_string())
.map_err(|e| {
warn!(error = %e, "ai test_provider_connection failed");
"Provider connection test failed".to_string()
})
}
#[tauri::command]
@ -323,6 +440,417 @@ pub async fn list_providers() -> Result<Vec<ProviderInfo>, String> {
])
}
/// Search integration sources (Confluence, ServiceNow, Azure DevOps) for relevant context
async fn search_integration_sources(
query: &str,
app_handle: &tauri::AppHandle,
state: &State<'_, AppState>,
) -> String {
let mut all_results = Vec::new();
// Try to get integration configurations
let configs: Vec<crate::commands::integrations::IntegrationConfig> = {
let db = match state.db.lock() {
Ok(db) => db,
Err(e) => {
tracing::warn!("Failed to lock database: {}", e);
return String::new();
}
};
let mut stmt = match db.prepare(
"SELECT service, base_url, username, project_name, space_key FROM integration_config",
) {
Ok(stmt) => stmt,
Err(e) => {
tracing::warn!("Failed to prepare statement: {}", e);
return String::new();
}
};
let rows = match stmt.query_map([], |row| {
Ok(crate::commands::integrations::IntegrationConfig {
service: row.get(0)?,
base_url: row.get(1)?,
username: row.get(2)?,
project_name: row.get(3)?,
space_key: row.get(4)?,
})
}) {
Ok(rows) => rows,
Err(e) => {
tracing::warn!("Failed to query integration configs: {}", e);
return String::new();
}
};
rows.filter_map(|r| r.ok()).collect()
};
// Search each available integration in parallel
let mut search_tasks = Vec::new();
for config in configs {
// Authentication priority:
// 1. Try cookies from persistent browser (may fail for HttpOnly)
// 2. Try stored credentials from database
// 3. Fall back to webview-based search (uses browser's session directly)
let cookies_opt = match crate::commands::integrations::get_fresh_cookies_from_webview(
&config.service,
app_handle,
state,
)
.await
{
Ok(Some(cookies)) => {
tracing::info!("Using extracted cookies for {}", config.service);
Some(cookies)
}
_ => {
// Fallback: check for stored credentials in database
tracing::info!(
"Cookie extraction failed for {}, checking stored credentials",
config.service
);
let encrypted_token: Option<String> = {
let db = match state.db.lock() {
Ok(db) => db,
Err(_) => continue,
};
db.query_row(
"SELECT encrypted_token FROM credentials WHERE service = ?1",
[&config.service],
|row| row.get::<_, String>(0),
)
.optional()
.ok()
.flatten()
};
if let Some(token) = encrypted_token {
if let Ok(decrypted) = crate::integrations::auth::decrypt_token(&token) {
// Try to parse as cookies JSON
if let Ok(cookie_list) = serde_json::from_str::<
Vec<crate::integrations::webview_auth::Cookie>,
>(&decrypted)
{
tracing::info!(
"Using stored cookies for {} (count: {})",
config.service,
cookie_list.len()
);
Some(cookie_list)
} else {
tracing::warn!(
"Stored credentials for {} not in cookie format",
config.service
);
None
}
} else {
None
}
} else {
None
}
}
};
// If we have cookies (from extraction or database), use standard API search
if let Some(cookies) = cookies_opt {
match config.service.as_str() {
"confluence" => {
let base_url = config.base_url.clone();
let query = query.to_string();
let cookies_clone = cookies.clone();
search_tasks.push(tokio::spawn(async move {
crate::integrations::confluence_search::search_confluence(
&base_url,
&query,
&cookies_clone,
)
.await
.unwrap_or_default()
}));
}
"servicenow" => {
let instance_url = config.base_url.clone();
let query = query.to_string();
let cookies_clone = cookies.clone();
search_tasks.push(tokio::spawn(async move {
let mut results = Vec::new();
// Search knowledge base
if let Ok(kb_results) =
crate::integrations::servicenow_search::search_servicenow(
&instance_url,
&query,
&cookies_clone,
)
.await
{
results.extend(kb_results);
}
// Search incidents
if let Ok(incident_results) =
crate::integrations::servicenow_search::search_incidents(
&instance_url,
&query,
&cookies_clone,
)
.await
{
results.extend(incident_results);
}
results
}));
}
"azuredevops" => {
let org_url = config.base_url.clone();
let project = config.project_name.unwrap_or_default();
let query = query.to_string();
let cookies_clone = cookies.clone();
search_tasks.push(tokio::spawn(async move {
let mut results = Vec::new();
// Search wiki
if let Ok(wiki_results) =
crate::integrations::azuredevops_search::search_wiki(
&org_url,
&project,
&query,
&cookies_clone,
)
.await
{
results.extend(wiki_results);
}
// Search work items
if let Ok(wi_results) =
crate::integrations::azuredevops_search::search_work_items(
&org_url,
&project,
&query,
&cookies_clone,
)
.await
{
results.extend(wi_results);
}
results
}));
}
_ => {}
}
} else {
// Final fallback: try webview-based fetch (includes HttpOnly cookies automatically)
// This makes HTTP requests FROM the authenticated webview, which includes all cookies
tracing::info!(
"No extracted cookies for {}, trying webview-based fetch",
config.service
);
// Check if webview exists for this service
let webview_label = {
let webviews = match state.integration_webviews.lock() {
Ok(w) => w,
Err(_) => continue,
};
webviews.get(&config.service).cloned()
};
if let Some(label) = webview_label {
// Get window handle
if let Some(webview_window) = app_handle.get_webview_window(&label) {
let base_url = config.base_url.clone();
let service = config.service.clone();
let query_str = query.to_string();
match service.as_str() {
"confluence" => {
search_tasks.push(tokio::spawn(async move {
tracing::info!("Executing Confluence search via webview fetch");
match crate::integrations::webview_fetch::search_confluence_webview(
&webview_window,
&base_url,
&query_str,
)
.await
{
Ok(results) => {
tracing::info!(
"Webview fetch for Confluence returned {} results",
results.len()
);
results
}
Err(e) => {
tracing::warn!(
"Webview fetch failed for Confluence: {}",
e
);
Vec::new()
}
}
}));
}
"servicenow" => {
search_tasks.push(tokio::spawn(async move {
tracing::info!("Executing ServiceNow search via webview fetch");
match crate::integrations::webview_fetch::search_servicenow_webview(
&webview_window,
&base_url,
&query_str,
)
.await
{
Ok(results) => {
tracing::info!(
"Webview fetch for ServiceNow returned {} results",
results.len()
);
results
}
Err(e) => {
tracing::warn!(
"Webview fetch failed for ServiceNow: {}",
e
);
Vec::new()
}
}
}));
}
"azuredevops" => {
let project = config.project_name.unwrap_or_default();
search_tasks.push(tokio::spawn(async move {
tracing::info!("Executing Azure DevOps search via webview fetch");
let mut results = Vec::new();
// Search wiki
match crate::integrations::webview_fetch::search_azuredevops_wiki_webview(
&webview_window,
&base_url,
&project,
&query_str
).await {
Ok(wiki_results) => {
tracing::info!("Webview fetch for ADO wiki returned {} results", wiki_results.len());
results.extend(wiki_results);
}
Err(e) => {
tracing::warn!("Webview fetch failed for ADO wiki: {}", e);
}
}
// Search work items
match crate::integrations::webview_fetch::search_azuredevops_workitems_webview(
&webview_window,
&base_url,
&project,
&query_str
).await {
Ok(wi_results) => {
tracing::info!("Webview fetch for ADO work items returned {} results", wi_results.len());
results.extend(wi_results);
}
Err(e) => {
tracing::warn!("Webview fetch failed for ADO work items: {}", e);
}
}
results
}));
}
_ => {}
}
} else {
tracing::warn!("Webview window not found for {}", config.service);
}
} else {
tracing::warn!(
"No webview open for {} - cannot search. Please open browser window in Settings → Integrations",
config.service
);
}
}
}
// Wait for all searches to complete
for task in search_tasks {
if let Ok(results) = task.await {
all_results.extend(results);
}
}
// Format results for AI context
if all_results.is_empty() {
return String::new();
}
let mut context = String::new();
for (idx, result) in all_results.iter().enumerate() {
context.push_str(&format!("--- SOURCE {} ({}) ---\n", idx + 1, result.source));
context.push_str(&format!("Title: {}\n", result.title));
context.push_str(&format!("URL: {}\n", result.url));
if let Some(content) = &result.content {
context.push_str(&format!("Content:\n{content}\n\n"));
} else {
context.push_str(&format!("Excerpt: {}\n\n", result.excerpt));
}
}
tracing::info!(
"Found {} integration sources for AI context",
all_results.len()
);
context
}
/// Execute a tool call made by the AI
async fn execute_tool_call(
tool_call: &crate::ai::ToolCall,
app_handle: &tauri::AppHandle,
app_state: &State<'_, AppState>,
) -> Result<String, String> {
match tool_call.name.as_str() {
"add_ado_comment" => {
// Parse arguments
let args: serde_json::Value = serde_json::from_str(&tool_call.arguments)
.map_err(|e| format!("Failed to parse tool arguments: {e}"))?;
let work_item_id = args
.get("work_item_id")
.and_then(|v| v.as_i64())
.ok_or_else(|| "Missing or invalid work_item_id parameter".to_string())?;
let comment_text = args
.get("comment_text")
.and_then(|v| v.as_str())
.ok_or_else(|| "Missing or invalid comment_text parameter".to_string())?;
// Execute the add_ado_comment command
tracing::info!(
"AI executing tool: add_ado_comment({}, \"{}\")",
work_item_id,
comment_text
);
crate::commands::integrations::add_ado_comment(
work_item_id,
comment_text.to_string(),
app_handle.clone(),
app_state.clone(),
)
.await
}
_ => {
let error = format!("Unknown tool: {}", tool_call.name);
tracing::warn!("{}", error);
Err(error)
}
}
}
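
execute_tool_call is the single dispatch point for AI-initiated actions, so adding a tool means a new arm in its match plus a matching entry in get_available_tools. A hedged sketch of the shape a second tool would take, written as a standalone function for readability; the tool name "search_kb", its "query" argument, and the search_knowledge_base helper are hypothetical, not part of this change:

// Hypothetical: how a second tool would slot in, following the same
// parse-arguments-then-dispatch pattern as the add_ado_comment arm above.
async fn execute_search_kb(
    tool_call: &crate::ai::ToolCall,
    app_handle: &tauri::AppHandle,
    app_state: &State<'_, AppState>,
) -> Result<String, String> {
    let args: serde_json::Value = serde_json::from_str(&tool_call.arguments)
        .map_err(|e| format!("Failed to parse tool arguments: {e}"))?;
    let query = args
        .get("query")
        .and_then(|v| v.as_str())
        .ok_or_else(|| "Missing or invalid query parameter".to_string())?;
    // search_knowledge_base is an assumed helper returning Result<String, String>.
    search_knowledge_base(query, app_handle, app_state).await
}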
#[cfg(test)]
mod tests {
use super::*;
@ -371,6 +899,19 @@ mod tests {
assert_eq!(list, vec!["Item one", "Item two"]);
}
#[test]
fn test_redacted_path_rejects_unredacted_file() {
let err = redacted_path_for("app.log", "/data/app.log", false).unwrap_err();
assert!(err.contains("app.log"));
assert!(err.contains("redacted"));
}
#[test]
fn test_redacted_path_returns_dotredacted_suffix() {
let path = redacted_path_for("app.log", "/data/app.log", true).unwrap();
assert_eq!(path, "/data/app.log.redacted");
}
#[test]
fn test_extract_list_missing_header() {
let text = "No findings here";

View File

@ -1,20 +1,43 @@
use sha2::{Digest, Sha256};
use std::path::{Path, PathBuf};
use tauri::State;
use tracing::warn;
use crate::db::models::{AuditEntry, LogFile, PiiSpanRecord};
use crate::pii::{self, PiiDetectionResult, PiiDetector, RedactedLogFile};
use crate::state::AppState;
const MAX_LOG_FILE_BYTES: u64 = 50 * 1024 * 1024;
fn validate_log_file_path(file_path: &str) -> Result<PathBuf, String> {
let path = Path::new(file_path);
let canonical = std::fs::canonicalize(path).map_err(|_| "Unable to access selected file")?;
let metadata = std::fs::metadata(&canonical).map_err(|_| "Unable to read file metadata")?;
if !metadata.is_file() {
return Err("Selected path is not a file".to_string());
}
if metadata.len() > MAX_LOG_FILE_BYTES {
return Err(format!(
"File exceeds maximum supported size ({} MB)",
MAX_LOG_FILE_BYTES / 1024 / 1024
));
}
Ok(canonical)
}
#[tauri::command]
pub async fn upload_log_file(
issue_id: String,
file_path: String,
state: State<'_, AppState>,
) -> Result<LogFile, String> {
let path = std::path::Path::new(&file_path);
let content = std::fs::read(path).map_err(|e| e.to_string())?;
let canonical_path = validate_log_file_path(&file_path)?;
let content = std::fs::read(&canonical_path).map_err(|_| "Failed to read selected log file")?;
let content_hash = format!("{:x}", Sha256::digest(&content));
let file_name = path
let file_name = canonical_path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("unknown")
@ -28,7 +51,8 @@ pub async fn upload_log_file(
"text/plain"
};
let log_file = LogFile::new(issue_id.clone(), file_name, file_path.clone(), file_size);
let canonical_file_path = canonical_path.to_string_lossy().to_string();
let log_file = LogFile::new(issue_id.clone(), file_name, canonical_file_path, file_size);
let log_file = LogFile {
content_hash: content_hash.clone(),
mime_type: mime_type.to_string(),
@ -51,7 +75,7 @@ pub async fn upload_log_file(
log_file.redacted as i32,
],
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to store uploaded log metadata".to_string())?;
// Audit
let entry = AuditEntry::new(
@ -60,19 +84,15 @@ pub async fn upload_log_file(
log_file.id.clone(),
serde_json::json!({ "issue_id": issue_id, "file_name": log_file.file_name }).to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write upload_log_file audit entry");
}
Ok(log_file)
}
@ -87,10 +107,11 @@ pub async fn detect_pii(
let db = state.db.lock().map_err(|e| e.to_string())?;
db.prepare("SELECT file_path FROM log_files WHERE id = ?1")
.and_then(|mut stmt| stmt.query_row([&log_file_id], |row| row.get(0)))
.map_err(|e| e.to_string())?
.map_err(|_| "Failed to load log file metadata".to_string())?
};
let content = std::fs::read_to_string(&file_path).map_err(|e| e.to_string())?;
let content =
std::fs::read_to_string(&file_path).map_err(|_| "Failed to read log file content")?;
let detector = PiiDetector::new();
let spans = detector.detect(&content);
@ -105,10 +126,10 @@ pub async fn detect_pii(
pii_type: span.pii_type.clone(),
start_offset: span.start as i64,
end_offset: span.end as i64,
original_value: span.original.clone(),
original_value: String::new(),
replacement: span.replacement.clone(),
};
let _ = db.execute(
if let Err(err) = db.execute(
"INSERT OR REPLACE INTO pii_spans (id, log_file_id, pii_type, start_offset, end_offset, original_value, replacement) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
@ -116,7 +137,9 @@ pub async fn detect_pii(
record.start_offset, record.end_offset,
record.original_value, record.replacement
],
);
) {
warn!(error = %err, span_id = %span.id, "failed to persist pii span");
}
}
}
@ -138,10 +161,11 @@ pub async fn apply_redactions(
let db = state.db.lock().map_err(|e| e.to_string())?;
db.prepare("SELECT file_path FROM log_files WHERE id = ?1")
.and_then(|mut stmt| stmt.query_row([&log_file_id], |row| row.get(0)))
.map_err(|e| e.to_string())?
.map_err(|_| "Failed to load log file metadata".to_string())?
};
let content = std::fs::read_to_string(&file_path).map_err(|e| e.to_string())?;
let content =
std::fs::read_to_string(&file_path).map_err(|_| "Failed to read log file content")?;
// Load PII spans from DB, filtering to only approved ones
let spans: Vec<pii::PiiSpan> = {
@ -188,7 +212,8 @@ pub async fn apply_redactions(
// Save redacted file alongside original
let redacted_path = format!("{file_path}.redacted");
std::fs::write(&redacted_path, &redacted_text).map_err(|e| e.to_string())?;
std::fs::write(&redacted_path, &redacted_text)
.map_err(|_| "Failed to write redacted output file".to_string())?;
// Mark the log file as redacted in DB
{
@ -197,7 +222,12 @@ pub async fn apply_redactions(
"UPDATE log_files SET redacted = 1 WHERE id = ?1",
[&log_file_id],
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to mark file as redacted".to_string())?;
db.execute(
"UPDATE pii_spans SET original_value = '' WHERE log_file_id = ?1",
[&log_file_id],
)
.map_err(|_| "Failed to finalize redaction metadata".to_string())?;
}
Ok(RedactedLogFile {
@ -206,3 +236,25 @@ pub async fn apply_redactions(
data_hash,
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_validate_log_file_path_rejects_non_file() {
let dir = std::env::temp_dir();
let result = validate_log_file_path(dir.to_string_lossy().as_ref());
assert!(result.is_err());
}
#[test]
fn test_validate_log_file_path_accepts_small_file() {
let file_path =
std::env::temp_dir().join(format!("tftsr-analysis-test-{}.log", uuid::Uuid::now_v7()));
std::fs::write(&file_path, "hello").unwrap();
let result = validate_log_file_path(file_path.to_string_lossy().as_ref());
assert!(result.is_ok());
let _ = std::fs::remove_file(file_path);
}
}
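
The replacement step between loading spans and writing the .redacted file is elided from this diff, but the offset bookkeeping it implies is worth spelling out: spans must be applied from the highest start offset down, because each substitution changes the byte length of everything after it. A minimal sketch under that assumption (non-overlapping spans with usize byte offsets on UTF-8 character boundaries, fields as in the PiiSpanRecord rows stored above):

// Apply replacements back-to-front so earlier byte offsets stay valid.
fn apply_spans(content: &str, mut spans: Vec<pii::PiiSpan>) -> String {
    spans.sort_by(|a, b| b.start.cmp(&a.start));
    let mut redacted = content.to_string();
    for span in spans {
        redacted.replace_range(span.start..span.end, &span.replacement);
    }
    redacted
}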

View File

@ -295,19 +295,31 @@ pub async fn list_issues(
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = vec![];
if let Some(ref status) = filter.status {
sql.push_str(&format!(" AND i.status = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.status = ?{index}",
index = params.len() + 1
));
params.push(Box::new(status.clone()));
}
if let Some(ref severity) = filter.severity {
sql.push_str(&format!(" AND i.severity = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.severity = ?{index}",
index = params.len() + 1
));
params.push(Box::new(severity.clone()));
}
if let Some(ref category) = filter.category {
sql.push_str(&format!(" AND i.category = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.category = ?{index}",
index = params.len() + 1
));
params.push(Box::new(category.clone()));
}
if let Some(ref domain) = filter.domain {
sql.push_str(&format!(" AND i.category = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.category = ?{index}",
index = params.len() + 1
));
params.push(Box::new(domain.clone()));
}
if let Some(ref search) = filter.search {
@ -321,9 +333,9 @@ pub async fn list_issues(
sql.push_str(" ORDER BY i.updated_at DESC");
sql.push_str(&format!(
" LIMIT ?{} OFFSET ?{}",
params.len() + 1,
params.len() + 2
" LIMIT ?{limit_index} OFFSET ?{offset_index}",
limit_index = params.len() + 1,
offset_index = params.len() + 2
));
params.push(Box::new(limit));
params.push(Box::new(offset));
@ -476,20 +488,14 @@ pub async fn add_timeline_event(
issue_id.clone(),
serde_json::json!({ "description": description }).to_string(),
);
db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to write security audit entry".to_string())?;
// Update issue timestamp
let now = chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string();

View File

@ -1,4 +1,5 @@
use tauri::State;
use tracing::warn;
use crate::db::models::AuditEntry;
use crate::docs::{exporter, generate_postmortem_markdown, generate_rca_markdown};
@ -34,7 +35,7 @@ pub async fn generate_rca(
id: doc_id.clone(),
issue_id: issue_id.clone(),
doc_type: "rca".to_string(),
title: format!("RCA: {}", issue_detail.issue.title),
title: format!("RCA: {title}", title = issue_detail.issue.title),
content_md: content_md.clone(),
created_at: now.clone(),
updated_at: now,
@ -49,7 +50,7 @@ pub async fn generate_rca(
"doc_title": document.title,
"content_length": content_md.len(),
"content_preview": if content_md.len() > 300 {
format!("{}...", &content_md[..300])
format!("{preview}...", preview = &content_md[..300])
} else {
content_md.clone()
},
@ -60,19 +61,15 @@ pub async fn generate_rca(
doc_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write generate_rca audit entry");
}
Ok(document)
}
@ -93,7 +90,7 @@ pub async fn generate_postmortem(
id: doc_id.clone(),
issue_id: issue_id.clone(),
doc_type: "postmortem".to_string(),
title: format!("Post-Mortem: {}", issue_detail.issue.title),
title: format!("Post-Mortem: {title}", title = issue_detail.issue.title),
content_md: content_md.clone(),
created_at: now.clone(),
updated_at: now,
@ -108,7 +105,7 @@ pub async fn generate_postmortem(
"doc_title": document.title,
"content_length": content_md.len(),
"content_preview": if content_md.len() > 300 {
format!("{}...", &content_md[..300])
format!("{preview}...", preview = &content_md[..300])
} else {
content_md.clone()
},
@ -119,19 +116,15 @@ pub async fn generate_postmortem(
doc_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write generate_postmortem audit entry");
}
Ok(document)
}

File diff suppressed because it is too large

View File

@ -3,7 +3,7 @@ use crate::ollama::{
hardware, installer, manager, recommender, InstallGuide, ModelRecommendation, OllamaModel,
OllamaStatus,
};
use crate::state::{AppSettings, AppState};
use crate::state::{AppSettings, AppState, ProviderConfig};
// --- Ollama commands ---
@ -98,20 +98,26 @@ pub async fn get_audit_log(
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = vec![];
if let Some(ref action) = filter.action {
sql.push_str(&format!(" AND action = ?{}", params.len() + 1));
sql.push_str(&format!(" AND action = ?{index}", index = params.len() + 1));
params.push(Box::new(action.clone()));
}
if let Some(ref entity_type) = filter.entity_type {
sql.push_str(&format!(" AND entity_type = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND entity_type = ?{index}",
index = params.len() + 1
));
params.push(Box::new(entity_type.clone()));
}
if let Some(ref entity_id) = filter.entity_id {
sql.push_str(&format!(" AND entity_id = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND entity_id = ?{index}",
index = params.len() + 1
));
params.push(Box::new(entity_id.clone()));
}
sql.push_str(" ORDER BY timestamp DESC");
sql.push_str(&format!(" LIMIT ?{}", params.len() + 1));
sql.push_str(&format!(" LIMIT ?{index}", index = params.len() + 1));
params.push(Box::new(limit));
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
@ -135,3 +141,133 @@ pub async fn get_audit_log(
Ok(rows)
}
// --- AI Provider persistence commands ---
/// Save an AI provider configuration to encrypted database
#[tauri::command]
pub async fn save_ai_provider(
provider: ProviderConfig,
state: tauri::State<'_, AppState>,
) -> Result<(), String> {
// Encrypt the API key
let encrypted_key = crate::integrations::auth::encrypt_token(&provider.api_key)?;
let db = state.db.lock().map_err(|e| e.to_string())?;
db.execute(
"INSERT OR REPLACE INTO ai_providers
(id, name, provider_type, api_url, encrypted_api_key, model, max_tokens, temperature,
custom_endpoint_path, custom_auth_header, custom_auth_prefix, api_format, user_id, updated_at)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, datetime('now'))",
rusqlite::params![
uuid::Uuid::now_v7().to_string(),
provider.name,
provider.provider_type,
provider.api_url,
encrypted_key,
provider.model,
provider.max_tokens,
provider.temperature,
provider.custom_endpoint_path,
provider.custom_auth_header,
provider.custom_auth_prefix,
provider.api_format,
provider.user_id,
],
)
.map_err(|e| format!("Failed to save AI provider: {e}"))?;
Ok(())
}
/// Load all AI provider configurations from database
#[tauri::command]
pub async fn load_ai_providers(
state: tauri::State<'_, AppState>,
) -> Result<Vec<ProviderConfig>, String> {
let db = state.db.lock().map_err(|e| e.to_string())?;
let mut stmt = db
.prepare(
"SELECT name, provider_type, api_url, encrypted_api_key, model, max_tokens, temperature,
custom_endpoint_path, custom_auth_header, custom_auth_prefix, api_format, user_id
FROM ai_providers
ORDER BY name",
)
.map_err(|e| e.to_string())?;
let providers = stmt
.query_map([], |row| {
let encrypted_key: String = row.get(3)?;
Ok((
row.get::<_, String>(0)?, // name
row.get::<_, String>(1)?, // provider_type
row.get::<_, String>(2)?, // api_url
encrypted_key, // encrypted_api_key
row.get::<_, String>(4)?, // model
row.get::<_, Option<u32>>(5)?, // max_tokens
row.get::<_, Option<f64>>(6)?, // temperature
row.get::<_, Option<String>>(7)?, // custom_endpoint_path
row.get::<_, Option<String>>(8)?, // custom_auth_header
row.get::<_, Option<String>>(9)?, // custom_auth_prefix
row.get::<_, Option<String>>(10)?, // api_format
row.get::<_, Option<String>>(11)?, // user_id
))
})
.map_err(|e| e.to_string())?
.filter_map(|r| r.ok())
.filter_map(
|(
name,
provider_type,
api_url,
encrypted_key,
model,
max_tokens,
temperature,
custom_endpoint_path,
custom_auth_header,
custom_auth_prefix,
api_format,
user_id,
)| {
// Decrypt the API key
let api_key = crate::integrations::auth::decrypt_token(&encrypted_key).ok()?;
Some(ProviderConfig {
name,
provider_type,
api_url,
api_key,
model,
max_tokens,
temperature,
custom_endpoint_path,
custom_auth_header,
custom_auth_prefix,
api_format,
session_id: None, // Session IDs are not persisted
user_id,
})
},
)
.collect();
Ok(providers)
}
/// Delete an AI provider configuration
#[tauri::command]
pub async fn delete_ai_provider(
name: String,
state: tauri::State<'_, AppState>,
) -> Result<(), String> {
let db = state.db.lock().map_err(|e| e.to_string())?;
db.execute("DELETE FROM ai_providers WHERE name = ?1", [&name])
.map_err(|e| format!("Failed to delete AI provider: {e}"))?;
Ok(())
}
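
For reference, the shape of a config as it round-trips through save_ai_provider and load_ai_providers, per the column mapping above. The values are illustrative only; session_id has no column, so it always loads back as None:

let provider = ProviderConfig {
    name: "local-ollama".into(),              // illustrative values throughout
    provider_type: "ollama".into(),
    api_url: "http://localhost:11434".into(),
    api_key: "sk-example".into(),             // encrypted before it reaches the DB
    model: "llama3".into(),
    max_tokens: Some(4096),
    temperature: Some(0.2),
    custom_endpoint_path: None,
    custom_auth_header: None,
    custom_auth_prefix: None,
    api_format: None,
    session_id: None,                         // never persisted
    user_id: None,
};

One failure mode follows from the filter_map above: a row whose encrypted_api_key no longer decrypts (for example, after the .enckey file is regenerated) is skipped silently, so a lost key shows up as missing providers rather than as an error.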

View File

@ -1,6 +1,62 @@
use rusqlite::Connection;
use std::path::Path;
fn generate_key() -> String {
use rand::RngCore;
let mut bytes = [0u8; 32];
rand::rngs::OsRng.fill_bytes(&mut bytes);
hex::encode(bytes)
}
#[cfg(unix)]
fn write_key_file(path: &Path, key: &str) -> anyhow::Result<()> {
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
let mut f = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o600)
.open(path)?;
f.write_all(key.as_bytes())?;
Ok(())
}
#[cfg(not(unix))]
fn write_key_file(path: &Path, key: &str) -> anyhow::Result<()> {
std::fs::write(path, key)?;
Ok(())
}
fn get_db_key(data_dir: &Path) -> anyhow::Result<String> {
if let Ok(key) = std::env::var("TFTSR_DB_KEY") {
if !key.trim().is_empty() {
return Ok(key);
}
}
if cfg!(debug_assertions) {
return Ok("dev-key-change-in-prod".to_string());
}
// Release: load or auto-generate a per-installation key stored in the
// app data directory. This lets the app work out of the box without
// requiring users to set an environment variable.
let key_path = data_dir.join(".dbkey");
if key_path.exists() {
let key = std::fs::read_to_string(&key_path)?;
let key = key.trim().to_string();
if !key.is_empty() {
return Ok(key);
}
}
let key = generate_key();
std::fs::create_dir_all(data_dir)?;
write_key_file(&key_path, &key)?;
Ok(key)
}
pub fn open_encrypted_db(path: &Path, key: &str) -> anyhow::Result<Connection> {
let conn = Connection::open(path)?;
// ALL cipher settings MUST be set before the first database access.
@ -25,20 +81,155 @@ pub fn open_dev_db(path: &Path) -> anyhow::Result<Connection> {
Ok(conn)
}
/// Migrates a plain SQLite database to an encrypted SQLCipher database.
/// Creates a backup of the original file before migration.
fn migrate_plain_to_encrypted(db_path: &Path, key: &str) -> anyhow::Result<Connection> {
tracing::warn!("Detected plain SQLite database in release build - migrating to encrypted");
// Create backup of plain database
let backup_path = db_path.with_extension("db.plain-backup");
std::fs::copy(db_path, &backup_path)?;
tracing::info!("Backed up plain database to {:?}", backup_path);
// Open the plain database
let plain_conn = Connection::open(db_path)?;
// Create temporary encrypted database path
let temp_encrypted = db_path.with_extension("db.encrypted-temp");
// Attach and migrate to encrypted database using SQLCipher export
plain_conn.execute_batch(&format!(
"ATTACH DATABASE '{}' AS encrypted KEY '{}';\
PRAGMA encrypted.cipher_page_size = 16384;\
PRAGMA encrypted.kdf_iter = 256000;\
PRAGMA encrypted.cipher_hmac_algorithm = HMAC_SHA512;\
PRAGMA encrypted.cipher_kdf_algorithm = PBKDF2_HMAC_SHA512;",
temp_encrypted.display(),
key.replace('\'', "''")
))?;
// Export all data to encrypted database
plain_conn.execute_batch("SELECT sqlcipher_export('encrypted');")?;
plain_conn.execute_batch("DETACH DATABASE encrypted;")?;
drop(plain_conn);
// Replace original with encrypted version
std::fs::rename(&temp_encrypted, db_path)?;
tracing::info!("Successfully migrated database to encrypted format");
// Open and return the encrypted database
open_encrypted_db(db_path, key)
}
/// Checks if a database file is plain SQLite by reading its header.
fn is_plain_sqlite(path: &Path) -> bool {
if let Ok(mut file) = std::fs::File::open(path) {
use std::io::Read;
let mut header = [0u8; 16];
if file.read_exact(&mut header).is_ok() {
// SQLite databases start with "SQLite format 3\0"
return &header == b"SQLite format 3\0";
}
}
false
}
pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
std::fs::create_dir_all(data_dir)?;
let db_path = data_dir.join("tftsr.db");
let db_path = data_dir.join("trcaa.db");
// In dev/test mode use unencrypted DB; in production use encryption
let key =
std::env::var("TFTSR_DB_KEY").unwrap_or_else(|_| "dev-key-change-in-prod".to_string());
let key = get_db_key(data_dir)?;
let conn = if cfg!(debug_assertions) {
open_dev_db(&db_path)?
} else {
open_encrypted_db(&db_path, &key)?
// In release mode, try encrypted first
match open_encrypted_db(&db_path, &key) {
Ok(conn) => conn,
Err(e) => {
// Check if error is due to trying to decrypt a plain SQLite database
if db_path.exists() && is_plain_sqlite(&db_path) {
// Auto-migrate from plain to encrypted
migrate_plain_to_encrypted(&db_path, &key)?
} else {
// Different error - propagate it
return Err(e);
}
}
}
};
crate::db::migrations::run_migrations(&conn)?;
Ok(conn)
}
#[cfg(test)]
mod tests {
use super::*;
fn temp_dir(name: &str) -> std::path::PathBuf {
use std::time::SystemTime;
let timestamp = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_nanos();
let dir = std::env::temp_dir().join(format!("tftsr-test-{}-{}", name, timestamp));
// Clean up if it exists
let _ = std::fs::remove_dir_all(&dir);
std::fs::create_dir_all(&dir).unwrap();
dir
}
#[test]
fn test_get_db_key_uses_env_var_when_present() {
// Remove any existing env var first
std::env::remove_var("TFTSR_DB_KEY");
let dir = temp_dir("env-var");
std::env::set_var("TFTSR_DB_KEY", "test-db-key");
let key = get_db_key(&dir).unwrap();
assert_eq!(key, "test-db-key");
std::env::remove_var("TFTSR_DB_KEY");
}
#[test]
fn test_get_db_key_debug_fallback_for_empty_env() {
// Remove any existing env var first
std::env::remove_var("TFTSR_DB_KEY");
let dir = temp_dir("empty-env");
std::env::set_var("TFTSR_DB_KEY", " ");
let key = get_db_key(&dir).unwrap();
assert_eq!(key, "dev-key-change-in-prod");
std::env::remove_var("TFTSR_DB_KEY");
}
#[test]
fn test_is_plain_sqlite_detects_plain_database() {
let dir = temp_dir("plain-detect");
let db_path = dir.join("test.db");
// Create a plain SQLite database
let conn = Connection::open(&db_path).unwrap();
conn.execute("CREATE TABLE test (id INTEGER)", []).unwrap();
drop(conn);
assert!(is_plain_sqlite(&db_path));
}
#[test]
fn test_is_plain_sqlite_rejects_encrypted() {
let dir = temp_dir("encrypted-detect");
let db_path = dir.join("test.db");
// Create an encrypted database
let conn = Connection::open(&db_path).unwrap();
conn.execute_batch(
"PRAGMA key = 'test-key';\
PRAGMA cipher_page_size = 16384;",
)
.unwrap();
conn.execute("CREATE TABLE test (id INTEGER)", []).unwrap();
drop(conn);
assert!(!is_plain_sqlite(&db_path));
}
}
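
The body of open_encrypted_db is mostly elided above, but the constraint it flags (all cipher settings before the first database access) is the part that bites. A sketch of the required ordering, reusing the cipher parameters this diff sets in migrate_plain_to_encrypted; this is a sketch of the idea, not the function's exact body:

// Connection and anyhow are already imported in this file.
fn open_encrypted_sketch(path: &Path, key: &str) -> anyhow::Result<Connection> {
    let conn = Connection::open(path)?;
    // Key and cipher PRAGMAs first, before any statement reads a page.
    conn.execute_batch(&format!(
        "PRAGMA key = '{}';
         PRAGMA cipher_page_size = 16384;
         PRAGMA kdf_iter = 256000;
         PRAGMA cipher_hmac_algorithm = HMAC_SHA512;
         PRAGMA cipher_kdf_algorithm = PBKDF2_HMAC_SHA512;",
        key.replace('\'', "''")
    ))?;
    // First real access forces a page read, so a wrong key fails here,
    // surfacing the 'file is not a database' error at open time.
    conn.query_row("SELECT count(*) FROM sqlite_master", [], |row| row.get::<_, i64>(0))?;
    Ok(conn)
}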

View File

@ -150,6 +150,46 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
UNIQUE(service)
);",
),
(
"012_audit_hash_chain",
"ALTER TABLE audit_log ADD COLUMN prev_hash TEXT NOT NULL DEFAULT '';
ALTER TABLE audit_log ADD COLUMN entry_hash TEXT NOT NULL DEFAULT '';",
),
(
"013_create_persistent_webviews",
"CREATE TABLE IF NOT EXISTS persistent_webviews (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
webview_label TEXT NOT NULL,
base_url TEXT NOT NULL,
last_active TEXT NOT NULL DEFAULT (datetime('now')),
window_x INTEGER,
window_y INTEGER,
window_width INTEGER,
window_height INTEGER,
UNIQUE(service)
);",
),
(
"014_create_ai_providers",
"CREATE TABLE IF NOT EXISTS ai_providers (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
provider_type TEXT NOT NULL,
api_url TEXT NOT NULL,
encrypted_api_key TEXT NOT NULL,
model TEXT NOT NULL,
max_tokens INTEGER,
temperature REAL,
custom_endpoint_path TEXT,
custom_auth_header TEXT,
custom_auth_prefix TEXT,
api_format TEXT,
user_id TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);",
),
];
for (name, sql) in migrations {
@ -162,13 +202,13 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
// FTS5 virtual table creation can be skipped if FTS5 is not compiled in
if let Err(e) = conn.execute_batch(sql) {
if name.contains("fts") {
tracing::warn!("FTS5 not available, skipping: {}", e);
tracing::warn!("FTS5 not available, skipping: {e}");
} else {
return Err(e.into());
}
}
conn.execute("INSERT INTO _migrations (name) VALUES (?1)", [name])?;
tracing::info!("Applied migration: {}", name);
tracing::info!("Applied migration: {name}");
}
}
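
The already-applied check at the top of this loop is elided from the hunk; given the INSERT above, the guard is presumably a lookup against the same _migrations table. A hedged sketch of that shape (assumed, not copied from the file):

// Skip any migration whose name is already recorded in _migrations.
fn already_applied(conn: &Connection, name: &str) -> rusqlite::Result<bool> {
    let count: i64 = conn.query_row(
        "SELECT COUNT(*) FROM _migrations WHERE name = ?1",
        [name],
        |row| row.get(0),
    )?;
    Ok(count > 0)
}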

View File

@ -5,15 +5,30 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
let mut md = String::new();
md.push_str(&format!("# Blameless Post-Mortem: {}\n\n", issue.title));
md.push_str(&format!(
"# Blameless Post-Mortem: {title}\n\n",
title = issue.title
));
// Header metadata
md.push_str("## Metadata\n\n");
md.push_str(&format!("- **Date:** {}\n", issue.created_at));
md.push_str(&format!("- **Severity:** {}\n", issue.severity));
md.push_str(&format!("- **Category:** {}\n", issue.category));
md.push_str(&format!("- **Status:** {}\n", issue.status));
md.push_str(&format!("- **Last Updated:** {}\n", issue.updated_at));
md.push_str(&format!(
"- **Date:** {created_at}\n",
created_at = issue.created_at
));
md.push_str(&format!(
"- **Severity:** {severity}\n",
severity = issue.severity
));
md.push_str(&format!(
"- **Category:** {category}\n",
category = issue.category
));
md.push_str(&format!("- **Status:** {status}\n", status = issue.status));
md.push_str(&format!(
"- **Last Updated:** {updated_at}\n",
updated_at = issue.updated_at
));
md.push_str(&format!(
"- **Assigned To:** {}\n",
if issue.assigned_to.is_empty() {
@ -45,7 +60,10 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
md.push_str("## Timeline\n\n");
md.push_str("| Time (UTC) | Event |\n");
md.push_str("|------------|-------|\n");
md.push_str(&format!("| {} | Issue created |\n", issue.created_at));
md.push_str(&format!(
"| {created_at} | Issue created |\n",
created_at = issue.created_at
));
if let Some(ref resolved) = issue.resolved_at {
md.push_str(&format!("| {resolved} | Issue resolved |\n"));
}
@ -77,7 +95,10 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
if let Some(last) = detail.resolution_steps.last() {
if !last.answer.is_empty() {
md.push_str(&format!("**Root Cause:** {}\n\n", last.answer));
md.push_str(&format!(
"**Root Cause:** {answer}\n\n",
answer = last.answer
));
}
}
}
@ -127,7 +148,7 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
md.push_str("---\n\n");
md.push_str(&format!(
"_Generated by TFTSR IT Triage on {}_\n",
"_Generated by Troubleshooting and RCA Assistant on {}_\n",
chrono::Utc::now().format("%Y-%m-%d %H:%M UTC")
));

View File

@ -5,16 +5,31 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
let mut md = String::new();
md.push_str(&format!("# Root Cause Analysis: {}\n\n", issue.title));
md.push_str(&format!(
"# Root Cause Analysis: {title}\n\n",
title = issue.title
));
md.push_str("## Issue Summary\n\n");
md.push_str("| Field | Value |\n");
md.push_str("|-------|-------|\n");
md.push_str(&format!("| **Issue ID** | {} |\n", issue.id));
md.push_str(&format!("| **Category** | {} |\n", issue.category));
md.push_str(&format!("| **Status** | {} |\n", issue.status));
md.push_str(&format!("| **Severity** | {} |\n", issue.severity));
md.push_str(&format!("| **Source** | {} |\n", issue.source));
md.push_str(&format!("| **Issue ID** | {id} |\n", id = issue.id));
md.push_str(&format!(
"| **Category** | {category} |\n",
category = issue.category
));
md.push_str(&format!(
"| **Status** | {status} |\n",
status = issue.status
));
md.push_str(&format!(
"| **Severity** | {severity} |\n",
severity = issue.severity
));
md.push_str(&format!(
"| **Source** | {source} |\n",
source = issue.source
));
md.push_str(&format!(
"| **Assigned To** | {} |\n",
if issue.assigned_to.is_empty() {
@ -23,8 +38,14 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
&issue.assigned_to
}
));
md.push_str(&format!("| **Created** | {} |\n", issue.created_at));
md.push_str(&format!("| **Last Updated** | {} |\n", issue.updated_at));
md.push_str(&format!(
"| **Created** | {created_at} |\n",
created_at = issue.created_at
));
md.push_str(&format!(
"| **Last Updated** | {updated_at} |\n",
updated_at = issue.updated_at
));
if let Some(ref resolved) = issue.resolved_at {
md.push_str(&format!("| **Resolved** | {resolved} |\n"));
}
@ -47,12 +68,15 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
step.step_order, step.why_question
));
if !step.answer.is_empty() {
md.push_str(&format!("**Answer:** {}\n\n", step.answer));
md.push_str(&format!("**Answer:** {answer}\n\n", answer = step.answer));
} else {
md.push_str("_Awaiting answer._\n\n");
}
if !step.evidence.is_empty() {
md.push_str(&format!("**Evidence:** {}\n\n", step.evidence));
md.push_str(&format!(
"**Evidence:** {evidence}\n\n",
evidence = step.evidence
));
}
}
}
@ -109,7 +133,7 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
md.push_str("---\n\n");
md.push_str(&format!(
"_Generated by TFTSR IT Triage on {}_\n",
"_Generated by Troubleshooting and RCA Assistant on {}_\n",
chrono::Utc::now().format("%Y-%m-%d %H:%M UTC")
));

View File

@ -1,5 +1,6 @@
use rusqlite::OptionalExtension;
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PkceChallenge {
@ -23,19 +24,11 @@ pub struct PatCredential {
/// Generate a PKCE code verifier and challenge for OAuth flows.
pub fn generate_pkce() -> PkceChallenge {
use sha2::{Digest, Sha256};
use rand::{thread_rng, RngCore};
// Generate a random 32-byte verifier
let verifier_bytes: Vec<u8> = (0..32)
.map(|_| {
let r: u8 = (std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.subsec_nanos()
% 256) as u8;
r
})
.collect();
let mut verifier_bytes = [0u8; 32];
thread_rng().fill_bytes(&mut verifier_bytes);
let code_verifier = base64_url_encode(&verifier_bytes);
let challenge_hash = Sha256::digest(code_verifier.as_bytes());
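
The RFC 7636 S256 relationship this establishes, code_challenge = BASE64URL(SHA-256(code_verifier)), can be pinned down in a test. A hedged sketch in the style of the tests below; it assumes PkceChallenge exposes a code_challenge field alongside code_verifier (the field list is elided from this diff):

#[test]
fn test_pkce_challenge_is_s256_of_verifier() {
    let pkce = generate_pkce();
    // as_slice() converts the digest output for base64_url_encode's &[u8] parameter.
    let expected = base64_url_encode(Sha256::digest(pkce.code_verifier.as_bytes()).as_slice());
    assert_eq!(pkce.code_challenge, expected); // code_challenge field name assumed
}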
@ -88,7 +81,7 @@ pub async fn exchange_code(
.form(&params)
.send()
.await
.map_err(|e| format!("Failed to send token exchange request: {}", e))?;
.map_err(|e| format!("Failed to send token exchange request: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -101,7 +94,7 @@ pub async fn exchange_code(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse token response: {}", e))?;
.map_err(|e| format!("Failed to parse token response: {e}"))?;
let access_token = body["access_token"]
.as_str()
@ -162,7 +155,6 @@ pub fn get_pat(conn: &rusqlite::Connection, service: &str) -> Result<Option<Stri
}
fn hash_token(token: &str) -> String {
use sha2::{Digest, Sha256};
format!("{:x}", Sha256::digest(token.as_bytes()))
}
@ -173,10 +165,82 @@ fn base64_url_encode(data: &[u8]) -> String {
}
fn urlencoding_encode(s: &str) -> String {
s.replace(' ', "%20")
.replace('&', "%26")
.replace('=', "%3D")
.replace('+', "%2B")
urlencoding::encode(s).into_owned()
}
fn get_encryption_key_material() -> Result<String, String> {
if let Ok(key) = std::env::var("TFTSR_ENCRYPTION_KEY") {
if !key.trim().is_empty() {
return Ok(key);
}
}
if cfg!(debug_assertions) {
return Ok("dev-key-change-me-in-production-32b".to_string());
}
// Release: load or auto-generate a per-installation encryption key
// stored in the app data directory, similar to the database key.
if let Some(app_data_dir) = crate::state::get_app_data_dir() {
let key_path = app_data_dir.join(".enckey");
// Try to load existing key
if key_path.exists() {
if let Ok(key) = std::fs::read_to_string(&key_path) {
let key = key.trim().to_string();
if !key.is_empty() {
return Ok(key);
}
}
}
// Generate and store new key
use rand::RngCore;
let mut bytes = [0u8; 32];
rand::rngs::OsRng.fill_bytes(&mut bytes);
let key = hex::encode(bytes);
// Ensure directory exists
if let Err(e) = std::fs::create_dir_all(&app_data_dir) {
tracing::warn!("Failed to create app data directory: {e}");
return Err(format!("Failed to create app data directory: {e}"));
}
// Write key with restricted permissions
#[cfg(unix)]
{
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
let mut f = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o600)
.open(&key_path)
.map_err(|e| format!("Failed to write encryption key: {e}"))?;
f.write_all(key.as_bytes())
.map_err(|e| format!("Failed to write encryption key: {e}"))?;
}
#[cfg(not(unix))]
{
std::fs::write(&key_path, &key)
.map_err(|e| format!("Failed to write encryption key: {e}"))?;
}
tracing::info!("Generated new encryption key at {:?}", key_path);
return Ok(key);
}
Err("Failed to determine app data directory for encryption key storage".to_string())
}
fn derive_aes_key() -> Result<[u8; 32], String> {
let key_material = get_encryption_key_material()?;
let digest = Sha256::digest(key_material.as_bytes());
let mut key_bytes = [0u8; 32];
key_bytes.copy_from_slice(&digest);
Ok(key_bytes)
}
/// Encrypt a token using AES-256-GCM.
@ -189,14 +253,7 @@ pub fn encrypt_token(token: &str) -> Result<String, String> {
};
use rand::{thread_rng, RngCore};
// Get encryption key from env or use default (WARNING: insecure for production)
let key_material = std::env::var("TFTSR_ENCRYPTION_KEY")
.unwrap_or_else(|_| "dev-key-change-me-in-production-32b".to_string());
let mut key_bytes = [0u8; 32];
let src = key_material.as_bytes();
let len = std::cmp::min(src.len(), 32);
key_bytes[..len].copy_from_slice(&src[..len]);
let key_bytes = derive_aes_key()?;
let cipher = Aes256Gcm::new(&key_bytes.into());
@ -208,7 +265,7 @@ pub fn encrypt_token(token: &str) -> Result<String, String> {
// Encrypt
let ciphertext = cipher
.encrypt(nonce, token.as_bytes())
.map_err(|e| format!("Encryption failed: {}", e))?;
.map_err(|e| format!("Encryption failed: {e}"))?;
// Prepend nonce to ciphertext
let mut result = nonce_bytes.to_vec();
@ -232,7 +289,7 @@ pub fn decrypt_token(encrypted: &str) -> Result<String, String> {
use base64::Engine;
let data = STANDARD
.decode(encrypted)
.map_err(|e| format!("Base64 decode failed: {}", e))?;
.map_err(|e| format!("Base64 decode failed: {e}"))?;
if data.len() < 12 {
return Err("Invalid encrypted data: too short".to_string());
@ -242,23 +299,16 @@ pub fn decrypt_token(encrypted: &str) -> Result<String, String> {
let nonce = Nonce::from_slice(&data[..12]);
let ciphertext = &data[12..];
// Get encryption key
let key_material = std::env::var("TFTSR_ENCRYPTION_KEY")
.unwrap_or_else(|_| "dev-key-change-me-in-production-32b".to_string());
let mut key_bytes = [0u8; 32];
let src = key_material.as_bytes();
let len = std::cmp::min(src.len(), 32);
key_bytes[..len].copy_from_slice(&src[..len]);
let key_bytes = derive_aes_key()?;
let cipher = Aes256Gcm::new(&key_bytes.into());
// Decrypt
let plaintext = cipher
.decrypt(nonce, ciphertext)
.map_err(|e| format!("Decryption failed: {}", e))?;
.map_err(|e| format!("Decryption failed: {e}"))?;
String::from_utf8(plaintext).map_err(|e| format!("Invalid UTF-8: {}", e))
String::from_utf8(plaintext).map_err(|e| format!("Invalid UTF-8: {e}"))
}
#[cfg(test)]
@ -365,7 +415,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -397,7 +447,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -421,7 +471,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -563,4 +613,20 @@ mod tests {
let retrieved = get_pat(&conn, "servicenow").unwrap();
assert_eq!(retrieved, Some("token-v2".to_string()));
}
#[test]
fn test_generate_pkce_is_not_deterministic() {
let a = generate_pkce();
let b = generate_pkce();
assert_ne!(a.code_verifier, b.code_verifier);
}
#[test]
fn test_derive_aes_key_is_stable_for_same_input() {
std::env::set_var("TFTSR_ENCRYPTION_KEY", "stable-test-key");
let k1 = derive_aes_key().unwrap();
let k2 = derive_aes_key().unwrap();
assert_eq!(k1, k2);
std::env::remove_var("TFTSR_ENCRYPTION_KEY");
}
}
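
A round-trip check ties the pieces above together: derive_aes_key feeds both encrypt_token and decrypt_token, so any plaintext must survive the pair unchanged. A minimal sketch in the style of the existing tests (the env-var manipulation, like theirs, assumes tests do not race on the same variable):

#[test]
fn test_encrypt_then_decrypt_round_trips() {
    std::env::set_var("TFTSR_ENCRYPTION_KEY", "round-trip-test-key");
    let plaintext = "pat-token-example";
    let encrypted = encrypt_token(plaintext).unwrap();
    // Ciphertext is base64(nonce || ct), never the raw token.
    assert_ne!(encrypted, plaintext);
    assert_eq!(decrypt_token(&encrypted).unwrap(), plaintext);
    std::env::remove_var("TFTSR_ENCRYPTION_KEY");
}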

View File

@ -18,6 +18,10 @@ pub struct WorkItem {
pub description: String,
}
fn escape_wiql_literal(value: &str) -> String {
value.replace('\'', "''")
}
/// Test connection to Azure DevOps by querying project info
pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionResult, String> {
let client = reqwest::Client::new();
@ -32,7 +36,7 @@ pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionRes
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -40,9 +44,10 @@ pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionRes
message: "Successfully connected to Azure DevOps".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -60,9 +65,9 @@ pub async fn search_work_items(
);
// Build WIQL query
let escaped_query = escape_wiql_literal(query);
let wiql = format!(
"SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State] FROM WorkItems WHERE [System.Title] CONTAINS '{}' ORDER BY [System.CreatedDate] DESC",
query
"SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State] FROM WorkItems WHERE [System.Title] CONTAINS '{escaped_query}' ORDER BY [System.CreatedDate] DESC"
);
let body = serde_json::json!({ "query": wiql });
@ -74,7 +79,7 @@ pub async fn search_work_items(
.json(&body)
.send()
.await
.map_err(|e| format!("WIQL query failed: {}", e))?;
.map_err(|e| format!("WIQL query failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -87,7 +92,7 @@ pub async fn search_work_items(
let wiql_result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse WIQL response: {}", e))?;
.map_err(|e| format!("Failed to parse WIQL response: {e}"))?;
let work_item_refs = wiql_result["workItems"]
.as_array()
@ -119,7 +124,7 @@ pub async fn search_work_items(
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Failed to fetch work item details: {}", e))?;
.map_err(|e| format!("Failed to fetch work item details: {e}"))?;
if !detail_resp.status().is_success() {
return Err(format!(
@ -131,7 +136,7 @@ pub async fn search_work_items(
let details: serde_json::Value = detail_resp
.json()
.await
.map_err(|e| format!("Failed to parse work item details: {}", e))?;
.map_err(|e| format!("Failed to parse work item details: {e}"))?;
let work_items = details["value"]
.as_array()
@ -199,7 +204,7 @@ pub async fn create_work_item(
.json(&operations)
.send()
.await
.map_err(|e| format!("Failed to create work item: {}", e))?;
.map_err(|e| format!("Failed to create work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -212,7 +217,7 @@ pub async fn create_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let work_item_id = result["id"].as_i64().unwrap_or(0);
let work_item_url = format!(
@ -223,7 +228,7 @@ pub async fn create_work_item(
Ok(TicketResult {
id: work_item_id.to_string(),
ticket_number: format!("#{}", work_item_id),
ticket_number: format!("#{work_item_id}"),
url: work_item_url,
})
}
@ -246,7 +251,7 @@ pub async fn get_work_item(
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Failed to get work item: {}", e))?;
.map_err(|e| format!("Failed to get work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -259,7 +264,7 @@ pub async fn get_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
Ok(WorkItem {
id: result["id"]
@ -305,7 +310,7 @@ pub async fn update_work_item(
.json(&updates)
.send()
.await
.map_err(|e| format!("Failed to update work item: {}", e))?;
.map_err(|e| format!("Failed to update work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -318,7 +323,7 @@ pub async fn update_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let updated_work_item_id = result["id"].as_i64().unwrap_or(work_item_id);
let work_item_url = format!(
@ -329,7 +334,7 @@ pub async fn update_work_item(
Ok(TicketResult {
id: updated_work_item_id.to_string(),
ticket_number: format!("#{}", updated_work_item_id),
ticket_number: format!("#{updated_work_item_id}"),
url: work_item_url,
})
}
@ -338,15 +343,22 @@ pub async fn update_work_item(
mod tests {
use super::*;
#[test]
fn test_escape_wiql_literal_escapes_single_quotes() {
let escaped = escape_wiql_literal("can't deploy");
assert_eq!(escaped, "can''t deploy");
}
#[tokio::test]
async fn test_connection_success() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_apis/projects/TestProject")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"name":"TestProject","id":"abc123"}"#)
.create_async()
@ -372,9 +384,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_apis/projects/TestProject")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(401)
.create_async()
.await;
@ -400,9 +413,10 @@ mod tests {
let wiql_mock = server
.mock("POST", "/TestProject/_apis/wit/wiql")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"workItems":[{"id":123}]}"#)
.create_async()
@ -456,9 +470,10 @@ mod tests {
.mock("POST", "/TestProject/_apis/wit/workitems/$Bug")
.match_header("authorization", "Bearer test_token")
.match_header("content-type", "application/json-patch+json")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"id":456}"#)
.create_async()
@ -486,9 +501,10 @@ mod tests {
let mock = server
.mock("GET", "/TestProject/_apis/wit/workitems/123")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(
r#"{
@ -526,9 +542,10 @@ mod tests {
.mock("PATCH", "/TestProject/_apis/wit/workitems/123")
.match_header("authorization", "Bearer test_token")
.match_header("content-type", "application/json-patch+json")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"id":123}"#)
.create_async()

View File

@ -0,0 +1,265 @@
use super::confluence_search::SearchResult;
/// Search Azure DevOps Wiki for content matching the query
pub async fn search_wiki(
org_url: &str,
project: &str,
query: &str,
cookies: &[crate::integrations::webview_auth::Cookie],
) -> Result<Vec<SearchResult>, String> {
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use Azure DevOps Search API
let search_url = format!(
"{}/_apis/search/wikisearchresults?api-version=7.0",
org_url.trim_end_matches('/')
);
let search_body = serde_json::json!({
"searchText": query,
"$top": 5,
"filters": {
"ProjectFilters": [project]
}
});
tracing::info!("Searching Azure DevOps Wiki: {}", search_url);
let resp = client
.post(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&search_body)
.send()
.await
.map_err(|e| format!("Azure DevOps wiki search failed: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"Azure DevOps wiki search failed with status {status}: {text}"
));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ADO wiki search response: {e}"))?;
let mut results = Vec::new();
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(3) {
let title = item["fileName"].as_str().unwrap_or("Untitled").to_string();
let path = item["path"].as_str().unwrap_or("");
let url = format!(
"{}/_wiki/wikis/{}/{}",
org_url.trim_end_matches('/'),
project,
path
);
let excerpt = item["content"]
.as_str()
.unwrap_or("")
.chars()
.take(300)
.collect::<String>();
// Fetch full wiki page content
let content = if let Some(wiki_id) = item["wiki"]["id"].as_str() {
if let Some(page_path) = item["path"].as_str() {
fetch_wiki_page(org_url, wiki_id, page_path, &cookie_header)
.await
.ok()
} else {
None
}
} else {
None
};
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Azure DevOps".to_string(),
});
}
}
Ok(results)
}
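// Usage sketch for search_wiki (the org URL, project, and query below are
// illustrative, not from this repo; cookies come from
// webview_auth::authenticate_with_webview):
//
//     let hits = search_wiki("https://dev.azure.com/contoso", "Platform", "tls handshake", &cookies).await?;
//     for hit in &hits {
//         println!("{} -> {}", hit.title, hit.url);
//     }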
/// Fetch full wiki page content
async fn fetch_wiki_page(
org_url: &str,
wiki_id: &str,
page_path: &str,
cookie_header: &str,
) -> Result<String, String> {
let client = reqwest::Client::new();
let page_url = format!(
"{}/_apis/wiki/wikis/{}/pages?path={}&api-version=7.0&includeContent=true",
org_url.trim_end_matches('/'),
wiki_id,
urlencoding::encode(page_path)
);
let resp = client
.get(&page_url)
.header("Cookie", cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Failed to fetch wiki page: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
return Err(format!("Failed to fetch wiki page: {status}"));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse wiki page: {e}"))?;
let content = json["content"].as_str().unwrap_or("").to_string();
// Truncate to a reasonable length, cutting on a char boundary so
// multi-byte UTF-8 content cannot panic the slice
let truncated = match content.char_indices().nth(3000) {
Some((idx, _)) => format!("{}...", &content[..idx]),
None => content,
};
Ok(truncated)
}
/// Search Azure DevOps Work Items for related issues
pub async fn search_work_items(
org_url: &str,
project: &str,
query: &str,
cookies: &[crate::integrations::webview_auth::Cookie],
) -> Result<Vec<SearchResult>, String> {
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use WIQL (Work Item Query Language)
let wiql_url = format!(
"{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/')
);
// Escape single quotes (by doubling them) so user input cannot break out of the WIQL string literals
let escaped_project = project.replace('\'', "''");
let escaped_query = query.replace('\'', "''");
let wiql_query = format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.State] FROM WorkItems WHERE [System.TeamProject] = '{escaped_project}' AND ([System.Title] CONTAINS '{escaped_query}' OR [System.Description] CONTAINS '{escaped_query}') ORDER BY [System.ChangedDate] DESC"
);
let wiql_body = serde_json::json!({
"query": wiql_query
});
tracing::info!("Searching Azure DevOps work items");
let resp = client
.post(&wiql_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.header("Content-Type", "application/json")
.json(&wiql_body)
.send()
.await
.map_err(|e| format!("ADO work item search failed: {e}"))?;
if !resp.status().is_success() {
return Ok(Vec::new()); // Don't fail if work item search fails
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse work item response".to_string())?;
let mut results = Vec::new();
if let Some(work_items) = json["workItems"].as_array() {
// Fetch details for top 3 work items
for item in work_items.iter().take(3) {
if let Some(id) = item["id"].as_i64() {
if let Ok(work_item) = fetch_work_item_details(org_url, id, &cookie_header).await {
results.push(work_item);
}
}
}
}
Ok(results)
}
/// Fetch work item details
async fn fetch_work_item_details(
org_url: &str,
id: i64,
cookie_header: &str,
) -> Result<SearchResult, String> {
let client = reqwest::Client::new();
let item_url = format!(
"{}/_apis/wit/workitems/{}?api-version=7.0",
org_url.trim_end_matches('/'),
id
);
let resp = client
.get(&item_url)
.header("Cookie", cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Failed to fetch work item: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
return Err(format!("Failed to fetch work item: {status}"));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse work item: {e}"))?;
let fields = &json["fields"];
let title = format!(
"Work Item {}: {}",
id,
fields["System.Title"].as_str().unwrap_or("No title")
);
let url = json["_links"]["html"]["href"]
.as_str()
.unwrap_or("")
.to_string();
let description = fields["System.Description"]
.as_str()
.unwrap_or("")
.to_string();
let state = fields["System.State"].as_str().unwrap_or("Unknown");
let content = format!("State: {state}\n\nDescription: {description}");
let excerpt = content.chars().take(200).collect::<String>();
Ok(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "Azure DevOps".to_string(),
})
}

View File

@ -269,7 +269,7 @@ mod tests {
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
// Server should be running
let health_url = format!("http://127.0.0.1:{}/health", port);
let health_url = format!("http://127.0.0.1:{port}/health");
let health_before = reqwest::get(&health_url).await;
assert!(health_before.is_ok(), "Server should be running");

View File

@ -22,17 +22,24 @@ pub struct Page {
pub url: String,
}
fn escape_cql_literal(value: &str) -> String {
value.replace('\\', "\\\\").replace('"', "\\\"")
}
/// Test connection to Confluence by fetching current user info
pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResult, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/user/current", config.base_url.trim_end_matches('/'));
let url = format!(
"{}/rest/api/user/current",
config.base_url.trim_end_matches('/')
);
let resp = client
.get(&url)
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -40,9 +47,10 @@ pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResu
message: "Successfully connected to Confluence".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -50,7 +58,8 @@ pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResu
/// List all spaces accessible with the current token
pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/space", config.base_url.trim_end_matches('/'));
let base_url = config.base_url.trim_end_matches('/');
let url = format!("{base_url}/rest/api/space");
let resp = client
.get(&url)
@ -58,7 +67,7 @@ pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String
.query(&[("limit", "100")])
.send()
.await
.map_err(|e| format!("Failed to list spaces: {}", e))?;
.map_err(|e| format!("Failed to list spaces: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -71,7 +80,7 @@ pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let spaces = body["results"]
.as_array()
@ -100,9 +109,11 @@ pub async fn search_pages(
config.base_url.trim_end_matches('/')
);
let mut cql = format!("text ~ \"{}\"", query);
let escaped_query = escape_cql_literal(query);
let mut cql = format!("text ~ \"{escaped_query}\"");
if let Some(space) = space_key {
cql = format!("{} AND space = {}", cql, space);
let escaped_space = escape_cql_literal(space);
cql = format!("{cql} AND space = \"{escaped_space}\"");
}
let resp = client
@ -111,7 +122,7 @@ pub async fn search_pages(
.query(&[("cql", &cql), ("limit", &"50".to_string())])
.send()
.await
.map_err(|e| format!("Search failed: {}", e))?;
.map_err(|e| format!("Search failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -124,7 +135,7 @@ pub async fn search_pages(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let pages = body["results"]
.as_array()
@ -137,7 +148,7 @@ pub async fn search_pages(
id: page_id.to_string(),
title: p["title"].as_str()?.to_string(),
space_key: p["space"]["key"].as_str()?.to_string(),
url: format!("{}/pages/viewpage.action?pageId={}", base_url, page_id),
url: format!("{base_url}/pages/viewpage.action?pageId={page_id}"),
})
})
.collect();
@ -154,7 +165,8 @@ pub async fn publish_page(
parent_page_id: Option<&str>,
) -> Result<PublishResult, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/content", config.base_url.trim_end_matches('/'));
let base_url = config.base_url.trim_end_matches('/');
let url = format!("{base_url}/rest/api/content");
let mut body = serde_json::json!({
"type": "page",
@ -179,7 +191,7 @@ pub async fn publish_page(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to publish page: {}", e))?;
.map_err(|e| format!("Failed to publish page: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -192,7 +204,7 @@ pub async fn publish_page(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let page_id = result["id"].as_str().unwrap_or("");
let page_url = format!(
@ -242,7 +254,7 @@ pub async fn update_page(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to update page: {}", e))?;
.map_err(|e| format!("Failed to update page: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -255,7 +267,7 @@ pub async fn update_page(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let updated_page_id = result["id"].as_str().unwrap_or(page_id);
let page_url = format!(
@ -274,6 +286,12 @@ pub async fn update_page(
mod tests {
use super::*;
#[test]
fn test_escape_cql_literal_escapes_quotes_and_backslashes() {
let escaped = escape_cql_literal(r#"C:\logs\"prod""#);
assert_eq!(escaped, r#"C:\\logs\\\"prod\""#);
}
#[tokio::test]
async fn test_connection_success() {
let mut server = mockito::Server::new_async().await;
@ -327,9 +345,10 @@ mod tests {
let mock = server
.mock("GET", "/rest/api/space")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("limit".into(), "100".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"limit".into(),
"100".into(),
)]))
.with_status(200)
.with_body(
r#"{
@ -362,9 +381,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/rest/api/content/search")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("cql".into(), "text ~ \"kubernetes\"".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"cql".into(),
"text ~ \"kubernetes\"".into(),
)]))
.with_status(200)
.with_body(
r#"{

View File

@ -0,0 +1,188 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
pub title: String,
pub url: String,
pub excerpt: String,
pub content: Option<String>,
pub source: String, // e.g. "Confluence", "ServiceNow", "Azure DevOps"
}
/// Search Confluence for content matching the query
pub async fn search_confluence(
base_url: &str,
query: &str,
cookies: &[crate::integrations::webview_auth::Cookie],
) -> Result<Vec<SearchResult>, String> {
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Use Confluence CQL search; escape quotes and backslashes so the query
// cannot break out of the CQL string literal
let escaped_query = query.replace('\\', "\\\\").replace('"', "\\\"");
let search_url = format!(
"{}/rest/api/search?cql=text~\"{}\"&limit=5",
base_url.trim_end_matches('/'),
urlencoding::encode(&escaped_query)
);
tracing::info!("Searching Confluence: {}", search_url);
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Confluence search request failed: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"Confluence search failed with status {status}: {text}"
));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse Confluence search response: {e}"))?;
let mut results = Vec::new();
if let Some(results_array) = json["results"].as_array() {
for item in results_array.iter().take(3) {
// Take top 3 results
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
let id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
// Build URL
let url = if let (Some(id_str), Some(space)) = (id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id_str
)
} else {
base_url.to_string()
};
// Get excerpt from search result
let excerpt = item["excerpt"]
.as_str()
.unwrap_or("")
.to_string()
.replace("<span class=\"highlight\">", "")
.replace("</span>", "");
// Fetch full page content
let content = if let Some(content_id) = id {
fetch_page_content(base_url, content_id, &cookie_header)
.await
.ok()
} else {
None
};
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "Confluence".to_string(),
});
}
}
Ok(results)
}
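// Usage sketch (the base URL and query are illustrative; cookies come from
// the authenticated webview session):
//
//     let hits = search_confluence("https://confluence.example.com", "kerberos error", &cookies).await?;
//     // at most 3 results, each with an excerpt and optional full page content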
/// Fetch full content of a Confluence page
async fn fetch_page_content(
base_url: &str,
page_id: &str,
cookie_header: &str,
) -> Result<String, String> {
let client = reqwest::Client::new();
let content_url = format!(
"{}/rest/api/content/{}?expand=body.storage",
base_url.trim_end_matches('/'),
page_id
);
let resp = client
.get(&content_url)
.header("Cookie", cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("Failed to fetch page content: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
return Err(format!("Failed to fetch page: {status}"));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse page content: {e}"))?;
// Extract plain text from HTML storage format
let html = json["body"]["storage"]["value"]
.as_str()
.unwrap_or("")
.to_string();
// Basic HTML tag stripping (for better results, use a proper HTML parser)
let text = strip_html_tags(&html);
// Truncate to a reasonable length for AI context, cutting on a char
// boundary so multi-byte UTF-8 content cannot panic the slice
let truncated = match text.char_indices().nth(3000) {
Some((idx, _)) => format!("{}...", &text[..idx]),
None => text,
};
Ok(truncated)
}
/// Basic HTML tag stripping
fn strip_html_tags(html: &str) -> String {
let mut result = String::new();
let mut in_tag = false;
for ch in html.chars() {
match ch {
'<' => in_tag = true,
'>' => in_tag = false,
_ if !in_tag => result.push(ch),
_ => {}
}
}
// Clean up whitespace
result
.split_whitespace()
.collect::<Vec<_>>()
.join(" ")
.trim()
.to_string()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_strip_html_tags() {
let html = "<p>Hello <strong>world</strong>!</p>";
assert_eq!(strip_html_tags(html), "Hello world!");
let html2 = "<div><h1>Title</h1><p>Content</p></div>";
assert_eq!(strip_html_tags(html2), "TitleContent");
}
}

View File

@ -1,8 +1,13 @@
pub mod auth;
pub mod azuredevops;
pub mod azuredevops_search;
pub mod callback_server;
pub mod confluence;
pub mod confluence_search;
pub mod servicenow;
pub mod servicenow_search;
pub mod webview_auth;
pub mod webview_fetch;
use serde::{Deserialize, Serialize};
@ -24,3 +29,21 @@ pub struct TicketResult {
pub ticket_number: String,
pub url: String,
}
/// Authentication method for integration services
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "method")]
pub enum AuthMethod {
#[serde(rename = "oauth2")]
OAuth2 {
access_token: String,
expires_at: Option<i64>,
},
#[serde(rename = "cookies")]
Cookies { cookies: Vec<webview_auth::Cookie> },
#[serde(rename = "token")]
Token {
token: String,
token_type: String, // "Bearer", "Basic", etc.
},
}
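// With #[serde(tag = "method")] each variant serializes as a flat object
// carrying a "method" discriminator (field values below are illustrative):
//
//   { "method": "oauth2", "access_token": "...", "expires_at": 1760000000 }
//   { "method": "cookies", "cookies": [{ "name": "JSESSIONID", ... }] }
//   { "method": "token", "token": "...", "token_type": "Bearer" }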

View File

@ -0,0 +1,45 @@
//! Platform-specific native cookie extraction from webview
//! This can access HttpOnly cookies that JavaScript cannot
use super::webview_auth::Cookie;
#[cfg(target_os = "macos")]
pub async fn extract_cookies_native(
_window_label: &str,
_domain: &str,
) -> Result<Vec<Cookie>, String> {
// On macOS, we can use WKWebView's HTTPCookieStore via Objective-C bridge
// This requires cocoa/objc crates which we don't have yet
// For now, return an error indicating this needs implementation
tracing::warn!("Native cookie extraction not yet implemented for macOS");
Err("Native cookie extraction requires additional dependencies (cocoa, objc)".to_string())
}
#[cfg(target_os = "windows")]
pub async fn extract_cookies_native(
_window_label: &str,
_domain: &str,
) -> Result<Vec<Cookie>, String> {
// On Windows, we can use WebView2's cookie manager
// This requires windows crates
tracing::warn!("Native cookie extraction not yet implemented for Windows");
Err("Native cookie extraction requires additional dependencies (windows crate)".to_string())
}
#[cfg(target_os = "linux")]
pub async fn extract_cookies_native(
_window_label: &str,
_domain: &str,
) -> Result<Vec<Cookie>, String> {
// On Linux with WebKitGTK, we can use the cookie manager
tracing::warn!("Native cookie extraction not yet implemented for Linux");
Err("Native cookie extraction requires additional dependencies (webkit2gtk)".to_string())
}
#[cfg(not(any(target_os = "macos", target_os = "windows", target_os = "linux")))]
pub async fn extract_cookies_native(
_window_label: &str,
_domain: &str,
) -> Result<Vec<Cookie>, String> {
Err("Native cookie extraction not supported on this platform".to_string())
}

View File

@ -0,0 +1,50 @@
//! macOS-specific native cookie extraction using WKWebView's HTTPCookieStore
//! This can access HttpOnly cookies that JavaScript cannot
#[cfg(target_os = "macos")]
use super::webview_auth::Cookie;
#[cfg(target_os = "macos")]
pub async fn extract_cookies_native(
webview_label: &str,
domain: &str,
) -> Result<Vec<Cookie>, String> {
use cocoa::base::{id, nil};
use objc::runtime::Class;
use objc::{msg_send, sel, sel_impl};
tracing::info!("Attempting native cookie extraction for {webview_label} on domain {domain}");
unsafe {
// Get the WKWebsiteDataStore (where cookies are stored)
let wk_websitedata_store_class = Class::get("WKWebsiteDataStore").ok_or("WKWebsiteDataStore class not found")?;
let data_store: id = msg_send![wk_websitedata_store_class, defaultDataStore];
if data_store == nil {
return Err("Failed to get WKWebsiteDataStore".to_string());
}
// Get the HTTPCookieStore
let cookie_store: id = msg_send![data_store, httpCookieStore];
if cookie_store == nil {
return Err("Failed to get HTTPCookieStore".to_string());
}
// Unfortunately, WKHTTPCookieStore's getAllCookies method requires a completion handler
// which is complex to bridge from Rust. For now, we'll document this limitation
// and suggest using the Tauri cookie plugin when it's available.
tracing::warn!("Native cookie extraction requires async completion handler - not yet fully implemented");
Err("Native cookie extraction requires Tauri cookie plugin (coming in future Tauri version)".to_string())
}
}
#[cfg(not(target_os = "macos"))]
pub async fn extract_cookies_native(
_webview_label: &str,
_domain: &str,
) -> Result<Vec<super::webview_auth::Cookie>, String> {
Err("Native cookie extraction only supported on macOS".to_string())
}

View File

@ -34,7 +34,7 @@ pub async fn test_connection(config: &ServiceNowConfig) -> Result<ConnectionResu
.query(&[("sysparm_limit", "1")])
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -42,9 +42,10 @@ pub async fn test_connection(config: &ServiceNowConfig) -> Result<ConnectionResu
message: "Successfully connected to ServiceNow".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -60,15 +61,18 @@ pub async fn search_incidents(
config.instance_url.trim_end_matches('/')
);
let sysparm_query = format!("short_descriptionLIKE{}", query);
let sysparm_query = format!("short_descriptionLIKE{query}");
let resp = client
.get(&url)
.basic_auth(&config.username, Some(&config.password))
.query(&[("sysparm_query", &sysparm_query), ("sysparm_limit", &"10".to_string())])
.query(&[
("sysparm_query", &sysparm_query),
("sysparm_limit", &"10".to_string()),
])
.send()
.await
.map_err(|e| format!("Search failed: {}", e))?;
.map_err(|e| format!("Search failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -81,7 +85,7 @@ pub async fn search_incidents(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incidents = body["result"]
.as_array()
@ -131,7 +135,7 @@ pub async fn create_incident(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to create incident: {}", e))?;
.map_err(|e| format!("Failed to create incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -144,7 +148,7 @@ pub async fn create_incident(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_number = result["result"]["number"].as_str().unwrap_or("");
let sys_id = result["result"]["sys_id"].as_str().unwrap_or("");
@ -195,13 +199,13 @@ pub async fn get_incident(
.basic_auth(&config.username, Some(&config.password));
if use_query {
request = request.query(&[("sysparm_query", &format!("number={}", incident_id))]);
request = request.query(&[("sysparm_query", &format!("number={incident_id}"))]);
}
let resp = request
.send()
.await
.map_err(|e| format!("Failed to get incident: {}", e))?;
.map_err(|e| format!("Failed to get incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -214,7 +218,7 @@ pub async fn get_incident(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_data = if use_query {
// Query response has "result" array
@ -240,7 +244,10 @@ pub async fn get_incident(
.as_str()
.ok_or_else(|| "Missing short_description".to_string())?
.to_string(),
description: incident_data["description"].as_str().unwrap_or("").to_string(),
description: incident_data["description"]
.as_str()
.unwrap_or("")
.to_string(),
urgency: incident_data["urgency"].as_str().unwrap_or("3").to_string(),
impact: incident_data["impact"].as_str().unwrap_or("3").to_string(),
state: incident_data["state"].as_str().unwrap_or("1").to_string(),
@ -267,7 +274,7 @@ pub async fn update_incident(
.json(&updates)
.send()
.await
.map_err(|e| format!("Failed to update incident: {}", e))?;
.map_err(|e| format!("Failed to update incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -280,7 +287,7 @@ pub async fn update_incident(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_number = result["result"]["number"].as_str().unwrap_or("");
let updated_sys_id = result["result"]["sys_id"].as_str().unwrap_or(sys_id);
@ -307,9 +314,10 @@ mod tests {
let mock = server
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "1".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_limit".into(),
"1".into(),
)]))
.with_status(200)
.with_body(r#"{"result":[]}"#)
.create_async()
@ -335,9 +343,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/now/table/incident")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "1".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_limit".into(),
"1".into(),
)]))
.with_status(401)
.create_async()
.await;
@ -363,7 +372,10 @@ mod tests {
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_query".into(), "short_descriptionLIKElogin".into()),
mockito::Matcher::UrlEncoded(
"sysparm_query".into(),
"short_descriptionLIKElogin".into(),
),
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "10".into()),
]))
.with_status(200)
@ -480,9 +492,10 @@ mod tests {
let mock = server
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_query".into(), "number=INC0010001".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_query".into(),
"number=INC0010001".into(),
)]))
.with_status(200)
.with_body(
r#"{

View File

@ -0,0 +1,163 @@
use super::confluence_search::SearchResult;
/// Search ServiceNow Knowledge Base for content matching the query
pub async fn search_servicenow(
instance_url: &str,
query: &str,
cookies: &[crate::integrations::webview_auth::Cookie],
) -> Result<Vec<SearchResult>, String> {
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Search Knowledge Base articles
let search_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=5",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
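// For example, with query "vpn" the encoded query string becomes:
//   sysparm_query=textLIKEvpn^ORshort_descriptionLIKEvpn&sysparm_limit=5
// (^OR is ServiceNow's encoded-query OR operator)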
tracing::info!("Searching ServiceNow: {}", search_url);
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow search request failed: {e}"))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!(
"ServiceNow search failed with status {status}: {text}"
));
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse ServiceNow search response: {e}"))?;
let mut results = Vec::new();
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter().take(3) {
// Take top 3 results
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let sys_id = item["sys_id"].as_str().unwrap_or("").to_string();
let url = format!(
"{}/kb_view.do?sysparm_article={}",
instance_url.trim_end_matches('/'),
sys_id
);
let excerpt = item["text"]
.as_str()
.unwrap_or("")
.chars()
.take(300)
.collect::<String>();
// Get full article content
let content = item["text"].as_str().map(|text| {
if text.len() > 3000 {
format!("{}...", &text[..3000])
} else {
text.to_string()
}
});
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
}
}
Ok(results)
}
/// Search ServiceNow Incidents for related issues
pub async fn search_incidents(
instance_url: &str,
query: &str,
cookies: &[crate::integrations::webview_auth::Cookie],
) -> Result<Vec<SearchResult>, String> {
let cookie_header = crate::integrations::webview_auth::cookies_to_header(cookies);
let client = reqwest::Client::new();
// Search incidents
let search_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
tracing::info!("Searching ServiceNow incidents: {}", search_url);
let resp = client
.get(&search_url)
.header("Cookie", &cookie_header)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("ServiceNow incident search failed: {e}"))?;
if !resp.status().is_success() {
return Ok(Vec::new()); // Don't fail if incident search fails
}
let json: serde_json::Value = resp
.json()
.await
.map_err(|_| "Failed to parse incident response".to_string())?;
let mut results = Vec::new();
if let Some(result_array) = json["result"].as_array() {
for item in result_array.iter() {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={}",
instance_url.trim_end_matches('/'),
sys_id
);
let description = item["description"].as_str().unwrap_or("").to_string();
let resolution = item["close_notes"].as_str().unwrap_or("").to_string();
let content = format!("Description: {description}\nResolution: {resolution}");
let excerpt = content.chars().take(200).collect::<String>();
results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
}
}
Ok(results)
}

View File

@ -0,0 +1,336 @@
use serde::{Deserialize, Serialize};
use tauri::{AppHandle, WebviewUrl, WebviewWindow, WebviewWindowBuilder};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExtractedCredentials {
pub cookies: Vec<Cookie>,
pub service: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Cookie {
pub name: String,
pub value: String,
pub domain: String,
pub path: String,
pub secure: bool,
pub http_only: bool,
pub expires: Option<i64>,
}
/// Open an embedded browser window for the user to log in and extract cookies.
/// This approach works when the user is off-VPN (the web UI is reachable) but the APIs require VPN.
pub async fn authenticate_with_webview(
app_handle: AppHandle,
service: &str,
base_url: &str,
project_name: Option<&str>,
) -> Result<ExtractedCredentials, String> {
let trimmed_base_url = base_url.trim_end_matches('/');
tracing::info!(
"authenticate_with_webview called: service={}, base_url={}, project_name={:?}",
service,
base_url,
project_name
);
let login_url = match service {
"confluence" => format!("{trimmed_base_url}/login.action"),
"azuredevops" => {
// Azure DevOps - go directly to project if provided, otherwise org home
if let Some(project) = project_name {
let url = format!("{trimmed_base_url}/{project}");
tracing::info!("Azure DevOps URL with project: {}", url);
url
} else {
tracing::info!("Azure DevOps URL without project: {}", trimmed_base_url);
trimmed_base_url.to_string()
}
}
"servicenow" => format!("{trimmed_base_url}/login.do"),
_ => return Err(format!("Unknown service: {service}")),
};
tracing::info!("Final login_url for {} = {}", service, login_url);
// Create persistent browser window (stays open for browsing and fresh cookie extraction)
let webview_label = format!("{service}-auth");
tracing::info!("Creating webview window with label: {}", webview_label);
let parsed_url = login_url.parse().map_err(|e| {
let err_msg = format!("Failed to parse URL '{login_url}': {e}");
tracing::error!("{err_msg}");
err_msg
})?;
tracing::info!("Parsed URL successfully: {:?}", parsed_url);
let webview = WebviewWindowBuilder::new(
&app_handle,
&webview_label,
WebviewUrl::External(parsed_url),
)
.title(format!(
"{service} Browser (Troubleshooting and RCA Assistant)"
))
.inner_size(1000.0, 800.0)
.min_inner_size(800.0, 600.0)
.resizable(true)
.center()
.focused(true)
.visible(true) // Show immediately - let user see loading
.user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
.zoom_hotkeys_enabled(true)
.devtools(true)
.initialization_script("console.log('Webview initialized');")
.build()
.map_err(|e| format!("Failed to create webview: {e}"))?;
tracing::info!("Webview window created successfully, setting focus");
// Ensure window is focused
webview
.set_focus()
.map_err(|e| tracing::warn!("Failed to set focus: {}", e))
.ok();
// Wait for user to complete login
// User will click "Complete Login" button in the UI after successful authentication
// This function just opens the window - extraction happens in extract_cookies_via_ipc
Ok(ExtractedCredentials {
cookies: vec![],
service: service.to_string(),
})
}
/// Extract cookies from a webview using localStorage as intermediary.
/// This works for external URLs where window.__TAURI__ is not available.
pub async fn extract_cookies_via_ipc<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
_app_handle: &AppHandle<R>,
) -> Result<Vec<Cookie>, String> {
// Step 1: Inject JavaScript to extract cookies and store in a global variable
// We can't use __TAURI__ for external URLs, so we use a polling approach
let cookie_extraction_script = r#"
(function() {
try {
const cookieString = document.cookie;
const cookies = [];
if (cookieString && cookieString.trim() !== '') {
const cookieList = cookieString.split(';').map(c => c.trim()).filter(c => c.length > 0);
for (const cookie of cookieList) {
const equalIndex = cookie.indexOf('=');
if (equalIndex === -1) continue;
const name = cookie.substring(0, equalIndex).trim();
const value = cookie.substring(equalIndex + 1).trim();
cookies.push({
name: name,
value: value,
domain: window.location.hostname,
path: '/',
secure: window.location.protocol === 'https:',
http_only: false,
expires: null
});
}
}
// Store in a global variable that Rust can read
window.__TFTSR_COOKIES__ = cookies;
console.log('[TFTSR] Extracted', cookies.length, 'cookies');
return cookies.length;
} catch (e) {
console.error('[TFTSR] Cookie extraction failed:', e);
window.__TFTSR_COOKIES__ = [];
window.__TFTSR_ERROR__ = e.message;
return -1;
}
})();
"#;
// Inject the extraction script
webview_window
.eval(cookie_extraction_script)
.map_err(|e| format!("Failed to inject cookie extraction script: {e}"))?;
tracing::info!("Cookie extraction script injected, waiting for cookies...");
// Give JavaScript a moment to execute
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
// Step 2: Poll for the extracted cookies using document.title as communication channel
let mut attempts = 0;
let max_attempts = 20; // ~12 seconds total (each attempt sleeps 500ms + 100ms)
loop {
attempts += 1;
// Store result in localStorage, then copy to document.title for Rust to read
let check_and_signal_script = r#"
try {
if (typeof window.__TFTSR_ERROR__ !== 'undefined') {
window.localStorage.setItem('tftsr_result', JSON.stringify({ error: window.__TFTSR_ERROR__ }));
} else if (typeof window.__TFTSR_COOKIES__ !== 'undefined' && window.__TFTSR_COOKIES__.length > 0) {
window.localStorage.setItem('tftsr_result', JSON.stringify({ cookies: window.__TFTSR_COOKIES__ }));
} else if (typeof window.__TFTSR_COOKIES__ !== 'undefined') {
window.localStorage.setItem('tftsr_result', JSON.stringify({ cookies: [] }));
}
} catch (e) {
window.localStorage.setItem('tftsr_result', JSON.stringify({ error: e.message }));
}
"#;
webview_window.eval(check_and_signal_script).ok();
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
// We can't get return values from eval(), so let's use a different approach:
// Execute script that sets document.title temporarily
let read_via_title = r#"
(function() {
const result = window.localStorage.getItem('tftsr_result');
if (result) {
window.localStorage.removeItem('tftsr_result');
// Store in title temporarily for Rust to read
window.__TFTSR_ORIGINAL_TITLE__ = document.title;
document.title = 'TFTSR_RESULT:' + result;
}
})();
"#;
webview_window.eval(read_via_title).ok();
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
// Read the title
if let Ok(title) = webview_window.title() {
if let Some(json_str) = title.strip_prefix("TFTSR_RESULT:") {
// Restore original title
let restore_title = r#"
if (typeof window.__TFTSR_ORIGINAL_TITLE__ !== 'undefined') {
document.title = window.__TFTSR_ORIGINAL_TITLE__;
}
"#;
webview_window.eval(restore_title).ok();
// Parse the JSON
match serde_json::from_str::<serde_json::Value>(json_str) {
Ok(result) => {
if let Some(error) = result.get("error").and_then(|e| e.as_str()) {
return Err(format!("Cookie extraction error: {error}"));
}
if let Some(cookies_value) = result.get("cookies") {
match serde_json::from_value::<Vec<Cookie>>(cookies_value.clone()) {
Ok(cookies) => {
tracing::info!(
"Successfully extracted {} cookies",
cookies.len()
);
return Ok(cookies);
}
Err(e) => {
return Err(format!("Failed to parse cookies: {e}"));
}
}
}
}
Err(e) => {
tracing::warn!("Failed to parse result JSON: {e}");
}
}
}
}
if attempts >= max_attempts {
return Err(
"Timeout extracting cookies. This may be because:\n\
1. Confluence uses HttpOnly cookies that JavaScript cannot access\n\
2. You're not logged in yet\n\
3. The page hasn't finished loading\n\n\
Recommendation: Use 'Manual Token' authentication with a Confluence Personal Access Token instead."
.to_string(),
);
}
}
}
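// The extraction handshake above, in short:
//   1. eval() writes document.cookie into window.__TFTSR_COOKIES__
//      (external pages cannot reach window.__TAURI__, so there is no direct IPC)
//   2. a second eval() copies the result into localStorage under 'tftsr_result'
//   3. a third eval() moves it into document.title as "TFTSR_RESULT:<json>",
//      which Rust can read synchronously via webview_window.title(), after
//      which the original title is restored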
/// Build cookie header string for HTTP requests
pub fn cookies_to_header(cookies: &[Cookie]) -> String {
cookies
.iter()
.map(|c| {
format!(
"{name}={value}",
name = c.name.as_str(),
value = c.value.as_str()
)
})
.collect::<Vec<_>>()
.join("; ")
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_cookies_to_header() {
let cookies = vec![
Cookie {
name: "JSESSIONID".to_string(),
value: "abc123".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: true,
expires: None,
},
Cookie {
name: "auth_token".to_string(),
value: "xyz789".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: false,
expires: None,
},
];
let header = cookies_to_header(&cookies);
assert_eq!(header, "JSESSIONID=abc123; auth_token=xyz789");
}
#[test]
fn test_empty_cookies_to_header() {
let cookies = vec![];
let header = cookies_to_header(&cookies);
assert_eq!(header, "");
}
#[test]
fn test_cookie_json_serialization() {
let cookies = vec![Cookie {
name: "test".to_string(),
value: "value123".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: false,
expires: None,
}];
let json = serde_json::to_string(&cookies).unwrap();
assert!(json.contains("\"name\":\"test\""));
assert!(json.contains("\"value\":\"value123\""));
let deserialized: Vec<Cookie> = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.len(), 1);
assert_eq!(deserialized[0].name, "test");
}
}

View File

@ -0,0 +1,687 @@
//! Webview-based HTTP fetching that automatically includes HttpOnly cookies.
//! Makes requests FROM the authenticated webview using the JavaScript fetch API.
//!
//! Results are passed back through the webview's window.location hash (cross-document messaging).
use serde_json::Value;
use tauri::WebviewWindow;
use super::confluence_search::SearchResult;
/// Execute an HTTP request from within the webview context
/// This automatically includes all cookies (including HttpOnly) from the authenticated session
pub async fn fetch_from_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
url: &str,
method: &str,
body: Option<&str>,
) -> Result<Value, String> {
let request_id = uuid::Uuid::now_v7().to_string();
let (headers_js, body_js) = if let Some(b) = body {
// For POST/PUT with JSON body
(
"headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }",
format!(", body: JSON.stringify({b})"),
)
} else {
// For GET requests
("headers: { 'Accept': 'application/json' }", String::new())
};
// Inject script that:
// 1. Makes fetch request with credentials
// 2. Uses window.location.hash to communicate results back
let fetch_script = format!(
r#"
(async function() {{
const requestId = '{request_id}';
try {{
const response = await fetch('{url}', {{
method: '{method}',
{headers_js},
credentials: 'include'{body_js}
}});
if (!response.ok) {{
window.location.hash = '#trcaa-error-' + requestId + '-' + encodeURIComponent(JSON.stringify({{
error: `HTTP ${{response.status}}: ${{response.statusText}}`
}}));
return;
}}
const data = await response.json();
// Store in hash - we'll poll for this
window.location.hash = '#trcaa-success-' + requestId + '-' + encodeURIComponent(JSON.stringify(data));
}} catch (error) {{
window.location.hash = '#trcaa-error-' + requestId + '-' + encodeURIComponent(JSON.stringify({{
error: error.message
}}));
}}
}})();
"#
);
// Execute the fetch
webview_window
.eval(&fetch_script)
.map_err(|e| format!("Failed to execute fetch: {e}"))?;
// Poll for result by checking window URL/hash
for i in 0..50 {
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
// Get the current URL to check the hash
if let Ok(url_str) = webview_window.url() {
let url_string = url_str.to_string();
// Check for success
let success_marker = format!("#trcaa-success-{request_id}-");
if url_string.contains(&success_marker) {
// Extract the JSON from the hash
if let Some(json_start) = url_string.find(&success_marker) {
let json_encoded = &url_string[json_start + success_marker.len()..];
if let Ok(decoded) = urlencoding::decode(json_encoded) {
// Clear the hash
webview_window.eval("window.location.hash = '';").ok();
// Parse JSON
if let Ok(result) = serde_json::from_str::<Value>(&decoded) {
tracing::info!("Webview fetch successful");
return Ok(result);
}
}
}
}
// Check for error
let error_marker = format!("#trcaa-error-{request_id}-");
if url_string.contains(&error_marker) {
if let Some(json_start) = url_string.find(&error_marker) {
let json_encoded = &url_string[json_start + error_marker.len()..];
if let Ok(decoded) = urlencoding::decode(json_encoded) {
// Clear the hash
webview_window.eval("window.location.hash = '';").ok();
return Err(format!("Webview fetch error: {decoded}"));
}
}
}
}
if i % 10 == 0 {
tracing::debug!("Waiting for webview fetch... ({}s)", i / 10);
}
}
Err("Timeout waiting for webview fetch response (5s)".to_string())
}
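// Usage sketch (the window handle and URL are illustrative):
//
//     let user = fetch_from_webview(&webview, "https://confluence.example.com/rest/api/user/current", "GET", None).await?;
//     tracing::info!("current user payload: {user}");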
/// Search Confluence using webview fetch (includes HttpOnly cookies automatically)
pub async fn search_confluence_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
base_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords from the query for better search
// Remove common words and extract important terms
let keywords = extract_keywords(query);
// Build CQL query with OR logic for keywords
let cql = if keywords.len() > 1 {
// Multiple keywords - search for any of them
let keyword_conditions: Vec<String> =
keywords.iter().map(|k| format!("text ~ \"{k}\"")).collect();
keyword_conditions.join(" OR ")
} else if !keywords.is_empty() {
// Single keyword
let keyword = &keywords[0];
format!("text ~ \"{keyword}\"")
} else {
// Fallback to original query
format!("text ~ \"{query}\"")
};
let search_url = format!(
"{}/rest/api/search?cql={}&limit=10",
base_url.trim_end_matches('/'),
urlencoding::encode(&cql)
);
tracing::info!("Executing Confluence search via webview with CQL: {}", cql);
let response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
let mut results = Vec::new();
if let Some(results_array) = response.get("results").and_then(|v| v.as_array()) {
for item in results_array.iter().take(5) {
let title = item["title"].as_str().unwrap_or("Untitled").to_string();
let content_id = item["content"]["id"].as_str();
let space_key = item["content"]["space"]["key"].as_str();
let url = if let (Some(id), Some(space)) = (content_id, space_key) {
format!(
"{}/display/{}/{}",
base_url.trim_end_matches('/'),
space,
id
)
} else {
base_url.to_string()
};
let excerpt = item["excerpt"]
.as_str()
.unwrap_or("")
.replace("<span class=\"highlight\">", "")
.replace("</span>", "");
// Fetch full page content
let content = if let Some(id) = content_id {
let content_url = format!(
"{}/rest/api/content/{id}?expand=body.storage",
base_url.trim_end_matches('/')
);
if let Ok(content_resp) =
fetch_from_webview(webview_window, &content_url, "GET", None).await
{
if let Some(body) = content_resp
.get("body")
.and_then(|b| b.get("storage"))
.and_then(|s| s.get("value"))
.and_then(|v| v.as_str())
{
let text = strip_html_simple(body);
// Cut on a char boundary so multi-byte UTF-8 cannot panic the slice
Some(match text.char_indices().nth(3000) {
Some((idx, _)) => format!("{}...", &text[..idx]),
None => text,
})
} else {
None
}
} else {
None
}
} else {
None
};
results.push(SearchResult {
title,
url,
excerpt: excerpt.chars().take(300).collect(),
content,
source: "Confluence".to_string(),
});
}
}
tracing::info!(
"Confluence webview search returned {} results",
results.len()
);
Ok(results)
}
/// Extract keywords from a search query
/// Removes stop words and extracts important terms
fn extract_keywords(query: &str) -> Vec<String> {
// Common stop words to filter out
let stop_words = vec![
"how", "do", "i", "the", "a", "an", "is", "are", "was", "were", "be", "been", "being",
"have", "has", "had", "having", "do", "does", "did", "doing", "will", "would", "should",
"could", "can", "may", "might", "must", "to", "from", "in", "on", "at", "by", "for",
"with", "about", "as", "of", "or", "and", "but", "not", "what", "when", "where", "which",
"who",
];
let mut keywords = Vec::new();
// Split on whitespace and sentence punctuation. '.' must not be a delimiter,
// otherwise version numbers like "1.0.12" would be split apart and the version
// check below could never match; trailing periods are trimmed instead.
for word in query.split(|c: char| c.is_whitespace() || c == '?' || c == '!' || c == ',') {
let cleaned = word.trim().trim_end_matches('.').to_lowercase();
// Skip if empty, too short, or a stop word
if cleaned.is_empty() || cleaned.len() < 2 || stop_words.contains(&cleaned.as_str()) {
continue;
}
// Keep version numbers (e.g., "1.0.12")
if cleaned.contains('.') && cleaned.chars().any(|c| c.is_numeric()) {
keywords.push(cleaned);
continue;
}
// Keep ticket numbers and IDs (pure numbers >= 3 digits)
if cleaned.chars().all(|c| c.is_numeric()) && cleaned.len() >= 3 {
keywords.push(cleaned);
continue;
}
// Keep if it has letters
if cleaned.chars().any(|c| c.is_alphabetic()) {
keywords.push(cleaned);
}
}
// Deduplicate
keywords.sort();
keywords.dedup();
keywords
}
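// For example, given the splitting rules above:
//   extract_keywords("How do I fix error 500 in build 1.0.12?")
//     == ["1.0.12", "500", "build", "error", "fix"]
// (stop words dropped, then sorted and deduplicated)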
/// Simple HTML tag stripping (for content preview)
fn strip_html_simple(html: &str) -> String {
let mut result = String::new();
let mut in_tag = false;
for ch in html.chars() {
match ch {
'<' => in_tag = true,
'>' => in_tag = false,
_ if !in_tag => result.push(ch),
_ => {}
}
}
result.split_whitespace().collect::<Vec<_>>().join(" ")
}
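// e.g. strip_html_simple("<p>Hello <b>world</b>!</p>") == "Hello world!"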
/// Search ServiceNow using webview fetch
pub async fn search_servicenow_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
instance_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
let mut results = Vec::new();
// Search knowledge base
let kb_url = format!(
"{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=3",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
tracing::info!("Executing ServiceNow KB search via webview");
if let Ok(kb_response) = fetch_from_webview(webview_window, &kb_url, "GET", None).await {
if let Some(kb_array) = kb_response.get("result").and_then(|v| v.as_array()) {
for item in kb_array {
let title = item["short_description"]
.as_str()
.unwrap_or("Untitled")
.to_string();
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/kb_view.do?sysparm_article={sys_id}",
instance_url.trim_end_matches('/')
);
let text = item["text"].as_str().unwrap_or("");
let excerpt = text.chars().take(300).collect();
// Cut on a char boundary so multi-byte UTF-8 cannot panic the slice
let content = Some(match text.char_indices().nth(3000) {
Some((idx, _)) => format!("{}...", &text[..idx]),
None => text.to_string(),
});
results.push(SearchResult {
title,
url,
excerpt,
content,
source: "ServiceNow".to_string(),
});
}
}
}
// Search incidents
let inc_url = format!(
"{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true",
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query)
);
if let Ok(inc_response) = fetch_from_webview(webview_window, &inc_url, "GET", None).await {
if let Some(inc_array) = inc_response.get("result").and_then(|v| v.as_array()) {
for item in inc_array {
let number = item["number"].as_str().unwrap_or("Unknown");
let title = format!(
"Incident {}: {}",
number,
item["short_description"].as_str().unwrap_or("No title")
);
let sys_id = item["sys_id"].as_str().unwrap_or("");
let url = format!(
"{}/incident.do?sys_id={sys_id}",
instance_url.trim_end_matches('/')
);
let description = item["description"].as_str().unwrap_or("");
let resolution = item["close_notes"].as_str().unwrap_or("");
let content = format!("Description: {description}\nResolution: {resolution}");
let excerpt = content.chars().take(200).collect();
results.push(SearchResult {
title,
url,
excerpt,
content: Some(content),
source: "ServiceNow".to_string(),
});
}
}
}
tracing::info!(
"ServiceNow webview search returned {} results",
results.len()
);
Ok(results)
}
/// Search Azure DevOps wiki using webview fetch
pub async fn search_azuredevops_wiki_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
org_url: &str,
project: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords for better search
let keywords = extract_keywords(query);
let search_text = if !keywords.is_empty() {
keywords.join(" ")
} else {
query.to_string()
};
// List the project's wikis first; matching is then done client-side over page content
let search_url = format!(
"{}/{}/_apis/wiki/wikis?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
tracing::info!(
"Executing Azure DevOps wiki search via webview for: {}",
search_text
);
// First, get list of wikis
let wikis_response = fetch_from_webview(webview_window, &search_url, "GET", None).await?;
let mut results = Vec::new();
if let Some(wikis_array) = wikis_response.get("value").and_then(|v| v.as_array()) {
// Search each wiki
for wiki in wikis_array.iter().take(3) {
let wiki_id = wiki["id"].as_str().unwrap_or("");
if wiki_id.is_empty() {
continue;
}
// Search wiki pages
let pages_url = format!(
"{}/{}/_apis/wiki/wikis/{}/pages?recursionLevel=Full&includeContent=true&api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project),
urlencoding::encode(wiki_id)
);
if let Ok(pages_response) =
fetch_from_webview(webview_window, &pages_url, "GET", None).await
{
// Try to get "page" field, or use the response itself if it's the page object
if let Some(page) = pages_response.get("page") {
search_page_recursive(
page,
&search_text,
org_url,
project,
wiki_id,
&mut results,
);
} else {
// Response might be the page object itself
search_page_recursive(
&pages_response,
&search_text,
org_url,
project,
wiki_id,
&mut results,
);
}
}
}
}
tracing::info!(
"Azure DevOps wiki webview search returned {} results",
results.len()
);
Ok(results)
}
/// Recursively search through wiki pages for matching content
fn search_page_recursive(
page: &Value,
search_text: &str,
org_url: &str,
_project: &str,
wiki_id: &str,
results: &mut Vec<SearchResult>,
) {
let search_lower = search_text.to_lowercase();
// Check current page
if let Some(path) = page.get("path").and_then(|p| p.as_str()) {
let content = page.get("content").and_then(|c| c.as_str()).unwrap_or("");
let content_lower = content.to_lowercase();
// Simple relevance check
let matches = search_lower
.split_whitespace()
.filter(|word| content_lower.contains(word))
.count();
if matches > 0 {
let page_id = page.get("id").and_then(|i| i.as_i64()).unwrap_or(0);
let title = path.trim_start_matches('/').replace('/', " > ");
let url = format!(
"{}/_wiki/wikis/{}/{}/{}",
org_url.trim_end_matches('/'),
urlencoding::encode(wiki_id),
page_id,
urlencoding::encode(path.trim_start_matches('/'))
);
// Create an excerpt around the first keyword occurrence. The byte offset
// comes from the lowercased copy, so clamp it and verify char boundaries
// before slicing the original to avoid panics on multi-byte content.
let excerpt = if let Some(pos) =
    content_lower.find(search_lower.split_whitespace().next().unwrap_or(""))
{
    let start = pos.saturating_sub(50).min(content.len());
    let end = (pos + 200).min(content.len());
    if content.is_char_boundary(start) && content.is_char_boundary(end) {
        format!("...{}", &content[start..end])
    } else {
        content.chars().take(200).collect()
    }
} else {
    content.chars().take(200).collect()
};
// Cut at the 3000th char boundary; a raw byte slice can panic mid-codepoint
let result_content = match content.char_indices().nth(3000) {
    Some((idx, _)) => format!("{}...", &content[..idx]),
    None => content.to_string(),
};
results.push(SearchResult {
title,
url,
excerpt,
content: Some(result_content),
source: "Azure DevOps Wiki".to_string(),
});
}
}
// Recurse into subpages
if let Some(subpages) = page.get("subPages").and_then(|s| s.as_array()) {
for subpage in subpages {
search_page_recursive(subpage, search_text, org_url, _project, wiki_id, results);
}
}
}
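For reference, a minimal test sketch of the page-tree shape this recursion walks (objects carrying path, id, content, and nested subPages); the tree values are illustrative, not taken from any real wiki:
#[cfg(test)]
mod page_recursion_tests {
    use super::*;
    use serde_json::json;
    #[test]
    fn finds_match_in_nested_subpage() {
        // Hypothetical page tree mirroring the Azure DevOps pages API shape
        let page = json!({
            "path": "/Home",
            "id": 1,
            "content": "welcome",
            "subPages": [{
                "path": "/Home/Networking",
                "id": 2,
                "content": "VPN outage runbook",
                "subPages": []
            }]
        });
        let mut results = Vec::new();
        search_page_recursive(
            &page,
            "vpn outage",
            "https://dev.azure.com/org",
            "project",
            "wiki-1",
            &mut results,
        );
        // Only the subpage containing the keywords should match
        assert_eq!(results.len(), 1);
        assert_eq!(results[0].title, "Home > Networking");
    }
}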
/// Search Azure DevOps work items using webview fetch
pub async fn search_azuredevops_workitems_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
org_url: &str,
project: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
// Extract keywords
let keywords = extract_keywords(query);
// Check if query contains a work item ID (pure number)
let work_item_id: Option<i64> = keywords
.iter()
.filter(|k| k.chars().all(|c| c.is_numeric()))
.filter_map(|k| k.parse::<i64>().ok())
.next();
// Build WIQL query
let wiql_query = if let Some(id) = work_item_id {
// Search by specific ID
format!(
"SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
FROM WorkItems WHERE [System.Id] = {id}"
)
} else {
    // Search by text in title/description
    let search_terms = if !keywords.is_empty() {
        keywords.join(" ")
    } else {
        query.to_string()
    };
    // WIQL has no parameter binding here; double any single quotes so neither
    // the user's query nor the project name can terminate the string literals
    let search_terms = search_terms.replace('\'', "''");
    let project = project.replace('\'', "''");
    // Use CONTAINS for text search (case-insensitive)
    format!(
        "SELECT [System.Id], [System.Title], [System.Description], [System.WorkItemType] \
         FROM WorkItems WHERE [System.TeamProject] = '{project}' \
         AND ([System.Title] CONTAINS '{search_terms}' OR [System.Description] CONTAINS '{search_terms}') \
         ORDER BY [System.ChangedDate] DESC"
    )
};
let wiql_url = format!(
"{}/{}/_apis/wit/wiql?api-version=7.0",
org_url.trim_end_matches('/'),
urlencoding::encode(project)
);
let body = serde_json::json!({
"query": wiql_query
})
.to_string();
tracing::info!("Executing Azure DevOps work item search via webview");
tracing::debug!("WIQL query: {}", wiql_query);
tracing::debug!("Request URL: {}", wiql_url);
let wiql_response = fetch_from_webview(webview_window, &wiql_url, "POST", Some(&body)).await?;
let mut results = Vec::new();
if let Some(work_items) = wiql_response.get("workItems").and_then(|v| v.as_array()) {
// Fetch details for first 5 work items
for item in work_items.iter().take(5) {
if let Some(id) = item.get("id").and_then(|i| i.as_i64()) {
let details_url = format!(
"{}/_apis/wit/workitems/{}?api-version=7.0",
org_url.trim_end_matches('/'),
id
);
if let Ok(details) =
fetch_from_webview(webview_window, &details_url, "GET", None).await
{
if let Some(fields) = details.get("fields") {
let title = fields
.get("System.Title")
.and_then(|t| t.as_str())
.unwrap_or("Untitled");
let work_item_type = fields
.get("System.WorkItemType")
.and_then(|t| t.as_str())
.unwrap_or("Item");
let description = fields
.get("System.Description")
.and_then(|d| d.as_str())
.unwrap_or("");
let clean_description = strip_html_simple(description);
let excerpt = clean_description.chars().take(200).collect();
let url = format!("{}/_workitems/edit/{id}", org_url.trim_end_matches('/'));
// Cut at the 3000th char boundary; a raw byte slice can panic mid-codepoint
let full_content = match clean_description.char_indices().nth(3000) {
    Some((idx, _)) => format!("{}...", &clean_description[..idx]),
    None => clean_description.clone(),
};
results.push(SearchResult {
title: format!("{work_item_type} #{id}: {title}"),
url,
excerpt,
content: Some(full_content),
source: "Azure DevOps".to_string(),
});
}
}
}
}
}
tracing::info!(
"Azure DevOps work items webview search returned {} results",
results.len()
);
Ok(results)
}
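Because the WIQL above is assembled by string interpolation, quote-doubling is the only injection guard; a quick sketch pinning that behavior:
#[cfg(test)]
mod wiql_escape_tests {
    #[test]
    fn single_quotes_are_doubled() {
        let input = "can't connect to O'Brien's VM";
        let escaped = input.replace('\'', "''");
        assert_eq!(escaped, "can''t connect to O''Brien''s VM");
        // Doubled quotes are WIQL's escape for a literal quote, so the term
        // can sit inside CONTAINS '<term>' without closing the string early
        let clause = format!("[System.Title] CONTAINS '{escaped}'");
        assert!(clause.ends_with("VM'"));
    }
}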
/// Add a comment to an Azure DevOps work item
pub async fn add_azuredevops_comment_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
org_url: &str,
work_item_id: i64,
comment_text: &str,
) -> Result<String, String> {
let comment_url = format!(
"{}/_apis/wit/workitems/{work_item_id}/comments?api-version=7.0",
org_url.trim_end_matches('/')
);
let body = serde_json::json!({
"text": comment_text
})
.to_string();
tracing::info!("Adding comment to Azure DevOps work item {}", work_item_id);
let response = fetch_from_webview(webview_window, &comment_url, "POST", Some(&body)).await?;
// Extract comment ID from response
let comment_id = response
.get("id")
.and_then(|id| id.as_i64())
.ok_or_else(|| "Failed to get comment ID from response".to_string())?;
tracing::info!("Successfully added comment {comment_id} to work item {work_item_id}");
Ok(format!("Comment added successfully (ID: {comment_id})"))
}
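For context, a hedged sketch of a caller; the function name, org URL, and work item ID are illustrative assumptions rather than values from this change:
// Hypothetical usage from a command handler; names are illustrative
async fn attach_rca_note<R: tauri::Runtime>(
    window: &WebviewWindow<R>,
) -> Result<(), String> {
    let confirmation = add_azuredevops_comment_webview(
        window,
        "https://dev.azure.com/example-org",
        4321,
        "RCA complete; root cause and remediation documented.",
    )
    .await?;
    tracing::info!("{confirmation}");
    Ok(())
}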

View File

@ -0,0 +1,287 @@
/// Native webview-based search that automatically includes HttpOnly cookies
/// This bypasses cookie extraction by making requests directly from the authenticated webview
use serde::{Deserialize, Serialize};
use tauri::WebviewWindow;
use super::confluence_search::SearchResult;
/// Execute a search request from within the webview context
/// This automatically includes all cookies (including HttpOnly) from the authenticated session
pub async fn search_from_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
service: &str,
base_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
match service {
"confluence" => search_confluence_from_webview(webview_window, base_url, query).await,
"servicenow" => search_servicenow_from_webview(webview_window, base_url, query).await,
"azuredevops" => Ok(Vec::new()), // Not yet implemented
_ => Err(format!("Unsupported service: {}", service)),
}
}
/// Search Confluence from within the authenticated webview
async fn search_confluence_from_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
base_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
let search_script = format!(
r#"
(async function() {{
try {{
// Search Confluence using the browser's authenticated session
const searchUrl = '{}/rest/api/search?cql=text~"{}"&limit=5';
const response = await fetch(searchUrl, {{
headers: {{
'Accept': 'application/json'
}},
credentials: 'include' // Include cookies automatically
}});
if (!response.ok) {{
return {{ error: `Search failed: ${{response.status}}` }};
}}
const data = await response.json();
const results = [];
if (data.results && Array.isArray(data.results)) {{
for (const item of data.results.slice(0, 3)) {{
const title = item.title || 'Untitled';
const contentId = item.content?.id;
const spaceKey = item.content?.space?.key;
let url = '{}';
if (contentId && spaceKey) {{
url = `{}/display/${{spaceKey}}/${{contentId}}`;
}}
const excerpt = (item.excerpt || '')
.replace(/<span class="highlight">/g, '')
.replace(/<\/span>/g, '');
// Fetch full page content
let content = null;
if (contentId) {{
try {{
const contentUrl = `{}/rest/api/content/${{contentId}}?expand=body.storage`;
const contentResp = await fetch(contentUrl, {{
headers: {{ 'Accept': 'application/json' }},
credentials: 'include'
}});
if (contentResp.ok) {{
const contentData = await contentResp.json();
let html = contentData.body?.storage?.value || '';
// Basic HTML stripping
const div = document.createElement('div');
div.innerHTML = html;
let text = div.textContent || div.innerText || '';
content = text.length > 3000 ? text.substring(0, 3000) + '...' : text;
}}
}} catch (e) {{
console.error('Failed to fetch page content:', e);
}}
}}
results.push({{
title,
url,
excerpt: excerpt.substring(0, 300),
content,
source: 'Confluence'
}});
}}
}}
return {{ results }};
}} catch (error) {{
return {{ error: error.message }};
}}
}})();
"#,
base_url.trim_end_matches('/'),
query.replace('"', "\\\""),
base_url,
base_url,
base_url
);
// Run the search and store its outcome in localStorage under a unique key
let storage_key = format!("__trcaa_search_{}__", uuid::Uuid::now_v7());
let callback_script = format!(
    r#"
    (async () => {{
        try {{
            // The search script is an async IIFE whose statement already ends
            // in ';', so await its value here; chaining .then() onto the
            // terminated statement would be a JavaScript syntax error
            const result = await {search_script}
            localStorage.setItem('{storage_key}', JSON.stringify(result));
        }} catch (error) {{
            localStorage.setItem('{storage_key}', JSON.stringify({{ error: error.message }}));
        }}
    }})();
    "#
);
webview_window
.eval(&callback_script)
.map_err(|e| format!("Failed to execute search: {}", e))?;
// Poll while the page-side search runs.
// NOTE: WebviewWindow::eval is fire-and-forget; it cannot return a script's
// value to Rust, so the payload written to localStorage is not readable on
// this path. The loop only gives the async search time to finish and cleans
// the storage key up before falling through to the timeout below.
for _ in 0..50 {
    // 50 iterations x 100ms = ~5 seconds
    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    let cleanup_script = format!(
        r#"(function() {{
            if (localStorage.getItem('{storage_key}') !== null) {{
                localStorage.removeItem('{storage_key}');
            }}
        }})();"#
    );
    // Ignore eval errors (the window may have been closed mid-search)
    let _ = webview_window.eval(&cleanup_script);
}
// Timed out with no way to read the result back; see note above
tracing::warn!("Webview search timed out, returning empty results");
Ok(Vec::new())
}
/// Search ServiceNow from within the authenticated webview
async fn search_servicenow_from_webview<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
instance_url: &str,
query: &str,
) -> Result<Vec<SearchResult>, String> {
let search_script = format!(
r#"
(async function() {{
try {{
const results = [];
// Search knowledge base
const kbUrl = '{}/api/now/table/kb_knowledge?sysparm_query=textLIKE{}^ORshort_descriptionLIKE{}&sysparm_limit=3';
const kbResp = await fetch(kbUrl, {{
headers: {{ 'Accept': 'application/json' }},
credentials: 'include'
}});
if (kbResp.ok) {{
const kbData = await kbResp.json();
if (kbData.result && Array.isArray(kbData.result)) {{
for (const item of kbData.result) {{
const title = item.short_description || 'Untitled';
const sysId = item.sys_id || '';
const url = `{}/kb_view.do?sysparm_article=${{sysId}}`;
const text = item.text || '';
const excerpt = text.substring(0, 300);
const content = text.length > 3000 ? text.substring(0, 3000) + '...' : text;
results.push({{
title,
url,
excerpt,
content,
source: 'ServiceNow'
}});
}}
}}
}}
// Search incidents
const incUrl = '{}/api/now/table/incident?sysparm_query=short_descriptionLIKE{}^ORdescriptionLIKE{}&sysparm_limit=3&sysparm_display_value=true';
const incResp = await fetch(incUrl, {{
headers: {{ 'Accept': 'application/json' }},
credentials: 'include'
}});
if (incResp.ok) {{
const incData = await incResp.json();
if (incData.result && Array.isArray(incData.result)) {{
for (const item of incData.result) {{
const number = item.number || 'Unknown';
const title = `Incident ${{number}}: ${{item.short_description || 'No title'}}`;
const sysId = item.sys_id || '';
const url = `{}/incident.do?sys_id=${{sysId}}`;
const description = item.description || '';
const resolution = item.close_notes || '';
const content = `Description: ${{description}}\\nResolution: ${{resolution}}`;
const excerpt = content.substring(0, 200);
results.push({{
title,
url,
excerpt,
content,
source: 'ServiceNow'
}});
}}
}}
}}
return {{ results }};
}} catch (error) {{
return {{ error: error.message }};
}}
}})();
"#,
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query),
instance_url.trim_end_matches('/'),
instance_url.trim_end_matches('/'),
urlencoding::encode(query),
urlencoding::encode(query),
instance_url.trim_end_matches('/')
);
webview_window
    .eval(&search_script)
    .map_err(|e| format!("Failed to execute search: {e}"))?;
// NOTE: eval is fire-and-forget and cannot hand the script's value back to
// Rust, so the parsing below never sees live data on this legacy path; the
// active flows use fetch_from_webview instead. Kept to document the expected
// response shape.
let result = serde_json::Value::Null;
if let Some(error) = result.get("error") {
return Err(format!("Search error: {}", error));
}
if let Some(results_array) = result.get("results").and_then(|v| v.as_array()) {
let mut results = Vec::new();
for item in results_array {
if let Ok(search_result) = serde_json::from_value::<SearchResult>(item.clone()) {
results.push(search_result);
}
}
Ok(results)
} else {
Ok(Vec::new())
}
}
/// Search Azure DevOps from within the authenticated webview
#[allow(dead_code)] // not yet dispatched from search_from_webview
async fn search_azuredevops_from_webview<R: tauri::Runtime>(
    _webview_window: &WebviewWindow<R>,
    _org_url: &str,
    _query: &str,
) -> Result<Vec<SearchResult>, String> {
// Azure DevOps search requires project parameter, which we don't have here
// This would need to be passed in from the config
// For now, return empty results
tracing::warn!("Azure DevOps webview search not yet implemented");
Ok(Vec::new())
}

View File

@ -8,8 +8,10 @@ pub mod ollama;
pub mod pii;
pub mod state;
use sha2::{Digest, Sha256};
use state::AppState;
use std::sync::{Arc, Mutex};
use tauri::Manager;
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
@ -21,10 +23,10 @@ pub fn run() {
)
.init();
tracing::info!("Starting TFTSR application");
tracing::info!("Starting Troubleshooting and RCA Assistant application");
// Determine data directory
let data_dir = dirs_data_dir();
let data_dir = state::get_app_data_dir().expect("Failed to determine app data directory");
// Initialize database
let conn = db::connection::init_db(&data_dir).expect("Failed to initialize database");
@ -34,15 +36,19 @@ pub fn run() {
db: Arc::new(Mutex::new(conn)),
settings: Arc::new(Mutex::new(state::AppSettings::default())),
app_data_dir: data_dir.clone(),
integration_webviews: Arc::new(Mutex::new(std::collections::HashMap::new())),
};
let stronghold_salt = format!(
"tftsr-stronghold-salt-v1-{:x}",
Sha256::digest(data_dir.to_string_lossy().as_bytes())
);
tauri::Builder::default()
.plugin(
tauri_plugin_stronghold::Builder::new(|password| {
use sha2::{Digest, Sha256};
tauri_plugin_stronghold::Builder::new(move |password| {
let mut hasher = Sha256::new();
hasher.update(password);
hasher.update(b"tftsr-stronghold-salt-v1");
hasher.update(stronghold_salt.as_bytes());
hasher.finalize().to_vec()
})
.build(),
@ -52,6 +58,35 @@ pub fn run() {
.plugin(tauri_plugin_shell::init())
.plugin(tauri_plugin_http::init())
.manage(app_state)
.setup(|app| {
// Restore persistent browser windows from previous session
let app_handle = app.handle().clone();
let state: tauri::State<AppState> = app.state();
// Clone Arc fields for 'static lifetime
let db = state.db.clone();
let settings = state.settings.clone();
let app_data_dir = state.app_data_dir.clone();
let integration_webviews = state.integration_webviews.clone();
tauri::async_runtime::spawn(async move {
let app_state = AppState {
db,
settings,
app_data_dir,
integration_webviews,
};
if let Err(e) =
commands::integrations::restore_persistent_webviews(&app_handle, &app_state)
.await
{
tracing::warn!("Failed to restore persistent webviews: {}", e);
}
});
Ok(())
})
.invoke_handler(tauri::generate_handler![
// DB / Issue CRUD
commands::db::create_issue,
@ -87,6 +122,13 @@ pub fn run() {
commands::integrations::create_azuredevops_workitem,
commands::integrations::initiate_oauth,
commands::integrations::handle_oauth_callback,
commands::integrations::authenticate_with_webview,
commands::integrations::extract_cookies_from_webview,
commands::integrations::save_manual_token,
commands::integrations::save_integration_config,
commands::integrations::get_integration_config,
commands::integrations::get_all_integration_configs,
commands::integrations::add_ado_comment,
// System / Settings
commands::system::check_ollama_installed,
commands::system::get_ollama_install_guide,
@ -98,48 +140,10 @@ pub fn run() {
commands::system::get_settings,
commands::system::update_settings,
commands::system::get_audit_log,
commands::system::save_ai_provider,
commands::system::load_ai_providers,
commands::system::delete_ai_provider,
])
.run(tauri::generate_context!())
.expect("Error running TFTSR application");
}
/// Determine the application data directory.
fn dirs_data_dir() -> std::path::PathBuf {
if let Ok(dir) = std::env::var("TFTSR_DATA_DIR") {
return std::path::PathBuf::from(dir);
}
// Use platform-appropriate data directory
#[cfg(target_os = "linux")]
{
if let Ok(xdg) = std::env::var("XDG_DATA_HOME") {
return std::path::PathBuf::from(xdg).join("tftsr");
}
if let Ok(home) = std::env::var("HOME") {
return std::path::PathBuf::from(home)
.join(".local")
.join("share")
.join("tftsr");
}
}
#[cfg(target_os = "macos")]
{
if let Ok(home) = std::env::var("HOME") {
return std::path::PathBuf::from(home)
.join("Library")
.join("Application Support")
.join("tftsr");
}
}
#[cfg(target_os = "windows")]
{
if let Ok(appdata) = std::env::var("APPDATA") {
return std::path::PathBuf::from(appdata).join("tftsr");
}
}
// Fallback
std::path::PathBuf::from("./tftsr-data")
.expect("Error running Troubleshooting and RCA Assistant application");
}
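The stronghold password hashing above now mixes in a salt derived from the app data directory, so the derived key is stable per install; a small sketch of that derivation with illustrative inputs:
#[cfg(test)]
mod stronghold_salt_tests {
    use sha2::{Digest, Sha256};
    #[test]
    fn same_data_dir_yields_same_derived_key() {
        let data_dir = std::path::PathBuf::from("/tmp/trcaa"); // hypothetical path
        let salt = format!(
            "tftsr-stronghold-salt-v1-{:x}",
            Sha256::digest(data_dir.to_string_lossy().as_bytes())
        );
        let derive = |password: &[u8]| {
            let mut hasher = Sha256::new();
            hasher.update(password);
            hasher.update(salt.as_bytes());
            hasher.finalize().to_vec()
        };
        // Deterministic per (password, data dir); 32 bytes of SHA-256 output
        assert_eq!(derive(b"hunter2"), derive(b"hunter2"));
        assert_eq!(derive(b"hunter2").len(), 32);
    }
}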

View File

@ -35,8 +35,10 @@ pub fn get_patterns() -> Vec<(PiiType, Regex)> {
// Credit card
(
PiiType::CreditCard,
Regex::new(r"\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13})\b")
.unwrap(),
Regex::new(
r"\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|6(?:011|5[0-9]{2})[0-9]{12}|3(?:0[0-5]|[68][0-9])[0-9]{11}|35(?:2[89]|[3-8][0-9])[0-9]{12})\b",
)
.unwrap(),
),
// Email
(
@ -70,5 +72,13 @@ pub fn get_patterns() -> Vec<(PiiType, Regex)> {
Regex::new(r"\b(?:\+?1[-.\s]?)?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4}\b")
.unwrap(),
),
// Hostname / FQDN
(
PiiType::Hostname,
Regex::new(
r"\b(?:[A-Za-z0-9](?:[A-Za-z0-9\-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,63}\b",
)
.unwrap(),
),
]
}
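A quick illustrative check of the widened credit card regex (the new Discover branch) and the new hostname pattern; it assumes get_patterns and PiiType are in scope as above and uses a standard test card number, not real PII:
#[cfg(test)]
mod pii_pattern_tests {
    use super::*;
    #[test]
    fn detects_discover_cards_and_hostnames() {
        let text = "card 6011000990139424 failed on db01.internal.example.com";
        let hit = |want: fn(&PiiType) -> bool| {
            get_patterns()
                .iter()
                .any(|(ty, re)| want(ty) && re.is_match(text))
        };
        // 6011... matches the new Discover branch of the card pattern
        assert!(hit(|ty| matches!(ty, PiiType::CreditCard)));
        // The FQDN matches the new Hostname pattern
        assert!(hit(|ty| matches!(ty, PiiType::Hostname)));
    }
}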

View File

@ -1,4 +1,5 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
@ -10,6 +11,34 @@ pub struct ProviderConfig {
pub api_url: String,
pub api_key: String,
pub model: String,
/// Optional: Maximum tokens for response
#[serde(skip_serializing_if = "Option::is_none")]
pub max_tokens: Option<u32>,
/// Optional: Temperature (0.0-2.0) - controls randomness
#[serde(skip_serializing_if = "Option::is_none")]
pub temperature: Option<f64>,
/// Optional: Custom endpoint path (e.g., "" for no path, "/v1/chat" for custom path)
/// If None, defaults to "/chat/completions" for OpenAI compatibility
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_endpoint_path: Option<String>,
/// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
/// If None, defaults to "Authorization"
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_header: Option<String>,
/// Optional: Custom auth value prefix (e.g., "" for no prefix, "Bearer " for OpenAI)
/// If None, defaults to "Bearer "
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_prefix: Option<String>,
/// Optional: API format ("openai" or "custom_rest")
/// If None, defaults to "openai"
#[serde(skip_serializing_if = "Option::is_none")]
pub api_format: Option<String>,
/// Optional: Session ID for stateful custom REST APIs
#[serde(skip_serializing_if = "Option::is_none")]
pub session_id: Option<String>,
/// Optional: User ID for custom REST API cost tracking (CORE ID email)
#[serde(skip_serializing_if = "Option::is_none")]
pub user_id: Option<String>,
}
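A sketch of what the skip_serializing_if attributes buy: None fields disappear from the serialized payload entirely. The required fields outside this hunk (name, provider_type) are assumptions mirrored from the frontend config shape:
#[cfg(test)]
mod provider_config_serde_tests {
    use super::ProviderConfig;
    #[test]
    fn optional_fields_stay_absent_when_none() {
        // Minimal JSON with the optional fields omitted; name/provider_type
        // are assumed field names, not confirmed by this hunk
        let json = r#"{
            "name": "corp-gateway",
            "provider_type": "custom",
            "api_url": "https://genai.example.com",
            "api_key": "sk-test",
            "model": "ChatGPT4o",
            "api_format": "custom_rest"
        }"#;
        let cfg: ProviderConfig = serde_json::from_str(json).expect("deserialize");
        assert_eq!(cfg.api_format.as_deref(), Some("custom_rest"));
        assert!(cfg.session_id.is_none());
        // skip_serializing_if keeps None fields out of the payload entirely
        let out = serde_json::to_string(&cfg).expect("serialize");
        assert!(!out.contains("session_id"));
    }
}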
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -39,4 +68,53 @@ pub struct AppState {
pub db: Arc<Mutex<rusqlite::Connection>>,
pub settings: Arc<Mutex<AppSettings>>,
pub app_data_dir: PathBuf,
/// Track open integration webview windows by service name -> window label
/// These windows stay open for the user to browse and for fresh cookie extraction
pub integration_webviews: Arc<Mutex<HashMap<String, String>>>,
}
/// Determine the application data directory.
/// Returns None if the directory cannot be determined.
pub fn get_app_data_dir() -> Option<PathBuf> {
if let Ok(dir) = std::env::var("TFTSR_DATA_DIR") {
return Some(PathBuf::from(dir));
}
// Use platform-appropriate data directory
#[cfg(target_os = "linux")]
{
if let Ok(xdg) = std::env::var("XDG_DATA_HOME") {
return Some(PathBuf::from(xdg).join("trcaa"));
}
if let Ok(home) = std::env::var("HOME") {
return Some(
PathBuf::from(home)
.join(".local")
.join("share")
.join("trcaa"),
);
}
}
#[cfg(target_os = "macos")]
{
if let Ok(home) = std::env::var("HOME") {
return Some(
PathBuf::from(home)
.join("Library")
.join("Application Support")
.join("trcaa"),
);
}
}
#[cfg(target_os = "windows")]
{
if let Ok(appdata) = std::env::var("APPDATA") {
return Some(PathBuf::from(appdata).join("trcaa"));
}
}
// Fallback
Some(PathBuf::from("./trcaa-data"))
}
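A minimal sketch pinning the TFTSR_DATA_DIR override; it assumes the test may safely mutate process-global environment variables:
#[cfg(test)]
mod data_dir_tests {
    use super::get_app_data_dir;
    #[test]
    fn env_override_wins_over_platform_defaults() {
        // std::env is process-global; acceptable for an isolated test
        std::env::set_var("TFTSR_DATA_DIR", "/tmp/trcaa-test");
        let dir = get_app_data_dir().expect("data dir should resolve");
        assert_eq!(dir, std::path::PathBuf::from("/tmp/trcaa-test"));
        std::env::remove_var("TFTSR_DATA_DIR");
    }
}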

View File

@ -1,7 +1,7 @@
{
"productName": "TFTSR",
"version": "0.2.2",
"identifier": "com.tftsr.devops",
"productName": "Troubleshooting and RCA Assistant",
"version": "0.2.10",
"identifier": "com.trcaa.app",
"build": {
"frontendDist": "../dist",
"devUrl": "http://localhost:1420",
@ -10,11 +10,11 @@
},
"app": {
"security": {
"csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com"
"csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com https://genai-service.stage.commandcentral.com https://genai-service.commandcentral.com"
},
"windows": [
{
"title": "TFTSR \u2014 IT Triage & RCA",
"title": "Troubleshooting and RCA Assistant",
"width": 1280,
"height": 800,
"resizable": true,
@ -36,9 +36,9 @@
],
"resources": [],
"externalBin": [],
"copyright": "TFTSR Contributors",
"copyright": "Troubleshooting and RCA Assistant Contributors",
"category": "Utility",
"shortDescription": "IT Incident Triage & RCA Tool",
"longDescription": "Structured AI-backed tool for IT incident triage, 5-whys root cause analysis, and post-mortem documentation with offline Ollama support."
"shortDescription": "Troubleshooting and RCA Assistant",
"longDescription": "Structured AI-backed assistant for IT troubleshooting, 5-whys root cause analysis, and post-mortem documentation with offline Ollama support."
}
}

View File

@ -11,8 +11,11 @@ import {
Link,
ChevronLeft,
ChevronRight,
Sun,
Moon,
} from "lucide-react";
import { useSettingsStore } from "@/stores/settingsStore";
import { loadAiProvidersCmd, testProviderConnectionCmd } from "@/lib/tauriCommands";
import Dashboard from "@/pages/Dashboard";
import NewIssue from "@/pages/NewIssue";
@ -43,13 +46,38 @@ const settingsItems = [
export default function App() {
const [collapsed, setCollapsed] = useState(false);
const [appVersion, setAppVersion] = useState("");
const theme = useSettingsStore((s) => s.theme);
const { theme, setTheme, setProviders, getActiveProvider } = useSettingsStore();
const location = useLocation();
useEffect(() => {
getVersion().then(setAppVersion).catch(() => {});
}, []);
// Load providers and auto-test active provider on startup
useEffect(() => {
const initializeProviders = async () => {
try {
const providers = await loadAiProvidersCmd();
setProviders(providers);
// Auto-test the active provider
const activeProvider = getActiveProvider();
if (activeProvider) {
console.log("Auto-testing active AI provider:", activeProvider.name);
try {
await testProviderConnectionCmd(activeProvider);
console.log("✓ Active provider connection verified:", activeProvider.name);
} catch (err) {
console.warn("⚠ Active provider connection test failed:", activeProvider.name, err);
}
}
} catch (err) {
console.error("Failed to initialize AI providers:", err);
}
};
initializeProviders();
}, [setProviders, getActiveProvider]);
return (
<div className={theme === "dark" ? "dark" : ""}>
<div className="grid h-screen" style={{ gridTemplateColumns: collapsed ? "64px 1fr" : "240px 1fr" }}>
@ -59,7 +87,7 @@ export default function App() {
<div className="flex items-center justify-between px-4 py-4 border-b">
{!collapsed && (
<span className="text-lg font-bold text-foreground tracking-tight">
TFTSR
Troubleshooting and RCA Assistant
</span>
)}
<button
@ -116,12 +144,21 @@ export default function App() {
</div>
</nav>
{/* Version */}
{!collapsed && (
<div className="px-4 py-3 border-t text-xs text-muted-foreground">
{appVersion ? `v${appVersion}` : ""}
</div>
)}
{/* Version + Theme toggle */}
<div className="px-4 py-3 border-t flex items-center justify-between">
{!collapsed && (
<span className="text-xs text-muted-foreground">
{appVersion ? `v${appVersion}` : ""}
</span>
)}
<button
onClick={() => setTheme(theme === "dark" ? "light" : "dark")}
className="p-1 rounded hover:bg-accent text-muted-foreground"
title={theme === "dark" ? "Switch to light mode" : "Switch to dark mode"}
>
{theme === "dark" ? <Sun className="w-4 h-4" /> : <Moon className="w-4 h-4" />}
</button>
</div>
</aside>
{/* Main content */}

View File

@ -16,6 +16,7 @@ const buttonVariants = cva(
default: "bg-primary text-primary-foreground hover:bg-primary/90",
destructive: "bg-destructive text-destructive-foreground hover:bg-destructive/90",
outline: "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
secondary: "bg-secondary text-secondary-foreground hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
},
@ -342,4 +343,54 @@ export function Separator({
);
}
// ─── RadioGroup ──────────────────────────────────────────────────────────────
interface RadioGroupContextValue {
value: string;
onValueChange: (value: string) => void;
}
const RadioGroupContext = React.createContext<RadioGroupContextValue | null>(null);
interface RadioGroupProps {
value: string;
onValueChange: (value: string) => void;
className?: string;
children: React.ReactNode;
}
export function RadioGroup({ value, onValueChange, className, children }: RadioGroupProps) {
return (
<RadioGroupContext.Provider value={{ value, onValueChange }}>
<div className={cn("space-y-2", className)}>{children}</div>
</RadioGroupContext.Provider>
);
}
interface RadioGroupItemProps extends React.InputHTMLAttributes<HTMLInputElement> {
value: string;
}
export const RadioGroupItem = React.forwardRef<HTMLInputElement, RadioGroupItemProps>(
({ value, className, ...props }, ref) => {
const ctx = React.useContext(RadioGroupContext);
if (!ctx) throw new Error("RadioGroupItem must be used within RadioGroup");
return (
<input
ref={ref}
type="radio"
className={cn(
"aspect-square h-4 w-4 rounded-full border border-primary text-primary ring-offset-background focus:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
className
)}
checked={ctx.value === value}
onChange={() => ctx.onValueChange(value)}
{...props}
/>
);
}
);
RadioGroupItem.displayName = "RadioGroupItem";
export { cn };

View File

@ -10,6 +10,12 @@ export interface ProviderConfig {
api_url: string;
api_key: string;
model: string;
custom_endpoint_path?: string;
custom_auth_header?: string;
custom_auth_prefix?: string;
api_format?: string;
session_id?: string;
user_id?: string;
}
export interface Message {
@ -361,6 +367,17 @@ export const updateSettingsCmd = (partialSettings: Partial<AppSettings>) =>
export const getAuditLogCmd = (filter: AuditFilter) =>
invoke<AuditEntry[]>("get_audit_log", { filter });
// ─── AI Provider Persistence ──────────────────────────────────────────────────
export const saveAiProviderCmd = (provider: ProviderConfig) =>
invoke<void>("save_ai_provider", { provider });
export const loadAiProvidersCmd = () =>
invoke<ProviderConfig[]>("load_ai_providers");
export const deleteAiProviderCmd = (name: string) =>
invoke<void>("delete_ai_provider", { name });
// ─── OAuth & Integrations ─────────────────────────────────────────────────────
export interface OAuthInitResponse {
@ -387,3 +404,57 @@ export const testServiceNowConnectionCmd = (instanceUrl: string, credentials: Re
export const testAzureDevOpsConnectionCmd = (orgUrl: string, credentials: Record<string, unknown>) =>
invoke<ConnectionResult>("test_azuredevops_connection", { orgUrl, credentials });
// ─── Webview & Token Authentication ──────────────────────────────────────────
export interface WebviewAuthResponse {
success: boolean;
message: string;
webview_id: string;
}
export interface TokenAuthRequest {
service: string;
token: string;
token_type: string;
base_url: string;
}
export interface IntegrationConfig {
service: string;
base_url: string;
username?: string;
project_name?: string;
space_key?: string;
}
export const authenticateWithWebviewCmd = (
service: string,
baseUrl: string,
projectName?: string
) =>
invoke<WebviewAuthResponse>("authenticate_with_webview", {
service,
baseUrl,
projectName,
});
export const extractCookiesFromWebviewCmd = (service: string, webviewId: string) =>
invoke<ConnectionResult>("extract_cookies_from_webview", { service, webviewId });
export const saveManualTokenCmd = (request: TokenAuthRequest) =>
invoke<ConnectionResult>("save_manual_token", { request });
// ─── Integration Configuration Persistence ────────────────────────────────────
export const saveIntegrationConfigCmd = (config: IntegrationConfig) =>
invoke<void>("save_integration_config", { config });
export const getIntegrationConfigCmd = (service: string) =>
invoke<IntegrationConfig | null>("get_integration_config", { service });
export const getAllIntegrationConfigsCmd = () =>
invoke<IntegrationConfig[]>("get_all_integration_configs");
export const addAdoCommentCmd = (workItemId: number, commentText: string) =>
invoke<string>("add_ado_comment", { workItemId, commentText });

View File

@ -35,11 +35,11 @@ export default function Dashboard() {
<div>
<h1 className="text-3xl font-bold">Dashboard</h1>
<p className="text-muted-foreground mt-1">
IT Triage & Root Cause Analysis
Troubleshooting and Root Cause Analysis Assistant
</p>
</div>
<div className="flex items-center gap-2">
<Button variant="outline" size="sm" onClick={() => loadIssues()} disabled={isLoading}>
<Button variant="outline" size="sm" onClick={() => loadIssues()} disabled={isLoading} className="border-border text-foreground bg-card hover:bg-accent">
<RefreshCw className={`w-4 h-4 mr-2 ${isLoading ? "animate-spin" : ""}`} />
Refresh
</Button>

View File

@ -1,4 +1,4 @@
import React, { useState } from "react";
import React, { useState, useEffect } from "react";
import { Plus, Pencil, Trash2, CheckCircle, XCircle, Zap } from "lucide-react";
import {
Card,
@ -17,7 +17,42 @@ import {
Separator,
} from "@/components/ui";
import { useSettingsStore } from "@/stores/settingsStore";
import { testProviderConnectionCmd, type ProviderConfig } from "@/lib/tauriCommands";
import {
testProviderConnectionCmd,
saveAiProviderCmd,
loadAiProvidersCmd,
deleteAiProviderCmd,
type ProviderConfig,
} from "@/lib/tauriCommands";
export const CUSTOM_REST_MODELS = [
"ChatGPT4o",
"ChatGPT4o-mini",
"ChatGPT-o3-mini",
"Gemini-2_0-Flash-001",
"Gemini-2_5-Flash",
"Claude-Sonnet-3_7",
"Openai-gpt-4_1-mini",
"Openai-o4-mini",
"Claude-Sonnet-4",
"ChatGPT-o3-pro",
"OpenAI-ChatGPT-4_1",
"OpenAI-GPT-4_1-Nano",
"ChatGPT-5",
"VertexGemini",
"ChatGPT-5_1",
"ChatGPT-5_1-chat",
"ChatGPT-5_2-Chat",
"Gemini-3_Pro-Preview",
"Gemini-3_1-flash-lite-preview",
] as const;
export const CUSTOM_MODEL_OPTION = "__custom_model__";
export const LEGACY_API_FORMAT = "msi_genai";
export const CUSTOM_REST_FORMAT = "custom_rest";
export const normalizeApiFormat = (format?: string): string | undefined =>
format === LEGACY_API_FORMAT ? CUSTOM_REST_FORMAT : format;
const emptyProvider: ProviderConfig = {
name: "",
@ -27,6 +62,12 @@ const emptyProvider: ProviderConfig = {
model: "",
max_tokens: 4096,
temperature: 0.7,
custom_endpoint_path: undefined,
custom_auth_header: undefined,
custom_auth_prefix: undefined,
api_format: undefined,
session_id: undefined,
user_id: undefined,
};
export default function AIProviders() {
@ -37,6 +78,7 @@ export default function AIProviders() {
updateProvider,
removeProvider,
setActiveProvider,
setProviders,
} = useSettingsStore();
const [editIndex, setEditIndex] = useState<number | null>(null);
@ -44,31 +86,76 @@ export default function AIProviders() {
const [form, setForm] = useState<ProviderConfig>({ ...emptyProvider });
const [testResult, setTestResult] = useState<{ success: boolean; message: string } | null>(null);
const [isTesting, setIsTesting] = useState(false);
const [isCustomModel, setIsCustomModel] = useState(false);
const [customModelInput, setCustomModelInput] = useState("");
// Load providers from database on mount
// Note: Auto-testing of active provider is handled in App.tsx on startup
useEffect(() => {
const loadProviders = async () => {
try {
const providers = await loadAiProvidersCmd();
setProviders(providers);
} catch (err) {
console.error("Failed to load AI providers:", err);
}
};
loadProviders();
}, [setProviders]);
const startAdd = () => {
setForm({ ...emptyProvider });
setEditIndex(null);
setIsAdding(true);
setTestResult(null);
setIsCustomModel(false);
setCustomModelInput("");
};
const startEdit = (index: number) => {
setForm({ ...ai_providers[index] });
const provider = ai_providers[index];
const apiFormat = normalizeApiFormat(provider.api_format);
const nextForm = { ...provider, api_format: apiFormat };
setForm(nextForm);
setEditIndex(index);
setIsAdding(true);
setTestResult(null);
const isCustomRestProvider =
nextForm.provider_type === "custom" && apiFormat === CUSTOM_REST_FORMAT;
const knownModel = CUSTOM_REST_MODELS.includes(nextForm.model as (typeof CUSTOM_REST_MODELS)[number]);
if (isCustomRestProvider && !knownModel) {
setIsCustomModel(true);
setCustomModelInput(nextForm.model);
} else {
setIsCustomModel(false);
setCustomModelInput("");
}
};
const handleSave = () => {
const handleSave = async () => {
if (!form.name || !form.api_url || !form.model) return;
if (editIndex != null) {
updateProvider(editIndex, form);
} else {
addProvider(form);
try {
// Save to database
await saveAiProviderCmd(form);
// Update local state
if (editIndex != null) {
updateProvider(editIndex, form);
} else {
addProvider(form);
}
setIsAdding(false);
setEditIndex(null);
setForm({ ...emptyProvider });
} catch (err) {
console.error("Failed to save provider:", err);
setTestResult({ success: false, message: `Failed to save: ${err}` });
}
setIsAdding(false);
setEditIndex(null);
setForm({ ...emptyProvider });
};
const handleCancel = () => {
@ -78,6 +165,16 @@ export default function AIProviders() {
setTestResult(null);
};
const handleRemove = async (index: number) => {
const provider = ai_providers[index];
try {
await deleteAiProviderCmd(provider.name);
removeProvider(index);
} catch (err) {
console.error("Failed to delete provider:", err);
}
};
const handleTest = async () => {
setIsTesting(true);
setTestResult(null);
@ -160,7 +257,7 @@ export default function AIProviders() {
<Button
variant="ghost"
size="sm"
onClick={() => removeProvider(idx)}
onClick={() => handleRemove(idx)}
>
<Trash2 className="w-3 h-3 text-destructive" />
</Button>
@ -236,14 +333,16 @@ export default function AIProviders() {
placeholder="sk-..."
/>
</div>
<div className="space-y-2">
<Label>Model</Label>
<Input
value={form.model}
onChange={(e) => setForm({ ...form, model: e.target.value })}
placeholder="gpt-4o"
/>
</div>
{!(form.provider_type === "custom" && normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT) && (
<div className="space-y-2">
<Label>Model</Label>
<Input
value={form.model}
onChange={(e) => setForm({ ...form, model: e.target.value })}
placeholder="gpt-4o"
/>
</div>
)}
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
@ -267,6 +366,154 @@ export default function AIProviders() {
</div>
</div>
{/* Custom provider format options */}
{form.provider_type === "custom" && (
<>
<Separator />
<div className="space-y-4">
<div className="space-y-2">
<Label>API Format</Label>
<Select
value={form.api_format ?? "openai"}
onValueChange={(v) => {
const format = v;
const defaults =
format === CUSTOM_REST_FORMAT
? {
custom_endpoint_path: "",
custom_auth_header: "",
custom_auth_prefix: "",
}
: {
custom_endpoint_path: "/chat/completions",
custom_auth_header: "Authorization",
custom_auth_prefix: "Bearer ",
};
setForm({ ...form, api_format: format, ...defaults });
if (format !== CUSTOM_REST_FORMAT) {
setIsCustomModel(false);
setCustomModelInput("");
}
}}
>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="openai">OpenAI Compatible</SelectItem>
<SelectItem value={CUSTOM_REST_FORMAT}>Custom REST</SelectItem>
</SelectContent>
</Select>
<p className="text-xs text-muted-foreground">
Select the API format. Custom REST uses a non-OpenAI request/response structure.
</p>
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label>Endpoint Path</Label>
<Input
value={form.custom_endpoint_path ?? ""}
onChange={(e) =>
setForm({ ...form, custom_endpoint_path: e.target.value })
}
placeholder="/chat/completions"
/>
<p className="text-xs text-muted-foreground">
Path appended to API URL. Leave empty if URL includes full path.
</p>
</div>
<div className="space-y-2">
<Label>Auth Header Name</Label>
<Input
value={form.custom_auth_header ?? ""}
onChange={(e) =>
setForm({ ...form, custom_auth_header: e.target.value })
}
placeholder="Authorization"
/>
<p className="text-xs text-muted-foreground">
Header name for authentication (e.g., "Authorization" or "x-api-key")
</p>
</div>
</div>
<div className="space-y-2">
<Label>Auth Prefix</Label>
<Input
value={form.custom_auth_prefix ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_prefix: e.target.value })}
placeholder="Bearer "
/>
<p className="text-xs text-muted-foreground">
Prefix added before API key (e.g., "Bearer " for OpenAI, empty for Custom REST)
</p>
</div>
{/* Custom REST specific: User ID field */}
{normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT && (
<div className="space-y-2">
<Label>Email Address</Label>
<Input
value={form.user_id ?? ""}
onChange={(e) => setForm({ ...form, user_id: e.target.value })}
placeholder="user@example.com"
/>
<p className="text-xs text-muted-foreground">
Optional: Email address for usage tracking. If omitted, costs are attributed to the API key owner.
</p>
</div>
)}
{/* Custom REST specific: model dropdown with custom option */}
{normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT && (
<div className="space-y-2">
<Label>Model</Label>
<Select
value={isCustomModel ? CUSTOM_MODEL_OPTION : form.model}
onValueChange={(value) => {
if (value === CUSTOM_MODEL_OPTION) {
setIsCustomModel(true);
if (CUSTOM_REST_MODELS.includes(form.model as (typeof CUSTOM_REST_MODELS)[number])) {
setForm({ ...form, model: "" });
setCustomModelInput("");
}
} else {
setIsCustomModel(false);
setCustomModelInput("");
setForm({ ...form, model: value });
}
}}
>
<SelectTrigger>
<SelectValue placeholder="Select a model..." />
</SelectTrigger>
<SelectContent>
{CUSTOM_REST_MODELS.map((model) => (
<SelectItem key={model} value={model}>
{model}
</SelectItem>
))}
<SelectItem value={CUSTOM_MODEL_OPTION}>Custom model...</SelectItem>
</SelectContent>
</Select>
{isCustomModel && (
<Input
value={customModelInput}
onChange={(e) => {
const value = e.target.value;
setCustomModelInput(value);
setForm({ ...form, model: value });
}}
placeholder="Enter custom model ID"
/>
)}
</div>
)}
</div>
</>
)}
{/* Test result */}
{testResult && (
<div

View File

@ -1,5 +1,6 @@
import React, { useState } from "react";
import { ExternalLink, Check, X, Loader2 } from "lucide-react";
import React, { useState, useEffect } from "react";
import { ExternalLink, Check, X, Loader2, Key, Globe, Lock } from "lucide-react";
import { invoke } from "@tauri-apps/api/core";
import {
Card,
CardHeader,
@ -9,14 +10,21 @@ import {
Button,
Input,
Label,
RadioGroup,
RadioGroupItem,
} from "@/components/ui";
import {
initiateOauthCmd,
authenticateWithWebviewCmd,
saveManualTokenCmd,
testConfluenceConnectionCmd,
testServiceNowConnectionCmd,
testAzureDevOpsConnectionCmd,
saveIntegrationConfigCmd,
getAllIntegrationConfigsCmd,
} from "@/lib/tauriCommands";
import { invoke } from "@tauri-apps/api/core";
type AuthMode = "oauth2" | "webview" | "token";
interface IntegrationConfig {
service: string;
@ -25,6 +33,10 @@ interface IntegrationConfig {
projectName?: string;
spaceKey?: string;
connected: boolean;
authMode: AuthMode;
token?: string;
tokenType?: string;
webviewId?: string;
}
export default function Integrations() {
@ -34,34 +46,76 @@ export default function Integrations() {
baseUrl: "",
spaceKey: "",
connected: false,
authMode: "webview",
tokenType: "Bearer",
},
servicenow: {
service: "servicenow",
baseUrl: "",
username: "",
connected: false,
authMode: "token",
tokenType: "Basic",
},
azuredevops: {
service: "azuredevops",
baseUrl: "",
projectName: "",
connected: false,
authMode: "webview",
tokenType: "Bearer",
},
});
const [loading, setLoading] = useState<Record<string, boolean>>({});
const [testResults, setTestResults] = useState<Record<string, { success: boolean; message: string } | null>>({});
const handleConnect = async (service: string) => {
// Load configs from database on mount
useEffect(() => {
const loadConfigs = async () => {
try {
const savedConfigs = await getAllIntegrationConfigsCmd();
const configMap: Record<string, Partial<IntegrationConfig>> = {};
savedConfigs.forEach((cfg) => {
configMap[cfg.service] = {
baseUrl: cfg.base_url,
username: cfg.username || "",
projectName: cfg.project_name || "",
spaceKey: cfg.space_key || "",
};
});
setConfigs((prev) => ({
confluence: { ...prev.confluence, ...configMap.confluence },
servicenow: { ...prev.servicenow, ...configMap.servicenow },
azuredevops: { ...prev.azuredevops, ...configMap.azuredevops },
}));
} catch (err) {
console.error("Failed to load integration configs:", err);
}
};
loadConfigs();
}, []);
const handleAuthModeChange = (service: string, mode: AuthMode) => {
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], authMode: mode, connected: false },
}));
setTestResults((prev) => ({ ...prev, [service]: null }));
};
const handleConnectOAuth = async (service: string) => {
setLoading((prev) => ({ ...prev, [service]: true }));
try {
const response = await initiateOauthCmd(service);
// Open auth URL in default browser using shell plugin
// Open auth URL in default browser
await invoke("plugin:shell|open", { path: response.auth_url });
// Mark as connected (optimistic)
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], connected: true },
@ -82,6 +136,83 @@ export default function Integrations() {
}
};
const handleConnectWebview = async (service: string) => {
const config = configs[service];
setLoading((prev) => ({ ...prev, [service]: true }));
try {
const response = await authenticateWithWebviewCmd(
service,
config.baseUrl,
config.projectName
);
setConfigs((prev) => ({
...prev,
[service]: {
...prev[service],
webviewId: response.webview_id,
connected: true, // Mark as connected since window persists
},
}));
setTestResults((prev) => ({
...prev,
[service]: { success: true, message: response.message },
}));
} catch (err) {
console.error("Failed to open webview:", err);
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: String(err) },
}));
} finally {
setLoading((prev) => ({ ...prev, [service]: false }));
}
};
const handleSaveToken = async (service: string) => {
const config = configs[service];
if (!config.token) {
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: "Please enter a token" },
}));
return;
}
setLoading((prev) => ({ ...prev, [`save-${service}`]: true }));
try {
const result = await saveManualTokenCmd({
service,
token: config.token,
token_type: config.tokenType || "Bearer",
base_url: config.baseUrl,
});
if (result.success) {
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], connected: true },
}));
}
setTestResults((prev) => ({
...prev,
[service]: result,
}));
} catch (err) {
console.error("Failed to save token:", err);
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: String(err) },
}));
} finally {
setLoading((prev) => ({ ...prev, [`save-${service}`]: false }));
}
};
const handleTestConnection = async (service: string) => {
setLoading((prev) => ({ ...prev, [`test-${service}`]: true }));
setTestResults((prev) => ({ ...prev, [service]: null }));
@ -121,11 +252,163 @@ export default function Integrations() {
}
};
const updateConfig = (service: string, field: string, value: string) => {
const updateConfig = async (service: string, field: string, value: string) => {
const updatedConfig = { ...configs[service], [field]: value };
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], [field]: value },
[service]: updatedConfig,
}));
// Save to database on every change (no debounce is applied here)
try {
await saveIntegrationConfigCmd({
service,
base_url: updatedConfig.baseUrl,
username: updatedConfig.username,
project_name: updatedConfig.projectName,
space_key: updatedConfig.spaceKey,
});
} catch (err) {
console.error("Failed to save integration config:", err);
}
};
const renderAuthSection = (service: string) => {
const config = configs[service];
const isOAuthSupported = service !== "servicenow"; // ServiceNow doesn't support OAuth2
return (
<div className="space-y-4">
{/* Auth Mode Selection */}
<div className="space-y-3">
<Label>Authentication Method</Label>
<RadioGroup
value={config.authMode}
onValueChange={(value) => handleAuthModeChange(service, value as AuthMode)}
>
{isOAuthSupported && (
<div className="flex items-center space-x-2">
<RadioGroupItem value="oauth2" id={`${service}-oauth`} />
<Label htmlFor={`${service}-oauth`} className="font-normal cursor-pointer flex items-center gap-2">
<Lock className="w-4 h-4" />
OAuth2 (Enterprise SSO)
</Label>
</div>
)}
<div className="flex items-center space-x-2">
<RadioGroupItem value="webview" id={`${service}-webview`} />
<Label htmlFor={`${service}-webview`} className="font-normal cursor-pointer flex items-center gap-2">
<Globe className="w-4 h-4" />
Browser Login (Works off-VPN)
</Label>
</div>
<div className="flex items-center space-x-2">
<RadioGroupItem value="token" id={`${service}-token`} />
<Label htmlFor={`${service}-token`} className="font-normal cursor-pointer flex items-center gap-2">
<Key className="w-4 h-4" />
Manual Token/API Key
</Label>
</div>
</RadioGroup>
</div>
{/* OAuth2 Mode */}
{config.authMode === "oauth2" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
OAuth2 requires pre-registered application credentials. This may not work in all enterprise environments.
</p>
<Button
onClick={() => handleConnectOAuth(service)}
disabled={loading[service] || !config.baseUrl}
>
{loading[service] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : config.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
</div>
)}
{/* Webview Mode */}
{config.authMode === "webview" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
Opens a persistent browser window for you to log in. Works even when off-VPN.
The browser window stays open across app restarts and maintains your session automatically.
</p>
{config.webviewId ? (
<div className="p-3 bg-green-500/10 text-green-700 dark:text-green-400 rounded text-sm">
<Check className="w-4 h-4 inline mr-2" />
Browser window is open. Log in there and leave it open - your session will persist across app restarts.
You can close this window manually when done.
</div>
) : (
<Button
onClick={() => handleConnectWebview(service)}
disabled={loading[service] || !config.baseUrl}
>
{loading[service] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Opening...
</>
) : (
"Open Browser"
)}
</Button>
)}
</div>
)}
{/* Token Mode */}
{config.authMode === "token" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
Enter a Personal Access Token (PAT), API Key, or Bearer token. Most reliable method but requires manual token generation.
</p>
<div className="space-y-2">
<Label htmlFor={`${service}-token-input`}>Token</Label>
<Input
id={`${service}-token-input`}
type="password"
placeholder={service === "confluence" ? "Bearer token or API key" : "API token or PAT"}
value={config.token || ""}
onChange={(e) => updateConfig(service, "token", e.target.value)}
/>
<p className="text-xs text-muted-foreground">
{service === "confluence" && "Generate at: https://id.atlassian.com/manage-profile/security/api-tokens"}
{service === "azuredevops" && "Generate at: https://dev.azure.com/{org}/_usersSettings/tokens"}
{service === "servicenow" && "Use your ServiceNow password or API key"}
</p>
</div>
<Button
onClick={() => handleSaveToken(service)}
disabled={loading[`save-${service}`] || !config.token}
>
{loading[`save-${service}`] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Validating...
</>
) : (
"Save & Validate Token"
)}
</Button>
</div>
)}
</div>
);
};
return (
@ -133,7 +416,7 @@ export default function Integrations() {
<div>
<h1 className="text-3xl font-bold">Integrations</h1>
<p className="text-muted-foreground mt-1">
Connect TFTSR with your existing tools and platforms via OAuth2.
Connect Troubleshooting and RCA Assistant with your existing tools and platforms. Choose the authentication method that works best for your environment.
</p>
</div>
@ -145,7 +428,7 @@ export default function Integrations() {
Confluence
</CardTitle>
<CardDescription>
Publish RCA documents to Confluence spaces. Requires OAuth2 authentication with Atlassian.
Publish RCA documents to Confluence spaces. Supports OAuth2, browser login, or API tokens.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -169,26 +452,9 @@ export default function Integrations() {
/>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() => handleConnect("confluence")}
disabled={loading.confluence || !configs.confluence.baseUrl}
>
{loading.confluence ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : configs.confluence.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
{renderAuthSection("confluence")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("confluence")}
@ -232,7 +498,7 @@ export default function Integrations() {
ServiceNow
</CardTitle>
<CardDescription>
Link incidents and push resolution steps. Uses basic authentication (username + password).
Link incidents and push resolution steps. Supports browser login or basic authentication.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -256,35 +522,9 @@ export default function Integrations() {
/>
</div>
<div className="space-y-2">
<Label htmlFor="servicenow-password">Password</Label>
<Input
id="servicenow-password"
type="password"
placeholder="••••••••"
disabled
/>
<p className="text-xs text-muted-foreground">
ServiceNow credentials are stored securely after first login. OAuth2 not supported.
</p>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() =>
setTestResults((prev) => ({
...prev,
servicenow: {
success: false,
message: "ServiceNow uses basic authentication, not OAuth2. Enter credentials above.",
},
}))
}
disabled={!configs.servicenow.baseUrl || !configs.servicenow.username}
>
Save Credentials
</Button>
{renderAuthSection("servicenow")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("servicenow")}
@ -328,7 +568,7 @@ export default function Integrations() {
Azure DevOps
</CardTitle>
<CardDescription>
Create work items and attach RCA documents. Requires OAuth2 authentication with Microsoft.
Create work items and attach RCA documents. Supports OAuth2, browser login, or PAT tokens.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -352,26 +592,9 @@ export default function Integrations() {
/>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() => handleConnect("azuredevops")}
disabled={loading.azuredevops || !configs.azuredevops.baseUrl}
>
{loading.azuredevops ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : configs.azuredevops.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
{renderAuthSection("azuredevops")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("azuredevops")}
@ -408,14 +631,12 @@ export default function Integrations() {
</Card>
<div className="p-4 bg-muted/50 rounded-lg space-y-2">
<p className="text-sm font-semibold">How OAuth2 Authentication Works:</p>
<ol className="text-xs text-muted-foreground space-y-1 list-decimal list-inside">
<li>Click "Connect with OAuth2" to open the service's authentication page</li>
<li>Log in with your service credentials in your default browser</li>
<li>Authorize TFTSR to access your account</li>
<li>You'll be automatically redirected back and the connection will be saved</li>
<li>Tokens are encrypted and stored locally in your secure database</li>
</ol>
<p className="text-sm font-semibold">Authentication Method Comparison:</p>
<ul className="text-xs text-muted-foreground space-y-1 list-disc list-inside">
<li><strong>OAuth2:</strong> Most secure, but requires pre-registered app. May not work with enterprise SSO.</li>
<li><strong>Browser Login:</strong> Best for VPN environments. Opens a persistent browser window that stays open across app restarts. Your session is maintained automatically.</li>
<li><strong>Manual Token:</strong> Most reliable fallback. Requires generating API tokens manually from each service.</li>
</ul>
</div>
</div>
);

View File

@ -123,7 +123,7 @@ export default function Ollama() {
Manage local AI models via Ollama for privacy-first inference.
</p>
</div>
<Button variant="outline" onClick={loadData} disabled={isLoading}>
<Button variant="outline" onClick={loadData} disabled={isLoading} className="border-border text-foreground bg-card hover:bg-accent">
<RefreshCw className={`w-4 h-4 mr-2 ${isLoading ? "animate-spin" : ""}`} />
Refresh
</Button>
@ -169,24 +169,16 @@ export default function Ollama() {
{status && !status.installed && installGuide && (
<Card className="border-yellow-500/50">
<CardHeader>
<CardTitle className="text-lg flex items-center gap-2">
<Download className="w-5 h-5 text-yellow-500" />
<CardTitle className="text-lg">
Ollama Not Detected - Installation Required
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<CardContent>
<ol className="space-y-2 list-decimal list-inside">
{installGuide.steps.map((step, i) => (
<li key={i} className="text-sm text-muted-foreground">{step}</li>
))}
</ol>
<Button
variant="outline"
onClick={() => window.open(installGuide.url, "_blank")}
>
<Download className="w-4 h-4 mr-2" />
Download Ollama for {installGuide.platform}
</Button>
</CardContent>
</Card>
)}

View File

@ -9,6 +9,7 @@ import {
Separator,
} from "@/components/ui";
import { getAuditLogCmd, type AuditEntry } from "@/lib/tauriCommands";
import { useSettingsStore } from "@/stores/settingsStore";
const piiPatterns = [
{ id: "email", label: "Email Addresses", description: "Detect email addresses in logs" },
@ -22,9 +23,7 @@ const piiPatterns = [
];
export default function Security() {
const [enabledPatterns, setEnabledPatterns] = useState<Record<string, boolean>>(() =>
Object.fromEntries(piiPatterns.map((p) => [p.id, true]))
);
const { pii_enabled_patterns, setPiiPattern } = useSettingsStore();
const [auditEntries, setAuditEntries] = useState<AuditEntry[]>([]);
const [expandedRows, setExpandedRows] = useState<Set<string>>(new Set());
const [isLoading, setIsLoading] = useState(false);
@ -46,10 +45,6 @@ export default function Security() {
}
};
const togglePattern = (id: string) => {
setEnabledPatterns((prev) => ({ ...prev, [id]: !prev[id] }));
};
const toggleRow = (entryId: string) => {
setExpandedRows((prev) => {
const newSet = new Set(prev);
@ -92,15 +87,15 @@ export default function Security() {
<button
type="button"
role="switch"
aria-checked={enabledPatterns[pattern.id]}
onClick={() => togglePattern(pattern.id)}
aria-checked={pii_enabled_patterns[pattern.id]}
onClick={() => setPiiPattern(pattern.id, !pii_enabled_patterns[pattern.id])}
className={`relative inline-flex h-6 w-11 items-center rounded-full transition-colors ${
enabledPatterns[pattern.id] ? "bg-blue-500" : "bg-muted"
pii_enabled_patterns[pattern.id] ? "bg-blue-500" : "bg-muted"
}`}
>
<span
className={`inline-block h-5 w-5 rounded-full bg-white transition-transform ${
enabledPatterns[pattern.id] ? "translate-x-5" : "translate-x-0.5"
pii_enabled_patterns[pattern.id] ? "translate-x-5" : "translate-x-0.5"
}`}
/>
</button>

View File

@ -6,9 +6,12 @@ interface SettingsState extends AppSettings {
addProvider: (provider: ProviderConfig) => void;
updateProvider: (index: number, provider: ProviderConfig) => void;
removeProvider: (index: number) => void;
setProviders: (providers: ProviderConfig[]) => void;
setActiveProvider: (name: string) => void;
setTheme: (theme: "light" | "dark") => void;
getActiveProvider: () => ProviderConfig | undefined;
pii_enabled_patterns: Record<string, boolean>;
setPiiPattern: (id: string, enabled: boolean) => void;
}
export const useSettingsStore = create<SettingsState>()(
@ -33,14 +36,34 @@ export const useSettingsStore = create<SettingsState>()(
set((state) => ({
ai_providers: state.ai_providers.filter((_, i) => i !== index),
})),
setProviders: (providers) => set({ ai_providers: providers }),
setActiveProvider: (name) => set({ active_provider: name }),
setTheme: (theme) => set({ theme }),
pii_enabled_patterns: Object.fromEntries(
["email", "ip_address", "phone", "ssn", "credit_card", "hostname", "password", "api_key"]
.map((id) => [id, true])
) as Record<string, boolean>,
setPiiPattern: (id: string, enabled: boolean) =>
set((state) => ({
pii_enabled_patterns: { ...state.pii_enabled_patterns, [id]: enabled },
})),
getActiveProvider: () => {
const state = get();
return state.ai_providers.find((p) => p.name === state.active_provider)
?? state.ai_providers[0];
},
}),
{ name: "tftsr-settings" }
{
name: "tftsr-settings",
// Don't persist ai_providers to localStorage - they're stored in encrypted database
partialize: (state) => ({
theme: state.theme,
active_provider: state.active_provider,
default_provider: state.default_provider,
default_model: state.default_model,
ollama_url: state.ollama_url,
pii_enabled_patterns: state.pii_enabled_patterns,
}),
}
)
);
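Worth pausing on the `partialize` block above: zustand's persist middleware serializes only the returned subset, wrapped in its documented `{ state, version }` envelope. A minimal sketch of what that means for the stored value (key names from the diff; the envelope shape is zustand's default JSON storage):

```ts
// Hypothetical check against the "tftsr-settings" localStorage entry.
// Because partialize omits ai_providers, no provider api_key ever lands here.
const raw = localStorage.getItem("tftsr-settings");
if (raw) {
  const persisted = JSON.parse(raw) as { state: Record<string, unknown>; version: number };
  console.assert(!("ai_providers" in persisted.state)); // stays in the encrypted DB
  console.assert("pii_enabled_patterns" in persisted.state); // survives restarts
}
```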

View File

@ -0,0 +1,25 @@
import { describe, it, expect } from "vitest";
import {
CUSTOM_MODEL_OPTION,
CUSTOM_REST_FORMAT,
CUSTOM_REST_MODELS,
LEGACY_API_FORMAT,
normalizeApiFormat,
} from "@/pages/Settings/AIProviders";
describe("AIProviders Custom REST helpers", () => {
it("maps legacy msi_genai api_format to custom_rest", () => {
expect(normalizeApiFormat(LEGACY_API_FORMAT)).toBe(CUSTOM_REST_FORMAT);
});
it("keeps openai api_format unchanged", () => {
expect(normalizeApiFormat("openai")).toBe("openai");
});
it("contains the guide model list and custom model option sentinel", () => {
expect(CUSTOM_REST_MODELS).toContain("ChatGPT4o");
expect(CUSTOM_REST_MODELS).toContain("VertexGemini");
expect(CUSTOM_REST_MODELS).toContain("Gemini-3_Pro-Preview");
expect(CUSTOM_MODEL_OPTION).toBe("__custom_model__");
});
});

View File

@ -0,0 +1,29 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag workflow release triggering", () => {
it("creates tags via git push instead of Gitea tag API", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("git push origin \"refs/tags/$NEXT\"");
expect(workflow).not.toContain("POST \"$API/tags\"");
});
it("runs release build jobs after auto-tag succeeds", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("build-linux-amd64:");
expect(workflow).toContain("build-windows-amd64:");
expect(workflow).toContain("build-macos-arm64:");
expect(workflow).toContain("build-linux-arm64:");
expect(workflow).toContain("needs: autotag");
expect(workflow).toContain("TAG=$(curl -s \"$API/tags?limit=50\"");
expect(workflow).toContain("ERROR: Could not resolve release tag from repository tags.");
});
});

View File

@ -0,0 +1,144 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const root = process.cwd();
const readFile = (rel: string) => readFileSync(path.resolve(root, rel), "utf-8");
// ─── Dockerfiles ─────────────────────────────────────────────────────────────
describe("Dockerfile.linux-amd64", () => {
const df = readFile(".docker/Dockerfile.linux-amd64");
it("is based on the pinned Rust 1.88 slim image", () => {
expect(df).toContain("FROM rust:1.88-slim");
});
it("installs webkit2gtk 4.1 dev package", () => {
expect(df).toContain("libwebkit2gtk-4.1-dev");
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
expect(df).toContain("nodejs");
});
it("pre-adds the x86_64 Linux Rust target", () => {
expect(df).toContain("rustup target add x86_64-unknown-linux-gnu");
});
it("cleans apt lists to keep image lean", () => {
expect(df).toContain("rm -rf /var/lib/apt/lists/*");
});
});
describe("Dockerfile.windows-cross", () => {
const df = readFile(".docker/Dockerfile.windows-cross");
it("is based on the pinned Rust 1.88 slim image", () => {
expect(df).toContain("FROM rust:1.88-slim");
});
it("installs mingw-w64 cross-compiler", () => {
expect(df).toContain("mingw-w64");
});
it("installs nsis for Windows installer bundling", () => {
expect(df).toContain("nsis");
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
});
it("pre-adds the Windows GNU Rust target", () => {
expect(df).toContain("rustup target add x86_64-pc-windows-gnu");
});
it("cleans apt lists to keep image lean", () => {
expect(df).toContain("rm -rf /var/lib/apt/lists/*");
});
});
describe("Dockerfile.linux-arm64", () => {
const df = readFile(".docker/Dockerfile.linux-arm64");
it("is based on Ubuntu 22.04 (Jammy)", () => {
expect(df).toContain("FROM ubuntu:22.04");
});
it("installs aarch64 cross-compiler", () => {
expect(df).toContain("gcc-aarch64-linux-gnu");
expect(df).toContain("g++-aarch64-linux-gnu");
});
it("sets up arm64 multiarch via ports.ubuntu.com", () => {
expect(df).toContain("dpkg --add-architecture arm64");
expect(df).toContain("ports.ubuntu.com/ubuntu-ports");
expect(df).toContain("jammy");
});
it("installs arm64 webkit2gtk dev package", () => {
expect(df).toContain("libwebkit2gtk-4.1-dev:arm64");
});
it("installs Rust 1.88 with arm64 cross-compilation target", () => {
expect(df).toContain("--default-toolchain 1.88.0");
expect(df).toContain("rustup target add aarch64-unknown-linux-gnu");
});
it("adds cargo to PATH via ENV", () => {
expect(df).toContain('ENV PATH="/root/.cargo/bin:${PATH}"');
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
});
});
// ─── build-images.yml workflow ───────────────────────────────────────────────
describe("build-images.yml workflow", () => {
const wf = readFile(".gitea/workflows/build-images.yml");
it("triggers on changes to .docker/ files on master", () => {
expect(wf).toContain("- master");
expect(wf).toContain("- '.docker/**'");
});
it("supports manual workflow_dispatch trigger", () => {
expect(wf).toContain("workflow_dispatch:");
});
it("does not explicitly mount the Docker socket (act_runner mounts it automatically)", () => {
// act_runner already mounts /var/run/docker.sock; an explicit options: mount
// causes a 'Duplicate mount point' error and must not be present.
expect(wf).not.toContain("-v /var/run/docker.sock:/var/run/docker.sock");
});
it("authenticates to the local Gitea registry before pushing", () => {
expect(wf).toContain("docker login");
expect(wf).toContain("--password-stdin");
expect(wf).toContain("172.0.0.29:3000");
});
it("builds and pushes all three platform images", () => {
expect(wf).toContain("trcaa-linux-amd64:rust1.88-node22");
expect(wf).toContain("trcaa-windows-cross:rust1.88-node22");
expect(wf).toContain("trcaa-linux-arm64:rust1.88-node22");
});
it("uses docker:24-cli image for build jobs", () => {
expect(wf).toContain("docker:24-cli");
});
it("runs all three build jobs on linux-amd64 runner", () => {
const matches = wf.match(/runs-on: linux-amd64/g) ?? [];
expect(matches.length).toBeGreaterThanOrEqual(3);
});
it("uses RELEASE_TOKEN secret for registry auth", () => {
expect(wf).toContain("secrets.RELEASE_TOKEN");
});
});

View File

@ -0,0 +1,54 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag release cross-platform artifact handling", () => {
it("overrides OpenSSL vendoring for windows-gnu cross builds", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("OPENSSL_NO_VENDOR: \"0\"");
expect(workflow).toContain("OPENSSL_STATIC: \"1\"");
});
it("fails linux uploads when no artifacts are found", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("ERROR: No Linux amd64 artifacts were found to upload.");
expect(workflow).toContain("ERROR: No Linux arm64 artifacts were found to upload.");
expect(workflow).toContain("CI=true npx tauri build");
expect(workflow).toContain("find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle -type f");
expect(workflow).toContain("CC_aarch64_unknown_linux_gnu: aarch64-linux-gnu-gcc");
expect(workflow).toContain("PKG_CONFIG_ALLOW_CROSS: \"1\"");
expect(workflow).toContain("aarch64-unknown-linux-gnu");
});
it("fails windows uploads when no artifacts are found", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain(
"ERROR: No Windows amd64 artifacts were found to upload.",
);
});
it("replaces existing release assets before uploading reruns", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("Deleting existing asset id=$id name=$NAME before upload...");
expect(workflow).toContain("-X DELETE \"$API/releases/$RELEASE_ID/assets/$id\"");
expect(workflow).toContain("UPLOAD_NAME=\"linux-amd64-$NAME\"");
expect(workflow).toContain("UPLOAD_NAME=\"linux-arm64-$NAME\"");
});
it("uses Ubuntu 22.04 with ports mirror for arm64 cross-compile", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("ubuntu:22.04");
expect(workflow).toContain("ports.ubuntu.com/ubuntu-ports");
expect(workflow).toContain("jammy");
});
});

View File

@ -0,0 +1,23 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag release macOS bundle path", () => {
it("does not reference the legacy TFTSR.app bundle name", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).not.toContain("/bundle/macos/TFTSR.app");
});
it("resolves the macOS .app bundle dynamically", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("APP=$(find");
expect(workflow).toContain("-name \"*.app\"");
});
});

View File

@ -9,8 +9,11 @@ const mockProvider: ProviderConfig = {
model: "gpt-4o",
};
const DEFAULT_PII_PATTERNS = ["email", "ip_address", "phone", "ssn", "credit_card", "hostname", "password", "api_key"];
describe("Settings Store", () => {
beforeEach(() => {
localStorage.clear();
useSettingsStore.setState({
theme: "dark",
ai_providers: [],
@ -18,6 +21,7 @@ describe("Settings Store", () => {
default_provider: "ollama",
default_model: "llama3.2:3b",
ollama_url: "http://localhost:11434",
pii_enabled_patterns: Object.fromEntries(DEFAULT_PII_PATTERNS.map((id) => [id, true])),
});
});
@ -43,4 +47,62 @@ describe("Settings Store", () => {
useSettingsStore.getState().setTheme("light");
expect(useSettingsStore.getState().theme).toBe("light");
});
it("does not persist API keys to localStorage", () => {
useSettingsStore.getState().addProvider(mockProvider);
const raw = localStorage.getItem("tftsr-settings");
expect(raw).toBeTruthy();
expect(raw).not.toContain("sk-test-key");
});
});
describe("Settings Store — PII patterns", () => {
beforeEach(() => {
localStorage.clear();
useSettingsStore.setState({
theme: "dark",
ai_providers: [],
active_provider: undefined,
default_provider: "ollama",
default_model: "llama3.2:3b",
ollama_url: "http://localhost:11434",
pii_enabled_patterns: Object.fromEntries(DEFAULT_PII_PATTERNS.map((id) => [id, true])),
});
});
it("initializes all 8 PII patterns as enabled by default", () => {
const patterns = useSettingsStore.getState().pii_enabled_patterns;
for (const id of DEFAULT_PII_PATTERNS) {
expect(patterns[id]).toBe(true);
}
});
it("setPiiPattern disables a single pattern", () => {
useSettingsStore.getState().setPiiPattern("email", false);
expect(useSettingsStore.getState().pii_enabled_patterns["email"]).toBe(false);
});
it("setPiiPattern does not affect other patterns", () => {
useSettingsStore.getState().setPiiPattern("email", false);
for (const id of DEFAULT_PII_PATTERNS.filter((id) => id !== "email")) {
expect(useSettingsStore.getState().pii_enabled_patterns[id]).toBe(true);
}
});
it("setPiiPattern re-enables a disabled pattern", () => {
useSettingsStore.getState().setPiiPattern("ssn", false);
useSettingsStore.getState().setPiiPattern("ssn", true);
expect(useSettingsStore.getState().pii_enabled_patterns["ssn"]).toBe(true);
});
it("pii_enabled_patterns is persisted to localStorage", () => {
useSettingsStore.getState().setPiiPattern("api_key", false);
const raw = localStorage.getItem("tftsr-settings");
expect(raw).toBeTruthy();
// Zustand persist wraps state in { state: {...}, version: ... }
const parsed = JSON.parse(raw!);
const stored = parsed.state ?? parsed;
expect(stored.pii_enabled_patterns.api_key).toBe(false);
expect(stored.pii_enabled_patterns.email).toBe(true);
});
});

View File

@ -0,0 +1,56 @@
# Fix: build-linux-arm64 — Switch to Ubuntu 22.04 with ports mirror
## Description
The `build-linux-arm64` CI job failed repeatedly with
`E: Unable to correct problems, you have held broken packages` during the
Install dependencies step. Root cause: `rust:1.88-slim` (Debian Bookworm) uses a single
mirror for all architectures. When both `[arch=amd64]` and `[arch=arm64]` entries point at
the same Debian repo, apt's dependency resolver hits unavoidable conflicts — the `binary-all`
package index is duplicated and certain `-dev` package pairs cannot be co-installed because
they lack `Multi-Arch: same`. This is a structural Debian single-mirror multiarch limitation
that cannot be fixed by tweaking `sources.list`.
Ubuntu 22.04 solves this by routing arm64 through a separate mirror:
`ports.ubuntu.com/ubuntu-ports`. amd64 and arm64 packages come from entirely different repos,
eliminating all cross-arch index overlaps and resolution conflicts.
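A sketch of the resulting split, expressed as illustrative apt source lines (the actual workflow step rewrites the image's `sources.list` with `sed`; the exact component lists here are assumptions):

```ts
// Illustrative only: the two mirror entries the fix relies on. Pinning the
// main mirror to amd64 and routing arm64 to ports.ubuntu.com means the two
// architectures never share a package index, so apt can resolve both cleanly.
export const amd64Entry =
  "deb [arch=amd64] http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse";
export const arm64Entry =
  "deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse";
```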
## Acceptance Criteria
- `build-linux-arm64` Install dependencies step completes without apt errors
- `ubuntu:22.04` is the container image for the arm64 job
- Ubuntu's `ports.ubuntu.com/ubuntu-ports` is used for arm64 packages
- `libayatana-appindicator3-dev:arm64` is removed (no tray icon in this app)
- Rust is installed via `rustup` (not pre-installed in Ubuntu base)
- All 51 frontend tests pass
- YAML is syntactically valid
## Work Implemented
### `.gitea/workflows/auto-tag.yml`
- **Container**: `rust:1.88-slim` → `ubuntu:22.04` for `build-linux-arm64` job
- **Install dependencies step**: Full replacement
- Step 1: Host tools + aarch64 cross-compiler (amd64 packages, installed before multiarch registration)
- Step 2: Register arm64 architecture; `sed` existing `sources.list` entries to `[arch=amd64]`; add `arm64-ports.list` pointing at `ports.ubuntu.com/ubuntu-ports jammy`
- Step 3: ARM64 dev libs (`libwebkit2gtk-4.1-dev`, `libssl-dev`, `libgtk-3-dev`, `librsvg2-dev`) — `libayatana-appindicator3-dev:arm64` removed
- Step 4: Node.js via NodeSource
- Step 5: Rust 1.88.0 via `rustup --no-modify-path`; `$HOME/.cargo/bin` appended to `$GITHUB_PATH`
- **Build step**: Added `source "$HOME/.cargo/env"` as first line (belt-and-suspenders for Rust PATH)
### `tests/unit/releaseWorkflowCrossPlatformArtifacts.test.ts`
- Added new test: `"uses Ubuntu 22.04 with ports mirror for arm64 cross-compile"` — asserts workflow contains `ubuntu:22.04`, `ports.ubuntu.com/ubuntu-ports`, and `jammy`
- All previously passing assertions continue to pass (build step env vars and upload paths unchanged)
### `docs/wiki/CICD-Pipeline.md`
- `build-linux-arm64` job entry now mentions Ubuntu 22.04 + ports mirror
- New Known Issue entry: **Debian Multiarch Breaks arm64 Cross-Compile** — documents the root cause and the Ubuntu 22.04 fix for future reference
## Testing Needed
- [ ] YAML validation: `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))" && echo OK` — **PASSED**
- [ ] Frontend tests: `npm run test:run` — **51/51 PASSED** (50 existing + 1 new)
- [ ] CI integration: Push branch → merge PR → observe `build-linux-arm64` Install dependencies step completes without `held broken packages` error
- [ ] Verify arm64 `.deb`, `.rpm`, `.AppImage` artifacts are uploaded to the Gitea release

View File

@ -0,0 +1,122 @@
# Ticket Summary — UI Fixes + Ollama Bundling + Theme Toggle
**Branch**: `feat/ui-fixes-ollama-bundle-theme`
---
## Description
Multiple UI issues were identified and resolved following the arm64 build stabilization:
- `custom_rest` provider showed a disabled model input instead of the live dropdown already present lower in the form
- Auth Header Name auto-filled with an internal vendor-specific key name on format selection
- "User ID (CORE ID)" label and placeholder exposed internal organizational terminology
- Refresh buttons on the Ollama and Dashboard pages had near-zero contrast against dark card backgrounds
- PII detection toggles in Security settings silently reset to all-enabled on every app restart (no persistence)
- Ollama required manual installation; no offline install path existed
- No light/dark theme toggle UI existed despite the infrastructure already being wired up
Additionally, a new `install_ollama_from_bundle` Tauri command allows the app to copy a bundled Ollama binary to the system install path, enabling offline-first deployment. CI was updated to download the appropriate Ollama binary for each platform during the release build.
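The handler side of that command is only named later in this summary (`handleInstallFromBundle`); a minimal sketch of how the described pieces might fit together, assuming the `installOllamaFromBundleCmd` wrapper and a `loadData` status refresh as on the Ollama page (the error shape is an assumption; Tauri rejects the promise with the command's `Err` string):

```tsx
import { useState } from "react";
import { installOllamaFromBundleCmd } from "@/lib/tauriCommands";

// Sketch: copy the bundled binary via the Tauri command, then re-check status.
function useInstallFromBundle(loadData: () => Promise<void>) {
  const [isInstallingBundle, setIsInstallingBundle] = useState(false);

  const handleInstallFromBundle = async () => {
    setIsInstallingBundle(true);
    try {
      await installOllamaFromBundleCmd(); // resolves with a success message
      await loadData(); // refresh the "Ollama Not Detected" card state
    } catch (e) {
      console.error("Bundled Ollama install failed:", e);
    } finally {
      setIsInstallingBundle(false);
    }
  };

  return { isInstallingBundle, handleInstallFromBundle };
}
```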
---
## Acceptance Criteria
- [ ] **Custom REST model**: Selecting Type=Custom + API Format=Custom REST causes the top-level Model row to disappear; the dropdown at the bottom is visible and populated with all models
- [ ] **Auth Header**: Field is blank by default when Custom REST format is selected (no internal values)
- [ ] **User ID label**: Reads "Email Address" with placeholder `user@example.com` and a generic description
- [ ] **Auth Header description**: No longer references internal key name examples
- [ ] **Refresh buttons**: Visually distinct (border + background) against dark card backgrounds on Dashboard and Ollama pages
- [ ] **PII toggles**: Toggling patterns off, navigating away, and returning preserves the disabled state across app restarts
- [ ] **Theme toggle**: Sun/Moon icon button in the sidebar footer switches between light and dark themes; works when sidebar is collapsed
- [ ] **Install Ollama (Offline)**: Button appears in the "Ollama Not Detected" card; clicking it copies the bundled binary and refreshes status
- [ ] **CI**: Each platform build job downloads the correct Ollama binary before `tauri build` and places it in `src-tauri/resources/ollama/`
- [ ] `npx tsc --noEmit` — zero errors
- [ ] `npm run test:run` — 51/51 tests pass
- [ ] `cargo check` — zero errors
- [ ] `cargo clippy -- -D warnings` — zero warnings
- [ ] `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))"` — YAML valid
---
## Work Implemented
### Phase 1 — Frontend (6 files)
**`src/pages/Settings/AIProviders.tsx`**
- Removed the disabled Model `<Input>` shown when Custom REST is active; the grid row is now hidden via conditional render — the dropdown further down the form handles model selection for this format
- Removed `custom_auth_header: "x-msi-genai-api-key"` prefill on format switch; field now starts empty
- Replaced example in Auth Header description from internal key name to generic `"x-api-key"`
- Renamed "User ID (CORE ID)" → "Email Address"; updated placeholder from `your.name@motorolasolutions.com``user@example.com`; removed Motorola-specific description text
**`src/pages/Dashboard/index.tsx`**
- Added `className="border-border text-foreground bg-card hover:bg-accent"` to Refresh `<Button>` for contrast against dark backgrounds
**`src/pages/Settings/Ollama.tsx`**
- Added same contrast classes to Refresh button
- Added `installOllamaFromBundleCmd` import
- Added `isInstallingBundle` state + `handleInstallFromBundle` async handler
- Added "Install Ollama (Offline)" primary `<Button>` alongside the existing "Download Ollama" link button in the "Ollama Not Detected" card
**`src/stores/settingsStore.ts`**
- Added `pii_enabled_patterns: Record<string, boolean>` field to `SettingsState` interface and store initializer (defaults all 8 patterns to `true`)
- Added `setPiiPattern(id, enabled)` action; both are included in the `persist` serialization so state survives app restarts
**`src/pages/Settings/Security.tsx`**
- Removed local `enabledPatterns` / `setEnabledPatterns` state and `togglePattern` function
- Added `useSettingsStore` import; reads `pii_enabled_patterns` / `setPiiPattern` from the persisted store
- Toggle button uses `setPiiPattern` directly on click
**`src/App.tsx`**
- Added `Sun`, `Moon` to lucide-react imports
- Extracted `setTheme` from `useSettingsStore` alongside `theme`
- Replaced static version `<div>` in sidebar footer with a flex row containing the version string and a Sun/Moon icon button; button is always visible even when sidebar is collapsed
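A minimal sketch of that footer toggle (markup and class names are illustrative; the real App.tsx layout may differ):

```tsx
import { Sun, Moon } from "lucide-react";
import { useSettingsStore } from "@/stores/settingsStore";

// Sidebar-footer toggle: flips the persisted theme between light and dark.
function ThemeToggle() {
  const { theme, setTheme } = useSettingsStore();
  const next = theme === "dark" ? "light" : "dark";
  return (
    <button type="button" aria-label={`Switch to ${next} theme`} onClick={() => setTheme(next)}>
      {theme === "dark" ? <Sun className="w-4 h-4" /> : <Moon className="w-4 h-4" />}
    </button>
  );
}
```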
### Phase 2 — Backend (4 files)
**`src-tauri/src/commands/system.rs`**
- Added `install_ollama_from_bundle(app: AppHandle) -> Result<String, String>` command
- Resolves bundled binary via `app.path().resource_dir()`, copies to `/usr/local/bin/ollama` (Unix) or `%LOCALAPPDATA%\Programs\Ollama\ollama.exe` (Windows), sets 0o755 permissions on Unix
- Added `use tauri::Manager` import required by `app.path()`
**`src-tauri/src/lib.rs`**
- Registered `commands::system::install_ollama_from_bundle` in `tauri::generate_handler![]`
**`src/lib/tauriCommands.ts`**
- Added `installOllamaFromBundleCmd` typed wrapper: `() => invoke<string>("install_ollama_from_bundle")`
**`src-tauri/tauri.conf.json`**
- Changed `"resources": []``"resources": ["resources/ollama/*"]`
- Created `src-tauri/resources/ollama/.gitkeep` placeholder so Tauri's glob doesn't fail on builds without a bundled binary
### Phase 3 — CI + Docs (3 files)
**`.gitea/workflows/auto-tag.yml`**
- Added "Download Ollama" step to `build-linux-amd64`: downloads `ollama-linux-amd64.tgz`, extracts binary to `src-tauri/resources/ollama/ollama`
- Added "Download Ollama" step to `build-windows-amd64`: downloads `ollama-windows-amd64.zip`, extracts `ollama.exe`; added `unzip` to the Install dependencies step
- Added "Download Ollama" step to `build-macos-arm64`: downloads `ollama-darwin` universal binary directly
- Added "Download Ollama" step to `build-linux-arm64`: downloads `ollama-linux-arm64.tgz`, extracts binary
**`docs/wiki/IPC-Commands.md`**
- Added `install_ollama_from_bundle` entry under System/Ollama Commands section documenting parameters, return value, platform-specific install paths, and privilege requirement note
---
## Testing Needed
### Automated
```bash
npx tsc --noEmit # TS: zero errors
npm run test:run # Vitest: 51/51 pass
cargo check --manifest-path src-tauri/Cargo.toml # Rust: zero errors
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings # Clippy: zero warnings
python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))" && echo OK
```
### Manual
1. **Custom REST model dropdown**: Settings → AI Providers → Add Provider → Type=Custom → API Format=Custom REST — the top Model row should disappear; the dropdown at the bottom should be visible and populated with all 19 models. Auth Header Name should be empty.
2. **Label rename**: Confirm "Email Address" label, `user@example.com` placeholder, no Motorola references.
3. **PII persistence**: Security page → toggle off "Email Addresses" and "IP Addresses" → navigate away → return → both should still be off. Restart the app → toggles should remain in the saved state.
4. **Refresh button contrast**: Dashboard and Ollama pages → confirm Refresh button border is visible on dark background.
5. **Theme toggle**: Sidebar footer → click Sun/Moon icon → theme should switch. Collapse sidebar → icon should still be accessible.
6. **Install Ollama (Offline)**: On a machine without Ollama, go to Settings → Ollama → "Ollama Not Detected" card should show "Install Ollama (Offline)" button. (Full test requires a release build with the bundled binary from CI.)

View File

@ -17,7 +17,7 @@
"noFallthroughCasesInSwitch": true,
"baseUrl": ".",
"paths": { "@/*": ["src/*"] },
"types": ["vitest/globals"]
"types": ["vitest/globals", "@testing-library/jest-dom"]
},
"include": ["src", "tests/unit"],
"references": [{ "path": "./tsconfig.node.json" }]