Compare commits
75 Commits

Author SHA1 Message Date
9f730304cc Merge pull request 'fix(ci): remove explicit docker.sock mount — act_runner mounts it automatically' (#22) from fix/build-images-duplicate-socket into master
All checks were successful
Auto Tag / autotag (push) Successful in 1m39s
Auto Tag / build-macos-arm64 (push) Successful in 4m42s
Auto Tag / build-windows-amd64 (push) Successful in 16m15s
Auto Tag / build-linux-arm64 (push) Successful in 28m32s
Auto Tag / wiki-sync (push) Successful in 1m44s
Auto Tag / build-linux-amd64 (push) Successful in 26m57s
Reviewed-on: #22
2026-04-06 02:18:55 +00:00
Shaun Arman
f54d1aa6a8 fix(ci): remove explicit docker.sock mount — act_runner mounts it automatically 2026-04-05 21:18:11 -05:00
bff11dc847 Merge pull request 'feat(ci): add persistent pre-baked Docker builder images' (#21) from feat/persistent-ci-builders into master
Reviewed-on: #21
2026-04-06 02:15:36 +00:00
Shaun Arman
eb8a0531e6 feat(ci): add persistent pre-baked Docker builder images
Add three Dockerfiles under .docker/ and a build-images.yml workflow that
pushes them to the local Gitea container registry (172.0.0.29:3000).

Each image pre-installs all system deps, Node.js 22, and the Rust cross-
compilation target so release builds can skip apt-get entirely:

  trcaa-linux-amd64:rust1.88-node22   — webkit2gtk, gtk3, all Tauri deps
  trcaa-windows-cross:rust1.88-node22 — mingw-w64, nsis, Windows target
  trcaa-linux-arm64:rust1.88-node22   — arm64 multiarch dev libs, Rust 1.88

build-images.yml triggers automatically when .docker/ changes on master
and supports workflow_dispatch for manual/first-time builds.

auto-tag.yml is NOT changed in this commit — switch it to use the new
images in the follow-up PR (after images are pushed to the registry).

One-time server setup required before first use:
  echo '{"insecure-registries":["172.0.0.29:3000"]}' \
    | sudo tee /etc/docker/daemon.json && sudo systemctl restart docker
2026-04-05 21:07:17 -05:00
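The trigger behavior described above (path filter on .docker/ plus manual dispatch) can be sketched as a workflow header; this is an assumption about the file's shape, not a copy of build-images.yml:

```yaml
# Sketch only: run when .docker/ changes on master, or on manual dispatch.
name: Build Images
on:
  push:
    branches: [master]
    paths:
      - '.docker/**'
  workflow_dispatch: {}
```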
bf6e589b3c Merge pull request 'feat(ui): UI fixes, theme toggle, PII persistence, Ollama install instructions' (#20) from feat/ui-fixes-ollama-bundle-theme into master
Reviewed-on: #20
2026-04-06 01:54:36 +00:00
Shaun Arman
9175faf0b4 refactor(ollama): remove download/install buttons — show plain install instructions only 2026-04-05 20:53:57 -05:00
Shaun Arman
0796297e8c fix(ci): remove all Ollama bundle download steps — use UI download button instead 2026-04-05 20:53:57 -05:00
Shaun Arman
809c4041ea fix(ci): skip Ollama download on macOS build — runner has no access to GitHub binary assets 2026-04-05 20:53:57 -05:00
180ca74ec2 Merge pull request 'feat(ui): fix model dropdown, auth prefill, PII persistence, theme toggle, Ollama bundle' (#19) from feat/ui-fixes-ollama-bundle-theme into master
Reviewed-on: #19
2026-04-06 01:12:34 +00:00
Shaun Arman
2d02cfa9e8 style: apply cargo fmt to install_ollama_from_bundle 2026-04-05 19:41:59 -05:00
Shaun Arman
dffd26a6fd fix(security): add path canonicalization and actionable permission error in install_ollama_from_bundle 2026-04-05 19:34:47 -05:00
Shaun Arman
fc50fe3102 test(store): add PII pattern persistence tests for settingsStore 2026-04-05 19:33:23 -05:00
Shaun Arman
215c0ae218 feat(ui): fix model dropdown, auth prefill, PII persistence, theme toggle, and Ollama bundle
- AIProviders: hide top model row when custom_rest active (dropdown lower in form handles it);
  clear auth header prefill on format switch; rename User ID / CORE ID → Email Address
- Dashboard + Ollama: add border-border/bg-card classes to Refresh buttons for dark-bg contrast
- Security + settingsStore: wire PII toggle state to persisted Zustand store so pattern
  selections survive app restarts
- App: add Sun/Moon theme toggle button to sidebar footer (always visible when collapsed)
- system.rs: add install_ollama_from_bundle command (copies bundled binary to /usr/local/bin)
- auto-tag.yml: add Download Ollama step to all 4 platform build jobs with SHA256 verification
- tauri.conf.json: add resources/ollama/* to bundle resources
- docs: add install_ollama_from_bundle to IPC-Commands wiki

Security: CI download steps verify SHA256 against Ollama's published sha256sums.txt before bundling.
2026-04-05 19:30:41 -05:00
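The verification described above follows the usual sha256sum -c pattern. A self-contained sketch with an illustrative file (not the real Ollama asset or its published digest):

```shell
# Illustrative only: verify a downloaded file against a known digest
# before bundling. Here the file and its digest are created locally
# so the check can run anywhere; sha256sum -c exits non-zero on mismatch.
printf 'example binary contents\n' > ollama-demo
expected="$(sha256sum ollama-demo | awk '{print $1}')"
echo "${expected}  ollama-demo" > sha256sums.txt
sha256sum -c sha256sums.txt
```

In CI the digest line would come from Ollama's published sha256sums.txt rather than being computed locally.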
a40bc2304f Merge pull request 'feat(rebrand): rename binary to trcaa and auto-generate DB key' (#18) from feat/rebrand-binary-trcaa into master
Reviewed-on: #18
2026-04-05 23:17:05 +00:00
Shaun Arman
d87b01b154 feat(rebrand): rename binary to trcaa and auto-generate DB key
- Rename Cargo package from 'tftsr' to 'trcaa' — installed command
  becomes 'trcaa' instead of 'tftsr'
- Update app data directories to ~/.local/share/trcaa (Linux),
  ~/Library/Application Support/trcaa (macOS), %APPDATA%/trcaa (Windows)
- Update bundle identifier to com.trcaa.app
- Auto-generate per-installation DB encryption key on first launch and
  persist to <data_dir>/.dbkey (mode 0600 on Unix) — removes the hard
  requirement for TFTSR_DB_KEY to be set before the app will start
2026-04-05 17:50:16 -05:00
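The first-launch key generation described above can be sketched in shell (the real implementation is Rust; the path and key format here are assumptions for illustration):

```shell
# Sketch of the first-launch behavior: generate a random key once,
# persist it as <data_dir>/.dbkey with mode 0600, skip if it exists.
DATA_DIR="${TMPDIR:-/tmp}/trcaa-demo"
mkdir -p "$DATA_DIR"
if [ ! -f "$DATA_DIR/.dbkey" ]; then
  # 32 random bytes, hex-encoded to 64 characters
  head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$DATA_DIR/.dbkey"
  chmod 600 "$DATA_DIR/.dbkey"
fi
```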
b734991932 Merge pull request 'fix(ci): restrict arm64 bundles to deb,rpm — skip AppImage' (#17) from fix/arm64-skip-appimage into master
Reviewed-on: #17
2026-04-05 22:04:51 +00:00
Shaun Arman
73a4c71196 fix(ci): restrict arm64 bundles to deb,rpm — skip AppImage
linuxdeploy-aarch64.AppImage cannot be reliably executed in a cross-
compile context (amd64 host, aarch64 target) even with QEMU binfmt
and APPIMAGE_EXTRACT_AND_RUN. The .deb and .rpm cover all major arm64
Linux distros. An arm64 AppImage can be added later via a native
arm64 build job if required.
2026-04-05 17:02:20 -05:00
a3c9a5a710 Merge pull request 'fix(ci): set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling' (#16) from fix/arm64-appimage-fuse into master
Reviewed-on: #16
2026-04-05 20:57:02 +00:00
Shaun Arman
acccab4235 fix(ci): set APPIMAGE_EXTRACT_AND_RUN=1 for arm64 AppImage bundling
linuxdeploy and its plugins are themselves AppImages. Inside a Docker
container FUSE is unavailable, so they cannot self-mount. Setting
APPIMAGE_EXTRACT_AND_RUN=1 causes them to extract to a temp directory
and run directly, bypassing the FUSE requirement.
2026-04-05 15:56:09 -05:00
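In workflow terms the fix amounts to exporting the variable for the build step so every AppImage tool inherits it; a sketch (step contents assumed, not copied from auto-tag.yml):

```yaml
# Sketch: linuxdeploy and its plugins self-extract instead of
# FUSE-mounting when APPIMAGE_EXTRACT_AND_RUN=1 is in the environment.
- name: Build AppImage
  env:
    APPIMAGE_EXTRACT_AND_RUN: "1"
  run: npx tauri build
```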
d6701cb51a Merge pull request 'fix(ci): add make to arm64 host tools for OpenSSL vendored build' (#15) from fix/arm64-missing-make into master
Reviewed-on: #15
2026-04-05 20:10:50 +00:00
Shaun Arman
7ecf66a8cd fix(ci): add make to arm64 host tools for OpenSSL vendored build
openssl-src compiles OpenSSL from source and requires make.
The old Debian image had it; it was not carried over to the
Ubuntu 22.04 host tools list.
2026-04-05 15:09:22 -05:00
fdbcee9fbd Merge pull request 'fix(ci): use POSIX dot instead of source in arm64 build step' (#14) from fix/arm64-source-sh into master
Reviewed-on: #14
2026-04-05 19:42:49 +00:00
Shaun Arman
5546f9f615 fix(ci): use POSIX dot instead of source in arm64 build step
The act runner executes run: blocks with sh (dash), not bash.
'source' is a bash built-in; POSIX sh uses '.' instead.

Co-Authored-By: fix/arm64-source-sh <noreply@local>
2026-04-05 14:41:18 -05:00
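The difference can be demonstrated with a minimal env file (the path stands in for $HOME/.cargo/env): '.' is the POSIX way to read a file into the current shell, while 'source' is a bash/zsh extension that dash does not provide.

```shell
# '.' works in dash, ash, and bash alike; 'source' would fail under dash.
echo 'DEMO_VAR=from_env_file' > /tmp/demo-env.sh
. /tmp/demo-env.sh
echo "$DEMO_VAR"   # prints: from_env_file
```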
3f76818a47 Merge pull request 'fix(ci): remove GITHUB_PATH append that was breaking arm64 install step' (#13) from fix/arm64-github-path into master
Reviewed-on: #13
2026-04-05 19:06:01 +00:00
Shaun Arman
eb4cf59192 fix(ci): remove GITHUB_PATH append that was breaking arm64 install step
$GITHUB_PATH is unset in this Gitea Actions environment, causing the
echo redirect to fail with a non-zero exit, which killed the Install
dependencies step before the Build step could run.

The append was unnecessary — the Build step already sources
$HOME/.cargo/env as its first line, which puts Cargo's bin dir in PATH.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 14:04:32 -05:00
e6d5a7178b Merge pull request 'fix(ci): switch build-linux-arm64 to Ubuntu 22.04 with ports mirror' (#12) from fix/yaml-heredoc-indent into master
Reviewed-on: #12
2026-04-05 18:15:16 +00:00
Shaun Arman
81442be1bd docs: update CI pipeline wiki and add ticket summary for arm64 fix
Documents the Ubuntu 22.04 + ports.ubuntu.com approach for arm64
cross-compilation and adds a Known Issues entry explaining the Debian
single-mirror multiarch root cause that was replaced.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 12:51:30 -05:00
Shaun Arman
9188a63305 fix(ci): switch build-linux-arm64 to Ubuntu 22.04 with ports mirror
The Debian single-mirror multiarch approach causes irreconcilable
apt dependency conflicts when both amd64 and arm64 point at the same
repo: the binary-all index is duplicated and certain -dev package pairs
lack Multi-Arch: same. This produces "held broken packages" regardless
of sources.list tweaks.

Ubuntu 22.04 routes arm64 through ports.ubuntu.com/ubuntu-ports, a
separate mirror from archive.ubuntu.com (amd64). This eliminates all
cross-arch index overlaps. Rust is installed via rustup since it is not
pre-installed in the Ubuntu base image. libayatana-appindicator3-dev
is dropped — no tray icon is used by this application.

Co-Authored-By: fix/yaml-heredoc-indent <noreply@local>
2026-04-05 12:51:19 -05:00
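The resulting split routes each architecture through its own mirror; a sketch of what the jammy sources might look like (exact suites and components assumed):

```
# amd64 stays on the main archive; arm64 goes through ubuntu-ports,
# so the two architectures never share a package index.
deb [arch=amd64] http://archive.ubuntu.com/ubuntu jammy main universe
deb [arch=amd64] http://archive.ubuntu.com/ubuntu jammy-updates main universe
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main universe
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main universe
```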
bc9c7d5cd1 Merge pull request 'fix(ci): replace heredoc with printf in arm64 install step' (#11) from fix/yaml-heredoc-indent into master
Reviewed-on: #11
2026-04-05 17:12:11 +00:00
Shaun Arman
5ab00a3759 fix(ci): replace heredoc with printf in arm64 install step
YAML block scalars end when a line is found with less indentation than
the scalar's own indent level. The heredoc body was at column 0 while
the rest of the run: block was at column 10, causing Gitea's YAML parser
to reject the entire workflow file with:

  yaml: line 412: could not find expected ':'

This silently invalidated auto-tag.yml on every push to master since the
apt-sources commit was merged, which is why PR#9 and PR#10 merges produced
no action runs.

Fix: replace the heredoc with a printf that stays within the block scalar's
indentation so the YAML remains valid.
2026-04-05 12:11:12 -05:00
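The indentation rule at issue: a YAML literal block scalar ends at the first line indented less than its body, so every line of a run: block must stay at or beyond the block's indent. A sketch of the printf form that stays inside the scalar (step contents illustrative, not the actual workflow):

```yaml
steps:
  - name: Install dependencies
    run: |
      printf '%s\n' \
        'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main' \
        > /etc/apt/sources.list.d/arm64.list
```

A heredoc body at column 0 inside that run: block would have terminated the scalar early and invalidated the file, which is the failure the commit describes.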
d676372487 Merge pull request 'fix(ci): add workflow_dispatch and concurrency guard to auto-tag' (#10) from fix/auto-tag-dispatch into master
Reviewed-on: #10
2026-04-05 17:06:09 +00:00
Shaun Arman
a04ba02424 fix(ci): add workflow_dispatch and concurrency guard to auto-tag
Gitea 1.22 silently drops a push event for a workflow when a run for that
same workflow+branch is already in progress. This caused the PR#9 merge to
master to produce no auto-tag run.

- workflow_dispatch: allows manual triggering via API when an event is dropped
- concurrency group (cancel-in-progress: false): causes Gitea to queue a second
  run rather than discard it when one is already active
2026-04-05 11:41:21 -05:00
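The two additions described above can be sketched at the top of the workflow (the group name is an assumption, not copied from auto-tag.yml):

```yaml
# Sketch: manual dispatch as a fallback trigger, plus a concurrency
# group that queues a second run instead of discarding the push event.
on:
  push:
    branches: [master]
  workflow_dispatch: {}

concurrency:
  group: auto-tag-${{ github.ref }}
  cancel-in-progress: false
```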
2bc4cf60a0 Merge pull request 'fix(ci): rebuild apt sources with per-arch entries before arm64 cross-compile' (#9) from bug/build-failure into master
Reviewed-on: #9
2026-04-05 16:32:20 +00:00
Shaun Arman
15b69e2350 fix(ci): rebuild apt sources with per-arch entries before arm64 cross-compile install
rust:1.88-slim (Debian Bookworm) uses DEB822-format sources which have no arch
restriction. After dpkg --add-architecture arm64, apt tries to resolve deps for
both amd64 and arm64 simultaneously and hits 'held broken packages' conflicts on
shared -dev packages.

Fix: remove debian.sources and write a clean sources.list that pins amd64 repos
to [arch=amd64] and arm64 repos to [arch=arm64]. This gives apt a clear,
non-conflicting view of each architecture's package set.
2026-04-05 11:05:46 -05:00
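A sketch of the rewritten Bookworm sources.list (mirror URLs and suites assumed): both architectures use the same repo, but each entry is pinned so apt builds a separate, non-overlapping index per architecture.

```
deb [arch=amd64] http://deb.debian.org/debian bookworm main
deb [arch=amd64] http://deb.debian.org/debian bookworm-updates main
deb [arch=arm64] http://deb.debian.org/debian bookworm main
deb [arch=arm64] http://deb.debian.org/debian bookworm-updates main
```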
1b26bf5214 Merge pull request 'security/audit' (#8) from security/audit into master
Reviewed-on: #8
2026-04-05 15:56:26 +00:00
Shaun Arman
cde4a85cc7 fix(ci): fix arm64 cross-compile, drop cargo install tauri-cli, move wiki-sync
build-linux-arm64: switch from QEMU-emulated linux-arm64 runner to cross-compile
on linux-amd64 using aarch64-linux-gnu toolchain. Removes the uname -m arch guard
that was causing the job to exit immediately (QEMU reports x86_64 as kernel arch),
and fixes the artifact path to the explicit target directory.

All build jobs: replace `cargo install tauri-cli --locked` with `npx tauri build`,
using the pre-compiled @tauri-apps/cli binary from devDependencies. Eliminates the
20-30 min Tauri CLI recompilation on every run.

wiki-sync: move from test.yml to auto-tag.yml. test.yml only fires on pull_request
events so the `if: github.ref == 'refs/heads/master'` guard was never true and the
wiki was never updated. auto-tag.yml triggers on push to master, so wiki sync now
runs on every merge.

Update releaseWorkflowCrossPlatformArtifacts.test.ts to match the new workflow.
2026-04-05 10:33:53 -05:00
3831ac0262 Merge branch 'master' into security/audit 2026-04-05 15:10:21 +00:00
Shaun Arman
abab5c3153 fix(security): enforce PII redaction before AI log transmission
analyze_logs() was reading the original log file from disk and sending its
full contents to external AI providers, completely bypassing the redaction
pipeline. The redacted flag in log_files and the .redacted file on disk were
written by apply_redactions() but never consulted on the read path.

Fix: query the redacted column alongside file_path. If the file has not been
redacted, return an error to the caller before any AI provider call is made.
When redacted, read from {path}.redacted instead of the original.

Adds redacted_path_for() helper and two unit tests covering the rejection
and happy-path cases.
2026-04-05 10:08:16 -05:00
Shaun Arman
0a25ca7692 fix(pii): remove lookahead from hostname regex, fix fmt in analysis test
Rust's `regex` crate does not support lookaround assertions. The hostname
pattern `(?=.{1,253}\b)` caused a panic on every `PiiDetector::new()` call,
failing all four PII detector tests in CI (rust-fmt-check, rust-clippy,
rust-tests). Removed the lookahead; the remaining pattern correctly matches
valid FQDNs without the RFC 1035 length pre-check.

Also reformatted analysis.rs:253 to satisfy `rustfmt` (line break after `=`).

All 127 Rust tests pass and `cargo fmt --check` and `cargo clippy -- -D
warnings` are clean.
2026-04-05 09:59:19 -05:00
Shaun Arman
281e676ad1 fix(security): harden secret handling and audit integrity
Remove high-risk defaults and tighten data handling across auth, storage, IPC, provider calls, and capabilities so sensitive data is better protected by default. Also update README/wiki security guidance and add targeted tests for the new hardening behaviors.

Made-with: Cursor
2026-04-04 23:37:05 -05:00
Shaun Arman
10cccdc653 fix(ci): unblock release jobs and namespace linux artifacts by arch
Drop fragile job-condition gates that were blocking release jobs, and upload linux artifacts with arch-prefixed release asset names so amd64 and arm64 outputs can coexist even when bundle filenames are identical.

Made-with: Cursor
2026-04-04 23:19:40 -05:00
Shaun Arman
b1d794765f fix(ci): unblock release jobs and namespace linux artifacts by arch
Drop fragile job-condition gates that were blocking release jobs, and upload linux artifacts with arch-prefixed release asset names so amd64 and arm64 outputs can coexist even when bundle filenames are identical.

Made-with: Cursor
2026-04-04 23:17:12 -05:00
Shaun Arman
7b5f2daaa4 fix(ci): run linux arm release natively and enforce arm artifacts
Avoid cross-compiling GTK/glib on the arm release job by building natively on ARM64 hosts, add an explicit architecture guard, and restrict uploads to arm64/aarch64 artifact filenames so amd64 outputs cannot be published as arm releases.

Made-with: Cursor
2026-04-04 22:46:23 -05:00
Shaun Arman
aaa48d65a2 fix(ci): force explicit linux arm64 target for release artifacts
Build linux arm64 bundles with --target aarch64-unknown-linux-gnu and upload from the target-specific bundle path so arm64 releases cannot accidentally publish amd64 artifacts.

Made-with: Cursor
2026-04-04 22:15:02 -05:00
Shaun Arman
e20228da6f refactor(ci): remove standalone release workflow
Delete .gitea/workflows/release.yml and keep release orchestration in auto-tag.yml only, then update related workflow tests and docs to reference the unified pipeline.

Made-with: Cursor
2026-04-04 21:34:15 -05:00
Shaun Arman
2d2c62e4f5 fix(ci): repair auto-tag workflow yaml so jobs trigger
Replace heredoc-based Python error logging with single-line python invocations to keep YAML block indentation valid, restoring Gitea's ability to parse and trigger auto-tag plus downstream release build jobs.

Made-with: Cursor
2026-04-04 21:28:52 -05:00
Shaun Arman
b69c132a0a fix(ci): run post-tag release builds without job-output gating
Remove auto-tag job output dependencies and conditional gates so release build jobs always run after autotag completes, resolving skipped fan-out caused by output/if evaluation issues in Gitea Actions.

Made-with: Cursor
2026-04-04 21:24:24 -05:00
Shaun Arman
a6b4ed789c fix(ci): use stable auto-tag job outputs for release fanout
Rename the auto-tag job id to a non-hyphenated identifier and update needs/output references so dependent release jobs evaluate conditions correctly and reliably run after tagging.

Made-with: Cursor
2026-04-04 21:21:35 -05:00
Shaun Arman
93ead1362f fix(ci): guarantee release jobs run after auto-tag
Run linux/windows/macos/arm release build and upload jobs in the auto-tag workflow with needs:auto-tag outputs so release execution no longer depends on a second tag-triggered workflow dispatch path.

Made-with: Cursor
2026-04-04 21:19:13 -05:00
Shaun Arman
48041acc8c fix(ci): trigger release workflow from auto-tag pushes
Switch auto-tag to create and push tags via git instead of the tag API so Gitea emits a real tag push event that reliably starts release builds. Document the trigger behavior and add a workflow regression test.

Made-with: Cursor
2026-04-04 21:14:41 -05:00
42120cb140 Merge pull request 'fix(ci): harden release asset uploads for reruns' (#7) from fix/release-upload-rerun-hardening into master
Reviewed-on: #7
2026-04-05 02:10:54 +00:00
Shaun Arman
2d35e2a2c1 fix(ci): harden release asset uploads for reruns
Make all release upload steps fail fast when expected artifacts are missing, replace existing same-name assets before uploading, and print HTTP/body details on upload failures so Linux/Windows publishing issues are diagnosable and reruns remain deterministic.

Made-with: Cursor
2026-04-04 21:09:03 -05:00
b22d508f25 Merge pull request 'fix(ci): stabilize release artifacts for windows and linux' (#6) from fix/release-windows-openssl-linux-assets into master
Some checks failed
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Reviewed-on: #6
2026-04-05 01:21:31 +00:00
Shaun Arman
c3fd83f330 fix(ci): make release artifacts reliable across platforms
Override OpenSSL vendoring for the windows-gnu release build so cross-compiles no longer fail on pkg-config lookup, and fail fast when Linux release jobs produce no artifacts so incomplete releases are detected immediately.

Made-with: Cursor
2026-04-04 19:53:40 -05:00
4606fdd104 Merge pull request 'ci: run test workflow only on pull requests' (#5) from fix/pr4-clean-replacement into master
Some checks failed
Release / build-linux-arm64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Reviewed-on: #5
2026-04-05 00:14:07 +00:00
Shaun Arman
4e7a5b64ba ci: run test workflow only on pull requests
Avoid duplicate Test workflow executions by removing push triggers and keeping pull_request validation as the single gate. Also fix remaining clippy format string violations in integration modules to keep rust-clippy passing.

Made-with: Cursor
2026-04-04 18:52:13 -05:00
82c18871af Merge pull request 'fix/skip-master-test-workflow' (#3) from fix/skip-master-test-workflow into master
Some checks failed
Release / build-windows-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Reviewed-on: #3
2026-04-04 21:48:47 +00:00
Shaun Arman
8e7356e62d ci: skip test workflow pushes on master
Avoid rerunning the full test workflow on direct master pushes while keeping pull request validation intact. Update the CI/CD wiki page to reflect the new trigger behavior.

Made-with: Cursor
2026-04-04 16:45:55 -05:00
Shaun Arman
b426f56149 fix: resolve macOS bundle path after app rename
Find the generated .app bundle dynamically in release CI so macOS packaging no longer depends on the legacy TFTSR.app name. Add a unit test to prevent regressions by asserting the old hardcoded path is not reintroduced.

Made-with: Cursor
2026-04-04 16:28:01 -05:00
f2531eb922 Merge pull request 'fix: resolve clippy uninlined_format_args (CI run 178)' (#2) from fix/clippy-uninlined-format-args into master
Some checks failed
Release / build-linux-arm64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Reviewed-on: #2
2026-04-04 21:08:52 +00:00
Shaun Arman
c4ea32e660 feat: add custom_rest provider mode and rebrand application name
Rename the custom API format handling to custom_rest with backward compatibility for the previous name, add guided model selection with custom entry in provider settings, and rebrand app naming to Troubleshooting and RCA Assistant across UI, metadata, and docs.

Made-with: Cursor
2026-04-04 15:35:58 -05:00
Shaun Arman
0bc20f09f6 style: apply rustfmt output for clippy-related edits
Apply canonical rustfmt formatting in files touched by the clippy format-args cleanup so cargo fmt --check passes consistently in CI.

Made-with: Cursor
2026-04-04 15:10:17 -05:00
Shaun Arman
85a8d0a4c0 fix: resolve clippy format-args failures and OpenSSL vendoring issue
Inline format arguments across Rust modules to satisfy clippy -D warnings, and configure Cargo to prefer system OpenSSL so clippy builds do not fail on missing vendored Perl modules.

Made-with: Cursor
2026-04-04 15:05:13 -05:00
Shaun Arman
bdb63f3aee fix: resolve clippy uninlined_format_args in integrations and related modules
Replace format!("msg: {}", var) with format!("msg: {var}") across 8 files
to satisfy the uninlined_format_args lint (-D warnings) in CI run 178.

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 12:27:26 -05:00
Shaun Arman
64492c743b fix: ARM64 build uses native target instead of cross-compile
Some checks failed
Release / build-macos-arm64 (push) Successful in 5m14s
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
The ARM64 build was failing because explicitly specifying
--target aarch64-unknown-linux-gnu on an ARM64 runner was
triggering cross-compilation logic.

Changes:
- Remove rustup target add (not needed for native build)
- Remove --target flag from cargo tauri build
- Update artifact path: target/aarch64-unknown-linux-gnu/release/bundle
  → target/release/bundle

This allows the native ARM64 toolchain to build without
attempting cross-compilation and avoids the pkg-config
cross-compilation configuration requirement.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-04 09:59:56 -05:00
Shaun Arman
a7903db904 fix: persist integration settings and implement persistent browser windows
Some checks failed
Release / build-macos-arm64 (push) Successful in 4m52s
Release / build-linux-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
## Integration Settings Persistence
- Add database commands to save/load integration configs (base_url, username, project_name, space_key)
- Frontend now loads configs from DB on mount and saves changes automatically
- Fixes issue where settings were lost on app restart

## Persistent Browser Window Architecture
- Integration browser windows now stay open for user browsing and authentication
- Extract fresh cookies before each API call to handle token rotation
- Track open windows in app state (integration_webviews HashMap)
- Windows titled as "{Service} Browser (TFTSR)" for clarity
- Support easy navigation between app and browser windows (Cmd+Tab/Alt+Tab)
- Gracefully handle closed windows with automatic cleanup

## Bug Fixes
- Fix Rust formatting issues across 8 files
- Fix clippy warnings:
  - Use is_some_and() instead of map_or() in openai.rs
  - Use .to_string() instead of format!() in integrations.rs
- Add missing OptionalExtension import for .optional() method

## Tests
- Add test_integration_config_serialization
- Add test_webview_tracking
- Add test_token_auth_request_serialization
- All 6 integration tests passing

## Files Modified
- src-tauri/src/state.rs: Add integration_webviews tracking
- src-tauri/src/lib.rs: Register 3 new commands, initialize webviews HashMap
- src-tauri/src/commands/integrations.rs: Config persistence, fresh cookie extraction (+151 lines)
- src-tauri/src/integrations/webview_auth.rs: Persistent window behavior
- src/lib/tauriCommands.ts: TypeScript wrappers for new commands
- src/pages/Settings/Integrations.tsx: Load/save configs from DB

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-04 09:57:22 -05:00
Shaun Arman
fbce897608 feat: complete webview cookie extraction implementation
Some checks failed
Release / build-macos-arm64 (push) Successful in 5m4s
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Implement working cookie extraction using Tauri's IPC event system:

**How it works:**
1. Opens embedded browser window for user to login
2. User completes authentication (including SSO)
3. User clicks "Complete Login" button in UI
4. JavaScript injected into webview extracts `document.cookie`
5. Parsed cookies emitted via Tauri event: `tftsr-cookies-extracted`
6. Rust listens for event and receives cookie data
7. Cookies encrypted and stored in database

**Technical implementation:**
- Uses `window.__TAURI__.event.emit()` from injected JavaScript
- Rust listens via `app_handle.listen()` with Listener trait
- 10-second timeout with clear error messages
- Handles empty cookies and JavaScript errors gracefully
- Cross-platform compatible (no platform-specific APIs)

**Cookie limitations:**
- `document.cookie` only exposes non-HttpOnly cookies
- HttpOnly session cookies won't be captured via JavaScript
- For HttpOnly cookies, services must provide API tokens as fallback

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-03 17:31:48 -05:00
Shaun Arman
32d83df3cf feat: add multi-mode authentication for integrations (v0.2.10)
Some checks failed
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Implement three authentication methods for Confluence, ServiceNow, and Azure DevOps:

1. **OAuth2** - Traditional OAuth flow for enterprise SSO environments
2. **Embedded Browser** - Webview-based login that captures session cookies/tokens
   - Solves VPN constraints: users authenticate off-VPN via web UI
   - Extracted credentials work on-VPN for API calls
   - Based on confluence-publisher agent pattern
3. **Manual Token** - Direct API token/PAT input as fallback

**Changes:**
- Add webview_auth.rs module for embedded browser authentication
- Implement authenticate_with_webview and extract_cookies_from_webview commands
- Implement save_manual_token command with validation
- Add AuthMethod enum to support all three modes
- Add RadioGroup UI component for mode selection
- Complete rewrite of Integrations settings page with mode-specific UI
- Add secondary button variant for UI consistency

**VPN-friendly design:**
Users can authenticate via webview when off-VPN (web UI accessible), then use extracted cookies for API calls when on-VPN (API requires VPN). Addresses enterprise SSO limitations where OAuth app registration is blocked.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-04-03 17:26:09 -05:00
Shaun Arman
2c5e04a6ce feat: add temperature and max_tokens support for Custom REST providers (v0.2.9)
Some checks failed
Release / build-linux-amd64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
- Added max_tokens and temperature fields to ProviderConfig
- Custom REST providers now send modelConfig with temperature and max_tokens
- OpenAI-compatible providers now use configured max_tokens/temperature
- Both formats fall back to defaults if not specified
- Bumped version to 0.2.9

This allows users to configure response length and randomness for all
AI providers, including Custom REST providers which require modelConfig format.
2026-04-03 17:08:34 -05:00
Shaun Arman
1d40dfb15b fix: use Wiki secret for authenticated wiki sync (v0.2.8)
Some checks failed
Release / build-macos-arm64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
- Updated wiki-sync job to use secrets.Wiki for authentication
- Simplified clone/push logic with token-based auth
- Wiki push will now succeed with proper credentials
- Bumped version to 0.2.8

The workflow now uses the 'Wiki' secret created in Gitea Actions
to authenticate wiki repository pushes. This fixes the authentication
issue that was preventing automatic wiki synchronization.
2026-04-03 16:47:32 -05:00
Shaun Arman
94b486b801 feat: add automatic wiki sync to CI workflow (v0.2.7)
- Added wiki-sync job to .gitea/workflows/test.yml
- Runs only on pushes to master branch
- Automatically copies docs/wiki/*.md to Gogs wiki repository
- Supports token-based authentication via secrets.GITHUB_TOKEN
- Handles wiki initialization if repository doesn't exist
- Bumped version to 0.2.7

Wiki sync will now automatically update the Gogs wiki at
https://gogs.tftsr.com/sarman/tftsr-devops_investigation/wiki
whenever docs/wiki/ files are modified on master.
2026-04-03 16:42:37 -05:00
Shaun Arman
5f9798a4fd docs: update wiki for v0.2.6 - integrations and Custom REST provider
Updated 5 wiki pages:

Home.md:
- Updated version to v0.2.6
- Added Custom REST provider and custom provider support to features
- Updated integration status from stubs to complete
- Updated release table with v0.2.3 and v0.2.6 highlights

Integrations.md:
- Complete rewrite: Changed from 'v0.2 stubs' to fully implemented
- Added detailed docs for Confluence REST API client (6 tests)
- Added detailed docs for ServiceNow REST API client (7 tests)
- Added detailed docs for Azure DevOps REST API client (6 tests)
- Documented OAuth2 PKCE flow implementation
- Added database schema for credentials and integration_config tables
- Added troubleshooting section with common OAuth/API errors

AI-Providers.md:
- Added section for Custom Provider (Custom REST provider)
- Documented Custom REST provider API format differences from OpenAI
- Added request/response format examples
- Added configuration instructions and troubleshooting
- Documented custom provider fields (api_format, custom_endpoint_path, etc)
- Added available Custom REST provider models list

IPC-Commands.md:
- Replaced 'v0.2 stubs' section with full implementation details
- Added OAuth2 commands (initiate_oauth, handle_oauth_callback)
- Added Confluence commands (5 functions)
- Added ServiceNow commands (5 functions)
- Added Azure DevOps commands (5 functions)
- Documented authentication storage with AES-256-GCM encryption
- Added common types (ConnectionResult, PublishResult, TicketResult)

Database.md:
- Updated migration count from 10 to 11
- Added migration 011: credentials and integration_config tables
- Documented AES-256-GCM encryption for OAuth tokens
- Added usage notes for OAuth2 vs basic auth storage
2026-04-03 16:39:49 -05:00
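The OAuth2 PKCE flow documented in Integrations.md above hinges on one derivation: the code challenge is the base64url-encoded SHA-256 of the code verifier. As an illustrative sketch (not the app's Rust implementation), the derivation can be reproduced from a shell using the RFC 7636 Appendix B test vector:

```shell
# PKCE (RFC 7636): code_challenge = BASE64URL(SHA256(code_verifier)).
# Verifier fixed to the RFC's test vector for reproducibility; real flows
# generate a fresh random 43-128 character verifier per authorization.
VERIFIER="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
CHALLENGE=$(printf '%s' "$VERIFIER" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '=')
echo "$CHALLENGE"   # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```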
Shaun Arman
a42745b791 fix: add user_id support and OAuth shell permission (v0.2.6)
Some checks failed
Release / build-linux-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-macos-arm64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
Fixes:
- Added shell:allow-open permission to fix OAuth integration flows
- Added user_id field to ProviderConfig for Custom REST provider CORE ID
- Added UI field for user_id when api_format is custom_rest
- Made userId optional in Custom REST provider requests (only sent if provided)
- Added X-msi-genai-client header to Custom REST provider requests
- Updated CSP to include Custom REST provider domains
- Bumped version to 0.2.6

This fixes:
- OAuth error: 'Command plugin:shell|open not allowed by ACL'
- Missing User ID field in Custom REST provider configuration UI
2026-04-03 16:34:00 -05:00
Shaun Arman
dd06566375 docs: add Custom REST provider documentation
Some checks failed
Release / build-macos-arm64 (push) Has been cancelled
Release / build-linux-amd64 (push) Has been cancelled
Release / build-linux-arm64 (push) Has been cancelled
Release / build-windows-amd64 (push) Has been cancelled
- Added GenAI API User Guide.md with complete API specification
- Added HANDOFF-MSI-GENAI.md documenting custom provider implementation
- Includes API endpoints, request/response formats, available models, and rate limits
2026-04-03 15:45:52 -05:00
Shaun Arman
190084888c feat: add Custom REST provider support
- Extended ProviderConfig with optional custom fields for non-OpenAI APIs
- Added custom_endpoint_path, custom_auth_header, custom_auth_prefix fields
- Added api_format field to distinguish between OpenAI and Custom REST provider formats
- Added session_id field for stateful conversation APIs
- Implemented chat_custom_rest() method in OpenAI provider
- Custom REST provider uses different request format (prompt+sessionId) and response (msg field)
- Updated TypeScript types to match Rust schema
- Added UI controls in Settings/AIProviders for custom provider configuration
- API format selector auto-populates appropriate defaults (OpenAI vs Custom REST provider)
- Backward compatible: existing providers default to OpenAI format
2026-04-03 15:45:42 -05:00
73 changed files with 5142 additions and 857 deletions

.cargo/config.toml Normal file

@ -0,0 +1,3 @@
[env]
# Force use of system OpenSSL instead of vendored OpenSSL source builds.
OPENSSL_NO_VENDOR = "1"


@ -0,0 +1,24 @@
# Pre-baked builder for Linux amd64 Tauri releases.
# All system dependencies are installed once here; CI jobs skip apt-get entirely.
# Rebuild when: Rust toolchain version changes, webkit2gtk/gtk major version changes,
# or Node.js major version changes. Tag format: rust<VER>-node<VER>
FROM rust:1.88-slim
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
libwebkit2gtk-4.1-dev \
libssl-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev \
patchelf \
pkg-config \
curl \
perl \
jq \
git \
&& curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN rustup target add x86_64-unknown-linux-gnu


@ -0,0 +1,45 @@
# Pre-baked cross-compiler for Linux arm64 Tauri releases (runs on Linux amd64).
# Bakes in: amd64 cross-toolchain, arm64 multiarch dev libs, Node.js, and Rust.
# This image takes ~15 min to build but is only rebuilt when deps change.
# Rebuild when: Rust toolchain version, webkit2gtk/gtk major version, or Node.js changes.
# Tag format: rust<VER>-node<VER>
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive
# Step 1: amd64 host tools and cross-compiler
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
curl git gcc g++ make patchelf pkg-config perl jq \
gcc-aarch64-linux-gnu g++-aarch64-linux-gnu \
&& rm -rf /var/lib/apt/lists/*
# Step 2: Enable arm64 multiarch. Ubuntu uses ports.ubuntu.com for arm64 to avoid
# binary-all index conflicts with the amd64 archive.ubuntu.com mirror.
RUN dpkg --add-architecture arm64 \
&& sed -i 's|^deb http://archive.ubuntu.com|deb [arch=amd64] http://archive.ubuntu.com|g' /etc/apt/sources.list \
&& sed -i 's|^deb http://security.ubuntu.com|deb [arch=amd64] http://security.ubuntu.com|g' /etc/apt/sources.list \
&& printf '%s\n' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted universe multiverse' \
> /etc/apt/sources.list.d/arm64-ports.list \
&& apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
libwebkit2gtk-4.1-dev:arm64 \
libssl-dev:arm64 \
libgtk-3-dev:arm64 \
librsvg2-dev:arm64 \
&& rm -rf /var/lib/apt/lists/*
# Step 3: Node.js 22
RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
# Step 4: Rust 1.88 with arm64 cross-compilation target
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
--default-toolchain 1.88.0 --profile minimal --no-modify-path \
&& /root/.cargo/bin/rustup target add aarch64-unknown-linux-gnu
ENV PATH="/root/.cargo/bin:${PATH}"


@ -0,0 +1,20 @@
# Pre-baked cross-compiler for Windows amd64 Tauri releases (runs on Linux amd64).
# All MinGW and Node.js dependencies are installed once here; CI jobs skip apt-get entirely.
# Rebuild when: Rust toolchain version changes or Node.js major version changes.
# Tag format: rust<VER>-node<VER>
FROM rust:1.88-slim
RUN apt-get update -qq \
&& apt-get install -y -qq --no-install-recommends \
mingw-w64 \
curl \
nsis \
perl \
make \
jq \
git \
&& curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN rustup target add x86_64-pc-windows-gnu


@ -1,24 +1,32 @@
name: Auto Tag
# Runs on every merge to master — reads the latest semver tag, increments
# the patch version, pushes a new tag, then runs release builds in this workflow.
# workflow_dispatch allows manual triggering when Gitea drops a push event.
on:
push:
branches:
- master
workflow_dispatch:
concurrency:
group: auto-tag-master
cancel-in-progress: false
jobs:
autotag:
runs-on: linux-amd64
container:
image: alpine:latest
steps:
- name: Bump patch version and create tag
id: bump
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
apk add --no-cache curl jq git
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
@ -39,10 +47,471 @@ jobs:
echo "Latest tag: ${LATEST:-none} → Next: $NEXT"
# Create and push the tag via git.
git init
git remote add origin "http://oauth2:${RELEASE_TOKEN}@172.0.0.29:3000/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
git config user.name "gitea-actions[bot]"
git config user.email "gitea-actions@local"
if git ls-remote --exit-code --tags origin "refs/tags/$NEXT" >/dev/null 2>&1; then
echo "Tag $NEXT already exists; skipping."
exit 0
fi
git tag -a "$NEXT" -m "Release $NEXT"
git push origin "refs/tags/$NEXT"
echo "Tag $NEXT pushed successfully"
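The version-bump logic itself is folded away by the hunk above; per the workflow's header comment it reduces to a patch increment on the latest semver tag. A minimal standalone sketch, with LATEST hard-coded instead of read from the Gitea tags API, and the empty-tag default assumed for illustration:

```shell
# Patch-bump rule: vMAJOR.MINOR.PATCH -> vMAJOR.MINOR.(PATCH+1).
LATEST="v0.2.7"
if [ -z "$LATEST" ]; then
  NEXT="v0.1.0"            # assumed first-run default; the real job may differ
else
  PATCH=${LATEST##*.}      # text after the last dot -> "7"
  BASE=${LATEST%.*}        # text before the last dot -> "v0.2"
  NEXT="$BASE.$((PATCH + 1))"
fi
echo "$NEXT"               # v0.2.8
```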
wiki-sync:
runs-on: linux-amd64
container:
image: alpine:latest
steps:
- name: Install dependencies
run: apk add --no-cache git
- name: Checkout main repository
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Configure git
run: |
git config --global user.email "actions@gitea.local"
git config --global user.name "Gitea Actions"
git config --global credential.helper ''
- name: Clone and sync wiki
env:
WIKI_TOKEN: ${{ secrets.Wiki }}
run: |
cd /tmp
if [ -n "$WIKI_TOKEN" ]; then
WIKI_URL="http://${WIKI_TOKEN}@172.0.0.29:3000/sarman/tftsr-devops_investigation.wiki.git"
else
WIKI_URL="http://172.0.0.29:3000/sarman/tftsr-devops_investigation.wiki.git"
fi
if ! git clone "$WIKI_URL" wiki 2>/dev/null; then
echo "Wiki doesn't exist yet, creating initial structure..."
mkdir -p wiki
cd wiki
git init
git checkout -b master
echo "# Wiki" > Home.md
git add Home.md
git commit -m "Initial wiki commit"
git remote add origin "$WIKI_URL"
fi
cd /tmp/wiki
if [ -d "$GITHUB_WORKSPACE/docs/wiki" ]; then
cp -v "$GITHUB_WORKSPACE"/docs/wiki/*.md . 2>/dev/null || echo "No wiki files to copy"
fi
git add -A
if ! git diff --staged --quiet; then
git commit -m "docs: sync from docs/wiki/ at commit ${GITHUB_SHA:0:8}"
echo "Pushing to wiki..."
if git push origin master; then
echo "✓ Wiki successfully synced"
else
echo "⚠ Wiki push failed - check token permissions"
exit 1
fi
else
echo "No wiki changes to commit"
fi
build-linux-amd64:
needs: autotag
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-unknown-linux-gnu
CI=true npx tauri build --target x86_64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/x86_64-unknown-linux-gnu/release/bundle -type f \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \))
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Linux amd64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
UPLOAD_NAME="linux-amd64-$NAME"
echo "Uploading $UPLOAD_NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$UPLOAD_NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$UPLOAD_NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$UPLOAD_NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $UPLOAD_NAME"
else
echo "✗ Upload failed for $UPLOAD_NAME (HTTP $HTTP_CODE)"
head -c 2000 "$RESP_FILE"
exit 1
fi
done
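The tag-resolution pipeline each upload step opens with can be exercised in isolation. Run against a canned tag list instead of the live API, it shows why `sort -V` (version sort) is used: non-semver names are filtered out, and v0.2.10 correctly ranks above v0.2.9, which plain lexicographic sort would get wrong.

```shell
# Same grep/sort -V/tail pipeline as the upload steps, canned input.
printf '%s\n' v0.2.9 v0.2.10 nightly v0.2.2 \
  | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' \
  | sort -V | tail -1    # v0.2.10
```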
build-windows-amd64:
needs: autotag
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq mingw-w64 curl nsis perl make jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
env:
CC_x86_64_pc_windows_gnu: x86_64-w64-mingw32-gcc
CXX_x86_64_pc_windows_gnu: x86_64-w64-mingw32-g++
AR_x86_64_pc_windows_gnu: x86_64-w64-mingw32-ar
CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER: x86_64-w64-mingw32-gcc
OPENSSL_NO_VENDOR: "0"
OPENSSL_STATIC: "1"
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-pc-windows-gnu
CI=true npx tauri build --target x86_64-pc-windows-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/x86_64-pc-windows-gnu/release/bundle -type f \
\( -name "*.exe" -o -name "*.msi" \) 2>/dev/null)
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Windows amd64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
echo "Uploading $NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $NAME"
else
echo "✗ Upload failed for $NAME (HTTP $HTTP_CODE)"
head -c 2000 "$RESP_FILE"
exit 1
fi
done
build-macos-arm64:
needs: autotag
runs-on: macos-arm64
steps:
- name: Checkout
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build
env:
MACOSX_DEPLOYMENT_TARGET: "11.0"
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-apple-darwin
CI=true npx tauri build --target aarch64-apple-darwin --bundles app
APP=$(find src-tauri/target/aarch64-apple-darwin/release/bundle/macos -maxdepth 1 -type d -name "*.app" | head -n 1)
if [ -z "$APP" ]; then
echo "ERROR: Could not find macOS app bundle"
exit 1
fi
APP_NAME=$(basename "$APP" .app)
codesign --deep --force --sign - "$APP"
mkdir -p src-tauri/target/aarch64-apple-darwin/release/bundle/dmg
DMG=src-tauri/target/aarch64-apple-darwin/release/bundle/dmg/${APP_NAME}.dmg
hdiutil create -volname "$APP_NAME" -srcfolder "$APP" -ov -format UDZO "$DMG"
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/aarch64-apple-darwin/release/bundle -type f -name "*.dmg")
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No macOS arm64 DMG artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
echo "Uploading $NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $NAME"
else
echo "✗ Upload failed for $NAME (HTTP $HTTP_CODE)"
head -c 2000 "$RESP_FILE"
exit 1
fi
done
build-linux-arm64:
needs: autotag
runs-on: linux-amd64
container:
image: ubuntu:22.04
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
# Step 1: Host tools + cross-compiler (all amd64, no multiarch yet)
apt-get update -qq
apt-get install -y -qq curl git gcc g++ make patchelf pkg-config perl jq \
gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
# Step 2: Multiarch — Ubuntu uses ports.ubuntu.com for arm64,
# keeping it on a separate mirror from amd64 (archive.ubuntu.com).
# This avoids the binary-all index duplication and -dev package
# conflicts that plagued the Debian single-mirror approach.
dpkg --add-architecture arm64
sed -i 's|^deb http://archive.ubuntu.com|deb [arch=amd64] http://archive.ubuntu.com|g' /etc/apt/sources.list
sed -i 's|^deb http://security.ubuntu.com|deb [arch=amd64] http://security.ubuntu.com|g' /etc/apt/sources.list
printf '%s\n' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted universe multiverse' \
'deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted universe multiverse' \
> /etc/apt/sources.list.d/arm64-ports.list
apt-get update -qq
# Step 3: ARM64 dev libs — libayatana omitted (no tray icon in this app)
apt-get install -y -qq \
libwebkit2gtk-4.1-dev:arm64 \
libssl-dev:arm64 \
libgtk-3-dev:arm64 \
librsvg2-dev:arm64
# Step 4: Node.js
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
# Step 5: Rust (not pre-installed in ubuntu:22.04)
# source "$HOME/.cargo/env" in the Build step handles PATH — no GITHUB_PATH needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \
--default-toolchain 1.88.0 --profile minimal --no-modify-path
- name: Build
env:
CC_aarch64_unknown_linux_gnu: aarch64-linux-gnu-gcc
CXX_aarch64_unknown_linux_gnu: aarch64-linux-gnu-g++
AR_aarch64_unknown_linux_gnu: aarch64-linux-gnu-ar
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER: aarch64-linux-gnu-gcc
PKG_CONFIG_SYSROOT_DIR: /usr/aarch64-linux-gnu
PKG_CONFIG_PATH: /usr/lib/aarch64-linux-gnu/pkgconfig
PKG_CONFIG_ALLOW_CROSS: "1"
OPENSSL_NO_VENDOR: "0"
OPENSSL_STATIC: "1"
APPIMAGE_EXTRACT_AND_RUN: "1"
run: |
. "$HOME/.cargo/env"
npm ci --legacy-peer-deps
rustup target add aarch64-unknown-linux-gnu
CI=true npx tauri build --target aarch64-unknown-linux-gnu --bundles deb,rpm
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
set -eu
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG=$(curl -s "$API/tags?limit=50" \
-H "Authorization: token $RELEASE_TOKEN" | \
jq -r '.[].name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | \
sort -V | tail -1 || true)
if [ -z "$TAG" ]; then
echo "ERROR: Could not resolve release tag from repository tags."
exit 1
fi
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
ARTIFACTS=$(find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle -type f \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \))
if [ -z "$ARTIFACTS" ]; then
echo "ERROR: No Linux arm64 artifacts were found to upload."
exit 1
fi
printf '%s\n' "$ARTIFACTS" | while IFS= read -r f; do
NAME=$(basename "$f")
UPLOAD_NAME="linux-arm64-$NAME"
echo "Uploading $UPLOAD_NAME..."
EXISTING_IDS=$(curl -sf "$API/releases/$RELEASE_ID" \
-H "Authorization: token $RELEASE_TOKEN" \
| jq -r --arg name "$UPLOAD_NAME" '.assets[]? | select(.name == $name) | .id')
if [ -n "$EXISTING_IDS" ]; then
printf '%s\n' "$EXISTING_IDS" | while IFS= read -r id; do
[ -n "$id" ] || continue
echo "Deleting existing asset id=$id name=$UPLOAD_NAME before upload..."
curl -sf -X DELETE "$API/releases/$RELEASE_ID/assets/$id" \
-H "Authorization: token $RELEASE_TOKEN"
done
fi
RESP_FILE=$(mktemp)
HTTP_CODE=$(curl -sS -o "$RESP_FILE" -w "%{http_code}" -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$UPLOAD_NAME")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "✓ Uploaded $UPLOAD_NAME"
else
echo "✗ Upload failed for $UPLOAD_NAME (HTTP $HTTP_CODE)"
head -c 2000 "$RESP_FILE"
exit 1
fi
done
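The duplicate-asset lookup that makes each upload loop idempotent is a single jq filter over the release JSON. A sketch run against a canned payload rather than the Gitea API (asset names here are illustrative):

```shell
# Find the id of an existing asset by name, as the upload loops do before
# deleting and re-uploading; .assets[]? tolerates a missing assets array.
RELEASE_JSON='{"assets":[{"id":7,"name":"linux-arm64-app.deb"},{"id":9,"name":"app.dmg"}]}'
echo "$RELEASE_JSON" | jq -r --arg name "linux-arm64-app.deb" \
  '.assets[]? | select(.name == $name) | .id'    # 7
```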


@ -0,0 +1,104 @@
name: Build CI Docker Images
# Rebuilds the pre-baked builder images and pushes them to the local Gitea
# container registry (172.0.0.29:3000).
#
# WHEN TO RUN:
# - Automatically: whenever a Dockerfile under .docker/ changes on master.
# - Manually: via workflow_dispatch (e.g. first-time setup, forced rebuild).
#
# ONE-TIME SERVER PREREQUISITE (run once on 172.0.0.29 before first use):
# echo '{"insecure-registries":["172.0.0.29:3000"]}' \
# | sudo tee /etc/docker/daemon.json
# sudo systemctl restart docker
#
# Images produced:
# 172.0.0.29:3000/sarman/trcaa-linux-amd64:rust1.88-node22
# 172.0.0.29:3000/sarman/trcaa-windows-cross:rust1.88-node22
# 172.0.0.29:3000/sarman/trcaa-linux-arm64:rust1.88-node22
on:
push:
branches:
- master
paths:
- '.docker/**'
workflow_dispatch:
concurrency:
group: build-ci-images
cancel-in-progress: false
env:
REGISTRY: 172.0.0.29:3000
REGISTRY_USER: sarman
jobs:
linux-amd64:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push linux-amd64 builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22 \
-f .docker/Dockerfile.linux-amd64 .
docker push $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-linux-amd64:rust1.88-node22"
windows-cross:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push windows-cross builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22 \
-f .docker/Dockerfile.windows-cross .
docker push $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-windows-cross:rust1.88-node22"
linux-arm64:
runs-on: linux-amd64
container:
image: docker:24-cli
steps:
- name: Checkout
run: |
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin "$GITHUB_SHA"
git checkout FETCH_HEAD
- name: Build and push linux-arm64 builder
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
echo "$RELEASE_TOKEN" | docker login $REGISTRY -u $REGISTRY_USER --password-stdin
docker build \
-t $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22 \
-f .docker/Dockerfile.linux-arm64 .
docker push $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22
echo "✓ Pushed $REGISTRY/$REGISTRY_USER/trcaa-linux-arm64:rust1.88-node22"


@ -1,221 +0,0 @@
name: Release
on:
push:
tags:
- 'v*'
jobs:
build-linux-amd64:
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-unknown-linux-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target x86_64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/x86_64-unknown-linux-gnu/release/bundle \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \) | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done
build-windows-amd64:
runs-on: linux-amd64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
apt-get update -qq && apt-get install -y -qq mingw-w64 curl nsis perl make jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
env:
CC_x86_64_pc_windows_gnu: x86_64-w64-mingw32-gcc
CXX_x86_64_pc_windows_gnu: x86_64-w64-mingw32-g++
AR_x86_64_pc_windows_gnu: x86_64-w64-mingw32-ar
CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER: x86_64-w64-mingw32-gcc
run: |
npm ci --legacy-peer-deps
rustup target add x86_64-pc-windows-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target x86_64-pc-windows-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/x86_64-pc-windows-gnu/release/bundle \
\( -name "*.exe" -o -name "*.msi" \) 2>/dev/null | while read f; do
echo "Uploading $(basename $f)..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename $f)" && echo "✓ Uploaded $(basename $f)" || echo "✗ Upload failed: $f"
done
build-macos-arm64:
runs-on: macos-arm64
steps:
- name: Checkout
run: |
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Build
env:
MACOSX_DEPLOYMENT_TARGET: "11.0"
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-apple-darwin
cargo install tauri-cli --version "^2" --locked
# Build the .app bundle only (no DMG yet so we can sign before packaging)
CI=true cargo tauri build --target aarch64-apple-darwin --bundles app
APP=src-tauri/target/aarch64-apple-darwin/release/bundle/macos/TFTSR.app
# Ad-hoc sign: changes Gatekeeper error from "damaged" to "unidentified developer"
codesign --deep --force --sign - "$APP"
# Create DMG from the signed .app
mkdir -p src-tauri/target/aarch64-apple-darwin/release/bundle/dmg
DMG=src-tauri/target/aarch64-apple-darwin/release/bundle/dmg/TFTSR.dmg
hdiutil create -volname "TFTSR" -srcfolder "$APP" -ov -format UDZO "$DMG"
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
# Create release (idempotent)
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
# Get release ID
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
echo "Attempting to list recent releases..."
curl -sf "$API/releases" -H "Authorization: token $RELEASE_TOKEN" | jq -r '.[] | "\(.tag_name): \(.id)"' | head -5
exit 1
fi
echo "Release ID: $RELEASE_ID"
# Upload DMG
find src-tauri/target/aarch64-apple-darwin/release/bundle -name "*.dmg" | while IFS= read -r f; do
echo "Uploading $(basename "$f")..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename "$f")" && echo "✓ Uploaded $(basename "$f")" || echo "✗ Upload failed: $f"
done
build-linux-arm64:
runs-on: linux-arm64
container:
image: rust:1.88-slim
steps:
- name: Checkout
run: |
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
git checkout FETCH_HEAD
- name: Install dependencies
run: |
# Native ARM64 build (no cross-compilation needed)
apt-get update -qq && apt-get install -y -qq \
libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev \
libayatana-appindicator3-dev librsvg2-dev patchelf \
pkg-config curl perl jq
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
- name: Build
run: |
npm ci --legacy-peer-deps
rustup target add aarch64-unknown-linux-gnu
cargo install tauri-cli --version "^2" --locked
CI=true cargo tauri build --target aarch64-unknown-linux-gnu
- name: Upload artifacts
env:
RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
run: |
API="http://172.0.0.29:3000/api/v1/repos/$GITHUB_REPOSITORY"
TAG="$GITHUB_REF_NAME"
echo "Creating release for $TAG..."
curl -sf -X POST "$API/releases" \
-H "Authorization: token $RELEASE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"tag_name\":\"$TAG\",\"name\":\"TFTSR $TAG\",\"body\":\"Release $TAG\",\"draft\":false}" || true
RELEASE_ID=$(curl -sf "$API/releases/tags/$TAG" \
-H "Authorization: token $RELEASE_TOKEN" | jq -r '.id')
if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
echo "ERROR: Failed to get release ID for $TAG"
exit 1
fi
echo "Release ID: $RELEASE_ID"
find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle \
\( -name "*.deb" -o -name "*.rpm" -o -name "*.AppImage" \) | while IFS= read -r f; do
echo "Uploading $(basename "$f")..."
curl -sf -X POST "$API/releases/$RELEASE_ID/assets" \
-H "Authorization: token $RELEASE_TOKEN" \
-F "attachment=@$f;filename=$(basename "$f")" && echo "✓ Uploaded $(basename "$f")" || echo "✗ Upload failed: $f"
done


@@ -1,9 +1,6 @@
name: Test
on:
push:
branches:
- '**'
pull_request:
jobs:
@@ -14,10 +11,22 @@ jobs:
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: rustup component add rustfmt
- run: cargo fmt --manifest-path src-tauri/Cargo.toml --check
@@ -29,10 +38,22 @@
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: apt-get update -qq && apt-get install -y -qq libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev patchelf pkg-config perl
- run: rustup component add clippy
@@ -45,10 +66,22 @@
steps:
- name: Checkout
run: |
set -eux
apt-get update -qq && apt-get install -y -qq git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: apt-get update -qq && apt-get install -y -qq libwebkit2gtk-4.1-dev libssl-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev patchelf pkg-config perl
- run: cargo test --manifest-path src-tauri/Cargo.toml
@@ -60,10 +93,22 @@
steps:
- name: Checkout
run: |
set -eux
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: npm ci --legacy-peer-deps
- run: npx tsc --noEmit
@@ -75,10 +120,22 @@
steps:
- name: Checkout
run: |
set -eux
apk add --no-cache git
git init
git remote add origin http://172.0.0.29:3000/sarman/tftsr-devops_investigation.git
git fetch --depth=1 origin $GITHUB_SHA
if [ -n "${GITHUB_SHA:-}" ] && git fetch --depth=1 origin "$GITHUB_SHA"; then
echo "Fetched commit SHA: $GITHUB_SHA"
elif [ -n "${GITHUB_REF_NAME:-}" ] && git fetch --depth=1 origin "$GITHUB_REF_NAME"; then
echo "Fetched ref name: $GITHUB_REF_NAME"
elif [ -n "${GITHUB_REF:-}" ]; then
REF_NAME="${GITHUB_REF#refs/heads/}"
git fetch --depth=1 origin "$REF_NAME"
echo "Fetched ref from GITHUB_REF: $REF_NAME"
else
git fetch --depth=1 origin master
echo "Fetched fallback ref: master"
fi
git checkout FETCH_HEAD
- run: npm ci --legacy-peer-deps
- run: npm run test:run

GenAI API User Guide.md (new file, 489 lines; diff suppressed because one or more lines are too long)

HANDOFF-MSI-GENAI.md (new file, 312 lines)

@@ -0,0 +1,312 @@
# MSI GenAI Custom Provider Integration - Handoff Document
**Date**: 2026-04-03
**Status**: In Progress - Backend schema updated, frontend and provider logic pending
---
## Context
The user needs to integrate the MSI GenAI API (https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat) into the application's AI Providers system.
**Problem**: The existing "Custom" provider type assumes OpenAI-compatible APIs (expects `/chat/completions` endpoint, OpenAI request/response format, `Authorization: Bearer` header). MSI GenAI has a completely different API contract:
| Aspect | OpenAI Format | MSI GenAI Format |
|--------|---------------|------------------|
| **Endpoint** | `/chat/completions` | `/api/v2/chat` (no suffix) |
| **Request** | `{"messages": [...], "model": "..."}` | `{"prompt": "...", "model": "...", "sessionId": "..."}` |
| **Response** | `{"choices": [{"message": {"content": "..."}}]}` | `{"msg": "...", "sessionId": "..."}` |
| **Auth Header** | `Authorization: Bearer <token>` | `x-msi-genai-api-key: <token>` |
| **History** | Client sends full message array | Server-side via `sessionId` |
---
## Work Completed
### 1. Updated `src-tauri/src/state.rs` - ProviderConfig Schema
Added optional fields to support custom API formats without breaking existing OpenAI-compatible providers:
```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProviderConfig {
pub name: String,
#[serde(default)]
pub provider_type: String,
pub api_url: String,
pub api_key: String,
pub model: String,
// NEW FIELDS:
/// Optional: Custom endpoint path (e.g., "" for no path, "/v1/chat" for custom path)
/// If None, defaults to "/chat/completions" for OpenAI compatibility
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_endpoint_path: Option<String>,
/// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
/// If None, defaults to "Authorization"
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_header: Option<String>,
/// Optional: Custom auth value prefix (e.g., "" for no prefix, "Bearer " for OpenAI)
/// If None, defaults to "Bearer "
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_prefix: Option<String>,
/// Optional: API format ("openai" or "msi_genai")
/// If None, defaults to "openai"
#[serde(skip_serializing_if = "Option::is_none")]
pub api_format: Option<String>,
/// Optional: Session ID for stateful APIs like MSI GenAI
#[serde(skip_serializing_if = "Option::is_none")]
pub session_id: Option<String>,
}
```
**Design philosophy**: Existing providers remain unchanged (all fields default to OpenAI-compatible behavior). Only when `api_format` is set to `"msi_genai"` do the custom fields take effect.
---
## Work Remaining
### 2. Update `src-tauri/src/ai/openai.rs` - Support Custom Formats
The `OpenAiProvider::chat()` method needs to conditionally handle MSI GenAI format:
**Changes needed**:
- Check `config.api_format` — if `Some("msi_genai")`, use MSI GenAI request/response logic
- Use `config.custom_endpoint_path.unwrap_or("/chat/completions")` for endpoint
- Use `config.custom_auth_header.unwrap_or("Authorization")` for header name
- Use `config.custom_auth_prefix.unwrap_or("Bearer ")` for auth prefix
**MSI GenAI request format**:
```json
{
"model": "VertexGemini",
"prompt": "<last user message>",
"system": "<optional system message>",
"sessionId": "<uuid or null for first message>",
"userId": "user@motorolasolutions.com"
}
```
**MSI GenAI response format**:
```json
{
"status": true,
"sessionId": "uuid",
"msg": "AI response text",
"initialPrompt": true/false
}
```
**Implementation notes**:
- For MSI GenAI, convert `Vec<Message>` to a single `prompt` (concatenate or use last user message)
- Extract system message from messages array if present (role == "system")
- Store returned `sessionId` back to `config.session_id` for subsequent requests
- Extract response content from `json["msg"]` instead of `json["choices"][0]["message"]["content"]`
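The message-flattening step in the notes above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `Message` struct and `messages_to_prompt` name are hypothetical stand-ins for whatever types `openai.rs` already uses.

```rust
/// Minimal stand-in for the app's chat message type (illustrative only).
struct Message {
    role: String,
    content: String,
}

/// Flatten an OpenAI-style message array into the optional system string
/// plus single prompt that MSI GenAI expects.
fn messages_to_prompt(messages: &[Message]) -> (Option<String>, String) {
    // Pull out the system message, if any, for the "system" request field.
    let system = messages
        .iter()
        .find(|m| m.role == "system")
        .map(|m| m.content.clone());
    // Use the last user message as the prompt; the server keeps earlier
    // turns via sessionId, so history need not be resent.
    let prompt = messages
        .iter()
        .rev()
        .find(|m| m.role == "user")
        .map(|m| m.content.clone())
        .unwrap_or_default();
    (system, prompt)
}

fn main() {
    let msgs = vec![
        Message { role: "system".into(), content: "You are a triage assistant.".into() },
        Message { role: "user".into(), content: "First question".into() },
        Message { role: "assistant".into(), content: "First answer".into() },
        Message { role: "user".into(), content: "Second question".into() },
    ];
    let (system, prompt) = messages_to_prompt(&msgs);
    println!("system={:?} prompt={}", system, prompt);
}
```

Concatenating all user turns is the alternative mentioned above; last-user-message is shown here because the server already holds history once a `sessionId` exists.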
### 3. Update `src/lib/tauriCommands.ts` - TypeScript Types
Add new optional fields to `ProviderConfig` interface:
```typescript
export interface ProviderConfig {
provider_type?: string;
max_tokens?: number;
temperature?: number;
name: string;
api_url: string;
api_key: string;
model: string;
// NEW FIELDS:
custom_endpoint_path?: string;
custom_auth_header?: string;
custom_auth_prefix?: string;
api_format?: string;
session_id?: string;
}
```
### 4. Update `src/pages/Settings/AIProviders.tsx` - UI Fields
**When `provider_type === "custom"`, show additional form fields**:
```tsx
{form.provider_type === "custom" && (
<>
<div className="space-y-2">
<Label>API Format</Label>
<Select
value={form.api_format ?? "openai"}
onValueChange={(v) => {
const format = v;
const defaults = format === "msi_genai"
? {
custom_endpoint_path: "",
custom_auth_header: "x-msi-genai-api-key",
custom_auth_prefix: "",
}
: {
custom_endpoint_path: "/chat/completions",
custom_auth_header: "Authorization",
custom_auth_prefix: "Bearer ",
};
setForm({ ...form, api_format: format, ...defaults });
}}
>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="openai">OpenAI Compatible</SelectItem>
<SelectItem value="msi_genai">MSI GenAI</SelectItem>
</SelectContent>
</Select>
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label>Endpoint Path</Label>
<Input
value={form.custom_endpoint_path ?? ""}
onChange={(e) => setForm({ ...form, custom_endpoint_path: e.target.value })}
placeholder="/chat/completions"
/>
</div>
<div className="space-y-2">
<Label>Auth Header Name</Label>
<Input
value={form.custom_auth_header ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_header: e.target.value })}
placeholder="Authorization"
/>
</div>
</div>
<div className="space-y-2">
<Label>Auth Prefix</Label>
<Input
value={form.custom_auth_prefix ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_prefix: e.target.value })}
placeholder="Bearer "
/>
<p className="text-xs text-muted-foreground">
Prefix added before API key (e.g., "Bearer " for OpenAI, "" for MSI GenAI)
</p>
</div>
</>
)}
```
**Update `emptyProvider` initial state**:
```typescript
const emptyProvider: ProviderConfig = {
name: "",
provider_type: "openai",
api_url: "",
api_key: "",
model: "",
custom_endpoint_path: undefined,
custom_auth_header: undefined,
custom_auth_prefix: undefined,
api_format: undefined,
session_id: undefined,
};
```
---
## Testing Configuration
**For MSI GenAI**:
- **Type**: Custom
- **API Format**: MSI GenAI
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat`
- **Model**: `VertexGemini` (or `Claude-Sonnet-4`, `ChatGPT4o`)
- **API Key**: (user's MSI GenAI API key from portal)
- **Endpoint Path**: `` (empty - URL already includes `/api/v2/chat`)
- **Auth Header**: `x-msi-genai-api-key`
- **Auth Prefix**: `` (empty - no "Bearer " prefix)
**Test command flow**:
1. Create provider with above settings
2. Test connection (should receive AI response)
3. Verify `sessionId` is returned and stored
4. Send second message (should reuse `sessionId` for conversation history)
---
## Known Issues from User's Original Error
User initially tried:
- **API URL**: `https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat`
- **Type**: Custom (no format specified)
**Result**: `Cannot POST /api/v2/chat/chat/completions` (404)
**Root cause**: OpenAI provider appends `/chat/completions` to base URL. With the new `custom_endpoint_path` field, this is now configurable.
---
## Integration with Existing Session Management
MSI GenAI uses server-side session management. Current triage flow sends full message history on every request (OpenAI style). For MSI GenAI:
- **First message**: Send `sessionId: null` or omit field
- **Store response**: Save `response.sessionId` to `config.session_id`
- **Subsequent messages**: Include `sessionId` in requests (server maintains history)
Consider storing `session_id` per conversation in the database (link to `ai_conversations.id`) rather than globally in `ProviderConfig`.
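The per-conversation alternative can be sketched as a small store keyed by conversation id (hypothetical names; the current schema keeps a single `session_id` per provider in `ProviderConfig`, and a real version would persist to the database rather than memory):

```rust
use std::collections::HashMap;

/// Sketch of per-conversation session storage: each local conversation
/// maps to its own server-side MSI GenAI sessionId.
struct SessionStore {
    by_conversation: HashMap<String, String>,
}

impl SessionStore {
    fn new() -> Self {
        Self { by_conversation: HashMap::new() }
    }

    /// None means "first message": send sessionId: null and let the
    /// server allocate one.
    fn get(&self, conversation_id: &str) -> Option<&str> {
        self.by_conversation.get(conversation_id).map(String::as_str)
    }

    /// Record the sessionId returned by the server for this conversation.
    fn set(&mut self, conversation_id: &str, session_id: &str) {
        self.by_conversation
            .insert(conversation_id.into(), session_id.into());
    }
}

fn main() {
    let mut store = SessionStore::new();
    assert!(store.get("conv-1").is_none()); // first message for conv-1
    store.set("conv-1", "uuid-a");
    store.set("conv-2", "uuid-b");
    assert_eq!(store.get("conv-1"), Some("uuid-a")); // histories stay separate
}
```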
---
## Commit Strategy
**Current git state**:
- Modified by other session: `src-tauri/src/integrations/*.rs` (ADO/Confluence/ServiceNow work)
- Modified by me: `src-tauri/src/state.rs` (MSI GenAI schema)
- Untracked: `GenAI API User Guide.md`
**Recommended approach**:
1. **Other session commits first**: Commit integration changes to main
2. **Then complete MSI GenAI work**: Finish items 2-4 above, test, commit separately
**Alternative**: Create feature branch `feature/msi-genai-custom-provider`, cherry-pick only MSI GenAI changes, complete work there, merge when ready.
---
## Reference: MSI GenAI API Spec
**Documentation**: `GenAI API User Guide.md` (in project root)
**Key endpoints**:
- `POST /api/v2/chat` - Send prompt, get response
- `POST /api/v2/upload/<SESSION-ID>` - Upload files (requires session)
- `GET /api/v2/getSessionMessages/<SESSION-ID>` - Retrieve history
- `DELETE /api/v2/entry/<MSG-ID>` - Delete message
**Available models** (from guide):
- `Claude-Sonnet-4` (Public)
- `ChatGPT4o` (Public)
- `VertexGemini` (Private) - Gemini 2.0 Flash
- `ChatGPT-5_2-Chat` (Public)
- Many others (see guide section 4.1)
**Rate limits**: $50/user/month (enforced server-side)
---
## Questions for User
1. Should `session_id` be stored globally in `ProviderConfig` or per-conversation in DB?
2. Do we need to support file uploads via `/api/v2/upload/<SESSION-ID>`?
3. Should we expose model config options (temperature, max_tokens) for MSI GenAI?
---
## Contact
This handoff doc was generated for the other Claude Code session working on integration files. Once that work is committed, this MSI GenAI work can be completed as a separate commit or feature branch.


@@ -1,4 +1,4 @@
# TFTSR — IT Triage & RCA Desktop Application
# Troubleshooting and RCA Assistant
A structured, AI-backed desktop tool for IT incident triage, 5-Whys root cause analysis, RCA document generation, and blameless post-mortems. Runs fully offline via Ollama local models, or connects to cloud AI providers.
@@ -46,7 +46,7 @@ Built with **Tauri 2** (Rust + WebView), **React 18**, **TypeScript**, and **SQL
| UI | Tailwind CSS (custom shadcn-style components) |
| Database | rusqlite + `bundled-sqlcipher` (AES-256) |
| Secret storage | `tauri-plugin-stronghold` |
| State management | Zustand (persisted settings store) |
| State management | Zustand (persisted settings store with API key redaction) |
| AI providers | reqwest (async HTTP) |
| PII detection | regex + aho-corasick multi-pattern engine |
@@ -166,7 +166,7 @@ To use Claude via AWS Bedrock (ideal for enterprise environments with existing A
nohup litellm --config ~/.litellm/config.yaml --port 8000 > ~/.litellm/litellm.log 2>&1 &
```
4. **Configure in TFTSR:**
4. **Configure in Troubleshooting and RCA Assistant:**
- Provider: **OpenAI** (OpenAI-compatible)
- Base URL: `http://localhost:8000/v1`
- API Key: `sk-your-secure-key` (from config)
@@ -217,7 +217,7 @@ tftsr/
└── .gitea/
└── workflows/
├── test.yml # CI: rustfmt · clippy · cargo test · tsc · vitest (every push/PR)
└── release.yml # Release: linux/amd64 + windows/amd64 + linux/arm64 → Gitea release
└── auto-tag.yml # Auto tag + release: linux/amd64 + windows/amd64 + linux/arm64 + macOS
```
---
@@ -251,7 +251,7 @@ The project uses **Gitea Actions** (act_runner v0.3.1) connected to the Gitea in
| Workflow | Trigger | Jobs |
|---|---|---|
| `.gitea/workflows/test.yml` | Every push / PR | rustfmt · clippy · cargo test (64) · tsc · vitest (13) |
| `.gitea/workflows/release.yml` | Tag `v*` or manual dispatch | Build linux/amd64 + windows/amd64 + linux/arm64 → upload to Gitea release |
| `.gitea/workflows/auto-tag.yml` | Push to `master` | Auto-tag, then build linux/amd64 + windows/amd64 + linux/arm64 + macOS and upload assets |
**Runners:**
@@ -270,10 +270,10 @@ The project uses **Gitea Actions** (act_runner v0.3.1) connected to the Gitea in
| Concern | Implementation |
|---|---|
| API keys / tokens | `tauri-plugin-stronghold` encrypted vault |
| API keys / tokens | AES-256-GCM encrypted at rest (backend), not persisted in browser storage |
| Database at rest | SQLCipher AES-256; key derived via PBKDF2 |
| PII before AI send | Rust-side detection + mandatory user approval in UI |
| Audit trail | Every `ai_send` / `publish` event logged with SHA-256 hash |
| Audit trail | Hash-chained audit entries (`prev_hash` + `entry_hash`) for tamper evidence |
| Network | `reqwest` with TLS; HTTP blocked by Tauri capability config |
| Capabilities | Least-privilege: scoped fs access, no arbitrary shell by default |
| CSP | Strict CSP in `tauri.conf.json`; no inline scripts |
@@ -300,7 +300,8 @@ Override with the `TFTSR_DATA_DIR` environment variable.
| Variable | Default | Purpose |
|---|---|---|
| `TFTSR_DATA_DIR` | Platform data dir | Override database location |
| `TFTSR_DB_KEY` | `dev-key-change-in-prod` | Database encryption key (release builds) |
| `TFTSR_DB_KEY` | _(none)_ | Database encryption key (required in release builds) |
| `TFTSR_ENCRYPTION_KEY` | _(none)_ | Credential encryption key (required in release builds) |
| `RUST_LOG` | `info` | Tracing log level (`debug`, `info`, `warn`, `error`) |
---


@@ -1,6 +1,6 @@
# AI Providers
TFTSR supports 5 AI providers, selectable per-session. API keys are stored in the Stronghold encrypted vault.
TFTSR supports 6+ AI providers, including custom providers with flexible authentication and API formats. API keys are stored encrypted with AES-256-GCM.
## Provider Factory
@@ -55,13 +55,21 @@ Covers: OpenAI, Azure OpenAI, LM Studio, vLLM, **LiteLLM (AWS Bedrock)**, and an
|-------|-------|
| `config.name` | `"gemini"` |
| URL | `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent` |
| Auth | API key as `?key=` query parameter |
| Auth | `x-goog-api-key: <api_key>` header |
| Max tokens | 4096 |
**Models:** `gemini-2.0-flash`, `gemini-2.0-pro`, `gemini-1.5-pro`, `gemini-1.5-flash`
---
## Transport Security Notes
- Provider clients use TLS certificate verification via `reqwest`
- Provider calls are configured with explicit request timeouts to avoid indefinite hangs
- Credentials are sent in headers (not URL query strings)
---
### 4. Mistral AI
| Field | Value |
@@ -113,6 +121,131 @@ The domain prompt is injected as the first `system` role message in every new co
---
## 6. Custom Provider (Custom REST & Others)
**Status:** ✅ **Implemented** (v0.2.6)
Custom providers allow integration with non-OpenAI-compatible APIs. The application supports two API formats:
### Format: OpenAI Compatible (Default)
Standard OpenAI `/chat/completions` endpoint with Bearer authentication.
| Field | Default Value |
|-------|--------------|
| `api_format` | `"openai"` |
| `custom_endpoint_path` | `/chat/completions` |
| `custom_auth_header` | `Authorization` |
| `custom_auth_prefix` | `Bearer ` |
**Use cases:**
- Self-hosted LLMs with OpenAI-compatible APIs
- Custom proxy services
- Enterprise gateways
---
### Format: Custom REST
**Motorola Solutions Internal GenAI Service** — Enterprise AI platform with centralized cost tracking and model access.
| Field | Value |
|-------|-------|
| `config.provider_type` | `"custom"` |
| `config.api_format` | `"custom_rest"` |
| API URL | `https://genai-service.commandcentral.com/app-gateway/api/v2/chat` (prod)<br>`https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat` (stage) |
| Auth Header | `x-msi-genai-api-key` |
| Auth Prefix | `` (empty - no Bearer prefix) |
| Endpoint Path | `` (empty - URL includes full path `/api/v2/chat`) |
**Available Models (dropdown in Settings):**
- `VertexGemini` — Gemini 2.0 Flash (Private/GCP)
- `Claude-Sonnet-4` — Claude Sonnet 4 (Public/Anthropic)
- `ChatGPT4o` — GPT-4o (Public/OpenAI)
- `ChatGPT-5_2-Chat` — GPT-4.5 (Public/OpenAI)
- Full list is sourced from [GenAI API User Guide](../GenAI%20API%20User%20Guide.md)
- Includes a `Custom model...` option to manually enter any model ID
**Request Format:**
```json
{
"model": "VertexGemini",
"prompt": "User's latest message",
"system": "Optional system prompt",
"sessionId": "uuid-for-conversation-continuity",
"userId": "user.name@motorolasolutions.com"
}
```
**Response Format:**
```json
{
"status": true,
"sessionId": "uuid",
"msg": "AI response text",
"initialPrompt": false
}
```
**Key Differences from OpenAI:**
- **Single prompt** instead of message array (server manages history via `sessionId`)
- **Response in `msg` field** instead of `choices[0].message.content`
- **Session-based** conversation continuity (no need to resend history)
- **Cost tracking** via `userId` field (optional — defaults to API key owner if omitted)
- **Custom client header**: `X-msi-genai-client: tftsr-devops-investigation`
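The session-based continuity listed above (first call with no `sessionId`, then reuse of the server-issued one) can be sketched as a tiny state holder. The names here are illustrative; the real logic lives in `chat_custom_rest()`:

```rust
/// Hypothetical session tracker for the stateful Custom REST API.
struct CustomRestSession {
    session_id: Option<String>,
}

impl CustomRestSession {
    fn new() -> Self {
        Self { session_id: None }
    }

    /// Value for the next request's sessionId field: None on the first
    /// message (serialized as JSON null or omitted), then the stored id.
    fn outgoing_session_id(&self) -> Option<&str> {
        self.session_id.as_deref()
    }

    /// Store the sessionId echoed back in each response so the server
    /// can keep conversation history on its side.
    fn record_response(&mut self, session_id: &str) {
        self.session_id = Some(session_id.to_string());
    }
}

fn main() {
    let mut s = CustomRestSession::new();
    assert!(s.outgoing_session_id().is_none()); // first message: no session yet
    s.record_response("uuid-1234");
    assert_eq!(s.outgoing_session_id(), Some("uuid-1234")); // reused afterwards
}
```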
**Configuration (Settings → AI Providers → Add Provider):**
```
Name: Custom REST (MSI GenAI)
Type: Custom
API Format: Custom REST
API URL: https://genai-service.stage.commandcentral.com/app-gateway/api/v2/chat
Model: VertexGemini
API Key: (your MSI GenAI API key from portal)
User ID: your.name@motorolasolutions.com (optional)
Endpoint Path: (leave empty)
Auth Header: x-msi-genai-api-key
Auth Prefix: (leave empty)
```
**Rate Limits:**
- $50/user/month (enforced server-side)
- Per-API-key quotas available
**Troubleshooting:**
| Error | Cause | Solution |
|-------|-------|----------|
| 403 Forbidden | Invalid API key or insufficient permissions | Verify key in MSI GenAI portal, check model access |
| Missing `userId` field | Configuration not saved | Ensure UI shows User ID field when `api_format=custom_rest` |
| No conversation history | `sessionId` not persisted | Session ID stored in `ProviderConfig.session_id` — currently per-provider, not per-conversation |
**Implementation Details:**
- Backend: `src-tauri/src/ai/openai.rs::chat_custom_rest()`
- Schema: `src-tauri/src/state.rs::ProviderConfig` (added `user_id`, `api_format`, custom auth fields)
- Frontend: `src/pages/Settings/AIProviders.tsx` (conditional UI for Custom REST + model dropdown)
- CSP whitelist: `https://genai-service.stage.commandcentral.com` and production domain
---
## Custom Provider Configuration Fields
All providers support the following optional configuration fields (v0.2.6+):
| Field | Type | Purpose | Default |
|-------|------|---------|---------|
| `custom_endpoint_path` | `Option<String>` | Override endpoint path | `/chat/completions` |
| `custom_auth_header` | `Option<String>` | Custom auth header name | `Authorization` |
| `custom_auth_prefix` | `Option<String>` | Prefix before API key | `Bearer ` |
| `api_format` | `Option<String>` | API format (`openai` or `custom_rest`) | `openai` |
| `session_id` | `Option<String>` | Session ID for stateful APIs | None |
| `user_id` | `Option<String>` | User ID for cost tracking (Custom REST MSI contract) | None |
**Backward Compatibility:**
All fields are optional and default to OpenAI-compatible behavior. Existing provider configurations are unaffected.
---
## Adding a New Provider
1. Create `src-tauri/src/ai/{name}.rs` implementing the `Provider` trait


@@ -29,7 +29,7 @@ macOS runner runs jobs **directly on the host** (no Docker container) — macOS
## Test Pipeline (`.woodpecker/test.yml`)
**Triggers:** Every push and pull request to any branch.
**Triggers:** Pull requests only.
```
Pipeline steps:
@@ -65,20 +65,28 @@ steps:
---
## Release Pipeline (`.gitea/workflows/release.yml`)
## Release Pipeline (`.gitea/workflows/auto-tag.yml`)
**Triggers:** Git tags matching `v*`
**Triggers:** Pushes to `master` (auto-tag), then release build/upload jobs run after `autotag`.
Auto tags are created by `.gitea/workflows/auto-tag.yml` using `git tag` + `git push`.
Release jobs are executed in the same workflow and depend on `autotag` completion.
```
Jobs (run in parallel):
build-linux-amd64 → cargo tauri build (x86_64-unknown-linux-gnu)
→ {.deb, .rpm, .AppImage} uploaded to Gitea release
→ fails fast if no Linux artifacts are produced
build-windows-amd64 → cargo tauri build (x86_64-pc-windows-gnu) via mingw-w64
→ {.exe, .msi} uploaded to Gitea release
build-linux-arm64 → cargo tauri build (aarch64-unknown-linux-gnu)
→ fails fast if no Windows artifacts are produced
build-linux-arm64 → Ubuntu 22.04 base (ports.ubuntu.com for arm64 packages)
→ cargo tauri build (aarch64-unknown-linux-gnu)
→ {.deb, .rpm, .AppImage} uploaded to Gitea release
→ fails fast if no Linux artifacts are produced
build-macos-arm64 → cargo tauri build (aarch64-apple-darwin) — runs on local Mac
→ {.dmg} uploaded to Gitea release
→ existing same-name assets are deleted before upload (rerun-safe)
→ unsigned; after install run: xattr -cr /Applications/TFTSR.app
```
@@ -102,7 +110,7 @@ the repo directly within its commands (using `http://172.0.0.29:3000`, accessibl
the local machine) and uploads its artifacts inline. The `upload-release` step (amd64)
handles amd64 + windows artifacts only.
**Clone override (release.yml — amd64 workspace):**
**Clone override (auto-tag.yml — amd64 workspace):**
```yaml
clone:
@@ -203,6 +211,18 @@ UPDATE protect_branch SET protected=true, require_pull_request=true WHERE repo_i
## Known Issues & Fixes
### Debian Multiarch Breaks arm64 Cross-Compile (`held broken packages`)
When using `rust:1.88-slim` (Debian Bookworm) with `dpkg --add-architecture arm64`, apt
resolves amd64 and arm64 simultaneously against the same mirror. The `binary-all` package
index is duplicated and certain `-dev` package pairs cannot be co-installed because they
don't declare `Multi-Arch: same`. This produces `E: Unable to correct problems, you have
held broken packages` and cannot be fixed by tweaking `sources.list` entries.
**Fix**: Use `ubuntu:22.04` as the container image. Ubuntu routes arm64 through
`ports.ubuntu.com/ubuntu-ports` — a separate mirror from `archive.ubuntu.com` (amd64).
There are no cross-arch index overlaps and the dependency resolver succeeds. Rust must be
installed manually via `rustup` since it is not pre-installed in the Ubuntu base image.
### Step Containers Cannot Reach `gitea_app`
Default Docker bridge containers cannot resolve `gitea_app` or reach `172.0.0.29:3000`
(host firewall). Fix: use `network_mode: gogs_default` in any step that needs Gitea


@@ -2,7 +2,7 @@
## Overview
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 10 versioned migrations are tracked in the `_migrations` table.
TFTSR uses **SQLite** via `rusqlite` with the `bundled-sqlcipher` feature for AES-256 encryption in production. 11 versioned migrations are tracked in the `_migrations` table.
**DB file location:** `{app_data_dir}/tftsr.db`
@@ -38,7 +38,7 @@ pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
---
## Schema (10 Migrations)
## Schema (11 Migrations)
### 001 — issues
@@ -181,6 +181,47 @@ CREATE VIRTUAL TABLE issues_fts USING fts5(
);
```
### 011 — credentials & integration_config (v0.2.3+)
**Integration credentials table:**
```sql
CREATE TABLE credentials (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
token_hash TEXT NOT NULL, -- SHA-256 hash for audit
encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
created_at TEXT NOT NULL,
expires_at TEXT,
UNIQUE(service)
);
```
**Integration configuration table:**
```sql
CREATE TABLE integration_config (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
base_url TEXT NOT NULL,
username TEXT, -- ServiceNow only
project_name TEXT, -- Azure DevOps only
space_key TEXT, -- Confluence only
auto_create_enabled INTEGER NOT NULL DEFAULT 0,
updated_at TEXT NOT NULL,
UNIQUE(service)
);
```
**Encryption:**
- OAuth2 tokens encrypted with AES-256-GCM
- Key derived from `TFTSR_DB_KEY` environment variable
- Random 96-bit nonce per encryption
- Format: `base64(nonce || ciphertext || tag)`
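The `nonce || ciphertext || tag` framing can be illustrated with a plain byte-slice split. This only demonstrates the layout (96-bit nonce as stated above, 16-byte tag, which is the GCM default); the real code performs the actual AES-256-GCM operations:

```rust
const NONCE_LEN: usize = 12; // 96-bit random nonce, as described above
const TAG_LEN: usize = 16;   // GCM authentication tag (default size)

/// Split a base64-decoded payload into (nonce, ciphertext, tag).
/// Returns None if the buffer is too short to contain both framing parts.
fn split_payload(buf: &[u8]) -> Option<(&[u8], &[u8], &[u8])> {
    if buf.len() < NONCE_LEN + TAG_LEN {
        return None;
    }
    let (nonce, rest) = buf.split_at(NONCE_LEN);
    let (ciphertext, tag) = rest.split_at(rest.len() - TAG_LEN);
    Some((nonce, ciphertext, tag))
}

fn main() {
    // 12-byte nonce + 5-byte ciphertext + 16-byte tag = 33 bytes total
    let payload = vec![0u8; 33];
    let (nonce, ct, tag) = split_payload(&payload).unwrap();
    assert_eq!((nonce.len(), ct.len(), tag.len()), (12, 5, 16));
}
```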
**Usage:**
- OAuth2 flows (Confluence, Azure DevOps): Store encrypted bearer token
- Basic auth (ServiceNow): Store encrypted password
- One credential per service (enforced by UNIQUE constraint)
---
## Key Design Notes


@@ -35,7 +35,8 @@ npm install --legacy-peer-deps
| Variable | Default | Purpose |
|----------|---------|---------|
| `TFTSR_DATA_DIR` | Platform data dir | Override DB location |
| `TFTSR_DB_KEY` | `dev-key-change-in-prod` | DB encryption key (required in production) |
| `TFTSR_DB_KEY` | _(none)_ | DB encryption key (required in release builds) |
| `TFTSR_ENCRYPTION_KEY` | _(none)_ | Credential encryption key (required in release builds) |
| `RUST_LOG` | `info` | Tracing verbosity: `debug`, `info`, `warn`, `error` |
Application data is stored at:
@ -120,7 +121,7 @@ cargo tauri build
# Outputs: .deb, .rpm, .AppImage (Linux)
```
Release builds enable **SQLCipher AES-256** encryption. Set `TFTSR_DB_KEY` before building.
Release builds enforce secure key configuration. Set both `TFTSR_DB_KEY` and `TFTSR_ENCRYPTION_KEY` before building.
---

View File

@ -1,6 +1,6 @@
# TFTSR — IT Triage & RCA Desktop Application
# Troubleshooting and RCA Assistant
**TFTSR** is a secure desktop application for guided IT incident triage, root cause analysis (RCA), and post-mortem documentation. Built with Tauri 2.x (Rust + WebView) and React 18.
**Troubleshooting and RCA Assistant** is a secure desktop application for guided IT incident triage, root cause analysis (RCA), and post-mortem documentation. Built with Tauri 2.x (Rust + WebView) and React 18.
**CI:** ![build](http://172.0.0.29:3000/sarman/tftsr-devops_investigation/actions/workflows/test.yml/badge.svg) — rustfmt · clippy · 64 Rust tests · tsc · vitest — all green
@ -24,8 +24,10 @@
- **5-Whys AI Triage** — Interactive guided root cause analysis via multi-turn AI chat
- **PII Auto-Redaction** — Detects and redacts sensitive data before any AI send
- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), local Ollama (fully offline)
- **SQLCipher AES-256** — All issue history encrypted at rest
- **Multi-Provider AI** — OpenAI, Anthropic Claude, Google Gemini, Mistral, AWS Bedrock (via LiteLLM), MSI GenAI (Motorola internal), local Ollama (fully offline)
- **Custom Provider Support** — Flexible authentication (Bearer, custom headers) and API formats (OpenAI-compatible, Custom REST)
- **External Integrations** — Confluence, ServiceNow, Azure DevOps with OAuth2 PKCE flows
- **SQLCipher AES-256** — All issue history and credentials encrypted at rest
- **RCA + Post-Mortem Generation** — Auto-populated Markdown templates, exportable as MD/PDF
- **Ollama Management** — Hardware detection, model recommendations, in-app model management
- **Audit Trail** — Every external data send logged with SHA-256 hash
@ -33,9 +35,13 @@
## Releases
| Version | Status | Platforms |
| Version | Status | Highlights |
|---------|--------|-----------|
| v0.1.1 | 🚀 Released | linux/amd64 · linux/arm64 · windows/amd64 (.deb, .rpm, .AppImage, .exe, .msi) |
| v0.2.6 | 🚀 Latest | MSI GenAI support, OAuth2 shell permissions, user ID tracking |
| v0.2.3 | Released | Confluence/ServiceNow/ADO REST API clients (19 TDD tests) |
| v0.1.1 | Released | Core application with PII detection, RCA generation |
**Platforms:** linux/amd64 · linux/arm64 · windows/amd64 (.deb, .rpm, .AppImage, .exe, .msi)
Download from [Releases](https://gogs.tftsr.com/sarman/tftsr-devops_investigation/releases). All builds are produced natively (no QEMU emulation).
@ -45,7 +51,7 @@ Download from [Releases](https://gogs.tftsr.com/sarman/tftsr-devops_investigatio
|-------|--------|
| Phases 1–8 (Core application) | ✅ Complete |
| Phase 9 (History/Search) | 🔲 Pending |
| Phase 10 (Integrations) | 🕐 v0.2 stubs only |
| Phase 10 (Integrations) | ✅ Complete — Confluence, ServiceNow, Azure DevOps fully implemented with OAuth2 |
| Phase 11 (CI/CD) | ✅ Complete — Gitea Actions fully operational |
| Phase 12 (Release packaging) | ✅ linux/amd64 · linux/arm64 (native) · windows/amd64 |

View File

@ -220,15 +220,206 @@ Returns audit log entries. Filter by action, entity_type, date range.
---
## Integration Commands (v0.2 Stubs)
## Integration Commands
All 6 integration commands currently return `"not yet available"` errors.
> **Status:** **Fully Implemented** (v0.2.3+)
| Command | Purpose |
|---------|---------|
| `test_confluence_connection` | Verify Confluence credentials |
| `publish_to_confluence` | Publish RCA/postmortem to Confluence space |
| `test_servicenow_connection` | Verify ServiceNow credentials |
| `create_servicenow_incident` | Create incident from issue |
| `test_azuredevops_connection` | Verify Azure DevOps credentials |
| `create_azuredevops_workitem` | Create work item from issue |
All integration commands are production-ready with complete OAuth2/authentication flows.
### OAuth2 Commands
### `initiate_oauth`
```typescript
initiateOauthCmd(service: "confluence" | "servicenow" | "azuredevops") → OAuthInitResponse
```
Starts OAuth2 PKCE flow. Returns authorization URL and state key. Opens browser window for user authentication.
```typescript
interface OAuthInitResponse {
auth_url: string; // URL to open in browser
state: string; // State key for callback verification
}
```
**Flow:**
1. Generates PKCE challenge
2. Starts local callback server on `http://localhost:8765`
3. Opens authorization URL in browser
4. User authenticates with service
5. Service redirects to callback server
6. Callback server triggers `handle_oauth_callback`
### `handle_oauth_callback`
```typescript
handleOauthCallbackCmd(service: string, code: string, stateKey: string) → void
```
Exchanges authorization code for access token. Encrypts token with AES-256-GCM and stores in database.
### Confluence Commands
### `test_confluence_connection`
```typescript
testConfluenceConnectionCmd(baseUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies Confluence connection by calling `/rest/api/user/current`.
### `list_confluence_spaces`
```typescript
listConfluenceSpacesCmd(config: ConfluenceConfig) → Space[]
```
Lists all accessible Confluence spaces.
### `search_confluence_pages`
```typescript
searchConfluencePagesCmd(config: ConfluenceConfig, query: string, spaceKey?: string) → Page[]
```
Searches pages using CQL (Confluence Query Language). Optional space filter.
### `publish_to_confluence`
```typescript
publishToConfluenceCmd(config: ConfluenceConfig, spaceKey: string, title: string, contentHtml: string, parentPageId?: string) → PublishResult
```
Creates a new page in Confluence. Returns page ID and URL.
### `update_confluence_page`
```typescript
updateConfluencePageCmd(config: ConfluenceConfig, pageId: string, title: string, contentHtml: string, version: number) → PublishResult
```
Updates an existing page. Requires current version number.
### ServiceNow Commands
### `test_servicenow_connection`
```typescript
testServiceNowConnectionCmd(instanceUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies ServiceNow connection by querying incident table.
### `search_servicenow_incidents`
```typescript
searchServiceNowIncidentsCmd(config: ServiceNowConfig, query: string) → Incident[]
```
Searches incidents by short description. Returns up to 10 results.
### `create_servicenow_incident`
```typescript
createServiceNowIncidentCmd(config: ServiceNowConfig, shortDesc: string, description: string, urgency: string, impact: string) → TicketResult
```
Creates a new incident. Returns incident number and URL.
```typescript
interface TicketResult {
id: string; // sys_id (UUID)
ticket_number: string; // INC0010001
url: string; // Direct link to incident
}
```
### `get_servicenow_incident`
```typescript
getServiceNowIncidentCmd(config: ServiceNowConfig, incidentId: string) → Incident
```
Retrieves incident by sys_id or incident number (e.g., `INC0010001`).
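A simple heuristic for telling the two identifier forms apart might look like the sketch below (illustrative, not the app's actual check):

```typescript
// A sys_id is a 32-character hex UUID without dashes; anything else
// (e.g. INC0010001) is treated as an incident number.
const isSysId = (id: string): boolean => /^[0-9a-f]{32}$/i.test(id);

// Pick the query field accordingly before calling the Table API.
function lookupField(id: string): "sys_id" | "number" {
  return isSysId(id) ? "sys_id" : "number";
}
```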
### `update_servicenow_incident`
```typescript
updateServiceNowIncidentCmd(config: ServiceNowConfig, sysId: string, updates: Record<string, any>) → TicketResult
```
Updates incident fields via a PATCH request with a JSON body of the changed fields.
### Azure DevOps Commands
### `test_azuredevops_connection`
```typescript
testAzureDevOpsConnectionCmd(orgUrl: string, credentials: Record<string, unknown>) → ConnectionResult
```
Verifies Azure DevOps connection by querying project info.
### `search_azuredevops_workitems`
```typescript
searchAzureDevOpsWorkItemsCmd(config: AzureDevOpsConfig, query: string) → WorkItem[]
```
Searches work items using WIQL (Work Item Query Language).
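A query of the kind this command might issue can be sketched as below; the field list and escaping are illustrative, not the app's actual query builder:

```typescript
// Build a WIQL title search. Single quotes are doubled to escape them
// inside the WIQL string literal (hypothetical helper).
function buildWiql(term: string): string {
  const escaped = term.replace(/'/g, "''");
  return (
    "SELECT [System.Id], [System.Title], [System.State] " +
    "FROM WorkItems " +
    `WHERE [System.Title] CONTAINS '${escaped}' ` +
    "ORDER BY [System.ChangedDate] DESC"
  );
}
```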
### `create_azuredevops_workitem`
```typescript
createAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, title: string, description: string, workItemType: string, severity: string) → TicketResult
```
Creates a work item (Bug, Task, User Story). Returns work item ID and URL.
**Work Item Types:**
- `Bug` — Software defect
- `Task` — Work assignment
- `User Story` — Feature request
- `Issue` — Problem or blocker
- `Incident` — Production incident
### `get_azuredevops_workitem`
```typescript
getAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, workItemId: number) → WorkItem
```
Retrieves work item by ID.
### `update_azuredevops_workitem`
```typescript
updateAzureDevOpsWorkItemCmd(config: AzureDevOpsConfig, workItemId: number, updates: Record<string, any>) → TicketResult
```
Updates work item fields. Uses JSON-PATCH format.
---
## Common Types
### `ConnectionResult`
```typescript
interface ConnectionResult {
success: boolean;
message: string;
}
```
### `PublishResult`
```typescript
interface PublishResult {
id: string; // Page ID or document ID
url: string; // Direct link to published content
}
```
### `TicketResult`
```typescript
interface TicketResult {
id: string; // sys_id or work item ID
ticket_number: string; // Human-readable number
url: string; // Direct link
}
```
---
## Authentication Storage
All integration credentials are stored in the `credentials` table:
```sql
CREATE TABLE credentials (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
token_hash TEXT NOT NULL, -- SHA-256 for audit
encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
created_at TEXT NOT NULL,
expires_at TEXT
);
```
**Encryption:**
- Algorithm: AES-256-GCM
- Key derivation: From `TFTSR_ENCRYPTION_KEY` environment variable
- Nonce: Random 96-bit per encryption
- Format: `base64(nonce || ciphertext || tag)`
**Token retrieval:**
```rust
// Backend: src-tauri/src/integrations/auth.rs
pub fn decrypt_token(encrypted: &str) -> Result<String, String>
```

View File

@ -1,97 +1,273 @@
# Integrations
> **Status: All integrations are v0.2 stubs.** They are implemented as placeholder commands that return `"not yet available"` errors. The authentication framework and command signatures are finalized, but the actual API calls are not yet implemented.
> **Status: ✅ Fully Implemented (v0.2.6)** — All three integrations (Confluence, ServiceNow, Azure DevOps) are production-ready with complete OAuth2/authentication flows and REST API clients.
---
## Confluence
**Purpose:** Publish RCA and post-mortem documents to a Confluence space.
**Purpose:** Publish RCA and post-mortem documents to Confluence spaces.
**Commands:**
- `test_confluence_connection(base_url, credentials)` — Verify credentials
- `publish_to_confluence(doc_id, space_key, parent_page_id?)` — Create/update page
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- Confluence REST API v2: `POST /wiki/rest/api/content`
- Auth: Basic auth (email + API token) or OAuth2
- Page format: Convert Markdown → Confluence storage format (XHTML-like)
### Features
- OAuth2 authentication with PKCE flow
- List accessible spaces
- Search pages by CQL query
- Create new pages with optional parent
- Update existing pages with version management
**Configuration (Settings → Integrations → Confluence):**
### API Client (`src-tauri/src/integrations/confluence.rs`)
**Functions:**
```rust
test_connection(config: &ConfluenceConfig) -> Result<ConnectionResult, String>
list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String>
search_pages(config: &ConfluenceConfig, query: &str, space_key: Option<&str>) -> Result<Vec<Page>, String>
publish_page(config: &ConfluenceConfig, space_key: &str, title: &str, content_html: &str, parent_page_id: Option<&str>) -> Result<PublishResult, String>
update_page(config: &ConfluenceConfig, page_id: &str, title: &str, content_html: &str, version: i32) -> Result<PublishResult, String>
```
Base URL: https://yourorg.atlassian.net
Email: user@example.com
API Token: (stored in Stronghold)
Space Key: PROJ
### Configuration (Settings → Integrations → Confluence)
```
Base URL: https://yourorg.atlassian.net
Authentication: OAuth2 (bearer token, encrypted at rest)
Default Space: PROJ
```
### Implementation Details
- **API**: Confluence REST API v1 (`/rest/api/`)
- **Auth**: OAuth2 bearer token (encrypted with AES-256-GCM)
- **Endpoints**:
- `GET /rest/api/user/current` — Test connection
- `GET /rest/api/space` — List spaces
- `GET /rest/api/content/search` — Search with CQL
- `POST /rest/api/content` — Create page
- `PUT /rest/api/content/{id}` — Update page
- **Page format**: Confluence Storage Format (XHTML)
- **TDD Tests**: 6 tests with mockito HTTP mocking
---
## ServiceNow
**Purpose:** Create incident records in ServiceNow from TFTSR issues.
**Purpose:** Create and manage incident records in ServiceNow.
**Commands:**
- `test_servicenow_connection(instance_url, credentials)` — Verify credentials
- `create_servicenow_incident(issue_id, config)` — Create incident
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- ServiceNow Table API: `POST /api/now/table/incident`
- Auth: Basic auth or OAuth2 bearer token
- Field mapping: TFTSR severity → ServiceNow priority (P1=Critical, P2=High, etc.)
### Features
- Basic authentication (username/password)
- Search incidents by description
- Create new incidents with urgency/impact
- Get incident by sys_id or number
- Update existing incidents
**Configuration:**
### API Client (`src-tauri/src/integrations/servicenow.rs`)
**Functions:**
```rust
test_connection(config: &ServiceNowConfig) -> Result<ConnectionResult, String>
search_incidents(config: &ServiceNowConfig, query: &str) -> Result<Vec<Incident>, String>
create_incident(config: &ServiceNowConfig, short_description: &str, description: &str, urgency: &str, impact: &str) -> Result<TicketResult, String>
get_incident(config: &ServiceNowConfig, incident_id: &str) -> Result<Incident, String>
update_incident(config: &ServiceNowConfig, sys_id: &str, updates: serde_json::Value) -> Result<TicketResult, String>
```
Instance URL: https://yourorg.service-now.com
Username: admin
Password: (stored in Stronghold)
### Configuration (Settings → Integrations → ServiceNow)
```
Instance URL: https://yourorg.service-now.com
Username: admin
Password: (encrypted with AES-256-GCM)
```
### Implementation Details
- **API**: ServiceNow Table API (`/api/now/table/incident`)
- **Auth**: HTTP Basic authentication
- **Severity mapping**: TFTSR P1-P4 → ServiceNow urgency/impact (1-3)
- **Incident lookup**: Supports both sys_id (UUID) and incident number (INC0010001)
- **TDD Tests**: 7 tests with mockito HTTP mocking
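The severity mapping could take a shape like the sketch below; the concrete urgency/impact values are an assumption, not copied from the app's source:

```typescript
// Hypothetical TFTSR P1-P4 → ServiceNow urgency/impact mapping
// (ServiceNow expects the values as strings "1"-"3").
type SnowPriority = { urgency: "1" | "2" | "3"; impact: "1" | "2" | "3" };

const SEVERITY_MAP: Record<string, SnowPriority> = {
  P1: { urgency: "1", impact: "1" }, // critical
  P2: { urgency: "2", impact: "2" }, // high
  P3: { urgency: "3", impact: "2" }, // moderate
  P4: { urgency: "3", impact: "3" }, // low
};

// Unknown severities fall back to the lowest priority.
function mapSeverity(severity: string): SnowPriority {
  return SEVERITY_MAP[severity] ?? SEVERITY_MAP.P4;
}
```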
---
## Azure DevOps
**Purpose:** Create work items (bugs/incidents) in Azure DevOps from TFTSR issues.
**Purpose:** Create and manage work items (bugs/tasks) in Azure DevOps.
**Commands:**
- `test_azuredevops_connection(org_url, credentials)` — Verify credentials
- `create_azuredevops_workitem(issue_id, project, config)` — Create work item
**Status:** ✅ **Implemented** (v0.2.3)
**Planned implementation:**
- Azure DevOps REST API: `POST /{organization}/{project}/_apis/wit/workitems/${type}`
- Auth: Personal Access Token (PAT) via Basic auth header
- Work item type: Bug or Incident
### Features
- OAuth2 authentication with PKCE flow
- Search work items via WIQL queries
- Create work items (Bug, Task, User Story)
- Get work item details by ID
- Update work items with JSON-PATCH operations
**Configuration:**
### API Client (`src-tauri/src/integrations/azuredevops.rs`)
**Functions:**
```rust
test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionResult, String>
search_work_items(config: &AzureDevOpsConfig, query: &str) -> Result<Vec<WorkItem>, String>
create_work_item(config: &AzureDevOpsConfig, title: &str, description: &str, work_item_type: &str, severity: &str) -> Result<TicketResult, String>
get_work_item(config: &AzureDevOpsConfig, work_item_id: i64) -> Result<WorkItem, String>
update_work_item(config: &AzureDevOpsConfig, work_item_id: i64, updates: serde_json::Value) -> Result<TicketResult, String>
```
### Configuration (Settings → Integrations → Azure DevOps)
```
Organization URL: https://dev.azure.com/yourorg
Personal Access Token: (stored in Stronghold)
Authentication: OAuth2 (bearer token, encrypted at rest)
Project: MyProject
Work Item Type: Bug
```
### Implementation Details
- **API**: Azure DevOps REST API v7.0
- **Auth**: OAuth2 bearer token (encrypted with AES-256-GCM)
- **WIQL**: Work Item Query Language for advanced search
- **Work item types**: Bug, Task, User Story, Issue, Incident
- **Severity mapping**: Bug-specific field `Microsoft.VSTS.Common.Severity`
- **TDD Tests**: 6 tests with mockito HTTP mocking
---
## OAuth2 Authentication Flow
All integrations using OAuth2 (Confluence, Azure DevOps) follow the same flow:
1. **User clicks "Connect"** in Settings → Integrations
2. **Backend generates PKCE challenge** and stores code verifier
3. **Local callback server starts** on `http://localhost:8765`
4. **Browser opens** with OAuth authorization URL
5. **User authenticates** with service provider
6. **Service redirects** to `http://localhost:8765/callback?code=...`
7. **Callback server extracts code** and triggers token exchange
8. **Backend exchanges code for token** using PKCE verifier
9. **Token encrypted** with AES-256-GCM and stored in DB
10. **UI shows "Connected"** status
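Steps 2 and 8 hinge on the PKCE verifier/challenge pair. A minimal sketch using Node's built-in `crypto` (function names are illustrative):

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url without padding, per RFC 7636.
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Generate a code verifier and its S256 challenge. The verifier is kept
// locally; only the challenge goes into the authorization URL. During the
// token exchange the provider hashes the submitted verifier and compares
// it to the original challenge.
function generatePkce(): { verifier: string; challenge: string } {
  const verifier = b64url(randomBytes(32)); // 43-character verifier
  const challenge = b64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```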
**Implementation:**
- `src-tauri/src/integrations/auth.rs` — PKCE generation, token exchange, encryption
- `src-tauri/src/integrations/callback_server.rs` — Local HTTP server (warp)
- `src-tauri/src/commands/integrations.rs` — IPC command handlers
**Security:**
- Tokens encrypted at rest with AES-256-GCM (256-bit key)
- Key derived from environment variable `TFTSR_ENCRYPTION_KEY`
- PKCE prevents authorization code interception
- Callback server only accepts from `localhost`
---
## Database Schema
**Credentials Table (`migration 011`):**
```sql
CREATE TABLE credentials (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
token_hash TEXT NOT NULL, -- SHA-256 hash for audit
encrypted_token TEXT NOT NULL, -- AES-256-GCM encrypted
created_at TEXT NOT NULL,
expires_at TEXT,
UNIQUE(service)
);
```
**Integration Config Table:**
```sql
CREATE TABLE integration_config (
id TEXT PRIMARY KEY,
service TEXT NOT NULL CHECK(service IN ('confluence','servicenow','azuredevops')),
base_url TEXT NOT NULL,
username TEXT, -- ServiceNow only
project_name TEXT, -- Azure DevOps only
space_key TEXT, -- Confluence only
auto_create_enabled INTEGER NOT NULL DEFAULT 0,
updated_at TEXT NOT NULL,
UNIQUE(service)
);
```
---
## v0.2 Roadmap
## Testing
Integration implementation order (planned):
All integrations have comprehensive test coverage:
1. **Confluence** — Most commonly requested; Markdown-to-Confluence conversion library needed
2. **Azure DevOps** — Clean REST API, straightforward PAT auth
3. **ServiceNow** — More complex field mapping; may require customer-specific configuration
```bash
# Run all integration tests
cargo test --manifest-path src-tauri/Cargo.toml --lib integrations
Each integration will also require:
- Audit log entry on every publish action
- PII check on document content before external publish
- Connection test UI in Settings → Integrations
# Run specific integration tests
cargo test --manifest-path src-tauri/Cargo.toml confluence
cargo test --manifest-path src-tauri/Cargo.toml servicenow
cargo test --manifest-path src-tauri/Cargo.toml azuredevops
```
**Test statistics:**
- **Confluence**: 6 tests (connection, spaces, search, publish, update)
- **ServiceNow**: 7 tests (connection, search, create, get by sys_id, get by number, update)
- **Azure DevOps**: 6 tests (connection, WIQL search, create, get, update)
- **Total**: 19 integration tests (all passing)
**Test approach:**
- TDD methodology (tests written first)
- HTTP mocking with `mockito` crate
- No external API calls in tests
- All auth flows tested with mock responses
---
## Adding an Integration
## CSP Configuration
1. Implement the logic in `src-tauri/src/integrations/{name}.rs`
2. Remove the stub `Err("not yet available")` return in `commands/integrations.rs`
3. Add the new API endpoint to the Tauri CSP `connect-src`
4. Add Stronghold secret key for the API credentials
5. Wire up the Settings UI in `src/pages/Settings/Integrations.tsx`
6. Add audit log call before the external API request
All integration domains are whitelisted in `src-tauri/tauri.conf.json`:
```json
"connect-src": "... https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com"
```
---
## Adding a New Integration
1. **Create API client**: `src-tauri/src/integrations/{name}.rs`
2. **Implement functions**: `test_connection()`, create/read/update operations
3. **Add TDD tests**: Use `mockito` for HTTP mocking
4. **Update migration**: Add service to `credentials` and `integration_config` CHECK constraints
5. **Add IPC commands**: `src-tauri/src/commands/integrations.rs`
6. **Update CSP**: Add API domains to `tauri.conf.json`
7. **Wire up UI**: `src/pages/Settings/Integrations.tsx`
8. **Update capabilities**: Add any required Tauri permissions
9. **Document**: Update this wiki page
---
## Troubleshooting
### OAuth "Command plugin:shell|open not allowed"
**Fix**: Add `"shell:allow-open"` to `src-tauri/capabilities/default.json`
### Token Exchange Fails
**Check**:
1. PKCE verifier matches challenge
2. Redirect URI exactly matches registered callback
3. Authorization code hasn't expired
4. Client ID/secret are correct
### ServiceNow 401 Unauthorized
**Check**:
1. Username/password are correct
2. User has API access enabled
3. Instance URL is correct (no trailing slash)
### Confluence API 404
**Check**:
1. Base URL format: `https://yourorg.atlassian.net` (no `/wiki/`)
2. Space key exists and user has access
3. OAuth token has required scopes (`read:confluence-content.all`, `write:confluence-content`)
### Azure DevOps 403 Forbidden
**Check**:
1. OAuth token has required scopes (`vso.work_write`)
2. User has permissions in the project
3. Project name matches exactly (project names are case-sensitive)

View File

@ -10,7 +10,7 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
1. Upload log file
2. detect_pii(log_file_id)
→ Scans content with 13 regex patterns
→ Scans content with PII regex patterns (including hostname + expanded card brands)
→ Resolves overlapping matches (longest wins)
→ Returns Vec<PiiSpan> with byte offsets + replacements
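The longest-wins resolution step can be sketched as follows (the `Span` shape mirrors `PiiSpan`, but the helper is hypothetical):

```typescript
// Minimal span shape: byte offsets into the original text.
interface Span {
  start: number;
  end: number;
  replacement: string;
}

// Longest-wins overlap resolution: consider spans from longest to
// shortest, keeping each one only if it does not overlap an already
// kept span; return the survivors in text order.
function resolveOverlaps(spans: Span[]): Span[] {
  const byLength = [...spans].sort((a, b) => (b.end - b.start) - (a.end - a.start));
  const kept: Span[] = [];
  for (const s of byLength) {
    const overlaps = kept.some((k) => s.start < k.end && k.start < s.end);
    if (!overlaps) kept.push(s);
  }
  return kept.sort((a, b) => a.start - b.start);
}
```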
@ -24,7 +24,7 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
5. Redacted text safe to send to AI
```
## Detection Patterns (13 Types)
## Detection Patterns
| Type | Replacement | Pattern notes |
|------|-------------|---------------|
@ -33,13 +33,13 @@ Before any text is sent to an AI provider, TFTSR scans it for personally identif
| `ApiKey` | `[ApiKey]` | `api_key=`, `apikey=`, `access_token=` + 16+ char value |
| `Password` | `[Password]` | `password=`, `passwd=`, `pwd=` + non-whitespace value |
| `Ssn` | `[SSN]` | `\b\d{3}-\d{2}-\d{4}\b` |
| `CreditCard` | `[CreditCard]` | Visa/MC/Amex Luhn-format numbers |
| `CreditCard` | `[CreditCard]` | Visa/MC/Amex/Discover/JCB/Diners patterns |
| `Email` | `[Email]` | RFC-compliant email addresses |
| `MacAddress` | `[MAC]` | `XX:XX:XX:XX:XX:XX` and `XX-XX-XX-XX-XX-XX` |
| `Ipv6` | `[IPv6]` | Full and compressed IPv6 addresses |
| `Ipv4` | `[IPv4]` | Standard dotted-quad notation |
| `PhoneNumber` | `[Phone]` | US and international phone formats |
| `Hostname` | _(patterns.rs)_ | Configurable hostname patterns |
| `Hostname` | `[Hostname]` | FQDN/hostname detection for internal names |
| `UrlCredentials` | _(covered by UrlWithCredentials)_ | |
## Overlap Resolution
@ -71,7 +71,7 @@ pub struct PiiSpan {
pub pii_type: PiiType,
pub start: usize, // byte offset in original text
pub end: usize,
pub original_value: String,
pub original: String,
pub replacement: String, // e.g., "[IPv4]"
}
```
@ -111,3 +111,4 @@ write_audit_event(
- Only the redacted text is sent to AI providers
- The SHA-256 hash in the audit log allows integrity verification
- If redaction is skipped (no PII detected), the audit log still records the send
- Stored `pii_spans.original_value` metadata is cleared after redaction is finalized

View File

@ -18,20 +18,25 @@ Production builds use SQLCipher:
- **Cipher:** AES-256-CBC
- **KDF:** PBKDF2-HMAC-SHA512, 256,000 iterations
- **HMAC:** HMAC-SHA512
- **Page size:** 4096 bytes
- **Page size:** 16384 bytes
- **Key source:** `TFTSR_DB_KEY` environment variable
Debug builds use plain SQLite (no encryption) for developer convenience.
> ⚠️ **Never** use the default key (`dev-key-change-in-prod`) in a production environment.
Release builds now fail startup if `TFTSR_DB_KEY` is missing or empty.
---
## API Key Storage (Stronghold)
## Credential Encryption
AI provider API keys are stored in `tauri-plugin-stronghold` — an encrypted vault backed by the [IOTA Stronghold](https://github.com/iotaledger/stronghold.rs) library.
Integration tokens are encrypted with AES-256-GCM before persistence:
- **Key source:** `TFTSR_ENCRYPTION_KEY` (required in release builds)
- **Key derivation:** SHA-256 hash of key material to a fixed 32-byte AES key
- **Nonce:** Cryptographically secure random nonce per encryption
The vault is initialized with a password-derived key using Argon2. API keys are never written to disk in plaintext or to the SQLite database.
Release builds fail secure operations if `TFTSR_ENCRYPTION_KEY` is unset or empty.
The Stronghold plugin remains enabled and now uses a per-installation salt derived from the app data directory path hash instead of a fixed static salt.
---
@ -46,6 +51,7 @@ log file → detect_pii() → user approves spans → apply_redactions() → AI
- Original text **never leaves the machine**
- Only the redacted version is transmitted
- The SHA-256 hash of the redacted text is recorded in the audit log for integrity verification
- `pii_spans.original_value` is cleared after redaction to avoid retaining raw detected secrets in storage
- See [PII Detection](PII-Detection) for the full list of detected patterns
---
@ -66,6 +72,14 @@ write_audit_event(
The audit log is stored in the encrypted SQLite database. It cannot be deleted through the UI.
### Tamper Evidence
`audit_log` entries now include:
- `prev_hash` — hash of the previous audit entry
- `entry_hash` — SHA-256 hash of current entry payload + `prev_hash`
This creates a hash chain and makes post-hoc modification detectable.
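A minimal sketch of such a chain (the genesis value and payload handling are assumptions, not the app's exact scheme):

```typescript
import { createHash } from "node:crypto";

const sha256hex = (s: string): string => createHash("sha256").update(s).digest("hex");

// First entry chains from a fixed genesis value (an assumption).
const GENESIS = "0".repeat(64);

interface AuditEntry {
  payload: string;
  prev_hash: string;
  entry_hash: string; // SHA-256(payload + prev_hash)
}

// Append an entry whose hash commits to the previous entry's hash.
function appendEntry(log: AuditEntry[], payload: string): AuditEntry[] {
  const prev = log.length ? log[log.length - 1].entry_hash : GENESIS;
  return [...log, { payload, prev_hash: prev, entry_hash: sha256hex(payload + prev) }];
}

// Walking the chain detects any post-hoc edit: changing one payload breaks
// that entry's hash and every hash after it.
function verifyChain(log: AuditEntry[]): boolean {
  let prev = GENESIS;
  for (const e of log) {
    if (e.prev_hash !== prev) return false;
    if (e.entry_hash !== sha256hex(e.payload + e.prev_hash)) return false;
    prev = e.entry_hash;
  }
  return true;
}
```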
**Audit entry fields:**
- `action` — what was done
- `entity_type` — type of record involved
@ -84,7 +98,7 @@ Defined in `src-tauri/capabilities/default.json`:
|--------|-------------------|
| `dialog` | `allow-open`, `allow-save` |
| `fs` | `read-text`, `write-text`, `read`, `write`, `mkdir` — scoped to app dir and temp |
| `shell` | `allow-execute` — for running system commands |
| `shell` | `allow-open` only |
| `http` | default — connect only to approved origins |
---
@ -109,7 +123,9 @@ HTTP is blocked by default. Only whitelisted HTTPS endpoints (and localhost for
## TLS
All outbound HTTP requests use `reqwest` with default TLS settings (TLS 1.2+ required). Certificate verification is enabled. No custom trust anchors are added.
All outbound HTTP requests use `reqwest` with certificate verification enabled and a request timeout configured for provider calls.
CI/CD currently uses internal `http://` endpoints for self-hosted Gitea release automation on a trusted LAN. Recommended hardening: migrate runners and API calls to HTTPS with internal certificates.
---
@ -120,3 +136,4 @@ All outbound HTTP requests use `reqwest` with default TLS settings (TLS 1.2+ req
- [ ] Does it store secrets? → Use Stronghold, not the SQLite DB
- [ ] Does it need filesystem access? → Scope the fs capability
- [ ] Does it need a new HTTP endpoint? → Add to CSP `connect-src`
- [ ] Does it add a new provider endpoint? → Avoid query-param secrets, use auth headers

View File

@ -4,7 +4,7 @@
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>TFTSR — IT Triage & RCA</title>
<title>Troubleshooting and RCA Assistant</title>
</head>
<body>
<div id="root"></div>

View File

@ -4,3 +4,8 @@
# error. The desktop binary links against rlib (static), so cdylib exports
# are unused at runtime.
rustflags = ["-C", "link-arg=-Wl,--exclude-all-symbols"]
[env]
# Use system OpenSSL instead of vendoring from source (which requires Perl modules
# unavailable on some environments and breaks clippy/check).
OPENSSL_NO_VENDOR = "1"

7
src-tauri/Cargo.lock generated
View File

@ -5706,6 +5706,7 @@ dependencies = [
"tokio-test",
"tracing",
"tracing-subscriber",
"urlencoding",
"uuid",
"warp",
]
@ -6344,6 +6345,12 @@ dependencies = [
"serde_derive",
]
[[package]]
name = "urlencoding"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "daf8dba3b7eb870caf1ddeed7bc9d2a049f3cfdfae7cb521b087cc33ae4c49da"
[[package]]
name = "urlpattern"
version = "0.3.0"

View File

@ -1,5 +1,5 @@
[package]
name = "tftsr"
name = "trcaa"
version = "0.1.0"
edition = "2021"
@ -42,6 +42,7 @@ aes-gcm = "0.10"
rand = "0.8"
lazy_static = "1.4"
warp = "0.3"
urlencoding = "2"
[dev-dependencies]
tokio-test = "0.4"

View File

@ -24,7 +24,7 @@
"fs:allow-temp-write-recursive",
"fs:scope-app-recursive",
"fs:scope-temp-recursive",
"shell:allow-execute",
"shell:allow-open",
"http:default"
]
}

View File

@ -1 +1 @@
{"default":{"identifier":"default","description":"Default capabilities for TFTSR — least-privilege","local":true,"windows":["main"],"permissions":["core:path:default","core:event:default","core:window:default","core:app:default","core:resources:default","core:menu:default","core:tray:default","dialog:allow-open","dialog:allow-save","fs:allow-read-text-file","fs:allow-write-text-file","fs:allow-read","fs:allow-write","fs:allow-mkdir","fs:allow-app-read-recursive","fs:allow-app-write-recursive","fs:allow-temp-read-recursive","fs:allow-temp-write-recursive","fs:scope-app-recursive","fs:scope-temp-recursive","shell:allow-execute","http:default"]}}
{"default":{"identifier":"default","description":"Default capabilities for TFTSR — least-privilege","local":true,"windows":["main"],"permissions":["core:path:default","core:event:default","core:window:default","core:app:default","core:resources:default","core:menu:default","core:tray:default","dialog:allow-open","dialog:allow-save","fs:allow-read-text-file","fs:allow-write-text-file","fs:allow-read","fs:allow-write","fs:allow-mkdir","fs:allow-app-read-recursive","fs:allow-app-write-recursive","fs:allow-temp-read-recursive","fs:allow-temp-write-recursive","fs:scope-app-recursive","fs:scope-temp-recursive","shell:allow-open","http:default"]}}

View File

@ -2324,24 +2324,6 @@
"Identifier": {
"description": "Permission identifier",
"oneOf": [
{
"description": "Allows reading the CLI matches\n#### This default permission set includes:\n\n- `allow-cli-matches`",
"type": "string",
"const": "cli:default",
"markdownDescription": "Allows reading the CLI matches\n#### This default permission set includes:\n\n- `allow-cli-matches`"
},
{
"description": "Enables the cli_matches command without any pre-configured scope.",
"type": "string",
"const": "cli:allow-cli-matches",
"markdownDescription": "Enables the cli_matches command without any pre-configured scope."
},
{
"description": "Denies the cli_matches command without any pre-configured scope.",
"type": "string",
"const": "cli:deny-cli-matches",
"markdownDescription": "Denies the cli_matches command without any pre-configured scope."
},
{
"description": "Default core plugins set.\n#### This default permission set includes:\n\n- `core:path:default`\n- `core:event:default`\n- `core:window:default`\n- `core:webview:default`\n- `core:app:default`\n- `core:image:default`\n- `core:resources:default`\n- `core:menu:default`\n- `core:tray:default`",
"type": "string",
@ -6373,60 +6355,6 @@
"type": "string",
"const": "stronghold:deny-save-store-record",
"markdownDescription": "Denies the save_store_record command without any pre-configured scope."
},
{
"description": "This permission set configures which kind of\nupdater functions are exposed to the frontend.\n\n#### Granted Permissions\n\nThe full workflow from checking for updates to installing them\nis enabled.\n\n\n#### This default permission set includes:\n\n- `allow-check`\n- `allow-download`\n- `allow-install`\n- `allow-download-and-install`",
"type": "string",
"const": "updater:default",
"markdownDescription": "This permission set configures which kind of\nupdater functions are exposed to the frontend.\n\n#### Granted Permissions\n\nThe full workflow from checking for updates to installing them\nis enabled.\n\n\n#### This default permission set includes:\n\n- `allow-check`\n- `allow-download`\n- `allow-install`\n- `allow-download-and-install`"
},
{
"description": "Enables the check command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-check",
"markdownDescription": "Enables the check command without any pre-configured scope."
},
{
"description": "Enables the download command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-download",
"markdownDescription": "Enables the download command without any pre-configured scope."
},
{
"description": "Enables the download_and_install command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-download-and-install",
"markdownDescription": "Enables the download_and_install command without any pre-configured scope."
},
{
"description": "Enables the install command without any pre-configured scope.",
"type": "string",
"const": "updater:allow-install",
"markdownDescription": "Enables the install command without any pre-configured scope."
},
{
"description": "Denies the check command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-check",
"markdownDescription": "Denies the check command without any pre-configured scope."
},
{
"description": "Denies the download command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-download",
"markdownDescription": "Denies the download command without any pre-configured scope."
},
{
"description": "Denies the download_and_install command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-download-and-install",
"markdownDescription": "Denies the download_and_install command without any pre-configured scope."
},
{
"description": "Denies the install command without any pre-configured scope.",
"type": "string",
"const": "updater:deny-install",
"markdownDescription": "Denies the install command without any pre-configured scope."
}
]
},

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -29,7 +30,9 @@ impl Provider for AnthropicProvider {
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let url = format!(
"{}/v1/messages",
config

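Each provider above now builds a fresh `reqwest::Client` with a 60-second timeout per request. That works, but a common alternative is constructing the client once and reusing it across calls. A minimal sketch of that pattern with `std::sync::OnceLock`, using a placeholder type since `reqwest` itself isn't exercised here:

```rust
use std::sync::OnceLock;

// Placeholder standing in for a configured reqwest::Client (not available in this sketch).
#[derive(Debug)]
struct HttpClient {
    timeout_secs: u64,
}

// Build the client once, on first use, and hand out the same instance afterwards,
// instead of paying the construction cost on every chat() call.
fn shared_client() -> &'static HttpClient {
    static CLIENT: OnceLock<HttpClient> = OnceLock::new();
    CLIENT.get_or_init(|| HttpClient { timeout_secs: 60 })
}

fn main() {
    // Repeated calls return the same instance.
    assert!(std::ptr::eq(shared_client(), shared_client()));
    println!("timeout = {}s", shared_client().timeout_secs);
}
```

The per-call builder in the diff is simpler and keeps each provider self-contained; the shared-client variant mainly pays off under high request volume.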
View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -30,10 +31,12 @@ impl Provider for GeminiProvider {
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let url = format!(
"https://generativelanguage.googleapis.com/v1beta/models/{}:generateContent?key={}",
config.model, config.api_key
"https://generativelanguage.googleapis.com/v1beta/models/{}:generateContent",
config.model
);
// Map OpenAI-style messages to Gemini format
@ -79,6 +82,7 @@ impl Provider for GeminiProvider {
let resp = client
.post(&url)
.header("Content-Type", "application/json")
.header("x-goog-api-key", &config.api_key)
.json(&body)
.send()
.await?;

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -31,7 +32,9 @@ impl Provider for MistralProvider {
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
// Mistral uses OpenAI-compatible format
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let base_url = if config.api_url.is_empty() {
"https://api.mistral.ai/v1".to_string()
} else {
@ -47,7 +50,10 @@ impl Provider for MistralProvider {
let resp = client
.post(&url)
.header("Authorization", format!("Bearer {}", config.api_key))
.header(
"Authorization",
format!("Bearer {api_key}", api_key = config.api_key),
)
.header("Content-Type", "application/json")
.json(&body)
.send()

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -31,7 +32,9 @@ impl Provider for OllamaProvider {
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
let base_url = if config.api_url.is_empty() {
"http://localhost:11434".to_string()
} else {

View File

@ -1,4 +1,5 @@
use async_trait::async_trait;
use std::time::Duration;
use crate::ai::provider::Provider;
use crate::ai::{ChatResponse, Message, ProviderInfo, TokenUsage};
@ -6,6 +7,10 @@ use crate::state::ProviderConfig;
pub struct OpenAiProvider;
fn is_custom_rest_format(api_format: Option<&str>) -> bool {
matches!(api_format, Some("custom_rest") | Some("msi_genai"))
}
#[async_trait]
impl Provider for OpenAiProvider {
fn name(&self) -> &str {
@ -29,18 +34,82 @@ impl Provider for OpenAiProvider {
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::new();
let url = format!("{}/chat/completions", config.api_url.trim_end_matches('/'));
// Check if using custom REST format
let api_format = config.api_format.as_deref().unwrap_or("openai");
let body = serde_json::json!({
// Backward compatibility: accept legacy msi_genai identifier
if is_custom_rest_format(Some(api_format)) {
self.chat_custom_rest(messages, config).await
} else {
self.chat_openai(messages, config).await
}
}
}
#[cfg(test)]
mod tests {
use super::is_custom_rest_format;
#[test]
fn custom_rest_format_is_recognized() {
assert!(is_custom_rest_format(Some("custom_rest")));
}
#[test]
fn legacy_msi_format_is_recognized_for_compatibility() {
assert!(is_custom_rest_format(Some("msi_genai")));
}
#[test]
fn openai_format_is_not_custom_rest() {
assert!(!is_custom_rest_format(Some("openai")));
assert!(!is_custom_rest_format(None));
}
}
impl OpenAiProvider {
/// OpenAI-compatible API format (default)
async fn chat_openai(
&self,
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
// Use custom endpoint path if provided, otherwise default to /chat/completions
let endpoint_path = config
.custom_endpoint_path
.as_deref()
.unwrap_or("/chat/completions");
let api_url = config.api_url.trim_end_matches('/');
let url = format!("{api_url}{endpoint_path}");
let mut body = serde_json::json!({
"model": config.model,
"messages": messages,
"max_tokens": 4096,
});
// Add max_tokens if provided, otherwise use default 4096
body["max_tokens"] = serde_json::Value::from(config.max_tokens.unwrap_or(4096));
// Add temperature if provided
if let Some(temp) = config.temperature {
body["temperature"] = serde_json::Value::from(temp);
}
// Use custom auth header and prefix if provided
let auth_header = config
.custom_auth_header
.as_deref()
.unwrap_or("Authorization");
let auth_prefix = config.custom_auth_prefix.as_deref().unwrap_or("Bearer ");
let auth_value = format!("{auth_prefix}{api_key}", api_key = config.api_key);
let resp = client
.post(&url)
.header("Authorization", format!("Bearer {}", config.api_key))
.header(auth_header, auth_value)
.header("Content-Type", "application/json")
.json(&body)
.send()
@ -72,4 +141,109 @@ impl Provider for OpenAiProvider {
usage,
})
}
/// Custom REST format (MSI GenAI payload contract)
async fn chat_custom_rest(
&self,
messages: Vec<Message>,
config: &ProviderConfig,
) -> anyhow::Result<ChatResponse> {
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(60))
.build()?;
// Use custom endpoint path, default to empty (API URL already includes /api/v2/chat)
let endpoint_path = config.custom_endpoint_path.as_deref().unwrap_or("");
let api_url = config.api_url.trim_end_matches('/');
let url = format!("{api_url}{endpoint_path}");
// Extract system message if present
let system_message = messages
.iter()
.find(|m| m.role == "system")
.map(|m| m.content.clone());
// Get last user message as prompt
let prompt = messages
.iter()
.rev()
.find(|m| m.role == "user")
.map(|m| m.content.clone())
.ok_or_else(|| anyhow::anyhow!("No user message found"))?;
// Build request body
let mut body = serde_json::json!({
"model": config.model,
"prompt": prompt,
});
// Add userId if provided (CORE ID email)
if let Some(user_id) = &config.user_id {
body["userId"] = serde_json::Value::String(user_id.clone());
}
// Add optional system message
if let Some(system) = system_message {
body["system"] = serde_json::Value::String(system);
}
// Add session ID if available (for conversation continuity)
if let Some(session_id) = &config.session_id {
body["sessionId"] = serde_json::Value::String(session_id.clone());
}
// Add modelConfig with temperature and max_tokens if provided
let mut model_config = serde_json::json!({});
if let Some(temp) = config.temperature {
model_config["temperature"] = serde_json::Value::from(temp);
}
if let Some(max_tokens) = config.max_tokens {
model_config["max_tokens"] = serde_json::Value::from(max_tokens);
}
if !model_config.is_null() && model_config.as_object().is_some_and(|obj| !obj.is_empty()) {
body["modelConfig"] = model_config;
}
// Use custom auth header and prefix (no prefix for this custom REST contract)
let auth_header = config
.custom_auth_header
.as_deref()
.unwrap_or("x-msi-genai-api-key");
let auth_prefix = config.custom_auth_prefix.as_deref().unwrap_or("");
let auth_value = format!("{auth_prefix}{api_key}", api_key = config.api_key);
let resp = client
.post(&url)
.header(auth_header, auth_value)
.header("Content-Type", "application/json")
.header("X-msi-genai-client", "troubleshooting-rca-assistant")
.json(&body)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await?;
anyhow::bail!("Custom REST API error {status}: {text}");
}
let json: serde_json::Value = resp.json().await?;
// Extract response content from "msg" field
let content = json["msg"]
.as_str()
.ok_or_else(|| anyhow::anyhow!("No 'msg' field in response"))?
.to_string();
// Note: sessionId from response should be stored back to config.session_id
// This would require making config mutable or returning it as part of ChatResponse
// For now, the caller can extract it from the response if needed
// TODO: Consider adding session_id to ChatResponse struct
Ok(ChatResponse {
content,
model: config.model.clone(),
usage: None, // This custom REST contract doesn't provide token usage in response
})
}
}

View File

@ -1,4 +1,20 @@
use crate::db::models::AuditEntry;
use sha2::{Digest, Sha256};
fn compute_entry_hash(entry: &AuditEntry, prev_hash: &str) -> String {
let payload = format!(
"{}|{}|{}|{}|{}|{}|{}|{}",
prev_hash,
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
);
format!("{:x}", Sha256::digest(payload.as_bytes()))
}
/// Write an audit event to the audit_log table.
pub fn write_audit_event(
@ -14,9 +30,16 @@ pub fn write_audit_event(
entity_id.to_string(),
details.to_string(),
);
let prev_hash: String = conn
.prepare(
"SELECT entry_hash FROM audit_log WHERE entry_hash <> '' ORDER BY timestamp DESC, id DESC LIMIT 1",
)?
.query_row([], |row| row.get(0))
.unwrap_or_default();
let entry_hash = compute_entry_hash(&entry, &prev_hash);
conn.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details, prev_hash, entry_hash) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)",
rusqlite::params![
entry.id,
entry.timestamp,
@ -25,6 +48,8 @@ pub fn write_audit_event(
entry.entity_id,
entry.user_id,
entry.details,
prev_hash,
entry_hash,
],
)?;
Ok(())
@ -44,7 +69,9 @@ mod tests {
entity_type TEXT NOT NULL DEFAULT '',
entity_id TEXT NOT NULL DEFAULT '',
user_id TEXT NOT NULL DEFAULT 'local',
details TEXT NOT NULL DEFAULT '{}'
details TEXT NOT NULL DEFAULT '{}',
prev_hash TEXT NOT NULL DEFAULT '',
entry_hash TEXT NOT NULL DEFAULT ''
);",
)
.unwrap();
@ -97,9 +124,9 @@ mod tests {
for i in 0..5 {
write_audit_event(
&conn,
&format!("action_{}", i),
&format!("action_{i}"),
"test",
&format!("id_{}", i),
&format!("id_{i}"),
"{}",
)
.unwrap();
@ -128,4 +155,26 @@ mod tests {
assert_eq!(ids.len(), 2);
assert_ne!(ids[0], ids[1]);
}
#[test]
fn test_write_audit_event_hash_chain_links_entries() {
let conn = setup_test_db();
write_audit_event(&conn, "first", "issue", "1", "{}").unwrap();
write_audit_event(&conn, "second", "issue", "2", "{}").unwrap();
let mut stmt = conn
.prepare("SELECT prev_hash, entry_hash FROM audit_log ORDER BY timestamp ASC, id ASC")
.unwrap();
let rows: Vec<(String, String)> = stmt
.query_map([], |row| Ok((row.get(0)?, row.get(1)?)))
.unwrap()
.collect::<Result<Vec<_>, _>>()
.unwrap();
assert_eq!(rows.len(), 2);
assert_eq!(rows[0].0, "");
assert!(!rows[0].1.is_empty());
assert_eq!(rows[1].0, rows[0].1);
assert!(!rows[1].1.is_empty());
}
}
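The `compute_entry_hash` chain above links each audit entry to its predecessor's hash, so tampering with any row invalidates every later `entry_hash`. A minimal sketch of the corresponding verification walk, using `DefaultHasher` as a stand-in for the real `Sha256` digest and a simplified one-field payload:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the Sha256 digest used by compute_entry_hash (sha2 is not used here).
fn toy_hash(payload: &str) -> String {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    format!("{:x}", h.finish())
}

// Recompute each entry's hash from its predecessor and compare against the stored value.
// `entries` holds (prev_hash, entry_hash, payload) tuples in insertion order.
fn verify_chain(entries: &[(String, String, String)]) -> bool {
    let mut expected_prev = String::new();
    for (prev, stored, payload) in entries {
        if *prev != expected_prev {
            return false; // chain broken: entry points at the wrong predecessor
        }
        let recomputed = toy_hash(&format!("{prev}|{payload}"));
        if recomputed != *stored {
            return false; // entry contents changed after being written
        }
        expected_prev = stored.clone();
    }
    true
}

fn main() {
    let e1_hash = toy_hash("|first");
    let e2_hash = toy_hash(&format!("{e1_hash}|second"));
    let chain = vec![
        (String::new(), e1_hash.clone(), "first".to_string()),
        (e1_hash, e2_hash, "second".to_string()),
    ];
    assert!(verify_chain(&chain));
}
```

This mirrors the invariants the new `test_write_audit_event_hash_chain_links_entries` test checks: the first entry has an empty `prev_hash`, and each later `prev_hash` equals the previous `entry_hash`.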

View File

@ -1,4 +1,5 @@
use tauri::State;
use tracing::warn;
use crate::ai::provider::create_provider;
use crate::ai::{AnalysisResult, ChatResponse, Message, ProviderInfo};
@ -12,22 +13,27 @@ pub async fn analyze_logs(
provider_config: ProviderConfig,
state: State<'_, AppState>,
) -> Result<AnalysisResult, String> {
// Load log file contents
// Load log file contents — only redacted files may be sent to an AI provider
let mut log_contents = String::new();
{
let db = state.db.lock().map_err(|e| e.to_string())?;
for file_id in &log_file_ids {
let mut stmt = db
.prepare("SELECT file_name, file_path FROM log_files WHERE id = ?1")
.prepare("SELECT file_name, file_path, redacted FROM log_files WHERE id = ?1")
.map_err(|e| e.to_string())?;
if let Ok((name, path)) = stmt.query_row([file_id], |row| {
Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
if let Ok((name, path, redacted)) = stmt.query_row([file_id], |row| {
Ok((
row.get::<_, String>(0)?,
row.get::<_, String>(1)?,
row.get::<_, i32>(2)? != 0,
))
}) {
let redacted_path = redacted_path_for(&name, &path, redacted)?;
log_contents.push_str(&format!("--- {name} ---\n"));
if let Ok(content) = std::fs::read_to_string(&path) {
if let Ok(content) = std::fs::read_to_string(&redacted_path) {
log_contents.push_str(&content);
} else {
log_contents.push_str("[Could not read file]\n");
log_contents.push_str("[Could not read redacted file]\n");
}
log_contents.push('\n');
}
@ -55,7 +61,10 @@ pub async fn analyze_logs(
let response = provider
.chat(messages, &provider_config)
.await
.map_err(|e| e.to_string())?;
.map_err(|e| {
warn!(error = %e, "ai analyze_logs provider request failed");
"AI analysis request failed".to_string()
})?;
let content = &response.content;
let summary = extract_section(content, "SUMMARY:").unwrap_or_else(|| {
@ -81,14 +90,14 @@ pub async fn analyze_logs(
serde_json::json!({ "log_file_ids": log_file_ids, "provider": provider_config.name })
.to_string(),
);
db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id, entry.timestamp, entry.action,
entry.entity_type, entry.entity_id, entry.user_id, entry.details
],
).map_err(|e| e.to_string())?;
crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
)
.map_err(|_| "Failed to write security audit entry".to_string())?;
}
Ok(AnalysisResult {
@ -99,6 +108,17 @@ pub async fn analyze_logs(
})
}
/// Returns the path to the `.redacted` file, or an error if the file has not been redacted.
fn redacted_path_for(name: &str, path: &str, redacted: bool) -> Result<String, String> {
if !redacted {
return Err(format!(
"Log file '{name}' has not been scanned and redacted. \
Run PII detection and apply redactions before sending to AI."
));
}
Ok(format!("{path}.redacted"))
}
fn extract_section(text: &str, header: &str) -> Option<String> {
let start = text.find(header)?;
let after = &text[start + header.len()..];
@ -207,7 +227,10 @@ pub async fn chat_message(
let response = provider
.chat(messages, &provider_config)
.await
.map_err(|e| e.to_string())?;
.map_err(|e| {
warn!(error = %e, "ai chat provider request failed");
"AI provider request failed".to_string()
})?;
// Save both user message and response to DB
{
@ -246,7 +269,7 @@ pub async fn chat_message(
"api_url": provider_config.api_url,
"user_message": user_msg.content,
"response_preview": if response.content.len() > 200 {
format!("{}...", &response.content[..200])
format!("{preview}...", preview = &response.content[..200])
} else {
response.content.clone()
},
@ -258,14 +281,15 @@ pub async fn chat_message(
issue_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id, entry.timestamp, entry.action,
entry.entity_type, entry.entity_id, entry.user_id, entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write ai_chat audit entry");
}
}
Ok(response)
@ -278,12 +302,17 @@ pub async fn test_provider_connection(
let provider = create_provider(&provider_config);
let messages = vec![Message {
role: "user".into(),
content: "Reply with exactly: TFTSR connection test successful.".into(),
content:
"Reply with exactly: Troubleshooting and RCA Assistant connection test successful."
.into(),
}];
provider
.chat(messages, &provider_config)
.await
.map_err(|e| e.to_string())
.map_err(|e| {
warn!(error = %e, "ai test_provider_connection failed");
"Provider connection test failed".to_string()
})
}
#[tauri::command]
@ -371,6 +400,19 @@ mod tests {
assert_eq!(list, vec!["Item one", "Item two"]);
}
#[test]
fn test_redacted_path_rejects_unredacted_file() {
let err = redacted_path_for("app.log", "/data/app.log", false).unwrap_err();
assert!(err.contains("app.log"));
assert!(err.contains("redacted"));
}
#[test]
fn test_redacted_path_returns_dotredacted_suffix() {
let path = redacted_path_for("app.log", "/data/app.log", true).unwrap();
assert_eq!(path, "/data/app.log.redacted");
}
#[test]
fn test_extract_list_missing_header() {
let text = "No findings here";

View File

@ -1,20 +1,43 @@
use sha2::{Digest, Sha256};
use std::path::{Path, PathBuf};
use tauri::State;
use tracing::warn;
use crate::db::models::{AuditEntry, LogFile, PiiSpanRecord};
use crate::pii::{self, PiiDetectionResult, PiiDetector, RedactedLogFile};
use crate::state::AppState;
const MAX_LOG_FILE_BYTES: u64 = 50 * 1024 * 1024;
fn validate_log_file_path(file_path: &str) -> Result<PathBuf, String> {
let path = Path::new(file_path);
let canonical = std::fs::canonicalize(path).map_err(|_| "Unable to access selected file")?;
let metadata = std::fs::metadata(&canonical).map_err(|_| "Unable to read file metadata")?;
if !metadata.is_file() {
return Err("Selected path is not a file".to_string());
}
if metadata.len() > MAX_LOG_FILE_BYTES {
return Err(format!(
"File exceeds maximum supported size ({} MB)",
MAX_LOG_FILE_BYTES / 1024 / 1024
));
}
Ok(canonical)
}
#[tauri::command]
pub async fn upload_log_file(
issue_id: String,
file_path: String,
state: State<'_, AppState>,
) -> Result<LogFile, String> {
let path = std::path::Path::new(&file_path);
let content = std::fs::read(path).map_err(|e| e.to_string())?;
let canonical_path = validate_log_file_path(&file_path)?;
let content = std::fs::read(&canonical_path).map_err(|_| "Failed to read selected log file")?;
let content_hash = format!("{:x}", Sha256::digest(&content));
let file_name = path
let file_name = canonical_path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("unknown")
@ -28,7 +51,8 @@ pub async fn upload_log_file(
"text/plain"
};
let log_file = LogFile::new(issue_id.clone(), file_name, file_path.clone(), file_size);
let canonical_file_path = canonical_path.to_string_lossy().to_string();
let log_file = LogFile::new(issue_id.clone(), file_name, canonical_file_path, file_size);
let log_file = LogFile {
content_hash: content_hash.clone(),
mime_type: mime_type.to_string(),
@ -51,7 +75,7 @@ pub async fn upload_log_file(
log_file.redacted as i32,
],
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to store uploaded log metadata".to_string())?;
// Audit
let entry = AuditEntry::new(
@ -60,19 +84,15 @@ pub async fn upload_log_file(
log_file.id.clone(),
serde_json::json!({ "issue_id": issue_id, "file_name": log_file.file_name }).to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write upload_log_file audit entry");
}
Ok(log_file)
}
@ -87,10 +107,11 @@ pub async fn detect_pii(
let db = state.db.lock().map_err(|e| e.to_string())?;
db.prepare("SELECT file_path FROM log_files WHERE id = ?1")
.and_then(|mut stmt| stmt.query_row([&log_file_id], |row| row.get(0)))
.map_err(|e| e.to_string())?
.map_err(|_| "Failed to load log file metadata".to_string())?
};
let content = std::fs::read_to_string(&file_path).map_err(|e| e.to_string())?;
let content =
std::fs::read_to_string(&file_path).map_err(|_| "Failed to read log file content")?;
let detector = PiiDetector::new();
let spans = detector.detect(&content);
@ -105,10 +126,10 @@ pub async fn detect_pii(
pii_type: span.pii_type.clone(),
start_offset: span.start as i64,
end_offset: span.end as i64,
original_value: span.original.clone(),
original_value: String::new(),
replacement: span.replacement.clone(),
};
let _ = db.execute(
if let Err(err) = db.execute(
"INSERT OR REPLACE INTO pii_spans (id, log_file_id, pii_type, start_offset, end_offset, original_value, replacement) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
@ -116,7 +137,9 @@ pub async fn detect_pii(
record.start_offset, record.end_offset,
record.original_value, record.replacement
],
);
) {
warn!(error = %err, span_id = %span.id, "failed to persist pii span");
}
}
}
@ -138,10 +161,11 @@ pub async fn apply_redactions(
let db = state.db.lock().map_err(|e| e.to_string())?;
db.prepare("SELECT file_path FROM log_files WHERE id = ?1")
.and_then(|mut stmt| stmt.query_row([&log_file_id], |row| row.get(0)))
.map_err(|e| e.to_string())?
.map_err(|_| "Failed to load log file metadata".to_string())?
};
let content = std::fs::read_to_string(&file_path).map_err(|e| e.to_string())?;
let content =
std::fs::read_to_string(&file_path).map_err(|_| "Failed to read log file content")?;
// Load PII spans from DB, filtering to only approved ones
let spans: Vec<pii::PiiSpan> = {
@ -188,7 +212,8 @@ pub async fn apply_redactions(
// Save redacted file alongside original
let redacted_path = format!("{file_path}.redacted");
std::fs::write(&redacted_path, &redacted_text).map_err(|e| e.to_string())?;
std::fs::write(&redacted_path, &redacted_text)
.map_err(|_| "Failed to write redacted output file".to_string())?;
// Mark the log file as redacted in DB
{
@ -197,7 +222,12 @@ pub async fn apply_redactions(
"UPDATE log_files SET redacted = 1 WHERE id = ?1",
[&log_file_id],
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to mark file as redacted".to_string())?;
db.execute(
"UPDATE pii_spans SET original_value = '' WHERE log_file_id = ?1",
[&log_file_id],
)
.map_err(|_| "Failed to finalize redaction metadata".to_string())?;
}
Ok(RedactedLogFile {
@ -206,3 +236,25 @@ pub async fn apply_redactions(
data_hash,
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_validate_log_file_path_rejects_non_file() {
let dir = std::env::temp_dir();
let result = validate_log_file_path(dir.to_string_lossy().as_ref());
assert!(result.is_err());
}
#[test]
fn test_validate_log_file_path_accepts_small_file() {
let file_path =
std::env::temp_dir().join(format!("tftsr-analysis-test-{}.log", uuid::Uuid::now_v7()));
std::fs::write(&file_path, "hello").unwrap();
let result = validate_log_file_path(file_path.to_string_lossy().as_ref());
assert!(result.is_ok());
let _ = std::fs::remove_file(file_path);
}
}
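`apply_redactions` above substitutes each approved span's text with its replacement before writing the `.redacted` file. A minimal sketch of that substitution, assuming non-overlapping byte spans and an illustrative `Span` type (not the real `PiiSpan`):

```rust
// Illustrative span type mirroring the offset fields used here (not the real PiiSpan).
struct Span {
    start: usize,
    end: usize,
    replacement: String,
}

// Apply spans from the end of the file backwards, so earlier byte offsets stay valid
// even though each replacement changes the string's length.
fn redact(content: &str, mut spans: Vec<Span>) -> String {
    spans.sort_by(|a, b| b.start.cmp(&a.start));
    let mut out = content.to_string();
    for span in spans {
        out.replace_range(span.start..span.end, &span.replacement);
    }
    out
}

fn main() {
    let log = "user=alice@example.com ip=10.0.0.1";
    let spans = vec![
        Span { start: 5, end: 22, replacement: "[EMAIL]".into() },
        Span { start: 26, end: 34, replacement: "[IP]".into() },
    ];
    assert_eq!(redact(log, spans), "user=[EMAIL] ip=[IP]");
}
```

Note that `String::replace_range` panics if an offset falls inside a multi-byte character, so detected spans must land on UTF-8 boundaries.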

View File

@ -295,19 +295,31 @@ pub async fn list_issues(
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = vec![];
if let Some(ref status) = filter.status {
sql.push_str(&format!(" AND i.status = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.status = ?{index}",
index = params.len() + 1
));
params.push(Box::new(status.clone()));
}
if let Some(ref severity) = filter.severity {
sql.push_str(&format!(" AND i.severity = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.severity = ?{index}",
index = params.len() + 1
));
params.push(Box::new(severity.clone()));
}
if let Some(ref category) = filter.category {
sql.push_str(&format!(" AND i.category = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.category = ?{index}",
index = params.len() + 1
));
params.push(Box::new(category.clone()));
}
if let Some(ref domain) = filter.domain {
sql.push_str(&format!(" AND i.category = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND i.category = ?{index}",
index = params.len() + 1
));
params.push(Box::new(domain.clone()));
}
if let Some(ref search) = filter.search {
@ -321,9 +333,9 @@ pub async fn list_issues(
sql.push_str(" ORDER BY i.updated_at DESC");
sql.push_str(&format!(
" LIMIT ?{} OFFSET ?{}",
params.len() + 1,
params.len() + 2
" LIMIT ?{limit_index} OFFSET ?{offset_index}",
limit_index = params.len() + 1,
offset_index = params.len() + 2
));
params.push(Box::new(limit));
params.push(Box::new(offset));
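The filter branches above all follow the same pattern: append ` AND column = ?N` where N is the next 1-based parameter index, then push the matching value. Sketched in isolation, with owned strings standing in for the boxed `ToSql` params and hypothetical filter values:

```rust
// Build a WHERE clause with 1-based positional placeholders (?1, ?2, ...),
// keeping the SQL text and the parameter list in lockstep.
fn build_filter_sql(filters: &[(&str, &str)]) -> (String, Vec<String>) {
    let mut sql = String::from("SELECT * FROM issues WHERE 1=1");
    let mut params: Vec<String> = Vec::new();
    for (column, value) in filters {
        sql.push_str(&format!(" AND {column} = ?{index}", index = params.len() + 1));
        params.push((*value).to_string());
    }
    (sql, params)
}

fn main() {
    let (sql, params) = build_filter_sql(&[("status", "open"), ("severity", "high")]);
    assert_eq!(
        sql,
        "SELECT * FROM issues WHERE 1=1 AND status = ?1 AND severity = ?2"
    );
    assert_eq!(params, vec!["open".to_string(), "high".to_string()]);
}
```

Deriving the index from `params.len() + 1` at each step is what keeps the `LIMIT ?N OFFSET ?N` placeholders at the end correct regardless of which filters were applied.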
@ -476,20 +488,14 @@ pub async fn add_timeline_event(
issue_id.clone(),
serde_json::json!({ "description": description }).to_string(),
);
db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
)
.map_err(|e| e.to_string())?;
.map_err(|_| "Failed to write security audit entry".to_string())?;
// Update issue timestamp
let now = chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string();

View File

@ -1,4 +1,5 @@
use tauri::State;
use tracing::warn;
use crate::db::models::AuditEntry;
use crate::docs::{exporter, generate_postmortem_markdown, generate_rca_markdown};
@ -34,7 +35,7 @@ pub async fn generate_rca(
id: doc_id.clone(),
issue_id: issue_id.clone(),
doc_type: "rca".to_string(),
title: format!("RCA: {}", issue_detail.issue.title),
title: format!("RCA: {title}", title = issue_detail.issue.title),
content_md: content_md.clone(),
created_at: now.clone(),
updated_at: now,
@ -49,7 +50,7 @@ pub async fn generate_rca(
"doc_title": document.title,
"content_length": content_md.len(),
"content_preview": if content_md.len() > 300 {
format!("{}...", &content_md[..300])
format!("{preview}...", preview = &content_md[..300])
} else {
content_md.clone()
},
@ -60,19 +61,15 @@ pub async fn generate_rca(
doc_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write generate_rca audit entry");
}
Ok(document)
}
@ -93,7 +90,7 @@ pub async fn generate_postmortem(
id: doc_id.clone(),
issue_id: issue_id.clone(),
doc_type: "postmortem".to_string(),
title: format!("Post-Mortem: {}", issue_detail.issue.title),
title: format!("Post-Mortem: {title}", title = issue_detail.issue.title),
content_md: content_md.clone(),
created_at: now.clone(),
updated_at: now,
@ -108,7 +105,7 @@ pub async fn generate_postmortem(
"doc_title": document.title,
"content_length": content_md.len(),
"content_preview": if content_md.len() > 300 {
format!("{}...", &content_md[..300])
format!("{preview}...", preview = &content_md[..300])
} else {
content_md.clone()
},
@ -119,19 +116,15 @@ pub async fn generate_postmortem(
doc_id,
audit_details.to_string(),
);
let _ = db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details) \
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
entry.id,
entry.timestamp,
entry.action,
entry.entity_type,
entry.entity_id,
entry.user_id,
entry.details
],
);
if let Err(err) = crate::audit::log::write_audit_event(
&db,
&entry.action,
&entry.entity_type,
&entry.entity_id,
&entry.details,
) {
warn!(error = %err, "failed to write generate_postmortem audit entry");
}
Ok(document)
}

View File

@ -1,8 +1,10 @@
use crate::integrations::{ConnectionResult, PublishResult, TicketResult};
use crate::state::AppState;
use rusqlite::OptionalExtension;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tauri::State;
use tauri::{Manager, State};
use tokio::sync::oneshot;
// Global OAuth state storage (verifier + service per state key)
@ -92,7 +94,7 @@ pub async fn initiate_oauth(
let (mut callback_rx, shutdown_tx) =
crate::integrations::callback_server::start_callback_server(8765)
.await
.map_err(|e| format!("Failed to start callback server: {}", e))?;
.map_err(|e| format!("Failed to start callback server: {e}"))?;
// Store shutdown channel
{
@ -104,12 +106,14 @@ pub async fn initiate_oauth(
let db = app_state.db.clone();
let settings = app_state.settings.clone();
let app_data_dir = app_state.app_data_dir.clone();
let integration_webviews = app_state.integration_webviews.clone();
tokio::spawn(async move {
let app_state_for_callback = AppState {
db,
settings,
app_data_dir,
integration_webviews,
};
while let Some(callback) = callback_rx.recv().await {
tracing::info!("Received OAuth callback for state: {}", callback.state);
@ -119,7 +123,7 @@ pub async fn initiate_oauth(
let mut oauth_state = match OAUTH_STATE.lock() {
Ok(state) => state,
Err(e) => {
tracing::error!("Failed to lock OAuth state: {}", e);
tracing::error!("Failed to lock OAuth state: {e}");
continue;
}
};
@ -144,7 +148,7 @@ pub async fn initiate_oauth(
match result {
Ok(_) => tracing::info!("OAuth callback handled successfully"),
Err(e) => tracing::error!("OAuth callback failed: {}", e),
Err(e) => tracing::error!("OAuth callback failed: {e}"),
}
}
@ -162,7 +166,7 @@ pub async fn initiate_oauth(
{
let mut oauth_state = OAUTH_STATE
.lock()
.map_err(|e| format!("Failed to lock OAuth state: {}", e))?;
.map_err(|e| format!("Failed to lock OAuth state: {e}"))?;
oauth_state.insert(
state_key.clone(),
(service.clone(), pkce.code_verifier.clone()),
@ -189,7 +193,7 @@ pub async fn initiate_oauth(
// ServiceNow uses basic auth, not OAuth2
return Err("ServiceNow uses basic authentication, not OAuth2".to_string());
}
_ => return Err(format!("Unknown service: {}", service)),
_ => return Err(format!("Unknown service: {service}")),
};
let auth_url = crate::integrations::auth::build_auth_url(
@ -227,7 +231,7 @@ async fn handle_oauth_callback_internal(
.unwrap_or_else(|_| "ado-client-id-placeholder".to_string()),
"http://localhost:8765/callback",
),
_ => return Err(format!("Unknown service: {}", service)),
_ => return Err(format!("Unknown service: {service}")),
};
// Exchange authorization code for access token
@ -261,7 +265,7 @@ async fn handle_oauth_callback_internal(
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {}", e))?;
.map_err(|e| format!("Failed to lock database: {e}"))?;
db.execute(
"INSERT OR REPLACE INTO credentials (id, service, token_hash, encrypted_token, created_at, expires_at)
@ -275,7 +279,7 @@ async fn handle_oauth_callback_internal(
expires_at,
],
)
.map_err(|e| format!("Failed to store credentials: {}", e))?;
.map_err(|e| format!("Failed to store credentials: {e}"))?;
// Log audit event
let audit_details = serde_json::json!({
@ -284,20 +288,14 @@ async fn handle_oauth_callback_internal(
"expires_at": expires_at,
});
db.execute(
"INSERT INTO audit_log (id, timestamp, action, entity_type, entity_id, user_id, details)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
rusqlite::params![
uuid::Uuid::now_v7().to_string(),
chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string(),
"oauth_callback_success",
"credential",
service,
"local",
audit_details.to_string(),
],
crate::audit::log::write_audit_event(
&db,
"oauth_callback_success",
"credential",
&service,
&audit_details.to_string(),
)
.map_err(|e| format!("Failed to log audit event: {}", e))?;
.map_err(|e| format!("Failed to log audit event: {e}"))?;
Ok(())
}
@ -315,7 +313,7 @@ pub async fn handle_oauth_callback(
let verifier = {
let mut oauth_state = OAUTH_STATE
.lock()
.map_err(|e| format!("Failed to lock OAuth state: {}", e))?;
.map_err(|e| format!("Failed to lock OAuth state: {e}"))?;
oauth_state
.remove(&state_key)
.map(|(_svc, ver)| ver)
@ -406,4 +404,490 @@ mod tests {
assert_eq!(deserialized.auth_url, response.auth_url);
assert_eq!(deserialized.state, response.state);
}
#[test]
fn test_integration_config_serialization() {
let config = IntegrationConfig {
service: "confluence".to_string(),
base_url: "https://example.atlassian.net".to_string(),
username: Some("user@example.com".to_string()),
project_name: None,
space_key: Some("DEV".to_string()),
};
let json = serde_json::to_string(&config).unwrap();
assert!(json.contains("confluence"));
assert!(json.contains("https://example.atlassian.net"));
assert!(json.contains("user@example.com"));
assert!(json.contains("DEV"));
let deserialized: IntegrationConfig = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.service, config.service);
assert_eq!(deserialized.base_url, config.base_url);
assert_eq!(deserialized.username, config.username);
assert_eq!(deserialized.space_key, config.space_key);
}
#[test]
fn test_webview_tracking() {
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
let webview_tracking: Arc<Mutex<HashMap<String, String>>> =
Arc::new(Mutex::new(HashMap::new()));
// Add webview
{
let mut tracking = webview_tracking.lock().unwrap();
tracking.insert("confluence".to_string(), "confluence-auth".to_string());
}
// Verify exists
{
let tracking = webview_tracking.lock().unwrap();
assert_eq!(
tracking.get("confluence"),
Some(&"confluence-auth".to_string())
);
}
// Remove webview
{
let mut tracking = webview_tracking.lock().unwrap();
tracking.remove("confluence");
}
// Verify removed
{
let tracking = webview_tracking.lock().unwrap();
assert!(!tracking.contains_key("confluence"));
}
}
#[test]
fn test_token_auth_request_serialization() {
let request = TokenAuthRequest {
service: "azuredevops".to_string(),
token: "secret_token_123".to_string(),
token_type: "Bearer".to_string(),
base_url: "https://dev.azure.com/org".to_string(),
};
let json = serde_json::to_string(&request).unwrap();
let deserialized: TokenAuthRequest = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.service, request.service);
assert_eq!(deserialized.token, request.token);
assert_eq!(deserialized.token_type, request.token_type);
assert_eq!(deserialized.base_url, request.base_url);
}
}
// ─── Webview-Based Authentication (Option C) ────────────────────────────────
#[derive(Debug, Serialize, Deserialize)]
pub struct WebviewAuthRequest {
pub service: String,
pub base_url: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct WebviewAuthResponse {
pub success: bool,
pub message: String,
pub webview_id: String,
}
/// Open persistent browser window for user to log in.
/// Window stays open for browsing and fresh cookie extraction.
/// User can close it manually when no longer needed.
#[tauri::command]
pub async fn authenticate_with_webview(
service: String,
base_url: String,
app_handle: tauri::AppHandle,
app_state: State<'_, AppState>,
) -> Result<WebviewAuthResponse, String> {
let webview_id = format!("{service}-auth");
// Check if window already exists
if let Some(existing_label) = app_state
.integration_webviews
.lock()
.map_err(|e| format!("Failed to lock webviews: {e}"))?
.get(&service)
{
if app_handle.get_webview_window(existing_label).is_some() {
return Ok(WebviewAuthResponse {
success: true,
message: format!(
"{service} browser window is already open. Switch to it to log in."
),
webview_id: existing_label.clone(),
});
}
}
// Open persistent browser window
let _credentials = crate::integrations::webview_auth::authenticate_with_webview(
app_handle, &service, &base_url,
)
.await?;
// Store window reference
app_state
.integration_webviews
.lock()
.map_err(|e| format!("Failed to lock webviews: {e}"))?
.insert(service.clone(), webview_id.clone());
Ok(WebviewAuthResponse {
success: true,
message: format!(
"{service} browser window opened. This window will stay open - use it to browse and authenticate. Cookies will be extracted automatically for API calls."
),
webview_id,
})
}
/// Extract cookies from webview after user completes login.
/// User should call this after they've successfully logged in.
#[tauri::command]
pub async fn extract_cookies_from_webview(
service: String,
webview_id: String,
app_handle: tauri::AppHandle,
app_state: State<'_, AppState>,
) -> Result<ConnectionResult, String> {
// Get the webview window
let webview_window = app_handle
.get_webview_window(&webview_id)
.ok_or_else(|| "Webview window not found".to_string())?;
// Extract cookies using IPC mechanism (more reliable than platform-specific APIs)
let cookies =
crate::integrations::webview_auth::extract_cookies_via_ipc(&webview_window, &app_handle)
.await?;
if cookies.is_empty() {
return Err("No cookies found. Make sure you completed the login.".to_string());
}
// Encrypt and store cookies in database
let cookies_json =
serde_json::to_string(&cookies).map_err(|e| format!("Failed to serialize cookies: {e}"))?;
let encrypted_cookies = crate::integrations::auth::encrypt_token(&cookies_json)?;
let token_hash = {
use sha2::{Digest, Sha256};
let mut hasher = Sha256::new();
hasher.update(cookies_json.as_bytes());
format!("{:x}", hasher.finalize())
};
// Store in database
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {e}"))?;
db.execute(
"INSERT OR REPLACE INTO credentials (id, service, token_hash, encrypted_token, created_at, expires_at)
VALUES (?1, ?2, ?3, ?4, ?5, ?6)",
rusqlite::params![
uuid::Uuid::now_v7().to_string(),
service,
token_hash,
encrypted_cookies,
chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string(),
None::<String>, // Cookies don't have explicit expiry
],
)
.map_err(|e| format!("Failed to store cookies: {e}"))?;
// Close the webview window
if let Some(webview) = app_handle.get_webview_window(&webview_id) {
webview
.close()
.map_err(|e| format!("Failed to close webview: {e}"))?;
}
Ok(ConnectionResult {
success: true,
message: format!("{service} authentication saved successfully"),
})
}
// ─── Manual Token Authentication (Token Mode) ───────────────────────────────
#[derive(Debug, Serialize, Deserialize)]
pub struct TokenAuthRequest {
pub service: String,
pub token: String,
pub token_type: String, // "Bearer", "Basic", "api_token"
pub base_url: String,
}
/// Store a manually provided token (API key, PAT, etc.)
/// This is the fallback authentication method when OAuth2 and webview don't work.
#[tauri::command]
pub async fn save_manual_token(
request: TokenAuthRequest,
app_state: State<'_, AppState>,
) -> Result<ConnectionResult, String> {
// Validate token by testing connection
let test_result = match request.service.as_str() {
"confluence" => {
let config = crate::integrations::confluence::ConfluenceConfig {
base_url: request.base_url.clone(),
access_token: request.token.clone(),
};
crate::integrations::confluence::test_connection(&config).await
}
"azuredevops" => {
let config = crate::integrations::azuredevops::AzureDevOpsConfig {
organization_url: request.base_url.clone(),
access_token: request.token.clone(),
project: "".to_string(), // Project not needed for connection test
};
crate::integrations::azuredevops::test_connection(&config).await
}
"servicenow" => {
// ServiceNow uses basic auth, token is base64(username:password)
let config = crate::integrations::servicenow::ServiceNowConfig {
instance_url: request.base_url.clone(),
username: "".to_string(), // Encoded in token
password: request.token.clone(),
};
crate::integrations::servicenow::test_connection(&config).await
}
_ => {
return Err(format!(
"Unknown service: {service}",
service = request.service
))
}
};
// If the connection test errors or reports failure, don't save the token
let result = test_result?;
if !result.success {
return Ok(ConnectionResult {
success: false,
message: format!(
"Token validation failed: {}. Token not saved.",
result.message
),
});
}
// Encrypt and store token
let encrypted_token = crate::integrations::auth::encrypt_token(&request.token)?;
let token_hash = {
use sha2::{Digest, Sha256};
let mut hasher = Sha256::new();
hasher.update(request.token.as_bytes());
format!("{:x}", hasher.finalize())
};
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {e}"))?;
db.execute(
"INSERT OR REPLACE INTO credentials (id, service, token_hash, encrypted_token, created_at, expires_at)
VALUES (?1, ?2, ?3, ?4, ?5, ?6)",
rusqlite::params![
uuid::Uuid::now_v7().to_string(),
request.service,
token_hash,
encrypted_token,
chrono::Utc::now().format("%Y-%m-%d %H:%M:%S").to_string(),
None::<String>,
],
)
.map_err(|e| format!("Failed to store token: {e}"))?;
// Log audit event
crate::audit::log::write_audit_event(
&db,
"manual_token_saved",
"credential",
&request.service,
&serde_json::json!({
"token_type": request.token_type,
"token_hash": token_hash,
})
.to_string(),
)
.map_err(|e| format!("Failed to log audit event: {e}"))?;
Ok(ConnectionResult {
success: true,
message: format!(
"{service} token saved and validated successfully",
service = request.service
),
})
}
// ============================================================================
// Fresh Cookie Extraction (called before each API request)
// ============================================================================
/// Get fresh cookies from an open webview window for immediate use.
/// This is called before each integration API call to handle token rotation.
/// Returns None if window is closed or cookies unavailable.
pub async fn get_fresh_cookies_from_webview(
service: &str,
app_handle: &tauri::AppHandle,
app_state: &State<'_, AppState>,
) -> Result<Option<Vec<crate::integrations::webview_auth::Cookie>>, String> {
// Check if webview exists for this service
let webview_label = {
let webviews = app_state
.integration_webviews
.lock()
.map_err(|e| format!("Failed to lock webviews: {e}"))?;
match webviews.get(service) {
Some(label) => label.clone(),
None => return Ok(None), // No webview open for this service
}
};
// Get window handle
let webview_window = match app_handle.get_webview_window(&webview_label) {
Some(window) => window,
None => {
// Window was closed, remove from tracking
app_state
.integration_webviews
.lock()
.map_err(|e| format!("Failed to lock webviews: {e}"))?
.remove(service);
return Ok(None);
}
};
// Extract current cookies
match crate::integrations::webview_auth::extract_cookies_via_ipc(&webview_window, app_handle)
.await
{
Ok(cookies) if !cookies.is_empty() => Ok(Some(cookies)),
Ok(_) => Ok(None), // No cookies available
Err(e) => {
tracing::warn!("Failed to extract cookies from {}: {}", service, e);
Ok(None)
}
}
}
// ============================================================================
// Integration Configuration Persistence
// ============================================================================
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IntegrationConfig {
pub service: String,
pub base_url: String,
pub username: Option<String>,
pub project_name: Option<String>,
pub space_key: Option<String>,
}
/// Save or update integration configuration (base URL, username, project, etc.)
#[tauri::command]
pub async fn save_integration_config(
config: IntegrationConfig,
app_state: State<'_, AppState>,
) -> Result<(), String> {
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {e}"))?;
db.execute(
"INSERT OR REPLACE INTO integration_config
(id, service, base_url, username, project_name, space_key, updated_at)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, datetime('now'))",
rusqlite::params![
uuid::Uuid::now_v7().to_string(),
config.service,
config.base_url,
config.username,
config.project_name,
config.space_key,
],
)
.map_err(|e| format!("Failed to save integration config: {e}"))?;
Ok(())
}
/// Get integration configuration for a specific service
#[tauri::command]
pub async fn get_integration_config(
service: String,
app_state: State<'_, AppState>,
) -> Result<Option<IntegrationConfig>, String> {
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {e}"))?;
let mut stmt = db
.prepare("SELECT service, base_url, username, project_name, space_key FROM integration_config WHERE service = ?1")
.map_err(|e| format!("Failed to prepare query: {e}"))?;
let config = stmt
.query_row([&service], |row| {
Ok(IntegrationConfig {
service: row.get(0)?,
base_url: row.get(1)?,
username: row.get(2)?,
project_name: row.get(3)?,
space_key: row.get(4)?,
})
})
.optional()
.map_err(|e| format!("Failed to query integration config: {e}"))?;
Ok(config)
}
/// Get all integration configurations
#[tauri::command]
pub async fn get_all_integration_configs(
app_state: State<'_, AppState>,
) -> Result<Vec<IntegrationConfig>, String> {
let db = app_state
.db
.lock()
.map_err(|e| format!("Failed to lock database: {e}"))?;
let mut stmt = db
.prepare(
"SELECT service, base_url, username, project_name, space_key FROM integration_config",
)
.map_err(|e| format!("Failed to prepare query: {e}"))?;
let configs = stmt
.query_map([], |row| {
Ok(IntegrationConfig {
service: row.get(0)?,
base_url: row.get(1)?,
username: row.get(2)?,
project_name: row.get(3)?,
space_key: row.get(4)?,
})
})
.map_err(|e| format!("Failed to query integration configs: {e}"))?
.collect::<Result<Vec<_>, _>>()
.map_err(|e| format!("Failed to collect integration configs: {e}"))?;
Ok(configs)
}

View File

@ -98,20 +98,26 @@ pub async fn get_audit_log(
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = vec![];
if let Some(ref action) = filter.action {
sql.push_str(&format!(" AND action = ?{}", params.len() + 1));
sql.push_str(&format!(" AND action = ?{index}", index = params.len() + 1));
params.push(Box::new(action.clone()));
}
if let Some(ref entity_type) = filter.entity_type {
sql.push_str(&format!(" AND entity_type = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND entity_type = ?{index}",
index = params.len() + 1
));
params.push(Box::new(entity_type.clone()));
}
if let Some(ref entity_id) = filter.entity_id {
sql.push_str(&format!(" AND entity_id = ?{}", params.len() + 1));
sql.push_str(&format!(
" AND entity_id = ?{index}",
index = params.len() + 1
));
params.push(Box::new(entity_id.clone()));
}
sql.push_str(" ORDER BY timestamp DESC");
sql.push_str(&format!(" LIMIT ?{}", params.len() + 1));
sql.push_str(&format!(" LIMIT ?{index}", index = params.len() + 1));
params.push(Box::new(limit));
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
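The filter building above keeps each `?N` placeholder index in sync with `params.len() + 1` so every appended clause binds the parameter pushed right after it. The same pattern, as a self-contained sketch (the `Filter` struct here is a hypothetical stand-in and no rusqlite binding is performed):

```rust
// Positional-placeholder pattern from get_audit_log: each optional
// filter appends " AND col = ?N" where N is the 1-based index of the
// parameter it is about to push.
struct Filter {
    action: Option<String>,
    entity_type: Option<String>,
}

fn build_query(filter: &Filter, limit: i64) -> (String, Vec<String>) {
    let mut sql = String::from("SELECT * FROM audit_log WHERE 1=1");
    let mut params: Vec<String> = vec![];
    if let Some(ref action) = filter.action {
        sql.push_str(&format!(" AND action = ?{index}", index = params.len() + 1));
        params.push(action.clone());
    }
    if let Some(ref entity_type) = filter.entity_type {
        sql.push_str(&format!(" AND entity_type = ?{index}", index = params.len() + 1));
        params.push(entity_type.clone());
    }
    sql.push_str(&format!(" ORDER BY timestamp DESC LIMIT ?{index}", index = params.len() + 1));
    params.push(limit.to_string());
    (sql, params)
}

fn main() {
    let f = Filter { action: Some("oauth_callback_success".into()), entity_type: None };
    let (sql, params) = build_query(&f, 50);
    // With one filter present, LIMIT lands at ?2 and params has 2 entries.
    assert!(sql.ends_with("LIMIT ?2"));
    assert_eq!(params.len(), 2);
}
```

Because indices are derived from the params vector itself, clauses can be added or skipped in any combination without renumbering.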

View File

@ -1,6 +1,62 @@
use rusqlite::Connection;
use std::path::Path;
fn generate_key() -> String {
use rand::RngCore;
let mut bytes = [0u8; 32];
rand::rngs::OsRng.fill_bytes(&mut bytes);
hex::encode(bytes)
}
#[cfg(unix)]
fn write_key_file(path: &Path, key: &str) -> anyhow::Result<()> {
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
let mut f = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o600)
.open(path)?;
f.write_all(key.as_bytes())?;
Ok(())
}
#[cfg(not(unix))]
fn write_key_file(path: &Path, key: &str) -> anyhow::Result<()> {
std::fs::write(path, key)?;
Ok(())
}
fn get_db_key(data_dir: &Path) -> anyhow::Result<String> {
if let Ok(key) = std::env::var("TFTSR_DB_KEY") {
if !key.trim().is_empty() {
return Ok(key);
}
}
if cfg!(debug_assertions) {
return Ok("dev-key-change-in-prod".to_string());
}
// Release: load or auto-generate a per-installation key stored in the
// app data directory. This lets the app work out of the box without
// requiring users to set an environment variable.
let key_path = data_dir.join(".dbkey");
if key_path.exists() {
let key = std::fs::read_to_string(&key_path)?;
let key = key.trim().to_string();
if !key.is_empty() {
return Ok(key);
}
}
let key = generate_key();
std::fs::create_dir_all(data_dir)?;
write_key_file(&key_path, &key)?;
Ok(key)
}
pub fn open_encrypted_db(path: &Path, key: &str) -> anyhow::Result<Connection> {
let conn = Connection::open(path)?;
// ALL cipher settings MUST be set before the first database access.
@ -29,9 +85,7 @@ pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
std::fs::create_dir_all(data_dir)?;
let db_path = data_dir.join("tftsr.db");
// In dev/test mode use unencrypted DB; in production use encryption
let key =
std::env::var("TFTSR_DB_KEY").unwrap_or_else(|_| "dev-key-change-in-prod".to_string());
let key = get_db_key(data_dir)?;
let conn = if cfg!(debug_assertions) {
open_dev_db(&db_path)?
@ -42,3 +96,32 @@ pub fn init_db(data_dir: &Path) -> anyhow::Result<Connection> {
crate::db::migrations::run_migrations(&conn)?;
Ok(conn)
}
#[cfg(test)]
mod tests {
use super::*;
fn temp_dir(name: &str) -> std::path::PathBuf {
let dir = std::env::temp_dir().join(format!("tftsr-test-{}", name));
std::fs::create_dir_all(&dir).unwrap();
dir
}
#[test]
fn test_get_db_key_uses_env_var_when_present() {
let dir = temp_dir("env-var");
std::env::set_var("TFTSR_DB_KEY", "test-db-key");
let key = get_db_key(&dir).unwrap();
assert_eq!(key, "test-db-key");
std::env::remove_var("TFTSR_DB_KEY");
}
#[test]
fn test_get_db_key_debug_fallback_for_empty_env() {
let dir = temp_dir("empty-env");
std::env::set_var("TFTSR_DB_KEY", " ");
let key = get_db_key(&dir).unwrap();
assert_eq!(key, "dev-key-change-in-prod");
std::env::remove_var("TFTSR_DB_KEY");
}
}
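`get_db_key` resolves the database key with a three-step precedence: a non-empty `TFTSR_DB_KEY` environment variable, a fixed dev key in debug builds, then a persisted per-install `.dbkey` file (generated on first run). That precedence can be restated as a pure function with injected inputs — a hypothetical sketch for illustration, not the app's actual API:

```rust
// Restatement of get_db_key's precedence with env/file access replaced
// by plain Options. Returns None where the real code would generate and
// persist a fresh key.
fn resolve_key(env_key: Option<&str>, debug_build: bool, stored_key: Option<&str>) -> Option<String> {
    // 1. A non-empty environment variable always wins.
    if let Some(k) = env_key {
        if !k.trim().is_empty() {
            return Some(k.to_string());
        }
    }
    // 2. Debug builds fall back to the fixed dev key.
    if debug_build {
        return Some("dev-key-change-in-prod".to_string());
    }
    // 3. Otherwise use the persisted key, ignoring empty/whitespace files.
    stored_key
        .map(str::trim)
        .filter(|k| !k.is_empty())
        .map(String::from)
}

fn main() {
    assert_eq!(resolve_key(Some("envk"), false, None), Some("envk".to_string()));
    assert_eq!(resolve_key(Some("  "), true, None), Some("dev-key-change-in-prod".to_string()));
    assert_eq!(resolve_key(None, false, Some(" abc ")), Some("abc".to_string()));
}
```

Treating a whitespace-only env var or key file as absent (rather than using it verbatim) is what prevents an accidentally blank `TFTSR_DB_KEY` from silently encrypting the database with an empty key.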

View File

@ -150,6 +150,11 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
UNIQUE(service)
);",
),
(
"012_audit_hash_chain",
"ALTER TABLE audit_log ADD COLUMN prev_hash TEXT NOT NULL DEFAULT '';
ALTER TABLE audit_log ADD COLUMN entry_hash TEXT NOT NULL DEFAULT '';",
),
];
for (name, sql) in migrations {
@ -162,13 +167,13 @@ pub fn run_migrations(conn: &Connection) -> anyhow::Result<()> {
// FTS5 virtual table creation can be skipped if FTS5 is not compiled in
if let Err(e) = conn.execute_batch(sql) {
if name.contains("fts") {
tracing::warn!("FTS5 not available, skipping: {}", e);
tracing::warn!("FTS5 not available, skipping: {e}");
} else {
return Err(e.into());
}
}
conn.execute("INSERT INTO _migrations (name) VALUES (?1)", [name])?;
tracing::info!("Applied migration: {}", name);
tracing::info!("Applied migration: {name}");
}
}
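Migration `012_audit_hash_chain` adds `prev_hash`/`entry_hash` columns, which enable a tamper-evident chain: each entry's hash covers its own fields plus the previous entry's hash, so editing any historical row invalidates every hash after it. A toy pure-std sketch of that chaining — `DefaultHasher` is a non-cryptographic stand-in used only so the example runs without extra crates, and the field set is an assumption, not the app's real scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy chain: entry_hash = H(prev_hash, action, entity_id).
// A real implementation would use a cryptographic hash (e.g. SHA-256)
// over a canonical encoding of all audited fields.
fn chain_hash(prev_hash: &str, action: &str, entity_id: &str) -> String {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    action.hash(&mut h);
    entity_id.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    // The genesis entry chains from the empty string — matching the
    // migration's DEFAULT '' for prev_hash.
    let h1 = chain_hash("", "oauth_callback_success", "confluence");
    let h2 = chain_hash(&h1, "manual_token_saved", "azuredevops");
    // Tampering with entry 1 changes h1, which invalidates h2.
    assert_ne!(chain_hash("", "EDITED", "confluence"), h1);
    assert_ne!(h1, h2);
}
```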

View File

@ -5,15 +5,30 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
let mut md = String::new();
md.push_str(&format!("# Blameless Post-Mortem: {}\n\n", issue.title));
md.push_str(&format!(
"# Blameless Post-Mortem: {title}\n\n",
title = issue.title
));
// Header metadata
md.push_str("## Metadata\n\n");
md.push_str(&format!("- **Date:** {}\n", issue.created_at));
md.push_str(&format!("- **Severity:** {}\n", issue.severity));
md.push_str(&format!("- **Category:** {}\n", issue.category));
md.push_str(&format!("- **Status:** {}\n", issue.status));
md.push_str(&format!("- **Last Updated:** {}\n", issue.updated_at));
md.push_str(&format!(
"- **Date:** {created_at}\n",
created_at = issue.created_at
));
md.push_str(&format!(
"- **Severity:** {severity}\n",
severity = issue.severity
));
md.push_str(&format!(
"- **Category:** {category}\n",
category = issue.category
));
md.push_str(&format!("- **Status:** {status}\n", status = issue.status));
md.push_str(&format!(
"- **Last Updated:** {updated_at}\n",
updated_at = issue.updated_at
));
md.push_str(&format!(
"- **Assigned To:** {}\n",
if issue.assigned_to.is_empty() {
@ -45,7 +60,10 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
md.push_str("## Timeline\n\n");
md.push_str("| Time (UTC) | Event |\n");
md.push_str("|------------|-------|\n");
md.push_str(&format!("| {} | Issue created |\n", issue.created_at));
md.push_str(&format!(
"| {created_at} | Issue created |\n",
created_at = issue.created_at
));
if let Some(ref resolved) = issue.resolved_at {
md.push_str(&format!("| {resolved} | Issue resolved |\n"));
}
@ -77,7 +95,10 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
if let Some(last) = detail.resolution_steps.last() {
if !last.answer.is_empty() {
md.push_str(&format!("**Root Cause:** {}\n\n", last.answer));
md.push_str(&format!(
"**Root Cause:** {answer}\n\n",
answer = last.answer
));
}
}
}
@ -127,7 +148,7 @@ pub fn generate_postmortem_markdown(detail: &IssueDetail) -> String {
md.push_str("---\n\n");
md.push_str(&format!(
"_Generated by TFTSR IT Triage on {}_\n",
"_Generated by Troubleshooting and RCA Assistant on {}_\n",
chrono::Utc::now().format("%Y-%m-%d %H:%M UTC")
));

View File

@ -5,16 +5,31 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
let mut md = String::new();
md.push_str(&format!("# Root Cause Analysis: {}\n\n", issue.title));
md.push_str(&format!(
"# Root Cause Analysis: {title}\n\n",
title = issue.title
));
md.push_str("## Issue Summary\n\n");
md.push_str("| Field | Value |\n");
md.push_str("|-------|-------|\n");
md.push_str(&format!("| **Issue ID** | {} |\n", issue.id));
md.push_str(&format!("| **Category** | {} |\n", issue.category));
md.push_str(&format!("| **Status** | {} |\n", issue.status));
md.push_str(&format!("| **Severity** | {} |\n", issue.severity));
md.push_str(&format!("| **Source** | {} |\n", issue.source));
md.push_str(&format!("| **Issue ID** | {id} |\n", id = issue.id));
md.push_str(&format!(
"| **Category** | {category} |\n",
category = issue.category
));
md.push_str(&format!(
"| **Status** | {status} |\n",
status = issue.status
));
md.push_str(&format!(
"| **Severity** | {severity} |\n",
severity = issue.severity
));
md.push_str(&format!(
"| **Source** | {source} |\n",
source = issue.source
));
md.push_str(&format!(
"| **Assigned To** | {} |\n",
if issue.assigned_to.is_empty() {
@ -23,8 +38,14 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
&issue.assigned_to
}
));
md.push_str(&format!("| **Created** | {} |\n", issue.created_at));
md.push_str(&format!("| **Last Updated** | {} |\n", issue.updated_at));
md.push_str(&format!(
"| **Created** | {created_at} |\n",
created_at = issue.created_at
));
md.push_str(&format!(
"| **Last Updated** | {updated_at} |\n",
updated_at = issue.updated_at
));
if let Some(ref resolved) = issue.resolved_at {
md.push_str(&format!("| **Resolved** | {resolved} |\n"));
}
@ -47,12 +68,15 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
step.step_order, step.why_question
));
if !step.answer.is_empty() {
md.push_str(&format!("**Answer:** {}\n\n", step.answer));
md.push_str(&format!("**Answer:** {answer}\n\n", answer = step.answer));
} else {
md.push_str("_Awaiting answer._\n\n");
}
if !step.evidence.is_empty() {
md.push_str(&format!("**Evidence:** {}\n\n", step.evidence));
md.push_str(&format!(
"**Evidence:** {evidence}\n\n",
evidence = step.evidence
));
}
}
}
@ -109,7 +133,7 @@ pub fn generate_rca_markdown(detail: &IssueDetail) -> String {
md.push_str("---\n\n");
md.push_str(&format!(
"_Generated by TFTSR IT Triage on {}_\n",
"_Generated by Troubleshooting and RCA Assistant on {}_\n",
chrono::Utc::now().format("%Y-%m-%d %H:%M UTC")
));

View File

@ -1,5 +1,6 @@
use rusqlite::OptionalExtension;
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PkceChallenge {
@ -23,19 +24,11 @@ pub struct PatCredential {
/// Generate a PKCE code verifier and challenge for OAuth flows.
pub fn generate_pkce() -> PkceChallenge {
use sha2::{Digest, Sha256};
use rand::{thread_rng, RngCore};
// Generate a random 32-byte verifier
let verifier_bytes: Vec<u8> = (0..32)
.map(|_| {
let r: u8 = (std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.subsec_nanos()
% 256) as u8;
r
})
.collect();
let mut verifier_bytes = [0u8; 32];
thread_rng().fill_bytes(&mut verifier_bytes);
let code_verifier = base64_url_encode(&verifier_bytes);
let challenge_hash = Sha256::digest(code_verifier.as_bytes());
@ -88,7 +81,7 @@ pub async fn exchange_code(
.form(&params)
.send()
.await
.map_err(|e| format!("Failed to send token exchange request: {}", e))?;
.map_err(|e| format!("Failed to send token exchange request: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -101,7 +94,7 @@ pub async fn exchange_code(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse token response: {}", e))?;
.map_err(|e| format!("Failed to parse token response: {e}"))?;
let access_token = body["access_token"]
.as_str()
@ -162,7 +155,6 @@ pub fn get_pat(conn: &rusqlite::Connection, service: &str) -> Result<Option<Stri
}
fn hash_token(token: &str) -> String {
use sha2::{Digest, Sha256};
format!("{:x}", Sha256::digest(token.as_bytes()))
}
@ -173,10 +165,29 @@ fn base64_url_encode(data: &[u8]) -> String {
}
fn urlencoding_encode(s: &str) -> String {
s.replace(' ', "%20")
.replace('&', "%26")
.replace('=', "%3D")
.replace('+', "%2B")
urlencoding::encode(s).into_owned()
}
fn get_encryption_key_material() -> Result<String, String> {
if let Ok(key) = std::env::var("TFTSR_ENCRYPTION_KEY") {
if !key.trim().is_empty() {
return Ok(key);
}
}
if cfg!(debug_assertions) {
return Ok("dev-key-change-me-in-production-32b".to_string());
}
Err("TFTSR_ENCRYPTION_KEY must be set in release builds".to_string())
}
fn derive_aes_key() -> Result<[u8; 32], String> {
let key_material = get_encryption_key_material()?;
let digest = Sha256::digest(key_material.as_bytes());
let mut key_bytes = [0u8; 32];
key_bytes.copy_from_slice(&digest);
Ok(key_bytes)
}
/// Encrypt a token using AES-256-GCM.
@ -189,14 +200,7 @@ pub fn encrypt_token(token: &str) -> Result<String, String> {
};
use rand::{thread_rng, RngCore};
// Get encryption key from env or use default (WARNING: insecure for production)
let key_material = std::env::var("TFTSR_ENCRYPTION_KEY")
.unwrap_or_else(|_| "dev-key-change-me-in-production-32b".to_string());
let mut key_bytes = [0u8; 32];
let src = key_material.as_bytes();
let len = std::cmp::min(src.len(), 32);
key_bytes[..len].copy_from_slice(&src[..len]);
let key_bytes = derive_aes_key()?;
let cipher = Aes256Gcm::new(&key_bytes.into());
@ -208,7 +212,7 @@ pub fn encrypt_token(token: &str) -> Result<String, String> {
// Encrypt
let ciphertext = cipher
.encrypt(nonce, token.as_bytes())
.map_err(|e| format!("Encryption failed: {}", e))?;
.map_err(|e| format!("Encryption failed: {e}"))?;
// Prepend nonce to ciphertext
let mut result = nonce_bytes.to_vec();
@ -232,7 +236,7 @@ pub fn decrypt_token(encrypted: &str) -> Result<String, String> {
use base64::Engine;
let data = STANDARD
.decode(encrypted)
.map_err(|e| format!("Base64 decode failed: {}", e))?;
.map_err(|e| format!("Base64 decode failed: {e}"))?;
if data.len() < 12 {
return Err("Invalid encrypted data: too short".to_string());
@ -242,23 +246,16 @@ pub fn decrypt_token(encrypted: &str) -> Result<String, String> {
let nonce = Nonce::from_slice(&data[..12]);
let ciphertext = &data[12..];
// Get encryption key
let key_material = std::env::var("TFTSR_ENCRYPTION_KEY")
.unwrap_or_else(|_| "dev-key-change-me-in-production-32b".to_string());
let mut key_bytes = [0u8; 32];
let src = key_material.as_bytes();
let len = std::cmp::min(src.len(), 32);
key_bytes[..len].copy_from_slice(&src[..len]);
let key_bytes = derive_aes_key()?;
let cipher = Aes256Gcm::new(&key_bytes.into());
// Decrypt
let plaintext = cipher
.decrypt(nonce, ciphertext)
.map_err(|e| format!("Decryption failed: {}", e))?;
.map_err(|e| format!("Decryption failed: {e}"))?;
String::from_utf8(plaintext).map_err(|e| format!("Invalid UTF-8: {}", e))
String::from_utf8(plaintext).map_err(|e| format!("Invalid UTF-8: {e}"))
}
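The encrypt/decrypt pair above frames its output as `nonce (12 bytes) || ciphertext`, base64-encoded, so the random nonce travels with the data it encrypted. A minimal pure-std sketch of just that framing step, with the cipher itself left out (the real code uses `Aes256Gcm`):

```rust
// Frame: prepend the 12-byte nonce so decryption can recover it later.
fn frame(nonce: [u8; 12], ciphertext: &[u8]) -> Vec<u8> {
    let mut out = nonce.to_vec();
    out.extend_from_slice(ciphertext);
    out
}

// Parse: split the nonce back off, rejecting inputs that are too short —
// mirroring the `data.len() < 12` guard in decrypt_token.
fn parse(data: &[u8]) -> Result<([u8; 12], &[u8]), String> {
    if data.len() < 12 {
        return Err("Invalid encrypted data: too short".to_string());
    }
    let mut nonce = [0u8; 12];
    nonce.copy_from_slice(&data[..12]);
    Ok((nonce, &data[12..]))
}

fn main() {
    let nonce = [7u8; 12];
    let framed = frame(nonce, b"ciphertext-bytes");
    let (n, ct) = parse(&framed).unwrap();
    assert_eq!(n, nonce);
    assert_eq!(ct, b"ciphertext-bytes");
    assert!(parse(&[0u8; 5]).is_err());
}
```

Because AES-GCM only requires the nonce to be unique per key (not secret), storing it in the clear alongside the ciphertext is the standard construction.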
#[cfg(test)]
@ -365,7 +362,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -397,7 +394,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -421,7 +418,7 @@ mod tests {
.create_async()
.await;
let token_endpoint = format!("{}/oauth/token", server.url());
let token_endpoint = format!("{server_url}/oauth/token", server_url = server.url());
let result = exchange_code(
&token_endpoint,
"test-client-id",
@ -563,4 +560,20 @@ mod tests {
let retrieved = get_pat(&conn, "servicenow").unwrap();
assert_eq!(retrieved, Some("token-v2".to_string()));
}
#[test]
fn test_generate_pkce_is_not_deterministic() {
let a = generate_pkce();
let b = generate_pkce();
assert_ne!(a.code_verifier, b.code_verifier);
}
#[test]
fn test_derive_aes_key_is_stable_for_same_input() {
std::env::set_var("TFTSR_ENCRYPTION_KEY", "stable-test-key");
let k1 = derive_aes_key().unwrap();
let k2 = derive_aes_key().unwrap();
assert_eq!(k1, k2);
std::env::remove_var("TFTSR_ENCRYPTION_KEY");
}
}

View File

@ -18,6 +18,10 @@ pub struct WorkItem {
pub description: String,
}
fn escape_wiql_literal(value: &str) -> String {
value.replace('\'', "''")
}
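`escape_wiql_literal` doubles single quotes so user-supplied search text cannot terminate the WIQL string literal it is interpolated into. A quick self-contained illustration of the effect:

```rust
// Same escaping as the function above: '' is WIQL's escape for a
// literal single quote inside a quoted string.
fn escape_wiql_literal(value: &str) -> String {
    value.replace('\'', "''")
}

fn main() {
    // Plain queries pass through unchanged.
    assert_eq!(escape_wiql_literal("login failure"), "login failure");
    // Embedded quotes are doubled, so they stay inside the literal
    // instead of breaking out of it.
    assert_eq!(
        escape_wiql_literal("O'Brien' OR 1=1 --"),
        "O''Brien'' OR 1=1 --"
    );
}
```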
/// Test connection to Azure DevOps by querying project info
pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionResult, String> {
let client = reqwest::Client::new();
@ -32,7 +36,7 @@ pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionRes
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -40,9 +44,10 @@ pub async fn test_connection(config: &AzureDevOpsConfig) -> Result<ConnectionRes
message: "Successfully connected to Azure DevOps".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -60,9 +65,9 @@ pub async fn search_work_items(
);
// Build WIQL query
let escaped_query = escape_wiql_literal(query);
let wiql = format!(
"SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State] FROM WorkItems WHERE [System.Title] CONTAINS '{}' ORDER BY [System.CreatedDate] DESC",
query
"SELECT [System.Id], [System.Title], [System.WorkItemType], [System.State] FROM WorkItems WHERE [System.Title] CONTAINS '{escaped_query}' ORDER BY [System.CreatedDate] DESC"
);
let body = serde_json::json!({ "query": wiql });
@ -74,7 +79,7 @@ pub async fn search_work_items(
.json(&body)
.send()
.await
.map_err(|e| format!("WIQL query failed: {}", e))?;
.map_err(|e| format!("WIQL query failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -87,7 +92,7 @@ pub async fn search_work_items(
let wiql_result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse WIQL response: {}", e))?;
.map_err(|e| format!("Failed to parse WIQL response: {e}"))?;
let work_item_refs = wiql_result["workItems"]
.as_array()
@ -119,7 +124,7 @@ pub async fn search_work_items(
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Failed to fetch work item details: {}", e))?;
.map_err(|e| format!("Failed to fetch work item details: {e}"))?;
if !detail_resp.status().is_success() {
return Err(format!(
@ -131,7 +136,7 @@ pub async fn search_work_items(
let details: serde_json::Value = detail_resp
.json()
.await
.map_err(|e| format!("Failed to parse work item details: {}", e))?;
.map_err(|e| format!("Failed to parse work item details: {e}"))?;
let work_items = details["value"]
.as_array()
@ -199,7 +204,7 @@ pub async fn create_work_item(
.json(&operations)
.send()
.await
.map_err(|e| format!("Failed to create work item: {}", e))?;
.map_err(|e| format!("Failed to create work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -212,7 +217,7 @@ pub async fn create_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let work_item_id = result["id"].as_i64().unwrap_or(0);
let work_item_url = format!(
@ -223,7 +228,7 @@ pub async fn create_work_item(
Ok(TicketResult {
id: work_item_id.to_string(),
ticket_number: format!("#{}", work_item_id),
ticket_number: format!("#{work_item_id}"),
url: work_item_url,
})
}
@ -246,7 +251,7 @@ pub async fn get_work_item(
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Failed to get work item: {}", e))?;
.map_err(|e| format!("Failed to get work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -259,7 +264,7 @@ pub async fn get_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
Ok(WorkItem {
id: result["id"]
@ -305,7 +310,7 @@ pub async fn update_work_item(
.json(&updates)
.send()
.await
.map_err(|e| format!("Failed to update work item: {}", e))?;
.map_err(|e| format!("Failed to update work item: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -318,7 +323,7 @@ pub async fn update_work_item(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let updated_work_item_id = result["id"].as_i64().unwrap_or(work_item_id);
let work_item_url = format!(
@ -329,7 +334,7 @@ pub async fn update_work_item(
Ok(TicketResult {
id: updated_work_item_id.to_string(),
ticket_number: format!("#{}", updated_work_item_id),
ticket_number: format!("#{updated_work_item_id}"),
url: work_item_url,
})
}
@ -338,15 +343,22 @@ pub async fn update_work_item(
mod tests {
use super::*;
#[test]
fn test_escape_wiql_literal_escapes_single_quotes() {
let escaped = escape_wiql_literal("can't deploy");
assert_eq!(escaped, "can''t deploy");
}
#[tokio::test]
async fn test_connection_success() {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_apis/projects/TestProject")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"name":"TestProject","id":"abc123"}"#)
.create_async()
@ -372,9 +384,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/_apis/projects/TestProject")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(401)
.create_async()
.await;
@ -400,9 +413,10 @@ mod tests {
let wiql_mock = server
.mock("POST", "/TestProject/_apis/wit/wiql")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"workItems":[{"id":123}]}"#)
.create_async()
@ -456,9 +470,10 @@ mod tests {
.mock("POST", "/TestProject/_apis/wit/workitems/$Bug")
.match_header("authorization", "Bearer test_token")
.match_header("content-type", "application/json-patch+json")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"id":456}"#)
.create_async()
@ -486,9 +501,10 @@ mod tests {
let mock = server
.mock("GET", "/TestProject/_apis/wit/workitems/123")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(
r#"{
@ -526,9 +542,10 @@ mod tests {
.mock("PATCH", "/TestProject/_apis/wit/workitems/123")
.match_header("authorization", "Bearer test_token")
.match_header("content-type", "application/json-patch+json")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("api-version".into(), "7.0".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"api-version".into(),
"7.0".into(),
)]))
.with_status(200)
.with_body(r#"{"id":123}"#)
.create_async()

@ -269,7 +269,7 @@ mod tests {
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
// Server should be running
let health_url = format!("http://127.0.0.1:{}/health", port);
let health_url = format!("http://127.0.0.1:{port}/health");
let health_before = reqwest::get(&health_url).await;
assert!(health_before.is_ok(), "Server should be running");

@ -22,17 +22,24 @@ pub struct Page {
pub url: String,
}
fn escape_cql_literal(value: &str) -> String {
value.replace('\\', "\\\\").replace('"', "\\\"")
}
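The order of the two replacements above matters: backslashes are escaped first so that the backslash inserted while escaping each quote is not itself doubled by a later pass. A standalone sketch:

```rust
/// CQL string literals are double-quoted; both backslashes and quotes must
/// be escaped. Replacing backslashes first keeps the escape for '"' intact.
fn escape_cql_literal(value: &str) -> String {
    value.replace('\\', "\\\\").replace('"', "\\\"")
}

fn main() {
    // A Windows path with an embedded quoted word round-trips correctly:
    assert_eq!(escape_cql_literal(r#"C:\logs\"prod""#), r#"C:\\logs\\\"prod\""#);
}
```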
/// Test connection to Confluence by fetching current user info
pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResult, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/user/current", config.base_url.trim_end_matches('/'));
let url = format!(
"{}/rest/api/user/current",
config.base_url.trim_end_matches('/')
);
let resp = client
.get(&url)
.bearer_auth(&config.access_token)
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -40,9 +47,10 @@ pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResu
message: "Successfully connected to Confluence".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -50,7 +58,8 @@ pub async fn test_connection(config: &ConfluenceConfig) -> Result<ConnectionResu
/// List all spaces accessible with the current token
pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/space", config.base_url.trim_end_matches('/'));
let base_url = config.base_url.trim_end_matches('/');
let url = format!("{base_url}/rest/api/space");
let resp = client
.get(&url)
@ -58,7 +67,7 @@ pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String
.query(&[("limit", "100")])
.send()
.await
.map_err(|e| format!("Failed to list spaces: {}", e))?;
.map_err(|e| format!("Failed to list spaces: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -71,7 +80,7 @@ pub async fn list_spaces(config: &ConfluenceConfig) -> Result<Vec<Space>, String
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let spaces = body["results"]
.as_array()
@ -100,9 +109,11 @@ pub async fn search_pages(
config.base_url.trim_end_matches('/')
);
let mut cql = format!("text ~ \"{}\"", query);
let escaped_query = escape_cql_literal(query);
let mut cql = format!("text ~ \"{escaped_query}\"");
if let Some(space) = space_key {
cql = format!("{} AND space = {}", cql, space);
let escaped_space = escape_cql_literal(space);
cql = format!("{cql} AND space = \"{escaped_space}\"");
}
let resp = client
@ -111,7 +122,7 @@ pub async fn search_pages(
.query(&[("cql", &cql), ("limit", &"50".to_string())])
.send()
.await
.map_err(|e| format!("Search failed: {}", e))?;
.map_err(|e| format!("Search failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -124,7 +135,7 @@ pub async fn search_pages(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let pages = body["results"]
.as_array()
@ -137,7 +148,7 @@ pub async fn search_pages(
id: page_id.to_string(),
title: p["title"].as_str()?.to_string(),
space_key: p["space"]["key"].as_str()?.to_string(),
url: format!("{}/pages/viewpage.action?pageId={}", base_url, page_id),
url: format!("{base_url}/pages/viewpage.action?pageId={page_id}"),
})
})
.collect();
@ -154,7 +165,8 @@ pub async fn publish_page(
parent_page_id: Option<&str>,
) -> Result<PublishResult, String> {
let client = reqwest::Client::new();
let url = format!("{}/rest/api/content", config.base_url.trim_end_matches('/'));
let base_url = config.base_url.trim_end_matches('/');
let url = format!("{base_url}/rest/api/content");
let mut body = serde_json::json!({
"type": "page",
@ -179,7 +191,7 @@ pub async fn publish_page(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to publish page: {}", e))?;
.map_err(|e| format!("Failed to publish page: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -192,7 +204,7 @@ pub async fn publish_page(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let page_id = result["id"].as_str().unwrap_or("");
let page_url = format!(
@ -242,7 +254,7 @@ pub async fn update_page(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to update page: {}", e))?;
.map_err(|e| format!("Failed to update page: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -255,7 +267,7 @@ pub async fn update_page(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let updated_page_id = result["id"].as_str().unwrap_or(page_id);
let page_url = format!(
@ -274,6 +286,12 @@ pub async fn update_page(
mod tests {
use super::*;
#[test]
fn test_escape_cql_literal_escapes_quotes_and_backslashes() {
let escaped = escape_cql_literal(r#"C:\logs\"prod""#);
assert_eq!(escaped, r#"C:\\logs\\\"prod\""#);
}
#[tokio::test]
async fn test_connection_success() {
let mut server = mockito::Server::new_async().await;
@ -327,9 +345,10 @@ mod tests {
let mock = server
.mock("GET", "/rest/api/space")
.match_header("authorization", "Bearer test_token")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("limit".into(), "100".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"limit".into(),
"100".into(),
)]))
.with_status(200)
.with_body(
r#"{
@ -362,9 +381,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/rest/api/content/search")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("cql".into(), "text ~ \"kubernetes\"".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"cql".into(),
"text ~ \"kubernetes\"".into(),
)]))
.with_status(200)
.with_body(
r#"{

@ -3,6 +3,7 @@ pub mod azuredevops;
pub mod callback_server;
pub mod confluence;
pub mod servicenow;
pub mod webview_auth;
use serde::{Deserialize, Serialize};
@ -24,3 +25,21 @@ pub struct TicketResult {
pub ticket_number: String,
pub url: String,
}
/// Authentication method for integration services
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "method")]
pub enum AuthMethod {
#[serde(rename = "oauth2")]
OAuth2 {
access_token: String,
expires_at: Option<i64>,
},
#[serde(rename = "cookies")]
Cookies { cookies: Vec<webview_auth::Cookie> },
#[serde(rename = "token")]
Token {
token: String,
token_type: String, // "Bearer", "Basic", etc.
},
}
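With the internally tagged representation (`#[serde(tag = "method")]`), each variant serializes its fields flattened alongside a `method` discriminator rather than nested under a variant key. Expected wire shapes, one object per variant (all values hypothetical):

```json
{ "method": "oauth2", "access_token": "eyJ...", "expires_at": 1700000000 }
{ "method": "cookies", "cookies": [ { "name": "JSESSIONID", "value": "abc123", "domain": "example.com", "path": "/", "secure": true, "http_only": true, "expires": null } ] }
{ "method": "token", "token": "pat-...", "token_type": "Bearer" }
```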

@ -34,7 +34,7 @@ pub async fn test_connection(config: &ServiceNowConfig) -> Result<ConnectionResu
.query(&[("sysparm_limit", "1")])
.send()
.await
.map_err(|e| format!("Connection failed: {}", e))?;
.map_err(|e| format!("Connection failed: {e}"))?;
if resp.status().is_success() {
Ok(ConnectionResult {
@ -42,9 +42,10 @@ pub async fn test_connection(config: &ServiceNowConfig) -> Result<ConnectionResu
message: "Successfully connected to ServiceNow".to_string(),
})
} else {
let status = resp.status();
Ok(ConnectionResult {
success: false,
message: format!("Connection failed with status: {}", resp.status()),
message: format!("Connection failed with status: {status}"),
})
}
}
@ -60,15 +61,18 @@ pub async fn search_incidents(
config.instance_url.trim_end_matches('/')
);
let sysparm_query = format!("short_descriptionLIKE{}", query);
let sysparm_query = format!("short_descriptionLIKE{query}");
let resp = client
.get(&url)
.basic_auth(&config.username, Some(&config.password))
.query(&[("sysparm_query", &sysparm_query), ("sysparm_limit", &"10".to_string())])
.query(&[
("sysparm_query", &sysparm_query),
("sysparm_limit", &"10".to_string()),
])
.send()
.await
.map_err(|e| format!("Search failed: {}", e))?;
.map_err(|e| format!("Search failed: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -81,7 +85,7 @@ pub async fn search_incidents(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incidents = body["result"]
.as_array()
@ -131,7 +135,7 @@ pub async fn create_incident(
.json(&body)
.send()
.await
.map_err(|e| format!("Failed to create incident: {}", e))?;
.map_err(|e| format!("Failed to create incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -144,7 +148,7 @@ pub async fn create_incident(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_number = result["result"]["number"].as_str().unwrap_or("");
let sys_id = result["result"]["sys_id"].as_str().unwrap_or("");
@ -195,13 +199,13 @@ pub async fn get_incident(
.basic_auth(&config.username, Some(&config.password));
if use_query {
request = request.query(&[("sysparm_query", &format!("number={}", incident_id))]);
request = request.query(&[("sysparm_query", &format!("number={incident_id}"))]);
}
let resp = request
.send()
.await
.map_err(|e| format!("Failed to get incident: {}", e))?;
.map_err(|e| format!("Failed to get incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -214,7 +218,7 @@ pub async fn get_incident(
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_data = if use_query {
// Query response has "result" array
@ -240,7 +244,10 @@ pub async fn get_incident(
.as_str()
.ok_or_else(|| "Missing short_description".to_string())?
.to_string(),
description: incident_data["description"].as_str().unwrap_or("").to_string(),
description: incident_data["description"]
.as_str()
.unwrap_or("")
.to_string(),
urgency: incident_data["urgency"].as_str().unwrap_or("3").to_string(),
impact: incident_data["impact"].as_str().unwrap_or("3").to_string(),
state: incident_data["state"].as_str().unwrap_or("1").to_string(),
@ -267,7 +274,7 @@ pub async fn update_incident(
.json(&updates)
.send()
.await
.map_err(|e| format!("Failed to update incident: {}", e))?;
.map_err(|e| format!("Failed to update incident: {e}"))?;
if !resp.status().is_success() {
return Err(format!(
@ -280,7 +287,7 @@ pub async fn update_incident(
let result: serde_json::Value = resp
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
.map_err(|e| format!("Failed to parse response: {e}"))?;
let incident_number = result["result"]["number"].as_str().unwrap_or("");
let updated_sys_id = result["result"]["sys_id"].as_str().unwrap_or(sys_id);
@ -307,9 +314,10 @@ mod tests {
let mock = server
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "1".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_limit".into(),
"1".into(),
)]))
.with_status(200)
.with_body(r#"{"result":[]}"#)
.create_async()
@ -335,9 +343,10 @@ mod tests {
let mut server = mockito::Server::new_async().await;
let mock = server
.mock("GET", "/api/now/table/incident")
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "1".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_limit".into(),
"1".into(),
)]))
.with_status(401)
.create_async()
.await;
@ -363,7 +372,10 @@ mod tests {
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_query".into(), "short_descriptionLIKElogin".into()),
mockito::Matcher::UrlEncoded(
"sysparm_query".into(),
"short_descriptionLIKElogin".into(),
),
mockito::Matcher::UrlEncoded("sysparm_limit".into(), "10".into()),
]))
.with_status(200)
@ -480,9 +492,10 @@ mod tests {
let mock = server
.mock("GET", "/api/now/table/incident")
.match_header("authorization", mockito::Matcher::Regex("Basic .+".into()))
.match_query(mockito::Matcher::AllOf(vec![
mockito::Matcher::UrlEncoded("sysparm_query".into(), "number=INC0010001".into()),
]))
.match_query(mockito::Matcher::AllOf(vec![mockito::Matcher::UrlEncoded(
"sysparm_query".into(),
"number=INC0010001".into(),
)]))
.with_status(200)
.with_body(
r#"{

View File

@ -0,0 +1,270 @@
use serde::{Deserialize, Serialize};
use tauri::{AppHandle, Listener, WebviewUrl, WebviewWindow, WebviewWindowBuilder};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExtractedCredentials {
pub cookies: Vec<Cookie>,
pub service: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Cookie {
pub name: String,
pub value: String,
pub domain: String,
pub path: String,
pub secure: bool,
pub http_only: bool,
pub expires: Option<i64>,
}
/// Open an embedded browser window for the user to log in and extract cookies.
/// This approach works when the user is off-VPN (can access the web UI) but the APIs require VPN.
pub async fn authenticate_with_webview(
app_handle: AppHandle,
service: &str,
base_url: &str,
) -> Result<ExtractedCredentials, String> {
let trimmed_base_url = base_url.trim_end_matches('/');
let login_url = match service {
"confluence" => format!("{trimmed_base_url}/login.action"),
"azuredevops" => {
// Azure DevOps login - user will be redirected through Microsoft SSO
format!("{trimmed_base_url}/_signin")
}
"servicenow" => format!("{trimmed_base_url}/login.do"),
_ => return Err(format!("Unknown service: {service}")),
};
tracing::info!(
"Opening persistent browser for {} at {}",
service,
login_url
);
// Create persistent browser window (stays open for browsing and fresh cookie extraction)
let webview_label = format!("{service}-auth");
let webview = WebviewWindowBuilder::new(
&app_handle,
&webview_label,
WebviewUrl::External(login_url.parse().map_err(|e| format!("Invalid URL: {e}"))?),
)
.title(format!(
"{service} Browser (Troubleshooting and RCA Assistant)"
))
.inner_size(1000.0, 800.0)
.min_inner_size(800.0, 600.0)
.resizable(true)
.center()
.focused(true)
.visible(true)
.build()
.map_err(|e| format!("Failed to create webview: {e}"))?;
// Focus the window
webview
.set_focus()
.map_err(|e| tracing::warn!("Failed to focus webview: {e}"))
.ok();
// Wait for user to complete login
// User will click "Complete Login" button in the UI after successful authentication
// This function just opens the window - extraction happens in extract_cookies_via_ipc
Ok(ExtractedCredentials {
cookies: vec![],
service: service.to_string(),
})
}
/// Extract cookies from a webview using Tauri's IPC mechanism.
/// This is the most reliable cross-platform approach.
pub async fn extract_cookies_via_ipc<R: tauri::Runtime>(
webview_window: &WebviewWindow<R>,
app_handle: &AppHandle<R>,
) -> Result<Vec<Cookie>, String> {
// Inject JavaScript that will send cookies via IPC
// Note: We use window.__TAURI__ which is the Tauri 2.x API exposed to webviews
let cookie_extraction_script = r#"
(async function() {
try {
// Wait for Tauri API to be available
if (typeof window.__TAURI__ === 'undefined') {
console.error('Tauri API not available');
return;
}
const cookieString = document.cookie;
if (!cookieString || cookieString.trim() === '') {
await window.__TAURI__.event.emit('tftsr-cookies-extracted', { cookies: [] });
return;
}
const cookies = cookieString.split(';').map(c => c.trim()).filter(c => c.length > 0);
const parsed = cookies.map(cookie => {
const equalIndex = cookie.indexOf('=');
if (equalIndex === -1) return null;
const name = cookie.substring(0, equalIndex).trim();
const value = cookie.substring(equalIndex + 1).trim();
return {
name: name,
value: value,
domain: window.location.hostname,
path: '/',
secure: window.location.protocol === 'https:',
http_only: false,
expires: null
};
}).filter(c => c !== null);
// Use Tauri's event API to send cookies back to Rust
await window.__TAURI__.event.emit('tftsr-cookies-extracted', { cookies: parsed });
console.log('Cookies extracted and emitted:', parsed.length);
} catch (e) {
console.error('Cookie extraction failed:', e);
try {
await window.__TAURI__.event.emit('tftsr-cookies-extracted', { cookies: [], error: e.message });
} catch (emitError) {
console.error('Failed to emit error:', emitError);
}
}
})();
"#;
// Set up event listener first
let (tx, mut rx) = tokio::sync::mpsc::channel::<Result<Vec<Cookie>, String>>(1);
// Listen for the custom event from the webview
let listen_id = app_handle.listen("tftsr-cookies-extracted", move |event| {
tracing::debug!("Received cookies-extracted event");
let payload_str = event.payload();
// Parse the payload JSON
match serde_json::from_str::<serde_json::Value>(payload_str) {
Ok(payload) => {
if let Some(error_msg) = payload.get("error").and_then(|e| e.as_str()) {
let _ = tx.try_send(Err(format!("JavaScript error: {error_msg}")));
return;
}
if let Some(cookies_value) = payload.get("cookies") {
match serde_json::from_value::<Vec<Cookie>>(cookies_value.clone()) {
Ok(cookies) => {
tracing::info!("Parsed {} cookies from webview", cookies.len());
let _ = tx.try_send(Ok(cookies));
}
Err(e) => {
tracing::error!("Failed to parse cookies: {e}");
let _ = tx.try_send(Err(format!("Failed to parse cookies: {e}")));
}
}
} else {
let _ = tx.try_send(Err("No cookies field in payload".to_string()));
}
}
Err(e) => {
tracing::error!("Failed to parse event payload: {e}");
let _ = tx.try_send(Err(format!("Failed to parse event payload: {e}")));
}
}
});
// Inject the script into the webview
webview_window
.eval(cookie_extraction_script)
.map_err(|e| format!("Failed to inject cookie extraction script: {e}"))?;
tracing::info!("Cookie extraction script injected, waiting for response...");
// Wait for cookies with timeout
let result = tokio::time::timeout(tokio::time::Duration::from_secs(10), rx.recv())
.await
.map_err(|_| {
"Timeout waiting for cookies. Make sure you are logged in and on the correct page."
.to_string()
})?
.ok_or_else(|| "Failed to receive cookies from webview".to_string())?;
// Clean up event listener
app_handle.unlisten(listen_id);
result
}
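The injected script parses `document.cookie` by splitting on `;` and taking everything after the first `=` as the value, dropping malformed entries. The same parsing, sketched in plain Rust (a hypothetical helper, not part of the module above):

```rust
/// Parse a document.cookie-style string ("a=1; b=2") into name/value pairs,
/// splitting each entry at the first '=' and dropping entries without one,
/// mirroring what the injected JavaScript does.
fn parse_cookie_string(raw: &str) -> Vec<(String, String)> {
    raw.split(';')
        .filter_map(|entry| {
            let (name, value) = entry.trim().split_once('=')?;
            Some((name.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}

fn main() {
    let parsed = parse_cookie_string("JSESSIONID=abc123; auth_token=xyz789; malformed");
    assert_eq!(parsed.len(), 2); // the entry without '=' is dropped
    assert_eq!(parsed[0], ("JSESSIONID".to_string(), "abc123".to_string()));
}
```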
/// Build cookie header string for HTTP requests
pub fn cookies_to_header(cookies: &[Cookie]) -> String {
cookies
.iter()
.map(|c| {
format!(
"{name}={value}",
name = c.name.as_str(),
value = c.value.as_str()
)
})
.collect::<Vec<_>>()
.join("; ")
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_cookies_to_header() {
let cookies = vec![
Cookie {
name: "JSESSIONID".to_string(),
value: "abc123".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: true,
expires: None,
},
Cookie {
name: "auth_token".to_string(),
value: "xyz789".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: false,
expires: None,
},
];
let header = cookies_to_header(&cookies);
assert_eq!(header, "JSESSIONID=abc123; auth_token=xyz789");
}
#[test]
fn test_empty_cookies_to_header() {
let cookies = vec![];
let header = cookies_to_header(&cookies);
assert_eq!(header, "");
}
#[test]
fn test_cookie_json_serialization() {
let cookies = vec![Cookie {
name: "test".to_string(),
value: "value123".to_string(),
domain: "example.com".to_string(),
path: "/".to_string(),
secure: true,
http_only: false,
expires: None,
}];
let json = serde_json::to_string(&cookies).unwrap();
assert!(json.contains("\"name\":\"test\""));
assert!(json.contains("\"value\":\"value123\""));
let deserialized: Vec<Cookie> = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.len(), 1);
assert_eq!(deserialized[0].name, "test");
}
}

@ -8,6 +8,7 @@ pub mod ollama;
pub mod pii;
pub mod state;
use sha2::{Digest, Sha256};
use state::AppState;
use std::sync::{Arc, Mutex};
@ -21,7 +22,7 @@ pub fn run() {
)
.init();
tracing::info!("Starting TFTSR application");
tracing::info!("Starting Troubleshooting and RCA Assistant application");
// Determine data directory
let data_dir = dirs_data_dir();
@ -34,15 +35,19 @@ pub fn run() {
db: Arc::new(Mutex::new(conn)),
settings: Arc::new(Mutex::new(state::AppSettings::default())),
app_data_dir: data_dir.clone(),
integration_webviews: Arc::new(Mutex::new(std::collections::HashMap::new())),
};
let stronghold_salt = format!(
"tftsr-stronghold-salt-v1-{:x}",
Sha256::digest(data_dir.to_string_lossy().as_bytes())
);
tauri::Builder::default()
.plugin(
tauri_plugin_stronghold::Builder::new(|password| {
use sha2::{Digest, Sha256};
tauri_plugin_stronghold::Builder::new(move |password| {
let mut hasher = Sha256::new();
hasher.update(password);
hasher.update(b"tftsr-stronghold-salt-v1");
hasher.update(stronghold_salt.as_bytes());
hasher.finalize().to_vec()
})
.build(),
@ -87,6 +92,12 @@ pub fn run() {
commands::integrations::create_azuredevops_workitem,
commands::integrations::initiate_oauth,
commands::integrations::handle_oauth_callback,
commands::integrations::authenticate_with_webview,
commands::integrations::extract_cookies_from_webview,
commands::integrations::save_manual_token,
commands::integrations::save_integration_config,
commands::integrations::get_integration_config,
commands::integrations::get_all_integration_configs,
// System / Settings
commands::system::check_ollama_installed,
commands::system::get_ollama_install_guide,
@ -100,7 +111,7 @@ pub fn run() {
commands::system::get_audit_log,
])
.run(tauri::generate_context!())
.expect("Error running TFTSR application");
.expect("Error running Troubleshooting and RCA Assistant application");
}
/// Determine the application data directory.
@ -113,13 +124,13 @@ fn dirs_data_dir() -> std::path::PathBuf {
#[cfg(target_os = "linux")]
{
if let Ok(xdg) = std::env::var("XDG_DATA_HOME") {
return std::path::PathBuf::from(xdg).join("tftsr");
return std::path::PathBuf::from(xdg).join("trcaa");
}
if let Ok(home) = std::env::var("HOME") {
return std::path::PathBuf::from(home)
.join(".local")
.join("share")
.join("tftsr");
.join("trcaa");
}
}
@ -129,17 +140,17 @@ fn dirs_data_dir() -> std::path::PathBuf {
return std::path::PathBuf::from(home)
.join("Library")
.join("Application Support")
.join("tftsr");
.join("trcaa");
}
}
#[cfg(target_os = "windows")]
{
if let Ok(appdata) = std::env::var("APPDATA") {
return std::path::PathBuf::from(appdata).join("tftsr");
return std::path::PathBuf::from(appdata).join("trcaa");
}
}
// Fallback
std::path::PathBuf::from("./tftsr-data")
std::path::PathBuf::from("./trcaa-data")
}

@ -35,8 +35,10 @@ pub fn get_patterns() -> Vec<(PiiType, Regex)> {
// Credit card
(
PiiType::CreditCard,
Regex::new(r"\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13})\b")
.unwrap(),
Regex::new(
r"\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|6(?:011|5[0-9]{2})[0-9]{12}|3(?:0[0-5]|[68][0-9])[0-9]{11}|35(?:2[89]|[3-8][0-9])[0-9]{12})\b",
)
.unwrap(),
),
// Email
(
@ -70,5 +72,13 @@ pub fn get_patterns() -> Vec<(PiiType, Regex)> {
Regex::new(r"\b(?:\+?1[-.\s]?)?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4}\b")
.unwrap(),
),
// Hostname / FQDN
(
PiiType::Hostname,
Regex::new(
r"\b(?:[A-Za-z0-9](?:[A-Za-z0-9\-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,63}\b",
)
.unwrap(),
),
]
}
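The extended credit-card alternation above appears to add Discover, Diners Club, and JCB prefix ranges to the original Visa/Mastercard/Amex branches. As a std-only illustration of what the first three branches accept, restated as prefix plus length checks (`looks_like_card` is a hypothetical helper, not in the codebase):

```rust
/// Std-only restatement of the Visa/Mastercard/Amex branches of the
/// regex above. Note the pattern matches digit shape only; it performs
/// no checksum validation.
fn looks_like_card(digits: &str) -> bool {
    if !digits.bytes().all(|b| b.is_ascii_digit()) {
        return false;
    }
    match digits.as_bytes() {
        // Visa: '4' + 12 digits, optionally 3 more (13 or 16 total)
        [b'4', ..] => matches!(digits.len(), 13 | 16),
        // Mastercard: 51-55 + 14 digits (16 total)
        [b'5', b'1'..=b'5', ..] => digits.len() == 16,
        // Amex: 34 or 37 + 13 digits (15 total)
        [b'3', b'4' | b'7', ..] => digits.len() == 15,
        _ => false, // Discover/Diners/JCB branches omitted here
    }
}

fn main() {
    assert!(looks_like_card("4111111111111111")); // 16-digit Visa test number
    assert!(looks_like_card("378282246310005")); // 15-digit Amex test number
    assert!(!looks_like_card("1234"));
}
```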

@ -1,4 +1,5 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
@ -10,6 +11,34 @@ pub struct ProviderConfig {
pub api_url: String,
pub api_key: String,
pub model: String,
/// Optional: Maximum tokens for response
#[serde(skip_serializing_if = "Option::is_none")]
pub max_tokens: Option<u32>,
/// Optional: Temperature (0.0-2.0) - controls randomness
#[serde(skip_serializing_if = "Option::is_none")]
pub temperature: Option<f64>,
/// Optional: Custom endpoint path (e.g., "" for no path, "/v1/chat" for custom path)
/// If None, defaults to "/chat/completions" for OpenAI compatibility
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_endpoint_path: Option<String>,
/// Optional: Custom auth header name (e.g., "x-msi-genai-api-key")
/// If None, defaults to "Authorization"
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_header: Option<String>,
/// Optional: Custom auth value prefix (e.g., "" for no prefix, "Bearer " for OpenAI)
/// If None, defaults to "Bearer "
#[serde(skip_serializing_if = "Option::is_none")]
pub custom_auth_prefix: Option<String>,
/// Optional: API format ("openai" or "custom_rest")
/// If None, defaults to "openai"
#[serde(skip_serializing_if = "Option::is_none")]
pub api_format: Option<String>,
/// Optional: Session ID for stateful custom REST APIs
#[serde(skip_serializing_if = "Option::is_none")]
pub session_id: Option<String>,
/// Optional: User ID for custom REST API cost tracking (CORE ID email)
#[serde(skip_serializing_if = "Option::is_none")]
pub user_id: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -39,4 +68,7 @@ pub struct AppState {
pub db: Arc<Mutex<rusqlite::Connection>>,
pub settings: Arc<Mutex<AppSettings>>,
pub app_data_dir: PathBuf,
/// Track open integration webview windows by service name -> window label
/// These windows stay open for the user to browse and for fresh cookie extraction
pub integration_webviews: Arc<Mutex<HashMap<String, String>>>,
}
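The new optional `ProviderConfig` fields let a custom provider override the endpoint path, auth header name, and auth value prefix, with the defaults noted in the doc comments. A minimal sketch of how those defaults would presumably resolve on the client side (`buildRequestTarget` and its inputs are illustrative, not the app's actual request code):

```typescript
// Hypothetical helper mirroring the defaults documented on ProviderConfig above.
interface CustomAuthConfig {
  api_url: string;
  api_key: string;
  custom_endpoint_path?: string; // default "/chat/completions"
  custom_auth_header?: string;   // default "Authorization"
  custom_auth_prefix?: string;   // default "Bearer "
}

function buildRequestTarget(cfg: CustomAuthConfig) {
  return {
    url: cfg.api_url + (cfg.custom_endpoint_path ?? "/chat/completions"),
    headerName: cfg.custom_auth_header ?? "Authorization",
    headerValue: (cfg.custom_auth_prefix ?? "Bearer ") + cfg.api_key,
  };
}

// An empty string deliberately disables a default, while undefined keeps it —
// matching the "" vs None distinction the Rust doc comments describe.
const openaiStyle = buildRequestTarget({
  api_url: "https://api.openai.com/v1",
  api_key: "sk-test", // placeholder key
});
const customRest = buildRequestTarget({
  api_url: "https://example.invalid/genai", // placeholder URL
  api_key: "key123",
  custom_endpoint_path: "",
  custom_auth_header: "x-msi-genai-api-key",
  custom_auth_prefix: "",
});
```

Using `??` rather than `||` is what makes the empty-string override work, since `"" || default` would silently reinstate the default.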

@ -1,7 +1,7 @@
{
"productName": "TFTSR",
"version": "0.2.2",
"identifier": "com.tftsr.devops",
"productName": "Troubleshooting and RCA Assistant",
"version": "0.2.10",
"identifier": "com.trcaa.app",
"build": {
"frontendDist": "../dist",
"devUrl": "http://localhost:1420",
@ -10,11 +10,11 @@
},
"app": {
"security": {
"csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com"
"csp": "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: asset: https:; connect-src 'self' http://localhost:11434 http://localhost:8765 https://api.openai.com https://api.anthropic.com https://api.mistral.ai https://generativelanguage.googleapis.com https://auth.atlassian.com https://*.atlassian.net https://login.microsoftonline.com https://dev.azure.com https://genai-service.stage.commandcentral.com https://genai-service.commandcentral.com"
},
"windows": [
{
"title": "TFTSR \u2014 IT Triage & RCA",
"title": "Troubleshooting and RCA Assistant",
"width": 1280,
"height": 800,
"resizable": true,
@ -36,9 +36,9 @@
],
"resources": [],
"externalBin": [],
"copyright": "TFTSR Contributors",
"copyright": "Troubleshooting and RCA Assistant Contributors",
"category": "Utility",
"shortDescription": "IT Incident Triage & RCA Tool",
"longDescription": "Structured AI-backed tool for IT incident triage, 5-whys root cause analysis, and post-mortem documentation with offline Ollama support."
"shortDescription": "Troubleshooting and RCA Assistant",
"longDescription": "Structured AI-backed assistant for IT troubleshooting, 5-whys root cause analysis, and post-mortem documentation with offline Ollama support."
}
}

@ -11,6 +11,8 @@ import {
Link,
ChevronLeft,
ChevronRight,
Sun,
Moon,
} from "lucide-react";
import { useSettingsStore } from "@/stores/settingsStore";
@ -43,7 +45,7 @@ const settingsItems = [
export default function App() {
const [collapsed, setCollapsed] = useState(false);
const [appVersion, setAppVersion] = useState("");
const theme = useSettingsStore((s) => s.theme);
const { theme, setTheme } = useSettingsStore();
const location = useLocation();
useEffect(() => {
@ -59,7 +61,7 @@ export default function App() {
<div className="flex items-center justify-between px-4 py-4 border-b">
{!collapsed && (
<span className="text-lg font-bold text-foreground tracking-tight">
TFTSR
Troubleshooting and RCA Assistant
</span>
)}
<button
@ -116,12 +118,21 @@ export default function App() {
</div>
</nav>
{/* Version */}
{!collapsed && (
<div className="px-4 py-3 border-t text-xs text-muted-foreground">
{appVersion ? `v${appVersion}` : ""}
</div>
)}
{/* Version + Theme toggle */}
<div className="px-4 py-3 border-t flex items-center justify-between">
{!collapsed && (
<span className="text-xs text-muted-foreground">
{appVersion ? `v${appVersion}` : ""}
</span>
)}
<button
onClick={() => setTheme(theme === "dark" ? "light" : "dark")}
className="p-1 rounded hover:bg-accent text-muted-foreground"
title={theme === "dark" ? "Switch to light mode" : "Switch to dark mode"}
>
{theme === "dark" ? <Sun className="w-4 h-4" /> : <Moon className="w-4 h-4" />}
</button>
</div>
</aside>
{/* Main content */}

@ -16,6 +16,7 @@ const buttonVariants = cva(
default: "bg-primary text-primary-foreground hover:bg-primary/90",
destructive: "bg-destructive text-destructive-foreground hover:bg-destructive/90",
outline: "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
secondary: "bg-secondary text-secondary-foreground hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
},
@ -342,4 +343,54 @@ export function Separator({
);
}
// ─── RadioGroup ──────────────────────────────────────────────────────────────
interface RadioGroupContextValue {
value: string;
onValueChange: (value: string) => void;
}
const RadioGroupContext = React.createContext<RadioGroupContextValue | null>(null);
interface RadioGroupProps {
value: string;
onValueChange: (value: string) => void;
className?: string;
children: React.ReactNode;
}
export function RadioGroup({ value, onValueChange, className, children }: RadioGroupProps) {
return (
<RadioGroupContext.Provider value={{ value, onValueChange }}>
<div className={cn("space-y-2", className)}>{children}</div>
</RadioGroupContext.Provider>
);
}
interface RadioGroupItemProps extends React.InputHTMLAttributes<HTMLInputElement> {
value: string;
}
export const RadioGroupItem = React.forwardRef<HTMLInputElement, RadioGroupItemProps>(
({ value, className, ...props }, ref) => {
const ctx = React.useContext(RadioGroupContext);
if (!ctx) throw new Error("RadioGroupItem must be used within RadioGroup");
return (
<input
ref={ref}
type="radio"
className={cn(
"aspect-square h-4 w-4 rounded-full border border-primary text-primary ring-offset-background focus:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
className
)}
checked={ctx.value === value}
onChange={() => ctx.onValueChange(value)}
{...props}
/>
);
}
);
RadioGroupItem.displayName = "RadioGroupItem";
export { cn };

@ -10,6 +10,12 @@ export interface ProviderConfig {
api_url: string;
api_key: string;
model: string;
custom_endpoint_path?: string;
custom_auth_header?: string;
custom_auth_prefix?: string;
api_format?: string;
session_id?: string;
user_id?: string;
}
export interface Message {
@ -387,3 +393,46 @@ export const testServiceNowConnectionCmd = (instanceUrl: string, credentials: Re
export const testAzureDevOpsConnectionCmd = (orgUrl: string, credentials: Record<string, unknown>) =>
invoke<ConnectionResult>("test_azuredevops_connection", { orgUrl, credentials });
// ─── Webview & Token Authentication ──────────────────────────────────────────
export interface WebviewAuthResponse {
success: boolean;
message: string;
webview_id: string;
}
export interface TokenAuthRequest {
service: string;
token: string;
token_type: string;
base_url: string;
}
export interface IntegrationConfig {
service: string;
base_url: string;
username?: string;
project_name?: string;
space_key?: string;
}
export const authenticateWithWebviewCmd = (service: string, baseUrl: string) =>
invoke<WebviewAuthResponse>("authenticate_with_webview", { service, baseUrl });
export const extractCookiesFromWebviewCmd = (service: string, webviewId: string) =>
invoke<ConnectionResult>("extract_cookies_from_webview", { service, webviewId });
export const saveManualTokenCmd = (request: TokenAuthRequest) =>
invoke<ConnectionResult>("save_manual_token", { request });
// ─── Integration Configuration Persistence ────────────────────────────────────
export const saveIntegrationConfigCmd = (config: IntegrationConfig) =>
invoke<void>("save_integration_config", { config });
export const getIntegrationConfigCmd = (service: string) =>
invoke<IntegrationConfig | null>("get_integration_config", { service });
export const getAllIntegrationConfigsCmd = () =>
invoke<IntegrationConfig[]>("get_all_integration_configs");
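The webview commands form a two-step handshake: open the embedded browser, let the user log in interactively, then extract the session cookies using the returned webview id. A sketch of the calling pattern with `invoke` mocked so it runs outside Tauri (the mock responses are made up; the real flow goes through the commands registered above):

```typescript
// Shapes matching the interfaces exported above.
type WebviewAuthResponse = { success: boolean; message: string; webview_id: string };
type ConnectionResult = { success: boolean; message: string };

// Stand-in for @tauri-apps/api/core invoke; responses here are fabricated.
const mockInvoke = async (cmd: string, args: Record<string, unknown>): Promise<unknown> => {
  if (cmd === "authenticate_with_webview") {
    return { success: true, message: "Webview opened", webview_id: "wv-1" };
  }
  if (cmd === "extract_cookies_from_webview") {
    return { success: true, message: `Cookies saved for ${args.service}` };
  }
  throw new Error(`unknown command: ${cmd}`);
};

async function connectViaWebview(service: string, baseUrl: string): Promise<ConnectionResult> {
  // Step 1: open the embedded browser window for interactive login.
  const auth = (await mockInvoke("authenticate_with_webview", { service, baseUrl })) as WebviewAuthResponse;
  // Step 2: after the user finishes logging in, harvest the session cookies.
  return (await mockInvoke("extract_cookies_from_webview", {
    service,
    webviewId: auth.webview_id,
  })) as ConnectionResult;
}
```

In the real UI the two steps are driven by separate buttons ("Login via Browser", then "Complete Login"), since only the user knows when the login has finished.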

@ -35,11 +35,11 @@ export default function Dashboard() {
<div>
<h1 className="text-3xl font-bold">Dashboard</h1>
<p className="text-muted-foreground mt-1">
IT Triage & Root Cause Analysis
Troubleshooting and Root Cause Analysis Assistant
</p>
</div>
<div className="flex items-center gap-2">
<Button variant="outline" size="sm" onClick={() => loadIssues()} disabled={isLoading}>
<Button variant="outline" size="sm" onClick={() => loadIssues()} disabled={isLoading} className="border-border text-foreground bg-card hover:bg-accent">
<RefreshCw className={`w-4 h-4 mr-2 ${isLoading ? "animate-spin" : ""}`} />
Refresh
</Button>

@ -19,6 +19,35 @@ import {
import { useSettingsStore } from "@/stores/settingsStore";
import { testProviderConnectionCmd, type ProviderConfig } from "@/lib/tauriCommands";
export const CUSTOM_REST_MODELS = [
"ChatGPT4o",
"ChatGPT4o-mini",
"ChatGPT-o3-mini",
"Gemini-2_0-Flash-001",
"Gemini-2_5-Flash",
"Claude-Sonnet-3_7",
"Openai-gpt-4_1-mini",
"Openai-o4-mini",
"Claude-Sonnet-4",
"ChatGPT-o3-pro",
"OpenAI-ChatGPT-4_1",
"OpenAI-GPT-4_1-Nano",
"ChatGPT-5",
"VertexGemini",
"ChatGPT-5_1",
"ChatGPT-5_1-chat",
"ChatGPT-5_2-Chat",
"Gemini-3_Pro-Preview",
"Gemini-3_1-flash-lite-preview",
] as const;
export const CUSTOM_MODEL_OPTION = "__custom_model__";
export const LEGACY_API_FORMAT = "msi_genai";
export const CUSTOM_REST_FORMAT = "custom_rest";
export const normalizeApiFormat = (format?: string): string | undefined =>
format === LEGACY_API_FORMAT ? CUSTOM_REST_FORMAT : format;
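The shim above exists so provider configs saved under the old `msi_genai` label keep working after the rename to `custom_rest`; every other format value passes through untouched. Restated standalone:

```typescript
// Standalone restatement of the migration shim, same names as above.
const LEGACY_API_FORMAT = "msi_genai";
const CUSTOM_REST_FORMAT = "custom_rest";

const normalizeApiFormat = (format?: string): string | undefined =>
  format === LEGACY_API_FORMAT ? CUSTOM_REST_FORMAT : format;

console.log(normalizeApiFormat("msi_genai")); // legacy value is rewritten → "custom_rest"
console.log(normalizeApiFormat("openai"));    // anything else passes through → "openai"
```

Calling it at every read site (as `startEdit` does) avoids a one-shot data migration of stored settings.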
const emptyProvider: ProviderConfig = {
name: "",
provider_type: "openai",
@ -27,6 +56,12 @@ const emptyProvider: ProviderConfig = {
model: "",
max_tokens: 4096,
temperature: 0.7,
custom_endpoint_path: undefined,
custom_auth_header: undefined,
custom_auth_prefix: undefined,
api_format: undefined,
session_id: undefined,
user_id: undefined,
};
export default function AIProviders() {
@ -44,19 +79,39 @@ export default function AIProviders() {
const [form, setForm] = useState<ProviderConfig>({ ...emptyProvider });
const [testResult, setTestResult] = useState<{ success: boolean; message: string } | null>(null);
const [isTesting, setIsTesting] = useState(false);
const [isCustomModel, setIsCustomModel] = useState(false);
const [customModelInput, setCustomModelInput] = useState("");
const startAdd = () => {
setForm({ ...emptyProvider });
setEditIndex(null);
setIsAdding(true);
setTestResult(null);
setIsCustomModel(false);
setCustomModelInput("");
};
const startEdit = (index: number) => {
setForm({ ...ai_providers[index] });
const provider = ai_providers[index];
const apiFormat = normalizeApiFormat(provider.api_format);
const nextForm = { ...provider, api_format: apiFormat };
setForm(nextForm);
setEditIndex(index);
setIsAdding(true);
setTestResult(null);
const isCustomRestProvider =
nextForm.provider_type === "custom" && apiFormat === CUSTOM_REST_FORMAT;
const knownModel = CUSTOM_REST_MODELS.includes(nextForm.model as (typeof CUSTOM_REST_MODELS)[number]);
if (isCustomRestProvider && !knownModel) {
setIsCustomModel(true);
setCustomModelInput(nextForm.model);
} else {
setIsCustomModel(false);
setCustomModelInput("");
}
};
const handleSave = () => {
@ -236,14 +291,16 @@ export default function AIProviders() {
placeholder="sk-..."
/>
</div>
<div className="space-y-2">
<Label>Model</Label>
<Input
value={form.model}
onChange={(e) => setForm({ ...form, model: e.target.value })}
placeholder="gpt-4o"
/>
</div>
{!(form.provider_type === "custom" && normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT) && (
<div className="space-y-2">
<Label>Model</Label>
<Input
value={form.model}
onChange={(e) => setForm({ ...form, model: e.target.value })}
placeholder="gpt-4o"
/>
</div>
)}
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
@ -267,6 +324,154 @@ export default function AIProviders() {
</div>
</div>
{/* Custom provider format options */}
{form.provider_type === "custom" && (
<>
<Separator />
<div className="space-y-4">
<div className="space-y-2">
<Label>API Format</Label>
<Select
value={form.api_format ?? "openai"}
onValueChange={(v) => {
const format = v;
const defaults =
format === CUSTOM_REST_FORMAT
? {
custom_endpoint_path: "",
custom_auth_header: "",
custom_auth_prefix: "",
}
: {
custom_endpoint_path: "/chat/completions",
custom_auth_header: "Authorization",
custom_auth_prefix: "Bearer ",
};
setForm({ ...form, api_format: format, ...defaults });
if (format !== CUSTOM_REST_FORMAT) {
setIsCustomModel(false);
setCustomModelInput("");
}
}}
>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="openai">OpenAI Compatible</SelectItem>
<SelectItem value={CUSTOM_REST_FORMAT}>Custom REST</SelectItem>
</SelectContent>
</Select>
<p className="text-xs text-muted-foreground">
Select the API format. Custom REST uses a non-OpenAI request/response structure.
</p>
</div>
<div className="grid grid-cols-2 gap-4">
<div className="space-y-2">
<Label>Endpoint Path</Label>
<Input
value={form.custom_endpoint_path ?? ""}
onChange={(e) =>
setForm({ ...form, custom_endpoint_path: e.target.value })
}
placeholder="/chat/completions"
/>
<p className="text-xs text-muted-foreground">
Path appended to API URL. Leave empty if URL includes full path.
</p>
</div>
<div className="space-y-2">
<Label>Auth Header Name</Label>
<Input
value={form.custom_auth_header ?? ""}
onChange={(e) =>
setForm({ ...form, custom_auth_header: e.target.value })
}
placeholder="Authorization"
/>
<p className="text-xs text-muted-foreground">
Header name for authentication (e.g., "Authorization" or "x-api-key")
</p>
</div>
</div>
<div className="space-y-2">
<Label>Auth Prefix</Label>
<Input
value={form.custom_auth_prefix ?? ""}
onChange={(e) => setForm({ ...form, custom_auth_prefix: e.target.value })}
placeholder="Bearer "
/>
<p className="text-xs text-muted-foreground">
Prefix added before API key (e.g., "Bearer " for OpenAI, empty for Custom REST)
</p>
</div>
{/* Custom REST specific: User ID field */}
{normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT && (
<div className="space-y-2">
<Label>Email Address</Label>
<Input
value={form.user_id ?? ""}
onChange={(e) => setForm({ ...form, user_id: e.target.value })}
placeholder="user@example.com"
/>
<p className="text-xs text-muted-foreground">
Optional: Email address for usage tracking. If omitted, costs are attributed to the API key owner.
</p>
</div>
)}
{/* Custom REST specific: model dropdown with custom option */}
{normalizeApiFormat(form.api_format) === CUSTOM_REST_FORMAT && (
<div className="space-y-2">
<Label>Model</Label>
<Select
value={isCustomModel ? CUSTOM_MODEL_OPTION : form.model}
onValueChange={(value) => {
if (value === CUSTOM_MODEL_OPTION) {
setIsCustomModel(true);
if (CUSTOM_REST_MODELS.includes(form.model as (typeof CUSTOM_REST_MODELS)[number])) {
setForm({ ...form, model: "" });
setCustomModelInput("");
}
} else {
setIsCustomModel(false);
setCustomModelInput("");
setForm({ ...form, model: value });
}
}}
>
<SelectTrigger>
<SelectValue placeholder="Select a model..." />
</SelectTrigger>
<SelectContent>
{CUSTOM_REST_MODELS.map((model) => (
<SelectItem key={model} value={model}>
{model}
</SelectItem>
))}
<SelectItem value={CUSTOM_MODEL_OPTION}>Custom model...</SelectItem>
</SelectContent>
</Select>
{isCustomModel && (
<Input
value={customModelInput}
onChange={(e) => {
const value = e.target.value;
setCustomModelInput(value);
setForm({ ...form, model: value });
}}
placeholder="Enter custom model ID"
/>
)}
</div>
)}
</div>
</>
)}
{/* Test result */}
{testResult && (
<div

@ -1,5 +1,6 @@
import React, { useState } from "react";
import { ExternalLink, Check, X, Loader2 } from "lucide-react";
import React, { useState, useEffect } from "react";
import { ExternalLink, Check, X, Loader2, Key, Globe, Lock } from "lucide-react";
import { invoke } from "@tauri-apps/api/core";
import {
Card,
CardHeader,
@ -9,14 +10,22 @@ import {
Button,
Input,
Label,
RadioGroup,
RadioGroupItem,
} from "@/components/ui";
import {
initiateOauthCmd,
authenticateWithWebviewCmd,
extractCookiesFromWebviewCmd,
saveManualTokenCmd,
testConfluenceConnectionCmd,
testServiceNowConnectionCmd,
testAzureDevOpsConnectionCmd,
saveIntegrationConfigCmd,
getAllIntegrationConfigsCmd,
} from "@/lib/tauriCommands";
import { invoke } from "@tauri-apps/api/core";
type AuthMode = "oauth2" | "webview" | "token";
interface IntegrationConfig {
service: string;
@ -25,6 +34,10 @@ interface IntegrationConfig {
projectName?: string;
spaceKey?: string;
connected: boolean;
authMode: AuthMode;
token?: string;
tokenType?: string;
webviewId?: string;
}
export default function Integrations() {
@ -34,34 +47,76 @@ export default function Integrations() {
baseUrl: "",
spaceKey: "",
connected: false,
authMode: "webview",
tokenType: "Bearer",
},
servicenow: {
service: "servicenow",
baseUrl: "",
username: "",
connected: false,
authMode: "token",
tokenType: "Basic",
},
azuredevops: {
service: "azuredevops",
baseUrl: "",
projectName: "",
connected: false,
authMode: "webview",
tokenType: "Bearer",
},
});
const [loading, setLoading] = useState<Record<string, boolean>>({});
const [testResults, setTestResults] = useState<Record<string, { success: boolean; message: string } | null>>({});
const handleConnect = async (service: string) => {
// Load configs from database on mount
useEffect(() => {
const loadConfigs = async () => {
try {
const savedConfigs = await getAllIntegrationConfigsCmd();
const configMap: Record<string, Partial<IntegrationConfig>> = {};
savedConfigs.forEach((cfg) => {
configMap[cfg.service] = {
baseUrl: cfg.base_url,
username: cfg.username || "",
projectName: cfg.project_name || "",
spaceKey: cfg.space_key || "",
};
});
setConfigs((prev) => ({
confluence: { ...prev.confluence, ...configMap.confluence },
servicenow: { ...prev.servicenow, ...configMap.servicenow },
azuredevops: { ...prev.azuredevops, ...configMap.azuredevops },
}));
} catch (err) {
console.error("Failed to load integration configs:", err);
}
};
loadConfigs();
}, []);
const handleAuthModeChange = (service: string, mode: AuthMode) => {
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], authMode: mode, connected: false },
}));
setTestResults((prev) => ({ ...prev, [service]: null }));
};
const handleConnectOAuth = async (service: string) => {
setLoading((prev) => ({ ...prev, [service]: true }));
try {
const response = await initiateOauthCmd(service);
// Open auth URL in default browser using shell plugin
// Open auth URL in default browser
await invoke("plugin:shell|open", { path: response.auth_url });
// Mark as connected (optimistic)
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], connected: true },
@ -82,6 +137,110 @@ export default function Integrations() {
}
};
const handleConnectWebview = async (service: string) => {
const config = configs[service];
setLoading((prev) => ({ ...prev, [service]: true }));
try {
const response = await authenticateWithWebviewCmd(service, config.baseUrl);
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], webviewId: response.webview_id },
}));
setTestResults((prev) => ({
...prev,
[service]: { success: true, message: response.message + " Click 'Complete Login' when done." },
}));
} catch (err) {
console.error("Failed to open webview:", err);
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: String(err) },
}));
} finally {
setLoading((prev) => ({ ...prev, [service]: false }));
}
};
const handleCompleteWebviewLogin = async (service: string) => {
const config = configs[service];
if (!config.webviewId) {
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: "No webview session found. Click 'Login via Browser' first." },
}));
return;
}
setLoading((prev) => ({ ...prev, [`complete-${service}`]: true }));
try {
const result = await extractCookiesFromWebviewCmd(service, config.webviewId);
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], connected: true, webviewId: undefined },
}));
setTestResults((prev) => ({
...prev,
[service]: { success: result.success, message: result.message },
}));
} catch (err) {
console.error("Failed to extract cookies:", err);
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: String(err) },
}));
} finally {
setLoading((prev) => ({ ...prev, [`complete-${service}`]: false }));
}
};
const handleSaveToken = async (service: string) => {
const config = configs[service];
if (!config.token) {
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: "Please enter a token" },
}));
return;
}
setLoading((prev) => ({ ...prev, [`save-${service}`]: true }));
try {
const result = await saveManualTokenCmd({
service,
token: config.token,
token_type: config.tokenType || "Bearer",
base_url: config.baseUrl,
});
if (result.success) {
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], connected: true },
}));
}
setTestResults((prev) => ({
...prev,
[service]: result,
}));
} catch (err) {
console.error("Failed to save token:", err);
setTestResults((prev) => ({
...prev,
[service]: { success: false, message: String(err) },
}));
} finally {
setLoading((prev) => ({ ...prev, [`save-${service}`]: false }));
}
};
const handleTestConnection = async (service: string) => {
setLoading((prev) => ({ ...prev, [`test-${service}`]: true }));
setTestResults((prev) => ({ ...prev, [service]: null }));
@ -121,11 +280,172 @@ export default function Integrations() {
}
};
const updateConfig = (service: string, field: string, value: string) => {
const updateConfig = async (service: string, field: string, value: string) => {
const updatedConfig = { ...configs[service], [field]: value };
setConfigs((prev) => ({
...prev,
[service]: { ...prev[service], [field]: value },
[service]: updatedConfig,
}));
// Persist to database on every change (note: not debounced — each keystroke triggers a save)
try {
await saveIntegrationConfigCmd({
service,
base_url: updatedConfig.baseUrl,
username: updatedConfig.username,
project_name: updatedConfig.projectName,
space_key: updatedConfig.spaceKey,
});
} catch (err) {
console.error("Failed to save integration config:", err);
}
};
const renderAuthSection = (service: string) => {
const config = configs[service];
const isOAuthSupported = service !== "servicenow"; // ServiceNow doesn't support OAuth2
return (
<div className="space-y-4">
{/* Auth Mode Selection */}
<div className="space-y-3">
<Label>Authentication Method</Label>
<RadioGroup
value={config.authMode}
onValueChange={(value) => handleAuthModeChange(service, value as AuthMode)}
>
{isOAuthSupported && (
<div className="flex items-center space-x-2">
<RadioGroupItem value="oauth2" id={`${service}-oauth`} />
<Label htmlFor={`${service}-oauth`} className="font-normal cursor-pointer flex items-center gap-2">
<Lock className="w-4 h-4" />
OAuth2 (Enterprise SSO)
</Label>
</div>
)}
<div className="flex items-center space-x-2">
<RadioGroupItem value="webview" id={`${service}-webview`} />
<Label htmlFor={`${service}-webview`} className="font-normal cursor-pointer flex items-center gap-2">
<Globe className="w-4 h-4" />
Browser Login (Works off-VPN)
</Label>
</div>
<div className="flex items-center space-x-2">
<RadioGroupItem value="token" id={`${service}-token`} />
<Label htmlFor={`${service}-token`} className="font-normal cursor-pointer flex items-center gap-2">
<Key className="w-4 h-4" />
Manual Token/API Key
</Label>
</div>
</RadioGroup>
</div>
{/* OAuth2 Mode */}
{config.authMode === "oauth2" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
OAuth2 requires pre-registered application credentials. This may not work in all enterprise environments.
</p>
<Button
onClick={() => handleConnectOAuth(service)}
disabled={loading[service] || !config.baseUrl}
>
{loading[service] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : config.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
</div>
)}
{/* Webview Mode */}
{config.authMode === "webview" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
Opens an embedded browser for you to log in normally. Works even when off-VPN. Captures session cookies for API access.
</p>
<div className="flex gap-2">
<Button
onClick={() => handleConnectWebview(service)}
disabled={loading[service] || !config.baseUrl}
>
{loading[service] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Opening...
</>
) : (
"Login via Browser"
)}
</Button>
{config.webviewId && (
<Button
variant="secondary"
onClick={() => handleCompleteWebviewLogin(service)}
disabled={loading[`complete-${service}`]}
>
{loading[`complete-${service}`] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Saving...
</>
) : (
"Complete Login"
)}
</Button>
)}
</div>
</div>
)}
{/* Token Mode */}
{config.authMode === "token" && (
<div className="space-y-3 p-4 bg-muted/30 rounded-lg">
<p className="text-sm text-muted-foreground">
Enter a Personal Access Token (PAT), API Key, or Bearer token. Most reliable method but requires manual token generation.
</p>
<div className="space-y-2">
<Label htmlFor={`${service}-token-input`}>Token</Label>
<Input
id={`${service}-token-input`}
type="password"
placeholder={service === "confluence" ? "Bearer token or API key" : "API token or PAT"}
value={config.token || ""}
onChange={(e) => updateConfig(service, "token", e.target.value)}
/>
<p className="text-xs text-muted-foreground">
{service === "confluence" && "Generate at: https://id.atlassian.com/manage-profile/security/api-tokens"}
{service === "azuredevops" && "Generate at: https://dev.azure.com/{org}/_usersSettings/tokens"}
{service === "servicenow" && "Use your ServiceNow password or API key"}
</p>
</div>
<Button
onClick={() => handleSaveToken(service)}
disabled={loading[`save-${service}`] || !config.token}
>
{loading[`save-${service}`] ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Validating...
</>
) : (
"Save & Validate Token"
)}
</Button>
</div>
)}
</div>
);
};
return (
@ -133,7 +453,7 @@ export default function Integrations() {
<div>
<h1 className="text-3xl font-bold">Integrations</h1>
<p className="text-muted-foreground mt-1">
Connect TFTSR with your existing tools and platforms via OAuth2.
Connect Troubleshooting and RCA Assistant with your existing tools and platforms. Choose the authentication method that works best for your environment.
</p>
</div>
@ -145,7 +465,7 @@ export default function Integrations() {
Confluence
</CardTitle>
<CardDescription>
Publish RCA documents to Confluence spaces. Requires OAuth2 authentication with Atlassian.
Publish RCA documents to Confluence spaces. Supports OAuth2, browser login, or API tokens.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -169,26 +489,9 @@ export default function Integrations() {
/>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() => handleConnect("confluence")}
disabled={loading.confluence || !configs.confluence.baseUrl}
>
{loading.confluence ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : configs.confluence.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
{renderAuthSection("confluence")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("confluence")}
@ -232,7 +535,7 @@ export default function Integrations() {
ServiceNow
</CardTitle>
<CardDescription>
Link incidents and push resolution steps. Uses basic authentication (username + password).
Link incidents and push resolution steps. Supports browser login or basic authentication.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -256,35 +559,9 @@ export default function Integrations() {
/>
</div>
<div className="space-y-2">
<Label htmlFor="servicenow-password">Password</Label>
<Input
id="servicenow-password"
type="password"
placeholder="••••••••"
disabled
/>
<p className="text-xs text-muted-foreground">
ServiceNow credentials are stored securely after first login. OAuth2 not supported.
</p>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() =>
setTestResults((prev) => ({
...prev,
servicenow: {
success: false,
message: "ServiceNow uses basic authentication, not OAuth2. Enter credentials above.",
},
}))
}
disabled={!configs.servicenow.baseUrl || !configs.servicenow.username}
>
Save Credentials
</Button>
{renderAuthSection("servicenow")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("servicenow")}
@ -328,7 +605,7 @@ export default function Integrations() {
Azure DevOps
</CardTitle>
<CardDescription>
Create work items and attach RCA documents. Requires OAuth2 authentication with Microsoft.
Create work items and attach RCA documents. Supports OAuth2, browser login, or PAT tokens.
</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
@ -352,26 +629,9 @@ export default function Integrations() {
/>
</div>
<div className="flex items-center gap-3">
<Button
onClick={() => handleConnect("azuredevops")}
disabled={loading.azuredevops || !configs.azuredevops.baseUrl}
>
{loading.azuredevops ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Connecting...
</>
) : configs.azuredevops.connected ? (
<>
<Check className="w-4 h-4 mr-2" />
Connected
</>
) : (
"Connect with OAuth2"
)}
</Button>
{renderAuthSection("azuredevops")}
<div className="flex items-center gap-3 pt-2">
<Button
variant="outline"
onClick={() => handleTestConnection("azuredevops")}
@ -408,14 +668,12 @@ export default function Integrations() {
</Card>
<div className="p-4 bg-muted/50 rounded-lg space-y-2">
<p className="text-sm font-semibold">How OAuth2 Authentication Works:</p>
<ol className="text-xs text-muted-foreground space-y-1 list-decimal list-inside">
<li>Click "Connect with OAuth2" to open the service's authentication page</li>
<li>Log in with your service credentials in your default browser</li>
<li>Authorize TFTSR to access your account</li>
<li>You'll be automatically redirected back and the connection will be saved</li>
<li>Tokens are encrypted and stored locally in your secure database</li>
</ol>
<p className="text-sm font-semibold">Authentication Method Comparison:</p>
<ul className="text-xs text-muted-foreground space-y-1 list-disc list-inside">
<li><strong>OAuth2:</strong> Most secure, but requires pre-registered app. May not work with enterprise SSO.</li>
<li><strong>Browser Login:</strong> Best for VPN environments. Lets you authenticate off-VPN, extracts session cookies for API use.</li>
<li><strong>Manual Token:</strong> Most reliable fallback. Requires generating API tokens manually from each service.</li>
</ul>
</div>
</div>
);

@ -123,7 +123,7 @@ export default function Ollama() {
Manage local AI models via Ollama for privacy-first inference.
</p>
</div>
<Button variant="outline" onClick={loadData} disabled={isLoading}>
<Button variant="outline" onClick={loadData} disabled={isLoading} className="border-border text-foreground bg-card hover:bg-accent">
<RefreshCw className={`w-4 h-4 mr-2 ${isLoading ? "animate-spin" : ""}`} />
Refresh
</Button>
@ -169,24 +169,16 @@ export default function Ollama() {
{status && !status.installed && installGuide && (
<Card className="border-yellow-500/50">
<CardHeader>
<CardTitle className="text-lg flex items-center gap-2">
<Download className="w-5 h-5 text-yellow-500" />
<CardTitle className="text-lg">
Ollama Not Detected — Installation Required
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<CardContent>
<ol className="space-y-2 list-decimal list-inside">
{installGuide.steps.map((step, i) => (
<li key={i} className="text-sm text-muted-foreground">{step}</li>
))}
</ol>
<Button
variant="outline"
onClick={() => window.open(installGuide.url, "_blank")}
>
<Download className="w-4 h-4 mr-2" />
Download Ollama for {installGuide.platform}
</Button>
</CardContent>
</Card>
)}

View File

@@ -9,6 +9,7 @@ import {
Separator,
} from "@/components/ui";
import { getAuditLogCmd, type AuditEntry } from "@/lib/tauriCommands";
import { useSettingsStore } from "@/stores/settingsStore";
const piiPatterns = [
{ id: "email", label: "Email Addresses", description: "Detect email addresses in logs" },
@@ -22,9 +23,7 @@ const piiPatterns = [
];
export default function Security() {
const [enabledPatterns, setEnabledPatterns] = useState<Record<string, boolean>>(() =>
Object.fromEntries(piiPatterns.map((p) => [p.id, true]))
);
const { pii_enabled_patterns, setPiiPattern } = useSettingsStore();
const [auditEntries, setAuditEntries] = useState<AuditEntry[]>([]);
const [expandedRows, setExpandedRows] = useState<Set<string>>(new Set());
const [isLoading, setIsLoading] = useState(false);
@@ -46,10 +45,6 @@ export default function Security() {
}
};
const togglePattern = (id: string) => {
setEnabledPatterns((prev) => ({ ...prev, [id]: !prev[id] }));
};
const toggleRow = (entryId: string) => {
setExpandedRows((prev) => {
const newSet = new Set(prev);
@@ -92,15 +87,15 @@ export default function Security() {
<button
type="button"
role="switch"
aria-checked={enabledPatterns[pattern.id]}
onClick={() => togglePattern(pattern.id)}
aria-checked={pii_enabled_patterns[pattern.id]}
onClick={() => setPiiPattern(pattern.id, !pii_enabled_patterns[pattern.id])}
className={`relative inline-flex h-6 w-11 items-center rounded-full transition-colors ${
enabledPatterns[pattern.id] ? "bg-blue-500" : "bg-muted"
pii_enabled_patterns[pattern.id] ? "bg-blue-500" : "bg-muted"
}`}
>
<span
className={`inline-block h-5 w-5 rounded-full bg-white transition-transform ${
enabledPatterns[pattern.id] ? "translate-x-5" : "translate-x-0.5"
pii_enabled_patterns[pattern.id] ? "translate-x-5" : "translate-x-0.5"
}`}
/>
</button>
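The toggle wiring above reduces to a pure state update plus a default map. A minimal standalone sketch (the helper names are illustrative; in the app this logic lives inside `useSettingsStore`):

```typescript
// Pattern ids from the Security page; every pattern starts enabled.
const DEFAULT_PII_IDS = [
  "email", "ip_address", "phone", "ssn",
  "credit_card", "hostname", "password", "api_key",
] as const;

type PiiPatterns = Record<string, boolean>;

// Mirrors the store initializer: all patterns default to true.
function defaultPiiPatterns(): PiiPatterns {
  const out: PiiPatterns = {};
  for (const id of DEFAULT_PII_IDS) out[id] = true;
  return out;
}

// Pure version of the setPiiPattern action: returns a new object so React
// re-renders, while every other pattern flag is left untouched.
function setPiiPattern(patterns: PiiPatterns, id: string, enabled: boolean): PiiPatterns {
  return { ...patterns, [id]: enabled };
}
```

Because the update is immutable, persisting it is just a matter of including `pii_enabled_patterns` in the store's `persist` serialization.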

View File

@@ -9,6 +9,8 @@ interface SettingsState extends AppSettings {
setActiveProvider: (name: string) => void;
setTheme: (theme: "light" | "dark") => void;
getActiveProvider: () => ProviderConfig | undefined;
pii_enabled_patterns: Record<string, boolean>;
setPiiPattern: (id: string, enabled: boolean) => void;
}
export const useSettingsStore = create<SettingsState>()(
@@ -35,12 +37,29 @@ export const useSettingsStore = create<SettingsState>()(
})),
setActiveProvider: (name) => set({ active_provider: name }),
setTheme: (theme) => set({ theme }),
pii_enabled_patterns: Object.fromEntries(
["email", "ip_address", "phone", "ssn", "credit_card", "hostname", "password", "api_key"]
.map((id) => [id, true])
) as Record<string, boolean>,
setPiiPattern: (id: string, enabled: boolean) =>
set((state) => ({
pii_enabled_patterns: { ...state.pii_enabled_patterns, [id]: enabled },
})),
getActiveProvider: () => {
const state = get();
return state.ai_providers.find((p) => p.name === state.active_provider)
?? state.ai_providers[0];
},
}),
{ name: "tftsr-settings" }
{
name: "tftsr-settings",
partialize: (state) => ({
...state,
ai_providers: state.ai_providers.map((provider) => ({
...provider,
api_key: "",
})),
}),
}
)
);
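The `partialize` option above decides exactly what reaches `localStorage`. The same transform as a standalone sketch (the interface names here are illustrative, not the shipped types):

```typescript
interface ProviderConfig { name: string; api_key: string; model: string; }
interface PersistableState { theme: string; ai_providers: ProviderConfig[]; }

// Same shape as the partialize above: copy the state, but blank every
// provider's api_key so secrets are never written to localStorage.
function stripApiKeys(state: PersistableState): PersistableState {
  return {
    ...state,
    ai_providers: state.ai_providers.map((p) => ({ ...p, api_key: "" })),
  };
}
```

On rehydration the keys come back as empty strings, so the UI should treat a blank `api_key` as "re-enter your key", not as a stored credential.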

View File

@@ -0,0 +1,25 @@
import { describe, it, expect } from "vitest";
import {
CUSTOM_MODEL_OPTION,
CUSTOM_REST_FORMAT,
CUSTOM_REST_MODELS,
LEGACY_API_FORMAT,
normalizeApiFormat,
} from "@/pages/Settings/AIProviders";
describe("AIProviders Custom REST helpers", () => {
it("maps legacy msi_genai api_format to custom_rest", () => {
expect(normalizeApiFormat(LEGACY_API_FORMAT)).toBe(CUSTOM_REST_FORMAT);
});
it("keeps openai api_format unchanged", () => {
expect(normalizeApiFormat("openai")).toBe("openai");
});
it("contains the guide model list and custom model option sentinel", () => {
expect(CUSTOM_REST_MODELS).toContain("ChatGPT4o");
expect(CUSTOM_REST_MODELS).toContain("VertexGemini");
expect(CUSTOM_REST_MODELS).toContain("Gemini-3_Pro-Preview");
expect(CUSTOM_MODEL_OPTION).toBe("__custom_model__");
});
});

View File

@@ -0,0 +1,29 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag workflow release triggering", () => {
it("creates tags via git push instead of Gitea tag API", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("git push origin \"refs/tags/$NEXT\"");
expect(workflow).not.toContain("POST \"$API/tags\"");
});
it("runs release build jobs after auto-tag succeeds", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("build-linux-amd64:");
expect(workflow).toContain("build-windows-amd64:");
expect(workflow).toContain("build-macos-arm64:");
expect(workflow).toContain("build-linux-arm64:");
expect(workflow).toContain("needs: autotag");
expect(workflow).toContain("TAG=$(curl -s \"$API/tags?limit=50\"");
expect(workflow).toContain("ERROR: Could not resolve release tag from repository tags.");
});
});

View File

@@ -0,0 +1,144 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const root = process.cwd();
const readFile = (rel: string) => readFileSync(path.resolve(root, rel), "utf-8");
// ─── Dockerfiles ─────────────────────────────────────────────────────────────
describe("Dockerfile.linux-amd64", () => {
const df = readFile(".docker/Dockerfile.linux-amd64");
it("is based on the pinned Rust 1.88 slim image", () => {
expect(df).toContain("FROM rust:1.88-slim");
});
it("installs webkit2gtk 4.1 dev package", () => {
expect(df).toContain("libwebkit2gtk-4.1-dev");
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
expect(df).toContain("nodejs");
});
it("pre-adds the x86_64 Linux Rust target", () => {
expect(df).toContain("rustup target add x86_64-unknown-linux-gnu");
});
it("cleans apt lists to keep image lean", () => {
expect(df).toContain("rm -rf /var/lib/apt/lists/*");
});
});
describe("Dockerfile.windows-cross", () => {
const df = readFile(".docker/Dockerfile.windows-cross");
it("is based on the pinned Rust 1.88 slim image", () => {
expect(df).toContain("FROM rust:1.88-slim");
});
it("installs mingw-w64 cross-compiler", () => {
expect(df).toContain("mingw-w64");
});
it("installs nsis for Windows installer bundling", () => {
expect(df).toContain("nsis");
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
});
it("pre-adds the Windows GNU Rust target", () => {
expect(df).toContain("rustup target add x86_64-pc-windows-gnu");
});
it("cleans apt lists to keep image lean", () => {
expect(df).toContain("rm -rf /var/lib/apt/lists/*");
});
});
describe("Dockerfile.linux-arm64", () => {
const df = readFile(".docker/Dockerfile.linux-arm64");
it("is based on Ubuntu 22.04 (Jammy)", () => {
expect(df).toContain("FROM ubuntu:22.04");
});
it("installs aarch64 cross-compiler", () => {
expect(df).toContain("gcc-aarch64-linux-gnu");
expect(df).toContain("g++-aarch64-linux-gnu");
});
it("sets up arm64 multiarch via ports.ubuntu.com", () => {
expect(df).toContain("dpkg --add-architecture arm64");
expect(df).toContain("ports.ubuntu.com/ubuntu-ports");
expect(df).toContain("jammy");
});
it("installs arm64 webkit2gtk dev package", () => {
expect(df).toContain("libwebkit2gtk-4.1-dev:arm64");
});
it("installs Rust 1.88 with arm64 cross-compilation target", () => {
expect(df).toContain("--default-toolchain 1.88.0");
expect(df).toContain("rustup target add aarch64-unknown-linux-gnu");
});
it("adds cargo to PATH via ENV", () => {
expect(df).toContain('ENV PATH="/root/.cargo/bin:${PATH}"');
});
it("installs Node.js 22 via NodeSource", () => {
expect(df).toContain("nodesource.com/setup_22.x");
});
});
// ─── build-images.yml workflow ───────────────────────────────────────────────
describe("build-images.yml workflow", () => {
const wf = readFile(".gitea/workflows/build-images.yml");
it("triggers on changes to .docker/ files on master", () => {
expect(wf).toContain("- master");
expect(wf).toContain("- '.docker/**'");
});
it("supports manual workflow_dispatch trigger", () => {
expect(wf).toContain("workflow_dispatch:");
});
it("does not explicitly mount the Docker socket (act_runner mounts it automatically)", () => {
// act_runner already mounts /var/run/docker.sock; an explicit options: mount
// causes a 'Duplicate mount point' error and must not be present.
expect(wf).not.toContain("-v /var/run/docker.sock:/var/run/docker.sock");
});
it("authenticates to the local Gitea registry before pushing", () => {
expect(wf).toContain("docker login");
expect(wf).toContain("--password-stdin");
expect(wf).toContain("172.0.0.29:3000");
});
it("builds and pushes all three platform images", () => {
expect(wf).toContain("trcaa-linux-amd64:rust1.88-node22");
expect(wf).toContain("trcaa-windows-cross:rust1.88-node22");
expect(wf).toContain("trcaa-linux-arm64:rust1.88-node22");
});
it("uses docker:24-cli image for build jobs", () => {
expect(wf).toContain("docker:24-cli");
});
it("runs all three build jobs on linux-amd64 runner", () => {
const matches = wf.match(/runs-on: linux-amd64/g) ?? [];
expect(matches.length).toBeGreaterThanOrEqual(3);
});
it("uses RELEASE_TOKEN secret for registry auth", () => {
expect(wf).toContain("secrets.RELEASE_TOKEN");
});
});

View File

@@ -0,0 +1,54 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag release cross-platform artifact handling", () => {
it("overrides OpenSSL vendoring for windows-gnu cross builds", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("OPENSSL_NO_VENDOR: \"0\"");
expect(workflow).toContain("OPENSSL_STATIC: \"1\"");
});
it("fails linux uploads when no artifacts are found", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("ERROR: No Linux amd64 artifacts were found to upload.");
expect(workflow).toContain("ERROR: No Linux arm64 artifacts were found to upload.");
expect(workflow).toContain("CI=true npx tauri build");
expect(workflow).toContain("find src-tauri/target/aarch64-unknown-linux-gnu/release/bundle -type f");
expect(workflow).toContain("CC_aarch64_unknown_linux_gnu: aarch64-linux-gnu-gcc");
expect(workflow).toContain("PKG_CONFIG_ALLOW_CROSS: \"1\"");
expect(workflow).toContain("aarch64-unknown-linux-gnu");
});
it("fails windows uploads when no artifacts are found", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain(
"ERROR: No Windows amd64 artifacts were found to upload.",
);
});
it("replaces existing release assets before uploading reruns", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("Deleting existing asset id=$id name=$NAME before upload...");
expect(workflow).toContain("-X DELETE \"$API/releases/$RELEASE_ID/assets/$id\"");
expect(workflow).toContain("UPLOAD_NAME=\"linux-amd64-$NAME\"");
expect(workflow).toContain("UPLOAD_NAME=\"linux-arm64-$NAME\"");
});
it("uses Ubuntu 22.04 with ports mirror for arm64 cross-compile", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("ubuntu:22.04");
expect(workflow).toContain("ports.ubuntu.com/ubuntu-ports");
expect(workflow).toContain("jammy");
});
});

View File

@@ -0,0 +1,23 @@
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import path from "node:path";
const autoTagWorkflowPath = path.resolve(
process.cwd(),
".gitea/workflows/auto-tag.yml",
);
describe("auto-tag release macOS bundle path", () => {
it("does not reference the legacy TFTSR.app bundle name", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).not.toContain("/bundle/macos/TFTSR.app");
});
it("resolves the macOS .app bundle dynamically", () => {
const workflow = readFileSync(autoTagWorkflowPath, "utf-8");
expect(workflow).toContain("APP=$(find");
expect(workflow).toContain("-name \"*.app\"");
});
});

View File

@@ -9,8 +9,11 @@ const mockProvider: ProviderConfig = {
model: "gpt-4o",
};
const DEFAULT_PII_PATTERNS = ["email", "ip_address", "phone", "ssn", "credit_card", "hostname", "password", "api_key"];
describe("Settings Store", () => {
beforeEach(() => {
localStorage.clear();
useSettingsStore.setState({
theme: "dark",
ai_providers: [],
@@ -18,6 +21,7 @@ describe("Settings Store", () => {
default_provider: "ollama",
default_model: "llama3.2:3b",
ollama_url: "http://localhost:11434",
pii_enabled_patterns: Object.fromEntries(DEFAULT_PII_PATTERNS.map((id) => [id, true])),
});
});
@@ -43,4 +47,62 @@ describe("Settings Store", () => {
useSettingsStore.getState().setTheme("light");
expect(useSettingsStore.getState().theme).toBe("light");
});
it("does not persist API keys to localStorage", () => {
useSettingsStore.getState().addProvider(mockProvider);
const raw = localStorage.getItem("tftsr-settings");
expect(raw).toBeTruthy();
expect(raw).not.toContain("sk-test-key");
});
});
describe("Settings Store — PII patterns", () => {
beforeEach(() => {
localStorage.clear();
useSettingsStore.setState({
theme: "dark",
ai_providers: [],
active_provider: undefined,
default_provider: "ollama",
default_model: "llama3.2:3b",
ollama_url: "http://localhost:11434",
pii_enabled_patterns: Object.fromEntries(DEFAULT_PII_PATTERNS.map((id) => [id, true])),
});
});
it("initializes all 8 PII patterns as enabled by default", () => {
const patterns = useSettingsStore.getState().pii_enabled_patterns;
for (const id of DEFAULT_PII_PATTERNS) {
expect(patterns[id]).toBe(true);
}
});
it("setPiiPattern disables a single pattern", () => {
useSettingsStore.getState().setPiiPattern("email", false);
expect(useSettingsStore.getState().pii_enabled_patterns["email"]).toBe(false);
});
it("setPiiPattern does not affect other patterns", () => {
useSettingsStore.getState().setPiiPattern("email", false);
for (const id of DEFAULT_PII_PATTERNS.filter((id) => id !== "email")) {
expect(useSettingsStore.getState().pii_enabled_patterns[id]).toBe(true);
}
});
it("setPiiPattern re-enables a disabled pattern", () => {
useSettingsStore.getState().setPiiPattern("ssn", false);
useSettingsStore.getState().setPiiPattern("ssn", true);
expect(useSettingsStore.getState().pii_enabled_patterns["ssn"]).toBe(true);
});
it("pii_enabled_patterns is persisted to localStorage", () => {
useSettingsStore.getState().setPiiPattern("api_key", false);
const raw = localStorage.getItem("tftsr-settings");
expect(raw).toBeTruthy();
// Zustand persist wraps state in { state: {...}, version: ... }
const parsed = JSON.parse(raw!);
const stored = parsed.state ?? parsed;
expect(stored.pii_enabled_patterns.api_key).toBe(false);
expect(stored.pii_enabled_patterns.email).toBe(true);
});
});
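The envelope the persistence test parses can be unwrapped with a tiny helper (illustrative only; the test above inlines the same two lines):

```typescript
// zustand's persist middleware writes { state: {...}, version: n } under the
// configured storage key; the ?? fallback also tolerates raw, un-wrapped state.
function unwrapPersisted<T>(raw: string): T {
  const parsed = JSON.parse(raw);
  return (parsed.state ?? parsed) as T;
}
```

Centralizing the unwrap keeps assertions stable if the persist envelope ever gains fields.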

View File

@@ -0,0 +1,56 @@
# Fix: build-linux-arm64 — Switch to Ubuntu 22.04 with ports mirror
## Description
The `build-linux-arm64` CI job failed repeatedly with
`E: Unable to correct problems, you have held broken packages` during the
Install dependencies step. Root cause: `rust:1.88-slim` (Debian Bookworm) uses a single
mirror for all architectures. When both `[arch=amd64]` and `[arch=arm64]` entries point at
the same Debian repo, apt's dependency resolver hits unavoidable conflicts — the `binary-all`
package index is duplicated and certain `-dev` package pairs cannot be co-installed because
they lack `Multi-Arch: same`. This is a structural Debian single-mirror multiarch limitation
that cannot be fixed by tweaking `sources.list`.
Ubuntu 22.04 solves this by routing arm64 through a separate mirror:
`ports.ubuntu.com/ubuntu-ports`. amd64 and arm64 packages come from entirely different repos,
eliminating all cross-arch index overlaps and resolution conflicts.
## Acceptance Criteria
- `build-linux-arm64` Install dependencies step completes without apt errors
- `ubuntu:22.04` is the container image for the arm64 job
- Ubuntu's `ports.ubuntu.com/ubuntu-ports` is used for arm64 packages
- `libayatana-appindicator3-dev:arm64` is removed (no tray icon in this app)
- Rust is installed via `rustup` (not pre-installed in Ubuntu base)
- All 51 frontend tests pass
- YAML is syntactically valid
## Work Implemented
### `.gitea/workflows/auto-tag.yml`
- **Container**: `rust:1.88-slim` → `ubuntu:22.04` for `build-linux-arm64` job
- **Install dependencies step**: Full replacement
- Step 1: Host tools + aarch64 cross-compiler (amd64 packages, installed before multiarch registration)
- Step 2: Register arm64 architecture; `sed` existing `sources.list` entries to `[arch=amd64]`; add `arm64-ports.list` pointing at `ports.ubuntu.com/ubuntu-ports jammy`
- Step 3: ARM64 dev libs (`libwebkit2gtk-4.1-dev`, `libssl-dev`, `libgtk-3-dev`, `librsvg2-dev`) — `libayatana-appindicator3-dev:arm64` removed
- Step 4: Node.js via NodeSource
- Step 5: Rust 1.88.0 via `rustup --no-modify-path`; `$HOME/.cargo/bin` appended to `$GITHUB_PATH`
- **Build step**: Added `source "$HOME/.cargo/env"` as first line (belt-and-suspenders for Rust PATH)
### `tests/unit/releaseWorkflowCrossPlatformArtifacts.test.ts`
- Added new test: `"uses Ubuntu 22.04 with ports mirror for arm64 cross-compile"` — asserts workflow contains `ubuntu:22.04`, `ports.ubuntu.com/ubuntu-ports`, and `jammy`
- All previously passing assertions continue to pass (build step env vars and upload paths unchanged)
### `docs/wiki/CICD-Pipeline.md`
- `build-linux-arm64` job entry now mentions Ubuntu 22.04 + ports mirror
- New Known Issue entry: **Debian Multiarch Breaks arm64 Cross-Compile** — documents the root cause and the Ubuntu 22.04 fix for future reference
## Testing Needed
- [ ] YAML validation: `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))" && echo OK` — **PASSED**
- [ ] Frontend tests: `npm run test:run` — **51/51 PASSED** (50 existing + 1 new)
- [ ] CI integration: Push branch → merge PR → observe `build-linux-arm64` Install dependencies step completes without `held broken packages` error
- [ ] Verify arm64 `.deb`, `.rpm`, `.AppImage` artifacts are uploaded to the Gitea release

View File

@@ -0,0 +1,122 @@
# Ticket Summary — UI Fixes + Ollama Bundling + Theme Toggle
**Branch**: `feat/ui-fixes-ollama-bundle-theme`
---
## Description
Multiple UI issues were identified and resolved following the arm64 build stabilization:
- `custom_rest` provider showed a disabled model input instead of the live dropdown already present lower in the form
- Auth Header Name auto-filled with an internal vendor-specific key name on format selection
- "User ID (CORE ID)" label and placeholder exposed internal organizational terminology
- Refresh buttons on the Ollama and Dashboard pages had near-zero contrast against dark card backgrounds
- PII detection toggles in Security settings silently reset to all-enabled on every app restart (no persistence)
- Ollama required manual installation; no offline install path existed
- No light/dark theme toggle UI existed despite the infrastructure already being wired up
Additionally, a new `install_ollama_from_bundle` Tauri command allows the app to copy a bundled Ollama binary to the system install path, enabling offline-first deployment. CI was updated to download the appropriate Ollama binary for each platform during the release build.
---
## Acceptance Criteria
- [ ] **Custom REST model**: Selecting Type=Custom + API Format=Custom REST causes the top-level Model row to disappear; the dropdown at the bottom is visible and populated with all models
- [ ] **Auth Header**: Field is blank by default when Custom REST format is selected (no internal values)
- [ ] **User ID label**: Reads "Email Address" with placeholder `user@example.com` and a generic description
- [ ] **Auth Header description**: No longer references internal key name examples
- [ ] **Refresh buttons**: Visually distinct (border + background) against dark card backgrounds on Dashboard and Ollama pages
- [ ] **PII toggles**: Toggling patterns off, navigating away, and returning preserves the disabled state across app restarts
- [ ] **Theme toggle**: Sun/Moon icon button in the sidebar footer switches between light and dark themes; works when sidebar is collapsed
- [ ] **Install Ollama (Offline)**: Button appears in the "Ollama Not Detected" card; clicking it copies the bundled binary and refreshes status
- [ ] **CI**: Each platform build job downloads the correct Ollama binary before `tauri build` and places it in `src-tauri/resources/ollama/`
- [ ] `npx tsc --noEmit` — zero errors
- [ ] `npm run test:run` — 51/51 tests pass
- [ ] `cargo check` — zero errors
- [ ] `cargo clippy -- -D warnings` — zero warnings
- [ ] `python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))"` — YAML valid
---
## Work Implemented
### Phase 1 — Frontend (6 files)
**`src/pages/Settings/AIProviders.tsx`**
- Removed the disabled Model `<Input>` shown when Custom REST is active; the grid row is now hidden via conditional render — the dropdown further down the form handles model selection for this format
- Removed `custom_auth_header: "x-msi-genai-api-key"` prefill on format switch; field now starts empty
- Replaced example in Auth Header description from internal key name to generic `"x-api-key"`
- Renamed "User ID (CORE ID)" → "Email Address"; updated placeholder from `your.name@motorolasolutions.com` → `user@example.com`; removed Motorola-specific description text
**`src/pages/Dashboard/index.tsx`**
- Added `className="border-border text-foreground bg-card hover:bg-accent"` to Refresh `<Button>` for contrast against dark backgrounds
**`src/pages/Settings/Ollama.tsx`**
- Added same contrast classes to Refresh button
- Added `installOllamaFromBundleCmd` import
- Added `isInstallingBundle` state + `handleInstallFromBundle` async handler
- Added "Install Ollama (Offline)" primary `<Button>` alongside the existing "Download Ollama" link button in the "Ollama Not Detected" card
**`src/stores/settingsStore.ts`**
- Added `pii_enabled_patterns: Record<string, boolean>` field to `SettingsState` interface and store initializer (defaults all 8 patterns to `true`)
- Added `setPiiPattern(id, enabled)` action; both are included in the `persist` serialization so state survives app restarts
**`src/pages/Settings/Security.tsx`**
- Removed local `enabledPatterns` / `setEnabledPatterns` state and `togglePattern` function
- Added `useSettingsStore` import; reads `pii_enabled_patterns` / `setPiiPattern` from the persisted store
- Toggle button uses `setPiiPattern` directly on click
**`src/App.tsx`**
- Added `Sun`, `Moon` to lucide-react imports
- Extracted `setTheme` from `useSettingsStore` alongside `theme`
- Replaced static version `<div>` in sidebar footer with a flex row containing the version string and a Sun/Moon icon button; button is always visible even when sidebar is collapsed
### Phase 2 — Backend (4 files)
**`src-tauri/src/commands/system.rs`**
- Added `install_ollama_from_bundle(app: AppHandle) → Result<String, String>` command
- Resolves bundled binary via `app.path().resource_dir()`, copies to `/usr/local/bin/ollama` (Unix) or `%LOCALAPPDATA%\Programs\Ollama\ollama.exe` (Windows), sets 0o755 permissions on Unix
- Added `use tauri::Manager` import required by `app.path()`
**`src-tauri/src/lib.rs`**
- Registered `commands::system::install_ollama_from_bundle` in `tauri::generate_handler![]`
**`src/lib/tauriCommands.ts`**
- Added `installOllamaFromBundleCmd` typed wrapper: `() => invoke<string>("install_ollama_from_bundle")`
**`src-tauri/tauri.conf.json`**
- Changed `"resources": []` → `"resources": ["resources/ollama/*"]`
- Created `src-tauri/resources/ollama/.gitkeep` placeholder so Tauri's glob doesn't fail on builds without a bundled binary
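The frontend side of this command can be sketched as a handler that takes the IPC bridge as a parameter (a hypothetical headless version; the shipped handler calls `installOllamaFromBundleCmd` and component state setters directly):

```typescript
type InvokeFn = (cmd: string) => Promise<string>;

// Busy flag on, invoke the Tauri command, re-detect Ollama, busy flag off.
// Returns the install path reported by the backend, or null on failure.
async function handleInstallFromBundle(
  invoke: InvokeFn,
  setBusy: (busy: boolean) => void,
  refreshStatus: () => Promise<void>,
): Promise<string | null> {
  setBusy(true);
  try {
    const installedPath = await invoke("install_ollama_from_bundle");
    await refreshStatus(); // status card should flip once the binary is copied
    return installedPath;
  } catch {
    return null; // e.g. missing bundled binary or insufficient privileges
  } finally {
    setBusy(false);
  }
}
```

Injecting `invoke` keeps the handler testable without a running Tauri shell.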
### Phase 3 — CI + Docs (3 files)
**`.gitea/workflows/auto-tag.yml`**
- Added "Download Ollama" step to `build-linux-amd64`: downloads `ollama-linux-amd64.tgz`, extracts binary to `src-tauri/resources/ollama/ollama`
- Added "Download Ollama" step to `build-windows-amd64`: downloads `ollama-windows-amd64.zip`, extracts `ollama.exe`; added `unzip` to the Install dependencies step
- Added "Download Ollama" step to `build-macos-arm64`: downloads `ollama-darwin` universal binary directly
- Added "Download Ollama" step to `build-linux-arm64`: downloads `ollama-linux-arm64.tgz`, extracts binary
**`docs/wiki/IPC-Commands.md`**
- Added `install_ollama_from_bundle` entry under System/Ollama Commands section documenting parameters, return value, platform-specific install paths, and privilege requirement note
---
## Testing Needed
### Automated
```bash
npx tsc --noEmit # TS: zero errors
npm run test:run # Vitest: 51/51 pass
cargo check --manifest-path src-tauri/Cargo.toml # Rust: zero errors
cargo clippy --manifest-path src-tauri/Cargo.toml -- -D warnings # Clippy: zero warnings
python3 -c "import yaml; yaml.safe_load(open('.gitea/workflows/auto-tag.yml'))" && echo OK
```
### Manual
1. **Custom REST model dropdown**: Settings → AI Providers → Add Provider → Type=Custom → API Format=Custom REST — the top Model row should disappear; the dropdown at the bottom should be visible and populated with all 19 models. Auth Header Name should be empty.
2. **Label rename**: Confirm "Email Address" label, `user@example.com` placeholder, no Motorola references.
3. **PII persistence**: Security page → toggle off "Email Addresses" and "IP Addresses" → navigate away → return → both should still be off. Restart the app → toggles should remain in the saved state.
4. **Refresh button contrast**: Dashboard and Ollama pages → confirm Refresh button border is visible on dark background.
5. **Theme toggle**: Sidebar footer → click Sun/Moon icon → theme should switch. Collapse sidebar → icon should still be accessible.
6. **Install Ollama (Offline)**: On a machine without Ollama, go to Settings → Ollama → "Ollama Not Detected" card should show "Install Ollama (Offline)" button. (Full test requires a release build with the bundled binary from CI.)

View File

@@ -17,7 +17,7 @@
"noFallthroughCasesInSwitch": true,
"baseUrl": ".",
"paths": { "@/*": ["src/*"] },
"types": ["vitest/globals"]
"types": ["vitest/globals", "@testing-library/jest-dom"]
},
"include": ["src", "tests/unit"],
"references": [{ "path": "./tsconfig.node.json" }]