// By default, Node.js keeps the subprocess alive while it has a `message` or `disconnect` listener.
// We replicate the same logic for the events that we proxy.
// This ensures the subprocess is kept alive while `getOneMessage()` and `getEachMessage()` are ongoing.
// This is not a problem with `sendMessage()` since Node.js handles that method automatically.
// We do not use `anyProcess.channel.ref()` since this would prevent the automatic `.channel.refCounted()` Node.js is doing.
// We keep a reference to `anyProcess.channel` since it might be `null` while `getOneMessage()` or `getEachMessage()` is still processing debounced messages.
// See https://github.com/nodejs/node/blob/2aaeaa863c35befa2ebaa98fb7737ec84df4d8e9/lib/internal/child_process.js#L547
export const addReference = (channel, reference) => {
	if (reference) {
		addReferenceCount(channel);
	}
};

const addReferenceCount = channel => {
	channel.refCounted();
};

export const removeReference = (channel, reference) => {
	if (reference) {
		removeReferenceCount(channel);
	}
};

const removeReferenceCount = channel => {
	channel.unrefCounted();
};
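A minimal standalone sketch of how `addReference()` and `removeReference()` pair up. The stub channel below is hypothetical: it merely counts `refCounted()`/`unrefCounted()` calls, standing in for the internal Node.js IPC channel, and the helpers are re-declared locally so the sketch runs on its own:

```javascript
// Hypothetical stub for the internal Node.js IPC channel: it only counts calls.
const makeStubChannel = () => {
	let refCount = 0;
	return {
		refCounted: () => {
			refCount += 1;
		},
		unrefCounted: () => {
			refCount -= 1;
		},
		getRefCount: () => refCount,
	};
};

// Local re-declarations of the helpers above, so this sketch is self-contained.
const addReference = (channel, reference) => {
	if (reference) {
		channel.refCounted();
	}
};

const removeReference = (channel, reference) => {
	if (reference) {
		channel.unrefCounted();
	}
};

const channel = makeStubChannel();

// With `reference: true`, e.g. while `getOneMessage()` is pending,
// the count goes up and the subprocess is kept alive.
addReference(channel, true);
console.log(channel.getRefCount()); // 1

// Once the message has been received, the reference is dropped again.
removeReference(channel, true);
console.log(channel.getRefCount()); // 0

// With `reference: false`, both calls are no-ops.
addReference(channel, false);
console.log(channel.getRefCount()); // 0
```

Because each `addReference()` is balanced by a `removeReference()` with the same `reference` value, the count always returns to its starting point once the pending reads complete.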

// To proxy events, we set up some global listeners on the `message` and `disconnect` events.
// Those should not keep the subprocess alive, so we remove the automatic counting that Node.js is doing.
// See https://github.com/nodejs/node/blob/1b965270a9c273d4cf70e8808e9d28b9ada7844f/lib/child_process.js#L180
export const undoAddedReferences = (channel, isSubprocess) => {
	if (isSubprocess) {
		removeReferenceCount(channel);
		removeReferenceCount(channel);
	}
};

// Reverse it during `disconnect`
export const redoAddedReferences = (channel, isSubprocess) => {
	if (isSubprocess) {
		addReferenceCount(channel);
		addReferenceCount(channel);
	}
};
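The `undoAddedReferences()`/`redoAddedReferences()` pair can be sketched the same way: in the subprocess, Node.js has already counted the two global `message` and `disconnect` listeners, so the undo step subtracts two, and the redo step restores them during `disconnect`. The counting stub below is hypothetical, and the helpers are re-declared locally so the sketch is self-contained:

```javascript
// Hypothetical counting stub for the internal Node.js IPC channel.
const makeStubChannel = () => {
	let refCount = 0;
	return {
		refCounted: () => {
			refCount += 1;
		},
		unrefCounted: () => {
			refCount -= 1;
		},
		getRefCount: () => refCount,
	};
};

// Local re-declarations, mirroring the exported functions above.
const undoAddedReferences = (channel, isSubprocess) => {
	if (isSubprocess) {
		channel.unrefCounted();
		channel.unrefCounted();
	}
};

const redoAddedReferences = (channel, isSubprocess) => {
	if (isSubprocess) {
		channel.refCounted();
		channel.refCounted();
	}
};

const channel = makeStubChannel();

// Node.js automatically counted our two global listeners (`message` and `disconnect`).
channel.refCounted();
channel.refCounted();
console.log(channel.getRefCount()); // 2

// Those proxy listeners should not keep the subprocess alive: subtract both.
undoAddedReferences(channel, true);
console.log(channel.getRefCount()); // 0

// During `disconnect`, restore the counts so removal stays balanced.
redoAddedReferences(channel, true);
console.log(channel.getRefCount()); // 2
```

In the parent process (`isSubprocess: false`) both functions are no-ops, since only the subprocess side applies this correction.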