# encoding-sniffer [](https://github.com/fb55/encoding-sniffer/actions/workflows/nodejs-test.yml)
An implementation of the HTML encoding sniffing algorithm, with stream support.
This module wraps around [iconv-lite](https://github.com/ashtuchkin/iconv-lite)
to make decoding buffers and streams incredibly easy.

## Features

- Support for streams
- Support for XML encoding types, including UTF-16 prefixes and
  `<?xml encoding="...">`
- Allows decoding streams and buffers with a single function call

|
## Installation

```bash
npm install encoding-sniffer
```

|
## Usage

```js
import { DecodeStream, getEncoding, decodeBuffer } from "encoding-sniffer";

/**
 * All functions accept an optional options object.
 *
 * Available options are (with default values):
 */
const options = {
    /**
     * The maximum number of bytes to sniff. Defaults to `1024`.
     */
    maxBytes: 1024,
    /**
     * The encoding specified by the user. If set, this will only be overridden
     * by a Byte Order Mark (BOM).
     */
    userEncoding: undefined,
    /**
     * The encoding specified by the transport layer. If set, this will only be
     * overridden by a Byte Order Mark (BOM) or the user encoding.
     */
    transportLayerEncodingLabel: undefined,
    /**
     * The default encoding to use, if no encoding can be detected.
     *
     * Defaults to `"windows-1252"`.
     */
    defaultEncoding: "windows-1252",
};

// Use the `DecodeStream` transform stream to automatically decode
// the contents of a stream as they are read
const decodeStream = new DecodeStream(options);

// Or, use the `getEncoding` function to detect the encoding of a buffer
const encoding = getEncoding(buffer, options);

// Use the `decodeBuffer` function to decode the contents of a buffer
const decodedBuffer = decodeBuffer(buffer, options);
```

## License

This project is licensed under the MIT License. See the [LICENSE](/LICENSE) file
for more information.