KohakuHub Deployment Summary
Description
Deployed KohakuHub (a self-hosted HuggingFace-compatible model hub) on the existing Docker infrastructure at 172.0.0.29, with NGINX reverse proxy at 172.0.0.30 and CoreDNS at 172.0.0.29.
Stack deployed:
| Service | Image | Port(s) | Purpose |
|---|---|---|---|
| hub-ui | nginx:alpine | 28080:80 | Vue.js frontend SPA |
| hub-api | built from source | 127.0.0.1:48888:48888 | Python/FastAPI backend |
| minio | quay.io/minio/minio | 29001:9000, 29000:29000 | S3-compatible storage |
| lakefs | built from ./docker/lakefs | 127.0.0.1:28000:28000 | Git-style versioning |
| kohakuhub-postgres | postgres:15 | 127.0.0.1:25432:5432 | Metadata database |
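The port layout in the table maps directly onto compose port bindings. A minimal sketch of how this might look in `docker-compose.yml` (service names and ports from the table; images, environment, and volumes omitted — this is not the literal production file):

```yaml
services:
  hub-ui:
    image: nginx:alpine
    ports:
      - "28080:80"                 # externally reachable web UI
  hub-api:
    ports:
      - "127.0.0.1:48888:48888"    # loopback-only; reached via the UI proxy
  minio:
    image: quay.io/minio/minio
    ports:
      - "29001:9000"               # S3 API
      - "29000:29000"              # MinIO console
  lakefs:
    ports:
      - "127.0.0.1:28000:28000"    # loopback-only
  kohakuhub-postgres:
    image: postgres:15
    ports:
      - "127.0.0.1:25432:5432"     # loopback-only
```

Binding the API, LakeFS, and Postgres ports to `127.0.0.1` keeps them off the LAN; only the UI and MinIO ports are reachable from the NGINX proxy host.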
FQDNs:
- `ai-hub.tftsr.com` → NGINX → `172.0.0.29:28080` (web UI + API proxy)
- `ai-hub-files.tftsr.com` → NGINX → `172.0.0.29:29001` (MinIO S3 file access)
All data stored locally under /docker_mounts/kohakuhub/.
Acceptance Criteria
- All 5 containers start and remain healthy
- `https://ai-hub.tftsr.com` serves the KohakuHub Vue.js SPA
- `https://ai-hub-files.tftsr.com` proxies to the MinIO S3 API
- DNS resolves both FQDNs to `172.0.0.30` (NGINX proxy)
- hub-api connects to postgres and runs migrations on startup
- MinIO `hub-storage` bucket is created automatically
- LakeFS initializes with the S3 blockstore backend
- No port conflicts with existing stack services
- Postgres container uses a unique name (`kohakuhub-postgres`) to avoid conflict with `gogs_postgres_db`
- API and LakeFS ports bound to `127.0.0.1` only (not externally exposed)
Work Implemented
Phase 1 — Docker Host (172.0.0.29)
- Downloaded KohakuHub source via GitHub archive tarball (git not installed on host) to `/docker_mounts/kohakuhub/`
- Generated secrets using `openssl rand -hex 32`/`-hex 16` for SESSION_SECRET, ADMIN_TOKEN, DB_KEY, LAKEFS_KEY, DB_PASS
- Created `/docker_mounts/kohakuhub/.env` with `UID=1000`/`GID=1000` for LakeFS container user mapping
- Created `/docker_mounts/kohakuhub/docker-compose.yml` with production configuration:
  - Absolute volume paths under `/docker_mounts/kohakuhub/`
  - Secrets substituted in-place
  - Postgres container renamed to `kohakuhub-postgres`
  - API/LakeFS/Postgres bound to `127.0.0.1` only
- Built Vue.js frontends using `docker run node:22-alpine` (Node.js not installed on host):
  - `src/kohaku-hub-ui/dist/` — main SPA
  - `src/kohaku-hub-admin/dist/` — admin portal
- Started stack with `docker compose up -d --build`; hub-api recovered from an initial postgres race condition on its own
- Verified all containers `Up`, API docs at `:48888/docs` returning HTTP 200, `hub-storage` bucket auto-created
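The secret-generation and `.env` steps above can be sketched as follows. This is illustrative, not the exact commands run: the split of `-hex 32` vs `-hex 16` per variable is an assumption, and a temp directory stands in for `/docker_mounts/kohakuhub/`:

```shell
# Generate secrets (assumed: 32-byte hex for session/admin, 16-byte for keys/passwords).
SESSION_SECRET=$(openssl rand -hex 32)
ADMIN_TOKEN=$(openssl rand -hex 32)
DB_KEY=$(openssl rand -hex 16)
LAKEFS_KEY=$(openssl rand -hex 16)
DB_PASS=$(openssl rand -hex 16)

# Write the .env file (illustrative path; production used /docker_mounts/kohakuhub/.env).
ENV_FILE=$(mktemp -d)/.env
cat > "$ENV_FILE" <<EOF
UID=1000
GID=1000
SESSION_SECRET=$SESSION_SECRET
ADMIN_TOKEN=$ADMIN_TOKEN
DB_KEY=$DB_KEY
LAKEFS_KEY=$LAKEFS_KEY
DB_PASS=$DB_PASS
EOF

echo "wrote $ENV_FILE"
```

The `UID`/`GID` lines matter for the LakeFS container user mapping; the rest are consumed by compose variable substitution.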
Phase 2 — NGINX Proxy (172.0.0.30)
- Created `/etc/nginx/conf.d/ai-hub.conf` — proxies `ai-hub.tftsr.com` → `172.0.0.29:28080` with `client_max_body_size 100G`, 3600s timeouts, LetsEncrypt SSL
- Created `/etc/nginx/conf.d/ai-hub-files.conf` — proxies `ai-hub-files.tftsr.com` → `172.0.0.29:29001` with the same settings
- Resolved a write issue: initial writes via `sudo tee` with a piped password produced empty files (heredoc stdin conflict); corrected by writing to `/tmp` then `sudo cp`
- Validated and reloaded NGINX: `nginx -t` passes (pre-existing `ssl_stapling` warnings are environment-wide, unrelated to these configs)
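From the settings listed, `ai-hub.conf` would look roughly like this (a sketch — certificate paths and proxy headers are typical placeholders, not copied from the actual file; `ai-hub-files.conf` is the same shape pointed at `:29001`):

```nginx
server {
    listen 443 ssl;
    server_name ai-hub.tftsr.com;

    # Placeholder cert paths; production uses the LetsEncrypt-issued certs.
    ssl_certificate     /etc/letsencrypt/live/ai-hub.tftsr.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ai-hub.tftsr.com/privkey.pem;

    client_max_body_size 100G;    # large model uploads
    proxy_read_timeout   3600s;
    proxy_send_timeout   3600s;

    location / {
        proxy_pass http://172.0.0.29:28080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The long timeouts and the 100G body limit exist because model pushes can be very large and slow; defaults (60s, 1M) would abort them.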
Phase 3 — CoreDNS (172.0.0.29)
- Updated `/docker_mounts/coredns/tftsr.com.db`:
  - SOA serial: `1718910701` → `2026040501`
  - Appended: `ai-hub.tftsr.com. 3600 IN A 172.0.0.30`
  - Appended: `ai-hub-files.tftsr.com. 3600 IN A 172.0.0.30`
- Reloaded CoreDNS via `docker kill --signal=SIGHUP coredns`
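The zone change amounts to a serial bump plus two A records. The relevant portion of `tftsr.com.db` after the edit would look like this (SOA fields other than the serial are illustrative — the real nameserver/contact values were not changed):

```
tftsr.com.  3600 IN SOA ns1.tftsr.com. admin.tftsr.com. (
                2026040501  ; serial (was 1718910701)
                7200 3600 1209600 3600 )

ai-hub.tftsr.com.        3600 IN A 172.0.0.30
ai-hub-files.tftsr.com.  3600 IN A 172.0.0.30
```

Bumping the serial is required for the reload to take effect on any secondaries; CoreDNS's `file` plugin picks up the new zone on SIGHUP.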
Testing Needed
Functional
- Browser test: Navigate to `https://ai-hub.tftsr.com` and verify login page, registration, and model browsing work
- Admin portal: Navigate to `https://ai-hub.tftsr.com/admin` and verify the admin dashboard is accessible with the ADMIN_TOKEN
- Model upload: Upload a test model file and verify it lands in MinIO under `hub-storage`
- Git clone: Clone a model repo via `git clone https://ai-hub.tftsr.com/<user>/<repo>.git` and verify Git LFS works
- File download: Verify `https://ai-hub-files.tftsr.com` properly serves file download redirects
Infrastructure
- Restart persistence: `docker compose down && docker compose up -d` on 172.0.0.29 — verify all services restart cleanly and data persists
- LakeFS credentials: Check `/docker_mounts/kohakuhub/hub-meta/hub-api/credentials.env` for generated LakeFS credentials
- Admin token recovery: Run `docker logs hub-api | grep -i admin` to retrieve the admin token if needed
- MinIO console: Verify the MinIO console is accessible at `http://172.0.0.29:29000` (internal only)
- Postgres connectivity: `docker exec kohakuhub-postgres psql -U kohakuhub -c "\dt"` to confirm schema migration ran
Notes
- Frontend builds used temporary `node:22-alpine` Docker containers since Node.js is not installed on `172.0.0.29`. If redeployment is needed, re-run the build steps or install Node.js on the host.
- The `ssl_stapling` warnings in NGINX are pre-existing across all vhosts on `172.0.0.30` and do not affect functionality.
- MinIO credentials (`minioadmin`/`minioadmin`) are the defaults. Consider rotating via the MinIO console for production hardening.
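For the credential rotation mentioned above, an alternative to the console is setting MinIO's documented root-credential environment variables in the compose file and recreating the container — sketched here with placeholder values:

```yaml
services:
  minio:
    image: quay.io/minio/minio
    environment:
      MINIO_ROOT_USER: hubadmin                        # placeholder, pick your own
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}      # e.g. openssl rand -hex 16, kept in .env
```

Note that anything holding the old S3 credentials (hub-api, the LakeFS blockstore config) must be updated to match, or uploads will start failing with access-denied errors.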