Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git, synced 2026-04-23 10:33:51 +00:00
Latest commit:

- Display hop number (0, 1, 2...) with a network icon for each triple
- Show a multi-hop path badge for paths with length > 1
- Add a "Multi-hop enabled" badge in the Retrieved Knowledge header
- Implement collapsible thinking steps with proper chevron rotation
- Parse `<think>` tags from NVIDIA reasoning content
- Reduce console logging (sample only, not the full dataset)
- Show path length with an amber lightning icon

This provides visual feedback about multi-hop reasoning paths and makes the LLM's chain-of-thought process transparent.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
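The commit mentions parsing `<think>` tags out of the model's reasoning content so the chain-of-thought can be shown separately from the final answer. A minimal sketch of that idea is below; the function name and exact tag-handling behavior are assumptions for illustration, not the repository's actual code.

```python
import re

# Matches a <think>...</think> block; DOTALL lets it span newlines.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_think(content: str) -> tuple[str, str]:
    """Split raw model output into (thinking, answer).

    Assumes the chain-of-thought is wrapped in <think>...</think>,
    as the commit message describes. If no tag is present, the whole
    string is treated as the answer. Illustrative sketch only.
    """
    m = THINK_RE.search(content)
    if not m:
        return "", content.strip()
    thinking = m.group(1).strip()
    # The answer is everything outside the <think> block.
    answer = (content[:m.start()] + content[m.end():]).strip()
    return thinking, answer
```

In a UI, the first element of the returned pair would feed the collapsible "thinking steps" panel and the second would be rendered as the visible response.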
| Playbook |
|---|
| comfy-ui |
| connect-to-your-spark |
| connect-two-sparks |
| cuda-x-data-science |
| dgx-dashboard |
| flux-finetuning |
| jax |
| llama-factory |
| multi-agent-chatbot |
| multi-modal-inference |
| nccl |
| nemo-fine-tune |
| nim-llm |
| nvfp4-quantization |
| ollama |
| open-webui |
| pytorch-fine-tune |
| rag-ai-workbench |
| speculative-decoding |
| tailscale |
| trt-llm |
| txt2kg |
| unsloth |
| vibe-coding |
| vllm |
| vlm-finetuning |
| vscode |
| vss |