- Change default API port from 8080 to 8081
- Increase chat API timeout to 120 seconds (LLMs can be slow on ARM)
- Use a custom fetch()-based chat call with AbortController for timeout control (sketch below)
- Fix wget/curl timeout for RPCD backend
Resolves "XHR request timed out" errors when using chat with TinyLlama.
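A minimal sketch of that timeout wrapper, assuming an OpenAI-style
/v1/chat/completions endpoint; the helper name, base URL, and payload
shape are illustrative, not the shipped code:

    // Sketch only: abort the chat POST after 120 s (names are illustrative).
    function chatCompletion(baseUrl, payload) {
        var controller = new AbortController();
        var timer = setTimeout(function() { controller.abort(); }, 120000);

        return fetch(baseUrl + '/v1/chat/completions', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload),
            signal: controller.signal
        }).then(function(res) {
            if (!res.ok)
                throw new Error('HTTP ' + res.status);
            return res.json();
        }).finally(function() { clearTimeout(timer); });
    }

An abort surfaces as an AbortError rejection, which the view can map to
a clearer message than the generic "XHR request timed out".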
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- is_running() now checks LXC with lxc-info before Docker/Podman
- get_status() calculates uptime from LXC container PID
- Detection order: LXC -> Podman -> Docker -> native process (sketch below)
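A rough sketch of that order; LXC_NAME, CONTAINER_NAME, and the
local-ai process name are placeholders, not the shipped script:

    # Sketch of the detection order (names are placeholders).
    is_running() {
        # 1. LXC: lxc-info prints "State: RUNNING" for an active container
        if command -v lxc-info >/dev/null 2>&1; then
            lxc-info -n "$LXC_NAME" 2>/dev/null | grep -q RUNNING && return 0
        fi
        # 2./3. Podman, then Docker: look for our container name in `ps`
        for rt in podman docker; do
            command -v "$rt" >/dev/null 2>&1 || continue
            "$rt" ps --format '{{.Names}}' 2>/dev/null | \
                grep -qx "$CONTAINER_NAME" && return 0
        done
        # 4. Fall back to a native process check
        pgrep -f local-ai >/dev/null 2>&1
    }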
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update is_running() to detect Docker/Podman containers
- Fix get_status() uptime calculation for containers
- Improve do_chat() with better error messages and logging
- Use curl if available for API calls (more reliable than wget POST)
- Add debug logging to syslog with logger -t localai-chat (sketch below)
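A sketch of the curl-first request with that syslog tag; the URL,
timeout, and wget fallback flags are assumptions (BusyBox wget options
vary by build):

    # Sketch: prefer curl for the JSON POST, fall back to wget, log failures.
    api_post() {
        local url="$1" body="$2" resp
        if command -v curl >/dev/null 2>&1; then
            resp=$(curl -sf --max-time 120 \
                -H 'Content-Type: application/json' -d "$body" "$url")
        else
            resp=$(wget -q -T 120 -O - \
                --header='Content-Type: application/json' \
                --post-data="$body" "$url")
        fi
        [ -n "$resp" ] || logger -t localai-chat "POST $url failed"
        echo "$resp"
    }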
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
LocalAI changes:
- Rewrite localaictl to use Docker/Podman instead of standalone binary
- Use localai/localai:v2.25.0-ffmpeg image with all backends included
- Fix "llama-cpp backend not found" error
- Auto-detect the podman or docker runtime (sketch below)
- Update UCI config with Docker settings
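A sketch of the runtime auto-detection; the preference order, container
name, and port mapping are assumptions:

    # Sketch: pick whichever container runtime is installed.
    find_runtime() {
        if command -v podman >/dev/null 2>&1; then echo podman
        elif command -v docker >/dev/null 2>&1; then echo docker
        else return 1
        fi
    }

    RT=$(find_runtime) || { echo "no container runtime found" >&2; exit 1; }
    # Map host port 8081 (new default) to LocalAI's assumed in-container 8080
    "$RT" run -d --name localai -p 8081:8080 localai/localai:v2.25.0-ffmpeg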
New Ollama package:
- Add secubox-app-ollama as a lighter alternative to LocalAI
- Native ARM64 support with backends included
- Simple CLI: ollamactl pull/run/list (usage below)
- Docker image ~1 GB vs 2-4 GB for LocalAI
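Illustrative usage of the CLI (the model name is an example):

    ollamactl pull tinyllama   # download a model into the local store
    ollamactl run tinyllama    # chat with it interactively
    ollamactl list             # show locally available models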
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
LuCI's rpc.declare with expect: { models: [] } returns the array
directly, not wrapped in { models: [...] }. Fixed all views to handle
this correctly (sketch below).
- models.js: Check Array.isArray(data) first
- dashboard.js: Extract array from results[1] directly
- chat.js: Same array handling fix
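A sketch of the pattern after the fix; the rpcd object/method and model
field names are illustrative:

    'use strict';
    'require rpc';

    // expect: { models: [] } tells LuCI to unwrap .models from the reply
    // (defaulting to []), so callers receive a bare array.
    var callListModels = rpc.declare({
        object: 'luci.localai',
        method: 'list_models',
        expect: { models: [] }
    });

    callListModels().then(function(data) {
        // Defensive: accept both the unwrapped array and a raw object
        var models = Array.isArray(data) ? data : (data.models || []);
        models.forEach(function(m) { console.log(m.id); });
    });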
Version: 0.1.0-r12
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add secubox-app-localai package with LXC container support for the LocalAI service
- Add luci-app-localai with dashboard, chat, models and settings views
- Implement RPCD backend for LocalAI API integration via /v1/models and /v1/chat/completions (backend sketch below)
- Use direct RPC declarations in LuCI views for reliable frontend communication
- Add LocalAI and Glances to secubox-portal services page
- Move Glances from services to monitoring section
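For reference, a minimal rpcd executable-plugin sketch of the
/v1/models proxy; the object name, port, and jsonfilter extraction are
assumptions:

    #!/bin/sh
    # Sketch: install as /usr/libexec/rpcd/luci.localai (name illustrative).
    case "$1" in
        list)
            # Advertise methods and argument signatures to rpcd
            echo '{ "list_models": {} }'
            ;;
        call)
            case "$2" in
                list_models)
                    # LocalAI's OpenAI-style reply keeps models under .data;
                    # re-wrap as {"models":[...]} for expect: { models: [] }
                    data=$(curl -sf --max-time 10 \
                        http://127.0.0.1:8081/v1/models | jsonfilter -e '@.data')
                    echo "{ \"models\": ${data:-[]} }"
                    ;;
            esac
            ;;
    esac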
Packages:
- secubox-app-localai: 0.1.0-r1
- luci-app-localai: 0.1.0-r8
- luci-app-secubox-portal: 0.6.0-r5
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>