secubox-openwrt/package/secubox/luci-app-localai
CyberMind-FR daa4c48375 fix(localai): Add gte-small preset, fix RPC expect unwrapping and chat JSON escaping
- Add gte-small embedding model preset to localaictl with proper YAML
  config (embeddings: true, context_size: 512)
- Fix RPC expect declarations across api.js, dashboard.js, models.js to
  use empty expect objects, preserving full response including error fields
- Replace fragile sed/awk JSON escaping in RPCD chat and completion
  handlers with file I/O streaming through awk for robust handling of
  special characters in LLM responses
- Switch RPCD chat handler from curl to wget to avoid missing output
  file on timeout (curl doesn't create -o file on exit code 28)
- Bypass RPCD 30s script timeout for chat by calling LocalAI API
  directly from the browser via fetch()
- Add embeddings flag to models RPC and filter embedding models from
  chat view model selector

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 08:36:20 +01:00
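The file-streaming escape fix above can be illustrated with a minimal sketch (not the exact handler code): read the raw LLM response from a temp file and let awk escape backslashes, quotes, and tabs, joining lines with a literal \n so the text embeds safely in a JSON string.

```shell
# Sketch of awk-based JSON string escaping for an LLM response file.
# escape_json_file is a hypothetical helper name, not from the source.
escape_json_file() {
    awk 'BEGIN { ORS = "" }
    {
        gsub(/\\/, "\\\\")        # escape backslashes first
        gsub(/"/, "\\\"")         # then double quotes
        gsub(/\t/, "\\t")         # tabs become \t
        if (NR > 1) printf "\\n"  # join lines with a literal \n
        print
    }' "$1"
}

printf 'He said "hi"\nbye' > /tmp/resp.txt
escape_json_file /tmp/resp.txt   # -> He said \"hi\"\nbye
```

Streaming from a file instead of passing the response through sed on the command line avoids both argument-length limits and shell quoting surprises with special characters.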

LuCI LocalAI Dashboard

Local LLM inference server management with an OpenAI-compatible API.
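Because the API is OpenAI-compatible, it can be exercised with a plain HTTP request. A sketch, assuming LocalAI's default port 8080 and a placeholder model name (adjust both for your setup):

```shell
# Chat completion request sketch. Port 8080 is LocalAI's default;
# "my-model" is a placeholder for an installed model name.
API="http://127.0.0.1:8080/v1/chat/completions"
payload='{
  "model": "my-model",
  "messages": [{"role": "user", "content": "Hello from OpenWrt"}],
  "temperature": 0.7
}'
curl -s --max-time 5 "$API" \
  -H 'Content-Type: application/json' \
  -d "$payload" \
  || echo "request failed (no LocalAI instance reachable)"
```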

Installation

opkg install luci-app-localai

Access

LuCI menu: Services -> LocalAI

Tabs

  • Dashboard -- Service health, loaded models, API endpoint status
  • Models -- Install, remove, and manage LLM models
  • Chat -- Interactive chat interface for testing models
  • Settings -- API port, memory limits, runtime configuration

RPCD Methods

Backend: luci.localai

Method         Description
-------------  -------------------------------
status         Service status and runtime info
models         List installed models
config         Get configuration
health         API health check
metrics        Inference metrics and stats
start          Start LocalAI
stop           Stop LocalAI
restart        Restart LocalAI
model_install  Install a model by name
model_remove   Remove an installed model
chat           Send chat completion request
complete       Send text completion request
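On the router itself, the same backend object can be called directly over ubus, which is the transport LuCI uses for RPCD methods. A sketch; the object name luci.localai comes from this README, but any method arguments shown are assumptions, not taken from the source:

```shell
# Query the RPCD backend over ubus (OpenWrt only).
if command -v ubus >/dev/null 2>&1; then
    ubus call luci.localai status
    ubus call luci.localai models
else
    echo "ubus not available (not running on OpenWrt)"
fi
```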

Dependencies

  • luci-base
  • secubox-app-localai

License

Apache-2.0