# LuCI LocalAI Dashboard

Local LLM inference server management with an OpenAI-compatible API.
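Because the API is OpenAI-compatible, any standard chat-completion client can talk to it. A minimal sketch in shell — the port 8080, the endpoint host, and the model name `my-model` are placeholder assumptions (the actual port is set under Settings):

```shell
# Build a minimal OpenAI-style chat payload.
# "my-model" and the port/host in the comment below are placeholders,
# not values shipped by this package.
payload() {
    printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

payload my-model 'Hello'
# The payload would then be POSTed to the server, e.g.:
#   curl -s -H 'Content-Type: application/json' \
#        -d "$(payload my-model 'Hello')" \
#        http://127.0.0.1:8080/v1/chat/completions
```

Note that `printf` here does no JSON escaping, so prompts containing quotes or backslashes would need proper escaping first.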
## Installation

    opkg install luci-app-localai

## Access

LuCI menu: Services -> LocalAI
## Tabs

- Dashboard -- Service health, loaded models, API endpoint status
- Models -- Install, remove, and manage LLM models
- Chat -- Interactive chat interface for testing models
- Settings -- API port, memory limits, runtime configuration
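The Settings tab presumably persists these values through UCI, as is conventional for LuCI apps. A purely hypothetical `/etc/config/localai` sketch — the section type and option names below are illustrative guesses, not taken from the package:

```
# Hypothetical UCI config sketch; actual option names may differ.
config localai 'main'
	option enabled '1'
	option port '8080'
	option memory_limit '2048'
```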
## RPCD Methods

Backend: `luci.localai`

| Method | Description |
|---|---|
| status | Service status and runtime info |
| models | List installed models |
| config | Get configuration |
| health | API health check |
| metrics | Inference metrics and stats |
| start | Start LocalAI |
| stop | Stop LocalAI |
| restart | Restart LocalAI |
| model_install | Install a model by name |
| model_remove | Remove an installed model |
| chat | Send chat completion request |
| complete | Send text completion request |
## Dependencies

- luci-base
- secubox-app-localai
## License

Apache-2.0