# SecuBox LocalAI

Native LLM server with an OpenAI-compatible REST API. Supports GGUF models on ARM64 and x86_64.
## Installation

```sh
opkg install secubox-app-localai
```
## Configuration

UCI config file: `/etc/config/localai`

```
config localai 'main'
	option enabled '0'
	option port '8080'
	option models_path '/srv/localai/models'
```
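A minimal sketch of enabling the service from the shell. This assumes the standard OpenWrt `uci` tool; the section and option names are taken from the config above, and `localaictl start` is the documented way to launch the service.

```sh
# Flip the enabled flag in /etc/config/localai and persist it
uci set localai.main.enabled='1'
uci commit localai

# Start the service through the controller CLI
localaictl start
```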
## Usage

```sh
# Install the binary (downloaded on first run)
localaictl install

# Start / stop the service
localaictl start
localaictl stop

# Check status
localaictl status

# Download a model
localaictl model-pull <model-name>
```
The binary is downloaded from GitHub releases on the first `localaictl install`.
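Once a model has been pulled and the service is running, the API can be exercised with plain `curl`. A hedged example, assuming the default port 8080 from the config above and a hypothetical model name `tinyllama` (substitute a model you have actually pulled); `jq` is used only to extract the reply text.

```sh
# Send a chat completion request to the OpenAI-compatible endpoint
# and print the assistant's reply. The model name is a placeholder.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "tinyllama", "messages": [{"role": "user", "content": "Hello"}]}' \
  | jq -r '.choices[0].message.content'
```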
## Features
- OpenAI-compatible REST API
- GGUF model support (LLaMA, Mistral, Phi, TinyLlama, etc.)
- ARM64 and x86_64 architectures
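The OpenAI-compatible surface can be probed without any SDK. For example, listing the models the server knows about (the `/v1/models` path follows the OpenAI API convention; adjust host and port to your config):

```sh
# List available model IDs from the OpenAI-compatible /v1/models endpoint
curl -s http://127.0.0.1:8080/v1/models | jq -r '.data[].id'
```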
## Files

- `/etc/config/localai` -- UCI configuration
- `/usr/sbin/localaictl` -- controller CLI
- `/srv/localai/models/` -- model storage directory
## Dependencies

- libstdcpp
- libpthread
- wget-ssl
- ca-certificates
## License
MIT