New package for building LocalAI from source with the llama-cpp backend:
- localai-wb-ctl: On-device build management (see the usage example after this list)
  - check: Verify build prerequisites
  - install-deps: Install build dependencies
  - build: Compile LocalAI with the llama-cpp backend
  - Additional subcommands for model management and service control
- build-sdk.sh: Cross-compile script for the OpenWrt SDK (see the sketch after this list)
  - Uses the OpenWrt toolchain for ARM64 (aarch64)
  - Produces an optimized binary with the llama-cpp backend
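
Example on-device flow with localai-wb-ctl (illustrative; only the subcommands
listed above are assumed, exact options may differ):

  localai-wb-ctl check          # verify build prerequisites on the device
  localai-wb-ctl install-deps   # install build dependencies
  localai-wb-ctl build          # compile LocalAI with the llama-cpp backend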
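
Sketch of the cross-compile environment build-sdk.sh is meant to set up (the SDK
path, toolchain name, and build invocation below are assumptions, not the
script's exact contents):

  SDK=/opt/openwrt-sdk-aarch64                  # hypothetical SDK location on the build host
  export PATH="$SDK/staging_dir/toolchain-aarch64_generic_gcc-13.3.0_musl/bin:$PATH"
  export CC=aarch64-openwrt-linux-musl-gcc      # ARM64 musl cross compiler from the OpenWrt toolchain
  export CGO_ENABLED=1 GOOS=linux GOARCH=arm64  # cgo is required for the llama-cpp backend
  make build                                    # build the LocalAI binary for the target
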
Provides a native-build alternative to the Docker-based secubox-app-localai package.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>