# LuCI LocalAI Dashboard
Local LLM inference server management with an OpenAI-compatible API.
## Installation

```sh
opkg install luci-app-localai
```
## Access
LuCI menu: Services -> LocalAI
## Tabs
- Dashboard -- Service health, loaded models, API endpoint status
- Models -- Install, remove, and manage LLM models
- Chat -- Interactive chat interface for testing models
- Settings -- API port, memory limits, runtime configuration
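Beyond the Chat tab, models can be exercised directly over the OpenAI-compatible HTTP API. A minimal sketch with `curl`, assuming LocalAI listens on its default port 8080 (adjust to the API port set in the Settings tab); the router address and model name below are placeholders:

```sh
# Send a chat completion to the OpenAI-compatible endpoint.
# 192.168.1.1 and the model name are examples -- use your router's
# address and a model listed in the Models tab.
curl http://192.168.1.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Because the API is OpenAI-compatible, existing OpenAI client libraries can be pointed at the same base URL.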
## RPCD Methods
Backend: `luci.localai`
| Method | Description |
|---|---|
| `status` | Service status and runtime info |
| `models` | List installed models |
| `config` | Get configuration |
| `health` | API health check |
| `metrics` | Inference metrics and stats |
| `start` | Start LocalAI |
| `stop` | Stop LocalAI |
| `restart` | Restart LocalAI |
| `model_install` | Install a model by name |
| `model_remove` | Remove an installed model |
| `chat` | Send chat completion request |
| `complete` | Send text completion request |
## Dependencies

- luci-base
- secubox-app-localai
## License
Apache-2.0