- LocalAI inference takes 30-60s on ARM64 hardware
- Changed RPCD chat handler to async pattern:
  - Returns poll_id immediately
  - Background process runs AI query (120s timeout)
  - Saves result to /var/lib/threat-analyst/chat_*.json
  - Client polls with poll_id to get result
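The async flow above can be sketched roughly as follows. This is an illustrative stand-in, not the actual handler: function names and the temp dir are hypothetical, and a short sleep stands in for the slow LocalAI query and for /var/lib/threat-analyst:

```shell
#!/bin/sh
# Sketch of the return-a-poll-id-then-poll pattern (hypothetical names).

RESULT_DIR=$(mktemp -d)       # stand-in for /var/lib/threat-analyst

handle_chat() {
    poll_id="$(date +%s)$$"                # unique id for this request
    (
        sleep 1                            # stand-in for the 30-60s AI query
        printf '%s\n' '{"answer":"hello"}' > "$RESULT_DIR/chat_$poll_id.json"
    ) >/dev/null 2>&1 &                    # detach so the handler returns at once
    echo "$poll_id"                        # handed to the client immediately
}

poll_result() {
    f="$RESULT_DIR/chat_$1.json"
    while [ ! -s "$f" ]; do sleep 1; done  # client-side polling loop
    cat "$f"
}

id=$(handle_chat)             # returns without waiting for the query
poll_result "$id"             # blocks until the result file appears
```

A real handler would also enforce the 120s timeout and garbage-collect stale chat_*.json files.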
- Updated api.js with chatAsync() that polls automatically
- Changed default LocalAI port from 8081 to 8091
- Frontend shows "Thinking..." message with spinner during inference
- Uses curl instead of wget (BusyBox wget's --post-data takes only a literal string, so it can't read the request body from stdin via --post-data=-)
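For reference, curl's equivalent of a stdin-fed POST body is --data-binary @-. The endpoint path and payload below are illustrative, not taken from this change; only the 8091 port matches the new default:

```shell
# curl reads the POST body from stdin when told "@-":
printf '{"messages":[]}' | \
    curl -s -X POST -H 'Content-Type: application/json' \
         --data-binary @- http://127.0.0.1:8091/v1/chat/completions
```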
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>