
SecuBox Ollama - Local LLM Runtime

Run large language models locally on your OpenWrt device. Provides an OpenAI-compatible REST API with native ARM64 support. Supports LLaMA, Mistral, Phi, Gemma, and other open models.

Installation

opkg install secubox-app-ollama

Configuration

UCI config file: /etc/config/ollama

uci set ollama.main.enabled='1'
uci set ollama.main.bind='0.0.0.0'
uci set ollama.main.port='11434'
uci set ollama.main.model_dir='/srv/ollama/models'
uci commit ollama
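
After committing, restart the service so the new settings take effect. This sketch assumes the package installs a standard OpenWrt init script named ollama; adjust the name if your install differs:

```shell
# Restart to pick up the committed UCI settings
# (assumes /etc/init.d/ollama is the service script shipped by this package)
/etc/init.d/ollama restart

# Quick check that the API answers on the configured port
wget -qO- http://127.0.0.1:11434/api/tags
```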

Usage

ollamactl start              # Start Ollama service
ollamactl stop               # Stop Ollama service
ollamactl status             # Show service status
ollamactl pull <model>       # Download a model
ollamactl list               # List installed models
ollamactl remove <model>     # Remove a model
ollamactl run <model>        # Run interactive chat
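
A typical first session chains these commands. The model name here is only an example; any model with a GGUF build should work:

```shell
ollamactl start             # bring the service up
ollamactl pull llama3.2     # download into the configured model_dir
ollamactl run llama3.2      # open an interactive chat in the terminal
```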

API

The service listens at http://<host>:11434. The example below uses Ollama's native generate endpoint; OpenAI-compatible routes are also served under /v1:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello"
}'
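
By default /api/generate streams one JSON object per token. For a single self-contained reply, set "stream": false and extract the text with jsonfilter (a declared dependency of this package):

```shell
# Non-streaming request: the reply arrives as one JSON object,
# and jsonfilter pulls out its "response" field
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello",
  "stream": false
}' | jsonfilter -e '@.response'
```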

Supported Models

LLaMA 3.x, Mistral, Phi-3, Gemma 2, CodeLlama, and any GGUF-compatible model.

Dependencies

  • jsonfilter
  • wget-ssl

License

Apache-2.0