English | [Français](README.fr.md) | [中文](README.zh.md)
# SecuBox Ollama - Local LLM Runtime
Run large language models locally on your OpenWrt device. Provides an OpenAI-compatible REST API with native ARM64 support. Supports LLaMA, Mistral, Phi, Gemma, and other open models.
## Installation
```bash
opkg install secubox-app-ollama
```
## Configuration
UCI config file: `/etc/config/ollama`
```bash
uci set ollama.main.enabled='1'
uci set ollama.main.bind='0.0.0.0'
uci set ollama.main.port='11434'
uci set ollama.main.model_dir='/srv/ollama/models'
uci commit ollama
```
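After `uci commit ollama`, the resulting `/etc/config/ollama` should look roughly like this (a sketch inferred from the commands above; the section type `ollama` is an assumption):

```
config ollama 'main'
	option enabled '1'
	option bind '0.0.0.0'
	option port '11434'
	option model_dir '/srv/ollama/models'
```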
## Usage
```bash
ollamactl start           # Start Ollama service
ollamactl stop            # Stop Ollama service
ollamactl status          # Show service status
ollamactl pull <model>    # Download a model
ollamactl list            # List installed models
ollamactl remove <model>  # Remove a model
ollamactl run <model>     # Run interactive chat
```
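A typical first session chains these commands together. The sketch below echoes each step instead of executing it (`DRY_RUN=1` is the default; set `DRY_RUN=0` on a real device), so you can preview the workflow safely; the model name `llama3.2` is only an example:

```shell
#!/bin/sh
# Dry-run sketch of a first ollamactl session.
# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ollamactl start          # bring up the service
run ollamactl pull llama3.2  # fetch a model (example name)
run ollamactl list           # confirm it is installed
run ollamactl run llama3.2   # open an interactive chat
```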
## API
OpenAI-compatible endpoint at `http://<host>:11434`:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello"
}'
```
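For scripting, the request body can be built once and reused. This is a hypothetical helper (not shipped with the package) that assembles the JSON payload for `/api/generate`; `stream: false` asks the server for a single response rather than a token stream, and the default model/prompt values are illustrative only:

```shell
#!/bin/sh
# Hypothetical helper: build the JSON body for /api/generate.
# Usage: generate.sh [model] [prompt]
MODEL=${1:-llama3.2}   # example default, not a package-mandated model
PROMPT=${2:-Hello}

BODY=$(printf '{"model": "%s", "prompt": "%s", "stream": false}' "$MODEL" "$PROMPT")
echo "$BODY"

# On a device with the service running, send it with:
# curl -s http://localhost:11434/api/generate -d "$BODY"
```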
## Supported Models
LLaMA 3.x, Mistral, Phi-3, Gemma 2, CodeLlama, and any GGUF-compatible model.
## Dependencies
- `jsonfilter`
- `wget-ssl`
## License
Apache-2.0