SecuBox Ollama - Local LLM Runtime
Run large language models locally on your OpenWrt device. Provides an OpenAI-compatible REST API with native ARM64 support. Supports LLaMA, Mistral, Phi, Gemma, and other open models.
Installation
opkg install secubox-app-ollama
Configuration
UCI configuration file: /etc/config/ollama
uci set ollama.main.enabled='1'
uci set ollama.main.bind='0.0.0.0'
uci set ollama.main.port='11434'
uci set ollama.main.model_dir='/srv/ollama/models'
uci commit ollama
Usage
ollamactl start # Start the Ollama service
ollamactl stop # Stop the Ollama service
ollamactl status # Show service status
ollamactl pull <model> # Download a model
ollamactl list # List installed models
ollamactl remove <model> # Remove a model
ollamactl run <model> # Start an interactive chat
API
OpenAI-compatible endpoint at http://<host>:11434:
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt": "Hello"
}'
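The same endpoint can be called from a script. Below is a minimal stdlib-only Python sketch; the base URL and model name mirror the curl example above, and `build_generate_request` is a hypothetical helper written for illustration, not part of the package:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default port from the UCI config above

def build_generate_request(model: str, prompt: str, base_url: str = OLLAMA_URL):
    """Build an HTTP POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_generate_request("llama3.2", "Hello")
    # Sending requires a running Ollama service; uncomment to try it:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["response"])
    print(req.full_url)
```

With `"stream": False` the service returns a single JSON object instead of a stream of chunks, which is simpler to parse from a one-shot script.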
Supported Models
LLaMA 3.x, Mistral, Phi-3, Gemma 2, CodeLlama, and any GGUF-compatible model.
Dependencies
jsonfilter, wget-ssl
License
Apache-2.0