feat(streamlit-control): Phase 3 - auto-refresh, permissions, UI improvements
Streamlit Control Dashboard Phase 3:
- Add auto-refresh toggle to all main pages (10s/30s/60s intervals)
- Add permission-aware UI with can_write() and is_admin() helpers
- Containers page: tabs (All/Running/Stopped), search filter, info panels
- Security page: better CrowdSec parsing, threat table, raw data viewer
- Streamlit apps page: restart button, delete confirmation dialog
- Network page: HAProxy filter, WireGuard/DNS placeholders
fix(crowdsec-dashboard): Handle RPC error codes in overview.js
Fix TypeError when CrowdSec RPC returns error code instead of object.
Added type check to treat non-objects as empty {} in render/pollData.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:
parent 99d9f307dd
commit 9081444c7a
@@ -1,6 +1,6 @@
# SecuBox UI & Theme History

-_Last updated: 2026-03-11 (RezApp Forge, Streamlit Forge)_
+_Last updated: 2026-03-11 (Streamlit Control Phase 3, CrowdSec bugfix)_

1. **Unified Dashboard Refresh (2025-12-20)**

   - Dashboard received the "sh-page-header" layout, hero stats, and SecuNav top tabs.
@@ -4606,3 +4606,75 @@ git checkout HEAD -- index.html
    - Featured as new release
    - Updated `new_releases` section with both apps
    - Total plugins: 37 → 39

87. **Streamlit Control Dashboard Phase 1 (2026-03-11)**
    - Package: `secubox-app-streamlit-control` - Modern Streamlit-based LuCI replacement
    - Inspired by metablogizer KISS design patterns
    - Architecture:
      - Python ubus client (`lib/ubus_client.py`) - JSON-RPC for RPCD communication
      - Authentication module (`lib/auth.py`) - LuCI session integration
      - KISS widgets library (`lib/widgets.py`) - Badges, status cards, QR codes
    - Pages:
      - Home (app.py) - System stats, service status, container quick controls
      - Sites (Metablogizer clone) - One-click deploy, sites table, action buttons
      - Streamlit - Streamlit Forge apps management
      - Containers - LXC container status and controls
      - Network - Interface status, WireGuard peers, mwan3 uplinks
      - Security - WAF status, CrowdSec decisions, firewall
      - System - Board info, packages, logs
    - Deployment:
      - Registered with Streamlit Forge on port 8531
      - Exposed via HAProxy at control.gk2.secubox.in
      - Routed through mitmproxy WAF (security policy compliant)
      - Fixed mitmproxy-in container startup (cgroup:mixed removal, routes JSON repair)
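The KISS widgets library itself is not shown in this diff; a minimal sketch of what a badge helper in the spirit of `lib/widgets.py` could look like (the names, colors, and HTML are assumptions, not the actual SecuBox code — the returned string would be rendered in a page via `st.markdown(html, unsafe_allow_html=True)`):

```python
# Hypothetical badge helper, illustrating the "KISS widgets" idea only.
BADGE_COLORS = {
    "running": "#2e7d32",   # green
    "stopped": "#c62828",   # red
    "unknown": "#616161",   # grey fallback
}

def badge(label: str, state: str) -> str:
    """Build an inline HTML badge colored by service/container state."""
    color = BADGE_COLORS.get(state, BADGE_COLORS["unknown"])
    return (
        f'<span style="background:{color};color:#fff;'
        f'padding:2px 8px;border-radius:10px;font-size:0.8em">{label}</span>'
    )
```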
88. **Streamlit Control Dashboard Phase 2 (2026-03-11)**
    - RPCD integration for real data access
    - Authentication:
      - HTTPS with self-signed cert support (verify=False)
      - Dual auth: root (full access) + SecuBox users (read-only)
      - SecuBox users authenticate via `luci.secubox-users.authenticate`
    - ACL updates (`/usr/share/rpcd/acl.d/unauthenticated.json`):
      - Added read access: secubox-portal, metablogizer, haproxy, mitmproxy, crowdsec-dashboard, streamlit-forge
      - Allows dashboard viewing without system login
    - Fixed methods:
      - LXC containers: `luci.secubox-portal.get_containers` (luci.lxc doesn't exist)
      - CrowdSec: `luci.crowdsec-dashboard.status`
    - Fixed duplicate key error in Streamlit pages (enumerate with index)
    - Dashboard data verified: containers (11/32 running), HAProxy, WAF (16k threats), CrowdSec
    - Test user created: `testdash` / `Password123`
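The Phase 2 entries describe the ubus JSON-RPC path only in prose; a minimal sketch of such a call is below. The endpoint path, session handling, and error handling are assumptions for illustration — this is not the actual `lib/ubus_client.py`:

```python
import json
import ssl
import urllib.request

def rpc_payload(session, obj, method, params=None):
    """Build a ubus JSON-RPC 2.0 "call" envelope."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "call",
        "params": [session, obj, method, params or {}],
    }

def ubus_call(url, session, obj, method, params=None, timeout=10):
    """POST one ubus call over HTTPS, accepting self-signed certs
    (the verify=False behaviour mentioned above)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    body = json.dumps(rpc_payload(session, obj, method, params)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, context=ctx, timeout=timeout) as resp:
        reply = json.load(resp)
    result = reply.get("result")
    # ubus replies are [status, data]; status 0 means success
    if isinstance(result, list) and result and result[0] == 0:
        return result[1] if len(result) > 1 else {}
    raise RuntimeError("ubus error: %r" % (result,))

# e.g. containers = ubus_call("https://192.168.255.1/ubus", sid,
#                             "luci.secubox-portal", "get_containers")
```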
89. **Streamlit Control Dashboard Phase 3 (2026-03-11)**
    - Auto-refresh toggle:
      - Added to all main pages (Dashboard, Containers, Security, Streamlit, Network)
      - Configurable intervals: 10s, 30s, 60s
      - Manual refresh button
    - Permission-aware UI:
      - `can_write()` and `is_admin()` helper functions in auth.py
      - Action buttons hidden/disabled for SecuBox users (read-only access)
      - "View only" indicators for limited users
    - Containers page improvements:
      - Tabs for All/Running/Stopped filtering
      - Search filter by container name
      - Improved info panels with metrics display
      - Raw data expander
    - Security page improvements:
      - Better CrowdSec status parsing (handles various response formats)
      - Threat table with columns (IP, URL, Category, Severity, Time)
      - Stats tab with raw data viewer
    - Streamlit apps page:
      - Added restart button
      - Delete confirmation dialog
      - Open link buttons
    - Network page:
      - HAProxy search filter
      - Vhost count stats
      - WireGuard/DNS placeholders with setup hints
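A sketch of what the auth helpers and interval mapping from the Phase 3 entry might look like (the session shape and role strings are assumptions; the real auth.py integrates with LuCI sessions):

```python
# Hypothetical roles; only root gets write access per the dual-auth model.
ROLE_ADMIN = "root"             # full access
ROLE_READONLY = "secubox-user"  # read-only dashboard access

def is_admin(session: dict) -> bool:
    """Only the root login counts as admin."""
    return session.get("role") == ROLE_ADMIN

def can_write(session: dict) -> bool:
    """SecuBox users are view-only; any mutating action requires admin."""
    return is_admin(session)

def refresh_seconds(choice: str, default: int = 30) -> int:
    """Map the auto-refresh toggle options (10s/30s/60s) to seconds."""
    return {"10s": 10, "30s": 30, "60s": 60}.get(choice, default)

# In a page, action buttons would then be gated roughly like:
#   st.button("Restart", disabled=not can_write(session))
#   if not can_write(session): st.caption("View only")
```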
90. **CrowdSec Dashboard Bugfix (2026-03-11)**
    - Fixed: `TypeError: can't assign to property "countries" on 5: not an object`
    - Root cause: RPC error code 5 (UBUS_STATUS_NOT_FOUND) returned instead of an object
    - Occurs when the CrowdSec service is busy or temporarily unavailable
    - Fix: Added a type check in `overview.js` render() and pollData():
      `var s = (data && typeof data === 'object' && !Array.isArray(data)) ? data : {}`
    - Deployed to router, cleared LuCI caches
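The same guard translates directly to the Python side of the dashboard, where an RPCD reply can equally arrive as a bare ubus error code instead of a dict (a sketch, not the shipped code):

```python
def as_object(reply):
    """Mirror the overview.js guard: treat any non-dict RPC reply
    (e.g. ubus error code 5, lists, None) as an empty result."""
    return reply if isinstance(reply, dict) else {}

# Safe to dereference regardless of what the service returned:
status = as_object(5)                     # service returned an error code
countries = status.get("countries", {})   # -> {} instead of a TypeError
```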
@@ -1,6 +1,6 @@
# Work In Progress (Claude)

-_Last updated: 2026-03-11 (Forge LuCI Apps + Documentation)_
+_Last updated: 2026-03-11 (Streamlit Control Phase 3 + CrowdSec bugfix)_

> **Architecture Reference**: SecuBox Fanzine v3 — Les 4 Couches

@@ -10,13 +10,47 @@ _Last updated: 2026-03-11 (Forge LuCI Apps + Documentation)_

### 2026-03-11
-- **RezApp Forge - Docker to SecuBox App Converter**
+- **Streamlit Control Dashboard Phase 3 (Complete)**
  - **Auto-refresh**: Toggle + interval selector on all main pages (10s/30s/60s)
  - **Permission-aware UI**: Hide/disable action buttons for SecuBox users (limited access)
  - **Containers page**: Tabs (All/Running/Stopped), search filter, improved info panels
  - **Security page**: Better CrowdSec status parsing, threat table with columns, raw data expander
  - **Streamlit apps page**: Restart button, delete confirmation dialog
  - **Network page**: HAProxy filter, vhost count stats, WireGuard/DNS placeholders
  - **Auth helpers**: `can_write()`, `is_admin()` functions for permission checks

- **CrowdSec Dashboard Bugfix**
  - Fixed: `TypeError: can't assign to property "countries" on 5: not an object`
  - Root cause: RPC error code 5 returned instead of an object (transient service state)
  - Fix: Added a type check in `overview.js` to treat non-objects as empty `{}`
  - Deployed fix to router, cleared LuCI caches
- **Streamlit Control Dashboard Phase 1 & 2 (Complete)**
  - Package: `secubox-app-streamlit-control` with Python ubus client
  - KISS-themed UI inspired by metablogizer design
  - 7 pages: Home, Sites, Streamlit, Containers, Network, Security, System
  - **Phase 2: RPCD Integration**
    - HTTPS connection with self-signed cert support
    - Dual auth: root (full access) + SecuBox users (read-only dashboard)
    - Updated ACL: `unauthenticated.json` allows dashboard data without login
    - Fixed LXC via `luci.secubox-portal.get_containers`
    - Fixed CrowdSec via `luci.crowdsec-dashboard.status`
    - All service status methods working (HAProxy, WAF, containers)
    - Deployed on port 8531, exposed at control.gk2.secubox.in
    - Test user: `testdash` / `Password123`
- **RezApp Forge - Docker to SecuBox App Converter (Complete)**
  - Package: `secubox-app-rezapp` with `rezappctl` CLI
  - UCI config: `/etc/config/rezapp` with catalog sources (Docker Hub, LinuxServer.io, GHCR)
-  - Commands: catalog, search, info, convert, package, publish, list
+  - Commands: catalog, search, info, import, convert, run, stop, package, publish, expose, list, cache
  - Docker to LXC workflow: pull → export → extract → generate LXC config
  - **Offline mode**: Convert from a local tarball (--from-tar) or OCI directory (--from-oci)
  - **Runtime fallback**: Docker → Podman automatic fallback
  - **Network modes**: host (shared namespace), bridge (veth), none (isolated)
  - **ENV extraction**: Docker ENV vars → LXC environment + UCI config
  - **HAProxy integration**: `expose` command adds vhost + mitmproxy route
  - Templates: Makefile.tpl, init.d.tpl, ctl.tpl, config.tpl, start-lxc.tpl, lxc-config.tpl, manifest.tpl
  - Plan: `/home/reepost/.claude/plans/tingly-rolling-sky.md`
  - **Tested**: Offline conversion from cached tarball; the container runs successfully

- **Streamlit Forge Phase 1** (implemented)
  - Package: `secubox-app-streamlit-forge` with `slforge` CLI
@@ -354,7 +388,6 @@ _Last updated: 2026-03-11 (Forge LuCI Apps + Documentation)_
## In Progress

- **Streamlit Forge Phase 2** - Preview generation, Gitea push/pull
- **RezApp Forge Full Test** - Test Docker → LXC conversion workflow

- **RTTY Remote Control Module (Phase 4 - Session Replay)**
  - Avatar-tap integration for session capture
@@ -533,7 +533,14 @@
        "Bash(wg show:*)",
        "Bash(sudo ip addr:*)",
        "Bash(ethtool:*)",
-       "Bash(arp-scan:*)"
+       "Bash(arp-scan:*)",
+       "Bash(if ping -c 1 -W 3 192.168.255.)",
+       "Bash(__NEW_LINE_5d3c44d38e0f0475__ scp /home/reepost/CyberMindStudio/secubox-openwrt/package/secubox/secubox-app-streamlit-control/files/usr/share/streamlit-control/pages/3_📊_Streamlit.py root@192.168.255.1:/srv/streamlit/apps/control/src/pages/)",
+       "Bash(__NEW_LINE_5d3c44d38e0f0475__ ssh root@192.168.255.1 'slforge restart control 2>&1')",
+       "Bash(__NEW_LINE_010faa2e33a60d91__ ssh root@192.168.255.1 'slforge restart control 2>&1')",
+       "Bash(__NEW_LINE_eb1b0ab801ec1afd__ scp /home/reepost/CyberMindStudio/secubox-openwrt/package/secubox/secubox-app-streamlit-control/files/usr/share/streamlit-control/app.py root@192.168.255.1:/srv/streamlit/apps/control/src/)",
+       "Bash(__NEW_LINE_eb1b0ab801ec1afd__ ssh root@192.168.255.1 'slforge restart control 2>&1')",
+       "Bash(__NEW_LINE_066a081550243cc1__ echo \"\")"
    ]
}
}
@@ -63,7 +63,8 @@ return view.extend({

    render: function(data) {
        var self = this;
-       var s = data || {};
+       // Ensure s is always an object (data could be error code like 5)
+       var s = (data && typeof data === 'object' && !Array.isArray(data)) ? data : {};
        s.countries = this.parseCountries(s);
        s.alerts = this.parseAlerts(s);
@@ -200,7 +201,9 @@ return view.extend({

    pollData: function() {
        var self = this;
-       return api.getOverview().then(function(s) {
+       return api.getOverview().then(function(data) {
+           // Ensure s is always an object (data could be error code)
+           var s = (data && typeof data === 'object' && !Array.isArray(data)) ? data : {};
            s.countries = self.parseCountries(s);
            s.alerts = self.parseAlerts(s);
            var el = document.getElementById('cs-stats');
@@ -18,6 +18,67 @@ log_info() { echo "[INFO] $*"; }
log_warn() { echo "[WARN] $*" >&2; }
log_error() { echo "[ERROR] $*" >&2; }

# Container runtime (docker or podman)
CONTAINER_RUNTIME=""

# Detect and initialize container runtime (Docker or Podman fallback)
init_runtime() {
    [ -n "$CONTAINER_RUNTIME" ] && return 0

    # Try Docker first
    if command -v docker >/dev/null 2>&1; then
        if docker info >/dev/null 2>&1; then
            CONTAINER_RUNTIME="docker"
            log_info "Using Docker runtime"
            return 0
        fi
        # Try starting Docker daemon
        if [ -x /etc/init.d/dockerd ]; then
            log_info "Starting Docker daemon..."
            /etc/init.d/dockerd start 2>/dev/null
            sleep 5
            if docker info >/dev/null 2>&1; then
                CONTAINER_RUNTIME="docker"
                log_info "Using Docker runtime"
                return 0
            fi
        fi
    fi

    # Fallback to Podman
    if command -v podman >/dev/null 2>&1; then
        if podman info >/dev/null 2>&1; then
            CONTAINER_RUNTIME="podman"
            log_info "Using Podman runtime (fallback)"
            return 0
        fi
    fi

    log_error "No container runtime available (docker or podman)"
    return 1
}

# Runtime wrapper functions
runtime_pull() {
    $CONTAINER_RUNTIME pull "$@"
}

runtime_create() {
    $CONTAINER_RUNTIME create "$@"
}

runtime_export() {
    $CONTAINER_RUNTIME export "$@"
}

runtime_rm() {
    $CONTAINER_RUNTIME rm "$@"
}

runtime_inspect() {
    $CONTAINER_RUNTIME inspect "$@"
}

# Load configuration
load_config() {
    config_load "$CONFIG"
@@ -163,7 +224,7 @@ cmd_info() {
}

# ==========================================
-# Convert Command
+# Convert Command (with offline mode support)
# ==========================================

cmd_convert() {
@@ -174,6 +235,9 @@ cmd_convert() {
    local network=""
    local ports=""
    local mounts=""
    local from_tar=""
    local from_oci=""
    local offline="0"

    # Parse arguments
    while [ $# -gt 0 ]; do
@@ -184,11 +248,31 @@ cmd_convert() {
        --network) network="$2"; shift 2 ;;
        --port) ports="$ports $2"; shift 2 ;;
        --mount) mounts="$mounts $2"; shift 2 ;;
        --from-tar) from_tar="$2"; offline="1"; shift 2 ;;
        --from-oci) from_oci="$2"; offline="1"; shift 2 ;;
        --offline) offline="1"; shift ;;
        -*) log_error "Unknown option: $1"; return 1 ;;
        *) image="$1"; shift ;;
        esac
    done

    # Offline mode: convert from local tarball
    if [ -n "$from_tar" ]; then
        [ ! -f "$from_tar" ] && { log_error "Tarball not found: $from_tar"; return 1; }
        [ -z "$name" ] && { log_error "Name required with --from-tar"; return 1; }
        _convert_from_tarball "$from_tar" "$name" "$memory" "$network"
        return $?
    fi

    # Offline mode: convert from OCI directory
    if [ -n "$from_oci" ]; then
        [ ! -d "$from_oci" ] && { log_error "OCI directory not found: $from_oci"; return 1; }
        [ -z "$name" ] && { log_error "Name required with --from-oci"; return 1; }
        _convert_from_oci "$from_oci" "$name" "$memory" "$network"
        return $?
    fi

    # Check cache for existing tarball (offline conversion)
    [ -z "$image" ] && { log_error "Image name required"; return 1; }

    # Default name from image
@@ -205,27 +289,26 @@ cmd_convert() {
    local lxc_path="$LXC_DIR/$name"
    local tarball="$CACHE_DIR/${name}.tar"

-    log_info "Converting $full_image -> $name"
-
-    # Step 1: Ensure Docker is running
-    if ! docker info >/dev/null 2>&1; then
-        log_info "Starting Docker daemon..."
-        /etc/init.d/dockerd start 2>/dev/null
-        sleep 5
-        if ! docker info >/dev/null 2>&1; then
-            log_error "Docker is not available"
-            return 1
-        fi
+    # Check if cached tarball exists
+    if [ -f "$tarball" ] && [ "$offline" = "1" ]; then
+        log_info "Using cached tarball: $tarball"
+        _convert_from_tarball "$tarball" "$name" "$memory" "$network"
+        return $?
    fi

+    log_info "Converting $full_image -> $name"
+
+    # Step 1: Initialize container runtime (Docker or Podman)
+    init_runtime || return 1
+
    # Step 2: Pull image
-    log_info "Pulling Docker image..."
-    docker pull "$full_image" || { log_error "Failed to pull image"; return 1; }
+    log_info "Pulling image via $CONTAINER_RUNTIME..."
+    runtime_pull "$full_image" || { log_error "Failed to pull image"; return 1; }

    # Step 3: Inspect image
    log_info "Inspecting image metadata..."
    mkdir -p "$app_dir"
-    docker inspect "$full_image" > "$app_dir/docker-inspect.json"
+    runtime_inspect "$full_image" > "$app_dir/docker-inspect.json"

    # Extract metadata
    local entrypoint=$(jsonfilter -i "$app_dir/docker-inspect.json" -e '@[0].Config.Entrypoint[*]' 2>/dev/null | tr '\n' ' ')
@@ -234,43 +317,230 @@ cmd_convert() {
    local user=$(jsonfilter -i "$app_dir/docker-inspect.json" -e '@[0].Config.User' 2>/dev/null)
    local exposed=$(jsonfilter -i "$app_dir/docker-inspect.json" -e '@[0].Config.ExposedPorts' 2>/dev/null)

    # Extract environment variables
    local env_vars=""
    local env_list=$(jsonfilter -i "$app_dir/docker-inspect.json" -e '@[0].Config.Env[*]' 2>/dev/null)
    if [ -n "$env_list" ]; then
        # Filter out PATH and other system vars, keep useful ones
        env_vars=$(echo "$env_list" | grep -vE '^(PATH=|HOME=|HOSTNAME=|TERM=)' | tr '\n' '|')
    fi

    log_info "Extracted: entrypoint='$entrypoint' cmd='$cmd' workdir='$workdir' user='$user'"
    [ -n "$exposed" ] && log_info "Exposed ports: $exposed"

    # Step 4: Export filesystem
    log_info "Exporting container filesystem..."
-    docker create --name rezapp-export-$$ "$full_image" >/dev/null 2>&1
-    docker export rezapp-export-$$ > "$tarball"
-    docker rm rezapp-export-$$ >/dev/null 2>&1
+    runtime_create --name rezapp-export-$$ "$full_image" >/dev/null 2>&1
+    runtime_export rezapp-export-$$ > "$tarball"
+    runtime_rm rezapp-export-$$ >/dev/null 2>&1

-    # Step 5: Create LXC rootfs
-    log_info "Creating LXC container..."
+    # Create LXC from tarball
    _create_lxc_from_tar "$tarball" "$name" "$memory" "$entrypoint" "$cmd" "$workdir" "$user" "$full_image" "$network" "$ports" "$env_vars" "$exposed"
}
# Convert from local tarball (offline mode)
_convert_from_tarball() {
    local tarball="$1"
    local name="$2"
    local memory="${3:-$DEFAULT_MEMORY}"
    local network="${4:-$DEFAULT_NETWORK}"

    log_info "Converting from tarball: $tarball -> $name"

    # Try to extract metadata from tarball
    local entrypoint=""
    local cmd="/bin/sh"
    local workdir="/"
    local user=""

    # Check for OCI manifest in tarball
    if tar tf "$tarball" 2>/dev/null | grep -q "manifest.json"; then
        log_info "Found OCI manifest, extracting metadata..."
        local manifest_tmp="/tmp/rezapp-manifest-$$.json"
        tar xf "$tarball" -O manifest.json > "$manifest_tmp" 2>/dev/null
        # Extract config digest and parse
        local config_file=$(jsonfilter -i "$manifest_tmp" -e '@[0].Config' 2>/dev/null)
        if [ -n "$config_file" ] && tar tf "$tarball" 2>/dev/null | grep -q "$config_file"; then
            tar xf "$tarball" -O "$config_file" > "/tmp/rezapp-config-$$.json" 2>/dev/null
            entrypoint=$(jsonfilter -i "/tmp/rezapp-config-$$.json" -e '@.config.Entrypoint[*]' 2>/dev/null | tr '\n' ' ')
            cmd=$(jsonfilter -i "/tmp/rezapp-config-$$.json" -e '@.config.Cmd[*]' 2>/dev/null | tr '\n' ' ')
            workdir=$(jsonfilter -i "/tmp/rezapp-config-$$.json" -e '@.config.WorkingDir' 2>/dev/null)
            user=$(jsonfilter -i "/tmp/rezapp-config-$$.json" -e '@.config.User' 2>/dev/null)
            rm -f "/tmp/rezapp-config-$$.json"
        fi
        rm -f "$manifest_tmp"
    fi

    _create_lxc_from_tar "$tarball" "$name" "$memory" "$entrypoint" "$cmd" "$workdir" "$user" "local:$tarball" "$network" "" "" ""
}
# Convert from OCI directory (offline mode)
_convert_from_oci() {
    local oci_dir="$1"
    local name="$2"
    local memory="${3:-$DEFAULT_MEMORY}"
    local network="${4:-$DEFAULT_NETWORK}"

    log_info "Converting from OCI directory: $oci_dir -> $name"

    local app_dir="$APPS_DIR/$name"
    local lxc_path="$LXC_DIR/$name"

    # Parse OCI index and config
    local index_file="$oci_dir/index.json"
    [ ! -f "$index_file" ] && { log_error "OCI index.json not found"; return 1; }

    local manifest_digest=$(jsonfilter -i "$index_file" -e '@.manifests[0].digest' 2>/dev/null)
    manifest_digest="${manifest_digest#sha256:}"
    local manifest_file="$oci_dir/blobs/sha256/$manifest_digest"

    [ ! -f "$manifest_file" ] && { log_error "OCI manifest not found"; return 1; }

    # Get config
    local config_digest=$(jsonfilter -i "$manifest_file" -e '@.config.digest' 2>/dev/null)
    config_digest="${config_digest#sha256:}"
    local config_file="$oci_dir/blobs/sha256/$config_digest"

    local entrypoint=""
    local cmd="/bin/sh"
    local workdir="/"
    local user=""

    if [ -f "$config_file" ]; then
        entrypoint=$(jsonfilter -i "$config_file" -e '@.config.Entrypoint[*]' 2>/dev/null | tr '\n' ' ')
        cmd=$(jsonfilter -i "$config_file" -e '@.config.Cmd[*]' 2>/dev/null | tr '\n' ' ')
        workdir=$(jsonfilter -i "$config_file" -e '@.config.WorkingDir' 2>/dev/null)
        user=$(jsonfilter -i "$config_file" -e '@.config.User' 2>/dev/null)
    fi

    # Extract layers
    mkdir -p "$app_dir" "$lxc_path/rootfs"
    log_info "Extracting OCI layers..."

    # Get layer digests and extract in order
    local layers=$(jsonfilter -i "$manifest_file" -e '@.layers[*].digest' 2>/dev/null)
    for layer_digest in $layers; do
        layer_digest="${layer_digest#sha256:}"
        local layer_file="$oci_dir/blobs/sha256/$layer_digest"
        if [ -f "$layer_file" ]; then
            log_info "  Extracting layer: ${layer_digest:0:12}..."
            tar xf "$layer_file" -C "$lxc_path/rootfs" 2>/dev/null
        fi
    done

    # Generate LXC config
    _generate_lxc_config "$name" "$memory" "$entrypoint" "$cmd" "$workdir" "$user" "oci:$oci_dir" "$network" "" "" ""
}
# Create LXC container from tarball
_create_lxc_from_tar() {
    local tarball="$1"
    local name="$2"
    local memory="$3"
    local entrypoint="$4"
    local cmd="$5"
    local workdir="$6"
    local user="$7"
    local source="$8"
    local network="$9"
    local ports="${10}"
    local env_vars="${11}"
    local exposed="${12}"

    local app_dir="$APPS_DIR/$name"
    local lxc_path="$LXC_DIR/$name"

    # Create LXC rootfs
    log_info "Creating LXC container rootfs..."
    rm -rf "$lxc_path"
-    mkdir -p "$lxc_path/rootfs"
-    tar xf "$tarball" -C "$lxc_path/rootfs"
+    mkdir -p "$lxc_path/rootfs" "$app_dir"

-    # Step 6: Generate start script
+    log_info "Extracting filesystem (this may take a while)..."
+    tar xf "$tarball" -C "$lxc_path/rootfs" 2>/dev/null

    # Ensure /bin/sh exists (some images use busybox or ash)
    if [ ! -e "$lxc_path/rootfs/bin/sh" ]; then
        if [ -e "$lxc_path/rootfs/bin/bash" ]; then
            ln -sf bash "$lxc_path/rootfs/bin/sh"
        elif [ -e "$lxc_path/rootfs/bin/busybox" ]; then
            ln -sf busybox "$lxc_path/rootfs/bin/sh"
        fi
    fi

    _generate_lxc_config "$name" "$memory" "$entrypoint" "$cmd" "$workdir" "$user" "$source" "$network" "$ports" "$env_vars" "$exposed"
}
# Generate LXC config and metadata
_generate_lxc_config() {
    local name="$1"
    local memory="$2"
    local entrypoint="$3"
    local cmd="$4"
    local workdir="$5"
    local user="$6"
    local source="$7"
    local network="$8"
    local ports="$9"
    local env_vars="${10}"
    local exposed_ports="${11}"

    local app_dir="$APPS_DIR/$name"
    local lxc_path="$LXC_DIR/$name"

    # Generate start script
    log_info "Generating start script..."
    local start_script="$lxc_path/rootfs/start-lxc.sh"

    # Build the exec command
    local exec_cmd=""
    if [ -n "$entrypoint" ]; then
        exec_cmd="$entrypoint"
        [ -n "$cmd" ] && exec_cmd="$exec_cmd $cmd"
    elif [ -n "$cmd" ]; then
        exec_cmd="$cmd"
    else
        exec_cmd="/bin/sh"
    fi

    cat > "$start_script" << STARTEOF
#!/bin/sh
-# Auto-generated by RezApp Forge
+# Auto-generated by RezApp Forge from $source

# Set working directory
cd ${workdir:-/}

# Create common directories
mkdir -p /config /data /tmp 2>/dev/null

# Export environment
export HOME=\${HOME:-/root}
export PATH=\${PATH:-/usr/local/bin:/usr/bin:/bin}

# Run entrypoint/cmd
-exec ${entrypoint:-${cmd:-/bin/sh}}
+exec $exec_cmd
STARTEOF
    chmod +x "$start_script"

-    # Step 7: Generate LXC config
-    log_info "Generating LXC config..."

    # Parse user for UID/GID
    local uid="0"
    local gid="0"
    if [ -n "$user" ] && [ "$user" != "root" ]; then
-        uid="${user%%:*}"
-        gid="${user#*:}"
-        [ "$gid" = "$user" ] && gid="$uid"
+        # Handle numeric or name:name format
+        case "$user" in
+            *:*)
+                uid="${user%%:*}"
+                gid="${user#*:}"
+                ;;
+            [0-9]*)
+                uid="$user"
+                gid="$user"
+                ;;
+            *)
+                # Named user - keep as 0 for now
+                uid="0"
+                gid="0"
+                ;;
+        esac
    fi

    # Convert memory to bytes
@@ -278,39 +548,118 @@ STARTEOF
    case "$memory" in
        *G) mem_bytes=$(( ${memory%G} * 1073741824 )) ;;
        *M) mem_bytes=$(( ${memory%M} * 1048576 )) ;;
        *K) mem_bytes=$(( ${memory%K} * 1024 )) ;;
        *) mem_bytes="$memory" ;;
    esac

    log_info "Generating LXC config (network: $network)..."

    # Start LXC config
    cat > "$lxc_path/config" << LXCEOF
-# LXC config for $name (auto-generated by RezApp Forge)
+# LXC config for $name
+# Auto-generated by RezApp Forge from $source

lxc.uts.name = $name
lxc.rootfs.path = dir:$lxc_path/rootfs
-lxc.net.0.type = none
-lxc.init.cmd = /start-lxc.sh

# Filesystem mounts
lxc.mount.auto = proc:mixed sys:ro
lxc.mount.entry = /srv/$name config none bind,create=dir 0 0
-lxc.cap.drop = sys_admin sys_module mac_admin mac_override
lxc.mount.entry = /srv/$name data none bind,create=dir 0 0

# Resource limits
lxc.cgroup2.memory.max = $mem_bytes

# User/Group
lxc.init.uid = $uid
lxc.init.gid = $gid
lxc.init.cmd = /start-lxc.sh

# Drop dangerous capabilities
lxc.cap.drop = sys_admin sys_module sys_boot sys_rawio mac_admin mac_override

# TTY configuration
lxc.console.size = 1024
lxc.pty.max = 1024
lxc.tty.max = 4

# Device access
lxc.cgroup2.devices.allow = c 1:* rwm
lxc.cgroup2.devices.allow = c 5:* rwm
lxc.cgroup2.devices.allow = c 136:* rwm

# Disable seccomp for compatibility
lxc.seccomp.profile =

# Autostart disabled by default
lxc.start.auto = 0

LXCEOF
-    # Step 8: Create data directory
-    mkdir -p "/srv/$name"
-    [ "$uid" != "0" ] && chown "$uid:$gid" "/srv/$name"

    # Add network configuration based on mode
    case "$network" in
        host)
            cat >> "$lxc_path/config" << NETEOF
# Network: Share host namespace
lxc.namespace.share.net = 1
NETEOF
            ;;
        bridge)
            cat >> "$lxc_path/config" << NETEOF
# Network: Bridge mode
lxc.net.0.type = veth
lxc.net.0.link = br-lan
lxc.net.0.flags = up
lxc.net.0.name = eth0
NETEOF
            ;;
        none)
            cat >> "$lxc_path/config" << NETEOF
# Network: None (isolated)
lxc.net.0.type = none
NETEOF
            ;;
        *)
            # Default to host network for simplicity
            cat >> "$lxc_path/config" << NETEOF
# Network: Share host namespace (default)
lxc.namespace.share.net = 1
NETEOF
            ;;
    esac

-    # Step 9: Save metadata
    # Add environment variables
    cat >> "$lxc_path/config" << ENVEOF

# Environment variables
lxc.environment = PUID=$uid
lxc.environment = PGID=$gid
lxc.environment = TZ=Europe/Paris
ENVEOF

    # Add extracted Docker ENV vars
    if [ -n "$env_vars" ]; then
        echo "" >> "$lxc_path/config"
        echo "# Docker ENV defaults" >> "$lxc_path/config"
        echo "$env_vars" | tr '|' '\n' | while read -r env; do
            [ -n "$env" ] && echo "lxc.environment = $env" >> "$lxc_path/config"
        done
    fi

    # Create data directory
    mkdir -p "/srv/$name"
    [ "$uid" != "0" ] && chown "$uid:$gid" "/srv/$name" 2>/dev/null

    # Auto-detect ports from Docker EXPOSE
    local detected_ports=""
    if [ -n "$exposed_ports" ]; then
        detected_ports=$(echo "$exposed_ports" | sed 's|[{}"]||g' | tr ',' '\n' | sed 's|/tcp||g; s|/udp||g' | tr '\n' ' ')
    fi

    # Save metadata
    cat > "$app_dir/metadata.json" << METAEOF
{
    "name": "$name",
    "source_image": "$full_image",
    "source": "$source",
    "converted_at": "$(date -Iseconds)",
    "entrypoint": "$entrypoint",
    "cmd": "$cmd",
@@ -320,6 +669,8 @@ LXCEOF
    "gid": "$gid",
    "memory": "$memory",
    "network": "$network",
    "ports": "$ports",
    "exposed_ports": "$detected_ports",
    "lxc_path": "$lxc_path",
    "data_path": "/srv/$name"
}
@@ -328,13 +679,53 @@ METAEOF
    log_info "Conversion complete!"
    echo ""
    echo "Container: $name"
    echo " LXC Path: $lxc_path"
    echo " Data Path: /srv/$name"
    echo " Network: $network"
    [ -n "$detected_ports" ] && echo " Ports: $detected_ports"
    echo ""
    echo "To test: lxc-start -n $name -F"
    echo "To package: rezappctl package $name"
}

# ==========================================
# Import Command (download image for offline use)
# ==========================================

cmd_import() {
    local image="$1"
    local tag="${2:-latest}"

    [ -z "$image" ] && { log_error "Image name required"; return 1; }

    local name="${image##*/}"
    name="${name%%:*}"
    local full_image="${image}:${tag}"
    local tarball="$CACHE_DIR/${name}.tar"

    log_info "Importing $full_image for offline use..."

    # Initialize container runtime (Docker or Podman)
    init_runtime || return 1

    # Pull and export
    log_info "Pulling image via $CONTAINER_RUNTIME..."
    runtime_pull "$full_image" || { log_error "Failed to pull image"; return 1; }

    log_info "Exporting to cache..."
    mkdir -p "$CACHE_DIR"
    runtime_create --name rezapp-import-$$ "$full_image" >/dev/null 2>&1
    runtime_export rezapp-import-$$ > "$tarball"
    runtime_rm rezapp-import-$$ >/dev/null 2>&1

    # Save inspect data
    runtime_inspect "$full_image" > "$CACHE_DIR/${name}.inspect.json"

    log_info "Image cached: $tarball"
    echo ""
    echo "To convert offline: rezappctl convert --from-tar $tarball --name $name"
}
# ==========================================
# Package Command
# ==========================================
@@ -592,7 +983,8 @@ cmd_list() {
        [ -f "$app" ] || continue
        local name=$(dirname "$app")
        name="${name##*/}"
-        local image=$(jsonfilter -i "$app" -e '@.source_image')
+        local image=$(jsonfilter -i "$app" -e '@.source_image' 2>/dev/null)
+        [ -z "$image" ] && image=$(jsonfilter -i "$app" -e '@.source' 2>/dev/null)
        printf " %-20s %s\n" "$name" "$image"
    done
else
@@ -600,13 +992,164 @@ cmd_list() {
    fi
}
# ==========================================
|
||||
# Cache Command (show cached images)
|
||||
# ==========================================
|
||||
|
||||
cmd_cache() {
|
||||
echo "Cached Images (offline ready):"
|
||||
echo "==============================="
|
||||
|
||||
if [ -d "$CACHE_DIR" ]; then
|
||||
local found=0
|
||||
for tarball in "$CACHE_DIR"/*.tar; do
|
||||
[ -f "$tarball" ] || continue
|
||||
found=1
|
||||
local name=$(basename "$tarball" .tar)
|
||||
local size=$(du -h "$tarball" | cut -f1)
|
||||
local inspect="$CACHE_DIR/${name}.inspect.json"
|
||||
local image=""
|
||||
if [ -f "$inspect" ]; then
|
||||
image=$(jsonfilter -i "$inspect" -e '@[0].RepoTags[0]' 2>/dev/null)
|
||||
fi
|
||||
printf " %-20s %8s %s\n" "$name" "$size" "${image:-local}"
|
||||
done
|
||||
[ "$found" = "0" ] && echo " (none)"
|
||||
else
|
||||
echo " (none)"
|
||||
fi
|
||||
echo ""
|
||||
echo "Convert offline: rezappctl convert --from-tar <file> --name <name>"
|
||||
}
|
||||
|
||||
# ==========================================
|
||||
# Run Command (start/test container)
|
||||
# ==========================================
|
||||
|
||||
cmd_run() {
|
||||
local name="$1"
|
||||
local foreground=""
|
||||
local shell=""
|
||||
|
||||
shift
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
-f|--foreground) foreground="1"; shift ;;
|
||||
-s|--shell) shell="1"; shift ;;
|
||||
*) shift ;;
|
||||
esac
|
||||
done
|
||||
|
||||
[ -z "$name" ] && { log_error "App name required"; return 1; }
|
||||
|
||||
local lxc_path="$LXC_DIR/$name"
|
||||
[ ! -d "$lxc_path" ] && { log_error "Container not found: $name"; return 1; }
|
||||
|
||||
# Check if already running
|
||||
if lxc-info -n "$name" 2>/dev/null | grep -q "RUNNING"; then
|
||||
if [ -n "$shell" ]; then
|
||||
log_info "Opening shell in running container..."
|
||||
lxc-attach -n "$name" -- /bin/sh
|
||||
return $?
|
||||
fi
|
||||
log_warn "Container already running"
|
||||
lxc-info -n "$name"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [ -n "$foreground" ]; then
|
||||
log_info "Starting $name in foreground (Ctrl+C to stop)..."
|
||||
lxc-start -n "$name" -F
|
||||
else
|
||||
log_info "Starting $name..."
|
||||
lxc-start -n "$name" -d
|
||||
sleep 3
|
||||
lxc-info -n "$name"
|
||||
fi
|
||||
}
|
||||
|
||||
cmd_stop() {
|
||||
local name="$1"
|
||||
[ -z "$name" ] && { log_error "App name required"; return 1; }
|
||||
|
||||
if lxc-info -n "$name" 2>/dev/null | grep -q "RUNNING"; then
|
||||
log_info "Stopping $name..."
|
||||
lxc-stop -n "$name"
|
||||
else
|
||||
log_warn "Container not running"
|
||||
fi
|
||||
}
|
||||
|
||||
# ==========================================
|
||||
# Expose Command (HAProxy integration)
|
||||
# ==========================================
|
||||
|
||||
cmd_expose() {
|
||||
local name="$1"
|
||||
local domain="$2"
|
||||
local port="$3"
|
||||
|
||||
[ -z "$name" ] && { log_error "App name required"; return 1; }
|
||||
|
||||
local meta_file="$APPS_DIR/$name/metadata.json"
|
||||
[ ! -f "$meta_file" ] && { log_error "App not found: $name"; return 1; }
|
||||
|
||||
# Auto-detect port from metadata if not specified
|
||||
if [ -z "$port" ]; then
|
||||
port=$(jsonfilter -i "$meta_file" -e '@.exposed_ports' 2>/dev/null | awk '{print $1}')
|
||||
fi
|
||||
[ -z "$port" ] && { log_error "Port required (or specify in metadata)"; return 1; }
|
||||
|
||||
# Default domain
|
||||
[ -z "$domain" ] && domain="${name}.gk2.secubox.in"
|
||||
|
||||
log_info "Exposing $name on $domain (port $port)..."
|
||||
|
||||
# Check for haproxyctl
|
||||
if ! command -v haproxyctl >/dev/null 2>&1; then
|
||||
log_warn "haproxyctl not found - manual HAProxy config required"
|
||||
echo ""
|
||||
echo "Add to /srv/haproxy/config/haproxy.cfg:"
|
||||
echo " acl host_${name} hdr(host) -i $domain"
|
||||
echo " use_backend ${name}_backend if host_${name}"
|
||||
echo ""
|
||||
echo " backend ${name}_backend"
|
||||
echo " mode http"
|
||||
echo " server ${name} 192.168.255.1:$port check"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Add vhost via haproxyctl
|
||||
haproxyctl vhost add "$domain"
|
||||
|
||||
# Add route to mitmproxy
|
||||
local routes_file="/srv/mitmproxy-in/haproxy-routes.json"
|
||||
if [ -f "$routes_file" ]; then
|
||||
log_info "Adding mitmproxy route..."
|
||||
# Use Python to safely add to JSON
|
||||
python3 << PYEOF
|
||||
import json
|
||||
with open('$routes_file', 'r') as f:
|
||||
routes = json.load(f)
|
||||
routes['$domain'] = ['192.168.255.1', $port]
|
||||
with open('$routes_file', 'w') as f:
|
||||
json.dump(routes, f, indent=2)
|
||||
print('Route added: $domain -> 192.168.255.1:$port')
|
||||
PYEOF
|
||||
fi
|
||||
|
||||
log_info "Exposed: https://$domain -> 192.168.255.1:$port"
|
||||
echo ""
|
||||
echo "Restart mitmproxy to apply: /etc/init.d/mitmproxy restart"
|
||||
}
|
||||
|
||||
# ==========================================
|
||||
# Help
|
||||
# ==========================================
|
||||
|
||||
usage() {
|
||||
cat << EOF
|
||||
RezApp Forge - Docker to SecuBox Converter
|
||||
RezApp Forge - Docker/OCI to SecuBox LXC Converter
|
||||
|
||||
Usage: rezappctl <command> [options]
|
||||
|
||||
@ -616,26 +1159,55 @@ Catalog Commands:
|
||||
catalog remove <name> Remove catalog
|
||||
|
||||
Search:
|
||||
search <query> Search Docker Hub
|
||||
info <image> Show image details
|
||||
search <query> Search Docker Hub (no runtime needed)
|
||||
info <image> Show image details (no runtime needed)
|
||||
|
||||
Import (download for offline use):
|
||||
import <image> [tag] Download and cache image for offline conversion
|
||||
|
||||
Convert:
|
||||
convert <image> [opts] Convert Docker image to LXC
|
||||
convert <image> [opts] Convert Docker/OCI image to LXC
|
||||
--name <name> App name (default: from image)
|
||||
--tag <tag> Image tag (default: latest)
|
||||
--memory <limit> Memory limit (default: 512M)
|
||||
--network <type> Network type (default: host)
|
||||
--network <type> Network: host, bridge, none (default: host)
|
||||
|
||||
Package:
|
||||
Offline mode (no Docker/Podman needed):
|
||||
--from-tar <file> Convert from local tarball
|
||||
--from-oci <dir> Convert from OCI image directory
|
||||
--offline Use cached tarball if available
|
||||
|
||||
Run & Manage:
|
||||
run <name> [-f] [-s] Start container (-f=foreground, -s=shell)
|
||||
stop <name> Stop container
|
||||
list Show converted apps
|
||||
cache Show cached images
|
||||
|
||||
Package & Publish:
|
||||
package <name> Generate SecuBox package
|
||||
publish <name> Add to app catalog
|
||||
|
||||
List:
|
||||
list Show converted apps
|
||||
Expose (HAProxy integration):
|
||||
expose <name> [domain] [port] Expose via HAProxy/mitmproxy
|
||||
|
||||
Runtime: Uses Docker, falls back to Podman if unavailable.
|
||||
For offline conversion, use --from-tar or --from-oci flags.
|
||||
|
||||
Examples:
|
||||
# Online workflow
|
||||
rezappctl search heimdall
|
||||
rezappctl convert linuxserver/heimdall --name heimdall
|
||||
rezappctl run heimdall -f # Test in foreground
|
||||
rezappctl expose heimdall # Expose via HAProxy
|
||||
|
||||
# Import for offline use
|
||||
rezappctl import linuxserver/heimdall latest
|
||||
rezappctl cache # Show cached images
|
||||
|
||||
# Offline workflow (no Docker needed)
|
||||
rezappctl convert --from-tar /srv/rezapp/cache/heimdall.tar --name heimdall
|
||||
|
||||
# Package for distribution
|
||||
rezappctl package heimdall
|
||||
rezappctl publish heimdall
|
||||
|
||||
@ -660,10 +1232,15 @@ case "$1" in
|
||||
;;
|
||||
search) shift; cmd_search "$@" ;;
|
||||
info) shift; cmd_info "$@" ;;
|
||||
import) shift; cmd_import "$@" ;;
|
||||
convert) shift; cmd_convert "$@" ;;
|
||||
run) shift; cmd_run "$@" ;;
|
||||
stop) shift; cmd_stop "$@" ;;
|
||||
package) shift; cmd_package "$@" ;;
|
||||
publish) shift; cmd_publish "$@" ;;
|
||||
expose) shift; cmd_expose "$@" ;;
|
||||
list) cmd_list ;;
|
||||
cache) cmd_cache ;;
|
||||
help|--help|-h) usage ;;
|
||||
*) usage ;;
|
||||
esac
|
||||
|
||||
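The help text above says rezappctl uses Docker and falls back to Podman. The body of `init_runtime` (referenced by `cmd_import`) is outside this hunk, so as a hedged illustration only, here is the selection logic it implies, modeled in Python; the function name and candidate order are assumptions, not the script's actual code.

```python
import shutil

def pick_runtime(candidates=("docker", "podman")):
    """Return the first container runtime found on PATH, or None.

    Models the Docker-first, Podman-fallback behavior described in
    the rezappctl help text; a missing runtime yields None so the
    caller can bail out (as init_runtime's `|| return 1` does).
    """
    for rt in candidates:
        if shutil.which(rt):
            return rt
    return None
```

A caller would then refuse online operations when `pick_runtime()` returns `None` and point the user at the `--from-tar` / `--from-oci` offline flags.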
55
package/secubox/secubox-app-streamlit-control/Makefile
Normal file
@@ -0,0 +1,55 @@
include $(TOPDIR)/rules.mk

PKG_NAME:=secubox-app-streamlit-control
PKG_VERSION:=1.0.0
PKG_RELEASE:=1

include $(INCLUDE_DIR)/package.mk

define Package/secubox-app-streamlit-control
  SECTION:=secubox
  CATEGORY:=SecuBox
  SUBMENU:=Apps
  TITLE:=SecuBox Control Dashboard
  DEPENDS:=+secubox-app-streamlit-forge +python3 +python3-requests
  PKGARCH:=all
endef

define Package/secubox-app-streamlit-control/description
  Streamlit-based LuCI replacement dashboard.
  Provides a modern web UI for managing SecuBox services,
  containers, sites, network, and security.
  KISS design inspired by luci-app-metablogizer.
endef

define Package/secubox-app-streamlit-control/conffiles
/etc/config/streamlit-control
endef

define Build/Compile
endef

define Package/secubox-app-streamlit-control/install
	$(INSTALL_DIR) $(1)/etc/config
	$(INSTALL_CONF) ./files/etc/config/streamlit-control $(1)/etc/config/

	$(INSTALL_DIR) $(1)/usr/share/streamlit-control
	$(CP) ./files/usr/share/streamlit-control/* $(1)/usr/share/streamlit-control/
endef

define Package/secubox-app-streamlit-control/postinst
#!/bin/sh
[ -n "$${IPKG_INSTROOT}" ] || {
	# Register with Streamlit Forge
	if [ -x /usr/sbin/slforge ]; then
		/usr/sbin/slforge register control \
			--name "SecuBox Control" \
			--port 8531 \
			--path /usr/share/streamlit-control \
			--entry app.py 2>/dev/null || true
	fi
}
exit 0
endef

$(eval $(call BuildPackage,secubox-app-streamlit-control))
@@ -0,0 +1,7 @@
config main 'main'
	option enabled '1'
	option name 'SecuBox Control'
	option port '8531'
	option path '/usr/share/streamlit-control'
	option entry 'app.py'
	option description 'Streamlit-based LuCI replacement dashboard'
@@ -0,0 +1,22 @@
[server]
headless = true
address = "0.0.0.0"
port = 8531
enableCORS = false
enableXsrfProtection = true

[browser]
gatherUsageStats = false
serverAddress = "control.gk2.secubox.in"
serverPort = 443

[theme]
base = "dark"
primaryColor = "#00d4ff"
backgroundColor = "#0a0a1a"
secondaryBackgroundColor = "#0f0f2a"
textColor = "#f0f0f5"

[client]
toolbarMode = "minimal"
showSidebarNavigation = true
@@ -0,0 +1,328 @@
"""
SecuBox Control - Streamlit-based LuCI Dashboard
Main application entry point
"""

import streamlit as st
import sys
import os

# Add lib to path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from lib.auth import require_auth, show_user_menu, can_write
from lib.widgets import page_header, metric_row, badge, status_card, auto_refresh_toggle

# Page configuration
st.set_page_config(
    page_title="SecuBox Control",
    page_icon="🎛️",
    layout="wide",
    initial_sidebar_state="expanded"
)

# Custom CSS for KISS theme
st.markdown("""
<style>
/* Dark theme adjustments */
.stApp {
    background-color: #0a0a1a;
}

/* Card styling */
.status-card {
    background: rgba(255,255,255,0.03);
    border: 1px solid rgba(255,255,255,0.08);
    border-radius: 8px;
    padding: 1em;
    margin: 0.5em 0;
}

/* Badge styling */
.badge {
    display: inline-block;
    padding: 2px 8px;
    border-radius: 4px;
    font-size: 0.85em;
    margin-right: 4px;
}

/* Table styling */
.data-table {
    width: 100%;
    border-collapse: collapse;
}
.data-table th, .data-table td {
    padding: 0.75em;
    text-align: left;
    border-bottom: 1px solid rgba(255,255,255,0.08);
}

/* Button styling */
.action-btn {
    padding: 0.25em 0.5em;
    margin: 2px;
    border-radius: 4px;
    border: 1px solid rgba(255,255,255,0.2);
    background: transparent;
    color: inherit;
    cursor: pointer;
}
.action-btn:hover {
    background: rgba(255,255,255,0.1);
}

/* Sidebar styling */
.css-1d391kg {
    background-color: #0f0f2a;
}

/* Hide Streamlit branding */
#MainMenu {visibility: hidden;}
footer {visibility: hidden;}

/* Primary accent color */
.stButton>button[kind="primary"] {
    background-color: #00d4ff;
    color: #000;
}
</style>
""", unsafe_allow_html=True)

# Require authentication
ubus = require_auth()

# Check if limited user
is_limited = st.session_state.get("is_secubox_user", False)

# Sidebar
st.sidebar.markdown("## 🎛️ SecuBox Control")
st.sidebar.markdown("---")
show_user_menu()
st.sidebar.markdown("---")

# Navigation hint
st.sidebar.markdown("""
**Navigation**
- Use sidebar menu for pages
- Or keyboard: `Ctrl+K`
""")

# Show banner for limited users
if is_limited:
    st.warning("👤 Logged in as SecuBox user with limited permissions. For full access, login as root.")

# Main content - Home Dashboard
page_header("Dashboard", "System overview and quick actions", "🏠")

# Auto-refresh toggle
auto_refresh_toggle("dashboard", intervals=[10, 30, 60])
st.markdown("")

# Fetch system data
with st.spinner("Loading system info..."):
    board_info = ubus.system_board()
    system_info = ubus.system_info()

# System info cards
col1, col2, col3, col4 = st.columns(4)

# Uptime
uptime_seconds = system_info.get("uptime", 0)
uptime_days = uptime_seconds // 86400
uptime_hours = (uptime_seconds % 86400) // 3600
uptime_str = f"{uptime_days}d {uptime_hours}h"

with col1:
    status_card(
        title="Uptime",
        value=uptime_str,
        subtitle="System running",
        icon="⏱️",
        color="#10b981"
    )

# Memory
memory = system_info.get("memory", {})
mem_total = memory.get("total", 1)
mem_free = memory.get("free", 0) + memory.get("buffered", 0) + memory.get("cached", 0)
mem_used_pct = int(100 * (mem_total - mem_free) / mem_total) if mem_total else 0

with col2:
    status_card(
        title="Memory",
        value=f"{mem_used_pct}%",
        subtitle=f"{(mem_total - mem_free) // 1024 // 1024}MB used",
        icon="🧠",
        color="#f59e0b" if mem_used_pct > 80 else "#00d4ff"
    )

# Load average
load = system_info.get("load", [0, 0, 0])
load_1m = load[0] / 65536 if load else 0

with col3:
    status_card(
        title="Load",
        value=f"{load_1m:.2f}",
        subtitle="1 min average",
        icon="📊",
        color="#ef4444" if load_1m > 2 else "#00d4ff"
    )

# Board info
hostname = board_info.get("hostname", "secubox")
model = board_info.get("model", "Unknown")

with col4:
    status_card(
        title="Host",
        value=hostname,
        subtitle=model[:30],
        icon="🖥️",
        color="#7c3aed"
    )

st.markdown("---")

# Service Status Section
st.markdown("### 🔧 Services")

# Fetch service statuses
with st.spinner("Loading services..."):
    try:
        lxc_containers = ubus.lxc_list()
    except:
        lxc_containers = []

    try:
        mitmproxy_status = ubus.mitmproxy_status()
    except:
        mitmproxy_status = {}

    try:
        haproxy_status = ubus.haproxy_status()
    except:
        haproxy_status = {}

# Count running containers
running_containers = sum(1 for c in lxc_containers if c.get("state") == "RUNNING")
total_containers = len(lxc_containers)

# Service cards row
col1, col2, col3, col4 = st.columns(4)

with col1:
    st.markdown(f"""
    <div class="status-card">
        <div style="font-size:1.5em;">📦 Containers</div>
        <div style="font-size:2em; color:#00d4ff;">{running_containers}/{total_containers}</div>
        <div style="color:#888;">Running / Total</div>
    </div>
    """, unsafe_allow_html=True)

with col2:
    waf_running = mitmproxy_status.get("running", False)
    waf_color = "#10b981" if waf_running else "#ef4444"
    waf_text = "Running" if waf_running else "Stopped"
    st.markdown(f"""
    <div class="status-card">
        <div style="font-size:1.5em;">🛡️ WAF</div>
        <div style="font-size:2em; color:{waf_color};">{waf_text}</div>
        <div style="color:#888;">mitmproxy-in</div>
    </div>
    """, unsafe_allow_html=True)

with col3:
    haproxy_running = haproxy_status.get("running", False)
    hp_color = "#10b981" if haproxy_running else "#ef4444"
    hp_text = "Running" if haproxy_running else "Stopped"
    vhost_count = haproxy_status.get("vhost_count", 0)
    st.markdown(f"""
    <div class="status-card">
        <div style="font-size:1.5em;">🌐 HAProxy</div>
        <div style="font-size:2em; color:{hp_color};">{hp_text}</div>
        <div style="color:#888;">{vhost_count} vhosts</div>
    </div>
    """, unsafe_allow_html=True)

with col4:
    st.markdown(f"""
    <div class="status-card">
        <div style="font-size:1.5em;">🔒 SSL</div>
        <div style="font-size:2em; color:#10b981;">Active</div>
        <div style="color:#888;">Let's Encrypt</div>
    </div>
    """, unsafe_allow_html=True)

st.markdown("---")

# Quick Actions Section
st.markdown("### ⚡ Quick Actions")

col1, col2, col3, col4 = st.columns(4)

with col1:
    if st.button("🌐 Manage Sites", use_container_width=True):
        st.switch_page("pages/2_🌐_Sites.py")

with col2:
    if st.button("📦 Containers", use_container_width=True):
        st.switch_page("pages/4_📦_Containers.py")

with col3:
    if st.button("🛡️ Security", use_container_width=True):
        st.switch_page("pages/6_🛡️_Security.py")

with col4:
    if st.button("⚙️ System", use_container_width=True):
        st.switch_page("pages/7_⚙️_System.py")

st.markdown("---")

# Container List (Quick View)
if lxc_containers:
    st.markdown("### 📦 Container Status")

    # Sort by state (running first)
    sorted_containers = sorted(
        lxc_containers,
        key=lambda x: (0 if x.get("state") == "RUNNING" else 1, x.get("name", ""))
    )

    for container in sorted_containers[:8]:  # Show top 8
        name = container.get("name", "unknown")
        state = container.get("state", "UNKNOWN")

        col1, col2, col3 = st.columns([3, 1, 1])

        with col1:
            st.write(f"**{name}**")

        with col2:
            if state == "RUNNING":
                st.markdown(badge("running"), unsafe_allow_html=True)
            else:
                st.markdown(badge("stopped"), unsafe_allow_html=True)

        with col3:
            if can_write():
                if state == "RUNNING":
                    if st.button("Stop", key=f"stop_{name}", type="secondary"):
                        with st.spinner(f"Stopping {name}..."):
                            ubus.lxc_stop(name)
                        st.rerun()
                else:
                    if st.button("Start", key=f"start_{name}", type="primary"):
                        with st.spinner(f"Starting {name}..."):
                            ubus.lxc_start(name)
                        st.rerun()
            else:
                st.caption("View only")

    if len(lxc_containers) > 8:
        st.caption(f"... and {len(lxc_containers) - 8} more containers")

# Footer
st.markdown("---")
st.caption("SecuBox Control v1.0 - Streamlit-based Dashboard")
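The dashboard above divides the raw load value by 65536. That is because ubus `system info` reports the kernel's `sysinfo(2)` load averages, which are fixed-point integers scaled by 2^16. A standalone sketch of the conversion (the helper name is ours, not part of the commit):

```python
# ubus "system info" exposes sysinfo(2) load averages as integers
# scaled by 2**SI_LOAD_SHIFT (65536); divide to get the familiar
# floating-point 1/5/15-minute values shown in the Load card.
SI_LOAD_SCALE = 1 << 16  # 65536

def load_to_floats(raw_load):
    """Convert a ubus load triple to 1/5/15-minute floats."""
    return [v / SI_LOAD_SCALE for v in raw_load]

print(load_to_floats([65536, 32768, 98304]))  # [1.0, 0.5, 1.5]
```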
@@ -0,0 +1 @@
# Streamlit Control Library
@@ -0,0 +1,140 @@
"""
Authentication module for Streamlit Control
Handles LuCI/ubus session management
"""

import streamlit as st
from typing import Optional
from lib.ubus_client import UbusClient


def get_ubus() -> Optional[UbusClient]:
    """Get authenticated ubus client from session state"""
    if "ubus" in st.session_state and st.session_state.get("authenticated"):
        return st.session_state.ubus
    return None


def require_auth() -> UbusClient:
    """
    Require authentication before showing page content.
    Call this at the top of every page.
    Returns authenticated UbusClient or stops execution.
    """
    # Initialize session state
    if "authenticated" not in st.session_state:
        st.session_state.authenticated = False
    if "ubus" not in st.session_state:
        st.session_state.ubus = UbusClient()

    # Check if already authenticated
    if st.session_state.authenticated:
        # Verify session is still valid
        if st.session_state.ubus.is_authenticated():
            return st.session_state.ubus
        else:
            # Session expired
            st.session_state.authenticated = False

    # Show login form
    show_login()
    st.stop()


def show_login():
    """Display login form"""
    st.set_page_config(
        page_title="SecuBox Control - Login",
        page_icon="🔐",
        layout="centered"
    )

    # Center the login form
    col1, col2, col3 = st.columns([1, 2, 1])

    with col2:
        st.markdown("""
        <div style="text-align:center; margin-bottom:2em;">
            <h1 style="color:#00d4ff;">🔐 SecuBox Control</h1>
            <p style="color:#888;">Streamlit-based System Dashboard</p>
        </div>
        """, unsafe_allow_html=True)

        with st.form("login_form"):
            username = st.text_input(
                "Username",
                value="",
                placeholder="root or SecuBox username"
            )
            password = st.text_input(
                "Password",
                type="password",
                placeholder="Enter password"
            )

            col_a, col_b = st.columns([1, 1])
            with col_b:
                submitted = st.form_submit_button(
                    "Login",
                    type="primary",
                    use_container_width=True
                )

            if submitted:
                if not username or not password:
                    st.error("Please enter username and password")
                else:
                    with st.spinner("Authenticating..."):
                        ubus = UbusClient()
                        if ubus.login(username, password):
                            st.session_state.authenticated = True
                            st.session_state.ubus = ubus
                            st.session_state.username = username
                            st.session_state.is_secubox_user = ubus.is_secubox_user
                            if ubus.is_secubox_user:
                                st.info("Logged in as SecuBox user (limited permissions)")
                            st.rerun()
                        else:
                            st.error("Invalid credentials. Check username and password.")

        st.markdown("""
        <div style="text-align:center; margin-top:2em; color:#666; font-size:0.9em;">
            <p>Login with:</p>
            <p>• <b>root</b> - Full system access</p>
            <p>• <b>SecuBox user</b> - Limited dashboard access</p>
        </div>
        """, unsafe_allow_html=True)


def logout():
    """Logout and clear session"""
    if "ubus" in st.session_state:
        st.session_state.ubus.logout()
    st.session_state.authenticated = False
    st.session_state.pop("ubus", None)
    st.session_state.pop("username", None)
    st.rerun()


def show_user_menu():
    """Show user menu in sidebar"""
    if st.session_state.get("authenticated"):
        username = st.session_state.get("username", "root")
        is_limited = st.session_state.get("is_secubox_user", False)
        role = "Limited" if is_limited else "Admin"
        st.sidebar.markdown(f"👤 **{username}** ({role})")
        if st.sidebar.button("Logout", key="logout_btn"):
            logout()


def can_write() -> bool:
    """
    Check if current user has write permissions.
    SecuBox users (non-root) are read-only.
    """
    return not st.session_state.get("is_secubox_user", False)


def is_admin() -> bool:
    """Check if current user is admin (root or similar)"""
    return not st.session_state.get("is_secubox_user", False)
@@ -0,0 +1,300 @@
"""
Ubus JSON-RPC Client for OpenWrt
Provides Python interface to RPCD/ubus endpoints
"""

import requests
import urllib3
from typing import Any, Dict, List, Optional
import time

# Disable SSL warnings for self-signed certs
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


class UbusClient:
    """JSON-RPC client for OpenWrt ubus"""

    def __init__(self, host: str = "192.168.255.1", port: int = 443, use_ssl: bool = True):
        protocol = "https" if use_ssl else "http"
        self.url = f"{protocol}://{host}:{port}/ubus"
        self.session_id = "00000000000000000000000000000000"
        self.username = None
        self._request_id = 0
        self.verify_ssl = False  # Allow self-signed certs
        self._secubox_token = None  # Token for SecuBox users
        self._is_secubox_user = False

        # Create session with proper settings
        self._session = requests.Session()
        self._session.verify = False
        self._session.trust_env = False  # Ignore proxy env vars

    def _next_id(self) -> int:
        self._request_id += 1
        return self._request_id

    def login(self, username: str, password: str) -> bool:
        """
        Authenticate and get session ID.
        Supports both system users (root) and SecuBox managed users.
        """
        # Try system user login first (root, admin, etc.)
        result = self.call("session", "login", {
            "username": username,
            "password": password
        })
        if result and "ubus_rpc_session" in result:
            self.session_id = result["ubus_rpc_session"]
            self.username = username
            self._is_secubox_user = False
            return True

        # Try SecuBox user authentication if system login fails
        result = self.call("luci.secubox-users", "authenticate", {
            "username": username,
            "password": password
        })
        if result and result.get("success") and "token" in result:
            # SecuBox users get a custom token
            self._secubox_token = result["token"]
            self.username = result.get("username", username)
            self._is_secubox_user = True
            # Use anonymous session for ubus calls (limited permissions)
            # The token validates the user identity for audit purposes
            return True

        return False

    @property
    def is_secubox_user(self) -> bool:
        """Check if logged in as SecuBox managed user (limited permissions)"""
        return self._is_secubox_user

    def logout(self) -> bool:
        """Destroy session"""
        try:
            self.call("session", "destroy", {"session": self.session_id})
            self.session_id = "00000000000000000000000000000000"
            self.username = None
            return True
        except:
            return False

    def is_authenticated(self) -> bool:
        """Check if session is still valid"""
        if self.session_id == "00000000000000000000000000000000":
            return False
        result = self.call("session", "access", {
            "scope": "ubus",
            "object": "system",
            "function": "board"
        })
        return result is not None and result.get("access", False)

    def call(self, obj: str, method: str, params: Dict = None, timeout: int = 30) -> Any:
        """Make ubus JSON-RPC call"""
        payload = {
            "jsonrpc": "2.0",
            "id": self._next_id(),
            "method": "call",
            "params": [self.session_id, obj, method, params or {}]
        }
        try:
            resp = self._session.post(
                self.url,
                json=payload,
                timeout=timeout,
                verify=False
            )
            data = resp.json()
            if "result" in data and len(data["result"]) > 1:
                return data["result"][1]
            elif "result" in data and data["result"][0] == 0:
                return {}  # Success with no data
            return None
        except requests.exceptions.Timeout:
            return {"error": "Request timeout"}
        except requests.exceptions.ConnectionError:
            return {"error": "Connection failed"}
        except Exception as e:
            return {"error": str(e)}
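The `call()` method above collapses the ubus JSON-RPC reply envelope into a payload dict, an empty dict, or `None`. That handling can be exercised in isolation; the sample envelopes below are illustrative, not captured traffic:

```python
# Mirrors the envelope handling in call(): a ubus reply's "result" is
# [status_code, payload]; status 0 with no payload means success with
# empty data, anything else is treated as an error.
def parse_ubus_result(data):
    result = data.get("result")
    if isinstance(result, list) and len(result) > 1:
        return result[1]   # payload present
    if isinstance(result, list) and result and result[0] == 0:
        return {}          # success, no data
    return None            # error status or malformed reply

ok = {"jsonrpc": "2.0", "id": 1, "result": [0, {"uptime": 1234}]}
no_data = {"jsonrpc": "2.0", "id": 2, "result": [0]}
denied = {"jsonrpc": "2.0", "id": 3, "result": [6]}  # e.g. access denied

print(parse_ubus_result(ok))  # {'uptime': 1234}
```

Note this is also why callers must type-check results: an error-only envelope yields `None` rather than a dict, the same class of surprise the CrowdSec overview.js fix in this commit guards against.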
||||
# ==========================================
|
||||
# System Methods
|
||||
# ==========================================
|
||||
|
||||
def system_board(self) -> Dict:
|
||||
"""Get system board info"""
|
||||
return self.call("system", "board") or {}
|
||||
|
||||
def system_info(self) -> Dict:
|
||||
"""Get system info (memory, uptime, load)"""
|
||||
return self.call("system", "info") or {}
|
||||
|
||||
def file_read(self, path: str) -> Optional[str]:
|
||||
"""Read file contents"""
|
||||
result = self.call("file", "read", {"path": path})
|
||||
return result.get("data") if result else None
|
||||
|
||||
def file_exec(self, command: str, params: List[str] = None) -> Dict:
|
||||
"""Execute command"""
|
||||
return self.call("file", "exec", {
|
||||
"command": command,
|
||||
"params": params or []
|
||||
}) or {}
|
||||
|
||||
    # ==========================================
    # Metablogizer Methods
    # ==========================================

    def metablogizer_status(self) -> Dict:
        """Get metablogizer status"""
        return self.call("luci.metablogizer", "status") or {}

    def metablogizer_list_sites(self) -> List[Dict]:
        """List all metablogizer sites"""
        result = self.call("luci.metablogizer", "list_sites")
        return result.get("sites", []) if result else []

    def metablogizer_exposure_status(self) -> List[Dict]:
        """Get exposure status for all sites"""
        result = self.call("luci.metablogizer", "get_sites_exposure_status")
        return result.get("sites", []) if result else []

    def metablogizer_create_site(self, name: str, domain: str, description: str = "") -> Dict:
        """Create a new site"""
        return self.call("luci.metablogizer", "create_site", {
            "name": name,
            "domain": domain,
            "gitea_repo": "",
            "ssl": "1",
            "description": description
        }) or {}

    def metablogizer_delete_site(self, site_id: str) -> Dict:
        """Delete a site"""
        return self.call("luci.metablogizer", "delete_site", {"id": site_id}) or {}

    def metablogizer_emancipate(self, site_id: str) -> Dict:
        """Expose site with HAProxy + SSL"""
        return self.call("luci.metablogizer", "emancipate", {"id": site_id}) or {}

    def metablogizer_emancipate_status(self, job_id: str) -> Dict:
        """Check emancipate job status"""
        return self.call("luci.metablogizer", "emancipate_status", {"job_id": job_id}) or {}

    def metablogizer_unpublish(self, site_id: str) -> Dict:
        """Unpublish site"""
        return self.call("luci.metablogizer", "unpublish_site", {"id": site_id}) or {}

    def metablogizer_health_check(self, site_id: str) -> Dict:
        """Check site health"""
        return self.call("luci.metablogizer", "check_site_health", {"id": site_id}) or {}

    def metablogizer_repair(self, site_id: str) -> Dict:
        """Repair site"""
        return self.call("luci.metablogizer", "repair_site", {"id": site_id}) or {}

    def metablogizer_set_auth(self, site_id: str, auth_required: bool) -> Dict:
        """Set authentication requirement"""
        return self.call("luci.metablogizer", "set_auth_required", {
            "id": site_id,
            "auth_required": "1" if auth_required else "0"
        }) or {}

    def metablogizer_upload_file(self, site_id: str, filename: str, content: str) -> Dict:
        """Upload file to site (base64 content)"""
        return self.call("luci.metablogizer", "upload_file", {
            "id": site_id,
            "filename": filename,
            "content": content
        }) or {}
    # ==========================================
    # LXC Container Methods (via secubox-portal)
    # ==========================================

    def lxc_list(self) -> List[Dict]:
        """List LXC containers via secubox-portal"""
        result = self.call("luci.secubox-portal", "get_containers")
        containers = result.get("containers", []) if result else []
        # Normalize state format (secubox-portal uses lowercase)
        for c in containers:
            if "state" in c:
                c["state"] = c["state"].upper()
        return containers

    def lxc_info(self, name: str) -> Dict:
        """Get container info (basic)"""
        containers = self.lxc_list()
        for c in containers:
            if c.get("name") == name:
                return c
        return {}

    def lxc_start(self, name: str) -> Dict:
        """Start container (requires root session)"""
        return self.call("file", "exec", {
            "command": "/usr/bin/lxc-start",
            "params": ["-n", name]
        }) or {}

    def lxc_stop(self, name: str) -> Dict:
        """Stop container (requires root session)"""
        return self.call("file", "exec", {
            "command": "/usr/bin/lxc-stop",
            "params": ["-n", name]
        }) or {}
    # ==========================================
    # HAProxy Methods
    # ==========================================

    def haproxy_status(self) -> Dict:
        """Get HAProxy status"""
        return self.call("luci.haproxy", "status") or {}

    def haproxy_list_vhosts(self) -> List[Dict]:
        """List HAProxy vhosts"""
        result = self.call("luci.haproxy", "list_vhosts")
        return result.get("vhosts", []) if result else []

    # ==========================================
    # Mitmproxy WAF Methods
    # ==========================================

    def mitmproxy_status(self) -> Dict:
        """Get mitmproxy WAF status"""
        return self.call("luci.mitmproxy", "status") or {}

    def mitmproxy_threats(self, limit: int = 20) -> List[Dict]:
        """Get recent threats"""
        result = self.call("luci.mitmproxy", "get_threats", {"limit": limit})
        return result.get("threats", []) if result else []

    # ==========================================
    # CrowdSec Methods (via crowdsec-dashboard)
    # ==========================================

    def crowdsec_status(self) -> Dict:
        """Get CrowdSec status"""
        return self.call("luci.crowdsec-dashboard", "status") or {}

    def crowdsec_decisions(self, limit: int = 20) -> List[Dict]:
        """Get active decisions"""
        result = self.call("luci.crowdsec", "get_decisions", {"limit": limit})
        return result.get("decisions", []) if result else []

    # ==========================================
    # Streamlit Forge Methods
    # ==========================================

    def streamlit_list(self) -> List[Dict]:
        """List Streamlit apps"""
        result = self.call("luci.streamlit-forge", "list")
        return result.get("apps", []) if result else []

    def streamlit_status(self, app_id: str) -> Dict:
        """Get Streamlit app status"""
        return self.call("luci.streamlit-forge", "status", {"id": app_id}) or {}
@@ -0,0 +1,569 @@
"""
|
||||
KISS-themed UI widgets for Streamlit Control
|
||||
Inspired by luci-app-metablogizer design
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
from typing import List, Dict, Callable, Optional, Any
|
||||
import io
|
||||
|
||||
# Try to import qrcode, fallback gracefully
|
||||
try:
|
||||
import qrcode
|
||||
HAS_QRCODE = True
|
||||
except ImportError:
|
||||
HAS_QRCODE = False
|
||||
|
||||
|
||||
# ==========================================
|
||||
# Status Badges
|
||||
# ==========================================
|
||||
|
||||
BADGE_STYLES = {
|
||||
"running": ("Running", "#d4edda", "#155724"),
|
||||
"stopped": ("Stopped", "#f8d7da", "#721c24"),
|
||||
"ssl_ok": ("SSL OK", "#d4edda", "#155724"),
|
||||
"ssl_warn": ("SSL Warn", "#fff3cd", "#856404"),
|
||||
"ssl_none": ("No SSL", "#f8d7da", "#721c24"),
|
||||
"private": ("Private", "#e2e3e5", "#383d41"),
|
||||
"auth": ("Auth", "#cce5ff", "#004085"),
|
||||
"waf": ("WAF", "#d1ecf1", "#0c5460"),
|
||||
"error": ("Error", "#f8d7da", "#721c24"),
|
||||
"warning": ("Warning", "#fff3cd", "#856404"),
|
||||
"success": ("OK", "#d4edda", "#155724"),
|
||||
"info": ("Info", "#cce5ff", "#004085"),
|
||||
"empty": ("Empty", "#fff3cd", "#856404"),
|
||||
}
|
||||
|
||||
|
||||
def badge(status: str, label: Optional[str] = None) -> str:
    """
    Return HTML for a colored status badge.

    Args:
        status: Badge type (running, stopped, ssl_ok, etc.)
        label: Optional custom label (overrides default)
    """
    default_label, bg, color = BADGE_STYLES.get(
        status, ("Unknown", "#f8f9fa", "#6c757d")
    )
    text = label or default_label
    return f'<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:{bg};color:{color};font-size:0.85em;margin-right:4px">{text}</span>'


def badges_html(*args) -> str:
    """Combine multiple badges into a single HTML string"""
    return "".join(args)


def show_badge(status: str, label: Optional[str] = None):
    """Display a single badge using st.markdown"""
    st.markdown(badge(status, label), unsafe_allow_html=True)


# ==========================================
# Status Cards
# ==========================================
def status_card(title: str, value: Any, subtitle: str = "", icon: str = "", color: str = "#00d4ff"):
    """Display a metric card with KISS styling."""
    st.markdown(f"""
    <div style="
        background: rgba(255,255,255,0.03);
        border: 1px solid rgba(255,255,255,0.08);
        border-radius: 8px;
        padding: 1em;
        text-align: center;
    ">
        <div style="font-size:2em; color:{color};">{icon} {value}</div>
        <div style="font-size:1.1em; font-weight:500; margin-top:0.3em;">{title}</div>
        <div style="font-size:0.85em; color:#888;">{subtitle}</div>
    </div>
    """, unsafe_allow_html=True)


def metric_row(metrics: List[Dict]):
    """
    Display a row of metric cards.

    Args:
        metrics: List of dicts with keys: title, value, subtitle, icon, color
    """
    cols = st.columns(len(metrics))
    for col, m in zip(cols, metrics):
        with col:
            status_card(
                title=m.get("title", ""),
                value=m.get("value", ""),
                subtitle=m.get("subtitle", ""),
                icon=m.get("icon", ""),
                color=m.get("color", "#00d4ff")
            )


# ==========================================
# Data Tables
# ==========================================
def status_table(
    data: List[Dict],
    columns: Dict[str, str],
    badge_columns: Optional[Dict[str, Callable]] = None,
    key_prefix: str = "table"
):
    """
    Display a data table with status badges.

    Args:
        data: List of row dicts
        columns: Map of key -> display name
        badge_columns: Map of key -> function(value, row) returning badge HTML
        key_prefix: Unique prefix for widget keys
    """
    if not data:
        st.info("No data to display")
        return

    # Build header
    header_cols = st.columns(len(columns))
    for i, (key, name) in enumerate(columns.items()):
        header_cols[i].markdown(f"**{name}**")

    st.markdown("---")

    # Build rows
    for idx, row in enumerate(data):
        row_cols = st.columns(len(columns))
        for i, (key, name) in enumerate(columns.items()):
            value = row.get(key, "")

            # Apply badge function if defined
            if badge_columns and key in badge_columns:
                html = badge_columns[key](value, row)
                row_cols[i].markdown(html, unsafe_allow_html=True)
            else:
                row_cols[i].write(value)


# ==========================================
# Action Buttons
# ==========================================
def action_button(
    label: str,
    key: str,
    style: str = "default",
    icon: str = "",
    help: Optional[str] = None,
    disabled: bool = False
) -> bool:
    """
    Styled action button.

    Args:
        style: default, primary, danger, warning
    """
    button_type = "primary" if style == "primary" else "secondary"
    full_label = f"{icon} {label}".strip() if icon else label
    return st.button(full_label, key=key, type=button_type, help=help, disabled=disabled)


def action_buttons_row(actions: List[Dict], row_data: Dict, key_prefix: str):
    """
    Display a row of action buttons.

    Args:
        actions: List of {label, callback, style, icon, help}
        row_data: Data to pass to callbacks
        key_prefix: Unique key prefix
    """
    cols = st.columns(len(actions))
    for i, action in enumerate(actions):
        with cols[i]:
            key = f"{key_prefix}_{action['label']}_{i}"
            if action_button(
                label=action.get("label", ""),
                key=key,
                style=action.get("style", "default"),
                icon=action.get("icon", ""),
                help=action.get("help")
            ):
                if "callback" in action:
                    action["callback"](row_data)


# ==========================================
# Modals / Dialogs
# ==========================================
def confirm_dialog(
    title: str,
    message: str,
    confirm_label: str = "Confirm",
    cancel_label: str = "Cancel",
    danger: bool = False
) -> Optional[bool]:
    """
    Show a confirmation dialog.
    Returns True if confirmed, False if cancelled, None if not interacted.
    """
    with st.expander(title, expanded=True):
        if danger:
            st.warning(message)
        else:
            st.info(message)
        col1, col2 = st.columns(2)
        with col1:
            if st.button(cancel_label, key=f"cancel_{title}"):
                return False
        with col2:
            # Streamlit buttons have no "danger" variant; confirm is always primary
            if st.button(confirm_label, key=f"confirm_{title}", type="primary"):
                return True
    return None


# ==========================================
# QR Code & Sharing
# ==========================================
def qr_code_image(url: str, size: int = 200):
    """Generate and display a QR code for a URL"""
    if not HAS_QRCODE:
        st.warning("QR code library not available")
        st.code(url)
        return

    qr = qrcode.QRCode(
        version=1,
        error_correction=qrcode.constants.ERROR_CORRECT_L,
        box_size=10,
        border=2,
    )
    qr.add_data(url)
    qr.make(fit=True)

    img = qr.make_image(fill_color="black", back_color="white")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    buf.seek(0)

    st.image(buf, width=size)
def share_buttons(url: str, title: str = ""):
    """Display social sharing buttons"""
    from urllib.parse import quote

    encoded_url = quote(url)
    encoded_title = quote(title) if title else ""

    col1, col2, col3, col4 = st.columns(4)

    with col1:
        st.link_button(
            "Twitter",
            f"https://twitter.com/intent/tweet?url={encoded_url}&text={encoded_title}",
            use_container_width=True
        )
    with col2:
        st.link_button(
            "Telegram",
            f"https://t.me/share/url?url={encoded_url}&text={encoded_title}",
            use_container_width=True
        )
    with col3:
        st.link_button(
            "WhatsApp",
            f"https://wa.me/?text={encoded_title}%20{encoded_url}",
            use_container_width=True
        )
    with col4:
        st.link_button(
            "Email",
            f"mailto:?subject={encoded_title}&body={encoded_url}",
            use_container_width=True
        )
def share_modal(url: str, title: str = "Share"):
    """Complete share modal with QR code and social buttons"""
    with st.expander(f"📤 {title}", expanded=False):
        st.text_input("URL", value=url, key=f"share_url_{url}", disabled=True)

        col1, col2 = st.columns([1, 2])
        with col1:
            qr_code_image(url, size=150)
        with col2:
            st.markdown("**Share via:**")
            share_buttons(url, title)

        if st.button("📋 Copy URL", key=f"copy_{url}"):
            st.code(url)
            st.success("URL displayed - copy from above")


# ==========================================
# Health Check Display
# ==========================================
def health_status_item(label: str, status: str, detail: str = ""):
    """Display a single health check item"""
    if status in ("ok", "valid", "running"):
        icon = "✓"
        color = "green"
    elif status in ("error", "failed", "stopped"):
        icon = "✗"
        color = "red"
    elif status in ("warning", "expiring"):
        icon = "!"
        color = "orange"
    else:
        icon = "○"
        color = "gray"

    detail_text = f" ({detail})" if detail else ""
    st.markdown(f":{color}[{icon} **{label}**: {status}{detail_text}]")


def health_check_panel(health: Dict):
    """Display health check results panel"""
    st.markdown("### Health Check Results")

    checks = [
        ("Backend", health.get("backend_status", "unknown")),
        ("Frontend", health.get("frontend_status", "unknown")),
        ("SSL", health.get("ssl_status", "unknown"), health.get("ssl_days_remaining", "")),
        ("Content", "ok" if health.get("has_content") else "empty"),
    ]

    for check in checks:
        label = check[0]
        status = check[1]
        detail = check[2] if len(check) > 2 else ""
        health_status_item(label, str(status), str(detail) if detail else "")


# ==========================================
# Progress & Loading
# ==========================================
def async_operation(title: str, operation: Callable, *args, **kwargs):
    """
    Run a blocking operation with a spinner displayed.
    Returns the operation result.
    """
    with st.spinner(title):
        result = operation(*args, **kwargs)
        return result


# ==========================================
# Page Layout Helpers
# ==========================================

def page_header(title: str, description: str = "", icon: str = ""):
    """Standard page header"""
    st.markdown(f"# {icon} {title}" if icon else f"# {title}")
    if description:
        st.markdown(f"*{description}*")
    st.markdown("---")


def section_header(title: str, description: str = ""):
    """Section header within a page"""
    st.markdown(f"### {title}")
    if description:
        st.caption(description)


# ==========================================
# Auto-refresh Component
# ==========================================
def auto_refresh_toggle(key: str = "auto_refresh", intervals: Optional[List[int]] = None):
    """
    Display auto-refresh toggle and interval selector.

    Args:
        key: Session state key prefix
        intervals: List of refresh intervals in seconds

    Returns:
        Tuple of (enabled, interval_seconds)
    """
    import time

    if intervals is None:
        intervals = [5, 10, 30, 60]

    col1, col2, col3 = st.columns([1, 1, 2])

    with col1:
        enabled = st.toggle("Auto-refresh", key=f"{key}_enabled")

    with col2:
        if enabled:
            interval_labels = {5: "5s", 10: "10s", 30: "30s", 60: "1m"}
            interval = st.selectbox(
                "Interval",
                options=intervals,
                format_func=lambda x: interval_labels.get(x, f"{x}s"),
                key=f"{key}_interval",
                label_visibility="collapsed"
            )
        else:
            interval = 30

    with col3:
        if st.button("🔄 Refresh Now", key=f"{key}_manual"):
            st.rerun()

    # Handle auto-refresh
    if enabled:
        # Store last refresh time
        last_key = f"{key}_last_refresh"
        now = time.time()

        if last_key not in st.session_state:
            st.session_state[last_key] = now

        elapsed = now - st.session_state[last_key]
        if elapsed >= interval:
            st.session_state[last_key] = now
            time.sleep(0.1)  # Brief pause to prevent a tight loop
            st.rerun()

        # Show countdown
        remaining = max(0, interval - elapsed)
        st.caption(f"Next refresh in {int(remaining)}s")

    return enabled, interval
def search_filter(items: List[Dict], search_key: str, search_fields: List[str]) -> List[Dict]:
    """
    Filter items based on a search query stored in session state.

    Args:
        items: List of dicts to filter
        search_key: Session state key for the search query
        search_fields: List of dict keys to search in

    Returns:
        Filtered list of items
    """
    query = st.session_state.get(search_key, "").lower().strip()

    if not query:
        return items

    filtered = []
    for item in items:
        for field in search_fields:
            value = str(item.get(field, "")).lower()
            if query in value:
                filtered.append(item)
                break

    return filtered
def filter_toolbar(key_prefix: str, filter_options: Optional[Dict[str, List[str]]] = None):
    """
    Display a search box and optional filter dropdowns.

    Args:
        key_prefix: Unique prefix for session state keys
        filter_options: Dict of filter_name -> list of options

    Returns:
        Dict with search query and selected filters
    """
    cols = st.columns([3] + [1] * len(filter_options or {}))

    with cols[0]:
        search = st.text_input(
            "Search",
            key=f"{key_prefix}_search",
            placeholder="Type to filter...",
            label_visibility="collapsed"
        )

    filters = {"search": search}

    if filter_options:
        for i, (name, options) in enumerate(filter_options.items(), 1):
            with cols[i]:
                selected = st.selectbox(
                    name,
                    options=["All"] + options,
                    key=f"{key_prefix}_filter_{name}",
                    label_visibility="collapsed"
                )
                filters[name] = selected if selected != "All" else None

    return filters


# ==========================================
# Container Card Component
# ==========================================
def container_card(
    name: str,
    state: str,
    memory_mb: int = 0,
    cpu_pct: float = 0,
    ip: str = "",
    actions_enabled: bool = True,
    key_prefix: str = ""
):
    """
    Display a container card with status and actions.

    Returns a dict with triggered actions.
    """
    is_running = state == "RUNNING"
    state_color = "#10b981" if is_running else "#6b7280"

    with st.container():
        col1, col2, col3, col4 = st.columns([3, 1, 1, 2])

        with col1:
            st.markdown(f"**{name}**")
            if ip:
                st.caption(f"IP: {ip}")

        with col2:
            if is_running:
                st.markdown(badge("running"), unsafe_allow_html=True)
            else:
                st.markdown(badge("stopped"), unsafe_allow_html=True)

        with col3:
            if memory_mb > 0:
                st.caption(f"{memory_mb}MB")
            if cpu_pct > 0:
                st.caption(f"{cpu_pct:.1f}%")

        with col4:
            actions = {}
            if actions_enabled:
                c1, c2, c3 = st.columns(3)
                with c1:
                    if is_running:
                        if st.button("⏹️", key=f"{key_prefix}_stop_{name}", help="Stop"):
                            actions["stop"] = True
                    else:
                        if st.button("▶️", key=f"{key_prefix}_start_{name}", help="Start"):
                            actions["start"] = True
                with c2:
                    if st.button("🔄", key=f"{key_prefix}_restart_{name}", help="Restart"):
                        actions["restart"] = True
                with c3:
                    if st.button("ℹ️", key=f"{key_prefix}_info_{name}", help="Info"):
                        actions["info"] = True
            else:
                st.caption("View only")

    return actions
@@ -0,0 +1,7 @@
"""
|
||||
Home Dashboard - Redirects to main app
|
||||
"""
|
||||
import streamlit as st
|
||||
|
||||
# Redirect to main app
|
||||
st.switch_page("app.py")
|
||||
@@ -0,0 +1,380 @@
"""
|
||||
Sites Manager - Metablogizer-style static site management
|
||||
KISS design inspired by luci-app-metablogizer
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import sys
|
||||
import os
|
||||
import base64
|
||||
import time
|
||||
|
||||
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
|
||||
from lib.auth import require_auth, show_user_menu
|
||||
from lib.widgets import (
|
||||
page_header, badge, badges_html, status_card,
|
||||
health_check_panel, qr_code_image, share_buttons
|
||||
)
|
||||
|
||||
st.set_page_config(
|
||||
page_title="Sites - SecuBox Control",
|
||||
page_icon="🌐",
|
||||
layout="wide"
|
||||
)
|
||||
|
||||
# Require authentication
|
||||
ubus = require_auth()
|
||||
|
||||
# Sidebar
|
||||
st.sidebar.markdown("## 🎛️ SecuBox Control")
|
||||
show_user_menu()
|
||||
|
||||
# Page header
|
||||
page_header("Sites", "Static site publisher with HAProxy vhosts and SSL", "🌐")
|
||||
|
||||
# ==========================================
# One-Click Deploy Section
# ==========================================

with st.expander("➕ **One-Click Deploy**", expanded=True):
    st.caption("Upload HTML/ZIP to create a new static site with auto-configured SSL")

    col1, col2, col3 = st.columns([1, 2, 2])

    with col1:
        deploy_name = st.text_input(
            "Site Name",
            placeholder="myblog",
            key="deploy_name",
            help="Lowercase letters, numbers, hyphens only"
        )

    with col2:
        deploy_domain = st.text_input(
            "Domain",
            placeholder="blog.gk2.secubox.in",
            key="deploy_domain"
        )

    with col3:
        deploy_file = st.file_uploader(
            "Content (optional)",
            type=["html", "htm", "zip"],
            key="deploy_file",
            help="HTML file or ZIP archive"
        )

    if st.button("🚀 Deploy", type="primary", key="deploy_btn"):
        if not deploy_name:
            st.error("Site name is required")
        elif not deploy_domain:
            st.error("Domain is required")
        else:
            # Validate name
            import re
            if not re.match(r'^[a-z0-9-]+$', deploy_name):
                st.error("Name must be lowercase letters, numbers, and hyphens only")
            else:
                with st.spinner("Creating site and configuring HAProxy..."):
                    # Create site
                    result = ubus.metablogizer_create_site(
                        name=deploy_name,
                        domain=deploy_domain
                    )

                    if result.get("success"):
                        site_id = result.get("id", f"site_{deploy_name}")

                        # Upload file if provided
                        if deploy_file:
                            content = base64.b64encode(deploy_file.read()).decode()
                            is_zip = deploy_file.name.lower().endswith('.zip')
                            ubus.metablogizer_upload_file(site_id, "index.html", content)

                        st.success(f"Site created: {deploy_domain}")
                        st.rerun()
                    else:
                        st.error(f"Failed: {result.get('error', 'Unknown error')}")

st.markdown("---")
# ==========================================
# Sites Table
# ==========================================

st.markdown("### 📋 Sites")

# Fetch sites data
with st.spinner("Loading sites..."):
    sites = ubus.metablogizer_list_sites()
    exposure = ubus.metablogizer_exposure_status()

# Build exposure map
exposure_map = {e.get("id"): e for e in exposure}
if not sites:
    st.info("No sites configured. Use One-Click Deploy above to create your first site.")
else:
    # Sites table
    for site in sites:
        site_id = site.get("id", "")
        name = site.get("name", "unknown")
        domain = site.get("domain", "")
        port = site.get("port", "")
        enabled = site.get("enabled", "1")

        exp = exposure_map.get(site_id, {})
        backend_running = exp.get("backend_running", False)
        vhost_exists = exp.get("vhost_exists", False)
        cert_status = exp.get("cert_status", "")
        auth_required = exp.get("auth_required", False)
        emancipated = exp.get("emancipated", False)
        has_content = exp.get("has_content", True)
        waf_enabled = site.get("waf_enabled", False)

        # Create expandable row for each site
        with st.container():
            col1, col2, col3, col4 = st.columns([3, 2, 2, 3])

            # Site info column
            with col1:
                st.markdown(f"**{name}**")
                if domain:
                    st.markdown(f"[{domain}](https://{domain})")
                if port:
                    st.caption(f"Port: {port}")

            # Status column
            with col2:
                badges = []
                if backend_running:
                    badges.append(badge("running"))
                else:
                    badges.append(badge("stopped"))
                if not has_content:
                    badges.append(badge("empty", "Empty"))
                st.markdown(badges_html(*badges), unsafe_allow_html=True)

            # Exposure column
            with col3:
                badges = []
                if vhost_exists and cert_status == "valid":
                    badges.append(badge("ssl_ok"))
                elif vhost_exists and cert_status == "warning":
                    badges.append(badge("ssl_warn"))
                elif vhost_exists:
                    badges.append(badge("ssl_none"))
                else:
                    badges.append(badge("private"))
                if auth_required:
                    badges.append(badge("auth"))
                if waf_enabled:
                    badges.append(badge("waf"))
                st.markdown(badges_html(*badges), unsafe_allow_html=True)

            # Actions column
            with col4:
                btn_col1, btn_col2, btn_col3, btn_col4, btn_col5 = st.columns(5)

                with btn_col1:
                    if st.button("📝", key=f"edit_{site_id}", help="Edit"):
                        st.session_state[f"edit_site_{site_id}"] = True

                with btn_col2:
                    if st.button("📤", key=f"share_{site_id}", help="Share"):
                        st.session_state[f"share_site_{site_id}"] = True

                with btn_col3:
                    if emancipated:
                        if st.button("🔒", key=f"unpub_{site_id}", help="Unpublish"):
                            st.session_state[f"unpublish_{site_id}"] = True
                    else:
                        if st.button("🌐", key=f"expose_{site_id}", help="Expose"):
                            st.session_state[f"expose_{site_id}"] = True

                with btn_col4:
                    if st.button("💊", key=f"health_{site_id}", help="Health"):
                        st.session_state[f"health_{site_id}"] = True

                with btn_col5:
                    if st.button("🗑️", key=f"del_{site_id}", help="Delete"):
                        st.session_state[f"delete_{site_id}"] = True

        st.markdown("---")
        # ==========================================
        # Modal Dialogs (using session state)
        # ==========================================

        # Edit Modal
        if st.session_state.get(f"edit_site_{site_id}"):
            with st.expander(f"✏️ Edit: {name}", expanded=True):
                edit_name = st.text_input("Name", value=name, key=f"edit_name_{site_id}")
                edit_domain = st.text_input("Domain", value=domain, key=f"edit_domain_{site_id}")
                edit_desc = st.text_input("Description", value=site.get("description", ""), key=f"edit_desc_{site_id}")
                edit_enabled = st.checkbox("Enabled", value=enabled != "0", key=f"edit_enabled_{site_id}")

                col_a, col_b = st.columns(2)
                with col_a:
                    if st.button("Cancel", key=f"edit_cancel_{site_id}"):
                        st.session_state[f"edit_site_{site_id}"] = False
                        st.rerun()
                with col_b:
                    if st.button("Save", key=f"edit_save_{site_id}", type="primary"):
                        with st.spinner("Saving..."):
                            result = ubus.call("luci.metablogizer", "update_site", {
                                "id": site_id,
                                "name": edit_name,
                                "domain": edit_domain,
                                "description": edit_desc,
                                "enabled": "1" if edit_enabled else "0"
                            })
                        st.session_state[f"edit_site_{site_id}"] = False
                        st.success("Site updated")
                        st.rerun()
        # Share Modal
        if st.session_state.get(f"share_site_{site_id}") and domain:
            with st.expander(f"📤 Share: {name}", expanded=True):
                url = f"https://{domain}"
                st.text_input("URL", value=url, disabled=True, key=f"share_url_{site_id}")

                col_a, col_b = st.columns([1, 2])
                with col_a:
                    qr_code_image(url, size=150)
                with col_b:
                    st.markdown("**Share via:**")
                    share_buttons(url, name)

                col_c, col_d = st.columns(2)
                with col_c:
                    st.link_button("🔗 Visit Site", url, use_container_width=True)
                with col_d:
                    if st.button("Close", key=f"share_close_{site_id}"):
                        st.session_state[f"share_site_{site_id}"] = False
                        st.rerun()
        # Expose Modal
        if st.session_state.get(f"expose_{site_id}"):
            with st.expander(f"🌐 Expose: {name}", expanded=True):
                st.markdown("This will configure:")
                st.markdown(f"- HAProxy vhost for **{domain}**")
                st.markdown("- ACME SSL certificate")
                st.markdown("- DNS + Vortex mesh publication")

                col_a, col_b = st.columns(2)
                with col_a:
                    if st.button("Cancel", key=f"expose_cancel_{site_id}"):
                        st.session_state[f"expose_{site_id}"] = False
                        st.rerun()
                with col_b:
                    if st.button("🚀 Expose", key=f"expose_confirm_{site_id}", type="primary"):
                        st.session_state[f"expose_{site_id}"] = False

                        # Start emancipation
                        progress_placeholder = st.empty()
                        output_placeholder = st.empty()

                        progress_placeholder.info("Starting exposure workflow...")

                        result = ubus.metablogizer_emancipate(site_id)
                        if result.get("success"):
                            job_id = result.get("job_id")

                            # Poll for completion
                            for i in range(60):  # Max 2 minutes
                                time.sleep(2)
                                status = ubus.metablogizer_emancipate_status(job_id)

                                if status.get("output"):
                                    output_placeholder.code(status["output"])

                                if status.get("complete"):
                                    if status.get("status") == "success":
                                        st.success("Site exposed successfully!")
                                    else:
                                        st.error("Exposure failed")
                                    break

                                progress_placeholder.info(f"Working... ({i*2}s)")
                        else:
                            st.error(f"Failed: {result.get('error', 'Unknown')}")

                        st.rerun()
# Unpublish Modal
|
||||
if st.session_state.get(f"unpublish_{site_id}"):
|
||||
with st.expander(f"🔒 Unpublish: {name}", expanded=True):
|
||||
st.warning("Remove public exposure? The site content will be preserved.")
|
||||
|
||||
col_a, col_b = st.columns(2)
|
||||
with col_a:
|
||||
if st.button("Cancel", key=f"unpub_cancel_{site_id}"):
|
||||
st.session_state[f"unpublish_{site_id}"] = False
|
||||
st.rerun()
|
||||
with col_b:
|
||||
if st.button("Unpublish", key=f"unpub_confirm_{site_id}", type="primary"):
|
||||
with st.spinner("Unpublishing..."):
|
||||
result = ubus.metablogizer_unpublish(site_id)
|
||||
st.session_state[f"unpublish_{site_id}"] = False
|
||||
if result.get("success"):
|
||||
st.success("Site unpublished")
|
||||
else:
|
||||
st.error(f"Failed: {result.get('error', 'Unknown')}")
|
||||
st.rerun()
|
||||
|
||||
# Health Check Modal
|
||||
if st.session_state.get(f"health_{site_id}"):
|
||||
with st.expander(f"💊 Health: {name}", expanded=True):
|
||||
with st.spinner("Checking health..."):
|
||||
health = ubus.metablogizer_health_check(site_id)
|
||||
|
||||
health_check_panel(health)
|
||||
|
||||
col_a, col_b = st.columns(2)
|
||||
with col_a:
|
||||
if st.button("Close", key=f"health_close_{site_id}"):
|
||||
st.session_state[f"health_{site_id}"] = False
|
||||
st.rerun()
|
||||
with col_b:
|
||||
if st.button("🔧 Repair", key=f"health_repair_{site_id}", type="primary"):
|
||||
with st.spinner("Repairing..."):
|
||||
repair_result = ubus.metablogizer_repair(site_id)
|
||||
if repair_result.get("success"):
|
||||
st.success(f"Repairs: {repair_result.get('repairs', 'done')}")
|
||||
else:
|
||||
st.error(f"Failed: {repair_result.get('error', 'Unknown')}")
|
||||
st.rerun()
|
||||
|
||||
# Delete Modal
|
||||
if st.session_state.get(f"delete_{site_id}"):
|
||||
with st.expander(f"🗑️ Delete: {name}", expanded=True):
|
||||
st.error("⚠️ This will remove the site, HAProxy vhost, and all files!")
|
||||
|
||||
delete_confirm = st.text_input(
|
||||
f"Type '{name}' to confirm deletion:",
|
||||
key=f"delete_confirm_{site_id}"
|
||||
)
|
||||
|
||||
col_a, col_b = st.columns(2)
|
||||
with col_a:
|
||||
if st.button("Cancel", key=f"del_cancel_{site_id}"):
|
||||
st.session_state[f"delete_{site_id}"] = False
|
||||
st.rerun()
|
||||
with col_b:
|
||||
if st.button("Delete", key=f"del_confirm_{site_id}", type="primary",
|
||||
disabled=delete_confirm != name):
|
||||
with st.spinner("Deleting..."):
|
||||
result = ubus.metablogizer_delete_site(site_id)
|
||||
st.session_state[f"delete_{site_id}"] = False
|
||||
if result.get("success"):
|
||||
st.success("Site deleted")
|
||||
else:
|
||||
st.error(f"Failed: {result.get('error', 'Unknown')}")
|
||||
st.rerun()
|
||||
|
||||
# Footer
|
||||
st.markdown("---")
|
||||
st.caption(f"Total sites: {len(sites)}")
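All of these action handlers ultimately go through `ubus.call(object, method, params)`, which frames a LuCI JSON-RPC request. A minimal sketch of how such a request can be built (the helper name and session token here are illustrative assumptions, not the actual `lib/ubus_client.py` code):

```python
import json

def build_ubus_request(session, obj, method, params=None, call_id=1):
    """Frame a ubus JSON-RPC 'call' request (params order: session, object, method, args)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "call",
        "params": [session, obj, method, params or {}],
    })

# Hypothetical example: the payload behind the "update_site" call above
req = build_ubus_request("0123456789abcdef", "luci.metablogizer", "update_site",
                         {"id": "blog1", "enabled": "1"})
```

The RPCD object name (`luci.metablogizer`) and method map directly onto the positional `params` array, which is why the dashboard only needs one generic `call` helper for every page.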
@ -0,0 +1,132 @@
"""
Streamlit Apps Manager - Phase 3
Manage Streamlit Forge applications with auto-refresh
"""

import streamlit as st
import sys
import os
import time

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.auth import require_auth, show_user_menu, can_write
from lib.widgets import page_header, badge, badges_html, auto_refresh_toggle

st.set_page_config(page_title="Streamlit Apps - SecuBox Control", page_icon="📊", layout="wide")

ubus = require_auth()
st.sidebar.markdown("## 🎛️ SecuBox Control")
show_user_menu()

page_header("Streamlit Apps", "Manage Streamlit Forge applications", "📊")

# Permission check
has_write = can_write()
if not has_write:
    st.info("👁️ View-only mode. Login as root for app management.")

# Auto-refresh
auto_refresh_toggle("streamlit_apps", intervals=[10, 30, 60])
st.markdown("---")

# Fetch apps
with st.spinner("Loading apps..."):
    apps = ubus.streamlit_list()

if not apps:
    st.info("No Streamlit apps configured.")

    if has_write:
        st.markdown("**Create apps using CLI:**")
        st.code("slforge create myapp", language="bash")
else:
    # Stats
    running = sum(1 for app in apps if app.get("status") == "running")
    total = len(apps)

    col1, col2, col3 = st.columns(3)
    with col1:
        st.metric("Running", running)
    with col2:
        st.metric("Stopped", total - running)
    with col3:
        st.metric("Total", total)

    st.markdown("---")

    for idx, app in enumerate(apps):
        app_id = app.get("id", "") or f"app_{idx}"
        name = app.get("name", "unknown")
        port = app.get("port", "")
        status = app.get("status", "stopped")
        url = app.get("url", "")

        # Create unique key combining index and app_id
        key_base = f"{idx}_{app_id}"
        is_running = status == "running"

        col1, col2, col3, col4 = st.columns([3, 1, 1, 2])

        with col1:
            st.markdown(f"**{name}**")
            if port:
                st.caption(f"Port: {port}")

        with col2:
            if is_running:
                st.markdown(badge("running"), unsafe_allow_html=True)
            else:
                st.markdown(badge("stopped"), unsafe_allow_html=True)

        with col3:
            if is_running and url:
                st.link_button("🔗 Open", url)
            elif is_running and port:
                st.link_button("🔗 Open", f"http://192.168.255.1:{port}")

        with col4:
            if has_write:
                c1, c2, c3 = st.columns(3)
                with c1:
                    if is_running:
                        if st.button("⏹️", key=f"stop_{key_base}", help="Stop"):
                            with st.spinner(f"Stopping {name}..."):
                                ubus.call("luci.streamlit-forge", "stop", {"id": app_id})
                            st.rerun()
                    else:
                        if st.button("▶️", key=f"start_{key_base}", help="Start"):
                            with st.spinner(f"Starting {name}..."):
                                ubus.call("luci.streamlit-forge", "start", {"id": app_id})
                            st.rerun()
                with c2:
                    if st.button("🔄", key=f"restart_{key_base}", help="Restart"):
                        with st.spinner(f"Restarting {name}..."):
                            ubus.call("luci.streamlit-forge", "stop", {"id": app_id})
                            time.sleep(1)
                            ubus.call("luci.streamlit-forge", "start", {"id": app_id})
                        st.rerun()
                with c3:
                    if st.button("🗑️", key=f"del_{key_base}", help="Delete"):
                        st.session_state[f"confirm_delete_{app_id}"] = True

                # Delete confirmation
                if st.session_state.get(f"confirm_delete_{app_id}"):
                    st.warning(f"Delete {name}? This cannot be undone.")
                    col_a, col_b = st.columns(2)
                    with col_a:
                        if st.button("Cancel", key=f"cancel_del_{key_base}"):
                            st.session_state[f"confirm_delete_{app_id}"] = False
                            st.rerun()
                    with col_b:
                        if st.button("Confirm Delete", key=f"confirm_del_{key_base}", type="primary"):
                            with st.spinner(f"Deleting {name}..."):
                                ubus.call("luci.streamlit-forge", "delete", {"id": app_id})
                            st.session_state[f"confirm_delete_{app_id}"] = False
                            st.rerun()
            else:
                st.caption("View only")

        st.markdown("---")

    st.caption(f"Total apps: {len(apps)}")
@ -0,0 +1,180 @@
"""
LXC Containers Manager - Phase 3
With auto-refresh, filtering, and permission-aware controls
"""

import streamlit as st
import sys
import os
import time

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.auth import require_auth, show_user_menu, can_write
from lib.widgets import page_header, badge, auto_refresh_toggle, search_filter

st.set_page_config(page_title="Containers - SecuBox Control", page_icon="📦", layout="wide")

ubus = require_auth()
st.sidebar.markdown("## 🎛️ SecuBox Control")
show_user_menu()

page_header("Containers", "LXC container management", "📦")

# Permission check
has_write = can_write()
if not has_write:
    st.info("👁️ View-only mode. Login as root for container management.")

# Auto-refresh toggle
st.markdown("##### Controls")
col_refresh, col_search = st.columns([1, 2])

with col_refresh:
    auto_refresh_toggle("containers")

with col_search:
    search_query = st.text_input(
        "Filter",
        key="container_search",
        placeholder="Search by name...",
        label_visibility="collapsed"
    )

st.markdown("---")

# Fetch containers
with st.spinner("Loading containers..."):
    containers = ubus.lxc_list()

# Filter by search
if search_query:
    search_lower = search_query.lower()
    containers = [c for c in containers if search_lower in c.get("name", "").lower()]

# Sort by state (running first), then name
containers = sorted(containers, key=lambda x: (0 if x.get("state") == "RUNNING" else 1, x.get("name", "")))

# Stats
running = sum(1 for c in containers if c.get("state") == "RUNNING")
stopped = len(containers) - running

# Tabs for running vs stopped
tab_all, tab_running, tab_stopped = st.tabs([
    f"All ({len(containers)})",
    f"🟢 Running ({running})",
    f"⭕ Stopped ({stopped})"
])

def render_container_list(container_list, prefix=""):
    """Render list of containers with actions"""
    if not container_list:
        st.info("No containers match the filter")
        return

    for container in container_list:
        name = container.get("name", "unknown")
        state = container.get("state", "UNKNOWN")
        memory = container.get("memory", 0)
        ip = container.get("ip", "")

        is_running = state == "RUNNING"
        key_base = f"{prefix}_{name}"

        col1, col2, col3, col4 = st.columns([3, 1, 1, 2])

        with col1:
            st.markdown(f"**{name}**")
            if ip:
                st.caption(f"🌐 {ip}")

        with col2:
            if is_running:
                st.markdown(badge("running"), unsafe_allow_html=True)
            else:
                st.markdown(badge("stopped"), unsafe_allow_html=True)

        with col3:
            if memory:
                mem_mb = memory // 1024 // 1024
                if mem_mb > 0:
                    st.caption(f"💾 {mem_mb}MB")

        with col4:
            if has_write:
                c1, c2, c3 = st.columns(3)
                with c1:
                    if is_running:
                        if st.button("⏹️", key=f"stop_{key_base}", help="Stop"):
                            with st.spinner(f"Stopping {name}..."):
                                ubus.lxc_stop(name)
                            st.rerun()
                    else:
                        if st.button("▶️", key=f"start_{key_base}", help="Start", type="primary"):
                            with st.spinner(f"Starting {name}..."):
                                ubus.lxc_start(name)
                            st.rerun()
                with c2:
                    if st.button("🔄", key=f"restart_{key_base}", help="Restart", disabled=not is_running):
                        with st.spinner(f"Restarting {name}..."):
                            ubus.lxc_stop(name)
                            time.sleep(1)
                            ubus.lxc_start(name)
                        st.rerun()
                with c3:
                    if st.button("ℹ️", key=f"info_{key_base}", help="Details"):
                        st.session_state[f"show_info_{name}"] = not st.session_state.get(f"show_info_{name}", False)
            else:
                # View only - just info button
                if st.button("ℹ️", key=f"info_{key_base}", help="Details"):
                    st.session_state[f"show_info_{name}"] = not st.session_state.get(f"show_info_{name}", False)

        # Info panel
        if st.session_state.get(f"show_info_{name}"):
            with st.container():
                st.markdown(f"##### 📋 {name} Details")

                # Show available info
                info_cols = st.columns(4)
                with info_cols[0]:
                    st.metric("State", state)
                with info_cols[1]:
                    mem_mb = memory // 1024 // 1024 if memory else 0
                    st.metric("Memory", f"{mem_mb}MB" if mem_mb else "N/A")
                with info_cols[2]:
                    st.metric("IP", ip or "N/A")
                with info_cols[3]:
                    autostart = container.get("autostart", "N/A")
                    st.metric("Autostart", autostart)

                # Raw data expander
                with st.expander("Raw Data", expanded=False):
                    st.json(container)

                if st.button("Close", key=f"close_info_{key_base}"):
                    st.session_state[f"show_info_{name}"] = False
                    st.rerun()

        st.markdown("---")


with tab_all:
    render_container_list(containers, "all")

with tab_running:
    running_containers = [c for c in containers if c.get("state") == "RUNNING"]
    render_container_list(running_containers, "running")

with tab_stopped:
    stopped_containers = [c for c in containers if c.get("state") != "RUNNING"]
    render_container_list(stopped_containers, "stopped")

# Summary stats at bottom
st.markdown("---")
col1, col2, col3 = st.columns(3)
with col1:
    st.metric("Running", running, delta=None)
with col2:
    st.metric("Stopped", stopped, delta=None)
with col3:
    st.metric("Total", len(containers), delta=None)
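The container rows and the info panel both derive the memory figure with integer division (`memory // 1024 // 1024`). The same conversion, pulled out as a standalone helper for clarity (the helper name is illustrative, not part of the page):

```python
def format_mem_mb(memory_bytes):
    """Convert a raw byte count to the page's whole-MB display, with an N/A fallback."""
    if not memory_bytes:
        return "N/A"
    mem_mb = memory_bytes // 1024 // 1024
    return f"{mem_mb}MB" if mem_mb > 0 else "N/A"
```

Note that values below 1 MiB floor to 0 and fall back to "N/A", matching the `if mem_mb > 0` guard used in the column above.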
@ -0,0 +1,225 @@
"""
Network Management - Phase 3
HAProxy, WireGuard, DNS with auto-refresh
"""

import streamlit as st
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.auth import require_auth, show_user_menu, can_write
from lib.widgets import page_header, badge, auto_refresh_toggle

st.set_page_config(page_title="Network - SecuBox Control", page_icon="🔌", layout="wide")

ubus = require_auth()
st.sidebar.markdown("## 🎛️ SecuBox Control")
show_user_menu()

page_header("Network", "HAProxy vhosts, WireGuard, DNS management", "🔌")

# Auto-refresh
auto_refresh_toggle("network", intervals=[30, 60, 120])
st.markdown("---")

# Permission check
has_write = can_write()

# Tabs
tab1, tab2, tab3 = st.tabs(["🌐 HAProxy Vhosts", "🔒 WireGuard", "📡 DNS"])

with tab1:
    st.markdown("### HAProxy Virtual Hosts")

    # Fetch HAProxy status
    with st.spinner("Loading HAProxy status..."):
        haproxy_status = ubus.haproxy_status()
        vhosts = ubus.haproxy_list_vhosts()

    # Status summary
    hp_running = haproxy_status.get("running", haproxy_status.get("haproxy_running", False))
    vhost_count = haproxy_status.get("vhost_count", len(vhosts))

    col1, col2, col3 = st.columns(3)
    with col1:
        if hp_running:
            st.success("🟢 HAProxy Running")
        else:
            st.error("🔴 HAProxy Stopped")
    with col2:
        st.metric("Vhosts", vhost_count)
    with col3:
        backend_count = haproxy_status.get("backend_count", 0)
        st.metric("Backends", backend_count)

    st.markdown("---")

    # Search filter
    search = st.text_input("Filter vhosts", key="vhost_search", placeholder="Type to filter by domain...")

    if vhosts:
        # Filter if search provided
        if search:
            search_lower = search.lower()
            vhosts = [v for v in vhosts if search_lower in v.get("domain", "").lower()]

        st.markdown(f"**Showing {len(vhosts)} vhosts**")
        st.markdown("---")

        # Table header
        col1, col2, col3, col4 = st.columns([3, 2, 1, 1])
        with col1:
            st.markdown("**Domain**")
        with col2:
            st.markdown("**Backend**")
        with col3:
            st.markdown("**SSL**")
        with col4:
            st.markdown("**Status**")

        for vhost in vhosts[:100]:  # Limit display
            domain = vhost.get("domain", "unknown")
            backend = vhost.get("backend", "")
            ssl = vhost.get("ssl", False)
            enabled = vhost.get("enabled", True)
            waf_bypass = vhost.get("waf_bypass", False)

            col1, col2, col3, col4 = st.columns([3, 2, 1, 1])

            with col1:
                if enabled:
                    st.markdown(f"[{domain}](https://{domain})")
                else:
                    st.markdown(f"~~{domain}~~")

            with col2:
                st.caption(f"→ {backend}")

            with col3:
                if ssl:
                    cert_status = vhost.get("cert_status", "valid")
                    if cert_status == "valid":
                        st.markdown(badge("ssl_ok"), unsafe_allow_html=True)
                    elif cert_status == "expiring":
                        st.markdown(badge("ssl_warn"), unsafe_allow_html=True)
                    else:
                        st.markdown(badge("ssl_none"), unsafe_allow_html=True)
                else:
                    st.markdown(badge("ssl_none", "No SSL"), unsafe_allow_html=True)

            with col4:
                badges = []
                if enabled:
                    badges.append(badge("running", "On"))
                else:
                    badges.append(badge("stopped", "Off"))
                if waf_bypass:
                    badges.append(badge("warning", "WAF↷"))
                st.markdown(" ".join(badges), unsafe_allow_html=True)

        if len(vhosts) > 100:
            st.caption(f"... and {len(vhosts) - 100} more")

    else:
        st.info("No vhosts configured")

with tab2:
    st.markdown("### WireGuard VPN")

    # Try to get WireGuard status via RPCD
    try:
        wg_status = ubus.call("luci.wireguard", "status") or {}
        wg_interfaces = wg_status.get("interfaces", [])
    except Exception:
        wg_interfaces = []

    if wg_interfaces:
        for iface in wg_interfaces:
            name = iface.get("name", "wg0")
            public_key = iface.get("public_key", "")
            listen_port = iface.get("listen_port", 0)
            peers = iface.get("peers", [])

            st.markdown(f"#### Interface: {name}")

            col1, col2 = st.columns(2)
            with col1:
                st.markdown(f"- **Port**: {listen_port}")
                st.markdown(f"- **Peers**: {len(peers)}")
            with col2:
                if public_key:
                    st.code(public_key[:20] + "...", language=None)

            if peers:
                st.markdown("**Peers:**")
                for peer in peers:
                    peer_name = peer.get("name", peer.get("public_key", "")[:8])
                    endpoint = peer.get("endpoint", "N/A")
                    last_handshake = peer.get("last_handshake", 0)

                    col1, col2, col3 = st.columns([2, 2, 1])
                    with col1:
                        st.write(peer_name)
                    with col2:
                        st.caption(endpoint)
                    with col3:
                        if last_handshake > 0:
                            st.markdown(badge("running", "Online"), unsafe_allow_html=True)
                        else:
                            st.markdown(badge("stopped", "Offline"), unsafe_allow_html=True)

            st.markdown("---")
    else:
        st.info("No WireGuard interfaces configured or RPCD handler not available")
        st.caption("Configure WireGuard via LuCI → Network → Interfaces or CLI")

        # Show command hint
        with st.expander("Quick Setup Commands"):
            st.code("""
# Generate keys
wg genkey | tee privatekey | wg pubkey > publickey

# Create interface via UCI
uci set network.wg0=interface
uci set network.wg0.proto='wireguard'
uci set network.wg0.private_key='YOUR_PRIVATE_KEY'
uci set network.wg0.listen_port='51820'
uci commit network
/etc/init.d/network reload
""", language="bash")

with tab3:
    st.markdown("### DNS Configuration")

    # Try to get DNS status
    try:
        dns_status = ubus.call("luci.dns-provider", "status") or {}
    except Exception:
        dns_status = {}

    if dns_status:
        provider = dns_status.get("provider", "Unknown")
        zones = dns_status.get("zones", [])

        st.markdown(f"**Provider**: {provider}")

        if zones:
            st.markdown("**Zones:**")
            for zone in zones:
                st.write(f"- {zone}")
    else:
        st.info("DNS provider integration available via secubox-app-dns-provider")
        st.caption("Configure DNS API credentials for automated DNS management")

        # Show supported providers
        with st.expander("Supported DNS Providers"):
            st.markdown("""
- **OVH** - API key authentication
- **Cloudflare** - API token or global key
- **Gandi** - Personal Access Token
- **DigitalOcean** - API token

Configure via: `dnsctl provider add <name>`
""")
@ -0,0 +1,232 @@
"""
Security Dashboard - Phase 3
WAF, CrowdSec, Firewall with auto-refresh
"""

import streamlit as st
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.auth import require_auth, show_user_menu
from lib.widgets import page_header, badge, status_card, auto_refresh_toggle

st.set_page_config(page_title="Security - SecuBox Control", page_icon="🛡️", layout="wide")

ubus = require_auth()
st.sidebar.markdown("## 🎛️ SecuBox Control")
show_user_menu()

page_header("Security", "WAF status, CrowdSec decisions, firewall", "🛡️")

# Auto-refresh
auto_refresh_toggle("security", intervals=[10, 30, 60])
st.markdown("---")

# Fetch status
with st.spinner("Loading security status..."):
    mitmproxy = ubus.mitmproxy_status()
    crowdsec = ubus.crowdsec_status()

# Parse CrowdSec status correctly
# The crowdsec_status() returns various formats depending on the RPCD handler
cs_state = crowdsec.get("crowdsec", crowdsec.get("status", "unknown"))
if isinstance(cs_state, str):
    cs_running = cs_state.lower() in ("running", "active", "ok")
else:
    cs_running = bool(cs_state)

# Get stats from crowdsec response
cs_decisions = crowdsec.get("active_decisions", crowdsec.get("decisions_count", 0))
cs_alerts = crowdsec.get("alerts_today", crowdsec.get("alerts", 0))
cs_bouncers = crowdsec.get("bouncers", crowdsec.get("bouncer_count", 0))

# WAF stats
waf_running = mitmproxy.get("running", False)
waf_threats = mitmproxy.get("threats_today", 0)
waf_blocked = mitmproxy.get("blocked_today", 0)
waf_port = mitmproxy.get("port", 22222)

# Status cards row
st.markdown("### 📊 Status Overview")
col1, col2, col3, col4 = st.columns(4)

with col1:
    status_card(
        "WAF (mitmproxy)",
        "Running" if waf_running else "Stopped",
        f"Port {waf_port}",
        "🛡️",
        "#10b981" if waf_running else "#ef4444"
    )

with col2:
    status_card(
        "Threats Today",
        f"{waf_threats:,}" if waf_threats else "0",
        f"{waf_blocked:,} blocked" if waf_blocked else "0 blocked",
        "⚠️",
        "#f59e0b" if waf_threats > 0 else "#10b981"
    )

with col3:
    status_card(
        "CrowdSec",
        "Running" if cs_running else "Stopped",
        f"{cs_decisions} active bans",
        "🚫",
        "#10b981" if cs_running else "#ef4444"
    )

with col4:
    status_card(
        "Firewall",
        "Active",
        "nftables",
        "🔥",
        "#10b981"
    )

st.markdown("---")

# Tabs for detailed views
tab1, tab2, tab3 = st.tabs(["🛡️ WAF Threats", "🚫 CrowdSec", "📈 Stats"])

with tab1:
    st.markdown("### Recent WAF Threats")

    threats = ubus.mitmproxy_threats(limit=30)

    if threats:
        # Summary
        st.markdown(f"Showing {len(threats)} most recent threats")

        # Headers
        col1, col2, col3, col4, col5 = st.columns([2, 3, 2, 1, 1])
        with col1:
            st.markdown("**Source IP**")
        with col2:
            st.markdown("**URL/Path**")
        with col3:
            st.markdown("**Category**")
        with col4:
            st.markdown("**Severity**")
        with col5:
            st.markdown("**Time**")

        st.markdown("---")

        for idx, threat in enumerate(threats):
            col1, col2, col3, col4, col5 = st.columns([2, 3, 2, 1, 1])

            with col1:
                ip = threat.get("ip", threat.get("source_ip", "unknown"))
                st.write(ip)

            with col2:
                url = threat.get("url", threat.get("path", threat.get("request", "")))
                # Truncate long URLs
                if len(url) > 50:
                    url = url[:47] + "..."
                st.write(url)

            with col3:
                category = threat.get("category", threat.get("type", "unknown"))
                st.write(category)

            with col4:
                severity = threat.get("severity", "low").lower()
                if severity == "critical":
                    st.markdown(badge("error", "CRIT"), unsafe_allow_html=True)
                elif severity == "high":
                    st.markdown(badge("warning", "HIGH"), unsafe_allow_html=True)
                elif severity == "medium":
                    st.markdown(badge("info", "MED"), unsafe_allow_html=True)
                else:
                    st.markdown(badge("success", "LOW"), unsafe_allow_html=True)

            with col5:
                timestamp = threat.get("timestamp", threat.get("time", ""))
                # Show just time portion if available
                if timestamp and " " in str(timestamp):
                    timestamp = str(timestamp).split(" ")[-1]
                st.caption(timestamp[:8] if timestamp else "-")
    else:
        st.success("✅ No recent threats detected")
        st.caption("The WAF is protecting your services. Check back later for threat activity.")

with tab2:
    st.markdown("### CrowdSec Security")

    # CrowdSec status details
    col1, col2 = st.columns(2)

    with col1:
        st.markdown("#### Engine Status")
        st.markdown(f"- **Status**: {cs_state}")
        st.markdown(f"- **Active Decisions**: {cs_decisions}")
        st.markdown(f"- **Alerts Today**: {cs_alerts}")
        st.markdown(f"- **Bouncers**: {cs_bouncers}")

    with col2:
        st.markdown("#### Version Info")
        version = crowdsec.get("version", crowdsec.get("crowdsec_version", "N/A"))
        st.markdown(f"- **Version**: {version}")
        lapi = crowdsec.get("lapi_status", crowdsec.get("lapi", "N/A"))
        st.markdown(f"- **LAPI**: {lapi}")

    st.markdown("---")
    st.markdown("#### Active Decisions (Bans)")

    decisions = ubus.crowdsec_decisions(limit=30)

    if decisions:
        for decision in decisions:
            col1, col2, col3, col4 = st.columns([2, 3, 2, 1])

            with col1:
                value = decision.get("value", decision.get("ip", "unknown"))
                st.write(f"🚫 {value}")

            with col2:
                reason = decision.get("reason", decision.get("scenario", ""))
                st.write(reason)

            with col3:
                origin = decision.get("origin", decision.get("source", ""))
                st.caption(origin)

            with col4:
                duration = decision.get("duration", decision.get("remaining", ""))
                st.caption(duration)
    else:
        st.success("✅ No active bans")
        st.caption("All traffic is currently allowed through the firewall.")

with tab3:
    st.markdown("### Security Statistics")

    # Quick stats from available data
    col1, col2 = st.columns(2)

    with col1:
        st.markdown("#### WAF Summary")
        st.metric("Threats Today", waf_threats)
        st.metric("Blocked Requests", waf_blocked)
        st.metric("Status", "🟢 Active" if waf_running else "🔴 Inactive")

    with col2:
        st.markdown("#### CrowdSec Summary")
        st.metric("Active Bans", cs_decisions)
        st.metric("Alerts Today", cs_alerts)
        st.metric("Status", "🟢 Active" if cs_running else "🔴 Inactive")

    st.markdown("---")

    # Raw data for debugging
    with st.expander("🔍 Raw Status Data", expanded=False):
        st.markdown("**mitmproxy response:**")
        st.json(mitmproxy)
        st.markdown("**crowdsec response:**")
        st.json(crowdsec)
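The companion fix in this commit guards the CrowdSec dashboard's `overview.js` against RPC calls that return an error code instead of an object. The same defensive normalization used for `cs_state` above can be expressed as a pure function (the function name and tuple shape here are illustrative, not code from the page):

```python
def normalize_crowdsec_status(response):
    """Coerce non-dict RPC results (error codes, None) to {} before parsing status."""
    if not isinstance(response, dict):
        response = {}
    cs_state = response.get("crowdsec", response.get("status", "unknown"))
    if isinstance(cs_state, str):
        cs_running = cs_state.lower() in ("running", "active", "ok")
    else:
        cs_running = bool(cs_state)
    return cs_state, cs_running
```

Applying the `isinstance` check up front means a bare error code like `5` degrades to an "unknown / not running" display instead of raising a TypeError when `.get()` is called on it.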
@ -0,0 +1,70 @@
"""
System Management
Packages, Services, Logs
"""

import streamlit as st
import sys
import os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.auth import require_auth, show_user_menu
from lib.widgets import page_header

st.set_page_config(page_title="System - SecuBox Control", page_icon="⚙️", layout="wide")

ubus = require_auth()
st.sidebar.markdown("## 🎛️ SecuBox Control")
show_user_menu()

page_header("System", "Packages, services, logs, backup", "⚙️")

# Tabs
tab1, tab2, tab3 = st.tabs(["📋 System Info", "📦 Packages", "📜 Logs"])

with tab1:
    st.markdown("### System Information")

    board = ubus.system_board()
    info = ubus.system_info()

    col1, col2 = st.columns(2)

    with col1:
        st.markdown("**Board**")
        st.json({
            "hostname": board.get("hostname", ""),
            "model": board.get("model", ""),
            "board_name": board.get("board_name", ""),
            "kernel": board.get("kernel", ""),
            "system": board.get("system", ""),
        })

    with col2:
        st.markdown("**Resources**")

        memory = info.get("memory", {})
        st.json({
            "uptime": f"{info.get('uptime', 0) // 3600}h",
            "memory_total": f"{memory.get('total', 0) // 1024 // 1024}MB",
            "memory_free": f"{memory.get('free', 0) // 1024 // 1024}MB",
            "load": info.get("load", [0, 0, 0]),
        })

with tab2:
    st.markdown("### Installed Packages")
    st.info("Package management coming in Phase 5")

with tab3:
    st.markdown("### System Logs")

    if st.button("Refresh Logs"):
        st.rerun()

    # Get recent logs via file read
    logs = ubus.file_exec("/usr/bin/logread", ["-l", "50"])
    if logs and logs.get("stdout"):
        st.code(logs["stdout"], language="")
    else:
        st.info("No logs available")