feat(core): Add 3-tier stats persistence and LuCI tree navigation

Stats Persistence Layer:
- Add secubox-stats-persist daemon for never-trashed stats
- 3-tier caching: RAM (/tmp) → buffer → persistent (/srv)
- Hourly snapshots (24h), daily aggregates (30d)
- Boot recovery from persistent storage
- Heartbeat line: real-time 60-sample buffer (3min window)
- Evolution view: combined influence score over time
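The 60-sample heartbeat buffer described above (one sample every 3 seconds, so a 3-minute window) can be kept with a plain rolling-file pattern. A minimal sketch, assuming a text buffer of "timestamp value" lines; the path and helper name are illustrative, not the daemon's actual internals:

```shell
#!/bin/sh
# Rolling 60-sample buffer: append one "timestamp value" line per tick
# and trim to the newest 60 (60 samples x 3s = 3min window).
# BUF is an illustrative path, not the daemon's real file.
BUF="${BUF:-/tmp/heartbeat.samples}"

push_sample() {
    echo "$(date +%s) $1" >> "$BUF"
    # Atomic trim: write the tail to a temp file, then rename over BUF
    tail -n 60 "$BUF" > "$BUF.tmp" && mv "$BUF.tmp" "$BUF"
}
```

The tmp+mv rename mirrors the atomic-write pattern the persistence layer uses for its other files, so a reader never sees a half-trimmed buffer.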

RPCD Stats Module:
- get_timeline: 24h evolution for all collectors
- get_evolution: combined influence score timeline
- get_heartbeat_line: real-time 3min buffer
- get_stats_status: persistence status and current values
- get_history: historical data per collector
- get_collector_cache: current cache value

LuCI Tree Navigation:
- Add clickable tree of all 60+ SecuBox LuCI apps
- Organized by category: Security, Network, Monitoring, Services, etc.
- Real-time search filter
- Available at /secubox-public/luci-tree and /admin/secubox/luci-tree

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
CyberMind-FR 2026-02-11 11:23:27 +01:00
parent 8055bca368
commit 13c1e596d2
11 changed files with 1060 additions and 5 deletions

@@ -1,6 +1,6 @@
# SecuBox UI & Theme History
_Last updated: 2026-02-10_
_Last updated: 2026-02-11_
1. **Unified Dashboard Refresh (2025-12-20)**
- Dashboard received the "sh-page-header" layout, hero stats, and SecuNav top tabs.
@@ -1122,3 +1122,71 @@ _Last updated: 2026-02-10_
- Settings: UCI form for configuration
- **RPCD Handler**: 11 methods (status, get_devices, get_device, get_anomalies, scan, isolate/trust/block_device, get_vendor_rules, add/delete_vendor_rule, get_cloud_map)
- **ACL**: Public access for status and device list via `unauthenticated` group
59. **InterceptoR "Gandalf Proxy" Implementation (2026-02-11)**
- Created `luci-app-interceptor` — unified dashboard for 5-pillar transparent traffic interception.
- **Dashboard Features**:
- Health Score (0-100%) with color-coded display
- 5 Pillar Status Cards: WPAD Redirector, MITM Proxy, CDN Cache, Cookie Tracker, API Failover
- Per-pillar stats: threats, connections, hit ratio, trackers, stale serves
- Quick links to individual module dashboards
- **RPCD Handler** (`luci.interceptor`):
- `status`: Aggregates status from all 5 pillars
- `getPillarStatus`: Individual pillar details
- Health score calculation: 20 points per active pillar
- Checks: WPAD PAC file, mitmproxy LXC, Squid process, Cookie Tracker UCI, API Failover UCI
- Created `secubox-cookie-tracker` package — Cookie classification database + mitmproxy addon.
- **SQLite database** (`/var/lib/cookie-tracker/cookies.db`): domain, name, category, seen times, blocked status
- **Categories**: essential, functional, analytics, advertising, tracking
- **mitmproxy addon** (`mitmproxy-addon.py`): Real-time cookie extraction from Set-Cookie headers
- **Known trackers** (`known-trackers.tsv`): 100+ tracker domains (Google Analytics, Facebook, DoubleClick, etc.)
- **CLI** (`cookie-trackerctl`): status, list, classify, block, report --json
- **Init script**: procd service with SQLite database initialization
- Enhanced `luci-app-network-tweaks` with WPAD safety net:
- Added `setWpadEnforce`/`getWpadEnforce` RPCD methods
- Added `setup_wpad_enforce()` iptables function for non-compliant clients
- Redirect TCP 80/443 to Squid proxy for WPAD-ignoring clients
- Enhanced `luci-app-cdn-cache` with API failover config:
- Added `api_failover` UCI section: stale_if_error, offline_mode, collapsed_forwarding
- Modified init.d to generate API failover Squid config (refresh_pattern with stale-if-error)
- Created `/etc/hotplug.d/iface/99-cdn-offline` for WAN up/down detection
- Automatic offline mode on WAN down, disable on WAN up
- Configured `.sblocal` mesh domain via BIND zone file:
- Created `/etc/bind/zones/sblocal.zone` for internal service discovery
- Added c3box.sblocal A record pointing to 192.168.255.1
- Part of InterceptoR transparent proxy architecture (Peek/Poke/Emancipate model).
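The WAN up/down detection above follows the standard OpenWrt hotplug convention: handlers under `/etc/hotplug.d/iface/` receive `ACTION` (`ifup`/`ifdown`) and `INTERFACE` in the environment. A hedged sketch of what `99-cdn-offline` might look like; the `toggle_offline` helper is a stand-in, and the real script would edit the `api_failover` UCI section and reload Squid:

```shell
#!/bin/sh
# Sketch of an /etc/hotplug.d/iface handler. toggle_offline is a
# placeholder for the real sequence, which would be something like:
#   uci set cdn-cache.api_failover.offline_mode="$1"; uci commit cdn-cache
#   /etc/init.d/cdn-cache reload
toggle_offline() {
    echo "offline_mode=$1"
}

handle_iface_event() {
    case "$1:$2" in
        ifdown:wan) toggle_offline 1 ;;  # WAN lost: serve stale from cache
        ifup:wan)   toggle_offline 0 ;;  # WAN back: normal caching
    esac
}

handle_iface_event "$ACTION" "$INTERFACE"
```

Events for other interfaces fall through the `case` untouched, so the handler is a no-op for LAN churn.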
60. **3-Tier Stats Persistence & Evolution (2026-02-11)**
- Created `secubox-stats-persist` — 3-tier caching for never-trashed stats.
- **3-Tier Cache Architecture**:
- Tier 1: RAM cache (`/tmp/secubox/*.json`) — 3-30 second updates
- Tier 2: Volatile buffer — atomic writes with tmp+mv pattern
- Tier 3: Persistent storage (`/srv/secubox/stats/`) — survives reboot
- **Time-Series Evolution**:
- Hourly snapshots (24h retention) per collector
- Daily aggregates (30d retention) with min/max/avg
- Combined timeline JSON with all collectors
- **Heartbeat Line**:
- Real-time 60-sample buffer (3min window)
- Combined "influence" score: (health×40 + inv_threat×30 + inv_capacity×30)/100
- Updated every 3 seconds via daemon loop
- **Evolution View**:
- 48-hour combined metrics graph
- Health, Threat, Capacity, and Influence scores per hour
- JSON output for dashboard sparklines
- **Boot Recovery**:
- On daemon start, recovers cache from persistent storage
- Ensures stats continuity across reboots
- **RPCD Methods**:
- `get_timeline`: 24h evolution for all collectors
- `get_evolution`: Combined influence score timeline
- `get_heartbeat_line`: Real-time 3min buffer
- `get_stats_status`: Persistence status and current values
- `get_history`: Historical data for specific collector
- `get_collector_cache`: Current cache value for collector
- **Cron Jobs**:
- Every 5min: Persist cache to /srv (backup)
- Every hour: Generate timeline and evolution
- Daily: Aggregate hourly to daily, cleanup old data
- Integrated into `secubox-core` daemon startup (r16).
- Bumped `secubox-core` version to 0.10.0-r16.
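The influence score above is a plain integer blend: health counts directly, while threat and capacity are inverted before weighting (low threat and low load are good). A self-contained sketch matching the formula in the entry:

```shell
#!/bin/sh
# Influence = (health*40 + (100-threat)*30 + (100-capacity)*30) / 100
# All inputs are 0-100 scores; integer shell arithmetic, as in the daemon.
influence_score() {
    local health="$1" threat="$2" capacity="$3"
    local t_inv=$((100 - threat))
    local c_inv=$((100 - capacity))
    echo $(( (health * 40 + t_inv * 30 + c_inv * 30) / 100 ))
}
```

For example, `influence_score 80 20 50` yields 71: (80×40 + 80×30 + 50×30)/100.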

@@ -11,7 +11,7 @@ LUCI_DESCRIPTION:=Unified entry point for all SecuBox applications with tabbed n
LUCI_DEPENDS:=+luci-base +luci-theme-secubox
LUCI_PKGARCH:=all
PKG_VERSION:=0.7.0
PKG_RELEASE:=2
PKG_RELEASE:=3
PKG_LICENSE:=GPL-3.0-or-later
PKG_MAINTAINER:=SecuBox Team <secubox@example.com>

@@ -0,0 +1,329 @@
'use strict';
'require view';
'require dom';
'require poll';
// SecuBox LuCI Tree - Clickable navigation map
var LUCI_TREE = {
"SecuBox": {
path: "admin/secubox",
icon: "shield",
children: {
"Dashboard": { path: "admin/secubox/dashboard", icon: "dashboard" },
"App Store": { path: "admin/secubox/apps", icon: "store" },
"Modules": { path: "admin/secubox/modules", icon: "cubes" },
"Alerts": { path: "admin/secubox/alerts", icon: "bell" },
"Settings": { path: "admin/secubox/settings", icon: "cog" },
"Help": { path: "admin/secubox/help", icon: "question" }
}
},
"Admin Control": {
path: "admin/secubox/admin",
icon: "user-shield",
children: {
"Control Panel": { path: "admin/secubox/admin/dashboard", icon: "sliders" },
"Cyber Console": { path: "admin/secubox/admin/cyber-dashboard", icon: "terminal" },
"Apps Manager": { path: "admin/secubox/admin/apps", icon: "boxes" },
"Profiles": { path: "admin/secubox/admin/profiles", icon: "id-card" },
"Skills": { path: "admin/secubox/admin/skills", icon: "magic" },
"System Health": { path: "admin/secubox/admin/health", icon: "heartbeat" }
}
},
"Security": {
path: "admin/secubox/security",
icon: "lock",
children: {
"CrowdSec": {
path: "admin/secubox/security/crowdsec",
icon: "shield-alt",
children: {
"Overview": { path: "admin/secubox/security/crowdsec/overview" },
"Decisions": { path: "admin/secubox/security/crowdsec/decisions" },
"Alerts": { path: "admin/secubox/security/crowdsec/alerts" },
"Bouncers": { path: "admin/secubox/security/crowdsec/bouncers" },
"Setup": { path: "admin/secubox/security/crowdsec/setup" }
}
},
"mitmproxy": {
path: "admin/secubox/security/mitmproxy",
icon: "eye",
children: {
"Status": { path: "admin/secubox/security/mitmproxy/status" },
"Settings": { path: "admin/secubox/security/mitmproxy/settings" }
}
},
"Client Guardian": { path: "admin/secubox/security/guardian", icon: "users" },
"DNS Guard": { path: "admin/secubox/security/dnsguard", icon: "dns" },
"Threat Analyst": { path: "admin/secubox/security/threat-analyst", icon: "brain" },
"Network Anomaly": { path: "admin/secubox/security/network-anomaly", icon: "chart-line" },
"Auth Guardian": { path: "admin/secubox/security/auth-guardian", icon: "key" },
"Key Storage": { path: "admin/secubox/security/ksm-manager", icon: "vault" }
}
},
"AI Gateway": {
path: "admin/secubox/ai",
icon: "robot",
children: {
"AI Insights": { path: "admin/secubox/ai/insights", icon: "lightbulb" },
"LocalRecall": { path: "admin/secubox/ai/localrecall", icon: "memory" }
}
},
"MirrorBox": {
path: "admin/secubox/mirrorbox",
icon: "network-wired",
children: {
"Overview": { path: "admin/secubox/mirrorbox/overview", icon: "home" },
"P2P Hub": { path: "admin/secubox/mirrorbox/hub", icon: "hubspot" },
"Peers": { path: "admin/secubox/mirrorbox/peers", icon: "users" },
"Services": { path: "admin/secubox/mirrorbox/services", icon: "server" },
"Factory": { path: "admin/secubox/mirrorbox/factory", icon: "industry" },
"App Store": { path: "admin/secubox/mirrorbox/packages", icon: "store" },
"Dev Status": { path: "admin/secubox/mirrorbox/devstatus", icon: "code" }
}
},
"Network": {
path: "admin/secubox/network",
icon: "sitemap",
children: {
"Network Modes": { path: "admin/secubox/network/modes", icon: "random" },
"DNS Providers": { path: "admin/secubox/network/dns-provider", icon: "globe" },
"Service Exposure": { path: "admin/secubox/network/exposure", icon: "broadcast-tower" },
"Bandwidth Manager": { path: "admin/secubox/network/bandwidth-manager", icon: "tachometer-alt" },
"Traffic Shaper": { path: "admin/secubox/network/traffic-shaper", icon: "filter" },
"MQTT Bridge": { path: "admin/secubox/network/mqtt-bridge", icon: "exchange-alt" }
}
},
"Monitoring": {
path: "admin/secubox/monitoring",
icon: "chart-bar",
children: {
"Netdata": { path: "admin/secubox/monitoring/netdata", icon: "chart-area" },
"Glances": { path: "admin/secubox/monitoring/glances", icon: "eye" },
"Media Flow": { path: "admin/secubox/monitoring/mediaflow", icon: "film" }
}
},
"System": {
path: "admin/secubox/system",
icon: "server",
children: {
"System Hub": { path: "admin/secubox/system/system-hub", icon: "cogs" },
"Cloning Station": { path: "admin/secubox/system/cloner", icon: "clone" }
}
},
"Device Intel": {
path: "admin/secubox/device-intel",
icon: "microchip",
children: {
"Dashboard": { path: "admin/secubox/device-intel/dashboard" },
"Devices": { path: "admin/secubox/device-intel/devices" },
"Mesh": { path: "admin/secubox/device-intel/mesh" }
}
},
"InterceptoR": {
path: "admin/secubox/interceptor",
icon: "filter",
children: {
"Overview": { path: "admin/secubox/interceptor/overview" }
}
},
"Services (LuCI)": {
path: "admin/services",
icon: "puzzle-piece",
children: {
"Service Registry": { path: "admin/services/service-registry", icon: "list" },
"HAProxy": { path: "admin/services/haproxy", icon: "random" },
"WireGuard": { path: "admin/services/wireguard", icon: "shield-alt" },
"Tor Shield": { path: "admin/services/tor-shield", icon: "user-secret" },
"VHost Manager": { path: "admin/services/vhosts", icon: "server" },
"CDN Cache": { path: "admin/services/cdn-cache", icon: "database" },
"LocalAI": { path: "admin/services/localai", icon: "brain" },
"Ollama": { path: "admin/services/ollama", icon: "comment-dots" },
"Nextcloud": { path: "admin/services/nextcloud", icon: "cloud" },
"Jellyfin": { path: "admin/services/jellyfin", icon: "film" },
"Jitsi Meet": { path: "admin/services/jitsi", icon: "video" },
"SimpleX Chat": { path: "admin/services/simplex", icon: "comments" },
"Domoticz": { path: "admin/services/domoticz", icon: "home" },
"Lyrion": { path: "admin/services/lyrion", icon: "music" },
"MagicMirror": { path: "admin/services/magicmirror2", icon: "desktop" },
"MAC Guardian": { path: "admin/services/mac-guardian", icon: "wifi" },
"Mail Server": { path: "admin/services/mailserver", icon: "envelope" },
"Mesh Link": { path: "admin/services/secubox-mesh", icon: "project-diagram" },
"MirrorNet": { path: "admin/services/mirrornet", icon: "network-wired" },
"Gitea": { path: "admin/services/gitea", icon: "code-branch" },
"Hexo CMS": { path: "admin/services/hexojs", icon: "blog" },
"MetaBlogizer": { path: "admin/services/metablogizer", icon: "rss" },
"Streamlit": { path: "admin/services/streamlit", icon: "stream" },
"PicoBrew": { path: "admin/services/picobrew", icon: "beer" },
"CyberFeed": { path: "admin/services/cyberfeed", icon: "newspaper" },
"Vortex DNS": { path: "admin/services/vortex-dns", icon: "globe" },
"Vortex Firewall": { path: "admin/services/vortex-firewall", icon: "fire" },
"Config Advisor": { path: "admin/services/config-advisor", icon: "clipboard-check" },
"Threat Monitor": { path: "admin/services/threat-monitor", icon: "exclamation-triangle" },
"Network Diagnostics": { path: "admin/services/network-diagnostics", icon: "stethoscope" },
"Backup Manager": { path: "admin/system/backup", icon: "save" }
}
},
"IoT & Automation": {
path: "admin/secubox/services",
icon: "microchip",
children: {
"IoT Guard": { path: "admin/secubox/services/iot-guard", icon: "shield-alt" },
"Zigbee2MQTT": { path: "admin/secubox/zigbee2mqtt", icon: "broadcast-tower" },
"nDPId": { path: "admin/secubox/ndpid", icon: "search" },
"Netifyd": { path: "admin/secubox/netifyd", icon: "chart-network" }
}
}
};
return view.extend({
render: function() {
var container = E('div', { 'class': 'cbi-map', 'style': 'background:#111;min-height:100vh;padding:20px;' }, [
E('style', {}, `
.luci-tree { font-family: monospace; color: #0f0; }
.luci-tree a { color: #0ff; text-decoration: none; }
.luci-tree a:hover { color: #fff; text-decoration: underline; }
.tree-section { margin: 15px 0; padding: 10px; background: #1a1a1a; border-left: 3px solid #0f0; border-radius: 4px; }
.tree-section-title { font-size: 18px; color: #0f0; margin-bottom: 10px; cursor: pointer; }
.tree-section-title:hover { color: #0ff; }
.tree-item { padding: 3px 0 3px 20px; border-left: 1px dashed #333; }
.tree-item:last-child { border-left-color: transparent; }
.tree-item::before { content: "├── "; color: #555; }
.tree-item:last-child::before { content: "└── "; }
.tree-nested { margin-left: 20px; }
.tree-icon { margin-right: 8px; opacity: 0.7; }
.tree-header { text-align: center; margin-bottom: 30px; }
.tree-header h1 { color: #0f0; font-size: 28px; margin: 0; }
.tree-header p { color: #888; }
.tree-stats { display: flex; justify-content: center; gap: 30px; margin: 20px 0; }
.tree-stat { text-align: center; padding: 10px 20px; background: #222; border-radius: 8px; }
.tree-stat-value { font-size: 24px; color: #0ff; }
.tree-stat-label { font-size: 12px; color: #888; }
.tree-search { margin: 20px auto; max-width: 400px; }
.tree-search input { width: 100%; padding: 10px; background: #222; border: 1px solid #333; color: #fff; border-radius: 4px; }
.tree-search input:focus { outline: none; border-color: #0f0; }
.tree-grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(350px, 1fr)); gap: 15px; }
`),
E('div', { 'class': 'tree-header' }, [
E('h1', {}, 'SecuBox LuCI Navigation Tree'),
E('p', {}, 'Clickable map of all LuCI dashboards and modules')
]),
E('div', { 'class': 'tree-stats' }, [
E('div', { 'class': 'tree-stat' }, [
E('div', { 'class': 'tree-stat-value' }, Object.keys(LUCI_TREE).length.toString()),
E('div', { 'class': 'tree-stat-label' }, 'Categories')
]),
E('div', { 'class': 'tree-stat' }, [
E('div', { 'class': 'tree-stat-value', 'id': 'total-links' }, '...'),
E('div', { 'class': 'tree-stat-label' }, 'Total Links')
]),
E('div', { 'class': 'tree-stat' }, [
E('div', { 'class': 'tree-stat-value' }, '60+'),
E('div', { 'class': 'tree-stat-label' }, 'LuCI Apps')
])
]),
E('div', { 'class': 'tree-search' }, [
E('input', {
'type': 'text',
'placeholder': 'Search modules...',
'id': 'tree-search-input',
'oninput': 'filterTree(this.value)'
})
]),
E('div', { 'class': 'luci-tree tree-grid', 'id': 'tree-container' })
]);
// Build tree
var treeContainer = container.querySelector('#tree-container');
var totalLinks = 0;
function buildTreeNode(name, node, level) {
var items = [];
totalLinks++;
var link = E('a', {
'href': '/cgi-bin/luci/' + node.path,
'target': '_blank',
'class': 'tree-link'
}, name);
if (node.children) {
var nested = E('div', { 'class': 'tree-nested' });
Object.keys(node.children).forEach(function(childName) {
var childItems = buildTreeNode(childName, node.children[childName], level + 1);
childItems.forEach(function(item) {
nested.appendChild(E('div', { 'class': 'tree-item', 'data-name': childName.toLowerCase() }, [item]));
});
});
items.push(link);
items.push(nested);
} else {
items.push(link);
}
return items;
}
Object.keys(LUCI_TREE).forEach(function(sectionName) {
var section = LUCI_TREE[sectionName];
var sectionDiv = E('div', { 'class': 'tree-section', 'data-section': sectionName.toLowerCase() });
var titleLink = E('a', {
'href': '/cgi-bin/luci/' + section.path,
'target': '_blank',
'class': 'tree-section-title'
}, sectionName);
sectionDiv.appendChild(titleLink);
if (section.children) {
Object.keys(section.children).forEach(function(childName) {
var childItems = buildTreeNode(childName, section.children[childName], 1);
childItems.forEach(function(item) {
sectionDiv.appendChild(E('div', { 'class': 'tree-item', 'data-name': childName.toLowerCase() }, [item]));
});
});
}
treeContainer.appendChild(sectionDiv);
});
container.querySelector('#total-links').textContent = totalLinks.toString();
// Add search filter script
var script = E('script', {}, `
function filterTree(query) {
query = query.toLowerCase();
var sections = document.querySelectorAll('.tree-section');
sections.forEach(function(section) {
var sectionName = section.dataset.section;
var items = section.querySelectorAll('.tree-item');
var hasMatch = sectionName.includes(query);
items.forEach(function(item) {
var name = item.dataset.name || '';
var text = item.textContent.toLowerCase();
if (text.includes(query) || name.includes(query)) {
item.style.display = '';
hasMatch = true;
} else {
item.style.display = 'none';
}
});
section.style.display = hasMatch ? '' : 'none';
});
}
`);
container.appendChild(script);
return container;
},
handleSaveApply: null,
handleSave: null,
handleReset: null
});

@@ -104,6 +104,25 @@
"path": "secubox-portal/devstatus"
}
},
"secubox-public/luci-tree": {
"title": "LuCI Tree",
"order": 40,
"action": {
"type": "view",
"path": "secubox-portal/luci-tree"
}
},
"admin/secubox/luci-tree": {
"title": "LuCI Tree",
"order": 95,
"action": {
"type": "view",
"path": "secubox-portal/luci-tree"
},
"depends": {
"acl": ["luci-app-secubox-portal"]
}
},
"secubox-public/login": {
"title": "Connexion",
"order": 99,

@@ -45,7 +45,13 @@
"p2p_discover",
"p2p_get_catalog",
"p2p_get_peer_catalog",
"p2p_get_shared_services"
"p2p_get_shared_services",
"get_timeline",
"get_evolution",
"get_heartbeat_line",
"get_stats_status",
"get_history",
"get_collector_cache"
],
"luci.service-registry": [
"list_services",

@@ -6,7 +6,7 @@ include $(TOPDIR)/rules.mk
PKG_NAME:=secubox-core
PKG_VERSION:=0.10.0
PKG_RELEASE:=15
PKG_RELEASE:=16
PKG_ARCH:=all
PKG_LICENSE:=GPL-2.0
PKG_MAINTAINER:=SecuBox Team
@@ -88,6 +88,7 @@ define Package/secubox-core/install
$(INSTALL_BIN) ./root/usr/sbin/secubox-feedback $(1)/usr/sbin/
$(INSTALL_BIN) ./root/usr/sbin/secubox-tftp-recovery $(1)/usr/sbin/
$(INSTALL_BIN) ./root/usr/sbin/secubox-vhost $(1)/usr/sbin/
$(INSTALL_BIN) ./root/usr/sbin/secubox-stats-persist $(1)/usr/sbin/
$(INSTALL_DIR) $(1)/usr/bin
$(INSTALL_BIN) ./root/usr/bin/secubox-services-status $(1)/usr/bin/
@@ -95,9 +96,10 @@ define Package/secubox-core/install
# TFTP Recovery init script
$(INSTALL_BIN) ./root/etc/init.d/secubox-tftp-recovery $(1)/etc/init.d/
# File integrity monitoring cron job
# Cron jobs: integrity monitoring and stats persistence
$(INSTALL_DIR) $(1)/etc/cron.d
$(INSTALL_DATA) ./root/etc/cron.d/secubox-integrity $(1)/etc/cron.d/
$(INSTALL_DATA) ./root/etc/cron.d/secubox-stats-persist $(1)/etc/cron.d/
# TFTP Mesh library
$(INSTALL_DIR) $(1)/usr/lib/secubox

@@ -0,0 +1,13 @@
# SecuBox Stats Persistence
# Periodic cache persistence and evolution generation
# Daemon runs its own loops but this ensures recovery on daemon restart
# Every 5 minutes: persist cache to /srv (backup in case daemon dies)
*/5 * * * * root /usr/sbin/secubox-stats-persist persist >/dev/null 2>&1
# Every hour: generate timeline and evolution
0 * * * * root /usr/sbin/secubox-stats-persist timeline >/dev/null 2>&1
5 * * * * root /usr/sbin/secubox-stats-persist evolution >/dev/null 2>&1
# Daily: aggregate and cleanup old history
0 1 * * * root /usr/sbin/secubox-stats-persist aggregate >/dev/null 2>&1

@@ -0,0 +1,219 @@
#!/bin/sh
#
# SecuBox RPCD - Stats Evolution & Timeline
# Persistent stats, time-series evolution, combined heartbeat line
#
PERSIST_DIR="/srv/secubox/stats"
CACHE_DIR="/tmp/secubox"
# Register methods
list_methods_stats() {
add_method "get_timeline"
add_method "get_evolution"
add_method "get_heartbeat_line"
add_method "get_stats_status"
add_method "get_history"
add_method_str "get_collector_cache" "collector"
}
# Handle method calls
handle_stats() {
local method="$1"
case "$method" in
get_timeline)
_do_get_timeline
;;
get_evolution)
_do_get_evolution
;;
get_heartbeat_line)
_do_get_heartbeat_line
;;
get_stats_status)
_do_get_stats_status
;;
get_history)
read_input_json
local collector=$(get_input "collector")
local period=$(get_input "period")
_do_get_history "$collector" "$period"
;;
get_collector_cache)
read_input_json
local collector=$(get_input "collector")
_do_get_collector_cache "$collector"
;;
*)
return 1
;;
esac
}
# Get timeline (24h evolution for all collectors)
_do_get_timeline() {
local timeline_file="$PERSIST_DIR/timeline.json"
if [ -f "$timeline_file" ]; then
cat "$timeline_file"
else
json_init
json_add_string "error" "Timeline not generated yet"
json_add_string "hint" "Run: secubox-stats-persist timeline"
json_dump
fi
}
# Get evolution (combined influence score)
_do_get_evolution() {
local evolution_file="$PERSIST_DIR/evolution.json"
if [ -f "$evolution_file" ]; then
cat "$evolution_file"
else
json_init
json_add_string "error" "Evolution not generated yet"
json_add_string "hint" "Run: secubox-stats-persist evolution"
json_dump
fi
}
# Get heartbeat line (real-time 3min buffer)
_do_get_heartbeat_line() {
local heartbeat_file="$PERSIST_DIR/heartbeat-line.json"
if [ -f "$heartbeat_file" ]; then
cat "$heartbeat_file"
else
# Generate on-demand if daemon not running
if [ -x /usr/sbin/secubox-stats-persist ]; then
/usr/sbin/secubox-stats-persist heartbeat
else
json_init
json_add_string "error" "Heartbeat line not available"
json_dump
fi
fi
}
# Get stats persistence status
_do_get_stats_status() {
json_init
# Check persistence directory
local persist_ok=0
[ -d "$PERSIST_DIR" ] && persist_ok=1
json_add_boolean "persistence_enabled" "$persist_ok"
json_add_string "persist_dir" "$PERSIST_DIR"
json_add_string "cache_dir" "$CACHE_DIR"
# Count cached files
local cache_count=$(ls "$CACHE_DIR"/*.json 2>/dev/null | wc -l)
json_add_int "cached_files" "${cache_count:-0}"
# Count persisted files
local persist_count=$(ls "$PERSIST_DIR"/*.json 2>/dev/null | wc -l)
json_add_int "persisted_files" "${persist_count:-0}"
# Last persist time
local last_persist=""
if [ -f "$PERSIST_DIR/.last_persist" ]; then
last_persist=$(cat "$PERSIST_DIR/.last_persist")
fi
json_add_string "last_persist" "${last_persist:-never}"
# History stats
json_add_object "history"
local hourly_total=0 daily_total=0
for collector in health threat capacity crowdsec mitmproxy; do
local hourly_count=$(ls "$PERSIST_DIR/history/hourly/$collector"/*.json 2>/dev/null | wc -l)
local daily_count=$(ls "$PERSIST_DIR/history/daily/$collector"/*.json 2>/dev/null | wc -l)
hourly_total=$((hourly_total + hourly_count))
daily_total=$((daily_total + daily_count))
done
json_add_int "hourly_snapshots" "$hourly_total"
json_add_int "daily_aggregates" "$daily_total"
json_close_object
# Current cache values
json_add_object "current"
local health=$(jsonfilter -i "$CACHE_DIR/health.json" -e '@.score' 2>/dev/null || echo 0)
local threat=$(jsonfilter -i "$CACHE_DIR/threat.json" -e '@.level' 2>/dev/null || echo 0)
local capacity=$(jsonfilter -i "$CACHE_DIR/capacity.json" -e '@.combined' 2>/dev/null || echo 0)
json_add_int "health" "$health"
json_add_int "threat" "$threat"
json_add_int "capacity" "$capacity"
# Calculate influence
local t_inv=$((100 - threat))
local c_inv=$((100 - capacity))
local influence=$(( (health * 40 + t_inv * 30 + c_inv * 30) / 100 ))
json_add_int "influence" "$influence"
json_close_object
json_dump
}
# Get history for specific collector
_do_get_history() {
local collector="$1"
local period="$2"
[ -z "$period" ] && period="hourly"
json_init
json_add_string "collector" "$collector"
json_add_string "period" "$period"
local history_dir="$PERSIST_DIR/history/$period/$collector"
if [ ! -d "$history_dir" ]; then
json_add_string "error" "No history for $collector ($period)"
json_dump
return
fi
json_add_array "data"
for hfile in $(ls -t "$history_dir"/*.json 2>/dev/null | head -48); do
[ -f "$hfile" ] || continue
local filename=$(basename "$hfile" .json)
# Extract key values based on collector
local ts=$(jsonfilter -i "$hfile" -e '@.timestamp' 2>/dev/null || echo 0)
local val
case "$collector" in
health) val=$(jsonfilter -i "$hfile" -e '@.score' 2>/dev/null) ;;
threat) val=$(jsonfilter -i "$hfile" -e '@.level' 2>/dev/null) ;;
capacity) val=$(jsonfilter -i "$hfile" -e '@.combined' 2>/dev/null) ;;
crowdsec*) val=$(jsonfilter -i "$hfile" -e '@.alerts_24h' 2>/dev/null) ;;
mitmproxy) val=$(jsonfilter -i "$hfile" -e '@.threats_today' 2>/dev/null) ;;
*) val=$(jsonfilter -i "$hfile" -e '@.total' 2>/dev/null) ;;
esac
[ -z "$val" ] && val=0
json_add_object ""
json_add_string "time" "$filename"
json_add_int "timestamp" "$ts"
json_add_int "value" "$val"
json_close_object
done
json_close_array
json_dump
}
# Get current collector cache
_do_get_collector_cache() {
local collector="$1"
local cache_file="$CACHE_DIR/${collector}.json"
if [ -f "$cache_file" ]; then
cat "$cache_file"
else
json_init
json_add_string "error" "Cache not found: $collector"
json_dump
fi
}

@@ -67,6 +67,9 @@ _list_all_methods() {
# P2P module
type list_methods_p2p >/dev/null 2>&1 && list_methods_p2p
# Stats module (evolution, timeline, heartbeat line)
type list_methods_stats >/dev/null 2>&1 && list_methods_stats
json_dump
}
@@ -142,6 +145,11 @@ _call_method() {
handle_p2p "$method" && return 0
fi
# Stats methods (evolution, timeline, heartbeat line)
if type handle_stats >/dev/null 2>&1; then
handle_stats "$method" && return 0
fi
# Unknown method
json_init
json_add_boolean "error" true

@@ -1216,6 +1216,14 @@ daemon_mode() {
STATUS_COLLECTOR_PID=$!
log debug "Status collector started (PID: $STATUS_COLLECTOR_PID)"
# Start stats persistence layer (3-tier: RAM (/tmp) → buffer → /srv)
if [ -x /usr/sbin/secubox-stats-persist ]; then
/usr/sbin/secubox-stats-persist recover 2>/dev/null
/usr/sbin/secubox-stats-persist daemon &
STATS_PERSIST_PID=$!
log debug "Stats persistence daemon started (PID: $STATS_PERSIST_PID)"
fi
# Wait for initial cache population
sleep 1

@@ -0,0 +1,383 @@
#!/bin/sh
#
# SecuBox Stats Persistence & Evolution Layer
# 3-tier caching: RAM (/tmp) → Volatile Buffer → Persistent (/srv)
# Time-series: Hourly snapshots (24h), Daily aggregates (30d)
# Never-trashed stats with reboot recovery
#
PERSIST_DIR="/srv/secubox/stats"
CACHE_DIR="/tmp/secubox"
HISTORY_DIR="$PERSIST_DIR/history"
TIMELINE_FILE="$PERSIST_DIR/timeline.json"
EVOLUTION_FILE="$PERSIST_DIR/evolution.json"
HEARTBEAT_LINE="$PERSIST_DIR/heartbeat-line.json"
# Collectors to persist (must match cache file basenames)
COLLECTORS="health threat capacity crowdsec mitmproxy netifyd client-guardian mac-guardian netdiag crowdsec-overview"
# Initialize directories
init_persist() {
mkdir -p "$PERSIST_DIR" "$HISTORY_DIR/hourly" "$HISTORY_DIR/daily"
mkdir -p "$CACHE_DIR"
# Create evolution tracking files if missing
for collector in $COLLECTORS; do
local hourly_dir="$HISTORY_DIR/hourly/$collector"
local daily_dir="$HISTORY_DIR/daily/$collector"
mkdir -p "$hourly_dir" "$daily_dir"
done
echo "Stats persistence initialized at $PERSIST_DIR"
}
# Recover cache from persistent storage on boot
recover_cache() {
for collector in $COLLECTORS; do
local persist_file="$PERSIST_DIR/${collector}.json"
local cache_file="$CACHE_DIR/${collector}.json"
# Only recover if cache is missing but persistent exists
if [ ! -f "$cache_file" ] && [ -f "$persist_file" ]; then
cp "$persist_file" "$cache_file"
echo "Recovered $collector from persistent storage"
fi
done
}
# Persist current cache to storage (atomic writes)
persist_cache() {
local now=$(date +%s)
local hour=$(date +%Y%m%d%H)
local day=$(date +%Y%m%d)
for collector in $COLLECTORS; do
local cache_file="$CACHE_DIR/${collector}.json"
local persist_file="$PERSIST_DIR/${collector}.json"
# Skip if cache doesn't exist
[ -f "$cache_file" ] || continue
# Atomic persist: cache → tmp → persistent
local tmp_file="$PERSIST_DIR/.${collector}.tmp"
cp "$cache_file" "$tmp_file" 2>/dev/null && \
mv -f "$tmp_file" "$persist_file" 2>/dev/null
# Hourly snapshot (only once per hour)
local hourly_file="$HISTORY_DIR/hourly/$collector/${hour}.json"
if [ ! -f "$hourly_file" ]; then
cp "$cache_file" "$hourly_file" 2>/dev/null
fi
done
echo "$now" > "$PERSIST_DIR/.last_persist"
}
# Create hourly aggregate from snapshots
aggregate_hourly() {
local collector="$1"
local hour="$2" # Format: YYYYMMDDHH
local hourly_file="$HISTORY_DIR/hourly/$collector/${hour}.json"
[ -f "$hourly_file" ] || return 1
# Extract key numeric fields for aggregation
local data=$(cat "$hourly_file" 2>/dev/null)
echo "$data"
}
# Create daily aggregate from 24 hourly snapshots
aggregate_daily() {
local day=$(date +%Y%m%d)
for collector in $COLLECTORS; do
local daily_file="$HISTORY_DIR/daily/$collector/${day}.json"
local hourly_dir="$HISTORY_DIR/hourly/$collector"
# Skip if already aggregated today
[ -f "$daily_file" ] && continue
# Count hourly files for this day (should be 24 at end of day)
local hourly_count=$(ls "$hourly_dir/${day}"*.json 2>/dev/null | wc -l)
[ "$hourly_count" -lt 6 ] && continue # Need at least 6 hours
# Create daily aggregate with min/max/avg
local min=999999 max=0 sum=0 count=0
for hfile in "$hourly_dir/${day}"*.json; do
[ -f "$hfile" ] || continue
# Extract primary metric based on collector type
local val
case "$collector" in
health) val=$(jsonfilter -i "$hfile" -e '@.score' 2>/dev/null) ;;
threat) val=$(jsonfilter -i "$hfile" -e '@.level' 2>/dev/null) ;;
capacity) val=$(jsonfilter -i "$hfile" -e '@.combined' 2>/dev/null) ;;
crowdsec*) val=$(jsonfilter -i "$hfile" -e '@.alerts_24h' 2>/dev/null) ;;
mitmproxy) val=$(jsonfilter -i "$hfile" -e '@.threats_today' 2>/dev/null) ;;
*) val=$(jsonfilter -i "$hfile" -e '@.total' 2>/dev/null) ;;
esac
[ -z "$val" ] && val=0
[ "$val" -lt "$min" ] && min=$val
[ "$val" -gt "$max" ] && max=$val
sum=$((sum + val))
count=$((count + 1))
done
[ "$count" -gt 0 ] || continue
local avg=$((sum / count))
printf '{"date":"%s","min":%d,"max":%d,"avg":%d,"samples":%d}\n' \
"$day" "$min" "$max" "$avg" "$count" > "$daily_file"
done
}
# Cleanup old history files (keep 24h hourly, 30d daily)
cleanup_history() {
local now=$(date +%s)
local hourly_cutoff=$((now - 86400)) # 24 hours
local daily_cutoff=$((now - 2592000)) # 30 days
for collector in $COLLECTORS; do
# Cleanup hourly (older than 24h)
for hfile in "$HISTORY_DIR/hourly/$collector"/*.json; do
[ -f "$hfile" ] || continue
local mtime=$(stat -c %Y "$hfile" 2>/dev/null || echo 0)
[ "$mtime" -lt "$hourly_cutoff" ] && rm -f "$hfile"
done
# Cleanup daily (older than 30d)
for dfile in "$HISTORY_DIR/daily/$collector"/*.json; do
[ -f "$dfile" ] || continue
local mtime=$(stat -c %Y "$dfile" 2>/dev/null || echo 0)
[ "$mtime" -lt "$daily_cutoff" ] && rm -f "$dfile"
done
done
}
# Generate combined timeline (last 24h evolution)
generate_timeline() {
local now=$(date +%s)
local tmp_file="$PERSIST_DIR/.timeline.tmp"
printf '{"generated":%d,"collectors":{' "$now" > "$tmp_file"
local first=1
for collector in $COLLECTORS; do
local hourly_dir="$HISTORY_DIR/hourly/$collector"
[ "$first" = "0" ] && printf ',' >> "$tmp_file"
first=0
printf '"%s":[' "$collector" >> "$tmp_file"
# Take the last 24 hourly snapshots (newest first)
local hfirst=1
for hfile in $(ls -t "$hourly_dir"/*.json 2>/dev/null | head -24); do
[ -f "$hfile" ] || continue
[ "$hfirst" = "0" ] && printf ',' >> "$tmp_file"
hfirst=0
# Extract timestamp and primary value
local ts=$(jsonfilter -i "$hfile" -e '@.timestamp' 2>/dev/null || echo 0)
local val
case "$collector" in
health) val=$(jsonfilter -i "$hfile" -e '@.score' 2>/dev/null) ;;
threat) val=$(jsonfilter -i "$hfile" -e '@.level' 2>/dev/null) ;;
capacity) val=$(jsonfilter -i "$hfile" -e '@.combined' 2>/dev/null) ;;
crowdsec*) val=$(jsonfilter -i "$hfile" -e '@.alerts_24h' 2>/dev/null) ;;
mitmproxy) val=$(jsonfilter -i "$hfile" -e '@.threats_today' 2>/dev/null) ;;
*) val=$(jsonfilter -i "$hfile" -e '@.total' 2>/dev/null) ;;
esac
[ -z "$val" ] && val=0
printf '{"t":%d,"v":%d}' "$ts" "$val" >> "$tmp_file"
done
printf ']' >> "$tmp_file"
done
printf '}}\n' >> "$tmp_file"
mv -f "$tmp_file" "$TIMELINE_FILE"
}
# Generate evolution sparkline data (combined influence score over time)
generate_evolution() {
local now=$(date +%s)
local tmp_file="$PERSIST_DIR/.evolution.tmp"
printf '{"generated":%d,"window":"24h","points":[' "$now" > "$tmp_file"
# Combine health, threat, capacity into single timeline
local health_dir="$HISTORY_DIR/hourly/health"
local threat_dir="$HISTORY_DIR/hourly/threat"
local capacity_dir="$HISTORY_DIR/hourly/capacity"
# Get timestamps from health (most reliable)
local first=1
for hfile in $(ls -t "$health_dir"/*.json 2>/dev/null | head -48 | tac); do
[ -f "$hfile" ] || continue
local hour=$(basename "$hfile" .json)
local ts=$(jsonfilter -i "$hfile" -e '@.timestamp' 2>/dev/null || echo 0)
# Get values from all three
local h=$(jsonfilter -i "$hfile" -e '@.score' 2>/dev/null || echo 100)
[ -n "$h" ] || h=100
local tfile="$threat_dir/${hour}.json"
local t=$(jsonfilter -i "$tfile" -e '@.level' 2>/dev/null || echo 0)
[ -n "$t" ] || t=0
local cfile="$capacity_dir/${hour}.json"
local c=$(jsonfilter -i "$cfile" -e '@.combined' 2>/dev/null || echo 0)
[ -n "$c" ] || c=0
[ "$first" = "0" ] && printf ',' >> "$tmp_file"
first=0
# Combined "influence" score: weighted combination
# Health (40%), inverse Threat (30%), inverse Capacity (30%)
local t_inv=$((100 - t))
local c_inv=$((100 - c))
local influence=$(( (h * 40 + t_inv * 30 + c_inv * 30) / 100 ))
printf '{"t":%d,"h":%d,"th":%d,"c":%d,"i":%d}' \
"$ts" "$h" "$t" "$c" "$influence" >> "$tmp_file"
done
printf ']}\n' >> "$tmp_file"
mv -f "$tmp_file" "$EVOLUTION_FILE"
}
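# Worked example of the influence score (illustrative values only):
#   health h=90, threat t=20, capacity c=40
#   t_inv = 100 - 20 = 80, c_inv = 100 - 40 = 60
#   influence = (90*40 + 80*30 + 60*30) / 100 = (3600 + 2400 + 1800) / 100 = 78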
# Generate heartbeat line (last 60 samples, ~3min of data)
generate_heartbeat_line() {
local now=$(date +%s)
local tmp_file="$PERSIST_DIR/.heartbeat.tmp"
# Read current cache values
local h=$(jsonfilter -i "$CACHE_DIR/health.json" -e '@.score' 2>/dev/null || echo 100)
[ -n "$h" ] || h=100
local t=$(jsonfilter -i "$CACHE_DIR/threat.json" -e '@.level' 2>/dev/null || echo 0)
[ -n "$t" ] || t=0
local c=$(jsonfilter -i "$CACHE_DIR/capacity.json" -e '@.combined' 2>/dev/null || echo 0)
[ -n "$c" ] || c=0
# Calculate influence
local t_inv=$((100 - t))
local c_inv=$((100 - c))
local influence=$(( (h * 40 + t_inv * 30 + c_inv * 30) / 100 ))
# Append to rolling buffer (keep last 60)
local buffer_file="$PERSIST_DIR/.heartbeat_buffer"
# Read existing buffer
local buffer=""
[ -f "$buffer_file" ] && buffer=$(cat "$buffer_file")
# Append new point
local new_point=$(printf '{"t":%d,"h":%d,"th":%d,"c":%d,"i":%d}' "$now" "$h" "$t" "$c" "$influence")
if [ -z "$buffer" ]; then
buffer="[$new_point]"
else
# Parse existing, keep last 59, add new
local count=$(echo "$buffer" | tr ',' '\n' | grep -c '"t":')
if [ "$count" -ge 60 ]; then
# Remove first element
buffer=$(echo "$buffer" | sed 's/^\[{[^}]*},/[/')
fi
buffer=$(echo "$buffer" | sed 's/\]$//')
buffer="$buffer,$new_point]"
fi
echo "$buffer" > "$buffer_file"
# Write heartbeat line file
printf '{"generated":%d,"window":"3m","samples":60,"points":%s}\n' \
"$now" "$buffer" > "$tmp_file"
mv -f "$tmp_file" "$HEARTBEAT_LINE"
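# Example heartbeat line (illustrative values, wrapped here for readability;
# the actual file is a single line):
#   {"generated":1770000000,"window":"3m","samples":60,
#    "points":[{"t":1770000000,"h":90,"th":20,"c":40,"i":78}, ...]}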
}
# Main persistence loop (runs every 60s)
daemon_loop() {
init_persist
recover_cache
echo "Stats persistence daemon started"
while true; do
# Persist current cache atomically
persist_cache
# Generate aggregates and timelines
aggregate_daily
generate_timeline
generate_evolution
# Cleanup old data once per hour (when the minute hits 00)
local minute=$(date +%M)
[ "$minute" = "00" ] && cleanup_history
sleep 60
done
}
# Fast heartbeat loop (runs every 3s for heartbeat line)
heartbeat_loop() {
while true; do
generate_heartbeat_line
sleep 3
done
}
# CLI
case "$1" in
init)
init_persist
;;
recover)
init_persist
recover_cache
;;
persist)
persist_cache
;;
aggregate)
aggregate_daily
;;
timeline)
generate_timeline
cat "$TIMELINE_FILE"
;;
evolution)
generate_evolution
cat "$EVOLUTION_FILE"
;;
heartbeat)
generate_heartbeat_line
cat "$HEARTBEAT_LINE"
;;
daemon)
daemon_loop &
heartbeat_loop
;;
status)
echo "=== Stats Persistence Status ==="
echo "Persist Dir: $PERSIST_DIR"
echo "Cache Dir: $CACHE_DIR"
echo ""
echo "Persisted Files:"
ls -la "$PERSIST_DIR"/*.json 2>/dev/null || echo " (none)"
echo ""
echo "Hourly History:"
for collector in $COLLECTORS; do
count=$(ls "$HISTORY_DIR/hourly/$collector"/*.json 2>/dev/null | wc -l)
echo " $collector: $count snapshots"
done
echo ""
echo "Daily History:"
for collector in $COLLECTORS; do
count=$(ls "$HISTORY_DIR/daily/$collector"/*.json 2>/dev/null | wc -l)
echo " $collector: $count days"
done
;;
*)
echo "Usage: $0 {init|recover|persist|aggregate|timeline|evolution|heartbeat|daemon|status}"
exit 1
;;
esac