feat(mesh): ZKP authentication and blockchain sync

- ZKP Mesh Authentication: Zero-Knowledge Proof identity for mesh nodes
  - New API endpoints: zkp-challenge, zkp-verify, zkp/graph
  - Shell functions: ml_zkp_init, ml_zkp_challenge, ml_zkp_verify
  - Enhanced join flow with optional ZKP proof requirement
  - Blockchain acknowledgment via peer_zkp_verified blocks
  - LuCI dashboard with ZKP status section and peer badges

- MirrorNet Ash Compatibility: Fixed BusyBox shell incompatibilities
  - Replaced process substitution with pipe-based patterns
  - Fixed mirror.sh, gossip.sh, health.sh, identity.sh

- Mesh Blockchain Sync: Fixed chain synchronization between nodes
  - Fixed /api/chain/since endpoint to return only new blocks
  - chain_add_block/chain_merge_block use awk for safe JSON insertion
  - Handles varying JSON formatting (whitespace, newlines)
  - Tested bidirectional sync: Master <-> Clone at height 70

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
CyberMind-FR 2026-02-24 16:45:42 +01:00
parent 7889cbb7bc
commit 6b7aa62a0e
16 changed files with 1069 additions and 78 deletions

View File

@ -3232,6 +3232,43 @@ git checkout HEAD -- index.html
- `zkp-hamiltonian/CMakeLists.txt`
- **Commit:** `65539368 feat(zkp-hamiltonian): Add Zero-Knowledge Proof library based on Hamiltonian Cycle`
41. **ZKP Mesh Authentication Integration (2026-02-24)**
- Integrated Zero-Knowledge Proofs into the SecuBox master-link mesh authentication system.
- **Architecture:**
- Each node has ZKP identity (public graph + secret Hamiltonian cycle)
- Challenge-response authentication between mesh peers
- Blockchain acknowledgment of successful verifications
- **New API Endpoints:**
- `GET /api/master-link/zkp-challenge` — Generate authentication challenge with TTL
- `POST /api/master-link/zkp-verify` — Verify ZKP proof, record to blockchain
- `GET /api/zkp/graph` — Serve node's public ZKP graph (base64)
- **New Shell Functions in master-link.sh:**
- `ml_zkp_init()` — Initialize ZKP identity on first boot
- `ml_zkp_status()` — Return ZKP configuration status
- `ml_zkp_challenge()` — Generate challenge with a random ID and expiry
- `ml_zkp_prove()` — Generate proof for given challenge
- `ml_zkp_verify()` — Verify peer's proof against trusted graph
- `ml_zkp_trust_peer()` — Store peer's public graph for future verification
- `ml_zkp_get_graph()` — Return base64-encoded public graph
- **Blockchain Acknowledgment:**
- New block type: `peer_zkp_verified`
- Records: peer_fp, proof_hash, challenge_id, result, verified_by
- **UCI Configuration:**
- `zkp_enabled` — Toggle ZKP authentication
- `zkp_fingerprint` — Auto-derived from graph hash (SHA256[0:16])
- `zkp_require_on_join` — Require ZKP proof for new peers
- `zkp_challenge_ttl` — Challenge validity in seconds (default 30)
- **Verification Test Results:**
- Master (192.168.255.1): ZKP identity initialized, fingerprint `7c5ead2b4e4b0106`
- API verification flow tested: challenge → proof → verify → blockchain record
- `peer_zkp_verified` block successfully recorded to chain
- **Files:**
- `secubox-master-link/files/usr/lib/secubox/master-link.sh` (ZKP functions)
- `secubox-master-link/files/www/api/zkp/graph` (new)
- `secubox-master-link/files/www/api/master-link/zkp-challenge` (new)
- `secubox-master-link/files/www/api/master-link/zkp-verify` (new)
- `secubox-master-link/files/etc/config/master-link` (ZKP options)
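The fingerprint derivation noted above (SHA256 of the public graph, truncated to 16 hex characters) can be sketched in shell. The graph content below is a placeholder, not real `zkp_keygen` output:

```shell
# Derive a ZKP fingerprint: SHA256 of the public graph file,
# truncated to the first 16 hex characters (SHA256[0:16]).
# 'demo public graph' is stand-in content for illustration only.
graph_file=$(mktemp)
printf 'demo public graph' > "$graph_file"
zkp_fp=$(sha256sum "$graph_file" | cut -c1-16)
echo "$zkp_fp"
rm -f "$graph_file"
```

This mirrors the `sha256sum "$ZKP_IDENTITY_GRAPH" | cut -c1-16` call used by `ml_zkp_init()`.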
41. **MetaBlogizer Upload Workflow Fix (2026-02-24)**
- Sites now work immediately after upload without needing unpublish + expose.
- **Root cause:** Upload created HAProxy vhost and mitmproxy route file entry, but mitmproxy never received a reload signal to activate the route.
@ -3312,3 +3349,90 @@ git checkout HEAD -- index.html
- **Verification:** `rcve.gk2.secubox.in` now returns HTTP 200 with correct content.
- **Files Modified:**
- `luci-app-metablogizer/root/usr/libexec/rpcd/luci.metablogizer`
46. **ZKP Join Flow Integration (2026-02-24)**
- Enhanced mesh join protocol to support ZKP (Zero-Knowledge Proof) authentication.
- **Join Request Enhancement** (`ml_join_request()`):
- Now accepts `zkp_proof` (base64) and `zkp_graph` (base64) parameters
- Verifies proof against provided graph using `zkp_verifier`
- Validates fingerprint matches SHA256(graph)[0:16]
- Auto-stores peer's graph in `/etc/secubox/zkp/peers/` on successful verification
- Records `zkp_verified` and `zkp_proof_hash` in request file
- **Join Approval Enhancement** (`ml_join_approve()`):
- Auto-fetches peer's ZKP graph if not already stored during join
- Records `zkp_graph_stored` status in approval response
- Blockchain `peer_approved` blocks now include `zkp_verified` field
- **Peer-side Join** (`ml_join_with_zkp()`):
- New function for ZKP-authenticated mesh joining
- Generates ZKP proof using local identity keypair
- Uses ZKP fingerprint (from graph hash) instead of factory fingerprint
- Auto-stores master's graph for mutual authentication
- **API Update** (`/api/master-link/join`):
- Accepts `zkp_proof` and `zkp_graph` fields in POST body
- **Configuration**:
- `zkp_require_on_join`: When set to 1, rejects joins without valid ZKP proof
- **Verification:** Clone joined with `zkp_verified: true`, graphs exchanged bidirectionally
- **Files Modified:**
- `secubox-master-link/files/usr/lib/secubox/master-link.sh`
- `secubox-master-link/files/www/api/master-link/join`
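A minimal sketch of the payload shape implied by the notes above. The `zkp_proof` and `zkp_graph` field names come from this log; the `token` and `fingerprint` field names are assumptions based on `ml_join_request()`'s parameters, `<join-token>` is a placeholder, and the graph/proof bytes are dummies. A real client would POST the result to `/api/master-link/join`:

```shell
# Build a ZKP-enhanced join payload. The fingerprint is derived the
# same way the server validates it: SHA256(graph)[0:16].
graph_bytes='dummy-graph'    # stand-in for zkp_keygen graph output
graph_b64=$(printf '%s' "$graph_bytes" | base64 | tr -d '\n')
proof_b64=$(printf '%s' 'dummy-proof' | base64 | tr -d '\n')
fp=$(printf '%s' "$graph_bytes" | sha256sum | cut -c1-16)
payload="{\"token\":\"<join-token>\",\"fingerprint\":\"$fp\",\"zkp_proof\":\"$proof_b64\",\"zkp_graph\":\"$graph_b64\"}"
echo "$payload"
```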
47. **LuCI ZKP Dashboard (2026-02-24)**
- Enhanced `luci-app-master-link` with ZKP authentication status visualization.
- **Overview Tab - ZKP Status Section:**
- ZKP Identity card: fingerprint display, copy button, generation status
- ZKP Tools card: installation status for zkp_keygen/prover/verifier
- Trusted Peers card: count of stored peer graphs
- Purple theme (violet gradient) for ZKP elements
- Enabled/Disabled badge next to section title
- **Peer Table Enhancement:**
- New "Auth" column showing authentication method
- `zkpBadge()` helper function for visual indicators:
- 🔐 ZKP badge (purple) for ZKP-verified peers
- TOKEN badge (gray) for token-only authentication
- **Design:**
- Purple accent colors (#8b5cf6, #a855f7, #c084fc) for ZKP elements
- Consistent with SecuBox KISS theme guidelines
- **Files Modified:**
- `luci-app-master-link/htdocs/luci-static/resources/view/secubox/master-link.js`
48. **MirrorNet Ash Compatibility Fix (2026-02-24)**
- Fixed process substitution (`< <(cmd)`) incompatibility with BusyBox ash shell.
- **Pattern replaced:** `while read; do ... done < <(jsonfilter ...)`
- **Ash-compatible pattern:** `jsonfilter ... | while read; do ... done` with temp files for variable persistence
- **Files fixed:**
- `secubox-mirrornet/files/usr/lib/mirrornet/mirror.sh` (3 instances)
- `secubox-mirrornet/files/usr/lib/mirrornet/gossip.sh` (3 instances)
- `secubox-mirrornet/files/usr/lib/mirrornet/health.sh` (1 instance)
- `secubox-mirrornet/files/usr/lib/mirrornet/identity.sh` (1 instance - for loop fix)
- **Tested:** `mirrorctl status`, `mirror-add`, `mirror-upstream`, `mirror-check`, `mirror-haproxy` all working
- **Deployed:** Both master (192.168.255.1) and clone (192.168.255.156) routers
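The pattern swap can be illustrated with a minimal standalone sketch (`list_items` stands in for a `jsonfilter` invocation): because the pipe runs the `while` loop in a subshell under BusyBox ash, any variable set inside it is lost, so counters must round-trip through a temp file.

```shell
# bash-only form (fails on BusyBox ash):
#   while read -r line; do count=$((count+1)); done < <(list_items)
# ash-compatible form: pipe into the loop, persist the counter via a
# temp file because the loop body executes in a subshell.
list_items() { printf 'alpha\nbeta\ngamma\n'; }
tmp=$(mktemp)
echo 0 > "$tmp"
list_items | while read -r line; do
    count=$(cat "$tmp")
    echo $((count + 1)) > "$tmp"
done
count=$(cat "$tmp")   # read the persisted value back in the parent
rm -f "$tmp"
echo "$count"         # prints 3
```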
49. **Mesh Blockchain Sync (2026-02-24)**
- Fixed blockchain chain synchronization between mesh nodes.
- **Chain Append Fix:**
- `chain_add_block()`: Uses awk to safely insert new blocks before `] }` ending
- Handles JSON with/without trailing newlines and varying whitespace
- Compacts multi-line blocks to single line for clean insertion
- **Chain Merge Fix:**
- `chain_merge_block()`: Same awk-based approach for remote block merging
- Validates block structure and prev_hash linkage before merging
- **Sync Endpoint Fix:**
- `/api/chain/since/<hash>`: Now properly returns only blocks after given hash
- Returns JSON array of blocks (not full chain)
- Supports partial hash matching
- **Sync Function Fix:**
- `sync_with_peer()`: Properly fetches and merges missing blocks
- Uses `chain_merge_block()` for each received block
- Stores block data in blocks directory
- **Verification:**
- Master→Clone sync: Block 70 synced successfully
- Clone→Master sync: Block 69 synced successfully
- Both nodes at height 70 with matching hash
- JSON validity confirmed via Python parser
- **Files Modified:**
- `secubox-core/root/usr/lib/secubox/p2p-mesh.sh`
- `secubox-core/root/www/api/chain`
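The awk insertion technique from `chain_add_block()` can be demonstrated standalone on a toy chain document. One difference, flagged as an assumption: this sketch sets RS to an unlikely byte so the whole input is a single record, whereas the shipped code uses paragraph mode (`RS=""`), which behaves the same as long as chain.json contains no blank lines.

```shell
# Insert a compact block before the closing "] }" of a
# chain.json-style document, as chain_add_block() does.
chain='{ "height": 2, "blocks": [ {"index":0}, {"index":1} ] }'
block='{"index":2}'
out=$(printf '%s' "$chain" | awk -v block="$block" '
    BEGIN { RS = "\001"; ORS = "" }   # whole input = one record
    {
        # locate the trailing "] }" (any whitespace) and splice
        # the new block in just before it
        n = match($0, /\][ \t\n]*\}[ \t\n]*$/)
        if (n > 0)
            print substr($0, 1, n - 1) "," block "]}"
        else
            print $0
    }')
echo "$out"
```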

View File

@ -1,6 +1,6 @@
# Work In Progress (Claude)
_Last updated: 2026-02-24 (Service Stability Fixes)_
_Last updated: 2026-02-24 (ZKP Mesh Authentication)_
> **Architecture Reference**: SecuBox Fanzine v3 — Les 4 Couches
@ -62,6 +62,52 @@ _Last updated: 2026-02-24 (Service Stability Fixes)_
- Gossip-based exposure config sync via secubox-p2p
- Created `luci-app-vortex-dns` dashboard
### Just Completed (2026-02-24)
- **ZKP Mesh Authentication** — DONE (2026-02-24)
- Zero-Knowledge Proof integration for cryptographic mesh authentication
- Each node has ZKP identity (public graph + secret Hamiltonian cycle)
- New API endpoints: `/api/master-link/zkp-challenge`, `/api/master-link/zkp-verify`, `/api/zkp/graph`
- Shell functions: `ml_zkp_init()`, `ml_zkp_challenge()`, `ml_zkp_verify()`, `ml_zkp_trust_peer()`
- Blockchain acknowledgment via `peer_zkp_verified` block type
- UCI config options: `zkp_enabled`, `zkp_fingerprint`, `zkp_require_on_join`, `zkp_challenge_ttl`
- Tested on master (fingerprint: `7c5ead2b4e4b0106`)
- Files: `master-link.sh` (ZKP functions), 3 new API endpoints
- **ZKP Join Flow Integration** — DONE (2026-02-24)
- Enhanced `ml_join_request()` to accept and verify ZKP proofs during join
- Enhanced `ml_join_approve()` to auto-fetch and store peer's ZKP graph
- New peer-side `ml_join_with_zkp()` function for ZKP-authenticated joining
- `/api/master-link/join` now accepts `zkp_proof` and `zkp_graph` fields
- When ZKP proof provided: fingerprint = SHA256(graph)[0:16] (ZKP fingerprint)
- Option `zkp_require_on_join` to mandate ZKP for all new joins
- Join requests now store `zkp_verified` and `zkp_proof_hash` fields
- Tested: Clone joined with `zkp_verified: true`, graph auto-stored on approval
- **LuCI ZKP Dashboard** — DONE (2026-02-24)
- Added ZKP Status section to `luci-app-master-link` Overview tab
- Cards: ZKP Identity (fingerprint), ZKP Tools status, Trusted Peers count
- Color theme: purple gradient for ZKP elements
- Added ZKP badge column to peer table (🔐ZKP vs TOKEN)
- Helper function `zkpBadge()` for visual auth type indicator
- **MirrorNet Ash Compatibility Fix** — DONE (2026-02-24)
- Fixed process substitution `< <(cmd)` incompatibility with BusyBox ash
- Converted to pipe-based patterns with temp files for variable persistence
- Files fixed: mirror.sh (3), gossip.sh (3), health.sh (1), identity.sh (1)
- Tested: `mirrorctl` CLI fully functional on both routers
- Mirror features working: add service, add upstream, health check, HAProxy config generation
- **Mesh Blockchain Sync** — DONE (2026-02-24)
- Fixed chain.json append logic for proper JSON structure preservation
- Fixed `/api/chain/since/<hash>` endpoint to return only new blocks as array
- `chain_add_block()`: Uses awk to safely insert before closing `] }`
- `chain_merge_block()`: Same awk-based approach for remote block merging
- `sync_with_peer()`: Properly merges blocks into local chain
- Handles JSON with/without trailing newlines and varying whitespace
- Tested bidirectional sync: Master ↔ Clone both at height 70, matching hash
- Files: `p2p-mesh.sh` (chain functions), `/www/api/chain` (endpoint)
### Just Completed (2026-02-20)
- **LuCI VM Manager** — DONE (2026-02-20)

View File

@ -421,7 +421,15 @@
"Bash(# Check if OpenWrt toolchain is available ls -la /home/reepost/CyberMindStudio/secubox-openwrt/secubox-tools/openwrt/)",
"Bash(# Create symlink in SDK feeds cd /home/reepost/CyberMindStudio/secubox-openwrt/secubox-tools/sdk ln -sf ../local-feed/zkp-hamiltonian/openwrt feeds/local/zkp-hamiltonian || true ls -la feeds/local/)",
"WebFetch(domain:www.linkedin.com)",
"WebFetch(domain:www.crowdsec.net)"
"WebFetch(domain:www.crowdsec.net)",
"Bash(apt-cache search:*)",
"Bash(musl-gcc:*)",
"Bash(docker run:*)",
"Bash(PATH=/tmp/sdk-x86/openwrt-sdk-24.10.5-x86-64_gcc-13.3.0_musl.Linux-x86_64/staging_dir/toolchain-x86_64_gcc-13.3.0_musl/bin:$PATH STAGING_DIR=/tmp/sdk-x86/openwrt-sdk-24.10.5-x86-64_gcc-13.3.0_musl.Linux-x86_64/staging_dir make:*)",
"Bash(/usr/bin/tail:*)",
"Bash(/usr/bin/make:*)",
"Bash(/tmp/build-zkp-x86.sh:*)",
"Bash(__NEW_LINE_a9089175728efc91__ echo \"\")"
]
}
}

View File

@ -75,6 +75,22 @@ function statusBadge(status) {
}, status || 'unknown');
}
function zkpBadge(verified) {
if (verified === true || verified === 'true') {
return E('span', {
'style': 'display:inline-flex;align-items:center;gap:4px;padding:2px 8px;border-radius:9999px;font-size:10px;font-weight:600;color:#fff;background:#8b5cf6;',
'title': _('Zero-Knowledge Proof verified')
}, [
E('span', { 'style': 'font-size:12px;' }, '🔐'),
'ZKP'
]);
}
return E('span', {
'style': 'display:inline-block;padding:2px 8px;border-radius:9999px;font-size:10px;font-weight:500;color:#94a3b8;background:#f1f5f9;',
'title': _('Token-based authentication')
}, 'TOKEN');
}
function copyText(text) {
if (navigator.clipboard) {
navigator.clipboard.writeText(text).then(function() {
@ -230,6 +246,76 @@ return view.extend({
statusSection.appendChild(statusGrid);
overviewPanel.appendChild(statusSection);
// ZKP Status Section
var zkp = status.zkp || {};
var zkpSection = E('div', { 'class': 'cbi-section' }, [
E('h3', { 'class': 'cbi-section-title' }, [
E('span', {}, _('Zero-Knowledge Proof Authentication')),
zkp.enabled == 1 ?
E('span', {
'style': 'margin-left:10px;font-size:11px;padding:2px 8px;border-radius:9999px;background:#22c55e;color:#fff;font-weight:600;'
}, _('ENABLED')) :
E('span', {
'style': 'margin-left:10px;font-size:11px;padding:2px 8px;border-radius:9999px;background:#94a3b8;color:#fff;font-weight:600;'
}, _('DISABLED'))
])
]);
var zkpGrid = E('div', {
'style': 'display:flex;gap:20px;flex-wrap:wrap;'
});
// ZKP Fingerprint card
zkpGrid.appendChild(E('div', {
'style': 'flex:1;min-width:200px;background:#faf5ff;padding:15px;border-radius:8px;border-left:4px solid #8b5cf6;'
}, [
E('div', { 'style': 'font-size:12px;color:#666;margin-bottom:4px;' }, _('ZKP Identity')),
E('div', { 'style': 'display:flex;align-items:center;gap:8px;' }, [
zkp.has_identity ?
E('code', { 'style': 'font-size:14px;font-weight:600;letter-spacing:0.05em;color:#8b5cf6;' },
zkp.fingerprint || '-') :
E('span', { 'style': 'color:#94a3b8;font-style:italic;' }, _('Not generated')),
zkp.fingerprint ? E('button', {
'class': 'cbi-button cbi-button-action',
'style': 'padding:2px 8px;font-size:11px;',
'click': function() { copyText(zkp.fingerprint); }
}, _('Copy')) : E('span')
]),
E('div', { 'style': 'font-size:11px;color:#94a3b8;margin-top:6px;' },
zkp.has_identity ? _('Cryptographic identity based on Hamiltonian cycle') : _('Run zkp-init to generate'))
]));
// ZKP Tools status
zkpGrid.appendChild(E('div', {
'style': 'flex:1;min-width:200px;background:#faf5ff;padding:15px;border-radius:8px;border-left:4px solid #a855f7;'
}, [
E('div', { 'style': 'font-size:12px;color:#666;margin-bottom:4px;' }, _('ZKP Tools')),
E('div', { 'style': 'display:flex;align-items:center;gap:8px;' }, [
zkp.tools_available ?
E('span', { 'style': 'color:#22c55e;font-weight:600;' }, '✓ ' + _('Installed')) :
E('span', { 'style': 'color:#ef4444;font-weight:600;' }, '✗ ' + _('Not installed'))
]),
E('div', { 'style': 'font-size:11px;color:#94a3b8;margin-top:6px;' },
_('zkp_keygen, zkp_prover, zkp_verifier'))
]));
// Trusted Peers
zkpGrid.appendChild(E('div', {
'style': 'flex:1;min-width:200px;background:#faf5ff;padding:15px;border-radius:8px;border-left:4px solid #c084fc;'
}, [
E('div', { 'style': 'font-size:12px;color:#666;margin-bottom:4px;' }, _('Trusted Peers')),
E('div', {}, [
E('span', { 'style': 'font-size:20px;font-weight:700;color:#8b5cf6;' },
String(zkp.trusted_peers || 0)),
E('span', { 'style': 'font-size:11px;color:#666;margin-left:4px;' }, _('peer graphs stored'))
]),
E('div', { 'style': 'font-size:11px;color:#94a3b8;margin-top:6px;' },
_('For challenge-response authentication'))
]));
zkpSection.appendChild(zkpGrid);
overviewPanel.appendChild(zkpSection);
// Upstream info (for peers/sub-masters)
if (status.upstream) {
overviewPanel.appendChild(E('div', { 'class': 'cbi-section' }, [
@ -347,6 +433,7 @@ return view.extend({
E('th', { 'class': 'th' }, _('Hostname')),
E('th', { 'class': 'th' }, _('Address')),
E('th', { 'class': 'th' }, _('Fingerprint')),
E('th', { 'class': 'th' }, _('Auth')),
E('th', { 'class': 'th' }, _('Requested')),
E('th', { 'class': 'th' }, _('Status')),
E('th', { 'class': 'th' }, _('Actions'))
@ -430,6 +517,7 @@ return view.extend({
E('td', { 'class': 'td' }, peer.hostname || '-'),
E('td', { 'class': 'td' }, E('code', { 'style': 'font-size:12px;' }, peer.address || '-')),
E('td', { 'class': 'td' }, E('code', { 'style': 'font-size:11px;' }, (peer.fingerprint || '').substring(0, 12) + '...')),
E('td', { 'class': 'td' }, zkpBadge(peer.zkp_verified)),
E('td', { 'class': 'td', 'style': 'font-size:12px;' }, formatTime(peer.timestamp)),
E('td', { 'class': 'td' }, statusBadge(peer.status)),
actionCell

View File

@ -138,14 +138,74 @@ EOF
)
# Append to chain (using temp file for atomic update)
# JSON structure ends with: ...} ] } (with possible whitespace)
# Use awk to replace the last ] } with ,newblock ] }
local tmp_chain="$MESH_DIR/tmp/chain_$$.json"
jsonfilter -i "$CHAIN_FILE" -e '@' | sed 's/\]$//' > "$tmp_chain"
echo ",$block_record]}" >> "$tmp_chain"
mv "$tmp_chain" "$CHAIN_FILE"
# Compact the block to single line (escape special chars for awk)
local compact_block=$(echo "$block_record" | tr '\n' ' ' | tr -s ' ' | sed 's/&/\\&/g')
# Use awk to replace last occurrence of ] followed by whitespace and }
# RS="" is paragraph mode (blank-line-separated records); chain.json
# contains no blank lines, so the whole file is read as one record
awk -v block="$compact_block" '
BEGIN { RS=""; ORS="" }
{
# Find last ] } pattern and insert block before it
n = match($0, /\][ \t\n]*\}[ \t\n]*$/)
if (n > 0) {
print substr($0, 1, n-1) "," block "]}\n"
} else {
print $0
}
}
' "$CHAIN_FILE" > "$tmp_chain"
mv "$tmp_chain" "$CHAIN_FILE"
echo "$block_hash"
}
# Merge a remote block into the local chain (for sync)
chain_merge_block() {
local block_json="$1"
# Validate block structure
local block_index=$(echo "$block_json" | jsonfilter -e '@.index' 2>/dev/null)
local block_hash=$(echo "$block_json" | jsonfilter -e '@.hash' 2>/dev/null)
local block_prev=$(echo "$block_json" | jsonfilter -e '@.prev_hash' 2>/dev/null)
[ -z "$block_index" ] && return 1
[ -z "$block_hash" ] && return 1
# Check prev_hash matches our tip
local local_hash=$(chain_get_hash)
if [ "$block_prev" != "$local_hash" ]; then
echo "Block prev_hash mismatch: expected=$local_hash got=$block_prev" >&2
return 1
fi
# Append block to chain
local tmp_chain="$MESH_DIR/tmp/chain_merge_$$.json"
# Compact block to single line (escape special chars for awk)
local compact_block=$(echo "$block_json" | tr '\n' ' ' | tr -s ' ' | sed 's/&/\\&/g')
# Use awk to insert block before last ] }
awk -v block="$compact_block" '
BEGIN { RS=""; ORS="" }
{
n = match($0, /\][ \t\n]*\}[ \t\n]*$/)
if (n > 0) {
print substr($0, 1, n-1) "," block "]}\n"
} else {
print $0
}
}
' "$CHAIN_FILE" > "$tmp_chain"
mv "$tmp_chain" "$CHAIN_FILE"
echo "Merged block $block_index ($block_hash)"
}
chain_verify() {
# Verify chain integrity
local prev_hash="0000000000000000000000000000000000000000000000000000000000000000"
@ -326,22 +386,34 @@ sync_with_peer() {
# Get missing blocks from peer
local missing=$(curl -s "http://$peer_addr:$peer_port/api/chain/since/$local_hash" 2>/dev/null)
if [ -n "$missing" ]; then
echo "$missing" | jsonfilter -e '@[*]' | while read block; do
local block_hash=$(echo "$block" | jsonfilter -e '@.hash')
local block_type=$(echo "$block" | jsonfilter -e '@.type')
if [ -n "$missing" ] && [ "$missing" != "[]" ]; then
# Store blocks in temp file for ordered processing
local tmp_blocks="$MESH_DIR/tmp/sync_blocks_$$.tmp"
echo "$missing" | jsonfilter -e '@[*]' > "$tmp_blocks" 2>/dev/null
# Fetch and store block data
if ! block_exists "$block_hash"; then
curl -s "http://$peer_addr:$peer_port/api/block/$block_hash" -o "$MESH_DIR/tmp/$block_hash"
if [ -f "$MESH_DIR/tmp/$block_hash" ]; then
block_store_file "$MESH_DIR/tmp/$block_hash"
rm "$MESH_DIR/tmp/$block_hash"
local synced_count=0
while read -r block; do
[ -z "$block" ] && continue
local block_hash=$(echo "$block" | jsonfilter -e '@.hash' 2>/dev/null)
# Merge block into local chain
if chain_merge_block "$block"; then
synced_count=$((synced_count + 1))
# Also store block data if present
if [ -n "$block_hash" ] && ! block_exists "$block_hash"; then
local block_data=$(echo "$block" | jsonfilter -e '@.data' 2>/dev/null)
if [ -n "$block_data" ]; then
echo "$block_data" | block_store "$block_hash"
fi
fi
done
fi
done < "$tmp_blocks"
rm -f "$tmp_blocks"
echo "Synced $(echo "$missing" | jsonfilter -e '@[*]' | wc -l) blocks from $peer_addr"
echo "Synced $synced_count blocks from $peer_addr"
else
echo "No new blocks from $peer_addr"
fi
}

View File

@ -25,9 +25,39 @@ case "$PATH_INFO" in
SINCE_HASH=${PATH_INFO#/since/}
# Return blocks since given hash (for sync)
if [ -f "$CHAIN_FILE" ]; then
cat "$CHAIN_FILE"
# Find index of the hash and return subsequent blocks as JSON array
FOUND_INDEX=""
TOTAL_BLOCKS=$(jsonfilter -i "$CHAIN_FILE" -e '@.blocks[*]' 2>/dev/null | wc -l)
# Find the block with matching hash
for i in $(seq 0 $((TOTAL_BLOCKS - 1))); do
BLOCK_HASH=$(jsonfilter -i "$CHAIN_FILE" -e "@.blocks[$i].hash" 2>/dev/null)
if [ "$BLOCK_HASH" = "$SINCE_HASH" ] || echo "$BLOCK_HASH" | grep -q "^$SINCE_HASH"; then
FOUND_INDEX=$i
break
fi
done
if [ -n "$FOUND_INDEX" ]; then
# Return blocks after the found index as array
START_INDEX=$((FOUND_INDEX + 1))
echo "["
FIRST=1
for i in $(seq $START_INDEX $((TOTAL_BLOCKS - 1))); do
BLOCK=$(jsonfilter -i "$CHAIN_FILE" -e "@.blocks[$i]" 2>/dev/null)
if [ -n "$BLOCK" ]; then
[ "$FIRST" = "1" ] || echo ","
echo "$BLOCK"
FIRST=0
fi
done
echo "]"
else
echo "{\"blocks\":[]}"
# Hash not found, return empty array
echo "[]"
fi
else
echo "[]"
fi
;;
*)

View File

@ -9,3 +9,8 @@ config master-link 'main'
option token_ttl '3600'
option auto_approve '0'
option ipk_path '/www/secubox-feed/secubox-master-link_*.ipk'
# ZKP Authentication
option zkp_enabled '1'
option zkp_fingerprint ''
option zkp_require_on_join '0'
option zkp_challenge_ttl '30'

View File

@ -15,10 +15,269 @@ ML_TOKENS_DIR="$ML_DIR/tokens"
ML_REQUESTS_DIR="$ML_DIR/requests"
MESH_PORT="${MESH_PORT:-7331}"
# ZKP Configuration
ZKP_DIR="/etc/secubox/zkp"
ZKP_IDENTITY_GRAPH="$ZKP_DIR/identity.graph"
ZKP_IDENTITY_KEY="$ZKP_DIR/identity.key"
ZKP_PEERS_DIR="$ZKP_DIR/peers"
ZKP_CHALLENGES_DIR="/tmp/zkp_challenges"
ZKP_CHALLENGE_TTL="${ZKP_CHALLENGE_TTL:-30}"
ml_init() {
mkdir -p "$ML_DIR" "$ML_TOKENS_DIR" "$ML_REQUESTS_DIR"
factory_init_keys >/dev/null 2>&1
mesh_init >/dev/null 2>&1
ml_zkp_init >/dev/null 2>&1
}
# ============================================================================
# ZKP Identity Management
# ============================================================================
# Initialize ZKP identity (generate keypair if not exists)
ml_zkp_init() {
# Check if ZKP tools are available
command -v zkp_keygen >/dev/null 2>&1 || return 0
mkdir -p "$ZKP_DIR" "$ZKP_PEERS_DIR" "$ZKP_CHALLENGES_DIR"
# Generate identity if not exists
if [ ! -f "$ZKP_IDENTITY_GRAPH" ] || [ ! -f "$ZKP_IDENTITY_KEY" ]; then
logger -t master-link "Generating ZKP identity keypair..."
local tmpprefix="/tmp/zkp_init_$$"
if zkp_keygen -n 50 -r 1.0 -o "$tmpprefix" >/dev/null 2>&1; then
mv "${tmpprefix}.graph" "$ZKP_IDENTITY_GRAPH"
mv "${tmpprefix}.key" "$ZKP_IDENTITY_KEY"
chmod 644 "$ZKP_IDENTITY_GRAPH"
chmod 600 "$ZKP_IDENTITY_KEY"
logger -t master-link "ZKP identity generated"
else
logger -t master-link "ZKP keygen failed"
rm -f "${tmpprefix}.graph" "${tmpprefix}.key"
return 1
fi
fi
# Derive and store ZKP fingerprint
if [ -f "$ZKP_IDENTITY_GRAPH" ]; then
local zkp_fp=$(sha256sum "$ZKP_IDENTITY_GRAPH" | cut -c1-16)
uci -q set master-link.main.zkp_fingerprint="$zkp_fp"
uci -q set master-link.main.zkp_enabled="1"
uci commit master-link
fi
return 0
}
# Get ZKP status
ml_zkp_status() {
local zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
local zkp_fp=$(uci -q get master-link.main.zkp_fingerprint)
local has_tools="false"
local has_identity="false"
local peer_count=0
command -v zkp_keygen >/dev/null 2>&1 && has_tools="true"
[ -f "$ZKP_IDENTITY_GRAPH" ] && [ -f "$ZKP_IDENTITY_KEY" ] && has_identity="true"
[ -d "$ZKP_PEERS_DIR" ] && peer_count=$(ls -1 "$ZKP_PEERS_DIR"/*.graph 2>/dev/null | wc -l)
cat <<-EOF
{
"enabled": ${zkp_enabled:-0},
"tools_available": $has_tools,
"has_identity": $has_identity,
"fingerprint": "${zkp_fp:-}",
"trusted_peers": $peer_count
}
EOF
}
# Generate ZKP challenge for authentication
ml_zkp_challenge() {
mkdir -p "$ZKP_CHALLENGES_DIR"
local challenge_id=$(head -c 16 /dev/urandom 2>/dev/null | sha256sum | cut -c1-32)
local timestamp=$(date +%s)
local expires=$((timestamp + ZKP_CHALLENGE_TTL))
# Store challenge
echo "{\"id\":\"$challenge_id\",\"timestamp\":$timestamp,\"expires\":$expires}" > "$ZKP_CHALLENGES_DIR/${challenge_id}.json"
# Cleanup old challenges
find "$ZKP_CHALLENGES_DIR" -name "*.json" -mmin +5 -delete 2>/dev/null
cat <<-EOF
{
"challenge_id": "$challenge_id",
"timestamp": $timestamp,
"expires": $expires,
"ttl": $ZKP_CHALLENGE_TTL
}
EOF
}
# Generate ZKP proof for authentication
ml_zkp_prove() {
local challenge_id="$1"
# Check identity exists
if [ ! -f "$ZKP_IDENTITY_GRAPH" ] || [ ! -f "$ZKP_IDENTITY_KEY" ]; then
echo '{"success":false,"error":"no_identity"}'
return 1
fi
# Check tools available
command -v zkp_prover >/dev/null 2>&1 || {
echo '{"success":false,"error":"no_zkp_tools"}'
return 1
}
local proof_file="/tmp/zkp_proof_$$.proof"
# Generate proof
if zkp_prover -g "$ZKP_IDENTITY_GRAPH" -k "$ZKP_IDENTITY_KEY" -o "$proof_file" >/dev/null 2>&1; then
local proof_b64=$(base64 -w 0 "$proof_file")
local proof_hash=$(sha256sum "$proof_file" | cut -c1-16)
local proof_size=$(stat -c %s "$proof_file" 2>/dev/null || echo 0)
rm -f "$proof_file"
local zkp_fp=$(uci -q get master-link.main.zkp_fingerprint)
cat <<-EOF
{
"success": true,
"fingerprint": "$zkp_fp",
"challenge_id": "$challenge_id",
"proof": "$proof_b64",
"proof_hash": "$proof_hash",
"proof_size": $proof_size
}
EOF
else
rm -f "$proof_file"
echo '{"success":false,"error":"proof_generation_failed"}'
return 1
fi
}
# Verify ZKP proof from peer
ml_zkp_verify() {
local peer_fp="$1"
local proof_b64="$2"
local challenge_id="$3"
# Validate challenge
local challenge_file="$ZKP_CHALLENGES_DIR/${challenge_id}.json"
if [ -n "$challenge_id" ] && [ -f "$challenge_file" ]; then
local expires=$(jsonfilter -i "$challenge_file" -e '@.expires' 2>/dev/null)
local now=$(date +%s)
if [ -n "$expires" ] && [ "$now" -gt "$expires" ]; then
rm -f "$challenge_file"
echo '{"success":false,"result":"REJECT","error":"challenge_expired"}'
return 1
fi
fi
# Check peer graph exists
local graph_file="$ZKP_PEERS_DIR/${peer_fp}.graph"
if [ ! -f "$graph_file" ]; then
echo '{"success":false,"result":"REJECT","error":"unknown_peer"}'
return 1
fi
# Check tools available
command -v zkp_verifier >/dev/null 2>&1 || {
echo '{"success":false,"result":"REJECT","error":"no_zkp_tools"}'
return 1
}
# Decode and verify proof
local proof_file="/tmp/zkp_verify_$$.proof"
echo "$proof_b64" | base64 -d > "$proof_file" 2>/dev/null
local result=$(zkp_verifier -g "$graph_file" -p "$proof_file" 2>&1)
local rc=$?
local proof_hash=$(sha256sum "$proof_file" 2>/dev/null | cut -c1-16)
rm -f "$proof_file"
# Clean up challenge after use
[ -f "$challenge_file" ] && rm -f "$challenge_file"
local my_fp=$(factory_fingerprint 2>/dev/null)
local timestamp=$(date +%s)
if [ "$result" = "ACCEPT" ]; then
# Record to blockchain
chain_add_block "peer_zkp_verified" \
"{\"peer_fp\":\"$peer_fp\",\"proof_hash\":\"$proof_hash\",\"challenge_id\":\"$challenge_id\",\"result\":\"ACCEPT\",\"verified_by\":\"$my_fp\"}" \
"$(echo "zkp_verify:${peer_fp}:${proof_hash}:${timestamp}" | sha256sum | cut -d' ' -f1)" >/dev/null 2>&1
logger -t master-link "ZKP verified: peer=$peer_fp result=ACCEPT"
cat <<-EOF
{
"success": true,
"result": "ACCEPT",
"peer_fp": "$peer_fp",
"proof_hash": "$proof_hash",
"verified_at": $timestamp,
"verified_by": "$my_fp"
}
EOF
else
logger -t master-link "ZKP verification failed: peer=$peer_fp result=$result"
cat <<-EOF
{
"success": true,
"result": "REJECT",
"peer_fp": "$peer_fp",
"error": "verification_failed"
}
EOF
return 1
fi
}
# Store peer's ZKP graph (called during approval)
ml_zkp_trust_peer() {
local peer_fp="$1"
local peer_addr="$2"
mkdir -p "$ZKP_PEERS_DIR"
# Fetch peer's graph
local graph_b64=$(curl -s --connect-timeout 5 "http://${peer_addr}:${MESH_PORT}/api/zkp/graph" 2>/dev/null)
if [ -z "$graph_b64" ]; then
logger -t master-link "Failed to fetch ZKP graph from $peer_addr"
return 1
fi
# Decode and verify fingerprint
local tmp_graph="/tmp/zkp_peer_$$.graph"
echo "$graph_b64" | base64 -d > "$tmp_graph" 2>/dev/null
local fetched_fp=$(sha256sum "$tmp_graph" | cut -c1-16)
if [ "$fetched_fp" != "$peer_fp" ]; then
logger -t master-link "ZKP fingerprint mismatch: expected=$peer_fp got=$fetched_fp"
rm -f "$tmp_graph"
return 1
fi
# Store trusted peer graph
mv "$tmp_graph" "$ZKP_PEERS_DIR/${peer_fp}.graph"
chmod 644 "$ZKP_PEERS_DIR/${peer_fp}.graph"
logger -t master-link "Trusted ZKP peer: $peer_fp"
return 0
}
# Get own public graph (base64 encoded)
ml_zkp_get_graph() {
if [ -f "$ZKP_IDENTITY_GRAPH" ]; then
base64 -w 0 "$ZKP_IDENTITY_GRAPH"
else
echo ""
fi
}
# ============================================================================
@ -282,11 +541,14 @@ ml_token_is_auto_approve() {
# ============================================================================
# Handle join request from new node
# Enhanced with ZKP authentication support
ml_join_request() {
local token="$1"
local peer_fp="$2"
local peer_addr="$3"
local peer_hostname="${4:-unknown}"
local zkp_proof_b64="$5"
local zkp_graph_b64="$6"
# Validate token
local validation=$(ml_token_validate "$token")
@ -298,9 +560,62 @@ ml_join_request() {
fi
local token_hash=$(echo "$token" | sha256sum | cut -d' ' -f1)
# Store join request
local now=$(date +%s)
local zkp_verified="false"
local zkp_proof_hash=""
# Check if ZKP is required for join
local zkp_require=$(uci -q get master-link.main.zkp_require_on_join)
local zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
# ZKP verification if proof provided
if [ -n "$zkp_proof_b64" ] && [ -n "$zkp_graph_b64" ]; then
# Check ZKP tools available
if command -v zkp_verifier >/dev/null 2>&1; then
# First, verify peer fingerprint matches graph hash
local tmp_graph="/tmp/zkp_join_$$.graph"
local tmp_proof="/tmp/zkp_join_$$.proof"
echo "$zkp_graph_b64" | base64 -d > "$tmp_graph" 2>/dev/null
echo "$zkp_proof_b64" | base64 -d > "$tmp_proof" 2>/dev/null
local graph_fp=$(sha256sum "$tmp_graph" 2>/dev/null | cut -c1-16)
zkp_proof_hash=$(sha256sum "$tmp_proof" 2>/dev/null | cut -c1-16)
if [ "$graph_fp" = "$peer_fp" ]; then
# Fingerprint matches - verify proof
local verify_result=$(zkp_verifier -g "$tmp_graph" -p "$tmp_proof" 2>&1)
if [ "$verify_result" = "ACCEPT" ]; then
zkp_verified="true"
logger -t master-link "ZKP join proof verified for $peer_fp"
# Store peer graph for future verifications
mkdir -p "$ZKP_PEERS_DIR"
mv "$tmp_graph" "$ZKP_PEERS_DIR/${peer_fp}.graph"
chmod 644 "$ZKP_PEERS_DIR/${peer_fp}.graph"
else
logger -t master-link "ZKP join proof REJECTED for $peer_fp: $verify_result"
rm -f "$tmp_graph"
fi
else
logger -t master-link "ZKP fingerprint mismatch: expected=$peer_fp got=$graph_fp"
rm -f "$tmp_graph"
fi
rm -f "$tmp_proof"
else
logger -t master-link "ZKP tools not available, skipping proof verification"
fi
fi
# Reject if ZKP required but not verified
if [ "$zkp_require" = "1" ] && [ "$zkp_enabled" = "1" ] && [ "$zkp_verified" != "true" ]; then
logger -t master-link "Rejecting join: ZKP required but not verified ($peer_fp)"
echo '{"success":false,"error":"zkp_required","message":"ZKP proof required for join"}'
return 1
fi
# Store join request with ZKP status
cat > "$ML_REQUESTS_DIR/${peer_fp}.json" <<-EOF
{
"fingerprint": "$peer_fp",
@ -308,16 +623,18 @@ ml_join_request() {
"hostname": "$peer_hostname",
"token_hash": "$token_hash",
"timestamp": $now,
"zkp_verified": $zkp_verified,
"zkp_proof_hash": "$zkp_proof_hash",
"status": "pending"
}
EOF
# Add join_request block to chain
# Add join_request block to chain (with ZKP status)
chain_add_block "join_request" \
"{\"fp\":\"$peer_fp\",\"addr\":\"$peer_addr\",\"hostname\":\"$peer_hostname\",\"token_hash\":\"$token_hash\"}" \
"{\"fp\":\"$peer_fp\",\"addr\":\"$peer_addr\",\"hostname\":\"$peer_hostname\",\"token_hash\":\"$token_hash\",\"zkp_verified\":$zkp_verified}" \
"$(echo "join_request:${peer_fp}:${now}" | sha256sum | cut -d' ' -f1)" >/dev/null 2>&1
logger -t master-link "Join request from $peer_hostname ($peer_fp) at $peer_addr"
logger -t master-link "Join request from $peer_hostname ($peer_fp) at $peer_addr [zkp=$zkp_verified]"
# Check auto-approve: either global setting or token-specific (clone tokens)
local auto_approve=$(uci -q get master-link.main.auto_approve)
@ -329,10 +646,11 @@ ml_join_request() {
return $?
fi
echo "{\"success\":true,\"status\":\"pending\",\"message\":\"Join request queued for approval\"}"
echo "{\"success\":true,\"status\":\"pending\",\"zkp_verified\":$zkp_verified,\"message\":\"Join request queued for approval\"}"
}
# Approve a peer join request
# Enhanced with ZKP graph fetching on approval
ml_join_approve() {
local peer_fp="$1"
@ -351,7 +669,10 @@ ml_join_approve() {
local peer_hostname=$(jsonfilter -i "$request_file" -e '@.hostname' 2>/dev/null)
local token_hash=$(jsonfilter -i "$request_file" -e '@.token_hash' 2>/dev/null)
local orig_ts=$(jsonfilter -i "$request_file" -e '@.timestamp' 2>/dev/null)
local zkp_verified=$(jsonfilter -i "$request_file" -e '@.zkp_verified' 2>/dev/null)
local zkp_proof_hash=$(jsonfilter -i "$request_file" -e '@.zkp_proof_hash' 2>/dev/null)
[ -z "$orig_ts" ] && orig_ts=0
[ -z "$zkp_verified" ] && zkp_verified="false"
local now=$(date +%s)
local my_fp=$(factory_fingerprint 2>/dev/null)
local my_depth=$(uci -q get master-link.main.depth)
@ -364,7 +685,22 @@ ml_join_approve() {
# Add peer to mesh
peer_add "$peer_addr" "$MESH_PORT" "$peer_fp" >/dev/null 2>&1
# Update request status
# Fetch peer's ZKP graph if not already stored (from join verification)
local zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
local zkp_graph_stored="false"
if [ "$zkp_enabled" = "1" ] && [ ! -f "$ZKP_PEERS_DIR/${peer_fp}.graph" ]; then
logger -t master-link "Fetching ZKP graph from approved peer $peer_fp"
if ml_zkp_trust_peer "$peer_fp" "$peer_addr" >/dev/null 2>&1; then
zkp_graph_stored="true"
logger -t master-link "Stored ZKP graph for peer $peer_fp"
else
logger -t master-link "Failed to fetch ZKP graph from $peer_fp (ZKP auth won't work for this peer)"
fi
elif [ -f "$ZKP_PEERS_DIR/${peer_fp}.graph" ]; then
zkp_graph_stored="true"
fi
# Update request status with ZKP info
cat > "$request_file" <<-EOF
{
"fingerprint": "$peer_fp",
@ -375,6 +711,9 @@ ml_join_approve() {
"approved_at": $now,
"approved_by": "$my_fp",
"depth": $peer_depth,
"zkp_verified": $zkp_verified,
"zkp_proof_hash": "$zkp_proof_hash",
"zkp_graph_stored": $zkp_graph_stored,
"status": "approved"
}
EOF
@ -391,15 +730,15 @@ ml_join_approve() {
fi
done
# Add peer_approved block to chain
# Add peer_approved block to chain (with ZKP status)
chain_add_block "peer_approved" \
"{\"fp\":\"$peer_fp\",\"addr\":\"$peer_addr\",\"depth\":$peer_depth,\"approved_by\":\"$my_fp\"}" \
"{\"fp\":\"$peer_fp\",\"addr\":\"$peer_addr\",\"depth\":$peer_depth,\"approved_by\":\"$my_fp\",\"zkp_verified\":$zkp_verified}" \
"$(echo "peer_approved:${peer_fp}:${now}" | sha256sum | cut -d' ' -f1)" >/dev/null 2>&1
# Sync chain with new peer
gossip_sync >/dev/null 2>&1 &
logger -t master-link "Peer approved: $peer_hostname ($peer_fp) at depth $peer_depth"
logger -t master-link "Peer approved: $peer_hostname ($peer_fp) at depth $peer_depth [zkp_verified=$zkp_verified]"
cat <<-EOF
{
@ -408,6 +747,8 @@ ml_join_approve() {
"address": "$peer_addr",
"hostname": "$peer_hostname",
"depth": $peer_depth,
"zkp_verified": $zkp_verified,
"zkp_graph_stored": $zkp_graph_stored,
"status": "approved"
}
EOF
@ -818,6 +1159,16 @@ ml_status() {
local hostname=$(uci -q get system.@system[0].hostname 2>/dev/null || hostname)
# ZKP status
local zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
local zkp_fp=$(uci -q get master-link.main.zkp_fingerprint)
local zkp_tools="false"
local zkp_identity="false"
local zkp_peers=0
command -v zkp_keygen >/dev/null 2>&1 && zkp_tools="true"
[ -f "$ZKP_IDENTITY_GRAPH" ] && [ -f "$ZKP_IDENTITY_KEY" ] && zkp_identity="true"
[ -d "$ZKP_PEERS_DIR" ] && zkp_peers=$(ls -1 "$ZKP_PEERS_DIR"/*.graph 2>/dev/null | wc -l)
cat <<-EOF
{
"enabled": $enabled,
@ -835,7 +1186,14 @@ ml_status() {
"total": $((pending + approved + rejected))
},
"active_tokens": $active_tokens,
"chain_height": $chain_height
"chain_height": $chain_height,
"zkp": {
"enabled": ${zkp_enabled:-0},
"fingerprint": "${zkp_fp:-}",
"tools_available": $zkp_tools,
"has_identity": $zkp_identity,
"trusted_peers": $zkp_peers
}
}
EOF
}
@ -941,6 +1299,98 @@ ml_check_local_auth() {
return 1
}
# ============================================================================
# Peer-side Join with ZKP
# ============================================================================
# Send a ZKP-authenticated join request to master
# Called by a node wanting to join a mesh
ml_join_with_zkp() {
local master_addr="$1"
local token="$2"
[ -z "$master_addr" ] || [ -z "$token" ] && {
echo '{"success":false,"error":"missing_args","usage":"ml_join_with_zkp <master_ip> <token>"}'
return 1
}
# Initialize ZKP if not already done
ml_zkp_init >/dev/null 2>&1
local my_hostname=$(uci -q get system.@system[0].hostname 2>/dev/null || hostname)
local my_addr=$(uci -q get network.lan.ipaddr)
[ -z "$my_addr" ] && my_addr=$(ip -4 addr show br-lan 2>/dev/null | awk '/inet /{split($2,a,"/"); print a[1]; exit}')
# Prepare ZKP proof if available
local zkp_proof_b64=""
local zkp_graph_b64=""
local my_fp=""
if [ -f "$ZKP_IDENTITY_GRAPH" ] && [ -f "$ZKP_IDENTITY_KEY" ] && command -v zkp_prover >/dev/null 2>&1; then
local proof_file="/tmp/zkp_join_proof_$$.proof"
if zkp_prover -g "$ZKP_IDENTITY_GRAPH" -k "$ZKP_IDENTITY_KEY" -o "$proof_file" >/dev/null 2>&1; then
zkp_proof_b64=$(base64 -w 0 "$proof_file")
zkp_graph_b64=$(base64 -w 0 "$ZKP_IDENTITY_GRAPH")
# Use ZKP fingerprint (graph hash) when ZKP is available
my_fp=$(sha256sum "$ZKP_IDENTITY_GRAPH" | cut -c1-16)
rm -f "$proof_file"
logger -t master-link "Generated ZKP proof for join request (zkp_fp=$my_fp)"
else
logger -t master-link "ZKP proof generation failed, joining without ZKP"
rm -f "$proof_file"
my_fp=$(factory_fingerprint 2>/dev/null)
fi
else
logger -t master-link "No ZKP identity or tools, joining without ZKP"
my_fp=$(factory_fingerprint 2>/dev/null)
fi
# Build JSON request body
local body="{\"token\":\"$token\",\"fingerprint\":\"$my_fp\",\"hostname\":\"$my_hostname\",\"address\":\"$my_addr\""
if [ -n "$zkp_proof_b64" ]; then
body="${body},\"zkp_proof\":\"$zkp_proof_b64\",\"zkp_graph\":\"$zkp_graph_b64\""
fi
body="${body}}"
# Send join request
local response=$(curl -s --connect-timeout 10 -X POST \
"http://${master_addr}:${MESH_PORT}/api/master-link/join" \
-H "Content-Type: application/json" \
-d "$body" 2>/dev/null)
if [ -z "$response" ]; then
echo '{"success":false,"error":"connection_failed"}'
return 1
fi
# Check if approved, store upstream
local status=$(echo "$response" | jsonfilter -e '@.status' 2>/dev/null)
local success=$(echo "$response" | jsonfilter -e '@.success' 2>/dev/null)
local zkp_verified=$(echo "$response" | jsonfilter -e '@.zkp_verified' 2>/dev/null)
if [ "$success" = "true" ]; then
if [ "$status" = "approved" ]; then
# Auto-approved - configure as peer
uci -q set master-link.main.role='peer'
uci -q set master-link.main.upstream="$master_addr"
local depth=$(echo "$response" | jsonfilter -e '@.depth' 2>/dev/null)
[ -n "$depth" ] && uci -q set master-link.main.depth="$depth"
uci commit master-link
# Fetch master's ZKP graph for mutual authentication
ml_zkp_trust_peer "$(echo "$response" | jsonfilter -e '@.approved_by' 2>/dev/null || echo 'master')" "$master_addr" >/dev/null 2>&1
logger -t master-link "Joined mesh as peer of $master_addr [zkp=$zkp_verified]"
else
logger -t master-link "Join request pending approval at $master_addr"
fi
fi
echo "$response"
}
# ============================================================================
# Main CLI
# ============================================================================
@ -989,7 +1439,13 @@ case "${1:-}" in
echo "{\"registered\":true,\"token_hash\":\"$token_hash\",\"expires\":$expires}"
;;
join-request)
ml_join_request "$2" "$3" "$4" "$5"
# Usage: join-request <token> <fingerprint> <addr> [hostname] [zkp_proof] [zkp_graph]
ml_join_request "$2" "$3" "$4" "$5" "$6" "$7"
;;
join-with-zkp)
# Join a mesh with ZKP authentication (peer-side command)
# Usage: master-link.sh join-with-zkp <master_ip> <token>
ml_join_with_zkp "$2" "$3"
;;
join-approve)
ml_join_approve "$2"
@ -1019,6 +1475,29 @@ case "${1:-}" in
ml_init
echo "Master-link initialized"
;;
# ZKP commands
zkp-init)
ml_zkp_init
echo "ZKP identity initialized"
;;
zkp-status)
ml_zkp_status
;;
zkp-challenge)
ml_zkp_challenge
;;
zkp-prove)
ml_zkp_prove "$2"
;;
zkp-verify)
ml_zkp_verify "$2" "$3" "$4"
;;
zkp-graph)
ml_zkp_get_graph
;;
zkp-trust-peer)
ml_zkp_trust_peer "$2" "$3"
;;
*)
# Sourced as library - do nothing
:
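The join path above hinges on both sides deriving the same short fingerprint from the public graph (`sha256sum … | cut -c1-16`); a mismatch is logged and the proof discarded. A standalone sketch of that derivation, with a hypothetical `zkp_fp` helper name:

```shell
#!/bin/sh
# Short ZKP fingerprint as used in ml_join_request: the first 16 hex
# characters of the SHA-256 of the public graph file.
zkp_fp() {
    sha256sum "$1" 2>/dev/null | cut -c1-16
}

graph="/tmp/zkp_fp_demo_$$.graph"
printf 'demo public graph\n' > "$graph"
fp=$(zkp_fp "$graph")
echo "fingerprint=$fp length=${#fp}"
rm -f "$graph"
```

Because the fingerprint is derived from the graph bytes themselves, a peer cannot claim someone else's identity without also shipping their graph.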

View File

@ -30,6 +30,10 @@ fingerprint=$(echo "$input" | jsonfilter -e '@.fingerprint' 2>/dev/null)
address=$(echo "$input" | jsonfilter -e '@.address' 2>/dev/null)
peer_hostname=$(echo "$input" | jsonfilter -e '@.hostname' 2>/dev/null)
# ZKP fields (optional unless zkp_require_on_join=1)
zkp_proof=$(echo "$input" | jsonfilter -e '@.zkp_proof' 2>/dev/null)
zkp_graph=$(echo "$input" | jsonfilter -e '@.zkp_graph' 2>/dev/null)
# Use REMOTE_ADDR as fallback for address
[ -z "$address" ] && address="$REMOTE_ADDR"
@ -38,4 +42,4 @@ if [ -z "$token" ] || [ -z "$fingerprint" ]; then
exit 0
fi
ml_join_request "$token" "$fingerprint" "$address" "$peer_hostname"
ml_join_request "$token" "$fingerprint" "$address" "$peer_hostname" "$zkp_proof" "$zkp_graph"
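The flow above is gated by UCI options read in `ml_join_request` (`zkp_enabled`, `zkp_require_on_join`, plus `auto_approve` for the approval step). A sketch of the matching `/etc/config/master-link` stanza — the option names come from the `uci` calls in the diff, while the section type and values are illustrative assumptions:

```
config master-link 'main'
	option zkp_enabled '1'
	option zkp_require_on_join '1'
	option auto_approve '0'
```

With both ZKP options set to `1`, a join without a verified proof is rejected with `{"error":"zkp_required"}`.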

View File

@ -0,0 +1,28 @@
#!/bin/sh
# Master-Link API - ZKP Challenge Generation
# GET /api/master-link/zkp-challenge
# Returns: challenge_id and timestamp for ZKP authentication
echo "Content-Type: application/json"
echo "Access-Control-Allow-Origin: *"
echo "Access-Control-Allow-Methods: GET, OPTIONS"
echo "Access-Control-Allow-Headers: Content-Type"
echo ""
# Handle CORS preflight
if [ "$REQUEST_METHOD" = "OPTIONS" ]; then
exit 0
fi
# Load library
. /usr/lib/secubox/master-link.sh >/dev/null 2>&1
# Check if ZKP is enabled
zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
if [ "$zkp_enabled" != "1" ]; then
echo '{"error":"zkp_disabled"}'
exit 0
fi
# Generate challenge
ml_zkp_challenge

View File

@ -0,0 +1,49 @@
#!/bin/sh
# Master-Link API - ZKP Proof Verification
# POST /api/master-link/zkp-verify
# Body: {"fingerprint": "<peer_fp>", "challenge_id": "<id>", "proof": "<base64>"}
# Returns: {"result": "ACCEPT|REJECT", "verified_at": <timestamp>}
echo "Content-Type: application/json"
echo "Access-Control-Allow-Origin: *"
echo "Access-Control-Allow-Methods: POST, OPTIONS"
echo "Access-Control-Allow-Headers: Content-Type"
echo ""
# Handle CORS preflight
if [ "$REQUEST_METHOD" = "OPTIONS" ]; then
exit 0
fi
# Load library
. /usr/lib/secubox/master-link.sh >/dev/null 2>&1
# Check if ZKP is enabled
zkp_enabled=$(uci -q get master-link.main.zkp_enabled)
if [ "$zkp_enabled" != "1" ]; then
echo '{"error":"zkp_disabled"}'
exit 0
fi
# Only accept POST
if [ "$REQUEST_METHOD" != "POST" ]; then
echo '{"error":"method_not_allowed"}'
exit 0
fi
# Read request body
read -r input
# Parse fields
fingerprint=$(echo "$input" | jsonfilter -e '@.fingerprint' 2>/dev/null)
challenge_id=$(echo "$input" | jsonfilter -e '@.challenge_id' 2>/dev/null)
proof=$(echo "$input" | jsonfilter -e '@.proof' 2>/dev/null)
# Validate required fields
if [ -z "$fingerprint" ] || [ -z "$proof" ]; then
echo '{"error":"missing_required_fields","required":["fingerprint","proof"]}'
exit 0
fi
# Verify proof
ml_zkp_verify "$fingerprint" "$proof" "$challenge_id"
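A peer driving this endpoint from shell has to assemble the JSON body itself; a minimal sketch (the `build_verify_body` helper is hypothetical, but the field names match the ones the CGI parses above):

```shell
#!/bin/sh
# Assemble the POST body for /api/master-link/zkp-verify:
# {"fingerprint": ..., "challenge_id": ..., "proof": <base64>}
build_verify_body() {
    printf '{"fingerprint":"%s","challenge_id":"%s","proof":"%s"}' "$1" "$2" "$3"
}

proof_b64=$(printf 'proofdata' | base64 | tr -d '\n')
build_verify_body "ab12cd34ef56ab78" "ch-42" "$proof_b64"
echo ""
```

The result can then be posted with the same `curl -X POST … -d "$body"` invocation the join flow uses. The `tr -d '\n'` matters: the base64 value must stay on one line or the embedded JSON breaks.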

View File

@ -0,0 +1,21 @@
#!/bin/sh
# ZKP API - Get node's public graph
# GET /api/zkp/graph
# Returns: base64-encoded graph for ZKP authentication
echo "Content-Type: text/plain"
echo "Access-Control-Allow-Origin: *"
echo "Access-Control-Allow-Methods: GET, OPTIONS"
echo "Access-Control-Allow-Headers: Content-Type"
echo ""
# Handle CORS preflight
if [ "$REQUEST_METHOD" = "OPTIONS" ]; then
exit 0
fi
# Load library
. /usr/lib/secubox/master-link.sh >/dev/null 2>&1
# Return public graph (base64 encoded)
ml_zkp_get_graph
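The graph travels base64-encoded (`base64 -w 0` on the sending side in `ml_join_with_zkp`, `base64 -d` on the receiver). A round-trip sketch; the fallback covers base64 builds without `-w`, which some BusyBox configurations lack:

```shell
#!/bin/sh
# Encode a graph file for embedding in JSON, decode it back, and check
# that the bytes survive the round trip.
encode_graph() {
    base64 -w 0 "$1" 2>/dev/null || base64 "$1" | tr -d '\n'
}

g="/tmp/graph_rt_demo_$$"
printf 'node graph bytes\n' > "$g"
enc=$(encode_graph "$g")
printf '%s' "$enc" | base64 -d > "$g.out"
cmp -s "$g" "$g.out" && echo "roundtrip ok"
rm -f "$g" "$g.out"
```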

View File

@ -282,8 +282,8 @@ gossip_forward() {
local msg_path
msg_path=$(echo "$message" | jsonfilter -e '@.path[*]' 2>/dev/null)
# Forward to each peer not in path
while read -r peer_line; do
# Forward to each peer not in path (ash-compatible)
jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null | while read -r peer_line; do
local peer_addr peer_id
peer_addr=$(echo "$peer_line" | jsonfilter -e '@.address' 2>/dev/null)
peer_id=$(echo "$peer_line" | jsonfilter -e '@.id' 2>/dev/null)
@ -302,7 +302,7 @@ gossip_forward() {
--connect-timeout 2 &
_update_stat "forwarded"
done < <(jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null)
done
wait # Wait for all forwards to complete
}
@ -328,8 +328,11 @@ gossip_broadcast() {
return 1
fi
local sent_count=0
while read -r peer_line; do
# Send to all peers (ash-compatible)
local sent_count_file="/tmp/gossip_sent_$$.tmp"
echo "0" > "$sent_count_file"
jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null | while read -r peer_line; do
local peer_addr
peer_addr=$(echo "$peer_line" | jsonfilter -e '@.address' 2>/dev/null)
@ -340,11 +343,14 @@ gossip_broadcast() {
"http://$peer_addr:7332/api/gossip" \
--connect-timeout 2 &
sent_count=$((sent_count + 1))
local cnt=$(cat "$sent_count_file")
echo $((cnt + 1)) > "$sent_count_file"
_update_stat "sent"
done < <(jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null)
done
wait
local sent_count=$(cat "$sent_count_file")
rm -f "$sent_count_file"
logger -t mirrornet "Gossip: broadcast $type to $sent_count peers"
echo "$msg_id"
@ -398,15 +404,22 @@ gossip_process_queue() {
local batch_size
batch_size=$(_get_batch_size)
local count=0
while read -r message; do
# Process queue (ash-compatible)
local count_file="/tmp/gossip_proc_$$.tmp"
echo "0" > "$count_file"
jsonfilter -i "$GOSSIP_QUEUE" -e '@[*]' 2>/dev/null | while read -r message; do
[ -z "$message" ] && continue
gossip_forward "$message"
count=$((count + 1))
local cnt=$(cat "$count_file")
[ "$cnt" -ge "$batch_size" ] && break
[ "$count" -ge "$batch_size" ] && break
done < <(jsonfilter -i "$GOSSIP_QUEUE" -e '@[*]' 2>/dev/null)
gossip_forward "$message"
echo $((cnt + 1)) > "$count_file"
done
local count=$(cat "$count_file")
rm -f "$count_file"
# Clear processed messages
if [ "$count" -gt 0 ]; then
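The common thread in these gossip fixes: BusyBox ash has no `<(…)` process substitution, and piping into `while` runs the loop body in a subshell, so counters must round-trip through a temp file exactly as `sent_count_file` does above. The pattern in isolation (function name illustrative):

```shell
#!/bin/sh
# Count non-empty lines arriving on a pipe under plain ash semantics:
# the while body runs in a subshell, so the counter lives in a temp file.
count_piped() {
    cnt_file="/tmp/count_demo_$$.tmp"
    echo 0 > "$cnt_file"
    printf '%s\n' "$@" | while read -r line; do
        [ -n "$line" ] || continue
        c=$(cat "$cnt_file")
        echo $((c + 1)) > "$cnt_file"
    done
    cat "$cnt_file"
    rm -f "$cnt_file"
}

count_piped alpha beta "" gamma   # prints 3
```

One caveat with this pattern: jobs backgrounded inside the piped loop (like the forwarding `curl … &` calls) are children of the subshell, so a later `wait` in the parent shell returns immediately rather than waiting for them.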

View File

@ -289,16 +289,15 @@ health_check_all_peers() {
echo " \"timestamp\": $(date +%s),"
echo " \"peers\": ["
local first=1
while read -r peer_line; do
# Collect peer check results (ash-compatible)
local tmp_results="/tmp/health_check_$$.tmp"
jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null | while read -r peer_line; do
local peer_id peer_addr
peer_id=$(echo "$peer_line" | jsonfilter -e '@.id' 2>/dev/null)
peer_addr=$(echo "$peer_line" | jsonfilter -e '@.address' 2>/dev/null)
[ -z "$peer_addr" ] && continue
[ "$first" = "1" ] || echo ","
# Run health checks
local ping_result http_result
ping_result=$(health_ping "$peer_addr")
@ -318,20 +317,22 @@ health_check_all_peers() {
combined_status="unhealthy"
fi
echo " {"
echo " \"peer_id\": \"$peer_id\","
echo " \"address\": \"$peer_addr\","
echo " \"status\": \"$combined_status\","
echo " \"ping\": $ping_result,"
echo " \"http\": $http_result"
echo " }"
# Output peer result as single line JSON
echo "{\"peer_id\":\"$peer_id\",\"address\":\"$peer_addr\",\"status\":\"$combined_status\",\"ping\":$ping_result,\"http\":$http_result}"
# Record metrics
local metrics="{\"latency_ms\":$(echo "$ping_result" | jsonfilter -e '@.latency_ms' 2>/dev/null || echo null),\"packet_loss\":$(echo "$ping_result" | jsonfilter -e '@.packet_loss' 2>/dev/null || echo 0),\"http_code\":$(echo "$http_result" | jsonfilter -e '@.http_code' 2>/dev/null || echo 0)}"
health_record_metrics "$peer_id" "$metrics"
done > "$tmp_results"
# Output collected results with proper formatting
local first=1
while read -r result; do
[ "$first" = "1" ] || echo ","
echo " $result"
first=0
done < <(jsonfilter -i "$peers_file" -e '@[*]' 2>/dev/null)
done < "$tmp_results"
rm -f "$tmp_results"
echo " ]"
echo "}"
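The health-check rewrite works in two passes: the piped loop writes one JSON object per line to a temp file, then a parent-shell loop inserts the commas. The comma-joining half on its own, compacted to a single output line here (helper name illustrative):

```shell
#!/bin/sh
# Join one-object-per-line JSON from stdin with commas, mirroring the
# first=1 / echo "," pattern used when emitting the peers array.
join_json() {
    first=1
    while read -r item; do
        [ -n "$item" ] || continue
        [ "$first" = 1 ] || printf ','
        printf '%s' "$item"
        first=0
    done
    echo ""
}

printf '{"a":1}\n{"b":2}\n' | join_json   # prints {"a":1},{"b":2}
```

Since `join_json` reads its own stdin rather than the tail of a pipeline, `first` mutates in the current shell and no temp file is needed for this half.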

View File

@ -240,7 +240,7 @@ identity_list_peers() {
echo "["
local first=1
for peer_file in "$peer_dir"/*.json 2>/dev/null; do
for peer_file in "$peer_dir"/*.json; do
[ -f "$peer_file" ] || continue
[ "$first" = "1" ] || echo ","
cat "$peer_file"
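The identity.sh change is small but easy to misread: `for f in "$peer_dir"/*.json 2>/dev/null` is invalid — a redirection cannot appear in a `for` word list — and with the redirect dropped, an unmatched glob simply stays literal, which the existing `[ -f … ] || continue` guard absorbs. A sketch of the corrected pattern (helper name illustrative):

```shell
#!/bin/sh
# Iterate *.json safely: no redirection in the for list; when the glob
# matches nothing it stays a literal string and the -f guard skips it.
count_json() {
    n=0
    for f in "$1"/*.json; do
        [ -f "$f" ] || continue
        n=$((n + 1))
    done
    echo "$n"
}

d="/tmp/glob_demo_$$"
mkdir -p "$d"
count_json "$d"              # prints 0 (unmatched glob is skipped)
touch "$d/a.json" "$d/b.json"
count_json "$d"              # prints 2
rm -rf "$d"
```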

View File

@ -145,9 +145,10 @@ mirror_check_service() {
# Parse and check each upstream
local first=1
local count=0
local tmp_output="/tmp/mirror_check_$$.tmp"
# Simple line-by-line parsing
while read -r line; do
# Simple line-by-line parsing (ash-compatible)
jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null | while read -r line; do
local address port peer_id
address=$(echo "$line" | jsonfilter -e '@.address' 2>/dev/null)
port=$(echo "$line" | jsonfilter -e '@.port' 2>/dev/null)
@ -158,11 +159,19 @@ mirror_check_service() {
local status
status=$(mirror_check_upstream "$address" "$port")
echo "{\"peer_id\":\"$peer_id\",\"address\":\"$address\",\"port\":$port,\"status\":\"$status\"}"
done > "$tmp_output"
# Output collected results
local count=0
local first=1
while read -r item; do
[ "$first" = "1" ] || echo ","
echo " {\"peer_id\":\"$peer_id\",\"address\":\"$address\",\"port\":$port,\"status\":\"$status\"}"
echo " $item"
first=0
count=$((count + 1))
done < <(jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null)
done < "$tmp_output"
rm -f "$tmp_output"
echo " ],"
echo " \"total\": $count"
@ -178,30 +187,34 @@ mirror_get_best_upstream() {
return 1
fi
# Find highest priority healthy upstream
local best_address=""
local best_port=""
local best_priority=0
# Find highest priority healthy upstream (ash-compatible)
local best_file="/tmp/mirror_best_$$.tmp"
echo "0" > "$best_file"
while read -r line; do
jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null | while read -r line; do
local address port priority status
address=$(echo "$line" | jsonfilter -e '@.address' 2>/dev/null)
port=$(echo "$line" | jsonfilter -e '@.port' 2>/dev/null)
priority=$(echo "$line" | jsonfilter -e '@.priority' 2>/dev/null)
[ -z "$address" ] && continue
[ -z "$priority" ] && priority=50
status=$(mirror_check_upstream "$address" "$port")
if [ "$status" = "ok" ] && [ "$priority" -gt "$best_priority" ]; then
best_address="$address"
best_port="$port"
best_priority="$priority"
if [ "$status" = "ok" ]; then
local current_best=$(cat "$best_file" | cut -d: -f1)
if [ "$priority" -gt "$current_best" ]; then
echo "$priority:$address:$port" > "$best_file"
fi
done < <(jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null)
fi
done
if [ -n "$best_address" ]; then
echo "$best_address:$best_port"
local result=$(cat "$best_file")
rm -f "$best_file"
if [ "$result" != "0" ]; then
echo "$result" | cut -d: -f2-
else
return 1
fi
@ -250,21 +263,31 @@ mirror_generate_haproxy_backend() {
echo " option httpchk GET /health"
echo " http-check expect status 200"
local server_num=1
while read -r line; do
# Generate server lines (ash-compatible)
local tmp_servers="/tmp/mirror_servers_$$.tmp"
jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null | while read -r line; do
local address port priority
address=$(echo "$line" | jsonfilter -e '@.address' 2>/dev/null)
port=$(echo "$line" | jsonfilter -e '@.port' 2>/dev/null)
priority=$(echo "$line" | jsonfilter -e '@.priority' 2>/dev/null)
[ -z "$address" ] && continue
[ -z "$priority" ] && priority=50
local weight=$((priority / 10))
[ "$weight" -lt 1 ] && weight=1
echo " server srv$server_num $address:$port weight $weight check inter 10s fall 3 rise 2"
echo "$address:$port:$weight"
done > "$tmp_servers"
local server_num=1
while read -r srv_line; do
local addr_port=$(echo "$srv_line" | cut -d: -f1-2)
local weight=$(echo "$srv_line" | cut -d: -f3)
echo " server srv$server_num $addr_port weight $weight check inter 10s fall 3 rise 2"
server_num=$((server_num + 1))
done < <(jsonfilter -i "$upstreams_file" -e '@[*]' 2>/dev/null)
done < "$tmp_servers"
rm -f "$tmp_servers"
echo ""
}
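`mirror_get_best_upstream` now reduces to a max-by-priority scan over `priority:address:port` records, again routed through a temp file because the scan runs in a pipe subshell. A standalone sketch of that reduction (helper name illustrative; record format as in the diff):

```shell
#!/bin/sh
# Pick the highest-priority "prio:addr:port" record; the temp file lets
# the winner escape the pipe subshell (ash-compatible).
best_upstream() {
    best_file="/tmp/best_demo_$$.tmp"
    echo "0" > "$best_file"
    printf '%s\n' "$@" | while read -r line; do
        [ -n "$line" ] || continue
        prio=${line%%:*}
        cur=$(cut -d: -f1 "$best_file")
        if [ "$prio" -gt "$cur" ]; then
            echo "$line" > "$best_file"
        fi
    done
    result=$(cat "$best_file")
    rm -f "$best_file"
    if [ "$result" = "0" ]; then
        return 1
    fi
    echo "${result#*:}"   # drop the priority prefix, keep addr:port
}

best_upstream "50:10.0.0.1:7332" "80:10.0.0.2:7332" "10:10.0.0.3:7332"
# prints 10.0.0.2:7332
```

Storing the record as `priority:address:port` keeps the comparison key in field 1, so a single `cut -d: -f1` recovers the current best without any JSON parsing.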