AUTARCH v1.9 — remote monitoring, SSH manager, daemon, vault, cleanup
- Add Remote Monitoring Station with PIAP device profile system
- Add SSH/SSHD manager with fail2ban integration
- Add privileged daemon architecture for safe root operations
- Add encrypted vault, HAL memory, HAL auto-analyst
- Add network security suite, module creator, codex training
- Add start.sh launcher script and GTK3 desktop launcher
- Remove Output/ build artifacts, installer files, loose docs
- Update .gitignore for runtime data and build artifacts
- Update README for v1.9 with new launch method, screenshots, and features

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -3,9 +3,11 @@ flask>=3.0
bcrypt>=4.0
requests>=2.31
msgpack>=1.0
cryptography>=41.0
PyCryptodome>=3.19

# OSINT & Networking
# (nmap, tcpdump, tshark are system packages)
# System packages needed: nmap, tcpdump, tshark, whois, dig (dnsutils)

# Hardware / Serial
pyserial>=3.5
@@ -13,16 +15,19 @@ esptool>=4.0

# Packet Analysis
pyshark>=0.6
scapy>=2.5

# UPnP
# (miniupnpc is a system package, provides upnpc CLI)
# Discovery
zeroconf>=0.131

# Reports
# Reports & QR
qrcode>=7.0
Pillow>=10.0

# ── Optional LLM Backends ──────────────────────────────
# Uncomment the backend(s) you want to use:
# MCP (Model Context Protocol)
mcp>=1.0

# ── LLM Backends ──────────────────────────────────────────

# Local GGUF models (CPU-friendly):
llama-cpp-python>=0.3.16
@@ -30,10 +35,19 @@ llama-cpp-python>=0.3.16
# CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --no-cache-dir

# HuggingFace SafeTensors models (GPU-recommended):
# transformers>=4.35
# torch>=2.1
# accelerate>=0.25
transformers>=4.35
accelerate>=0.25
bitsandbytes>=0.41 # for 4-bit/8-bit quantization (Linux/CUDA only; skip on Windows if unavailable)
# torch>=2.1 # Install manually: https://pytorch.org/get-started/locally/

# Anthropic Claude API:
# anthropic>=0.40
anthropic>=0.40

# OpenAI API:
openai>=1.0

# HuggingFace Inference API:
huggingface-hub>=0.20

# ── Knowledge System ──────────────────────────────────────
numpy>=1.24
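Since several of the LLM backends above are optional, application code typically probes for them at import time rather than failing hard when one is missing. A minimal sketch of that pattern (the `available_backends` helper and the package-to-module map are illustrative, not part of AUTARCH; note that `llama-cpp-python` installs as the `llama_cpp` module and `huggingface-hub` as `huggingface_hub`):

```python
import importlib

# Import-name map for the optional backends listed in requirements.txt.
# PyPI package names and importable module names differ for two of them.
OPTIONAL_BACKENDS = {
    "llama-cpp-python": "llama_cpp",
    "transformers": "transformers",
    "anthropic": "anthropic",
    "openai": "openai",
    "huggingface-hub": "huggingface_hub",
    "mcp": "mcp",
}


def available_backends():
    """Return the package names whose backend module imports cleanly."""
    found = []
    for package, module in OPTIONAL_BACKENDS.items():
        try:
            importlib.import_module(module)
            found.append(package)
        except ImportError:
            continue
    return found


if __name__ == "__main__":
    print("LLM backends available:", available_backends() or "none")
```

An app can then enable only the chat providers whose packages actually installed, instead of crashing on the first missing import.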