The Problem#
Nobody tells you when someone’s scanning your SSH. Nobody tells you when a container you forgot about is accepting connections from the internet. You find out when you go looking – if you go looking. Most people don’t.
If you run self-hosted infrastructure – a Proxmox cluster, NAS, media stack, automation services – you have attack surface. And unlike a managed cloud service, there’s no SOC watching the logs for you. You need something to detect anomalies and something to block hostile traffic automatically. These are two different problems, and they’re best solved by two different tools.
Solution Overview#
The architecture runs two security tools on a single dedicated container (security-node), each handling a distinct responsibility:
CrowdSec acts as the reactive enforcement layer. It ingests logs, matches them against community-maintained detection scenarios, and pushes ban decisions to a firewall bouncer on the Proxmox hypervisor. It also participates in a global crowd-sourced threat intelligence network – if an IP is bruteforcing SSH across thousands of CrowdSec installations worldwide, your instance knows about it before that IP ever reaches your services.
Wazuh acts as the deep analysis layer. Agents installed on every container and the hypervisor report system events, authentication logs, file integrity changes, and kernel events to a central manager. It’s a proper SIEM with alerting, dashboards, and historical data.
Together: CrowdSec blocks at the network edge. Wazuh sees everything that happens on the hosts.
Prerequisites#
- Proxmox VE (or any Linux hypervisor/host environment)
- A dedicated container or VM for the security stack (minimum 4 GB RAM for the full Wazuh stack with indexer; 2 GB is possible without the integrated indexer)
- Docker for running CrowdSec and Wazuh Manager
- Ansible for agent deployment (optional but strongly recommended)
- Network segmentation (VLANs) – you want the security node to see traffic from all segments
Implementation#
Architecture#
The entire security layer runs on a single Debian container:
```mermaid
graph TD
    SN["Security Node (Debian CT)"]
    SN --- CS["CrowdSec Core + LAPI"]
    SN --- WM["Wazuh Manager"]
    SN --- GR["Grafana + CrowdSec plugin"]
    SN --- RS["rsyslog ingestion"]
    CS -->|ban decisions| PVE["PVE Host — firewall bouncer"]
    WM -->|agent reporting| CTs["All CTs — wazuh-agent"]
```
CrowdSec’s Local API (LAPI) listens on port 8080 by default, accepting log data and serving ban decisions to bouncers. The firewall bouncer runs on the Proxmox host itself, enforcing bans at the iptables/ipset level – the network edge of the entire infrastructure.
Wazuh agents run on every container and the hypervisor, reporting back to the Wazuh Manager on port 1514. The config below pins the transport to TCP; whatever you choose has to match the <protocol> setting in the <remote> block of the manager's ossec.conf. For systems that can’t run an agent (like TrueNAS), syslog forwarding to port 514 on the security node provides coverage.
CrowdSec: Community-Driven Detection#
CrowdSec’s value proposition is its detection engine combined with crowd-sourced intelligence. The LAPI configuration binds to the security node’s internal interface – don’t use 0.0.0.0 unless network-level firewall rules restrict access to this port:
```yaml
api:
  server:
    enable: true
    listen_uri: 192.168.6.10:8080
```

The real power comes from scenarios (detection patterns) and the Community API (CAPI). When your CrowdSec instance detects an attack pattern, it reports the source IP to the global network. When other installations detect the same IP, your instance receives preemptive ban decisions. It’s collective immunity for infrastructure.
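Scenarios are pulled from the CrowdSec Hub with cscli. As a sketch, a reasonable baseline for a setup like this (collection names are from the Hub; install only what matches services you actually expose):

```shell
# Install detection collections from the CrowdSec Hub
cscli collections install crowdsecurity/linux
cscli collections install crowdsecurity/sshd

# Keep the hub index and installed items current
cscli hub update && cscli hub upgrade

# Inspect active bans and engine metrics
cscli decisions list
cscli metrics
```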
The firewall bouncer on the Proxmox host translates ban decisions into iptables rules via ipset. Banned IPs are dropped at the hypervisor level, meaning they never reach any container or VM.
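On the Proxmox host, the bouncer’s own config points it at the security node’s LAPI. A minimal sketch of /etc/crowdsec/bouncers/crowdsec-firewall-bouncer.yaml, where the API key is whatever `cscli bouncers add` printed when you registered the bouncer (the bouncer name here is illustrative):

```yaml
mode: iptables          # iptables rules backed by ipset
update_frequency: 10s   # how often to poll LAPI for new decisions
api_url: http://192.168.6.10:8080/
api_key: <key from `cscli bouncers add pve-firewall-bouncer`>
deny_action: DROP
deny_log: true
```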
CrowdSec also has an Application Security Engine (AppSec, GA since v1.6) that provides WAF-like request inspection for web-facing services. That’s out of scope for this post, but worth exploring if you expose web apps.
Wazuh: Comprehensive Host Monitoring#
While CrowdSec handles reactive enforcement, Wazuh provides visibility into what’s happening on every host. Agent deployment is automated via Ansible:
```shell
ansible-playbook -i inventory/hosts.yml playbooks/install-wazuh-agent.yml
```

Each agent monitors SSH authentication, systemd events, kernel messages, and (optionally) file integrity. The Wazuh Manager correlates events across all agents, providing a unified view of security events across the entire infrastructure.
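The playbook itself isn’t published, but a minimal sketch of what such a play can look like follows. It uses Wazuh’s official apt repository and the WAZUH_MANAGER deployment variable, which the package reads at install time; the manager hostname is this network’s security node:

```yaml
- hosts: all
  become: true
  tasks:
    - name: Add the Wazuh apt signing key
      ansible.builtin.apt_key:
        url: https://packages.wazuh.com/key/GPG-KEY-WAZUH
        state: present

    - name: Add the Wazuh apt repository
      ansible.builtin.apt_repository:
        repo: "deb https://packages.wazuh.com/4.x/apt/ stable main"
        state: present

    - name: Install wazuh-agent pointed at the manager
      ansible.builtin.apt:
        name: wazuh-agent
        state: present
        update_cache: true
      environment:
        WAZUH_MANAGER: "security-node"

    - name: Enable and start the agent
      ansible.builtin.systemd:
        name: wazuh-agent
        state: started
        enabled: true
```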
Agents report to the manager with a simple configuration. Note: if you use TCP (as below), the <remote> block in the manager’s ossec.conf needs a matching <protocol>tcp</protocol>:
```xml
<server>
  <address>security-node</address>
  <port>1514</port>
  <protocol>tcp</protocol>
</server>
```

For TrueNAS (which doesn’t support agents), syslog forwarding fills the gap. rsyslog on the security node receives TrueNAS logs on UDP 514, routes them to a dedicated log file, and both CrowdSec and Wazuh ingest them from there.
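The rsyslog side of that is a small drop-in on the security node. A sketch, with the drop-in filename and log path as assumptions:

```
# /etc/rsyslog.d/10-truenas.conf — receive TrueNAS syslog on UDP 514
module(load="imudp")
input(type="imudp" port="514" ruleset="truenas")

ruleset(name="truenas") {
    action(type="omfile" file="/var/log/truenas/truenas.log")
    stop
}
```

CrowdSec then needs an acquisition entry in acquis.yaml pointing at the same file so both tools read the stream.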
How They Complement Each Other#
The two tools have distinct strengths and minimal overlap:
| Capability | CrowdSec | Wazuh |
|---|---|---|
| Brute force detection | Yes (primary) | Yes (alerting) |
| Automated IP banning | Yes | No |
| Global threat intelligence | Yes (crowd-sourced) | No |
| WAF / request inspection | Yes (AppSec engine) | No |
| File integrity monitoring | No | Yes |
| Host-level process auditing | No | Yes |
| Rootkit detection | No | Yes |
| Historical event analysis | Limited | Yes |
| Dashboard / SIEM UI | Grafana or CrowdSec Console | Full SIEM dashboard |
CrowdSec answers: “Is someone attacking my infrastructure right now? Block them.” Wazuh answers: “What happened on host X last Tuesday at 3 AM? Show me every event.”
This maps directly to enterprise architecture: CrowdSec fills the IPS role (Palo Alto, Cisco Firepower), Wazuh fills the SIEM/EDR role (Sentinel, Splunk). The difference is cost and scale, not architecture.
VLAN Awareness#
The security node sits on the core VLAN (192.168.6.0/24) alongside all infrastructure services. VLAN segmentation ensures that guest devices and IoT traffic are isolated from the security stack. The firewall bouncer on the Proxmox host enforces bans across all VLANs because it operates at the hypervisor’s network boundary.
This is important. If your security monitoring only covers one network segment, you have blind spots. The architecture ensures that even syslog from devices on other VLANs (routed through the gateway) reaches the security node.
Security Considerations#
The security node is the highest-value target in the infrastructure. If someone pops it, they can disable banning and suppress alerts – game over. Lock it down: SSH key-only, no root login, firewall rules limiting who can reach the LAPI (8080) and the Wazuh Manager (1514) to trusted sources only. If you’re running on Proxmox, make it an unprivileged container.
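As a sketch with plain iptables on the security node (the bouncer host address 192.168.6.2 is an assumption; substitute your PVE host’s IP and your trusted ranges):

```shell
# LAPI (8080): only the PVE host running the bouncer may connect
iptables -A INPUT -p tcp --dport 8080 -s 192.168.6.2 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

# Wazuh agent traffic (1514): only the core VLAN
iptables -A INPUT -p tcp --dport 1514 -s 192.168.6.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 1514 -j DROP
```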
CrowdSec’s CAPI shares anonymized attack data (source IPs and scenario triggers) with the community. That’s how the crowd-sourced intelligence works – you contribute, you benefit. Review their data sharing policy if that’s a concern for your environment.
API keys between CrowdSec LAPI and bouncers are sensitive. A leaked bouncer key can query and manipulate ban decisions. Rotate them if you suspect exposure.
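Rotation is two cscli commands on the LAPI side, then updating the bouncer’s config with the new key (the bouncer name here is illustrative):

```shell
cscli bouncers delete pve-firewall-bouncer
cscli bouncers add pve-firewall-bouncer   # prints a fresh API key
# paste the new key into crowdsec-firewall-bouncer.yaml, then:
systemctl restart crowdsec-firewall-bouncer
```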
Wazuh Manager stores event data with full detail – usernames, IPs, commands. Set a retention policy. For a homelab, 90 days is reasonable. Don’t keep everything forever just because you can.
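Manager-side archive logs don’t expire on their own. A blunt but effective sketch is a daily cron job that prunes anything past the retention window – the path below is Wazuh’s default archive location, so verify it on your install:

```shell
# prune_archives DIR DAYS — delete files under DIR older than DAYS days
prune_archives() {
    find "$1" -type f -mtime "+$2" -print -delete
}

# e.g. from cron: prune_archives /var/ossec/logs/archives 90
```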
GitHub#
The repo is private – it contains host-specific configs, API keys, and network details I’m not publishing. Everything architectural is covered above. The CrowdSec and Wazuh configurations are designed to be adapted, not copied verbatim. Adjust the LAPI bind address, agent endpoints, and VLAN ranges to match your environment.