As a rule of thumb, companies with monitoring needs of this scale typically also have the infrastructure to host PRTG themselves; accordingly, the overwhelming majority of licenses are on-premise.
For customers with highly distributed infrastructure, or few to no on-premise servers, Paessler offers PRTG Hosted Monitor as a cloud-hosted option. It comes with a caveat, however: a maximum of 10,000 sensors, and therefore only a single PRTG instance.
A successful large PRTG deployment starts with a thorough understanding of the environment and monitoring goals, and PRTG’s capabilities and limitations. Dreadnought was the first company in the US to achieve Paessler Certified Implementation Engineer status, and is uniquely positioned to help ensure a smooth, scalable, and effective PRTG deployment.
For those handling implementation in-house, we strongly suggest reading:
Planning large installations of PRTG Network Monitor.
In general, Paessler recommends a maximum of 10,000 sensors per core, or half that number for 2-core failover clusters. That number can vary substantially depending on the hardware hosting the installation, and the sensor composition. Effective use of remote probes can help alleviate the bottlenecks of highly resource-intensive sensors, dramatically improving scalability. For environments with more than 10,000 sensors, it often makes sense to deploy more than one core, so PRTG Enterprise Monitor may be a better fit.
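As a back-of-the-envelope illustration of those limits, the sketch below estimates how many core installations a given sensor count implies. The 10,000-sensors-per-core figure and the halved capacity for two-core failover clusters come from Paessler's guidance quoted above; the function itself and its name are our own planning aid, not a PRTG tool.

```python
# Rough capacity estimate based on Paessler's published guidance:
# at most 10,000 sensors per core server, halved for a 2-core failover cluster.
from math import ceil

PER_CORE_LIMIT = 10_000

def cores_needed(total_sensors: int, failover_cluster: bool = False) -> int:
    """Estimate how many core installations (or failover clusters) are needed.

    This is a planning sketch only; real capacity varies substantially with
    the hardware hosting the installation and the sensor composition.
    """
    effective_limit = PER_CORE_LIMIT // 2 if failover_cluster else PER_CORE_LIMIT
    return ceil(total_sensors / effective_limit)

print(cores_needed(12_000))                         # 2 standalone cores
print(cores_needed(12_000, failover_cluster=True))  # 3 clusters at 5,000 each
```

Once this arithmetic calls for more than one core, that is typically the point at which PRTG Enterprise Monitor enters the conversation.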
Once your cores are set up and devices are onboarded, the next challenge is to keep the large environment from becoming overwhelming. Here are a few areas we recommend spending time on to maximize the value of a PRTG installation:
New PRTG users tend to gravitate towards WMI sensors for a few reasons, the most common being native Windows support and the ease of deployment afforded by Active Directory credentials. That usually works out fine for overloaded IT staff at the SMB scale, but in larger deployments the performance penalties of WMI start to add up. Paessler recommends a maximum of 200 WMI sensors per probe, whereas cores running exclusively ping and SNMP sensors can scale well beyond the recommended total sensor count per core without performance issues. Fortunately, SNMP provides very similar monitoring capabilities on Windows servers, especially for fundamental resources such as CPU, disk, memory, and network. The lists below outline the major differences between the two, in both implementation and ongoing monitoring:
SNMP Pros:
- More scalable (much greater parallelization than WMI)
- Reduces remote probe count (and therefore VM resource consumption)
- Consistent approach across Windows and non-Windows monitoring

WMI Pros:
- Native to Windows (simple deployment)
- Active Directory authentication
- WMI multi-disk sensor reduces sensor count (and license costs)
- More detailed statistics for processes

SNMP Cons:
- Requires additional configuration on Windows hosts
- No SNMP multi-disk sensor (increases sensor count)
- Process monitoring on Windows reports only running/not running, with no CPU or memory usage

WMI Cons:
- Resource-heavy for the monitored device, probes, and cores
- Low limit (200) on sensors per probe
- Higher network load
- Requires special permissions or scripting for process monitoring
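The 200-WMI-sensors-per-probe recommendation above is usually what drives probe count in WMI-heavy environments. A minimal sketch, assuming that recommendation as a hard planning ceiling (the function name and structure are ours, not Paessler's):

```python
# Minimal sketch: estimate the remote-probe count driven by the WMI bottleneck.
# Paessler's recommendation cited above: at most ~200 WMI sensors per probe.
# SNMP and ping sensors parallelize far better and rarely set the probe count.
from math import ceil

WMI_SENSORS_PER_PROBE = 200

def min_probes_for_wmi(wmi_sensor_count: int) -> int:
    """Lower bound on probes needed to stay under the WMI recommendation."""
    if wmi_sensor_count <= 0:
        return 0
    return ceil(wmi_sensor_count / WMI_SENSORS_PER_PROBE)

# A 1,000-WMI-sensor estate needs at least 5 probes just for WMI load;
# converting most of those checks to SNMP could collapse this toward 1.
print(min_probes_for_wmi(1_000))  # 5
```

This is why swapping WMI sensors for SNMP equivalents pays off twice: fewer probes to maintain, and less VM resource consumption per probe.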
When discussing SNMP, it's worth noting that SNMPv2 is far and away the most efficient protocol behind any meaningful sensor in PRTG (excluding perhaps ICMP ping). SNMPv3 adds encryption, increasing the CPU overhead of each query on both the probe and the device, and reducing overall scalability in the process. But no matter which version of SNMP is employed, the performance impact is still lower than that of WMI.
Of course, it's entirely possible to mix and match protocols, even on the same device. For example, a single Windows server could use SNMP for CPU, disk, and process sensors but WMI to check update status, which would still be far more scalable than querying all of those resources via WMI exclusively.
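To make the mix-and-match idea concrete, here is an illustrative per-device protocol plan echoing that example. The target names and protocol labels are ours for illustration, not exact PRTG sensor-type names:

```python
# Illustrative only: a per-device protocol plan for a single Windows server,
# echoing the mix-and-match example above. Keys are monitoring targets,
# values the protocol we'd assign; names are ours, not PRTG sensor types.
windows_server_plan = {
    "cpu": "snmp",
    "disk": "snmp",
    "process": "snmp",
    "windows_update_status": "wmi",  # no SNMP equivalent, so fall back to WMI
}

wmi_count = sum(1 for proto in windows_server_plan.values() if proto == "wmi")
print(f"WMI sensors on this device: {wmi_count}")  # 1 of 4
```

Kept to this pattern across a fleet, the expensive protocol is confined to the handful of checks that genuinely require it.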
Our recommendation is to use SNMP wherever possible, and fall back to WMI as needed to meet specific monitoring requirements that SNMP cannot.