System requirements
This topic describes the minimum system requirements for ASMS hardware, software, and networking. For more details, see also ASMS system architecture.
Note: ASMS performance on VMs can be affected by other, non-AlgoSec machines residing on the same VMware platform. To ensure performance, we recommend working with dedicated resources.
Hardware minimum requirements
Note: Native Linux server is not supported in A32.10.
We recommend that ASMS deployments meet or exceed the following minimum hardware requirements.
Important: If the number of cores changes on Central Managers, Load Distribution nodes, or Remote Agents, see the AlgoPedia article Concurrent Analysis Limit Parameter.
Hardware | Required on standalone systems, Central Managers, HA/DR or Load Distribution nodes | Required on both primary and secondary Remote Agents and AutoDiscovery Remote Agents |
---|---|---|
CPU | 8 cores * | 4 cores |
Memory | 32 GB * | 16 GB |
Storage | 300 GB | 300 GB |
Disk write speed | 80 MB/s ** | 80 MB/s ** |
Network | For details, see Bandwidth requirements for distributed environments on this page. | |
* These minimum requirements suffice for initial demo and testing environments, for example environments with up to 50 simple devices. For details about final sizing calculations for production environments, contact your AlgoSec partner or sales engineer.
** We recommend a disk write speed of at least 300 MB/s; system performance improves as the speed increases.
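As a rough sanity check against the write-speed figures above, a `dd` run with a forced flush gives a ballpark throughput number. This is a minimal sketch, not an AlgoSec-provided procedure; the file path is an example, and you should point it at the volume that will actually hold ASMS data:

```shell
# Ballpark disk write-speed check (example path; use the ASMS data volume).
# conv=fdatasync flushes data to disk before dd reports its throughput, so
# the figure reflects actual disk speed rather than the page cache.
dd if=/dev/zero of=/var/tmp/asms_write_test.bin bs=1M count=256 conv=fdatasync
rm -f /var/tmp/asms_write_test.bin
```

`dd` prints the measured rate on its final status line; compare it against the 80 MB/s minimum and the 300 MB/s recommendation.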
ASMS does not keep all traffic logs; it stores only the usage statistics that enable ASMS to create reports for unused rules, unused objects within rules, and the Intelligent Policy Tuner. The statistics stored in ASMS are calculated and stored specifically for these reports.
Storing statistics instead of actual logs enables ASMS to maintain a longer history than would otherwise be possible, and to make statements such as "Rule 1234 has not been used for 18 months."
Note: Storing statistics instead of actual logs also means that ASMS log storage is not a replacement for the full log repository that customers sometimes need. For example, ASMS will not provide full details for forensic investigations of cyber incidents, or for identifying attacks in real time.
Differences per environment configuration
Hardware requirements differ depending on your environment configuration and type. The main differences and considerations include:
Configuration | Description |
---|---|
NAS storage | If you configure AFA to store all reports on a remote NAS server, this affects where the storage space is needed. For details, see NAS (Network Attached Storage) support. |
HA/DR clusters | Each node in an HA/DR cluster must be identical, including the same type of installation (AlgoSec hardware or VM appliance) and the same amount of disk space. For details, see Manage clusters. |
Distributed architecture | In distributed architecture environments, consider the requirements for the Central Manager and each Remote Agent (geographic distribution) or Load Unit (load distribution). Remote Agents and Load Units do not store reports. For details, see Configure a distributed architecture. |
AWS deployments | If you are deploying on AWS, see Deploy ASMS on AWS. |
Azure deployments | If you are deploying in Azure, see Deploy ASMS on Microsoft Azure. |
Software requirements
ASMS requires the following software, depending on your deployment method:
Deployment | Description |
---|---|
AlgoSec hardware appliances | AlgoSec hardware appliances come pre-installed with all required software. No additional software is needed. |
Virtual appliances | ASMS can be deployed on virtual machines that use VMware ESXi versions 7 and higher, and on AWS. |
Networking requirements and recommendations
This section includes the following information:
- Required port connections
- Bandwidth requirements for distributed environments
- Email and device connectivity requirements
- AFA server DNS name / IP address recommendations
- Security certificate recommendations
For more details, see Manage clusters.
Required port connections
Deploying ASMS requires the following port connectivity between nodes:
Type | Port | Central Manager <> Load Unit | Central Manager <> Remote Agent | Load Unit <> Load Unit | HA | DR | Central Manager (or Standalone Server) Migration |
---|---|---|---|---|---|---|---|
ICMP | | ✔ | ✔ | ✖ | ✔ | ✔ | ✔ |
SSH | TCP/22 | ✔ | ✔ | ✖ | ✔ | ✔ | ✔ |
HTTPS | TCP/443 | ✔ | ✔ | ✖ | ✔ | ✔ | ✔ |
syslog | UDP/514 | ✖ | ✖ | ✖ | ✔* | ✖ | ✖ |
hazelcast | TCP/5701 | ✔ | ✖ | ✔ | ✔ | ✖ | ✖ |
activemq | TCP/61616 | ✔ | ✖ | ✖ | ✔ | ✖ | ✖ |
postgresql | TCP/5432 | ✔ | ✖ | ✖ | ✔ | ✔ | ✔ |
postgresql additional port | TCP/5433 | ✖ | ✖ | ✖ | ✔ | ✖ | ✖ |
HA/DR | TCP/9595 | ✖ | ✖ | ✖ | ✔ | ✔ | ✔ |
AAD Log Sensor | TCP/9645 | ✖ | ✔ | ✖ | ✖ | ✖ | ✔ |
AAD network sensor | TCP/9545 | ✖ | ✔ | ✖ | ✖ | ✖ | ✔ |
*UDP/514 is required for traffic logs and/or connectivity to ART from the HA node.
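Before deployment, the TCP entries in the port matrix above can be spot-checked from each node. The sketch below assumes a bash shell on the node; the peer address is a hypothetical example, so substitute your own node IPs and the ports relevant to your topology:

```shell
# Probe TCP reachability to a peer ASMS node using bash's /dev/tcp redirection.
# PEER is a hypothetical address -- replace it with your actual node.
check_tcp() {  # usage: check_tcp <host> <port>; returns 0 if the port answers
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

PEER=10.0.0.12   # example: secondary HA node
for port in 22 443 5432 5433 9595; do   # HA column of the table above
  if check_tcp "$PEER" "$port"; then
    echo "TCP/$port reachable"
  else
    echo "TCP/$port blocked or closed"
  fi
done
```

ICMP and UDP/514 cannot be probed this way; use ping and a test syslog message for those.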
For migration, ensure that the target machine has connectivity to any external servers that are defined on the source machine, as follows:
Type | Port |
---|---|
Mail server | TCP/25 (or customer-defined port) |
NAS server | TCP/2049 |
Bandwidth requirements for distributed environments
Distributed environments must work with the following minimum bandwidths between nodes:
Connection | Minimum bandwidth |
---|---|
Central Manager and load distribution agents | 1 Gbit/s |
Between High Availability nodes | 1 Gbit/s |
Central Manager and geographic distribution agents | 100 Mbit/s |
Between Disaster Recovery nodes | 100 Mbit/s |
Tip: The faster your network speed, the sooner your clusters are fully synchronized.
Email and device connectivity requirements
Enable the following connectivity for AFA and FireFlow:
Requirement | Description |
---|---|
Email address | Define an email address to be used by AFA and FireFlow, such as [email protected], on a mail server that supports SMTP and POP3/IMAP4. Alternatively, emails can be forwarded to AFA and FireFlow acting as an MTA (message transfer agent). |
Email access | Enable access from AFA and FireFlow to the mail server via SMTP and POP3/IMAP4. |
Device access | Enable access from the Central Manager, any high availability secondary nodes, and Remote Agents to devices via SSH, OPSEC, REST, or SNMP (as needed). |
Configuring this connectivity includes setting the necessary passwords for FireFlow.
AFA server DNS name / IP address recommendations
The AFA server must have a fixed DNS name or IP address that can be used to access the AFA user interface.
We recommend that you do not configure the server to obtain its IP address automatically via DHCP.
Security certificate recommendations
To prevent warnings from appearing about security certificates, install a certificate signed by a CA instead of a self-signed certificate.
For more details, see the AlgoPedia KB article How to Install and Generate an SSL key and Certificate Signing Request (CSR).
Note: AlgoSec recommends using a 2048-bit certificate instead of the 1024-bit certificate recommended by the CentOS documentation.
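For example, a CA-ready 2048-bit key and CSR can be generated with OpenSSL. This is a generic sketch, not an AlgoSec-specific procedure; the file paths and subject fields are placeholders to adjust for your environment before submitting the CSR to your CA:

```shell
# Generate a 2048-bit RSA private key and a Certificate Signing Request (CSR).
# Paths and the -subj fields below are examples only.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout /tmp/afa_server.key -out /tmp/afa_server.csr \
  -subj "/C=US/O=Example Corp/CN=afa.example.com"

openssl req -in /tmp/afa_server.csr -noout -verify   # sanity-check the CSR
```

The CN should match the fixed DNS name recommended for the AFA server above, so the signed certificate matches the address users browse to.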
Supported deployments per architecture structure
Note: Native Linux server is not supported in A32.00 and above.
The following table lists the supported deployment models for each architecture structure.
Deployment | Central Manager/ Standalone ASMS | High Availability | Disaster Recovery | Load Distribution | Geographic Distribution (Including AutoDiscovery Server) | NAS |
---|---|---|---|---|---|---|
AlgoSec Physical Appliance (2XX3 series)* | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Virtual Appliance (VMware)** | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
ASMS on AWS (AMI) | ✔ | ✖ | ✔ | ✔ *** | ✔ | ✖ |
ASMS on Azure | ✔ | ✖ | ✖ | ✖ | ✖ | ✖ |
* Only AlgoSec Hardware Appliances 2203 and 2403 are compatible with A32.00 and above.
- AlgoSec Hardware Appliance 2063 – EOL by Oct. 2023: Not supported. Contact your AlgoSec sales representative to discuss options.
- 2XX2 Hardware Appliances – EOL by Oct. 2021: Not supported. Contact your AlgoSec sales representative to discuss options.
- 2XX1 Hardware Appliances – EOL: Not supported. Contact your AlgoSec sales representative to discuss options.
**vMotion is not compatible with AlgoSec.
*** When deployed on AWS, any Load Units must also be located in AWS, in the same subnet as the Central Manager.