Networking Overview
The appliance is intended to allow administrators and users (depending on permissions) to connect in from outside of the cluster to access configuration, monitoring, and alerting. It also connects into the rest of the cluster to push configurations, manage the cluster's DHCP leases (if needed), monitor for alerts, and track and manage node and cluster statistics. To do this, the appliance typically needs connectivity in from an institution's internal network, connectivity to the rest of the cluster, and serial connectivity to the cluster via IPMI.
Standard settings and configurations are described below, but many can be adjusted and altered according to the customer's needs. If you have any questions about which defaults can be altered, contact ACT support at support@advancedclustering.com.
External Connectivity
With command-line and web UI interfaces for cluster management and monitoring, a connection external to the compute cluster is needed to join the CV appliance to an institution's internal network so that users and administrators can access it.
Due to security requirements, some institutional policies may not allow a connection from the institution's network into the CV appliance. A direct connection into the appliance is not required, but it does make monitoring and configuration easier.
If an external connection cannot be provided, the appliance will still perform all of its functions as normal.
A typical configuration requires an IP address that can be assigned to the node. A DNS name can be helpful for access by domain name, but is not required.
Because the appliance is accessible from outside of the cluster, it is configured with security measures in place. The web UI is accessible via HTTPS (configured with a self-signed certificate by default, though an institution's internal certificate can be used instead), and the appliance runs a firewall underneath, which only allows in essential services:
Zone: ‘public’
Target: default
Allowed Services:
cockpit
dhcpv6-client
ssh
Forwarding: yes
Zone: ‘external’
Target: default
Allowed Services:
cockpit
http
https
ssh
Forwarding: no
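The zone settings above can be inspected and adjusted with firewalld's standard tooling. A minimal sketch, assuming firewalld is the firewall in use (the service name added below is illustrative only, adjust to your environment):

```shell
# Show which zones are active and the full rule set for the external zone
firewall-cmd --get-active-zones
firewall-cmd --zone=external --list-all

# Example: permanently allow an additional service through the
# external zone, then reload so the change takes effect
firewall-cmd --zone=external --add-service=snmp --permanent
firewall-cmd --reload
```

Changes made without --permanent apply immediately but are lost on reload, which is useful for testing a rule before committing to it.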
Internal Connectivity
To manage and monitor the cluster, the appliance has a 1 Gb connection to a managed switch (usually placed at the top of the rack above the appliance) that the rest of the nodes in the cluster are also connected to.
Do not connect an uplink cable from this switch into your organization's internal network. The appliance runs a DHCP server (see below for DHCP server info) and would begin assigning IPs on the organization's internal network, which will cause issues and make your network admins quite unhappy.
The default IP address scheme for this interface is 10.1.1.0/24. This can be altered to fit your IP address schema if required or desired.
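To illustrate what serving leases on this internal network involves, below is a sketch of an ISC dhcpd-style configuration fragment for the default 10.1.1.0/24 scheme. All addresses, the pool range, the hostname, and the MAC address are example values, and the appliance normally manages its DHCP configuration for you:

```shell
# Illustrative dhcpd.conf fragment for the internal cluster network
subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.100 10.1.1.200;   # dynamic pool (example values)
  option routers 10.1.1.1;       # appliance's internal IP (example)
}

# A fixed lease for a compute node, keyed on its MAC address
host node001 {
  hardware ethernet 00:11:22:33:44:55;  # example MAC
  fixed-address 10.1.1.11;
}
```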
All traffic through this interface runs through the “trusted” firewall zone:
Zone: ‘trusted’
Target: ACCEPT
Ports:
22/tcp
443/tcp
Forwarding: no
IPMI
The appliance gathers BMC information and stats and utilizes a serial connection/console using IPMI on either a dedicated or shared interface on its own VLAN.
Several tools within the ClusterVisor toolkit gather information for monitoring and administration, such as power status, hardware statistics (temperature, usage, speeds, etc.), and hardware failures.
ClusterVisor also utilizes IPMI for administration and troubleshooting purposes, such as connecting to a node via serial console when SSH is not functioning, connecting to a node's IPMI console, and powering a node on or off.
Example tools:
cv-console
cv-ipmitool
cv-power
cv-sel
cv-stats
The default IPMI IP scheme is 10.1.2.0/24 (as mentioned above, this can be altered to fit your IP addressing requirements), and all traffic is tagged on VLAN 2.
Additional serial settings are set according to node manufacturer specifications.
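The kind of information the appliance gathers over IPMI can also be queried by hand with the standard ipmitool utility. A minimal sketch, where the BMC address, username, and password are all illustrative placeholders for your environment:

```shell
# Query a node's BMC over the IPMI VLAN (address/credentials are examples)
ipmitool -I lanplus -H 10.1.2.11 -U admin -P secret power status
ipmitool -I lanplus -H 10.1.2.11 -U admin -P secret sdr type Temperature
ipmitool -I lanplus -H 10.1.2.11 -U admin -P secret sel list

# Attach to the node's serial-over-LAN console (exit with "~.")
ipmitool -I lanplus -H 10.1.2.11 -U admin -P secret sol activate
```

The cv-* tools listed above wrap this kind of access with cluster-aware node names, so the raw ipmitool invocations are mainly useful for spot-checking a single BMC.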