Configuring High Availability
Background
A cluster is two Quantum Spark Appliances connected to each other for High Availability. The cluster maintains connections in the organization's network when there is a failure in one of the Cluster Members, and so provides redundancy.
In the Device view > Advanced section > High Availability page you can create a cluster of two appliances for high availability.
After you configure a cluster, you can Enable or Disable it.
Limitations
-
You cannot create a cluster when you have a switch defined in the network settings on the appliance. If necessary, change network settings in the Device > Local Network page.
Starting from R81.10.15, cluster in Bridge Mode is supported.
-
In versions R81.10.10 and lower, you cannot configure a cluster of Quantum Spark Appliances when the Internet connection is a Bond interface.
-
Cluster requires Static IP addresses on the physical cluster interfaces.
-
Cluster does not support pure IPv6 addresses on cluster interfaces (you must also configure IPv4 addresses).
-
All cluster configuration is done through the Active Cluster Member. The WebUI of the Standby Cluster Member only has some options available for fine tuning - basic network settings and logs (a cluster managed by the Quantum Spark Portal also shows Cloud Services).
Prerequisites
-
In WebUI > Device > Local Network, delete switch configurations before you start to configure a cluster.
-
The appliances in a cluster must have the same hardware, firmware (version and build), and licenses.
Note - Connect the sync cables only after you complete the First Time Configuration Wizard and remove the switch on both appliances. No additional configuration is required on the members.
Best Practice - Designate the same LAN port for the Sync interface on both appliances. The default Sync interface is LAN2/SYNC. For appliance models 1600, 1800, 1900, and 2000, we recommend that you configure a bond of two interfaces for synchronization.
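Before you connect the sync cables, it can help to confirm the matching requirement above. A minimal, illustrative Python sketch - the model, firmware version, and build values are assumptions that you copy manually from each appliance's WebUI, not output of any documented command:

# Illustrative only - fill in the values shown in each appliance's WebUI.
member_a = {"model": "1800", "firmware_version": "R81.10.15", "firmware_build": "example-build"}
member_b = {"model": "1800", "firmware_version": "R81.10.15", "firmware_build": "example-build"}

def can_form_cluster(a, b):
    # The appliances must match on hardware model, firmware version, and firmware build.
    return all(a[key] == b[key] for key in ("model", "firmware_version", "firmware_build"))

print(can_form_cluster(member_a, member_b))  # True only when everything matches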
Configuration Workflow
-
Complete the First Time Configuration Wizard on both appliances.
In the Local Network page of the wizard, clear the checkbox Enable switch on LAN ports.
-
Configure network settings on the appliance that is the primary Cluster Member.
-
Connect cables between the Sync interfaces on the appliances.
Note - Sync ports can also be connected through a switch.
-
Configure the primary Cluster Member.
Procedure
-
Connect to the WebUI on the appliance.
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click Configure Cluster.
The New Cluster Wizard opens.
-
On the page Step 1: Gateway Priority:
-
Select this option:
-
In versions R81.10.15 and higher:
Configure first member.
-
In versions R81.10.00 - R81.10.10:
Configure as primary member.
-
-
Click Next.
-
-
On the page Step 2: SIC Settings:
Steps for versions R81.10.15 and higher
Important - The configuration on the second Cluster Member must match the configuration on the primary Cluster Member.
-
In the Sync Interface section, configure the required settings for the synchronization interfaces:
-
In the field Sync interface (master), select the first (main) synchronization interface. Default: LAN2
-
In the field Second sync interface, select the second synchronization interface.
Best Practice - For large appliances such as the 1600, 1800, 1900, and 2000, we highly recommend that you select a second sync interface.
This creates a bond interface called SYNCBOND that includes both the first and second synchronization interfaces.
-
-
In the Advanced sub-section, you can override the default settings:
-
In the field Operation mode, you can select the working mode between the synchronization interfaces of the Cluster Members:
-
Select Health check if the synchronization interfaces on the Cluster Members are connected through a switch.
-
Select Link state (this is the default) if the synchronization interfaces on the Cluster Members are connected directly to each other.
-
-
In the field Sync IP address, you can configure a different IPv4 address of the synchronization interface on the primary Cluster Member. Default: 10.231.149.1
-
In the Sync IP subnet field, you can configure a different IPv4 address of the synchronization subnet. Default: 255.255.255.0
-
In the field Other member sync IP address, you can configure a different IPv4 address of the synchronization interface on the second Cluster Member. Default: 10.231.149.2
-
In the field Synchronization mode, you can select the working mode for the cluster synchronization:
-
Optimized sync
This is the default.
This mode synchronizes most of the kernel tables to ensure smooth cluster failover.
This mode does not synchronize large kernel tables (such as "Connections").
-
Sync is enabled
This mode synchronizes all the kernel tables.
Important - Depending on the number of concurrent connections and the enabled Software Blades, this mode can increase the load on the CPU.
-
Sync is disabled
This mode disables the synchronization.
-
-
In the field High Availability mode, you can configure the Cluster Member recovery method - which Cluster Member to select as Active during a cluster fail-back (when the cluster returns to normal operation after a cluster failover):
Important:
-
This mode must be the same on both Cluster Members.
-
Changing this mode may cause a cluster failover.
-
Active up
This is the default.
The Cluster Member that is currently in the Active state remains in this state.
The other Cluster Member that returns to normal operation remains in the Standby state.
-
Primary up
The Cluster Member that was configured first has the higher priority. When this primary Cluster Member returns to normal operation, it becomes the new Active (see the fail-back sketch after these steps).
The state of the previously Active Cluster Member changes to Standby.
-
-
-
In the Secure Internal Communication section, in the fields Password and Confirm, enter a one-time password for connecting the two Cluster Members to each other.
Note - You cannot use these characters in a password or shared secret:
{ } [ ] ` ~ | ' " \
(maximum number of characters: 255)
A short validation sketch for these password rules appears after the Step 2 instructions below.
-
Click Next.
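The fail-back behavior of the two High Availability modes can be summarized in a short sketch. This is illustrative logic only (the member names and state strings are assumptions), not an appliance command or API:

def active_after_failback(ha_mode, currently_active):
    # Returns which member is Active after a failed member returns to normal operation.
    # "primary" is the Cluster Member that was configured first (highest priority).
    if ha_mode == "Primary up":
        # The primary member always reclaims the Active state (this causes a failover).
        return "primary"
    # "Active up" (the default): the member that is already Active stays Active.
    return currently_active

# Example: the primary member failed earlier, so the second member is currently Active.
print(active_after_failback("Active up", "second"))   # second - stays Active
print(active_after_failback("Primary up", "second"))  # primary - takes over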
Steps for versions R81.10.00 - R81.10.10
Important - The configuration on the second Cluster Member must match the configuration on the primary Cluster Member.
-
In the Secure Internal Communication section, in the fields Password and Confirm, enter a one-time password for connecting the two Cluster Members to each other.
Notes:
-
You cannot use these characters in a password or shared secret:
{ } [ ] ` ~ | ' " \
(maximum number of characters: 255)
-
You must enter the same one-time password when you configure the second Cluster Member.
-
-
In the Advanced section, you can override the default settings:
-
In the field Sync interface, you can select a synchronization interface. Default: LAN2
-
In the field Sync IP address, you can configure the IPv4 address of the synchronization interface on the primary Cluster Member. Default: 10.231.149.1
-
In the field Sync IP subnet, you can configure the IPv4 address of the synchronization subnet. Default: 255.255.255.0
-
In the field Other member sync IP address, you can configure the IPv4 address of the synchronization interface on the second Cluster Member. Default: 10.231.149.2
-
-
Click Next.
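As noted above, the one-time password cannot contain certain characters and is limited to 255 characters. A minimal validation sketch (not an appliance tool), useful before you type the same password on both members:

FORBIDDEN_CHARACTERS = set('{}[]`~|\'"\\')
MAX_LENGTH = 255

def is_valid_one_time_password(password):
    # Reject passwords that are too long or contain a forbidden character.
    return len(password) <= MAX_LENGTH and not (set(password) & FORBIDDEN_CHARACTERS)

print(is_valid_one_time_password("Str0ng-sync-password"))  # True
print(is_valid_one_time_password('bad"password'))          # False - contains a forbidden character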
-
-
On the page Step 3: Gateway Interfaces (<X> out of <Y>):
On these pages, you configure the "internal" and "external" cluster interfaces.
Note - The physical IP addresses of cluster interfaces and the cluster Virtual IP address must be in the same subnet, unless you are configuring a Single Routable IP Cluster. (A subnet-check sketch appears after this procedure.)
Steps for versions R81.10.15 and higher
-
Select Enable High Availability on interface (this is the default).
If you enable high availability on an interface, the primary Cluster Member monitors it, and if there is a failure, the cluster automatically fails over to the second Cluster Member.
If you clear this option, then you can also clear the option Monitor interface state (fail over when interface is down) to stop the cluster monitoring completely.
-
In the field Cluster IP address, configure the applicable cluster Virtual IPv4 address. All hosts and network devices on the corresponding connected network must send their traffic to this Virtual IP address as their default gateway.
-
In the field Subnet mask, configure the applicable IPv4 subnet mask for the cluster Virtual IP address.
-
In the field This physical IP address, the wizard shows the IPv4 address configured on the interface.
-
In the field Peer physical IP address, configure the applicable IPv4 address.
-
Click Next.
Steps for versions R81.10.00 - R81.10.10
-
Select Enable High Availability on interface (this is the default).
If you enable high availability on an interface, the primary Cluster Member monitors it, and if there is a failure, the cluster automatically fails over to the second Cluster Member.
If you clear this option, then you can also clear the option Monitor interface state (fail over when interface is down) to stop the cluster monitoring completely.
-
In the field Cluster IP address, configure the applicable cluster Virtual IPv4 address. All hosts and network devices on the corresponding connected network must send their traffic to this Virtual IP address as their default gateway.
-
In the field Subnet mask, configure the applicable IPv4 subnet mask for the cluster Virtual IP address.
-
In the field Primary physical IP address, the wizard shows the IPv4 address configured on the interface.
-
In the field Second physical IP address, configure the applicable IPv4 address.
-
Click Next.
-
-
Click Finish.
Note - At the top of the page, the Peer gateway field shows "is not defined". This status changes after you finish configuring the second Cluster Member.
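As the note in Step 3 explains, each cluster Virtual IP address must share a subnet with the physical IP addresses of the corresponding cluster interfaces (except in a Single Routable IP Cluster). Before you run the wizard, you can sanity-check your addressing plan with Python's standard ipaddress module; the interface addresses below are examples, and the sync addresses are the wizard defaults quoted above:

import ipaddress

def same_subnet(mask, *addresses):
    # True when every address falls inside the network defined by the first address and the mask.
    network = ipaddress.ip_network(f"{addresses[0]}/{mask}", strict=False)
    return all(ipaddress.ip_address(address) in network for address in addresses)

# Cluster Virtual IP address and the two physical interface IP addresses (example values).
print(same_subnet("255.255.255.0", "192.168.1.1", "192.168.1.2", "192.168.1.3"))  # True

# Default sync addressing from the wizard: 10.231.149.1 and 10.231.149.2, mask 255.255.255.0.
print(same_subnet("255.255.255.0", "10.231.149.1", "10.231.149.2"))  # True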
-
-
Configure the second Cluster Member.
Procedure
-
Connect to the WebUI on the appliance.
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click Configure Cluster.
The New Cluster Wizard opens.
-
On the page Step 1: Gateway Priority:
-
Select this option:
-
R81.10.15 and higher:
Configure as peer member.
-
R81.10.00 - R81.10.10:
Configure as second member.
-
-
Click Next.
-
-
On the page Step 2: SIC Settings:
Steps for versions R81.10.15 and higher
Important - The configuration on the second Cluster Member must match the configuration on the primary Cluster Member.
-
In the Sync Interface section, configure the required settings for the synchronization interfaces:
-
In the field Sync interface (master), select the first (main) synchronization interface. Default: LAN2
-
In the field Second sync interface, select the second synchronization interface.
Best Practice - For large appliances such as the 1600, 1800, 1900, and 2000, we highly recommend that you select a second sync interface.
This creates a bond interface called SYNCBOND that includes both the first and second synchronization interfaces.
-
-
In the Advanced sub-section, you can override the default settings:
-
In the field Operation mode, you can select the working mode between the synchronization interfaces of the Cluster Members:
-
Select Health check if the synchronization interfaces on the Cluster Members are connected through a switch.
-
Select Link state (this is the default) if the synchronization interfaces on the Cluster Members are connected directly to each other.
-
-
In the field Sync IP address, you can configure a different IPv4 address of the synchronization interface on the primary Cluster Member. Default: 10.231.149.1
-
In the Sync IP subnet field, you can configure a different IPv4 address of the synchronization subnet. Default: 255.255.255.0
-
In the field Other member sync IP address, you can configure a different IPv4 address of the synchronization interface on the second Cluster Member. Default: 10.231.149.2
-
In the field High Availability mode, you can configure the Cluster Member recovery method - which Cluster Member to select as Active during a cluster fail-back (when the cluster returns to normal operation after a cluster failover):
Important:
-
This mode must be the same on both Cluster Members.
-
Changing this mode may cause a cluster failover.
-
Active up
This is the default.
The Cluster Member that is currently in the Active state remains in this state.
The other Cluster Member that returns to normal operation remains in the Standby state.
-
Primary up
The Cluster Member that was configured first has the higher priority. When this primary Cluster Member returns to normal operation, it becomes the new Active.
The state of the previously Active Cluster Member changes to Standby.
-
-
-
In the Secure Internal Communication section, in the field Password, enter the same one-time password you configured for the primary Cluster Member.
Steps for versions R81.10.00 - R81.10.10
Important - The configuration on the second Cluster Member must match the configuration on the primary Cluster Member.
-
In the Secure Internal Communication section, in the field Password, enter the same one-time password you configured for the primary Cluster Member.
-
In the Advanced section, you can override the default settings:
-
In the field Sync interface, you can select a synchronization interface. Default: LAN2
-
In the field Sync IP address, you can configure the IPv4 address of the synchronization interface on the primary Cluster Member. Default: 10.231.149.1
-
In the field Sync IP subnet, you can configure the IPv4 address of the synchronization subnet. Default: 255.255.255.0
-
In the field Other member sync IP address, you can configure the IPv4 address of the synchronization interface on the second Cluster Member. Default: 10.231.149.2
-
-
Click Next.
-
-
Click Establish Trust.
The second Cluster Member fetches the settings from the primary Cluster Member and applies them.
-
Click Finish.
-
Viewing Cluster Interfaces
-
Connect to the WebUI on a Cluster Member:
https://<IP Address of the Cluster Member>:4434
Best Practice - After the cluster is successfully configured, connect to https://<Virtual IP Address of the Cluster>:4434. This redirects you to the WebUI Home > System page of the Active Cluster Member.
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
The table List of Configured Interfaces shows information about the cluster interfaces.
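As the Best Practice above describes, browsing to the cluster Virtual IP address on port 4434 lands you on the WebUI of the Active Cluster Member. A minimal reachability sketch, not a documented appliance API - it assumes the appliance's default self-signed certificate (hence verify=False) and uses an example Virtual IP address:

import requests

CLUSTER_VIP_URL = "https://192.168.1.1:4434"  # example cluster Virtual IP address

def webui_reachable(url):
    # Follow the redirect to the Active member's WebUI; the appliance certificate is
    # self-signed by default, so certificate verification is skipped here.
    try:
        response = requests.get(url, verify=False, allow_redirects=True, timeout=5)
        return response.ok
    except requests.RequestException:
        return False

print(webui_reachable(CLUSTER_VIP_URL))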
Viewing the Cluster Status
-
Connect to the WebUI on a Cluster Member:
https://<IP Address of the Cluster Member>:4434
Best Practice - After the cluster is successfully configured, connect to https://<Virtual IP Address of the Cluster>:4434. This redirects you to the WebUI Home > System page of the Active Cluster Member.
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click View diagnostics.
Failing Over Manually
-
Connect to the WebUI on the primary Cluster Member:
https://<IP Address of the Primary Cluster Member>:4434
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click Force Member Down.
A confirmation message appears.
-
Click Yes.
-
Cluster State:
-
The primary Cluster Member is now Down.
-
The second Cluster Member is now Active.
-
-
The primary Cluster Member logs you out of the WebUI because it must reload the WebUI (to show only the supported pages).
-
Connect to the WebUI on the primary Cluster Member:
https://<IP Address of the Primary Cluster Member>:4434
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click Disable Manual Failover.
A confirmation message appears.
-
Click Yes.
In Primary Up mode, the original primary Cluster Member is now the Active Cluster Member.
In Active Up mode, run Disable Manual Failover to make the member Standby.
Resetting Cluster Configuration
-
Connect to the WebUI on one of the Cluster Members:
https://<IP Address of the Cluster Member>:4434
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
Click Reset Cluster Configuration.
Important - This deletes all cluster configuration settings from both Cluster Members. You must run the New Cluster Wizard again to configure the cluster.
Upgrading a Cluster Manually
-
Upgrade the current Standby Cluster Member:
-
Connect to the WebUI on a Cluster Member:
https://<IP Address of the Cluster Member>:4434
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
At the top of this page, examine the cluster state.
If the current cluster state shows "This gateway (<...>) is standby", then continue to the next step.
Otherwise, connect to the other Cluster Member.
-
In the System section, click the System Operations page.
-
Click Manual Upgrade.
The Upgrade Software Wizard opens.
-
Follow the wizard instructions.
-
After the upgrade, this appliance remains the Standby.
-
-
Upgrade the new Standby Cluster Member (former Active Cluster Member):
-
Connect to the WebUI on the Cluster Member:
https://<IP Address of the Cluster Member>:4434
-
From the left navigation panel, click Device.
-
In the Advanced section, click the High Availability page.
-
At the top of this page, examine the cluster state.
Wait for the current cluster state to show "This gateway (<...>) is standby", and then continue to the next step.
-
In the System section, click the System Operations page.
-
Click Manual Upgrade.
The Upgrade Software Wizard opens.
-
Follow the wizard instructions.
-
After the upgrade, this appliance remains the Standby.
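The order above is the important part: upgrade the member that is currently Standby, let the cluster fail over, and only then upgrade the former Active member. A tiny illustrative sketch of that decision; the member names and state strings are assumptions, not output of an appliance command:

def next_member_to_upgrade(states):
    # states maps a member name to its current state: "Active" or "Standby".
    # Always upgrade the Standby member so the Active member keeps passing traffic.
    standby_members = [name for name, state in states.items() if state == "Standby"]
    if not standby_members:
        raise RuntimeError("No Standby member - wait for the cluster to stabilize before you upgrade")
    return standby_members[0]

print(next_member_to_upgrade({"member-A": "Active", "member-B": "Standby"}))  # member-B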
-
Single Routable IP Cluster
You can configure a Single Routable IP cluster where the virtual IP address is in a different subnet than the physical IP addresses of the Cluster Members. Only the virtual IP address is routable. Traffic sent from Cluster Members to internal or external networks is hidden behind the cluster Virtual IP address.
Advantages of using different subnets:
-
Use only one public IP address for the cluster.
-
Hide physical Cluster Members' IP addresses behind the cluster Virtual IP address.
-
Create a cluster in an existing subnet that has a limited number of available IP addresses.
To configure a Single Routable IP Cluster:
-
The Internet connection must be of type Static.
-
The IP address of the Internet connection must be a fake, non-routable address in the same subnet as the Internet connection of the other member. For example, the IP address of the Internet connection of the first member is 4.4.4.4 with a subnet mask of 255.255.255.0, and the IP address of the second member is in the same subnet, also with a subnet mask of 255.255.255.0.
-
When first configuring the Internet connection, you must configure a default gateway. This gateway IP address must be fake as well and in the same subnet as the Cluster Members' IP addresses. In our example, 4.4.4.1.
-
You must turn off probe monitoring:
-
Click Edit to open the Edit Internet Connection window > Connection Monitoring tab.
-
Clear all probing checkboxes.
-
-
You must turn off SD-WAN (supported starting from R81.10.10).
-
Configure the primary and second Cluster Members as for a regular cluster (see Configuration Workflow) but with these differences:
-
After you configure the second member:
-
Go back to the primary (Active) member and click Edit.
-
Set the Default gateway as the default gateway of the Virtual IP address subnet.
-
-
For each Cluster Member, in the Connection Monitoring tab, click the checkboxes to restore the probing options.
-
If SD-WAN is supported, turn it on.
-
-
Click Save to save your changes.
The related route to the Virtual IP address subnet shows in the Routing Table.
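In this topology, only the cluster Virtual IP address is routable, and it deliberately sits outside the subnet of the members' fake physical IP addresses (4.4.4.x in the example above). A short illustration with Python's ipaddress module; the Virtual IP value is an example:

import ipaddress

# Fake, non-routable addresses of the members and their fake default gateway (from the example above).
member_network = ipaddress.ip_network("4.4.4.0/24")
member_1_ip = ipaddress.ip_address("4.4.4.4")
fake_gateway = ipaddress.ip_address("4.4.4.1")

# The only routable address is the cluster Virtual IP, in a different subnet (example value).
virtual_ip = ipaddress.ip_address("203.0.113.10")

print(member_1_ip in member_network, fake_gateway in member_network)  # True True
print(virtual_ip in member_network)  # False - a different subnet, as intended in this topology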
Cluster Managed by Quantum Spark Management
You can configure a cluster in which both gateways are managed by the Quantum Spark Management service in Infinity Portal.
Connect to Quantum Spark Management after you configure the cluster.
A cluster managed by Quantum Spark Management is very similar to a locally managed cluster: one Cluster Member is Active, and the other Cluster Member is Standby. To change the status of the Active member, click Force Member Down.
Connecting a Cluster Gateway to Spark Management
Prerequisites:
-
An account in the Infinity Portal with the Spark Management application. See the Quantum Spark Management Administration Guide.
-
Both gateways must have the same hardware, firmware (version and build), and licenses.
-
The firmware version must be R81.10.15 or higher.
-
The cluster is configured on the gateway level (see Configuration Workflow).
For more information on Cloud Services, see the Configuring Cloud Services page.
Use this procedure when the cluster is configured locally and is not yet connected to Spark Management.
-
In Spark Management:
-
Navigate to the Gateways page.
-
Create a new gateway object.
-
In the Gateway type field, select Spark Cluster.
-
Enter a name and select a plan.
-
Create the cluster member objects in Spark Management. Each object represents a physical gateway member of the cluster.
Notes:
-
You can create each member from the New Gateway Wizard page or later on the Cluster object in the General tab.
-
If you add the members later, click Save after you add both members.
-
-
Click Finish. You are redirected to the cluster's General tab.
-
Copy the HA activation key.
-
-
In the local WebUI of the cluster's Active member:
-
Navigate to Home > Cloud Services.
-
Select the option Manage with Spark Management.
-
Paste the HA activation key you copied from Spark Management into the applicable field.
-
Both members begin the Cloud Services activation process. In a few minutes the process completes and both members appear as connected.
-
Use this procedure when the gateway was already added to Spark Management and the cluster configuration was completed in the local WebUI.
-
In Spark Management:
-
Navigate to the Gateways page.
-
Create a new gateway object.
-
In the Gateway type field, select Spark Cluster.
-
Enter a name and select a plan. This is usually the same plan that is used by the single gateway.
-
Create the cluster member objects in Spark Management:
-
For the first member, click Search and select the relevant gateway.
-
For the second member, create a new gateway object to represent the second member of the cluster.
Notes:
-
You can create each member from the New Gateway Wizard page or later on the Cluster object in the General tab.
-
If you add the members later, click Save after you add both members.
-
-
Click Finish. You are redirected to the cluster's General tab.
-
Copy the HA activation key.
-
-
In the local WebUI of the cluster's Active member:
-
Navigate to Home > Cloud Services.
-
Change the Cloud Management mode to Off and click Save.
-
Select the Manage with Spark Management option.
-
Paste the HA activation key you copied from Spark Management into the applicable field.
-
Both members begin the Cloud Activation process. In a few minutes, the process completes and both members appear as connected.
-
Enable the new Cloud capabilities for Extended Monitoring on a cluster that is managed locally.
-
In the local WebUI of the Active member:
-
Navigate to Home > Cloud Services.
-
Select the Use cloud capabilities option and follow these steps:
-
Step 1 - Create an Infinity Portal account to connect the cluster. If you already have an account, skip this step.
-
Step 2 - Get the token:
-
Click the Get token link.
-
Log in to Infinity Portal with your credentials. If your user is affiliated with more than a single account, select the relevant account.
-
Copy the token.
-
-
Step 3 – Paste the activation token into the applicable field.
-
-
Both members begin the Cloud Activation process. In a few minutes, the process completes and both members appear as connected.
-
Use this procedure when a single gateway with enabled Cloud capabilities for Extended Monitoring is converted into a cluster. The gateway was already added, and the cluster configuration was completed in the local WebUI.
-
Option 1 – Reconnect to the Cloud as a cluster (Simple):
-
Navigate to Home > Cloud Services.
-
Change the Cloud Management mode to Off and click Save.
-
Select the Use cloud capabilities option and follow these steps:
-
Step 1 - Create an Infinity Portal account to connect the cluster. If you already have an account, skip this step.
-
Step 2 - Get the token:
-
Click the Get token link.
-
Log in to Infinity Portal with your credentials. If your user is affiliated with more than a single account, select the relevant account.
-
Copy the token.
-
-
Step 3 – Paste the activation token into the applicable field.
-
-
Both members begin the Cloud Activation process. In a few minutes, the process completes and both members appear as connected.
Notes:
-
In this method, the log history available locally on the appliance is not retained, because the gateway initiates a new, unique connection with Cloud Services. To retain the logs, use Option 2.
-
To view the history, log in to the Spark Management application in the Infinity Portal, where the data still exists.
-
-
-
Option 2 – Extend the single gateway to a cluster in Spark Management.
Note - Even when you use the Cloud Capabilities option, Spark Management configuration for the gateway still exists.
Follow the steps to convert a single gateway to a cluster:
-
In Spark Management:
-
Navigate to the Gateways page.
-
Create a new gateway object.
-
In the Gateway type field, select Spark Cluster.
-
Enter a name and select a plan. This is usually the same plan that is used by the single gateway.
-
Create the cluster member objects in Spark Management:
-
For the first member, click Search and select the relevant gateway.
-
For the second member, create a new gateway object to represent the second member of the cluster.
Notes:
-
You can create each member from the New Gateway Wizard page or later on the Cluster object in the General tab.
-
If you add the members later, click Save after you add both members.
-
-
Click Finish. You are redirected to the cluster's General tab.
-
Copy the HA activation key.
-
-
In the local WebUI of the cluster's Active member:
-
Navigate to Home > Cloud Services.
-
Change the Cloud Management mode to Off and click Save.
-
Select the Manage with Spark Management option.
-
Paste the HA activation key you copied from Spark Management into the applicable field.
-
Both members begin the Cloud Activation process. In a few minutes, the process completes and both members appear as connected.
-
-