Expand a Cluster
Perform the following steps before adding new nodes to the cluster.
1 Configure the non-native VLAN. Two methods are available; the first uses the iris_cli:
a Use the iris_cli vlan add command to set up the non-native VLAN to be used for the node add workflow. Example:
iris_cli vlan add if-name=bond0 id=101 subnet-mask-bits=8
b Use the following command to set the non-native VLAN logical bond interface as primary. Replace vlan_id with the ID of the VLAN you added.
iris_cli ip config interface-name=<bond0.vlan_id> interface-role=primary
Alternatively, to configure the IP on a new node and access the node using that IP (not required if Avahi is used to discover all nodes), use the following command; a filled-in example appears after this procedure:
iris_cli ip config interface-name=<bond0.vlan_id> iface-ips=xx subnet-gateway=yy subnet-mask-bits=zz mtu=qq
2 The second method is to use the configure_network.sh script.
a Use configure_network.sh option 10.
Location: /home/cohesity/bin/network/configure_network.sh
3 Restart the Nexus service:
sudo service nexus restart
4 Run ifconfig and ensure Avahi runs on the non-native VLAN bonded interface.
5 On any node in the existing cluster, start the node add workflow from the UI and provide cluster IPs from the configured non-native VLAN.
NOTE: If necessary, the user can configure cluster IPs and VIPs from the non-native VLAN and keep the IPMI in the native VLAN or some other subnet.
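For reference, here is a minimal consolidated sketch of the iris_cli method above, assuming an illustrative VLAN ID of 101 on bond0, node IP 10.2.45.11, gateway 10.2.45.1, a 24-bit subnet mask, and an MTU of 1500; none of these values come from your environment, so substitute your own:
# Create the non-native VLAN on the bonded interface (all values are examples)
iris_cli vlan add if-name=bond0 id=101 subnet-mask-bits=24
# Make the VLAN logical bond interface the primary interface
iris_cli ip config interface-name=bond0.101 interface-role=primary
# Optionally assign an IP so the node is reachable directly (skip if Avahi discovery is used)
iris_cli ip config interface-name=bond0.101 iface-ips=10.2.45.11 subnet-gateway=10.2.45.1 subnet-mask-bits=24 mtu=1500
# Restart Nexus, then confirm the interface with ifconfig
sudo service nexus restart
ifconfig bond0.101
After the restart, Avahi should be listening on bond0.101, and the node add workflow can be started from the UI as described in step 5.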
Remove a Node from a Cluster
This is the clean way to remove a node. A trickier alternative is to fail the node and let the cluster rebuild its data in the background, provided the right redundancy settings are in place (these can be checked under Storage Domain Configuration).
- Log in to the cluster or one of its nodes:
> iris_cli cluster status
(This lists each Node ID with its IP address and serial number.)
> iris_cli node rm -id=<serial number of node>
(It prompts for the cluster username (admin) and password, then returns the message:
“Success: Node ID: <Serial Number> marked for removal successfully.”)
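For illustration, here are the same two commands filled in with a hypothetical serial number (take the real value from the cluster status output):
> iris_cli cluster status
> iris_cli node rm -id=ABC1234567
(Enter the cluster admin credentials when prompted; on success the CLI returns “Success: Node ID: ABC1234567 marked for removal successfully.”)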
Note: There is no way to track the removal process from the CLI. However, if you log in to the Siren page and go to Scribe, it shows the KRemoveNode process, and the metadata/replica count held by that node steadily decreases, which indicates the node is being removed. The Scribe service tracks and manages metadata; metadata removal and removal of data from the disks owned by the node in question run in parallel, but the metadata removal finishes quickly. Once the data has been reshuffled across the other nodes, logging in to the node and running the same commands as above shows a message that the node is not part of the cluster, and/or the password is reset to the default admin password rather than the one you changed for the entire cluster.
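Once the data has been moved off, a quick sanity check (a sketch based on the behaviour described in the note above) is to log back in to that node and repeat the status command:
> iris_cli cluster status
(Expect a message saying the node is not part of the cluster; you may also need the default admin password rather than the cluster-wide one.)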
You are Welcome :)