Friday, September 25, 2020

Cohesity: How to create a new Cohesity Cluster--with Examples


This is the method used to create a new cluster using IPMI.

This procedure applies to C6xxx models. (For C25xx or C4xxx models, use a value of 3 instead of 1 in the ipmitool lan commands below.)
The C6xxx uses username admin and password administrator for IPMI.
  1. Console into the first node.
It will take you to a black screen.

[cohesity@node ~]$ sh   (type sh and press Enter)
UserName: cohesity
Password: Cohe$1ty

(This takes you to the cluster shell.) Run the commands below:
sudo ipmitool lan print 1
sudo ipmitool lan set 1 ipsrc static
sudo ipmitool lan set 1 ipaddr 10.123.123.20
sudo ipmitool lan set 1 defgw ipaddr 10.123.123.1
sudo ipmitool lan set 1 access on
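The four lan set commands above can be wrapped in a small loop so they can be reviewed (or reused per node) before running with sudo. This is only a sketch; the channel and addresses are the example values from this post, and the channel would be 3 on C25xx/C4xxx models.

```shell
# Build the IPMI setup commands for review before running them on the node.
# CHANNEL, IPMI_IP, and IPMI_GW are the example values from this post.
CHANNEL=1
IPMI_IP=10.123.123.20
IPMI_GW=10.123.123.1

cmds=$(for args in \
    "lan set $CHANNEL ipsrc static" \
    "lan set $CHANNEL ipaddr $IPMI_IP" \
    "lan set $CHANNEL defgw ipaddr $IPMI_GW" \
    "lan set $CHANNEL access on"; do
  echo "sudo ipmitool $args"
done)
echo "$cmds"
```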
  2. Now that you have enabled IPMI, you can use the IP address in a URL and access the KVM remotely.
Once logged in to the KVM:
[cohesity@node ~]$ cd bin/network
$ ls
(This lists the available scripts.)
  3. Run the configure_network.sh script.
[cohesity@node ~]$ ./configure_network.sh
(It lists 12 options. Select option 7 to configure LACP bonding across the two 10G ports on the Cohesity side. You must have 10G LACP configured the same way on the switch side as well.)
The LACP configuration on the switch side should look like this:
SwitchA:
interface Ethernet1/5
description  cohesity-node1-ens802f0
switchport mode trunk
switchport trunk allowed vlan 50
switchport trunk native vlan 50
channel-group 101 mode active
mtu 9216
SwitchB:
interface Ethernet1/5
description cohesity-node1-ens802f1
switchport mode trunk
switchport trunk allowed vlan 50
switchport trunk native vlan 50
channel-group 101 mode active
mtu 9216
  4. In the event the BMC/IPMI port becomes unresponsive, log in to IPMI from another node and run the following to reboot it:
ipmitool -I lanplus -U admin -P administrator -H 10.123.123.20 mc reset cold
(If an IPMI interface is frozen, this resets it over the network using IPMI from a different node.)
  5. Part of ./configure_network.sh assigns the node IP. You can now ssh into that node IP (e.g. 10.123.123.40).
  6. Once you have sshed into the node IP, check the bond configuration (this shows what kind of bond is configured):
[cohesity@node ~]$ cat /proc/net/bonding/bond0
The output looks something like this:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Slave Interface: ens802f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a4:bf:01:2d:7f:56
Aggregator ID: 3
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
  7. [cohesity@node ~]$ avahi-browse -tarp
(This discovers all the nodes connected in the cluster over IPv6 multicast DNS. If it doesn't see any nodes, that needs to be investigated.)
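avahi-browse -tarp (the -p flag) emits one semicolon-separated record per line, which makes the discovery result easy to check in a script. The sample below is illustrative only (field layout, not real Cohesity output); on a node you would pipe the real command's output in instead.

```shell
# Illustrative avahi-browse -tarp records; "=" marks a resolved service
# and field 4 is the host name. On a real node, replace the sample with:
#   avahi-browse -tarp
sample='=;bond0;IPv6;node-1;_ssh._tcp;local
=;bond0;IPv6;node-2;_ssh._tcp;local
=;bond0;IPv6;node-3;_ssh._tcp;local'

# Count the unique hosts that were resolved.
nodes=$(printf '%s\n' "$sample" | awk -F';' '$1=="="{print $4}' | sort -u)
node_count=$(echo "$nodes" | wc -l)
echo "$node_count"
```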

  8. At this stage, you can enter a node IP in a URL and the setup should discover all the nodes, so you can start creating the Cohesity cluster.
This is an interactive session in which you assign the node IPs, VIPs, and SMTP, DNS, and NTP servers.
At the end of the interactive session, a message notifies you that the cluster has been created and that you can log in at the provided URL as the admin user.
Username: admin
Password: admin
Note: If you want to update gflags and other settings, you may do so at this point.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Validation Steps for Cluster Settings:
1. Now that the cluster is up, you can run this on any node. The MII status should show "up" on all the nodes that are part of the cluster.
[cohesity@node ~]$ allssh.sh 'cat /proc/net/bonding/bond0' | grep MII
MII Status: up
MII Polling Interval (ms): 100
MII Status: up
MII Status: up
MII Status: up
MII Polling Interval (ms): 100
MII Status: up
MII Status: up
MII Status: up
MII Polling Interval (ms): 100
MII Status: up
MII Status: up
MII Status: up
MII Polling Interval (ms): 100
MII Status: up
MII Status: up
MII Status: up
MII Polling Interval (ms): 100
MII Status: up
MII Status: up
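The MII check above can be turned into a simple pass/fail test. The sample below mirrors that output (with one link flipped to "down" purely for illustration); in practice you would capture the real allssh.sh output instead.

```shell
# Sample bond status lines; on a real cluster capture them with:
#   status=$(allssh.sh 'cat /proc/net/bonding/bond0' | grep 'MII Status')
status='MII Status: up
MII Status: up
MII Status: down'

# Fail loudly if any interface reports anything other than "up".
if printf '%s\n' "$status" | grep -v 'MII Status: up' | grep -q 'MII Status'; then
  verdict="WARNING: at least one bond link is not up"
else
  verdict="all bond links up"
fi
echo "$verdict"
```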
  2. [cohesity@node ~]$ allssh.sh 'cat /proc/net/bonding/bond0' | grep Mode
(This should list the link aggregation mode. Mode 4, i.e. LACP, is dynamic link aggregation.)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
  3. [cohesity@node ~]$ iris_cli node status
  4. [cohesity@node ~]$ iris_cli cluster status
  5. [cohesity@node ~]$ allssh.sh hostips
(This lists all node IPs in the cluster.)
  6. [cohesity@node ~]$ less logs/iris_proxy_exec.FATAL
(This lists any FATALs related to the iris service.)
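The hostips output from the validation steps above can also be checked against the node count you expect. The IPs below are placeholders; on a live cluster you would substitute the real output of allssh.sh hostips.

```shell
# Stand-in for `allssh.sh hostips` output (placeholder IPs); on a real
# cluster capture it with: hostips_output=$(allssh.sh hostips)
hostips_output='10.123.123.40 10.123.123.41 10.123.123.42'
expected_nodes=3

# Count unique IPs and compare against the expected cluster size.
actual=$(printf '%s\n' $hostips_output | sort -u | wc -l)
if [ "$actual" -eq "$expected_nodes" ]; then
  verdict="node count OK ($actual)"
else
  verdict="expected $expected_nodes nodes, found $actual"
fi
echo "$verdict"
```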
You are Welcome :)
