Learn Storage, Backup, Virtualization, and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................
Problem Synopsis: I ran into a situation where I saw a discrepancy in the logical partition size of newly added disks versus the existing disks in a partitioned-disk aggregate, which shows up when you check the partition size of the spare drives.
Potential Result: If a drive in the current aggregate fails, the spare partitioned drive won't kick in, or even if it does kick in, it may not be in a healthy state.
Solution: Validate the current partition size first, and then, when adding a new tray of SSDs (or individual drives), partition the new disks copying that partition size.
**This applies only when you are adding additional disks/trays to an already existing aggregate.**
Step 1: All new drives (or the entire tray of disks) should be assigned to one node. If they get partitioned automatically by default, you need to unpartition the disks first. Once the disks are unpartitioned, that is when you partition them from the nodeshell (a rough sketch follows below).
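A minimal nodeshell sketch of Step 1, assuming a placeholder disk 0d.11.0 and a placeholder owner node01 (disk unpartition is a diag-level command and its availability/behavior is worth verifying on your ONTAP version):
- node> priv set diag
- node*> disk show -n (list the new, unowned disks)
- node*> disk assign 0d.11.0 -o node01 (assign each new disk to the same node)
- node*> disk unpartition 0d.11.0 (only needed if the disk was auto-partitioned on assignment)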
Step 2: Get the actual raw size of the P3 partition from the root aggregate. Remember that the highest partition number is for ROOT. E.g., P3 is a partition usable only for root; no data aggregate can be created from it even when you have many spare P3 partitions available. (This is what I found a little off, i.e., not being able to use the entire space.)
** Whether to use unpartitioned drives and leave two whole drives for parity and DP, or to partition the drives and still leave the P3 partitions floating in the spare pool, is a judgement call and up to you. I found that partitioning is still better in terms of total usable SSD capacity than a non-partitioned disk add, after adding into the pool.**
- Go to diag mode on a node that already has a (partitioned) root aggregate and gather the raw size of its P3 partition.
- node> priv set diag
- node*> raid_config info listdisk XX.XX.XXP3 (locate the raw size in the output)
(output truncated) ... rawsize=28246976,used=28239680,rightsize=28244928 ...
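As a cross-check (not part of the original output above), the clustershell spare report also lists the usable size of each spare partition; node01 is a placeholder and the exact columns vary by ONTAP version:
- cluster::> storage aggregate show-spare-disks -owner-name node01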
Step 3: Now, from the list of unpartitioned disks that were just added, pick the one you want to partition, and partition it.
- node*> disk partition -n 3 -i 3 -b 28246976 0d.11.0 (n = number of partitions, b = raw size gathered from the existing P3 partition in Step 2)
disk partition: 0d.11.0 partitioned successfully
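The same command can then be repeated for each remaining new disk with the same -b value, so every new disk gets matching partition sizes; 0d.11.1 below is just a hypothetical next disk in the tray:
- node*> disk partition -n 3 -i 3 -b 28246976 0d.11.1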
Step 4: Validate the partitioned disk.
node*> disk show -n
DISK OWNER POOL SERIAL NUMBER HOME DR HOME
------------ ------------- ----- ------------- ------------- -------------
0d.11.0P1 Not Owned NONE
0d.11.0P2 Not Owned NONE
0d.11.0P3 Not Owned NONE
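Before assigning, it is also worth confirming that the new P3 partition's raw size matches the value gathered in Step 2 (28246976 in this example), using the same diag command as earlier:
- node*> raid_config info listdisk 0d.11.0P3 (rawsize should match the existing partitioned disks)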
Step 5: Now assign the partitions to the respective node.
node*> disk assign 0d.11.0P3 -o <owner_node>
This leaves the remaining partitions matching what was previously on the existing partitioned disks.
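As a final housekeeping sketch (not spelled out in the steps above), the data partitions also get assigned and can then be added to the aggregate; <owner_node>, aggr_data, and the disk count are placeholders, and the clustershell options are worth verifying on your ONTAP version:
- node*> disk assign 0d.11.0P1 -o <owner_node> (repeat for P2 and for the other new disks)
- cluster::> storage aggregate show-spare-disks (confirm the new spare partitions show the same usable size as the existing ones)
- cluster::> storage aggregate add-disks -aggregate aggr_data -diskcount <number of data partitions to add>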
You are Welcome :)
Source: www.netapp.com