Depending on which scenario applies to your environment, there are two ways to run data erasure and wipe the disk configuration. Simply pushing a data disk back to the spare pool after removing it from an aggregate may not satisfy every data erasure requirement. Degaussing the drive in a traditional degausser, which relies on a magnetic field to certify erasure, can still leave some data owners uneasy about disposing of the disk. Here are two scenarios that are standard practices NetApp supports natively, without third-party software or applications to run disk data erasure and configuration wiping.
Scenario 1 Usage: Disk Initialize
If you have the flexibility to reboot the node and need to wipe the data on all disks or reset the configuration, then disk initialize is the right option.
Prerequisites.
1. If the disks are part of an aggregate and/or hold volumes, the volumes must be taken offline and destroyed, followed by taking the aggregate offline and deleting it (see the example commands after this list).
2. Disks must be in a spare pool, but can be owned by nodes.
3. Only root aggregate disks should be present.
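As a rough sketch of prerequisite 1, the clustershell cleanup might look like the lines below. The vserver, volume, and aggregate names (vs1, vol1, data_aggr1) are placeholders for illustration only; adjust them to your environment and double-check each delete before running it.
cluster::> volume unmount -vserver vs1 -volume vol1
cluster::> volume offline -vserver vs1 -volume vol1
cluster::> volume delete -vserver vs1 -volume vol1
cluster::> storage aggregate offline -aggregate data_aggr1
cluster::> storage aggregate delete -aggregate data_aggr1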
Actual Action:
Step 1. Boot each node while accessing it from the console or the SP (if configured) and take it to the LOADER/CFE prompt. Ensure that the variables below are set; they remove the cluster RDBs, CDBs, and the varfs from mroot, the boot device, and NVRAM. (A quick verification check follows the variable list.)
setenv bootarg.init.boot_clustered true
setenv bootarg.factory_init_completed true
setenv bootarg.init.clearvarfsnvram true
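To double-check that the variables took effect before booting, you can print them back at the LOADER prompt. This is just a quick verification sketch; output formatting can vary between loader versions.
printenv bootarg.init.boot_clustered
printenv bootarg.factory_init_completed
printenv bootarg.init.clearvarfsnvram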
Step 2. Run boot_ontap from the LOADER prompt, and while the node boots, press Ctrl-C to go to the special boot menu.
Step 3. Out of the eight special boot menu options, do not select any option yet. Instead, type “wipeconfig” on each node.
Step 4. Then select option 4, “Clean configuration and initialize all disks”.
(This prompts you to confirm that you want to zero the disks, reset the configuration, and install a new file system. Type “yes”.)
Step 5. This runs the disk initialize operation in the background, indicated by dots (…….) filling the screen until it is done. Every drive gets initialized, and upon completion you are taken to a prompt asking whether you want to create a new cluster or join an existing one.
At this time, it is safe to power down the controller head and disk shelves. The disks have been reset and the data has been completely erased.
Scenario 2 Usage: Disk Sanitization
In Data ONTAP 8.0 and earlier, the disk sanitization feature required a disk sanitization license.
In Data ONTAP 8.1 and later, you just need to enable the feature per Step 1 under Actual Action below.
If you only need to erase a few drives in a stack, or just one shelf out of a set of stacks, you cannot use the Scenario 1 approach for complete data erasure, because you are not erasing data from the entire array but only from selected disks.
Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible. You use the sanitization process to ensure that no one can recover the data on the disks. This functionality is available through the nodeshell.
Prerequisites.
1. The disks in question must be in the spare pool, but they can be owned by nodes (a quick verification command follows).
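As a quick check from the clustershell, something like the command below lists the spare disks owned by a node (node01 is a placeholder node name):
cluster::> storage disk show -owner node01 -container-type spare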
How does disk sanitization work?
The disk sanitization process uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. The random overwrite pattern is repeated for each cycle (an example command follows below).
**Sanitization consists of two phases:
a. Formatting phase
b. Pattern overwrite phase
**The disk sanitization feature is applied at the storage system level, and once it is enabled, it cannot be disabled.
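To illustrate the patterns and cycles described above, the nodeshell command accepts optional pattern (-p) and cycle (-c) arguments. The line below is only a sketch: the patterns, cycle count, and disk names are placeholders, so check the syntax against your ONTAP release before using it.
node::> disk sanitize start -p 0x55 -p 0xaa -p 0x3c -c 3 1.0.16 1.0.17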
Actual Action:
Step 1. Go to the nodeshell from the clustershell and enable the disk sanitization feature.
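In clustered Data ONTAP, the nodeshell is typically entered with system node run; the node name below (node01) is a placeholder for your actual node name:
cluster::> system node run -node node01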
node::> options nodescope.reenabledoptions licensed_feature.disk_sanitization.enable
node::> options licensed_feature.disk_sanitization.enable on
Step 2. Start disk sanitize on a disk or disk list.
node::> disk sanitize start disk_list
Step 3. Check the disk sanitize status.
node::> disk sanitize status disk_list
Step 4. After disk sanitization is complete, return the sanitized disk to the spare pool; it won't be sent to the spare pool automatically.
node::> disk sanitize release disk_name
Step 5. Exit the nodeshell and return to the clustershell.
node::> (press Ctrl-D)
cluster::>
Step 6. Verify the disks have been properly placed in the spare pool.
cluster::> storage disk show -container-type spare
By now the disk is sanitized, holds no data, and sits in the hot spare pool ready to be reused.
**At this time, you can also run the drive through the degauss/crush machine if it is being disposed of**
**Some industry standards on how to run the disk sanitization or data erasure procedure**
https://kb.netapp.com/app/answers/answer_view/a_id/1034565/~/how-to-use-disk-sanitize-to-meet-department-of-defense-5220.22m-
https://kb.netapp.com/app/answers/answer_view/a_id/1072424/~/how-to-perform-disk-erasure%2C-disk-clearing-and-disk-sanitization-
You are Welcome :)