Wednesday, December 18, 2019

Netapp: XCP tool for copying high-file-count volumes

Learn Storage, Backup, Virtualization,  and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................


Troubleshooting Workflow: NetApp XCP Migration Tool errors
Document ID: BR519
Answer ID: 1005557
Published Date: 12/12/2019
Symptom
Symptom 1
-bash: ./xcp: Permission denied
Symptom 2
xcp: ERROR: This license has expired
Symptom 3
xcp: This copy is not licensed.
Symptom 4
xcp: ERROR: XCP not activated, run 'activate' first
Symptom 5
xcp: ERROR: License file /opt/NetApp/xFiles/xcp/license not found.
Symptom 6
xcp: ERROR: Failed to activate license: Server unreachable
Symptom 7
xcp usage error: show: requires at least one server
Symptom 8
xcp: ERROR: show '10.61.73.94:/vol/nfsvol1': invalid hostname
Symptom 9
xcp usage error: scan: missing source path
Symptom 10
xcp usage error: scan: invalid path '10.61.73.94'
Symptom 11
xcp: ERROR: Catalog inaccessible: Cannot mount nfs_server:/export[:subdirectory]
Symptom 12
xcp: compare1 'n.txt': ERROR: nfs3 LOOKUP 'n.txt' in '10.61.73.113:/target': nfs3 error 2: no such file or directory
Symptom 13
xcp: ERROR: Failed to open catalog id '3': nfs3 LOOKUP '3' in '10.61.73.94:/vol/nfsvol1/catalog/indexes': nfs3 error 2: no such file or directory
Symptom 14
xcp: ERROR: Empty or invalid index '10.61.73.94:/vol/nfsvol1/catalog/indexes/2/2.index'
Symptom 15
xcp: copying 'file3': WARNING: 10.193.67.237:/ntfs/topdir_3/subdir_52/file3: nfs3 SETATTR '10.193.67.237:/ntfs/topdir_3/subdir_52/file3' mode 00777, uid 0, gid 0, size 27, atime Thu Feb 23 10:15:00 2017, mtime Tue Feb  7 11:51:54 2017: nfs3 error 1: not owner
Symptom 16
Sync fails in different areas due to the following warning:
xcp: mount 10.61.73.94:/vol/nfsvol1: WARNING: This NFS server only supports 1-second timestamp granularity. This may cause sync to fail because changes will often be undetectable.
Symptom 17
XCP fails to transfer ACLs and produces the following ERROR:
ERROR failed to obtain fallback security principal "". Please check if the principal with the name "" exists on "".
Cause
The causes and the procedures to resolve these issues are described in the Solution sections below.

Cause #1: Improper file permissions on the XCP binary.
Solution: Modify the permissions by running the "chmod 755" command. Ensure the XCP binary can be executed by the root user or via the sudo command on the designated XCP Linux client host.
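A minimal sketch of the fix, assuming the xcp binary sits in the current working directory:
# make the binary executable (755 = rwxr-xr-x)
chmod 755 ./xcp
ls -l ./xcp
# confirm it now runs
./xcp help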

Cause #2: This error occurs if the free 90-day XCP evaluation license has expired.
Solution: Renew the license or obtain a new XCP license from https://xcp.netapp.com

Cause #3: XCP is not licensed.
Solution: Download and apply the XCP license on the Linux client host system.
Cause #4: XCP license is not activated.
Solution: Obtain the appropriate XCP license file. Copy the XCP license to the /opt/NetApp/xFiles/xcp/ directory on the XCP Linux client host. Run the "xcp activate" command to activate the license.
Cause #5: XCP license is not activated.
Solution: Download the XCP license from https://xcp.netapp.com. Copy the XCP license file onto the XCP Linux client host at /opt/NetApp/xFiles/xcp/. After copying, run the "xcp activate" command to activate the XCP license.
Cause #6: License file not found at /opt/NetApp/xFiles/xcp/license.
Solution: Register for an XCP license at https://xcp.netapp.com. Download and copy the XCP license file to the /opt/NetApp/xFiles/xcp/ directory on the XCP Linux client host.
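The license-related causes (#3 through #6) share the same remedy. A minimal sketch, assuming the downloaded license file is saved as "license" in the current directory (your file name may differ):
# copy the license into the location XCP expects
cp ./license /opt/NetApp/xFiles/xcp/license
# activate the license
./xcp activate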
Cause #7: Incomplete parameter specified for the "xcp show" command.
Solution: Re-run the command with the server name or IP address specified. Correct syntax: ./xcp show abc.nfsserver.com
Cause #8: A valid hostname was not specified for the "./xcp show" command.
Solution: Run the "./xcp show" command with a valid hostname. Exact command syntax: ./xcp show 10.61.73.94 or ./xcp show localhost
Cause #9: Incomplete parameters specified for the "xcp scan" command.
Solution: To resolve this error, run the "xcp scan" command with the complete source NFSv3 export path. Correct syntax: ./xcp scan <nfs_server>:/<export_path>
Cause #10: Incomplete parameter specified for the "xcp scan" command.
Solution: Run the same command again, specifying the complete NFSv3 source export path, for example: ./xcp scan 10.61.73.94:/vol/nfsvol1
Cause #11: Catalog path is not specified in the xcp.ini configuration file.
Solution: Open a text editor on the XCP Linux client host and update the XCP configuration file with the proper catalog location. The XCP config file is located at /opt/NetApp/xFiles/xcp/xcp.ini
Sample entries of the config file:
[root@localhost linux]# cat /opt/NetApp/xFiles/xcp/xcp.ini
# Sample xcp config:
[xcp]
catalog = 10.61.73.94:/vol/nfsvol1

Cause #12: The verify operation did not find the source file(s) on the target NFS export.
Solution: Run the "xcp sync" command to copy the incremental updates from the source to the destination.
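A minimal sketch, assuming the earlier copy was cataloged under a hypothetical index name "migration1" (check your actual index name first):
# re-drive the incremental changes from source to target using the existing index
./xcp sync -id migration1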

Cause #13: XCP could not locate the specified catalog index number.
Solution: Verify the index number of the previous operation. To determine the exact catalog path, run the "cat /opt/NetApp/xFiles/xcp/xcp.ini" command. Catalog indexes are located in the <catalog>:/catalog/indexes directory. After locating the index number, run the same command again, specifying the correct index number or name.
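A minimal sketch of locating the indexes, assuming the catalog export from the sample config above and a hypothetical mount point /mnt/catalog:
# find the catalog export
cat /opt/NetApp/xFiles/xcp/xcp.ini
# mount it and list the existing index names/numbers
mkdir -p /mnt/catalog
mount -t nfs 10.61.73.94:/vol/nfsvol1 /mnt/catalog
ls /mnt/catalog/catalog/indexes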
Cause #14: The previous copy operation was interrupted before indexing the files.
Solution: Run the "xcp copy" operation again, specifying the "-newid" option. If you need to interrupt the copy operation, do so only after the 'indexed' counter appears on the console session:
88,126 scanned, 42,058 copied, 41,469 indexed, 150 MiB in (3.60 MiB/s), 41.9 MiB out (2.88 MiB/s), 16s
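A minimal sketch, assuming the source and target exports from the symptoms above and a hypothetical new index name "newrun1":
# restart the copy under a fresh catalog index
./xcp copy -newid newrun1 10.61.73.94:/vol/nfsvol1 10.61.73.113:/target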
Cause #15: This happens on NTFS security style volumes because XCP attempts to chown a file and cannot, as UNIX operations fail on NTFS security style volumes by default.
Solution: The behavior can be seen by running the following command against the target:
# chown root newfile
chown: changing ownership of ‘newfile’: Operation not permitted

In a packet trace, the following is displayed:
3380 7.642295   10.193.67.237   10.193.67.233   NFS  214  V3 SETATTR Reply (Call In 3338) Error: NFS3ERR_PERM
The error is benign, as the owner would be set from a Windows client after the fact anyway.
The following are the two ways to avoid this:
  • Change the export policy rule to ignore with the following command: export-policy rule modify -vserver DEMO -policyname default -ruleindex 1 -ntfs-unix-security-ops ignore
  • Copy to a mixed security style or UNIX security style volume. After the copy, change the security style to NTFS.
A third possible way would be for XCP not to attempt SETATTR after copying; there is no way to do this for the copy command in the current version, but perhaps that option could go into a future release. For reference, the relevant xcp copy options are:
  • copy: Recursively copy everything from source to target
  • newid : Catalog name for a new index
  • md5: Checksum the files (also save the checksums when indexing) (default: False)
  • edupe: Include dedupe estimate in reports (see documentation for details)
  • nonames: Do not look up user and group names for file listings or reports
  • bs : read/write blocksize (default: 64k)
  • dircount : Request size for reading directories (default: 64k)
  • parallel : Maximum concurrent batch processes (default: 7)
  • noId: Disables the creation of a default index (default: False)
Cause #16: This has been seen when the source volume is on SLES or RHEL with an ext3 file system.
Solution: The issue is documented here: 1181841 XCP Sync: Timestamp granularity
  • Workaround: We recommend having access time enabled, because xcp compares the timestamps of the files from the source against the index (which contains all the file metadata found on the source). In this use case there may be differences of less than 1 second, and xcp will not apply the changes.
  • This is only a warning and can be ignored.
Cause #17: This may happen when:
  • The target netbios name/machine account cannot be resolved to an IP
  • The target machine cannot resolve the fallback-user or fallback-group to a SID
Solution:
  • For cause 1, add a mapping in the hosts file (c:\windows\system32\drivers\etc\hosts) for the netbios name/machine account name of the target.
  • For cause 2, select a fallback-user/group that can be resolved by the target system. This can be either a user local to the target machine or a domain user resolvable by the target machine.
Related Links:
  • 1098120: NetApp XCP Frequently Asked Questions and Resources
  • 1015592: Triage Template - How to troubleshoot NetApp XCP Migration Tool issues
  • 1015613: How to transition 7-Mode NFSv3 exports to clustered Data ONTAP using NetApp XCP Migration Tool 1.0
XCP 1.5
XCP Best Practices TR-4808
You are Welcome :)

Monday, December 9, 2019

Netapp: How to perform Disk Erasure, Disk Clearing and Wipe Configuration on CDOT Netapp disks.

Learn Storage, Backup, Virtualization,  and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................

Depending on which scenario applies to your environment, there are two ways to run data erasure and disk configuration wipe-out activity. Simply pushing a data disk back to the spare pool after removing it from an aggregate may not satisfy some data erasure requirements. Degaussing a drive by feeding it into a traditional degauss machine, which uses a magnetic process to certify the degauss, might still leave some data owners uncomfortable about disposing of the disk. Here are two scenarios that are standard practices which Netapp supports without intervention from third-party software or applications for disk data erasure and configuration wiping.

Scenario 1 Usage: Disk Initialize
If there is flexibility to reboot the node and all disk data needs to be wiped or the configuration reset, then "disk initialize" is the right option.

Prerequisites:
1. If the disks are part of an aggregate and/or hold volumes, the volumes must be taken offline and destroyed, followed by taking the aggregate offline and deleting it.
2. Disks must be in the spare pool, but can be owned by nodes.
3. Only root aggregate disks should be present.

Actual Action:
Step 1. Boot each node while accessing it from the console or SP (if configured) and take it to the loader prompt.
At the LOADER/CFE prompt, ensure that the variables below are set. These variables remove the cluster RDBs, CDBs, and the varfs from mroot, the boot device and NVRAM.
setenv bootarg.init.boot_clustered true
setenv bootarg.factory_init_completed true
setenv bootarg.init.clearvarfsnvram true

Step 2. Then run boot_ontap from the loader prompt and, while the node reboots, press CTRL + C to go to the special boot menu.

Step 3. Of the 8 special boot menu options, don't select any option yet. Instead, type "wipeconfig" on each node.

Step 4. Then select option no. 4, which says "Clean configuration and initialize all disks".
(This will prompt whether you want to zero disks, reset the config and install a new file system. Type "yes".)

Step 5. This runs the disk initialize operation in the background, indicated by dots (…….) filling the screen until it is done. Every drive gets initialized, and upon completion it will take you to a prompt asking whether you want to create or join a cluster or new filesystem.

At this time, it is safe to power down the controller head and disk shelves. The disks have been reset and the data has been completely erased.


Scenario 2 Usage: Disk Sanitization
In Data ONTAP 8.0 and earlier, the disk sanitization feature needed a disk sanitization license.
In Data ONTAP 8.1 and later, you just need to enable the feature per Step 1 under Actual Action below.
If you only have a few drives in a stack, or just one shelf from a set of stacked shelves, you cannot use the Scenario 1 approach for complete data erasure, because you are not erasing data from the entire array, only from selected disks.
Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible. You use the sanitization process to ensure that no one can recover the data on the disks. This functionality is available through the nodeshell.

Pre-Requisites.
1. Disks in question must be in spare pool, but can be owned by nodes.

How does disk sanitization work?
The disk sanitization process uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. The random overwrite pattern is repeated for each cycle.

**Sanitization consists of two phases:
a. Formatting phase
b. Pattern overwrite phase

**The Disk Sanitization feature is applied at the storage-system level, and once it is enabled, it cannot be disabled.

Actual Action:
Step 1. Go to nodeshell from cluster.
node::>options nodescope.reenabledoptions licensed_feature.disk_sanitization.enable
node::>options licensed_feature.disk_sanitization.enable on

Step 2. Start disk sanitize on a disk or disk list.
node::> disk sanitize start disk_list

Step 3. Check disk sanitize status
node::> disk sanitize status disk_list

Step 4. After disk sanitization is complete, return the sanitized disk to the spare pool; it won't be sent to the spare pool automatically.
node::> disk sanitize release disk_name

Step 5. Exit from node shell and go to cluster shell.
node::> CTRL + D
Cluster::>>

Step 6. Verify the disk has been properly placed in the spare pool.
cluster::> storage disk show -container-type spare
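A worked end-to-end sequence for a single hypothetical spare disk named 1.0.22 (the disk name and the 3-cycle count are illustrative only; verify the nodeshell syntax on your release before running it):
node::> disk sanitize start -c 3 1.0.22
node::> disk sanitize status 1.0.22
node::> disk sanitize release 1.0.22
cluster::> storage disk show -disk 1.0.22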

By now, the disk is sanitized with no data on it and sits in the hot spare pool, ready to be used.

**At this time, you can use the degauss machine to crush the drive**

**Some of the Industry Standards on how to run disk sanitization or data erasure procedure**

https://kb.netapp.com/app/answers/answer_view/a_id/1034565/~/how-to-use-disk-sanitize-to-meet-department-of-defense-5220.22m-
https://kb.netapp.com/app/answers/answer_view/a_id/1072424/~/how-to-perform-disk-erasure%2C-disk-clearing-and-disk-sanitization-

You are Welcome :)

Tuesday, November 26, 2019

Netapp: How to remove Nodes from multi-node cluster running CDOT with Examples


I am a seasoned IT professional with a background in VMware, Storage, Backup, Unix, and project liaison experience. I have held positions working on technologies like Netapp, EMC, IBM, and Cohesity storage and backup, supporting SAN and NAS environments. I have held roles of IT administrator, engineer, team lead and project liaison. This blog is for Storage and Backup professionals, and its content is derived from vendors as well as my own experience.
............................................................................................................................................................................
I recently performed a cluster unjoin and removed nodes from a multi-node cluster running CDOT 9.3Px without any service disruption and with seamless execution. 

##Visibility to the system is KEY. Stay logged into the SP or console of the nodes during the entire activity; it helps if you run into unforeseen or unpredicted situations.


Prerequisites (example commands follow this list):
  1. Disable storage failover (cf status) between the HA nodes.
  2. Migrate the home node and home port of data LIFs from the nodes in question over to other healthy nodes/ports using net interface modify.
  3. Remove the ports of those nodes from broadcast domains / failover groups, leaving only the ones that will remain after node removal.
  4. Delete intercluster LIFs, and remove the intercluster ports from their corresponding broadcast domain.
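A minimal sketch of these prerequisite commands from the clustershell; the node, SVM, LIF and port names (node1, node2, node3, svm1, data_lif1, ic_lif1, e0c, e0d, Default) are hypothetical examples only:
cluster::> storage failover modify -node node1 -enabled false
cluster::> network interface modify -vserver svm1 -lif data_lif1 -home-node node3 -home-port e0c
cluster::> network interface revert -vserver svm1 -lif data_lif1
cluster::> network port broadcast-domain remove-ports -broadcast-domain Default -ports node2:e0d
cluster::> network interface delete -vserver svm1 -lif ic_lif1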


Actual Steps:

Step 1. 
Log into advanced mode on the cluster:
cluster::> set advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: (Press y)

Step 2. 
Confirm whether the node in question is serving as the master node. If it is, make it ineligible. (Caution: once you have made it ineligible, you will need to reboot the node to make it eligible again, if needed.) This can be checked via the cluster ring status from advanced mode.
cluster*::>cluster ring show

Verify that there is still no lif or data on nodes in question:
net int show -home-node node1

net int show -home-node node2

cluster*::> cluster modify -node node2 -eligibility false
Double-confirm by running the command below to ensure there are no LIF dependencies left except the cluster ports and/or the node management port/LIF. It will complain if something is still left on the nodes being removed.

cluster*::>cluster ring show
(This would list node2 being offline as it has been deemed ineligible. Other node will display as master)
(You run the same for both nodes in HA.)


Node      UnitName  Epoch  DB Epoch  DB Trnxs  Master  Online
--------- --------- ------ --------- --------- ------- ---------
node2     mgmt      0      25        736267            offline
node2     vldb      0      23        1913              offline
node2     vifmgr    0      25        11031             offline
node2     bcomd     0      26        4                 offline
node2     crs       0      23        1                 offline

Step 3.  Verify:
cluster::*>storage failover show
cluster::*>cluster ring show

Step 4. 
Now run actual unjoin action.
cluster::*> cluster unjoin -node node2

(This will display some warnings, but as long as the prerequisite checklist and other items are completed, proceed.)

(At this time, you can wipe the disk data by choosing option 4 of the special boot menu, reached by pressing "Ctrl + C" during the reboot. You will have visibility to the console if you have been accessing the node via console or Service Processor. It is strongly advised that the nodes in question be accessed via the SP in a different session than the one where the cluster unjoin is being executed.)

Step 5. 
Halt the nodes in question by logging into the individual nodes, if applicable. You can uninstall the hardware after completion of the disk initialization (if chosen). Otherwise, it's safe to remove cables and uninstall the hardware.



You are welcome :)


  

Thursday, November 21, 2019

Cohesity: What are chunks, Erasure Coding (EC) and Replication Factor (RF) ?

Learn Storage, Backup, Virtualization,  and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................
If you are in the Cohesity domain, you will hear a lot about data resiliency, fault tolerance and distributed workloads.
  • All of these revolve around the smallest unit of data, which Cohesity calls a Chunk, the form in which data is written to disk.
  • Chunks combined with drive- and node-level redundancy and resiliency result in a highly available, resilient backup solution.

Chunk: The unit of storage data that Cohesity uses for protection. A chunk file can be considered a collection of pieces of data from one or more client objects (files, VMs, etc.) packaged together into a single large unit. Cohesity takes a blob of storage, which can be a collection of one or more client objects, divides it into variable-sized, deduplicated chunks, compresses and encrypts them, and puts them in a chunk file. Usually, chunks from the same large client (user) file are combined into the same chunk file. This happens in most cases when the client file or VM writes are sequential and can be stored together. There may also be several smaller client files that are not large enough to form a single chunk file, in which case chunks from such client files may be packed together to form a chunk file.


A chunk file can be protected using either EC or RF schemes. Cohesity provides configurable resiliency against HDD or node failures. A single large file can be part of several different chunk files and will end up distributed evenly across all the nodes of the cluster, as defined in the Cohesity erasure coding settings.


Replication Factor (RF) refers to the number of replicas of a unit of data. The unit of replication is a chunk file, and a chunk file is mirrored onto either one or two other nodes depending on the Replication Factor chosen. An RF2 mechanism provides resilience against a single data unit failure, and RF3 provides resilience against two data unit failures.


Erasure Coding (EC) refers to a scheme where a number of usable data stripe units can be protected from failures using code stripe units, which are in turn derived from the usable data stripe units. A single code stripe unit can protect against one data (or code) stripe unit failure, and two code stripe units can protect against two data (or code) stripe unit failures. 
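To make the space trade-off concrete, here is a rough, illustrative calculation (the 4+2 EC layout is just an example): with RF2 every chunk file is written twice, so 1 TB of usable data consumes about 2 TB of raw capacity, and RF3 consumes about 3 TB. With an EC layout of 4 data stripe units plus 2 code stripe units, the same 1 TB consumes roughly (4+2)/4 = 1.5 TB of raw capacity while still tolerating two simultaneous stripe unit failures.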




You are Welcome :)


Cohesity: Architecture Concept and Terminology...

Learn Storage, Backup, Virtualization,  and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................
What's up with the Cohesity architecture?
  • Uses the Paxos algorithm for read consistency, a mechanism that ensures a read request returns the most recently written value, which matters especially in a distributed filesystem.
  • Consistent Hashing to spread data across all nodes in a cluster.
  • Data distribution using selected Erasure Coding (EC) or Replication Factor (RF) factor.
  • Strict consistency: non-disruptive upgrades and non-disruptive service delivery in the event of disk or node failures, i.e. strict consistency to support backup, restore, application data consistency and so on.
  • SpanFS is the underlying web-scale, fully distributed file system on which Cohesity's software-defined backup and recovery application "DataProtect" runs. SpanFS is what exposes the NFS, SMB, and S3 interfaces, while it also manages the IO operations for all data written to or read from the system.
  • Distributed Lock Manager manages concurrent access to the data repository and metadata.
  • Data Repository stores the actual client data, such as network files, VMs, and databases, in a deduplicated, compressed, and encrypted form.
  • Metadata Store keeps track of all file data sitting across nodes. The metadata store is based on a distributed key-value design that incorporates a fully redundant, consistent, distributed NoSQL store for fast IO operations at scale.
  • SnapTree is Cohesity's built-in function that provides unlimited, frequent snapshots via a distributed metadata structure based on B+ tree concepts.
  • Data Journaling: The SpanFS file system constantly looks at incoming requests and tries to estimate the IO pattern. The journal absorbs IOs and acts as a write cache whose contents can be committed to disk later, helping keep data crash-consistent. It is part of the metadata and is replicated along with the File Metadata Store.
  • Distributed Metadata Manager: On each node, the underlying SpanFS file system is used to write to the disks. All file data is stored in the Distributed File Data Store, while the Distributed Metadata Manager maintains all metadata.


Pictorial Depiction Below:



You are Welcome :)



Source:https://info.cohesity.com/Cohesity-Fault-Tolerance-White-Paper.html

Wednesday, November 20, 2019

Netapp: How to partition SSD disk/tray to an aggregate that already has partitioned disks

Learn Storage, Backup, Virtualization,  and Cloud. AWS, GCP & AZURE.
............................................................................................................................................................................
Problem Synopsis: I ran into a situation where, while checking the partition size of a spare drive, I saw a discrepancy in the logical partition size of disks newly added to an aggregate made of partitioned disks.

Potential Result: In the event a drive fails in the current aggregate, the spare partitioned drive won't kick in, or even if it does, it may not be in a healthy state.

Solution: Validate the current partition size first, and then, when adding a new tray of SSDs (or individual drives), partition the new disks using that same partition size.

**This applies only when you are adding additional disk/tray to an already existent aggregate**
Step 1: All new drives, or the entire tray of disks, should be assigned to one node. If they get partitioned by default, you need to unpartition the disks first. After the disks are unpartitioned, partition them from the nodeshell.

Step 2: Get the actual raw size of the P3 partition from the root aggregate. Remember, the highest partition number is for ROOT. E.g. P3 is a partition usable only for root; no data aggregate can be created from it even when you have many spare P3 partitions available. (This is what I found a little off, i.e. not being able to use the entire space.)

** Whether to use unpartitioned drives and leave two drives for parity and DP, or to partition the drives and still leave the P3 partitions in the spare pool, is a judgment call. I found that partitioning is still better in terms of total SSD usable capacity than a non-partition-based disk add, once the disks are added to the pool.**

  1. Go to diag mode on a node that already has a (partitioned) root aggregate and gather the raw size of the P3 partition.
  • node> priv set diag
  • node*> raid_config info listdisk XX.XX.XXP3 (locate the raw size)
(output truncated)
rawsize=28246976, used=28239680, rightsize=28244928, ...

Step 3: Now, from the list of all the unpartitioned disks that were just added, pick the one you want to partition and perform the partition.
  • node*> disk partition -n 3 -i 3 -b 28246976 0d.11.0 (n = number of partitions, b = raw size from the current root partition)
  disk partition: 0d.11.0 partitioned successfully 


Step 4: Validate the partitioned disk.
node*> disk show -n

DISK       OWNER                    POOL   SERIAL NUMBER         HOME                    DR HOME

------------ -------------            -----  -------------   -------------           -------------
0d.11.0P1    Not Owned                  NONE   
0d.11.0P2    Not Owned                  NONE   
0d.11.0P3    Not Owned                  NONE   

Step 5: Now assign the partitions to the respective node.
node*> disk assign 0d.11.0P3 -o

This leaves the remaining partitions matching what was previously on the existing partitioned disks.
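If the other partitions also need explicit ownership, they can be assigned the same way; a sketch assuming hypothetical node names node1 and node2 (match the ownership layout of the existing partitioned disks):
node*> disk assign 0d.11.0P1 -o node1
node*> disk assign 0d.11.0P2 -o node2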


You are Welcome :)
Source: www.netapp.com








Thursday, November 14, 2019

SAN CISCO: NPV and NPIV Concepts, Configuration with examples


I am a seasoned IT professional with a background in VMware, Storage, Backup, Unix, and project liaison experience. I have held positions working on technologies like Netapp, EMC, IBM, and Cohesity storage and backup, supporting SAN and NAS environments. I have held roles of IT administrator, engineer, team lead and project liaison. This blog is for Storage and Backup professionals, and its content is derived from vendors as well as my own experience.
............................................................................................................................................................................
Why use NPV, and what is NPIV? Use cases with examples.


·      In fabric mode, each switch that joins a SAN is assigned a domain ID. Each SAN (or VSAN) supports a maximum of 239 domain IDs, so the SAN has a limit of 239 switches.

·      NPV alleviates the domain ID limit by sharing the domain ID of the core switch among multiple edge switches.

·      In NPV mode, the edge switch relays all traffic to the core switch, which provides the Fibre Channel switching capabilities. The edge switch shares the domain ID of the core switch.

·      Server interfaces are F ports on the edge switch that connect to the servers. 

·      A server interface may support multiple end devices by enabling the N port identifier virtualization (NPIV) feature. NPIV provides a means to assign multiple FC IDs to a single N port, which allows the server to assign unique FC IDs to different applications.

·      All interfaces from the edge switch to the core switch are configured as proxy N ports (NP ports). An NP uplink is a connection from an NP port on the edge switch to an F port on the core switch.

EXTERNAL INTERFACE: NP Port (That connects to the Core Switch F-Port and does fabric logins)
SERVER INTERFACE: F Port (That connects to Client hosts)






Enabling NPV

Step 1
Command: switch# configure terminal
         switch(config)#
Purpose: Enters configuration mode.

Step 2
Command: switch(config)# npv enable
Purpose: Enables NPV mode. The switch reboots, and it comes back up in NPV mode.
Note: A write-erase is performed during the initialization.

Step 3
Command: switch(config-npv)# no npv enable
         switch(config)#
Purpose: Disables NPV mode, which results in a reload of the switch.



Configuring NPV Interfaces

After you enable NPV, you should configure the NP uplink interfaces and the server interfaces. 

To configure an NP uplink interface, perform this task:

Step 1
Command: switch# configure terminal
         switch(config)#
Purpose: Enters configuration mode.

Step 2
Command: switch(config)# interface fc slot/port
Purpose: Selects an interface that will be connected to the core NPV switch.

Step 3
Command: switch(config-if)# switchport mode NP
         switch(config-if)# no shutdown
Purpose: Configures the interface as an NP port and brings up the interface.

To configure a server interface, perform this task:

Step 1
Command: switch# configure terminal
         switch(config)#
Purpose: Enters configuration mode.

Step 2
Command: switch(config)# interface { fc slot/port | vfc vfc-id }
Purpose: Selects a server interface.

Step 3
Command: switch(config-if)# switchport mode F
         switch(config-if)# no shutdown
Purpose: Configures the interface as an F port and brings up the interface.
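Putting the pieces together, a minimal end-to-end sketch; the switch names and interface numbers (fc1/1, fc1/10) are examples only, and the core switch is assumed to be running NX-OS:
! Core switch: enable NPIV so the NP uplink can carry multiple fabric logins
core-switch(config)# feature npiv
! Edge switch: enable NPV mode (the switch reboots), then configure the uplink and server ports
edge-switch(config)# npv enable
edge-switch(config)# interface fc1/1
edge-switch(config-if)# switchport mode NP
edge-switch(config-if)# no shutdown
edge-switch(config)# interface fc1/10
edge-switch(config-if)# switchport mode F
edge-switch(config-if)# no shutdown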

NPV Traffic Maps

An NPV traffic map associates one or more NP uplink interfaces with a server interface.

To configure a traffic map, perform this task:
Step 1
Command: switch# config t
         switch(config)#
Purpose: Enters configuration mode on the NPV device.

Step 2
Command: switch(config)# npv traffic-map server-interface { fc slot/port | vfc vfc-id } external-interface fc slot/port
Purpose: Configures a mapping between a server interface (or range of server interfaces) and an NP uplink interface (or range of NP uplink interfaces).

Command: switch(config)# no npv traffic-map server-interface { fc slot/port | vfc vfc-id } external-interface fc slot/port
Purpose: Removes the mapping between the specified server interfaces and NP uplink interfaces.
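For example, to pin server interface fc1/10 to NP uplink fc1/1 (hypothetical port numbers):
switch(config)# npv traffic-map server-interface fc1/10 external-interface fc1/1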






Verifying NPV

Command: switch# show npv flogi-table [ all ]
Purpose: Displays the NPV configuration.

To display a list of devices on a server interface and their assigned NP uplinks, enter the "show npv flogi-table" command.

To display the status of the server interfaces and the NP uplink interfaces, enter the "show npv status" command:
switch# show npv status
npiv is enabled

External Interfaces:
====================
Interface: fc2/1, VSAN: 1, FCID: 0x1c0000, State: Up
Interface: fc2/2, VSAN: 1, FCID: 0x040000, State: Up
Interface: fc2/3, VSAN: 1, FCID: 0x260000, State: Up
Interface: fc2/4, VSAN: 1, FCID: 0x1a0000, State: Up
Number of External Interfaces: 4

Server Interfaces:
==================
Interface: vfc3/1, VSAN: 1, NPIV: No, State: Up

Number of Server Interfaces: 1


Verifying NPV Traffic Management
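A minimal verification sketch, assuming the "show npv traffic-map" command is available on your NX-OS release (it mirrors the traffic-map configuration commands above):
switch# show npv traffic-map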

You are Welcome :)
Source: www.cisco.com