vsim


This article describes how to add virtual disks to the Data ONTAP 8.3 simulator to increase its storage capacity.


Add virtual disk devices to the simulator

  1. Unlock the "diag" system user and assign it a password:
    cluster01::> security login unlock -username diag cluster01::> security login password -username diag Enter a new password: <password> Enter it again: <password>
  2. Set the privilege level to diagnostic:
    cluster01::> set -priv diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y cluster01::*>
  3. Log in to the system shell using the diag user account:
    cluster01::*> systemshell (system node systemshell)Data ONTAP/amd64 (cluster01-01) (pts/2) login: diag Password: Warning: The system shell provides access to low-level diagnostic tools that can cause irreparable damage to the system if not used properly. Use this environment only when directed to do so by support personnel. cluster01-01%
  4. Add the simulator disk tool directory "/sim/bin" to the path:
    cluster01-01% echo $PATH /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/var/home/diag/bin cluster01-01% setenv PATH "${PATH}:/sim/bin" cluster01-01% echo $PATH /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/var/home/diag/bin:/sim/bin
  5. Change to the simulator device directory:
    cluster01-01% cd /sim/dev cluster01-01% ls ,disks ,tapes cluster01-01% ls ,disks/ ,reservations Shelf:DiskShelf14 v0.16:NETAPP__:VD-1000MB-FZ-520:11895900:2104448 v0.17:NETAPP__:VD-1000MB-FZ-520:11895901:2104448 v0.18:NETAPP__:VD-1000MB-FZ-520:11895902:2104448 v0.19:NETAPP__:VD-1000MB-FZ-520:11895903:2104448 v0.20:NETAPP__:VD-1000MB-FZ-520:11895904:2104448 v0.21:NETAPP__:VD-1000MB-FZ-520:11895905:2104448 v0.22:NETAPP__:VD-1000MB-FZ-520:11895906:2104448 v0.24:NETAPP__:VD-1000MB-FZ-520:11895907:2104448 v0.25:NETAPP__:VD-1000MB-FZ-520:11895908:2104448 v0.26:NETAPP__:VD-1000MB-FZ-520:11895909:2104448 v0.27:NETAPP__:VD-1000MB-FZ-520:11895910:2104448 v0.28:NETAPP__:VD-1000MB-FZ-520:11895911:2104448 v0.29:NETAPP__:VD-1000MB-FZ-520:11895912:2104448 v0.32:NETAPP__:VD-1000MB-FZ-520:11895913:2104448 v1.16:NETAPP__:VD-1000MB-FZ-520:14285900:2104448 v1.17:NETAPP__:VD-1000MB-FZ-520:14285901:2104448 v1.18:NETAPP__:VD-1000MB-FZ-520:14285902:2104448 v1.19:NETAPP__:VD-1000MB-FZ-520:14285903:2104448 v1.20:NETAPP__:VD-1000MB-FZ-520:14285904:2104448 v1.21:NETAPP__:VD-1000MB-FZ-520:14285905:2104448 v1.22:NETAPP__:VD-1000MB-FZ-520:14285906:2104448 v1.24:NETAPP__:VD-1000MB-FZ-520:14285907:2104448 v1.25:NETAPP__:VD-1000MB-FZ-520:14285908:2104448 v1.26:NETAPP__:VD-1000MB-FZ-520:14285909:2104448 v1.27:NETAPP__:VD-1000MB-FZ-520:14285910:2104448 v1.28:NETAPP__:VD-1000MB-FZ-520:14285911:2104448 v1.29:NETAPP__:VD-1000MB-FZ-520:14285912:2104448 v1.32:NETAPP__:VD-1000MB-FZ-520:14285913:2104448
    At this point you will see a number of files that represent the simulated disks. Notice that the file names start with "v0." and "v1.": the disks are attached to adapters 0 and 1, and if you count the disk files you will see that there are 14 on each adapter. This mirrors the DS14 shelf topology, with each shelf attached to its own adapter.
  6. Check the available disk types:
    cluster01-01% vsim_makedisks -h
    Usage: /usr/sbin/vsim_makedisks [ -n <num disks> ] [ -t <type> ] [ -e <sequence start> ] [ -a <adapter> ] [ -h ]
    By default 5 disks will be added. The <type> can be one of the following: (NOTE, 0 is the default)
    Type Vendor ID Product ID Usable Size[B] Actual Size[B] Fast Zero BPS RPM
    0 NETAPP__ VD-16MB_________ 16,777,216 38,273,024 No 512 10000
    1 NETAPP__ VD-35MB_________ 35,913,728 57,409,536 No 512 10000
    2 NETAPP__ VD-50MB_________ 52,428,800 73,924,608 No 512 10000
    3 NETAPP__ VD-100MB________ 104,857,600 126,353,408 No 512 10000
    4 NETAPP__ VD-500MB________ 524,288,000 545,783,808 No 512 10000
    5 NETAPP__ VD-1000MB_______ 1,048,576,000 1,070,071,808 No 512 10000
    6 NETAPP__ VD-16MB-FZ______ 16,777,216 38,273,024 Yes 512 15000
    7 NETAPP__ VD-35MB-FZ______ 35,913,728 57,409,536 Yes 512 15000
    8 NETAPP__ VD-50MB-FZ______ 52,428,800 73,924,608 Yes 512 15000
    9 NETAPP__ VD-100MB-FZ_____ 104,857,600 126,353,408 Yes 512 15000
    10 NETAPP__ VD-500MB-FZ_____ 524,288,000 545,783,808 Yes 512 15000
    11 NETAPP__ VD-1000MB-FZ____ 1,048,576,000 1,070,071,808 Yes 512 15000
    12 NETAPP__ VD-16MB-520_____ 16,777,216 38,273,024 No 520 10000
    13 NETAPP__ VD-35MB-520_____ 35,913,728 57,409,536 No 520 10000
    14 NETAPP__ VD-50MB-520_____ 52,428,800 73,924,608 No 520 10000
    15 NETAPP__ VD-100MB-520____ 104,857,600 126,353,408 No 520 10000
    16 NETAPP__ VD-500MB-520____ 524,288,000 545,783,808 No 520 10000
    17 NETAPP__ VD-1000MB-520___ 1,048,576,000 1,070,071,808 No 520 10000
    18 NETAPP__ VD-16MB-FZ-520__ 16,777,216 38,273,024 Yes 520 15000
    19 NETAPP__ VD-35MB-FZ-520__ 35,913,728 57,409,536 Yes 520 15000
    20 NETAPP__ VD-50MB-FZ-520__ 52,428,800 73,924,608 Yes 520 15000
    21 NETAPP__ VD-100MB-FZ-520_ 104,857,600 126,353,408 Yes 520 15000
    22 NETAPP__ VD-500MB-FZ-520_ 524,288,000 545,783,808 Yes 520 15000
    23 NETAPP__ VD-1000MB-FZ-520 1,048,576,000 1,070,071,808 Yes 520 15000
    24 NETAPP__ VD-16MB-FZ-ATA__ 16,777,216 51,388,416 Yes 512 7200
    25 NETAPP__ VD-35MB-FZ-ATA__ 36,700,160 73,801,728 Yes 512 7200
    26 NETAPP__ VD-50MB-FZ-ATA__ 52,428,800 91,496,448 Yes 512 7200
    27 NETAPP__ VD-100MB-FZ-ATA_ 104,857,600 150,478,848 Yes 512 7200
    28 NETAPP__ VD-500MB-FZ-ATA_ 524,288,000 622,338,048 Yes 512 7200
    29 NETAPP__ VD-1000MB-FZ-ATA 1,048,576,000 1,212,162,048 Yes 512 7200
    30 NETAPP__ VD-2000MB-FZ-520 2,097,512,000 2,119,007,808 Yes 520 15000
    31 NETAPP__ VD-4000MB-FZ-520 4,194,304,000 4,215,799,808 Yes 520 15000
    32 NETAPP__ VD-2000MB-FZ-ATA 2,097,512,000 2,391,810,048 Yes 512 7200
    33 NETAPP__ VD-4000MB-FZ-ATA 4,194,304,000 4,751,106,048 Yes 512 7200
    34 NETAPP__ VD-100MB-SS-512_ 104,857,600 126,353,408 Yes 512 15000
    35 NETAPP__ VD-500MB-SS-520_ 524,288,000 545,783,808 Yes 520 15000
    36 NETAPP__ VD-9000MB-FZ-520 9,437,184,000 9,458,679,808 Yes 520 15000
    37 NETAPP__ VD-9000MB-FZ-ATA 9,437,184,000 10,649,346,048 Yes 512 7200
  7. Now add 28 (14×2) 1GB disks to the simulator, 14 on each of the two new adapters (a condensed sketch of steps 4 through 7 follows this list):
    cluster01-01% sudo vsim_makedisks -n 14 -t 23 -a 2 Creating ,disks/v2.16:NETAPP__:VD-1000MB-FZ-520:35383900:2104448 Creating ,disks/v2.17:NETAPP__:VD-1000MB-FZ-520:35383901:2104448 Creating ,disks/v2.18:NETAPP__:VD-1000MB-FZ-520:35383902:2104448 Creating ,disks/v2.19:NETAPP__:VD-1000MB-FZ-520:35383903:2104448 Creating ,disks/v2.20:NETAPP__:VD-1000MB-FZ-520:35383904:2104448 Creating ,disks/v2.21:NETAPP__:VD-1000MB-FZ-520:35383905:2104448 Creating ,disks/v2.22:NETAPP__:VD-1000MB-FZ-520:35383906:2104448 Creating ,disks/v2.24:NETAPP__:VD-1000MB-FZ-520:35383907:2104448 Creating ,disks/v2.25:NETAPP__:VD-1000MB-FZ-520:35383908:2104448 Creating ,disks/v2.26:NETAPP__:VD-1000MB-FZ-520:35383909:2104448 Creating ,disks/v2.27:NETAPP__:VD-1000MB-FZ-520:35383910:2104448 Creating ,disks/v2.28:NETAPP__:VD-1000MB-FZ-520:35383911:2104448 Creating ,disks/v2.29:NETAPP__:VD-1000MB-FZ-520:35383912:2104448 Creating ,disks/v2.32:NETAPP__:VD-1000MB-FZ-520:35383913:2104448 Shelf file Shelf:DiskShelf14 updated cluster01-01% sudo vsim_makedisks -n 14 -t 23 -a 3 Creating ,disks/v3.16:NETAPP__:VD-1000MB-FZ-520:37484400:2104448 Creating ,disks/v3.17:NETAPP__:VD-1000MB-FZ-520:37484401:2104448 Creating ,disks/v3.18:NETAPP__:VD-1000MB-FZ-520:37484402:2104448 Creating ,disks/v3.19:NETAPP__:VD-1000MB-FZ-520:37484403:2104448 Creating ,disks/v3.20:NETAPP__:VD-1000MB-FZ-520:37484404:2104448 Creating ,disks/v3.21:NETAPP__:VD-1000MB-FZ-520:37484405:2104448 Creating ,disks/v3.22:NETAPP__:VD-1000MB-FZ-520:37484406:2104448 Creating ,disks/v3.24:NETAPP__:VD-1000MB-FZ-520:37484407:2104448 Creating ,disks/v3.25:NETAPP__:VD-1000MB-FZ-520:37484408:2104448 Creating ,disks/v3.26:NETAPP__:VD-1000MB-FZ-520:37484409:2104448 Creating ,disks/v3.27:NETAPP__:VD-1000MB-FZ-520:37484410:2104448 Creating ,disks/v3.28:NETAPP__:VD-1000MB-FZ-520:37484411:2104448 Creating ,disks/v3.29:NETAPP__:VD-1000MB-FZ-520:37484512:2104448 Creating ,disks/v3.32:NETAPP__:VD-1000MB-FZ-520:37484513:2104448 Shelf file Shelf:DiskShelf14 updated
  8. Check the results:
    cluster01-01% ls ,disks/ ,reservations Shelf:DiskShelf14 v0.16:NETAPP__:VD-1000MB-FZ-520:11895900:2104448 v0.17:NETAPP__:VD-1000MB-FZ-520:11895901:2104448 v0.18:NETAPP__:VD-1000MB-FZ-520:11895902:2104448 v0.19:NETAPP__:VD-1000MB-FZ-520:11895903:2104448 v0.20:NETAPP__:VD-1000MB-FZ-520:11895904:2104448 v0.21:NETAPP__:VD-1000MB-FZ-520:11895905:2104448 v0.22:NETAPP__:VD-1000MB-FZ-520:11895906:2104448 v0.24:NETAPP__:VD-1000MB-FZ-520:11895907:2104448 v0.25:NETAPP__:VD-1000MB-FZ-520:11895908:2104448 v0.26:NETAPP__:VD-1000MB-FZ-520:11895909:2104448 v0.27:NETAPP__:VD-1000MB-FZ-520:11895910:2104448 v0.28:NETAPP__:VD-1000MB-FZ-520:11895911:2104448 v0.29:NETAPP__:VD-1000MB-FZ-520:11895912:2104448 v0.32:NETAPP__:VD-1000MB-FZ-520:11895913:2104448 v1.16:NETAPP__:VD-1000MB-FZ-520:14285900:2104448 v1.17:NETAPP__:VD-1000MB-FZ-520:14285901:2104448 v1.18:NETAPP__:VD-1000MB-FZ-520:14285902:2104448 v1.19:NETAPP__:VD-1000MB-FZ-520:14285903:2104448 v1.20:NETAPP__:VD-1000MB-FZ-520:14285904:2104448 v1.21:NETAPP__:VD-1000MB-FZ-520:14285905:2104448 v1.22:NETAPP__:VD-1000MB-FZ-520:14285906:2104448 v1.24:NETAPP__:VD-1000MB-FZ-520:14285907:2104448 v1.25:NETAPP__:VD-1000MB-FZ-520:14285908:2104448 v1.26:NETAPP__:VD-1000MB-FZ-520:14285909:2104448 v1.27:NETAPP__:VD-1000MB-FZ-520:14285910:2104448 v1.28:NETAPP__:VD-1000MB-FZ-520:14285911:2104448 v1.29:NETAPP__:VD-1000MB-FZ-520:14285912:2104448 v1.32:NETAPP__:VD-1000MB-FZ-520:14285913:2104448 v2.16:NETAPP__:VD-1000MB-FZ-520:35383900:2104448 v2.17:NETAPP__:VD-1000MB-FZ-520:35383901:2104448 v2.18:NETAPP__:VD-1000MB-FZ-520:35383902:2104448 v2.19:NETAPP__:VD-1000MB-FZ-520:35383903:2104448 v2.20:NETAPP__:VD-1000MB-FZ-520:35383904:2104448 v2.21:NETAPP__:VD-1000MB-FZ-520:35383905:2104448 v2.22:NETAPP__:VD-1000MB-FZ-520:35383906:2104448 v2.24:NETAPP__:VD-1000MB-FZ-520:35383907:2104448 v2.25:NETAPP__:VD-1000MB-FZ-520:35383908:2104448 v2.26:NETAPP__:VD-1000MB-FZ-520:35383909:2104448 v2.27:NETAPP__:VD-1000MB-FZ-520:35383910:2104448 v2.28:NETAPP__:VD-1000MB-FZ-520:35383911:2104448 v2.29:NETAPP__:VD-1000MB-FZ-520:35383912:2104448 v2.32:NETAPP__:VD-1000MB-FZ-520:35383913:2104448 v3.16:NETAPP__:VD-1000MB-FZ-520:37484400:2104448 v3.17:NETAPP__:VD-1000MB-FZ-520:37484401:2104448 v3.18:NETAPP__:VD-1000MB-FZ-520:37484402:2104448 v3.19:NETAPP__:VD-1000MB-FZ-520:37484403:2104448 v3.20:NETAPP__:VD-1000MB-FZ-520:37484404:2104448 v3.21:NETAPP__:VD-1000MB-FZ-520:37484405:2104448 v3.22:NETAPP__:VD-1000MB-FZ-520:37484406:2104448 v3.24:NETAPP__:VD-1000MB-FZ-520:37484407:2104448 v3.25:NETAPP__:VD-1000MB-FZ-520:37484408:2104448 v3.26:NETAPP__:VD-1000MB-FZ-520:37484409:2104448 v3.27:NETAPP__:VD-1000MB-FZ-520:37484410:2104448 v3.28:NETAPP__:VD-1000MB-FZ-520:37484411:2104448 v3.29:NETAPP__:VD-1000MB-FZ-520:37484512:2104448 v3.32:NETAPP__:VD-1000MB-FZ-520:37484513:2104448
  9. Reverse some of the earlier steps and reboot:
    cluster01-01% exit logout cluster01::*> security login lock -username diag cluster01::*> set -priv admin cluster01::> reboot (system node reboot) Warning: Are you sure you want to reboot node “cluster01-01″? {y|n}: y
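
The creation steps above (4 through 7) condense into a short shell sketch. This is only a sketch under this example's assumptions: the /sim paths of the 8.3 simulator, 14 type-23 disks on each of adapters 2 and 3, and an sh-compatible shell (the default systemshell login shell is csh, whose PATH and loop syntax differ, as noted in the comments).

    # Sketch: count existing simulated disks per adapter, then create 14 more
    # 1GB disks (type 23) on adapters 2 and 3. Run inside the systemshell.
    PATH="$PATH:/sim/bin"; export PATH   # csh equivalent: setenv PATH "${PATH}:/sim/bin"
    cd /sim/dev
    ls ,disks | grep '^v' | cut -d. -f1 | sort | uniq -c   # disk files per adapter (v0, v1, ...)
    for a in 2 3; do sudo vsim_makedisks -n 14 -t 23 -a "$a"; done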

Assign the disks to the disk pool

  1. After the reboot, confirm that the 28 (14×2) 1GB disks have been added:
    • Case A) Check the status with the storage disk show command:
      (The disks from "NET-1.29" to "NET-1.56" have been added.)
      cluster01::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 1020MB – 16 FCAL aggregate aggr1 cluster01-01 NET-1.2 1020MB – 17 FCAL aggregate aggr1 cluster01-01 NET-1.3 1020MB – 18 FCAL aggregate aggr1 cluster01-01 NET-1.4 1020MB – 19 FCAL aggregate aggr1 cluster01-01 NET-1.5 1020MB – 20 FCAL aggregate aggr1 cluster01-01 NET-1.6 1020MB – 21 FCAL aggregate aggr1 cluster01-01 NET-1.7 1020MB – 22 FCAL aggregate aggr1 cluster01-01 NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL aggregate aggr0 cluster01-01 NET-1.12 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.13 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.14 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.15 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.16 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.17 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.18 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.19 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.20 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.21 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.22 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.23 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.24 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.25 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.26 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.27 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.29 – – 24 FCAL unassigned – – NET-1.30 – – 25 FCAL unassigned – – NET-1.31 – – 26 FCAL unassigned – – NET-1.32 – – 27 FCAL unassigned – – NET-1.33 – – 28 FCAL unassigned – – NET-1.34 – – 29 FCAL unassigned – – NET-1.35 – – 32 FCAL unassigned – – NET-1.36 – – 16 FCAL unassigned – – NET-1.37 – – 17 FCAL unassigned – – NET-1.38 – – 18 FCAL unassigned – – NET-1.39 – – 19 FCAL unassigned – – NET-1.40 – – 20 FCAL unassigned – – NET-1.41 – – 21 FCAL unassigned – – NET-1.42 – – 22 FCAL unassigned – – NET-1.43 – – 16 FCAL unassigned – – NET-1.44 – – 17 FCAL unassigned – – NET-1.45 – – 18 FCAL unassigned – – NET-1.46 – – 19 FCAL unassigned – – NET-1.47 – – 20 FCAL unassigned – – NET-1.48 – – 21 FCAL unassigned – – NET-1.49 – – 22 FCAL unassigned – – NET-1.50 – – 24 FCAL unassigned – – NET-1.51 – – 25 FCAL unassigned – – NET-1.52 – – 26 FCAL unassigned – – NET-1.53 – – 27 FCAL unassigned – – NET-1.54 – – 28 FCAL unassigned – – NET-1.55 – – 29 FCAL unassigned – – NET-1.56 – – 32 FCAL unassigned – –
    • Case B) Check the status with the node run local disk show -v command:
      (The disks named "v2.*" and "v3.*" have been added.)
      cluster01::> node run local disk show -v DISK OWNER POOL SERIAL NUMBER HOME DR HOME CHKSUM ———— ————- —– ————- ————- ————- ——– v0.16 cluster01-01(4082368511) Pool0 11895900 cluster01-01(4082368511) Block v0.17 cluster01-01(4082368511) Pool0 11895901 cluster01-01(4082368511) Block v0.18 cluster01-01(4082368511) Pool0 11895902 cluster01-01(4082368511) Block v0.19 cluster01-01(4082368511) Pool0 11895903 cluster01-01(4082368511) Block v0.20 cluster01-01(4082368511) Pool0 11895904 cluster01-01(4082368511) Block v0.21 cluster01-01(4082368511) Pool0 11895905 cluster01-01(4082368511) Block v0.22 cluster01-01(4082368511) Pool0 11895906 cluster01-01(4082368511) Block v0.24 cluster01-01(4082368511) Pool0 11895907 cluster01-01(4082368511) Block v0.25 cluster01-01(4082368511) Pool0 11895908 cluster01-01(4082368511) Block v0.26 cluster01-01(4082368511) Pool0 11895909 cluster01-01(4082368511) Block v0.27 cluster01-01(4082368511) Pool0 11895910 cluster01-01(4082368511) Block v0.28 cluster01-01(4082368511) Pool0 11895911 cluster01-01(4082368511) Block v0.29 cluster01-01(4082368511) Pool0 11895912 cluster01-01(4082368511) Block v0.32 cluster01-01(4082368511) Pool0 11895913 cluster01-01(4082368511) Block v1.16 cluster01-01(4082368511) Pool0 11285900 cluster01-01(4082368511) Block v1.17 cluster01-01(4082368511) Pool0 11285901 cluster01-01(4082368511) Block v1.18 cluster01-01(4082368511) Pool0 11285902 cluster01-01(4082368511) Block v1.19 cluster01-01(4082368511) Pool0 11285903 cluster01-01(4082368511) Block v1.20 cluster01-01(4082368511) Pool0 11285904 cluster01-01(4082368511) Block v1.21 cluster01-01(4082368511) Pool0 11285905 cluster01-01(4082368511) Block v1.22 cluster01-01(4082368511) Pool0 11285906 cluster01-01(4082368511) Block v1.24 cluster01-01(4082368511) Pool0 11285907 cluster01-01(4082368511) Block v1.25 cluster01-01(4082368511) Pool0 11285908 cluster01-01(4082368511) Block v1.26 cluster01-01(4082368511) Pool0 11285909 cluster01-01(4082368511) Block v1.27 cluster01-01(4082368511) Pool0 11285910 cluster01-01(4082368511) Block v1.28 cluster01-01(4082368511) Pool0 11285911 cluster01-01(4082368511) Block v1.29 cluster01-01(4082368511) Pool0 11285912 cluster01-01(4082368511) Block v1.32 cluster01-01(4082368511) Pool0 11285913 cluster01-01(4082368511) Block v2.16 Not Owned NONE 11383900 Block v2.17 Not Owned NONE 11383901 Block v2.18 Not Owned NONE 11383902 Block v2.19 Not Owned NONE 11383903 Block v2.20 Not Owned NONE 11383904 Block v2.21 Not Owned NONE 11383905 Block v2.22 Not Owned NONE 11383906 Block v2.24 Not Owned NONE 11383907 Block v2.25 Not Owned NONE 11383908 Block v2.26 Not Owned NONE 11383909 Block v2.27 Not Owned NONE 11383910 Block v2.28 Not Owned NONE 11383911 Block v2.29 Not Owned NONE 11383912 Block v2.32 Not Owned NONE 11383913 Block v3.16 Not Owned NONE 11484400 Block v3.17 Not Owned NONE 11484401 Block v3.18 Not Owned NONE 11484402 Block v3.19 Not Owned NONE 11484403 Block v3.20 Not Owned NONE 11484404 Block v3.21 Not Owned NONE 11484405 Block v3.22 Not Owned NONE 11484406 Block v3.24 Not Owned NONE 11484407 Block v3.25 Not Owned NONE 11484408 Block v3.26 Not Owned NONE 11484409 Block v3.27 Not Owned NONE 11484410 Block v3.28 Not Owned NONE 11484411 Block v3.29 Not Owned NONE 11484512 Block v3.32 Not Owned NONE 11484513 Block
  2. Assign the disks to a node and check the result.
    This example uses two different ways to assign them; a quick verification sketch follows this list.
    • Case A) Assign the disks with the node run local disk assign command:
      First, assign the disks named "v2.*".
      cluster01::> node run local disk assign v2.* -o cluster01-01
      After assigning, check the disk status with the node run local disk show -v command.
      (The disks "v2.*" have been assigned.)
      cluster01::> node run local disk show -v DISK OWNER POOL SERIAL NUMBER HOME DR HOME CHKSUM ———— ————- —– ————- ————- ————- ——– v0.16 cluster01-01(4082368511) Pool0 11895900 cluster01-01(4082368511) Block v0.17 cluster01-01(4082368511) Pool0 11895901 cluster01-01(4082368511) Block v0.18 cluster01-01(4082368511) Pool0 11895902 cluster01-01(4082368511) Block v0.19 cluster01-01(4082368511) Pool0 11895903 cluster01-01(4082368511) Block v0.20 cluster01-01(4082368511) Pool0 11895904 cluster01-01(4082368511) Block v0.21 cluster01-01(4082368511) Pool0 11895905 cluster01-01(4082368511) Block v0.22 cluster01-01(4082368511) Pool0 11895906 cluster01-01(4082368511) Block v0.24 cluster01-01(4082368511) Pool0 11895907 cluster01-01(4082368511) Block v0.25 cluster01-01(4082368511) Pool0 11895908 cluster01-01(4082368511) Block v0.26 cluster01-01(4082368511) Pool0 11895909 cluster01-01(4082368511) Block v0.27 cluster01-01(4082368511) Pool0 11895910 cluster01-01(4082368511) Block v0.28 cluster01-01(4082368511) Pool0 11895911 cluster01-01(4082368511) Block v0.29 cluster01-01(4082368511) Pool0 11895912 cluster01-01(4082368511) Block v0.32 cluster01-01(4082368511) Pool0 11895913 cluster01-01(4082368511) Block v1.16 cluster01-01(4082368511) Pool0 11285900 cluster01-01(4082368511) Block v1.17 cluster01-01(4082368511) Pool0 11285901 cluster01-01(4082368511) Block v1.18 cluster01-01(4082368511) Pool0 11285902 cluster01-01(4082368511) Block v1.19 cluster01-01(4082368511) Pool0 11285903 cluster01-01(4082368511) Block v1.20 cluster01-01(4082368511) Pool0 11285904 cluster01-01(4082368511) Block v1.21 cluster01-01(4082368511) Pool0 11285905 cluster01-01(4082368511) Block v1.22 cluster01-01(4082368511) Pool0 11285906 cluster01-01(4082368511) Block v1.24 cluster01-01(4082368511) Pool0 11285907 cluster01-01(4082368511) Block v1.25 cluster01-01(4082368511) Pool0 11285908 cluster01-01(4082368511) Block v1.26 cluster01-01(4082368511) Pool0 11285909 cluster01-01(4082368511) Block v1.27 cluster01-01(4082368511) Pool0 11285910 cluster01-01(4082368511) Block v1.28 cluster01-01(4082368511) Pool0 11285911 cluster01-01(4082368511) Block v1.29 cluster01-01(4082368511) Pool0 11285912 cluster01-01(4082368511) Block v1.32 cluster01-01(4082368511) Pool0 11285913 cluster01-01(4082368511) Block v2.16 cluster01-01(4082368511) Pool0 11383900 cluster01-01(4082368511) Block v2.17 cluster01-01(4082368511) Pool0 11383901 cluster01-01(4082368511) Block v2.18 cluster01-01(4082368511) Pool0 11383902 cluster01-01(4082368511) Block v2.19 cluster01-01(4082368511) Pool0 11383903 cluster01-01(4082368511) Block v2.20 cluster01-01(4082368511) Pool0 11383904 cluster01-01(4082368511) Block v2.21 cluster01-01(4082368511) Pool0 11383905 cluster01-01(4082368511) Block v2.22 cluster01-01(4082368511) Pool0 11383906 cluster01-01(4082368511) Block v2.24 cluster01-01(4082368511) Pool0 11383907 cluster01-01(4082368511) Block v2.25 cluster01-01(4082368511) Pool0 11383908 cluster01-01(4082368511) Block v2.26 cluster01-01(4082368511) Pool0 11383909 cluster01-01(4082368511) Block v2.27 cluster01-01(4082368511) Pool0 11383910 cluster01-01(4082368511) Block v2.28 cluster01-01(4082368511) Pool0 11383911 cluster01-01(4082368511) Block v2.29 cluster01-01(4082368511) Pool0 11383912 cluster01-01(4082368511) Block v2.32 cluster01-01(4082368511) Pool0 11383913 cluster01-01(4082368511) Block v3.16 Not Owned NONE 11484400 Block v3.17 Not Owned NONE 11484401 Block v3.18 Not Owned NONE 11484402 Block v3.19 Not Owned NONE 11484403 Block v3.20 Not Owned NONE 11484404 Block v3.21 Not Owned NONE 11484405 Block v3.22 Not Owned NONE 11484406 Block v3.24 Not Owned NONE 11484407 Block v3.25 Not Owned NONE 11484408 Block v3.26 Not Owned NONE 11484409 Block v3.27 Not Owned NONE 11484410 Block v3.28 Not Owned NONE 11484411 Block v3.29 Not Owned NONE 11484512 Block v3.32 Not Owned NONE 11484513 Block cluster01::>
      Note: Also check the status with the storage disk show command.
      (The disks from "NET-1.43" to "NET-1.56" have been assigned.)
      cluster01::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 1020MB – 16 FCAL aggregate aggr1 cluster01-01 NET-1.2 1020MB – 17 FCAL aggregate aggr1 cluster01-01 NET-1.3 1020MB – 18 FCAL aggregate aggr1 cluster01-01 NET-1.4 1020MB – 19 FCAL aggregate aggr1 cluster01-01 NET-1.5 1020MB – 20 FCAL aggregate aggr1 cluster01-01 NET-1.6 1020MB – 21 FCAL aggregate aggr1 cluster01-01 NET-1.7 1020MB – 22 FCAL aggregate aggr1 cluster01-01 NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL aggregate aggr0 cluster01-01 NET-1.12 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.13 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.14 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.15 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.16 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.17 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.18 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.19 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.20 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.21 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.22 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.23 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.24 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.25 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.26 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.27 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.29 – – 24 FCAL unassigned – – NET-1.30 – – 25 FCAL unassigned – – NET-1.31 – – 26 FCAL unassigned – – NET-1.32 – – 27 FCAL unassigned – – NET-1.33 – – 28 FCAL unassigned – – NET-1.34 – – 29 FCAL unassigned – – NET-1.35 – – 32 FCAL unassigned – – NET-1.36 – – 16 FCAL unassigned – – NET-1.37 – – 17 FCAL unassigned – – NET-1.38 – – 18 FCAL unassigned – – NET-1.39 – – 19 FCAL unassigned – – NET-1.40 – – 20 FCAL unassigned – – NET-1.41 – – 21 FCAL unassigned – – NET-1.42 – – 22 FCAL unassigned – – NET-1.43 1020MB – 16 FCAL spare Pool0 cluster01-01 NET-1.44 1020MB – 17 FCAL spare Pool0 cluster01-01 NET-1.45 1020MB – 18 FCAL spare Pool0 cluster01-01 NET-1.46 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.47 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.48 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.49 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.50 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.51 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.52 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.53 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.54 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.55 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.56 1020MB – 32 FCAL spare Pool0 cluster01-01 56 entries were displayed. cluster01::>
    • Case B) Assign the disks with the storage disk assign command:
      Next, assign all of the remaining unassigned disks.
      cluster01::> storage disk assign -all -node cluster01-01
      After assigning, check the disk status with the storage disk show command.
      (The disks from "NET-1.29" to "NET-1.42" have been assigned.)
      cluster01::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 1020MB – 16 FCAL aggregate aggr1 cluster01-01 NET-1.2 1020MB – 17 FCAL aggregate aggr1 cluster01-01 NET-1.3 1020MB – 18 FCAL aggregate aggr1 cluster01-01 NET-1.4 1020MB – 19 FCAL aggregate aggr1 cluster01-01 NET-1.5 1020MB – 20 FCAL aggregate aggr1 cluster01-01 NET-1.6 1020MB – 21 FCAL aggregate aggr1 cluster01-01 NET-1.7 1020MB – 22 FCAL aggregate aggr1 cluster01-01 NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL aggregate aggr0 cluster01-01 NET-1.12 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.13 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.14 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.15 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.16 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.17 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.18 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.19 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.20 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.21 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.22 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.23 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.24 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.25 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.26 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.27 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.29 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.30 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.31 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.32 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.33 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.34 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.35 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.36 1020MB – 16 FCAL spare Pool0 cluster01-01 NET-1.37 1020MB – 17 FCAL spare Pool0 cluster01-01 NET-1.38 1020MB – 18 FCAL spare Pool0 cluster01-01 NET-1.39 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.40 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.41 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.42 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.43 1020MB – 16 FCAL spare Pool0 cluster01-01 NET-1.44 1020MB – 17 FCAL spare Pool0 cluster01-01 NET-1.45 1020MB – 18 FCAL spare Pool0 cluster01-01 NET-1.46 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.47 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.48 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.49 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.50 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.51 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.52 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.53 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.54 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.55 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.56 1020MB – 32 FCAL spare Pool0 cluster01-01
      Note: Also check the disk status with the node run local disk show -v command.
      (The disks "v3.*" have been assigned.)
      cluster01::> node run local disk show -v DISK OWNER POOL SERIAL NUMBER HOME DR HOME CHKSUM ———— ————- —– ————- ————- ————- ——– v0.16 cluster01-01(4082368511) Pool0 11895900 cluster01-01(4082368511) Block v0.17 cluster01-01(4082368511) Pool0 11895901 cluster01-01(4082368511) Block v0.18 cluster01-01(4082368511) Pool0 11895902 cluster01-01(4082368511) Block v0.19 cluster01-01(4082368511) Pool0 11895903 cluster01-01(4082368511) Block v0.20 cluster01-01(4082368511) Pool0 11895904 cluster01-01(4082368511) Block v0.21 cluster01-01(4082368511) Pool0 11895905 cluster01-01(4082368511) Block v0.22 cluster01-01(4082368511) Pool0 11895906 cluster01-01(4082368511) Block v0.24 cluster01-01(4082368511) Pool0 11895907 cluster01-01(4082368511) Block v0.25 cluster01-01(4082368511) Pool0 11895908 cluster01-01(4082368511) Block v0.26 cluster01-01(4082368511) Pool0 11895909 cluster01-01(4082368511) Block v0.27 cluster01-01(4082368511) Pool0 11895910 cluster01-01(4082368511) Block v0.28 cluster01-01(4082368511) Pool0 11895911 cluster01-01(4082368511) Block v0.29 cluster01-01(4082368511) Pool0 11895912 cluster01-01(4082368511) Block v0.32 cluster01-01(4082368511) Pool0 11895913 cluster01-01(4082368511) Block v1.16 cluster01-01(4082368511) Pool0 11285900 cluster01-01(4082368511) Block v1.17 cluster01-01(4082368511) Pool0 11285901 cluster01-01(4082368511) Block v1.18 cluster01-01(4082368511) Pool0 11285902 cluster01-01(4082368511) Block v1.19 cluster01-01(4082368511) Pool0 11285903 cluster01-01(4082368511) Block v1.20 cluster01-01(4082368511) Pool0 11285904 cluster01-01(4082368511) Block v1.21 cluster01-01(4082368511) Pool0 11285905 cluster01-01(4082368511) Block v1.22 cluster01-01(4082368511) Pool0 11285906 cluster01-01(4082368511) Block v1.24 cluster01-01(4082368511) Pool0 11285907 cluster01-01(4082368511) Block v1.25 cluster01-01(4082368511) Pool0 11285908 cluster01-01(4082368511) Block v1.26 cluster01-01(4082368511) Pool0 11285909 cluster01-01(4082368511) Block v1.27 cluster01-01(4082368511) Pool0 11285910 cluster01-01(4082368511) Block v1.28 cluster01-01(4082368511) Pool0 11285911 cluster01-01(4082368511) Block v1.29 cluster01-01(4082368511) Pool0 11285912 cluster01-01(4082368511) Block v1.32 cluster01-01(4082368511) Pool0 11285913 cluster01-01(4082368511) Block v2.16 cluster01-01(4082368511) Pool0 11383900 cluster01-01(4082368511) Block v2.17 cluster01-01(4082368511) Pool0 11383901 cluster01-01(4082368511) Block v2.18 cluster01-01(4082368511) Pool0 11383902 cluster01-01(4082368511) Block v2.19 cluster01-01(4082368511) Pool0 11383903 cluster01-01(4082368511) Block v2.20 cluster01-01(4082368511) Pool0 11383904 cluster01-01(4082368511) Block v2.21 cluster01-01(4082368511) Pool0 11383905 cluster01-01(4082368511) Block v2.22 cluster01-01(4082368511) Pool0 11383906 cluster01-01(4082368511) Block v2.24 cluster01-01(4082368511) Pool0 11383907 cluster01-01(4082368511) Block v2.25 cluster01-01(4082368511) Pool0 11383908 cluster01-01(4082368511) Block v2.26 cluster01-01(4082368511) Pool0 11383909 cluster01-01(4082368511) Block v2.27 cluster01-01(4082368511) Pool0 11383910 cluster01-01(4082368511) Block v2.28 cluster01-01(4082368511) Pool0 11383911 cluster01-01(4082368511) Block v2.29 cluster01-01(4082368511) Pool0 11383912 cluster01-01(4082368511) Block v2.32 cluster01-01(4082368511) Pool0 11383913 cluster01-01(4082368511) Block v3.16 cluster01-01(4082368511) Pool0 11484400 cluster01-01(4082368511) Block v3.17 cluster01-01(4082368511) Pool0 11484401 cluster01-01(4082368511) Block v3.18 cluster01-01(4082368511) Pool0 11484402 cluster01-01(4082368511) Block v3.19 cluster01-01(4082368511) Pool0 11484403 cluster01-01(4082368511) Block v3.20 cluster01-01(4082368511) Pool0 11484404 cluster01-01(4082368511) Block v3.21 cluster01-01(4082368511) Pool0 11484405 cluster01-01(4082368511) Block v3.22 cluster01-01(4082368511) Pool0 11484406 cluster01-01(4082368511) Block v3.24 cluster01-01(4082368511) Pool0 11484407 cluster01-01(4082368511) Block v3.25 cluster01-01(4082368511) Pool0 11484408 cluster01-01(4082368511) Block v3.26 cluster01-01(4082368511) Pool0 11484409 cluster01-01(4082368511) Block v3.27 cluster01-01(4082368511) Pool0 11484410 cluster01-01(4082368511) Block v3.28 cluster01-01(4082368511) Pool0 11484411 cluster01-01(4082368511) Block v3.29 cluster01-01(4082368511) Pool0 11484512 cluster01-01(4082368511) Block v3.32 cluster01-01(4082368511) Pool0 11484513 cluster01-01(4082368511) Block cluster01::>
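
Whichever method is used, a quick way to confirm that nothing was missed is to filter the disk list by container type. A minimal sketch, assuming the standard -container-type option of storage disk show; the exact output depends on your configuration:

    cluster01::> storage disk show -container-type unassigned
    cluster01::> storage disk show -container-type spare

The first command should report no matching entries once every disk has been assigned; the 28 new disks appear in the second command as spares.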


Extend the Data ONTAP root volume "vol0"

This section describes how to extend the root volume.
After the Data ONTAP simulator setup, the root volume "vol0" has only a small amount of free space, so it is a good idea to extend it.

The steps are summarized as follows:
i. Assign unassigned disks to the pool.
ii. Extend the aggregate by adding a disk from the pool.
iii. Extend the root volume that sits on the aggregate.
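
In command terms, the three steps boil down to the sketch below. The disk name NET-1.11 is taken from this example's spare list, so check your own spares first:

    i.   cluster01::> storage disk assign -all -node cluster01-01
    ii.  cluster01::> aggr add-disk aggr0 -disklist NET-1.11
    iii. cluster01::> vol size -vserver cluster01-01 -volume vol0 -new-size +500M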

The detailed steps are as follows.

  1. Check the disk status with storage disk show before assigning the disks to a pool.
    cluster01::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 – – 16 FCAL unassigned – – NET-1.2 – – 17 FCAL unassigned – – NET-1.3 – – 18 FCAL unassigned – – NET-1.4 – – 19 FCAL unassigned – – NET-1.5 – – 20 FCAL unassigned – – NET-1.6 – – 21 FCAL unassigned – – NET-1.7 – – 22 FCAL unassigned – – NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.12 – – 24 FCAL unassigned – – NET-1.13 – – 25 FCAL unassigned – – NET-1.14 – – 26 FCAL unassigned – – NET-1.15 – – 27 FCAL unassigned – – NET-1.16 – – 28 FCAL unassigned – – NET-1.17 – – 29 FCAL unassigned – – NET-1.18 – – 32 FCAL unassigned – – NET-1.19 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.20 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.21 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.22 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.23 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.24 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.25 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.26 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.27 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 28 entries were displayed. cluster01::>
  2. Assign unassigned disks to a pool with storage disk assign.
    In this example, all unassigned disks are assigned to the pool "Pool0", which is owned by the node "cluster01-01".
    cluster01::> storage disk assign -all -node cluster01-01 cluster01::>
  3. Check the disk status after the assignment.
    cluster01::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 1020MB – 16 FCAL spare Pool0 cluster01-01 NET-1.2 1020MB – 17 FCAL spare Pool0 cluster01-01 NET-1.3 1020MB – 18 FCAL spare Pool0 cluster01-01 NET-1.4 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.5 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.6 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.7 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.12 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.13 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.14 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.15 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.16 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.17 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.18 1020MB – 32 FCAL spare Pool0 cluster01-01 NET-1.19 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.20 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.21 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.22 1020MB – 24 FCAL spare Pool0 cluster01-01 NET-1.23 1020MB – 25 FCAL spare Pool0 cluster01-01 NET-1.24 1020MB – 26 FCAL spare Pool0 cluster01-01 NET-1.25 1020MB – 27 FCAL spare Pool0 cluster01-01 NET-1.26 1020MB – 28 FCAL spare Pool0 cluster01-01 NET-1.27 1020MB – 29 FCAL spare Pool0 cluster01-01 NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 28 entries were displayed. cluster01::>
  4. Check the status of the root volume "vol0" and the aggregate "aggr0".
    In this example, "vol0" has 356.8MB of free space, and "aggr0", which is built from the disks NET-1.{8,9,10}, has 41.81MB of free space.
    cluster01::> volume show Vserver Volume Aggregate State Type Size Available Used% ——— ———— ———— ———- —- ———- ———- —– cluster01-01 vol0 aggr0 online RW 807.3MB 356.8MB 55% cluster01::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status ——— ——– ——— —– ——- —— —————- ———— aggr0 855MB 41.81MB 95% online 1 cluster01-01 raid_dp, normal cluster01::> aggr show -disk Aggregate #disks Disks ——— —— ————————— aggr0 3 NET-1.8, NET-1.9, NET-1.10
  5. Add a spare disk ("NET-1.11") to the aggregate "aggr0":
    cluster01::> aggr add-disk aggr0 -disklist NET-1.11 cluster01::>
  6. Check the aggregate and disk status after adding the disk:
    cluster01::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status ——— ——– ——— —– ——- —— —————- ———— aggr0 1.67GB 896.8MB 48% online 1 cluster01-01 raid_dp, normal cluster01::> aggr show -disk Aggregate #disks Disks ——— —— ————————— aggr0 4 NET-1.8, NET-1.9, NET-1.10, NET-1.11 cluster01::> disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner —————- ———- —– — ——- ———– ——— ——– NET-1.1 1020MB – 16 FCAL spare Pool0 cluster01-01 NET-1.2 1020MB – 17 FCAL spare Pool0 cluster01-01 NET-1.3 1020MB – 18 FCAL spare Pool0 cluster01-01 NET-1.4 1020MB – 19 FCAL spare Pool0 cluster01-01 NET-1.5 1020MB – 20 FCAL spare Pool0 cluster01-01 NET-1.6 1020MB – 21 FCAL spare Pool0 cluster01-01 NET-1.7 1020MB – 22 FCAL spare Pool0 cluster01-01 NET-1.8 1020MB – 16 FCAL aggregate aggr0 cluster01-01 NET-1.9 1020MB – 17 FCAL aggregate aggr0 cluster01-01 NET-1.10 1020MB – 18 FCAL aggregate aggr0 cluster01-01 NET-1.11 1020MB – 19 FCAL aggregate aggr0 cluster01-01 NET-1.12 1020MB – 24 FCAL spare Pool0 cluster01-01 <…snip…> NET-1.28 1020MB – 32 FCAL spare Pool0 cluster01-01 28 entries were displayed.
  7. Extend "vol0" by 500MB:
    cluster01::> vol size -vserver cluster01-01 -volume vol0 -new-size +500M vol size: Volume "cluster01-01:vol0" size set to 1.28g.
  8. Check the volume and aggregate status after resizing:
    cluster01::> vol show Vserver Volume Aggregate State Type Size Available Used% ——— ———— ———— ———- —- ———- ———- —– cluster01-01 vol0 aggr0 online RW 1.28GB 825.3MB 36% cluster01::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status ——— ——– ——— —– ——- —— —————- ———— aggr0 1.67GB 394.3MB 77% online 1 cluster01-01 raid_dp, normal
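
As an extra check, the same numbers can be read per field, or from the nodeshell. A sketch, assuming the -fields option of volume show and the nodeshell df with its human-readable -h flag:

    cluster01::> volume show -vserver cluster01-01 -volume vol0 -fields size,available,percent-used
    cluster01::> node run local df -h vol0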

Add Data ONTAP feature licenses.

  1. Check the current license status:
    cluster01::> system license show Serial Number: 1-80-000008 Owner: cluster01 Package Type Description Expiration —————– ——- ——————— ——————– Base license Cluster Base License – cluster01::> system license status show Package Licensed Method Expiration —————– ————— ——————– Base license – NFS none – CIFS none – iSCSI none – FCP none – SnapRestore none – SnapMirror none – FlexClone none – SnapVault none – SnapLock none – SnapManagerSuite none – SnapProtectApps none – V_StorageAttach none – SnapLock_Enterprise none – Insight_Balance none – 15 entries were displayed.
  2. Add a feature license key (CIFS in this example):
    cluster01::> system license add <CIFS license key> License for package "CIFS" and serial number "1-81-0000000000000004082368511" installed successfully. (1 of 1 added successfully)
  3. Confirm that the license has been installed:
    cluster01::> system license status show Package Licensed Method Expiration —————– ————— ——————– Base license – NFS none – CIFS license – iSCSI none – FCP none – SnapRestore none – SnapMirror none – FlexClone none – SnapVault none – SnapLock none – SnapManagerSuite none – SnapProtectApps none – V_StorageAttach none – SnapLock_Enterprise none – Insight_Balance none – 15 entries were displayed.
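
If several feature licenses need to be installed, system license add should also accept a comma-separated list of keys, so they can be added in one command (a sketch with placeholder keys):

    cluster01::> system license add <NFS license key>,<iSCSI license key>,<FCP license key>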

Add a secondary DNS server to a name service.

  1. Check the current DNS configuration, then add 10.0.0.1 as an additional name server:
    cluster01::> vserver service name-service dns show Name Vserver State Domains Servers ————— ——— ———————————– —————- cluster01 enabled testdomain.local 10.0.0.110 cluster01::> vserver service name-service dns modify -vserver cluster01 -domains testdomain.local -name-servers 10.0.0.1,10.0.0.110
  2. Confirm that the name server has been added:
    cluster01::> vserver service name-service dns show Name Vserver State Domains Servers ————— ——— ———————————– —————- cluster01 enabled testdomain.local 10.0.0.1, 10.0.0.110
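
Name servers are queried in the order listed, so 10.0.0.1 now comes first. To display just the server list, the -fields option can be used (a sketch):

    cluster01::> vserver services name-service dns show -fields name-servers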

Change the time zone and configure time synchronization with NTP.

  1. cluster01::> cluster date show Node Date Time zone ——— ————————- ————————- cluster01-01 5/1/2015 16:56:13 +00:00 Etc/UTC
  2. cluster01::> cluster date modify -timezone Japan
  3. cluster01::> cluster date show Node Date Time zone ——— ————————- ————————- cluster01-01 5/2/2015 01:56:35 +09:00 Japan
  4. cluster01::> cluster time-service ntp server show This table is currently empty.
  5. cluster01::> cluster time-service ntp server create -server time.asia.apple.com cluster01::>
  6. cluster01::> cluster time-service ntp server show Server Version —————————— ——- time.asia.apple.com auto cluster01::> date Node Date Time zone ——— ———————— ————————- cluster01-01 Sat May 02 02:44:01 2015 Japan

Disable AutoSupport.

  1. Check the current AutoSupport state:
    cluster01::> autosupport show Node State From To Mail Hosts ——————— ——— ————- ————- ———- cluster01-01 enable Postmaster – mailhost cluster01::>
  2. Disable AutoSupport and confirm the new state:
    cluster01::> autosupport modify -state disable cluster01::> autosupport show Node State From To Mail Hosts ——————— ——— ————- ————- ———- cluster01-01 disable Postmaster – mailhost cluster01::>
  3. Halt the node:
    cluster01::> system node halt


Initialize Data ONTAP configuration and all virtual disks

  1. Power on the VM and open its console.
    VSIM_20150509_01_000
  2. While the VM is powering on, the "Press Ctrl-C for Boot Menu." message will be shown after a while.
    Press Ctrl-C and wait until the boot menu is displayed.
    ******************************* * * * Press Ctrl-C for Boot Menu. * * * ******************************* ^C Boot Menu will be available.
  3. After a moment, the boot menu will be displayed. Select the fourth item, "Clean configuration and initialize all disks".
    (1) Normal Boot. (2) Boot without /etc/rc. (3) Change password. (4) Clean configuration and initialize all disks. (5) Maintenance mode boot. (6) Update flash from backup config. (7) Install new software first. (8) Reboot node. Selection (1-8)? 4
  4. When the confirmation messages are displayed, enter "y" and press the Enter key.
    Zero disks, reset config and install a new file system?: y This will erase all the data on the disks, are you sure?: y
    After a while, rebooting messages will be displayed as follows.
    Rebooting to finish wipe config request. Waiting for PIDS: 1102. Skipped backing up /var file system to CF. Terminated .

Set up the node and the cluster

  1. After rebooting, the system initialization process starts automatically.
    After a while, node setup also starts automatically.
    System initialization has completed successfully. Welcome to node setup. You can enter the following commands at any time: “help” or “?” – if you want to have a question clarified, “back” – if you want to change previously answered questions, and “exit” or “quit” – if you want to quit the setup wizard. Any changes you made before quitting will be saved. To accept a default or omit a question, do not enter a value. This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter “autosupport modify -support disable” within 24 hours. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, see: http://support.netapp.com/autosupport/ Type yes to confirm and continue (yes): yes Enter the node management interface port [e0c]: e0c Enter the node management IP address: 10.0.0.140 Enter the node management interface netmask: 255.255.255.0 Enter the node management interface default gateway: 10.0.0.1 A node management interface on port e0c with IP address 10.0.0.140 has been created. This node has its management address assigned and is ready for cluster setup.
  2. Log in to the Data ONTAP shell with the "admin" account:
    login: admin ::>
  3. Start up the cluster setup wizard.
    ::> cluster setup Welcome to the cluster setup wizard.
  4. Initially, enter the basic parameters of the cluster setup wizard.
    Do you want to create a new cluster or join an existing cluster? {create, join}: create Do you intend for this node to be used as a single node cluster? {yes, no} [no]: no Will the cluster network be configured to use network switches? [yes]: yes System Defaults: Private cluster network ports [e0a,e0b]. Cluster port MTU values will be set to 1500. Cluster Interface IP addresses will be automatically generated. Do you want to use these defaults? {yes, no} [yes]: yes Enter the cluster administrator’s (username “admin”) password: <password> Enter the password: <password> It can take several minutes to create cluster interfaces…
  5. Step 1 of 5: Create a Cluster.
    Note: This example uses "cluster01" as the cluster name.
    Step 1 of 5: Create a Cluster You can type “back”, “exit”, or “help” at any question. Enter the cluster name: cluster01 Enter the cluster base license key: <license key> Creating cluster cluster01 Starting cluster support services . Cluster cluster01 has been created.
  6. Step 2 of 5: Add Features License Keys.
    This time, skip this step; the feature licenses will be added later.
    Step 2 of 5: Add Feature License Keys You can type “back”, “exit”, or “help” at any question. Enter an additional license key []:
  7. Step 3 of 5: Set up a Vserver (storage virtual machine) for cluster administration.
    Step 3 of 5: Set Up a Vserver for Cluster Administration You can type “back”, “exit”, or “help” at any question. Enter the cluster management interface port [e0d]: e0d Enter the cluster management interface IP address: 10.0.0.145 Enter the cluster management interface netmask: 255.255.255.0 Enter the cluster management interface default gateway [10.0.0.1]: 10.0.0.1 A cluster management interface on port e0d with IP address 10.0.0.145 has been created. You can use this address to connect to and manage the cluster. Enter the DNS domain name: testdomain.local Enter the name server IP address: 10.0.0.110 DNS lookup for the admin Vserver will use the testdomain.local domain.
  8. Step 4 of 5: Configure Storage Failover (SFO)
    This step is skipped automatically because the Data ONTAP simulator is a non-HA system.
    Step 4 of 5: Configure Storage Failover (SFO) You can type “back”, “exit”, or “help” at any question. SFO will not be enabled on a non-HA system.
  9. Step 5 of 5: Set up the node.
    Entering the node management interface parameters can be skipped because these parameters have already been entered.
    Note: This example uses "lab" as the controller location.
    Step 5 of 5: Set up the Node You can type “back”, “exit”, or “help” at any question. Where is the controller located []: lab Enter the node management interface port [e0c]: Enter the node management interface IP address [10.0.0.140]: Enter the node management interface netmask [255.255.255.0]: Enter the node management interface default gateway [10.0.0.1]: The node management interface has been modified to use port e0c with IP address 10.0.0.140 This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter “autosupport modify -support disable” within 24 hours. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, see: http://support.netapp.com/autosupport/ Press enter to continue: Cluster “cluster01” has been created.

Check the cluster status after the setup wizard.

  1. Check the cluster status with cluster show.
    cluster01::> cluster show Node Health Eligibility ——————— ——- ———— cluster01-01 true true cluster01::>


This article shows the steps to get the Data ONTAP simulator virtual machine up and running on VMware ESXi. Beforehand, download the simulator image file (a compressed archive) and the serial keys from the NetApp support site.

Prepare the vSwitch for the Clustered Data ONTAP cluster network

  1. In the ESXi host's [Inventory] view, open the [Configuration] tab and select the [Add Networking] link in the [Networking] menu.
    VSIM_20150509_00_000
  2. When the [Add Network Wizard] dialog opens, select the [Virtual Machine] radio button in the [Connection Types] pane and push the [Next] button.
    VSIM_20150509_00_001
  3. To prepare the vSwitch for the Clustered Data ONTAP cluster network, choose whether to use an existing vSwitch or create a new one.
    This time, an existing vSwitch is used: select the [Use vSwitch0] radio button and push the [Next] button.
    VSIM_20150509_00_002
  4. In the [Port Group Properties] pane, enter a name of your choice in the [Network Label] text box ("Data ONTAP Cluster Network" in this example) and select a VLAN ID in the [VLAN ID] drop-down box (none in this example), then push the [Next] button.
    VSIM_20150509_00_003
  5. Confirm the configuration and push the [Finish] button.
    VSIM_20150509_00_004

Upload the Data ONTAP simulator image file to the datastore

  1. Select the datastore on which the Data ONTAP simulator will run and open the datastore browser. Then push the [Upload files to this datastore] button on the toolbar.
    VSIM_20150509_00_009
  2. Select [Upload File…] from the menu.
    VSIM_20150509_00_010
  3. Select the Data ONTAP simulator archive "vsim_esx-cm.tgz" and confirm that it has been uploaded. In this example, it is uploaded to the root of the datastore.
    VSIM_20150509_00_011

Enable the ESXi SSH shell login feature temporarily

  1. In the ESXi host's [Inventory] view, open the [Configuration] tab and select the [Properties…] link in the [Security Profile] section.
    VSIM_20150509_00_005
  2. When the [Services Properties] dialog opens, select "SSH" from the [Services] list and push the [Options…] button.
    VSIM_20150509_00_006
  3. When the [SSH (TSM-SSH)] dialog opens, select the [Start and stop manually] radio button in the [Startup Policy] pane and push the [Start] button in the [Service Commands] pane.
    VSIM_20150509_00_007
  4. Confirm that "Running" is displayed in the [Status] pane.
    VSIM_20150509_00_008

Uncompress the simulator's image file and concatenate the multiextent VMDK files into a single VMDK file

  1. Log in to the VMware ESXi host via SSH.
  2. Confirm that the multiextent kernel module is not loaded, load it, and verify:
    [root@esxi01:~] vmkload_mod --list | grep multiextent [root@esxi01:~]
    [root@esxi01:~] vmkload_mod multiextent Module multiextent loaded successfully [root@esxi01:~]
    [root@esxi01:~] vmkload_mod --list | grep multiextent multiextent 0 12 [root@esxi01:~]
  3. Confirm that the uploaded archive is on the datastore, then extract it there:
    [root@esxi01:~] ls -la /vmfs/volumes/datastore3/ total 1274880 drwxr-xr-t 1 root root 1540 May 1 10:31 . drwxr-xr-x 1 root root 512 May 1 10:33 .. -r——– 1 root root 1245184 Nov 4 05:19 .fbb.sf -r——– 1 root root 267026432 Nov 4 05:19 .fdc.sf -r——– 1 root root 1179648 Nov 4 05:19 .pb2.sf -r——– 1 root root 268435456 Nov 4 05:19 .pbc.sf -r——– 1 root root 262733824 Nov 4 05:19 .sbc.sf drwx—— 1 root root 280 Nov 4 05:19 .sdd.sf -r——– 1 root root 4194304 Nov 4 05:19 .vh.sf drwxr-xr-x 1 root root 8680 May 1 10:15 2012R2vCenter -rw——- 1 root root 493916424 May 1 10:20 vsim_esx-cm.tgz
    [root@esxi01:~] tar xvf /vmfs/volumes/datastore3/vsim_esx-cm.tgz -C /vmfs/volumes/datastore3 vsim_esx-cm/ vsim_esx-cm/cfcard/ vsim_esx-cm/cfcard/env/ vsim_esx-cm/cfcard/env/env vsim_esx-cm/nvram vsim_esx-cm/DataONTAP.vmdk vsim_esx-cm/DataONTAP-flat.vmdk vsim_esx-cm/DataONTAP-var.vmdk vsim_esx-cm/DataONTAP-var-flat.vmdk vsim_esx-cm/DataONTAP-nvram.vmdk vsim_esx-cm/DataONTAP-nvram-flat.vmdk vsim_esx-cm/DataONTAP-s001.vmdk vsim_esx-cm/DataONTAP-s002.vmdk vsim_esx-cm/DataONTAP.vmx vsim_esx-cm/DataONTAP.vmxf vsim_esx-cm/uml/ vsim_esx-cm/DataONTAP-s003.vmdk vsim_esx-cm/DataONTAP-s004.vmdk vsim_esx-cm/DataONTAP-s005.vmdk <…snip…> vsim_esx-cm/DataONTAP-s120.vmdk vsim_esx-cm/DataONTAP-s121.vmdk vsim_esx-cm/DataONTAP-s122.vmdk vsim_esx-cm/DataONTAP-s123.vmdk vsim_esx-cm/DataONTAP-s124.vmdk vsim_esx-cm/DataONTAP-s125.vmdk vsim_esx-cm/DataONTAP-s126.vmdk vsim_esx-cm/DataONTAP-sim.vmdk [root@esxi01:~]
  4. List the extracted files and examine the multiextent descriptor "DataONTAP-sim.vmdk", which references 126 sparse extents:
    [root@esxi01:~] ls -la /vmfs/volumes/datastore3/vsim_esx-cm total 8681504 drwxr-xr-x 1 54527 30 19600 May 1 14:16 . drwxr-xr-t 1 root root 1960 May 1 14:14 .. -rw-rw-rw- 1 54527 30 2037383168 Dec 1 15:16 DataONTAP-flat.vmdk -rw-rw-rw- 1 54527 30 5101322240 Dec 1 15:15 DataONTAP-nvram-flat.vmdk -rw-rw-rw- 1 54527 30 416 Dec 1 15:15 DataONTAP-nvram.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s001.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s002.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s003.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s004.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s005.vmdk <…snip…> -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s120.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s121.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s122.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s123.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s124.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s125.vmdk -rw-r–r– 1 54527 30 65536 Dec 1 15:16 DataONTAP-s126.vmdk -rw-r–r– 1 54527 30 5326 Dec 1 15:16 DataONTAP-sim.vmdk -rw-rw-rw- 1 54527 30 1616904192 Dec 1 15:15 DataONTAP-var-flat.vmdk -rw-rw-rw- 1 54527 30 414 Dec 1 15:15 DataONTAP-var.vmdk -rw-rw-rw- 1 54527 30 410 Dec 1 15:15 DataONTAP.vmdk -rwxr-xr-x 1 54527 30 1825 Dec 1 15:16 DataONTAP.vmx -rwxrwxrwx 1 54527 30 195 Dec 1 15:15 DataONTAP.vmxf drwxr-xr-x 1 54527 30 420 May 1 14:14 cfcard -rw-rw-rw- 1 54527 30 1 Dec 1 15:15 nvram drwxr-xr-x 1 54527 30 280 Dec 1 15:15 uml
    VSIM_20150509_00_012
    [root@esxi01:~] cat /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk # Disk DescriptorFile version=1 CID=8c1ec616 parentCID=ffffffff createType=”twoGbMaxExtentSparse” # Extent description RW 4192256 SPARSE “DataONTAP-s001.vmdk” RW 4192256 SPARSE “DataONTAP-s002.vmdk” RW 4192256 SPARSE “DataONTAP-s003.vmdk” RW 4192256 SPARSE “DataONTAP-s004.vmdk” RW 4192256 SPARSE “DataONTAP-s005.vmdk” <…snip…> RW 4192256 SPARSE “DataONTAP-s120.vmdk” RW 4192256 SPARSE “DataONTAP-s121.vmdk” RW 4192256 SPARSE “DataONTAP-s122.vmdk” RW 4192256 SPARSE “DataONTAP-s123.vmdk” RW 4192256 SPARSE “DataONTAP-s124.vmdk” RW 4192256 SPARSE “DataONTAP-s125.vmdk” RW 256000 SPARSE “DataONTAP-s126.vmdk” # The Disk Data Base #DDB ddb.virtualHWVersion = “4” ddb.geometry.cylinders = “16383” ddb.geometry.heads = “16” ddb.geometry.sectors = “63” ddb.adapterType = “ide”
  5. Clone the multiextent disk into a single thin-provisioned VMDK with vmkfstools (the whole flow is condensed in a sketch after this list):
    [root@esxi01:~] vmkfstools -i /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim-new.vmdk -d thin Destination disk format: VMFS thin-provisioned Cloning disk ‘/vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk’… Clone: 100% done.
    [root@esxi01:~] ls -la /vmfs/volumes/datastore3/vsim_esx-cm total 8681504 drwxr-xr-x 1 54527 30 19880 May 1 14:21 . drwxr-xr-t 1 root root 1680 May 1 14:19 .. -rw-rw-rw- 1 54527 30 2037383168 Dec 1 15:16 DataONTAP-flat.vmdk -rw-rw-rw- 1 54527 30 5101322240 Dec 1 15:15 DataONTAP-nvram-flat.vmdk -rw-rw-rw- 1 54527 30 416 Dec 1 15:15 DataONTAP-nvram.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s001.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s002.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s003.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s004.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s005.vmdk <…snip…> -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s120.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s121.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s122.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s123.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s124.vmdk -rw-r–r– 1 54527 30 327680 Dec 1 15:16 DataONTAP-s125.vmdk -rw-r–r– 1 54527 30 65536 Dec 1 15:16 DataONTAP-s126.vmdk -rw——- 1 root root 268435456000 May 1 14:20 DataONTAP-sim-new-flat.vmdk -rw——- 1 root root 529 May 1 14:21 DataONTAP-sim-new.vmdk -rw-r–r– 1 54527 30 5326 Dec 1 15:16 DataONTAP-sim.vmdk -rw-rw-rw- 1 54527 30 1616904192 Dec 1 15:15 DataONTAP-var-flat.vmdk -rw-rw-rw- 1 54527 30 414 Dec 1 15:15 DataONTAP-var.vmdk -rw-rw-rw- 1 54527 30 410 Dec 1 15:15 DataONTAP.vmdk -rwxr-xr-x 1 54527 30 1825 Dec 1 15:16 DataONTAP.vmx -rwxrwxrwx 1 54527 30 195 Dec 1 15:15 DataONTAP.vmxf drwxr-xr-x 1 54527 30 420 May 1 14:14 cfcard -rw-rw-rw- 1 54527 30 1 Dec 1 15:15 nvram drwxr-xr-x 1 54527 30 280 Dec 1 15:15 uml
  6. Delete the original multiextent disk:
    [root@esxi01:~] vmkfstools -U /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk
    [root@esxi01:~] ls -la /vmfs/volumes/datastore3/vsim_esx-cm total 8552472 drwxr-xr-x 1 54527 30 2100 May 1 14:24 . drwxr-xr-t 1 root root 1680 May 1 14:19 .. -rw-rw-rw- 1 54527 30 2037383168 Dec 1 15:16 DataONTAP-flat.vmdk -rw-rw-rw- 1 54527 30 5101322240 Dec 1 15:15 DataONTAP-nvram-flat.vmdk -rw-rw-rw- 1 54527 30 416 Dec 1 15:15 DataONTAP-nvram.vmdk -rw——- 1 root root 268435456000 May 1 14:20 DataONTAP-sim-new-flat.vmdk -rw——- 1 root root 529 May 1 14:21 DataONTAP-sim-new.vmdk -rw-rw-rw- 1 54527 30 1616904192 Dec 1 15:15 DataONTAP-var-flat.vmdk -rw-rw-rw- 1 54527 30 414 Dec 1 15:15 DataONTAP-var.vmdk -rw-rw-rw- 1 54527 30 410 Dec 1 15:15 DataONTAP.vmdk -rwxr-xr-x 1 54527 30 1825 Dec 1 15:16 DataONTAP.vmx -rwxrwxrwx 1 54527 30 195 Dec 1 15:15 DataONTAP.vmxf drwxr-xr-x 1 54527 30 420 May 1 14:14 cfcard -rw-rw-rw- 1 54527 30 1 Dec 1 15:15 nvram drwxr-xr-x 1 54527 30 280 Dec 1 15:15 uml
  7. Rename the consolidated disk to the original name:
    [root@esxi01:~] vmkfstools -E /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim-new.vmdk /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk
    [root@esxi01:~] ls -la /vmfs/volumes/datastore3/vsim_esx-cm total 8552472 drwxr-xr-x 1 54527 30 2100 May 1 14:25 . drwxr-xr-t 1 root root 1680 May 1 14:19 .. -rw-rw-rw- 1 54527 30 2037383168 Dec 1 15:16 DataONTAP-flat.vmdk -rw-rw-rw- 1 54527 30 5101322240 Dec 1 15:15 DataONTAP-nvram-flat.vmdk -rw-rw-rw- 1 54527 30 416 Dec 1 15:15 DataONTAP-nvram.vmdk -rw——- 1 root root 268435456000 May 1 14:20 DataONTAP-sim-flat.vmdk -rw——- 1 root root 525 May 1 14:25 DataONTAP-sim.vmdk -rw-rw-rw- 1 54527 30 1616904192 Dec 1 15:15 DataONTAP-var-flat.vmdk -rw-rw-rw- 1 54527 30 414 Dec 1 15:15 DataONTAP-var.vmdk -rw-rw-rw- 1 54527 30 410 Dec 1 15:15 DataONTAP.vmdk -rwxr-xr-x 1 54527 30 1825 Dec 1 15:16 DataONTAP.vmx -rwxrwxrwx 1 54527 30 195 Dec 1 15:15 DataONTAP.vmxf drwxr-xr-x 1 54527 30 420 May 1 14:14 cfcard -rw-rw-rw- 1 54527 30 1 Dec 1 15:15 nvram drwxr-xr-x 1 54527 30 280 Dec 1 15:15 uml
    [root@esxi01:~] cat /vmfs/volumes/datastore3/vsim_esx-cm/DataONTAP-sim.vmdk # Disk DescriptorFile version=1 CID=8c1ec616 parentCID=ffffffff isNativeSnapshot=”no” createType=”vmfs” # Extent description RW 524288000 VMFS “DataONTAP-sim-flat.vmdk” # The Disk Data Base #DDB ddb.adapterType = “ide” ddb.deletable = “true” ddb.encoding = “UTF-8″ ddb.geometry.cylinders = “16383” ddb.geometry.heads = “16” ddb.geometry.sectors = “63” ddb.longContentID = “d1c05bea2a9dccdd87158f6cfffffffe” ddb.thinProvisioned = “1” ddb.uuid = “60 00 C2 98 c5 6f bc cc-7a d6 12 b1 24 73 98 b0″ ddb.virtualHWVersion = “4”
  8. Unload the multiextent module:
    [root@esxi01:~] vmkload_mod --list | grep multiextent multiextent 0 12 [root@esxi01:~]
    [root@esxi01:~] vmkload_mod --unload multiextent Module multiextent successfully unloaded [root@esxi01:~]
    [root@esxi01:~] vmkload_mod --list | grep multiextent [root@esxi01:~]
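
As a sanity check before deleting the original, the extent table of the old descriptor accounts for exactly the size of the consolidated flat file: 125 extents of 4192256 sectors plus one of 256000 sectors is 524288000 sectors, and 524288000 × 512 = 268435456000 bytes, the size of DataONTAP-sim-new-flat.vmdk above. The whole consolidation, check included, condenses into the sketch below (run in the ESXi shell; the datastore path is this example's):

    # Sketch: consolidate the simulator's multiextent disk on an ESXi host.
    DS=/vmfs/volumes/datastore3/vsim_esx-cm

    vmkload_mod multiextent   # sparse multiextent disks need this module

    # Sum the extent table; total sectors * 512 should match the flat file size.
    awk '/^RW/ { s += $2 } END { print s * 512, "bytes" }' "$DS/DataONTAP-sim.vmdk"

    vmkfstools -i "$DS/DataONTAP-sim.vmdk" "$DS/DataONTAP-sim-new.vmdk" -d thin   # clone to one thin disk
    vmkfstools -U "$DS/DataONTAP-sim.vmdk"                                        # delete the multiextent original
    vmkfstools -E "$DS/DataONTAP-sim-new.vmdk" "$DS/DataONTAP-sim.vmdk"           # rename back to the original name

    vmkload_mod --unload multiextent   # unload the module again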

Register the simulator in the virtual machine inventory

  1. VSIM_20150509_00_013
  2. VSIM_20150509_00_014
  3. VSIM_20150509_00_015
  4. VSIM_20150509_00_016
  5. VSIM_20150509_00_017

Connect the simulator's vNICs to the vSwitches.

  1. VSIM_20150509_00_018
  2. VSIM_20150509_00_019
  3. VSIM_20150509_00_021
  4. VSIM_20150509_00_020
  5. VSIM_20150509_00_022
  6. VSIM_20150509_00_023
