NetApp Data ONTAP 8.3 Simulator Setup 3/4

Extend the Data ONTAP root volume "vol0"

This section describes how to extend the root volume.
After the Data ONTAP simulator setup, the root volume "vol0" has only a little free space, so it is better to extend it.

The steps are summarized below.
i. Assign the unassigned disks to the pool.
ii. Extend the aggregate by adding a disk from the pool.
iii. Extend the root volume on the aggregate.

The detailed steps are as follows.

  1. Check the disk status by storage disk show before assigning the disks to a pool.
    cluster01::> storage disk show
                         Usable           Disk    Container   Container
    Disk                   Size Shelf Bay Type    Type        Name      Owner
    ---------------- ---------- ----- --- ------- ----------- --------- --------
    NET-1.1                   -     -  16 FCAL    unassigned  -         -
    NET-1.2                   -     -  17 FCAL    unassigned  -         -
    NET-1.3                   -     -  18 FCAL    unassigned  -         -
    NET-1.4                   -     -  19 FCAL    unassigned  -         -
    NET-1.5                   -     -  20 FCAL    unassigned  -         -
    NET-1.6                   -     -  21 FCAL    unassigned  -         -
    NET-1.7                   -     -  22 FCAL    unassigned  -         -
    NET-1.8              1020MB     -  16 FCAL    aggregate   aggr0     cluster01-01
    NET-1.9              1020MB     -  17 FCAL    aggregate   aggr0     cluster01-01
    NET-1.10             1020MB     -  18 FCAL    aggregate   aggr0     cluster01-01
    NET-1.11             1020MB     -  19 FCAL    spare       Pool0     cluster01-01
    NET-1.12                  -     -  24 FCAL    unassigned  -         -
    NET-1.13                  -     -  25 FCAL    unassigned  -         -
    NET-1.14                  -     -  26 FCAL    unassigned  -         -
    NET-1.15                  -     -  27 FCAL    unassigned  -         -
    NET-1.16                  -     -  28 FCAL    unassigned  -         -
    NET-1.17                  -     -  29 FCAL    unassigned  -         -
    NET-1.18                  -     -  32 FCAL    unassigned  -         -
    NET-1.19             1020MB     -  20 FCAL    spare       Pool0     cluster01-01
    NET-1.20             1020MB     -  21 FCAL    spare       Pool0     cluster01-01
    NET-1.21             1020MB     -  22 FCAL    spare       Pool0     cluster01-01
    NET-1.22             1020MB     -  24 FCAL    spare       Pool0     cluster01-01
    NET-1.23             1020MB     -  25 FCAL    spare       Pool0     cluster01-01
    NET-1.24             1020MB     -  26 FCAL    spare       Pool0     cluster01-01
    NET-1.25             1020MB     -  27 FCAL    spare       Pool0     cluster01-01
    NET-1.26             1020MB     -  28 FCAL    spare       Pool0     cluster01-01
    NET-1.27             1020MB     -  29 FCAL    spare       Pool0     cluster01-01
    NET-1.28             1020MB     -  32 FCAL    spare       Pool0     cluster01-01
    28 entries were displayed.
    cluster01::>
  2. Assign the unassigned disks to a pool by storage disk assign.
    In this example, all unassigned disks are assigned to the pool "Pool0", which is owned by the node "cluster01-01".
    cluster01::> storage disk assign -all -node cluster01-01
    cluster01::>
  3. Check the disk status after the assignment.
    cluster01::> storage disk show
                         Usable           Disk    Container   Container
    Disk                   Size Shelf Bay Type    Type        Name      Owner
    ---------------- ---------- ----- --- ------- ----------- --------- --------
    NET-1.1              1020MB     -  16 FCAL    spare       Pool0     cluster01-01
    NET-1.2              1020MB     -  17 FCAL    spare       Pool0     cluster01-01
    NET-1.3              1020MB     -  18 FCAL    spare       Pool0     cluster01-01
    NET-1.4              1020MB     -  19 FCAL    spare       Pool0     cluster01-01
    NET-1.5              1020MB     -  20 FCAL    spare       Pool0     cluster01-01
    NET-1.6              1020MB     -  21 FCAL    spare       Pool0     cluster01-01
    NET-1.7              1020MB     -  22 FCAL    spare       Pool0     cluster01-01
    NET-1.8              1020MB     -  16 FCAL    aggregate   aggr0     cluster01-01
    NET-1.9              1020MB     -  17 FCAL    aggregate   aggr0     cluster01-01
    NET-1.10             1020MB     -  18 FCAL    aggregate   aggr0     cluster01-01
    NET-1.11             1020MB     -  19 FCAL    spare       Pool0     cluster01-01
    NET-1.12             1020MB     -  24 FCAL    spare       Pool0     cluster01-01
    NET-1.13             1020MB     -  25 FCAL    spare       Pool0     cluster01-01
    NET-1.14             1020MB     -  26 FCAL    spare       Pool0     cluster01-01
    NET-1.15             1020MB     -  27 FCAL    spare       Pool0     cluster01-01
    NET-1.16             1020MB     -  28 FCAL    spare       Pool0     cluster01-01
    NET-1.17             1020MB     -  29 FCAL    spare       Pool0     cluster01-01
    NET-1.18             1020MB     -  32 FCAL    spare       Pool0     cluster01-01
    NET-1.19             1020MB     -  20 FCAL    spare       Pool0     cluster01-01
    NET-1.20             1020MB     -  21 FCAL    spare       Pool0     cluster01-01
    NET-1.21             1020MB     -  22 FCAL    spare       Pool0     cluster01-01
    NET-1.22             1020MB     -  24 FCAL    spare       Pool0     cluster01-01
    NET-1.23             1020MB     -  25 FCAL    spare       Pool0     cluster01-01
    NET-1.24             1020MB     -  26 FCAL    spare       Pool0     cluster01-01
    NET-1.25             1020MB     -  27 FCAL    spare       Pool0     cluster01-01
    NET-1.26             1020MB     -  28 FCAL    spare       Pool0     cluster01-01
    NET-1.27             1020MB     -  29 FCAL    spare       Pool0     cluster01-01
    NET-1.28             1020MB     -  32 FCAL    spare       Pool0     cluster01-01
    28 entries were displayed.
    cluster01::>
  4. Check the status of the root volume "vol0" and the aggregate "aggr0".
    In this example, "vol0" has 356.8MB of free space, and "aggr0", which consists of the disks NET-1.{8,9,10}, has 41.81MB of free space.
    cluster01::> volume show
    Vserver   Volume       Aggregate    State      Type       Size  Available Used%
    --------- ------------ ------------ ---------- ---- ---------- ---------- -----
    cluster01-01 vol0      aggr0        online     RW      807.3MB    356.8MB   55%
    cluster01::> aggr show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    aggr0        855MB   41.81MB   95% online       1 cluster01-01     raid_dp,
                                                                       normal
    cluster01::> aggr show -disk
    Aggregate #disks Disks
    --------- ------ ---------------------------
    aggr0          3 NET-1.8, NET-1.9, NET-1.10
  5. Extend the aggregate "aggr0" by adding the spare disk NET-1.11 with aggr add-disk.
    cluster01::> aggr add-disk aggr0 -disklist NET-1.11
    cluster01::>
  6. Check the aggregate and disk status after adding the disk.
    cluster01::> aggr show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    aggr0       1.67GB   896.8MB   48% online       1 cluster01-01     raid_dp,
                                                                       normal
    cluster01::> aggr show -disk
    Aggregate #disks Disks
    --------- ------ ---------------------------
    aggr0          4 NET-1.8, NET-1.9, NET-1.10, NET-1.11
    cluster01::> disk show
                         Usable           Disk    Container   Container
    Disk                   Size Shelf Bay Type    Type        Name      Owner
    ---------------- ---------- ----- --- ------- ----------- --------- --------
    NET-1.1              1020MB     -  16 FCAL    spare       Pool0     cluster01-01
    NET-1.2              1020MB     -  17 FCAL    spare       Pool0     cluster01-01
    NET-1.3              1020MB     -  18 FCAL    spare       Pool0     cluster01-01
    NET-1.4              1020MB     -  19 FCAL    spare       Pool0     cluster01-01
    NET-1.5              1020MB     -  20 FCAL    spare       Pool0     cluster01-01
    NET-1.6              1020MB     -  21 FCAL    spare       Pool0     cluster01-01
    NET-1.7              1020MB     -  22 FCAL    spare       Pool0     cluster01-01
    NET-1.8              1020MB     -  16 FCAL    aggregate   aggr0     cluster01-01
    NET-1.9              1020MB     -  17 FCAL    aggregate   aggr0     cluster01-01
    NET-1.10             1020MB     -  18 FCAL    aggregate   aggr0     cluster01-01
    NET-1.11             1020MB     -  19 FCAL    aggregate   aggr0     cluster01-01
    NET-1.12             1020MB     -  24 FCAL    spare       Pool0     cluster01-01
    <...snip...>
    NET-1.28             1020MB     -  32 FCAL    spare       Pool0     cluster01-01
    28 entries were displayed.
  7. Extend the root volume "vol0" by 500MB with vol size.
    cluster01::> vol size -vserver cluster01-01 -volume vol0 -new-size +500M
    vol size: Volume "cluster01-01:vol0" size set to 1.28g.
  8. Check the volume and aggregate status after resizing.
    cluster01::> vol show
    Vserver   Volume       Aggregate    State      Type       Size  Available Used%
    --------- ------------ ------------ ---------- ---- ---------- ---------- -----
    cluster01-01 vol0      aggr0        online     RW       1.28GB    825.3MB   36%
    cluster01::> aggr show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    aggr0       1.67GB   394.3MB   77% online       1 cluster01-01     raid_dp,
                                                                       normal
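The eight steps above boil down to three state-changing commands. As a summary, here is a small Python sketch that assembles them for the example cluster; the function and its parameters are illustrative helpers, not an ONTAP API.

```python
# Illustrative helper (not part of ONTAP): builds the three clustershell
# commands that this section runs, in order, for the example cluster.
def extend_root_volume_cmds(node, aggr, disk, vserver, volume, grow_by):
    return [
        # Step 2: claim all unowned disks for the node.
        f"storage disk assign -all -node {node}",
        # Step 5: grow the aggregate with one spare disk.
        f"aggr add-disk {aggr} -disklist {disk}",
        # Step 7: grow the root volume by the requested amount.
        f"vol size -vserver {vserver} -volume {volume} -new-size +{grow_by}",
    ]

for cmd in extend_root_volume_cmds("cluster01-01", "aggr0", "NET-1.11",
                                   "cluster01-01", "vol0", "500M"):
    print(cmd)
```

The commands could then be run over SSH against the simulator's management LIF, or pasted into the clustershell one at a time as shown above.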

Add Data ONTAP feature licenses.

  1. Check the current license status by system license show and system license status show.
    cluster01::> system license show

    Serial Number: 1-80-000008
    Owner: cluster01
    Package             Type    Description           Expiration
    ------------------- ------- --------------------- --------------------
    Base                license Cluster Base License  -

    cluster01::> system license status show
    Package             Licensed Method Expiration
    ------------------- --------------- --------------------
    Base                license         -
    NFS                 none            -
    CIFS                none            -
    iSCSI               none            -
    FCP                 none            -
    SnapRestore         none            -
    SnapMirror          none            -
    FlexClone           none            -
    SnapVault           none            -
    SnapLock            none            -
    SnapManagerSuite    none            -
    SnapProtectApps     none            -
    V_StorageAttach     none            -
    SnapLock_Enterprise none            -
    Insight_Balance     none            -
    15 entries were displayed.
  2. Add a license key by system license add.
    cluster01::> system license add <CIFS license key>

    License for package "CIFS" and serial number "1-81-0000000000000004082368511" installed successfully.
    (1 of 1 added successfully)
  3. Check the license status after adding the key.
    cluster01::> system license status show
    Package             Licensed Method Expiration
    ------------------- --------------- --------------------
    Base                license         -
    NFS                 none            -
    CIFS                license         -
    iSCSI               none            -
    FCP                 none            -
    SnapRestore         none            -
    SnapMirror          none            -
    FlexClone           none            -
    SnapVault           none            -
    SnapLock            none            -
    SnapManagerSuite    none            -
    SnapProtectApps     none            -
    V_StorageAttach     none            -
    SnapLock_Enterprise none            -
    Insight_Balance     none            -
    15 entries were displayed.
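The fixed-width output of system license status show is easy to post-process if you script license checks. A minimal sketch, using an abbreviated sample of the listing above (the helper function is illustrative, not part of ONTAP):

```python
# Abbreviated sample of "system license status show" output.
sample = """\
Package           Licensed Method Expiration
----------------- --------------- --------------------
Base              license         -
NFS               none            -
CIFS              license         -
iSCSI             none            -
"""

def licensed_packages(text):
    """Return the package names whose Licensed Method is 'license'."""
    pkgs = []
    for line in text.splitlines()[2:]:   # skip the two header lines
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "license":
            pkgs.append(fields[0])
    return pkgs

print(licensed_packages(sample))  # → ['Base', 'CIFS']
```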

Add a secondary DNS server to the name service.

  1. Check the current DNS configuration, then add 10.0.0.1 as a name server by vserver service name-service dns modify.
    cluster01::> vserver service name-service dns show
                                                        Name
    Vserver   State     Domains                         Servers
    --------- --------- ------------------------------- ----------------
    cluster01 enabled   testdomain.local                10.0.0.110
    cluster01::> vserver service name-service dns modify -vserver cluster01 -domains testdomain.local -name-servers 10.0.0.1 10.0.0.110
  2. Check the DNS configuration after the change.
    cluster01::> vserver service name-service dns show
                                                        Name
    Vserver   State     Domains                         Servers
    --------- --------- ------------------------------- ----------------
    cluster01 enabled   testdomain.local                10.0.0.1, 10.0.0.110
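Note that -name-servers replaces the entire server list, which is why the existing server 10.0.0.110 is repeated in the modify command. A hypothetical helper that composes such a command string from a Python list:

```python
# Illustrative only: compose the dns modify command from a server list.
# ONTAP's -name-servers option replaces the whole list, so every server
# that should remain must be included, not just the new one.
def dns_modify_cmd(vserver, domains, servers):
    return ("vserver service name-service dns modify "
            f"-vserver {vserver} -domains {domains} "
            f"-name-servers {' '.join(servers)}")

print(dns_modify_cmd("cluster01", "testdomain.local",
                     ["10.0.0.1", "10.0.0.110"]))
```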

Change the timezone and configure time synchronization with NTP.

  1. Check the current date and timezone by cluster date show.
    cluster01::> cluster date show
    Node         Date                      Time zone
    ------------ ------------------------- -------------------------
    cluster01-01 5/1/2015 16:56:13 +00:00  Etc/UTC
  2. Change the timezone to Japan by cluster date modify.
    cluster01::> cluster date modify -timezone Japan
  3. Check the date after the timezone change.
    cluster01::> cluster date show
    Node         Date                      Time zone
    ------------ ------------------------- -------------------------
    cluster01-01 5/2/2015 01:56:35 +09:00  Japan
  4. Check the NTP server configuration; no server is registered yet.
    cluster01::> cluster time-service ntp server show
    This table is currently empty.
  5. Register an NTP server by cluster time-service ntp server create.
    cluster01::> cluster time-service ntp server create -server time.asia.apple.com
    cluster01::>
  6. Check the NTP server configuration and the current time.
    cluster01::> cluster time-service ntp server show
    Server                         Version
    ------------------------------ -------
    time.asia.apple.com            auto
    cluster01::> date
    Node         Date                     Time zone
    ------------ ------------------------ -------------------------
    cluster01-01 Sat May 02 02:44:01 2015 Japan
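The jump from 5/1 16:56 UTC to 5/2 01:56 between steps 1 and 3 is simply the UTC+9 offset of the Japan zone (which has no DST); the same conversion can be reproduced with Python's zoneinfo:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The wall clock shown by "cluster date show" before the timezone change...
utc_time = datetime(2015, 5, 1, 16, 56, 13, tzinfo=timezone.utc)
# ...and the same instant rendered in the Japan zone (UTC+9, no DST).
japan_time = utc_time.astimezone(ZoneInfo("Japan"))
print(japan_time.isoformat())  # → 2015-05-02T01:56:13+09:00
```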

Disable the AutoSupport feature.

  1. Check the current AutoSupport status by autosupport show.
    cluster01::> autosupport show
    Node                  State     From          To            Mail Hosts
    --------------------- --------- ------------- ------------- ----------
    cluster01-01          enable    Postmaster    -             mailhost
    cluster01::>
  2. Disable AutoSupport by autosupport modify, then check the status.
    cluster01::> autosupport modify -state disable
    cluster01::> autosupport show
    Node                  State     From          To            Mail Hosts
    --------------------- --------- ------------- ------------- ----------
    cluster01-01          disable   Postmaster    -             mailhost
    cluster01::>
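For scripted checks, the autosupport show listing can be parsed like the other fixed-width tables in this post. A small sketch, using the disabled state from step 2 as sample input (the parser is illustrative, not an ONTAP tool):

```python
# Sample "autosupport show" output after disabling the feature.
sample = (
    "Node                  State     From          To            Mail Hosts\n"
    "--------------------- --------- ------------- ------------- ----------\n"
    "cluster01-01          disable   Postmaster    -             mailhost\n"
)

def autosupport_states(text):
    """Map each node name to its AutoSupport state, skipping the header."""
    states = {}
    for line in text.splitlines()[2:]:   # skip the two header lines
        fields = line.split()
        if len(fields) >= 2:
            states[fields[0]] = fields[1]
    return states

print(autosupport_states(sample))  # → {'cluster01-01': 'disable'}
```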
Finally, halt the node.

  cluster01::> system node halt
