Clustering Solaris Zones Using ZFS And Sun Cluster 3.2

Posted in Solaris Administration on June 17, 2012.

Below is an example of how to create a highly available Solaris 10 zone using Sun Cluster 3.2. It assumes you already have two servers configured as a cluster using shared SAN storage.
In this example I am using two SunFire v440 servers connected to an EMC Clariion CX3-40 with three shared LUNs. Be sure to read the Clariion documentation on how to properly set up host bindings so that Sun Cluster will be happy with the disks (LUN vs. Array). The HBA is a single-port Emulex LP9802, driven by the native Solaris Leadville driver, also known as “Sun StorageTek SAN Foundation Software”.
Servers used in this example (SunFire v440):

nms01-cl-global
nms02-cl-global
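
Since everything below assumes a working two-node cluster, it is worth confirming that both nodes are online before going any further (clnode is part of the Sun Cluster 3.2 command set):

# clnode status

Both nodes should report a status of Online.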

A quick check of storage via mpathadm:

# mpathadm list LU
        /dev/rdsk/c4t6006016090011D0050F71F4BD485DD11d0s2
                Total Path Count: 4
                Operational Path Count: 4
        /dev/rdsk/c4t6006016090011D00210DEF36D485DD11d0s2
                Total Path Count: 4
                Operational Path Count: 4
        /dev/rdsk/c4t6006016090011D00200DEF36D485DD11d0s2
                Total Path Count: 4
                Operational Path Count: 4

And the HBA information:

# fcinfo hba-port
HBA Port WWN: 10000000c93fff80
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LP9802
        Firmware Version: 1.90a4
        FCode/BIOS Version: 1.40a0
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 20000000c93fff80

Check currently registered data service types:

# clresourcetype list
SUNW.LogicalHostname:2
SUNW.SharedAddress:2

Register the SUNW.HAStoragePlus and SUNW.gds resource types:

# clresourcetype register SUNW.gds:6
# clresourcetype register SUNW.HAStoragePlus

Check again to make sure all required data services have been registered:

# clresourcetype list
SUNW.LogicalHostname:2
SUNW.SharedAddress:2
SUNW.gds:6
SUNW.HAStoragePlus:6

Create an empty resource group to use for the zone:

# clresourcegroup create -n nms01-cl-global,nms02-cl-global nms-zone01-rg

Create the zpool on a shared LUN:

# zpool create -m none nms-zone01-zpool c4t6006016090011D0050F71F4BD485DD11d0

Create the zone filesystem with its mountpoint set:

# zfs create -o mountpoint=/nms-zone01 nms-zone01-zpool/nms-zone01
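
To double-check the dataset and its mountpoint before moving on:

# zfs list -r nms-zone01-zpool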

Install a zone using zonecfg with the zone root set to /nms-zone01:

I'll let you figure out the specifics for your environment, but a minimal sketch follows.
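
As a rough outline, here is what that zone creation could look like. The network interface (ce0) and address are placeholders for your environment, and autoboot stays false because the cluster, not the zone framework, decides where and when the zone boots:

# chmod 700 /nms-zone01
# zonecfg -z nms-zone01
zonecfg:nms-zone01> create -b
zonecfg:nms-zone01> set zonepath=/nms-zone01
zonecfg:nms-zone01> set autoboot=false
zonecfg:nms-zone01> add net
zonecfg:nms-zone01:net> set address=192.168.10.50/24
zonecfg:nms-zone01:net> set physical=ce0
zonecfg:nms-zone01:net> end
zonecfg:nms-zone01> verify
zonecfg:nms-zone01> commit
zonecfg:nms-zone01> exit
# zoneadm -z nms-zone01 install
# zoneadm -z nms-zone01 boot

After the install and first boot, log in with zlogin -C nms-zone01 to answer the sysid questions, then halt the zone so the cluster can take over managing it.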

Copy /etc/zones/ to other nodes in the cluster:

# rsync -a -v -essh /etc/zones/ nms02-cl-global:/etc/zones/
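
Since the zone's bits live on the shared zpool, only the configuration needed copying; the second node should now report the zone as installed:

# ssh nms02-cl-global zoneadm list -cv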

Create the SUNW.HAStoragePlus resource:

# clrs create -g nms-zone01-rg -t SUNW.HAStoragePlus -x Zpools=nms-zone01-zpool nms-zone01-fs

Bring the new RG online on the current host:

# clresourcegroup online -emM nms-zone01-rg
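
The resource group should now be online on the local node, with HAStoragePlus having imported the zpool; clrg status is a quick way to confirm:

# clrg status nms-zone01-rg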

Create a resource for the zone “nms-zone01”
Utilize the framework provided by the “Sun Cluster HA for Solaris Containers” toolkit. Its sczbt_register script configures a failover zone as a SUNW.gds resource type.
Create the cluster-pfiles directory for the zone:

# mkdir  /nms-zone01/cluster-pfiles

Set aside the sample config file:

# cd /opt/SUNWsczone/sczbt/util
# mv sczbt_config sczbt_config.save

Create sczbt_config.nms-zone01 and populate it with the following (HAS_RS ties the zone resource to the HAStoragePlus resource so the storage is online before the zone boots; SC_NETWORK=false because no LogicalHostname resource is used):

RS=nms-zone01-rs
RG=nms-zone01-rg
PARAMETERDIR=/nms-zone01/cluster-pfiles
SC_NETWORK=false
FAILOVER=true
HAS_RS=nms-zone01-fs
Zonename=nms-zone01
Zonebootopt=
Zonebrand=native
Milestone=multi-user-server

Run sczbt_register to configure the zone as a SUNW.gds data service:

# ln -s sczbt_config.nms-zone01 sczbt_config
# ./sczbt_register
sourcing ./sczbt_config
Registration of resource nms-zone01-rs succeeded.
Validation of resource nms-zone01-rs succeeded.

Enable the zone resource:

# clrs enable  -n nms01-cl-global,nms02-cl-global -v nms-zone01-rs
resource nms-zone01-rs marked as enabled

Check out the resource group:

# clrg show  nms-zone01-rg
=== Resource Groups and Resources ===
Resource Group:                                 nms-zone01-rg
RG_Description:                                  NMS Zone01 Resource Group
RG_mode:                                         Failover
RG_state:                                        Managed
Failback:                                        False
Nodelist:                                        nms01-cl-global nms02-cl-global
--- Resources for Group nms-zone01-rg ---
Resource:                                     nms-zone01-fs
Type:                                          SUNW.HAStoragePlus:6
Type_version:                                  6
Group:                                         nms-zone01-rg
R_description:                                 nms-zone01 dependent filesystems
Resource_project_name:                         default
Enabled{nms01-cl-global}:                      True
Enabled{nms02-cl-global}:                      True
Monitored{nms01-cl-global}:                    True
Monitored{nms02-cl-global}:                    True
Resource:                                     nms-zone01-rs
Type:                                          SUNW.gds:6
Type_version:                                  6
Group:                                         nms-zone01-rg
R_description:
Resource_project_name:                         default
Enabled{nms01-cl-global}:                      True
Enabled{nms02-cl-global}:                      True
Monitored{nms01-cl-global}:                    True
Monitored{nms02-cl-global}:                    True

Check out the zone:

# zoneadm list -cv
ID NAME             STATUS     PATH                           BRAND    IP
 0 global           running    /                              native   shared
 6 nms-zone01       running    /nms-zone01                    native   shared

Fail over and fail back:

# clrg switch  -v -n nms02-cl-global nms-zone01-rg
# clrg switch  -v -n nms01-cl-global nms-zone01-rg
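
While the group switches, clrg status from either node shows it going offline on one node and online on the other; once it settles, zoneadm list -cv on the target node should show nms-zone01 running:

# clrg status nms-zone01-rg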

Congratulations! You now have an HA Solaris 10 zone.
