
dave spink toolset


SOLARIS PROCEDURES:

IPMP PATH_TO_INST LUN DISCOVER NFS
CACHEFS RBAC NIS RECOVERY
iSCSI NDD INTERFACE SVM FILE SYSTEM SVM PROBLEMS
SVM to VERITAS DU AND DF SERVER UPGRADE LIVE UPGRADE
FULL DUPLEX ZONES DISK WIPE NTP
BACKUP ROUTE ALOM & RSC SED WRONG MAGIC
HOSTNAME CHANGE GROW UFS ADDING SWAP EXPORT TEST
SSH SCRIPTS COMMANDS

IPMP

Manually configure IPMP for active and standby.

# ifconfig ge0 plumb 10.70.80.67 netmask 255.255.252.0 broadcast 10.70.83.255 group prod deprecated -failover up
ge0: flags=9040843 mtu 1500 index 8
inet 10.70.80.67 netmask fffffc00 broadcast 10.70.83.255 groupname prod
ether 8:0:20:ff:23:6b

# ifconfig ge0 addif 10.70.80.69 netmask 255.255.252.0 broadcast 10.70.83.255 up
ge0:1: flags=1000843 mtu 1500 index 8
inet 10.70.80.69 netmask fffffc00 broadcast 10.70.83.255

# ifconfig ge1 plumb 10.70.80.68 netmask 255.255.252.0 broadcast 10.70.83.255 group prod deprecated -failover standby up
ge1: flags=69040843 mtu 1500 index 9
inet 10.70.80.68 netmask fffffc00 broadcast 10.70.83.255 groupname prod
ether 8:0:20:ff:20:c4

Automating IPMP with a single active path and standby interface.

# cat /etc/hostname.ge0
10.70.80.67 group prod -failover deprecated up
addif 10.70.80.69 up

# cat /etc/hostname.ge1
10.70.80.68 group prod  -failover deprecated standby up

Configuring IPMP Failover (Multiple Active Paths with Failover)

# ifconfig ge0 plumb 10.70.128.67 group test_single deprecated -failover netmask + broadcast + up
# ifconfig ge0 addif 10.70.128.69 netmask + broadcast + up
# ifconfig ge1 plumb 10.70.128.68 group test_single deprecated -failover netmask + broadcast + up
# ifconfig ge1 addif 10.70.128.70 netmask + broadcast + up
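
To make this configuration persistent across reboots, the equivalent /etc/hostname.* files would look something like the sketch below, following the same pattern as the active/standby example above (using the same test addresses).

# cat /etc/hostname.ge0
10.70.128.67 group test_single deprecated -failover up
addif 10.70.128.69 up

# cat /etc/hostname.ge1
10.70.128.68 group test_single deprecated -failover up
addif 10.70.128.70 up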


PATH_TO_INST

We had a Sun E15K server with 4 dual-ported HBAs (8 paths), yet it was seeing some of the EMC SAN LUNs on only 2 paths. The procedure below demonstrates how to trace a device from the SAN through to the Solaris device tree and explains why the condition occurred. First, we use PowerPath to select a device that should have 8 active paths but only has 2.

Pseudo name=emcpower104a
Symmetrix ID=000190101384
Logical device ID=1E3D
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
1281 pci@1c/QLGC,qla@1         c4t123d44s0 FA  9aB   active  alive      0      0
1280 pci@1c/QLGC,qla@1         c6t122d44s0 FA  8aB   active  alive      0      0

Determine which FAs the device is mapped to.

# sudo symdev -sid 1384 show 1E3D | grep FA
Not Visible              N/A       FA   04A:1  RW   000  00  02C  N/A
Not Visible              N/A       FA   08A:1  RW   000  00  02C  N/A
Not Visible              N/A       FA   09A:1  RW   000  00  02C  N/A
Not Visible              N/A       FA   13A:1  RW   000  00  02C  N/A

Determine the server HBA masking entries for the device.

# symmaskdb -sid 1384 -dev 1e3d list assign
Device  Identifier        Type   Dir:P     
------  ----------------  -----  ----------------     
1E3D    210000e08b0ef0e4  FIBRE  FA-4A:1
        210000e08b0ed4e6  FIBRE  FA-4A:1
        210000e08b0e0de8  FIBRE  FA-8A:1
        210000e08b0eefe4  FIBRE  FA-8A:1
        210100e08b2e0de8  FIBRE  FA-9A:1
        210100e08b2eefe4  FIBRE  FA-9A:1
        210100e08b2ef0e4  FIBRE  FA-13A:1
        210100e08b2ed4e6  FIBRE  FA-13A:1

Prepare a table showing the server's HBA instances, server targets, zone entries, and device tree usage.

hba2 - 210000e08b0ef0e4
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-4A   1    50060482d52de223       120     0(problem)       20             20      20              Y
    FA-8A   0    50060482d52de207       108     91               90             90      90		Y
hba3 - 210100e08b2ef0e4
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-13A  1    50060482d52de22c       121     0(problem)       20		20      20	        Y
    FA-10D  0    5006048ad52de219       109     91               90             90      90      	Y
hba0 - 210000e08b0e0de8
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-8A   1    50060482d52de227       122     21               20             20      20              Y
    FA-8C   1    5006048ad52de227       110     91               90             90      90              Y
hba1 - 210100e08b2e0de8
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-9A   1    50060482d52de228       123     22               20             21      21              Y
    FA-9C   1    5006048ad52de228       111     23(problem)      90             90      90              Y
hba14 - 210000e08b0ed4e6
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-4A   1    50060482d52de223       124     0(problem)       20             20      20              Y
    FA-8A   0    50060482d52de207       112     23(problem)      90             90      90              Y
hba15 - 210100e08b2ed4e6
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-13A  1    50060482d52de22c       125     0(problem)       20             20      20              Y
    FA-10D  0    5006048ad52de219       113     91               90             90      90              Y
hba12 - 210000e08b0eefe4
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-8A   1    50060482d52de227       126     0(problem)       20             20      20              Y
    FA-8C   1    5006048ad52de227       114     91               90             90      90              Y
hba13 - 210100e08b2eefe4
    SymFA   Port Sym FA WWPN            Target  HostDevices      HostWWPN-VCM   FA-Map  Disk-MaskFA     ZoneActive
    FA-9C   1    5006048ad52de228       115     23(problem)      90             90      90              Y
    FA-9A   1    50060482d52de228       127     0(problem)       20             20      20              Y

When the server boots, check the messages file to confirm the HBA driver registers a connection to the disk. This confirms the zones are active, shown by the QLogic driver marking the disk as enabled on all 8 paths in the host messages file.

May 10 16:18:11 uxgfpr02 qla2300: [ID 157811 kern.info] hba0-SCSI-target-id-122-lun-44-enable;
May 10 16:18:12 uxgfpr02 qla2300: [ID 157811 kern.info] hba1-SCSI-target-id-123-lun-44-enable;
May 10 16:18:13 uxgfpr02 qla2300: [ID 157811 kern.info] hba12-SCSI-target-id-126-lun-44-enable;
May 10 16:18:14 uxgfpr02 qla2300: [ID 157811 kern.info] hba13-SCSI-target-id-127-lun-44-enable;
May 10 16:18:15 uxgfpr02 qla2300: [ID 157811 kern.info] hba14-SCSI-target-id-124-lun-44-enable;
May 10 16:18:17 uxgfpr02 qla2300: [ID 157811 kern.info] hba15-SCSI-target-id-125-lun-44-enable;
May 10 16:18:18 uxgfpr02 qla2300: [ID 157811 kern.info] hba2-SCSI-target-id-120-lun-44-enable;
May 10 16:18:19 uxgfpr02 qla2300: [ID 157811 kern.info] hba3-SCSI-target-id-121-lun-44-enable;

Next, extract the HBA instances from the messages file; these are needed to examine the path_to_inst entries.

# grep "QLogic Fibre Channel Driver v4.22 Instance" messages | sort
May 10 14:57:15 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 0
May 10 14:57:16 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 1
May 10 14:57:17 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 12
May 10 14:57:19 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 13
May 10 14:57:24 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 14
May 10 14:57:25 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 15
May 10 14:57:30 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 2
May 10 14:57:31 uxgfpr02 qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v4.22 Instance: 3

Extract those HBA instances used in path_to_inst.

"/pci@1c,600000/QLGC,qla@1" 0 "qla2300"
"/pci@1c,600000/QLGC,qla@1,1" 1 "qla2300"
"/pci@21c,600000/QLGC,qla@1" 2 "qla2300"
"/pci@21c,600000/QLGC,qla@1,1" 3 "qla2300"
"/pci@1dc,600000/QLGC,qla@1" 12 "qla2300"
"/pci@1dc,600000/QLGC,qla@1,1" 13 "qla2300"
"/pci@1fc,600000/QLGC,qla@1" 14 "qla2300"
"/pci@1fc,600000/QLGC,qla@1,1" 15 "qla2300"

For the 2 active paths, show the path_to_inst entry.

uxgfpr02:/export/home/sanadmin> ls -l /dev/rdsk/c4t123d44s2
/dev/rdsk/c4t123d44s2 -> ../../devices/pci@1c,600000/QLGC,qla@1,1/sd@7b,2c:c,raw
uxgfpr02:/export/home/sanadmin> ls -l /dev/rdsk/c6t122d44s2
/dev/rdsk/c6t122d44s2 -> ../../devices/pci@1c,600000/QLGC,qla@1/sd@7a,2c:c,raw

Read the Sun document which explains the system warning messages: http://sunsolve.sun.com/search/document.do?assetkey=1-25-49441-1. See an extract below.

WARNING: sd58589:a minor 0x726e8 too big for 32-bit applications
Solaris cannot support more than 32768 instances each of sd or ssd devices

Extract the entries from path_to_inst showing 2 valid instances (hence 2 paths) and 6 invalid instance numbers. For example, target 120 is zoned to hba2 as per the messages file and zone details above, therefore the hba2 path_to_inst entry is "/pci@21c,600000/QLGC,qla@1". The matching target number 120 is represented in hex as 78 and the device (as per the FA) is hex address 2c. Note the instance number is greater than 32768 and therefore the device tree cannot be built. By working through each target and HBA instance you can extract the path_to_inst numbers. Hence the reason there are only 2 paths for device 1E3D is now explained.
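
A quick way to convert a decimal SCSI target number to the hex value used in the device path (for example, target 120) is printf:

# printf "%x\n" 120
78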

"/pci@21c,600000/QLGC,qla@1" 2 "qla2300"
> grep sd@78,2c path_to_inst  (target 120 - hba2)
"/pci@21c,600000/QLGC,qla@1/sd@78,2c" 40216 "sd"

"/pci@21c,600000/QLGC,qla@1,1" 3 "qla2300"
> grep sd@79,2c path_to_inst (target 121 - hba3)
"/pci@21c,600000/QLGC,qla@1,1/sd@79,2c" 42520 "sd"

"/pci@1c,600000/QLGC,qla@1" 0 "qla2300"
> grep sd@7a,2c path_to_inst (target 122 - hba0)
"/pci@1c,600000/QLGC,qla@1/sd@7a,2c" 28440 "sd"		(valid instance - dev tree built)

"/pci@1c,600000/QLGC,qla@1,1" 1 "qla2300"
> grep sd@7b,2c path_to_inst (target 123 - hba1)
"/pci@1c,600000/QLGC,qla@1,1/sd@7b,2c" 30744 "sd"	(valid instance - dev tree built)

"/pci@1fc,600000/QLGC,qla@1" 14 "qla2300"
> grep sd@7c,2c path_to_inst (target 124 - hba14)
"/pci@1fc,600000/QLGC,qla@1/sd@7c,2c" 37144 "sd"
"/pci@1fc,600000/QLGC,qla@1,1/sd@7c,2c" 39192 "sd"

"/pci@1fc,600000/QLGC,qla@1,1" 15 "qla2300"
> grep sd@7d,2c path_to_inst (target 125 - hba15)
"/pci@1fc,600000/QLGC,qla@1/sd@7d,2c" 37400 "sd"
"/pci@1fc,600000/QLGC,qla@1,1/sd@7d,2c" 39448 "sd"

"/pci@1dc,600000/QLGC,qla@1" 12 "qla2300"
> grep sd@7e,2c path_to_inst (target 126 - hba12)
"/pci@1dc,600000/QLGC,qla@1/sd@7e,2c" 33560 "sd"
"/pci@1dc,600000/QLGC,qla@1,1/sd@7e,2c" 35608 "sd"

"/pci@1dc,600000/QLGC,qla@1,1" 13 "qla2300"
> grep sd@7f,2c path_to_inst (target 127 - hba13)
"/pci@1dc,600000/QLGC,qla@1/sd@7f,2c" 33816 "sd"
"/pci@1dc,600000/QLGC,qla@1,1/sd@7f,2c" 35864 "sd"


LUN DISCOVER

Check the host can see the HBAs.

# fcinfo hba-port				

HBA Port WWN: 21000024ff3e82e2
        OS Device Name: /dev/cfg/c11
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1210022141
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: L-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff3e82e2
HBA Port WWN: 21000024ff3e82e3
        OS Device Name: /dev/cfg/c10
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1210022141
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: L-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff3e82e3
HBA Port WWN: 21000024ff3e82fe
        OS Device Name: /dev/cfg/c12
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1210022044
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: L-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff3e82fe
HBA Port WWN: 21000024ff3e82ff
        OS Device Name: /dev/cfg/c13
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.03.02
        FCode/BIOS Version:  BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1210022044
        Driver Name: qlc
        Driver Version: 20100301-3.00
        Type: L-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff3e82ff

See if the host HBAs are connected to the target.

# luxadm -e port

/devices/pci@400/pci@1/pci@0/pci@c/SUNW,qlc@0/fp@0,0:devctl        CONNECTED
/devices/pci@400/pci@1/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0:devctl      CONNECTED
/devices/pci@500/pci@2/pci@0/pci@a/SUNW,qlc@0/fp@0,0:devctl        CONNECTED
/devices/pci@500/pci@2/pci@0/pci@a/SUNW,qlc@0,1/fp@0,0:devctl      CONNECTED

To force another login; sometimes you may need to block/unblock the port on the switch to get a full FCP login.

# luxadm -e forcelip /devices/pci@400/pci@1/pci@0/pci@c/SUNW,qlc@0/fp@0,0:devctl

Solaris 9 with the SUNW SAN package may need a little help to see disks. First, verify the HBAs can see all disks to confirm the SAN configuration.

# cfgadm -o show_FCP_dev -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             fc-private   connected    configured   unknown
c1::2100000087108f12,0         disk         connected    configured   unknown
c1::500000e01431e431,0         disk         connected    configured   unknown
c4                             fc-fabric    connected    unconfigured unknown
c4::5006016339a029c1,0         disk         connected    unconfigured unknown
c4::5006016339a029c1,1         disk         connected    unconfigured unknown
c4::5006016339a029c1,2         disk         connected    unconfigured unknown
c4::5006016339a029c1,3         disk         connected    unconfigured unknown
c4::5006016339a029c1,4         disk         connected    unconfigured unknown
c5                             fc-fabric    connected    unconfigured unknown
c5::5006016239a029c1,0         disk         connected    unconfigured unknown
c5::5006016239a029c1,1         disk         connected    unconfigured unknown
c5::5006016239a029c1,2         disk         connected    unconfigured unknown
c5::5006016239a029c1,3         disk         connected    unconfigured unknown
c5::5006016239a029c1,4         disk         connected    unconfigured unknown

Review the list of controllers and status.

uxifts01s:/opt/EMLXemlxu/bin # cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             fc-private   connected    configured   unknown
c1::2100000087108f12           disk         connected    configured   unknown
c1::500000e01431e431           disk         connected    configured   unknown
c4                             fc-fabric    connected    unconfigured unknown
c4::5006016339a029c1           disk         connected    unconfigured unknown
c4::5006016b39a029c1           disk         connected    unconfigured unknown
c5                             fc-fabric    connected    unconfigured unknown
c5::5006016239a029c1           disk         connected    unconfigured unknown
c5::5006016a39a029c1           disk         connected    unconfigured unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb0/3                         unknown      empty        unconfigured ok
usb0/4                         unknown      empty        unconfigured ok

Notice how c4 and c5 are not configured? This means the disks behind these controllers are not accessible.

# cfgadm -c configure c4
# cfgadm -c configure c5

Notice how the controllers now show as configured.

Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             fc-private   connected    configured   unknown
c1::2100000087108f12           disk         connected    configured   unknown
c1::500000e01431e431           disk         connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
c4::5006016339a029c1           disk         connected    configured   unknown
c4::5006016b39a029c1           disk         connected    configured   unknown
c5                             fc-fabric    connected    configured   unknown
c5::5006016239a029c1           disk         connected    configured   unknown
c5::5006016a39a029c1           disk         connected    configured   unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb0/3                         unknown      empty        unconfigured ok
usb0/4                         unknown      empty        unconfigured ok

Check disk access again, and you should see all disks are visible.

#  cfgadm -o show_FCP_dev -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             fc-private   connected    configured   unknown
c1::2100000087108f12,0         disk         connected    configured   unknown
c1::500000e01431e431,0         disk         connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
c4::5006016339a029c1,0         disk         connected    configured   unknown
c4::5006016339a029c1,1         disk         connected    configured   unknown
c4::5006016339a029c1,2         disk         connected    configured   unknown
c4::5006016339a029c1,3         disk         connected    configured   unknown
c4::5006016339a029c1,4         disk         connected    configured   unknown
c5                             fc-fabric    connected    configured   unknown
c5::5006016239a029c1,0         disk         connected    configured   unknown
c5::5006016239a029c1,1         disk         connected    configured   unknown
c5::5006016239a029c1,2         disk         connected    configured   unknown
c5::5006016239a029c1,3         disk         connected    configured   unknown
c5::5006016239a029c1,4         disk         connected    configured   unknown

Enabling MPxIO on Solaris 10 SPARC requires a reboot.

# stmsboot -e

# stmsboot -L
non-STMS device name                    STMS device name
------------------------------------------------------------------
/dev/rdsk/c10t21000024FF2D5FE7d0        /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0
/dev/rdsk/c10t21000024FF2D5FE7d1        /dev/rdsk/c0t600144F0A0CE8C1B00004F7366790002d0
/dev/rdsk/c11t21000024FF2D6136d1        /dev/rdsk/c0t600144F0A0CE8C1B00004F7366790002d0
/dev/rdsk/c11t21000024FF2D6136d0        /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0
/dev/rdsk/c12t21000024FF2D5FE6d1        /dev/rdsk/c0t600144F0A0CE8C1B00004F7366790002d0
/dev/rdsk/c12t21000024FF2D5FE6d0        /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0
/dev/rdsk/c13t21000024FF2D6137d1        /dev/rdsk/c0t600144F0A0CE8C1B00004F7366790002d0
/dev/rdsk/c13t21000024FF2D6137d0        /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0

Find information about attached LUNs; in this instance we have MPxIO running.

# luxadm probe

Found Fibre Channel device(s):
  Node WWN:20000024ff2d6136  Device Type:Disk device
    Logical Path:/dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0s2
  Node WWN:20000024ff2d5fe6  Device Type:Disk device
    Logical Path:/dev/rdsk/c0t600144F0A0CE8C1B00004F7366790002d0s2

See status of paths to LUN.

# luxadm display /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0s2

  /dev/rdsk/c0t600144F0A0CE8C1B00004F735ED30001d0s2

  /devices/scsi_vhci/ssd@g600144f0a0ce8c1b00004f735ed30001:c,raw
   Controller           /devices/pci@400/pci@1/pci@0/pci@c/SUNW,qlc@0/fp@0,0
    Device Address              21000024ff2d6136,0
    Host controller port WWN    21000024ff3e82e2
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@400/pci@1/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0
    Device Address              21000024ff2d5fe7,0
    Host controller port WWN    21000024ff3e82e3
    Class                       secondary
    State                       STANDBY
   Controller           /devices/pci@500/pci@2/pci@0/pci@a/SUNW,qlc@0,1/fp@0,0
    Device Address              21000024ff2d6137,0
    Host controller port WWN    21000024ff3e82ff
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@500/pci@2/pci@0/pci@a/SUNW,qlc@0/fp@0,0
    Device Address              21000024ff2d5fe6,0
    Host controller port WWN    21000024ff3e82fe
    Class                       secondary
    State                       STANDBY

For a Solaris host that requires sd.conf entries:

# vi /kernel/drv/sd.conf
# update_drv -f sd
# devfsadm -Cv
# powercf -q
# powermt config
# powermt save
# vxdctl enable
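
For reference, sd.conf entries generally take the form shown below; the target and LUN numbers are illustrative only and depend on how the storage is presented to the host.

name="sd" class="scsi" target=1 lun=1;
name="sd" class="scsi" target=1 lun=2;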

What are the required /etc/system settings on a Solaris 10 SPARC server (using the kernel-embedded Leadville driver)? The /etc/system file must have:

forceload: drv/ssd 
set ssd:ssd_max_throttle=20 
set ssd:ssd_io_time=0x3c 
# If Powerpath or DMP are NOT present then set ssd:ssd_io_time=0x78 
set fcp:fcp_offline_delay=20 
set fcp:ssfcp_enable_auto_configuration=1 

In the case of meta devices (which have more physical devices on the back end and can thus physically process more I/Os in parallel), it may be beneficial to increase the queue depth to 32. It is important to note that in Solaris the sd_max_throttle/ssd_max_throttle settings are global, so all devices, including non-metas, will also be affected.
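
For example, raising the global queue depth for such a configuration would be done in /etc/system (a sketch; remember this affects every ssd device):

set ssd:ssd_max_throttle=32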



NFS

Set the NFS kernel parameters in /etc/system.

# vi /etc/system
* Read ahead operations to queue (default 4)
set nfs:nfs3_nra = 32
set nfs:nfs4_nra = 32
* Max client threads allowed (default 8)
set nfs:nfs3_max_threads = 256
set nfs:nfs4_max_threads = 256
* Block Size for IO (default 32768)
set nfs:nfs3_bsize = 1048576
set nfs:nfs4_bsize = 1048576
set nfs:nfs3_max_transfer_size=1048576
set nfs:nfs4_max_transfer_size=1048576
* Number of RPC connections to NFS Server (default 1)
set rpcmod:clnt_max_conns = 8

General NFS

# mount -F nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,proto=tcp,suid ivan-ar:/export/testnfs /mnt 

Single Instance Solaris Mount Point options:

Datafiles	rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,[forcedirectio or llock],nointr,proto=tcp,suid
Binaries	rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,proto=tcp,suid
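
As a sketch, a matching /etc/vfstab entry for a datafiles mount (the ivan-ar export is from the example above; the /oracle/data mount point is illustrative):

ivan-ar:/export/testnfs - /oracle/data nfs - yes rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,forcedirectio,nointr,proto=tcp,suid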

RMAN Mount point option for Solaris

# mount -F nfs -o rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio ivan-ar:/export/testnfs /mnt


RECOVERY

Root password recovery, for systems without trusted root access.

# init 0
ok: boot cdrom -s
# fsck -y /dev/rdsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /a
# TERM=vt100
# export TERM
# vi /a/etc/shadow
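
In the shadow file, clearing root's encrypted password field is enough to allow a console login without a password; a sketch of the edit (the hash and lastchg values will differ):

before:  root:EnCrYpTeDhAsH:6445::::::
after:   root::6445::::::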

Resolve corrupt /etc/system file.

ok: boot -a
Enter filename of kernel [kernel/sparcv9/unix]
Enter directory /kernel, /usr/kernel, /platform/'uname -m'/kernel
Name of system file [etc/system]
Banner is displayed
root filesystem type [ufs]
Enter physical name of root device [pci@1f,0/ide@d/disk@0,0:a]

Restore root file system from ufsdump backup.

ok: boot cdrom -s
# newfs /dev/rdsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /a
# cd /a
# ufsrestore rf /dev/rmt/0
# rm restoresymtable
# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
# init 6
# ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s0

Restore regular file system from ufsdump.

ok: boot cdrom -s
# newfs /dev/rdsk/c0t0d0s5
# mount /dev/dsk/c0t0d0s5 /mnt
# cd /mnt
# ufsrestore rf /dev/rmt/0
# rm restoresymtable
# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s5
# cd /
# umount /mnt
# fsck /dev/rdsk/c0t0d0s5
# init 6
# ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s5


CACHEFS

Create a cacheFS file system.

# cfsadmin -c /cache/cache0
# mkdir /data
# mount -F cachefs -o backfstype=nfs,cachedir=/cache/cache0,cacheid=data_cache host1:/cdrom /data

Manually invoke a consistency check and determine cacheFS statistics.

# cfsadmin -s /data
# cfsadmin -l /cache/cache0

Determine the size of the cache by creating a log file, assigning it to the cache directory, reviewing the cache working set size, and then disabling logging on this cacheFS directory.

# mkdir /var/cachelogs
# cachefslog -f /var/cachelogs/data.log /data
# cachefslog /data
# cachefswssize /var/cachelogs/data.log
# cachefslog -h /data

To verify cacheFS system integrity.

# umount /data
# fsck -F cachefs -o noclean /cache/cache0
# mount -F cachefs -o backfstype=nfs,cachedir=/cache/cache0,cacheid=data_cache host1:/cdrom /data

To dismantle cacheFS.

# cfsadmin -l /cache/cache0
# umount /data
# cfsadmin -d data_cache /cache/cache0
# fsck -F cachefs -o noclean /cache/cache0


RBAC

Create role.

# roleadd -u 1000 -g 10 -d /export/home/shutusr -m shutusr
# passwd -r files shutusr
# cat /etc/user_attr
root::::type=normal;auths=solaris.*,solaris.grant;profiles=All
shutusr::::type=role;profiles=All

Add entry to default profile file.

# vi /etc/security/prof_attr
Shut:::Able to shutdown the system;help=howto.html

Add the newly created profile entry to the role.

# rolemod -P Shut,All shutusr
# cat /etc/user_attr
root::::type=normal;auths=solaris.*,solaris.grant;profiles=All
shutusr::::type=role;profiles=Shut,All

Assign the role to a real user.

# usermod -R shutusr admin
# cat /etc/user_attr
root::::type=normal;auths=solaris.*,solaris.grant;profiles=All
shutusr::::type=role;profiles=Shut,All
admin::::type=normal;roles=shutusr

Assign commands to the profile.

# vi /etc/security/exec_attr
Shut:suser:cmd:::/usr/sbin/shutdown:uid=0

Test the configuration by logging in as the assigned user, assuming the role, and running the profile command.

# su - admin
# su - shutusr
# /usr/sbin/shutdown -i 6 -g 0


NIS

Set the basic environment for master, slave(s), and clients.

# domainname cpships.com
# domainname > /etc/defaultdomain
# vi /etc/hosts
ensure the NIS master and slave hostname and address exist in this file
# vi /etc/nsswitch.conf
passwd: files nis
# vi /etc/timezone
add GMT your.domain

Configuring NIS Master.

# mkdir /etc/nis
# cp /etc/passwd /etc/nis; cp /etc/shadow /etc/nis
# vi /etc/nis/passwd
remove non-NIS accounts; do the same with the shadow file
# chmod -R 700 /etc/nis
# vi /var/yp/Makefile
modify PWDIR to /etc/nis (see the example line after this block)
# touch /etc/ethers; touch /etc/bootparams
# /usr/sbin/ypinit -m
# /usr/lib/netsvc/yp/ypstart
# /usr/ccs/bin/make
# ypcat passwd
# ypwhich -m
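
The PWDIR change in /var/yp/Makefile is a single variable assignment; as modified it reads:

PWDIR =/etc/nis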

Configuring NIS Slave.

# /usr/sbin/ypinit -c
# /usr/lib/netsvc/yp/ypstart
# /usr/sbin/ypinit -s master_server
# /usr/lib/netsvc/yp/ypstop
# /usr/lib/netsvc/yp/ypstart
# ypcat passwd

Configuring NIS Client.

# ypinit -c
# /usr/lib/netsvc/yp/ypstart
# ypwhich -m

Testing dynamic rebind (only works if during ypinit -c you added both master and slave).

# ypwhich		;on client
# stop-a		;on master
# ypwhich		;on client to verify slave usage
ok: go			;on master to startup again


iSCSI

Verify iSCSI packages are installed.

# pkginfo SUNWiscsiu SUNWiscsir
system SUNWiscsiu Sun iSCSI Device Driver (root)
system SUNWiscsir Sun iSCSI Management Utilities (usr)

Confirm you can reach target, review current devices.

# ping 192.168.56.101
    192.168.56.101 is alive
# echo | format 
    0. c0d0 DEFAULT cyl 2085 alt 2 hd 255 sec 63
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0

Configure the device to be dynamically discovered (Send Targets).

# iscsiadm add discovery-address 192.168.56.101:3260

Enable the iSCSI target discovery using the Send Targets discovery method. (15 seconds to complete).

# iscsiadm modify discovery --sendtargets enable

Create the iSCSI device links for the local system. (This may take a couple of minutes).

# devfsadm -i iscsi
# echo | format
    0. c0d0 DEFAULT cyl 2085 alt 2 hd 255 sec 63
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
    1. c2t600144F0E7F0137D00004C4F0F9C0002d0 DEFAULT cyl 1021 alt 2 hd 128 sec 32
          /scsi_vhci/disk@g600144f0e7f0137d00004c4f0f9c0002

Display information about the iSCSI initiator.

# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:01:e00000009079.4ae61b35
Initiator node alias: unknown
	Login Parameters (Default/Configured):
		Header Digest: NONE/-
		Data Digest: NONE/-
	Authentication Type: NONE
	RADIUS Server: NONE
	RADIUS access: unknown
	Configured Sessions: 1

Display information about which discovery methods are in use.

# iscsiadm list discovery
Discovery:
	Static: disabled
	Send Targets: enabled
	iSNS: enabled

Display iSCSI Target Information.

# iscsiadm list target
Target: iqn.1986-03.com.sun:02:9e616bcd-9f00-6235-d49b-b0b13e4b3074
	Alias: lun0
	TPGT: 2
	ISID: 4000002a0000
	Connections: 1	

# iscsiadm list target -v iqn.1986-03.com.sun:02:9e616bcd-9f00-6235-d49b-b0b13e4b3074
Target: iqn.1986-03.com.sun:02:9e616bcd-9f00-6235-d49b-b0b13e4b3074
	Alias: lun0
	TPGT: 2
	ISID: 4000002a0000
	Connections: 1
		CID: 0
		  IP address (Local): 192.168.56.110:33270
		  IP address (Peer): 192.168.56.101:3260
		  Discovery Method: SendTargets 
		  Login Parameters (Negotiated):
		  	Data Sequence In Order: yes
		  	Data PDU In Order: yes
		  	Default Time To Retain: 20
		  	Default Time To Wait: 2
		  	Error Recovery Level: 0
		  	First Burst Length: 65536
		  	Immediate Data: yes
		  	Initial Ready To Transfer (R2T): yes
		  	Max Burst Length: 262144
		  	Max Outstanding R2T: 1
		  	Max Receive Data Segment Length: 32768
		  	Max Connections: 1
		  	Header Digest: NONE
		  	Data Digest: NONE

Create ZFS Pool using iSCSI LUN

# zpool create zfstest c2t600144F0E7F0137D00004C4F0F9C0002d0
# zpool list
   NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
   zfstest  1.98G  76.5K  1.98G     0%  ONLINE  -

# zpool status
  pool: zfstest
  state: ONLINE
  scrub: none requested
  config:
        NAME                                     STATE     READ WRITE CKSUM
        zfstest                                  ONLINE       0     0     0
          c2t600144F0E7F0137D00004C4F0F9C0002d0  ONLINE       0     0     0

# cd /zfstest
# mkfile 100m testfile

Remove iSCSI LUNs (if needed).
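
If the LUN is still in use by the test pool created above, destroy the pool first (assuming the zfstest pool from the earlier step).

# zpool destroy zfstest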

Disable iSCSI target discovery (since we used the SendTargets discovery method).

# iscsiadm modify discovery --sendtargets disable

Remove iSCSI device discovery.

# iscsiadm remove discovery-address 192.168.56.101:3260

Remove the iSCSI target device. LUN0 must be the last LUN removed if multiple LUNs are associated with a target.

# iscsitadm delete target --lun 0 spinkx    ### this didn't work; hence I cleaned up device tree to remove old LUNs


NDD INTERFACE

How to set 100 Full duplex (hme0).

# ndd -set /dev/hme instance 0
# ndd -set /dev/hme adv_autoneg_cap 0
# ndd -set /dev/hme adv_100fdx_cap 1
# ndd -set /dev/hme adv_100hdx_cap 0

How to set to 1000 Full Duplex (ge1).

# ndd -set /dev/ge instance 1
# ndd -set /dev/ge adv_1000autoneg_cap 0
# ndd -set /dev/ge adv_1000fdx_cap 1
# ndd -set /dev/ge adv_1000hdx_cap 0

How to permanently set the interface via the kernel driver configuration. Ensure the Cisco switch port has auto-negotiate turned off.

# cd /platform/sun4u/kernel/drv
# vi hme.conf
adv_1000fdx_cap=0;
adv_1000hdx_cap=0;
adv_100fdx_cap=1;
adv_100hdx_cap=0;
adv_10fdx_cap=0;
adv_10hdx_cap=0;
adv_autoneg_cap=0;


SVM FILE SYSTEM

Increase Disk Space on Soft Partition d100.

# df -kl
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d0       61512060 48960375 11936565    81%    /
/dev/md/dsk/d5       3010671  753053 2197405    26%    /var
/dev/md/dsk/d101     2034207  561318 1411863    29%    /sapmnt/QAF
/dev/md/dsk/d100     60932061 52833703 7530351    88%    /oracle/QAF
/dev/md/dsk/d102     2034207  404596 1568585    21%    /usr/sap/QAF

# metastat -p
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
d1 -m d11 d21 1
d11 1 1 c1t0d0s1
d21 1 1 c1t1d0s1
d5 -m d15 d25 1
d15 1 1 c1t0d0s5
d25 1 1 c1t1d0s5
d100 -p d50 -o 1 -b 115343360
d101 -p d50 -o 115343362 -b 4194304
d102 -p d50 -o 119537667 -b 4194304
d103 -p d50 -o 123731972 -b 4194304
d50 -m d51 d52 1
d51 1 1 c2t0d0s0
d52 1 1 c2t1d0s0

# metafree d100
Soft partitions configured on SVM Component: d50
Soft Partition			MB Used
---------------------------------------
d100				56320
d101				2048
d102				2048
d103				2048
---------------------------------------
Total MB: 69979
Total MB Used: 62464
Total MB Avail: 7515


# metattach d100 4g
d100: Soft Partition has been grown

# growfs -M /oracle/QAF /dev/md/rdsk/d100
Warning: 8192 sector(s) in last cylinder unallocated

# metastat -p
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
d1 -m d11 d21 1
d11 1 1 c1t0d0s1
d21 1 1 c1t1d0s1
d5 -m d15 d25 1
d15 1 1 c1t0d0s5
d25 1 1 c1t1d0s5
d100 -p d50 -o 1 -b 115343360  -o 127926277 -b 8388608
d101 -p d50 -o 115343362 -b 4194304
d102 -p d50 -o 119537667 -b 4194304
d103 -p d50 -o 123731972 -b 4194304
d50 -m d51 d52 1
d51 1 1 c2t0d0s0
d52 1 1 c2t1d0s0

# metafree d100
Soft partitions configured on SVM Component: d50
Soft Partition			MB Used
---------------------------------------
d100				60416
d101				2048
d102				2048
d103				2048
---------------------------------------
Total MB: 69979
Total MB Used: 66560
Total MB Avail: 3419


SVM PROBLEMS

Unable to remove a metadevice because the device is open. The following error may occur when attempting to remove a metadevice using the metaclear command.

# metaclear -r -f d1
metaclear: hostname: d1: metadevice is open

Use fuser to confirm that there are no processes keeping the device open. Then move the startup scripts to prevent DiskSuite from starting at boot.

# fuser /dev/md/dsk/d1
# mv /etc/rcS.d/S35svm.init /etc/rcS.d/s35svm.init
# mv /etc/rc2.d/S95svm.sync /etc/rc2.d/s95svm.sync

Halt the system and boot into single user mode. You can then delete the metadevices and associated metadbs as follows.

# metaclear -r -f d1
# metadb -d /dev/dsk/cxtxdxs7
# metadb -d -f /dev/dsk/cxtxdxs7

Reboot the host to multi-user mode and confirm through metastat output that no metadevices remain.
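
For example, a minimal confirmation after the reboot:

# metastat
# metadb -i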



SVM to VERITAS

How to migrate a system disk under Solstice DiskSuite control to VERITAS Volume Manager. First boot the system from CD-ROM into single user mode.

ok: boot cdrom -s
# mount /dev/dsk/c0t3d0s0 /a
# TERM=sun
# export TERM

Remove SVM entries from /etc/system by commenting out rootdev and mddb.

# cp /a/etc/system /a/etc/system.orig
# vi /a/etc/system
	*rootdev:/pseudo/md:0,blk
	*set md:mddb_bootlist1=" [some sd numbers]

Remove SVM entries from /etc/vfstab file.

# cp /a/etc/vfstab /a/etc/vfstab.orig
# vi /a/etc/vfstab
        #/dev/md/dsk/d0	  /dev/md/rdsk/d0	/  ufs  1  no  -
	/dev/dsk/c0t3d0s0 /dev/rdsk/c0t3d0s0	/  ufs  1  no  -
# reboot

Run metastat to find all metadevices. Note: If the boot disk is mirrored, you must first detach the mirror and then clear the metadevice.

# metastat
# metadetach d0 d1
# metaclear -r d0
# metaclear d1

Find where the state database replicas reside and delete them. These hold configuration information similar to the private region for Volume Manager.

# metadb -i
# metadb -d c#t#d#s#
# metadb -d -f c#t#d#s#				;last replica needs to be removed with -f

Remove the SVM packages.

SUNWmdu        Solstice DiskSuite Commands
SUNWmdr        Solstice DiskSuite Drivers
SUNWmdx        Solstice DiskSuite Drivers(64-bit)
SUNWmdg        Solstice DiskSuite Tool (optional)
SUNWmdnr       Solstice DiskSuite Log Daemon Configuration Files (optional)
SUNWmdnu       Solstice DiskSuite Log Daemon (optional)
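
A sketch of removing them with pkgrm (remove the optional packages only if they are installed):

# pkgrm SUNWmdg SUNWmdnr SUNWmdnu SUNWmdu SUNWmdr SUNWmdx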

Remove the configuration files: mddb.cf, md.tab, md.cf.

# cd /etc/lvm
# rm mddb.cf md.tab md.cf


DU AND DF

The 'du' and 'df' commands sometimes do not match. One of the reasons this occurs is that an open file was removed from the system, but the process that had the file open is still running. To illustrate this issue, first review the disk usage and then create a large file.

# df -kl .
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    1091719  613356  423778    60%    /

# mkfile 100m bigfile
# df -kl .
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    1091719  715828  321306    70%    /

Begin a process which "opens" the file. This 'cat' command will sit there forever; just leave it alone. The file "bigfile" is now open. If you have 'lsof' installed (lsof is a neat freeware program which lists open files), you can see that the file 'bigfile' is open.

# cat >> bigfile

In another window, remove the file.

# rm bigfile

Check the size of the filesystem. Notice that the space has NOT been reclaimed.

# df -k .
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    1091719  715828  321306    70%    /

Stop the 'cat' program (by pressing ^C in that window). Now that the program has stopped, you can see that the space has finally been reclaimed.

# df -kl .
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    1091719  613356  423778    60%    /

A few reasons why du and df can show different answers:

  1. Inconsistent filesystem requiring fsck(1M).
  2. An open file is deleted, as described above.
  3. Mounting a file system on top of a directory that contains files.


SERVER UPGRADE

The following is an outline of the steps used to replace an old server (in use) with a new server. The new server is configured with a temporary host name. Further below is a list of files to copy and an example rsync script.

  1. Identify all files installed that need to be migrated.
  2. Create a rsync script for these files and directories.
  3. If you're planning to import the disk groups, test this functionality on your new server first. Verify a Veritas disk group can be imported and deported from another system: create a test disk group on a third server with a file system and some files, deport the disk group, run the EMC LUN masking software so that your new server has access to the disk devices, then import the disk group on the new server. The idea is to ensure the Veritas import works as expected, which also tests the zoning and HBA configuration.
  4. Stop the applications on the current server.
  5. Perform a backup if needed, for example run an EMC BCV backup.
  6. If you're planning to export/import disk group then umount the file systems first and deport disk group.
  7. Copy files via tar (if hidden files are needed) or rsync to the new server. With the file systems unmounted (if performing a disk group import) you'll collect all the local Oracle files and directory mount points.
  8. On the new server comment out vfstab file, stop application startup scripts, change hostname, change IP addresses (from temporary to permanent) and reboot the new server. Perform the same tasks on the current server.
  9. Perform SAN tasks - tidy zoning, lun masking if needed.
  10. If the new server is assigned access to disks via the SAN (as part of preparing for disk group import) configure the HBA driver and confirm device level access.
  11. Edit the vfstab file, verify the disks and file systems will mount, enable startup scripts. Perform a reboot.

Files to be Copied - this list depends on whether you deport/import disk groups.

/.rhosts
/etc/netmasks
/etc/hosts
/etc/syslog.conf
/etc/access
/etc/ftpaccess
/etc/shadow
/etc/aliases
/etc/auto*
/etc/inetd.conf
/etc/passwd
/etc/profile
/etc/system
/etc/group
/etc/vfstab
/etc/hostname*
/etc/rc*
/etc/services
/etc/shells
/etc/mail/*
/etc/init.d/scripts
/usr/local/bin/*
/usr/sap/*
/var/spool/cron/*
/var/spool/mqueue/*
/var/opt/oracle/*
/usr/local/netsaint
/etc/cron.d
/etc/dfstab
/app*
/export/home/*
/oracle/*
/sapmnt/*
/interfaces/*
/var/spool/cron/crontabs
/var/spool/cron/cron.allow

Rsync Script example for copying files.

d1pr0004:/var/tmp/sync$ cat sync.sh
#!/bin/sh
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /export/home d1drsap:/export
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /var/opt/oracle d1drsap:/var/opt
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /app d1drsap:/
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /sapmnt d1drsap:/
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /usr/sap d1drsap:/usr
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /oracle d1drsap:/
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /interfaces d1drsap:/
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /usr/local/bin d1drsap:/usr/local
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /usr/local/netsaint d1drsap:/usr/local
### Sendmail
/opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync /etc/mail d1drsap:/etc
rcp -p /usr/lib/sendmail d1drsap:/usr/lib
### Uncomment the day of....
# /opt/csw/bin/rsync -lavz --rsh=rsh --progress --rsync-path=/opt/csw/bin/rsync \ 
/var/spool/cron/crontabs d1drsap:/var/spool/cron

Alternatively, use the tar command; note the directory argument used in the tar command, which catches the hidden files for us.

# tar cvpfE - PR1 | ( rsh SERVER "cd /oracle ; tar -xvpf - ")


LIVE UPGRADE

In this documentation, we're upgrading Solaris 7 to Solaris 8. See http://www.sun.com/bigadmin/features/articles/live_upgrade.html.

First steps.

  1. Perform complete backup of system.
  2. If possible, also make a three way mirror prior to upgrade.
  3. Disable startup scripts to prevent applications from starting on reboots.
  4. Make a local backup of /etc/vfstab, /etc/system, /kernel/drv/sd.conf in case they are overwritten.

Install prerequisite patches and packages.

# mount d1de0003:/jumpstart /mnt
# cd /mnt/Patches/Live_Upgrade/Solaris-7
# patchadd xxxx.xx
# pkginfo -l SUNWadmap
# pkginfo -l SUNWadmc
# pkginfo -l SUNWlibC
# cd /mnt/OS/Solaris-8-2003-05/Solaris_8/Product
# pkgadd -d . SUNWlur
# pkgadd -d . SUNWluu

Determine boot environment (BE) disk.

c0t0d0s2	rootdisk	;don't touch this disk
c0t1d0s2	rootmirror 	;remove mirrors, install LU

Determine partitions used by rootdisk.

/dev/vx/dsk/swapvol	 - 	- 	swap	-       no      -
/dev/vx/dsk/rootvol	/dev/vx/rdsk/rootvol		/	ufs	1       no      -
/dev/vx/dsk/var 	/dev/vx/rdsk/var        	/var	ufs	1       no      -
/dev/vx/dsk/opt 	/dev/vx/rdsk/opt        	/opt	ufs	2       yes     -

Remove veritas mirrors from boot environment disk (rootmirror).

# vxprint -ht | grep swapvol
.. vxprint check other volumes
# vxassist -g rootdg remove mirror swapvol alloc="rootdisk"
# vxassist -g rootdg remove mirror rootvol alloc="rootdisk"
# vxassist -g rootdg remove mirror var alloc="rootdisk"
# vxassist -g rootdg remove mirror opt alloc="rootdisk"

Remove the boot environment disk from Veritas control.

# /usr/sbin/vxdg  -g rootdg  rmdisk rootmirror
# /etc/vx/bin/vxdiskunsetup  -C c0t1d0

Check existing space used by mount points on root disk.

# df -k | egrep "rootvol|var|opt" | awk '{ total += $3 } END { print total }'

Partition the Boot Environment disk; we prefer to create a single root file system as per the Sun BluePrints boot disk layout. Leave 800MB for later use, for example SVM metadevices.

# format
Partition 0	/	12GB
Partition 1	swap	  4GB

Create an alternate Boot Environment joining /, /var and /opt. The -c option names the current boot environment, -m gives the new location of the file systems. The swap partition is not cloned and remains on c0t0d0s0.

# lucreate -c "Solaris_7" -m /:/dev/dsk/c0t1d0s0:ufs -m /var:merged:ufs -m /opt:merged:ufs -n "Solaris_8"

Check current and alternate boot environments.

# lufslist Solaris_7
# lufslist Solaris_8

Prior to upgrading the boot environment, you must remove Veritas from the boot environment disk.

# lumount Solaris_8 /a
# cd /a/etc
# cp vfstab vfstab-DATE
# vi vfstab
...change vxfs entries to physical disk entries
...also modify swap to point to new physical slice location
# cp system system-DATE
# vi system
...remove Veritas entries
# touch /a/etc/vx/reconfig.d/state.d/install-db
# lumount /a

Copy over Veritas and JNI packages before rebooting to single user mode.

# cp VRTSxxx /a/var/tmp/
# cp JNI Drivers /a/var/tmp

Upgrade the boot environment. The -u for upgrade, -n boot environment name, -s OS image path.

# luupgrade -u -n Solaris_8 -s /mnt/OS/Solaris-8-2003-05

Perform some log file checking to ensure upgrade was successful.

# vi /a/var/adm/messages

Patch the boot environment.

# patchadd -R /a -M /mnt/Patches/8_Recommended patch_order
# luumount Solaris_8

Activate the boot environment.

# luactivate Solaris_8
# init 0
ok setenv boot-device disk1:a
ok boot
# lufslist Solaris_8
# lustatus

Perform a final init 6 to ensure everything works fine.

# init 6


FULL DUPLEX

Configuring server NIC interface settings requires that both the server and the switch port have the same settings.

# cd /platform/sun4u/kernel/drv
# vi hme.conf
adv_1000fdx_cap=1;
adv_1000hdx_cap=0;
adv_100fdx_cap=1;
adv_100hdx_cap=0;
adv_10fdx_cap=1;
adv_10hdx_cap=0;
adv_autoneg_cap=0;

From the command line you can force speed and duplex. Be sure to check the NIC card instance number via ifconfig.

ndd -set /dev/eri instance 0 
ndd -set /dev/eri adv_100fdx_cap 1
ndd -set /dev/eri adv_100hdx_cap 0
ndd -set /dev/eri adv_autoneg_cap 0

ndd -set /dev/hme instance 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_autoneg_cap 0

ndd -set /dev/qfe instance 0
ndd -set /dev/qfe adv_100fdx_cap 1
ndd -set /dev/qfe adv_100hdx_cap 0
ndd -set /dev/qfe adv_autoneg_cap 0

Once you have rebooted, check the settings. You should see all ones, which means the link speed is at the forced rate, the mode is full duplex, and the link is up.

# ndd -set /dev/eri instance 0
# ndd -get /dev/bge0 \?
# ndd -get /dev/eri link_speed link_mode link_status


ZONES

The global zone is the base OS managing all devices; virtual (non-global) zones are placed on top of it. To create a virtual zone:

# mkdir /zone
# zonecfg -z twlight
zonecfg:twlight> create
zonecfg:twlight> set zonepath=/zone/twlight
zonecfg:twlight> set autoboot=true
zonecfg:twlight> add net
zonecfg:twlight:net> set address=10.152.29.200
zonecfg:twlight:net> set physical=eri0
zonecfg:twlight:net> end
zonecfg:twlight> add inherit-pkg-dir
zonecfg:twlight:inherit-pkg-dir> set dir=/mytest
zonecfg:twlight:inherit-pkg-dir> end
zonecfg:twlight> info
zonepath: /zone/twlight
autoboot: true
pool:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /mytest
net:
        address: 10.152.29.200
        physical: eri0
zonecfg:twlight> verify
zonecfg:twlight> commit
zonecfg:twlight> exit
# cd /etc/zones
# ls
SUNWblank.xml    SUNWdefault.xml  index            twlight.xml

Install Zone (this will take a few minutes).

# zoneadm -z twlight install
# cat /zone/twlight/root/var/sadm/system/logs/install_log

Boot Zone & login first time from the console to enter basic setup information.

# zoneadm -z twlight boot
# zlogin -e \@ -C twlight
Type @. at the start of a line to exit the console.

Zone Management (from global zone)

# zonecfg -z twlight
# zoneadm list -v
# zlogin twlight
# df -kZ
# ps -z twlight

Shutdown and startup of zones.

# zlogin -e \@ -C twlight
# shutdown
# zoneadm -z twlight boot
# shutdown

To remove a zone.

# zoneadm -z twlight halt
# zoneadm -z twlight uninstall
# zonecfg -z twlight delete

From the global zone, packages can be applied to all zones, or use -G for the global zone only and -Z for all non-global zones.

# pkgadd -G
# pkgadd -Z


DISK WIPE

To wipe a disk, use the purge command from format. The purge command writes four patterns to the entire surface of the disk, reads all the data back, and then performs another write.

# format
format> analyze
analyze> purge
...
analyze> quit


NTP

The public time servers distribute time to clients at no cost. Typically, organisations synchronise a few local servers to a public server, and then distribute time within the organisation using those local time servers.

Configuring the NTP server; see our server's ntp.conf settings.

# cp /etc/inet/ntp.server /etc/inet/ntp.conf
# vi /etc/inet/ntp.conf

  # US IL Stratum 2
    server 130.126.24.24
  # US VA Stratum 2
    server 198.82.161.227
  # US TX Stratum 2
    server 128.249.1.10
  # Set stratum to higher level
    fudge 127.127.1.0 stratum 10
  # Peer with c1pr0002
    peer 10.140.129.2
  # Peer with d1pre1s1
    peer 10.140.130.2
  # Peer with d1pre1s2
  # peer 10.140.130.3
    enable auth monitor
    driftfile /var/ntp/ntp.drift
    statsdir /var/ntp/ntpstats/
    filegen peerstats file peerstats type day enable
    filegen loopstats file loopstats type day enable
    filegen clockstats file clockstats type day enable

# touch /var/ntp/ntp.drift
# /etc/init.d/xntpd start
# ps -ef | grep ntp | grep -v grep
# snoop | grep -i ntp

Configuring NTP Client.

# cp /etc/inet/ntp.client /etc/inet/ntp.conf
# vi /etc/inet/ntp.conf
  server 10.140.130.2
  server 10.140.130.3
# /etc/init.d/xntpd start
# ps -ef | grep ntp | grep -v grep

Monitoring Tools.

# xntpdc
# ntpq
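
For example, to list the peers the daemon is synchronising with and their offsets:

# ntpq -p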


BACKUP ROUTE

To add a backup route, first ensure the IP address is not already in use, with ping and nslookup. Next, determine that the interface itself is not in use.

# myip="10.140.128.66"
# ping $myip
# nslookup $myip
# ifconfig -a

Plumb interface.

# ifconfig hme1 plumb
# ifconfig hme1 $myip netmask 255.255.252.0 up

Verify Connectivity.

# traceroute -i hme1 d1pr0150

Configure system to use interface.

# vi /etc/hostname.hme1
# vi /etc/hosts		

Add a route for the backup server and check the route exists.

# /usr/sbin/route add host 10.140.128.51 $myip -interface
# netstat -rn		

From another terminal, send pings to the backup server. From the current terminal, make sure the Opkts count increases with each ping.

# ping -s d1pr0000	
# netstat -I hme1	

Ensure the route is maintained across reboots by adding the route command from above to the rootusr file.

# vi /etc/init.d/rootusr
  ...
  /usr/sbin/route add host 10.140.128.51 10.140.128.66 -interface


ALOM & RSC

To display the ALOM prompt, connect to the serial management port via the terminal server.

# #.
sc> console
user: admin
password: adminXXX

To configure an RSC card run rsc-config then create a new RSC user.

# /usr/platform/SUNW,Sun-Fire-280R/rsc/rsc-config
# cd /usr/platform/platform_name/rsc
# ./rscadm useradd
# ./rscadm userperm cuar
# ./rscadm userpassword
# /opt/rsc/bin/rsc

To redirect the console to RSC.

ok setenv diag-console rsc
ok setenv input-device rsc-console
ok setenv output-device rsc-console

If you want to redirect the console back to TTYA, you can do the following.

ok setenv diag-out-console false
ok setenv input-device keyboard
ok setenv output-device screen

To login via rsc.

# telnet 10.140.130.30
   Please enter login: admin
   Password: adminxxx
rsc>


SED

An example of adding lines inside a file. See the file contents before running the sed script.

# cat testfile
MONDAY
TUESDAY
SERVER = X
CLIENT_NAME = 123
FJDKES
DKDKDD

Create a file for sed to use.

# cat in.txt
/SERVER = X/ a\
SERVER = YYYY\
SERVER = BBBB

Run sed command and see results.

# sed -f in.txt testfile > t.t
# cat t.t
MONDAY
TUESDAY
SERVER = X
SERVER = YYYY
SERVER = BBBB
CLIENT_NAME = 123
FJDKES
DKDKDD


WRONG MAGIC

The Solaris Leadville driver does not allow an admin to access the HBA for device binding. You can use Solaris 10's new host-based LUN masking to remove access to an individual LUN. We used this feature on Sun servers to mask out the VCM DB devices in order to prevent "corrupt label - wrong magic number" errors from the write-protected VCM DB.

Verify that the patch is installed.

# showrev -p | grep 119130-22

Determine the auto-mapped VCM DB device; in this example, 0062.

# ./emc/SYMCLI/V6.3.0/bin/syminq | grep 8400062000
/dev/rdsk/c2t50060482D52DE217d0s2  GK     EMC       SYMMETRIX    5771 8400062000       2880
/dev/rdsk/c4t50060482D52DE218d0s2  GK     EMC       SYMMETRIX    5771 8400062000       2880

Select from cfgadm the disks (target and associated LUNs) to mask out. Note from the above output that the LUN binding is zero.

# cfgadm -alo show_SCSI_LUN | grep -i 50060482D52DE217,0
c2::50060482d52de217,0         disk         connected    configured   unknown
# cfgadm -alo show_SCSI_LUN | grep -i 50060482D52DE218,0
c4::50060482d52de218,0         disk         connected    configured   unknown

Edit the file /kernel/drv/fp.conf.

pwwn-lun-blacklist=
"50060482d52de217,0",
"50060482d52de218,0";

Reboot the server for the changes to take effect. A reboot is necessary for changes to fp.conf to take effect, i.e. update_drv -f fp is not an option, and using cfgadm -c configure on devices listed as 'unavailable' will also fail. A reconfiguration boot (boot -r) is NOT required for the changes to take effect. Even using boot -r, the device paths and /etc/path_to_inst will remain unchanged, since this is a true masking effect; that is, the fcp driver is aware of the LUNs, it just leaves them in an 'unavailable' state by not enumerating them. After reboot, check the messages file for blacklist entries.

# init 6
# grep black /var/adm/messages
May 20 18:30:23 uxnbpr03 fcp:     LUN 0 of port 50060482d52de217 is masked due to black listing.
May 20 18:30:23 uxnbpr03 fcp:     LUN 0 of port 50060482d52de218 is masked due to black listing.

Double-check, use the cfgadm or syminq command.

TBA - when I ran this I didn't see this output..SAMPLE ONLY
c2::50020f23000097b9,4         unavailable  connected    unconfigured unknown <--- blacklisted
c2::50020f23000097b9,5         unavailable  connected    unconfigured unknown <--- blacklisted


HOSTNAME CHANGE

Modify the following files with your new host name. Reboot for changes to take effect.

# vi /etc/nodename
# vi /etc/inet/hosts
# vi /etc/hostname.hme0
# vi /etc/net/ticlts/hosts
# vi /etc/net/ticots/hosts
# vi /etc/net/ticotsord/hosts
# vi /etc/inet/ipnodes


GROW UFS

Assume you create a ufs file system with minfree set to 10% of the disk capacity. At some point you start running out of space and need some of that minfree disk space back. Use the tunefs tool to reduce the minfree area and run mkfs to increase the existing ufs file system space. Alternatively you may be running SVM with soft partitions and need to increase the amount of space allocated, see SVM for an example with soft partitions.

# /usr/sbin/newfs -m %free /dev/rdsk/c1t3d0s0
# /usr/sbin/tunefs -m %free /dev/rdsk/c1t3d0s0
# /usr/lib/fs/ufs/mkfs -G -M /current/mount /dev/rdsk/cXtYdZsA newsize
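
As a concrete sketch, reducing the minfree area to 1% on an existing file system:

# /usr/sbin/tunefs -m 1 /dev/rdsk/c1t3d0s0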


ADDING SWAP

The following creates an empty file, adds it to swap, and shows how to ensure the swap is added across reboots. The example also shows how to remove the swap file.

# mkfile 20m /export/data/swapfile
# swap -a /export/data/swapfile
# vi /etc/vfstab
/export/data/swapfile   -       -       swap    -       no      -
# swap -d /export/data/swapfile
# rm /export/data/swapfile
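
To verify the swap areas before and after these changes, list them:

# swap -l
# swap -s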


EXPORT TEST

To test the exported DISPLAY is working.

# export DISPLAY=10.120.36.8:0.0      ***may not need
# /usr/openwin/bin/xclock


SSH

To allow root ssh access, change the sshd configuration.

# vi /etc/ssh/sshd_config			*** change PermitRootLogin no to PermitRootLogin yes
# svcadm restart svc:/network/ssh:default