Wednesday, 17 September 2008

Veritas settings

I'm not sure what versions of Veritas this applies to. It might be all or only older ones.


  • The file /etc/vx/dmppolicy.info contains some settings that Veritas uses. A sample:
    arraytype
    #
    arrayname
    #
    enclosure
    #
    naming
    scheme=ebn persistence=no
    #

  • The file /etc/vx/disk.info contains a mapping from the sym identification of a disk to the Veritas EBN. It's used when the persistence=yes option is set in /etc/vx/dmppolicy.info. Its format is like this:
    FUJITSU%5FMAT3147F%20SUN146G%5FDISKS%5F500000E0112EF5F0 c0t0d0 0x3b40000 0x1 c0t0d0 Disk DISKS
    EMC%5FSYMMETRIX%5F890678%5F373833384630303851000008 c6t500604844A373DA6d65 0x3b40088 0x2 EMC0_0 EMC 890678
    EMC%5FSYMMETRIX%5F890678%5F373833364430303851000008 c6t500604844A373DA6d31 0x3b40030 0x2 EMC0_1 EMC 890678
    EMC%5FSYMMETRIX%5F890678%5F373833354530303851000008 c6t500604844A373DA6d16 0x3b40028 0x2 EMC0_2 EMC 890678


I suspect that if /etc/vx/disk.info is allowed to get out of date it can cause systems to have vxdisk entries for disks that don't exist.
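The fields in the sample above can be pulled apart with standard tools; a minimal sketch, assuming the field layout I've inferred from the sample (encoded identifier, OS device, device numbers, EBN, vendor, serial), which isn't documented anywhere I know of:

```shell
# Print the OS-device -> EBN mapping from /etc/vx/disk.info-style lines.
# Inferred fields: $1 = encoded identifier, $2 = OS device, $5 = EBN.
# On a live system, feed the real file in instead of the here-document.
awk '{ print $2, "->", $5 }' <<'EOF'
EMC%5FSYMMETRIX%5F890678%5F373833384630303851000008 c6t500604844A373DA6d65 0x3b40088 0x2 EMC0_0 EMC 890678
EMC%5FSYMMETRIX%5F890678%5F373833364430303851000008 c6t500604844A373DA6d31 0x3b40030 0x2 EMC0_1 EMC 890678
EOF
```

This prints one `OS-device -> EBN` line per disk, which makes stale entries easy to spot.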
Checking that newly presented EMC disks have been found by the OS (this avoids doing a full devfsadm):

  • Find the syms in the dev group
    # symdg show device_group

  • Find the identifier of a new disk
    # symmaskdb -sid 271 -dev 1A6B list assignment
    Symmetrix ID : 000287890271

    Device  Identifier        Type   Dir:P
    ------  ----------------  -----  ----------------
    1A6B    210100e08ba1b1e5  FIBRE  FA-7A:0
            210100e08ba104ec  FIBRE  FA-10A:0


  • Find the FA's WWNs
    # symcfg -sid 271 list -fa 7A -p 0
    Symmetrix ID: 000287890271

    S Y M M E T R I X   F I B R E   D I R E C T O R S

    Dir     Port  WWN               VCM      Volume Set  Pnt to Pnt
                                    Enabled  Addressing

    FA-7A   0     500604844a36d7c6  Yes      No          Yes

  • And the same for the other FA
     # symcfg -sid 271 list -fa 10A -p 0

    Symmetrix ID: 000287890271

    S Y M M E T R I X   F I B R E   D I R E C T O R S

    Dir     Port  WWN               VCM      Volume Set  Pnt to Pnt
                                    Enabled  Addressing

    FA-10A  0     500604844a36d7c9  Yes      No          Yes


  • Find the controllers that the FAs are connected to
    cfgadm -al  | egrep "500604844a36d7c6|500604844a36d7c9"
    c7::500604844a36d7c9 disk connected configured unknown
    c13::500604844a36d7c6 disk connected configured unknown

  • Configure those adapters/FAs so that the disks are found
    # cfgadm -v -c configure c7::500604844a36d7c9
    # cfgadm -v -c configure c13::500604844a36d7c6
    Both paths should now be visible.
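The steps above can be strung into one dry-run script. Everything here (SID, device, WWNs, controllers) is the example value from this walkthrough, and the run() wrapper only echoes the commands:

```shell
#!/bin/sh
# Dry-run of the new-EMC-disk discovery steps above. All values are the
# example values from the walkthrough; nothing is executed.
SID=271
DEV=1A6B
run() { echo "$@"; }     # drop the echo to execute for real

# 1. which FAs is the device mapped through?
run symmaskdb -sid "$SID" -dev "$DEV" list assignment

# 2. the WWN of each FA port (FA names come from step 1's output)
for FA in 7A 10A; do
    run symcfg -sid "$SID" list -fa "$FA" -p 0
done

# 3. find the controllers carrying those WWNs, then configure them
#    (c7/c13 below came from the cfgadm -al | egrep step above)
run cfgadm -al
run cfgadm -v -c configure c7::500604844a36d7c9
run cfgadm -v -c configure c13::500604844a36d7c6
```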

Wednesday, 3 September 2008

luxadm

Some luxadm notes.

  • List the enclosures and FC cards (-p adds WWNs and dev-path info)
    luxadm probe

  • List the FC devices with status
    host:/ # luxadm -e port
    /devices/pci@13d,700000/SUNW,qlc@1/fp@0,0:devctl CONNECTED
    /devices/pci@13d,700000/SUNW,qlc@1,1/fp@0,0:devctl CONNECTED
    /devices/pci@13d,600000/SUNW,qlc@1/fp@0,0:devctl NOT CONNECTED
    /devices/pci@13d,600000/SUNW,qlc@1,1/fp@0,0:devctl NOT CONNECTED

  • List the devices on a port
    host:/ # luxadm -e dump_map /devices/pci@13d,700000/SUNW,qlc@1/fp@0,0:devctl
    Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
    0    613c13   0          5006048c4a36d7c6  5006048c4a36d7c6  0x0 (Disk device)
    1    616013   0          500604844a373d89  500604844a373d89  Failed to get the type.
    2    632f13   0          5006048452a4f9d7  5006048452a4f9d7  0x0 (Disk device)
    3    661913   0          210000e08b82a178  200000e08b82a178  0x1f (Unknown Type,Host Bus Adapter)

  • Force a reset of the devices on a FC bus
    luxadm -e forcelip /dev/cfg/c2

Friday, 22 August 2008

NIC link details

Finding the interface link details on Solaris can be a pain: is it in ndd, /etc/system, kstat, the driver .conf file under /kernel/drv, or maybe the driver .conf file in /etc? Solaris 10 simplifies things with dladm:
root@sol10:/ # dladm show-dev
eri2 link: unknown speed: 100 Mbps duplex: unknown
ce4 link: unknown speed: 100 Mbps duplex: half
ce5 link: unknown speed: 100 Mbps duplex: half
ce6 link: unknown speed: 100 Mbps duplex: half
ce7 link: unknown speed: 0 Mbps duplex: unknown
eri0 link: unknown speed: 100 Mbps duplex: half
ce0 link: unknown speed: 100 Mbps duplex: full
ce1 link: unknown speed: 0 Mbps duplex: unknown
ce2 link: unknown speed: 100 Mbps duplex: half
ce3 link: unknown speed: 100 Mbps duplex: full
eri1 link: unknown speed: 0 Mbps duplex: unknown
dman0 link: unknown speed: 0 Mbps duplex: unknown
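Usually the interesting lines are the ones not running full duplex, so the output above is worth filtering. A sketch with a few sample lines inlined (on a live box, pipe the real `dladm show-dev` output in instead):

```shell
# List interfaces from `dladm show-dev` output that are not at full duplex.
# The last field of each line is the duplex value.
dladm_sample='ce0 link: unknown speed: 100 Mbps duplex: full
ce2 link: unknown speed: 100 Mbps duplex: half
ce7 link: unknown speed: 0 Mbps duplex: unknown'

printf '%s\n' "$dladm_sample" | awk '$NF != "full" { print $1, $NF }'
# prints: ce2 half
#         ce7 unknown
```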

Wednesday, 20 August 2008

Create a mirrored soft partition


  • Set up the disks with slice 3 covering the whole disk

  • Create each half of the mirror
    metainit d79 1 1 c4t14d0s3
    metainit d69 1 1 c0t10d0s3

  • Create the mirror with one half
    metainit d11 -m d69

  • Attach the other half
    metattach d11 d79

  • Create the soft partition in the mirror
    metainit d115 -p d11 68g

Finding WWNs of FC controllers

root@sol10host:/ # fcinfo hba-port
HBA Port WWN: 210000e08b14b1bc
OS Device Name: /dev/cfg/c13
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b14b1bc
HBA Port WWN: 210100e08b34b1bc
OS Device Name: /dev/cfg/c12
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb
Current Speed: not established
Node WWN: 200100e08b34b1bc
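That output boils down nicely to port-WWN/state pairs. A sketch, with the sample output above inlined (the awk patterns match this layout; pipe real `fcinfo hba-port` output in instead of the here-document):

```shell
# Summarise fcinfo hba-port output as "port WWN, state" pairs.
# $4 on the WWN line and $2 on the State line match the layout above.
awk '/^HBA Port WWN:/ { wwn = $4 }
     /^State:/        { print wwn, $2 }' <<'EOF'
HBA Port WWN: 210000e08b14b1bc
State: online
HBA Port WWN: 210100e08b34b1bc
State: offline
EOF
# prints: 210000e08b14b1bc online
#         210100e08b34b1bc offline
```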

Monday, 18 August 2008

Sample pset and zone commands (Containers)

root@e4k # zpool status
  pool: superstore
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        superstore    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c0t0d0    ONLINE       0     0     0
            c1t1d0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c0t16d0   ONLINE       0     0     0
            c0t18d0   ONLINE       0     0     0
            c0t20d0   ONLINE       0     0     0
            c1t21d0   ONLINE       0     0     0


zpool create zones c0t0d0 c1t1d0 c0t2d0 c1t3d0 c0t4d0 c1t6d0 c0t16d0 c0t18d0 c0t20d0 c1t21d0


zoneadm list -vc
# removed old ones with
root@e4k # zonecfg -z webservices delete
Are you sure you want to delete zone webservices (y/[n])? y
root@e4k # zonecfg -z subversion delete
Are you sure you want to delete zone subversion (y/[n])? y

# create the zone config (either sparse or whole root)
# the zone xml files are in /etc/zones/*.xml

# sparse zone...
root@e4k # zonecfg -z rbs1
rbs1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:rbs1> create
zonecfg:rbs1> set zonepath=/zones/rbs1
zonecfg:rbs1> set autoboot=true
zonecfg:rbs1> add net
zonecfg:rbs1:net> set address=10.0.0.7
zonecfg:rbs1:net> set physical=hme0
zonecfg:rbs1:net> end
zonecfg:rbs1> verify
zonecfg:rbs1> exit

# share /root with the zones
root@e4k # zonecfg -z rbs5
zonecfg:rbs5> add fs
zonecfg:rbs5:fs> set dir=/root
zonecfg:rbs5:fs> set special=/root
zonecfg:rbs5:fs> set type=lofs
zonecfg:rbs5:fs> end
zonecfg:rbs5> verify
zonecfg:rbs5> exit

# whole root zone (the -b does the trick)
root@e4k # zonecfg -z rbs1
rbs1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:rbs1> create -b
zonecfg:rbs1> set zonepath=/zones/rbs1
zonecfg:rbs1> set autoboot=true
zonecfg:rbs1> add net
zonecfg:rbs1:net> set address=10.0.0.7
zonecfg:rbs1:net> set physical=hme0
zonecfg:rbs1:net> end
zonecfg:rbs1> verify
zonecfg:rbs1> exit
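The interactive sessions above can also be scripted: zonecfg accepts a command file with -f, which is handy when stamping out several similar zones. A sketch generating the sparse-zone config above (the file path is arbitrary, and the final zonecfg command is echoed rather than run):

```shell
#!/bin/sh
# Generate a zonecfg command file equivalent to the interactive
# sparse-zone session above, then show how it would be applied.
CFG=/tmp/rbs1.cfg
cat > "$CFG" <<'EOF'
create
set zonepath=/zones/rbs1
set autoboot=true
add net
set address=10.0.0.7
set physical=hme0
end
verify
commit
EOF
echo zonecfg -z rbs1 -f "$CFG"    # drop the echo to apply for real
```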


# install the zone
root@e4k # zoneadm -z rbs1 install

# boot the zone
root@e4k # zoneadm -z rbs1 boot

root@e4k # zlogin -C rbs1
[Connected to zone 'rbs1' console]


# back in the global zone
# enable poold
root@e4k # pooladm -e
# might need to start the pool service
root@e4k # svcadm enable svc:/system/pools

# setup the initial /etc/pooladm.conf
poolcfg -c 'create pset pset_z1 (uint pset.min = 1; uint pset.max = 4)'

# setup the fair share scheduler (why?)
poolcfg -c 'create pool pool_z1 (string pool.scheduler="FSS")'

# join the two
poolcfg -c 'associate pool pool_z1 (pset pset_z1)'

# view what's been done
poolcfg -c info

# save it
pooladm -c

# activate it for the zone
root@e4k # zonecfg -z rbs1 set pool=pool_z1

# reboot the zone
###############################First done...########################
Now: create a 6 CPU pool for the global zone,
create another three 8 CPU pools,
create another three zones,
and assign the pools.


# create the global pool
POOL=global
POOL_MAXCPU=6
P_ZONE=global
poolcfg -c 'create pset 'pset_$POOL' (uint pset.min = 1; uint pset.max = '$POOL_MAXCPU' )'
poolcfg -c 'create pool 'pool_$POOL' (string pool.scheduler="FSS")'
poolcfg -c 'associate pool 'pool_$POOL' (pset 'pset_$POOL')'
poolcfg -c info
pooladm -c
zonecfg -z $P_ZONE set pool=pool_$POOL

# create the three 8 CPU pools (z2->z4 & rbs2->rbs4)
POOL=z5
POOL_MAXCPU=8
P_ZONE=rbs5
poolcfg -c 'create pset 'pset_$POOL' (uint pset.min = 1; uint pset.max = '$POOL_MAXCPU' )'
poolcfg -c 'create pool 'pool_$POOL' (string pool.scheduler="FSS")'
poolcfg -c 'associate pool 'pool_$POOL' (pset 'pset_$POOL')'
poolcfg -c info
pooladm -c
zonecfg -z $P_ZONE set pool=pool_$POOL
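The single-quote splicing in those poolcfg lines is easy to get wrong; building the argument under double quotes is less fragile. A dry-run sketch (the function name is mine, and the echo prefix prints the commands instead of running them):

```shell
#!/bin/sh
# Create one pset/pool pair with the poolcfg argument built under double
# quotes, instead of splicing variables between single-quoted fragments.
# Dry run: drop the echo prefix to execute for real.
make_pool() {
    pool=$1
    maxcpu=$2
    echo poolcfg -c "create pset pset_$pool (uint pset.min = 1; uint pset.max = $maxcpu)"
    echo poolcfg -c "create pool pool_$pool (string pool.scheduler=\"FSS\")"
    echo poolcfg -c "associate pool pool_$pool (pset pset_$pool)"
}

make_pool z5 8
```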

# check how many CPUs are in each
for a in 1 2 3 4 5; do echo rbs$a; zlogin rbs$a mpstat; done

for a in 1 2 3 4 5; do
POOL=z$a
zonecfg -z rbs$a remove pool=pool_$POOL
done

#### done it wrong!
# remove the pool association with the zones
for a in 1 2 3
do
zonecfg -z rbs$a clear pool
done

# remove the pools and psets
for a in 1 2 3 4 5
do
POOL=z$a
echo poolcfg -c \'destroy pset pset_$POOL\'
echo poolcfg -c \'destroy pool pool_$POOL\'
done

# try again.
# create a single 8 CPU pool for zones rbs1->rbs5
POOL=zones
POOL_MAXCPU=8
poolcfg -c 'create pset 'pset_$POOL' (uint pset.min = 1; uint pset.max = '$POOL_MAXCPU' )'
poolcfg -c 'create pool 'pool_$POOL' (string pool.scheduler="FSS")'
poolcfg -c 'associate pool 'pool_$POOL' (pset 'pset_$POOL')'
poolcfg -c info
pooladm -c

POOL=zones1
for a in 1 2 3 4 5
do
P_ZONE=rbs$a
echo zonecfg -z $P_ZONE set pool=pool_$POOL
done

# commit it
pooladm -c

# boot the zones
root@e4k # for a in 1 2 3 4 5
do
zoneadm -z rbs$a boot
done


POOL=zones
POOL_MAXCPU=4
poolcfg -c 'modify pset 'pset_$POOL' (uint pset.min = 1; uint pset.max = '$POOL_MAXCPU' )'


POOL=zones1
POOL_MAXCPU=8
poolcfg -c 'create pset 'pset_$POOL' (uint pset.min = 1; uint pset.max = '$POOL_MAXCPU' )'
poolcfg -c 'create pool 'pool_$POOL' (string pool.scheduler="FSS")'
poolcfg -c 'associate pool 'pool_$POOL' (pset 'pset_$POOL')'
poolcfg -c info
pooladm -c

POOL=zones1
for a in 1 2 3
do
P_ZONE=rbs$a
echo zonecfg -z $P_ZONE set pool=pool_$POOL
done

POOL=zones
for a in 4 5
do
P_ZONE=rbs$a
echo zonecfg -z $P_ZONE set pool=pool_$POOL
done


for a in 1 2 3 4 5
do
P_ZONE=rbs$a
echo $P_ZONE
zonecfg -z $P_ZONE info | grep -i pool
done

# boot the zones
for a in 1 2 3 4 5
do
zoneadm -z rbs$a boot
done

Friday, 11 July 2008

Replacing an SVM disk


root@sun-system:/ # metastat d6
d6: Mirror
Submirror 0: d46
State: Needs maintenance
Submirror 1: d86
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 8395200 blocks (4.0 GB)

d46: Submirror of d6
State: Needs maintenance
Invoke: metareplace d6 c0t2d0s6
Size: 8395200 blocks (4.0 GB)
Stripe 0:
Device     Start Block  Dbase  State        Reloc  Hot Spare
c0t2d0s6   0            No     Maintenance  Yes


d86: Submirror of d6
State: Okay
Size: 8395200 blocks (4.0 GB)
Stripe 0:
Device     Start Block  Dbase  State  Reloc  Hot Spare
c1t2d0s6   0            No     Okay   Yes


Device Relocation Information:
Device Reloc Device ID
c1t2d0 Yes id1,sd@SFUJITSU_MAT3073N_SUN72G_000529B0909M____AAN0P570909M

root@sun-system:/ # metasync d6
root@sun-system:/ # metareplace -e d6 c0t2d0s6
d6: device c0t2d0s6 is enabled
root@sun-system:/ # metastat d6
d6: Mirror
Submirror 0: d46
State: Resyncing
Submirror 1: d86
State: Okay
Resync in progress: 1 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 8395200 blocks (4.0 GB)

d46: Submirror of d6
State: Resyncing
Size: 8395200 blocks (4.0 GB)
Stripe 0:
Device     Start Block  Dbase  State      Reloc  Hot Spare
c0t2d0s6   0            No     Resyncing  Yes


d86: Submirror of d6
State: Okay
Size: 8395200 blocks (4.0 GB)
Stripe 0:
Device     Start Block  Dbase  State  Reloc  Hot Spare
c1t2d0s6   0            No     Okay   Yes


Device Relocation Information:
Device Reloc Device ID
c0t2d0 Yes id1,sd@SSEAGATE_ST373207LSUN72G_34311C8E____________3KT11C8E
c1t2d0 Yes id1,sd@SFUJITSU_MAT3073N_SUN72G_000529B0909M____AAN0P570909M

Solaris resource management

This isn't original, but then very little here is.

# Solaris 10 11/06 moved the enabling of pools to SMF, so to enable pools use:
svcadm enable system/pools:default
svcadm enable system/pools/dynamic:default


# Create processor set
poolcfg -c 'create pset pset_batch (uint pset.min = 2; uint pset.max = 4)'
poolcfg -c 'modify pset pset_batch (string pset.comment="Batch processing")'
poolcfg -c 'transfer to pset pset_batch ( cpu 121; cpu 122 )'


# Create the pool
poolcfg -c 'create pool pool_appl1_batch'
poolcfg -c 'modify pool pool_appl1_batch (string pool.comment="appl1 batch")'
poolcfg -c 'associate pool pool_appl1_batch (pset pset_batch)'


# Further pools can be created and associated with the same processor set:
poolcfg -c 'create pool pool_appl2_batch'
poolcfg -c 'modify pool pool_appl2_batch (string pool.comment="appl2 batch")'
poolcfg -c 'associate pool pool_appl2_batch (pset pset_batch)'


# activate the config
pooladm -c

# After the pool has been configured, it can be bound to a zone:
poolbind -p pool_foo -i zoneid myzone

# To pin or unpin a CPU, use the pooladm commands:
poolcfg -c "modify cpu 166 (boolean cpu.pinned=true)"
poolcfg -c "modify cpu 167 (boolean cpu.pinned=false)"
pooladm -c


# To allow a processor set to take CPUs from other pools when the server as a whole is busy, it should have the utilisation objective set, e.g.
poolcfg -c 'modify pset pset_foo ( string pset.poold.objectives="utilization < 90")'

# If the pool configuration is no longer required, remove the configuration & disable pools:
pooladm -x
pooladm -d

# Alternatively, for Solaris 10 11/06:
svcadm disable system/pools/dynamic:default
svcadm disable system/pools:default


# See also note on Solaris Projects

Project / Resource pool notes.

A project can be used to limit CPU and memory usage. (I'm not sure where this came from, but it's certainly not original.)

# to set a user's default project
usermod -K project=myproj myappl
# or edit /etc/user_attr

# to add a user to a project
projmod -a -U user3 myproj

# to remove a user from a project
projmod -r -U user1 myproj

# NOTE: This sets the project membership to be only the named user
# all other previous members are removed
projmod -U user1 myproj

# newtask spawns a shell in the new project by default
# to run a script in a project
newtask -p myproj /opt/application/bin/startup

# to change the project of the current shell
newtask -p myproj -c $$

# to see your current project
id -p

# To create a project
projadd -c "Application foo" -U fooadm -K project.pool=pool_bar user.fooadm

# to set a limit on shared memory to 4GB on the user.oracle project which already exists:
projmod -aK 'project.max-shm-memory=(privileged,4294967296,deny)' user.oracle

# to check
$ su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ prctl -n project.max-shm-memory $$
process: 24150: -ksh
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 4.00GB - deny -
system 16.0EB max deny -
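The 4294967296 in the rctl above is just 4 GiB expressed in bytes; shell arithmetic produces the value for any size, which beats typing a ten-digit number by hand:

```shell
# 4 GiB in bytes, for the privileged value in project.max-shm-memory.
GB=4
echo $((GB * 1024 * 1024 * 1024))
# prints 4294967296
```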


# to enable the Fair Share Scheduler
# Set the default class to FSS for the next reboot
dispadmin -d FSS
# set the default scheduler
priocntl -s -c FSS
# switch init to use the new scheduler
priocntl -s -c FSS -i pid 1
# switch all TS (Time Share) class processes to FSS
priocntl -s -c FSS -i class TS

# reboot is a good idea now to check that the config sticks
# you can see the scheduler in ps with
ps -eo 'pid comm class'

# priocntl has a '-i zone global' option that can be used when allocating shares to the global zone. It's not saved across reboots
# so needs to be made part of init if required

# for non-global zones (this will change the next boot)
zonecfg -z my-zone
zonecfg:my-zone> add rctl
zonecfg:my-zone:rctl> set name=zone.cpu-shares
zonecfg:my-zone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:my-zone:rctl> end
zonecfg:my-zone> verify
zonecfg:my-zone> commit
zonecfg:my-zone> exit


# for a running zone do above and also
prctl -i my-zone -n zone.cpu-shares -r -v 20

# projects within non-global zones are configured like in the global zone.
projmod -aK 'project.cpu-shares=(privileged,40,none)' myproj
prctl -n project.cpu-shares -r -v 60 -i project myproj

# verified with
prctl -n project.cpu-shares $pid

# resource caps can only be applied to projects
# enable the daemon (in each zone that uses resource capping)
rcapadm -E

# can only limit resident memory at the moment (rcap.rss-max)
projmod -aK rcap.rss-max=X projname

# monitoring

# check the utilisation of pools
poolstat 5 10

# list memory/CPU load by projects
prstat -J

# monitor resource cap limits
rcapstat

# monitor CPU stats by processor set
mpstat -a 10

# list pids in each scheduling class
priocntl -d -i all

Wednesday, 7 May 2008

Veritas Commands


  • Create Some Volumes in a disk group

    vxassist -g $DG make $volume 5g
    vxassist -g $DG make $volume 10g
    vxassist -g $DG make $volume 10g

  • Rename volumes

    vxedit -g $DG -v rename $volume1 $volume2

  • show disks in a device group

    vxdisk -g $DG list

  • how much can a volume be grown?

    vxassist -g $DG maxgrow $volume

  • how much is free on each disk in a disk group. Results are in 512-byte blocks on Solaris


    vxdg -g $DG free

  • how big is the biggest filesystem that can be created


    vxassist -g $DG maxsize

  • As above with layout specified

    vxassist -g $DG maxsize layout=stripe
    vxassist -g $DG maxsize layout=raid5
    vxassist -g $DG maxsize layout=concat

  • info about a disk group

    vxprint -hrtg $DG

  • remove a volume from a disk group

    vxassist -g $DG remove volume $volume

  • relayout to a concat

    vxassist -g $DG relayout $volume layout=concat

  • see what's happening in the background

    vxtask -l list

  • increase a volume and its filesystem at the same time

    vxresize -g $DG $volume 3584m
    vxresize -g $DG $volume +5g

  • Is Veritas licensed? Check the exit code.

    /opt/VRTS/bin/vxlicrep

  • You can only create/resize volumes on the cluster host which is the master. This will tell you if the host is a master, a slave, or inactive (not in a cluster).


    vxdctl -c mode

  • To enable multipath (dmp) paths use

    vxdmpadm enable path=$_base_dev

  • To bring a disk under Veritas control

    vxdisksetup -if $_emc_path format=cdsdisk privlen=32m

  • Add a disk to a Veritas disk group

    vxdg -g $DG adddisk $DISK_NAME=$EMC_DEV_NAME

  • How much space will be used in overheads on a Veritas filesystem?


    ((bytes in device) / (bytes/block) * (2 bits/block) / (8 bits/byte)) + (bytes in log) + (2 MB misc)


    for a 10 GB filesystem created via

    mkfs -o bsize=1024,logsize=16m

    the formula gives:


    (10G/1024*2/8)+16M+2M
    or
    ((10 * 2^30) / 1024 * 2 / 8) + (16 * 2^20) + (2 * 2^20) = 21495808 bytes
    or
    (10737418240 / 1024 * 2 / 8) + 16777216 + 2097152 = 21495808 = 20.5MB
    A clean 10G filesystem uses 20543488 bytes = 19.6 MB

    Which is close enough for an estimate.
  • remove a volume
    umount /mount/point
    vxassist -g $DG remove volume $volume

  • remove a disk group
    vxdg destroy $DG


  • To resize a volume and say which disks to leave alone
    vxresize -g $DG $volume 68G !${DG}_008 !${DG}_009

  • To move all the data off some disks so that they can be removed
    vxassist -g $DG move $volume !${DG}_008 !${DG}_009

  • Remove the disks from the disk group
    vxdg -g $DG rmdisk ${DG}_008 ${DG}_009

  • Rename the remaining disks in the disk group
    vxedit -g $DG -d rename ${DG}_010 ${DG}_003
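The filesystem-overhead arithmetic earlier in this list wraps neatly into a small function. A sketch (the function name is mine, and the result is the same estimate as the formula gives, not an exact figure):

```shell
#!/bin/sh
# Estimate VxFS metadata overhead in bytes, per the formula above:
# (bytes / bsize * 2 / 8) + logsize + 2 MB misc
vxfs_overhead() {
    bytes=$1 bsize=$2 logsize=$3
    echo $(( bytes / bsize * 2 / 8 + logsize + 2 * 1048576 ))
}

# 10 GB filesystem, bsize=1024, logsize=16m -> 21495808, matching the
# worked example above
vxfs_overhead $((10 * 1073741824)) 1024 $((16 * 1048576))
```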

Wednesday, 30 April 2008

General notes

# to boot past a corrupted /etc/system file use the following.

# it will prompt for the /etc/system file and you can reply /dev/null or /etc/system.backup etc.

# it can be used to recreate /etc/path_to_inst as well.

ok boot -as

# show errors on disks

iostat -En


# show when metastat states happened

metastat -t


# to label one disk the same as another

# this will prompt you to save the partition layout as a name. Don't forget the quotes.

# it only lasts as long as format is run

format -> select disk -> partition -> name

# this will prompt you for a name

select disk -> partition -> select

# now write the label


# To show the current kernel config and state use

sysdef