Zone Creation Under VCS
server2:
# haconf -makerw
# hatype -modify Zone LogDbg -delete DBG_1 DBG_2 DBG_3 DBG_4 DBG_5
# hatype -modify Zpool LogDbg -delete DBG_1 DBG_2 DBG_5
# haconf -dump -makero
You will see output like this (the zone resource name is usa0300uz1333):
bash-3.2# hagrp -resources zone_uz1333
usa0300uz1333
zone_uz1333_root
cluster ux592_ux593 (
    UserNames = { admin = dKLdKFkHLgLLjTLfKI,
        z_usa0300uz1329_usa0300ux593 = JKKlKHjQLfHEgELh,
        z_usa0300uz1329_usa0300ux592 = gJJkJGiPKeGDfDKg,
        z_usa0300uz1330_usa0300ux592 = chhIheGniCebDbiE,
        z_usa0300uz1330_usa0300ux593 = aLLmLIkRMgIFhFMi,
        z_usa0300uz1331_usa0300ux592 = gmmNmjLsnHjgIgnJ,
        z_usa0300uz1331_usa0300ux593 = gJJkJGiPKeGDfDKg,
        z_usa0300uz1332_usa0300ux593 = bkkLkhJqlFheGelH,
        z_usa0300uz1332_usa0300ux592 = aLLmLIkRMgIFhFMi,
        z_usa0300uz1333_usa0300ux593 = hllMliKrmGifHfmI,
        z_usa0300uz1333_usa0300ux592 = fQQrQNpWRlNKmKRn }
    Administrators = { admin }
    )
You can delete these users with hauser -delete z_*_* once the SG has been deleted from VCS.
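A sketch of removing one of these auto-created users (the user name is taken from the main.cf excerpt above; the configuration must be writable first):

```shell
# open the configuration for writing
haconf -makerw
# delete the zone user left behind after the SG was removed
hauser -delete z_usa0300uz1333_usa0300ux592
# save and close the configuration
haconf -dump -makero
```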
Once the resource is created you need to link the zone resource with the zpool resource. There is no need to create a Mount resource for the zone root file system; the agent will pick up the associated mount entry from the zone XML file.
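The link itself is a one-liner; a sketch using the resource names from the example above (the zone resource is the parent, so the zpool must come online first):

```shell
# parent first, child second: the zone depends on its root zpool
hares -link usa0300uz1333 zone_uz1333_root
```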
// group zone_uz1330
// {
// Zone usa0300uz1330
//     {
//     Zpool zone_uz1330_root
//     }
// }
Now copy the /etc/zones/index file to the other node and put the zone into the configured state on the node where you did not create the zone.
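A sketch of that step, assuming usa0300ux593 is the peer node and usa0300uz1333 is the zone (if the peer already has zones of its own, merge the entry into its index rather than overwriting the whole file):

```shell
# push the zone index to the peer node
scp /etc/zones/index usa0300ux593:/etc/zones/index
# on the peer node, verify the zone now appears in the configured state
zoneadm list -cv
```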
It is better to set autoboot=false in each zone on both servers (it should not affect anything, but it is safer to keep it false, since VCS controls zone boot).
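A sketch of setting it, run per zone on each node:

```shell
# keep the zone from booting outside VCS control
zonecfg -z usa0300uz1333 "set autoboot=false"
```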
Since it is required for failover, we need the zone XML on that node as well; create the zone root path there with 700 permissions.
Before bringing the SG online, once you have the zone XML file on both nodes, you can export and import the zone root zpool on the other node and test whether the zone boots outside of VCS.
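A sketch of that manual test, outside VCS control (pool and zone names follow the examples above):

```shell
# on the node currently holding the pool
zpool export zone_uz1333_root

# on the other node
zpool import zone_uz1333_root
zoneadm -z usa0300uz1333 boot    # does the zone boot outside VCS?
zoneadm -z usa0300uz1333 halt
```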
Failover works by detaching the zone from one node and attaching it on the other with the -F option, so it will not check for patch-level mismatches; it will simply attach. If you want to equalize patches with the zone, you can detach and attach with the -u option where required; normally we do not do this.
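The two attach variants, sketched with the zone name from the examples above:

```shell
zoneadm -z usa0300uz1333 detach
zoneadm -z usa0300uz1333 attach -F   # force attach, skip patch/package checks
# or, to update the zone's packages/patches to match the new host:
zoneadm -z usa0300uz1333 attach -u
```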
-- SYSTEM STATE
-- System               State
A  usa0300ux592         RUNNING
A  usa0300ux593         RUNNING

-- GROUP STATE
-- Group          System            State
B  zone_uz1329    usa0300ux592      ONLINE
B  zone_uz1329    usa0300ux593      OFFLINE
B  zone_uz1330    usa0300ux592      ONLINE
B  zone_uz1330    usa0300ux593      OFFLINE
B  zone_uz1331    usa0300ux592      ONLINE
B  zone_uz1331    usa0300ux593      OFFLINE
B  zone_uz1332    usa0300ux592      OFFLINE
B  zone_uz1332    usa0300ux593      ONLINE
B  zone_uz1333    usa0300ux592      ONLINE
B  zone_uz1333    usa0300ux593      OFFLINE
bash-3.2#
Once the zone is booted on the physical server, you need to enable passwordless authentication between the physical server and the virtual zone you created at the VCS level; otherwise the engine log will show a message asking you to run hazonesetup.
# zlogin usa0300uz1333
# export VCS_HOST=usa0300ux593          (the physical server name)
# /opt/VRTSvcs/bin/halogin z_yourZONE_server2 abc123
Here the user we need to provide is z_usa0300uz1333_usa0300ux593, the one created by the hazonesetup command earlier (you can see it in the main.cf file), and the password is hclr00t, the same one we provided to hazonesetup.
Whatever physical server name you exported in VCS_HOST (e.g. ux593), the user you provide must match it: z_zonename_usa0300ux593.
If my zone is on physical server usa0300ux592, I zlogin to the zone and run the same commands with the matching user and host:
# export VCS_HOST=usa0300ux592
# /opt/VRTSvcs/bin/halogin z_usa0300uz1333_usa0300ux592 hclr00t
halogin generates the .vcspwd file in the root account's home directory, but this file needs to be copied to /; otherwise the engine log will show hazonesetup passwordless-authentication messages during failover.
For example: root's home directory is /root, so when I ran halogin in the zone it created .vcspwd under /root.
Login name: root
Directory: /root
Shell: /sbin/sh

# cat /root/.vcspwd
100 usa0300ux592 z_usa0300uz1332_usa0300ux592 IppQpmOvqKmjLjqM
(this entry appeared while my zone was on physical server ux592, after running halogin with VCS_HOST exported)
bash-3.2#
So I copied this file into / and checked the failover again; it went through successfully without the messages, since VCS looks for this file under / of the zone's root account.
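The copy itself, run inside the zone (a sketch; -p preserves the file's ownership and mode):

```shell
# put the credential file where the VCS agent expects it
cp -p /root/.vcspwd /.vcspwd
```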
If halogin is not set up on both physical servers, it will not cause any real issue, but the messages will show up during failover.
# ls -l /.vcspwd                     (file exists?)
# cat /.vcspwd                       (if it exists, check how many lines are inside; it should be 2)
# ls -l /etc/VRTSvcs/.vcshost        (file exists?)
# VCS_HOST=server2; export VCS_HOST
# /opt/VRTSvcs/bin/halogin z_yourZONE_server2 <password-you-setbefore>
Below is another example of a zone SG that also has mount points under the zone, where you do need to create Mount resources.
If you have IPMP at the OS level, you should create an IPMP group for the public network under VCS, so you can put the zone's public IP address under VCS instead of placing the IP address in the zone config. I followed this approach with the BUR IPMP under VCS rather than in the zone config; we can do the same here.
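A sketch of placing the zone's public IP under VCS with an IPMultiNICB resource. The resource name, group, address, and netmask below are hypothetical placeholders; IPMP_BUR is the MultiNICB resource created in the commands further down:

```shell
hares -add zone_uz1333_ip IPMultiNICB zone_uz1333
hares -modify zone_uz1333_ip BaseResName IPMP_BUR      # the MultiNICB resource it rides on
hares -modify zone_uz1333_ip Address 192.0.2.10        # placeholder public IP
hares -modify zone_uz1333_ip NetMask 255.255.255.0
hares -modify zone_uz1333_ip Enabled 1
```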
Before adding an external mount point under the zone, make sure the respective mount point exists under the zone (/apps/oracle), and run export/import on the associated zpool (oracle) on both nodes.
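A sketch of a Mount resource for such a filesystem. The resource name, zone root path, and dataset name are hypothetical, built from the /apps/oracle mount point and oracle zpool mentioned above:

```shell
hares -add oracle_apps_mnt Mount zone_uz1333
hares -modify oracle_apps_mnt MountPoint /zones/usa0300uz1333/root/apps/oracle
hares -modify oracle_apps_mnt BlockDevice oracle/apps   # zfs dataset
hares -modify oracle_apps_mnt FSType zfs
hares -modify oracle_apps_mnt FsckOpt %-n
hares -modify oracle_apps_mnt Enabled 1
# mount comes online only after the zone is up
hares -link oracle_apps_mnt usa0300uz1333
```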
The Symantec case that helped with this setup is 05890758; you can find all the details in that case as well.
MultiNICB setup:
hagrp -add Network_Adapters
hagrp -modify Network_Adapters SystemList usa0300ux594 0 usa0300ux595 1
hagrp -modify Network_Adapters Parallel 1
hagrp -modify Network_Adapters AutoStartList usa0300ux594 usa0300ux595
hares -add IPMP_BUR MultiNICB Network_Adapters
hares -add IPMP_BUR_Phantom Phantom Network_Adapters
Recommendation:
The reason for the mpathd failure was that the network was unavailable for 13 seconds. In a perfect world the network should never be unavailable, but since we have seen this happen more than once, we could give mpathd more time to recover.
Current attributes:
-----------------------
MultiNICB   MonitorInterval   10
MultiNICB   ToleranceLimit    0
-----------------------
- We can increase the ToleranceLimit to 2 so that it gives more time before declaring the resource faulted. The side effect of increasing it too much is that it waits longer before detecting an actual failure. ToleranceLimit prolongs the monitoring cycles before a failure is declared.
We may also increase the ToleranceLimit for IPMultiNICB.
Current attributes:
-----------------------
IPMultiNICB   MonitorInterval   30
IPMultiNICB   ToleranceLimit    1
-----------------------
Please feel free to let me know if you need further information/clarification on this.
MonitorInterval means the agent monitors every 30 seconds, i.e. twice per minute. ToleranceLimit means that when the NIC faults, the agent waits that many extra monitor cycles before declaring it faulted.
To change the values you need to use hatype:
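A sketch of the recommended change (the configuration must be writable first):

```shell
haconf -makerw
hatype -modify MultiNICB ToleranceLimit 2
# optionally also raise it for IPMultiNICB:
hatype -modify IPMultiNICB ToleranceLimit 2
haconf -dump -makero
```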
Current MultiNICB and IPMultiNICB attribute values:

MultiNICB:
CleanRetryLimit          0
ConfInterval             600
InfoInterval             0
MonitorInterval          10
OfflineMonitorInterval   60
OfflineWaitLimit         0
OnlineRetryLimit         0
OnlineWaitLimit          0
RestartLimit             0
ToleranceLimit           2

IPMultiNICB:
ConfInterval             600
InfoInterval             0
MonitorInterval          30
OnlineRetryLimit         0
OnlineWaitLimit          0
RestartLimit             0
ToleranceLimit           1