Step-by-step Guide:
Oracle 10g Express Edition
The purpose of this document is to suggest procedures for creating a clustered Oracle 10g
Express Edition configuration with Linuxha.net. These procedures were tested with the
following configuration.
• Fedora Core 5
• DRBD version 0.7.20
• Linuxha.net version 1.2
• Oracle 10g Express Edition version 10.2.0.1
The Oracle processes will run under the oracle user credentials. The oracle user and dba
group must exist on both nodes and have the same user and group IDs.
All commands in this section are executed as root on both nodes, unless otherwise indicated.
1. Check whether the oracle user exists and that the user and group ids are the same on
both nodes.
# grep oracle /etc/passwd
If there is no result, the oracle user does not exist; continue to step 2. Otherwise,
compare the user and group IDs on both nodes. These are the third and fourth fields,
respectively, of the user account record, as shown below:
oracle:x:501:501::/usr/lib/oracle/xe:/bin/bash
If the user and group IDs are not the same on both nodes, proceed to step 6. Otherwise,
go to the next section.
3. Create the dba group and assign the larger of the group IDs identified in step 2, e.g. 504.
# groupadd --gid 504 dba
5. Create the oracle user and assign the larger of the user IDs identified in step 4, e.g. 504.
# useradd --uid 504 --gid dba --home-dir /usr/lib/oracle/xe -M oracle
6. If the dba group IDs are different, change the group ID on one of the nodes to match the
other; ensure that the new group ID is not already in use.
# grep 504 /etc/group # check that group id 504 is not in use
# groupmod -g 504 dba # change group id to 504
7. If the oracle user IDs are different, change the user ID on one of the nodes to match the
other; ensure that the new user ID is not already in use.
# grep 504 /etc/passwd # check that user id 504 is not in use
# usermod --uid 504 --gid dba oracle # change user id to 504
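Comparing the passwd fields by eye is error-prone; a small helper can print just the user and group IDs so the output from the two nodes can be compared directly. The passwd_ids function name and its optional FILE argument are conveniences for this sketch, not part of the original procedure.

```shell
# passwd_ids USER [FILE]
# Print "<uid> <gid>" for USER from a passwd-format file
# (defaults to /etc/passwd). Run on both nodes and compare the output.
passwd_ids() {
    awk -F: -v u="$1" '$1 == u { print $3, $4 }' "${2:-/etc/passwd}"
}

# Example: against the sample record shown earlier
# (oracle:x:501:501::...), this prints "501 501".
```

Note that `getent passwd oracle` returns the same record and, unlike a bare grep for the numeric ID, cannot match unrelated fields or substrings.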
Installation
Carry out these instructions on both nodes; all commands are executed as root.
2. Download Oracle 10g Express Edition i386 RPM package (this is a single-line command).
# wget http://download.oracle.com/otn/linux/oracle10g/xe/10201/oracle-xe-10.2.0.1-1.0.i386.rpm
5. Edit /etc/init.d/oracle-xe
By default, the oracle-xe script will start or stop the application only if it is enabled at
boot. This behaviour has to be changed so that the cluster can use the script to start and
stop the application.
Replace lines 597-628 in /etc/init.d/oracle-xe with the following:
case "$1" in
    start)
        start
        ;;
    configure)
        configure
        ;;
    stop)
        stop
        ;;
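After making the edit, it is worth confirming that the modified script still parses; this quick check is a suggested precaution, not a step from the original guide.

```shell
# Syntax-check the edited init script without executing it;
# sh -n parses the file and reports errors such as an
# unterminated case statement.
sh -n /etc/init.d/oracle-xe && echo "oracle-xe script parses OK"
```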
8. Configure firewall
If the firewall is running, it must be configured to permit access to the listener and HTTP
server ports defined in step 4. By default, these are ports 1521 and 8080 respectively.
Insert the following before the COMMIT command in /etc/sysconfig/iptables.
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT
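A quick way to confirm the edit is to check that both rules appear before the COMMIT line. The check_rules helper below is only an illustrative sketch, not part of the original procedure.

```shell
# check_rules FILE - succeed only when both ACCEPT rules
# (ports 1521 and 8080) appear before the COMMIT line.
check_rules() {
    awk '/^COMMIT/ { exit }
         /--dport 1521/ || /--dport 8080/ { n++ }
         END { exit n != 2 }' "$1"
}

check_rules /etc/sysconfig/iptables && echo "firewall rules present"
```

Remember to reload the firewall (e.g. service iptables restart) for the change to take effect.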
Directory/File                    Comment
oradata                           Database file directory
app/oracle/admin                  Database parameter file and alert log directory
app/oracle/flash_recovery_area    Redo log, archive log, and RMAN backup directory
Configuration
Primary Node
1. Log in as oracle
# su - oracle
4. Change the address of the default local listener to the virtual IP address (192.168.1.39).
SQL> ALTER SYSTEM
2 SET LOCAL_LISTENER='(ADDRESS = (PROTOCOL = TCP)
3 (HOST = 192.168.1.39)(PORT = 1521))';
SQL> ALTER SYSTEM REGISTER;
7. Exit SQL*Plus
SQL> EXIT
10. Configure the Oracle listener to use the virtual IP address assigned to the clustered
application.
Replace the host name in line 16 of ~/app/oracle/admin/network/listener.ora with the virtual IP address (192.168.1.39).
12. Copy the initialization parameter file - spfileXE.ora - and the password file - orapwXE -
from $ORACLE_HOME/dbs to ~/app/oracle/admin/XE/dbs.
$ cp $ORACLE_HOME/dbs/spfileXE.ora $ORACLE_HOME/dbs/orapwXE \
> ~/app/oracle/admin/XE/dbs
Secondary Node
1. Log in as oracle
# su - oracle
All commands are to be executed as root. Steps 1 to 3 must be executed on both nodes; the
rest on the primary node only.
1. Use fdisk to create a partition (/dev/sdb5) to be used by the application volume group.
This should be at least 2 GB.
2. Initialize partition
# pvcreate /dev/sdb5
4. Create a logical volume (adminlv) to store parameter files and alert logs; it should be
large enough to cater for growth of the alert logs.
# lvcreate --size 64M --name adminlv oraclevg
5. Create a logical volume (flashlv) for the flash recovery area; it needs to be large enough
to hold the database backups and redo logs (active and archived).
# lvcreate --size 512M --name flashlv oraclevg
6. Create a logical volume (oradatalv) to store the database files; it should be at least 1 GB.
# lvcreate --size 1200M --name oradatalv oraclevg
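The storage steps above can be collected into a single sketch. The vgcreate call is an assumption inferred from the oraclevg volume group that the lvcreate commands use (the corresponding step is not shown above), and the run wrapper simply prints each command unless RUN=1 is set, so the sequence can be reviewed before executing it as root.

```shell
#!/bin/sh
# Dry-run sketch of the storage setup; prints each command instead
# of executing it. Set RUN=1 and run as root to execute for real.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

run pvcreate /dev/sdb5                               # initialise partition
run vgcreate oraclevg /dev/sdb5                      # volume group (assumed step)
run lvcreate --size 64M --name adminlv oraclevg      # parameter files and alert logs
run lvcreate --size 512M --name flashlv oraclevg     # flash recovery area
run lvcreate --size 1200M --name oradatalv oraclevg  # database files
```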
Build Application
<check>
<name>flag_check</name>
<type>internal</type>
<module>flag_check oracle-xe</module>
<interval>5</interval>
<action_list>
<action rc="0" action="NOP"/>
<action rc="1" action="%RCDATA%"/>
<action rc="2" action="ABORT"/>
</action_list>
</check>
<check>
<name>db</name>
<type>internal</type>
<module>procmon /etc/cluster/oracle-xe/db.xml</module>
<interval>10</interval>
<action_list>
<action rc="0" action="NOP"/>
<action rc="1" action="STOP"/>
<action rc="2" action="FAILOVER"/>
</action_list>
</check>
<check>
<name>listener</name>
<type>internal</type>
<module>procmon /etc/cluster/oracle-xe/listener.xml</module>
<interval>15</interval>
<action_list>
<action rc="0" action="NOP"/>
<action rc="1" action="STOP"/>
<action rc="2" action="FAILOVER"/>
</action_list>
</check>
<check>
<name>fsmonitor</name>
<type>internal</type>
<module>fsmon oracle-xe</module>
<interval>10</interval>
<action_list>
<action rc="0" action="NOP"/>
<action rc="1" action="PAUSE 30"/>
<action rc="2" action="STOP"/>
<action rc="3" action="FAILOVER"/>
<action rc="10" action="PAUSE 60"/>
</action_list>
</check>
</lems_config>
8. Build oracle-xe.
# clbuildapp --application oracle-xe --sync
The following is the output of a successful build.
INFO 21/07/2006 23:37:06 Backups directory defaulted to /clbackup
INFO 21/07/2006 23:37:06
INFO 21/07/2006 23:37:06 Validation of Application 'oracle-xe' started.
INFO 21/07/2006 23:37:06 ['/var/log/cluster/build/oracle-xe-check-300607212337.log']
INFO 21/07/2006 23:37:07 Initial Validation of Application successful.
INFO 21/07/2006 23:37:08
INFO 21/07/2006 23:37:08 NOTE: Build of new application is being performed.
INFO 21/07/2006 23:37:08
INFO 21/07/2006 23:37:08 Host Environment Validation started.
INFO 21/07/2006 23:37:08 ['/var/log/cluster/build/oracle-xe-envcheck-300607212337.log']
INFO 21/07/2006 23:37:12 Host Environment Validation successful.
INFO 21/07/2006 23:37:12
INFO 21/07/2006 23:37:12 Cluster state : DOWN
INFO 21/07/2006 23:37:12 Application state: UNDEFINED
INFO 21/07/2006 23:37:12
INFO 21/07/2006 23:37:12 Volume Group Configuration started.
INFO 21/07/2006 23:37:12 ['/var/log/cluster/build/oracle-xe-lvm-300607212337.log']
INFO 21/07/2006 23:37:20 Volume Group Configuration successful.
INFO 21/07/2006 23:37:20
INFO 21/07/2006 23:37:20 Application Resource Allocation started.
INFO 21/07/2006 23:37:20 ['/var/log/cluster/build/oracle-xe-build-300607212337.log']
INFO 21/07/2006 23:37:33 Application Resource Allocation successful.
INFO 21/07/2006 23:37:33
INFO 21/07/2006 23:37:33 Application Data Synchronisation started.
INFO 21/07/2006 23:37:33 ['/var/log/cluster/build/oracle-xe-syncdata-300607212337.log']
Storage Syncing: 1200Mb/ 1Mb [0.1 % Complete]
Storage Syncing: 0Mb/ 0Mb [100 % Complete]
INFO 21/07/2006 23:49:46 Application Data Synchronisation successful.
INFO 21/07/2006 23:49:47
For the commands used in this section, the host name of the primary node is fc5s1, and the
secondary node is fc5s2. All commands are executed as root on either node unless indicated
otherwise.
File Systems
Process Monitors
General Monitors
}
Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
Service "XE_XPT" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully
6. Verify database connection using SQL*Plus on either node, by executing the following:
# su - oracle -c "sqlplus -S /nolog" << EOF
> CONNECT HR/HR@XE
> COL REGION_NAME FOR A30
> SELECT * FROM REGIONS;
> EXIT
> EOF
The following output is generated if the connection is successful:
REGION_ID REGION_NAME
---------- ------------------------------
1 Europe
2 Americas
3 Asia
4 Middle East and Africa