Performance and Tuning For Oracle WebLogic Server
June 2013
This document is for people who monitor performance and
tune the components in a WebLogic Server environment.
Oracle Fusion Middleware Performance and Tuning for Oracle WebLogic Server, 11g Release 1 (10.3.6)
E13814-08
Copyright © 2007, 2013, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it
on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical data"
pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As
such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and
license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of
the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software
License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software or hardware is developed for general use in a variety of information management
applications. It is not developed or intended for use in any inherently dangerous applications, including
applications that may create a risk of personal injury. If you use this software or hardware in dangerous
applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other
measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages
caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks
are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD,
Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced
Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle
Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your
access to or use of third-party content, products, or services.
Contents
3.2 Tuning Tips
7.4.7 Tuning the Stuck Thread Detection Behavior
7.5 Tuning Network I/O
7.5.1 Tuning Muxers
7.5.1.1 Java Muxer
7.5.1.2 Native Muxers
7.5.1.3 Non-Blocking IO Muxer
7.5.2 Which Platforms Have Performance Packs?
7.5.3 Enabling Performance Packs
7.5.4 Changing the Number of Available Socket Readers
7.5.5 Network Channels
7.5.6 Reducing the Potential for Denial of Service Attacks
7.5.6.1 Tuning Message Size
7.5.6.2 Tuning Complete Message Timeout
7.5.6.3 Tuning Number of File Descriptors
7.5.7 Tune the Chunk Parameters
7.5.8 Tuning Connection Backlog Buffering
7.5.9 Tuning Cached Connections
7.6 Setting Your Compiler Options
7.6.1 Compiling EJB Classes
7.6.2 Setting JSP Compiler Options
7.6.2.1 Precompile JSPs
7.6.2.2 Optimize Java Expressions
7.7 Using WebLogic Server Clusters to Improve Performance
7.7.1 Scalability and High Availability
7.7.2 How to Ensure Scalability for WebLogic Clusters
7.7.3 Database Bottlenecks
7.7.4 Session Replication
7.7.5 Asynchronous HTTP Session Replication
7.7.5.1 Asynchronous HTTP Session Replication using a Secondary Server
7.7.5.2 Asynchronous HTTP Session Replication using a Database
7.7.6 Invalidation of Entity EJBs
7.7.7 Invalidation of HTTP sessions
7.7.8 JNDI Binding, Unbinding and Rebinding
7.7.9 Running Multiple Server Instances on Multi-Core Machines
7.8 Monitoring a WebLogic Server Domain
7.8.1 Using the Administration Console to Monitor WebLogic Server
7.8.2 Using the WebLogic Diagnostic Framework
7.8.3 Using JMX to Monitor WebLogic Server
7.8.4 Using WLST to Monitor WebLogic Server
7.8.5 Resources to Monitor WebLogic Server
7.8.6 Third-Party Tools to Monitor WebLogic Server
7.9 Tuning Class and Resource Loading
7.9.1 Filtering Loader Mechanism
7.9.2 Class Caching
7.10 Server Migration with Database Leasing on RAC Clusters
8 Tuning the WebLogic Persistent Store
8.1 Overview of Persistent Stores
8.1.1 Using the Default Persistent Store
8.1.2 Using Custom File Stores and JDBC Stores
8.1.3 Using a JDBC TLOG Store
8.1.4 Using JMS Paging Stores
8.1.4.1 Using Flash Storage to Page JMS Messages
8.1.5 Using Diagnostic Stores
8.2 Best Practices When Using Persistent Stores
8.3 Tuning JDBC Stores
8.4 Tuning File Stores
8.4.1 Basic Tuning Information
8.4.2 Tuning a File Store Direct-Write-With-Cache Policy
8.4.2.1 Using Flash Storage to Increase Performance
8.4.2.2 Additional Considerations
8.4.3 Tuning the File Store Direct-Write Policy
8.4.4 Tuning the File Store Block Size
8.4.4.1 Setting the Block Size for a File Store
8.4.4.2 Determining the File Store Block Size
8.4.4.3 Determining the File System Block Size
8.4.4.4 Converting a Store with Pre-existing Files
8.5 Using a Network File System
8.5.1 Configuring Synchronous Write Policies
8.5.2 Test Server Restart Behavior
8.5.3 Handling NFS Locking Errors
8.5.3.1 Solution 1 - Copying Data Files to Remove NFS Locks
8.5.3.2 Solution 2 - Disabling File Locks in WebLogic Server File Stores
8.5.3.2.1 Disabling File Locking for the Default File Store
8.5.3.2.2 Disabling File Locking for a Custom File Store
8.5.3.2.3 Disabling File Locking for a JMS Paging File Store
8.5.3.2.4 Disabling File Locking for a Diagnostics File Store
9 Database Tuning
9.1 General Suggestions
9.2 Database-Specific Tuning
9.2.1 Oracle
9.2.2 Microsoft SQL Server
9.2.3 Sybase
10.2.2.3 Ready Bean Caching
10.2.3 Tuning the Query Cache
10.3 Tuning EJB Pools
10.3.1 Tuning the Stateless Session Bean Pool
10.3.2 Tuning the MDB Pool
10.3.3 Tuning the Entity Bean Pool
10.4 CMP Entity Bean Tuning
10.4.1 Use Eager Relationship Caching
10.4.1.1 Using Inner Joins
10.4.2 Use JDBC Batch Operations
10.4.3 Tuned Updates
10.4.4 Using Field Groups
10.4.5 include-updates
10.4.6 call-by-reference
10.4.7 Bean-level Pessimistic Locking
10.4.8 Concurrency Strategy
10.5 Tuning In Response to Monitoring Statistics
10.5.1 Cache Miss Ratio
10.5.2 Lock Waiter Ratio
10.5.3 Lock Timeout Ratio
10.5.4 Pool Miss Ratio
10.5.5 Destroyed Bean Ratio
10.5.6 Pool Timeout Ratio
10.5.7 Transaction Rollback Ratio
10.5.8 Transaction Timeout Ratio
10.6 Using the JDT Compiler
12.5 Using Pinned-To-Thread Property to Increase Performance
12.6 Database Listener Timeout under Heavy Server Loads
12.7 Disable Wrapping of Data Type Objects
12.8 Advanced Configurations for Oracle Drivers and Databases
12.9 Use Best Design Practices
13 Tuning Transactions
13.1 Logging Last Resource Transaction Optimization
13.1.1 LLR Tuning Guidelines
13.2 Read-only, One-Phase Commit Optimizations
14.14.8 Configuring a JMS Server to Actively Scan Destinations for Expired Messages
14.15 Tuning Applications Using Unit-of-Order
14.15.1 Best Practices
14.15.2 Using UOO and Distributed Destinations
14.15.3 Migrating Old Applications to Use UOO
14.16 Using One-Way Message Sends
14.16.1 Configure One-Way Sends On a Connection Factory
14.16.2 One-Way Send Support In a Cluster With a Single Destination
14.16.3 One-Way Send Support In a Cluster With Multiple Destinations
14.16.4 When One-Way Sends Are Not Supported
14.16.5 Different Client and Destination Hosts
14.16.6 XA Enabled On Client's Host Connection Factory
14.16.7 Higher QOS Detected
14.16.8 Destination Quota Exceeded
14.16.9 Change In Server Security Policy
14.16.10 Change In JMS Server or Destination Status
14.16.11 Looking Up Logical Distributed Destination Name
14.16.12 Hardware Failure
14.16.13 One-Way Send QOS Guidelines
14.17 Tuning the Messaging Performance Preference Option
14.17.1 Messaging Performance Configuration Parameters
14.17.2 Compatibility With the Asynchronous Message Pipeline
14.18 Client-side Thread Pools
14.19 Best Practices for JMS .NET Client Applications
18 Tuning Web Applications
18.1 Best Practices
18.1.1 Disable Page Checks
18.1.2 Use Custom JSP Tags
18.1.3 Precompile JSPs
18.1.4 Disable Access Logging
18.1.5 Use HTML Template Compression
18.1.6 Use Service Level Agreements
18.1.7 Related Reading
18.2 Session Management
18.2.1 Managing Session Persistence
18.2.2 Minimizing Sessions
18.2.3 Aggregating Session Data
18.3 Pub-Sub Tuning Guidelines
B Capacity Planning
B.1 Capacity Planning Factors
B.1.1 Programmatic and Web-based Clients
B.1.2 RMI and Server Traffic
B.1.3 SSL Connections and Performance
B.1.4 WebLogic Server Process Load
B.1.5 Database Server Capacity and User Storage Requirements
B.1.6 Concurrent Sessions
B.1.7 Network Load
B.1.8 Clustered Configurations
B.1.9 Server Migration
B.1.10 Application Design
B.2 Assessing Your Application Performance Objectives
B.3 Hardware Tuning
B.3.1 Benchmarks for Evaluating Performance
B.3.2 Supported Platforms
B.4 Network Performance
B.4.1 Determining Network Bandwidth
B.5 Related Information
Preface
This preface describes the document accessibility features and conventions used in this
guide—Performance and Tuning for Oracle WebLogic Server.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Conventions
The following text conventions are used in this document:
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1 Introduction and Roadmap
This chapter describes the contents and organization of this guide—Performance and
Tuning for Oracle WebLogic Server.
■ Section 1.1, "Document Scope and Audience"
■ Section 1.2, "Guide to this Document"
■ Section 1.3, "Performance Features of this Release"
■ Chapter 10, "Tuning WebLogic Server EJBs," provides information on how to tune
applications that use EJBs.
■ Chapter 11, "Tuning Message-Driven Beans," provides information on how to tune
Message-Driven beans.
■ Chapter 12, "Tuning Data Sources," provides information on how to tune JDBC
applications.
■ Chapter 13, "Tuning Transactions," provides information on how to tune Logging
Last Resource transaction optimization.
■ Chapter 14, "Tuning WebLogic JMS," provides information on how to tune
applications that use WebLogic JMS.
■ Chapter 15, "Tuning WebLogic JMS Store-and-Forward," provides information on
how to tune applications that use JMS Store-and-Forward.
■ Chapter 16, "Tuning WebLogic Message Bridge," provides information on how to
tune applications that use the WebLogic Message Bridge.
■ Chapter 17, "Tuning Resource Adapters," provides information on how to tune
applications that use resource adapters.
■ Chapter 18, "Tuning Web Applications," provides best practices for tuning
WebLogic Web applications and application resources.
■ Chapter 19, "Tuning Web Services," provides information on how to tune
applications that use Web services.
■ Chapter 20, "Tuning WebLogic Tuxedo Connector," provides information on how
to tune applications that use WebLogic Tuxedo Connector.
■ Appendix A, "Using the WebLogic 8.1 Thread Pool Model," provides information
on using execute queues.
■ Appendix B, "Capacity Planning," provides an introduction to capacity planning.
2 Top Tuning Recommendations for WebLogic Server
This chapter provides a short list of top performance tuning recommendations. Tuning
WebLogic Server and your WebLogic Server application is a complex and iterative
process. To get you started, we have created a short list of recommendations to help
you optimize your application's performance. These tuning techniques are applicable
to nearly all WebLogic applications.
■ Section 2.1, "Tune Pool Sizes"
■ Section 2.2, "Use the Prepared Statement Cache"
■ Section 2.3, "Use Logging Last Resource Optimization"
■ Section 2.4, "Tune Connection Backlog Buffering"
■ Section 2.5, "Tune the Chunk Size"
■ Section 2.6, "Use Optimistic or Read-only Concurrency"
■ Section 2.7, "Use Local Interfaces"
■ Section 2.8, "Use eager-relationship-caching"
■ Section 2.9, "Tune HTTP Sessions"
■ Section 2.10, "Tune Messaging Applications"
This chapter provides a tuning roadmap and tuning tips that you can use to improve
system performance:
■ Section 3.1, "Performance Tuning Roadmap"
■ Section 3.2, "Tuning Tips"
■ The configuration of hardware and software, such as CPU type, disk size versus
disk speed, and sufficient memory.
There is no single formula for determining your hardware requirements. The
process of determining what type of hardware and software configuration is
required to meet application needs adequately is called capacity planning.
Capacity planning requires assessment of your system performance goals and an
understanding of your application. Capacity planning for server hardware should
focus on maximum performance requirements. See Appendix B, "Capacity
Planning."
■ The ability to interoperate between domains, use legacy systems, and support
legacy data.
■ Development, implementation, and maintenance costs.
You will use this information to set realistic performance objectives for your
application environment, such as response times, throughput, and load on specific
hardware.
Disk I/O on an application server can be optimized by using faster disks or RAID,
disabling synchronous JMS writes, using JTA direct writes for transaction logs (TLOGs),
or increasing the HTTP log buffer.
Tip: Even if you find that the CPU is 100 percent utilized, you should
profile your application for performance improvements.
This chapter describes how to tune your operating system. Proper OS tuning improves
system performance by preventing the occurrence of error conditions. Operating
system error conditions always degrade performance. Typically, most error conditions
are related to TCP tuning parameters and are caused by the operating system's failure to
release old sockets left in the close_wait state. Common errors are "connection
refused" and "too many open files" on the server side, and "address in use:
connect" on the client side.
In most cases, these errors can be prevented by adjusting the TCP wait_time value
and the TCP queue size. Although users often find the need to make adjustments
when using tunnelling, OS tuning may be necessary for any protocol under
sufficiently heavy loads.
Tune your operating system according to your operating system documentation. For
Windows platforms, the default settings are usually sufficient. However, the Solaris
and Linux platforms usually need to be tuned appropriately.
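For example, on Linux the kinds of kernel parameters commonly adjusted for heavily loaded WebLogic Server machines include the following. The parameter names are real, but the values are illustrative assumptions only; verify the appropriate settings for your kernel version and workload in your operating system documentation.
# Representative /etc/sysctl.conf entries (apply with sysctl -p)
net.ipv4.tcp_fin_timeout = 30               # shorten how long closing sockets linger before release
net.ipv4.ip_local_port_range = 1024 65000   # widen the ephemeral port range for outbound connections
net.core.somaxconn = 4096                   # allow a deeper TCP accept backlog
fs.file-max = 65536                         # raise the system-wide open file descriptor limit
Per-process descriptor limits (for example, ulimit -n for the user that runs WebLogic Server) may also need to be raised to avoid "too many open files" errors.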
This chapter describes how to configure JVM tuning options for WebLogic Server. The
Java virtual machine (JVM) is a virtual "execution engine" instance that executes the
bytecodes in Java class files on a microprocessor. How you tune your JVM affects the
performance of WebLogic Server and your applications.
■ Section 5.1, "JVM Tuning Considerations"
■ Section 5.2, "Which JVM for Your System?"
■ Section 5.3, "Garbage Collection"
■ Section 5.4, "Enable Spinning for IA32 Platforms"
When an object can no longer be reached from any pointer in the running program, it is considered "garbage" and ready for
collection. A best practice is to tune the time spent doing garbage collection to within
5% of execution time.
The JVM heap size determines how often and how long the VM spends collecting
garbage. An acceptable rate for garbage collection is application-specific and should be
adjusted after analyzing the actual time and frequency of garbage collections. If you
set a large heap size, full garbage collection is slower, but it occurs less frequently. If
you set your heap size in accordance with your memory needs, full garbage collection
is faster, but occurs more frequently.
The goal of tuning your heap size is to minimize the time that your JVM spends doing
garbage collection while maximizing the number of clients that WebLogic Server can
handle at a given time. To ensure maximum performance during benchmarking, you
might set high heap size values to ensure that garbage collection does not occur during
the entire run of the benchmark.
You might see the following Java error if you are running out of heap space:
java.lang.OutOfMemoryError <<no stack trace available>>
java.lang.OutOfMemoryError <<no stack trace available>>
Exception in thread "main"
To modify heap space values, see Section 5.3.4, "Specifying Heap Size Values".
To configure WebLogic Server to detect automatically when you are running out of
heap space and to address low memory conditions in the server, see Section 5.3.8,
"Automatically Logging Low Memory Conditions" and Section 5.3.4, "Specifying Heap
Size Values".
where the logfile.txt 2>&1 command redirects both the standard error and
standard output to a log file.
On HP-UX, use the following option to redirect stderr and stdout to a single file:
-Xverbosegc:file=/tmp/gc$$.out
where $$ maps to the process ID (PID) of the Java process. Because the output
includes timestamps for when garbage collection ran, you can infer how often
garbage collection occurs.
3. Analyze the following data points:
a. How often is garbage collection taking place? In the weblogic.log file, compare
the time stamps around the garbage collection.
b. How long is garbage collection taking? Full garbage collection should not take
longer than 3 to 5 seconds.
c. What is your average memory footprint? In other words, what does the heap
settle back down to after each full garbage collection? If the heap always
settles to 85 percent free, you might set the heap size smaller.
4. Review the New generation heap sizes (Sun) or Nursery size (JRockit).
■ For JRockit: see Section 5.3.6, "JRockit JVM Heap Size Options".
■ For Sun: see Section 5.3.7, "Java HotSpot VM Heap Size Options".
5. Make sure that the heap size is not larger than the available free RAM on your
system.
Use as large a heap size as possible without causing your system to "swap" pages
to disk. The amount of free RAM on your system depends on your hardware
configuration and the memory requirements of running processes on your
machine. See your system administrator for help in determining the amount of
free RAM on your system.
6. If you find that your system is spending too much time collecting garbage (your
allocated virtual memory is more than your RAM can handle), lower your heap
size.
Typically, you should use 80 percent of the available RAM (not taken by the
operating system or other processes) for your JVM.
7. If you find that you have a large amount of available free RAM remaining, run
more instances of WebLogic Server on your machine.
Remember, the goal of tuning your heap size is to minimize the time that your
JVM spends doing garbage collection while maximizing the number of clients that
WebLogic Server can handle at a given time.
JVM vendors may provide other options to print comprehensive garbage
collection reports. For example, you can use the JRockit JVM -Xgcreport option
to print a comprehensive garbage collection report at program completion, see
"Viewing Garbage Collection Behavior", at
http://docs.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html.
For example, when you start a WebLogic Server instance from a java command line,
you could specify the JRockit VM heap size values as follows:
$ java -Xns10m -Xms512m -Xmx512m
The default size for these values is measured in bytes. Append the letter 'k' or 'K' to the
value to indicate kilobytes, 'm' or 'M' to indicate megabytes, and 'g' or 'G' to indicate
gigabytes. The example above allocates 10 megabytes of memory to the Nursery heap
sizes and 512 megabytes of memory to the minimum and maximum heap sizes for the
WebLogic Server instance running in the JVM.
For detailed information about setting the appropriate heap sizes for WebLogic's
JRockit JVM, see "Tuning the JRockit JVM" at
http://docs.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html.
For example, when you start a WebLogic Server instance from a java command line,
you could specify the HotSpot VM heap size values as follows:
$ java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m
The default size for these values is measured in bytes. Append the letter 'k' or 'K' to the
value to indicate kilobytes, 'm' or 'M' to indicate megabytes, and 'g' or 'G' to indicate
gigabytes. The example above allocates 128 megabytes of memory to the New
generation and maximum New generation heap sizes, and 512 megabytes of memory
to the minimum and maximum heap sizes for the WebLogic Server instance running
in the JVM.
See http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
for more information on the command-line options and environment
variables that can affect the performance characteristics of the Java HotSpot Virtual
Machine.
For additional examples of the HotSpot VM options, see:
■ "Standard Options for Windows (Win32) VMs" at
http://docs.oracle.com/javase/6/docs/technotes/tools/windows/java.html.
■ "Standard Options for Solaris VMs and Linux VMs" at
http://docs.oracle.com/javase/6/docs/technotes/tools/solaris/java.html.
The Java Virtual Machine document provides a detailed discussion of the Client and
Server implementations of the Java virtual machine for Java SE 5.0 at
http://docs.oracle.com/javase/1.5.0/docs/guide/vm/index.html.
5.4.2 JRockit
The JRockit VM automatically adjusts the spinning for different locks, eliminating the
need to set this parameter.
This chapter provides information on how to tune WLDF integration with JRockit
Mission Control Flight Recorder. WebLogic Diagnostic Framework (WLDF) provides
specific integration points with JRockit Mission Control Flight Recorder. WebLogic
Server events are propagated to the Flight Recorder for inclusion in a common data set
for runtime or post-incident analysis. See "Using WLDF with Oracle JRockit Flight
Recorder" in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.
■ Section 6.1, "Using JRockit Flight Recorder"
■ Section 6.2, "Using WLDF"
■ Section 6.3, "Tuning Considerations"
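For example, the following WLST fragment raises the WLDF diagnostic volume for a server named myserver. This is a sketch drawn from the surrounding context; it assumes you have already connected to the Administration Server with administrative privileges before entering the edit session.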
edit()
startEdit()
cd("Servers/myserver")
cd("ServerDiagnosticConfig")
cd("myserver")
cmo.setWLDFDiagnosticVolume("Medium")
save()
activate()
. . .
JRockit and WLDF generate global events that contain information about the recording
settings even when disabled. For example, JVM Metadata events list active recordings
and WLDF GlobalInformationEvents list the domain, server, machine, and
volume level. You can use WLDF Image capture to capture JFR recordings along with
other component information from WebLogic Server, such as configuration.
The Diagnostic Volume setting does not affect explicitly configured diagnostic
modules. By default, the Diagnostic Volume is set to Low. By default the JRockit
default recording is also Off. Setting the WLDF volume to Low or higher enables
WLDF event generation and the JVM events which would have been included by
default if the JRockit default recording were enabled separately. So if you set the
WLDF volume to Low, Medium, or High, you get both WLDF events and JVM events
recorded in the JFR; there is no need to separately enable the JRockit default recording.
6.2.1 Using JRockit controls outside of WLDF to control the default JVM recording
You can enable the JVM default recording (or another recording) and continue to
generate JVM events regardless of the WLDF volume setting. This allows you to
continue to generate JVM events when WLDF events are off.
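For example, on many JRockit R28 installations the default recording can be enabled with a JVM start-up option similar to the following (this is an assumption about your JRockit version; confirm the exact option in the JRockit documentation):
-XX:FlightRecorderOptions=defaultrecording=true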
Note: There is a known issue that can cause a potentially large set of
JVM events to be generated when WLDF captures the JFR data to the
image capture. The events generated are at the same timestamp as the
WLDF image capture and are mostly large numbers of JIT compilation
events. With the default JFR and WLDF settings, these events are seen
only during the WLDF image capture and can be ignored. The expectation
is that the VM events generated during Diagnostic Image capture
should have little performance impact, as they are generated in a short
window and are disabled during normal operation.
This chapter describes how to tune WebLogic Server to match your application needs.
■ Section 7.1, "Setting Java Parameters for Starting WebLogic Server"
■ Section 7.2, "Development vs. Production Mode Default Tuning Values"
■ Section 7.4, "Thread Management"
■ Section 7.5, "Tuning Network I/O"
■ Section 7.6, "Setting Your Compiler Options"
■ Section 7.7, "Using WebLogic Server Clusters to Improve Performance"
■ Section 7.8, "Monitoring a WebLogic Server Domain"
■ Section 7.9, "Tuning Class and Resource Loading"
■ Section 7.10, "Server Migration with Database Leasing on RAC Clusters"
■ For higher performance throughput, set the minimum java heap size equal to the
maximum heap size. For example:
"%JAVA_HOME%\bin\java" -server –Xms512m –Xmx512m -classpath %CLASSPATH% -
See Section 5.3.4, "Specifying Heap Size Values" for details about setting heap size
options.
The following table lists the performance-related configuration parameters that differ
when switching from development to production startup mode.
For information on switching the startup mode from development to production, see
"Change to Production Mode" in the Oracle WebLogic Server Administration Console
Help.
7.3 Deployment
The following sections provide information on how to improve deployment
performance:
■ Section 7.3.1, "On-demand Deployment of Internal Applications"
■ Section 7.3.2, "Use FastSwap Deployment to Minimize Redeployment Time"
■ Section 7.3.3, "Generic Overrides"
7.4.3 What are the SLA Requirements for Each Work Manager?
Service level agreement (SLA) requirements are defined by instances of request classes.
A request class expresses a scheduling guideline that a server instance uses to allocate
threads. See "Understanding Work Managers" in Configuring Server Environments for
Oracle WebLogic Server.
7.4.5 Understanding the Differences Between Work Managers and Execute Queues
The easiest way to conceptually visualize the difference between the execute queues of
previous releases and work managers is to correlate execute queues (or rather,
execute-queue managers) with work managers and decouple the one-to-one
relationship between execute queues and thread-pools.
For releases prior to WebLogic Server 9.0, incoming requests are put into a default
execute queue or a user-defined execute queue. Each execute queue has an associated
execute queue manager that controls an exclusive, dedicated thread-pool with a fixed
number of threads in it. Requests are added to the queue on a first-come-first-served
basis. The execute-queue manager then picks the first request from the queue and an
available thread from the associated thread-pool and dispatches the request to be
executed by that thread.
For releases of WebLogic Server 9.0 and higher, there is a single priority-based execute
queue in the server. Incoming requests are assigned an internal priority based on the
configuration of work managers you create to manage the work performed by your
applications. The server increases or decreases threads available for the execute queue
depending on the demand from the various work-managers. The position of a request
in the execute queue is determined by its internal priority:
■ The higher the priority, the closer it is placed to the head of the execute queue.
■ The closer a request is to the head of the queue, the more quickly it will be
dispatched to a thread and executed.
Work managers provide you the ability to better control thread utilization (server
performance) than execute-queues, primarily due to the many ways that you can
specify scheduling guidelines for the priority-based thread pool. These scheduling
guidelines can be set either as numeric values or as the capacity of a server-managed
resource, like a JDBC connection pool.
waits for a fixed amount of time for data to become available at a socket. If no data
arrives, the thread moves to the next socket.
subtracting from the value any Ethernet or TCP header sizes. Set this parameter to
the same value on the client and server.
■ weblogic.utils.io.chunkpoolsize—Sets the maximum size of the chunk
pool. The default value is 2048. The value may need to be increased if the server
starts to allocate and discard chunks in steady state. To determine if the value
needs to be increased, monitor the CPU profile or use a memory/heap profiler for
call stacks invoking the constructor weblogic.utils.io.Chunk.
■ weblogic.PartitionSize—Sets the number of pool partitions used (default is
4). The chunk pool can be a source of significant lock contention as each request to
access the pool must be synchronized. Partitioning the chunk pool spreads the
potential for contention over more than one partition.
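For example, while investigating chunk pool behavior you might add these properties to the server start command (the values shown are illustrative starting points, not recommendations, and should be validated by profiling):
-Dweblogic.utils.io.chunkpoolsize=4096 -Dweblogic.PartitionSize=8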
Server MBeans" in Developing Custom Management Utilities With JMX for Oracle
WebLogic Server.
<prefer-application-resources>
<resource-name>x/y</resource-name>
<resource-name>z*</resource-name>
</prefer-application-resources>
In this example, resource filtering has been configured for the exact resource name
"x/y" and for any resource whose name starts with "z". '*' is the only wild card pattern
allowed. Resources with names matching these patterns are searched for only on the
application classpath; the system classpath search is skipped.
See "Configuring Time Synchronization for the Cluster" in the Oracle Grid Infrastructure
Installation Guide.
This chapter describes how to tune the persistent store, which provides a built-in,
high-performance storage solution for WebLogic Server subsystems and services that
require persistence.
■ Section 8.1, "Overview of Persistent Stores"
■ Section 8.2, "Best Practices When Using Persistent Stores"
■ Section 8.3, "Tuning JDBC Stores"
■ Section 8.4, "Tuning File Stores"
■ Section 8.5, "Using a Network File System"
Before reading this chapter, Oracle recommends becoming familiar with WebLogic
store administration and monitoring. See "Using the WebLogic Persistent Store" in
Configuring Server Environments for Oracle WebLogic Server.
Note: Most Flash storage devices are a single point of failure and are
typically only accessible as a local device. They are suitable for JMS
server paging stores which do not recover data after a failure and
automatically reconstruct themselves as needed.
In most cases, Flash storage devices are not suitable for custom or
default stores, which typically contain data that must be safely
recoverable. A configured Directory attribute of a default or custom
store should not normally reference a directory that is on a single
point of failure device.
Use the following steps to use a Flash storage device to page JMS messages:
1. Set the JMS server Message Paging Directory attribute to the path of your
flash storage device, see Section 14.12.1, "Specifying a Message Paging Directory."
2. Tune the Message Buffer Size attribute (it controls when paging becomes
active). You may be able to use lower threshold values as faster I/O operations
provide improved load absorption. See Section 14.12.2, "Tuning the Message
Buffer Size Option."
3. Tune JMS Server quotas to safely account for any Flash storage space limitations.
This ensures that your JMS server(s) will not attempt to page more messages than
the device can store, potentially yielding runtime errors and/or automatic
shutdowns. As a conservative rule of thumb, assume page file usage will be at
least 1.5 * ((Maximum Number of Active Messages) * (512 + average message body
size)) rounded up to the nearest 16MB. See Section 14.7, "Defining Quota."
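As an illustration of this rule of thumb (the message count and size are assumptions chosen only for the arithmetic): a JMS server whose quotas allow at most 1,000,000 active messages with an average message body of 2,048 bytes would need at least 1.5 * 1,000,000 * (512 + 2,048) = 3,840,000,000 bytes, or roughly 3.6 GB, of page file space, so the Flash device should have at least that much free capacity.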
■ Add a new store only when the old store(s) no longer scale.
■ When performing head-to-head vendor comparisons, make sure all the write
policies for the persistent store are equivalent. Some non-WebLogic vendors
default to the equivalent of Disabled.
■ Depending on the synchronous write policy, custom and default stores have a
variety of additional tunable attributes that may improve performance. These
include CacheDirectory, MaxWindowBufferSize, IOBufferSize,
BlockSize, InitialSize, and MaxFileSize. For more information see the
JMSFileStoreMBean in the Oracle WebLogic Server MBean Reference.
whereas cache files are strictly for performance and not for high availability and can be
stored locally.
When the Direct-Write-With-Cache synchronous write policy is selected, there
are several additional tuning options that you should consider:
■ Setting the CacheDirectory. For performance reasons, the cache directory
should be located on a local file system. It is placed in the operating system temp
directory by default.
■ Increasing the MaxWindowBufferSize and IOBufferSize attributes. These
tune native memory usage of the file store.
■ Increasing the InitialSize and MaxFileSize tuning attributes. These tune
the initial size of a store, and the maximum file size of a particular file in the store
respectively.
■ Tuning the BlockSize attribute. See Section 8.4.4, "Tuning the File Store Block Size."
For more information on individual tuning parameters, see the JMSFileStoreMBean in
the Oracle WebLogic Server MBean Reference.
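As an illustration, the following WLST sketch adjusts several of these attributes on a custom file store. The store name, cache directory, and values are assumptions for the example only; run such a script in an edit session against your own store and validate the resulting settings in a test environment.
edit()
startEdit()
# Navigate to the custom file store to be tuned (assumed name)
cd('/FileStores/CustomFileStore-0')
cmo.setSynchronousWritePolicy('Direct-Write-With-Cache')
# Keep the cache on a local file system for performance
cmo.setCacheDirectory('/local/wlscache')
# Native memory buffers used by the store (illustrative values)
cmo.setMaxWindowBufferSize(2097152)
cmo.setIOBufferSize(2097152)
# Pre-allocate store files and match the block size to the file system
cmo.setInitialSize(104857600)
cmo.setMaxFileSize(209715200)
cmo.setBlockSize(4096)
save()
activate()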
■ To prevent unused cache files from consuming disk space, test and development
environments may need to be modified to periodically delete cache files that are
left over from temporarily created domains. In production environments, cache
files are managed automatically by the file store.
For file stores with the synchronous write policy of Direct-Write, you may be
directed by Oracle Support or a release note to set weblogic.Server options on the
command line or start script of the JVM that runs the store:
■ Globally changes all stores running in the JVM:
-Dweblogic.store.AvoidDirectIO=true
■ For the default store, where server-name is the name of the server hosting the
store:
-Dweblogic.store._WLS_server-name.AvoidDirectIO=true
■ The file store's disk drive has a 10,000 RPM rotational rate.
■ The disk drive has a battery-backed write-back cache.
and the messaging rate is measured at 166 messages per second.
In this example, the low messaging rate matches the disk drive's latency (10,000 RPM
/ 60 seconds = 166 RPS) even though a much higher rate is expected due to the
battery-backed write-back cache. Tuning the store's block size to match the file
systems' block size could result in a significant improvement.
In some other cases, tuning the block size may result in marginal or no improvement:
■ The caches are observed to yield low latency (so the I/O subsystem is not a
significant bottleneck).
■ Write-back caching is not used and performance is limited by larger disk drive
latencies.
There may be a trade off between performance and file space when using higher block
sizes. Multiple application records are packed into a single block only when they are
written concurrently. Consequently, a large block size may cause a significant increase
in store file sizes for applications that have little concurrent server activity and
produce small records. In this case, one small record is stored per block and the
remaining space in each block is unused. As an example, consider a Web Service
Reliable Messaging (WS-RM) application with a single producer that sends small 100
byte length messages, where the application is the only active user of the store.
Oracle recommends tuning the store block size to match the block size of the file
system that hosts the file store (typically 4096 for most file systems) when this yields a
performance improvement. Alternately, tuning the block size to other values (such as
paging and cache units) may yield performance gains. If tuning the block size does not
yield a performance improvement, Oracle recommends leaving the block size at the
default as this helps to minimize use of file system resources.
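For example, on many Linux systems you can check the block size of the file system that hosts the store with a command along these lines (the utility and its output vary by operating system, so treat this as an illustration and consult your OS documentation):
$ stat -f /path/to/file/store/directory
and look for the reported block size, which is commonly 4096 bytes.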
To set the block size of a store, use one of the following properties on the command
line or start script of the JVM that runs the store:
■ Globally sets the block size of all file stores that don't have pre-existing files.
-Dweblogic.store.BlockSize=block-size
■ Sets the block size for a specific file store that doesn’t have pre-existing files.
-Dweblogic.store.store-name.BlockSize=block-size
■ Sets the block size for the default file store, if the store doesn’t have pre-existing
files:
-Dweblogic.store._WLS_server-name.BlockSize=block-size
The value used to set the block size is an integer between 512 and 8192 which is
automatically rounded down to the nearest power of 2.
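For example (the property settings below are an assumed reconstruction, shown only to illustrate how global and store-specific settings combine for a domain that contains file stores named A and B):
-Dweblogic.store.BlockSize=8192 -Dweblogic.store.A.BlockSize=512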
then store B has a block size of 8192 and store A has a block size of 512.
Note: Setting the block size using command line properties only
takes effect for file stores that have no pre-existing files. If a store has
pre-existing files, the store continues to use the block size that was set
when the store was first created.
If Oracle WebLogic Server does not restart after an abrupt machine failure when JMS
messages and transaction logs are stored on an NFS mounted directory, the following
errors may appear in the server log files:
This error is due to the NFS system not releasing the lock on the stores. WebLogic
Server maintains locks on files used for storing JMS data and transaction logs to
protect from potential data corruption if two instances of the same WebLogic Server
are accidentally started. The NFS storage device does not become aware of machine
failure in a timely manner and the locks are not released by the storage device. As a
result, after abrupt machine failure, followed by a restart, any subsequent attempt by
WebLogic Server to acquire locks on the previously locked files may fail. Refer to your
storage vendor documentation for additional information on the locking of files stored
in NFS mounted directories on the storage device. If it is not reasonably possible to
tune locking behavior in your NFS environment, use one of the following two
solutions to unlock the logs and data files:
■ Section 8.5.3.1, "Solution 1 - Copying Data Files to Remove NFS Locks"
■ Section 8.5.3.2, "Solution 2 - Disabling File Locks in WebLogic Server File Stores"
With this solution, the WebLogic file locking mechanism continues to provide
protection from any accidental data corruption if multiple instances of the same
servers were accidently started. However, the servers must be restarted manually after
abrupt machine failures. File stores will create multiple consecutively numbered.DAT
files when they are used to store large amounts of data. All files may need to be copied
and renamed when this occurs.
You can also use the WebLogic Server Administration Console to disable WebLogic file
locking mechanisms for the default file store, a custom file store, a JMS paging file
store, and a Diagnostics file store, as described in the following sections:
■ Section 8.5.3.2.1, "Disabling File Locking for the Default File Store"
■ Section 8.5.3.2.2, "Disabling File Locking for a Custom File Store"
■ Section 8.5.3.2.3, "Disabling File Locking for a JMS Paging File Store"
■ Section 8.5.3.2.4, "Disabling File Locking for a Diagnostics File Store"
8.5.3.2.1 Disabling File Locking for the Default File Store Follow these steps to disable file
locking for the default file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the
Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Environment node and select Servers.
3. In the Summary of Servers list, select the server you want to modify.
4. Select the Configuration > Services tab.
5. Scroll down to the Default Store section and click Advanced.
6. Scroll down and deselect the Enable File Locking check box.
7. Click Save to save the changes. If necessary, click Activate Changes in the Change
Center.
8. Restart the server you modified for the changes to take effect.
The resulting config.xml entry looks like:
Example 8–3 Example config.xml Entry for Disabling File Locking for a Default File Store
<server>
<name>examplesServer</name>
...
<default-file-store>
<synchronous-write-policy>Direct-Write</synchronous-write-policy>
<io-buffer-size>-1</io-buffer-size>
<max-file-size>1342177280</max-file-size>
<block-size>-1</block-size>
<initial-size>0</initial-size>
<file-locking-enabled>false</file-locking-enabled>
</default-file-store>
</server>
8.5.3.2.2 Disabling File Locking for a Custom File Store Use the following steps to disable
file locking for a custom file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the
Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Services node and select Persistent
Stores.
3. In the Summary of Persistent Stores list, select the custom file store you want to
modify.
4. On the Configuration tab for the custom file store, click Advanced to display
advanced store settings.
5. Scroll down to the bottom of the page and deselect the Enable File Locking check
box.
6. Click Save to save the changes. If necessary, click Activate Changes in the Change
Center.
7. If the custom file store was in use, you must restart the server for the changes to
take effect.
The resulting config.xml entry looks like:
Example 8–4 Example config.xml Entry for Disabling File Locking for a Custom File
Store
<file-store>
<name>CustomFileStore-0</name>
<directory>C:\custom-file-store</directory>
<synchronous-write-policy>Direct-Write</synchronous-write-policy>
<io-buffer-size>-1</io-buffer-size>
<max-file-size>1342177280</max-file-size>
<block-size>-1</block-size>
<initial-size>0</initial-size>
<file-locking-enabled>false</file-locking-enabled>
<target>examplesServer</target>
</file-store>
8.5.3.2.3 Disabling File Locking for a JMS Paging File Store Use the following steps to
disable file locking for a JMS paging file store using the WebLogic Server
Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the
Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Services node, expand the Messaging
node, and select JMS Servers.
3. In the Summary of JMS Servers list, select the JMS server you want to modify.
4. On the Configuration > General tab for the JMS Server, scroll down and deselect
the Paging File Locking Enabled check box.
5. Click Save to save the changes. If necessary, click Activate Changes in the Change
Center.
6. Restart the server you modified for the changes to take effect.
The resulting config.xml entry looks like:
Example 8–5 Example config.xml Entry for Disabling File Locking for a JMS Paging File
Store
<jms-server>
<name>examplesJMSServer</name>
<target>examplesServer</target>
<persistent-store>exampleJDBCStore</persistent-store>
...
<paging-file-locking-enabled>false</paging-file-locking-enabled>
...
</jms-server>
8.5.3.2.4 Disabling File Locking for a Diagnostics File Store Use the following steps to
disable file locking for a Diagnostics file store using the WebLogic Server
Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the
Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Diagnostics node and select Archives.
3. In the Summary of Diagnostic Archives list, select the server name of the archive
that you want to modify.
4. On the Settings for [server_name] page, deselect the Diagnostic Store File
Locking Enabled check box.
5. Click Save to save the changes. If necessary, click Activate Changes in the Change
Center.
6. Restart the server you modified for the changes to take effect.
The resulting config.xml entry looks like:
Example 8–6 Example config.xml Entry for Disabling File Locking for a Diagnostics File
Store
<server>
<name>examplesServer</name>
...
<server-diagnostic-config>
<diagnostic-store-dir>data/store/diagnostics</diagnostic-store-dir>
<diagnostic-store-file-locking-enabled>false</diagnostic-store-file-locking-enabled>
<diagnostic-data-archive-type>FileStoreArchive</diagnostic-data-archive-type>
<data-retirement-enabled>true</data-retirement-enabled>
<preferred-store-size-limit>100</preferred-store-size-limit>
<store-size-check-period>1</store-size-check-period>
</server-diagnostic-config>
</server>
This chapter describes how to tune your database to prevent it from becoming a major
enterprise-level bottleneck. Configure your database for optimal performance by
following the tuning guidelines in this chapter and in the product documentation for
the database you are using.
■ Section 9.1, "General Suggestions"
■ Section 9.2, "Database-Specific Tuning"
9.2.1 Oracle
This section describes performance tuning for Oracle.
■ Number of processes — On most operating systems, each connection to the Oracle
server spawns a shadow process to service the connection. Thus, the maximum
number of processes allowed for the Oracle server must account for the number of
simultaneous users, as well as the number of background processes used by the
Oracle server. The default number is usually not big enough for a system that
needs to support a large number of concurrent operations. For platform-specific
issues, see your Oracle administrator's guide. The current setting of this parameter
can be obtained with the following query:
SELECT name, value FROM v$parameter WHERE name = 'processes';
■ Buffer pool size — The buffer pool usually is the largest part of the Oracle server
system global area (SGA). This is the location where the Oracle server caches data
that it has read from disk. For read-mostly applications, the single most important
statistic that affects database performance is the buffer cache hit ratio. The buffer
pool should be large enough to provide upwards of a 95% cache hit ratio. Set the
buffer pool size by changing the value, in database blocks, of the db_cache_size
parameter in the init.ora file.
■ Shared pool size — The shared pool is an important part of the Oracle server
system global area (SGA). The SGA is a group of shared memory structures that
contain data and control information for one Oracle database instance. If multiple
users are concurrently connected to the same instance, the data in the instance's
SGA is shared among the users. The shared pool portion of the SGA caches data
for two major areas: the library cache and the dictionary cache. The library cache
stores SQL-related information and control structures (for example, parsed SQL
statement, locks). The dictionary cache stores operational metadata for SQL
processing.
For most applications, the shared pool size is critical to Oracle performance. If the
shared pool is too small, the server must dedicate resources to managing the
limited amount of available space. This consumes CPU resources and causes
contention because Oracle imposes restrictions on the parallel management of the
various caches. The more you use triggers and stored procedures, the larger the
shared pool must be. The SHARED_POOL_SIZE initialization parameter specifies
the size of the shared pool in bytes.
The following query monitors the amount of free memory in the shared pool:
SELECT * FROM v$sgastat
WHERE name = 'free memory' AND pool = 'shared pool';
■ Maximum opened cursors — To prevent any single connection from taking all the
resources in the Oracle server, the OPEN_CURSORS initialization parameter allows
administrators to limit the maximum number of opened cursors for each
connection. Unfortunately, the default value for this parameter is too small for
systems such as WebLogic Server. Cursor information can be monitored using the
following query:
SELECT name, value FROM v$sysstat
WHERE name LIKE 'opened cursor%';
■ Database block size — A block is Oracle's basic unit for storing data and the
smallest unit of I/O. One data block corresponds to a specific number of bytes of
physical database space on disk. This concept of a block is specific to Oracle
RDBMS and should not be confused with the block size of the underlying
operating system. Since the block size affects physical storage, this value can be set
only during the creation of the database; it cannot be changed once the database
has been created. The current setting of this parameter can be obtained with the
following query:
SELECT name, value FROM v$parameter WHERE name = 'db_block_size';
■ Sort area size — Increasing the sort area increases the performance of large sorts
because it allows the sort to be performed in memory during query processing.
This can be important, as there is only one sort area for each connection at any
point in time. The default value of this init.ora parameter is usually the size of
6–8 data blocks. This value is usually sufficient for OLTP operations but should be
increased for decision support operations, large bulk operations, or large
index-related operations (for example, recreating an index). When performing
these types of operations, you should tune the following init.ora parameters
(which are currently set for 8K data blocks):
sort_area_size = 65536
sort_area_retained_size = 65536
9.2.3 Sybase
The following guidelines pertain to performance tuning parameters for Sybase
databases. For more information about these parameters, see your Sybase
documentation.
■ A lower recovery interval setting results in more frequent checkpoint operations,
which in turn results in more I/O operations.
This chapter describes how to tune WebLogic Server EJBs for your application
environment.
■ Section 10.1, "General EJB Tuning Tips"
■ Section 10.2, "Tuning EJB Caches"
■ Section 10.3, "Tuning EJB Pools"
■ Section 10.4, "CMP Entity Bean Tuning"
■ Section 10.5, "Tuning In Response to Monitoring Statistics"
■ Section 10.6, "Using the JDT Compiler"
in heterogeneous clusters, where not all EJBs have been deployed to all WebLogic
Server instances. In these cases, WebLogic Server uses a multitier connection to
access the datastore, rather than multiple direct connections. This approach uses
fewer resources, and yields better performance for the transaction. However, for
best performance, the cluster should be homogeneous — all EJBs should reside on
all available WebLogic Server instances.
fields from the database that are included in the field group. This means that if most
transactions do not use a particular field that is slow to load, such as a BLOB, it can be
excluded from a field-group. Similarly, if an entity bean has a lot of fields, but a
transaction uses only a small number of them, the unused fields can be excluded.
Note: Be careful to ensure that fields that are accessed in the same
transaction are not configured into separate field-groups. If that
happens, multiple database calls occur to load the same bean, when
one would have been enough.
10.4.5 include-updates
This flag causes the EJB container to flush all modified entity beans to the database
before executing a finder. If the application modifies the same entity bean more than
once and executes a non-pk finder in-between in the same transaction, multiple
updates to the database are issued. This flag is turned on by default to comply with
the EJB specification.
If the application has transactions where two invocations of the same or different
finders could return the same bean instance and that bean instance could have been
modified between the finder invocations, it makes sense to leave include-updates
turned on. If not, this flag may be safely turned off. This eliminates an unnecessary
flush to the database if the bean is modified again after executing the second finder.
This flag is specified for each finder in the cmp-rdbms descriptor.
10.4.6 call-by-reference
When it is turned off, method parameters to an EJB are passed by value, which
involves serialization. For mutable, complex types, this can be significantly expensive.
Consider using call-by-reference for better performance when:
■ The application does not require call-by-value semantics; for example, method
parameters are not modified by the EJB.
or
■ If modified by the EJB, the changes need not be invisible to the caller of the
method.
This flag applies to all EJBs, not just entity EJBs. It also applies to EJB invocations
between servlets/JSPs and EJBs in the same application. The flag is turned off by
default to comply with the EJB specification. This flag is specified at the bean-level in
the WebLogic-specific deployment descriptor.
Note: If the lock is not an exclusive lock, you may encounter deadlock
conditions. If the database lock is a shared lock, there is potential for
deadlocks when using that RDBMS.
A high cache miss ratio could be indicative of an improperly sized cache. If your
application uses a certain subset of beans (read primary keys) more frequently than
others, it would be ideal to size your cache large enough so that the commonly used
beans can remain in the cache as less commonly used beans are cycled in and out upon
demand. If this is the nature of your application, you may be able to decrease your
cache miss ratio significantly by increasing the maximum size of your cache.
If your application doesn't necessarily use a subset of beans more frequently than
others, increasing your maximum cache size may not affect your cache miss ratio. We
recommend testing your application with different maximum cache sizes to determine
which gives the lowest cache miss ratio. It is also important to keep in mind that your
server has a finite amount of memory and therefore there is always a trade-off to
increasing your cache size.
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If
acceptable for your application, a concurrency strategy of Database or Optimistic will
allow for more parallelism than an Exclusive strategy and remove the need for locking
at the EJB container level.
Because locks are generally held for the duration of a transaction, reducing the
duration of your transactions will free up beans more quickly and may help reduce
your lock waiter ratio. To reduce transaction duration, avoid grouping large amounts
of work into a single transaction unless absolutely necessary.
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned
about the lock timeout ratio for your bean, first take a look at the lock waiter ratio and
our recommendations for reducing it (including possibly changing your concurrency
strategy). If you can reduce or eliminate the number of times a thread has to wait for a
lock on a bean, you will also reduce or eliminate the amount of timeouts that occur
while waiting.
A high lock timeout ratio may also be indicative of an improper transaction timeout
value. The maximum amount of time a thread will wait for a lock is equal to the
current transaction timeout value.
If the transaction timeout value is set too low, threads may not wait long enough to
obtain access to a bean and may time out prematurely. If this is the case, increasing the
trans-timeout-seconds value for the bean may help reduce the lock timeout ratio.
Take care when increasing the trans-timeout-seconds, however, because doing so can
cause threads to wait longer for a bean and threads are a valuable server resource.
Also, doing so may increase the request time, as a request may wait longer before
timing out.
If your pool miss ratio is high, you must determine what is happening to your bean
instances. There are three things that can happen to your beans.
■ They are in use.
■ They were destroyed.
■ They were removed.
Follow these steps to diagnose the problem:
1. Check your destroyed bean ratio to verify that bean instances are not being
destroyed.
2. Investigate the cause and try to remedy the situation.
3. Examine the demand for the EJB, perhaps over a period of time.
One way to check this is via the Beans in Use Current Count and Idle Beans Count
displayed in the Administration Console. If demand for your EJB spikes during a
certain period of time, you may see a lot of pool misses as your pool is emptied and
unable to fill additional requests.
As the demand for the EJB drops and beans are returned to the pool, many of the
beans created to satisfy requests may be unable to fit in the pool and are therefore
removed. If this is the case, you may be able to reduce the number of pool misses by
increasing the maximum size of your free pool. This may allow beans that were
created to satisfy demand during peak periods to remain in the pool so they can be
used again when demand once again increases.
A high pool timeout ratio could be indicative of an improperly sized free pool.
Increasing the maximum size of your free pool via the max-beans-in-free-pool
setting will increase the number of bean instances available to service requests and
may reduce your pool timeout ratio.
Another factor affecting the number of pool timeouts is the configured transaction
timeout for your bean. The maximum amount of time a thread will wait for a bean
from the pool is equal to the default transaction timeout for the bean. Increasing the
trans-timeout-seconds setting in your weblogic-ejb-jar.xml file will give
threads more time to wait for a bean instance to become available.
Users should exercise caution when increasing this value, however, since doing so
may cause threads to wait longer for a bean and threads are a valuable server resource.
Also, request time might increase because a request will wait longer before timing out.
Begin investigating a high transaction rollback ratio by examining the Section 10.5.8,
"Transaction Timeout Ratio" reported in the Administration Console. If the transaction
timeout ratio is higher than you expect, try to address the timeout problem first.
An unexpectedly high transaction rollback ratio could be caused by a number of
things. We recommend investigating the cause of transaction rollbacks to find
potential problems with your application or a resource used by your application.
A high transaction timeout ratio could be caused by the wrong transaction timeout
value. For example, if your transaction timeout is set too low, you may be timing out
transactions before the thread is able to complete the necessary work. Increasing your
transaction timeout value may reduce the number of transaction timeouts.
You should exercise caution when increasing this value, however, since doing so can
cause threads to wait longer for a resource before timing out. Also, request time might
increase because a request will wait longer before timing out.
A high transaction timeout ratio could be caused by a number of things such as a
bottleneck for a server resource. We recommend tracing through your transactions to
investigate what is causing the timeouts so the problem can be addressed.
This chapter provides tuning and best practice information for Message-Driven Beans
(MDBs).
■ Section 11.1, "Use Transaction Batching"
■ Section 11.2, "MDB Thread Management"
■ Section 11.3, "Best Practices for Configuring and Deploying MDBs Using
Distributed Topics"
■ Section 11.4, "Using MDBs with Foreign Destinations"
■ Section 11.5, "Token-based Message Polling for Transactional MDBs Listening on
Queues/Topics"
■ Section 11.6, "Compatibility for WLS 10.0 and Earlier-style Polling"
■ In WebLogic Server 8.1, you could increase the size of the default execute queue
knowing that a larger default pool means a larger maximum MDB concurrency.
Default thread pool MDBs upgraded to WebLogic Server 9.0 will have a fixed
maximum of 16. To achieve MDB concurrency numbers higher than 16, you will
need to create a custom work manager or custom execute queue. See Table 11–1.
Caution:
Non-transactional Foreign Topics: Oracle recommends explicitly setting
max-beans-in-free-pool to 1 for non-transactional MDBs that
work with foreign (non-WebLogic) topics. Failure to do so may result
in lost messages in the event of certain failures, such as the MDB
application throwing Runtime or Error exceptions.
Unit-of-Order: Oracle recommends explicitly setting
max-beans-in-free-pool to 1 for non-transactional
Compatibility mode MDBs that consume from a WebLogic JMS
topic and process messages that have a WebLogic JMS Unit-of-Order
value. Unit-of-Order messages in this use case may not be processed
in order unless max-beans-in-free-pool is set to 1.
The following sections provide information on the behavior of WebLogic Server when
using MDBs that consume messages from Foreign destinations:
■ Section 11.4.1, "Concurrency for MDBs that Process Messages from Foreign
Destinations"
■ Section 11.4.2, "Thread Utilization for MDBs that Process Messages from Foreign
Destinations"
11.4.1 Concurrency for MDBs that Process Messages from Foreign Destinations
The concurrency of MDBs that consume from destinations hosted by foreign providers
(non-WebLogic JMS destinations) is determined using the same algorithm that is used
for WebLogic JMS destinations.
11.4.2 Thread Utilization for MDBs that Process Messages from Foreign Destinations
The following section provides information on how threads are allocated when
WebLogic Server interoperates with MDBs that process messages from foreign
destinations:
This chapter provides tips on how to get the best performance from your WebLogic
data sources.
■ Section 12.1, "Tune the Number of Database Connections"
■ Section 12.2, "Waste Not"
■ Section 12.3, "Use Test Connections on Reserve with Care"
■ Section 12.4, "Cache Prepared and Callable Statements"
■ Section 12.5, "Using Pinned-To-Thread Property to Increase Performance"
■ Section 12.6, "Database Listener Timeout under Heavy Server Loads"
■ Section 12.7, "Disable Wrapping of Data Type Objects"
■ Section 12.8, "Advanced Configurations for Oracle Drivers and Databases"
■ Section 12.9, "Use Best Design Practices"
This chapter explains how to get the most out of your applications by implementing
the administrative performance tuning features available with WebLogic JMS.
■ Section 14.1, "JMS Performance & Tuning Check List"
■ Section 14.2, "Handling Large Message Backlogs"
■ Section 14.3, "Cache and Re-use Client Resources"
■ Section 14.4, "Tuning Distributed Queues"
■ Section 14.5, "Tuning Topics"
■ Section 14.6, "Tuning for Large Messages"
■ Section 14.7, "Defining Quota"
■ Section 14.8, "Blocking Senders During Quota Conditions"
■ Section 14.9, "Tuning MessageMaximum"
■ Section 14.10, "Setting Maximum Message Size for Network Protocols"
■ Section 14.11, "Compressing Messages"
■ Section 14.12, "Paging Out Messages To Free Up Memory"
■ Section 14.13, "Controlling the Flow of Messages on JMS Servers and Destinations"
■ Section 14.14, "Handling Expired Messages"
■ Section 14.15, "Tuning Applications Using Unit-of-Order"
■ Section 14.16, "Using One-Way Message Sends"
■ Section 14.17, "Tuning the Messaging Performance Preference Option"
■ Section 14.18, "Client-side Thread Pools"
■ Section 14.19, "Best Practices for JMS .NET Client Applications"
■ Avoid large message backlogs. See Section 14.2, "Handling Large Message
Backlogs."
■ Create and use custom connection factories with all applications instead of using
default connection factories, including when using MDBs. Default connection
factories are not tunable, while custom connection factories provide many options
for performance tuning.
■ Write applications so that they cache and re-use JMS client resources, including
JNDI contexts and lookups, and JMS connections, sessions, consumers, or
producers. These resources are relatively expensive to create. For information on
detecting when caching is needed, as well as on built-in pooling features, see
Section 14.3, "Cache and Re-use Client Resources."
■ For asynchronous consumers and MDBs, tune MessagesMaximum on the
connection factory. Increasing MessagesMaximum can improve performance;
decreasing MessagesMaximum to its minimum value can lower performance, but
helps ensure that messages do not end up waiting for a consumer that's already
processing a message. See Section 14.9, "Tuning MessageMaximum."
■ Avoid single threaded processing when possible. Use multiple concurrent
producers and consumers and ensure that enough threads are available to service
them.
■ Tune server-side applications so that they have enough instances. Consider
creating dedicated thread pools for these applications. See Section 11, "Tuning
Message-Driven Beans."
■ For client-side applications with asynchronous consumers, tune client-side thread
pools using Section 14.18, "Client-side Thread Pools."
■ Tune persistence as described in Section 8, "Tuning the WebLogic Persistent Store."
In particular, it's normally best for multiple JMS servers, destinations, and other
services to share the same store so that the store can aggregate concurrent requests
into single physical I/O requests, and to reduce the chance that a JTA transaction
spans more than one store. Multiple stores should only be considered once it's
been established that a single store is not scaling to handle the current load.
■ If you have large messages, see Section 14.6, "Tuning for Large Messages."
■ Prevent unnecessary message routing in a cluster by carefully configuring
connection factory targets. Messages potentially route through two servers, as they
flow from a client, through the client's connection host, and then on to a final
destination. For server-side applications, target connection factories to the cluster.
For client-side applications that work with a distributed destination, target
connection factories only to servers that host the distributed destination's
members. For client-side applications that work with a singleton destination,
target the connection factory to the same server that hosts the destination.
■ If JTA transactions include both JMS and JDBC operations, consider enabling the
JDBC LLR optimization. LLR is a commonly used safe "ACID" optimization that
can lead to significant performance improvements, with some drawbacks. See
Section 13, "Tuning Transactions."
■ If you are using Java clients, avoid thin Java clients except when a small jar size is
more important than performance. Thin clients use the slower IIOP protocol even
when T3 is specified, so use a full Java client instead. See Programming Stand-alone
Clients for Oracle WebLogic Server.
■ Tune JMS Store-and-Forward according to Section 15, "Tuning WebLogic JMS
Store-and-Forward."
messages (make sure they have called start()), high "pending" counts for queues,
already processed persistent messages re-appearing after a shutdown and restart,
and already processed transactional messages re-appearing after a delay (the
default JTA timeout is 30 seconds, and the default transacted session timeout is one hour).
■ Check WebLogic statistics for queues that are not being serviced by consumers. If
you're having a problem with distributed queues, see Section 14.4, "Tuning
Distributed Queues."
■ Check WebLogic statistics for topics with high pending counts. This usually
indicates that there are topic subscriptions that are not being serviced. There may
be a slow or unresponsive consumer client that's responsible for processing the
messages, or it's possible that a durable subscription may no longer be needed and
should be deleted, or the messages may be accumulating due to delayed
distributed topic forwarding. You can check statistics for individual durable
subscriptions on the administration console. A durable subscription with a large
backlog may have been created by an application but never deleted. Unserviced
durable subscriptions continue to accumulate topic messages until they are either
administratively destroyed, or unsubscribed by a standard JMS client.
■ Understand distributed topic behavior when not all members are active. In
distributed topics, each produced message to a particular topic member is
forwarded to each remote topic member. If a remote topic member is unavailable
then the local topic member will store each produced message for later
forwarding. Therefore, if a topic member is unavailable for a long period of time,
then large backlogs can develop on the active members. In some applications, this
backlog can be addressed by setting expiration times on the messages. See
Section 14.14.1, "Defining a Message Expiration Policy."
■ In certain applications it may be fine to automatically delete old unprocessed
messages. See Section 14.14, "Handling Expired Messages."
■ For transactional MDBs, consider using MDB transaction batching as this can yield
a five-fold improvement in some use cases.
■ Leverage distributed queues and add more JVMs to the cluster (in order to add
more distributed queue member instances). For example, split a 200,000 message
backlog across 4 JVMs at 50,000 messages per JVM, instead of 100,000 messages
per JVM.
■ For client applications, use asynchronous consumers instead of synchronous
consumers when possible. Asynchronous consumers can have a significantly
lower network overhead, lower latency, and do not block a thread while waiting
for a message.
■ For synchronous consumer client applications, consider enabling prefetch,
using CLIENT_ACKNOWLEDGE to acknowledge multiple consumed
messages at a time, and using DUPS_OK_ACKNOWLEDGE instead of
AUTO_ACKNOWLEDGE (a sketch of both acknowledge modes follows this list).
■ For asynchronous consumer client applications, consider using DUPS_OK_
ACKNOWLEDGE instead of AUTO_ACKNOWLEDGE.
■ Leverage batching. For example, include multiple messages in each transaction, or
send one larger message instead of many smaller messages.
■ For non-durable subscriber client-side applications handling missing ("dropped")
messages, investigate MULTICAST_NO_ACKNOWLEDGE. This mode broadcasts
messages concurrently to subscribers over UDP multicast.
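The following minimal sketch illustrates the acknowledge-mode suggestions above for a
standalone Java client. The JNDI names jms/MyConnectionFactory and jms/MyQueue are
hypothetical placeholders, error handling is omitted, and the code uses only the standard
JMS API; treat it as an illustration rather than a complete application.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
public class AckModeExample {
    public static void main(String[] args) throws Exception {
        // Look up a custom connection factory and queue (placeholder JNDI names).
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");
        Connection connection = cf.createConnection();
        connection.start();
        // Synchronous consumer: CLIENT_ACKNOWLEDGE lets the application
        // acknowledge a batch of consumed messages with a single call.
        Session syncSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer syncConsumer = syncSession.createConsumer(queue);
        Message last = null;
        for (int i = 0; i < 50; i++) {
            Message m = syncConsumer.receive(1000);
            if (m == null) {
                break;
            }
            last = m;
        }
        if (last != null) {
            // Acknowledging the last message acknowledges every message
            // consumed so far in this session.
            last.acknowledge();
        }
        // Asynchronous consumer: DUPS_OK_ACKNOWLEDGE permits lazy
        // acknowledgement, trading possible duplicates for lower overhead
        // than AUTO_ACKNOWLEDGE.
        Session asyncSession = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        MessageConsumer asyncConsumer = asyncSession.createConsumer(queue);
        asyncConsumer.setMessageListener(m -> System.out.println("Received: " + m));
        // ... the application runs; close the connection on shutdown.
    }
}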
For server-side applications, WebLogic automatically wraps and pools JMS resources
that are accessed using a resource reference. See "Enhanced Support for Using
WebLogic JMS with EJBs and Servlets" in Programming JMS for Oracle WebLogic Server.
This pooling code can be inefficient at pooling producers if the target destination
changes frequently, but there's a simple work-around: use anonymous producers by
passing null for the destination when the application calls createProducer() and
then instead pass the desired destination into each send call.
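A minimal sketch of this work-around, assuming a resource-referenced connection factory
bound at the hypothetical name java:comp/env/jms/cf: the producer is created with a null
destination, and the destination is supplied on each send instead.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
public class AnonymousProducerExample {
    // Sends text to any destination through a single anonymous producer,
    // so the pooled producer is not tied to one destination.
    public static void send(String text, Destination target) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:comp/env/jms/cf");
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Passing null creates an anonymous producer ...
            MessageProducer producer = session.createProducer(null);
            TextMessage message = session.createTextMessage(text);
            // ... and the desired destination is passed on each send call.
            producer.send(target, message);
        } finally {
            connection.close();
        }
    }
}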
■ To check for heavy JMS resource allocation or leaks, you can monitor MBean
statistics and/or use your particular JVM's built-in facilities. You can monitor
MBean statistics using the console, WLST, or Java code (a sketch follows this list).
■ Check JVM heap statistics for memory leaks or unexpectedly high allocation
counts (called a JRA profile in JRockit).
■ Similarly, check WebLogic statistics for memory leaks or unexpectedly high
allocation counts.
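As one way to monitor MBean statistics from Java code, the following sketch queries JMS
destination runtime MBeans over JMX. The connection details, the
com.bea:Type=JMSDestinationRuntime,* object-name pattern, and the MessagesCurrentCount
and MessagesPendingCount attribute names follow common WebLogic JMX conventions but
are assumptions here; verify them against the MBean Reference for your release. The client
also needs the WebLogic client library on its classpath to use the t3 protocol.
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;
public class JmsBacklogMonitor {
    public static void main(String[] args) throws Exception {
        // Connect to the Domain Runtime MBean Server (placeholder host,
        // port, and credentials).
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Query all JMS destination runtime MBeans and report their
            // current and pending message counts.
            ObjectName pattern = new ObjectName("com.bea:Type=JMSDestinationRuntime,*");
            for (ObjectName dest : conn.queryNames(pattern, null)) {
                Object current = conn.getAttribute(dest, "MessagesCurrentCount");
                Object pending = conn.getAttribute(dest, "MessagesPendingCount");
                System.out.println(dest.getKeyProperty("Name")
                        + " current=" + current + " pending=" + pending);
            }
        } finally {
            connector.close();
        }
    }
}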
topics is to assume that each current JMS message consumes 256 bytes of memory plus
an additional 256 bytes of memory for each subscriber that hasn't acknowledged the
message yet. For example, if there are 3 subscribers on a topic, then a single published
message that hasn't been processed by any of the subscribers consumes 256 + 256*3 =
1024 bytes even when the message is paged out. Although message header memory
usage is typically significantly less than these rules of thumb indicate, it is a best
practice to make conservative estimates on memory utilization.
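A minimal arithmetic sketch of this rule of thumb, conservatively assuming that no
subscriber has acknowledged any of the current messages:
public final class TopicMemoryEstimator {
    // 256 bytes per current message, plus 256 bytes per subscriber that has
    // not yet acknowledged it.
    static long estimateTopicBytes(long currentMessages, int subscribers) {
        return currentMessages * 256L * (1 + subscribers);
    }
    public static void main(String[] args) {
        // One unprocessed message with 3 subscribers: 256 + 256*3 = 1024 bytes.
        System.out.println(estimateTopicBytes(1, 3));
    }
}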
In prior releases, there were multiple levels of quotas: destinations had their own
quotas and would also have to compete for quota within a JMS server. In this release,
there is only one level of quota: destinations can have their own private quota or they
can compete with other destinations using a shared quota.
In addition, a destination that defines its own quota no longer also shares space in the
JMS server's quota. Although JMS servers still allow the direct configuration of
message and byte quotas, these options are only used to provide quota for destinations
that do not refer to a quota resource.
For more information about quota configuration parameters, see QuotaBean in the
Oracle WebLogic Server MBean Reference. For instructions on configuring a quota
resource using the Administration Console, see "Create a quota for destinations" in the
Oracle WebLogic Server Administration Console Help.
The Quota parameter of a destination defines which quota resource is used to enforce
quota for the destination. This value is dynamic, so it can be changed at any time.
However, if there are unsatisfied requests for quota when the quota resource is
changed, then those requests will fail with a
javax.jms.ResourceAllocationException.
Note: Outstanding requests for quota fail when the quota resource is
changed. This does not apply to changes to the message and byte
attributes of the quota resource, only to switching a destination to a
different quota.
■ If sufficient space does not become available before the timeout period ends,
you receive a resource allocation exception.
If you choose not to enable the blocking send policy by setting this value to 0,
then you will receive a resource allocation exception whenever sufficient space
is not available on the destination.
For more information about the Send Timeout field, see "JMS Connection
Factory: Configuration: Flow Control" in the Oracle WebLogic Server
Administration Console Help.
3. Click Save.
For example, if the JMS application acknowledges 50 messages at a time, set the
MessagesMaximum value to 101.
mw_home\user_projects\domains\domainname\servers\servername\tmp
As producers slow themselves down, the threshold condition gradually corrects itself
until the server/destination is unarmed. At this point, a producer is allowed to increase
its production rate, but not necessarily to the maximum possible rate. In fact, its
message flow continues to be controlled (even though the server/destination is no
longer armed) until it reaches its prescribed flow maximum, at which point it is no
longer flow controlled.
For more information about the flow control fields, and the valid and default values
for them, see "JMS Connection Factory: Configuration: Flow Control" in the Oracle
WebLogic Server Administration Console Help.
For detailed information about other JMS server and destination threshold and quota
fields, and the valid and default values for them, see the following pages in the
Administration Console Online Help:
■ "JMS Server: Configuration: Thresholds and Quotas"
■ "JMS Queue: Configuration: Thresholds and Quotas"
■ "JMS Topic: Configuration: Thresholds and Quotas"
over how the system searches for expired messages and how it handles them when
they are encountered.
Active message expiration ensures that expired messages are cleaned up immediately.
Moreover, expired message auditing gives you the option of tracking expired
messages, either by logging when a message expires or by redirecting expired
messages to a defined error destination.
■ Section 14.14.1, "Defining a Message Expiration Policy"
■ Section 14.14.7, "Tuning Active Message Expiration"
3. If you selected the Log expiration policy in the previous step, use the Expiration
Logging Policy field to define what information about the message is logged.
For more information about valid Expiration Logging Policy values, see
Section 14.14.5, "Defining an Expiration Logging Policy".
4. Click Save.
■ Redirect — Moves expired messages from their current location into the Error
Destination defined for the destination.
■ For more information about the Expiration Policy options for a template, see
"JMS Template: Configuration: Delivery Failure" in the Oracle WebLogic Server
Administration Console Help.
3. If you selected the Log expiration policy in Step 4, use the Expiration Logging
Policy field to define what information about the message is logged.
For more information about valid Expiration Logging Policy values, see
Section 14.14.5, "Defining an Expiration Logging Policy".
4. Click Save.
If no header fields are displayed, the line for header fields is not displayed. If no
user properties are displayed, that line is not displayed. If there are no header fields
and no properties, the closing </ExpiredJMSMessage> tag is not necessary as the
opening tag can be terminated with a closing bracket (/>).
For example:
<ExpiredJMSMessage JMSMessageID='ID:N<223476.1022177121567.1' />
All values are delimited with double quotes. All string values are limited to 32
characters in length. Requested fields and/or properties that do not exist are not
displayed. Requested fields and/or properties that exist but have no value (a null
value) are displayed as null (without single quotes). Requested fields and/or
properties that are empty strings are displayed as a pair of single quotes with no space
between them.
For example:
<ExpiredJMSMessage JMSMessageID='ID:N<851839.1022176920344.0' >
<UserProperties First='Any string longer than 32 char ...' Second=null Third=''
/>
</ExpiredJMSMessage>
14.14.8 Configuring a JMS Server to Actively Scan Destinations for Expired Messages
Follow these directions to define how often a JMS server will actively scan its
destinations for expired messages. The default value is 30 seconds, which means the
JMS server waits 30 seconds between each scan interval.
1. Follow the directions for navigating to the JMS Server: Configuration: General
page of the Administration Console in "Configure general JMS server properties"
in the Oracle WebLogic Server Administration Console Help.
2. In the Scan Expiration Interval field, enter the amount of time, in seconds, that you
want the JMS server to pause between its cycles of scanning its destinations for
expired messages to process.
To disable active scanning, enter a value of 0 seconds. Expired messages are
passively removed from the system as they are discovered.
For more information about the Expiration Scan Interval attribute, see "JMS Server:
Configuration: General" in the Oracle WebLogic Server Administration Console Help.
3. Click Save.
There are a number of design choices that impact the performance of JMS applications.
Other design considerations include reliability, scalability, manageability, monitoring,
user transactions, message-driven bean support, and integration with an application
server. In addition, there are WebLogic JMS extensions and features that have a direct
impact on performance.
For more information on designing your applications for JMS, see "Best Practices for
Application Design" in Programming JMS for Oracle WebLogic Server.
■ Unit-of-work
■ Distributed destinations
■ When used in conjunction with the Blocking Sends feature, then using one-way
sends on a well-running system should achieve similar QOS as when using the
two-way send mode.
■ One-way send mode for topic publishers falls within the QOS guidelines set by
the JMS Specification, but does entail a lower QOS than two-way mode (the
WebLogic Server default mode).
■ One-way send mode may not improve performance if JMS consumer applications
are a system bottleneck, as described in "Asynchronous vs. Synchronous
Consumers" in Programming JMS for Oracle WebLogic Server.
■ Consider enlarging the JVM's heap size on the client and/or server to account for
increased batch size (the Window) of sends. The potential memory usage is
proportional to the size of the configured Window and the number of senders.
■ The sending application will not receive all quota exceptions. One-way messages
that exceed quota are silently deleted, without throwing exceptions back to the
sending client. See Section 14.16.8, "Destination Quota Exceeded" for more
information and a possible work around.
■ Configuring one-way sends on a connection factory effectively disables any
message flow control parameters configured on the connection factory.
■ By default, the One-way Window Size is set to "1", which effectively disables
one-way sends as every one-way message will be upgraded to a two-way send.
(Even in one-way mode, clients will send a two-way message every One Way Send
Window Size number of messages configured on the client's connection factory.)
Therefore, you must set the one-way send window size much higher. It is
recommended to try setting the window size to "300" and then adjust it according
to your application requirements.
■ The client application will not immediately receive network or server failure
exceptions; some messages may be sent but silently deleted until the failure is
detected by WebLogic Server and the producer is automatically closed. See
Section 14.16.12, "Hardware Failure" for more information.
The Messaging Performance Preference tuning option on JMS destinations enables you
to control how long a destination should wait (if at all) before creating full batches of
available messages for delivery to consumers. At the minimum value, batching is
disabled. Tuning above the default value increases the amount of time a destination is
willing to wait before batching available messages. The maximum message count of a
full batch is controlled by the JMS connection factory's Messages Maximum per
Session setting.
Using the Administration Console, this advanced option is available on the General
Configuration page for both standalone and uniform distributed destinations (or via
the DestinationBean API), as well as for JMS templates (or via the TemplateBean
API).
Specifically, JMS destinations include internal algorithms that attempt to automatically
optimize performance by grouping messages into batches for delivery to consumers.
In response to changes in message rate and other factors, these algorithms change
batch sizes and delivery times. However, it isn't possible for the algorithms to optimize
performance for every messaging environment. The Messaging Performance
Preference tuning option enables you to modify how these algorithms react to changes
in message rate and other factors so that you can fine-tune the performance of your
system.
It may take some experimentation to find out which value works best for your system.
For example, if you have a queue with many concurrent message consumers, by
selecting the Administration Console's Do Not Batch Messages value (or specifying "0"
on the DestinationBean MBean), the queue will make every effort to promptly
push messages out to its consumers as soon as they are available. Conversely, if you
have a queue with only one message consumer that doesn't require fast response
times, by selecting the console's High Waiting Threshold for Message Batching value
(or specifying "100" on the DestinationBean MBean), then the queue will strongly
attempt to only push messages to that consumer in batches, which will increase the
waiting period but may improve the server's overall throughput by reducing the
number of sends.
For instructions on configuring Messaging Performance Preference parameters on
standalone destinations, uniform distributed destinations, or JMS templates using the
Administration Console, see the following sections in the Administration Console
Online Help:
■ "Configure advanced topic parameters"
■ "Configure advanced queue parameters"
■ "Uniform distributed topics - configure advanced parameters"
This chapter provides information on how to get the best performance from
Store-and-Forward (SAF) applications. For WebLogic Server releases 9.0 and higher,
JMS provides advanced store-and-forward capability for high-performance message
forwarding from a local server instance to a remote JMS destination. See
"Understanding the Store-and-Forward Service" in Configuring and Managing
Store-and-Forward for Oracle WebLogic Server.
■ Section 15.1, "Best Practices"
■ Section 15.2, "Tuning Tips"
■ Configure separate SAF Agents for JMS SAF and Web Services Reliable Messaging
Agents (WS-RM) to simplify administration and tuning.
■ Sharing the same WebLogic Store between subsystems provides increased
performance for subsystems requiring persistence. For example, transactions that
include SAF and JMS operations, transactions that include multiple SAF
destinations, and transactions that include SAF and EJBs. See Section 8, "Tuning
the WebLogic Persistent Store".
■ Increase the JMS SAF Window Size for applications that handle small messages.
By default, a JMS SAF agent forwards messages in batches that contain up to 10
messages. For small message sizes, it is possible to double or triple performance
by increasing the number of messages in each batch to be forwarded. A more
appropriate initial value for Window Size for small messages is 100. You can then
optimize this value for your environment.
Changing the Window Size for applications handling large message sizes is not
likely to increase performance and is not recommended. Window Size also tunes
WS-RM SAF behavior, so it may not be appropriate to tune this parameter for SAF
Agents of type Both.
■ Increase the JMS SAF Window Interval. By default, a JMS SAF agent has a
Window Interval value of 0, which forwards messages as soon as they arrive.
This can lower performance as it can make the effective Window Size much
smaller than the configured value. A more appropriate initial value for Window
Interval is 500 milliseconds. You can then optimize this value for your
environment. In this context, small messages are less than a few K, while large
messages are on the order of tens of K.
Changing the Window Interval improves performance only in cases where the
forwarder is already able to forward messages as fast as they arrive. In this case,
instead of immediately forwarding newly arrived messages, the forwarder pauses
to accumulate more messages and forward them as a batch. The resulting larger
batch size improves forwarding throughput and reduces overall system disk and
CPU usage at the expense of increasing latency.
■ Use the common thread pool—A server instance changes its thread pool size
automatically to maximize throughput, including compensating for the number of
bridge instances configured. See "Understanding How WebLogic Server Uses
Thread Pools" in Configuring Server Environments for Oracle WebLogic Server.
■ Configure a work manager for the weblogic.jms.MessagingBridge class. See
"Understanding Work Managers" in Designing and Configuring WebLogic Server
Environments.
■ Use the Administration console to set the Thread Pool Size property in the
Messaging Bridge Configuration section on the Configuration: Services page for a
server instance. To avoid competing with the default execute thread pool in the
server, messaging bridges share a separate thread pool. This thread pool is used
only in synchronous mode (Asynchronous Mode Enabled is not set). In
asynchronous mode the bridge runs in a thread created by the JMS provider for
the source destination. Deprecated in WebLogic Server 9.0.
QOS                Asynchronous Mode Enabled
Exactly-once       false (see the note below)
At-least-once      true
At-most-once       true
Note: If the source destination is a non-WebLogic JMS provider and the QOS is Exactly-once, then the
Asynchronous Mode Enabled attribute is disabled and the messages are processed in synchronous mode.
This chapter describes Oracle best practices for tuning Web applications and managing
sessions.
■ Section 18.1, "Best Practices"
■ Section 18.2, "Session Management"
■ Section 18.3, "Pub-Sub Tuning Guidelines"
For example: If you use a single large attribute that contains all the session data and
only 10% of that data changes, the entire attribute has to be replicated. This causes
unnecessary serialization/deserialization and network overhead. You should move
the 10% of the session data that changes into a separate attribute.
This chapter describes Oracle best practices for designing, developing, and deploying
WebLogic Web Services applications and application resources.
■ Section 19.1, "Web Services Best Practices"
■ Section 19.2, "Tuning Web Service Reliable Messaging Agents"
■ Section 19.3, "Tuning Heavily Loaded Systems to Improve Web Service
Performance"
file (JAXP API). Oracle recommends setting the properties on the command line to
avoid unnecessary file operations at runtime and improve performance and
resource usage.
■ Follow "JWS Programming Best Practices" in Getting Started With JAX-WS Web
Services for Oracle WebLogic Server.
■ Follow best practice and tuning recommendations for all underlying components,
such as Section 10, "Tuning WebLogic Server EJBs", Section 18, "Tuning Web
Applications", Section 12, "Tuning Data Sources", and Section 14, "Tuning
WebLogic JMS".
■ Ensure that the retry delay is not set too low. A value that is too low may cause
the system to make unnecessary delivery attempts.
19.3.1 Setting the Work Manager Thread Pool Minimum Size Constraint
Define a Work Manager and set the thread pool minimum size constraint
(min-threads-constraint) to a value that is at least as large as the expected
number of concurrent requests or responses into the service.
For example, if a Web service client issues 20 requests in rapid succession, the
recommended thread pool minimum size constraint value would be 20 for the
application hosting the client. If the configured constraint value is too small,
performance can be severely degraded as incoming work waits for a free processing
thread.
For more information about the thread pool minimum size constraint, see
"Constraints" in Configuring Server Environments for Oracle WebLogic Server.
This chapter provides information on how to get the best performance from WebLogic
Tuxedo Connector (WTC) applications. The WebLogic Tuxedo Connector (WTC)
provides interoperability between WebLogic Server applications and Tuxedo services.
WTC allows WebLogic Server clients to invoke Tuxedo services and Tuxedo clients to
invoke WebLogic Server Enterprise Java Beans (EJBs) in response to a service request.
See "WebLogic Tuxedo Connector" in Information Roadmap for Oracle WebLogic Server .
■ Section 20.1, "Configuration Guidelines"
■ Section 20.2, "Best Practices"
■ When using transactional applications, try to make the remote services involved in
the same transaction available from the same remote access point. See "WebLogic
Tuxedo Connector JATMI Transactions" in the WebLogic Tuxedo Connector
Programmer’s Guide for Oracle WebLogic Server.
■ The number of client threads available when dispatching services from the
gateway may limit the number of concurrent services running. There is no
WebLogic Tuxedo Connector attribute to increase the number of available threads.
Use a reasonable thread model when invoking services. See Section 7.4, "Thread
Management" and "Using Work Managers to Optimize Scheduled Work" in
Configuring Server Environments for Oracle WebLogic Server.
■ WebLogic Server Releases 9.2 and higher provide improved routing algorithms
which enhance transaction performance. Specifically, performance is improved
when more than one Tuxedo service request is involved in a two-phase
commit (2PC) transaction. If your application makes only a single service request to
the Tuxedo domain, you can disable this feature by setting the following WebLogic
Server command line parameter:
-Dweblogic.wtc.xaAffinity=false
■ Call the constructor TypedFML32 using the maximum number of objects in the
buffer. Even if the maximum number is difficult to predict, providing a reasonable
number improves performance. You can approximate the maximum number by
multiplying the number of fields by 1.33.
Note: This performance tip does not apply to the TypedFML buffer type.
For example:
If there are 50 fields in a TypedFML32 buffer type, then the maximum number is
63. Calling the constructor TypedFML32(63, 50) performs better than
TypedFML32().
If there are 50 fields in a TypedFML32 buffer type and each can have a maximum of
10 occurrences, then calling the constructor TypedFML32(625, 50) gives better
performance than TypedFML32(), as in the sketch below.
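A minimal sketch showing the pre-sized constructors above in context; the
weblogic.wtc.jatmi package name is assumed from typical WTC JATMI usage and should be
verified for your release.
import weblogic.wtc.jatmi.TypedFML32;
public class Fml32Buffers {
    public static void main(String[] args) {
        // 50 fields, single occurrence: pre-size the buffer instead of
        // using the no-argument constructor.
        TypedFML32 singleOccurrence = new TypedFML32(63, 50);
        // 50 fields with up to 10 occurrences each.
        TypedFML32 multiOccurrence = new TypedFML32(625, 50);
    }
}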
■ When configuring Tuxedo applications that act as servers interoperating with
WTC clients, take into account the parallelism that may be achieved by carefully
configuring different servers on different Tuxedo machines.
■ Be aware of the possibility of database access deadlock in Tuxedo applications.
You can avoid deadlock through careful Tuxedo application configuration.
■ If you are using WTC load balancing or service level failover, Oracle recommends
that you do not disable WTC transaction affinity.
■ For load balancing outbound requests, configure the imported service with
multiple entries using a different key. The imported service uses a composite key to
determine each record's uniqueness. The composite key is composed of "the service
name + the local access point + the primary route in the remote access point list".
The following is an example of how to correctly configure load balancing requests
for service1 between TDomainSession(WDOM1,TUXDOM1) and
TDomainSession(WDOM1,TUXDOM2):
This chapter describes how to use and tune WebLogic 8.1 thread pools. If you have
been using execute queues to improve performance prior to this release, you may
continue to use them after you upgrade your application domains to WebLogic Server
10.x.
Oracle recommends migrating from execute queues to using the self-tuning execute
queue with work managers. See "Using Work Managers to Optimize Scheduled Work"
in Configuring Server Environments for Oracle WebLogic Server.
■ Section A.1, "How to Enable the WebLogic 8.1 Thread Pool Model"
■ Section A.2, "Tuning the Default Execute Queue"
■ Section A.3, "Using Execute Queues to Control Thread Usage"
■ Section A.4, "Monitoring Execute Threads"
■ Section A.5, "Allocating Execute Threads to Act as Socket Readers"
■ Section A.6, "Tuning the Stuck Thread Detection Behavior"
.
<server>
<name>myserver</name>
<ssl>
<name>myserver</name>
<enabled>true</enabled>
<listen-port>7002</listen-port>
</ssl>
<use81-style-execute-queues>true</use81-style-execute-queues>
<listen-address/>
</server>
.
.
.
Configured work managers are converted to execute queues at runtime by the server
instance.
Unless you configure additional execute queues, and assign applications to them, the
server instance assigns requests to the default execute queue.
Note: If native performance packs are not being used for your
platform, you may need to tune the default number of execute queue
threads and the percentage of threads that act as socket readers to
achieve optimal performance. For more information, see Section A.5,
"Allocating Execute Threads to Act as Socket Readers".
The value of the ThreadCount attribute depends very much on the type of work your
application does. For example, if your client application is thin and does a lot of its
work through remote invocation, that client application will spend more time
connected — and thus will require a higher thread count — than a client application
that does a lot of client-side processing.
If you do not need to use more than 15 threads (the development default) or 25 threads
(the production default) for your work, do not change the value of this attribute. As a
general rule, if your application makes database calls that take a long time to return,
you will need more execute threads than an application that makes calls that are short
and turn over very rapidly. For the latter case, using a smaller number of execute
threads could improve performance.
To determine the ideal thread count for an execute queue, monitor the queue's
throughput while all applications in the queue are operating at maximum load.
Increase the number of threads in the queue and repeat the load test until you reach
the optimal throughput for the queue. (At some point, increasing the number of
threads will lead to enough context switching that the throughput for the queue begins
to decrease.)
Table A–2 shows default scenarios for adjusting available threads in relation to the
number of CPUs available in the WebLogic Server domain. These scenarios also
assume that WebLogic Server is running under maximum load, and that all thread
requests are satisfied by using the default execute queue. If you configure additional
execute queues and assign applications to specific queues, monitor results on a
pool-by-pool basis.
available. In such a situation, the division of threads into multiple queues may yield
poorer overall performance than having a single, default execute queue.
Default WebLogic Server installations are configured with a default execute queue
which is used by all applications running on the server instance. You may want to
configure additional queues to:
■ Optimize the performance of critical applications. For example, you can assign a
single, mission-critical application to a particular execute queue, guaranteeing a
fixed number of execute threads. During peak server loads, nonessential
applications may compete for threads in the default execute queue, but the
mission-critical application has access to the same number of threads at all times.
■ Throttle the performance of nonessential applications. For an application that
can potentially consume large amounts of memory, assigning it to a dedicated
execute queue effectively limits the amount of memory it can consume. Although
the application can potentially use all threads available in its assigned execute
queue, it cannot affect thread usage in any other queue.
■ Remedy deadlocked thread usage. With certain application designs, deadlocks
can occur when all execute threads are currently utilized. For example, consider a
servlet that reads messages from a designated JMS queue. If all execute threads in
a server are used to process the servlet requests, then no threads are available to
deliver messages from the JMS queue. A deadlock condition exists, and no work
can progress. Assigning the servlet to a separate execute queue avoids potential
deadlocks, because the servlet and JMS queue do not compete for thread
resources.
Be sure to monitor each execute queue to ensure proper thread usage in the system as
a whole. See Section A.2.1, "Should You Modify the Default Thread Count?" for
general information about optimizing the number of threads.
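For example, a dedicated queue such as the one described in the first item above can be created from WLST instead of the console. The following is an illustrative sketch only; the server name, queue name, and thread count are assumptions, and on some releases you may need to create the queue through the Administration Console instead:

connect('weblogic', 'welcome1', 't3://localhost:7001')   # example credentials and URL only
edit()
startEdit()
cd('/Servers/MyServer')                                   # MyServer is a placeholder server name
queue = create('CriticalAppQueue', 'ExecuteQueue')        # dedicated queue for the critical application
queue.setThreadCount(15)                                  # threads reserved for this queue alone
save()
activate()
disconnect()

After rebooting the server, assign the critical servlet or JSP to the new queue with the wl-dispatch-policy initialization parameter in web.xml, as described in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server.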
5. Specify how the server instance should detect an overflow condition for the
selected queue by modifying the following attributes:
Queue Length—Specifies the maximum number of simultaneous requests that
the server can hold in the queue. The default of 65536 requests represents a very
large number of requests; outstanding requests in the queue should rarely, if ever,
reach this maximum value. Always leave the Queue Length at the default value of
65536 entries.
Queue Length Threshold Percent—The percentage (from 1–99) of the
Queue Length size that can be reached before the server indicates an overflow
condition for the queue. All actual queue length sizes below the threshold
percentage are considered normal; sizes above the threshold percentage indicate
an overflow. By default, the Queue Length Threshold Percent is set to 90 percent.
6. To specify how this server should address an overflow condition for the selected
queue, modify the following attribute:
Threads Increase—The number of threads WebLogic Server should add to
this execute queue when it detects an overflow condition. If you specify zero
threads (the default), the server changes its health state to "warning" in response to
an overflow condition in the execute queue, but it does not allocate additional
threads to reduce the workload.
7. To fine-tune the variable thread count of this execute queue, modify the following
attributes:
Threads Minimum—The minimum number of threads that WebLogic Server
should maintain in this execute queue to prevent unnecessary overflow
conditions. By default, the Threads Minimum is set to 5.
Threads Maximum—The maximum number of threads that this execute queue
can have; this value prevents WebLogic Server from creating an overly high thread
count in the queue in response to continual overflow conditions. By default, the
Threads Maximum is set to 400.
8. Click Save.
9. To activate these changes, in the Change Center of the Administration Console,
click Activate Changes. Not all changes take effect immediately—some require a
restart.
10. You must reboot the server for the new execute queue settings to take effect.
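As an alternative to the console steps above, the same attributes can be set from WLST. The sketch below assumes the queue already exists as a configured execute queue under the server; the credentials, URL, server name, queue name, and values are placeholders only:

connect('weblogic', 'welcome1', 't3://localhost:7001')   # example credentials and URL only
edit()
startEdit()
cd('/Servers/MyServer/ExecuteQueues/MyQueue')             # placeholder server and queue names
set('QueueLength', 65536)                                 # leave at the default of 65536
set('QueueLengthThresholdPercent', 90)                    # overflow signaled above 90% of Queue Length
set('ThreadsIncrease', 5)                                 # threads added when overflow is detected
set('ThreadsMinimum', 5)
set('ThreadsMaximum', 400)
save()
activate()
disconnect()

As with the console procedure, reboot the server afterward so the new execute queue settings take effect.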
See "Creating and Configuring Servlets" in Developing Web Applications, Servlets, and
JSPs for Oracle WebLogic Server for more information about specifying initialization
parameters in web.xml.
A.5.1 Setting the Number of Socket Reader Threads For a Server Instance
To use the Administration Console to set the maximum percentage of execute threads
that read messages from a socket:
1. If you have not already done so, in the Change Center of the Administration
Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you want to
configure socket reader threads.
4. Select the Configuration > Tuning tab.
5. Specify the percentage of Java reader threads in the Socket Readers field. The
number of Java socket readers is computed as a percentage of the number of total
execute threads (as shown in the Thread Count field for the Execute Queue).
6. Click Save.
7. To activate these changes, in the Change Center of the Administration Console,
click Activate Changes.
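The same percentage can also be set from WLST. The following minimal sketch corresponds to the Socket Readers field described above; the credentials, URL, server name, and the 50 percent value are examples only:

connect('weblogic', 'welcome1', 't3://localhost:7001')   # example credentials and URL only
edit()
startEdit()
cd('/Servers/MyServer')                                   # placeholder server name
set('ThreadPoolPercentSocketReaders', 50)                 # up to half of the execute threads read from sockets
save()
activate()
disconnect()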
The number of user interactions per second with WebLogic Server represents the total
number of interactions that should be handled per second by a given WebLogic Server
deployment. Typically for Web deployments, user interactions access JSP pages or
servlets. User interactions in application deployments typically access EJBs.
Consider also the maximum number of transactions in a given period to handle spikes
in demand. For example, in a stock report application, plan for a surge after the stock
market opens and before it closes. If your company is broadcasting a Web site as part
of an advertisement during the World Series or World Cup Soccer playoffs, you should
expect spikes in demand.
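As a back-of-the-envelope illustration of this planning arithmetic (the traffic figures and burst factor below are invented for the example, not measurements from any deployment):

peak_hour_interactions = 360000                         # assumed busiest-hour page views
average_per_second = peak_hour_interactions / 3600.0    # 100 interactions per second on average
burst_factor = 3                                        # assumed surge at market open or during a broadcast
required_capacity = average_per_second * burst_factor
print 'plan for roughly %d interactions per second' % required_capacity

Sizing for the expected burst rather than the average helps the deployment absorb these spikes without exhausting execute threads.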