SAP XI PI Administrators Guide
All versions
1 General 3
2 Architectural Overview 4
2.1 Basic Architecture 4
3 System Resources 7
3.1 Database 7
3.3 Network 11
4 Runtime Monitoring 12
4.1 SEEBURGER Workbench 12
5 Troubleshooting 15
5.1 Known Error Messages 15
5.1.1 com.sap.engine.services.dbpool.exceptions.BaseSQLException: Cannot change transaction isolation during JTA transaction and when the connection is shared. 15
already bound. 15
This document aims to give an overview of the SEEBURGER SAP XI / PI solutions, targeted at system
administrators. It should help them understand the parts of the system involved when using
SEEBURGER solutions and what needs to be monitored to keep the system up and running.
Additionally, it offers guidance on what to consider when moving or migrating a system.
Note: This document does not cover administration or migration tasks for
SEEBURGER Professional Message Tracking (MT) or SEEBURGER Portal.
Please refer to the corresponding manuals, or involve the SEEBURGER support desk or
consulting.
Note: Information in this document is a "look behind the curtain". Keep this in mind,
as things might change between releases or even between patches in order to add
new features or fix existing bugs. We try our best to keep this document up-to-date,
so check for changes in the document in case of upgrades.
As the SEEBURGER solutions cover a wide range of EDI protocols and message types,
resulting in many different business scenarios, it is hard to give a single concrete architectural
overview. Nevertheless, there are a few concepts shared by all SEEBURGER solutions. One
example is the SEEBURGER recovery mechanism, which is used by all SEEBURGER
adapters and by a few modules as well.
This topic only explains the general behavior of these concepts. The next topic, System
Resources (page 7), lists the system resources used (such as database tables or file system paths).
This split exists because, while the concepts remain relatively stable, the concrete files, paths,
tables, etc. might change more frequently.
Additionally, the Message ID store is used to correlate external IDs (specified in the EDI message
protocol) with the XI Message ID, which the PI system uses to identify transactions.
The Message ID store has a reorganization mechanism in place which by default removes entries in
a final state after 7 days. The preservation time is configurable via the managed connection factory
properties of the corresponding adapter.
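As a rough illustration of the reorganization rule described above, the following sketch removes entries that are both in a final state and older than the preservation time. The entry fields and state names are assumptions for illustration only, not the actual Message ID store layout:

```python
from datetime import datetime, timedelta

# Assumed state names; the real Message ID store uses its own set of states.
FINAL_STATES = {"DONE", "FAILED_FINAL"}

def reorganize(entries, preservation_days=7, now=None):
    """Return only the entries the reorganization job would keep.

    An entry is removed when it is in a final state AND its last change
    is older than the preservation time (default 7 days, mirroring the
    adapter default described above).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=preservation_days)
    return [
        e for e in entries
        if e["state"] not in FINAL_STATES or e["changed"] >= cutoff
    ]
```

Note that non-final entries are never removed regardless of age, which matches the observation that only entries in a final state are reorganized.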
Note: All relevant information is stored within the recover job file. In case of missing
entries in the table, no data has been lost as long as the job file and its referenced
temporary files still exist on the file system!
This topic provides a quick overview of where relevant data for SEEBURGER adapters and
modules is stored. If you move or migrate a PI system, ensure that all of these contents are
synchronized (either by means of the SAP migration tool, or manually using operating system utilities).
For a more detailed explanation of what is stored in the respective places, either have a look at the
previous topic (for general concepts applicable to all SEEBURGER solutions) or at the corresponding
adapter or module documentation.
3.1 Database
All of the configuration data and most of the runtime data are stored within the database. Please refer
to the topic Architectural Overview (page 4) for a more detailed explanation of how all of these
elements play together.
SEEBURGER uses the following tables within the standard PI database schema. Keep in mind that
only the tables applicable to the set of solutions installed on your system will be present. The
table below states which solution(s) each table belongs to. The category indicates the
type of data stored in the table (either runtime or configuration).
Note: All tables start with the prefix SEE_. If you find a table prefixed with SEE_
that is not listed, it is most likely simply missing here. Handle it the same
way as the other tables listed here (with regard to migrating systems,
monitoring table size, etc.).
It is necessary that the SYS/global directory is shared between all cluster nodes (regardless of
whether the nodes run on one piece of hardware or are distributed across several machines). In UNIX
environments this is also required by the SAP documentation. Unfortunately, it is not required
for SAP systems on Windows. Nevertheless, for the cluster functionality within SEEBURGER
solutions to work (e.g. handover of recover jobs if one node goes down), this is a strict requirement.
This directory is crucial for the operation of the SEEBURGER solutions, so it needs to be maintained
when moving or migrating systems.
Regarding the server<x>/…/SeeConfig directories under the root directories mentioned above:
each adapter generates a unique ID for each node it runs on. This information is persisted in these
directories, so it is recommended to move them along if you migrate your system. Otherwise, a
new ID will be generated, which might have side effects on the SEEBURGER Resource
Management (e.g. hanging resources).
For the most part, server<x>/seeburger contains runtime data, for example the transmission data
dumps (if activated in the managed connection factory properties), but it may also contain other
persisted data which has been processed on that node, especially in versions before 1.8 (e.g. in
earlier versions the recover jobs were located there). It is recommended to include this directory
when migrating the system.
This topic is intended to give administrators a quick overview of the different monitoring facilities
of the SEEBURGER solutions.
Message Monitor
As explained in the architectural overview, the SEEBURGER Message Monitor can be used to monitor
the contents of the Message ID store.
It also contains information about failed incoming messages, which unfortunately are only visible
within the SAP monitoring for a limited amount of time (only a fixed number of messages is visible
within the SAP communication channel monitoring, so messages quickly "rotate" out of this monitor).
With the help of the Message Monitor it is possible to have a look at errors which happened in a
specific time frame.
Recovery Monitor
The Recovery Monitor allows you to view the currently active recover jobs. As already noted in the
architectural overview topic of this document, every message received by a SEEBURGER
adapter results in a recover job. This means that on a busy system there are always a lot of jobs
visible in the monitor. To narrow the search down, use the time filtering capabilities of the monitor.
You should look for recover jobs which have a retry count greater than one. These indicate a
processing error (either temporary, due to various runtime effects like a timeout while acquiring a
database connection, or permanent, due to a configuration issue).
If you are using the SEEBURGER resource management (e.g. when using OFTP over ISDN or
accessing different VAN boxes which allow only a limited number of concurrent logins) you can watch
the current active resources and their reservations via the reservation monitor.
One reservation should always be visible: RecoverTimer_DEFAULT. This reservation is
used to ensure that only one node in the cluster is responsible for dispatching recover jobs in case
they fail on the first run and need to be retried. If you do not see such a reservation, chances are high
that the recovery mechanism is not working properly.
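The reservation checks described here can be sketched as a small helper: verify that RecoverTimer_DEFAULT exists, and flag other reservations older than a few minutes. The row field names and the threshold are assumptions for illustration; remember that some scenarios (e.g. a busy ISDN line) legitimately hold reservations much longer, so treat flagged entries as candidates for inspection, not as errors:

```python
from datetime import datetime, timedelta

def check_reservations(rows, max_age_minutes=5, now=None):
    """Inspect reservation rows (assumed shape: {"name", "reserved_at"}).

    Returns (timer_present, stale_names):
      timer_present -- whether RecoverTimer_DEFAULT exists at all
      stale_names   -- reservations (other than the timer) older than
                       max_age_minutes, which deserve a closer look
    """
    now = now or datetime.now()
    timer_present = any(r["name"] == "RecoverTimer_DEFAULT" for r in rows)
    stale = [
        r["name"] for r in rows
        if r["name"] != "RecoverTimer_DEFAULT"
        and now - r["reserved_at"] > timedelta(minutes=max_age_minutes)
    ]
    return timer_present, stale
```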
System Status
Note: The system status is currently not cluster-aware. That means it displays the
"view" of the current cluster node. In case of database summaries this of course is
applicable for the complete instance, but for example Java system properties might
be different across server nodes.
• Check the number of entries in SEE_RECOVERYJOB. The entry count should reflect your actual
message throughput. If messages are stuck, an administrator needs to check them
manually and should either reprocess the messages or delete them if they are no longer needed.
• Check that the entries in SEE_RECOVERYJOB are in sync with the job files on the file system.
Job files are located at …/SYS/global/seeburger/recovery (from version 1.8 on) and match the file
name pattern *.job.
• Check the number of entries in the SEE_MSGIDSTORE* tables. The number should reflect your
message throughput within the configured preservation time frame. If the number keeps increasing,
further analysis is required. This might coincide with an increasing number of recover jobs if
there is a general issue with the system (e.g. some types of messages do not get processed
correctly due to a configuration issue).
• If the resource management is used, check the reservation monitor (or the
SEE_RMRESERVATION table) for stuck entries. The reservation date is visible and should
not be more than a few minutes old. An exception to this is of course the RecoverTimer_DEFAULT,
which usually exists as long as the corresponding server node stays alive.
Additionally, keep in mind that in some scenarios (e.g. a constant stream of OFTP
messages, all transmitted via the same ISDN line) a reservation can last several hours
or even longer. Nevertheless, as soon as no current message requires a specific resource, that
reservation should be cleaned up automatically by the adapter.
• From version 2.2 on the table SEE_BPATTACH should also be checked to roughly match the
message throughput within the preservation time frame.
• Monitor the disk usage of the SEEBURGER file system paths in use. Keep in mind that
e.g. the transmission data dump directories are not cleaned automatically and therefore may
grow over time.
• The SEEBURGER BIC module might write log files of the conversions in case of an error (this can
be configured; check the BIC documentation). This can also be a source of ever-increasing disk
usage. The attachment logs are stored under server<x>/seeburger/attachmentlog.
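The check that SEE_RECOVERYJOB stays in sync with the *.job files can be sketched as follows. How a job row maps to a file name varies between releases, so the assumption here that the job ID equals the file name stem is purely illustrative:

```python
from pathlib import Path

def compare_jobs(db_job_ids, recovery_dir):
    """Compare recover job IDs from the database with *.job files on disk.

    Returns (missing_files, orphan_files):
      missing_files -- DB job IDs with no matching *.job file (possible
                       data loss; see the note on recover job files)
      orphan_files  -- *.job files with no matching DB row
    """
    files = {p.stem: p.name for p in Path(recovery_dir).glob("*.job")}
    ids = set(db_job_ids)
    missing_files = sorted(ids - files.keys())
    orphan_files = sorted(name for stem, name in files.items()
                          if stem not in ids)
    return missing_files, orphan_files
```

Running this against …/SYS/global/seeburger/recovery (from version 1.8 on) after exporting the job IDs from the table would highlight any mismatch worth investigating.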
5.1.1 com.sap.engine.services.dbpool.exceptions.BaseSQLException: Cannot change transaction isolation during JTA transaction and when the connection is shared.
The above exception can occur because SEEBURGER software tries to change the transaction
isolation level on a database connection which is maintained by the application server. If the stack
trace contains com.seeburger.recover.db.DBAccessHandler, the error can safely be ignored.
Unfortunately, this exception is generated by SAP libraries and SEEBURGER is not able to prevent
it from being logged.
If the startup failure was due to a temporary problem (e.g. a broken database connection), a
server restart is enough to fix the issue. If it is a permanent error, the first start of the adapter
after the server restart will log the real error. The logs covering that first start are needed by the
SEEBURGER support team to find the root cause.