10-1 Service Development Help PDF
Version 10.1
October 2017
This document applies to webMethods Service Development Version 10.1 and to all subsequent releases.
Specifications contained herein are subject to change and these changes will be reported in subsequent release notes or new editions.
Copyright © 2008-2017 Software AG, Darmstadt, Germany and/or Software AG USA Inc., Reston, VA, USA, and/or its subsidiaries and/or
its affiliates and/or their licensors.
The name Software AG and all Software AG product names are either trademarks or registered trademarks of Software AG and/or
Software AG USA Inc. and/or its subsidiaries and/or its affiliates and/or their licensors. Other company and product names mentioned
herein may be trademarks of their respective owners.
Detailed information on trademarks and patents owned by Software AG and/or its subsidiaries is located at
http://softwareag.com/licenses.
Use of this software is subject to adherence to Software AG's licensing conditions and terms. These terms are part of the product
documentation, located at http://softwareag.com/licenses and/or in the root installation directory of the licensed product(s).
This software may include portions of third-party products. For third-party copyright notices, license terms, additional rights or
restrictions, please refer to "License Texts, Copyright Notices and Disclaimers of Third Party Products". For certain specific third-party
license restrictions, please refer to section E of the Legal Notices available under "License Terms and Conditions for Use of Software AG
Products / Copyright and Trademark Notices of Software AG Products". These documents are part of the product documentation, located
at http://softwareag.com/licenses and/or in the root installation directory of the licensed product(s).
Table of Contents
Using the VCS Integration Feature to Check Elements In and Out of a VCS.........................133
VCS Integration Supported Features..................................................................................... 134
VCS Integration Unsupported Features................................................................................. 135
Locking Locally vs VCS Locking............................................................................................ 135
System Locking and VCS Integration Feature.......................................................................136
About Unlocking Elements with Integration Server Administrator.......................................... 136
Adding New Packages and Elements to a VCS.................................................................... 136
Adding Existing Packages and Elements to a VCS...............................................................137
Modifying Elements that are in the VCS................................................................................137
Checking Out Packages and Elements..................................................................................138
Checking In Packages and Elements.................................................................................... 139
Reverting Changes to a Checked Out Package or Element..................................................140
Getting the Latest Version from the VCS...............................................................................141
Getting an Earlier Version from the VCS............................................................................... 142
Deleting Packages and Elements from the VCS................................................................... 144
Restoring Deleted Items......................................................................................................... 145
Restoring a Deleted Package......................................................................................... 145
Restoring a Deleted Folder or Element.......................................................................... 146
Copying and Moving Folders or Elements............................................................................. 146
Renaming Packages, Folders, and Elements........................................................................ 147
Viewing the History of a Folder or Element........................................................................... 147
Version History Details.................................................................................................... 148
Working with Blaze Rules Services........................................................................................149
Working with Web Service Descriptors.................................................................................. 149
Working with webMethods Adapter Connections...................................................................149
Working with Java Services................................................................................................... 150
Copying Java Services....................................................................................................150
Moving Java Services..................................................................................................... 150
Labeling Java Services in the VCS.................................................................................150
Managing Packages.....................................................................................................................153
Creating a Package................................................................................................................154
Guidelines for Naming Packages....................................................................................154
Documenting a Package........................................................................................................ 155
Accessing Package Documentation................................................................................156
Building Services.........................................................................................................................169
A Process Overview............................................................................................................... 170
Package and Folder Requirements........................................................................................171
About the Service Signature.................................................................................................. 172
Guidelines for Specifying Input Parameters....................................................................173
Guidelines for Specifying Output Parameters................................................................. 174
Declaring Input and Output Parameters......................................................................... 174
Using a Specification as a Service Signature..........................................................175
Using an IS Document Type to Specify Service Input or Output Parameters.......... 176
Inserting Input and Output Parameters....................................................................176
About Service Run-Time Parameters.....................................................................................177
Maintaining the State of Service..................................................................................... 178
Specifying the Run-Time State for a Service...........................................................178
About Service Caching....................................................................................................179
When Are Cached Results Returned?.....................................................................179
Types of Services to Cache.....................................................................................180
Controlling a Service’s Use of Cache...................................................................... 181
Specifying the Duration of Cached Results............................................................. 182
Refreshing Service Cache by Using the Prefetch Option........................................ 182
Configuring Caching of Service Results.................................................................. 183
Specifying the Execution Locale..................................................................................... 183
About URL Aliases for Services......................................................................................185
Creating a Path Alias for a Service......................................................................... 187
Running Services.........................................................................................................................435
Using Launch Configurations to Run Services.......................................................................436
Creating a Launch Configuration for Running a Service.................................................437
Supplying Input Values to a Service...................................................................................... 438
Entering Input for a Service............................................................................................ 438
Specifying a Value for a String Variable.................................................................. 440
Specifying Values for a String List Variable............................................................. 441
Specifying Values for a String Table Variable.......................................................... 443
Specifying Values for a Document Variable that Has Defined Content.................... 446
Specifying Values for a Document Variable with No Defined Content..................... 447
Specifying Values for a Document List Variable...................................................... 449
Specifying a Value for an Object Variable............................................................... 451
Specifying Values for an Object List Variable.......................................................... 452
Saving Input Values.........................................................................................................453
Loading Input Values.......................................................................................................453
Subscribing to Events.................................................................................................................915
What Happens When an Event Occurs?............................................................................... 916
Subscribing to Events.............................................................................................................917
Creating Event Filters......................................................................................................918
Creating Event Filters for Services................................................................................. 922
Viewing and Editing Event Subscriptions...............................................................................922
Suspending Event Subscriptions............................................................................................923
Deleting an Event Subscription.............................................................................................. 923
Building an Event Handler......................................................................................................923
Invoking Event Handlers Synchronously or Asynchronously................................................. 924
About Alarm Events................................................................................................................925
About Audit Events.................................................................................................................925
About Audit Error Events........................................................................................................926
About Exception Events......................................................................................................... 926
About Guaranteed Delivery Events........................................................................................ 926
Connecting to webMethods API Portal for Publishing REST API Descriptors...................... 981
Configuring a Connection to API Portal................................................................................. 982
Adding a Connection Configuration for API Portal..........................................................982
Editing a Connection Configuration for API Portal.......................................................... 984
Removing a Connection Configuration for API Portal.....................................................984
Changing the Default Connection Configuration for API Portal...................................... 984
Properties....................................................................................................................................1005
Integration Server Properties................................................................................................1006
Event Manager Properties.............................................................................................1006
My Locked Elements..................................................................................................... 1007
Server ACL Information.................................................................................................1008
Server Information......................................................................................................... 1008
Package Properties.............................................................................................................. 1009
Package Information......................................................................................................1009
Package Dependencies.................................................................................................1009
Package Settings...........................................................................................................1010
Package Permissions.................................................................................................... 1012
Package Replication Services.......................................................................................1012
Package Startup/Shutdown Services............................................................................ 1013
Element Properties............................................................................................................... 1015
Element Information.......................................................................................................1015
Element Permissions.....................................................................................................1015
Element General Properties.......................................................................................... 1016
REST Resource Configuration...................................................................................... 1017
Document Type Properties................................................................................................... 1017
General Properties for IS Document Types.................................................................. 1018
webMethods Messaging Properties.............................................................................. 1020
Universal Name Properties........................................................................................... 1023
Flat File Dictionary Properties.............................................................................................. 1024
General Properties for a Flat File Dictionary.................................................................1024
Flat File Element Properties.................................................................................................1024
Record Definition Properties......................................................................................... 1025
Record Reference Properties........................................................................................1028
Composite Definition Properties....................................................................................1030
Composite Reference Properties.................................................................................. 1033
Field Definition Properties............................................................................................. 1035
Field Reference Properties............................................................................................1038
Flat File Schema Properties.................................................................................................1041
General Properties for a Flat File Schema................................................................... 1041
Icons............................................................................................................................................ 1161
Package Navigator View Icons.............................................................................................1162
UDDI Registry View Icons.................................................................................................... 1166
Flat File Element Icons.........................................................................................................1166
Flow Step Icons....................................................................................................................1167
OData Service Icons.............................................................................................................1168
REST API Descriptor Icons..................................................................................................1170
Schema Component Icons................................................................................................... 1170
Toolbars...................................................................................................................................... 1175
Compare Editor Toolbar........................................................................................................1176
Document Type Editor Toolbar............................................................................................. 1176
Keyboard Shortcuts...................................................................................................................1187
webMethods Service Development provides tools and features that developers can use to
build and test services. webMethods Service Development also provides tools to connect
to Integration Server, manage packages, and create the elements needed to support
services, such as document types, triggers, and web service descriptors. You can learn
more by looking in Contents for Software AG Products > webMethods Service Development
Help.
Document Conventions
Convention Description
Italic Identifies variables for which you must supply values specific to
your own situation or environment. Identifies new terms the first
time they occur in the text.
{} Indicates a set of choices from which you must choose one. Type
only the information inside the curly braces. Do not type the { }
symbols.
Convention Description
... Indicates that you can type multiple options of the same type.
Type only the information. Do not type the ellipsis (...).
Online Information
Software AG Documentation Website
You can find documentation on the Software AG Documentation website at http://
documentation.softwareag.com. The site requires Empower credentials. If you do not
have Empower credentials, you must use the TECHcommunity website.
Software AG TECHcommunity
You can find documentation and other technical information on the Software AG
TECHcommunity website at http://techcommunity.softwareag.com. You can:
Access product documentation, if you have TECHcommunity credentials. If you do
not, you will need to register and specify "Documentation" as an area of interest.
Access articles, code samples, demos, and tutorials.
Use the online discussion forums, moderated by Software AG professionals, to
ask questions, discuss best practices, and learn how other customers are using
Software AG technology.
Link to external websites that discuss open standards and web technology.
Software AG Designer provides a set of Service Development features that you can use
to build, edit, and debug services and integration logic. It provides a collection of editors
and views in which you can develop the logic and supporting objects (referred to as
elements) for an integration solution. It also provides tools for running and debugging
the solutions you create.
Designer lets you rapidly construct integration logic with an easy-to-use implementation
language called the webMethods flow language. Flow language provides a set of simple
but powerful constructs that you use to specify a sequence of actions (steps) that
the Integration Server will execute at run time. Designer also has extensive data
transformation and mapping capabilities that allow you to quickly drag-and-drop data
fields from one step to the next.
Besides providing tools for constructing flow services, Designer provides additional
editors and tools for creating various elements that support the execution of an
integration solution. For example, you use Designer to create the document types and
schemas used for data validation and to define triggers that launch the execution of
services when certain messages are received.
Note: This guide describes features and functionality that may or may not be
available with your licensed version of webMethods Integration Server. For
information about the licensed components for your installation, see the
Settings > License page in the webMethods Integration Server Administrator.
webMethods Integration Server provides an environment for the orderly, efficient, and
secure execution of services. It decodes client requests, identifies the requested services,
invokes the services, passes data to them in the expected format, encodes the output
produced by the services, and returns output to the clients.
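Client requests typically arrive as HTTP calls addressed to a fully qualified service name. The sketch below builds such an invocation URL from a `folder.subfolder:service` name; the `/invoke` path and port 5555 are commonly used Integration Server defaults, but confirm both against your own server configuration.

```java
// Sketch: deriving an Integration Server invoke URL from a fully
// qualified service name ("folder.subfolder:service"). The /invoke
// path and port here are typical defaults, not guaranteed for every
// installation.
public class InvokeUrl {

    // e.g. ("localhost", 5555, "orders.process:checkOrder")
    //   -> "http://localhost:5555/invoke/orders.process/checkOrder"
    public static String build(String host, int port, String qualifiedName) {
        int sep = qualifiedName.lastIndexOf(':');
        if (sep < 0) {
            throw new IllegalArgumentException(
                "Expected folder:service, got " + qualifiedName);
        }
        String folderPath = qualifiedName.substring(0, sep);
        String service = qualifiedName.substring(sep + 1);
        return "http://" + host + ":" + port + "/invoke/"
                + folderPath + "/" + service;
    }
}
```

A client would then POST the service's input parameters to the resulting URL.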
Using Designer, you build and edit services, document types, and other elements directly
on an Integration Server. You connect Designer to Integration Server through server
definitions. A server definition specifies the location and characteristics of the Integration
Server to which Designer is connecting.
Field Description
Password The password for user. Use the exact combination of upper-
and lower-case characters with which it was originally
defined. Integration Server passwords are case sensitive.
Field Description
Password The password for the user account in User. Use the exact
combination of upper- and lower-case characters with which it
was originally defined. Integration Server passwords are case
sensitive.
5. Click Connect.
Designer populates the bottom half of the Fetch Integration Server Definitions dialog with
a list of server definitions available on the other Integration Server.
6. Select one or more definitions to fetch, and click OK.
Designer refreshes the Integration Servers page, this time including the fetched
definitions. Designer automatically tries to connect to the fetched servers.
7. For any server definitions that have the status No user or password, select the
definition, click Edit, supply the user ID and password, and click OK.
4. In the Open window, navigate to the .properties file you want to import.
5. Click OK to import the data from the selected .properties file into your Preferences
> Software AG > Integration Servers screen.
If you update the configuration so that a different server definition is the default, and
a user subsequently creates a step when Designer is not connected to an Integration
Server, Designer will use the new default server for the new steps. In contrast, Designer
will continue to use the original servers for existing steps.
There must be one and only one default Integration Server defined at all times.
Changing Passwords
You can change the password for your user account. If you forget your password,
contact the server administrator.
Important: You cannot use Designer to change passwords of users that are stored in
an external directory. For information about managing users stored in an
external directory, see webMethods Integration Server Administrator’s Guide.
Password Requirements
For security purposes, Integration Server places length and character restrictions on
passwords. Integration Server contains a default set of password requirements; however,
your server administrator can change these. For more information about these password
requirements, contact your server administrator.
The default password requirements provided by webMethods Integration Server are as
follows:
Requirement Default
Minimum length 8
To ensure the security of your password, follow the additional guidelines below:
Do not choose obvious passwords, such as your name, address, phone number,
license plate, name of your spouse or child, or a birthday
Do not use any word that can be found in the dictionary.
Do not write your password down.
Do not share your password with anyone.
Change your password frequently.
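The default minimum-length requirement from the table above can be expressed as a simple check. This is an illustrative client-side sketch only (the class and method names are hypothetical, not Integration Server API), and your server administrator may have configured stricter rules.

```java
// Sketch: checking a candidate password against the default
// minimum-length requirement described above. Illustrative only;
// the actual requirements are enforced by Integration Server and
// may have been changed by the administrator.
public class PasswordPolicy {

    public static final int MIN_LENGTH = 8; // default shown in the table above

    public static boolean meetsMinimumLength(String candidate) {
        return candidate != null && candidate.length() >= MIN_LENGTH;
    }
}
```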
3. Select the server definition for which you want to change the password and click
Change Password.
4. In the Change Password dialog box, in the Old password field, type your current
password.
5. In the New password field, type your new password.
6. In the Confirm new password field, retype your new password. Click OK.
Important: The server administrator can disable the feature for changing your password
from Designer. If the feature is disabled and you try to change your
password, you will receive a message stating that the administrator has
disabled the feature.
Synchronizing Passwords
The password stored locally in secure storage and in Integration Server can become out
of sync. This mismatch of credentials occurs due to either of the following reasons:
The password change operation is not successful due to machine or network failure
between the time the new password is stored locally in secure storage and its new
value is updated in Integration Server.
The password to connect to Integration Server is changed using the change
password functionality in another instance of Designer.
In both these instances, Designer will not be able to connect to Integration Server. To
synchronize the passwords and reconnect to Integration Server, edit the disabled server
definitions in Designer and provide the right credentials. For information on editing the
server definitions, refer to "Editing Server Definitions" on page 45.
Before you can create a new Java or C service, ensure that all services in the folder in
which you want to create the new service and of the type you want to create (Java or
C) are unlocked. Alternatively, you can ensure that you have all the services locked.
For more information, see "Guidelines for Locking Java and C/C++ Services" on page
102.
Note: The Choose template field appears only for those elements that support
property templates. The default value for this field is Default.
4. When you have supplied all the information that Designer needs to create the
element, click Finish. Designer refreshes the Package Navigator view and displays
the new element.
Note: When Designer creates web service connectors as part of creating a consumer
web service descriptor, Designer applies the default property template to
the web service connector. You can modify the properties of the element
by changing them in the Properties view or apply a different template after
the element is created. For more information about applying templates, see
"Applying Property Templates to Elements" on page 85.
? ' - # = ) ( . / \ ;
% * : $ ] [ " + , ~
Characters outside of the basic ASCII character set, such as multi-byte characters.
If you specify a name that disregards these restrictions, Designer displays an error
message. When this happens, use a different name or try adding a letter or number to
the name to make it valid.
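A check like the restriction above can be sketched as follows. This is an illustration of the rule as listed, not the server's actual validation logic.

```java
// Sketch: rejecting element names that contain any of the restricted
// characters listed above, or characters outside the basic ASCII set
// (such as multi-byte characters). Illustrative only; Designer and
// Integration Server perform the authoritative validation.
public class ElementNames {

    // The restricted characters listed above:
    // ? ' - # = ) ( . / \ ; % * : $ ] [ " + , ~
    private static final String RESTRICTED = "?'-#=)(./\\;%*:$][\"+,~";

    public static boolean isValid(String name) {
        if (name == null || name.isEmpty()) return false;
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (c > 127) return false;              // outside basic ASCII
            if (RESTRICTED.indexOf(c) >= 0) return false;
        }
        return true;
    }
}
```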
You cannot undo a move, copy, rename, or delete action using the Edit > Undo
command.
If you select a publishable document type that is associated with an adapter
notification, Designer handles actions performed on the document type as follows:
For non-copy actions, you must also select the adapter notification before you can
perform a non-copy action on the document type.
For copy actions, you can select the publishable document type without selecting
its associated adapter notification. However, the copied publishable document
type loses its association with the adapter notification.
Opening Elements
When opening elements from the Package Navigator view, keep the following points in
mind:
Double-click a folder to expand or collapse the contents of the folder in the Package
Navigator view.
If you have enabled the Version Control System (VCS) Integration feature of
Designer, Designer might exhibit slowdowns, error messages (such as “Server
version has changed” and “Session already in use”), and may stop responding
completely when you expand a large element (such as a folder) in the Package
Navigator view. This condition occurs because Designer checks the lock status of
each element within the expanded element in the Integration Server.
Tip: You can also use the Open Integration Server Element dialog box to easily
locate and open an element by typing any portion of the element name. To
open the Open Integration Server Element dialog box, right-click anywhere in
the Package Navigator view and select Open Elements from the context menu,
or press CTRL+SHIFT+A on the keyboard.
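The dialog's match-any-portion behavior amounts to a case-insensitive substring filter over element names, as sketched below. The exact matching rules Designer applies (wildcards, camel-case matching, and so on) are not specified here, so treat this as an approximation.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: the kind of substring filtering the Open Integration Server
// Element dialog performs when you type part of an element name.
// An approximation for illustration, not Designer's actual matcher.
public class ElementFinder {

    public static List<String> match(List<String> elementNames, String typed) {
        String needle = typed.toLowerCase();
        return elementNames.stream()
                .filter(name -> name.toLowerCase().contains(needle))
                .collect(Collectors.toList());
    }
}
```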
Closing Elements
Keep the following points in mind when closing elements:
You do not need to close elements when you exit Designer. Designer remembers
which elements were open and displays them when you restart Designer.
If you close an element without saving changes made to the element, Designer will
prompt you to save changes.
To close an element
Do one of the following:
To... Click...
Close the active element (that is, the element whose tab is highlighted): File > Close.
Save all elements you have edited, on all servers: File > Save All.
Note: The contents of the Comments property that was available in the Properties
view in previous versions of Service Development are available in the
Comments tab starting from version 9.7.
Select... To...
Prompt before updating dependents when renaming/moving: Instruct Designer to alert
you when dependents (that is, other elements that use the selected element, such as
flow services, IS document types, or triggers) exist. If dependents exist, Designer lists
those dependents before renaming or moving the selected element and prompts you to:
4. Click OK.
3. If the elements you want to move or copy contain unsaved changes, Designer alerts
you that you must first save the changes. Click OK to close the alert dialog box. Then,
save the changes and repeat the move/copy action.
4. If you do not have Read access to the elements you are moving or copying, or
Write access to the location you are moving/copying them to, Designer displays a
message that identifies the elements that are preventing the action from completing
successfully. Click OK and then either obtain the proper access from your system
administrator or select only those elements to which you have proper access.
5. Select the location where you want to move or copy the elements.
6. Click Edit > Paste.
7. If the destination already contains an element with the same name as an element you
are moving or copying, do one of the following:
If you are moving the element, Designer alerts you that the element cannot be
moved. Click OK to close the alert dialog box. Rename the element if desired and
repeat the move action.
If you are copying the element, Designer copies the element and appends a
number to the name of the copied element. (For example, if you are copying
a flow service named checkOrder2 to a destination that already contains a
flow service with that name, Designer copies the service and names the copy
checkOrder2_1.) Rename the element if desired.
For more information about renaming elements, see "Renaming Elements" on page
66.
8. If one of the elements you moved or copied is a Java service, perform the following
as necessary:
If you are moving or copying the Java service to a folder with other Java services
that are system locked or locked by another user, Designer alerts you that the
element cannot be moved/copied. Click OK and then ask the owner of the lock to
remove the lock.
If the Java service you are moving or copying contains a shared source that
conflicts with the shared source of an existing Java service in the destination
folder, Designer alerts you that there is a conflict. Click OK to use the destination
folder’s shared source, or click Cancel to cancel the entire move action.
Note: If no shared Java source conflict exists, Designer moves the Java service
and its shared source to the destination folder. If a conflict does exist,
you must re-specify the shared source in the copy of the service. Using
the Designer Java Service Editor, you can copy the information from the
Source tab of the original service to the Source tab of the copy. For more
information about the Source tab of the Java Service Editor, see "Source
Tab" on page 332.
9. If you selected the Prompt before updating dependents when renaming/moving check
box in the Package Navigator preferences and any dependent elements on the
current server contain unsaved changes, Designer alerts you to save them. Select the
elements and click OK to save the changes or Cancel to cancel the entire move or copy
action.
10. If the Move and Rename Dependencies dialog box appears, do one of the following:
To... Click...
Tip: You can also move elements by clicking and dragging them to their new
location.
Shared fields with different values. In this case, you must first manually copy the
Shared fields into the destination folder and then move or copy the Java service.
Tip: To retain the status of a publishable document type and its link to a Broker
document type, use the package replication functionality in the Integration
Server Administrator instead of using Designer to move or copy the
package containing the publishable document type. For information about
package replication, see webMethods Integration Server Administrator’s Guide.
Renaming Elements
When renaming elements, keep the following points in mind:
You can rename any element to which you have Write access for both the element and
its parent folder. When renaming a folder, you must also have Write access to all
elements within the folder. For more information about Write access and ACLs
assigned to elements, see "Assigning and Managing Permissions for Elements" on
page 89.
When you rename a folder, Designer automatically renames all of the elements in
that folder (that is, changes their fully qualified names).
If the folder you want to rename contains elements with unsaved changes, you must
save the changes before you can rename the folder.
Element names must be unique across all packages. If you try to rename an element
using a name that already exists, Designer reverts the element back to its original
name.
When you rename an adapter notification, Designer also renames its associated
publishable document type and prompts you to indicate whether to rename the
associated Broker document type.
You cannot rename a listener or connection element.
When you rename a publishable document type, Designer checks for dependents
such as triggers and services that use the publishable document type. (Designer
performs dependency checking only if you select the Prompt before updating
dependents when renaming/moving preference.) If Designer finds elements that use
the publishable document type, Designer gives you the option of updating the
publishable document type name in each of these elements. If you do not update the
references, all of the references to the publishable document type will be broken.
Important: You must manually update any services that invoke the pub.publish services
and specify this publishable document type in the documentTypeName or the
receivedDocumentTypeName parameter.
To rename an element
1. In Package Navigator view, select the element that you want to rename. Right-click
the element and click Rename.
2. If the element you want to rename contains unsaved changes, Designer alerts you
that the element cannot be renamed until you save the changes. Click OK to close the
alert dialog box. Then, save the changes and repeat the rename action.
3. Edit the name and press ENTER.
If an element already exists with that name at the same level, Designer displays a
message alerting you that the rename action could not be completed. Click OK to
close the message dialog box and repeat the rename action.
4. If you selected the Prompt before updating dependents when renaming/moving check box
in the Package Navigator preferences and any dependent elements on the current
server contain unsaved changes, Designer alerts you to save the elements that will be
affected by the rename action. Select the elements and click OK to save the changes or
Cancel to cancel the entire rename action.
5. If the Move and Rename Dependencies dialog box appears, do one of the following:
To... Click...
When refactoring field names, ensure that you have Write access to the element and its
parent folder. Variables cannot be refactored through the Pipeline tab or the Package
Navigator view.
Refactoring Elements
You can change field names in all the dependents of elements. Refactoring ensures that
changes are applied to all the applicable references of elements.
To refactor elements
1. In Package Navigator view, double-click the element to open it.
Ensure that you have Write access to the element.
2. Right-click the variable you want to refactor.
3. Select Refactor > Rename.
4. In the Refactor variable wizard, type the new name in the text box and click Next.
All the occurrences of the variable are displayed in the Changes to be performed list.
5. Select the check boxes corresponding to the appropriate variables to define the scope
of refactoring. By default, all the variable occurrences are selected.
Note: If you clear the selection of a variable occurrence from the Changes to
be performed list, that occurrence is unlinked from the source element.
Then you must manually edit the particular variable occurrence.
Do not clear the selection of the source element from the list.
The IS Asset Compare view displays the changes between the original asset and the
refactored asset. This view is only available for IS document types and Flow services.
6. Click Finish. The Refactor Log tab displays the refactor status of each occurrence.
Note: For variable occurrences that are not refactored, the refactor log
displays the reason along with the status. Then you must manually
edit these occurrences.
For an element that has more than one input variable with the same
name, refactoring any one of those variables results in Designer
displaying an extra variable. This is an issue with the manner in which
Designer displays variables with the same name after refactoring. The
issue does not impact the results of the refactoring operation.
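Conceptually, the rename performed by the refactor wizard amounts to updating a field name in every structure that references it. The following minimal sketch illustrates that idea on nested document data; it is a hypothetical helper, not the Designer API:

```python
def rename_field(node, old, new):
    """Recursively rename every occurrence of a field name in a nested
    document structure (dicts and lists), returning a new structure."""
    if isinstance(node, dict):
        return {(new if key == old else key): rename_field(value, old, new)
                for key, value in node.items()}
    if isinstance(node, list):
        return [rename_field(item, old, new) for item in node]
    return node  # leaf values are returned unchanged

# Hypothetical pipeline data containing two occurrences of PONum.
doc = {"PONum": "123", "items": [{"PONum": "123", "qty": 2}]}
renamed = rename_field(doc, "PONum", "orderNumber")
```

Because the helper builds new dictionaries and lists, the original structure is left untouched, which mirrors the wizard's preview step (the Changes to be performed list) before anything is committed.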
Deleting Elements
When deleting elements, keep the following points in mind:
You can delete any elements to which you have Write access for the element and
its parent folder. When deleting a folder, you must also have Write access to all
elements within the folder. For more information about Write access and ACLs
assigned to elements, see "Assigning and Managing Permissions for Elements" on
page 89
When you delete a folder or the last Java service in a folder, Designer also deletes the
shared source for that folder. If you cancel the delete action, no elements (including
non-Java service elements) are deleted.
You can only delete an adapter notification’s publishable document type if you
delete its associated adapter notification.
When you delete an adapter notification, Designer also deletes its associated
publishable document type and prompts you to indicate whether to delete the
associated Broker document type.
You cannot delete a listener or connection element.
If you delete a dictionary, the dependency manager lists all flat file schemas
and dictionaries that will be impacted by the deletion and prompts you to confirm
the deletion. However, it does not identify the names of the records, fields, or
composites that reference the dictionary; that is your responsibility.
If you delete a publishable document type, Designer prompts you to keep or delete
the associated Broker document type.
If you delete a publishable document type and Broker document type associated
with a trigger or a flow service, you might break any integration solution that
uses the document type.
If you delete the Broker document type, you might negatively impact any
publishable document types created from that Broker document type on other
Integration Servers. When the developers synchronize document types with
Broker and they choose to Pull from Broker, publishable document types associated
with the deleted Broker document type will be removed from their Integration
Servers.
To delete elements
1. In Package Navigator view, select the elements that you want to delete.
2. Select Edit > Delete.
3. If you have selected the Confirm before deleting check box in the Preferences dialog box
for Package Navigator view, do the following:
a. If any elements on Integration Server have unsaved changes, Designer prompts
you to save changes. Select the elements whose changes you want to save and
click OK.
b. If other elements are dependents of the elements you are deleting, Designer
indicates which items will be affected by the deletion.
c. If you are deleting a publishable document type, Designer prompts you to keep
or delete the associated Broker document type. Do one of the following:
To... Do this...
What Is a Dependent?
To determine how a selected element is used by other elements on the server, you can
find dependents of the selected element. A dependent is an element that uses a selected
element. For example, suppose that the flow service ServiceA invokes the flow service
receivePO. The ServiceA service uses the receivePO service. This makes ServiceA a dependent of
the flow service receivePO. If you delete receivePO, ServiceA will not run.
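Conceptually, finding dependents is a reverse lookup over the "uses" relationships between elements: invert the graph of who uses whom, and an element's dependents fall out directly. A minimal sketch of that idea, using the hypothetical service names from the example above:

```python
from collections import defaultdict

# Hypothetical "uses" relationships: ServiceA invokes receivePO, and so on.
uses = {
    "ServiceA": ["receivePO", "processPO"],
    "ServiceB": ["receivePO"],
}

# Invert the graph: an element's dependents are the elements that use it.
dependents = defaultdict(set)
for element, references in uses.items():
    for ref in references:
        dependents[ref].add(element)
```

With this index, deleting receivePO can be flagged as breaking both ServiceA and ServiceB before the deletion happens.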
Dependent elements
During debugging, you might want to locate all of the dependents of a given service
or IS document type. Or, before editing an IS document type, you might want to know
what elements, such as specifications, webMethods Messaging Triggers, or flow services,
will be affected by changes to the IS document type.
In addition to finding a dependent IS element, Designer also finds the Dynamic Server
Pages (DSPs) that depend on the service. For example, suppose that the DSP page
myPage.dsp resides in Integration Server_directory\instances\instance_name\packages
\myPackage\pub and uses the service myFolder:submitMyPage. If you find dependents for
the myFolder:submitMyPage service, Designer also returns the following as a dependent:
Integration Server_directory\instances\instance_name\packages\myPackage\pub
\myPage.dsp
Note: Designer does not consider a Java service that invokes another service to
be a dependent. For example, if Java service A invokes service B, and you
instruct Designer to find dependents of service B, service A will not appear as
a dependent.
Finding Dependents
To find dependents of a selected element
1. In Package Navigator view, right-click the element for which you want to find
references and select Find Dependents.
2. If any elements on Integration Server have unsaved changes, Designer prompts you
to save changes. Select the elements whose changes you want to save, and then click
OK.
Designer displays the dependents of the selected element on the Search view.
3. After Designer finds the dependents of the selected element, you may do either of
the following:
To jump to an element in Package Navigator view, right-click that element in the
results, and select Show In > Package Navigator.
To see all dependents of a found dependent, click the icon next to the item in the
results list.
What Is a Reference?
To determine how a selected element uses other elements on the server, you can
find references of the selected element. A reference is an element that is used by a
selected element. For example, the flow service ServiceA invokes the services receivePO,
pub.schema:validate, processPO, and submitPO. Additionally, in its input signature, ServiceA
declares a document reference to the IS document type PODocument. The services
receivePO, validate, processPO, and submitPO, and the IS document type PODocument, are
used by ServiceA. The elements receivePO, validate, processPO, submitPO, and PODocument are
references of ServiceA.
Elements as references
During debugging of a complex flow service, you might want to locate all of the
services, IS document types, and specifications used by the flow service. Use the Find
References command to locate the references.
You can also use the Find References command to locate any unresolved references. An
unresolved reference is an element that does not exist in the Package Navigator view yet
is still referred to in the service, IS document type, or specification that you selected.
The element might have been renamed, moved, or deleted. To prevent unresolved
references, specify the dependency checking safeguards. For more information about
these safeguards, see "Configuring Dependency Checking for Elements" on page 59.
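Conceptually, an unresolved reference can be detected by checking each name an element refers to against the current namespace contents. A minimal sketch of that check, with hypothetical names (not the Designer API):

```python
# Hypothetical namespace contents and the references of one element.
namespace = {"receivePO", "processPO", "pub.schema:validate"}
references_of_service_a = ["receivePO", "pub.schema:validate", "submitPO"]

# An unresolved reference is one that no longer exists in the namespace,
# for example because the element was renamed, moved, or deleted.
unresolved = [ref for ref in references_of_service_a if ref not in namespace]
```

Here submitPO would be reported as unresolved, which is exactly the situation the dependency checking safeguards are designed to prevent.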
Finding References
To find references of a selected element
1. In Package Navigator view, right-click the element for which you want to find
references and select Find References.
2. If any elements on Integration Server have unsaved changes, Designer prompts you
to save changes. Select the elements whose changes you want to save, and then click
OK.
Designer displays the references of the selected element on the Search view.
3. After Designer finds the references of the selected element, you may do either of the
following:
To jump to an element in Package Navigator view, right-click that element in the
results, and select Show In > Package Navigator.
To see all references of a found reference, click the icon next to the item in the
results list.
Pipeline reference
Pipeline references also include locations where you modify the value of a variable
in a document reference or document reference list by assigning or dropping a value
using the corresponding buttons on the Pipeline view toolbar. The following image of
Pipeline view identifies these types of pipeline references.
When you edit an IS document type, the changes affect any document reference and
document reference list variables defined by that IS document type. The changes
might make pipeline references invalid. For example, suppose the input signature
for ServiceA contains a document reference variable POInfo based on the IS document
type PODocument. The IS document type PODocument contains the field PONum. In the
pipeline for ServiceA, you link the PONum field to another pipeline variable. If you edit
the PODocument IS document type by deleting the PONum field, the pipeline reference
(the link) for the field in the ServiceA pipeline is broken (that is, it is invalid) because the
pipeline contains a link to a field that does not exist.
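The broken-link situation described above can be modeled as a path lookup: a pipeline link is valid only if the field path it targets still exists in the document type. A minimal sketch of that validity check (a hypothetical model, not the Designer API):

```python
def field_exists(doc_type, path):
    """Return True if a slash-delimited field path (e.g. 'POInfo/PONum')
    still exists in a nested document-type definition."""
    node = doc_type
    for part in path.split("/"):
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return True

# ServiceA's pipeline links to POInfo/PONum, defined by PODocument.
po_document = {"POInfo": {"PONum": "string"}}
link_ok_before = field_exists(po_document, "POInfo/PONum")
del po_document["POInfo"]["PONum"]      # edit the IS document type
link_ok_after = field_exists(po_document, "POInfo/PONum")
```

The link is valid before the edit and invalid after it, which is what the Inspect Pipeline References command surfaces.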
When you edit an IS document type, you might want to check all dependent pipeline
modifiers for validity. You can use the Inspect Pipeline References command to locate any
broken or invalid pipeline references. You can use this command to:
Search for invalid pipeline references in a selected flow service.
Search for invalid pipeline references involving document reference and document
reference list variables defined by a selected IS document type.
After Designer finds the pipeline references of the selected element, you can jump to
an element in Package Navigator view. Right-click that element in the results, and
select Show In > Package Navigator.
Finding Elements
You can find elements and fields within Designer using the following methods:
Searching for elements in the Package Navigator view. When creating and editing
elements, you might lose track of where you saved certain elements. For example,
suppose that you do not remember the folder to which you saved a service called
Test. You can use either the Search dialog box or the Open Integration Server Element
dialog box to search for elements.
Locate an invoked service from the editor. You can highlight the location of an invoked
service in the Package Navigator view. This is especially helpful when working with
a flow written by another party or complex flows that make multiple invokes.
Locate a referenced document type from the editor. You can highlight the location of a
referenced document type in the Package Navigator view. This is especially helpful
when working with services that have complex signatures or mapping data, or with
flow services written by other developers.
Linking open editors. If you have a lot of elements open, you might want to quickly
bring the editor for an element to the top of the stack of open editor views.
Note: If a document type is referred to in other elements, the field names inside the
reference are not searched. The field names of the referred document type can
be searched.
To... Do this...
Limit the scope of the search to a specific package: Select the package in the Search in
this package only list.
Search within all the user-defined packages: a. Select All Packages in the Search in this
package only list. b. Clear the Include system packages check box.
Search within all the packages: a. Select All Packages in the Search in this package only
list. b. Select the Include system packages check box.
6. Click Browse and select the asset types to search for the specified search string.
The selected asset types are displayed as comma-separated values in the Asset type
text box.
7. Click Search.
The search results are displayed in the Search view.
Note: You can also select an element and click Show in > Package Navigator to
locate the element in Package Navigator view.
Caching Elements
You can improve performance in Designer by caching Package Navigator elements that
are frequently used. When elements are located in the Designer cache, Designer does not
need to request them from the Integration Server and can therefore display them more
quickly.
Keep in mind that increasing the cache reduces the amount of available memory. If you
experience memory problems, consider decreasing the number of cached elements.
To cache elements
1. In Designer, click Window > Preferences.
2. In the Preferences dialog box, select Software AG>Service Development> Package
Navigator.
3. In the Number of elements to cache field, type the number of elements that you want to
cache per Designer session. The total number of cached elements includes elements
on all the Integration Servers to which you are connected.
The minimum number of elements is 0. The default is 100 elements. The higher the
number of elements, the more likely an element will be in the cache, which reduces
network traffic and speeds up Designer.
4. Click OK. The caching settings take effect immediately.
Note: Clearing cached elements from Designer is different from clearing the
contents of the pipeline from Integration Server cache. If you want to clear the
contents of the pipeline from a server’s cache, see "About Service Caching" on
page 179.
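The trade-off described above, fewer server round trips at the cost of memory, is the classic behavior of a bounded least-recently-used cache. The following is a minimal sketch of that idea only; it is not Designer's actual implementation:

```python
from collections import OrderedDict

class ElementCache:
    """A minimal LRU cache with a capacity limit."""
    def __init__(self, capacity=100):    # the documented default is 100
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, name):
        if name not in self._entries:
            return None                  # cache miss: fetch from the server
        self._entries.move_to_end(name)  # mark as recently used
        return self._entries[name]

    def put(self, name, element):
        self._entries[name] = element
        self._entries.move_to_end(name)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used

cache = ElementCache(capacity=2)
cache.put("folder:svcA", "a")
cache.put("folder:svcB", "b")
cache.get("folder:svcA")           # touch A so B becomes the LRU entry
cache.put("folder:svcC", "c")      # capacity exceeded: evicts folder:svcB
```

A larger capacity makes a hit more likely, but every cached entry stays in memory, which is why decreasing the number helps when memory is tight.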
Exporting Elements
Folders or elements in a package can be exported to your hard drive so that they can
be shared with partners or developers. A folder or element is exported to a ZIP file and
saved on your hard drive. The ZIP file can then be unzipped into the ns directory of a
package on the server. Locking information is not exported.
Note: The Export from Server option is not the same as the File > Export option. With
File > Export, you can export files from the Workbench to the file system.
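The export flow described above, zipping a folder so the archive can later be unzipped into a package's ns directory, can be sketched generically as follows. This is a plain illustration of the idea; Designer's actual export format may differ:

```python
import tempfile
import zipfile
from pathlib import Path

def export_folder(folder, zip_path):
    """Zip a folder's contents, preserving relative paths inside the
    archive so it can be unzipped at the destination later."""
    folder = Path(folder)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(folder.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(folder))

# Demonstrate with a throwaway folder standing in for a package folder.
src = Path(tempfile.mkdtemp())
(src / "myFolder").mkdir()
(src / "myFolder" / "node.ndf").write_text("<node/>")
archive = Path(tempfile.mkdtemp()) / "myFolder.zip"
export_folder(src, archive)
exported = zipfile.ZipFile(archive).namelist()
```

Note that, as the documentation states, locking information would not survive such an export; the archive carries only the files themselves.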
Note: You can create property templates for flow, C/C++, and Java services.
Field Description
Element Type: The element type for which you are creating the template.
Template Properties: Next to each property, specify the value you want to use in the
template. For each property, Designer displays a default value. You can edit the
fields and specify values for the properties. For more information about specifying
the properties, see "Properties" on page 1005.
Note: You will not be able to specify values for properties that must be unique for
each element, such as Universal name and Output template, when defining templates.
4. Click OK.
5. In the Preferences page, click OK.
Note: When you delete a template, the elements that use the deleted template will
be reset to use the default template.
You can limit access to elements to groups of users by using access control lists (ACLs).
Typically created by a system administrator, ACLs allow you to restrict access on
a broader level. For example, if you have a production package and a development
package on the Integration Server, you can restrict access to the production package to
users in an Administrators ACL, and restrict access to the development package to users
in a Developers ACL.
Within ACLs, you can also assign different levels of access, depending on the access that
you want different groups of users to have. For example, you may want a “Tester” ACL
to only have Read and Execute access to elements. Or, you may want a “Contractor”
ACL that denies List access to sensitive packages on the Integration Server, so that
contractors never see them in Designer.
What Is an ACL?
An ACL controls access to packages, folders, and other elements (such as services, IS
document types, and specifications) at the group level. An ACL identifies groups of
users that are allowed to access an element (Allowed Groups) and/or groups that are not
allowed to access an element (Denied Groups). When identifying Allowed Groups and
Denied Groups, you select from groups that you have defined previously.
There are four different kinds of access: List, Read, Write, and Execute.
List controls whether a user sees the existence of an element and its metadata; that is,
its input and output, settings, and ACL permissions. The element will be displayed
on screens in Designer and the Integration Server Administrator.
Read controls whether a user can view the source code and metadata of an element.
Write controls whether a user can update an element. This access also controls
whether a user can lock, rename, or delete an element or assign an ACL to it.
Execute controls whether a user can execute a service or a web service descriptor.
For more details about these types of access, see webMethods Integration Server
Administrator’s Guide.
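The Allowed Groups/Denied Groups model described above can be sketched as a simple membership check. In this illustration the assumption that a denial takes precedence over an allowance is part of the sketch, not a statement about the server's exact evaluation order:

```python
def acl_allows(user_groups, allowed_groups, denied_groups):
    """Grant access only if the user belongs to at least one allowed
    group and to no denied group (denial assumed to take precedence)."""
    groups = set(user_groups)
    if groups & set(denied_groups):
        return False
    return bool(groups & set(allowed_groups))

# A sample ACL in the spirit of the Developers/Contractors examples above.
allowed = ["Developers", "Administrators"]
denied = ["Contractors"]
```

Under this model a user in both Developers and Contractors is refused, and a user in neither list is refused as well.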
every time it is invoked, whether directly by a client or by another service. For details, see
"Assigning ACLs" on page 93.
The following diagram illustrates the points at which ACL checking occurs when a client
requests a service.
Stage Description
Purch:SendPO service to be an internally invoked service. The server does not
check the ACL of the Purch:SendPO service before executing it.
Note: Any service that the Purch:SubmitPO flow service invokes could also be
invoked directly by the client. For example, if the client directly invokes the
Purch:SendPO service, the server checks the ACL of the Purch:SendPO service. If
the client is invoking the service on behalf of a user that is a member of an
allowed group and not a member of a denied group, then the server executes
the Purch:SendPO service.
Creating ACLs
You create ACLs using the Integration Server Administrator. For details, see webMethods
Integration Server Administrator’s Guide.
Note: An element can inherit access from all elements except a package.
Assigning ACLs
You can assign an ACL to a package, folder, services, and other elements in the Package
Navigator view. Assigning an ACL restricts or allows access to an element for a group of
users.
Keep the following points in mind when assigning ACLs:
You can assign only one ACL per element.
You can only assign an ACL to an element for List, Read, or Write access if you are
a member of that ACL. For example, if you want to allow DevTeam1 to edit the
ProcessPO service, you must be a member of the DevTeam1 ACL. That is, your user
name must be a member of a group that is in the Allowed list of the DevTeam1 ACL.
The ACLs assigned to an element are mutually exclusive; that is, an element can
have different ACLs assigned for each level of access. For example, the following
element has the Developers ACL assigned for Read access and the Administrators
ACL assigned for Write access.
ACL usage is not required. For more information, see "Is ACL Usage Required?" on
page 92.
List ACL: The ACL whose allowed members can see that the element exists and view
the element’s metadata (input, output, etc.).
Read ACL: The ACL whose allowed members can view the source code and metadata
of the element.
Write ACL: The ACL whose allowed members can lock, check out, edit, rename, and
delete the element.
Execute ACL: The ACL whose allowed members can execute the service. This level of
access only applies to services and web service descriptors.
4. Under Enforce execute ACL, specify when Integration Server performs ACL checking.
Select one of the following:
When top-level service only: Integration Server performs ACL checking against the
service only when it is directly invoked from a client or DSP. For example, suppose
a client invokes the OrderParts service on server A. After checking port access,
server A checks the Execute ACL assigned to OrderParts to make sure the requesting
user is allowed to run the service. It does not check the Execute ACL when other
services invoke OrderParts.
5. Click OK.
Note: When an Integration Server has the VCS Integration feature enabled, an
element is locked when it is checked out of the version control system. With
the appropriate ACL permissions, you are able to check out (lock) and check
in (unlock) elements, folders and packages.
To debug a service by sending an XML file to a service, you must have Read access
to the service.
To set a breakpoint in a service, you must have Read access to the service.
ACL Information: Integration Server. The Server ACL Information page lists the ACLs
contained in the Integration Server to which you are connected. This information is
read-only; to edit ACLs, users, and groups, use the Integration Server Administrator.
I can’t see the source of a flow or Java service. However, I can see the input and output.
You do not have Read access to the service. Contact your server administrator to obtain
access.
I receive an exception when I try to lock an element.
The element may be locked by someone else, system locked (marked read only on the
server), or you may not have Write access. Refresh the Package Navigator view. If a
lock is not shown but you still cannot lock the element, reload the package. In addition,
make sure that you are a member of the ACL assigned for Write access to the element.
To verify, select the element and click File > Properties. Select Permissions in the Properties
for elementName dialog box.
I receive an error when I debug a service.
You must have a minimum of Read access to step through a service. If you don’t have
Read access to the service when you are stepping through, or stepping into a service,
you will receive an error message.
If you do have Read access to a service but you do not have Read access to a service
it invokes, Designer “steps over” the invoked service but does not display an error
message.
You must also have Read access to a service to set a breakpoint in the service or send an
XML document to the service.
I receive an exception when I try to go to a referenced service from the pipeline.
You do not have List access to the referenced service. Contact your server administrator.
I receive a “Couldn’t find in Package Navigator view” message when I try to find a service in the
Package Navigator view. However, I know it is on the Integration Server.
If you do not see the service listed in the Package Navigator view, you probably do not
have List access to that service. Contact your server administrator.
I can’t copy and paste a Java service.
Check to make sure that you have Write access to all Java services in the folder into
which you want to paste the service, as well as Write access to the folder itself.
Note: If you are using the local service development feature in Designer, the locking
mode must be set to system. To do so, set the watt.server.ns.lockingMode
parameter to system.
What Is a Lock?
A lock on an element prevents another user from editing that element. There are two
types of locks: user locks and system locks. When an element is locked by you, you have a
user lock. The element is read-only to all other users on the Integration Server. Another
user cannot edit the element until you unlock it.
When an element’s supporting files (node.xml, for example) are marked read-only on
the Integration Server, the element is system locked. For example, the server administrator
has the ability to mark an element’s supporting files on the server as read-only, in which
case they are system locked. To edit the element, you must ask the server administrator
to remove the system lock (that is, make the element’s files writable), and then you must
reload the package in which the element resides.
Elements are shown in the following ways in Designer’s Package Navigator:
When you lock an adapter notification, Designer also locks its associated publishable
document type. You cannot directly lock the publishable document type associated
with an adapter notification.
When you lock a folder or package, you only lock existing, unlocked elements within
it. Other users can still create new elements in that folder or package.
When you lock a Java or C/C++ service, Designer locks all other Java or C/C++
services within the folder. This means that other users cannot create Java and C/C++
services in a folder or package that contains the Java or C/C++ services. To create
these services, all existing Java and C/C++ services in the folder must be unlocked
and the user must have Write access to all Java and C/C++ services in the folder. For
details, see "Guidelines for Locking Java and C/C++ Services" on page 102.
You cannot lock a listener or connection element.
To lock an element
1. In the Package Navigator or in the editor, select the elements that you want to lock.
2. Right-click the elements and then click Lock.
If the elements were successfully locked, a green check mark appears next to their
icons in Package Navigator view. If one or more of the elements could not be locked
(for example, if they are system locked, locked by another user, or elements to which
you do not have Write access), Designer displays a dialog box listing them. For
information about troubleshooting lock problems, see "Lock and Unlock Problems"
on page 106.
by another user, overwriting that user’s changes to the service. Therefore, if you use
jcode, do not use the locking features in the Integration Server.
Before you save a Java or C/C++ service, multiple corresponding files must be writable
on the server. A single Java or C/C++ service corresponds to the following files:
.java
.class
.ndf
.frag (may not be present)
Before you save a Java or C/C++ service, all of the preceding files must be writable.
Therefore, make sure that all system locks are removed from those files before
saving.
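On the server's file system, a system lock corresponds to read-only files, so the check can be sketched in a shell. The package path and service name below are hypothetical placeholders, not paths mandated by Integration Server:

```shell
# Hypothetical location of the files that back a Java service named myService.
pkg_dir="/tmp/IntegrationServer/packages/DemoPkg/code/source/demo"
mkdir -p "$pkg_dir"
touch "$pkg_dir/myService.java" "$pkg_dir/myService.ndf"
chmod a-w "$pkg_dir"/myService.*          # simulate a system lock (read-only)

# Before saving the service, restore write permission on every backing file.
for f in "$pkg_dir"/myService.*; do
  [ -w "$f" ] || chmod u+w "$f"
done
```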
Important: Before you system lock an element, always verify that it is not locked by a
user on the Integration Server. If an element becomes system locked while a
user is editing it, the user will not know until he or she tries to save changes
to the element. If this occurs, make the element’s corresponding files writable
on the server. After this is done, the user can save his or her changes to the
element.
Note: System locking is not supported if you are running webMethods Integration
Server as root on a Unix system.
elements by clicking Unlock All. For more information about unlocking elements, see
"Unlocking Elements" on page 105.
Unlocking Elements
After you edit an element and save changes to the server, you should unlock it to make
it available to other users. There are several ways to unlock elements, depending on
whether you are a member of the Developers ACL or the Administrators ACL. If you are
a developer, you can unlock elements in Designer. If you are an administrator, you can
unlock elements in the Integration Server Administrator as well as in Designer.
Important: When an Integration Server has the VCS Integration feature enabled, the
Automatically unlock upon save preference must be disabled.
Troubleshooting
The following sections address common problems that may arise when implementing
cooperative development with webMethods components.
Save Problems
When I try to save an element that I have locked, I get an exception message.
During the time that you had the lock, the element became system-locked, its ACL
status changed, or a server administrator removed your lock and another user locked
the element. If the exception message indicates that a file is read-only, then one or all
of the files that pertain to that element on the server are system-locked. Contact your
administrator to remove the system lock. After this is done, you can save the element
and your changes will be incorporated.
If the exception message indicates that you cannot perform the action without ACL
privileges, then the ACL assigned to the element has been changed to an ACL in which
you are not an Allowed user. To preserve your changes to the element, contact your
server administrator to:
1. Remove your lock on the element.
2. Lock the element.
3. Edit the ACL assigned for Write access to the element, to give you access.
4. Unlock the element.
You can then save your changes to the element.
When I try to save a template, I get an error message.
The template file on the server is read-only. Contact your server administrator to make
the file writable.
Other Problems
I can’t delete a package.
One of the elements in that package is system-locked (read-only) or locked by another
user. Contact your administrator or contact the user who has the element locked in the
package.
The webMethods Integration Server went down while I was locking or unlocking an element.
The action may or may not have completed, depending on the exact moment at which
the server ceased action. When the server is back up, restore your session and look at the
current status of the element.
Can I select multiple elements to lock or unlock in Package Navigator view simultaneously?
Yes, you can select multiple elements to lock or unlock in the Package Navigator view of
Designer, as long as your selection does not contain the following:
A server
A folder or package and its contents
A package and any other element
An adapter notification record
You can also lock or unlock all elements in a package or folder that have not been
previously locked/unlocked by right-clicking the package or folder and selecting Lock or
Unlock.
Where is the lock information stored (such as names of elements that are locked, when they were
locked, etc.)?
If you are using the VCS Integration feature, lock information is stored internally
in Repository version 4 and in the VCS repository. If you are using local service
development, lock information is stored in the VCS repository only.
Important: It is not recommended that you use locking and unlocking functionality
in an Integration Server cluster. Locking information for elements could
be inadvertently shared with another Integration Server in the cluster.
Use a standalone Integration Server, not a cluster, during development to
eliminate these issues.
The local service development feature is a Designer feature that you can use to develop
Integration Server packages locally as Eclipse projects. With this feature, you can check
package elements and their supporting files in to and out of a version control system
(VCS) directly from Designer.
To connect Designer to a VCS client, the local service development feature uses the
following components:
A local development package, which is an Integration Server package that is
intended to be used with the local service development feature.
A local development project, which is an Eclipse project that contains the associated
local development package.
A local development server, which is an Integration Server instance that is installed
in the same installation directory as the Designer instance you are using.
Important: The WmVCS package, which provides the functionality for using the VCS
Integration Feature, is deprecated as of Integration Server version 9.9.
Software AG recommends that you use the local service development feature
How version control tasks are performed
Local service development: Locally, within the Eclipse framework (commands are
sent directly between Designer and the VCS client).
VCS Integration feature: Through Integration Server.
Menus and commands used
Local service development: Uses the VCS client’s menus and commands, which may
already be familiar to you.
VCS Integration feature: Uses its own commands to access the VCS client, which
may require extra time to learn.
Team Foundation Server: Team Foundation Server plug-in for Eclipse Version 14.x
Supported Elements
The local service development feature works with all of the packages and IS elements
displayed in the Package Navigator view, as well as package contents that are not visible
in the Package Navigator view, such as supporting files associated with the folder or
element.
Note: The local service development feature works with the local development
server only. The feature does not work with packages in other servers listed in
the Package Navigator view.
Prerequisites
Before you use the local service development feature, you must:
Ensure that Integration Server is installed in the same installation directory as the
Designer instance you are using. If you selected Local Version Control Integration (or
Designer Workstation in versions prior to 9.8) from the Software AG Installer, this was
already done for you.
Ensure that the watt.server.ns.lockingMode parameter is set to system on the
Integration Server used as the local development server. If you set any other value
for this parameter, the Local Service Development feature may not work as expected.
For more information about this parameter, see webMethods Integration Server
Administrator’s Guide.
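On a default server instance, this is a single extended-settings entry in the server configuration file; the path below is the usual default-instance location and may differ in your installation:

```
# Integration Server_directory/instances/default/config/server.cnf
watt.server.ns.lockingMode=system
```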
mount the corresponding host directory of the local Git repository or the TFS project
directory in the Docker container. Consider the following sample startup command:
docker run -i -t -d \
  --name [container_name] \
  -v [container_named_volume]:[Install_Dir]/IntegrationServer/instances/[instance_Name] \
  -v [host_git_project_location]:/[container_git_project_location] \
  -p [host_primary_port:]primary_port \
  [image_name] \
  /bin/bash
Permissions
Access control lists (ACLs) determine the level of access to packages, folders, and other
elements (such as services, IS document types, and specifications) at the group level.
ACL settings are stored on the local development Integration Server, not with the
elements themselves. This means that ACL information does not get checked in to the
VCS repository when you check in an element. When another user checks the element
out of the VCS repository, that user’s local development server uses the default ACL
to determine access to that element. You can preserve ACL settings for an element by
deploying the element from the local development server and then setting the element’s
ACL settings manually on the production server. For more information about ACLs, see
"Assigning and Managing Permissions for Elements" on page 89.
Note: On the Integration Server used as the local development server, the
watt.server.ns.lockingMode parameter must be set to system. If you set any
other value for this parameter, the Local Service Development feature may not
work as expected. For more information about this parameter, see webMethods
Integration Server Administrator’s Guide.
Note: If you are using Team Foundation Server as your VCS client, Integration
Server system locks all elements and marks them as read-only. To unlock an
element in preparation for editing it, select the Check out for edit option from
the Team menu.
The local service development feature works with the local development server only.
By default, a new Designer installation includes a single server definition named
Default. This server is marked as the default server and is configured to use
localhost:5555.
If your Designer installation needs to connect to more Integration Servers, you can
configure additional Integration Server instances on the Window > Preferences > Software
AG > Integration Servers page. If there are multiple Integration Server instances configured,
only one instance will be active at a time. The default Integration Server instance will
be treated as the default local development server. However, you can set any of the
available local Integration Server instances as the local development server. Perform the
following step to set an Integration Server instance as the local development server.
In Package Navigator view, right-click the Integration Server instance that you want
to use as a local development server and select Use as Local Version Control Integration
Server.
Note: The Use as Local Version Control Integration Server option is available only if
the selected Integration Server is connected and is installed in the same
installation directory as the Designer instance.
Setting an Integration Server or Microservices Container Running Inside a Docker Container as the
Local Development Server
To set up an Integration Server or Microservices Container running inside a Docker
container as the local development server, ensure that the following prerequisites are met:
Docker and Designer reside on the same host.
The Docker daemon is up and listens directly for Docker Remote API requests on a
TCP socket, without authentication or encryption.
Root storage area of the server instance in the Docker container is available on the
host.
To do this, set -Dtarget.configuration=localdev when creating a Dockerfile
to build the Docker image for Integration Server or Microservices Container. This
configuration creates a mount point using the volume instructions in the Dockerfile.
For information on creating a Docker file and Docker image, see webMethods
Integration Server Administrator’s Guide.
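For the second prerequisite above, one common way to expose the Docker Remote API over an unauthenticated TCP socket is an entry in the Docker daemon configuration file (typically /etc/docker/daemon.json); the port shown is the conventional default, and an unauthenticated socket should be used only on a trusted development host:

```
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
```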
The volume is mounted as a named volume while starting up the container.
For example, consider the following sample container startup command:
docker run -i -t -d \
  --name [container_name] \
  -v [container_named_volume]:[Install_Dir]/IntegrationServer/instances/[instance_Name] \
  -p [host_primary_port:]primary_port \
  [image_name] \
  /bin/bash
Designer creates a project in your workspace with the same name as the package.
In Package Navigator view, the icon representing the package that you have shared
to the VCS changes to indicate that the package is shared. The package and
the elements contained in the package will now be available in the VCS. Designer
displays icon overlays that are specific to your VCS for the files in the shared
package in Package Navigator view. These icon overlays indicate the VCS status of
the files in your workspace.
5. If you are using Team Foundation Server as your VCS client, you must do the
following after the project is created:
a. Add the content of the project to the Team Foundation Server repository. To
do this, right-click the project in Package Explorer view or Navigator view and
select Team > Check In Pending Changes.
b. Set the Team Foundation Server working folder to any system location. To set the
Integration Server_directory \instances\default\packages directory as the working
directory, clear the Move project to Integration Server package as linked resource check
box in Window > Preferences > Software AG > Service Development > Local Service
Development. To do this, from the Team Explorer view, open the Source Control
editor. Right-click the local service development project in the Source Control
editor, and select Set Working Folder.
Note: You can also add any file that is outside the package namespace structure
or that is not an IS asset (that is, it does not appear in the Package
Navigator view of Designer). For example, an HTML file (an output template
file for a service) in the pub folder of a package. To do this, right-click
the files in Navigator view, select Team, and select the check in or commit
option specific to your VCS client.
Note: You can also right-click the checked-out package, folder, or element and
select Show Files. You can select multiple folders or elements by pressing
the CTRL key while selecting. Designer highlights all the server files
associated with the package, folder, or element in the Navigator view.
Right-click these files in Navigator view and check in or commit these files
to the VCS using the option that is specific to your VCS client, available in
the Team menu.
If you are checking in a package or folder, all the contents within it are also checked
in.
The element(s) that are checked out are now available for editing in your Eclipse
workspace.
Note: If you are checking out a package that you have not checked in
previously, you must move this package from your workspace to the
Integration Server_directory\instances\default\packages directory of your
local service development Integration Server.
Note: You can also right-click the package, folder, or element that you want
to check in and select Show Files. You can select multiple folders or
elements by pressing the CTRL key while selecting. Designer highlights
all the server files associated with the package, folder, or element in the
Navigator view. Right-click these files in Navigator view and, from the
Team menu, select the appropriate option to check in the files.
2. Designer displays the Progress Monitor dialog box. Click Run in Background to
continue working in Designer.
Note: You can also check in any folders or elements that are outside the package
namespace structure or files that are not IS assets (that is, they do not
appear in the Package Navigator view of Designer). To do this, right-click
the files in Navigator view, select Team, and select the check in or commit
option specific to your VCS client.
To get the latest version of a package, folder, or element from the VCS
1. If any of the elements on which you are performing the get latest operation are
open in an editor, close that editor.
2. In Package Navigator view, right-click the package, folder, or element for which you
want to retrieve the latest version and select Team > Get Latest Version.
Note: The menu option Get Latest Version might differ depending on the VCS
client that you use.
Designer displays the Progress Monitor dialog box. Click Run in Background to
continue working in Designer.
Designer retrieves the latest version of the package, folder, or element and displays a
confirmation message.
Note: Designer automatically rebuilds the Java and C/C++ services in a local
service development package, when you get the latest version of the
package from the VCS, if you have the Build Automatically workspace
preference (Project > Build Automatically) selected.
Note: Designer automatically rebuilds the Java and C/C++ services in a local service
development project or package, when you retrieve a specific version of the
project or package from the VCS, if you have the Build Automatically workspace
preference (Project > Build Automatically) selected.
directory, if you have the Build Automatically workspace preference (Project > Build
Automatically) selected.
Reloading a Package
If you make any changes to the package and/or its contents in your workspace, you
must reload the package on the local development server to activate the changes and
to make sure that the changes are reflected in the Integration Server_directory\instances
\default\packages directory.
To reload a package
1. In Navigator view, right-click the package that you need to reload and select Load IS
Package.
2. If you need to replace the package in the Integration Server_directory \instances
\default\packages directory, right-click the package in Navigator view and select
Reload IS Package.
If the package that you copy from the VCS repository is not enabled, Designer
displays a message informing you that the package is in a disabled state. Use
Integration Server Administrator to enable the package. For more information about
enabling a package, see webMethods Integration Server Administrator’s Guide.
Note: Designer automatically rebuilds the Java and C/C++ services in a local
service development package, when you reload the package on the
local development server, if you have the Build Automatically workspace
preference (Project > Build Automatically) selected.
Important: Revision compare is currently supported only for document types and flow
services that are part of a local service development project.
Keep the following points in mind when you use revision compare:
The Compare Element(s) With > Revision option is available for a document type or flow
service in a local service development project only if:
You have checked out the element to the Project Explorer; and
You have reloaded the element on the local development server to make sure that
the changes are reflected in the Integration Server_directory \instances\default
\packages directory.
Revision compare is supported for SVN (Polarion Software Subversive SVN) and Git
VCS clients.
To compare the local service development project revision of an element with a specified revision of
the element
1. In the Package Navigator view, select an element in a local service development
project, right-click, and select Compare Element(s) With > Revision.
2. In the dialog box that opens, specify the revision of the element that you want to
compare with by choosing one of the following options:
Select the Head Revision.
Specify the Date and time values corresponding to the revision.
Specify a Revision number or use the Browse button to browse the VCS and select
a revision of the element.
3. Click OK to display the list of differences in a compare editor. For information on the
compare editor, see "Working with the Compare Editor" on page 974.
Designer automatically rebuilds the Java and C/C++ services in a local service
development package when you perform any VCS operations on the package or project
on the local development server.
Before checking out Java or C/C++ services, you must:
Ensure that the system environment variable PATH is set to include
JDK_directory\bin.
--OR--
Ensure that the watt.server.compile property is set to JDK_directory\bin\javac
-classpath {0} -d {1} {2} in the config.ini file that is located in the
Software AG_directory\eclipse\v36\configuration directory.
For example,
watt.server.compile=C\:\\softwareag\\jvm\\jvm160_32\\bin\\javac
-classpath {0} -d {1} {2}
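The PATH alternative can be sketched in a shell profile as follows; the JDK location here is a placeholder assumption, so substitute your actual JDK directory:

```shell
# Put the JDK's bin directory (which contains javac) on PATH.
JDK_HOME="${JDK_HOME:-/opt/jdk}"   # hypothetical JDK install directory
export PATH="$JDK_HOME/bin:$PATH"

# javac should now resolve from the JDK rather than from a JRE-only install.
command -v javac >/dev/null || echo "javac not found - check JDK_HOME"
```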
To ensure that Designer automatically builds the Java and C/C++ services, you must do
the following:
Add VCS project nature to the packages that contain the Java or C/C++ services, if
these packages are not created in the local Integration Server but are available in the
VCS repository. To do this, right-click the package in the Package Explorer view and
select Configure > Convert to Local Service Development Project. This also adds the Java
project nature to the package.
Select the Build Automatically workspace preference. To do this, select Project > Build
Automatically.
Upon building a Java service, Designer creates the .java and .class files. In the case of
C/C++ services, Designer generates the Java class associated with the C/C++ service.
Important: When disconnecting a local service development project from the VCS,
any pending changes you have in this project will be lost. If you want
your changes to be updated in the VCS repository, make sure that you
check in your pending changes to the VCS before disconnecting the
project from the VCS.
Note: If you want to connect to a VCS again, you need to share the project to the
VCS by right-clicking the project in Package Explorer view or Navigator
view and clicking Team > Share Project.
4. To delete the local service development project, right-click the project in Navigator
view and select Delete.
5. Click OK to confirm the deletion of the project.
The local service development project is deleted, but the associated
package will still be available in the Package Navigator view and in the
Integration Server_directory\instances\default\packages directory.
Designer enables you to create, maintain, and manage custom integration packages
for use by the webMethods Integration Server. Often, many enterprise organizations
employ a version control system (VCS) for the development of software solutions,
providing automatic auditing, versioning, and security to software development
projects. Such products include Microsoft Visual SourceSafe and IBM Rational
ClearCase.
With the VCS Integration feature installed in your development environment, you can
check packages or elements in to and out of your version control system (VCS). For
example, to modify a flow service element, you would:
1. Check out the flow service. This automatically checks out its supporting files
(node.ndf and flow.xml).
2. Modify the flow service in Designer and save the changes.
3. Check in the flow service element. This also checks in the node.ndf and flow.xml files
and makes the files read-only when they are checked in.
Alternatively, if you want to work on other elements in addition to the flow service, you
can check out the entire package.
For information about configuring VCS to work with Integration Server, see Configuring
the VCS Integration Feature.
Note: The VCS Integration feature provides functionality similar to that of local
service development. However, the VCS Integration feature and local
service development are not the same. For information about how the VCS
Integration feature compares to local service development, see "How Does
Local Service Development Differ from the VCS Integration Feature?" on page
112.
Important: The WmVCS package, which provides the functionality for using the VCS
Integration Feature, is deprecated as of Integration Server version 9.9.
Software AG recommends that you use the local service development feature
(Local Version Control Integration) to check package elements and their
supporting files into and out of a version control system (VCS) directly from
Designer.
Revert changes
Delete
Get latest version
Get earlier version
View history
Note: The VCS Integration feature automatically adds C/C++ services to the VCS
when they are created, but thereafter, you must check in and check out the
supporting files for C/C++ services manually using the VCS client.
If you check out … Integration Server checks out these files for you…
A package or a folder All of the folders and elements within the package or
folder, along with their supporting files.
Important: The VCS Integration feature requires that you disable the Automatically Unlock
Upon Save option in Designer, if you have implemented it. For information
on disabling this option, see "Automatically Unlocking an Element Upon
Saving" on page 106.
Although the Check Out command can be applied at the package and folder level,
the check mark is not applied to package and folder icons, only to the checked out
elements within the package or folder.
Although folders are never shown as checked out in the Package Navigator view,
you can apply the Revert Changes command to a folder to revert changes to all of the
elements in the folder’s hierarchy.
Reverting changes to a newly created element (prior to initial check in) may cause
the element to be corrupted. Software AG recommends that you check in a new
element immediately after creation and then check it out again before you enter any
modifications.
When you apply the Revert Changes command to a package, or to a folder in a
package, all of the supported elements within the container’s hierarchy that are
currently checked out are reverted.
The Revert Changes command reloads the entire package containing the element; this
may cause sessions currently using services in the package to fail.
Note: When you work with a package with many elements, it may take a
significant amount of time to check the package in or out. Designer will
not be available during the check in or check out procedure.
command for Dynamic Views because Dynamic Views always contain the latest
version.
Some folders or elements might be deleted after you apply the Get Latest Version
command. This will occur when there is no current version of that folder or element
(that is, it has been deleted from the VCS repository since the last update).
The Get Latest Version command reloads the entire package containing the element.
This might cause sessions currently using services in the package to fail.
Note: If a folder contains locked non-Java elements and you apply the Get Latest
Version command to an unlocked Java element in that folder, the error
message “Subnode(s) checked out” displays. Check in all elements in the
folder and try again.
For ClearCase, the Get Earlier Version command loads the version of the branch
indicated by the ClearCase Branch Name field in Integration Server Administrator
(see Configuring the VCS Integration Feature). If no branch is identified in that field,
ClearCase loads the main ClearCase branch. Do not use the Get Earlier Version
command for Dynamic Views because Dynamic Views always contain the latest
version.
Some folders or elements might be deleted after you apply the Get Earlier Version
command. This will occur when there is no earlier version of that folder or element
(that is, it has been added to the VCS repository since the creation of the version
being retrieved).
When you apply the Get Earlier Version command to a Java service, the earlier version
will be loaded for all Java services in that folder, as well as for all folders and
elements contained in the folder.
The Get Earlier Version command also reloads the entire package containing the
element. This might cause sessions currently using services in the package to fail.
Select... To...
Date Get an earlier version by specifying its date and time. For example,
01/13/06 14:56 signifies January 13, 2006, 14:56 hours. You must type
the date into the Date field in this format.
Do not include the time zone (for example, EST) when typing the date
and time. The precision of the specified time (that is, whether minutes
and seconds are accepted, or minutes only) is determined by the time
format of the VCS application. For example, Visual SourceSafe dates
files with minutes only.
Label Get an earlier version by providing the VCS label (Visual SourceSafe
and ClearCase) or to get an earlier version by providing a version
number (ClearCase).
In the field next to Label, enter the VCS label text or version number.
3. Click OK. All supported and checked in elements are updated to the specified
version in the VCS repository.
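As a quick sanity check of the Date format described above, GNU date (assumed available; it is not part of the product) can render a timestamp in exactly the MM/DD/YY HH:MM form the Date field expects:

```shell
# Render January 13, 2006, 14:56 in the format the Date field expects.
# Uses the GNU date -d option; BSD/macOS date needs different flags.
date -d "2006-01-13 14:56" +"%m/%d/%y %H:%M"   # prints: 01/13/06 14:56
```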
Notes:
If a folder contains locked elements that are not Java services and you apply the Get
Earlier Version command to an unlocked Java service in that folder, the error message
“Subnode(s) checked out” displays. Check in all elements in the folder and try again.
If you edit an element that you retrieved using the Get Earlier Version command, the
VCS Integration feature will not view it as the most current version and therefore
will not allow you to check it in. When you attempt to check in the element, an “out
of date” error message displays. Update the element to the latest version before
applying your changes.
Visual SourceSafe and ClearCase do not support entry of a delete comment. The VCS
history will show only the VCS user name and the time of deletion.
Note: Do not apply the Delete command to any earlier version packages or elements
that are checked out, as unpredictable results may occur.
To delete a package, folder, or element from both Integration Server and the VCS
1. In Package Navigator view, select the package, folder, or element you want to delete.
2. Select Edit > Delete.
The Delete Confirmation dialog box appears. If you are deleting a publishable
document type, Designer prompts you to delete the associated Broker document
type as well. For more information about deleting Broker document types, see
"Deleting Elements" on page 70.
3. Click OK.
If a VCS package or element is checked out by another user, the package or element
will not be deleted, and an error message appears. In addition, any parent folders
leading to the checked out element will not be deleted.
When deleting a package, a message box appears stating that the deletion is
complete and that the deleted package has been copied to the recovery area of the
Integration Server. If this message appears, click OK.
Note: Packages are best restored with the Recover Packages feature of the
Integration Server Administrator (only administrator users can recover a
package).
To restore a folder or element that has been deleted from both Integration Server and the VCS
1. With your VCS client, restore the folder or element in the VCS repository.
2. Using the VCS client, check out the restored element to its original location in the ..
\packages\ns directory. You may receive a message that the folder or element does
not exist, with a request to create the folder or element. If so, respond so that all
folders, elements, and files are created.
3. In Package Navigator, right-click the package that contained the deleted element and
select Reload Package. This displays the restored element in the Package Navigator
view.
Note: At this point, although the restored element is in a checked out state on
the VCS server, it does not display the checked out symbol in the Package
Navigator view.
4. In Package Navigator view, right-click the restored element and select Check In.
Important: When you move or rename a folder or element, you are effectively creating a
new entity in the VCS repository. Therefore, the previous folder or element
is deleted and a new entity is created. This means that a new revision history
is started for the moved or renamed entity as well. If you want to view
previous revision information for a moved or renamed folder, or element,
you must locate the deleted version of the file in the VCS repository and
view the revision history there.
Because the default behavior of the VCS Integration feature is to add any new folder
or element to the VCS repository, a copied or moved folder or element automatically
appears in the VCS repository in its new location. In the case of a moved item, the
previous location is deleted from both Designer and the VCS repository.
A copied folder or element is always placed in a checked out state in its new location. A
moved folder or element retains its checked in or checked out state; special conditions
apply when you copy or move a coded service. For more information on copying and
moving coded services, see "Working with Java Services" on page 150.
When you move an element that has dependents, the dependents will not be updated
until you check in the moved element. This may cause failure of any services that use the
dependent elements.
Note: Technically, folders are not checked in or checked out of the VCS repository;
it is actually the elements within the folder that are checked in or out. When
viewing the history for a folder, you are actually viewing the history of the
node.idf file within the folder.
When you move or rename a folder or element, the previous folder or element is deleted
and a new entity is created. This means that a new revision history is started for the
moved or renamed entity as well. If you want to view previous revision information for
a moved or renamed folder, or element, you must locate the deleted version of the file in
the VCS repository and view the revision history there.
Information Definition
User The VCS user account name under which the revision was
executed.
Checked In The full VCS project path for the checked in file.
Comment Text entered by the user at check in time. This may contain no
text if the user did not enter a comment.
dev_user The Designer user under which the revision was executed.
is_time The date and time applied to the revision by Integration Server.
Label The VCS label applied to the file. If no VCS label exists, this entry
is not displayed.
Label Comment Text entered by the user when the label was applied. This may
contain no text if the user did not enter a comment.
Note: Because the adapter connection appears within a package, there will be a
corresponding folder created within the VCS repository. However, this folder
contains only the *.ndf file that defines the folder; no adapter connection files
are placed there.
Also note that a publishable document type for an adapter notification cannot be
directly checked in to and out of the VCS repository. It is automatically checked in or out
when you use the VCS client to manually check in or check out its associated adapter
notification.
Note: When you move a Java service, its VCS revision history is reset. To retrieve
the earlier information, you must find the previous version of the file as a
deleted item in its previous VCS location and view the revision history there.
The files supporting a Java service are stored in two locations within the package
directory structure in the VCS repository:
..\package\code\source
..\package\ns\folderName\JavaServiceNameFolder
The files in these directories must have the same label for the entire Java service to be
retrieved by label.
To minimize any problems, Software AG recommends that you apply the version label
at the package level, thereby including all folders and elements within the package
hierarchy.
8 Managing Packages
■ Creating a Package ................................................................................................................... 154
■ Documenting a Package ............................................................................................................ 155
■ Viewing Package Settings, Version Number, and Patch History ............................................... 156
■ Assigning a Version Number to a Package ............................................................................... 157
■ About Copying Packages Between Servers .............................................................................. 158
■ Reloading a Package ................................................................................................................. 160
■ Comparing Packages ................................................................................................................. 160
■ Deleting a Package .................................................................................................................... 160
■ Exporting a Package .................................................................................................................. 161
■ About Package Dependencies ................................................................................................... 161
■ Assigning Startup, Shutdown, and Replication Services to a Package ..................................... 164
A package is a container that is used to bundle services and related elements, such as
specifications, IS document types, IS schemas, and triggers. When you create a folder,
service, IS document type, or any element, you save it in a package.
Packages are designed to hold all of the components of a logical unit in an integration
solution. For example, you might group all the services and files specific to a particular
marketplace in a single package. By grouping these components into a single package,
you can easily manipulate them as a unit. For example, you can copy, reload, distribute,
or delete the set of components (the “package”) with a single action.
Although you can group services using any package structure that suits your purpose,
most sites organize their packages by function or application. For example, they might
put all purchasing-related services in a package called “PurchaseOrderMgt” and all
time-reporting services into a package called “TimeCards.”
On the server, a package represents a subdirectory within the
IntegrationServer_directory\instances\instance_name\packages directory. All the
components that belong to a package reside in the package’s subdirectory.
Creating a Package
When you want to create a new grouping for services and related files, create a package.
When you create a package, Designer creates a new subdirectory for the package in the
file system on the machine where the Integration Server is installed. For information
about the subdirectory and its contents, see webMethods Integration Server Administrator’s
Guide.
To create a package
1. In Designer: File > New > Package
2. In New Integration Server Package dialog box, select the Integration Server on which
you want to create the package.
3. In the Name field, type the name for the new package using any combination
of letters, numbers, and the underscore character. For more information, see
"Guidelines for Naming Packages" on page 154.
4. Click Finish. Designer refreshes the Package Navigator view and displays the new
package.
Make sure the package name describes the functionality and purpose of the services
it contains.
Avoid creating package names with random capitalization (for example,
cOOLPkgTest).
Avoid using articles (for example, “a,” “an,” and “the”) in the package name. For
example, instead of TestTheService, use TestService.
Do not use the prefix “Wm” in any case combination. Integration Server and
Designer use the “Wm” prefix for predefined packages that contain services, IS
document types, and other files. Additionally, custom packages with a “Wm” prefix
can be problematic when deploying the packages using Deployer.
Avoid using control characters and special characters like periods (.) in a package
name. The watt.server.illegalNSChars setting in the server.cnf file (which is located
in the IntegrationServer_directory\instances\instance_name\config directory) defines
all the characters that you cannot use when naming packages. Additionally, the
operating system on which you run the Integration Server might have specific
requirements that limit package names.
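The naming guidelines above can be expressed as a simple pre-flight check. The sketch below is illustrative only: the authoritative list of illegal characters comes from the watt.server.illegalNSChars setting, and the class and method names used here (PackageNameValidator, isValid) are hypothetical, not part of any webMethods API.

```java
// Hypothetical pre-flight check for the package-naming guidelines above.
// The authoritative illegal-character list comes from the
// watt.server.illegalNSChars setting; this sketch hard-codes a minimal
// rule set (letters, digits, underscores) for illustration only.
public class PackageNameValidator {

    // Letters, numbers, and the underscore character only.
    private static final java.util.regex.Pattern ALLOWED =
            java.util.regex.Pattern.compile("[A-Za-z0-9_]+");

    public static boolean isValid(String name) {
        if (name == null || name.isEmpty()) return false;
        // Reject the reserved "Wm" prefix in any case combination.
        if (name.toLowerCase().startsWith("wm")) return false;
        // Reject control characters and special characters such as periods.
        return ALLOWED.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("PurchaseOrderMgt")); // prints "true"
        System.out.println(isValid("WmCustom"));         // prints "false": reserved prefix
        System.out.println(isValid("Order.Mgt"));        // prints "false": illegal character
    }
}
```

A check like this would run before creating the package; it deliberately says nothing about the additional operating-system restrictions mentioned above.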
Documenting a Package
You can communicate the purpose and function of a package and its services to other
developers by documenting the package.
serverName:port is the name and port address of Integration Server on which
the package resides.
DocumentName is the name of the web document you want to access. If you do
not specify a DocumentName, Integration Server automatically
displays the index.html file.
The Package Settings page displays the version and patch history for the package
since the last full release of the package. (A full release of a package incorporates all
previous patches for the package.) For more information about package settings, see
"Package Properties" on page 1009.
Note: When the server administrator installs a full release of a package (a release
that includes all previous patches for the package), the Integration Server
removes the existing patch history. This helps the server administrator avoid
potential confusion about version numbers and re-establish a baseline for
package version numbers.
5. Click OK.
Copying Packages
When copying packages, keep the following points in mind:
You can copy a package to a different server only if you are a member of a group
assigned to the Replicators ACL on the source and destination servers and you are
logged on to both servers.
Before you copy a package that contains elements with unsaved changes, you must
save the changes.
You cannot undo a copy action using the Edit > Undo command.
If you copy a package that depends on other packages to load (that is, the package
has package dependencies), and the required packages are not present on the
destination server, the package will be copied but it will not be enabled.
You cannot copy a package to another server if the destination server already
contains a package with that name.
Note: Because UNIX directories are case sensitive, Integration Servers running
in a UNIX environment will allow packages whose names differ only in case to
reside on the same server. For example, you can copy a package named
orderProcessing to a server that contains a package named OrderProcessing.
When you copy a package from another Integration Server, it is possible that an
HTTP URL alias associated with the new package has the same name as an HTTP
URL alias already defined on your Integration Server. If Integration Server detects a
duplicate alias name, it will write a message to the server.log.
When you copy a package from another Integration Server, it is possible that a port
alias for a port in the new package has the same alias as a port already defined on
your Integration Server. A port alias must be unique across the Integration Server.
If Integration Server detects a duplicate port alias, it will not create the port and will
write the following warning to the server.log:
[ISS.0070.0030W] Duplicate alias duplicateAliasName encountered creating protocol
listener on port portNumber
Note: If you want the port to be created when the package is loaded, use
Integration Server Administrator to delete the existing port with that alias,
create a new port that has the same properties as the just deleted port,
and then reload the package containing the port with the duplicate alias.
Integration Server creates the port when the package is reloaded.
When you copy a package from a version of Integration Server prior to version 9.5
SP1 to an Integration Server version 9.5 SP1, Integration Server creates an alias for
each port associated with the package. Integration Server assigns each port an alias.
For information about the naming conventions used by Integration Server, see the
webMethods Integration Server Administrator’s Guide.
If the package you are copying is associated with an e-mail listener, Integration
Server will install the package but will not enable the listener. This is because the
password required for the Integration Server to connect to the e-mail server was
not sent with other configuration information about the listener. To enable the
listener, go to the Security > Ports > Edit E-mail Client Configuration Screen and update the
Password field to specify the password needed to connect to the e-mail server.
Reloading a Package
Sometimes, you need to reload a package on the server to activate changes that have
been made to it outside of Designer. You need to reload a package if any of the following
occurs:
A Java service that was compiled using jcode is added to the package.
New jar files are added to the package.
Any of the configuration files for the package are modified.
Note: Reloading a package is not the same as refreshing the Package Navigator
view. When you refresh the Package Navigator view, Designer retrieves
a fresh copy of the contents of all the packages from the memory of the
Integration Server. When you reload a package, Integration Server removes
the existing package information from memory and loads new versions of the
package and its contents into its memory.
To reload a package
1. In Package Navigator view, select the package you want to reload.
2. Right-click the package and click Reload Package.
Comparing Packages
You can use the compare tool to compare two packages on the same server or on
different servers. For more information, see "Comparing Integration Server Packages
and Elements" on page 973.
Deleting a Package
When you no longer need the services and files in a package, you can delete the package.
Deleting a package removes the package and all of its contents from the Package
Navigator view.
When you delete a package from Designer, Integration Server saves a copy of the
package. If you later want to recover the package and its contents, contact your server
administrator. Only Integration Server Administrator users can recover a package.
For more information about recovering packages, see webMethods Integration Server
Administrator’s Guide.
Before you delete a package, make sure that:
Other users or other services do not use (depend on) the services, templates, IS
document types, and schemas in the package. You can use the Package Dependencies
option to identify other services that are dependent on a service in a package that
you want to delete. For more information, see "Identifying Package Dependencies"
on page 162.
All elements in the package that you want to delete are unlocked, or locked by you.
If the package contains elements that are locked by others or system locked, you
cannot delete the package.
To delete a package
1. In Package Navigator view, select the package you want to delete.
2. Click Edit > Delete.
Exporting a Package
Packages can be exported to your hard drive so that they can be shared with partners or
developers. You can install an exported package on another server by using the package
publishing functionality in the Integration Server Administrator. Locking information is
not exported.
To export a package
1. In Package Navigator view, select the package you want to export to your hard
drive.
2. Right-click the package and click Export from Server.
3. In the Save As dialog box, select the location on your hard drive where you want the
exported package to reside. Click Save.
This exports the package to a ZIP file and saves it on your hard drive. The ZIP file
can then be published on another server.
Note: The Export from Server option is not the same as the File > Export option. With
File > Export, you can export files from the Workbench to the file system.
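An exported package is saved as a ZIP archive of the package's files. As a rough illustration of producing such an archive (this is a generic sketch using the standard java.util.zip API, not the actual Designer export implementation; the file names are examples only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Generic illustration of what an export produces: a ZIP archive of the
// package directory. Not the actual Designer implementation.
public class PackageZipper {

    public static void zipDirectory(Path sourceDir, Path zipFile) throws IOException {
        List<Path> files;
        try (var walk = Files.walk(sourceDir)) {
            files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            for (Path p : files) {
                // Store entries relative to the package root, with '/' separators.
                zos.putNextEntry(new ZipEntry(
                        sourceDir.relativize(p).toString().replace('\\', '/')));
                Files.copy(p, zos);
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path pkg = Files.createTempDirectory("MyPackage");
        Files.writeString(pkg.resolve("example.ndf"), "<Values/>"); // example file only
        Path zip = pkg.resolveSibling("MyPackage.zip");
        zipDirectory(pkg, zip);
        System.out.println(Files.exists(zip)); // prints "true"
    }
}
```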
Important: Other webMethods components might include packages that register new
types of elements in Designer. You should save instances of these new
element types in packages that list the registering package as a package
dependency. The registering package needs to load before your packages so
that Designer can recognize instances of the new element type. For example,
if you create new flat file schemas, you must save the flat file schemas in
packages that identify the WmFlatFile package as a package dependency.
Package The name of the package you want Integration Server to load
prior to loading the package selected in the Package Navigator
6. Click OK.
7. Click OK in the Properties for PackageName dialog box.
Note: A service that you just created does not appear in the Available Services list
if you have not refreshed your session on the server since you created the
service.
5. Click OK.
Note: The term replication service does not refer to the services contained in
pub.replicator or to services that subscribe to replication events (replication event
services).
9 Building Services
■ A Process Overview ................................................................................................................... 170
■ Package and Folder Requirements ........................................................................................... 171
■ About the Service Signature ...................................................................................................... 172
■ About Service Run-Time Parameters ........................................................................................ 177
■ About Automatic Service Retry .................................................................................................. 192
■ About Service Auditing ............................................................................................................... 194
■ Using a Circuit Breaker with a Service ...................................................................................... 201
■ About Universal Names for Services or Document Types ......................................................... 205
■ About Service Output Templates ............................................................................................... 210
■ Printing a Flow Service .............................................................................................................. 212
■ Comparing Flow Services .......................................................................................................... 212
Services are method-like units of logic that operate on documents. They are executed
by Integration Server. You build services to carry out work such as extracting data
from documents, interacting with back-end resources (for example, submitting a query
to a database or executing a transaction on a mainframe computer), and publishing
documents to the Broker. Integration Server is installed with an extensive library of
built-in services for performing common integration tasks. Adapters and other add-on
packages provide additional services that you use to interact with specific resources or
applications. The webMethods graphical implementation language, flow, enables you to
quickly aggregate services into powerful sequences called flow services.
A Process Overview
Building a service is a process that involves the following basic stages:
Stage 8 Debugging.
During this stage you can use the tools provided by Designer to run
and debug your flow service. For information about this stage, see
"Running Services" on page 435, "Debugging Flow Services" on page
461, and "Debugging Java Services" on page 487.
Note: You can create templates with a set of pre-defined values for element
properties. You can then apply the template when creating new instances of
the element instead of setting the properties each time you create an element.
For more information about the element property templates, see "Using
Property Templates with Elements" on page 84.
Once the package and folder are in place, use the File > New command to start the process
of creating a new service. For details, see "Creating New Elements" on page 54.
OrderTotal String
Although you are not required to declare input and output parameters for a service (the
Integration Server will execute a service regardless of whether it has a specification or
not), there are good reasons to do so:
Declaring parameters makes the service’s input and outputs visible to Designer.
Without declared input and output parameters, you cannot:
Link data to and/or from the service using the Pipeline view.
Assign default input values to the service on the Pipeline view.
Validate the input and output values of the service at run time.
Log the input and output document fields of the service.
Run or debug the service in Designer and enter initial input values.
Generate skeleton code for invoking the service from a client.
Declaring parameters makes the input and output requirements of your service
known to other developers who may want to call your service from their programs.
For these reasons, it is strongly recommended that you make it a practice to declare a
signature for every service that you create.
Designer supports several data types for use in services. Each data type supported by
Designer corresponds to a Java data type and has an associated icon. When working in
the editor, you can determine the data type for a field by looking at the icon next to the
field name.
Note: The purpose of declaring input parameters is to define the inputs that a
calling program or client must provide when it invokes this flow service.
You do not need to declare inputs that are obtained from within the flow
itself. For example, if the input for one service in the flow is derived from
the output of another service in the flow, you do not need to declare that
field as an input parameter.
When possible, use variable names that match the names used by the services in the flow.
Variables with the same name are automatically linked to one another in the
pipeline. (Remember that variable names are case sensitive.) If you use the same
variable names used by flow’s constituent services, you reduce the amount of
manual data mapping that needs to be done. When you specify names that do not
match the ones used by the constituent services, you must use the Pipeline view to
manually link them to one another.
Avoid using multiple inputs that have the same name. Although Designer permits you
to declare multiple input parameters with the same name, the fields may not be
processed correctly within the service or by services that invoke this service.
Make sure the variables match the data types of the variables they represent in the flow. For
example, if a service in the flow expects a document list called LineItems, define
that input variable as a document list. Or, if a service expects a Date object called
EmploymentDate, define that input variable as an Object and apply the java.util.Date
object constraint to it. For a complete description of the data types supported by
Designer, see "Data Types" on page 1155.
Declared input variables appear automatically as inputs in the pipeline. When you select the
first service or MAP step in the flow, the declared inputs appear under Pipeline In.
Trigger services have specific input parameter requirements. If you intend to use a
service with a webMethods Messaging Trigger or a JMS trigger, make sure the
input signature conforms to the requirements for each of those trigger types. For
more information about creating a webMethods Messaging Trigger, see "Creating
a webMethods Messaging Trigger" on page 727. For more information about
creating JMS triggers, see "Working with JMS Triggers" on page 671.
Important: If you edit a cached service by changing the inputs (not the pipeline),
you must reset the server cache. If you do not reset it, the old cached
input parameters will be used at run time. To reset the service cache from
Designer, select the service and then click the Reset button next to Reset
Cache in the Properties view. To reset the service cache from Integration
Server Administrator, select Service Usage under Server in the Navigation
panel. Select the name of the service and an information screen for that
service appears. Click Reset Server Cache.
Input/Output tab
For a flow service, the input side describes the initial contents of the pipeline. In other
words, it specifies the variables that this flow service expects to find in the pipeline at
run time. The output side identifies the variables produced by the flow service and
returned to the pipeline.
You can declare a service signature in one of the following ways:
Reference a specification. A specification defines a set of service inputs and outputs.
You can use a specification to define input and output parameters for multiple
services. When you assign a specification to a service, you cannot add, delete, or
modify the declared variables using the service’s Input/Output tab.
Reference an IS document type. You can use an IS document type to define the input
or output parameters for a service. When you assign an IS document type to the
Input or Output side of the Input/Output tab, you cannot add, modify, or delete the
variables on that half of the tab.
Manually insert input and output variables. Drag variables from the Palette view to the
Input or Output sides of the Input/Output tab.
services that reference it) or detach the specification so you can manually define the
parameters on the service’s Input/Output tab.
Any change that you make to the specification is automatically propagated to all
services that reference that specification.
If the specification resides in a different package than the service, you must set up
a package dependency. For more information about package dependencies, see
"About Package Dependencies" on page 161.
Important: The run-time parameters should only be set by someone who is thoroughly
familiar with the structure and operation of the selected service. Improper
use of these options can lead to a service failure at run time and/or the return
of invalid data to the client program.
Important: Do not use the stateless option unless you are certain that the service
operates as an atomic unit of work. If you are unsure, set the Stateless
property in the Run time category to False.
Note: If a cached entry with input parameter values that are identical to the current
invocation does not exist in the cache, Integration Server executes the service
and stores the results in the cache.
When a cached service does not have input parameters (for example, a date/time service)
and previous results do not exist in the cache, at run time Integration Server executes the
service and stores the results. When the service executes again, Integration Server uses
the cached copy. In other words, Integration Server does not use the run-time pipeline
for the current service invocation; you will always receive cached results until the cache
expires.
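The cache-hit behavior described in the note above can be sketched as a map keyed by the input pipeline: if no entry with identical input values exists, the service executes and its results are stored; otherwise the cached copy is returned. This is a deliberate simplification (no expiry, no prefetch, and hypothetical class and method names), not the actual Integration Server cache.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified sketch of service-results caching: results are stored per
// input pipeline, and an invocation with identical inputs returns the
// cached results instead of executing the service again.
public class ServiceResultsCacheSketch {
    private final Map<Map<String, Object>, Map<String, Object>> cache = new HashMap<>();
    private int executions = 0;

    public Map<String, Object> invoke(
            Function<Map<String, Object>, Map<String, Object>> service,
            Map<String, Object> inputs) {
        // If no cached entry with identical input values exists,
        // execute the service and store the results.
        return cache.computeIfAbsent(inputs, in -> {
            executions++;
            return service.apply(in);
        });
    }

    public int getExecutions() { return executions; }

    public static void main(String[] args) {
        ServiceResultsCacheSketch cache = new ServiceResultsCacheSketch();
        Function<Map<String, Object>, Map<String, Object>> service =
                in -> Map.of("total", ((Integer) in.get("qty")) * 10);

        cache.invoke(service, Map.of("qty", 3)); // executes the service
        cache.invoke(service, Map.of("qty", 3)); // identical inputs: cached copy used
        System.out.println(cache.getExecutions()); // prints "1"
    }
}
```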
the service is invoked. Following are guidelines for you to consider when determining
whether to cache the results for a service.
Services suited for caching:
Services that require no state information. If a service does not depend on state
information from an earlier transaction in the client’s session, you can cache its
results.
Services that retrieve data from data sources that are updated infrequently. Services whose
sources are updated on a daily, weekly, or monthly basis are good candidates for
caching.
Services that are invoked frequently with the same set of inputs. If a service is frequently
invoked by clients using the same input values, it is beneficial to cache the results.
Services that you should not cache:
Services that perform required processing. Some services contain processing that must be
performed each time a client invokes them. For example, if a service contains accounting
logic to perform charge back and you cache the service results, the server does not
execute the service, so the service does not perform charge back for the subsequent
invocations of the service.
Services that require state information. Do not cache services that require state
information from an earlier transaction, particularly information that identifies the
client that invoked it. For example, you do not want to cache a service that produced
a price list for office equipment if the prices in the list vary depending on the client
who initially connects to the data source.
Services that retrieve information from frequently updated sources. If a service retrieves
data from a data source that is updated frequently, the cached results can become
outdated. Do not cache services that retrieve information from sources that are
updated in real time or near real time, such as stock quote systems or transactional
databases.
Services that are invoked with unique inputs. If a service handles a large number of
unique inputs and very few repeated requests, you will gain little by caching its
results. You might even degrade server performance by quickly consuming large
amounts of memory.
Note: If you do not have administrator privileges on your Integration Server, work
with your server administrator to monitor and evaluate your service’s use of
cache.
When returning results for a cached service, Integration Server returns a reference to the
cached results instead of the actual value of the cached results. If a subsequent step in
the service modifies the returned result, Integration Server changes the cached value as
well, which affects all references to the cached value. If any other service uses the cached
results, those services will begin using the updated cache value. To address this issue,
you can do one of the following:
Do not change the results of a cached service in a subsequent step in the flow service.
Configure the ServiceResults cache to return the actual value instead of a reference.
For more information about changing the ServiceResults cache, see the webMethods
Integration Server Administrator’s Guide.
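The reference-aliasing problem described above can be reproduced with a plain HashMap standing in for the service-results cache: because the caller receives a reference to the stored map, modifying the returned results changes the cached value for every later caller, while returning a defensive copy (the "actual value" behavior) leaves the cache intact. The class and method names below are illustrative, not Integration Server APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates the aliasing issue described above: when the cache hands
// out a reference to its stored results, a caller that modifies the
// returned map changes the cached value for everyone. Returning a copy
// avoids this. Purely illustrative; not Integration Server code.
public class CachedReferenceDemo {
    private final Map<String, Object> cachedResults = new HashMap<>();

    public CachedReferenceDemo() {
        cachedResults.put("price", 100);
    }

    // Returns the cached map itself (a reference).
    public Map<String, Object> getByReference() {
        return cachedResults;
    }

    // Returns a defensive copy (the "actual value" behavior).
    public Map<String, Object> getByValue() {
        return new HashMap<>(cachedResults);
    }

    public static void main(String[] args) {
        CachedReferenceDemo byRef = new CachedReferenceDemo();
        // A subsequent step modifies the returned result...
        byRef.getByReference().put("price", 999);
        // ...and the cached value has changed for all callers.
        System.out.println(byRef.getByReference().get("price")); // prints "999"

        CachedReferenceDemo byVal = new CachedReferenceDemo();
        byVal.getByValue().put("price", 999);
        // The cached value is untouched when a copy is returned.
        System.out.println(byVal.getByValue().get("price")); // prints "100"
    }
}
```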
Important: Integration Server resets the cache for a service automatically whenever any
edits are made to the service. However, if the input signature includes a
document reference variable and the referenced document type changes, you
must reset the service cache. If you do not reset it, Integration Server uses
the old cached input parameters at run time until such time as the cached
results expire. To reset the service cache from Designer, select the service
and then click Reset next to Reset Cache in the Properties view. To reset the
service cache from Integration Server Administrator, select Service Usage
under Server in the Navigation panel. Select the name of the service and an
information screen for that service appears. Click Reset Server Cache.
Note: The cache may not be refreshed at the exact time specified in Cache expire. It
may vary from 0 to 15 seconds, according to the cache sweeper thread. For
details, see the watt.server.cache.flushMins setting in Integration Server.
invoked without the proper access privileges. To avoid this problem, enable Prefetch
on the invoked services rather than on the Java or C/C++ services that call them.
When you enable Prefetch, you must also set the Prefetch activation property to specify
when the server should initiate a prefetch. This setting specifies the minimum
number of times a cached result must be accessed (hit) in order for the server to
prefetch results. If the server retrieves the cached results fewer times than specified
in the Prefetch activation property, the server will not prefetch the service results when
the cache expires.
The cache may not be refreshed at the exact time the last hit fulfills the Prefetch
activation requirement. It may vary from 0 to 15 seconds, according to the cache
sweeper thread. For details, see the watt.server.cache.flushMins setting in Integration
Server.
Select... To...
[$default] Default Runtime Locale Use the server’s default JVM locale.
3. If you selected Open locale editor, complete the following in the Define Custom Locale
dialog box.
Language Select one of the ISO 639 codes that represent the language.
(2- or 3-letter codes)
Script Optional. Select one of the 4-letter script codes in the ISO
15924 registry.
4. Click OK. Integration Server will execute the service in the specified locale.
Item Description
2 Specifies the path portion of the URL for which the URL alias is to be
generated. This portion includes the invoke directive “invoke.” The path
also identifies the folder in which the flow service resides and the name of
the service to invoke.
Separate subfolders with periods. These fields are case sensitive. Be sure
to use the same combination of upper and lower case letters as specified in
the folder name on webMethods Integration Server.
To create the URL alias for a flow service, replace the portion of the URL containing the
invoke directive with an alias name in the HTTP URL alias property and save the service.
For example, if the name of a flow service is folder.subFolder:serviceName, then the
path to invoke the service is invoke/folder.subFolder/serviceName. If you enter “test” in
the HTTP URL alias property and save the service, then the two following URLs will point
to the same service:
http://IS_server:5555/invoke/folder.subFolder/serviceName
http://IS_server:5555/test
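Conceptually, the alias is a lookup that maps the short path to the full invoke path. A minimal sketch of that resolution, with a hypothetical resolver class (this is not an Integration Server API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of alias resolution for the example above: the alias "test"
// resolves to the same path as the full invoke directive. The lookup
// table and method names are hypothetical, not Integration Server APIs.
public class UrlAliasSketch {
    private final Map<String, String> aliases = new HashMap<>();

    public void register(String alias, String path) {
        aliases.put(alias, path);
    }

    // Returns the real path for an alias, or the path unchanged if no alias matches.
    public String resolve(String path) {
        return aliases.getOrDefault(path, path);
    }

    public static void main(String[] args) {
        UrlAliasSketch resolver = new UrlAliasSketch();
        resolver.register("test", "invoke/folder.subFolder/serviceName");
        // Both URLs reach the same service:
        System.out.println(resolver.resolve("test"));
        System.out.println(resolver.resolve("invoke/folder.subFolder/serviceName"));
    }
}
```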
URL Alias for a REST Service (that uses the rest directive)
Important: You can create a URL alias only for a REST service that uses the rest
directive, not for a service that uses the restv2 directive.
Item Description
2 Specifies the path portion of the URL for which the URL alias is to be
generated. This portion includes the rest directive “rest.” The path also
identifies the REST resource folder in which the service resides.
Item Description
Important: Do not use reserved characters in the URL alias string. Alias strings that
contain reserved characters are invalid and will not work.
Important: The pipeline debug options you select can be overwritten at run time by
the value of the watt.server.pipeline.processor property set in the server
configuration file. This property globally enables or disables the Pipeline
debug settings. The default enables the Pipeline debug feature on a service-
by-service basis. For more information on setting properties in the server
configuration file, see webMethods Integration Server Administrator’s Guide.
Select... To...
None Run the service without saving or restoring the pipeline. This
is the default.
Select... To...
Restore (Override) Restore the pipeline from a file when the service executes.
When the service executes, the server loads the
pipeline file, folderName.serviceName.xml, from the
IntegrationServer_directory\instances\instance_name\pipeline
directory. The server will throw an exception if the pipeline
file does not exist or cannot be found.
Restore (Merge) Merge the pipeline with one from a file when the service
executes.
When this option is selected and the input parameters in the
file match the input parameters in the pipeline, the values
defined in the file are used in the pipeline. If there are input
parameters in the pipeline that are not matched in the file, the
input parameters in the pipeline remain in the pipeline.
When the service executes, the server loads the
pipeline file, folderName.serviceName.xml, from the
IntegrationServer_directory\instances\instance_name\pipeline
directory. The server will throw an exception if the pipeline
file does not exist or cannot be found.
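The Restore (Merge) behavior described above can be modeled as a simple map operation. This is an illustrative sketch, not the Integration Server implementation; the text does not say what happens to file variables with no match in the pipeline, so the sketch ignores them:

```java
import java.util.HashMap;
import java.util.Map;

public class PipelineMerge {
    // Restore (Merge): where an input parameter exists in both the pipeline
    // and the file, the value from the file is used. Pipeline parameters
    // with no match in the file are left unchanged. File-only variables are
    // ignored here because the text above does not specify their handling.
    public static Map<String, Object> merge(Map<String, Object> pipeline,
                                            Map<String, Object> fromFile) {
        Map<String, Object> merged = new HashMap<>(pipeline);
        for (Map.Entry<String, Object> e : fromFile.entrySet()) {
            if (merged.containsKey(e.getKey())) {
                merged.put(e.getKey(), e.getValue());
            }
        }
        return merged;
    }
}
```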
The name and data type of the variable that Integration Server adds to the pipeline
with the contents of the XML document
The Default xmlFormat property specifies the default handling for XML documents
received by the service. Keep the following points in mind when setting the Default
xmlFormat property value for a service:
You can specify a default XML format for flow services and Java services only. The
Default xmlFormat property is not available for C/C++ services, .NET services, or web
service connectors.
The default XML format specified for a service by the Default xmlFormat property can
be overridden by the value of the xmlFormat argument in the URL of an individual
client request. However, a client should specify the xmlFormat argument only when
the service documentation recommends it and the client knows how the service will
respond. For more information, see "Submitting and Receiving XML via HTTP" on
page 938.
The XML format determines whether or not Integration Server parses the document.
If parsing is not needed, automatic parsing unnecessarily slows down the execution
of a service. For example, an application might handle the XML as a simple String;
in this case, the automatic parsing is unnecessary and should be avoided.
Make sure the input signature of the service contains an input parameter that
matches the variable name and data type that Integration Server produces for the
default format.
Important: If the service already has REST resources configured, Designer displays
a warning message if you change the selection of the allowed HTTP
methods to exclude any method used in the configuration of the REST
resources.
Note: If service auditing is also configured for the service, Integration Server adds
an entry to the service log for each failed retry attempt. Each of these entries
Tip: You can invoke the pub.flow:getRetryCount service to retrieve the current retry
count and the maximum specified retry attempts. For more information about
this service, see the webMethods Integration Server Built-In Services Reference. For
more information about building a service that retries, see "About Automatic
Service Retry" on page 192.
Note: When Integration Server logs an entry for a service, the log entry contains the
identity of the server that executed the service. The server ID in the log entry
always uses the Integration Server primary port, even if a service is executed
using another (non-primary) Integration Server port.
Each service has a set of auditing properties located in the Audit category on the service’s
Properties view. These properties determine when a service generates audit data and
what data is stored in the service log. For each service, you can decide:
Whether the service should generate audit data during execution. That is, do you
want the service to generate audit data to be captured in the service log? If so, you
must decide whether the service will generate audit data every time it executes or
only when it is invoked directly by a client request (HTTP, FTP, SMTP, etc.) or a
trigger.
The points during service execution when the service should generate audit data to
be saved in the service log. You might want a service to produce audit data when it
starts, when it ends successfully, when it fails, or a combination of these.
Whether to include a copy of the service input pipeline in the service log. If the
service log contains a copy of the input pipeline, you can use the webMethods
Monitor to perform more extensive failure analysis, examine the service’s input data,
or re-invoke the service.
Keep in mind that generating audit data can impact performance. Integration Server
uses the network to send the audit data to the service log and uses memory to actually
save the data in the service log. If a large amount of data is saved, performance can be
impacted. When you configure audit data generation for services, you should balance
the need for audit data against the potential performance impact.
Note: The service log can be a flat file or a database. If you use a database, the
database must support JDBC. You can use Integration Server to view the
service log whether it is a flat file or a database. If the service log is a database,
you can also use the webMethods Monitor to view audit data and re-invoke
the service. Before you configure service auditing, check with your Integration
Server Administrator to learn what kind of service log exists. For more
information about the service log, see the webMethods Audit Logging Guide.
Error Auditing
In error auditing, you use the service log to track and re-invoke failed services. To use
the service log for error auditing, services must generate audit data when errors occur,
and the Integration Server must save a copy of the service’s input pipeline in the service
log.
With webMethods Monitor, you can only re-invoke top-level services (those services
invoked directly by a client or by a webMethods Messaging Trigger). Therefore, if your
intent with error auditing is to re-invoke failed services, the service needs to generate
audit data only when it is the top-level service and it fails.
To make sure the service log contains the information needed to perform error auditing,
select the following Audit properties.
To use the service log for error auditing, use a database for the service log.
Service Auditing
When you perform service auditing, you use the service log to track which services
execute successfully and which services fail. You can perform service auditing to analyze
the service log and determine how often a service executes, how many times it succeeds,
and how many times it fails. To use the service log for service auditing, services need to
generate audit data after execution ends.
To make sure the service log contains the information needed to perform service
auditing, select the following Audit properties.
To use the service log for service auditing, you can use either a flat file or a database as
the service log.
To use the service log to audit for recovery, use a database for the service log.
Note: Typically, you will audit long-running services in conjunction with error
auditing, service auditing, or auditing for recovery.
The options you select in the Audit category of the Properties view can be overwritten
at run time by the level set for the Service Logger in Integration Server. View the
Service Logger level on the Settings > Logging > View Service Logger Details page of
Integration Server Administrator.
The service generates audit data only when it satisfies the selected option under
Enable auditing and the selected option in the Log on property. For example, if When
top-level service only is selected and the service is not the root service in the flow
service, it will not generate audit data.
The pipeline data saved in the service log is the state of the pipeline just before the
invocation of the service. It is not the state of the pipeline at the point the service
generates audit data.
Including the pipeline in the service log is useful only when the service log is a
database. Integration Server cannot save the pipeline to a flat file service log.
When a service generates audit data, it also produces an audit event. If you want
the audit event to cause another action to be performed, such as sending an e-mail
notification, write an event handler. Then subscribe the event handler to audit
events. For more information about events and event handlers, see "Subscribing to
Events" on page 915.
If you want audit events generated by a service to pass a copy of the input pipeline
to any subscribed event handlers, set Include pipeline to On errors only or Always.
Integration Server can also log select fields from the service signature. Logged fields
can be viewed in the webMethods Monitor. For information about field logging, see
"Logging Input and Output Fields" on page 199.
You can associate a custom value with an auditing context. The custom value can be
used to search for service audit records in the webMethods Monitor. For information
about creating and logging custom values for auditing contexts, see "Assigning a
Custom Value to an Auditing Context" on page 201.
To configure service auditing, you must have write access to the service and own the
lock on the service or have it checked out.
For detailed information about the Audit properties, see "Audit Properties" on page
1092.
Log on... Data logged at the start of the service... Data logged at the end of the service...
4. Select the check boxes next to the fields you want to log.
5. If you want to define an alias for a field, type an Alias name.
The alias defaults to the name of the selected field, but it can be modified to any alias
for viewing in webMethods Monitor.
java.lang.Boolean VARCHAR
java.lang.Byte VARCHAR
java.lang.Character VARCHAR
java.lang.Double FLOAT
java.lang.Float FLOAT
java.lang.Integer FLOAT
java.lang.Long FLOAT
java.lang.Short FLOAT
java.util.Date DATE
byte[] VARCHAR
UNKNOWN VARCHAR
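The type mapping in the table above can be expressed as a lookup. This is an illustrative sketch only (the class name is invented); unlisted types fall back to VARCHAR, per the UNKNOWN row:

```java
import java.util.Map;

public class AuditTypeMapping {
    // Java wrapper type -> database column type used for logged fields,
    // per the table above.
    private static final Map<Class<?>, String> COLUMN_TYPES = Map.of(
            Boolean.class, "VARCHAR",
            Byte.class, "VARCHAR",
            Character.class, "VARCHAR",
            Double.class, "FLOAT",
            Float.class, "FLOAT",
            Integer.class, "FLOAT",
            Long.class, "FLOAT",
            Short.class, "FLOAT",
            java.util.Date.class, "DATE",
            byte[].class, "VARCHAR");

    // Types not listed in the table (UNKNOWN) are stored as VARCHAR.
    public static String columnTypeFor(Class<?> type) {
        return COLUMN_TYPES.getOrDefault(type, "VARCHAR");
    }
}
```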
Integration Server calls the toString() method on objects that do not have a defined
Java wrapper type. If you are logging one of your own types and you implement the
toString() method, the server saves the value returned by your implementation to the
audit log. If you do not supply a toString() implementation, the server saves the output
of java.lang.Object.toString() to the database.
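For example, a custom type with a toString() override might look like the following. The class is hypothetical; the comments describe what the audit log would record under the behavior described above:

```java
// Hypothetical custom type logged as a pipeline field. With this
// toString() override, the audit log records a readable value; without
// one, it would record java.lang.Object.toString() output such as
// "OrderRef@1a2b3c".
public class OrderRef {
    private final String orderId;
    private final int lineCount;

    public OrderRef(String orderId, int lineCount) {
        this.orderId = orderId;
        this.lineCount = lineCount;
    }

    @Override
    public String toString() {
        // This is the value that would be saved to the audit log.
        return "OrderRef[id=" + orderId + ", lines=" + lineCount + "]";
    }
}
```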
a circuit breaker on the invocation of the remote service, you can limit the impact of the
abnormal behavior of a remote service on other microservices and critical resources in
your system.
Note: The circuit breaker feature is available by default for a service that resides in
a Microservices Container. To use the circuit breaker feature with Integration
Server, your Integration Server must have additional licensing.
Note: In addition to the licensing requirement, to use the circuit breaker functionality
in version 10.1, you must install the following fixes: ESB_10.1_Fix2 and
IS_10.1_Core_Fix2.
Using a circuit breaker with a service may impact service performance. However, the
benefits of using a circuit breaker may outweigh the performance impact.
If the circuit breaker for a service considers a timeout to be a failure event and
you want the circuit breaker to attempt to cancel the thread executing the service,
you must configure the thread kill functionality on the Integration Server that
hosts the service. Specifically, you must set the watt.server.threadKill.enabled
and the watt.server.threadKill.interruptThread.enabled server configuration
parameters to true. For more information about the thread kill functionality,
including configuration information and limitations, see webMethods Integration
Server Administrator’s Guide.
When configuring a service to use circuit breaker and transient error handling,
also known as service retry, keep in mind that the circuit breaker could open the
circuit before all of the retry attempts complete. The circuit breaker handles
any subsequent retry attempts as it would any request for the service. For more
information about how a circuit breaker works, see Developing Microservices with
webMethods Microservices Container.
When specifying an alternate service to invoke when a service has an open circuit,
do not create a circular reference in which the service with the configured circuit
breaker calls itself.
Specify... To...
False Disable a circuit breaker for this service. This is the default.
3. In the Failure event field, specify the events that the circuit breaker considers to be a
failure event.
Select... To...
Exception only Indicate that a failure event occurs only when the service ends
with an exception.
This is the default.
Timeout only Indicate that a failure event occurs only when the service
execution time exceeds the Timeout period property value.
Exception or Timeout Indicate that a failure event occurs when the service ends
with an exception or the service execution time exceeds the
Timeout period property.
4. If the circuit breaker treats a timeout as a failure event, configure the following
information:
Timeout period The number of seconds that service execution can take
before being considered a timeout failure event. If
the timeout period elapses before service execution
completes, the circuit breaker considers a timeout failure
event to have occurred. The default is 60 seconds.
You must specify a timeout period greater than 0.
Select... To...
5. In the Failure threshold field, specify the number of failure events that cause the circuit
to open if all the events occur within the failure period. The default is 5.
6. In the Failure period field, specify the length of time, measured in seconds, during
which the number of failure events equal to the failure threshold causes the circuit to
open. The default is 60 seconds.
7. In the Circuit open action field, specify how circuit breaker responds to a request for
this service when the circuit is open.
Select... To...
8. If you set the Circuit open action field to Invoke service, in the Circuit open service field,
specify the fully qualified name of the alternate service that circuit breaker invokes
upon receiving a request when the circuit is open. For more information about
building a service for use with an open circuit, see Developing Microservices with
webMethods Microservices Container.
9. In the Circuit reset period field, specify the length of time, measured in seconds, for
which the circuit remains in an open state. The default is 300 seconds.
During the reset period, the circuit breaker responds to requests to invoke the service
as specified by the Circuit open action property. When the reset period elapses, the
circuit breaker places the circuit in a half-open state. The next request for the service
results in service execution, after which the circuit breaker either closes or re-opens
the circuit.
10. Click File > Save.
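The closed, open, and half-open behavior configured in steps 3 through 9 can be sketched as a plain state machine. This is an illustrative model using the defaults named above (failure threshold 5, failure period 60 seconds, reset period 300 seconds), not the Microservices Container implementation; all class and method names are invented:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal circuit breaker model: opens after `failureThreshold` failure
// events within `failurePeriod`, stays open for `resetPeriod`, then
// half-opens so the next request can close or re-open the circuit.
public class SimpleCircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;   // default 5
    private final Duration failurePeriod; // default 60 seconds
    private final Duration resetPeriod;   // default 300 seconds
    private final Deque<Instant> failures = new ArrayDeque<>();
    private State state = State.CLOSED;
    private Instant openedAt;

    public SimpleCircuitBreaker(int threshold, Duration failurePeriod,
                                Duration resetPeriod) {
        this.failureThreshold = threshold;
        this.failurePeriod = failurePeriod;
        this.resetPeriod = resetPeriod;
    }

    // State seen by a request arriving at time `now`. Once the reset
    // period elapses, an open circuit transitions to half-open.
    public State stateAt(Instant now) {
        if (state == State.OPEN
                && Duration.between(openedAt, now).compareTo(resetPeriod) >= 0) {
            state = State.HALF_OPEN;
        }
        return state;
    }

    // Record a failure event (exception or timeout) at time `now`.
    public void recordFailure(Instant now) {
        failures.addLast(now);
        // Discard failure events older than the failure period.
        while (!failures.isEmpty()
                && Duration.between(failures.peekFirst(), now)
                           .compareTo(failurePeriod) > 0) {
            failures.removeFirst();
        }
        if (failures.size() >= failureThreshold) {
            state = State.OPEN;
            openedAt = now;
        }
    }

    // A successful trial request in the half-open state closes the circuit.
    public void recordSuccess() {
        if (state == State.HALF_OPEN) {
            state = State.CLOSED;
            failures.clear();
        }
    }
}
```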
. (period)
- (dash)
_ (underscore)
Additionally, the local name must begin with a letter or an underscore. The
following are examples of valid local names:
addCustOrder
authorize_Level1
générent
For specific rules relating to NCNames, see the “NCName” definition in the Namespaces
in XML specification.
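As a rough illustration, the local-name rules above can be approximated with a regular expression. This is a simplified check for the examples given; it does not cover every Unicode category permitted by the full NCName production, and the class name is invented:

```java
import java.util.regex.Pattern;

public class LocalNameCheck {
    // Simplified NCName check: first character is a letter or underscore;
    // remaining characters are letters, digits, periods, dashes, or
    // underscores. Colons are not allowed anywhere in an NCName.
    private static final Pattern NCNAME =
            Pattern.compile("[\\p{L}_][\\p{L}\\p{N}._-]*");

    public static boolean isValidLocalName(String name) {
        return NCNAME.matcher(name).matches();
    }
}
```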
Note: It is possible for an implicit name to match the explicit name of another
service. When this condition exists, the explicit name takes precedence. That
is, when a universal name is requested, Integration Server searches its registry
of explicit names first. If it does not find the requested name there, it looks for
a matching implicit name.
receive an error message when you attempt to save the service or document type. You
will not be permitted to save it until you specify both parts of the universal name.
If you move a service or document type, or a folder containing a service or document
type, Designer retains the explicit universal name. If you copy a service or document
type, or a folder containing a service or document type, Designer does not retain the
explicit universal name.
Earlier versions of the webMethods SOAP implementation did not include the
http://localhost/ prefix as part of an implicit name. However, the server is backward
compatible. It will resolve QNames that clients submit in either the old form (without
the http prefix) or the new form (with the http prefix).
Namespace The URI that will be used to qualify the name of this service or
name document type. You must specify a valid absolute URI.
Service Description
Note: If you assign an output template to a service and later copy that service
to a different package, you must copy the output template file to
the IntegrationServer_directory\instances\instance_name\packages
\packageName\templates directory of the new package. (If you copy an
entire package, any output templates will be included automatically.)
If the template file has a file extension other than .html, rename the file
extension as “.html” so that Designer will recognize its contents.
The server treats the case of the file name differently depending on which operating
system you are using. For example, on a case-insensitive system such as Windows,
the server would see the names “template” and “TEMPLATE” as the same name.
However, on a case-sensitive system such as UNIX, the server would see these as
two different names. If you are trying to assign an existing output template and you
enter a file name in the wrong case on a UNIX system, the wrong file name could be
assigned as the output template for your service.
Note: Changes you make to an output template affect all the services in the
package that use the template, not just the service that is currently open
in the editor.
Note: The View as HTML feature is available only for flow services.
A flow service is a service that is written in the webMethods flow language. This simple
yet powerful language lets you encapsulate a sequence of services within a single service
and manage the flow of data among them.
Any service can be invoked within a flow (including other flow services). For instance,
a flow might invoke a service that you create, any of the built-in services provided with
the Integration Server, and/or services from a webMethods add-on product such as the
webMethods Adapter for JDBC.
You create flow services using Designer. They are saved in XML files on Integration
Server.
Important: Flow services are written as XML files in a format that is understood by
Designer. Create and maintain flow services using Designer. You cannot
create or edit a flow service with a text editor.
Invocation Steps
Data-Handling Steps
Control Steps
LOOP Executes a set of flow steps once for each element in a specified
array. For more information about this step, see "The LOOP Step"
on page 253.
The pipeline holds the input and output for a flow service
When you build a flow service, you use Designer to specify how information in the
pipeline is mapped to and from services in the flow.
which the flow steps execute. Designer displays shapes for flow steps as well as for
the start and end of the flow service. Steps such as BRANCH, LOOP, and REPEAT
that can contain child steps can be collapsed or expanded.
Because the Tree tab and Layout tab provide the same capabilities for building a flow
service, work in whichever tab you find easier to use. You can easily switch between the
tabs when building a flow service.
Designer uses the Tree tab as the default tab for building and viewing flow services. For
this reason, unless specifically stated otherwise, the procedures in the webMethods Service
Development Help are written for working in the Tree tab in the flow service editor. For
information about working in the Layout tab in the flow service editor, see "Working in
the Layout Tab" on page 263.
Note: When you specify a new service name at the time of creating a new REST V2
resource operation, Designer automatically creates the specified flow service
under the same folder as the REST V2 resource. For details, see "Defining a
REST V2 Resource Operation" on page 509.
Important: The flow steps produced by this option are no different than those
produced by manually inserting INVOKE pub.xml:loadXMLNode and INVOKE
pub.xml:queryXMLNode steps in a flow service. After Designer inserts the set of
default steps into your flow service, you can edit the default steps and insert
additional steps just as you would any ordinary flow service.
To create the flow service from an XML document that resides on the Internet,
type the URL of the resource. (The URL you specify must begin with http: or
https:.)
To create the flow service from an XML document on your local file system, type
in the path and file name, or click the Browse button to navigate to and select the
file.
8. Click Finish to create the flow service.
Note: Before running a flow service that expects an XML document as input,
you must first create a launch configuration that specifies the XML file,
and then debug the service in Designer. For information about creating a
launch configuration, see "Creating a Launch Configuration for Running a
Service" on page 437.
Note: If the flow service expects an XML document as input, you must create a
launch configuration and debug the service in Designer before running it.
For more information, see "Creating a Launch Configuration for Running a
Service" on page 437.
When creating a flow service from an XML Schema definition that contains a large
number of complex type definitions, and you want Integration Server to create a
separate IS document type for each complex type definition, you may need to increase
the number of elements that Designer maintains in cache. If the cache is not large
enough to include all of the generated IS document types, then Designer will have
to repeatedly retrieve the document types from Integration Server while creating the
flow service. This increases network traffic and can prolong the time needed to create
the flow service. If the cache is large enough to contain all of the IS document types
and other elements generated by Designer while creating a flow service, Designer
might create the flow service more quickly. To increase the number of elements
cached by Designer, see "Caching Elements" on page 82.
10. On the Select Processing Options panel, under Schema domain, specify the schema
domain to which any generated IS schemas will belong. Do one of the following:
To add the IS schema to the default schema domain, select Use default schema
domain.
To add the IS schemas to a specified schema domain, select Use specified schema
domain and provide the name of the schema domain in the text box. A valid
schema domain name is any combination of letters, numbers, and/or the
underscore character. For information about restricted characters, see "About
Element Names" on page 54.
11. Under Content model compliance, select one of the following to indicate how strictly
Integration Server represents content models from the XML Schema definition in the
resulting IS document type.
Select... To...
12. If you selected strict or lax compliance, next to Preserve text position, do one of the
following to specify whether document types generated from complex types that
allow mixed content will contain multiple *body fields to preserve the location of text
in instance documents.
Select the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content preserves the locations
for text in instance documents. The resulting document type contains a *body
field after each field and includes a leading *body field. In instance documents for
this document type, Integration Server places text that appears after a field in the
*body field.
Clear the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content does not preserve the
locations for text in instance documents. The resulting document type contains a
single *body field at the top of the document type. In instance documents for this
document type, text data around fields is all placed in the same *body field.
13. If this document type will be used as the input or output signature of a service
exposed as a web service and you want to enable streaming of MTOM attachments
for elements of type base64Binary, select the Enable MTOM streaming for elements of type
base64Binary check box.
For more information about streaming of MTOM attachments, see "Working with
Web Services" on page 787.
14. If you want Integration Server to use the Xerces Java parser to validate the XML
Schema definition, select the Validate schema using Xerces check box.
Select... To...
18. Under Complex type handling, select one of the following to indicate how Integration
Server handles references to named complex type definitions:
Select... To...
Expand complex types inline Use a document field defined in line to represent
the content of a referenced complex type
definition.
Select... To...
about derived types, see "Derived Types and IS
Document Types" on page 578.
19. If you selected Generate document types for complex types and you want to register each
document type with the complex type definition from which it was created, select
the Register document type with schema type check box.
Note: If you want derived type support for document creation and validation,
select the Register document types with schema type check box. For more
information, see "Registering Document Types with Their Schema Types"
on page 580.
20. If you want Integration Server to generate IS document types for all complex types
in the XML Schema definition regardless of whether the types are referenced by
elements or other type definitions, select the Generate document types for all complex
types in XML Schema check box.
If you leave this check box cleared, Integration Server generates a separate IS
document type for a complex type only if the complex type is referenced or is
derived from a referenced complex type.
21. If any of the root elements you selected for the IS document type contain a
namespace URI and you want to create a new namespace prefix for it, click Next.
Otherwise, continue with step 22.
22. On the Assign Prefixes panel, if you want the IS document type to use different
prefixes than those specified in the XML schema definition, select the prefix you
want to change and enter a new prefix. Repeat this step for each namespace prefix
that you want to change.
Note: The prefix you assign must be unique and must be a valid XML NCName
as defined by the specification http://www.w3.org/TR/REC-xml-names/
#NT-NCName.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-
schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
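For example, a pattern constraining facet written in Perl-style syntax, such as \d{3}-\d{4}, can be checked with Java's Perl-compatible regex engine. The sketch below is an approximation for illustration only; the facet value and class name are invented, and this is not the compiler Integration Server uses:

```java
import java.util.regex.Pattern;

public class PatternFacetCheck {
    // A Perl-style pattern facet value, as might appear in
    // <xsd:pattern value="\d{3}-\d{4}"/>.
    private static final Pattern FACET = Pattern.compile("\\d{3}-\\d{4}");

    public static boolean matchesFacet(String value) {
        // XML Schema pattern facets implicitly anchor to the whole value,
        // which matches() also does.
        return FACET.matcher(value).matches();
    }
}
```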
If you selected strict compliance and Integration Server cannot represent the content
model in the complex type accurately, Integration Server does not generate any IS
document types for the flow service.
If you selected lax compliance and indicated that Integration Server should preserve
text locations for content types that allow mixed content (you selected the Preserve
text position check box), Integration Server adds *body fields in the document type
only if the complex type allows mixed content and Integration Server can correctly
represent the content model declared in the complex type definition. If Integration
Server cannot represent the content model in an IS document type, Integration
Server adds a single *body field to the document type.
The contents of an IS document type with a Model type property value other than
“Unordered” cannot be modified.
If the XML schema definition contains an element reference to an element
declaration whose type is a named complex type definition (as opposed to an
anonymous complex type definition), Integration Server creates an IS document type
for the named complex type definition. In the IS document type for the root element,
Integration Server uses a document reference field to represent the element reference.
An exception to this behavior is the situation in which the element reference is the
only reference to the complex type definition and the Only generate document types
for elements with multiple references option is selected. In this situation, Integration
Server uses a document field defined in line to represent the content of the referenced
complex type.
Integration Server uses the prefixes declared in the XML Schema or the ones you
specified as part of the field names. Field names have the format prefix:elementName
or prefix:@attributeName.
If the XML Schema does not use prefixes, Integration Server creates prefixes for
each unique namespace and uses those prefixes in the field names. Integration Server
uses “ns” plus a sequence number as the prefix: the first namespace is assigned
“ns1”, the second “ns2”, and so on.
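The prefix-assignment rule just described can be sketched as follows. This is illustrative only; the class name is invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixGenerator {
    // Assigns "ns1", "ns2", ... to namespaces in order of first appearance,
    // reusing the same prefix for a namespace seen before.
    private final Map<String, String> prefixes = new LinkedHashMap<>();

    public String prefixFor(String namespaceUri) {
        String existing = prefixes.get(namespaceUri);
        if (existing == null) {
            existing = "ns" + (prefixes.size() + 1);
            prefixes.put(namespaceUri, existing);
        }
        return existing;
    }
}
```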
When creating a flow service from an XML Schema definition that imports multiple
schemas from the same target namespace, Integration Server throws Xerces
validation errors indicating that the element declaration, attribute declaration, or
type definition cannot be found. The Xerces Java parser honors the first <import> and
ignores the others. To work around this issue, you can do one of the following:
Combine the schemas from the same target namespace into a single XML Schema
definition. Then change the XML schema definition to import the merged schema
only.
When creating the flow service, clear the Validate schema using Xerces check box
to disable schema validation by the Xerces Java parser. When generating the
flow service, Integration Server will not use the Xerces Java parser to validate the
schemas associated with the XML Schema definition.
Before running a flow service that expects an XML document as input, you must first
create a launch configuration that specifies the XML file, and then debug the service
in Designer. For information about creating a launch configuration, see "Creating a
Launch Configuration for Running a Service" on page 437.
Tip: You can also move a flow step by dragging it up or down with your
mouse.
2. Use the following toolbar buttons to move the step left or right beneath the current
parent step.
Promote a flow step in the hierarchy (that is, move the step
one level up in the hierarchy)
Property Description
Label Assigns a name to the selected flow step. When a label is assigned,
that label appears next to the step in the editor. The label allows
you to reference that flow step in other flow steps. In addition,
you use the label to control the behavior of certain flow steps. For
example, the BRANCH step uses the Label property to determine
which alternative it is supposed to execute.
See "The BRANCH Step" on page 233 and "The EXIT Step" on
page 258 for additional information about this use of the label
property.
Invoke any service for which the caller of the current flow has access rights on the
local webMethods Integration Server.
Invoke built-in services and services on other webMethods Integration Servers.
Invoke flow services recursively (that is, a flow service that calls itself). If you use
a flow service recursively, bear in mind that you must provide a means to end the
recursion.
Invoke any service, validating its input and/or output.
Note: If you are using any adapters (for example, the webMethods Adapter for
JDBC), you will have additional built-in services, which are provided by the
adapters. See the documentation provided with those adapters for details.
Validate input Whether or not you want the server to validate the input
to the service against the service input signature. Select
True to validate the input. Select False if you do not want to
validate the input.
Validate output Whether or not you want the server to validate the output
of the service against the service output signature. Select
True to validate the output. Select False if you do not want
to validate the output.
4. If necessary, on the Pipeline view, link Pipeline In variables to Service In variables. Link
Service Out variables to Pipeline Out variables. For more information about linking
variables to a service, see "About Linking Variables" on page 277.
5. Click File > Save.
Tip: In Designer, clicking the button next to or opening the Palette view
displays a list of commonly used services. You can edit the Window >
Preferences > Software AG > Service Development > Flow Service Editor preferences to
customize this list of services to suit your needs.
order one way if the PaymentType value is “CREDIT CARD” and another way if it is
“CORP ACCT”.
When you build a BRANCH step, you can:
Branch on a switch value. Use a variable to determine which child step executes. At
run time, the BRANCH step matches the value of the switch variable to the Label
property of each of its targets. It executes the child step whose label matches the
value of the switch.
Branch on an expression. Use an expression to determine which child step executes. At
run time, the BRANCH step evaluates the expression in the Label property of each
child step. It executes the first child step whose expression evaluates to “true.”
Important: You cannot branch on a switch value and an expression for the same
BRANCH step. If you want to branch on the value of a single variable and
you know the possible run-time values of the switch variable exactly, branch
on the switch value. If you want to branch on the values of more than one
variable or on a range of values, branch on expressions.
Keep the following points in mind when assigning labels to the targets of the BRANCH
step:
You must give each target step a label unless you want to match an empty string. For
that case, you leave the Label property blank. For more about matching an empty
string, see "Branching on Null and Empty Values" on page 237.
Each Label value must be unique within the BRANCH step.
When you specify a literal value as the Label of a child step, the value you specify
must match the run-time value of the switch variable exactly. The Label property is
case sensitive.
You can use a regular expression as the value of Label instead of a literal value.
You can match a null value by using the $null value in the Label property. For more
information about specifying a null value, see "Branching on Null and Empty
Values" on page 237.
You can designate a default step for all unmatched cases by using the $default
value in the Label property. For more information about using the $default setting,
see "Specifying a Default Step" on page 239.
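The matching rules above can be summarized in Python (a conceptual analogy of switch-value branching, not actual flow code; the target step names are hypothetical):

```python
def branch_on_switch(switch_value, targets):
    """Return the target whose label matches the switch value exactly
    (case-sensitive). A missing or null switch matches the $null label,
    an empty string matches the blank label, and anything unmatched
    falls back to $default if one is defined."""
    key = "$null" if switch_value is None else switch_value
    if key in targets:
        return targets[key]
    if "$default" in targets:
        return targets["$default"]
    return None  # no match: execution falls through the BRANCH

targets = {
    "CREDIT CARD": "processCreditCard",
    "CORP ACCT": "processCorpAccount",
    "$null": "failMissingType",   # switch variable null or absent
    "": "failEmptyType",          # blank label: empty-string match
    "$default": "processOther",
}
print(branch_on_switch("CORP ACCT", targets))  # processCorpAccount
print(branch_on_switch(None, targets))         # failMissingType
print(branch_on_switch("", targets))           # failEmptyType
print(branch_on_switch("CHECK", targets))      # processOther
```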
Branching on an Expression
When you branch on an expression, you assign an expression to each child of a branch
step. At run time, the BRANCH step evaluates the expressions assigned to the child
steps. It executes the first child step with an expression that evaluates to true.
To branch on an expression
1. Create a list of the conditional steps (target steps) and make them children of the
BRANCH step.
2. In the Properties view for the BRANCH step, set Evaluate labels to True.
3. In the Label property of each target, specify the expression that, when true, will
cause the target step to execute. The expressions you create can include multiple
variables and can specify a range of values for variables. Use the syntax provided by
webMethods to create the expression. For more information about expression syntax,
see "Conditional Expressions" on page 1189.
Keep in mind that only one child of a BRANCH step is executed: the first target step
whose label contains an expression that evaluates to true. If none of the expressions
evaluate to true, none of the child steps are invoked, and execution falls through to
the next step in the flow service. You can use the $default value in the Label property
to designate a default step for cases where no expressions evaluate to true. For more
information about using the $default value, see "Specifying a Default Step" on page
239.
Important: The expressions you create for the children of a BRANCH step need to be
mutually exclusive (only one condition should evaluate to true at run time).
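The first-true-wins evaluation described above can be sketched in Python (a conceptual analogy, not the product's expression syntax; the predicates and step names are hypothetical):

```python
def branch_on_expressions(pipeline, targets):
    """Evaluate each child's label expression in order and return the
    first step whose expression is true; otherwise fall back to the
    $default step, which is always considered last."""
    default = None
    for predicate, step in targets:
        if predicate == "$default":
            default = step
            continue
        if predicate(pipeline):
            return step
    return default  # may be None: execution falls through the BRANCH

targets = [
    (lambda p: p["total"] > 1000 and p["type"] == "CORP ACCT", "largeCorpOrder"),
    (lambda p: p["total"] <= 1000, "smallOrder"),
    ("$default", "reviewManually"),
]
print(branch_on_expressions({"total": 1500, "type": "CORP ACCT"}, targets))
# largeCorpOrder
print(branch_on_expressions({"total": 500, "type": "CREDIT CARD"}, targets))
# smallOrder
print(branch_on_expressions({"total": 1500, "type": "CREDIT CARD"}, targets))
# reviewManually
```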
A switch variable might be null or might contain an empty (zero-length) string. To
branch on null or empty values, set the Label property for the target step as follows.
A null value Set the Label property to $null. At run time, the BRANCH step
executes the target step with the $null label if the switch variable is
explicitly set to null or does not exist in the pipeline.
You can use $null with any type of switch variable.
An empty string Leave the Label property blank (empty). At run time, the
BRANCH step executes the target step with no label if the switch
variable is present, but contains no characters.
You can use an empty value only when the switch variable is of
type String.
Important: If you branch on expressions (Evaluate labels is set to True), you cannot branch
on null or empty values. When executing the BRANCH step and evaluating
labels, Integration Server ignores target steps with a blank or $null label.
The following example shows a BRANCH step used to authorize a credit card number
based on the buyer’s credit card type (CreditCardType ). It contains three target steps. The
first target step handles situations where the value of CreditCardType is null or where
CreditCardType does not exist in the pipeline. The second target step handles cases
where the value of CreditCardType is an empty string. (Note that the first two target
steps are EXIT steps that will return a failure condition when executed.) The third target
step has the $default label, and will process all specified credit card types.
BRANCH that contains target steps to match null values or empty strings
Important: You can only have one default target step for a BRANCH step. Designer
always evaluates the default step last. The default step does not need to be
the last child of the BRANCH step.
The SEQUENCE step that you use as a target for a BRANCH can contain any valid flow
step, including additional BRANCH steps. For additional information about building a
SEQUENCE, see "The SEQUENCE Step" on page 251.
4. Insert the conditional steps that belong to the BRANCH (that is, its children) using
the following steps:
a. Insert a flow step by clicking the button next to on the flow service editor
toolbar and clicking the required flow step.
b. Indent the flow step using on the flow service editor toolbar to make it a child
of the BRANCH step.
c. In the Label property on the Properties view, specify the switch value that will
cause this step to execute at run time.
To match... Specify...
Any unmatched value (that is, execute the step if the value does not match any other label)    $default
Important: If you are branching on expressions, make sure the expressions you
assign to the target steps are mutually exclusive. In addition, do not
use null or empty values as labels when branching on expressions.
The BRANCH step ignores target steps with a $null label or blank
label.
FAILURE Re-executes the set of child steps if any step in the set fails.
SUCCESS Re-executes the set of child steps if all steps in the set
complete successfully.
Important: Note that children of a REPEAT always execute at least once. The Count
property specifies the maximum number of times the children can be re-
executed. At the end of an iteration, the server checks to see whether the
condition (that is, failure or success) for repeating is satisfied. If the condition
is true and the Count is not met, the children are executed again. This process
continues until the repeat condition is false or Count is met, whichever occurs
first. (In other words, the maximum number of times that children of a
REPEAT will execute when Count is > -1, is Count+1.)
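The Count arithmetic described in the note above (one initial execution plus up to Count re-executions) can be sketched in Python (a conceptual analogy of REPEAT with Repeat on set to FAILURE, not actual flow code; the step names are hypothetical):

```python
def repeat_on_failure(children, count):
    """Run the child steps once, then re-run the whole set after a
    failure at most `count` more times; with count > -1 the children
    execute at most count + 1 times. Returns the number of executions
    on success; re-raises the failure when the limit is reached."""
    executions = 0
    while True:
        executions += 1
        try:
            for step in children:
                step()         # a failing step aborts this iteration
            return executions  # all children succeeded
        except RuntimeError:
            if count >= 0 and executions > count:
                raise          # retry limit reached: REPEAT fails

# A hypothetical step that fails twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")

print(repeat_on_failure([flaky], count=5))  # 3 (succeeded on third try)
```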
If the REPEAT step is a child of another flow step, the failure is propagated to its parent.
The REPEAT step immediately exits a set of children at the point of failure (that is, if
the second child in a set of three fails, the third child is not executed).
When Repeat on is set to FAILURE, the failure of a child within a REPEAT step does
not cause the REPEAT step itself to fail unless the Count limit is also reached.
The Timeout property for the REPEAT step specifies the amount of time in which the
entire REPEAT step, including all of its possible iterations, must complete. When
you use REPEAT to retry on failure, you may want to leave the Timeout value at 0
(no limit) or set it to a very high value. You can also set the property to the value
of a pipeline variable by typing the name of the variable between % symbols. The
variable you specify must be a String.
As a developer, you must be thoroughly familiar with the processes you include
within a REPEAT step. Make certain that the child steps you specify can safely be
repeated in the event that a failure occurs. You don’t want to use REPEAT if there is
the possibility that a single action, such as accepting an order or crediting an account
balance, could be applied twice.
Important:
If you use this step as a target for a BRANCH or EXIT
step, you must specify a value in the Label property.
For more information about the BRANCH and EXIT
steps, see "The BRANCH Step" on page 233 or "The
EXIT Step" on page 258.
Repeat interval The length of time (in seconds) that you want the server to
wait between iterations of the children.
If you want to use the value of a pipeline variable for this
property, type the variable name between % symbols (for
example, %waittime%). The variable you specify must be a
String.
Repeat on FAILURE
4. Beneath the REPEAT step, use the following steps to insert each step that you want
to repeat:
a. Insert a flow step using the buttons on the flow service editor toolbar.
b. Indent that flow step using on the flow service editor toolbar. (Make it a child
of the REPEAT step.)
c. Set the properties for the child step as needed.
5. Click File > Save.
Important:
If you use this step as a target for a BRANCH or EXIT
step, you must specify a value in the Label property.
For more information about the BRANCH and EXIT
steps, see "The BRANCH Step" on page 233 or "The
EXIT Step" on page 258.
Repeat interval The length of time (in seconds) that you want the server to
wait between iterations of the children.
If you want to use the value of a pipeline variable for this
property, type the variable name between % symbols (for
example, %waittime%). The variable you specify must be a
String.
Repeat on SUCCESS
4. Beneath the REPEAT step, use the following steps to insert each step that you want
to repeat:
a. Insert a flow step using the buttons on the flow service editor toolbar.
b. Indent that flow step using on the editor toolbar to make it a child of the
REPEAT step.
c. Set the properties for the child step as needed.
5. Click File > Save.
SUCCESS Exit the sequence when any step in the SEQUENCE succeeds.
Execution continues with the next step in the flow service.
Exiting upon success is useful for building a set of alternative
steps that are each attempted at run time. Once one of the
members of the set runs successfully, the remaining steps in the
SEQUENCE are skipped.
If a child step in a SEQUENCE configured to exit on success fails,
any changes that the child step made to the pipeline are rolled
back (undone), and processing continues with the next child step
in the SEQUENCE.
Note: Rollback operations are performed on the first level of the pipeline only. That
is, first-level variables are restored to the values they had before the step
failed, but the server does not roll back changes to any documents to which
the first-level variables refer.
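The try-alternatives behavior, including the first-level-only rollback from the note above, can be sketched in Python (a conceptual analogy of a SEQUENCE with Exit on set to SUCCESS, not actual flow code; the transport steps are hypothetical):

```python
def sequence_exit_on_success(pipeline, alternatives):
    """Try each alternative in order and stop at the first one that
    succeeds. Before each attempt, snapshot the first level of the
    pipeline so a failed step's changes can be rolled back. The copy
    is shallow, so nested documents the values refer to are NOT
    restored, mirroring the note above."""
    for step in alternatives:
        snapshot = dict(pipeline)      # first-level (shallow) copy only
        try:
            step(pipeline)
            return True                # success: skip remaining steps
        except RuntimeError:
            pipeline.clear()
            pipeline.update(snapshot)  # roll back first-level changes
    return False                       # every alternative failed

def primary(p):
    p["route"] = "primary"
    raise RuntimeError("primary transport down")

def backup(p):
    p["route"] = "backup"

p = {}
print(sequence_exit_on_success(p, [primary, backup]))  # True
print(p["route"])  # backup (primary's change was rolled back)
```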
You may include any valid flow step within the body of a LOOP, including additional
LOOP steps. The following example shows a pair of nested LOOPs. Note how the
indentation of the steps determines the LOOP to which they belong.
LOOP properties
When you design your flow, remember that because the services within the loop operate
against individual elements in the specified input array, they must be designed to take
elements of the array as input, not the entire array.
For example, if your LOOP executes against a document list called LineItems that
contains children called Item , Qty , and UnitPrice , you would specify LineItems as the
Input array for the LOOP step, but services within the loop would take the individual
elements of LineItems (for example, Item , Qty , UnitPrice , and so forth) as input.
Note: The LOOP step is not thread-safe when the input array is a child of another
variable (for example, a String list that is a child of a Document). Because
the LOOP step changes the dimensionality of the input and output arrays
during execution of the step, any threads invoking services that access the
parent variable can experience the input array variable as either an array or a
scalar. This results in unpredictable behavior for threads accessing the parent
variable.
If the input array is a top-level variable in the pipeline, any thread that
accesses the pipeline object (IData) for the service containing the LOOP step
might also experience unpredictable behavior. Consequently, do not code
other services that might concurrently access the object, such as a document,
document list, or pipeline, that contains the input array. For information
about the changes in dimensionality of inputs in a LOOP step, see "About the
Pipeline for a LOOP Step" on page 255.
The field used as the output array is also reduced dimensionally within the body of a
LOOP step. While the LOOP step produces an array, each iteration of the LOOP step
produces one element in the array. If the output array is a String list, within the body of
the LOOP it is a String. If the output array is a String table, within the body of the LOOP
the output is a String list.
In the following example, the LOOP step executes the pub.math:addInts service for each
item in the input array named myInputList . The LOOP step collects the output into an
array named myOutputList . Inside the LOOP step, the pub.math:addInts service operates
on one element of myInputList and produces one element of myOutputList . That is,
the pub.math:addInts service takes a String as input and produces a String as output.
Consequently, in the pub.math:addInts service pipeline, the input is a String named
myInputList and the output is a String named myOutputList . If you viewed the pipeline
after the LOOP step completes, myInputList and myOutputList would appear as String
lists.
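The change in dimensionality described above can be sketched in Python (a conceptual analogy, not actual flow code; the add_ints body is an illustrative stand-in and does not reflect the real input signature of pub.math:addInts):

```python
def loop_step(pipeline, input_array, output_array, body):
    """Run `body` once per element of the input array, exposing a
    scalar element under the input array's name inside the loop and
    collecting each scalar result under the output array's name."""
    results = []
    for element in pipeline[input_array]:
        scoped = dict(pipeline)
        scoped[input_array] = element   # inside the LOOP: one element
        body(scoped)
        results.append(scoped[output_array])
    pipeline[output_array] = results    # after the LOOP: an array again

# Hypothetical body: inside the LOOP, myInputList is a single String.
def add_ints(p):
    a, b = p["myInputList"].split("+")
    p["myOutputList"] = str(int(a) + int(b))

pipe = {"myInputList": ["1+2", "3+4"]}
loop_step(pipe, "myInputList", "myOutputList", add_ints)
print(pipe["myOutputList"])  # ['3', '7']
```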
Important:
If you use this step as a target for a BRANCH or
EXIT step, you must specify a value in the Label
property. For more information about the BRANCH
and EXIT steps, see "The BRANCH Step" on page
233 or "The EXIT Step" on page 258.
Input array The name of the array variable on which the LOOP will
operate. This variable must be one of the following types:
String list, String table, Document list, or Object list.
Output array The name of the element that you want the server to collect
each time the LOOP executes. You do not need to specify
Important: When you build a LOOP step, make sure that you specify the output array
variable in the LOOP Output array property before creating a link to the output
array variable within a MAP or INVOKE step in the body of the LOOP.
If you specify the output array variable after creating a link to it, the link
will fail at run time. You can debug the step in Designer to see if the link
succeeds. If the link fails, delete the link to the output array variable and then
recreate it.
The following flow service contains two EXIT steps that, if executed, will exit the nearest
ancestor LOOP step. If the value of CreditCardType is null or an empty string, the
matching EXIT step executes and exits the LOOP over the '/PurchaseOrdersList' step.
Use the EXIT step to exit the nearest ancestor LOOP step
Important:
If you use this step as a target for a BRANCH step, you
must specify a value in the Label property. For more
information about the BRANCH step, see "The BRANCH
Step" on page 233.
Exit from The flow step from which you want to exit. Specify one of the
following:
Note: If the label you specify does not match the label
of an ancestor flow step, the flow will exit with
an exception.
Specify To...
Failure message The text of the exception message you want to display. If you
want to use the value of a pipeline variable for this property,
type the variable name between % symbols (for example,
%mymessage%). The variable you specify must be a String.
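The %variable% substitution that properties such as Failure message, Timeout, and Repeat interval accept can be sketched in Python (a conceptual analogy of the documented behavior, not the product's implementation):

```python
import re

def substitute_pipeline_vars(text, pipeline):
    """Replace each %name% reference in a property value with the
    String pipeline variable of that name."""
    def lookup(match):
        return str(pipeline[match.group(1)])
    return re.sub(r"%([^%]+)%", lookup, text)

pipeline = {"mymessage": "credit card type is missing"}
print(substitute_pipeline_vars("%mymessage%", pipeline))
# credit card type is missing
```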
Tip: The MAP step is especially useful for hard coding an initial set of input values
in a flow service. To use it in this way, insert the MAP step at the beginning
of your flow, and then use the Set Value modifier to assign values to the
appropriate variables in Pipeline Out.
For more information about the MAP step, see "Mapping Data in Flow Services" on page
271.
The Layout tab is a graphical view of a flow service that Designer displays in the flow
service editor. You use the Layout tab to create flow services.
Note: Designer uses the Tree tab as the default tab for building and viewing flow
services. For this reason, unless specifically stated otherwise, the procedures
in the webMethods Service Development Help are written for working in the Tree
tab in the flow service editor.
executes steps in that order. (In tree view, Designer evaluates the target steps from top to
bottom.)
The following illustration identifies the basic elements of a flow service in the Layout
tab.
Designer automatically inserts the start and end symbols when you create a flow service.
When you insert a step into a flow service, Designer automatically draws the lines
connecting the flow step to the rest of the steps in the service.
Note: Designer automatically draws, redraws, and deletes lines when you insert,
move, or delete steps in a flow service. You cannot move or delete lines.
Tip: When you move the mouse pointer over any flow step box in the Layout tab,
the properties for the step appear in a tool tip.
Each box also displays an additional property that is relevant to the flow step type, such
as Input array for LOOP and Switch for BRANCH.
Each box that contains a flow step displays properties for the step, such as Label and
Comments. The following table indicates which property is shown for each flow step.
BRANCH Switch specifies the name of the variable whose value causes the
execution of one of the BRANCH step's children at run time. If
you branch on expressions, this property is blank.
INVOKE Service specifies the name of the service that is invoked at run
time.
LOOP Input array specifies the name of the array against which the
selected LOOP step will run. Type the name of this variable
exactly as it will appear in the pipeline at run time.
MAP Label
REPEAT Repeat on
SEQUENCE none
Basic elements of a step that contains child steps in the Layout tab
The following table identifies the buttons and icons that you can use when building a
flow step that contains child steps.
Button Description
Displays the step or child step in the editor while hiding the rest of
the flow service. Use this button to view and edit the step or child
step in isolation.
Displays the previous view of the flow step or child step in the
editor. Use this button to navigate back one level in the step or child
step.
Grid. You can also use the Flow Service Editor preferences page to enable the grid and to
customize grid line settings.
Tip: You might find it easier to build services in the Layout tab if you have a larger
view of the flow service. Use the zoom in and zoom out buttons on the Palette view to
zoom in on or zoom out of the flow service.
which steps execute. You can also relocate a step to make it a child of another step in the
flow service.
Because systems rarely produce data in the exact format that other systems need,
you commonly need to build flow services that perform data transformations. Data
transformation resolves differences in the way data is represented within documents
that applications and systems exchange. In Designer, data transformations can be
accomplished by mapping data. By mapping, you can accomplish the following types of
transformations:
Name transformations. This type of transformation resolves differences in the way
data is named. For example, one service or document format might use telephone as
the name of the variable for telephone number information and another might use
phoneNumber . When you perform name transformations, the value and position of
a variable in the document structure remains the same, but the name of the variable
changes.
Structural transformations. This type of transformation resolves differences in the
data type or structure used to represent a data item. For example, one service or
document format might put the telephone number in a String called telephone , and
the next may expect to find it nested in a Document named customerInfo . When you
perform structural transformations, the value of the variable remains the same, but
the data type or position of the variable in the Document structure changes.
Value transformations. This type of transformation resolves differences in the way
values are expressed (for example, when systems use different notations for values
such as standard codes, units of currency, dates, or weights and measures). When
you perform value transformations, the name and position of the variable remain the
same, but the data contained in the variable changes. For example, you can change
the format of a date, concatenate two Strings, or add the values of two variables
together.
When you build flow services or convert between document formats, you may need to
perform one, two, or all of the above types of data transformation. The webMethods
flow language provides two ways for you to accomplish data transformations between
services and document formats in the pipeline: you can map variables to each other
(create links) or you can insert transformers, which are services invoked within a MAP
step.
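A single mapping often performs all three kinds of transformation at once. The sketch below shows one record undergoing a name, a structural, and a value transformation in Python (a conceptual analogy, not actual flow code; all field names are hypothetical):

```python
def map_customer(source):
    """Transform one record three ways:
    - name: telephone becomes phoneNumber
    - structural: the flat String is nested in a customerInfo document
    - value: the date is reformatted from MM/DD/YYYY to YYYY-MM-DD"""
    month, day, year = source["orderDate"].split("/")
    return {
        "customerInfo": {
            "phoneNumber": source["telephone"],        # name + structure
        },
        "orderDate": "%s-%s-%s" % (year, month, day),  # value
    }

result = map_customer({"telephone": "555-0100", "orderDate": "10/31/2017"})
print(result["customerInfo"]["phoneNumber"])  # 555-0100
print(result["orderDate"])                    # 2017-10-31
```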
This stage...    Represents...
1 The expected state of the pipeline just before the selected service
executes.
Pipeline In depicts the set of variables that are expected to be in the
pipeline before the service executes (based on the declared input and
output parameters of the preceding services).
Service In depicts the set of variables the selected service expects as
input (as defined by its input parameters).
In the Pipeline view, you can insert “pipeline modifiers” at this
stage to adjust the contents of the pipeline to suit the requirements
of the service. For example, you can link variables, assign values
to variables, drop variables from the pipeline, or add variables to
the pipeline. Modifications that you specify during this stage are
performed immediately before the service executes at run time.
2 The expected state of the pipeline just after the service executes.
Service Out depicts the set of variables that the selected service
produces as output (as defined by its output parameters).
Pipeline Out depicts the set of variables that are expected to be in the
pipeline after the service executes. It represents the set of variables
that will be available to the next service in the flow. If the selected
service (INVOKE step) is the last step in the flow service, Pipeline Out
displays the output variables for the flow service (as declared on the
Input/Output tab).
In the Pipeline view, you can insert “pipeline modifiers” at this stage
to adjust the contents of the pipeline. For example, you can link
variables, assign values to variables, drop variables from the pipeline,
or add variables to the pipeline. Modifications that you specify during
this stage are performed immediately after the service executes at run
time.
Note: Designer displays small symbols next to a variable icon to indicate validation
constraints. Designer uses to indicate an optional variable. Designer uses the
‡ symbol to denote a variable with a content constraint. Designer also uses
to indicate that the variable has a default value assigned to it that can be
overridden, and to indicate that the variable has a null value assigned to it
that cannot be overridden. A combination of the and symbols next to
a variable icon indicates that the variable has a fixed default value that is not
null and cannot be overridden. For information about applying constraints to
variables, see "About Variable Constraints" on page 647.
The Pipeline In column represents input to the MAP step. It contains the names of all
of the variables in the pipeline at this point in the flow.
The Transformers column displays any services inserted in the MAP step to complete
value transformations. For more information about invoking services in a MAP step,
see "Working with Transformers" on page 302.
The Pipeline Out column represents the output of the MAP step. It contains the names
of variables that will be available in the pipeline when the MAP step completes.
When you first insert a MAP step into your flow, Pipeline In and Pipeline Out are identical.
However, if the MAP step is the only step in the flow service or is the last step in the
flow service, Pipeline Out also displays the variables declared as output in the flow
service.
Tip: While scrolling through a large amount of data, if you do not want
Designer to display links when the source or target variables are not
visible, right-click anywhere inside the Pipeline view and select Hide links if
variables are not visible.
Note: You can also use the Flow Service Editor preferences page to view or hide the
full namespace path of the referenced document types in Pipeline view.
To view or hide the full namespace path of the referenced document types in Pipeline view
Right-click anywhere inside the Pipeline view and select Show Referenced Document
Type Name.
3. Scroll or resize the Pipeline view to display the portion of the pipeline you want to
view as HTML.
4. Right-click anywhere inside the Pipeline view and click View as HTML.
Designer creates an HTML page and displays it in your default browser.
5. Use your browser's print command to print the pipeline.
Note: The Pipeline view does not display implicit links for a MAP step.
In cases where the services in a flow do not use the same names for a piece of
information, use the Pipeline view to explicitly link the variables to each other. Explicit
linking is how you accomplish name and structure transformations required in a flow.
Designer connects explicitly linked variables with a solid black line.
On the input side of the Pipeline view, use to link a variable from the pipeline to the
service. In the following example, the service expects a value called OrderTotal , which is
equivalent to the pipeline variable BuyersTotal (that is, they are simply different names
for the same data). To use the value of BuyersTotal as the value for OrderTotal , you
“link” the pipeline variable to the service using .
At run time, the server will copy the value from the source variable (BuyersTotal ) to the
target variable (OrderTotal ) before executing the service.
Important: Do not link variables with different Object constraints. If you link variables
with different Object constraints and input/output validation is selected, the
run-time result is undefined.
All the output variables that a service produces are automatically placed in the pipeline.
Just as you can link variables from the Pipeline In stage to a service’s input variables, you
can link the output from a service to a different variable in Pipeline Out.
In the following example, a variable called TransNumber is linked to the field Num
in a Document called TransactionRecord . At run time, the server will copy the value of
TransNumber to Num , and both TransNumber and Num will be available to subsequent
services in the flow.
in the Pipeline" on page 287. For more information about placing conditions on
links between variables, see "Linking Variables Conditionally" on page 293.
You cannot create a link to a variable if you have already assigned a value to that
variable. After a link executes, both the source and target variables exist in the
pipeline. The target variable does not replace the source variable.
You cannot create a link to a variable if the variable has a fixed null or default value
assigned to it. Designer uses the symbol next to the variable icon to indicate
that the variable has a fixed value that you cannot override by linking it to another
variable.
Tip: You can also use your mouse to link variables to one another. To do this, select
the source variable and drag your mouse to the appropriate target variable.
Step 3: The value of String1 is changed to “modified” after the link executes
In Step 3, the value of the String1 in Document1 was set to “modified.” However, the
value of String1 in Document2 changed also. This is because in Step 2 of the flow service,
the value of Document1 was copied to Document2 by reference. Changes to the value of
Document1 in later flow steps also change the value of Document2.
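The copy-by-reference behavior described above can be illustrated in Python, using a dict to stand in for a Document (a conceptual analogy, not actual flow code):

```python
# Documents are linked by reference: after the link, both names point
# at the same underlying object, so a later change made through either
# name is visible through both.
document1 = {"String1": "original"}
document2 = document1           # the link copies the reference only

document1["String1"] = "modified"
print(document2["String1"])     # modified: both names see the change
```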
A Document (or a Document List) and its children cannot both be targets. After a
Document or Document List is the target of a link, its children cannot be the targets
of links.
After the child variable of a Document or Document List is the target of a link, the
parent Document or Document List cannot be a target of a link.
If you link from a Document variable to another Document variable, the structure
of the source Document variable overwrites the structure of the target Document
variable.
You cannot link a nested Document List to a target Document List when the
Document Lists have different sizes. A nested Document List is one that is contained
within a parent Document List. Document Lists are considered to have different
sizes when they have a different number of entries within the lists. If you need to
move values from the source Document List to the target, create user code that uses a
LOOP flow step to assign values from the source to the target one by one.
When a Document Reference or Document Reference List refers to an IS document
type that contains identically named variables that are of the same data type and
both identically named variables are assigned a value or are linked to another
variable, Integration Server might not maintain the order of the document contents
in the pipeline when the service executes. For example, Integration Server might
group all of the identical variables at the end of the document. To prevent the change
in the order of document contents, set default values for the identically named
variables. To do this, insert a MAP step in the service before the step in which you
want to link or assign a value to the variables. In the MAP step, under Pipeline Out,
select the Document Reference variable and click the assign-value button on the
Pipeline view toolbar. In the Enter Input for dialog box, assign default values to the
identically named variables.
For example, suppose the pipeline contained two String List variables named aList and
bList, and documentList had two String children named aString and bString. You could
combine the two String Lists by linking aList to aString and bList to bString.
Tip: You can also convert a String List to a Document List (IData[ ] object) by
invoking the built-in service pub.list:stringListToDocumentList. You can insert the
service as an INVOKE step or as a transformer. For more information about
transformers, see "Working with Transformers" on page 302. For more
information about built-in services, see the webMethods Integration Server Built-
In Services Reference.
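The combining of two parallel String Lists can be sketched in plain Java. This is an illustration only, assuming the aList/bList/aString/bString names from the example above; Maps stand in for IData documents, and the combine method is hypothetical, not the pub.list:stringListToDocumentList service itself.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CombineListsSketch {
    // Combine two parallel String Lists into one list of two-field
    // documents: element i of aList becomes aString, and element i of
    // bList becomes bString, in document i of the result.
    public static List<Map<String, String>> combine(String[] aList, String[] bList) {
        List<Map<String, String>> documentList = new ArrayList<>();
        int n = Math.min(aList.length, bList.length);
        for (int i = 0; i < n; i++) {
            Map<String, String> doc = new LinkedHashMap<>();
            doc.put("aString", aList[i]);
            doc.put("bString", bList[i]);
            documentList.add(doc);
        }
        return documentList;
    }
}
```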
Document. To map the information in the String List to a Document, create a link
between the String List and each field in the Document. Then, specify an index value for
each link. In the following pipeline, the elements in buyerAddress String List are mapped
to the address Document.
You can specify an index value when linking to or from an array variable
Note: Designer uses blue links in the Pipeline view to indicate that properties
(conditions or index values for arrays) have been applied to the link between
variables.
Each element in an array can be the source or target of a link; that is, each element in
the array can be the start or end of a link. For example, if a source String List variable
contains three elements, you can link each of the three elements to a target variable.
If the source and target variables are arrays, you can specify an index for each
variable. For example, you can link the third element in a source String List to the
fifth element in the target String List.
If you do not specify an array index for an element when linking to or from arrays,
the default behavior of the Pipeline view will be used. For information about the
default behavior of the Pipeline view, see "Default Pipeline Rules for Linking to and
from Array Variables" on page 290.
If you are linking to or from a String Table, you need to specify an index value for the
row and column.
When you link a Document or Document List variable to another Document
or Document List variable, the structure of the source variable determines the
structure of the target variable. For more information, see "Linking to Document and
Document List Variables" on page 284.
At run time, the link (copy) fails if the source array index contains a null value or
if you specify an invalid source or target index (such as a letter or other non-numeric
character). Integration Server generates journal log messages (at debug level 6 or
higher) when links to or from array variables fail.
The following procedure explains how to link to or from an array variable.
Tip: You can also open the Link Indices dialog box by selecting the link between
the variables and clicking the appropriate button on the Pipeline view toolbar.
If the source is a scalar variable and the target is an array variable that is empty
(the variable does not have a defined length), the link defines the length of the
array variable; that is, it contains one element and has a length of one. The first
(and only) element in the array is assigned the value of the scalar variable.
If the source is an array variable and the target is an array variable that has a
defined length, the length of the source array variable must equal the length of the
target array variable. If the lengths do not match, the link will not occur. If the
lengths are equal, the elements in the target array variable are assigned the values
of the corresponding elements in the source array variable.
No link occurs.
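These linking rules can be sketched in plain Java. This is an illustration only: Integration Server links IData pipeline variables, not Java arrays, and the method names here are hypothetical.

```java
public class ArrayLinkRules {
    // Linking a scalar to an empty (undefined-length) array: the link
    // defines the array as a one-element array holding the scalar value.
    public static String[] linkScalarToEmptyArray(String scalar) {
        return new String[] { scalar };
    }

    // Linking an array to an array with a defined length: the lengths
    // must be equal, or the link does not occur (modeled here by
    // returning the target unchanged).
    public static String[] linkArrayToArray(String[] source, String[] target) {
        if (source.length != target.length) {
            return target; // lengths differ: no copy takes place
        }
        System.arraycopy(source, 0, target, 0, source.length);
        return target;
    }
}
```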
A source variable that is the child of a Document List is treated like an array because
there is one value of the source variable for each Document in the Document List. For
example:
Tip: You can also delete a link by selecting it and then pressing the DELETE key.
that is not null. After you link the two variables, you would edit the link properties
and add the condition that needs to be true.
A blue link indicates that a condition is applied to the link connecting the variables
Designer uses a blue link in the Pipeline view to indicate that properties (that is,
conditions or index values for arrays) have been applied to a link between variables.
Note: You cannot add conditions to the links between implicitly linked variables.
Tip: If the conditions for links to the same target variable are not mutually
exclusive, consider using a flow service containing a BRANCH step instead.
In BRANCH steps, child steps are evaluated in a top to bottom sequence.
Integration Server executes the first child step that evaluates to true and skips
the remaining child steps. For more information about the BRANCH step, see
"The BRANCH Step" on page 233.
Notes:
The Include empty values for String Type check box is disabled when assigning values
to pipeline variables of type String, String List, String Table, Document, Object, and
Object List. It is available only when assigning values to Document List variables.
For more information, see "Specifying Values for a Document List Variable" on page
449.
The check boxes next to each element in the tree are disabled when assigning values
to pipeline variables of type String, String List, String Table, Document, Object, and
Object List. The check box is only enabled for top-level Document variables within
a Document List and is used along with the Include empty values for String Type check
box. For more information, see "Specifying Values for a Document List Variable" on
page 449.
The Perform pipeline variable substitution check box indicates whether you want
Integration Server to perform pipeline variable substitution at run time. To use a
variable when assigning a String value, you type the name of the pipeline variable
enclosed in % symbols (for example, %Phone%). If you specify a pipeline variable
enclosed in % symbols for a String value, you must select the Perform pipeline variable
substitution check box for the variable substitution to occur.
The Perform global variable substitution check box indicates whether you want
Integration Server to perform global variable substitution at run time. To use a global
variable when assigning a String value, you type the name of the global variable
enclosed in % symbols (for example, %myFTPUsername%). If you specify a global
variable enclosed in % symbols for a String value, you must select the Perform global
variable substitution check box for the variable substitution to occur.
If a pipeline variable and global variable have the same name and you select both
Perform global variable substitution and Perform pipeline variable substitution, Integration
Server uses the value of the pipeline variable.
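The %variable% substitution and its precedence rule (pipeline variables win over global variables of the same name) can be sketched as follows. This is a hypothetical plain-Java model of the documented behavior, not Integration Server's actual implementation.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubstitutionSketch {
    private static final Pattern TOKEN = Pattern.compile("%([^%]+)%");

    // Resolve %name% tokens, checking pipeline variables first and global
    // variables second, mirroring the documented precedence when both
    // substitution check boxes are selected.
    public static String substitute(String value, Map<String, String> pipeline,
                                    Map<String, String> globals) {
        Matcher m = TOKEN.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String name = m.group(1);
            String replacement = pipeline.containsKey(name)
                    ? pipeline.get(name) : globals.get(name);
            if (replacement == null) {
                // an undefined variable causes a run-time exception,
                // as the documentation describes for global variables
                throw new IllegalStateException("Undefined variable: " + name);
            }
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

For example, with a pipeline value for Phone and a global value for areaCode, the literal template (%areaCode%) %Phone% keeps its parentheses and space, just as described below.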
The Overwrite pipeline value check box indicates whether you want the server to use
the value you specify even when the variable has a pipeline value at run time.
Select the check box to have Integration Server always use the value you specify.
Clear the check box if you want Integration Server to use the value you specify
only if the variable does not contain a value at run time.
You must select the Perform global variable substitution check box for the variable
substitution to occur at run time.
If the specified global variable has the same name as a pipeline variable and you
select both the Perform global variable substitution check box and the Perform pipeline variable
substitution check box, Integration Server uses the value of the pipeline variable at run
time.
If the global variable that you specified for performing a variable substitution is not
defined in Integration Server, at run time Integration Server throws an exception and
service execution fails.
You can mix literal values and variables. For example, if you specify (%areaCode%)
%Phone%, the resulting String would be formatted to include the parentheses
and space. If you specify %firstName% %initial%. %lastName%, the period and
spacing would be included in the value.
For more information about defining global variables, see webMethods Integration Server
Administrator’s Guide.
5. If the variable is a Document or a Document List, add more variables to define its
contents. Then use the Pipeline view toolbar to indent each member variable beneath
the Document or Document List variable.
6. Do one of the following with the new variable:
Link the variable to another variable.
Assign a value to the variable using the Pipeline view toolbar.
Drop the variable.
pub.date: Transforms time and date information from one format to another.
For more information about built-in services, see the webMethods Integration Server Built-
In Services Reference.
Inserting a Transformer
When inserting transformers, keep the following points in mind:
Transformers can be inserted in a MAP step only.
Any service can be used as a transformer, including flow services, C services, and
Java services.
The transformers in a single MAP step operate on the same set of pipeline data.
Transformers in a MAP step are independent of each other and do not execute in a
specific order. As a result, the output of one transformer cannot be used as the input
of another transformer in the same MAP step.
Software AG recommends avoiding the use of a service as a transformer if the
service is subject to transient failures, such as connection failures, because such
services can be hard to debug when used as transformers.
To insert a transformer
1. In the flow service editor, select the MAP step in which you want to insert a
transformer.
2. In the Pipeline view, do one of the following:
Click the arrow adjacent to the transformer button on the Pipeline view toolbar
and select the service you want to use as a transformer. If the service you want
to insert does not appear in the list, click Browse to select a service on Integration
Server.
In the Palette view that is located within the Pipeline view, select the folder
containing the service you want to add as a transformer. Select the service and
click in the Transformers area of Pipeline view.
In Package Navigator view, select the service you want to use as a transformer
and drag it to the Transformers area of Pipeline view.
3. To set properties for the transformer, select it and then specify the following
information in the Properties view:
Service The fully qualified name of the service that will be invoked
at run time as a transformer. When you insert a transformer,
Designer automatically assigns the name of that service
to the service property. If you want to change the service
that is invoked by a transformer, specify the service’s fully
qualified name in the folderName:serviceName format or use the
browse button to select a service from a list.
4. Link pipeline variables to the transformer variables. See " Linking Variables to a
Transformer" on page 304.
Designer does not automatically add the output of a transformer to the pipeline. If
you want the output of a transformer to appear in the pipeline, you need to explicitly
link the output variable to a Pipeline Out variable.
If you do not link any output variables or the transformer does not have any
declared output variables, the transformer service will not run.
You can link a transformer output variable to more than one Pipeline Out variable.
You can assign a value to a transformer input variable using the Pipeline view
toolbar.
To prevent the Pipeline view from becoming cluttered, the Pipeline view may not
display all the links between the transformer and the pipeline variables. To view all
the links, double-click the transformer or click the expand icon next to the
transformer name.
Use the following procedure to link pipeline and transformer variables when the
transformer is not expanded. If the transformer is expanded (that is, you can see all of
the input and output variables for the transformer), you link variables just as you would
for an INVOKE step.
Dimensionality refers to the number of arrays to which a variable belongs. For example,
the dimensionality of a single String is 0, that of a single String List or Document List is
1, and that of a single String Table is 2. A String that is a child of a Document List has a
dimensionality of 1. A String List that is a child of a Document List has a dimensionality
of 2.
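The dimensionality rules above can be illustrated with plain Java array types. This is an analogy only (Java arrays stand in for String Lists, String Tables, and Document List children); the helper method is hypothetical.

```java
public class DimensionalitySketch {
    // Dimensionality is the number of array levels a value sits inside:
    //   String       -> 0  (a single value)
    //   String[]     -> 1  (a String List, or a String child of a Document List)
    //   String[][]   -> 2  (a String Table, or a String List child of a Document List)
    public static int dimensionality(Object value) {
        int dims = 0;
        Class<?> c = value.getClass();
        while (c.isArray()) {
            dims++;
            c = c.getComponentType();
        }
        return dims;
    }
}
```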
Note: If the Validate input and/or Validate output check boxes are selected on the
Input/Output tab of the service acting as a transformer, Integration Server
automatically validates the input and/or output for the service every time the
service executes. If you set up validation via the properties for a transformer
when it is already set up for validation via the service’s Input/Output tab,
Integration Server performs validation twice. This can slow down the
execution of a transformer and, ultimately, the flow service.
Copying Transformers
You may want to use the same transformer more than once in a MAP step. For example,
you might want to convert all the dates in a purchase order to the same format. Instead
of inserting the service repeatedly, you can copy and paste the transformer service.
You can copy transformers between MAP steps in the same flow or MAP steps in
different flow services.
You can copy multiple transformers at a time.
Copying a transformer does not copy the links between transformer variables and
pipeline variables, or any values you might have assigned to transformer variables
using the Pipeline view toolbar.
To copy a transformer
1. In the flow service editor, select the MAP step containing the transformer service you
want to copy.
2. In the Pipeline view, under Transformers, select the transformer that you want to
copy. Right-click and select Copy.
3. Do one of the following:
To paste the transformer in the same MAP step, right-click anywhere under
Transformers and select Paste.
To paste the transformer in another MAP step, select that MAP step. In Pipeline
view, right-click anywhere under Transformers and select Paste.
4. Link the input and output variables of the transformer. See " Linking Variables to a
Transformer" on page 304.
Renaming Transformers
If Integration Server displays the message “Transformer not found” when you try to
expand a transformer or when you point the mouse to the transformer, then the service
referenced by the transformer has been renamed, moved, or deleted. You need to change
the Service property of the transformer so that the transformer points to the moved or
renamed service.
If the service referenced by the transformer has been deleted, you may want to delete the
transformer.
Tip: You can enable safeguards so that you do not inadvertently affect or break
other services when you move, rename, or delete a service. For more
information, see "Configuring Dependency Checking for Elements" on page
59.
To rename a transformer
1. Use Package Navigator view to determine the new name or location of the service
called by the transformer.
2. Open the flow service containing the transformer you want to rename.
3. In the flow service editor, select the MAP step containing the transformer. Then, in
Pipeline view, select the transformer you want to rename.
4. In the Service property in the Properties view, delete the old name and type the
service’s new fully qualified name in the folderName:serviceName format, or use the
browse button to select a service from a list.
Debugging Transformers
When you debug a flow service, you can use the following debugging techniques with
transformers:
Step into a MAP step and step through the execution of each transformer. For more
information about stepping into and out of a MAP step, see "Stepping Into and Out
of a MAP Step" on page 470.
Set a breakpoint on a transformer so that service execution stops when the
transformer is encountered. For more information about setting breakpoints, see
"Setting and Removing Breakpoints on Flow Step" on page 473.
Disable a transformer so that it does not execute at run time. For more information
about disabling transformers, see "Disabling and Enabling Flow Steps and
Transformers" on page 475.
3. Select the flow step to test, and switch to the Data Mapper view.
4. In the Mapping tab of the Data Mapper view, define the required pipeline variables
and transformers, and create links.
You can perform these operations in the Mapping tab as you would in the Pipeline
view. For more information, see the following sections:
"About Linking Variables" on page 277
"Adding Variables to the Pipeline" on page 301
"Working with Transformers" on page 302
5. In the Testing tab of the Data Mapper view, specify values for the input variables of
the flow step in the Input Value Creation area.
You can use either of the following approaches to specify the values:
In the Value column of the Input Value Creation area, type the required values
against the corresponding variable names listed in the Name column.
To load input values that match the structure of the flow service's input
signature from a file, click Load and select the appropriate file.
To load input values from a file and replace the flow service's input signature
with the structure and data types from the file, click Load and Replace and select
the appropriate file.
Note: For more information about loading input values from a file, see "Loading
Input Values" on page 453.
6. If you want to save the specified input values for later use, click Save.
Otherwise, directly go to Step 7.
7. To test the flow step based on the specified input values, click the run button.
Designer displays the results of the test in the Test Outcome area of the Data Mapper
view.
Depending on the results of a test execution, the Test Outcome area displays
information in the following tabs:
The Pipeline tab displays the contents of the pipeline when the test finishes
executing.
The Messages tab displays messages from Designer about the test and any
exception thrown during the execution.
Reduces multiple flow steps: Nested array mapping otherwise involves multiple flow
steps. Using ForEach mapping, you can create nested mappings in a single MAP
step.
Copy modes: When an input array is mapped to an output array, the value of the
output array is merged with the input array by default. Using ForEach mapping, you
can merge, overwrite, or append the values in the output array.
Multiple transformer invocation: ForEach mapping allows you to apply multiple
transformers directly to input array elements while mapping.
Note: For information about the syntax used in conditions, see "Conditional
Expressions" on page 1189.
Copy Mode: Specifies how to copy the elements from the input array to the output
array. The following copy modes are supported:
Append: Appends the source values to the output array.
Merge: Updates the existing output array with the source values. This is the default
mode.
Overwrite: Overwrites the output array with the source values.
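The three copy modes can be modeled with plain Java lists. This is a hypothetical sketch of the documented behavior on simple lists; the real feature operates on mapped array variables in a MAP step.

```java
import java.util.ArrayList;
import java.util.List;

public class CopyModesSketch {
    // Model of the three ForEach copy modes applied to a source and an
    // existing target (output) list.
    public static List<String> copy(List<String> source, List<String> target, String mode) {
        List<String> out = new ArrayList<>(target);
        switch (mode) {
            case "Append":    // add source values after the existing output values
                out.addAll(source);
                break;
            case "Merge":     // default: update existing positions, add any extras
                for (int i = 0; i < source.size(); i++) {
                    if (i < out.size()) out.set(i, source.get(i));
                    else out.add(source.get(i));
                }
                break;
            case "Overwrite": // discard the existing output values entirely
                out = new ArrayList<>(source);
                break;
        }
        return out;
    }
}
```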
Elements Selection: Specifies the indexes of the input array elements that participate
in the ForEach mapping. The first element has index 0. You can specify multiple
indexes using the "," delimiter. Consider the following example:
Note: When you specify both Elements Selection and Filter Input, ForEach first
selects elements based on Elements Selection and then applies the Filter Input
conditions to the selected elements.
2. In the Pipeline view of the ForEach mapping, a link is allowed only if the source
variable, target variable, or both the variables are part of the current ForEach
mapping array elements.
The following table identifies the possible cases in the Pipeline view of the ForEach
mapping.
3. In the Pipeline view of the ForEach mapping, you can build a nested ForEach
mapping only from those input array elements that are part of the parent ForEach
mapping.
Data validation is the process of verifying that run-time data conforms to a predefined
structure and format. Data validation also verifies that run-time data is a specific data
type and falls within a defined range of values.
By performing data validation, you can make sure that:
The pipeline, a document (IData object), or an XML document contains the data
needed to execute subsequent services. For example, if a service processes a
purchase order, you might want to verify that the purchase order contains a
customer name and address.
The data is in the structure expected by subsequent services. For example, a service
that processes a purchase order might expect the customer address to be a document
field with the following fields: name, address, city, state, and zip.
Data is of the type and within a value range expected by a service. For example, if a
service processes a purchase order, you might want to make sure that the purchase
order does not contain a negative quantity of an item (such as -5 shirts).
By using the data validation capabilities built into Integration Server, you can decide
whether or not to execute a service based on the validity of data. The validation
capabilities can also eliminate extra validation code from your services.
data type for a variable and the possible values for the variable at run time. For more
information, see "About Variable Constraints" on page 647.
Note: The declared input and output parameters for a service are sometimes called
the signature of the service.
You can specify that you want to perform input/output validation for a service in the
following ways:
Input/Output tab. Set properties on the Input/Output tab to instruct the validation engine
in Integration Server to validate the inputs and/or outputs of the service every time
the service executes. If a client calls the service and the inputs are invalid, the service
fails and does not execute.
INVOKE step properties. Set up input/output validation via the INVOKE step properties
to instruct the validation engine to validate the service input and/or output only
when it is called from within another flow service. At run time, if the inputs and/or
outputs of the service are invalid, the INVOKE flow step that calls the service fails.
To determine which method to use, decide whether or not you want the service input
and output values validated every time the service runs. If you want to validate the
input and output values every time the service runs, specify validation via the Input/
Output tab. For example, if your service requires certain input to exist or fall within a
specified range of values, you might want the pipeline validated every time the service
runs.
If the input and/or output values do not need to be validated every time the service
executes, set up validation via the INVOKE step properties. Specifying input/output
validation via the INVOKE step properties allows you to decide on a case-by-case basis
whether you want validation performed.
Note: If you specify input/output validation via the INVOKE step and an input
or output value is invalid, the service itself does not actually fail. The
validation engine validates input values before Integration Server executes
the service. If the service input is not valid, the INVOKE flow step for the
service fails. Similarly, the validation engine validates output values after
Integration Server executes the service. If the service output is not valid,
the INVOKE flow step for the service fails. Whether or not the entire flow
service fails when an individual flow step fails depends on the exit conditions
for the service. For information about specifying exit conditions, see "Using
SEQUENCE to Specify an Exit Condition" on page 251.
Important: Keep in mind that the Validate input and Validate output properties are
independent of any validation settings that you might have already set in the
service. If you select the Validate input and/or Validate output check boxes on
the Input/Output tab of the invoked service, Integration Server performs input/
output validation every time the service executes. If you also specify input/
output validation via the INVOKE step, duplicate validation will result,
possibly slowing down the execution of the service.
For example, suppose that you invoke the pub.client.ldap:search service in a flow to retrieve
an IData object from an LDAP directory service. If you want to validate that object
before you use it in other services, invoke the pub.schema:validate service after retrieving
the object. As another example, you might want to validate an XML document that
has been converted to a document (IData object). You would use the pub.schema:validate
service to validate the resulting document (IData object) against an IS document type.
The pub.schema:validate service considers a document (IData object) to be valid when it
complies with the structural and content constraints described in the IS document type
it is validated against. This service returns a string that indicates whether validation was
successful and an IData array that contains any validation errors. When you insert the
pub.schema:validate service into a flow service, you can specify the maximum number of
errors that the service can collect. You can also specify whether the pub.schema:validate
service should fail if the document (IData object) is invalid.
For more information about the pub.schema:validate service, see the webMethods Integration
Server Built-In Services Reference.
Note: The validation engine in Integration Server can perform document (IData
object) validation automatically when a document is published. For more
information, see "About Run-Time Validation for a Published Document" on
page 608.
Integration Server, such as IS document types, IS schemas, service signatures, and the
web service descriptor.
Note: When validating supplied XML, if the XML contains an element defined to
be of a simple type with a pattern constraining facet, by default Integration
Server uses Perl pattern matching to evaluate element content. However,
if watt.core.datatype.usejavaregex is set to true, during XML validation,
Integration Server uses the Java regular expression compiler and Integration
Server performs pattern matching as described by java.util.regex.Pattern.
IData validInput;
IData dtrResult;
.
.
.
// put the result from the xmlNodeToDocument service (that is, the object to
// be validated) into the key named <object>
IDataCursor validCursor = validInput.getCursor();
IDataCursor dtrCursor = dtrResult.getCursor();
if (dtrCursor.first("boundNode")) {
// assumption here that there's data at the current cursor position
validCursor.insertAfter( "object", dtrCursor.getValue() );
}
dtrCursor.destroy();
// set the maximum number of validation errors to collect
validCursor.insertAfter( "maxErrors", "1000" );
validCursor.destroy();
// invoke pub.schema:validate to validate contents of <object>
IData validResult = context.invoke("pub.schema", "validate", validInput);
// check <isValid> to see whether <object> is valid and process
// accordingly
validCursor = validResult.getCursor();
if (validCursor.first("isValid"))
{
if (IDataUtil.getString(validCursor).equals("false"))
{
IData[] vr = IDataUtil.getIDataArray(validCursor, "errors");
System.out.println( vr.length + " ERROR(s) found with example" );
for (int j = 0; j < vr.length; j++)
{
System.out.println( vr[j].toString() );
}
}
}
validCursor.destroy();
. . .
Validation Errors
During data validation, the validation engine generates errors when it encounters values
that do not conform to the structural and content constraints specified in the blueprint.
The format in which the validation engine returns errors depends on whether validation
was performed using the built-in services or by checking the declared input and output
parameters for the service.
When the validation engine performs data validation by executing the built-in
services pub.schema:validate or pub.schema:validatePipeline, errors are returned in the
errors output variable (an IData list). For each validation error, the errors variable
lists the error code, the error message, and the location of the error.
When the validation engine performs validation by comparing run-time data to
the declared input and output parameters, the validation engine returns all the
validation errors in a string. This string contains the error code, error message, and
error location for each error found during input/output validation.
Validation Exceptions
If you use the pub.schema:validate and pub.schema:validatePipeline services to perform data
validation, you can determine whether the service should succeed or fail if the data
being validated is invalid. You might want a service to succeed even if the data is
invalid. In the pub.schema:validate and pub.schema:validatePipeline services, the value of the
failIfInvalid input variable controls this behavior.
This topic describes the use of Java services in a Service Development Project and how to
use the Java Service Editor to create and edit Java services.
Task 1: Ensure that the IS package and folder in which you want to create the Java
service exist. If not, create them. For more information, see "Package and Folder
Requirements" on page 171.
Task 2: Use Designer to add the Java service element. For more information, see
"Creating a Java Service" on page 336. Designer creates a Service Development
Project in your local workspace for the Java service. For more information, see
"Service Development Projects in the Local Workspace" on page 334.
Do the following to build the logic for the Java service:
Define the input and output parameters for the service. For more
information, see "About the Service Signature" on page 172.
Optionally, generate starter code for the service based on the declared
input and output parameters. For more information, see "Generating Java
Code from Service Input and Output Parameters" on page 340.
Add additional Java code and modify the generated Java code as
necessary. You can use the webMethods Integration Server Java API in
your service. For more information, see the webMethods Integration Server
Java API Reference.
Task 3: Provide classes required to compile the Java service. You add any
additional third-party classes to:
Task 4: Compile the Java service. Designer automatically compiles the service
when you save it. For more information, see "Compiling a Java Service" on
page 345.
Task 5: Debug the Java service. For more information, see "Debugging Java
Services" on page 487.
Designer also provides the ability for you to generate code that invokes a Java service.
You can generate code that a client would use to invoke the Java service and code
that another service would use to invoke the Java service. For more information, see
"Building a Java Client" on page 962 and "Generating Code a Java Service Can Use to
Invoke a Specified Service" on page 346.
Note: You can use the Designer Java service editor to edit Java services that you
created in Developer. Additionally, you can use Designer to edit Java services
you created with your own IDE, provided that you properly commented them
as described in "Building Java Services in Your Own IDE" on page 351 and
"Adding Comments to Your Java Code for the jcode Utility" on page 355.
Source Tab
You specify the code for the Java service on the Source tab, which extends the standard
Eclipse Java editor. Because the Eclipse Java editor requires source files to be in the local
workspace, Designer also requires source files to be in the local workspace. To achieve
this, Designer adds Java classes to a Service Development Project, which is a project with
extensions to support Java services. For more information, see "Service Development
Projects in the Local Workspace" on page 334.
The full capabilities of the Eclipse Java editor are available, for example, source
formatting and code completion. However, unlike the Eclipse Java editor, the Designer
Java service editor protects the sections of a Java service that contain required code to
prevent structural damage to the service. The following illustrates the contents of the
Source tab for a newly created service.
package orders.orderStatus                       (Java package definition)
import com.wm.data.*;                            (Add additional imports here)
import com.wm.util.Values;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
public final class orderStatus_checkStatus_SVC   (Class definition)
}                                                (Final “}”)
Note: You can set the Java service editor preferences so that Designer uses
Values in and return out for the input/output rather than an IData
object. For more information, see "Java/C Service Editors Preferences" on
page 998.
Final brace “}”. The Java service editor does not allow you to add code after the final
brace “}”.
Note: By default, Designer adds some required imports that you cannot delete.
Although the Java service editor will allow you to remove the imports,
when you save the service, Designer adds the required imports back to the
service.
extends, where you can specify a super class for the implementation.
implements, where you can specify the Java interfaces that the Java service
implements.
source code, where you add the code for the primary Java service method.
shared code, where you can specify declarations, methods, etc. that will be shared by
all services in the current folder.
Note: You cannot enter or paste special characters including '{' in the extends or
implements section of a Java service.
The toolbar and icons that the Source tab uses are the same as the buttons and icons used
in the standard Eclipse Java editor. For a description, see the Eclipse Java Development
User Guide.
the workspace because it includes the server identification. In this case, the Service
Development Project will have the following name:
MyPackage[ServerB_5555]
Note: You might still need to add additional classes and jar files to Integration
Server so that Integration Server can compile the service. For more
information, see information about managing IS packages and how
Integration Server stores IS package information in webMethods Integration
Server Administrator’s Guide.
The following shows the format of a Service Development Project and an example.
Format:
- projectName
    + JRE System Library
    - src
        + javaPackageName(1)
        .
        .
        + javaPackageName(n)
    + classes
    + defaultJarFile(1)
    .
    .
    + defaultJarFile(n)
    + lib

Example:
- MyPackage[ServerA_5555]
    - JRE System Library
    - src
        - folderA
        - folderA.folderB
    + classes
    + IS_CLIENT
    + IS_SERVER
    + lib
4. If you have a template you want to use to initialize a default set of properties for the
service, select it from the Choose template list.
5. Click Finish.
6. Specify the input parameters and output parameters for the Java service on the Input/
Output tab. For more information, see "About the Service Signature" on page 172.
7. Optionally, specify usage notes or comments in the Comments tab.
8. Specify service properties using the Properties view. For more information, see:
"About Service Run-Time Parameters" on page 177
"About Automatic Service Retry" on page 192
"About Service Auditing" on page 194
"About Universal Names for Services or Document Types" on page 205
"About Service Output Templates" on page 210.
9. Optionally, generate starter code for the service based on the declared input and
output parameters. For more information, see "Generating Java Code from Service
Input and Output Parameters" on page 340.
10. Add and modify the Java code on the Source tab.
You can use the webMethods Integration Server Java API in your service. For more
information, see webMethods Integration Server Java API Reference.
11. Select File > Save.
Designer compiles the Java service on Integration Server and displays compilation
error messages from the server in a popup window. Designer also writes the error
messages to the Designer log file making them visible within the Error Log View.
Designer also compiles the Java service locally in the Service Development Project.
Additionally, if the workspace preference Build Automatically is selected, Designer
rebuilds other classes in the Service Development Project at the same time. Designer
adds compilation errors from the local compilation to the Problems view. If the
Problems view is not already open, you can open it by selecting Window > Show View
> Problems. To view the line of code that caused an error, double-click the error
in the Problems view; Designer shifts focus to the Java service editor, with the
cursor positioned at the line of code that caused the error.
Note:
When you create a new Java service, Designer adds a Java class associated with
the Java service to a Service Development Project in your local workspace. If an
appropriate Service Development Project that corresponds to the service’s IS package
does not yet exist, Designer creates one for the service. For more information, see
"Service Development Projects in the Local Workspace" on page 334.
Designer adds initial code to the Java service. For all Java services, Designer adds the
Java package definition, class definition, primary method definition, and a minimum
set of imports. If the service is the second or subsequent Java service created in the
same IS folder, Designer also adds any shared code defined in other Java services in
the IS folder, additional imports, extends, and implements.
Because Designer is connected to Integration Server, when you save the service in
Designer, your changes are also immediately saved in Integration Server.
Additionally, when you save a Java service, Designer compiles it both in the
Service Development Project in Designer and on Integration Server. When Designer
compiles the service locally, by default, it also rebuilds other classes in the Service
Development Project.
If your Java service requires additional classes to compile, you must add them, either
as individual class files or in jar files, to both the Service Development Project and
to Integration Server. If you set up IS package dependencies for the Java service in
Integration Server and there are classes and/or jar files in the IS packages required so
that the service can compile, you must manually add them to the Service Development
Project. For more information, see "Adding Classes to the Service Development
Project" on page 343. For more information about adding classes to Integration
Server and how Integration Server stores package information, see webMethods
Integration Server Administrator’s Guide.
When a folder contains multiple Java services, Designer adds an empty
implementation of all of the Java services in the folder to each Java service. This
allows a Java service in a folder to invoke another Java service in the same folder
directly using methodName (pipeline) where methodName is the local name of the Java
service.
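The direct-call pattern described above can be sketched in plain Java. This is an illustrative sketch only: Object stands in for IData so that the snippet compiles without the webMethods libraries, and the String return value exists only to make the call observable (a real primary method is void and passes results through the pipeline).

```java
// Sketch of two sibling Java services compiled into one folder class.
// Object stands in for IData; this is not the webMethods API.
public class accounts {

    // Primary method of one Java service in the folder.
    public static String createAccount(Object pipeline) {
        // ... service logic ...
        // Direct call to a sibling service in the same folder:
        // methodName(pipeline)
        return checkStatus(pipeline);
    }

    // Primary method of another Java service in the same folder.
    public static String checkStatus(Object pipeline) {
        return "checked";
    }

    public static void main(String[] args) {
        System.out.println(createAccount(new Object()));
    }
}
```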
Using an IData Object for the Java Service Input and Output
An IData object is the universal container that Java services use for service input and
output. A Java service method signature takes exactly one argument of type IData, and
the same IData object contains the output from the service. An IData object contains an
ordered collection of key/value pairs on which a service operates. For a key/value pair:
The key must be a String.
The value can be any Java object (including an IData object).
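As a conceptual analogue only (IData is a webMethods interface with its own cursor-based API, not a java.util.Map), the rules above resemble an ordered map with String keys and arbitrary Object values, including nested documents:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual analogue of an IData pipeline; not the webMethods API.
public class PipelineAnalogy {

    public static Map<String, Object> buildPipeline() {
        // Ordered collection of key/value pairs; keys are Strings.
        Map<String, Object> pipeline = new LinkedHashMap<>();
        pipeline.put("orderId", "12345");            // String value

        // A value can itself be a "document" (like an IData in an IData).
        Map<String, Object> shipTo = new LinkedHashMap<>();
        shipTo.put("city", "Darmstadt");
        pipeline.put("shipTo", shipTo);
        return pipeline;
    }

    public static String getOrderId() {
        return (String) buildPipeline().get("orderId");
    }

    public static void main(String[] args) {
        System.out.println(getOrderId()); // prints: 12345
    }
}
```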
Tip: You can use Designer to generate code for getting input from and writing
output to an IData object. After generating the code, you can copy and paste it
into the Java service you are creating. For more information, see "Generating
Java Code from Service Input and Output Parameters" on page 340.
When the Java service is invoked, Integration Server passes the IData object to it. The
service needs to get the input values it needs from the key/value pairs within the IData
object. The following sample code uses methods of the IDataCursor class to position
the cursor and uses the getValue method to get the input value of the myVariable input
variable from the IData object:
public final static void myservice (IData pipeline)
throws ServiceException
{
IDataCursor myCursor = pipeline.getCursor();
if (myCursor.first( "inputValue1" )) {
String myVariable = (String) myCursor.getValue();
.
.
}
myCursor.destroy();
.
.
return;
}
A service returns output by inserting it into the same IData object that was used for the
input values. All of the service outputs must be written to the IData object. For example:
public final static void myservice (IData pipeline)
throws ServiceException
{
IDataCursor myCursor = pipeline.getCursor();
if (myCursor.first( "inputValue1" )) {
String myVariable = (String) myCursor.getValue();
.
.
}
myCursor.last();
myCursor.insertAfter( "outputValue1", myOutputVariable );
myCursor.destroy();
return;
}
Note: Integration Server passes everything that the Java service puts into the
pipeline (that is, the IData object) as output, regardless of what is declared
as its input/output parameters. Declaring a service's input and output
parameters does not filter what variables the service actually receives or
returns.
The following shows code that Designer generated for the above input and output
parameters:
// pipeline
IDataCursor pipelineCursor = pipeline.getCursor();
String input1 = IDataUtil.getString( pipelineCursor, "input1" );
// inDoc
IData inDoc = IDataUtil.getIData( pipelineCursor, "inDoc" );
if ( inDoc != null)
{
IDataCursor inDocCursor = inDoc.getCursor();
String in1 = IDataUtil.getString( inDocCursor, "in1" );
String in2 = IDataUtil.getString( inDocCursor, "in2" );
inDocCursor.destroy();
}
pipelineCursor.destroy();
// pipeline
IDataCursor pipelineCursor_1 = pipeline.getCursor();
IDataUtil.put( pipelineCursor_1, "output1", "output1" );
// outputDoc
IData outputDoc = IDataFactory.create();
IDataCursor outputDocCursor = outputDoc.getCursor();
IDataUtil.put( outputDocCursor, "out1", "out1" );
IDataUtil.put( outputDocCursor, "out2", "out2" );
outputDocCursor.destroy();
IDataUtil.put( pipelineCursor_1, "outputDoc", outputDoc );
pipelineCursor_1.destroy();
You add individual class files to the “classes” folder of the Service Development
Project.
If you have Java classes that are packaged together in jar files, you add the jar files to
the “lib” folder of the Service Development Project.
If you set up IS package dependencies for a Java service in Integration Server and
there are classes and/or jar files in the IS packages required so that the service can
compile, you must manually add them to the Service Development Project.
Important: The Java source files for these classes should not be maintained within the
Service Development Project.
4. If you want to add jar files to the Service Development Project, drag them from the
file system into the “lib” folder of the Service Development Project in the Project
Explorer view.
If you have the Build Automatically Workspace preference selected, after adding new
class and/or jar files to the Service Development Project, Designer automatically
rebuilds the project. If you have the Build Automatically preference turned off, you
can force a rebuild by selecting Project > Build Project. You set the Build Automatically
preference using Window > Preferences > General > Workspace.
After the project is rebuilt, Designer removes the errors from the Problems view.
However, the errors might still exist for the Folder class that resides in Integration
Server. To correct the error, ensure Integration Server has access to the required
class and jar files, open the Java service in Designer, and save it again to force the
compilation of the service on Integration Server.
Important: You do not need to use the jcode utility to compile and transfer the Java
service to Integration Server. The jcode utility is only necessary when you
are using an IDE other than Designer. For more information about building
Java services using your own IDE, see "Building Java Services in Your Own
IDE" on page 351.
You do not have to generate code for all the input and output parameters. You can
choose to generate code for only the input parameters, only the output parameters,
or for one or more selected input/output parameters.
When Designer generates code from the service input/output parameters, it puts the
code on the clipboard. From there, you can paste it into a Java service and modify it as
necessary.
The following shows code that Designer generated for the above input and output
parameters:
// input
IData input = IDataFactory.create();
IDataCursor inputCursor = input.getCursor();
IDataUtil.put( inputCursor, "input1", "input1" );
// inDoc
IData inDoc = IDataFactory.create();
IDataCursor inDocCursor = inDoc.getCursor();
IDataUtil.put( inDocCursor, "in1", "in1" );
IDataUtil.put( inDocCursor, "in2", "in2" );
inDocCursor.destroy();
IDataUtil.put( inputCursor, "inDoc", inDoc );
inputCursor.destroy();
// output
IData output = IDataFactory.create();
try{
output = Service.doInvoke( "Folder2.subFolder", "selectedService",
input );
}catch( Exception e){}
IDataCursor outputCursor = output.getCursor();
String output1 = IDataUtil.getString( outputCursor, "output1" );
// outputDoc
IData outputDoc = IDataUtil.getIData( outputCursor, "outputDoc" );
if ( outputDoc != null)
{
IDataCursor outputDocCursor = outputDoc.getCursor();
String out1 = IDataUtil.getString( outputDocCursor, "out1" );
String out2 = IDataUtil.getString( outputDocCursor, "out2" );
outputDocCursor.destroy();
}
outputCursor.destroy();
b. If there are IS asset dependencies for the Java service you are deleting, Designer
indicates which items will be affected by the deletion. Click Continue.
4. Click OK to confirm the deletion.
As an alternative to creating Java Services using the Designer Java Service Editor, you
can use your own IDE.
Note: For information about creating Java services using Designer, see "Building
Java Services" on page 329.
When you use your own IDE, you must create the Java code yourself, compile it, and
store the compiled class file and other service information in Integration Server. To help
you with these tasks, Integration Server provides the jcode utility.
The following describes the basic steps for building a Java service with your own IDE.
1. Understand how Java services are stored in Integration Server. For a description, see
"How Java Services are Organized on Integration Server " on page 352.
2. Optionally create an empty Java service using Designer that you can use as a
guideline for coding your own service. For more information, see "Building Java
Services" on page 329.
3. Write the Java code for your service using your own IDE.
Define the input and output parameters for the service. The service must use
an IData object for service input and output. For more information, see "IData
Object for Java Service Input and Output" on page 355.
Ensure your code meets requirements described in "Requirements for the Java
Service Source Code" on page 354.
Add comments to the code that identify various fragments, for example, imports
or service inputs and outputs. For more information, see "Adding Comments to
Your Java Code for the jcode Utility" on page 355. These comments are used
by the jcode utility, which you use in the next step.
4. Use the jcode utility to compile the Java service and store its service information in
Integration Server.
5. Reload the package to load the compiled Java service into memory so that it is
executable.
folder names that contain the Java service folder. For example, a service named
recording.accounts:createAccount is made up of a Java method called createAccount in a Java
class called accounts within the recording Java package.
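Because the naming rule above is mechanical, it can be illustrated with a small, purely hypothetical helper that splits a fully qualified service name into its Java package, class, and method parts (this is not a webMethods API):

```java
// Purely hypothetical helper illustrating the naming convention;
// this is not part of the webMethods API.
public class ServiceNameParser {

    // "recording.accounts:createAccount"
    //     -> { "recording", "accounts", "createAccount" }
    public static String[] parse(String fqName) {
        int colon = fqName.lastIndexOf(':');
        String interfacePart = fqName.substring(0, colon); // folder path
        String method = fqName.substring(colon + 1);       // service = method name
        int dot = interfacePart.lastIndexOf('.');
        String javaPackage = dot < 0 ? "" : interfacePart.substring(0, dot);
        String javaClass = interfacePart.substring(dot + 1);
        return new String[] { javaPackage, javaClass, method };
    }

    public static void main(String[] args) {
        String[] p = parse("recording.accounts:createAccount");
        System.out.println(p[0] + " / " + p[1] + " / " + p[2]);
        // prints: recording / accounts / createAccount
    }
}
```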
When building a Java service in your own IDE, it is helpful to understand how Java
services are stored in Integration Server. Integration Server stores information about
services within its packages directory, specifically in the namespace (ns) and code
directories of a package.
Important: Although you might want to examine the contents of the Integration Server
namespace directories, do not manually modify this information. Only
modify this information using the appropriate Software AG tools and/or
utilities. Inappropriate changes, especially to the ns directory of the WmRoot
package, can disable Integration Server.
The following shows the location of the source for the recording.accounts:createAccount
service:
Integration Server_directory\instances\instance_name\packages\purch\code\source
\recording\accounts.java
The source code for the createAccount service is a method in accounts.java.
The code\classes subdirectory contains the compiled code for a Java service (that
is, the class file). The following shows the directory path to the classes in the purch
package:
Integration Server_directory\instances\instance_name\packages\purch\code\classes
When you build a Java service in your own IDE, you need to add the Java class file to
the code\classes subdirectory. You can do so by using the jcode utility to compile the
Java source. For more information, see "Using the jcode Utility" on page 359 and
"Using jcode frag/fragall to Split Java Source for Designer " on page 362.
The following shows the location of the class file for the recording.accounts:createAccount
service:
Integration Server_directory\instances\instance_name\packages\purch\code\classes
\recording\accounts.class
The createAccount service is a method of the accounts class.
Note: Integration Server provides classes that you can use with Java services that
you build. For a description of these classes, see webMethods Integration Server
Java API Reference.
Service definitions and service inputs and outputs. Add the following comments to mark
the beginning and end of the logic for one method in the class. This results in a Java
service in Integration Server.
// --- <<IS-START(serviceName)>> ---
service logic
// --- <<IS-END>> ---
Shared code. Add the following comments to mark the beginning and end of the
shared code within the class.
// --- <<IS-START-SHARED>> ---
shared code
// --- <<IS-END-SHARED>> ---
For example, the following code fragment shows the tags used to mark the beginning
and end of the import section.
.
.
.
// --- <<IS-START-IMPORTS>> ---
import com.wm.data.*;
import java.util.*;
// --- <<IS-END-IMPORTS>> ---
.
.
.
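To illustrate how these tags delimit fragments, the following sketch extracts the text between the import tags. It is illustrative only and says nothing about how the jcode utility is actually implemented:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: locates the region between the
// IS-START-IMPORTS and IS-END-IMPORTS tags in a source string.
public class FragmentScanner {

    public static String extractImports(String source) {
        Pattern p = Pattern.compile(
            "<<IS-START-IMPORTS>>\\s*---(.*?)//\\s*---\\s*<<IS-END-IMPORTS>>",
            Pattern.DOTALL);
        Matcher m = p.matcher(source);
        return m.find() ? m.group(1).trim() : null;
    }

    public static void main(String[] args) {
        String src =
            "// --- <<IS-START-IMPORTS>> ---\n" +
            "import com.wm.data.*;\n" +
            "import java.util.*;\n" +
            "// --- <<IS-END-IMPORTS>> ---\n";
        System.out.println(extractImports(src));
    }
}
```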
*
* To indicate nesting, use a single "-" at the beginning of
* each line for each level of nesting.
*/
public static void createAccount (IData pipeline)
throws ServiceException
{
// --- <<IS-START(createAccount)>> ---
// [i] field:0:required name
// [i] field:1:required references
// [i] record:0:required data
// [i] - field:1:required address
// [i] - field:1:required phone
// [o] field:1:required message
// [o] field:1:required id
IDataCursor idc = pipeline.getCursor();
String name = IDataUtil.getString(idc, "name");
String [] refs = IDataUtil.getStringArray(idc, "references");
IData data = IDataUtil.getIData(idc, "data");
// Service logic that takes action on the input information
// goes here. Note that when you use the jcode utility to
// fragment the service, it does not strip comments inside
// the service body. As a result, the comments are
// preserved and will display if you use Designer
// to view the service.
idc.last();
idc.insertAfter ("message", "createAccount not fully implemented");
idc.insertAfter ("id", "00000000");
idc.destroy();
// --- <<IS-END>> ---
return ;
}
/**
* == COMPLEX SIGNATURES ==
* The getAccount service takes a single string "id", and returns
* a complex structure representing the account information.
* Note the use of the helper functions (defined below).
*/
public static void getAccount (IData pipeline)
throws ServiceException
{
// --- <<IS-START(getAccount)>> ---
// [i] field:0:required id
// [o] record:1:required account
// [o] - field:0:required name
// [o] - field:1:required refs
// [o] - record:0:required contact
// [o] -- field:0:required address
// [o] -- field:0:required phone
IDataCursor idc = pipeline.getCursor();
if(idc.first("id"))
{
try
{
String id = IDataUtil.getString(idc);
IData data = getAccountInformation(id);
idc.last();
idc.insertAfter ("account", data);
}
catch (Exception e)
{
throw new ServiceException(e.toString());
}
}
idc.destroy();
// --- <<IS-END>> ---
}
/**
* == SHARED SOURCE ==
* Wrap the start and end of the shared code with the
* IS-START-SHARED and IS-END-SHARED tags. The shared code includes
* both global data structures and non-public functions that are
* not exposed as services.
*/
// --- <<IS-START-SHARED>> ---
private static Vector accounts = new Vector( );
private static IData getAccountInformation (String id) {
throw new RuntimeException ("this service is not implemented yet");
}
// --- <<IS-END-SHARED>> ---
}
make
makeall
    Examine a package to determine the source files that have been
    updated since the last compilation, then compile those source
    files and save the resulting class files in the classes directory of
    the package.
    Use make to compile the source files in a single folder of a
    package. Use makeall to compile the source files in all the folders
    of a package.
    For more information, see "Using jcode comp to Create Java
    Source from Fragments" on page 363.
frag
fragall
    Split the source files in a package into fragments that the jcode
    utility then stores in the namespace (ns) directory of the package.
    As a result, when you view the service in Designer, Designer
    displays the code from the updated fragments.
When building a Java service in your own IDE, you use the two-step process of making
(compiling) and fragmenting the source code often. To make these actions easier, the
jcode utility supports the shortcut commands described in the following table. For
more information about these shortcuts, see "Using jcode Shortcut Commands" on page
364.
update
    Compile and fragment only source files that have changed for a
    single package.
upall
    Compile and fragment only source files that have changed for all
    Integration Server packages.
hailmary
    Compile and fragment all source files (whether they have
    changed or not) for all Integration Server packages.
The jcode utility reports which files were compiled, as well as any errors that it
encountered during the compiling process.
Important: Before you can compile a Java service using the jcode utility, you must
set the environment variable, IS_DIR, to point to the directory in which
Integration Server is installed.
For example:
-Dwatt.server.compile="C:\java\jdk1.6.0_11\bin\javac -classpath {0}
-d {1} {2}"
The watt.server.compile property specifies the compiler command that you want
Integration Server to use to compile Java services. For more information about this
property, see the webMethods Integration Server Administrator’s Guide.
Important: If the Java source code contains any non-ASCII characters, set the property
watt.server.java.source=Unicode | UnicodeBig | UnicodeLittle. The default
value is file.encoding. When Unicode is set, the compile command line
specified in the property watt.server.compile.unicode is used. The default
value of this property is the following:
“javac -encoding Unicode -classpath {0} -d {1} {2}”
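Judging from the two compiler command templates shown above, {0} appears to stand for the classpath, {1} for the output directory (the javac -d option), and {2} for the source files; those meanings are an assumption, not a documented contract. Treating the property value as a MessageFormat-style template, the expansion can be sketched as:

```java
import java.text.MessageFormat;

public class CompileCommand {

    // Assumption inferred from the javac examples in the text:
    // {0} = classpath, {1} = output directory, {2} = source files.
    public static String expand() {
        String template = "javac -classpath {0} -d {1} {2}";
        return MessageFormat.format(template,
            "packages/purch/code/classes",                          // {0}
            "packages/purch/code/classes",                          // {1}
            "packages/purch/code/source/recording/accounts.java"); // {2}
    }

    public static void main(String[] args) {
        System.out.println(expand());
    }
}
```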
package is the name of the Integration Server package containing the source code you
want to compile.
Important: Before you use the jcode utility to update the Java code fragments and
service signature, you must add specially formatted Java comments (jcode
tags) to the Java source code to designate various segments of the source
code. For more information, see "Adding Comments to Your Java Code for
the jcode Utility" on page 355.
package is the name of the Integration Server package containing the source code you
want to fragment.
Important: The existing source file, if there is one, is overwritten by the source file
that the jcode utility produces. User locks in Designer will not prevent this
because the jcode utility operates independently of locking functionality.
Note: When building a Java service in your own IDE, you cannot use the comp
command if you have not previously used the frag or fragall command to split
the source into fragments.
package is the name of the Integration Server package containing the source code you
want to compile and fragment.
A map service is a service that is written in the webMethods flow language. A map
service can be used to map document types of different formats.
A map service can be reused in different flow services. You can create a map service
to perform a complex mapping of service signatures (input and output variables) and
invoke this mapping service when the same transformation or mapping is required in
other flow or Java services.
On the Tree tab, Designer lists map actions from top to bottom. The Graphical Tree
tab provides a more condensed view of a map service. When you click on a map
action in the map service, the corresponding step is displayed in the Pipeline view.
On the Graphical View tab, Designer provides a graphical representation of all the
map actions involved in a map service.
The Tree tab and the Graphical View tab provide the same capabilities for building a
map service, so work in whichever tab you find easier to use. You can easily switch
between the tabs when building a map service.
7. Click OK.
The updated map service appears on the Tree tab.
A C/C++ service is a Java service that calls a C program that you have created. Designer
generates the Java code needed to successfully call the C program.
You use Designer to build a set of starter files that you can use to create a C/C++ service.
These files include:
A Java service that calls the C program that you have created.
A C/C++ source-code template that you use to create your C program.
A make file you use to compile the finished program and place it on the server.
Task 2 Use Designer to create the C/C++ service element. For more
information, see "Creating a C/C++ Service" on page 379.
Task 3 Generate starter code for the service based on the declared input and
output parameters. For more information, see "Generating C/C++ Code
from Service Input and Output Parameters" on page 382.
Task 4 Add additional code or modify the generated code, if necessary. You
can use the Integration Server C/C++ API in your service. For more
information, see webMethods Integration Server C/C++ API Reference.
Task 5 Specify service properties such as the run-time settings, service retry,
service auditing, and permissions using the Properties view. For more
information, see "Building Services" on page 169.
Task 6 Provide classes required to compile the C/C++ service. You add any
additional third-party classes to:
Service Development Project in Designer so that Designer can locally
compile the service. For more information, see "Adding Classes to the
Service Development Project" on page 383.
Integration Server so that the server can compile the service. For
more information, see information about managing IS packages and
how Integration Server stores IS package information in webMethods
Integration Server Administrator’s Guide.
Task 8 Debug the C/C++ service. The primary way to debug a C/C++ service is
to debug the Java class associated with the C/C++ service that Designer
maintains in a Service Development Project. For more information, see
"Debugging C/C++ Services" on page 388.
this directory. If the package does not already have a code/libs directory, create one
before you begin building the service.
The folder in which you want to create the service must already exist. For more
information, see "Creating New Elements" on page 54.
The specification that you want to use to define the inputs and outputs for the
service must exist. For more information about specifying a specification, see "Using
a Specification as a Service Signature" on page 175.
If you are running the Integration Server as an NT service, you must complete one of
the following:
Set the Windows system environment variable PATH to include
Integration Server_directory\lib
-OR-
Copy the wmJNI.dll and wmJNIc.dll files located in
Integration Server_directory\lib to the Integration Server_directory
Note: You can use the Designer C/C++ service editor to edit the C/C++ services
that you created in Developer. Additionally, you can use Designer to edit C/
C++ services you created with your own IDE, provided that you properly
commented them as described in "Building Java Services in Your Own IDE"
on page 351 and "Adding Comments to Your Java Code for the jcode Utility"
on page 355.
Source Tab
You specify the code for the C/C++ service in the Source tab, which extends the standard
Eclipse Java editor. Because the Eclipse Java editor requires source files to be in the local
workspace, Designer also requires source files to be in the local workspace. To achieve
this, Designer adds Java classes to a Service Development Project, which is a project with
extensions to support Java services. For more information, see "Service Development
Projects in the Local Workspace" on page 377.
The full capabilities of the Eclipse Java editor are available. These include source code
formatting and code completion. However, unlike the Eclipse Java editor, the Designer
C/C++ service editor protects the sections of a C/C++ service that contain required code
to prevent structural damage to the service. The following illustrates the contents of the
Source tab for a newly created service.
package orders.orderStatus;                        // Package definition

/**
 * The primary method for the C service
 *
 * @param in The input Values
 * @return The output Values
 */
public static final Values checkStatus(Values in)  // Primary method definition
{
    // --- <<IS-GENERATED-CODE-1-START>> ---
    Values out = in;
    // --- <<IS-GENERATED-CODE-1-END>> ---
    out = ccheckStatus(Service.getSession(), in);
pairs on which a service operates. A Values object can contain any number of
key/value pairs.
You define the data to pass into the service via the Values object by defining input
parameters on the Input/Output tab of the editor. You add code to the primary method
that modifies the key/value pairs contained in the Values object. The Values object
then becomes the output of the service. The service returns the output parameters
you define on the Input/Output tab.
Final brace “}”. The C/C++ service editor does not allow you to add code after the final
brace “}”.
Note: By default, Designer adds some required imports that you cannot delete.
Although you can remove the imports in the C/C++ service editor,
Designer adds the required imports back when you save the service.
extends, where you can specify a super class for the implementation.
implements, where you can specify the Java interfaces that the C/C++ service
implements.
Note: You cannot enter or paste special characters including '{' in the extends or
implements section of a C/C++ service.
source code, where you add the code for the primary C/C++ service method.
shared code, where you can specify declarations, methods, etc. that will be shared by
all services in the current folder.
Note: The shared code section of the C/C++ service editor contains the code that
loads the library that contains the C/C++ program.
The toolbar and icons that the Source tab uses are the same as the buttons and icons used
in the standard Eclipse Java editor. For a description, see the Eclipse Java Development
User Guide.
Development Project. Designer creates one Service Development Project per package
containing a C/C++ service.
When you create a C/C++ service, Designer adds a Java class associated with the C/C+
+ service to a Service Development Project. If a Service Development Project does not
already exist for a C/C++ service, Designer creates one. You can use the Project Explorer,
Package Explorer, or Navigator views to view the Service Development Projects.
Note: You might still need to add additional classes and jar files to Integration
Server so that Integration Server can compile the service. For more
information about managing IS packages and how Integration Server stores IS
package information, see webMethods Integration Server Administrator’s Guide.
The following shows the format of a Service Development Project and an example.
Format:
- projectName
    + JRE System Library
    - src
        + javaPackageName(1)
        .
        .
        + javaPackageName(n)
    + classes
    + defaultJarFile(1)
    .
    .
    + defaultJarFile(n)
    + lib

Example:
- MyPackage[ServerA_5555]
    + JRE System Library
    - src
        - folderA
        - folderA.folderB
    + classes
    + IS_CLIENT
    + IS_SERVER
    + lib
If you want to use Unicode characters in the C/C++ service, you need to change the
text file encoding preference. To do so, in the Workspace preferences, select Other
under Text file encoding and select or type a new encoding.
To track when the service is started and completed and whether the
service succeeded or failed, see "About Service Auditing" on page 194.
9. Optionally, generate starter code for the service based on the declared input and
output parameters.
Designer adds initial code to the C/C++ service. For all C/C++ services, Designer adds
the package definition, class definition, primary method definition, and a minimum
set of imports. If the service is the second or subsequent C/C++ service created in the
same IS folder, Designer also adds any shared code defined in other C/C++ services
in the IS folder, additional imports, extends, and implements. For more information,
see "Generating C/C++ Code from Service Input and Output Parameters" on page
382.
10. Add and modify the code on the Source tab. You can add declarations, methods,
and so on to the initial code that Designer generates.
You can use the webMethods Integration Server Java API in your service. For more
information, see webMethods Integration Server Java API Reference.
11. Optionally, specify usage notes or comments in the Comments tab.
12. Select File > Save.
Designer compiles the C/C++ service on Integration Server and displays the
compilation error messages from the server. Designer also writes the error messages
to the Designer log file, making them visible within the Error Log view.
When you create a C/C++ service, Designer generates a source code file and a make
file and places these files in the following directory:
Integration Server_directory\instances\instance_name\packages\packageName\code\source
The names of the files will match the service name you specified in Designer.
The source code file will be named serviceName.c and the make file will be
named serviceName.mak.
Designer also compiles the C/C++ service locally in the Service Development
Project. Additionally, if the workspace preference Build Automatically is selected,
Designer rebuilds other classes in the Service Development Project at the same time.
Designer adds compilation errors from the local compilation to the Problems view. If
Problems view is not already open, you can open it by selecting Window > Show View
> Problems. To view the line of code that caused the error, double-click the error
in the Problems view and Designer shifts focus to the C/C++ service editor, with the
cursor positioned at the line of code that caused the error. For more information, see
"Compiling the C/C++ Source Code" on page 386.
Note: If your C/C++ service requires additional classes to compile, you must
add them, either as individual class files or in jar files, to both the Service
Development Project and to Integration Server. If you have set up IS
package dependencies for a C/C++ service and if the service requires
classes or jar files in these IS packages to compile, you must manually
add the classes or jar files to the Service Development Project. For more
information, see "Adding Classes to the Service Development Project" on
page 383. For more information about adding classes to Integration
Server and how Integration Server stores package information, see
webMethods Integration Server Administrator’s Guide.
Important: Software AG recommends that you do not update the service signature of
the C/C++ service in the Input/Output tab of the C/C++ service editor.
2. If you want to generate code for a subset of the input/output parameters, on the
Input/Output tab, select the parameters for which you want to generate code. To select
more than one variable, press the CTRL key as you select parameters.
3. Right-click in the editor to view the context menu, and select Generate Code.
4. In the Code Generation dialog box, select For implementing this service and click Next.
5. For Specification, select the Input and/or Output check boxes to select the parameters
for which you want to generate code.
6. For Which fields? select one of the following:
All fields if you want to generate code for all of the parameters identified by your
Specification selection.
Selected fields if you want to generate code for only the parameters you selected
before starting the code generation.
7. Click Finish. Designer generates code and places it on the clipboard.
8. Select the Source tab.
9. Paste the contents of the clipboard into your source code.
10. Save the C/C++ service.
Important: Do not maintain the Java source files for these classes within the Service
Development Project.
4. If you want to add jar files to the Service Development Project, drag them from the
file system into the “lib” folder of the Service Development Project in the Project
Explorer view.
If you have the Build automatically Workspace preference selected, after adding new
class and/or jar files to the Service Development Project, Designer automatically
rebuilds the project. If you have the Build automatically preference turned off, you
can force a rebuild by selecting Project > Build Project. You set the Build automatically
preference using Window > Preferences > General > Workspace.
After the project is rebuilt, Designer removes the compilation errors, if any, from the
Problems view. However, the errors might still exist for the Folder class that resides
in Integration Server.
To correct the error, first, ensure that Integration Server has access to the required
class and jar files. Then, open the C/C++ service in Designer and save it again to
force the compilation of the service on Integration Server.
The names of the files will match the service name you specified in Designer. The source
code file will be named serviceName.c and the make file will be named serviceName.mak.
You create the C/C++ program in the serviceNameImpl.c file, not the original file. The
serviceNameImpl.c file is the file in which the make file expects to find your source code.
This step is taken to maintain a copy of the original source file to which you can refer, or
revert to, during the development process.
For example, if your service name is PostPO, you would create a copy of PostPO.c
and name it PostPOImpl.c.
3. Edit the serviceNameImpl.c file as necessary to build your service.
This file contains instructive comments that will guide the development process. You
can also refer to webMethods Integration Server C/C++ API Reference for information
about how to use the webMethods C/C++ API to make the data in your service
available to other services.
4. Edit the make file to customize it for your development environment. Set the
following path settings:
Set... To...
Important: The source code file serviceName.c contains code based on the
specification you used to define the inputs and outputs for the service.
If you edit the specification, you need to regenerate the source code file.
Designer does not update the serviceName.c file automatically. For
more information about generating source code files for a C/C++ service,
see "Creating a C/C++ Service" on page 379.
5. After you finish coding your service, run your make file to compile it. Following is a
typical make command:
make -f SalesTax.mak
The make file compiles your program and puts the finished DLL in the code\libs
directory in the package in which the service resides. If this directory does not exist
when you run the make file, your program will not compile successfully.
6. Once your program compiles successfully, restart Integration Server to reload the
code\libs directory. This makes the service available for execution and allows you to
test it with Designer. For details on testing, see "Debugging C/C++ Services" on page
388.
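The file copy described in step 2 can be sketched as follows. This helper is purely illustrative (it is not part of Designer or Integration Server), and the service name PostPO matches the example above.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative helper: copy the generated serviceName.c to serviceNameImpl.c,
// where the make file expects to find your source code, keeping the original
// file available to refer to or revert to during development.
public class CopyServiceSource {
    public static Path copyForEditing(Path sourceDir, String serviceName) throws Exception {
        Path original = sourceDir.resolve(serviceName + ".c");     // e.g. PostPO.c
        Path workCopy = sourceDir.resolve(serviceName + "Impl.c"); // e.g. PostPOImpl.c
        return Files.copy(original, workCopy, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

After this step, you edit only the serviceNameImpl.c copy; the generated original stays untouched.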
2. If you want to generate code for a subset of the input/output parameters, on the
Input/Output tab, select the parameters for which you want to generate code. To select
more than one variable, press the CTRL key as you select parameters.
3. In the editor, right-click the service to view the context menu, and select Generate
Code.
4. In the Code Generation window, select For calling this service from another service and
click Next.
5. For Specification, select the Input and/or Output check boxes to reflect the parameters
for which you want to generate code.
6. For Which Fields? select one of the following:
All Fields if you want to generate code for all of the parameters identified by your
Specification selection.
Selected Fields if you want to generate code for only the parameters you selected
before starting the code generation.
7. Click Finish. Designer generates code and places it on the clipboard.
8. Paste the contents of the clipboard into a C/C++ service.
create a launch configuration, Designer creates one on the fly and saves it locally in
an unexposed location of your workspace.
Launch the test harness in debug mode. The test harness prompts for input values and
then launches the Java class you want to debug in debug mode.
By default, the debugger executes the Java class using the JRE in the Service
Development Project where the C/C++ service resides. You can change the Service
Development Project’s JRE by updating the project’s Java Build Path property. You
can also specifically identify the JRE to use for debugging by identifying the JRE in
the Java Application launch configuration.
If the Java class being debugged invokes a service, the invoked service runs in
Integration Server. The debugger treats the statement to invoke a service like any
executable line of code in the Java class; that is, you can Step Over it and see results
from it. You cannot use the debugger to Step Into the invoked service.
If the debugger suspends execution of the service, Designer switches to the Debug
perspective. The Debug view will show the test harness class and be positioned at
the statement where the execution was suspended. You can use the other views in
the Debug perspective to inspect the state of the C/C++ service to this point. You
can use the actions in the Debug view toolbar to resume the execution. For more
information about suspending execution, see "How to Suspend Execution of a Java
Class while Debugging" on page 494.
When the execution of the C/C++ service completes, the debugger displays a window
that contains the service results.
For more information about debugging the C/C++ service by debugging its Java
wrapper, see "Debugging Java Services" on page 487.
A .NET service is a service that calls methods imported from .NET assemblies. Designer
provides the .NET service editor for creating, viewing, and editing .NET services in your
IS package.
Logged Fields tab indicates the input and output parameters for which Integration
Server logs data. For more information about logging the contents of input and
output fields, see "Logging Input and Output Fields" on page 199.
Comments tab contains the comments or notes, if any, for the .NET service.
Note: You can use the Designer .NET editor to edit .NET services that you created in
Developer.
Property	Description
Domain Name	The name of the application domain in which the .NET service is to run.
Assembly Path	The location of the directory that holds the .NET assembly in which the method called by the .NET service resides. For information about changing this property, see "Modifying the .NET Assembly Information" on page 395.
Assembly Name	The name of the .NET assembly in which the method called by the .NET service resides. For information about changing this property, see "Modifying the .NET Assembly Information" on page 395.
Domain Configuration File	The configuration file associated with the domain. The file must be located in the assembly path. Enter only the file name. For more information about the domain configuration file, see the webMethods Package for Microsoft .NET Installation and User's Guide.
Class Name	The name of the class that owns the method called by the .NET service. This field is read-only.
Class Lifetime	How Integration Server maintains the instance data for the class that owns the method called by the .NET service. A brief description of each setting follows. For a more detailed description of each and instructions for updating this property, see "Modifying the Class Lifetime for a .NET Service" on page 397.
Method Name	The name of the method called by the .NET service. This field is read-only.
services that call those methods. For more information about the editor, see ".NET
Service Editor" on page 392.
Before you can create a .NET service, make sure the .NET Common Language Runtime
(CLR) is loaded (i.e., started). If it is not, use Integration Server Administrator to load it.
For instructions, see webMethods Package for Microsoft .NET Installation and User's Guide.
You can invoke .NET services from flow services. You can also execute them from
Designer. For more information, see "Running a .NET Service in Designer " on page
398.
use the .NET Properties tab to modify the information so that Integration Server can
continue to call the method. If you do not update the information, attempts to call that
method from Integration Server will fail.
Note: When you create multiple .NET services from an assembly, as described in
"Creating a .NET Service" on page 394, all the services share information
about the assembly. When you change shared information for one .NET
service, Integration Server changes the information for all .NET services
associated with the assembly.
To...	Do this...
Change the assembly path name	In the Assembly Path field, type the new location of the directory that holds the .NET assembly in which the method resides.
Change the assembly name	In the Assembly Name field, type the new name of the .NET assembly in which the method resides.
Change the domain name	In the Domain Name field, type the new domain name.
Change the domain configuration file	In the Domain Configuration File field, type the name of the new domain configuration file.
Note: When you set the Class Lifetime property for a service, Designer automatically
sets the Class Lifetime property of all .NET services associated with the same
class to the same setting.
Session Integration Server creates a separate object for each user. The
object exists until the user session is closed or until the object
times out.
The default timeout value for the object is three minutes. Use
the Class Timeout property to specify a different timeout value
for an object.
Use this setting when the .NET service calls a method that does
not require any session data to be kept.
4. If you set the lifetime to Session, specify a value for the Class Timeout (Mins) property
to define the timeout value for objects.
Set a high enough value so that Integration Server does not prematurely destroy
objects under normal usage. The default is 3 (i.e., three minutes).
Note: Integration Server starts counting the minutes for the timeout when an
instance of the class is created. Whenever a .NET service accesses the class,
Integration Server resets the count.
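The timeout bookkeeping described in the note can be sketched like this. The class below is only an illustration of the rule (countdown starts at creation, each access resets it); it is not Integration Server code.

```java
// Illustrative sketch of the Class Timeout rule: the countdown starts when the
// instance is created, and every access by a .NET service resets the count.
public class SessionObjectTimeout {
    private final long timeoutMillis;
    private long lastAccessed;

    public SessionObjectTimeout(long timeoutMillis, long now) {
        this.timeoutMillis = timeoutMillis;
        this.lastAccessed = now;   // counting starts at creation
    }

    public void access(long now) {
        this.lastAccessed = now;   // each access resets the count
    }

    public boolean isExpired(long now) {
        return now - lastAccessed >= timeoutMillis;
    }
}
```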
Important: If you set the Class Lifetime property to Global or Session, the instance data
can be used across multiple invocations of methods in a class. If multiple
services are using a given global object or a session object at the same time,
those objects need to be thread safe.
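As an illustration of that thread-safety requirement (shown in Java for the sake of a runnable sketch, although the shared objects here are .NET class instances), mutable state shared across concurrent invocations needs atomic or synchronized access:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration only: when one object instance is shared by concurrent service
// invocations, its mutable state must be protected, e.g. with atomic types.
public class SharedCounter {
    private final AtomicInteger invocations = new AtomicInteger();

    public int recordInvocation() {
        return invocations.incrementAndGet(); // safe under concurrent calls
    }
}
```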
Note: If you do not specify an application domain, the service runs in the
default webmDomain application domain.
refid Reference ID
c. If there are any other input fields, specify values for them.
5. Click OK to run the service.
You can use Designer to create XSLT services that transform XML source data according
to instructions in an associated style sheet.
What Is XSLT?
XSLT (Extensible Stylesheet Language Transformations) is a language used to transform
XML documents into other XML documents or formats. Its flexibility and reusability
have made it an industry standard for XML data mapping. Integration Server supports
that flexibility by providing a straightforward mechanism for converting XML data
within Designer.
To instruct Integration Server to use the compiling processor, you use the
useCompilingProcessor input parameter of an XSLT service. For more information about
useCompilingProcessor, see "XSLT Service Signature" on page 406.
Note: You can use a compiling processor to create a translet only if you are using
style sheets that conform to XSLT Version 1.0.
Note: Transforming a very large XML file can exceed the memory parameters
set in Integration Server, resulting in the following error message:
“Could not run filename. java.lang.reflect.InvocationTargetException:
OutOfMemoryError”. If this occurs, edit the wrapper.java.maxmemory
property in the custom_wrapper.conf file. For information about changing the
JVM heap size by editing the Java properties in the custom_wrapper.conf file,
see the webMethods Integration Server Administrator’s Guide.
What Is a Translet?
A translet is a compiled Java class that you can use to perform XSL transformations. By
default, Integration Server uses an interpretive processor to process a style sheet. But
for greater efficiency, you can instruct Integration Server to use a compiling processor
provided by Xalan. The compiling processor compiles the style sheet into a translet. The
style sheet compilation is performed only once per style sheet (unless the style sheet is
modified) and the resultant translet is reused during subsequent transformations. As a
result, transformations are performed more quickly.
Integration Server writes the translet to the same folder that contains the associated style
sheet. The translet will be available even after Integration Server restarts and can be used
in subsequent transformations.
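The compile-once, reuse-many pattern behind translets can be seen with the standard JAXP API. This sketch uses the JDK's default processor rather than Integration Server's compiling processor, so it only illustrates the idea: the style sheet is compiled a single time into a Templates object, and that compiled form is reused for every transformation.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch of the pattern behind translets: compile the style sheet once, then
// reuse the compiled form for each subsequent transformation.
public class CompileOnce {
    private static final String XSL =
        "<xsl:stylesheet version=\"1.0\""
      + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
      + "<xsl:output method=\"text\"/>"
      + "<xsl:template match=\"/\"><xsl:value-of select=\"name(/*)\"/></xsl:template>"
      + "</xsl:stylesheet>";

    public static Templates compile() throws Exception {
        // The expensive step, performed once per style sheet.
        return TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(XSL)));
    }

    public static String transform(Templates compiled, String xml) throws Exception {
        // The cheap step, repeated per document; reuses the compiled sheet.
        Transformer t = compiled.newTransformer();
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

Because compilation happens once, repeated transformations pay only the per-document cost, which is why the compiling processor is faster for style sheets that are applied many times.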
Task 1 Create an XSLT service and the associated XSLT style sheet. For more
information, see "Creating an XSLT Service" on page 405.
Task 2 Edit the XSLT style sheet and write the XSLT transformation code.
Information about writing XSLT code is outside the scope of this help.
However, for some suggestions for creating a well-formed style sheet,
see "Guidelines for the XSLT Style Sheet" on page 410.
Note: Designer lists the templates that are defined on the Window > Preferences
> XML > XSL > Templates page.
5. Click Finish.
Designer refreshes the Package Navigator view and displays the new service in the
XSLT service editor.
Designer saves the style sheet as a text file using the naming convention
serviceName.xsl. It is stored in the same directory as the service's node.ndf
file, that is, within the \ns directory of the package containing the service. For
example, when you save the XSLT service com.example.inventory:convert, Designer
names the style sheet file convert.xsl and stores it in the following directory:
Integration Server_directory\instances\instance_name\packages\packageName\ns
\folderName\com\example\inventory\convert.
Important: Do not rename the style sheet file. When an XSLT service is executed, it
looks in the service directory for a style sheet called serviceName.xsl that
contains instructions for transforming the XML data. If the appropriately
named file is not in that location, the service creates an empty style sheet
file, and ignores the renamed one. However, you can rename an XSLT
service; Designer automatically renames the style sheet file to match the
new service name.
You can specify service properties such as the run time settings, service retry, service
auditing, and permissions using the Properties view. For more information, see
"Building Services" on page 169.
Input Parameters
Important: By default, the XSLT transformation engine caches style sheets, which
improves performance.
Important: To help prevent an external entity attack in a production environment,
set loadExternalEntities to false.
Output Parameters
Usage Notes
The xmldata, xmlUrl, filename, xmlStream, and node input parameters are mutually
exclusive. Use any one of these parameters to specify the type of XML input.
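The mutual exclusivity above amounts to a simple rule: of the XML source parameters, exactly one should be supplied. The check below is an illustration of that rule, not Integration Server's actual validation code.

```java
// Illustration of the usage note: of the mutually exclusive XML source
// parameters (xmldata, xmlUrl, filename, xmlStream, node), exactly one
// should be non-null.
public class XmlSourceCheck {
    public static boolean exactlyOneSource(Object... sources) {
        int supplied = 0;
        for (Object s : sources) {
            if (s != null) supplied++;
        }
        return supplied == 1;
    }
}
```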
If the loadExternalEntities input parameter is set to false, you can have the service load,
read, and transform content from a trusted external entity by doing one of the following:
Place the trusted external entity file in the Integration Server installation directory or
subdirectories.
Include the trusted external entity in the list of trusted entities identified in the server
parameter watt.core.xml.allowedExternalEntities. For more information about this
parameter, see webMethods Integration Server Administrator’s Guide.
If the loadExternalEntities input parameter is not specified in the service
signature, Integration Server checks the value of the server parameter
watt.core.xml.expandGeneralEntities. If this parameter is set to false, the service
blocks all external entities that are not included in the list of trusted entities
specified in watt.core.xml.allowedExternalEntities. For more information about
watt.core.xml.expandGeneralEntities, see webMethods Integration Server Administrator’s
Guide.
encounters invalid code. Results from the service are returned to Designer and displayed
in the Results view.
For more information about debugging services, see "Running Services" on page 435.
Note: You cannot use the Eclipse debugging framework for XSLT services in which
you have performed pipeline customizations. For more information about
customizing an XSLT service in the pipeline, see "Passing Name/Value Pairs
from the Style Sheet to the Pipeline" on page 412.
2. On the Configurations tree, select XSL and open the launch configuration that you
want to run to debug the XSLT service.
3. Click Debug to debug the XSLT service using this launch configuration.
Using the appropriate instructions in the XSLT style sheet in conjunction with the XSLT
service xslParamInput parameter, you can:
Override the value of a name/value pair defined in the style sheet. By passing a
new value from the pipeline to the style sheet you can specify values during the
transformation that were not available when you wrote the style sheet, and run
different transformations without changing the underlying XSLT style sheet.
Define a new name/value pair in the style sheet, and pass it to the pipeline when you
run the service.
Task In the Pipeline, specify new values for each name/value pair you want to
1 override in the style sheet. For more information about specifying new
values for each name/value pair you want to override in the style sheet, see
"Specifying New Values for Name/Value Pair" on page 411.
Task Define each name/value pair as an XSLT parameter in the style sheet.
2 For more information about defining each name/value pair as an XSLT
parameter in the style sheet, see "Defining Name/Value Pair as an XSLT
Parameter" on page 412.
Note: If a pipeline variable and a global variable have the same name and you
select both the Perform global variable substitution and Perform pipeline variable
substitution check boxes, Integration Server uses the value of the pipeline
variable.
6. If you want Integration Server to use the specified value only if the variable does not
contain a value at run time, clear the Overwrite pipeline value check box. (If you select
this check box, Integration Server will always apply the specified value.)
7. Click OK.
2. At run time, the XSLT service will pass the new value from the pipeline to the style
sheet. The style sheet will use the new value during the transformation of the XML
data.
To pass name/value pairs from the XSLT style sheet to the pipeline
1. Open the XSLT style sheet.
Note: After you define an XSLT parameter, you identify it to the XSLT processor
as a variable, rather than text, by prefixing the name with a dollar sign.
4. For each new name/value pair you want to add to the $output parameter, insert the
following xsl:value-of element, where key identifies a name/value pair and xpath
is any valid XPATH expression:
<xsl:value-of select="IOutputMap:put($output,'key',string(xpath))"/>
The style sheet passes the contents of the $output parameter to the xslParamOutput
variable of the service, and the service puts the resulting document in the pipeline.
If you are using the Xalan compiling processor, you must use the outputVariable
variable (described in the previous step) when adding name/value pairs to the
output. For example:
<xsl:value-of select="IOutputMap:put($outputVariable,'key',
string(xpath))"/>
<!-- Adds a new element with a matching name for each text string
in the result tree.-->
<xsl:template match="*">
<xsl:element name="{name()}">
<xsl:apply-templates />
</xsl:element>
</xsl:template>
<xsl:value-of select="Date:toString($date)"/>
</xsl:template>
</xsl:stylesheet>
The following sample shows the IntDate class updated to use the compiling processor.
<?xml version="1.0" ?>
<!--Declares namespace for the XSL elements and Java functions-->
<xsl:stylesheet version="1.1"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xsltc="http://xml.apache.org/xalan/xsltc"
xmlns:IOutput="com.wm.pkg.xslt.extension.IOutputMap"
xmlns:Date="java.util.Date"
xmlns:IntDate="com.wm.pkg.xslt.samples.date.IntDate"
exclude-result-prefixes="IOutput Date IntDate">
<xsl:param name="second"/>
<xsl:param name="date" select=
"IntDate:getDate($year,$month,$day,$hour,$minute,$second)"/>
<xsl:output method="xml" indent="yes" />
<!--Adds a new element with a matching name for each text string in
the result tree-->
<xsl:template match="*">
<xsl:element name="{name()}">
<xsl:apply-templates />
</xsl:element>
</xsl:template>
filename packages/WmXSLT/pub/samples/xdocs/date.xml
month 05
day 21
hour 12
minute 12
second 12
The service uses the style sheet to transform the XML data from the source file specified
in filename , converts the data into an XML string, and puts the string in the pipeline as
the results parameter. The service also generates the name/value pairs and puts them
in the xslParamOutput document. The Results view will display results similar to the
following:
Tip: If the list of available connectors is long and you know the name of the
connector you want to use, you can locate the connector quickly by typing
its name in the box below Available Connectors. You can also use this
technique when selecting the connection pool and service in the next steps.
5. On the Connection Pool page of the wizard, select the connection pool for connecting
to the cloud application provider. Click Next.
6. On the Select Service page of the wizard, select the cloud virtual service that you
want the cloud connector service to invoke.
Note: If only one cloud virtual service is available to select, this page will not
appear.
7. Click Finish.
Designer creates the cloud connector service and displays the service details in the
cloud connector service editor.
8. Edit the cloud connector service as follows:
service has been created for your project. For more information about CloudStreams
connector virtual services, see Administering webMethods CloudStreams.
In pipeline, document, and input/output validation, the validation applies
constraints to variables. Constraints are restrictions on the structure or
content of variables. For more information about icons for constrained variables, see
"Viewing the Constraints Applied to Variables" on page 433.
Note: Designer displays the appropriate pages of the Operation and Business
Object Configuration wizard depending on whether the selected
operation requires metadata, such as a business object, fields, and
data types of fields. Further, Designer displays different Business
Object panels based on the scenarios mentioned in the Single or Multiple
Operations and Multiple Business Objects with dependencies section.
b. In the Select the Business Object page, select a business object and click Next.
c. In the Select Fields page, specify the fields or parameters to use in the request/
response body for the object.
The mandatory fields or parameters for the business object are selected by
default, and cannot be cleared.
d. You can add new custom fields as a String, String list, String Table, Document,
Document list, Document reference, Document reference list, Object, and Object
list by clicking and entering the custom field details in the Add a new custom
field dialog box. You can add custom fields only for the operations that support
custom fields. Custom field names should be unique within the available fields.
While adding a custom field as the child of another custom field, the field name
must be unique among all the children of the parent field. The following table
lists the different toolbar buttons available in the Select Fields page:
Select... To...
e. If you want to configure concrete types for the abstract types in the operation you
selected, click Next. If the operation you selected does not have any abstract type
field, click Finish.
f. In the Configure Data Types of Fields page, select a value from the list of values
next to the abstract type to configure concrete types for the abstract types in the
operation.
g. Click Finish. Designer displays a confirmation message. Click OK to update the
operation. Designer replaces the existing operation and associated metadata with
the updated or default information.
Single or Multiple Operations with Multiple Business Objects with dependencies
Designer displays different Business Object panels based on the following scenarios:
Single Operation has a single Business Object - This panel appears if an operation
has only a single business object. The operation has neither multiple interactions
nor records. Only the object is displayed in the panel. An example of a single
operation and a single object can be a "create" operation that contains only the
"contact" business object.
Single Operation has multiple Business Objects - This panel appears when a single
operation has multiple business objects. The panel has an optional Record
Number column (may not appear for some operations) and an Objects column.
An example of a single operation with multiple objects can be a "create"
operation that contains two business objects, "contact" and "account".
Single Operation has multiple Business Objects with dependencies - This panel appears
when a single operation has multiple business objects and some of the business
objects may have dependencies on other business objects. The panel has the Record
Number, Objects, and Requires columns. Here, you can set dependencies on
the records appearing at a lower level, for example, record 3 can be dependent on
(requires) record 1 or record 2.
Multiple Operations have multiple Business Objects - This panel appears when
multiple operations have multiple business objects. The panel has an optional
Record Number column (may not appear for some operations), an Interactions
column, and an Objects column. For example, the "create" and "update"
operations can act on the "account" and "contact" business objects respectively.
Multiple Operations have multiple Business Objects with dependencies - This panel
appears when multiple operations have multiple business objects and some
of the business objects may have dependencies on other business objects. The
panel has the Record Number, Interactions, Objects, and Requires columns.
For example, the "create" and "update" operations can act on the "account" and
"contact" business objects respectively. Also, you can set dependencies on the
records appearing at a lower level, for example, record 3 can be dependent on
(requires) record 1 or record 2.
5. On the Headers tab, do the following:
a. To include a header as part of the service signature, select the Active check box
next to the header.
b. To specify a default value for the header variable, click the Default Value box next
to the variable and type or paste a default value. If the variable is null in the
input pipeline, this default value will be used at run time. The value given at
run time always takes precedence over the default value. However, if the existing
default value is of type "fixed default", the overwrite will fail.
c. Repeat the above steps in the Output section of the tab to select the SOAP headers
whose contents you want to add to the service’s output pipeline.
Note: If the operation you selected on the Operation tab has mandatory headers,
Designer displays those headers in gray. You cannot edit or delete a
mandatory header.
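The default-value precedence described in these steps can be pictured with a small illustrative model. The function name and shape here are hypothetical, not Integration Server code:

```python
def resolve_value(pipeline_value, default_value, fixed_default=False):
    """Illustrative model of the default-value precedence described above."""
    if fixed_default:
        # A "fixed default" cannot be overwritten at run time.
        return default_value
    if pipeline_value is None:
        # The variable is null in the input pipeline, so the default is used.
        return default_value
    # Otherwise the value given at run time takes precedence.
    return pipeline_value
```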
6. If the operation you selected has predefined input parameters (for example, the
Query and QueryAll operations have the where and limit parameters), you can
configure them on the Parameters tab as follows:
a. To specify a default value for a parameter, click the Default Value box next to the
parameter. Then, type or paste a default value. If the variable is null in the input
pipeline, this default value will be used at run time. The value given at run time
always takes precedence over the default value. However, if the existing default
value is of type "fixed default", the overwrite will fail.
b. If a predefined parameter is not mandatory, you can activate/de-activate the
parameter by clicking the Active check box.
If a predefined parameter is mandatory, Designer displays the parameter in
gray and the Active check box is selected. You cannot de-activate or delete a
mandatory parameter.
c. To move a parameter up in the list, select the parameter and click . To move a
parameter down in the list, select the parameter and click .
7. If you want to add other parameters to the service signature, such as variables to be
replaced at run time with a user’s input, do the following on the Parameters tab:
a. Click .
b. Assign a name to the new parameter. If you want to rename the parameter later,
click its name and type a new name.
c. To specify a default value for the parameter, click the Default Value box next to the
parameter. Then, type or paste a default value. If the variable is null in the input
pipeline, this default value will be used at run time. The value given at run time
always takes precedence over the default value. However, if the existing default
value is of type "fixed default", the overwrite will fail.
d. You can activate/de-activate the parameter by clicking the Active check box, or
you can delete it by selecting the parameter and clicking .
e. To move a parameter up in the list, select the parameter and click . To move a
parameter down in the list, select the parameter and click .
8. On the Input/Output tab, do the following:
a. To have the server validate the input to the service against the service input
signature, select the Validate input check box.
b. To have the server validate the output to the service against the service output
signature, select the Validate output check box.
c. Review the service’s input and output signature and make any necessary
changes as follows:
The requestBody and responseBody sections are derived from the operation you
selected on the Operation tab. The value of $connectionAlias is derived from the
connection pool you specified when you first created the cloud connector service.
The fault section is derived from the operation response. You cannot change
these values in the editor.
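When Validate input or Validate output is selected, the server checks the pipeline against the declared signature. The idea can be sketched as follows; this simplified model (hypothetical names, required-field checking only) omits the type and constraint checking the real validation performs:

```python
def validate_signature(pipeline, signature):
    """Return (is_valid, missing_fields) for a pipeline checked against a
    simplified signature that maps field names to a 'required' flag."""
    missing = [name for name, required in signature.items()
               if required and name not in pipeline]
    return (not missing, missing)
```

For example, `validate_signature({"where": "a = 1"}, {"where": True, "limit": False})` reports the pipeline as valid because the only required field is present.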
9. On the Logged Fields tab, do the following:
a. Select the check boxes next to the fields you want to log at run time.
b. If you want to create an alias for a logged field to make it easier to locate in
Designer, click the Alias box next to a field and type the alias name.
For more information about logged fields, see the section on logging input and
output fields in Designer.
10. On the Summary tab, review the details about the cloud connector service.
11. On the Comments tab, enter descriptive comments or usage notes, if any.
12. Click File > Save to save your changes.
You edit a cloud connector service using the service editor in Designer.
Keep the following points in mind when editing a cloud connector service:
Before you edit a cloud connector service, create the service as described in "Creating
a Cloud Connector Service" on page 420.
Software AG CloudStreams provides a default connector virtual service for policy
enforcements, called WmCloudStreams.RestVS. If this service does not meet the
needs of your CloudStreams project, ensure that an appropriate connector virtual
service has been created for your project. For more information about CloudStreams
connector virtual services, see Administering webMethods CloudStreams.
In pipeline, document, and input/output validation, Integration Server applies
constraints to variables. Constraints are restrictions on the structure or
content of variables. For more information about icons for constrained variables, see
"Viewing the Constraints Applied to Variables" on page 433.
Note: Designer displays the appropriate pages of the Resource and Business
Object Configuration wizard depending on whether the selected resource
requires metadata, such as a business object, fields, and data types
of fields. Further, Designer displays different Business Object panels
based on the scenarios mentioned in the Single or Multiple Resources and
Multiple Business Objects with dependencies section.
d. In the Select the Business Object page, select a business object and click Next.
e. In the Select Fields page, specify the fields or parameters to use in the request/
response body for the object.
The mandatory fields or parameters for the business object are selected by
default, and cannot be cleared.
f. You can add new custom fields as a String, String list, String Table, Document,
Document list, Document reference, Document reference list, Object, and Object
list by clicking and entering the custom field details in the Add a new custom
field dialog box. You can add custom fields only for the operations that support
custom fields. Custom field names must be unique among the available fields.
When you add a custom field as the child of another custom field, its name
must be unique among all children of the parent field. The following table
lists the different toolbar buttons available in the Select Fields page:
Select... To...
g. If you want to configure concrete types for the abstract types in the resource you
selected, click Next. If the resource you selected does not have any abstract type
field, click Finish.
h. In the Configure Data Types of Fields page, select a value from the list of values
next to the abstract type to configure concrete types for the abstract types in the
resource.
i. Click Finish. Designer displays a confirmation message. Click OK to update the
resource. Designer replaces the existing resource and associated metadata with
the updated or default information.
j. In the Request Processing section, select an appropriate parsing type. The parsing
type determines how the service accepts the input.
Option Meaning
Note: If the resource you selected does not contain any requests or responses,
the Request Processing or Response Processing fields are not available.
Single Resource has a single Business Object - This panel appears if a resource has
only a single business object. The resource has neither multiple interactions nor
records. Only the object is displayed in the panel. An example of a single
resource and a single object can be a "create" resource that contains only the
"contact" business object.
Single Resource has multiple Business Objects - This panel appears when a single
resource has multiple business objects. The panel has an optional Record Number
column (may not appear for some resources) and an Objects column. An example
of a single resource with multiple business objects can be a "create" resource that
contains two business objects, "contact" and "account".
Single Resource has multiple Business Objects with dependencies - This panel appears
when a single resource has multiple business objects and some of the business
objects may have dependencies on other business objects. The panel has the Record
Number, Objects, and Requires columns. Here, you can set dependencies on
the records appearing at a lower level, for example, record 3 can be dependent on
(requires) record 1 or record 2.
Multiple Resources have multiple Business Objects - This panel appears when
multiple resources have multiple business objects. The panel has an optional
Record Number column (may not appear for some resources), an Interactions
column, and an Objects column. For example, the "create" and "update" resources
can act on the "account" and "contact" business objects respectively.
Multiple Resources have multiple Business Objects with dependencies - This panel
appears when multiple resources have multiple business objects and some of
the business objects may have dependencies on other business objects. The
panel has the Record Number, Interactions, Objects, and Requires columns.
For example, the "create" and "update" resources can act on the "account" and
"contact" business objects respectively. Also, you can set dependencies on the
records appearing at a lower level, for example, record 3 can be dependent on
(requires) record 1 or record 2.
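The Requires column described above effectively defines a dependency graph over the records, so the records must be handled in an order that puts each record after the records it requires. A hedged sketch of such an ordering (illustrative only; not how CloudStreams is implemented):

```python
def processing_order(requires):
    """Order record numbers so that each record follows the records it
    requires, e.g. {3: {1, 2}} means record 3 requires records 1 and 2."""
    ordered, done = [], set()
    pending = sorted(requires)
    while pending:
        ready = [rec for rec in pending if requires[rec] <= done]
        if not ready:
            # No record can proceed: the Requires settings are circular.
            raise ValueError("circular dependency among records")
        for rec in ready:
            ordered.append(rec)
            done.add(rec)
            pending.remove(rec)
    return ordered
```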
4. On the Headers tab, Designer displays the default HTTP transport headers for the
resource, along with their default values. At run time, while processing the headers,
Software AG CloudStreams substitutes values as necessary, for example, replaces
the “cn.sessionToken” value in the X-SFDC-Session header with the actual runtime
session ID. To customize the headers, do the following:
a. To specify a default value for the header variable, click the Default Value box to
the right of the variable and type or paste the new value. If the variable is null
in the input pipeline, this value will be used at run time. If the variable has an
existing default value defined in the Cloud Connector Descriptor, this value will
overwrite the existing value at run time. However, if the existing default value is
of type “fixed default”, the overwrite will fail as mentioned earlier.
b. To add a custom header to the service’s input pipeline, in the Input section of the
tab, click . Type a name for the header and provide a default value if desired.
c. To move a header up in the list, select the header and click . To move a header
down in the list, select the header and click .
d. To include a header as part of the service signature, select the Active check box
next to the header.
e. To delete a custom header that you added, select the header and click .
f. Repeat the above steps in the Output section of the tab to select the HTTP
transport protocol headers whose contents you want to add to the service’s
output pipeline.
information the parameter can hold, the parameterization style of the request,
and the dynamic default value needed to access the resource.
Currently, three parameter styles are supported: URI_CONTEXT,
QUERYSTRING_PARAM, and CFG_PARAM.
For more information about the supported parameter styles, see the section
Understanding REST Parameters in the document Administering webMethods
CloudStreams.
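Assuming URI_CONTEXT-style parameters are substituted into the resource path and QUERYSTRING_PARAM-style parameters are appended as a query string (see Administering webMethods CloudStreams for the authoritative definitions), a request URL could be assembled roughly like this illustrative sketch:

```python
from urllib.parse import urlencode

def build_request_url(base_url, path_template, uri_params, query_params):
    """Assemble a request URL from path-style and query-style parameters."""
    path = path_template
    for name, value in uri_params.items():
        # URI_CONTEXT-style parameters fill placeholders in the path.
        path = path.replace("{" + name + "}", str(value))
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    if query_params:
        # QUERYSTRING_PARAM-style parameters become the query string.
        url += "?" + urlencode(query_params)
    return url
```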
b. To specify a default value for the parameter, click the Default Value box to the right
of the parameter. Then, type or paste the default value. The default value is used
at run time, if the parameter value is not explicitly specified in the input pipeline.
Also, this default value will overwrite any existing default value that is defined
in the Cloud Connector Descriptor, at run time. However, if the existing default
value is of type “fixed default”, the overwrite will fail as mentioned earlier.
Note: You cannot specify a default value for a parameter with data type as
"Record".
The requestBody and responseBody sections are derived from the REST resource
you selected on the Resource tab. The value of $connectionAlias is derived from
the connection pool you specified when you first created the cloud connector
service. The status, statusMessage, and fault values are derived from the resource
response. You cannot change these values in the editor.
7. On the Logged Fields tab, do the following:
a. Select the check boxes next to the fields you want to log at run time.
b. If you want to create an alias for a logged field to make it easier to locate in
Designer, click the Alias box next to a field and type the alias name.
For more information about logged fields, see the section on logging input and
output fields in Designer.
8. On the Summary tab, review the details about the cloud connector service.
9. On the Comments tab, enter descriptive comments or usage notes, if any.
10. Click File > Save to save your changes.
21 Running Services
■ Using Launch Configurations to Run Services .......................................................................... 436
■ Supplying Input Values to a Service .......................................................................................... 438
■ Running a Service ...................................................................................................................... 454
■ Viewing Results from Running a Service .................................................................................. 455
■ Running Services from a Browser ............................................................................................. 459
When you run a service, Designer invokes the service (just as an ordinary IS client
would) and receives its results. The service executes once, from beginning to end (or
until an error condition forces it to stop). The service executes on the Integration Server
on which you have an open session or, if you are using a launch configuration, on the
Integration Server specified in the launch configuration.
Results from the service are returned to Designer and displayed in Results view. This
allows you to quickly examine the data that the service produces and optionally change
it or save it to a file. You can use the saved data as input for a later debug session or to
populate the pipeline during a debugging session.
Note: You also create launch configurations to debug flow services. You can use a
launch configuration created for running a service when you debug a flow
service. Similarly, you can use a launch configuration that you created for
debugging a flow service when you run a service. For more information about
launch configurations for debugging flow services, see "Creating Launch
Configurations for Debugging Flow Services" on page 465.
Designer requires launch configurations to run services. However, if a service does not
have an associated launch configuration and you bypass the Run Configurations dialog
boxes when running the service, Designer creates one on the fly and saves it in your
workspace. You can use this configuration from one session to the next. In fact, Designer
reuses this configuration every time you run or debug the service without creating
another launch configuration.
By default, Designer saves launch configurations locally in an unexposed location in
your workspace. However, you might want to share launch configurations with other
developers. You can specify that Designer save a launch configuration to a shared file
within your workspace; this location will be exposed. On the Common tab in the Run
Configurations dialog box, select the Shared file option and provide a workspace location
in which to save the file.
You might consider creating a launch configuration for each set of data that you
routinely use to test your service. This will provide you with a ready-made set of test
cases against which to verify the service when it is modified by you or other developers
in the future. Many sites establish a workspace project directory just for holding sets of
test data that they generate in this manner.
Note: If you are running or debugging the service, Designer displays the Enter
Input for serviceName dialog box that you use to specify input.
Note: The setting applies to all String-type variables in the root document of
the input signature. The setting does not apply to String-type variables
within Document Lists. You define how you want to handle String-type
variables within Document Lists separately when you assign values to
Document Lists variables. For more information, see "Specifying Values for
a Document List Variable" on page 449.
5. Enter values for the input variables. For specific information about how to specify a
value based on a variable’s data type, see one of the following:
6. To save the input values to a file for use in later debugging, click Save or Save Inputs.
In the Save As dialog box, specify the name and location of the file to which you want
the values saved. Click Save.
Note: The Perform pipeline variable substitution check box is not available when
using a launch configuration.
3. If you specified a global variable as the value of the String variable (for example,
%myFTPUserName%), select the Perform global variable substitution check box so that
Integration Server replaces the variable name with the global variable value at run
time.
Note: The Perform global variable substitution check box is not available when using
a launch configuration.
Note: If a pipeline variable and a global variable have the same name and you
select both the Perform global variable substitution and Perform pipeline variable
substitution check boxes, Integration Server uses the value of the pipeline
variable.
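The notes above amount to a simple resolution order, which can be modeled like this (an illustrative sketch with hypothetical names, not Integration Server code):

```python
def substitute(name, pipeline_vars, global_vars,
               pipeline_substitution, global_substitution):
    """Resolve %name% according to the precedence described above."""
    if pipeline_substitution and name in pipeline_vars:
        # When both check boxes are selected, the pipeline variable wins.
        return pipeline_vars[name]
    if global_substitution and name in global_vars:
        return global_vars[name]
    # Neither substitution applies: leave the literal %name% text.
    return "%" + name + "%"
```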
4. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
To append a String to the end of the list, click Add Row and specify a value in
the Value column. To insert a String into the middle of the list, select the
String below where you want to add the new one and click Insert Row. To remove
a String from the list, select the String and click Delete Row.
Note: The Perform pipeline variable substitution check box is not available when
using a launch configuration.
4. If you specified a global variable as the value of the String variable (for example,
%myFTPUserName%), select the Perform global variable substitution check box so that
Integration Server replaces the global variable name with the global variable value at
run time.
Note: The Perform global variable substitution check box is not available when using
a launch configuration.
Note: If a pipeline variable and a global variable have the same name and you
select both the Perform global variable substitution and Perform pipeline variable
substitution check boxes, Integration Server uses the value of the pipeline
variable.
5. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box. (If
you select this check box, Integration Server always applies the value you specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
6. After adding the String List elements you want and specifying values, do one of the
following:
If you are working with a launch configuration, click Apply on the Input tab to
save the value you entered. You can continue to specify values or click Run to
execute the service.
If using the Enter Input for serviceName dialog box, continue to specify input
values, or if you are finished, click OK to close the dialog box and execute the
service.
If using the Enter Input for variableName dialog box, click OK to close the dialog box.
To insert a row in the middle, in the table viewer, select the row below where
you want to add the new one and click Insert Row.
You can mix literal and pipeline variables. For example, if you specify
(%areaCode%) %Phone%, the resulting String would be formatted to include the
parentheses and space. If you specify %firstName% %initial%. %lastName%,
the period and spacing would be included in the value.
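The mixing of literal text and %variable% references described above behaves like simple template expansion. A minimal sketch (illustrative, not the server's actual implementation):

```python
import re

def expand(template, pipeline):
    """Replace each %var% with its pipeline value, keeping literal text
    such as parentheses, periods, and spacing intact."""
    return re.sub(r"%(\w+)%",
                  lambda m: str(pipeline.get(m.group(1), m.group(0))),
                  template)
```

For example, expanding "(%areaCode%) %Phone%" against a pipeline that holds both variables produces the formatted phone number with the parentheses and space preserved.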
4. If you assigned a value using the % symbol along with a pipeline variable (for
example, %myFTPServer%), select the Perform pipeline variable substitution check box so
that during service execution Integration Server replaces the pipeline variable name
with the run-time value of the variable.
Note: The Perform pipeline variable substitution check box is not available when
using a launch configuration.
5. If you specified a global variable as the value of the String variable (for example,
%myFTPUserName%), select the Perform global variable substitution check box so that
Integration Server replaces the variable name with the global variable value at run
time.
Note: The Perform global variable substitution check box is not available when using
a launch configuration.
Note: If a pipeline variable and a global variable have the same name and you
select both the Perform global variable substitution and Perform pipeline variable
substitution check boxes, Integration Server uses the value of the pipeline
variable.
6. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
7. After adding the table rows and columns you want and assigning values, do one of
the following:
If you are working with a launch configuration, click Apply on the Input tab to
save the value you entered. You can continue to specify values or click Run to
execute the service.
If using the Enter Input for serviceName dialog box, continue to specify input
values, or if you are finished, click OK to close the dialog box and execute the
service.
If using the Enter Input for variableName dialog box, click OK to close the dialog box.
Tip: Click to view and/or update your preferences for how Designer displays
and expands the contents of Document variables.
Note: If the Document variable has no defined content, you can add String name/
value pairs and then assign values. For more information, see "Specifying
Values for a Document Variable with No Defined Content" on page 447.
Use the following procedure to specify values for a Document variable. You perform this
procedure from:
The Input tab if you are working with a launch configuration
The Enter Input for serviceName dialog box if you are running or debugging a service
The Enter Input for variableName dialog box if you are assigning a value to a pipeline
variable
2. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
Note: If the Document already has defined content, see "Specifying Values for a
Document Variable that Has Defined Content" on page 446.
Use the following procedure to specify values for a Document variable. You perform this
procedure from:
The Input tab if you are working with a launch configuration
The Enter Input for serviceName dialog box if you are running or debugging a service
The Enter Input for variableName dialog box if you are assigning a value to a pipeline
variable
To specify String name/value pairs for a Document variable with no defined content
1. Select the Document variable that has no defined content.
Designer displays a document viewer at the bottom of the screen. You use the
document viewer to add String name/value pairs.
2. Do the following to append, insert, and delete String name/value pairs:
3. For each name/value pair you added, in the Name column type a name.
Note: If you leave a Name column empty, Designer will discard the row.
4. For each name/value pair you added, in the Value column type a value:
If you want to assign a literal value, type a String value.
If you want to derive the value from a String variable in the pipeline, type the
name of that variable enclosed in % symbols (for example, %Phone%).
You can mix literal and pipeline variables. For example, if you specify
(%areaCode%) %Phone%, the resulting String would be formatted to include the
parentheses and space. If you specify %firstName% %initial%. %lastName%,
the period and spacing would be included in the value.
5. If you assigned a value using the % symbol along with a pipeline variable (for
example, %Phone%), select the Perform pipeline variable substitution check box so that the
server performs variable substitution at run time.
Note: The Perform pipeline variable substitution check box is not available when
using a launch configuration.
6. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
Tip: Click to view and/or update your preferences for how Designer displays
and expands the contents of Document variables.
Use the following procedure to specify values for a Document List variable. You perform
this procedure from:
The Input tab if you are working with a launch configuration
The Enter Input for serviceName dialog box if you are running or debugging a service
The Enter Input for variableName dialog box if you are assigning a value to a pipeline
variable
To insert a Document into the middle of the list, select the Document below
where you want to add the new one and click Insert Row. To remove a Document
from the list, select the Document and click Delete Row.
How you assign values is based on the data type of a variable. For help for how to
specify a value for the variable, see one of the following:
4. For each top-level Document that you added to the Document List, select how you
want to handle String variables within that Document that have no value.
If you want to use empty Strings (that is, zero-length Strings), select either
the check box next to the top-level Document in the tree or the Include empty
values for String Types check box. Designer will select both check boxes.
If you want to use null values, clear the check box next to the top-level Document
in the tree or the Include empty values for String Types check box. Designer will clear
both check boxes.
Designer allows you to define this setting for each Document within a Document
List.
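The empty-versus-null choice for String-type variables can be pictured with a short sketch (hypothetical names; illustrative only):

```python
def fill_unset_strings(document, include_empty_values):
    """Give unset String fields "" or None per the check-box setting above."""
    return {name: ("" if value is None and include_empty_values else value)
            for name, value in document.items()}
```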
5. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
If you are working with a launch configuration, click Apply on the Input tab to
save the value you entered. You can continue to specify values or click Run to
execute the service.
If using the Enter Input for serviceName dialog box, continue to specify input
values, or if you are finished, click OK to close the dialog box and execute the
service.
If using the Enter Input for variableName dialog box, click OK to close the dialog box
To insert an Object into the middle of the list, select the Object below where
you want to add the new one and click Insert Row. To remove an Object from the
list, select the Object and click Delete Row.
3. If you want Integration Server to use the value you specified only when the variable
does not contain a value at run time, clear the Overwrite pipeline value check box.
(If you select this check box, Integration Server will always apply the value you
specified.)
Note: The Overwrite pipeline value check box is not available when using a launch
configuration.
4. After adding the Object List elements you want and specifying values, do one of the
following:
If you are working with a launch configuration, click Apply on the Input tab to
save the value you entered. You can continue to specify values or click Run to
execute the service.
If using the Enter Input for serviceName dialog box, continue to specify input
values, or if you are finished, click OK to close the dialog box and execute the
service.
If using the Enter Input for variableName dialog box, click OK to close the dialog box.
Running a Service
When you run a service, you can select the launch configuration that Designer uses to
run the service. If a launch configuration does not exist for a service, Designer creates a
launch configuration and immediately prompts you for input values and then runs the
service. Designer saves the launch configuration in your workspace.
Note: If a flow service expects an XML document as input, you must create a launch
configuration and debug the service.
To run a service
1. In Package Navigator view, select the service you want to run.
2. In Designer, select Run > Run As > Run Service.
3. If multiple launch configurations exist for the service, use the Select Launch
Configuration dialog box to select the launch configuration that you want Designer to
use to run the service.
4. If the launch configuration is set up to prompt the user for input values or there is
no launch configuration, in the Enter Input for serviceName dialog box, specify input
values for the service.
If the service has no input parameters, Designer displays the No input dialog box
with a message to that effect. If you do not want Designer to display this message
when the service is run again, select the Do not show this dialog again check box. You
can reverse this selection by selecting Always show the No input dialog on the Run/
Debug preferences page.
5. Click OK. For more information about supplying input values, see "Entering Input
for a Service" on page 438.
Note: If you type in input values, Designer discards the values you specified
after the run. If you want to save input values, create a launch
configuration. For instructions, see "Creating a Launch Configuration for
Running a Service" on page 437.
Designer runs the service and displays the results in the Results view. If the launch
configuration specifies an XML file to use as input, Designer submits the file to the
server, which parses it into a node object and then passes it to the selected service.
Note: You can open the Results View preferences page by clicking the View Menu
button ( ) and selecting Preferences.
For each service execution, the Results view can display the following tabs:
Messages tab displays any messages from Designer about the launch configuration
and any exceptions thrown by the service during execution.
Call Stack tab identifies the flow step that generated the error and lists its antecedents;
the Call Stack tab is only applicable to a flow service.
Pipeline tab displays the contents of the pipeline at the time the service finished
executing.
Messages Tab
The Messages tab in the Results view displays the time the launch configuration started
and completed executing, the name and location of the launch configuration, and
any error and exception messages that Integration Server generated during service
execution. If you did not use a launch configuration when running or debugging the
service, Designer displays the name and location of the launch configuration it created to
execute the service.
Pipeline Tab
The Pipeline tab in the Results view contains the contents of the pipeline after the service
finishes executing.
Keep the following points in mind when examining the Pipeline tab in the Results view.
The Pipeline tab shows all variables that the service placed in the pipeline, not just
those that were declared in the service’s input/output parameters.
Variables that a service explicitly drops from the pipeline do not appear on the
Pipeline tab.
When you select a variable in the Pipeline tab, Designer displays details about the
variable value in the details panel in the lower half of the Pipeline tab. For array
variables, Designer displays the index number and position for each item in the
array. You can copy and paste values from the details panel in the Pipeline tab.
You can browse the contents of the Pipeline tab, but you cannot edit it directly.
However, if you debug a flow service, you can edit the contents of the pipeline. For
more information, see "Modifying the Flow Service Pipeline while Debugging" on
page 477.
You can save the contents of the Pipeline tab to a file and use that file to restore the
pipeline at a later point. For additional information about saving and restoring the
contents of the Pipeline tab in the Results view, see "Saving the Results" on page
457 and "Restoring the Results" on page 458.
When a failure occurs within a Java service, the Pipeline tab represents the state of
the pipeline at the point when that Java service was initially called. If the Java service
made changes to these values before throwing the exception, those changes will not
be reflected on the Pipeline tab.
If you run a flow service and an error occurs, the Pipeline tab will only show results
up to the point of the error.
Variables with object types that Designer does not directly support will appear in the
Pipeline tab, but because Designer cannot render the values of such objects, a value
does not appear in the Value column. Instead, the Value column displays the object’s
Java class name.
Variables that contain com.wm.util.Table objects appear as Document Lists in the
Pipeline tab.
command to run a service, your browser (not Designer) actually invokes the service and
receives its results.
If you are developing services that will be invoked by browser-based clients, particularly
ones whose output will be formatted using output templates, you will want to test those
services using the Run in Browser command to verify that they work as expected.
Note: Run in Browser only submits String and String List inputs to the service. If
you want to pass other types of inputs, use the Run > Run As > Run Service
option or set the values in the service instead of entering them in the Enter Input
for serviceName dialog box.
4. If you want to pass empty variables (variables that have no value) to the service,
select the Include empty values for String Types check box. When you select this option,
empty Strings are passed with a zero-length value. If you do not select this option,
Designer excludes empty variables from the query string that it passes to the
browser.
5. If you want to save the input values that you have entered, click Save. Input values
that you save can be recalled and reused in later tests. For more information about
saving input values, see "Saving Input Values" on page 453.
6. Click OK. Designer builds the URL to invoke the service with the inputs you have
specified, launches your browser, and passes it the URL.
If the service executes successfully, the service results appear in your browser. If
an output template is assigned to the service, the template will be applied to the
results before they are returned.
If the service execution fails with an error, an error message is displayed in the
browser.
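The effect of the Include empty values for String Types option on the generated query string can be sketched in plain Java. This is a hypothetical helper for illustration only, not Designer’s actual implementation:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringSketch {
    // Builds a query string from String inputs. When includeEmpty is
    // false, empty variables are excluded from the query string, as
    // when the check box is cleared; when true, they are passed with
    // a zero-length value.
    static String buildQuery(Map<String, String> inputs, boolean includeEmpty)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : inputs.entrySet()) {
            String value = e.getValue() == null ? "" : e.getValue();
            if (value.isEmpty() && !includeEmpty) {
                continue; // excluded from the query string entirely
            }
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(value, "UTF-8"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> inputs = new LinkedHashMap<>();
        inputs.put("name", "orders");
        inputs.put("filter", "");
        System.out.println(buildQuery(inputs, false)); // name=orders
        System.out.println(buildQuery(inputs, true));  // name=orders&filter=
    }
}
```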
Designer provides a range of tools to assist you during the debugging phase of
development. For example, you can:
Run flow services, specify their input values, and inspect their results.
Examine the call stack and the pipeline when an error occurs.
Execute services in debug mode, a mode that lets you monitor a flow service’s
execution path, execute its steps one at a time, specify points where you want to
halt execution, and examine and modify the pipeline before and after executing
individual flow service steps.
You select the Step Over command for the last step in the flow service.
You forcefully terminate the debug session by selecting Run > Terminate.
You exit Eclipse.
View Description
Debug view Displays the debug sessions and contains tools to manage
the debugging. When debugging a service in Designer, the
Debug view displays the stack frames associated with each
launch configuration. Debug view contains commands to
start, stop, and step through a service.
Variables view Displays information about the set of variables for the
selected stack frame in Debug view. When using Designer
to debug a service, Variables view displays the contents
of the pipeline prior to the execution of the flow step in
the selected stack frame in Debug view. The details pane
in Variables view displays detailed information about
the selected variable. You can edit the variable value in
the detail pane. You can save or modify the contents of
Variables view before resuming execution. Variables view
will be blank after the service executes to completion.
For each launch configuration that you use to debug a service, Debug view contains
a launch configuration stack frame.
The launch configuration stack frame contains an Integration Server thread stack
frame. The launch configuration can appear in the Debug view multiple times, once
for each debug session.
The Integration Server thread stack frame contains the service thread stack
frame.
If a debug session is suspended, the service thread stack frame displays
“(suspended)” after the service name.
The service thread stack frame contains the name of the flow service and the step at
which the debug session is suspended.
If you stepped into a child INVOKE service or into a transformer in a MAP step,
Debug view displays the parent service (and its flow step) and the child service
(and its flow step) under the service thread stack frame. Designer displays the child
service above the parent service because the child service is at the top of the call
stack structure. Designer highlights the child step at which the debug session is
suspended. Variables view displays the contents of the pipeline that will be passed to
the child service.
If you stepped into a MAP step to invoke a transformer, under the service thread
stack frame, Designer displays MAPINVOKE before Designer executes the
transformer. Designer displays MAPCOPY right after the service executes but before
Designer executes the links from the transformer to the variables in Pipeline Out.
7. If you want Designer to pass the service an IData that contains input values for each
input variable in the service signature, do the following:
a. On the Input tab, select Use IData.
b. Specify the input values to save with the launch configuration by doing one of
the following:
Type the input value for each service input parameter. For more information
about providing input values, see "Entering Input for a Service" on page 438.
To load the input values from a file, click Load to locate and select the file
containing the input values. If Designer cannot parse the input data, it
displays an error message indicating why it cannot load the data. For more
information about loading input values from a file, see "Loading Input
Values" on page 453.
To load input values from a file and replace the service input signature with
the structure and data types in a file, click Load and Replace.
c. If you want to pass empty variables (variables that have no value) to the service,
select the Include empty values for String Type check box. When you select this
option, empty strings are passed with a zero-length value. If you do not select
this option, Designer excludes empty values from the IData it passes to the service
as input.
8. If you want Designer to send the flow service an XML document as input, do the
following:
a. Select Use XML.
b. In the Location field, enter the path and file name of the XML document to use as
input or click Browse to locate and select the XML file.
Designer displays the contents of the XML document on the Input tab.
9. If you selected the Use IData option and you want to save the input values that you
have entered, click Save. Input values that you save can be recalled and reused in
later tests.
10. Click Apply.
11. If you want to execute the launch configuration, click Debug. Otherwise, click Close.
Open a child flow so that you can debug the individual flow steps
within it: Step Into
Execute flow steps one after another up to a specified flow step: Debug to Here
Return to the parent flow from a child that you have stepped into: Step Return
To return to the MAP step pipeline, select Run > Step Return or click on the
Debug view toolbar.
b. Select Run > Step Over or click to execute the links between the transformer to
the variables in Pipeline Out.
c. Repeat the above steps for each transformer that you want to individually execute
in the MAP step. If you want to execute the next transformer without stepping
through transformer and link execution, select Run > Step Over.
5. If the transformer is not a flow service or you do not want to step into it, select Run >
Step Over or click on the Debug view toolbar.
6. If you want to return to the parent without stepping through the entire MAP, select
Run > Step Return or click on the Debug view toolbar. This executes the remaining
transformers in the MAP, returns to the parent flow service, and selects (but does not
execute) the next step in the parent flow service.
Note: If you select Step Return, Designer executes the remaining steps in the
MAP and returns to the parent automatically. However, Designer stops
executing if it encounters an enabled breakpoint.
You can use Step Into to step into a transformer that is not a flow service.
In Debug view, under the service thread stack frame, Designer displays
MAPINVOKE before Designer executes the transformer. Designer
displays MAPCOPY right after the service executes but before Designer
executes the links from the transformer to the variables in Pipeline Out.
Note: In the ForEach mapping, if you do not want to step into a particular step,
select Run > Step Over or click on the Debug view toolbar.
4. If you want to return to the parent without stepping through the entire ForEach
mapping, select Run > Step Return or click on the Debug view toolbar. This
executes the remaining steps (links, transformers, or nested ForEach mappings) in
the ForEach mapping, returns to the parent, and selects (but does not execute) the
next step in the parent flow service or ForEach mapping.
Note: If you select Step Return, Designer executes the remaining steps in the ForEach
mapping and returns to the parent automatically. However, Designer stops
executing if it encounters an enabled breakpoint.
Related Topics
Stepping Into and Out of a MAP Step
so that you can examine the pipeline in the Variables view before and after that segment
executes.
When you execute a service that contains a breakpoint or call a child service that
contains a breakpoint, the service is executed up to, but not including, the designated
breakpoint step. At this point, processing stops and the debug session suspends. To
resume processing, you can execute one of the step commands or select Run > Resume.
After you resume the debug session, Designer stops at any subsequent breakpoints.
When working with breakpoints, keep the following points in mind:
Breakpoints are persistent in Designer.
Breakpoints are also local to your Designer workspace. Breakpoints that you set in
your workspace do not affect other developers or users who might be executing or
debugging services in which you have set breakpoints.
Breakpoints are only recognized when you execute a service in a debug session.
Breakpoints are ignored when you run a service.
To set a breakpoint in a service, you must have Read access to the service. However, if
the service is invoked within another service (a top-level service) to which you have
Read access, you can set a breakpoint on the service within the top-level service.
When you delete a flow step or transformer that contains a breakpoint, Designer
removes the breakpoint.
You can use breakpoints as markers in your flow services. To do this, assign a
breakpoint to the flow step that you want to use as a marker. In Breakpoints view,
you can quickly go to the flow step by right-clicking the breakpoint and selecting Go
to File or by double-clicking the breakpoint.
Breakpoints can be used in flow services that contain transactions, however,
the breakpoint must be set before the transaction starts or after the transaction
commits or rolls back. Do not set breakpoints within the transaction. If you do so,
transactionality will not be honored and the flow service may throw an exception.
When using breakpoints in a flow service that performs an asynchronous request/
reply with webMethods messaging where Universal Messaging is the messaging
provider, insert the breakpoint at or before the service that initiates the request and/
or after the service that retrieves the request. That is, insert the breakpoint at or
before the invocation of pub.publish:publishAndWait or pub.publish:deliverAndWait and/or after
the invocation of pub.publish:waitForReply. Do not set a breakpoint on a step that occurs
after initiating the request but before retrieving the reply. When you run a flow
service in debug mode, Designer considers every Run, Step Into, Step Over action to be
a new service execution. The request/reply channel created by the publishing service
is removed when Designer encounters the breakpoint. Setting a breakpoint after the
publish but before retrieving the reply results in the removal of the request/reply
channel, which means that there is no channel from which the pub.publish:waitForReply
service can retrieve a reply document.
You can use Breakpoints view to manage your existing breakpoints.
You can import/export breakpoints from one workspace to share them with other
developers.
Breakpoint States
Breakpoints can have the following states.
Note: You can also use Breakpoints view to remove breakpoints or select Run >
Remove All Breakpoints.
Note: You can also enable and disable breakpoints using Breakpoints view.
Important: The run-time effect of disabling a step is the same as deleting it. Disabling a
key step or forgetting to re-enable a disabled step or transformer can break
the logic of a service and/or cause the service to fail. Designer allows you to
disable any step or transformer in a flow service, but it is your responsibility
to use this feature carefully.
4. Continue debugging the service using the step commands or selecting Run > Resume.
Note: You can also change a variable value by selecting the variable and then
modifying the value in the detail pane.
Dropping Variables
When dropping variables from the pipeline while debugging the service, keep the
following points in mind:
You can only modify the pipeline when a subsequent step in the service exists to
which to pass the pipeline values. You cannot modify the values of the pipeline after
the service ends. However, if you debug the service using the step commands, you
can modify the pipeline values for the next flow step in the service.
When you drop variables from the pipeline, the changes only apply to the current
debugging session. The service is not permanently changed.
You can only drop existing variables. You cannot add new variables to the pipeline.
You can only change the pipeline for the top-most stack frame in the debug session.
exact state later in the debugging process. There are three ways to save the contents of
the pipeline:
Manually save the contents when you debug a service using Designer.
Automatically save the pipeline at run time using the Pipeline debug property. For
more information about this property, see "Automatically Saving or Restoring the
Pipeline at Run Time" on page 188.
Programmatically save the pipeline at run time by invoking pub.flow:savePipelineToFile
at the point where you want to capture the pipeline. For more information about
using this service, see the webMethods Integration Server Built-In Services Reference.
When you save a pipeline, it is saved in a file in XML format. The file you create can be
used to:
Manually load the pipeline into Variables view while debugging a service.
Automatically load the pipeline at run time using the Pipeline debug property.
Load a default set of input values when creating a launch configuration.
Load a set of input values into the Input dialog box when debugging a service with
Designer.
Dynamically load the pipeline at run time using the pub.flow:restorePipelineFromFile
service.
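As an illustration of the XML format, a saved pipeline containing two String variables might look roughly like the following. The wrapper element and javaclass attribute follow the IData XML coder conventions, but the exact classes can vary by release, and the variable names here are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="orderId">12345</value>
    <value name="status">shipped</value>
  </record>
</IDataXMLCoder>
```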
To save the pipeline to your local file system, click on the Variables view
toolbar. Specify a location and name for the file in the Save As dialog box. Click
Save.
To save the pipeline to the IntegrationServer_directory\instances
\instance_name\pipeline directory on the machine on which Integration Server
resides, click on the Variables view toolbar. In the Save Pipeline to serverName
dialog box, specify the name for the file containing the pipeline contents.
4. Continue debugging the service using the step commands or selecting Run > Resume.
2. In the debug session, use the step command or a breakpoint to reach the step for
which you want to load the saved pipeline.
3. Do one of the following:
To load the pipeline from your local file system, click on the Variables view
toolbar. In the Open dialog box, navigate to and select the file. Click Open.
To load the pipeline from the IntegrationServer_directory\instances
\instance_name\pipeline directory on the machine on which Integration Server
resides, click on the Variables view toolbar. In the Load IData from Server dialog
box, specify the name for the file containing the pipeline contents.
4. Continue debugging the service using the step commands or selecting Run > Resume.
Note: The server logs exceptions thrown by individual services to the error log. For
more information about using the error log, see webMethods Integration Server
Administrator’s Guide.
Debug Level Defines What and How Much the Server Logs
To define the type and amount of information that the server logs, set the server’s debug
level. The debug level settings range from Off, indicating you want the server to log
nothing, to Trace, indicating that you want the server to maintain an extremely detailed
log.
Use the Integration Server Administrator Settings > Logging > View Server Logger Details
screen to set debug levels that Integration Server uses for each of its facilities. When
debugging an issue, you can use this screen to increase the logging level for a specific
Integration Server facility. For example, you might set the logging level for the Services
facility to Trace.
When you have not defined a specific debug level for a facility, Integration Server uses a
default debug level. You configure the default by setting the logging level for the Default
facility on the Settings > Logging > View Server Logger Details screen. Integration Server also
uses the Default facility setting as the value of the watt.debug.level server configuration
parameter. If you do not define a default debug level, Integration Server uses Info, which
means the server logs informational, warning, error, and fatal messages.
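The threshold behavior can be sketched in plain Java. This is only an illustration of how a single level setting filters messages; it is not Integration Server's actual logging code, and the enum is a simplified stand-in for the real set of levels:

```java
public class DebugLevelSketch {
    // Simplified stand-in for the server's debug levels, ordered from
    // Off (log nothing) to Trace (extremely detailed log).
    enum Level { OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE }

    // A message is logged when its level is at or below the configured
    // threshold, and the threshold is not Off.
    static boolean logged(Level threshold, Level message) {
        return threshold != Level.OFF
                && message.ordinal() <= threshold.ordinal();
    }

    public static void main(String[] args) {
        // At the default Info level, informational, warning, error, and
        // fatal messages are logged; debug and trace messages are not.
        System.out.println(logged(Level.INFO, Level.WARN));  // true
        System.out.println(logged(Level.INFO, Level.DEBUG)); // false
    }
}
```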
When you start the server, you can temporarily override the default debug level by
specifying an alternative level on the startup command. This setting remains in effect
until you shut down and restart the server.
For more information about the available debug levels, setting the debug level, and
configuring server logging, see webMethods Integration Server Administrator’s Guide.
Important: Because debug levels above Info can produce lots of detail and can quickly
generate an extremely large log file, do not run your server at the Debug or
Trace levels except for brief periods when you are attempting to troubleshoot
a particular issue.
the server log. You might use it to post progress messages at certain points in a service
(to indicate whether certain segments of code were executed) or to record the value of a
particular variable in the log file so you can examine it after the service executes. In the
following example, the last two messages are progress messages that were posted to the
server log using pub.flow:debugLog.
2012-03-28 16:56:12 EDT [ISS.0028.0005C] Loading LogDemo package
2012-03-28 16:56:53 EDT [ISC.0081.0001E] New LogDemo:demoService
2012-03-28 16:57:56 EDT [ISP.0090.0004C]
begin database update
2012-03-28 16:57:56 EDT [ISP.0090.0004C]
database update completed
Key Description
message A String that defines the message that you want written to
the server log. This can be a literal string. However, for debugging
purposes, it is often useful to link this parameter to a pipeline
variable whose run-time value you want to capture.
3. Save the service. (If you are using your own IDE, you will need to recompile the
service, register it again on Integration Server, and reload its package.)
4. Execute the service.
For additional information about pub.flow:debugLog, see the webMethods Integration Server
Built-In Services Reference.
Key Description
3. Save the service. (If you are using your own IDE, you will need to recompile the
service, register it again on Integration Server, and reload its package.)
4. Execute the service. For additional information about pub.flow:tracePipeline, see the
webMethods Integration Server Built-In Services Reference.
Designer provides the ability to debug a Java service by debugging the Java class
associated with the Java service maintained in the Service Development Project.
Note: As a secondary method of debugging a Java service, you can debug a Java
service that is running in Integration Server. This method requires setup on
the Integration Server to change the way the server starts and that can affect
the server’s performance. For more information, see "About Debugging a Java
Service while it Runs in Integration Server " on page 497.
The functionality that Designer provides to debug a Java service by debugging its
Java class is an extension of the Eclipse Java Development Tools (JDT) debugger. The
JDT debugger acts on Java classes that are in the local workspace; it cannot debug the
Java service in Integration Server. As a result, to debug a Java service, you use the
JDT debugger to debug the service’s Java class that Designer maintains in a Service
Development Project. Debugging the Java class might produce different results than
when the Java service executes in Integration Server, depending on differences in
JVM system properties, date/time, time zone information, locale, language settings,
encodings, etc.
When debugging a Java service in this way, you can debug the primary method
and shared code of the Java class that represents the Java service. To debug the Java
class, you launch it in debug mode and use the JDT debugger to suspend/resume the
execution of the Java class, inspect variables, and evaluate expressions.
The actions you take to use the debugger are:
Optionally set breakpoints to identify locations where you want the debugger
to suspend execution when running the Java class in debug mode. For more
information, see "How to Suspend Execution of a Java Class while Debugging" on
page 494.
Generate a test harness, which is a Java class that you generate for the Java service
you want to debug. The logic that Designer generates for the test harness sets up the
inputs, invokes the Java class, and displays the outputs.
Optionally create a Java Application launch configuration to configure settings for
debugging the Java class. For example, you might want to set JVM arguments to
match the settings Integration Server uses so that your test more closely matches
how the Java service would execute in Integration Server. For more information,
see "About Java Application Launch Configuration" on page 491. If you do not
create a launch configuration, Designer creates one on the fly and saves it locally in
an unexposed location of your workspace.
Launch the test harness in debug mode. The test harness prompts for input values and
then launches the Java class you want to debug in debug mode.
By default, the debugger executes the Java class using the JRE in the Service
Development Project where the Java service resides. You can change the Service
Development Project’s JRE by updating the project’s Java Build Path property. You
can also specifically identify the JRE to use for debugging by identifying the JRE in
the Java Application launch configuration.
If the Java class being debugged invokes a service, the invoked service runs in
Integration Server. The debugger treats the statement to invoke a service like any
executable line of code in the Java class; that is, you can Step Over it and see results
from it. You cannot use the debugger to Step Into the invoked service.
If the debugger suspends execution of the service, Designer switches to the Debug
perspective. The Debug view will show the test harness class and be positioned
at the statement where the execution was suspended. You can use the other views
in the Debug perspective to inspect the state of the Java service to this point. You
can use the actions in the Debug view toolbar to resume the execution. For more
information about suspending execution, see "How to Suspend Execution of a Java
Class while Debugging" on page 494.
When the execution of the Java service completes, the debugger displays a window
that contains the service results.
service has an input signature, the test harness then prompts you to supply input values.
You can type in values or load values from a file. After the test harness has the input
values, it executes the Java class you want to debug in debug mode. You can use the
debugger to debug your Java class. When execution of the Java class completes, the test
harness displays the outputs from the Java class in a popup window.
You can update the logic that Designer generates for a test harness to make the following
modifications:
Change the Integration Server to which the test harness connects.
By default, the test harness attempts to connect to the Integration Server used to
create the test harness. You can specify a different Integration Server.
Update the test harness to connect to Integration Server using SSL.
By default, the test harness does not use SSL when connecting to Integration Server.
You can uncomment logic in the generated test harness so that it uses SSL.
Provide a user name and password for the Integration Server.
Provide Integration Server credentials to prevent the test harness from prompting
for the user name and password. This is useful if you plan to launch the test harness
several times to debug a Java class. However, if you want to share the test harness
with other users, do not supply your user name and password because this presents
a security risk.
For instructions for how to generate a test harness, see "Creating a Test Harness" on page
490.
By default, the code identifies the Integration Server associated with the Java
service for which you generated the test harness.
b. Replace the host name and port number with the host name and port number of
an alternate Integration Server.
4. If you want the test harness to use SSL when connecting to Integration Server:
a. Locate the following statements in the test harness Java class:
// To use SSL:
//
// context.setSecure(true);
// Optionally send authentication certificates
//
// String cert = "c:\\myCerts\\cert.der"; //$NON-NLS-1$
// String privKey = "c:\\myCerts\\privkey.der"; //$NON-NLS-1$
// String cacert = "c:\\myCerts\\cacert.der"; //$NON-NLS-1$
// context.setSSLCertificates(cert, privKey, cacert);
Important: If you want to share the test harness with other users, do not supply your
user name and password because this presents a security risk.
The following lists the tabs available when creating a Java Application launch
configuration and the type of information you specify on each:
Main tab. Specify the name of the Service Development Project that contains the Java
class you want to debug and the fully-qualified name of the Java class.
Select the Stop in main check box if you want the debugger to suspend execution
in the main method when you launch the Java class in debug mode. For more
information, see "How to Suspend Execution of a Java Class while Debugging" on
page 494.
Arguments tab. Specify Program and JVM arguments to use when debugging. You
might want to set JVM arguments to match the settings Integration Server uses
so that your test more closely matches how the Java service would execute in
Integration Server.
JRE tab. Specifies the JRE to use when executing the Java class in debug mode. By
default, it is set to the JRE in the Service Development Project. You can specify an
alternative JRE to use when debugging.
Classpath tab. Specifies the location of class files to use when executing the Java class
in debug mode.
Source tab. Specifies the location of source files to display in the Debug view. If you
want to debug the source associated with any third-party jar files, you can specify
them on this tab.
Environment tab. Specifies the environment variable values to use when executing the
Java class in debug mode.
Common tab. By default, Designer saves launch configurations to an unexposed
location of the workspace. However, you might want to share launch configurations
with other developers. You can specify that Designer save a launch configuration to a
shared file using the Shared file option and providing a workspace location in which
to save the file.
Note: As an alternative, you can debug a Java service by debugging the Java
class associated with the service in Designer. For more information, see
"About Debugging a Java Service while its Class Runs in Designer " on
page 488.
8. Optionally, click Save Inputs to save the input values that you have specified so
that you can use them to load input values in the future. For more information, see
"Saving Input Values" on page 453.
9. Click OK to launch the Java class in debug mode.
The debugger executes the Java class. If you have set breakpoints or used the Stop in
main option, the debugger suspends execution where you specified. If execution is
suspended, Designer switches to the Debug perspective. For more information, see
"How to Suspend Execution of a Java Class while Debugging" on page 494.
10. If execution suspends, use the views in the Debug perspective to inspect the state of
the Java service and the actions in the Debug view toolbar to resume the execution.
For more information about using the debugger, see the Eclipse Java Development User
Guide.
When the execution ends, Designer displays the Output for serviceName window with
the service results.
11. In the Output for serviceName window, optionally click Save Inputs to save the service
results to a file.
This might be useful if you are testing another service that takes the results of this
service as input. When debugging the next service you can load the results as input
to execute that service.
12. Click OK to close the Output for serviceName window.
Important: Never remotely debug a Java service that is running on your production
Integration Server. You should always use a development system.
To configure Integration Server version 9.7 or later for remotely debugging a Java service
1. Shut down Integration Server.
2. If you need to change the port number, perform the following:
a. Open the startDebugMode.bat/sh file in a text editor. You can find the
startDebugMode.bat/sh file in the following location:
Software AG_directory\profiles\IS_instance_name\bin
b. Locate and change the DEBUG_PORT property to specify the port on which the
server should listen for the debugger to attach. The default is 10033.
c. Save your changes and close the startDebugMode.bat/sh file.
3. If Integration Server and the debugging tool are on different machines and you
require a firewall port, open a firewall port for the debug port.
4. Run startDebugMode.bat/sh.
Integration Server logs the following on your console:
"Debug enabled (portNumber)"
Listening for transport dt_socket at address: portNumber
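As a sketch, the relevant line in startDebugMode.bat/sh might look like the following after step 2. The exact file contents vary by installation; only the DEBUG_PORT property name comes from the procedure above, and the JDWP agent line is shown purely as an illustration of what debug mode configures.

```shell
# Hypothetical excerpt from startDebugMode.sh after editing DEBUG_PORT.
# Only the property name is taken from the procedure above; everything
# else in the real file varies by installation.
DEBUG_PORT=10033   # port on which the server listens for the debugger to attach

# Debug mode ultimately starts the server JVM with a JDWP agent along
# these lines (illustration only):
#   -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10033
```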
Important: Before performing the following procedure, make a backup copy of your
setenv.bat/sh file.
To set up Integration Server 9.0, 9.5.x, or 9.6 for remotely debugging a Java service
1. Shut down Integration Server.
2. Open the setenv.bat/sh file in a text editor. You can find the setenv.bat/sh file in the
following location:
On versions 9.0 to 9.5.x:
IntegrationServer_directory\bin
On version 9.6:
IntegrationServer_directory\instances\instance_name\bin
3. Locate and change the DEBUG_ENABLED property to true.
4. If you want to change the port number, locate and change the DEBUG_PORT
property. The default is 9191.
5. Save your changes and close the setenv.bat/sh file.
6. If Integration Server and Designer are on different machines and you require a
firewall port, open a firewall port for the debug port.
7. If you are running Integration Server as a service, you must update the service for
the changes in the setenv.bat file to take effect.
To update the service, open a command window and navigate to the following location:
On versions 9.0 to 9.5.x:
IntegrationServer_directory\support\win32
On version 9.6:
IntegrationServer_directory\instances\instance_name\support\win32
8. Run this command:
installSvc.bat update
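A minimal sketch of the edits from steps 3 and 4, assuming the property names shown in the procedure; the surrounding contents of setenv.bat/sh differ by installation.

```shell
# Hypothetical excerpt from setenv.sh after steps 3 and 4.
DEBUG_ENABLED=true   # enable remote debugging of Java services
DEBUG_PORT=9191      # default port; change it if 9191 is already in use
```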
Note: When opening a remote Java service in Designer, only add breakpoints; do
not make other edits. To edit a Java service, follow the procedure described in
"Editing an Existing Java Service" on page 342.
When creating the Java project in Designer, you create it from an existing source, which
is the IS package on Integration Server.
If your Integration Server runs on a different machine than Designer, before performing
the following procedure, map a logical drive from the machine on which Designer is
running to the Integration Server machine that contains the IS package. To map a drive
from Windows Explorer, use Tools > Map Network Drive. The mapped logical drive should
be a shared drive that allows you to access the IS package. You can find IS packages in
the following directory on the Integration Server machine:
IntegrationServer_directory\packages
6. Click Next.
For Integration Server versions 9.6, 9.5.x, and 9.0, update the setenv.bat/sh file. For
more information, see "Setting Up Integration Server Version 9.0, 9.5.x, or 9.6 for
Remotely Debugging a Java Service" on page 499.
Create a Java project in Designer for the IS package containing the Java service you
want to debug. For more information, see "Creating a Java Project for an IS Package
in Designer" on page 500.
Create a Remote Java Application launch configuration to use when remotely
debugging the Java service. For more information, see "Creating a Remote Java
Application Launch Configuration" on page 501.
After the setup is complete, you can debug the Java service. To do so, open the
remote Java service to set breakpoints. Then run the Remote Java Application launch
configuration, which you created earlier, in debug mode and execute the Java service.
The debug session suspends execution at any breakpoints you set in any of the Java
services in the Java project identified in the launch configuration. In Designer you can
use the Debug perspective to inspect the state of the service execution.
Important: After setting breakpoints, service execution is suspended every time
the service executes, whether it is executed from Designer,
Integration Server Administrator, or an IS client.
2. Establish a listener that waits for the Java service to execute by running the launch
configuration in debug mode.
a. In Designer, select Run > Debug Configurations.
b. In the Debug Configurations dialog box, under Remote Java Application select the
launch configuration you created for debugging the Java service.
c. In the right panel, click Debug.
3. Execute the service in any way you want. For example, you can:
In Designer in the Package Explorer view, select the Java service and then select
Run As > Run Service.
Debug a flow service that invokes the Java service. While you step through the
flow service using the flow service debugger, when the step that invokes the Java
service executes, control is transferred to the Remote Java Application debugger.
Invoke the service from an IS client.
4. Switch to Debug perspective by selecting Window > Open Perspective > Debug.
Integration Server suspends the execution where you specified breakpoints. In Designer
you can use the Debug perspective to inspect the state of the Java service. Use the actions
in the Debug view toolbar to resume the execution. For more information about using
the views in the Debug perspective, see the Eclipse Java Development User Guide.
Note: For more information about working with REST services in Integration
Server, see the REST Developer’s Guide.
The REST resource name must be unique across the entire Integration Server
namespace. That is, the fully qualified name of the REST resource must be unique on
the Integration Server.
You can select the HTTP methods for which you want Designer to create a service.
You can instruct Designer to create a _default service to handle the HTTP methods for
which there is not a specific service.
The format of the request URI depends on the location of the REST resource. For
more information about the format of the request URI, see the REST Developer’s
Guide.
Note: You can select Default only if at least one of the HTTP methods is not
selected.
7. Click Finish.
Designer creates the REST resource folder and generates services for the selected
HTTP methods.
Notes:
The signatures for the flow services generated by Designer contain only the input
parameters $resourceID and $path, which are required by Designer. These parameters
are optional and can be deleted.
You might delete $resourceID and/or $path if your REST application does not use the
values supplied by those variables.
The _default service also contains a required input parameter named $httpMethod.
After creating the services, you need to modify the service signature to include any
additional input parameters expected by the service and any output parameters
produced by the service.
The services generated by Designer are empty. You must add processing logic to the
generated services.
Note: If you upgrade to Integration Server version 9.10 or later from a version of
Integration Server prior to 9.10, Designer uses the specified logic to convert
regular folders to REST resource folders.
If the contents of a REST resource folder change such that Designer no longer considers
the folder and its contents to be a REST resource, Designer replaces the REST resource
folder icon with the regular folder icon. For example, suppose that a REST resource
folder named topics contains services named _get, _delete, and _default. If you delete all of
the services from topics, Designer uses the regular folder icon for the topics folder.
The flow service associated with a resource operation. You can either associate an
existing service with a resource operation or create a new service and associate it
with the resource operation.
The URL template-based approach provides you with greater flexibility than the legacy
approach in defining REST resources. For a REST V2 resource, you can define multiple
operations and associate each operation with a URL format, HTTP methods, and a flow
service. In addition, you can edit these details based on your requirements.
Note: Ensure that the name of the REST V2 resource is unique within the
Integration Server namespace.
4. Click Finish.
Designer adds an empty REST V2 resource to the selected location in the namespace.
For the created resource, you must define resource operations.
Important: For a REST V2 resource operation, the combination of a URL format and the
supported HTTP methods must be unique.
You can associate either a new flow service or an existing flow service to a resource
operation. Consider the following while associating a flow service to a REST V2 resource
operation:
If you want to associate an existing flow service to a REST V2 resource operation,
you must ensure that the dynamic parameter that you specify in the URL format
already exists as a variable of type String in the input signature of the particular
service. The service that you select need not reside in the same folder as the REST V2
resource.
For information about static and dynamic parameters in a REST V2 resource, see
REST Developer’s Guide.
If you rename a flow service associated with REST V2 resource operations and
update its references, the impacted resource operations also are updated with the
new name of the service.
If you delete the flow service associated with a REST V2 resource operation or if you
delete the input variable that is also referenced in the dynamic parameter of the URL
format, the operation will not work when invoked by any subsequent client request
to Integration Server.
Field Description
REST URL The URL format that a request from a client application must
follow.
Designer automatically appends the restv2 directive and the
resource name to the URL format.
Service Name The name of the flow service to associate with the REST V2
resource operation.
You can specify a service name in either of the following
ways:
Browse and select an existing flow service.
HTTP Methods Select one or more HTTP methods for the REST V2 resource
operation.
Note: For a resource operation, you can select only from the
HTTP methods that are supported for the associated
flow service. For information about configuring the
supported HTTP methods for an Integration Server
service, see "Run Time Properties for Services" on page
1085.
which corresponds to the dynamic parameter provided in the REST URL (in this case,
id).
Based on the information specified, any client request to the REST server must be in the
format: GET /restv2/discussion/topic/{id}, where restv2 is the directive used
to invoke the discussion resource, topic is a static parameter and {id} is a dynamic
parameter that accepts any value associated with a specific topic.
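To make the URL format concrete, the following sketch assembles a request URL for the hypothetical discussion resource above. The host, port, credentials, and topic id are illustrative assumptions, and the curl call is commented out so the snippet does not require a running server.

```shell
# Build a REST V2 request URL for the example resource. The host, port,
# and topic id 42 are assumptions for illustration only.
BASE="http://localhost:5555"    # assumed Integration Server host and port
RESOURCE="discussion"           # REST V2 resource name from the example
TOPIC_ID="42"                   # value supplied for the dynamic {id} parameter

URL="$BASE/restv2/$RESOURCE/topic/$TOPIC_ID"
echo "$URL"

# An actual invocation might look like (placeholder credentials):
# curl -u Administrator:manage -X GET "$URL"
```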
Stage 3 Modify information for the REST resources within the REST API
descriptor.
During this stage, you add or remove REST resources to the REST
API descriptor.
Stage 4 Modify the operations for the REST resources in the REST API
descriptor.
During this stage, you can change the MIME types consumed or
produced by a specific operation. You can also review the source
values assigned to parameters and add or remove operation
responses.
3. In the Element name field, type a name for the REST API descriptor using any
combination of letters, numbers, and the underscore character. For more information
about restricted characters, see "Guidelines for Naming Elements" on page 55.
4. Click Next.
5. Depending on the type of REST resources you want to include in the REST API
descriptor, select either REST V2 Resources (Recommended) or REST Resources.
Note: If you want to create a REST API descriptor from a Swagger document,
see "Creating a REST API Descriptor from a Swagger Document" on page
529.
6. In the Specify REST API Descriptor General Details panel, provide the following
information:
Host:Port Name The host and port for the Integration Server on
which the application resides, in the format host:port.
By default, the REST API descriptor uses the
primary host:port of the Integration Server to which
Designer is connected.
Path The base path for the REST API descriptor. The
default is either /rest or /restv2 depending on the
type of REST resources included in the descriptor.
The path must begin with a “/” (slash).
The default value for Path is the REST directive
used on the Integration Server to which Designer is
connected.
For REST resources using the legacy approach,
Integration Server obtains this value from the
Note: You can edit the value of this field only for REST
API descriptors containing resources using the
legacy approach.
7. Click Next.
8. In the Select the REST Resources panel, select one or more REST resources to include
in the REST API descriptor.
9. Click Finish.
Designer creates the REST API descriptor using the information you provided along
with the selected REST resources.
Note: You can access the MIME types preference by clicking the icon in the upper
right corner of the General tab in the REST API descriptor editor.
To... Do this...
Add a MIME type Click Add. In the Add new MIME type dialog box,
enter the MIME type and click OK.
Edit a MIME type Select the MIME type that you want to edit and click
Edit. In the Edit MIME type dialog box, modify the
selected MIME type and click OK.
Delete a MIME type Select the MIME type that you want to delete and
click Remove.
4. Click Apply to save your changes to the list of available MIME types.
5. Click OK.
Note: For a REST API descriptor containing REST V2 resources, you must manually
refresh the descriptor whenever you update the associated resources or
services.
variable with a particular name and ignores subsequent identically named variables.
Note that a variable in the input parameters can have the same name as a variable in
the output parameters.
Note: Ensure that the REST V2 resources that you add have operations defined
because empty resources are not displayed in the REST Resources tab for
the particular descriptor.
Important: This procedure applies only to a REST API descriptor containing resources
using the legacy approach. You cannot edit the path or suffix for a REST API
descriptor containing REST V2 resources.
By default, each REST resource in a REST API descriptor derives its path from
the namespace of the REST resource. For example, if the REST resource is named
myREST.myRESTResource, the path is “/myREST.myRESTResource”. However, you might
not want to expose the namespace of the REST resource in the Swagger document. You
can override the default path with a path that you specify. For example, you could use
/customers/premium or /myPath.
By default, there is no suffix for the REST resources in a REST API descriptor. However,
if you want users who invoke the REST resource to include query parameters, you
can specify that information in the suffix. Integration Server appends the suffix to the
path. For example, if you want the request to invoke a REST resource to include the
$resourceID, specify a suffix of: /{$resourceID}. If you want the request URL to
include the $resourceID and the $path, specify a suffix of /{$resourceID}/{$path}.
If you change the path, the suffix, or both, make sure that Integration Server can resolve
the resulting resource path. The path must be invokable by Integration Server.
Note: The values that you specify for the path and suffix apply only to the REST
resource as it is used in this REST API descriptor. They do not affect the same
REST resource used in another REST API descriptor or the REST resource
itself.
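As a concrete sketch of how a path override and suffix combine, assume the legacy /rest directive, the example path /customers/premium, and a suffix of /{$resourceID}/{$path}. All values below are illustrative assumptions, not product defaults.

```shell
# Illustrative only: how Integration Server would combine an overridden
# resource path with a suffix. Host, port, and substituted values are
# invented for this example.
BASE="http://localhost:5555/rest"    # assumed legacy REST directive
RESOURCE_PATH="/customers/premium"   # overridden path from the descriptor
RESOURCE_ID="1234"                   # value substituted for {$resourceID}
SUB_PATH="orders"                    # value substituted for {$path}

URL="$BASE$RESOURCE_PATH/$RESOURCE_ID/$SUB_PATH"
echo "$URL"
```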
operation, which Designer obtains from the Comments tab for the corresponding resource
operation. For each operation in a REST API descriptor, you can do the following:
Change the MIME types that the operation consumes or produces.
Review the assigned source of the parameter and change the source if necessary.
Add and remove responses.
Note: The MIME types set for a REST resource operation in a descriptor override the
MIME types set for the parent REST API descriptor. That is, the consumes and
produces MIME types specified for an individual REST resource operation
replace the MIME types specified for the REST API descriptor.
You can change the name, description, or required values for a parameter by modifying
the parameter in the corresponding REST resource service. However, you can only make
the following changes to a parameter in a REST API descriptor:
Important: For a REST API descriptor containing resources created using the legacy
approach, you cannot change the assigned source for a parameter if the
source value is BODY. For a REST API descriptor containing REST V2
resources, you cannot change the assigned source for a parameter if the
value is either BODY or PATH.
Code The HTTP status code that the operation can return
Return Output Whether or not the operation returns output with the
response. Select one of the following:
True if the operation returns output with the response.
Typically, a REST operation returns output for a successful
HTTP status code, such as 200.
False if the operation does not return output with the
response. This is the default.
String string
Document ref
java.lang.Boolean boolean
java.lang.Character string
Note: For any parameter that is an array or of type String table, for the
corresponding parameter in the Swagger document, Integration Server sets
the “type” as array.
security
securityDefinitions
externalDocs
6. In the Select the Swagger Document dialog box, do one of the following:
7. If you specified API Portal as the source, click Next and then under Select the API from
API Portal list, select the API that you want to use to create the REST API descriptor.
8. If you specified CentraSite as the source, click Next and then under Select REST
Resource from CentraSite, select the REST resource in CentraSite that you want to
use to create the REST API descriptor.
9. If you specified File/URL as the source, do one of the following:
Enter the URL for the Swagger document. The URL should begin with http:// or
https://.
Provide your username and password to use the Swagger document from the
website.
Click Browse to navigate to and select a Swagger document on your local file
system.
10. Click Finish.
Designer creates the REST API descriptor. Designer also creates the associated
services (services), definitions (docTypes), and resources (resources) and places them
under a folder that has the same name as the REST API descriptor. The Properties
view displays the source URL and source URI value for the REST API descriptor and
its associated document type. For more information about the fields in the Properties
view, see "REST API Descriptor Properties" on page 1063.
Note: You cannot edit the elements of the REST API descriptor that you created.
You can only edit the service implementation of the services generated.
Note: Do not edit, rename, or delete the services, docTypes, and resources folders,
its subfolders, or the parent folder containing them all. Also, when you
move the REST API descriptor to a different location in Designer, ensure
that you move the corresponding folder containing the services, docTypes,
and resources folders.
Note: In Designer, the option to refresh a REST API descriptor is available only for
those REST API descriptors that are created using a Swagger document.
6. If you specified API Portal as the source, click Next and then under Select the API from
API Portal list, select the API that you want to use to create the REST API descriptor
and click Finish.
7. If you specified CentraSite as the source, click Next and then under Select REST
Resource from CentraSite, select the REST resource in CentraSite that you want to
use to create the REST API descriptor and click Finish.
8. If you specified File/URL as the source, do one of the following and click Finish:
Enter the URL for the Swagger document. The URL should begin with http:// or
https://.
Click Browse to navigate to and select a Swagger document on your local file
system.
Designer refreshes the REST API descriptor. If Designer cannot refresh a REST API
descriptor, Designer rolls back to the last saved version of the REST API descriptor.
If refresh is not successful, use the messages returned by Designer and the messages
in the error log to determine why.
Note: If you want to update details of a REST API descriptor that is already
published to API Portal, you can select the particular descriptor for
publishing. In such a situation, Integration Server overwrites the existing
details for that descriptor on API Portal.
2. In the Asset Publish dialog box, select API Portal as the destination for publishing the
descriptors and click Next.
The Publish Assets to API Portal dialog box lists the API descriptors you selected
in Step 1. If you selected a folder or a package, all the descriptors in the particular
folder or package are selected in the dialog box.
3. In the Publish Assets to API Portal dialog box, refine your selection of API
descriptors by selecting or clearing the appropriate check boxes, and click Finish.
The publish process starts for the REST API descriptors that you selected. Designer
displays the results of the publish process for the selected descriptors on the
Published Metadata screen.
Note: For information about the errors that occur during the publish process, see
the Integration Server server log.
OData (Open Data Protocol) enables applications to expose data or resources as a data
service that clients can access within corporate networks and across the Internet. It
provides a REST-based protocol for performing create, read, update and delete (CRUD)
operations against resources that are exposed as data services.
Integration Server acts as an OData service provider and supports OData version 2.0.
You can use the Service Development perspective in Designer to create OData services.
An OData service can be described as an endpoint service that is based on the OData
protocol and allows access to data. The OData service exposes an OData entity data
model that contains data organized and described in a standard manner.
The OData service consists of a set of entity types, external entity types, and complex
types, their properties, and the associations between the entity types. When you create
an OData service, Integration Server, acting as an OData service provider, generates the
required OData service implementations as flow services that can perform the CRUD
operations for each entity type.
These flow services will:
Be located in a folder whose fully qualified name is unique across the entire
Integration Server namespace.
Be named _insert, _retrieve, _update, and _delete to perform the create, read, update, and
delete operations respectively.
Accept certain predefined input parameters that are passed in through the OData
request.
Note: While the services _retrieve, _update, _insert, and _delete might appear to
be regular flow services, it is the naming convention and location of the
services that instruct Integration Server to treat them as OData service
implementations.
Complex Type. Structural types consisting of a list of properties but with no key. For
example, Address, which includes city, street, state, and country. You can access a
complex type only when it is added as a complex property to an entity type.
Properties. Used to define the characteristics of OData elements. For example, a
Customer entity type may have properties such as CustomerId, Name, and Address.
Properties can be simple or complex. A simple property can contain primitive types
(such as a string, an integer, or a Boolean value). A complex property can contain
structured data such as a complex type.
Associations. Represent the relationship between two entity types, for example, the
relationship between Customer and Order. An association has two ends, and
each end specifies the entity type attached to that end. Associations can be Single
(unidirectional) or Bidirectional, depending on the number of entity types that can be
at that end of the association.
Navigation Property. Represents the association end and allows navigation from an
entity to related entities. For example, Product can have a navigation property
to Category and Category can, in return, have a navigation link to one or more
products.
Annotations
Collection type data type
Lambda query operators
Stage 2 Add OData elements to the OData service in the OData service editor.
During this stage, you specify the OData elements, namely Entity Type,
External Entity Type, and Complex Type that define the entity model that
this OData service exposes.
Note: If you go back to the previous screens and return to the Select the Entity
and Properties screen, the entity list does not get refreshed. To get the new
list of entities, start creating the OData service again.
9. Click Finish.
Designer creates the OData service and displays the details in the OData service
editor. You can now add additional OData elements, specify properties, and define
the association between the entity types.
To add OData elements, see "Adding OData Elements to the OData Service" on
page 541.
To specify the properties of OData elements, see "Adding Properties to the
OData Elements" on page 542.
To define associations between OData elements, see "Adding Associations to
OData Elements" on page 543.
10. Click File > Save to save the OData service.
Designer creates a folder with the same name as the OData service, prefixed with
an “_” (underscore). Inside this folder, Designer creates a folder for each entity type.
These folders contain the OData service implementations, named _insert, _retrieve,
_update, and _delete, that perform the create, read, update, and delete operations for
each entity.
The signatures for the OData service implementations generated by Designer contain
document variables generated using the entity types and its properties. You must
not modify the signature of the generated OData service implementations. Also,
Integration Server adds the processing logic to these services.
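The resulting namespace layout can be sketched as follows for a hypothetical OData service named companyData with entity types Product and Category; all names here are illustrative assumptions, not product defaults.

```shell
# Print a sketch of the folder structure Designer generates. The service
# and entity names are invented for illustration.
LAYOUT='_companyData/
  Product/   _insert  _retrieve  _update  _delete
  Category/  _insert  _retrieve  _update  _delete'
printf '%s\n' "$LAYOUT"
```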
Note: If the Palette view is not visible, display it by clicking the icon on the right side
of the editor.
Note: The name of an entity type must be unique among all the entity types
in the OData service.
Note: The name of the complex type element must be unique across all of the
complex types in the OData service.
To add properties
1. Open the OData service and select the entity type or complex type to which you
want to add a property.
2. In the Palette view of the OData editor, under Properties, select Simple and/or Complex
properties and drag them to the Tree tab to add simple and complex properties to the
OData service.
Note: If the Palette view is not visible, display it by clicking the icon on the right side
of the editor.
The entity type and complex type elements can have one or more properties. Each
entity type should have at least one property that is a key.
To add associations
1. Open the OData service and select the entity type, external entity type, or complex
type to which you want to add an association.
2. In the Palette view of the OData editor, under Associations, select Single or
Bidirectional and drag it to the Tree tab.
Note: If the Palette view is not visible, display it by clicking the icon on the right side
of the editor.
3. In the Entity Association dialog box, select the principal and dependent entity types
for the ends of the association.
4. Click OK.
Designer creates the associations and displays the OData navigation elements under
the corresponding entity types. For a unidirectional association, a navigation
element is added only to the entity type that is the association end. For a
bidirectional association, navigation elements are added to both entity types at
the association ends.
5. Click File > Save to save the association.
Here, parent_context is the OData service node in the Integration Server namespace
and resource refers to the name of an entity type or a collection of instances of an entity
type.
For example:
http://localhost:5555/odata/container:company/Products
Note: When processing an OData service request, Integration Server checks the user
name associated with the request against the appropriate access control list
(ACL) associated with the service. If the user belongs to a group that is listed
in the ACL, the server accepts the request. Otherwise the server rejects the
request. Ensure that the OData service has the required ACLs associated with
it so that Integration Server processes the requests.
Here, parent_context is the OData service node in the Integration Server namespace,
resource refers to the name of an entity type or a collection of instances of an entity
type, and expressions are the filter expressions.
Integration Server supports the following logical operators:
Note: You can also use custom filters instead of OData built-in filters while using
the $filter system query option. To use your custom filters, set the Use custom
filter property of the OData service to True. You can then specify custom filter
queries as the value for the $filter parameter of the _retrieve and _update OData
service implementations.
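A retrieve request using the $filter system query option might be assembled as in the sketch below. The host, port, container name, entity set, and filter expression are all assumptions; the filter uses standard OData 2.0 comparison syntax, and the curl call is commented out so the snippet runs without a server.

```shell
# Build an OData retrieve URL with a $filter query option. All concrete
# values are illustrative assumptions.
BASE="http://localhost:5555/odata/container:company"
RESOURCE="Products"
FILTER='Price gt 20'    # standard OData 2.0 comparison: greater-than

# Percent-encode the spaces in the filter expression.
ENCODED=$(printf '%s' "$FILTER" | sed 's/ /%20/g')
URL="$BASE/$RESOURCE?\$filter=$ENCODED"
echo "$URL"

# Example invocation (placeholder credentials):
# curl -u Administrator:manage "$URL"
```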
An IS document type contains a set of fields used to define the structure and type of
data in a document (IData object). You can use an IS document type to specify input or
output parameters for a service or specification. You can also use an IS document type to
build a document or document list field and as the blueprint for pipeline validation and
document (IData object) validation.
IS document types can provide the following benefits:
Using an IS document type as the input or output signature for a service can reduce
the effort required to build a flow.
Using an IS document type to build document or document list fields can reduce
the effort needed to declare input or output parameters and the effort and time needed
to build other document fields.
IS document types improve accuracy, because there is less opportunity to introduce
a typing error when entering field names.
IS document types make future changes easier to implement, because you can make a
change in one place (the IS document type) rather than everywhere the IS document
type is used.
3. In the Element name field, type a name for the IS document type using any
combination of letters, numbers, and/or the underscore character. For information
about restricted characters, see "About Element Names" on page 54.
4. Click Next.
5. On the Select the Source Type panel, select None.
6. Click Finish to create the empty IS document type.
Note: When defining an IS document type, avoid adding identically named fields to
the IS document. In particular, avoid adding identically named fields that are
the same data type. While Designer allows this, the identically named fields
may cause some anomalies especially with regards to mapping data in the
pipeline.
Keep the following points in mind if you intend to make the IS document type
publishable:
If you intend to use Broker as the messaging provider, keep in mind that the Broker
has restrictions for field names. When a document is published to Broker, fields with
names that do not meet these restrictions will be passed through by Broker. If you
create a trigger that subscribes to the publishable document type, any filters that
include field names containing restricted characters will be saved on the Integration
Server only. The filters will not be saved on the Broker, possibly affecting Integration
Server performance. For more information, see "Creating a webMethods Messaging
Trigger " on page 727.
If you intend to use Universal Messaging as the messaging provider and use
protocol buffers as the encoding type, keep in mind that some field names might not
work with protocol buffers. If a publishable document type contains fields that use
unsupported characters, these fields and their contents will be passed through to
Universal Messaging. Subscribing triggers will decode the field properly. However,
Universal Messaging cannot filter on the contents of these fields.
2. Drag the document type field that you want to define from the Palette to the Tree tab
in the editor.
3. Type the name of the field and then press ENTER.
4. With the field selected, set field properties and apply constraints in the Properties
view (optional).
5. If the field is a document or a document list, repeat the preceding steps to define
and set the properties and constraints for each of its members. Indent each
member field beneath the document or document list field.
6. Enter comments or notes, if any, in the Comments tab.
7. Select File > Save.
Note: Designer displays small symbols next to a field icon to indicate validation
constraints. Designer uses one symbol to indicate an optional field and the
‡ symbol to denote a field with a content constraint. For information about
applying constraints to fields, see "About Variable Constraints" on page
647.
breaking the link to the source. For information about allowing editing of elements
derived from a source, see "Allowing Editing of Derived Elements" on page 61.
Integration Server does not create IS document types or IS schemas from an XML
schema definition (XSD) if the XSD contains a type definition derived by extension
and that type definition contains a direct or indirect reference to itself. If Integration
Server encounters a type definition that contains a recursive extension while creating
an IS document type or an IS schema from an XSD, Integration Server throws a
StackOverflowError and does not continue creating the IS document type or IS
schema.
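A recursive extension of this kind might look like the following sketch, in which a derived type refers back, indirectly, to its own base (the type names and tns prefix are illustrative):

```xml
<!-- Base contains an element of type Branch, and Branch extends Base,
     so the extension indirectly references itself -->
<xsd:complexType name="Base">
  <xsd:sequence>
    <xsd:element name="child" type="tns:Branch" minOccurs="0"/>
  </xsd:sequence>
</xsd:complexType>
<xsd:complexType name="Branch">
  <xsd:complexContent>
    <xsd:extension base="tns:Base"/>
  </xsd:complexContent>
</xsd:complexType>
```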
To create the IS document type from an XML document on your local file system,
type in the path and file name, or click the Browse button to navigate to and select
the file.
7. Click Finish to create the IS document type.
If you want to add or edit fields in the IS document type, see "Creating an Empty IS
Document Type" on page 548.
2. In the New Document Type dialog box, select the folder in which you want to save
the IS document type.
3. In the Element name field, type a name for the IS document type using any
combination of letters, numbers, and/or the underscore character. For information
about restricted characters, see "About Element Names" on page 54.
If you are creating an IS document type from an event type, you may want to use the
event type name as the name for the IS document type.
4. Click Next.
5. On the Select a Source Type panel, select XML Schema. Click Next.
6. On the Select a Source Location panel, under Source location, do one of the following
to specify the source file for the document type:
To use an XML schema definition in CentraSite as the source, select CentraSite.
To use an XML Schema definition that resides on the Internet as the source, select
File/URL. Then, type the URL of the resource. (The URL you specify must begin
with http: or https:.)
To use an XML Schema definition that resides on your local file system as the
source, select File/URL. Then, type in the path and file name, or click the Browse
button to navigate to and select the file.
To use an existing event type as the source, navigate to the Event Type Store and
select the XML Schema definition for the event type.
The default location of the Event Type Store is: Software AG_directory\common\EventTypeStore
7. Click Next.
8. If you selected CentraSite as the source, under Select XML Schema from CentraSite,
select the XML Schema definition in CentraSite that you want to use to create the IS
document type. Click Next.
If Designer is not configured to connect to CentraSite, Designer displays the
CentraSite > Connections preference page and prompts you to configure a connection
to CentraSite.
9. On the Select Processing Options panel, under Schema domain, specify the schema
domain to which any generated IS schemas will belong. Do one of the following:
To add the IS schema to the default schema domain, select Use default schema
domain.
To add the IS schemas to a specified schema domain, select Use specified schema
domain and provide the name of the schema domain in the text box. A valid
schema domain name is any combination of letters, numbers, and/or the
underscore character. For information about restricted characters, see "About
Element Names" on page 54.
10. Under Content model compliance, select one of the following to indicate how strictly
Integration Server represents content models from the XML Schema definition in the
resulting IS document type.
Select... To...
11. If you selected strict or lax compliance, next to Preserve text position, do one of the
following to specify whether document types generated from complex types that
allow mixed content will contain multiple *body fields to preserve the location of text
in instance documents.
Select the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content preserves the locations
for text in instance documents. The resulting document type contains a *body
field after each field and includes a leading *body field. In instance documents for
this document type, Integration Server places text that appears after a field in the
*body field that follows it.
Clear the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content does not preserve the
locations for text in instance documents. The resulting document type contains a
single *body field at the top of the document type. In instance documents for this
document type, text data around fields is all placed in the same *body field.
12. If this document type will be used as the input or output signature of a service
exposed as a web service and you want to enable streaming of MTOM attachments
for elements of type base64Binary, select the Enable MTOM streaming for elements of type
base64Binary check box.
For more information about streaming of MTOM attachments, see the Web Services
Developer’s Guide.
13. If you want Integration Server to use the Xerces Java parser to validate the XML
Schema definition, select the Validate schema using Xerces check box.
Select... To...
Select... To...
17. Under Complex type handling, select one of the following to indicate how Integration
Server handles references to named complex type definitions:
Select... To...
18. If you selected Generate document types for complex types and you want to register each
document type with the complex type definition from which it was created, select
the Register document type with schema type check box.
Note: If you want derived type support for document creation and validation,
select the Register document types with schema type check box. For more
information, see "Registering Document Types with Their Schema Types"
on page 580.
19. If you want Integration Server to generate IS document types for all complex types
in the XML Schema definition regardless of whether the types are referenced by
elements or other type definitions, select the Generate document types for all complex
types in XML Schema check box.
If you leave this check box cleared, Integration Server generates a separate IS
document type for a complex type only if the complex type is referenced or is
derived from a referenced complex type.
If you are creating an IS document type from an event type, clear the Generate
document types for all complex types in XML Schema check box.
20. Click Next.
21. On the Assign Prefixes panel, if you want the IS document type to use different
prefixes than those specified in the XML schema definition, select the prefix you
want to change and enter a new prefix. Repeat this step for each namespace prefix
that you want to change.
Note: The prefix you assign must be unique and must be a valid XML NCName
as defined by the specification http://www.w3.org/TR/REC-xml-names/
#NT-NCName.
22. Click Finish. Integration Server generates the IS document type(s) and IS schema and
saves them on the server. Designer displays them in the Package Navigator view.
Notes:
Integration Server uses the internal schema parser to validate the XML schema
definition. If you selected the Validate schema using Xerces check box, Integration
Server also uses the Xerces Java parser to validate the XML Schema definition. With
either parser, if the XML Schema does not conform syntactically to the schema for
XML Schemas defined in XML Schema Part 1: Structures, Integration Server does
not create an IS document type or an IS schema. Instead, Designer displays an error
message that lists the number, title, location, and description of the validation errors
within the XML Schema definition.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-
schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
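For example, the following pattern facet uses syntax that is valid both as a Perl 5 regular expression and as an XML Schema regular expression (the type name and pattern are illustrative):

```xml
<xsd:simpleType name="USZipCode">
  <xsd:restriction base="xsd:string">
    <!-- \d{5}(-\d{4})? parses identically under Perl 5 and
         XML Schema regular expression rules -->
    <xsd:pattern value="\d{5}(-\d{4})?"/>
  </xsd:restriction>
</xsd:simpleType>
```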
If you selected strict compliance and Integration Server cannot represent the content
model in the complex type accurately, Integration Server does not generate any IS
document types.
If you selected lax compliance and indicated that Integration Server should preserve
text locations for content types that allow mixed content (you selected the Preserve
text position check box), Integration Server adds *body fields in the document type
only if the complex type allows mixed content and Integration Server can correctly
represent the content model declared in the complex type definition. If Integration
Server cannot represent the content model in an IS document type, Integration
Server adds a single *body field to the document type.
The contents of an IS document type with a Model type property value other than
“Unordered” cannot be modified.
If the XML schema definition contains an element reference to an element
declaration whose type is a named complex type definition (as opposed to an
anonymous complex type definition), Integration Server creates an IS document type
for the named complex type definition. In the IS document type for the root element,
Integration Server uses a document reference field to represent the element reference.
An exception to this behavior is the situation in which the element reference is the
only reference to the complex type definition and the Only generate document types
for elements with multiple references option is selected. In this situation, Integration
Server uses a document field defined inline to represent the content of the referenced
complex type.
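The following sketch shows the pattern described above; the reference to the order element would become a document reference field in the IS document type created for root (the element, type, and prefix names are illustrative):

```xml
<!-- Named complex type: Integration Server creates a separate
     IS document type for it -->
<xsd:complexType name="OrderType">
  <xsd:sequence>
    <xsd:element name="id" type="xsd:string"/>
  </xsd:sequence>
</xsd:complexType>
<xsd:element name="order" type="tns:OrderType"/>
<xsd:element name="root">
  <xsd:complexType>
    <xsd:sequence>
      <!-- Represented as a document reference field, unless this is the
           only reference to OrderType and the "Only generate document
           types for elements with multiple references" option is selected -->
      <xsd:element ref="tns:order"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```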
Integration Server uses the prefixes declared in the XML Schema or the ones you
specified as part of the field names. Field names have the format prefix:elementName
or prefix:@attributeName.
If the XML Schema does not use prefixes, the Integration Server creates prefixes for
each unique namespace and uses those prefixes in the field names. Integration Server
uses “ns” as the prefix for the first namespace, “ns1” for the second namespace,
“ns2” for the third, and so on.
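For instance, an element and an attribute drawn from two different namespaces might yield field names like these (the names are illustrative):

```
ns:purchaseOrder        element from the first namespace
ns:@orderDate           attribute from the first namespace
ns1:shippingAddress     element from the second namespace
```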
If the XML Schema definition contains a user-specified namespace prefix and
a default namespace declaration, both pointing to the same namespace URI,
Integration Server uses the user-specified namespace prefix and not the default
namespace.
When creating an IS document type from an XML Schema definition that imports
multiple schemas from the same target namespace, Integration Server throws Xerces
validation errors indicating that the element declaration, attribute declaration, or
type definition cannot be found. The Xerces Java parser honors the first <import> and
ignores the others. To work around this issue, you can do one of the following:
Combine the schemas from the same target namespace into a single XML Schema
definition. Then change the XML schema definition to import the merged schema
only.
When creating the IS document type clear the Validate schema using Xerces check
box to disable schema validation by the Xerces Java parser. When generating
the IS document type, Integration Server will not use the Xerces Java parser to
validate the schemas associated with the XML Schema definition.
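The condition that triggers these errors looks like the following sketch, where one XML schema definition imports two schemas that share a target namespace (the file names and namespace URI are illustrative):

```xml
<!-- The Xerces Java parser honors only the first import for a given
     namespace; components declared in partB.xsd are not found -->
<xsd:import namespace="http://example.com/orders" schemaLocation="partA.xsd"/>
<xsd:import namespace="http://example.com/orders" schemaLocation="partB.xsd"/>
```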
object Document
string String
null null
Note: If JSON text begins with an array at the root and the array is unnamed, when
parsing the JSON text, Integration Server uses a fixed name of $rootArray for
the array value. The $rootArray field appears in the pipeline. When creating
a JSON response, if the pipeline contains $rootArray with an array value at
its root, Integration Server discards the $rootArray name and transforms the
array value into a JSON array.
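For example, JSON text that begins with an unnamed array at its root, such as the following, would surface in the pipeline as a field named $rootArray (the values are illustrative):

```json
[
  { "id": 1, "name": "first" },
  { "id": 2, "name": "second" }
]
```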
Note: With regard to using Integration Server data types with JSON, Integration
Server supports only those types that can be mapped to a JSON value as
defined in https://tools.ietf.org/html/rfc7159#section-3. Integration Server can
take any valid JSON text and convert it to an IData. Integration
Server must be able to convert the resulting IData to JSON text that is
identical to the original text. If Integration Server cannot do that for an
Integration Server data type, then Integration Server does not support the
use of that data type with JSON. For example, com.wm.util.Table is not
supported for JSON even though it is supported for XML. Integration Server
embeds additional type information in XML when converting IData to XML.
However, Integration Server cannot embed the additional type information
in JSON because the additional type information is treated as JSON text. The
resulting JSON text would not match the original JSON text.
2. In the New Document Type dialog box, select the folder in which you want to save
the IS document type.
3. In the Element name field, type a name for the IS document type using any
combination of letters, numbers, and/or the underscore character. For information
about restricted characters, see "About Element Names" on page 54.
4. Click Next.
5. On the Select the Source Type panel, select JSON and click Next.
6. On the Select a Source Location panel, under Source location, select File/URL.
7. Enter the path to and name of the JSON object or click Browse to navigate to and
select the source file.
8. Click Next.
9. On the Select Java Wrapper Type panel, under Java wrapper type for real numbers select
how Integration Server should map real numbers from the JSON object to fields in
the IS document type as follows:
Note: The default setting for Java wrapper type for real numbers is set by the
watt.server.json.decodeRealAsDouble server configuration parameter. For
example, if watt.server.json.decodeRealAsDouble is set to true, Designer
displays Double as the default for Java wrapper type for real numbers. You can
override this setting by selecting Float.
For more information about the watt.server.json.decodeRealAsDouble
server configuration parameter, see webMethods Integration Server
Administrator’s Guide.
10. On the Select Java Wrapper Type panel, under Java wrapper type for integers select how
Integration Server should map integers from the JSON object to the fields in the IS
document type as follows:
Note: The default setting for Java wrapper type for integers is set by the
watt.server.json.decodeIntegerAsLong server configuration parameter. For
more information, see webMethods Integration Server Administrator’s Guide.
11. Click Finish. Integration Server creates the document type. Designer refreshes the
Package Navigator view automatically and displays the new document type.
Note: The Broker Document Type option is enabled only if your Integration Server
is connected to a Broker.
6. On the Select a Broker Document Type panel, do one of the following to specify the
source file for the document type:
a. Select the Broker document type from which you want to create an IS document
type, from the displayed list of Broker document types on the Broker territory to
which the Integration Server is connected.
You can also type a search string in the Enter Broker document type name field to
filter the list of Broker document types.
b. If you want to replace existing elements in the Integration Server namespace with
identically named elements referenced by the Broker document type, select the
Overwrite existing elements when importing referenced elements check box.
7. Click Finish. Designer refreshes the Package Navigator view automatically and
displays the new document type.
Notes:
When you create an IS document type from a Broker document type that references
other elements, Designer will also create an element for each referenced element.
Integration Server will contain a document type that corresponds to the Broker
document type and one new element for each element the Broker document type
references. Designer also creates the folder in which the referenced element was
located. Designer saves the new elements in the package you selected for storing the
new publishable document type.
For example, suppose that the Broker document type references a document type
named address in the customerInfo folder. Designer would create an IS document
type named address and save it in the customerInfo folder. If a field in the Broker
document type was constrained by a simple type definition declared in the IS
schema purchaseOrder, Designer would create the referenced IS schema purchaseOrder.
You can associate only one IS document type with a given Broker document type.
If you try to create a publishable document type from a Broker document type that
is already associated with a publishable document type on your Integration Server,
Designer displays an error message.
If you did not select the Overwrite existing elements when importing referenced elements
check box and the Broker document type references an element with the same name
as an existing Integration Server element, Designer will not create the publishable
document type. For more information about overwriting referenced elements, see
"Importing and Overwriting References During Synchronization" on page 625
In the Publication category of the Properties panel, the Provider definition property
displays the name of the Broker document type used to create the publishable
document type. Or, if you are not connected to a Broker, this field displays Not
Publishable. You cannot edit the contents of this field. For more information about the
contents of this field, see "About the Associated Provider Definition" on page 592.
Once a publishable document type has an associated Broker document type, you
need to make sure that the document types remain in sync. That is, changes in one
document type must be made to the associated document type. You can update one
document type with changes in the other by synchronizing them. For information
about synchronizing document types, see "About Synchronizing Publishable
Document Types" on page 615.
is not selected, the corresponding field in the IS document type might disallow
null values (Allow null = false) and indicate the field is optional (Required = false).
For an IS document type created from a source, Designer displays the location of the
source in the Source URI property. Designer also sets the Linked to source property
to true, which prevents any editing of the document type contents. To edit the
document type contents, you first need to make the document type editable by
breaking the link to the source. For information about allowing editing of elements
derived from a source, see "Allowing Editing of Derived Elements" on page 61.
Select... To...
6. On the Select a Source Location panel, under Source location, select one of the
following to specify the location of the e-form template:
Select... To...
File/URL Use an e-form template on a file system. Enter the path to and
name of the e-form template or click Browse to navigate to and
select the source file.
Click Next.
7. On the Select Processing Options panel, under Content model compliance, select one of
the following to indicate how strictly Integration Server represents content models
from the XML Schema definition in the resulting IS document type.
Select... To...
8. If you selected strict or lax compliance, next to Preserve text position, do one of the
following to specify whether document types generated from complex types that
allow mixed content will contain multiple *body fields to preserve the location of text
in instance documents.
Select the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content preserves the locations
for text in instance documents. The resulting document type contains a *body
field after each field and includes a leading *body field. In instance documents for
this document type, Integration Server places text that appears after a field in the
*body field that follows it.
Clear the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content does not preserve the
locations for text in instance documents. The resulting document type contains a
single *body field at the top of the document type. In instance documents for this
document type, text data around fields is all placed in the same *body field.
9. If this document type will be used as the input or output signature of a service
exposed as a web service and you want to enable streaming of MTOM attachments
for elements of type base64Binary, select the Enable MTOM streaming for elements of type
base64Binary check box.
For more information about streaming of MTOM attachments, see the Web Services
Developer’s Guide.
10. Click Next.
11. On the Select Root Node panel, under Select the root node, select the root node for the
XML schema definition used in the e-form template.
The standard name for a root node is as follows:
Adobe LiveCycle E-Form Template: xdp
Microsoft InfoPath E-Form Template: myFields
Keep in mind that the e-form template developer can change the root node.
On the Select a Source Location panel, Designer displays the path to and name of
the XML schema definition extracted from the e-form template in the File/URL field.
Designer creates a set of temporary files containing the XML Schema definition in
the workspace. Designer removes the files after creating the IS document type.
12. Under Element reference handling, select one of the following to determine how
Integration Server handles references to global elements of complex type:
Select... To...
Select... To...
13. Under Complex type handling, select one of the following to indicate how Integration
Server handles references to named complex type definitions:
Select... To...
14. If you selected Generate document types for complex types and you want to register each
document type with the complex type definition from which it was created, select
the Register document type with schema type check box.
Note: If you want derived type support for document creation and validation,
select the Register document types with schema type check box. For more
information, see "Registering Document Types with Their Schema Types"
on page 580.
15. If you want Integration Server to generate IS document types for all complex types
in the XML Schema definition regardless of whether the types are referenced by
elements or other type definitions, select the Generate document types for all complex
types in XML Schema check box.
If you leave this check box cleared, Integration Server generates a separate IS
document type for a complex type only if the complex type is referenced or is
derived from a referenced complex type.
16. Click Finish.
also adds a field named contentID. At run time, the contentID field contains a unique
identifier for the instance of the content type.
To create an IS document type from a file in the webMethods Content Service Platform
1. In the Service Development perspective, select File > New > Document Type
2. In the New Document Type dialog box, select the folder in which you want to save
the IS document type.
3. In the Element name field, type a name for the IS document type using any
combination of letters, numbers, and/or the underscore character. For information
about restricted characters, see "About Element Names" on page 54.
4. Click Next.
5. On the Select a Source Type panel, select webMethods Content Service Platform and click
Next.
6. In the From Repository list, select the content repository that contains the content type
template from which you want to create a document type.
7. Click Next.
8. If you want to filter the contents of the selected repository, type search criteria in the
text box.
9. Select the content type from which you want to create a document type and click
Next.
10. In the Description field, type a description for the IS document type. This is optional.
The description will appear in the Comment property for the IS document type. If you
do not enter a description, the Comment property contains a message indicating the
source of the IS document type.
11. Click Next.
12. On the Select Processing Options panel, under Content model compliance, select one of
the following to indicate how strictly Integration Server represents content models
from the XML Schema definition in the resulting IS document type.
Select... To...
Select... To...
not generate an IS document type from any XML schema
definition that contains those items.
13. If you selected strict or lax compliance, next to Preserve text position, do one of the
following to specify whether document types generated from complex types that
allow mixed content will contain multiple *body fields to preserve the location of text
in instance documents.
Select the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content preserves the locations
for text in instance documents. The resulting document type contains a *body
field after each field and includes a leading *body field. In instance documents for
this document type, Integration Server places text that appears after a field in the
*body field that follows it.
Clear the Preserve text position check box to indicate that the document type
generated for a complex type that allows mixed content does not preserve the
locations for text in instance documents. The resulting document type contains a
single *body field at the top of the document type. In instance documents for this
document type, text data around fields is all placed in the same *body field.
14. If this document type will be used as the input or output signature of a service
exposed as a web service and you want to enable streaming of MTOM attachments
for elements of type base64Binary, select the Enable MTOM streaming for elements of type
base64Binary check box.
For more information about streaming of MTOM attachments, see the Web Services
Developer’s Guide.
Select... To...
18. Under Complex type handling, select one of the following to indicate how Integration
Server handles references to named complex type definitions:
Select... To...
Select... To...
reference field refers to the IS document type
created for the complex type definition.
Integration Server generates a separate IS document
type for any types derived from the referenced
complex types. For more information about derived
types, see "Derived Types and IS Document Types"
on page 578.
19. If you selected Generate document types for complex types and you want to register each
document type with the complex type definition from which it was created, select
the Register document type with schema type check box.
Note: If you want derived type support for document creation and validation,
select the Register document types with schema type check box. For more
information, see "Registering Document Types with Their Schema Types"
on page 580.
20. If you want Integration Server to generate IS document types for all complex types
in the XML Schema definition regardless of whether the types are referenced by
elements or other type definitions, select the Generate document types for all complex
types in XML Schema check box.
If you leave this check box cleared, Integration Server generates a separate IS
document type for a complex type only if the complex type is referenced or is
derived from a referenced complex type.
21. Click Finish.
Notes:
When a content type in the Content Service Platform serves as the source, Designer
creates a publishable IS document type. Designer adds the envelope (_env) field
to the IS document type automatically. This field is a document reference to the
pub:publish:envelope document type.
If Integration Server is connected to a Broker at the time you create an IS document
type from a content type in the Content Service Platform, the resulting IS document
type will be publishable to the Broker and will have an associated Broker document
type.
flat file. This can be helpful when mapping to or from services that consume or produce
flat files.
You can also use the pub.flatFile.generate:createDocumentType service to create an IS
document type from a flat file schema.
If you select the option to expand complex types inline, the schema processor generates
the document type as follows. In this example, the schema processor expanded the
complex types named documentX and documentY inline within the new IS document
type:
If you select the option to generate complex types as separate document types, the
schema processor generates the document types as follows. In this example, the
schema processor generated three IS document types—one for the complex type
named documentY, one for the complex type named documentX (with a reference to
documentY), and one for the root element eltA (with references to documentX and
documentY):
The schema processor generates all three document types in the same folder.
Note: If the complex type is anonymous, the schema processor expands it inline
rather than generating a separate document type.
If the XML Schema you are using to generate an IS document type contains recursive
complex types (that is, element declarations that refer to their parent complex types
directly or indirectly), you can avoid errors in the document type generation process by
selecting the option to generate complex types as separate document types. (Selecting
the option to expand complex types inline will result in infinitely expanding nested
documents.)
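A recursive complex type of this kind might look like the following sketch; expanding it inline would nest Category documents without end (the names and tns prefix are illustrative):

```xml
<xsd:complexType name="Category">
  <xsd:sequence>
    <xsd:element name="name" type="xsd:string"/>
    <!-- subCategory refers back to its parent complex type -->
    <xsd:element name="subCategory" type="tns:Category" minOccurs="0"/>
  </xsd:sequence>
</xsd:complexType>
```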
If you generate an IS document type from this XML schema definition and you select
the Generate document types for complex types option, Integration Server creates an
IS document type for the base Address complex type and another for the derived
USAddress complex type.
When data conforms to the derived version rather than the base, an XML document or
IData object should indicate the specific derived version that is in use:
In an XML document, the xsi:type attribute is included to specify the derived type
being used for a complex type. For example, the following XML line indicates that
the invoice address will use the alternate format defined by the USAddress complex
type:
<invoiceAddress xsi:type="order:USAddress">
In a document (IData object), Integration Server uses the *doctype field, which contains
the name of the derived document type that represents the structure of a Document
field.
that you select in Designer should correspond to the schema type name that Integration
Server should use for the xsi:type attribute in the XML.
For example, you might have a Document field for an invoice address. To indicate that
the structure of the invoice address uses a derived type that represents an address in the
United States, for the *doctype field select the name of the appropriate derived document
type (for example, docType_Ref_order_USAddress).
For example, a complex type in XML being converted might include the following:
<invoiceAddress xsi:type="order:USAddress">
When Integration Server generates the Document field for the invoice address, it will
add a *doctype field and set its value to the fully-qualified name of the derived document
type that corresponds to the schema type name "order:USAddress" (for example,
orders:docTypeRef_order_USAddress).
When working with a Document field that was converted from XML, do not delete or
edit the *doctype field.
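To visualize the result, here is a minimal sketch in which a plain Python dict stands in for an IData object; the field values and the document type name are invented for illustration:

```python
# Hypothetical converted document: the *doctype field records the derived
# document type, and the remaining fields carry the address data.
invoice_address = {
    "*doctype": "orders:docTypeRef_order_USAddress",
    "name": "Jane Doe",
    "street": "100 Main St",
    "city": "Reston",
    "state": "VA",
    "zip": "20190",
}
```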
Note: When creating a web service descriptor from a WSDL, Integration Server
registers each document type that it creates with the associated schema type
defined in the WSDL.
Registering IS document types is important when the XML schema definition uses
derived types because it enables Integration Server to convert data that conforms
to the IS document types and the XML schema definition from a document (IData
object) to XML, and vice versa. Registration also enables Integration Server to
validate documents (IData objects) that use
derived types. For more information about derived types and derived document types,
see "Derived Types and IS Document Types" on page 578 and "*doctype Fields in IS
Document Types and Document Fields" on page 579.
The rest of this section illustrates what happens when Integration Server registers IS
document types with their XML schema types and how the registration is used during
data conversion. The following shows a portion of an XML schema definition that is
used for the illustration.
<xsd:element name="purchaseOrder">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="id" type="xsd:string"/>
<xsd:element name="invoiceAddress" type="order:Address"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:complexType name="Address">
<xsd:sequence>
<xsd:element name="name" type="xsd:string"/>
<xsd:element name="street" type="xsd:string"/>
<xsd:element name="city" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="USAddress">
<xsd:complexContent>
<xsd:extension base="order:Address">
<xsd:sequence>
<xsd:element name="state"/>
<xsd:element name="zip"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
When you create IS document types from the above XML schema definition and select
the Generate document types for complex types and Register document types with schema
type options, Integration Server:
Creates an IS document type for the base Address complex type and another for the
derived USAddress complex type.
Adds a *doctype field to IS document types created for the base Address complex
type and the derived USAddress complex type.
Registers the Address complex type with the IS document type it generates for the
Address complex type.
Registers the USAddress complex type with the derived IS document type it
generates for the derived USAddress complex type. For example, this might
establish a mapping between the complex type order:USAddress and the IS
document type docTypeRef_order_USAddress.
Because the IS document types were registered with the XML schema types, Integration
Server can later:
Convert XML data based on the schema to a document (IData object) and validate
the document
When an element in an XML instance is based on a derived type, the XML uses
the xsi:type attribute to identify the derived type for the element. When the
IS document type associated with the derived type is registered, Integration Server
can locate the correct IS document type to use for the conversion, as well as set
the *doctype field to indicate the IS document type that defines the format in the
resulting document (IData object).
For example, if Integration Server converts an XML document that uses the
USAddress complex type, when parsing the XML, Integration Server finds the
<invoiceAddress xsi:type="order:USAddress"> element. Integration Server
uses the value of the xsi:type attribute, that is, order:USAddress, and looks up
the registration to determine the corresponding IS document type. After Integration
Server determines the IS document type, it can then do the conversion using the IS
document type that corresponds to order:USAddress.
During the conversion, Integration Server sets the *doctype field to the fully-
qualified name of the IS document type it found in the registration. As a result, when
Integration Server validates the IData object, it determines the correct IS document
type to use for validation by using the value in the *doctype field.
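The lookup described above can be sketched as follows. This is an illustrative model only, not Integration Server's implementation: a plain dict stands in for the internal registration of schema types to IS document types, and the docTypeRef_... names are hypothetical.

```python
# Hypothetical registration: schema type name -> IS document type name.
registry = {
    "order:Address": "orders:docTypeRef_order_Address",
    "order:USAddress": "orders:docTypeRef_order_USAddress",
}

def convert(xsi_type, fields):
    """Look up the IS document type registered for an xsi:type value and
    record it in the resulting document's *doctype field."""
    doc = dict(fields)
    doc["*doctype"] = registry[xsi_type]
    return doc

doc = convert("order:USAddress", {"city": "Reston", "state": "VA"})
print(doc["*doctype"])  # orders:docTypeRef_order_USAddress
```

During validation, the value of *doctype then identifies which document type definition to validate against.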
When this property is set to true, the schema processor imports all substitution
group members (a non-abstract head element and substitutable elements) as optional
fields, even though they are defined as required elements in the XML Schema
definition.
Note: Because all the substitution group members are imported as optional,
during validation, Integration Server might consider some documents to
be valid even though the documents are actually invalid. For example,
suppose the original XML schema definition required the head element
or one of the member elements to be present. If none of the substitution
group elements are present in the instance document, Integration Server
considers the document to be valid because the corresponding fields
are optional in the resulting IS document type. Additionally, if the instance
document contains more than one member of the substitution group,
Integration Server considers the document to be valid because the
corresponding fields are optional.
When this property is set to false, the resulting document type contains a field that
corresponds to the head element in the substitution group, but does not contain any
elements for members of the substitution group. This is the default.
When generating fields for a substitution group, Integration Server exhibits the
following behavior:
If the head element is declared as abstract, Integration Server does not include that
element in the IS document type.
Normally, when Integration Server creates a document type for a content model
that contains multiple occurrences of an element, Integration Server aggregates the
repeated fields into a single array. For example, if Integration Server encounters two
elements named "myElement", Integration Server collects them into a single array
named "myElement". However, when Integration Server creates a document type for
a substitution group, if the same element is included in the substitution group more
than once via two different substitution group members, Integration Server does not
aggregate the elements into an array.
Integration Server cannot create an IS document type from an XML Schema
definition that contains a substitution group with a recursive reference to another
substitution group. For example, if a member of the substitution group contains a
reference to the head element, Integration Server enters a loop which eventually
results in a stack overflow error.
URI property. Designer also sets the Linked to source property to true which prevents
any editing of the document type contents. To edit the document type contents, you
first need to make the document type editable by breaking the link to the source.
For information about allowing editing of elements derived from a source, see
"Allowing Editing of Derived Elements" on page 61. However, Software AG does not
recommend editing the contents of document types created from WSDL documents.
filter. If you want a provider filter that operates on the contents of _properties to work
regardless of the encoding type, always include _properties in the filter expression.
For an IS document type created from an e-form template, any modifications to the
content or structure of the IS document type will make it out of sync with the e-form
template from which it was created. This makes it unusable with the associated e-
form. When an instance of the e-form template is received, it will not match the IS
document type.
Note: Designer prints the contents of the editor only. Variables and properties
that are collapsed will not be expanded in the printed version of the
HTML.
for subscribers. For more information, see "About the Encoding Type for a Publishable
Document Type" on page 597.
When you build an integration solution that uses publication and subscription, you need
to create the publishable document types before you create triggers, services that process
documents, and services that publish documents.
If a document type contains a _properties field at the top-level and the associated
messaging provider is Universal Messaging, Integration Server and Universal
Messaging treat the contents of _properties as custom header fields in the published
document. For more information about the _properties field, see "About the
Properties Field" on page 595.
Designer makes an IS document type generated from an e-form template a
publishable document type automatically.
You can make a document type publishable when the Linked to source property is
set to true. When a document type is linked to its source, you cannot change the
structure or contents of the document type. However, Designer does not consider the
addition of the _env field to be a structural change that breaks the association with
the source file.
Note: You can publish a document associated with a Broker connection alias
locally by setting the local input parameter of the publishing service to
true.
4. If you selected a Universal Messaging connection alias for the Connection alias
name property or you selected DEFAULT and the default messaging connection
alias is a Universal Messaging connection alias, next to Encoding type, select one of
the following to indicate the format used to encode and decode instances of this
publishable document type.
For more information about setting the encoding type, see "About the Encoding
Type for a Publishable Document Type" on page 597.
5. Next to the Discard property, select one of the following to indicate how long
instances of this publishable document type remain on the provider before the
messaging provider discards them.
Select... To...
6. Next to the Storage type property, select the storage method to use for instances of
this publishable document type.
Select... To...
For more information about selecting a storage type, see "Setting the Document
Storage Type for a Publishable Document Type" on page 606.
7. Select File > Save. Designer displays an icon beside the document type name in the
Package Navigator to indicate it is a publishable document type.
8. If you selected protocol buffers as the encoding type and a field in the publishable
document type cannot be represented in protocol buffers, Designer displays a
warning message to that effect. Click OK to dismiss the message.
Notes:
In the Connection alias type property, Designer displays Broker or Universal Messaging
to indicate which messaging provider is used by the selected alias.
In the Properties view, the Provider definition property displays the name of the
corresponding object created on the messaging provider.
Universal Messaging creates a channel that corresponds to the document type.
The channel name uses the following naming convention: wm/is/folderName/
subFolderName/documentTypeName. If a channel with that name already exists,
Integration Server does not create a new channel.
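The channel-naming convention above can be sketched as a simple function; the folder and document type names below are invented examples:

```python
# Sketch of the naming convention: wm/is/folderName/subFolderName/documentTypeName
def channel_name(folders, document_type_name):
    """Build the Universal Messaging channel name for a document type
    located under the given folder path."""
    return "/".join(["wm", "is", *folders, document_type_name])

print(channel_name(["orders", "na"], "purchaseOrder"))  # wm/is/orders/na/purchaseOrder
```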
metadata about the document. For more information about this field, see "About the
Envelope Field" on page 594.
If you selected protocol buffers as the encoding type, Integration Server creates
a message descriptor for the publishable document type. For more information
about using protocol buffers as the encoding type, see "Using Protocol Buffers as the
Encoding Type" on page 598.
Once a publishable document type has an associated provider definition,
you need to make sure that the document type and provider definition remain
in sync. You can update one with changes in the other by synchronizing them.
For information about synchronizing document types, see "About Synchronizing
Publishable Document Types" on page 615.
If you change the messaging connection alias assigned to a publishable document type,
you might need to synchronize the publishable document type with its associated
provider definition.
Once a document type is publishable, any changes to the content, structure, or
properties can impact the corresponding provider definition, subscribing triggers,
or publishing services. For more information about editing a publishable document
type, see "Important Considerations When Modifying Publishable Document Types"
on page 585.
For more information about the _env field and the contents of the pub:publish:envelope
document type, see the webMethods Integration Server Built-In Services Reference.
Note: If an IS document type contains a field named _env, you need to delete that
field before you can make the IS document type publishable.
Note: Integration Server uses the contents of _properties as custom header fields
in the published document only when the document is published to Universal
Messaging. For all messaging providers, Integration Server includes _properties
in the body of the published document.
For the contents of the _properties field to be added to the message header of a published
document, _properties:
Must be a Document or Document reference variable.
Must be at the top-level of the publishable document type. That is, _properties cannot
be a child of another document in the document type.
Can include any number of fields.
Can contain fields of type String.
Can contain Object fields with a Java wrapper type of:
java.lang.Boolean
java.lang.Byte
java.lang.Character
java.lang.Double
java.lang.Float
java.lang.Integer
java.lang.Long
java.lang.Short
java.util.Date
Should not contain fields of type Document, Document List, Document Reference,
or Document Reference List. When creating the message header, Integration Server
ignores the content of fields of these types in _properties. Integration Server includes
the entire contents of _properties in the published document, but Integration Server
only uses scalar fields that are direct children of _properties in the message header.
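The rules above can be sketched as a filter over the direct children of _properties. This is an illustrative model only; Python scalar types stand in for the Java wrapper types listed above, and the field names are invented:

```python
# Scalar stand-ins for String, Boolean, Integer, Double, and the other
# Java wrapper types listed above.
HEADER_TYPES = (str, bool, int, float)

def header_fields(properties):
    """Scalar direct children of _properties become header fields; nested
    documents (dicts) are kept in the message body but ignored for the header."""
    return {k: v for k, v in properties.items()
            if isinstance(v, HEADER_TYPES)}

props = {"region": "EMEA", "priority": 3, "audit": {"user": "jdoe"}}
print(header_fields(props))  # {'region': 'EMEA', 'priority': 3}
```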
For information about creating a filter for use with custom header fields, see "Creating a
webMethods Messaging Trigger " on page 727.
Each adapter notification has an associated publishable document type. When you
create an adapter notification in Designer, Integration Server automatically generates a
corresponding publishable document type. Designer assigns the publishable document
type the same name as the adapter notification, but appends PublishDocument to the
name. You can use the adapter notification publishable document type in triggers and
flow services just as you would any other publishable document type.
The adapter notification publishable document type is directly tied to its associated
adapter notification. Integration Server automatically propagates the changes from the
adapter notification to the publishable document type. That is, when working in Package
Navigator view, Designer treats an adapter notification and its publishable document
type as a single unit. If you perform an action on the adapter notification, Designer
performs the same action on the publishable document type. For example, if you rename
the adapter notification, Designer automatically renames the publishable document
type. If you move, cut, copy, or paste the adapter notification, Designer moves, cuts,
copies, or pastes the publishable document type.
The Connection alias name property for the adapter notification publishable document
type will initially have the default messaging connection alias that is configured in
Integration Server. This means that the default messaging connection alias will be used
to publish and receive instances of the adapter notification publishable document type.
Any changes to the Connection alias name property of the adapter notification publishable
document type will be propagated to its associated adapter notification.
For information about how to create and modify adapter notifications, see the
appropriate adapter user’s guide.
When a publishable document type uses Universal Messaging as the messaging provider,
you can specify an encoding type. You can specify one of the following encoding types:
IData, the universal container in Integration Server for sending and receiving data.
When a document type uses IData as the encoding type, Integration Server encodes
published instances of the document type as a serialized IData object.
Protocol buffers, a format for serializing structured data developed by Google and
implemented by Integration Server. When a document type uses protocol buffers
as the encoding type, Integration Server encodes the published instances of the
document type as a protocol buffer.
Note: When a publishable document type uses Broker as the messaging provider,
Integration Server always encodes published documents as a Broker Event.
Integration Server encodes locally published documents as IData.
The encoding type for a publishable document type also determines the scope of the
message to which Universal Messaging applies a provider filter. In turn, this affects
the provider filters that you can build for the webMethods Messaging Triggers that
subscribe to the document type.
When IData is the encoding type, Universal Messaging can filter on the custom
header fields added via _properties only. The provider filter created by a
webMethods Messaging Trigger can include _properties header fields only.
When protocol buffers is the encoding type, Universal Messaging can filter on the
body of the document only. However, when creating the published document,
Integration Server includes the _properties headers in the body of the document as
well. The provider filter created by a webMethods Messaging Trigger can include
body and _properties header fields.
For more information about creating filters for use with Universal Messaging, see
"Creating a webMethods Messaging Trigger " on page 727.
Note: You can only specify an encoding type for a publishable document type
in Integration Server and Designer versions 9.7 or later. Additionally, the
publishable document type must use Universal Messaging version 9.7 or later
as the messaging provider.
body as well as the header of the document, triggers can be more selective about which
documents they receive.
When you save a publishable document type for which protocol buffers is the encoding
type, Integration Server creates a message descriptor that represents the structure of
the document type as a protocol buffer. Integration Server saves the message descriptor
along with other metadata in the node.ndf file for the publishable document type. When
an instance of the publishable document type is published, Integration Server uses the
message descriptor to encode the document as a protocol buffer and then sends the
document to Universal Messaging. When a trigger receives the published document,
Integration Server uses the message descriptor to decode the document from a protocol
buffer.
When creating the message descriptor, Integration Server includes only fields that
can be represented in the protocol buffers format. Not all field names, data types, and
structures that are valid for a publishable document type can be represented in the
protocol buffer message descriptor. When publishing a document, Integration Server
places fields that cannot be represented in a protocol buffer message descriptor in an
UnknownFieldSet. An UnknownFieldSet is a collection of fields that may be present
while encoding or decoding the document but are not present in the message descriptor.
Integration Server encodes the UnknownFieldSet as a serialized IData byte array. The
UnknownFieldSet, which is included in the published document, is passed through to
the subscribers. Universal Messaging cannot use provider filters to filter on the contents
of the UnknownFieldSet. However, a webMethods Messaging Trigger that receives the
document will be able to decode the UnknownFieldSet and include its contents in the
pipeline.
If you encode documents as protocol buffers to make use of provider filters for the
document body, you may want to delegate as much filtering to Universal Messaging
as possible. If so, make sure the fields on whose contents you want Universal Messaging
to filter can be represented within a protocol buffer message descriptor. Universal
Messaging can only filter on fields that can be represented in the protocol buffer
message format.
The following list identifies limitations for representing fields in a protocol buffer
message descriptor:
Field names must meet the following criteria to be encoded:
First character must be a letter (a-z or A-Z).
Subsequent characters must be a letter, number, or underscore symbol (_).
If the field name does not meet the preceding criteria, Designer displays the
following message when you save the publishable document type: Cannot create
field 'fieldName' in publishable document type 'publishableDocumentTypeName';
this field name is not valid for use with protocol buffer encoding. The Universal
Messaging provider will transport the field contents as part of the UnknownFieldSet,
which will be visible to Integration Server clients only.
Note: Integration Server reserves the use of field names that begin with the
underscore character for Integration Server usage, for example _env and
_properties.
Fields at the same level that share the same name, such as fields at the top-level
of the document type or sibling fields in a Document variable, cannot be encoded
with protocol buffers. Integration Server encodes the identically named fields as
part of the IData byte array for the UnknownFieldSet. For information about how
Integration Server decodes the contents of fields with the same name, see "Decoding
Protocol Buffers" on page 601.
If the publishable document type contains duplicate variables, Designer displays the
following message when you save the publishable document type: Cannot create
field 'fieldName' in publishable document type 'publishableDocumentTypeName';
fields with duplicate names are not permitted with protocol buffer encoding.
Fields must be defined to be a data type supported by protocol buffers encoding.
String tables cannot be encoded with protocol buffers and will be defined as a byte
array within the message descriptor and passed through as a serialized IData
object.
Objects and Object Lists defined to be an unknown Java wrapper type cannot be
encoded with protocol buffers. Instead, unknown Objects and Object Lists will
be defined as byte arrays within the message descriptor and passed through as
serialized IData objects.
Note: An Object or Object List field is unknown when the Java wrapper type
property for the field is set to UNKNOWN. For more information about
assigning a Java wrapper type to a field, see "Applying Constraints to a
Variable" on page 649.
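The field-name rule stated above (first character a letter; subsequent characters letters, digits, or underscores) can be expressed as a simple check. This is a sketch of the rule only, not Integration Server's actual validation code:

```python
import re

# First character a letter, then any mix of letters, digits, and underscores.
VALID_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def encodable_field_name(name):
    """True if the name satisfies the protocol buffer naming rule above."""
    return bool(VALID_NAME.match(name))

print(encodable_field_name("zipCode"))   # True
print(encodable_field_name("1stLine"))   # False: starts with a digit
print(encodable_field_name("zip-code"))  # False: hyphen not allowed
```

Note that names beginning with an underscore (such as _env) also fail the first-character rule; Integration Server reserves those for its own use in any case.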
To generate additional logging information in the server log when Integration Server
creates the protocol buffer descriptor, set the logging level for the server log facility 0154
Protocol Buffer Encoding (Universal Messaging) to Debug or Trace. Increased logging can
help you to locate problems that occur during protocol buffer encoding.
For more information about fields that cannot be represented in protocol buffers, see
"Using Protocol Buffers as the Encoding Type" on page 598.
However, at the time Integration Server publishes a document, there might be additional
fields that cannot be encoded as protocol buffers. Integration Server adds these fields to
the UnknownFieldSet.
The following contents of a published document will not be encoded as protocol buffers:
Undeclared fields. Any fields that are in the published document but are not defined
in the publishable document type will be added to the UnknownFieldSet. On the
subscribing side, Integration Server decodes these undeclared fields and adds them
immediately before the _env field.
Fields with a null value. Even if Integration Server can represent the field in protocol
buffers, null values cannot be included in protocol buffers. Fields with null values
will be added to the UnknownFieldSet. On the subscribing side, Integration Server
decodes these fields as null at their original position as defined in the publishable
document type.
Any list field in which one of the elements is a null value. The entire list is encoded
as a single serialized IData and placed in the UnknownFieldSet. On the subscribing
side, Integration Server decodes the list field into its original position as defined in
the publishable document type.
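The three rules above can be sketched as a partitioning step. This is an illustrative model only, not Integration Server's implementation; dicts stand in for IData documents, and the field names are invented:

```python
# Undeclared fields, fields with a null value, and lists containing a null
# element go to the UnknownFieldSet; everything else can be encoded.
def partition(document, declared_fields):
    encodable, unknown = {}, {}
    for name, value in document.items():
        if (name not in declared_fields
                or value is None
                or (isinstance(value, list) and any(v is None for v in value))):
            unknown[name] = value
        else:
            encodable[name] = value
    return encodable, unknown

enc, unk = partition({"id": "42", "memo": None, "extra": "x"},
                     declared_fields={"id", "memo"})
print(enc)  # {'id': '42'}
print(unk)  # {'memo': None, 'extra': 'x'}
```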
In addition, document encoding can fail if Integration Server encounters an unexpected
data type. For example, if a publishable document type defines a field named myString
to be a String but at run time the value of myString is not an instance of String,
Integration Server cannot encode myString. Document encoding fails entirely, and
publication fails with the following error:
Protocol buffer coder cannot handle data type dataTypeName for field fieldName in
document type: publishableDocumentType . Error: errorMessage
When decoding values for duplicate fields, Integration Server does not maintain the
order of the values if one of the fields is empty. Integration Server decodes a later
occurrence of the field in the position of the empty duplicate field.
The following table provides examples of how Integration Server decodes lists.
Note: Some Broker document types have a storage type of Persistent. The
Persistent storage type automatically maps to the guaranteed storage type
in the Integration Server.
a volatile client queue subscribes, the Broker changes the storage type of the document
from guaranteed to volatile before placing it in the volatile client queue. The Broker does
not change the storage type of a volatile document before placing it in a guaranteed
client queue.
The following table indicates how the client queue storage type affects the document
storage type.
If document storage type is...   And the client queue storage type is...   The Broker saves the document as...
Guaranteed                       Volatile                                  Volatile
Guaranteed                       Guaranteed                                Guaranteed
Note: On the Broker, each client queue belongs to a client group. The client queue
storage type property assigned to the client group determines the storage
type for all of the client queues in the client group. You can set the client
queue storage type only when you create the client group. By default, the
Broker assigns a client queue storage type of guaranteed for the client group
created for Integration Servers. For more information about client groups, see
Administering webMethods Broker.
Select... To...
Integration Server Administrator. For more information about setting this property,
see webMethods Integration Server Administrator’s Guide.
A webMethods messaging property for publishable document types that indicates
whether instances of a publishable document type should be validated. Integration
Server honors the value of this property (named Validate when published) only if the
watt.server.publish.validateOnIS property is set to perDoc (the default).
Select... To...
If you intend to delete the associated provider definition as well, make sure that the
connection alias to the messaging provider is enabled.
If the Integration Server is a member of a cluster or the client prefix for the associated
messaging connection alias is shared, you can delete the publishable document type
but you cannot remove the associated provider definition.
You can only delete a publishable document type if you own the lock (or have the
publishable document type checked out) and have Write permission to it.
If you delete a Broker document type that is required by another Integration
Server, you can synchronize (push) the document type to the Broker from that
Integration Server. If you delete a Broker document type that is required by a non-
IS Broker client, you can recover the document from the Broker .adl backup file. See
Administering webMethods Broker for information about importing .adl files.
Click...   To...
Cancel     Cancel the operation and preserve the element in Integration Server.
d. Click Apply. When you enter values for constrained objects in the Input tab,
Integration Server automatically validates the values. If the value is not of the
type specified by the object constraint, Designer displays a message identifying
the variable and the expected type.
7. Click the Action tab and specify publish settings for the document type.
a. Select the type of publishing for the document.
Note: The options available here are enabled or disabled, depending on the
messaging provider of the publishable document type or the version of
Integration Server to which your Designer is connected.
Select... To...
b. If you selected either Deliver to a destination or Deliver to a destination and wait for
a Reply, in the Destination ID field, specify the destination to which you want to
deliver the document. You can either enter the destination name or click Browse
to select the destination. If you click Browse, Designer displays all the available
Destination IDs.
Note: Integration Server assigns trigger clients names according to the client
prefix set for the Broker connection alias.
c. If you selected a publication action in which you wait for a reply, you need to
select the document type that you expect as a reply. You can enter the document
type name or click Browse to select the document type.
If you click Browse, Designer displays all the publishable document types on the
Integration Server to which you are currently connected. In the Elements Name
field, type the fully qualified name of the publishable document type that you
expect as a reply or select it from the Folder list. If the service does not expect a
specific document type as a reply, leave this field blank.
d. Under Set how long Designer waits for a Reply, select one of the following:
Select... To...
8. Optionally, click the Common tab to define general information about the launch
configuration and to save the launch configuration to a file.
9. Click Apply.
10. Click Run to test the publishable document type now. Otherwise, click Close.
Note: The input data is not saved after the default launch configuration
runs. If you want to save the input data in a launch configuration, see
"Creating a Launch Configuration for a Publishable Document Type".
Notes:
Designer displays the instance document and publishing information in the Results
view.
If you selected a publication action in which you wait for a reply, and Designer
receives a reply document, Designer displays the reply document as the value of the
receiveDocumentTypeName field in the Results view.
If Designer does not receive the reply document before the time specified next to
Wait for elapses, Designer displays an error message stating that the publish and
wait (or deliver and wait) has timed out. The Results view displays null next to the
receiveDocumentTypeName field to indicate that Integration Server did not receive
a reply document.
Synchronization Status
Each publishable document type on your Integration Server has a synchronization
status to indicate whether it is in sync with the provider definition, out of sync with the
provider definition, or not associated with a provider. The following table identifies each
possible synchronization status for a document type.
Status Description
Updated on the Provider    The publishable document type has been modified on the
messaging provider.
Updated Both Locally and on the Provider    The publishable document type and the
provider definition have both been modified since the last synchronization. You must
decide which definition is the required one and push to or pull from the Broker
accordingly. Information in one or the other document type is overwritten.
Created Locally The publishable document type was made publishable when
the messaging provider was not connected or the publishable
document type was loaded on Integration Server via package
replication. An associated provider definition may or may not
exist on the messaging provider.
If the messaging provider is Broker and an associated
provider definition exists on the Broker, synchronize the
document types by pulling from the Broker.
If an associated provider definition does not exist on the
messaging provider or the messaging provider is Universal
Messaging, create (and synchronize) the provider definition
by pushing to the provider.
Status Description
In Sync with The IS document type and the provider definition are already
Provider synchronized. No action is required.
Note: When you switch the Broker configured for the Integration Server to a Broker
in a different territory, the Integration Server displays the synchronization
status as it was before the switch. This synchronization status may be
inaccurate because it does not apply to elements that exist on the second
Broker.
Synchronization Actions
When you synchronize document types, you decide for each publishable document type
whether to push or pull the document type to the messaging provider. When you push
the publishable document type to the messaging provider, you update the provider
definition with the publishable document type on your Integration Server. When you
pull the document type from the messaging provider, you update the publishable
document type on your Integration Server with the provider definition.
The following table describes the actions you can take when synchronizing a publishable
document type.
Action Description

Push to Provider
Update the provider definition with information from the
publishable document type.

Pull from Provider
Update the publishable document type with information from the
provider definition.

Note: You can only pull from the messaging provider when the
publishable document type uses Broker as the messaging
provider.

Skip
Skip the synchronization action for this document type. (This
action is only available when you synchronize multiple document
types at one time.)
Note: For a publishable document type created for an adapter notification, you
can select Skip or Push to Provider only. A publishable document type for an
adapter notification can only be modified on the Integration Server on which
it was created.
Select... To...

Pull from Provider
Update the publishable document type with the provider
definition. This option is available only when the provider
definition is a Broker document type.
4. If you select Pull from Provider, as the action, Designer enables the Overwrite existing
elements when importing referenced elements check box.
5. If you want to replace existing elements in the Integration Server with identically
named elements referenced by the provider definition (Broker document type),
select the Overwrite existing elements when importing referenced elements check box. See
"Importing and Overwriting References During Synchronization" on page 625 for
more information about this topic.
6. Click Synchronize to synchronize the document type and provider definition.
If the Linked to source property is set to true for the publishable document type, the
action you can take depends on the source for the publishable document type. You
can select:
Pull from Provider only if the Source URI is a Broker document type.
Push to Provider only if the Source URI is a URI other than a Broker document type.
Note: When synchronizing multiple document types, Designer does not prevent
Integration Server from overwriting publishable document types for which
Linked to source is true.
When you switch the Broker configured for Integration Server to a Broker in a
different territory, Integration Server displays the synchronization status as it was
before the switch. This synchronization status may be inaccurate because it does not
apply to elements that exist on the second Broker.
The result of a synchronization action depends on the synchronization status of the
document type. For more information about how the result of a synchronization action
depends on the synchronization status, see "Combining Synchronization Action with
Synchronization Status" on page 618.
Select... To...

Set All to Push
Change the Action for all publishable document types in the
list to Push to Provider.

Note: When you select Set All to Push, Designer sets the
publication action for adapter notification document
types to Skip.

Set All to Pull
Change the Action for all publishable document types in the
list to Pull from Provider.

Set All to Skip
Change the Action for all publishable document types in the
list to Skip.
Select... To...

Pull from Provider
Update the publishable document type with the provider
definition.
4. If you want to replace existing elements in Package Navigator view with identically
named elements referenced by the Broker document type, select the Overwrite
existing elements when importing referenced elements check box. For more information
about importing referenced elements during synchronization, see "Importing and
Overwriting References During Synchronization" on page 625.
5. Click Synchronize to perform the specified synchronization actions for all the listed
publishable document types.
If you choose not to overwrite elements when you synchronize document types by
pulling from the Broker, Integration Server does not synchronize any document
type that references existing elements on the Integration Server. Integration Server
synchronizes only those document types that do not reference elements.
4. In the Name field, enter a new name for your launch configuration. The Document
tab displays the name of the Integration Server where the document resides as well
as the name of the document that you want to publish.
You can change the document specified in the Document Type field by clicking Browse.
5. Click the JMS Settings tab and specify the JMS message details:
a. In the JMS connection alias name field, click . In the Select a JMS connection
alias for documentName dialog box, select the JMS connection alias that you want
this launch configuration to use to receive messages from the JMS provider. Click
OK.
If a JMS connection alias has not yet been configured on Integration Server,
Designer displays a message stating the JMS subsystem has not been configured.
For information about creating a JMS connection alias, see webMethods Integration
Server Administrator’s Guide.
b. In the Destination name field, do one of the following to specify the destination:
If the JMS connection alias uses JNDI to retrieve administered objects, specify
the lookup name of the Destination object.
If the JMS connection alias uses the native webMethods API to connect
directly to Broker, specify the provider-specific name of the destination.
If the JMS connection alias creates a connection on Broker or Universal
Messaging, click to select from a list of existing destinations. After you
select the destination, click OK.
c. From the Destination type list, do one of the following:
Select Queue to send the message to a particular queue.
Select Topic to send the message to a topic.
You need to specify a destination type only if you specified a JMS connection alias
name that uses the native webMethods API.
d. Select Prepare message for BPM to add information to the JMS message that enables
Process Engine to start a process instance when it receives the message. When
this check box is selected, the published JMS message includes the documentType
property which specifies the fully qualified name of the IS document type used
to create the JMS message. Process Engine uses the document type name to map
the JMS message to the correct process model and start a process instance.
e. Under JMS Message Header and Properties, specify the values for the pre-defined
and custom properties that you want to add to the JMS message header. Click
to add a row to specify custom properties. Click to insert a row and to
delete a row.
6. Click the Input tab and specify input values for the document, which will form the
contents of the JMS message body.
a. Enter valid values for the fields defined in the document or click Load to retrieve
the values from a file. For information about loading input values from a file, see
"Loading Input Values" on page 453.
b. If you want to save the input values that you have entered, click Save. Input
values that you save can be recalled and reused in later runs. For information
about saving input values, see "Saving Input Values" on page 453.
c. Click Apply. When you enter values for constrained objects in the Input tab,
Integration Server automatically validates the values. If the value is not of the
type specified by the object constraint, Designer displays a message identifying
the variable and the expected type.
7. Optionally, click the Common tab to define general information about the launch
configuration and to save the launch configuration to a file.
8. Click Apply.
9. Click Run to run the launch configuration to publish the IS document as a JMS
message now. Otherwise, click Close.
Note: The input data is not saved automatically after the launch configuration
runs. To save the input data in a launch configuration, see "Saving
Input Values" on page 453.
Designer creates and publishes a JMS message. Designer displays the JMS message
in the Results view.
An XML document type is an asset in the IS namespace created from an XML Schema
definition. When you create an XML document type from an XML schema definition,
Integration Server creates a collection of XML document types to represent the structure,
content, and constraints defined in an XML schema definition. Each XML document
type corresponds to a global element declaration, global attribute declaration, or global
complex type definition in an XML Schema definition.
What Is XMLData?
At run time, instances of XML document types and XML fields are contained in an
XMLData object. XMLData is an IData object that uses a specific encoding format to
represent the XML Information Set (XML Infoset). The format facilitates all the features
of XML Infoset and XML Schema, including support for capabilities such as nested
model groups and substitution groups. The format also eliminates the need to specify an
association between prefixes and namespace URIs.
Unlike the traditional encoding used for XML representation with raw IData, the
encoding format for XMLData is not public and is subject to change at any time. The
various built-in services that support XMLData, including those in the pub.xmldata folder
in the WmPublic package, and flow MAP step operations are the only supported means
of accessing and modifying XMLData. Directly manipulating an XMLData object as you
would a traditional IData object leads to unexpected results.
Note: XML document types and instance documents based on XML document types
are intended to implement XML Schema and XML as closely as possible.
Behavior that is inconsistent with XML Schema and XML will be treated
as known issues that need resolution. Implementations should not exploit
behavior that is inconsistent with XML and XML Schema, as it may have
unpredictable results.
Note: Integration Server creates arrays for XML document types when
an individual element has a maxOccurs greater than 1. If there
are two fields named myLocalName#myNamespaceName
and each has a maxOccurs greater than 1, Integration Server
creates {1}myLocalName#myNamespaceName as an array and
{2}myLocalName#myNamespaceName as an array.
The contents of XML document types and XML fields cannot be edited.
6. If you selected CentraSite as the source, click Next. Then, under Select a Schema,
select the XML schema definition that you want to use as the source and click Finish.
If Designer is not configured to connect to CentraSite, Designer displays the
CentraSite > Connections preference page and prompts you to configure a connection
to CentraSite.
7. Click Finish.
Notes:
Integration Server uses the internal schema parser to validate an XML schema
definition. If you selected the Validate schema using Xerces check box, Integration
Server also uses the Xerces Java parser to validate the XML Schema definition. With
either parser, if the XML Schema does not conform syntactically to the schema
for XML Schemas defined in XML Schema Part 1: Structures (which is located at
http://www.w3.org/TR/xmlschema-1), Integration Server does not create an XML
document type. Instead, Designer displays an error message that lists the number,
title, location, and description of the validation errors within the XML Schema
definition. If only warnings occur, Designer generates the XML document type and
the other assets.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
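Because Integration Server compiles the pattern facet with a Perl5-style engine, a Perl-flavored regular expression library is a practical way to sanity-check a pattern before placing it in an XML schema definition. The sketch below uses Python's re module (whose syntax is Perl-derived) with a hypothetical part-number pattern; the pattern value and test strings are illustrative, not taken from the product.

```python
import re

# Hypothetical value for a pattern constraining facet, e.g.
# <xs:pattern value="[A-Z]{2}-\d{4}"/> in the XML schema definition.
# Perl-style classes such as \d are accepted by Integration Server;
# XML-Schema-only escapes such as \i or \c are not.
pattern = r"[A-Z]{2}-\d{4}"

compiled = re.compile(pattern)  # raises re.error if the syntax is invalid

# fullmatch mimics how a pattern facet applies to the entire value.
print(bool(compiled.fullmatch("AB-1234")))  # True
print(bool(compiled.fullmatch("ab-1234")))  # False
```

If re.compile raises an error, the pattern is unlikely to be accepted as a valid Perl regular expression by Integration Server either.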
Creating a Specification
When you create a specification, you can define the specification content by:
Manually inserting input and output variables.
Referencing an IS document type for the input and/or output signature.
Referencing a specification to use as the entire signature.
Typically, you do not build a specification by referencing another specification.
However, it is useful to do this in the situation where you will use the specification with
a group of services whose requirements are expected to change (that is, they match
an existing specification now but are expected to change at some point in the future).
Referencing a specification gives you the convenience of using an existing specification
and the flexibility to change the specification for only that single group of services in the
future.
To create a specification
1. In Designer, click File > New > Specification.
2. In the New Specification dialog box, select the folder in which you want to save the
specification.
3. In the Element Name field, type a name for the specification using any combination of
letters, numbers, and/or the underscore character. For information about restricted
characters, see "About Element Names" on page 54.
4. Click Finish.
5. To define the specification content by referencing another specification, in the
Specification Reference field in the Input/Output tab, type the fully qualified name of
the specification, or click to select it from a list.
6. To use an IS document type to define the input content for a specification, in the Input
field, type the fully qualified name of the IS document type or click to select it
from a list.
7. To use an IS document type to define the output content for a specification, in the
Output field, type the fully qualified name of the IS document type or click to
select it from a list.
8. To define the specification by inserting variables manually, do the following for each
variable that you want to add:
a. In the Palette view, select the type of variable you want to define and drag it to
the Input or Output side of the Input/Output tab.
If the Palette view is not visible, display it by clicking on the right side of the
specification editor.
b. Type a name for the variable and press ENTER.
c. With the variable selected, set variable properties and apply constraints using the
Properties view.
d. If the variable is a document or document list, repeat steps a–c to define and set
the properties and constraints for each of its members. Use to indent each
member beneath the document or document list variable.
9. Optionally, enter comments or notes for the specification in the Comments tab.
10. Click File > Save.
A variable can be a String, String list, String table, document, document list, document
reference, document reference list, Object, or Object list. Variables are used to declare the
expected content and structure of service signatures, document contents, and pipeline
contents. In addition to specifying the name and data type of a variable, you can set
properties that specify an XML namespace, indicate whether the variable is required at
runtime, and indicate whether the variable can be null at runtime.
Select a variable in the editor to set general properties and constraints for the variable.
Note: Specific properties in the Properties view are enabled or disabled, depending
on the type of variable you have selected.
3. In the Element Name field, type the fully qualified name of the IS document type or
select it from the list.
4. Click OK.
5. Type the name of the field.
6. Click File > Save.
Large Editor
Entered in a large text area instead of a text field. This is
useful if you expect a large amount of text as input for the
field, or if you need TAB or new line characters in the input.
Note: These options are not available for Objects and Object lists.
For document and document list variables, you can also specify the structure of the
variable; that is, you can specify what variables can be contained in the document
(IData object) or document list (IData[ ] object) at run time. For example, you
could specify that the lineItem document variable must contain the child variables
itemNumber, quantity, size, color, and unitPrice. You could also specify that the
lineItem document can optionally contain the child variable specialInstructions.
Content constraints describe the data type for a variable and the possible values for
the variable at run time. You can apply content constraints to String, String list,
String table, Object, and Object list variables.
When you specify a content constraint for a String, String list or String table variable,
you can also apply a constraining facet to the content. A constraining facet places
limitations on the content (data type). For example, for a String variable named
itemQuantity , you might apply a content constraint that requires the value to
be an integer. You could then apply constraining facets that limit the content of
itemQuantity to a value between 1 and 100.
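The itemQuantity example above maps naturally onto a simple type derived by restriction in XML Schema, where the constraining facets bound the integer content. A minimal sketch (the type and element names are illustrative, not taken from the product):

```xml
<xs:simpleType name="ItemQuantityType">
  <xs:restriction base="xs:integer">
    <!-- Constraining facets: limit the integer content to 1..100. -->
    <xs:minInclusive value="1"/>
    <xs:maxInclusive value="100"/>
  </xs:restriction>
</xs:simpleType>

<xs:element name="itemQuantity" type="ItemQuantityType"/>
```

The content constraint corresponds to the base type (xs:integer); the constraining facets (minInclusive, maxInclusive) narrow the set of acceptable values.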
You can use simple types from an IS schema as content constraints for String, String
list, or String table variables.
For pipeline and document validation, the IS document type used as the blueprint needs
to have constraints applied to its variables. For input/output validation, constraints need
to be applied to the declared input and output parameters of the service being validated.
For more information about data validation, see "Performing Data Validation" on page
319.
Note: When you create an IS document type from an XML Schema or a DTD, the
constraints applied to the elements and attributes in the original document are
preserved in the new IS document type. For more information about creating
IS document types, see "Creating an IS Document Type" on page 548.
A document is published. When this occurs, the contents of the document are
validated against the specified document type.
During debugging, when you enter values for a constrained Object or Object list in
the Input dialog box.
When you assign a value to an Object or Object list variable in the Pipeline view
using on the toolbar.
3. If the selected variable is a String, String list, or String table, and you want to specify
content constraints for the variable, click and then do one of the following:
If you want to use a content type that corresponds to a built-in simple type in
XML Schema, in the Content type list, select the type for the variable contents. To
apply the selected type to the variable, click OK.
If you want to customize the content type by changing the constraining facets
applied to the type, see "Customizing a String Content Type" on page 651.
If you want to use a simple type from an IS schema as the content constraint,
click Browse. In the Browse dialog box, select the IS schema containing the simple
type you want to apply. Then, select the simple type you want to apply to the
variable. To apply the selected type to the variable, click OK.
4. If the selected variable is an Object or Object list, for the Java wrapper type property,
select the Java class for the variable contents. If you do not want to apply a Java class
or if the Java class is not listed, select UNKNOWN.
For more information about supported Java classes for Objects and Object lists, see
"Java Classes for Objects" on page 1158.
5. Repeat this procedure for each variable to which you want to apply constraints in the
IS document type, specification, service input, or service output.
6. Click File > Save.
In the Content type list, select the content type you want to customize.
If you want to customize a simple type from an IS schema, click Browse. In the
Browse dialog box, select the IS schema containing the simple type. Then, select
the simple type you want to customize and apply to the variable. Click OK.
3. Click Customize. Designer makes the constraining facet fields below the Content type
list available for data entry (that is, changes the background of the constraining
facet fields from grey to white). Designer changes the name of the content type to
contentType_customized.
4. In the fields below the Content type list, specify the constraining facet values you want
to apply to the content type.
5. Click OK. Designer saves the changes as a new content type named
contentType_customized.
Note: The constraining facets displayed below the Content type list depend on
the primitive type from which the simple type is derived. Primitive types
are the basic data types from which all other data types are derived. For
example, if the primitive type is string, Designer displays the constraining
facets enumeration, length, minLength, maxLength, and pattern. For more
information about primitive types, refer to XML Schema Part 2: Datatypes at
http://www.w3.org/TR/xmlschema-2/.
Note: By default, Designer hides the variables with this constraint. To display
these variables in the content and structure of service signatures, document
and pipeline contents, and in the Run Configurations, Enter Input for
serviceName, and Enter Input for variableName dialog boxes, select the Show
variables with fixed values property on the Service Development Preferences page.
Note: Designer displays the ‡ symbol next to String, String List, and String table
variables with a content type constraint only. Designer does not display the
‡ symbol next to Object and Object list variables with a specified Java class
constraint. Object and Object lists with an applied Java class constraint have
a unique icon. For more information about icons for constrained Objects, see
"Java Classes for Objects" on page 1158.
Content constraints in an IS schema describe the type of information that elements and
attributes can contain in a valid instance document. For example, the <quantity>
element might be required to contain a value that is a positive integer.
During data validation, Designer compares the elements and attributes in the instance
document with the structural and content constraints described for those elements and
attributes in the IS schema. Designer considers the instance document to be valid when it
complies with the structural and content constraints described in the IS schema.
You can create IS schemas from an XML schema, a DTD (Document Type Definition), or
an XML document that references an existing DTD.
Schema editor
Schema Browser
The Schema browser displays the components of an IS schema in a format that mirrors
the structure and content of the source file. The Schema browser groups the global
element declarations, attribute declarations, simple type definitions, and complex type
definitions from the source file under the top-level headings ELEMENTS, ATTRIBUTES,
SIMPLE TYPES, and COMPLEX TYPES. For example, the ELEMENTS heading contains
all of the global element declarations from the XML schema or the DTD.
If the source file does not contain one of these global components, the corresponding
heading is absent. For example, if you create an IS schema from an XML schema
that does not contain any global attribute declarations, the Schema browser does not
display the ATTRIBUTES heading. An IS schema created from a DTD never displays
the SIMPLE TYPES or COMPLEX TYPES headings because DTDs do not contain type
definitions.
Note: A DTD does contain attribute declarations. However, the Schema browser
does not display the ATTRIBUTES heading for IS schemas generated from
DTDs. This is because an attribute declaration in a DTD associates the
attribute with an element type. Accordingly, the Schema browser displays
attribute declarations with the element declarations to which they apply.
The Schema browser uses unique symbols to represent the components of the IS
schema. Each of these symbols relates to a component of an XML schema or a DTD.
The following table identifies the symbol for each component that can appear in an IS
schema.
Note: In the following table, global refers to elements, attributes, and types declared
or defined as immediate children of the <schema> element in an XML schema.
All element type declarations in a DTD are considered global declarations.
Symbol Description
not correspond to a component in an XML Schema definition or
a DTD.
Component Details
The Component Details area displays information that you use to examine and edit the
selected component in the Schema browser. The contents of Component Details varies
depending on what component you select. For example, when you select a globally
declared element of complex type, the Component Details looks like the following:
When you select a simple type definition, the Component Details looks like the
following:
Creating an IS Schema
You can create IS schemas from XML schema definitions, DTDs, and XML documents
that reference an existing DTD. The resulting IS schema contains all of the defined types,
declared elements, and declared attributes from the source file.
You can create an IS schema from an XML Schema definition in CentraSite. To do so,
Designer must be configured to connect to CentraSite.
To create an IS schema
1. In the Package Navigator view of the Service Development perspective, click File >
New > Schema.
2. In the New Schema dialog box, select the folder in which you want to save the IS
schema.
3. In the Element Name field, type a name for the IS schema using any
combination of letters, numbers, and/or the underscore character. For information
about restricted characters, see "About Element Names" on page 54.
4. Click Next.
5. On the Select the Source Type panel, do one of the following:
Select... To...
6. On the Select a Source Location panel, under Source location, do one of the following
to specify the source file for the IS schema:
To use an XML schema definition in CentraSite as the source, select CentraSite.
To use an XML document, DTD, or XML schema definition that resides on the
Internet as the source, select File/URL. Then, type the URL of the resource. (The
URL you specify must begin with http: or https:.)
To use an XML document, DTD, or XML Schema definition that resides on your
local file system as the source, select File/URL. Then, type in the path and file
name, or click the Browse button to navigate to and select the file.
7. Click Next.
8. If you selected CentraSite as the source, under Select XML Schema from CentraSite,
select the XML Schema definition in CentraSite that you want to use to create the IS
schema. Click Next.
If Designer is not configured to connect to CentraSite, Designer displays the
CentraSite > Connections preference page and prompts you to configure a connection
to CentraSite.
Note: You can also create an IS schema from an XML Schema definition asset in
CentraSite by dragging and dropping the schema asset from the Registry
Explorer view into Package Navigator view.
9. If the source file is an XML Schema definition, on the Select Schema Domain panel,
under Schema domain, specify the schema domain to which any generated IS schemas
will belong. Do one of the following:
To add the IS schema to the default schema domain, select Use default schema
domain.
To add the IS schemas to a specified schema domain, select Use specified schema
domain and provide the name of the schema domain in the text box. A valid
schema domain name is any combination of letters, numbers, and/or the
underscore character. For information about restricted characters, see "About
Element Names" on page 54.
For more information about schema domains, see "About Schema Domains" on page
669.
10. If the source file is an XML Schema definition and you want Integration Server to
use the Xerces Java parser to validate the XML Schema definition, select the Validate
schema using Xerces check box.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
Integration Server does not create IS schemas from an XML schema definition (XSD)
if the XSD contains a type definition derived by extension and that type definition
contains a direct or indirect reference to itself. If Integration Server encounters a type
definition that contains a recursive extension while creating an IS schema from an
XSD, Integration Server throws a StackOverflowError and does not continue creating
the IS schema.
You might receive errors or warnings when creating an IS schema from a DTD. If
one or more errors occur, Designer does not generate an IS schema. If only warnings
occur, Designer generates the IS schema.
When creating an IS schema from an XML Schema definition that imports multiple
schemas from the same target namespace, Integration Server throws Xerces
validation errors indicating that the element declaration, aribute declaration, or
type definition cannot be found. The Xerces Java parser honors the first <import> and
ignores the others. To work around this issue, you can do one of the following:
Combine the schemas from the same target namespace into a single XML Schema
definition. Then change the XML schema definition to import the merged schema
only.
When creating the IS schema, clear the Validate schema using Xerces check box
to disable schema validation by the Xerces Java parser. When generating the IS
schema, Integration Server will not use the Xerces Java parser to validate the
schemas associated with the XML Schema definition.
schema references an external schema that does not contain a target namespace
declaration, the external schema assumes the target namespace of the root schema.
Import. When you generate an IS schema from an XML schema that contains an
<import> element to import the contents of an external schema in a different
namespace, Integration Server creates one IS schema per namespace. For example,
if the source XML schema imports two XML schemas from the same namespace,
Integration Server creates an IS schema for the source XML schema and then a
second IS schema that includes the components from the two imported XML
schemas. Integration Server assigns each imported schema the name that you specify
and appends an underscore and a number to each name. For example, if you create
an IS schema named “mySchema” from mySchema.xsd, Integration Server generates
an IS schema named “mySchema_2” for the imported XML schema.
Redefine. Schema authors can also use <redefine> to include and then redefine type
definitions, model groups, and attribute groups from an external XML schema in the
same namespace. Integration Server creates one IS schema per namespace.
You can edit any of the constraining facet values that appear in the Component
Details when you select an editable simple type definition in the Schema browser. The
constraining facets displayed in the Component Details depend on the primitive type
from which the simple type was derived. For example, if the simple type definition
is derived from string, the Component Details displays the enumeration, length,
minLength, maxLength, pattern, and whiteSpace facets. The Component Details only
displays constraining facet values set in the simple type definition. It does not display
constraining facet values the simple type definition inherited from the simple types from
which it was derived.
You can view the constraining facet values set in the type definitions from which
a simple type was derived by clicking Base Constraints. Base constraints are the
constraining facet values set in all the type definitions from which a simple type is
derived—from the primitive type to the immediate parent type. These constraint values
represent the cumulative facet values for the simple type.
When you edit the constraining facets for a simple type definition, you can only
make the constraining facets more restrictive. The applied constraining facets cannot
become less restrictive. For example, if the length value is applied to the simple type, the
maxLength or minLength values cannot be set because the maxLength and minLength facets
are less restrictive than length.
Tip: You can create a custom simple type to apply to a field as a content type
constraint. For information about creating a custom simple type, see
"Customizing a String Content Type" on page 651. For information about
applying constraints to fields, see "Applying Constraints to a Variable" on
page 649.
3. In Component Details, specify the constraining facets that you want to apply to the
simple type definition.
4. Click File > Save.
A JMS trigger subscribes to destinations (queues or topics) on a JMS provider and then
specifies how Integration Server processes messages the JMS trigger receives from those
destinations. Integration Server and Designer support two types of JMS triggers:
Standard JMS triggers use routing rules to specify which services can process messages
received by the trigger. The trigger service in the routing rule receives the entire JMS
message as an IData.
SOAP-JMS triggers are used to receive JMS messages that contain SOAP messages.
When a SOAP-JMS trigger receives a message, Integration Server extracts the SOAP
message from the JMS message and passes the SOAP message to the internal web
services stack. The web services stack processes the message according to the web
service descriptor specified in the SOAP-JMS request.
Note: Information about using Integration Server for JMS is located in webMethods
Integration Server Administrator’s Guide, webMethods Service Development Help,
and Using webMethods Integration Server to Build a Client for JMS.
webMethods Integration Server Administrator’s Guide contains information
about how to configure Integration Server to work with a JMS provider,
how to create a WS endpoint trigger, and how to manage JMS triggers at
run time.
webMethods Service Development Help includes this Working with JMS
Triggers topic, which provides procedures for using Designer to create
JMS triggers and set JMS trigger properties.
Using webMethods Integration Server to Build a Client for JMS contains
information such as how to build services that send and receive JMS
messages, how Integration Server works with cluster policies when
sending JMS messages, and detailed information regarding how
Integration Server performs exactly-once processing. For completeness,
Using webMethods Integration Server to Build a Client for JMS also includes
the Working with JMS Triggers topic that appears in webMethods Service
Development Help.
A SOAP-JMS trigger receives JMS messages from a destination (queue or topic) on the
JMS provider. Note that a SOAP-JMS trigger can specify a message selector which
limits the messages the SOAP-JMS trigger receives from that destination. Integration
Server extracts the SOAP message and passes it to the internal web services stack for
processing. Integration Server also retrieves JMS message properties that it passes
to the web services stack, including targetService, soapAction, contentType, and
JMSMessageID. These properties specify the web service descriptor and operation
for which the SOAP request is intended. The web services stack then processes the
SOAP message according to the web service descriptor (for example, executing request
handlers) and invokes the web service operation specified in the SOAP request message.
A SOAP-JMS trigger is associated with one or more provider web service descriptors via
a provider web service endpoint alias. The provider web service endpoint alias specifies
the SOAP-JMS trigger that receives messages from destinations on the JMS provider.
The provider web service endpoint alias is assigned to a JMS binder in a provider web
service descriptor. In this way, SOAP-JMS triggers act as listeners for provider web
service descriptors.
Note: Even though a SOAP-JMS trigger is associated with one or more provider web
service descriptors, the SOAP-JMS trigger can pass any SOAP-JMS message to
the web services stack for processing.
The properties assigned to the SOAP-JMS trigger determine how Integration Server
acknowledges the message, provides exactly-once processing, or handles transient or
fatal errors.
While SOAP-JMS triggers and standard JMS triggers share many properties and
characteristics, some properties available to standard JMS triggers are not available to
SOAP-JMS triggers, specifically:
SOAP-JMS triggers can subscribe to one destination only. Consequently, SOAP-JMS
triggers do not have joins. Designer does not display the Join expires and Expire after
properties for a SOAP-JMS trigger.
SOAP-JMS triggers use web services to process the payload of the JMS message.
Designer does not display the Message Routing table for SOAP-JMS triggers.
SOAP-JMS triggers cannot be used to perform ordered service execution. Standard
JMS triggers use multiple routing rules and local filters to perform ordered service
execution. Because SOAP-JMS triggers do not use routing rules, SOAP-JMS triggers
cannot be used to perform ordered service execution.
A SOAP-JMS trigger, specifically a connection for a SOAP-JMS trigger, can process
only one message at a time. Batch processing is not available for SOAP-JMS triggers.
Designer does not display the Max batch processing property for SOAP-JMS triggers.
A transacted SOAP-JMS trigger (one that executes as part of a transaction) has
additional requirements and limitations when used with web service descriptors. For
more information, see the Web Services Developer’s Guide.
If you use a JNDI provider to store JMS administered objects, the Connection
Factories that you want the JMS trigger to use to consume messages must already
exist.
If you use a JNDI provider to store JMS administered objects and the JMS provider
is not webMethods Broker, the destinations (queues and topics) from which this JMS
trigger will receive messages must already exist.
If the JMS provider is webMethods Broker, Software AG Universal Messaging, or
webMethods Nirvana, the destinations (queues and topics) from which the JMS
trigger receives messages do not need to exist before you create the JMS trigger.
Instead, you can create destinations using the JMS trigger editor. You can also create,
modify, and delete durable subscribers via the JMS trigger. For more information,
see "Managing Destinations and Durable Subscribers on the JMS Provider through
Designer " on page 684.
6. In the Select a JMS connection alias for triggerName dialog box, select the JMS
connection alias that you want this JMS trigger to use to receive messages from the
JMS provider. Click OK.
Designer sets the Transaction type property to match the transaction type specified for
the JMS connection alias.
If a JMS connection alias has not yet been configured on Integration Server, Designer
displays a message stating the JMS subsystem has not been configured. For
information about creating a JMS connection alias, see webMethods Integration Server
Administrator’s Guide.
7. In the JMS trigger type list, select one of the following:
Select                  To...
Standard JMS trigger    Create a JMS trigger that uses routing rules to determine
                        which services process the messages the trigger receives.
SOAP-JMS trigger        Create a JMS trigger that receives JMS messages that
                        contain SOAP messages.
8. Under JMS destinations and message selectors, specify the destinations from which
the JMS trigger will receive messages. For more information, see "Adding JMS
Destinations and Message Selectors to a JMS Trigger" on page 678.
Note: For SOAP-JMS triggers, you can specify one destination only.
9. If you selected multiple destinations, select the join type. The join type determines
whether Integration Server needs to receive messages from all, any, or only one of
the destinations to execute the trigger service.
All (AND) Integration Server to invoke the trigger service when the
trigger receives a message from every destination within
the join time-out period. The messages must have the same
activation.
Only one (XOR) Integration Server to invoke the trigger service when it
receives a message from any of the specified destinations.
For the duration of the join time-out period, the Integration
Server discards any messages with the same activation that
the trigger receives from the specified destinations.
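The join semantics in the table above can be sketched as follows. This is an illustrative model only, not Integration Server code; the destination names used in the test are hypothetical, and the Any (OR) case reflects the behavior described later in this topic (no join time-out needed).

```python
# Illustrative sketch of JMS trigger join evaluation. "received" holds the
# destinations that have contributed a message with the same activation
# within the join time-out period.

def evaluate_join(join_type, received, destinations):
    """Return True when the join condition is satisfied.

    join_type    -- "ALL", "ANY", or "ONLY_ONE"
    received     -- destinations heard from so far (same activation)
    destinations -- every destination the trigger subscribes to
    """
    if join_type == "ANY":
        # Fire for each message, regardless of which destination sent it.
        return len(received) >= 1
    if join_type == "ALL":
        # Fire only once every destination has contributed a message.
        return set(received) == set(destinations)
    if join_type == "ONLY_ONE":
        # Fire on the first message; later same-activation messages from
        # the other destinations are discarded during the time-out period.
        return len(received) == 1
    raise ValueError("unknown join type: " + join_type)
```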
10. If this is a standard JMS trigger, under Message routing, add routing rules. For more
information, see "Adding Routing Rules to a Standard JMS Trigger" on page 683.
11. In the Properties view, set properties for the JMS trigger.
12. Enter comments or notes, if any, in the Comments tab.
13. Click File > Save.
4. In the Destination Name column, specify the name of the destination. Then, in the
Destination Type column, select the type of destination:
Select...                     If...
Queue                         The destination is a queue.
Topic                         The destination is a topic to which you want a
                              non-durable subscription.
Topic (Durable Subscriber)    The destination is a topic to which you want a
                              durable subscription.
5. In the JMS Message Selector column, open the Enter JMS Message Selector
dialog box, enter the expression that you want to use to receive a subset of messages
from this destination, and click OK.
For more information about creating a JMS message selector, see "Creating a
Message Selector" on page 683.
6. If you specified the destination type as Topic (Durable Subscriber), in the Durable
Subscriber Name column, do one of the following:
Enter a name for the durable subscriber.
If the JMS connection alias creates a connection on webMethods Broker,
Universal Messaging, or Nirvana, select from a list of existing durable
subscribers for the topic. In the Durable Subscriber List dialog box, select the
durable subscriber and click OK.
If the durable subscriber that you want this JMS trigger to use does not exist, you
can create it by entering the name in the Durable Subscriber Name column. The
name must be unique for the connection where the connection name is the client
ID of the JMS connection alias. webMethods Broker, Universal Messaging, or
Nirvana will create the durable subscriber name using the client ID of the JMS
connection alias and the specified durable subscriber name.
7. If you want the JMS trigger to ignore messages sent using the same JMS connection
alias as the JMS trigger, select the check box in the Ignore Locally Published column.
This property applies only when the Destination Type is Topic or Topic (Durable
Subscriber).
Note: If the JMS connection alias specified for this trigger has the Create New
Connection per Trigger option enabled, then Ignore Locally Published will
not work. For the JMS trigger to ignore locally published messages, the
publisher and subscriber must share the same connection. When the JMS
connection alias uses multiple connections per trigger, the publisher and
subscriber will not share the same connection.
8. Repeat this procedure for each destination from which you want the JMS trigger to
receive messages.
9. Click File > Save.
Notes:
If you specify a new durable subscriber name and the JMS connection alias that
the JMS trigger uses to retrieve messages is configured to manage destinations,
Integration Server creates a durable subscriber for the topic when the JMS trigger is
first enabled.
If you specify a destination type of Topic (Durable Subscriber) but do not specify a
durable subscriber name, Designer changes the destination type to Topic when you
save the JMS trigger.
Note: Prior to version 9.5 SP1, Software AG Universal Messaging was named
webMethods Nirvana.
The JMS connection alias used by the JMS trigger must be configured to manage
destinations.
The JMS connection alias must be enabled when you work with the JMS trigger.
If the JMS connection alias creates a connection on a webMethods Broker in a
webMethods Broker cluster, you will not be able to create a destination at the
webMethods Broker.
Select... To...
Durable Subscriber Name A name for the durable subscriber. The name must be
unique for the connection, where the connection name
is the client ID of the JMS connection alias. The JMS
provider (webMethods Broker, Universal Messaging,
or Nirvana) will create the durable subscriber name
using the client ID of the JMS connection alias and the
specified durable subscriber name.
This field only applies if the destination is Topic
(Durable Subscriber).
holds the messages in guaranteed storage. If a durable subscription already exists for the
specified durable subscriber on the JMS provider, this service resumes the subscription.
A non-durable subscription allows subscribers to receive messages on their chosen
topic only if the messages are published while the subscriber is active. A non-durable
subscription lasts the lifetime of its message consumer. Note that non-durable
subscribers cannot receive messages in a load-balanced fashion.
Note: If you want to filter on the contents of the JMS message body, write a local
filter. Integration Server evaluates a local filter after the JMS trigger receives
the message from the JMS provider. Only standard JMS triggers can use local
filters.
Note that even though the properties field is a child of the JMSMessage document, the
JMSMessage document does not need to appear in the filter expression.
The following filter matches those messages where the data document within the
JMSMessage /body document contains a field named myField whose value is “A”:
%body/data/myField% == "A"
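The way such a local filter resolves a path against the parsed JMSMessage document can be sketched as below. This is an assumption-laden illustration, not Integration Server's filter engine; the document is modeled as nested dictionaries purely for demonstration.

```python
# Hedged sketch: evaluating a local filter such as
#   %body/data/myField% == "A"
# against a JMSMessage document modeled as nested dictionaries.

def matches_filter(jms_message, path, expected):
    """Walk path segments (relative to the JMSMessage document) and
    compare the leaf value to the expected value."""
    node = jms_message
    for key in path.split("/"):
        node = node.get(key, {}) if isinstance(node, dict) else {}
    return node == expected

# The properties field is a sibling of body, but, as noted in the text,
# the JMSMessage document itself does not appear in the filter path.
message = {"body": {"data": {"myField": "A"}}, "properties": {}}
matches_filter(message, "body/data/myField", "A")  # matches
```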
Note: When receiving a batch of messages, Integration Server evaluates the local
filter against the first message in the batch only. Integration Server does not
apply the filter to subsequent messages in the batch. For more information
about batch processing, see "About Batch Processing for Standard JMS
Triggers" on page 694.
Select a durable subscriber that you want the JMS trigger to use from a list of existing
durable subscribers for a specified topic.
Change the Shared State or Order By mode for a queue or durable subscriber by
changing the message processing mode of the JMS trigger. You can do this only
when webMethods Broker is the JMS provider.
Designer uses the JMS connection alias specified by the JMS trigger to make the changes
on the JMS provider. To manage destinations on the JMS provider, the JMS connection
alias that the JMS trigger uses must be
Configured to manage destinations
Enabled when you create and edit the JMS trigger.
To manage destinations on webMethods Broker, Integration Server must be version
8.0 SP1 or higher.
To manage destinations on Universal Messaging, Integration Server must be version
9.0 SP1 or higher.
Note: Prior to version 9.5 SP1, Software AG Universal Messaging was named
webMethods Nirvana.
For a complete list of the requirements for using Designer to manage destinations
and durable subscribers on the JMS provider, see webMethods Integration Server
Administrator’s Guide.
you confirm the change, Integration Server removes the durable subscriber from
webMethods Broker. If you do not confirm the change, the durable subscriber will
remain on webMethods Broker. You will need to use the webMethods Broker interface
in My webMethods to remove the durable subscriber.
Note: If another client, such as another JMS trigger, currently connects to the queue
or durable subscriber that you want to modify or remove, then Integration
Server cannot update or remove the queue or durable subscriber. If the
JMS provider is webMethods Broker, updates must be made through My
webMethods. If the JMS provider is Universal Messaging, updates must be
made through Universal Messaging Enterprise Manager. If the JMS provider
is Nirvana, updates must be made through Nirvana Enterprise Manager.
For more information about managing destinations and durable subscriptions on the
JMS provider, see "Managing Destinations and Durable Subscribers on the JMS Provider
through Designer " on page 684.
Important: Messages must be sent to the JMS provider in the same order in which you want
the messages to be processed.
Note: If you disable a SOAP-JMS trigger that acts as a listener for one or more
provider web service descriptors, Integration Server will not retrieve any
messages for those web service descriptors.
Select... To...
Enabled The JMS trigger is available. A JMS trigger must be enabled for it
to receive and process messages.
An enabled trigger can have a status of “Not Running”, which
means that it does not receive or process messages. Reasons
that an enabled JMS trigger might not be running include: a disabled
JMS connection alias, an exception thrown by the trigger, and
trigger failure at startup. JMS trigger status can be seen on the
Settings > Messaging > JMS Trigger Management page in Integration
Server Administrator.
Suspended The JMS trigger is running and connected to the JMS provider.
Integration Server has stopped message retrieval, but continues
processing any messages it has already retrieved. Integration
Server enables the JMS trigger automatically upon server restart
or when the package containing the JMS trigger reloads.
Note: The Acknowledgement mode property is not available for transacted JMS
triggers. That is, if the JMS connection alias is of type XA_TRANSACTION
or LOCAL_TRANSACTION, Designer does not display the Acknowledgement
mode property.
Note: You need to specify a join time-out only when the join type is All (AND) or Only
one (XOR). You do not need to specify a join time-out for an Any (OR) join.
Select... To...
False Specify that the join does not expire. Integration Server should
wait indefinitely for messages from the additional destinations
specified in the join condition. Set the Join expires property to
False only if you are confident that all of the messages will be
received eventually.
Important:
A join is persisted across server restarts.
Serial Processing
In serial processing, Integration Server processes messages received by a JMS trigger
one after the other in the order in which the messages were received from the JMS
provider. Integration Server uses a single thread for receiving and processing a message
for a serial JMS trigger. Integration Server evaluates the first message it receives,
determines which routing rule the message satisfies, and executes the service specified
in the routing rule. Integration Server waits for the service to finish executing before
processing the next message received from the JMS provider.
If you want to process messages in the same order in which JMS clients sent the
messages to the JMS provider, you will need to configure the JMS provider to ensure
that messages are received by the JMS trigger in the same order in which the messages
are published.
For information about using serial JMS triggers in a cluster to process messages from
a single destination in publishing order, see the Using webMethods Integration Server to
Build a Client for JMS.
Tip: If your trigger contains multiple routing rules to handle a group of messages
that must be processed in a specific order, use serial processing.
Concurrent Processing
In concurrent processing, Integration Server processes messages received from the
JMS provider in parallel. That is, Integration Server processes as many messages for
the JMS trigger as it can at the same time, using a separate server thread to process
each message. Integration Server does not wait for the service specified in the routing
rule to finish executing before it begins processing the next message. You can specify
the maximum number of messages Integration Server can process concurrently. This
equates to specifying the maximum number of server threads that can process messages
for the JMS trigger at one time.
Concurrent processing provides faster performance than serial processing. Integration
Server processes the received messages more quickly because it can process more than
one message for the trigger at a time. However, the more messages Integration Server
processes concurrently, the more server threads it dispatches, and the more memory the
message processing consumes.
Additionally, for JMS triggers with concurrent processing, Integration Server does not
guarantee that messages are processed in the order in which they are received.
A concurrent trigger can connect to the JMS provider through multiple connections,
which can increase trigger throughput. For more information about multiple
connections, refer to "Using Multiple Connections to Retrieve Messages for a Concurrent
JMS Trigger" on page 696.
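The contrast between serial and concurrent processing can be sketched as follows. This is an illustrative model, not Integration Server internals: serial processing uses a single worker so messages complete in receipt order, while concurrent processing uses up to the configured number of execution threads, so completion order is not guaranteed.

```python
# Hedged sketch of serial vs. concurrent JMS trigger processing.
# Message names are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def run_trigger(messages, max_execution_threads):
    processed = []

    def trigger_service(msg):
        processed.append(msg)

    if max_execution_threads == 1:
        # Serial processing: wait for each service invocation to finish
        # before processing the next received message.
        for msg in messages:
            trigger_service(msg)
    else:
        # Concurrent processing: a separate thread per message, up to
        # the maximum; ordering is no longer guaranteed.
        with ThreadPoolExecutor(max_workers=max_execution_threads) as pool:
            list(pool.map(trigger_service, messages))
    return processed

run_trigger(["m1", "m2", "m3"], max_execution_threads=1)  # ["m1", "m2", "m3"]
```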
of handling a high volume of small messages for the purposes of persisting them or
delivering them to another back-end resource. For example, you might want to take
a batch of messages, create a packet of SAP IDocs, and send the packet to SAP with a
single call. Alternatively, you might want to insert multiple messages into a database at
one time using only one insert. The trigger service processes the messages as a unit as
opposed to in a series.
The Max batch messages property indicates the maximum number of messages that the
trigger service can receive at one time. For example, if the Max batch messages property
is set to 5, Integration Server passes the trigger service up to 5 messages received by the
JMS trigger to process during a single execution.
Integration Server uses one consumer to receive and process a batch of messages. During
pre-processing, Integration Server checks the maximum delivery count for each message
and, if exactly-once processing is configured, determines whether or not the message is
a duplicate. Integration Server then bundles the messages into a single IData and passes
it to the trigger service. If the message has exceeded the maximum delivery count or is
a duplicate message, Integration Server does not include it in the message batch sent to
the trigger service.
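The pre-processing described above can be sketched as follows. This is an illustrative model only; the field names are hypothetical and do not reflect the real IData layout.

```python
# Hedged sketch of batch pre-processing: collect up to Max batch messages,
# drop messages that exceeded the maximum delivery count or are duplicates
# (exactly-once check), and pass the survivors to the trigger service as
# one unit.

def build_batch(messages, max_batch, max_delivery_count, seen_ids):
    batch = []
    for msg in messages[:max_batch]:
        if msg["deliveryCount"] > max_delivery_count:
            continue                  # exceeded the maximum delivery count
        if msg["id"] in seen_ids:
            continue                  # duplicate message
        seen_ids.add(msg["id"])
        batch.append(msg)
    return batch                      # handed to the trigger service as a unit
```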
Integration Server acknowledges all the messages received in a batch from the JMS
provider at one time. This includes messages that failed pre-processing. As described by
the Java Message Service standard, when a client acknowledges one message, the client
acknowledges all of the messages received by the session. Because Integration Server
uses a consumer that includes a javax.jms.MessageConsumer and a javax.jms.Session,
when Integration Server acknowledges one message in the batch, it effectively
acknowledges all the messages received in the batch.
If a batch of messages is not acknowledged or is recovered back to the JMS
provider, the JMS provider can redeliver all of the messages in the batch to the JMS
trigger. However, when using webMethods Broker, Integration Server can acknowledge
individual messages that fail pre-processing.
Consult the documentation for your JMS provider to determine whether or not the
JMS provider supports the reuse of transacted JMS sessions. Note that webMethods
Broker version 8.2 and higher, Software AG Universal Messaging version 9.5 SP1
and higher, and webMethods Nirvana version 7 and higher support the reuse of
transacted JMS sessions.
A JMS trigger that contains an All (AND) or Only one (XOR) join cannot use batch
processing.
SOAP-JMS triggers cannot process messages in batches.
Note: This prefetch cache can be used with JMS triggers that receive messages from
webMethods Broker only.
The use of the prefetch cache for a JMS trigger and the number of messages Integration
Server might retrieve with each request are determined by the Max prefetch size property
for the JMS trigger and the value of the watt.server.jms.trigger.maxPrefetchSize
parameter.
When the Max prefetch size property is greater than 0, Integration Server uses the
prefetch cache with the JMS trigger. The Max prefetch size property value specifies the
number of messages that Integration Server might retrieve and cache for the trigger.
The default is 10.
When the Max prefetch size property is set to -1, Integration Server uses the prefetch
cache with the JMS trigger. The watt.server.jms.trigger.maxPrefetchSize parameter
value determines how many messages Integration Server might retrieve and cache
for the JMS trigger.
When the Max prefetch size property is set to 0, Integration Server does not use the
prefetch cache with the JMS trigger.
When the prefetch cache is in use and the number of messages retrieved by Integration
Server is greater than one, the same server thread might process all of the messages
retrieved by the prefetch request. This is true even for concurrent JMS triggers. The first
thread for the concurrent JMS trigger processes the first set of prefetched messages. The
second thread for the concurrent JMS trigger processes the second set of prefetched
messages.
For example, suppose that the number of available messages is 22, Max execution threads
is 4, and Max prefetch size is 10. In the initial request for messages, the first server thread
may retrieve 10 messages. The same server thread will process these first 10 messages.
The second server thread may retrieve 10 messages, all of which will be processed by the
second server thread. The third server thread may retrieve the remaining 2 messages,
both of which will be processed by the third server thread. While the concurrent JMS
trigger can use up to 4 server threads, Integration Server might use only 3 server threads
to retrieve and process messages due to the way in which a JMS trigger processes
prefetched messages. A concurrent JMS trigger will use all of the configured execution
threads to process messages only when the number of messages on the webMethods
Broker is greater than the number of messages that can be prefetched.
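The arithmetic of the example above can be reproduced as a short sketch. This is purely illustrative: each server thread takes a whole prefetch batch, so 22 available messages with a Max prefetch size of 10 produce three batches, leaving the fourth execution thread unused.

```python
# Illustrative model of how prefetched messages map onto server threads:
# each thread retrieves up to Max prefetch size messages and processes
# that entire batch itself.

def prefetch_batches(available, max_prefetch_size):
    batches = []
    while available > 0:
        take = min(available, max_prefetch_size)
        batches.append(take)          # one batch == one server thread
        available -= take
    return batches

prefetch_batches(22, 10)  # [10, 10, 2] -> three threads used, not four
```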
Note: When you are working with a cluster of Integration Servers, the prefetch
behavior might appear at first to be misleading. For example, suppose
that you have a cluster of two Integration Servers. Each Integration Server
contains the same JMS trigger. Twenty messages are sent to a destination from
which the JMS trigger receives messages. It might be expected that the JMS trigger
on Integration Server 1 will receive the first message, the JMS trigger on
Integration Server 2 will receive the second message, and so forth. However,
what may happen is that the JMS trigger on Integration Server 1 will receive
the first 10 messages and the JMS trigger on Integration Server 2 will receive
the second 10 messages.
If you use webMethods Broker as the JMS provider, changing the message
processing mode for a JMS trigger can create a mismatch with the corresponding
destination on the webMethods Broker. If you do not use Designer to make the
changes, you need to use the webMethods Broker interface of My webMethods to
update the destination.
A concurrent JMS trigger can use multiple connections to retrieve messages from the
JMS provider. For information about requirements for using multiple connections,
see "Using Multiple Connections to Retrieve Messages for a Concurrent JMS Trigger"
on page 696.
You can use the Max prefetch size property with webMethods Broker only.
3. If you want this trigger to perform batch processing, next to Max batch messages,
specify the maximum number of messages that the trigger service can receive at one
time. If you do not want the trigger to perform batch processing, leave this property
set to 1. The default is 1.
4. If you want this trigger to use multiple connections to receive messages from the JMS
provider, next to Connection count, specify the number of connections you want the
JMS trigger to make to the JMS provider. The default is 1.
5. If you want Integration Server to use the prefetch cache with this JMS trigger, in the
Properties view, under webMethods Broker do one of the following for Max prefetch
size:
Specify the number of messages you want Integration Server to retrieve and
cache for this JMS trigger. The default is 10 messages.
Note: A JMS trigger is connected to the webMethods Broker when the specified
JMS connection alias is enabled and connected to the webMethods Broker.
Important: If you disable or suspend a SOAP-JMS trigger that acts as a listener for one
or more provider web service descriptors, Integration Server will not retrieve
any messages for those web service descriptors until the trigger is enabled.
You can handle the exception that causes the fatal error by configuring Integration
Server to generate JMS retrieval failure events for fatal errors and by creating an
event handler that subscribes to JMS retrieval failure events. Integration Server passes
the event handler the contents of the JMS message as well as information about the
exception.
Integration Server handles fatal errors for transacted JMS differently than for non-
transacted JMS triggers. For information about fatal error handling for transacted JMS
triggers, see "Fatal Error Handling for Transacted JMS Triggers" on page 716.
If the JMS provider is not available, and the settings for the pub.jms* service indicate
that Integration Server should write messages to the client side queue, Integration
Server does not throw an ISRuntimeException.
A transient error occurs on the back-end resource for an adapter service. Adapter
services built on Integration Server 6.0 or later, and based on the ART framework,
detect and propagate exceptions that signal a retry automatically if a transient error
is detected on their back-end resource.
Note: A web service connector that sends a JMS message can throw an
ISRuntimeException, such as when the JMS provider is not available.
However, Integration Server automatically places the ISRuntimeException
in the fault document returned by the web service connector. If you want
the parent flow service to catch the transient error and re-throw it as an
ISRuntimeException, you must code the parent flow service to check the fault
document for an ISRuntimeException and then throw an ISRuntimeException
explicitly.
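The pattern described in the note above can be sketched as follows. The fault-document layout used here is an assumption for illustration; the exception names mirror the text, but this is not Integration Server code.

```python
# Hedged sketch: a parent flow service checks the fault document returned
# by a web service connector and re-throws a transient error explicitly so
# that retry handling can see it.

class ISRuntimeException(Exception):
    """Transient error, as described in the text."""

def rethrow_transient_fault(connector_output):
    fault = connector_output.get("fault", {})
    if fault.get("exceptionType") == "ISRuntimeException":
        # Surface the transient error to the parent flow service.
        raise ISRuntimeException(fault.get("message", "transient error"))
    return connector_output
```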
You can also configure Integration Server and/or a JMS trigger to handle transient errors
that occur during trigger preprocessing. The trigger preprocessing phase encompasses
the time from when a trigger first receives a message from its local queue on Integration
Server to the time the trigger service executes.
For more information about transient error handling for trigger preprocessing, see
"Transient Error Handling During Trigger Preprocessing" on page 783.
Note: Integration Server does not apply the SOAP-JMS trigger transient error
handling behavior to service handlers executed as part of processing web
services. Integration Server treats all errors thrown by service handlers as
fatal errors.
The maximum number of retry attempts Integration Server should make for each
trigger service.
The time interval between retry attempts.
How to handle a retry failure. That is, you can specify what action Integration
Server takes if all the retry attempts are made and the trigger service or web service
operation still fails.
Note: Integration Server does not retry a trigger service that fails because a
ServiceException occurred. A ServiceException indicates that there is
something functionally wrong with the service. A service can throw a
ServiceException using the EXIT step.
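The retry behavior described above can be sketched in Java as follows. This is an illustration only: TransientException and FatalException are hypothetical stand-ins for ISRuntimeException (transient, retryable) and ServiceException (fatal, never retried), not the Integration Server API.

```java
import java.util.function.Supplier;

// Illustrative sketch of the retry semantics described above. The exception
// classes stand in for ISRuntimeException (transient) and ServiceException
// (fatal); they are not part of the Integration Server API.
public class RetrySketch {
    static class TransientException extends RuntimeException {}  // like ISRuntimeException
    static class FatalException extends RuntimeException {}      // like ServiceException

    // Makes the initial attempt plus up to maxRetryAttempts retries, waiting
    // retryIntervalMillis between attempts. A FatalException propagates
    // immediately; exhausting all retries rethrows the last transient error,
    // which Integration Server would then treat as a fatal error.
    public static <T> T invokeWithRetry(Supplier<T> task, int maxRetryAttempts,
                                        long retryIntervalMillis) {
        TransientException last = null;
        for (int attempt = 0; attempt <= maxRetryAttempts; attempt++) {
            try {
                return task.get();
            } catch (TransientException e) {
                last = e;                        // transient: eligible for retry
                if (attempt < maxRetryAttempts) {
                    try {
                        Thread.sleep(retryIntervalMillis);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
            // Any other exception (e.g., FatalException) propagates immediately.
        }
        throw last;                              // all retry attempts exhausted
    }
}
```

Note how a fatal error is not caught at all, matching the rule that a ServiceException is never retried.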
Step Description
1 Integration Server makes the final retry attempt and the trigger service or
web service operation fails because of an ISRuntimeException.
2 Integration Server treats the last trigger service or web service operation
failure as a ServiceException.
In summary, the default retry failure behavior (Throw exception) rejects the message and
allows the trigger to continue with message processing when retry failure occurs for a
trigger service.
Step Description
1 Integration Server makes the final retry attempt and the trigger service or
web service operation fails because of an ISRuntimeException.
Server Administrator or by invoking the pub.trigger:enableJMSTriggers
service.
3 Integration Server recovers the message back to the JMS provider. This
indicates that the required resources are not ready to process the message
and makes the message available for processing at a later time. For serial
triggers, it also ensures that the message maintains its position at the top
of trigger queue.
Tip: You can change the frequency with which the resource
monitoring service executes by modifying the value of the
watt.server.jms.trigger.monitoringInterval property.
In summary, the Suspend and retry later option provides a way to resubmit the message
programmatically. It also prevents the trigger from retrieving and processing other
messages until the cause of the transient error condition has been remedied.
Note: If you do not configure service retry for a trigger, set the Max retry attempts
property to 0. Because managing service retries creates extra overhead, setting
this property to 0 can improve the performance of services invoked by the
trigger.
Select... To...
Suspend and retry later Specify that Integration Server should recover the
message back to the JMS provider and suspend the
trigger when the last allowed retry attempt ends
because of an ISRuntimeException.
become available, you must provide a resource
monitoring service that Integration Server can
execute to determine when to resume the trigger.
5. If you selected Suspend and retry later, then in the Resource monitoring service
property specify the service that Integration Server should execute to determine the
availability of resources associated with the trigger service. Multiple triggers can use
the same resource monitoring service. For information about building a resource
monitoring service, see Using webMethods Integration Server to Build a Client for JMS.
6. Click File > Save.
Notes:
Standard JMS triggers and services can both be configured to retry. When a trigger
invokes a service (that is, the service functions as a trigger service), Integration
Server uses the trigger retry properties instead of the service retry properties.
SOAP-JMS triggers and services used as operations in provider web service
descriptors can both be configured to retry. When a web service operation processes
a message received by a SOAP-JMS trigger, Integration Server uses the trigger retry
properties instead of the service (operation) retry properties.
Integration Server does not retry service handlers invoked by a SOAP-JMS trigger.
When Integration Server retries a trigger service and the trigger service is configured
to generate audit data on error, Integration Server adds an entry to the audit log for
each failed retry attempt. Each of these entries will have a status of “Retried” and an
error message of “Null”. However, if Integration Server makes the maximum retry
attempts and the trigger service still fails, the final audit log entry for the service
will have a status of “Failed” and will display the actual error message. Integration
Server makes the audit log entry regardless of which retry failure option the trigger
uses.
Integration Server generates the following journal log message between retry
attempts:
[ISS.0014.0031D] Service serviceName failed with ISRuntimeException. Retry x of y
will begin in retryInterval milliseconds.
You can invoke the pub.flow:getRetryCount service within a trigger service to determine
the current number of retry attempts made by Integration Server and the maximum
number of retry attempts allowed for the trigger service. For more information about
the pub.flow:getRetryCount service, see the webMethods Integration Server Built-In Services
Reference.
Document history database maintains a record of all persistent message IDs processed
by JMS triggers that have an acknowledgment mode of CLIENT_ACKNOWLEDGE and
for which exactly-once processing is configured.
Document resolver service is a service created by a user to determine the message
status. The document resolver service can be used instead of or in addition to the
document history database.
The steps that Integration Server takes to determine a message’s status depend on the
exactly-once properties configured for the JMS trigger.
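The duplicate-detection idea behind exactly-once processing can be sketched as follows. This is an illustration only: the in-memory set stands in for the document history database (or a document resolver service) that Integration Server actually uses to track persistent message IDs.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of duplicate detection for exactly-once processing: a message is
// processed only if its persistent message ID has not been seen before.
// Integration Server uses a document history database for this record;
// a HashSet stands in for it here.
public class DuplicateDetector {
    private final Set<String> processedIds = new HashSet<>();

    // Returns true if the message is new and records its ID as processed.
    // Returns false for a duplicate, which the trigger would acknowledge
    // and discard without invoking the trigger service.
    public boolean markIfNew(String messageId) {
        return processedIds.add(messageId);
    }
}
```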
Note: For detailed information about exactly-once processing for messages received
by JMS triggers, see Using webMethods Integration Server to Build a Client for
JMS.
If you intend to use a document history database as part of duplicate detection, you
must first install the document history database component and associate it with a
JDBC connection pool. For instructions, see Installing Software AG Products.
Note: For the increased logging to appear in the server log, you must set the logging
level for server facility 0134 JMS Subsystem to Trace.
Where triggerName is the fully qualified name of the trigger in the format
folder.subfolder:triggerName.
5. Click Save Changes.
Stage 3 Specify the destination (queues or topics) on the JMS provider from
which you want to receive messages. You also specify any message
selectors that you want the JMS provider to use to filter messages for the
JMS trigger.
If this is a SOAP-JMS trigger, you can specify one destination only.
Stage 4 For a standard JMS trigger, create routing rules and specify the services
that Integration Server invokes when the JMS trigger receives messages.
SOAP-JMS triggers do not use routing rules.
Fatal error handling > Suspend on error Specifies whether you want Integration Server
to suspend the trigger when a trigger service
ends with an error. Select True or False.
Stage 6 Test and debug the JMS trigger. For more information, see "Debugging a
JMS Trigger" on page 710.
Step Description
2 Integration Server rolls back the entire transaction and Integration Server
recovers the message back to the JMS provider. The JMS provider marks the
message as redelivered and increments the value of the JMSXDeliveryCount
property in the JMS message.
will not retrieve any messages for those web service descriptors
until the trigger is enabled.
5 The JMS trigger remains suspended until one of the following occurs:
You enable the trigger using the pub.trigger:enableJMSTriggers service.
You enable the trigger using Integration Server Administrator.
Integration Server restarts or the package containing the trigger reloads.
(When Integration Server suspends a trigger because of a fatal error,
Integration Server considers the change to be temporary. For more
information about temporary vs. permanent state changes for triggers, see
webMethods Integration Server Administrator’s Guide.)
You can handle the exception that causes the fatal error by configuring Integration
Server to generate JMS retrieval failure events for fatal errors and by creating an event
handler that subscribes to JMS retrieval failure events. Integration Server passes the
contents of the JMS message and exception information to the event handler.
The trigger service catches and wraps a transient error and then re-throws it as an
ISRuntimeException.
The web service operation that processes the message received by a SOAP-
JMS trigger catches and wraps a transient error and then re-throws it as an
ISRuntimeException.
Note: A web service connector that sends a JMS message can throw an
ISRuntimeException, such as when the JMS provider is not available.
However, Integration Server automatically places the ISRuntimeException
in the fault document returned by the web service connector. If you want
the parent flow service to catch the transient error and re-throw it as an
ISRuntimeException, you must code the parent flow service to check the fault
document for an ISRuntimeException and then throw an ISRuntimeException
explicitly.
You can specify one of the following transient error handling options for a transacted
JMS trigger:
Recover only. After a transaction is rolled back, Integration Server receives the
message from the JMS provider almost immediately. This is the default.
Suspend and recover. After a transaction is rolled back, Integration Server suspends the
JMS trigger and receives the message from the JMS provider at a later time.
You can also configure Integration Server and/or a JMS trigger to handle transient errors
that occur during trigger preprocessing. The trigger preprocessing phase encompasses
the time from when a trigger first receives a message from its local queue on Integration
Server to the time the trigger service executes.
For more information about transient error handling for trigger preprocessing, see
"Transient Error Handling During Trigger Preprocessing" on page 783.
Step Description
3 Integration Server receives the same message from the JMS provider and
processes the message.
Because Integration Server receives the message almost immediately after
transaction roll back, it is likely that the temporary condition that caused the
ISRuntimeException has not resolved and the trigger service will end with a
transient error again. Consequently, seing On transaction rollback to Recover
only could result in wasted processing.
Step Description
5 If the resource monitoring service indicates that the resources are available
(that is, the value of isAvailable is true), Integration Server enables the
trigger. Message processing and message retrieval resume for the JMS
trigger.
If the resource monitoring service indicates that the resources are not
available (that is, the value of isAvailable is false), Integration Server waits
a short time interval (by default, 60 seconds) and then re-executes the
resource monitoring service. Integration Server continues executing the
resource monitoring service periodically until the service indicates the
resources are available.
6 After Integration Server resumes the JMS trigger, Integration Server receives
the message from the JMS provider and processes the message.
Note: If the maximum delivery count has been met, the JMS provider will
not deliver the message to the JMS trigger. The maximum delivery
count determines the maximum number of times the JMS provider
can deliver the message to the JMS trigger. It is controlled by the
watt.server.jms.trigger.maxDeliveryCount property.
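The polling behavior described in the steps above can be sketched as a loop that re-runs the availability check until it succeeds. This is an illustration only, not the Integration Server implementation; the supplier stands in for the user-written resource monitoring service and its isAvailable output.

```java
import java.util.function.BooleanSupplier;

// Sketch of the resource-monitoring loop described above: the resource
// monitoring service is re-executed every pollIntervalMillis until it reports
// that the resources are available, at which point the trigger is resumed.
public class ResourceMonitor {
    // Returns the number of times the monitoring check ran before it
    // reported that the resources are available.
    public static int waitUntilAvailable(BooleanSupplier isAvailable,
                                         long pollIntervalMillis) {
        int checks = 1;
        while (!isAvailable.getAsBoolean()) {
            try {
                // Integration Server's default interval is 60 seconds,
                // controlled by watt.server.jms.trigger.monitoringInterval.
                Thread.sleep(pollIntervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            checks++;
        }
        return checks;
    }
}
```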
Select... To...
Recover only Specify that Integration Server recovers the message after a
transaction is rolled back due to a transient error.
This is the default.
Suspend and recover Specify that Integration Server does the following after a
transaction is rolled back due to a transient error:
Suspends the JMS trigger
Recovers the message after a resource monitoring service
indicates that the resources needed by the trigger service are
available.
3. If you selected Suspend and recover, in the Resource monitoring service property, specify
the service that Integration Server should execute to determine the availability of
resources associated with the trigger service or web service operation. Multiple
triggers can use the same resource monitoring service.
4. Click File > Save.
Note: Prior to Integration Server and Software AG Designer versions 9.5 SP1,
a webMethods messaging trigger was called a webMethods Broker/local
trigger.
Stage 2 Create one or more conditions for the webMethods messaging trigger.
During this stage, you create a trigger condition, which associates
a subscription to a publishable document type with a service that
processes instances of that document type. You can also create filters to
apply to incoming documents and select join types.
Note: Provider filters must be identical if multiple conditions in the same trigger
specify the same publishable document type.
If more than one condition in the webMethods messaging trigger specifies the
same publishable document type and the trigger receives messages from Universal
Messaging, the provider filters must be identical in each condition but the local
filters can be different. Specifically, the contents of the Provider Filter (UM) column
must be identical for each condition that subscribes to the publishable document
type. The contents of the Filter column can be different.
The webMethods messaging trigger contains no more than one join condition.
The webMethods messaging trigger subscribes to publishable document types that
use the same messaging connection alias. For the publishable document types to
which the trigger subscribes, the value of the Connection alias name property can be:
You can also use the pub.trigger:createTrigger service to create a webMethods messaging
trigger. For more information about this service, see the webMethods Integration Server
Built-In Services Reference.
Creating Conditions
A condition associates one or more publishable document types with a single service.
A webMethods messaging trigger subscribes to the publishable document type in a
subscription. The service, called a trigger service, processes instances of the document
type received by the trigger.
A condition can be a simple condition or a join condition. A simple condition
associates one publishable document type with a service. A join condition associates
more than one publishable document type with a service and specifies how the trigger
handles the documents as a unit.
A webMethods messaging trigger must have at least one condition.
Keep the following points in mind when you create a condition for a webMethods
messaging trigger:
The publishable document types and services that you want to use in a condition
must already exist.
A webMethods messaging trigger can subscribe to publishable document types only.
A webMethods messaging trigger cannot subscribe to ordinary IS document types.
An XSLT service cannot be used as a trigger service.
Conditions must meet additional requirements identified in " webMethods
Messaging Trigger Requirements" on page 725.
Trigger services must meet additional requirements identified in "Trigger Service
Requirements" on page 726.
If a webMethods messaging trigger subscribes to a publishable document type that is
not in the same package as the trigger, create a package dependency on the package
containing the publishable document type from the package containing the trigger.
This ensures that Integration Server loads the package containing the publishable
document type before loading the trigger.
If a webMethods messaging trigger uses a trigger service that is not in the same
package as the trigger, create a package dependency on the package containing the
trigger service from the package containing the trigger. This ensures that Integration
Server loads the package containing the service before loading the trigger.
3. Under Condition detail, in the Name field, type the name you want to assign to the
condition. Designer automatically assigns each condition a default name such as
Condition1 or Condition2. You can keep this name or change it to a more descriptive
one.
4. In the Service field, enter the fully qualified service name that you want to associate
with the publishable document types in the condition. You can type in the service
name, or click to navigate to and select the service.
5. Click under Condition detail to add a new document type subscription for this
webMethods messaging trigger .
6. In the Select dialog box, select the publishable document types to which you want
to subscribe. You can select more than one publishable document type by using the
CTRL or SHIFT keys.
Designer creates a row for each selected publishable document type. Designer enters
the name of the messaging connection alias used by each publishable document type
in the Connection Alias column.
7. In the Filter column next to each publishable document type, do the following:
If the publishable document type uses webMethods Broker as the messaging
provider, specify a filter that you want Integration Server and/or webMethods
Broker to apply to each instance of this publishable document type. For more
information, see "Creating Filters for Use with webMethods Broker " on page
735.
If the publishable document type uses Universal Messaging as the messaging
provider, specify the local filter that you want Integration Server to apply to each
instance of the publishable document type received by the trigger. For more
information, see "Creating Filters for Use with Universal Messaging " on page
732.
Create the filter in the Filter column using the conditional expression syntax
described in webMethods Service Development Help.
Filters are optional for a trigger condition. For more information about filters, see
"Using Filters with a Subscription" on page 731.
8. If the publishable document type uses Universal Messaging as the messaging
provider, in the Provider Filter (UM only) column, enter the filter that you want
Universal Messaging to apply to each instance of the publishable document type.
Universal Messaging enqueues the document for the trigger only if the filter criteria
is met. For information about the syntax for provider filters for Universal Messaging,
see the Universal Messaging documentation. For more information about using
filters in trigger conditions, see "Creating Filters for Use with Universal Messaging "
on page 732.
9. If you specified more than one publishable document type in the condition, select a
join type.
All (AND) Integration Server invokes the trigger service when the server
receives an instance of each specified publishable document
type within the join time-out period. The instance documents
must have the same activation ID. This is the default join type.
Any (OR) Integration Server invokes the trigger service when it receives
an instance of any one of the specified publishable document
types.
Only one (XOR) Integration Server invokes the trigger service when it receives
an instance of any of the specified document types. For
the duration of the join time-out period, Integration Server
discards any instances of the specified publishable document
types with the same activation ID.
10. Repeat this procedure for each condition that you want to add to the webMethods
messaging trigger .
11. Click File > Save.
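The All (AND) join type described in the procedure above can be sketched as a correlator that fires only when an instance of every expected document type arrives with the same activation ID. This is an illustrative model of the semantics only, not Integration Server's implementation; join time-outs and the other join types are omitted.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of an All (AND) join: the trigger service fires only once an
// instance of every expected publishable document type has arrived with
// the same activation ID.
public class AndJoin {
    private final Set<String> expectedTypes;
    private final Map<String, Set<String>> receivedByActivation = new HashMap<>();

    public AndJoin(Set<String> expectedTypes) {
        this.expectedTypes = expectedTypes;
    }

    // Records an arriving document; returns true when the join is satisfied
    // for that activation ID (i.e., the trigger service would be invoked).
    public boolean receive(String activationId, String documentType) {
        Set<String> seen = receivedByActivation
                .computeIfAbsent(activationId, k -> new HashSet<>());
        seen.add(documentType);
        if (seen.containsAll(expectedTypes)) {
            receivedByActivation.remove(activationId);  // reset for this ID
            return true;
        }
        return false;
    }
}
```

The document type names in the usage below are hypothetical examples.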
Notes:
Integration Server validates the webMethods messaging trigger before saving it. If
Integration Server determines that the webMethods messaging trigger is invalid,
Designer prompts you to save the webMethods messaging trigger in a disabled
state. For more information about valid webMethods messaging triggers, see "
webMethods Messaging Trigger Requirements" on page 725.
Integration Server establishes the subscription locally by creating a trigger queue for
the webMethods messaging trigger.
If the trigger subscribes to one or more publishable document types that use
webMethods Broker as the messaging provider, one of the following happens upon
saving the trigger.
If Integration Server is currently connected to the webMethods Broker,
Integration Server registers the trigger subscription with the webMethods Broker
by creating a client for the trigger on the webMethods Broker. Integration Server
also creates a subscription for each publishable document type specified in the
webMethods messaging trigger conditions and saves the subscriptions with the
webMethods messaging trigger client. webMethods Broker validates the filters in
the webMethods messaging trigger conditions when Integration Server creates
the subscriptions.
If Integration Server is not currently connected to a webMethods Broker,
the webMethods messaging trigger will only receive documents published
locally. When Integration Server reconnects to the webMethods Broker, or the
next time Integration Server restarts, Integration Server creates a client for
the trigger on the webMethods Broker and registers the trigger subscriptions.
Provider filter. A provider filter is saved on the messaging provider. The
messaging provider applies the filter to each incoming document. If the
document meets the filter criteria, the messaging provider enqueues the document
for the subscribing trigger.
Local filter. A local filter is saved on Integration Server. After a trigger receives a
document, Integration Server applies the filter to the document. If the document
meets the filter criteria, Integration Server executes the trigger.
How you create filters for a condition depends on the following:
The messaging provider used by the publishable document type.
If the messaging provider is Universal Messaging, the encoding type for the
publishable document type.
Note: If the trigger contains multiple conditions that subscribe to the same
publishable document type, Integration Server does verify that the
provider filters are identical upon save. If the supplied provider filters are
identical, Integration Server saves the trigger. If the provider filters are not
identical, Integration Server throws an exception and considers the trigger
to be invalid.
A local filter that Integration Server applies to the published document header or
document body after the trigger receives the document. Use the Filter column in the
Condition detail table to specify a local filter.
Create the local filter using the conditional expression syntax described in
webMethods Service Development Help.
When you save a trigger, Integration Server evaluates the local filter to make
sure it uses the proper syntax. If the syntax is correct, Integration Server saves
the webMethods messaging trigger in an enabled state. If the syntax is incorrect,
Integration Server saves the webMethods messaging trigger in a disabled state.
Use Universal Messaging Enterprise Manager to view and edit the configuration
properties for the realm to which Integration Server connects.
Note: When the encoding type is IData, it is optional to include _properties in the
provider filter. For example, if you want Universal Messaging to filter for
messages where the contents of the _properties/color field is equal to “blue”,
the provider filter would be: color='blue'. However, when the encoding type
is protocol buffers, you need to include _properties in the provider filter. For
example, _properties.color='blue'. If you want a provider filter that operates
on the contents of _properties to work regardless of the encoding type, always
include _properties in the filter expression.
stringField1 = 'a' and stringField2 = 'b'
The value of stringField1 is “a” and the value of stringField2 is “b”.
boolean1 = true and boolean2 = false
The value of boolean1 is true and the value of boolean2 is false, where boolean1
and boolean2 are Object fields with the Java wrapper type java.lang.Boolean
applied.
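The two example filters above can be expressed as predicates over a published document. This is an illustration of what the expressions evaluate to; the document is modeled here as a plain Map rather than Integration Server's IData structure.

```java
import java.util.Map;

// Illustrates what the example filter expressions above evaluate to, with a
// published document modeled as a Map of field names to values.
public class FilterExamples {
    // stringField1 = 'a' and stringField2 = 'b'
    static boolean stringFilter(Map<String, Object> doc) {
        return "a".equals(doc.get("stringField1"))
            && "b".equals(doc.get("stringField2"));
    }

    // boolean1 = true and boolean2 = false, where the fields are
    // Object fields with the java.lang.Boolean wrapper type applied.
    static boolean booleanFilter(Map<String, Object> doc) {
        return Boolean.TRUE.equals(doc.get("boolean1"))
            && Boolean.FALSE.equals(doc.get("boolean2"));
    }
}
```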
If the syntax is correct, Integration Server saves the webMethods messaging trigger in an
enabled state. If the syntax is incorrect, Integration Server saves the webMethods
messaging trigger in a disabled state.
webMethods Broker evaluates the filter syntax to determine if the filter syntax
is valid on the webMethods Broker. If webMethods Broker determines that the
syntax is valid for the webMethods Broker, it saves the filter with the document type
subscription. If the webMethods Broker determines that the filter syntax is not valid
on the webMethods Broker or if attempting to save the filter on the webMethods
Broker would cause an error, webMethods Broker saves the subscription without the
filter.
webMethods Broker saves as much of a filter as possible with the subscription. For
example, suppose that a filter consists of more than one expression, and only one of
the expressions contains the syntax that the webMethods Broker considers invalid.
webMethods Broker saves the expressions it considers valid with the subscription on
the webMethods Broker. (Integration Server saves all the expressions.)
When a filter is saved only on Integration Server and not on webMethods Broker,
the performance of Integration Server can be affected. When the webMethods Broker
applies the filter to incoming documents, it discards documents that do not meet filter
criteria. Integration Server only receives documents that meet the filter criteria. If the
subscription filter resides only on Integration Server, webMethods Broker automatically
places the document in the subscriber’s queue. webMethods Broker routes all the
documents to the subscriber, creating greater network traffic between the webMethods
Broker and the Integration Server and requiring more processing by the Integration
Server.
The table below identifies the HintNames that you can use with a document
subscription.
Hint Description
IncludeDeliver When set to true, the filter applies to documents that are
delivered to the client and documents that are delivered to
the subscription queue. By default, filters are only applied to
documents that are delivered to the subscription queue.
LocalOnly When set to true, the filter is applied only to documents that
originate from the webMethods Broker to which the Integration
Server is connected. Documents originating from a different
webMethods Broker are discarded.
Hint Description
Keep the following points in mind when you add hints to filters:
Hints must be added at the end of the filter string in the Filter field.
Hints must be in the following format:
{hint: HintName=Value}
For example, the following filter will match only those documents that originate
from the webMethods Broker to which the Integration Server is connected and the
value of city is equal to Fairfax.
%city% L_EQUALS "Fairfax" {hint:LocalOnly=true}
A filter can also contain a combination of subscription hints. For example, the
following filter will match only those documents that do not have a subscriber
and that originate from the webMethods Broker to which the Integration Server is
connected.
{hint:DeadLetterOnly=true} {hint:LocalOnly=true}
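The {hint:HintName=Value} suffixes shown above could be separated from the filter expression as follows. This parser is purely illustrative, not part of the webMethods Broker API, and assumes hints follow the format described in the text.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative parser that splits a Broker filter string into the filter
// expression and its trailing {hint:Name=Value} subscription hints.
public class FilterHints {
    private static final Pattern HINT =
            Pattern.compile("\\{hint:\\s*([A-Za-z]+)\\s*=\\s*([^}]+)\\}");

    // Collects every hint name/value pair found in the filter string.
    public static Map<String, String> parseHints(String filter) {
        Map<String, String> hints = new LinkedHashMap<>();
        Matcher m = HINT.matcher(filter);
        while (m.find()) {
            hints.put(m.group(1), m.group(2).trim());
        }
        return hints;
    }

    // The filter expression is everything before the first hint.
    public static String expression(String filter) {
        Matcher m = HINT.matcher(filter);
        return m.find() ? filter.substring(0, m.start()).trim() : filter.trim();
    }
}
```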
Note: If you are using Universal Messaging you can configure a dead events store.
For more information, see the Universal Messaging documentation.
Integration Server will never execute serviceA. Whenever Integration Server receives
documentA, the document satisfies ConditionAB, and Integration Server executes serviceAB.
You might want to use multiple conditions to control the service execution when a
service that processes a document depends on another service successfully executing.
For example, to process a purchase order, you might create one service that adds a new
customer record to a database, another that adds a customer order, and a third that bills
the customer. The service that adds a customer order can only execute successfully if the
new customer record has been added to the database. Likewise, the service that bills the
customer can only execute successfully if the order has been added. You can ensure that
the services execute in the necessary order by creating a webMethods messaging trigger
that contains one condition for each expected publishable document type. You might
create a webMethods messaging trigger with the following conditions:
If you create one webMethods messaging trigger for each of these conditions, you
could not guarantee that the Integration Server would invoke services in the required
order even if publishing occurred in that order. Specifying serial dispatching for the
webMethods messaging trigger ensures that a service will finish executing before the
next document is processed. For example, Integration Server could still be executing
addCustomer when it receives the documents customerOrder and customerBill. If you specified
concurrent dispatching instead of serial dispatching, the Integration Server might
execute the services addCustomerOrder and billCustomer before it finished executing
addCustomer. In that case, the addCustomerOrder and billCustomer services would fail.
Important: An ordered scenario assumes that documents are published in the correct
order and that you set up the webMethods messaging trigger to process
Note: You can also suspend document retrieval and document processing for a
webMethods messaging trigger. Unlike disabling a webMethods messaging
trigger, suspending retrieval and processing does not destroy the client queue.
The webMethods Broker continues to enqueue documents for suspended
webMethods messaging triggers. However, Integration Server does not
retrieve or process documents for suspended webMethods messaging
triggers. For more information about suspending webMethods messaging
triggers, see webMethods Integration Server Administrator’s Guide.
You cannot disable a webMethods messaging trigger during trigger service execution.
How the join time-out affects document processing by the webMethods messaging
trigger is different for each join type.
For an All (AND) join, the join time-out determines how long Integration Server
waits to receive an instance of each publishable document type in the condition.
For an Only one (XOR) join, the join time-out determines how long Integration
Server discards instances of publishable document types in the condition after it
receives an instance document of one of the publishable document types.
An Any (OR) join condition does not need a join time-out. Integration Server treats
an Any (OR) join condition like a webMethods messaging trigger with multiple
simple conditions that all use the same trigger service.
documents of the type specified in the join condition. Integration Server discards only
those documents with the same activation ID as the first document.
When the time-out period elapses, the next document in the webMethods messaging
trigger queue that satisfies the Only one (XOR) condition causes the trigger service to
execute and the time-out period to start again. Integration Server executes the service
even if the document has the same activation ID as an earlier document that satisfied
the join condition. Integration Server generates a journal log message when the time-out
period elapses for an Only one (XOR) condition.
Select... To...
True Indicate that Integration Server stops waiting for the other
documents in the join condition once the time-out period
elapses.
In the Expire after property, specify the length of the join time-
out period. The default time period is 1 day.
False Indicate that the join condition does not expire. Integration
Server waits indefinitely for the additional documents
specified in the join condition. Set the Join expires property to
False only if you are confident that all of the documents will be
received.
Important:
A join condition is persisted across server restarts. To
remove a waiting join condition that does not expire,
disable, then re-enable and save the webMethods
messaging trigger. Re-enabling the webMethods
messaging trigger effectively recreates the webMethods
messaging trigger.
Note: Priority messaging applies only to documents that are routed through the
webMethods Broker and Universal Messaging. Priority messaging does not
apply to locally published documents.
To use priority messaging, you configure both the publishing side and the subscribing
side.
On the publishing side, set a message priority level in the document envelope.
The priority level indicates how quickly the document should be processed once
it is published. A value of 0 is the lowest processing priority; a value of 9 indicates
expedited processing. The default priority is 4.
On the subscribing side, enable priority messaging for the webMethods messaging
trigger. This is necessary only for triggers that receive documents from webMethods
Broker.
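The priority semantics described above (0 is the lowest priority, 9 is expedited, 4 is the default) can be pictured as a priority queue in which higher-priority documents are delivered first and equal priorities preserve publication order. The following Python sketch is illustrative only; it does not use the Integration Server or messaging-provider APIs.

```python
import heapq
import itertools

DEFAULT_PRIORITY = 4  # default message priority described in the text

class PriorityMessageQueue:
    """Illustrative queue: higher priority (0-9) is delivered first;
    equal priorities are delivered in publication order."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def publish(self, document, priority=DEFAULT_PRIORITY):
        if not 0 <= priority <= 9:
            raise ValueError("priority must be 0 (lowest) to 9 (highest)")
        # negate priority so the highest value pops first from the min-heap
        heapq.heappush(self._heap, (-priority, next(self._seq), document))

    def next_document(self):
        _, _, document = heapq.heappop(self._heap)
        return document

q = PriorityMessageQueue()
q.publish("routine update")            # default priority 4
q.publish("audit record", priority=0)  # lowest priority
q.publish("alert", priority=9)         # expedited processing
```

With these three publishes, the "alert" document is delivered first, then the default-priority document, then the lowest-priority one.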
Note: For a webMethods messaging trigger that receives locally published messages
or messages from the webMethods Broker, Integration Server uses the user
account specified in the Run Trigger Service As User property on the Settings >
Resources > Store Settings page in Integration Server Administrator. For more
information about the Run Trigger Service As User property, see webMethods
Integration Server Administrator’s Guide.
Note: The Execution user property only applies to webMethods messaging triggers
that receive documents from Universal Messaging. The publishable document
type to which a trigger subscribes determines the messaging provider from
which the trigger receives documents. The Execution user property is
display-only if a webMethods messaging trigger receives locally published documents
or documents published to webMethods Broker.
2. In the Properties view, under General, in the Execution user property, type the name
of the user account whose credentials Integration Server uses to execute a service
associated with the webMethods messaging trigger. You can specify a locally defined
user account or a user account defined in a central or external directory.
3. Click File > Save.
Note: A refill level can be set for webMethods messaging triggers that receive
documents from the webMethods Broker only. Refill level does not apply
to webMethods messaging triggers that receive documents from Universal
Messaging.
For a webMethods messaging trigger that receives messages from Universal Messaging,
Integration Server receives documents for the trigger one at a time until the trigger
queue is at capacity. After the number of documents in the trigger queue equals the
configured capacity, Integration Server stops receiving documents. When the number of
documents awaiting processing in the trigger queue is less than the configured capacity,
the trigger resumes receiving messages from Universal Messaging.
You can increase the number of document acknowledgements returned at one time by
changing the value of the Acknowledgement queue size property. The acknowledgement
queue is a queue that contains pending acknowledgements for guaranteed documents
processed by the webMethods messaging trigger. When the acknowledgement
queue size is greater than one, a server thread places a document acknowledgement
into the acknowledgement queue after it finishes executing the trigger service.
Acknowledgements collect in the queue until a background thread returns them as a
group to the sending resource.
If the Acknowledgement queue size is set to one, acknowledgements will not collect in the
acknowledgement queue. Instead, Integration Server returns an acknowledgement to the
sending resource immediately after the trigger service finishes executing.
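The acknowledgement queue behaves like a batching buffer: with a size of one, each acknowledgement is returned immediately; with a larger size, acknowledgements accumulate and are returned as a group. The following Python sketch is a rough illustration of that behavior, not the Integration Server implementation; the background thread is simulated by an explicit flush call.

```python
class AckBatcher:
    """Illustrative sketch of the acknowledgement queue behavior."""
    def __init__(self, queue_size, send):
        self.queue_size = queue_size
        self.send = send          # callback that returns acks to the provider
        self.pending = []

    def acknowledge(self, doc_id):
        if self.queue_size <= 1:
            self.send([doc_id])   # return the acknowledgement immediately
            return
        self.pending.append(doc_id)
        if len(self.pending) >= self.queue_size:
            self.flush()

    def flush(self):
        """In the described design, a background thread does this sweep."""
        if self.pending:
            self.send(self.pending[:])
            self.pending.clear()

sent_batches = []
batcher = AckBatcher(queue_size=3, send=sent_batches.append)
for doc in ("d1", "d2", "d3", "d4"):
    batcher.acknowledge(doc)
batcher.flush()  # simulate the background sweep for the remainder
```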
Serial Processing
In serial processing, Integration Server processes the documents received by a
webMethods messaging trigger one after the other. Integration Server retrieves the first
document received by the webMethods messaging trigger, determines which condition
the document satisfies, and executes the service specified in the webMethods messaging
trigger condition. Integration Server waits for the service to finish executing before
retrieving the next document received by the webMethods messaging trigger.
In serial processing, Integration Server processes documents for the webMethods
messaging trigger in the same order in which it retrieves the documents from the
messaging provider. However, Integration Server processes documents for a serial
trigger more slowly than it processes documents for a concurrent trigger.
If your webMethods messaging trigger contains multiple conditions to handle a group
of published documents that must be processed in a specific order, use serial processing.
This is sometimes called ordered service execution. Only triggers that receive messages
from webMethods Broker can perform ordered service execution.
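Serial processing as described above amounts to a single worker that fully processes each document before retrieving the next. The following sketch is illustrative only; Integration Server's dispatcher and condition evaluation are more involved than this.

```python
from collections import deque

def process_serially(trigger_queue, conditions):
    """Process documents one at a time, in retrieval order.
    `conditions` maps a document type to its trigger service."""
    processed = []
    queue = deque(trigger_queue)
    while queue:
        doc_type, payload = queue.popleft()
        service = conditions[doc_type]      # condition the document satisfies
        processed.append(service(payload))  # wait for the service to finish
    return processed

conditions = {
    "order":   lambda p: f"order handled: {p}",
    "invoice": lambda p: f"invoice handled: {p}",
}
results = process_serially(
    [("order", "A1"), ("invoice", "B1"), ("order", "A2")], conditions)
```

Because each service call completes before the next document is taken, processing order always matches retrieval order.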
When a webMethods messaging trigger receives documents from webMethods Broker,
the queue for the serial trigger on the webMethods Broker has a Shared Document
Order mode of “Publisher”.
When a webMethods messaging trigger receives documents from Universal Messaging,
the named object for a trigger with serial processing is a priority named object. That is,
in Universal Messaging Enterprise Manager, the named object for the trigger has the
Subscription Priority check box selected.
Note: In addition to the term “non-clustered group,” the terms “stateless cluster”
and “external cluster” are sometimes used to describe the situation in which a
group of Integration Servers function in a manner similar to a cluster but are
not part of a configured cluster.
For each webMethods messaging trigger, each server in the cluster or non-clustered
group maintains a trigger queue in memory. This allows multiple servers to process
documents for a single webMethods messaging trigger. The messaging provider
manages the distribution of documents to the individual webMethods messaging
triggers in the cluster or non-clustered group.
How the messaging provider distributes documents for a serial trigger across the
Integration Servers in the cluster or group, so that documents from a single
publisher are processed in publication order, varies by provider:
webMethods Broker distributes documents so that the Integration Servers in
the cluster or non-clustered group process guaranteed documents from a single
publisher in the same order in which the documents were published. Multiple
Integration Servers can process documents for a single trigger, but only one
Integration Server in the cluster or non-clustered group processes documents
for a particular publisher. For more information, see "Serial Processing with the
webMethods Broker in a Clustered or a Non-Clustered Group of Integration Servers"
on page 752.
Universal Messaging distributes all the documents to which a particular serial
trigger subscribes to the same Integration Server in a cluster or non-clustered
group. Regardless of the document publisher, all of the published documents to
which a specific serial trigger subscribes are received and processed by the same
Integration Server. Because a serial trigger processes only one document at a time,
this distribution approach ensures that documents are processed in the same order
in which they were published. For more information, see "Serial Processing with
Universal Messaging in a Clustered or a Non-Clustered Group of Integration
Servers" on page 754.
documents A1 and A2, PublisherB published documents B1, B2, and B3, and PublisherC
published documents C1 and C2.
The following illustration and explanation describe how serial document processing
works in a clustered environment that uses webMethods Broker as the messaging
provider.
Step Description
1 ServerX retrieves the first two documents in the queue (documents A1 and
B1) to fill its processCustomerInfo trigger queue to capacity. ServerX begins
processing document A1.
Notes:
The webMethods Broker and Integration Servers in a cluster cannot ensure that
serial webMethods messaging triggers process volatile documents from the same
publisher in the order in which the documents were published.
When documents are delivered to the default client in a cluster, the webMethods
Broker and Integration Servers cannot ensure that documents from the same
publisher are processed in publication order. This is because the Integration Server
acknowledges documents delivered to the default client as soon as they are retrieved
from the webMethods Broker.
approach ensures that documents from the same publisher are processed in the order in
which the documents were published.
To ensure that all of the documents for a serial trigger are sent to the same Integration
Server, Integration Server creates a priority named object on Universal Messaging
that corresponds to the serial trigger. In Universal Messaging Enterprise Manager,
the named object for the trigger has the Subscription Priority check box selected. With a
priority named object, multiple consumers can connect to the named object but only
one consumer is active. The active consumer has priority over the other consumers,
which remain in fail-over mode. If the active consumer disconnects, one of the fail-
over consumers becomes the active consumer and begins receiving documents. When
a particular webMethods messaging trigger runs on multiple Integration Servers, each
instance of the trigger is a consumer. Each trigger instance can connect to the priority
named object but only one trigger at a time processes messages.
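The priority named object behavior described above (many consumers may connect, only one is active, the rest wait in fail-over mode) can be sketched as follows. This is an illustration of the delivery rule, not the Universal Messaging implementation.

```python
class PriorityNamedObject:
    """Illustrative: many consumers may connect, but only the first
    (active) consumer receives documents; on disconnect, the next
    fail-over consumer takes over."""
    def __init__(self):
        self.consumers = []   # connection order; index 0 is active

    def connect(self, consumer):
        self.consumers.append(consumer)

    def disconnect(self, consumer):
        self.consumers.remove(consumer)

    def deliver(self, document):
        if self.consumers:
            active = self.consumers[0]    # only the active consumer receives
            return (active, document)
        return None                       # no consumer connected

named_object = PriorityNamedObject()
named_object.connect("IS-1")           # becomes the active consumer
named_object.connect("IS-2")           # fail-over consumer
first = named_object.deliver("m1")
named_object.disconnect("IS-1")        # fail-over consumer becomes active
second = named_object.deliver("m2")
```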
Note: If you do not need serial processing of documents by publisher, but you want
a trigger to process documents one at a time, select concurrent processing and
set Max execution threads to 1. This configuration allows the trigger on each
Integration Server in the cluster or group to process one document at a time.
Serial Triggers Migrated to Integration Server 9.9 or Later from 9.8 or Earlier
Prior to Integration Server 9.9, when using Universal Messaging as the messaging
provider, a webMethods messaging trigger with serial processing corresponded
to a shared named object on Universal Messaging. As of Integration Server 9.9, a
webMethods messaging trigger with serial processing corresponds to a priority
named object on Universal Messaging. All webMethods messaging triggers created
on Integration Server 9.9 or later will correspond to a priority named object. However,
migrated serial triggers will still correspond to a shared named object. The trigger and
named object will be out of sync. To synchronize the migrated serial trigger and the
named object, you must do one of the following:
If you are using a fresh install of Universal Messaging 9.9 or later (that is, the
Universal Messaging server was not migrated), when you start Integration Server,
synchronize the publishable document types with the provider using Designer or
the built-in service pub.publish:syncToProvider. Synchronizing the publishable document
types causes Integration Server to reload the webMethods messaging triggers.
Integration Server creates a priority named object for each serial trigger.
If you are using an installation of Universal Messaging 9.9 or later that was migrated
from an earlier version, you must delete and recreate the named object. For more
information about deleting and recreating a named object associated with a trigger,
see "Synchronizing the webMethods Messaging Trigger and Named Object on
Universal Messaging " on page 759.
Concurrent Processing
In concurrent processing, Integration Server processes the documents received by
a webMethods messaging trigger in parallel. Integration Server processes as many
documents in the webMethods messaging trigger queue as it can at the same time.
Integration Server does not wait for the service specified in the webMethods messaging
trigger condition to finish executing before it begins processing the next document in the
trigger queue. You can specify the maximum number of documents Integration Server
can process concurrently.
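The bounded concurrency described above can be sketched as a thread pool whose size corresponds to the Max execution threads property: documents are dispatched in parallel, but never more than the configured maximum at once. This is an illustrative sketch, not product code.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

MAX_EXECUTION_THREADS = 3  # analogue of the Max execution threads property

active = 0   # documents currently being processed
peak = 0     # highest concurrency observed
lock = threading.Lock()

def trigger_service(doc):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    # ... the real trigger service work would happen here ...
    with lock:
        active -= 1
    return doc.upper()

with ThreadPoolExecutor(max_workers=MAX_EXECUTION_THREADS) as pool:
    results = sorted(pool.map(trigger_service, ["a", "b", "c", "d", "e"]))
```

The pool never runs more than MAX_EXECUTION_THREADS services at once, which mirrors how the property caps concurrent document processing; as the text notes, completion order is not guaranteed, so the results are sorted before comparison.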
Concurrent processing provides faster performance than serial processing. Integration
Server processes the documents in the trigger queue more quickly because
it can process more than one document at a time. However,
the more documents Integration Server processes concurrently, the more server
threads Integration Server dispatches, and the more memory the document processing
consumes.
Additionally, for concurrent webMethods messaging triggers, Integration Server does
not guarantee that documents are processed in the order in which they are received.
Concurrent document processing is equivalent to the Shared Document Order mode of
“None” on the webMethods Broker.
When receiving messages from Universal Messaging, the Universal Messaging window
size limits the number of documents that can be processed at one time by an individual
trigger. By default, the window size of a client queue for the trigger is set to the sum of
the Capacity and Max execution threads properties. For example, if the Capacity property
is set to 10 and Max execution threads is set to 5, the client queue window size is 15.
The window size set for a trigger overrides the default value specified in Universal
Messaging.
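Under the default rule described above, the window size is simply the sum of the two trigger properties. A minimal illustration (the property names are taken from the text; this is not product code):

```python
def default_window_size(capacity, max_execution_threads):
    """Default Universal Messaging client-queue window size for a trigger,
    per the rule above: Capacity + Max execution threads."""
    return capacity + max_execution_threads
```

For the example in the text, a Capacity of 10 and Max execution threads of 5 yield a window size of 15.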
concurrently. Integration Server uses one server thread to process each document in
the trigger queue.
4. If you selected serial processing and you want Integration Server to suspend
document processing and document retrieval automatically when a trigger service
ends with an error, under Fatal error handling, select True for the Suspend on error
property.
For more information about fatal error handling, see "Fatal Error Handling for a
webMethods Messaging Trigger " on page 760.
5. Click File > Save to save the webMethods messaging trigger.
Notes:
If you selected serial processing, Integration Server creates a priority named object
on the channels that correspond to the publishable document types to which the
trigger subscribes.
If you selected concurrent processing, Integration Server creates a shared named
object on the channels that correspond to the publishable document types to which
the trigger subscribes.
Integration Server Administrator can be used to change the number of concurrent
execution threads for a webMethods messaging trigger temporarily or permanently.
For more information, see webMethods Integration Server Administrator’s Guide.
Important: Any documents that existed in the trigger client queue before you
changed the message process mode will be lost.
Note: A webMethods Broker connection alias shares a client prefix if the Shared
Client Prefix property for the connection alias is set to Yes.
Note: A Universal Messaging connection alias does not share a client prefix if the
Shared Client Prefix property for the connection alias is set to No.
When you change the processing mode for a webMethods messaging trigger that
uses a Universal Messaging connection alias that shares a client prefix, Integration
Server does not delete and recreate the named object that corresponds to the trigger
on Universal Messaging. As a result, the trigger on Integration Server will be out of
sync with the associated named object on Universal Messaging. If the same trigger
exists on other Integration Servers, such as in a cluster or a non-clustered group of
Integration Servers, the changed trigger will also be out of sync with the trigger on
the other Integration Servers. This affects document processing. One of the following
situations occurs:
If you changed the processing mode from serial to concurrent, the corresponding
named object on Universal Messaging remains a priority named object. The
trigger continues to process documents concurrently. However, if the trigger
exists on more than one Integration Server, such as in a cluster or a non-clustered
group of Integration Servers, Universal Messaging distributes documents to the
trigger on the first Integration Server to connect to Universal Messaging only.
This trigger has priority and will receive and process all the documents to which
the trigger subscribes. The other Integration Servers are connected to Universal
Messaging but are in fail-over mode and will not receive or process documents
unless the first trigger disconnects.
If you changed the processing mode from concurrent to serial, the corresponding
named object on Universal Messaging remains a shared named object.
Integration Server does not change the named object to be a priority named
object. Consequently, if the trigger exists on more than one Integration Server,
such as in a cluster or a non-clustered group of Integration Servers, Universal
Messaging distributes documents to the trigger on each connected Integration
Server. Universal Messaging does not distribute documents in a way that ensures
that processing order matches publication order.
For information about how to synchronize the trigger and the named object when
the processing mode is out of sync, see "Synchronizing the webMethods Messaging
Trigger and Named Object on Universal Messaging " on page 759.
Note: A Universal Messaging connection alias shares a client prefix if the Shared
Client Prefix property for the connection alias is set to Yes.
Software AG does not recommend changing the processing mode for a trigger
when more than one Integration Server connects to the same named object that
corresponds to the trigger. For example, if the trigger is on an Integration Server that
is part of a cluster or a non-clustered group, more than one Integration Server can
share the same named object.
To synchronize the webMethods messaging trigger and the named object on Universal Messaging
Do one of the following:
If the webMethods messaging trigger resides on the only Integration Server
connected to Universal Messaging and the Shared Client Prefix property for the
Universal Messaging connection alias is set to No, start the trigger to delete and
recreate the corresponding named object. You can start a trigger by disabling and
then enabling the Universal Messaging connection alias used by the trigger.
document publishers before deleting the named object. Then create the named
object for the trigger by disabling and then enabling the Universal Messaging
connection alias used by the trigger.
Document processing and document retrieval remain suspended until one of the
following occurs:
You specifically resume document retrieval or document processing
for the webMethods messaging trigger. You can resume document
retrieval and document processing using Integration Server
Administrator, built-in services (pub.trigger:resumeProcessing or
pub.trigger:resumeRetrieval), or by calling methods in the Java API
(com.wm.app.b2b.server.dispatcher.trigger.TriggerFacade.setProcessingSuspended()
and
com.wm.app.b2b.server.dispatcher.trigger.TriggerFacade.setRetrievalSuspended()).
Integration Server restarts, the webMethods messaging trigger is enabled or disabled
(and then re-enabled), the package containing the webMethods messaging trigger
reloads. (When Integration Server suspends document retrieval and document
processing for a webMethods messaging trigger because of an error, Integration
Server considers the change to be temporary. For more information about temporary
vs. permanent state changes for webMethods messaging triggers, see webMethods
Integration Server Administrator’s Guide.)
For more information about resuming document processing and document retrieval, see
webMethods Integration Server Administrator’s Guide and the webMethods Integration Server
Built-In Services Reference.
Automatic suspension of document retrieval and processing can be especially useful
for serial webMethods messaging triggers that are designed to process a group of
documents in a particular order. If the trigger service ends in error while processing
the first document, you might not want the webMethods messaging trigger to
proceed with processing the subsequent documents in the group. If Integration Server
automatically suspends document processing, you have an opportunity to determine
why the trigger service did not execute successfully and then resubmit the document
using webMethods Monitor.
By automatically suspending document retrieval as well, Integration Server prevents the
webMethods messaging trigger from retrieving more documents. Because Integration
Server already suspended document processing, new documents would just sit in
the trigger queue. If Integration Server does not retrieve more documents for the
webMethods messaging trigger and Integration Server is in a cluster, the documents
might be processed more quickly by another Integration Server in the cluster.
the trigger service to fail is temporary, the trigger service might execute successfully if
the Integration Server waits and then re-executes the service.
You can configure transient error handling for a webMethods messaging trigger to
instruct Integration Server to wait a specified time interval and then re-execute a
trigger service automatically when an ISRuntimeException occurs. Integration Server re-
executes the trigger service using the original input document.
When you configure transient error handling for a webMethods messaging trigger, you
specify the following retry behavior:
Whether Integration Server should retry trigger services for the webMethods
messaging trigger. Keep in mind that a trigger service can retry only if it is coded to
throw ISRuntimeExceptions.
The maximum number of retry attempts Integration Server should make for each
trigger service.
The time interval between retry attempts.
How to handle a retry failure. That is, you can specify what action Integration Server
takes if all the retry attempts are made and the trigger service still fails because of an
ISRuntimeException.
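The retry behavior configured by these settings amounts to the following loop. This is an illustrative Python sketch: ISRuntimeException is modeled here as a plain Python exception, not the real com.wm Java class, and the retry-failure action is a pluggable callback.

```python
import time

class ISRuntimeException(Exception):
    """Stand-in for the transient-error exception a trigger service throws."""

def run_with_retries(trigger_service, document,
                     max_retry_attempts=0, retry_interval=10,
                     on_retry_failure=lambda doc: "suspended"):
    """Re-execute the trigger service with the original input document
    each time an ISRuntimeException occurs, up to max_retry_attempts."""
    attempt = 0
    while True:
        try:
            return trigger_service(document)
        except ISRuntimeException:
            if attempt >= max_retry_attempts:
                # all retry attempts made: apply the retry-failure action
                return on_retry_failure(document)
            attempt += 1
            time.sleep(retry_interval)  # wait between retry attempts

calls = []
def flaky_service(doc):
    """Fails twice with a transient error, then succeeds."""
    calls.append(doc)
    if len(calls) < 3:
        raise ISRuntimeException("resource temporarily unavailable")
    return f"processed {doc}"

result = run_with_retries(flaky_service, "doc-1",
                          max_retry_attempts=5, retry_interval=0)
```

Note that, as the text states, only exceptions signaling a transient error trigger the retry path; a functional failure (a ServiceException in the product) would propagate instead of being retried.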
You can also configure Integration Server and/or a webMethods messaging trigger
to handle transient errors that occur during trigger preprocessing. The trigger
preprocessing phase encompasses the time from when a trigger first receives a message
from its local queue to the time the trigger service
executes.
For more information about transient error handling for trigger preprocessing, see
"Transient Error Handling During Trigger Preprocessing" on page 783.
ISRuntimeException, the trigger service ends in error. Integration Server will not retry
the trigger service.
Adapter services built on Integration Server 6.0 or later, and based on the ART
framework, detect and propagate exceptions that signal a retry if a transient error is
detected on their back-end resource. This behavior allows for the automatic retry when
the service functions as a trigger service.
Note: Integration Server does not retry a trigger service that fails because a
ServiceException occurred. A ServiceException indicates that there is
something functionally wrong with the service. A service can throw a
ServiceException using the EXIT step.
Step Description
1 Integration Server makes the final retry attempt and the trigger service fails
because of an ISRuntimeException.
Step Description
1 Integration Server makes the final retry attempt and the trigger service fails
because of an ISRuntimeException.
manually using Integration Server Administrator or by invoking the
pub.trigger:resumeRetrieval and pub.trigger:resumeProcessing public services.
5 If the resource monitoring service indicates that the resources are available
(that is, the value of isAvailable is true), Integration Server resumes
document retrieval and document processing for the webMethods
messaging trigger.
If the resource monitoring service indicates that the resources are not
available (that is, the value of isAvailable is false), Integration Server waits
a short time interval (by default, 60 seconds) and then re-executes the
resource monitoring service. Integration Server continues executing the
resource monitoring service periodically until the service indicates the
resources are available.
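The "Suspend and retry later" flow in step 5 reduces to a polling loop around the resource monitoring service. The sketch below is illustrative only; the monitoring service is represented as a callable returning the isAvailable value, and the sleep is injectable so the example runs instantly instead of waiting the default 60 seconds.

```python
import itertools

def wait_for_resources(resource_monitor, interval_seconds=60,
                       sleep=lambda s: None):
    """Poll the resource monitoring service until it reports the
    resource available (isAvailable true). Returns the number of
    unavailable polls observed before availability."""
    polls = 0
    while not resource_monitor():
        polls += 1
        sleep(interval_seconds)  # default wait between polls is 60 seconds
    return polls

# Simulated monitoring service: reports available on the third check.
checks = itertools.count(1)
polls = wait_for_resources(lambda: next(checks) >= 3)
```

Once the loop exits, document retrieval and document processing for the trigger can resume.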
trigger. The webMethods messaging trigger and trigger service process the
document just as they would any document in the trigger queue.
Select... To...
Max attempts reached Specify that Integration Server retries the trigger service a
limited number of times.
In the Max retry attempts property, enter the maximum number
of times Integration Server should attempt to re-execute the
trigger service. The default is 0 retries.
Successful Specify that Integration Server retries the trigger service
until the service executes to completion.
infinite retry loop, see "About Retrying Trigger Services
and Shutdown Requests" on page 768.
3. In the Retry interval property, specify the time period the Integration Server waits
between retry attempts. The default is 10 seconds.
4. Set the On retry failure property to one of the following:
Select... To...
5. If you selected Suspend and retry later, then in the Resource monitoring service
property, specify the service that Integration Server should execute to determine the
availability of resources associated with the trigger service. Multiple webMethods
messaging triggers can use the same resource monitoring service.
6. Click File > Save.
Notes:
webMethods messaging triggers and services can both be configured to retry. When
a webMethods messaging trigger invokes a service (that is, the service functions as a
trigger service), the Integration Server uses the webMethods messaging trigger retry
properties instead of the service retry properties.
When Integration Server retries a trigger service and the trigger service is configured
to generate audit data on error, Integration Server adds an entry to the service log for
each failed retry attempt. Each of these entries will have a status of “Retried” and an
error message of “Null”. However, if Integration Server makes the maximum retry
attempts and the trigger service still fails, the final service log entry for the service
will have a status of “Failed” and will display the actual error message. This occurs
regardless of which retry failure option the webMethods messaging trigger uses.
Integration Server generates the following journal log message between retry
attempts:
[ISS.0014.0031D] Service serviceName failed with ISRuntimeException. Retry x of y
will begin in retryInterval milliseconds.
If you do not configure service retry for a webMethods messaging trigger, set
the Max retry attempts property to 0. This can improve the performance of services
invoked by the webMethods messaging trigger.
You can invoke the pub.flow:getRetryCount service within a trigger service to determine
the current number of retry attempts made by the Integration Server and the
maximum number of retry attempts allowed for the trigger service. For more
information about the pub.flow:getRetryCount service, see the webMethods Integration
Server Built-In Services Reference.
Important:
If watt.server.trigger.interruptRetryOnShutdown is
set to “false” and a webMethods messaging trigger
is set to retry until successful, a trigger service can
Integration Server uses duplicate detection to determine the document’s status. The
document status can be one of the following:
New. The document is new and has not been processed by the webMethods
messaging trigger.
Duplicate. The document is a copy of one already processed by the webMethods
messaging trigger.
In Doubt. Integration Server cannot determine the status of the document. The
webMethods messaging trigger may or may not have processed the document
before.
To resolve the document status, Integration Server evaluates, in order, one or more of
the following:
Redelivery count indicates how many times the transport has redelivered the
document to the webMethods messaging trigger.
Document history database maintains a record of all guaranteed documents processed
by webMethods messaging triggers for which exactly-once processing is configured.
Document resolver service is a service created by a user to determine the document
status. The document resolver service can be used instead of or in addition to the
document history database.
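The resolution order above (redelivery count first, then the document history database, then the user-written document resolver service) can be sketched as follows. This is a simplified illustration; the actual evaluation depends on which exactly-once properties are configured for the trigger, and the status names mirror New, Duplicate, and In Doubt from the text.

```python
def resolve_status(redelivery_count, history, doc_id, resolver=None):
    """Illustrative exactly-once resolution, evaluated in order:
    redelivery count, document history database, document resolver service."""
    if redelivery_count == 0:
        return "NEW"                # never redelivered: treat as new
    if history is not None:
        # history records guaranteed documents already processed
        return "DUPLICATE" if doc_id in history else "NEW"
    if resolver is not None:
        return resolver(doc_id)     # user-supplied resolver service decides
    return "IN_DOUBT"               # status cannot be determined

history = {"doc-42"}  # hypothetical document history contents
```

With this sketch, a document that was never redelivered is New, a redelivered document found in the history is Duplicate, and a redelivered document with no history and no resolver is In Doubt.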
The steps that Integration Server performs to determine a document’s status depend on
the exactly-once properties configured for the subscribing trigger. For more information
about configuring exactly-once properties, see "Configuring Exactly-Once Processing for
a webMethods Messaging Trigger " on page 771.
Note: If you changed the message processing mode for a webMethods messaging
trigger that uses a Universal Messaging connection alias with a shared
client prefix, you might need to use Universal Messaging Enterprise
Manager to delete and recreate the named object. For more information, see
"Synchronizing the webMethods Messaging Trigger and Named Object on
Universal Messaging " on page 759.
Note: In addition to the term “non-clustered group,” the terms “stateless cluster”
and “external cluster” are sometimes used to describe the situation in which a
group of Integration Servers function in a manner similar to a cluster but are
not part of a configured cluster.
Configurations dialog box, select the Shared file option and provide a workspace location
in which to save the file.
In a launch configuration for a webMethods messaging trigger, you specify:
The condition that you want Designer to test. Each launch configuration can specify
only one condition in the webMethods messaging trigger.
The document type whose subscription you want to test. For an Any (OR) or Only
one (XOR) join condition, you specify the document type for which you want to
supply input.
Input data that Designer uses to build a document. Designer evaluates the filter
using the data in the document and provides the document as input to the trigger
service.
You can create multiple launch configurations for each webMethods messaging trigger.
c. Click OK.
8. On the Input tab, select the tab with the name of the IS document type for which you
want to provide input data.
If the selected condition uses an All (AND) join, Designer displays one tab for each
document type in the join condition. If the condition is an Only one (XOR) join and
you selected multiple document types for which to supply input data, Designer
displays one tab for each selected document type.
a. Select or clear the Include empty values for String Types check box to indicate how to
handle variables that have no value.
If you want to use an empty String (i.e., a zero-length String), select the
Include empty values for String Types check box. Also note that Document Lists
that have defined elements will be part of the input, but they will be empty.
If you want to use a null value for the empty Strings, clear the check box.
String-type variables will not be included in the input document.
Note: The setting applies to all String-type variables in the root document of
the input signature. The setting does not apply to String-type variables
within Document Lists. You define how you want to handle String-type
variables within Document Lists separately when you assign values to
Document Lists variables.
b. Specify the values to save with the launch configuration for the webMethods
messaging trigger by doing one of the following:
Type the input value for each field in the document type.
To load the input values from a file, click Load to locate and select the file
containing the input values. If Designer cannot parse the input data, it
displays an error message indicating why it cannot load the data.
Designer validates the provided input values. If provided values do not match
the input parameter data type, Designer displays a message to that effect. You
cannot use the launch configuration for the webMethods messaging trigger if the
provided input does not match the defined data type.
c. If you want Designer to give the user executing the launch configuration the
option of providing different input values than those saved with the launch
configuration, select the Prompt for data at launch check box. If you clear this check
box, Designer passes the webMethods messaging trigger the same set of data
every time the launch configuration executes.
9. Repeat the preceding step for each IS document type displayed on the Input tab.
10. If you want to save the input values that you have entered, click Save.
11. Click Apply.
12. If you want to execute the launch configuration, click Run. Otherwise, click Close.
Designer displays results for running the webMethods messaging trigger in the
Results view.
Note: The setting applies to all String-type variables in the root document of the
input signature. The setting does not apply to String-type variables within
Document Lists. You define how you want to handle String-type variables
within Document Lists separately when you assign values to Document
List variables. For more information, see webMethods Service Development
Help.
8. Specify the values to save with the launch configuration for the webMethods
messaging trigger by doing one of the following:
Type the input value for each field in the document type.
To load the input values from a file, click Load to locate and select the file
containing the input values. If Designer cannot parse the input data, it displays
an error message indicating why it cannot load the data.
Note: If you type in input values, Designer discards the values you specified
after the run. If you want to save input values, create a launch
configuration. For instructions, see "Running a webMethods Messaging
Trigger" on page 778.
9. Click OK.
Designer runs the trigger and displays the results in the Results view.
Note: Integration Server generates additional logging for triggers that receive
messages from Universal Messaging or through Digital Event Services
only.
Note: For the increased logging to appear in the server log, you must set the logging
level for server facility 0153 Dispatcher (Universal Messaging) to Trace.
Where triggerName is the fully qualified name of the trigger in the format
folder.subfolder:triggerName.
5. Click Save Changes.
■ Server and Trigger Properties that Affect Transient Error Handling During Trigger
Preprocessing ................................................................................................................................... 784
■ Overview of Transient Error Handling During Trigger Preprocessing ........................................ 785
Trigger preprocessing encompasses the time from when a trigger first receives a
message (document) from its local queue on Integration Server to the time Integration
Server invokes the trigger service. Transient errors can occur during this time. A
transient error is an error that arises from a temporary condition that might be resolved
or corrected quickly, such as the unavailability of a resource due to network issues or
failure to connect to a database. For example, if a document history database is used for
exactly-once processing, the unavailability of the database may cause a transient error.
Because the condition that caused the trigger preprocessing to fail is temporary, the
trigger preprocessing might complete successfully if Integration Server waits and then
re-attempts trigger preprocessing. To allow the preprocessing to complete successfully,
Integration Server provides some properties and settings for transient error handling.
Note: The On Retry Failure trigger property also determines how Integration
Server handles retry failure for a trigger service.
The On Transaction Rollback property for a transacted JMS trigger. When set to
Suspend and recover, Integration Server suspends a transacted JMS trigger that
encounters a transient error during trigger preprocessing.
For a detailed explanation about how Integration Server uses these property settings
when a transient error occurs during trigger preprocessing, see "Overview of Transient
Error Handling During Trigger Preprocessing" on page 785.
Step Description
provider, and uses the audit subsystem to log the document. This may
result in message loss.
3 Integration Server does one of the following once the trigger is suspended:
If the transient error (ISRuntimeException) is caused by a SQLException
(which indicates that an error occurred while reading from or writing to
the database), Integration Server suspends the trigger and schedules
a system task that executes an internal service that monitors the
connection to the document history database. Integration Server resumes
the trigger and re-executes it when the internal service indicates that the
connection to the document history database is available.
If the transient error (ISRuntimeException) is caused by a
ConnectionException (which indicates that the document history
database is not enabled or is not properly configured), and the
watt.server.trigger.preprocess.monitorDatabaseOnConnectionException
property is set to true, Integration Server schedules a system task
that executes an internal service that monitors the connection to the
document history database. Integration Server resumes the trigger and
re-executes it when the internal service indicates that the connection to
the document history database is available.
If the transient error (ISRuntimeException)
is caused by a ConnectionException and the
watt.server.trigger.preprocess.monitorDatabaseOnConnectionException
property is set to false, Integration Server does not schedule a system
task to check for the database's availability and will not resume the
trigger automatically. You must manually resume the trigger after
configuring the document history database properly.
If the transient error (ISRuntimeException) is caused by some other type
of exception, Integration Server suspends the trigger and schedules a
system task to execute the trigger's resource monitoring service (if one
is specified). When the resource monitoring service indicates that the
resources used by the trigger are available, Integration Server resumes
the trigger and again receives the message from the messaging provider.
If a resource monitoring service is not specified, you will need to resume
the trigger manually (via Integration Server Administrator or the
pub.trigger* services).
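The decision logic described above can be condensed into a small sketch. Python is used here purely for illustration; the returned strings are descriptive labels, not Integration Server APIs:

```python
def preprocess_error_action(cause, monitor_db_on_conn_exception=False,
                            has_resource_monitoring_service=False):
    """Return how Integration Server handles a trigger after a transient
    error (ISRuntimeException) during trigger preprocessing."""
    if cause == "SQLException":
        # Error reading from or writing to the document history database:
        # a system task monitors the database and auto-resumes the trigger.
        return "monitor database, auto-resume"
    if cause == "ConnectionException":
        # Behavior depends on the server configuration parameter
        # watt.server.trigger.preprocess.monitorDatabaseOnConnectionException.
        if monitor_db_on_conn_exception:
            return "monitor database, auto-resume"
        return "manual resume required"
    # Any other cause: use the trigger's resource monitoring service, if one
    # is specified; otherwise the trigger must be resumed manually.
    if has_resource_monitoring_service:
        return "monitor resource, auto-resume"
    return "manual resume required"

assert preprocess_error_action("SQLException") == "monitor database, auto-resume"
assert preprocess_error_action("ConnectionException", True) == "monitor database, auto-resume"
assert preprocess_error_action("ConnectionException", False) == "manual resume required"
assert preprocess_error_action("OtherException") == "manual resume required"
```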
Web services are building blocks for creating open, distributed systems. A web service is a
collection of functions that are packaged as a single unit and published to a network for
use by other software programs. For example, you could create a web service that checks
a customer’s credit or tracks delivery of a package. If you want to provide higher-level
functionality, such as a complete order management system, you could create a web
service that maps to many different IS flow services, each performing a separate order
management function.
Designer uses web service descriptors to encapsulate information about web services
and uses web service connectors to invoke web services.
A consumer web service descriptor defines an external web service, allowing Integration
Server to create a web service connector (WSC) for each operation in the web service.
The web service connector(s) can be used in Designer just like any other IS flow
service; when a connector is invoked it calls a specific operation of a web service.
In version 9.0 and later, Integration Server also creates a response service for each
operation in the web service. Response services are flow services to which you can
add custom logic to process asynchronous SOAP responses.
@attribute fields (fields starting with the “@” symbol) are not allowed at the top
level
List fields (String List, Document List, Document Reference List, and Object List)
are not allowed at the top level
Duplicate field names (identically named fields) are not allowed at the top level
and invoke the service that you want to expose as a web service. You can then use the
wrapper service as an operation in a provider web service descriptor.
For example, suppose that you want to expose an XSLT service as a web service on
one Integration Server and invoke it from another. However, the XSLT source contains
an optional run-time property that is added to the pipeline at run time. This optional
property is not reflected in the input signature of the XSLT service. If you added the
XSLT service to a provider web service descriptor, the resulting WSDL document would
not list the property as part of the input message. Consequently, a consumer web service
descriptor and a web service connector created from the WSDL document would not
account for the property, and invocation would fail.
To successfully use the XSLT service as a web service, you can do the following:
1. Create a wrapper flow service that:
Defines all of the input parameters of the XSLT service in its input signature.
Defines the run-time property of the XSLT source in its input signature.
Invokes the XSLT service.
2. On the Integration Server that hosts the wrapper flow service and the XSLT service,
create a provider web service descriptor from the wrapper flow service.
On the Integration Server from which you will invoke the web service, create a
consumer web service descriptor from the WSDL of the provider web service descriptor.
The web service connector that corresponds to the operation for the XSLT service will
display the complete input signature.
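The wrapper technique can be sketched as follows. Python is used for illustration only; the service and parameter names (run_xslt, xslt_wrapper, indentOutput) are hypothetical stand-ins for the XSLT service, the wrapper flow service, and the optional run-time property:

```python
def run_xslt(pipeline):
    """Stand-in for the XSLT service: it reads an optional run-time
    property from the pipeline that its own signature does not declare."""
    indent = pipeline.get("indentOutput", "no")  # hypothetical property
    return {"result": f"transformed ({indent=})"}

def xslt_wrapper(xmldata, stylesheet, indentOutput=None):
    """Wrapper flow service: declares the XSLT inputs AND the optional
    run-time property in its signature, then invokes the XSLT service.
    A provider web service descriptor built from this wrapper therefore
    exposes the property in the WSDL input message."""
    pipeline = {"xmldata": xmldata, "stylesheet": stylesheet}
    if indentOutput is not None:
        pipeline["indentOutput"] = indentOutput
    return run_xslt(pipeline)

assert xslt_wrapper("<a/>", "s.xsl")["result"] == "transformed (indent='no')"
assert xslt_wrapper("<a/>", "s.xsl", "yes")["result"] == "transformed (indent='yes')"
```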
When using an adapter service to create a provider web service descriptor, if the
service returns values in the pipeline that do not match the output signature, you
must change those variable properties to optional fields (where applicable), or else
wrap the service in a flow to add or drop variables to match the output signature.
Web service descriptors that are not running in compatibility mode can stream
MTOM attachments for both inbound and outbound SOAP messages. To stream
MTOM attachments, the object that represents the field to be streamed should be an
instance of the com.wm.util.XOPObject Java class.
You can quickly create a service first provider web service descriptor by right-
clicking the service and selecting Generate Provider WSD. Enter a name for the web service
descriptor in the Provide a Name dialog box and click OK. Designer automatically
creates a provider web service descriptor in the same folder as the selected IS service,
using all the default options.
Integration Server generates invalid WSDL for Axis and .Net clients if the provider
web service descriptor contains a C service that takes a document specification as
input. Axis and .Net clients cannot handle the resulting Java stub classes and throw
an error. Do not use a C service with a document specification in the input in a server
first provider web service descriptor if you know that the resulting WSDL will be
used by Axis and .Net clients.
SOAP version Whether SOAP messages for this web service should use SOAP
1.1 or SOAP 1.2 message format.
Transport The transport protocol used to access the web service. Select
one of the following:
HTTP
HTTPS
JMS
Use and style The style/use for operations in the provider web service
for operations descriptor. Select one of the following:
Document - Literal
RPC - Literal
RPC - Encoded
Endpoint The address at which the web service can be invoked. Do one
of the following:
To use a provider web service endpoint alias to specify
the address, select the Alias option. Then, in the Alias list,
select the provider web service endpoint alias. Select
DEFAULT(aliasName) if you want to use the information
in the default provider web service endpoint alias for the
address. If the Alias list includes a blank row, the Integration
Server does not have a default provider web service endpoint
alias for the protocol.
Note: You can only specify Host and Port for the endpoint
if a default provider endpoint alias does not exist
for the selected protocol. When a default alias exists,
Designer populates the Host and Port fields with the
host and port from the default provider endpoint
alias.
Target The URL that you want to use as the target namespace for
namespace the provider web service descriptor. In a WSDL document
generated for this provider web service descriptor the
elements, attributes, and type definitions will belong to this
namespace.
Note: If you specify a transport, but do not specify a host, port, or endpoint alias,
Integration Server uses the primary port as the port in the endpoint URL.
If the selected transport and the protocol of the primary port do not match,
web service clients will not execute successfully. For more information, see
"Protocol Mismatch Between Transport and Primary Port" on page 796.
9. Under Enforce WS-I Basic Profile 1.1 compliance do one of the following:
Select Yes if you want Designer to validate all the web service descriptor objects
and properties against the WS-I requirements before creating the web service
descriptor.
Select No if you do not want Designer to enforce compliance for WS-I Basic
Profile 1.1.
Note: WS-I compliance cannot be enforced if the WSDL contains a SOAP over
JMS binding.
10. If you want Integration Server to use the Xerces Java parser to validate the schema
elements that represent the signatures of the services used as operations, select the
Validate schema using Xerces check box.
11. Click Finish.
If Designer cannot create or cannot completely generate a web service descriptor,
Designer displays error messages or warning messages.
Notes:
If you selected the Validate schema using Xerces check box when creating a service
first provider web service descriptor, Integration Server converts the signatures of
the services used as operations to XML schema elements. Then Integration Server
uses the Xerces Java parser to validate the schema elements. If a schema element
does not conform syntactically to the schema for XML Schemas defined in XML
Schema Part 1: Structures, Designer displays validation error messages.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
Note: If you want to use Robust In-Only MEP rather than In-Only MEP,
after creating the web service descriptor for a service with no output
parameters, add a fault to the operation.
For more information about Integration Server MEP support, see the Web Services
Developer’s Guide.
For example, suppose that you specify a transport of HTTPS when creating the provider
web service descriptor, but do not specify a host, port, or endpoint alias. Additionally,
Integration Server does not identify a default web service provider endpoint alias for
HTTPS. Furthermore, the primary port is an HTTP port. In this situation, Designer
displays a protocol mismatch message.
You must resolve this mismatch before making a WSDL document for this provider web
service descriptor available to web service consumers. Otherwise, the web service clients
will not execute successfully.
Do not create a WSDL first provider web service descriptor from a WSDL that
specifies RPC-Encoded, contains attributes in its operation signature, and/or has
complex type definitions with mixed content. Integration Server might successfully
create a web service descriptor from such WSDLs. However, the web service
descriptor may exhibit unexpected runtime behavior.
7. Click Next.
8. If you selected CentraSite as the source, under Select Web Service from CentraSite,
select the service asset in CentraSite that you want to use to create the web service
descriptor. Click Next.
Designer filters the contents of the Services folder to display only service assets that
are web services.
If Designer is not configured to connect to CentraSite, Designer displays the
CentraSite > Connections preference page and prompts you to configure a connection
to CentraSite.
9. If you selected File/URL as the source, do one of the following:
Enter the URL for the WSDL document. The URL should begin with http:// or
https://. Click Next.
Click Browse to navigate to and select a WSDL document on your local file
system. Click Next.
10. If you selected UDDI as the source, under Select Web Service from UDDI Registry, select
the web service from the UDDI registry. Click Next.
If Designer is not currently connected to a UDDI registry, the Open UDDI Registry
Session dialog box appears. Enter the details to connect to the UDDI registry and
click Finish.
11. Under Content model compliance, select one of the following to indicate how strictly
Integration Server enforces content model compliance when creating IS document
types from the XML Schema definition in the WSDL document.
Select... To...
12. Select the Enable MTOM streaming for elements of type base64Binary check box if you want
elements declared to be of type base64Binary in the WSDL or schema to be enabled
for streaming of MTOM attachments. For more information about MTOM streaming
for web services, see the Web Services Developer’s Guide.
13. If you want Integration Server to use the Xerces Java parser to validate any schema
elements in the WSDL document or any referenced XML Schema definitions before
creating the web service descriptor, select the Validate schema using Xerces check box.
14. Under Enforce WS-I Basic Profile 1.1 compliance do one of the following:
Select Yes if you want Designer to validate all the WSD objects and properties
against the WS-I requirements before creating the WSD.
Select No if you do not want Designer to enforce compliance for WS-I Basic
Profile 1.1.
Note: WS-I Basic Profile 1.0 supports only HTTP or HTTPS bindings.
Consequently, WS-I compliance cannot be enforced if the WSDL contains a
SOAP over JMS binding.
15. Click Next if you want to specify different prefixes than those specified in the XML
schema definition. If you want to use the prefixes specified in the XML schema
definition itself, click Finish.
16. On the Assign Prefixes panel, if you want the web service descriptor to use different
prefixes than those specified in the XML schema definition, select the prefix you
want to change and enter a new prefix. Repeat this step for each namespace prefix
that you want to change.
Note: The prefix you assign must be unique and must be a valid XML NCName
as defined by the specification http://www.w3.org/TR/REC-xml-names/
#NT-NCName.
Note: If you create the web services descriptor using the earlier version of the
web services stack, the Pre-8.2 compatibility mode property will be set to true
for the resulting web service descriptor.
Integration Server does not create a provider web service descriptor if the WSDL
document contains any bindings that are not supported by Integration Server.
Integration Server will create duplicate operations in case the WSDL document has
multiple port names for the same binding. To ensure that duplicate operations are
not created, modify the WSDL to make the port name unique for each binding.
When creating the binders for a WSDL first provider web service descriptor
generated from a WSDL document with an HTTP or HTTPS binding, Integration
Server assigns the default provider endpoint alias for HTTP or HTTPS to the binder.
Integration Server uses the information from the default provider endpoint alias
during WSDL generation and run-time processing. Integration Server determines
whether to use the HTTP or HTTPS default provider endpoint alias by selecting
the default alias for the protocol specified in the location attribute of the
soap:address element within the wsdl:port element. If a default provider endpoint alias is not specified for the
protocol used by the binding in the WSDL document, Integration Server uses its
own hostname as the host and the primary port as the port. If the binding transport
protocol is not the same as the primary port protocol, the web service descriptor
has a protocol mismatch that you must resolve before making a WSDL generated
from the descriptor available to consumers. For more information about a protocol
mismatch, see Protocol Mismatch Between Transport and Primary Port.
Note: The default provider endpoint alias also determines security, WS-
Addressing, and WS-Reliable Messaging information for the web service
descriptor and resulting WSDL document.
Integration Server uses the internal schema parser to validate the XML schema
definition associated with the WSDL document. If you selected the Validate schema
using Xerces check box, Integration Server also uses the Xerces Java parser to validate
the XML Schema definition. With either parser, if the XML Schema does not
conform syntactically to the schema for XML Schemas defined in XML Schema Part
1: Structures (which is located at http://www.w3.org/TR/xmlschema-1), Integration
Server does not create an IS schema or an IS document type for the web service
descriptor. Instead, Designer displays an error message that lists the number, title,
location, and description of the validation errors within the XML Schema definition.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
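For example, Perl-style constructs such as lookahead are accepted by a Perl5-compatible compiler but are not defined by the W3C XML Schema regular expression grammar. Python's re module, which supports Perl-style syntax, illustrates the difference; the pattern below is an arbitrary example, not taken from the product:

```python
import re

# Lookahead (?=...) is valid Perl5 syntax, but it is not part of the
# W3C XML Schema pattern facet grammar.
perl_style = re.compile(r"\d+(?=px)")

assert perl_style.match("12px") is not None   # digits followed by "px"
assert perl_style.match("12em") is None       # lookahead fails
```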
When creating the document types for the provider web service descriptor,
Integration Server registers each document type with the complex type definition
from which it was created in the schema. This enables Integration Server to provide
derived type support for document creation and validation.
If you selected strict compliance and Integration Server cannot represent the content
model in the complex type accurately, Integration Server does not generate any IS
document types or the web service descriptor.
The contents of an IS document type with a Model type property value other than
“Unordered” cannot be modified.
For an IS document type from a WSDL document, Designer displays the location
of the WSDL in the Source URI property. Designer also sets the Linked to source
property to true which prevents any editing of the document type contents. To edit
the document type contents, you first need to make the document type editable by
breaking the link to the source. However, Software AG does not recommend editing
the contents of document types created from WSDL documents.
If the source WSDL document is annotated with WS-Policy:
Integration Server enforces the annotated policy at run time. However, if you
attach a policy from the policy repository to the web service descriptor, the
attached policy will override the original annotated policy.
Integration Server will only enforce supported policy assertions in the annotated
policy. For information about supported assertions, see the Web Services
Developer’s Guide.
Integration Server does not save the annotated policy in the policy repository.
The Message Exchange Pattern (MEP) that Integration Server uses for an operation
defined in the WSDL can be In-Out MEP, In-Only MEP, or Robust In-Only MEP.
Integration Server always uses In-Out MEP when the web service descriptor’s
Pre-8.2 compatibility mode property is set to true. When this property is set to false,
Integration Server uses:
In-Out MEP when an operation has defined input and output.
In-Only MEP when an operation has no defined output and no defined fault.
Robust In-Only MEP when an operation has no defined output, but does have a
defined fault.
For more information about Integration Server MEP support, see the Web Services
Developer’s Guide.
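The MEP selection rules above can be condensed into a small sketch (Python, for illustration only):

```python
def message_exchange_pattern(pre_82_compat, has_output, has_fault):
    """Choose the MEP for a WSDL operation per the rules above."""
    if pre_82_compat:
        return "In-Out"          # compatibility mode always uses In-Out
    if has_output:
        return "In-Out"          # defined input and output
    if has_fault:
        return "Robust In-Only"  # no output, but a defined fault
    return "In-Only"             # no output and no fault

assert message_exchange_pattern(True, False, False) == "In-Out"
assert message_exchange_pattern(False, True, True) == "In-Out"
assert message_exchange_pattern(False, False, True) == "Robust In-Only"
assert message_exchange_pattern(False, False, False) == "In-Only"
```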
If the WSDL is annotated with WS-Policy, Integration Server will only enforce
supported policy assertions. Currently Integration Server supports only WS-Security
policies. Also be aware that Integration Server does not save the WS-Policy that is
in the WSDL in the policy repository. Integration Server will enforce the annotated
policy unless a policy that resides in the Integration Server policy repository is
specifically attached to the web service descriptor. If you attach a policy to the web
service descriptor, the attached policy will override the original annotated policy.
Integration Server creates the docTypes and services folders to store the IS document
types, IS schemas, and skeleton services generated from the WSDL document.
These folders are reserved for elements created by Integration Server for the web
service descriptor only. Do not place any custom IS elements in these folders. During
refresh of a web service descriptor, the contents of these folders will be deleted and
recreated.
If an XML Schema definition referenced in the WSDL document contains the
<!DOCTYPE declaration, Integration Server issues a java.io.FileNotFoundException.
To work around this issue, remove the <!DOCTYPE declaration from the XML
Schema definition.
When creating a WSDL first provider web service descriptor from an XML
Schema definition that imports multiple schemas from the same target namespace,
Integration Server throws Xerces validation errors indicating that the element
declaration, attribute declaration, or type definition cannot be found. The Xerces Java
parser honors the first <import> and ignores the others. To work around this issue,
you can do one of the following:
Combine the schemas from the same target namespace into a single XML Schema
definition. Then change the XML schema definition to import the merged schema
only.
When creating the WSDL first provider web service descriptor, clear the Validate
schema using Xerces check box to disable schema validation by the Xerces Java
parser. When generating the web service descriptor, Integration Server will
not use the Xerces Java parser to validate the schemas associated with the XML
Schema definition.
When Integration Server executes a web service connector, the web service connector
calls a specific operation of a web service.
In versions 9.0 and later, Integration Server also creates a response service for each
operation in the WSDL document. Response services are flow services to which you can
add custom logic to process asynchronous SOAP responses. For more information about
response services, see "About Response Services" on page 815.
7. If you specified CentraSite as the source, under Select web service from CentraSite,
select the service asset in CentraSite that you want to use to create the web service
descriptor. Click Next.
Designer filters the contents of the Services folder to display only service assets that
are web services.
If Designer is not configured to connect to CentraSite, Designer displays the
CentraSite > Connections preference page and prompts you to configure a connection
to CentraSite.
8. If you specified File/URL as the source, do one of the following:
Enter the URL for the WSDL document. The URL should begin with http:// or
https://. Click Next.
Click Browse to navigate to and select a WSDL document on your local file
system. Click Next.
9. If you specified UDDI as the source, under Select web service from UDDI Registry, select
the web service from the UDDI registry. Click Next.
If Designer is not currently connected to a UDDI registry, the Open UDDI Registry
Session dialog box appears. Enter the details to connect to the UDDI registry and
click Finish.
10. Under Content model compliance, select one of the following to indicate how strictly
Integration Server enforces content model compliance when creating IS document
types from the XML Schema definition in the WSDL document.
Select... To...
11. Under Document type generation, select the Enable MTOM streaming for elements of type
base64Binary check box if you want elements declared to be of type base64Binary in
the WSDL or schema to be enabled for streaming of MTOM attachments. For more
information about MTOM streaming for web services, see the Web Services Developer’s
Guide.
12. If you want to use the Xerces Java parser to validate any schema elements in the
WSDL document or any referenced XML Schema definitions before creating the web
service descriptor, select the Validate schema using Xerces check box.
Note: Integration Server uses an internal schema parser to validate the schemas
in or referenced by a WSDL document. However, the Xerces Java parser
provides stricter validation than the Integration Server internal schema
parser. As a result, some schemas that the internal schema parser considers
to be valid might be considered invalid by the Xerces Java parser. While
validation by the Xerces Java parser can increase the time it takes to
create a web service descriptor and its associated elements, using stricter
validation can help ensure interoperability with other web service vendors.
13. Under Enforce WS-I Basic Profile 1.1 compliance do one of the following:
Select Yes if you want Designer to validate all the WSD objects and properties
against the WS-I requirements before creating the WSD.
Select No if you do not want Designer to enforce compliance for WS-I Basic
Profile 1.1.
Note: WS-I Basic Profile 1.0 supports only HTTP or HTTPS bindings.
Consequently, WS-I compliance cannot be enforced if the WSDL contains a
SOAP over JMS binding.
14. Click Next if you want to specify different prefixes than those specified in the XML
schema definition. If you want to use the prefixes specified in the XML schema
definition itself, click Finish.
15. On the Assign Prefixes panel, if you want the web service descriptor to use different
prefixes than those specified in the XML schema definition, select the prefix you
want to change and enter a new prefix. Repeat this step for each namespace prefix
that you want to change.
Note: The prefix you assign must be unique and must be a valid XML NCName
as defined by the specification http://www.w3.org/TR/REC-xml-names/
#NT-NCName.
Note: If you create the web services descriptor using the earlier version of the
web services stack, the Pre-8.2 compatibility mode property will be set to true
for the resulting web service descriptor.
Integration Server does not create binders for unsupported bindings in the WSDL
document. If the WSDL document does not contain any bindings supported by
Integration Server, Integration Server does not create a consumer web service
descriptor.
When creating the document types for the consumer web service descriptor,
Integration Server registers each document type with the complex type definition
from which it was created in the schema. This enables Integration Server to provide
derived type support for document creation and validation.
Integration Server uses the internal schema parser to validate the XML schema
definition associated with the WSDL document. If you selected the Validate schema
using Xerces check box, Integration Server also uses the Xerces Java parser to validate
the XML Schema definition. With either parser, if the XML Schema does not
conform syntactically to the schema for XML Schemas defined in XML Schema Part
1: Structures (which is located at http://www.w3.org/TR/xmlschema-1), Integration
Server does not create an IS schema or an IS document type for the web service
descriptor. Instead, Designer displays an error message that lists the number, title,
location, and description of the validation errors within the XML Schema definition.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations
for this version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the
World Wide Web Consortium for the XML Schema standard. As a result, in XML
schema definitions consumed by Integration Server, the pattern constraining facet
must use valid Perl regular expression syntax. If the supplied pattern does not use
proper Perl regular expression syntax, Integration Server considers the pattern to be
invalid.
If you selected strict compliance and Integration Server cannot represent the content
model in the complex type accurately, Integration Server does not generate any IS
document types or the web service descriptor.
For an IS document type from a WSDL document, Designer displays the location
of the WSDL in the Source URI property. Designer also sets the Linked to source
property to true which prevents any editing of the document type contents. To edit
the document type contents, you first need to make the document type editable by
breaking the link to the source. However, Software AG does not recommend editing
the contents of document types created from WSDL documents.
The contents of an IS document type with a Model type property value other than
“Unordered” cannot be modified.
Operations and binders cannot be added, edited, or removed from a consumer web
service descriptor.
The Message Exchange Pattern (MEP) that Integration Server uses for an operation
defined in the WSDL can be In-Out MEP, In-Only MEP, or Robust In-Only MEP.
Integration Server always uses In-Out MEP when the web service descriptor’s Pre-8.2
compatibility mode property is true. When this property is false, Integration Server
uses:
In-Out MEP when an operation has defined input and output.
In-Only MEP when an operation has no defined output and no defined fault.
The web service connector that Integration Server creates will have no SOAP message-
related output parameters and, when executed, will not return output related to
a SOAP response.
Robust In-Only MEP when an operation has no defined output, but has a defined
fault. The web service connector that Integration Server creates will return
no output related to a SOAP response if the operation executes successfully.
However, if an exception occurs, the web service connector returns the SOAP
fault information as output.
For more information about Integration Server MEP support, see the Web Services
Developer’s Guide.
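The selection rules above can be sketched as a small decision function; the enum and method names below are illustrative only and are not part of any Integration Server API.

```java
// Sketch of the MEP selection rules for an operation in the WSDL document.
public class MepSelector {
    enum Mep { IN_OUT, IN_ONLY, ROBUST_IN_ONLY }

    static Mep choose(boolean pre82Mode, boolean hasOutput, boolean hasFault) {
        if (pre82Mode) return Mep.IN_OUT;          // Pre-8.2 compatibility: always In-Out
        if (hasOutput) return Mep.IN_OUT;          // defined input and output
        if (hasFault)  return Mep.ROBUST_IN_ONLY;  // no output, but a defined fault
        return Mep.IN_ONLY;                        // no output and no fault
    }

    public static void main(String[] args) {
        System.out.println(choose(false, true, false));  // IN_OUT
        System.out.println(choose(false, false, true));  // ROBUST_IN_ONLY
        System.out.println(choose(false, false, false)); // IN_ONLY
    }
}
```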
Integration Server creates response services for all In-Out and Robust In-Only MEP
operations in the WSDL document.
When creating a web service descriptor from a WSDL document, Integration Server
treats message parts that are defined by the type attribute instead of the element
attribute as an error and does not allow the web service descriptor to be created. You
can change this behavior by setting the watt.server.SOAP.warnOnPartValidation
parameter to true. When this parameter is set to true, Integration Server returns
a warning instead of an error and allows the web service descriptor to be created.
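For example, the parameter can be set on the Settings > Extended page of Integration Server Administrator or directly in server.cnf; the snippet below is a sketch of the property line, assuming the standard server.cnf property format.

```properties
# Return a warning instead of an error for message parts defined by the
# type attribute, and allow the web service descriptor to be created.
watt.server.SOAP.warnOnPartValidation=true
```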
If the WSDL document is annotated with WS-Policy:
Integration Server enforces the annotated policy at run time. However, if you
attach a policy from the policy repository to the web service descriptor, the
attached policy will override the original annotated policy.
Integration Server will only enforce supported policy assertions in the annotated
policy. Currently Integration Server supports only WS-Security policies.
Integration Server does not save the annotated policy in the policy repository.
If an XML Schema definition referenced in the WSDL document contains the
<!DOCTYPE declaration, Integration Server issues a java.io.FileNotFoundException.
To work around this issue, remove the <!DOCTYPE declaration from the XML
Schema definition.
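For illustration, a hypothetical schema file showing the line to remove (the DTD name and namespace are made up for this example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Remove the following DOCTYPE declaration before using the schema: -->
<!DOCTYPE xs:schema SYSTEM "schema.dtd">
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/orders">
  <xs:element name="order" type="xs:string"/>
</xs:schema>
```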
When creating a consumer web service descriptor from an XML Schema definition
that imports multiple schemas from the same target namespace, Integration
Server throws Xerces validation errors indicating that the element declaration,
attribute declaration, or type definition cannot be found. The Xerces Java parser
honors the first <import> and ignores the others. To work around this issue, you can
do one of the following:
Combine the schemas from the same target namespace into a single XML Schema
definition. Then change the XML schema definition to import the merged schema
only.
When creating the consumer web service descriptor, clear the Validate schema
using Xerces check box to disable schema validation by the Xerces Java parser.
When generating the web service descriptor, Integration Server will not use the
Xerces Java parser to validate the schemas associated with the XML Schema
definition.
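For example, assuming two hypothetical schema files a.xsd and b.xsd that share the target namespace http://example.com/common, you would merge their contents into a single merged.xsd and declare only one import for that namespace:

```xml
<!-- After merging a.xsd and b.xsd into merged.xsd, the importing schema
     (or the WSDL types section) declares a single import per namespace: -->
<xs:import namespace="http://example.com/common"
           schemaLocation="merged.xsd"/>
```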
Note: The consumerWSDName folder and its subfolders docTypes, connectors, and
responseServices are reserved for elements created by Integration Server for
the web service descriptor only. Do not place any custom IS elements in these
folders.
Contains flow steps that create and send a message to the web service endpoint
using the transport, protocol, and location information specified in the web service’s
WSDL document in conjunction with input supplied to the web service connector.
Contains flow steps that extract data or fault information from the response message
returned by the web service.
Note: A web service connector that worked correctly with previous versions of
Developer, Designer, and Integration Server should continue to work with
version 8.2 and later. In addition, any external clients created from WSDL
generated from previous versions of Developer and Integration Server should
continue to work as they did in the previous version.
For detailed information about a web service connector, such as a description of the web
service connector signature, see the Web Services Developer’s Guide.
If the Validate Schema using Xerces property is set to true for a web service descriptor,
Integration Server validates the schemas associated with a consumer web service
descriptor when you refresh the web service connector.
Refreshing web service connectors is different than refreshing a web service
descriptor. When refreshing web service connectors, Integration Server uses the
original WSDL document to recreate the web service connectors and the contents
of the consumerWSDName folder. When refreshing a web service descriptor,
Integration Server uses an updated version of the WSDL document to regenerate the
web service descriptor and its associated IS elements. For more information about
refreshing web service descriptors, see "About Refreshing a Web Service Descriptor"
on page 816.
If you are using the local service development feature, using versions of Subversion
prior to 1.7 as your VCS client might cause issues while refreshing web service
connectors. Software AG recommends that you use Subversion 1.7 or higher as your
VCS client.
Note: If the web service connector uses a JMS binding to send a message using
SOAP over JMS, you can specify how Integration Server proceeds when
the JMS provider is not available at the time the message is sent. For more
information, see "Configuring Use of the Client Side Queue" on page 844.
logic to process asynchronous SOAP responses. Integration Server creates the response
services only if the consumer web service descriptor:
Is created on Integration Server version 9.0 or later.
Has the Pre-8.2 compatibility mode property set to false.
Integration Server invokes the response services for processing asynchronous SOAP
responses received for the associated consumer web service descriptor. That is,
Integration Server invokes a response service when Integration Server receives a SOAP
response with the endpoint URL pointing to a consumer web service descriptor and
if this SOAP response contains a WS-Addressing action through which the response
service can be resolved.
The responseServices folder also contains a genericFault_Response service, which is the
default response service that Integration Server invokes when Integration Server cannot
determine the specific response service for a SOAP response or if there are errors while
processing the response.
For more information about response services and how Integration Server processes
responses asynchronously, see the Web Services Developer’s Guide.
the consumerWSDName folder. For more information about refreshing web service
connectors, see "Refreshing a Web Service Connector" on page 814.
The following table provides an overview of the activities involved in refreshing a web
service descriptor.
Step Description
1 You select the web service descriptor that you want to refresh.
You can refresh WSDL first provider web service descriptors or
consumer web service descriptors created on Integration Server
version 7.1 or later.
Note: Service first provider web service descriptors are not created
from a WSDL document and therefore cannot be refreshed.
IS document type Deletes and recreates the IS document types or IS schemas
that were generated from the WSDL document.
Service Does one of the following for the skeleton services generated
for operations in the original WSDL document:
If logic has been added to the skeleton service or service
properties have been set and the corresponding operation
exists in the updated WSDL document, Integration Server
merges the logic into the refreshed service and ensures that
the property values match the values set prior to refreshing.
If logic has not been added to the skeleton service, service
properties have not been set, and the corresponding
operation exists in the updated WSDL document,
Integration Server recreates the empty skeleton service.
If a service corresponds to an operation that does not
exist in the updated WSDL document, Integration Server
removes the operation that corresponds to the service from
the web service descriptor. Integration Server keeps the
service in the “services” folder.
Web service connector Deletes and recreates all web service connectors. Any
changes made to a web service connector, including changes for pipeline
mapping, will be lost.
connectors folder Deletes the connectors folder and all elements contained in
that folder and its subfolders. Integration Server will not
recreate any elements manually added to the folder or its
subfolders.
docTypes folder Deletes the docTypes folder and all elements contained in
that folder and its subfolders. Integration Server will not
recreate any elements manually added to the folder or its
subfolders.
services folder Does the following with the contents of the services folder:
Adds new skeleton services for new operations in the
updated WSDL document.
For modified operations, updates the skeleton services
including merging in any logic that was added. For more
information, see the “Service” row in this table.
For deleted operations, Integration Server removes the
operation that corresponds to the service from the web
service descriptor. Integration Server keeps the service in
the “services” folder.
The following table provides details about how refreshing a web service descriptor
affects the contents of the web service descriptor itself.
not execute to completion successfully, and other issues that need to be resolved. Before
refreshing a web service descriptor, review the following considerations.
Refresh a web service descriptor only if you are familiar with the original WSDL
document, the changes in the updated WSDL document, and the web service
descriptor. Designer does not provide a list of changes to the web service descriptor
as part of the refresh. You will need to use your knowledge of the WSDL document
changes and the web service descriptor to ensure that operations, services, pipeline
mapping, and other aspects of the web service descriptor work as expected.
During refresh, mappings between variables might break or be lost. This is
particularly true when the web service descriptor has manually added headers
or faults and the updated WSDL document has new headers or faults of the same
name.
During refresh of a consumer web service descriptor, Integration Server deletes and
recreates the contents of the consumerWSDName folder. This includes all of the document
types, Integration Server schemas, and web service connectors generated from the
original WSDL document. Any changes made to these elements will be lost. For
web service connectors, this includes maps (links) between variables in the pipeline,
variables added to the pipeline, variables dropped from the pipeline, and values
assigned to pipeline variables.
During refresh of a WSDL first provider web service descriptor, Integration Server
deletes and recreates the contents of the docTypes folder. Changes made to the IS
document types and IS schemas generated from the original WSDL document will
be lost.
Because Integration Server deletes and recreates the contents of the
consumerWSDName folder, docTypes folder, connectors folder, and services folder
during refresh, do not place any custom elements in these folders. These folders are
reserved for elements created by Integration Server for the web service descriptor
only. Before refreshing a web service descriptor, remove any custom elements from
these folders.
If you used an IS element created by Integration Server for the web service descriptor
with another IS element that is not associated with the web service descriptor,
refreshing the web service descriptor might break the other usages of the IS element.
For example, suppose that you used an IS document type created for an input
message as the input signature of a service not used as an operation in the web
service descriptor. If the input message is removed from the updated WSDL
document upon refresh, the other service will have a broken reference. The service
will reference a document type that no longer exists.
If you refresh a WSDL first provider web service descriptor for which web service
clients have already been created, the web service clients will need to be recreated.
Consumers will need to recreate their web service client using the new WSDL
document that Integration Server generates for the provider web service descriptor.
During refresh, Integration Server regenerates the web service descriptor using
the functionality and features available in the Integration Server version on which
the web service descriptor was originally created. After refreshing the web service
descriptor, the Created on version property value is the same version of Integration
Server as before the refresh. Refreshing a web service descriptor on the latest version
of Integration Server does not update the web service descriptor to include all the
web service features and functionality available in the current version of Integration
Server. If you want the web service descriptor to use the features available with the
current version of Integration Server, delete the web service descriptor and recreate it
using Designer and the current version of Integration Server.
If you are using the local service development feature, using versions of Subversion
prior to 1.7 as your VCS client might cause issues while refreshing web service
connectors. Software AG recommends that you use Subversion 1.7 or higher as your
VCS client.
797. For information about prerequisites for creating a consumer web service
descriptor, see "Creating a Consumer Web Service Descriptor" on page 805.
Refreshing a web service descriptor is different than refreshing web service
connectors. For more information about refreshing web service connectors, see
"Refreshing a Web Service Connector" on page 814.
Before refreshing a web service descriptor, review the information in
"Considerations for Refreshing a Web Service Descriptor" on page 822.
6. Click Next.
7. If you selected CentraSite as the source, under Select Web Service from CentraSite,
select the service asset in CentraSite that you want to use to create the web service
descriptor. Click Next.
Designer filters the contents of the Services folder to display only service assets that
are web services.
Note: The prefix you assign must be unique and must be a valid XML NCName
as defined by the specification http://www.w3.org/TR/REC-xml-names/
#NT-NCName.
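As a sketch of what the NCName restriction means in practice, the check below uses a simplified ASCII-only pattern; the full NCName grammar in the XML Namespaces specification also allows many non-ASCII letters, so treat this as an approximation rather than a complete validator, and note that the class and method names are made up.

```java
import java.util.regex.Pattern;

// Illustrative, ASCII-only approximation of the NCName production:
// a letter or underscore, followed by letters, digits, '.', '_', or '-'.
// Colons are never allowed in an NCName.
public class NcNameCheck {
    private static final Pattern ASCII_NCNAME =
        Pattern.compile("[A-Za-z_][A-Za-z0-9._\\-]*");

    static boolean isAsciiNcName(String prefix) {
        return ASCII_NCNAME.matcher(prefix).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAsciiNcName("myPrefix"));   // true
        System.out.println(isAsciiNcName("1prefix"));    // false: cannot start with a digit
        System.out.println(isAsciiNcName("my:prefix"));  // false: no colons in an NCName
    }
}
```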
For a consumer web service descriptor, the generated WSDL will always contain the
original annotated policy from the source WSDL document.
Note: The WS-I profile addresses only the SOAP 1.1 protocol.
To change the target namespace for a service first provider web service descriptor
1. In Package Navigator view, open and lock the service first provider WSD for which
you want to change the target namespace.
2. In the Properties view, in the Target namespace field, specify the URL that you want
to use as the target namespace for elements, attributes, and type definitions in the
WSDL generated for this provider WSD.
3. Click File > Save.
specifies the field size, in kilobytes, that determines whether Integration Server sends
base64binary encoded data in an outbound SOAP message as a MIME attachment
or whether it sends it inline in the SOAP message. For more information about this
property, see webMethods Integration Server Administrator’s Guide.
If you want to stream MTOM attachments, you must perform additional configuration. For
more information about the configuration required to enable MTOM streaming, see the
Web Services Developer’s Guide.
For detailed information about the content and structure of the soapHeaders document
that Integration Server adds to the pipeline, see Web Services Developer’s Guide.
Note: Integration Server uses Xerces Java parser version J-2.11.0. Limitations for this
version are listed at http://xerces.apache.org/xerces2-j/xml-schema.html.
When validating XML schema definitions, Integration Server uses the Perl5 regular
expression compiler instead of the XML regular expression syntax defined by the World
Wide Web Consortium for the XML Schema standard. As a result, in XML schema
definitions consumed by Integration Server, the pattern constraining facet must use
valid Perl regular expression syntax. If the supplied pattern does not use proper Perl
regular expression syntax, Integration Server considers the pattern to be invalid.
Note: Integration Server uses the internal schema processor to validate the
schemas at this point as well.
You select an element declaration from an XML Schema definition to use as the input
or output signature of a 6.5 SOAP-MSG style operation. For more information about
using 6.5 SOAP-MSG style services as operations, see "Using a 6.5 SOAP-MSG Style
Service as an Operation" on page 848.
You refresh the web service connectors for a consumer web service descriptor.
Integration Server sets the Validate Schema using Xerces property to true for all new web
service descriptors. If you migrated a web service descriptor from a previous version of
Integration Server, the migration utility set the value based on the version of Integration
Server from which the web service descriptor was migrated.
If the web service descriptor was migrated from Integration Server version 7.1.x, the
migration utility set the Validate Schema using Xerces property to true.
If the web service descriptor was migrated from Integration
Server version 8.x, the migration utility used the value of the
watt.server.wsdl.validateWSDLSchemaUsingXerces parameter to determine the
value of the Validate Schema using Xerces property. If the parameter was set to true,
the migration utility set the property to true. If the parameter was set to false, the
migration utility set the property to false.
service descriptor if the WSDL document contains bindings with different use values
or operations with different use values. Integration Server throws the following
exception:
[ISS.0085.9285] Bindings or operations with mixed "use" are not
supported.
Note: The restriction on mixed binding styles across binders does not apply to web
service descriptors that run in pre-8.2 compatibility mode.
SOAP Version Whether SOAP messages for this web service should use SOAP
1.1 or SOAP 1.2 message format.
Transport The transport protocol used to access the web service. Select
one of the following:
HTTP
HTTPS
JMS
Use and Style for Operations The style/use for operations in the provider web service
descriptor. Select one of the following:
Document - Literal
RPC - Literal
RPC - Encoded
Endpoint The address at which the web service can be invoked. Do one
of the following:
To use a provider web service endpoint alias to specify the
address, select the Alias option. Then, in the Alias list, select
the provider web service endpoint alias.
Select DEFAULT(aliasName) if you want to use the
information in the default provider web service endpoint
alias for the address. If the Alias list includes a blank row,
Integration Server does not have a default provider web
service endpoint alias for the protocol.
Note: You can only specify Host and Port for the endpoint
if a default provider endpoint alias does not exist
for the selected protocol. When a default alias exists,
4. Click OK. Designer adds the new binder to the Binders tab.
5. Click File > Save.
Notes:
If you specify HTTP or HTTPS as the transport, but do not specify a host, port, or
provider web service endpoint alias and there is not a default provider endpoint
alias for the transport protocol, Integration Server uses the primary port as the port
in the endpoint URL. If the selected transport and the protocol of the primary port
do not match, web service clients will not execute successfully. For more information
see "Protocol Mismatch Between Transport and Primary Port" on page 796.
You can change the default name that Designer assigns to the binder. You can
rename the binder by changing the value of the Binder name property or by selecting
the new binder, right-clicking it, and selecting Rename.
2. In the Binders tab, select the binder containing the operation for which you want to
edit the SOAP action.
3. In the Properties view, next to the SOAP action property, click the browse button.
Designer displays the SOAP Action dialog box which identifies the SOAP action
string associated with each operation in the selected binder.
4. For the operation whose SOAP action you want to change, enter the new SOAP
action value in the SOAP Action column. Make sure that the new SOAP Action value
is unique across the web service descriptor.
5. Click OK.
Designer applies the SOAP action change to the operation in this binder only.
6. Click File > Save.
protocol matches the binder protocol. If there have been changes to the web service
endpoint aliases since you connected Designer to Integration Server, use Designer to
refresh the connection to Integration Server.
If this is a provider web service and the binder protocol is HTTP or HTTPS, you can
assign the default provider endpoint alias to the binder. Select DEFAULT(aliasName)
if you want to use the information in the default provider web service endpoint alias
for the address. If the Alias list includes a blank row, Integration Server does not have
a default provider web service endpoint alias for the protocol.
Note: If you select the blank row and a default provider endpoint alias is later
set for the selected protocol, Integration Server then uses the information
from the alias when constructing the WSDL document and during run-
time processing.
Keep the following points in mind when enabling use of the client side queue for a JMS
binder:
The client side queue associated with the JMS binder is determined by the JMS
connection alias in the consumer web service endpoint alias for the binder. The
maximum size of the client side queue must be greater than zero. If the JMS
connection alias sets the size of the client side queue to zero (Maximum Queue Size is
set to 0), the client side queue is effectively disabled. Integration Server will not write
messages to a client side queue that has a maximum size of 0 messages. For more
information about configuring a JMS connection alias, see webMethods Integration
Server Administrator’s Guide.
The client side queue can be used with web service connectors for In-Only and In-
Out operations. For an In-Out operation, the reply to destination for the web service
must be a non-temporary queue.
To configure the use of the client side queue for a JMS binder
1. In the Package Navigator view in the Service Development perspective, open
and lock the web service descriptor containing the binder for which you want to
configure the use of the client side queue.
2. In the Binders tab, select the JMS binder for which you want to configure the use of
the client side queue.
3. In the Properties view, next to the Use CSQ property, select True to enable use of the
client side queue. If you do not want Integration Server to use the client side queue
for JMS messages sent using the binding represented by this binder, select False.
4. Click File > Save.
A header element defines the format of the SOAP headers that may be present in a SOAP
message (request or response). Headers are optional and can be added to or deleted
from any web service descriptor.
A fault element provides a definition for a SOAP fault (that is, the response returned to
the sender when an error occurs while processing the SOAP message). Fault elements
are optional and can be added to or deleted from any web service descriptor.
Adding Operations
When you add operations to a service first provider WSD, the operations are also added
to every binder in the WSD. The values defined by a specific binder will apply to the
operation.
Note: You can add operations to a service first provider WSD only.
The specified operations are added to the provider WSD. The operations appear
in the Operations tab and are also added to each binder contained in the provider
WSD.
If a service signature does not meet the style/use signature requirements established
by the existing binder, Designer does not add the service as an operation.
Designer adds the new operation to all binders in the web service descriptor.
4. Click File > Save.
If the operation already exists in the web service descriptor, Designer adds it as a
copy and appends “_n” to its name, where n is an incremental number.
Tip: You can also add operations by selecting one or more services in Package
Navigator view and dragging them into the Operations tab.
To copy or move an existing operation from one provider web service descriptor to another
1. In Package Navigator view, open and lock the provider WSD that contains the
operation you want to copy or move.
2. In the Operations tab, select one or more operations. Click the Cut or Copy button
on the web service descriptor editor toolbar.
3. In Package Navigator view, open and lock the provider WSD into which you want to
paste the cut or copied operations (the target provider WSD).
4. In the Operations tab of the target WSD, click the Paste button on the web service
descriptor editor toolbar.
5. Click File > Save.
Designer adds the specified operations to the provider WSD. Designer also adds the
operations to all binders in the target web service descriptor exactly as they existed
in the source web service descriptor. The binder values for each individual binder
apply to the operations within the binders.
If the operation being added already exists in the provider WSD, Designer adds it as
a copy and appends “_n” to its name, where n is an incremental number.
Any header handler processing that changes the SOAP message and occurs before
service invocation affects the SOAP message passed to the service. Note that 6.5
SOAP-MSG style services expect the SOAP message to be in a certain format.
Specifically, any changes to the SOAP body might affect the ability of the 6.5 SOAP-
MSG style service to process the request.
When a 6.5 SOAP-MSG style service is added as an operation, you can add fault
processing to the operation response. For fault processing to work, you need to
modify the 6.5 SOAP-MSG style service to detect a Fault condition, add Fault output
data to the pipeline, and drop the SOAP response message (soapResponseData object)
from the pipeline.
4. In the Properties view, next to the Signature field, click the browse button.
5. In the Modify I/O Signature dialog box, do one of the following:
Select... To...
Original IS service Use the input or output signature from the originating IS
service as the input or output signature. This is the default.
Deleting Operations
Keep the following points in mind when deleting operations from a web service
descriptor:
You can delete operations from a service first provider WSD only.
When you delete an operation on the Operations tab, Designer removes the
operation from all the binders in the provider WSD.
If you delete an operation from within a binder (that is, you delete the operation in
the Binders tab), any other instances of that operation in other binders remain in the
web service descriptor. If an operation exists in only one binder and is deleted from
that binder, the operation is removed from the web service descriptor.
Note: Integration Server considers all of the headers defined in a web service
descriptor to be required. If the header does not exist in the SOAP message at
run time, Integration Server throws an error.
Although failing when a required header is missing is the correct behavior,
Integration Server provides a configuration property to control whether
missing required headers in a SOAP response results in an error. If you do not
want Integration Server to throw an error in case of missing required headers,
set the watt.server.SOAP.ignoreMissingResponseHeader server configuration
parameter to true.
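For example, the parameter can be set on the Settings > Extended page of Integration Server Administrator or directly in server.cnf; the snippet below is a sketch of the property line, assuming the standard server.cnf property format.

```properties
# Do not throw an error when a required header is missing
# from a SOAP response.
watt.server.SOAP.ignoreMissingResponseHeader=true
```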
An IS document type used as a header or fault for an operation with a binding style/
use of RPC/Encoded cannot contain fields named *body or @attribute fields (fields
starting with the “@” symbol).
You must set up a package dependency if you use an IS document type from a
different package as a header.
A header must have a registered header handler. However, you can add the header
to an operation and register a header handler for it later. A header without a handler
will be ignored or will cause the request to fail (depending on whether the Must
Understand property for the header is set to False or True).
After a header handler is registered in Integration Server, the IS document types
associated with the handler will be listed in the selection dialog box that is displayed
when you add a header. For more information about registering handlers, see the
Web Services Developer’s Guide.
The WS Security Handler does not expose supported headers.
If you add a response header to an operation that uses an In-Only Message Exchange
Pattern (MEP), the MEP will change to In-Out MEP. For more information about
message exchange patterns, see the Web Services Developer’s Guide.
You can also add headers to an operation by dragging IS document types from the
Package Navigator view to the Operations tab.
Integration Server considers all of the headers defined in a web service descriptor to
be required.
Important: When you add a header (or a fault) to a consumer web service descriptor,
you must refresh the web service connector(s). See "Refreshing a Web
Service Connector" on page 814.
Note: If there is a top-level instance document for the fault, in addition to the
one in the $fault/detail variable, Integration Server ignores the top-level
document.
Specify a fault with a structure that was not previously defined using a fault
element. Optionally, override the fault reasons, code, subcodes, node and/or role that
Integration Server generates.
Although you can identify the structure of SOAP faults in advance, it is not required.
To signal a fault at run time, you can add fault information that does not match
defined fault elements to the $fault/detail variable in the pipeline. Be sure that the
name does not match any defined fault elements. Integration Server recognizes the
$fault/detail variable in the service pipeline. Because the document in the $fault/detail
variable does not match a defined fault element, Integration Server generates the
fault detail without using an IS document type for the structure.
To override the fault reasons, code, subcodes, node and/or role, set up the endpoint
service to provide the corresponding values in fields within the $fault variable. For
more information, see "The $fault Variable" on page 858.
Integration Server ignores any top-level instance document that might be in the
pipeline for a fault. Using the information from the $fault/detail variable, Integration
Server generates a SOAP response that contains a SOAP fault. If values are specified
for the fault reasons, code, subcodes, node and/or role within the $fault variable,
Integration Server uses those values instead of values it generates.
Additionally, faults can occur for the following reasons:
The endpoint service throws a service exception.
In this case, Integration Server constructs a fault message out of the service
exception. If the pipeline also contains a $fault variable, Integration Server uses the
information specified in the $fault variable to override the fault information.
To make the $fault variable available, you can write a Java service that throws a
ServiceException, but before throwing the exception, places the $fault variable in the
pipeline.
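The ordering matters: the $fault document must already be in the pipeline when the exception is thrown. The following plain-Python sketch models that sequence (the pipeline is modeled as a dictionary; the class, function, and field values are illustrative, not the Integration Server Java API):

```python
class ModelServiceException(Exception):
    """Stand-in for the service exception thrown by a Java service."""

def place_fault_then_fail(pipeline):
    # Put $fault into the pipeline FIRST, so the fault information
    # is still available after the exception ends the service.
    pipeline["$fault"] = {"reasons": ["Order validation failed"]}
    raise ModelServiceException("Order validation failed")

pipeline = {}
try:
    place_fault_then_fail(pipeline)
except ModelServiceException:
    pass  # the fault document survives in the pipeline
```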
Alternatively, for a flow service, you can use the EXIT with failure construct. As a
result, before exiting the flow service with a failure, you can place the $fault variable
into the pipeline.
A request handler service ended in failure and signaled that a fault should be
generated.
When the request handler returns a status code 1 or 2, Integration Server generates a
SOAP fault, along with the fault code, subcodes, reasons, node, and role for the fault.
You can use the pub.soap.handler:updateFaultBlock service to modify the code, subcodes,
reasons, node, and/or role that Integration Server generates.
Note: When the request handler returns status code 3, you are expected to build
the SOAP fault. As a result, the pub.soap.handler:updateFaultBlock service is not
necessary.
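As a rough sketch, the status-code behavior described above can be modeled as follows (the return strings are illustrative labels, not Integration Server values):

```python
def fault_builder_for_status(status):
    # Status 1 or 2: Integration Server generates the SOAP fault
    # (optionally adjusted via pub.soap.handler:updateFaultBlock).
    if status in (1, 2):
        return "integration-server"
    # Status 3: the request handler is expected to build the fault.
    if status == 3:
        return "request-handler"
    raise ValueError("status not covered by this sketch: " + str(status))

assert fault_builder_for_status(2) == "integration-server"
```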
Note: The structure of the SOAP fault returned by the web service connector
depends on the version of Integration Server on which the web service
descriptor was created. For more information, see Web Services Developer’s
Guide.
It is possible for a web service to return a fault that does not appear in a WSDL file. To
account for these SOAP faults, you can add fault elements to a WSDL first provider
web service descriptor or a consumer web service descriptor. For more information, see
"Adding a Fault Element to an Operation" on page 856.
Important: When you add a fault to a consumer web service descriptor, you must
refresh the web service connector(s). See "Refreshing a Web Service
Connector" on page 814.
Notes:
If you add a fault element to an operation in a consumer web service descriptor, and
then refresh the web service connector, Integration Server updates the logic of the
web service connector to look for and handle the fault at run time.
If you add a fault element to an operation in a WSDL first provider web service
descriptor, the WSDL document generated from the provider web service descriptor
will include the new faults as soap:fault elements in the operation.
You can add multiple fault elements to an operation in a web service descriptor.
At run time, if the service that corresponds to the operation returns multiple fault
documents, the SOAP fault in the resulting SOAP response will contain only one
fault document. Specifically, Integration Server returns the fault document that is
an instance of the IS document type that appears first in the operation’s list of fault
elements.
For example, suppose that an operation had three fault elements listed in this
order: faultA, faultB, and faultC. Note that each fault element corresponds to an
IS document type of the same name. At run time, execution of the operation (service)
results in two fault documents—one for faultB and one for faultC. In the SOAP
response generated by Integration Server, the SOAP fault contains the faultB
document only.
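The selection rule in this example can be sketched in a few lines of Python (an illustrative model, not product code):

```python
def select_fault(declared_order, returned_faults):
    # Return the fault document whose IS document type appears
    # first in the operation's ordered list of fault elements.
    for name in declared_order:
        if name in returned_faults:
            return name
    return None  # no returned document matches a declared fault

# The service returns faultB and faultC; faultB is declared first.
assert select_fault(["faultA", "faultB", "faultC"], {"faultB", "faultC"}) == "faultB"
```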
Variable Description
$fault Document. Fault information that overrides other fault information in the
service pipeline, if any.
reasons Document List. Optional. Reasons for the SOAP fault. Integration
Server uses the values you specify to modify the reasons it
generates for the fault.
Note: For a SOAP 1.1 fault, if you specify more than one reason,
Integration Server uses only the first reason. Multiple reasons
are supported for SOAP 1.2 faults.
node String. Optional. The URI of the SOAP node where the fault
occurred. Integration Server uses the value you specify to modify
the node it generates for the fault.
role String. Optional. The role in which the node was operating at
the point the fault occurred. Integration Server uses the value you
specify to modify the role it generates for the fault.
fault code and subcodes For a SOAP 1.1 fault, if you specify subcode values, the
service ignores them because subcodes are only applicable for a SOAP
1.2 fault.
fault reasons For a SOAP 1.1 fault, if you specify more than one reason, the
service only uses the first reason. Multiple reasons are supported
for SOAP 1.2 faults.
fault node For a SOAP 1.1 fault, if you specify a value for node, the service
ignores it because the fault node is only applicable for a SOAP
1.2 fault.
fault role The fault role is supported for both SOAP 1.1 and SOAP 1.2
faults.
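The SOAP 1.1 restrictions above can be summarized in a small model (field names follow the $fault variable; the function itself is an illustrative sketch):

```python
def normalize_fault(fault, soap_version):
    f = dict(fault)
    if soap_version == "1.1":
        f["reasons"] = f.get("reasons", [])[:1]  # only the first reason is used
        f.pop("subcodes", None)                  # subcodes apply to SOAP 1.2 only
        f.pop("node", None)                      # fault node applies to SOAP 1.2 only
        # role is supported for both SOAP 1.1 and SOAP 1.2, so it is kept
    return f

fault = {"reasons": ["r1", "r2"], "subcodes": ["s1"], "node": "uri", "role": "next"}
assert normalize_fault(fault, "1.1") == {"reasons": ["r1"], "role": "next"}
```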
For detailed information about request, response, or fault handler services, see
Web Services Developer’s Guide.
Any IS service can be used as a handler service. However, handler services must use
a specific service signature. Integration Server defines the service handler signature
in the pub.soap.handler:handlerSpec specification. Integration Server also provides several
services that you can use when creating handler services. These services are located in
the pub.soap.handler folder in the WmPublic package.
When you register a handler, you name the handler, identify the services that function
as the request, response or fault handler services, and indicate whether the handler is for
use with provider web service descriptors or consumer web service descriptors.
You can assign multiple handlers to a web service descriptor. Designer displays the
handlers on the Handlers tab. The collection of handlers assigned to a web service
descriptor is called a handler chain. For a consumer web service descriptor, Integration
Server executes the handler chain for outbound SOAP requests and inbound SOAP
responses. For a provider web service descriptor, Integration Server executes the handler
chain for inbound SOAP requests and outbound SOAP responses.
When executing the handler chain, Integration Server executes request handler services
by working through the handler chain from top to bottom. However, Integration Server
executes response handler services and fault handler services from bottom to top.
The order of handlers in the handler chain may be important, depending on what
processing the handlers are performing.
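The execution order can be pictured with a short Python model (the handler names are hypothetical):

```python
def run_chain(handlers, phase):
    # Request handlers run top to bottom; response and fault
    # handlers run bottom to top.
    order = handlers if phase == "request" else list(reversed(handlers))
    return [name + ":" + phase for name in order]

chain = ["logging", "security", "transform"]
assert run_chain(chain, "request")[0] == "logging:request"
assert run_chain(chain, "response")[0] == "transform:response"
```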
Specify QNames only if you want to associate the handler with one or more
QNames. Registering QNames with a handler provides the following benefits:
Integration Server can perform mustUnderstand checking for the header with
the QName at run time. If a service receives a SOAP message in which a header
requires mustUnderstand processing by the recipient, Integration Server uses
the header QName to locate the handler that processes the header. Note that the
handler must be part of the handler chain for the WSD that contains the service.
When adding headers to a WSD, Designer populates the list of IS document
types that can be used as headers in the WSD with the IS document types
whose QNames were registered with the handlers already added to the WSD.
If you add an IS document type as a header to a WSD and the QName of that IS
document type is not associated with a handler, Designer adds the header but
displays a warning stating that there is no associated handler.
When consuming WSDL to create a provider or consumer WSD, Integration
Server automatically adds a handler to the resulting WSD if the WSDL contains a
QName supported by the handler.
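The mustUnderstand lookup described in the first benefit can be sketched as a registry keyed by QName (the QName, handler names, and chain contents below are hypothetical):

```python
# Maps a header QName (namespace, local name) to the handler
# registered for it.
handlers_by_qname = {
    ("http://example.com/sec", "Security"): "securityHandler",
}

def can_process_must_understand(header_qname, descriptor_chain):
    # The header can be processed only if a handler is registered
    # for its QName AND that handler is part of the WSD's chain.
    handler = handlers_by_qname.get(header_qname)
    return handler is not None and handler in descriptor_chain

assert can_process_must_understand(("http://example.com/sec", "Security"),
                                   ["securityHandler", "loggingHandler"])
```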
Note: You must set up a package dependency if the web service descriptor uses a
handler from a different package.
If you change the Pre-8.2 compatibility mode property of a web service descriptor
from false to true after a policy is attached to it, the policy subject will no longer be
governed by that policy.
For more information about the Pre-8.2 compatibility mode property, see "About Pre-8.2
Compatibility Mode" on page 865.
When attaching policies, avoid attaching a policy that contains policy assertions that
Integration Server does not support. For information about supported assertions, see
the Web Services Developer’s Guide. If you attach a policy that contains unsupported
policy assertions, unexpected behavior may occur.
If you attach a policy to a WSDL first provider web service descriptor or a consumer
web service descriptor, the attached policy will override any annotated policy in the
source WSDL.
For a web service descriptor with a policy attached to it, the attached policy
always takes precedence at run time.
For a consumer web service descriptor, even though the consumer WSDL will
not show the attached policy, Integration Server will enforce the attached policy
at run time.
When you attach a policy to or remove a policy from a provider web service
descriptor, the WSDL generated for that web service descriptor is changed as well.
Any web service clients generated from the WSDL will need to be regenerated.
When you attach a policy to or remove a policy from a consumer web service
descriptor, you do not need to refresh the web service connectors to pick up the
policy change. Integration Server detects and enforces the policy change at run time.
If the policy you are attaching contains WS-SecurityPolicy assertions and you also
want to use MTOM streaming, be aware that if the fields to be streamed are also
being signed and/or encrypted, Integration Server cannot use MTOM streaming
because Integration Server needs to keep the entire message in memory to sign and/
or encrypt the message.
Note: You can use Designer 8.2 or later with Integration Server 8.2 or later to
create and edit a web service descriptor regardless of the compatibility mode.
If warnings occur, Designer has determined that the web service descriptor can be
deployed to the corresponding web services stack successfully, but some run-time
behavior might change. Designer displays any warnings about the functional
changes of the web service descriptor in the web services stack. Click OK to
proceed with the change to the Pre-8.2 compatibility mode property. Click Cancel to
cancel the change.
5. Click File > Save.
Note: If your UDDI registry is CentraSite, you will also be able to use the Registry
Explorer view in Designer in addition to the UDDI Registry view. The
Registry Explorer view displays the contents of the CentraSite registry to
which Designer is currently connected. To open the Registry Explorer view,
select Window > Show View > Other and in the Show View dialog box, select
CentraSite > Registry Explorer.
Inquiry URL The URL configured for browsing the UDDI registry. This
field is mandatory.
Security URL The security URL for the UDDI registry. This field can be
mandatory or optional, depending on the registry.
Publish URL The URL configured for publishing services to the UDDI
registry. This field is mandatory if you want to publish a web
service descriptor to the UDDI registry.
6. Click Finish.
If you publish a service that does not meet the criteria specified in the currently
applied filter, Designer does not display the newly published web service in the
UDDI Registry view.
Designer applies each filter that you create to the entire contents of the UDDI
registry. For example, if you apply two filters in succession, Designer clears the first
filter before applying the second filter. Designer does not apply the second filter to
the results of the first filter.
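In other words, filtering is not cumulative; each filter is evaluated against the full registry. A minimal model (the service names are made up):

```python
def apply_filter(registry, predicate):
    # A new filter always starts from the entire registry, never
    # from the results of the previous filter.
    return [service for service in registry if predicate(service)]

registry = ["orderService", "orderStatus", "invoiceService"]
first = apply_filter(registry, lambda s: s.startswith("order"))
second = apply_filter(registry, lambda s: s.endswith("Service"))
# invoiceService appears even though the first filter excluded it.
assert second == ["orderService", "invoiceService"]
```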
To clear a filter
1. In UDDI Registry view, right-click and select Clear UDDI Filter.
2. Click OK to confirm removing the filter.
Designer removes the filter and displays all the published web services in the UDDI
registry.
2. In the New Web Service Descriptor dialog box, select either Provider (Inbound Request)
or Consumer (Outbound Request).
Follow the prompts that Designer displays and enter the required information for
the type of web service descriptor you are creating. Designer creates the provider
web service descriptor and saves it to the folder you specified. Designer also creates
supporting IS elements, such as flow services and IS document types.
Note: You can also create a web service descriptor by dragging and dropping a
service from the UDDI Registry view to a folder in the Package Navigator
view.
publishing web service assets. Software AG does not recommend using a mixture of
publishing methods.
Before publishing a service to a UDDI registry, be sure to create a provider web
service descriptor using the IS service as an operation of the web service.
Note: You cannot delete a web service from another business’ folder in the registry.
The Delete button will be disabled.
You can translate documents into and from flat file formats using the functionality and
services provided in the webMethods Flat File package (WmFlatFile). You can also
use these services as templates to create services in Designer that can convert between
flat file documents and IS documents (IData objects). The services in the WmFlatFile
package also provide a way to manage dictionary entries, entire flat file dictionaries, and
flat file schemas.
To set up the translation, you use a flat file schema to define how to identify individual
records within a flat file and what data is contained in each of those records. For detailed
information about the services in the WmFlatFile package and processing flat files, see
webMethods Integration Server Built-In Services Reference.
Concepts
You can use the flat file features to translate documents into and from flat file formats.
To set up the translation, you use a flat file schema to define how to identify individual
records within a flat file and what data is contained in each of those records. You can
also create a flat file dictionary to contain the flat file elements (records, composites,
fields) that you want to make available for use in all flat file schemas.
Note: You can reference a flat file dictionary definition in any flat file schema
regardless of whether the dictionary and schema are located in the same
package.
When creating an element definition in a flat file dictionary, you specify only certain
properties. You then specify the remaining properties in the instance of the element
definition in a particular flat file schema.
Stage 1 Create the flat file schema. During this stage, you create the new flat file
schema on the Integration Server where you will do your development
and testing. For more information, see "Creating the Flat File Schema" on
page 884.
Stage 2 Define the record parser and specify a record identifier for the flat file schema.
During this stage, you associate a record parser with the flat file schema
that will process flat files inbound to the Integration Server. You also
specify how you want the record to be identified after it is parsed. For
more information about defining the record parser, see "Specifying a
Record Parser" on page 885. For more information about specifying a
record identifier, see "Specifying a Record Identifier" on page 894.
Stage 3 Define the structure. During this stage, you specify the hierarchical structure
of the flat file by creating and nesting record definitions or record
references. For more instructions, see "Defining the Schema Structure" on
page 895.
Stage 4 Set properties for the flat file schema. During this stage, you set up the ACL
(access control lists) permissions, configure a default record, add areas,
and allow undefined data for your flat file schema or dictionary.
Stage 5 Test the flat file schema. During this stage, you can use the tools provided by
Designer to test the flat file schema. For more information, see "Testing a
Flat File Schema" on page 899.
Note: When validation is enabled, Integration Server can generate errors for
the Ordered, Mandatory, Validator, and Undefined Data properties. To enable
validation, you must set the validate variable of the convertToValues service to
True. For more information about this service, see webMethods Integration
Server Built-In Services Reference.
3. In the Element name field, type a name for the flat file schema using any combination
of letters, numbers, and/or the underscore character. For information about restricted
characters, see "About Element Names" on page 54.
4. Click Finish.
Integration Server generates a flat file schema and Designer displays it in the
Package Navigator view.
5. Next, use the flat file schema editor to configure the record parser and record
identifier. See "Specifying a Record Parser" on page 885.
Note: If you are using the webMethods Module for EDI to process EDI
documents, you should use the wm.b2b.edi services to create your flat
file schemas. This help system does not provide information about
creating EDI flat file schemas for use with the webMethods Module for
EDI. For more information and steps, see the webMethods Module for
EDIINT Installation and User’s Guide. The EDI Document Type option is
displayed for you to view existing EDI flat file schemas.
Property Description
--OR--
b. Field or composite
Property Description
--OR--
c. Subfield
Property Description
--OR--
Property Description
Property Description
convertToValues service to create the strings Doe,
John and Doe, Jane, the record would appear
as “Doe, John”,“Doe, Jane”. When using the
convertToString service to create “Doe, John”,“Doe,
Jane”, the value of the record would be Doe, John
and Doe, Jane. When using the convertToString
service, if you have specified both the Release
Character and the Quoted Release Character, the
Quoted Release Character will be used.
--OR--
e. Release character
Property Description
--OR--
Property Description
the field delimiter appears in the sixth character
position from the beginning of the document.
4. Next, set the record identifier for the schema. See "Specifying a Record Identifier" on
page 894.
Property Description
--OR--
b. Subfield
Property Description
Property Description
--OR--
Property Description
--OR--
Property Description
character appears in the sixth character position
from the beginning of the document.
d. Release character
Property Description
--OR--
5. Next, set the record identifier for the schema. See "Specifying a Record Identifier" on
page 894.
Property Description
--OR--
b. Subfield
Property Description
--OR--
Property Description
Property Description
example, your field delimiter is (,) and your
release character is “. When you want to use (,)
within a field as text, you must preface it with
your quoted release character. When using the
convertToValues service to create the strings Doe, John
and Doe, Jane, the record would appear as “Doe,
John”,“Doe, Jane”. When using the convertToString
service to create “Doe, John”,“Doe, Jane”, the
value of the record would be Doe, John and Doe,
Jane. When using the convertToString service, if you
have specified both the Release Character and the
Quoted Release Character, the Quoted Release
Character will be used.
--OR--
d. Release character
Property Description
Property Description
document is located. For example, if you specify 5
as the character position, you have indicated that
the field delimiter appears in the sixth character
position from the beginning of the document.
4. Next, set the record identifier for the schema. See "Specifying a Record Identifier" on
page 894.
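The quoted-release-character behavior described in the steps above can be sketched as follows (a simplified model of the convertToString direction only; the function name and defaults are assumptions):

```python
def to_flat_string(fields, delimiter=",", quoted_release='"'):
    # A field that contains the delimiter as text is wrapped in the
    # quoted release character so the parser does not split on it.
    quoted = [quoted_release + f + quoted_release if delimiter in f else f
              for f in fields]
    return delimiter.join(quoted)

# Matches the Doe, John / Doe, Jane example in the property description.
assert to_flat_string(["Doe, John", "Doe, Jane"]) == '"Doe, John","Doe, Jane"'
```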
Value Description
3. Next, define the structure for the flat file schema. For instructions, see "Defining the
Schema Structure" on page 895.
Element See...
4. Add flat file elements to define the structure of the flat file schema. You can add
additional records. You can also further define records by adding child composite
and field definitions. For instructions about adding, configuring, and nesting flat file
elements in your flat file schema, see "Defining Flat File Elements" on page 904.
If the default record is specified when creating the flat file schema, any record that
cannot be recognized will be parsed using this default record. If a default record
is not selected, the record will be treated as undefined data. If the Undefined Data
property is set to False and the validate variable of the convertToValues service is set
to true, convertToValues will generate errors when it encounters undefined data. For
more information about the Undefined Data property, see "Allowing Undefined Data"
on page 896.
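A compact model of this dispatch logic (the record names and error message are illustrative; only the unDefData placeholder name comes from the documentation):

```python
def classify_record(record_id, known_records, default_record=None,
                    allow_undefined=True, validate=False):
    if record_id in known_records:
        return record_id                 # parsed with its own definition
    if default_record is not None:
        return default_record            # parsed using the default record
    if validate and not allow_undefined:
        raise ValueError("undefined data: " + record_id)
    return "unDefData"                   # treated as undefined data

assert classify_record("HDR", {"HDR", "DTL"}) == "HDR"
assert classify_record("XXX", {"HDR"}, default_record="DEFAULT") == "DEFAULT"
assert classify_record("XXX", {"HDR"}) == "unDefData"
```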
Note: If the file is encoded using a multi-byte encoding, and if you use a fixed length
or variable length parser, the service puts two placeholders into the pipeline:
unDefData and unDefBytes.
Select... To...
True Allow undefined data. If you select this option, you can
choose whether to allow undefined data at the record level.
False Not allow undefined data in any location in this flat file
schema. This is the default.
Select... To...
Creating an Area
An area is a way to associate an arbitrary string with a given record. For example, you
may have an address record that needs to specify the shipping address in the document
header, but needs to specify the billing address in the document detail. To differentiate
these two records, you would create "Header" and "Detail" areas.
To create an area
1. In the Package Navigator view of Designer, open the flat file schema to which you
want to add an area.
2. In the Properties view, in the Settings area, click the button next to Areas.
3. Click the add button to add a new area to the flat file schema. Click the insert
button to insert a new area in a specific location in the schema. Click the delete
button to delete an existing area.
4. Save the flat file schema.
Note: If you do not use this property, validation errors will occur if the record
structure of an inbound document does not match the record structure
defined in its flat file schema.
name in this field. For more information about alternate names, see "Record
Definition Properties" on page 1025.
4. Save the flat file schema.
Button Description
Select the element you want to move, and click the up or down arrow to move the
element up or down in the flat file schema structure.
Select the element you want to move, and click the left or right arrow to move the
element left or right in the flat file schema structure.
Tip: If you select the flat file schema in Package Navigator view and then select
Run > Run Configurations, Designer populates the Integration Server and Flat
File Schema fields automatically.
6. On the Input tab, in the Skip whitespace list, select true if you want Designer to ignore
whitespace at the beginning of a record.
Note: If the flat file schema specifies a fixed length parser, Designer always
preserves whitespace when processing a flat file document. For fixed
length parsers, the Skip whitespace value is ignored.
7. In the Encoding list, select the encoding for the flat file that you will be testing.
8. Next to the File field, click the Browse buon to navigate to and select the flat file that
you want this launch configuration to use when testing the flat file schema.
9. Optionally, click the Common tab to specify general information about the launch
configuration and to save the launch configuration to a file.
10. Click Apply.
11. Click Run to test the flat file schema now. Otherwise, click Close.
values. Designer then runs the launch configuration. Designer saves the launch
configuration in your workspace.
Designer always performs validation when testing the flat file schema. To enable
validation when processing a flat file document using the pub.flatFile:convertToValues
service, set the validate input parameter to true.
If the flat file schema specifies a fixed length parser, Designer always preserves
whitespace when processing a flat file document. For fixed length parsers, the Skip
whitespace value is ignored.
Stage 1 Create the flat file dictionary. During this stage, you create the new flat file
dictionary on Integration Server. For more information, see "Creating a
Flat File Dictionary" on page 901.
Stage 2 Add Elements to the Flat File Dictionary. During this stage, you add elements
to the Record Definition, Composite Definition, or Field Definition
elements of the flat file dictionary. For more information, see "Adding
Elements to the Flat File Dictionary" on page 902.
Stage 3 Set Properties for the Flat File Dictionary. During this stage, you set up the
ACL (access control lists) permissions, configure the default record, allow
undefined data, and specify floating records for your flat file dictionary.
For more information, see "Setting Properties for the Flat File Dictionary"
on page 902.
Note: You can quickly create a flat file dictionary by right-clicking the folder,
selecting New > Flat File Dictionary. Enter a name for the flat file dictionary
in the New Flat File Dictionary dialog box and click Finish. Designer
automatically creates a flat file dictionary in the selected folder.
Note: You cannot create references to the elements added to a dictionary until
you save the dictionary.
Element Property
Element Property
3. After you have specified the properties for the selected record, save the dictionary.
You now can create flat file schemas based on this flat file dictionary.
Note: To edit a flat file dictionary, you must have the proper access permissions and
must lock the flat file dictionary. For information about access permissions see
"Assigning and Managing Permissions for Elements" on page 89
Element See...
2. In the Flat File Structure tab of the flat file schema editor, or in the flat file dictionary
editor, select the schema or dictionary and click the add button in the editor toolbar.
(You can also right-click and select New.)
3. Select Record Definition and click Next.
4. Specify a name for the record definition in the Enter Record Definition Name dialog
box and click Finish.
Important: This name must match the value of its record identifier exactly as it will
appear in the flat file. The name of a record reference does not have to
match the name of the record definition in the flat file dictionary. The
name of a record reference will be matched to the record identifier in the
record. The name of the record definition in the flat file dictionary does
not need to match the record identifier that appears in the flat file.
7. Click Finish.
The record is added to the flat file schema structure. The Referring To field indicates
the record definition to which the record reference refers. The Dictionary field
indicates the flat file dictionary to which the record reference refers. If the element is
a record definition, these two fields are empty.
Property Description
Mandatory Optional. Select the check box to require that this composite
appear in the flat file. If it is not selected, the composite is
not required to appear in the flat file. If it is selected and the
convertToValues service validate variable is set to true, errors
will be generated if the composite does not appear in the flat
file.
2. In the Flat File Structure tab of the flat file schema editor, or in the flat file dictionary
editor, select the record definition and click the add button in the editor toolbar. (You can also
right-click the element and select New.)
3. Select Composite Reference and click Next.
4. Navigate to the flat file dictionary in which the element is located, select the
dictionary, and then click Next.
5. Select the element that you want to reference and then click Next.
6. Enter the details required in the Enter Composite Reference Name(s) as specified in
"Adding a Composite Definition" on page 906.
7. Click Finish.
Fixed Position Counting from zero (0), indicates a fixed number of bytes to be
extracted from a record.
Property Description
Nth Field Counting from zero (0), indicates the field that you want to
extract from the record.
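The two extraction styles can be modeled directly (the sample record is made up):

```python
def fixed_position(record, start, end):
    # Counting from zero, extract a fixed span of characters.
    return record[start:end]

def nth_field(record, n, delimiter=","):
    # Counting from zero, extract the nth delimited field.
    return record.split(delimiter)[n]

assert fixed_position("HDR20171001", 3, 11) == "20171001"
assert nth_field("HDR,20171001,ACME", 2) == "ACME"
```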
Property Description
39 Subscribing to Events
■ What Happens When an Event Occurs? ................................................................................... 916
■ Subscribing to Events ................................................................................................................ 917
■ Viewing and Editing Event Subscriptions .................................................................................. 922
■ Suspending Event Subscriptions ............................................................................................... 923
■ Deleting an Event Subscription .................................................................................................. 923
■ Building an Event Handler ......................................................................................................... 923
■ Invoking Event Handlers Synchronously or Asynchronously ..................................................... 924
■ About Alarm Events ................................................................................................................... 925
■ About Audit Events .................................................................................................................... 925
■ About Audit Error Events ........................................................................................................... 926
■ About Exception Events ............................................................................................................. 926
■ About Guaranteed Delivery Events ........................................................................................... 926
■ About JMS Delivery Failure Events ........................................................................................... 928
■ About JMS Retrieval Failure Events .......................................................................................... 928
■ About Port Status Events ........................................................................................................... 929
■ About Replication Events ........................................................................................................... 929
■ About Security Events ................................................................................................................ 930
■ About Session Events ................................................................................................................ 931
■ About Stat Events ...................................................................................................................... 931
■ About Transaction Events .......................................................................................................... 931
The Event Manager monitors Integration Server for events and invokes event handlers
when those events occur. An event is a specific action that the Event Manager recognizes
and an event handler can react to. An event handler is a service that you write to perform
some action when a particular event occurs. You then subscribe the event handlers to the
events about which they need to be notified.
You can use the Event Manager to manage all of your event subscriptions and perform
the following tasks:
Subscribe event handlers to events.
View or edit event subscriptions.
Suspend event subscriptions.
Delete event subscriptions.
Note: You can also use built-in services to add, modify, and delete event
subscriptions. These services are located in the pub.event folder. For more
information about built-in services, see the webMethods Integration Server Built-
In Services Reference.
Note: The Event Manager monitors local Integration Server events only. It does not
monitor EDA (Event Driven Architecture) events.
Subscribing to Events
You can use the Event Manager in Designer to subscribe to an event on the current
server. This action registers the event handler with the Event Manager and specifies
which events will invoke it.
Use the following procedure to subscribe to an event on the current Integration Server.
Before you subscribe to an event, you must have completed the following:
Identified the event type you want to subscribe to.
Identified the service or services that generate an event you want to subscribe to (if
you want to subscribe to an audit event, exception event, or JMS delivery failure
event).
Written the event handler that will execute when the identified event occurs.
To subscribe to an event
1. In Package Navigator view, select the current Integration Server and select File >
Properties. In the Properties for serverName dialog box, select Event Manager.
2. In the View event subscribers for list, select the event type to which you want to
subscribe.
3. Click the add button to add a new subscriber.
4. In the Add Event Subscriber dialog box, complete the following fields:
Service The fully qualified name of the event handler that will
subscribe to the event (that is, the service that will execute
when the event occurs). You can either type the name in the
Service field or browse to locate and select the service from a
list.
Example: sgxorders.Authorization:LogAuthTrans
Filter A pattern string to further limit the events this event handler
subscribes to. Filters vary depending on the event type you are
subscribing to.
For example, if you are subscribing to an audit or exception
event, create a filter to specify the names of services whose
events this event handler subscribes to (that is, the services
whose events will invoke the event handler).
Note: Integration Server saves information for event types and event subscriptions
in the eventcfg.bin file. This file is generated the first time you start the
Integration Server and is located in the Integration Server_directory\config
directory. Copy this file from one Integration Server to another to duplicate
event subscriptions across servers.
Important: The asterisk (*) is the only wildcard character allowed in an event filter. All
other characters in the pattern string are treated as literals. Pattern strings are
case sensitive.
Alarm Event The message generated by the alarm event. Create a filter that
specifies some of the text of the message. The event handler with
this filter will process all alarm events containing the specified
text.
The following filter specifies that any alarm events that generate
a message containing the word “port” will invoke the event
handler:
*port*
Audit Event The fully qualified name of the service that generates the audit
event. Create a filter to specify the services whose audit events
you want to invoke the event handler.
The following filter specifies that the service
sgxorders.Authorization:creditAuth will invoke the event handler:
sgxorders.Authorization:creditAuth
Audit Error Event The concatenated value of the destination and errorCode fields of
the audit error event. If the audit error event value matches the
filter, the event will be passed to the event handler. You can use
the asterisk (*) as a wildcard character in the filter.
You can use filters to limit the events that your event handler
will receive as follows:
If you set the filter to YourSearchTerm, the event handler will
receive events whose values contain only YourSearchTerm.
If you set the filter to YourSearchTerm*, the event handler will
receive events whose values begin with YourSearchTerm.
If you set the filter to *YourSearchTerm, the event handler will
receive events whose values end with YourSearchTerm.
If you set the filter to *YourSearchTerm*, the event handler will
receive events whose values contain YourSearchTerm anywhere
in the value.
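The filter semantics described above (the asterisk as the only wildcard, every other character a case-sensitive literal) can be illustrated with a small matcher. The following sketch mirrors the documented behavior only; it is not Integration Server's implementation:

```java
public class EventFilter {
    // Returns true if value matches filter, where '*' is the only wildcard
    // and all other characters are case-sensitive literals.
    public static boolean matches(String filter, String value) {
        String[] parts = filter.split("\\*", -1);
        int pos = 0;
        for (int i = 0; i < parts.length; i++) {
            String part = parts[i];
            boolean first = (i == 0);
            boolean last = (i == parts.length - 1);
            if (last && !part.isEmpty()) {
                // Anchor the final literal at the end of the value.
                int start = value.length() - part.length();
                return start >= pos && value.startsWith(part, start)
                        && (!first || start == 0);
            }
            if (part.isEmpty()) continue;
            int found = value.indexOf(part, pos);
            // The first literal must anchor at the start unless filter begins with '*'.
            if (found < 0 || (first && found != 0)) return false;
            pos = found + part.length();
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(matches("*port*", "cannot open port 5555"));   // true
        System.out.println(matches("sgxorders.Authorization:creditAuth",
                "sgxorders.Authorization:creditAuth"));                    // true
        System.out.println(matches("creditAuth", "sgxorders:creditAuth")); // false
    }
}
```

A filter with no asterisk behaves as an exact match, which is why the last example does not match.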
Error Event The error message text. The following filter specifies that any
error event with a message that contains the word "missing" will
invoke the event handler.
*missing*
Exception Event The fully qualified name of the service that generates the
exception event. Create a filter to specify the services whose
exception events you want to invoke the event handler.
The following filter specifies that all services that start with the
word “credit” and belong to any folder will invoke the event
handler:
*:credit*
GD Start Event The fully qualified name of the service that is being invoked
using guaranteed delivery. Create a filter to specify the services
that, when invoked using guaranteed delivery, will invoke the
event handler.
The following pattern string specifies that all services that start
with the word “sendPO” and belong to any folder will invoke the
event handler:
*:sendPO*
JMS Delivery Failure Event The name of the JMS connection alias used to send the
message to the JMS provider.
The following filter specifies that a JMS delivery failure
event involving a JMS connection alias with “XA” in the JMS
connection alias name will invoke the event handler:
*XA*
JMS Retrieval Failure Event The fully qualified name of the JMS trigger that called
the trigger service for which the error occurred.
The following filter specifies that a JMS retrieval failure event
involving a JMS trigger named “ordering:processTransaction” will
invoke the event handler:
*ordering:processTransaction*
Journal Event The major code and minor code of the generated event. The
format of the filter is <majorCode>.<minorCode>. For example,
the following filter specifies that any journal event with major
code of 28 followed by a minor code of 34 will invoke the event
handler:
*28.34*
Replication Event The name of the package being replicated. Create a filter to
specify the packages that, when replicated, will invoke the event
handler.
The following filter specifies that a replication event involving
the package named “AcmePartnerPkg” will invoke the event
handler:
AcmePartnerPkg
Session Start Event The user name for the user starting the session on the
Integration Server or the groups to which the user belongs. Create a filter
to specify which users or which user groups invoke an event
handler when they start a session on the server.
The following filter specifies that a session start event generated
by a user in the “Administrators” group will invoke the event
handler.
*Administrators*
A filter of * matches all services.
6. Click OK when you finish viewing or editing event subscriptions. Your changes take
effect immediately.
Stage 1 Creating an empty service. During this stage, you create the empty
service that you want to use as an event handler.
Stage 2 Declaring the input and output. During this stage, you declare the
input and output parameters for the event handler by selecting
the specification or IS document type for the event type in
pub.event. The specification and IS document type indicate the run-
time data that will be contained in the IData object passed to the
event handler.
Stage 3 Inserting logic, code, or services. During this stage, you insert the
logic, code, or services to perform the action you want the event
handler to take when the event occurs. If you are building a flow
service, make sure to link data between services and the pipeline.
Stage 4 Testing and debugging the service. During this stage, you use the
testing and debugging tools available in Designer to make sure
the event handler works properly.
Stage 5 Subscribing to the event. During this stage, you use the Event
Manager to subscribe the event handler to the event. This action
registers the event handler with the Event Manager and specifies
which events will invoke it. You can create filters to be more
selective about the events to which you subscribe.
Set the server configuration parameter specific to the event to true if you want
Integration Server to invoke the event handlers that subscribe to the event
asynchronously, or to false if you want Integration Server to invoke the event handlers
synchronously. The default value is true.
For more information about specifying the server configuration parameters, refer to
webMethods Integration Server Administrator’s Guide.
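For example, a server configuration entry of the following form controls audit events. The parameter name shown here follows the watt.server.event.<eventType>.async pattern; confirm the exact name for your event type and release in the Administrator's Guide:

```
watt.server.event.audit.async=true
```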
Note: Keep in mind that event handlers are processed independently of the services
that invoke them. Event handlers are not designed to replace the error
handling and/or error recovery procedures that you would normally include
in your service.
You can subscribe to GD Start and GD End events to invoke event handlers that log
guaranteed delivery transactions to a file or database. You might also want to use
guaranteed delivery events to invoke event handlers that send notification. For example,
if you use guaranteed delivery to invoke a service that processes purchase orders, you
might want to send notification to a business account manager about purchase orders
from a particular client, or when the value of a purchase order is greater than a certain
amount.
A Guaranteed Delivery Transaction generates Guaranteed Delivery Events and Transaction Events
Stage Description
Stage 2 The remote Integration Server receives the request and begins
executing Service B. When the remote server begins executing
Service B, the remote server generates a Tx Start event. By default,
the Tx Start event is logged to the txinyyyymmdd.log file.
Stage 4 The remote Integration Server sends the results of Service B to the
requesting client (here, the local Integration Server).
Stage 5 The local Integration Server receives the results of Service B and
generates a GD End event. By default, the GD End event is logged
to the txoutyyyymmdd.log file.
For details about guaranteed delivery, see the Guaranteed Delivery Developer’s Guide.
A service that functions as an event handler for a Security event should use the
pub.event:security specification as its service signature. For more information about the
pub.event:security service, see the webMethods Integration Server Built-In Services Reference.
Note: Integration Server provides an agent that you can configure for use with
a network monitoring system. For information about implementing
this agent, see the readme file in the agentInstall.jar file located in the
Integration Server_directory\lib directory.
Tx End events occur when an Integration Server finishes executing a service invoked
with guaranteed delivery.
Transaction events result from guaranteed delivery transactions. Each guaranteed
delivery transaction generates a Tx Start event and a Tx End event. In fact, the
transaction events occur between the guaranteed delivery events. A Tx Start event
occurs immediately after a GD Start event and a Tx End event occurs immediately
before a GD End event. For more information about how transaction events relate to
guaranteed delivery events, see "Guaranteed Delivery Events and Transaction Events"
on page 927.
You can subscribe to Tx Start and Tx End events to invoke event handlers that log
guaranteed delivery transactions to a file or database. You might also want to use
transaction events to invoke event handlers that send notification.
You can create a client that submits an XML document to a target service, which then
receives the XML document.
The following table describes the methods a client can use to submit an XML document
and how Integration Server passes the XML document to the target service based on the
method.
Submit the XML document in an arbitrarily named String variable: Integration Server
passes the document as an XML String to the target service. It is the responsibility of
the target service to parse the XML so that it is in a format that can be manipulated.
For more information, see "Submitting and Receiving XML in a String Variable" on page 935.
Submit the XML document in a special String variable named $xmldata: Integration
Server automatically parses the XML and passes it as a node to the target service.
For more information, see "Submitting and Receiving XML in a String Variable" on page 935.
Post the XML document via HTTP: Integration Server either automatically parses the
XML and passes it as a node to the target service or passes the XML document directly
to the target service as an XML stream or byte array.
For more information, see "Submitting and Receiving XML in a String Variable" on page 935.
FTP the XML document: Integration Server automatically parses the XML and passes it
as a node to the target service.
For more information, see "Submitting and Receiving XML in a String Variable" on page 935.
Send the XML document as an email attachment: Integration Server automatically
parses the XML and passes it as a node to the target service.
For more information, see "Submitting and Receiving XML in a String Variable" on page 935.
The pub.xml:xmlStringToXMLNode service produces a node that the target service can
subsequently query or convert to an IData object.
For example, continuing with the previous example, the target service,
purch:postOrder, would pass the orders String, which contains the XML document, to
pub.xml:xmlStringToXMLNode.
After the XML document is represented as a node, the target service can invoke:
pub.xml:queryXMLNode to query the node
pub.xml:xmlNodeToDocument to convert the node to an IData object
For more information about the pub.xml:xmlStringToXMLNode, pub.xml:queryXMLNode, and
pub.xml:xmlNodeToDocument services, see the webMethods Integration Server Built-In Services
Reference.
Note: To use the $xmldata variable to submit an XML document, but bypass
automatic parsing so that Integration Server sends the body of the request
directly to the target service as a stream or byte array, your client must use
HTTP to invoke the target service. For more information, see "Submitting and
Receiving XML via $xmldata without Parsing" on page 942.
{
public static void main(String args[])
throws Exception
{
//--Read the XML document from a specified file (or from stdin)
Context c = new Context();
Important: This example shows a Java-based client. However, you can use any type of
IS client, including a browser-based client. For a browser-based client, post
the XML document as the value portion of a $xmldata=value pair. You can
post other name=value pairs with the request. For more information, see
"Building a Browser-Based Client" on page 966.
Important: The XML document must be the only text in the body of the request. Do
not assign the XML document to a name=value pair.
pub.client:http Description
input variable
url Specify the URL of the target service that is to receive the XML
document.
In the URL, include the xmlFormat argument if you want to override
the behavior specified by the Default xmlFormat property for the target
service. For more information about the xmlFormat values, see
"About the xmlFormat Value" on page 940.
headers Specify information for the Content-Type field of the HTTP request
header.
Key Value
data Specify the XML document to submit via HTTP. Use one of the
following keys:
Key Value
enhanced Integration Server parses the XML using the enhanced XML
parser automatically. Integration Server uses the default options
specified for enhanced XML parsing on the Settings > Enhanced
XML Parsing page in Integration Server Administrator. Integration
Server passes the XML document to the target service as an
org.w3c.dom.Node object named node.
For more information about configuring the enhanced XML
parser, see webMethods Integration Server Administrator’s Guide.
node Integration Server parses the XML using the legacy XML
parser automatically and passes it to the target service as a
com.wm.lang.xml.Node object named node.
Note: If parsing is not needed, it can unnecessarily slow down the execution of a
service. For example, an application might handle the XML as a simple String.
In this case, the automatic parsing is unnecessary and should be avoided.
By default, Integration Server obtains the xmlFormat value from the Default xmlFormat
property assigned to the target service. However, the client can override the Default
xmlFormat property value by supplying the xmlFormat argument in the URL it uses
to invoke the target service. The following shows the URL format when using the
xmlFormat argument:
http://hostname:port/invoke/folder/serviceName?xmlFormat=format
Specify cached, node, stream, or bytes for format.
For example, suppose that the configured Default xmlFormat property value is node. If
you want to invoke the sales:orderInfo service on the server rubicon:5555 and override the
configured Default xmlFormat value so that Integration Server passes the XML document
directly to the sales:orderInfo service as an XML stream, use the following URL:
http://rubicon:5555/invoke/sales/orderInfo?xmlFormat=stream
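Assembling such a URL programmatically is straightforward; the sketch below uses the example host, port, and service names from this section and is only an illustration:

```java
public class InvokeUrl {
    // Builds an /invoke URL for a service, optionally appending the
    // xmlFormat argument to override the Default xmlFormat property.
    public static String build(String host, int port, String folder,
                               String service, String xmlFormat) {
        String url = "http://" + host + ":" + port + "/invoke/" + folder + "/" + service;
        if (xmlFormat != null) {
            url += "?xmlFormat=" + xmlFormat;
        }
        return url;
    }

    public static void main(String[] args) {
        System.out.println(build("rubicon", 5555, "sales", "orderInfo", "stream"));
        // http://rubicon:5555/invoke/sales/orderInfo?xmlFormat=stream
    }
}
```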
Note: The client request should specify the xmlFormat argument only when it is
recommended in the documentation for the service. Furthermore, a client
should specify the xmlFormat argument only when it knows how the service
will respond.
pub.client:http Description
input variable
url Specify the URL of the target service that is to receive the XML
document.
Note: Rather than specifying the query string portion of the URL
with the xmlFormat argument in the url variable, specify the
xmlFormat argument in the data/args variable, as described
below.
data Use the args key of the data input variable to specify key/value pairs
that the service places in the query string of the URL.
Key Value
Argument Value
The query string that the service appends to the URL will use
the following format:
?$xmldata=string&xmlFormat=format
where:
string is the value you specify for the $xmldata argument
(that is, the XML document).
format is the value you specify for the xmlFormat argument.
For information about how to set other input variables when using the pub.client:http
service to submit an XML document, see "Submitting and Receiving XML via HTTP" on
page 938.
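When building the query string by hand rather than through the data/args variable, the XML document must be URL-encoded, because it contains characters that are reserved in URLs. A sketch (the helper name is hypothetical; pub.client:http performs this encoding for you):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class XmlDataQuery {
    // Builds the ?$xmldata=...&xmlFormat=... query string, URL-encoding
    // the XML document so its reserved characters survive transport.
    public static String build(String xml, String xmlFormat) {
        return "?$xmldata=" + URLEncoder.encode(xml, StandardCharsets.UTF_8)
                + "&xmlFormat=" + xmlFormat;
    }

    public static void main(String[] args) {
        System.out.println(build("<order><sku>A1</sku></order>", "node"));
        // ?$xmldata=%3Corder%3E%3Csku%3EA1%3C%2Fsku%3E%3C%2Forder%3E&xmlFormat=node
    }
}
```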
No file extension
If you want to submit an XML document in a file that has no file extension,
edit the lib/mime.types file and add the following line to associate the special key
ftp_no_extension with the text/xml content type. The ftp_no_extension key
indicates a null extension.
text/xml ftp_no_extension
2. Point to the target directory where the client is to copy the file containing the XML
document.
The target directory is the Integration Server namespace (ns) directory where the
target service resides. Use the following format:
cd \ns\folder\subfolder\serviceName
For example, if the target directory is the namespace directory containing the
purchasing:submitOrder service, use the following:
cd \ns\purchasing\submitOrder
Important: Note that the root directory for this operation is your Integration Server’s
namespace directory (ns), not the root directory of the target machine.
3. Copy the XML document to the target directory using the following command,
where filename is the name of the file that contains the XML document:
put filename
The file that the client sends to Integration Server via FTP is never actually written to
the server’s file system. The XML document you send and the output file it produces
are written to a virtual directory system maintained in the client’s Integration Server
session. When the client ends the FTP session, Integration Server automatically
deletes the original file and any results from the session.
Important: Software AG recommends that you use a unique name for each XML
document that you FTP to Integration Server (perhaps by attaching a
timestamp to the name) so that you do not inadvertently overwrite other
FTPed XML documents or their results during an FTP session.
If your client is a service running in an Integration Server, instead of coding each of the
actions described above, the client can invoke services in the pub.client folder to FTP a
file. For information about these services, see the webMethods Integration Server Built-In
Services Reference.
Code the client to retrieve the output file using the FTP “get” command. For example,
to retrieve the output in PurchaseOrder.xml.out, the client can use the following FTP
command:
get PurchaseOrder.xml.out
If your client is a service running in an Integration Server, it can invoke services in the
pub.client folder to perform FTP commands to get a file. For information about these
services, see the webMethods Integration Server Built-In Services Reference.
When Integration Server receives an XML document via email, the server automatically
parses the XML document and passes it as a node to a service for processing.
Note: If you leave the subject line empty, Integration Server first attempts to
pass the XML document to the global service. If the global service is
not defined, the server then attempts to pass the XML document to the
default service assigned to the email port (if one has been assigned). You
assign the global service and the port’s default service when defining
the email port. For more information, see webMethods Integration Server
Administrator’s Guide.
pub.client:smtp Description
input variable
subject A String containing the fully qualified name of the target service
that is to process the XML document. For example:
orders:ProcessPO
from A String containing the email address where the client expects
results. The target service should send its output to this email
address.
body A String containing input variables for the target service in URL
query string format. For example:
one=1&two=2&three=3&$user=Administrator&$pass=manage
This example sets five input variables: one, two, and three are set
to the values 1, 2, and 3, respectively. The input variables $user and
$pass have special meaning to the email port. Use these variables to
specify the user name and password for the email port. You must
specify $user and $pass if authentication is enabled on the email
port.
You can use the load and query services to fetch HTML or XML documents from the
Internet and extract data for use in other services.
Note: If you want to retrieve documents from a local file system, use the
pub.file:getFile service. For more information about pub.file:getFile, see the
webMethods Integration Server Built-In Services Reference.
Basic Concepts
To successfully use Integration Server’s load and query services, you should understand
the following terms and concepts.
Term Concept
Note: If you want to fetch a document from a local file system, do not use
pub.xml:loadXMLNode. Instead, use the pub.file:getFile service. For more
information, see the webMethods Integration Server Built-In Services Reference.
Note: If you want to fetch a document from a local file system, do not use
pub.xml:loadEnhancedXMLNode. Instead, use the pub.file:getFile service. For more
information, see the webMethods Integration Server Built-In Services Reference.
Note: When you use pub.xml:queryXMLNode to query an enhanced XML node (a node
produced by the enhanced XML parser), you must use XQL as the query
language.
When creating a service, you can construct and configure the service to retry
automatically if a transient error occurs during service execution. A transient error is an
error that arises from a temporary condition that might be resolved or restored, such as
the unavailability of a resource due to network issues or failure to connect to a database.
The service might execute successfully if Integration Server waits a short interval of time
and then retries the service.
To build a service that retries, you create it so that it catches errors and determines
whether an error is transient. When the service determines that an error is transient,
have it re-throw the error as an ISRuntimeException. The ISRuntimeException is the
signal to Integration Server to retry the service. For more information about how to
construct the service, see "Requirements for Retrying a Service" on page 956 and
"Example Service that Throws an Exception for Retry" on page 957.
In addition to constructing the service for retry, you also must set retry properties for
the service (or the trigger calling the service) so that Integration Server knows that it is to
retry a service when an ISRuntimeException is thrown. When Integration Server retries
the service it re-executes it using the original input. For more information about how
to configure a service for retry, see "About Automatic Service Retry" on page 192 and
"Configuring Service Retry" on page 193.
If the service invokes an adapter service, ensure that the service catches transient
errors that the adapter service detects.
When an adapter service built on Integration Server 6.0 or later, and based on the
ART framework, detects a transient error (for example, if its back-end server
is down or the network connection is broken), the adapter service propagates an
exception that is based on ISRuntimeException. When creating a service that invokes
an adapter service, ensure that the logic that catches errors and determines whether
they are transient errors can interpret the adapter service exception that signals a
retry.
For more information about adapter services, see the relevant adapter guides.
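The overall retry pattern (catch the error, classify it, re-throw a retry signal, and re-execute with the original input) can be sketched in plain Java. Here TransientException and ResourceUnavailableException are stand-ins for ISRuntimeException and an adapter's transient error; they are not the Integration Server classes, and the retry loop approximates what Integration Server performs for you when retry is configured:

```java
public class RetrySketch {
    // Stand-in for ISRuntimeException: signals a transient error, so the
    // service should be retried with its original input.
    static class TransientException extends RuntimeException {
        TransientException(Throwable cause) { super(cause); }
    }

    // Stand-in for a back-end failure that the catch logic classifies.
    static class ResourceUnavailableException extends RuntimeException {
        ResourceUnavailableException(String msg) { super(msg); }
    }

    interface Service { String run(String input); }

    // Re-executes the service with its original input until it succeeds,
    // a non-transient error occurs, or the retry limit is reached.
    public static String invokeWithRetry(Service svc, String input, int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            try {
                try {
                    return svc.run(input);                    // the "try" sequence
                } catch (RuntimeException e) {                // the "catch" sequence
                    if (e instanceof ResourceUnavailableException)
                        throw new TransientException(e);      // signal retry
                    throw e;                                  // non-transient: fail
                }
            } catch (TransientException te) {
                if (attempt >= maxRetries)
                    throw (RuntimeException) te.getCause();
                // Transient: loop and re-execute with the original input.
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = invokeWithRetry(in -> {
            if (++calls[0] < 3) throw new ResourceUnavailableException("db down");
            return "processed:" + in;
        }, "PO-1", 5);
        System.out.println(result + " after " + calls[0] + " calls");
        // processed:PO-1 after 3 calls
    }
}
```

Note that the retry re-uses the original, unmodified input, matching the documented behavior.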
Important: The pub.flow:getLastError service must be the first service invoked within
the catch sequence. If it is not first and a preceding service in the catch
sequence fails, the error thrown in the try sequence is overwritten with
the new error.
Note: If the service logic in the try sequence includes an adapter service and a
transient error occurs during adapter service execution, the adapter service
throws an exception that extends the ISRuntimeException. Ensure that
your catch sequence interprets the adapter service exception that signals
retry. For more information, see "Requirements for Retrying a Service" on
page 956.
STEP 1.2.3 - Set flag to indicate whether the service should retry
This step sets the transient error flag based on whether the try sequence failed
because of a transient error. In this example, if a transient error occurred, the variable
isTransientError is set to “true”.
After this step executes, Integration Server exits the catch sequence, exits the outer
sequence, and then executes the BRANCH on ‘/isTransientError’ step.
STEP 2 - Check transient error flag
This step uses the value of isTransientError to determine whether the service should
throw an ISRuntimeException.
If the try sequence executed successfully, isTransientError is null. As a result,
Integration Server falls through to the end of the service because the value of the
switch variable does not match any of the target steps. Integration Server will not
attempt to retry the service.
If the try sequence failed, but the catch sequence determined that the error was not
transient, the catch sequence does not set isTransientError to “true”. It might be
null or the catch sequence might set isTransientError to another value, for example,
“false”. Either way, Integration Server falls through to the end of the service because
the value of the switch variable does not match any of the target steps. Integration
Server will not attempt to retry the service.
If the try sequence failed and the catch sequence determined that the error was
transient, isTransientError is “true”, and as a result, Integration Server executes the
next step.
STEP 2.1 - Throw ISRuntimeException
Integration Server executes this step to invoke the pub.flow:throwExceptionForRetry
service when the value of isTransientError is “true”. This service wraps the exception
generated by the transient error in the try sequence and re-throws it as an
ISRuntimeException.
If the service is configured for retry, Integration Server retries the service if the
maximum number of retries has not been reached. For more information, see
"Configuring Service Retry" on page 193.
Client code is application code that invokes a service on Integration Server. It typically
performs the following basic tasks:
Prompts the user for input values for the service that the client invokes (if the service
takes input)
Places the input values into an input document
Opens a session on Integration Server
Invokes the service
Receives output from the service
Closes the session on Integration Server
Displays the service’s output to the user
Using Designer you can automatically generate client code in Java and C/C++. The
generated client code can serve as a good starting point for your own development.
You can also build client code on your own for browser-based clients and REST clients.
When Designer generates Java client code, Designer replaces any space in a variable
name with an underscore.
The Java client code that Designer generates does not support multiple input or
output variables with the same name.
If you want to work around these limitations, you will need to modify the client code
that Designer generates.
Readme.txt A file that contains information and instructions for the Java
client code. Read this file for information about compiling and
running the Java client application.
Note: If the client will connect to Integration Server using the Secure Socket
Layer (SSL), in addition to following the instructions in the Readme.txt
file, you must ensure that the unlimited strength jurisdiction policy files
(local_policy.jar and US_export_policy.jar) are installed as part of your
JVM. If you are using the JVM that was installed with Integration Server,
no further action is needed. If you are using a different JVM, obtain the
files from the JDK provider.
Important: The provided C libraries are built using JDK 1.1.7. If you want to use
a different version of the JDK to compile C/C++ services, you need to
rebuild the C/C++ libraries with that JDK and then replace the old library
files with the rebuilt ones. For more information about rebuilding the C
libraries, see the README installed with the C/C++ SDK. To rebuild the
C libraries, you need to use the C/C++ SDK. The C/C++ SDK is not installed
by default. To install the C/C++ SDK, select it from the list of installable
components during installation.
CReadme.txt A file that contains information and instructions for the C client
code. Refer to this file for information about compiling, running,
and deploying your C/C++ client application.
ServiceName.mak A file that contains compiler settings for the C/C++ client.
Be sure to update this file with the correct settings for your
environment.
ServiceName.c An example file that contains the C/C++ client code. It is not
intended for use “as is” in custom applications.
Item Description
4 Identifies the service that you want to invoke. The service name is case
sensitive. Be sure to use the same combination of upper and lower case
letters as specified in the service name on Integration Server.
5 Specifies the input values for the service. Specify a question mark (?)
before the input values. The question mark signals the beginning of the
query string that contains the input values. Each input value is represented
as variable=value. The variable portion is case sensitive. Be sure to use the
same combination of upper and lower case letters as specified in your
service. If your service requires more than one input value, separate each
variable=value with an ampersand (&).
Note: Only specify the query string portion of the URL when using the
HTTP GET method.
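Building such a query string programmatically might look like the following sketch, which URL-encodes each value. The class name is hypothetical; the service path and variables are the examples used in this section:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryString {
    // Joins variable=value pairs with '&', prefixes '?', and URL-encodes
    // each value so reserved characters survive in the URL.
    public static String build(Map<String, String> inputs) {
        return "?" + inputs.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> inputs = new LinkedHashMap<>();
        inputs.put("sku", "A1");
        inputs.put("quantity", "1");
        System.out.println("/invoke/sample.webPageDemo/getProductCost" + build(inputs));
        // /invoke/sample.webPageDemo/getProductCost?sku=A1&quantity=1
    }
}
```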
Note: If you are serving the web pages that invoke services from an Integration
Server, you can use a relative URL to invoke the service. By doing so, you can
serve the exact web page from several servers without having to update the
URLs.
Specify the URL for the service in the ACTION attribute and “POST” in the METHOD
attribute. For example:
<FORM ACTION="/invoke/sample.webPageDemo/getProductCost" METHOD="POST">
After the user fills in the form and submits it, the web browser creates a document
that contains the information the user supplied in the HTML form (performs an HTTP
POST). The browser invokes the URL identified in the ACTION attribute, which invokes
the service on Integration Server, and the browser posts the document that contains
the user’s input information to Integration Server. For more information about how the
server creates the IData object that it sends to the service, see "How Input Values are
Passed to the Service the Browser-Based Client Invokes" on page 968.
The resulting IData object contains, for example, a String sku with value A1 and a
String quantity with value 1.
Note: Avoid using input variable names that end in “List.” Although Integration
Server accepts variable names ending in “List,” the resulting IData might
not be structured in the way you need. For example, if you pass in a variable
called skuList , the resulting IData contains a String called skuList and a String
list called skuListList . Additionally, if you pass in variables named sku and
skuList , subsequent sku and skuList variables in the query string might not be
placed in the IData fields as expected.
If you must use “List” at the end of your variable name, consider using
“list” (lowercase) or appending one or more characters at the end of the name
(for example, abcListXX).
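The naming collision can be sketched with a short Python simulation; this only mirrors the naming scheme described above and is not Integration Server code:

```python
def idata_keys(names):
    """Return the keys a request with these input names would occupy,
    given that each input 'x' produces both an 'x' entry and an
    'xList' entry in the resulting IData."""
    keys = []
    for name in names:
        keys += [name, name + "List"]
    return keys

print(idata_keys(["skuList"]))  # ['skuList', 'skuListList']
# With both 'sku' and 'skuList' as inputs, the key 'skuList' is
# claimed twice, which is why values may not land where expected.
print(idata_keys(["sku", "skuList"]).count("skuList"))  # 2
```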
When Browser-Based Clients Pass Multiple Values for the Same Input Variable
When Integration Server receives multiple input values that are associated with the
same variable name, the String variable in the IData object will contain only the value
of the first variable. The String list variable will contain all the values. For example, the
following shows a URL that contains two values for the variable year and the resulting
IData object that Integration Server creates:
/invoke/sample.webPageDemo/checkYears?year=1998&year=1999
[Resulting IData object: year = 1998 (String); yearList = {1998, 1999} (String list)]
Similarly, if the HTML form contains two fields with the same name and a user supplies
values for more than one, the String variable in the IData object contains only the value
of the first variable; the String list variable contains all values. For example, the following
shows sample HTML code that renders check boxes:
<INPUT TYPE="checkbox" NAME="Color" VALUE="blue">Blue<BR>
<INPUT TYPE="checkbox" NAME="Color" VALUE="green">Green<BR>
<INPUT TYPE="checkbox" NAME="Color" VALUE="red">Red<BR>
If the browser user selects all check boxes, the document that is posted to Integration
Server will contain three values for the variable named Color . The following shows the
IData object that the server passes to the service:
[Resulting IData object: Color = blue (String); ColorList = {blue, green, red}
(String list)]
When Browser-Based Clients Pass Multiple Input Variables with the Same
Name
If the URL that a browser-based client passes to Integration Server contains multiple
variables that have the same name, Integration Server determines how to handle the
duplicate variables based on the setting of the watt.server.http.listRequestVars server
configuration parameter.
To have Integration Server create list variables only for duplicate variables, set
watt.server.http.listRequestVars to asNeeded. This is the default.
With this setting, Integration Server creates an IData object that contains:
String variable that contains the first occurrence of each input variable
String list variable that contains all occurrences of each duplicated variable
For example, this request:
/invoke/sample.webPageDemo/checkYears?year=1998&year=1999&month=June
To have Integration Server create list variables for every input variable, set
watt.server.http.listRequestVars to always.
With this setting, Integration Server creates an IData object that contains:
String variable that contains the first occurrence of each input variable
String list variable that contains all occurrences of each input variable
For example, for this request:
/invoke/sample.webPageDemo/checkYears?year=1998&year=1999&month=June
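The two modes can be illustrated with a Python simulation of the documented behavior; this is not the server's implementation, and it assumes the second mode is selected by setting the parameter to always:

```python
from urllib.parse import parse_qsl

def simulate_pipeline(query, mode="asNeeded"):
    """Mirror how duplicate request variables become IData entries.

    asNeeded: a scalar per variable, plus '<name>List' only when the
              variable occurs more than once.
    always:   a scalar (first occurrence) plus '<name>List' with all
              occurrences, for every variable.
    """
    occurrences = {}
    for name, value in parse_qsl(query):
        occurrences.setdefault(name, []).append(value)
    pipeline = {}
    for name, values in occurrences.items():
        pipeline[name] = values[0]            # first occurrence
        if mode == "always" or len(values) > 1:
            pipeline[name + "List"] = values  # all occurrences
    return pipeline

q = "year=1998&year=1999&month=June"
print(simulate_pipeline(q, "asNeeded"))
print(simulate_pipeline(q, "always"))
```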
How Integration Server Returns Output from the Service the Client
Invoked
By default, when a service is invoked by a browser-based client, Integration Server
displays the output from the service in an HTML web page, using a table to render the
output values.
Alternatively, you can assign an output template to the service that the browser-based
client invokes. In this case, Integration Server formats the output using the assigned
output template. Using an output template gives you the opportunity to design how
you want the output to display. With a template you can embed URLs that link to other
resources or that invoke another service to perform the next step of the task that the
browser-based client performs. You can use the results from one service to dynamically
construct how the output is displayed and/or as input into a subsequent service that
is invoked. For more information about output templates, see "About Service Output
Templates" on page 210.
Designer provides you with the ability to compare packages and elements in Integration
Server. The compare tool is useful to compare packages and elements on the same
server or on different servers, and to track changes to a package or element during
the development process. For example, you can use the compare tool to identify the
differences between the development, staging, and production versions of a package or
element. You can use the tool to compare:
Packages
Folders
Flow Services
Integration Server Document Types
The differences between the items that you compare are presented in a compare editor
along with annotations to indicate the changes.
You can also use the compare tool to compare two revisions of an element in a local
service development project. For information, see "Comparing Revisions of an Element"
on page 129.
The ability to compare packages and elements is available only with Integration Server
9.9 and later.
Note: To compare packages and folders, the Integration Server on which they
are located must have the pub.assets:getChecksums service in the WmPublic
package. For additional details on the pub.assets:getChecksums service, see
webMethods Integration Server Built-In Services Reference.
The compare editor, which is different from the element editor, consists of the following
panels:
Change List Panel: Shows, in the top panel, the list of differences between the packages
or elements being compared.
Content Panel: For flow services and IS document types, this panel, which appears
below the Change List panel, provides a drill-down, visual view of the differences
which are listed in summary form in the Change List panel. In the case of packages
and folders, you can right-click a changed item in the Change List panel and select
Compare Contents to open the element-level view. Designer opens a compare editor
that shows the element level view of the changed item that you selected in a new tab.
The Change List panel is the top panel in the compare editor. The Change List panel
lists out the differences between the packages or elements that you compare in a tree
structure. The header text in the panel shows the names of the two packages or elements
that are being compared and the total number of changes. The changes are annotated
with respect to the first element that you selected. The following annotations are used to
indicate the differences:
Changed: An item is present in both packages or elements being compared but has
changed.
Added: An item is present only in the first package or element being compared,
and is not present in the second package or element.
Removed: An item is present only in the second package or element being
compared, and is not present in the first package or element.
Repositioned (x to y): An item has the position x in the second element being
compared and the position y in the first element.
Content Panel
The Content panel is located below the Change List panel. An element-level visual view of
the differences between the elements being compared is shown in the Content panel as
described below:
For a flow service or an IS document type, the difference that you select in the
Change List panel is displayed in detail in the Content panel.
Designer displays the first element on the left side of the Content panel and the
second element on the right side. The paths of the compared elements are
displayed at the top of the respective panels.
Each change is indicated by highlighting an existing item in a package or element on
one side with a box and using a line to link the item to the corresponding item in the
other package or element on the other side, at the position where the item is present
or should have been.
Designer allows you to edit an element that you have locked from the Content panel.
Right-click on the element and select Open in Editor to open the element in an editor.
After you have made changes in the element, save the changes. Designer displays
the Reload Compare Editor dialog box, prompting you to confirm a refresh in the
compare editor. Click OK to refresh the compare editor with the changes to the
element.
Use the toolbar icons or their equivalent keyboard shortcuts listed below to navigate
between the changed items:
Previous difference: CTRL + ,
Next difference: CTRL + .
Use the toolbar icons listed below to merge the changes:
Merge changes from left to right:
Merge changes from right to left:
Merging IS Elements
Before you perform the merge operation, you must ensure that you have write access to
the element. The merge icons are enabled only if there are any changes. If the elements
are read-only, the corresponding icons are disabled. For example, if the elements on
the right side are read-only, the left to right merge icon is disabled. Changes cannot be
merged if:
The IS element is not locked for edit
The IS element is retrieved from a VCS repository
The IS element does not have Write ACL privilege
Some changes depend on other conditions. For example, in an IS document type,
you cannot merge the Time to live property if the Discard property is set to false.
For more information on IS element properties, see "Properties" on page 1005.
option will jump to the item in the Package Navigator view for the first
package or folder.
Removed: This indicates that an item is present only in the second package or
folder, and selecting the Show Left Element in Package Navigator option will take
you to the first package or folder in the Package Navigator view from which
the item was removed or under which it was expected to be present.
b. Select Show Right Element in Package Navigator with the following results,
depending on whether the item is shown as Changed, Added, or Removed:
Changed: The corresponding item in the second package or folder is shown in
the Package Navigator view.
Added: This indicates that an item is present only in the first package or folder
being compared, and selecting the Show Right Element in Package Navigator
option will take you to the second package or folder in the Package Navigator
view from which the item was removed or under which it was expected to be
present.
Removed: This indicates that an item is present only in the second package
or folder, and selecting the Show Right Element in Package Navigator option will
jump to the item in the Package Navigator view for the second package or
folder.
c. Select Open in Compare Editor to open the element-level view of the difference in
another instance of the compare editor.
Note: The Open in Compare Editor option is only available for Changed items.
5. For packages, in the List of changes panel, select the required property under
Properties. The compare editor displays the comparison of properties in the IS Asset
Compare panel.
Note: Properties are compared only for packages, and not for folders.
Designer helps you publish REST API descriptors created on Integration Server to
API Portal. Before you can publish the REST API descriptors, you must configure the
required connection to API Portal.
Note: For information about publishing REST API descriptors to API Portal after
configuring the required connection, see "Publishing REST API Descriptors to
API Portal" on page 532.
Field Description
Save password (in the Eclipse secure Indicates whether the password for the specified
storage) user account should be saved in Eclipse secure
storage. API Portal uses this password from the
Eclipse secure storage whenever user authorization
is required. If you want to save the password in
Eclipse secure storage, select this check box.
If you decide not to save the password in Eclipse
secure storage, you must specify your password
each time your user authorization is required for
connecting to API Portal.
Tenant The tenant for which the REST API descriptors are
to be published.
5. To verify whether API Portal can be accessed by using the specified information,
click Test.
6. To store the connection configuration details, click OK.
A connection configuration is added to the Connections page with the specified
details. The first connection configuration that you create is automatically marked
as default. This default configuration is indicated with a check mark on the
Connections page. Designer always uses the default connection configuration for
API Portal.
Note: You can change the default connection configuration for API Portal. For
more information, see "Changing the Default Connection Configuration
for API Portal" on page 984.
Name The name to use for the API Portal connection configuration.
Host The host name of API Portal for the connection configuration.
Tenant The tenant for which the REST API descriptors are to be
published.
Field Description
Variables to expand per Specifies the number of child variables that Designer
document displays automatically for each document variable.
The minimum is 1 and the maximum is 100. The
default is 25 variables.
This preference applies when entering values for a
document variable only.
Note: If the service signature contains very complex document structures where the
documents are nested and deep, Software AG Designer may stop responding
if you try to expand those complex document structures, with the default
document expansion settings.
If you are developing services for connectors with signatures having
complex document structures where the documents are nested and deep, the
recommended document expansion setting (Window > Preferences > Software AG
> Document Expansion) values are as follows:
Document expansion level = 2
Recursive document expansion level = 1
Variables to expand per document = 10
On the Service Development Preferences page, you can specify the behavior of editors
and views in the Service Development perspective. You can also use the Service
Development Preferences page to define property values and launching preferences for
elements.
You can open the Service Development Preferences page by selecting Window >
Preferences and then selecting Software AG > Service Development from the navigation tree.
Preference Description
Show variables with fixed When selected, Designer displays the variables
values with fixed default values, which are hidden by
default. You cannot override the default values
assigned to these variables by mapping them to
other variables or by assigning input values
to them during service execution. When the
Show variables with fixed values property is selected,
Designer displays these variables in the content
and structure of service signatures, document and
pipeline contents, and in the Run Configurations,
Enter Input for serviceName , and Enter Input for
variableName dialog boxes.
Preference Description
Automatic polling of adapter When selected, Designer will reload metadata from
metadata the adapter every time it creates a new adapter
service/notification. This option can be useful for
adapter developers that are working on designing
the metadata.
Use grouping in tree browser Indicates whether tree structures for adapters
will group items together. This may improve
performance and may make it easier to locate items
in tree browsers.
If you selected the Use grouping in tree browser
check box, in the Limit visible items per group to
field, specify the maximum number of items that
Designer groups together.
Preference Description
Do not include internal Select this option to exclude internal properties and
properties of element supporting files associated with elements being
compared.
Show the change on Select this option to display the element-level view of
single-click a change on single-click.
Show Status Bar Select this option to display the status of an item as a
messages as tooltip tooltip.
Reload compare editor Select one of the following options to reload the
change list after you merge and save an item:
Prompt: Displays a dialog box prompting to confirm
the compare editor reload after an item is merged
and saved.
Never: Never reloads the compare editor after an
item is merged and saved.
Always: Automatically performs the compare editor
reload after every merge and save operation.
when creating new instances of the element. You can create multiple templates for an
element type.
Note: You can create property templates for flow, C/C++, and Java services.
Preference Description
Note: You will not be able to specify values for properties that must be unique
for each element such as Universal name and Output template when defining
templates.
Preference Description
Services List Use this list to specify the list of services that appear
under Insert on the flow service editor Palette view.
Each row in the list represents a single command on
the menu. Commands will appear on the menu in
the order you specify them. You may add as many
services as you need.
The Name column specifies a label for the service.
The Service column specifies the services associated
with the labels on the menu.
Validate flow service When Validate service references while saving is cleared,
Designer does not validate the referenced services
while saving a flow service. By default, this check box
is not selected.
When Validate service references while saving is selected,
Designer validates all the referenced services while
saving a flow service.
Label Properties In the Layout tab, specifies the height and width used
for displaying the name of the service for an INVOKE
step.
Default Pipeline Tree Specifies the tooltip that appears for pipeline
Tooltip variables. Select one of the following:
Select... To...
Service Out, and Transformers columns to be scrolled
horizontally and vertically independent of each
other. This makes it easy to scroll through data when
mapping a large amount of data in the Pipeline view.
To hide the horizontal and vertical scroll bars, clear
the Enable independent scrolling check box.
Preference Description
Protected line background Indicates the color to shade the protected sections
color of the Java service and the C/C++ service on the
Source tab of the Java or C/C++ service editor. The
Preferences window displays the current color on a
button. To change the color:
1. Click the button to display the Color window.
2. Select a new color.
3. Click OK.
Default Java service signature Indicates whether you want to use an IData
signature or a Values signature for new Java
services. Select:
Launching Preferences
Use the Launching preferences page to indicate whether Designer should save any open
elements with unsaved changes before starting a launch configuration.
Preference Description
Save required dirty editors Indicates whether Designer prompts you to save
before launching any elements with unsaved changes before starting
a launch configuration.
Specify one of the following:
Always. Designer saves any elements with
unsaved changes automatically and does not
prompt you to save any elements.
Never. Designer does not save any elements with
unsaved changes, nor does Designer prompt you to
save any elements.
Prompt. Designer prompts you to save any
elements with unsaved changes.
Preference Description
Move project to When this option is selected, Designer allows you to set
Integration Server any repository in your workspace as the local repository
package resource directory, and a linked directory is created under
Integration Server_directory\instances\default\packages.
TCP Indicates the TCP URL of the local Docker daemon that listens
Connection for the Docker Remote API requests.
Preference Description
Update local references when When selected, Designer updates local references
pasting multiple elements when copying and pasting a group of elements.
When two elements within a group refer to each
other, it is called a local reference. If you clear this
check box, Designer retains the original references
in the copied elements.
Hide generated flow services When selected, Designer does not display flow
services automatically generated by a process in
the Package Navigator view.
Number of elements to cache Specifies the number of elements that you want
to cache per Designer session. The higher the
number of elements, the more likely an element
will be in the cache, which reduces network traffic
and speeds up Designer by caching elements that
are frequently used. The total number of cached
elements includes elements on all the servers to
which you are connected.
Click Clear Cache to remove all cached elements
from memory. Clearing the cache does not
remove flow services with breakpoints, flow
services that are currently being debugged, and
unsaved elements. Keep in mind that the cache is
automatically cleared when you close Designer or
when you refresh the session.
Reset Tip Dialogs Re-enables message boxes and reminders that have
been disabled with the Don't Show This Again check
box.
Preference Description
Number of results to display Specifies the number of results that you want
Designer to display in the Results view. Set this
preference to an integer smaller than 100. The
default is 5.
set the Number of results to display option
in the Results View preferences page to a
smaller value, preferably 1, to ensure that the
performance is not affected.
Run/Debug Preferences
Use the Run/Debug preferences page to customize the settings while running or
debugging a service.
Preference Description
Always show When selected, Designer displays the No input dialog box, every
the No input time Designer runs a service with no input parameters.
dialog
When this check box is cleared, Designer no longer displays the
No input dialog box when executing a service with no input
parameters.
This option is selected by default.
Note: You can clear the Always show the No input dialog preference
by selecting the Do not show this dialog again check box in the
No input dialog box.
Preference Description
Encoding for Specifies the encoding that Designer uses when creating a
WSDL URL consumer web service descriptor or a WSDL-first provider web
service descriptor from a WSDL whose URL contains special
characters.
If you select the Encoding for WSDL URL check box, do one of the
following:
To use the default platform encoding, select Default.
To specify an encoding other than the platform default, select
Other. Then select the encoding from the list next to Other.
50 Properties
■ Integration Server Properties ................................................................................................... 1006
■ Package Properties .................................................................................................................. 1009
■ Element Properties ................................................................................................................... 1015
■ Document Type Properties ...................................................................................................... 1017
■ Flat File Dictionary Properties .................................................................................................. 1024
■ Flat File Element Properties .................................................................................................... 1024
■ Flat File Schema Properties .................................................................................................... 1041
■ JMS Trigger Properties ............................................................................................................ 1044
■ Link Properties ......................................................................................................................... 1055
■ OData Service Properties ........................................................................................................ 1057
■ REST V2 Resource Properties ................................................................................................ 1062
■ REST API Descriptor Properties .............................................................................................. 1063
■ Schema Properties ................................................................................................................... 1066
■ Schema Component Properties ............................................................................................... 1067
■ Service Properties .................................................................................................................... 1084
■ Specification Properties ............................................................................................................ 1100
■ Transformer Properties ............................................................................................................ 1101
■ Variable Properties ................................................................................................................... 1102
■ Web Service Connector Properties .......................................................................................... 1107
■ Web Service Descriptor Properties .......................................................................................... 1113
■ webMethods Messaging Trigger Properties ............................................................................. 1128
Integration Server property information is available from the Service Development >
Package Navigator view of Designer.
Use the Properties dialog box to view and edit properties for Integration Servers and
packages. You can also use the Properties dialog box to view general information and
permissions for Integration Server elements such as document types, services, flow
steps, JMS triggers, web service connectors, and web service descriptors.
You can open the Properties dialog box by selecting the server, package, or element in
Package Navigator view and selecting File > Properties. You can also open the Properties
dialog box by right-clicking the server, package, or element and selecting Properties.
Property Description
View event Specifies the type of event whose subscriptions are displayed in
subscribers for this page. Select the event type for which you want to add, edit, or
delete a subscription.
The table in this page displays subscribers to the selected event
type as follows:
you create a filter depends on the event type you are
subscribing to.
The asterisk (*) character represents any string of
characters and is the only wild-card character allowed
in the pattern string. All other characters are treated
literally. Pattern strings are case sensitive.
You use the buttons on this page to add, edit, and delete subscriptions.
My Locked Elements
Use the My Locked Elements page to unlock elements for the selected server.
To open this page, in Designer select File > Properties > My Locked Elements.
Property Description
Select the Select the elements that you want to unlock. CTRL+click to select
Elements to more than one or click Unlock All to unlock all of the elements in the
Unlock list.
Property Description
ACLs The ACLs defined on the Integration Server to which you are
connected. These include the default ACLs that were installed
with the server. To edit an ACL, use the Integration Server
Administrator.
User Group Allowed. The user group(s) that have been explicitly allowed
Association for to access the packages, folders, services, or other elements
'[ACL name]' associated with this ACL. To edit a user group, use the
Integration Server Administrator.
Denied. The user group(s) that have been explicitly denied access
to the packages, folders, services, or other elements associated
with this ACL.
Resulting Displays the names of users that the ACL authorizes, given the
Users for '[ACL current seings in the Allowed and Denied lists. The server builds
name]' this list by looking at the groups to which each user belongs
and comparing that to the groups to which the ACL allows or
denies access. For details on how the server determines access, see
webMethods Integration Server Administrator’s Guide.
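A minimal sketch of that computation, assuming (as is common for ACL evaluation, though not stated explicitly here) that membership in a denied group overrides membership in an allowed group; the user and group names are invented for illustration:

```python
def resulting_users(memberships, allowed, denied):
    """List the users an ACL authorizes, given each user's groups.

    A user qualifies if they belong to at least one allowed group
    and to no denied group (deny wins over allow)."""
    result = []
    for user, groups in memberships.items():
        groups = set(groups)
        if groups & set(allowed) and not groups & set(denied):
            result.append(user)
    return sorted(result)

members = {
    "alice": ["Developers"],
    "bob": ["Developers", "Contractors"],  # denied group wins
    "carol": ["Administrators"],           # not in any allowed group
}
print(resulting_users(members, allowed=["Developers"], denied=["Contractors"]))
# ['alice']
```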
Server Information
View general information about a server from the Server Information page. To open this
page, select File > Properties > Server Information.
Property Description
User Name The user name you use to connect to this Integration Server.
Proxy Specifies the proxy server as set on Window > Preferences > General >
Network Connections.
Package Properties
Use the Properties dialog box to view information about packages on the Integration
Server and to assign package dependencies, permissions, replication services, startup
and shutdown services.
To open the Properties dialog box, click the package in the Package Navigator of
Designer and select File > Properties.
Package Information
The Element Information page displays the type and name of the Integration Server
package.
To open this page, click the package in the Package Navigator of Designer and select File
> Properties > Element.
Package Dependencies
The Package Dependencies page displays the packages on which this package is
dependent. For example, if a package needs the services in another package to load
before it can load, you need to set up package dependencies. You might also want
to identify package dependencies if a startup service for a package invokes a service
in another package. The startup service cannot execute if the package containing the
invoked service has not yet loaded.
To open this page, click the package in the Package Navigator of Designer and select File
> Properties > Package Dependencies.
Property Description
Package The name of the package you want webMethods Integration Server
to load before the package selected in Package Navigator.
Version The version number of the package you want loaded. More than
one version of the same package might contain the services and
elements that a dependent package needs Integration Server to
load first. A dependency declared on a version is satisfied by a
package with a version that is equal to or greater than the specified
version. For example, to specify versions 3.0 or later of a package,
type 3.0 for the version number. To specify versions 3.1 or later,
type 3.1.0 for the version number.
You can also use an asterisk (*) as a wildcard in the version number
to indicate that any version number equal to or greater than the
specified version will satisfy the package dependency. If any
version of the package satisfies the package dependency, type *.*
as the version number.
You use the buttons on this page to add, edit, and delete a package dependency.
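The matching rule described above (equal to or greater than the specified version, with * as a wildcard) can be sketched as follows; this is one illustrative reading of the rule, not the server's actual implementation:

```python
def satisfies(required, actual):
    """True if 'actual' satisfies a dependency on version 'required'.

    A '*' in the required version matches anything from that component
    on; otherwise the actual version must compare greater than or equal
    to the required one, component by component."""
    req_parts = required.split(".")
    act_parts = [int(p) for p in actual.split(".")]
    for i, part in enumerate(req_parts):
        if part == "*":
            return True                     # wildcard matches anything
        a = act_parts[i] if i < len(act_parts) else 0
        if a != int(part):
            return a > int(part)
    return True                             # equal on all components

print(satisfies("3.0", "3.1"))    # True: 3.1 is a later version
print(satisfies("3.1.0", "3.0"))  # False: 3.0 is earlier than 3.1.0
print(satisfies("*.*", "1.0"))    # True: any version satisfies *.*
```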
Package Settings
The Package Settings page displays general information about a package including
package and JVM versions, build and patch numbers, publishers, and patch history.
To open this page, click the package in the Package Navigator of Designer and select File
> Properties > Package Settings.
Property Description
Package Specifies the version number for the package. Version numbers
version need to be in one of the following formats: X.x or X.x.x (for
example, 1.0, 2.1, 2.1.3, or 3.1.2). By default, Designer assigns the
version number 1.0 to a new package.
Build Displays the build number of the package. The build number is
a generation number that a user assigns to a package each time
the package is regenerated. For example, a user might generate
version 1.0 of the “Finance” package ten times and assign build
numbers 1,2,3…10 to the different generations or builds of the
package.
The build number is not the same as the package version number.
One version of a package might have multiple builds.
Description Displays a brief description of the package written by the user who
created the package release.
JVM version Displays the version of the JVM (Java virtual machine) required to
run the package.
Publisher Displays the name of the publishing server that created the
package release.
Patch number Displays the patch numbers included in this release of the package.
Patch history Displays a list of all the patches installed for this package release.
When the server administrator installs a full release of the package
(a release that includes all previous patches for the package),
Integration Server removes the existing patch history. This helps
the server administrator avoid potential confusion about version
numbers and re-establish a baseline for package version numbers.
Property Description
Publisher The name of the publishing server that created the package release.
Patch Number The patch numbers included in this release of the package.
Package Permissions
You assign an ACL to an element in the Permissions page of the Properties dialog box.
Depending on the element you select, certain access levels are displayed. For example,
for a package, you can only set List access. For details about the different levels of access
available for elements, see webMethods Integration Server Administrator’s Guide.
To open this page, click the package in the Package Navigator of Designer and select File
> Properties > Permissions.
Property Description
List ACL Users in the Allowed list of this assigned ACL can see that the
element exists and view the element’s metadata (input, output,
etc.).
Property Description
Startup services Displays the list of services that can be used as startup services
and the list of assigned startup services in the package. A
startup service is one that the webMethods Integration Server
automatically executes when it loads a package into memory.
Startup services are useful for generating initialization files or
assessing and preparing (for example, setting up or cleaning up)
the environment before the server loads a package. However, you
can use a startup service for any purpose. For example, you might
want to execute a time-consuming service at startup so that its
cached result is immediately available to client applications.
Available services Displays a list of the services that can be used as startup services
for the package. Any service in the package can be a startup
service. After you select a service as a startup service, the service
does not appear in the Available services list.
You use the following buttons under Startup services to add and
remove startup services.
Shutdown services Displays the list of services that can be used as shutdown services
and the list of assigned shutdown services in the package. A
shutdown service is one that the webMethods Integration Server
automatically executes when it unloads a package from memory.
Shutdown services are useful for executing clean-up tasks such as
closing files and purging temporary data. You could also use them
to capture work-in-progress or state information before a package
unloads.
Element Properties
Use the Properties dialog box to view information about any Integration Server element
listed in the Package Navigator. Integration Server elements include folders, subfolders,
document types and services.
To open the Properties dialog box, click any Integration Server element in the Package
Navigator of Designer and select File > Properties.
Element Information
The Element Information page displays the type and name of the Integration Server
element.
To open this page, click the element in the Package Navigator of Designer and select File
> Properties > Element Information.
Element Permissions
You assign an ACL to an element in the Permissions page of the Properties dialog box.
Depending on the element you select, certain access levels are displayed. For example,
for a package, you can only set List access. For details about the different levels of access
available for elements, see webMethods Integration Server Administrator’s Guide.
The ACLs assigned to an element are mutually exclusive; that is, an element can have
different ACLs assigned for each level of access.
To open this page, click the element in the Package Navigator of Designer and select File
> Properties > Permissions.
Property Description
List ACL Users in the Allowed list of this assigned ACL can see that the
element exists and view the element’s metadata (input, output,
etc.).
Read ACL Users in the Allowed list of this assigned ACL can view the source
code and metadata of the element.
Write ACL Users in the Allowed list of this assigned ACL can lock, edit,
rename, and delete the element.
Execute ACL Users in the Allowed list of this assigned ACL can execute the
service. This level of access only applies to services and web
service descriptors.
Note: This property applies to services only. While you can set
an execute ACL for web service descriptors, Integration
Server always performs ACL checking when a web service
descriptor is called.
Server Server definition name for the Integration Server on which the
element resides.
REST Resource Click Configure... to open the REST Resource Configuration page
and configure REST resources for the selected service.
Property Description
REST URL The format of the URL that must be followed when clients send
REST requests to the service acting as a REST resource.
Supported Methods The HTTP methods that the REST URL supports. The following
methods are supported: GET, PUT, POST, PATCH, and DELETE.
You can use the following buttons on this page to add, edit, and delete a REST resource
configuration:
Property Description
Model type Specifies the content model for this document type. The content
model provides a formal description of the structure and allowed
content for a document type which can then be used to validate an
instance document.
The Model type property is display-only. To change the model type
for a document type, modify the XML schema definition, and
recreate the document type.
The contents of an IS document type with a Model type property
value other than “Unordered” cannot be modified.
The Model type property can have one of the following values:
Value... Description...
Choice One and only one of the fields in the document type
can appear in the instance document.
The choice model type corresponds to a complex
type definition that contains a choice compositor in the
model group.
@attributeName field for the attribute value and a *body
field for the simple content.
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Property Description
Source URI Displays the location or URI of the source used to create this
document type. If this document type was not based on a source
and was instead created from scratch, the Source URI property is
empty.
Linked to source Indicates whether the document type reflects the content and
structure of the source from which it was created. When set to
true, the contents of the document type cannot be edited. When
set to false, the document type can be edited but may no longer
accurately reflect the content and structure of the source.
Schema type name Displays the name of the complex type definition with which the
IS document type is registered. This is the complex type definition
from which the IS document type was created.
This property applies to IS document types created from XML
Schema definitions only.
Property Description
Provider definition Displays the name of the object that corresponds to the publishable
document type on the messaging provider. This property displays
Not Publishable if the document type cannot be published. This
property displays Publishable Locally Only if instances of the
document type can be published and subscribed to within this
Integration Server only.
Encoding type Specifies the format used to encode and decode instances of this
publishable document type.
Select... To...
Time to live Specifies how long the messaging provider keeps instances of this
publishable document type. If the time to live elapses before a
subscriber retrieves the document and sends an acknowledgement,
the messaging provider discards the document.
Storage type Specifies whether instances of this document type are stored in
memory or on disk.
Select... To...
The acknowledgment allows the sending resource
to remove its copy of the document from disk
storage.
Property Description
Namespace name The URI that will be used to qualify the name of this document
type. You must specify a valid absolute URI.
Local name A name that uniquely identifies the document type within the
collection encompassed by Namespace name. The name can be
composed of any combination of letters, digits, or the period (.),
dash (-), and underscore (_) characters. Additionally, it must begin
with a letter or the underscore character.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Property Description
Ordered Specifies whether child records must appear in the flat file in the
same order in which they appear in the record definition.
Select... To...
Max repeat Maximum number of times that instances of this record definition can
repeat in the flat file. Set to Unlimited if instances of this record
definition can repeat any number of times in the flat file. Set to 0
if the record can appear once but cannot repeat. The default is 1,
meaning the record can appear once and repeat once.
If you set the Max repeat value to an integer and the validate
parameter in the pub.flatFile:convertToValues service is set to true,
Integration Server generates errors when the record repeats more
than the number of times allowed by the Max repeat value.
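The Max repeat semantics above can be sketched as a simple check. This is an illustrative sketch under the stated semantics (an integer N means the record may appear once and then repeat N more times), not the actual pub.flatFile:convertToValues validation code:

```python
def check_max_repeat(occurrences: int, max_repeat) -> bool:
    """Illustrative validation of the Max repeat semantics described above:
    'Unlimited' allows any count; an integer N lets the record appear once
    and then repeat N more times, so 0 means at most one occurrence and
    the default of 1 means at most two occurrences."""
    if max_repeat == "Unlimited":
        return True
    return occurrences <= max_repeat + 1
```

Under this reading, a record seen twice passes with the default of 1 but fails when Max repeat is 0.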
Area The area assigned to this record definition. The Areas property for
the flat file definition determines the possible values that can be
assigned to a record.
Position Integer indicating the position of the record in the flat file.
Select Not Used if you do not want to specify a position for the
record.
Allow undefined data Specifies whether an instance of the record definition can contain
undefined data and not be considered invalid. A record definition
can only allow undefined data if the flat file schema is configured
to allow undefined data. (When the Flat File Definition tab is
active, in the Properties view, the Allow undefined data property is
set to True.)
Select... To...
Check fields Specifies whether extra fields in the record instance are considered
errors.
Select... To...
Alternate name Another name for the record definition. When an IS document
type is generated from a flat file schema, the alternate name is
used as the name of the document field that corresponds to this
record definition.
Property Description
Ordered Specifies whether child records must appear in the flat file in the
same order in which they appear in the record reference.
Select... To...
False Specify that child records in the flat file can appear
in any order.
Property Description
Max repeat Maximum number of times that instances of this record reference can
repeat in the flat file. Set to Unlimited if instances of this record
reference can repeat any number of times in the flat file. Set to 0
if the record can appear once but cannot repeat. The default is 1,
meaning the record can appear once and repeat once.
If you set the Max repeat value to an integer and the validate
parameter in the pub.flatFile:convertToValues service is set to true,
Integration Server generates errors when the record repeats more
than the number of times allowed by the Max repeat value.
Area The area assigned to this record reference. The Areas property for
the flat file definition determines the possible values that can be
assigned to a record reference.
Position Integer indicating the position of the record in the flat file.
Select Not Used if you do not want to specify a position for the
record.
Allow undefined data Specifies whether an instance of the record reference can contain
undefined data and not be considered invalid. A record reference
can only allow undefined data if the flat file schema is configured
to allow undefined data. (When the Flat File Definition tab is
active, in the Properties view, the Allow undefined data property is
set to True.)
Select... To...
record reference in a flat file contain undefined
data.
Check fields Specifies whether extra fields in the record instance are considered
errors. This value is determined by the referenced record
definition.
Alternate name Another name for the record definition. This value is determined
by the referenced record definition.
When an IS document type is generated from a flat file schema,
the alternate name of the record definition is used as the name of
the document field that corresponds to this record reference.
Property Description
Select... To...
Extractor Field number in the record that contains the composite you want
to extract. This pulls the field or composite data from the record,
or pulls the subfield data from the composite. If you leave this
property empty, the composite will not be extracted.
Click to open the Extractors dialog box and specify the
extractor.
For a composite definition in a record reference, the Extractor value
is determined by the composite definition in the referenced record
definition.
Select... To...
Check fields Specifies whether extra fields in the composite instance are
considered errors.
Select... To...
Alternate name Another name for the composite definition. When an IS document
type is generated from a flat file schema, the alternate name is
used as the name of the document field that corresponds to this
composite definition.
Property Description
Extractor Field number in the record that contains the composite you want
to extract. This pulls the field or composite data from the record,
or pulls the subfield data from the composite. If you leave this
property empty, the composite will not be extracted.
Click to open the Extractors dialog box and select an extractor.
Value... Description...
Check fields Specifies whether extra fields in the composite instance are
considered errors. This value is determined by the referenced
composite definition.
Alternate name Another name for the composite definition. This value is
determined by the referenced composite definition.
When an IS document type is generated from a flat file schema,
the alternate name is used as the name of the document field that
corresponds to this composite definition.
Property Description
Select... To...
Note: This property does not apply to field definitions in flat file
dictionaries.
Extractor Location of the data to extract for this field. Click to open the
Extractors dialog box and specify an extractor.
The extractor works for a field only if field delimiters have been
defined for this flat file schema.
Select... To...
Note: This property does not apply to field definitions in flat file
dictionaries.
Validator Specifies the type of validator to use to perform validation for the
field.
Click to open the Validators dialog box and select a validator.
Select... To...
Format service Enter the fully-qualified name of the service to use to format
data from this field. You can click to navigate to and select a
service.
Alternate name Another name for the field definition. When an IS document
type is generated from a flat file schema, the alternate name is
used as the name of the String field that corresponds to this field
definition.
ID Code IDCode for the field definition. The IDCode is provided in a SEF
file and is used by the WmEDI package.
Data type Data type for the field as specified in the SEF file. This information
is used by the WmEDI package.
Note: This property does not apply to field definitions in flat file
dictionaries.
Property Description
Select... To...
Extractor Location of the data to extract for this field. Click to open the
Extractors dialog box and specify an extractor.
The extractor works for a field only if field delimiters have been
defined for this flat file schema.
Select... To...
Validator Specifies the type of validator to use to perform validation for the
field as determined by the referenced field definition.
Value... Description...
Format service Enter the fully-qualified name of the service to use to format data
from this file as determined by the referenced field definition.
Alternate name Another name for the field definition as determined by the
referenced field definition. When an IS document type is
generated from a flat file schema, the alternate name is used as the
name of the String field that corresponds to this field definition.
Data type Data type for the field as specified in the SEF file. This information
is used by the WmEDI package.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Property Description
Set Click to browse to and select the default record for this flat file
schema from a flat file dictionary. This record is used to parse an
undefined data record when the pub.flatFile:convertToValues service
fails to find a match between the flat file and the flat file schema.
Note: If the flat file you are parsing does not contain record
identifiers, you must select a default record. By selecting a
default record, a CSV (comma separated values) file can be
parsed as a special case of record with no record identifier,
but with fixed field and record delimiters.
Delete Click to delete the default record for this flat file schema.
The actual record definition still exists, but is no longer assigned to
this flat file schema.
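The CSV special case described in the note above can be sketched as a fallback parse against a default record. This is an illustrative sketch with hypothetical names, not the actual flat file parser:

```python
def parse_with_default_record(line: str, default_fields, field_delim=","):
    """Illustrative fallback: when a line carries no record identifier,
    parse it against a hypothetical default record definition by splitting
    on the fixed field delimiter and pairing each value with the defined
    field names.  A CSV line is the classic case."""
    values = line.rstrip("\r\n").split(field_delim)
    return dict(zip(default_fields, values))
```

For example, the line `Acme,42,US` parsed against a default record with fields name, qty, and country yields one value per field.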
Settings Properties
In the Properties view, under Settings, you can specify whether undefined data is
allowed, assign names to particular sections of the flat file schema, and designate a
floating record.
Property Description
Select... To...
service will generate errors when undefined data is
encountered.
Floating record Identifies the record in the flat file schema that will act as the
floating record. By designating a floating record, you enable that
record to appear in any position within a flat file without causing a
parsing validation error.
You can specify the record name or the alternate name that is
assigned to the record.
If you do not use this property, validation errors will occur if the
record structure of an inbound document does not match the
record structure defined in its flat file schema. For information
on avoiding this type of error, see the Flat File Schema Developer’s
Guide.
Property Description
Ordered Specifies whether records must appear in the flat file in the same
order in which they appear in a flat file schema.
Note: This property applies only to records that appear at the root
of the flat file schema, not records that are child elements of
records.
Select... To...
true, Integration Server generates errors when
the records do not appear in the defined order.
Property Description
Select... To...
Transaction type Indicates whether or not the JMS trigger receives and processes
messages as part of a transaction.
Value Description
Join expires Indicates whether the join expires after the time period specified
in Expire after.
Select... To...
Expire after Specifies how long Integration Server waits for the remaining
documents in the join. The default join time-out is 1 day.
Execution user Specifies the name of the user account whose credentials
Integration Server uses to execute a service associated with the
JMS trigger. You can specify a locally defined user account or a
user account defined in a central or external directory.
Property Description
Select... To...
Transaction type Indicates whether or not the JMS trigger receives and processes
messages as part of a transaction.
Value Description
Execution User Specifies the name of the user account whose credentials
Integration Server uses to execute a service associated with the
JMS trigger. You can specify a locally defined user account or a
user account defined in a central or external directory.
Property Description
Max execution threads Specify the maximum number of messages that Integration Server
can process concurrently. Integration Server uses one thread to
process each message. The default is 1 server thread.
Max batch messages Specify the maximum number of messages that the trigger service
can receive at one time. If you do not want the trigger to perform
batch processing, leave this property set to 1. The default is 1.
Connection count Specifies the number of connections this trigger makes to the JMS
provider. Multiple connections can improve trigger throughput,
but keep in mind that each connection requires a dedicated
Integration Server thread, regardless of the current throughput.
The default is 1.
Note: If you specify a connection count greater than one, the alias
associated with this trigger must be configured to create a
new connection for each trigger. For more information about
JMS connection aliases, refer to webMethods Integration Server
Administrator’s Guide.
Suspend on Error Specifies that the Integration Server suspends the JMS trigger
when an exception occurs during trigger service execution. This
property is available for serial triggers only.
Select... To...
Retry interval Specifies the length of time Integration Server waits between retry
attempts. The default is 10 seconds.
On retry failure Indicates how Integration Server handles a retry failure for a JMS
trigger. A retry failure occurs when Integration Server reaches the
maximum number of retry attempts and the trigger service still
fails because of an ISRuntimeException.
This property also determines how Integration Server handles a
transient error that occurs during trigger preprocessing.
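The retry behavior above can be sketched as a simple loop. This is an illustrative sketch, with a stand-in exception class and hypothetical function names, not the Integration Server trigger implementation:

```python
import time

class ISRuntimeException(Exception):
    """Stand-in for the transient error type described above."""

def run_with_retries(service, max_retries=3, retry_interval=10.0):
    """Illustrative sketch of the retry behavior described above: invoke a
    trigger service, wait 'retry_interval' seconds between attempts, and
    give up (a 'retry failure') once the maximum number of retry attempts
    has been reached and the service still fails."""
    for attempt in range(max_retries + 1):
        try:
            return service()
        except ISRuntimeException:
            if attempt == max_retries:
                raise               # retry failure: handled per 'On retry failure'
            time.sleep(retry_interval)
```

A service that succeeds on its third attempt completes normally; one that fails every attempt surfaces the exception so the retry-failure handling can take over.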
Select... To...
trigger service at a later time when the resources
needed by the trigger service become available.
Property Description
Detect duplicate Enables exactly-once processing for the JMS trigger and instructs
the server to check a message’s redelivery count to determine
whether the trigger has received the message before.
Select... To...
the server to check a document’s redelivery count to
determine whether the trigger received the document
previously.
The redelivery count indicates the number of times the
routing resource has redelivered a document to the
trigger.
History time to live Specifies the length of time the document history database
maintains an entry for a message processed by the JMS trigger.
During this time period, the Integration Server discards any
messages with the same universally unique identifier (UUID) as an
existing document history entry for the trigger. When a document
history entry expires, the Integration Server removes it from the
document history database. If the trigger subsequently receives
a message with same UUID as the expired and removed entry,
the server considers the copy to be new because the entry for the
previous message has been removed from the database.
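The history-based duplicate check described above can be sketched as follows. Class and method names are hypothetical; this illustrates the TTL behavior only, not the actual document history database:

```python
import time

class DocumentHistory:
    """Illustrative sketch of the duplicate check described above: remember
    each message UUID for 'ttl' seconds and report any message seen again
    within that window as a duplicate.  Once an entry expires, the same
    UUID is treated as a new message."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.seen = {}                      # uuid -> time first processed

    def is_duplicate(self, uuid: str, now=None) -> bool:
        now = time.time() if now is None else now
        # Expire old entries so a re-sent UUID counts as new again.
        self.seen = {u: t for u, t in self.seen.items() if now - t < self.ttl}
        if uuid in self.seen:
            return True
        self.seen[uuid] = now
        return False
```

With a 60-second TTL, a UUID seen again after 30 seconds is discarded as a duplicate, but the same UUID arriving after 100 seconds is treated as new.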
Link Properties
Use the Properties view to apply conditions to the link you have drawn between two
variables or specify which element of an array you want to link to or from.
To view properties for a link, double-click the link in the Pipeline editor.
Property Description
Copy condition Specifies the expression that must evaluate to true before
the Integration Server will execute the link between fields.
The server evaluates the condition at run time only if
the Evaluate copy condition property is set to True. Click
to specify a condition. Use the syntax provided by
webMethods to write the condition. For details on the
syntax, see "Conditional Expressions" on page 1189.
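The copy-condition behavior above can be sketched as follows. The function and the predicate are hypothetical stand-ins for a pipeline link guarded by a webMethods conditional expression, not the actual flow engine:

```python
def copy_if(pipeline: dict, source: str, target: str, condition=None):
    """Illustrative sketch of a conditional link: copy 'source' to 'target'
    only when the condition (a predicate over the pipeline, standing in
    for a webMethods conditional expression) evaluates to true.  With no
    condition, the link always executes."""
    if condition is None or condition(pipeline):
        pipeline = dict(pipeline)           # leave the input pipeline untouched
        pipeline[target] = pipeline[source]
    return pipeline
```

For example, a link guarded by a condition equivalent to `%status% == "gold"` copies the value only for gold-status pipelines.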
Select... To...
Property Description
Alias Specifies an alternate name for the namespace name of the OData
service.
Namespace Displays the namespace name, which is the fully qualified name of
the OData service on the Integration Server.
Use custom filter Indicates whether or not to use custom filters instead of the
built-in filters that Integration Server provides while using the $filter
system query option.
Select... To...
Property Description
Connection Alias Specifies the connection used with the external source type.
Property Description
Name Specifies the name of the Simple property. The name of the
property must be unique within the set of Simple properties for the
entity type or complex type.
Key Indicates whether or not the OData element is a key. This property
is available only for the Simple property of OData entity types.
Each OData entity type must have a Key property that uniquely
identifies the entity type within the OData service at run time.
Select... To...
Nullable Indicates whether or not the property can have a null value.
Note: If you selected True for the Key property, the property cannot
have a null value.
Select... To...
True Indicate that the property can have a null value. This is the
default.
Default Determines the default value of the property. Enter the default
value or select NULL if the default value is null.
Note: If the OData element is a key, that is, if you selected True
for the Key property, the Default property cannot have a null
value.
Fixed Length Specifies whether the length of the value of the property must be
fixed or whether it can vary.
Select... To...
Max Length Specifies the maximum length of the value of the property. Enter a
positive integer if you want to restrict the value to a specific length.
Select Max if the value can be of any length.
Unicode Specifies whether or not the value of the property is encoded using
Unicode (UTF-8) or ASCII.
Select... To...
Collation Specifies a sorting sequence that can be used for comparison and
ordering operations on values of the property.
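The Simple property constraints above (Key, Nullable, Max Length, Fixed Length) can be sketched as one validation function. This is an illustrative sketch of the described facets, with a hypothetical function name, not OData runtime code:

```python
def validate_simple_property(value, *, key=False, nullable=True,
                             max_length=None, fixed_length=False):
    """Illustrative validation of the Simple property facets above:
    a key property may not be null, a non-nullable property rejects None,
    Max Length caps the value's length, and Fixed Length requires the
    value to be exactly Max Length characters long."""
    if value is None:
        return nullable and not key
    if max_length is not None:
        if fixed_length:
            return len(value) == max_length
        return len(value) <= max_length
    return True
```

For example, a null value fails for a key property, and a three-character value passes a fixed length of 3 while a two-character value does not.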
Association Properties
In the Properties view, you can specify the properties for the OData associations.
Property Description
Name Specifies the name of the OData Association. The value of this
property is derived from the two entity types that are part of this
entity association and the multiplicity in this association.
Property Description
Entity Type Specifies the Entity Type on the specific association end.
Multiplicity Specifies the number of entity types that can be at the specific end
of the association.
Select... To...
Property Description
Role Specifies the name of the role played by the entity type at an
association end.
Property Description
From Role Specifies the name of the role played by this entity type in the
association.
To Role Specifies the name of the role played by this entity type in the
association.
Property Description
Source URI Displays the location of the source used to create this resource.
The value for this property appears only for a REST V2 resource
associated with a descriptor that is generated from a Swagger
document.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Property Description
Source URI The URI of the source Swagger document used to generate the
REST API descriptor. This property is not used for a REST API
descriptor that is not generated from a Swagger document.
Source URL The URL to access the source Swagger document used to generate
the REST API descriptor. This property is not used for a REST API
descriptor that is not generated from a Swagger document.
Property Description
Path The path for the REST resource. By default, each REST resource in
a REST API descriptor derives its path from the namespace of the
REST resource.
Note: The value of this property cannot be edited for REST API
descriptors containing REST V2 resources.
For a REST resource created using the legacy approach, you can
override the default path with a custom value. For example, you
could use /customers/premium or /myPath.
Change the path of the REST resource to be the path of your
choosing. If you do not include “/” as the first character in the Path
property, Integration Server adds it in the Swagger document.
Ensure that Integration Server can resolve the path that you
specify. Integration Server must be able to invoke the path.
Note: You can add a suffix only if the descriptor contains REST
resources created using the legacy approach. This property
is not used for descriptors containing REST V2 resources.
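The leading-slash rule above can be sketched as a one-line normalization. This is an illustrative sketch with a hypothetical function name, mirroring the behavior described for the generated Swagger document:

```python
def normalize_rest_path(path: str) -> str:
    """Illustrative sketch of the path rule above: ensure the REST
    resource path begins with '/', as Integration Server adds the
    leading slash in the Swagger document when it is missing."""
    return path if path.startswith("/") else "/" + path
```

For example, `customers/premium` becomes `/customers/premium`, while `/myPath` is left unchanged.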
Operation Properties
When you select an operation in a REST resource on the REST Resources tab, the
Properties view displays the operation name and description.
Property Description
Schema Properties
In the Properties view, you can view and set the properties for an IS schema. To view the
properties for an IS schema, double-click the schema in Package Navigator view.
To edit the properties for a specification, you must have Write access to it and own the
lock.
Property Description
Schema domain Displays the name of the schema domain to which the IS schema
belongs.
This property applies to schemas created from XML Schema
definitions only.
Reuse Specifies whether this schema can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Source URI Displays the location or URI of the source used to create this
schema.
Linked to Indicates whether the schema reflects the content and structure
source of the source from which it was created. When set to true, the
contents of the schema, specifically simple type definitions, cannot
be edited. When set to false, the simple type definitions in the
schema can be edited but may no longer accurately reflect the
simple type definitions from the source.
Details (the right side of the schema editor). The information contained in Component
Details varies with the selected component.
An all content model specifies that child elements can appear once, or not at all, and in
any order in the instance document. This symbol corresponds to the <all> compositor
in an XML Schema definition.
Min Occurs The minimum number of occurrences of the content model for an
element in the instance document. The value of Min Occurs is equal
to the value of the minOccurs attribute in the <all> content model.
Max Occurs The maximum number of occurrences of the content model for
an element in the instance document. The value of Max Occurs is
equal to the value of the maxOccurs attribute in the <all> content
model.
Summary of The name and occurrence constraints for the child elements in the
Children content model.
Name. The name of the child element.
Min,Max. The minimum and maximum occurrence constraints
for the child element. The Min and Max values correspond to the
minOccurs and maxOccurs attributes (respectively) in the local
element declaration.
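As an illustration, an all content model might look like this in an XML Schema definition (the type and element names are hypothetical):

```xml
<!-- Hypothetical type: each child may appear at most once, in any order -->
<xs:complexType name="AddressType">
  <xs:all>
    <xs:element name="street" type="xs:string"/>
    <xs:element name="city" type="xs:string"/>
    <!-- minOccurs="0" makes this child optional -->
    <xs:element name="postalCode" type="xs:string" minOccurs="0"/>
  </xs:all>
</xs:complexType>
```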
Qualifier Whether the matching attribute can or cannot be from one of the
namespaces listed in the URIs field.
The namespace attribute value in the <anyAttribute> declaration
determines the value of Qualifier. See the following.
URIs The namespaces to which the matching attribute can or cannot belong.
The namespace attribute value in the <anyAttribute> declaration
determines the value of URIs. See the following.
##local "unqualified"
Because an <any> element declaration does not have a name, the schema uses 'Any' as
the name of the element.
Min Occurs The minimum number of occurrences for the matching element in the
instance document.
Max The maximum number of occurrences for the matching element in the
Occurs instance document.
Qualifier Whether the matching element can or cannot be from one of the
namespaces listed in the URIs field.
The namespace attribute value in the <any> declaration determines the
value of Qualifier. See the following.
URIs The namespaces to which the matching element can or cannot belong.
The namespace attribute value in the <any> declaration determines the
value of URIs. See the following.
##local "unqualified"
Attribute Declaration
An attribute declaration associates an attribute name with a simple type definition. This
symbol corresponds to the XML Schema <attribute> declaration or the attribute in a
DTD ATTLIST declaration.
An attribute declaration can specify a default value, a fixed value, and whether
the appearance of the attribute in the instance document is required. Like element
declarations, attribute declarations can be global or local.
Name The local name and target namespace of the attribute declaration.
The Name value is equal to the expanded value (prefix plus local
name) of the name attribute in the attribute declaration.
Default The default value for the attribute in an instance document. The
Default value is equal to the value of the default attribute in the
attribute declaration. During data validation, Integration Server
supplies the instance document with an attribute whose value
equals that of Default if:
The element to which the attribute is assigned appears in the
instance document, and
The attribute itself does not appear.
If the element to which the attribute declaration is assigned does
not appear in the instance document, Integration Server does not
augment the instance document.
Fixed Value The fixed value for the attribute. The Fixed Value is equal to the
value of the fixed attribute in the attribute declaration. If this
attribute appears in an instance document, the attribute value
must equal the Fixed Value. During data validation, Integration
Server supplies the instance document with an attribute whose
value equals Fixed Value if:
The element to which the attribute is assigned appears in the
instance document, and
The attribute itself does not appear in the instance document.
Simple Type The name and namespace of the simple type definition associated
with the attribute. See "Simple Type Definition" on page 1083.
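For example, a hypothetical attribute declaration with a default value and another with a fixed value might look like this:

```xml
<xs:element name="price">
  <xs:complexType>
    <xs:simpleContent>
      <xs:extension base="xs:decimal">
        <!-- If the element appears without this attribute,
             validation supplies currency="USD" -->
        <xs:attribute name="currency" type="xs:string" default="USD"/>
        <!-- If this attribute appears, its value must equal "each" -->
        <xs:attribute name="unit" type="xs:string" fixed="each"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
</xs:element>
```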
Attribute Reference
Name The local name and target namespace of the attribute declaration
for this attribute reference. The Name value is equal to the
expanded value (prefix plus local name) of the name attribute in
the attribute declaration.
Fixed Value The fixed value for the referenced attribute. The Fixed Value is
equal to the value of the fixed attribute in the attribute declaration.
If this attribute appears in an instance document, the attribute
value must equal the Fixed Value. During data validation,
Integration Server supplies the instance document with an
attribute whose value equals Fixed Value if:
The element to which the attribute is assigned appears in the
instance document, and
The attribute itself does not appear in the instance document.
Simple Type The name and namespace of the simple type definition associated
with the referenced attribute. See "Simple Type Definition" on
page 1083.
A choice content model specifies that only one of the child elements in the content
model can appear in the instance document. This symbol corresponds to the <choice>
compositor in an XML Schema definition or a choice list in a DTD element type
declaration.
If one of the child elements does not appear or more than one child element appears,
the instance document is not schema-valid. (An exception to this is when the minOccurs
attribute for the <choice> element is set to 0. If minOccurs=0, Integration Server does
not generate a validation error if no child element appears.)
Min Occurs The minimum number of occurrences of the content model for an
element in the instance document. The value of Min Occurs is equal
to the value of the minOccurs attribute in the <choice> content
model.
Max Occurs The maximum number of occurrences of the content model for
an element in the instance document. The value of Max Occurs is
equal to the value of the maxOccurs attribute in the <choice>
content model.
Summary of The name and occurrence constraints for the child elements in the
Children content model.
Name. The name of the child element.
Min,Max. The minimum and maximum occurrence constraints
for the child element. The Min and Max values correspond to the
minOccurs and maxOccurs attributes (respectively) in the local
element declaration.
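A choice content model might look like this in an XML Schema definition (the names are hypothetical):

```xml
<!-- Exactly one of the two children may appear; minOccurs="0"
     additionally allows neither to appear without a validation error -->
<xs:complexType name="PaymentType">
  <xs:choice minOccurs="0">
    <xs:element name="creditCard" type="xs:string"/>
    <xs:element name="bankTransfer" type="xs:string"/>
  </xs:choice>
</xs:complexType>
```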
A complex type definition defines the structure and content for elements of complex
type. (Elements of complex type can contain child elements and carry attributes.) This
symbol corresponds to the <complexType> element in an XML Schema definition.
If the complex type definition is unnamed (an anonymous type), the Schema Browser
displays 'Anonymous' as the name of the complex type definition.
Name The local name and target namespace of the complex type. The
Name value is equal to the expanded value (prefix plus local name)
of the name attribute in the type definition.
If the Schema Browser displays 'Anonymous' as the name of the
complex type, the complex type is an anonymous (unnamed) type
defined in an element declaration.
Note: If the complex type was created from a simple type, then the Schema Browser
also displays the fields for the simple type. For details, see "Simple Type
Definition" on page 1083.
Element Declaration
An element declaration associates an element name with a type definition. This symbol
corresponds to the <element> declaration in an XML Schema and the ELEMENT
declaration in a DTD.
An element declaration can contain attributes to specify a default value, a fixed value,
and whether the element is abstract or nillable. If an element declaration is part of a
content specification, the element declaration can contain attributes to specify occurrence
constraints.
Default The default value for the element. The Default value is equal to the
value of the default attribute in the element declaration.
During data validation, if the element appears in an instance
document but contains no content, Integration Server supplies the
element with the Default value. If the element does not appear in
the instance document, Integration Server does not augment the
instance document.
Fixed Value The fixed value for the element. The Fixed Value is equal to the
value of the fixed attribute in the element declaration. When
Integration Server validates an instance document against the
schema, if the element appears, its value must be equal to the
Fixed Value.
During data validation, if the element appears in an instance
document but contains no content, Integration Server supplies
the element with the Fixed Value. If the element does not appear in
the instance document, Integration Server does not augment the
instance document.
Complex Type The name and namespace of the complex type assigned to the
element. This field appears only if the element is defined to be
of complex type. If the element is defined to be of anonymous
complex type, this field displays 'Anonymous' as the name of the
complex type.
In the Schema Browser, the complex type definition assigned to an
element appears as an immediate child of the element.
Simple Type The name and namespace of the simple type assigned to the
element. This field appears only if the element is defined to be of
simple type. If the element is defined to be of anonymous simple
type, this field displays 'Anonymous' as the name of the simple
type.
In the Schema Browser, the simple type definition assigned to an
element appears as an immediate child of the element.
Min Occurs The minimum number of times this element must appear. The Min
Occurs value is equal to the value of the minOccurs attribute in the
local element declaration. If the local element declaration does not
specify minOccurs, Designer uses a default value of 1.
Max Occurs The maximum number of times this element may appear. The Max
Occurs value is equal to the value of the maxOccurs attribute in the
local element declaration. If the local element declaration does not
specify maxOccurs, Designer uses a default value of 1.
This field appears only when you select a local element
declaration; that is, an element declaration in a complex type
definition.
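Putting these pieces together, a hypothetical element declaration with occurrence constraints, a default value, and a fixed value might look like this:

```xml
<xs:complexType name="OrderType">
  <xs:sequence>
    <!-- Occurrence constraints on a local element declaration -->
    <xs:element name="item" type="xs:string"
                minOccurs="1" maxOccurs="unbounded"/>
    <!-- Supplied as "open" if the element appears but is empty -->
    <xs:element name="status" type="xs:string" default="open" minOccurs="0"/>
    <!-- If present, the value must equal "1.0" -->
    <xs:element name="version" type="xs:string" fixed="1.0" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```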
Element Reference
Default The default value for the referenced element. The Default value
is equal to the value of the default attribute in the referenced
element declaration.
During data validation, if the element appears in an instance
document but contains no content, Integration Server supplies the
element with the Default value. If the element does not appear in
the instance document, Integration Server does not augment the
instance document.
Fixed Value The fixed value for the referenced element. The Fixed Value is equal
to the value of the fixed attribute in the element declaration.
When Integration Server validates an instance document against
the schema, if the element appears, its value must be equal to the
Fixed Value.
Complex Type The name and namespace of the complex type assigned to the
referenced element. This field appears only if the element is
defined to be of complex type. If the element is defined to be of
anonymous complex type, this field displays 'Anonymous' as the
name of the complex type.
Simple Type The name and namespace of the simple type assigned to the
referenced element. This field appears only if the element is
defined to be of simple type. If the element is defined to be of
anonymous simple type, this field displays 'Anonymous' as the
name of the simple type.
In the Schema Browser, the simple type definition assigned to an
element appears as an immediate child of the element.
Min Occurs The minimum number of times this element must appear. The Min
Occurs value is equal to the value of the minOccurs attribute in the
local element declaration. If the local element declaration does not
specify minOccurs, Designer uses a default value of 1.
This field appears only when you select a local element
declaration; that is, an element declaration in a complex type
definition.
Max Occurs The maximum number of times this element may appear. The Max
Occurs value is equal to the value of the maxOccurs attribute in the
local element declaration. If the local element declaration does not
specify maxOccurs, Designer uses a default value of 1.
This field appears only when you select a local element
declaration; that is, an element declaration in a complex type
definition.
Empty Content
Empty content occurs in an XML Schema definition when an element's associated complex
type definition does not contain any element declarations. An element with empty
content may still carry attributes. In a DTD, an element has empty content when it is
declared to be of type EMPTY.
A mixed content model allows character data to be interspersed with child elements.
This symbol corresponds to the mixed="true" attribute in a complex type definition in
an XML Schema definition or a DTD element list in which the first item is #PCDATA.
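A mixed content model might look like this in an XML Schema definition (the names are hypothetical):

```xml
<!-- Character data may be interspersed with <emphasis> children,
     e.g. <note>Ship this <emphasis>today</emphasis>.</note> -->
<xs:complexType name="NoteType" mixed="true">
  <xs:sequence>
    <xs:element name="emphasis" type="xs:string"
                minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```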
A sequence content model specifies that the child elements in the instance document
must appear in the same order in which they are declared in the content model. This
symbol corresponds to the <sequence> compositor in an XML Schema definition or a
sequence list in an element type declaration in a DTD.
Min Occurs The minimum number of occurrences of the content model for an
element in the instance document. The value of Min Occurs is equal
to the value of the minOccurs attribute in the <sequence> content
model.
Max Occurs The maximum number of occurrences of the content model for
an element in the instance document. The value of Max Occurs is
equal to the value of the maxOccurs attribute in the <sequence>
content model.
Summary of The name and occurrence constraints for the child elements in the
Children content model.
Name. The name of the child element.
Min,Max. The minimum and maximum occurrence constraints
for the child element. The Min and Max values correspond to the
minOccurs and maxOccurs attributes (respectively) in the local
element declaration.
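A sequence content model might look like this in an XML Schema definition (the names are hypothetical):

```xml
<!-- firstName must precede lastName in the instance document -->
<xs:complexType name="PersonType">
  <xs:sequence minOccurs="1" maxOccurs="1">
    <xs:element name="firstName" type="xs:string"/>
    <xs:element name="lastName" type="xs:string"/>
  </xs:sequence>
</xs:complexType>
```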
A simple type definition specifies the data type for an element that contains only
character data or for an attribute. Unlike complex type definitions, simple type
definitions cannot carry attributes. This symbol corresponds to the <simpleType>
element in an XML Schema definition.
If the simple type definition is unnamed (an anonymous type), the Schema Browser
displays 'Anonymous' as the name of the simple type definition.
Base The constraining facet values set in the type definitions from
Constraints which a simple type was derived. Base constraints are the
constraining facet values from the primitive type to the immediate
parent type. These constraint values represent the cumulative
facet values for the simple type.
Simple The local name and target namespace of the simple type. The
Type:Name Name value is equal to the expanded value (prefix plus local name)
of the name attribute in the type definition. If the Schema Browser
displays 'Anonymous' as the name of the simple type, the simple
type is an anonymous (unnamed) type defined in an element or
attribute declaration.
Primitive Type The primitive datatype from which the simple type is derived.
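As an illustration of cumulative base constraints, a hypothetical two-step derivation might look like this; the second type inherits the facet set by the first:

```xml
<!-- Derived from the primitive type xs:int -->
<xs:simpleType name="ScoreType">
  <xs:restriction base="xs:int">
    <xs:minInclusive value="0"/>
  </xs:restriction>
</xs:simpleType>

<!-- Inherits minInclusive from ScoreType and adds maxInclusive;
     the cumulative facet values are what Base Constraints displays -->
<xs:simpleType name="PercentType">
  <xs:restriction base="ScoreType">
    <xs:maxInclusive value="100"/>
  </xs:restriction>
</xs:simpleType>
```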
Service Properties
To view properties for a service, double-click the service in the Package Navigator of
Designer. In the Properties view, you can configure the Runtime, Transient Error Handling,
Universal Name, Audit, and Output Template properties for the service.
Note: A web service connector also uses the Universal Name, Audit, and Output
Template categories of the Properties view, but does not use the Retry on
ISRuntimeException properties. A web service connector uses all the properties
in the Run time category with the exception of the Default xmlFormat property.
To edit the properties for a service, you must have Write access to it and own the lock.
Note: General properties for services do not apply to OData services. For more
information about the general properties for OData services, see "General
Properties for OData Services" on page 1057.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Source URI Displays the location or URI of the source used to create this flow
service. A flow service can be created from sources such as XML
documents, XML Schema definitions, and WSDL documents. If
this flow service was created as an empty flow service and was not
based on a source, the Source URI property is empty.
Creating a URL alias for a service. You can create an alias for the path portion of the URL
used to invoke a service.
Saving and restoring of the pipeline. You can save the pipeline or restore a previously
saved pipeline at run time.
XML format for the service input. If the service receives an XML document, you can
specify the format that Integration Server uses for the document when it passes the
document to the service.
HTTP methods for a service. You can select the HTTP methods that can be configured
for a service. This selection overrides the HTTP methods configured for a resource
corresponding to the service.
Important: The Run time properties in the Properties view should only be set by someone
who is thoroughly familiar with the structure and operation of the selected
service. Improper use of these options can lead to a service failure at run
time and/or the return of invalid data to the client program.
Property Description
Cache results Indicates whether Integration Server stores the service results
in a local cache for the time period specified in the Cache expire
property. After the service executes, the server places the entire
pipeline contents into a local cache. When subsequent requests
for the service provide the same set of input values, the server
returns the cached results instead of invoking the service again.
Select True to cache the service results. Select False if you do not
want to cache service results. Cache results for stateless services
only.
The default is False.
Note: Caching is only available for data that can be written to the
repository. Because XML nodes cannot be written to the
repository, they cannot be cached.
Cache expire Specifies the amount of time that the pipeline contents stay in
memory after they are cached. If you enable the Cache results
property, type an integer in this field representing the number
of minutes you want a result to remain cached. The expiration
timer begins when the server initially caches the results. The
server does not restart the expiration timer each time it retrieves
the results from cache. The minimum cache expiration time is
one minute.
Reset cache Click Reset to clear the cached results for this service.
Prefetch activation Specifies the minimum number of times that a cached result
must be accessed (hit) with the same inputs in order for the
server to prefetch results when it expires. If you enable Prefetch,
you must specify an integer representing the minimum number
of hits a cached result must receive to be eligible for prefetch.
(Entries that do not receive the minimum number of hits are
released from memory.)
Note: The cache may not be refreshed at the exact time the last
hit fulfills the Prefetch Activation requirement. It may vary
from 0 to 15 seconds, according to the cache sweeper
thread. For details, see the watt.server.cache.flushMins
setting in Integration Server.
Execution locale Specifies the locale in which this service will be executed.
HTTP URL Alias Specifies an alias for the path portion of the URL used to invoke
a service.
For a flow service, the path portion of the URL consists of the
invoke directive and the fully qualified service name. For a
REST service, the path portion of the URL consists of the rest
directive and the location of the REST resource folder in which
the service resides.
Select... To...
Restore To merge the pipeline with one from a file when the
(Merge) service executes.
Property Description
Default xmlFormat The default XML format for XML documents received by the
service.
Note: You can specify the default XML format for flow services
and Java services only. The Default xmlFormat property is
not available for C/C++ services, .NET services, or web
service connectors.
Select... To...
Property Description
Allowed HTTP Click to select the HTTP methods that you can configure for
methods a service. The supported methods are GET, HEAD, PUT, POST,
PATCH, DELETE, and OPTIONS.
Important:
If the service already has REST resources configured,
Designer displays a warning message if you change the
selection of the allowed HTTP methods to exclude any
method used in the configuration of the resources. In
such a situation, any client request invoking the excluded
method will fail.
Therefore, you must ensure that the set of HTTP methods
configured for a REST resource is always a subset of the
methods allowed for the underlying service.
Property Description
Max retry Specifies the number of times Integration Server should attempt
attempts to re-execute the service when the service fails because of an
ISRuntimeException. An ISRuntimeException occurs when the
service catches a transient error, wraps the error, and re-throws
it as an exception. (A transient error is an error that arises from a
condition that might be resolved quickly, such as the unavailability
of a resource due to network issues or failure to connect to a
database.)
The default is 0, which indicates that Integration Server does not
attempt to re-execute the service.
Property Description
the Integration Server makes the maximum retry attempts.
By default, the maximum retry period is 15,000 milliseconds
(15 seconds). When you configure service retry, Integration
Server verifies that the retry period for that service will
not exceed the maximum retry period. Integration Server
determines the retry period for the service by multiplying the
maximum retry attempts by the retry interval. If this value
exceeds the maximum retry period, Designer displays an
error indicating that either the maximum attempts or the
retry interval needs to be modified.
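The check described above amounts to simple arithmetic. A minimal sketch (the function name is illustrative and not part of the product API):

```python
def retry_config_is_valid(max_retry_attempts: int,
                          retry_interval_ms: int,
                          max_retry_period_ms: int = 15000) -> bool:
    """Mirror the verification described above: the service's retry
    period (attempts x interval) must not exceed the maximum retry
    period, which defaults to 15,000 ms."""
    return max_retry_attempts * retry_interval_ms <= max_retry_period_ms

# 3 attempts at 4,000 ms each = 12,000 ms: within the default period
print(retry_config_is_valid(3, 4000))   # True
# 4 attempts at 5,000 ms each = 20,000 ms: Designer would report an error
print(retry_config_is_valid(4, 5000))   # False
```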
Audit Properties
In the Properties view, under Audit, you enable auditing and specify when a service
should generate audit data.
Property Description
Select... To...
Log on Specifies the execution points at which the service generates audit
data.
Select... To...
Property Description
Include pipeline Specifies when Integration Server should include a copy of the
input pipeline in the service log.
Select... To...
pipeline can degrade performance because it may
negatively impact the rate at which the data is
saved to the service log.
Important: The options you select can be overwritten at run time by the value of the
watt.server.auditLog server property, set in the server configuration file. This
property specifies whether to globally enable or disable service logging. The
default enables customized logging on a service-by-service basis.
Note: To use the circuit breaker feature with Integration Server, your Integration
Server must have additional licensing. In addition to the licensing
requirement, to use the circuit breaker functionality in version 10.1, you must
install the following fixes: ESB_10.1_Fix2 and IS_10.1_Core_Fix2.
Property Description
Specify... To...
Property Description
Cancel thread on Timeout Specifies whether the circuit breaker gracefully attempts
to cancel the thread executing the service when the
timeout period elapses, causing the timeout failure event.
Canceling a thread can free up resources held by the
thread.
For circuit breaker to cancel a thread, the
watt.server.threadKill.enabled property must be set to
true.
If you want circuit breaker to attempt to interrupt a
service thread in addition to attempting to cancel it, the
watt.server.threadKill.interruptThread.enabled property
must be set to true.
Use care when configuring a circuit breaker to cancel
threads. Canceling a thread might not free up resources
being held by the service. For more information about
canceling threads, see the webMethods Integration Server
Administrator’s Guide.
Specify... To...
Property Description
Failure threshold The number of failure events occurring within the failure
period that cause the circuit to open.
The default is 5.
If circuit breaker is enabled for the service, you must
specify a value greater than 0.
Circuit open action Action the circuit breaker takes when receiving requests
to invoke the service when the circuit is open.
Specify... To...
Circuit open service Fully qualified name of the service that circuit breaker
invokes when receiving requests for this service when
the circuit is open. This property applies only if the
Circuit open action property is set to Invoke service.
Property Description
Circuit reset period Length of time, measured in seconds, for which the
circuit remains in an open state once it is opened. During
the reset period, the circuit breaker responds to requests
to invoke the service as specified by the Circuit open action
property. When the reset period elapses, the circuit
breaker places the circuit in a half-open state. The next
request for the service results in service execution, after
which the circuit breaker either closes or re-opens the
circuit.
The default is 300 seconds.
Property Description
Namespace Specifies the name used to qualify the local name of this service.
name The namespace name you specify must be a valid absolute URI
(relative URIs are not supported).
Local name Specifies a name that uniquely identifies this service within the
collection encompassed by Namespace name. The name can be
composed of any combination of letters, digits, or the period (.),
dash (-), or underscore (_) characters. The name must begin with a
letter or the underscore character.
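The naming rule above can be sketched as a simple pattern check, assuming ASCII letters (the function name is illustrative and not part of the product):

```python
import re

# Letters, digits, period (.), dash (-), or underscore (_);
# must begin with a letter or underscore.
_LOCAL_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]*$")

def is_valid_local_name(name: str) -> bool:
    return bool(_LOCAL_NAME.match(name))

print(is_valid_local_name("my_service.v2"))   # True
print(is_valid_local_name("2ndService"))      # False (starts with a digit)
```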
Property Description
Name Specifies the name of the file that contains the output template for
the selected service. To assign an existing template to this service,
type the name of the template file in this field. To create a new
template file for this service, type a name for the template in this
field.
Template Opens the Template source page so that you can edit the existing
source output template.
Specification Properties
In the Properties view, you can set the properties for a specification. To view the
properties for a specification, double-click the specification in Package Navigator view.
To edit the properties for a specification, you must have Write access to it and own the
lock.
Property Description
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Transformer Properties
In the Properties view, you can set the properties for a transformer inserted into a MAP
step.
To view properties for a transformer, double-click the transformer in the Pipeline view
of Designer.
Property Description
Service Specifies the fully qualified name of the service that is invoked at
run time. When you insert a transformer, Designer automatically
assigns the name of that service to the Service property.
If the service that a transformer invokes is moved, renamed,
or deleted, you must change the Service property. Specify the
service’s fully qualified name in the folderName :serviceName format
or click to select a service from a list.
Validate input Specifies whether or not Integration Server validates the input to
the transformer against the input signature of the service. Select
True if you want to validate the input of the service. Select False if
you do not want to validate the input of the service.
Validate output Specifies whether or not Integration Server validates the output of
the transformer against the output signature of the service. Select
True if you want to validate the output of the service. Select False if
you do not want to validate the output of the service.
Variable Properties
You can specify the data type and input values for a variable. You can also apply content
constraints and structural constraints to a variable for validation purposes. A variable
can be a String, String list, String table, document, document list, document reference,
document reference list, Object, or Object list.
In the Properties view, select a variable in the editor to set general properties and
constraints for the variable.
Note: Specific properties in the Properties view are enabled or disabled, depending
on the type of variable you have selected.
Property Description
Model type Specifies the content model for a document or document list
variable. The content model provides a formal description of the
structure and allowed content for a document.
The Model type property is display-only. To change the model
type for a document or document list, modify the corresponding
complex type definition in the XML schema definition, and
recreate the document type that contains this document or
document list.
Property Description
Value... Description...
Choice One and only one of the fields in the document can
appear.
The choice model type corresponds to a complex
type definition that contains a choice compositor in the
model group.
String display Specifies how you want to enter input data for this variable. You
type can only select a display type if the variable is a String. Select one
of the following:
Select... To...
Property Description
Pick list Allows you to enter the list of values that users can select for this
choices variable.
Property Description
Required Specifies whether or not the variable needs to exist at run time.
The Required property appears for variables in document types if
one or more of the following are true:
The document type was created using a version of Integration
Server prior to version 8.2.
The document type was created using Developer.
The Model type property of the document type is Unordered.
Select... To...
Allow null Specifies whether null is a valid value for this variable.
Select... To...
Allow unspecified Specifies whether the document is open or closed. This property
fields is enabled only if the variable is a document or document list.
Select... To...
Property Description
Content type Specifies the XML schema simple type that constrains the value
of the String field. This property is enabled if the variable is a
String, String list, or String table.
To view and edit the content constraint for a variable, click
and select one of the following:
Select... To...
Java wrapper type Specifies the Java class of an Object field. This property is
enabled if the variable is an Object or Object list.
Note: Designer displays the ‡ symbol next to String, String list, and String table
variables with a content type constraint only. Designer does not display the
‡ symbol next to Object and Object list variables with a specified Java class
constraint. Object and Object lists with an applied Java class constraint have
a unique icon. For more information about icons for constrained Objects, see
"Java Classes for Objects" on page 1158.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Source URI Displays the location of the source WSDL used to create the web
service connector.
Important: The Run time properties in the Properties view should only be set by
someone who is thoroughly familiar with the structure and operation of the
selected service. Improper use of these options can lead to a service failure at
run time and/or the return of invalid data to the client program.
Property Description
False if the service is part of a multi-service transaction or if you
are unsure of its state requirements.
The default is False.
Cache results Indicates whether Integration Server stores the service results
in a local cache for the time period specified in the Cache expire
property. After the service executes, the server places the entire
pipeline contents into a local cache. When subsequent requests for
the service provide the same set of input values, the server returns
the cached results instead of invoking the service again. Select True
to cache the service results. Select False if you do not want to cache
service results. Cache results for stateless services only.
The default is False.
Note: Caching is only available for data that can be written to the
repository server. Because XML nodes cannot be written to
the repository, they cannot be cached.
Cache expire Specifies the amount of time that the pipeline contents stay in
memory after they are cached. If you enable the Cache results
property, type an integer in this field representing the number of
minutes you want a result to remain cached. The expiration timer
begins when the server initially caches the results. The server does
not restart the expiration timer each time it retrieves the results
from cache. The minimum cache expiration time is one minute.
Reset cache Click Reset to clear the cached results for this service.
Prefetch Specifies the minimum number of times that a cached result must
activation be accessed (hit) with the same inputs in order for the server to
prefetch the results when the cached entry expires. If you enable
Prefetch, you must specify an integer representing the minimum
number of hits a cached result must receive to be eligible for
prefetch. (Entries that do not receive the minimum number of hits
are released from memory.)
The cache may not be refreshed at the exact time the last hit
fulfills the Prefetch activation requirement. The refresh may vary from
0 to 15 seconds, according to the cache sweeper thread. For details,
see the watt.server.cache.flushMins setting in Integration Server.
Note: The options you select can be overwritten at run time by the
value of the watt.server.pipeline.processor property, set in
the server configuration file. This property specifies whether
to globally enable or disable the Pipeline debug feature. The
default enables the Pipeline debug feature on a service-by-
service basis. For more information on setting properties in
the server configuration file, see webMethods Integration Server
Administrator’s Guide.
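The Cache results and Cache expire behavior described above can be pictured as a keyed results cache with an expiration timer that starts when an entry is first stored and is not reset on later hits. The sketch below is illustrative only; the class and method names are assumptions, not Integration Server API.

```python
import time

class ResultCache:
    """Illustrative sketch of service-result caching with expiry.

    Results are keyed by the service's input values; a cached
    pipeline is returned only while the entry is younger than the
    expire interval.
    """
    def __init__(self, expire_minutes):
        self.expire_seconds = expire_minutes * 60
        self._entries = {}  # key -> (timestamp, pipeline)

    def _key(self, inputs):
        # The same set of input values produces the same cache key.
        return tuple(sorted(inputs.items()))

    def invoke(self, service, inputs):
        key = self._key(inputs)
        entry = self._entries.get(key)
        if entry is not None:
            cached_at, pipeline = entry
            # The expiration timer starts when the entry is first
            # cached and is NOT restarted on subsequent cache hits.
            if time.time() - cached_at < self.expire_seconds:
                return pipeline
        pipeline = service(inputs)
        self._entries[key] = (time.time(), pipeline)
        return pipeline

    def reset(self):
        # Analogous to clicking Reset in the Reset cache property.
        self._entries.clear()
```

A second call with the same inputs returns the cached pipeline without invoking the service again, which is why caching is recommended for stateless services only.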
Audit Properties
In the Properties view, under Audit, you enable auditing and specify when a service
should generate audit data.
Property Description
Select... To...
Log on Specifies the execution points at which the service generates audit
data.
Select... To...
Include pipeline Specifies when Integration Server should include a copy of the
input pipeline in the service log.
Select... To...
Important: The options you select can be overwritten at run time by the value of the
watt.server.auditLog server property, set in the server configuration file. This
property specifies whether to globally enable or disable service logging. The
default enables customized logging on a service-by-service basis.
Property Description
Namespace Specifies the name used to qualify the local name of this service.
name The namespace name you specify must be a valid absolute URI
(relative URIs are not supported).
Local name Specifies a name that uniquely identifies this service within the
collection encompassed by Namespace name. The name can be
composed of any combination of letters, digits, or the period (.),
dash (-), or underscore (_) characters. The name must begin with a
letter or the underscore character.
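The Local name character rule above can be expressed as a regular expression. The validator below is an illustrative sketch (assuming ASCII letters), not part of Integration Server.

```python
import re

# Letters, digits, period (.), dash (-), or underscore (_); must
# begin with a letter or the underscore character, per the Local
# name rule described above.
LOCAL_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9._\-]*$")

def is_valid_local_name(name):
    """Return True if name satisfies the Local name character rule."""
    return bool(LOCAL_NAME.match(name))
```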
Property Description
Name Specifies the name of the file that contains the output template for
the selected service. To assign an existing template to this service,
type the name of the template file in this field. To create a new
template file for this service, type a name for the template in this
field.
Template Opens the Template source page so that you can edit the existing
source output template.
Property Description
Direction Displays whether the web service descriptor is for a provider web
service (that can be invoked by an external user) or for a consumer
web service (that requests the use of a provider entity's web service).
Note: WS-I Basic Profile 1.0 supports only HTTP or HTTPS bindings.
Consequently, WS-I compliance cannot be enforced if the
WSDL contains a SOAP over JMS binding. The WS-I compliance
property cannot be set to true if a web service descriptor has a
JMS binder.
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a BPM
process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether they
are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process or
CAF project, the element's status in CentraSite will not change until
the next publication of assets to CentraSite.
Source URI Displays the location of the source used to create the web service
descriptor. For a consumer web service descriptor or a WSDL first
provider web service descriptor, the Source URI is the location of
the WSDL document. For a service first web service descriptor, the
Source URI is empty.
Target Displays the XML Target Namespace of the web service. By default
namespace this is set to the fully qualified URL of the host server.
WSDL URL URL used to retrieve the WSDL for the web service.
Namespaces Displays a list of the XML namespaces and the associated namespace
prefixes used within the web service descriptor when it was initially
created.
Property Description
Pipeline Specifies whether the contents of the SOAP header are placed in the
headers pipeline as a document named soapHeaders.
enabled
When this property is set to true for a provider web service
descriptor and an IS service that corresponds to an operation in the
WSD is invoked, Integration Server places the contents of the SOAP
request header in the input pipeline for the IS service.
When this property is set to true for a consumer web service
descriptor and one of the web service connectors is invoked,
Integration Server places the contents of the SOAP response header
in the output pipeline for the web service connector.
The default is False.
Validate SOAP For a consumer web service descriptor, specifies whether Integration
response Server validates a SOAP response received by any web service
connectors within the consumer WSD. The default is True.
Created on Identifies the version of Integration Server on which the web service
version descriptor was created.
Pre-8.2 Indicates whether or not the web service descriptor runs in pre-8.2
compatibility compatibility mode. Web service descriptors that run in pre-8.2
mode compatibility mode are compatible with versions of Integration
Server prior to version 8.2. Web service descriptors that do not
run in pre-8.2 compatibility mode are compatible with versions of
Integration Server 8.2 and later.
For web service descriptors created using Designer on Integration
Server 8.2 and later, the default is False. For web service descriptors
created using Developer, the default is True.
Property Description
Outbound Fully qualified name of the IS service that Integration Server must
callback invoke for an outbound SOAP message if you want to insert custom
service processing logic into a SOAP request message in case of a consumer
web service descriptor and a SOAP response message in case of
a provider web service descriptor. For more information about
outbound callback services, see Web Services Developer’s Guide.
Filter login Indicates whether or not Integration Server filters the login
credentials credentials in incoming SOAP requests based on the credentials that
are provided in the WS-Security policy attached to the web service
descriptor.
When this property is set to true, Integration Server filters the login
credentials in incoming SOAP requests and processes only those
credentials that are provided in the WS-Security policy attached to
the web service descriptor.
When this property is set to false, Integration Server processes all
the credentials that are available in the incoming SOAP request
without verifying whether the credentials are also provided in the
WS-Security policy attached to the web service descriptor.
The default is True.
Integration Server applies this property to process incoming
requests in case of provider web service descriptors and to process
asynchronous responses in case of consumer web service descriptors.
Omit xsd:any When generating the schema definition in a WSDL for a provider
from WSDL web service descriptor, specifies whether the xsd:any element is
omitted from the complex type definition that corresponds to an
open document.
Integration Server considers a document to be open if the Allow
unspecified fields property is set to True and considers a document to
be closed if the Allow unspecified fields property is set to False.
Select... To...
Note: For changes to this property to take effect, save the changes
and either refresh the web service descriptor or reload the
package that contains the web service descriptor.
Operation Properties
In the Properties view, under General, you can view basic information about an
operation in the web service descriptor.
Property Description
Operation Displays the name of the operation. For a provider web service
Name descriptor, this will be the local portion of the Universal Name of
the IS service. For a consumer web service descriptor, this will be
the operation Name from the WSDL that was used to create the
web service descriptor.
IS Service Displays the fully qualified name of the IS service representing this
operation.
Property Description
Document type Fully qualified name of the IS Document type that defines the
body.
Schema URL URL to the XML schema definition if the signature source is an
element declaration from an XML schema definition.
Signature The source for the input/output signature for the operation. Click
Modify Signature to change the signature source.
You can only change the operation signature source for a provider
web service descriptor created from an existing IS service. You can
use an element declaration in an external XML schema definition
or an IS document type.
Property Description
Document type Displays the fully qualified name of the IS Document type that
defines the Header element.
Role URI naming the Actor (for SOAP 1.1) or Role (for SOAP 1.2) at
which this header element is targeted. A Header is “targeted” at
a SOAP Node if the node is acting in the role specified on that
Header. The possible values are defined by the SOAP Specification.
Property Description
Document type Displays the fully qualified name of the IS Document type defining
the Fault element.
Property Description
Port address Endpoint address associated with this web service, that is, the
network address at which the web service can be invoked.
For a consumer web service descriptor, this value is determined
by the location aribute in the soap:address element (which is
contained within the soap:port element of the service element).
For a WSDL first provider web service descriptor, the Port
address is empty.
For a service first web service descriptor, you can edit the Port
address for a binder that uses HTTP or HTTPS as the Transport.
For a web service descriptor that uses the JMS transport, the
Port address displays the initial part of the JMS URI, specifically
jms:<lookup var>:<dest>?targetService. Integration Server
displays additional information that is part of the JMS URI in
the JMS Settings and JMS Message Details properties.
The Port address value is display-only when Transport is JMS, the
binder is in a consumer web service descriptor, or the binder is
in a WSDL first provider web service descriptor.
Port alias Endpoint alias name associated with this web service. The
endpoint alias name will be used for this binder when
generating a WSDL for a provider or when executing a web
service connector for a consumer. The actual endpoint value is
looked up at run time in both cases. New aliases can be defined
from the Integration Server, using Settings > Web Services.
For a provider web service and a binder with a protocol of
HTTP or HTTPS, you can assign the default provider endpoint
alias to the binder. Select DEFAULT(aliasName) if you want to
use the information in the default provider web service endpoint
alias. If the Alias list includes a blank row, Integration Server
does not have a default provider web service endpoint alias for
the protocol.
Port name Name of the port associated with the web service, as defined by
the WSDL; an aggregate of a binding and a network address.
Directive The SOAP processor for which the web service will be a target.
The drop-down menu lists all registered SOAP processors on
the Integration Server to which you are currently connected.
Porttype name Name of the portType associated with the WSDL binding
element.
SOAP version Version of the SOAP message protocol to be used; either SOAP
1.1 or SOAP 1.2.
SOAP binding The style of the SOAP binding and its operations; either
style Document or RPC (Remote Procedure Call).
SOAP binding use The usage attribute of the SOAP binding and its operations;
either literal or encoded.
SOAP action SOAP action associated with the operations in the binder.
Click in the Value column to display the SOAP action string
associated with each operation in the binder.
Property Description
Response The address template that you can use as ReplyTo or FaultTo
endpoint address address to make the consumer web service descriptor process
template responses asynchronously. This property displays the following
address format:
HTTP binder: http://<server>:<port>/ws/wsdName/portName
HTTPS binder: https://<server>:<port>/ws/wsdName/portName
JMS binder: jms:<topic/queue/jndi>:<destinationName>?
targetService=soapjms/wsdName/portName
Where, wsdName is the web service descriptor name and
portName is the name of the port associated with the web service,
as defined by the WSDL; an aggregate of a binding and a
network address.
You must specify this address as the value for ReplyTo and/or
FaultTo address in the messageAddressingProperties parameter of
the corresponding web service connector to use this consumer
web service descriptor to process responses asynchronously
by invoking the callback response services. You must replace
the placeholders <server> and <port> or <topic/queue/jndi> and
<destinationName> with appropriate values depending on the
transport mechanism used to invoke the web service.
Use CSQ Indicates whether Integration Server places the request message
in the client side queue if the JMS provider is not available at the
time the message is sent.
Property Description
available at the time the web service connector
executes. This is the default.
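The Response endpoint address template property described above uses angle-bracket placeholders that you must replace with values appropriate to your transport. A minimal sketch of that substitution, with illustrative host and port values (the helper name is an assumption, not Integration Server API):

```python
def response_endpoint(template, **values):
    """Fill the <placeholder> slots of a response endpoint address
    template (illustrative helper only)."""
    for name, value in values.items():
        template = template.replace("<%s>" % name, value)
    return template
```

For example, filling the HTTP template with a hypothetical server and port yields the address to supply as the ReplyTo or FaultTo value in the connector's messageAddressingProperties parameter.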
Property Description
Variant identifier Specifies how a destination name is looked up. The Variant
identifier corresponds to the jms-variant syntax in the JMS URI
Schema. The Variant identifier will be one of the following:
Value Description
Property Description
Destination If the Variant identifier is jndi, specifies the JNDI provider lookup
name for the destination to which messages are sent on the JMS
provider.
If the Variant identifier is queue or topic, specifies the name of the
destination to which messages are sent.
JMS connection Name of the JMS connection alias used to connect to the JMS
alias provider.
Designer displays this property only when the Variant identifier is
“queue” or “topic”.
JNDI JNDI provider lookup name for the connection factory used to
connection create a connection to the JMS provider.
factory name
Designer displays this property only when the Variant identifier is
“jndi”.
JNDI initial Java class name of the InitialContextFactory for the JNDI
context factory provider.
Designer displays this property only when the Variant identifier is
“jndi”.
JNDI URL Location of the registry when the registry is being used as the
initial context.
Designer displays this property only when the Variant identifier is
“jndi”.
Other properties Any additional properties the JNDI provider requires for
configuration.
Designer displays this property only when the Variant identifier is
“jndi”.
The JMS Message Details properties display information for the request message, such as delivery mode, time to live, and the
destination for replies. The JMS Message Details properties are read-only.
For a provider web service descriptor, the web service endpoint alias assigned to the
binder’s port alias determines the values of the properties under JMS Message Details. A
blank property indicates that the web service endpoint alias does not specify a value for
the property. For example, if the web service endpoint alias does not specify a delivery
mode, the Delivery mode property under JMS Message Details will be blank too.
For a consumer web services descriptor, the binding information in the WSDL
document used to create the consumer web service descriptor determines the values
of the JMS Message Details properties. If the WSDL document does not contain
information that Integration Server uses to populate a property for the corresponding
binding, the property will be blank. For example, if the WSDL does not contain the
soapjms:timetolive element, the Time to live property will be blank in the binder.
Property Description
Delivery mode The message delivery mode for the request message. This is the
delivery mode that web service clients must specify in the JMS
message that serves as the request message for the web service.
Value Description
Time to live The number of milliseconds that can elapse before the request
message expires on the JMS provider. A value of 0 indicates that
the message does not expire.
Priority Specifies the message priority. The JMS standard defines priority
levels from 0 to 9, with 0 as the lowest priority and 9 as the
highest.
Reply to name Name or lookup name of the destination to which the web service
sends a response (reply) message.
Reply to type Type of destination to which the web service sends the response
(reply) message.
Property Description
Value Description
Note: The Reply to type property is only applicable when the Variant
identifier is “queue” or “topic”.
Property Description
Class name The Java class name of the web service handler based on JAX-RPC
that acts as the header handler.
Policy type The policy type associated with the header handler. Policy files
used with this header handler must be of this type.
Policy name Specifies the name of the policy assigned to this header handler.
The policy name is obtained from the ID attribute in the policy file.
At run time, the assigned policy can be overridden by the value of
the Effective policy name property.
Effective policy Specifies the name of the policy used with this header handler at
name run time. The effective policy overrides the policy assigned in the
Policy name property.
Property Description
Select... To...
Join expires Indicates whether the join expires after the time period specified in
Expire after.
Select... To...
False Indicate that the join should not expire. That is,
Integration Server should wait indefinitely for the
remaining documents in a join condition.
to publishable document types that can be published to
Universal Messaging.
Expire after Specifies how long Integration Server waits for the remaining
documents in the join condition. The default join time-out period is
1 day.
Priority enabled Specifies whether priority messaging is enabled or disabled for the
webMethods Messaging Trigger.
This property applies to webMethods Messaging Triggers that
receive documents from Broker only. webMethods Messaging
Triggers that receive documents from Universal Messaging always
receive higher priority documents in an expedited fashion.
Additionally, priority messaging does not apply to locally
published documents received by the webMethods Messaging
Trigger. At run time, Integration Server ignores the value of the
Priority enabled property if the trigger receives a locally published
document.
Select... To...
Execution user Specifies the name of the user account whose credentials
Integration Server uses to execute a service associated with the
webMethods Messaging Trigger. You can specify a locally defined
user account or a user account defined in a central or external
directory.
Property Description
Reuse Specifies whether this element can be dragged from the CentraSite
Registry Explorer view to a BPM process or CAF project.
When this property is set to public, you can drag the asset to a
BPM process or CAF project.
When this property is set to private (the default), you cannot drag
the asset to a BPM process or CAF project.
All published assets are available for Impact Analysis, whether
they are public or private.
Although changing the public/private status will immediately
change whether or not you can drag an element to a BPM process
or CAF project, the element's status in CentraSite will not change
until the next publication of assets to CentraSite.
Property Description
retrieves more documents for the trigger. The default is 4
documents.
Property Description
Select... To...
Property Description
Max execution Specifies the maximum number of server threads that can process
threads documents for this trigger concurrently. Integration Server uses
one server thread to process each document in the trigger queue.
The default is 1 server thread.
Property Description
Select... To...
Property Description
Select... To...
Max retry Specifies the maximum number of times the Integration Server
attempts should re-execute the trigger service if an ISRuntimeException
occurs during service execution. The default is 0 attempts, which
indicates that Integration Server does not retry the trigger service.
Retry interval Specifies the length of time Integration Server waits between
attempts to execute the trigger service. The default is 10 seconds.
On retry failure Indicates how Integration Server handles a retry failure for a
trigger. A retry failure occurs when Integration Server reaches the
maximum number of retry attempts and the trigger service still
fails because of an ISRuntimeException.
This property also determines how Integration Server handles a
transient error that occurs during trigger preprocessing.
Select... To...
Property Description
Select... To...
Property Description
determine whether the trigger received the document
previously. The redelivery count indicates the number
of times the routing resource has redelivered a
document to the trigger.
Select... To...
History time to Specifies the length of time the document history database
live maintains an entry for a document processed by the trigger.
During this time period, Integration Server discards any
documents with the same universally unique identifier (UUID)
as an existing document history entry for the trigger. When a
document history entry expires, Integration Server removes it from
the document history database. If the trigger subsequently receives
a document with the same UUID as the expired and removed entry,
the server considers the copy to be new because the entry for the
previous document has been removed from the database.
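The duplicate detection described above amounts to a UUID history with a time to live: a document whose UUID is still in the history is discarded, while one arriving after its entry has expired is treated as new. A minimal sketch under those assumptions (class and method names are illustrative, not Integration Server API):

```python
import time

class DocumentHistory:
    """Sketch of duplicate detection via a document history database
    with a time-to-live (illustrative only)."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._seen = {}  # uuid -> time the entry was recorded

    def is_duplicate(self, uuid, now=None):
        now = time.time() if now is None else now
        # Remove expired entries; a document arriving after its
        # entry expired is considered new.
        for u in [u for u, t in self._seen.items() if now - t >= self.ttl]:
            del self._seen[u]
        if uuid in self._seen:
            return True
        self._seen[uuid] = now
        return False
```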
A flow step is a basic unit of work (expressed in the webMethods flow language) that
webMethods Integration Server interprets and executes at run time. The webMethods
flow language provides the following flow steps that invoke services and flow steps that
let you edit data in the pipeline:
BRANCH
EXIT
INVOKE
LOOP
MAP
REPEAT
SEQUENCE
BRANCH
The BRANCH step selects and executes a child step based on the value of one or more
variables in the pipeline. You indicate the variables you want to branch on by specifying
a switch value or by writing an expression that includes the variables.
Branching on Expressions
When you branch on expressions, you set the Evaluate labels property of the BRANCH
step to true. In the Label property for each child step, you write an expression that
includes one or more variables. At run time, the BRANCH step executes the first child
step with an expression that evaluates to true.
If you want to specify a child step to execute when none of the expressions are true, set
the label of the child step to $default.
BRANCH Properties
The BRANCH step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property,
type the variable name between % symbols. For example,
%expiration%. The variable you specify must be a String.
Label Optional. (Required if you are using this BRANCH step as a target
for another BRANCH or EXIT step.) Specifies a name for this instance
of the BRANCH step, or a null, unmatched, or empty string ($null,
$default, blank).
Switch Specifies the String field that the BRANCH step uses to determine
which child flow step to execute. The BRANCH step executes the
child flow step whose label matches the value of the field specified in
the Switch property. Do not specify a value if you set the Evaluate labels
property to True.
Evaluate Specifies whether or not you want the server to evaluate labels
labels of child steps as conditional expressions. When you branch on
expressions, you enter expressions in the Label property for the
children of the BRANCH step. At run time, the server executes
the first child step whose label evaluates to True. To branch on
expressions, select True. To branch on the Switch value, select False.
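The two BRANCH selection modes described above can be sketched as follows. This is an illustrative model of the selection logic only, not the webMethods flow engine; child steps are represented as callables over the pipeline, and labels are either strings (switch mode) or predicates (Evaluate labels mode).

```python
def run_branch(children, pipeline, switch=None, evaluate_labels=False):
    """Sketch of BRANCH child-step selection (illustrative only).

    children: list of (label, step) pairs, in order. With
    evaluate_labels=True, each label is a predicate over the
    pipeline; otherwise labels are matched against the value of the
    field named by switch. "$default" marks the fallback child.
    """
    if evaluate_labels:
        # Execute the first child whose label expression is true.
        for label, step in children:
            if label != "$default" and label(pipeline):
                return step(pipeline)
    else:
        # Execute the child whose label matches the switch value.
        value = pipeline.get(switch)
        for label, step in children:
            if label == value:
                return step(pipeline)
    # No label matched: fall back to the $default child, if any.
    for label, step in children:
        if label == "$default":
            return step(pipeline)
    return None
```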
EXIT
The EXIT step exits the entire flow service or a single flow step. Specifically, it may exit
from the nearest ancestor loop step, a specified ancestor step, the parent step, or the
entire flow service.
The EXIT step can throw an exception if the exit is considered a failure. When an
exception is thrown, you can specify the error message text by typing it directly
or by assigning it to a variable in the pipeline.
EXIT Properties
The EXIT step has the following properties.
Property Description
Label Optional. (Required if you are using this EXIT step as a target for
a BRANCH step.) Specifies a name for this specific step, or a null,
unmatched, or empty string ($null, $default, blank).
Exit from Required. Specifies the flow step or service from which you want to
exit.
Failure Optional. Specifies the text of the exception message that is displayed
message when Signal is set to FAILURE. If you want to use the value of a pipeline
variable for this property, type the variable name between % symbols.
For example, %mymessage%. The variable you specify must be a String.
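Several step properties (Timeout above, Failure message here) accept either a literal value or the name of a pipeline variable between % symbols, such as %expiration%. A sketch of that resolution (the helper name is illustrative, not Integration Server API):

```python
import re

def resolve_property(value, pipeline):
    """Resolve a step property that may reference a pipeline
    variable between % symbols, e.g. "%expiration%" (sketch only).
    The referenced variable must be a String in the pipeline."""
    m = re.fullmatch(r"%(\w+)%", value)
    if m:
        return pipeline[m.group(1)]
    return value
```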
INVOKE
The INVOKE flow step invokes another service. You can use it to invoke any type of
service, including another flow service.
INVOKE Properties
The INVOKE step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property,
type the variable name between % symbols. For example,
%expiration%. The variable you specify must be a String.
Service Required. Specifies the fully qualified name of the service to invoke.
Validate input Optional. Specifies whether the server validates the input to the
service against the service input signature. If you want the input to
be validated, select True. If you do not want the input to be validated,
select False.
Validate Optional. Specifies whether the server validates the output of the
output service against the service output signature. If you want the output
to be validated, select True. If you do not want the output to be
validated, select False.
LOOP
The LOOP step takes as input an array variable that is in the pipeline. It loops over the
members of an input array, executing its child steps each time through the loop. For
example, if you have a service that takes a string as input and a string list in the pipeline,
use the LOOP step to invoke the service one time for each string in the string list.
You identify a single array variable to use as input when you set the properties for the
LOOP step. You can also designate a single variable for output. The LOOP step collects
an output value each time it runs through the loop and creates an output array that
contains the collected output values. If you want to collect more than one variable,
specify a document that contains the fields you want to collect for the output variable.
LOOP Properties
The LOOP step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property,
type the variable name between % symbols. For example,
%expiration%. The variable you specify must be a String.
Label Optional. (Required if you are using this step as a target for a
BRANCH or EXIT step.) Specifies a name for this specific step, or a
null, unmatched, or empty string ($null, $default, blank).
Input array Required. Specifies the input array over which to loop. You must
specify a variable in the pipeline that is an array data type (that is,
String list, String table, document list, or Object list).
Output array Optional. Specifies the name of the field in which the server places
output data for an iteration of the loop. The server collects the output
from the iterations into an array field with the same name. You do
not need to specify this property if the loop does not produce output
values.
MAP
The MAP step adjusts the pipeline at any point in a flow. It makes pipeline
modifications that are independent of an INVOKE step.
Within the MAP step, you can:
Link (copy) the value of a pipeline input field to a new or existing pipeline output
field.
Drop an existing pipeline input field. (Keep in mind that once you drop a field from
the pipeline, it is no longer available to subsequent services in the flow.)
Assign a value to a pipeline output field.
Perform document-to-document mapping in a single view by inserting transformers.
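The three basic MAP operations above can be sketched with a pipeline modeled as a dict. The field names (`customerName`, `shippingName`, `status`, `tempToken`) are hypothetical; this is not the real MAP editor API.

```python
# Illustrative sketch of the three MAP operations on a pipeline,
# modeled here as an ordinary dict.

def map_step(pipeline):
    # Link: copy an input field's value to a new output field.
    pipeline["shippingName"] = pipeline["customerName"]
    # Assign: set a hard-coded value on an output field.
    pipeline["status"] = "NEW"
    # Drop: remove a field; it is no longer available to subsequent steps.
    del pipeline["tempToken"]
    return pipeline

pipeline = {"customerName": "ACME", "tempToken": "xyz"}
map_step(pipeline)
```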
MAP Properties
The MAP step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property, type
the variable name between % symbols. For example, %expiration%.
The variable you specify must be a String.
If you do not need to specify a time-out period, leave Timeout blank.
For more information about how Integration Server
handles flow step timeouts, refer to the description of the
watt.server.threadKill.timeout.enabled configuration parameter in
webMethods Integration Server Administrator’s Guide.
Label Optional. (Required if you are using this step as a target for a BRANCH
or EXIT step.) Specifies a name for this specific step, or a null,
unmatched, or empty string ($null, $default, blank).
REPEAT
The REPEAT step repeatedly executes its child steps up to a maximum number of times
that you specify. It determines whether to re-execute the child steps based on a Repeat on
condition. You can set the repeat condition to one of the following:
Repeat if any one of the child steps fails.
Repeat if all of the child steps succeed.
You can also specify a time period that you want the REPEAT flow step to wait before it
re-executes its child steps.
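The retry behavior of REPEAT on failure can be sketched as follows. The helper `repeat_step` is hypothetical; `count` and `repeat_interval` mirror the properties described below, including the `-1` convention for unlimited re-execution.

```python
# Illustrative sketch of REPEAT-on-FAILURE semantics: re-execute the
# child steps up to `count` extra times (count=-1 retries indefinitely),
# waiting `repeat_interval` seconds between attempts.
import time

def repeat_step(body, count, repeat_interval=0):
    attempts = 0
    while True:
        try:
            return body()                    # all child steps succeeded
        except Exception:
            if count != -1 and attempts >= count:
                raise                        # out of retries: failure propagates
            attempts += 1
            time.sleep(repeat_interval)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = repeat_step(flaky, count=5)         # succeeds on the third attempt
```

If the retries are exhausted, the exception propagates, matching the note below that a failing REPEAT passes its failure to its parent step.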
REPEAT Properties
The REPEAT step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property, type
the variable name between % symbols. For example, %expiration%.
The variable you specify must be a String.
If you do not need to specify a time-out period, leave Timeout blank.
For more information about how Integration Server
handles flow step timeouts, refer to the description of the
watt.server.threadKill.timeout.enabled configuration parameter in
webMethods Integration Server Administrator’s Guide.
Label Optional. (Required if you are using this step as a target for a BRANCH
or EXIT step.) Specifies a name for this specific step, or a null,
unmatched, or empty string ($null, $default, blank).
Count Required. Specifies the maximum number of times the server
re-executes the child steps in the REPEAT step. Set Count to 0 (zero) to
instruct the server that the child steps should not be re-executed. Set
Count to a value greater than zero to instruct the server to re-execute the
child steps up to a specified number of times. Set Count to -1 to instruct
the server to re-execute the child steps as long as the specified Repeat on
condition is true.
If you want to use the value of a pipeline variable for this property, type
the variable name between % symbols. For example, %servicecount%.
The variable you specify must be a String.
Repeat interval Optional. Specifies the number of seconds the server waits before
re-executing the child steps. Specify 0 (zero) to re-execute the child steps
without a delay.
If you want to use the value of a pipeline variable for this property, type
the variable name between % symbols. For example, %waittime%. The
variable you specify must be a String.
Repeat on Required. Specifies when the server re-executes the REPEAT child steps.
Select SUCCESS to re-execute the child steps when all of the child steps
complete successfully. Select FAILURE to re-execute the child steps when
any one of the child steps fails.
If the REPEAT step is a child of another step, the failure is propagated to its parent.
SEQUENCE
The SEQUENCE step forms a collection of child steps that execute sequentially. This is
useful when you want to group a set of steps as a target for a BRANCH step.
You can set an exit condition that indicates whether the SEQUENCE should exit
prematurely and, if so, under what condition. Specify one of the following exit
conditions:
Exit the SEQUENCE when a child step fails. Use this condition when you want to ensure
that all child steps are completed successfully. If any child step fails, the SEQUENCE
ends prematurely and the sequence fails.
Exit the SEQUENCE when a child step succeeds. Use this condition when you want to
define a set of alternative services, so that if one fails, another is attempted. If a child
step succeeds, the SEQUENCE ends prematurely and the sequence succeeds.
Exit the SEQUENCE after executing all child steps. Use this condition when you want to
execute all of the child steps regardless of their outcome. The SEQUENCE does not
end prematurely.
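The three exit conditions above can be sketched with a small Python analogue. The helper `sequence_step` and the condition names as Python strings are illustrative, not the real API.

```python
# Illustrative sketch of the SEQUENCE exit conditions: exit on FAILURE
# (all-or-nothing), exit on SUCCESS (try alternatives until one works),
# or DONE (run every child step regardless of outcome).

def sequence_step(steps, exit_on="FAILURE"):
    """Run steps in order; return True if the sequence succeeds."""
    for step in steps:
        try:
            step()
            if exit_on == "SUCCESS":
                return True      # a child succeeded: exit early, sequence succeeds
        except Exception:
            if exit_on == "FAILURE":
                return False     # a child failed: exit early, sequence fails
    # DONE (or FAILURE with no failures): sequence succeeds.
    # SUCCESS with no successful child: sequence fails.
    return exit_on != "SUCCESS"

def fails():
    raise RuntimeError("boom")

def works():
    pass

ok1 = sequence_step([works, fails, works], exit_on="FAILURE")   # fails at step 2
ok2 = sequence_step([fails, works, fails], exit_on="SUCCESS")   # 2nd alternative works
```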
SEQUENCE Properties
The SEQUENCE step has the following properties.
Property Description
Timeout Optional. Specifies the maximum number of seconds that this step
should run. If this time elapses before the step completes, Integration
Server issues a FlowTimeoutException and execution continues with
the next step in the service.
If you want to use the value of a pipeline variable for this property, type
the variable name between % symbols. For example, %expiration%.
The variable you specify must be a String.
If you do not need to specify a time-out period, leave Timeout blank.
For more information about how Integration Server
handles flow step timeouts, refer to the description of the
watt.server.threadKill.timeout.enabled configuration parameter in
webMethods Integration Server Administrator’s Guide.
Label Optional. (Required if you are using this step as a target for a BRANCH
or EXIT step.) Specifies a name for this specific step, or a null,
unmatched, or empty string ($null, $default, blank).
pipeline to the state it was in before the
SEQUENCE step executed.
52 Data Types
■ Data Types in IData Objects .................................................................................................... 1156
■ Java Classes for Objects ......................................................................................................... 1158
■ How Designer Supports Tables ............................................................................................... 1160
Designer supports several data types for use in services. Each data type supported by
Designer corresponds to a Java data type and has an associated icon. Designer applies
different Java classes and displays different icons depending on whether the data type is
associated with:
An element in an IData object
An Object or Object list to which you have applied a Java class
Note: Designer does not provide a separate data type for tables.
Note: Designer displays small symbols next to variable icons to indicate validation
constraints. Designer uses one symbol to indicate an optional variable and the
‡ symbol to denote a variable with a content constraint. Designer also uses
symbols to indicate that the variable has a default value assigned to it that can
be overridden, and that the variable has a null value assigned to it that cannot
be overridden. A combination of these symbols next to a variable icon
indicates that the variable has a fixed default value that is not
null and cannot be overridden.
Note: When you input values for a constrained Object during debugging or when
assigning a value in the pipeline, Designer validates the data to make sure it is
of the correct type.
The following table identifies the Java classes you can apply to Objects and Object list
variables in Designer.
Note:
Integration Server only
supports this Java wrapper
type for web services.
53 Icons
■ Package Navigator View Icons ................................................................................................ 1162
■ UDDI Registry View Icons ....................................................................................................... 1166
■ Flat File Element Icons ............................................................................................................ 1166
■ Flow Step Icons ....................................................................................................................... 1167
■ OData Service Icons ................................................................................................................ 1168
■ REST API Descriptor Icons ..................................................................................................... 1170
■ Schema Component Icons ....................................................................................................... 1170
This topic describes the icons used to identify elements in the Service Development
perspective.
REST resource folder. A folder that contains the services that act as REST
resources. To display the services for a REST resource, click the expand icon
next to its name. Services can be named _get, _put, _post, _patch, _delete, or
_default.
Flat file dictionary. A flat file dictionary contains record definitions, field
definitions, and composite definitions that can be used in multiple flat
file schemas.
Flat file schema. A flat file schema is the blueprint that contains the
instructions for parsing or creating the records in a flat file, as well as
the constraints to which an inbound flat file document should conform
to be considered valid. Using flat file schemas, you can translate
documents into and from flat file formats.
XSLT service. An XSLT service converts XML data into other XML
formats or into HTML, using rules defined in an associated XSLT
stylesheet.
Trading Networks document type. You can drag and drop a Trading
Networks (TN) document type into a process model. The “drop”
creates a Receive step in the process, with the subscription set to the
TN document type name.
The following table identifies the icon used for each flat file element.
The following table identifies the icon used for each flow step.
Icon Description
Key OData property. Each Entity Type should have at least one
Key property.
Symbol Description
<element> declaration in an XML schema definition and the
ELEMENT declaration in a DTD.
54 Toolbars
■ Compare Editor Toolbar ........................................................................................................... 1176
■ Document Type Editor Toolbar ................................................................................................ 1176
■ Flat File Schema and Dictionary Editors Toolbars ................................................................... 1177
■ Package Navigator View Toolbar ............................................................................................. 1178
■ Pipeline View Toolbar ............................................................................................................... 1178
■ REST API Descriptor Toolbar .................................................................................................. 1180
■ Service Editor Toolbar .............................................................................................................. 1180
■ Results View Toolbar ............................................................................................................... 1181
■ Specification Editor Toolbar ..................................................................................................... 1182
■ UDDI Registry View Toolbar .................................................................................................... 1183
■ Variables View Toolbar ............................................................................................................. 1184
■ Web Service Descriptor Editor Toolbar .................................................................................... 1184
This topic describes the various toolbar buttons available in the Service Development
perspective.
Button Description
Deletes the selected variable. If you select a variable that has children,
the children are deleted as well. Equivalent to Edit > Delete.
Moves the selected variable down one position. If the selected variable
cannot be moved down from its current position, this button is not
available. (Try promoting or demoting it first.)
Promotes the selected variable one position to the left. You use this
button to move a variable out of a document or document list. If the
variable cannot be shifted left from its current position, this button is
not available.
Demotes the selected variable one position to the right. You use this
button to make a variable a member of a document or document list.
If the variable cannot be shifted right from its current position, this
button is not available.
Displays a list of data types that you can use to create variables for the
document type. To create a variable for the document type, select the
appropriate data type from this list, and then give the new variable a
name.
Button Description
Note: This toolbar button appears on the flat file schema editor
toolbar only.
Button Description
Opens the editor for the selected element in Package Navigator view.
Button Description
ForEach mapping button. This button is not available unless you select
two array variables of the same data type.
Drops the selected variable from the pipeline. You may remove a
variable from the Pipeline In or Pipeline Out stage. When you drop a
variable, that variable is removed permanently from the pipeline and
is not available to subsequent services in the flow.
Moves the selected variable down one position. If the selected variable
cannot be moved down from its current position, this button is not
available. (Try promoting or demoting it first.)
Promotes the selected variable one position to the left. If the variable
cannot be shifted left from its current position, this button is not
available.
Demotes the selected variable one position to the right. If the variable
cannot be shifted right from its current position, this button is not
available.
Enables the Pipeline In, Pipeline Out, Service In, Service Out, and
Transformers columns to be scrolled horizontally and vertically
independently of each other. Independent scrolling is especially useful
when mapping a large amount of data in the Pipeline view.
Button Description
Deletes the selected flow step. If you select a step that has children, the
children are deleted as well. Equivalent to Edit > Delete.
Moves the selected flow step up in the list. If the selected step cannot
be moved up from its current position, this button is not available. (Try
promoting or demoting it first.)
Moves the selected flow step down in the list. If the selected step
cannot be moved down from its current position, this button is not
available. (Try promoting or demoting it first.)
Click the arrow next to this button to view the list of flow steps and a
list of commonly used services that can be inserted into the flow
service as an INVOKE step. You can edit the Window > Preferences
> Software AG > Service Development > Flow Service Editor preferences to
customize this list of services to suit your needs.
Inserts a MAP step into the flow service. A MAP step performs
specified editing operations on the pipeline (for example, adding or
dropping variables to or from the pipeline).
Inserts a LOOP step into the flow service. A LOOP step executes a set
of steps once for each element in a specified array.
Inserts a REPEAT step into the flow service. A REPEAT step re-
executes a set of steps up to a specified number of times based on the
successful or non-successful completion of the set.
Inserts an EXIT step into the flow service. An EXIT step controls
the execution of flow steps; for example, it can halt an entire flow from
within a series of deeply nested steps, throw an exception without
writing a Java service, or exit a LOOP or REPEAT without throwing an
exception.
Inserts an INVOKE step into the flow service. Select from the
displayed list of services or browse to select a service.
Button Description
Saves the service results pipeline to a file in your local file system.
Restores the pipeline contents from a file on your local file system.
Pins a result to Results view so that the result is not removed from
Results view.
Button Description
Deletes the selected variable. If you select a variable that has children,
the children are deleted as well. Equivalent to Edit > Delete.
Moves the selected variable down one position. If the selected variable
cannot be moved down from its current position, this button is not
available. (Try promoting or demoting it first.)
Promotes the selected variable one position to the left. You use this
button to move a variable out of a document or document list. If the
variable cannot be shifted left from its current position, this button is
not available.
Demotes the selected variable one position to the right. You use this
button to make a variable a member of a document or document list.
If the variable cannot be shifted right from its current position, this
button is not available.
Displays a list of data types that you can use to create variables for
the specification. To create a variable for the specification, select the
appropriate data type from this list, and then give the new variable a
name.
Button Description
Remove the filter from the contents of the UDDI Registry view and
display all the published web services.
Create a web service descriptor (WSD) from the web service selected in
the UDDI Registry view.
Button Description
Drop a selected variable from the pipeline passed to the next step in a
debugging session.
Loads a pipeline from a local file. The pipeline you load completely
replaces the current debugging pipeline.
55 Keyboard Shortcuts
You can use the following keyboard shortcuts to navigate and perform actions in the
Service Development perspective.
56 Conditional Expressions
■ Guidelines for Writing Expressions and Filters ........................................................................ 1190
■ Syntax ....................................................................................................................................... 1191
■ Operators for Use in Conditional Expressions ......................................................................... 1194
■ Operator Precedence in Conditional Expressions ................................................................... 1201
■ Addressing Variables ................................................................................................................ 1202
■ Rules for Use of Expression Syntax with the Broker ............................................................... 1205
Integration Server provides syntax and operators that you can use to create expressions
for use with the BRANCH step, pipeline mapping, and in trigger conditions.
In a BRANCH step, you can use an expression to determine the child step that
webMethods Integration Server executes. At run time, the first child step whose
conditional expression evaluates to “true” is the one that will be executed. For more
information about the BRANCH step, see "BRANCH" on page 1140.
In pipeline mapping, you can place a condition on the link between variables. At
run time, webMethods Integration Server only executes the link if the assigned
condition evaluates to “true.” For more information about applying conditions to
links between variables, see "Linking Variables Conditionally" on page 293.
For webMethods Messaging Triggers, you can further specify the documents that a
trigger receives and processes by creating filters for the publishable document types.
A filter specifies criteria for the contents of a document.
Note: The conditional expressions syntax is for filters created for documents
received from Broker or locally and for the local filter for a document
received from Universal Messaging. For information about the syntax
for creating a provider filter on Universal Messaging, see the Universal
Messaging documentation.
For JMS triggers, you can create local filters to further limit the messages a JMS
trigger processes. A local filter specifies criteria for the contents of the message
body. Integration Server applies a local filter to the message after the JMS trigger
receives the message from the JMS provider. If the message meets the filter criteria,
Integration Server executes the trigger service specified in the routing rule.
Syntax
When you create an expression, you need to determine which values to include in
the expression. Values can be represented as variable names, regular expressions,
numbers, and Strings. The following table identifies the types of values you can use in
an expression and the syntax for each value type.
Example Explanation
Byte "xx"
Example: "10" (for 0X0A)
Character "a"
Example: "C"
Relational Operators
You can use relational operators to compare the value of two fields or you can compare
the value of a field with a constant. Integration Server provides two types of relational
operators: standard and lexical.
Standard Relational Operators can be used in expressions and filters to compare the
contents of fields (variables) with other variables or constants.
Lexical Relational Operators can be used to compare the contents of fields (variables)
with string values in trigger filters.
Note: You can also use standard relational operators to compare string values.
However, filters that use standard relational operators to compare string
values will not be saved with the trigger subscription on Broker. If the
subscription filter resides only on Integration Server, Broker automatically
places the document in the subscriber’s queue. Broker does not evaluate the
filter for the document. Broker routes all of the documents to the subscriber,
creating greater network traffic between Broker and Integration Server and
requiring more processing by Integration Server.
== a == b Equal to.
Note: To set the filter collation locale, use My webMethods to change the locale
on the Broker Server. You might need to restart the Broker Server for the
change to take effect. For more information about administering the Broker
Server, see Administering webMethods Broker.
Filters that use lexical relational operators to compare string values will be saved
with the trigger subscription on the Broker. Filters that use standard relational
operators to compare string values will not be saved on the Broker.
When you view filters on My webMethods, a lexical operator appears as its
equivalent standard operator. For example, the expression %myString% L_EQUALS
"abc" appears as myString=="abc".
The following table describes the lexical operators that you can use in filters.
Operator Description
Logical Operators
You can use the following logical operators in expressions to create conditions consisting
of more than one expression:
&    expr & expr    Logical AND. Both expressions must evaluate to true
for the entire condition to be true.
&&    expr && expr    Logical AND. Both expressions must evaluate to true
for the entire condition to be true.
and    expr and expr    Logical AND. Both expressions must evaluate to true
for the entire condition to be true.
1 ()
2 not,!
5 or, |, ||
Tips
To override the order in which expressions in a condition are evaluated, enclose
the expressions you want evaluated first in parentheses. Integration Server evaluates
expressions contained in parentheses first.
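The tip above can be demonstrated with a small Python analogue, since Python's `not`/`and`/`or` precedence mirrors the ordering in the table (parentheses first, then not, with or last):

```python
# Parentheses override the default precedence: without them, "and"
# binds tighter than "or", so a and b or c means (a and b) or c.
a, b, c = False, True, True

without_parens = a and b or c        # evaluated as (a and b) or c
with_parens = a and (b or c)         # parentheses force b or c first
```

Here `without_parens` is True while `with_parens` is False, so the two groupings of the same operators give different results.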
Addressing Variables
In an expression, you can refer to the values of variables that are children of other
variables and refer to the values of elements in an array variable. To address children of
variables or an element in an array, you need to use a directory-like notation to describe
the position of the value.
Notes:
To view the path to a variable in the pipeline, rest the mouse pointer over the
variable name. Designer displays the variable path in a tool tip.
To copy the path to a variable in a pipeline, select the variable, right-click, and select
Copy.
You can enclose variable names in %, for example %buyerInfo/state%. If the
variable name includes special characters, you must enclose the path to the variable
in % (percent) symbols and enclose the variable name in " " (quotation marks). For
more information about using variables as values in expressions, see "Syntax" on
page 1191.
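The directory-like notation described above can be sketched against a pipeline modeled as nested dicts and lists. The `resolve` helper and the bracketed index syntax shown are illustrative assumptions, not the Integration Server implementation.

```python
# Illustrative sketch: resolving a %...%-style variable path such as
# %buyerInfo/state% (child of a document) or %buyerInfo/items[1]%
# (an element of an array) against nested dicts and lists.
import re

def resolve(pipeline, path):
    """Walk a /-separated path; hypothetical helper, not the real API."""
    value = pipeline
    for part in path.strip("%").split("/"):
        m = re.fullmatch(r"(.+)\[(\d+)\]", part)
        if m:                                  # array element: name[index]
            value = value[m.group(1)][int(m.group(2))]
        else:                                  # child of a document
            value = value[part]
    return value

pipeline = {"buyerInfo": {"state": "CA", "items": ["pen", "ink"]}}
state = resolve(pipeline, "%buyerInfo/state%")
item = resolve(pipeline, "%buyerInfo/items[1]%")
```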
To include...    Type...
\ (backslash)    \\
[ (opening bracket)    \[
] (closing bracket)    \]
( (opening parenthesis)    \(
) (closing parenthesis)    \)
% (percent)    \%
Note: When you use variable names with special characters in expressions or filters,
you must enclose the variable name in " " (quotation marks).
57 Regular Expressions
■ Using a Regular Expression in a Mask ................................................................................... 1210
■ Regular Expression Operators ................................................................................................. 1210
Note: Integration Server and Designer use PERL regular expressions by default.
To specify a regular expression, you must enclose the expression between / symbols.
When the server encounters this symbol, it knows to interpret the characters between
these symbols as a pattern-matching string (that is, a regular expression).
A simple pattern-matching string such as /string/ matches any element that contains
string. So, for example, the regular expression /webMethods/ would match all of the
following strings:
"webMethods"
"You use webMethods Integration Server to execute services"
"Exchanging data with XML is easy using webMethods"
"webMethods Integration Server"
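Integration Server uses PERL regular expressions by default; Python's `re` module is close enough to illustrate the contains-match behavior of /webMethods/ against the strings listed above:

```python
# A /string/ pattern matches any element that CONTAINS the string,
# which in Python terms is re.search, not re.fullmatch.
import re

pattern = re.compile("webMethods")   # the part between the / symbols

candidates = [
    "webMethods",
    "You use webMethods Integration Server to execute services",
    "Exchanging data with XML is easy using webMethods",
    "webMethods Integration Server",
]
matches = [s for s in candidates if pattern.search(s)]
```

All four candidate strings match, while a string without the substring does not.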
retains the first 30 characters in each matching element and discards the rest.
This example would keep the first 25 characters within each paragraph
and discard the rest.
\Z Match only at the end of a string (or before a new line at the end).
Example doc.p[/webMethods\Z/].text
This example would return any paragraph containing the string
‘webMethods’ at the end of the paragraph element or at the end of any
line within that element.
You can apply content constraints to variables in the IS document types, specifications,
or service signatures that you want to use as blueprints in data validation. Content
constraints describe the data a variable can contain. At validation time, if the variable
value does not conform to the content constraints applied to the variable, the validation
engine considers the value to be invalid. For more information about validation, see
"Performing Data Validation" on page 319.
When applying content constraints to variables, you can do the following:
Select a content type. A content type specifies the type of data for the variable value,
such as string, integer, boolean, or date. A content type corresponds to a simple type
definition in a schema.
Set constraining facets. Constraining facets restrict the content type, which in turn,
restrict the value of the variable to which the content type is applied. Each content
type has a set of constraining facets. For example, you can set a length restriction for
a string content type, or a maximum value restriction for an integer content type.
For example, for a String variable named itemQuantity, you might specify a content type
that requires the variable value to be an integer. You could then set constraining facets
that limit the content of itemQuantity to a value between 1 and 100.
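The itemQuantity example can be sketched as follows. The validator function is hypothetical, standing in for the real validation engine; the facet names in the comment use the XML Schema terms introduced later in this appendix.

```python
# Illustrative sketch: a String variable constrained to an integer
# content type with range facets (minInclusive=1, maxInclusive=100).

def validate_item_quantity(value):
    """Return True if the string value satisfies the constraints."""
    try:
        n = int(value)               # content type: integer
    except (TypeError, ValueError):
        return False                 # not an integer at all: invalid
    return 1 <= n <= 100             # constraining facets: range 1..100

checks = [validate_item_quantity(v) for v in ["50", "0", "100", "abc"]]
```

"50" and "100" pass; "0" violates the lower bound and "abc" fails the content type, so the validation engine would consider both invalid.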
The content types and constraining facets described in this appendix correspond
to the built-in data types and constraining facets in XML Schema. The World Wide
Web Consortium (W3C) defines the built-in data types and constraining facets in the
specification XML Schema Part 2: Datatypes (http://www.w3c.org/TR/xmlschema-2).
Content Types
The following table identifies the content types you can apply to String, String list, or
String table variables. Each of these content types corresponds to a built-in simple type
defined in the specification XML Schema Part 2: Datatypes.
Note: For details about constraints for Objects and Object lists, see "Data Types" on
page 1155.
Note: The anyURI type indicates that the variable value plays
the role of a URI and is defined like a URI. webMethods
Integration Server does not validate URI references.
dateTime A specific instant of time (a date and time of day). Values need
to match the following pattern:
CCYY-MM-DDThh:mm:ss.sss
Example
P2Y10M20DT5H50M represents a duration of 2 years, 10 months,
20 days, 5 hours, and 50 minutes
gDay A specific day that recurs every month. Values must match the
following pattern:
---DD
Where DD represents the day. The pattern can include a
Z at the end to indicate Coordinated Universal Time or to
indicate the difference between the time zone and coordinated
universal time.
Constraining Facets
enumeration, maxExclusive, maxInclusive, minExclusive,
minInclusive, pattern
Example
---24 indicates the 24th of each month
gMonth A Gregorian month that occurs every year. Values must match
the following pattern:
--MM
Where MM represents the month. The pattern can include
a Z at the end to indicate Coordinated Universal Time or to
indicate the difference between the time zone and coordinated
universal time.
Constraining Facets
enumeration, maxExclusive, maxInclusive, minExclusive,
minInclusive, pattern
Example
--11 represents November
gMonthDay A specific day and month that recurs every year in the
Gregorian calendar. Values must match the following pattern:
--MM-DD
Where MM represents the month and DD represents the day.
The pattern can include a Z at the end to indicate Coordinated
Universal Time or to indicate the difference between the time
zone and coordinated universal time.
Constraining Facets
enumeration, maxExclusive, maxInclusive, minExclusive,
minInclusive, pattern
Example
--09-24 represents September 24th
Example
2001 indicates 2001
Name XML names that match the Name production of XML 1.0
(Second Edition).
Constraining Facets
enumeration, length, maxLength, minLength, pattern,
whiteSpace
NCName Non-colonized XML names. Set of all strings that match the
NCName production of Namespaces in XML.
Constraining Facets
enumeration, length, maxLength, minLength, pattern,
whiteSpace
Example
MAB-0907
time An instant of time that occurs every day. Values must match
the following pattern:
hh:mm:ss.sss
Where hh indicates the hour, mm the minutes, and ss the
seconds. The pattern can include a Z at the end to indicate
Coordinated Universal Time or to indicate the difference
between the time zone and coordinated universal time.
Constraining Facets
enumeration, maxExclusive, maxInclusive, minExclusive,
minInclusive, pattern
Example
18:10:00-05:00 (6:10 pm, Eastern Standard Time) Eastern
Standard Time is 5 hours behind Coordinated Universal Time.
Example
0, 22335, 123223333
Constraining Facets
When you apply a content type to a variable, you can also set constraining facets for the
content type. Constraining facets are properties that further define the content type. For
example, you can set a minimum value or precision value for a decimal content type.
Each content type has a set of constraining facets. The constraining facets described in
the following table correspond to constraining facets defined in the specification XML
Schema Part 2: Datatypes.
Note: Previous versions of XML Schema contained the constraining facets duration,
encoding, period, precision, and scale. However, these constraining facets are
not included in the recommendation of XML Schema Part 2: Datatypes. The
constraining facets duration, encoding, and period were removed; precision and
scale were replaced by totalDigits and fractionDigits.
Note: The word “fixed” appears next to the name of a constraining facet whose
value is fixed and cannot be changed. When a facet has a fixed value, the facet
is called a fixed facet.
Use the webMethods Query Language (WQL) to map data from web documents. This
topic describes WQL and the references, operators, and properties available for use
while parsing the contents of web documents.
Overview
The webMethods Query Language (WQL) provides the primary mechanism for
mapping data from web documents. When a web document is read by webMethods,
the XML or HTML markup within the document is used to parse the contents of the
document into the object model.
XML and HTML markup both consist of tag elements enclosed in angle brackets: < >. In
the process of parsing, tag elements are transformed into arrays of objects; the attributes
of tag elements become object properties. XML and HTML markup both implement
containing elements and empty elements. Containing elements have open and close tags.
Empty elements are single tags.
When a web document is parsed, the text contained within containing elements becomes
the text property of the corresponding XML node.
Data parsed from web documents is accessed with WQL queries, which consist of one or
more indexed element arrays and an object property.
Note: When you use pub.xml:queryXMLNode to query an enhanced XML node (a node
produced by the enhanced XML parser), you must use XQL as the query
language. WQL cannot be used to query an enhanced XML node.
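The idea of parsing tag elements into arrays of objects can be sketched in Python. The following is an illustration of the concept only, not the webMethods parser; it also skips details such as accumulating child text into parent nodes:

```python
from collections import defaultdict
from html.parser import HTMLParser

class TagArrays(HTMLParser):
    """Collect each element type into an array, mirroring WQL addressing."""
    def __init__(self):
        super().__init__()
        self.arrays = defaultdict(list)   # tag name -> list of element objects
        self._open = []                   # stack of currently open elements
    def handle_starttag(self, tag, attrs):
        node = {"attrs": dict(attrs), "text": ""}
        self.arrays[tag].append(node)     # attributes become object properties
        self._open.append(node)
    def handle_data(self, data):
        if self._open:
            self._open[-1]["text"] += data
    def handle_endtag(self, tag):
        if self._open:
            self._open.pop()

p = TagArrays()
p.feed("<p>first</p><p>second</p>")
print(p.arrays["p"][1]["text"])   # analogous to the WQL query doc.p[1].text
```

The indexed lookup on the last line corresponds to a WQL reference such as doc.p[1].text: the second member of the p element array, then its text property.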
Object References
For the following object references, x and y represent numerical indexes.
doc.element [x].property
An absolute reference uses a numerical index into an element array.
doc.element [x].element[x].property
Nested element arrays scope the object reference to child elements.
doc.element [x].line [x].property
An array of lines is fabricated when the text property of a node contains line breaks.
doc.element [x].^.property
The parent of an element is referenced with ^.
doc.element [x].?[x].property
A ? matches any type of element array.
doc.element [].property
Empty brackets signify that all members of an element array are to be returned.
doc.element ['match'].property
Match strings are compared with the .text property of the indexed object. The .text
property contains the text of all child objects.
doc.element [/RegularExpression/].property
Returns an array of elements whose text property matches the specified regular
expression. For information about how to construct a regular expression, see "Regular
Expressions" on page 1209.
doc.element (property ='match').property
Matches the value of a specific element property.
Sibling Operators
WQL provides the following set of operators to refer to siblings of a specified element.
The examples shown in these descriptions refer to the following HTML structure:
<TABLE>
<TR>
<TD>
<B>Bold 0</B>
<I>Italic 0</I>
<B>Bold 1</B>
<B>Bold 2</B>
</TD>
<TD>
<B>Bold 3</B>
<I>Italic 1</I>
<B>Bold 4</B>
<I>Italic 2</I>
</TD>
</TR>
</TABLE>
The sibling operators are constrained by the current parent. If an operator exceeds the
boundaries of the current parent, a null value is produced for that reference.
doc.element [x]@n.property
References the nth sibling after element[x], regardless of type. Compare with
doc.element [x].+n.property, below.
Example Result
doc.td[0].b[0]@1.text Italic 0
doc.td[0].i[0]@1.text Bold 1
doc.td[0].b[0]@4.text Null
doc.element [x]@-n.property
References the nth sibling prior to element[x], regardless of type. Compare with
doc.element [x].-n.property, below.
Example Result
doc.td[1].b[end]@-2.text Bold 3
doc.td[1].i[end]@-1.text Bold 4
doc.td[1].b[end]@-3.text Null
doc.element [x].+n.property
References the nth sibling after element[x] that is of the same type as element[x].
Compare with doc.element [x]@n.property, above.
Example Result
doc.td[0].b[0].+1.text Bold 1
doc.td[0].i[0].+1.text Null
doc.td[0].b[0].+3.text Null
doc.element [x].-n.property
References the nth sibling before element[x] that is of the same type as element[x].
Compare with doc.element [x]@-n.property, above.
Example Result
doc.td[1].b[end].-2.text Null
doc.td[1].i[end].-1.text Italic 1
doc.td[1].b[end].-3.text Null
Object Properties
In addition to the properties derived from the attributes of a parsed XML or HTML tag
element, the following properties are available for all objects:
.text/.txt
Returns the text of an object.
.value/.val
Returns the value of an object. (Equivalent to the text of the object if the element has no
VALUE attribute.)
.source/.src
Returns the XML or HTML source of an object.
.csource/.csrc
Returns the XML or HTML source generated from the parse tree of the document.
.index/.idx
Returns the numerical index of an object.
.reference/.ref
Returns a complete object reference.
Property Masking
Property masking allows for the stripping away of unwanted text from the value of an
object property.
doc.element [x].property [x-y]
Returns a range of characters from position x to position y.
doc.element [x].property ['mask']
Uses wildcard matching and token collecting to extract desired data from the value of an
object property.
doc.element [x].property [/RegularExpression/]
Uses a regular expression to extract desired data from the value of an object property.
For information about how to construct a regular expression, see "Regular Expressions"
on page 1209.
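The three masking forms can be approximated in Python. The sketch below is illustrative only; the sample value and the assumption that the x-y range is inclusive of both endpoints are not taken from the WQL specification:

```python
import re

value = "Order #12345 shipped"   # hypothetical property value

# doc.element[x].property[7-11] -> character range
# (assuming the x-y range includes both endpoints)
print(value[7:12])               # 12345

# doc.element[x].property[/RegularExpression/] -> regular expression extract
m = re.search(r"#(\d+)", value)
print(m.group(1))                # 12345
```

Wildcard-and-token masks (the 'mask' form) have no direct Python equivalent; a regular expression with capturing groups, as shown above, achieves the same token collection.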